
You are searching for the phrase "data uncertainty" by criterion: Subject


Title:
The uncertainty of the water flow velocity data obtained in the laboratory test
Authors:
Malinowska, E.
Sas, W.
Szymanski, A.
Links:
https://bibliotekanauki.pl/articles/81805.pdf
Publication date:
2008
Publisher:
Szkoła Główna Gospodarstwa Wiejskiego w Warszawie. Wydawnictwo Szkoły Głównej Gospodarstwa Wiejskiego w Warszawie
Subjects:
data uncertainty
water flow
laboratory test
organic soil
soil
flow characteristics
soil strength
deformation parameter
soil structure
statistical analysis
Source:
Annals of Warsaw University of Life Sciences - SGGW. Land Reclamation; 2008, 40
0208-5771
Appears in:
Annals of Warsaw University of Life Sciences - SGGW. Land Reclamation
Content provider:
Biblioteka Nauki
Article
Title:
Fuzzy similarity and fuzzy inclusion measures in polyline matching: a case study of potential streams identification for archaeological modelling in GIS
Authors:
Ďuračiová, R.
Rášová, A.
Lieskovský, T.
Links:
https://bibliotekanauki.pl/articles/106895.pdf
Publication date:
2017
Publisher:
Politechnika Warszawska. Wydział Geodezji i Kartografii
Subjects:
spatial data uncertainty
similarity measure
fuzzy inclusion
spatial object matching
identity determination
Description:
When combining spatial data from various sources, it is often important to determine the similarity or identity of spatial objects. Besides differing in geometry, representations of spatial objects are inevitably more or less uncertain. Fuzzy set theory can be used both to model the uncertainty of spatial objects and to determine the identity, similarity, and inclusion of two sets as fuzzy identity, fuzzy similarity, and fuzzy inclusion. In this paper, we propose using fuzzy measures to determine the similarity or identity of two uncertain spatial object representations in geographic information systems. Labelling spatial objects by the degree of their similarity or inclusion measure makes the process of their identification more efficient and reduces the need for manual control, which simplifies updating spatial datasets from external data sources. We use this approach to obtain an accurate and correct representation of historical streams derived from a contemporary digital elevation model, i.e. we identify the segments that are similar to the streams depicted on historical maps.
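To make the fuzzy measures concrete: a common formulation (assumed here for illustration, not necessarily the authors' exact one) rates two fuzzy sets A and B by similarity |A∩B|/|A∪B| and by inclusion |A∩B|/|A|, with min as intersection and max as union. A minimal Python sketch over hypothetical membership grids of two rasterized stream polylines:

    import numpy as np

    def fuzzy_similarity(a: np.ndarray, b: np.ndarray) -> float:
        # Jaccard-style similarity of two fuzzy sets given as membership grids.
        return float(np.minimum(a, b).sum() / np.maximum(a, b).sum())

    def fuzzy_inclusion(a: np.ndarray, b: np.ndarray) -> float:
        # Degree to which fuzzy set a is included in fuzzy set b.
        return float(np.minimum(a, b).sum() / a.sum())

    # Hypothetical membership values of two buffered stream polylines
    # rasterized onto a common grid.
    stream_from_dem = np.array([0.0, 0.4, 0.9, 1.0, 0.7, 0.2])
    stream_from_map = np.array([0.1, 0.5, 0.8, 0.9, 0.5, 0.0])

    print(fuzzy_similarity(stream_from_dem, stream_from_map))  # ~0.76
    print(fuzzy_inclusion(stream_from_map, stream_from_dem))   # ~0.93

Segments whose similarity or inclusion degree exceeds a chosen threshold would then be labelled as matches and exempted from manual control.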
Source:
Reports on Geodesy and Geoinformatics; 2017, 104; 115-130
2391-8365
2391-8152
Appears in:
Reports on Geodesy and Geoinformatics
Content provider:
Biblioteka Nauki
Article
Title:
Methodologies to assess uncertainties in the tritium production within lithium breeding blankets
Authors:
Salvador, J.
Cabellos, O.
Díez, C. J.
Links:
https://bibliotekanauki.pl/articles/147864.pdf
Publication date:
2012
Publisher:
Instytut Chemii i Techniki Jądrowej
Subjects:
uncertainty
nuclear data
lithium breeding blankets
Description:
Tritium will have to be produced and controlled in future fusion facilities. The capture of fusion neutrons by lithium has been proposed as a possible tritium reproduction reaction, and lithium blankets for tritium breeding based on this reaction have been designed. For plant operation and for safety reasons it is necessary to assess the accuracy with which we can predict the amount of tritium that can be produced. In particular, it is important to assess the impact that the uncertainties inherent in the nuclear data have on the predicted values. By focusing on specific applications and finding specific deficiencies, such studies point to possible directions for improving nuclear data sources. In this paper, experimental data on tritium production in a mock-up system are reproduced and their uncertainties assessed in order to identify the reactions that contribute most to the total uncertainty.
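The standard first-order tool for this kind of assessment is the "sandwich rule", var(T) = SᵀCS, where S holds the sensitivities of tritium production to the cross sections and C is their covariance matrix from the nuclear data files. The sketch below uses assumed sensitivities and covariances purely for illustration; it is not the paper's actual evaluation:

    import numpy as np

    # Assumed relative sensitivities of tritium production to two reaction
    # cross sections, e.g. 6Li(n,t) and 7Li(n,n't) (hypothetical values).
    S = np.array([0.8, 0.3])

    # Assumed relative covariance matrix of those cross sections
    # (5% and 8% standard deviations, weak positive correlation).
    C = np.array([[0.05**2, 0.001],
                  [0.001,   0.08**2]])

    rel_var = S @ C @ S  # first-order ("sandwich") uncertainty propagation
    print(f"relative uncertainty of tritium production: {rel_var**0.5:.2%}")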
Source:
Nukleonika; 2012, 57, 1; 61-66
0029-5922
1508-5791
Appears in:
Nukleonika
Content provider:
Biblioteka Nauki
Article
Title:
About evaluation of multivariate measurements results
Authors:
Warsza, Z. L.
Links:
https://bibliotekanauki.pl/articles/384932.pdf
Publication date:
2012
Publisher:
Sieć Badawcza Łukasiewicz - Przemysłowy Instytut Automatyki i Pomiarów
Subjects:
uncertainty
indirect measurements
multi-measurand
correlated data
Source:
Journal of Automation Mobile Robotics and Intelligent Systems; 2012, 6, 4; 27-32
1897-8649
2080-2145
Appears in:
Journal of Automation Mobile Robotics and Intelligent Systems
Content provider:
Biblioteka Nauki
Article
Title:
Project management subject to imprecise activity network and cost estimation constraints
Authors:
Pisz, I.
Banaszak, Z.
Links:
https://bibliotekanauki.pl/articles/117663.pdf
Publication date:
2010
Publisher:
Polskie Towarzystwo Promocji Wiedzy
Subjects:
project management
uncertainty
imprecise data
time
cost
soft logic
Description:
A new approach to project planning, assuming soft links between activities and imprecise activity execution costs, is considered. In that context, a method for estimating the duration and cost of project execution is proposed. An illustrative example emphasizing the advantages of the proposed approach is included.
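A minimal sketch of the kind of imprecise estimate involved: representing activity durations (or costs) as triangular fuzzy numbers and adding them along a precedence path. The class, the numbers, and the single fixed path are illustrative assumptions; the authors' method additionally handles soft links between activities.

    from dataclasses import dataclass

    @dataclass
    class TFN:
        # Triangular fuzzy number: (optimistic, most likely, pessimistic).
        a: float
        m: float
        b: float

        def __add__(self, other: "TFN") -> "TFN":
            # Fuzzy addition adds the three bounds component-wise.
            return TFN(self.a + other.a, self.m + other.m, self.b + other.b)

    # Hypothetical activity durations (days) on one precedence path.
    design = TFN(3, 5, 8)
    build = TFN(10, 14, 20)
    test = TFN(2, 4, 7)

    print(design + build + test)  # TFN(a=15, m=23, b=35)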
Source:
Applied Computer Science; 2010, 6, 1; 7-28
1895-3735
Appears in:
Applied Computer Science
Content provider:
Biblioteka Nauki
Article
Title:
New method of selecting efficient project portfolios in the presence of hybrid uncertainty
Authors:
Rębiasz, B.
Links:
https://bibliotekanauki.pl/articles/406365.pdf
Publication date:
2016
Publisher:
Politechnika Wrocławska. Oficyna Wydawnicza Politechniki Wrocławskiej
Subjects:
portfolio selection
data processing
hybrid uncertainty
random fuzzy sets
Description:
A new method of selecting efficient project portfolios in the presence of hybrid uncertainty is presented. Pareto-optimal solutions are defined by an algorithm for generating project portfolios. The method allows efficient project portfolios to be selected while taking into account statistical and economic dependencies between projects, when some of the parameters used in the effectiveness calculation are expressed as an interactive possibility distribution and some as a probability distribution. The procedure for processing such hybrid data combines stochastic simulation with nonlinear programming. Interactions between data are modeled by correlation matrices and interval regression. Economic dependencies are taken into account by equations balancing the production capacity of the company. The practical example presented indicates that interactions between projects have a significant impact on the results of the calculations.
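Hybrid probabilistic-possibilistic propagation of this sort is typically implemented by sampling the random parameters with Monte Carlo and carrying the fuzzy ones through α-cut interval arithmetic, yielding an interval of NPV per draw. A toy sketch under assumed cash flows and discount-rate distribution (not the paper's portfolio model):

    import random

    def npv_interval(alpha, rate):
        # One alpha-cut of a triangular fuzzy yearly cash flow (80, 100, 120),
        # discounted over 5 years with a sampled random rate; outlay 400.
        # All numbers are hypothetical.
        lo_cf = 80 + alpha * (100 - 80)
        hi_cf = 120 - alpha * (120 - 100)
        lo = sum(lo_cf / (1 + rate) ** t for t in range(1, 6)) - 400
        hi = sum(hi_cf / (1 + rate) ** t for t in range(1, 6)) - 400
        return lo, hi

    random.seed(1)
    cuts = [npv_interval(0.5, random.gauss(0.08, 0.01)) for _ in range(10_000)]
    print("mean NPV interval at alpha=0.5:",
          sum(lo for lo, _ in cuts) / len(cuts),
          sum(hi for _, hi in cuts) / len(cuts))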
Source:
Operations Research and Decisions; 2016, 26, 4; 65-90
2081-8858
2391-6060
Appears in:
Operations Research and Decisions
Content provider:
Biblioteka Nauki
Article
Title:
Contextual probability
Authors:
Wang, H.
Links:
https://bibliotekanauki.pl/articles/307791.pdf
Publication date:
2003
Publisher:
Instytut Łączności - Państwowy Instytut Badawczy
Subjects:
mathematical foundations
knowledge representation
machine learning
uncertainty
data mining
Description:
In this paper we present a new probability function G that generalizes the classical probability function. A mass function is an assignment of basic probability to some contexts (events, propositions); it represents the strength of support for some contexts in a domain. A context is a subset of the basic elements of interest in a domain - the frame of discernment - and serves as a medium to carry the "probabilistic" knowledge about the domain. The G function is defined in terms of a mass function under various contexts. G is shown to be a probability function satisfying the axioms of probability, and therefore has all the properties attributed to a probability function. If the mass function is obtained from a probability function by normalization, then G is shown to be a linear function of the probability distribution and a linear function of the probability. With this relationship we can estimate a probability distribution from probabilistic knowledge carried in some contexts without any model assumption.
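A widely used form of such a function (assumed here for illustration) spreads each context's mass uniformly over the context's elements, G(E) = Σ_X m(X)·|E∩X|/|X|, which is easily checked against the probability axioms:

    # Frame of discernment {a, b, c, d} and a mass function over some
    # contexts (subsets). Mass values are hypothetical and sum to 1.
    mass = {
        frozenset({"a", "b"}): 0.5,
        frozenset({"b", "c", "d"}): 0.3,
        frozenset({"d"}): 0.2,
    }

    def G(event):
        # Each context's mass is spread uniformly over its elements.
        return sum(m * len(set(event) & ctx) / len(ctx) for ctx, m in mass.items())

    print(G({"a"}))                 # 0.25
    print(G({"b"}))                 # 0.5/2 + 0.3/3 = 0.35
    print(G({"a", "b", "c", "d"}))  # 1.0 (normalization axiom)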
Source:
Journal of Telecommunications and Information Technology; 2003, 3; 92-97
1509-4553
1899-8852
Appears in:
Journal of Telecommunications and Information Technology
Content provider:
Biblioteka Nauki
Article
Title:
Challenges for the DOE methodology related to the introduction of Industry 4.0
Authors:
Pietraszek, Jacek
Radek, Norbert
Goroshko, Andrii V.
Links:
https://bibliotekanauki.pl/articles/1839493.pdf
Publication date:
2020
Publisher:
Stowarzyszenie Menedżerów Jakości i Produkcji
Subjects:
DOE
industry 4.0
data-driven approaches
uncertainty
Description:
The introduction to industry of the solutions conventionally called Industry 4.0 has created the need for many changes in the traditional procedures of industrial data analysis based on the DOE (Design of Experiments) methodology. The increase in the number of controlled and observed factors, in the intensity of the data stream, and in the size of the analyzed datasets has revealed the shortcomings of existing procedures. Modifying these procedures by adapting Big Data solutions and data-driven methods is becoming an increasingly pressing need. The article presents current DOE methods, considers the problems caused by the introduction of mass automation and data integration under Industry 4.0, and indicates the most promising areas in which to look for possible solutions.
Source:
Production Engineering Archives; 2020, 26, 4; 190-194
2353-5156
2353-7779
Appears in:
Production Engineering Archives
Content provider:
Biblioteka Nauki
Article
Title:
Standard Deviation of the Mean of Autocorrelated Observations Estimated with the Use of the Autocorrelation Function Estimated From the Data
Authors:
Zięba, A.
Ramza, P.
Links:
https://bibliotekanauki.pl/articles/220588.pdf
Publication date:
2011
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
autocorrelated data
time series
effective number of observations
estimators of variance
measurement uncertainty
Description:
Prior knowledge of the autocorrelation function (ACF) enables the application of an analytical formalism for the unbiased estimators of the variance s_a² and of the variance of the mean s_a²(x̄). Both can be expressed with the use of the so-called effective number of observations n_eff. We show how to adopt this formalism if only an estimate {r_k} of the ACF derived from a sample is available. A novel method is introduced, based on truncation of the {r_k} function at the point of its first transit through zero (FTZ). It can be applied to non-negative ACFs with a correlation range smaller than the sample size. Contrary to other methods described in the literature, the FTZ method assures the finite range 1 < n̂_eff ≤ n for any data. The effect of replacing the standard estimator of the ACF by three alternative estimators is also investigated. Monte Carlo simulations concerning the bias and dispersion of the resulting estimators s_a and s_a(x̄) suggest that the presented formalism can be effectively used to determine a measurement uncertainty. The described method is illustrated with an exemplary analysis of autocorrelated variations of the intensity of an X-ray beam diffracted from a powder sample, known as the particle statistics effect.
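A sketch of the FTZ recipe, assuming the textbook ACF estimator and the standard formula n_eff = n / (1 + 2 Σ_k (1 − k/n) r_k) with the sum truncated at the first transit of {r_k} through zero; the paper's alternative ACF estimators and unbiased variance estimator are not reproduced here.

    import numpy as np

    def acf(x):
        # Standard estimate of the autocorrelation function r_1, r_2, ...
        x = x - x.mean()
        c0 = (x * x).sum()
        return np.array([(x[:-k] * x[k:]).sum() / c0 for k in range(1, len(x))])

    def n_eff_ftz(x):
        # Effective number of observations with the ACF estimate truncated
        # at its first transit through zero (FTZ).
        n = len(x)
        r = acf(x)
        neg = np.flatnonzero(r <= 0)
        r = r[:neg[0]] if neg.size else r
        k = np.arange(1, len(r) + 1)
        return n / (1 + 2 * ((1 - k / n) * r).sum())

    rng = np.random.default_rng(0)
    x = np.convolve(rng.standard_normal(500), np.ones(5) / 5, "valid")  # correlated
    n_eff = n_eff_ftz(x)
    print(n_eff)                           # guaranteed to satisfy 1 < n_eff <= n
    print(x.std(ddof=1) / np.sqrt(n_eff))  # approximate std. deviation of the mean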
Source:
Metrology and Measurement Systems; 2011, 18, 4; 529-542
0860-8229
Appears in:
Metrology and Measurement Systems
Content provider:
Biblioteka Nauki
Article
Title:
Risk assessment related to information uncertainty components
Authors:
Rosická, Z.
Links:
https://bibliotekanauki.pl/articles/2069587.pdf
Publication date:
2007
Publisher:
Uniwersytet Morski w Gdyni. Polskie Towarzystwo Bezpieczeństwa i Niezawodności
Subjects:
assets
data
experience
information uncertainty
knowledge management
organization knowledge
tacit knowledge
explicit knowledge
Description:
Both organizations and individuals deal with and manage knowledge. Taking the basic approach, we distinguish two principal clusters: tacit and explicit knowledge. Knowledge management is targeted at making the organization's use of knowledge more effective and at providing the right people with relevant information at the right time. Knowledge and its information uncertainty components have become crucial assets of any company or organization; their potential lies in smart knowledge management - the proficiency and art of fitting risky market needs better than competitors do.
Source:
Journal of Polish Safety and Reliability Association; 2007, 2; 297-302
2084-5316
Appears in:
Journal of Polish Safety and Reliability Association
Content provider:
Biblioteka Nauki
Article
Title:
A new approach for modelling uncertainty in expert systems knowledge bases
Authors:
Niederliński, A.
Links:
https://bibliotekanauki.pl/articles/229898.pdf
Publication date:
2018
Publisher:
Polska Akademia Nauk. Czytelnia Czasopism PAN
Subjects:
expert systems
uncertainty
certainty factors
knowledge bases
data marks
SWOT
SWOT knowledge base
Description:
The current paradigm of modelling uncertainty in expert system knowledge bases using Certainty Factors (CF) is critically evaluated, and a way to circumvent the awkwardness, non-intuitiveness and constraints encountered while using CF is proposed. It is based on introducing Data Marks for askable conditions and for conclusions of relational models, followed by choosing the best-suited way to propagate those Data Marks into Data Marks of rule conclusions. This is done in a way orthogonal to inference using Aristotelian logic. Using Data Marks instead of Certainty Factors thus removes the intellectual discomfort caused by rejecting the notions of truth and falsehood and the Aristotelian law of the excluded middle, as is done in the CF methodology. There is also no need to change the inference system software (the expert system shell): the Data Marks approach can be implemented by simply modifying the knowledge base to accommodate them. The methodology of using Data Marks to model uncertainty in knowledge bases is illustrated by an example of a SWOT analysis of a small electronics company. A short summary of SWOT analysis is presented, and the basic data used for the SWOT analysis of the company are discussed. The rmes_EE SWOT knowledge base, consisting of a rule base and a model base, is presented and explained, and the results of forward chaining for this knowledge base are presented and critically evaluated.
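For reference, the CF calculus being criticized combines two rules supporting the same conclusion with the classic MYCIN formula sketched below; it is this machinery, rather than Aristotelian truth values, that the Data Marks approach replaces. (The paper's own Data Mark propagation is not reproduced here.)

    def combine_cf(cf1, cf2):
        # MYCIN-style combination of two certainty factors in [-1, 1].
        if cf1 >= 0 and cf2 >= 0:
            return cf1 + cf2 * (1 - cf1)
        if cf1 < 0 and cf2 < 0:
            return cf1 + cf2 * (1 + cf1)
        return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

    # Two rules supporting the same conclusion with CF 0.6 and 0.5:
    print(combine_cf(0.6, 0.5))  # 0.8 -- neither "true" nor "false"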
Source:
Archives of Control Sciences; 2018, 28, 1; 19-34
1230-2384
Appears in:
Archives of Control Sciences
Content provider:
Biblioteka Nauki
Article
Title:
Rule modeling of ADI cast iron structure for contradictory data
Authors:
Soroczyński, Artur
Biernacki, Robert
Kochański, Andrzej
Links:
https://bibliotekanauki.pl/articles/29520059.pdf
Publication date:
2022
Publisher:
Akademia Górniczo-Hutnicza im. Stanisława Staszica w Krakowie. Wydawnictwo AGH
Subjects:
rule modeling
contradictory data set
uncertainty
data preparation
decision tree
rough set theory
Description:
Ductile iron is a material that is very sensitive to the conditions of crystallization. Because of this, the cast iron properties obtained in tests differ significantly, and thus sets containing sample data are contradictory, i.e. they contain inconsistent observations in which, for the same set of input data, the output values differ significantly. The aim of this work is to determine whether rule models can be built under conditions of significant data uncertainty. The paper attempts to determine the impact of the presence of contradictory data in a data set on the results of process modeling with rule-based methods. The study used the well-known dataset (Materials Algorithms Project Data Library, n.d.) pertaining to the retained austenite volume fraction in austempered ductile cast iron. Two rule-based modeling methods were used to model the volume of retained austenite: the decision tree algorithm (DT) and the rough set algorithm (RST). The paper demonstrates that the number of inconsistent observations depends on the adopted data discretization criteria. The influence of contradictory data on the generation of rules in both algorithms is considered, and the problems that contradictory data can generate in rule modeling are indicated.
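The dependence of inconsistency on discretization can be checked mechanically: after binning the inputs, a group of observations is contradictory when it shares one discretized input vector but carries different outputs. The synthetic data, bin counts and thresholds below are hypothetical, not the paper's dataset:

    from collections import defaultdict
    import numpy as np

    def contradictory_groups(X, y, bins):
        # Count groups sharing a discretized input vector but differing in output.
        edges = [np.linspace(c.min(), c.max(), bins + 1)[1:-1] for c in X.T]
        groups = defaultdict(set)
        for row, label in zip(X, y):
            key = tuple(int(np.digitize(v, e)) for v, e in zip(row, edges))
            groups[key].add(int(label))
        return sum(1 for labels in groups.values() if len(labels) > 1)

    rng = np.random.default_rng(7)
    X = rng.uniform(0, 1, (200, 3))   # e.g. composition and heat-treatment inputs
    y = np.digitize(X.sum(axis=1) + rng.normal(0, 0.3, 200), [1.0, 1.6, 2.2])
    for bins in (2, 3, 5):
        print(bins, "bins:", contradictory_groups(X, y, bins))

Coarser discretization merges more observations into each cell, so the count of contradictory groups changes with the adopted binning, which is the effect the paper examines.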
Source:
Computer Methods in Materials Science; 2022, 22, 4; 211-228
2720-4081
2720-3948
Appears in:
Computer Methods in Materials Science
Content provider:
Biblioteka Nauki
Article
Title:
Designing medical production rules from semantically integrated data
Authors:
Jankowska, B.
Szymkowiak, M.
Links:
https://bibliotekanauki.pl/articles/333095.pdf
Publication date:
2010
Publisher:
Uniwersytet Śląski. Wydział Informatyki i Nauki o Materiałach. Instytut Informatyki. Zakład Systemów Komputerowych
Subjects:
semantic data integration
medical rule-based systems
uncertainty
Description:
In this paper an algorithm for automatic knowledge acquisition is proposed. The knowledge is acquired from aggregate data stored in different repositories. The algorithm operates by means of semantic data integration, allowing for both syntactic and semantic differences between data coming from different sources. Provided that we know the data taxonomies, can interpret the data schemas, and can design schema mappings, these differences are not an obstacle to integration. The acquired knowledge is defined in the form of production rules with uncertainty. The considerations are illustrated with medical examples.
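A toy illustration of the last step - deriving a production rule with uncertainty from aggregate counts once the repositories' schemas have been mapped onto a common one. The repository counts and field names are invented for illustration:

    # Aggregate counts from two repositories, already schema-mapped so that
    # "symptom S" and "diagnosis D" mean the same thing in both (hypothetical).
    repositories = [
        {"with_S": 120, "with_S_and_D": 84},
        {"with_S": 200, "with_S_and_D": 150},
    ]

    n_s = sum(r["with_S"] for r in repositories)
    n_sd = sum(r["with_S_and_D"] for r in repositories)
    certainty = n_sd / n_s  # pooled conditional frequency as the rule's certainty
    print(f"IF symptom S THEN diagnosis D WITH certainty {certainty:.2f}")  # 0.73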
Source:
Journal of Medical Informatics & Technologies; 2010, 16; 95-102
1642-6037
Appears in:
Journal of Medical Informatics & Technologies
Content provider:
Biblioteka Nauki
Article
Title:
Cross-Examining the Computer: Uncertainty in the Court
Authors:
McClellan Marshall, John
Links:
https://bibliotekanauki.pl/articles/31344099.pdf
Publication date:
2023
Publisher:
Uniwersytet Marii Curie-Skłodowskiej. Wydawnictwo Uniwersytetu Marii Curie-Skłodowskiej
Subjects:
judicial axiology
uncertainty
Big Data
technoevidence
cyberethics
black swan
Description:
This paper is intended to provide lawyers, young and old, with an analytical approach to their practice that is perhaps broader than the one they originally learned in law school or as young associates. Because lawyers and judges tend to come in large part from the liberal arts, this approach broadens that view, borrowing in part from the principles of quantum mechanics, in particular Heisenberg's "uncertainty principle". While lawyers and judges are accustomed to some level of uncertainty, whether in an office context or at trial, the question of how to deal with it varies widely from person to person, and this subjectivity itself creates problems. Admittedly, this is an exercise in the "intellectual aspects of the practice of law", which is an eminently practical activity, but it is intended to raise questions about the role of modern technology in the legal context and, to some extent, to provide answers.
Source:
Studia Iuridica Lublinensia; 2023, 32, 4; 97-115
1731-6375
Appears in:
Studia Iuridica Lublinensia
Content provider:
Biblioteka Nauki
Article
Title:
Knowledge-based clustering as a conceptual and algorithmic environment of biomedical data analysis
Authors:
Pedrycz, W.
Gacek, A.
Links:
https://bibliotekanauki.pl/articles/333706.pdf
Publication date:
2004
Publisher:
Uniwersytet Śląski. Wydział Informatyki i Nauki o Materiałach. Instytut Informatyki. Zakład Systemów Komputerowych
Subjects:
knowledge and data
fuzzy clustering
guidance mechanisms
proximity
inclusion
partial supervision
uncertainty
entropy
Description:
While the genuine abundance of biomedical data available nowadays is a blessing, it also poses many challenges. The two fundamental and commonly occurring directions in data analysis are its supervised and unsupervised pursuits. Our conjecture is that in the area of biomedical data processing and understanding, where we encounter a genuine diversity of patterns, problem descriptions and design objectives, this dichotomy is neither ideal nor the most productive. In particular, the limitations of such a taxonomy become profoundly evident in the context of unsupervised learning. Clustering (usually regarded as a synonym of unsupervised data analysis) is aimed at determining the structure of a data set by optimizing a given partition criterion, so that a structure emerges without direct intervention of the user. While the underlying concept looks appealing, there are numerous sources of domain knowledge that could be effectively incorporated into clustering mechanisms and subsequently help navigate large data spaces. In unsupervised learning, this unified treatment of data and domain knowledge leads to the general concept of what could be coined knowledge-based clustering. In this study, we discuss the underlying principles of this paradigm and present its various methodological and algorithmic facets. In particular, we elaborate on the main issues of incorporating domain knowledge into the clustering environment, such as (a) partial labelling, (b) referential labelling (including proximity and entropy constraints), (c) usage of conditional (navigational) variables, and (d) exploitation of external structure. Also presented are concepts of stepwise clustering, in which the structure of data is revealed via a series of refinements of existing domain granular information.
Source:
Journal of Medical Informatics & Technologies; 2004, 7; KB13-22
1642-6037
Appears in:
Journal of Medical Informatics & Technologies
Content provider:
Biblioteka Nauki
Article
