Uncertainty Inherent (uncertainty + inherent)
Selected Abstracts

Assessing early warning signals of currency crises: a fuzzy clustering approach
INTELLIGENT SYSTEMS IN ACCOUNTING, FINANCE & MANAGEMENT, Issue 4 2006
Shuhua Liu

In the 1990s alone, four waves of financial crises occurred around the world. The repeated occurrence of financial crises stimulated a large number of theoretical and empirical studies of the phenomenon, in particular studies of the determinants of, or early warning signals of, financial crises. Nonetheless, the different studies of early warning systems have achieved mixed results, and there remains much room for further investigation. Because the empirical studies so far have focused on conventional economic modelling methods, such as simplified probabilistic models and regression models, in this study we examine whether new insights can be gained from the application of the fuzzy clustering method. The theories of fuzzy sets and fuzzy logic offer the means to deal with uncertainties inherent in a wide variety of tasks, especially when the uncertainty is not the result of randomness but of unknown factors and relationships that are difficult to explain. They also provide instruments to treat vague and imprecise linguistic values and to model nonlinear relationships. This paper presents empirical results from analysing the Finnish currency crisis of 1992 using the fuzzy C-means clustering method. We first provide the relevant background knowledge and introduce the fuzzy clustering method. We then show how the fuzzy C-means method can help identify the critical levels of important economic indicators for predicting financial crises. Copyright © 2007 John Wiley & Sons, Ltd. [source]
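The abstract names fuzzy C-means but gives no implementation detail. As a rough illustration only, the sketch below implements the standard fuzzy C-means update equations and applies them to randomly generated stand-in indicator data; the indicator interpretation, the number of clusters and the fuzzifier value are assumptions made for the example, not choices reported in the paper.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=3, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Plain fuzzy C-means: returns cluster centres and the membership matrix."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Random initial memberships; each row sums to 1.
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        Um = U ** m
        # Cluster centres are membership-weighted means of the data.
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Distance from every observation to every centre.
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        dist = np.fmax(dist, 1e-12)          # avoid division by zero
        # Standard FCM membership update.
        inv = dist ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

# Hypothetical monthly indicator data (columns could be, e.g., real exchange-rate
# deviation, reserve growth, credit growth); values are illustrative only.
X = np.random.default_rng(1).normal(size=(120, 3))
centers, U = fuzzy_c_means(X, n_clusters=3)
# Months whose highest membership falls in a given cluster can then be compared
# with known crisis dates to read off "critical" indicator levels.
print(centers)
print(U.argmax(axis=1)[:12])
```

Because memberships are graded rather than crisp, a month can belong partly to a "tranquil" and partly to a "pre-crisis" cluster, which is the property the paper exploits.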
Applicability of published data for fatigue-limited design
QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 8 2009
Martin Leary

The use of published fatigue data provides an expedient basis for fatigue-limited engineering design by alleviating the necessity of explicit testing. However, published fatigue data often exhibit incomplete documentation of the associated test conditions. Incomplete documentation introduces uncertainties in fatigue life prediction that may limit the applicability of the published fatigue data for design applications. Characterization of the applicability of published fatigue data is critical for robust fatigue-limited design, yet no quantitative methods have been identified that respond to this requirement. A novel method has been developed to provide a systematic characterization of the applicability of published fatigue data based on internationally recognized standards. This method provides a conceptual mechanism to: identify the applicability of published fatigue test data for specific design scenarios, thereby informing engineers of potential limitations of published data and allowing prioritization of multiple data sources; identify material domains of insufficient applicability, thereby providing a robust basis for identifying beneficial fatigue test programs; compromise between design complexity and the uncertainties inherent in fatigue life prediction; and define a framework for the appropriate documentation of published fatigue data. A sample of published fatigue data sources associated with a specific fatigue-limited, safety-critical design scenario was assessed by the method presented in this paper. For the majority of the sampled references, the associated documentation was insufficient to allow the fatigue test data to be confidently applied to the subsequent design activity. Copyright © 2009 John Wiley & Sons, Ltd. [source]

Dissecting galaxies with quantitative spectroscopy of the brightest stars in the Universe
ASTRONOMISCHE NACHRICHTEN, Issue 5 2010
R.-P. Kudritzki (article first published online: 20 May 2010)

Measuring distances to galaxies, determining their chemical composition, investigating the nature of their stellar populations and the absorbing properties of their interstellar medium are fundamental activities in modern extragalactic astronomy, helping to understand the evolution of galaxies and the expanding universe. The optically brightest stars in the universe, blue supergiants of spectral types A and B, are unique tools for these purposes. With absolute visual magnitudes up to MV ≈ -9.5, they are ideal for obtaining accurate quantitative information about galaxies through the powerful modern methods of quantitative stellar spectroscopy. The spectral analysis of individual blue supergiant targets provides invaluable information about chemical abundances and abundance gradients, which is more comprehensive than that obtained from HII regions, as it includes additional atomic species, and which is also more accurate, since it avoids the systematic uncertainties inherent in the strong-line studies usually applied to the HII regions of spiral galaxies beyond the Local Group. Simultaneously, the spectral analysis yields stellar parameters and interstellar extinction for each individual supergiant target, which provides an alternative, very accurate way to determine extragalactic distances through a newly developed method called the Flux-weighted Gravity-Luminosity Relationship (FGLR). With the present generation of 10 m-class telescopes these spectroscopic studies can reach out to distances of 10 Mpc. The new generation of 30 m-class telescopes will allow this work to be extended out to 30 Mpc, a substantial volume of the local universe. (© 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
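The abstract refers to the FGLR method without stating its form. The sketch below shows how a relationship of the general form M_bol = a(log g_F - 1.5) + b, with flux-weighted gravity g_F = g / (Teff / 10^4 K)^4, converts spectroscopically derived stellar parameters into a distance modulus. The slope and zero-point are indicative values from the blue-supergiant literature, and the stellar parameters are hypothetical; none of the numbers are taken from this article.

```python
import math

def fglr_absolute_magnitude(teff_k, log_g, a=3.41, b=-8.02):
    """Bolometric magnitude from a flux-weighted gravity-luminosity relation,
    M_bol = a * (log g_F - 1.5) + b, with g_F = g / (Teff / 1e4 K)**4.
    The slope/zero-point values here are placeholder numbers from the
    literature, not parameters quoted in this article."""
    log_gf = log_g - 4.0 * math.log10(teff_k / 1.0e4)
    return a * (log_gf - 1.5) + b

def distance_modulus(m_bol_apparent, teff_k, log_g):
    """mu = m_bol - M_bol; the distance in parsec is 10**(mu/5 + 1)."""
    return m_bol_apparent - fglr_absolute_magnitude(teff_k, log_g)

# Illustrative A-type supergiant parameters (hypothetical values).
mu = distance_modulus(m_bol_apparent=19.2, teff_k=9500.0, log_g=1.55)
print(f"distance modulus = {mu:.2f}, distance = {10**(mu/5 + 1)/1e6:.2f} Mpc")
```

The appeal of the method, as the abstract notes, is that the same spectral fit that yields Teff and log g also yields the extinction, so the distance estimate is largely free of reddening systematics.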
Conservation Biogeography: assessment and prospect
DIVERSITY AND DISTRIBUTIONS, Issue 1 2005
Robert J. Whittaker

There is general agreement among scientists that biodiversity is under assault on a global basis and that species are being lost at a greatly enhanced rate. This article examines the role played by biogeographical science in the emergence of conservation guidance and makes the case for the recognition of Conservation Biogeography as a key subfield of conservation biology, delimited as: the application of biogeographical principles, theories, and analyses, being those concerned with the distributional dynamics of taxa individually and collectively, to problems concerning the conservation of biodiversity. Conservation biogeography thus encompasses both a substantial body of theory and analysis, and some of the most prominent planning frameworks used in conservation. Considerable advances in conservation guidelines have been made over the last few decades by applying biogeographical methods and principles. Herein we provide a critical review focussed on the sensitivity to assumptions inherent in the applications we examine. In particular, we focus on four inter-related factors: (i) scale dependency (both spatial and temporal); (ii) inadequacies in taxonomic and distributional data (the so-called Linnean and Wallacean shortfalls); (iii) effects of model structure and parameterisation; and (iv) inadequacies of theory. These generic problems are illustrated by reference to studies ranging from the application of historical biogeography, through island biogeography and complementarity analyses, to bioclimatic envelope modelling. There is a great deal of uncertainty inherent in predictive analyses in conservation biogeography, and this area in particular presents considerable challenges. Protected area planning frameworks and their resulting map outputs are amongst the most powerful and influential applications within conservation biogeography, and at the global scale are characterised by the production, by a small number of prominent NGOs, of bespoke schemes, which serve both to mobilise funds and channel efforts in a highly targeted fashion. We provide a simple typology of protected area planning frameworks, with particular reference to the global scale, and provide a brief critique of some of their strengths and weaknesses. Finally, we discuss the importance, especially at regional scales, of developing more responsive analyses and models that integrate pattern (the compositionalist approach) and processes (the functionalist approach) such as range collapse and climate change, again noting the sensitivity of outcomes to starting assumptions. We make the case for the greater engagement of the biogeographical community in a programme of evaluation and refinement of all such schemes to test their robustness and their sensitivity to alternative conservation priorities and goals. [source]

Evidence-based policy or policy-based evidence gathering? Biofuels, the 10% target, the EU
ENVIRONMENTAL POLICY AND GOVERNANCE, Issue 5 2010

The 2009 Renewable Energy Directive mandates that EU member states' road transport fuel comprise a minimum of 10% renewable content by 2020. This target is expected to be met predominantly from biofuels. However, scientific evidence is increasingly questioning the ability of biofuels to reduce greenhouse gas emissions when factors such as indirect land-use change are taken into consideration. This paper interrogates the 10% target, critically assessing its political motivations, its use of scientific evidence and the actions of an individual policy entrepreneur who played a central role in its adoption. We find that the commitment of EU decision-making bodies to internal guidelines on the use of expertise and the precautionary principle was questionable, despite the scientific uncertainty inherent in the biofuels debate. Imperatives located in the political space dominated scientific evidence and led to a process of 'policy-based evidence gathering' to justify the policy choice of a 10% renewable energy/biofuels target. Copyright © 2010 John Wiley & Sons, Ltd and ERP Environment. [source]

Model uncertainty in the ecosystem approach to fisheries
FISH AND FISHERIES, Issue 4 2007
Simeon L. Hill

Fisheries scientists habitually consider uncertainty in parameter values, but often neglect uncertainty about model structure, an issue of increasing importance as ecosystem models are devised to support the move to an ecosystem approach to fisheries (EAF). This paper sets out pragmatic approaches with which to account for uncertainty in model structure and reviews current ways of dealing with this issue in fisheries and other disciplines. All involve considering a set of alternative models representing different structural assumptions, but they differ in how those models are used. The models can be asked to identify bounds on possible outcomes, to find management actions that will perform adequately irrespective of the true model, to find management actions that best achieve one or more objectives given weights assigned to each model, or to formalize hypotheses for evaluation through experimentation. Data availability is likely to limit the use of approaches that involve weighting alternative models in an ecosystem setting, and the cost of experimentation is likely to limit its use. Practical implementation of an EAF should therefore be based on management approaches that acknowledge the uncertainty inherent in model predictions and are robust to it. Model results must be presented in ways that convey the risks and trade-offs associated with alternative actions and the degree of uncertainty in predictions. This presentation should not disguise the fact that, in many cases, estimates of model uncertainty may be based on subjective criteria. The problem of model uncertainty is far from unique to fisheries, and a dialogue among fisheries modellers and modellers from other scientific communities will therefore be helpful. [source]
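As a concrete, if simplified, illustration of the first three non-experimental strategies listed in the fisheries abstract (bounds on outcomes, actions robust to the true model, and weighted averaging across models), the sketch below runs a small set of toy "alternative models" over candidate harvest rates. The model forms, weights and biomass threshold are invented for the example and do not come from the paper.

```python
import numpy as np

# Three toy "alternative models" of how stock biomass responds to a harvest
# rate h -- stand-ins for structurally different ecosystem models (names and
# equations are illustrative assumptions, not taken from the paper).
models = {
    "single_species":   lambda h: 1000 * (1 - h) ** 1.0,
    "with_predation":   lambda h: 900 * (1 - h) ** 1.5,
    "with_environment": lambda h: 1100 * (1 - h) ** 0.8,
}
weights = {"single_species": 0.5, "with_predation": 0.3, "with_environment": 0.2}

actions = np.linspace(0.0, 0.8, 81)                  # candidate harvest rates
pred = np.array([[f(h) for h in actions] for f in models.values()])
w = np.array([weights[k] for k in models])

lower, upper = pred.min(axis=0), pred.max(axis=0)    # envelope (bounds) across models
averaged = w @ pred                                  # weight-averaged prediction
print(f"model-averaged biomass at a harvest rate of 0.20: {averaged[20]:.0f}")

# A "robust" action maximises the worst-case outcome subject to a target,
# e.g. keeping biomass above 400 under every model considered.
ok = lower >= 400
if ok.any():
    print(f"largest harvest rate meeting the target under all models: {actions[ok].max():.2f}")
else:
    print("no candidate action meets the target under every model")
```

Presenting the full envelope alongside the averaged curve is one simple way to show the risks and trade-offs the abstract asks for, without hiding the subjectivity of the weights.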
Modelling near-infrared signals for on-line monitoring in cheese manufacture
JOURNAL OF CHEMOMETRICS, Issue 2 2002
B. J. A. Mertens

This paper considers the analysis of a continuously monitored near-infrared reflectance signal at a single wavelength for the calibration of a process parameter in an application to food engineering for quality control. We describe how we may summarize the information in the observed signals by postulating an explicit statistical model on the signal. An exploratory data analysis may then be carried out on the profile summaries to evaluate whether and how the functional data provide information on the parameter which we would like to calibrate. From a conceptual point of view, such an approach is not dissimilar to principal component regression methods, which use an intermediate decomposition through which information is summarised for calibration of a response. The paper demonstrates how the approach leads to important insights into the manner in which the functional data provide the required information on the desired outcome variable, in the context of the quality-control application discussed and through a designed experiment. Calculations are implemented through the Gibbs sampler. Calibration of the prediction equation takes place through meta-analysis of the summarized profile data in order to take the uncertainty inherent in the summaries into account. Copyright © 2002 John Wiley & Sons, Ltd. [source]

Returns to Schooling and Bayesian Model Averaging: A Union of Two Literatures
JOURNAL OF ECONOMIC SURVEYS, Issue 2 2004
Justin L. Tobias

In this paper, we review and unite the literatures on returns to schooling and Bayesian model averaging. We observe that most studies seeking to estimate the returns to education have done so using particular model specifications, which often differ across researchers. Given this, we review Bayesian methods which formally account for uncertainty in the specification of the model itself, and apply these techniques to estimate the economic return to a college education. The approach described in this paper enables us to determine those model specifications which are most favored by the given data, and also enables us to use the predictions obtained from all of the competing regression models to estimate the returns to schooling. The reported precision of such estimates also accounts for the uncertainty inherent in the model specification. Using U.S. data from the National Longitudinal Survey of Youth (NLSY), we also revisit several 'stylized facts' in the returns to education literature and examine whether they continue to hold after formally accounting for model uncertainty. [source]
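A minimal sketch of the model-averaging mechanics described in the preceding abstract, using simulated stand-in data and a BIC approximation to posterior model probabilities. The variable names, control set and data-generating process are assumptions made for illustration; this is not the paper's NLSY analysis.

```python
import numpy as np
from itertools import combinations

# Simulated stand-in for NLSY-style data: log wage on schooling plus a pool of
# optional controls.  Only the BIC-based averaging mechanics are the point here.
rng = np.random.default_rng(0)
n = 500
schooling = rng.normal(13, 2, n)
controls = rng.normal(size=(n, 3))     # e.g. experience, ability proxy, region
log_wage = 1.0 + 0.08 * schooling + 0.05 * controls[:, 0] + rng.normal(0, 0.4, n)

def fit_ols(X, y):
    """OLS fit returning the coefficient vector and the model's BIC."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    rss = float(((y - X @ beta) ** 2).sum())
    n_obs, k = X.shape
    bic = n_obs * np.log(rss / n_obs) + k * np.log(n_obs)
    return beta, bic

# Every specification keeps an intercept and schooling and adds some subset of
# the controls -- a miniature set of "competing specifications".
results = []
for r in range(controls.shape[1] + 1):
    for subset in combinations(range(controls.shape[1]), r):
        X = np.column_stack([np.ones(n), schooling, controls[:, list(subset)]])
        beta, bic = fit_ols(X, log_wage)
        results.append((subset, beta[1], bic))    # beta[1]: return to schooling

# BIC approximation to posterior model probabilities: w_j proportional to
# exp(-BIC_j / 2), normalised over the model set.
bics = np.array([bic for _, _, bic in results])
weights = np.exp(-(bics - bics.min()) / 2.0)
weights /= weights.sum()

bma_return = float(sum(w * b for w, (_, b, _) in zip(weights, results)))
print(f"model-averaged return to schooling: {bma_return:.3f}")
for w, (subset, b, _) in sorted(zip(weights, results),
                                key=lambda t: t[0], reverse=True)[:3]:
    print(f"controls {subset}: weight {w:.2f}, estimated return {b:.3f}")
```

The averaged coefficient and the spread of estimates across highly weighted specifications together convey the specification uncertainty that a single reported regression would hide.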
Vision-based terrain characterization and traversability assessment
JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 10 2001
Ayanna Howard

This article presents novel techniques for real-time terrain characterization and assessment of terrain traversability for a field mobile robot using a vision system and artificial neural networks. The key terrain traversability characteristics are identified as roughness, slope, discontinuity, and hardness. These characteristics are extracted from imagery data obtained from cameras mounted on the robot and are represented in a fuzzy logic framework using perceptual, linguistic fuzzy sets. The approach adopted is highly robust and tolerant to imprecision and uncertainty inherent in sensing and perception of natural environments. The four traversability characteristics are combined to form a single Fuzzy Traversability Index, which quantifies the ease-of-traversal of the terrain by the mobile robot. Experimental results are presented to demonstrate the capability of the proposed approach for classification of different terrain segments based on their traversability. © 2001 John Wiley & Sons, Inc. [source]

Characterizing, Propagating, and Analyzing Uncertainty in Life-Cycle Assessment: A Survey of Quantitative Approaches
JOURNAL OF INDUSTRIAL ECOLOGY, Issue 1 2007
Shannon M. Lloyd

Life-cycle assessment (LCA) practitioners build models to quantify resource consumption, environmental releases, and potential environmental and human health impacts of product systems. Most often, practitioners define a model structure, assign a single value to each parameter, and build deterministic models to approximate environmental outcomes. This approach fails to capture the variability and uncertainty inherent in LCA. To make good decisions, decision makers need to understand the uncertainty in and divergence between LCA outcomes for different product systems. Several approaches for conducting LCA under uncertainty have been proposed and implemented. For example, Monte Carlo simulation and fuzzy set theory have been applied in a limited number of LCA studies. These approaches are well understood and are generally accepted in quantitative decision analysis. But they do not guarantee reliable outcomes. A survey of approaches used to incorporate quantitative uncertainty analysis into LCA is presented. The suitability of each approach for providing reliable outcomes and enabling better decisions is discussed. Approaches that may lead to overconfident or unreliable results are discussed and guidance for improving uncertainty analysis in LCA is provided. [source]
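As an illustration of the Monte Carlo simulation approach mentioned in the LCA abstract, the sketch below propagates parameter distributions through a toy life-cycle inventory and reports an outcome interval, plus the probability that one product system outperforms another. All parameter names, distributions and emission factors are invented for the example and are not drawn from the article.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000   # number of Monte Carlo draws

# Toy inventory for one functional unit of a product: each parameter gets a
# distribution instead of a single deterministic value (illustrative only).
electricity_kwh = rng.lognormal(mean=np.log(2.0), sigma=0.10, size=N)
transport_tkm   = rng.triangular(left=0.8, mode=1.0, right=1.5, size=N)
ef_electricity  = rng.normal(loc=0.45, scale=0.05, size=N)   # kg CO2e / kWh
ef_transport    = rng.normal(loc=0.10, scale=0.02, size=N)   # kg CO2e / tkm

gwp = electricity_kwh * ef_electricity + transport_tkm * ef_transport

# Instead of a single point estimate, report the spread of outcomes.
p5, p50, p95 = np.percentile(gwp, [5, 50, 95])
print(f"GWP median {p50:.2f} kg CO2e, 90% interval [{p5:.2f}, {p95:.2f}]")

# When comparing two product systems under uncertainty, the decision-relevant
# quantity is the probability that A outperforms B, not just the mean values.
gwp_b = gwp * rng.normal(loc=1.05, scale=0.08, size=N)
print(f"P(product A < product B) = {(gwp < gwp_b).mean():.2f}")
```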