Smaller Set (smaller + set)
Selected Abstracts

Rule reduction in fuzzy logic for better interpretability in reservoir operation
HYDROLOGICAL PROCESSES, Issue 21, 2007. C. Sivapragasam

Abstract: Decision-making in reservoir operation has become easier and more understandable with the use of fuzzy logic models, which represent knowledge as interpretable linguistic rules. However, the interpretability gained by increasing the number of fuzzy sets ('low', 'high', etc.) comes at the cost of a larger rule base that is difficult for decision makers to comprehend. In this study, a novel clustering-based approach is suggested to provide operators with a limited number of the most meaningful operating rules. A single triangular fuzzy set is adopted for each variable in each cluster, and these sets are fine-tuned with a genetic algorithm (GA) to meet the desired objective. The results are compared with a multi-fuzzy-set fuzzy logic model through a case study of the Pilavakkal reservoir system in Tamilnadu State, India. The results are highly encouraging, with a smaller set of rules representing the full fuzzy logic system. Copyright © 2007 John Wiley & Sons, Ltd. [source]

How quickly do forecasters incorporate news? Evidence from cross-country surveys
JOURNAL OF APPLIED ECONOMETRICS, Issue 6, 2006

Abstract: Using forecasts from Consensus Economics Inc., we provide evidence on the efficiency of real GDP growth forecasts by testing whether forecast revisions are uncorrelated. As the forecast data are multi-dimensional (18 countries, 24 monthly forecasts for the current and the following year, and 16 target years), the panel estimation takes into account the complex structure of the variance-covariance matrix arising from the propagation of shocks across countries and the economic linkages among them. Efficiency is rejected for all 18 countries: forecast revisions show a high degree of serial correlation. We then develop a framework for characterizing the nature of the inefficiency in forecasts.
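The efficiency test described above (are forecast revisions serially uncorrelated?) can be sketched on synthetic data. This is a minimal single-series illustration, not the paper's panel estimator with its cross-country covariance structure; all function and variable names are illustrative.

```python
import numpy as np

def revision_autocorrelation(revisions):
    """OLS slope of revision_t on revision_{t-1} (with intercept).

    Under forecast efficiency, revisions reflect only news, so they
    should be serially uncorrelated and the slope should be near 0.
    """
    r = np.asarray(revisions, dtype=float)
    x, y = r[:-1], r[1:]
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

rng = np.random.default_rng(0)

# Efficient forecaster: revisions are pure news (white noise).
efficient = rng.normal(size=500)

# Inefficient ("sticky") forecaster: news is absorbed gradually,
# so revisions follow an AR(1) process and are serially correlated.
sticky = np.zeros(500)
for t in range(1, 500):
    sticky[t] = 0.6 * sticky[t - 1] + rng.normal()

print(round(revision_autocorrelation(efficient), 2))  # near 0
print(round(revision_autocorrelation(sticky), 2))     # clearly positive
```

A rejection of the zero-slope hypothesis for the sticky series is the single-series analogue of the serial-correlation finding reported in the abstract.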
For a smaller set of countries, the G-7, we estimate a VAR model on forecast revisions. The degree of inefficiency, as manifested in the serial correlation of forecast revisions, tends to be smaller in forecasts for the USA than in forecasts for European countries. Our framework also shows that one source of the inefficiency in a country's forecasts is resistance to utilizing foreign news. Thus the quality of forecasts for many of these countries could be significantly improved if forecasters paid more attention to news originating from outside their respective countries. This is particularly the case for Canadian and French forecasts, which would benefit from paying greater attention to news from the USA and Germany, respectively. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Hybrid Dirichlet mixture models for functional data
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 4, 2009. Sonia Petrone

Summary: In functional data analysis, curves or surfaces are observed, up to measurement error, at a finite set of locations for, say, a sample of n individuals. Often the curves are homogeneous, except perhaps for individual-specific regions of heterogeneous behaviour (e.g. 'damaged' areas of irregular shape on an otherwise smooth surface). Motivated by applications with functional data of this nature, we propose a Bayesian mixture model, with the aim of dimension reduction, that represents the sample of n curves through a smaller set of canonical curves. We propose a novel prior on the space of probability measures for a random curve which extends the popular Dirichlet priors by allowing local clustering: non-homogeneous portions of a curve can be allocated to different clusters, and the n individual curves can be represented as recombinations (hybrids) of a few canonical curves. More precisely, the proposed prior envisions a conceptual hidden factor with k levels that acts locally on each curve.
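The "hybrid" idea above can be illustrated with a toy simulation: each observed curve copies one of k canonical curves at each location, so curves are local recombinations of a few templates. This is only a sketch of the data-generating intuition, not the Bayesian prior itself; the canonical curves and the single-change-point allocation used here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
grid = np.linspace(0, 1, 100)

# Two canonical curves (the paper's model would infer these).
canonical = np.vstack([np.sin(2 * np.pi * grid),
                       np.cos(2 * np.pi * grid)])
k, n = canonical.shape[0], 5

# Hidden factor: a piecewise-constant local allocation per curve.
# Here each curve switches labels at most once; the prior in the
# paper allows far richer local clustering patterns.
labels = np.zeros((n, grid.size), dtype=int)
for i in range(n):
    cut = rng.integers(20, 80)
    labels[i, cut:] = rng.integers(0, k)

# Each curve is a hybrid: at location j it equals the canonical
# curve selected by its local label, plus measurement error.
curves = canonical[labels, np.arange(grid.size)] \
    + 0.05 * rng.normal(size=(n, grid.size))
print(curves.shape)
```

Dimension reduction comes from the fact that the n noisy curves are fully described by k canonical curves plus the discrete local labels.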
We discuss several models incorporating this prior and illustrate its performance with simulated and real data sets. We examine theoretical properties of the proposed finite hybrid Dirichlet mixtures, specifically their behaviour as the number of mixture components goes to ∞ and their connection with Dirichlet process mixtures. [source]

Principal component analysis applied to filtered signals for maintenance management
QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 6, 2010. Fausto Pedro García Márquez

Abstract: This paper presents an approach for detecting and identifying faults in railway infrastructure components. The method is based on pattern recognition and data analysis algorithms. Principal component analysis (PCA) is employed to reduce the complexity of the data to two or three dimensions. PCA is a mathematical procedure that transforms a number of possibly correlated variables into a smaller set of uncorrelated variables called 'principal components'. To improve the results obtained, the signal was filtered using a state-space system model, estimated by maximum likelihood with well-known recursive algorithms such as the Kalman filter and fixed-interval smoothing. The models explored in this paper to analyse system data lie within the so-called unobserved-components class of models. Copyright © 2009 John Wiley & Sons, Ltd. [source]

Brief communication: Identification of the authentic ancient DNA sequence in a human bone contaminated with modern DNA
AMERICAN JOURNAL OF PHYSICAL ANTHROPOLOGY, Issue 3, 2006. Abigail S. Bouwman

Abstract: We present a method to distinguish authentic ancient DNA from contaminating DNA in a human bone. This is achieved by taking account of the spatial distribution of the various sequence families within the bone and the extent of degradation of the template DNAs, as revealed by the error content of the sequences.
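The PCA reduction described in the maintenance-management abstract above can be sketched with plain numpy: correlated measurements are projected onto a smaller set of uncorrelated components via the singular value decomposition. This is a minimal illustration under assumed synthetic data; the paper's state-space filtering step (Kalman filter / fixed-interval smoothing) is omitted.

```python
import numpy as np

def pca(X, n_components=2):
    """Project rows of X onto the leading principal components."""
    Xc = X - X.mean(axis=0)                # centre each variable
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T      # uncorrelated 'principal components'
    explained = (s ** 2) / np.sum(s ** 2)  # variance share per component
    return scores, explained[:n_components]

rng = np.random.default_rng(2)
latent = rng.normal(size=(200, 2))         # two true underlying factors
mixing = rng.normal(size=(2, 10))
X = latent @ mixing + 0.1 * rng.normal(size=(200, 10))  # 10 correlated signals

scores, explained = pca(X, n_components=2)
print(scores.shape)   # 200 observations reduced to 2 dimensions
```

Because the ten signals are driven by two latent factors, the first two components capture nearly all of the variance, which is exactly the "two and three dimensions" reduction the abstract describes.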
To demonstrate the veracity of the method, we handled two ancient human tibiae in order to contaminate them with modern DNA, and then subjected segments of the bones to various decontaminating treatments, including removal of the outer 1-2 mm, before extracting DNA, cloning, and obtaining a total of 107 mitochondrial DNA sequences. Sequences resulting from the deliberate contamination were located exclusively in the outer 1-2 mm of the bones, and only one of these 27 sequences contained an error that could be ascribed to DNA degradation. A second, much smaller set of relatively error-free sequences, which we ascribe to contamination during excavation or curation, was also located exclusively in the outer 1-2 mm. In contrast, a family of 72 sequences, displaying extensive degradation products but identifiable as haplogroup U5a1a, was distributed throughout one of the bones and represents the authentic ancient DNA content of this specimen. Am J Phys Anthropol, 2006. © 2006 Wiley-Liss, Inc. [source]

The role of indicators in improving timeliness of international environmental reports
ENVIRONMENTAL POLICY AND GOVERNANCE, Issue 1, 2006. Ulla Rosenström

Abstract: Environmental indicators were developed mainly to improve information flows from scientists to policy-makers. This article discusses the importance of timely environmental data and investigates the influence of indicator-based reporting on the data timeliness of environmental reports by international organizations. Timeliness contributes to the quality and appeal of the reports and to their role as early-warning tools, and increases their usability by decision-makers in short-term decision cycles. An analysis of 11 international reports by the European Environment Agency (EEA) and the Organisation for Economic Co-operation and Development (OECD) shows a considerable time lag of three years on average, with only minor movement towards more timely reporting.
The results suggest that the introduction of environmental indicators has not improved the timeliness of reporting. To overcome these problems, the article recommends several methods for improving timeliness: a better choice of indicators in smaller sets, the use of preliminary data and outlooks, the development of new indicators, publishing on the internet, and more effective use of internet databases to avoid intermediate levels in data collection. Copyright © 2006 John Wiley & Sons, Ltd and ERP Environment. [source]