Minimum Variance
Selected Abstracts

The Kalman filter for the pedologist's tool kit
EUROPEAN JOURNAL OF SOIL SCIENCE, Issue 6 2006
R. Webster
Summary The Kalman filter is a tool designed primarily to estimate the values of the 'state' of a dynamic system in time. There are two main equations. These are the state equation, which describes the behaviour of the state over time, and the measurement equation, which describes at what times and in what manner the state is observed. For the discrete Kalman filter, discussed in this paper, the state equation is a stochastic difference equation that incorporates a random component for noise in the system and that may include external forcing. The measurement equation is defined such that it can handle indirect measurements, gaps in the sequence of measurements and measurement errors. The Kalman filter operates recursively to predict the state of the system forwards one step at a time from the previously predicted state and the next measurement. Its predictions are optimal in the sense that they have minimum variance among all unbiased predictors, and in this respect the filter behaves like kriging. The equations can also be applied in reverse order to estimate the state variable at all time points from a complete series of measurements, including past, present and future measurements. This process is known as smoothing. This paper describes the 'predictor-corrector' algorithm for the Kalman filter and smoother with all the equations in full, and it illustrates the method with examples on the dynamics of groundwater level in the soil. The height of the water table at any one time depends partly on the height at previous times and partly on the precipitation excess. The state equation provides the general relation between the two variables for prediction. Measurements of the height of the water table and their errors are incorporated into the measurement equation to improve prediction. Results show how diminishing the measurement error increases the accuracy of the predictions, and estimates achieved with the Kalman smoother are even more accurate. [source]
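To make the 'predictor-corrector' recursion in the abstract above concrete, here is a minimal sketch of a scalar discrete Kalman filter with missing observations. The state-transition coefficient, forcing term and noise variances are illustrative assumptions, not values from the paper.

```python
import numpy as np

def kalman_filter_1d(z, u, a=0.9, b=0.05, q=0.01, r=0.04, x0=0.0, p0=1.0):
    """Discrete predictor-corrector Kalman filter for a scalar state.

    State equation:       x[t] = a * x[t-1] + b * u[t] + w[t],  w ~ N(0, q)
    Measurement equation: z[t] = x[t] + v[t],                   v ~ N(0, r)
    Gaps in the measurement sequence may be encoded as np.nan
    (the prediction step is then used without a correction).
    """
    n = len(z)
    x_est = np.empty(n)
    p_est = np.empty(n)
    x, p = x0, p0
    for t in range(n):
        # Predictor: propagate the state and its error variance one step forward.
        x = a * x + b * u[t]
        p = a * a * p + q
        # Corrector: update with the measurement if one is available.
        if not np.isnan(z[t]):
            k = p / (p + r)            # Kalman gain
            x = x + k * (z[t] - x)     # innovation correction
            p = (1.0 - k) * p
        x_est[t], p_est[t] = x, p
    return x_est, p_est

# Example: noisy series with a gap, driven by a forcing term u.
rng = np.random.default_rng(0)
u = rng.normal(size=50)
truth = np.zeros(50)
for t in range(1, 50):
    truth[t] = 0.9 * truth[t - 1] + 0.05 * u[t] + rng.normal(scale=0.1)
z = truth + rng.normal(scale=0.2, size=50)
z[20:25] = np.nan                      # gap in the measurement sequence
x_hat, p_hat = kalman_filter_1d(z, u)
```

Shrinking the measurement variance r in the sketch pulls the estimates towards the data, mirroring the paper's finding that smaller measurement error improves prediction accuracy.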
How do we tell which estimates of past climate change are correct?
INTERNATIONAL JOURNAL OF CLIMATOLOGY, Issue 10 2009
Steven C. Sherwood
Abstract Estimates of past climate change often involve teasing small signals from imperfect instrumental or proxy records. Success is often evaluated on the basis of the spatial or temporal consistency of the resulting reconstruction, or on the apparent prediction error on small space and time scales. However, inherent methodological trade-offs illustrated here can cause climate signal accuracy to be unrelated, or even inversely related, to such performance measures. This is a form of the classic conflict in statistics between minimum variance and unbiased estimators. Comprehensive statistical simulations based on climate model output are probably the best way to reliably assess whether methods of reconstructing climate from sparse records, such as radiosondes or paleoclimate proxies, actually work on longer time scales. Copyright © 2008 Royal Meteorological Society [source]

Irreducibility and structural cointegrating relations: an application to the G-7 long-term interest rates
INTERNATIONAL JOURNAL OF FINANCE & ECONOMICS, Issue 2 2001
Marco R. Barassi
JEL classification: C32; C51
Abstract In this paper we examine the causal linkages between the G-7 long-term interest rates by using a new technique, which enables the researcher to analyse relations between a set of I(1) series without imposing any identification conditions based on economic theory. Specifically, we apply the so-called Extended Davidson's Methodology (EDM), which is based on the innovative concept of an irreducible cointegrating (IC) vector, defined as a subset of a cointegrating relation that does not have any cointegrated subsets. Ranking the irreducible vectors according to the criterion of minimum variance allows us to distinguish between structural and solved relations. The empirical results provide support for the hypothesis that larger, more stable economies can achieve policy objectives more successfully by accommodating rather than driving other countries' policies. It appears that the driving force is Canada, which is linked to the USA, UK and France in three out of the four fundamental relations and which is a reference point for the US rate. The Italian and German rates, which are not cointegrated, seem to be determined by country-specific factors. Copyright © 2001 John Wiley & Sons, Ltd. [source]

Estimation Optimality of Corrected AIC and Modified Cp in Linear Regression
INTERNATIONAL STATISTICAL REVIEW, Issue 2 2006
Simon L. Davies
Summary Model selection criteria often arise by constructing unbiased or approximately unbiased estimators of measures known as expected overall discrepancies (Linhart & Zucchini, 1986, p. 19). Such measures quantify the disparity between the true model (i.e., the model which generated the observed data) and a fitted candidate model. For linear regression with normally distributed error terms, the "corrected" Akaike information criterion and the "modified" conceptual predictive statistic have been proposed as exactly unbiased estimators of their respective target discrepancies. We expand on previous work to additionally show that these criteria achieve minimum variance within the class of unbiased estimators. [source]
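As a quick numerical companion to the abstract above, the sketch below computes the corrected AIC in its common small-sample form and, for comparison, standard Mallows' Cp for two nested regression models. The data are simulated, parameter-counting conventions vary between references, and the exactly unbiased "modified" Cp of the paper is not reproduced here.

```python
import numpy as np

def ols_rss(X, y):
    """Ordinary least-squares fit; returns the residual sum of squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

def aic_c(rss, n, k):
    """Corrected AIC in the common form AIC + 2k(k+1)/(n - k - 1).

    Here k counts all estimated parameters (regression coefficients plus
    the error variance); conventions differ slightly between references.
    """
    aic = n * np.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)

def mallows_cp(rss_candidate, sigma2_full, n, p):
    """Standard Mallows' Cp for a candidate model with p coefficients."""
    return rss_candidate / sigma2_full + 2 * p - n

# Simulated example: y depends on x1 only; compare 1- and 2-predictor models.
rng = np.random.default_rng(1)
n = 40
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 2.0 * x1 + rng.normal(scale=1.0, size=n)

X_full = np.column_stack([np.ones(n), x1, x2])
X_small = np.column_stack([np.ones(n), x1])
rss_full, rss_small = ols_rss(X_full, y), ols_rss(X_small, y)
sigma2_full = rss_full / (n - X_full.shape[1])

print("AICc  small:", aic_c(rss_small, n, k=3), " full:", aic_c(rss_full, n, k=4))
print("Cp    small:", mallows_cp(rss_small, sigma2_full, n, p=2),
      " full:", mallows_cp(rss_full, sigma2_full, n, p=3))
```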
Forecasting the Treasury's balance at the Fed
JOURNAL OF FORECASTING, Issue 5 2004
Daniel L. Thornton
Abstract As part of the Fed's daily operating procedure, the Federal Reserve Bank of New York, the Board of Governors and the Treasury make a forecast of that day's Treasury balance at the Fed. These forecasts are an integral part of the Fed's daily operating procedure. Errors in these forecasts can generate variation in reserve supply and, consequently, the federal funds rate. This paper evaluates the accuracy of these forecasts. The evidence suggests that each agency's forecast contributes to the optimal, i.e., minimum variance, forecast and that the Trading Desk of the Federal Reserve Bank of New York incorporates information from all three of the agency forecasts in conducting daily open market operations. Moreover, these forecasts encompass the forecast of an economic model. Copyright © 2004 John Wiley & Sons, Ltd. [source]

PID control performance assessment: The single-loop case
AICHE JOURNAL, Issue 6 2004
Byung-Su Ko
Abstract An iterative solution is developed for the calculation of the best achievable (minimum variance) PID control performance and the corresponding optimal PID setting in an existing control loop. An analytic expression is derived for the closed-loop output as an explicit function of the PID setting. The resulting benchmark allows for realistic performance assessment of an existing PID control loop, especially when the control loop fails to meet the minimum variance performance. A PID performance index is then defined based on the PID performance bound, and its confidence interval is estimated. A series of simulated examples is used to demonstrate the utility of the proposed method. © 2004 American Institute of Chemical Engineers AIChE J, 50: 1211-1218, 2004 [source]
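The abstract above benchmarks a PID loop against minimum variance performance. The sketch below estimates the classical (controller-unrestricted) minimum variance benchmark and the corresponding performance index from routine closed-loop output data, assuming the process time delay is known; the iterative PID-achievable bound developed in the paper is not implemented, and the example data are simulated.

```python
import numpy as np

def mv_performance_index(y, delay, ar_order=10):
    """Minimum variance control performance index from closed-loop output data.

    Classical approach: whiten the output with an AR model, take the first
    `delay` impulse-response coefficients as the feedback-invariant part of
    the output, and compare that variance with the actual output variance.
    A value near 1 indicates performance close to minimum variance.
    """
    y = np.asarray(y, dtype=float) - np.mean(y)
    n = len(y)
    # 1. Fit an AR(p) model by least squares to estimate the innovations.
    X = np.column_stack([y[ar_order - i - 1:n - i - 1] for i in range(ar_order)])
    target = y[ar_order:]
    phi, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ phi
    sigma_a2 = np.var(resid)
    # 2. Impulse response of 1/A(q): psi_0 = 1, psi_j = sum_i phi_i * psi_{j-1-i}.
    psi = np.zeros(delay)
    psi[0] = 1.0
    for j in range(1, delay):
        psi[j] = sum(phi[i] * psi[j - 1 - i] for i in range(min(j, ar_order)))
    # 3. Feedback-invariant variance = minimum variance benchmark.
    sigma_mv2 = sigma_a2 * np.sum(psi ** 2)
    return sigma_mv2 / np.var(y)

# Example: simulated sluggishly controlled loop with a time delay of 3 samples.
rng = np.random.default_rng(2)
a = rng.normal(size=2000)
y = np.zeros(2000)
for t in range(3, 2000):
    y[t] = 0.8 * y[t - 1] + a[t] + 0.5 * a[t - 3]   # variance beyond the delay horizon
print("performance index:", round(mv_performance_index(y, delay=3), 2))
```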
CONTINUOUS-TIME MEAN-VARIANCE PORTFOLIO SELECTION WITH BANKRUPTCY PROHIBITION
MATHEMATICAL FINANCE, Issue 2 2005
Tomasz R. Bielecki
A continuous-time mean-variance portfolio selection problem is studied where all the market coefficients are random and the wealth process under any admissible trading strategy is not allowed to be below zero at any time. The trading strategy under consideration is defined in terms of the dollar amounts, rather than the proportions of wealth, allocated to individual stocks. The problem is completely solved using a decomposition approach. Specifically, a (constrained) variance-minimizing problem is formulated and its feasibility is characterized. Then, after a system of equations for two Lagrange multipliers is solved, variance-minimizing portfolios are derived as the replicating portfolios of some contingent claims, and the variance-minimizing frontier is obtained. Finally, the efficient frontier is identified as an appropriate portion of the variance-minimizing frontier after the monotonicity of the minimum variance with respect to the expected terminal wealth over this portion is proved and all the efficient portfolios are found. In the special case where the market coefficients are deterministic, efficient portfolios are explicitly expressed as feedback of the current wealth, and the efficient frontier is represented by parameterized equations. Our results indicate that the efficient policy for a mean-variance investor is simply to purchase a European put option that is chosen, according to his or her risk preferences, from a particular class of options. [source]
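For intuition about the variance-minimizing and efficient frontiers discussed above, the single-period analogue can be traced with a standard Markowitz calculation. The sketch below does this for a hypothetical three-asset market; the expected returns and covariance matrix are invented for illustration, and the continuous-time, bankruptcy-constrained solution of the paper is not reproduced.

```python
import numpy as np

# Toy three-asset market (illustrative numbers, not from the paper).
mu = np.array([0.06, 0.09, 0.12])           # expected returns
Sigma = np.array([[0.040, 0.006, 0.004],
                  [0.006, 0.090, 0.010],
                  [0.004, 0.010, 0.160]])    # return covariance matrix

Sinv = np.linalg.inv(Sigma)
ones = np.ones(3)
A = ones @ Sinv @ ones
B = ones @ Sinv @ mu
C = mu @ Sinv @ mu
D = A * C - B * B

def frontier_portfolio(m):
    """Weights of the variance-minimizing portfolio with target mean m
    (fully invested, short sales allowed)."""
    lam = (C - B * m) / D          # Lagrange multiplier on the budget constraint
    gam = (A * m - B) / D          # Lagrange multiplier on the mean constraint
    return Sinv @ (lam * ones + gam * mu)

def frontier_variance(m):
    """Variance along the minimum-variance frontier as a function of m."""
    return (A * m * m - 2 * B * m + C) / D

m_gmv = B / A                      # mean of the global minimum-variance portfolio
# The efficient frontier is the upper branch of the frontier, m >= m_gmv.
for m in np.linspace(m_gmv, 0.12, 5):
    w = frontier_portfolio(m)
    print(f"target mean {m:.3f}: weights {np.round(w, 3)}, "
          f"std {np.sqrt(frontier_variance(m)):.3f}")
```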
The three-dimensional power spectrum of dark and luminous matter from the VIRMOS-DESCART cosmic shear survey
MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 3 2003
Ue-Li Pen
ABSTRACT We present the first optimal power spectrum estimation and three-dimensional deprojections for the dark and luminous matter and their cross-correlations. The results are obtained using a new optimal fast estimator, deprojected using minimum variance and Singular Value Decomposition (SVD) techniques. We show the resulting 3D power spectra for dark matter and galaxies, and their covariance, for the VIRMOS-DESCART weak lensing shear and galaxy data. The survey is most sensitive to non-linear scales, k_NL ~ 1 h Mpc^-1. On these scales, our 3D power spectrum of dark matter is in good agreement with the RCS 3D power spectrum found by Tegmark & Zaldarriaga. Our galaxy power is similar to that found by the 2MASS survey, and larger than that of SDSS, APM and RCS, consistent with the expected difference in galaxy population. We find an average bias b = 1.24 ± 0.18 for the I-selected galaxies, and a cross-correlation coefficient r = 0.75 ± 0.23. Together with the power spectra, these results optimally encode the entire two-point information about dark matter and galaxies, including galaxy-galaxy lensing. We address some of the implications regarding galaxy haloes and mass-to-light ratios. The best-fitting 'halo' parameter h ≡ r/b = 0.57 ± 0.16, suggesting that dynamical masses estimated using galaxies systematically underestimate total mass. Ongoing surveys, such as the Canada-France-Hawaii Telescope Legacy Survey, will significantly improve on the dynamic range, and future photometric redshift catalogues will allow tomography along the same principles. [source]

Construction and Optimality of a Special Class of Balanced Designs
QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 5 2006
Stefano Barone
Abstract The use of balanced designs is generally advisable in experimental practice. In technological experiments, balanced designs optimize the exploitation of experimental resources, whereas in marketing research experiments they avoid erroneous conclusions caused by the misinterpretation of interviewed customers. In general, the balancing property assures the minimum variance of first-order effect estimates. In this work the authors consider situations in which all factors are categorical and minimum run size is required. In a symmetrical case, it is often possible to find an economical balanced design by means of algebraic methods. Conversely, in an asymmetrical case algebraic methods lead to expensive designs, and therefore it is necessary to adopt heuristic methods. The existing methods implemented in widespread statistical packages do not guarantee the balancing property, as they are designed to pursue other optimality criteria. To deal with this problem, the authors recently proposed a new method to generate balanced asymmetrical designs aimed at estimating first- and second-order effects. To reduce the run size as much as possible, orthogonality cannot be guaranteed. However, the method produces designs that approach orthogonality as closely as possible (near orthogonality). A collection of designs with two- and three-level factors and run size lower than 100 was prepared. In this work an empirical study was conducted to understand how much is lost in terms of other optimality criteria when pursuing the balancing property. In order to show the potential applications of these designs, an illustrative example is provided. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Multi-sensor track-to-track fusion via linear minimum variance sense estimators
ASIAN JOURNAL OF CONTROL, Issue 3 2008
Li-Wei Fong
Abstract An integrated approach that consists of sensor-based filtering algorithms, local processors, and a global processor is employed to describe the distributed fusion problem when several sensors execute surveillance over a certain area. For the sensor tracking systems, each filtering algorithm utilized in the reference Cartesian coordinate system is presented for target tracking, with the radar measuring range, bearing, and elevation angle in the spherical coordinate system (SCS). For the local processors, each track-to-track fusion algorithm is used to merge two tracks representing the same target. The number of 2-combinations of a set with N distinct sensors is considered for central track fusion. For the global processor, the data fusion algorithms, a simplified maximum likelihood (SML) estimator and a covariance matching method (CMM), based on linear minimum variance (LMV) estimation fusion theory, are developed for use in a centralized track-to-track fusion situation. The resulting global fusers can be implemented in a parallel structure to facilitate estimation fusion calculation. Simulation results show that the proposed SML estimator has a more robust capability of improving tracking accuracy than the CMM and the LMV estimators. Copyright © 2008 John Wiley and Sons Asia Pte Ltd and Chinese Automatic Control Society [source]
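As a minimal illustration of merging two tracks that represent the same target, the sketch below applies the information-weighted combination, which is the linear minimum variance fuser when the two track errors are uncorrelated. The SML and CMM fusers of the paper are not implemented here, and the numbers are invented.

```python
import numpy as np

def fuse_tracks(x1, P1, x2, P2):
    """Fuse two local track estimates of the same target, assuming their
    errors are uncorrelated (the classical information-weighted combination,
    which is the linear minimum variance fuser under that assumption)."""
    P1_inv, P2_inv = np.linalg.inv(P1), np.linalg.inv(P2)
    P_fused = np.linalg.inv(P1_inv + P2_inv)          # fused error covariance
    x_fused = P_fused @ (P1_inv @ x1 + P2_inv @ x2)   # fused state estimate
    return x_fused, P_fused

# Two local sensor tracks of a 2D position (illustrative numbers).
x1 = np.array([10.2, 4.8]); P1 = np.diag([0.9, 0.4])
x2 = np.array([9.7, 5.3]);  P2 = np.diag([0.3, 1.1])
x_f, P_f = fuse_tracks(x1, P1, x2, P2)
print("fused state:", np.round(x_f, 2))
print("fused covariance:\n", np.round(P_f, 3))
```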
Cosmic flows on 100 h^-1 Mpc scales: standardized minimum variance bulk flow, shear and octupole moments
MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 4 2010
Hume A. Feldman
ABSTRACT The low-order moments, such as the bulk flow and shear, of the large-scale peculiar velocity field are sensitive probes of the matter density fluctuations on very large scales. In practice, however, peculiar velocity surveys are usually sparse and noisy, which can lead to the aliasing of small-scale power into what is meant to be a probe of the largest scales. Previously, we developed an optimal 'minimum variance' (MV) weighting scheme, designed to overcome this problem by minimizing the difference between the measured bulk flow (BF) and that which would be measured by an ideal survey. Here we extend this MV analysis to include the shear and octupole moments, which are designed to have almost no correlations between them so that they are virtually orthogonal. We apply this MV analysis to a compilation of all major peculiar velocity surveys, consisting of 4536 measurements. Our estimate of the BF on scales of ~100 h^-1 Mpc has a magnitude of |v| = 416 ± 78 km s^-1 towards Galactic l = 282° ± 11° and b = 6° ± 6°. This result is in disagreement with Λ cold dark matter with Wilkinson Microwave Anisotropy Probe 5 (WMAP5) cosmological parameters at a high confidence level, but is in good agreement with our previous MV result without an orthogonality constraint, showing that the shear and octupole moments did not contaminate the previous BF measurement. The shear and octupole moments are consistent with the WMAP5 power spectrum, although the measurement noise is larger for these moments than for the BF. The relatively low shear moments suggest that the sources responsible for the BF are at large distances. [source]
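To show the kind of estimator such weighting schemes build on, the sketch below computes the simple maximum-likelihood (minimum variance linear unbiased) bulk flow from mock radial peculiar velocities. The survey geometry, errors and true flow are invented, and the standardized MV weights of the paper, which additionally suppress small-scale aliasing, are not implemented.

```python
import numpy as np

def bulk_flow_mle(rhat, v_rad, sigma):
    """Maximum-likelihood bulk flow from radial peculiar velocities.

    Model: v_rad_i = rhat_i . V + noise with standard deviation sigma_i.
    Solving the weighted normal equations gives the minimum variance linear
    unbiased estimate of the bulk flow V under this simple model.
    """
    w = 1.0 / sigma ** 2
    A = (rhat * w[:, None]).T @ rhat                  # 3x3 weighted design matrix
    b = (rhat * (w * v_rad)[:, None]).sum(axis=0)     # weighted data vector
    cov = np.linalg.inv(A)                            # error covariance of the estimate
    return cov @ b, cov

# Mock survey: 500 tracers, true bulk flow of 400 km/s towards +x.
rng = np.random.default_rng(3)
rhat = rng.normal(size=(500, 3))
rhat /= np.linalg.norm(rhat, axis=1, keepdims=True)   # unit line-of-sight vectors
V_true = np.array([400.0, 0.0, 0.0])
sigma = np.full(500, 250.0)                           # measurement errors (km/s)
v_rad = rhat @ V_true + rng.normal(scale=sigma)
V_est, cov = bulk_flow_mle(rhat, v_rad, sigma)
print("estimated bulk flow (km/s):", np.round(V_est, 1))
print("1-sigma errors (km/s):", np.round(np.sqrt(np.diag(cov)), 1))
```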