Point Estimation (point + estimation)
Selected Abstracts

Break Point Estimation and Spurious Rejections With Endogenous Unit Root Tests
OXFORD BULLETIN OF ECONOMICS & STATISTICS, Issue 5 2001
Junsoo Lee

This paper examines the accuracy of break point estimation using the endogenous break unit root tests of Zivot and Andrews (1992) and Perron (1997). We find that these tests tend to identify the break point incorrectly at one period behind (TB - 1) the true break point (TB), where bias in estimating the persistence parameter and spurious rejections are the greatest. In addition, this outcome occurs under both the null and alternative hypotheses, and more so as the magnitude of the break increases. The consequences of utilizing these endogenous break tests are similar to (incorrectly) omitting the break term Bt in Perron's (1989) exogenous test.

Sample Size Reassessment in Adaptive Clinical Trials Using a Bias Corrected Estimate
BIOMETRICAL JOURNAL, Issue 7 2003
Silke Coburger

Point estimation in group sequential and adaptive trials is an important issue in analysing a clinical trial. Most literature in this area is concerned only with estimation after completion of a trial. Since adaptive designs allow reassessment of the sample size during the trial, reliable point estimation of the true effect when continuing the trial is additionally needed. We present a bias-adjusted estimator which allows a more exact sample size determination based on the conditional power principle than the naive sample mean does.

Improved likelihood inference for the roughness parameter of the GA0 distribution
ENVIRONMETRICS, Issue 4 2008
Michel Ferreira da Silva

This paper presents adjusted profile likelihoods for the roughness parameter of the GA0 distribution. This distribution has been widely used in the modeling, processing and analysis of data corrupted by speckle noise, e.g. synthetic aperture radar images. Specifically, we consider the following modified profile likelihoods: (i) the one proposed by Cox and Reid, and (ii) approximations to the adjusted profile likelihood proposed by Barndorff-Nielsen, namely the approximations proposed by Severini and one based on results by Fraser, Reid and Wu. We focus on point estimation and on signalized likelihood ratio tests, the parameter of interest being the roughness parameter that indexes the GA0 distribution. As far as point estimation is concerned, the numerical evidence presented in the paper favors the Cox and Reid adjustment, and for signalized likelihood ratio tests the results favor the approximation to Barndorff-Nielsen's adjustment based on the results by Fraser, Reid and Wu. An application to real synthetic aperture radar imagery is presented and discussed.
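The misdating described in the endogenous-break abstract above arises in the grid-search step those tests share: every admissible break date is tried, and the one giving the strongest evidence against the unit root is kept. The sketch below shows that step in a deliberately stripped-down form (intercept break only, no augmentation lags, ordinary least squares throughout); it is not the exact Zivot-Andrews or Perron regression, and the series y and the trimming fraction are illustrative assumptions.

```python
import numpy as np

def select_break_date(y, trim=0.15):
    """Grid-search an intercept-break date TB by minimizing the t-statistic on the
    lagged level, in the spirit of endogenous-break unit root tests.
    Minimal sketch: no augmentation lags, crash-type (intercept) break only."""
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)                                # dependent variable: first differences
    n = dy.size
    lo, hi = int(trim * n), int((1 - trim) * n)    # trim the ends of the sample
    best_tb, best_t = None, np.inf
    for tb in range(lo, hi):
        t_idx = np.arange(1, n + 1)
        du = (t_idx > tb).astype(float)            # break dummy DU_t = 1{t > TB}
        X = np.column_stack([np.ones(n), t_idx, du, y[:-1]])
        beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
        resid = dy - X @ beta
        sigma2 = resid @ resid / (n - X.shape[1])
        cov = sigma2 * np.linalg.inv(X.T @ X)
        t_rho = beta[-1] / np.sqrt(cov[-1, -1])    # t-stat on the persistence term
        if t_rho < best_t:                         # most negative = strongest rejection
            best_t, best_tb = t_rho, tb
    return best_tb, best_t
```

Feeding this simulated series with a known break date lets one tabulate where the selected dates fall relative to the true TB, which is how the clustering at TB - 1 reported in the abstract can be examined.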
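The adaptive-trials abstract above rests on the conditional power principle: at an interim look, the remaining sample size is chosen so that the trial still reaches its target power given the effect estimated so far. The sketch below shows the simplest version of that calculation for a two-arm comparison with known outcome standard deviation; it treats the interim estimate as the truth, ignores the first-stage data and any combination rule, and does not reproduce the bias correction the authors propose, so effect_estimate merely stands in for whatever adjusted estimate would be plugged in.

```python
from math import ceil
from scipy.stats import norm

def second_stage_n(effect_estimate, sigma, alpha=0.025, target_power=0.80):
    """Per-group sample size so that a one-sided two-sample z-test reaches the
    target power, assuming the true effect equals effect_estimate.
    Simplified conditional-power-style reassessment sketch."""
    if effect_estimate <= 0:
        raise ValueError("sketch assumes a positive interim effect estimate")
    z_alpha = norm.ppf(1 - alpha)
    z_beta = norm.ppf(target_power)
    n = 2 * ((z_alpha + z_beta) * sigma / effect_estimate) ** 2
    return ceil(n)

# e.g. interim estimate 0.4 on an outcome with SD 1.0 -> roughly 99 patients per group
print(second_stage_n(0.4, 1.0))
```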
Models for the estimation of a 'no effect concentration'
ENVIRONMETRICS, Issue 1 2002
Ana M. Pires

The use of a no effect concentration (NEC), instead of the commonly used no observed effect concentration (NOEC), has been advocated recently. In this article, models and methods for the estimation of an NEC are proposed and it is shown that the NEC overcomes many of the objections to the NOEC. The NEC is included as a threshold parameter in a non-linear model. Numerical methods are then used for point estimation, and several techniques are proposed for interval estimation (based on the bootstrap, the profile likelihood and asymptotic normality). The adequacy of these methods is empirically confirmed by the results of a simulation study. The profile likelihood based interval has emerged as the best method. Finally, the methodology is illustrated with data obtained from a 21-day Daphnia magna reproduction test with a reference substance, 3,4-dichloroaniline (3,4-DCA), and with a real effluent.

Predictive distributions in risk analysis and estimation for the triangular distribution
ENVIRONMETRICS, Issue 7 2001
Yongsung Joo

Many Monte Carlo simulation studies have been done in the field of risk analysis. This article demonstrates the importance of using predictive distributions (the estimated distributions of the explanatory variable accounting for uncertainty in the point estimation of parameters) in such simulations. We explore different types of predictive distributions for the normal distribution, the lognormal distribution and the triangular distribution. The triangular distribution poses particular problems, and we found that estimation using quantile least squares was preferable to maximum likelihood.

Group contribution prediction of surface charge density profiles for COSMO-RS(Ol)
AICHE JOURNAL, Issue 12 2007
Tiancheng Mu

A new method for predicting the surface charge density distribution (σ profile) and cavity volume of molecules based on group contributions was developed. The original σ profiles used for the regression were obtained using Gaussian 03 B3LYP/6-311G(d,p). In total, 1363 σ profiles were used for the regression of the group parameters. Group definitions are identical to those used previously for boiling point estimation. Original and estimated σ profiles were used to predict activity coefficients at infinite dilution and VLE data of binary systems using the COSMO-RS(Ol) model. The results were compared with the experimental data stored in the Dortmund Data Bank. In many cases the results were of comparable accuracy; however, for a few compounds, in particular conjugated components such as nitrobenzenes, poor results were obtained. The method offers a fast and reliable generation of σ profiles to be used with COSMO-RS(Ol) within its range of applicability.

Using data augmentation to correct for non-ignorable non-response when surrogate data are available: an application to the distribution of hourly pay
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 3 2006
Gabriele B. Durrant

The paper develops a data augmentation method to estimate the distribution function of a variable which is partially observed under a non-ignorable missing data mechanism and where surrogate data are available. An application to the estimation of hourly pay distributions using UK Labour Force Survey data provides the main motivation. In addition to considering a standard parametric data augmentation method, we consider the use of hot deck imputation methods as part of the data augmentation procedure to improve the robustness of the method. The method proposed is compared with standard methods that are based on an ignorable missing data mechanism, both in a simulation study and in the Labour Force Survey application. The focus is on reducing bias in point estimation, but variance estimation using multiple imputation is also considered briefly.
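The 'no effect concentration' abstract above describes the NEC as a threshold parameter inside a non-linear model, estimated numerically, with the profile-likelihood interval performing best. The sketch below follows that outline under assumptions of ours, not the article's: a hockey-stick mean (constant response up to the NEC, log-linear decline above it), Gaussian errors, and Nelder-Mead optimisation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

def neg_loglik(params, conc, resp):
    """Gaussian negative log-likelihood for a hockey-stick mean:
    mu for conc <= NEC, and mu - slope*(log(conc) - log(NEC)) above the NEC."""
    mu, slope, log_nec, log_sigma = params
    nec, sigma = np.exp(log_nec), np.exp(log_sigma)
    mean = np.where(conc <= nec, mu, mu - slope * (np.log(conc) - np.log(nec)))
    z = (resp - mean) / sigma
    return 0.5 * np.sum(z ** 2) + resp.size * np.log(sigma)

def fit(conc, resp, start):
    """Point estimation: minimise the negative log-likelihood numerically."""
    return minimize(neg_loglik, start, args=(conc, resp), method="Nelder-Mead")

def profile_ci_nec(conc, resp, start, level=0.95, half_width=2.0, n_grid=81):
    """Profile-likelihood interval for the NEC: keep every fixed NEC value whose
    profiled negative log-likelihood stays within the chi-square(1) cutoff."""
    full = fit(conc, resp, start)
    cutoff = full.fun + 0.5 * chi2.ppf(level, df=1)
    grid = np.linspace(full.x[2] - half_width, full.x[2] + half_width, n_grid)
    kept = []
    for log_nec in grid:
        def nll_fixed(p):  # re-optimise mu, slope and sigma with the NEC held fixed
            return neg_loglik([p[0], p[1], log_nec, p[2]], conc, resp)
        prof = minimize(nll_fixed, [full.x[0], full.x[1], full.x[3]], method="Nelder-Mead")
        if prof.fun <= cutoff:
            kept.append(np.exp(log_nec))
    return np.exp(full.x[2]), (min(kept), max(kept))
```

The bootstrap and asymptotic-normality intervals the abstract mentions would wrap the same fit call; only the interval construction changes.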
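For the triangular-distribution abstract above, the contrast is between maximum likelihood, which is awkward because the likelihood is non-smooth in the endpoints and vanishes whenever an observation falls outside them, and quantile least squares, which matches theoretical quantiles to the ordered sample. A minimal quantile-least-squares sketch follows; the plotting positions and the optimiser are assumptions, not the article's exact procedure.

```python
import numpy as np
from scipy.optimize import minimize

def triangular_quantile(p, a, c, b):
    """Quantile function of the triangular distribution with min a, mode c, max b."""
    p = np.asarray(p, dtype=float)
    fc = (c - a) / (b - a)                          # CDF value at the mode
    left = a + np.sqrt(p * (b - a) * (c - a))
    right = b - np.sqrt((1 - p) * (b - a) * (b - c))
    return np.where(p <= fc, left, right)

def fit_triangular_qls(x):
    """Quantile least squares: choose (a, c, b) minimising the squared distance
    between theoretical quantiles and the ordered sample."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    p = (np.arange(1, n + 1) - 0.5) / n             # plotting positions (assumed)
    def loss(theta):
        a, c, b = theta
        if not (a < c < b):
            return np.inf                           # keep the parameters ordered
        return np.sum((triangular_quantile(p, a, c, b) - x) ** 2)
    start = np.array([x.min(), np.median(x), x.max()])
    start[0] -= 1e-6 * (1 + abs(start[0]))          # nudge so a < c < b strictly
    start[2] += 1e-6 * (1 + abs(start[2]))
    return minimize(loss, start, method="Nelder-Mead").x
```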
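The data-augmentation abstract above adds hot deck imputation to a parametric procedure: a missing value is replaced by an observed value drawn from a donor in the same class, with classes formed from the surrogate variable. The sketch below shows only that hot deck draw, not the surrounding data augmentation; the variable names and the banded class variable are invented for illustration and are not those of the Labour Force Survey application.

```python
import numpy as np

rng = np.random.default_rng(0)

def hot_deck_impute(values, observed, classes):
    """Fill missing entries of `values` by drawing, within each class, a donor
    value from the respondents in that class (with replacement)."""
    values = np.asarray(values, dtype=float).copy()
    observed = np.asarray(observed, dtype=bool)
    classes = np.asarray(classes)
    for cls in np.unique(classes):
        in_cls = classes == cls
        donors = values[in_cls & observed]
        recipients = in_cls & ~observed
        if donors.size == 0:
            continue                       # no donor in this class: leave missing
        values[recipients] = rng.choice(donors, size=recipients.sum(), replace=True)
    return values

# e.g. hourly pay with some non-respondents, classes from a banded surrogate (illustrative)
pay       = np.array([7.5, np.nan, 12.0, np.nan, 9.1, 30.0, np.nan])
responded = ~np.isnan(pay)
band      = np.array([0, 0, 1, 1, 0, 2, 2])
print(hot_deck_impute(pay, responded, band))
```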
Identifying the time of polynomial drift in the mean of autocorrelated processes
QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 5 2010
Marcus B. Perry

Control charts are used to detect changes in a process. Once a change is detected, knowledge of the change point would simplify the search for and identification of the special cause. Consequently, having an estimate of the process change point following a control chart signal would be useful to process engineers. This paper addresses change point estimation for covariance-stationary autocorrelated processes where the mean drifts deterministically with time. For example, the mean of a chemical process might drift linearly over time as a result of a constant pressure leak. The goal of this paper is to derive and evaluate an MLE for the time of polynomial drift in the mean of autocorrelated processes. It is assumed that the behavior of the process mean over time is adequately modeled by a kth-order polynomial trend model, and that the autocorrelation structure is adequately modeled by the general (stationary and invertible) mixed autoregressive-moving-average model. The estimator is intended to be applied to data obtained following a genuine control chart signal, in an effort to help pinpoint the root cause of the process change. Application of the estimator is demonstrated using a simulated data set. The performance of the estimator is evaluated through Monte Carlo simulation studies for the k = 1 case and across several processes yielding various levels of positive autocorrelation. Results suggest that the proposed estimator provides process engineers with an accurate and useful estimate of the last sample obtained from the unchanged process.
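The change-point abstract above derives an MLE for the onset of polynomial drift under general ARMA errors. The sketch below shows the shape of that computation for the k = 1 (linear drift) case only, with the error structure simplified to an AR(1) with a known coefficient and the likelihood handled conditionally; these simplifications are ours, not the paper's, whose estimator covers the general model.

```python
import numpy as np

def mle_drift_onset(x, phi):
    """Grid-search MLE of the time tau at which a linear drift starts in the mean
    of an AR(1) process with known coefficient phi (conditional Gaussian likelihood,
    concentrated over the intercept, drift slope and innovation variance)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    t = np.arange(1, n + 1)
    z = x[1:] - phi * x[:-1]                        # quasi-differenced observations
    best_tau, best_rss = None, np.inf
    for tau in range(1, n - 1):                     # candidate last in-control sample
        d = np.maximum(t - tau, 0).astype(float)    # drift term: 0 up to tau, t - tau after
        W = np.column_stack([
            np.full(n - 1, 1.0 - phi),              # quasi-differenced intercept
            d[1:] - phi * d[:-1],                   # quasi-differenced drift regressor
        ])
        beta, *_ = np.linalg.lstsq(W, z, rcond=None)
        rss = np.sum((z - W @ beta) ** 2)           # smaller RSS = larger likelihood
        if rss < best_rss:
            best_rss, best_tau = rss, tau
    return best_tau
```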
Bayesian estimation of finite time ruin probabilities
APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 6 2009
M. Concepcion Ausin

In this paper, we consider Bayesian inference and estimation of finite time ruin probabilities for the Sparre Andersen risk model. The dense family of Coxian distributions is considered for the approximation of both the inter-claim time and claim size distributions. We illustrate that the Coxian model can be well fitted to real, long-tailed claims data and that this compares well with the generalized Pareto model. The main advantage of using the Coxian model for inter-claim times and claim sizes is that it is possible to compute finite time ruin probabilities by making use of recent results from queueing theory. In practice, finite time ruin probabilities are much more useful than infinite time ruin probabilities, as insurance companies are usually interested in predictions over short periods of future time and not just in the limit. We show how to obtain predictive distributions of these finite time ruin probabilities, which are more informative than simple point estimates and take account of model and parameter uncertainty. We illustrate the procedure with simulated data and the well-known Danish fire loss data set.
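The final abstract computes finite time ruin probabilities for the Sparre Andersen model exactly, via queueing-theory results, and then propagates posterior parameter draws through that computation. The exact machinery is beyond a short sketch, but the target quantity is easy to state: ruin occurs if the surplus u + c*t minus accumulated claims falls below zero before the horizon T. The Monte Carlo sketch below estimates that probability assuming exponential inter-claim times and claim sizes purely for illustration (the paper uses Coxian distributions for both); in a Bayesian use it would be called once per posterior draw to build a predictive distribution of the ruin probability.

```python
import numpy as np

rng = np.random.default_rng(1)

def finite_time_ruin_prob(u, c, T, claim_rate, mean_claim, n_paths=20_000):
    """Monte Carlo estimate of P(ruin before time T) for a surplus process
    R(t) = u + c*t - S(t), with exponential inter-claim times (rate claim_rate)
    and exponential claim sizes (mean mean_claim). Because the surplus only drops
    at claim instants, ruin is checked there."""
    ruined = 0
    for _ in range(n_paths):
        t, total_claims = 0.0, 0.0
        while True:
            t += rng.exponential(1.0 / claim_rate)   # next claim arrival
            if t > T:
                break                                 # horizon reached without ruin
            total_claims += rng.exponential(mean_claim)
            if u + c * t - total_claims < 0.0:
                ruined += 1
                break
    return ruined / n_paths

# premium loading of 20% over the expected claim outflow, horizon of one year (days)
print(finite_time_ruin_prob(u=10.0, c=1.2, T=365.0, claim_rate=0.1, mean_claim=10.0))
```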