Efficient Estimates (efficient + estimate)

Selected Abstracts


A posteriori error estimator for expanded mixed hybrid methods

NUMERICAL METHODS FOR PARTIAL DIFFERENTIAL EQUATIONS, Issue 2 2007
Dongho Kim
Abstract In this article, we construct an a posteriori error estimator for expanded mixed hybrid finite-element methods for second-order elliptic problems. An a posteriori error analysis yields reliable and efficient estimates based on residuals. Several numerical examples are presented to show the effectivity of our error indicators. © 2006 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq 23: 330–349, 2007 [source]


Fixed fees and physician-induced demand: A panel data study on French physicians

HEALTH ECONOMICS, Issue 9 2003
Eric Delattre
Abstract This paper investigates the existence of physician-induced demand (PID) among French physicians. The test is carried out for GPs and specialists, using a representative sample of 4500 French self-employed physicians over the 1979–1993 period. These physicians receive fee-for-service (FFS) payment and fees are controlled. The panel structure of our data allows us to take into account unobserved heterogeneity related to the characteristics of physicians and their patients. We use generalized method of moments (GMM) estimators in order to obtain consistent and efficient estimates. We show that physicians experience a decline in the number of consultations when they face an increase in the physician-to-population ratio. However, this decrease is very slight. In addition, physicians counterbalance the fall in the number of consultations by an increase in the volume of care delivered in each encounter. The econometric results give strong support for the existence of PID in the French system for ambulatory care. Copyright © 2003 John Wiley & Sons, Ltd. [source]


High-dimensional model representation for structural reliability analysis

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 4 2009
Rajib Chowdhury
Abstract This paper presents a new computational tool for predicting the failure probability of structural/mechanical systems subject to random loads, material properties, and geometry. The method involves high-dimensional model representation (HDMR) that facilitates lower-dimensional approximation of the original high-dimensional implicit limit state/performance function, response surface generation of HDMR component functions, and Monte Carlo simulation. HDMR is a general set of quantitative model assessment and analysis tools for capturing the high-dimensional relationships between sets of input and output model variables. If higher-order variable correlations are weak, it provides a very efficient formulation of the system response, allowing the physical model to be captured by the first few lower-order terms. Once the approximate form of the original implicit limit state/performance function is defined, the failure probability can be obtained by statistical simulation. Results of nine numerical examples involving mathematical functions and structural mechanics problems indicate that the proposed method provides accurate and computationally efficient estimates of the probability of failure. Copyright © 2008 John Wiley & Sons, Ltd. [source]
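For readers unfamiliar with HDMR, the sketch below illustrates the first-order (univariate) cut-HDMR surrogate plus crude Monte Carlo described in the abstract. The limit-state function, input distributions and grid settings are invented for illustration and are not taken from the paper.

```python
# Illustrative sketch (not the authors' implementation): build a first-order
# cut-HDMR surrogate of a limit-state function g(x), then estimate the
# failure probability P[g(X) < 0] by Monte Carlo on the cheap surrogate.
import numpy as np
from scipy.interpolate import interp1d

def g(x):                          # hypothetical implicit performance function
    return x[0] ** 2 + x[1] + 0.1 * x[0] * x[1] - 2.2

mu = np.array([1.0, 1.0])          # reference (cut) point: the input means
sigma = np.array([0.3, 0.3])
n_grid = 9                         # sample points along each coordinate axis

# Univariate component functions g_i(x_i) = g(x_i, cut point elsewhere)
g0 = g(mu)
components = []
for i in range(len(mu)):
    grid = np.linspace(mu[i] - 3 * sigma[i], mu[i] + 3 * sigma[i], n_grid)
    vals = []
    for xi in grid:
        x = mu.copy()
        x[i] = xi
        vals.append(g(x))
    components.append(interp1d(grid, vals, kind="cubic",
                               fill_value="extrapolate"))

# First-order HDMR surrogate: g0 + sum_i [g(x_i, c_-i) - g0], evaluated by MC
rng = np.random.default_rng(0)
X = rng.normal(mu, sigma, size=(100_000, 2))
g_surrogate = g0 + sum(components[i](X[:, i]) - g0 for i in range(len(mu)))
pf = np.mean(g_surrogate < 0.0)
print(f"estimated failure probability: {pf:.4f}")
```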


Study of geometric degeneracies in electromagnetic characteristics of magnetron-type corrugated cavity

INTERNATIONAL JOURNAL OF RF AND MICROWAVE COMPUTER-AIDED ENGINEERING, Issue 4 2002
Andriy E. Serebryannikov
Abstract A geometric degeneracy that can occur in the natural frequencies and external Q factors of magnetron-type cavities is studied. We analyze the causes of its appearance and consider how it can be exploited to enhance the efficiency of cavity design. Transcendental frequency-independent equations, whose solutions can be easily obtained, and efficient estimates based on rigorous analysis are suggested to predict the existence of the degeneracy. © 2002 Wiley Periodicals, Inc. Int J RF and Microwave CAE 12: 320–331, 2002. Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/mmce.10030 [source]


Multi-scale occupancy estimation and modelling using multiple detection methods

JOURNAL OF APPLIED ECOLOGY, Issue 5 2008
James D. Nichols
Summary
1. Occupancy estimation and modelling based on detection–nondetection data provide an effective way of exploring change in a species' distribution across time and space in cases where the species is not always detected with certainty. Today, many monitoring programmes target multiple species, or life stages within a species, requiring the use of multiple detection methods. When multiple methods or devices are used at the same sample sites, animals can be detected by more than one method.
2. We develop occupancy models for multiple detection methods that permit simultaneous use of data from all methods for inference about method-specific detection probabilities. Moreover, the approach permits estimation of occupancy at two spatial scales: the larger scale corresponds to species' use of a sample unit, whereas the smaller scale corresponds to presence of the species at the local sample station or site.
3. We apply the models to data collected on two different vertebrate species: striped skunks Mephitis mephitis and red salamanders Pseudotriton ruber. For striped skunks, large-scale occupancy estimates were consistent between two sampling seasons. Small-scale occupancy probabilities were slightly lower in the late winter/spring, when skunks tend to conserve energy and movements are limited to males in search of females for breeding. There was strong evidence of method-specific detection probabilities for skunks. As anticipated, large- and small-scale occupancy areas completely overlapped for red salamanders. The analyses provided weak evidence of method-specific detection probabilities for this species.
4. Synthesis and applications. Increasingly, many studies are utilizing multiple detection methods at sampling locations. The modelling approach presented here makes efficient use of detections from multiple methods to estimate occupancy probabilities at two spatial scales and to compare detection probabilities associated with different detection methods. The models can be viewed as another variation of Pollock's robust design and may be applicable to a wide variety of scenarios where species occur in an area but are not always near the sampled locations. The estimation approach is likely to be especially useful in multispecies conservation programmes by providing efficient estimates using multiple detection devices and by providing device-specific detection probability estimates for use in survey design. [source]
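The two-scale structure in point 2 can be written down compactly. The sketch below is a heavily simplified, single-season illustration of such a likelihood (large-scale use psi, small-scale presence theta, method-specific detection p_m) fitted to simulated data; it is not the authors' full robust-design implementation, and the study design and parameter values are made up.

```python
# Simplified two-scale occupancy sketch: n_sites sample units, n_stations
# survey stations per unit, n_methods detection methods per station.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(1)
n_sites, n_stations, n_methods = 200, 4, 3
true_psi, true_theta, true_p = 0.7, 0.6, np.array([0.5, 0.3, 0.4])

# Simulate: unit used? -> locally present at station? -> detected by each method?
z = rng.binomial(1, true_psi, n_sites)
a = rng.binomial(1, true_theta, (n_sites, n_stations)) * z[:, None]
Y = rng.binomial(1, true_p, (n_sites, n_stations, n_methods)) * a[..., None]

def negloglik(params):
    psi, theta = expit(params[0]), expit(params[1])
    p = expit(params[2:])
    det = (p ** Y * (1 - p) ** (1 - Y)).prod(axis=2)      # P(history | locally present)
    no_det = (Y.sum(axis=2) == 0).astype(float)           # station history all zeros?
    per_station = theta * det + (1 - theta) * no_det      # P(history | unit used)
    lik = psi * per_station.prod(axis=1) + (1 - psi) * (Y.sum(axis=(1, 2)) == 0)
    return -np.sum(np.log(lik))

fit = minimize(negloglik, np.zeros(2 + n_methods), method="BFGS")
psi_hat, theta_hat, p_hat = expit(fit.x[0]), expit(fit.x[1]), expit(fit.x[2:])
print(psi_hat, theta_hat, p_hat)
```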


Efficient estimation of three-dimensional curves and their derivatives by free-knot regression splines, applied to the analysis of inner carotid artery centrelines

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 3 2009
Laura M. Sangalli
Summary. We deal with the problem of efficiently estimating a three-dimensional curve and its derivatives, starting from a discrete and noisy observation of the curve. This problem now arises in many applied contexts, thanks to the advent of devices that provide three-dimensional images and measurements, such as three-dimensional scanners in medical diagnostics. Our research, in particular, stems from the need for accurate estimation of the curvature of an artery from image reconstructions of three-dimensional angiographies. This need has emerged within the AneuRisk project, a scientific endeavour that aims to investigate the role of vessel morphology, blood fluid dynamics and biomechanical properties of the vascular wall in the pathogenesis of cerebral aneurysms. We develop a regression technique that exploits free-knot splines in a novel setting to estimate three-dimensional curves and their derivatives. We thoroughly compare this technique with a classical regression method, local polynomial smoothing, showing that three-dimensional free-knot regression splines yield more accurate and efficient estimates. [source]
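As a rough illustration of the idea (not the authors' algorithm), the sketch below fits each coordinate of a noisy 3D curve with a cubic regression spline whose interior knots are chosen by a naive random search, a crude stand-in for true free-knot optimization, and then obtains derivatives and curvature from the fitted splines. The helix test curve, noise level and search settings are made up.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(2)
s = np.linspace(0, 1, 400)                               # curve parameter
truth = np.c_[np.cos(4 * np.pi * s), np.sin(4 * np.pi * s), 2 * s]
data = truth + rng.normal(scale=0.02, size=truth.shape)  # noisy observations

def fit_free_knot(s, y, n_knots=8, n_trials=200):
    """Pick the interior-knot configuration with the lowest residual sum of squares."""
    best, best_rss = None, np.inf
    for _ in range(n_trials):
        t = np.sort(rng.uniform(s[1], s[-2], n_knots))    # candidate interior knots
        try:
            spl = LSQUnivariateSpline(s, y, t, k=3)
        except ValueError:                                # invalid knot layout, skip
            continue
        rss = np.sum((spl(s) - y) ** 2)
        if rss < best_rss:
            best, best_rss = spl, rss
    return best

splines = [fit_free_knot(s, data[:, j]) for j in range(3)]
curve = np.column_stack([spl(s) for spl in splines])              # fitted curve
d1 = np.column_stack([spl.derivative(1)(s) for spl in splines])   # first derivatives
d2 = np.column_stack([spl.derivative(2)(s) for spl in splines])   # second derivatives
# Curvature of the fitted centreline: ||c' x c''|| / ||c'||^3
curvature = np.linalg.norm(np.cross(d1, d2), axis=1) / np.linalg.norm(d1, axis=1) ** 3
```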


Bootstrap inference in a linear equation estimated by instrumental variables

THE ECONOMETRICS JOURNAL, Issue 3 2008
Russell Davidson
Summary. We study several tests for the coefficient of the single right-hand-side endogenous variable in a linear equation estimated by instrumental variables. We show that writing all the test statistics, namely Student's t, Anderson–Rubin, the LM statistic of Kleibergen and Moreira (K), and the likelihood ratio (LR), as functions of six random quantities leads to a number of interesting results about the properties of the tests under weak-instrument asymptotics. We then propose several new procedures for bootstrapping the three non-exact test statistics, and also a new conditional bootstrap version of the LR test. These use more efficient estimates of the parameters of the reduced-form equation than existing procedures. When the best of these new procedures is used, both the K and conditional bootstrap LR tests have excellent performance under the null. However, power considerations suggest that the latter is probably the method of choice. [source]
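A generic restricted residual bootstrap of the 2SLS t-statistic, sketched below for a single endogenous regressor and no exogenous covariates, conveys the basic mechanics. It is not one of the more refined procedures proposed in the paper, which re-estimate the reduced-form parameters more efficiently; all data-generating values are arbitrary.

```python
# Bootstrap of the 2SLS t-statistic for H0: beta = beta0, imposing the null
# in the bootstrap DGP and resampling the two residuals jointly.
import numpy as np

rng = np.random.default_rng(3)
n, L, beta0 = 200, 4, 1.0
Z = rng.normal(size=(n, L))                      # instruments
pi = np.full(L, 0.2)                             # fairly weak instruments
v = rng.normal(size=n)
u = 0.6 * v + rng.normal(size=n)                 # endogeneity via corr(u, v)
x = Z @ pi + v
y = beta0 * x + u

def tsls_t(y, x, Z, b0):
    Px = Z @ np.linalg.solve(Z.T @ Z, Z.T @ x)   # P_Z x
    beta = (Px @ y) / (Px @ x)                   # 2SLS estimate
    resid = y - beta * x
    s2 = resid @ resid / (len(y) - 1)
    se = np.sqrt(s2 / (Px @ x))                  # homoskedastic standard error
    return (beta - b0) / se

t_obs = tsls_t(y, x, Z, beta0)

pi_hat = np.linalg.lstsq(Z, x, rcond=None)[0]    # OLS reduced-form estimate
v_hat = x - Z @ pi_hat
u_hat = y - beta0 * x                            # restricted structural residuals
B = 999
t_boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, n)                  # resample residual pairs
    x_b = Z @ pi_hat + v_hat[idx]
    y_b = beta0 * x_b + u_hat[idx]
    t_boot[b] = tsls_t(y_b, x_b, Z, beta0)
p_value = np.mean(np.abs(t_boot) >= np.abs(t_obs))
print(f"bootstrap p-value: {p_value:.3f}")
```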


Nonlinear econometric models with cointegrated and deterministically trending regressors

THE ECONOMETRICS JOURNAL, Issue 1 2001
Yoosoon Chang
This paper develops an asymptotic theory for a general class of nonlinear non-stationary regressions, extending earlier work by Phillips and Hansen (1990) on linear cointegrating regressions. The model considered accommodates a linear time trend and stationary regressors, as well as multiple I(1) regressors. We establish consistency and derive the limit distribution of the nonlinear least squares estimator. The estimator is consistent under fairly general conditions, but the convergence rate and the limiting distribution depend critically on the type of regression function. For integrable regression functions, the parameter estimates converge at a reduced n^(1/4) rate and have mixed normal limit distributions. On the other hand, if the regression functions are homogeneous at infinity, the convergence rates are determined by the degree of asymptotic homogeneity and the limit distributions are non-Gaussian. It is shown that nonlinear least squares generally yields inefficient estimators and invalid tests, just as in linear nonstationary regressions. The paper proposes a methodology to overcome such difficulties. The approach is simple to implement, produces efficient estimates and leads to tests that are asymptotically chi-square. It is implemented in empirical applications in much the same way as the fully modified estimator of Phillips and Hansen. [source]


Range-Based Estimation of Stochastic Volatility Models

THE JOURNAL OF FINANCE, Issue 3 2002
Sassan Alizadeh
We propose using the price range in the estimation of stochastic volatility models. We show theoretically, numerically, and empirically that range-based volatility proxies are not only highly efficient, but also approximately Gaussian and robust to microstructure noise. Hence range-based Gaussian quasi-maximum likelihood estimation produces highly efficient estimates of stochastic volatility models and extractions of latent volatility. We use our method to examine the dynamics of daily exchange rate volatility and find that the evidence points strongly toward two-factor models with one highly persistent factor and one quickly mean-reverting factor. [source]
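The range-based idea is easy to illustrate: the daily log range log(high/low) serves as a volatility proxy, and the Parkinson (1980) estimator converts squared log ranges into a variance estimate. The sketch below uses simulated prices and this simple moment estimator rather than the paper's quasi-maximum likelihood estimation of a full stochastic volatility model; all simulation settings are made up.

```python
import numpy as np

rng = np.random.default_rng(4)
n_days, n_intraday, true_sigma = 500, 390, 0.01     # daily volatility of 1%

# Simulate intraday log prices as a Gaussian random walk, then take the daily range
steps = rng.normal(0.0, true_sigma / np.sqrt(n_intraday), (n_days, n_intraday))
log_price = np.cumsum(steps, axis=1)
log_range = log_price.max(axis=1) - log_price.min(axis=1)   # log(high / low)

# Parkinson range-based variance estimator: E[(log range)^2] / (4 log 2)
var_parkinson = np.mean(log_range ** 2) / (4.0 * np.log(2.0))
print(f"estimated daily volatility: {np.sqrt(var_parkinson):.4f} (true {true_sigma})")
```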


Trend estimation of financial time series

APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 3 2010
Víctor M. Guerrero
Abstract We propose to decompose a financial time series into trend plus noise by means of the exponential smoothing filter. This filter produces statistically efficient estimates of the trend that can be calculated by a straightforward application of the Kalman filter. It can also be interpreted in a penalized least squares context, in which a criterion involving a smoothing constant is minimized by trading off goodness of fit against smoothness of the trend. The smoothing constant determines the degree of smoothness, and the problem is how to choose it objectively. We suggest a procedure that allows the user to decide at the outset the desired percentage of smoothness and to derive from it the corresponding value of that constant. A definition of smoothness is first proposed, as well as an index of relative precision attributable to the smoothing element of the time series. The procedure is extended to series with different frequencies of observation, so that comparable trends can be obtained for, say, daily, weekly or intraday observations of the same variable. The theoretical results are derived from an integrated moving average model of order (1, 1) underlying the statistical interpretation of the filter. Expressions for equivalent smoothing constants are derived for series generated by temporal aggregation or systematic sampling of another series. Hence, comparable trend estimates can be obtained for the same time series with different lengths, for different time series of the same length and for series with different frequencies of observation of the same variable. Copyright © 2009 John Wiley & Sons, Ltd. [source]
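One common way to write the penalized least squares problem referred to above penalizes first differences of the trend, consistent with the filter's IMA(1, 1)/local-level interpretation. The sketch below solves that problem directly for a simulated series; the smoothing constant is set by hand here rather than by the percentage-of-smoothness rule proposed in the paper, and the series is invented.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 300
trend_true = np.cumsum(rng.normal(0, 0.05, n))           # slowly varying trend
y = trend_true + rng.normal(0, 0.5, n)                    # observed series

def smooth_trend(y, lam):
    """Penalized least squares trend: argmin ||y - tau||^2 + lam * ||D tau||^2,
    where D is the first-difference operator."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)                         # (n-1) x n difference matrix
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

tau_hat = smooth_trend(y, lam=50.0)                        # larger lam -> smoother trend
noise_hat = y - tau_hat
```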


Sequential design in quality control and validation of land cover databases

APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 2 2009
Elisabetta Carfagna
Abstract We address the problem of evaluating the quality of land cover databases produced through photo-interpretation of remote-sensing data according to a legend of land cover types. First, we consider quality control, that is, the comparison of a land cover database with the result of the photo-interpretation made by a more expert photo-interpreter on a sample of the polygons. Then we analyse the problem of validation, that is, the check of the photo-interpretation through a ground survey. We use the percentage of area correctly photo-interpreted as a quality measure. Since the kind of land cover type and the size of the polygons affect the probability of making mistakes in the photo-interpretation, we stratify the polygons according to two variables: the land cover type assigned by the photo-interpretation and the size of the polygons. We propose an adaptive sequential procedure with permanent random numbers in which the sample size per stratum depends on the previously selected units but the sample selection does not, and the stopping rule is not based on the estimates of the quality parameter. We prove that this quality control and validation procedure yields unbiased and efficient estimates of the quality parameters and reaches high precision with the smallest sample size. Copyright © 2009 John Wiley & Sons, Ltd. [source]
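The quality measure itself is a standard stratified estimate. The sketch below combines hypothetical per-stratum figures into an area-weighted estimate of the percentage of area correctly photo-interpreted; the paper's contribution, the adaptive sequential rule deciding how many polygons to check per stratum, is not reproduced here.

```python
import numpy as np

# Hypothetical per-stratum totals (strata defined by land cover type and polygon size):
# total stratum area, area of the checked sample, and sampled area found correct.
stratum_area = np.array([1200.0, 800.0, 300.0, 150.0])   # in hectares, invented
sampled_area = np.array([  90.0,  70.0,  40.0,  25.0])
correct_area = np.array([  84.0,  61.0,  33.0,  24.0])

p_h = correct_area / sampled_area            # stratum-level proportion correct
W_h = stratum_area / stratum_area.sum()      # stratum area weights
p_strat = np.sum(W_h * p_h)                  # overall area-weighted quality estimate
print(f"estimated proportion of area correctly photo-interpreted: {p_strat:.3f}")
```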


Flood prone risk and amenity values: a spatial hedonic analysis

AUSTRALIAN JOURNAL OF AGRICULTURAL & RESOURCE ECONOMICS, Issue 4 2010
Oshadhi Samarasinghe
This study examines the impact of flood-hazard zone location on residential property prices. The study utilises data from over 2000 private residential property sales that occurred during 2006 in North Shore City, New Zealand. A spatial autoregressive hedonic model is developed to provide efficient estimates of the marginal effect of flood-prone risk on property prices. Results suggest that the sale price of a residential property within a flood prone area is lower than that of an equivalent property outside the flood prone area. The flood plain location discount is reduced by the release of public information regarding flood risk. [source]


Estimating Transition Probabilities from Aggregate Samples Plus Partial Transition Data

BIOMETRICS, Issue 3 2000
D. L. Hawkins
Summary. Longitudinal studies often collect only aggregate data, which allows only inefficient transition probability estimates. Barring enormous aggregate samples, improving the efficiency of transition probability estimates seems to be impossible without additional partial-transition data. This paper discusses several sampling plans that collect data of both types, as well as a methodology that combines them into efficient estimates of transition probabilities. The method handles both fixed and time-dependent categorical covariates and requires no assumptions (e.g., time homogeneity, Markov) about the population evolution. [source]