Error Terms (error + term)



Selected Abstracts


Modelling peak accelerations from earthquakes

EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 8 2006
Debbie J. Dupuis
Abstract This paper deals with the prediction of peak horizontal accelerations with emphasis on seismic risk and insurance concerns. Non-linear mixed effects models are used to analyse well-known earthquake data and the consequences of mis-specifying assumptions on the error term are quantified. A robust fit of the usual model, using recently developed robust weighted maximum likelihood estimators, is presented. Outlying data are automatically identified and subsequently investigated. A more appropriate model, accounting for the extreme-value nature of the responses, is also developed and implemented. The implications for acceleration predictions are demonstrated. Copyright © 2006 John Wiley & Sons, Ltd. [source]


Unemployment and liquidity constraints

JOURNAL OF APPLIED ECONOMETRICS, Issue 3 2007
Vassilis A. Hajivassiliou
We present a dynamic framework for the interaction between borrowing (liquidity) constraints and deviations of actual hours from desired hours, both measured by discrete-valued indicators, and estimate it as a system of dynamic binary and ordered probit models with panel data from the Panel Study of Income Dynamics. We analyze a household's propensity to be liquidity constrained by means of a dynamic binary probit model. We analyze qualitative aspects of the conditions of employment, namely whether the household head is involuntarily overemployed, voluntarily employed, or involuntarily underemployed or unemployed, by means of a dynamic ordered probit model. We focus on the possible interaction between the two types of constraints. We estimate these models jointly using maximum simulated likelihood, where we allow for individual random effects along with an autoregressive process for the general error term in each equation. A novel feature of our method is that it allows for the random effects to be correlated with regressors in a time-invariant fashion. Our results provide strong support for the basic theory of constrained behavior and the interaction between liquidity constraints and exogenous constraints on labor supply. Copyright © 2007 John Wiley & Sons, Ltd. [source]


Description of growth by simple versus complex models for Baltic Sea spring spawning herring

JOURNAL OF APPLIED ICHTHYOLOGY, Issue 1 2001
J. Gröger
The objective was to find a length-growth model to help differentiate between herring stocks (Clupea harengus L.) when their length-growth shows systematically different patterns. The most essential model restriction was that it should react robustly to variations in the underlying age range, which varies not only over time but also between the different herring stocks. Because of the limited age range, significance tests as well as confidence intervals of the model parameters should remain valid for small samples. Thus, parameter estimation should be of an analytical rather than asymptotic nature and the model should contain a minimum set of parameters. The article studies the comparative characteristics of a simple non-asymptotic two-parameter growth model (allometric length-growth function, abbreviated as ALG model) in contrast to higher-parametric and more complex growth models (logistic and von Bertalanffy growth functions, abbreviated as LGF and VBG models). An advantage of the ALG model is that it can be easily linearized and the growth coefficients can be directly derived as regression parameters. The intrinsic linearity of the ALG model makes it easy to test restrictions (normality, homoscedasticity and absence of serial correlation in the error term) and to formulate analytic confidence intervals. The ALG model features were exemplified and validated with a 1995 Baltic spring spawning herring (BSSH) data set that included a 12-year age range. The model performance was compared with that of the logistic and the von Bertalanffy length-growth curves for different age ranges and by means of various parameter estimation techniques. In all cases the ALG model performed better and all ALG model restrictions (no autocorrelation, homoscedasticity, and normality of the error term) were fulfilled. Furthermore, all findings seemed to indicate pseudo-asymptotic growth for BSSH. The proposed model was explicitly derived for herring length-growth; the results thus should not be generalized interspecifically without additional proof. [source]
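
The linearization mentioned above can be made concrete with a minimal sketch: assuming an allometric length-growth curve L(t) = a·t^b, taking logarithms gives log L = log a + b·log t, so the two growth coefficients drop out of an ordinary least-squares fit. The ages, lengths and variable names below are hypothetical, not the BSSH data.

```python
import numpy as np

# Hypothetical age (years) and mean length-at-age (cm) data, for illustration only.
age = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12], dtype=float)
length = np.array([10.5, 14.2, 16.8, 18.9, 20.6, 22.0, 23.2, 24.2, 25.1, 25.9, 26.6, 27.2])

# Allometric length-growth (ALG) model: L = a * t**b  ->  log L = log a + b * log t
X = np.column_stack([np.ones_like(age), np.log(age)])
coef, *_ = np.linalg.lstsq(X, np.log(length), rcond=None)
log_a, b = coef
print(f"a = {np.exp(log_a):.3f}, b = {b:.3f}")

# Residuals of the linearized fit; these are what the abstract's checks
# (normality, homoscedasticity, no serial correlation) would be applied to.
residuals = np.log(length) - X @ coef
```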


A Recursive Thick Frontier Approach to Estimating Production Efficiency

OXFORD BULLETIN OF ECONOMICS & STATISTICS, Issue 2 2006
Rien J. L. M. Wagenvoort
Abstract We introduce a new panel data estimation technique for production and cost functions, the recursive thick frontier approach (RTFA). RTFA has two advantages over existing econometric frontier methods. First, technical inefficiency is allowed to be dependent on the explanatory variables of the frontier model. Secondly, RTFA does not hinge on distributional assumptions on the inefficiency component of the error term. We show by means of simulation experiments that RTFA outperforms the popular stochastic frontier approach and the 'within' ordinary least squares estimator for realistic parameterizations of a productivity model. Although RTFA's formal statistical properties are unknown, we argue, based on these simulation experiments, that RTFA is a useful complement to existing methods. [source]


On the equivalence between Kalman smoothing and weak-constraint four-dimensional variational data assimilation

THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 613 2005
M. Fisher
Abstract The fixed-interval Kalman smoother produces optimal estimates of the state of a system over a time interval, given observations over the interval, together with a prior estimate of the state and its error covariance at the beginning of the interval. At the end of the interval, the Kalman smoother estimate is identical to that produced by a Kalman filter, given the same observations and the same initial state and covariance matrix. For an imperfect model, the model error term in the covariance evolution equation acts to reduce the dependence of the estimate on observations and prior states that are well separated in time. In particular, if the assimilation interval is sufficiently long, the estimate at the end of the interval is effectively independent of the state and covariance matrix specified at the beginning of the interval. In this case, the Kalman smoother provides estimates at the end of the interval that are identical to those of a Kalman filter that has been running indefinitely. For a linear model, weak-constraint four-dimensional variational data assimilation (4D-Var) is equivalent to a fixed-interval Kalman smoother. It follows that, if the assimilation interval is made sufficiently long, the 4D-Var analysis at the end of the assimilation interval will be identical to that produced by a Kalman filter that has been running indefinitely. The equivalence between weak-constraint 4D-Var and a long-running Kalman filter is demonstrated for a simple analogue of the numerical weather-prediction (NWP) problem. For this nonlinear system, 4D-Var analysis with a 10-day assimilation window produces analyses of the same quality as those of an extended Kalman filter. It is demonstrated that the current ECMWF operational 4D-Var system retains a memory of earlier observations and prior states over a period of between four and ten days, suggesting that weak-constraint 4D-Var with an analysis interval in the range of four to ten days may provide a viable algorithm with which to implement an unapproximated Kalman filter. Whereas assimilation intervals of this length are unlikely to be computationally feasible for operational NWP in the near future, the ability to run an unapproximated Kalman filter should prove invaluable for assessing the performance of cheaper, but suboptimal, alternatives. Copyright © 2005 Royal Meteorological Society [source]


Tropical Pacific Ocean model error covariances from Monte Carlo simulations

THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 613 2005
O. Alves
Abstract As a first step towards the development of an Ensemble Kalman Filter (EnKF) for ocean data assimilation in the tropical oceans, this article investigates a novel technique for explicitly perturbing the model error in Monte Carlo simulations. The perturbation technique involves perturbing the surface zonal wind stress. Estimates of the characteristics of the wind stress errors were obtained from the difference between zonal wind fields from the NCEP and ECMWF re-analyses. In order to create random zonal wind stress perturbations, an EOF analysis was performed on the intraseasonally time-filtered difference between the two re-analysis products. The first 50 EOFs were retained and random wind stress fields for each ensemble member were created by combining random amounts of each EOF. Ensemble runs were performed using a shallow-water model, with both short forecasts and long simulations. Results show covariance patterns characteristic of Kelvin wave and Rossby wave dynamics. There are interesting differences between covariances using short forecasts and those using long simulations. The use of the long simulations produced non-local covariances (e.g. negative covariances between east and west Pacific), whereas short forecasts produced covariances that were localized by the time it takes Kelvin and Rossby waves to travel over the forecast period and by the scales of spatial covariance in the wind stress errors. The ensembles of short forecasts produced covariances and cross-covariances that can be explained by the dynamics of equatorial Rossby and Kelvin waves forced by wind stress errors. The results suggest that the ensemble generation technique to explicitly represent the model error term can be used in an EnKF. Copyright © 2005 Royal Meteorological Society [source]
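
The ensemble-generation step described in this abstract can be sketched schematically: given the leading EOFs of the filtered wind-stress differences, each member's stress perturbation is a random linear combination of those EOFs, scaled by the typical amplitude of each mode. This is a rough reconstruction under assumed array shapes and scaling, not the authors' implementation; the placeholder data stand in for the NCEP-minus-ECMWF differences.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical input: time-filtered differences of zonal wind stress,
# shape (n_times, n_gridpoints); placeholder random data here.
n_times, n_grid = 200, 1000
diffs = rng.standard_normal((n_times, n_grid))

# EOF analysis via SVD of the anomaly matrix; rows of Vt are spatial EOF patterns.
anomalies = diffs - diffs.mean(axis=0)
U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
n_eofs = 50
eofs = Vt[:n_eofs]                              # (50, n_gridpoints)
amplitudes = s[:n_eofs] / np.sqrt(n_times - 1)  # typical amplitude of each mode

# One random wind-stress perturbation per ensemble member:
# random weights on each EOF, scaled by the mode amplitudes.
n_members = 30
weights = rng.standard_normal((n_members, n_eofs)) * amplitudes
perturbations = weights @ eofs                  # (n_members, n_gridpoints)
```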


Forecasting Models of Emergency Department Crowding

ACADEMIC EMERGENCY MEDICINE, Issue 4 2009
Lisa M. Schweigler MD
Abstract Objectives: The authors investigated whether models using time series methods can generate accurate short-term forecasts of emergency department (ED) bed occupancy, using traditional historical averages models as comparison. Methods: From July 2005 through June 2006, retrospective hourly ED bed occupancy values were collected from three tertiary care hospitals. Three models of ED bed occupancy were developed for each site: 1) hourly historical average, 2) seasonal autoregressive integrated moving average (ARIMA), and 3) sinusoidal with an autoregression (AR)-structured error term. Goodness of fit was compared using log likelihood and Akaike's Information Criterion (AIC). The accuracies of 4- and 12-hour forecasts were evaluated by comparing model forecasts to actual observed bed occupancy with root mean square (RMS) error. Sensitivity of prediction errors to model training time was evaluated as well. Results: The seasonal ARIMA outperformed the historical average in complexity-adjusted goodness of fit (AIC). Both AR-based models had significantly better forecast accuracy for the 4- and the 12-hour forecasts of ED bed occupancy (analysis of variance [ANOVA] p < 0.01), compared to the historical average. The AR-based models did not differ significantly from each other in their performance. Model prediction errors did not show appreciable sensitivity to model training times greater than 7 days. Conclusions: Both a sinusoidal model with AR-structured error term and a seasonal ARIMA model were found to robustly forecast ED bed occupancy 4 and 12 hours in advance at three different EDs, without needing data input beyond bed occupancy in the preceding hours. [source]
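
For readers unfamiliar with the model classes being compared, the sketch below fits a seasonal ARIMA with a 24-hour season to a synthetic hourly occupancy series and issues a 4-hour-ahead forecast using statsmodels. The model orders and the data are assumptions for illustration, not those used in the study.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)

# Synthetic hourly ED bed occupancy with a 24-hour cycle, for illustration only.
t = np.arange(24 * 28)
occupancy = 30 + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 3, t.size)

# Seasonal ARIMA with an AR(1) term and a 24-hour seasonal component (orders assumed).
model = SARIMAX(occupancy, order=(1, 0, 0), seasonal_order=(1, 0, 1, 24))
result = model.fit(disp=False)

# 4-hour-ahead forecast, analogous to the short-term forecasts compared in the abstract.
print(result.forecast(steps=4))
```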


Limited arbitrage in international wheat markets: threshold and smooth transition cointegration

AUSTRALIAN JOURNAL OF AGRICULTURAL & RESOURCE ECONOMICS, Issue 3 2001
Stefano Mainardi
The strength of the adjustment towards arbitrage equilibrium can be expected to be somehow proportional to the extent of market price deviations from equilibrium. In this article, threshold and smooth transition cointegration models are applied to quarterly wheat prices of three major world suppliers over the period 1973–99. Results based on arranged autoregressions of the error term of a static regression do not prove to be robust. Although non-linear models relying on a multivariate system approach yield partly contradictory results, the main evidence from the latter suggests a weakening, rather than an outright inaction, of the adjustment process in the inner regime. [source]


Stable signal recovery from incomplete and inaccurate measurements

COMMUNICATIONS ON PURE & APPLIED MATHEMATICS, Issue 8 2006
Emmanuel J. Candès
Suppose we wish to recover a vector x0 ∈ R^m (e.g., a digital signal or image) from incomplete and contaminated observations y = A x0 + e; A is an n × m matrix with far fewer rows than columns (n ≪ m) and e is an error term. Is it possible to recover x0 accurately based on the data y? To recover x0, we consider the solution x# to the ℓ1-regularization problem: minimize ‖x‖1 subject to ‖Ax − y‖2 ≤ ε, where ε is the size of the error term e. We show that if A obeys a uniform uncertainty principle (with unit-normed columns) and if the vector x0 is sufficiently sparse, then the solution is within the noise level: ‖x# − x0‖2 ≤ C·ε. As a first example, suppose that A is a Gaussian random matrix; then stable recovery occurs for almost all such A's provided that the number of nonzeros of x0 is of about the same order as the number of observations. As a second instance, suppose one observes few Fourier samples of x0; then stable recovery occurs for almost any set of n coefficients provided that the number of nonzeros is of the order of n/(log m)^6. In the case where the error term vanishes, the recovery is of course exact, and this work actually provides novel insights into the exact recovery phenomenon discussed in earlier papers. The methodology also explains why one can also very nearly recover approximately sparse signals. © 2006 Wiley Periodicals, Inc. [source]
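
A minimal sketch of the recovery problem stated above, using the generic convex solver CVXPY (our implementation choice, not the paper's): minimize ‖x‖1 subject to ‖Ax − y‖2 ≤ ε. Problem sizes, noise scaling and sparsity level are illustrative.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)

# Illustrative sizes: n observations of an m-dimensional sparse vector (n << m).
n, m, n_nonzero, eps = 80, 256, 8, 0.1

A = rng.standard_normal((n, m)) / np.sqrt(n)        # Gaussian measurement matrix
x0 = np.zeros(m)
x0[rng.choice(m, n_nonzero, replace=False)] = rng.standard_normal(n_nonzero)
noise = rng.standard_normal(n)
e = 0.9 * eps * noise / np.linalg.norm(noise)       # noise with norm below eps
y = A @ x0 + e

# l1 minimization with a data-fidelity constraint (basis pursuit denoising form).
x = cp.Variable(m)
problem = cp.Problem(cp.Minimize(cp.norm1(x)), [cp.norm2(A @ x - y) <= eps])
problem.solve()
print("recovery error:", np.linalg.norm(x.value - x0))
```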


Factors influencing the temporal coherence of five lakes in the English Lake District

FRESHWATER BIOLOGY, Issue 3 2000
D. G. George
1. The lakes in the Windermere catchment are all deep, glacial lakes but they differ in size, shape and general productivity. Here, we examine the extent to which year-to-year variations in the physical, chemical and biological characteristics of these lakes varied synchronously over a 30- to 40-year period. 2. Coherence was estimated by correlating time-series of the spring, summer, autumn and winter characteristics of five lakes: Esthwaite Water, Blelham Tarn, Grasmere and the North and South Basins of Windermere. Three physical, four chemical and two biological time-series were analysed and related to year-to-year variations in a number of key driving variables. 3. The highest levels of coherence were recorded for the physical and chemical variables where the average coherence was 0.81. The average coherence for the biological variables was 0.11 and there were a number of significant negative relationships. The average coherence between all possible lake pairs was 0.59 and average values ranged from 0.50 to 0.74. A graphical analysis of these results demonstrated that the coherence between individual lake pairs was influenced by the relative size of the basins as well as their trophic status. 4. A series of examples is presented to demonstrate how a small number of driving variables influenced the observed levels of coherence. These range from a simple example where the winter temperature of the lakes was correlated with the climatic index known as the North Atlantic Oscillation, to a more complex example where the summer abundance of zooplankton was correlated with wind-mixing. 5. The implications of these findings are discussed and a conceptual model developed to illustrate the principal factors influencing temporal coherence in lake systems. The model suggests that our ability to detect temporal coherence depends on the relative magnitude of three factors: (a) the amplitude of the year-to-year variations; (b) the spatial heterogeneity of the driving variables and (c) the error terms associated with any particular measurement. [source]


Modelling opportunity in health under partial observability of circumstances

HEALTH ECONOMICS, Issue 3 2010
Pedro Rosa Dias
Abstract This paper proposes a behavioural model of inequality of opportunity in health that integrates John Roemer's framework of inequality of opportunity with the Grossman model of health capital and demand for health. The model generates a recursive system of equations for health and lifestyles, which is then jointly estimated by full information maximum likelihood with freely correlated error terms. The analysis innovates by accounting for the presence of unobserved heterogeneity, therefore addressing the partial-circumstance problem, and by extending the examination of inequality of opportunity to health outcomes other than self-assessed health, such as long-standing illness, disability and mental health. The results provide evidence for the existence of third factors that simultaneously influence health outcomes and lifestyle choices, supporting the empirical relevance of the partial-circumstance problem. Accounting for these factors, the paper corroborates that the effect of parental and early circumstances on adult health disparities is paramount. However, the particular set of circumstances that affect each of the analysed health outcomes differs substantially. The results also show that differences in educational opportunities, and in social development in childhood, are crucial determinants of lifestyles in adulthood, which, in turn, shape the observed health inequalities. Copyright © 2010 John Wiley & Sons, Ltd. [source]


Health status and labour force participation: evidence from Australia

HEALTH ECONOMICS, Issue 3 2006
Lixin Cai
Abstract This paper examines the effect of health on labour force participation using the Household, Income and Labour Dynamics in Australia (HILDA) Survey. The potential endogeneity of health, especially self-assessed health, in the labour force participation equation is addressed by estimating the health equation and the labour force participation equation simultaneously. Taking into account the correlation between the error terms in the two equations, the estimation is conducted separately for males aged 15–49, males aged 50–64, females aged 15–49 and females aged 50–60. The results indicate that better health increases the probability of labour force participation for all four groups. However, the effect is larger for the older groups and for women. As for the feedback effect, it is found that labour force participation has a significant positive impact on older females' health, and a significant negative effect on younger males' health. For younger females and older males, the impact of labour force participation on health is not significant. The null-hypothesis of exogeneity of health to labour force participation is rejected for all groups. Copyright © 2005 John Wiley & Sons, Ltd. [source]


Error estimates in 2-node shear-flexible beam elements

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 7 2003
Gajbir Singh
Abstract The objective of the paper is to report an investigation of the error estimates and convergence characteristics of shear-flexible beam elements. The order and magnitude of the principal discretization error in the usage of various types of beam elements, such as (a) the 2-node standard isoparametric element, (b) the 2-node field-consistent/reduced-integration element and (c) the 2-node coupled-displacement-field element, is assessed herein. The method employs the classical order-of-error analysis that is commonly used to evaluate the discretization error of finite difference methods. The finite element equilibrium equations at any node are expressed in terms of differential equations through the use of Taylor series. These differential equations are compared with the governing equations and error terms are identified. It is shown that the discretization error in coupled-field elements is the least compared to the field-consistent and standard finite elements (based on exact integration). Copyright © 2003 John Wiley & Sons, Ltd. [source]


Moving least-square interpolants in the hybrid particle method

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 4 2005
H. Huang
Abstract The hybrid particle method (HPM) is a particle-based method for the solution of high-speed dynamic structural problems. In the current formulation of the HPM, a moving least-squares (MLS) interpolant is used to compute the derivatives of stress and velocity components. Compared with the use of the MLS interpolant at interior particles, the boundary particles require two additional treatments in order to compute the derivatives accurately. These are the rotation of the local co-ordinate system and the imposition of boundary constraints, respectively. In this paper, it is first shown that the derivatives found by the MLS interpolant based on a complete polynomial are indifferent to the orientation of the co-ordinate system. Secondly, it is shown that imposing boundary constraints is equivalent to employing ghost particles with proper values assigned at these particles. The latter can further be viewed as placing the boundary particle in the centre of a neighbourhood that is formed jointly by the original neighbouring particles and the ghost particles. The benefit of providing a symmetric or a full circle of neighbouring points is revealed by examining the error terms generated in approximating the derivatives of a Taylor polynomial by using a linear-polynomial-based MLS interpolant. Symmetric boundaries have mostly been treated by using ghost particles in various versions of the available particle methods that are based on the strong form of the conservation equations. In light of the equivalence of the respective treatments of imposing boundary constraints and adding ghost particles, an alternative treatment for symmetry boundaries is proposed that involves imposing only the symmetry boundary constraints for the HPM. Numerical results are presented to demonstrate the validity of the proposed approach for symmetric boundaries in an axisymmetric impact problem. Copyright © 2005 John Wiley & Sons, Ltd. [source]
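
To make the role of the MLS interpolant concrete, here is a one-dimensional sketch of a derivative estimate from a linear-polynomial MLS fit: a weighted least-squares fit of a local linear polynomial over neighbouring particles, whose slope approximates the derivative. The kernel, particle spacing and test function are assumptions; the HPM itself applies this to stress and velocity components in higher dimensions. Comparing a one-sided neighbourhood with a symmetric one (as obtained by adding ghost particles) illustrates the benefit discussed in the abstract.

```python
import numpy as np

def mls_derivative(x_eval, x_pts, f_pts, h):
    """Slope of a weighted linear least-squares fit around x_eval (1D MLS, linear basis)."""
    r = (x_pts - x_eval) / h
    w = np.exp(-r**2) * (np.abs(r) <= 2.0)                      # assumed kernel with cutoff
    P = np.column_stack([np.ones_like(x_pts), x_pts - x_eval])  # linear basis
    A = P.T @ (w[:, None] * P)                                  # moment matrix
    b = P.T @ (w * f_pts)
    coeffs = np.linalg.solve(A, b)
    return coeffs[1]                                            # derivative estimate

# Particles on one side only (a boundary particle) versus a symmetric neighbourhood
# as obtained by adding ghost particles on the other side.
x_one_sided = np.array([0.0, 0.1, 0.2, 0.3, 0.4])
x_symmetric = np.array([-0.2, -0.1, 0.0, 0.1, 0.2])

f = lambda x: np.sin(3 * x)            # test function; exact derivative at 0 is 3
for pts in (x_one_sided, x_symmetric):
    print(mls_derivative(0.0, pts, f(pts), h=0.1), "exact:", 3.0)
```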


Generalized forgetting functions for on-line least-squares identification of time-varying systems

INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 4 2001
R. E. Mahony
The problem of on-line identification of a parametric model for continuous-time, time-varying systems is considered via the minimization of a least-squares criterion with a forgetting function. The proposed forgetting function depends on two time-varying parameters which play crucial roles in the stability analysis of the method. The analysis leads to the consideration of a Lyapunov function for the identification algorithm that incorporates both prediction error and parameter convergence measures. A theorem is proved showing finite time convergence of the Lyapunov function to a neighbourhood of zero, the size of which depends on the evolution of the time-varying error terms in the parametric model representation. Copyright © 2001 John Wiley & Sons, Ltd. [source]
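
For orientation, the sketch below shows the familiar discrete-time special case of this idea: recursive least squares with a constant exponential forgetting factor, which discounts old data so the estimate can track time-varying parameters. The paper's generalized, time-varying forgetting function and its continuous-time Lyapunov analysis go beyond this baseline, so the code is only a reference point, with assumed signals and parameter values.

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.98):
    """One recursive least-squares update with exponential forgetting factor lam."""
    denom = lam + phi @ P @ phi
    K = P @ phi / denom                  # gain vector
    err = y - phi @ theta                # prediction error
    theta = theta + K * err
    P = (P - np.outer(K, phi @ P)) / lam
    return theta, P

rng = np.random.default_rng(3)
theta_true = np.array([1.0, -0.5])
theta, P = np.zeros(2), 100.0 * np.eye(2)

for t in range(500):
    if t == 250:                         # abrupt parameter change to show tracking
        theta_true = np.array([0.2, 0.8])
    phi = rng.standard_normal(2)         # regressor vector
    y = phi @ theta_true + 0.05 * rng.standard_normal()
    theta, P = rls_step(theta, P, phi, y)

print(theta)
```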


Estimation Optimality of Corrected AIC and Modified Cp in Linear Regression

INTERNATIONAL STATISTICAL REVIEW, Issue 2 2006
Simon L. Davies
Summary Model selection criteria often arise by constructing unbiased or approximately unbiased estimators of measures known as expected overall discrepancies (Linhart & Zucchini, 1986, p. 19). Such measures quantify the disparity between the true model (i.e., the model which generated the observed data) and a fitted candidate model. For linear regression with normally distributed error terms, the "corrected" Akaike information criterion and the "modified" conceptual predictive statistic have been proposed as exactly unbiased estimators of their respective target discrepancies. We expand on previous work to additionally show that these criteria achieve minimum variance within the class of unbiased estimators. [source]
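
In the linear-regression setting, the "corrected" AIC referred to here is usually taken to be the small-sample correction AICc = AIC + 2k(k+1)/(n − k − 1); assuming that form, a minimal sketch of how it would be computed from a fitted model's residual sum of squares is given below (the numbers are hypothetical).

```python
import numpy as np

def aic_corrected(rss, n, k):
    """AICc for a Gaussian linear model: n observations, k estimated parameters
    (regression coefficients plus the error variance), residual sum of squares rss.
    Assumes the standard small-sample correction AICc = AIC + 2k(k+1)/(n-k-1)."""
    aic = n * np.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)

# Hypothetical comparison of two candidate models fitted to the same data.
print(aic_corrected(rss=12.4, n=30, k=3))
print(aic_corrected(rss=11.9, n=30, k=5))
```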


The estimation of utility-consistent labor supply models by means of simulated scores

JOURNAL OF APPLIED ECONOMETRICS, Issue 4 2008
Hans G. Bloemen
We consider a utility-consistent static labor supply model with flexible preferences and a nonlinear and possibly non-convex budget set. Stochastic error terms are introduced to represent optimization and reporting errors, stochastic preferences, and heterogeneity in wages. Coherency conditions on the parameters and the support of error distributions are imposed for all observations. The complexity of the model makes it impossible to write down the probability of participation. Hence we use simulation techniques in the estimation. We compare our approach with various simpler alternatives proposed in the literature. Both in Monte Carlo experiments and for real data the various estimation methods yield very different results. Copyright © 2008 John Wiley & Sons, Ltd. [source]


The Monday effect in U.S. cotton prices

AGRIBUSINESS : AN INTERNATIONAL JOURNAL, Issue 3 2009
Stephen P. Keef
There is an extensive literature on the Monday effect with stock indices. It is regularly reported that the return on Monday is correlated with the return on the prior Friday. The bad Monday effect occurs when the return on the preceding Friday is negative. Cotton is an economically important commodity in the United States and around the world. This investigation into the daily price seasonality in the U.S. cotton market is based on spot prices from Memphis and futures prices from the New York Cotton Exchange. The regression methodologies employ adjustments to control for undesirable properties in the error terms. There are three main conclusions. First, the close-to-close changes in the futures price and in the spot price exhibit a negative Monday effect. Second, a negative bad Monday effect is observed on Mondays using close-to-close prices. The effect is present during the weekend nontrading period and continues into trading on Mondays. Third, the negative bad Monday effect does not appear to weaken in close-to-close prices and during the weekend over the period examined (1987–2003). However, there is weak evidence of a temporal decline during trading on Mondays. [EconLit Citations: G12, G14, Q14]. © 2009 Wiley Periodicals, Inc. [source]
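
A schematic version of the kind of day-of-week regression used in this literature (not the authors' exact specification): regress daily close-to-close returns on a Monday dummy and a "bad Monday" interaction flagging Mondays that follow a negative prior-day return, with HAC standard errors as one common way of controlling for undesirable error-term properties. The data and lag length are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)

# Synthetic daily close-to-close returns on business days, for illustration only.
dates = pd.bdate_range("1987-01-01", periods=1500)
df = pd.DataFrame({"ret": rng.normal(0, 0.01, len(dates))}, index=dates)

df["monday"] = (df.index.dayofweek == 0).astype(float)
df["prior_ret"] = df["ret"].shift(1)                       # previous trading day (Friday for a Monday)
df["bad_monday"] = df["monday"] * (df["prior_ret"] < 0)    # Monday after a negative Friday
df = df.dropna()

# OLS with heteroskedasticity- and autocorrelation-consistent (HAC) standard errors.
X = sm.add_constant(df[["monday", "bad_monday"]])
model = sm.OLS(df["ret"], X).fit(cov_type="HAC", cov_kwds={"maxlags": 5})
print(model.summary())
```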


Application of Richardson extrapolation to the numerical solution of partial differential equations

NUMERICAL METHODS FOR PARTIAL DIFFERENTIAL EQUATIONS, Issue 4 2009
Clarence Burg
Abstract Richardson extrapolation is a methodology for improving the order of accuracy of numerical solutions that involve the use of a discretization size h. By combining the results from numerical solutions using a sequence of related discretization sizes, the leading order error terms can be methodically removed, resulting in higher order accurate results. Richardson extrapolation is commonly used within the numerical approximation of partial differential equations to improve certain predictive quantities such as the drag or lift of an airfoil, once these quantities are calculated on a sequence of meshes, but it is not widely used to determine the numerical solution of partial differential equations. Within this article, Richardson extrapolation is applied directly to the solution algorithm used within existing numerical solvers of partial differential equations to increase the order of accuracy of the numerical result without referring to the details of the methodology or its implementation within the numerical code. Only the order of accuracy of the existing solver and certain interpolations required to pass information between the mesh levels are needed to improve the order of accuracy and the overall solution accuracy. Using the proposed methodology, Richardson extrapolation is used to increase the order of accuracy of numerical solutions of the linear heat and wave equations and of the nonlinear St. Venant equations in one-dimension. © 2008 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq, 2009 [source]
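
The combination step itself is simple to state: if a method is p-th order accurate, results computed with spacings h and h/2 can be combined to cancel the leading error term. A minimal sketch for a generic scalar quantity, illustrated with a second-order central-difference derivative:

```python
import math

def richardson(q_h, q_h2, p):
    """Combine a p-th order accurate quantity computed at spacing h (q_h) and h/2 (q_h2)
    to cancel the leading O(h^p) error term, yielding a higher-order estimate."""
    return (2**p * q_h2 - q_h) / (2**p - 1)

# Example: second-order central-difference estimates of d/dx sin(x) at x = 1.
f, x = math.sin, 1.0
def central(h):
    return (f(x + h) - f(x - h)) / (2 * h)

q_h, q_h2 = central(0.1), central(0.05)
print(q_h2, richardson(q_h, q_h2, p=2), math.cos(1.0))  # raw, extrapolated, exact
```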


On the solution of an initial-boundary value problem that combines Neumann and integral condition for the wave equation

NUMERICAL METHODS FOR PARTIAL DIFFERENTIAL EQUATIONS, Issue 1 2005
Mehdi Dehghan
Abstract Numerical solution of hyperbolic partial differential equations with an integral condition continues to be a major research area with widespread applications in modern physics and technology. Many physical phenomena are modeled by nonclassical hyperbolic boundary value problems with nonlocal boundary conditions. In place of the classical specification of boundary data, we impose a nonlocal boundary condition. Partial differential equations with nonlocal boundary specifications have received much attention in the last 20 years. However, most of the articles were directed to the second-order parabolic equation, particularly to the heat conduction equation. We deal here with a new type of nonlocal boundary value problem: the solution of hyperbolic partial differential equations with nonlocal boundary specifications. These nonlocal conditions arise mainly when the data on the boundary cannot be measured directly. Several finite difference methods have been proposed for the numerical solution of this one-dimensional nonclassic boundary value problem. These computational techniques are compared using the largest error terms in the resulting modified equivalent partial differential equation. Numerical results supporting theoretical expectations are given. Restrictions on using higher-order computational techniques for the studied problem are discussed. Suitable references on various physical applications and the theoretical aspects of solutions are introduced at the end of this article. © 2004 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq, 2005 [source]


Semiparametric Estimation of a Duration Model

OXFORD BULLETIN OF ECONOMICS & STATISTICS, Issue 5 2001
A. Alonso Anton
Within the framework of the proportional hazard model proposed in Cox (1972), Han and Hausman (1990) consider the logarithm of the integrated baseline hazard function as constant in each time period. We, however, propose an alternative semiparametric estimator of the parameters of the covariate part. The estimator is semiparametric in that no prespecified functional form for the error terms (or a certain convolution) is needed. This estimator, proposed in Lewbel (2000) in another context, has at least four advantages: the distribution of the latent-variable error is unknown and may be related to the regressors; it takes censored observations into account; it allows for heterogeneity of unknown form; and it is quite easy to implement, since the estimator does not require numerical searches. Using the Spanish Labour Force Survey, we compare empirically the results of estimating several alternative models, principally the estimator proposed in Han and Hausman (1990) and our semiparametric estimator. [source]


A unified approach to estimation of nonlinear mixed effects and Berkson measurement error models

THE CANADIAN JOURNAL OF STATISTICS, Issue 2 2007
Liqun Wang
Abstract Mixed effects models and Berkson measurement error models are widely used. They share features which the author uses to develop a unified estimation framework. He deals with models in which the random effects (or measurement errors) have a general parametric distribution, whereas the random regression coefficients (or unobserved predictor variables) and error terms have nonparametric distributions. He proposes a second-order least squares estimator and a simulation-based estimator based on the first two moments of the conditional response variable given the observed covariates. He shows that both estimators are consistent and asymptotically normally distributed under fairly general conditions. The author also reports Monte Carlo simulation studies showing that the proposed estimators perform satisfactorily for relatively small sample sizes. Compared to the likelihood approach, the proposed methods are computationally feasible and do not rely on the normality assumption for random effects or other variables in the model. [source]


Selection Bias and Continuous-Time Duration Models: Consequences and a Proposed Solution

AMERICAN JOURNAL OF POLITICAL SCIENCE, Issue 1 2006
Frederick J. Boehmke
This article analyzes the consequences of nonrandom sample selection for continuous-time duration analyses and develops a new estimator to correct for it when necessary. We conduct a series of Monte Carlo analyses that estimate common duration models as well as our proposed duration model with selection. These simulations show that ignoring sample selection issues can lead to biased parameter estimates, including the appearance of (nonexistent) duration dependence. In addition, our proposed estimator is found to be superior in root mean-square error terms when nontrivial amounts of selection are present. Finally, we provide an empirical application of our method by studying whether self-selectivity is a problem for studies of leaders' survival during and following militarized conflicts. [source]


Bootstrap Methods in Econometrics*

THE ECONOMIC RECORD, Issue 2006
JAMES G. MacKINNON
There are many bootstrap methods that can be used for econometric analysis. In certain circumstances, such as regression models with independent and identically distributed error terms, appropriately chosen bootstrap methods generally work very well. However, there are many other cases, such as regression models with dependent errors, in which bootstrap methods do not always work well. This paper discusses a large number of bootstrap methods that can be useful in econometrics. Applications to hypothesis testing are emphasized, and simulation results are presented for a few illustrative cases. [source]
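
As an illustration of the benign case mentioned here (a regression model with independent and identically distributed error terms), a minimal residual-bootstrap sketch for the standard error of an OLS slope follows; the data are synthetic and the number of bootstrap replications is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic regression with i.i.d. errors.
n = 100
x = rng.uniform(0, 1, n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, n)

X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat

# Residual bootstrap: resample residuals, rebuild y, refit, repeat.
B = 999
slopes = np.empty(B)
for b in range(B):
    y_star = X @ beta_hat + rng.choice(resid, size=n, replace=True)
    slopes[b] = np.linalg.lstsq(X, y_star, rcond=None)[0][1]

print("bootstrap SE of slope:", slopes.std(ddof=1))
```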


Wage differentials and state-private sector employment choice in Yugoslavia*

THE ECONOMICS OF TRANSITION, Issue 3 2003
Michael M. Lokshin
Abstract In this study we use the newly available Yugoslavian Labor Force Survey data to investigate wage differentials and employment decisions in the state and private sectors in Yugoslavia. For the analysis we use three empirical models that rely on different statistical assumptions. We extend the standard switching regression model to allow non-normality in the joint distribution of the error terms. After correcting for the sector selection bias and controlling for workers' characteristics we find a private sector wage advantage. The wage premium is largest for workers with low education levels and declining for workers with higher educational levels. Given the regulatory and tax policies that pushed the private sector into the informal sphere of the economy during the period covered by our data, we argue that the state/private wage gap is likely to grow in the future. This will make it increasingly difficult for the state sector to attract and retain highly skilled employees. [source]


Bias and backwardation in natural gas futures prices

THE JOURNAL OF FUTURES MARKETS, Issue 3 2005
Nahid Movassagh
This paper tests the fair-game efficient-markets hypothesis for natural gas futures prices over the period 1990 through 2003. We find evidence consistent with the Keynesian notion of normal backwardation. Regressing the future spot prices on the lagged futures prices and using the Stock-Watson (1993) procedure to correct for the correlation between the error terms and the futures prices, we find that natural gas futures are biased predictors of the corresponding future spot prices for contracts ranging from 3 to 12 months. These results cast serious doubt on the commonly held view that natural gas futures sell at a premium over the expected future spot prices, and that this bias is due to the systematic risk of the futures price movements represented by a negative "beta." We also find evidence for the Samuelson effect. © 2005 Wiley Periodicals, Inc. Jrl Fut Mark 25:281–308, 2005 [source]


Robust linear mixed models using the skew t distribution with application to schizophrenia data

BIOMETRICAL JOURNAL, Issue 4 2010
Hsiu J. Ho
Abstract We consider an extension of linear mixed models by assuming a multivariate skew t distribution for the random effects and a multivariate t distribution for the error terms. The proposed model provides flexibility in capturing the effects of skewness and heavy tails simultaneously among continuous longitudinal data. We present an efficient alternating expectation-conditional maximization (AECM) algorithm for the computation of maximum likelihood estimates of parameters on the basis of two convenient hierarchical formulations. The techniques for the prediction of random effects and intermittent missing values under this model are also investigated. Our methodologies are illustrated through an application to schizophrenia data. [source]