Estimation Procedure

Kinds of Estimation Procedure

  • likelihood estimation procedure
  • maximum likelihood estimation procedure
  • parameter estimation procedure


Selected Abstracts


    Development of an Estimation Procedure for an Activity-Based Travel Demand Model

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 7 2008
    W. Recker
    The method uses a genetic algorithm to estimate coefficient values of the utility function, based on a particular multidimensional sequence alignment method to deal with the nominal, discrete attributes of the activity/travel pattern (e.g., which household member performs which activity, which vehicle is used, sequencing of activities), and a time sequence alignment method to handle temporal attributes of the activity pattern (e.g., starting and ending time of each activity and/or travel). The estimation procedure is tested on data drawn from a well-known activity/travel survey. [source]
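
    A minimal sketch of the kind of genetic-algorithm coefficient search the abstract describes, fitted here to a toy least-squares fitness. The paper's actual fitness is built from multidimensional sequence alignment between observed and predicted activity/travel patterns; everything below (population size, crossover scheme, the quadratic objective) is an illustrative assumption.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def fitness(coeffs, X, y):
        # Placeholder objective; the paper scores candidate utility
        # coefficients via sequence alignment of activity patterns.
        return -np.mean((X @ coeffs - y) ** 2)

    def ga_estimate(X, y, n_coeff, pop_size=50, n_gen=200, mut_sd=0.1):
        pop = rng.normal(size=(pop_size, n_coeff))
        for _ in range(n_gen):
            scores = np.array([fitness(c, X, y) for c in pop])
            parents = pop[np.argsort(scores)[-pop_size // 2:]]          # selection
            mates = parents[rng.permutation(len(parents))]
            alpha = rng.uniform(size=(len(parents), 1))
            children = alpha * parents + (1 - alpha) * mates            # crossover
            children += rng.normal(scale=mut_sd, size=children.shape)  # mutation
            pop = np.vstack([parents, children])
        return max(pop, key=lambda c: fitness(c, X, y))

    X = rng.normal(size=(200, 3))
    y = X @ np.array([1.0, -0.5, 2.0]) + rng.normal(scale=0.1, size=200)
    print(ga_estimate(X, y, 3))  # should approach [1.0, -0.5, 2.0]
    ```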


    Significance of Modeling Error in Structural Parameter Estimation

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 1 2001
    Masoud Sanayei
    Structural health monitoring systems rely on algorithms to detect potential changes in structural parameters that may be indicative of damage. Parameter-estimation algorithms seek to identify changes in structural parameters by adjusting parameters of an a priori finite-element model of a structure to reconcile its response with a set of measured test data. Modeling error, represented as uncertainty in the parameters of a finite-element model of the structure, curtails the capability of parameter estimation to capture the physical behavior of the structure. The performance of four error functions, two stiffness-based and two flexibility-based, is compared in the presence of modeling error in terms of the propagation rate of the modeling error and the quality of the final parameter estimates. Three different types of parameters are used in the parameter estimation procedure: (1) unknown parameters that are to be estimated, (2) known parameters assumed to be accurate, and (3) uncertain parameters that manifest the modeling error and are assumed known and not to be estimated. The significance of modeling error is investigated with respect to excitation and measurement type and locations, the type of error function, location of the uncertain parameter, and the selection of unknown parameters to be estimated. It is illustrated in two examples that the stiffness-based error functions perform significantly better than the corresponding flexibility-based error functions in the presence of modeling error. Additionally, the topology of the structure, excitation and measurement type and locations, and location of the uncertain parameters with respect to the unknown parameters can have a significant impact on the quality of the parameter estimates. Insight into the significance of modeling error and its potential impact on the resulting parameter estimates is presented through analytical and numerical examples using static and modal data. [source]


    Optimal Nonparametric Estimation of First-price Auctions

    ECONOMETRICA, Issue 3 2000
    Emmanuel Guerre
    This paper proposes a general approach and a computationally convenient estimation procedure for the structural analysis of auction data. Considering first-price sealed-bid auction models within the independent private value paradigm, we show that the underlying distribution of bidders' private values is identified from observed bids and the number of actual bidders without any parametric assumptions. Using the theory of minimax, we establish the best rate of uniform convergence at which the latent density of private values can be estimated nonparametrically from available data. We then propose a two-step kernel-based estimator that converges at the optimal rate. [source]
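
    A compact sketch of the two-step pseudo-value construction the abstract describes: first recover each bidder's implied private value from the empirical bid distribution, then kernel-smooth the recovered values. The bandwidths here are simple rules of thumb, not the optimal choices the paper derives, and the uniform-value example is invented for illustration.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    def gpv_pseudo_values(bids, n_bidders):
        """Step 1: v_i = b_i + G(b_i) / ((I-1) * g(b_i))."""
        bids = np.asarray(bids)
        n = len(bids)
        G = np.searchsorted(np.sort(bids), bids, side="right") / n  # empirical CDF
        g = gaussian_kde(bids)(bids)                                # bid density
        return bids + G / ((n_bidders - 1) * g)

    rng = np.random.default_rng(1)
    I = 5
    values = rng.uniform(0, 1, size=1000)
    bids = values * (I - 1) / I              # equilibrium bids for uniform values
    v_hat = gpv_pseudo_values(bids, I)
    f_hat = gaussian_kde(v_hat)              # step 2: density of private values
    print(f_hat(np.array([0.25, 0.5, 0.75])))  # should be near 1 (uniform density)
    ```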


    Recollective experience in alcohol dependence: a laboratory study

    ADDICTION, Issue 12 2008
    Patrizia Thoma
    ABSTRACT Aims Alcohol dependence has been linked to dysfunction of fronto-temporo-striatal circuits which mediate memory and executive function. The present study aimed to explore the specificity of recognition memory changes in alcohol dependence. Design, setting and participants Twenty hospitalized alcohol-dependent detoxified patients and 20 healthy control subjects completed a verbal list discrimination task. Measurements Hits and false alarm rates were analysed. Additionally, both the dual process signal detection model (DPSD) and the process dissociation procedure (PDP) were used to derive estimates of the contribution of recollection and familiarity processes to the recognition memory performance in patients and controls. Findings Alcohol-dependent patients showed intact hit rates, but increased false alarm rates and an impaired ability to remember the learning context. Both the DPSD model and PDP estimates yielded significantly reduced recollection estimates in the alcohol-dependent patients compared to control subjects. Whether or not familiarity was impaired depended on the sensitivity of the estimation procedure. Conclusion Taken together, the result pattern suggests a significant impairment in recollection and mild familiarity changes in recently detoxified, predominantly male, alcohol-dependent subjects. [source]
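
    The process-dissociation estimates mentioned in the abstract reduce to two short formulas: recollection is the difference between inclusion and exclusion performance, and familiarity is the exclusion rate rescaled by the recollection complement. A minimal illustration with made-up numbers:

    ```python
    def pdp_estimates(p_inclusion, p_exclusion):
        """Standard PDP identities: R = I - E, F = E / (1 - R)."""
        R = p_inclusion - p_exclusion
        F = p_exclusion / (1.0 - R) if R < 1.0 else float("nan")
        return R, F

    # Hypothetical task proportions, not the study's data.
    R, F = pdp_estimates(p_inclusion=0.80, p_exclusion=0.30)
    print(f"recollection={R:.2f}, familiarity={F:.2f}")
    ```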


    Predicting intra-urban variation in air pollution concentrations with complex spatio-temporal dependencies

    ENVIRONMETRICS, Issue 6 2010
    Adam A. Szpiro
    Abstract We describe a methodology for assigning individual estimates of long-term average air pollution concentrations that accounts for a complex spatio-temporal correlation structure and can accommodate spatio-temporally misaligned observations. This methodology has been developed as part of the Multi-Ethnic Study of Atherosclerosis and Air Pollution (MESA Air), a prospective cohort study funded by the US EPA to investigate the relationship between chronic exposure to air pollution and cardiovascular disease. Our hierarchical model decomposes the space-time field into a "mean" that includes dependence on covariates and spatially varying seasonal and long-term trends and a "residual" that accounts for spatially correlated deviations from the mean model. The model accommodates complex spatio-temporal patterns by characterizing the temporal trend at each location as a linear combination of empirically derived temporal basis functions, and embedding the spatial fields of coefficients for the basis functions in separate linear regression models with spatially correlated residuals (universal kriging). This approach allows us to implement a scalable single-stage estimation procedure that easily accommodates a significant number of missing observations at some monitoring locations. We apply the model to predict long-term average concentrations of oxides of nitrogen (NOx) from 2005 to 2007 in the Los Angeles area, based on data from 18 EPA Air Quality System regulatory monitors. The cross-validated R2 is 0.67. The MESA Air study is also collecting additional concentration data as part of a supplementary monitoring campaign. We describe the sampling plan and demonstrate in a simulation study that the additional data will contribute to improved predictions of long-term average concentrations. Copyright © 2009 John Wiley & Sons, Ltd. [source]
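
    A schematic version of the abstract's decomposition: derive temporal basis functions empirically from the monitor data (here via SVD), then regress the site-specific basis coefficients on covariates. The spatially correlated residual (universal kriging) step is omitted for brevity, and all data below are simulated stand-ins, not the MESA Air implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_sites, n_times = 18, 156                  # e.g., biweekly data, site x time
    data = rng.normal(size=(n_sites, n_times))  # stand-in for log NOx levels
    covars = rng.normal(size=(n_sites, 3))      # e.g., GIS-based covariates

    # Step 1: empirical temporal basis functions from the site x time matrix.
    U, s, Vt = np.linalg.svd(data - data.mean(axis=0), full_matrices=False)
    basis = Vt[:2]                              # leading temporal trends

    # Step 2: site-specific coefficients, regressed on spatial covariates.
    coefs = data @ basis.T                      # (n_sites, 2) loadings
    X = np.column_stack([np.ones(n_sites), covars])
    beta, *_ = np.linalg.lstsq(X, coefs, rcond=None)
    print(beta.shape)  # one column of regression coefficients per basis function
    ```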


    The inclusion of exogenous variables in functional autoregressive ozone forecasting

    ENVIRONMETRICS, Issue 7 2002
    Julien Damon
    Abstract In this article, we propose a new technique for ozone forecasting. The approach is functional, that is, we consider stochastic processes with values in function spaces. We make use of the essential characteristic of this type of phenomenon by taking into account, both theoretically and practically, the continuous-time evolution of pollution. One main methodological enhancement of this article is the incorporation of exogenous variables (wind speed and temperature) in those models. The application is carried out on a six-year data set of hourly ozone concentrations and meteorological measurements from Béthune (France). The study examines the summer periods because of the higher values observed. We explain the non-parametric estimation procedure for autoregressive Hilbertian models with or without exogenous variables (considering two alternative versions in this case) as well as for the functional kernel model. The comparison of all the latter models is based on up to 24-hour-ahead predictions of hourly ozone concentrations. We analyzed daily forecast curves using two kinds of criteria: functional ones, and aggregated ones focusing on the daily maximum. It appears that autoregressive Hilbertian models with exogenous variables show the best predictive power. Copyright © 2002 John Wiley & Sons, Ltd. [source]


    Specific aspects on crack advance during J-test method for structural materials at cryogenic temperatures

    FATIGUE & FRACTURE OF ENGINEERING MATERIALS AND STRUCTURES, Issue 2 2006
    K. WEISS
    ABSTRACT Cryogenic elastic-plastic J-integral investigations on metallic materials often show negative crack extension values with respect to the J-R resistance curve. According to the present ASTM standard, the use of the unloading compliance technique relies on the estimation procedure for the crack lengths during the unloading sequences of the test. The current standard, however, does not give any specific procedure for treating such negative data. The procedure applied to date shifts the negative crack extension values either to the onset of the blunting line or to the offset of the resistance curve. The present paper presents a solution to the negative crack length problem on the basis of a mechanical evaluation procedure of the unloading slopes. The progress achieved using this evaluation technique is demonstrated on different materials such as cryogenic high-toughness stainless steels, low-carbon ferritic steel and aluminum alloys from the 7000 and 5000 series. In addition, this work deals with the crack tunnelling phenomenon, observed for high-toughness materials, and shows the reduction of this crack extension appearance by using the electro discharge machining (EDM) side groove technique. The differences between EDM-processed side grooves and standard V-notch machining have been investigated within these test series. [source]


    Numerical investigation on J-integral testing of heterogeneous fracture toughness testing specimens: Part I – weld metal cracks

    FATIGUE & FRACTURE OF ENGINEERING MATERIALS AND STRUCTURES, Issue 8 2003
    Y.-J. KIM
    ABSTRACT Based on extensive two-dimensional (2D) finite element (FE) analyses, the present work provides the plastic η factor solutions for fracture toughness J-integral testing of heterogeneous specimens with weldments. Solutions cover practically interesting ranges of strength mismatch and relative weld width, and are given for three typical geometries for toughness testing: a middle cracked tension (M(T)) specimen, a single edge cracked bend (SE(B)) specimen and a compact tension (C(T)) specimen. For mismatched M(T) specimens, both plane strain and plane stress conditions are considered, whereas for SE(B) and C(T) specimens, only the plane strain condition is considered. For all cases, only deep cracks are considered, and an idealized butt weld configuration is considered, where the weld metal strip has a rectangular cross section. Based on the present solutions for the strength mismatch effect on plastic η factors, a window is provided within which the homogeneous J estimation procedure can be used for weldment toughness testing. The effect of the weld groove configuration on the plastic η factor is briefly discussed, concluding that further systematic analysis is needed to provide guidance for practical toughness testing. [source]
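
    For context, the plastic η factor enters the standard J estimation formula J_pl = η · A_pl / (B · b0), where A_pl is the plastic area under the load-displacement record, B the specimen thickness and b0 the remaining ligament. A toy calculation under assumed values (not the mismatch-corrected factors derived in the paper):

    ```python
    def j_plastic(eta, a_pl, thickness_b, ligament_b0):
        """Plastic J component: J_pl = eta * A_pl / (B * b0)."""
        return eta * a_pl / (thickness_b * ligament_b0)

    # Hypothetical SE(B) numbers: eta ~ 1.9, plastic area 12 J,
    # thickness 25 mm, remaining ligament 25 mm.
    print(j_plastic(eta=1.9, a_pl=12.0, thickness_b=0.025,
                    ligament_b0=0.025))  # J per square metre
    ```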


    A Numerical Model and Spreadsheet Interface for Pumping Test Analysis

    GROUND WATER, Issue 4 2001
    Gary S. Johnson
    Curve-matching techniques have been the standard method of aquifer test analysis for several decades. A variety of techniques provide the capability of evaluating test data from confined, unconfined, leaky aquitard, and other conditions. Each technique, however, is accompanied by a set of assumptions, and evaluation of a combination of conditions can be complicated or impossible due to intractable mathematics or nonuniqueness of the solution. Numerical modeling of pumping tests provides two major advantages: (1) the user can choose which properties to calibrate and what assumptions to make; and (2) in the calibration process the user is gaining insights into the conceptual model of the flow system and uncertainties in the analysis. Routine numerical modeling of pumping tests is now practical due to computer hardware and software advances of the last decade. The RADFLOW model and spreadsheet interface presented in this paper provide an easy-to-use numerical tool for estimating aquifer properties from pumping test data. Layered conceptual models and their properties are evaluated in a trial-and-error estimation procedure. The RADFLOW model can treat most combinations of confined, unconfined, leaky aquitard, partial penetration, and borehole storage conditions. RADFLOW is especially useful in stratified aquifer systems with no identifiable lateral boundaries. It has been verified against several analytical solutions and has been applied in the Snake River Plain Aquifer to develop and test conceptual models and provide estimates of aquifer properties. Because the model assumes axially symmetrical flow, it is limited to representing multiple aquifer layers that are laterally continuous. [source]
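
    Trial-and-error calibration in miniature: a forward model of drawdown is fitted to observed data by least squares over the aquifer properties. The sketch below uses the analytical Theis solution as the forward model, which is far simpler than RADFLOW's layered numerical model; pumping rate, radius and the synthetic data are all assumptions.

    ```python
    import numpy as np
    from scipy.special import exp1
    from scipy.optimize import curve_fit

    def theis_drawdown(t, T, S, Q=0.01, r=30.0):
        """Theis solution: s = Q/(4 pi T) * W(u), u = r^2 S / (4 T t)."""
        u = r**2 * S / (4.0 * T * t)
        return Q / (4.0 * np.pi * T) * exp1(u)

    t_obs = np.linspace(60, 86400, 50)                     # seconds since pumping began
    s_obs = theis_drawdown(t_obs, T=1e-3, S=1e-4)          # synthetic "observed" data
    s_obs += np.random.default_rng(3).normal(0, 1e-3, 50)  # measurement noise

    # Calibrate transmissivity T and storativity S to the drawdown record.
    (T_hat, S_hat), _ = curve_fit(theis_drawdown, t_obs, s_obs,
                                  p0=(5e-4, 5e-5), bounds=(1e-8, 1.0))
    print(T_hat, S_hat)
    ```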


    Identification of a class of non-linear parametrically varying models

    INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 1 2003
    F. Previdi
    The aim of this paper is to propose a novel class of non-linear, possibly parameter-varying models suitable for system identification purposes. These models are given in the form of a linear fractional transformation (LFT) where the 'forward' part is represented by a conventional linear regression and the 'feedback' part is given by a non-linear dynamic map parameterized by a neural network (NN) which can take into account scheduling variables available for measurement. For this specific model structure a parameter estimation procedure has been set up, which turns out to be particularly efficient from the computational point of view. Also, it is possible to establish a connection between this model class and the well known class of local model networks (LMNs): this aspect is investigated in the paper. Finally, we have applied the proposed identification procedure to the problem of determining accurate non-linear models for knee joint dynamics in paraplegic patients, within the framework of a functional electrical stimulation (FES) rehabilitation engineering project. Copyright © 2002 John Wiley & Sons, Ltd. [source]


    Preferences and the Democratic Peace

    INTERNATIONAL STUDIES QUARTERLY, Issue 2 2000
    Erik Gartzke
    A debate exists over whether (and to what degree) the democratic peace is explained by joint democracy or by a lack of motives for conflict between states that happen to be democratic. Gartzke (1998) applies expected utility theory to the democratic peace and shows that an index of states' preference similarity based on United Nations General Assembly roll-call votes (affinity) accounts for much of the lack of militarized interstate disputes (MIDs) between democracies. Oneal and Russett (1997b, 1998, 1999) respond by arguing that UN voting is itself a function of regime type, that is, that democracy 'causes' affinity. Oneal and Russett seek to demonstrate their thesis by regressing affinity on democracy and other variables from a standard model of the democratic peace. I replicate results reported by Oneal and Russett and then extend the analysis in several ways. I find that the residuals from Oneal and Russett's regression of affinity remain highly significant as a predictor of the absence of MIDs. Further, the significance of democracy is shown to be fragile, varying with variable construction, model specification, and the choice of estimation procedure. [source]
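
    The residual test described in the abstract has a simple two-step shape: regress affinity on democracy (and controls), keep the residuals, and ask whether they still predict dispute onset. A sketch with simulated data; all variable names and coefficients are illustrative, not the article's dataset.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n = 2000
    democracy = rng.normal(size=n)                   # joint-democracy score
    controls = rng.normal(size=(n, 2))               # stand-in control variables
    affinity = 0.4 * democracy + rng.normal(size=n)  # UN-vote similarity index

    # Step 1: regress affinity on democracy and controls; keep the residuals.
    X1 = sm.add_constant(np.column_stack([democracy, controls]))
    resid = sm.OLS(affinity, X1).fit().resid

    # Step 2: do the residuals still predict MID onset alongside democracy?
    latent = -0.8 * resid - 0.3 * democracy + rng.logistic(size=n)
    mid_onset = (latent > 0).astype(int)
    X2 = sm.add_constant(np.column_stack([democracy, resid]))
    print(sm.Logit(mid_onset, X2).fit(disp=0).params)
    ```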


    Constrained least squares methods for estimating reaction rate constants from spectroscopic data

    JOURNAL OF CHEMOMETRICS, Issue 1 2002
    Sabina Bijlsma
    Abstract Model errors, experimental errors and instrumental noise influence the accuracy of reaction rate constant estimates obtained from spectral data recorded in time during a chemical reaction. In order to improve the accuracy, which can be divided into the precision and bias of reaction rate constant estimates, constraints can be used within the estimation procedure. The impact of different constraints on the accuracy of reaction rate constant estimates has been investigated using classical curve resolution (CCR). Different types of constraints can be used in CCR. For example, if pure spectra of reacting absorbing species are known in advance, this knowledge can be used explicitly. Also, the fact that pure spectra of reacting absorbing species are non-negative is a constraint that can be used in CCR. Experimental data have been obtained from UV-vis spectra taken in time of a biochemical reaction. From the experimental data, reaction rate constants and pure spectra were estimated with and without implementation of constraints in CCR. Because only the precision of reaction rate constant estimates could be investigated using the experimental data, simulations were set up that were similar to the experimental data in order to additionally investigate the bias of reaction rate constant estimates. From the results of the simulated data it is concluded that the use of constraints does not self-evidently result in an improvement in the accuracy of rate constant estimates. Guidelines for using constraints are given. Copyright © 2002 John Wiley & Sons, Ltd. [source]
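
    A stripped-down classical curve resolution step for a first-order reaction A → B: concentration profiles follow from a candidate rate constant k, and the pure spectra are solved by non-negativity-constrained least squares, the kind of constraint the abstract evaluates. The reaction, spectra and noise level are simulated assumptions.

    ```python
    import numpy as np
    from scipy.optimize import nnls, minimize_scalar

    t = np.linspace(0, 10, 40)            # time points
    wl = np.arange(60)                    # wavelength channels
    k_true = 0.45
    C_true = np.column_stack([np.exp(-k_true * t), 1 - np.exp(-k_true * t)])
    S_true = np.column_stack([np.exp(-0.5 * ((wl - 20) / 5) ** 2),
                              np.exp(-0.5 * ((wl - 40) / 5) ** 2)])
    D = C_true @ S_true.T + np.random.default_rng(5).normal(0, 0.01, (40, 60))

    def residual(k):
        # Concentrations implied by k; spectra from non-negative least squares.
        C = np.column_stack([np.exp(-k * t), 1 - np.exp(-k * t)])
        S = np.array([nnls(C, D[:, j])[0] for j in range(D.shape[1])])
        return np.linalg.norm(D - C @ S.T)

    k_hat = minimize_scalar(residual, bounds=(0.01, 5.0), method="bounded").x
    print(round(k_hat, 3))  # should be close to 0.45
    ```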


    Validation of Group Domain Score Estimates Using a Test of Domain

    JOURNAL OF EDUCATIONAL MEASUREMENT, Issue 2 2006
    Mary Pommerich
    Domain scores have been proposed as a user-friendly way of providing instructional feedback about examinees' skills. Domain performance typically cannot be measured directly; instead, scores must be estimated using available information. Simulation studies suggest that IRT-based methods yield accurate group domain score estimates. Because simulations can represent best-case scenarios for methodology, it is important to verify results with a real data application. This study administered a domain of elementary algebra (EA) items created from operational test forms. An IRT-based group-level domain score was estimated from responses to a subset of taken items (composed of EA items from a single operational form) and compared to the actual observed domain score. Domain item parameters were calibrated both using item responses from the special study and from national operational administrations of the items. The accuracy of the domain score estimates was evaluated within schools and across school sizes for each set of parameters. The IRT-based domain score estimates typically were closer to the actual domain score than observed performance on the EA items from the single form. Previously simulated findings for the IRT-based domain score estimation procedure were supported by the results of the real data application. [source]
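
    A sketch of one common form of IRT-based group domain score: with calibrated 3PL item parameters for the full domain, the expected proportion-correct for a group is the mean item response probability over the group's ability estimates. All parameter values below are invented for illustration; the study's calibration and scoring details may differ.

    ```python
    import numpy as np

    def p_3pl(theta, a, b, c):
        """3PL response probability for each (examinee, item) pair."""
        return c + (1 - c) / (1 + np.exp(-1.7 * a * (theta[:, None] - b)))

    rng = np.random.default_rng(6)
    n_items = 80                            # the full elementary algebra domain
    a = rng.uniform(0.5, 2.0, n_items)      # discrimination
    b = rng.normal(0, 1, n_items)           # difficulty
    c = rng.uniform(0.1, 0.25, n_items)     # guessing

    thetas = rng.normal(0.2, 0.9, size=500)  # group ability estimates
    domain_score = p_3pl(thetas, a, b, c).mean()
    print(f"estimated group domain score: {domain_score:.3f}")
    ```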


    A new Bayesian formulation for Holt's exponential smoothing

    JOURNAL OF FORECASTING, Issue 3 2009
    Robert R. Andrawis
    Abstract In this paper we propose a Bayesian forecasting approach for Holt's additive exponential smoothing method. Starting from the state space formulation, a formula for the forecast is derived and reduced to a two-dimensional integration that can be computed numerically in a straightforward way. In contrast to much of the work for exponential smoothing, this method produces the forecast density and, in addition, it considers the initial level and initial trend as part of the parameters to be evaluated. Another contribution of this paper is that we have derived a way to reduce the computation of the maximum likelihood parameter estimation procedure to that of evaluating a two-dimensional grid, rather than applying a five-variable optimization procedure. Simulation experiments confirm that both proposed methods give favorable performance compared to other approaches. Copyright © 2008 John Wiley & Sons, Ltd. [source]
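
    The abstract's computational point, in miniature: fix the two smoothing weights of Holt's method on a grid, run the recursions, and score each pair by a concentrated one-step-ahead criterion, so no five-variable optimizer is needed. Initial level and trend are set heuristically here rather than treated as parameters as in the paper, and the sum of squared errors stands in for the Gaussian likelihood.

    ```python
    import numpy as np

    def holt_sse(y, alpha, beta):
        """One-step-ahead squared-error score for Holt's linear method."""
        level, trend = y[0], y[1] - y[0]      # heuristic initialization
        sse = 0.0
        for t in range(1, len(y)):
            err = y[t] - (level + trend)
            sse += err ** 2
            new_level = alpha * y[t] + (1 - alpha) * (level + trend)
            trend = beta * (new_level - level) + (1 - beta) * trend
            level = new_level
        return sse

    rng = np.random.default_rng(7)
    y = np.cumsum(rng.normal(0.5, 1.0, 120))       # trending series
    grid = np.linspace(0.01, 0.99, 50)
    sse = [[holt_sse(y, a, b) for b in grid] for a in grid]
    i, j = np.unravel_index(np.argmin(sse), (50, 50))
    print(f"alpha={grid[i]:.2f}, beta={grid[j]:.2f}")
    ```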


    A joint test of market power, menu costs, and currency invoicing

    AGRICULTURAL ECONOMICS, Issue 1 2009
    Jean-Philippe Gervais
    Keywords: Exchange rate pass-through; currency invoicing; menu costs; threshold estimation
    Abstract This article investigates exchange rate pass-through (ERPT) and currency invoicing decisions of Canadian pork exporters in the presence of menu costs. It is shown that when export prices are negotiated in the exporter's currency, menu costs cause threshold effects in the sense that there are bounds within (outside of) which price adjustments are not (are) observed. Conversely, the pass-through is not interrupted by menu costs when export prices are denominated in the importer's currency. The empirical model focuses on pork meat exports from two Canadian provinces to the U.S. and Japan. Hansen's (2000) threshold estimation procedure is used to jointly test for currency invoicing and incomplete pass-through in the presence of menu costs. Inference is conducted using the bootstrap with pre-pivoting methods to deal with nuisance parameters. The existence of menu costs is supported by the data in three of the four cases. It also appears that Quebec pork exporters have some market power and invoice their exports to Japan in Japanese yen. Manitoba exporters also seem to follow the same invoicing strategy, but their ability to increase their profit margin in response to large enough own-currency devaluations is questionable. Our currency invoicing results for sales to the U.S. are consistent with subsets of Canadian firms using either the Canadian or U.S. currency. [source]
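
    The flavor of Hansen-style threshold estimation: search candidate thresholds in the threshold variable and pick the split that minimizes the pooled sum of squared residuals of the regime-specific regressions. (Hansen's procedure adds bootstrap inference on top of this search.) Data and variable names below are simulated stand-ins.

    ```python
    import numpy as np

    def threshold_ssr(x, y, q, gamma):
        """Pooled SSR of separate OLS fits below and above the threshold."""
        ssr = 0.0
        for regime in (q <= gamma, q > gamma):
            X = np.column_stack([np.ones(regime.sum()), x[regime]])
            beta, *_ = np.linalg.lstsq(X, y[regime], rcond=None)
            ssr += np.sum((y[regime] - X @ beta) ** 2)
        return ssr

    rng = np.random.default_rng(8)
    n = 400
    q = rng.normal(size=n)                  # e.g., size of the exchange rate change
    x = rng.normal(size=n)
    y = np.where(q > 0.5, 1.2 * x, 0.1 * x) + rng.normal(0, 0.5, n)

    candidates = np.quantile(q, np.linspace(0.15, 0.85, 71))
    gamma_hat = min(candidates, key=lambda g: threshold_ssr(x, y, q, g))
    print(f"estimated threshold: {gamma_hat:.2f}")  # should be near 0.5
    ```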


    Estimation of an effective water diffusion coefficient during infrared-convective drying of a polymer solution

    AICHE JOURNAL, Issue 9 2009
    N. Allanic
    Abstract This article deals with the drying of an aqueous solution of polyvinyl alcohol mixed with a plasticizer. A heating mode combining forced convection and short-infrared radiation was investigated. A one-dimensional model taking into account the shrinkage of the product was developed to predict the evolution of temperature and moisture content during drying. The water diffusion coefficient was estimated by an inverse method. A sensitivity analysis and numerical tests showed the relevance of using an objective function taking both mass and temperature measurements into account for the estimation procedure. This estimation was performed on several convective and infrared-convective experimental drying kinetics. The model predictions fit the experimental mass and temperature data well. © 2009 American Institute of Chemical Engineers AIChE J, 2009 [source]


    OLS ESTIMATION AND THE t TEST REVISITED IN RANK-SIZE RULE REGRESSION

    JOURNAL OF REGIONAL SCIENCE, Issue 4 2008
    Yoshihiko Nishiyama
    ABSTRACT The rank-size rule and Zipf's law for city sizes have been traditionally examined by means of OLS estimation and the t test. This paper studies the exact and approximate properties of the OLS estimator and obtains the distribution of the t statistic under the assumption of Zipf's law (i.e., a Pareto distribution). Indeed, we show that the t statistic explodes asymptotically even under the null, indicating that a mechanical application of the t test yields a serious type I error. To overcome this problem, critical regions of the t test are constructed to test Zipf's law. Using these corrected critical regions, we conclude that our results favor Zipf's law for many more countries than previous research such as Rosen and Resnick (1980) or Soo (2005). Using the same database as Soo (2005), we demonstrate that Zipf's law is rejected for only one of 24 countries under our test, whereas it is rejected for 23 of 24 countries under the usual t test. We also propose a more efficient estimation procedure and provide empirical applications of the theory for some countries. [source]
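
    A quick simulation of the setting the paper warns about: simulate city sizes from an exact Pareto (Zipf) law, run the standard log(rank)-on-log(size) OLS, and inspect the naive t statistic against the null slope of -1. The corrected critical regions of the paper are not reproduced here; this only shows the mechanical regression.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(9)
    sizes = 1.0 / rng.uniform(size=300)   # Pareto with tail index 1 (Zipf case)
    sizes = np.sort(sizes)[::-1]          # largest city first
    log_rank = np.log(np.arange(1, 301))

    X = sm.add_constant(np.log(sizes))
    fit = sm.OLS(log_rank, X).fit()
    t_against_minus1 = (fit.params[1] + 1.0) / fit.bse[1]
    print(f"slope={fit.params[1]:.3f}, naive t vs -1: {t_against_minus1:.2f}")
    ```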


    Penalized spline models for functional principal component analysis

    JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 1 2006
    Fang Yao
    Summary. We propose an iterative estimation procedure for performing functional principal component analysis. The procedure aims at functional or longitudinal data where the repeated measurements from the same subject are correlated. An increasingly popular smoothing approach, penalized spline regression, is used to represent the mean function. This allows straightforward incorporation of covariates and simple implementation of approximate inference procedures for coefficients. For the handling of the within-subject correlation, we develop an iterative procedure which reduces the dependence between the repeated measurements that are made for the same subject. The resulting data after iteration are theoretically shown to be asymptotically equivalent (in probability) to a set of independent data. This suggests that the general theory of penalized spline regression that has been developed for independent data can also be applied to functional data. The effectiveness of the proposed procedure is demonstrated via a simulation study and an application to yeast cell cycle gene expression data. [source]


    Proportional hazards estimate of the conditional survival function

    JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 4 2000
    Ronghui Xu
    We introduce a new estimator of the conditional survival function given some subset of the covariate values under a proportional hazards regression. The new estimate does not require estimating the base-line cumulative hazard function. An estimate of the variance is given and is easy to compute, involving only those quantities that are routinely calculated in a Cox model analysis. The asymptotic normality of the new estimate is shown by using a central limit theorem for Kaplan–Meier integrals. We indicate the straightforward extension of the estimation procedure under models with multiplicative relative risks, including non-proportional hazards, and to stratified and frailty models. The estimator is applied to a gastric cancer study where it is of interest to predict patients' survival based only on measurements obtained before surgery, the time at which the most important prognostic variable, stage, becomes known. [source]


    Local Linear M-estimation in non-parametric spatial regression

    JOURNAL OF TIME SERIES ANALYSIS, Issue 3 2009
    Zhengyan Lin
    MSC: primary 62G07; secondary 60F05
    Abstract A robust version of local linear regression smoothers augmented with variable bandwidths is investigated for dependent spatial processes. The (uniform) weak consistency as well as asymptotic normality for the local linear M-estimator (LLME) of the spatial regression function g(x) are established under some mild conditions. Furthermore, an additive model is considered to avoid the curse of dimensionality for spatial processes, and an estimation procedure based on combining the marginal integration technique with the LLME is applied in this paper. We also present a simulation study to illustrate the proposed estimation method. Our simulation results show that the estimation method works well numerically. [source]
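
    A one-dimensional illustration of a local linear M-estimator: at each target point, minimize a kernel-weighted Huber criterion over a local intercept and slope. The paper's setting is spatial with variable bandwidths and dependence; this sketch keeps a single covariate and fixed bandwidth to show the estimator's core.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def huber(r, c=1.345):
        """Huber loss: quadratic near zero, linear in the tails."""
        return np.where(np.abs(r) <= c, 0.5 * r**2, c * np.abs(r) - 0.5 * c**2)

    def llme(x0, x, y, h=0.3):
        w = np.exp(-0.5 * ((x - x0) / h) ** 2)   # Gaussian kernel weights
        obj = lambda p: np.sum(w * huber(y - p[0] - p[1] * (x - x0)))
        return minimize(obj, x0=[np.median(y), 0.0]).x[0]  # local intercept = g(x0)

    rng = np.random.default_rng(10)
    x = rng.uniform(0, 1, 300)
    y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_t(df=2, size=300)  # heavy tails
    print([round(llme(p, x, y), 2) for p in (0.25, 0.5, 0.75)])
    ```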


    Robust Estimation For Periodic Autoregressive Time Series

    JOURNAL OF TIME SERIES ANALYSIS, Issue 2 2008
    Q. Shao
    Abstract A robust estimation procedure for periodic autoregressive (PAR) time series is introduced. The asymptotic properties and the asymptotic relative efficiency are discussed by the estimating equation approach. The performance of the robust estimators for PAR time-series models of order one is illustrated by a simulation study. The technique is applied to a real data analysis. [source]


    Maximum likelihood estimation in space time bilinear models

    JOURNAL OF TIME SERIES ANALYSIS, Issue 1 2003
    YUQING DAI
    The space time bilinear (STBL) model is a special form of a multiple bilinear time series that can be used to model time series which exhibit bilinear behaviour on a spatial neighbourhood structure. The STBL model and its identification have been proposed and discussed by Dai and Billard (1998). The present work considers the problem of parameter estimation for the STBL model. A conditional maximum likelihood estimation procedure is provided through the use of a Newton–Raphson numerical optimization algorithm. The gradient vector and Hessian matrix are derived together with recursive equations for computation implementation. The methodology is illustrated with two simulated data sets, and one real-life data set. [source]


    Parameter Estimation of Stochastic Processes with Long-range Dependence and Intermittency

    JOURNAL OF TIME SERIES ANALYSIS, Issue 5 2001
    Jiti Gao
    This paper considers the case where a stochastic process may display both long-range dependence and second-order intermittency. The existence of such a process is established in Anh, Angulo and Ruiz-Medina (1999). We systematically study the estimation of parameters involved in the spectral density function of a process with long-range dependence and second-order intermittency. An estimation procedure for the parameters is given. Numerical results are presented to support the estimation procedure proposed in this paper. [source]


    Functional Coefficient Autoregressive Models: Estimation and Tests of Hypotheses

    JOURNAL OF TIME SERIES ANALYSIS, Issue 2 2001
    Rong Chen
    In this paper, we study nonparametric estimation and hypothesis testing procedures for the functional coefficient AR (FAR) models of the form X_t = f_1(X_{t-d})X_{t-1} + ... + f_p(X_{t-d})X_{t-p} + ε_t, first proposed by Chen and Tsay (1993). As a direct generalization of the linear AR model, the FAR model is a rich class of models that includes many useful parametric nonlinear time series models such as the threshold AR models of Tong (1983) and exponential AR models of Haggan and Ozaki (1981). We propose a local linear estimation procedure for estimating the coefficient functions and study its asymptotic properties. In addition, we propose two testing procedures. The first one tests whether all the coefficient functions are constant, i.e. whether the process is linear. The second one tests if all the coefficient functions are continuous, i.e. if any threshold type of nonlinearity is present in the process. The results of some simulation studies as well as a real example are presented. [source]


    Fine-scale genetic structure and gene dispersal inferences in 10 Neotropical tree species

    MOLECULAR ECOLOGY, Issue 2 2006
    OLIVIER J. HARDY
    Abstract The extent of gene dispersal is a fundamental factor of the population and evolutionary dynamics of tropical tree species, but directly monitoring seed and pollen movement is a difficult task. However, indirect estimates of historical gene dispersal can be obtained from the fine-scale spatial genetic structure of populations at drift–dispersal equilibrium. Using an approach that is based on the slope of the regression of pairwise kinship coefficients on spatial distance and estimates of the effective population density, we compare indirect gene dispersal estimates of sympatric populations of 10 tropical tree species. We re-analysed 26 data sets consisting of mapped allozyme, SSR (simple sequence repeat), RAPD (random amplified polymorphic DNA) or AFLP (amplified fragment length polymorphism) genotypes from two rainforest sites in French Guiana. Gene dispersal estimates were obtained for at least one marker in each species, although the estimation procedure failed under insufficient marker polymorphism, limited sample size, or inappropriate sampling area. Estimates generally suffered low precision and were affected by assumptions regarding the effective population density. Averaging estimates over data sets, the extent of gene dispersal ranged from 150 m to 1200 m according to species. Smaller gene dispersal estimates were obtained in species with heavy diaspores, which are presumably not well dispersed, and in populations with high local adult density. We suggest that limited seed dispersal could indirectly limit effective pollen dispersal by creating higher local tree densities, thereby increasing the positive correlation between pollen and seed dispersal distances. We discuss the potential and limitations of our indirect estimation procedure and suggest guidelines for future studies. [source]


    Econometric Analysis of Fisher's Equation

    AMERICAN JOURNAL OF ECONOMICS AND SOCIOLOGY, Issue 1 2005
    Peter C. B. Phillips
    Fisher's equation for the determination of the real rate of interest is studied from a fresh econometric perspective. Some new methods of data description for nonstationary time series are introduced. The methods provide a nonparametric mechanism for modelling the spatial densities of a time series that displays random wandering characteristics, like interest rates and inflation. Hazard rate functionals are also constructed, an asymptotic theory is given, and the techniques are illustrated in some empirical applications to real interest rates for the United States. The paper ends by calculating semiparametric estimates of long-range dependence in U.S. real interest rates, using a new estimation procedure called modified log periodogram regression and new asymptotics that covers the nonstationary case. The empirical results indicate that the real rate of interest in the United States is (fractionally) nonstationary over 1934–1997 and over the more recent subperiods 1961–1985 and 1961–1997. Unit root nonstationarity and short memory stationarity are both strongly rejected for all these periods. [source]
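
    For orientation, a plain log-periodogram (GPH-type) regression for the memory parameter d, the ancestor of the modified estimator the paper proposes; the modification that handles the nonstationary case is not reproduced here, and the bandwidth choice is a simple rule of thumb.

    ```python
    import numpy as np

    def gph_estimate(x, m=None):
        """Regress log-periodogram on -2*log(2*sin(freq/2)); slope estimates d."""
        n = len(x)
        m = m or int(n ** 0.5)                 # number of low frequencies used
        freqs = 2 * np.pi * np.arange(1, m + 1) / n
        I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
        X = np.column_stack([np.ones(m), -2 * np.log(2 * np.sin(freqs / 2))])
        beta, *_ = np.linalg.lstsq(X, np.log(I), rcond=None)
        return beta[1]

    rng = np.random.default_rng(11)
    x = rng.normal(size=2000)                  # white noise: d should be near 0
    print(round(gph_estimate(x), 3))
    ```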


    Habits, Complementarities and Heterogeneity in Alcohol and Tobacco Demand: A Multivariate Dynamic Model,

    OXFORD BULLETIN OF ECONOMICS & STATISTICS, Issue 4 2010
    David Aristei
    Abstract In this paper we test the existence of forward-looking behaviour in a multivariate model for alcohol and tobacco consumption. The theoretical framework, based on a dynamic adjustment cost model with forward-looking behaviour, is enhanced to include the intertemporal interactions between the two goods. The analysis of the within-period preferences completes the intertemporal model, allowing evaluation of the static substitutability/complementarity relationships. The empirical strategy consists of a two-step estimation procedure. In a first stage, we obtain the parameters of the demand system, while in a second stage Euler equations are estimated. Results, based on a cohort data set constructed from a series of cross-sections of the Italian Household Budget Survey, reveal a significant complementarity relationship between alcohol and tobacco. Estimation of the Euler equations does not lead to rejection of the hypothesis of intertemporal dependence, providing evidence for a forward-looking behaviour in alcohol and tobacco consumption. Moreover, we find significant intertemporal interactions that support the adjustment cost setting in a multivariate model with rational expectations. [source]


    Modelling approaches to compare sorption and degradation of metsulfuron-methyl in laboratory micro-lysimeter and batch experiments

    PEST MANAGEMENT SCIENCE (FORMERLY: PESTICIDE SCIENCE), Issue 12 2003
    Maik Heistermann
    Abstract Results of laboratory batch studies often differ from those of outdoor lysimeter or field plot experiments, with respect to degradation as well as sorption. Laboratory micro-lysimeters are a useful device for closing the gap between laboratory and field by both including relevant transport processes in undisturbed soil columns and allowing controlled boundary conditions. In this study, sorption and degradation of the herbicide metsulfuron-methyl in a loamy silt soil were investigated by applying inverse modelling techniques to data sets from different experimental approaches under laboratory conditions at a temperature of 10 °C: first, batch degradation studies and, second, column experiments with undisturbed soil cores (28 cm length × 21 cm diameter). The column experiments included leachate and soil profile analysis at two different run times. A sequential extraction method was applied in both study parts in order to determine different binding states of the test item within the soil. Data were modelled using ModelMaker and Hydrus-1D/2D. Metsulfuron-methyl half-life in the batch experiments (t1/2 = 66 days) was shown to be about four times higher than in the micro-lysimeter studies (t1/2 about 17 days). Kinetic sorption was found to be a significant process both in batch and column experiments. Applying the one-rate-two-site kinetic sorption model to the sequential extraction data, it was possible to associate the stronger bonded fraction of metsulfuron-methyl with its kinetically sorbed fraction in the model. Although the columns exhibited strong significance of multi-domain flow (soil heterogeneity), the comparison between bromide and metsulfuron-methyl leaching and profile data showed clear evidence for kinetic sorption effects. The use of soil profile data had significant impact on parameter estimates concerning sorption and degradation. The simulated leaching of metsulfuron-methyl as it resulted from parameter estimation was shown to decrease when soil profile data were considered in the parameter estimation procedure. Moreover, it was shown that the significance of kinetic sorption can only be demonstrated by the additional use of soil profile data in parameter estimation. Thus, the exclusive use of efflux data from leaching experiments at any scale can lead to fundamental misunderstandings of the underlying processes. Copyright © 2003 Society of Chemical Industry [source]


    Procedure for determining the uncertainty of photovoltaic module outdoor electrical performance

    PROGRESS IN PHOTOVOLTAICS: RESEARCH & APPLICATIONS, Issue 2 2001
    K. Whitfield
    This paper sets forth an uncertainty estimation procedure for the measurement of photovoltaic (PV) electrical performance using natural sunlight and calibrated secondary reference cells. The actual test irradiance should be restricted to values between 800 and 1000 W/m2 in order to assume that maximum power varies linearly with irradiance. Only the uncertainty of maximum power at standard test conditions (STC), i.e., 1000 W/m2 plane-of-array irradiance and 25°C cell temperature, is developed in its entirety. The basic uncertainty analysis principles developed herein, however, can be applied to any electrical variable of interest (e.g., short-circuit current, open-circuit voltage and fill factor). Although the equations presented appear cumbersome, they are easily implemented into a computer spreadsheet. Examples of uncertainty analyses are also presented herein to make the concepts more concrete. Published in 2001 by John Wiley & Sons, Ltd. [source]
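
    The basic translation that the uncertainty then propagates through: measured maximum power is corrected to STC assuming linearity in irradiance (hence the 800-1000 W/m2 restriction) and a known power temperature coefficient. The coefficient value below is a typical crystalline-silicon figure, not one taken from the paper.

    ```python
    def power_at_stc(p_meas, irradiance, t_cell, gamma=-0.004):
        """Translate measured max power to STC (1000 W/m2, 25 C).

        gamma: power temperature coefficient per degree C (assumed value).
        """
        p_irr = p_meas * 1000.0 / irradiance            # linear irradiance correction
        return p_irr / (1.0 + gamma * (t_cell - 25.0))  # temperature correction

    # Example: 180 W measured at 870 W/m2 and 48 C cell temperature.
    print(round(power_at_stc(180.0, 870.0, 48.0), 1), "W")
    ```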


    Field Reliability Prediction in Consumer Electronics Using Warranty Data

    QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 4 2007
    Roxana A. Ion
    Abstract In innovative fast product development processes, such as consumer electronics, it is necessary to check as quickly as possible, using field data, whether the product reliability is at the right level. In consumer electronics, some major companies use the Warranty Call Rate (WCR) for this purpose. This paper discusses extensively the theoretical and practical drawbacks of the WCR. Subsequently, it is demonstrated, using a Weibull failure distribution, that only a few months after product launch, say three months, the warranty data offer the opportunity to estimate the parameters of the failure distribution. Of course, this requires that the warranty data are available in the quality department. Unfortunately, for some companies the field feedback information process from the repair centres to the quality department causes a delay of several months. These companies have to speed up their field feedback information process before they can fully take advantage of the proposed estimation procedure. Copyright © 2006 John Wiley & Sons, Ltd. [source]
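
    A sketch of the proposed idea: a few months after launch, failure times from warranty claims, with units still in service treated as right-censored, already pin down the Weibull shape and scale by maximum likelihood. The true parameters, observation window and sample size below are invented; the paper's estimation details may differ.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def weibull_negloglik(params, t, failed):
        """Censored Weibull log-likelihood: failures contribute log f,
        survivors contribute log S = -(t/eta)**beta."""
        beta, eta = params
        if beta <= 0 or eta <= 0:
            return np.inf
        z = t / eta
        loglik = np.sum(failed * (np.log(beta / eta) + (beta - 1) * np.log(z))
                        - z ** beta)
        return -loglik

    rng = np.random.default_rng(12)
    life = rng.weibull(1.8, size=5000) * 40.0   # true shape 1.8, scale 40 months
    t_obs = np.minimum(life, 3.0)               # only 3 months of field data
    failed = (life <= 3.0).astype(float)        # warranty claims vs. survivors
    res = minimize(weibull_negloglik, x0=[1.0, 20.0], args=(t_obs, failed),
                   method="Nelder-Mead")
    print(res.x)  # estimated (shape, scale)
    ```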