Kinds of Bootstrap

  • block bootstrap
  • parametric bootstrap

Terms modified by Bootstrap

  • bootstrap algorithm
  • bootstrap analysis
  • bootstrap approach
  • bootstrap confidence interval
  • bootstrap method
  • bootstrap methods
  • bootstrap procedure
  • bootstrap sample
  • bootstrap simulation
  • bootstrap support
  • bootstrap technique
  • bootstrap techniques
  • bootstrap value

Selected Abstracts


    Samuel Müller
    Summary: We propose a new approach to the selection of partially linear models based on the conditional expected prediction squared loss function, which is estimated using the bootstrap. Because of the different speeds of convergence of the linear and the nonlinear parts, a key idea is to select each part separately. In the first step, we select the nonlinear components using an 'm-out-of-n' residual bootstrap that ensures good properties for the nonparametric bootstrap estimator. The second step selects the linear components from the remaining explanatory variables, and the non-zero parameters are selected based on a two-level residual bootstrap. We show that the model selection procedure is consistent under some conditions, and our simulations suggest that it selects the true model more often than the other selection procedures considered. [source]
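A minimal sketch of the m-out-of-n residual bootstrap idea in a toy linear model. The resampling rate m = n**0.8, the least-squares setup, and the data are illustrative assumptions, not the authors' procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear model y = 2x + noise, fitted by least squares.
n = 100
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta_hat

# m-out-of-n residual bootstrap: each replicate uses only m < n
# resampled residuals (m = n**0.8 is an illustrative rate).
m = int(n ** 0.8)
B = 500
slopes = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=m)             # subsample of design rows
    e_star = resid[rng.integers(0, n, size=m)]   # resampled residuals
    y_star = X[idx] @ beta_hat + e_star
    slopes[b] = np.linalg.lstsq(X[idx], y_star, rcond=None)[0][1]

se_slope = slopes.std(ddof=1)                    # bootstrap SE of the slope
```

The bootstrap distribution of the slope over the B refits then yields standard errors or percentile intervals at the m-out-of-n scale.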

    Consistent poverty comparisons and inference

    Channing Arndt
    Keywords: Poverty measurement; Entropy estimation; Revealed preferences; Bootstrap. Abstract: Building upon the cost of basic needs (CBN) approach, an integrated approach to making consumption-based poverty comparisons is presented. This approach contains two principal modifications to the standard CBN approach. The first permits the development of multiple poverty bundles that are utility consistent. The second recognizes that the poverty line itself is a random variable whose variation influences the degree of confidence in poverty measures. We illustrate the empirical importance of these two methodological changes for the case of Mozambique. With utility consistency imposed, estimated poverty rates tend to be systematically higher in rural areas and lower in urban areas. We also find that the true confidence intervals on the poverty estimates, those incorporating poverty line variance, tend to be considerably larger than those that ignore poverty line variance. Finally, we show that these two methodological changes interact. Specifically, we find that imposing utility consistency on poverty bundles tends to tighten confidence intervals, sometimes dramatically, on provincial poverty estimates. We conclude that this revised approach represents an important advance in poverty analysis. The revised approach is straightforward and directly applicable in empirical work. [source]


    JOURNAL OF PHYCOLOGY, Issue 2 2003
    Leslie R. Goertzen
    Nuclear ribosomal small subunit and chloroplast rbcL sequence data for heterokont algae and potential outgroup taxa were analyzed separately and together using maximum parsimony. A series of taxon sampling and character weighting experiments was performed. Traditional classes (e.g. diatoms, Phaeophyceae, etc.) were monophyletic in most analyses of either data set and in analyses of combined data. Relationships among classes and of heterokont algae to outgroup taxa were sensitive to taxon sampling. Bootstrap (BS) values were not always predictive of stability of nodes in taxon sampling experiments or between analyses of different data sets. Reweighting sites by the rescaled consistency index artificially inflates BS values in the analysis of rbcL data. Inclusion of the third codon position from rbcL enhanced signal despite the superficial appearance of mutational saturation. Incongruence between data sets was largely due to placement of a few problematic taxa, and so data were combined. BS values for the combined analysis were much higher than for analyses of each data set alone, although combining data did not improve support for heterokont monophyly. [source]

    Prognostic value of the ratio of metastatic lymph nodes in gastric cancer: An analysis based on a Chinese population

    Xi Wang MD
    Abstract Background and Objectives: To determine the prognostic value of the ratio of metastatic lymph nodes (RML) for gastric cancer and compare it to the prognostic value of the number-based pN classification. Methods: Data on the survival of 513 patients who underwent curative resection between 2000 and 2005 were retrieved. The prognostic value of two factors for nodal status, the RML classification (RML0, 0%; RML1, ≤30%; RML2, ≤50%; RML3, >50%) and the pN classification (6th TNM system), was analyzed. Results: Both RML and pN classifications were independent prognostic factors when considered separately in multivariate analysis (P-values < 0.05). Moreover, the proportion of explained variation (PEV) analysis showed that each classification had more prognostic value than the other prognostic factors in the two respective models (P-values < 0.05). The D-measure for prognostic separation was 1.563 versus 1.383 for RML versus pN. Bootstrap results for the difference of D-measures did not show a significant difference between RML and pN in terms of prognostic power (95% CI, -0.102 to 0.175). Conclusions: RML is an independent prognostic factor for gastric cancer. However, no significant evidence is found to support the hypothesis that RML classification carries more prognostic value than pN classification. J. Surg. Oncol. 2009;99:329-334. © 2009 Wiley-Liss, Inc. [source]

    Bootstrap-based bandwidth choice for log-periodogram regression

    Josu Arteche
    Abstract. The choice of the bandwidth in the local log-periodogram regression is of crucial importance for estimation of the memory parameter of a long memory time series. Different choices may give rise to completely different estimates, which may lead to contradictory conclusions, for example about the stationarity of the series. We propose here a data-driven bandwidth selection strategy that is based on minimizing a bootstrap approximation of the mean-squared error (MSE). Its behaviour is compared with other existing techniques for optimal bandwidth selection in an MSE sense, revealing its better performance in a wider class of models. The empirical applicability of the proposed strategy is shown with two examples: the Nile river annual minimum levels, widely analysed in the long memory context, and the input gas rate series of Box and Jenkins. [source]

    Properties of the Sieve Bootstrap for Fractionally Integrated and Non-Invertible Processes

    D. S. Poskitt
    Abstract. In this article, we investigate the consequences of applying the sieve bootstrap under regularity conditions that are sufficiently general to encompass both fractionally integrated and non-invertible processes. The sieve bootstrap is obtained by approximating the data-generating process by an autoregression, whose order h increases with the sample size T. The sieve bootstrap may be particularly useful in the analysis of fractionally integrated processes since the statistics of interest can often be non-pivotal with distributions that depend on the fractional index d. The validity of the sieve bootstrap is established for |d|<1/2 and it is shown that when the sieve bootstrap is used to approximate the distribution of a general class of statistics, then the error rate will be of an order smaller than , ,>0. Practical implementation of the sieve bootstrap is considered and the results are illustrated using a canonical example. [source]
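The sieve bootstrap described above can be sketched as follows: fit an AR(h) whose order grows with the sample size, centre and resample the residuals, and feed them back through the fitted autoregression. The AR(1) data and the rule h ≈ (log T)²/4 are illustrative assumptions, not the article's exact choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an AR(1) series as stand-in data.
T = 400
phi = 0.6
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + rng.normal()

# Sieve step: approximate the process by an AR(h), h growing with T.
h = int(np.log(T) ** 2 // 4) or 1
Y = y[h:]
Z = np.column_stack([y[h - j - 1: T - j - 1] for j in range(h)])  # lags 1..h
a_hat = np.linalg.lstsq(Z, Y, rcond=None)[0]
eps = Y - Z @ a_hat
eps = eps - eps.mean()                        # centre the residuals

def sieve_resample(rng):
    """One bootstrap replicate: run the AR(h) on resampled residuals."""
    e_star = rng.choice(eps, size=T + 100)    # 100 burn-in draws
    y_star = np.zeros(T + 100)
    for t in range(h, T + 100):
        y_star[t] = a_hat @ y_star[t - h: t][::-1] + e_star[t]
    return y_star[100:]

# Bootstrap distribution of a statistic of interest (here, the sample mean).
boot_means = np.array([sieve_resample(rng).mean() for _ in range(200)])
```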

    Bootstrap Methods for Markov Processes

    ECONOMETRICA, Issue 4 2003
    Joel L. Horowitz
    The block bootstrap is the best known bootstrap method for time-series data when the analyst does not have a parametric model that reduces the data generation process to simple random sampling. However, the errors made by the block bootstrap converge to zero only slightly faster than those made by first-order asymptotic approximations. This paper describes a bootstrap procedure for data that are generated by a Markov process or a process that can be approximated by a Markov process with sufficient accuracy. The procedure is based on estimating the Markov transition density nonparametrically. Bootstrap samples are obtained by sampling the process implied by the estimated transition density. Conditions are given under which the errors made by the Markov bootstrap converge to zero more rapidly than those made by the block bootstrap. [source]
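A discrete-state caricature of the Markov bootstrap just described: estimate the transition kernel from the observed path and sample bootstrap paths from it. The paper's setting uses a nonparametric transition-density estimate for continuous data; the three-state chain below is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# A toy 3-state Markov chain and an observed path generated from it.
P_true = np.array([[0.8, 0.1, 0.1],
                   [0.2, 0.6, 0.2],
                   [0.1, 0.3, 0.6]])
T = 1000
path = np.zeros(T, dtype=int)
for t in range(1, T):
    path[t] = rng.choice(3, p=P_true[path[t - 1]])

# Estimate the transition matrix from the path (the discrete analogue of
# estimating the transition density nonparametrically).
counts = np.zeros((3, 3))
for a, b in zip(path[:-1], path[1:]):
    counts[a, b] += 1
P_hat = counts / counts.sum(axis=1, keepdims=True)

def markov_bootstrap(rng, length=T):
    """Sample a bootstrap path from the estimated transition kernel."""
    s = np.zeros(length, dtype=int)
    s[0] = path[0]
    for t in range(1, length):
        s[t] = rng.choice(3, p=P_hat[s[t - 1]])
    return s

# Bootstrap distribution of the fraction of time spent in state 0.
frac0 = np.array([(markov_bootstrap(rng) == 0).mean() for _ in range(100)])
```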

    Consistent Tests for Stochastic Dominance

    ECONOMETRICA, Issue 1 2003
    Garry F. Barrett
    Methods are proposed for testing stochastic dominance of any pre-specified order, with primary interest in the distributions of income. We consider consistent tests, similar to Kolmogorov-Smirnov tests, of the complete set of restrictions that relate to the various forms of stochastic dominance. For tests of stochastic dominance beyond first order, we propose and justify a variety of approaches to inference based on simulation and the bootstrap. We compare these approaches to one another and to alternative approaches based on multiple comparisons in the context of a Monte Carlo experiment and an empirical example. [source]
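A sketch of a Kolmogorov-Smirnov-type test of first-order stochastic dominance with a bootstrap p-value. The pooled-sample resampling scheme used here is a common least-favourable-configuration simplification, assumed for illustration rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two income samples; Y is drawn from a distribution that first-order
# dominates the distribution of X.
X = rng.lognormal(mean=0.0, sigma=0.5, size=300)
Y = rng.lognormal(mean=0.2, sigma=0.5, size=300)

def fsd_stat(x, y):
    """sqrt(n) * sup over the pooled sample of (F_y - F_x): large values
    are evidence against 'Y first-order dominates X'."""
    grid = np.sort(np.concatenate([x, y]))
    Fx = np.searchsorted(np.sort(x), grid, side="right") / x.size
    Fy = np.searchsorted(np.sort(y), grid, side="right") / y.size
    return np.sqrt(x.size) * np.max(Fy - Fx)

stat = fsd_stat(X, Y)

# Bootstrap critical values from the pooled sample (least-favourable null).
pooled = np.concatenate([X, Y])
boot = np.array([
    fsd_stat(rng.choice(pooled, X.size), rng.choice(pooled, Y.size))
    for _ in range(500)
])
p_value = (boot >= stat).mean()
```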

    A Three-step Method for Choosing the Number of Bootstrap Repetitions

    ECONOMETRICA, Issue 1 2000
    Donald W. K. Andrews
    This paper considers the problem of choosing the number of bootstrap repetitions B for bootstrap standard errors, confidence intervals, confidence regions, hypothesis tests, p-values, and bias correction. For each of these problems, the paper provides a three-step method for choosing B to achieve a desired level of accuracy. Accuracy is measured by the percentage deviation of the bootstrap standard error estimate, confidence interval length, test's critical value, test's p-value, or bias-corrected estimate based on B bootstrap simulations from the corresponding ideal bootstrap quantities for which B = ∞. The results apply quite generally to parametric, semiparametric, and nonparametric models with independent and dependent data. The results apply to the standard nonparametric iid bootstrap, moving block bootstraps for time series data, parametric and semiparametric bootstraps, and bootstraps for regression models based on bootstrapping residuals. Monte Carlo simulations show that the proposed methods work very well. [source]
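A crude stand-in for the idea of choosing B adaptively: keep doubling B until the bootstrap standard error estimate stabilizes. The 2% tolerance, the doubling rule, and the cap are illustrative assumptions; the paper's three-step method is considerably more refined:

```python
import numpy as np

rng = np.random.default_rng(4)
data = rng.exponential(scale=2.0, size=200)

def boot_se(sample, B, rng):
    """Bootstrap standard error of the sample mean from B resamples."""
    means = np.array([
        rng.choice(sample, sample.size).mean() for _ in range(B)
    ])
    return means.std(ddof=1)

# Double B until successive SE estimates agree to within 2% (capped).
B = 100
prev = boot_se(data, B, rng)
while True:
    B *= 2
    cur = boot_se(data, B, rng)
    if abs(cur - prev) / prev < 0.02 or B >= 12800:
        break
    prev = cur
```

The stopping rule mimics the paper's target of bounding the percentage deviation from the ideal (B = ∞) quantity, without its formal accuracy guarantee.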

    Vertical distribution of picoeukaryotic diversity in the Sargasso Sea

    Fabrice Not
    Summary Eukaryotic molecular diversity within the picoplanktonic size-fraction has primarily been studied in marine surface waters. Here, the vertical distribution of picoeukaryotic diversity was investigated in the Sargasso Sea from euphotic to abyssal waters, using size-fractionated samples (< 2 μm). 18S rRNA gene clone libraries were used to generate sequences from euphotic zone samples (deep chlorophyll maximum to the surface); the permanent thermocline (500 m); and the pelagic deep-sea (3000 m). Euphotic zone and deep-sea data contrasted strongly, the former displaying greater diversity at the first-rank taxon level, based on 232 nearly full-length sequences. Deep-sea sequences belonged almost exclusively to the Alveolata and Radiolaria, while surface samples also contained known and putative photosynthetic groups, such as unique Chlorarachniophyta and Chrysophyceae sequences. Phylogenetic analyses placed most Alveolata and Stramenopile sequences within previously reported 'environmental' clades, i.e. clades within the Novel Alveolate groups I and II (NAI and NAII), or the novel Marine Stramenopiles (MAST). However, some deep-sea NAII formed distinct, bootstrap-supported clades. Stramenopiles were recovered from the euphotic zone only, although many MAST are reportedly heterotrophic, making the observed distribution a point for further investigation. An unexpectedly high proportion of radiolarian sequences were recovered. From these, five environmental radiolarian clades, RAD-I to RAD-V, were identified. RAD-IV and RAD-V were composed of Taxopodida-like sequences, with the former solely containing Sargasso Sea sequences, although from all depth zones sampled. Our findings highlight the vast diversity of these protists, most of which remain uncultured and of unknown ecological function. [source]

    Non-parametric tests and confidence regions for intrinsic diversity profiles of ecological populations

    ENVIRONMETRICS, Issue 8 2003
    Tonio Di Battista
    Abstract Evaluation of diversity profiles is useful for ecologists to quantify the diversity of biological communities. Measures of diversity profile can be expressed as a function of the unknown abundance vector. Thus, the estimators and related confidence regions and tests of hypotheses involve aspects of multivariate analysis. In this setting, using a suitable sampling design, inference is developed assuming an asymptotic specific distribution of the profile estimator. However, in a biological framework, ecologists work with small sample sizes, and the use of any probability distribution is hazardous. Assuming that a sample belongs to the family of replicated sampling design, we show that the diversity profile estimator can be expressed as a linear combination of the ranked abundance vector estimators. Hence we are able to develop a non-parametric approach based on a bootstrap in order to build balanced simultaneous confidence sets and tests of hypotheses for diversity profiles. Finally, the proposed procedure is applied on the avian populations of four parks in Milan, Italy. Copyright © 2003 John Wiley & Sons, Ltd. [source]

    Models for the estimation of a 'no effect concentration'

    ENVIRONMETRICS, Issue 1 2002
    Ana M. Pires
    Abstract The use of a no effect concentration (NEC), instead of the commonly used no observed effect concentration (NOEC), has been advocated recently. In this article models and methods for the estimation of an NEC are proposed and it is shown that the NEC overcomes many of the objections to the NOEC. The NEC is included as a threshold parameter in a non-linear model. Numerical methods are then used for point estimation and several techniques are proposed for interval estimation (based on bootstrap, profile likelihood and asymptotic normality). The adequacy of these methods is empirically confirmed by the results of a simulation study. The profile likelihood based interval has emerged as the best method. Finally the methodology is illustrated with data obtained from a 21 day Daphnia magna reproduction test with a reference substance, 3,4-dichloroaniline (3,4-DCA), and with a real effluent. Copyright © 2002 John Wiley & Sons, Ltd. [source]

    Confidence intervals for the calibration estimator with environmental applications

    ENVIRONMETRICS, Issue 1 2002
    I. Müller
    Abstract The article investigates different estimation techniques in the simple linear controlled calibration model and provides different types of confidence limits for the calibration estimator. In particular, M-estimation and bootstrapping techniques are implemented to obtain estimates of regression parameters during the training stage. Moreover, bootstrap is used to construct several types of confidence intervals that are compared to the classical approach based on the assumption of normality. For some of these intervals, the second order asymptotic properties can be established by means of Edgeworth expansions. Two data sets, one on space debris and the other on bacteriological counts in water samples, are used to illustrate the method's environmental applications. Copyright © 2002 John Wiley & Sons, Ltd. [source]
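A sketch of a percentile bootstrap confidence interval for the linear calibration estimator x0 = (y0 - a)/b, with residual resampling in the training stage. The data and the plain percentile interval are illustrative assumptions, not the article's full set of methods:

```python
import numpy as np

rng = np.random.default_rng(5)

# Training stage: known x, measured y (simple linear calibration).
x = np.linspace(0, 10, 30)
y = 1.0 + 0.5 * x + rng.normal(scale=0.2, size=x.size)
b, a = np.polyfit(x, y, 1)                # slope b, intercept a
resid = y - (a + b * x)

y0 = 3.5                                  # new response to be calibrated
x0_hat = (y0 - a) / b                     # calibration estimate

# Percentile bootstrap: resample residuals, refit, re-invert.
B = 2000
x0_boot = np.empty(B)
for i in range(B):
    y_star = a + b * x + rng.choice(resid, resid.size)
    b_s, a_s = np.polyfit(x, y_star, 1)
    x0_boot[i] = (y0 - a_s) / b_s
lo, hi = np.percentile(x0_boot, [2.5, 97.5])
```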


    EVOLUTION, Issue 10 2009
    Brittny Calsbeek
    A central assumption of quantitative genetic theory is that the breeder's equation (R = GP⁻¹S) accurately predicts the evolutionary response to selection. Recent studies highlight the fact that the additive genetic variance-covariance matrix (G) may change over time, rendering the breeder's equation incapable of predicting evolutionary change over more than a few generations. Although some consensus on whether G changes over time has been reached, multiple, often-incompatible methods for comparing G matrices are currently used. A major challenge of G matrix comparison is determining the biological relevance of observed change. Here, we develop a "selection skewers" G matrix comparison statistic that uses the breeder's equation to compare the response to selection given two G matrices while holding selection intensity constant. We present a bootstrap algorithm that determines the significance of G matrix differences using the selection skewers method, random skewers, Mantel's and Bartlett's tests, and eigenanalysis. We then compare these methods by applying the bootstrap to a dataset of laboratory populations of Tribolium castaneum. We find that the results of matrix comparison statistics are inconsistent based on differing a priori goals of each test, and that the selection skewers method is useful for identifying biologically relevant G matrix differences. [source]
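The breeder's equation and the skewers idea can be illustrated directly: apply the same selection differential through two G matrices and compare the predicted responses. The matrices below are made up for illustration; the angle comparison is only the flavour of the skewers statistic, not the authors' bootstrap test:

```python
import numpy as np

# Two hypothetical G matrices and a shared selection regime.
G1 = np.array([[1.0, 0.4],
               [0.4, 0.8]])
G2 = np.array([[1.0, -0.3],
               [-0.3, 0.8]])
P = np.array([[2.0, 0.5],
              [0.5, 1.5]])     # phenotypic (co)variance matrix
S = np.array([0.6, 0.2])       # selection differential

# Breeder's equation: R = G P^{-1} S.
beta = np.linalg.solve(P, S)   # selection gradient P^{-1} S
R1 = G1 @ beta                 # predicted response under G1
R2 = G2 @ beta                 # predicted response under G2

# Compare the two predicted responses under identical selection intensity
# via the cosine of the angle between them.
cos_angle = R1 @ R2 / (np.linalg.norm(R1) * np.linalg.norm(R2))
```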


    EVOLUTION, Issue 8 2009
    Bryn T. M. Dentinger
    The ~50-million-year-old fungus-farming ant mutualism is a classic example of coevolution, involving ants that subsist on asexual fungal biomass, in turn propagating the fungus clonally through nest-to-nest transmission. Most mutualistic ants cultivate two closely related groups of gilled mushrooms, whereas one small group of ants in the genus Apterostigma cultivates a distantly related lineage comprised of the G2 and G4 groups. The G2 and G4 fungi were previously shown to form a monophyletic group sister to the thread-like coral mushroom family Pterulaceae. Here, we identify an enigmatic coral mushroom that produces both fertile and sterile fruiting structures as the closest free-living relative of the G4 fungi, challenging the monophyly of the Apterostigma-cultivated fungi for the first time. Both nonparametric bootstrap and Bayesian posterior probability support the node leading to the G4 cultivars and a free-living Pterula mushroom. These data suggest three scenarios that contradict the hypothesis of strict coevolution: (1) multiple domestications, (2) escape from domestication, and (3) selection of single cultivar lineages from an ancestral mixed-fungus garden. These results illustrate how incomplete phylogenies for coevolved symbionts impede our understanding of the patterns and processes of coevolution. [source]

    Time series analyses reveal transient relationships between abundance of larval anchovy and environmental variables in the coastal waters southwest of Taiwan

    Abstract We investigated environmental effects on larval anchovy fluctuations (based on CPUE from 1980 to 2000) in the waters off southwestern Taiwan using advanced time series analyses, including the state-space approach to remove seasonality, wavelet analysis to investigate transient relationships, and stationary bootstrap to test correlation between time series. For large-scale environmental effects, we used the Southern Oscillation Index (SOI) to represent the El Niño Southern Oscillation (ENSO); for local hydrographic conditions, we used sea surface temperature (SST), river runoff, and mixing conditions. Whereas the anchovy catch consisted of a northern species (Engraulis japonicus) and two southern species (Encrasicholina heteroloba and Encrasicholina punctifer), the magnitude of the anchovy catch appeared to be mainly determined by the strength of Eng. japonicus (Japanese anchovy). The main factor that caused the interannual variation of anchovy CPUE might change through time. The CPUE showed a negative correlation with combination of water temperature and river runoff before 1987 and a positive correlation with river runoff after 1988. Whereas a significant negative correlation between CPUE and ENSOs existed, this correlation was driven completely by the low-frequency ENSO events and explained only 10% of the variance. Several previous studies on this population emphasized that the fluctuations of larval anchovy abundance were determined by local SST. Our analyses indicated that such a correlation was transient and simply reflected ENSO signals. Recent advances in physical oceanography around Taiwan showed that the ENSOs reduced the strength of the Asian monsoon and thus weakened the China Coastal Current toward Taiwan. The decline of larval anchovy during ENSO may be due to reduced China Coastal Current, which is important in facilitating the spawning migration of the Japanese anchovy. [source]

    The bootstrap and cross-validation in neuroimaging applications: Estimation of the distribution of extrema of random fields for single volume tests, with an application to ADC maps

    HUMAN BRAIN MAPPING, Issue 10 2007
    Roberto Viviani
    Abstract We discuss the assessment of signal change in single magnetic resonance images (MRI) based on quantifying significant departure from a reference distribution estimated from a large sample of normal subjects. The parametric approach is to build a test based on the expected distribution of extrema in random fields. However, in conditions where the variance is not uniform across the volume and the smoothness of the images is moderate to low, this test may be rather conservative. Furthermore, parametric tests are limited to datasets for which distributional assumptions hold. This paper investigates resampling methods that improve statistical tests for signal changes in single images in such adverse conditions, and that can be used for the assessment of images taken for clinical purposes. Two methods, the bootstrap and cross-validation, are compared. It is shown that the bootstrap may fail to provide a good estimate of the distribution of extrema of parametric maps. In contrast, calibration of the significance threshold by means of cross-validation (or related sampling without replacement techniques) address three issues at once: improved power, better voxel-by-voxel estimate of variance by local pooling, and adaptation to departures from ideal distributional assumptions on the signal. We apply the cross-validated tests to apparent diffusion coefficient maps, a type of MRI capable of detecting changes in the microstructural organization of brain parenchyma. We show that deviations from parametric assumptions are strong enough to cast doubt on the correctness of parametric tests for these images. As case studies, we present parametric maps of lesions in patients suffering from stroke and glioblastoma at different stages of evolution. Hum Brain Mapp 2007. © 2007 Wiley-Liss, Inc. [source]

    Wobbles, humps and sudden jumps: a case study of continuity, discontinuity and variability in early language development

    Marijn van Dijk
    Abstract Current individual-based, process-oriented approaches (dynamic systems theory and the microgenetic perspective) have led to an increase of variability-centred studies in the literature. The aim of this article is to propose a technique that incorporates variability in the analysis of the shape of developmental change. This approach is illustrated by the analysis of time serial language data, in particular data on the development of preposition use, collected from four participants. Visual inspection suggests that the development of prepositions-in-contexts shows a characteristic pattern of two phases, corresponding with a discontinuity. Three criteria for testing such discontinuous phase-wise change in individual data are presented and applied to the data. These are: (1) the sub-pattern criterion, (2) the peak criterion and (3) the membership criterion. The analyses rely on bootstrap and resampling procedures based on various null hypotheses. The results show that there are some indications of discontinuity in all participants, although clear inter-individual differences have been found, depending on the criteria used. In the discussion we will address several fundamental issues concerning (dis)continuity and variability in individual-based, process-oriented data sets. Copyright © 2007 John Wiley & Sons, Ltd. [source]


    Valentina Corradi
    We introduce block bootstrap techniques that are (first order) valid in recursive estimation frameworks. Thereafter, we present two examples where predictive accuracy tests are made operational using our new bootstrap procedures. In one application, we outline a consistent test for out-of-sample nonlinear Granger causality, and in the other we outline a test for selecting among multiple alternative forecasting models, all of which are possibly misspecified. In a Monte Carlo investigation, we compare the finite sample properties of our block bootstrap procedures with the parametric bootstrap due to Kilian (Journal of Applied Econometrics 14 (1999), 491-510), within the context of encompassing and predictive accuracy tests. In the empirical illustration, it is found that unemployment has nonlinear marginal predictive content for inflation. [source]
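A minimal moving block bootstrap, the building block behind the procedures above: rebuild a series from randomly chosen overlapping blocks so that short-range dependence is preserved within each block. The AR(1) data and the block-length rule T^(1/3) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

# An autocorrelated AR(1) series to resample.
T = 300
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + rng.normal()

def moving_block_bootstrap(series, block_len, rng):
    """Concatenate randomly chosen overlapping blocks to rebuild a series
    of the same length, preserving dependence within each block."""
    n = series.size
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    pieces = [series[s: s + block_len] for s in starts]
    return np.concatenate(pieces)[:n]

# Bootstrap standard error of the sample mean.
L = max(int(T ** (1 / 3)), 1)
boot_means = np.array([
    moving_block_bootstrap(y, L, rng).mean() for _ in range(500)
])
se_mean = boot_means.std(ddof=1)
```

Because blocks carry the serial dependence, this standard error is larger than the naive iid-bootstrap estimate would be for positively autocorrelated data.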

    Evaluating screening questionnaires using Receiver Operating Characteristic (ROC) curves from two-phase (double) samples

    Giulia Bisoffi
    Abstract The characteristics of psychiatric screening tests (for example, sensitivity, specificity, and AUC, the area under an ROC curve) are frequently assessed using data arising from two-phase samples. Too often, however, the statistical methods that are used are incorrect. They do not appropriately account for the sampling design. Valid methods for the estimate of sensitivity, specificity and, in particular, the AUC, together with its standard error, are discussed in detail and a Stata macro for the implementation of these methods is provided. Simple weighting procedures are used to correct for verification biases arising from the two-phase design, together with bootstrap or jackknife sampling for the calculation of valid standard errors. Copyright © 2000 Whurr Publishers Ltd. [source]

    Realising the future: forecasting with high-frequency-based volatility (HEAVY) models

    Professor Neil Shephard
    This paper studies in some detail a class of high-frequency-based volatility (HEAVY) models. These models are direct models of daily asset return volatility based on realised measures constructed from high-frequency data. Our analysis identifies that the models have momentum and mean reversion effects, and that they adjust quickly to structural breaks in the level of the volatility process. We study how to estimate the models and how they perform through the credit crunch, comparing their fit to more traditional GARCH models. We analyse a model-based bootstrap which allows us to estimate the entire predictive distribution of returns. We also provide an analysis of missing data in the context of these models. Copyright © 2010 John Wiley & Sons, Ltd. [source]

    Statistical inference for aggregates of Farrell-type efficiencies

    Léopold Simar
    In this study, we merge results of two recent directions in efficiency analysis research, aggregation and bootstrap, applied, as an example, to one of the most popular point estimators of individual efficiency: the data envelopment analysis (DEA) estimator. A natural context of the methodology developed here is a study of efficiency of a particular economic system (e.g., an industry) as a whole, or a comparison of efficiencies of distinct groups within such a system (e.g., regulated vs. non-regulated firms or private vs. public firms). Our methodology is justified by the (neoclassical) economic theory and is supported by carefully adapted statistical methods. Copyright © 2007 John Wiley & Sons, Ltd. [source]

    Phylogenetic study on Shiraia bambusicola by rDNA sequence analyses

    Tian-Fan Cheng
    In this study, 18S rDNA and ITS-5.8S rDNA regions of four Shiraia bambusicola isolates collected from different species of bamboos were amplified by PCR with universal primer pairs NS1/NS8 and ITS5/ITS4, respectively, and sequenced. Phylogenetic analyses were conducted on three selected datasets of rDNA sequences. Maximum parsimony, distance and maximum likelihood criteria were used to infer trees. Morphological characteristics were also observed. The positioning of Shiraia in the order Pleosporales was well supported by bootstrap, in agreement with the placement proposed by Amano (1980) on morphological grounds. We did not find significant inter-host differences among these four isolates from different species of bamboos. From the results of analyses and comparison of their rDNA sequences, we conclude that Shiraia should be classified into Pleosporales, as Amano (1980) proposed, and suggest that it might be positioned in the family Phaeosphaeriaceae. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]

    Bootstrap methods for assessing the performance of near-infrared pattern classification techniques

    Brandye M. Smith
    Abstract Two parametric bootstrap techniques were applied to near-infrared (NIR) pattern classification models for two classes of microcrystalline cellulose, Avicel® PH101 and PH102, which differ only in particle size. The development of pattern classification models for similar substances is difficult, since their characteristic clusters overlap. Bootstrapping was used to enlarge small test sets for a better approximation of the overlapping area of these nearly identical substances, consequently resulting in better estimates of misclassification rates. A bootstrap that resampled the residuals, referred to as the outside model space bootstrap in this paper, and a novel bootstrap that resampled principal component scores, referred to as the inside model space bootstrap, were studied. A comparison revealed that classification rates for both bootstrap techniques were similar to the original test set classification rates. The bootstrap method developed in this study, which resampled the principal component scores, was more effective for estimating misclassification volumes than the residual-resampling method. Copyright © 2002 John Wiley & Sons, Ltd. [source]
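One possible reading of resampling "inside the model space" is to bootstrap principal component scores class by class to enlarge a small test set. The sketch below does this for toy two-class data with a nearest-centroid classifier; all data, the score-noise level, and the classifier are assumptions for illustration, not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy two-class spectra-like data (rows = samples, columns = "wavelengths");
# class B differs from class A only by a small uniform offset.
n, p = 40, 20
base = rng.normal(size=p)
classA = base + rng.normal(scale=0.10, size=(n, p))
classB = base + 0.05 + rng.normal(scale=0.10, size=(n, p))
X = np.vstack([classA, classB])
Xc = X - X.mean(axis=0)

# PCA via SVD; keep the first 2 components as the "model space".
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, :2] * s[:2]

def resample_scores(class_scores, B, rng):
    """Bootstrap PC scores within a class, with a little jitter."""
    idx = rng.integers(0, class_scores.shape[0], size=B)
    return class_scores[idx] + rng.normal(scale=0.01, size=(B, 2))

bootA = resample_scores(scores[:n], 500, rng)
bootB = resample_scores(scores[n:], 500, rng)

# Nearest-class-centroid classification in score space.
cA, cB = scores[:n].mean(axis=0), scores[n:].mean(axis=0)

def err(points, own, other):
    dA = np.linalg.norm(points - own, axis=1)
    dB = np.linalg.norm(points - other, axis=1)
    return (dA > dB).mean()

misclass = 0.5 * (err(bootA, cA, cB) + err(bootB, cB, cA))
```

The enlarged bootstrap test sets give a smoother estimate of the overlap region, and hence of the misclassification rate, than the 40 original samples per class would.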

    Correction of a bootstrap approach to testing for evolution along lines of least resistance

    Abstract Testing for an association between the leading vectors of multivariate trait (co)variation within populations (the 'line of least resistance') and among populations is an important tool for exploring variational bias in evolution. In a recent study of stickleback fish populations, a bootstrap-based test was introduced that takes into account estimation error in both vectors and hence improves the previously available bootstrap method. Because this test was implemented incorrectly, however, I here describe the correct test protocol and provide a reanalysis of the original data set. The application of this new test protocol should improve future investigations of evolution along lines of least resistance and other vector comparisons. [source]

    Selection of evolutionary models for phylogenetic hypothesis testing using parametric methods

    B. C. Emerson
    Recent molecular studies have incorporated the parametric bootstrap method to test a priori hypotheses when the results of molecular-based phylogenies are in conflict with these hypotheses. The parametric bootstrap requires the specification of a particular substitutional model, the parameters of which are used to generate simulated, replicate DNA sequence data sets. It has been suggested both that (a) the method appears robust to changes in the model of evolution, and, alternatively, that (b) as realistic a model of DNA substitution as possible should be used to avoid false rejection of a null hypothesis. Here we empirically evaluate the effect of suboptimal substitution models when testing hypotheses of monophyly with the parametric bootstrap, using data sets of mtDNA cytochrome oxidase I and II (COI and COII) sequences for Macaronesian Calathus beetles, and mitochondrial 16S rDNA and nuclear ITS2 sequences for European Timarcha beetles. Whether a particular hypothesis of monophyly is rejected or accepted appears to be highly dependent on whether the nucleotide substitution model being used is optimal. A parameter-rich model appears either equally or less likely to reject a hypothesis of monophyly when the optimal model is unknown. A comparison of the performance of the Kishino-Hasegawa (KH) test shows it is not as severely affected by the use of suboptimal models, and overall it appears to be a less conservative method with a higher rate of failure to reject null hypotheses. [source]

    A neural network versus Black–Scholes: a comparison of pricing and hedging performances

    Henrik Amilon
    Abstract An Erratum has been published for this article in Journal of Forecasting 22(6-7) 2003, 551 The Black–Scholes formula is a well-known model for pricing and hedging derivative securities. It relies, however, on several highly questionable assumptions. This paper examines whether a neural network (MLP) can be used to find a call option pricing formula better corresponding to market prices and the properties of the underlying asset than the Black–Scholes formula. The neural network method is applied to the out-of-sample pricing and delta-hedging of daily Swedish stock index call options from 1997 to 1999. The relevance of a hedge analysis is stressed further in this paper. As benchmarks, Black–Scholes models with historical and implied volatility estimates are used. Comparisons reveal that the neural network models outperform the benchmarks in both pricing and hedging performances. A moving block bootstrap is used to test the statistical significance of the results. Although the neural networks are superior, the results are sometimes insignificant at the 5% level. Copyright © 2003 John Wiley & Sons, Ltd. [source]
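
    The moving block bootstrap used for those significance tests resamples overlapping blocks of a series, so short-range dependence is preserved within each block. A minimal sketch on a simulated AR(1) series (the series, block length, and names are assumptions, not the paper's data):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def moving_block_bootstrap(x, block_len, n_boot=1000, stat=np.mean):
        # Draw overlapping blocks with random start points and glue
        # enough of them together to rebuild a series of length n.
        n = len(x)
        n_blocks = int(np.ceil(n / block_len))
        starts = np.arange(n - block_len + 1)
        out = np.empty(n_boot)
        for b in range(n_boot):
            picks = rng.choice(starts, size=n_blocks)
            series = np.concatenate([x[s:s + block_len] for s in picks])[:n]
            out[b] = stat(series)
        return out

    # AR(1) series: observations are dependent, so an i.i.d. bootstrap
    # would understate the variance of the sample mean.
    n = 500
    e = rng.normal(size=n)
    x = np.empty(n)
    x[0] = e[0]
    for t in range(1, n):
        x[t] = 0.7 * x[t - 1] + e[t]

    boot_means = moving_block_bootstrap(x, block_len=25)
    lo, hi = np.percentile(boot_means, [2.5, 97.5])
    print(round(lo, 2), round(hi, 2))
    ```

    In a pricing-error comparison, `stat` would be the difference in mean (or squared) hedging errors between two models rather than a simple mean.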

    Bootstrap prediction intervals for autoregressive models of unknown or infinite lag order

    Jae H. Kim
    Abstract Recent studies on bootstrap prediction intervals for autoregressive (AR) models provide simulation findings when the lag order is known. In practical applications, however, the AR lag order is unknown or can even be infinite. This paper is concerned with prediction intervals for AR models of unknown or infinite lag order. Akaike's information criterion is used to estimate (approximate) the unknown (infinite) AR lag order. Small-sample properties of bootstrap and asymptotic prediction intervals are compared under both normal and non-normal innovations. Bootstrap prediction intervals are constructed based on the percentile and percentile-t methods, using the standard bootstrap as well as the bootstrap-after-bootstrap. It is found that bootstrap-after-bootstrap prediction intervals show small-sample properties substantially better than other alternatives, especially when the sample size is small and the model has a unit root or near-unit root. Copyright © 2002 John Wiley & Sons, Ltd. [source]
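
    As a simplified sketch of the standard percentile-method residual bootstrap the paper compares against (not the bootstrap-after-bootstrap itself), a one-step-ahead interval for an AR(1) fitted by least squares might look as follows; all names and the simulated data are assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def ar1_fit(x):
        # Least-squares AR(1) coefficient and centred residuals.
        phi = (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])
        resid = x[1:] - phi * x[:-1]
        return phi, resid - resid.mean()

    def bootstrap_forecast_interval(x, n_boot=2000, level=0.95):
        phi, resid = ar1_fit(x)
        fcasts = np.empty(n_boot)
        for b in range(n_boot):
            # Rebuild a series from resampled residuals and refit,
            # so estimation error in phi enters the interval.
            e = rng.choice(resid, size=len(x))
            xb = np.empty(len(x))
            xb[0] = x[0]
            for t in range(1, len(x)):
                xb[t] = phi * xb[t - 1] + e[t]
            phi_b, _ = ar1_fit(xb)
            # Add a resampled innovation for the future forecast error.
            fcasts[b] = phi_b * x[-1] + rng.choice(resid)
        a = (1 - level) / 2
        return np.percentile(fcasts, [100 * a, 100 * (1 - a)])

    # Simulated AR(1) data with phi = 0.6 and unit-variance innovations.
    n = 300
    x = np.empty(n)
    x[0] = 0.0
    for t in range(1, n):
        x[t] = 0.6 * x[t - 1] + rng.normal()

    lo, hi = bootstrap_forecast_interval(x)
    print(round(lo, 2), round(hi, 2))
    ```

    The paper's setting replaces the fixed AR(1) with an AIC-selected AR(p) and adds a second bootstrap layer to correct the small-sample bias of the estimated coefficients.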

    A joint test of market power, menu costs, and currency invoicing

    Jean-Philippe Gervais
    Exchange rate pass-through; Currency invoicing; Menu costs; Threshold estimation Abstract This article investigates exchange rate pass-through (ERPT) and currency invoicing decisions of Canadian pork exporters in the presence of menu costs. It is shown that when export prices are negotiated in the exporter's currency, menu costs cause threshold effects in the sense that there are bounds within (outside of) which price adjustments are not (are) observed. Conversely, the pass-through is not interrupted by menu costs when export prices are denominated in the importer's currency. The empirical model focuses on pork meat exports from two Canadian provinces to the U.S. and Japan. Hansen's (2000) threshold estimation procedure is used to jointly test for currency invoicing and incomplete pass-through in the presence of menu costs. Inference is conducted using the bootstrap with pre-pivoting methods to deal with nuisance parameters. The existence of menu costs is supported by the data in three of the four cases. It also appears that Quebec pork exporters have some market power and invoice their exports to Japan in Japanese yen. Manitoba exporters also seem to follow the same invoicing strategy, but their ability to increase their profit margin in response to large enough own-currency devaluations is questionable. Our currency invoicing results for sales to the U.S. are consistent with subsets of Canadian firms using either the Canadian or U.S. currency. [source]

    Application of the parametric bootstrap method to determine statistical errors in quantitative X-ray microanalysis of thin films

    Summary We applied the parametric bootstrap to the X-ray microanalysis of Si–Ge binary alloys in order to assess how the Ge concentrations and the local film thickness, obtained using previously described Monte Carlo methods, depend on the precision of the measured intensities. We show how it is possible by this method to determine the statistical errors associated with the quantitative analysis performed in sample regions of different composition and thickness, while conducting only one measurement. We recommend the use of the bootstrap for a broad range of applications in quantitative microanalysis, to estimate the precision of the final results and to compare the performance of different methods. Finally, we used a test based on bootstrap confidence intervals to ascertain whether, for given X-ray intensities, different values of the estimated composition in two points of the sample are indicative of an actual lack of homogeneity. [source]
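
    Because X-ray intensities are counting data, a parametric bootstrap from a single measurement can be sketched by drawing Poisson replicates of the observed counts and pushing each replicate through the quantification step. The one-parameter sensitivity-factor model below is a toy assumption standing in for the paper's Monte Carlo quantification:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def concentration_ratio(i_si, i_ge, k=1.2):
        # Toy quantification: convert a Ge/Si intensity ratio into an
        # assumed Ge mass fraction via a single sensitivity factor k.
        r = k * i_ge / i_si
        return r / (1.0 + r)

    def parametric_bootstrap_error(i_si, i_ge, n_boot=5000):
        # Counts are Poisson distributed, so replicate intensities can be
        # simulated directly from the single observed measurement.
        si_b = rng.poisson(i_si, n_boot)
        ge_b = rng.poisson(i_ge, n_boot)
        c_b = concentration_ratio(si_b, ge_b)
        return c_b.mean(), c_b.std(ddof=1), np.percentile(c_b, [2.5, 97.5])

    # One measurement: 12000 Si counts and 3000 Ge counts (illustrative).
    mean_c, sd_c, (lo, hi) = parametric_bootstrap_error(12000, 3000)
    print(round(mean_c, 4), round(sd_c, 4))
    ```

    The homogeneity test mentioned in the abstract then amounts to checking whether the bootstrap confidence intervals for two sample points overlap.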