Statistical Inference (statistical + inference)



Selected Abstracts


Principles of Statistical Inference

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 1 2008
Alan Kimber
No abstract is available for this article. [source]


Introductory Statistical Inference by N. Mukhopadhyay

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 1 2007
Hassan S. Bakouch
No abstract is available for this article. [source]


Essentials of Statistical Inference by G. A. Young and R. L. Smith

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 4 2006
Thomas Hochkirchen
No abstract is available for this article. [source]


Constrained Statistical Inference: Inequality, Order and Shape Restrictions by M. J. Silvapulle and P. K. Sen

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 3 2006
Eugenia Stoimenova
No abstract is available for this article. [source]


Statistical Inference and Changes in Income Inequality in Australia

THE ECONOMIC RECORD, Issue 247 2003
George Athanasopoulos
This paper studies the changes in income inequality in Australia between 1986 and 1999, using the Gini coefficient and Theil's inequality measure. Individuals are divided into various subgroups along several dimensions, namely region of residence, employment status, occupation and age. The change in inequality over time, between and within these subgroups, is studied, and the bootstrap method is used to establish whether these changes are statistically significant. [source]
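
The bootstrap assessment of inequality changes described above can be sketched in a few lines. Below is a minimal illustration, assuming simulated lognormal incomes in place of the paper's survey data: it computes the Gini coefficient for two years and a percentile-bootstrap confidence interval for the change (Theil's measure and the subgroup decompositions are omitted).

```python
import numpy as np

def gini(incomes):
    """Gini coefficient via the sorted-weights formula G = sum_i (2i - n - 1) x_(i) / (n * sum x)."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    weights = 2 * np.arange(1, n + 1) - n - 1
    return weights @ x / (n * x.sum())

def bootstrap_gini_change(sample_a, sample_b, n_boot=2000, seed=0):
    """Percentile bootstrap CI for the change in the Gini coefficient between two samples."""
    rng = np.random.default_rng(seed)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        resample_a = rng.choice(sample_a, size=len(sample_a), replace=True)
        resample_b = rng.choice(sample_b, size=len(sample_b), replace=True)
        diffs[b] = gini(resample_b) - gini(resample_a)
    return np.percentile(diffs, [2.5, 97.5])

# Illustrative (simulated) income samples for two survey years
rng = np.random.default_rng(1)
incomes_1986 = rng.lognormal(mean=10.0, sigma=0.6, size=3000)
incomes_1999 = rng.lognormal(mean=10.2, sigma=0.7, size=3000)

lo, hi = bootstrap_gini_change(incomes_1986, incomes_1999)
print(f"Gini 1986: {gini(incomes_1986):.3f}, Gini 1999: {gini(incomes_1999):.3f}")
print(f"95% bootstrap CI for the change: [{lo:.3f}, {hi:.3f}]")
```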


Statistical Inference For Risk Difference in an Incomplete Correlated 2 × 2 Table

BIOMETRICAL JOURNAL, Issue 1 2003
Nian-Sheng Tang
Abstract In some infectious disease studies and two-step treatment studies, a 2 × 2 table with a structural zero can arise when it is theoretically impossible for a particular cell to contain observations or when a structural void is introduced by design. In this article, we propose a score test of hypotheses pertaining to the marginal and conditional probabilities in a 2 × 2 table with a structural zero via the risk/rate difference measure. A score-test-based confidence interval is also outlined. We evaluate the performance of the score test and the existing likelihood ratio test. Our empirical results show that the two tests (with appropriate adjustments) perform similarly and satisfactorily in terms of coverage probability and expected interval width, and both perform consistently well in small- to moderate-sample designs. The score test, however, has the advantage that it is undefined in only one scenario, whereas the likelihood ratio test can be undefined in many scenarios. We illustrate our method with a real example from a two-step tuberculosis skin test study. [source]


Models for Probability and Statistical Inference: Theory and Applications by J. H. Stapleton

BIOMETRICS, Issue 1 2009
Article first published online: 17 MAR 200
No abstract is available for this article. [source]


Statistical Inference in a Stochastic Epidemic SEIR Model with Control Intervention: Ebola as a Case Study

BIOMETRICS, Issue 4 2006
Phenyo E. Lekone
Summary A stochastic discrete-time susceptible-exposed-infectious-recovered (SEIR) model for infectious diseases is developed with the aim of estimating parameters from daily incidence and mortality time series for an outbreak of Ebola in the Democratic Republic of Congo in 1995. The incidence time series exhibit many low integers as well as zero counts requiring an intrinsically stochastic modeling approach. In order to capture the stochastic nature of the transitions between the compartmental populations in such a model we specify appropriate conditional binomial distributions. In addition, a relatively simple temporally varying transmission rate function is introduced that allows for the effect of control interventions. We develop Markov chain Monte Carlo methods for inference that are used to explore the posterior distribution of the parameters. The algorithm is further extended to integrate numerically over state variables of the model, which are unobserved. This provides a realistic stochastic model that can be used by epidemiologists to study the dynamics of the disease and the effect of control interventions. [source]
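
A minimal simulation of the model class described here, with illustrative (not estimated) parameter values: a discrete-time chain-binomial SEIR in which the transmission rate decays exponentially once control interventions start. The MCMC estimation step of the paper is not reproduced; this only shows the stochastic forward model.

```python
import numpy as np

def simulate_seir(N=5000, E0=5, I0=1, days=200, beta0=0.33, q=0.2,
                  t_control=60, incubation=5.3, infectious=5.6, seed=0):
    """Discrete-time chain-binomial SEIR; transmission decays exponentially
    after control interventions begin at day t_control."""
    rng = np.random.default_rng(seed)
    S, E, I, R = N - E0 - I0, E0, I0, 0
    cases = []
    for t in range(days):
        beta_t = beta0 if t < t_control else beta0 * np.exp(-q * (t - t_control))
        p_inf = 1.0 - np.exp(-beta_t * I / N)     # prob. a susceptible is exposed today
        p_inc = 1.0 - np.exp(-1.0 / incubation)   # prob. an exposed becomes infectious
        p_rem = 1.0 - np.exp(-1.0 / infectious)   # prob. an infectious is removed
        new_E = rng.binomial(S, p_inf)            # conditional binomial transitions
        new_I = rng.binomial(E, p_inc)
        new_R = rng.binomial(I, p_rem)
        S, E, I, R = S - new_E, E + new_E - new_I, I + new_I - new_R, R + new_R
        cases.append(new_I)                       # daily incidence (onset of infectiousness)
    return np.array(cases)

daily_incidence = simulate_seir()
print("total cases:", daily_incidence.sum())
```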


Statistical Inference for Familial Disease Clusters

BIOMETRICS, Issue 3 2002
Chang Yu
Summary. In many epidemiologic studies, the first indication of an environmental or genetic contribution to the disease is the way in which the diseased cases cluster within the same family units. The concept of clustering is contrasted with incidence. We assume that all individuals are exchangeable except for their disease status. This assumption is used to provide an exact test of the initial hypothesis of no familial link with the disease, conditional on the number of diseased cases and the distribution of the sizes of the various family units. New parametric generalizations of binomial sampling models are described to provide measures of the effect size of the disease clustering. We consider models and an example that takes covariates into account. Ascertainment bias is described and the appropriate sampling distribution is demonstrated. Four numerical examples with real data illustrate these methods. [source]
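
The exchangeability assumption suggests a simple Monte Carlo version of the exact test: condition on the family sizes and the total number of cases, reassign case labels at random, and compare a clustering statistic. The sketch below uses the number of within-family affected pairs as the statistic and hypothetical family data; it is not the authors' exact enumeration or their parametric sampling models.

```python
import numpy as np

def clustering_statistic(cases_per_family):
    """Number of affected pairs within the same family (a simple clustering measure)."""
    c = np.asarray(cases_per_family)
    return int((c * (c - 1) // 2).sum())

def permutation_pvalue(family_sizes, cases_per_family, n_perm=20000, seed=0):
    """Monte Carlo exchangeability test: condition on the total number of cases and
    the family sizes, and randomly reassign which individuals are cases."""
    rng = np.random.default_rng(seed)
    family_sizes = np.asarray(family_sizes)
    total_cases = int(np.sum(cases_per_family))
    observed = clustering_statistic(cases_per_family)
    labels = np.repeat(np.arange(len(family_sizes)), family_sizes)  # one label per person
    count = 0
    for _ in range(n_perm):
        chosen = rng.choice(labels, size=total_cases, replace=False)
        perm_cases = np.bincount(chosen, minlength=len(family_sizes))
        if clustering_statistic(perm_cases) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)

# Hypothetical data: six families, their sizes and observed case counts
sizes = [4, 3, 5, 2, 6, 4]
cases = [3, 0, 1, 0, 0, 1]
print("p-value:", permutation_pvalue(sizes, cases))
```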


Zero tolerance ecology: improving ecological inference by modelling the source of zero observations

ECOLOGY LETTERS, Issue 11 2005
Tara G. Martin
Abstract A common feature of ecological data sets is their tendency to contain many zero values. Statistical inference based on such data is likely to be inefficient or wrong unless careful thought is given to how these zeros arose and how best to model them. In this paper, we propose a framework for understanding how zero-inflated data sets originate and deciding how best to model them. We define and classify the different kinds of zeros that occur in ecological data and describe how they arise: either from 'true zero' or 'false zero' observations. After reviewing recent developments in modelling zero-inflated data sets, we use practical examples to demonstrate how failing to account for the source of zero inflation can reduce our ability to detect relationships in ecological data and at worst lead to incorrect inference. The adoption of methods that explicitly model the sources of zero observations will sharpen insights and improve the robustness of ecological analyses. [source]
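
One standard tool for handling 'false zeros' is a zero-inflation mixture. As a hedged illustration, the following fits a zero-inflated Poisson (no covariates) by maximum likelihood to simulated counts, using a hand-coded likelihood rather than any specific package discussed in the paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, expit

def zip_negloglik(params, y):
    """Negative log-likelihood of a zero-inflated Poisson: with probability pi the
    observation is a 'false zero', otherwise it is Poisson(lam)."""
    pi, lam = expit(params[0]), np.exp(params[1])   # transforms keep parameters in range
    loglik_zero = np.log(pi + (1 - pi) * np.exp(-lam))
    loglik_pos = np.log(1 - pi) - lam + y * np.log(lam) - gammaln(y + 1)
    return -np.sum(np.where(y == 0, loglik_zero, loglik_pos))

# Simulated counts with excess zeros
rng = np.random.default_rng(0)
n = 500
true_pi, true_lam = 0.35, 2.0
is_false_zero = rng.random(n) < true_pi
y = np.where(is_false_zero, 0, rng.poisson(true_lam, n))

fit = minimize(zip_negloglik, x0=[0.0, 0.0], args=(y,), method="BFGS")
pi_hat, lam_hat = expit(fit.x[0]), np.exp(fit.x[1])
print(f"estimated zero-inflation prob.: {pi_hat:.2f}, Poisson mean: {lam_hat:.2f}")
```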


Statistical inference for aggregates of Farrell-type efficiencies

JOURNAL OF APPLIED ECONOMETRICS, Issue 7 2007
Léopold Simar
In this study, we merge the results of two recent directions in efficiency analysis research (aggregation and bootstrap), applied, as an example, to one of the most popular point estimators of individual efficiency: the data envelopment analysis (DEA) estimator. A natural context for the methodology developed here is a study of the efficiency of a particular economic system (e.g., an industry) as a whole, or a comparison of the efficiencies of distinct groups within such a system (e.g., regulated vs. non-regulated firms or private vs. public firms). Our methodology is justified by (neoclassical) economic theory and is supported by carefully adapted statistical methods. Copyright © 2007 John Wiley & Sons, Ltd. [source]
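
A rough sketch of the DEA point estimator the authors start from: an input-oriented, constant-returns linear program per firm, followed by a simple output-share aggregate. The firms, inputs and outputs are simulated, and the paper's bootstrap and formal aggregation theory are not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_efficiency(X, Y, o):
    """Input-oriented, constant-returns DEA efficiency of firm o.
    X: (n, p) inputs, Y: (n, q) outputs. Solves min theta s.t. the reference
    technology can produce firm o's outputs with theta times its inputs."""
    n, p = X.shape
    q = Y.shape[1]
    c = np.zeros(1 + n)
    c[0] = 1.0                                    # minimise theta; remaining vars are lambdas
    A_ub, b_ub = [], []
    for i in range(p):                            # sum_j lam_j * x_ji <= theta * x_oi
        A_ub.append(np.concatenate(([-X[o, i]], X[:, i])))
        b_ub.append(0.0)
    for r in range(q):                            # sum_j lam_j * y_jr >= y_or
        A_ub.append(np.concatenate(([0.0], -Y[:, r])))
        b_ub.append(-Y[o, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (1 + n), method="highs")
    return res.x[0]

rng = np.random.default_rng(0)
n = 30
X = rng.uniform(1, 10, size=(n, 2))               # two inputs per firm
Y = (X @ np.array([[0.5], [0.8]])) * rng.uniform(0.6, 1.0, size=(n, 1))  # one output

eff = np.array([dea_input_efficiency(X, Y, o) for o in range(n)])
weights = Y[:, 0] / Y[:, 0].sum()                 # output-share aggregation weights
print("aggregate (output-weighted) efficiency:", float(weights @ eff))
```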


Bayesian incidence analysis of animal tumorigenicity data

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 2 2001
D. B. Dunson
Statistical inference about tumorigenesis should focus on the tumour incidence rate. Unfortunately, in most animal carcinogenicity experiments, tumours are not observable in live animals and censoring of the tumour onset times is informative. In this paper, we propose a Bayesian method for analysing data from such studies. Our approach focuses on the incidence of tumours and accommodates occult tumours and censored onset times without restricting tumour lethality, relying on cause-of-death data, or requiring interim sacrifices. We represent the underlying state of nature by a multistate stochastic process and assume general probit models for the time-specific transition rates. These models allow the incorporation of covariates, historical control data and subjective prior information. The inherent flexibility of this approach facilitates the interpretation of results, particularly when the sample size is small or the data are sparse. We use a Gibbs sampler to estimate the relevant posterior distributions. The methods proposed are applied to data from a US National Toxicology Program carcinogenicity study. [source]


Assessing teratogenicity of antiretroviral drugs: monitoring and analysis plan of the Antiretroviral Pregnancy Registry

PHARMACOEPIDEMIOLOGY AND DRUG SAFETY, Issue 8 2004
Deborah L. Covington DrPH
Abstract This paper describes the Antiretroviral Pregnancy Registry's (APR) monitoring and analysis plan. APR is overseen by a committee of experts in obstetrics, pediatrics, teratology, infectious diseases, epidemiology and biostatistics from academia, government and the pharmaceutical industry. APR uses a prospective exposure-registration cohort design. Clinicians voluntarily register pregnant women with prenatal exposures to any antiretroviral therapy and provide fetal/neonatal outcomes. A birth defect is any birth outcome of ≥20 weeks gestation with a structural or chromosomal abnormality as determined by a geneticist. The prevalence is calculated by dividing the number of defects by the total number of live births and is compared to the prevalence in the CDC's population-based surveillance system. Additionally, first trimester exposures, during which organogenesis occurs, are compared with second/third trimester exposures. Statistical inference is based on exact methods for binomial proportions. Overall, a cohort of 200 exposed newborns is required to detect a doubling of risk, with 80% power and a Type I error rate of 5%. APR uses the Rule of Three: immediate review occurs once three specific defects are reported for a specific exposure. The likelihood of finding three specific defects in a cohort of ≥600 by chance alone is less than 5% for all but the most common defects. To enhance the assurance of prompt, responsible, and appropriate action in the event of a potential signal, APR employs a 'threshold' strategy. The threshold for action is determined by the extent of certainty about the cases, driven by statistical considerations and tempered by the specifics of the cases. Copyright © 2004 John Wiley & Sons, Ltd. [source]
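
The exact binomial arithmetic behind the Rule of Three and the power statement can be reproduced directly; the prevalence figures below are illustrative assumptions, not APR values.

```python
from scipy.stats import binom

# --- Rule of Three: chance of >= 3 reports of one specific defect by chance alone ---
n_cohort = 600
p_specific = 1 / 1000          # illustrative prevalence of one specific defect
p_three_plus = 1 - binom.cdf(2, n_cohort, p_specific)
print(f"P(>=3 specific defects with no real signal) = {p_three_plus:.3f}")

# --- Power to detect a doubling of overall defect risk with 200 exposed births ---
n_exposed, alpha = 200, 0.05
p0, p1 = 0.03, 0.06            # illustrative background rate and doubled rate
# smallest count k whose one-sided exact p-value under p0 is <= alpha
k_crit = next(k for k in range(n_exposed + 1)
              if 1 - binom.cdf(k - 1, n_exposed, p0) <= alpha)
power = 1 - binom.cdf(k_crit - 1, n_exposed, p1)
print(f"critical count: {k_crit}, power against a doubled risk: {power:.2f}")
```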


Inference in Long-Horizon Event Studies: A Bayesian Approach with Application to Initial Public Offerings

THE JOURNAL OF FINANCE, Issue 5 2000
Alon Brav
Statistical inference in long-horizon event studies has been hampered by the fact that abnormal returns are neither normally distributed nor independent. This study presents a new approach to inference that overcomes these difficulties and dominates other popular testing methods. I illustrate the use of the methodology by examining the long-horizon returns of initial public offerings (IPOs). I find that the Fama and French (1993) three-factor model is inconsistent with the observed long-horizon price performance of these IPOs, whereas a characteristic-based model cannot be rejected. [source]


Polynomial Spline Estimation and Inference of Proportional Hazards Regression Models with Flexible Relative Risk Form

BIOMETRICS, Issue 3 2006
Jianhua Z. Huang
Summary The Cox proportional hazards model usually assumes an exponential form for the dependence of the hazard function on covariates. However, in practice this assumption may be violated and other relative risk forms may be more appropriate. In this article, we consider the proportional hazards model with an unknown relative risk form. Issues in model interpretation are addressed. We propose a method to estimate the relative risk form and the regression parameters simultaneously by first approximating the logarithm of the relative risk form by a spline, and then employing maximum partial likelihood estimation. An iterative alternating optimization procedure is developed for efficient implementation. Statistical inference for the regression coefficients and the relative risk form based on parametric asymptotic theory is discussed. The proposed methods are illustrated using simulation and an application to the Veterans Administration lung cancer data. [source]


Survival Analysis in Clinical Trials: Past Developments and Future Directions

BIOMETRICS, Issue 4 2000
Thomas R. Fleming
Summary. The field of survival analysis emerged in the 20th century and experienced tremendous growth during the latter half of the century. The developments in this field that have had the most profound impact on clinical trials are the Kaplan-Meier (1958, Journal of the American Statistical Association 53, 457–481) method for estimating the survival function, the log-rank statistic (Mantel, 1966, Cancer Chemotherapy Reports 50, 163–170) for comparing two survival distributions, and the Cox (1972, Journal of the Royal Statistical Society, Series B 34, 187–220) proportional hazards model for quantifying the effects of covariates on the survival time. The counting-process martingale theory pioneered by Aalen (1975, Statistical inference for a family of counting processes, Ph.D. dissertation, University of California, Berkeley) provides a unified framework for studying the small- and large-sample properties of survival analysis statistics. Significant progress has been achieved and further developments are expected in many other areas, including the accelerated failure time model, multivariate failure time data, interval-censored data, dependent censoring, dynamic treatment regimes and causal inference, joint modeling of failure time and longitudinal data, and Bayesian methods. [source]
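
The three landmark tools named here (Kaplan-Meier estimate, log-rank test, Cox model) can be illustrated on simulated trial data with the third-party lifelines package; using that package is an assumption of this sketch, not something taken from the article.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Simulated two-arm trial with exponential event times and random censoring
rng = np.random.default_rng(0)
n = 200
group = rng.integers(0, 2, n)                      # 0 = control, 1 = treatment
event_time = rng.exponential(scale=np.where(group == 1, 14.0, 10.0))
censor_time = rng.exponential(scale=20.0, size=n)
T = np.minimum(event_time, censor_time)
E = (event_time <= censor_time).astype(int)        # 1 = event observed, 0 = censored

# Kaplan-Meier estimate of the survival function (treatment arm)
km = KaplanMeierFitter().fit(T[group == 1], event_observed=E[group == 1])
print("median survival (treatment):", km.median_survival_time_)

# Log-rank comparison of the two arms
lr = logrank_test(T[group == 0], T[group == 1],
                  event_observed_A=E[group == 0], event_observed_B=E[group == 1])
print("log-rank p-value:", lr.p_value)

# Cox proportional hazards model with the treatment indicator as covariate
cph = CoxPHFitter().fit(pd.DataFrame({"T": T, "E": E, "treated": group}),
                        duration_col="T", event_col="E")
print(cph.params_)
```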


Coordination and Motivation in Flat Hierarchies: The Impact of the Adjudication Culture

ECONOMICA, Issue 288 2005
Rabindra Nath Chakraborty
This paper considers a variation of the partnership game with imperfect public information, in which teams are semi-autonomous. The only hierarchical intervention in teamwork is when a superior is called in by a team member to adjudicate alleged cases of free-riding or unjustified lateral punishment (flat hierarchy) according to publicly known adjudicative rules (adjudication culture), using for statistical inference a publicly known organizational norm for teamwork cooperation. It is shown that it is advantageous to set a non-elitist organizational teamwork norm. Furthermore, fairness in adjudication is valuable for economic reasons alone. [source]


Dose–time–response modeling of longitudinal measurements for neurotoxicity risk assessment

ENVIRONMETRICS, Issue 6 2005
Yiliang Zhu
Abstract Neurotoxic effects are an important non-cancer endpoint in health risk assessment and environmental regulation. Neurotoxicity tests such as neurobehavioral screenings using a functional observational battery generate longitudinal dose–response data to profile neurological effects over time. Analyses of longitudinal neurotoxicological data have mostly relied on analysis of variance; explicit dose–time–response modeling has not been reported in the literature. As dose–response modeling has become an increasingly indispensable component of risk assessment, as required by the use of benchmark doses, there is a strong need for appropriate dose–response models, effective model-fitting techniques, and computational methods for benchmark dose estimation. In this article we propose a family of dose–time–response models, illustrate statistical inference for these models in conjunction with random effects to quantify inter-subject variation, and describe a procedure to profile benchmark dose across time. We illustrate the methods using a dataset from a US EPA experiment involving functional observational battery tests on rats administered a single dose of triethyltin (TET). The results indicate that existing functional observational battery data can be utilized for dose–response and benchmark dose analyses and that the methods can be applied in general settings of neurotoxicity risk assessment. Copyright © 2005 John Wiley & Sons, Ltd. [source]


Analysis of multilocus models of association

GENETIC EPIDEMIOLOGY, Issue 1 2003
B. Devlin
Abstract It is increasingly recognized that multiple genetic variants, within the same or different genes, combine to affect liability for many common diseases. Indeed, the variants may interact among themselves and with environmental factors. Thus realistic genetic/statistical models can include an extremely large number of parameters, and it is by no means obvious how to find the variants contributing to liability. For models of multiple candidate genes and their interactions, we prove that statistical inference can be based on controlling the false discovery rate (FDR), defined as the expected proportion of rejections that are false. Controlling the FDR automatically controls the overall error rate in the special case that all the null hypotheses are true. So do more standard methods such as Bonferroni correction. However, when some null hypotheses are false, the goals of Bonferroni and FDR differ, and FDR will have better power. Model selection procedures, such as forward stepwise regression, are often used to choose important predictors for complex models. By analysis of simulations of such models, we compare a computationally efficient form of forward stepwise regression against the FDR methods. We show that model selection includes numerous genetic variants having no impact on the trait, whereas FDR maintains a false-positive rate very close to the nominal rate. With good control over false positives and better power than Bonferroni, the FDR-based methods we introduce present a viable means of evaluating complex, multivariate genetic models. Naturally, as for any method seeking to explore complex genetic models, the power of the methods is limited by sample size and model complexity. Genet Epidemiol 25:36–47, 2003. © 2003 Wiley-Liss, Inc. [source]
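
A small simulation makes the Bonferroni-versus-FDR contrast concrete. The sketch below generates p-values for a few true effects among many null markers and applies both corrections via statsmodels (use of that package's multipletests helper is an assumption of the sketch); it is not the authors' stepwise-regression comparison.

```python
import numpy as np
from scipy.stats import norm
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_tests, n_true = 100, 10

# z-statistics: 10 markers with a real effect, 90 pure noise
z = np.concatenate([rng.normal(3.0, 1.0, n_true),
                    rng.normal(0.0, 1.0, n_tests - n_true)])
pvals = 2 * norm.sf(np.abs(z))

reject_bonf = multipletests(pvals, alpha=0.05, method="bonferroni")[0]
reject_bh = multipletests(pvals, alpha=0.05, method="fdr_bh")[0]

print("Bonferroni rejections:", reject_bonf.sum(),
      "| true positives:", reject_bonf[:n_true].sum())
print("BH (FDR) rejections:  ", reject_bh.sum(),
      "| true positives:", reject_bh[:n_true].sum())
```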


Exact multivariate tests for brain imaging data

HUMAN BRAIN MAPPING, Issue 1 2002
Rita Almeida
Abstract In positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) data sets, the number of variables is larger than the number of observations. This makes application of multivariate linear model analysis difficult unless the dimension of the data matrix is reduced prior to the analysis. The reduced data set, however, will in general not be normally distributed and therefore the usual multivariate tests will not necessarily be applicable. This problem has not been adequately discussed in the literature concerning multivariate linear analysis of brain imaging data, and no theoretical foundation has been given to support that the null distributions of the tests are as claimed. Our study addresses this issue by introducing a method of constructing test statistics that follow the same distributions as when the data matrix is normally distributed. The method is based on the invariance of certain tests over a large class of distributions of the data matrix. This implies that the method is very general and can be applied to different reductions of the data matrix. As an illustration, we apply a test statistic constructed by the method to test a multivariate hypothesis on a PET data set. The test rejects the null hypothesis of no significant differences in measured brain activity between two conditions. The effect responsible for the rejection of the hypothesis is characterized using canonical variate analysis (CVA) and compared with the result obtained by using univariate regression analysis for each voxel and statistical inference based on the size of activations. The results obtained from CVA and the univariate method are similar. Hum. Brain Mapping 16:24–35, 2002. © 2002 Wiley-Liss, Inc. [source]


A Communication Researchers' Guide to Null Hypothesis Significance Testing and Alternatives

HUMAN COMMUNICATION RESEARCH, Issue 2 2008
Timothy R. Levine
This paper offers a practical guide to using null hypothesis significance testing (NHST) and its alternatives. The focus is on improving the quality of statistical inference in quantitative communication research. More consistent reporting of descriptive statistics, estimates of effect size, confidence intervals around effect sizes, and increasing the statistical power of tests would lead to needed improvements over current practices. Alternatives including confidence intervals, effect tests, equivalence tests, and meta-analysis are discussed. [source]
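
A brief illustration of two of the recommended alternatives, an effect size with a confidence interval and an equivalence (TOST) test, on simulated data; the margin, sample sizes and normal-approximation interval for d are arbitrary choices made for this sketch.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(0.40, 1.0, 120)    # e.g. message condition
b = rng.normal(0.00, 1.0, 120)    # e.g. control condition
n1, n2 = len(a), len(b)

# Cohen's d with an approximate 95% confidence interval
sp = np.sqrt(((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2))
d = (a.mean() - b.mean()) / sp
se_d = np.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
print(f"d = {d:.2f}, 95% CI [{d - 1.96 * se_d:.2f}, {d + 1.96 * se_d:.2f}]")

# Equivalence test (TOST): is the raw mean difference inside +/- 0.2 pooled SDs?
diff = a.mean() - b.mean()
se = np.sqrt(a.var(ddof=1) / n1 + b.var(ddof=1) / n2)
df = n1 + n2 - 2
delta = 0.2 * sp
p_tost = max(stats.t.sf((diff + delta) / se, df),   # H1: diff > -delta
             stats.t.cdf((diff - delta) / se, df))  # H1: diff < +delta
print(f"TOST p-value: {p_tost:.3f} (small values indicate practical equivalence)")
```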


Local maximum-entropy approximation schemes: a seamless bridge between finite elements and meshfree methods

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 13 2006
M. Arroyo
Abstract We present a one-parameter family of approximation schemes, which we refer to as local maximum-entropy approximation schemes, that bridges continuously two important limits: Delaunay triangulation and maximum-entropy (max-ent) statistical inference. Local max-ent approximation schemes represent a compromise, in the sense of Pareto optimality, between the competing objectives of unbiased statistical inference from the nodal data and the definition of local shape functions of least width. Local max-ent approximation schemes are entirely defined by the node set and the domain of analysis, and the shape functions are positive, interpolate affine functions exactly, and have a weak Kronecker-delta property at the boundary. Local max-ent approximation may be regarded as a regularization, or thermalization, of Delaunay triangulation which effectively resolves the degenerate cases resulting from the lack of uniqueness of the triangulation. Local max-ent approximation schemes can be taken as a convenient basis for the numerical solution of PDEs in the style of meshfree Galerkin methods. In test cases characterized by smooth solutions we find that the accuracy of local max-ent approximation schemes is vastly superior to that of finite elements. Copyright © 2005 John Wiley & Sons, Ltd. [source]
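
In one dimension the local max-ent shape functions can be evaluated with a short Newton iteration on the dual variable. The sketch below is a bare-bones version (no Galerkin solver, no shape-function derivatives), assuming the usual Gaussian-prior form with the locality parameter expressed through gamma = beta * h^2.

```python
import numpy as np

def local_maxent_shape_functions(x, nodes, beta, tol=1e-12, max_iter=50):
    """1-D local max-ent shape functions p_a(x) ~ exp(-beta*(x - x_a)^2 + lam*(x - x_a)),
    with lam chosen by Newton iteration so that sum_a p_a(x) * x_a = x."""
    nodes = np.asarray(nodes, dtype=float)
    lam = 0.0
    for _ in range(max_iter):
        f = np.exp(-beta * (x - nodes) ** 2 + lam * (x - nodes))
        p = f / f.sum()
        r = p @ (nodes - x)                  # first-order (affine reproduction) residual
        if abs(r) < tol:
            break
        m2 = p @ (nodes - x) ** 2
        lam -= r / (r ** 2 - m2)             # Newton step on the dual variable
    return p

nodes = np.linspace(0.0, 1.0, 6)             # node set on [0, 1]
h = nodes[1] - nodes[0]
x = 0.37
for gamma in (0.8, 4.0):                     # larger gamma -> more local shape functions
    p = local_maxent_shape_functions(x, nodes, beta=gamma / h ** 2)
    print(f"gamma={gamma}: sum={p.sum():.3f}, reproduced x={p @ nodes:.3f}")
```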


Hierarchical spatial models for predicting pygmy rabbit distribution and relative abundance

JOURNAL OF APPLIED ECOLOGY, Issue 2 2010
Tammy L. Wilson
Summary. 1. Conservationists routinely use species distribution models to plan conservation, restoration and development actions, while ecologists use them to infer process from pattern. These models tend to work well for common or easily observable species, but are of limited utility for rare and cryptic species. This may be because honest accounting of known observation bias and spatial autocorrelation is rarely included, thereby limiting statistical inference from the resulting distribution maps. 2. We specified and implemented a spatially explicit Bayesian hierarchical model for a cryptic mammal species (pygmy rabbit Brachylagus idahoensis). Our approach used two levels of indirect sign that are naturally hierarchical (burrows and faecal pellets) to build a model that allows for inference on regression coefficients as well as spatially explicit model parameters. We also produced maps of rabbit distribution (occupied burrows) and relative abundance (number of burrows expected to be occupied by pygmy rabbits). The model demonstrated statistically rigorous spatial prediction by including spatial autocorrelation and measurement uncertainty. 3. We demonstrated the flexibility of our modelling framework by depicting probabilistic distribution predictions using different assumptions of pygmy rabbit habitat requirements. 4. Spatial representations of the variance of posterior predictive distributions were obtained to evaluate heterogeneity in model fit across the spatial domain. Leave-one-out cross-validation was conducted to evaluate the overall model fit. 5. Synthesis and applications. Our method draws on the strengths of previous work, thereby bridging and extending two active areas of ecological research: species distribution models and multi-state occupancy modelling. Our framework can be extended to encompass both larger extents and other species for which direct estimation of abundance is difficult. [source]


Finite sample improvements in statistical inference with I(1) processes

JOURNAL OF APPLIED ECONOMETRICS, Issue 3 2001
D. Marinucci
Robinson and Marinucci (1998) investigated the asymptotic behaviour of a narrow-band semiparametric procedure termed Frequency Domain Least Squares (FDLS) in the broad context of fractional cointegration analysis. Here we restrict discussion to the standard case when the data are I(1) and the cointegrating errors are I(0), proving that modifications of the Fully Modified Ordinary Least Squares (FM-OLS) procedure of Phillips and Hansen (1990) which use the FDLS idea have the same asymptotically desirable properties as FM-OLS, and, on the basis of a Monte Carlo study, find evidence that they have superior finite-sample properties. The new procedures are also shown to compare satisfactorily with parametric estimates. Copyright © 2001 John Wiley & Sons, Ltd. [source]


Bayesian statistics in medical research: an intuitive alternative to conventional data analysis

JOURNAL OF EVALUATION IN CLINICAL PRACTICE, Issue 2 2000
Lyle C. Gurrin BSc (Hons), AStat
Summary Statistical analysis of both experimental and observational data is central to medical research. Unfortunately, the process of conventional statistical analysis is poorly understood by many medical scientists. This is due, in part, to the counter-intuitive nature of the basic tools of traditional (frequency-based) statistical inference. For example, the proper definition of a conventional 95% confidence interval is quite confusing. It is based upon the imaginary results of a series of hypothetical repetitions of the data generation process and subsequent analysis. Not surprisingly, this formal definition is often ignored and a 95% confidence interval is widely taken to represent a range of values that is associated with a 95% probability of containing the true value of the parameter being estimated. Working within the traditional framework of frequency-based statistics, this interpretation is fundamentally incorrect. It is perfectly valid, however, if one works within the framework of Bayesian statistics and assumes a 'prior distribution' that is uniform on the scale of the main outcome variable. This reflects a limited equivalence between conventional and Bayesian statistics that can be used to facilitate a simple Bayesian interpretation based on the results of a standard analysis. Such inferences provide direct and understandable answers to many important types of question in medical research. For example, they can be used to assist decision making based upon studies with unavoidably low statistical power, where non-significant results are all too often, and wrongly, interpreted as implying 'no effect'. They can also be used to overcome the confusion that can result when statistically significant effects are too small to be clinically relevant. This paper describes the theoretical basis of the Bayesian-based approach and illustrates its application with a practical example that investigates the prevalence of major cardiac defects in a cohort of children born using the assisted reproduction technique known as ICSI (intracytoplasmic sperm injection). [source]
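
The limited equivalence described here is easy to operationalise: given a point estimate and standard error from a standard analysis, a prior that is uniform on that scale gives an approximately normal posterior centred at the estimate. The numbers below are illustrative, not taken from the paper.

```python
from scipy.stats import norm

# Point estimate and standard error from a standard frequentist analysis
estimate, se = 0.35, 0.22     # illustrative values, not from the paper

# With a prior uniform on this scale, the posterior is approximately Normal(estimate, se^2)
ci_low, ci_high = norm.ppf([0.025, 0.975], loc=estimate, scale=se)
print(f"95% interval: [{ci_low:.2f}, {ci_high:.2f}] "
      "(the usual CI, now readable as carrying 95% posterior probability)")

# Direct probability statements that conventional significance tests cannot give
print("P(effect > 0)   =", round(norm.sf(0.0, loc=estimate, scale=se), 3))
print("P(effect > 0.2) =", round(norm.sf(0.2, loc=estimate, scale=se), 3))
```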


Building neural network models for time series: a statistical approach

JOURNAL OF FORECASTING, Issue 1 2006
Marcelo C. Medeiros
Abstract This paper is concerned with modelling time series by single hidden layer feedforward neural network models. A coherent modelling strategy based on statistical inference is presented. Variable selection is carried out using simple existing techniques. The problem of selecting the number of hidden units is solved by sequentially applying Lagrange multiplier type tests, with the aim of avoiding the estimation of unidentified models. Misspecification tests are derived for evaluating an estimated neural network model. All the tests are entirely based on auxiliary regressions and are easily implemented. A small-sample simulation experiment is carried out to show how the proposed modelling strategy works and how the misspecification tests behave in small samples. Two applications to real time series, one univariate and the other multivariate, are considered as well. Sets of one-step-ahead forecasts are constructed and forecast accuracy is compared with that of other nonlinear models applied to the same series. Copyright © 2006 John Wiley & Sons, Ltd. [source]


Enhancing statistical education by using role-plays of consultations

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 2 2007
Ross Taplin
Summary. Role-plays in which students act as clients and statistical consultants to each other in pairs have proved to be an effective class exercise. As well as helping to teach statistical methodology, they are effective at encouraging statistical thinking, problem solving, the use of context in applied statistical problems and improving attitudes towards statistics and the statistics profession. Furthermore, they are fun. This paper explores the advantages of using role-plays and provides some empirical evidence supporting their success. The paper argues that there is a place for teaching statistical consulting skills well before the traditional post-graduate qualification in statistics, including to school students with no knowledge of techniques in statistical inference. [source]


On quantum statistical inference

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 4 2003
Ole E. Barndorff-Nielsen
Summary. Interest in problems of statistical inference connected to measurements of quantum systems has recently increased substantially, in step with dramatic new developments in experimental techniques for studying small quantum systems. Furthermore, developments in the theory of quantum measurements have brought the basic mathematical framework for the probability calculations much closer to that of classical probability theory. The present paper reviews this field and proposes and interrelates some new concepts for an extension of classical statistical inference to the quantum context. [source]


Elimination of Third-series Effect and Defining Partial Measures of Causality

JOURNAL OF TIME SERIES ANALYSIS, Issue 5 2001
Yuzo Hosoya
Using the one-way effect extraction method, this paper presents a set of partial causal measures that quantify the interdependence between a pair of vector-valued processes in the presence of a third process. These measures are defined for stationary as well as for a class of non-stationary time series. In contrast to conventional conditioning methods, the partial measures defined in the paper are largely free of feedback distortion from the third process. The paper also discusses statistical inference on the proposed measures. [source]


Internal algorithm variability and among-algorithm discordance in statistical haplotype reconstruction

MOLECULAR ECOLOGY, Issue 8 2009
ZU-SHI HUANG
The potential effectiveness of statistical haplotype inference has made it an area of active exploration over the last decade. Statistical inference faces several complications: the same algorithm can produce different solutions for the same data set, reflecting internal algorithm variability; different algorithms can give different solutions for the same data set, reflecting discordance among algorithms; and the algorithms themselves are unable to evaluate the reliability of their solutions even when the solutions are unique, a general limitation of all inference methods. With the aim of increasing the confidence of statistical inference results, a consensus strategy appears to be an effective means of dealing with these problems. Several authors have explored this with different emphases. Here we discuss two recent studies examining internal algorithm variability and among-algorithm discordance, respectively, and evaluate the different outcomes of these analyses in light of Orzack's (2009) comment. Until other, better methods are developed, a combination of these two approaches should provide a practical way to increase the confidence of statistical haplotyping results. [source]