Inference


Kinds of Inference

  • asymptotic inference
  • bayesian inference
  • causal inference
  • demographic inference
  • ecological inference
  • erroneous inference
  • evolutionary inference
  • historical inference
  • incorrect inference
  • inductive inference
  • likelihood inference
  • parametric inference
  • phylogenetic inference
  • posterior inference
  • predictive inference
  • reliable inference
  • robust inference
  • statistical inference
  • strong inference
  • valid inference

Terms modified by Inference

  • inference method
  • inference methods
  • inference models
  • inference problem
  • inference procedure
  • inference process
  • inference system

Selected Abstracts


    LIKELIHOOD-BASED INFERENCE IN ISOLATION-BY-DISTANCE MODELS USING THE SPATIAL DISTRIBUTION OF LOW-FREQUENCY ALLELES

    EVOLUTION, Issue 11 2009
    John Novembre
    Estimating dispersal distances from population genetic data provides an important alternative to logistically taxing methods for directly observing dispersal. Although methods for estimating dispersal rates between a modest number of discrete demes are well developed, methods of inference applicable to "isolation-by-distance" models are much less established. Here, we present a method for estimating Dσ², the product of population density (D) and the variance of the dispersal displacement distribution (σ²). The method is based on the assumption that low-frequency alleles are identical by descent. Hence, the extent of geographic clustering of such alleles, relative to their frequency in the population, provides information about Dσ². We show that a novel likelihood-based method can infer this composite parameter with a modest bias in a lattice model of isolation-by-distance. For calculating the likelihood, we use an importance sampling approach to average over the unobserved intraallelic genealogies, where the intraallelic genealogies are modeled as a pure birth process. The approach also leads to a likelihood-ratio test of isotropy of dispersal, that is, whether dispersal distances on two axes are different. We test the performance of our methods using simulations of new mutations in a lattice model and illustrate its use with a dataset from Arabidopsis thaliana. [source]
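
    The likelihood calculation described above averages over unobserved genealogies by importance sampling. The following is a generic sketch of that idea only (a toy latent-variable model with hypothetical data and proposal, not the spatial pure-birth genealogy model of the article): the likelihood L(θ) is an expectation over a latent structure, estimated by drawing from a convenient proposal and reweighting.

```python
import numpy as np
from scipy.stats import poisson, binom

rng = np.random.default_rng(7)
y = 6                                           # hypothetical observed count

def is_loglik(theta, n_draws=20000):
    """Importance-sampling estimate of log L(theta) = log E_{G~Poisson(theta)}[ P(y | G) ]."""
    proposal_mean = 2 * y + 1                   # hypothetical proposal for the latent G
    G = rng.poisson(proposal_mean, size=n_draws)
    # Importance weights: target density of G over proposal density of G.
    weights = np.exp(poisson.logpmf(G, theta) - poisson.logpmf(G, proposal_mean))
    return np.log(np.mean(binom.pmf(y, G, 0.5) * weights))

thetas = np.arange(8, 25)
loglik = np.array([is_loglik(t) for t in thetas])
print("approximate MLE of theta:", thetas[np.argmax(loglik)])
```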


    EASY AND FLEXIBLE BAYESIAN INFERENCE OF QUANTITATIVE GENETIC PARAMETERS

    EVOLUTION, Issue 6 2009
    Patrik Waldmann
    There has been a tremendous advancement of Bayesian methodology in quantitative genetics and evolutionary biology. Still, there are relatively few publications that apply this methodology, probably because the availability of multipurpose and user-friendly software is somewhat limited. It is here described how only a few lines of code in the well-developed and very flexible Bayesian software WinBUGS (Lunn et al. 2000) can be used for inference of the additive polygenic variance and heritability in pedigrees of general design. The presented code is illustrated by application to an earlier published dataset of Scots pine. [source]
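
    Once posterior draws of the variance components are available (from WinBUGS or any other sampler), the heritability summaries mentioned above follow directly. A minimal sketch, with hypothetical MCMC draws standing in for real output:

```python
import numpy as np

# Hypothetical MCMC draws of variance components (in practice these would come
# from WinBUGS/OpenBUGS or another Bayesian sampler).
rng = np.random.default_rng(1)
sigma2_additive = rng.gamma(shape=20.0, scale=0.05, size=5000)   # draws of sigma^2_A
sigma2_residual = rng.gamma(shape=30.0, scale=0.05, size=5000)   # draws of sigma^2_E

# Narrow-sense heritability computed draw-by-draw: h^2 = sigma^2_A / (sigma^2_A + sigma^2_E)
h2 = sigma2_additive / (sigma2_additive + sigma2_residual)

print("posterior mean h^2:", h2.mean())
print("95% credible interval:", np.percentile(h2, [2.5, 97.5]))
```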


    DIFFERENTIATION AMONG POPULATIONS WITH MIGRATION, MUTATION, AND DRIFT: IMPLICATIONS FOR GENETIC INFERENCE

    EVOLUTION, Issue 1 2006
    Seongho Song
    Abstract Populations may become differentiated from one another as a result of genetic drift. The amounts and patterns of differentiation at neutral loci are determined by local population sizes, migration rates among populations, and mutation rates. We provide exact analytical expressions for the mean, variance, and covariance of a stochastic model for hierarchically structured populations subject to migration, mutation, and drift. In addition to the expected correlation in allele frequencies among populations in the same geographic region, we demonstrate that there is a substantial correlation in allele frequencies among regions at the top level of the hierarchy. We propose a hierarchical Bayesian model for inference of Wright's F-statistics in a two-level hierarchy in which we estimate the among-region correlation in allele frequencies by substituting replication across loci for replication across time. We illustrate the approach through an analysis of human microsatellite data, and we show that approaches ignoring the among-region correlation in allele frequencies underestimate the amount of genetic differentiation among major geographic population groups by approximately 30%. Finally, we discuss the implications of these results for the use and interpretation of F-statistics in evolutionary studies. [source]
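
    As a point of reference for the quantities being estimated, a crude moment-based sketch of a two-level allele-frequency decomposition is given below (hypothetical frequencies; this is not the hierarchical Bayesian model of the abstract, which pools information across loci and models the among-region correlation explicitly):

```python
import numpy as np

# Hypothetical allele frequencies at one locus, grouped by region -> populations.
regions = {
    "region_A": [0.62, 0.58, 0.65],
    "region_B": [0.35, 0.41, 0.38],
    "region_C": [0.50, 0.47, 0.55],
}

all_freqs = np.concatenate([np.asarray(v) for v in regions.values()])
p_bar = all_freqs.mean()                                   # overall mean allele frequency
region_means = np.array([np.mean(v) for v in regions.values()])

# Moment-based analogues of the two levels of differentiation.
f_rt = region_means.var() / (p_bar * (1 - p_bar))                               # among regions
f_sr = np.mean([np.var(v) for v in regions.values()]) / (p_bar * (1 - p_bar))   # among populations within regions

print(f"among-region component ~ {f_rt:.3f}, within-region component ~ {f_sr:.3f}")
```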


    NONPARAMETRIC BOOTSTRAP PROCEDURES FOR PREDICTIVE INFERENCE BASED ON RECURSIVE ESTIMATION SCHEMES

    INTERNATIONAL ECONOMIC REVIEW, Issue 1 2007
    Valentina Corradi
    We introduce block bootstrap techniques that are (first order) valid in recursive estimation frameworks. Thereafter, we present two examples where predictive accuracy tests are made operational using our new bootstrap procedures. In one application, we outline a consistent test for out-of-sample nonlinear Granger causality, and in the other we outline a test for selecting among multiple alternative forecasting models, all of which are possibly misspecified. In a Monte Carlo investigation, we compare the finite sample properties of our block bootstrap procedures with the parametric bootstrap due to Kilian (Journal of Applied Econometrics 14 (1999), 491–510), within the context of encompassing and predictive accuracy tests. In the empirical illustration, it is found that unemployment has nonlinear marginal predictive content for inflation. [source]
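
    The core resampling idea can be sketched as follows (a plain circular block bootstrap applied to a hypothetical forecast-error loss differential; the recursive-scheme corrections that are the point of the article are not reproduced here):

```python
import numpy as np

def circular_block_bootstrap(x, block_len, rng):
    """Resample a series by concatenating randomly chosen (wrapped) blocks."""
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n, size=n_blocks)
    idx = np.concatenate([(s + np.arange(block_len)) % n for s in starts])[:n]
    return x[idx]

rng = np.random.default_rng(0)
# Hypothetical loss differential between two forecasting models (serially correlated).
d = np.convolve(rng.normal(size=300), np.ones(5) / 5, mode="same") + 0.05

boot_means = np.array([
    circular_block_bootstrap(d, block_len=10, rng=rng).mean() for _ in range(2000)
])
print("observed mean loss differential:", d.mean())
print("bootstrap 95% interval:", np.percentile(boot_means, [2.5, 97.5]))
```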


    METHODS FOR JOINT INFERENCE FROM MULTIPLE DATA SOURCES FOR IMPROVED ESTIMATES OF POPULATION SIZE AND SURVIVAL RATES

    MARINE MAMMAL SCIENCE, Issue 3 2004
    Daniel Goodman
    Abstract Critical conservation decisions often hinge on estimates of population size, population growth rate, and survival rates, but as a practical matter it is difficult to obtain enough data to provide precise estimates. Here we discuss Bayesian methods for simultaneously drawing on the information content from multiple sorts of data to get as much precision as possible for the estimates. The basic idea is that an underlying population model can connect the various sorts of observations, so this can be elaborated into a joint likelihood function for joint estimation of the respective parameters. The potential for improved estimates derives from the potentially greater effective sample size of the aggregate of data, even though some of the data types may only bear directly on a subset of the parameters. The achieved improvement depends on specifics of the interactions among parameters in the underlying model, and on the actual content of the data. Assuming the respective data sets are unbiased, notwithstanding the fact that they may be noisy, we may gauge the average improvement in the estimates of the parameters of interest from the reduction, if any, in the standard deviations of their posterior marginal distributions. Prospective designs may be evaluated from analysis of simulated data. Here this approach is illustrated with an assessment of the potential value in various ways of merging mark-resight and carcass-survey data for the Florida manatee, as could be made possible by various modifications in the data collection protocols in both programs. [source]
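
    A toy sketch of the joint-likelihood idea follows (all structure and numbers are hypothetical and far simpler than the manatee application): two data sources, mark-resight counts and carcass recoveries, share a survival parameter, so their log-likelihoods are added and maximized together.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

# Hypothetical data: of M marked animals, `resighted` were seen alive a year later
# and `carcasses` were recovered dead; carcass recovery probability assumed known.
M, resighted, carcasses = 200, 120, 25
P_RECOVER = 0.5

def neg_joint_loglik(params):
    phi, p_detect = params                     # annual survival, resighting probability
    if not (0 < phi < 1 and 0 < p_detect < 1):
        return np.inf
    ll_resight = binom.logpmf(resighted, M, phi * p_detect)          # survived AND resighted
    ll_carcass = binom.logpmf(carcasses, M, (1 - phi) * P_RECOVER)   # died AND recovered
    return -(ll_resight + ll_carcass)          # both sources share phi, so just add them

fit = minimize(neg_joint_loglik, x0=[0.8, 0.7], method="Nelder-Mead")
print("estimated (survival, resight probability):", fit.x.round(3))
```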


    AN EMPIRICAL ANALYSIS OF MEASUREMENT EQUIVALENCE WITH THE INDCOL MEASURE OF INDIVIDUALISM AND COLLECTIVISM: IMPLICATIONS FOR VALID CROSS-CULTURAL INFERENCE

    PERSONNEL PSYCHOLOGY, Issue 1 2006
    CHRISTOPHER ROBERT
    The INDCOL measure of individualism and collectivism (Singelis et al., 1995) has been used increasingly to test complex cross-cultural hypotheses. However, sample differences in translation, culture, organization, and response context might threaten the validity of cross-cultural inferences. We systematically explored the robustness of the INDCOL, for various statistical uses, in the face of those 4 threats. An analysis of measurement equivalence using multigroup mean and covariance structure analysis compared samples of INDCOL data from the United States, Singapore, and Korea. The INDCOL was robust with regard to the interpretability of correlations, whereas differences in culture and translation pose an important potential threat to the interpretability of mean-level analyses. Recommendations regarding the interpretation of the INDCOL and issues in the analysis of measurement equivalence in cross-cultural research are discussed. [source]


    ROUTES TO HIGHER-ORDER ACCURACY IN PARAMETRIC INFERENCE

    AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 2 2009
    G. Alastair Young
    Summary Developments in the theory of frequentist parametric inference in recent decades have been driven largely by the desire to achieve higher-order accuracy, in particular distributional approximations that improve on first-order asymptotic theory by one or two orders of magnitude. At the same time, much methodology is specifically designed to respect key principles of parametric inference, in particular conditionality principles. Two main routes to higher-order accuracy have emerged: analytic methods based on 'small-sample asymptotics', and simulation, or 'bootstrap', approaches. It is argued here that, of these, the simulation methodology provides a simple and effective approach, which nevertheless retains finer inferential components of theory. The paper seeks to track likely developments of parametric inference, in an era dominated by the emergence of methodological problems involving complex dependences and/or high-dimensional parameters that typically exceed available data sample sizes. [source]
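
    As a bare-bones illustration of the simulation route (a simple parametric bootstrap interval for a scalar parameter; hypothetical data, and only a gesture at the higher-order accuracy and conditionality issues discussed in the abstract):

```python
import numpy as np

rng = np.random.default_rng(8)
x = rng.exponential(scale=2.0, size=15)        # hypothetical small sample
rate_hat = 1.0 / x.mean()                      # ML estimate of the exponential rate

# Parametric bootstrap: simulate from the fitted model and re-estimate each time.
boot = np.array([
    1.0 / rng.exponential(scale=1.0 / rate_hat, size=x.size).mean()
    for _ in range(5000)
])

# Basic bootstrap interval built from the simulated distribution of the estimator.
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"ML rate estimate: {rate_hat:.3f}")
print(f"parametric bootstrap 95% interval: ({2*rate_hat - hi:.3f}, {2*rate_hat - lo:.3f})")
```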


    ESTIMATION, PREDICTION AND INFERENCE FOR THE LASSO RANDOM EFFECTS MODEL

    AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 1 2009
    Scott D. Foster
    Summary The least absolute shrinkage and selection operator (LASSO) can be formulated as a random effects model with an associated variance parameter that can be estimated with other components of variance. In this paper, estimation of the variance parameters is performed by means of an approximation to the marginal likelihood of the observed outcomes. The approximation is based on an alternative but equivalent formulation of the LASSO random effects model. Predictions can be made using point summaries of the predictive distribution of the random effects given the data with the parameters set to their estimated values. The standard LASSO method uses the mode of this distribution as the predictor. It is not the only choice, and a number of other possibilities are defined and empirically assessed in this article. The predictive mode is competitive with the predictive mean (best predictor), but no single predictor performs best in all situations. Inference for the LASSO random effects is performed using predictive probability statements, which are more appropriate under the random effects formulation than tests of hypothesis. [source]
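
    The contrast between the predictive mode and the predictive mean can be seen in a one-dimensional toy problem (hypothetical values; under a Gaussian likelihood with a Laplace prior the mode is the familiar soft-threshold rule, while the mean requires numerical integration):

```python
import numpy as np

# Toy setup: y | b ~ N(b, sigma2), b ~ Laplace(0, 1/lam)  (all values hypothetical).
y, sigma2, lam = 1.2, 1.0, 1.5

# Predictive mode = soft-threshold (LASSO) estimate of the random effect.
mode = np.sign(y) * max(abs(y) - sigma2 * lam, 0.0)

# Predictive mean via numerical integration of the (unnormalized) posterior on a grid.
b = np.linspace(-10, 10, 20001)
log_post = -0.5 * (y - b) ** 2 / sigma2 - lam * np.abs(b)
w = np.exp(log_post - log_post.max())
mean = np.sum(b * w) / np.sum(w)

print(f"predictive mode (soft threshold): {mode:.3f}")
print(f"predictive mean (best predictor): {mean:.3f}")
```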


    X. CONCLUSIONS: OVERVIEW OF FINDINGS FROM THE ERA STUDY, INFERENCES, AND RESEARCH IMPLICATIONS

    MONOGRAPHS OF THE SOCIETY FOR RESEARCH IN CHILD DEVELOPMENT, Issue 1 2010
    Michael Rutter
    First page of article [source]


    Inductive Inference by Using Information Compression

    COMPUTATIONAL INTELLIGENCE, Issue 2 2003
    Ben Choi
    Inductive inference is of central importance to all scientific inquiries. Automating the process of inductive inference is the major concern of machine learning researchers. This article proposes inductive inference techniques to address three inductive problems: (1) how to automatically construct a general description, a model, or a theory to describe a sequence of observations or experimental data, (2) how to modify an existing model to account for new observations, and (3) how to handle the situation where the new observations are not consistent with the existing models. The techniques proposed in this article implement the inductive principle called the minimum descriptive length principle and relate to Kolmogorov complexity and Occam's razor. They employ finite state machines as models to describe sequences of observations and measure the descriptive complexity by measuring the number of states. They can be used to draw inference from sequences of observations where one observation may depend on previous observations. Thus, they can be applied to time series prediction problems and to one-to-one mapping problems. They are implemented to form an automated inductive machine. [source]
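
    A minimal two-part-code sketch of the inductive principle follows (hypothetical binary sequence and a deliberately crude coding scheme, not the article's finite-state-machine construction): each candidate model is scored by the bits needed to describe the model plus the bits needed to describe the data given the model, and the shorter total description is preferred.

```python
import numpy as np

def code_length_bits(probs):
    """Ideal code length (in bits) of a sequence with the given per-symbol probabilities."""
    return float(-np.sum(np.log2(probs)))

rng = np.random.default_rng(2)
# Hypothetical binary observation sequence with first-order dependence.
x = [0]
for _ in range(299):
    x.append(int(rng.random() < (0.9 if x[-1] == 1 else 0.2)))
x = np.array(x)
n = len(x)
prev, curr = x[:-1], x[1:]

# Model 1 (one state): i.i.d. Bernoulli with ML parameter.
p = curr.mean()
total_1 = 0.5 * np.log2(n) + code_length_bits(np.where(curr == 1, p, 1 - p))

# Model 2 (two states): first-order Markov chain with ML transition probabilities.
p01 = curr[prev == 0].mean()        # P(next = 1 | current = 0)
p11 = curr[prev == 1].mean()        # P(next = 1 | current = 1)
cond = np.where(prev == 1, p11, p01)
total_2 = 2 * 0.5 * np.log2(n) + code_length_bits(np.where(curr == 1, cond, 1 - cond))

print(f"one-state model description length: {total_1:.1f} bits")
print(f"two-state model description length: {total_2:.1f} bits (smaller is preferred)")
```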


    Distributed parallel compilation of MSBNs

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 12 2009
    Xiangdong An
    Abstract Multiply sectioned Bayesian networks (MSBNs) support multiagent probabilistic inference in distributed large problem domains. Inference with MSBNs can be performed using their compiled representations. The compilation involves moralization and triangulation of a set of local graphical structures. Privacy of agents may prevent us from compiling MSBNs at a central location. In earlier work, agents performed compilation sequentially via a depth-first traversal of the hypertree that organizes local subnets, where a communication failure between any two agents would halt the entire process. In this paper, we present an asynchronous compilation method by which multiple agents compile MSBNs fully in parallel. Compared with the traversal compilation, the asynchronous one is robust, self-adaptive, and fault-tolerant. Experiments show that both methods provide compilation of similar quality for simple MSBNs, but the asynchronous one provides much higher quality compilation for complex MSBNs. Empirical study also indicates that the asynchronous one is consistently faster than the traversal one. Copyright © 2009 John Wiley & Sons, Ltd. [source]


    Decision Making with Uncertain Judgments: A Stochastic Formulation of the Analytic Hierarchy Process

    DECISION SCIENCES, Issue 3 2003
    Eugene D. Hahn
    ABSTRACT In the analytic hierarchy process (AHP), priorities are derived via a deterministic method, the eigenvalue decomposition. However, judgments may be subject to error. A stochastic characterization of the pairwise comparison judgment task is provided and statistical models are introduced for deriving the underlying priorities. Specifically, a weighted hierarchical multinomial logit model is used to obtain the priorities. Inference is then conducted from the Bayesian viewpoint using Markov chain Monte Carlo methods. The stochastic methods are found to give results that are congruent with those of the eigenvector method in matrices of different sizes and different levels of inconsistency. Moreover, inferential statements can be made about the priorities when the stochastic approach is adopted, and these statements may be of considerable value to a decision maker. The methods described are fully compatible with judgments from the standard version of AHP and can be used to construct a stochastic formulation of it. [source]
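
    For reference, the deterministic eigenvalue method that the stochastic formulation is benchmarked against can be sketched as follows (hypothetical pairwise comparison matrix):

```python
import numpy as np

# Hypothetical reciprocal pairwise-comparison matrix for three criteria.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                       # index of the principal eigenvalue
priorities = np.abs(eigvecs[:, k].real)
priorities /= priorities.sum()                    # normalized priority vector

n = A.shape[0]
lambda_max = eigvals.real[k]
consistency_index = (lambda_max - n) / (n - 1)    # Saaty's consistency index

print("priorities:", priorities.round(3))
print("consistency index:", round(consistency_index, 4))
```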


    Why environmental scientists are becoming Bayesians

    ECOLOGY LETTERS, Issue 1 2005
    James S. Clark
    Abstract Advances in computational statistics provide a general framework for the high-dimensional models typically needed for ecological inference and prediction. Hierarchical Bayes (HB) represents a modelling structure with capacity to exploit diverse sources of information, to accommodate influences that are unknown (or unknowable), and to draw inference on large numbers of latent variables and parameters that describe complex relationships. Here I summarize the structure of HB and provide examples for common spatiotemporal problems. The flexible framework means that parameters, variables and latent variables can represent broader classes of model elements than are treated in traditional models. Inference and prediction depend on two types of stochasticity: (1) uncertainty, which describes our knowledge of fixed quantities; it applies to all 'unobservables' (latent variables and parameters) and declines asymptotically with sample size, and (2) variability, which applies to fluctuations that are not explained by deterministic processes and does not decline asymptotically with sample size. Examples demonstrate how different sources of stochasticity impact inference and prediction and how allowance for stochastic influences can guide research. [source]
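
    The distinction between the two types of stochasticity can be illustrated with a tiny simulation (hypothetical Gaussian example): the uncertainty about a fixed quantity shrinks as the sample grows, while the variability of new observations around it does not.

```python
import numpy as np

rng = np.random.default_rng(6)
true_mean, sigma = 5.0, 2.0                    # hypothetical process

for n in (10, 100, 1000):
    y = rng.normal(true_mean, sigma, size=n)
    se_mean = y.std(ddof=1) / np.sqrt(n)       # uncertainty about the fixed mean -> shrinks
    pred_sd = y.std(ddof=1)                    # variability of new observations -> does not
    print(f"n={n:5d}  uncertainty (s.e. of mean)={se_mean:.3f}  variability (s.d.)={pred_sd:.3f}")
```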


    Empirical Likelihood-Based Inference in Conditional Moment Restriction Models

    ECONOMETRICA, Issue 6 2004
    Yuichi Kitamura
    This paper proposes an asymptotically efficient method for estimating models with conditional moment restrictions. Our estimator generalizes the maximum empirical likelihood estimator (MELE) of Qin and Lawless (1994). Using a kernel smoothing method, we efficiently incorporate the information implied by the conditional moment restrictions into our empirical likelihood-based procedure. This yields a one-step estimator which avoids estimating optimal instruments. Our likelihood ratio-type statistic for parametric restrictions does not require the estimation of variance, and achieves asymptotic pivotalness implicitly. The estimation and testing procedures we propose are normalization invariant. Simulation results suggest that our new estimator works remarkably well in finite samples. [source]


    Inductive Inference: An Axiomatic Approach

    ECONOMETRICA, Issue 1 2003
    Itzhak Gilboa
    A predictor is asked to rank eventualities according to their plausibility, based on past cases. We assume that she can form a ranking given any memory that consists of finitely many past cases. Mild consistency requirements on these rankings imply that they have a numerical representation via a matrix assigning numbers to eventuality–case pairs, as follows. Given a memory, each eventuality is ranked according to the sum of the numbers in its row, over cases in memory. The number attached to an eventuality–case pair can be interpreted as the degree of support that the past case lends to the plausibility of the eventuality. Special instances of this result may be viewed as axiomatizing kernel methods for estimation of densities and for classification problems. Interpreting the same result for rankings of theories or hypotheses, rather than of specific eventualities, it is shown that one may ascribe to the predictor subjective conditional probabilities of cases given theories, such that her rankings of theories agree with rankings by the likelihood functions. [source]
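
    The numerical representation described above is easy to state concretely. A toy sketch with hypothetical support numbers: the plausibility of each eventuality given a memory of past cases is the sum of its row entries over the cases in memory.

```python
# Hypothetical support matrix: how much one past case of each type lends to the
# plausibility of each eventuality (all names and numbers are invented).
support = {
    "rain": {"dark_clouds": 2.0, "high_humidity": 1.0, "clear_sky": -1.0},
    "sun":  {"dark_clouds": -1.0, "high_humidity": 0.0, "clear_sky": 2.0},
}

# A memory is a finite multiset of past cases.
memory = ["dark_clouds", "dark_clouds", "high_humidity", "clear_sky"]

def plausibility_ranking(support, memory):
    # Score each eventuality by summing its support numbers over the cases in memory.
    scores = {e: sum(row[c] for c in memory) for e, row in support.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(plausibility_ranking(support, memory))   # eventualities ranked by summed support
```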


    Semiparametric Bayesian Inference in Autoregressive Panel Data Models

    ECONOMETRICA, Issue 2 2002
    Keisuke Hirano
    First page of article [source]


    A Parametric Approach to Flexible Nonlinear Inference

    ECONOMETRICA, Issue 3 2001
    James D. Hamilton
    This paper proposes a new framework for determining whether a given relationship is nonlinear, what the nonlinearity looks like, and whether it is adequately described by a particular parametric model. The paper studies a regression or forecasting model of the form yₜ = μ(xₜ) + εₜ where the functional form of μ(·) is unknown. We propose viewing μ(·) itself as the outcome of a random process. The paper introduces a new stationary random field m(·) that generalizes finite-differenced Brownian motion to a vector field and whose realizations could represent a broad class of possible forms for μ(·). We view the parameters that characterize the relation between a given realization of m(·) and the particular value of μ(·) for a given sample as population parameters to be estimated by maximum likelihood or Bayesian methods. We show that the resulting inference about the functional relation also yields consistent estimates for a broad class of deterministic functions μ(·). The paper further develops a new test of the null hypothesis of linearity based on the Lagrange multiplier principle and small-sample confidence intervals based on numerical Bayesian methods. An empirical application suggests that properly accounting for the nonlinearity of the inflation-unemployment trade-off may explain the previously reported uneven empirical success of the Phillips Curve. [source]


    TK/TD dose–response modeling of toxicity

    ENVIRONMETRICS, Issue 5 2007
    Munni Begum
    Abstract In environmental cancer risk assessment of a toxic chemical, the main focus is in understanding induced target organ toxicity that may in turn lead to carcinogenicity. Mathematical models based on systems of ordinary differential equations with biologically relevant parameters are tenable methods for describing the disposition of chemicals in target organs. In evaluation of a toxic chemical, dose–response assessment often addresses only the toxicodynamics (TD) of the chemical, while its toxicokinetics (TK) do not enter into consideration. The primary objective of this research is to integrate both TK and TD in evaluation of toxic chemicals while performing dose–response assessment. Population models, with hierarchical setup and nonlinear predictors, for TK concentration and TD effect measures are considered. A one-compartment model with biologically relevant parameters, such as organ volume, uptake rate and excretion rate, or clearance, is used to derive the TK predictor, while a two-parameter Emax model is used as a predictor for TD measures. Inference on the model parameters, subject to nonnegativity and assay limit of detection (LOD) constraints, was carried out by Bayesian approaches using Markov Chain Monte Carlo (MCMC) techniques. Copyright © 2006 John Wiley & Sons, Ltd. [source]
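
    A generic sketch of the two building blocks named above, a one-compartment kinetic profile and a two-parameter Emax effect curve, is given below (all parameter values are hypothetical; this is not the article's hierarchical population model or its Bayesian fitting):

```python
import numpy as np

def concentration_one_compartment(t, dose, volume, k_uptake, k_excretion):
    """Concentration after a single dose with first-order uptake and excretion."""
    coef = dose * k_uptake / (volume * (k_uptake - k_excretion))
    return coef * (np.exp(-k_excretion * t) - np.exp(-k_uptake * t))

def effect_emax(conc, e_max, ec50):
    """Two-parameter Emax toxicodynamic model."""
    return e_max * conc / (ec50 + conc)

t = np.linspace(0, 24, 7)                                   # hours (hypothetical)
conc = concentration_one_compartment(t, dose=100.0, volume=10.0,
                                     k_uptake=1.0, k_excretion=0.2)
print("concentration:", conc.round(2))
print("effect:       ", effect_emax(conc, e_max=50.0, ec50=3.0).round(2))
```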


    An empirical method for inferring species richness from samples

    ENVIRONMETRICS, Issue 2 2006
    Paul A. Murtaugh
    Abstract We introduce an empirical method of estimating the number of species in a community based on a random sample. The numbers of sampled individuals of different species are modeled as a multinomial random vector, with cell probabilities estimated by the relative abundances of species in the sample and, for hypothetical species missing from the sample, by linear extrapolation from the abundance of the rarest observed species. Inference is then based on likelihoods derived from the multinomial distribution, conditioning on a range of possible values of the true richness in the community. The method is shown to work well in simulations based on a variety of real data sets. Copyright © 2005 John Wiley & Sons, Ltd. [source]
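
    A simplified sketch of the conditional-likelihood idea follows (hypothetical counts and a deliberately crude extrapolation rule for the unseen species): for each candidate richness S, the sample is treated as multinomial with cell probabilities built from the observed relative abundances plus a small cell for each unseen species, and the resulting log-likelihoods are compared across S.

```python
import numpy as np

counts = np.array([50, 30, 10, 5, 3, 1, 1])     # hypothetical sample counts by species
n, s_obs = counts.sum(), len(counts)
p_rarest = counts.min() / n                     # relative abundance of the rarest species

def profile_loglik(S):
    """Multinomial log-likelihood term (up to constants) given true richness S."""
    n_unseen = S - s_obs
    # Crude extrapolation: each unseen species gets half the rarest observed abundance.
    probs = np.concatenate([counts / n, np.full(n_unseen, 0.5 * p_rarest)])
    probs /= probs.sum()
    return float(np.sum(counts * np.log(probs[:s_obs])))   # unseen cells have count 0

candidates = np.arange(s_obs, s_obs + 15)
loglik = np.array([profile_loglik(S) for S in candidates])
print("candidate richness:     ", candidates)
print("relative log-likelihood:", (loglik - loglik.max()).round(2))
```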


    A connectionist inference model for pattern-directed knowledge representation

    EXPERT SYSTEMS, Issue 2 2000
    I Mitchell
    In this paper we propose a connectionist model for variable binding. The model's topology depends on the graph it builds from the available predicates. The irregular connections between perceptron-like assemblies facilitate forward and backward chaining. The model treats the symbolic data as a sequence and represents the training set as a partially connected network, using basic set and graph theory to form the internal representation. Inference is achieved by opportunistic reasoning via the bidirectional connections. Consequently, such activity stabilizes to a multigraph. This multigraph is composed of isomorphic subgraphs which all represent solutions to the query made. Such a model has a number of advantages over other methods in that irrelevant connections are avoided by superimposing positionally dependent sub-structures that are identical, variable binding can be encoded, and multiple solutions can be extracted simultaneously. The model also has the ability to adapt its existing architecture when presented with new clauses and therefore add new relationships/rules to the model explicitly; this is done by some partial retraining of the network due to the superimposition properties. [source]


    Inference of mantle viscosity from GRACE and relative sea level data

    GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 2 2007
    Archie Paulson
    SUMMARY Gravity Recovery And Climate Experiment (GRACE) satellite observations of secular changes in gravity near Hudson Bay, and geological measurements of relative sea level (RSL) changes over the last 10 000 yr in the same region, are used in a Monte Carlo inversion to infer mantle viscosity structure. The GRACE secular change in gravity shows a significant positive anomaly over a broad region (>3000 km) near Hudson Bay with a maximum of ∼2.5 μGal yr⁻¹ slightly west of Hudson Bay. The pattern of this anomaly is remarkably consistent with that predicted for postglacial rebound using the ICE-5G deglaciation history, strongly suggesting a postglacial rebound origin for the gravity change. We find that the GRACE and RSL data are insensitive to mantle viscosity below 1800 km depth, a conclusion similar to that from previous studies that used only RSL data. For a mantle with homogeneous viscosity, the GRACE and RSL data require a viscosity between 1.4 × 10²¹ and 2.3 × 10²¹ Pa s. An inversion for two mantle viscosity layers separated at a depth of 670 km shows an ensemble of viscosity structures compatible with the data. While the lowest misfit occurs for upper- and lower-mantle viscosities of 5.3 × 10²⁰ and 2.3 × 10²¹ Pa s, respectively, a weaker upper mantle may be compensated by a stronger lower mantle, such that there exist other models that also provide a reasonable fit to the data. We find that the GRACE and RSL data used in this study cannot resolve more than two layers in the upper 1800 km of the mantle. [source]


    PAIRWISE DIFFERENCE ESTIMATION WITH NONPARAMETRIC CONTROL VARIABLES

    INTERNATIONAL ECONOMIC REVIEW, Issue 4 2007
    Andres Aradillas-Lopez
    This article extends the pairwise difference estimators for various semilinear limited dependent variable models proposed by Honoré and Powell (Identification and Inference in Econometric Models: Essays in Honor of Thomas Rothenberg. Cambridge: Cambridge University Press, 2005) to permit the regressor appearing in the nonparametric component to itself depend upon a conditional expectation that is nonparametrically estimated. This permits the estimation approach to be applied to nonlinear models with sample selectivity and/or endogeneity, in which a "control variable" for selectivity or endogeneity is nonparametrically estimated. We develop the relevant asymptotic theory for the proposed estimators and we illustrate the theory to derive the asymptotic distribution of the estimator for the partially linear logit model. [source]


    A Foundational Justification for a Weighted Likelihood Approach to Inference

    INTERNATIONAL STATISTICAL REVIEW, Issue 3 2004
    Russell J. Bowater
    Summary Two types of probability are discussed, one of which is additive whilst the other is non-additive. Popular theories that attempt to justify the importance of the additivity of probability are then critically reviewed. By making assumptions, the two types of probability put forward are utilised to justify a method of inference which involves betting preferences being revised in light of the data. This method of inference can be viewed as a justification for a weighted likelihood approach to inference where the plausibility of different values of a parameter θ, based on the data X, is measured by the quantity q(θ) = l(X; θ)w(θ), where l(X; θ) is the likelihood function and w(θ) is a weight function. Even though, unlike Bayesian inference, the method has the disadvantageous property that the measure q(θ) is generally non-additive, it is argued that the method has other properties which may be considered very desirable and which have the potential to imply that, when everything is taken into account, the method is a serious alternative to the Bayesian approach in many situations. The methodology that is developed is applied to both a toy example and a real example. Résumé Two types of probability are discussed, one additive and the other non-additive. Popular theories that attempt to justify the importance of the additivity of probability are critically reviewed. Under certain assumptions, the two proposed types of probability are used to justify a method of inference in which betting preferences are revised in light of the data. This method of inference can be viewed as a justification of the weighted likelihood approach, in which the plausibility of different values of a parameter θ is measured. Although in this method the likelihood measure of the parameter is not additive, the method has other interesting properties that allow it to be considered a serious alternative to the Bayesian approach in many situations. The methodology is applied to both a fictitious example and a real example. [source]
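
    The quantity q(θ) = l(X; θ)w(θ) is straightforward to tabulate in a toy example (normal data and a hypothetical weight function; this illustrates the formula only, not the betting-based justification developed in the paper):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
X = rng.normal(loc=1.0, scale=1.0, size=20)       # hypothetical data

theta = np.linspace(-2, 4, 601)                   # grid of parameter values
log_lik = np.array([norm.logpdf(X, loc=t, scale=1.0).sum() for t in theta])
weight = norm.pdf(theta, loc=0.0, scale=2.0)      # hypothetical weight function w(theta)

q = np.exp(log_lik - log_lik.max()) * weight      # q(theta) = l(X; theta) w(theta), rescaled
print("most plausible theta under q:", theta[np.argmax(q)].round(3))
```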


    Comments on "A Foundational Justification for a Weighted Likelihood Approach to Inference"

    INTERNATIONAL STATISTICAL REVIEW, Issue 3 2004
    Glenn Shafer
    First page of article [source]


    The Impact of Damage Caps on Malpractice Claims: Randomization Inference with Difference-in-Differences

    JOURNAL OF EMPIRICAL LEGAL STUDIES, Issue 1 2007
    John J. Donohue III
    We use differences-in-differences (DID) to assess the impact of damage caps on medical malpractice claims for states adopting caps between 1991 and 2004. We find that conventional DID estimators exhibit acute model sensitivity. As a solution, we offer (nonparametric) covariance-adjusted randomization inference, which incorporates information about cap adoption more directly and reduces model sensitivity. We find no evidence that caps affect the number of malpractice claims against physicians. [source]
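
    A bare-bones sketch of randomization inference for a difference-in-differences estimate follows (hypothetical state-by-year panel, no covariance adjustment): the adoption indicator is permuted across states and the DID estimate is recomputed to build a reference distribution.

```python
import numpy as np

rng = np.random.default_rng(4)
n_states, n_years, n_treated = 30, 10, 8
adopt_year = 5                                               # hypothetical cap-adoption year

# Hypothetical claim rates: state and year effects plus noise, no true treatment effect.
state_fx = rng.normal(size=(n_states, 1))
year_fx = rng.normal(size=(1, n_years))
y = 10 + state_fx + year_fx + rng.normal(scale=0.5, size=(n_states, n_years))
treated = np.zeros(n_states, dtype=bool)
treated[:n_treated] = True

def did_estimate(y, treated):
    pre, post = y[:, :adopt_year].mean(axis=1), y[:, adopt_year:].mean(axis=1)
    change = post - pre
    return change[treated].mean() - change[~treated].mean()

observed = did_estimate(y, treated)
# Permute which states count as adopters and recompute the estimate each time.
perm = np.array([did_estimate(y, rng.permutation(treated)) for _ in range(2000)])
p_value = np.mean(np.abs(perm) >= abs(observed))
print(f"DID estimate: {observed:.3f}, randomization p-value: {p_value:.3f}")
```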


    Keeping Our Ambition Under Control: The Limits of Data and Inference in Searching for the Causes and Consequences of Vanishing Trials in Federal Court

    JOURNAL OF EMPIRICAL LEGAL STUDIES, Issue 3 2004
    Stephen B. Burbank
    This article offers some reflections stimulated by Professor Galanter's materials, which were the common springboard for the Vanishing Trials Symposium. It suggests that other data, quantitative and qualitative, may be helpful in understanding the vanishing trials phenomenon in federal civil cases, notably data available for years prior to 1962, and questions whether it is meaningful to use total dispositions as the denominator in calculating a trial termination rate. The article argues that care should be taken in using data from state court systems, as also data from criminal cases, administrative adjudication, and ADR, lest one put at risk through careless assimilation of data or muddled thinking a project quite difficult enough without additional baggage. The article describes the limitations of data previously collected by the Administrative Office of the U.S. Courts and highlights unique opportunities created by the AO's switch to a new Case Management/Electronic Case Files system. It argues that Professor Galanter may underestimate the influence of both changing demand for court services (docket makeup) and of changing demand for judicial services (resources) on the trial rate. Finally, the article argues that conclusions about either the causes or consequences of the vanishing trials phenomenon in federal civil cases are premature, suggesting in particular reasons to be wary of emphasis on "institutional factors" such as the discretionary power of first-instance judges and the ideology of managerial judging. [source]


    Inference of parasite local adaptation using two different fitness components

    JOURNAL OF EVOLUTIONARY BIOLOGY, Issue 3 2007
    D. REFARDT
    Abstract Estimating parasite fitness is central to studies aiming to understand parasite evolution. Theoretical models generally use the basic reproductive rate R0 to express fitness, yet it is very difficult to quantify R0 empirically and experimental studies often use fitness components such as infection intensity or infectivity as substitutes. These surrogate measures may be biased in several ways. We assessed local adaptation of the microsporidium Ordospora colligata to its host, the crustacean Daphnia magna using two different parasite fitness components: infection persistence over several host generations in experimental populations and infection intensity in individual hosts. We argue that infection persistence is a close estimator of R0, whereas infection intensity measures only a component of it. Both measures show a pattern that is consistent with parasite local adaptation and they correlate positively. However, several inconsistencies between them suggest that infection intensity may at times provide an inadequate estimate of parasite fitness. [source]


    A joint test of market power, menu costs, and currency invoicing

    AGRICULTURAL ECONOMICS, Issue 1 2009
    Jean-Philippe Gervais
    Keywords: exchange rate pass-through; currency invoicing; menu costs; threshold estimation
    Abstract This article investigates exchange rate pass-through (ERPT) and currency invoicing decisions of Canadian pork exporters in the presence of menu costs. It is shown that when export prices are negotiated in the exporter's currency, menu costs cause threshold effects in the sense that there are bounds within (outside of) which price adjustments are not (are) observed. Conversely, the pass-through is not interrupted by menu costs when export prices are denominated in the importer's currency. The empirical model focuses on pork meat exports from two Canadian provinces to the U.S. and Japan. Hansen's (2000) threshold estimation procedure is used to jointly test for currency invoicing and incomplete pass-through in the presence of menu costs. Inference is conducted using the bootstrap with pre-pivoting methods to deal with nuisance parameters. The existence of menu costs is supported by the data in three of the four cases. It also appears that Quebec pork exporters have some market power and invoice their exports to Japan in Japanese yen. Manitoba exporters also seem to follow the same invoicing strategy, but their ability to increase their profit margin in response to large enough own-currency devaluations is questionable. Our currency invoicing results for sales to the U.S. are consistent with subsets of Canadian firms using either the Canadian or U.S. currency. [source]
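
    The threshold-estimation step can be sketched as a grid search (hypothetical data; a single-threshold least-squares search in the spirit of Hansen (2000), not the article's joint test with bootstrap inference):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 400
exch = rng.normal(size=n)                        # hypothetical exchange-rate change (threshold variable)
x = np.column_stack([np.ones(n), exch])
# Hypothetical two-regime pass-through: prices respond only to large depreciations.
y = np.where(exch > 0.8, 0.9 * exch, 0.1 * exch) + rng.normal(scale=0.3, size=n)

def ssr_at_threshold(gamma):
    """Total sum of squared residuals when the sample is split at threshold gamma."""
    low, high = exch <= gamma, exch > gamma
    ssr = 0.0
    for mask in (low, high):
        beta, *_ = np.linalg.lstsq(x[mask], y[mask], rcond=None)
        resid = y[mask] - x[mask] @ beta
        ssr += resid @ resid
    return ssr

# Search candidate thresholds over interior quantiles of the threshold variable.
candidates = np.quantile(exch, np.linspace(0.15, 0.85, 71))
best = candidates[np.argmin([ssr_at_threshold(g) for g in candidates])]
print("estimated threshold:", round(best, 3))
```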


    Inference of object-oriented design patterns

    JOURNAL OF SOFTWARE MAINTENANCE AND EVOLUTION: RESEARCH AND PRACTICE, Issue 5 2001
    Paolo Tonella
    Abstract When designing a new application, experienced software engineers usually adopt solutions that have proven successful in previous projects. Such reuse of code organizations is seldom made explicit. Nevertheless, it represents important information, which can be extremely valuable in the maintenance phase by documenting the design choices underlying the implementation. In addition it can be reused whenever a similar problem is encountered. In this paper an approach for the inference of recurrent design patterns directly from the code is proposed. No assumption is made on the availability of any pattern library, and the concept analysis algorithm, adapted for this purpose, is able to infer the presence of class groups which instantiate a common, repeated pattern. In fact, concept analysis provides sets of objects sharing attributes, which, in the case of object-oriented design patterns, become class members or inter-class relations. The approach was applied to three C++ applications for which the structural relations among classes led to the extraction of a set of design patterns, which could be enriched with non-structural information about class members and method invocations. The resulting patterns could be interpreted as meaningful organizations aimed at solving general problems which have several instances in the applications analyzed. Copyright © 2001 John Wiley & Sons, Ltd. [source]
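
    The core of concept analysis, grouping objects by shared attributes, can be sketched on a tiny hypothetical class-relation context (class names and relations below are invented; real inputs would be extracted from code):

```python
from itertools import combinations

# Hypothetical object-attribute context: classes and the structural relations they exhibit.
context = {
    "Button": {"inherits_Widget", "notifies_Observer"},
    "Slider": {"inherits_Widget", "notifies_Observer"},
    "Logger": {"singleton_instance"},
    "Config": {"singleton_instance"},
}

def common_attributes(objs):
    sets = [context[o] for o in objs]
    return set.intersection(*sets) if sets else set()

def objects_with(attrs):
    return {o for o, a in context.items() if attrs <= a}

# Enumerate formal concepts: (object set, attribute set) pairs closed in both directions.
concepts = set()
objects = list(context)
for r in range(1, len(objects) + 1):
    for objs in combinations(objects, r):
        attrs = common_attributes(objs)
        extent = objects_with(attrs)
        concepts.add((frozenset(extent), frozenset(attrs)))

# Each concept is a candidate group of classes instantiating a shared structure.
for extent, intent in sorted(concepts, key=lambda c: -len(c[1])):
    print(sorted(extent), "share", sorted(intent) or "(nothing)")
```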


    Principles of Statistical Inference

    JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 1 2008
    Alan Kimber
    No abstract is available for this article. [source]