Data Better (data + better)

Distribution by Scientific Domains

Selected Abstracts

Analysis of determinants of mammalian species richness in South America using spatial autoregressive models

ECOGRAPHY, Issue 4 2004
Marcelo F. Tognelli
Classically, hypotheses concerning the distribution of species have been explored by evaluating the relationship between species richness and environmental variables using ordinary least squares (OLS) regression. However, environmental and ecological data generally show spatial autocorrelation, thus violating the assumption of independently distributed errors. When spatial autocorrelation exists, an alternative is to use autoregressive models that assume spatially autocorrelated errors. We examined the relationship between mammalian species richness in South America and environmental variables, thereby evaluating the relative importance of four competing hypotheses to explain mammalian species richness. Additionally, we compared the results of OLS regression and spatial autoregressive models using Conditional and Simultaneous Autoregressive (CAR and SAR, respectively) models. Variables associated with productivity were the most important in determining mammalian species richness at the scale analyzed. Whereas OLS residuals between species richness and environmental variables were strongly autocorrelated, those from autoregressive models showed less spatial autocorrelation, particularly the SAR model, indicating its suitability for these data. Autoregressive models also fit the data better than the OLS model (increasing R2 by 5–14%), and the relative importance of the explanatory variables shifted under CAR and SAR models. These analyses underscore the importance of controlling for spatial autocorrelation in biogeographical studies. [source]
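The diagnostic at the heart of this abstract, checking regression residuals for spatial autocorrelation, can be sketched with a Moran's I statistic. This is a minimal illustration on synthetic data: the inverse-distance weights, the bandwidth cutoff, and the toy residual fields are all assumptions for the sketch, not the authors' setup.

```python
import numpy as np

def morans_i(residuals, coords, bandwidth=1.5):
    """Moran's I for spatial autocorrelation in model residuals.

    Uses inverse-distance weights with a hard bandwidth cutoff -- an
    illustrative choice, not the weighting scheme from the paper.
    """
    n = len(residuals)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    w = np.zeros_like(d)
    mask = (d > 0) & (d <= bandwidth)
    w[mask] = 1.0 / d[mask]                      # closer sites weigh more
    z = residuals - residuals.mean()
    return n * (z @ w @ z) / (w.sum() * (z @ z))

# Toy comparison: spatially smooth residuals vs. spatially random ones.
rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(200, 2))
smooth = np.sin(coords[:, 0]) + np.cos(coords[:, 1])   # spatially structured
noise = rng.normal(size=200)                           # spatially random
print(f"Moran's I, smooth residuals: {morans_i(smooth, coords):.3f}")
print(f"Moran's I, random residuals: {morans_i(noise, coords):.3f}")
```

A value near zero indicates little residual autocorrelation (as the abstract reports for the SAR model), while a clearly positive value flags the OLS situation where the independence assumption is violated.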


EVOLUTION, Issue 3 2007
L. Lacey Knowles
Patterns of genetic variation can provide valuable insights for deciphering the relative roles of different evolutionary processes in species differentiation. However, population-genetic models for studying divergence in geographically structured species are generally lacking. Since these are the biogeographic settings where genetic drift is expected to predominate, not only are population-genetic tests of hypotheses in geographically structured species constrained, but generalizations about the evolutionary processes that promote species divergence may also be potentially biased. Here we estimate a population-divergence model in montane grasshoppers from the sky islands of the Rocky Mountains. Because this region was directly impacted by Pleistocene glaciation, both the displacement into glacial refugia and recolonization of montane habitats may contribute to differentiation. Building on the tradition of using information from the genealogical relationships of alleles to infer the geography of divergence, here the additional consideration of the process of gene-lineage sorting is used to obtain a quantitative estimate of population relationships and historical associations (i.e., a population tree) from the gene trees of five anonymous nuclear loci and one mitochondrial locus in the broadly distributed species Melanoplus oregonensis. Three different approaches are used to estimate a model of population divergence; this comparison allows us to evaluate specific methodological assumptions that influence the estimated history of divergence. A model of population divergence was identified that fits the data significantly better than the other approaches, based on per-site likelihood scores of the multiple loci, and that provides clues about how divergence proceeded in M. oregonensis during the dynamic Pleistocene. 
Unlike the approaches that either considered only the most recent coalescence (i.e., information from a single individual per population) or did not consider the pattern of coalescence in the gene genealogies, the population-divergence model that best fits the data was estimated by considering the pattern of gene lineage coalescence across multiple individuals, as well as loci. These results indicate that sampling of multiple individuals per population is critical to obtaining an accurate estimate of the history of divergence so that the signal of common ancestry can be separated from the confounding influence of gene flow, even though estimates suggest that gene flow is not a predominant factor structuring patterns of genetic variation across these sky island populations. They also suggest that the gene genealogies contain information about population relationships, despite the lack of complete sorting of gene lineages. What emerges from the analyses is a model of population divergence that incorporates both contemporary distributions and historical associations, and shows a latitudinal and regional structuring of populations reminiscent of population displacements into multiple glacial refugia. Because the population-divergence model itself is built upon the specific events shaping the history of M. oregonensis, it provides a framework for estimating additional population-genetic parameters relevant to understanding the processes governing differentiation in geographically structured species and avoids the problems of relying on overly simplified and inaccurate divergence models. The utility of these approaches, as well as the caveats and future improvements, for estimating population relationships and historical associations relevant to genetic analyses of geographically structured species are discussed. [source]

Multilevel models for estimating incremental net benefits in multinational studies

Richard Grieve
Abstract Multilevel models (MLMs) have been recommended for estimating incremental net benefits (INBs) in multicentre cost-effectiveness analysis (CEA). However, these models have assumed that the INBs are exchangeable and that there is a common variance across all centres. This paper examines the plausibility of these assumptions by comparing various MLMs for estimating the mean INB in a multinational CEA. The results showed that the MLMs that assumed the INBs were exchangeable and had a common variance led to incorrect inferences. The MLMs that included covariates to allow for systematic differences across the centres, and estimated different variances in each centre, made more plausible assumptions, fitted the data better and led to more appropriate inferences. We conclude that the validity of the assumptions underlying MLMs used in CEA needs to be critically evaluated before reliable conclusions can be drawn. Copyright © 2006 John Wiley & Sons, Ltd. [source]


This study investigates whether models of forward-looking behavior explain the observed patterns of heavy drinking and smoking of men in late middle age in the Health and Retirement Study better than myopic models. We develop and estimate a sequence of nested models that differ by their degree of forward-looking behavior. Our empirical findings suggest that forward-looking models fit the data better than myopic models. These models also dominate other behavioral models based on out-of-sample predictions using data on men aged 70 and over. Myopic models predict smoking rates for older individuals that are significantly higher than those found in the data on elderly men. [source]

Long-term mortality from pleural and peritoneal cancer after exposure to asbestos: Possible role of asbestos clearance

Francesco Barone-Adesi
Abstract Models based on the multistage theory of carcinogenesis predict that the rate of mesothelioma increases monotonically as a function of time since first exposure (TSFE) to asbestos. Predictions of long-term mortality (TSFE ≥ 40 years) are, however, still untested, because of the limited follow-up of most epidemiological studies. Some authors have suggested that the increase in mesothelioma rate with TSFE might be attenuated by clearance of asbestos from the lungs. We estimated mortality time trends from pleural and peritoneal cancer in a cohort of 3,443 asbestos-cement workers, followed for more than 50 years. The functional relation between mesothelioma rate and TSFE was evaluated with various regression models. The role of asbestos clearance was explored using the traditional mesothelioma multistage model, generalized to include a term representing elimination over time. We observed 139 deaths from pleural and 56 from peritoneal cancer during the period 1950–2003. The rate of pleural cancer increased during the first 40 years of TSFE and reached a plateau thereafter. In contrast, the rate of peritoneal cancer increased monotonically with TSFE. The model allowing for asbestos elimination fitted the data better than the traditional model for pleural (p = 0.02) but not for peritoneal cancer (p = 0.22). The risk for pleural cancer, rather than showing an indefinite increase, might reach a plateau when a sufficiently long time has elapsed since exposure. The different trends for pleural and peritoneal cancer might be related to clearance of the asbestos from the workers' lungs. © 2008 Wiley-Liss, Inc. [source]
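The comparison described here, a power-law rise in rate with TSFE versus a generalization with an elimination term that lets the rate plateau, can be sketched on synthetic data. The functional forms and every numeric value below are illustrative assumptions, not the paper's exact parameterisation; note that the clearance model nests the power law as the elimination rate goes to zero.

```python
import numpy as np
from scipy.optimize import curve_fit

def multistage(t, a, k):
    # classical prediction: rate rises as a power of time since first exposure
    return a * t ** k

def with_clearance(t, a, k, lam):
    # replacing t with the integral of exp(-lam*s) caps the effective time,
    # so the rate levels off at long TSFE; lam -> 0 recovers the power law
    return a * ((1 - np.exp(-lam * t)) / lam) ** k

# Synthetic rates generated from the clearance form, with mild noise.
rng = np.random.default_rng(5)
tsfe = np.linspace(5, 55, 11)
true_rate = with_clearance(tsfe, a=0.02, k=2.5, lam=0.08)
obs = true_rate * rng.normal(1.0, 0.03, tsfe.size)   # 3% multiplicative noise

p_pow, _ = curve_fit(multistage, tsfe, obs, p0=(0.02, 1.5), maxfev=10000)
p_clr, _ = curve_fit(with_clearance, tsfe, obs, p0=(0.02, 2.5, 0.08), maxfev=10000)

sse_power = np.sum((multistage(tsfe, *p_pow) - obs) ** 2)
sse_clear = np.sum((with_clearance(tsfe, *p_clr) - obs) ** 2)
print(f"SSE, power law: {sse_power:.3f}; SSE, with clearance: {sse_clear:.3f}")
```

Because the two models are nested, a likelihood-ratio test would formalise this comparison in the way the abstract's p-values do.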

Party loyalty as habit formation

Ron Shachar
In most democracies, at least two out of any three individuals vote for the same party in sequential elections. This paper presents a model in which vote-persistence is partly due to the dependence of the utility on the previous voting decision. This dependence is termed 'habit formation'. The model and its implications are supported by individual-level panel data on the presidential elections in the USA in 1972 and 1976. For example, it is found that the voting probability is a function of the lagged choice variable, even when the endogeneity of the lagged variable is accounted for, and that the tendency to vote for different parties in sequential elections decreased with the age of the voter. Furthermore, using structural estimation the effect of habit is estimated, while allowing unobserved differences among respondents. The structural habit parameter implies that the effect of previous votes on the current decision is quite strong. The habit model fits the data better than the traditional 'party identification' model. Copyright © 2003 John Wiley & Sons, Ltd. [source]

Are parametric models suitable for estimating avian growth rates?

William P. Brown
For many bird species, growth is negative or equivocal during development. Traditional parametric growth curves assume growth follows a sigmoidal form with prescribed inflection points and is positive until asymptotic size. Accordingly, these curves will not accurately capture the variable, sometimes considerable, fluctuations in avian growth over the course of the trajectory. We evaluated the fit of three traditional growth curves (logistic, Gompertz, and von Bertalanffy) and a nonparametric spline estimator to simulated growth data of six different specified forms over a range of sample sizes. For all sample sizes, the spline best fit the simulated model that exhibited negative growth during a portion of the trajectory. The Gompertz curve was the most flexible for fitting simulated models that were strictly sigmoidal in form, yet the fit of the spline was comparable to that of the Gompertz curve as sample size increased. Importantly, confidence intervals for all of the fitted traditional growth curves were wholly inaccurate, negating the apparent robustness of the Gompertz curve, while confidence intervals of the spline were acceptable. We further evaluated the fit of traditional growth curves and the spline to a large data set of wood thrush Hylocichla mustelina mass and wing chord observations. The spline fit the wood thrush data better than the traditional growth curves, produced estimates that did not differ from known observations, and described negative growth rates at relevant life history stages that were not detected by the growth curves. The common rationale for using parametric growth curves, which compress growth information into a few parameters, is to predict an expected size or growth rate at some age or to compare estimated growth with other published estimates. The suitability of these traditional growth curves may be compromised by several factors, however, including variability in the true growth trajectory. 
Nonparametric methods, such as the spline, provide a precise description of empirical growth yet do not produce such parameter estimates. Selection of a growth descriptor is best determined by the question being asked but may be constrained by inherent patterns in the growth data. [source]
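The Gompertz-versus-spline comparison can be sketched on simulated nestling masses that dip before fledging, the kind of trajectory a strictly sigmoidal curve cannot follow. The trajectory shape, noise level, and smoothing factor below are illustrative assumptions, not the wood thrush data set.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.optimize import curve_fit

def gompertz(t, A, b, k):
    # A = asymptotic size, b = displacement, k = growth-rate constant
    return A * np.exp(-b * np.exp(-k * t))

# Simulated mass trajectory with a dip ("mass recession") around day 9.
rng = np.random.default_rng(1)
t = np.linspace(0, 12, 60)
true = 20 / (1 + np.exp(-(t - 4))) - 2 * np.exp(-((t - 9) ** 2))
mass = true + rng.normal(0, 0.3, t.size)

params, _ = curve_fit(gompertz, t, mass, p0=(20, 3, 0.5), maxfev=10000)
spline = UnivariateSpline(t, mass, s=t.size * 0.3 ** 2)  # target SSE ~ noise level

sse_gompertz = np.sum((gompertz(t, *params) - mass) ** 2)
sse_spline = np.sum((spline(t) - mass) ** 2)
print(f"SSE Gompertz: {sse_gompertz:.1f}; SSE spline: {sse_spline:.1f}")
```

The monotone Gompertz curve must smooth straight through the dip, so its residual error exceeds that of the spline, which tracks the negative-growth phase.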

Instrumental and Expert Assessment of Mahon Cheese Texture

J. Benedito
ABSTRACT: To improve Mahon cheese texture assessment, the relationship between instrumental and sensory measurements was sought. For that purpose, 30 pieces of Mahon cheese from different batches and 2 different manufacturers were examined. Textural characteristics at different curing times were evaluated by uniaxial compression, puncture, and sensory analysis. Significant linear correlations were found between instrumental and sensory measurements. A logarithmic model (Weber-Fechner) fitted the data better than a linear one. Only 1 factor was extracted when considering all the instrumental and sensory variables, thus indicating that both sets of measurements are related to the same phenomenon. The best predictors for Mahon cheese sensory attributes were found to be cheese moisture, deformability modulus, and slope in puncture. [source]
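The Weber-Fechner comparison amounts to regressing sensory scores on the logarithm of the instrumental measurement rather than on the raw value. A minimal sketch on synthetic data; every numeric value here is an illustrative assumption, not the cheese data.

```python
import numpy as np

# Synthetic sensory scores generated from a logarithmic (Weber-Fechner) law.
rng = np.random.default_rng(2)
force = np.linspace(1, 50, 40)              # e.g. an instrumental puncture measure
score = 2.0 + 1.5 * np.log(force) + rng.normal(0, 0.2, force.size)

def r_squared(x, y):
    slope, intercept = np.polyfit(x, y, 1)  # least-squares line y = a + b*x
    resid = y - (intercept + slope * x)
    return 1 - resid.var() / y.var()

print(f"linear model      R^2 = {r_squared(force, score):.3f}")
print(f"logarithmic model R^2 = {r_squared(np.log(force), score):.3f}")
```

When perception follows a log law, the log-transformed regressor yields the higher R^2, which is the pattern the abstract reports.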


S.Y. Zheng
This paper presents an approach to the evaluation of reservoir models using transient pressure data. Braided fluvial sandstones exposed in cliffs in SW England were studied as the surface equivalent of the Triassic Sherwood Sandstone, a reservoir unit at the nearby Wytch Farm oilfield. Three reservoir models were built; each used a different modelling approach ranging in complexity from stochastic pixel-based modelling using commercially available software, to a spreadsheet random number generator. In order to test these models, numerical well test simulations were conducted using sector models extracted from the geological models constructed. The simulation results were then evaluated against the actual well test data in order to find the model which best represented the field geology. Two wells at Wytch Farm field were studied. The results suggested that for one of the sampled wells, the model built using the spreadsheet random number generator gave the best match to the well test data. In the well, the permeability from the test interpretation matched the geometric average permeability. This average is the "correct" upscaled permeability for a random system, and this was consistent with the random nature of the geological model. For the second well investigated, a more complex "channel object" model appeared to fit the dynamic data better. All the models were built with stationary properties. However, the well test data suggested that some parts of the field have different statistical properties and hence show non-stationarity. These differences would have to be built into the model representing the local geology. This study presents a workflow that is not yet considered standard in the oil industry, and the use of dynamic data to evaluate geological models requires further development. 
The study highlights the fact that the comparison or matching of results from reservoir models and well-test analyses is not always straightforward, in that different models may match different wells. The study emphasises the need for integrated analyses of geological and engineering data. The methods and procedures presented are intended to form a feedback loop which can be used to evaluate the representativeness of a geological model. [source]

Mathematical models for coinfection by two sexually transmitted agents: the human immunodeficiency virus and herpes simplex virus type 2 case

S. Guy Mahiane
Summary. To study the interactions between two sexually transmitted diseases without remission of the infections, we propose to use Markovian models. One model allows the estimation of the per-partnership female-to-male transmission probabilities for each infection, and the other the per-sex-act transmission probabilities. These models take into account the essential factors for the propagation of both infections, including the variability according to age of the rates of prevalence in the population of female partners for the male individuals constituting our sample. We estimate transmission probabilities and relative risks (for circumcision, usage of condoms and the effect of one infection on the infectivity of the other) by using the maximum likelihood method. Bootstrap procedures are used to provide confidence intervals for the parameters. We illustrate the new procedures with the study of the interactions between herpes simplex virus type 2 and human immunodeficiency virus by using data from the male circumcision trial that was conducted in Orange Farm (South Africa). The study shows that the probability that a susceptible male individual acquires one of the viruses is significantly higher when he is already infected with the other. Using the Akaike information criterion, we show that the per-partnership model fits the data better than the per-sex-act model. [source]

Bayesian analysis of single-molecule experimental data

S. C. Kou
Summary. Recent advances in experimental technologies allow scientists to follow biochemical processes on a single-molecule basis, which provides much richer information about chemical dynamics than traditional ensemble-averaged experiments but also raises many new statistical challenges. The paper provides the first likelihood-based statistical analysis of the single-molecule fluorescence lifetime experiment designed to probe the conformational dynamics of a single deoxyribonucleic acid (DNA) hairpin molecule. The conformational change is initially treated as a continuous time two-state Markov chain, which is not observable and must be inferred from changes in photon emissions. This model is further complicated by unobserved molecular Brownian diffusions. Beyond the simple two-state model, a competing model that models the energy barrier between the two states of the DNA hairpin as an Ornstein–Uhlenbeck process has been suggested in the literature. We first derive the likelihood function of the simple two-state model and then generalize the method to handle complications such as unobserved molecular diffusions and the fluctuating energy barrier. The data augmentation technique and Markov chain Monte Carlo methods are developed to sample from the desired posterior distribution. The Bayes factor calculation and posterior estimates of relevant parameters indicate that the fluctuating barrier model fits the data better than the simple two-state model. [source]
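The baseline model in this abstract, a continuous-time two-state Markov chain, can be simulated with exponential holding times. The rate constants below are illustrative assumptions, not estimates from the DNA hairpin experiment.

```python
import numpy as np

def simulate_two_state(k01, k10, t_max, rng):
    """Simulate a 0/1 continuous-time Markov chain started in state 0.

    Returns (switch_times, states): the time each state was entered and the
    state itself. Holding times are exponential with the state's exit rate.
    """
    t, state = 0.0, 0
    times, states = [0.0], [0]
    while True:
        rate = k01 if state == 0 else k10
        t += rng.exponential(1.0 / rate)   # exponential holding time
        if t >= t_max:
            break
        state = 1 - state
        times.append(t)
        states.append(state)
    return np.array(times), np.array(states)

rng = np.random.default_rng(4)
t_max = 1000.0
times, states = simulate_two_state(k01=2.0, k10=1.0, t_max=t_max, rng=rng)

# Time-weighted occupancy of state 1; in the long run this approaches
# the stationary value k01 / (k01 + k10) = 2/3 for these rates.
occupancy = np.diff(np.append(times, t_max)) @ states / t_max
print(f"fraction of time in state 1: {occupancy:.2f}")
```

In the actual experiment the state path is hidden and must be inferred from photon emissions, which is what motivates the paper's data augmentation and MCMC machinery.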

Product Innovation and Irregular Growth Cycles with Excess Capacity

Gang Gong
A non-linear growth model is presented that provides a solution to Harrod's knife-edge problem within the Keynesian multiplier–accelerator framework. The model introduces product innovation into a conventional investment function. Statistical analysis demonstrates that such an investment function matches the data better than the conventional one. Dynamic analysis shows that irregular growth cycles could occur with excess capacity. [source]

Dynamical modelling of luminous and dark matter in 17 Coma early-type galaxies

J. Thomas
ABSTRACT Dynamical models for 17 early-type galaxies in the Coma cluster are presented. The galaxy sample consists of flattened, rotating as well as non-rotating early-types including cD and S0 galaxies with luminosities between MB = −18.79 and −22.56. Kinematical long-slit observations cover at least the major-axis and minor-axis and extend to 1–4 reff. Axisymmetric Schwarzschild models are used to derive stellar mass-to-light ratios and dark halo parameters. In every galaxy, the best fit with dark matter matches the data better than the best fit without. The statistical significance is over 95 per cent for eight galaxies, around 90 per cent for five galaxies and for four galaxies it is not significant. For the highly significant cases, systematic deviations between models without dark matter and the observed kinematics are clearly seen; for the remaining galaxies, differences are more statistical in nature. Best-fitting models contain 10–50 per cent dark matter inside the half-light radius. The central dark matter density is at least one order of magnitude lower than the luminous mass density, independent of the assumed dark matter density profile. The central phase-space density of dark matter is often orders of magnitude lower than that in the luminous component, especially when the halo core radius is large. The orbital system of the stars along the major-axis is slightly dominated by radial motions. Some galaxies show tangential anisotropy along the minor-axis, which is correlated with the minor-axis Gauss–Hermite coefficient H4. Changing the balance between data-fit and regularization constraints does not change the reconstructed mass structure significantly: model anisotropies tend to strengthen if the weight on regularization is reduced, but the general property of a galaxy to be radially or tangentially anisotropic does not change. 
This paper aims to lay the groundwork for a subsequent detailed analysis of luminous and dark matter scaling relations, orbital dynamics and stellar populations. [source]

On the Epidemiological Microfoundations of Sticky Information

Ricardo Nunes
Abstract We estimate and compare two models in which households periodically update their expectations. The first model assumes that households update their expectations towards survey measures. In the second model, households update their expectations towards rational expectations (RE). While the literature has used these specifications interchangeably, we argue that there are important differences. The two models imply different updating probabilities, and the data seem to prefer the second one. We then analyse the properties of both models in terms of mean expectations, median expectations, and a measure of disagreement among households. The model with periodical updates towards RE also seems to fit the data better along these dimensions. [source]

The Dimensionality of Right-Wing Authoritarianism: Lessons from the Dilemma between Theory and Measurement

Friedrich Funke
The RWA Scale (Altemeyer, 1981, 1988, 1996) is commonly regarded as the best measure of right-wing authoritarianism. The one-dimensional instrument assesses the covariation of three attitudinal clusters: authoritarian submission, authoritarian aggression, and conventionalism. The incongruence between the implicit conceptual dimensionality on the one hand and methodological operationalization on the other makes room for discussion about whether it would be advantageous to measure the 3 facets of RWA separately. I rely on three arguments: (1) confirmatory factor analyses showing that three-dimensional scales fit the data better than the conventional one-dimensional practice; (2) the dimensions showing a considerable interdimensional discrepancy in their capability to explain validation criteria; and (3) the dimensions showing an intradimensional discrepancy which is dependent upon the research question. The argumentation is illustrated by empirical evidence from several Web-based studies among German Internet users. [source]

Women's Perspective on Men's Control and Aggression in Intimate Relationships

Zeev Winstok PhD
The relationships among men's self-control capability, their need to control their wives, and their use of verbal aggression, threats, and physical forms of aggression against their partners, as reported by women, were examined. Data were obtained from a stratified probability sample of 2,544 women drawn from the general population in Israel. Initially, structural equation modeling analysis showed that (a) men's need to control their partners and their ability to control themselves were negatively related, and were 2 aspects of personal control; (b) men's verbal aggression, threats of physical aggression, and actual physical aggression toward their partners were closely related, and were 3 aspects of aggressive behavior; (c) personal control and aggressive behavior were closely related. Next, a revised model, which fitted the data better, demonstrated that verbal aggression was more closely related to personal control than to aggressive behavior. Finally, a model representing the co-occurrence of control and violent expressions was tested. This model yielded the best fit to the data. We concluded that control and aggression are two conceptualizations of the same phenomenon, rather than 2 distinct, yet interrelated, concepts. [source]

Integrating ecology with hydromorphology: a priority for river science and management

I.P. Vaughan
Abstract 1. The assessment of links between ecology and physical habitat has become a major issue in river research and management. Key drivers include concerns about the conservation implications of human modifications (e.g. abstraction, climate change) and the explicit need to understand the ecological importance of hydromorphology as prescribed by the EU's Water Framework Directive. Efforts are focusing on the need to develop 'eco-hydromorphology' at the interface between ecology, hydrology and fluvial geomorphology. Here, the scope of this emerging field is defined, some research and development issues are suggested, and a path for development is sketched out. 2. In the short term, major research priorities are to use existing literature or data better to identify patterns among organisms, ecological functions and river hydromorphological character. Another early priority is to identify model systems or organisms to act as research foci. In the medium term, the investigation of pattern–process linkages, spatial structuring, scaling relationships and system dynamics will advance mechanistic understanding. The effects of climate change, abstraction and river regulation, eco-hydromorphic resistance/resilience, and responses to environmental disturbances are likely to be management priorities. Large-scale catchment projects, in both rural and urban locations, should be promoted to concentrate collaborative efforts, to attract financial support and to raise the profile of eco-hydromorphology. 3. Eco-hydromorphological expertise is currently fragmented across the main contributory disciplines (ecology, hydrology, geomorphology, flood risk management, civil engineering), potentially restricting research and development. This is paradoxical given the shared vision across these fields for effective river management based on good science with social impact. 
A range of approaches is advocated to build sufficient, integrated capacity that will deliver science of real management value over the coming decades. Copyright © 2007 John Wiley & Sons, Ltd. [source]

Body size and extinction risk in Australian mammals: An information-theoretic approach

Abstract The critical weight range (CWR) hypothesis for Australian mammals states that extinctions and declines have been concentrated in species with body mass between 35 g and 5.5 kg. The biological basis for this hypothesis is that species of intermediate size are disproportionately impacted by introduced predators. The CWR hypothesis has received support from several statistical studies over the past decade, although the evidence is weaker or non-existent for certain groups such as mesic-zone mammals and arboreal mammals. In this study, we employ an information-theoretic model selection approach to gain further insights into the relationship between body mass and extinction risk in Australian mammals. We find evidence, consistent with the CWR hypothesis, that extinction risk peaks at intermediate body masses for marsupials, rodents and ground-dwelling species, but not for arboreal species. In contrast to previous studies, we find that the CWR describes extinction patterns in the mesic zone as well as the arid zone. In the mesic zone, there is also a weaker tendency for large species above the CWR to be more vulnerable, consistent with extinction patterns on other continents. We find that a more biologically plausible Gaussian distribution consistently fits the data better than the polynomial models that have been used in previous studies. Our results justify conservation programmes targeted at species within the CWR across Australia. [source]

Point and Interval Estimation of the Population Size Using a Zero-Truncated Negative Binomial Regression Model

Maarten J. L. F. Cruyff
Abstract This paper presents the zero-truncated negative binomial regression model to estimate the population size in the presence of a single registration file. The model is an alternative to the zero-truncated Poisson regression model and it may be useful if the data are overdispersed due to unobserved heterogeneity. Horvitz–Thompson point and interval estimates for the population size are derived, and the performance of these estimators is evaluated in a simulation study. To illustrate the model, the size of the population of opiate users in the city of Rotterdam is estimated. In comparison to the Poisson model, the zero-truncated negative binomial regression model fits these data better and yields a substantially higher population size estimate. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
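The estimation idea can be sketched in miniature: fit a zero-truncated count model to the registration file (which only records individuals seen at least once), then apply a Horvitz–Thompson correction. For brevity this sketch uses a zero-truncated Poisson without covariates; the paper's negative binomial regression adds covariates and overdispersion, and all numbers below are synthetic.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_ztp(counts):
    """MLE of the Poisson rate from zero-truncated data (counts >= 1)."""
    counts = np.asarray(counts)

    def nll(lam):
        # log P(X = x | X >= 1) up to a term constant in lam (the x! term)
        ll = counts * np.log(lam) - lam - np.log1p(-np.exp(-lam))
        return -ll.sum()

    return minimize_scalar(nll, bounds=(1e-6, 50), method="bounded").x

def horvitz_thompson_size(counts, lam):
    # each observed case stands for 1 / P(observed) population members
    return len(counts) / (1 - np.exp(-lam))

# Synthetic registration file: a true population of 5,000, each person
# registered a Poisson(0.8) number of times; zeros are never observed.
rng = np.random.default_rng(3)
full = rng.poisson(0.8, size=5000)
observed = full[full > 0]
lam = fit_ztp(observed)
print(f"estimated population size: {horvitz_thompson_size(observed, lam):.0f}")
```

Overdispersed data inflate the apparent zero class, which is why the negative binomial version in the paper recovers a larger population than the Poisson fit.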

Peer Rejection, Aggressive or Withdrawn Behavior, and Psychological Maladjustment from Ages 5 to 12: An Examination of Four Predictive Models

Gary W. Ladd
Findings yielded a comprehensive portrait of the predictive relations among children's aggressive or withdrawn behaviors, peer rejection, and psychological maladjustment across the 5–12 age period. Examination of peer rejection in different variable contexts and across repeated intervals throughout childhood revealed differences in the timing, strength, and consistency of this risk factor as a distinct (additive) predictor of externalizing versus internalizing problems. In conjunction with aggressive behavior, peer rejection proved to be a stronger additive predictor of externalizing problems during early rather than later childhood. Relative to withdrawn behavior, rejection's efficacy as a distinct predictor of internalizing problems was significant early in childhood and increased progressively thereafter. These additive path models fit the data better than did disorder-driven or transactional models. [source]