Different Choices (different + choice)


Selected Abstracts


Bootstrap-based bandwidth choice for log-periodogram regression

JOURNAL OF TIME SERIES ANALYSIS, Issue 6 2009
Josu Arteche
Abstract: The choice of the bandwidth in the local log-periodogram regression is of crucial importance for estimation of the memory parameter of a long memory time series. Different choices may give rise to completely different estimates, which may lead to contradictory conclusions, for example about the stationarity of the series. We propose here a data-driven bandwidth selection strategy that is based on minimizing a bootstrap approximation of the mean-squared error (MSE). Its behaviour is compared with that of other existing techniques for MSE-optimal bandwidth selection, revealing better performance over a wider class of models. The empirical applicability of the proposed strategy is shown with two examples: the Nile river annual minimum levels, widely analysed in the long memory literature, and the input gas rate series of Box and Jenkins. [source]
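
To see concretely why the bandwidth matters, here is a minimal Python sketch of the local log-periodogram (GPH-type) regression itself, in which the memory parameter estimate is the slope of a regression over the first m Fourier frequencies. The bootstrap MSE criterion proposed in the paper is not reproduced; the series below is purely synthetic.

```python
import numpy as np

def gph_estimate(x, m):
    """Local log-periodogram (GPH) estimate of the memory parameter d,
    using the first m Fourier frequencies (m is the bandwidth)."""
    n = len(x)
    lam = 2 * np.pi * np.arange(1, m + 1) / n      # Fourier frequencies
    dft = np.fft.fft(x - np.mean(x))[1 : m + 1]
    I = np.abs(dft) ** 2 / (2 * np.pi * n)         # periodogram ordinates
    X = -2 * np.log(lam)                           # regressor; slope estimates d
    Xc = X - X.mean()
    return np.sum(Xc * np.log(I)) / np.sum(Xc ** 2)

# Bandwidth sensitivity: different m can give contradictory estimates.
rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(2000)) * 0.05 + rng.standard_normal(2000)
for m in (10, 30, 100, 300):
    print(f"m = {m:4d}  d_hat = {gph_estimate(x, m):+.3f}")
```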


Unified multipliers-free theory of dual-primal domain decomposition methods

NUMERICAL METHODS FOR PARTIAL DIFFERENTIAL EQUATIONS, Issue 3 2009
Ismael Herrera
Abstract The concept of dual-primal methods can be formulated in a manner that incorporates, as a subclass, the non-preconditioned case. Using such a generalized concept, in this article we introduce, without recourse to "Lagrange multipliers," an all-inclusive unified theory of nonoverlapping domain decomposition methods (DDMs). One-level methods, such as Schur-complement and one-level FETI, as well as two-level methods, such as Neumann-Neumann and preconditioned FETI, are incorporated in a unified manner. Different choices of the dual subspaces yield the different dual-primal preconditioners reported in the literature. In this unified theory, the procedures are carried out directly on the matrices, independently of the differential equations from which they originate. This feature considerably reduces the code-development effort required for their implementation and permits, for example, transforming 2D codes into 3D codes easily. Another source of this simplification is the introduction of two projection matrices, generalizations of the average and jump of a function, which possess superior computational properties. In particular, on the basis of the numerical results reported therein, we claim that our jump matrix is the optimal choice of the B operator of the FETI methods. A new formula for the Steklov-Poincaré operator, at the discrete level, is also introduced. © 2008 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq, 2009 [source]
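
For readers unfamiliar with the Schur-complement method named above, the following minimal numpy sketch shows the basic reduction: interior unknowns are eliminated, leaving an operator that acts only on interface unknowns, i.e. the discrete Steklov-Poincaré operator. The tiny 1D Laplacian and the choice of interface node are illustrative only, not the article's formulation.

```python
import numpy as np

def schur_complement(A, interior, interface):
    """Eliminate interior unknowns: S = A_GG - A_GI * inv(A_II) * A_IG.
    S is the discrete Steklov-Poincare (interface) operator."""
    AII = A[np.ix_(interior, interior)]
    AIG = A[np.ix_(interior, interface)]
    AGI = A[np.ix_(interface, interior)]
    AGG = A[np.ix_(interface, interface)]
    return AGG - AGI @ np.linalg.solve(AII, AIG)

# 1D Laplacian on 5 unknowns; node 2 is the interface between two subdomains.
A = 2 * np.eye(5) - np.eye(5, k=1) - np.eye(5, k=-1)
S = schur_complement(A, interior=[0, 1, 3, 4], interface=[2])
print(S)  # 1x1 interface operator for this toy problem
```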


The effect of the 19F(α, p)22Ne reaction rate uncertainty on the yield of fluorine from Wolf–Rayet stars

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 1 2005
Richard J. Stancliffe
ABSTRACT In the light of recent recalculations of the 19F(α, p)22Ne reaction rate, we present results of the expected yield of 19F from Wolf–Rayet (WR) stars. In addition to using the recommended rate, we have computed models using the upper and lower limits for the rate, and hence we constrain the uncertainty in the yield with respect to this reaction. We find a yield of 3.1 × 10−4 M⊙ of 19F with our recommended rate, and a difference of a factor of 2 between the yields computed with the upper and lower limits. In comparison with previous work, we find a difference in the yield of a factor of approximately 4, connected with a different choice of mass loss. Model uncertainties must be carefully evaluated in order to obtain a reliable estimate of the yield of fluorine from WR stars, together with its uncertainties. [source]


Emerging from pseudo-symmetry: the redetermination of human carbonic anhydrase II in monoclinic P21 with a doubled a axis

ACTA CRYSTALLOGRAPHICA SECTION D, Issue 8 2010
Arthur H. Robbins
The crystal structure of human carbonic anhydrase II in the monoclinic P21 space group, with an a axis doubled relative to that of the usually observed unit cell, has recently been reported, with one of the two molecules in the asymmetric unit demonstrating rotational disorder [Robbins et al. (2010), Acta Cryst. D66, 628–634]. The structure has been redetermined, with the coordinates of both pseudo-symmetrically related molecules in the crystallographic asymmetric unit translated by x′ = x ± 1/4, and no rotational disorder is observed. This corresponds to a different choice of how the four molecules in the unit cell should be grouped into pairs that represent a single asymmetric unit. [source]


Evaluation of the Ages and Stages Questionnaires in identifying children with neurosensory disability in the Magpie Trial follow-up study

ACTA PAEDIATRICA, Issue 12 2007
Ly-Mee Yu
Abstract Aim: To evaluate the performance of the Ages and Stages Questionnaires (full ASQ), and a shortened version (short ASQ), in detecting children with severe neurosensory disability in the Magpie Trial follow-up study. Methods: All children born to women in the Magpie Trial and selected for follow-up, with a completed full 30-item and/or short 9-item ASQ, were included in this analysis. Sensitivity and specificity, corrected for verification bias, were computed to assess detection ability. Results: Of the 2046 children who completed a full ASQ, 406 (19.8%) failed the assessment, 54 of whom had confirmed neurosensory disability. Adjusted sensitivity and specificity (95% confidence intervals) were 87.4% (62.9–96.6%) and 82.3% (80.5–83.9%), respectively. Two of the five domains in the full ASQ (Fine Motor and Problem Solving) contributed little to detection ability. Sensitivity and specificity for the short ASQ were 69.2% and 95.7%, respectively. Conclusions: Sensitivity of the full ASQ for severe neurosensory disability is generally good, and does not appear to be much reduced by restricting questions to three of the five domains. The short ASQ reported here showed reduced performance, although this might be improved by a different choice of questions or scoring system. [source]
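
As a hedged sketch of the underlying arithmetic, the snippet below computes sensitivity and specificity from a 2×2 screening table with Wilson score intervals. The counts are hypothetical (not the Magpie Trial data), and the verification-bias correction the study applied is not reproduced.

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Hypothetical screening counts, for illustration only:
tp, fn = 47, 7      # disabled children failing / passing the screen
fp, tn = 359, 1633  # non-disabled children failing / passing the screen

for name, k, n in [("sensitivity", tp, tp + fn), ("specificity", tn, tn + fp)]:
    lo, hi = wilson_ci(k, n)
    print(f"{name}: {k / n:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```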


The Impact of Vertical Scaling Decisions on Growth Interpretations

EDUCATIONAL MEASUREMENT: ISSUES AND PRACTICE, Issue 4 2009
Derek C. Briggs
Most growth models implicitly assume that test scores have been vertically scaled. What may not be widely appreciated are the different choices that must be made when creating a vertical score scale. In this paper, empirical patterns of growth in student achievement are compared as a function of different approaches to creating a vertical scale. Longitudinal item-level data from a standardized reading test are analyzed for two cohorts of students, between Grades 3 and 6 and Grades 4 and 7, for the entire state of Colorado from 2003 to 2006. Eight different vertical scales were established on the basis of choices made for three key variables: Item Response Theory modeling approach, linking approach, and ability estimation approach. It is shown that interpretations of empirical growth patterns appear to depend upon the extent to which a vertical scale has been effectively "stretched" or "compressed" by the psychometric decisions made to establish it. While all of the vertical scales considered show patterns of decelerating growth across grade levels, there is little evidence of scale shrinkage. [source]
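
The 2 × 2 × 2 factorial structure behind the eight scales can be made explicit in a few lines of Python. The option labels below are illustrative stand-ins for the three binary choices, not necessarily the paper's exact terms.

```python
from itertools import product

# Three binary psychometric choices yield 2 * 2 * 2 = 8 vertical scales.
irt_models = ("1PL", "3PL")            # IRT modeling approach (illustrative)
linking = ("separate", "concurrent")   # linking approach (illustrative)
ability = ("ML", "EAP")                # ability estimation approach (illustrative)

for i, (m, l, a) in enumerate(product(irt_models, linking, ability), 1):
    print(f"scale {i}: IRT={m}, linking={l}, ability={a}")
```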


PHYLOGENETICALLY NESTED COMPARISONS FOR TESTING CORRELATES OF SPECIES RICHNESS: A SIMULATION STUDY OF CONTINUOUS VARIABLES

EVOLUTION, Issue 1 2003
NICK J. B. ISAAC
Abstract: Explaining the uneven distribution of species among lineages is one of the oldest questions in evolution. Proposed correlations between biological traits and species diversity are routinely tested by making comparisons between phylogenetic sister clades. Several recent studies have used nested sister-clade comparisons to test hypotheses linking continuously varying traits, such as body size, with diversity. Evaluating the findings of these studies is complicated because they differ in the index of species richness difference used, the way in which trait differences were treated, and the statistical tests employed. In this paper, we use simulations to compare the performance of four species richness indices, two choices about the branch lengths used to estimate trait values for internal nodes, and two statistical tests under a range of models of clade growth and character evolution. All four indices returned appropriate Type I error rates when the assumptions of the method were met and when branch lengths were set proportional to time. Only two of the indices were robust to the different evolutionary models and to different choices of branch lengths and statistical tests. These robust indices had comparable power under one non-null scenario. Regression through the origin was consistently more powerful than the t-test, and the choice of branch lengths exerted a strong effect on both validity and power. In the light of our simulations, we re-evaluate the findings of those who have previously used nested comparisons in the context of species richness. We provide a set of simple guidelines to maximize the performance of phylogenetically nested comparisons in tests of putative correlates of species richness. [source]
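
A minimal sketch of one of the compared tests follows: regression through the origin of a richness-difference index on sister-clade trait contrasts (the origin is forced because the direction of each comparison is arbitrary). The data are simulated, and the four richness indices themselves are not implemented here.

```python
import numpy as np
from scipy import stats

def origin_regression_test(trait_diff, richness_index):
    """Regression through the origin of a species-richness difference index
    on sister-clade trait differences; tests whether the slope is nonzero."""
    x, y = np.asarray(trait_diff), np.asarray(richness_index)
    beta = np.sum(x * y) / np.sum(x ** 2)
    resid = y - beta * x
    n = len(x)
    se = np.sqrt(np.sum(resid ** 2) / (n - 1) / np.sum(x ** 2))
    t = beta / se
    p = 2 * stats.t.sf(abs(t), df=n - 1)  # n-1 df: one parameter, no intercept
    return beta, t, p

# Hypothetical comparisons: trait contrast vs. a log-ratio richness contrast.
rng = np.random.default_rng(1)
x = rng.standard_normal(20)
y = 0.5 * x + rng.standard_normal(20)
print(origin_regression_test(x, y))
```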


Quantitative trait linkage analysis by generalized estimating equations: Unification of variance components and Haseman-Elston regression

GENETIC EPIDEMIOLOGY, Issue 4 2004
Wei-Min Chen
Two of the major approaches for linkage analysis with quantitative traits in humans include variance components and Haseman-Elston regression. Previously, these were viewed as quite separate methods. We describe a general model, fit by use of generalized estimating equations (GEE), for which the variance components and Haseman-Elston methods (including many of the extensions to the original Haseman-Elston method) are special cases, corresponding to different choices for a working covariance matrix. We also show that the regression-based test of Sham et al. ([2002] Am. J. Hum. Genet. 71:238–253) is equivalent to a robust score statistic derived from our GEE approach. These results have several important implications. First, this work provides new insight regarding the connection between these methods. Second, asymptotic approximations for power and sample size allow clear comparisons regarding the relative efficiency of the different methods. Third, our general framework suggests important extensions to the Haseman-Elston approach which make more complete use of the data in extended pedigrees and allow a natural incorporation of environmental and other covariates. © 2004 Wiley-Liss, Inc. [source]
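
For orientation, here is a minimal sketch of the original Haseman-Elston regression, the simplest special case in this framework: the squared sib-pair trait difference is regressed on estimated identity-by-descent (IBD) sharing, with a significantly negative slope suggesting linkage. Data and effect sizes below are invented for illustration.

```python
import numpy as np

def haseman_elston(trait1, trait2, ibd):
    """Classic Haseman-Elston: regress the squared sib-pair trait difference
    on the estimated proportion of alleles shared IBD at the test locus."""
    y = (np.asarray(trait1) - np.asarray(trait2)) ** 2
    X = np.column_stack([np.ones_like(ibd), ibd])
    (intercept, slope), *_ = np.linalg.lstsq(X, y, rcond=None)
    return intercept, slope  # slope < 0 suggests linkage

# Hypothetical sib pairs: higher IBD sharing -> more similar trait values.
rng = np.random.default_rng(2)
ibd = rng.choice([0.0, 0.5, 1.0], size=200)
t1 = rng.standard_normal(200)
t2 = 0.6 * ibd * t1 + rng.standard_normal(200)  # crude linkage signal
print(haseman_elston(t1, t2, ibd))
```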


Unified formulation of radiation conditions for the wave equation

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 2 2002
Steen Krenk
Abstract A family of radiation boundary conditions for the wave equation is derived by truncating a rational function approximation of the corresponding plane wave representation, and it is demonstrated how these boundary conditions can be formulated in terms of fictitious surface densities, governed by second-order wave equations on the radiating surface. Several well-established radiation boundary conditions appear as special cases, corresponding to different choices of the coefficients in the rational approximation. The relation between these choices is established, and an explicit formulation in terms of selected directions with ideal transmission is presented. A mechanical interpretation of the fictitious surface densities enables identification of suitable conditions at corners and boundaries of the radiating surface. Numerical examples illustrate excellent results with one or two fictitious layers with suitable corner and boundary conditions. Copyright © 2001 John Wiley & Sons, Ltd. [source]
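
As an illustration of why a rational approximation is attractive here, the sketch below compares a truncated Taylor series and a [1/1] Padé approximant against sqrt(1 - s^2), the kernel appearing in the plane-wave representation; the Padé coefficients are the standard ones, and the correspondence to the paper's particular family is only indicative.

```python
import numpy as np

# Approximations of sqrt(1 - s^2); truncating at different orders gives the
# classic hierarchy of absorbing/radiation boundary conditions.
s = np.linspace(0.0, 0.95, 5)
exact = np.sqrt(1 - s**2)
taylor = 1 - s**2 / 2                        # low-order Taylor truncation
pade = (1 - 3 * s**2 / 4) / (1 - s**2 / 4)   # [1/1] Pade approximant in s^2

for si, e, t, p in zip(s, exact, taylor, pade):
    print(f"s={si:.2f}  exact={e:.4f}  "
          f"taylor err={abs(t - e):.1e}  pade err={abs(p - e):.1e}")
```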


Numerical analysis of interfacial two-dimensional Stokes flow with discontinuous viscosity and variable surface tension

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 5 2001
Zhilin Li
Abstract A fluid model of the incompressible Stokes equations in two space dimensions is used to simulate the motion of a droplet boundary separating two fluids with unequal viscosity and variable surface tension. Our theoretical analysis leads to decoupled jump conditions that are used in constructing the numerical algorithm. Numerical results agree with others in the literature and include some new findings that may apply to processes similar to cell cleavage. The method developed here accurately preserves area for our test problems. Some interesting observations are obtained with different choices of the parameters. Copyright © 2001 John Wiley & Sons, Ltd. [source]


Scheduling and power control for MAC layer design in multihop IR-UWB networks

INTERNATIONAL JOURNAL OF NETWORK MANAGEMENT, Issue 1 2010
Reena Pilakkat
Recently, a number of researchers have proposed media access control (MAC) designs for ultra-wideband (UWB) networks. Among them, designs based on scheduling and power control seem to hold great promise, particularly for quality-of-service (QoS) traffic. We investigate the efficiency of many different choices for scheduling and power allocation for QoS traffic in a multihop impulse radio (IR)-UWB network, with the objective of achieving both high spectral efficiency and low transmission power. Specifically, we compare different scheduling schemes employing a protocol interference-based contention graph as well as a physical interference-based contention graph. We propose a relative-distance criterion to determine adjacency in the protocol interference-based contention graph. Using our improved protocol interference model with graph-based scheduling, we obtain better performance than the physical interference-based approach employing link-by-link scheduling. Copyright © 2009 John Wiley & Sons, Ltd. [source]
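
A minimal sketch of building a protocol interference-based contention graph: two links conflict, and hence cannot be scheduled together, when their endpoints are too close under a distance rule. The adjacency rule and threshold below are placeholders, not the paper's relative-distance criterion.

```python
import itertools
import math

# Node positions and the links (node pairs) to be scheduled; values invented.
nodes = {0: (0, 0), 1: (10, 0), 2: (20, 5), 3: (30, 5), 4: (5, 20), 5: (15, 20)}
links = [(0, 1), (2, 3), (4, 5)]
THRESH = 12.0  # placeholder interference range

def dist(a, b):
    return math.hypot(nodes[a][0] - nodes[b][0], nodes[a][1] - nodes[b][1])

# Edge in the contention graph <=> some endpoint pair is within THRESH.
conflicts = [
    (l1, l2)
    for l1, l2 in itertools.combinations(links, 2)
    if any(dist(u, v) < THRESH for u in l1 for v in l2)
]
print(conflicts)  # conflicting link pairs must occupy different slots
```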


Decisions from experience and statistical probabilities: Why they trigger different choices than a priori probabilities

JOURNAL OF BEHAVIORAL DECISION MAKING, Issue 1 2010
Robin Hau
Abstract The distinction between risk and uncertainty is deeply entrenched in psychologists' and economists' thinking. However, Knight (1921), to whom the distinction is frequently attributed, went beyond this dichotomy. Within the domain of risk, he set apart a priori and statistical probabilities, a distinction that maps onto that between decisions from description and decisions from experience, respectively. We argue that this distinction is important because risky choices based on a priori (described) and statistical (experienced) probabilities can substantially diverge. To understand why, we examine various possible contributing factors to the description–experience gap. We find that payoff variability and memory limitations play only a small role in the emergence of the gap. In contrast, the presence of rare events and their representation as either natural frequencies in decisions from experience or single-event probabilities in decisions from description appear relevant for the gap. Copyright © 2009 John Wiley & Sons, Ltd. [source]
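
The role of rare events can be shown in a few lines of Python: with small samples, a rare outcome is often never experienced at all, so experienced frequencies systematically understate described probabilities. The probability and sample size below are illustrative values, not the study's.

```python
import numpy as np

p_rare, n_draws = 0.1, 7  # illustrative: rare-event probability, sample size
p_never_seen = (1 - p_rare) ** n_draws
print(f"P(rare event never sampled in {n_draws} draws) = {p_never_seen:.2f}")

# Monte Carlo check: experienced frequency is right on average, but the rare
# event is missing entirely from nearly half of all small samples.
rng = np.random.default_rng(3)
samples = rng.random((100_000, n_draws)) < p_rare
print("mean experienced frequency:", samples.mean())
print("share of samples with no rare event:", (samples.sum(axis=1) == 0).mean())
```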


Evaluating effectiveness of preoperative testing procedure: some notes on modelling strategies in multi-centre surveys

JOURNAL OF EVALUATION IN CLINICAL PRACTICE, Issue 1 2008
Dario Gregori PhD
Abstract Rationale: In technology assessment in health-related fields, the construction of a model for interpreting the economic implications of the introduction of a technology is only a part of the problem. The most important part is often the formulation of a model that can be used for selecting patients to submit to the new cost-saving procedure or medical strategy. The model is usually complicated by the fact that data are often non-homogeneous with respect to some uncontrolled variables and are correlated. The most typical example is the so-called hospital effect in multi-centre studies. Aims and objectives: We show the implications of different choices in modelling strategy when evaluating the usefulness of preoperative chest radiography, an exam performed before surgery, usually with the aim of detecting unsuspected abnormalities that could influence the anaesthetic management and/or surgical plan. Method: We analyze data from a multi-centre study including more than 7000 patients. We use about 6000 patients to fit regression models using both a population-averaged and a subject-specific approach. We explore the limitations of these models when used for predictive purposes on a validation set of more than 1000 patients. Results: We show the importance of taking into account the heterogeneity among observations and the correlation structure of the data, and propose an approach for integrating the population-averaged and subject-specific approaches into a single modelling strategy. We find that the hospital represents an important source of heterogeneity that influences the probability of a useful preoperative chest radiograph (POCR). Conclusions: We find it preferable to start with a marginal model, evaluate the shrinkage effect and eventually move to a more detailed model for the heterogeneity. This kind of flexible approach seems to be more informative at the various phases of the model-building strategy. [source]
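
A hedged sketch of the two modelling strategies the authors contrast, fit on simulated clustered data with invented variable names (a continuous "usefulness score" stands in for the study's outcome): a population-averaged GEE with exchangeable working correlation versus a subject-specific random-intercept model, both grouped by hospital.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated multi-centre data with a hospital effect; names are illustrative.
rng = np.random.default_rng(4)
hospitals = np.repeat(np.arange(20), 50)
hosp_effect = rng.normal(0, 0.8, 20)[hospitals]  # heterogeneity by centre
age = rng.normal(60, 10, hospitals.size)
score = 0.03 * age + hosp_effect + rng.normal(0, 1, hospitals.size)
df = pd.DataFrame({"score": score, "age": age, "hospital": hospitals})

# Population-averaged (marginal) model: GEE with exchangeable correlation.
gee = smf.gee("score ~ age", "hospital", data=df,
              cov_struct=sm.cov_struct.Exchangeable()).fit()

# Subject-specific (conditional) model: random intercept per hospital.
mixed = smf.mixedlm("score ~ age", df, groups=df["hospital"]).fit()

print(gee.params["age"], mixed.params["age"])  # similar here; can diverge
```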


Translational Parallel Manipulators: New Proposals

JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 12 2002
Raffaele Di Gregorio
Translational parallel manipulators are parallel manipulators wherein the end-effector performs only spatial translations. This paper presents a new family of translational parallel manipulators. The manipulators of this family are independent constraint manipulators. They have three limbs that are topologically identical and have no rotation singularity. The limbs of these manipulators feature five one-degree-of-freedom kinematic pairs in series. Four joints are revolute pairs and the remaining one, called a T-pair, is a kinematic pair that can be manufactured in different ways. In each limb, three adjacent revolute pairs have parallel axes and the remaining revolute pair has an axis that is not parallel to the axes of the other revolute pairs. The mobility analysis of the manipulators of this new family is addressed by taking into account two different choices for the actuated pairs. One of the results of this analysis is that the geometry of a translational parallel manipulator free from singularities can be defined for a particular choice of the actuated pairs. © 2002 Wiley Periodicals, Inc. [source]


Cold dark matter microhalo survival in the Milky Way

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 4 2007
G. W. Angus
ABSTRACT A special-purpose N-body simulation has been built to understand the tidal heating of the smallest dark matter substructures (10−6 M⊙ and 0.01 pc) by the grainy potential of the Milky Way due to individual stars in the disc and the bulge. To test the method, we first run simulations of single encounters of microhaloes with an isolated star, and compare with analytical predictions of the bound fraction of dark particles as a function of impact parameter. We then follow the orbits of a set of microhaloes in a realistic flattened Milky Way potential. We concentrate on (detectable) microhaloes passing near the Sun with a range of pericentres and apocentres. Stellar perturbers near the orbital path of a microhalo would exert stochastic impulses, which we apply in a Monte Carlo fashion according to the Besançon model for the distribution of stars of different masses and ages in our Galaxy. Also incorporated are the usual pericentre tidal heating and disc shocking. We give a detailed diagnosis of typical microhaloes and find that microhaloes with internal tangential anisotropy are slightly more robust than ones with radial anisotropy. In addition, the dark particles generally undergo a random walk in velocity space and diffuse out of the microhaloes. We show that the typical destruction time-scales are strongly correlated with the stellar density averaged along a microhalo's orbit over the age of the stellar disc. We also present the morphology of a microhalo at several epochs, which may hold the key to dark matter detections. We checked our results against different choices of microhalo mass, virial radius and anisotropy. [source]
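
The stochastic impulses driving this random walk can be quantified with the standard impulse approximation, Δv = 2GM/(bv), for a star of mass M passing at impact parameter b with relative speed v. The sketch below uses illustrative numbers and is not the paper's full Monte Carlo scheme.

```python
import math

# Impulse-approximation velocity kick from a single stellar encounter:
#     dv = 2 G M / (b v)
# Repeated uncorrelated kicks drive a random walk in velocity space.
G = 4.301e-3   # gravitational constant in pc (km/s)^2 / Msun
M_star = 1.0   # perturber mass in Msun (illustrative)
v_rel = 200.0  # relative speed in km/s (illustrative)

for b in (0.01, 0.05, 0.1):  # impact parameters in pc, ~ the microhalo size
    dv = 2 * G * M_star / (b * v_rel)
    print(f"b = {b:5.2f} pc  ->  dv = {dv:.2e} km/s")
```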


Poroelastodynamic Boundary Element Method in Time Domain: Numerical Aspects

PROCEEDINGS IN APPLIED MATHEMATICS & MECHANICS, Issue 1 2005
Martin Schanz
Based on Biot's theory, the governing equations for a poroelastic continuum are given as a coupled set of partial differential equations (PDEs) for the unknown solid displacements and pore pressure. Using the Convolution Quadrature Method (CQM) proposed by Lubich, a boundary time-stepping procedure is established based only on the fundamental solutions in the Laplace domain. To improve the numerical behavior of the CQM-based Boundary Element Method (BEM), dimensionless variables are introduced and different choices are studied. This is performed as a numerical study using the example of a poroelastic column. Summarizing the results, normalization of the time and spatial variables as well as of Young's modulus yields the best numerical behavior. (© 2005 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


Parallel imports and price controls

THE RAND JOURNAL OF ECONOMICS, Issue 2 2008
Gene M. Grossman
Price controls create opportunities for international arbitrage. Many have argued that such arbitrage, if tolerated, will undermine intellectual property rights and dull the incentives for investment in research-intensive industries such as pharmaceuticals. We challenge this orthodox view and show, to the contrary, that the pace of innovation often is faster in a world with international exhaustion of intellectual property rights than in one with national exhaustion. The key to our conclusion is to recognize that governments will make different choices of price controls when parallel imports are allowed by their trade partners than they will when they are not. [source]