Parameter Space (parameter + space)


Selected Abstracts


Human motion reconstruction from monocular images using genetic algorithms

COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2004
Jianhui Zhao
Abstract This paper proposes an optimization approach for human motion recovery from uncalibrated monocular images containing unconstrained human movements. A 3D skeleton human model based on anatomical knowledge is employed, with encoded biomechanical constraints for the joints. An energy function is defined to represent the deviations between projection features and extracted image features. A reconstruction procedure is developed to adjust the joints and segments of the human body into their proper positions. Genetic algorithms are adopted to find the optimal solution efficiently in the high-dimensional parameter space by simultaneously considering all the parameters of the human model. The experimental results are analysed using a deviation penalty. Copyright © 2004 John Wiley & Sons, Ltd. [source]
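
The optimization step described above amounts to minimizing an energy function over a bounded, high-dimensional vector of joint parameters. The following sketch is only a generic real-coded genetic algorithm applied to a toy quadratic energy; the energy function, the bounds and the GA settings are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(pose, target):
    """Toy stand-in for an energy function: squared deviation between
    'projected' model features and 'extracted' image features."""
    return np.sum((pose - target) ** 2)

def genetic_minimize(f, dim, bounds, pop_size=60, generations=200, mutation_scale=0.1):
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    for _ in range(generations):
        fitness = np.array([f(ind) for ind in pop])
        # Tournament selection: keep the better of two randomly chosen individuals.
        i, j = rng.integers(pop_size, size=(2, pop_size))
        parents = np.where((fitness[i] < fitness[j])[:, None], pop[i], pop[j])
        # Uniform crossover between consecutive parents.
        mask = rng.random((pop_size, dim)) < 0.5
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        # Gaussian mutation, clipped to the (biomechanical-style) bounds.
        children += rng.normal(0.0, mutation_scale, children.shape)
        children = np.clip(children, lo, hi)
        # Elitism: carry over the best individual of the current population.
        children[0] = pop[np.argmin(fitness)]
        pop = children
    fitness = np.array([f(ind) for ind in pop])
    return pop[np.argmin(fitness)], fitness.min()

# Hypothetical 20-dimensional "pose" (joint parameters) with joint-limit-like bounds.
target = rng.uniform(-1.0, 1.0, 20)
best, e = genetic_minimize(lambda p: energy(p, target), dim=20, bounds=(-2.0, 2.0))
print(f"best energy: {e:.4f}")
```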


Tangential-projection algorithm for manifold representation in unidentifiable model updating problems

EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 4 2002
Lambros S. Katafygiotis
Abstract The problem of updating a structural model and its associated uncertainties by utilizing structural response data is addressed. In an identifiable case, the posterior probability density function (PDF) of the uncertain model parameters for given measured data can be approximated by a weighted sum of Gaussian distributions centered at a number of discrete optimal values of the parameters at which some positive measure-of-fit function is minimized. The present paper focuses on the problem of model updating in the general unidentifiable case, for which certain simplifying assumptions available for identifiable cases are not valid. In this case, the PDF is distributed in the neighbourhood of an extended and usually highly complex manifold of the parameter space that cannot be calculated explicitly. The computational difficulties associated with calculating the highly complex posterior PDF are discussed, and a new adaptive algorithm, referred to as the tangential-projection (TP) algorithm, is presented that allows for an efficient approximate representation of this manifold and of the posterior PDF. Using this approximation, expressions for calculating the uncertain predictive response are established. A numerical example involving noisy data is presented to demonstrate the proposed method. Copyright © 2002 John Wiley & Sons, Ltd. [source]


Heteromyopia and the spatial coexistence of similar competitors

ECOLOGY LETTERS, Issue 1 2003
David J. Murrell
Abstract Most spatial models of competing species assume symmetries in the spatial scales of dispersal and interactions. This makes analysis tractable, and has led to the conclusion that segregation of species in space does not promote coexistence. However, these symmetries leave parts of the parameter space uninvestigated. Using a moment-approximation method, we present a spatial version of the Lotka–Volterra competition equations to investigate effects of removing symmetries in the distances over which individuals disperse and interact. Some spatial segregation of the species always comes about due to competition, and such segregation does not necessarily lead to coexistence. But, if interspecific competition occurs over shorter distances than intraspecific competition (heteromyopia), spatial segregation becomes strong enough to promote coexistence. Such coexistence is most likely when the species have similar dynamics, in contrast to the competition–colonization trade-off that requires large competitive differences between species. [source]


Reliability Analysis of Technical Systems/Structures by means of Polyhedral Approximation of the Safe/Unsafe Domain

GAMM - MITTEILUNGEN, Issue 2 2007
K. Marti
Abstract Reliability analysis of technical structures and systems is based on an appropriate (limit) state function separating the safe and unsafe states in the space of random parameters. Starting with the survival conditions, hence, the state equation and the condition for the admissibility of states, an optimizational representation of the state function can be given in terms of the minimum function of a certain minimization problem. Selecting a certain number of boundary points of the safe/unsafe domain, hence, on the limit state surface, the safe/unsafe domain is approximated by a convex polyhedron defined by the intersection of the half spaces in the parameter space generated by the tangent hyperplanes to the safe/unsafe domain at the selected boundary points on the limit state surface. The resulting approximative probability functions are then defined by means of probabilistic linear constraints in the parameter space, where, after an appropriate transformation, the probability distribution of the parameter vector can be assumed to be normal with zero mean vector and unit covariance matrix. Working with separate linear constraints, approximation formulas for the probability of survival of the structure are obtained immediately. More exact approximations are obtained by considering joint probability constraints, which, in a second approximation step, can be evaluated by using probability inequalities and/or discretization of the underlying probability distribution. (© 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
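
In standard normal space each tangent hyperplane yields an elementary probability: for a half-space {u : aᵀu ≤ b}, the probability of violating the constraint is Φ(−b/‖a‖). The sketch below only illustrates this separate-constraint step and a crude union bound for the polyhedral approximation; the hyperplanes are invented, and the joint-constraint refinement described above is not shown.

```python
import numpy as np
from scipy.stats import norm

def halfspace_failure_probs(normals, offsets):
    """For half-spaces {u : a_k^T u <= b_k} approximating the safe domain in
    standard normal space, each single-constraint failure probability is
    P(a_k^T u > b_k) = Phi(-beta_k), with beta_k = b_k / ||a_k||."""
    betas = offsets / np.linalg.norm(normals, axis=1)
    return norm.cdf(-betas)

# Hypothetical tangent hyperplanes at three selected boundary points.
A = np.array([[1.0, 0.5], [0.2, 1.0], [-0.8, 0.9]])
b = np.array([3.0, 2.5, 3.2])

p_single = halfspace_failure_probs(A, b)
p_lower = p_single.max()             # at least the worst single constraint
p_upper = min(1.0, p_single.sum())   # Boole's (union) bound
print(p_single, p_lower, p_upper)
```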


Restricted parameter space models for testing gene-gene interaction

GENETIC EPIDEMIOLOGY, Issue 5 2009
Minsun Song
Abstract There is a growing recognition that interactions (gene-gene and gene-environment) can play an important role in common disease etiology. The development of cost-effective genotyping technologies has made genome-wide association studies the preferred tool for searching for loci affecting disease risk. These studies are characterized by a large number of investigated SNPs, and efficient statistical methods are even more important than in classical association studies that are done with a small number of markers. In this article we propose a novel gene-gene interaction test that is more powerful than classical methods. The increase in power is due to the fact that the proposed method incorporates reasonable constraints in the parameter space. The test for both association and interaction is based on a likelihood ratio statistic that has a χ2 distribution asymptotically. We also discuss the definitions used for "no interaction" and argue that tests for pure interaction are useful in genome-wide studies, especially when using two-stage strategies where the analyses in the second stage are done on pairs of loci for which at least one is associated with the trait. Genet. Epidemiol. 33:386–393, 2009. © 2008 Wiley-Liss, Inc. [source]
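
The generic computational step behind such a test is the likelihood ratio statistic referred to a χ2 distribution. The helper below shows only that textbook step with made-up log-likelihoods; it ignores the boundary effects that arise when parameters are constrained, and it is not the restricted-parameter-space test proposed in the paper.

```python
from scipy.stats import chi2

def likelihood_ratio_pvalue(loglik_full, loglik_reduced, df):
    """Generic LRT: compare a model with interaction terms (full) against a
    model without them (reduced); the statistic is asymptotically chi-square
    with df equal to the number of constrained parameters (boundary issues ignored)."""
    stat = 2.0 * (loglik_full - loglik_reduced)
    return stat, chi2.sf(stat, df)

# Hypothetical fitted log-likelihoods for a pair of SNPs.
stat, p = likelihood_ratio_pvalue(loglik_full=-1204.3, loglik_reduced=-1209.8, df=4)
print(f"LRT statistic = {stat:.2f}, p = {p:.4g}")
```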


Addressing non-uniqueness in linearized multichannel surface wave inversion

GEOPHYSICAL PROSPECTING, Issue 1 2009
Michele Cercato
ABSTRACT The multichannel analysis of the surface waves method is based on the inversion of observed Rayleigh-wave phase-velocity dispersion curves to estimate the shear-wave velocity profile of the site under investigation. This inverse problem is nonlinear and it is often solved using 'local' or linearized inversion strategies. Among linearized inversion algorithms, least-squares methods are widely used in research and prevailing in commercial software; the main drawback of this class of methods is their limited capability to explore the model parameter space. The possibility for the estimated solution to be trapped in local minima of the objective function strongly depends on the degree of nonuniqueness of the problem, which can be reduced by an adequate model parameterization and/or imposing constraints on the solution. In this article, a linearized algorithm based on inequality constraints is introduced for the inversion of observed dispersion curves; this provides a flexible way to insert a priori information as well as physical constraints into the inversion process. As linearized inversion methods are strongly dependent on the choice of the initial model and on the accuracy of partial derivative calculations, these factors are carefully reviewed. Attention is also focused on the appraisal of the inverted solution, using resolution analysis and uncertainty estimation together with a posteriori effective-velocity modelling. Efficiency and stability of the proposed approach are demonstrated using both synthetic and real data; in the latter case, cross-hole S-wave velocity measurements are blind-compared with the results of the inversion process. [source]
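
One simple way to impose inequality constraints on a linearized update is bound-constrained least squares on the model perturbation. The sketch below illustrates that generic idea with an invented Jacobian and residual vector; it is not the authors' algorithm or parameterization.

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(1)

# Hypothetical linearized system J * dm = r, where J stands in for the Jacobian
# of the dispersion curve with respect to layer shear velocities and r for the
# data residual at the current model.
J = rng.normal(size=(30, 5))
r = rng.normal(size=30)

# A priori inequality constraints on the velocity updates, e.g. keep each layer
# update within +/- 100 m/s and force the half-space update to be non-negative.
lower = np.array([-100.0, -100.0, -100.0, -100.0, 0.0])
upper = np.array([100.0, 100.0, 100.0, 100.0, 300.0])

res = lsq_linear(J, r, bounds=(lower, upper))
print("constrained update:", res.x)
```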


Reduced-order modeling of parameterized PDEs using time–space-parameter principal component analysis

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 8 2009
C. Audouze
Abstract This paper presents a methodology for constructing low-order surrogate models of finite element/finite volume discrete solutions of parameterized steady-state partial differential equations. The construction of proper orthogonal decomposition modes in both physical space and parameter space allows us to represent high-dimensional discrete solutions using only a few coefficients. An incremental greedy approach is developed for efficiently tackling problems with high-dimensional parameter spaces. For numerical experiments and validation, several non-linear steady-state convection–diffusion–reaction problems are considered: first in one spatial dimension with two parameters, and then in two spatial dimensions with two and five parameters. In the two-dimensional spatial case with two parameters, it is shown that a 7 × 7 coefficient matrix is sufficient to accurately reproduce the expected solution, while in the five-parameter problem a 13 × 6 coefficient matrix is shown to reproduce the solution with sufficient accuracy. The proposed methodology is expected to find applications to parameter variation studies, uncertainty analysis, inverse problems and optimal design. Copyright © 2009 John Wiley & Sons, Ltd. [source]
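
The backbone of such surrogates is a truncated proper orthogonal decomposition of a snapshot matrix. The sketch below shows only that generic step (SVD, truncation and low-dimensional coefficients) on synthetic one-dimensional snapshots; the greedy sampling and the parameter-space modes of the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical snapshots: columns are discrete solutions at different parameter
# values (here synthetic smooth fields plus small noise).
x = np.linspace(0.0, 1.0, 200)
params = np.linspace(0.5, 2.0, 40)
snapshots = np.column_stack([np.sin(np.pi * p * x) * np.exp(-p * x) for p in params])
snapshots += 1e-3 * rng.normal(size=snapshots.shape)

# Proper orthogonal decomposition = SVD of the snapshot matrix.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 7                              # number of retained modes
basis = U[:, :r]                   # spatial POD modes
coeffs = basis.T @ snapshots       # r coefficients per snapshot

reconstruction = basis @ coeffs
rel_error = np.linalg.norm(snapshots - reconstruction) / np.linalg.norm(snapshots)
print(f"relative reconstruction error with {r} modes: {rel_error:.2e}")
```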


Directional leakage and parameter drift

INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 1 2006
M. Hovd
Abstract A new method for eliminating parameter drift in parameter estimation problems is proposed. Existing methods for eliminating parameter drift either work on a limited time horizon, restrict the parameter estimates to a range that has to be determined a priori, or introduce bias in the parameter estimates, which will degrade steady-state performance. The idea of the new method is to apply leakage only in the directions in parameter space in which the exciting signal is not informative. This avoids the problem of parameter bias associated with conventional leakage. Copyright © 2005 John Wiley & Sons, Ltd. [source]
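
The idea can be pictured with a gradient-type estimator in which the leakage term is projected onto the poorly excited directions of parameter space, identified from a running average of the regressor outer product. The following sketch is a schematic of that idea with invented signals and tuning constants; it is not the algorithm of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

theta_true = np.array([2.0, -1.0])
theta = np.zeros(2)
theta0 = np.zeros(2)          # nominal value the leakage pulls towards
R = np.zeros((2, 2))          # running average of phi phi^T (excitation level)
gamma, leak, forget = 0.05, 0.02, 0.99

for t in range(5000):
    # Poorly exciting regressor: the two components are nearly proportional,
    # so one direction of parameter space carries almost no information.
    u = np.sin(0.01 * t)
    phi = np.array([u, 0.5 * u + 1e-3 * rng.normal()])
    y = phi @ theta_true + 0.01 * rng.normal()
    e = y - phi @ theta

    R = forget * R + (1 - forget) * np.outer(phi, phi)
    # Project leakage onto eigen-directions of R with small eigenvalues.
    w, V = np.linalg.eigh(R)
    weak = V[:, w < 1e-3 * max(w.max(), 1e-12)]
    P_weak = weak @ weak.T if weak.size else np.zeros((2, 2))

    # Gradient update plus leakage restricted to the weakly excited directions.
    theta += gamma * phi * e - leak * P_weak @ (theta - theta0)

print("estimate:", theta)
```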


Comparison of global and local sensitivity techniques for rate constants determined using complex reaction mechanisms

INTERNATIONAL JOURNAL OF CHEMICAL KINETICS, Issue 12 2001
James J. Scire Jr.
Many rate constant measurements, including some "direct" measurements, involve fitting a complex reaction mechanism to experimental data. Two techniques for estimating the error in such measurements were compared. In the first technique, local first-order elementary sensitivities were used to rapidly estimate the sensitivity of the fitted rate constants to the remaining mechanism parameters. Our group and others have used this technique for error estimation and experimental design. However, the nonlinearity and strong coupling found in reaction mechanisms make verification against globally valid results desirable. Here, the local results were compared with analogous importance-sampled Monte Carlo calculations in which the parameter values were distributed according to their uncertainties. Two of our published rate measurements were examined. The local uncertainty estimates were compared with Monte Carlo confidence intervals. The local sensitivity coefficients were compared with coefficients from first- and second-degree polynomial regressions over the whole parameter space. The first-order uncertainty estimates were found to be sufficiently accurate for experimental design, but were subject to error in the presence of higher order sensitivities. In addition, global uncertainty estimates were found to narrow when the quality of the fit was used to weight the randomly distributed points. For final results, the global technique was found to provide efficient, accurate values without the assumptions inherent in the local analysis. The rigorous error estimates derived in this way were used to address literature criticism of one of the studies discussed here. Given its efficiency and the variety of problems it can detect, the global technique could also be used to check local results during the experimental design phase. The global routine, coded using SENKIN, can easily be extended to different types of data, and therefore can serve as a valuable tool for assessing error in rate constants determined using complex mechanisms. © 2001 John Wiley & Sons, Inc. Int J Chem Kinet 33: 784–802, 2001 [source]
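
The contrast between the two techniques can be made concrete with a toy fit in which the target rate constant k is extracted from data that also depend on an uncertain secondary parameter k2: the local estimate propagates a finite-difference sensitivity linearly, while the global estimate refits k with k2 sampled from its uncertainty distribution. Everything in the sketch (the model, the uncertainty magnitudes, the distributions) is invented for illustration and is unrelated to the mechanisms studied in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)

t = np.linspace(0.0, 2.0, 50)
k_true, k2_true = 1.5, 0.8

def model(t, k, k2):
    # Toy "mechanism": the observed decay depends on the target rate k and on a
    # secondary rate k2 that is known only within some uncertainty.
    return np.exp(-(k + 0.5 * k2) * t)

data = model(t, k_true, k2_true) + 0.01 * rng.normal(size=t.size)

def fit_k(k2_assumed):
    popt, _ = curve_fit(lambda tt, k: model(tt, k, k2_assumed), t, data, p0=[1.0])
    return popt[0]

# Local (first-order) estimate: finite-difference sensitivity of fitted k to k2.
dk2 = 0.01
sens = (fit_k(k2_true + dk2) - fit_k(k2_true - dk2)) / (2 * dk2)
sigma_k2 = 0.2
local_sigma_k = abs(sens) * sigma_k2

# Global estimate: refit k with k2 sampled from its (lognormal) uncertainty.
k2_samples = k2_true * np.exp(rng.normal(0.0, sigma_k2 / k2_true, 500))
k_samples = np.array([fit_k(k2) for k2 in k2_samples])

print(f"local  sigma_k ~ {local_sigma_k:.3f}")
print(f"global sigma_k ~ {k_samples.std():.3f}")
```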


Dynamic pricing based on asymmetric multiagent reinforcement learning

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 1 2006
Ville Könönen
A dynamic pricing problem is solved by using asymmetric multiagent reinforcement learning in this article. In the problem, there are two competing brokers that sell identical products to customers and compete on the basis of price. We model this dynamic pricing problem as a Markov game and solve it by using two different learning methods. The first method utilizes modified gradient descent in the parameter space of the value function approximator and the second method uses a direct gradient of the parameterized policy function. We present a brief literature survey of pricing models based on multiagent reinforcement learning, introduce the basic concepts of Markov games, and solve the problem by using the proposed methods. © 2006 Wiley Periodicals, Inc. Int J Int Syst 21: 73–98, 2006. [source]


Robust MIMO disturbance observer analysis and design with application to active car steering

INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 8 2010
Bilin Aksun Güvenç
Abstract A multi-input–multi-output extension of the well-known two control degrees-of-freedom disturbance observer architecture that decouples the problem into single-input–single-output disturbance observer loops is presented in this paper. Robust design based on mapping D-stability and the frequency domain specifications of weighted sensitivity minimization and phase margin bound to a chosen controller parameter space is presented as a part of the proposed design approach. The effect of the choice of disturbance observer Q filter on performance is explained with a numerical example. This is followed by the use of structured singular values in the robustness analysis of disturbance observer controlled systems subject to structured, real parametric and mixed uncertainty in the plant. A design and simulation study based on a four wheel active car steering control example is used to illustrate the methods presented in the paper. Copyright © 2009 John Wiley & Sons, Ltd. [source]


A stationary-wave model of enzyme catalysis

JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 2 2010
Carlo Canepa
Abstract An expression for the external force driving a system of two coupled oscillators in the condensed phase was derived in the frame of the Debye theory of solids. The time dependence and amplitude of the force is determined by the size of the cell embedding the coupled oscillators and its Debye temperature (ΘD). The dynamics of the driven system of oscillators were followed in the two regimes of (a) low ΘD and cell diameter, as a model of liquid water, and (b) large ΘD and cell diameter, as a model of the core of a protein. The response in potential energy of the reference oscillator was computed for all possible values of the internal parameters of the system under investigation. For protein cores, the region in the parameter space of high maximum potential energy of the reference oscillator is considerably extended with respect to the corresponding simulation for water. © 2009 Wiley Periodicals, Inc. J Comput Chem, 2010 [source]
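
A driven pair of coupled oscillators of this general type can be integrated directly, as in the sketch below; the masses, spring constants and the simple periodic driving force are arbitrary illustrative choices, not the Debye-derived force or the parameter values of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two coupled harmonic oscillators, the first one externally driven.
m1, m2 = 1.0, 1.0            # masses (arbitrary units)
k1, k2, kc = 1.0, 1.2, 0.3   # spring constants and coupling
F0, omega_d = 0.2, 1.1       # driving amplitude and frequency (illustrative)

def rhs(t, y):
    x1, v1, x2, v2 = y
    F = F0 * np.cos(omega_d * t)              # stand-in external force
    a1 = (-k1 * x1 - kc * (x1 - x2) + F) / m1
    a2 = (-k2 * x2 - kc * (x2 - x1)) / m2
    return [v1, a1, v2, a2]

sol = solve_ivp(rhs, (0.0, 200.0), [0.0, 0.0, 0.0, 0.0], max_step=0.05)

# Potential energy of the second ("reference") oscillator along the trajectory.
pe_ref = 0.5 * k2 * sol.y[2] ** 2
print(f"maximum potential energy of reference oscillator: {pe_ref.max():.4f}")
```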


The treatment of solvation by a generalized Born model and a self-consistent charge-density functional theory-based tight-binding method

JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 15 2002
Li Xie
Abstract We present a model to calculate the free energies of solvation of small organic compounds as well as large biomolecules. This model is based on a generalized Born (GB) model and a self-consistent charge-density functional theory-based tight-binding (SCC-DFTB) method with the nonelectrostatic contributions to the free energy of solvation modeled in terms of solvent-accessible surface areas (SA). The parametrization of the SCC-DFTB/GBSA model has been based on 60 neutral and six ionic molecules composed of H, C, N, O, and S, and spanning a wide range of chemical groups. Effective atomic radii as parameters have been obtained through Monte Carlo Simulated Annealing optimization in the parameter space to minimize the differences between the calculated and experimental free energies of solvation. The standard error in the free energies of solvation calculated by the final model is 1.11 kcal mol−1. We also calculated the free energies of solvation for these molecules using a conductor-like screening model (COSMO) in combination with different levels of theory (AM1, SCC-DFTB, and B3LYP/6-31G*) and compared the results with SCC-DFTB/GBSA. To assess the efficiency of our model for large biomolecules, we calculated the free energy of solvation for a HIV protease-inhibitor complex containing 3204 atoms using the SCC-DFTB/GBSA and the SCC-DFTB/COSMO models, separately. The computed relative free energies of solvation are comparable, while the SCC-DFTB/GBSA model is three to four times more efficient, in terms of computational cost. © 2002 Wiley Periodicals, Inc. J Comput Chem 23: 1404–1415, 2002 [source]
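
The electrostatic part of a GB/SA free energy is commonly written in the Still-type pairwise form ΔGpol = −½ (1/εin − 1/εout) Σij qiqj/fGB(rij), with fGB = [rij² + ai aj exp(−rij²/4 ai aj)]½. The sketch below implements that generic expression with made-up charges and Born radii; it says nothing about the SCC-DFTB coupling, the surface-area term or the fitted radii of this work.

```python
import numpy as np

COULOMB = 332.06  # approximate conversion factor, kcal mol^-1 Angstrom e^-2

def gb_polarization_energy(coords, charges, born_radii, eps_in=1.0, eps_out=78.5):
    """Still-type generalized Born polarization energy: full double sum,
    including the i == j self terms through f_GB(0) = a_i."""
    pref = -0.5 * COULOMB * (1.0 / eps_in - 1.0 / eps_out)
    n = len(charges)
    energy = 0.0
    for i in range(n):
        for j in range(n):
            r2 = np.sum((coords[i] - coords[j]) ** 2)
            aij = born_radii[i] * born_radii[j]
            f_gb = np.sqrt(r2 + aij * np.exp(-r2 / (4.0 * aij)))
            energy += pref * charges[i] * charges[j] / f_gb
    return energy

# Hypothetical three-atom solute with illustrative charges and Born radii.
coords = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.2, 0.0]])
charges = np.array([-0.4, 0.2, 0.2])
radii = np.array([1.5, 1.2, 1.2])
print(f"GB polarization energy: {gb_polarization_energy(coords, charges, radii):.2f} kcal/mol")
```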


Transitions in the evolution of meiosis

JOURNAL OF EVOLUTIONARY BIOLOGY, Issue 3 2000
Hurst
Meiosis may have evolved gradually within the eukaryotes with the earliest forms having a one-step meiosis. It has been speculated that the putative transition from a one-step meiosis without recombination to one with recombination may have been stimulated by the invasion of Killer alleles. These imaginary selfish elements are considered to act prior to recombination. They prime for destruction (which occurs after cell division) the half of the cell on the opposite side of the meiotic spindle. Likewise the transition from one-step to two-step meiosis might have been stimulated by a subtly different sort of imaginary distorter allele, a SisterKiller. These are proposed to act after recombination. It has yet to be established that the presence of such distorter alleles could induce the transitions in question. To investigate these issues we have analysed the dynamics of a modifier (1) of recombination and (2) of the number of steps of meiosis, as they enter a population with one-step meiosis. For the modifier of recombination, we find that invasion conditions are very broad and that persistence of Killer and modifier is likely through most parameter space, even when the recombination rate is low. However, if we allow a Killer element to mutate into one that is self-tolerant, the modifier and the nonself-tolerant alleles are typically both lost from the population. The modifier of the number of steps can invade if the SisterKiller acts at meiosis II. However, a SisterKiller acting at meiosis I, far from promoting the modifier's spread, actually impedes it. In the former case the invasion is easiest if there is no recombination. The SisterKiller hypothesis therefore fails to provide a reasonable account of the evolution of two-step meiosis with recombination. As before, the evolution of self-tolerance on the part of the selfish element destroys the process. We conclude that the conditions under which SisterKillers promote the evolution of two-step meiosis are very much more limited than originally considered. We also conclude that there is no universal agreement between ESS and modifier analyses of the same transitions. [source]


Gain Scheduled LPV H∞ Control Based on LMI Approach for a Robotic Manipulator

JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 12 2002
Zhongwei Yu
A new approach to the design of a gain scheduled linear parameter-varying (LPV) H∞ controller, which places the closed-loop poles in the region that satisfies the specified dynamic response, for an n-joint rigid robotic manipulator, is presented. The nonlinear time-varying robotic manipulator is modeled as an LPV system with a convex polytopic structure with the use of the LPV convex decomposition technique in a filter introduced. State feedback controllers, which satisfy the H∞ performance and the closed-loop pole-placement requirements, for each vertex of the convex polyhedron parameter space, are designed with the use of the linear matrix inequality (LMI) approach. Based on these designed feedback controllers for each vertex, an LPV controller with a smaller on-line computation load and a convex polytopic structure is synthesized. Simulation and experiment results verify that the robotic manipulator with the LPV controller always has a good dynamic performance along with the variations of the joint positions. © 2002 Wiley Periodicals, Inc. [source]


Linear stability analysis of two-layer rectilinear flow in slot coating

AICHE JOURNAL, Issue 10 2010
Jaewook Nam
Abstract Two-layer coating occurs in many products. Ideally, the liquids are deposited onto the substrate simultaneously. In the case of two-layer slot coating, the interlayer between the coating liquids is subjected to enormous shearing. This may lead to flow instabilities that ruin the product. It is important to map the regions of the parameter space at which the flow is unstable. Most of the stability analyses of two-layer rectilinear flow consider the position of the interlayer as an independent parameter. Classical results cannot be applied directly in coating flows. We present a linear stability analysis of two-layer rectilinear flow considering the flow rates as an independent parameter. The predicted neutral-stability curves define the region of stable flow as a function of the operating parameters. The range of coating operating conditions is restricted further when the condition for the desirable interlayer separation point location is considered together with the stability condition. © 2010 American Institute of Chemical Engineers AIChE J, 2010 [source]


The influence of experimental and model uncertainties on EXAFS results

JOURNAL OF SYNCHROTRON RADIATION, Issue 2 2001
Hermann Rossner
We analyze EXAFS oscillations in k-space with the FEFF code to obtain main-shell distances Rν and mean-square displacement parameters σi2 for all single and multiple scattering paths i in the shells ν up to a maximum shell radius Rmax. To quantify the uncertainty in the determination of these model parameters we take into account experimental errors and uncertainties connected with background subtraction, with the approximate handling of the electronic many-body problem in FEFF, and with the truncation of the multiple scattering series. The impact of these uncertainties on the Rν and σi2 is investigated in the framework of Bayesian methods. We introduce an a priori guess of these model parameters and consider two alternative strategies to control the weight of the a priori input relative to that of the experimental data. We can take a model parameter space of up to 250 dimensions. Optionally we can also fit the coordination numbers Nj (j ∈ ν) and the skewness of the distribution of the Rν besides the Rν and σi2. The method is applied to 10 K Cu K-edge and 300 K Au L3-edge data to obtain model parameters and their a posteriori error correlation matrices. [source]
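
The balance between the a priori parameter guess and the experimental data can be pictured as a Gaussian-prior (ridge-type) least-squares problem in which a single prior width controls how strongly the solution is pulled towards the guess. The sketch below shows only that generic mechanism on an invented linear(ized) problem; it is not the FEFF-based analysis of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical linear(ized) model d = G m + noise, with an a priori guess m0.
G = rng.normal(size=(40, 6))
m_true = np.array([2.0, -1.0, 0.5, 0.0, 1.5, -0.3])
d = G @ m_true + 0.1 * rng.normal(size=40)
m0 = np.zeros(6)

def map_estimate(G, d, m0, sigma_d, sigma_prior):
    """MAP estimate with Gaussian noise (sigma_d) and a Gaussian prior centred
    at m0 (sigma_prior); a small sigma_prior pulls the solution towards m0."""
    A = G.T @ G / sigma_d**2 + np.eye(G.shape[1]) / sigma_prior**2
    b = G.T @ d / sigma_d**2 + m0 / sigma_prior**2
    return np.linalg.solve(A, b)

# Sweep the prior weight to see the transition from prior-dominated to data-dominated.
for sigma_prior in (0.1, 1.0, 10.0):
    m_hat = map_estimate(G, d, m0, sigma_d=0.1, sigma_prior=sigma_prior)
    print(sigma_prior, np.round(m_hat, 2))
```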


Estimates of human immunodeficiency virus prevalence and proportion diagnosed based on Bayesian multiparameter synthesis of surveillance data

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 3 2008
A. Goubar
Summary. Estimates of the number of prevalent human immunodeficiency virus infections are used in England and Wales to monitor development of the human immunodeficiency virus–acquired immune deficiency syndrome epidemic and for planning purposes. The population is split into risk groups, and estimates of risk group size and of risk group prevalence and diagnosis rates are combined to derive estimates of the number of undiagnosed infections and of the overall number of infected individuals. In traditional approaches, each risk group size, prevalence or diagnosis rate parameter must be informed by just one summary statistic. Yet a rich array of surveillance and other data is available, providing information on parameters and on functions of parameters, and raising the possibility of inconsistency between sources of evidence in some parts of the parameter space. We develop a Bayesian framework for synthesis of surveillance and other information, implemented through Markov chain Monte Carlo methods. The sources of data are found to be inconsistent under their accepted interpretation, but the inconsistencies can be resolved by introducing additional 'bias adjustment' parameters. The best-fitting model incorporates a hierarchical structure to spread information more evenly over the parameter space. We suggest that multiparameter evidence synthesis opens new avenues in epidemiology based on the coherent summary of available data, assessment of consistency and bias modelling. [source]


On the use of non-local prior densities in Bayesian hypothesis tests

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 2 2010
Valen E. Johnson
Summary. We examine philosophical problems and sampling deficiencies that are associated with current Bayesian hypothesis testing methodology, paying particular attention to objective Bayes methodology. Because the prior densities that are used to define alternative hypotheses in many Bayesian tests assign non-negligible probability to regions of the parameter space that are consistent with null hypotheses, resulting tests provide exponential accumulation of evidence in favour of true alternative hypotheses, but only sublinear accumulation of evidence in favour of true null hypotheses. Thus, it is often impossible for such tests to provide strong evidence in favour of a true null hypothesis, even when moderately large sample sizes have been obtained. We review asymptotic convergence rates of Bayes factors in testing precise null hypotheses and propose two new classes of prior densities that ameliorate the imbalance in convergence rates that is inherited by most Bayesian tests. Using members of these classes, we obtain analytic expressions for Bayes factors in linear models and derive approximations to Bayes factors in large sample settings. [source]


The restricted likelihood ratio test at the boundary in autoregressive series

JOURNAL OF TIME SERIES ANALYSIS, Issue 6 2009
Willa W. Chen
Abstract. The restricted likelihood ratio test, RLRT, for the autoregressive coefficient in autoregressive models has recently been shown to be second-order pivotal when the autoregressive coefficient is in the interior of the parameter space, and so is very well approximated by the χ2(1) distribution. In this article, the non-standard asymptotic distribution of the RLRT for the unit root boundary value is obtained and is found to be almost identical to that of the χ2(1) in the right tail. Together, these two results imply that the χ2(1) distribution approximates the RLRT distribution very well even for near unit root series and transitions smoothly to the unit root distribution. [source]


Goodness-of-fit tests of normality for the innovations in ARMA models

JOURNAL OF TIME SERIES ANALYSIS, Issue 3 2004
Gilles R. Ducharme
Abstract. In this paper, we propose a goodness-of-fit test of normality for the innovations of an ARMA(p, q) model with known mean or trend. The test is based on the data driven smooth test approach and is simple to perform. An extensive simulation study is conducted to study the behaviour of the test for moderate sample sizes. It is found that our approach is generally more powerful than existing tests while holding its level throughout most of the parameter space and, thus, can be recommended. This agrees with theoretical results showing the superiority of the data driven smooth test approach in related contexts. [source]


Understanding Multicompartment Micelles Using Dissipative Particle Dynamics Simulation

MACROMOLECULAR THEORY AND SIMULATIONS, Issue 2 2007
Chongli Zhong
Abstract Multicompartment micelles are a new class of nanomaterials that may find wide applications in the fields of drug delivery, nanotechnology and catalysis. Due to their structural complexity, as well as the wide parameter space to explore, experimental investigations are a difficult task, to which molecular simulation may contribute greatly. In this paper, the application of the dissipative particle dynamics simulation technique to the understanding of multicompartment micelles is introduced, illustrating that DPD is a powerful tool for identifying new morphologies by varying block length, block ratio and solvent quality in a systematic way. The formation process of multicompartment micelles, as well as shear effects and the self-assembly of nanoparticle mixtures in multicompartment micelles, can also be studied well by DPD simulation. The present work shows that DPD, as well as other simulation techniques and theories, can complement experiments greatly, not only in exploring properties in a wider parameter space, but also by giving a preview of phenomena prior to experiments. DPD, as a mesoscopic dynamic simulation technique, is particularly useful for understanding the dynamic processes of multicompartment micelles at a microscopic level. [source]


First Liapunov coefficient for coupled identical oscillators.

MATHEMATICAL METHODS IN THE APPLIED SCIENCES, Issue 17 2006
Application to coupled demand–supply model
Abstract A general formula for the computation of the first Liapunov coefficient corresponding to the Hopf bifurcation in a four-dimensional system of two coupled identical oscillators is derived for two cases. Only bi-dimensional vectors are involved. Then a model of two coupled demand–supply systems, depending on four parameters, is considered. A study of the Hopf bifurcation is done around one of the symmetrical equilibria, as the parameters vary. The loci in the parameter space of the parameter values corresponding to subcritical, supercritical or degenerate Hopf bifurcation are found. The computation of the Liapunov coefficients is done using the derived formula. Numerical plots emphasizing the existence of different types of limit cycles are developed. Copyright © 2006 John Wiley & Sons, Ltd. [source]


When can ecological speciation be detected with neutral loci?

MOLECULAR ECOLOGY, Issue 11 2010
XAVIER THIBERT-PLANTE
Abstract It is not yet clear under what conditions empirical studies can reliably detect progress toward ecological speciation through the analysis of allelic variation at neutral loci. We use a simulation approach to investigate the range of parameter space under which such detection is, and is not, likely. We specifically test for the conditions under which divergent natural selection can cause a 'generalized barrier to gene flow' that is present across the genome. Our individual-based numerical simulations focus on how population divergence at neutral loci varies in relation to recombination rate with a selected locus, divergent selection on that locus, migration rate and population size. We specifically test whether genetic differences at neutral markers are greater between populations in different environments than between populations in similar environments. We find that this expected signature of ecological speciation can be detected under part of the parameter space, most consistently when divergent selection is strong and migration is intermediate. By contrast, the expected signature of ecological speciation is not reliably detected when divergent selection is weak or migration is low or high. These findings provide insights into the strengths and weaknesses of using neutral markers to infer ecological speciation in natural systems. [source]


When are genetic methods useful for estimating contemporary abundance and detecting population trends?

MOLECULAR ECOLOGY RESOURCES, Issue 4 2010
DAVID A. TALLMON
Abstract The utility of microsatellite markers for inferring population size and trend has not been rigorously examined, even though these markers are commonly used to monitor the demography of natural populations. We assessed the ability of a linkage disequilibrium estimator of effective population size (Ne) and a simple capture-recapture estimator of abundance (N) to quantify the size and trend of stable or declining populations (true N = 100–10,000), using simulated Wright–Fisher populations. Neither method accurately or precisely estimated abundance at sample sizes of S = 30 individuals, regardless of true N. However, if larger samples of S = 60 or 120 individuals were collected, these methods provided useful insights into abundance and trends for populations of N = 100–500. At small population sizes (N = 100 or 250), precision of the Ne estimates was improved slightly more by a doubling of loci sampled than by a doubling of individuals sampled. In general, monitoring Ne proved a more robust means of identifying stable and declining populations than monitoring N over most of the parameter space we explored, and performance of the Ne estimator is further enhanced if the Ne/N ratio is low. However, at the largest population size (N = 10,000), N estimation outperformed Ne. Both methods generally required ≥ 5 generations to pass between sampling events to correctly identify population trend. [source]
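
The linkage-disequilibrium approach rests on the classical expectation that, for unlinked loci and a sample of S individuals, E[r²] ≈ 1/(3Ne) + 1/S, so Ne can be estimated by inverting the mean observed r² after subtracting the sampling term. The toy function below shows only this uncorrected textbook form (our assumption), not the specific bias-corrected estimator evaluated in the study.

```python
import numpy as np

def ld_ne_estimate(r2_values, sample_size):
    """Uncorrected LD estimator of effective population size for unlinked loci:
    E[r^2] ~ 1/(3*Ne) + 1/S, hence Ne ~ 1 / (3 * (mean(r^2) - 1/S)).
    Returns inf when the observed LD is no larger than the sampling noise."""
    r2_adj = np.mean(r2_values) - 1.0 / sample_size
    return np.inf if r2_adj <= 0 else 1.0 / (3.0 * r2_adj)

# Hypothetical squared correlations between pairs of unlinked loci, S = 60 individuals.
r2 = np.array([0.021, 0.034, 0.018, 0.027, 0.040, 0.025])
print(f"Ne estimate: {ld_ne_estimate(r2, sample_size=60):.1f}")
```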


Structures in the fundamental plane of early-type galaxies

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 4 2010
D. Fraix-Burnet
ABSTRACT The fundamental plane of early-type galaxies is a rather tight three-parameter correlation discovered more than 20 yr ago. It has resisted both a global and precise physical interpretation despite a consequent number of works, observational, theoretical or using numerical simulations. It appears that its precise properties depend on the population of galaxies in study. Instead of selecting a priori these populations, we propose to objectively construct homologous populations from multivariate analyses. We have undertaken multivariate cluster and cladistic analyses of a sample of 56 low-redshift galaxy clusters containing 699 early-type galaxies, using four parameters: effective radius, velocity dispersion, surface brightness averaged over effective radius and Mg2 index. All our analyses are consistent with seven groups that define separate regions on the global fundamental plane, not across its thickness. In fact, each group shows its own fundamental plane, which is more loosely defined for less diversified groups. We conclude that the global fundamental plane is not a bent surface, but made of a collection of several groups characterizing several fundamental planes with different thicknesses and orientations in the parameter space. Our diversification scenario probably indicates that the level of diversity is linked to the number and the nature of transforming events and that the fundamental plane is the result of several transforming events. We also show that our classification, not the fundamental planes, is universal within our redshift range (0.007–0.053). We find that the three groups with the thinnest fundamental planes presumably formed through dissipative (wet) mergers. In one of them, this merger (or these mergers) must have been quite ancient because of the relatively low metallicity of its galaxies. Two of these groups have subsequently undergone dry mergers to increase their masses. In the k-space, the third one clearly occupies the region where bulges (of lenticular or spiral galaxies) lie and might also have formed through minor mergers and accretions. The two least diversified groups probably did not form by major mergers and must have been strongly affected by interactions, some of the gas in the objects of one of these groups having possibly been swept out. The interpretation, based on specific assembly histories of galaxies of our seven groups, shows that they are truly homologous. They were obtained directly from several observables, thus independently of any a priori classification. The diversification scenario relating these groups does not depend on models or numerical simulations, but is objectively provided by the cladistic analysis. Consequently, our classification is more easily compared to models and numerical simulations, and our work can be readily repeated with additional observables. [source]


Hydrodynamical N-body simulations of coupled dark energy cosmologies

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 4 2010
Marco Baldi
ABSTRACT If the accelerated expansion of the Universe at the present epoch is driven by a dark energy scalar field, there may well be a non-trivial coupling between the dark energy and the cold dark matter (CDM) fluid. Such interactions give rise to new features in cosmological structure growth, like an additional long-range attractive force between CDM particles, or variations of the dark matter particle mass with time. We have implemented these effects in the N-body code gadget-2 and present results of a series of high-resolution N-body simulations where the dark energy component is directly interacting with the CDM. As a consequence of the new physics, CDM and baryon distributions evolve differently both in the linear and in the non-linear regime of structure formation. Already on large scales, a linear bias develops between these two components, which is further enhanced by the non-linear evolution. We also find, in contrast with previous work, that the density profiles of CDM haloes are less concentrated in coupled dark energy cosmologies compared with ΛCDM, and that this feature does not depend on the initial conditions setup, but is a specific consequence of the extra physics induced by the coupling. Also, the baryon fraction in haloes in the coupled models is significantly reduced below the universal baryon fraction. These features alleviate tensions between observations and the ΛCDM model on small scales. Our methodology is ideally suited to explore the predictions of coupled dark energy models in the fully non-linear regime, which can provide powerful constraints for the viable parameter space of such scenarios. [source]


The outburst duration and duty cycle of GRS 1915+105

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 3 2009
Patrick Deegan
ABSTRACT The extraordinarily long outburst of GRS 1915+105 makes it one of the most remarkable low-mass X-ray binaries (LMXBs). It has been in a state of constant outburst since its discovery in 1992, an eruption which has persisted ~100 times longer than those of more typical LMXBs. The long orbital period of GRS 1915+105 implies that it contains a large and massive accretion disc which is able to fuel its extreme outburst. In this paper, we address the longevity of the outburst and quiescence phases of GRS 1915+105 using smoothed particle hydrodynamics (SPH) simulations of its accretion disc through many outburst cycles. Our model is set in the two-α framework and includes the effects of the thermoviscous instability, tidal torques, irradiation by central X-rays and wind mass loss. We explore the model parameter space and examine the impact of the various ingredients. We predict that the outburst of GRS 1915+105 should last a minimum of 20 yr and possibly up to ~100 yr if X-ray irradiation is very significant. The predicted recurrence times are of the order of 10^4 yr, making the X-ray duty cycle a few 0.1 per cent. Such a low duty cycle may mean that GRS 1915+105 is not an anomaly among the more standard LMXBs and that many similar, but quiescent, systems could be present in the Galaxy. [source]


Microlensing by cosmic strings

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 1 2008
Konrad Kuijken
ABSTRACT We consider the signature and detectability of gravitational microlensing of distant quasars by cosmic strings. Because of the simple image configuration such events will have a characteristic lightcurve, in which a source would appear to brighten by exactly a factor of 2, before reverting to its original apparent brightness. We calculate the optical depth and event rate, and conclude that current predictions and limits on the total length of strings on the sky imply optical depths of ~10^-8 and event rates of fewer than one event per 10^9 sources per year. Disregarding those predictions but replacing them with limits on the density of cosmic strings from the cosmic microwave background fluctuation spectrum leaves only a small region of parameter space (in which the sky contains about 3 × 10^5 strings with deficit angle of the order of 0.3 milliarcseconds) for which a microlensing survey of exposure 10^7 source years, spanning a 20–40-year period, might reveal the presence of cosmic strings. [source]


A census of metals and baryons in stars in the local Universe

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 4 2008
Anna Gallazzi
ABSTRACT We combine stellar metallicity and stellar mass estimates for a large sample of galaxies drawn from the Sloan Digital Sky Survey Data Release 2 (SDSS DR2) spanning wide ranges in physical properties, in order to derive an inventory of the total mass of metals and baryons locked up in stars in the local Universe. Physical parameter estimates are derived from galaxy spectra with high signal-to-noise ratio (S/N) (of at least 20). Co-added spectra of galaxies with similar velocity dispersions, absolute r-band magnitudes and 4000-Å break values are used for those regions of parameter space where individual spectra have lower S/N. We estimate the total density of metals ρZ and of baryons ρ* in stars and, from these two quantities, we obtain a mass- and volume-averaged stellar metallicity of ⟨Z*⟩ = 1.04 ± 0.14 Z⊙, i.e. consistent with solar. We also study how metals are distributed in galaxies according to different properties, such as mass, morphology, mass- and light-weighted age, and we then compare these distributions with the corresponding distributions of stellar mass. We find that the bulk of metals locked up in stars in the local Universe reside in massive, bulge-dominated galaxies, with red colours and high 4000-Å break values corresponding to old stellar populations. Bulge-dominated and disc-dominated galaxies contribute similar amounts to the total stellar mass density, but have different fractional contributions to the mass density of metals in stars, in agreement with the mass–metallicity relation. Bulge-dominated galaxies contain roughly 40 per cent of the total amount of metals in stars, while disc-dominated galaxies less than 25 per cent. Finally, at a given galaxy stellar mass, we define two characteristic ages as the median of the distributions of mass and metals as a function of age. These characteristic ages decrease progressively from high-mass to low-mass galaxies, consistent with the high formation epochs of stars in massive galaxies. [source]