Assumptions Inherent (assumption + inherent)

Selected Abstracts


Differential Sperm Priming by Male Sailfin Mollies (Poecilia latipinna): Effects of Female and Male Size

ETHOLOGY, Issue 3 2004
Andrea S. Aspbury
Recent interest in sperm competition has led to a re-evaluation of the 'cheap sperm' assumption inherent in many studies of sexual selection. In particular, mounting evidence suggests that male sperm availability can be increased by the presence of females. However, there is little information on how this interacts with male traits presumably affected by female mate choice, such as larger size. This study examines the effects on male sperm availability of female presence, male body size, and female body size in the sailfin molly, Poecilia latipinna. Individual males of variable body sizes were isolated in divided tanks for 3 d, after which time either a female or no female was added to the other side of the tank. Prior to the treatments, larger males had more stripped sperm than smaller males. Female presence significantly increased the amount of sperm males primed, but this effect was strongest in small males. Furthermore, males showed a greater priming response in the presence of larger females than in the presence of smaller females. These results demonstrate that the presence of sexually mature females increases the amount of sperm males have for insemination. Furthermore, traits that indicate female fecundity may be used by males as cues in male mate choice. [source]
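The effect reported above, a priming response to female presence that is strongest in small males, is the kind of pattern usually tested with a treatment-by-size interaction term. The sketch below is purely illustrative and hypothetical, not the authors' analysis; the variable names and values are invented.

```python
# Hypothetical sketch of a presence x male-size interaction model for primed
# sperm counts. Data and variable names are invented for illustration only.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "sperm_primed":   [4.2e6, 9.8e6, 5.1e6, 8.7e6, 3.9e6, 6.0e6, 7.4e6, 5.5e6],
    "female_present": [0, 1, 0, 1, 0, 1, 1, 0],                    # treatment indicator
    "male_sl":        [32.0, 24.0, 28.5, 25.0, 35.5, 30.0, 27.0, 31.0],  # standard length, mm
})

# A significant negative interaction would indicate that the priming response
# to female presence weakens as male size increases.
model = smf.ols("sperm_primed ~ female_present * male_sl", data=data).fit()
print(model.summary())
```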


Deformation and stress change associated with plate interaction at subduction zones: a kinematic modelling

GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 2 2000
Shaorong Zhao
The interseismic deformation associated with plate coupling at a subduction zone is commonly simulated by the steady-slip model in which a reverse dip-slip is imposed on the down-dip extension of the locked plate interface, or by the backslip model in which a normal slip is imposed on the locked plate interface. It is found that these two models, although totally different in principle, produce similar patterns for the vertical deformation at a subduction zone. This suggests that it is almost impossible to distinguish between these two models by analysing only the interseismic vertical deformation observed at a subduction zone. The steady-slip model cannot correctly predict the horizontal deformation associated with plate coupling at a subduction zone, a fact that is proved by both the numerical modelling in this study and the GPS (Global Positioning System) observations near the Nankai trough, southwest Japan. It is therefore inadequate to simulate the effect of the plate coupling at a subduction zone by the steady-slip model. It is also revealed that the unphysical assumption inherent in the backslip model of imposing a normal slip on the locked plate interface makes it impossible to predict correctly the horizontal motion of the subducted plate and the stress change within the overthrust zone associated with the plate coupling during interseismic stages. If the analysis made in this work is proved to be correct, some of the previous studies on interpreting the interseismic deformation observed at several subduction zones based on these two models might need substantial revision. On the basis of the investigations on plate interaction at subduction zones made using the finite element method and the kinematic/mechanical conditions of the plate coupling implied by the present plate tectonics, a synthesized model is proposed to simulate the kinematic effect of the plate interaction during interseismic stages. A numerical analysis shows that the proposed model, designed to simulate the motion of a subducted slab, can correctly produce the deformation and the main pattern of stress concentration associated with plate coupling at a subduction zone. The validity of the synthesized model is examined and partially verified by analysing the horizontal deformation observed by GPS near the Nankai trough, southwest Japan. [source]
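For readers unfamiliar with the construction being criticised, the backslip idea rests on a kinematic decomposition along the following lines (a standard sketch, not the synthesized model proposed in this paper):

\[
\mathbf{u}_{\mathrm{inter}}(\mathbf{x}) \;=\; \mathbf{u}_{\mathrm{steady}}(\mathbf{x}) \;+\; \mathbf{u}_{\mathrm{back}}(\mathbf{x}),
\]

where \(\mathbf{u}_{\mathrm{steady}}\) is the field of steady aseismic subduction at the plate convergence rate \(v_{\mathrm{pl}}\), taken to produce negligible deformation of the overriding plate, and \(\mathbf{u}_{\mathrm{back}}\) is the field generated by normal-sense slip at \(v_{\mathrm{pl}}\) on the locked portion of the interface. Attributing the whole interseismic signal to the backslip term reproduces the vertical deformation reasonably well, which is why the two models are hard to distinguish from vertical data alone, but it is exactly this bookkeeping that the abstract argues misrepresents the horizontal motion of the subducted plate and the interseismic stress change in the overthrust zone.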


Conservation Biogeography: assessment and prospect

DIVERSITY AND DISTRIBUTIONS, Issue 1 2005
Robert J. Whittaker
ABSTRACT There is general agreement among scientists that biodiversity is under assault on a global basis and that species are being lost at a greatly enhanced rate. This article examines the role played by biogeographical science in the emergence of conservation guidance and makes the case for the recognition of Conservation Biogeography as a key subfield of conservation biology delimited as: the application of biogeographical principles, theories, and analyses, being those concerned with the distributional dynamics of taxa individually and collectively, to problems concerning the conservation of biodiversity. Conservation biogeography thus encompasses both a substantial body of theory and analysis, and some of the most prominent planning frameworks used in conservation. Considerable advances in conservation guidelines have been made over the last few decades by applying biogeographical methods and principles. Herein we provide a critical review focussed on the sensitivity to assumptions inherent in the applications we examine. In particular, we focus on four inter-related factors: (i) scale dependency (both spatial and temporal); (ii) inadequacies in taxonomic and distributional data (the so-called Linnean and Wallacean shortfalls); (iii) effects of model structure and parameterisation; and (iv) inadequacies of theory. These generic problems are illustrated by reference to studies ranging from the application of historical biogeography, through island biogeography, and complementarity analyses to bioclimatic envelope modelling. There is a great deal of uncertainty inherent in predictive analyses in conservation biogeography and this area in particular presents considerable challenges. Protected area planning frameworks and their resulting map outputs are amongst the most powerful and influential applications within conservation biogeography, and at the global scale are characterised by the production, by a small number of prominent NGOs, of bespoke schemes, which serve both to mobilise funds and channel efforts in a highly targeted fashion. We provide a simple typology of protected area planning frameworks, with particular reference to the global scale, and provide a brief critique of some of their strengths and weaknesses. Finally, we discuss the importance, especially at regional scales, of developing more responsive analyses and models that integrate pattern (the compositionalist approach) and processes (the functionalist approach) such as range collapse and climate change, again noting the sensitivity of outcomes to starting assumptions. We make the case for the greater engagement of the biogeographical community in a programme of evaluation and refinement of all such schemes to test their robustness and their sensitivity to alternative conservation priorities and goals. [source]
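One of the planning tools named above, complementarity analysis, can be illustrated with a minimal greedy set-cover sketch: at each step the site adding the most unrepresented species joins the network. The sites, species and bare-bones objective are hypothetical; real frameworks add representation targets, costs and irreplaceability measures.

```python
# Hypothetical greedy complementarity analysis for reserve selection.
sites = {
    "A": {"sp1", "sp2", "sp3"},
    "B": {"sp3", "sp4"},
    "C": {"sp4", "sp5", "sp6"},
    "D": {"sp1", "sp6"},
}
all_species = set().union(*sites.values())

selected, covered = [], set()
while covered != all_species:
    # Complementarity value = species a site adds beyond those already covered.
    best = max(sites, key=lambda s: len(sites[s] - covered))
    gain = sites[best] - covered
    if not gain:
        break  # remaining sites add nothing new
    selected.append(best)
    covered |= gain

print(selected)  # ['A', 'C'] covers all six species in this toy example
```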


Reduced-complexity flow routing models for sinuous single-thread channels: intercomparison with a physically-based shallow-water equation model

EARTH SURFACE PROCESSES AND LANDFORMS, Issue 5 2009
A. P. Nicholas
Abstract Reduced-complexity models of fluvial processes use simple rules that neglect much of the underlying governing physics. This approach is justified by the potential to use these models to investigate long-term and/or fundamental river behaviour. However, little attention has been given to the validity or realism of reduced-complexity process parameterizations, despite the fact that the assumptions inherent in these approaches may limit the potential for elucidating the behaviour of natural rivers. This study presents two new reduced-complexity flow routing schemes developed specifically for application in single-thread rivers. Output from both schemes is compared with that from a more sophisticated model that solves the depth-averaged shallow water equations. This comparison provides the first demonstration of the potential for deriving realistic predictions of in-channel flow depth, unit discharge, energy slope and unit stream power using simple flow routing schemes. It also highlights the inadequacy of modelling unit stream power, shear stress or sediment transport capacity as a function of local bed slope, as has been common practice in a number of previous reduced-complexity models. Copyright © 2009 John Wiley & Sons, Ltd. [source]
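To make concrete what "simple rules" means here, the sketch below shows a generic reduced-complexity routing rule in the spirit of cellular braided-river models (e.g. Murray and Paola): each cell passes its discharge to downstream neighbours in proportion to a power of the local bed slope. It is a hypothetical illustration, not either of the two schemes developed in the paper; the grid, exponent and values are invented.

```python
import numpy as np

def route_discharge(elev, q_in, exponent=0.5):
    """Pass unit discharge from each cell to its three downstream neighbours
    in proportion to bed slope ** exponent (a generic reduced-complexity rule,
    not the schemes proposed in the paper)."""
    nrows, ncols = elev.shape
    q = np.zeros_like(elev)
    q[0, :] = q_in                               # inflow along the upstream row
    for i in range(nrows - 1):
        for j in range(ncols):
            # downstream neighbours: left-diagonal, straight ahead, right-diagonal
            cols = [c for c in (j - 1, j, j + 1) if 0 <= c < ncols]
            slopes = np.array([max(elev[i, j] - elev[i + 1, c], 0.0) for c in cols])
            weights = slopes ** exponent
            if weights.sum() == 0:               # flat or adverse slope: split evenly
                weights = np.ones(len(cols))
            weights /= weights.sum()
            for c, w in zip(cols, weights):
                q[i + 1, c] += w * q[i, j]
    return q

elev = np.linspace(10, 0, 20).reshape(5, 4)      # gently sloping toy surface
print(route_discharge(elev, q_in=1.0))
```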


On establishing the accuracy of noise tomography travel-time measurements in a realistic medium

GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 3 2009
Victor C. Tsai
SUMMARY It has previously been shown that the Green's function between two receivers can be retrieved by cross-correlating time series of noise recorded at the two receivers. This property has been derived assuming that the energy in normal modes is uncorrelated and perfectly equipartitioned, or that the distribution of noise sources is uniform in space and the waves measured satisfy a high frequency approximation. Although a number of authors have successfully extracted travel-time information from seismic surface-wave noise, the reason for this success of noise tomography remains unclear since the assumptions inherent in previous derivations do not hold for dispersive surface waves on the Earth. Here, we present a simple ray-theory derivation that facilitates an understanding of how cross correlations of seismic noise can be used to make direct travel-time measurements, even if the conditions assumed by previous derivations do not hold. Our new framework allows us to verify that cross-correlation measurements of isotropic surface-wave noise give results in accord with ray-theory expectations, but that if noise sources have an anisotropic distribution or if the velocity structure is non-uniform then significant differences can sometimes exist. We quantify the degree to which the sensitivity kernel is different from the geometric ray and find, for example, that the kernel width is period-dependent and that the kernel generally has non-zero sensitivity away from the geometric ray, even within our ray theoretical framework. These differences lead to usually small (but sometimes large) biases in models of seismic-wave speed and we show how our theoretical framework can be used to calculate the appropriate corrections. Even when these corrections are small, calculating the errors within a theoretical framework would alleviate fears traditional seismologists may have regarding the robustness of seismic noise tomography. [source]
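A toy, hypothetical illustration of the basic measurement discussed above: cross-correlating noise recorded at two receivers and reading a travel time from the lag of the correlation peak. The example is one-dimensional, non-dispersive and single-sided, so it sidesteps exactly the complications (dispersion, anisotropic source distributions, non-uniform structure) that the paper analyses.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n = 0.01, 20_000              # sample interval (s), number of samples
velocity, distance = 3.0, 30.0    # km/s, km  -> expected travel time of 10 s
lag_samples = int(distance / velocity / dt)

# Noise from sources beyond station A sweeps past A first and reaches B
# lag_samples later, so B records a delayed copy of A's record plus local noise.
source = rng.standard_normal(n + lag_samples)
rec_a = source[lag_samples:]
rec_b = source[:n] + 0.5 * rng.standard_normal(n)

xcorr = np.correlate(rec_b, rec_a, mode="full")
lags = (np.arange(len(xcorr)) - (n - 1)) * dt
print("travel-time estimate:", lags[np.argmax(xcorr)], "s")   # ~10 s
```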


Culture Matters: How Our Culture Affects the Audit

ACCOUNTING PERSPECTIVES, Issue 3 2010
PHILIP COWPERTHWAITE
Keywords: audit; culture; international standards. Abstract: If the influence of national cultures on the implementation of global standards is not taken into account, the result will be inconsistent implementation at best and outright failure at worst. The experiences in fields such as medicine, peacekeeping, aviation, and environmental protection offer insight into possible difficulties with the implementation, beginning in 2010, of International Standards on Auditing (ISAs) by members of the International Federation of Accountants. Some countries may have difficulty with implementation because of the differences between their cultural assumptions and those embodied in the standards to be adopted. It is too soon to know if and where that will happen, especially because the data on first experiences will not begin to be available until 2013. However, cultural-comparison data can be used to foresee which countries may have difficulty with implementation. But if unintended consequences do become evident, it will be important not to assume that the standards and the standard-setting process are defective; it is more likely that practitioners will need help in interpreting the ISAs in light of their local culture. A useful first step would be for standard-setting bodies to identify explicitly the cultural assumptions inherent in the standards they produce. The standard setters can then give that information to those responsible for standards implementation at the practitioner level to help promote consistent application of the standards globally. [source]


Comparison of global and local sensitivity techniques for rate constants determined using complex reaction mechanisms

INTERNATIONAL JOURNAL OF CHEMICAL KINETICS, Issue 12 2001
James J. Scire Jr.
Many rate constant measurements, including some "direct" measurements, involve fitting a complex reaction mechanism to experimental data. Two techniques for estimating the error in such measurements were compared. In the first technique, local first-order elementary sensitivities were used to rapidly estimate the sensitivity of the fitted rate constants to the remaining mechanism parameters. Our group and others have used this technique for error estimation and experimental design. However, the nonlinearity and strong coupling found in reaction mechanisms make verification against globally valid results desirable. Here, the local results were compared with analogous importance-sampled Monte Carlo calculations in which the parameter values were distributed according to their uncertainties. Two of our published rate measurements were examined. The local uncertainty estimates were compared with Monte Carlo confidence intervals. The local sensitivity coefficients were compared with coefficients from first and second-degree polynomial regressions over the whole parameter space. The first-order uncertainty estimates were found to be sufficiently accurate for experimental design, but were subject to error in the presence of higher order sensitivities. In addition, global uncertainty estimates were found to narrow when the quality of the fit was used to weight the randomly distributed points. For final results, the global technique was found to provide efficient, accurate values without the assumptions inherent in the local analysis. The rigorous error estimates derived in this way were used to address literature criticism of one of the studies discussed here. Given its efficiency and the variety of problems it can detect, the global technique could also be used to check local results during the experimental design phase. The global routine, coded using SENKIN, can easily be extended to different types of data, and therefore can serve as a valuable tool for assessing error in rate constants determined using complex mechanisms. © 2001 John Wiley & Sons, Inc. Int J Chem Kinet 33: 784–802, 2001 [source]
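A minimal, hypothetical sketch of the two error-estimation routes compared above, using a one-parameter toy "mechanism" rather than a real one: a rate constant is fitted to synthetic data while an uncertain secondary parameter is either propagated linearly through a local sensitivity coefficient or sampled in a Monte Carlo loop. For this nearly linear toy the two estimates agree closely; the paper's point is that strong coupling and higher-order sensitivities in real mechanisms can make them diverge.

```python
# Hypothetical sketch: local (first-order) vs Monte Carlo uncertainty in a
# fitted rate constant when a secondary mechanism parameter is uncertain.
# The "mechanism" here is a single pseudo-first-order decay, not a real one.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
t = np.linspace(0, 2.0, 50)
k_true, k2_nominal, sigma_k2 = 1.5, 0.4, 0.08    # invented values

def model(t, k, k2):
    # observed decay depends on the fitted k and an interfering channel k2
    return np.exp(-(k + k2) * t)

y_obs = model(t, k_true, k2_nominal) + 0.01 * rng.standard_normal(t.size)

def fit_k(k2):
    popt, _ = curve_fit(lambda tt, k: model(tt, k, k2), t, y_obs, p0=[1.0])
    return popt[0]

# (a) local estimate: sensitivity dk_fit/dk2 by finite difference
dk2 = 1e-4
sens = (fit_k(k2_nominal + dk2) - fit_k(k2_nominal - dk2)) / (2 * dk2)
local_sigma = abs(sens) * sigma_k2

# (b) global estimate: Monte Carlo sampling of the uncertain parameter
samples = [fit_k(k2) for k2 in rng.normal(k2_nominal, sigma_k2, 500)]
print("local sigma_k:      ", local_sigma)
print("Monte Carlo sigma_k:", np.std(samples))
```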


k-sample median test for vague data

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 5 2009
Przemysław Grzegorzewski
Classical statistical tests may be sensitive to violations of the fundamental model assumptions inherent in the derivation and construction of these tests. It is obvious that such violations are much more probable in the presence of vague data. Thus nonparametric tests seem to be promising statistical tools. In the present paper, a distribution-free statistical test for the so-called "many-one problem" with vague data is suggested. This test is a generalization of the k-sample median test. In our approach, we utilize the necessity index of strict dominance, suggested by Dubois and Prade. © 2009 Wiley Periodicals, Inc. [source]
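For reference, the crisp (non-fuzzy) k-sample median test that the paper generalizes can be run directly, as sketched below; the extension to vague data via the necessity index of strict dominance is not reproduced here, and the data are invented.

```python
# The classical k-sample (Mood's) median test: count observations above and
# below the grand median in each sample and apply a chi-squared test.
from scipy.stats import median_test

sample1 = [12.1, 13.4, 11.8, 14.2, 12.9]
sample2 = [10.5, 11.2, 12.0, 10.9, 11.6]
sample3 = [13.0, 13.8, 12.7, 14.5, 13.3]

stat, p_value, grand_median, table = median_test(sample1, sample2, sample3)
print(p_value, grand_median)
print(table)   # 2 x k contingency table of counts above/below the grand median
```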


Unsupervised classification methods in food sciences: discussion and outlook

JOURNAL OF THE SCIENCE OF FOOD AND AGRICULTURE, Issue 7 2008
Marcin Kozak
Abstract This paper reviews three unsupervised multivariate classification methods: principal component analysis, principal component similarity analysis and heuristic cluster analysis. The theoretical basis of each method is presented in brief, and assumptions inherent to the methods are highlighted. A literature review shows that these methods have sometimes been used inappropriately or without referencing all essential parameters. The paper also brings to the attention of the reader a relatively unknown method: probabilistic or model-based cluster analysis. The goal of this method is to uncover the true classification of objects rather than a convenient classification provided by the other methods. For this reason it is felt that model-based cluster analysis will have broad application in the future. Copyright © 2008 Society of Chemical Industry [source]
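A brief, hypothetical sketch contrasting the heuristic and the model-based routes discussed above: PCA followed by k-means gives a convenient partition, while a Gaussian mixture selected by BIC is the probabilistic, model-based alternative the authors suggest will see wider use. The data are synthetic, not a food-science dataset.

```python
# Hypothetical sketch: heuristic clustering (PCA + k-means) versus
# model-based clustering (Gaussian mixture selected by BIC).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
# synthetic "samples x measured variables" matrix with three latent groups
X = np.vstack([rng.normal(loc=m, scale=0.5, size=(30, 6)) for m in (0.0, 2.0, 4.0)])

scores = PCA(n_components=2).fit_transform(X)    # dimension reduction
kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)

# Model-based clustering: choose the number of components by BIC.
bics = {g: GaussianMixture(n_components=g, random_state=0).fit(X).bic(X)
        for g in range(1, 7)}
best_g = min(bics, key=bics.get)
gmm_labels = GaussianMixture(n_components=best_g, random_state=0).fit_predict(X)

print("BIC-selected number of clusters:", best_g)
```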