Underlying Assumptions

Selected Abstracts


Large scale wildlife monitoring studies: statistical methods for design and analysis

ENVIRONMETRICS, Issue 2 2002
Kenneth H. Pollock
Abstract Techniques for estimation of absolute abundance of wildlife populations have received a lot of attention in recent years. The statistical research has been focused on intensive small-scale studies. Recently, however, wildlife biologists have desired to study populations of animals at very large scales for monitoring purposes. Population indices are widely used in these extensive monitoring programs because they are inexpensive compared to estimates of absolute abundance. A crucial underlying assumption is that the population index (C) is directly proportional to the population density (D). The proportionality constant is simply the probability of 'detection' for animals in the survey. As spatial and temporal comparisons of indices are crucial, it is necessary to also assume that the probability of detection is constant over space and time. Biologists intuitively recognize this when they design rigid protocols for the studies where the indices are collected. Unfortunately, however, in many field studies the assumption is clearly invalid. We believe that the estimation of detection probability should be built into the monitoring design through a double sampling approach. A large sample of points provides an abundance index, and a smaller sub-sample of the same points is used to estimate detection probability. There is an important need for statistical research on the design and analysis of these complex studies. Some basic concepts based on actual avian, amphibian, and fish monitoring studies are presented in this article. Copyright © 2002 John Wiley & Sons, Ltd. [source]
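The correction implied here is C = p x D: divide the index by a detection probability estimated from the intensively surveyed sub-sample to recover absolute density. A minimal simulation sketch of that double-sampling logic, with all counts, probabilities, and names invented for illustration (not the study's data):

```python
# Hedged sketch of the double-sampling design (illustrative values only).
import numpy as np

rng = np.random.default_rng(1)

true_density = 8.0   # animals per point; unknown in a real survey
p_detect = 0.35      # detection probability; unknown in a real survey

# Index survey: counts at 200 points, each animal detected with probability p.
counts = rng.binomial(rng.poisson(true_density, size=200), p_detect)

# Intensive sub-sample of 20 points where true abundance is determined,
# e.g. by capture-recapture, giving an estimate of detection probability.
n_true = rng.poisson(true_density, size=20)
n_seen = rng.binomial(n_true, p_detect)
p_hat = n_seen.sum() / n_true.sum()

density_hat = counts.mean() / p_hat   # index C corrected to absolute density D
print(f"p_hat = {p_hat:.2f}, corrected density = {density_hat:.2f}")
```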


Source density-driven independent component analysis approach for fMRI data

HUMAN BRAIN MAPPING, Issue 3 2005
Baoming Hong
Abstract Independent component analysis (ICA) has become a popular tool for functional magnetic resonance imaging (fMRI) data analysis. Conventional ICA algorithms, including the Infomax and FastICA algorithms, employ the underlying assumption that data can be decomposed into statistically independent sources and implicitly model the probability density functions of the underlying sources as highly kurtotic or symmetric. When source data violate these assumptions (e.g., are asymmetric), however, conventional ICA methods might not work well. As a result, modeling of the underlying sources becomes an important issue for ICA applications. We propose a source density-driven ICA (SD-ICA) method. The SD-ICA algorithm involves a two-step procedure. In the first step, a conventional ICA algorithm is used to obtain initial independent source estimates; the density of each source is then calculated with a kernel estimator. In the second step, a refitted nonlinear function is used for each source. We show that the proposed SD-ICA algorithm provides flexible source adaptivity and improves ICA performance. When SD-ICA is applied to fMRI signals, the physiologically meaningful components (e.g., activated regions) typically occupy only a small percentage of the whole-brain map for a task-related activation. Extra prior information (using a skewed-weighted distribution transformation) is thus additionally applied to the algorithm for the regions of interest of the data (e.g., visually activated regions) to emphasize the importance of the tail part of the distribution. Our experimental results show that the source density-driven ICA method can improve performance further by incorporating some a priori information into ICA analysis of fMRI signals. Hum Brain Mapping, 2005. © 2005 Wiley-Liss, Inc. [source]
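A toy sketch of the two-step procedure, assuming scikit-learn's FastICA as the conventional first-step algorithm and SciPy's Gaussian kernel estimator for the density step; the per-source refitting of the nonlinearity, the heart of SD-ICA, is deliberately omitted, and the synthetic sources are invented:

```python
# Illustrative two-step sketch; not the authors' SD-ICA implementation.
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
# Two synthetic sources, one deliberately asymmetric (skewed), mixed linearly.
s1 = rng.exponential(1.0, 2000)
s2 = rng.uniform(-1.0, 1.0, 2000)
X = np.c_[s1, s2] @ np.array([[1.0, 0.4], [0.6, 1.0]])

# Step 1: conventional ICA provides initial source estimates.
sources = FastICA(n_components=2, random_state=0).fit_transform(X)

# Step 2: kernel density estimate per source -- the quantity SD-ICA would
# use to refit each source's contrast nonlinearity.
for k in range(2):
    kde = gaussian_kde(sources[:, k])
    grid = np.linspace(sources[:, k].min(), sources[:, k].max(), 5)
    print(f"source {k}: density on grid = {np.round(kde(grid), 3)}")
```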


Socio-economic Development and International Migration: A Turkish Study

INTERNATIONAL MIGRATION, Issue 4 2001
Ahmet Icduygu
The root causes of international migration have been the subject of many studies, a vast majority of which are based on development theories dominated by economy-oriented perspectives. An underlying assumption is that poverty breeds migration. The results, and the conclusions drawn from these studies, differ widely. For instance, whether emigration increases when poverty becomes more extreme, or less extreme, or why it reaches certain levels, are issues on which research still offers a mixed answer. This article investigates the relationship between economic development and migration by taking into consideration the degrees of economic development that form thresholds for migration. It focuses on recent evidence on the development-emigration relationship in Turkey, which reflects a dimension of the dynamics and mechanisms facilitating or restricting migratory flows from the country. Using data from the 1995 District-level Socio-economic Development Index of Turkey (DSDI) and the 1990 Census, the principal aim of the article is to provide an analytical base which identifies degrees of local-level development in Turkey, relates these to international migration flows, and examines patterns of the development-migration relationship. [source]


The Relationship between Legal Systems and Economic Development: Integrating Economic and Cultural Approaches

JOURNAL OF LAW AND SOCIETY, Issue 2 2002
Amanda J. Perry
This paper seeks to demonstrate the need to bridge the gap between the economic and culture-based approaches to two issues which are fundamental to the debate over the relationship between legal reform and economic development: (a) the relative importance which economic actors around the world place on the legal system and (b) the core components of an effective legal system, as defined by those economic actors. It first outlines the major tenets of current economic legal reform policy, focusing on its underlying assumption that the perceptions and expectations of economic actors around the world do not vary significantly. Data from Geert Hofstede's study of variance in cultural values are then analysed in order to demonstrate how cultural values might affect private sector perceptions and expectations of legal systems as supporters of material progress. It concludes that there is a clear need for a more interdisciplinary approach to the debate over the relationship between legal reform and economic development, and the potential variance in private sector perceptions and expectations of legal systems in particular. Such an approach might be initiated through a systematic integration of existing data and theory from each discipline, reinforced by a new multi-country survey. [source]


Relevance of Web documents: Ghosts consensus method

JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 10 2002
Andrey L. Gorbunov
The dominant method currently used to improve the quality of Internet search systems is often called "digital democracy." Such an approach implies the utilization of the majority opinion of Internet users to determine the most relevant documents: for example, citation index usage for sorting of search results (google.com) or an enrichment of a query with terms that are asked frequently in relation with the query's theme. "Digital democracy" is an effective instrument in many cases, but it has an unavoidable shortcoming, which is a matter of principle: the average intellectual and cultural level of Internet users is very low; everyone knows what kind of information is dominant in Internet query statistics. Therefore, when one searches the Internet by means of "digital democracy" systems, one gets answers that reflect an underlying assumption that the user's mind potential is very low, and that his cultural interests are not demanding. Thus, it is more correct to use the term "digital ochlocracy" to refer to Internet search systems built on "digital democracy." Based on the well-known mathematical mechanism of linear programming, we propose a method to solve the indicated problem. [source]


What Do Juvenile Offenders Know About Being Tried as Adults?

JUVENILE AND FAMILY COURT JOURNAL, Issue 3 2004
Implications for Deterrence
ABSTRACT An underlying assumption in the nationwide policy shift toward transferring more juveniles to criminal court has been the belief that stricter, adult sentences will act as either a specific or general deterrent to juvenile crime. With respect to general deterrence (whether transfer laws deter would-be offenders from committing crimes), it is important to examine whether juveniles know about transfer laws, whether this knowledge deters criminal behavior, and whether juveniles believe the laws will be enforced against them. The current study is one of the first to examine juveniles' knowledge and perceptions of transfer laws and criminal sanctions. We interviewed 37 juveniles who had been transferred to criminal court in Georgia, obtaining quantitative as well as qualitative data based on structured interview questions. Four key findings emerged. First, juveniles were unaware of the transfer law. Second, juveniles felt that awareness of the law may have deterred them from committing the crime or may deter other juveniles from committing crimes, and they suggested practical ways to enhance juveniles' awareness of transfer laws. Third, the juveniles generally felt that it was unfair to try and sentence them as adults. Finally, the consequences of committing their crime were worse than most had imagined, and the harsh consequences of their incarceration in adult facilities may have had a brutalizing effect on some juveniles. The implications for general and specific deterrence are discussed. [source]


Optimal Design of VSI X̄ Control Charts for Monitoring Correlated Samples

QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 8 2005
Yan-Kwang Chen
Abstract This paper develops an economic design of variable sampling interval (VSI) X̄ control charts in which the next sample is taken sooner than usual if there is an indication that the process is off-target. When designing VSI X̄ control charts, the underlying assumption is that the measurements within a sample are independent. However, there are many practical situations that violate this assumption. Accordingly, a cost model combining the multivariate normal distribution model given by Yang and Hancock with Bai and Lee's cost model is proposed to develop the design of VSI charts for correlated data. An evolutionary search method to find the optimal design parameters for this model is presented. Also, we compare VSI and traditional X̄ charts with respect to expected cost per unit time, utilizing hypothetical cost and process parameters as well as various correlation coefficients. The results indicate that VSI control charts outperform traditional control charts for larger mean shifts when correlation is present. In addition, there is a difference between the design parameters of VSI charts when correlation is present or absent. Copyright © 2005 John Wiley & Sons, Ltd. [source]
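A minimal sketch of the VSI sampling rule the design optimizes: the next interval shortens when the subgroup mean lands in a warning zone near the control limits. All limits and interval lengths below are illustrative placeholders, not the economically optimal values the paper searches for:

```python
# Minimal VSI rule: sample sooner in the warning zone (illustrative values).
def next_interval(xbar, mu0, sigma_xbar, w=1.0, L=3.0,
                  short_h=0.5, long_h=2.0):
    """Hours until the next sample for a VSI X-bar chart."""
    z = abs(xbar - mu0) / sigma_xbar
    if z > L:
        return 0.0                        # out-of-control signal: act now
    return short_h if z > w else long_h   # warning zone -> shorter interval

# Process target 10.0, standard deviation of the subgroup mean 0.2.
for xbar in (10.05, 10.3, 10.7):
    print(xbar, "->", next_interval(xbar, 10.0, 0.2), "h")
```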


Reducing Obesity: Motivating Action While Not Blaming the Victim

THE MILBANK QUARTERLY, Issue 1 2009
NANCY E. ADLER
Context: The rise in obesity in the United States may slow or even reverse the long-term trend of increasing life expectancy. Like many risk factors for disease, obesity results from behavior and shows a social gradient. Especially among women, obesity is more common among lower-income individuals, those with less education, and some ethnic/racial minorities. Methods: This article examines the underlying assumptions of the two predominant models used to explain the causes of obesity, along with their implications for policy and interventions, and suggests a synthesis that avoids "blaming the victim" while acknowledging the role of individuals' health behaviors in weight maintenance. Findings: (1) The medical model focuses primarily on treatment, addressing individuals' personal behaviors as the cause of their obesity. An underlying assumption is that as independent agents, individuals make informed choices. Interventions center on providing information and motivating individuals to modify their behaviors. (2) The public health model concentrates more on prevention and sees the roots of obesity in an obesogenic environment awash in influences that lead individuals to engage in health-damaging behaviors. Interventions center on modifying environmental forces through social policies. (3) There is a tension between empowering individuals to manage their weight through diet and exercise and blaming them for failure to do so. Patterns of obesity by race/ethnicity and socioeconomic status highlight this tension. (4) Environments differ in their health-promoting resources; for example, poorer communities have fewer supermarkets, more fast-food outlets, and fewer accessible and safe recreational opportunities. Conclusions: A social justice perspective facilitates a synthesis of both models. This article proposes the concept of "behavioral justice" to convey the principle that individuals are responsible for engaging in health-promoting behaviors but should be held accountable only when they have adequate resources to do so. This perspective maintains both individuals' control and accountability for behaviors and society's responsibility to provide health-promoting environments. [source]


A Bayesian approach to inverse modelling of stratigraphy, part 1: method

BASIN RESEARCH, Issue 1 2009
Karl Charvin
ABSTRACT The inference of ancient environmental conditions from their preserved response in the sedimentary record still remains an outstanding issue in stratigraphy. Since the 1970s, conceptual stratigraphic models (e.g. sequence stratigraphy) based on the underlying assumption that accommodation space is the critical control on stratigraphic architecture have been widely used. Although these methods have more recently considered other possible parameters such as sediment supply and transport efficiency, they still fail to take into account the full range of possible parameters, processes, and their complex interactions that control stratigraphic architecture. In this contribution, we present a new quantitative method for the inference of key environmental parameters (specifically sediment supply and relative sea level) that control stratigraphy. The approach combines a fully non-linear inversion scheme with a 'process-response' forward model of stratigraphy. We formulate the inverse problem using a Bayesian framework in order to sample the full range of possible solutions and explicitly build in prior geological knowledge. Our methodology combines Reversible Jump Markov chain Monte Carlo and Simulated Tempering algorithms, which are able to deal with variable-dimensional inverse problems and multi-modal posterior probability distributions, respectively. The inverse scheme has been linked to a forward stratigraphic model, BARSIM (developed by Joep Storms, University of Delft), which simulates shallow-marine wave/storm-dominated systems over geological timescales. This link requires the construction of a likelihood function to quantify the agreement between simulated and observed data of different types (e.g. sediment age and thickness, grain size distributions). The technique has been tested and validated with synthetic data, in which all the parameters are specified to produce a 'perfect' simulation, although we add noise to these synthetic data for subsequent testing of the inverse modelling approach. These tests addressed convergence and computational-overhead issues, and highlight the robustness of the inverse scheme, which is able to assess the full range of uncertainties on the inferred environmental parameters and facies distributions. [source]
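A deliberately simplified sketch of the inversion loop: a fixed-dimension random-walk Metropolis sampler over a single environmental parameter with a Gaussian likelihood against a toy forward model. The paper's actual scheme is reversible-jump MCMC with simulated tempering coupled to BARSIM; everything below is a stand-in:

```python
# Toy fixed-dimension Metropolis sampler; a stand-in for the RJ-MCMC scheme.
import numpy as np

rng = np.random.default_rng(7)

def forward(supply, n=10):
    """Toy stand-in for the stratigraphic forward model: layer thicknesses."""
    return supply * np.linspace(0.5, 1.5, n)

# Synthetic 'observed' thicknesses with added noise, echoing the paper's tests.
obs = forward(2.0) + rng.normal(0, 0.1, 10)

def log_likelihood(supply, sigma=0.1):
    resid = obs - forward(supply)
    return -0.5 * np.sum((resid / sigma) ** 2)

supply, ll = 1.0, log_likelihood(1.0)
samples = []
for _ in range(5000):
    prop = supply + rng.normal(0, 0.05)        # random-walk proposal
    ll_prop = log_likelihood(prop)
    if np.log(rng.uniform()) < ll_prop - ll:   # Metropolis accept/reject
        supply, ll = prop, ll_prop
    samples.append(supply)

print("posterior mean sediment supply ~", round(np.mean(samples[1000:]), 3))
```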


Exploiting Gene-Environment Independence for Analysis of Case-Control Studies: An Empirical Bayes-Type Shrinkage Estimator to Trade-Off between Bias and Efficiency

BIOMETRICS, Issue 3 2008
Bhramar Mukherjee
Summary Standard prospective logistic regression analysis of case-control data often leads to very imprecise estimates of gene-environment interactions due to small numbers of cases or controls in cells of crossing genotype and exposure. In contrast, under the assumption of gene-environment independence, modern "retrospective" methods, including the "case-only" approach, can estimate the interaction parameters much more precisely, but they can be seriously biased when the underlying assumption of gene-environment independence is violated. In this article, we propose a novel empirical Bayes-type shrinkage estimator to analyze case-control data that can relax the gene-environment independence assumption in a data-adaptive fashion. In the special case, involving a binary gene and a binary exposure, the method leads to an estimator of the interaction log odds ratio parameter in a simple closed form that corresponds to a weighted average of the standard case-only and case-control estimators. We also describe a general approach for deriving the new shrinkage estimator and its variance within the retrospective maximum-likelihood framework developed by Chatterjee and Carroll (2005, Biometrika 92, 399-418). Both simulated and real data examples suggest that the proposed estimator strikes a balance between bias and efficiency depending on the true nature of the gene-environment association and the sample size for a given study. [source]
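For the binary gene/binary exposure special case, a hedged sketch of the shrinkage idea: weight the robust case-control interaction estimate against the efficient case-only estimate according to the gene-environment association observed in controls. The weight used below is a generic empirical-Bayes form and the counts are hypothetical; the paper's exact expression differs:

```python
# Hedged sketch of the shrinkage idea; weights illustrative, counts invented.
import numpy as np

def log_or(a, b, c, d):
    """Log odds ratio of the 2x2 table [[a, b], [c, d]] and its variance."""
    return np.log(a * d / (b * c)), 1/a + 1/b + 1/c + 1/d

# Counts cross-classified by gene G and exposure E
# (order: G=1&E=1, G=1&E=0, G=0&E=1, G=0&E=0).
cases    = (60, 40, 80, 120)
controls = (35, 45, 70, 150)

psi_co, _      = log_or(*cases)     # case-only estimate (needs G-E independence)
theta, v_theta = log_or(*controls)  # G-E association in controls (~0 if independent)
psi_cc         = psi_co - theta     # case-control interaction estimate

# Shrink toward the case-only estimate when controls show little G-E association.
w = theta**2 / (theta**2 + v_theta)
psi_eb = w * psi_cc + (1 - w) * psi_co
print(f"case-only {psi_co:.3f}, case-control {psi_cc:.3f}, shrinkage {psi_eb:.3f}")
```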


Some benefits of dichotomization in psychiatric and criminological research

CRIMINAL BEHAVIOUR AND MENTAL HEALTH, Issue 2 2000
Professor David P. Farrington PhD FBA
Background The product-moment correlation r is widely used in criminology and psychiatry to measure strength of association. However, most criminological and psychiatric variables contravene its underlying assumptions. Aim To compare statistical measures of association based on dichotomous variables with the use of r. Method Explanatory variables for delinquency are investigated in the Pittsburgh Youth Study using a sample of 506 boys aged 13-14. Results Dichotomization does not necessarily cause a decrease in measured strength of associations. Conclusions about the most important explanatory variables for delinquency were not greatly affected by using dichotomous as opposed to continuous variables, by different dichotomization splits, or by using logistic versus OLS multiple regression. Non-linear relationships, interaction effects and multiple risk factor individuals were easily studied using dichotomous data. Conclusions Dichotomization produces meaningful findings that are easily understandable to a wide audience. Measures of association for dichotomous variables, such as the odds ratio, have many advantages and are often more realistic and meaningful measures of strength of relationship than the product-moment correlation r. Copyright © 2000 Whurr Publishers Ltd. [source]
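A quick sketch of the comparison at issue: simulate two moderately associated continuous measures, dichotomize at the median, and contrast the odds ratio with the product-moment correlation r. The effect size and cut points are invented; only the sample size echoes the study:

```python
# Simulated contrast of r vs. the odds ratio after median dichotomization.
import numpy as np

rng = np.random.default_rng(42)
n = 506                                          # same n as the Pittsburgh sample
risk = rng.normal(size=n)
delinquency = 0.4 * risk + rng.normal(size=n)    # moderately associated

r = np.corrcoef(risk, delinquency)[0, 1]         # product-moment correlation

hi_risk = risk > np.median(risk)                 # median dichotomization
hi_del = delinquency > np.median(delinquency)
a = np.sum(hi_risk & hi_del); b = np.sum(hi_risk & ~hi_del)
c = np.sum(~hi_risk & hi_del); d = np.sum(~hi_risk & ~hi_del)
odds_ratio = (a * d) / (b * c)

print(f"r = {r:.2f}, odds ratio = {odds_ratio:.2f}")
```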


Defining Political Community and Rights to Natural Resources in Botswana

DEVELOPMENT AND CHANGE, Issue 2 2009
Amy R. Poteete
ABSTRACT Community-Based Natural Resource Management (CBNRM), once presented as the best way to protect common pool natural resources, now attracts a growing chorus of critiques that either question its underlying assumptions or emphasize problems related to institutional design. These critiques overlook connections between the definition of rights to natural resources and membership in political communities. The potential for competing definitions of political identity and rights across natural resources arises when property rights regimes differ across natural resources and these different systems of rights appeal to alternative definitions of community. In Botswana, the entangling of natural resource policy with identity politics contributed to a partial recentralization of CBNRM in 2007. [source]


Other people, other drugs: the policy response to petrol sniffing among Indigenous Australians

DRUG AND ALCOHOL REVIEW, Issue 3 2004
Dr PETER H. D'ABBS
Abstract This paper examines the policy response of Australian governments to petrol sniffing in Indigenous communities from the 1980s until the present. During this period, despite the formation of numerous inquiries, working parties and intergovernmental committees, there has been little accumulation of knowledge about the nature and causes of sniffing, or about the effectiveness of interventions. Policies are fragmentary; programmes are rarely evaluated, and most rely on short-term funding. The paper sets out to explain why this should be so. It draws upon a conceptual framework known as 'analytics of government' to examine the ways in which petrol sniffing comes to the attention of government agencies and is perceived as an issue; the mechanisms deployed by governments to address petrol sniffing; ways in which knowledge about sniffing is generated; and the underlying assumptions about people that inform policy-making. Drawing upon case studies of policy responses, the paper argues that a number of structural factors combine to marginalize petrol sniffing as an issue, and to encourage reliance on short-term, one-off interventions in place of a sustained policy commitment. Four recommendations are advanced to help overcome these factors: (1) agreements should be reached within and between levels of government on steps to be taken to reduce risk factors before the eruption of petrol-sniffing crises; (2) the evidence base relevant to petrol sniffing (and other inhalants) should be improved by funding and directing one or more existing national drug research centres to collate data on inhalant-caused mortality and morbidity, and to conduct or commission research into prevalence patterns, effectiveness of interventions and other gaps in knowledge; (3) the current pattern of short-term, pilot and project funding should be replaced with longer-term, evidence-based interventions that address the multiple risk and protective factors present in communities; and (4) insistence by governments that communities must take 'ownership' of the problem should be replaced by a commitment to genuine partnerships involving governments, non-government and community sectors. [source]


It's all relative: ranking the diversity of aquatic bacterial communities

ENVIRONMENTAL MICROBIOLOGY, Issue 9 2008
Allison K. Shaw
Summary The study of microbial diversity patterns is hampered by the enormous diversity of microbial communities and the lack of resources to sample them exhaustively. For many questions about richness and evenness, however, one only needs to know the relative order of diversity among samples rather than total diversity. We used 16S libraries from the Global Ocean Survey to investigate the ability of 10 diversity statistics (including rarefaction, non-parametric, parametric, curve extrapolation and diversity indices) to assess the relative diversity of six aquatic bacterial communities. Overall, we found that the statistics yielded remarkably similar rankings of the samples for a given sequence similarity cut-off. This correspondence, despite the different underlying assumptions of the statistics, suggests that diversity statistics are a useful tool for ranking samples of microbial diversity. In addition, sequence similarity cut-off influenced the diversity ranking of the samples, demonstrating that diversity statistics can also be used to detect differences in phylogenetic structure among microbial communities. Finally, a subsampling analysis suggests that further sequencing from these particular clone libraries would not have substantially changed the richness rankings of the samples. [source]
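As an illustration of the ranking exercise, a sketch computing two of the statistic families named (the non-parametric Chao1 richness estimator and the Shannon diversity index) on invented OTU count vectors, then checking rank agreement with a Spearman correlation; none of this reproduces the study's data or full statistic set:

```python
# Rank three hypothetical samples with two diversity statistics.
import numpy as np
from scipy.stats import spearmanr

def chao1(counts):
    """Bias-corrected Chao1 richness estimate from an abundance vector."""
    s_obs = np.sum(counts > 0)
    f1, f2 = np.sum(counts == 1), np.sum(counts == 2)
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

def shannon(counts):
    """Shannon diversity index."""
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

samples = {                                  # OTU counts per sample (invented)
    "A": np.array([10, 5, 2, 1, 1, 1, 1]),
    "B": np.array([30, 1, 1]),
    "C": np.array([4, 4, 3, 3, 2, 2, 1, 1, 1]),
}
chao = {k: chao1(v) for k, v in samples.items()}
shan = {k: shannon(v) for k, v in samples.items()}
rho, _ = spearmanr(list(chao.values()), list(shan.values()))
print(chao, shan, f"rank agreement rho = {rho:.2f}")
```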


Empowerment in the self-management of diabetes: are we ready to test assumptions?

EUROPEAN DIABETES NURSING, Issue 3 2007
KG Asimakopoulou BSc, CPsychol, CSci, Chartered Health Psychologist
Abstract This paper describes the origins and definitions of the concept of diabetes empowerment. It summarises why 'compliance' was considered to be a problematic term in diabetes and why it was replaced by 'self-management', which, in turn, paved the way for introducing the concept of empowerment. Although empowerment is a popular and helpful concept and process, it comes with several important underlying assumptions about the health care professional (HCP)-patient encounter, patient understanding, memory and willingness to become empowered, and finally the HCP's view on the validity of the concept. All these assumptions, it is argued, need further testing before the concept and process are fully and wholly embraced in diabetes care across Europe. Copyright © 2007 FEND [source]


A cost-effectiveness analysis of caspofungin vs. liposomal amphotericin B for treatment of suspected fungal infections in the UK

EUROPEAN JOURNAL OF HAEMATOLOGY, Issue 6 2007
Karin Bruynesteyn
Abstract Objective: To evaluate the cost-effectiveness of caspofungin vs. liposomal amphotericin B in the treatment of suspected fungal infections in the UK. Methods: The cost-effectiveness of caspofungin vs. liposomal amphotericin B was evaluated using a decision-tree model. The decision tree was populated using both data and clinical definitions from published clinical studies. Model outcomes included success in terms of resolution of fever, baseline infection, absence of breakthrough infection, survival and quality adjusted life years (QALYs) saved. Discontinuation due to nephrotoxicity or other adverse events was included in the model. Efficacy and safety data were based on additional analyses of a randomised, double blind, multinational trial of caspofungin compared with liposomal amphotericin B. Information on life expectancy, quality of life, medical resource consumption and costs was obtained from peer-reviewed published data. Results: The caspofungin mean total treatment cost was £9762 (95% uncertainty interval £6955-£12 577), which was £2033 (-2489; 6779) less than liposomal amphotericin B. Treatment with caspofungin resulted in 0.40 (-0.12; 0.94) additional QALYs saved in comparison with liposomal amphotericin B. Probabilistic sensitivity analysis found a 95% probability of the incremental cost per QALY saved being within the generally accepted threshold for cost-effectiveness (£30 000). Additional analyses with varying doses of caspofungin and liposomal amphotericin B confirmed these findings. Conclusion: Given the underlying assumptions, caspofungin is cost-effective compared with liposomal amphotericin B in the treatment of suspected fungal infections in the UK. [source]
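The arithmetic behind the conclusion, using only the point estimates quoted above: caspofungin is both cheaper and more effective, so it dominates and no incremental cost-effectiveness ratio (ICER) against the £30 000 threshold is strictly needed; the sketch shows the generic calculation anyway:

```python
# Point-estimate arithmetic from the abstract; a dominance check plus the
# generic ICER formula against the 30 000 pound threshold.
cost_caspofungin = 9762.0
cost_liposomal_amb = 9762.0 + 2033.0    # comparator cost implied by the abstract

d_cost = cost_caspofungin - cost_liposomal_amb   # -2033: cheaper
d_qaly = 0.40                                    # +0.40 QALYs: more effective

if d_cost <= 0 and d_qaly > 0:
    print("caspofungin dominates: cheaper and more effective")
else:
    icer = d_cost / d_qaly
    print(f"ICER = {icer:.0f} per QALY (threshold 30000)")
```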


Semantic confusion regarding the development of multisensory integration: a practical solution

EUROPEAN JOURNAL OF NEUROSCIENCE, Issue 10 2010
Barry E. Stein
Abstract There is now a good deal of data from neurophysiological studies in animals and behavioral studies in human infants regarding the development of multisensory processing capabilities. Although the conclusions drawn from these different datasets sometimes appear to conflict, many of the differences are due to the use of different terms to mean the same thing and, more problematic, the use of similar terms to mean different things. Semantic issues are pervasive in the field and complicate communication among groups using different methods to study similar issues. Achieving clarity of communication among different investigative groups is essential for each to make full use of the findings of others, and an important step in this direction is to identify areas of semantic confusion. In this way investigators can be encouraged to use terms whose meaning and underlying assumptions are unambiguous because they are commonly accepted. Although this issue is of obvious importance to the large and very rapidly growing number of researchers working on multisensory processes, it is perhaps even more important to the non-cognoscenti. Those who wish to benefit from the scholarship in this field but are unfamiliar with the issues identified here are most likely to be confused by semantic inconsistencies. The current discussion attempts to document some of the more problematic of these, begin a discussion about the nature of the confusion and suggest some possible solutions. [source]


Attachment styles, conflict styles and humour styles: interrelationships and associations with relationship satisfaction

EUROPEAN JOURNAL OF PERSONALITY, Issue 2 2008
Arnie Cann
Abstract Relationships among attachment styles, conflict styles and humour styles were examined in the context of romantic relationships. Each style was assumed to be based upon underlying assumptions about self and others, so relationships among the measures were predicted. A model assuming that the relationship of attachment styles to relationship satisfaction was partially mediated by the conflict styles and humour styles was tested. Overall, the predicted relationships among the three measures were supported. Conflict styles and humour styles reflecting attitudes about others were related to the avoidance attachment style, while those reflecting attitudes about the self were related to the anxiety attachment dimension. Conflict styles and humour styles were mediators of the association of attachment style with relationship satisfaction. Copyright © 2007 John Wiley & Sons, Ltd. [source]


Classifying tagging experiments for commercial fisheries into three fundamental types based on design, data requirements and estimable population parameters

FISH AND FISHERIES, Issue 2 2010
Tom Polacheck
Abstract Mark-recapture experiments have the potential to provide direct estimates of fundamental parameters required for fishery stock assessment and provision of subsequent management advice in fisheries. The literature on mark-recapture experiments is enormous, with a variety of different experimental designs and estimation models; thus, it can be difficult to grasp the primary features of different approaches, the inter-relationship among them and their potential utility in different situations. Here, we present an overview of the tagging experimental designs that are appropriate for use in commercial fishery situations. We suggest that most mark-recapture experiments in a large-scale fishery context can be classified into one of three basic types (Petersen, tag-attrition or Brownie), based on the fundamental design employed for releases and recaptures. The release and recapture strategy (e.g. the number of release events, whether the size of the sample examined for recaptured tags is known) determines which parameters can be estimated and from where the information for estimating them arises. We conclude that an integrated Brownie and Petersen approach is the most powerful of the different approaches in terms of the range of parameters that can be estimated without underlying assumptions or constraints on parameters. Such an approach can provide direct estimates of fishing mortality, natural mortality and population size, which are the main population dynamics parameters that traditional fishery stock assessments attempt to estimate. [source]
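The Petersen type reduces to a single release-recapture ratio; a minimal sketch with invented numbers, including Chapman's nearly unbiased modification (a standard variant, not something specified in this abstract):

```python
# Petersen estimate from one release event and one examined catch.
M, C, R = 1000, 800, 40    # marked releases, catch examined, marked recaptures

n_petersen = M * C / R                          # classic Petersen estimator
n_chapman = (M + 1) * (C + 1) / (R + 1) - 1     # Chapman's modification

print(f"Petersen N = {n_petersen:.0f}, Chapman N = {n_chapman:.0f}")
```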


Spatially explicit fisheries simulation models for policy evaluation

FISH AND FISHERIES, Issue 4 2005
Dominique Pelletier
Abstract This paper deals with the design of modelling tools suitable for investigating the consequences of alternative policies on the dynamics of resources and fisheries, such as the evaluation of marine protected areas (MPA). We first review the numerous models that have been developed for this purpose, and compare them from several standpoints: population modelling, exploitation modelling and management measure modelling. We then present a generic fisheries simulation model, Integration of Spatial Information for FISHeries simulation (ISIS-Fish). This spatially explicit model allows quantitative policy screening for fisheries with mixed-species harvests. It may be used to investigate the effects of combined management scenarios including a variety of policies: total allowable catch (TAC), licenses, gear restrictions, MPA, etc. Fisher's response to management may be accounted for by means of decision rules conditioned on population and exploitation parameters. An application to a simple example illustrates the relevance of this kind of tool for policy screening, particularly in the case of mixed fisheries. Finally, the reviewed models and ISIS-Fish are discussed and confronted in the light of the underlying assumptions and model objectives. In the light of this discussion, we identify desirable features for fisheries simulation models aimed at policy evaluation, and particularly MPA evaluation. [source]


Can the evolutionary-rates hypothesis explain species-energy relationships?

FUNCTIONAL ECOLOGY, Issue 6 2005
K. L. EVANS
Summary (1) There is growing consensus that much of the marked broad-scale spatial variation in species richness is associated with variation in environmental energy availability, but at least nine principal mechanisms have been proposed that may explain these patterns. (2) The evolutionary-rates hypothesis suggests that high environmental energy availability elevates rates of molecular evolution, promoting faster speciation, so that more species occur in high-energy areas because more evolve there. Direct tests of this hypothesis are rare and their conclusions inconsistent. Here we focus on assessing the support for its underlying assumptions. (3) First, the evolutionary-rates hypothesis assumes that high energy levels promote mutation. There is certainly evidence that high levels of ultraviolet radiation increase mutation rates. High temperatures may also reduce generation times and elevate metabolic rates, which may promote mutation. On balance, data support a link between rates of metabolism and mutation, but a link between the latter and generation time is more equivocal and is particularly unlikely in plants. (4) Second, the evolutionary-rates hypothesis assumes that mutation rates limit speciation rates. This may be true if all else was equal, but correlations between mutation and speciation are probably very noisy as many other factors may influence rates both of sympatric and allopatric speciation, including the occurrence of physical isolation barriers, the magnitude of selection and population size. (5) Third, the evolutionary-rates hypothesis assumes that there is a strong correlation between current and historical energy levels. Factors such as tectonic drift may weaken such relationships, but are likely to have had negligible effects over the time period during which the majority of extant species evolved. (6) Fourth, the evolutionary-rates hypothesis assumes that changes in species ranges following speciation do not sufficiently weaken the correlation between the rate of speciation in an area and species richness. The ranges of many species appear to alter dramatically following speciation, and this may markedly reduce the strength of the relationship, but to what extent is unclear. (7) In sum, the degree to which the evolutionary-rates hypothesis can explain spatial variation in species richness remains surprisingly uncertain. We suggest directions for further research. [source]


The Space of Local Control in the Devolution of us Public Housing Policy

GEOGRAFISKA ANNALER SERIES B: HUMAN GEOGRAPHY, Issue 4 2000
Janet L. Smith
Sweeping changes in national policy aim to radically transform public housing in the United States. The goal is to reduce social isolation and increase opportunities for low income tenants by demolishing 'worst case' housing, most of which is modern, high-rise buildings with high vacancy and crime rates, and replacing it with 'mixed-income' developments and tenant based assistance to disperse current public housing families. Transformation relies on the national government devolving more decision-making power to local government and public housing authorities. The assumption here is that decentralizing the responsibility for public housing will yield more effective results and be more efficient. This paper explores the problematic nature of decentralization as it has been conceptualized in policy discourse, focusing on the underlying assumptions about the benefits of increasing local control in the implementation of national policy. As this paper describes, this conceived space of local control does not take into account the spatial features that have historically shaped where and how low income families live in the US, including racism and classism and a general aversion by the market to produce affordable rental units and mixed-income developments. As a result, this conceived space of local control places the burden on low income residents to make transformation a success. To make this case, Wittgenstein's (1958) post-structural view of language is combined with Lefebvre's view of space to provide a framework in which to examine US housing policy discourse as a 'space producing' activity. The Chicago Housing Authority's Plan for Transformation is used to illustrate how local efforts to transform public housing reproduce a functional space for local control that is incapable of generating many of the proposed benefits of decentralization for public housing tenants. [source]


Models of Earth's main magnetic field incorporating flux and radial vorticity constraints

GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 1 2007
A. Jackson
SUMMARY We describe a new technique for implementing the constraints on magnetic fields arising from two hypotheses about the fluid core of the Earth, namely the frozen-flux hypothesis and the hypothesis that the core is in magnetostrophic force balance with negligible leakage of current into the mantle. These hypotheses lead to time-independence of the integrated flux through certain 'null-flux patches' on the core surface, and to time-independence of their radial vorticity. Although the frozen-flux hypothesis has received attention before, constraining the radial vorticity has not previously been attempted. We describe a parametrization and an algorithm for preserving the topology of radial magnetic fields at the core surface while allowing morphological changes. The parametrization is a spherical triangle tesselation of the core surface. Topology with respect to a reference model (based on data from the Oersted satellite) is preserved as models at different epochs are perturbed to optimize the fit to the data; the topology preservation is achieved by the imposition of inequality constraints on the model, and the optimization at each iteration is cast as a bounded-value least-squares problem. For epochs 2000, 1980, 1945, 1915 and 1882 we are able to produce models of the core field which are consistent with flux and radial vorticity conservation, thus providing no observational evidence for the failure of the underlying assumptions. These models are a step towards the production of models which are optimal for the retrieval of frozen-flux velocity fields at the core surface. [source]
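An illustration of the optimization step described, assuming SciPy's bounded least-squares solver as a stand-in: a linear least-squares fit with inequality (bound) constraints on the parameters, loosely analogous to the topology-preserving constraints on the field model. The matrix, bounds, and noise level are toys:

```python
# Bound-constrained linear least squares as a toy analogue of the scheme.
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(3)
A = rng.normal(size=(50, 4))                 # toy design matrix (data kernel)
x_true = np.array([0.5, -0.2, 0.8, 0.1])
b = A @ x_true + rng.normal(0, 0.05, 50)     # noisy synthetic observations

# Inequality (bound) constraints keep parameters in an allowed region,
# loosely analogous to preserving the topology of null-flux patches.
res = lsq_linear(A, b, bounds=(-1.0, 1.0))
print("bounded least-squares solution:", np.round(res.x, 3))
```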


How to model shallow water-table depth variations: the case of the Kervidy-Naizin catchment, France

HYDROLOGICAL PROCESSES, Issue 4 2005
Jérôme Molénat
Abstract The aim of this work is threefold: (1) to identify the main characteristics of water-table variations from observations in the Kervidy-Naizin catchment, a small catchment located in western France; (2) to confront these characteristics with the assumptions of the Topmodel concepts; and (3) to analyse how relaxation of the assumptions could improve the simulation of distributed water-table depth. A network of piezometers was installed in the Kervidy-Naizin catchment and the water-table depth was recorded every 15 min in each piezometer from 1997 to 2000. From these observations, the Kervidy-Naizin groundwater appears to be characteristic of shallow groundwaters of catchments underlain by crystalline bedrock, in view of the strong relation between water distribution and topography in the bottom land of the hillslopes. However, from midslope to summit, the water table can attain a depth of many metres, it does not parallel the topographic surface and it remains very responsive to rainfall. In particular, hydraulic gradients vary with time and are not equivalent to the soil surface slope. These characteristics call into question some assumptions that are used to model shallow lateral subsurface flow in saturated conditions. We investigate the performance of three models (Topmodel, a kinematic model and a diffusive model) in simulating the hourly distributed water-table depths along one of the hillslope transects, as well as the hourly stream discharge. For each model, two sets of parameters are identified following a Monte Carlo procedure applied to a simulation period of 2649 h. The performance of each model with each of the two parameter sets is evaluated over a test period of 2158 h. All three models, and hence their underlying assumptions, appear to reproduce adequately the stream discharge variations and water-table depths in bottom lands at the foot of the hillslope. To simulate the groundwater depth distribution over the whole hillslope, the steady-state assumption (Topmodel) is quite constraining and leads to unacceptable water-table depths in midslope and summit areas. Once this assumption is relaxed (kinematic model), the water-table simulation is improved. A subsequent relaxation of the hydraulic gradient (diffusive model) further improves water-table simulations in the summit area, while still yielding realistic water-table depths in the bottom land. Copyright © 2004 John Wiley & Sons, Ltd. [source]
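Topmodel's steady-state assumption ties local water-table depth to the topographic wetness index ln(a/tan β); the kinematic and diffusive relaxations above let hydraulic gradients depart from the surface slope. A minimal sketch of the index itself, with invented upslope areas and slopes:

```python
# Topographic wetness index for three invented hillslope positions.
import numpy as np

upslope_area = np.array([50.0, 300.0, 1500.0])   # m^2 per unit contour length
slope = np.tan(np.radians([8.0, 4.0, 1.5]))      # local slope tan(beta)

twi = np.log(upslope_area / slope)   # higher index -> wetter, shallower table
for a, t in zip(upslope_area, twi):
    print(f"upslope area {a:7.1f} m^2 -> TWI {t:.2f}")
```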


Brief measure of expressed emotion: internal consistency and stability over time

INTERNATIONAL JOURNAL OF METHODS IN PSYCHIATRIC RESEARCH, Issue 4 2003
Seija Sandberg Consultant, Senior Lecturer
Abstract The study examined three methodological aspects of expressed emotion (EE) as assessed in the course of PACE (Psychosocial Assessment of Childhood Experiences) interviews with a parent. In a sample of 87 children, aged 6-13 years, enrolled in a prospective study examining the role of stress on the course of asthma, EE was assessed at three time points, 9 months apart. A high degree of agreement was found among the three concurrent measures of negative and positive EE (kappas from 0.74 to 0.97, and from 0.45 to 0.88, respectively; p < 0.0001 in all instances). The temporal stability of all measures was lower, although statistically significant in all but 2 instances (kappas from 0.19 to 0.59, and from 0.11 to 0.39, respectively). The temporal stability across measures, as well as across interviewers and over time, was broadly similar (kappas from 0.21 to 0.56 for negative EE, and from 0.09 to 0.38 for positive EE, with all but three of the 36 statistically significant). The findings provide support for the underlying assumptions of the PACE-EE and show the utility of measures based on just very brief periods of non-directive interviewing, making them practical in a wide range of studies with EE just one of a larger set of measures. Copyright © 2003 Whurr Publishers Ltd. [source]
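The agreement statistic running through these results is kappa; a minimal self-contained computation for two ratings of the same families, with toy labels standing in for the EE codings:

```python
# Cohen's kappa for two ratings of the same families (toy labels).
import numpy as np

def cohen_kappa(r1, r2):
    """Chance-corrected agreement between two equal-length label sequences."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    labels = np.union1d(r1, r2)
    p_obs = np.mean(r1 == r2)
    p_exp = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in labels)
    return (p_obs - p_exp) / (1 - p_exp)

ee_time1 = ["hi", "lo", "hi", "hi", "lo", "lo", "hi", "lo"]
ee_time2 = ["hi", "lo", "hi", "lo", "lo", "lo", "hi", "hi"]
print(f"kappa = {cohen_kappa(ee_time1, ee_time2):.2f}")
```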


Perception and Politics in Intelligence Assessment: U.S. Estimates of the Soviet and "Rogue-State" Nuclear Threats

INTERNATIONAL STUDIES PERSPECTIVES, Issue 4 2009
James H. Lebovic
United States estimates of Soviet nuclear goals and capabilities and the current "rogue-state" nuclear threat reflected prevailing beliefs about threat within the U.S. government and the relative influence of agencies charged with threat assessment. This article establishes that the patterns in formal Soviet threat assessment: (i) did not reflect a uniform response to "external threat," (ii) were inevitably tied to underlying assumptions about adversary intent, and (iii) were susceptible then to perceptual, organizational, and/or political influences within government. Thus, threat assessments reflected the optimism and pessimism, and the political interests and ideologies, of those who participated in the estimating process. The article concludes by examining these lessons in light of the experiences and challenges of assessing threat from small states harboring nuclear ambitions. [source]


Modeling Judgments in the Angoff and Contrasting-Groups Method of Standard Setting

JOURNAL OF EDUCATIONAL MEASUREMENT, Issue 1 2008
Daniël Van Nijlen
Essential for the validity of the judgments in a standard-setting study is that they follow the implicit task assumptions. In the Angoff method, judgments are assumed to be inversely related to the difficulty of the items; contrasting-groups judgments are assumed to be positively related to the ability of the students. In the present study, judgments from both procedures were modeled with a random-effects probit regression model. The Angoff judgments showed a weaker link with the position of the items on the latent scale than the contrasting-groups judgments with the position of the students. Hence, in the specific context of the study, the contrasting-groups judgments were more aligned with the underlying assumptions of the method than the Angoff judgments. [source]
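A sketch of the modeling idea under stated assumptions: judgments regressed on latent-scale position with a probit link, using statsmodels' plain Probit (the paper's random judge effect is omitted) and simulated data in which Angoff-style judgments do track item difficulty:

```python
# Probit regression of simulated Angoff-style judgments on item difficulty.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
difficulty = rng.normal(size=300)            # item positions on the latent scale

# Judgment: will a minimally competent student succeed on this item?
# Under the method's assumption, 'yes' becomes less likely as difficulty rises.
p_yes = 1 / (1 + np.exp(1.5 * difficulty))
judged_yes = rng.binomial(1, p_yes)

X = sm.add_constant(difficulty)
fit = sm.Probit(judged_yes, X).fit(disp=0)
print(fit.params)   # negative slope = judgments aligned with the assumption
```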


The seven deadly sins of comparative analysis

JOURNAL OF EVOLUTIONARY BIOLOGY, Issue 7 2009
R. P. FRECKLETON
Abstract Phylogenetic comparative methods are extremely commonly used in evolutionary biology. In this paper, I highlight some of the problems that are frequently encountered in comparative analyses and review how they can be fixed. In broad terms, the problems boil down to a lack of appreciation of the underlying assumptions of comparative methods, as well as problems with implementing methods in a manner akin to more familiar statistical approaches. I highlight that the advent of more flexible computing environments should improve matters and allow researchers greater scope to explore methods and data. [source]


Size-independent growth in fishes: patterns, models and metrics

JOURNAL OF FISH BIOLOGY, Issue 10 2008
D. B. Sigourney
A combination of a dynamic energy budget (DEB) model, field data on Atlantic salmon Salmo salar and brown trout Salmo trutta and laboratory data on Atlantic salmon was used to assess the underlying assumptions of three different metrics of growth, including specific growth rate (G), standardized mass-specific growth rate (GS) and absolute growth rate in length (GL), in salmonids. Close agreement was found between predictions of the DEB model and the assumptions of linear growth in length and parabolic growth in mass. Field data comparing spring growth rates of age 1+ year and 2+ year Atlantic salmon demonstrated that in all years the larger age 2+ year fish exhibited a significantly lower G, but differences in growth in terms of GS and GL depended on the year examined. For brown trout, larger age 2+ year fish also consistently exhibited slower growth rates in terms of G but grew at similar rates to age 1+ year fish in terms of GS and GL. Laboratory results revealed that during the age 0+ year (autumn) the divergence in growth between future Atlantic salmon smolts and non-smolts was similar in terms of all three metrics, with smolts displaying higher growth than non-smolts; however, both GS and GL indicated that smolts maintain relatively fast growth into the late autumn, whereas G suggested that both smolts and non-smolts exhibit a sharp decrease in growth from October to November. During the spring, patterns of growth in length were significantly decoupled from patterns of growth in mass. Smolts maintained relatively fast growth through April in length but not in mass. These results suggest GS can be a useful alternative to G as a size-independent measure of growth rate in immature salmonids. In addition, during certain growth stanzas, GS may be highly correlated with GL. The decoupling of growth in mass from growth in length over ontogeny, however, may necessitate a combination of metrics to adequately describe variation in growth depending on ontogenetic stage, particularly if life histories differ. [source]
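The three metrics side by side, as a hedged sketch: G and GL follow their standard definitions; GS is written in an Ostrovsky-type standardized form whose mass exponent b = 0.31 is an assumed illustrative value, not one taken from this paper:

```python
# The three growth metrics; b = 0.31 is an assumed illustrative exponent.
import numpy as np

def G(w1, w2, dt):
    """Specific growth rate, % body mass per day."""
    return 100 * (np.log(w2) - np.log(w1)) / dt

def GL(l1, l2, dt):
    """Absolute growth rate in length, mm per day."""
    return (l2 - l1) / dt

def GS(w1, w2, dt, b=0.31):
    """Standardized mass-specific growth rate (Ostrovsky-type form)."""
    return 100 * (w2**b - w1**b) / (b * dt)

# Small vs. large parr with the same absolute mass gain over 30 days:
print(G(10, 12, 30), G(50, 52, 30))    # G penalizes the larger fish
print(GS(10, 12, 30), GS(50, 52, 30))  # GS reduces the size penalty
print(GL(60, 66, 30))                  # mm per day
```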


Creativity Defined: Implicit Theories in the Professions of Interior Design, Architecture, Landscape Architecture, and Engineering

JOURNAL OF INTERIOR DESIGN, Issue 1 2002
Margaret Portillo Ph.D.
ABSTRACT The purpose of this study was to examine implicit theories of creativity in related fields through a mail survey of 313 professors randomly selected from accredited programs in interior design, architecture, landscape architecture, and engineering. To describe a highly creative practitioner in their respective fields, the respondents completed the Gough Adjective Check List (ACL), scored with the Domino creativity scale (Cr). The interior design professors generated a profile of the creative practitioner that obtained a significantly higher mean score than did the architecture or engineering groups on the ACL-Cr scale. No other significant differences on the creativity scale appeared between groups. Exploratory analyses of individual ACL-Cr items found considerable agreement as to what constituted creativeness in implicit theories. At least 75 percent of the respondents in each group described the creative practitioner in their respective fields as imaginative, inventive, and adventurous. Disciplinary differences among groups surfaced in 21 traits on the ACL-Cr scale that were statistically significant in areas of artistic creativity, scientific creativity, intelligence, self-confidence, and task orientation. Further, the creative interior design practitioner was perceived as significantly more individualistic and original than in the other three fields, and sixteen other traits significantly differentiated interior design profiles from those posited in architecture, landscape architecture, or engineering. By promoting a scholarship of integration, findings reveal perceived traits of the creative practitioner in allied fields and advance interdisciplinary understanding. Design educators are encouraged to reflect on their own implicit theories of creativity and those of their students to acknowledge underlying assumptions about creativity that can influence innovation and collaboration. [source]