Different Assumptions

Selected Abstracts


Forks in the Road: Choices in Procedures for Designing Wildland Linkages

CONSERVATION BIOLOGY, Issue 4 2008
PAUL BEIER
Abstract: Models are commonly used to identify lands that will best maintain the ability of wildlife to move between wildland blocks through matrix lands after the remaining matrix has become incompatible with wildlife movement. We offer a roadmap of 16 choices and assumptions that arise in designing linkages to facilitate movement or gene flow of focal species between 2 or more predefined wildland blocks. We recommend designing linkages to serve multiple (rather than one) focal species likely to serve as a collective umbrella for all native species and ecological processes, explicitly acknowledging untested assumptions, and using uncertainty analysis to illustrate potential effects of model uncertainty. Such uncertainty is best displayed to stakeholders as maps of modeled linkages under different assumptions. We also recommend modeling corridor dwellers (species that require more than one generation to move their genes between wildland blocks) differently from passage species (for which an individual can move between wildland blocks within a few weeks). We identify a problem, which we call the subjective translation problem, that arises because the analyst must subjectively decide how to translate measurements of resource selection into resistance. This problem can be overcome by estimating resistance from observations of animal movement, genetic distances, or interpatch movements. There is room for substantial improvement in the procedures used to design linkages robust to climate change and in tools that allow stakeholders to compare an optimal linkage design to alternative designs that minimize costs or achieve other conservation goals. [source]
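
The "subjective translation problem" can be made concrete with a minimal sketch: two equally defensible rules for converting habitat suitability into movement resistance. The functional forms and the random grid below are illustrative assumptions, not translations used by Beier and colleagues.

```python
# A minimal sketch of the "subjective translation problem": two defensible
# rules for converting habitat suitability (0-1) into movement resistance.
import numpy as np

rng = np.random.default_rng(0)
suitability = rng.uniform(0.05, 1.0, size=(5, 5))  # hypothetical habitat scores

def resistance_linear(s):
    """Analyst A: resistance as the simple inverse of suitability."""
    return 1.0 / s

def resistance_exponential(s, c=8.0):
    """Analyst B: resistance rising exponentially as suitability falls."""
    return np.exp(c * (1.0 - s))

r_a = resistance_linear(suitability)
r_b = resistance_exponential(suitability)

# Both rules rank cells identically, but they weight poor habitat very
# differently, so accumulated costs along candidate paths (and hence the
# modelled linkage) can change with the analyst's subjective choice.
print(r_a.min(), r_a.max())   # spans roughly 1-20
print(r_b.min(), r_b.max())   # spans roughly 1-2000
```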


Writing the "Show,Me" Standards: Teacher Professionalism and Political Control in U.S. State Curriculum Policy

CURRICULUM INQUIRY, Issue 3 2002
Margaret Placier
This qualitative case study analyzes the process of writing academic standards in one U.S. state, Missouri. The researchers took a critical pragmatic approach, which entailed close examination of the intentions and interactions of various participants in the writing process (teachers, politicians, business leaders, the public), in order to understand the text that was finally produced. School reform legislation delegated responsibility for writing the standards to a teacher work group, but the teachers found that their "professional" status and their intention to write standards that reflected a "constructivist" view of knowledge would meet with opposition. Politicians, who held different assumptions about the audience, organization, and content of the standards, exercised their greater power to control the outcome of the process. As the researchers analyzed public records and documents generated during the writing process, they constructed a chronological narrative detailing points of tension among political actors. From the narrative, they identified four conflicts that significantly influenced the final wording of the standards. They argue that as a consequence of these conflicts, Missouri's standards are characterized by a dichotomous view of content and process; bland, seemingly value-neutral language; and lack of specificity. Such conflicts and outcomes are not limited to this context. A comparative, international perspective shows that they seem to occur when groups in societies marked by political conflicts over education attempt to codify what "all students should know." [source]


MANDATORY HIV TESTING IN PREGNANCY: IS THERE EVER A TIME?

DEVELOPING WORLD BIOETHICS, Issue 1 2008
RUSSELL ARMSTRONG
ABSTRACT Despite recent advances in ways to prevent transmission of HIV from a mother to her child during pregnancy, infants continue to be born and become infected with HIV, particularly in southern Africa where HIV prevalence is the highest in the world. In this region, emphasis has shifted from voluntary HIV counselling and testing to routine testing of women during pregnancy. There have also been proposals for mandatory testing. Could mandatory testing ever be an option, even in high-prevalence settings? Many previous examinations of mandatory testing have dealt with it in the context of low HIV prevalence and a well-resourced health care system. In this discussion, different assumptions are made. Within this context, where mandatory testing may be a strategy of last resort, the objections to it are reviewed. Special attention is paid in the discussion to the entrenched vulnerability of women in much of southern Africa and how this contributes to both HIV prevalence and ongoing challenges for preventing HIV transmission during pregnancy. While mandatory testing is ethically plausible, particularly when coupled with guaranteed access to treatment and care, the discussion argues that the moment to employ this strategy has not yet come. Many barriers remain for pregnant women in terms of access to testing, treatment and care, most acutely in the southern African setting, despite the presence of national and international human rights instruments aimed at empowering women and removing such barriers. While this situation persists, mandatory HIV testing during pregnancy cannot be justified. [source]


Body size and host range in European Heteroptera

ECOGRAPHY, Issue 1 2000
Martin Brändle
We used data on body size and host range (an inverse measure of specialisation) of phytophagous Heteroptera in central Europe to analyse the relationship between body size and specialisation: 1) we found a clear positive relationship between body size and host range using species as independent data points. 2) However, a nested analysis of variance showed that most of the variance in body size occurred at higher taxonomic levels, whereas most of the variance in host specialisation occurred between species, which suggests different phylogenetic inertia of body size and specialisation. Nevertheless, using means at different higher taxonomic levels, there is still a significant positive correlation between body size and host range. 3) With more sophisticated methods of correcting for the phylogenetic relatedness between species, the positive correlation between body size and host range still holds, despite the different assumptions of each method. Thus, the relationship between body size and host range is a very robust pattern in true bugs. [source]


Towards an integration of ecological stoichiometry and the metabolic theory of ecology to better understand nutrient cycling

ECOLOGY LETTERS, Issue 5 2009
Andrew P. Allen
Abstract Ecologists have long recognized that species are sustained by the flux, storage and turnover of two biological currencies: energy, which fuels biological metabolism, and materials (i.e. chemical elements), which are used to construct biomass. Ecological theories often describe the dynamics of populations, communities and ecosystems in terms of either energy (e.g. population-dynamics theory) or materials (e.g. resource-competition theory). These two classes of theory have been formulated using different assumptions, and yield distinct, but often complementary, predictions for the same or similar phenomena. For example, the energy-based equation of von Bertalanffy and the nutrient-based equation of Droop both describe growth. Yet, there is relatively little theoretical understanding of how these two distinct classes of theory, and the currencies they use, are interrelated. Here, we begin to address this issue by integrating models and concepts from two rapidly developing theories: the metabolic theory of ecology and ecological stoichiometry theory. We show how combining these theories, using recently published theory and data along with new theoretical formulations, leads to novel predictions on the flux, storage and turnover of energy and materials that apply to animals, plants and unicells. The theory and results presented here highlight the potential for developing a more general ecological theory that explicitly relates the energetics and stoichiometry of individuals, communities and ecosystems to subcellular structures and processes. We conclude by discussing the basic and applied implications of such a theory, and the prospects and challenges for further development. [source]
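
For reference, the two growth models the abstract contrasts can be written in their standard textbook forms (an orientation sketch; the von Bertalanffy exponent is variously taken as 2/3 or, in metabolic-theory work, 3/4):

```latex
% Energy-based von Bertalanffy growth (mass form):
\frac{dm}{dt} = a\,m^{3/4} - b\,m
% m: body mass; a: resource-assimilation coefficient; b: maintenance cost.

% Nutrient-based Droop growth:
\mu = \mu'_{\infty}\left(1 - \frac{Q_{\min}}{Q}\right)
% \mu: specific growth rate; Q: internal nutrient quota per cell;
% Q_min: minimum quota at which growth ceases.
```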


Two Competing Models of How People Learn in Games

ECONOMETRICA, Issue 6 2002
Ed Hopkins
Reinforcement learning and stochastic fictitious play are apparent rivals as models of human learning. They embody quite different assumptions about the processing of information and optimization. This paper compares their properties and finds that they are far more similar than previously thought. In particular, the expected motion of stochastic fictitious play and reinforcement learning with experimentation can both be written as a perturbed form of the evolutionary replicator dynamics. Therefore they will in many cases have the same asymptotic behavior. Notably, local stability of mixed equilibria under stochastic fictitious play implies local stability under perturbed reinforcement learning. The main identifiable difference between the two models is speed: stochastic fictitious play gives rise to faster learning. [source]
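
The shared structure can be sketched schematically (this is an orientation sketch, not Hopkins' precise statement; the perturbation term and its scale differ between the two learning models):

```latex
% Expected motion of both learning processes, written as a perturbed
% replicator dynamic for the probability x_i of playing strategy i:
\dot{x}_i = x_i\Big(u_i(x) - \sum_j x_j\,u_j(x)\Big) + \varepsilon\,g_i(x)
% The first term is the standard evolutionary replicator dynamic (payoff
% u_i relative to the population average); the perturbation term
% \varepsilon g_i(x) arises from experimentation (reinforcement learning)
% or from smoothed best responses (stochastic fictitious play).
```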


Road pricing: lessons from London

ECONOMIC POLICY, Issue 46 2006
Georgina Santos
SUMMARY This paper assesses the original London Congestion Charging Scheme (LCCS) and its impacts, and it simulates the proposed extension which will include most of Kensington and Chelsea. It also touches upon the political economy of the congestion charge and the increase of the charge from £5 to £8 per day. The possibility of transferring the experience to Paris, Rome and New York is also discussed. The LCCS has had positive impacts, despite the considerable political influences on the charge level and location. It is difficult to assess the impacts of the increase of the charge from £5 to £8, which took place in July 2005, because no data have yet been released by Transport for London. The proposed extension of the charging zone does not seem to be an efficient change on economic grounds, at least for the specific boundaries, method of charging and level of charging that are currently planned. Our benefit-cost ratios computed under different assumptions of costs and benefits are all below unity. Overall, the experience shows that simple methods of congestion charging, though in no way resembling first-best Pigouvian taxes, can do a remarkably good job of creating benefits from the reduction of congestion. Nevertheless, the magnitude of these benefits can be highly sensitive to the details of the scheme, which therefore need to be developed with great care. – Georgina Santos and Gordon Fraser [source]
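
As a minimal illustration of why such ratios are assumption-sensitive, the sketch below computes a benefit-cost ratio for a hypothetical scheme extension under two sets of placeholder numbers; none of the figures come from the paper or from Transport for London.

```python
# Toy illustration of why benefit-cost ratios depend on assumptions (all
# figures are invented placeholders, not Transport for London data).

def present_value(annual_amount, years, rate):
    """Discounted sum of a constant annual flow over the given horizon."""
    return sum(annual_amount / (1 + rate) ** t for t in range(1, years + 1))

def bcr(annual_benefit, setup_cost, annual_cost, years=10, rate=0.035):
    benefits = present_value(annual_benefit, years, rate)
    costs = setup_cost + present_value(annual_cost, years, rate)
    return benefits / costs

# One set of assumptions puts the extension marginally above unity...
print(round(bcr(annual_benefit=100, setup_cost=200, annual_cost=70), 2))  # ~1.06
# ...while slightly less favourable assumptions push it well below, the
# pattern the authors report for the proposed western extension.
print(round(bcr(annual_benefit=75, setup_cost=250, annual_cost=80), 2))   # ~0.68
```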


ACCUMULATING DOBZHANSKY-MULLER INCOMPATIBILITIES: RECONCILING THEORY AND DATA

EVOLUTION, Issue 6 2004
John J. Welch
Abstract Theoretical models of the accumulation of Dobzhansky-Muller incompatibilities (DMIs) are studied, in particular the framework introduced by Orr (1995) and a verbal model introduced by Kondrashov et al. (2002). These models embody very different assumptions about the relationship between the substitution process underlying evolutionary divergence and the formation of incompatibilities. These differences have implications for our ability to make inferences about the divergence process from patterns in the relevant data. With this in mind, the models are investigated for their ability to account for three patterns evident in these data: (1) the asymmetrical nature of incompatibilities under reciprocal introgression; (2) the finding that multiple concurrent introgressions may be necessary for an incompatibility to form; and (3) the finding that the probability of obtaining an incompatibility by introgressing a single amino acid remains roughly constant over a wide range of genetic distances. None of the models available in the literature can account for all of the empirical patterns. However, modified versions of the models can do so. Ways of discriminating between the different models are then discussed. [source]
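
The combinatorial core of Orr's (1995) framework helps fix ideas (a standard sketch, stated here under the simplest independence assumption):

```latex
% If each pair of substitutions has a small independent probability p of
% being incompatible, then after K substitutions the expected number of
% pairwise DMIs is
E[I] = p\binom{K}{2} = \frac{p\,K(K-1)}{2} \;\approx\; \frac{p}{2}\,K^{2},
% so incompatibilities "snowball", growing faster than linearly with
% divergence. The Kondrashov et al. (2002) verbal model makes different
% assumptions about which substitutions can interact, which is why the
% two frameworks fit the three empirical patterns differently.
```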


A framework for incorporating climate regime shifts into the management of marine resources

FISHERIES MANAGEMENT & ECOLOGY, Issue 2 2006
J. R. KING
Abstract: It is possible to use an ecosystem-based management approach to incorporate knowledge of climate regime impacts on ecosystem productivity into the management of fishery resources. Doing so requires the development of a coherent framework that can be built from existing stock assessment and management activities: ecosystem assessment, risk analyses, adaptive management and reference points. This paper builds such a framework and uses two population simulations to illustrate the benefits and tradeoffs of variable regime-specific harvest rates. The framework does not require prediction of regime shifts, but assumes that detection can occur soon after one has happened. As such, decisions do not need to be coincident with regime shifts, but can be delayed by an appropriate period of time that is linked to a species' life history, i.e. age of maturity or recruitment. Fisheries scientists should provide harvest recommendations that reflect a range of levels of risk to the stock under different assumptions of productivity. Coupling ecosystem assessment with ecosystem-based management would allow managers to select appropriate regime-specific harvest rates. [source]
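
The trade-off the paper simulates can be illustrated with a toy logistic stock whose productivity shifts between regimes. This is a minimal sketch; the regime-shift probability, growth rates and harvest rates below are invented placeholders, not the paper's parameterization.

```python
# Toy comparison of a fixed harvest rate versus a regime-matched rate for
# a logistic stock whose intrinsic productivity r depends on the regime.
import random

def simulate(harvest_for_regime, years=100, K=1000.0, seed=1):
    random.seed(seed)
    stock, regime, total_catch = 500.0, "high", 0.0
    r = {"high": 0.6, "low": 0.2}            # regime-specific productivity
    for _ in range(years):
        if random.random() < 0.05:           # rare, persistent regime shifts
            regime = "low" if regime == "high" else "high"
        h = harvest_for_regime(regime)       # assumed detected with no lag
        removed = h * stock
        growth = r[regime] * stock * (1 - stock / K)
        stock = max(stock + growth - removed, 1.0)
        total_catch += removed
    return stock, total_catch

fixed = simulate(lambda reg: 0.2)                               # one rate always
adaptive = simulate(lambda reg: 0.25 if reg == "high" else 0.05)  # regime-matched
print("fixed rate:   final stock %.0f, total catch %.0f" % fixed)
print("regime-based: final stock %.0f, total catch %.0f" % adaptive)
```

A fixed rate tuned to the productive regime keeps removing 20% of the stock through low-productivity periods, eroding the stock, whereas the regime-matched rule trades some catch in poor regimes for stock persistence.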


Subglacial drainage system structure and morphology of Brewster Glacier, New Zealand

HYDROLOGICAL PROCESSES, Issue 3 2009
Ian Willis
Abstract Global positioning system and ground-penetrating radar surveys were used to produce digital elevation models of the surface and bed of Brewster Glacier. These are used to derive maps of subglacial hydraulic potential and drainage system structure using three different assumptions about the subglacial water pressure (Pw): (i) Pw = ice overburden; (ii) Pw = half ice overburden; (iii) Pw = atmospheric. Additionally, 16 dye-tracing experiments at 12 locations were performed through a summer melt season. Dye return curve shape, together with calculations of transit velocity, dispersivity and storage, are used to infer the likely morphology of the subglacial drainage system. Taken together, the data indicate that the glacier is underlain by a channelised but hydraulically inefficient drainage system in the early summer, in which water pressures are close to ice overburden. By mid-summer, water pressures are closer to half ice overburden and the channelised drainage system is more hydraulically efficient. Surface streams that enter the glacier close to the location of major subglacial drainage pathways are routed quickly to the channels and then to the glacier snout. Streams that enter the glacier further away from the drainage pathways are routed slowly to the channels and then to the snout because they first flow through a distributed drainage system. Copyright © 2008 John Wiley & Sons, Ltd. [source]
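
The three potential maps rest on the standard Shreve-type hydraulic potential; writing it out makes the role of the water-pressure assumption explicit (generic notation, not copied from the paper):

```latex
% Subglacial hydraulic potential at a point with bed elevation z_b and
% ice-surface elevation z_s, with water pressure taken as a fraction k of
% the ice overburden pressure:
\phi = \rho_w g\, z_b \;+\; k\,\rho_i g\,(z_s - z_b)
% k = 1   : P_w = ice overburden        (assumption i)
% k = 0.5 : P_w = half ice overburden   (assumption ii)
% k = 0   : P_w = atmospheric           (assumption iii)
% \rho_w, \rho_i: densities of water and ice; g: gravitational
% acceleration. Water is routed down the gradient of \phi, so each k
% yields a different predicted drainage structure.
```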


A numerical method to solve the m-terms of a submerged body with forward speed

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 5 2002
W.-Y. Duan
Abstract To model mathematically the problem of a rigid body moving below the free surface, a control surface surrounding the body is introduced. The linear free surface condition of the steady waves created by the moving body is satisfied. To describe the fluid flow outside this surface, a potential integral equation is constructed using the Kelvin wave Green function, whereas inside the surface a source integral equation is developed adopting a simple Green function. Source strengths are determined by matching the two integral equations through continuity conditions applied to the velocity potential and its normal derivatives along the control surface. After solving for the induced fluid velocity on the body surface and the control surface, an integral equation is derived involving a mixed distribution of sources and dipoles using a simple Green function and one component of the fluid velocity. The normal derivatives of the fluid velocity on the body surface, namely the m-terms, are then solved by this matching integral equation method (MIEM). Numerical results are presented for two elliptical sections moving at a prescribed Froude number and submerged depth, and a sensitivity analysis is undertaken to assess the influence of these parameters. Furthermore, comparisons are performed to analyse the impact of different assumptions adopted in the derivation of the m-terms. It is found that the present method is easy to use in a panel method with satisfactory numerical precision. Copyright © 2002 John Wiley & Sons, Ltd. [source]
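
For orientation, the m-terms referred to here are usually written, in the Ogilvie-Tuck notation of ship hydrodynamics (quoted from the general literature, not from this paper's derivation), as:

```latex
% With n the unit normal on the hull, x the position vector, and W the
% steady flow velocity field relative to the body:
(m_1, m_2, m_3) = -(\mathbf{n}\cdot\nabla)\,\mathbf{W}, \qquad
(m_4, m_5, m_6) = -(\mathbf{n}\cdot\nabla)(\mathbf{x}\times\mathbf{W})
% They involve second derivatives of the steady potential evaluated on
% the body surface, which is why a dedicated integral-equation method
% such as MIEM is needed to evaluate them accurately.
```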


Review of generative models for the narrowband land mobile satellite propagation channel

INTERNATIONAL JOURNAL OF SATELLITE COMMUNICATIONS AND NETWORKING, Issue 4 2008
F. P. Fontan
Abstract The land mobile satellite (LMS) propagation channel is frequently described using statistical models. These models usually make different assumptions regarding the behavior of the direct signal, the diffuse multipath component and the shadowing effects. This paper analyzes the theoretical formulation and implementation of time-series synthesizers based on three typical statistical models: Loo, Corazza-Vatalaro and Suzuki, describing their similarities and differences. The discussion is not limited to the amplitude of the complex envelope but also covers the phase variations and Doppler spectra. Finally, guidelines are also provided for comparing model parameters supplied by different authors. Copyright © 2008 John Wiley & Sons, Ltd. [source]
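
As an illustration of what such a synthesizer involves, the sketch below generates a Loo-model envelope: a lognormally shadowed direct ray plus Rayleigh (complex Gaussian) diffuse multipath. The parameter values and the simple one-pole shadowing filter are assumptions for illustration, not the calibrated settings discussed in the paper.

```python
# Minimal Loo-model time-series synthesizer: direct component with
# correlated lognormal shadowing plus diffuse Rayleigh multipath.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000                      # number of samples
mu, sigma = 0.0, 0.3            # lognormal shadowing of the direct ray
mp_power = 0.1                  # average diffuse multipath power

# Correlated lognormal shadowing: low-pass-filtered Gaussian, exponentiated.
g = rng.standard_normal(n)
for i in range(1, n):           # one-pole filter gives slow shadowing
    g[i] = 0.99 * g[i - 1] + np.sqrt(1 - 0.99**2) * g[i]
direct = np.exp(mu + sigma * g)

# Diffuse component: complex Gaussian, hence Rayleigh-distributed amplitude.
multipath = np.sqrt(mp_power / 2) * (rng.standard_normal(n)
                                     + 1j * rng.standard_normal(n))

phase = rng.uniform(0, 2 * np.pi, n)   # direct-ray phase (Doppler not modelled)
envelope = np.abs(direct * np.exp(1j * phase) + multipath)
print(envelope.mean(), envelope.std())
```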


The intersection of scientific and indigenous ecological knowledge in coastal Melanesia: implications for contemporary marine resource management

INTERNATIONAL SOCIAL SCIENCE JOURNAL, Issue 187 2006
Simon Foale
Fundamental differences in the worldviews of western marine scientists and coastal Melanesian fishers have resulted in very different conclusions being drawn from similar sets of observations. The same inductive logic may lead both scientists and indigenous fishers to conclude that, say, square-tail trout aggregate at a certain phase of the moon in a certain reef passage, but different assumptions derived from disparate worldviews may lead to very different conclusions about why the fish are there. In some cases these differences have significant implications for the way marine resources are (or are not) exploited and managed. Here I analyse examples of what I call empirical gaps in both scientific and indigenous knowledge concerning the biology and ecology of fished organisms that in some cases have led to the poor management of stocks of these species. I argue that scientific education can complement indigenous knowledge systems and thus lead to improved resource management, despite some claims that scientific and indigenous knowledge systems are incommensurable. [source]


Two Faces of Liberalism: Kant, Paine, and the Question of Intervention

INTERNATIONAL STUDIES QUARTERLY, Issue 3 2008
Thomas C. Walker
Compared with the realist tradition, relatively few students of international relations explore variations within liberalism. This paper introduces a particular interpretation of Immanuel Kant's evolutionary liberalism and then compares it with Thomas Paine's revolutionary liberalism. Paine was an ebullient optimist while Kant was more guarded and cautious. These different assumptions lead to distinct liberal views on voting rights, how trade fosters peace, and defense policies. The most striking disagreement, and one that endures in contemporary liberal circles, revolves around the question of military interventions to spread democratic rule. Kant advocated nonintervention while Paine actively pursued military intervention to spread democratic rule. Differences between Kant and Paine represent some enduring tensions still residing within the liberal tradition in international relations. [source]


Fear in International Politics: Two Positions

INTERNATIONAL STUDIES REVIEW, Issue 3 2008
Shiping Tang
There are two, and only two, fundamental positions in the international relations (IR) literature on how to cope with the fear that derives from uncertainty over others' intentions. Because these two positions cannot be deduced from other bedrock assumptions within the different IR approaches, they should be taken as an additional bedrock assumption. The first position, held by offensive realism, insists that states should assume the worst about others' intentions, thus essentially eliminating the uncertainty about others' intentions. The second position, held by a more diverse group of non-offensive realism theories, insists that states should not always assume the worst about others' intentions and that states can and should take measures to reduce uncertainty about each other's intentions and thus fear. These two different assumptions are central to the logic of the different theoretical approaches and underpin some of the fundamental differences between offensive realism on the one side and non-offensive realism theories on the other. Making the two positions explicit helps us understand IR theories and makes dialogue among non-offensive realism theories possible. [source]


Effect of non-response bias in pressure ulcer prevalence studies

JOURNAL OF ADVANCED NURSING, Issue 2 2006
Nils Lahmann BA RN
Aim: This paper reports a study to determine the prevalence of pressure ulcers in German hospitals and nursing homes for national and international comparison, and analyses the influence of non-response bias. Background: Outcome rates are often used to evaluate provider performance. The prevalence of pressure ulcers is seen as a possible parameter of healthcare outcome quality. However, the results from different pressure ulcer prevalence studies cannot be compared, because there is no standardized methodology and terminology. Observed and published prevalence rates may reflect variations in quality of care, but differences could also relate to differences in case-mix or to random variation. Methods: A point prevalence survey was carried out for 2002 and 2003 using data from 21,574 patients and residents in 147 different kinds of institutions throughout Germany. Participation rates and reasons for not participating in the study were documented. Non-responders were considered in different calculations to show the range of possible prevalence rates for a hypothetical 100% participation. Results: In 2002 and 2003, the calculated prevalence rate (among participating persons at risk) in hospitals was 25·1% and 24·2% respectively, while in nursing homes it was 17·3% and 12·5% respectively. Non-response varied from 15·1% to 25·1%. The majority of non-responders in hospitals and nursing homes had not been willing to participate in the study. Based on different assumptions about the characteristics of the non-responders, we calculated minimum and maximum prevalence rates as if 100% participation had been achieved. Conclusions: Calculating the non-response bias of prevalence rates is an inconvenient but necessary thing to do, because its influence on calculated prevalence rates was high in this study. High participation rates in clinical studies will minimize non-response bias. If non-response cannot be avoided, the formula provided will help researchers calculate possible minimum and maximum prevalence rates for the total sample of both the responding and non-responding groups. [source]
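
The bounding logic described in the conclusions can be sketched directly: with r responders (x of them with a pressure ulcer) and m non-responders, the prevalence at 100% participation must lie between "no non-responder affected" and "every non-responder affected". The numbers below are hypothetical, not the study's data.

```python
# Minimum/maximum prevalence bounds under full participation.
def prevalence_bounds(x, r, m):
    """x: responders with ulcer, r: responders, m: non-responders."""
    n = r + m
    observed = x / r
    minimum = x / n           # assume no non-responder has an ulcer
    maximum = (x + m) / n     # assume every non-responder has one
    return observed, minimum, maximum

obs, lo, hi = prevalence_bounds(x=250, r=1000, m=200)
print(f"observed {obs:.1%}, true value bounded in [{lo:.1%}, {hi:.1%}]")
# With ~20% non-response the bounds are wide, which is why the authors
# describe the influence of non-response bias as high.
```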


Bayesian comparison of test-day models under different assumptions of heterogeneity for the residual variance: the change point technique versus arbitrary intervals

JOURNAL OF ANIMAL BREEDING AND GENETICS, Issue 1 2004
P. López-Romero
Summary Test-day milk yields from Spanish Holstein cows were analysed with two random regression models based on Legendre polynomials under two different assumptions of heterogeneity of residual variance, each aiming to describe the variability of temporary measurement errors along days in milk with a reduced number of parameters: (i) the change point identification technique with two unknown change points, and (ii) 10 arbitrary intervals of residual variance. Both implementations were based on a previous study in which the trajectory of the residual variance was estimated using 30 intervals. The change point technique has previously been implemented in the analysis of the heterogeneity of the residual variance in the Spanish population, yet no comparisons with other methods have been reported so far. This study aims to compare the change point identification technique versus the use of arbitrary intervals as two possible techniques to deal with the characterization of the residual variance in random regression test-day models. The Bayes factor and the cross-validation predictive densities were employed for model assessment. The two model-selecting tools revealed a strong consistency between them, and the two specifications for the residual variance were close to each other. The 10-interval modelling showed a slightly better performance, probably because the change point function overestimates the residual variance in very early lactation. [source]
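
The two residual-variance structures being compared can be written schematically for days in milk t (an orientation sketch; the paper's exact functional forms may differ):

```latex
% Change-point technique with unknown change points \tau_1 < \tau_2:
\sigma^2_e(t) =
\begin{cases}
\sigma^2_1 & t < \tau_1\\
\sigma^2_2 & \tau_1 \le t < \tau_2\\
\sigma^2_3 & t \ge \tau_2
\end{cases}
% versus a step function over 10 fixed intervals [t_0,t_1),\dots,[t_9,t_{10}),
% with a separate variance in each. The key difference: the change points
% \tau_1, \tau_2 are estimated from the data, while the 10 interval
% boundaries are chosen arbitrarily in advance.
```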


Real Options Analysis: Where Are the Emperor's Clothes?

JOURNAL OF APPLIED CORPORATE FINANCE, Issue 2 2005
Adam Borison
Once a topic of interest only to finance specialists, real options analysis now receives active, mainstream attention in business schools and industry. This article provides practitioners with a critical review of five well-established real options approaches that are extensively documented in the academic and professional literature. These approaches include the "classic approach" and "revised classic approach" (as proposed by Martha Amram and Nalin Kulatilaka), the "subjective approach" (as proposed by Tim Luehrman), the "MAD Approach" (as proposed by Tom Copeland and Vladimir Antikarov), and the "integrated approach" (as proposed by James Smith and Robert Nau). The article discusses the assumptions, mechanics, and potential range of applications of each approach, along with the results when applied to a simple case involving development of a natural gas field. While the approaches share a focus on investment flexibility and shareholder value, they rely on fundamentally different assumptions, use significantly different techniques, and can produce dramatically different results. Consequently, a great deal of thought must go into selecting and applying them in practice. The revised classic approach appears to be best suited to cases dominated either by "market" risk or "private" risk alone, and where approximate results are acceptable and resources are limited. The integrated approach is best suited to cases with a mix of market and technological risks, and where accuracy and a management roadmap are critical. [source]


Hierarchical spatial models for predicting pygmy rabbit distribution and relative abundance

JOURNAL OF APPLIED ECOLOGY, Issue 2 2010
Tammy L. Wilson
Summary 1. Conservationists routinely use species distribution models to plan conservation, restoration and development actions, while ecologists use them to infer process from pattern. These models tend to work well for common or easily observable species, but are of limited utility for rare and cryptic species. This may be because honest accounting of known observation bias and spatial autocorrelation is rarely included, thereby limiting statistical inference from the resulting distribution maps. 2. We specified and implemented a spatially explicit Bayesian hierarchical model for a cryptic mammal species (pygmy rabbit Brachylagus idahoensis). Our approach used two levels of indirect sign that are naturally hierarchical (burrows and faecal pellets) to build a model that allows for inference on regression coefficients as well as spatially explicit model parameters. We also produced maps of rabbit distribution (occupied burrows) and relative abundance (number of burrows expected to be occupied by pygmy rabbits). The model demonstrated statistically rigorous spatial prediction by including spatial autocorrelation and measurement uncertainty. 3. We demonstrated the flexibility of our modelling framework by depicting probabilistic distribution predictions using different assumptions of pygmy rabbit habitat requirements. 4. Spatial representations of the variance of posterior predictive distributions were obtained to evaluate heterogeneity in model fit across the spatial domain. Leave-one-out cross-validation was conducted to evaluate the overall model fit. 5. Synthesis and applications. Our method draws on the strengths of previous work, thereby bridging and extending two active areas of ecological research: species distribution models and multi-state occupancy modelling. Our framework can be extended to encompass both larger extents and other species for which direct estimation of abundance is difficult. [source]
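
The two-level structure can be sketched in generic multi-state occupancy notation (an illustration of the model class only; the paper's link functions, priors and exact state definitions may differ):

```latex
% Site i: latent occupancy z_i with covariates x_i and a spatial random
% effect \eta_i; indirect-sign observations y_{ij} conditional on z_i:
z_i \sim \mathrm{Bernoulli}(\psi_i), \qquad
\mathrm{logit}(\psi_i) = \mathbf{x}_i^{\top}\boldsymbol{\beta} + \eta_i,
\qquad
y_{ij} \mid z_i \sim \mathrm{Bernoulli}(z_i\, p_{ij})
% \eta_i carries the spatial autocorrelation (e.g. via a CAR prior) and
% p_{ij} absorbs the imperfect detection of indirect sign, so inference
% on \beta and the spatial field proceeds jointly in one hierarchy.
```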


The PLS model space revisited

JOURNAL OF CHEMOMETRICS, Issue 2 2009
Svante Wold
Abstract Pell, Ramos and Manne (PRM) in a recent article in this journal claim that the 'conventional' PLS algorithm with orthogonal scores has an inherent inconsistency, in that it uses different model spaces for calculating the prediction model coefficients and for calculating the X-space model and its residuals [1]. We disagree with PRM. All PLS model scores, residuals, coefficients, etc., obtained by the conventional PLS algorithm do come from the same underlying latent variable (LV) model, and not from different models or model spaces as PRM suggest. PRM have simply posed a different model with different assumptions and obtained slightly different results, as should have been expected. Copyright © 2008 John Wiley & Sons, Ltd. [source]
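
For readers without the algorithm at hand, here is a minimal orthogonal-scores PLS1 sketch of the kind at issue: scores come from the successively deflated X, while the prediction coefficients are assembled from the weights W, loadings P and y-loadings q. This is an illustrative implementation under standard textbook conventions, not code from either paper.

```python
# Minimal orthogonal-scores (NIPALS-style) PLS1 regression.
import numpy as np

def pls1(X, y, n_comp):
    X, y = X.copy().astype(float), y.copy().astype(float)
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = X.T @ y
        w /= np.linalg.norm(w)      # weight vector
        t = X @ w                    # score from the current (deflated) X
        tt = t @ t
        p = X.T @ t / tt             # X-loading
        q = (y @ t) / tt             # y-loading
        X -= np.outer(t, p)          # deflation: source of score orthogonality
        y -= t * q
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    # Coefficients expressed in the original X: B = W (P'W)^{-1} q
    return W @ np.linalg.solve(P.T @ W, Q)

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 6))
y = X @ np.array([1.0, 0.5, 0, 0, 0, 0]) + 0.1 * rng.standard_normal(50)
print(pls1(X, y, n_comp=2))   # should recover roughly [1.0, 0.5, 0, ...]
```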


VOTING, INEQUALITY AND REDISTRIBUTION

JOURNAL OF ECONOMIC SURVEYS, Issue 1 2007
Rainald Borck
Abstract This paper surveys models of voting on redistribution. Under reasonable assumptions, the baseline model produces an equilibrium with the extent of redistributive taxation chosen by the median income earner. If the median is poorer than average, redistribution is from rich to poor, and increasing inequality increases redistribution. However, under different assumptions about the economic environment, redistribution may not be simply rich to poor, and inequality need not increase redistribution. Several lines of argument are presented, in particular, political participation, public provision of private goods, public pensions, and tax avoidance or evasion. [source]
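
The baseline logic can be made concrete with a stripped-down example (illustrative functional forms chosen for tractability; the surveyed models differ in detail):

```latex
% Voter i with income y_i, linear tax t, a lump-sum transfer financed by
% t\bar{y}, and a quadratic distortion cost:
u_i(t) = (1-t)\,y_i + t\,\bar{y} - \tfrac{\lambda}{2}\,t^{2}\,\bar{y}
% The first-order condition gives the preferred tax rate
t_i^{*} = \frac{\bar{y} - y_i}{\lambda\,\bar{y}}
% so the median voter's choice t^{*} = (\bar{y} - y_m)/(\lambda\bar{y}) is
% positive exactly when median income y_m lies below the mean \bar{y}, and
% rises with the mean-median gap: the baseline inequality-redistribution
% link that the surveyed extensions then qualify.
```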


An Approach to Evaluating the Missing Data Assumptions of the Chain and Post-stratification Equating Methods for the NEAT Design

JOURNAL OF EDUCATIONAL MEASUREMENT, Issue 1 2008
Paul W. Holland
Two important types of observed score equating (OSE) methods for the non-equivalent groups with Anchor Test (NEAT) design are chain equating (CE) and post-stratification equating (PSE). CE and PSE reflect two distinctly different ways of using the information provided by the anchor test for computing OSE functions. Both types of methods include linear and nonlinear equating functions. In practical situations, it is known that the PSE and CE methods will give different results when the two groups of examinees differ on the anchor test. However, given that both types of methods are justified as OSE methods by making different assumptions about the missing data in the NEAT design, it is difficult to conclude which, if either, of the two is more correct in a particular situation. This study compares the predictions of the PSE and CE assumptions for the missing data using a special data set for which the usually missing data are available. Our results indicate that in an equating setting where the linking function is decidedly non-linear and CE and PSE ought to be different, both sets of predictions are quite similar but those for CE are slightly more accurate. [source]
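
The two ways of using the anchor can be sketched in schematic OSE notation (population subscripts simplified; see the paper for precise definitions):

```latex
% Chain equating (CE) composes two single-group links through anchor A:
% X to A on population P, then A to Y on population Q:
e^{CE}_{Y}(x) \;=\; e_{A \to Y;\,Q}\!\big( e_{X \to A;\,P}(x) \big)
% Post-stratification equating (PSE) instead re-weights to a synthetic
% population T = wP + (1-w)Q, assuming conditional score distributions
% given anchor score a are population-invariant:
f_T(x) \;=\; \sum_{a} f(x \mid a)\,\big[\, w\,h_P(a) + (1-w)\,h_Q(a) \big]
% (analogously for Y), then equates the synthetic distributions directly.
% CE and PSE coincide when P and Q share the same anchor distribution,
% and diverge as the groups differ on A, the case examined in the study.
```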


Exploring strategic priorities for regional agricultural R&D investments in East and Central Africa

AGRICULTURAL ECONOMICS, Issue 2 2010
Liangzhi You
JEL codes: O13; O32; O55; Q16. Abstract The 11 countries of East and Central Africa have diverse but overlapping agroclimatic conditions, and could potentially benefit from spillovers of agricultural technology across country borders. This article uses high-resolution spatial data on actual and potential yields for 15 major products across 12 development domains to estimate the total benefits available from the spread of new agricultural technologies around the region. Market responses and welfare gains are estimated using the Dynamic Research Evaluation for Management model, taking account of current and future projections of local and international demand. Results suggest which crops, countries, and agroclimatic regions offer the largest total benefits. Downloadable data and program files permit different assumptions and additional information to be considered in the ongoing process of strategic priority setting. [source]


Bulges versus discs: the evolution of angular momentum in cosmological simulations of galaxy formation

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 1 2008
Jesus Zavala
ABSTRACT We investigate the evolution of angular momentum in simulations of galaxy formation in a cold dark matter universe. We analyse two model galaxies generated in the N-body/hydrodynamic simulations of Okamoto et al. Starting from identical initial conditions, but using different assumptions for the baryonic physics, one of the simulations produced a bulge-dominated galaxy and the other a disc-dominated galaxy. The main difference is the treatment of star formation and feedback, both of which were designed to be more efficient in the disc-dominated object. We find that the specific angular momentum of the disc-dominated galaxy tracks the evolution of the angular momentum of the dark matter halo very closely: the angular momentum grows as predicted by linear theory until the epoch of maximum expansion and remains constant thereafter. By contrast, the evolution of the angular momentum of the bulge-dominated galaxy resembles that of the central, most bound halo material: it also grows at first according to linear theory, but 90 per cent of it is rapidly lost as pre-galactic fragments, into which gas had cooled efficiently, merge, transferring their orbital angular momentum to the outer halo by tidal effects. The disc-dominated galaxy avoids this fate because the strong feedback reheats the gas, which accumulates in an extended hot reservoir and only begins to cool once the merging activity has subsided. Our analysis lends strong support to the classical theory of disc formation whereby tidally torqued gas is accreted into the centre of the halo conserving its angular momentum. [source]


Giant repeated ejections from GRS 1915+105

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 1 2000
R. P. Fender
We report simultaneous millimetre and infrared observations of a sequence of very large amplitude quasi-periodic oscillations from the black hole X-ray binary GRS 1915+105. These oscillations are near the end of a sequence of over 700 repeated events as observed at 15 GHz, and are simultaneous at the mm and infrared wavelengths to within our time resolution (4 min), consistent with the respective emitting regions being physically close near the base of the outflow. One infrared event appears to have no mm counterpart, perhaps owing to highly variable absorption. The overall radio-mm-infrared spectrum around the time of the observations does suggest some absorption at lower frequencies. We calculate the energy and mass-flow into the outflow for a number of different assumptions, and find that the time-averaged power required to produce the observed synchrotron emission cannot be much lower than 3×10³⁸ erg s⁻¹, and is likely to be much larger. This minimum power requirement is found regardless of whether the observed emission arises in discrete ejections or in an internal shock in a quasi-continuous flow. Depending on the similarity of the physical conditions in the two types of ejection, GRS 1915+105 may be supplying more power (and mass, if both have the same baryonic component) to the jet during periods of repeated oscillations than during the more obvious larger events. [source]


Gender identity and adjustment: Understanding the impact of individual and normative differences in sex typing

NEW DIRECTIONS FOR CHILD & ADOLESCENT DEVELOPMENT, Issue 120 2008
Leah E. Lurye
The relationship among gender identity, sex typing, and adjustment has attracted the attention of social and developmental psychologists for many years. However, they have explored this issue with different assumptions and different approaches. Generally the approaches differ regarding whether sex typing is considered adaptive versus maladaptive, measured as an individual or normative difference, and whether gender identity is regarded as a unidimensional or multidimensional construct. In this chapter, we consider both perspectives and suggest that the developmental timing and degree of sex typing, as well as the multidimensionality of gender identity, be considered when examining their relationship to adjustment. © Wiley Periodicals, Inc. [source]


Practice development: purpose, methodology, facilitation and evaluation

NURSING IN CRITICAL CARE, Issue 1 2003
Kim Manley
Summary
- Different approaches to practice development are associated with different assumptions, and these need to be made explicit if practice development is to be transparent, rigorous and systematic in its intentions and approaches.
- A practice development methodology underpinned by critical social science is advocated because it focuses on achieving sustainable change through practitioner enlightenment, empowerment and emancipation and an associated culture, rather than focusing only on technical practice development.
- Implications of different worldviews about practice development for facilitation and outcome evaluation are highlighted.
- Emancipatory practice development underpinned by critical social science is argued to be synonymous with emancipatory action research. [source]


The Challenge of New Science

PERFORMANCE IMPROVEMENT QUARTERLY, Issue 2 2007
Gordon Rowland
A wide range of developments in science in recent years has altered our views of our world and ourselves in significant ways. These views challenge the direction of applied science and technology in many fields, including those associated with learning and performance in organizations. At the same time, they open up opportunities and possibilities. To take advantage, new approaches based on different assumptions are implied. This article summarizes a set of concepts associated with complexity and relates them to work in organizations. The final article of the issue then relates these same concepts to human performance technology. [source]


Personality constructs and measures

PSYCHOLOGY IN THE SCHOOLS, Issue 3 2007
Hedwig Teglasi
A psychological construct, such as personality, is an abstraction that is not directly seen but inferred through observed regularities in cognitive, affective, and behavioral responses in various settings. Two assumptions give meaning to the idea of construct validity. First, constructs represent real phenomena that exist apart from the potential ways in which they are measured. Second, constructs have a causal relation to their measures (see D. Boorsboom, G.J. Mellenbergh, & J. van Heerden, 2004). According to these twin assumptions, variation in a construct such as personality or intelligence causes individual differences in responses to items on measures; it also accounts for performance in real-life settings. An alternative perspective equates a construct with the operation used for its measurement (M. Friedman, 1991) without a presumption of causality. This article elaborates on the implications of different assumptions about measurement, operation-referenced and construct-referenced, for advancing the science and practice of psychology. © 2007 Wiley Periodicals, Inc. Psychol Schs 44: 215-228, 2007. [source]


Causal Inference with Differential Measurement Error: Nonparametric Identification and Sensitivity Analysis

AMERICAN JOURNAL OF POLITICAL SCIENCE, Issue 2 2010
Kosuke Imai
Political scientists have long been concerned about the validity of survey measurements. Although many have studied classical measurement error in linear regression models where the error is assumed to arise completely at random, in a number of situations the error may be correlated with the outcome. We analyze the impact of differential measurement error on causal estimation. The proposed nonparametric identification analysis avoids arbitrary modeling decisions and formally characterizes the roles of different assumptions. We show the serious consequences of differential misclassification and offer a new sensitivity analysis that allows researchers to evaluate the robustness of their conclusions. Our methods are motivated by a field experiment on democratic deliberations, in which one set of estimates potentially suffers from differential misclassification. We show that an analysis ignoring differential measurement error may considerably overestimate the causal effects. This finding contrasts with the case of classical measurement error, which always yields attenuation bias. [source]
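
The contrast the abstract draws can be seen in a small simulation (a sketch under invented error rates, not the paper's field-experiment data): non-differential misclassification of a binary outcome attenuates the estimated effect, while misclassification that is correlated with treatment and outcome can inflate it.

```python
# Differential vs non-differential misclassification of a binary outcome.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
treat = rng.integers(0, 2, n)
y_true = rng.random(n) < (0.30 + 0.10 * treat)   # true causal effect = +0.10

def flip(y, p_flip):
    """Misclassify outcome y element-wise with probability p_flip."""
    flips = rng.random(y.size) < p_flip
    return np.where(flips, ~y, y)

def effect(y):
    return y[treat == 1].mean() - y[treat == 0].mean()

# Non-differential: the same 10% error rate in both arms -> attenuation.
y_nondiff = flip(y_true, np.full(n, 0.10))
# Differential: false positives are more common in the treated arm.
p_diff = np.where(treat == 1, 0.15, 0.02)
y_diff = np.where(y_true, y_true, flip(y_true, p_diff))

print(f"true effect:            {effect(y_true):+.3f}")
print(f"non-differential error: {effect(y_nondiff):+.3f}  (attenuated)")
print(f"differential error:     {effect(y_diff):+.3f}  (overestimated)")
```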