Assumptions Used


Selected Abstracts


Evaluating explicit and implicit routing for watershed hydro-ecological models of forest hydrology at the small catchment scale

HYDROLOGICAL PROCESSES, Issue 8 2001
C. L. Tague
Abstract This paper explores the behaviour and sensitivity of a watershed model used for simulating lateral soil water redistribution and runoff production. In applications such as modelling the effects of land-use change in small headwater catchments, interactions between soil moisture, runoff and ecological processes are important. Because climate, soil and canopy characteristics are spatially variable, both the pattern of soil moisture and the associated outflow must be represented in modelling these processes. This study compares implicit and explicit routing approaches to modelling the evolution of soil moisture pattern and spatially variable runoff production. It also addresses the implications of using different landscape partitioning strategies. This study presents the results of calibration and application of these different routing and landscape partitioning approaches on a 60 ha forested watershed in Western Oregon. For comparison, the different approaches are incorporated into a physically based hydro-ecological model, RHESSys, and the resulting simulated soil moisture, runoff production and sensitivity to unbiased error are examined. Results illustrate that both routing approaches can be calibrated to achieve a reasonable fit between observed and modelled outflow. Calibrated values for effective watershed hydraulic conductivity are higher for the explicit routing approach, which illustrates differences between the two routing approaches in their representation of internal watershed dynamics. The explicit approach illustrates a seasonal shift in drainage organization from watershed to more local control as climate goes from a winter wet to a summer dry period. Assumptions used in the implicit approach maintain the same pattern of drainage organization throughout the season. The implicit approach is also more sensitive to random error in soil and topographic input information, particularly during wetter periods. Comparison between the two routing approaches illustrates the advantage of the explicit routing approach, although the loss of computational efficiency associated with the explicit routing approach is noted. To compare different strategies for partitioning the landscape, the use of a non-grid-based method of partitioning is introduced and shown to be comparable to grid-based partitioning in terms of simulated soil moisture and runoff production. Copyright © 2001 John Wiley & Sons, Ltd. [source]
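
The distinction the paper draws lends itself to a small illustration. The sketch below (a hypothetical one-dimensional hillslope with invented parameter values; this is not RHESSys code) contrasts a static, index-based implicit pattern with explicit cell-to-cell routing, whose pattern can reorganize between wet and dry periods:

```python
import numpy as np

# Toy 1-D hillslope: cell i drains to cell i+1. Values are illustrative only.
n = 5
area = np.cumsum(np.ones(n))                       # upslope contributing area
slope = np.array([0.30, 0.25, 0.20, 0.10, 0.05])   # local gradient tan(beta)

# Implicit (TOPMODEL-style): a static wetness index fixes the relative
# soil-moisture pattern for the whole season, so drainage organization
# cannot shift between winter-wet and summer-dry periods.
wetness_index = np.log(area / slope)

# Explicit: route water cell to cell each time step, so the moisture
# pattern can evolve as storage and inputs change.
def route_explicit(storage, rain, k=0.5, steps=10):
    s = storage.astype(float)
    for _ in range(steps):
        s += rain                      # uniform input per step
        flux = k * s * slope           # downslope transfer, slope-dependent
        s -= flux                      # flux from the last cell is outflow
        s[1:] += flux[:-1]             # receive from the upslope neighbour
    return s

wet = route_explicit(np.ones(n), rain=1.0)    # winter-wet conditions
dry = route_explicit(np.ones(n), rain=0.05)   # summer-dry conditions

print("static wetness index :", np.round(wetness_index, 2))
print("explicit, wet period :", np.round(wet, 2))
print("explicit, dry period :", np.round(dry, 2))
```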


Management of patients with acoustic neuromas: A Markov decision analysis

THE LARYNGOSCOPE, Issue 4 2010
Daniel Morrison MD
Abstract Objectives/Hypothesis: The management of patients with small (<1.5 cm) acoustic neuromas is controversial. Immediate treatment via microsurgical resection or radiosurgery is often advocated. A period of observation is sometimes advised, followed by microsurgery or radiosurgery for tumors that demonstrate growth during the observation period. The purpose of this study is to calculate quality-adjusted life expectancy for the most commonly applied management strategies in hypothetical cohorts of patients of various ages. Study Design: Markov decision analysis; societal perspective. Methods: Assumptions used in creating this model and event probabilities were obtained from a thorough literature review. Key parameters were identified and defined by the best available evidence. The main outcome measure is the benefit derived from each management strategy in quality-adjusted life years (QALYs). Sensitivity analysis was used to define benchmark performance information for these parameters. Results: The benefit of a period of observation followed by radiosurgery, if needed, for significant tumor growth is greater than that of all other strategies for all age groups and both sexes. When compared to observation followed by microsurgery, the additional benefit is small. QALY totals for the two immediate-treatment groups were significantly lower than those for the observation groups. Conclusions: For patients of all ages, a period of observation during which tumor growth and hearing thresholds are closely monitored is the superior strategy. For tumors that grow substantially or when hearing deteriorates, definitive management via radiosurgery is recommended. Laryngoscope, 2010 [source]
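
To make the method concrete, here is a minimal Markov cohort sketch in the spirit of the study's design; the transition probabilities and utility weights are invented placeholders, not the values from the paper's literature review:

```python
import numpy as np

# Minimal Markov cohort sketch: observe first and treat on growth, versus
# immediate treatment. All numbers are hypothetical, NOT the study's values.
states = ["observation", "post-radiosurgery", "post-microsurgery", "dead"]

P = np.array([                      # yearly transitions; rows sum to 1
    [0.90, 0.06, 0.02, 0.02],       # stay under observation, or treat on growth
    [0.00, 0.97, 0.00, 0.03],
    [0.00, 0.00, 0.97, 0.03],
    [0.00, 0.00, 0.00, 1.00],
])
utility = np.array([0.95, 0.90, 0.85, 0.00])   # quality weight per state

def qalys(start_state, cycles=30, discount=0.03):
    """Discounted quality-adjusted life years for a cohort starting in one state."""
    dist = np.zeros(len(states))
    dist[start_state] = 1.0
    total = 0.0
    for t in range(cycles):         # one cycle = one year
        total += (dist @ utility) / (1 + discount) ** t
        dist = dist @ P
    return total

print("observation first     :", round(qalys(0), 2))
print("immediate radiosurgery:", round(qalys(1), 2))
```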


Kinematic models for non-coaxial granular materials. Part II: evaluation

INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 7 2005
Abstract In this paper we present results of numerical simulations for the evaluation of kinematic models for non-coaxial granular materials by the distinct element method (DEM). Strain-rate-controlled monotonic and cyclic undrained simple shear tests were specifically designed and evaluation criteria established for this purpose. The models examined are the double-shearing model, the double-sliding free-rotating model, and the double slip and rotation rate model (DSR2 model) proposed by the authors (see the accompanying paper). It is shown that the assumption used in the double-shearing model appears not to be in agreement with the DEM data. It is also shown that in the double-sliding free-rotating model the energy dissipation requirements appear to be unduly restrictive as a constitutive assumption. The DSR2 model, which is a hybrid of discrete micro-mechanics and continuum modelling, gives better agreement with the results of our DEM simulations than either the double-shearing model or the double-sliding free-rotating model. Copyright © 2005 John Wiley & Sons, Ltd. [source]
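
For readers unfamiliar with the terminology: "non-coaxial" means the principal axes of the stress tensor and the strain-rate tensor do not coincide. The sketch below computes that misalignment angle for hypothetical 2D tensors of the kind a DEM simple shear test might output; it illustrates the concept only and is not the authors' code:

```python
import numpy as np

def principal_angle(t):
    """Orientation of the major principal axis of a 2x2 symmetric tensor,
    from tan(2*theta) = 2*t01 / (t00 - t11)."""
    return 0.5 * np.arctan2(2.0 * t[0, 1], t[0, 0] - t[1, 1])

# Hypothetical stress state (kPa) and simple-shear strain rate (1/s).
stress = np.array([[120.0, 35.0],
                   [ 35.0, 80.0]])
strain_rate = np.array([[0.0,    1.0e-3],
                        [1.0e-3, 0.0   ]])

delta = np.degrees(principal_angle(stress) - principal_angle(strain_rate))
print(f"non-coaxiality angle: {delta:.1f} degrees")  # 0 would mean coaxial
```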


Evaluation of Uncertainties Associated with Geocoding Techniques

COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 3 2004
Hassan A. Karimi
Geocoded data play a major role in numerous engineering applications, such as transportation and environmental studies, where geospatial information systems (GIS) are used for spatial modeling and analysis because they contain spatial information (e.g., latitude and longitude) about objects. The information that a GIS produces is affected by the quality of the geocoded data (e.g., coordinates) stored in its database. To make appropriate and reasonable decisions using geocoded data, it is important to understand the sources of uncertainty in geocoding. There are two major sources of uncertainty in geocoding: one related to the database used as a reference data set to geocode objects, and one related to the interpolation technique used. Factors such as completeness, correctness, consistency, currency, and accuracy of the data in the reference database contribute to the former, whereas the specific logic and assumptions used in an interpolation technique contribute to the latter. The primary purpose of this article is to understand the uncertainties associated with interpolation techniques used for geocoding. To that end, three geocoding algorithms were tested and their results compared with data collected by the Global Positioning System (GPS). The overall comparison indicated no significant differences between the three algorithms. [source]
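
A common interpolation technique of the kind examined here is linear address-range interpolation: the house number's position within a street segment's address range is mapped linearly onto the segment's geometry. A minimal sketch, with a made-up segment (the function name, ranges, and coordinates are illustrative, not data from the study):

```python
# Classic address-range interpolation, the kind of logic whose assumptions
# drive geocoding uncertainty. Segment data below are hypothetical.
def interpolate_address(house_no, range_lo, range_hi, start_pt, end_pt):
    """Linearly place a house number along a street segment."""
    frac = (house_no - range_lo) / float(range_hi - range_lo)
    lat = start_pt[0] + frac * (end_pt[0] - start_pt[0])
    lon = start_pt[1] + frac * (end_pt[1] - start_pt[1])
    return lat, lon

# Hypothetical segment: 100-198 Main St between two digitized endpoints.
pt = interpolate_address(150, 100, 198,
                         start_pt=(40.4400, -79.9950),
                         end_pt=(40.4410, -79.9900))
print(pt)  # the true parcel position may differ; that gap is the uncertainty
```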


A case of constitutional apples and oranges: a functional comparison of pension priority and benefit guarantees in U.S., U.K. and Canadian insolvency and pension law regimes

INTERNATIONAL INSOLVENCY REVIEW, Issue 2 2009
Ronald B. Davis
Canada's insolvency law reform increased the priority granted to employer-sponsored pension claims. The article compares the treatment of such claims in the U.S., the U.K. and Canada. A comparison of the legislative provisions concerning pension funding shortfalls, whether from contribution arrears or from economic underperformance relative to the assumptions used for investment income or liability valuations, finds that insolvency law has been used to address contribution arrears, while risks from economic underperformance have been addressed by pension benefit insurance. Post-insolvency priority for contribution arrears provides appropriate incentives to discourage pre-insolvency preferences for payments to other creditors, whereas shortfalls from economic underperformance do not involve issues of preference between creditors. The absence of any insolvency rationale for changing the priority of shortfalls from economic underperformance, and the likely disparity between the assets available to satisfy claims and the much larger amounts of such shortfalls, make the use of insolvency law to address this risk much less effective than insurance. Canada, however, has not adopted the insurance policy instrument used in the U.S. and U.K. to mitigate the impact of pension funding shortfalls. The constitutional inability of Canada to legislate in respect of matters of pension regulation that would allow it to control the well-known insurance problems of moral hazard and adverse selection may explain why it has chosen to adopt only an insolvency policy instrument. However, a change in priorities in insolvency may generate incentives for secured creditors that either undermine or reinforce this policy choice. Secured creditors could attempt to circumvent the new priority scheme through private arrangements with the debtor, or could increase their monitoring activities to ensure the debtor is current in its pension contributions. Secured creditors' choices will be influenced by the bankruptcy courts' interpretation of the preference provisions in the insolvency legislation. Copyright © 2009 John Wiley & Sons, Ltd. [source]


Analysis of energy technology changes and associated costs

INTERNATIONAL JOURNAL OF ENERGY RESEARCH, Issue 12 2006
P. D. Lund
Abstract An integrated mathematical model consisting of interlinked submodels on technology costs, progress and market penetration has been developed. The model was applied to a few new energy technologies to investigate the economic boundary conditions for a full market breakthrough and the corresponding market impact on a 50-year time scale. The model shows that public subsidies amounting to slightly over €220 billion in total worldwide would be necessary over the next 30–40 years to bring wind and photovoltaics to a cost breakthrough in the market and to reach a 20% and 5% share of all electricity at t = 50 years, respectively. These up-front learning investments would be partly amortized toward the end of the interval as the new technologies become cost competitive, but could be fully paid off earlier if CO2 emission trading schemes emerge, even with modest CO2 price levels. The findings are sensitive to changes in the parameter assumptions used. For example, a 2% uncertainty in the main parameters of the model could lead to a spread of tens of per cent in the future energy impact and subsidy needs, or, relative to the above subsidy estimate, €155–325 billion. This underlines the overall uncertainty in predicting future impacts and resource needs for new energy technologies. Copyright © 2006 John Wiley & Sons, Ltd. [source]
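
The subsidy logic can be illustrated with a standard experience-curve calculation: unit cost falls by a fixed learning rate per doubling of cumulative capacity, and the learning investment is the accumulated gap between that cost and the breakeven price until the curves cross. The sketch below uses invented parameters, not the paper's calibration:

```python
import numpy as np

# Experience-curve sketch of "learning investments". All numbers illustrative.
def learning_investment(c0, breakeven, learning_rate, x0=1.0,
                        growth=1.25, years=50):
    b = -np.log2(1.0 - learning_rate)    # experience-curve exponent
    x, total = x0, 0.0
    for _ in range(years):
        added = x * (growth - 1.0)       # new capacity installed this year
        cost = c0 * (x / x0) ** (-b)     # current unit cost on the curve
        if cost > breakeven:             # subsidy covers the cost gap
            total += (cost - breakeven) * added
        x += added
    return total

# PV-like placeholder numbers: 4000 EUR/kW starting cost, 1000 EUR/kW
# breakeven, 20% learning rate, 1 GW (1e6 kW) installed initially.
subsidy = learning_investment(c0=4000, breakeven=1000,
                              learning_rate=0.20, x0=1e6)
print(f"learning investment ~ {subsidy / 1e9:.1f} billion EUR")
```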


Economic aspects and the Summer Olympics: a review of related research

INTERNATIONAL JOURNAL OF TOURISM RESEARCH, Issue 6 2003
Evangelia Kasimati
Abstract As the Summer Olympics grow, attracting ever larger media coverage and sponsorship, host cities have come to attach great importance to tourism and the other economic effects likely to follow from staging such a special event. As a result, a number of studies have been conducted to consider the various economic implications for the hosts. This paper examines and evaluates the methods and assumptions used by these economic studies. It also compares ex-ante models and forecasts with the ex-post approach. The aim is to improve the information available to policy makers and potential future hosts of the Summer Olympics and other mega-events. Copyright © 2003 John Wiley & Sons, Ltd. [source]


Use of dispersal–vicariance analysis in biogeography – a critique

JOURNAL OF BIOGEOGRAPHY, Issue 1 2010
Ullasa Kodandaramaiah
Abstract Aim: Analytical methods are commonly used to identify historical processes of vicariance and dispersal in the evolution of taxa. Currently, dispersal–vicariance analysis implemented in the software diva is the most widely used method. Despite some recognized shortcomings of the method, it has been treated as error-free in many cases and used extensively as the sole method to reconstruct histories of taxa. In light of this, an evaluation of the limitations of the method is needed, especially in relation to several newer alternatives. Methods: In an approach similar to simulation studies in phylogenetics, I use hypothetical taxa evolving in specific geological scenarios and test how well diva reconstructs their histories. Results: diva reconstructs histories accurately when evolution has been simple, that is, where speciation is driven mainly by vicariance. Ancestral areas are wrongly identified under several conditions, including complex patterns of dispersals and within-area speciation events. Several potentially serious drawbacks in using diva for inferences in biogeography are discussed. These include the inability to distinguish between contiguous range expansions and across-barrier dispersals, a low probability of invoking extinctions, incorrect constraints set on the maximum number of areas by the user, and analysing the ingroup taxa without sister groups. Main conclusions: Most problems with inferences based on diva are linked to the inflexibility and simplicity of the assumptions used in the method. These are frequently invalid, resulting in spurious reconstructions. I argue that it might be dangerous to rely solely on diva optimization to infer the history of a group. I also argue that diva is not ideally suited to distinguishing between dispersal and vicariance because it cannot a priori take into account the age of divergences relative to the timing of barrier formation. I suggest that other alternative methods can be used to corroborate the findings of diva, increasing the robustness of biogeographic hypotheses. I compare some important alternatives and conclude that model-based approaches are promising. [source]


Comparison of temperate and tropical rainforest tree species: growth responses to temperature

JOURNAL OF BIOGEOGRAPHY, Issue 1 2003
S. Cunningham
Abstract Aim: To investigate whether the latitudinal distribution of rainforest trees in Australia can be explained by their growth responses to temperature. Methods: The rainforest canopy trees Acmena smithii (Poir.) Merrill & Perry, Alstonia scholaris (L.) R. Br., Castanospermum australe Cunn. & C. Fraser ex Hook., Eucryphia lucida (Labill.) Baill., Heritiera trifoliolata (F. Muell.) Kosterm., Nothofagus cunninghamii (Hook.) Oerst., Sloanea woollsii F. Muell. and Tristaniopsis laurina (Sm.) Wilson & Waterhouse were selected to cover the latitudinal range of rainforests in eastern Australia. Seedlings of these species were grown under a range of day/night temperature regimes (14/6, 19/11, 22/14, 25/17, and 30/22 °C) in controlled-environment cabinets. These seedlings were harvested after 16 weeks to determine differences in growth rate and biomass allocation among species and temperature regimes. Results: The temperate species showed maximum growth at lower temperatures than the tropical species. However, there was considerable overlap in the growth rates of the temperate and tropical rainforest types across the temperature range used. Maximum growth of the tropical rainforest types was associated with changes in biomass allocation, whereas the temperate rainforest types showed no significant changes in biomass allocation across the temperature range. Main conclusions: All species showed temperatures for maximum growth that were considerably higher than those previously shown for maximum net photosynthesis. The growth responses to temperature of the rainforest species under these experimental conditions provided limited evidence for their restriction to certain latitudes. These growth responses to temperature showed that the physiological assumptions used in various types of vegetation-climate models may not be true of Australian rainforest trees. [source]


Demographic Issues in Longevity Risk Analysis

JOURNAL OF RISK AND INSURANCE, Issue 4 2006
Eric Stallard
Fundamental to the modeling of longevity risk is the specification of the assumptions used in demographic forecasting models that are designed to project past experience into future years, with or without modifications based on expert opinion about influential factors not represented in the historical data. Stochastic forecasts are required to explicitly quantify the uncertainty of forecasted cohort survival functions, including uncertainty due to process variance, parameter errors, and model misspecification errors. Current applications typically ignore the latter two sources although the potential impact of model misspecification errors is substantial. Such errors arise from a lack of understanding of the nature and causes of historical changes in longevity and the implications of these factors for the future. This article reviews the literature on the nature and causes of historical changes in longevity and recent efforts at deterministic and stochastic forecasting based on these data. The review reveals that plausible alternative sets of forecasting assumptions have been derived from the same sets of historical data, implying that further methodological development will be needed to integrate the various assumptions into a single coherent forecasting model. Illustrative calculations based on existing forecasts indicate that the ranges of uncertainty for older cohorts' survival functions will be at a manageable level. Uncertainty ranges for younger cohorts will be larger and the need for greater precision will likely motivate further model development. [source]
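
The process-variance component the article describes can be illustrated with a toy stochastic forecast: a Lee-Carter-style period index following a random walk with drift, simulated many times to give an uncertainty band on cohort survival. All parameters below are hypothetical, not fitted values:

```python
import numpy as np

# Sketch of the "process variance" component of a stochastic mortality
# forecast. Drift, volatility and baseline hazard are placeholders.
rng = np.random.default_rng(42)

def simulate_survival(n_paths=1000, years=40, drift=-0.02, sigma=0.05,
                      base_hazard=0.01):
    # Lee-Carter-style period index k_t: random walk with drift.
    k = np.cumsum(drift + sigma * rng.standard_normal((n_paths, years)), axis=1)
    hazard = base_hazard * np.exp(k)             # mortality improves as k falls
    return np.exp(-np.cumsum(hazard, axis=1))    # survival along each path

S = simulate_survival()
lo, med, hi = np.percentile(S[:, -1], [2.5, 50, 97.5])
print(f"40-year survival: median {med:.3f}, 95% band [{lo:.3f}, {hi:.3f}]")
# Parameter and model-misspecification errors would widen this band further,
# which is the article's point about applications that ignore those sources.
```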


OSCILLATING VANE GEOMETRY FOR SOFT SOLID GELS AND FOAMS

JOURNAL OF TEXTURE STUDIES, Issue 6 2002
C. SERVAIS
ABSTRACT Several relationships between torque and stress exist for the vane geometry, but only a few equations have been proposed for the relationship between angular displacement and strain. In this study, an expression based on the infinite-gap approximation for concentric cylinders is used, and well-defined reference data are compared to oscillating vane data to validate the assumptions used. Gelatin gels are used because they stick to the wall, and carrageenan gels are used to show that wall slip does not occur with oscillating vanes in serrated-cup geometries. Shaving foams are used as a model low-density, time- and shear-stable foam, which resists irreversible damage when loaded between serrated parallel plates. Results show that the assumptions used for the determination of stress and strain with the vane provide material viscoelastic properties that are not significantly different from reference values obtained with concentric cylinders and parallel plates. [source]
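
In their commonly assumed form, the conversions being validated are a torque-to-stress relation that takes the stress as uniform over the vane's swept cylinder (side plus both ends) and an infinite-gap strain approximation at the bob. A sketch with hypothetical vane dimensions and instrument readings:

```python
import math

# Vane-in-cup conversions of the kind the article tests. Dimensions and
# readings below are hypothetical.
def vane_stress(torque, R, H):
    """tau = T / (2*pi*R^3 * (H/R + 2/3)): uniform stress over the swept
    cylinder's side and both end faces."""
    return torque / (2.0 * math.pi * R**3 * (H / R + 2.0 / 3.0))

def vane_strain(theta):
    """Infinite-gap approximation at the bob: gamma ~ 2*theta (radians)."""
    return 2.0 * theta

# Hypothetical oscillation reading on a vane with R = 10 mm, H = 40 mm.
tau = vane_stress(torque=2.0e-4, R=0.010, H=0.040)   # N*m -> Pa
gamma = vane_strain(theta=0.005)                     # rad -> strain
print(f"stress amplitude ~ {tau:.1f} Pa, strain amplitude ~ {gamma:.3f}")
# The complex modulus G* = tau/gamma would then be compared against
# concentric-cylinder and parallel-plate reference data.
```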


On Different Approaches to Estimate the Mass Fractal Dimension of Coal Aggregates

PARTICLE & PARTICLE SYSTEMS CHARACTERIZATION, Issue 5 2005
Jimmy Y. H. Liao
Abstract Several methods to measure the structures of coal aggregates are compared. Loose and compact coal aggregates were generated through flocculation of ultrafine coal particles (mean volume diameter of 12 µm) under specific shearing conditions. Aggregate structure in terms of mass fractal dimension, Df, was determined using various methods: namely 2D and 3D image analysis, interpretation of intensity patterns from small-angle light scattering, changes in aggregation state through light obscuration, and settling behavior. In this study, the measured values of Df ranged from 1.84 to 2.19 for coal aggregates with more open structures, and around 2.27 to 2.66 for the compact ones. All of these approaches could distinguish structural differences between aggregates, albeit with variation in the Df values estimated by the different techniques. The discrepancy in the absolute values of the fractal dimension is due to the different physical properties measured by each approach, depending on the assumptions used to infer Df from measurable parameters. In addition, image analysis and settling techniques are based on the examination of individual aggregates, such that a large number of data points are required to yield statistically representative estimations. Light scattering and obscuration measure the aggregates collectively to give average Df values for the particulate system, consequently ignoring any structural variation between the aggregates and leaving possible small contaminations undetected (e.g., by dust particles or air bubbles). Appropriate utilization of a particular method is thus largely determined by system properties and the required data quality. [source]
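
The underlying scaling is the mass-fractal relation M ∝ R^Df, so Df can be estimated as the slope of a log-log fit of mass (or particle count) against radius; in small-angle scattering the same exponent appears as I(q) ∝ q^(-Df). A sketch on synthetic data (values invented to mimic a loose aggregate):

```python
import numpy as np

# Estimate the mass fractal dimension Df from the scaling M ~ R^Df.
# Synthetic data roughly mimicking an open aggregate (Df ~ 2).
rng = np.random.default_rng(0)
R = np.logspace(0.2, 1.5, 12)          # radii in particle diameters
true_df = 2.0
M = R**true_df * np.exp(0.05 * rng.standard_normal(R.size))  # noisy mass

slope, intercept = np.polyfit(np.log(R), np.log(M), 1)
print(f"estimated Df = {slope:.2f}")   # ~2.0 for this open structure
# Compact aggregates would give slopes nearer 2.3-2.7, as in the article;
# each measurement technique infers this exponent under its own assumptions.
```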


Evaluation of gestational age and admission date assumptions used to determine prenatal drug exposure from administrative data

PHARMACOEPIDEMIOLOGY AND DRUG SAFETY, Issue 12 2005
Marsha A. Raebel PharmD
Abstract Objective Our aim was to evaluate the 270-day gestational age and delivery date assumptions used in an administrative dataset study assessing prenatal drug exposure, compared to information contained in a birth registry. Study Design and Setting Kaiser Permanente Colorado (KPCO), a member of the Health Maintenance Organization (HMO) Research Network Center for Education and Research in Therapeutics (CERTs), previously participated in a CERTs study that used claims data to assess prenatal drug exposure. In the current study, gestational age and delivery information from the CERTs study dataset, the Prescribing Safely during Pregnancy Dataset (PSDPD), was compared to information in the KPCO Birth Registry. Sensitivity and positive predictive value (PPV) of the claims data for deliveries were assessed. The effect of the gestational age and delivery date assumptions on classification of prenatal drug exposure was evaluated. Results The mean gestational age in the Birth Registry was 273 (median = 275) days. Sensitivity of the claims data at identifying deliveries was 97.6%; PPV was 98.2%. Of deliveries identified in only one dataset, 45% were related to the gestational age assumption and 36% were due to claims data issues. The effect on estimates of the prevalence of prescribing during pregnancy was an absolute change of 1% or less for all drug exposure categories. For Category X drug exposures during the first trimester, the relative change in prescribing prevalence was 13.7% (p = 0.014). Conclusion Administrative databases can be useful for assessing prenatal drug exposure, but gestational age assumptions can result in a small proportion of misclassification. Copyright © 2005 John Wiley & Sons, Ltd. [source]
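
The assumption under evaluation is easy to state in code: impute the pregnancy start as the delivery date minus 270 days and divide the interval into 90-day trimesters. The dates below are made up for illustration:

```python
from datetime import date, timedelta

# The 270-day assumption: with only a delivery date in claims data, the
# pregnancy start is imputed as delivery minus 270 days, and trimesters as
# consecutive 90-day blocks. Dates below are hypothetical.
GESTATION_DAYS = 270

def trimester(dispense_date, delivery_date):
    """Classify a dispensing against an imputed 270-day pregnancy."""
    start = delivery_date - timedelta(days=GESTATION_DAYS)
    offset = (dispense_date - start).days
    if offset < 0 or dispense_date > delivery_date:
        return "outside imputed pregnancy"
    return f"trimester {min(offset // 90 + 1, 3)}"

delivery = date(2004, 9, 30)
print(trimester(date(2004, 1, 20), delivery))   # trimester 1
print(trimester(date(2004, 8, 15), delivery))   # trimester 3
# If the true gestation was, say, 275 days (the registry median), exposures
# in the first days of pregnancy are misclassified as pre-pregnancy, which
# is the kind of misclassification the study quantifies.
```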