Modelling Techniques


Kinds of Modelling Techniques

  • equation modelling techniques
  • structural equation modelling techniques


  • Selected Abstracts


    Are Points the Better Graphics Primitives?

    COMPUTER GRAPHICS FORUM, Issue 3 2001
    Markus Gross
    Since the early days of graphics, the computer-based representation of three-dimensional geometry has been one of the core research fields. Today, various sophisticated geometric modelling techniques, including NURBS and implicit surfaces, allow the creation of 3D graphics models with increasingly complex shapes. In spite of these methods, the triangle has survived over decades as the king of graphics primitives, striking the right balance between descriptive power and computational burden. As a consequence, today's consumer graphics hardware is heavily tailored for high-performance triangle processing. In addition, a new generation of geometry processing methods, including hierarchical representations, geometric filtering, and feature detection, fosters the concept of triangle meshes for graphics modelling. Unlike triangles, points have been surprisingly neglected as a graphics primitive. Although included in APIs for many years, it is only recently that point samples have experienced a renaissance in computer graphics. Conceptually, points provide a mere discretization of geometry without explicit storage of topology. Thus, point samples reduce the representation to the essentials needed for rendering and enable us to generate highly optimized object representations. Although the loss of topology poses great challenges for graphics processing, the latest generation of algorithms features high-performance rendering, point/pixel shading, anisotropic texture mapping, and advanced signal processing of point-sampled geometry. This talk will give an overview of how recent research results in the processing of triangles and points are changing our traditional way of thinking about surface representations in computer graphics - and will discuss the question: Are Points the Better Graphics Primitives? [source]


    The changing world demography of type 2 diabetes

    DIABETES/METABOLISM: RESEARCH AND REVIEWS, Issue 1 2003
    Anders Green
    Abstract In recent years it has been estimated that the current global prevalence of type 2 diabetes amounts to about 150 million patients. Projections suggest that by the year 2025 the number of prevalent patients in the world will reach approximately 300 million. It is assumed that the increase in the number of patients will be most pronounced in nations currently undergoing socio-economic development including increasing urbanization. The technique used to provide these estimates is based on results from available, contemporary survey results, combined with expected future trends in demographic indicators. We suggest that the currently available methods for the estimation of the future global burden of type 2 diabetes mellitus yield underestimates. Further modifications and validity tests of the modelling techniques are necessary in order to develop a reliable instrument to globally monitor the effects of the struggle against the diabetes problem. Copyright © 2002 John Wiley & Sons, Ltd. [source]
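The projection technique the abstract describes — applying survey-derived prevalence rates to a projected future population structure — can be sketched as follows. All rates and population counts below are invented for illustration; they are not the study's data.

```python
# Hedged sketch: project future diabetes cases by applying age-specific
# prevalence rates (from contemporary surveys) to projected population
# counts. All rates and populations below are illustrative, not real data.

def project_cases(prevalence_by_age, population_by_age):
    """Expected patient count = sum over age bands of rate * population."""
    return sum(prevalence_by_age[band] * population_by_age[band]
               for band in prevalence_by_age)

# Illustrative age-band prevalence rates (fraction with type 2 diabetes)
rates = {"20-39": 0.02, "40-59": 0.08, "60+": 0.15}

# Illustrative adult populations (millions) now and projected for 2025,
# with ageing and urbanization shifting people into the older bands
pop_now  = {"20-39": 2000, "40-59": 1200, "60+": 600}
pop_2025 = {"20-39": 2200, "40-59": 1600, "60+": 900}

cases_now  = project_cases(rates, pop_now)    # millions of patients
cases_2025 = project_cases(rates, pop_2025)
```

Note that holding the age-specific rates fixed, as here, is exactly why the abstract argues such projections tend to be underestimates: rates themselves rise with urbanization.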


    Diabetic retinopathy screening: a systematic review of the economic evidence

    DIABETIC MEDICINE, Issue 3 2010
    S. Jones
    Diabet. Med. 27, 249–256 (2010) Abstract This paper systematically reviews the published literature on the economic evidence of diabetic retinopathy screening. Twenty-nine electronic databases were searched for studies published between 1998 and 2008. Internet searches were carried out and reference lists of key studies were hand searched for relevant articles. The key search terms used were 'diabetic retinopathy', 'screening', 'economic' and 'cost'. The search identified 416 papers of which 21 fulfilled the inclusion criteria, comprising nine cost-effectiveness studies, one cost analysis, one cost-minimization analysis, four cost–utility analyses and six reviews. Eleven of the included studies used economic modelling techniques and/or computer simulation to assess screening strategies. To date, the economic evaluation literature on diabetic retinopathy screening has focused on four key questions: the overall cost-effectiveness of ophthalmic care; the cost-effectiveness of systematic vs. opportunistic screening; how screening should be organized and delivered; and how often people should be screened. Systematic screening for diabetic retinopathy is cost-effective in terms of sight years preserved compared with no screening. Digital photography with telemedicine links has the potential to deliver cost-effective, accessible screening to rural, remote and hard-to-reach populations. Variation in compliance rates, age of onset of diabetes, glycaemic control and screening sensitivities influence the cost-effectiveness of screening programmes and are important sources of uncertainty in relation to the issue of optimal screening intervals. There is controversy in relation to the economic evidence on optimal screening intervals. Further research is needed to address the issue of optimal screening interval, the opportunities for targeted screening to reflect relative risk and the effect of different screening intervals on attendance or compliance by patients. [source]
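Cost-effectiveness studies of the kind reviewed above typically summarize a comparison as an incremental cost-effectiveness ratio (ICER). A minimal sketch, with invented costs and outcomes (not figures from any reviewed study):

```python
# Hedged sketch: the incremental cost-effectiveness ratio (ICER) used to
# compare screening strategies. Costs and outcomes are invented for
# illustration; real evaluations draw them from trials and models.

def icer(cost_new, cost_old, effect_new, effect_old):
    """Extra cost per extra unit of effect (e.g. per sight year preserved)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Illustrative totals per 1000 patients over a programme's time horizon:
# systematic screening vs. no screening
cost_screening, cost_none = 250_000.0, 100_000.0
sight_years_screening, sight_years_none = 3200.0, 3050.0

cost_per_sight_year = icer(cost_screening, cost_none,
                           sight_years_screening, sight_years_none)
```

A decision-maker would then compare this ratio against a willingness-to-pay threshold per sight year preserved.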


    Where do Swainson's hawks winter?

    DIVERSITY AND DISTRIBUTIONS, Issue 5 2008
    Satellite images used to identify potential habitat
    ABSTRACT During recent years, predictive modelling techniques have been increasingly used to identify regional patterns of species' spatial occurrence, to explore species–habitat relationships and to aid in biodiversity conservation. In the case of birds, predictive modelling has mainly been applied to the study of species with little interannual variation in their patterns of spatial occurrence (e.g. year-round resident species or migratory species showing territorial behaviour in their breeding grounds). We used predictive models to analyse the factors that determine broad-scale patterns of occurrence and abundance of wintering Swainson's hawks (Buteo swainsoni). This species has been the focus of field monitoring in its wintering ground in Argentina due to massive pesticide poisoning of thousands of individuals during the 1990s, but its unpredictable pattern of spatial distribution and the uncertainty about the current wintering area occupied by hawks led to discontinuing such field monitoring. Data on the presence and abundance of hawks were recorded in 30 × 30 km squares (n = 115) surveyed during three austral summers (2001–03). Sixteen land-use/land-cover, topography, and Normalized Difference Vegetation Index (NDVI) variables were used as predictors to build generalized additive models (GAMs). Both occurrence and abundance models showed good predictive ability. Land use, altitude, and NDVI during the spring prior to the arrival of hawks at wintering areas were good predictors of the distribution of Swainson's hawks in the Argentine pampas, but only land use and NDVI entered into the model of abundance of the species in the region. The predictive cartography developed from the models allowed us to identify the current wintering area of Swainson's hawks in the Argentine pampas. 
The highest occurrence probability and relative abundances for the species were predicted for a broad area of south-eastern pampas that has been overlooked so far, and where neither field research nor conservation efforts aiming to prevent massive mortalities have been established. [source]
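A hedged sketch of how a fitted occurrence model becomes predictive cartography: a real GAM combines additive smooth terms through a logistic link, whereas the sketch below substitutes a plain linear predictor. All coefficients and grid-square values are invented, not the study's estimates.

```python
# Hedged sketch: turning a fitted occurrence model into a predictive map.
# The additive smooth terms of a real GAM are replaced by a plain linear
# predictor with a logistic link; coefficients and grid values are invented.
import math

def occurrence_probability(land_use, altitude, ndvi,
                           b0=-2.0, b_land=1.5, b_alt=-0.8, b_ndvi=2.0):
    """Probability of hawk presence in a 30 x 30 km square (logistic link)."""
    eta = b0 + b_land * land_use + b_alt * altitude + b_ndvi * ndvi
    return 1.0 / (1.0 + math.exp(-eta))

# Illustrative grid squares: (land-use score, altitude in km, spring NDVI)
squares = [(1.0, 0.2, 0.6), (0.2, 1.5, 0.3), (0.9, 0.1, 0.7)]
probs = [occurrence_probability(*sq) for sq in squares]

# Squares above a probability threshold are mapped as potential wintering area
wintering = [p > 0.5 for p in probs]
```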


    Spread and current potential distribution of an alien grass, Eragrostis lehmanniana Nees, in the southwestern USA: comparing historical data and ecological niche models

    DIVERSITY AND DISTRIBUTIONS, Issue 5 2006
    Heather Schussman
    ABSTRACT The potential distribution of alien species in a novel habitat often is difficult to predict because factors limiting species distributions may be unique to the new locale. Eragrostis lehmanniana is a perennial grass purposely introduced from South Africa to Arizona, USA, in the 1930s; by the 1980s, it had doubled its extent. Based on environmental characteristics associated with its introduced and native range, researchers believed that E. lehmanniana had reached the limits of its distribution by the early 1990s. We collected data on E. lehmanniana locations from various land management agencies throughout Arizona and western New Mexico and found new records indicating that E. lehmanniana has continued to spread. We also employed two modelling techniques to determine the current potential distribution and to re-investigate several environmental variables related to distribution. Precipitation and temperature regimes similar to those indicated by past research were the most important variables influencing model output. The potential distribution of E. lehmanniana mapped by both models was 71,843 km2 and covers a large portion of southeastern and central Arizona. Logistic regression (LR) predicted a potential distribution of E. lehmanniana more similar to this species' current distribution than did GARP, based on average temperature, precipitation, grassland species composition and recorded occurrences. Results of a cross-validation assessment and extrinsic testing showed that the LR model performed as well as or better than GARP based on sensitivity, specificity, and kappa indices. [source]
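The evaluation indices used to compare the LR and GARP models — sensitivity, specificity and Cohen's kappa — are computed from a confusion matrix of predicted versus observed presences. The counts below are invented for illustration:

```python
# Hedged sketch: presence/absence model evaluation from a confusion matrix.
# tp/fp/fn/tn counts are invented, not the study's extrinsic test data.

def evaluation_indices(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)          # fraction of presences predicted
    specificity = tn / (tn + fp)          # fraction of absences predicted
    observed_agreement = (tp + tn) / n
    # Chance agreement from the marginal totals (Cohen's kappa)
    expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (observed_agreement - expected) / (1 - expected)
    return sensitivity, specificity, kappa

# Illustrative extrinsic test: 80 true presences hit, 10 false alarms,
# 20 presences missed, 90 absences correctly predicted
sens, spec, kappa = evaluation_indices(tp=80, fp=10, fn=20, tn=90)
```

Kappa discounts the agreement expected by chance, which is why it is preferred over raw accuracy when presences and absences are unbalanced.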


    Quantifying sediment storage in a high alpine valley (Turtmanntal, Switzerland)

    EARTH SURFACE PROCESSES AND LANDFORMS, Issue 13 2009
    Jan-Christoph Otto
    Abstract The determination of sediment storage is a critical parameter in sediment budget analyses. However, in many sediment budget studies the quantification of the magnitude and time-scale of sediment storage is still the weakest part and often relies on crude estimation only, especially in large drainage basins (>100 km²). We present a new approach to storage quantification in a meso-scale alpine catchment of the Swiss Alps (Turtmann Valley, 110 km²). The quantification of depositional volumes was performed by combining geophysical surveys and geographic information system (GIS) modelling techniques. Mean thickness values of each landform type calculated from these data were used to estimate the sediment volume in the hanging valleys and on the trough slopes. The sediment volume of the remaining subsystems was determined by modelling an assumed parabolic bedrock surface using digital elevation model (DEM) data. A total sediment volume of 781.3×10⁶–1005.7×10⁶ m³ is deposited in the Turtmann Valley. Over 60% of this volume is stored in the 13 hanging valleys. Moraine landforms contain over 60% of the deposits in the hanging valleys, followed by sediment stored on slopes (20%) and rock glaciers (15%). For the first time, a detailed quantification of different storage types has been achieved in a catchment of this size. Sediment volumes have been used to calculate mean denudation rates for the different processes, ranging from 0.1 to 2.6 mm/a based on a time span of 10 ka. As the quantification approach includes a number of assumptions and various sources of error, the values given represent the order of magnitude of sediment storage that has to be expected in a catchment of this size. Copyright © 2009 John Wiley & Sons, Ltd. [source]
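The parabolic-bedrock assumption admits a simple closed form: the fill area between a flat valley surface and a parabolic bedrock floor is 2/3 × width × maximum depth. A sketch with invented reach dimensions (the catchment area and 10 ka time span follow the abstract):

```python
# Hedged sketch: sediment volume under an assumed parabolic bedrock
# surface, plus a crude denudation-rate conversion. Reach dimensions are
# invented; catchment area and time span follow the abstract.

def parabolic_fill_volume(width_m, max_depth_m, length_m):
    """Fill between a flat surface and a parabolic bedrock floor:
    cross-section area = (2/3) * width * max depth, times reach length."""
    return (2.0 / 3.0) * width_m * max_depth_m * length_m

# Illustrative trough reach: 1500 m wide, 100 m maximum fill depth, 10 km long
volume_m3 = parabolic_fill_volume(1500.0, 100.0, 10_000.0)

# Crude mean denudation rate: spread the stored volume over the catchment
catchment_area_m2 = 110e6        # ~110 km2, as in the Turtmann Valley
time_span_a = 10_000             # 10 ka since deglaciation
denudation_mm_per_a = volume_m3 / catchment_area_m2 / time_span_a * 1000
```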


    Resolving the biodiversity paradox

    ECOLOGY LETTERS, Issue 8 2007
    James S. Clark
    Abstract The paradox of biodiversity involves three elements, (i) mathematical models predict that species must differ in specific ways in order to coexist as stable ecological communities, (ii) such differences are difficult to identify, yet (iii) there is widespread evidence of stability in natural communities. Debate has centred on two views. The first explanation involves tradeoffs along a small number of axes, including 'colonization–competition', resource competition (light, water, nitrogen for plants, including the 'successional niche'), and life history (e.g. high-light growth vs. low-light survival and few large vs. many small seeds). The second view is neutrality, which assumes that species differences do not contribute to dynamics. Clark et al. (2004) presented a third explanation, that coexistence is inherently high dimensional, but still depends on species differences. We demonstrate that neither traditional low-dimensional tradeoffs nor neutrality can resolve the biodiversity paradox, in part by showing that they do not properly interpret stochasticity in statistical and in theoretical models. Unless sample sizes are small, traditional data modelling assures that species will appear different in a few dimensions, but those differences will rarely predict coexistence when parameter estimates are plugged into theoretical models. Contrary to standard interpretations, neutral models do not imply functional equivalence, but rather subsume species differences in stochastic terms. New hierarchical modelling techniques for inference reveal high-dimensional differences among species that can be quantified with random individual and temporal effects (RITES), i.e. process-level variation that results from many causes. We show that this variation is large, and that it stands in for species differences along unobserved dimensions that do contribute to diversity. 
High-dimensional coexistence contrasts with the classical notions of tradeoffs along a few axes, which are often not found in data, and with 'neutral models', which mask, rather than eliminate, tradeoffs in stochastic terms. This mechanism can explain coexistence of species that would not occur with simple, low-dimensional tradeoff scenarios. [source]


    Concept Acquisition within the Context of an AS Media Studies Course

    ENGLISH IN EDUCATION, Issue 2 2003
    Vivien Whelpton
    Abstract This article explores the means by which students' concept formation can be promoted and outlines findings from an action research project undertaken with a class of 17-year-old AS Media Studies students as a submission for the British Film Institute's MA Certificate in Media Education in 2001. It argues that academic concepts can neither be allowed to develop spontaneously nor be directly taught, but that indirect methods of teacher intervention can be found. It also examines the relationship between thought and language and argues that, while contact with academic discourse can be alienating, its features include a fluency which the handling of complex and abstract ideas requires, particularly in the written mode. The writer suggests that, while this discourse cannot be explicitly taught or learned, modelling techniques may offer a useful approach. [source]


    Disentangling social selection and social influence effects on adolescent smoking: the importance of reciprocity in friendships

    ADDICTION, Issue 9 2007
    Liesbeth Mercken
    ABSTRACT Aims The goal of this study was to examine social selection and social influence within reciprocal and non-reciprocal friendships, and the role of parents and siblings, as factors explaining similarity of smoking behaviour among adolescent friends. A new social selection–social influence model is proposed. Design Longitudinal design with two measurements. Setting Data were gathered among Dutch high school students in the control group of the European Smoking prevention Framework Approach (ESFA) study. Participants The sample consisted of 1886 adolescents with a mean age of 12.7 years. Measurements The main outcome measures were the smoking behaviours of the respondents, best friends, parents and siblings. We tested the social selection–social influence model with structural equation modelling techniques. Findings Social selection and social influence both played an important role in explaining similarity of smoking behaviour among friends. Within non-reciprocal friendships, only social selection explained similarity of smoking behaviour, whereas within reciprocal friendships, social influence and possibly also social selection explained similarity of smoking behaviour. Sibling smoking behaviour was a more important predictor of adolescent smoking behaviour than parental smoking behaviour. Conclusions Social selection and social influence both promote similarity of smoking behaviour, and the impact of each process differs with the degree of reciprocity of friendships. These insights may contribute to further refinement of smoking prevention strategies. [source]


    Solvent-Dependent Conformational Behaviour of Model Tetrapeptides Containing a Bicyclic Proline Mimetic

    EUROPEAN JOURNAL OF ORGANIC CHEMISTRY, Issue 22 2004
    Andrea Trabocchi
    Abstract Two model tetrapeptides containing bicyclic analogues of either D- or L-proline were synthesised and their conformational properties were studied by NMR in different solvent systems and by molecular modelling techniques. Compound 1, with the bicyclic D-proline mimetic in the i+1 position, generated a unique trans isomer, and the peptide showed a well-organised structure, in accordance with the tendency of D-proline to act as a good turn inducer with respect to its enantiomer. Peptide 2 displayed structures equilibrating from type I–II to type VI β-turns, thus confirming the hypothesised relationship between the chirality of BGS/Bgs and proline enantiomers on nucleating compact turns. Moreover, such behaviour suggested a tool for peptidomimetic design of reverse-turn peptides containing BGS/Bgs bicyclic proline mimetics, as the choice of chirality might influence the generation either of compact β- and γ-turns or of flexible equilibrating reverse-turn structures. The effect of solvent on conformational behaviour was also studied. (© Wiley-VCH Verlag GmbH & Co. KGaA, 69451 Weinheim, Germany, 2004) [source]


    Novel modelling of residual operating time of transformer oil

    EUROPEAN TRANSACTIONS ON ELECTRICAL POWER, Issue 4 2003
    M. A. A. Wahab
    This paper presents techniques developed to accurately model the characteristics of transformer oil for the purpose of predicting the effect of ageing on these characteristics. Ageing causes some transformer oil characteristics to violate the internationally specified limits. The purpose of this simulation is to develop alternative techniques to predict the operating time after which these characteristics would violate the limits. Results obtained from monitoring twenty in-service power transformers over a long period of operating time, up to ten years, have been used in developing the proposed models. The physical, chemical and electrical characteristics have been determined periodically by internationally specified testing methods. The patterns of the violation sequence of the standard limits, against operating time, by different transformer oil characteristics have been revealed and the most common pattern has been determined. The definition of the residual operating time (t_rot) of the different transformer oil characteristics has been introduced. The choice of the transformer oil breakdown voltage t_rot to represent that of the transformer oil characteristics has been justified. Modelling of t_rot as a function of transformer oil breakdown voltage, total acidity and water content by multiple linear regression has been proposed and verified. Also, a polynomial regression model of t_rot as a function only of transformer oil breakdown voltage has been given. The accuracy and applicability of these models and the different modelling techniques have been discussed and demonstrated. [source]
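A single-predictor simplification of the paper's regression approach can be sketched with ordinary least squares: residual operating time regressed on breakdown voltage alone. The data pairs are invented; the paper's actual models also include total acidity and water content as predictors.

```python
# Hedged sketch: OLS fit of residual operating time on breakdown voltage.
# Data pairs are invented stand-ins for the monitored-transformer records.

def fit_line(xs, ys):
    """Least squares for trot = a + b * breakdown_voltage."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Illustrative samples: breakdown voltage (kV) vs remaining years in service
voltage = [60.0, 50.0, 40.0, 30.0]
trot    = [9.0,  7.0,  5.0,  3.0]

a, b = fit_line(voltage, trot)
predicted_trot_45kv = a + b * 45.0   # remaining years at 45 kV
```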


    The influence of stream invertebrate composition at neighbouring sites on local assemblage composition

    FRESHWATER BIOLOGY, Issue 2 2005
    R. A. SANDERSON
    Summary 1. The composition of freshwater invertebrate assemblages at a location is determined by a range of physico-chemical and biotic factors in the local environment, as well as larger-scale spatial factors such as sources of recruits. We assessed the relative importance of the species composition of local neighbourhoods and proximal environmental factors on the composition of invertebrate assemblages. 2. Macroinvertebrate assemblages were sampled at 188 running-water sites in the catchment of the River Rede, north-east England. A total of 176 species were recorded. 3. Environmental data, in the form of 13 biotic and abiotic measurements that described stream physical structure, aquatic vegetation and water characteristics, were recorded for each site. Detrended correspondence analysis was then used to simplify nine of these stream environmental variables to create an index of stream structure. 4. The species composition of the invertebrate assemblages was related to the environmental variables, using an information-theoretic approach. The impact of the species composition of neighbouring sites on each site was determined using Moran's I and autoregressive modelling techniques. 5. Species composition was primarily associated with water pH and stream structure. The importance of the species composition of neighbouring sites in determining local species assemblages differed markedly between taxa. The autoregressive component was low for Coleoptera, intermediate for Trichoptera and Plecoptera, and high for Ephemeroptera. 6. We hypothesise that the observed differences in the autoregressive component amongst these orders reflect variation in their dispersal abilities from neighbouring sites. [source]
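Moran's I, the spatial autocorrelation index used above, measures how strongly a site's value resembles its neighbours'. A sketch with four sites on a line and an invented binary neighbour matrix:

```python
# Hedged sketch: Moran's I for spatial autocorrelation. The values and
# the binary neighbour matrix are invented (four sites along a stream).

def morans_i(values, weights):
    """I = (n / W) * sum_ij w_ij (x_i - m)(x_j - m) / sum_i (x_i - m)^2"""
    n = len(values)
    m = sum(values) / n
    dev = [v - m for v in values]
    w_total = sum(sum(row) for row in weights)
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / w_total) * (num / den)

# Four sites in a row; each site neighbours the adjacent ones
w = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]

# A smooth downstream richness gradient -> positive autocorrelation
i_gradient = morans_i([1.0, 2.0, 3.0, 4.0], w)
# An alternating pattern -> negative autocorrelation
i_alternating = morans_i([1.0, 4.0, 1.0, 4.0], w)
```

A high I for a taxon (as found for Ephemeroptera) indicates that neighbouring sites carry real predictive information beyond the local environment.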


    BIOMOD , optimizing predictions of species distributions and projecting potential future shifts under global change

    GLOBAL CHANGE BIOLOGY, Issue 10 2003
    Wilfried Thuiller
    Abstract A new computation framework (BIOMOD: BIOdiversity MODelling) is presented, which aims to maximize the predictive accuracy of current species distributions and the reliability of future potential distributions using different types of statistical modelling methods. BIOMOD capitalizes on the different techniques used in static modelling to provide spatial predictions. It computes, for each species and in the same package, the four most widely used modelling techniques in species predictions, namely Generalized Linear Models (GLM), Generalized Additive Models (GAM), Classification and Regression Tree analysis (CART) and Artificial Neural Networks (ANN). BIOMOD was applied to 61 species of trees in Europe using climatic quantities as explanatory variables of current distributions. On average, all the different modelling methods yielded very good agreement between observed and predicted distributions. However, the relative performance of different techniques was idiosyncratic across species, suggesting that the most accurate model varies between species. The results of this evaluation also highlight that slight differences between current predictions from different modelling techniques are exacerbated in future projections. Therefore, it is difficult to assess the reliability of alternative projections without validation techniques or expert opinion. It is concluded that rather than using a single modelling technique to predict the distribution of several species, it would be more reliable to use a framework assessing different models for each species and selecting the most accurate one using both evaluation methods and expert knowledge. [source]
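The core BIOMOD idea — fit several techniques per species and keep the most accurate — reduces to a per-species argmax over evaluation scores. The model names follow the abstract; the species and their scores (e.g. AUC on held-out data) are invented:

```python
# Hedged sketch: per-species model selection in the BIOMOD spirit.
# Technique names follow the abstract; all scores are invented.

scores = {
    "Fagus sylvatica":  {"GLM": 0.83, "GAM": 0.88, "CART": 0.79, "ANN": 0.86},
    "Quercus robur":    {"GLM": 0.91, "GAM": 0.87, "CART": 0.84, "ANN": 0.89},
    "Pinus sylvestris": {"GLM": 0.76, "GAM": 0.81, "CART": 0.85, "ANN": 0.80},
}

# The best technique is idiosyncratic across species, as the study found
best_model = {species: max(s, key=s.get) for species, s in scores.items()}
```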


    Hydrodynamics and geomorphic work of jökulhlaups (glacial outburst floods) from Kverkfjöll volcano, Iceland

    HYDROLOGICAL PROCESSES, Issue 6 2007
    Jonathan L. Carrivick
    Abstract Jökulhlaups (glacial outburst floods) occur frequently within most glaciated regions of the world and cause rapid landscape change, infrastructure damage, and human disturbance. The largest jökulhlaups known to have occurred during the Holocene within Iceland drained from the northern margin of Vatnajökull and along the Jökulsá á Fjöllum. Some of these jökulhlaups originated from Kverkfjöll volcano and were routed through anastomosing, high-gradient and hydraulically rough channels. Landforms and sediments preserved within these channels permit palaeoflow reconstructions. Kverkfjöll jökulhlaups were reconstructed using palaeocompetence (point measurements), slope–area (one-dimensional), and depth-averaged two-dimensional (2D) hydrodynamic modelling techniques. The increasing complexity of 2D modelling required a range of assumptions, but produced information on both spatial and temporal variations in jökulhlaup characteristics. The jökulhlaups were volcanically triggered, had a linear-rise hydrograph and a peak discharge of 50 000–100 000 m³ s⁻¹, which attenuated by 50–75% within 25 km. Frontal flow velocities were ~2 m s⁻¹; but, as stage increased, velocities reached 5–15 m s⁻¹. Peak instantaneous shear stress and stream power reached 1 × 10⁴ N m⁻² and 1 × 10⁵ W m⁻² respectively. Hydraulic parameters can be related to landform groups. A hierarchy of landforms is proposed, ranging from the highest energy zones (erosional gorges, scoured bedrock, cataracts, and spillways) to the lowest energy zones (of valley fills, bars, and slackwater deposits). Fluvial erosion of bedrock occurred in Kverkfjallarani above ~3 m flow depth, ~7 m s⁻¹ flow velocity, ~1 × 10² N m⁻² shear stress, and 3 × 10² W m⁻² stream power. Fluvial deposition occurred in Kverkfjallarani below ~8 m flow depth, 11 m s⁻¹ flow velocity, 5 × 10² N m⁻² shear stress, and 3 × 10³ W m⁻² stream power. 
Hence, erosional and depositional 'envelopes' have considerable overlap, probably due to transitional flow phenomena and the influence of upstream effects, such as hydraulic ponding and topographic constrictions. Holocene Kverkfjöll jökulhlaups achieved geomorphic work comparable to that of other late Pleistocene 'megafloods'. This work was a result of steep channel gradients, topographic channel constrictions, and high hydraulic roughness, rather than extreme peak discharges. The Kverkfjöll jökulhlaups have implications for landscape evolution in north-central Iceland, for water-sediment inputs into the North Atlantic, and for recognizing jökulhlaups in the rock record. 2D hydrodynamic modelling is likely to be important for hazard mitigation in similar landscapes and on other glaciated volcanoes, because it requires only an input hydrograph and a digital elevation model to run, rather than suites of geomorphological evidence and field-surveyed valley cross-sections. Copyright © 2006 John Wiley & Sons, Ltd. [source]
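The bulk hydraulic quantities behind such erosional and depositional thresholds are boundary shear stress, τ = ρghS, and unit stream power, ω = τv. The channel numbers below are invented, chosen only to land near the magnitudes quoted in the abstract:

```python
# Hedged sketch: boundary shear stress (tau = rho * g * h * S, wide-channel
# approximation) and stream power per unit bed area (omega = tau * v).
# The reach geometry and velocity are invented.

RHO = 1000.0   # water density, kg m^-3
G = 9.81       # gravitational acceleration, m s^-2

def shear_stress(depth_m, slope):
    """Boundary shear stress (N m^-2) for a wide channel, R ~ depth."""
    return RHO * G * depth_m * slope

def unit_stream_power(depth_m, slope, velocity_ms):
    """Stream power per unit bed area (W m^-2)."""
    return shear_stress(depth_m, slope) * velocity_ms

# A steep bedrock reach: 3 m deep, 1% gradient, 7 m/s flow
tau = shear_stress(3.0, 0.01)
omega = unit_stream_power(3.0, 0.01, 7.0)
```

With these invented values, τ ≈ 294 N m⁻² and ω ≈ 2060 W m⁻², both above the order-10² erosion thresholds reported for Kverkfjallarani.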


    Application of fuzzy logic to forecast seasonal runoff

    HYDROLOGICAL PROCESSES, Issue 18 2003
    C. Mahabir
    Abstract Each spring in Alberta, Canada, the potential snowmelt runoff is forecast for several basins to assess the water supply situation. Water managers need this forecast to plan water allocations for the following summer season. The Lodge Creek and Middle Creek basins, located in southeastern Alberta, are two basins that require this type of late winter forecast of potential spring runoff. Historically, the forecast has been based upon a combination of regression equations. These results are then interpreted by a forecaster and are modified based on the forecaster's heuristic knowledge of the basin. Unfortunately, this approach has had limited success in the past, in terms of the accuracy of these forecasts, and consequently an alternative methodology is needed. In this study, the applicability of fuzzy logic modelling techniques for forecasting water supply was investigated. Fuzzy logic has been applied successfully in several fields where the relationship between cause and effect (variable and results) are vague. Fuzzy variables were used to organize knowledge that is expressed 'linguistically' into a formal analysis. For example, 'high snowpack', 'average snowpack' and 'low snowpack' became variables. By applying fuzzy logic, a water supply forecast was created that classified potential runoff into three forecast zones: 'low', 'average' and 'high'. Spring runoff forecasts from the fuzzy expert systems were found to be considerably more reliable than the regression models in forecasting the appropriate runoff zone, especially in terms of identifying low or average runoff years. Based on the modelling results in these two basins, it is concluded that fuzzy logic has a promising potential for providing reliable water supply forecasts. Copyright © 2003 John Wiley & Sons, Ltd. [source]
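A fuzzy variable like 'snowpack' is typically encoded with overlapping membership functions. The sketch below uses triangular sets and a one-rule-per-set base mapping snowpack straight to a forecast zone; the breakpoints and rules are invented, and the real expert system combined several basin indicators.

```python
# Hedged sketch: triangular fuzzy membership for a "snowpack" variable and
# a toy rule base mapping it to a runoff forecast zone. Breakpoints and
# rules are invented; the real system used multiple basin indicators.

def triangular(x, left, peak, right):
    """Membership in [0, 1] of a triangular fuzzy set."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

def fuzzify_snowpack(pct_of_normal):
    return {
        "low":     triangular(pct_of_normal, -1.0, 0.0, 100.0),
        "average": triangular(pct_of_normal, 50.0, 100.0, 150.0),
        "high":    triangular(pct_of_normal, 100.0, 200.0, 400.0),
    }

def forecast_zone(pct_of_normal):
    """Forecast zone = fuzzy set with the highest membership."""
    memberships = fuzzify_snowpack(pct_of_normal)
    return max(memberships, key=memberships.get)

zone = forecast_zone(60.0)
```

At 60% of normal snowpack the 'low' membership (0.4) exceeds the 'average' membership (0.2), so the zone is 'low'; the overlap between sets is what lets the system handle borderline years gracefully.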


    Assessment of two-equation turbulence modelling for high Reynolds number hydrofoil flows

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 3 2004
    N. Mulvany
    Abstract This paper presents an evaluation of the capability of turbulence models available in the commercial CFD code FLUENT 6.0 for application to hydrofoil turbulent boundary-layer separation flow at high Reynolds numbers. Four widely applied two-equation RANS turbulence models were assessed through comparison with experimental data at Reynolds numbers of 8.284×10⁶ and 1.657×10⁷: the standard k–ε model, the realizable k–ε model, the standard k–ω model and the shear-stress-transport (SST) k–ω model. It was found that the realizable k–ε turbulence model, used with enhanced wall functions and near-wall modelling techniques, consistently provides superior performance in predicting the flow characteristics around the hydrofoil. Copyright © 2004 John Wiley & Sons, Ltd. [source]


    Exploring the relationship between consumer knowledge and purchase behaviour of value-based labels

    INTERNATIONAL JOURNAL OF CONSUMER STUDIES, Issue 5 2008
    Morven G. McEachern
    Abstract Despite an increasing market presence, little research has been conducted regarding consumer purchase behaviour of food products bearing 'value-based' labels. Moreover, as the effectiveness of these labelling formats depends upon consumers' knowledge of their existence, this paper aims to explore the influence of knowledge and openness to experience (i.e. a validated personality trait related to intellectual capability) on purchase behaviour in this context. Using structural equation modelling techniques, causal influences on purchases of fresh meat bearing 'value-based' labels are identified and three multi-attribute attitude models are proposed. The paper concludes that these labels are of value to consumers and that product knowledge plays a significant role in aiding purchase decisions. Consequently, marketing communication implications arising from the proposed multi-attribute attitude models are discussed. [source]


    Modelling lifetime QALYs and health care costs from different drinking patterns over time: a Markov model

    INTERNATIONAL JOURNAL OF METHODS IN PSYCHIATRIC RESEARCH, Issue 2 2010
    Carolina Barbosa
    Abstract The negative health consequences of alcohol use and its treatment account for significant health care expenditure worldwide. Long-term modelling techniques are developed in this paper to establish a link between drinking patterns, health consequences and alcohol treatment effectiveness and cost-effectiveness. The overall change in health-related quality and quantity of life that results from changes in health-related behaviour is estimated. Specifically, a probabilistic lifetime Markov model is presented where alcohol consumption in grams of alcohol per day and drinking history are used for the categorization of patients into four Markov states. Utility weights are assigned to each drinking state using EQ-5D scores. Mortality and morbidity estimates are state, gender and age specific, and are alcohol-related and non-alcohol-related. The methodology is tested in a case study. This represents a major development in the techniques traditionally used in alcohol economic models, in which short-term costs and outcomes are assessed, omitting potential longer-term cost savings and improvements in health-related quality of life. Assumptions and implications of the approach are discussed. Copyright © 2010 John Wiley & Sons, Ltd. [source]
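A cohort Markov model of this kind cycles a state-occupancy distribution through a transition matrix, accumulating utility-weighted life years (QALYs) and costs. The state names, transition probabilities, utilities and costs below are invented stand-ins for the paper's EQ-5D-based values, and this sketch is undiscounted and not age-specific.

```python
# Hedged sketch: a cohort Markov model with drinking-pattern states.
# All transition probabilities, utilities and annual costs are invented;
# "dead" is the absorbing state.

states = ["abstinent", "moderate", "heavy", "dead"]

# Row-stochastic annual transition matrix (illustrative)
P = {
    "abstinent": {"abstinent": 0.85, "moderate": 0.10, "heavy": 0.03, "dead": 0.02},
    "moderate":  {"abstinent": 0.10, "moderate": 0.75, "heavy": 0.12, "dead": 0.03},
    "heavy":     {"abstinent": 0.05, "moderate": 0.15, "heavy": 0.72, "dead": 0.08},
    "dead":      {"abstinent": 0.0,  "moderate": 0.0,  "heavy": 0.0,  "dead": 1.0},
}
utility = {"abstinent": 0.90, "moderate": 0.85, "heavy": 0.70, "dead": 0.0}
annual_cost = {"abstinent": 200.0, "moderate": 400.0, "heavy": 1500.0, "dead": 0.0}

def run_cohort(start, cycles):
    """Accumulate (undiscounted) QALYs and costs over the model horizon."""
    dist = {s: 0.0 for s in states}
    dist[start] = 1.0
    qalys = costs = 0.0
    for _ in range(cycles):
        qalys += sum(dist[s] * utility[s] for s in states)
        costs += sum(dist[s] * annual_cost[s] for s in states)
        dist = {t: sum(dist[s] * P[s][t] for s in states) for t in states}
    return qalys, costs

qalys_heavy, costs_heavy = run_cohort("heavy", 40)
qalys_mod, costs_mod = run_cohort("moderate", 40)
```

Comparing cohorts that start in different drinking states is how such a model attaches lifetime QALY and cost consequences to a change in drinking pattern, e.g. one achieved by treatment.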


    The dynamics of social exclusion

    INTERNATIONAL JOURNAL OF SOCIAL WELFARE, Issue 4 2006
    Graham Room
    Much mainstream analysis of the dynamics of social exclusion is concerned with the changing circumstances of households, using panel and cohort studies. However, changes in these circumstances are mediated by institutional processes and can be adequately explained only if the interactions of institutional and household strategies are taken into account. This is also a precondition of sound inferences for policy. These interactions may involve feedback loops and cumulative change: these require analysis as dynamic systems. The article explores how such dynamic systems can be modelled. It proposes a toolkit that brings together qualitative and quantitative modelling techniques, checks them against empirical data and roots their interpretation within an action frame of reference. [source]


    Risk modelling in blood safety: review of methods, strengths and limitations

    ISBT SCIENCE SERIES: THE INTERNATIONAL JOURNAL OF INTRACELLULAR TRANSPORT, Issue n1 2010
    B. Custer
    Risk modelling studies in blood safety play an important but occasionally misunderstood role. These studies are intended to quantify and contrast risks and benefits. This information is critical for policy development and intervention decision-making. The limitations of risk modelling should be considered alongside the results obtained. The goal of this manuscript and presentation is to review current risk modelling techniques used in blood safety and to discuss the pros and cons of using this information in the decision-making process. The types of questions that can be answered include the extent of a risk or threat; implications of action or inaction; identification of effective strategies for risk management; or whether to adopt specific interventions. These analyses can be focused on a risk alone but are often combined with economic information to gain an understanding of feasible risk interventions given budgetary or other monetary considerations. Thus, analyses that include risk modelling provide insights along multiple lines. As important, the analyses also provide information on what is not known or uncertain about a potential hazard and how much that uncertainty may influence the decision-making process. Specific examples of the range of risk analyses in which the author has participated will be reviewed and will include ongoing process improvement in testing laboratories such as error identification/eradication, estimation of the risk of malaria exposure based on the specific locations of travel, evaluation of blood supply and demand during an influenza pandemic, cost-utility analyses of screening interventions for infectious diseases in countries with different human development indices, and insurance against emerging pathogen risk. Each of these analyses has a different purpose and seeks to answer different questions, but all rely on similar methods. The tool kit for risk analysis is broad and varied but does have limitations. 
The chief limitation of risk modelling is that risk analyses are not scientific experiments or otherwise controlled studies. Consequently, the analyses are more apt to be influenced by assumptions. These assumptions may be necessary to structure a problem in a way that will allow the question of interest to be answered or may result from incomplete or missing information. Another potential limitation is that commissioners of such studies, those who undertake them, and the intended audience, such as regulatory agencies, may have distinct and differing interpretations of the results. Risk modelling is a set of techniques that can be used to inform and support decision-making at all levels in transfusion medicine. Advances in risk modelling techniques allow for continued expansion in the scope of possible questions that can be analysed. Expanded use also improves the acceptance of the utility of these studies in blood safety and transfusion medicine. [source]


    Go climb a mountain: an application of recreation demand modelling to rock climbing in Scotland

    JOURNAL OF AGRICULTURAL ECONOMICS, Issue 1 2001
    Nick Hanley
    In this paper, we apply random utility modelling techniques to rock-climbing in Scotland. Attributes relevant to choices over rock-climbing sites were identified from focus groups with climbers, along with a categorisation of principal climbing areas. A survey of climbers yielded 267 responses, which were then used as the basis for modelling. We compare a standard multinomial logit model with a random parameters approach, and look at seasonal differences in behaviour, and at the implications of different treatments of travel time. The random utility models showed that most of the attributes selected were significant determinants of choice. Welfare estimates of changes in site attributes are presented, which are relevant to policy choices currently facing land managers. [source]
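The core of any random utility model is the multinomial logit choice probability, which maps the systematic utility of each alternative to a probability of choosing it. A minimal sketch follows; the site utilities and attribute names are hypothetical, not the paper's estimates:

```python
import math

def mnl_probabilities(utilities):
    """Multinomial logit: P(i) = exp(V_i) / sum_j exp(V_j)."""
    m = max(utilities)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical systematic utilities for three climbing sites, e.g. built up as
# V = b_scenery * scenery + b_crowding * crowding - b_cost * travel_cost
site_utilities = [1.2, 0.4, -0.5]
probs = mnl_probabilities(site_utilities)
```

A random parameters (mixed logit) model, as compared in the paper, would draw the coefficients b from distributions across climbers and average these probabilities over the draws.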


    The influence of spatial errors in species occurrence data used in distribution models

    JOURNAL OF APPLIED ECOLOGY, Issue 1 2008
    Catherine H Graham
    Summary 1. Species distribution modelling is used increasingly in both applied and theoretical research to predict how species are distributed and to understand attributes of species' environmental requirements. In species distribution modelling, various statistical methods are used that combine species occurrence data with environmental spatial data layers to predict the suitability of any site for that species. While the number of data sharing initiatives involving species' occurrences in the scientific community has increased dramatically over the past few years, various data quality and methodological concerns related to using these data for species distribution modelling have not been addressed adequately. 2. We evaluated how uncertainty in georeferences and associated locational error in occurrences influence species distribution modelling using two treatments: (1) a control treatment where models were calibrated with original, accurate data and (2) an error treatment where data were first degraded spatially to simulate locational error. To incorporate error into the coordinates, we moved each coordinate with a random number drawn from the normal distribution with a mean of zero and a standard deviation of 5 km. We evaluated the influence of error on the performance of 10 commonly used distributional modelling techniques applied to 40 species in four distinct geographical regions. 3. Locational error in occurrences reduced model performance in three of these regions; relatively accurate predictions of species distributions were possible for most species, even with degraded occurrences. Two species distribution modelling techniques, boosted regression trees and maximum entropy, were the best performing models in the face of locational errors. The results obtained with boosted regression trees were only slightly degraded by errors in location, and the results obtained with the maximum entropy approach were not affected by such errors. 4. Synthesis and applications. 
To use the vast array of occurrence data that exists currently for research and management relating to the geographical ranges of species, modellers need to know the influence of locational error on model quality and whether some modelling techniques are particularly robust to error. We show that certain modelling techniques are particularly robust to a moderate level of locational error and that useful predictions of species distributions can be made even when occurrence data include some error. [source]
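The error treatment described in point 2 (perturbing each coordinate with zero-mean normal noise, standard deviation 5 km) can be sketched directly. The fixed seed is an assumption added here so that runs are reproducible:

```python
import random

def degrade(coords, sd_km=5.0, seed=42):
    """Error treatment: add independent N(0, sd_km) noise to each x and y
    coordinate (in km), simulating georeferencing error in occurrence records."""
    rng = random.Random(seed)
    return [(x + rng.gauss(0, sd_km), y + rng.gauss(0, sd_km)) for x, y in coords]

# Control treatment keeps the original coordinates; the error treatment
# calibrates the same models on the degraded copies.
occurrences = [(0.0, 0.0), (100.0, 200.0)]
degraded = degrade(occurrences)
```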


    Distribution modelling and statistical phylogeography: an integrative framework for generating and testing alternative biogeographical hypotheses

    JOURNAL OF BIOGEOGRAPHY, Issue 11 2007
    Corinne L. Richards
    Abstract Statistical phylogeographic studies contribute to our understanding of the factors that influence population divergence and speciation, and that ultimately generate biogeographical patterns. The use of coalescent modelling for analyses of genetic data provides a framework for statistically testing alternative hypotheses about the timing and pattern of divergence. However, the extent to which such approaches contribute to our understanding of biogeography depends on how well the alternative hypotheses chosen capture relevant aspects of species histories. New modelling techniques, which explicitly incorporate spatio-geographic data external to the gene trees themselves, provide a means for generating realistic phylogeographic hypotheses, even for taxa without a detailed fossil record. Here we illustrate how two such techniques (species distribution modelling and its historical extension, palaeodistribution modelling) in conjunction with coalescent simulations can be used to generate and test alternative hypotheses. In doing so, we highlight a few key studies that have creatively integrated both historical geographic and genetic data and argue for the wider incorporation of such explicit integrations in biogeographical studies. [source]


    Are niche-based species distribution models transferable in space?

    JOURNAL OF BIOGEOGRAPHY, Issue 10 2006
    Christophe F. Randin
    Abstract Aim: To assess the geographical transferability of niche-based species distribution models fitted with two modelling techniques. Location: Two distinct geographical study areas in Switzerland and Austria, in the subalpine and alpine belts. Methods: Generalized linear and generalized additive models (GLM and GAM) with a binomial probability distribution and a logit link were fitted for 54 plant species, based on topoclimatic predictor variables. These models were then evaluated quantitatively and used for spatially explicit predictions within (internal evaluation and prediction) and between (external evaluation and prediction) the two regions. Comparisons of evaluations and spatial predictions between regions and models were conducted in order to test if species and methods meet the criteria of full transferability. By full transferability, we mean that: (1) the internal evaluation of models fitted in region A and B must be similar; (2) a model fitted in region A must at least retain a comparable external evaluation when projected into region B, and vice versa; and (3) internal and external spatial predictions have to match within both regions. Results: The measures of model fit are, on average, 24% higher for GAMs than for GLMs in both regions. However, the differences between internal and external evaluations (AUC coefficient) are also higher for GAMs than for GLMs (a difference of 30% for models fitted in Switzerland and 54% for models fitted in Austria). Transferability, as measured with the AUC evaluation, fails for 68% of the species in Switzerland and 55% in Austria for GLMs (respectively for 67% and 53% of the species for GAMs). For both GAMs and GLMs, the agreement between internal and external predictions is rather weak on average (Kulczynski's coefficient in the range 0.3–0.4), but varies widely among individual species. 
The dominant pattern is an asymmetrical transferability between the two study regions (a mean decrease of 20% for the AUC coefficient when the models are transferred from Switzerland and 13% when they are transferred from Austria). Main conclusions: The large inter-specific variability observed among the 54 study species underlines the need to consider more than a few species to test properly the transferability of species distribution models. The pronounced asymmetry in transferability between the two study regions may be due to peculiarities of these regions, such as differences in the ranges of environmental predictors or the varied impact of land-use history, or to species-specific reasons like differential phenotypic plasticity, existence of ecotypes or varied dependence on biotic interactions that are not properly incorporated into niche-based models. The lower variation between internal and external evaluation of GLMs compared to GAMs further suggests that overfitting may reduce transferability. Overall, a limited geographical transferability calls for caution when projecting niche-based models for assessing the fate of species in future environments. [source]
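The transferability test hinges on comparing an internal AUC (model evaluated in its home region) with an external AUC (same model projected into the other region). AUC can be computed directly as the Mann–Whitney probability that a random presence is scored above a random absence. The scores below are invented placeholders, not the study's data:

```python
def auc(scores_pos, scores_neg):
    """AUC as the probability that a randomly chosen presence receives a
    higher model score than a randomly chosen absence (ties count half)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical model outputs: presences/absences scored by a model fitted in
# region A, evaluated internally (region A) and externally (region B).
internal_auc = auc([0.9, 0.8, 0.75, 0.7], [0.3, 0.4, 0.2, 0.5])
external_auc = auc([0.7, 0.55, 0.6, 0.4], [0.45, 0.5, 0.35, 0.6])
transfer_drop = internal_auc - external_auc  # asymmetry appears when this
# drop differs depending on which region the model was fitted in
```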


    Model-based uncertainty in species range prediction

    JOURNAL OF BIOGEOGRAPHY, Issue 10 2006
    Richard G. Pearson
    Abstract Aim: Many attempts to predict the potential range of species rely on environmental niche (or 'bioclimate envelope') modelling, yet the effects of using different niche-based methodologies require further investigation. Here we investigate the impact that the choice of model can have on predictions, identify key reasons why model output may differ and discuss the implications that model uncertainty has for policy-guiding applications. Location: The Western Cape of South Africa. Methods: We applied nine of the most widely used modelling techniques to model potential distributions under current and predicted future climate for four species (including two subspecies) of Proteaceae. Each model was built using an identical set of five input variables and distribution data for 3996 sampled sites. We compare model predictions by testing agreement between observed and simulated distributions for the present day (using the area under the receiver operating characteristic curve (AUC) and kappa statistics) and by assessing consistency in predictions of range size changes under future climate (using cluster analysis). Results: Our analyses show significant differences between predictions from different models, with predicted changes in range size by 2030 differing in both magnitude and direction (e.g. from 92% loss to 322% gain). We explain differences with reference to two characteristics of the modelling techniques: data input requirements (presence/absence vs. presence-only approaches) and assumptions made by each algorithm when extrapolating beyond the range of data used to build the model. The effects of these factors should be carefully considered when using this modelling approach to predict species ranges. 
Main conclusions: We highlight an important source of uncertainty in assessments of the impacts of climate change on biodiversity and emphasize that model predictions should be interpreted in policy-guiding applications along with a full appreciation of uncertainty. [source]


    Use of pharmacoeconomics in prescribing research.

    JOURNAL OF CLINICAL PHARMACY & THERAPEUTICS, Issue 5 2003
    Part 5: modelling beyond clinical trials
    Summary This paper provides an overview of modelling in the economic evaluation of pharmaceuticals, reflecting the increasing use of models in analyses prepared for reimbursement applications to national and local drug formularies. The paper seeks to demystify the most commonly encountered modelling techniques (extrapolation, decision analysis, Markov modelling and Monte Carlo simulation), and to provide guidance in assessing the quality of submitted or published modelled economic evaluations. [source]
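Of the techniques the paper surveys, Monte Carlo simulation is the most mechanical to illustrate: parameter values are drawn repeatedly from assumed distributions and the incremental results summarised. The distributions, means and willingness-to-pay threshold below are invented for illustration:

```python
import random

def simulate_icer(n_draws=5000, seed=1, wtp=20000):
    """Probabilistic sensitivity analysis sketch: draw incremental costs and
    QALYs for a new treatment vs. comparator, then summarise."""
    rng = random.Random(seed)
    inc_costs, inc_qalys = [], []
    for _ in range(n_draws):
        # Hypothetical normal distributions for costs and effects per arm.
        cost_new, cost_old = rng.gauss(12000, 1500), rng.gauss(9000, 1200)
        qaly_new, qaly_old = rng.gauss(6.1, 0.4), rng.gauss(5.8, 0.4)
        inc_costs.append(cost_new - cost_old)
        inc_qalys.append(qaly_new - qaly_old)
    icer = (sum(inc_costs) / n_draws) / (sum(inc_qalys) / n_draws)
    # Fraction of draws with positive net monetary benefit at the threshold.
    p_cost_effective = sum(
        1 for c, q in zip(inc_costs, inc_qalys) if wtp * q - c > 0
    ) / n_draws
    return icer, p_cost_effective
```

In a submitted evaluation these draws would come from fitted distributions (e.g. gamma for costs, beta for utilities) rather than the normals assumed here.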


    Organizational Justice and Individuals' Withdrawal: Unlocking the Influence of Emotional Exhaustion

    JOURNAL OF MANAGEMENT STUDIES, Issue 3 2010
    Michael S. Cole
    abstract This study examined the relationships between organizational justice and withdrawal outcomes and whether emotional exhaustion was a mediator of these linkages. Data were obtained from 869 military personnel and civil servants; using structural equation modelling techniques, we examined an integrative model that combines justice and stress research. Our findings suggest that individuals' justice perceptions are related to their psychological health. As predicted, emotional exhaustion mediated the linkages between distributive and interpersonal (but not procedural and informational) justice and individuals' withdrawal reactions. Results showed that distributive and interpersonal justice negatively related to emotional exhaustion and emotional exhaustion negatively related to organizational commitment which, in turn, negatively influenced turnover intentions. These findings were observed even when controlling for the presence of contingent-reward behaviours provided by supervisors and individuals' psychological empowerment. [source]
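The mediation claim (justice -> emotional exhaustion -> commitment) rests on the product-of-coefficients logic: the indirect effect is path a times path b. Full structural equation modelling would be done in dedicated software (e.g. lavaan or semopy); the stdlib sketch below only illustrates that logic on simulated data with assumed effect sizes, not the study's estimates:

```python
import random
from statistics import mean

def slope(x, y):
    """OLS slope for a single predictor."""
    mx, my = mean(x), mean(y)
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
           sum((xi - mx) ** 2 for xi in x)

# Simulated data consistent with the reported chain: lower distributive
# justice -> more exhaustion -> lower commitment (effect sizes assumed).
rng = random.Random(0)
justice = [rng.gauss(0, 1) for _ in range(500)]
exhaustion = [-0.5 * j + rng.gauss(0, 1) for j in justice]     # path a < 0
commitment = [-0.6 * e + rng.gauss(0, 1) for e in exhaustion]  # path b < 0

a = slope(justice, exhaustion)
b = slope(exhaustion, commitment)
indirect = a * b  # two negative paths give a positive indirect effect
```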


    An Opportunity for Me?

    JOURNAL OF MANAGEMENT STUDIES, Issue 3 2009
    The Role of Resources in Opportunity Evaluation Decisions
    abstract We apply the prescriptions of the resource-based perspective to develop a model of entrepreneurial opportunity evaluation. We propose that opportunity evaluation decision policies are constructed as future-oriented, cognitive representations of 'what will be', assuming one were to exploit the opportunity under evaluation. These cognitive representations incorporate both (1) an evaluation of the existing resource endowments (already under the control of the venture), which may be employed to exploit the opportunity under evaluation, and at the same time (2) an assessment of the future, wealth generating resources that must be marshalled (and subsequently under the control of the venture) in order to exploit the opportunity under evaluation. We empirically test this model using conjoint analysis and hierarchical linear modelling techniques to decompose more than 2300 opportunity evaluation decisions nested in a sample of 73 entrepreneurs. Our findings suggest that entrepreneurs are attracted to opportunities that are complementary to their existing knowledge resources; however, we also identify a set of opportunity-specific and firm-specific conditions that encourage entrepreneurs to pursue the acquisition and control of resources that are inconsistent with the existing, knowledge-based resources of the venture. We identify these conditions and discuss implications for theory, practice, and future research. [source]


    Metamorphic phase relations in orthopyroxene-bearing granitoids: implication for high-pressure metamorphism and prograde melting in the continental crust

    JOURNAL OF METAMORPHIC GEOLOGY, Issue 4 2009
    S. K. BHOWMIK
    Abstract In this work, the factors controlling the formation and preservation of high-pressure mineral assemblages in the metamorphosed orthopyroxene-bearing metagranitoids of the Sandmata Complex, Aravalli-Delhi Mobile Belt (ADMB), northwestern India have been modelled. The rocks range in composition from farsundite through quartz mangerite to opdalite, and with varying K2O, Ca/(Ca + Na)rock and FeOtot + MgO contents. A two stage metamorphic evolution has been recorded in these rocks. An early hydration event stabilized biotite with or without epidote at the expense of magmatic orthopyroxene and plagioclase. Subsequent high-pressure granulite facies metamorphism (~15 kbar, ~800 °C) of these hydrated rocks produced two rock types with contrasting mineralogy and textures. In the non-migmatitic metagranitoids, spectacular garnet ± K-feldspar ± quartz corona was formed around reacting biotite, plagioclase, quartz and/or pyroxene. In contrast, biotite ± epidote melting produced migmatites, containing porphyroblastic garnet incongruent solids and leucosomes. Applying NCKFMASHTO T–M(H2O) and P–T pseudosection modelling techniques, it is demonstrated that the differential response of these magmatic rocks to high-pressure metamorphism is primarily controlled by the scale of initial hydration. Rocks, which were pervasively hydrated, produced garnetiferous migmatites, while for limited hydration, the same metamorphism formed sub-solidus garnet-bearing coronae. Based on the sequence of mineral assemblage evolution and the mineral compositional zoning features in the two metagranitoids, a clockwise metamorphic P–T path is constrained for the high-pressure metamorphic event. The finding has major implications in formulating geodynamic model of crustal amalgamation in the ADMB. [source]


    INVESTIGATION OF ELASTIC INVERSION ATTRIBUTES USING THE EXPANSIBLE CLAY MODEL FOR WATER SATURATION

    JOURNAL OF PETROLEUM GEOLOGY, Issue 2 2009
    J. O. Ugbo
    Quantitative X-ray diffraction has been used to characterize water saturation levels in complex shaly sand reservoirs (i.e. shaly sands with infrequent carbonates and minor proportions of iron-rich minerals such as pyrite and siderite). The results led to the design of a total expansible clay model for water saturation, which is similar in form to the Dual Water model except that the excess effect of the clay minerals has been accounted for by a volume-conductivity relationship, rather than one of the usual volume-porosity translations, effectively reducing the uncertainties in estimating water saturation. Given the ambiguities associated with predicting these petrophysical properties from data on rock properties, such as mineralogy, an investigation of the relationship of estimated water saturation based on the total expansible clay model to independently determined rock properties was undertaken using well log inversion and forward modelling techniques. The results show that there is consistency in the relationship between water saturation estimates made from the total expansible clay model and known elastic parameters such as primary and shear-wave sonic velocity (Vp, Vs), bulk density (ρb) and impedance (I), when the Raymer-Gardner-Hunt model is used. Use of the Raymer-Gardner-Hunt model to reconstruct the required rock-physics relationship avoids the classic limitation of the more advanced Gassmann model, which assumes that the dry shear modulus is equivalent to the wet shear modulus (μdry = μwet). The present work raises further questions on the application of the Voigt-Reuss-Hill (VRH) limits, or the Hashin–Shtrikman bounds for averaging the effective shear modulus of the dry matrix in complex shaly sand reservoirs, where a two-mineral matrix is normally assumed. 
The study shows the inapplicability of the VRH or Hashin-Shtrikman averaging techniques but provides a minor adjustment to the averaging that solves the problems faced in reconstructing the relationships between directly measured elastic properties and derived petrophysical properties for this type of reservoir rock. [source]
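The Voigt-Reuss-Hill averaging the study questions is straightforward to state: the Voigt (iso-strain) bound is the arithmetic mean of the constituent moduli, the Reuss (iso-stress) bound the harmonic mean, and VRH their midpoint. A sketch for a two-mineral dry matrix follows; the volume fractions and shear moduli are illustrative values, not the study's data:

```python
def voigt(fractions, moduli):
    """Voigt (iso-strain) upper bound: volume-weighted arithmetic average."""
    return sum(f * m for f, m in zip(fractions, moduli))

def reuss(fractions, moduli):
    """Reuss (iso-stress) lower bound: volume-weighted harmonic average."""
    return 1.0 / sum(f / m for f, m in zip(fractions, moduli))

def vrh(fractions, moduli):
    """Voigt-Reuss-Hill average: midpoint of the two bounds."""
    return 0.5 * (voigt(fractions, moduli) + reuss(fractions, moduli))

# Hypothetical two-mineral matrix: a stiff framework mineral (~44 GPa shear
# modulus, roughly quartz-like) and a soft clay fraction (~7 GPa).
f = [0.8, 0.2]
mu = [44.0, 7.0]
mu_voigt, mu_reuss, mu_vrh = voigt(f, mu), reuss(f, mu), vrh(f, mu)
```

The study's point is that for these reservoirs the effective modulus falls outside what such averaging predicts, motivating its adjusted scheme.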