Predictive Ability

Kinds of Predictive Ability

  • good predictive ability
  • high predictive ability


  • Selected Abstracts


    TESTING LONG-HORIZON PREDICTIVE ABILITY WITH HIGH PERSISTENCE, AND THE MEESE–ROGOFF PUZZLE

    INTERNATIONAL ECONOMIC REVIEW, Issue 1 2005
    Barbara Rossi
    A well-known puzzle in international finance is that a random walk predicts exchange rates better than economic models. I offer a potential explanation. When exchange rates and fundamentals are highly persistent, long-horizon forecasts of economic models are biased by the estimation error. When this bias is big, a random walk will forecast better, even if the economic model is true. I propose a test for equal predictability in the presence of high persistence. It shows that the poor forecasting ability of economic models does not imply that the models are not good descriptions of the data. [source]
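The equal-predictability comparison at issue can be made concrete with a toy out-of-sample test. The sketch below computes a Diebold–Mariano-type statistic on squared forecast errors; it is purely illustrative and omits the persistence corrections that Rossi's actual test introduces, and all names are ours, not the paper's.

```python
import math

def dm_statistic(errors_model, errors_rw):
    """Diebold-Mariano-type statistic on squared forecast errors.

    d_t = e_model^2 - e_rw^2; a positive statistic means the random
    walk forecast better on average. Rossi's test adds corrections
    for highly persistent regressors that are omitted here.
    """
    d = [em ** 2 - er ** 2 for em, er in zip(errors_model, errors_rw)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)
    return mean_d / math.sqrt(var_d / n)
```

With forecast errors from two competing models, the sign of the statistic indicates which forecast better; its magnitude (under standard assumptions) can be compared to normal critical values.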


    Predictive Ability of Pretransplant Comorbidities to Predict Long-Term Graft Loss and Death

    AMERICAN JOURNAL OF TRANSPLANTATION, Issue 3 2009
    G. Machnicki
    Whether to include additional comorbidities beyond diabetes in future kidney allocation schemes is controversial. We investigated the predictive ability of multiple pretransplant comorbidities for graft and patient survival. We included first-kidney transplant deceased donor recipients if Medicare was the primary payer for at least one year pretransplant. We extracted pretransplant comorbidities from Medicare claims with the Clinical Classifications Software (CCS), Charlson and Elixhauser comorbidities and used Cox regressions for graft loss, death with function (DWF) and death. Four models were compared: (1) Organ Procurement Transplant Network (OPTN) recipient and donor factors, (2) OPTN + CCS, (3) OPTN + Charlson and (4) OPTN + Elixhauser. Patients were censored at 9 years or loss to follow-up. Predictive performance was evaluated with the c-statistic. We examined 25 270 transplants between 1995 and 2002. For graft loss, the predictive value of all models was statistically and practically similar (Model 1: 0.61 [0.60–0.62], Model 2: 0.63 [0.62–0.64], Models 3 and 4: 0.62 [0.61–0.63]). For DWF and death, performance improved to 0.70 and was slightly better with the CCS. Pretransplant comorbidities derived from administrative claims did not identify factors not collected on OPTN that had a significant impact on graft outcome predictions. This has important implications for the revisions to the kidney allocation scheme. [source]
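The c-statistic used to compare these models is a concordance probability. A minimal sketch for a binary outcome follows; the paper's version handles censored survival data (Harrell's pairwise rules), which this deliberately omits, and the function name is illustrative.

```python
def c_statistic(scores, events):
    """Concordance (c-statistic): the fraction of event/non-event
    pairs in which the event case received the higher risk score.
    Ties count as half. A value of 0.5 is chance; 1.0 is perfect.
    """
    pairs = concordant = 0.0
    for si, ei in zip(scores, events):
        for sj, ej in zip(scores, events):
            if ei == 1 and ej == 0:
                pairs += 1
                if si > sj:
                    concordant += 1
                elif si == sj:
                    concordant += 0.5
    return concordant / pairs
```

This makes the reported values interpretable: a c-statistic of 0.61 means the model ranks a randomly chosen graft-loss case above a randomly chosen non-case only 61% of the time.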


    Predictive ability of propofol effect-site concentrations during fast and slow infusion rates

    ACTA ANAESTHESIOLOGICA SCANDINAVICA, Issue 4 2010
    P. O. SEPÚLVEDA
    Background: The performance of propofol effect-site pharmacokinetic models during target-controlled infusion (TCI) might be affected by propofol administration rate. This study compares the predictive ability of three effect-site pharmacokinetic models during fast and slow infusion rates, utilizing the cerebral state index (CSI) as a monitor of consciousness. Methods: Sixteen healthy volunteers, 21–45 years of age, were randomly assigned to receive either a bolus dose of propofol 1.8 mg/kg at a rate of 1200 ml/h or an infusion of 12 mg/kg/h until 3–5 min after loss of consciousness (LOC). After spontaneous recovery of the CSI, the bolus was administered to patients who had first received the infusion and vice versa. The study was completed after spontaneous recovery of CSI following the second dose scheme. LOC was assessed and recorded when it occurred. Adequacies of model predictions during both administration schemes were assessed by comparing the effect-site concentrations estimated at the time of LOC during the bolus dose and during the infusion scheme. Results: LOC occurred 0.97 ± 0.29 min after the bolus dose and 6.77 ± 3.82 min after beginning the infusion scheme (P<0.05). The Ce estimated with Schnider (ke0=0.45/min), Marsh (ke0=1.21/min) and Marsh (ke0=0.26/min) at LOC were 4.40 ± 1.45, 3.55 ± 0.64 and 1.28 ± 0.44 µg/ml during the bolus dose and 2.81 ± 0.61, 2.50 ± 0.39 and 1.72 ± 0.41 µg/ml during the infusion scheme (P<0.05). The CSI values observed at LOC were 70 ± 4 during the bolus dose and 71 ± 2 during the infusion scheme (NS). Conclusion: Speed of infusion, within the ranges allowed by TCI pumps, significantly affects the accuracy of Ce predictions. The CSI monitor was shown to be a useful tool to predict LOC in both rapid and slow infusion schemes. [source]


    Predictive ability of models for calving difficulty in US Holsteins

    JOURNAL OF ANIMAL BREEDING AND GENETICS, Issue 3 2009
    E.L. De Maturana
    Summary The performance of alternative threshold models for analyzing calving difficulty (CD) in Holstein cows was evaluated in terms of predictive ability. Four models were considered, with CD classified into either three or four categories and analysed either as a single trait or jointly with gestation length (GL). The data contained GL and CD records from 90 393 primiparous cows, sired by 1122 bulls and distributed over 935 herd-calving year classes. Predictive ability of each model was evaluated using four criteria: mean squared error of the difference between observed and predicted CD scores; a Kullback-Leibler divergence measure between the observed and predicted distributions of CD scores; Pearson's correlation between observed and predicted CD scores and ability to correctly classify bulls as above or below average for incidence of CD. In general, the four models had similar predictive abilities. The joint analysis of CD with GL produced little, if any, improvement in predictive ability over univariate models. In light of the small difference in predictive ability between models treating CD with three or four categories and considering that a greater number of categories can provide more information, analysis of CD classified into four categories seems warranted. [source]
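Three of the four evaluation criteria named in this abstract are simple to state in code. The sketch below is illustrative (function names and data are ours); the fourth criterion, classifying bulls as above or below average, is a straightforward threshold on predicted incidence.

```python
import math

def mse(obs, pred):
    """Mean squared error between observed and predicted scores."""
    return sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)

def pearson(obs, pred):
    """Pearson correlation between observed and predicted scores."""
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    return cov / (so * sp)

def kl_divergence(p, q):
    """Kullback-Leibler divergence between observed (p) and
    predicted (q) category frequencies; assumes q has no zeros."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

Applied to observed versus predicted calving-difficulty scores, lower MSE and KL divergence and higher correlation all indicate better predictive ability.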


    An Independent Evaluation of Four Quantitative Emergency Department Crowding Scales

    ACADEMIC EMERGENCY MEDICINE, Issue 11 2006
    Spencer S. Jones MStat
    Background Emergency department (ED) overcrowding has become a frequent topic of investigation. Despite a significant body of research, there is no standard definition or measurement of ED crowding. Four quantitative scales for ED crowding have been proposed in the literature: the Real-time Emergency Analysis of Demand Indicators (READI), the Emergency Department Work Index (EDWIN), the National Emergency Department Overcrowding Study (NEDOCS) scale, and the Emergency Department Crowding Scale (EDCS). These four scales have yet to be independently evaluated and compared. Objectives The goals of this study were to formally compare four existing quantitative ED crowding scales by measuring their ability to detect instances of perceived ED crowding and to determine whether any of these scales provide a generalizable solution for measuring ED crowding. Methods Data were collected at two-hour intervals over 135 consecutive sampling instances. Physician and nurse agreement was assessed using weighted κ statistics. The crowding scales were compared via correlation statistics and their ability to predict perceived instances of ED crowding. Sensitivity, specificity, and positive predictive values were calculated at site-specific cut points and at the recommended thresholds. Results All four of the crowding scales were significantly correlated, but their predictive abilities varied widely. NEDOCS had the highest area under the receiver operating characteristic curve (AROC) (0.92), while EDCS had the lowest (0.64). The recommended thresholds for the crowding scales were rarely exceeded; therefore, the scales were adjusted to site-specific cut points. At a site-specific cut point of 37.19, NEDOCS had the highest sensitivity (0.81), specificity (0.87), and positive predictive value (0.62). Conclusions At the study site, the suggested thresholds of the published crowding scales did not agree with providers' perceptions of ED crowding. 
Even after adjusting the scales to site-specific thresholds, a relatively low prevalence of ED crowding resulted in unacceptably low positive predictive values for each scale. These results indicate that these crowding scales lack scalability and do not perform as designed in EDs where crowding is not the norm. However, two of the crowding scales, EDWIN and NEDOCS, and one of the READI subscales, bed ratio, yielded good predictive power (AROC >0.80) of perceived ED crowding, suggesting that they could be used effectively after a period of site-specific calibration at EDs where crowding is a frequent occurrence. [source]
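The cut-point metrics reported here (sensitivity, specificity, PPV) come from dichotomizing a continuous scale. A minimal sketch, with illustrative names and data; note that PPV, unlike sensitivity and specificity, depends on prevalence, which is exactly why the low crowding prevalence at this site depressed PPV.

```python
def confusion_metrics(scores, crowded, cutpoint):
    """Sensitivity, specificity and positive predictive value of a
    crowding scale dichotomized at a site-specific cut point."""
    tp = sum(1 for s, y in zip(scores, crowded) if s >= cutpoint and y)
    fp = sum(1 for s, y in zip(scores, crowded) if s >= cutpoint and not y)
    fn = sum(1 for s, y in zip(scores, crowded) if s < cutpoint and y)
    tn = sum(1 for s, y in zip(scores, crowded) if s < cutpoint and not y)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp) if tp + fp else float("nan")
    return sens, spec, ppv
```

Sweeping `cutpoint` over the observed score range and plotting sensitivity against (1 − specificity) yields the ROC curve whose area (AROC) the study reports.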


    A general model for predicting brown tree snake capture rates

    ENVIRONMETRICS, Issue 3 2003
    Richard M. Engeman
    Abstract The inadvertent introduction of the brown tree snake (Boiga irregularis) to Guam has resulted in the extirpation of most of the island's native terrestrial vertebrates, has presented a health hazard to small children, and also has produced economic problems. Trapping around ports and other cargo staging areas is central to a program designed to deter dispersal of the species. Sequential trapping of smaller plots is also being used to clear larger areas of snakes in preparation for endangered species reintroductions. Traps and trapping personnel are limited resources, which places a premium on the ability to plan the deployment of trapping efforts. In a series of previous trapping studies, data on brown tree snake removal from forested plots was found to be well modeled by exponential decay functions. For the present article, we considered a variety of model forms and estimation procedures, and used capture data from individual plots as random subjects to produce a general random coefficients model for making predictions of brown tree snake capture rates. The best model was an exponential decay with positive asymptote produced using nonlinear mixed model estimation where variability among plots was introduced through the scale and asymptote parameters. Practical predictive abilities were used in model evaluation so that a manager could project capture rates in a plot after a period of time, or project the amount of time required for trapping to reduce capture rates to a desired level. The model should provide managers with a tool for optimizing the allocation of limited trapping resources. Copyright © 2003 John Wiley & Sons, Ltd. [source]
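The preferred model form, exponential decay toward a positive asymptote, supports both management calculations the abstract describes: projecting the capture rate after a trapping period, and inverting the curve for the time needed to reach a target rate. A sketch under assumed (not the paper's) parameter values:

```python
import math

def capture_rate(t, a, b, c):
    """Exponential decay with positive asymptote: c + a*exp(-b*t).
    In the paper, a (scale) and c (asymptote) vary randomly across
    plots; the values used here are purely illustrative."""
    return c + a * math.exp(-b * t)

def days_to_rate(target, a, b, c):
    """Invert the curve: trapping days needed to reduce the expected
    capture rate to `target` (which must exceed the asymptote c)."""
    return -math.log((target - c) / a) / b
```

A manager could, for example, estimate how many days of trapping reduce captures from 2.5 to 1.0 snakes per trap-night before scheduling an endangered-species reintroduction.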


    Information needs to support environmental impact assessment of the effects of European marine offshore wind farms on birds

    IBIS, Issue 2006
    A.D. FOX
    European legislation requires Strategic Environmental Assessments (SEAs) of national offshore wind farm (OWF) programmes and Environmental Impact Assessments (EIAs) for individual projects likely to affect birds. SEAs require extensive mapping of waterbird densities to define breeding and feeding areas of importance and sensitivity. Use of extensive large-scale weather, military, and air traffic control surveillance radar is recommended, to define areas, routes and behaviour of migrating birds, and to determine avian migration corridors in three dimensions. EIAs for individual OWFs should define the key avian species present, as well as assess the hazards presented to birds in terms of avoidance behaviour, habitat change and collision risk. Such measures, however, are less helpful in assessing cumulative impacts. Using aerial survey, physical habitat loss, modification, or gain and effective habitat loss through avoidance behaviour can be measured using bird densities as a proxy measure of habitat availability. The energetic consequences of avoidance responses and habitat change should be modelled to estimate fitness costs and predict impacts at the population level. Our present ability to model collision risk remains poor due to lack of data on species-specific avoidance responses. There is therefore an urgent need to gather data on avoidance responses, energetic consequences of habitat modification and avoidance flights, and demographic sensitivity of the key species most affected by OWFs. This analysis stresses the importance of common data collection protocols, sharing of information and experience, and accessibility of results at the international level to improve our predictive abilities. [source]


    Bearing capacity of shallow foundations in transversely isotropic granular media

    INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 8 2010
    A. Azami
    Abstract The main focus in this work is on the assessment of bearing capacity of a shallow foundation in an inherently anisotropic particulate medium. Both the experimental and numerical investigations are carried out using a crushed limestone with elongated angular-shaped aggregates. The experimental study involves small-scale model tests aimed at examining the variation of bearing capacity as a function of the angle of deposition of the material. In addition, the results of a series of triaxial and direct shear tests are presented and later employed to identify the material functions/parameters. The numerical part of this work is associated with the development and implementation of a constitutive framework that describes the mechanical response of transversely isotropic frictional materials. The framework is based on elastoplasticity and accounts for the effects of strain localization and inherent anisotropy of both the deformation and strength characteristics. The results of numerical simulations are compared with the experimental data. A parametric study is also carried out aimed at examining the influence of various simplifications in the mathematical framework on its predictive abilities. Copyright © 2009 John Wiley & Sons, Ltd. [source]


    Wavelength selection with Tabu Search

    JOURNAL OF CHEMOMETRICS, Issue 8-9 2003
    J. A. Hageman
    Abstract This paper introduces Tabu Search in analytical chemistry by applying it to wavelength selection. Tabu Search is a deterministic global optimization technique loosely based on concepts from artificial intelligence. Wavelength selection is a method which can be used for improving the quality of calibration models. Tabu Search uses basic, problem-specific operators to explore a search space, and memory to keep track of parts already visited. Several implementational aspects of wavelength selection with Tabu Search will be discussed. Two ways of memorizing the search space are investigated: storing the actual solutions and storing the steps necessary to create them. Parameters associated with Tabu Search are configured with a Plackett–Burman design. In addition, two extension schemes for Tabu Search, intensification and diversification, have been implemented and are applied with good results. Eventually, two implementations of wavelength selection with Tabu Search are tested, one which searches for a solution with a constant number of wavelengths and one with a variable number of wavelengths. Both implementations are compared with results obtained by wavelength selection methods based on simulated annealing (SA) and genetic algorithms (GAs). It is demonstrated with three real-world data sets that Tabu Search performs as well as, and can be a valuable alternative to, SA and GAs. The improvements in predictive abilities increased by a factor of 20 for data set 1 and by a factor of 2 for data sets 2 and 3. In addition, when the number of wavelengths in a solution is variable, measurements on the coverage of the search space show that the coverage is usually higher for Tabu Search compared with SA and GAs. Copyright © 2003 John Wiley & Sons, Ltd. [source]
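A minimal Tabu Search over wavelength subsets, encoded as bit vectors, can be sketched as follows. This illustrates the general technique only, not the authors' implementation: the single-flip move, tenure, and iteration count are arbitrary choices, and a real calibration application would score subsets by cross-validated prediction error rather than an abstract fitness function.

```python
import random

def tabu_search(n_wavelengths, fitness, n_iter=200, tenure=10, seed=0):
    """Tabu Search over bit-vector wavelength subsets.

    Each move flips one wavelength in or out; a flipped position is
    tabu (cannot be flipped again) for `tenure` iterations, which
    forces the search out of local optima. The best solution ever
    seen is tracked separately and returned.
    """
    rng = random.Random(seed)
    current = [rng.random() < 0.5 for _ in range(n_wavelengths)]
    best, best_fit = current[:], fitness(current)
    tabu = {}  # position -> first iteration at which it is free again
    for it in range(n_iter):
        moves = [i for i in range(n_wavelengths) if tabu.get(i, -1) < it]
        scored = []
        for i in moves:  # evaluate all non-tabu single-flip neighbours
            cand = current[:]
            cand[i] = not cand[i]
            scored.append((fitness(cand), i, cand))
        if not scored:
            continue  # every move is tabu this iteration
        f, i, cand = max(scored)  # best neighbour, even if worse
        current = cand
        tabu[i] = it + tenure
        if f > best_fit:
            best, best_fit = cand[:], f
    return best, best_fit
```

The "storing the steps" memory variant from the abstract corresponds to the `tabu` dictionary here; storing full visited solutions instead trades memory for stronger cycle prevention.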


    Predicting LDC debt rescheduling: performance evaluation of OLS, logit, and neural network models

    JOURNAL OF FORECASTING, Issue 8 2001
    Douglas K. Barney
    Abstract Empirical studies in the area of sovereign debt have used statistical models singularly to predict the probability of debt rescheduling. Unfortunately, researchers have made few efforts to test the reliability of these model predictions or to identify a superior prediction model among competing models. This paper tested neural network, OLS, and logit models' predictive abilities regarding debt rescheduling of less developed countries (LDC). All models predicted well out-of-sample. The results demonstrated a consistent performance of all models, indicating that researchers and practitioners can rely on neural networks or on the traditional statistical models to give useful predictions. Copyright © 2001 John Wiley & Sons, Ltd. [source]
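A bare-bones version of the logit model compared in this study can be fit by stochastic gradient ascent on the log-likelihood. This is an illustrative sketch only, not the authors' estimation procedure (which would use standard maximum-likelihood routines); the learning rate and iteration count are arbitrary.

```python
import math

def fit_logit(X, y, lr=0.1, n_iter=2000):
    """Logistic regression (intercept + slopes) by stochastic
    gradient ascent; y holds 0/1 rescheduling indicators."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(n_iter):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            err = yi - p  # gradient of the log-likelihood
            w[0] += lr * err
            for j, xj in enumerate(xi):
                w[j + 1] += lr * err * xj
    return w

def predict(w, xi):
    """Predicted probability of rescheduling for one country."""
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1.0 / (1.0 + math.exp(-z))
```

Out-of-sample evaluation, as in the paper, means fitting `w` on one period's data and scoring `predict` on a later period.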


    Regional Spatial Modeling of Topsoil Geochemistry

    BIOMETRICS, Issue 1 2009
    C. A. Calder
    Summary Geographic information about the levels of toxics in environmental media is commonly used in regional environmental health studies when direct measurements of personal exposure are limited or unavailable. In this article, we propose a statistical framework for analyzing the spatial distribution of topsoil geochemical properties, including the concentrations of various toxicants. Due to the small-scale heterogeneity of most geochemical topsoil processes, direct measurements of the processes themselves only provide highly localized information; it is thus financially prohibitive to study the spatial patterns of these processes across a large region using traditional geostatistical analyses of point-referenced topsoil data. Instead, it is standard practice to assess geochemical patterns at a regional scale using point-referenced measurements collected in stream sediment because, unlike topsoil data, individual stream sediment geochemical measurements are representative of the surrounding area. We propose a novel multiscale soils (MSS) model that formally synthesizes data collected in topsoil and stream sediment and allows the richer stream sediment information to inform about the topsoil process, which in environmental health studies is typically more relevant. Our model accommodates the small-scale heterogeneity of topsoil geochemical processes by modeling spatial dependence at an aggregate resolution corresponding to hydrologically similar regions known as watersheds. We present an analysis of the levels of arsenic, a toxic heavy metal, in topsoil across the midwestern United States using the MSS model and show that this model has better predictive abilities than alternative approaches using more conventional statistical models for point-referenced spatial data. [source]


    Where do Swainson's hawks winter?

    DIVERSITY AND DISTRIBUTIONS, Issue 5 2008
    Satellite images used to identify potential habitat
    ABSTRACT During recent years, predictive modelling techniques have been increasingly used to identify regional patterns of species spatial occurrence, to explore species–habitat relationships and to aid in biodiversity conservation. In the case of birds, predictive modelling has been mainly applied to the study of species with relatively stable interannual patterns of spatial occurrence (e.g. year-round resident species or migratory species in their breeding grounds showing territorial behaviour). We used predictive models to analyse the factors that determine broad-scale patterns of occurrence and abundance of wintering Swainson's hawks (Buteo swainsoni). This species has been the focus of field monitoring in its wintering ground in Argentina due to massive pesticide poisoning of thousands of individuals during the 1990s, but its unpredictable pattern of spatial distribution and the uncertainty about the current wintering area occupied by hawks led to discontinuing such field monitoring. Data on the presence and abundance of hawks were recorded in 30 × 30 km squares (n = 115) surveyed during three austral summers (2001–03). Sixteen land-use/land-cover, topography, and Normalized Difference Vegetation Index (NDVI) variables were used as predictors to build generalized additive models (GAMs). Both occurrence and abundance models showed a good predictive ability. Land use, altitude, and NDVI during spring previous to the arrival of hawks to wintering areas were good predictors of the distribution of Swainson's hawks in the Argentine pampas, but only land use and NDVI were entered into the model of abundance of the species in the region. The predictive cartography developed from the models allowed us to identify the current wintering area of Swainson's hawks in the Argentine pampas. 
The highest occurrence probability and relative abundances for the species were predicted for a broad area of south-eastern pampas that has been overlooked so far and where neither field research nor conservation efforts aiming to prevent massive mortalities has been established. [source]


    Multiple genetic tests for susceptibility to smoking do not outperform simple family history

    ADDICTION, Issue 1 2009
    Coral E. Gartner
    ABSTRACT Aims To evaluate the utility of using predictive genetic screening of the population for susceptibility to smoking. Methods The results of meta-analyses of genetic association studies of smoking behaviour were used to create simulated data sets using Monte Carlo methods. The ability of the genetic tests to screen for smoking was assessed using receiver operating characteristic curve analysis. The result was compared to prediction using simple family history information. To identify the circumstances in which predictive genetic testing would potentially justify screening we simulated tests using larger numbers of alleles (10, 15 and 20) that varied in prevalence from 10 to 50% and in strength of association [relative risks (RRs) of 1.2–2.1]. Results A test based on the RRs and prevalence of five susceptibility alleles derived from meta-analyses of genetic association studies of smoking performed similarly to chance and no better than the prediction based on simple family history. Increasing the number of alleles from five to 20 improved the predictive ability of genetic screening only modestly when using genes with the effect sizes reported to date. Conclusions This panel of genetic tests would be unsuitable for population screening. This situation is unlikely to be improved upon by screening based on more genetic tests. Given the similarity with associations found for other polygenic conditions, our results also suggest that using multiple genes to screen the general population for genetic susceptibility to polygenic disorders will be of limited utility. [source]
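The simulation design can be sketched in a few lines: draw carried risk alleles per person, convert the count into a smoking probability via relative risks, simulate outcomes, and estimate the AUC of the allele-count score by pairwise comparison. Every numeric value below (baseline risk, allele frequency, sample size) is an illustrative assumption, not a figure from the study.

```python
import random

def simulate_auc(n_alleles, prevalence, rr, n=2000, seed=1):
    """Monte Carlo sketch of multi-allele genetic screening.

    Each allele is carried with frequency `prevalence` and multiplies
    an assumed baseline smoking risk by `rr` (capped at 1). The AUC
    of the allele-count score is estimated as the probability that a
    random smoker out-scores a random non-smoker (ties count half).
    """
    rng = random.Random(seed)
    base = 0.2  # assumed baseline risk, illustrative only
    scores, outcomes = [], []
    for _ in range(n):
        carried = sum(rng.random() < prevalence for _ in range(n_alleles))
        risk = min(base * rr ** carried, 1.0)
        scores.append(carried)
        outcomes.append(rng.random() < risk)
    pos = [s for s, o in zip(scores, outcomes) if o]
    neg = [s for s, o in zip(scores, outcomes) if not o]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))
```

Running this with weak effects (RR ≈ 1.2) yields an AUC only modestly above 0.5, which is the pattern behind the paper's conclusion; raising the RR raises the AUC substantially.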


    Comparative sediment quality guideline performance for predicting sediment toxicity in Southern California, USA

    ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 12 2005
    Doris E. Vidal
    Abstract Several types of sediment quality guidelines (SQGs) are used by multiple agencies in southern California (USA) to interpret sediment chemistry data, yet little information is available to identify the best approaches to use. The objective of this study was to evaluate the predictive ability of five SQGs to predict the presence and absence of sediment toxicity in coastal southern California: the effects range-median quotient (ERMq), consensus moderate effect concentration (consensus MEC), mean sediment quality guideline quotient (SQGQ1), apparent effects threshold (AET), and equilibrium partitioning (EqP) for organics. Large differences in predictive ability among the SQGs were obtained when each approach was applied to the same southern California data set. Sediment quality guidelines that performed well in identifying nontoxic samples were not necessarily the best predictors of toxicity. In general, the mean ERMq, SQGQ1q, and consensus MECq approaches had a better overall predictive ability than the AET and EqP for organics approaches. In addition to evaluating the predictive ability of SQGs addressing chemical mixtures, the effect of an individual SQG value (DDT) was also evaluated for the mean ERMq with and without DDT. The mean ERMq without DDT had a better ability to predict toxic samples than the mean ERMq with DDT. Similarities in discriminatory ability between different approaches, variations in accuracy among SQG values for some chemicals, and the presence of complex mixtures of contaminants in most samples underscore the need to apply SQGs in combination, such as the mean quotient. Management objectives and SQG predictive ability using regional data should be determined beforehand so that the most appropriate SQG approach and critical values can be identified for specific applications. [source]
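The mean-quotient approaches compared here (mean ERMq, SQGQ1, consensus MECq) share one computation: divide each measured chemical concentration by its guideline value and average the quotients. An illustrative sketch; the chemical names and guideline values in the usage example are made up, not actual ERM values.

```python
def mean_sqg_quotient(concentrations, guidelines):
    """Mean sediment-quality-guideline quotient: each measured
    concentration divided by its guideline value (e.g. an ERM),
    averaged over the chemicals that have both a measurement and
    a guideline. Values near or above 1 indicate likely toxicity."""
    quotients = [concentrations[c] / guidelines[c]
                 for c in guidelines if c in concentrations]
    return sum(quotients) / len(quotients)
```

Dropping one chemical from `guidelines` (as the study does for DDT) changes the mean quotient and hence the predicted toxicity classification, which is how the with/without-DDT comparison was made.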


    Accumulation of DDT and mercury in prothonotary warblers (Protonotaria citrea) foraging in a heterogeneously contaminated environment

    ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 12 2001
    Kevin D. Reynolds
    Abstract Foraging areas of adult prothonotary warblers (Protonotaria citrea) were determined using standard radiotelemetry techniques to determine if soil concentrations of p,p′-dichlorodiphenyltrichloroethane (p,p′-DDT) and mercury in foraging areas could be used to predict contaminant levels in diets and tissues of nestling warblers. Adult warblers were fitted with transmitters and monitored for approximately 2 d while foraging and feeding 6- to 8-d-old nestlings. Foraging ecology data were integrated with contaminant levels of soil, diets, and tissues into a comprehensive analysis of geographic variation in contaminant exposure and uptake using linear regression. Concentrations of 1,1-dichloro-2,2-bis(p-chlorophenyl)ethylene (DDE) and mercury in nestling tissues varied considerably across the study site. Mean concentration of DDE was greater in eggs than all other tissues, with individual samples ranging from 0.24 to 8.12 µg/kg. In general, concentrations of DDT in soil were effective in describing the variation of contaminants in adipose samples. Concentrations of mercury in soils accounted for 78% of the variation in kidney samples. This was the best relationship of any of the paired variables. All other relationships showed relatively poor predictive ability. [source]


    The Multipredator Hypothesis and the Evolutionary Persistence of Antipredator Behavior

    ETHOLOGY, Issue 3 2006
    Daniel T. Blumstein
    Invited Review Abstract Isolation from predators affects prey behavior, morphology, and life history, but there is tremendous variation in the time course of these responses. Previous hypotheses to explain this variation have limited predictive ability. I develop a 'multipredator' hypothesis to explain the evolutionary persistence of antipredator behavior after the loss of some, but not all, of a species' predators. The hypothesis assumes pleiotropy, whereby elements of antipredator behavior may function in non-predatory situations, and linkage, such that genes influencing the expression of antipredator behavior do not assort independently. The hypothesis is restricted to species with multiple predators (most species) and aims to predict the conditions under which antipredator behavior will persist following the loss of one or more of a species' predators. I acknowledge that the relative costs of non-functional antipredator behavior will influence the likelihood of linkage and therefore persistence. The hypothesis makes two main predictions. First, genes responsible for antipredator behavior will not be scattered throughout the genome but rather may be found close together on the same chromosome(s). Secondly, the presence of any predators may be sufficient to maintain antipredator behavior for missing predators. Advances in behavioral genetics will allow tests of the first prediction, while studies of geographic variation in antipredator behavior provide some support for the second. [source]


    Cash flow disaggregation and the prediction of future earnings

    ACCOUNTING & FINANCE, Issue 1 2010
    Neal Arthur
    G11; G23 Abstract We examine the incremental information content of the components of cash flows from operations (CFO). Specifically the research question examined in this paper is whether models incorporating components of CFO to predict future earnings provide lower prediction errors than models incorporating simply net CFO. We use Australian data in this setting as all companies were required to provide information using the direct method during the sample period. We find that the cash flow components model is superior to an aggregate cash flow model in terms of explanatory power and predictive ability for future earnings; and that disclosure of non-core (core) cash flows components is (not) useful in both respects. Our results are of relevance to investors and analysts in estimating earnings forecasts, managers of firms in regulators' domains where choice is provided with respect to the disclosure of CFO and also to regulators' deliberations on disclosure requirements and recommendations. [source]


    Modelling patterned ground distribution in Finnish Lapland: an integration of topographical, ground and remote sensing information

    GEOGRAFISKA ANNALER SERIES A: PHYSICAL GEOGRAPHY, Issue 1 2006
    Jan Hjort
    Abstract New data technologies and modelling methods have gained more attention in the field of periglacial geomorphology during the last decade. In this paper we present a new modelling approach that integrates topographical, ground and remote sensing information in predictive geomorphological mapping using generalized additive modelling (GAM). First, we explored the roles of different environmental variable groups in determining the occurrence of non-sorted and sorted patterned ground in a fell region of 100 km2 at the resolution of 1 ha in northern Finland. Second, we compared the predictive accuracy of ground-topography- and remote-sensing-based models. The results indicate that non-sorted patterned ground is more common at lower altitudes where the ground moisture and vegetation abundance is relatively high, whereas sorted patterned ground is dominant at higher altitudes with relatively high slope angle and sparse vegetation cover. All modelling results ranged from good to excellent when assessed on evaluation data using area under the curve (AUC) values derived from receiver operating characteristic (ROC) plots. Generally, models built with remotely sensed data were better than ground-topography-based models and combination of all environmental variables improved the predictive ability of the models. This paper confirms the potential utility of remote sensing information for modelling patterned ground distribution in subarctic landscapes. [source]


    Increased leaf area dominates carbon flux response to elevated CO2 in stands of Populus deltoides (Bartr.)

    GLOBAL CHANGE BIOLOGY, Issue 5 2005
    Ramesh Murthy
    Abstract We examined the effects of atmospheric vapor pressure deficit (VPD) and soil moisture stress (SMS) on leaf- and stand-level CO2 exchange in model 3-year-old coppiced cottonwood (Populus deltoides Bartr.) plantations using the large-scale, controlled environments of the Biosphere 2 Laboratory. A short-term experiment was imposed on top of continuing, long-term CO2 treatments (43 and 120 Pa) at the end of the growing season. For the experiment, the plantations were exposed for 6–14 days to low and high VPD (0.6 and 2.5 kPa) at low and high volumetric soil moisture contents (25–39%). When system gross CO2 assimilation was corrected for leaf area, system net CO2 exchange (SNCE), integrated daily SNCE, and system respiration increased in response to elevated CO2. The increases were mainly a result of the larger leaf area developed during growth at high CO2, before the short-term experiment; the observed decline in responses to the SMS and high VPD treatments was partly because of leaf area reduction. Elevated CO2 ameliorated the gas exchange consequences of water stress at the stand level in all treatments. The initial slope of light response curves of stand photosynthesis (efficiency of light use by the stand) increased in response to elevated CO2 under all treatments. Leaf-level net CO2 assimilation rate and apparent quantum efficiency were consistently higher, and stomatal conductance and transpiration significantly lower, under high CO2 in all soil moisture and VPD combinations (except for conductance and transpiration at high soil moisture and low VPD). Comparisons of leaf- and stand-level gross CO2 exchange indicated that the limitation of assimilation by the canopy light environment (in well-irrigated stands; leaf : stand ratio = 3.2–3.5) switched to a predominantly individual-leaf limitation (because of stomatal closure) in response to water stress (leaf : stand = 0.8–1.3). These observations enabled good prediction of whole-stand assimilation from leaf-level data under water-stressed conditions; the predictive ability was lower under well-watered conditions. The data also demonstrated the need for a better understanding of the relationship between leaf water potential, leaf abscission, and stand LAI. [source]


    Relative accuracy and predictive ability of direct valuation methods, price to aggregate earnings method and a hybrid approach

    ACCOUNTING & FINANCE, Issue 4 2006
    Lucie Courteau
    Abstract In this paper, we assess the relative performance of the direct valuation method and industry multiplier models using 41,435 firm-quarter Value Line observations over an 11-year period (1990–2000). Results from both pricing-error and return-prediction analyses indicate that direct valuation yields lower percentage pricing errors and greater return-prediction ability than the forward price to aggregated forecasted earnings multiplier model. However, a simple hybrid combination of the two methods leads to more accurate intrinsic value estimates than either method used in isolation. It would appear that fundamental analysis could benefit from using one approach as a check on the other. [source]
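The pricing-error comparison described above can be illustrated with a minimal sketch. The equal weighting in the hybrid combination and all figures below are illustrative assumptions, not the paper's actual specification:

```python
def pct_pricing_error(intrinsic_value, market_price):
    """Absolute percentage pricing error of a value estimate."""
    return abs(intrinsic_value - market_price) / market_price

def hybrid_estimate(direct_value, multiplier_value, weight=0.5):
    """Combine direct-valuation and earnings-multiplier estimates.
    The 50/50 weight is a hypothetical choice for illustration."""
    return weight * direct_value + (1 - weight) * multiplier_value

# Hypothetical firm: market price 100, two model estimates on opposite sides
direct, multiplier, price = 92.0, 112.0, 100.0
hybrid = hybrid_estimate(direct, multiplier)
```

In this toy case the hybrid estimate (102) has a pricing error of 2%, smaller than either the direct (8%) or multiplier (12%) error, which illustrates mechanically why a combination can dominate either method used alone when their errors are offsetting.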


    Impact of time-scale of the calibration objective function on the performance of watershed models

    HYDROLOGICAL PROCESSES, Issue 25 2007
    K. P. Sudheer
    Abstract Many continuous watershed models perform all their computations on a daily time step, yet they are often calibrated at an annual or monthly time-scale, which may not guarantee good simulation performance on a daily time step. The major objective of this paper is to evaluate the impact of the calibration time-scale on model predictive ability. The study used the Soil and Water Assessment Tool, calibrated at two time-scales, viz. monthly and daily, for the War Eagle Creek watershed in the USA. The results demonstrate that good model performance at the smaller time-scale (such as daily) cannot be ensured by calibrating at a larger time-scale (such as monthly). Even though the calibrated model possessed satisfactory 'goodness of fit' statistics, the simulation residuals failed to confirm the assumptions of homoscedasticity and independence. The results imply that models should be evaluated on various aspects of their simulation behaviour, such as predictive uncertainty, hydrograph characteristics, and the ability to preserve the statistical properties of the historic flow series. The study highlights the scope for developing effective autocalibration procedures at the daily time step for watershed models. Copyright © 2007 John Wiley & Sons, Ltd. [source]
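One common way to make the daily-versus-monthly contrast described above concrete is the Nash-Sutcliffe efficiency (NSE), a standard goodness-of-fit statistic in watershed modelling (used here as a generic illustration; the paper's own statistics may differ). Computing NSE on daily flows and again on monthly aggregates shows why errors that cancel within a month can inflate the monthly score:

```python
def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 is no better than
    predicting the observed mean; negative values are worse than the mean."""
    mean_obs = sum(observed) / len(observed)
    err = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - err / var

def monthly_totals(daily, days_per_month=30):
    """Aggregate a daily series into month totals (fixed-length months assumed,
    a simplification for illustration)."""
    return [sum(daily[i:i + days_per_month])
            for i in range(0, len(daily), days_per_month)]
```

A model whose daily errors alternate in sign can score near-perfectly on `monthly_totals` of its output while its daily NSE remains poor, which is exactly the calibration-time-scale pitfall the study documents.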


    Smoothing Mechanisms in Defined Benefit Pension Accounting Standards: A Simulation Study,

    ACCOUNTING PERSPECTIVES, Issue 2 2009
    Cameron Morrill
    ABSTRACT The accounting for defined benefit (DB) pension plans is complex and varies significantly across jurisdictions despite recent international convergence efforts. Pension costs are significant, and many worry that unfavorable accounting treatment could lead companies to terminate DB plans, a result that would have important social implications. A key difference in accounting standards relates to whether and how the effects of fluctuations in market and demographic variables on reported pension cost are "smoothed". Critics argue that smoothing mechanisms lead to incomprehensible accounting information and induce managers to make dysfunctional decisions. Furthermore, the effectiveness of these mechanisms may vary. We use simulated data to test the volatility, representational faithfulness, and predictive ability of pension accounting numbers under Canadian, British, and international standards (IFRS). We find that smoothed pension expense is less volatile, more predictive of future expense, and more closely associated with contemporaneous funding than is "unsmoothed" pension expense. The corridor method and market-related value approaches allowed under Canadian GAAP have virtually no smoothing effect incremental to the amortization of actuarial gains and losses. The pension accrual or deferred asset is highly correlated with the pension plan deficit/surplus. Our findings complement existing, primarily archival, pension accounting research and could provide guidance to standard-setters. [source]
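The corridor method examined above has a simple mechanical form under IAS 19-style rules (pre-2011): only the portion of cumulative unrecognized actuarial gains or losses exceeding 10% of the greater of plan assets and the defined benefit obligation is amortized, spread over the average remaining service period. A minimal sketch, with hypothetical figures:

```python
def corridor_amortization(unrecognized_gain_loss, plan_assets, dbo,
                          avg_remaining_service_years=10):
    """Annual amortization of actuarial gains/losses under the corridor method.
    Amounts inside the 10% corridor are left unrecognized (the smoothing)."""
    corridor = 0.10 * max(plan_assets, dbo)
    excess = max(abs(unrecognized_gain_loss) - corridor, 0.0)
    amount = excess / avg_remaining_service_years
    return amount if unrecognized_gain_loss >= 0 else -amount

# Hypothetical plan: assets 1000, obligation 900, cumulative loss of 150.
# Corridor = 100, excess = 50, so only 5 per year flows into pension expense.
```

Because only the excess over the corridor is amortized, year-to-year swings in actuarial gains and losses reach reported expense slowly, which is the smoothing effect whose incremental value the simulation study tests.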


    Neural network volatility forecasts

    INTELLIGENT SYSTEMS IN ACCOUNTING, FINANCE & MANAGEMENT, Issue 3-4 2007
    José R. Aragonés
    We analyse whether the use of neural networks can improve 'traditional' volatility forecasts from time-series models, as well as implied volatilities obtained from options on futures on the Spanish stock market index, the IBEX-35. One of our main contributions is to explore the predictive ability of neural networks that incorporate both implied volatility information and historical time-series information. Our results show that the general regression neural network forecasts improve the information content of implied volatilities and enhance the predictive ability of the models. Our analysis is also consistent with the results from prior research studies showing that implied volatility is an unbiased forecast of future volatility and that time-series models have lower explanatory power than implied volatility. Copyright © 2008 John Wiley & Sons, Ltd. [source]
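The general regression neural network named above is, at its core, a Gaussian-kernel-weighted average of training targets (Nadaraya-Watson regression). A minimal one-feature sketch with hypothetical volatility pairs (the actual model combines implied and historical volatility inputs):

```python
import math

def grnn_predict(x, train_x, train_y, sigma=0.5):
    """GRNN prediction: kernel-weighted average of training outputs.
    sigma is the Gaussian bandwidth, the network's only tunable parameter."""
    weights = [math.exp(-((x - xi) ** 2) / (2.0 * sigma ** 2)) for xi in train_x]
    return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)

# Hypothetical pairs: (implied volatility today, realized volatility next period)
train_x = [0.10, 0.20, 0.30]
train_y = [0.12, 0.19, 0.33]
```

Having a single bandwidth parameter and no iterative training is one reason GRNNs are attractive for the relatively small samples typical of financial forecasting studies.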


    Off-site monitoring systems for predicting bank underperformance: a comparison of neural networks, discriminant analysis, and professional human judgment

    INTELLIGENT SYSTEMS IN ACCOUNTING, FINANCE & MANAGEMENT, Issue 3 2001
    Philip Swicegood
    This study compares the ability of discriminant analysis, neural networks, and professional human judgment methodologies in predicting commercial bank underperformance. Experience from the banking crisis of the 1980s and early 1990s suggests that improved prediction models are needed to help prevent bank failures and promote economic stability. Our research addresses this issue by exploring new prediction model techniques and comparing them to existing approaches. Of the three approaches, the neural network model shows slightly better predictive ability than the regulators. Both the neural network model and the regulators significantly outperform the benchmark discriminant analysis model's accuracy. These findings suggest that neural networks show promise as an off-site surveillance methodology. Factoring in the relative costs of the different types of misclassifications from each model also indicates that neural network models are better predictors, particularly when weighting Type I errors more heavily. Further research with neural networks in this field should yield workable models that greatly enhance the ability of regulators and bankers to identify and address weaknesses in banks before they approach failure. Copyright © 2001 John Wiley & Sons, Ltd. [source]
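The cost-weighted comparison described above can be sketched as follows. The 5:1 cost ratio and the convention that a Type I error is a missed underperforming bank are illustrative assumptions, not the study's actual figures:

```python
def misclassification_cost(y_true, y_pred, type1_cost=5.0, type2_cost=1.0):
    """Total cost of classification errors, weighting Type I errors
    (an underperforming bank classified as healthy, assumed convention)
    more heavily than Type II errors (a false alarm on a healthy bank)."""
    cost = 0.0
    for actual, predicted in zip(y_true, y_pred):
        if actual == 1 and predicted == 0:    # missed an underperformer
            cost += type1_cost
        elif actual == 0 and predicted == 1:  # false alarm
            cost += type2_cost
    return cost
```

Under asymmetric costs like these, a model with slightly lower raw accuracy can still be preferred if its errors fall disproportionately on the cheap (Type II) side, which is the effect the study reports in favour of the neural network.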


    Cyclic macro-element for soil–structure interaction: material and geometrical non-linearities

    INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 13 2001
    Cécile Cremer
    Abstract This paper presents a non-linear soil–structure interaction (SSI) macro-element for shallow foundations on cohesive soil. The element describes the behaviour in the near field of the foundation under cyclic loading, reproducing the material non-linearities of the soil under the foundation (yielding) as well as the geometrical non-linearities (uplift) at the soil–structure interface. The overall behaviour in the soil and at the interface is reduced to its action on the foundation. The macro-element consists of a non-linear joint element, expressed in generalised variables, i.e. in forces applied to the foundation and in the corresponding displacements. Failure is described by the interaction diagram of the ultimate bearing capacity of the foundation under combined loads. Mechanisms of yielding and uplift are modelled through a global, coupled plasticity–uplift model. The cyclic model is dedicated to modelling the dynamic response of structures subjected to seismic action, and is thus especially suited to the combined loading developed during this kind of motion. Comparisons of cyclic results obtained from the macro-element and from a finite-element model are shown to demonstrate the relevance of the proposed model and its predictive ability. Copyright © 2001 John Wiley & Sons, Ltd. [source]


    Quantitative evaluation of DNA hypermethylation in malignant and benign breast tissue and fluids

    INTERNATIONAL JOURNAL OF CANCER, Issue 2 2010
    Weizhu Zhu
    Abstract Assessment of DNA methylation has demonstrated altered methylation in malignant compared with benign breast tissue. The purpose of our study was to (i) confirm the predictive ability of methylation assessment in breast tissue, and (ii) use the genes found to be cancer predictive in tissue to evaluate the diagnostic potential of hypermethylation assessment in nipple aspirate fluid (NAF) and mammary ductoscopic (MD) samples. Quantitative methylation-specific (qMS)-PCR was conducted on three specimen sets: 44 malignant (CA) and 34 normal (NL) tissue specimens; 18 matched CA, adjacent normal (ANL) tissue and NAF specimens; and 119 MD specimens. Training and validation tissue sets were analyzed to determine the optimal group of cancer-predictive genes for NAF and MD analysis. NAF and MD cytologic review were also performed. Methylation of CCND-2, p16, RAR-β and RASSF-1a was significantly more prevalent in tumor than in normal tissue specimens. Receiver operating characteristic curve analysis demonstrated an area under the curve of 0.96. For the 18 matched CA, ANL and NAF specimens, the four predictive genes identified in cancer tissue showed increased methylation in CA vs. ANL tissue; NAF samples had higher methylation than ANL specimens. Methylation frequency was higher in MD specimens from breasts with cancer than in benign samples for p16 and RASSF-1a. In summary, (i) routine quantitative DNA methylation assessment in NAF and MD samples is possible, and (ii) genes hypermethylated in malignant breast tissue are also altered in matched NAF and MD samples, and may be useful to assist early breast cancer detection. [source]


    Structure-hepatoprotective activity relationship study of sesquiterpene lactones: A QSAR analysis

    INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, Issue 1 2009
    Yuliya Paukku
    Abstract This study applies quantitative structure–activity relationship (QSAR) analysis to 22 sesquiterpene lactones to correlate and predict their hepatoprotective activity. Sesquiterpenoids, the largest class of terpenoids, are a widespread group of substances occurring in various plant organisms. The QSAR analysis used a genetic algorithm for variable selection among the generated and calculated descriptors, followed by multiple linear regression. Quantum-chemical calculations were performed with density functional theory at the B3LYP/6-311G(d,p) level to evaluate electronic properties, using reference geometries optimized by the semi-empirical AM1 approach. Three models describing hepatoprotective activity for the series of sesquiterpene lactones are proposed. The models are useful for describing the hepatoprotective activity of sesquiterpene lactones and can be used to estimate the activity of new substituted sesquiterpene lactones. They show not only statistical significance but also good predictive ability: the estimated correlation coefficients (r) lie within 0.942–0.969. © 2008 Wiley Periodicals, Inc. Int J Quantum Chem, 2009 [source]
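The predictive-ability statistic r reported above is the Pearson correlation between observed and model-predicted activities. A minimal sketch with hypothetical activity values (not the study's data):

```python
import math

def pearson_r(observed, predicted):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(observed)
    mo = sum(observed) / n
    mp = sum(predicted) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(observed, predicted))
    var_o = sum((o - mo) ** 2 for o in observed)
    var_p = sum((p - mp) ** 2 for p in predicted)
    return cov / math.sqrt(var_o * var_p)

# Hypothetical observed vs. QSAR-predicted hepatoprotective activities
observed = [0.41, 0.55, 0.62, 0.78, 0.90]
predicted = [0.39, 0.58, 0.60, 0.80, 0.87]
```

Values of r above roughly 0.9, as reported for these models, indicate that the descriptor-based regression captures most of the variation in activity across the series.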


    Neural network modeling of physical properties of chemical compounds

    INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, Issue 1 2001
    J. Kozioł
    Abstract Three different models relating structural descriptors to the normal boiling points, melting points, and refractive indexes of organic compounds have been developed using artificial neural networks. A newly elaborated set of molecular descriptors was evaluated to determine its utility in quantitative structure–property relationship (QSPR) studies. Using two data sets containing 190 amines and 393 amides, neural networks were trained with the conjugate gradient algorithm to predict physical properties with close to experimental accuracy. The results show the high predictive ability of the trained neural network models: the error of the predicted property values relative to experimental data is small. © 2001 John Wiley & Sons, Inc. Int J Quant Chem 84: 117–126, 2001 [source]


    Predictive ability of propofol effect-site concentrations during fast and slow infusion rates

    ACTA ANAESTHESIOLOGICA SCANDINAVICA, Issue 4 2010
    P. O. SEPÚLVEDA
    Background: The performance of propofol effect,site pharmacokinetic models during target-controlled infusion (TCI) might be affected by propofol administration rate. This study compares the predictive ability of three effect,site pharmacokinetic models during fast and slow infusion rates, utilizing the cerebral state index (CSI) as a monitor of consciousness. Methods: Sixteen healthy volunteers, 21,45 years of age, were randomly assigned to receive either a bolus dose of propofol 1.8 mg/kg at a rate of 1200 ml/h or an infusion of 12 mg/kg/h until 3,5 min after loss of consciousness (LOC). After spontaneous recovery of the CSI, the bolus was administered to patients who had first received the infusion and vice versa. The study was completed after spontaneous recovery of CSI following the second dose scheme. LOC was assessed and recorded when it occurred. Adequacies of model predictions during both administration schemes were assessed by comparing the effect,site concentrations estimated at the time of LOC during the bolus dose and during the infusion scheme. Results: LOC occurred 0.97 ± 0.29 min after the bolus dose and 6.77 ± 3.82 min after beginning the infusion scheme (P<0.05). The Ce estimated with Schnider (ke0=0.45/min), Marsh (ke0=1.21/min) and Marsh (ke0=0.26/min) at LOC were 4.40 ± 1.45, 3.55 ± 0.64 and 1.28 ± 0.44 ,g/ml during the bolus dose and 2.81 ± 0.61, 2.50 ± 0.39 and 1.72 ± 0.41 ,g/ml, during the infusion scheme (P<0.05). The CSI values observed at LOC were 70 ± 4 during the bolus dose and 71 ± 2 during the infusion scheme (NS). Conclusion: Speed of infusion, within the ranges allowed by TCI pumps, significantly affects the accuracy of Ce predictions. The CSI monitor was shown to be a useful tool to predict LOC in both rapid and slow infusion schemes. [source]