Prediction Models (prediction + models)

Distribution by Scientific Domains

Kinds of Prediction Models

  • numerical weather prediction models
  • risk prediction models
  • weather prediction models


  • Selected Abstracts


    Advancing Loss Given Default Prediction Models: How the Quiet Have Quickened

    ECONOMIC NOTES, Issue 2 2005
    Greg M. Gupton
    We describe LossCalc™ version 2.0: the Moody's KMV model to predict loss given default (LGD), the equivalent of (1 − recovery rate). LossCalc is a statistical model that applies multiple predictive factors at different information levels: collateral, instrument, firm, industry, country and the macroeconomy to predict LGD. We find that distance-to-default measures (from the Moody's KMV structural model of default likelihood) compiled at both the industry and firm levels are predictive of LGD. We find that recovery rates worldwide are predictable within a common statistical framework, which suggests that the estimation of economic firm value (which is then available to allocate to claimants according to each country's bankruptcy laws) is a dominant step in LGD determination. LossCalc is built on a global dataset of 3,026 recovery observations for loans, bonds and preferred stock from 1981 to 2004. This dataset includes 1,424 defaults of both public and private firms, both rated and unrated instruments, in all industries. We demonstrate out-of-sample and out-of-time LGD model validation. The model significantly improves on the use of historical recovery averages to predict LGD. [source]
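
    The core quantity here is simple: LGD is the complement of the recovery rate, LGD = 1 − recovery. The sketch below is illustrative only, with synthetic data and hypothetical factor names rather than the LossCalc specification; it shows the kind of out-of-sample comparison the abstract describes, a multi-factor regression for LGD benchmarked against the historical recovery average.

```python
# Illustrative sketch only: a toy multi-factor LGD regression compared against the
# historical-average benchmark. Factor names and the synthetic data are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n = 3000
X = np.column_stack([
    rng.normal(size=n),          # hypothetical distance-to-default (firm level)
    rng.normal(size=n),          # hypothetical industry-level factor
    rng.integers(0, 2, size=n),  # hypothetical seniority/collateral indicator
])
recovery = np.clip(0.45 + 0.10 * X[:, 0] + 0.05 * X[:, 1] + 0.15 * X[:, 2]
                   + rng.normal(0, 0.15, size=n), 0, 1)
lgd = 1.0 - recovery             # LGD is the complement of the recovery rate

X_tr, X_te, y_tr, y_te = train_test_split(X, lgd, test_size=0.3, random_state=0)

model = LinearRegression().fit(X_tr, y_tr)
mse_model = mean_squared_error(y_te, model.predict(X_te))
mse_hist = mean_squared_error(y_te, np.full_like(y_te, y_tr.mean()))  # historical-average benchmark
print(f"out-of-sample MSE: factor model {mse_model:.4f} vs historical mean {mse_hist:.4f}")
```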


    The Performance of Risk Prediction Models

    BIOMETRICAL JOURNAL, Issue 4 2008
    Thomas A. Gerds
    Abstract For medical decision making and patient information, predictions of future status variables play an important role. Risk prediction models can be derived with many different statistical approaches. To compare them, measures of predictive performance are derived from ROC methodology and from probability forecasting theory. These tools can be applied to assess single markers, multivariable regression models and complex model selection algorithms. This article provides a systematic review of the modern way of assessing risk prediction models. Particular attention is put on proper benchmarks and resampling techniques that are important for the interpretation of measured performance. All methods are illustrated with data from a clinical study in head and neck cancer patients. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
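
    As a concrete illustration of the two families of performance measures mentioned, discrimination via ROC/AUC and probabilistic accuracy via scores from forecasting theory (e.g. the Brier score), here is a minimal sketch on synthetic data with a simple bootstrap to indicate sampling variability; it is not the paper's exact protocol.

```python
# Minimal sketch of the two kinds of performance measures the abstract mentions:
# discrimination (AUC, from ROC methodology) and probabilistic accuracy (Brier score),
# plus a bootstrap to gauge sampling variability. Data and model are synthetic placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, brier_score_loss

X, y = make_classification(n_samples=600, n_features=8, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=1)
p = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

print("AUC:", roc_auc_score(y_te, p), "Brier:", brier_score_loss(y_te, p))

rng = np.random.default_rng(1)
aucs = []
for _ in range(200):                       # bootstrap resampling of the test set
    idx = rng.integers(0, len(y_te), len(y_te))
    if len(np.unique(y_te[idx])) == 2:     # both classes needed for an AUC
        aucs.append(roc_auc_score(y_te[idx], p[idx]))
print("bootstrap 95% interval for AUC:", np.percentile(aucs, [2.5, 97.5]))
```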


    Clinical Prediction Models: A Practical Approach to Development, Validation and Updating by STEYERBERG, E. W.

    BIOMETRICS, Issue 2 2010
    Rumana Omar
    No abstract is available for this article. [source]


    Power and Sample Size Estimation for the Wilcoxon Rank Sum Test with Application to Comparisons of C Statistics from Alternative Prediction Models

    BIOMETRICS, Issue 1 2009
    B. Rosner
    Summary The Wilcoxon Mann-Whitney (WMW) U test is commonly used in nonparametric two-group comparisons when the normality of the underlying distribution is questionable. There has been some previous work on estimating power based on this procedure (Lehmann, 1998, Nonparametrics). In this article, we present an approach for estimating type II error, which is applicable to any continuous distribution, and also extend the approach to handle grouped continuous data allowing for ties. We apply these results to obtaining standard errors of the area under the receiver operating characteristic curve (AUROC) for risk-prediction rules under H1 and for comparing AUROC between competing risk prediction rules applied to the same data set. These results are based on SAS-callable functions to evaluate the bivariate normal integral and are thus easily implemented with standard software. [source]
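
    The paper derives type II error analytically; as a rough cross-check of the same quantity, power for the WMW test can also be estimated by simulation. The sketch below assumes a hypothetical normal location-shift alternative and arbitrary sample sizes.

```python
# Simulation-based cross-check of Wilcoxon-Mann-Whitney power under a location-shift
# alternative. Effect size, sample sizes and alpha are illustrative assumptions.
import numpy as np
from scipy.stats import mannwhitneyu

def wmw_power(n1=50, n2=50, shift=0.5, alpha=0.05, n_sim=2000, seed=0):
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sim):
        x = rng.normal(0.0, 1.0, n1)       # group 1 under H1
        y = rng.normal(shift, 1.0, n2)     # group 2 shifted by the assumed effect
        _, p = mannwhitneyu(x, y, alternative="two-sided")
        rejections += p < alpha
    return rejections / n_sim              # estimated power = 1 - type II error

print("estimated power:", wmw_power())
```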


    OPTIMAL CONDITIONS FOR THE GROWTH AND POLYSACCHARIDE PRODUCTION BY HYPSIZIGUS MARMOREUS IN SUBMERGED CULTURE

    JOURNAL OF FOOD PROCESSING AND PRESERVATION, Issue 4 2009
    PING WANG
    ABSTRACT In submerged cultivation, many nutrient variables and environmental conditions have great influence on the growth and polysaccharide production by Hypsizigus marmoreus. A Plackett–Burman design was used to determine the important nutrient factors. A central composite experimental design and response surface methodology were employed to optimize the factor levels. Prediction models for dry cell weight (DCW), polysaccharide outside cells (EPS) and polysaccharide inside cells (IPS) under important nutrient conditions were developed by multiple regression analysis and verified. By solving the equations, the optimal nutrient conditions for highest EPS production (9.62 g/L) were obtained at 6.77 g cornstarch/L, 36.57 g glucose/L, 3.5 g MgSO4/L and 6.14 g bean cake powder/L, under which DCW and IPS were 16.2 g/L and 1.46 g/L, close to the highest value under their corresponding optimal conditions. Optimal environmental conditions were obtained at 10% inoculation dose, 45 mL medium in a 250 mL flask, pH 6.5, 25°C and 200 rpm according to the results of the single-factor experiment design. PRACTICAL APPLICATIONS Hypsizigus marmoreus polysaccharides have many functional properties, including antitumor, antifungal and antiproliferative activities, and free-radical scavenging. Liquid cultivation could produce a higher yield of polysaccharides and more flexible sequential processing methods of H. marmoreus, compared with traditional solid-state cultivation. However, the cell growth and production of polysaccharides would be influenced by many factors, including nutrient conditions and environmental conditions in the liquid cultivation of H. marmoreus. Keeping the conditions at optimal levels can maximize the yield of polysaccharides. The study not only identified the optimal nutrient and environmental conditions for the highest cell growth and yield of polysaccharides, but also developed prediction models for these parameters from the important nutrient variables. Yield of polysaccharide inside the cells was studied alongside polysaccharide outside the cells and cell growth. The results provide essential information for production of H. marmoreus polysaccharides by liquid culture. [source]
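
    The modelling step described here, fitting a second-order response surface by multiple regression and solving the resulting equations for the optimum, can be sketched as follows; the two coded factors and the synthetic responses are placeholders, not the study's nutrient data.

```python
# Sketch of the central-composite / response-surface step: fit a second-order model
# by multiple regression and solve for the stationary (optimal) point.
import numpy as np

rng = np.random.default_rng(2)
x1, x2 = rng.uniform(-1.68, 1.68, (2, 30))             # coded factor levels
y = 9.0 + 0.8 * x1 + 0.5 * x2 - 0.6 * x1**2 - 0.4 * x2**2 + 0.2 * x1 * x2 \
    + rng.normal(0, 0.1, 30)                            # synthetic EPS-like response

# design matrix for the full quadratic model
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
b = np.linalg.lstsq(X, y, rcond=None)[0]                # multiple regression fit

# stationary point: solve  [[2*b3, b5], [b5, 2*b4]] @ x = -[b1, b2]
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
x_opt = np.linalg.solve(H, -b[1:3])
y_opt = (b[0] + b[1] * x_opt[0] + b[2] * x_opt[1]
         + b[3] * x_opt[0]**2 + b[4] * x_opt[1]**2 + b[5] * x_opt[0] * x_opt[1])
print("coded optimum:", x_opt, "predicted response at optimum:", y_opt)
```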


    Electronic Nose Technology in Quality Assessment: Predicting Volatile Composition of Danish Blue Cheese During Ripening

    JOURNAL OF FOOD SCIENCE, Issue 6 2005
    Jeorgos Trihaas
    ABSTRACT This work describes for the 1st time the use of an electronic nose (e-nose) for the determination of changes in blue cheese flavor during maturation. Headspace analysis of Danish blue cheeses was made for 2 dairy units of the same producer. An e-nose registered changes in cheese flavor 5, 8, 12, and 20 wk after brining. Volatiles were collected from the headspace and analyzed by gas chromatography-mass spectrometry (GC-MS). Features from the chemical sensors of the e-nose were used to model the volatile changes by multivariate methods. Differences registered during ripening of the cheeses as well as between producing units are described and discussed for both methods. Cheeses from different units showed significant differences in their e-nose flavor profiles at early ripening stages but with ripening became more and more alike. Prediction of the concentration of 25 identified aroma compounds by e-nose features was possible by partial least square regression (PLS-R). It was not possible to create a reliable predictive model for both units because cheeses from 1 unit were contaminated by Geotrichum candidum, leading to unstable ripening patterns. Correction of the e-nose features by multiple scatter correction (MSC) and mean normalization (MN) of the integrated GC areas made correlation of the volatile concentration to the e-nose signal features possible. Prediction models were created, evaluated, and used to reconstruct the headspace of unknown cheese samples by e-nose measurements. Classification of predicted volatile compositions of unknown samples by their ripening stage was successful at a 78% and 54% overall correct classification for dairy units 1 and 2, respectively. Compared with GC-MS, the application of the rapid and less demanding e-nose seems an attractive alternative for this type of investigation. [source]
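
    A minimal sketch of the PLS-R step, regressing volatile concentrations on e-nose sensor features, is shown below; dimensions and data are made up, and the preprocessing (MSC, mean normalization) used in the study is omitted.

```python
# Hedged sketch of PLS regression from e-nose sensor features to GC-MS volatile
# concentrations. Sensor count, sample count and data are synthetic placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 12))                  # e-nose sensor features (12 hypothetical sensors)
B = rng.normal(size=(12, 25))
Y = X @ B + rng.normal(0, 0.5, size=(60, 25))  # 25 volatile concentrations (synthetic)

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=3)
pls = PLSRegression(n_components=5).fit(X_tr, Y_tr)
print("R^2 on held-out samples:", pls.score(X_te, Y_te))
```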


    Improved Correlation Between Sensory and Instrumental Measurement of Peanut Butter Texture

    JOURNAL OF FOOD SCIENCE, Issue 5 2002
    C.M. Lee And
    Two commercial peanut butters and 3 laboratory-prepared peanut butters containing 0.5, 1.5 and 2.5% stabilizer were evaluated by sensory and instrumental texture profile analysis (TPA) using an Instron. A 2×3 factorial design consisting of crosshead speeds of 5 and 50 mm/min, and amount and type of fluid added was used. A descriptive panel (n= 11) was used to evaluate 14 sensory TPA attributes. Twelve sensory TPA attributes, compared with only 2 found by other researchers, were highly correlated ( 0.88) with 1 or more instrumental TPA parameters. Prediction models (R 0.71) developed successfully predicted 12 sensory texture attributes from instrumental TPA results. Eleven models, excluding surface roughness, were successfully verified with 0.74 to 7.21% error. [source]


    A self-consistent scattering model for cirrus.

    THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 629 2007
    I: The solar region
    Abstract In this paper a self-consistent scattering model for cirrus is presented. The model consists of an ensemble of ice crystals where the smallest ice crystal is represented by a single hexagonal ice column. As the overall ice crystal size increases, the ice crystals become progressively more complex by arbitrarily attaching other hexagonal elements until a chain-like ice crystal is formed, this representing the largest ice crystal in the ensemble. The ensemble consists of six ice crystal members whose aspect ratios (ratios of the major-to-minor axes of the circumscribed ellipse) are allowed to vary between unity and 1.84 for the smallest and largest ice crystal, respectively. The ensemble model's prediction of parameters fundamental to solar radiative transfer through cirrus such as ice water content and the volume extinction coefficient is tested using in situ based data obtained from the midlatitudes and Tropics. It is found that the ensemble model is able to generally predict the ice water content and extinction measurements within a factor of two. Moreover, the ensemble model's prediction of cirrus spherical albedo and polarized reflection are tested against a space-based instrument using one day of global measurements. The space-based instrument is able to sample the scattering phase function between the scattering angles of approximately 60° and 180°, and a total of 37 581 satellite pixels were used in the present analysis covering latitude bands between 43.75°S and 76.58°N. It is found that the ensemble model phase function is able to significantly minimize differences between satellite-based measurements of spherical albedo and the ensemble model's prediction of spherical albedo. The satellite-based measurements of polarized reflection are found to be reasonably described by more simple members of the ensemble. The ensemble model presented in this paper should find wide applicability to the remote sensing of cirrus as well as more fundamental solar radiative transfer calculations through cirrus, and improved solar optical properties for climate and Numerical Weather Prediction models. Copyright © 2007 Royal Meteorological Society [source]


    APPLYING MACHINE LEARNING TO LOW-KNOWLEDGE CONTROL OF OPTIMIZATION ALGORITHMS

    COMPUTATIONAL INTELLIGENCE, Issue 4 2005
    Tom Carchrae
    This paper addresses the question of allocating computational resources among a set of algorithms to achieve the best performance on scheduling problems. Our primary motivation in addressing this problem is to reduce the expertise needed to apply optimization technology. Therefore, we investigate algorithm control techniques that make decisions based only on observations of the improvement in solution quality achieved by each algorithm. We call our approach "low knowledge" since it does not rely on complex prediction models, either of the problem domain or of algorithm behavior. We show that a low-knowledge approach results in a system that achieves significantly better performance than all of the pure algorithms without requiring additional human expertise. Furthermore the low-knowledge approach achieves performance equivalent to a perfect high-knowledge classification approach. [source]
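
    A minimal sketch of the "low-knowledge" idea, allocating time slices among black-box optimizers using only the observed improvement in solution quality, follows; the allocation rule and the toy algorithms are illustrative assumptions, not the paper's exact control policy.

```python
# Minimal sketch of a low-knowledge control loop: share a time budget among several
# black-box optimizers based only on the improvement each one has delivered recently.
import random

def low_knowledge_control(algorithms, total_slices=100, epsilon=0.1):
    # one warm-up slice per algorithm to initialise its best-so-far value
    best = {name: step() for name, step in algorithms}
    credit = {name: 1.0 for name, _ in algorithms}      # recent improvement credit
    incumbent = min(best.values())
    for _ in range(total_slices - len(algorithms)):
        if random.random() < epsilon:                   # occasionally explore a weak algorithm
            name, step = random.choice(algorithms)
        else:                                           # otherwise run the best recent improver
            name, step = max(algorithms, key=lambda a: credit[a[0]])
        value = step()                                  # best objective value found in this slice
        improvement = max(0.0, best[name] - value)
        credit[name] = 0.7 * credit[name] + 0.3 * improvement   # exponentially decayed credit
        best[name] = min(best[name], value)
        incumbent = min(incumbent, value)
    return incumbent

random.seed(0)
algos = [("random_restart", lambda: random.uniform(0, 10)),   # toy minimisation "algorithms"
         ("greedy", lambda: random.uniform(0, 5))]
print("best value found:", low_knowledge_control(algos))
```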


    Corporate Governance and Financial Distress: evidence from Taiwan

    CORPORATE GOVERNANCE, Issue 3 2004
    Tsun-Siou Lee
    Prior empirical evidence supports the wealth expropriation hypothesis that weak corporate governance induced by certain types of ownership structures and board composition tends to result in minority interest expropriation. This in turn reduces corporate value. However, it is still unclear whether corporate financial distress is related to these corporate governance characteristics. To answer this question, we adopt three variables to proxy for corporate governance risk, namely, the percentage of directors occupied by the controlling shareholder, the percentage of the controlling shareholder's shareholding pledged for bank loans (pledge ratio), and the deviation in control away from the cash flow rights. Binary logistic regressions are then fitted to generate dichotomous prediction models. Taiwanese listed firms, characterised by a high degree of ownership concentration, similar to that in most countries, are used as our empirical samples. The evidence suggests that the three variables mentioned above are positively related to the risk for financial distress in the following year. Generally speaking, firms with weak corporate governance are vulnerable to economic downturns and the probability of falling into financial distress increases. [source]
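
    A sketch of the dichotomous prediction model described, a binary logistic regression of next-year distress on the three governance proxies, is given below with synthetic data standing in for the Taiwanese sample.

```python
# Sketch of a binary logistic regression of next-year financial distress on three
# governance proxies (board seats held by the controller, pledge ratio, and the
# control/cash-flow deviation). All data below are synthetic placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 500
board_pct = rng.uniform(0, 1, n)       # % of directors from the controlling shareholder
pledge_ratio = rng.uniform(0, 1, n)    # % of controller's shares pledged for bank loans
deviation = rng.uniform(0, 0.5, n)     # control rights minus cash-flow rights

logit_p = -3 + 1.5 * board_pct + 2.0 * pledge_ratio + 3.0 * deviation
distress = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))   # synthetic distress indicator

X = sm.add_constant(np.column_stack([board_pct, pledge_ratio, deviation]))
fit = sm.Logit(distress, X).fit(disp=False)
print(fit.params)   # positive coefficients: higher governance risk -> higher distress odds
```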


    Diagnostic potential of serum protein pattern in Type 2 diabetic nephropathy

    DIABETIC MEDICINE, Issue 12 2007
    Y-H. Yang
    Abstract Aims Microalbuminuria is the earliest clinical sign of diabetic nephropathy (DN). However, the multifactorial nature of DN supports the application of combined markers as a diagnostic tool. Thus, another screening approach, such as protein profiling, is required for accurate diagnosis. Surface enhanced laser desorption/ionization time-of-flight mass spectrometry (SELDI-TOF-MS) is a novel method for biomarker discovery. We aimed to use SELDI and bioinformatics to define and validate a DN-specific protein pattern in serum. Methods SELDI was used to obtain protein or polypeptide patterns from serum samples of 65 patients with DN and 65 non-DN subjects. From signatures of protein/polypeptide mass, a decision tree model was established for diagnosing the presence of DN. We estimated the proportion of correct classifications from the model by applying it to a masked group of 22 patients with DN and 28 non-DN subjects. The weak cationic exchange (CM10) ProteinChip arrays were performed on a ProteinChip PBS IIC reader. Results The intensities of 22 detected peaks appeared up-regulated, whereas 24 peaks were down-regulated more than twofold (P < 0.01) in the DN group compared with the non-DN groups. The algorithm identified a diagnostic DN pattern of six protein/polypeptide masses. On masked assessment, prediction models based on these protein/polypeptides achieved a sensitivity of 90.9% and specificity of 89.3%. Conclusion These observations suggest that DN patients have a unique cluster of molecular components in serum, which are present in their SELDI profile. Identification and characterization of these molecular components will help in the understanding of the pathogenesis of DN. The serum protein signature, combined with a tree analysis pattern, may provide a novel clinical diagnostic approach for DN. [source]
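
    The classification step, training a decision tree on protein-peak intensities and reporting sensitivity and specificity on a masked group, can be sketched as follows; the peak data are synthetic and the tree settings are arbitrary.

```python
# Illustrative sketch of a classification tree on protein-peak intensities, evaluated
# on a held-out ("masked") group via sensitivity and specificity. Data are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(5)
n, n_peaks = 130, 46                     # 46 differential peaks, per the abstract
y = rng.integers(0, 2, n)                # 1 = DN, 0 = non-DN
X = rng.normal(size=(n, n_peaks)) + y[:, None] * rng.normal(0.8, 0.2, n_peaks)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=50, random_state=5)
tree = DecisionTreeClassifier(max_depth=3, random_state=5).fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, tree.predict(X_te), labels=[0, 1]).ravel()
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```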


    Use of a Voltammetric Electronic Tongue for Detection and Classification of Nerve Agent Mimics

    ELECTROANALYSIS, Issue 14 2010
    Inmaculada Campos
    Abstract An electronic tongue (ET) based on pulse voltammetry has been used to predict the presence of nerve agent mimics in aqueous environments. The electronic tongue array consists of eight working electrodes (Au, Pt, Ir, Rh, Cu, Co, Ni and Ag) encapsulated on a stainless steel cylinder. Studies including principal component analysis (PCA), artificial neural networks (fuzzy ARTMAP) and partial least square techniques (PLS) have been applied for data management and prediction models. For instance the electronic tongue is able to discriminate the presence of the nerve agent simulants diethyl chlorophosphate (DCP) and diethyl cyanophosphate (DCNP) from the presence of other organophosphorous derivatives in water. Finally, PLS data analysis using a system of 3 compounds and 3 concentration levels shows a good accuracy in concentration prediction for DCP and DCNP in aqueous environments. [source]


    Modeling and predicting complex space–time structures and patterns of coastal wind fields

    ENVIRONMETRICS, Issue 5 2005
    Montserrat Fuentes
    Abstract A statistical technique is developed for wind field mapping that can be used to improve either the assimilation of surface wind observations into a model initial field or the accuracy of post-processing algorithms run on meteorological model output. The observed wind field at any particular location is treated as a function of the true (but unknown) wind and measurement error. The wind field from numerical weather prediction models is treated as a function of a linear and multiplicative bias and a term which represents random deviations with respect to the true wind process. A Bayesian approach is taken to provide information about the true underlying wind field, which is modeled as a stochastic process with a non-stationary and non-separable covariance. The method is applied to forecast wind fields from a widely used mesoscale numerical weather prediction (NWP) model (MM5). The statistical model tests are carried out for the wind speed over the Chesapeake Bay and the surrounding region for 21 July 2002. Coastal wind observations that have not been used in the MM5 initial conditions or forecasts are used in conjunction with the MM5 forecast wind field (valid at the same time that the observations were available) in a post-processing technique that combined these two sources of information to predict the true wind field. Based on the mean square error, this procedure provides a substantial correction to the MM5 wind field forecast over the Chesapeake Bay region. Copyright © 2005 John Wiley & Sons, Ltd. [source]
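
    One way to write the two data equations implied by the abstract (notation assumed) is:

```latex
% Measurement and model equations implied by the abstract; Z_obs is the observed wind,
% Z_nwp the numerical weather prediction output, W the true (unknown) wind process.
\begin{align}
  Z_{\mathrm{obs}}(\mathbf{s}) &= W(\mathbf{s}) + e(\mathbf{s}),
      && e : \text{measurement error}, \\
  Z_{\mathrm{nwp}}(\mathbf{s}) &= a + b\,W(\mathbf{s}) + \delta(\mathbf{s}),
      && a, b : \text{additive/multiplicative bias},\ \delta : \text{random deviation},
\end{align}
% with W(.) modelled as a process with a non-stationary, non-separable covariance and
% inference for W carried out in a Bayesian framework.
```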


    Random forest can predict 30-day mortality of spontaneous intracerebral hemorrhage with remarkable discrimination

    EUROPEAN JOURNAL OF NEUROLOGY, Issue 7 2010
    S. -Y.
    Background and purpose: Risk-stratification models based on patient and disease characteristics are useful for aiding clinical decisions and for comparing the quality of care between different physicians or hospitals. In addition, prediction of mortality is beneficial for optimizing resource utilization. We evaluated the accuracy and discriminating power of the random forest (RF) to predict 30-day mortality of spontaneous intracerebral hemorrhage (SICH). Methods: We retrospectively studied 423 patients admitted to the Taichung Veterans General Hospital who were diagnosed with spontaneous SICH within 24 h of stroke onset. The initial evaluation data of the patients were used to train the RF model. Areas under the receiver operating characteristic curves (AUC) were used to quantify the predictive performance. The performance of the RF model was compared to that of an artificial neural network (ANN), support vector machine (SVM), logistic regression model, and the ICH score. Results: The RF had an overall accuracy of 78.5% for predicting the mortality of patients with SICH. The sensitivity was 79.0%, and the specificity was 78.4%. The AUCs were as follows: RF, 0.87 (0.84–0.90); ANN, 0.81 (0.77–0.85); SVM, 0.79 (0.75–0.83); logistic regression, 0.78 (0.74–0.82); and ICH score, 0.72 (0.68–0.76). The discriminatory power of RF was superior to that of the other prediction models. Conclusions: The RF provided the best predictive performance amongst all of the tested models. We believe that the RF is a suitable tool for clinicians to use in predicting the 30-day mortality of patients after SICH. [source]
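
    A sketch of the central comparison, fitting a random forest and a logistic regression on the same admission variables and comparing held-out AUCs, is shown below on synthetic stand-in data.

```python
# Sketch of the model comparison: random forest vs. logistic regression on the same
# features, compared by test-set AUC. Data are synthetic stand-ins for the SICH cohort.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=423, n_features=10, weights=[0.75], random_state=6)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=6, stratify=y)

rf = RandomForestClassifier(n_estimators=500, random_state=6).fit(X_tr, y_tr)
lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

print("RF AUC:", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))
print("LR AUC:", roc_auc_score(y_te, lr.predict_proba(X_te)[:, 1]))
```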


    Proposed life prediction model for laser-formed high-strength low-alloy curved components

    FATIGUE & FRACTURE OF ENGINEERING MATERIALS AND STRUCTURES, Issue 4 2007
    P. J. McGRATH
    ABSTRACT Techniques employed for material processing using laser technology are progressing at a rapid pace. One such technique is that of forming sheet metal plates. This high-intensity localized heating process allows for forming of metallic sheet materials without the need for expensive tools and dies or any mechanical assistance. The fundamental mechanisms related to this process are reasonably well understood and documented but there remain areas that require further research and development. One such area is the fatigue behaviour of sheet materials manufactured by this novel process. Hence, this paper deals with fatigue life prediction of sheet metal components laser-formed to a radius of curvature of approximately 120 mm. The approach to this proposed model considers the mean stress relationship as given by Gerber and a prediction model derived from combining the aspects of life prediction models according to Collins and Juvinall & Marshek. [source]
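
    For reference, the Gerber mean-stress relation invoked here is usually written as below (notation assumed: alternating stress, mean stress, ultimate tensile strength, and the equivalent fully reversed stress amplitude that is then entered into the S-N curve):

```latex
% Gerber parabola relating mean stress and stress amplitude (commonly quoted form):
% \sigma_a = alternating stress, \sigma_m = mean stress, \sigma_u = ultimate tensile
% strength, \sigma_{ar} = equivalent fully reversed stress amplitude.
\begin{equation}
  \frac{\sigma_a}{\sigma_{ar}} + \left(\frac{\sigma_m}{\sigma_u}\right)^{2} = 1
  \quad\Longrightarrow\quad
  \sigma_{ar} = \frac{\sigma_a}{1 - \left(\sigma_m/\sigma_u\right)^{2}}
\end{equation}
% \sigma_{ar} is then used with a stress-life (S-N) curve to estimate cycles to failure.
```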


    Severe Deep Moist Convective Storms: Forecasting and Mitigation

    GEOGRAPHY COMPASS (ELECTRONIC), Issue 1 2008
    David L. Arnold
    Small-scale (2–20 km) circulations, termed 'severe deep moist convective storms', account for a disproportionate share of the world's insured weather-related losses. Spatial frequency maximums of severe convective events occur in South Africa, India, Mexico, the Caucasus, and Great Plains/Prairies region of North America, where the maximum tornado frequency occurs east of the Rocky Mountains. Interest in forecasting severe deep moist convective systems, especially those that produce tornadoes, dates to 1884 when tornado alerts were first provided in the central United States. Modern thunderstorm and tornado forecasting relies on technology and theory, but in the post-World War II era interest in forecasting has also been driven by public pressure. The forecasting process begins with a diagnostic analysis, in which the forecaster considers the potential of the atmospheric environment to produce severe convective storms (which requires knowledge of the evolving kinematic and thermodynamic fields, and the character of the land surface over which the storms will pass), and the likely character of the storms that may develop. Improvements in forecasting will likely depend on technological advancements, such as the development of phased-array radar systems and finer resolution numerical weather prediction models. Once initiated, the evolution of deep convective storms is monitored by satellite and radar. Mitigation of the hazards posed by severe deep moist convective storms is a three-step process, involving preparedness, response, and recovery. Preparedness implies that risks have been identified and organizations and individuals are familiar with a response plan. Response necessitates that potential events are identified before they occur and the developing threat is communicated to the public. Recovery is a function of the awareness of local, regional, and even national governments to the character and magnitude of potential events in specific locations, and whether or not long-term operational plans are in place at the time of disasters. [source]


    Conceptual problems in laypersons' understanding of individualized cancer risk: a qualitative study

    HEALTH EXPECTATIONS, Issue 1 2009
    Paul K. J. Han MD MA MPH
    Abstract Objective: To explore laypersons' understanding of individualized cancer risk estimates, and to identify conceptual problems that may limit this understanding. Background: Risk prediction models are increasingly used to provide people with information about their individual risk of cancer and other diseases. However, laypersons may have difficulty understanding individualized risk information, because of conceptual as well as computational problems. Design: A qualitative study was conducted using focus groups. Semi-structured interviews explored participants' understandings of the concept of risk, and their interpretations of a hypothetical individualized colorectal cancer risk estimate. Setting and participants: Eight focus groups were conducted with 48 adults aged 50–74 years residing in two major US metropolitan areas. Participants had high school or greater education, some familiarity with information technology, and no personal or family history of cancer. Results: Several important conceptual problems were identified. Most participants thought of risk not as a neutral statistical concept, but as signifying danger and emotional threat, and viewed cancer risk in terms of concrete risk factors rather than mathematical probabilities. Participants had difficulty acknowledging uncertainty implicit to the concept of risk, and judging the numerical significance of individualized risk estimates. The most challenging conceptual problems related to conflict between subjective and objective understandings of risk, and difficulties translating aggregate-level objective risk estimates to the individual level. Conclusions: Several conceptual problems limit laypersons' understanding of individualized cancer risk information. These problems have implications for future research on health numeracy, and for the application of risk prediction models in clinical and public health settings. [source]


    Runoff and suspended sediment yields from an unpaved road segment, St John, US Virgin Islands

    HYDROLOGICAL PROCESSES, Issue 1 2007
    Carlos E. Ramos-Scharrón
    Abstract Unpaved roads are believed to be the primary source of terrigenous sediments being delivered to marine ecosystems around the island of St John in the eastern Caribbean. The objectives of this study were to: (1) measure runoff and suspended sediment yields from a road segment; (2) develop and test two event-based runoff and sediment prediction models; and (3) compare the predicted sediment yields against measured values from an empirical road erosion model and from a sediment trap. The runoff models use the Green–Ampt infiltration equation to predict excess precipitation and then use either an empirically derived unit hydrograph or a kinematic wave to generate runoff hydrographs. Precipitation, runoff, and suspended sediment data were collected from a 230 m long, mostly unpaved road segment over an 8-month period. Only 3–5 mm of rainfall was sufficient to initiate runoff from the road surface. Both models simulated similar hydrographs. Model performance was poor for storms with less than 1 cm of rainfall, but improved for larger events. The largest source of error was the inability to predict initial infiltration rates. The two runoff models were coupled with empirical sediment rating curves, and the predicted sediment yields were approximately 0.11 kg per square meter of road surface per centimetre of precipitation. The sediment trap data indicated a road erosion rate of 0.27 kg m−2 cm−1. The difference in sediment production between these two methods can be attributed to the fact that the suspended sediment samples were predominantly sand and silt, whereas the sediment trap yielded mostly sand and gravel. The combination of these data sets yields a road surface erosion rate of 0.31 kg m−2 cm−1, or approximately 36 kg m−2 year−1. This is four orders of magnitude higher than the measured erosion rate from undisturbed hillslopes. The results confirm the importance of unpaved roads in altering runoff and erosion rates in a tropical setting, provide insights into the controlling processes, and provide guidance for predicting runoff and sediment yields at the road-segment scale. Copyright © 2006 John Wiley & Sons, Ltd. [source]
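
    The Green–Ampt step shared by both runoff models can be sketched as below: infiltration capacity f = K(1 + ψΔθ/F) with rainfall in excess of capacity becoming runoff. Parameter values are illustrative, not the calibrated road-surface values.

```python
# Sketch of Green-Ampt excess-precipitation calculation: infiltration capacity
# f = K*(1 + psi*d_theta/F), where F is cumulative infiltration; rainfall above f runs off.
def green_ampt_excess(rain_mm_per_hr, dt_hr=0.1, K=2.0, psi=50.0, d_theta=0.3):
    """Return (runoff_mm, infiltration_mm) series for a rainfall-intensity series (mm/h)."""
    F = 1e-6                                       # cumulative infiltration (mm); tiny seed avoids /0
    runoff, infil = [], []
    for r in rain_mm_per_hr:
        f_cap = K * (1.0 + psi * d_theta / F)      # infiltration capacity (mm/h)
        f_act = min(r, f_cap)                      # cannot infiltrate more than it rains
        F += f_act * dt_hr
        infil.append(f_act * dt_hr)
        runoff.append(max(0.0, (r - f_act) * dt_hr))
    return runoff, infil

rain = [0, 5, 20, 30, 15, 5, 0]                    # mm/h in successive 0.1 h steps (made up)
runoff, infil = green_ampt_excess(rain)
print("total excess (runoff) mm:", round(sum(runoff), 2),
      "infiltrated mm:", round(sum(infil), 2))
```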


    Predicting Business Failures in Canada,

    ACCOUNTING PERSPECTIVES, Issue 2 2007
    J. Efrim Boritz
    ABSTRACT Empirical researchers and practitioners frequently use the bankruptcy prediction models developed by Altman (1968) and Ohlson (1980). This poses a potential problem for practitioners in Canada and researchers working with Canadian data because the Altman and Ohlson models were developed using U.S. data. We compare Canadian bankruptcy prediction models developed by Springate (1978), Altman and Levallee (1980), and Legault and Véronneau (1986) against the Altman and Ohlson models using recent data to determine the robustness of all models over time and the applicability of the Altman and Ohlson models to the Canadian environment. Our results indicate that the models developed by Springate (1978) and Legault and Véronneau (1986) yield similar results to the Ohlson (1980) model while being simpler and requiring less data. The Altman (1968) and Altman and Levallee (1980) models generally have lower performance than the other models. All models have stronger performance with the original coefficients than with re-estimated coefficients. Our results regarding the Altman and Ohlson models are consistent with Begley, Ming, and Watts (1996), who found that the original version of the Ohlson model is superior to the Altman model and is robust over time. [source]
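
    For reference, the Altman (1968) model that the Canadian models are benchmarked against is the familiar Z-score discriminant function, in its commonly quoted form (ratio definitions per the 1968 paper):

```latex
% Altman (1968) Z-score discriminant function, as commonly quoted:
\begin{equation}
  Z = 1.2\,X_1 + 1.4\,X_2 + 3.3\,X_3 + 0.6\,X_4 + 1.0\,X_5
\end{equation}
% X_1 = working capital / total assets, X_2 = retained earnings / total assets,
% X_3 = EBIT / total assets, X_4 = market value of equity / book value of liabilities,
% X_5 = sales / total assets; lower Z indicates higher failure risk.
```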


    Bioaccumulation Assessment Using Predictive Approaches,

    INTEGRATED ENVIRONMENTAL ASSESSMENT AND MANAGEMENT, Issue 4 2009
    John W Nichols
    Abstract Mandated efforts to assess chemicals for their potential to bioaccumulate within the environment are increasingly moving into the realm of data inadequacy. Consequently, there is an increasing reliance on predictive tools to complete regulatory requirements in a timely and cost-effective manner. The kinetic processes of absorption, distribution, metabolism, and elimination (ADME) determine the extent to which chemicals accumulate in fish and other biota. Current mathematical models of bioaccumulation implicitly or explicitly consider these ADME processes, but there is a lack of data needed to specify critical model input parameters. This is particularly true for compounds that are metabolized, exhibit restricted diffusion across biological membranes, or do not partition simply to tissue lipid. Here we discuss the potential of in vitro test systems to provide needed data for bioaccumulation modeling efforts. Recent studies demonstrate the utility of these systems and provide a "proof of concept" for the prediction models. Computational methods that predict ADME processes from an evaluation of chemical structure are also described. Most regulatory agencies perform bioaccumulation assessments using a weight-of-evidence approach. A strategy is presented for incorporating predictive methods into this approach. To implement this strategy it is important to understand the "domain of applicability" of both in vitro and structure-based approaches, and the context in which they are applied. [source]


    Off-site monitoring systems for predicting bank underperformance: a comparison of neural networks, discriminant analysis, and professional human judgment

    INTELLIGENT SYSTEMS IN ACCOUNTING, FINANCE & MANAGEMENT, Issue 3 2001
    Philip Swicegood
    This study compares the ability of discriminant analysis, neural networks, and professional human judgment methodologies in predicting commercial bank underperformance. Experience from the banking crisis of the 1980s and early 1990s suggests that improved prediction models are needed for helping prevent bank failures and promoting economic stability. Our research seeks to address this issue by exploring new prediction model techniques and comparing them to existing approaches. When comparing the predictive ability of all three models, the neural network model shows slightly better predictive ability than that of the regulators. Both the neural network model and regulators significantly outperform the benchmark discriminant analysis model's accuracy. These findings suggest that neural networks show promise as an off-site surveillance methodology. Factoring in the relative costs of the different types of misclassifications from each model also indicates that neural network models are better predictors, particularly when weighting Type I errors more heavily. Further research with neural networks in this field should yield workable models that greatly enhance the ability of regulators and bankers to identify and address weaknesses in banks before they approach failure. Copyright © 2001 John Wiley & Sons, Ltd. [source]


    Statistical prediction of global sea-surface temperature anomalies

    INTERNATIONAL JOURNAL OF CLIMATOLOGY, Issue 14 2003
    A. W. Colman
    Abstract Sea-surface temperature (SST) is one of the principal factors that influence seasonal climate variability, and most seasonal prediction schemes make use of information regarding SST anomalies. In particular, dynamical atmospheric prediction models require global gridded SST data prescribed through the target season. The simplest way of providing those data is to persist the SST anomalies observed at the start of the forecast at each grid point, with some damping, and this strategy has proved to be quite effective in practice. In this paper we present a statistical scheme that aims to improve that basic strategy by combining three individual methods together: simple persistence, canonical correlation analysis (CCA), and nearest-neighbour regression. Several weighting schemes were tested: the best of these is one that uses equal weight in all areas except the east tropical Pacific, where CCA is preferred. The overall performance of the combined scheme is better than the individual schemes. The results show improvements in tropical ocean regions for lead times beyond 1 or 2 months, but the skill of simple persistence is difficult to beat in the extratropics at all lead times. Aspects such as averaging periods and grid size were also investigated: results showed little sensitivity to these factors. The combined statistical SST prediction scheme can also be used to improve statistical regional rainfall forecasts that use SST anomaly patterns as predictors. Copyright © Crown Copyright 2003. Published by John Wiley & Sons, Ltd. [source]
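
    A minimal sketch of the combination strategy, blending damped persistence with a second statistical prediction (standing in for CCA) using region-dependent weights, is given below; the damping factor and weights are placeholders rather than the scheme's fitted values.

```python
# Sketch of a combined SST anomaly forecast: damped persistence blended with a second
# statistical prediction, with a larger weight on the latter in the east tropical Pacific.
# Damping and weights are illustrative assumptions, not the scheme's actual values.
import numpy as np

def combined_sst_forecast(anom_now, stat_forecast, lead_months, in_east_tropical_pacific):
    damping = 0.8 ** lead_months                    # damped persistence of the current anomaly
    persistence = damping * anom_now
    # equal weight everywhere except the east tropical Pacific, where the CCA-like
    # statistical forecast is preferred
    w_stat = np.where(in_east_tropical_pacific, 0.8, 0.5)
    return w_stat * stat_forecast + (1.0 - w_stat) * persistence

anom_now = np.array([0.6, -0.3, 1.2])               # observed SST anomalies (K) at 3 grid points
stat = np.array([0.4, -0.1, 0.9])                   # CCA-style forecast for the target season
mask = np.array([False, False, True])               # third point lies in the east tropical Pacific
print(combined_sst_forecast(anom_now, stat, lead_months=3, in_east_tropical_pacific=mask))
```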


    Review and comparison of tropospheric scintillation prediction models for satellite communications

    INTERNATIONAL JOURNAL OF SATELLITE COMMUNICATIONS AND NETWORKING, Issue 4 2006
    P. Yu
    Abstract An overview of the origin and characteristics of tropospheric scintillation is presented and a measurement database against which scintillation models are to be tested is described. Maximum likelihood log-normal and gamma distributions are compared with the measured distribution of scintillation intensity. Eleven statistical models of monthly mean scintillation intensity are briefly reviewed and their predictions compared with measurements. RMS error, correlation, percentage error bias, RMS percentage error and percentage error skew are used in a comprehensive comparison of these models. In the context of our measurements, the ITU-R model has the best overall performance. Significant difference in the relative performance of the models is apparent when these results are compared with those from a similar study using data measured in Italy. Copyright © 2006 John Wiley & Sons, Ltd. [source]


    Using automatic passenger counter data in bus arrival time prediction

    JOURNAL OF ADVANCED TRANSPORTATION, Issue 3 2007
    Mei Chen
    Artificial neural networks have been used in a variety of prediction models because of their flexibility in modeling complicated systems. Using the automatic passenger counter data collected by New Jersey Transit, a model based on a neural network was developed to predict bus arrival times. Test runs showed that the predicted travel times generated by the models are reasonably close to the actual arrival times. [source]


    Short-term travel speed prediction models in car navigation systems

    JOURNAL OF ADVANCED TRANSPORTATION, Issue 2 2006
    Seungjae Lee
    The objective of this study is the development of short-term prediction models to predict average spot speeds of the subject location in the short-term periods of 5, 10 and 15 minutes respectively. In this study, field data were used to compare the predictive performance of Regression Analysis, ARIMA, Kalman Filtering and Neural Network models. These field data were collected from image processing detectors at the urban expressway for 17 hours including both peak and non-peak hours. Most of the results were reliable, but the results of models using Kalman Filtering and Neural Networks were more accurate and realistic than those of the others. [source]
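
    As an illustration of the Kalman-filtering option among the compared models, the sketch below applies a random-walk state model for spot speed with noisy 5-minute observations; the noise variances are assumed values, not calibrated ones.

```python
# Sketch of a scalar Kalman filter for short-term spot-speed prediction: random-walk
# state model, noisy observations; the one-step-ahead state estimate is the forecast.
import numpy as np

def kalman_speed_forecast(speeds, q=4.0, r=9.0):
    """One-step-ahead speed predictions for a series of observed spot speeds (km/h)."""
    x, p = speeds[0], 10.0          # state estimate and its variance (initial guesses)
    preds = []
    for z in speeds[1:]:
        # predict: random-walk state, variance grows by process noise q
        x_pred, p_pred = x, p + q
        preds.append(x_pred)        # short-term forecast for the next interval
        # update with the new observation z (measurement noise variance r)
        k = p_pred / (p_pred + r)
        x = x_pred + k * (z - x_pred)
        p = (1 - k) * p_pred
    return preds

observed = [52, 50, 47, 40, 35, 33, 36, 42]          # 5-minute average spot speeds (made up)
print(np.round(kalman_speed_forecast(observed), 1))
```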


    A dynamic shortest path algorithm using multi-step ahead link travel time prediction

    JOURNAL OF ADVANCED TRANSPORTATION, Issue 1 2005
    Young-Ihn Lee
    Abstract In this paper, a multi-step ahead prediction algorithm of link travel speeds has been developed using a Kalman filtering technique in order to calculate a dynamic shortest path. The one-step and the multi-step ahead link travel time prediction models for the calculation of the dynamic shortest path have been applied to the directed test network that is composed of 16 nodes: 3 entrance nodes, 2 exit nodes and 11 internal nodes. Time-varying traffic conditions such as flows and travel time data for the test network have been generated using the CORSIM model. The results show that the multi-step ahead algorithm compares more favorably than the one-step algorithm in searching for the dynamic shortest-time path. [source]
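
    The sketch below illustrates how predicted, time-varying link travel times can feed a dynamic shortest-path search (a Dijkstra-style label-setting algorithm with time-dependent link costs, assuming FIFO links); the toy network and travel-time functions are hypothetical, not the 16-node test network.

```python
# Sketch of a time-dependent shortest-path search driven by predicted link travel times.
# Each link cost is a function of the time at which the link is entered (FIFO assumed).
import heapq

def dynamic_shortest_path(graph, origin, dest, start_time):
    """graph[u] = list of (v, travel_time_fn); travel_time_fn(t) gives link time at entry time t."""
    best = {origin: start_time}
    heap = [(start_time, origin)]
    while heap:
        t, u = heapq.heappop(heap)
        if u == dest:
            return t - start_time                    # total travel time
        if t > best.get(u, float("inf")):            # stale heap entry
            continue
        for v, tt in graph.get(u, []):
            arrival = t + tt(t)
            if arrival < best.get(v, float("inf")):
                best[v] = arrival
                heapq.heappush(heap, (arrival, v))
    return None

# toy network: link A->B is predicted to slow down after t = 3 (e.g. forecast congestion)
graph = {
    "A": [("B", lambda t: 5 if t < 3 else 12), ("C", lambda t: 8)],
    "B": [("D", lambda t: 4)],
    "C": [("D", lambda t: 6)],
}
print("travel time A->D starting at t=0:", dynamic_shortest_path(graph, "A", "D", 0))  # via B
print("travel time A->D starting at t=4:", dynamic_shortest_path(graph, "A", "D", 4))  # via C
```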


    In defense of clinical judgment , and mechanical prediction

    JOURNAL OF BEHAVIORAL DECISION MAKING, Issue 5 2006
    Jason Dana
    Abstract Despite over 50 years of one-sided research favoring formal prediction rules over human judgment, the "clinical-statistical controversy," as it has come to be known, remains something of a hot-button issue. Surveying the objections to the formal approach, it seems the strongest point of disagreement is that clinical expertise can be replaced by statistics. We review and expand upon an unfortunately obscured part of Meehl's book to try to reconcile the issue. Building on Meehl, we argue that the clinician provides information that cannot be captured in, or outperformed by, mere frequency tables. However, that information is still best harnessed by a mechanical prediction rule that makes the ultimate decision. Two original studies support our arguments. The first study shows that multivariate prediction models using no data other than clinical speculations can perform well against statistical regression models. Study 2, however, showed that holistic predictions were less accurate than predictions made by mechanically combining smaller judgments without input from the judge at the combination stage. While we agree that clinical expertise cannot be replaced or neglected, we see no ethical reason to resist using explicit, mechanical rules for socially important decisions. Copyright © 2006 John Wiley & Sons, Ltd. [source]


    Behavioral forecasts do not improve the prediction of future behavior: a prospective study of self-injury,

    JOURNAL OF CLINICAL PSYCHOLOGY, Issue 10 2008
    Irene Belle Janis
    Abstract Clinicians are routinely encouraged to use multimodal assessments incorporating information from multiple sources when determining an individual's risk of dangerous or self-injurious behavior; however, some sources of information may not improve prediction models and so should not be relied on in such assessments. The authors examined whether individuals' prediction of their own future behavior improves prediction over using history of self-injurious thoughts and behaviors (SITB) alone. Sixty-four adolescents with a history of SITB were interviewed regarding their past year history of SITB, asked about the likelihood that they would engage in future SITB, and followed over a 6-month period. Individuals' forecasts of their future behavior were related to subsequent SITB, but did not improve prediction beyond the use of SITB history. In contrast, history of SITB improved prediction of subsequent SITB beyond individuals' behavioral forecasts. Clinicians should rely more on past history of a behavior than individuals' forecasts of future behavior in predicting SITB. © 2008 Wiley Periodicals, Inc. J Clin Psychol 64:1–11, 2008. [source]


    Rapid Profiling of Swiss Cheese by Attenuated Total Reflectance (ATR) Infrared Spectroscopy and Descriptive Sensory Analysis

    JOURNAL OF FOOD SCIENCE, Issue 6 2009
    N.A. Kocaoglu-Vurma
    ABSTRACT:, The acceptability of cheese depends largely on the flavor formed during ripening. The flavor profiles of cheeses are complex and region- or manufacturer-specific which have made it challenging to understand the chemistry of flavor development and its correlation with sensory properties. Infrared spectroscopy is an attractive technology for the rapid, sensitive, and high-throughput analysis of foods, providing information related to its composition and conformation of food components from the spectra. Our objectives were to establish infrared spectral profiles to discriminate Swiss cheeses produced by different manufacturers in the United States and to develop predictive models for determination of sensory attributes based on infrared spectra. Fifteen samples from 3 Swiss cheese manufacturers were received and analyzed using attenuated total reflectance infrared spectroscopy (ATR-IR). The spectra were analyzed using soft independent modeling of class analogy (SIMCA) to build a classification model. The cheeses were profiled by a trained sensory panel using descriptive sensory analysis. The relationship between the descriptive sensory scores and ATR-IR spectra was assessed using partial least square regression (PLSR) analysis. SIMCA discriminated the Swiss cheeses based on manufacturer and production region. PLSR analysis generated prediction models with correlation coefficients of validation (rVal) between 0.69 and 0.96 with standard error of cross-validation (SECV) ranging from 0.04 to 0.29. Implementation of rapid infrared analysis by the Swiss cheese industry would help to streamline quality assurance. [source]