Percentage Error (percentage + error)

Distribution by Scientific Domains

Kinds of Percentage Error

  • absolute percentage error
  • mean absolute percentage error
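
Both measures are defined below for reference. This is the standard formulation (the articles listed further down may use minor variants), with y_t the observed value, ŷ_t the corresponding prediction, and n the number of observations.

```latex
\mathrm{APE}_t = \left|\frac{y_t - \hat{y}_t}{y_t}\right| \times 100\%,
\qquad
\mathrm{MAPE} = \frac{100\%}{n}\sum_{t=1}^{n}\left|\frac{y_t - \hat{y}_t}{y_t}\right|
```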


Selected Abstracts


    Original article: Apparent thermal diffusivity estimation for the heat transfer modelling of pork loin under air/steam cooking treatments

    INTERNATIONAL JOURNAL OF FOOD SCIENCE & TECHNOLOGY, Issue 9 2010
    Massimiliano Rinaldi
    Summary: Apparent thermal diffusivity, expressed as a linear function of product temperature, was estimated for pork cooked under two different treatments (forced convection, FC, and forced convection/steam combined, FC/S) at 100, 110, 120 and 140 °C by means of experimental time–temperature data and a purpose-built finite-difference algorithm. The slope and intercept of each function were used to calculate apparent thermal diffusivity at 40, 55 and 70 °C. In general, FC/S treatments gave significantly higher apparent thermal diffusivities than FC conditions. The apparent thermal diffusivities were then used to develop a model that predicts cooking time and final core temperature from the oven setting. The model was validated with additional cooking tests performed at temperatures different from those used for model development. Root mean square error values lower than 3.8 °C were obtained when comparing predicted and experimental temperature profiles, and percentage errors lower than 3.1% and 3.5% were obtained for cooking times and final core temperatures, respectively. [source]
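
    The study above couples a temperature-dependent apparent thermal diffusivity with a finite-difference conduction model, and then validates it via percentage errors on cooking time and core temperature. The sketch below illustrates the general idea only; the slab geometry, boundary treatment, property values and function names are assumptions, not the authors' algorithm.

```python
import numpy as np

def simulate_core_temperature(alpha_slope, alpha_intercept, t_oven,
                              t_initial=5.0, half_thickness=0.04,
                              nodes=21, dt=1.0, t_end=5400.0):
    """Explicit 1-D finite-difference conduction in a slab whose apparent
    thermal diffusivity is a linear function of temperature,
    alpha(T) = alpha_slope * T + alpha_intercept (m^2/s, T in deg C).
    Geometry, boundary treatment and default values are illustrative."""
    dx = half_thickness / (nodes - 1)
    temps = np.full(nodes, t_initial, dtype=float)
    times, core = [], []
    for step in range(int(t_end / dt)):
        alpha = alpha_slope * temps + alpha_intercept   # diffusivity at each node
        fo = alpha * dt / dx**2                         # local Fourier number
        new = temps.copy()
        new[1:-1] = temps[1:-1] + fo[1:-1] * (temps[2:] - 2*temps[1:-1] + temps[:-2])
        new[-1] = t_oven        # surface follows the oven set-point
        new[0] = new[1]         # symmetry (zero flux) at the core
        temps = new
        times.append((step + 1) * dt)
        core.append(temps[0])
    return np.array(times), np.array(core)

def percentage_error(predicted, measured):
    """Percentage error between predicted and measured values,
    the validation measure quoted in the abstract."""
    return abs(predicted - measured) / measured * 100.0
```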


    From Model to Forecasting: A Multicenter Study in Emergency Departments

    ACADEMIC EMERGENCY MEDICINE, Issue 9 2010
    Mathias Wargon MD
    ACADEMIC EMERGENCY MEDICINE 2010; 17:970–978 © 2010 by the Society for Academic Emergency Medicine. Abstract. Objectives: This study investigated whether mathematical models using calendar variables could identify the determinants of emergency department (ED) census over time in geographically close EDs and assessed the performance of long-term forecasts. Methods: Daily visits in four EDs at academic hospitals in the Paris area were collected from 2004 to 2007. First, a general linear model (GLM) based on calendar variables was used to assess two consecutive periods of 2 years each to create and test the mathematical models. Second, 2007 ED attendance was forecasted, based on a training set of data from 2004 to 2006. These analyses were performed on data sets from each individual ED and in a virtual mega ED, grouping all of the visits. Models and forecast accuracy were evaluated by mean absolute percentage error (MAPE). Results: The authors recorded 299,743 and 322,510 ED visits for the two periods, 2004–2005 and 2006–2007, respectively. The models accounted for up to 50% of the variations with a MAPE less than 10%. Visit patterns according to weekdays and holidays were different from one hospital to another, without seasonality. Influential factors changed over time within one ED, reducing the accuracy of forecasts. Forecasts led to a MAPE of 5.3% for the four EDs together and from 8.1% to 17.0% for each hospital. Conclusions: Unexpectedly, in geographically close EDs over short periods of time, calendar determinants of attendance were different. In our setting, models and forecasts are more valuable to predict the combined ED attendance of several hospitals. In similar settings where resources are shared between facilities, these mathematical models could be a valuable tool to anticipate staff needs and site allocation. [source]
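
    As a rough illustration of the approach described (a calendar-variable linear model scored by MAPE), the sketch below fits ordinary least squares on weekday and month dummies. The data layout and variable names are assumptions, and the authors' GLM included further calendar terms.

```python
import numpy as np
import pandas as pd

def mape(actual, forecast):
    """Mean absolute percentage error, the accuracy criterion used in the study."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs((actual - forecast) / actual)) * 100.0

def fit_calendar_model(daily_visits):
    """Least-squares fit of daily ED visits on weekday and month dummies.
    `daily_visits` is assumed to be a pandas Series with a DatetimeIndex;
    the published model was a fuller GLM with additional calendar terms."""
    X = pd.get_dummies(pd.DataFrame({
        "weekday": daily_visits.index.dayofweek.astype(str),
        "month": daily_visits.index.month.astype(str),
    }), drop_first=True).astype(float)
    X.insert(0, "intercept", 1.0)
    beta, *_ = np.linalg.lstsq(X.values, daily_visits.to_numpy(float), rcond=None)
    return list(X.columns), beta

# Typical use (hypothetical data): fit on 2004-2006 visits, rebuild the same
# dummy matrix for 2007, forecast, and report mape(actual_2007, forecast_2007).
```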


    Short-Term Traffic Volume Forecasting Using Kalman Filter with Discrete Wavelet Decomposition

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 5 2007
    Yuanchang Xie
    Short-term traffic volume data are often corrupted by local noise, which may significantly affect the prediction accuracy of short-term traffic volumes. Discrete wavelet decomposition analysis is used to divide the original data into approximation and detail components so that the Kalman filter model can be applied to the denoised data and the prediction accuracy can be improved. Two types of wavelet Kalman filter models, based on the Daubechies 4 and Haar mother wavelets, are investigated. Traffic volume data collected from four different locations are used for comparison in this study. The test results show that both proposed wavelet Kalman filter models outperform the direct Kalman filter model in terms of mean absolute percentage error and root mean square error. [source]
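
    A minimal sketch of the two ingredients named in the abstract, wavelet denoising followed by Kalman filtering, using PyWavelets and a scalar random-walk filter. The thresholding rule, noise variances and the filter's state model are assumptions; the paper's formulation is more elaborate.

```python
import numpy as np
import pywt

def wavelet_denoise(volumes, wavelet="db4", level=2):
    """Split a traffic-volume series into approximation and detail coefficients,
    soft-threshold the details, and reconstruct a denoised series.
    The universal-threshold rule used here is an assumption."""
    volumes = np.asarray(volumes, float)
    coeffs = pywt.wavedec(volumes, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745              # noise estimate
    thresh = sigma * np.sqrt(2 * np.log(len(volumes)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(volumes)]

def kalman_one_step_forecasts(series, q=1.0, r=10.0):
    """One-step-ahead forecasts from a scalar random-walk Kalman filter applied
    to the denoised series (much simpler than the paper's state model)."""
    series = np.asarray(series, float)
    x, p, preds = series[0], 1.0, []
    for z in series[1:]:
        p += q                   # time update (random-walk process noise)
        preds.append(x)          # forecast for this step
        k = p / (p + r)          # Kalman gain
        x += k * (z - x)         # measurement update
        p *= (1.0 - k)
    return np.array(preds)
```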


    Direct injection horse-urine analysis for the quantification and confirmation of threshold substances for doping control.

    DRUG TESTING AND ANALYSIS, Issue 8 2009

    Abstract: Levodopa and dopamine have been abused as performance-altering substances in horse racing. Urinary 3-methoxytyramine is used as an indicator of dopaminergic manipulation resulting from dopamine or levodopa administration and is prohibited above a urinary threshold of 4 µg mL⁻¹ (free and conjugated). A simple liquid chromatography/mass spectrometry (LC-MS) method was developed and validated for the quantification and identification of 3-methoxytyramine in equine urine. Sample preparation involved enzymatic hydrolysis and protein precipitation. Hydrophilic interaction liquid chromatography (HILIC) was selected as the separation technique because it allows effective retention of polar substances such as 3-methoxytyramine and efficient separation from matrix compounds. Electrospray ionization (ESI) in positive mode with product ion scan mode was chosen for detection of the analytes. Quantification of 3-methoxytyramine was performed with fragmentation at low collision energy, resulting in one product ion, while a second run at high collision energy was performed for confirmation (at least three abundant ions). Studies on matrix effects showed ion suppression that depended on the horse urine used. To overcome the variability originating from these matrix effects, an isotopically labelled internal standard was used and a linear regression calibration methodology was applied for the quantitative determination of the analyte. The tested linear range was 1–20 µg mL⁻¹. The relative standard deviations of intra- and inter-assay analysis of 3-methoxytyramine in horse urine were lower than 4.2% and 3.2%, respectively. Overall accuracy (relative percentage error) was less than 6.2%. The method was applied to case samples, demonstrating simplicity, accuracy and selectivity. Copyright © 2009 John Wiley & Sons, Ltd. [source]
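
    The quantification step described above reduces to a linear calibration of peak-area ratio against concentration plus an accuracy check expressed as relative percentage error. A minimal sketch under those assumptions (variable names and the use of NumPy's polyfit are illustrative, not the authors' software):

```python
import numpy as np

def fit_calibration(concentrations, area_ratios):
    """Linear calibration of analyte/internal-standard peak-area ratio against
    concentration (here assumed in ug/mL over the 1-20 ug/mL range tested)."""
    slope, intercept = np.polyfit(concentrations, area_ratios, 1)
    return slope, intercept

def quantify(area_ratio, slope, intercept):
    """Back-calculate a sample concentration from the calibration line."""
    return (area_ratio - intercept) / slope

def relative_percentage_error(measured, nominal):
    """Accuracy expressed as relative percentage error (reported <6.2% overall)."""
    return abs(measured - nominal) / nominal * 100.0
```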


    A new recursive neural network algorithm to forecast electricity price for PJM day-ahead market

    INTERNATIONAL JOURNAL OF ENERGY RESEARCH, Issue 6 2010
    Paras Mandal
    Abstract: This paper evaluates the usefulness of publicly available electricity market information in predicting hourly prices in the PJM day-ahead electricity market using a recursive neural network (RNN) technique based on a similar days (SD) approach. The RNN is a multi-step approach with a single output node, which uses the previous prediction as an input for subsequent forecasts. The forecasting performance of the proposed RNN model is compared with the SD method and with other published approaches. Different criteria are used to evaluate the accuracy of the proposed RNN approach in forecasting short-term electricity prices. Reasonably small values of mean absolute percentage error, mean absolute error and forecast mean square error (FMSE) were obtained for the PJM data, which has a coefficient of determination (R²) of 0.7758 between load and electricity price. Error variance, one of the important performance criteria, is also calculated in order to measure the robustness of the proposed RNN model. The numerical results obtained through simulation to forecast the next 24 and 72 h of electricity prices show that the forecasts generated by the proposed RNN model are significantly accurate and efficient, confirming that the proposed algorithm performs well for short-term price forecasting. Copyright © 2009 John Wiley & Sons, Ltd. [source]
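
    The recursive element of the approach, feeding each prediction back as an input for the next hour, together with the quoted error criteria, can be sketched as follows. The one-step model is left as a generic callable; the window handling and function names are assumptions.

```python
import numpy as np

def recursive_forecast(one_step_model, history, horizon=24):
    """Multi-step forecasting with a single-output model: each prediction is fed
    back as an input for the next step, as in the recursive scheme described.
    `one_step_model` is any callable mapping a lag window to the next hourly price."""
    lags = len(history)
    window = list(history)
    forecasts = []
    for _ in range(horizon):
        nxt = float(one_step_model(np.asarray(window[-lags:], float)))
        forecasts.append(nxt)
        window.append(nxt)
    return np.array(forecasts)

def mape(y, yhat):
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return np.mean(np.abs((y - yhat) / y)) * 100.0

def mae(y, yhat):
    return np.mean(np.abs(np.asarray(y, float) - np.asarray(yhat, float)))

def fmse(y, yhat):
    """Forecast mean square error, one of the criteria quoted in the abstract."""
    return np.mean((np.asarray(y, float) - np.asarray(yhat, float)) ** 2)
```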


    Sensitivity analysis of neural network parameters to improve the performance of electricity price forecasting

    INTERNATIONAL JOURNAL OF ENERGY RESEARCH, Issue 1 2009
    Paras Mandal
    Abstract: This paper presents a sensitivity analysis of neural network (NN) parameters to improve the performance of electricity price forecasting. The work extends previous studies by the authors that integrate an NN with a similar days (SD) method for predicting electricity prices. The focus here is on a sensitivity analysis of the NN parameters, while keeping the SD parameters fixed, to forecast day-ahead electricity prices in the PJM market. The NN parameters examined are the back-propagation learning set (BP-set), the learning rate, the momentum and the number of NN learning days (dNN). The SD parameters, i.e. the time framework of SD (d=45 days) and the number of selected similar price days (N=5), are kept constant for all simulated cases. Forecasting performance is evaluated by choosing two different days from each season of the year 2006; for the base case, the NN parameters are BP-set=500, learning rate=0.8, momentum=0.1 and dNN=45 days. The sensitivity analysis is carried out by changing the value of BP-set (500, 1000, 1500), learning rate (0.6, 0.8, 1.0, 1.2), momentum (0.1, 0.2, 0.3) and dNN (15, 30, 45 and 60 days). The most favorable value of BP-set is found first, followed by those of the learning rate and momentum, and on that basis the best value of dNN is determined. The results demonstrate that the best mean absolute percentage error (MAPE) is obtained with BP-set=500, learning rate=0.8, momentum=0.1 and dNN=60 days for the winter season; for spring, summer and autumn, these values are 500, 0.6, 0.1 and 45 days, respectively. Reasonably small values of MAPE, forecast mean square error and mean absolute error are obtained for the PJM data, which has a coefficient of determination (R²) of 0.7758 between load and electricity price. Numerical results show that the forecasts generated by the developed NN model for the most favorable case are accurate and efficient. Copyright © 2008 John Wiley & Sons, Ltd. [source]
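
    The parameter search described above can be sketched as a loop over the quoted grids, scored by MAPE. Note the paper tunes the parameters sequentially (BP-set first, then learning rate and momentum, then dNN); an exhaustive grid is used here only for brevity, and `train_and_forecast` is a hypothetical stand-in for the authors' NN training and forecasting step.

```python
import itertools
import numpy as np

# Parameter grids taken from the abstract.
BP_SET        = [500, 1000, 1500]      # back-propagation learning set
LEARNING_RATE = [0.6, 0.8, 1.0, 1.2]
MOMENTUM      = [0.1, 0.2, 0.3]
D_NN          = [15, 30, 45, 60]       # NN learning days

def sensitivity_analysis(train_and_forecast, actual_prices):
    """Return the lowest MAPE and the parameter set that produced it.
    `train_and_forecast` is a hypothetical callable returning a price forecast
    for a given parameter combination."""
    actual_prices = np.asarray(actual_prices, float)
    best_mape, best_params = np.inf, None
    for bp, lr, mom, d in itertools.product(BP_SET, LEARNING_RATE, MOMENTUM, D_NN):
        forecast = np.asarray(train_and_forecast(bp_set=bp, learning_rate=lr,
                                                 momentum=mom, d_nn=d), float)
        mape = np.mean(np.abs((actual_prices - forecast) / actual_prices)) * 100.0
        if mape < best_mape:
            best_mape, best_params = mape, dict(bp_set=bp, learning_rate=lr,
                                                momentum=mom, d_nn=d)
    return best_mape, best_params
```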


    Comparison of solar radiation correlations for İzmir, Turkey

    INTERNATIONAL JOURNAL OF ENERGY RESEARCH, Issue 5 2002
    K. Ulgen
    Abstract: In this study, empirical correlations are developed to estimate the monthly average daily global solar radiation on a horizontal surface (H) for the city of İzmir in Turkey. Experimental data were measured at the Solar-Meteorological Station of the Solar Energy Institute at Ege University. The present models are then compared with 25 models available in the literature for calculating H, on the basis of the mean percentage error, root mean square error, mean bias error and correlation coefficient. It can be concluded that the present models predict the values of H for İzmir better than the other available models. Copyright © 2002 John Wiley & Sons, Ltd. [source]
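
    The statistical indicators used to rank the correlations have standard definitions, sketched below; the exact formulations in the article may differ slightly (for example in the sign convention for the bias).

```python
import numpy as np

def solar_model_statistics(h_measured, h_estimated):
    """Indicators commonly used to rank solar-radiation correlations:
    mean percentage error (MPE), root mean square error (RMSE),
    mean bias error (MBE) and the correlation coefficient."""
    h_measured = np.asarray(h_measured, float)
    h_estimated = np.asarray(h_estimated, float)
    diff = h_estimated - h_measured
    return {
        "MPE_percent": np.mean(diff / h_measured) * 100.0,
        "RMSE": np.sqrt(np.mean(diff ** 2)),
        "MBE": np.mean(diff),
        "r": np.corrcoef(h_measured, h_estimated)[0, 1],
    }
```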


    Review and comparison of tropospheric scintillation prediction models for satellite communications

    INTERNATIONAL JOURNAL OF SATELLITE COMMUNICATIONS AND NETWORKING, Issue 4 2006
    P. Yu
    Abstract An overview of the origin and characteristics of tropospheric scintillation is presented and a measurement database against which scintillation models are to be tested is described. Maximum likelihood log-normal and gamma distributions are compared with the measured distribution of scintillation intensity. Eleven statistical models of monthly mean scintillation intensity are briefly reviewed and their predictions compared with measurements. RMS error, correlation, percentage error bias, RMS percentage error and percentage error skew are used in a comprehensive comparison of these models. In the context of our measurements, the ITU-R model has the best overall performance. Significant difference in the relative performance of the models is apparent when these results are compared with those from a similar study using data measured in Italy. Copyright © 2006 John Wiley & Sons, Ltd. [source]
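
    A sketch of the figures of merit named in the abstract, computed on paired measured and predicted monthly mean scintillation intensities. The percentage-error convention (prediction minus measurement, relative to the measurement) and the use of SciPy's sample skewness are assumptions about definitions the abstract does not spell out.

```python
import numpy as np
from scipy.stats import skew

def scintillation_model_scores(measured, predicted):
    """Figures of merit of the kind listed in the abstract, computed on paired
    monthly mean scintillation intensities; the paper's exact definitions may differ."""
    measured = np.asarray(measured, float)
    predicted = np.asarray(predicted, float)
    err = predicted - measured
    pct = err / measured * 100.0          # percentage error per month
    return {
        "rms_error": np.sqrt(np.mean(err ** 2)),
        "correlation": np.corrcoef(measured, predicted)[0, 1],
        "pct_error_bias": np.mean(pct),
        "rms_pct_error": np.sqrt(np.mean(pct ** 2)),
        "pct_error_skew": skew(pct),
    }
```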


    Correlation and agreement between the bispectral index vs. state entropy during hypothermic cardio-pulmonary bypass

    ACTA ANAESTHESIOLOGICA SCANDINAVICA, Issue 2 2010
    P. MEYBOHM
    Background: The bispectral index (BIS) and spectral entropy enable monitoring of the depth of anaesthesia. Mild hypothermia has been shown to affect the ability of electroencephalography monitors to reflect the anaesthetic drug effect. The purpose of this study was to investigate the effect of hypothermia during cardio-pulmonary bypass on the correlation and agreement between the BIS and entropy variables compared with normothermic conditions. Methods: This prospective clinical study included coronary artery bypass grafting patients (n=25) and evaluated correlation and agreement (Bland–Altman analysis) between the BIS and both state and response entropy during hypothermic cardio-pulmonary bypass (31–34 °C) compared with normothermic conditions (34–37.5 °C). Anaesthesia was maintained with propofol and sufentanil and adjusted clinically, while the anaesthetist was blinded to the monitors. Results: The BIS and entropy values decreased during cooling (P<0.05), but the decrease was more pronounced for the entropy variables than for BIS (P<0.05). The correlation coefficients (bias±SD; percentage error) between the BIS vs. state entropy and response entropy were r²=0.56 (1±11; 42%) and r²=0.58 (−2±11; 43%) under normothermic conditions, and r²=0.17 (10±12; 77%) and r²=0.18 (9±11; 68%) under hypothermic conditions, respectively. Bias was significantly increased under hypothermic conditions (P<0.001 vs. normothermia). Conclusion: Acceptable agreement was observed between the BIS and entropy variables under normothermic but not under hypothermic conditions. The BIS and entropy variables may therefore not be interchangeable during hypothermic cardio-pulmonary bypass. [source]
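
    Agreement statistics of the kind quoted above (bias, SD of the differences and percentage error) typically come from a Bland–Altman analysis in which the percentage error is 1.96 times the SD of the differences divided by the mean paired reading. That convention is an assumption here, and the sketch is not the authors' analysis code.

```python
import numpy as np

def bland_altman_agreement(bis_values, entropy_values):
    """Bias, SD of the differences, limits of agreement and percentage error for
    two depth-of-anaesthesia monitors. Percentage error is taken here as
    1.96*SD of the differences divided by the mean paired reading (a common
    convention; the paper's exact definition may differ)."""
    bis = np.asarray(bis_values, float)
    ent = np.asarray(entropy_values, float)
    diff = bis - ent
    bias = diff.mean()
    sd = diff.std(ddof=1)
    limits = (bias - 1.96 * sd, bias + 1.96 * sd)
    pct_error = 1.96 * sd / np.mean((bis + ent) / 2.0) * 100.0
    return bias, sd, limits, pct_error
```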


    Accuracy and reliability of continuous blood glucose monitor in post-surgical patients

    ACTA ANAESTHESIOLOGICA SCANDINAVICA, Issue 1 2009
    K. YAMASHITA
    Background: The STG-22™ is the only continuous blood glucose monitoring system currently available. The aim of this study was to determine the accuracy and reliability of the STG-22™ for continuously monitoring blood glucose levels in post-surgical patients. Methods: Fifty patients scheduled for routine surgery were studied in the surgical intensive care unit (ICU) of a university hospital. After admission to the ICU, the STG-22™ was connected to the patients. An attending physician obtained blood samples from a radial arterial catheter. Blood glucose level was measured using the ABL™800FLEX immediately after blood collection at 0, 4, 8, and 16 h post-admission to the ICU (a total of 200 blood glucose values). Results: The coefficient of determination (R²) was 0.96. In the Clarke error grid, 100% of the paired measurements were in the clinically acceptable zones A and B. The Bland–Altman analysis showed that bias±limits of agreement (percentage error) were 0.04 (0.7)±0.35 (6.3) mmol/l (mg/dl) (7%), −0.11 (−2)±1.22 (22) (15%) and −0.33 (−6)±1.28 (23) (10%) in hypoglycemia (<3.89 mmol/l (70 mg/dl)), normoglycemia (3.89–10 mmol/l (70–180 mg/dl)) and hyperglycemia (>10 mmol/l (180 mg/dl)), respectively. Conclusions: The STG-22™ can be used to measure blood glucose level continuously, and its results are consistent with intermittent measurements (percentage error within 15%). The STG-22™ is therefore a useful device for monitoring blood glucose level in the ICU for 16 h. [source]


    Upper bounds for single-source uncapacitated concave minimum-cost network flow problems

    NETWORKS: AN INTERNATIONAL JOURNAL, Issue 4 2003
    Dalila B. M. M. Fontes
    Abstract In this paper, we describe a heuristic algorithm based on local search for the Single-Source Uncapacitated (SSU) concave Minimum-Cost Network Flow Problem (MCNFP). We present a new technique for creating different and informed initial solutions to restart the local search, thereby improving the quality of the resulting feasible solutions (upper bounds). Computational results on different classes of test problems indicate the effectiveness of the proposed method in generating basic feasible solutions for the SSU concave MCNFP very near to a global optimum. A maximum upper bound percentage error of 0.07% is reported for all problem instances for which an optimal solution has been found by a branch-and-bound method. © 2003 Wiley Periodicals, Inc. [source]
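
    The 0.07% figure quoted above compares the heuristic's feasible-solution cost (an upper bound) against the optimum found by branch-and-bound; expressed as a percentage gap it amounts to the one-line sketch below.

```python
def upper_bound_percentage_error(heuristic_cost, optimal_cost):
    """Relative gap between a heuristic upper bound and the known optimum,
    expressed as a percentage (the measure quoted in the abstract)."""
    return (heuristic_cost - optimal_cost) / optimal_cost * 100.0
```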


    Resolution errors associated with gridded precipitation fields

    INTERNATIONAL JOURNAL OF CLIMATOLOGY, Issue 15 2005
    C. J. Willmott
    Abstract: Spatial-resolution errors are inherent in gridded precipitation (P) fields, such as those produced by climate models and from satellite observations, and they can be sizeable when P is averaged spatially onto a coarse grid. They can also vary dramatically over space and time. In this paper, we illustrate the importance of evaluating resolution errors associated with gridded P fields by investigating the relationships between grid resolution and resolution error for monthly P within the Amazon Basin. Spatial-resolution errors within gridded-monthly and average-monthly P fields over the Amazon Basin are evaluated for grid resolutions ranging from 0.1° to 5.0°. A resolution error occurs when P is estimated for a location of interest within a grid cell from the unbiased, grid-cell average P. Graphs of January, July and annual resolution errors versus resolution show that, at the higher resolutions (<3°), aggregation quickly increases resolution error. Resolution error then begins to level off as the grid becomes coarser. Within the Amazon Basin, the largest resolution errors occur during January (summer), but the largest percentage errors appear in July (winter). In January 1980, for example, resolution errors of 29, 52 and 65 mm (11, 19 and 24% of the grid-cell means) were estimated at resolutions of 1.0°, 3.0° and 5.0°. In July 1980, however, the percentage errors at these three resolutions were considerably larger: 15%, 27% and 33% of the grid-cell means. Copyright © 2005 Royal Meteorological Society [source]
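
    A minimal sketch of how such a resolution error can be measured on a gridded field: aggregate a fine-resolution precipitation array into coarser cells, then compare each fine-grid point with the average of the cell containing it. The array layout and the error summary (mean absolute error in the field's units and as a percentage of the field mean) are assumptions, not the authors' exact procedure.

```python
import numpy as np

def resolution_error(fine_field, block):
    """Aggregate a fine-resolution precipitation field into block x block cells
    and measure the error made when each point is represented by its grid-cell
    average. Returns the mean absolute error and the same error as a
    percentage of the field mean."""
    f = np.asarray(fine_field, float)
    ny, nx = (f.shape[0] // block) * block, (f.shape[1] // block) * block
    f = f[:ny, :nx]                                    # trim to whole cells
    coarse = f.reshape(ny // block, block, nx // block, block).mean(axis=(1, 3))
    cell_avg = np.repeat(np.repeat(coarse, block, axis=0), block, axis=1)
    abs_err = np.abs(f - cell_avg)
    return abs_err.mean(), abs_err.mean() / f.mean() * 100.0
```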


    NUTRIENT LOADING ASSESSMENT IN THE ILLINOIS RIVER USING A SYNTHETIC APPROACH

    JOURNAL OF THE AMERICAN WATER RESOURCES ASSOCIATION, Issue 4 2003
    Baxter E. Vieux
    ABSTRACT: A synthetic relationship is developed between nutrient concentrations and discharge rates at two river gauging sites in the Illinois River Basin. Analysis is performed on data collected by the U.S. Geological Survey (USGS) on nutrients in 1990 through 1997 and 1999 and on discharge rates in 1988 through 1997 and 1999. The Illinois River Basin is in western Arkansas and northeastern Oklahoma and is designated as an Oklahoma Scenic River. Consistently high nutrient concentrations in the river and receiving water bodies conflict with recreational water use, leading to intense stakeholder debate on how best to manage water quality. Results show that the majority of annual phosphorus (P) loading is transported by direct runoff, with high concentrations transported by high discharge rates and low concentrations by low discharge rates. A synthetic relationship is derived and used to generate daily phosphorus concentrations, laying the foundation for analysis of annual loading and evaluation of alternative management practices. Total nitrogen (N) concentration does not have as clear a relationship with discharge. Using a simple regression relationship, annual P loadings are estimated as having a root mean squared error (RMSE) of 39.8 t/yr and 31.9 t/yr and mean absolute percentage errors of 19 percent and 28 percent at Watts and Tahlequah, respectively. P is the limiting nutrient over the full range of discharges. Given that the majority of P is derived from Arkansas, management practices that control P would have the most benefit if applied on the Arkansas side of the border. [source]
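
    The synthetic concentration-discharge relationship and the load statistics quoted above can be sketched as follows. The log-log (power-law) form, the units and the variable roles are assumptions for illustration rather than the authors' fitted model.

```python
import numpy as np

def fit_rating_curve(discharge, concentration):
    """Log-log regression of phosphorus concentration on discharge, standing in
    for the synthetic concentration-discharge relationship described above."""
    b, log_a = np.polyfit(np.log(discharge), np.log(concentration), 1)
    return np.exp(log_a), b                      # C = a * Q**b

def annual_load_tonnes(daily_discharge_m3s, a, b):
    """Annual phosphorus load from synthetic daily concentrations.
    Units assumed: discharge in m^3/s, concentration in mg/L (= g/m^3)."""
    q = np.asarray(daily_discharge_m3s, float)
    conc = a * q ** b                            # g/m^3
    daily_load_kg = conc * q * 86400.0 / 1000.0  # (g/m^3)*(m^3/s)*(s/day) -> g/day -> kg/day
    return daily_load_kg.sum() / 1000.0          # tonnes per year

def mape(actual, estimated):
    """Mean absolute percentage error of annual load estimates, as reported."""
    actual, estimated = np.asarray(actual, float), np.asarray(estimated, float)
    return np.mean(np.abs((actual - estimated) / actual)) * 100.0
```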