Root-mean-square Error
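
Every abstract collected below reports model performance in terms of this statistic. For n predictions ŷᵢ paired with observations yᵢ, it is defined as:

```latex
\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_i - y_i\right)^{2}}
```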


Selected Abstracts


Quantitative Structure-Activity Relationship Study on Fish Toxicity of Substituted Benzenes

MOLECULAR INFORMATICS, Issue 8 2008
Zhiguo Gong
Abstract Many chemicals cause latent harm, such as disease and climate change, and it is therefore necessary to evaluate environmentally safe levels of dangerous chemicals. Quantitative Structure-Toxicity Relationship (QSTR) analysis has become an indispensable tool in ecotoxicological risk assessment. This paper uses QSTR to model the acute toxicity of 92 substituted benzenes. The molecular descriptors representing the structural features of the compounds were calculated with the CODESSA program. The Heuristic Method (HM) and Radial Basis Function Neural Networks (RBFNNs) were used to construct the linear and nonlinear QSTR models, respectively. The predicted results were in agreement with the experimental values. The optimal QSTR model, established with RBFNNs, gave correlation coefficients (R²) of 0.893, 0.876, and 0.889 and Root-Mean-Square Errors (RMSE) of 0.220, 0.205, and 0.218 for the training set, the test set, and the whole set, respectively. RBFNNs proved to be a very good method for assessing the acute aquatic toxicity of these compounds; more importantly, the RBFNN model established in this paper has fewer descriptors and better results than models reported in the previous literature. The current model allows a more transparent chemical interpretation of acute toxicity in terms of intermolecular interactions. [source]
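
As a minimal sketch of how the split-wise statistics quoted above are typically computed (function and variable names are illustrative, not taken from the paper):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error between observed and predicted toxicities."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.sqrt(np.mean((y_pred - y_true) ** 2))

def r_squared(y_true, y_pred):
    """Coefficient of determination; the paper's R^2 may be defined slightly
    differently (e.g. squared Pearson correlation)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical usage: report both statistics for each data split,
# mirroring the training/test/whole-set figures in the abstract.
# for name, (y, y_hat) in splits.items():
#     print(name, r_squared(y, y_hat), rmse(y, y_hat))
```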


Optimal designs for estimating penetrance of rare mutations of a disease-susceptibility gene

GENETIC EPIDEMIOLOGY, Issue 3 2003
Gail Gong
Abstract Many clinical decisions require accurate estimates of disease risks associated with mutations of known disease-susceptibility genes. Such risk estimation is difficult when the mutations are rare. We used computer simulations to compare the performance of estimates obtained from two types of designs based on family data. In the first (clinic-based designs), families are ascertained because they meet certain criteria concerning multiple disease occurrences among family members. In the second (population-based designs), families are sampled through a population-based registry of affected individuals called probands, with oversampling of probands whose families are more likely to segregate mutations. We generated family structures, genotypes, and phenotypes using models that reflect the frequencies and penetrances of mutations of the BRCA1/2 genes. We studied the effects of risk heterogeneity due to unmeasured, shared risk factors by including risk variation due to unmeasured genotypes of another gene. The simulations were chosen to mimic the ascertainment and selection processes commonly used in the two types of designs. We found that penetrance estimates from both designs are nearly unbiased in the absence of unmeasured shared risk factors, but are biased upward in the presence of such factors. The bias increases with increasing variation in risks across genotypes of the second gene. However, it is small compared to the standard error of the estimates. Standard errors from population-based designs are roughly twice those from clinic-based designs with the same number of families. Using the root-mean-square error as a measure of performance, we found that in all instances, the clinic-based designs gave more accurate estimates than did the population-based designs with the same numbers of families. Rough variance calculations suggest that clinic-based designs give more accurate estimates because they include more identified mutation carriers. Genet Epidemiol 24:173-180, 2003. © 2003 Wiley-Liss, Inc. [source]
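
The trade-off described above, a small upward bias against a roughly doubled standard error, is governed by the standard decomposition of an estimator's mean-square error (a textbook identity, not specific to this paper):

```latex
\mathrm{RMSE}(\hat{\theta}) \;=\; \sqrt{\mathbb{E}\!\left[(\hat{\theta}-\theta)^{2}\right]}
\;=\; \sqrt{\operatorname{bias}(\hat{\theta})^{2} + \operatorname{Var}(\hat{\theta})}
```

A design that roughly halves the standard error can therefore absorb a modest bias and still achieve the lower RMSE, consistent with the clinic-based designs winning in every instance studied.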


Predicting river water temperatures using the equilibrium temperature concept with application on Miramichi River catchments (New Brunswick, Canada)

HYDROLOGICAL PROCESSES, Issue 11 2005
Daniel Caissie
Abstract Water temperature influences most of the physical, chemical and biological properties of rivers. It plays an important role in the distribution of fish and the growth rates of many aquatic organisms. A better understanding of the thermal regime of rivers is therefore essential for the management of important fisheries resources. This study deals with the modelling of river water temperature using a new and simplified model based on the equilibrium temperature concept, an approach in which the net heat flux at the water surface is expressed by a simple equation with fewer meteorological parameters than traditional models require. The new water temperature model was applied to two watercourses of different size and thermal characteristics, but within a similar meteorological region: the Little Southwest Miramichi River and Catamaran Brook (New Brunswick, Canada). A study of the long-term thermal characteristics of these two rivers revealed that the greatest differences in water temperature occurred during mid-summer peak temperatures. Data from 1992 to 1994 were used for model calibration, and data from 1995 to 1999 for model validation. Results showed a slightly better agreement between observed and predicted water temperatures for Catamaran Brook during the calibration period, with a root-mean-square error (RMSE) of 1.10 °C (Nash coefficient, NTD = 0.95) compared to 1.45 °C for the Little Southwest Miramichi River (NTD = 0.94). During the validation period, RMSEs were 1.31 °C for Catamaran Brook and 1.55 °C for the Little Southwest Miramichi River. Model performance was generally poorer early in the season (e.g., spring) for both rivers owing to the influence of snowmelt, while performance from late summer to autumn was better. Copyright © 2005 John Wiley & Sons, Ltd. [source]
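
A minimal sketch of the generic equilibrium-temperature formulation described above; the coefficient value and function names are placeholders, not the authors' calibrated model:

```python
import numpy as np

def simulate_water_temperature(t_equilibrium, t_initial, k=0.2, dt=1.0):
    """Integrate dTw/dt = k * (Te - Tw): the net surface heat flux is taken
    to be proportional to the gap between the meteorologically driven
    equilibrium temperature Te and the water temperature Tw.

    t_equilibrium : sequence of equilibrium temperatures (degC), one per step
    t_initial     : initial water temperature (degC)
    k             : bulk exchange coefficient (1/day) -- placeholder value
    dt            : time step (days)
    """
    t_equilibrium = np.asarray(t_equilibrium, dtype=float)
    tw = np.empty_like(t_equilibrium)
    previous = t_initial
    for i, te in enumerate(t_equilibrium):
        previous = previous + k * (te - previous) * dt  # explicit Euler step
        tw[i] = previous
    return tw
```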


Improving extreme hydrologic events forecasting using a new criterion for artificial neural network selection

HYDROLOGICAL PROCESSES, Issue 8 2001
Paulin Coulibaly
Abstract The issue of selecting appropriate model input parameters is addressed using a peak and low flow criterion (PLC). The optimal artificial neural network (ANN) models selected using the PLC significantly outperform those identified with the classical root-mean-square error (RMSE) or the conventional Nash-Sutcliffe coefficient (NSC) statistics. The comparative forecast results indicate that the PLC can help to design an appropriate ANN model to improve extreme hydrologic events (peak and low flow) forecast accuracy. Copyright © 2001 John Wiley & Sons, Ltd. [source]
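
For context, the classical statistic that the PLC is benchmarked against alongside the RMSE can be sketched as follows; the PLC itself emphasizes errors at peak and low flows, and its exact definition is given in the paper, so it is not reproduced here:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Conventional Nash-Sutcliffe coefficient: 1 is a perfect fit, and
    values <= 0 mean the model is no better than predicting the mean of
    the observations."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    residual = np.sum((simulated - observed) ** 2)
    variance = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - residual / variance
```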


Near-optimum short-term fade prediction on satellite links at Ka and V-bands

INTERNATIONAL JOURNAL OF SATELLITE COMMUNICATIONS AND NETWORKING, Issue 1 2008
Andrew P. Chambers
Abstract Several short-term predictors of rain attenuation are implemented and tested using data recorded from a satellite link in Southern England, and a comparison is made in terms of the root-mean-square error and the cumulative distribution of under-predictions. A hybrid of an autoregressive moving average and adaptive linear element predictor is created that makes use of Gauss-Newton and gradient direction coefficient updates and exhibits the best prediction error performance of all prediction methods in the majority of cases. Copyright © 2007 John Wiley & Sons, Ltd. [source]
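
A rough sketch of an adaptive linear element with gradient-direction coefficient updates (a standard LMS rule); the paper's hybrid additionally includes an ARMA component and Gauss-Newton updates that are not shown here, and the step size below is an arbitrary placeholder:

```python
import numpy as np

def adaline_one_step_predictor(series, order=4, mu=0.05):
    """Predict each sample of `series` one step ahead with a linear filter
    whose coefficients are nudged along the gradient of the squared
    prediction error after every sample (LMS update). For real attenuation
    data, mu would need scaling or a normalized variant."""
    series = np.asarray(series, dtype=float)
    w = np.zeros(order)
    predictions = np.zeros_like(series)
    for t in range(order, len(series)):
        x = series[t - order:t][::-1]      # most recent samples first
        predictions[t] = w @ x             # one-step-ahead prediction
        error = series[t] - predictions[t]
        w += mu * error * x                # gradient-direction update
    return predictions
```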


Feasibility of k-t BLAST technique for measuring "seven-dimensional" fluid flow

JOURNAL OF MAGNETIC RESONANCE IMAGING, Issue 2 2006
Ian Marshall PhD
Abstract Purpose To investigate the feasibility of rapid MR measurement of "seven-dimensional" (three velocity components, three dimensions, and time) fluid flow using the k-t Broad-use Linear Acquisition Speed-Up Technique (BLAST). Materials and Methods Complete k-space data were acquired for pulsatile fluid flow in a model of a stenosed carotid bifurcation. The data were subsampled to simulate "training" and "accelerated acquisition" data for reconstruction using k-t BLAST. Results Flow waveforms estimated from k-t BLAST reconstructions were in good agreement with those measured from the full data set for overall speedup factors of up to approximately four when slice-by-slice undersampling in ky was used. Accuracy was better than 25 mm/second or 7% (root-mean-square error) for individual time frames under these conditions. Flow patterns in the plane of symmetry, near the bifurcation, and in the stenosis were also in good agreement with those reconstructed from the full data set. Improved performance was obtained from undersampling in both ky and kz, with acceleration factors of up to 12 giving acceptable results. Conclusion The k-t BLAST technique can be applied to flow quantification, and may make feasible the acquisition of time-resolved blood flow from extended arterial regions within acceptable examination times. J. Magn. Reson. Imaging 2006. © 2006 Wiley-Liss, Inc. [source]
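
A rough sketch of the kind of retrospective ky undersampling described above, simulating both the accelerated acquisition and the densely sampled training data from a complete data set; array layout, acceleration factor and training-band width are illustrative assumptions, not the study's protocol:

```python
import numpy as np

def undersample_ky(kspace, accel=4, training_lines=11):
    """Keep every `accel`-th ky line (simulated accelerated acquisition) plus
    a fully sampled central band (simulated low-resolution training data).
    `kspace` is a complex array with ky along axis 0."""
    mask = np.zeros(kspace.shape[0], dtype=bool)
    mask[::accel] = True                              # regular undersampling
    centre = kspace.shape[0] // 2
    half = training_lines // 2
    mask[centre - half:centre + half + 1] = True      # central training band
    subsampled = np.zeros_like(kspace)
    subsampled[mask] = kspace[mask]
    return subsampled, mask
```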


Nonlinear quantitative structure-property relationship modeling of skin permeation coefficient

JOURNAL OF PHARMACEUTICAL SCIENCES, Issue 11 2009
Brian J. Neely
Abstract The permeation coefficient characterizes the ability of a chemical to penetrate the dermis, and the current study describes our efforts to develop structure-based models for the permeation coefficient. Specifically, we have integrated nonlinear quantitative structure-property relationship (QSPR) models, genetic algorithms (GAs), and neural networks to develop a reliable model. Case studies were conducted to investigate the effects of structural attributes on permeation using a carefully characterized database. Upon careful evaluation, a permeation coefficient data set consisting of 333 data points for 258 molecules was identified, and these data were added to our extensive thermophysical database. Of these data, permeation values for 160 molecular structures were deemed suitable for our modeling efforts. We employed established descriptors and constructed new descriptors to aid the development of a reliable QSPR model for the permeation coefficient. Overall, our new nonlinear QSPR model had an absolute-average percentage deviation, root-mean-square error, and correlation coefficient of 8.0%, 0.34, and 0.93, respectively. Cause-and-effect analysis of the structural descriptors obtained in this study indicates that three size/shape descriptors and two polarity descriptors accounted for ~70% of the permeation information conveyed by the descriptors. © 2009 Wiley-Liss, Inc. and the American Pharmacists Association J Pharm Sci 98:4069-4084, 2009 [source]
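
The headline error statistic quoted above can be sketched as follows, using the common definition of absolute-average percentage deviation (the paper may differ in detail):

```python
import numpy as np

def absolute_average_percentage_deviation(y_true, y_pred):
    """AAPD in percent: the mean magnitude of the relative deviation
    between predicted and experimental permeation coefficients."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return 100.0 * np.mean(np.abs((y_pred - y_true) / y_true))
```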


The value of observations.

THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 628 2007
III: Influence of weather regimes on targeting
Abstract This paper assesses the value of targeted observations over the North Atlantic Ocean for different meteorological flow regimes. It shows that during tropical cyclone activity, and particularly during the transition of tropical cyclones to extratropical characteristics, removing observations in sensitive regions, indicated by singular vectors optimized on the 2-day forecast over Europe, degrades the skill of a given forecast more than excluding observations in randomly selected regions. The maximum downstream degradation, computed in terms of the spatially and temporally averaged root-mean-square error of 500 hPa geopotential height, is about 13%, a value six times larger than when removing observations in randomly selected areas. The forecast impact for these selected periods, resulting from degrading the observational coverage in sensitive areas, was similar to the impact found (in other weather forecast systems) for the observational targeting campaigns carried out over recent years, and it was larger than the average impact obtained by considering a larger set of cases covering various seasons. Copyright © 2007 Royal Meteorological Society [source]
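
The verification statistic quoted above, a spatially and temporally averaged RMSE of 500 hPa geopotential height, is conventionally computed with cosine-of-latitude area weighting; a minimal sketch under that assumption:

```python
import numpy as np

def z500_rmse(forecast, analysis, lats_deg):
    """Area-weighted RMSE between forecast and verifying analysis fields of
    500 hPa geopotential height; arrays have shape (time, lat, lon)."""
    weights = np.cos(np.deg2rad(np.asarray(lats_deg)))[None, :, None]
    squared_error = (forecast - analysis) ** 2
    weights = np.broadcast_to(weights, squared_error.shape)
    return np.sqrt(np.average(squared_error, weights=weights))
```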


The economic value of ensemble forecasts as a tool for risk assessment: From days to decades

THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 581 2002
T. N. Palmer
Abstract Despite the revolutionary development of numerical weather and climate prediction (NWCP) in the second half of the last century, quantitative interaction between model developers and forecast customers has been rather limited. This is apparent in the diverse ways in which weather forecasts are assessed by these two groups: root-mean-square error of 500 hPa height on the one hand; pounds, euros or dollars saved on the other. These differences of approach are changing with the development of ensemble forecasting. Ensemble forecasts provide a quantitative tool for the assessment of weather and climate risk for a range of user applications, and on a range of time-scales, from days to decades. Examples of commercial applications of ensemble forecasting, in electricity generation, ship routeing, pollution modelling, weather-risk finance, disease prediction and crop-yield modelling, are shown across these time-scales. A generic user decision model is described that allows one to assess the potential economic value of numerical weather and climate forecasts for a range of customers. Using this, it is possible to relate potential economic value analytically to conventional meteorological skill scores. A generalized meteorological measure of forecast skill is proposed which takes the distribution of customers into account. It is suggested that, when customers' exposure to weather or climate risk can be quantified, such generalized measures of skill should be used in assessing the performance of an operational NWCP system. Copyright © 2002 Royal Meteorological Society. [source]
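
The generic user decision model referred to above is commonly formulated as a static cost-loss problem; a minimal sketch under that standard formulation (not necessarily the paper's exact parametrization):

```python
def economic_value(hit_rate, false_alarm_rate, climatological_freq, cost_loss_ratio):
    """Relative economic value for a user who can pay cost C to avert loss L
    (alpha = C/L): 0 means no better than climatology, 1 matches a perfect
    forecast. Expenses are expressed in units of L."""
    a, s = cost_loss_ratio, climatological_freq
    expense_climate = min(a, s)                    # always protect vs never protect
    expense_perfect = s * a                        # protect exactly when needed
    expense_forecast = (s * hit_rate * a           # hits: protection cost
                        + s * (1 - hit_rate)       # misses: full loss
                        + (1 - s) * false_alarm_rate * a)  # false alarms
    return (expense_climate - expense_forecast) / (expense_climate - expense_perfect)

# Hypothetical usage: a user with C/L = 0.2 facing an event with 30% climatological
# frequency, served by a system with hit rate 0.8 and false alarm rate 0.1.
# print(economic_value(0.8, 0.1, 0.3, 0.2))
```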


Empirical models of UV total radiation and cloud effect study

INTERNATIONAL JOURNAL OF CLIMATOLOGY, Issue 9 2010
David Mateos Villán
Abstract Several empirical models of hourly ultraviolet total radiation (UVT) are proposed in this study. Measurements of UVT radiation, 290-385 nm, were recorded at ground level from February 2001 to June 2008 in Valladolid, Spain (latitude 41°40′N, longitude 4°50′W, 840 m a.s.l.). The empirical models address the absence of some radiometric variables at measuring stations, since good estimates of these variables can be obtained from quantities routinely measured at such stations. Advantages of the empirical models are that they allow the estimation of past missing data in the database and the forecasting of future ultraviolet solar availability. In this study, models reported in the literature have been assessed and recalibrated. New expressions are proposed that yield hourly values of ultraviolet radiation from global radiation measurements and parameters such as the clearness index and relative optical air mass. The accuracy of these models has been assessed with the following statistical indices: mean bias, mean absolute bias, and root-mean-square errors, whose values are close to zero, below 7%, and below 10%, respectively. Two new clear-sky models have been used to evaluate two new parameters, the ultraviolet and global cloud modification factors, which help clarify the role of clouds in solar radiation. The ultraviolet cloud modification factor depends on cloudiness in such a way that its value under overcast skies is half that under cloudless skies. Exponential and power-law fits provide the best relationships between the two cloud factors. Finally, these parameters have been used to build new UV empirical models which show low values of the statistical indices mentioned above. Copyright © 2009 Royal Meteorological Society [source]
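
Cloud modification factors are conventionally defined as the ratio of the measured irradiance to the modelled clear-sky irradiance for the same solar geometry; a minimal sketch under that common definition:

```python
import numpy as np

def cloud_modification_factor(measured, clear_sky):
    """CMF = measured / clear-sky irradiance: ~1 under cloudless skies; the
    abstract reports the UV CMF falling to about half under overcast skies."""
    return np.asarray(measured, dtype=float) / np.asarray(clear_sky, dtype=float)
```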


The ECMWF operational implementation of four-dimensional variational assimilation.

THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 564 2000
II: Experimental results with improved physics
Abstract A comprehensive set of physical parametrizations has been linearized for use in the European Centre for Medium-Range Weather Forecasts' (ECMWF) incremental four-dimensional variational (4D-Var) system described in Part I. The following processes are represented: vertical diffusion, subgrid-scale orographic effects, large-scale precipitation, deep moist convection and long-wave radiation. The tangent-linear approximation is examined for finite-size perturbations. Significant improvements are illustrated for surface wind and specific humidity with respect to a simplified vertical diffusion scheme. Singular vectors computed over 6 hours (compatible with the 4D-Var assimilation window) have lower amplification rates when the improved physical package is included, owing to a more realistic description of dissipative processes, even though latent-heat release amplifies the potential energy of perturbations in rainy areas. A direct consequence is a larger value of the observation term of the cost function at the end of the minimization when the improved physics is included in 4D-Var. However, the larger departure of the analysis state from observations in the lower-resolution inner loop is in better agreement with the behaviour of the full nonlinear model at high resolution. More precisely, the improved physics produces smaller discontinuities in the value of the cost function when going from low to high resolution. To reduce the computational cost of the linear physics, a new configuration of the incremental 4D-Var system using two outer loops is defined. In the first outer loop, a minimization is performed at low resolution with simplified physics (50 iterations); in the second, another minimization is performed with improved physics (20 iterations) after an update of the model trajectory at high resolution. In this configuration the extra cost of the physics is only 25%, and results from a 2-week assimilation period show positive impacts on forecast quality in the Tropics (reduced spin-down of precipitation, lower root-mean-square errors in wind scores). This 4D-Var configuration with improved physics and two outer loops was implemented operationally at ECMWF in November 1997. [source]
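
For reference, the incremental 4D-Var formulation referred to throughout minimizes a quadratic cost function in the analysis increment δx; the notation below is the standard one, not reproduced from the paper:

```latex
J(\delta\mathbf{x}) \;=\; \tfrac{1}{2}\,\delta\mathbf{x}^{\mathrm{T}}\mathbf{B}^{-1}\,\delta\mathbf{x}
\;+\; \tfrac{1}{2}\sum_{i}\bigl(\mathbf{H}_{i}\mathbf{M}_{i}\,\delta\mathbf{x}-\mathbf{d}_{i}\bigr)^{\mathrm{T}}
\mathbf{R}_{i}^{-1}\bigl(\mathbf{H}_{i}\mathbf{M}_{i}\,\delta\mathbf{x}-\mathbf{d}_{i}\bigr)
```

Here B and R_i are the background- and observation-error covariances, H_i and M_i are the linearized observation operator and forecast model (the linearized physics discussed above enters through M_i), and d_i is the innovation in each time window.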