Model Improvements
Selected Abstracts

Simultaneous Controllability of PSD and MWD in Emulsion Polymerisation
MACROMOLECULAR REACTION ENGINEERING, Issue 5 2008. Stephen J. Sweetman

Abstract: A sensitivity study of particle size distribution (PSD) and molecular weight distribution (MWD) responses to perturbations in initiator, surfactant, monomer and chain transfer agent in a semi-batch emulsion polymerisation is presented. The objective is to provide a systematic study of the ability to simultaneously control both PSD and MWD, towards inferential control of end-use product properties and identification of the practically feasible regions of operability. All inputs appeared to have an intrinsic and simultaneous influence on end-time PSD and MWD. Trends in the experimental results have been explained mechanistically and compared with simulation results from a combined PSD/MWD population balance model. The preliminary comparison between experiment and simulation highlights the areas to be focussed on with respect to model improvement. [source]

Application of normal-mode refinement to X-ray crystal structures at the lower resolution limit
ACTA CRYSTALLOGRAPHICA SECTION D, Issue 7 2009. Fengyun Ni

The structural refinement of large complexes at the lower resolution limit is often difficult and inefficient owing to the limited number of reflections and the frequently high degree of structural flexibility. A new normal-mode-based X-ray crystallographic refinement method has recently been developed that enables anisotropic B-factor refinement using a drastically smaller number of thermal parameters than even isotropic refinement. Here, the method has been systematically tested on a total of eight systems in the resolution range 3.0–3.9 Å. This series of tests established the most applicable scenarios for the method, the detailed procedures for its application and the degree of structural improvement.
The results demonstrated substantial model improvement at the lower resolution limit, especially in cases in which other methods such as the translation–libration–screw (TLS) model were not applicable owing to the poorly converged isotropic B-factor distribution. It is expected that this normal-mode-based method will be a useful tool for structural refinement, in particular at the lower resolution limit, in the field of X-ray crystallography. [source]

Cluster Detection Based on Spatial Associations and Iterated Residuals in Generalized Linear Mixed Models
BIOMETRICS, Issue 2 2009. Tonglin Zhang

Summary: Spatial clustering is commonly modeled by a Bayesian method under the framework of generalized linear mixed-effects models (GLMMs). Spatial clusters are commonly detected by a frequentist method through hypothesis testing. In this article, we provide a frequentist method for assessing spatial properties of GLMMs. We propose a strategy that detects spatial clusters through parameter estimates of spatial associations, and assesses spatial aspects of model improvement through iterated residuals. Simulations and a case study show that the proposed method consistently and efficiently detects the locations and magnitudes of spatial clusters. [source]

Simulation of ice phenology on Great Slave Lake, Northwest Territories, Canada
HYDROLOGICAL PROCESSES, Issue 18 2002. Patrick Ménard

Abstract: A one-dimensional thermodynamic lake ice model (Canadian Lake Ice Model, or CLIMo) is used to simulate ice phenology on Great Slave Lake (GSL) in the Mackenzie River basin, Northwest Territories, Canada. Model simulations are validated against freeze-up and break-up dates, as well as ice thickness and on-ice snow depth measurements made in situ at three sites on GSL (Back Bay near Yellowknife, 1960–91; Hay River, 1965–91; Charlton Bay near Fort Reliance, 1977–90).
Freeze-up and break-up dates from the lake ice model are also compared with those derived from SSM/I 85 GHz passive microwave imagery over the entire lake surface (1988–99). Results show very good agreement between observed and simulated ice thickness and freeze-up/break-up dates over the 30–40 years of observations, particularly for the Back Bay and Hay River sites. CLIMo simulates the ice thickness and annual freeze-up/break-up dates with mean errors of 7 cm and 4 days, respectively. However, some limitations have been identified regarding the rather simplistic approach used to characterize the temporal evolution of snow cover on ice. Future model improvements will therefore focus on this particular aspect, through linkage or coupling to a snow model. Copyright © 2002 John Wiley & Sons, Ltd. [source]

Downscaling temperature and precipitation: a comparison of regression-based methods and artificial neural networks
INTERNATIONAL JOURNAL OF CLIMATOLOGY, Issue 7 2001. J.T. Schoof

Abstract: A comparison of two statistical downscaling methods for daily maximum and minimum surface air temperature, total daily precipitation and total monthly precipitation at Indianapolis, IN, USA, is presented. The analysis is conducted for two seasons, the growing season and the non-growing season, defined on the basis of the variability of surface air temperature. The predictors used in the downscaling are indices of the synoptic-scale circulation derived from rotated principal components analysis (PCA) and cluster analysis of variables extracted from an 18-year record from seven rawinsonde stations in the Midwest region of the United States. PCA yielded seven significant components for the growing season and five for the non-growing season, explaining 86% and 83% of the variance in the original rawinsonde data, respectively.
Cluster analysis of the PC scores using the average linkage method resulted in eight growing-season synoptic types and twelve non-growing-season synoptic types. The downscaling of temperature and precipitation is conducted using PC scores and cluster frequencies in regression models and artificial neural networks (ANNs). Regression models and ANNs yielded similar results, but the data for each regression model violated at least one of the assumptions of regression analysis. As expected, the accuracy of the downscaling models for temperature was superior to that for precipitation. The accuracy of all temperature models was improved by adding an autoregressive term, which also changed the relative importance of the dominant anomaly patterns as manifest in the PC scores. Application of the transfer functions to model daily maximum and minimum temperature data from an independent time series resulted in correlation coefficients of 0.34–0.89. In accord with previous studies, the precipitation models exhibited lower predictive capability: the correlation coefficient for predicted versus observed daily precipitation totals was less than 0.5 for both seasons, while that for monthly totals was below 0.65. The downscaling techniques are discussed in terms of model performance, comparison of techniques and possible model improvements. Copyright © 2001 Royal Meteorological Society [source]

Horizontal resolution impact on short- and long-range forecast error
THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 649 2010. Roberto Buizza

Abstract: The impact of horizontal resolution increases from spectral truncation T95 to T799 on the error growth of ECMWF forecasts is analysed. Attention is focused on instantaneous, synoptic-scale features represented by the 500 and 1000 hPa geopotential height and the 850 hPa temperature.
Error growth is investigated by applying a three-parameter model, and improvements in forecast skill are assessed by computing the time limits at which given fractions of the asymptotic forecast-error value are reached. Forecasts are assessed both in a realistic framework against T799 analyses, and in a perfect-model framework against T799 forecasts. A strong sensitivity of the skill of instantaneous forecasts to model resolution has been found in the short forecast range (up to about forecast day 3). This sensitivity becomes weaker in the medium range (around forecast day 7) and is undetectable in the long forecast range. Considering the predictability of ECMWF operational, high-resolution T799 forecasts of the 500 hPa geopotential height verified in the realistic framework over the Northern Hemisphere (NH), the long-range time limit τ(95%) is 15.2 days, one day shorter than the limit computed in the perfect-model framework. For the 850 hPa temperature verified in the realistic framework, τ(95%) is 16.6 days over the NH (cold season), 14.1 days over the Southern Hemisphere (warm season) and 20.6 days over the Tropics. Although past resolution increases have continuously improved forecasts, especially in the short forecast range, this investigation suggests that further increases in resolution will improve forecast skill in the short and medium range, but that resolution increases alone, without model improvements, would bring only very limited gains in the long forecast range.
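The time-limit calculation described in this abstract can be sketched numerically. The abstract does not give the functional form of the three-parameter error-growth model, so the sketch below assumes a common Lorenz-type choice, dE/dt = (a·E + b)(1 − E/E_inf), with purely illustrative parameter values; the asymptote E_inf and the threshold fraction play the same roles as in the abstract's τ(95%).

```python
import numpy as np

def error_growth(t, a, b, e_inf, e0=0.0):
    """Integrate dE/dt = (a*E + b)*(1 - E/e_inf) with forward Euler on grid t.
    This Lorenz-type form is an assumed stand-in for the paper's model."""
    e = np.empty_like(t)
    e[0] = e0
    dt = t[1] - t[0]
    for i in range(1, len(t)):
        e[i] = e[i - 1] + dt * (a * e[i - 1] + b) * (1.0 - e[i - 1] / e_inf)
    return e

def time_limit(t, e, e_inf, fraction=0.95):
    """First forecast time at which E(t) reaches `fraction` of its asymptote."""
    hit = np.flatnonzero(e >= fraction * e_inf)
    return t[hit[0]] if hit.size else np.inf

t = np.linspace(0.0, 30.0, 3001)                  # forecast time (days)
e = error_growth(t, a=0.35, b=2.0, e_inf=110.0)   # illustrative parameters
tau95 = time_limit(t, e, e_inf=110.0)             # analogue of tau(95%)
```

Fitting a, b and E_inf to the mean forecast-error curves of two model resolutions, and comparing the resulting τ(95%) values, reproduces the kind of comparison made in the abstract.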
Copyright © 2010 Royal Meteorological Society [source]

Influence of the Quasi-Biennial Oscillation on the ECMWF model short-range-forecast errors in the tropical stratosphere
THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 628 2007. Nedjeljka Žagar

Abstract: This paper addresses the impact of the Quasi-Biennial Oscillation (QBO) on the background-error covariances in the tropical atmosphere of the ECMWF model. The tropical short-range-forecast-error covariances are represented in terms of equatorial waves coupled to convection. By comparing forecast-error proxy data from two different phases of the QBO, it is shown that the phase of the QBO affects the distribution of tropical forecast-error variance among the various equatorial waves. The influence of the QBO is limited to the stratospheric levels between 50 hPa and 5 hPa. In the easterly QBO phase, the percentage of error variance in Kelvin waves is significantly greater than in the westerly phase. In the westerly phase, westward-propagating inertio-gravity waves become more important, at the expense of Kelvin modes, eastward-propagating mixed Rossby–gravity waves and inertio-gravity modes. Comparison of datasets from two easterly phases shows that the maxima of stratospheric error variance in the various equatorial modes follow the theory of the interaction of waves with descending shear zones of the horizontal wind. Single-observation experiments illustrate an impact of the phase of the QBO on stratospheric analysis increments, mostly seen in the balanced geopotential field. Idealized 3D-Var assimilation experiments suggest that background-error statistics from the easterly QBO period are on average more useful for multivariate variational assimilation, as a consequence of stronger mass-wind coupling due to the increased contribution of Kelvin waves in the easterly phase.
By comparing the tropical forecast errors in two operational versions of the model a few years apart, it is also shown that recent model improvements, primarily in the model physics, have substantially reduced the errors in both wind and geopotential throughout the tropical atmosphere. In particular, increased wind-field errors associated with the intertropical convergence zone have been removed. Consequently, the ability of the applied background-error model to represent the error fields has improved. Copyright © 2007 Royal Meteorological Society [source]

Forcing Function Diagnostics for Nonlinear Dynamics
BIOMETRICS, Issue 3 2009. Giles Hooker

Summary: This article investigates the problem of model diagnostics for systems described by nonlinear ordinary differential equations (ODEs). I propose modeling lack of fit as a time-varying correction to the right-hand side of a proposed differential equation. This correction can be described as a set of additive forcing functions, estimated from data. Representing lack of fit in this manner allows us to graphically investigate model inadequacies and to suggest model improvements. I derive lack-of-fit tests based on estimated forcing functions. Model building in partially observed systems of ODEs is particularly difficult, and I consider the problem of identifying forcing functions in such systems. The methods are illustrated with examples from computational neuroscience. [source]
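The core idea of the forcing-function diagnostic in the last abstract can be illustrated on a hypothetical one-dimensional system: if the proposed model is x' = f(x) but the data were generated by x' = f(x) + g(t), the lack-of-fit forcing can be estimated as ĝ(t) = dx/dt − f(x). The article itself uses smoothing-based estimation on systems of ODEs; the crude finite-difference sketch below, with an assumed model x' = −x and an unmodelled input sin(t), shows only the principle.

```python
import numpy as np

# "Observed" trajectory: simulate x' = -x + sin(t), i.e. the proposed
# model x' = -x plus an unmodelled additive input sin(t) (hypothetical example).
dt = 0.001
t = np.arange(0.0, 20.0, dt)
x = np.empty_like(t)
x[0] = 0.0
for i in range(1, len(t)):
    x[i] = x[i - 1] + dt * (-x[i - 1] + np.sin(t[i - 1]))

# Diagnostic: estimated forcing = empirical derivative minus the proposed
# model's right-hand side evaluated on the data.
g_hat = np.gradient(x, dt) - (-x)
```

Plotted against t, g_hat closely tracks sin(t), pointing directly at the missing term; a formal lack-of-fit test, as derived in the article, would then ask whether the estimated forcing is distinguishable from zero.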