Resulting Model


Selected Abstracts


Post-earthquake bridge repair cost and repair time estimation methodology

EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 3 2010
Kevin R. Mackie
Abstract While structural engineers have traditionally focused on individual components (bridges, for example) of transportation networks for design, retrofit, and analysis, it has become increasingly apparent that the economic costs to society after extreme earthquake events stem at least as much from indirect costs as from direct costs to individual structures. This paper describes an improved methodology for developing probabilistic estimates of repair costs and repair times that can be used for evaluating the performance of new bridge design options and existing bridges in preparation for the next major earthquake. The proposed approach improves on previous bridge loss modeling studies: it is based on a local linearization of the dependence between repair quantities and damage states, so that the resulting model follows a linear relationship between damage states and repair quantities. The methodology uses the concept of performance groups (PGs) that account for damage and repair of individual bridge components and subassemblies. The method is validated using two simple examples that compare the proposed method to simulation and to previous methods based on loss models with a power-law relationship between repair quantities and damage. In addition, the method is illustrated with a complete study of the performance of a common five-span overpass bridge structure in California. Intensity-dependent repair cost ratios (RCRs) and repair times are calculated using the proposed approach, along with plots that show the disaggregation of repair cost by repair quantity and by PG. This provides the decision maker with higher-fidelity data when evaluating the contribution of different bridge components to the performance of the bridge system, where performance is evaluated in terms of repair costs and repair times rather than traditional engineering quantities such as displacements and stresses. 
Copyright © 2009 John Wiley & Sons, Ltd. [source]
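The PG-based aggregation described in the abstract can be sketched in a few lines. All PG names, damage states, unit costs, linear repair-quantity coefficients and the replacement cost below are invented for illustration; the paper's actual values come from bridge-specific damage and repair data.

```python
# Each PG maps a discrete damage state (0 = none .. 3 = severe) to a repair
# quantity via a locally linear relationship: quantity = slope * damage_state.
PERFORMANCE_GROUPS = {
    # name: (slope in repair units per damage state, unit cost in $)
    "columns":          (40.0, 1500.0),   # e.g. m^3 of concrete repair
    "abutments":        (25.0,  800.0),
    "expansion_joints": (10.0, 2000.0),
}

REPLACEMENT_COST = 500_000.0  # assumed total bridge replacement cost ($)

def repair_cost_ratio(damage_states):
    """Aggregate repair cost over PGs and normalize by replacement cost."""
    total = 0.0
    for pg, state in damage_states.items():
        slope, unit_cost = PERFORMANCE_GROUPS[pg]
        quantity = slope * state          # locally linear repair quantity
        total += quantity * unit_cost
    return total / REPLACEMENT_COST

# Disaggregation by PG for a hypothetical moderate-damage scenario.
scenario = {"columns": 2, "abutments": 1, "expansion_joints": 3}
rcr = repair_cost_ratio(scenario)
```

In the paper the damage states themselves are probabilistic and intensity-dependent; this sketch only shows the deterministic cost-aggregation step.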


Space-time modeling of rainfall data

ENVIRONMETRICS, Issue 6 2004
Luis Guillermo Coca Velarde
Abstract Climate variables assume non-negative values and are often measured as zero. This is the case when the rainfall level, in the dry season, is measured at a specified place. The stochastic modeling then demands the inclusion of a probability mass point at the zero level, and the resulting model is a mixture of a continuous distribution and a Bernoulli distribution. In this article, spatial conditional autoregressive effects, capturing the idea that neighbors present similar responses, are considered, and the response level is modeled in two stages. The aim is to perform spatial interpolation and prediction of levels in a Bayesian context. Data on weekly rainfall levels measured at different stations in the central region of Brazil, an area with two well-marked seasons, are used as an example. A method for comparing models, based on the deviance function, is also implemented. The main conclusion is that the use of space-time models improves the modeling of hydrological and climatological variables, allowing the inclusion of real-life considerations such as the influence of other covariates, spatial dependence and time effects such as seasonality. Copyright © 2004 John Wiley & Sons, Ltd. [source]
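The two-stage "mass point at zero" idea can be made concrete with a minimal likelihood sketch: a Bernoulli component for rain/no-rain occurrence and a continuous density for positive amounts. The exponential density and all parameter values below are illustrative stand-ins, not the article's fitted model.

```python
import math

def zero_inflated_loglik(data, p_rain, rate):
    """Log-likelihood of a Bernoulli(p_rain) x Exponential(rate) mixture.

    P(y = 0)       = 1 - p_rain           (dry week: point mass at zero)
    p(y) for y > 0 = p_rain * rate * exp(-rate * y)
    """
    ll = 0.0
    for y in data:
        if y == 0.0:
            ll += math.log(1.0 - p_rain)
        else:
            ll += math.log(p_rain) + math.log(rate) - rate * y
    return ll

# Weekly rainfall levels (mm); zeros are dry-season observations.
weeks = [0.0, 12.3, 0.0, 5.1, 0.0, 20.4]
ll = zero_inflated_loglik(weeks, p_rain=0.5, rate=0.1)
```

In the article this likelihood is embedded in a Bayesian hierarchy with conditional autoregressive spatial effects; the sketch shows only the mixture structure that the zeros force on the model.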


Plant species richness of nature reserves: the interplay of area, climate and habitat in a central European landscape

GLOBAL ECOLOGY, Issue 4 2002
Petr Py
Abstract Aim To detect regional patterns of plant species richness in temperate nature reserves and to determine the effects of environmental variables unbiased by their mutual correlations with other operating factors. Location The Czech Republic. Methods Plant species richness in 302 nature reserves was studied by using 14 explanatory variables reflecting the reserve area, altitude, climate, habitat diversity and prevailing vegetation type. Backward elimination of explanatory variables, taking into account their interactive nature, was used to analyse the data until the model contained only significant terms. Results A minimal adequate model with reserve area, mean altitude, prevailing vegetation type and habitat diversity (expressed as the number of major habitat types in the reserve) accounted for 53.9% of the variance in species number. After removing the area effect, habitat diversity explained 15.6% of the variance, while prevailing vegetation type explained 29.6%. After removing the effects of both area and vegetation type, the resulting model explained 10.3% of the variance, indicating that species richness further increased with habitat diversity, and most markedly towards warm districts. After removing the effects of area, habitat diversity and climatic district, the model still explained 9.4% of the variance, and showed that species richness (i) significantly decreased with increasing mean altitude and annual precipitation, and with decreasing January temperature, in the region of the mountain flora, and (ii) increased with altitudinal range in regions of temperate and thermophilous flora. Main conclusions We described, in quantitative terms, the effects of the main factors that may determine plant species richness in temperate nature reserves, and evaluated their relative importance. 
The direct habitat effect on species richness was roughly equal to the direct area effect, but the total (direct plus indirect) effect of area slightly exceeded that of habitat. It was shown that the overall effect of composite variables such as altitude or climatic district can be separated into particular climatic variables, which influence the richness of the flora in a context-specific manner. The statistical explanation of richness variation at the level of families yielded results similar to those for species, indicating that the system of nature conservation provides similar degrees of protection at different taxonomic levels. [source]
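The backward-elimination procedure used in the study can be sketched in miniature. This toy version drops, at each step, the predictor whose removal costs the least explained variance, stopping when every remaining predictor matters; the data, the R²-loss stopping rule and the threshold are invented (the study itself used significance tests on its 14 reserve-level variables).

```python
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (normal equations)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        pivot = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[pivot] = M[pivot], M[c]
        for r in range(n):
            if r != c and M[c][c] != 0:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def r_squared(X, y, cols):
    """R^2 of an OLS fit of y on an intercept plus the selected columns."""
    rows = [[1.0] + [x[c] for c in cols] for x in X]
    k = len(rows[0])
    XtX = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    beta = solve(XtX, Xty)
    yhat = [sum(b * v for b, v in zip(beta, r)) for r in rows]
    ybar = sum(y) / len(y)
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

def backward_eliminate(X, y, cols, max_loss=0.01):
    cols = list(cols)
    while len(cols) > 1:
        full = r_squared(X, y, cols)
        losses = [(full - r_squared(X, y, [c for c in cols if c != d]), d)
                  for d in cols]
        loss, worst = min(losses)
        if loss > max_loss:
            break          # every remaining predictor earns its keep
        cols.remove(worst)
    return cols

# Toy data: the response depends only on column 0; columns 1-2 are noise.
X = [[float(i), float((i * 7) % 5), float((i * 3) % 4)] for i in range(12)]
y = [2.0 * x[0] + 1.0 for x in X]
kept = backward_eliminate(X, y, [0, 1, 2])
```

Here elimination correctly discards the two uninformative predictors and keeps only column 0.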


Enabling a compact model to simulate the RF behavior of MOSFETs in SPICE

INTERNATIONAL JOURNAL OF RF AND MICROWAVE COMPUTER-AIDED ENGINEERING, Issue 3 2005
Reydezel Torres-Torres
Abstract A detailed methodology for implementing a MOSFET model valid for RF simulations is described in this article. Since SPICE-like simulation programs are the standard tool for integrated circuit (IC) design, the resulting model is oriented toward application within the SPICE environment. The core of the proposed model is the popular BSIM3v3, but RF effects are taken into account by means of extrinsic lumped elements. Good agreement between the simulated and measured small-signal S-parameter data is achieved for a 0.18-µm channel-length MOSFET, thus validating the proposed model. © 2005 Wiley Periodicals, Inc. Int J RF and Microwave CAE 15, 2005. [source]


Measurement based modeling and control of bimodal particle size distribution in batch emulsion polymerization

AICHE JOURNAL, Issue 8 2010
Mazen Alamir
Abstract In this article, a novel modeling approach is proposed for bimodal particle size distribution (PSD) control in batch emulsion polymerization. The approach is based on a behavioral model structure that captures the dynamics of the PSD. The parameters of the resulting model can be easily identified using a limited number of experiments. The resulting model can then be incorporated in a simple learning scheme to produce a desired bimodal PSD while compensating for model mismatch and/or physical parameter variations using very simple updating rules. © 2010 American Institute of Chemical Engineers AIChE J, 2010 [source]


Leading change through an international faculty development programme

JOURNAL OF NURSING MANAGEMENT, Issue 8 2009
LORA C. LACEY-HAUN PhD
Aims: The purpose of the study was to evaluate the modification of an American model of academic leadership training for utilization in an African university and to pilot test the efficacy of the resulting model. Background: Traditionally, many educators have moved into administrative positions without adequate training. Current world standards require leadership preparation for a wide array of persons. However, this opportunity did not yet exist in the study setting. Method: University leaders from the University of the Western Cape and the University of Missouri collaborated on revising and pilot testing a successful American academic leadership programme for use among African faculty. Cross-cultural adaptations, participant satisfaction and subsequent outcomes were assessed during the 2-year 'train-the-trainer' leadership development programme. Results: African faculty successfully modified the American training model, participated in training activities and, after 2 years, began to offer the service to other institutions in the region, which has increased the number of nurses in Africa who have had, and who will continue to have, the opportunity to move up the career ladder. Conclusion: The impact of the project extended further than originally expected, as the original plan to utilize the training materials at the University of the Western Cape (UWC) for in-house faculty was expanded to allow UWC to utilize the modified materials to serve the leadership development needs of faculty in other African universities. Implications for nursing management: Study findings will inform those interested in university policy and procedure on leadership training issues. The successful development of a self-sustaining leadership programme in which the values of multiple cultures must be appropriately addressed has significant impact for nursing administration. 
With the severe nursing shortage, health care institutions must develop cost-effective yet high-quality development programmes to assure the succession of current staff into leadership positions. We no longer have the luxury of recruiting broadly, and we must identify talented nurses within our own institutions and prepare them for advanced leadership roles. This succession plan is especially important for the next generation of nurse leaders representing minority populations. In particular, nurse managers will find the overview of the literature for middle managers enlightening, and may find links to key resources that could be revised to be more culturally relevant for use in a wide array of settings. [source]


A Pricing Model for Quantity Contracts

JOURNAL OF RISK AND INSURANCE, Issue 4 2004
Knut K. Aase
An economic model is proposed for a combined price futures and yield futures market. The innovation of the article is a technique of transforming from quantity and price to a model of two genuine pricing processes. This is required in order to apply modern financial theory. It is demonstrated that the resulting model can be estimated solely from data for a yield futures market and a price futures market. We develop a set of pricing formulas, some of which are partially tested, using price data for area yield options from the Chicago Board of Trade. Compared to a simple application of the standard Black and Scholes model, our approach seems promising. [source]


Automating survey coding by multiclass text categorization techniques

JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 14 2003
Daniela Giorgetti
Survey coding is the task of assigning a symbolic code from a predefined set of such codes to the answer given in response to an open-ended question in a questionnaire (aka survey). This task is usually carried out to group respondents according to a predefined scheme based on their answers. Survey coding has several applications, especially in the social sciences, ranging from the simple classification of respondents to the extraction of statistics on political opinions, health and lifestyle habits, customer satisfaction, brand fidelity, and patient satisfaction. Survey coding is a difficult task, because the code that should be attributed to a respondent based on the answer she has given is a matter of subjective judgment, and thus requires expertise. It is thus unsurprising that this task has traditionally been performed manually, by trained coders. Some attempts have been made at automating this task, most of them based on detecting the similarity between the answer and textual descriptions of the meanings of the candidate codes. We take a radically new stand, and formulate the problem of automated survey coding as a text categorization problem, that is, as the problem of learning, by means of supervised machine learning techniques, a model of the association between answers and codes from a training set of precoded answers, and applying the resulting model to the classification of new answers. In this article we experiment with two different learning techniques: one based on naive Bayesian classification, and the other one based on multiclass support vector machines, and test the resulting framework on a corpus of social surveys. The results we have obtained significantly outperform the results achieved by previous automated survey coding approaches. [source]
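The survey-coding-as-text-categorization idea above can be sketched with one of the two learners the article evaluates, a multinomial naive Bayes classifier: learn word/code associations from precoded answers, then assign a code to a new answer. The tiny training set and the codes below are invented; the article worked with real social-survey corpora and also tested multiclass support vector machines.

```python
import math
from collections import Counter, defaultdict

def train(precoded):
    """precoded: list of (answer_text, code) pairs. Returns model parameters."""
    doc_counts = Counter(code for _, code in precoded)
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, code in precoded:
        for w in text.lower().split():
            word_counts[code][w] += 1
            vocab.add(w)
    return doc_counts, word_counts, vocab

def classify(model, text):
    """Assign the code maximizing log P(code) + sum log P(word | code)."""
    doc_counts, word_counts, vocab = model
    total_docs = sum(doc_counts.values())
    best_code, best_score = None, float("-inf")
    for code, n_docs in doc_counts.items():
        score = math.log(n_docs / total_docs)          # class prior
        n_words = sum(word_counts[code].values())
        for w in text.lower().split():
            # Laplace smoothing so unseen words do not zero the likelihood
            score += math.log((word_counts[code][w] + 1) /
                              (n_words + len(vocab)))
        if score > best_score:
            best_code, best_score = code, score
    return best_code

# Hypothetical precoded answers to an open-ended satisfaction question.
answers = [
    ("too expensive and slow", "NEGATIVE"),
    ("prices are far too high", "NEGATIVE"),
    ("friendly staff great service", "POSITIVE"),
    ("great prices and service", "POSITIVE"),
]
model = train(answers)
code = classify(model, "the service was great")
```

The supervised formulation is the point: unlike similarity matching against code descriptions, the model is learned directly from coders' past decisions.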


"Living" Free Radical Polymerization in Tubular Reactors.

MACROMOLECULAR REACTION ENGINEERING, Issue 6 2007

Abstract This is the first of a series of works aimed at developing a tool for designing "living" free radical polymerization processes in tubular reactors, in order to achieve tailor-made MWDs. A mathematical model of the nitroxide-mediated controlled free radical polymerization is built and implemented to predict the complete MWD. It is shown that this objective may be achieved accurately and efficiently by means of the probability generating function (pgf) transformation. Agreement with experimental data is good. The potential of the resulting model for optimization activities involving the complete MWD is also shown. [source]


A (3+1)-dimensional Painlevé integrable model obtained by deformation

MATHEMATICAL METHODS IN THE APPLIED SCIENCES, Issue 2 2002
Jun Yu
Abstract Finding non-trivial higher-dimensional integrable models (especially in (3+1) dimensions) is one of the most important problems in non-linear physics. An efficient deformation method for obtaining higher-dimensional integrable models is proposed. Starting from the (2+1)-dimensional linear wave equation, a (3+1)-dimensional non-trivial non-linear equation is obtained by using a non-invertible deformation relation. Further, the Painlevé integrability of the resulting model is proved. Copyright © 2002 John Wiley & Sons, Ltd. [source]


A halo model of galaxy colours and clustering in the Sloan Digital Sky Survey

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 3 2009
Ramin A. Skibba
ABSTRACT Successful halo-model descriptions of the luminosity dependence of clustering distinguish between the central galaxy in a halo and all the others (satellites). To include colours, we provide a prescription for how the colour-magnitude relation of centrals and satellites depends on halo mass. This follows from two assumptions: (i) the bimodality of the colour distribution at a fixed luminosity is independent of halo mass and (ii) the fraction of satellite galaxies which populate the red sequence increases with luminosity. We show that these two assumptions allow one to build a model of how galaxy clustering depends on colour without any free parameters beyond those required to model the luminosity dependence of galaxy clustering. We then show that the resulting model is in good agreement with the distribution and clustering of colours in the Sloan Digital Sky Survey, both by comparing the predicted correlation functions of red and blue galaxies with measurements and by comparing the predicted colour-mark correlation function with the measured one. Mark correlation functions are powerful tools for identifying and quantifying correlations between galaxy properties and their environments: our results indicate that the correlation between halo mass and environment is the primary driver for correlations between galaxy colours and the environment; additional correlations associated with halo 'assembly bias' are relatively small. Our approach shows explicitly how to construct mock catalogues which include both luminosities and colours, thus providing realistic training sets for, e.g., galaxy cluster-finding algorithms. Our prescription is the first step towards incorporating the entire spectral energy distribution into the halo model approach. [source]


EFFECTS OF DOMAIN SIZE ON THE PERSISTENCE OF POPULATIONS IN A DIFFUSIVE FOOD-CHAIN MODEL WITH BEDDINGTON-DeANGELIS FUNCTIONAL RESPONSE

NATURAL RESOURCE MODELING, Issue 3 2001
ROBERT STEPHEN CANTRELL
ABSTRACT. A food chain consisting of species at three trophic levels is modeled using Beddington-DeAngelis functional responses as the links between trophic levels. The dispersal of the species is modeled by diffusion, so the resulting model is a three-component reaction-diffusion system. The behavior of the system is described in terms of predictions of extinction or persistence of the species. Persistence is characterized via permanence, i.e., uniform persistence plus dissipativity. The way that the predictions of extinction or persistence depend on domain size is studied by examining how they vary as the size (but not the shape) of the underlying spatial domain is changed. [source]
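A toy numerical sketch can make the model structure concrete: three densities on a 1-D domain, diffusing, with Beddington-DeAngelis responses linking the trophic levels. All coefficients, the grid and the explicit Euler discretization below are invented for illustration; the paper's results concern permanence analysis, not any particular simulation.

```python
def bd(prey, predator, a=1.0, b=1.0, c=1.0):
    """Beddington-DeAngelis response: a*prey / (1 + b*prey + c*predator)."""
    return a * prey / (1.0 + b * prey + c * predator)

def step(u, v, w, dx=0.1, dt=0.001, d=(0.05, 0.05, 0.05)):
    """One explicit Euler step with zero-flux (Neumann) boundaries."""
    n = len(u)
    def lap(z, i):                       # discrete Laplacian, reflected ends
        left = z[i - 1] if i > 0 else z[1]
        right = z[i + 1] if i < n - 1 else z[n - 2]
        return (left - 2 * z[i] + right) / dx ** 2
    nu, nv, nw = [], [], []
    for i in range(n):
        g1 = bd(u[i], v[i])              # resource -> consumer link
        g2 = bd(v[i], w[i])              # consumer -> top predator link
        nu.append(u[i] + dt * (d[0] * lap(u, i)
                               + u[i] * (1.0 - u[i]) - g1 * v[i]))
        nv.append(v[i] + dt * (d[1] * lap(v, i)
                               + 0.5 * g1 * v[i] - 0.1 * v[i] - g2 * w[i]))
        nw.append(w[i] + dt * (d[2] * lap(w, i)
                               + 0.5 * g2 * w[i] - 0.1 * w[i]))
    return nu, nv, nw

# Uniform initial densities on a 20-point grid; integrate a short while.
n = 20
u, v, w = [0.5] * n, [0.3] * n, [0.2] * n
for _ in range(500):
    u, v, w = step(u, v, w)
```

With these tame parameters all three densities remain positive and bounded over the short run, which is the qualitative behavior (persistence) that the paper characterizes rigorously via permanence.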


Detection and correction of underassigned rotational symmetry prior to structure deposition

ACTA CRYSTALLOGRAPHICA SECTION D, Issue 5 2010
Billy K. Poon
Up to 2% of X-ray structures in the Protein Data Bank (PDB) potentially fit into a higher symmetry space group. Redundant protein chains in these structures can be made compatible with exact crystallographic symmetry with minimal atomic movements that are smaller than the expected range of coordinate uncertainty. The incidence of problem cases is somewhat difficult to define precisely, as there is no clear line between underassigned symmetry, in which the subunit differences are unsupported by the data, and pseudosymmetry, in which the subunit differences rest on small but significant intensity differences in the diffraction pattern. To help catch symmetry-assignment problems in the future, it is useful to add a validation step that operates on the refined coordinates just prior to structure deposition. If redundant symmetry-related chains can be removed at this stage, the resulting model (in a higher symmetry space group) can readily serve as an isomorphous replacement starting point for re-refinement using re-indexed and re-integrated raw data. These ideas are implemented in new software tools available at http://cci.lbl.gov/labelit. [source]