Simulation Techniques



Selected Abstracts


Reconstructing head models from photographs for individualized 3D-audio processing

COMPUTER GRAPHICS FORUM, Issue 7 2008
M. Dellepiane
Abstract Visual fidelity and interactivity are the main goals in Computer Graphics research, but recently audio has also been assuming an important role. Binaural rendering can provide extremely pleasing and realistic three-dimensional sound, but to achieve the best results it is necessary either to measure or to estimate the individual Head-Related Transfer Function (HRTF). This function is strictly related to the peculiar features of the listener's ears and face. Recent sound-scattering simulation techniques can calculate the HRTF starting from an accurate 3D model of a human head. Hence, the use of binaural rendering on a large scale (e.g., video games, entertainment) could depend on the possibility of producing a sufficiently accurate 3D model of a human head starting from the smallest possible input. In this paper we present a completely automatic system which produces a 3D model of a head starting from simple input data (five photos and some key points indicated by the user). The geometry is generated by extracting information from the images and deforming a 3D dummy accordingly to reproduce the user's head features. The system proves to be fast, automatic, robust and reliable: geometric validation and preliminary assessments show that it can be accurate enough for HRTF calculation. [source]
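As a rough illustration of the binaural rendering stage that such an HRTF feeds into (not the authors' system), the sketch below convolves a mono signal with a pair of head-related impulse responses. The HRIRs here are synthetic placeholders; in practice they would be measured or computed by sound-scattering simulation on the reconstructed head model.

```python
# Minimal sketch of binaural rendering with an HRTF/HRIR pair (illustrative only).
import numpy as np
from scipy.signal import fftconvolve

fs = 44100
t = np.arange(0, 1.0, 1.0 / fs)
mono = np.sin(2 * np.pi * 440.0 * t)            # 1 s test tone

# Hypothetical left/right head-related impulse responses:
# just a small interaural time and level difference.
delay_samples = int(0.0003 * fs)                 # ~0.3 ms ITD
hrir_left = np.zeros(256);  hrir_left[0] = 1.0
hrir_right = np.zeros(256); hrir_right[delay_samples] = 0.7

left = fftconvolve(mono, hrir_left)[: len(mono)]
right = fftconvolve(mono, hrir_right)[: len(mono)]
binaural = np.stack([left, right], axis=1)       # stereo signal for headphones
print(binaural.shape)
```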


Managing very large distributed data sets on a data grid

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 11 2010
Miguel Branco
Abstract In this work we address the management of very large data sets that need to be stored and processed across many computing sites. The motivation for our work is the ATLAS experiment at the Large Hadron Collider (LHC), where the authors have been involved in the development of the data management middleware. This middleware, called DQ2, has been used for the last several years by the ATLAS experiment to ship petabytes of data to research centres and universities worldwide. We describe our experience in developing and deploying DQ2 on the Worldwide LHC Computing Grid, a production Grid infrastructure formed of hundreds of computing sites. From this operational experience we have identified an important degree of uncertainty that underlies the behaviour of large Grid infrastructures. We subject this uncertainty to a detailed analysis, leading us to present novel modelling and simulation techniques for Data Grids. In addition, we discuss what we perceive as practical limits to the development of data distribution algorithms for Data Grids given the underlying infrastructure uncertainty, and propose future research directions. Copyright © 2009 John Wiley & Sons, Ltd. [source]


Towards workflow simulation in service-oriented architecture: an event-based approach

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 4 2008
Yanchong Zheng
Abstract The emergence of service-oriented architecture (SOA) has brought about a loosely coupled computing environment that enables flexible integration and reuse of heterogeneous systems. In building an SOA for application systems, more and more research has focused on service composition, in which workflow and simulation techniques have shown great potential. Simulating the interaction of services is important because the services ecosystem is dynamic and in continuous evolution. However, research on service simulation is lacking, especially models, methods and systems that support simulating the interaction behavior of composite services. In this paper, an enhanced workflow simulation method supported by an interactive-event mechanism is proposed to fulfill this requirement. At build time, we introduce an event sub-model into the workflow meta-model, and our simulation engine supports the event-based interaction pattern at run time. With an example simulated in a prototype system developed according to our method, the advantages of our method in model verification and QoS evaluation for service compositions are also highlighted. Copyright © 2007 John Wiley & Sons, Ltd. [source]
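The event-based interaction pattern described above can be pictured with a minimal discrete-event engine. The sketch below is not the authors' meta-model or engine; it only shows the general idea of scheduling workflow activities and service-interaction events on a single priority queue.

```python
# Minimal discrete-event sketch of a workflow whose activities exchange
# events with external services (illustrative only).
import heapq

def simulate(until=100.0):
    clock, queue = 0.0, []
    heapq.heappush(queue, (0.0, "start_activity", "A1"))
    while queue and clock <= until:
        clock, kind, payload = heapq.heappop(queue)
        if kind == "start_activity":
            print(f"{clock:6.2f}  activity {payload} started, calls service")
            # the service reply is itself a scheduled event
            heapq.heappush(queue, (clock + 2.5, "service_reply", payload))
        elif kind == "service_reply":
            print(f"{clock:6.2f}  service replied to {payload}")
            if payload == "A1":                  # simple control flow: A1 -> A2
                heapq.heappush(queue, (clock + 1.0, "start_activity", "A2"))

simulate()
```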


Parallel protein folding with STAPL

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 14 2005
Shawna Thomas
Abstract The protein-folding problem is the study of how a protein dynamically folds to its so-called native state: an energetically stable, three-dimensional conformation. Understanding this process is of great practical importance, since some devastating diseases such as Alzheimer's and bovine spongiform encephalopathy (Mad Cow disease) are associated with the misfolding of proteins. We have developed a new computational technique for studying protein folding that is based on probabilistic roadmap methods for motion planning. Our technique yields an approximate map of a protein's potential energy landscape that contains thousands of feasible folding pathways. We have validated our method against known experimental results. Other simulation techniques, such as molecular dynamics or Monte Carlo methods, require many orders of magnitude more time to produce a single, partial trajectory. In this paper we report on our experiences parallelizing our method using STAPL (the Standard Template Adaptive Parallel Library), which is being developed in the Parasol Lab at Texas A&M. An efficient parallel version will enable us to study larger proteins with increased accuracy. We demonstrate how STAPL enables portable efficiency across multiple platforms, ranging from small Linux clusters to massively parallel machines such as IBM's BlueGene/L, without user code modification. Copyright © 2005 John Wiley & Sons, Ltd. [source]
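To convey the roadmap idea in miniature: sample "conformations", connect nearby samples with energy-weighted edges, and extract a low-cost path from an unfolded to a native state. The sketch below uses 2-D points and a toy energy function purely for illustration; it is not the authors' conformational sampling or energy model.

```python
# Toy probabilistic-roadmap sketch: sample points, build a k-NN graph with
# energy-weighted edges, and find a low-cost "folding pathway" with Dijkstra.
import heapq
import numpy as np

rng = np.random.default_rng(0)

def energy(x):                          # toy potential-energy landscape
    return np.sin(3 * x[0]) ** 2 + (x[1] - 0.5) ** 2

samples = rng.random((300, 2))
unfolded, native = np.array([0.05, 0.95]), np.array([0.95, 0.5])
nodes = np.vstack([samples, unfolded, native])
n = len(nodes)

# connect each node to its k nearest neighbours; edge cost grows with energy
k, graph = 8, {i: [] for i in range(n)}
for i in range(n):
    d = np.linalg.norm(nodes - nodes[i], axis=1)
    for j in np.argsort(d)[1 : k + 1]:
        cost = d[j] * (1.0 + max(energy(nodes[i]), energy(nodes[j])))
        graph[i].append((int(j), cost))

# Dijkstra from the unfolded state (index n-2) to the native state (index n-1)
dist, prev, pq = {n - 2: 0.0}, {}, [(0.0, n - 2)]
while pq:
    d0, u = heapq.heappop(pq)
    if u == n - 1:
        break
    if d0 > dist.get(u, np.inf):
        continue
    for v, w in graph[u]:
        if d0 + w < dist.get(v, np.inf):
            dist[v], prev[v] = d0 + w, u
            heapq.heappush(pq, (d0 + w, v))

path, node = [], n - 1
while node in prev or node == n - 2:
    path.append(node)
    if node == n - 2:
        break
    node = prev[node]
print("pathway found with", len(path), "nodes (unfolded -> native)")
```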


Robust adaptive remeshing strategy for large deformation, transient impact simulations

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 13 2006
Tobias Erhart
Abstract In this paper, an adaptive approach, with remeshing as an essential ingredient, towards robust and efficient simulation techniques for fast transient, highly non-linear processes including contact is discussed. The necessity for remeshing stems from two sources: the capability to deal with large deformations that might even require topological changes of the mesh, and the desire for an error-driven distribution of computational resources. The overall computational approach is sketched, the adaptive remeshing strategy is presented, and the crucial aspect, the choice of suitable error indicator(s), is discussed in more detail. Several numerical examples demonstrate the performance of the approach. Copyright © 2005 John Wiley & Sons, Ltd. [source]


Improvements of parametric quantum methods with new elementary parametric functionals

INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, Issue 10 2008
Fernando Ruette
Abstract A series of elementary parametric functionals (EPFs) for the resonance integral, electron-electron repulsion, electron-nucleus attraction, core-core interaction, and bond correlation correction were included in the new version of the CATIVIC method [Int J Quantum Chem 2004, 96, 321]. In the present work, a systematic way to improve the precision of parametric quantum methods (PQMs) by using several EPFs in the parameterization of a set of molecules is proposed. Based on the fact that a linear combination of elementary functionals from the exact Hamiltonian is also a functional, it is shown that a linear combination of EPFs can enhance the accuracy of PQMs provided a convexity condition is satisfied. A general formulation of simulation techniques for molecular properties is presented, and a formal extension of the minimax principle to PQMs is also considered. © 2008 Wiley Periodicals, Inc. Int J Quantum Chem, 2008 [source]
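One schematic way to read the convexity condition mentioned above is as a convex combination of elementary parametric functionals (a generic statement of the idea, not the specific functionals of CATIVIC):

```latex
F_{\text{PQM}} \;=\; \sum_{i=1}^{N} c_i\, f_i,
\qquad \sum_{i=1}^{N} c_i = 1, \qquad c_i \ge 0,
```

where the f_i are elementary parametric functionals (resonance integral, electron-electron repulsion, etc.) and the convex weights c_i are adjusted during parameterization.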


The estimation of utility-consistent labor supply models by means of simulated scores

JOURNAL OF APPLIED ECONOMETRICS, Issue 4 2008
Hans G. Bloemen
We consider a utility-consistent static labor supply model with flexible preferences and a nonlinear and possibly non-convex budget set. Stochastic error terms are introduced to represent optimization and reporting errors, stochastic preferences, and heterogeneity in wages. Coherency conditions on the parameters and the support of error distributions are imposed for all observations. The complexity of the model makes it impossible to write down the probability of participation. Hence we use simulation techniques in the estimation. We compare our approach with various simpler alternatives proposed in the literature. Both in Monte Carlo experiments and for real data the various estimation methods yield very different results. Copyright © 2008 John Wiley & Sons, Ltd. [source]


Mathematical Frameworks for Modeling Listeria Cross-contamination in Food-processing Plants

JOURNAL OF FOOD SCIENCE, Issue 6 2004
D.W. Schaffner
ABSTRACT: The possibility of modeling the cross-contamination of Listeria species, total Listeria monocytogenes, or specific L. monocytogenes strains with a quantitative mathematical model based on Monte Carlo simulation techniques is proposed. This article illustrates the approach using 2 different models: one that tracks L. monocytogenes number and prevalence for 4 different strains (Model I) and one that tracks only prevalence for a single strain (Model II). These models provide a starting framework from which predictive modelers and scientists studying L. monocytogenes can begin research together, with the ultimate goal of understanding and controlling L. monocytogenes in food-processing plants. [source]
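A prevalence-only Monte Carlo model in the spirit of Model II can be sketched in a few lines. All probabilities below are hypothetical placeholders, not values from the article; the point is only to show how simulated transfer events propagate prevalence through a processing chain.

```python
# Minimal Monte Carlo sketch of prevalence-only cross-contamination.
import numpy as np

rng = np.random.default_rng(1)
n_iter = 100_000

p_incoming = 0.05              # raw material carries Listeria (hypothetical)
p_transfer_to_surface = 0.30   # contaminated lot -> contact surface
p_cleaning_removes = 0.80      # sanitation removes surface contamination
p_surface_to_product = 0.10    # contaminated surface -> finished product

incoming = rng.random(n_iter) < p_incoming
surface = incoming & (rng.random(n_iter) < p_transfer_to_surface)
surface &= ~(rng.random(n_iter) < p_cleaning_removes)
product = surface & (rng.random(n_iter) < p_surface_to_product)

print("simulated prevalence in finished product:", product.mean())
```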


A non-Gaussian generalization of the Airline model for robust seasonal adjustment

JOURNAL OF FORECASTING, Issue 5 2006
JOHN A. D. ASTON
Abstract In their seminal book Time Series Analysis: Forecasting and Control, Box and Jenkins (1976) introduce the Airline model, which is still routinely used for the modelling of economic seasonal time series. The Airline model is for a differenced time series (in levels and seasons) and constitutes a linear moving average of lagged Gaussian disturbances which depends on two coefficients and a fixed variance. In this paper a novel approach to seasonal adjustment is developed that is based on the Airline model and that accounts for outliers and breaks in time series. For this purpose we consider the canonical representation of the Airline model. It takes the model as a sum of trend, seasonal and irregular (unobserved) components which are uniquely identified as a result of the canonical decomposition. The resulting unobserved components time series model is extended by components that allow for outliers and breaks. When all components depend on Gaussian disturbances, the model can be cast in state space form and the Kalman filter can compute the exact log-likelihood function. Related filtering and smoothing algorithms can be used to compute minimum mean squared error estimates of the unobserved components. However, the outlier and break components typically rely on heavy-tailed densities such as the t distribution or a mixture of normals. For this class of non-Gaussian models, Monte Carlo simulation techniques will be used for estimation, signal extraction and seasonal adjustment. This robust approach to seasonal adjustment allows outliers to be accounted for, while keeping the underlying structures that are currently used to aid reporting of economic time series data. Copyright © 2006 John Wiley & Sons, Ltd. [source]
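For reference, the Airline model for a monthly series y_t is the seasonal ARIMA(0,1,1)x(0,1,1)_12 specification

```latex
(1 - B)(1 - B^{12})\, y_t \;=\; (1 - \theta B)(1 - \Theta B^{12})\, \varepsilon_t,
\qquad \varepsilon_t \sim \mathrm{NID}(0, \sigma^2),
```

with backshift operator B, the two moving-average coefficients theta and Theta, and the fixed variance sigma^2 mentioned in the abstract.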


Spot water markets and risk in water supply

AGRICULTURAL ECONOMICS, Issue 2 2005
Javier Calatrava
Keywords: Water markets; Economic risk; Water availability; Irrigated agriculture

Abstract Water availability patterns in semiarid regions are typically extremely variable. Even in basins with a highly developed infrastructure, users are subject to unreliable water supplies, incurring substantial economic losses during periods of scarcity. More flexible instruments, such as voluntary exchanges of water among users, can help users to reduce risk exposure. This article looks at the effects of spot water markets on the economic risk caused by water availability variations. Our theoretical and empirical risk analyses are based on the random profits of water users. Profit probability density functions are formally and graphically characterized for both water sellers and buyers under several possible market outcomes. We conclude from this analysis that, where water supply is stochastic, water markets unambiguously reduce both parties' risk exposure. The empirical study is conducted on an irrigation district in the Guadalquivir Valley (Southern Spain), where there is a high probability of periods of extreme water scarcity. Water demand functions for the district's representative irrigators and a spatial equilibrium model are used to simulate market exchanges and equilibrium. This programming model is combined with statistical simulation techniques. We show that the profit probability distribution of a representative irrigator is modified if water exchanges are authorized, leading to risk reductions. Results also indicate that if the market were extended to several districts and users that are subject to varying hydrological risk exposure, extremely low-profit events would be less likely to occur. In sum, we show that exchanging water in annual spot markets can reduce farmers' economic vulnerability caused by water supply variability across irrigation seasons. These results support the water policy reform carried out in Spain in 1999 to allow for voluntary water exchanges among right holders. [source]
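A deliberately crude Monte Carlo sketch of the mechanism: when supply is stochastic, reallocating water toward the user with the higher marginal value raises that user's profit in dry years and so shifts the lower tail of the profit distribution. All functional forms and numbers below are hypothetical, market payments and the seller's side are ignored, and this is not the article's spatial equilibrium model.

```python
# Toy two-irrigator sketch: profit quantiles with and without reallocation.
import numpy as np

rng = np.random.default_rng(2)
a = np.array([10.0, 6.0])       # intercepts of the two marginal-profit curves
b = np.array([1.0, 0.5])        # slopes (curvature of quadratic profit)
share = np.array([0.5, 0.5])    # fixed allocation shares without a market

def profit(w):                   # pi_i(w_i) = a_i*w_i - 0.5*b_i*w_i^2
    return a * w - 0.5 * b * w ** 2

no_market, with_market = [], []
for _ in range(20_000):
    W = rng.gamma(shape=2.0, scale=2.0)      # stochastic seasonal water supply
    w_fixed = share * W
    # "market" allocation: equalise marginal profits a_i - b_i*w_i (two users)
    lam = ((a / b).sum() - W) / (1.0 / b).sum()
    w1 = (a[0] - lam) / b[0]
    w2 = W - w1
    if w2 < 0.0:                              # corner: user 1 takes everything
        w1, w2 = W, 0.0
    no_market.append(profit(w_fixed)[0])
    with_market.append(profit(np.array([w1, w2]))[0])

print("user 1, 5th-percentile profit, no market:", round(np.percentile(no_market, 5), 2))
print("user 1, 5th-percentile profit, market   :", round(np.percentile(with_market, 5), 2))
```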


Oxygen isotope and palaeotemperature records from six Greenland ice-core stations: Camp Century, Dye-3, GRIP, GISP2, Renland and NorthGRIP

JOURNAL OF QUATERNARY SCIENCE, Issue 4 2001
Sigfus J. Johnsen
Abstract Oxygen isotope variations spanning the last glacial cycle and the Holocene derived from ice-core records for six sites in Greenland (Camp Century, Dye-3, GRIP, GISP2, Renland and NorthGRIP) show strong similarities. This suggests that the dominant influence on oxygen isotope variations reflected in the ice-sheet records was regional climatic change. Differences in detail between the records probably reflect the effects of basal deformation in the ice as well as geographical gradients in atmospheric isotope ratios. Palaeotemperature estimates have been obtained from the records using three approaches: (i) inferences based on the measured relationship between mean annual δ18O of snow and mean annual surface temperature over Greenland; (ii) modelled inversion of the borehole temperature profile constrained by the dated isotopic profile; or (iii) modelled inversion using Monte Carlo simulation techniques. The third of these approaches was adopted to reconstruct Holocene temperature variations for the Dye-3 and GRIP temperature profiles, which yields remarkably compatible results. A new record of Holocene isotope variations obtained from the NorthGRIP ice-core matches the GRIP short-term isotope record, and also shows similar long-term trends to the Dye-3 and GRIP inverted temperature data. The NorthGRIP isotope record reflects: (i) a generally stronger isotopic signal than is found in the GRIP record; (ii) several short-lived temperature fluctuations during the first 1500 yr of the Holocene; (iii) a marked cold event at ca. 8.2 ka (the '8.2 ka event'); (iv) optimum temperatures for the Holocene between ca. 8.6 and 4.3 ka, a signal that is 0.6‰ stronger than for the GRIP profile; (v) a clear signal for the Little Ice Age; and (vi) a clear signal of climate warming during the last century. These data suggest that the NorthGRIP stable isotope record responded in a sensitive manner to temperature fluctuations during the Holocene. Copyright © 2001 John Wiley & Sons, Ltd. [source]


Computer Simulation of Dissociative Adsorption of Water on the Surfaces of Spinel MgAl2O4

JOURNAL OF THE AMERICAN CERAMIC SOCIETY, Issue 7 2001
Chang Ming Fang
Atomistic simulation techniques have been used to model the dissociative adsorption of water onto the low-index {100}, {110}, and {111} surfaces of spinel MgAl2O4. The Born model of solids and the shell model for oxygen polarization have been used. The resulting structures and chemical bonding on the clean and hydrated surfaces are described. The calculations show that the dissociative adsorption of water on the low-index surfaces is generally energetically favorable. For the {110} and {111} orientations, the surfaces cleaved between oxygen layers show high adsorption and stability. The calculations also show that, for the {111} orientation, the surfaces may chemically adsorb water molecules up to ~90% coverage and have the highest stability. It is suggested that, during fracture, only partial hydration occurs, leading to cleavage preferentially along the {100} orientation. [source]
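For orientation, interatomic interactions in Born/shell-model calculations of this kind are typically built from pairwise terms of the generic form below (shown only to indicate the functional form; the specific potentials and parameter set used by the authors are not reproduced here):

```latex
U_{ij}(r) \;=\; \frac{q_i q_j}{4\pi\varepsilon_0\, r}
\;+\; A_{ij}\, e^{-r/\rho_{ij}} \;-\; \frac{C_{ij}}{r^{6}},
```

i.e. long-range Coulomb interactions between formal charges plus a short-range Buckingham term, with oxygen polarizability handled by the shell model (a massless charged shell coupled to the ion core by a harmonic spring).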


The influence of indexing practices and weighting algorithms on document spaces

JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 1 2008
Dietmar Wolfram
Index modeling and computer simulation techniques are used to examine the influence of indexing frequency distributions, indexing exhaustivity distributions, and three weighting methods on hypothetical document spaces in a vector-based information retrieval (IR) system. The way documents are indexed plays an important role in retrieval. The authors demonstrate the influence of different indexing characteristics on document space density (DSD) changes and document space discriminative capacity for IR. Document environments that contain a relatively higher percentage of infrequently occurring terms provide lower density outcomes than do environments where a higher percentage of frequently occurring terms exists. Different indexing exhaustivity levels, however, have little influence on the document space densities. A weighting algorithm that favors higher weights for infrequently occurring terms results in the lowest overall document space densities, which allows documents to be more readily differentiated from one another. This in turn can positively influence IR. The authors also discuss the influence on outcomes using two methods of normalization of term weights (i.e., means and ranges) for the different weighting methods. [source]
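The kind of measurement described above can be mimicked with a small simulation: generate a random term-document matrix, apply two weighting schemes, and compare document space density (here taken as mean pairwise cosine similarity). The frequency distribution and weights below are illustrative, not those of the study.

```python
# Sketch: document space density under raw-frequency vs idf-style weighting.
import numpy as np

rng = np.random.default_rng(3)
n_docs, n_terms = 200, 1000

# Zipf-like term probabilities: a few very common terms, many rare ones
term_prob = 1.0 / np.arange(1, n_terms + 1)
term_prob /= term_prob.sum()
tf = rng.multinomial(60, term_prob, size=n_docs).astype(float)  # 60 tokens/doc

def density(matrix):
    norms = np.maximum(np.linalg.norm(matrix, axis=1, keepdims=True), 1e-12)
    sims = (matrix / norms) @ (matrix / norms).T
    iu = np.triu_indices(matrix.shape[0], k=1)
    return sims[iu].mean()            # mean pairwise cosine similarity

df = (tf > 0).sum(axis=0)                                   # document frequency
idf_weighted = tf * np.log(n_docs / np.maximum(df, 1))      # favours rare terms

print("DSD, raw term frequencies:", round(density(tf), 4))
print("DSD, idf-style weighting :", round(density(idf_weighted), 4))
```

Consistent with the abstract, the weighting that favours infrequently occurring terms yields the lower density, i.e. better-separated documents.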


Bayesian geoadditive sample selection models

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 3 2010
Manuel Wiesenfarth
Summary. Sample selection models attempt to correct for non-randomly selected data in a two-model hierarchy where, on the first level, a binary selection equation determines whether a particular observation will be available for the second level, i.e. in the outcome equation. Ignoring the non-random selection mechanism that is induced by the selection equation may result in biased estimation of the coefficients in the outcome equation. In the application that motivated this research, we analyse relief supply in earthquake-affected communities in Pakistan, where the decision to deliver goods represents the dependent variable in the selection equation whereas factors that determine the amount of goods supplied are analysed in the outcome equation. In this application, the inclusion of spatial effects is necessary since the available covariate information on the community level is rather scarce. Moreover, the high temporal dynamics underlying the immediate delivery of relief supply after a natural disaster calls for non-linear, time varying effects. We propose a geoadditive sample selection model that allows us to address these issues in a general Bayesian framework with inference being based on Markov chain Monte Carlo simulation techniques. The model proposed is studied in simulations and applied to the relief supply data from Pakistan. [source]


Computer modeling of frequency-modulation spectra of coherent dark resonances

LASER PHYSICS LETTERS, Issue 9 2006
J. Vladimirova
Abstract The dynamics of a three-level quantum system in Λ-configuration driven by a resonant laser field, with and without frequency modulation (FM), is studied in detail for the first time using two simulation techniques: density matrix and quantum trajectory analysis. This analysis was applied to the FM spectroscopy of coherent dark resonances in Cs atoms, and the computer simulation results for the absorption spectra are in qualitative agreement with experiment. (© 2006 by Astro, Ltd. Published exclusively by WILEY-VCH Verlag GmbH & Co. KGaA) [source]


Understanding Multicompartment Micelles Using Dissipative Particle Dynamics Simulation

MACROMOLECULAR THEORY AND SIMULATIONS, Issue 2 2007
Chongli Zhong
Abstract Multicompartment micelles are a new class of nanomaterials that may find wide applications in the fields of drug delivery, nanotechnology and catalysis. Due to their structural complexity, as well as the wide parameter space to explore, experimental investigations are a difficult task, to which molecular simulation may contribute greatly. In this paper, the application of the dissipative particle dynamics simulation technique to the understanding of multicompartment micelles is introduced, illustrating that DPD is a powerful tool for identifying new morphologies by varying block length, block ratio and solvent quality in a systematic way. The formation process of multicompartment micelles, as well as shear effects and the self-assembly of nanoparticle mixtures in multicompartment micelles, can also be studied well by DPD simulation. The present work shows that DPD, as well as other simulation techniques and theories, can complement experiments greatly, not only in exploring properties in a wider parameter space, but also by giving a preview of phenomena prior to experiments. DPD, as a mesoscopic dynamic simulation technique, is particularly useful for understanding the dynamic processes of multicompartment micelles at a microscopic level. [source]
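For orientation, the pairwise forces in standard DPD (Groot-Warren form) combine a soft conservative repulsion with a dissipative/random thermostat pair:

```latex
\begin{aligned}
\mathbf{F}_{ij} &= \mathbf{F}^{C}_{ij} + \mathbf{F}^{D}_{ij} + \mathbf{F}^{R}_{ij},\\
\mathbf{F}^{C}_{ij} &= a_{ij}\bigl(1 - r_{ij}/r_c\bigr)\,\hat{\mathbf{r}}_{ij} \quad (r_{ij} < r_c),\\
\mathbf{F}^{D}_{ij} &= -\gamma\, w^{D}(r_{ij})\,(\hat{\mathbf{r}}_{ij}\cdot\mathbf{v}_{ij})\,\hat{\mathbf{r}}_{ij},\\
\mathbf{F}^{R}_{ij} &= \sigma\, w^{R}(r_{ij})\,\theta_{ij}\,\Delta t^{-1/2}\,\hat{\mathbf{r}}_{ij},
\end{aligned}
```

with w^D = (w^R)^2 and sigma^2 = 2*gamma*k_B*T (the fluctuation-dissipation condition); the repulsion parameters a_ij encode block-block and block-solvent compatibility, which is what is varied when scanning block length, block ratio and solvent quality.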


Dissipative Particle Dynamics Simulations of Polymer Brushes: Comparison with Molecular Dynamics Simulations

MACROMOLECULAR THEORY AND SIMULATIONS, Issue 9 2006
Sandeep Pal
Abstract The structure of polymer brushes is investigated by dissipative particle dynamics (DPD) simulations that include explicit solvent particles. With an appropriate choice of the DPD interaction parameters, we obtain good agreement with previous molecular dynamics (MD) results in which the good-solvent behavior was modeled by an effective Lennard-Jones potential. The present results confirm that DPD simulation techniques can be applied to large-length-scale simulations of polymer brushes. A relation between the different length scales is established. (Graphical abstract: polymer brush at a solid-liquid interface.) [source]


From linear to non-linear scales: analytical and numerical predictions for weak-lensing convergence

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 2 2004
Andrew J. Barber
ABSTRACT Weak-lensing convergence can be used directly to map and probe the dark-mass distribution in the Universe. Building on earlier studies, we recall how the statistics of the convergence field are related to the statistics of the underlying mass distribution, in particular to the many-body density correlations. We describe two model-independent approximations which provide two simple methods to compute the probability distribution function (pdf) of the convergence. We apply one of these to the case where the density field can be described by a lognormal pdf. Next, we discuss two hierarchical models for the high-order correlations which allow us to perform exact calculations and evaluate the previous approximations in such specific cases. Finally, we apply these methods to a very simple model for the evolution of the density field from linear to highly non-linear scales. Comparisons with results from numerical simulations, based on a number of different realizations, show excellent agreement with our theoretical predictions. We have probed various angular scales in the numerical work and considered sources at 14 different redshifts in each of two different cosmological scenarios, an open cosmology and a flat cosmology with non-zero cosmological constant. Our simulation technique employs computations of the full three-dimensional shear matrices along the line of sight from the source redshift to the observer and is complementary to more popular ray-tracing algorithms. Our results therefore provide a valuable cross-check for such complementary simulation techniques, as well as for our simple analytical model, from the linear to the highly non-linear regime. [source]


Signal-to-interference-plus-noise ratio estimation for wireless communication systems: Methods and analysis

NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 5 2004
Daniel R. Jeske
Abstract The Signal-to-Interference-plus-Noise Ratio (SINR) is an important metric of wireless communication link quality. SINR estimates have several important applications. These include optimizing the transmit power level for a target quality of service, assisting with handoff decisions and dynamically adapting the data rate for wireless Internet applications. Accurate SINR estimation provides for both a more efficient system and a higher user-perceived quality of service. In this paper, we develop new SINR estimators and compare their mean squared error (MSE) performance. We show that our new estimators dominate estimators that have previously appeared in the literature with respect to MSE. The sequence of transmitted bits in wireless communication systems consists of both pilot bits (which are known both to the transmitter and receiver) and user bits (which are known only by the transmitter). The SINR estimators we consider alternatively depend exclusively on pilot bits, exclusively on user bits, or simultaneously use both pilot and user bits. In addition, we consider estimators that utilize smoothing and feedback mechanisms. Smoothed estimators are motivated by the fact that the interference component of the SINR changes relatively slowly with time, typically with the addition or departure of a user to the system. Feedback estimators are motivated by the fact that receivers typically decode bits correctly with a very high probability, and therefore user bits can be thought of as quasipilot bits. For each estimator discussed, we derive an exact or approximate formula for its MSE. Satterthwaite approximations, noncentral F distributions (singly and doubly) and distribution theory of quadratic forms are the key statistical tools used in developing the MSE formulas. In the case of approximate MSE formulas, we validate their accuracy using simulation techniques. The approximate MSE formulas, of interest in their own right for comparing the quality of the estimators, are also used for optimally combining estimators. In particular, we derive optimal weights for linearly combining an estimator based on pilot bits with an estimator based on user bits. The optimal weights depend on the MSE of the two estimators being combined, and thus the accurate approximate MSE formulas can conveniently be used. The optimal weights also depend on the unknown SINR, and therefore need to be estimated in order to construct a useable combined estimator. The impact on the MSE of the combined estimator due to estimating the weights is examined. © 2004 Wiley Periodicals, Inc. Naval Research Logistics, 2004 [source]
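To fix ideas, a linear combination of a pilot-bit-based and a user-bit-based estimator takes the schematic form below; the weight shown is the familiar minimum-MSE weight for two (approximately) unbiased, uncorrelated estimators, whereas the paper derives its weights from the exact and approximate MSE formulas it develops.

```latex
\widehat{\mathrm{SINR}}_{\mathrm{comb}}
  = w\,\widehat{\mathrm{SINR}}_{\mathrm{pilot}} + (1 - w)\,\widehat{\mathrm{SINR}}_{\mathrm{user}},
\qquad
w^{*} = \frac{\mathrm{MSE}_{\mathrm{user}}}
             {\mathrm{MSE}_{\mathrm{pilot}} + \mathrm{MSE}_{\mathrm{user}}}.
```

Since the MSEs depend on the unknown SINR, w* must itself be estimated, which is exactly the effect on the combined estimator's MSE that the abstract says is examined.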


Modelling capping of 28 mm beverage closures using finite element analysis

PACKAGING TECHNOLOGY AND SCIENCE, Issue 5 2008
J. Rowson
Abstract Understanding the performance of packaging on production lines is of course extremely important to the packaging industry. Computer simulation techniques have improved vastly in recent years, and modelling the complex interaction of three-dimensional threaded shapes, like closures, is now a reality. This paper outlines the work undertaken in understanding the possible mechanisms relating to the capping of 28 mm beverage closures and the use of experimental and computer simulations in order to establish that understanding. Copyright © 2007 John Wiley & Sons, Ltd. [source]


Accessibility of simple gases in disordered carbons: theory and simulation

ASIA-PACIFIC JOURNAL OF CHEMICAL ENGINEERING, Issue 5 2009
T. X. Nguyen
Abstract We present a review of our recent studies on the accessibility of simple gases (Ar, N2, CH4 and CO2) in disordered microporous carbons using transition state theory (TST) and molecular simulation techniques. A realistic carbon model rather than the slit-pore approximation is utilised, providing a more accurate understanding of complex adsorption equilibrium and dynamics behaviour at the molecular level in porous carbons, especially the kinetic restriction of adsorbate molecules passing through the highly constricted pore mouths of coals and molecular sieve carbons (MSC). This kinetic restriction leads to a molecular sieving effect, which plays a vital role in gas separation using the MSCs. In particular, the realistic carbon model of a saccharose char used in a recent study was obtained by hybrid reverse Monte Carlo simulation. The time for adsorption or desorption of a single gas molecule between two neighbouring pores through a highly constricted window of the realistic saccharose char model was determined using TST. Finally, the validation of TST-calculated adsorption and desorption times against experimental measurements, as well as against molecular dynamics simulation, is also presented in this article. Copyright © 2009 Curtin University of Technology and John Wiley & Sons, Ltd. [source]
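As a rough guide to why TST is needed here, a generic Eyring-form rate expression (not necessarily the exact TST formulation used by the authors) relates the hopping rate between neighbouring pores to the free-energy barrier at the constricted pore mouth:

```latex
k_{\mathrm{hop}} \;\approx\; \frac{k_{B}T}{h}\,
\exp\!\left(-\frac{\Delta F^{\ddagger}}{k_{B}T}\right),
\qquad
\tau_{\mathrm{hop}} \;=\; \frac{1}{k_{\mathrm{hop}}}.
```

When the barrier is large, the mean hopping time far exceeds what plain molecular dynamics can reach, which is why rates are computed with TST and only validated against MD where feasible.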


Residual-Based Diagnostics for Structural Equation Models

BIOMETRICS, Issue 1 2009
B. N. Sánchez
Summary Classical diagnostics for structural equation models are based on aggregate forms of the data and are ill suited for checking distributional or linearity assumptions. We extend recently developed goodness-of-fit tests for correlated data based on subject-specific residuals to structural equation models with latent variables. The proposed tests lend themselves to graphical displays and are designed to detect misspecified distributional or linearity assumptions. To complement graphical displays, test statistics are defined; the null distributions of the test statistics are approximated using computationally efficient simulation techniques. The properties of the proposed tests are examined via simulation studies. We illustrate the methods using data from a study of in utero lead exposure. [source]
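The computational trick mentioned in the abstract, approximating a null distribution by simulation, can be sketched generically: simulate the statistic many times under the null model and compare with the observed value. The statistic and distributions below are illustrative only; the paper's subject-specific residual statistics for latent-variable models are more involved.

```python
# Generic sketch: Monte Carlo approximation of a test statistic's null distribution.
import numpy as np

rng = np.random.default_rng(4)
n, n_sim = 200, 2000

def statistic(resid):
    # a cumulative-sum-type residual statistic (illustrative choice)
    return np.abs(np.cumsum(resid - resid.mean())).max() / np.sqrt(len(resid))

observed = statistic(rng.standard_t(df=3, size=n))   # "data" residuals

null_stats = np.array([statistic(rng.standard_normal(n)) for _ in range(n_sim)])
p_value = (null_stats >= observed).mean()
print("approximate p-value:", round(p_value, 3))
```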


Advanced Medical Simulation Applications for Emergency Medicine Microsystems Evaluation and Training

ACADEMIC EMERGENCY MEDICINE, Issue 11 2008
Leo Kobayashi MD
Abstract Participants in the 2008 Academic Emergency Medicine Consensus Conference "The Science of Simulation in Healthcare: Defining and Developing Clinical Expertise" morning workshop session on developing systems expertise were tasked with evaluating best applications of simulation techniques and technologies to small-scale systems in emergency medicine (EM). We collaborated to achieve several objectives: 1) describe relevant theories and terminology for discussion of health care systems and medical simulation, 2) review prior and ongoing efforts employing systems thinking and simulation programs in general medical sectors and acute care medicine, 3) develop a framework for discussion of systems thinking for EM, and 4) explore the rational application of advanced medical simulation methods to a defined framework of EM microsystems (EMMs) to promote a "quality-by-design" approach. This article details the materials compiled and questions raised during the consensus process, and the resulting simulation application framework, with proposed solutions as well as their limitations for EM systems education and improvement. [source]


Educational and Research Implications of Portable Human Patient Simulation in Acute Care Medicine

ACADEMIC EMERGENCY MEDICINE, Issue 11 2008
Leo Kobayashi MD
Abstract Advanced medical simulation has become widespread. One development, the adaptation of simulation techniques and manikin technologies for portable operation, is starting to impact the training of personnel in acute care fields such as emergency medicine (EM) and trauma surgery. Unencumbered by cables and wires, portable simulation programs mitigate several limitations of traditional (nonportable) simulation and introduce new approaches to acute care education and research. Portable simulation is already conducted across multiple specialties and disciplines. In situ medical simulations are those carried out within actual clinical environments, while off-site portable simulations take place outside of clinical practice settings. Mobile simulation systems feature functionality while moving between locations; progressive simulations are longer-duration events using mobile simulations that follow a simulated patient through sequential care environments. All of these variants have direct applications for acute care medicine. Unique training and investigative opportunities are created by portable simulation through four characteristics: 1) enhancement of experiential learning by reframing training inside clinical care environments, 2) improving simulation accessibility through delivery of training to learner locations, 3) capitalizing on existing care environments to maximize simulation realism, and 4) provision of improved training capabilities for providers in specialized fields. Research agendas in acute care medicine are expanded via portable simulation's introduction of novel topics, new perspectives, and innovative methodologies. Presenting opportunities and challenges, portable simulation represents an evolutionary progression in medical simulation. The use of portable manikins and associated techniques may increasingly complement established instructional measures and research programs at acute care institutions and simulation centers. [source]


Application of modeling and simulation tools for the evaluation of biocatalytic processes: A future perspective

BIOTECHNOLOGY PROGRESS, Issue 6 2009
Gürkan Sin
Abstract Modeling and simulation techniques have for some time been an important feature of biocatalysis research, often applied as a complement to experimental studies. In this short review, we report on the state-of-the-art process and kinetic modeling for biocatalysis with the aim of identifying future research needs. We have particularly focused on four aspects of modeling: (i) the model purpose, (ii) the process model boundary, (iii) the model structure, and (iv) the model identification procedure. First, one finds that most of the existing models describe biocatalyst behavior in terms of enzyme selectivity, mechanism, and reaction kinetics. More recently, work has focused on extending these models to obtain process flowsheet descriptions. Second, biocatalysis models remain at a relatively low level of complexity compared with the trends observed in other engineering disciplines. Hence, there is certainly room for additional development, e.g., detailed mixing and hydrodynamics, more process units (e.g., biorefinery). Third, biocatalysis models have been only partially subjected to formal statistical analysis. In particular, uncertainty analysis is needed to ascertain reliability of the predictions of the process model, which is necessary to make sound engineering decisions (e.g., the optimal process flowsheet, control strategy, etc.). In summary, for modeling studies to be more mature and successful, one needs to introduce Good Modeling Practice, which asks for (i) a standardized and systematic guideline for model development, (ii) formal identifiability analysis, and (iii) uncertainty analysis. This will advance the utility of models in biocatalysis for more rigorous application within process design, optimization, and control strategy evaluation. © 2009 American Institute of Chemical Engineers Biotechnol. Prog., 2009 [source]


Mechanisms of constitutive activation of Janus kinase 2-V617F revealed at the atomic level through molecular dynamics simulations

CANCER, Issue 8 2009
Tai-Sung Lee PhD
Abstract BACKGROUND: The tyrosine kinase Janus kinase 2 (JAK2) is important in triggering nuclear translocation and regulation of target gene expression through signal transducer and activator of transcription pathways. The valine-to-phenylalanine mutation at amino acid 617 (V617F), which results in the deregulation of JAK2, has been implicated in the oncogenesis of chronic myeloproliferative disease. However, both the mechanism of JAK2 autoinhibition and the mechanism of V617F constitutive activation remain unclear. METHOD: In this work, the authors used molecular dynamics simulation techniques to establish plausible mechanisms of JAK2 autoinhibition and V617F constitutive activation at the atomic level. RESULTS: In wild-type JAK2, the activation loop of JAK2-homology domain 1 (JH1) is pulled toward the JH1/JH2 interface through interactions with key residues of JH2, especially S591, F595, and V617, and stabilizes the inactivated form of JH1. In the case of V617F, through the aromatic ring-ring stacking interaction, F617 blocks the interaction of the JH1 activation loop with S591 and F595, thus causing the JH1 activation loop to move back to its activated form. CONCLUSIONS: The current results indicate that this simulation-derived mechanism of JAK2 autoregulation is consistent with currently available experimental evidence and may lead to a deeper understanding of JAK2 and other kinase systems that are regulated by pseudokinases. Cancer 2009. © 2009 American Cancer Society. [source]


Disulfide Bond Substitution by Directed Evolution in an Engineered Binding Protein

CHEMBIOCHEM, Issue 8 2009
Antoine Drevelle Dr.
Abstract Breaking ties: The antitumour protein neocarzinostatin (NCS) is one of the few drug-carrying proteins used in human therapeutics. However, the presence of disulfide bonds limits this protein's potential development for many applications. This study describes a generic directed-evolution approach starting from NCS-3.24 (shown in the figure complexed with two testosterone molecules) to engineer stable disulfide-free NCS variants suitable for a variety of purposes, including intracellular applications. The chromoprotein neocarzinostatin (NCS) has been intensively studied for its antitumour properties. It has recently been redesigned as a potential drug-carrying scaffold. A potential limit of this protein scaffold, especially for intracellular applications, is the presence of disulfide bonds. The objective of this work was to create a disulfide-free NCS-derived scaffold. A generic targeted approach was developed by using directed evolution methods. As a starting point we used a previously engineered NCS variant in which a hapten binding site had been created. A library was then generated in which cysteines Cys88 and Cys93 and neighbouring residues were randomly substituted. Variants that preserved the hapten binding function were selected by phage display and further screened by colony filtration methods. Several sequences with common features emerged from this process. The corresponding proteins were expressed and purified, and their biophysical properties were characterised. How these selected sequences rescued the folding ability and stability of the disulfide-free protein was carefully examined using calorimetry, and the results were interpreted with molecular simulation techniques. [source]


A Novel Quantum/Classical Hybrid Simulation Technique

CHEMPHYSCHEM, Issue 9 2005
Mike C. Payne Prof.
A successful scheme: The authors' hybrid modelling scheme can link one or more molecular-mechanics-based simulation techniques to one or more quantum mechanical atomistic simulation techniques in a seamless manner. This scheme is tested by studying the failure of a silicon nanobar under tensile stress and allows the study of brittle fracture in silicon (see simulation snapshot of the opening (111) crack system). [source]


The Use of Simulation in Emergency Medicine: A Research Agenda

ACADEMIC EMERGENCY MEDICINE, Issue 4 2007
William F. Bond MD
Abstract Medical simulation is a rapidly expanding area within medical education. In 2005, the Society for Academic Emergency Medicine Simulation Task Force was created to ensure that the Society and its members had adequate access to information and resources regarding this new and important topic. One of the objectives of the task force was to create a research agenda for the use of simulation in emergency medical education. The authors present here the consensus document from the task force regarding suggested areas for research. These include opportunities to study reflective experiential learning, behavioral and team training, procedural simulation, computer screen-based simulation, the use of simulation for evaluation and testing, and special topics in emergency medicine. The challenges of research in the field of simulation are discussed, including the impact of simulation on patient safety. Outcomes-based research and multicenter efforts will serve to advance simulation techniques and encourage their adoption. [source]