Computer Implementation

Selected Abstracts


A Computer Implementation of the Separate Maintenance Model for Complex-system Reliability

QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 7 2006
M. Tortorella
Abstract Reliability modeling and quantitative reliability prediction for all but the simplest system architectures demand intensive computer support for the numerical computations required. Many commercial and academic reliability modeling software packages provide support for the Markov-chain state diagram system reliability model. Other system reliability models, such as those offering non-exponential life and/or repair time distributions, transient analysis, or other special handling, may sometimes be desirable. Users have fewer choices for software supporting these options. This paper describes SUPER, a software package developed at Bell Laboratories, which provides computational support for the separate maintenance model as well as for some other useful system reliability descriptions. SUPER is an acronym for System Used for Prediction and Evaluation of Reliability. The paper also includes a brief tutorial to assist practitioners with system reliability model selection, a review of the models contained in SUPER and their theoretical bases, and implementation issues. SUPER has been used in the telecommunications industry for over 15 years. The paper includes an example from this experience. Copyright © 2005 John Wiley & Sons, Ltd. [source]
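For readers comparing approaches, here is a minimal sketch of the Markov-chain state-diagram model the abstract mentions (not SUPER itself, whose internals are not described here): steady-state availability of a single repairable unit with exponential failure and repair times. The rates are illustrative placeholders.

```python
import numpy as np

def steady_state_availability(lam: float, mu: float) -> float:
    """Solve pi @ Q = 0 with sum(pi) = 1 for a 2-state CTMC (0 = up, 1 = down)."""
    Q = np.array([[-lam, lam],
                  [mu, -mu]])
    # Stack the normalization constraint onto the balance equations.
    A = np.vstack([Q.T, np.ones(2)])
    b = np.array([0.0, 0.0, 1.0])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi[0]  # probability that the system is in the "up" state

if __name__ == "__main__":
    # Hypothetical rates: 1 failure per 1000 h, 1 repair per 10 h -> ~0.990.
    print(steady_state_availability(1 / 1000, 1 / 10))
```

The closed form for this two-state chain is mu / (lam + mu); the linear-algebra route is shown because it generalizes directly to the larger state diagrams the abstract has in mind.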


Modeling expert problem solving in a game of chance: a Yahtzee© case study

EXPERT SYSTEMS, Issue 2 2001
Ken Maynard
Although developments in software agents have led to useful applications in automation of routine tasks such as electronic mail filtering, there is a scarcity of research that empirically evaluates the performance of a software agent versus that of a human reasoner whose problem-solving capabilities the agent embodies. In the context of a game of chance, namely Yahtzee©, we identified strategies deployed by expert human reasoners and developed a decision tree for agent development. This paper describes the computer implementation of the Yahtzee game as well as the software agent. It also presents a comparison of the performance of humans versus an automated agent. Results indicate that, in this context, the software agent embodies human expertise at a high level of fidelity. [source]
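The authors' decision tree is not reproduced in the abstract, so the following sketch is purely hypothetical: a greedy keep/re-roll heuristic of the kind such an agent might encode, chasing n-of-a-kind by keeping the most frequent face.

```python
import random
from collections import Counter

def roll(n: int) -> list[int]:
    """Roll n six-sided dice."""
    return [random.randint(1, 6) for _ in range(n)]

def greedy_keep(dice: list[int]) -> list[int]:
    """Keep every die showing the most frequent face (chasing n-of-a-kind)."""
    face, _ = Counter(dice).most_common(1)[0]
    return [d for d in dice if d == face]

def play_turn() -> list[int]:
    """One Yahtzee turn: an opening roll followed by up to two re-rolls."""
    dice = roll(5)
    for _ in range(2):
        kept = greedy_keep(dice)
        if len(kept) == 5:          # five of a kind already: stop re-rolling
            break
        dice = kept + roll(5 - len(kept))
    return dice

if __name__ == "__main__":
    print(play_turn())
```

An expert-derived tree would branch on far more than face counts (open scoring categories, upper-section bonus progress, and so on); this stub only shows where such rules would plug in.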


Parallelization of a vorticity formulation for the analysis of incompressible viscous fluid flows

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 11 2002
Mary J. Brown
Abstract A parallel computer implementation of a vorticity formulation for the analysis of incompressible viscous fluid flow problems is presented. The vorticity formulation involves a three-step process: two kinematic steps followed by a kinetic step. The first kinematic step determines vortex sheet strengths along the boundary of the domain from a Galerkin implementation of the generalized Helmholtz decomposition. The vortex sheet strengths are related to the vorticity flux boundary conditions. The second kinematic step determines the interior velocity field from the regular form of the generalized Helmholtz decomposition. The third step, which is kinetic, solves the vorticity equation using a Galerkin finite element method with boundary conditions determined in the first step and velocities determined in the second step. The accuracy of the numerical algorithm is demonstrated through the driven-cavity problem and the 2-D cylinder in a free-stream problem, which represent both internal and external flows. Each of the three steps requires a unique parallelization effort, and each is evaluated in terms of parallel efficiency. Copyright © 2002 John Wiley & Sons, Ltd. [source]
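The abstract evaluates each step in terms of parallel efficiency; for readers unfamiliar with the metric, a brief sketch follows. The timings are invented placeholders, not the paper's measurements.

```python
def speedup(t_serial: float, t_parallel: float) -> float:
    """Classic speedup S(p) = T1 / Tp."""
    return t_serial / t_parallel

def parallel_efficiency(t_serial: float, t_parallel: float, p: int) -> float:
    """Parallel efficiency E(p) = S(p) / p; 1.0 means perfect scaling."""
    return speedup(t_serial, t_parallel) / p

if __name__ == "__main__":
    # Hypothetical timings: 100 s serial, 15 s on 8 processors.
    print(f"speedup    = {speedup(100.0, 15.0):.2f}")
    print(f"efficiency = {parallel_efficiency(100.0, 15.0, 8):.2%}")
```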


Guaranteed recursive non-linear state bounding using interval analysis

INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 3 2002
Michel Kieffer
Abstract The problem considered here is state estimation in the presence of unknown but bounded state perturbations and measurement noise. In this context, most available results are for linear models, and the purpose of the present paper is to deal with the non-linear case. Based on interval analysis and the notion of set inversion, a new state estimator is presented, which evaluates a set estimate guaranteed to contain all values of the state that are consistent with the available observations, given the perturbation and noise bounds and a set containing the initial value of the state. To the best of our knowledge, it is the first estimator for which this claim can be made. The precision of the set estimate can be improved, at the cost of more computation. Theoretical properties of the estimator are studied, and computer implementation receives special attention. A simple illustrative example is treated. Copyright © 2002 John Wiley & Sons, Ltd. [source]
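As a greatly simplified, hedged illustration of the interval-analysis guarantee (not the authors' set-inversion algorithm), the sketch below propagates a scalar interval through a nonlinear map, inflates it by the perturbation bound, and intersects it with a bounded-error measurement; the true state provably remains inside the resulting enclosure. The dynamics and bounds are invented for illustration.

```python
from typing import Tuple

Interval = Tuple[float, float]

def sqr(x: Interval) -> Interval:
    """Tight interval enclosure of x**2."""
    lo, hi = x
    cands = [lo * lo, hi * hi]
    return (0.0 if lo <= 0.0 <= hi else min(cands), max(cands))

def predict(x: Interval, w_bound: float) -> Interval:
    """One prediction step for x_{k+1} = 0.5*x_k**2 - 1 + w, |w| <= w_bound."""
    lo, hi = sqr(x)
    return (0.5 * lo - 1.0 - w_bound, 0.5 * hi - 1.0 + w_bound)

def correct(x: Interval, y: Interval) -> Interval:
    """Correction step: intersect the predicted set with the measurement set."""
    lo, hi = max(x[0], y[0]), min(x[1], y[1])
    if lo > hi:
        raise ValueError("empty intersection: model and data are inconsistent")
    return (lo, hi)

if __name__ == "__main__":
    x = (-0.1, 0.1)                  # set containing the initial state
    x = predict(x, w_bound=0.01)     # guaranteed enclosure after the dynamics
    x = correct(x, (-1.05, -0.95))   # shrink using a bounded-error observation
    print(x)
```

The paper's estimator works on vector-valued states and uses subpaving-based set inversion to tighten these enclosures; the trade-off it notes (precision versus computation) corresponds to how finely such sets are subdivided.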


Evaluation of physiologically based pharmacokinetic models for use in risk assessment

JOURNAL OF APPLIED TOXICOLOGY, Issue 3 2007
Weihsueh A. Chiu
Abstract Physiologically based pharmacokinetic (PBPK) models are sophisticated dosimetry models that offer great flexibility in modeling exposure scenarios for which there are limited data. This is particularly relevant to assessing human exposure to environmental toxicants, which often requires a number of extrapolations across species, route, or dose levels. The continued development of PBPK models ensures that regulatory agencies will increasingly experience the need to evaluate available models for their application in risk assessment. To date, there are few published criteria or well-defined standards for evaluating these models. Herein, important considerations for evaluating such models are described. The evaluation of PBPK models intended for risk assessment applications should include a consideration of: model purpose, model structure, mathematical representation, parameter estimation, computer implementation, predictive capacity and statistical analyses. Model purpose and structure require qualitative checks on the biological plausibility of a model. Mathematical representation, parameter estimation, and computer implementation involve an assessment of the coding of the model, as well as the selection and justification of the physical, physicochemical and biochemical parameters chosen to represent a biological organism. Finally, the predictive capacity and sensitivity, variability and uncertainty of the model are analysed so that the applicability of a model for risk assessment can be determined. Published in 2007 by John Wiley & Sons, Ltd. [source]
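At the computer-implementation stage the abstract lists, a PBPK model is a system of mass-balance ODEs. A minimal sketch of a one-compartment analogue follows; a real PBPK model has many perfusion-limited tissue compartments, and the rate constants here are illustrative, not taken from any assessed chemical.

```python
import numpy as np
from scipy.integrate import solve_ivp

def one_compartment(t, y, k_abs, k_el):
    """y = [gut amount, central amount]; first-order absorption and elimination."""
    gut, central = y
    return [-k_abs * gut, k_abs * gut - k_el * central]

if __name__ == "__main__":
    # Hypothetical 100-unit oral dose, k_abs = 1/h, k_el = 0.2/h, 24 h window.
    sol = solve_ivp(one_compartment, (0.0, 24.0), [100.0, 0.0],
                    args=(1.0, 0.2), dense_output=True)
    t = np.linspace(0.0, 24.0, 5)
    print(dict(zip(t, sol.sol(t)[1].round(2))))  # central-compartment amounts
```

Several of the abstract's evaluation criteria map directly onto code like this: parameter estimation covers the justification of k_abs and k_el, and predictive capacity covers comparing sol against measured concentration-time data.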


An adaptive empirical Bayesian thresholding procedure for analysing microarray experiments with replication

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 3 2007
Rebecca E. Walls
Summary. A typical microarray experiment attempts to ascertain which genes display differential expression in different samples. We model the data by using a two-component mixture model and develop an empirical Bayesian thresholding procedure, which was originally introduced for thresholding wavelet coefficients, as an alternative to the existing methods for determining differential expression across thousands of genes. The method is built on sound theoretical properties and has an easy computer implementation in the R statistical package. Furthermore, we consider improvements to the standard empirical Bayesian procedure when replication is present, to increase the robustness and reliability of the method. We provide an introduction to microarrays for those who are unfamiliar with the field, and the proposed procedure is demonstrated with applications to two-channel complementary DNA microarray experiments. [source]
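To make the two-component mixture idea concrete, the sketch below flags a gene when its posterior probability of belonging to the non-null component exceeds one half. It illustrates only the mixture-thresholding principle, not the authors' wavelet-derived empirical Bayes rule, and its weights and variances are made up.

```python
import numpy as np
from scipy.stats import norm

def posterior_nonnull(z: np.ndarray, p: float, tau2: float) -> np.ndarray:
    """Posterior P(non-null | z) under a two-groups normal mixture:
    null ~ N(0, 1) with weight 1 - p, non-null ~ N(0, 1 + tau2) with weight p."""
    f1 = p * norm.pdf(z, scale=np.sqrt(1.0 + tau2))  # non-null component
    f0 = (1.0 - p) * norm.pdf(z)                     # null component
    return f1 / (f0 + f1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulated z-statistics: 900 null genes, 100 differentially expressed.
    z = np.concatenate([rng.normal(0, 1, 900),
                        rng.normal(0, np.sqrt(5), 100)])
    flagged = posterior_nonnull(z, p=0.1, tau2=4.0) > 0.5
    print(f"{flagged.sum()} genes flagged as differentially expressed")
```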