Selected Abstracts

First experience of compressible gas dynamics simulation on the Los Alamos Roadrunner machine
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 17 2009
Paul R. Woodward

Abstract We report initial experience with gas dynamics simulation on the Los Alamos Roadrunner machine. In this initial work, we have restricted our attention to flows in which the flow Mach number is less than 2. This permits us to use a simplified version of the PPM gas dynamics algorithm that has been described in detail by Woodward (2006). We follow a multifluid volume fraction using the PPB moment-conserving advection scheme, enforcing both pressure and temperature equilibrium between two monatomic ideal gases within each grid cell. The resulting gas dynamics code has been extensively restructured for efficient multicore processing and implemented for scalable parallel execution on the Roadrunner system. The code restructuring and parallel implementation are described and performance results are discussed. For a modest grid size, sustained performance of 3.89 Gflops per CPU core is delivered by this code on 36 Cell processors in 9 triblade nodes of a single rack of Roadrunner hardware. Copyright © 2009 John Wiley & Sons, Ltd. [source]

A fast, one-equation integration algorithm for the Lemaitre ductile damage model
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 8 2002
E. A. de Souza Neto
Article first published online: 3 MAY 200

Abstract This paper introduces an elastic predictor/return mapping integration algorithm for a simplified version of the Lemaitre ductile damage model, whose return mapping stage requires the solution of only one scalar non-linear equation. The simplified damage model differs from its original counterpart only in that it excludes kinematic hardening. It can be used to predict ductile damage growth whenever load reversal is absent or negligible, a condition met in a vast number of practical engineering applications.
The one-equation integration scheme proves highly efficient in the finite element solution of typical boundary value problems, requiring computation times comparable to those observed in classical von Mises implementations. This is in sharp contrast to the previously proposed implementations of the original model, whose return mapping may require, in the most general case, the solution of a system of 14 coupled algebraic equations. For completeness, a closed formula for the corresponding consistent tangent operator is presented. The performance of the algorithm is illustrated by means of a numerical example. Copyright © 2002 John Wiley & Sons, Ltd. [source]

A simplified v2-f model for near-wall turbulence
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 12 2007
M. M. Rahman

Abstract A simplified version of the v2-f model is proposed that accounts for the distinct effects of low-Reynolds number and near-wall turbulence. It incorporates modified Cε(1,2) coefficients to amplify the level of dissipation in non-equilibrium flow regions, thus reducing the kinetic energy and length scale magnitudes to improve prediction of adverse pressure gradient flows involving flow separation and reattachment. Unlike the conventional v2-f, it requires one additional equation (i.e. the elliptic equation for the elliptic relaxation parameter fµ) to be solved in conjunction with the k-ε model. The scaling is evaluated from k in collaboration with an anisotropic coefficient Cv and fµ. Consequently, the model needs no boundary condition on fµ and avoids free-stream sensitivity. The model is validated against a few flow cases, yielding predictions in good agreement with the direct numerical simulation (DNS) and experimental data. Copyright © 2007 John Wiley & Sons, Ltd.
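The elastic predictor/return mapping structure named in the Lemaitre-model abstract above can be illustrated on the classical von Mises model that the paper uses as its performance benchmark. The following is a minimal sketch, not the paper's algorithm: it assumes linear isotropic hardening and illustrative parameter names, and operates directly on the deviatoric stress.

```python
import numpy as np

def radial_return(s_trial, alpha_n, G, H, sigma_y):
    """Return mapping for von Mises plasticity with linear isotropic
    hardening (illustrative sketch; not the paper's Lemaitre scheme).

    s_trial : trial deviatoric stress tensor (3x3)
    alpha_n : accumulated plastic strain at the previous step
    G, H, sigma_y : shear modulus, hardening modulus, initial yield stress
    """
    norm_s = np.linalg.norm(s_trial)                 # Frobenius norm
    f_trial = norm_s - np.sqrt(2.0 / 3.0) * (sigma_y + H * alpha_n)
    if f_trial <= 0.0:                               # elastic predictor admissible
        return s_trial, alpha_n
    # Plastic step: with linear hardening the scalar consistency
    # equation has a closed-form root (in general it is a single
    # scalar nonlinear solve, mirroring the 'one-equation' idea).
    dgamma = f_trial / (2.0 * G + (2.0 / 3.0) * H)
    n = s_trial / norm_s                             # plastic flow direction
    s_new = s_trial - 2.0 * G * dgamma * n           # return to the yield surface
    alpha_new = alpha_n + np.sqrt(2.0 / 3.0) * dgamma
    return s_new, alpha_new
```

The updated stress lands exactly on the hardened yield surface, which is the defining property of the return mapping step.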
[source]

Hierarchic multigrid iteration strategy for the discontinuous Galerkin solution of the steady Euler equations
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 9-10 2006
Koen Hillewaert

Abstract We study the efficient use of the discontinuous Galerkin finite element method for the computation of steady solutions of the Euler equations. In particular, we look into a few methods to enhance computational efficiency. In this context we discuss the applicability of two algorithmic simplifications that decrease the computation time associated with quadrature. A simplified version of the quadrature-free implementation applicable to general equations of state, and a simplified curved boundary treatment, are investigated. We also investigate two efficient iteration techniques, namely the classical Newton-Krylov method used in computational fluid dynamics codes, and a variant of the multigrid method which uses interpolation orders rather than coarser tessellations to define the auxiliary coarser levels. Copyright © 2005 John Wiley & Sons, Ltd. [source]

Reasoning about non-linear AR models using expectation maximization
JOURNAL OF FORECASTING, Issue 6-7 2003
M. Arnold
Article first published online: 19 SEP 200

Abstract A simplified version of the expectation maximization (EM) algorithm is applied to search for optimal state sequences in state-dependent AR models, whereby no prior knowledge about the state equation is necessary. These sequences can be used to draw conclusions about functional dependencies between the observed process and the estimated AR coefficients. Consequently, this approach is especially helpful in the identification of functional-coefficient AR models, where the coefficients are controlled by the process itself.
The approximation of regression functions in first-order non-linear AR models and the localization of multiple thresholds in self-exciting threshold autoregressive models are demonstrated as examples. Copyright © 2003 John Wiley & Sons, Ltd. [source]

Credit Market Failures and Policy
JOURNAL OF PUBLIC ECONOMIC THEORY, Issue 3 2009
ENRICO MINELLI

In a simplified version of the Stiglitz and Weiss (1981) model of the credit market, we characterize optimal policies to correct market failures. Widely applied policies, notably interest-rate subsidies and investment subsidies, are compared to the theoretical optimum. [source]

Developing audience awareness in writing
JOURNAL OF RESEARCH IN READING, Issue 3 2002
José Brandão Carvalho

Beginning writers need to consider their audience, but this is only possible when the writer has reached a certain stage of cognitive development, as it is necessary to consider an absent reality (e.g. an audience reading the piece at a later point). Adapting the text to the audience is only possible when the physical task of writing becomes automatic and the writer is no longer absorbed by it. Then the writer is free to pay attention to other aspects of the task without overloading cognitive processes. Procedural facilitation involves the use of external aids to support a simplified version of the processes used by expert writers. It may function as a way of enabling beginning writers to adapt what they write for their audiences. At the same time, as this task becomes automatic, it may be seen as a way of promoting writing development. A quasi-experimental study is described in which a procedural facilitation strategy is used to promote writing skills, in particular the skill of suiting the text to the communicative context. The study was with fifth and ninth grade Portuguese students. The results of the post-test show significant progress for the experimental groups in contrast to the control groups.
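The self-exciting threshold autoregressive models referred to in the expectation-maximization abstract above can be made concrete with a small simulation. This is a minimal illustrative sketch with made-up parameters (the regime coefficients, threshold, and function name are my own), not the paper's estimation procedure:

```python
import numpy as np

def simulate_setar(n=500, threshold=0.0, phi_low=0.8, phi_high=-0.5,
                   sigma=1.0, seed=0):
    """Simulate a two-regime self-exciting threshold AR(1) process:

        y[t] = phi_low  * y[t-1] + e[t]   if y[t-1] <= threshold
        y[t] = phi_high * y[t-1] + e[t]   otherwise

    The AR coefficient is selected by the process's own past value,
    i.e. the 'functional-coefficient' situation described in the abstract.
    """
    rng = np.random.default_rng(seed)
    y = np.zeros(n)
    for t in range(1, n):
        phi = phi_low if y[t - 1] <= threshold else phi_high
        y[t] = phi * y[t - 1] + sigma * rng.standard_normal()
    return y
```

Conditioning on the regime of the lagged value recovers the two distinct dependence patterns, which is the kind of functional dependence the estimated state sequences are meant to reveal.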
[source]

A COMPARISON OF THE EFFECTIVENESS OF HEDONIC SCALES AND END-ANCHOR COMPRESSION EFFECTS
JOURNAL OF SENSORY STUDIES, Issue 2010
HARRY T. LAWLESS

ABSTRACT Three experiments were conducted to compare the relative performance of hedonic scaling methods, including the labeled affective magnitude (LAM) scale. In the first study, three versions of the LAM were used to evaluate 20 phrases that described diverse sensory experiences. One scale was anchored to "greatest imaginable like/dislike for any experience" and another used the "greatest imaginable like" phrase of the LAM but with the interior phrases repositioned relative to "any experience." The scale anchored to "any experience" showed a smaller range of scale usage and lower statistical differentiation, relative to the LAM scale, with the repositioned scale intermediate. Two further experiments compared the LAM to the nine-point hedonic scale, an 11-point category scale using the LAM phrases, and to a three-label line scale, a simplified version of the LAM with only the end phrases and the neutral center-point phrase. All scales showed similar differentiation of juices in the second study and sensory experience phrases in the third. A modest advantage for the LAM scale in the second experiment did not extend to the third study. Researchers should be careful in the choice of high end anchors for hedonic scales, as a compressed range of scale usage may result in lower product differentiation. PRACTICAL APPLICATIONS Hedonic scales for food acceptability are widely used in new product development for consumer testing and in food preference surveys. A desired goal of efficient sensory evaluation testing is the ability of tests to differentiate samples on the basis of scale data, in this case scales commonly used for food acceptability and preference testing.
Scales which are able to differentiate products more effectively are less likely to lead to Type II error in experimentation, in which true differences between products are not detected. Such errors can lead to lost opportunities for product improvements or to enhanced chances of taking undetected risks in the case of false parity conclusions. [source]

Approximate Bayesian inference for latent Gaussian models by using integrated nested Laplace approximations
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 2 2009
Håvard Rue

Summary. Structured additive regression models are perhaps the most commonly used class of models in statistical applications. It includes, among others, (generalized) linear models, (generalized) additive models, smoothing spline models, state space models, semiparametric regression, spatial and spatiotemporal models, log-Gaussian Cox processes and geostatistical and geoadditive models. We consider approximate Bayesian inference in a popular subset of structured additive regression models, latent Gaussian models, where the latent field is Gaussian, controlled by a few hyperparameters and with non-Gaussian response variables. The posterior marginals are not available in closed form owing to the non-Gaussian response variables. For such models, Markov chain Monte Carlo methods can be implemented, but they are not without problems, in terms of both convergence and computational time. In some practical applications, the extent of these problems is such that Markov chain Monte Carlo sampling is simply not an appropriate tool for routine analysis. We show that, by using an integrated nested Laplace approximation and its simplified version, we can directly compute very accurate approximations to the posterior marginals. The main benefit of these approximations is computational: where Markov chain Monte Carlo algorithms need hours or days to run, our approximations provide more precise estimates in seconds or minutes.
Another advantage of our approach is its generality, which makes it possible to perform Bayesian analysis in an automatic, streamlined way, and to compute model comparison criteria and various predictive measures so that models can be compared and the model under study can be challenged. [source]

GIS-based rapid assessment of erosion risk in a small catchment in the wet/dry tropics of Australia
LAND DEGRADATION AND DEVELOPMENT, Issue 5 2001
G. Boggs

Abstract Assessing the impact of various land uses on catchment erosion processes commonly requires in-depth research, monitoring and field data collection, as well as the implementation of sophisticated modelling techniques. This paper describes the evaluation of a geographic information system (GIS)-based rapid erosion assessment method, which allows the user to quickly acquire and evaluate existing data to assist in the planning of more detailed monitoring and modelling programmes. The rapid erosion assessment method is based on a simplified version of the revised universal soil loss equation (RUSLE), and allows the rapid parameterization of the model from widely available land unit and elevation datasets. The rapid erosion assessment method is evaluated through an investigation of the effects of elevation data resolution on erosion predictions and through field data validation. The use of raster digital elevation model (DEM)-derived data, as opposed to vector land unit relief data, was found to greatly improve the validity of the rapid erosion assessment method. Field validation of the approach, involving the comparison of predicted soil loss ratios with adjusted in-stream sediment yields on a subcatchment basis, indicated that with decreasing data resolution, the results are increasingly overestimated for larger catchments and underestimated for smaller catchments.
However, the rapid erosion assessment method proved to be a valuable tool that is highly useful as an initial step in the planning of more detailed erosion assessments. Copyright © 2001 Commonwealth of Australia. [source]

Amplitude-shape approximation as an extension of separation of variables
MATHEMATICAL METHODS IN THE APPLIED SCIENCES, Issue 3 2008
N. Parumasur

Abstract Separation of variables is a well-known technique for solving differential equations. However, it is seldom used in practical applications since it is impossible to carry out a separation of variables in most cases. In this paper, we propose the amplitude-shape approximation (ASA), which may be considered as an extension of the separation of variables method for ordinary differential equations. The main idea of the ASA is to write the solution as a product of an amplitude function and a shape function, both depending on time, and it may be viewed as an incomplete separation of variables. In fact, it will be seen that such a separation exists naturally when the method of lines is used to solve certain classes of coupled partial differential equations. We derive new conditions which may be used to solve the shape equations directly and present a numerical algorithm for solving the resulting system of ordinary differential equations for the amplitude functions. Alternatively, we propose a numerical method, similar to the well-established exponential time differencing method, for solving the shape equations. We consider stability conditions for the specific case corresponding to the explicit Euler method. We also consider a generalization of the method for solving systems of coupled partial differential equations. Finally, we consider the simple reaction diffusion equation and a numerical example from chemical kinetics to demonstrate the effectiveness of the method. The ASA results in far superior numerical results when the relative errors are compared to those of the separation of variables method.
Furthermore, the method leads to a reduction in CPU time as compared to using the Rosenbrock semi-implicit method for solving a stiff system of ordinary differential equations resulting from a method of lines solution of a coupled pair of partial differential equations. The present amplitude-shape method is a simplified version of previous ones due to the use of a linear approximation to the time dependence of the shape function. Copyright © 2007 John Wiley & Sons, Ltd. [source]

Shock formation in a chemotaxis model
MATHEMATICAL METHODS IN THE APPLIED SCIENCES, Issue 1 2008
Zhian Wang

Abstract In this paper, we establish the existence of shock solutions for a simplified version of the Othmer-Stevens chemotaxis model (SIAM J. Appl. Math. 1997; 57:1044-1081). The existence of these shock solutions was suggested by Levine and Sleeman (SIAM J. Appl. Math. 1997; 57:683-730). Here, we consider the general Riemann problem and derive the shock curves in parameterized forms. By studying the travelling wave solutions, we examine the shock structure for the chemotaxis model and prove that the travelling wave speed is identical to the shock speed. Moreover, we explicitly derive an entropy-entropy flux pair to prove the uniqueness of the weak shock solutions. Some discussion is given for further study. Copyright © 2007 John Wiley & Sons, Ltd. [source]

Unsaturated incompressible flows in adsorbing porous media
MATHEMATICAL METHODS IN THE APPLIED SCIENCES, Issue 16 2003
A. Fasano

We study a free boundary problem modelling the penetration of a liquid through a porous material in the presence of absorbing granules. The geometry is one dimensional. The early stage of penetration is considered, when the flow is unsaturated. Since the hydraulic conductivity depends both on saturation and on porosity, and the latter changes due to the absorption, the main coefficient in the flow equation depends on the free boundary and on the history of the process.
Some results have been obtained in Fasano (Math. Meth. Appl. Sci. 1999; 22:605) for a simplified version of the model. Here existence and uniqueness are proved in a class of weighted Hölder spaces in a more general situation. A basic tool is the set of estimates on a non-standard linear boundary value problem for the heat equation in an initially degenerate domain (Rend. Mat. Acc. Lincei 2002; 13:23). Copyright © 2003 John Wiley & Sons, Ltd. [source]

The funnel experiment: The Markov-based SPC approach
QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 8 2007
Gonen Singer

Abstract The classical funnel experiment was used by Deming to promote the idea of statistical process control (SPC). The popular example illustrates that the application of simple feedback rules to stationary processes violates the independence assumption and prevents the implementation of conventional SPC. However, Deming did not indicate how to implement SPC in the presence of such feedback rules. This pedagogical gap is addressed here by introducing a simple feedback rule to the funnel example that results in a nonlinear process to which the traditional SPC methods cannot be applied. The proposed method of Markov-based SPC, which is a simplified version of the context-based SPC method, is shown to monitor the modified process well. Copyright © 2007 John Wiley & Sons, Ltd. [source]

ESTIMATION IN RICKER'S TWO-RELEASE METHOD: A BAYESIAN APPROACH
AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 2 2006
Shen-Ming Lee

Summary. Ricker's two-release method is a simplified version of the Jolly-Seber method, from Seber's Estimation of Animal Abundance (1982), used to estimate survival rate and abundance in animal populations. This method assumes there is only a single recapture sample and no immigration, emigration or recruitment.
In this paper, we propose a Bayesian analysis for this method to estimate the survival rate and the capture probability, employing Markov chain Monte Carlo methods and a latent variable analysis. The performance of the proposed method is illustrated with a simulation study as well as a real data set. The results show that the proposed method provides favourable inference for the survival rate when compared with the modified maximum likelihood method. [source]

Insight into initiator-DNA interactions: a lesson from the archaeal ORC
BIOESSAYS, Issue 3 2008
Shusuke Tada

Although initiation of DNA replication is considered to be highly coordinated through multiple protein-DNA and protein-protein interactions, it is poorly understood how particular locations within the eukaryotic chromosome are selected as origins of DNA replication. Here, we discuss recent reports that present structural information on the interaction characteristics of the archaeal orthologues of the eukaryotic origin recognition complex with their cognate binding sequences.1,2 Since the archaeal replication system is postulated as a simplified version of the one in eukaryotes, by analogy, these works provide insights into the functions of the eukaryotic initiator proteins. BioEssays 30:208-211, 2008. © 2008 Wiley Periodicals, Inc. [source]

Extensions of the Penalized Spline of Propensity Prediction Method of Imputation
BIOMETRICS, Issue 3 2009
Guangyu Zhang

Summary. Little and An (2004, Statistica Sinica, 14, 949-968) proposed a penalized spline of propensity prediction (PSPP) method of imputation of missing values that yields robust model-based inference under the missing at random assumption. The propensity score for a missing variable is estimated and a regression model is fitted that includes the spline of the estimated logit propensity score as a covariate. The predicted unconditional mean of the missing variable has a double robustness (DR) property under misspecification of the imputation model.
We show that a simplified version of PSPP, which does not center other regressors prior to including them in the prediction model, also has the DR property. We also propose two extensions of PSPP, namely stratified PSPP and bivariate PSPP, that extend the DR property to inferences about conditional means. These extended PSPP methods are compared with the PSPP method and simple alternatives in a simulation study and applied to an online weight loss study conducted by Kaiser Permanente. [source]

Choquet-Stieltjes integral as a tool for decision modeling
INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 2 2008
Yasuo Narukawa

The usefulness of the Choquet integral for modeling decision under risk and uncertainty is shown. It is shown that some paradoxes of expected utility theory are resolved using the Choquet integral. Necessary and sufficient conditions are presented for the Choquet expected utility model for decision under uncertainty (or the rank-dependent utility model for decision under risk) to coincide with its simplified versions. © 2008 Wiley Periodicals, Inc. [source]
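The discrete Choquet integral underlying the decision models in the abstract above can be computed directly from its standard definition. The following is a minimal sketch with an illustrative capacity (the function name and example weights are my own, not the paper's formulation):

```python
def choquet(values, capacity):
    """Discrete Choquet integral of a nonnegative function `values`
    (dict: criterion -> value) with respect to `capacity`
    (dict: frozenset of criteria -> weight, monotone, full set -> 1).

    Standard definition: sort the values ascending as x_(1) <= ... <= x_(n)
    and sum (x_(i) - x_(i-1)) * capacity(A_(i)), where A_(i) is the set of
    criteria whose value is at least x_(i)."""
    items = sorted(values.items(), key=lambda kv: kv[1])   # ascending by value
    total, prev = 0.0, 0.0
    for i, (_, x) in enumerate(items):
        upper = frozenset(c for c, _ in items[i:])         # the level set A_(i)
        total += (x - prev) * capacity[upper]
        prev = x
    return total
```

For an additive capacity this reduces to an ordinary weighted sum (expected utility); non-additive capacities are what let the integral accommodate the paradoxes mentioned above.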