Computational Procedure

Selected Abstracts


Rejoinder to "Modification of the Computational Procedure in Parker and Bregman's Method of Calculating Sample Size from Matched Case-Control Studies with a Dichotomous Exposure"

BIOMETRICS, Issue 4 2005
Robert A. Parker
No abstract is available for this article. [source]


Optimal search on spatial paths with recall, Part II: Computational procedures and examples

PAPERS IN REGIONAL SCIENCE, Issue 3 2000
Mitchell Harwitz
Keywords: search; spatial search; spatial economics. Abstract. This is the second part of a two-part analysis of optimal spatial search begun in Harwitz et al. (1998). In the present article, two explicit computational procedures are developed for the optimal spatial search problem studied in Part I. The first uses reservation prices with continuous known distributions of prices and is illustrated for three stores. The second does not use reservation prices but assumes known discrete distributions. It is a numerical approximation to the first and also a tool for examining examples with larger numbers of stores. [source]
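
To make the reservation-price step concrete, the following minimal sketch computes a reservation price for a generic sequential price search with a known continuous price distribution: keep searching while the expected saving from one more draw exceeds the search cost, i.e. solve E[(r - P)^+] = c for r. This is a generic illustration only, not the paper's spatial formulation (which also accounts for travel along paths); the Normal(100, 10) price distribution and unit search cost are assumptions for the example.

```python
import numpy as np
from scipy.optimize import brentq

def reservation_price(price_draws, search_cost):
    """Solve E[(r - P)^+] = c for the reservation price r.

    The searcher keeps visiting stores while the best price seen so far is
    above r, because one more draw is then expected to save more than c;
    the expectation is approximated by a Monte Carlo sample of prices.
    """
    def excess_saving(r):
        return np.mean(np.maximum(r - price_draws, 0.0)) - search_cost

    lo, hi = price_draws.min(), price_draws.max() + search_cost + 1.0
    return brentq(excess_saving, lo, hi)

if __name__ == "__main__":
    # Hypothetical example: store prices ~ Normal(100, 10), search cost 1 per extra store.
    rng = np.random.default_rng(0)
    draws = rng.normal(loc=100.0, scale=10.0, size=200_000)
    r = reservation_price(draws, search_cost=1.0)
    print(f"reservation price: {r:.2f}")   # accept the first store offering a price <= r
```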


Response of unbounded soil in scaled boundary finite-element method

EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 1 2002
John P. Wolf
Abstract The scaled boundary finite-element method is a powerful semi-analytical computational procedure to calculate the dynamic stiffness of the unbounded soil at the structure-soil interface. This permits the analysis of dynamic soil-structure interaction using the substructure method. The response in the neighbouring soil can also be determined analytically. The method is extended to calculate numerically the response throughout the unbounded soil including the far field. The three-dimensional vector-wave equation of elasto-dynamics is addressed. The radiation condition at infinity is satisfied exactly. By solving an eigenvalue problem, the high-frequency limit of the dynamic stiffness is constructed to be positive definite. However, a direct determination using impedances is also possible. Solving two first-order ordinary differential equations numerically permits the radiation condition and the boundary condition of the structure-soil interface to be satisfied sequentially, leading to the displacements in the unbounded soil. A generalization to viscoelastic material using the correspondence principle is straightforward. Alternatively, the displacements can also be calculated analytically in the far field. Good agreement of displacements along the free surface and below a prism foundation embedded in a half-space with the results of the boundary-element method is observed. Copyright © 2001 John Wiley & Sons, Ltd. [source]


A two-dimensional stochastic algorithm for the solution of the non-linear Poisson-Boltzmann equation: validation with finite-difference benchmarks

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 1 2006
Kausik Chatterjee
Abstract This paper presents a two-dimensional floating random walk (FRW) algorithm for the solution of the non-linear Poisson-Boltzmann (NPB) equation. In the past, the FRW method has not been applied to the solution of the NPB equation, which can be attributed to the absence of analytical expressions for volumetric Green's functions. Previous studies using the FRW method have examined only the linearized Poisson-Boltzmann equation. No such linearization is needed for the present approach. Approximate volumetric Green's functions have been derived with the help of perturbation theory, and these expressions have been incorporated within the FRW framework. A unique advantage of this algorithm is that it requires no discretization of either the volume or the surface of the problem domains. Furthermore, each random walk is independent, so that the computational procedure is highly parallelizable. In our previous work, we have presented preliminary calculations for one-dimensional and quasi-one-dimensional benchmark problems. In this paper, we present the detailed formulation of a two-dimensional algorithm, along with extensive finite-difference validation on fully two-dimensional benchmark problems. The solution of the NPB equation has many interesting applications, including the modelling of plasma discharges, semiconductor device modelling and the modelling of biomolecular structures and dynamics. Copyright © 2005 John Wiley & Sons, Ltd. [source]
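
The paper's approximate volumetric Green's functions for the NPB equation are not reproduced in the abstract, so the sketch below illustrates only the underlying floating-random-walk idea in its simplest setting: the 2D Laplace equation with Dirichlet data on the unit square ("walk on circles"). It shows the two properties the abstract emphasizes, namely that no volume or surface discretization is needed and that each walk is independent and hence trivially parallelizable. The unit-square domain, tolerance and test function are assumptions for the example.

```python
import numpy as np

def frw_laplace(x0, y0, boundary_value, n_walks=20_000, eps=1e-3, rng=None):
    """Estimate u(x0, y0) for Laplace's equation on the unit square.

    Floating random walk / walk on circles: from the current point, jump to a
    uniformly random point on the largest circle centred there that stays
    inside the domain; stop within `eps` of the boundary and score the
    Dirichlet data there (an O(eps) bias).  Walks are independent, so the
    outer loop parallelizes trivially.
    """
    rng = np.random.default_rng() if rng is None else rng
    total = 0.0
    for _ in range(n_walks):
        x, y = x0, y0
        while True:
            r = min(x, 1.0 - x, y, 1.0 - y)      # radius of largest inscribed circle
            if r < eps:
                total += boundary_value(x, y)
                break
            theta = rng.uniform(0.0, 2.0 * np.pi)
            x += r * np.cos(theta)
            y += r * np.sin(theta)
    return total / n_walks

if __name__ == "__main__":
    # Harmonic test function u(x, y) = x^2 - y^2; its boundary trace is the data.
    u_exact = lambda x, y: x * x - y * y
    est = frw_laplace(0.3, 0.7, u_exact, rng=np.random.default_rng(1))
    print(f"estimate {est:.4f}  vs exact {u_exact(0.3, 0.7):.4f}")
```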


Global formulation of 3D magnetostatics using flux and gauged potentials

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 4 2004
M. Repetto
Abstract The use of algebraic formulations for the solution of electromagnetic fields is becoming increasingly widespread. This paper presents the theoretical development of two algebraic formulations of the magnetostatic problem and their implementation in a three-dimensional computational procedure based on an unstructured tetrahedral mesh. A complete description of the variables used and of the solution algorithm is provided, together with a discussion of the performance of the method. The two procedures are tested and assessed against cases with known solutions. Copyright © 2004 John Wiley & Sons, Ltd. [source]


A multi-block lattice Boltzmann method for viscous fluid flows

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 2 2002
Dazhi Yu
Abstract Compared to the Navier-Stokes equation-based approach, the lattice Boltzmann equation (LBE) method offers an alternative treatment for fluid dynamics. The LBE method often employs uniform lattices to maintain a compact and efficient computational procedure, which makes it less efficient to perform flow simulations when high resolution is needed near the body and/or there is a far-field boundary. To resolve these difficulties, a multi-block method is developed. An accurate, conservative interface treatment between neighboring blocks is adopted and is shown to satisfy the continuity of mass, momentum, and stresses across the interface. Several test cases are employed to assess accuracy improvement with respect to grid refinement, the impact of the corner singularity, and the Reynolds number scaling. The present multi-block method can substantially improve the accuracy and computational efficiency of the LBE method for viscous flow computations. Copyright © 2002 John Wiley & Sons, Ltd. [source]
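
The conservative multi-block interface treatment itself is not detailed in the abstract, so the following sketch shows only the single-block, uniform-lattice LBE core that such a method builds on: a D2Q9 BGK stream-and-collide loop on a fully periodic lattice, checked against the viscous decay rate of a sinusoidal shear wave (lattice viscosity nu = (tau - 0.5)/3). Lattice size, relaxation time and initial amplitude are illustrative assumptions, not values from the paper.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and quadrature weights
E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, ux, uy):
    """Second-order Maxwellian equilibrium for each of the nine directions."""
    feq = np.empty((9,) + rho.shape)
    usq = ux * ux + uy * uy
    for i, (ex, ey) in enumerate(E):
        eu = ex * ux + ey * uy
        feq[i] = W[i] * rho * (1.0 + 3.0 * eu + 4.5 * eu * eu - 1.5 * usq)
    return feq

def shear_wave_decay(N=64, tau=0.8, U0=1e-4, steps=400):
    """Single-block BGK run on a fully periodic N x N lattice: a sinusoidal
    shear wave u_x(y) decays at the rate set by the lattice viscosity."""
    ux = np.tile(U0 * np.sin(2 * np.pi * np.arange(N) / N), (N, 1))   # varies in y
    uy = np.zeros((N, N))
    rho = np.ones((N, N))
    f = equilibrium(rho, ux, uy)
    for _ in range(steps):
        rho = f.sum(axis=0)
        ux = (f * E[:, 0, None, None]).sum(axis=0) / rho
        uy = (f * E[:, 1, None, None]).sum(axis=0) / rho
        f -= (f - equilibrium(rho, ux, uy)) / tau                     # BGK collision
        for i, (ex, ey) in enumerate(E):                              # periodic streaming
            f[i] = np.roll(np.roll(f[i], ex, axis=0), ey, axis=1)
    rho = f.sum(axis=0)
    ux = (f * E[:, 0, None, None]).sum(axis=0) / rho
    nu = (tau - 0.5) / 3.0                                            # lattice viscosity
    k = 2 * np.pi / N
    print("measured amplitude :", np.abs(ux).max())
    print("theoretical decay  :", U0 * np.exp(-nu * k * k * steps))

if __name__ == "__main__":
    shear_wave_decay()
```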


Tailoring standard TDDFT approaches for computing UV/Vis transitions in thiocarbonyl chromophores

INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, Issue 4 2008
Julien Preat
Abstract We report the development of an accurate computational procedure for the calculation of the n → π* (λmax,1) and π → π* (λmax,2) transitions of a set of thiocarbonyl derivatives. To ensure converged results, all calculations are carried out using the 6-311+G(2df,p) basis set for the time-dependent calculations, and the 6-311G(2df,p) basis set for the ground-state geometrical optimization. Starting with two hybrids, PBE0 and B3LYP, the Hartree-Fock exchange percentage (α) used is optimized in order to reach excitation energies that fit experimental data. It turns out that BLYP(α) is the more adequate functional for calibration. For the n → π* excitation, the optimal α value lies in the 0.10-0.20 interval, whereas for the π → π* process setting α equal to 0.10 provides the most accurate results. The corresponding mean absolute errors (MAE) are limited to 17 nm for λmax,1 and to 10 nm for λmax,2, allowing a consistent and accurate prediction of both transitions. We also assess the merits of the ZINDO//AM1 scheme; it turns out that this semi-empirical method provides only a poor prediction of the λmax of thiocarbonyl derivatives, especially for the n → π* transition. © 2007 Wiley Periodicals, Inc. Int J Quantum Chem, 2008 [source]
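
The calibration step described above amounts to a one-parameter fit of the exchange fraction α against experimental band maxima. The sketch below shows that outer loop only; `tddft_lambda_max(molecule, alpha)` is a hypothetical stand-in for an actual TD-DFT calculation with α exact exchange in the functional, and the toy data in the usage block are invented for illustration.

```python
import numpy as np

def calibrate_alpha(molecules, exp_lambda_nm, tddft_lambda_max,
                    alphas=np.linspace(0.0, 0.5, 26)):
    """Grid-search the Hartree-Fock exchange fraction alpha that minimizes the
    mean absolute error (MAE, in nm) against experimental lambda_max values.

    `tddft_lambda_max(molecule, alpha)` is a hypothetical callable standing in
    for a real TD-DFT calculation with the modified functional.
    """
    results = []
    for alpha in alphas:
        calc = np.array([tddft_lambda_max(m, alpha) for m in molecules])
        mae = np.mean(np.abs(calc - np.asarray(exp_lambda_nm)))
        results.append((alpha, mae))
    best_alpha, best_mae = min(results, key=lambda t: t[1])
    return best_alpha, best_mae, results

if __name__ == "__main__":
    # Toy stand-in: pretend each compound's lambda_max shifts linearly with alpha.
    toy = lambda m, a: m["base_nm"] - 200.0 * a
    mols = [{"base_nm": 520.0}, {"base_nm": 610.0}]
    exp = [495.0, 588.0]
    print(calibrate_alpha(mols, exp, toy)[:2])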


Nonlinear wave function expansions: A progress report

INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, Issue 15 2007
Ron Shepard
Abstract Some recent progress is reported for a novel nonlinear expansion form for electronic wave functions. This expansion form is based on spin eigenfunctions using the Graphical Unitary Group Approach and the wave function is expanded in a basis of product functions, allowing application to closed and open shell systems and to ground and excited electronic states. Each product basis function is itself a multiconfigurational expansion that depends on a relatively small number of nonlinear parameters called arc factors. Efficient recursive procedures for the computation of reduced one- and two-particle density matrices, overlap matrix elements, and Hamiltonian matrix elements result in a very efficient computational procedure that is applicable to very large configuration state function (CSF) expansions. A new energy-based optimization approach is presented based on product function splitting and variational recombination. Convergence of both valence correlation energy and dynamical correlation energy with respect to the product function basis dimension is examined. A wave function analysis approach suitable for very large CSF expansions is presented based on Shavitt graph node density and arc density. Some new closed-form expressions for various Shavitt Graph and Auxiliary Pair Graph statistics are presented. © 2007 Wiley Periodicals, Inc. Int J Quantum Chem, 2007 [source]


How the choice of a computational model could rule the chemical interpretation: The Ni(II) catalyzed ethylene dimerization as a case study

JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 5 2010
Vincent Tognetti
Abstract In this article, we present a critical study of the theoretical protocol used for the determination of the nickel(II) catalyzed ethylene dimerization mechanism, considered as a representative example of the various problems related to the modeling of a catalytic cycle. The choice of an appropriate computational procedure is indeed crucial for the validity of the conclusions that will be drawn from the computational process. The influence of the exchange-correlation functional on energetic profiles and geometries, the role of the basis set describing the metal atom, as well as the importance of the chosen molecular model, have thus been examined in detail. From the obtained results, some general conclusions and guidelines are presented, which could constitute useful warnings when modeling homogeneous catalysis. In addition, the database constituted by our high-level calculations can be used within benchmarking procedures to assess the performance of new computational methods based on density functional theory. © 2009 Wiley Periodicals, Inc. J Comput Chem 2010 [source]


Enhancing MAD FA data for substructure determination

ACTA CRYSTALLOGRAPHICA SECTION D, Issue 8 2010
Hongliang Xu
Heavy-atom substructure determination is a critical step in phasing an unknown macromolecular structure. Dual-space (Shake-and-Bake) recycling is a very effective procedure for locating the substructure (heavy) atoms using FA data estimated from multiple-wavelength anomalous diffraction. However, the estimated FA are susceptible to the accumulation of errors in the individual intensity measurements at several wavelengths and from inaccurate estimation of the anomalous atomic scattering corrections f′ and f″. In this paper, a new statistical and computational procedure which merges multiple FA estimates into an averaged data set is used to further improve the quality of the estimated anomalous amplitudes. The results of 18 Se-atom substructure determinations provide convincing evidence in favor of using such a procedure to locate anomalous scatterers. [source]
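
The paper's exact statistical weighting is not given in the abstract; the sketch below uses a standard inverse-variance (1/σ²) weighted mean per reflection as one plausible way to merge several FA estimates into an averaged data set. The array layout and the toy numbers are assumptions for the example.

```python
import numpy as np

def merge_fa_estimates(fa, sigma):
    """Merge several |FA| estimates per reflection into one averaged value.

    fa, sigma : arrays of shape (n_estimates, n_reflections); NaN marks a
    missing estimate.  Uses inverse-variance weights w = 1/sigma**2, a
    standard (assumed) choice, not necessarily the weighting of the paper.
    Returns the merged amplitudes and their propagated standard deviations.
    """
    fa = np.asarray(fa, dtype=float)
    w = 1.0 / np.square(np.asarray(sigma, dtype=float))
    mask = np.isfinite(fa) & np.isfinite(w)
    w = np.where(mask, w, 0.0)
    fa = np.where(mask, fa, 0.0)
    wsum = w.sum(axis=0)
    merged = (w * fa).sum(axis=0) / wsum            # weighted mean per reflection
    merged_sigma = 1.0 / np.sqrt(wsum)              # standard error of the weighted mean
    return merged, merged_sigma

if __name__ == "__main__":
    fa = [[120.0, 95.0, np.nan], [132.0, 90.0, 40.0]]     # two estimates, three reflections
    sig = [[10.0, 8.0, np.nan], [15.0, 12.0, 9.0]]
    print(merge_fa_estimates(fa, sig))
```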


An Importance Sampling Method to Evaluate Value-at-Risk for Assets with Jump Risk

ASIA-PACIFIC JOURNAL OF FINANCIAL STUDIES, Issue 5 2009
Ren-Her Wang
Abstract Risk management is an important issue when a catastrophic event affects asset prices in the market, such as the sub-prime crisis or another financial crisis. By adding a jump term to the geometric Brownian motion, the jump diffusion model can be used to describe abnormal changes in asset prices when there is a serious event in the market. In this paper, we propose an importance sampling algorithm to compute the Value-at-Risk for linear and nonlinear assets under a multivariate jump diffusion model. More precisely, an efficient computational procedure is developed for estimating the portfolio loss probability for linear and nonlinear assets with jump risks, and the tilting measure can be separated for the diffusion and the jump parts under the assumption of independence. The simulation results show that the efficiency of importance sampling improves over naive Monte Carlo simulation by factors of 7 to 285 under various situations. We also show the robustness of the importance sampling algorithm by comparing it with the EVT-Copula method proposed by Oh and Moon (2006). [source]
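
As a minimal illustration of separating the tilting between the diffusion and jump parts, the sketch below estimates a loss probability for a single asset under a Merton-type jump diffusion: only the Brownian increment is tilted (a drift shift θ on the standard normal), with the likelihood ratio exp(-θZ + θ²/2) as the weight, while the jump component is simulated under the original measure. All model parameters, the loss threshold and the choice of θ are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

def simulate_losses(n, rng, theta=0.0, V0=1.0, T=1/52,
                    mu=0.05, sigma=0.2, lam=5.0, muJ=-0.05, sigJ=0.1):
    """One-period losses of a single asset under a Merton-type jump diffusion.

    theta shifts the Brownian increment (Z -> Z + theta); the returned weights
    are the likelihood ratios exp(-theta*Z + theta**2/2) that undo the shift.
    Jumps are left under the original measure.  Parameters are illustrative.
    """
    z = rng.standard_normal(n) + theta
    weights = np.exp(-theta * z + 0.5 * theta ** 2)
    njumps = rng.poisson(lam * T, size=n)
    jump_sum = muJ * njumps + sigJ * np.sqrt(njumps) * rng.standard_normal(n)
    logret = (mu - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * z + jump_sum
    loss = V0 * (1.0 - np.exp(logret))
    return loss, weights

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = 0.25                       # loss threshold (25% of portfolio value)
    n = 200_000

    loss, _ = simulate_losses(n, rng)                        # naive Monte Carlo
    p_naive = np.mean(loss > x)

    loss_q, w = simulate_losses(n, rng, theta=-3.0)          # tilted diffusion part
    est = (loss_q > x) * w
    p_is, se_is = est.mean(), est.std(ddof=1) / np.sqrt(n)

    print(f"naive: {p_naive:.5f}   IS: {p_is:.5f} +/- {se_is:.5f}")
```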


CLONING DATA: GENERATING DATASETS WITH EXACTLY THE SAME MULTIPLE LINEAR REGRESSION FIT

AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 4 2009
S. J. Haslett
Summary This paper presents a simple computational procedure for generating 'matching' or 'cloning' datasets so that they have exactly the same fitted multiple linear regression equation. The method is simple to implement and provides an alternative to generating datasets under an assumed model. The advantage is that, unlike the case for the straight model-based alternative, parameter estimates from the original data and the generated data do not include any model error. This distinction suggests that 'same fit' procedures may provide a general and useful alternative to model-based procedures, and have a wide range of applications. For example, as well as being useful for teaching, cloned datasets can provide a model-free way of confidentializing data. [source]
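
One simple 'same fit' construction (a sketch, not necessarily the paper's exact algorithm) keeps the design matrix X and builds a cloned response as the original fitted values plus new residuals: random noise projected onto the orthogonal complement of the column space of X and rescaled to the original residual norm, so the coefficients, fitted values and residual sum of squares all match exactly.

```python
import numpy as np

def clone_dataset(X, y, rng=None):
    """Return y_clone with exactly the same fitted multiple linear regression
    as (X, y): identical coefficients, fitted values and residual sum of squares.

    New residuals are random noise projected onto the orthogonal complement of
    col(X), then rescaled to the original residual norm.
    """
    rng = np.random.default_rng() if rng is None else rng
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    fitted = X @ beta
    resid = y - fitted
    noise = rng.standard_normal(len(y))
    noise -= X @ np.linalg.lstsq(X, noise, rcond=None)[0]    # project out col(X)
    new_resid = noise * (np.linalg.norm(resid) / np.linalg.norm(noise))
    return fitted + new_resid

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    n = 50
    X = np.column_stack([np.ones(n), rng.standard_normal((n, 2))])   # intercept + 2 covariates
    y = X @ np.array([1.0, 2.0, -0.5]) + rng.standard_normal(n)
    y2 = clone_dataset(X, y, rng)
    b1 = np.linalg.lstsq(X, y, rcond=None)[0]
    b2 = np.linalg.lstsq(X, y2, rcond=None)[0]
    print(np.allclose(b1, b2), b1, b2)    # identical fitted regression coefficients
```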


Solvent effects on the conformational distribution and optical rotation of γ-methyl paraconic acids and esters

CHIRALITY, Issue 5 2006
S. Coriani
Abstract A computational investigation of the optical rotatory power of cis and trans 2-methyl-5-oxo-tetrahydro-3-furancarboxylic acids and the corresponding methyl and ethyl esters is presented. Solvent effects on both the conformational space and the rotatory power are analyzed by comparing results obtained in vacuo with those computed, using the Polarizable Continuum Model, in methanol. A comparison with experimental observations for the optical rotatory power of the title compounds in methanol is also carried out, in a few cases also for several wavelengths. Agreement between theory and experiment is in all cases excellent, in particular when solvent effects are included both in the geometry optimization and in the calculation of the OR, thus confirming the validity of the computational procedure adopted, even for this challenging family of floppy molecules. Chirality, 2006. © 2006 Wiley-Liss, Inc. [source]


Separable approximations of space-time covariance matrices

ENVIRONMETRICS, Issue 7 2007
Marc G. Genton
Abstract Statistical modeling of space-time data has often been based on separable covariance functions, that is, covariances that can be written as a product of a purely spatial covariance and a purely temporal covariance. The main reason is that the structure of separable covariances dramatically reduces the number of parameters in the covariance matrix and thus facilitates computational procedures for large space-time data sets. In this paper, we discuss separable approximations of nonseparable space-time covariance matrices. Specifically, we describe the nearest Kronecker product approximation, in the Frobenius norm, of a space-time covariance matrix. The algorithm is simple to implement and the solution preserves properties of the space-time covariance matrix, such as symmetry, positive definiteness, and other structures. The separable approximation allows for fast kriging of large space-time data sets. We present several illustrative examples based on an application to data of Irish wind speeds, showing that only small differences in prediction error arise while computational savings for large data sets can be obtained. Copyright © 2007 John Wiley & Sons, Ltd. [source]
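
The nearest Kronecker product approximation mentioned above can be computed with the Van Loan-Pitsianis rearrangement: reshaping the space-time covariance turns the Kronecker structure into a rank-1 structure, which the leading singular triplet solves in the Frobenius norm. The sketch below assumes the joint index is ordered time-major (space varying fastest); the toy nonseparable covariance in the usage block is invented for illustration.

```python
import numpy as np

def nearest_kronecker(C, nt, ns):
    """Best Frobenius-norm approximation C ~ kron(T, S) of an (nt*ns) x (nt*ns)
    space-time covariance matrix, with T (nt x nt) temporal and S (ns x ns)
    spatial.  Assumes the joint index is ordered time-major (space varies
    fastest), so C[i*ns + k, j*ns + l] ~ T[i, j] * S[k, l].

    Van Loan-Pitsianis: rearrange C so the Kronecker structure becomes a
    rank-1 structure, then keep the leading singular triplet.
    """
    R = C.reshape(nt, ns, nt, ns).transpose(0, 2, 1, 3).reshape(nt * nt, ns * ns)
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    T = np.sqrt(s[0]) * U[:, 0].reshape(nt, nt)
    S = np.sqrt(s[0]) * Vt[0].reshape(ns, ns)
    if np.trace(T) < 0:                # fix the sign ambiguity of the SVD factors
        T, S = -T, -S
    return T, S

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    nt, ns = 6, 4
    # Toy nonseparable covariance: a separable part plus a small PSD perturbation.
    A = rng.standard_normal((nt, nt)); A = A @ A.T
    B = rng.standard_normal((ns, ns)); B = B @ B.T
    P = 0.05 * rng.standard_normal((nt * ns, nt * ns))
    C = np.kron(A, B) + P @ P.T
    T, S = nearest_kronecker(C, nt, ns)
    err = np.linalg.norm(C - np.kron(T, S)) / np.linalg.norm(C)
    print(f"relative Frobenius error of the separable approximation: {err:.3f}")
```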


ParCYCLIC: finite element modelling of earthquake liquefaction response on parallel computers

INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 12 2004
Jun Peng
Abstract This paper presents the computational procedures and solution strategy employed in ParCYCLIC, a parallel non-linear finite element program developed based on an existing serial code CYCLIC for the analysis of cyclic seismically induced liquefaction problems. In ParCYCLIC, finite elements are employed within an incremental plasticity, coupled solid-fluid formulation. A constitutive model developed for simulating liquefaction-induced deformations is a main component of this analysis framework. The elements of the computational strategy, designed for distributed-memory message-passing parallel computer systems, include: (a) an automatic domain decomposer to partition the finite element mesh; (b) nodal ordering strategies to minimize storage space for the matrix coefficients; (c) an efficient scheme for the allocation of sparse matrix coefficients among the processors; and (d) a parallel sparse direct solver. Application of ParCYCLIC to simulate 3-D geotechnical experimental models is demonstrated. The computational results show excellent parallel performance and scalability of ParCYCLIC on parallel computers with a large number of processors. Copyright © 2004 John Wiley & Sons, Ltd. [source]
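
Item (b) above, nodal ordering to reduce storage for the matrix coefficients, can be illustrated with a standard bandwidth-reducing reordering; the sketch below uses SciPy's reverse Cuthill-McKee routine on a deliberately badly numbered grid Laplacian. This is a generic illustration only; ParCYCLIC's actual ordering strategy is not specified in the abstract.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee

def bandwidth(A):
    """Maximum |i - j| over the nonzero entries of a sparse matrix."""
    coo = sp.coo_matrix(A)
    return int(np.abs(coo.row - coo.col).max()) if coo.nnz else 0

if __name__ == "__main__":
    # Toy 'stiffness-like' matrix: 5-point Laplacian on a 15 x 15 grid whose
    # nodes have been numbered in a deliberately bad (random) order.
    nx = 15
    lap1d = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(nx, nx))
    A = sp.kronsum(lap1d, lap1d).tocsr()                    # 2-D grid Laplacian, n = 225
    bad = np.random.default_rng(0).permutation(A.shape[0])
    A_bad = A[bad][:, bad].tocsr()

    perm = reverse_cuthill_mckee(A_bad, symmetric_mode=True)
    A_rcm = A_bad[perm][:, perm]

    print("bandwidth, bad ordering :", bandwidth(A_bad))    # large
    print("bandwidth, RCM ordering :", bandwidth(A_rcm))    # close to the grid width
```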


Parallel computing of high-speed compressible flows using a node-based finite-element method

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 3 2003
T. Fujisawa
Abstract An efficient parallel computing method for high-speed compressible flows is presented. The numerical analysis of flows with shocks requires very fine computational grids and grid generation requires a great deal of time. In the proposed method, all computational procedures, from the mesh generation to the solution of a system of equations, can be performed seamlessly in parallel in terms of nodes. Local finite-element mesh is generated robustly around each node, even for severe boundary shapes such as cracks. The algorithm and the data structure of finite-element calculation are based on nodes, and parallel computing is realized by dividing a system of equations by the row of the global coefficient matrix. The inter-processor communication is minimized by renumbering the nodal identification number using ParMETIS. The numerical scheme for high-speed compressible flows is based on the two-step Taylor-Galerkin method. The proposed method is implemented on distributed memory systems, such as an Alpha PC cluster, and a parallel supercomputer, Hitachi SR8000. The performance of the method is illustrated by the computation of supersonic flows over a forward-facing step. The numerical examples show that crisp shocks are effectively computed on multiprocessors at high efficiency. Copyright © 2003 John Wiley & Sons, Ltd. [source]


Strain-driven homogenization of inelastic microstructures and composites based on an incremental variational formulation

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 11 2002
Christian Miehe
Abstract The paper investigates computational procedures for the treatment of a homogenized macro-continuum with locally attached micro-structures of inelastic constituents undergoing small strains. The point of departure is a general internal variable formulation that determines the inelastic response of the constituents of a typical micro-structure as a generalized standard medium in terms of an energy storage and a dissipation function. Consistent with this type of inelasticity we develop a new incremental variational formulation of the local constitutive response where a quasi-hyperelastic micro-stress potential is obtained from a local minimization problem with respect to the internal variables. It is shown that this local minimization problem determines the internal state of the material for finite increments of time. We specify the local variational formulation for a setting of smooth single-surface inelasticity and discuss its numerical solution based on a time discretization of the internal variables. The existence of the quasi-hyperelastic stress potential allows the extension of homogenization approaches of elasticity to the incremental setting of inelasticity. Focusing on macro-strain-driven micro-structures, we develop a new incremental variational formulation of the global homogenization problem where a quasi-hyperelastic macro-stress potential is obtained from a global minimization problem with respect to the fine-scale displacement fluctuation field. It is shown that this global minimization problem determines the state of the micro-structure for finite increments of time. We consider three different settings of the global variational problem for prescribed linear displacements, periodic fluctuations and constant stresses on the boundary of the micro-structure and discuss their numerical solutions based on a spatial discretization of the fine-scale displacement fluctuation field. The performance of the proposed methods is demonstrated for the model problem of von Mises-type elasto-visco-plasticity of the constituents and applied to a comparative study of micro-to-macro transitions of inelastic composites. Copyright © 2002 John Wiley & Sons, Ltd. [source]


Modelling artificial night-sky brightness with a polarized multiple scattering radiative transfer computer code

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 4 2006
Dana Xavier Kerola
ABSTRACT As part of an ongoing investigation of radiative effects produced by hazy atmospheres, computational procedures have been developed for use in determining the brightening of the night sky as a result of urban illumination. The downwardly and upwardly directed radiances of multiply scattered light from an offending metropolitan source are computed by a straightforward Gauss-Seidel (G-S) iterative technique applied directly to the integrated form of Chandrasekhar's vectorized radiative transfer equation. Initial benchmark night-sky brightness tests of the present G-S model using fully consistent optical emission and extinction input parameters yield very encouraging results when compared with the double scattering treatment of Garstang, the only full-fledged previously available model. [source]
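
The Gauss-Seidel technique named above is applied in the paper to the integrated, vectorized radiative transfer equation; reproducing that equation is beyond the abstract, but the iteration itself is simple. The sketch below is a generic Gauss-Seidel solver for a linear system Ax = b, assumed diagonally dominant so the sweeps converge; it stands in for the iteration, not for the radiative transfer implementation.

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_sweeps=500):
    """Solve A x = b by Gauss-Seidel sweeps: each unknown is updated in place
    using the latest values of the others.  Converges for strictly diagonally
    dominant (or symmetric positive definite) A.
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for sweep in range(max_sweeps):
        x_old = x.copy()
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            return x, sweep + 1
    return x, max_sweeps

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.random((6, 6)) + 6 * np.eye(6)     # diagonally dominant test matrix
    b = rng.random(6)
    x, sweeps = gauss_seidel(A, b)
    print(sweeps, np.allclose(A @ x, b))
```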

