Accurate Approximation

Selected Abstracts


The Influence of Flowing Water on the Resource Pursuit-Risk Avoidance Tradeoff in the Crayfish Orconectes virilis

ETHOLOGY, Issue 4 2006
Keith W. Pecor
The influence of hydrodynamics on chemically mediated behavioral tradeoffs has received little attention. We tested the hypothesis that individuals of the crayfish Orconectes virilis would be more sensitive to chemical cues in flowing water than in still water. Orconectes virilis is a good subject for this test, because it is found in both still water (e.g. ponds) and flowing water (e.g. rivers). A factorial design was used, with two stimulus treatments and two habitat types. Crayfish were exposed to either food cue or food + alarm cue in either still water or flowing water in an artificial stream arena. Habitat use and activity were significantly influenced by stimulus treatment, with more time spent away from the stimulus source and less activity in the food + alarm treatment than in the food treatment. Neither habitat type nor the interaction of stimulus treatment and habitat type had a significant effect on the response variables. Given the natural history of O. virilis, we suggest that selection has favored the ability to utilize chemical cues equally in both still and flowing water. We acknowledge that different flow conditions may influence chemical ecology in this species and caution against the view that tests in flowing waters necessarily provide a more accurate approximation of natural responses. [source]


The extended/generalized finite element method: An overview of the method and its applications

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 3 2010
Thomas-Peter Fries
Abstract An overview of the extended/generalized finite element method (GFEM/XFEM) with emphasis on methodological issues is presented. This method enables the accurate approximation of solutions that involve jumps, kinks, singularities, and other locally non-smooth features within elements. This is achieved by enriching the polynomial approximation space of the classical finite element method. The GFEM/XFEM has shown its potential in a variety of applications that involve non-smooth solutions near interfaces; among them are the simulation of cracks, shear bands, dislocations, solidification, and multi-field problems. Copyright © 2010 John Wiley & Sons, Ltd. [source]


Higher-order XFEM for curved strong and weak discontinuities

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 5 2010
Kwok Wah Cheng
Abstract The extended finite element method (XFEM) enables the accurate approximation of solutions with jumps or kinks within elements. Optimal convergence rates have frequently been achieved for linear elements and piecewise planar interfaces. Higher-order convergence for arbitrary curved interfaces relies on two major issues: (i) an accurate quadrature of the Galerkin weak form for the cut elements and (ii) a careful formulation of the enrichment, which should preclude any problems in the blending elements. For (i), we employ a strategy of subdividing the elements into subcells with only one curved side. Reference elements that are higher-order on only one side are then used to map the integration points to the real element. For (ii), we find that enrichments for strong discontinuities are easily extended to higher-order accuracy. In contrast, problems in blending elements may hinder optimal convergence for weak discontinuities. Different formulations are investigated, including the corrected XFEM. Numerical results for several test cases involving strong or weak curved discontinuities are presented. Quadratic and cubic approximations are investigated. Optimal convergence rates are achieved using the standard XFEM for the case of a strong discontinuity. Close-to-optimal convergence rates for the case of a weak discontinuity are achieved using the corrected XFEM. Copyright © 2009 John Wiley & Sons, Ltd. [source]
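As an aside for readers new to XFEM (a sketch not drawn from the paper itself): the essential enrichment idea can be shown in one dimension, where a shifted Heaviside enrichment lets a single linear element carry an exact jump. The interface location x_c and jump size c below are arbitrary illustrative choices.

```python
import numpy as np

# One linear element on [0, 1], cut by an interface at x_c.
def N(x):
    return np.array([1.0 - x, x])       # standard shape functions N0, N1

x_c = 0.3
H = lambda x: np.sign(x - x_c)          # Heaviside (sign) enrichment
nodes = np.array([0.0, 1.0])

# Shifted enrichment psi_j(x) = N_j(x) * (H(x) - H(x_j)) vanishes at the
# nodes, so the standard dofs keep their physical meaning.
def u(x, u_std, a_enr):
    shapes = N(x)
    psi = shapes * (H(x) - H(nodes))
    return shapes @ u_std + psi @ a_enr

# Represent a pure jump of size c across x_c: standard dofs zero,
# both enriched dofs equal to c/2.
c = 2.5
u_std = np.zeros(2)
a_enr = np.full(2, c / 2)

eps = 1e-9
jump = u(x_c + eps, u_std, a_enr) - u(x_c - eps, u_std, a_enr)
print(jump)   # equals c, by the partition of unity of N0, N1
```

The shifted form is what keeps blending elements clean for strong discontinuities; the weak (kink) case discussed in the abstract needs more care.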


A geometrically and materially non-linear piezoelectric three-dimensional-beam finite element formulation including warping effects

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 5 2008
A. Butz
Abstract This paper is concerned with a three-dimensional piezoelectric beam formulation and its finite element implementation. The developed model considers geometrically and materially non-linear effects. An eccentric beam formulation is derived based on the Timoshenko kinematics. The kinematic assumptions are extended by three additional warping functions of the cross section. These functions follow from torsion and piezoelectrically induced shear deformations. The presented beam formulation incorporates large displacements and finite rotations and allows the investigation of stability problems. The finite element model has two nodes with nine mechanical and five electrical degrees of freedom. It provides an accurate approximation of the electric potential, which is assumed to be linear in the direction of the beam axis and quadratic within the cross section. The mechanical degrees of freedom are three displacements, three rotations and three scaling factors for the warping functions. The latter are computed in a preprocess by solving a two-dimensional in-plane equilibrium condition with the finite element method. The resulting warping patterns are accounted for in the integration over the cross section of the beam formulation. With respect to material non-linearities, which arise in ferroelectric materials, the scalar Preisach model is embedded in the formulation. The Preisach model is a general mathematical description of hysteresis phenomena; its application to piezoelectric materials leads to a phenomenological model for ferroelectric hysteresis effects. Here, the polarization direction is assumed to be constant, which leads to unidirectional constitutive equations. Some examples demonstrate the capability of the proposed model. Copyright © 2008 John Wiley & Sons, Ltd. [source]
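To illustrate the scalar Preisach model mentioned above (a minimal sketch, not the authors' implementation): hysteresis is modelled as a weighted superposition of elementary relays on the half-plane alpha >= beta, where alpha and beta are the up- and down-switching thresholds. The grid resolution and uniform weights below are illustrative assumptions.

```python
import numpy as np

# Discretize the Preisach half-plane (alpha >= beta) into relays.
n = 50
thresholds = np.linspace(-1.0, 1.0, n)
alpha, beta = np.meshgrid(thresholds, thresholds, indexing="ij")
mask = alpha >= beta                     # admissible relays
state = -np.ones_like(alpha)             # all relays start "down"

def apply_input(u):
    """Update every relay for input u and return the weighted output."""
    global state
    state = np.where(mask & (u >= alpha), 1.0, state)    # switch up
    state = np.where(mask & (u <= beta), -1.0, state)    # switch down
    return state[mask].sum() / mask.sum()                # uniform weights

# Drive the input up to +1 and back down to 0: the output at u = 0
# differs from the virgin state, i.e. the model retains a remanence.
virgin = apply_input(0.0)
for u in np.linspace(0.0, 1.0, 100):
    apply_input(u)
out = 0.0
for u in np.linspace(1.0, 0.0, 100):
    out = apply_input(u)
print(virgin, out)   # remanent output exceeds the virgin output
```

The memory lives entirely in the relay states, which is what makes the model phenomenological rather than physics-based.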


A robust design method using variable transformation and Gauss–Hermite integration

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 12 2006
Beiqing Huang
Abstract Robust design seeks an optimal solution where the design objective is insensitive to the variations of input variables while the design feasibility under the variations is maintained. Accurate robustness assessment for both design objective and feasibility usually requires an intensive computational effort. In this paper, an accurate robustness assessment method with a moderate computational effort is proposed. The numerical Gauss–Hermite integration technique is employed to calculate the mean and standard deviation of the objective and constraint functions. To effectively use the Gauss–Hermite integration technique, a transformation from a general random variable into a normal variable is performed. The Gauss–Hermite integration and the transformation result in concise formulas and produce an accurate approximation to the mean and standard deviation. This approach is then incorporated into the framework of robust design optimization. The design of a two-bar truss and an automobile torque arm is used to demonstrate the effectiveness of the proposed method. The results are compared with the commonly used Taylor expansion method and Monte Carlo simulation in terms of accuracy and efficiency. Copyright © 2005 John Wiley & Sons, Ltd. [source]
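The core of the Gauss–Hermite step can be sketched as follows (a minimal illustration for a normal input, omitting the paper's variable transformation for general random variables):

```python
import numpy as np

def gh_mean_std(g, mu, sigma, n=16):
    """Mean and standard deviation of g(X), X ~ N(mu, sigma^2),
    via n-point Gauss-Hermite quadrature."""
    x, w = np.polynomial.hermite.hermgauss(n)   # nodes/weights for e^{-x^2}
    y = g(mu + np.sqrt(2.0) * sigma * x)        # change of variable
    m1 = (w * y).sum() / np.sqrt(np.pi)         # first moment
    m2 = (w * y**2).sum() / np.sqrt(np.pi)      # second moment
    return m1, np.sqrt(max(m2 - m1**2, 0.0))

# Example: g(x) = x^2 with X ~ N(0, 1) has mean 1 and variance 2.
mean, std = gh_mean_std(lambda x: x**2, 0.0, 1.0)
print(mean, std)   # ≈ 1.0, ≈ 1.4142
```

With n nodes the quadrature is exact for polynomial integrands up to degree 2n − 1, which is why a small n already gives an accurate mean and standard deviation.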


Further experiences with computing non-hydrostatic free-surface flows involving water waves

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 2 2005
Marcel Zijlema
Abstract A semi-implicit, staggered finite volume technique for non-hydrostatic, free-surface flow governed by the incompressible Euler equations is presented that has a proper balance between accuracy, robustness and computing time. The procedure is intended to be used for predicting wave propagation in coastal areas. The splitting of the pressure into hydrostatic and non-hydrostatic components is utilized. To ease the task of discretization and to enhance the accuracy of the scheme, a vertical boundary-fitted co-ordinate system is employed, permitting more resolution near the bottom as well as near the free surface. The issue of the implementation of boundary conditions is addressed. As recently proposed by the present authors, the Keller-box scheme, which gives an accurate approximation of frequency wave dispersion with only a limited vertical resolution, is incorporated. A solution that conserves mass both locally and globally is achieved with the aid of a projection method in the discrete sense. An efficient preconditioned Krylov subspace technique for solving the discretized Poisson equation for the pressure correction, which has an unsymmetric matrix, is described. Some numerical experiments to show the accuracy, robustness and efficiency of the proposed method are presented. Copyright © 2004 John Wiley & Sons, Ltd. [source]


On parameter estimation of a simple real-time flow aggregation model

INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 7 2006
Huirong Fu
Abstract There exists a clear need for a comprehensive framework for accurately analysing and realistically modelling the key traffic statistics that determine network performance. Recently, a novel traffic model, sinusoid with uniform noise (SUN), has been proposed, which outperforms other models in that it can simultaneously achieve tractability, parsimony, accuracy (in predicting network performance), and efficiency (in real-time capability). In this paper, we design, evaluate and compare several estimation approaches, including variance-based estimation (Var), minimum mean-square-error-based estimation (MMSE), MMSE with the constraint of variance (Var+MMSE), MMSE of autocorrelation function with the constraint of variance (Var+AutoCor+MMSE), and variance of secondary demand-based estimation (Secondary Variance), to determine the key parameters in the SUN model. Integrated with the SUN model, all the proposed methods are able to capture the basic behaviour of the aggregation reservation system and closely approximate the system performance. In addition, we find that: (1) the Var is very simple to operate and provides both upper and lower performance bounds. It can be integrated into other methods to provide very accurate approximation to the aggregation's performance and thus obtain an accurate solution; (2) Var+AutoCor+MMSE is superior to the other proposed methods in the accuracy with which it determines system performance; and (3) Var+MMSE and Var+AutoCor+MMSE differ from the other three methods in that both adopt an experimental analysis method, which helps to improve the prediction accuracy while reducing computation complexity. Copyright © 2005 John Wiley & Sons, Ltd. [source]


Simulations of Xe@C60 collisions with graphitic films

INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, Issue 15 2008
Victor V. Albert
Abstract Collisions between Xe@C60 and sheets of graphite of various dimensions were simulated. A Tersoff many-body potential modeled the interactions between carbon atoms and a Lennard-Jones potential simulated the xenon-carbon interactions. The simulations were compared with experiment and with simulations that implemented other potentials. The results indicate that a relatively small graphite film can be an accurate approximation for a nearly infinite sheet of graphite. © 2008 Wiley Periodicals, Inc. Int J Quantum Chem, 2008 [source]
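For reference, the 12-6 Lennard-Jones pair potential used for the xenon-carbon interactions has a simple closed form; the unit parameters below are illustrative, not the Xe-C values from the paper.

```python
import math

def lj(r, epsilon=1.0, sigma=1.0):
    """12-6 Lennard-Jones pair potential V(r) = 4*eps*((s/r)^12 - (s/r)^6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# The potential crosses zero at r = sigma and reaches its minimum value
# -epsilon at r = 2**(1/6) * sigma.
print(lj(1.0), lj(2 ** (1 / 6)))
```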


Maximum norm error bounds of ADI and compact ADI methods for solving parabolic equations

NUMERICAL METHODS FOR PARTIAL DIFFERENTIAL EQUATIONS, Issue 1 2010
Hong-Lin Liao
Abstract Alternating direction implicit (ADI) schemes are computationally efficient and widely utilized for numerical approximation of multidimensional parabolic equations. By using the discrete energy method, it is shown that the ADI solution is unconditionally convergent with order two in the maximum norm. Considering an asymptotic expansion of the difference solution, we obtain a fourth-order, in both time and space, approximation by one Richardson extrapolation. Extension of our technique to the higher-order compact ADI schemes also yields the maximum norm error estimate of the discrete solution. A further extrapolation yields a sixth-order accurate approximation when the time step is proportional to the square of the spatial size. A numerical example is presented to support our theoretical results. © 2008 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq 2010 [source]
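The Richardson-extrapolation idea invoked above can be sketched on a simpler problem (a difference quotient rather than the ADI scheme itself): one extrapolation cancels the leading O(h²) error term of a second-order formula.

```python
import math

def d2(f, x, h):
    """Second-order central difference approximation of f''(x)."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

def richardson(f, x, h):
    """One Richardson extrapolation of the O(h^2) formula, giving O(h^4)."""
    return (4.0 * d2(f, x, h / 2) - d2(f, x, h)) / 3.0

f, x, h = math.sin, 1.0, 0.1
exact = -math.sin(1.0)
print(abs(d2(f, x, h) - exact), abs(richardson(f, x, h) - exact))
```

The combination (4D(h/2) − D(h))/3 is chosen so that the h² terms of the two expansions cancel exactly, leaving an O(h⁴) error.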


Fitting and comparing seed germination models with a focus on the inverse normal distribution

AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 3 2004
Michael E. O'Neill
Summary This paper reviews current methods for fitting a range of models to censored seed germination data and recommends adoption of a probability-based model for the time to germination. It shows that, provided the probability of a seed eventually germinating is not on the boundary, maximum likelihood estimates, their standard errors and the resultant deviances are identical whether only those seeds which have germinated are used or all seeds (including seeds ungerminated at the end of the experiment). The paper recommends analysis of deviance when exploring whether replicate data are consistent with a hypothesis that the underlying distributions are identical, and when assessing whether data from different treatments have underlying distributions with common parameters. The inverse normal distribution, otherwise known as the inverse Gaussian distribution, is discussed as a natural distribution for the time to germination (including a parameter to measure the lag time to germination). The paper explores some of the properties of this distribution, evaluates the standard errors of the maximum likelihood estimates of the parameters and suggests an accurate approximation to the cumulative distribution function and the median time to germination. Additional material is on the web, at http://www.agric.usyd.edu.au/staff/oneill/. [source]
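The inverse Gaussian CDF itself has a well-known closed form in terms of the normal CDF (shown below as a generic sketch; the paper's own approximation and parameterization may differ):

```python
import math

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def invgauss_cdf(x, mu, lam):
    """Closed-form CDF of the inverse Gaussian(mu, lam) distribution."""
    s = math.sqrt(lam / x)
    return (norm_cdf(s * (x / mu - 1.0))
            + math.exp(2.0 * lam / mu) * norm_cdf(-s * (x / mu + 1.0)))

def invgauss_pdf(x, mu, lam):
    return math.sqrt(lam / (2.0 * math.pi * x**3)) * math.exp(
        -lam * (x - mu) ** 2 / (2.0 * mu**2 * x))

print(invgauss_cdf(2.0, 1.0, 1.0))
```

Note the exp(2λ/μ) factor can overflow for strongly peaked distributions, which is one reason dedicated approximations are of interest.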


Fast and Efficient Skinning of Animated Meshes

COMPUTER GRAPHICS FORUM, Issue 2 2010
L. Kavan
Abstract Skinning is a simple yet popular deformation technique combining compact storage with efficient hardware accelerated rendering. While skinned meshes (such as virtual characters) are traditionally created by artists, previous work proposes algorithms to construct skinning automatically from a given vertex animation. However, these methods typically perform well only for a certain class of input sequences and often require long pre-processing times. We present an algorithm based on iterative coordinate descent optimization which handles arbitrary animations and produces more accurate approximations than previous techniques, while using only standard linear skinning without any modifications or extensions. To overcome the computational complexity associated with the iterative optimization, we work in a suitable linear subspace (obtained by quick approximate dimensionality reduction) and take advantage of the typically very sparse vertex weights. As a result, our method requires about one or two orders of magnitude less pre-processing time than previous methods. [source]
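Standard linear blend skinning, the representation the method targets, is compact to state: each vertex is transformed by a weighted average of bone transforms. A minimal sketch (with illustrative bone transforms, not the paper's data):

```python
import numpy as np

def skin(vertex, bones, weights):
    """Linear blend skinning: v' = sum_i w_i * (T_i @ v)."""
    v = np.append(vertex, 1.0)                  # homogeneous coordinates
    out = np.zeros(3)
    for T, w in zip(bones, weights):
        out += w * (T @ v)[:3]
    return out

def translation(t):
    T = np.eye(4)
    T[:3, 3] = t
    return T

# A vertex influenced half-and-half by two translating bones moves by
# the average of the two translations.
bones = [translation([1.0, 0.0, 0.0]), translation([0.0, 1.0, 0.0])]
v = skin(np.array([0.0, 0.0, 0.0]), bones, [0.5, 0.5])
print(v)   # [0.5, 0.5, 0.0]
```

Because the weights are typically very sparse (few bones per vertex), both the GPU evaluation and the optimization described in the abstract stay cheap.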


Temperature-dependent pseudopotential between two pointlike electrical charges

CONTRIBUTIONS TO PLASMA PHYSICS, Issue 5-6 2003
M.-M. Gombert
Abstract The pair distribution functions for electrically charged particles at a temperature T, expressed in terms of density matrices, and the corresponding pseudopotentials are studied. For an electron pair, the symmetry of the wave functions is taken into account. Exact expansions with respect to the separation distance and to a quantum parameter (∝ T½) are carried out. The known results are recovered. For high temperature, accurate approximations are derived. (© 2003 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


Comonotonic Approximations for Optimal Portfolio Selection Problems

JOURNAL OF RISK AND INSURANCE, Issue 2 2005
J. Dhaene
We investigate multiperiod portfolio selection problems in a Black and Scholes type market where a basket of one risk-free and m risky securities are traded continuously. We look for the optimal allocation of wealth within the class of "constant mix" portfolios. First, we consider the portfolio selection problem of a decision maker who invests money at predetermined points in time in order to obtain a target capital at the end of the time period under consideration. A second problem concerns a decision maker who invests some amount of money (the initial wealth or provision) in order to be able to fulfil a series of future consumptions or payment obligations. Several optimality criteria and their interpretation within Yaari's dual theory of choice under risk are presented. For both selection problems, we propose accurate approximations based on the concept of comonotonicity, as studied in Dhaene et al. (2002a,b). Our analytical approach avoids simulation, and hence reduces the computing effort drastically. [source]
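The property that makes comonotonic approximations tractable is quantile additivity: the quantile function of a comonotonic sum is the sum of the marginal quantile functions. A sketch with illustrative lognormal marginals (not the paper's market parameters):

```python
import math
from statistics import NormalDist

def lognormal_quantile(p, mu, sigma):
    """Quantile of a lognormal(mu, sigma) random variable."""
    return math.exp(mu + sigma * NormalDist().inv_cdf(p))

def comonotonic_sum_quantile(p, params):
    """Quantiles of a comonotonic sum are the sums of the quantiles."""
    return sum(lognormal_quantile(p, mu, sigma) for mu, sigma in params)

# Two comonotonic lognormal payoffs: the median of the sum is the sum
# of the medians (illustrative parameters).
params = [(0.05, 0.2), (0.08, 0.3)]
q50 = comonotonic_sum_quantile(0.5, params)
print(q50)   # exp(0.05) + exp(0.08)
```

This additivity replaces a high-dimensional distribution problem by one-dimensional quantile evaluations, which is why the approach avoids simulation.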


Approximate Bayesian inference for latent Gaussian models by using integrated nested Laplace approximations

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 2 2009
Håvard Rue
Summary. Structured additive regression models are perhaps the most commonly used class of models in statistical applications. It includes, among others, (generalized) linear models, (generalized) additive models, smoothing spline models, state space models, semiparametric regression, spatial and spatiotemporal models, log-Gaussian Cox processes and geostatistical and geoadditive models. We consider approximate Bayesian inference in a popular subset of structured additive regression models, latent Gaussian models, where the latent field is Gaussian, controlled by a few hyperparameters and with non-Gaussian response variables. The posterior marginals are not available in closed form owing to the non-Gaussian response variables. For such models, Markov chain Monte Carlo methods can be implemented, but they are not without problems, in terms of both convergence and computational time. In some practical applications, the extent of these problems is such that Markov chain Monte Carlo sampling is simply not an appropriate tool for routine analysis. We show that, by using an integrated nested Laplace approximation and its simplified version, we can directly compute very accurate approximations to the posterior marginals. The main benefit of these approximations is computational: where Markov chain Monte Carlo algorithms need hours or days to run, our approximations provide more precise estimates in seconds or minutes. Another advantage with our approach is its generality, which makes it possible to perform Bayesian analysis in an automatic, streamlined way, and to compute model comparison criteria and various predictive measures so that models can be compared and the model under study can be challenged. [source]
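At the heart of the approach is the classical Laplace approximation: replace an integrand by a Gaussian centred at its mode. A one-dimensional toy sketch (a Beta-like posterior with illustrative parameters, far simpler than the paper's nested scheme):

```python
import math

# Laplace approximation to a normalizing constant: for a Beta-like
# posterior p(t) proportional to t^a (1-t)^b, approximate the integral
# over (0, 1) by a Gaussian centred at the mode.
a, b = 20.0, 20.0
mode = a / (a + b)
log_p = lambda t: a * math.log(t) + b * math.log(1.0 - t)
curv = a / mode**2 + b / (1.0 - mode) ** 2   # -(log p)'' at the mode
laplace = math.exp(log_p(mode)) * math.sqrt(2.0 * math.pi / curv)

# Exact value is the Beta function B(a+1, b+1).
exact = math.exp(math.lgamma(a + 1) + math.lgamma(b + 1)
                 - math.lgamma(a + b + 2))
print(laplace / exact)   # close to 1 for a peaked posterior
```

The accuracy improves as the posterior becomes more peaked, which is the regime where latent Gaussian models typically operate.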


Modelling cell generation times by using the tempered stable distribution

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 4 2008
Karen J. Palmer
Summary. We show that the family of tempered stable distributions has considerable potential for modelling cell generation time data. Several real examples illustrate how these distributions can improve on currently assumed models, including the gamma and inverse Gaussian distributions which arise as special cases. Our applications concentrate on the generation times of oligodendrocyte progenitor cells and the yeast Saccharomyces cerevisiae. Numerical inversion of the Laplace transform of the probability density function provides fast and accurate approximations to the tempered stable density, for which no closed form generally exists. We also show how the asymptotic population growth rate is easily calculated under a tempered stable model. [source]
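Numerical Laplace inversion can be sketched with the Gaver–Stehfest algorithm (one common choice; the paper may use a different inversion method). The sanity check uses a transform with a known inverse:

```python
import math

def stehfest_coefficients(N):
    """Gaver-Stehfest weights V_k for even N."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j)
                  / (math.factorial(N // 2 - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        V.append((-1) ** (N // 2 + k) * s)
    return V

def invert_laplace(F, t, N=14):
    """Approximate f(t) from its Laplace transform F(s)."""
    V = stehfest_coefficients(N)
    ln2t = math.log(2.0) / t
    return ln2t * sum(V[k - 1] * F(k * ln2t) for k in range(1, N + 1))

# Sanity check on a transform with a known inverse: L{e^{-t}} = 1/(s+1).
approx = invert_laplace(lambda s: 1.0 / (s + 1.0), 1.0)
print(approx, math.exp(-1.0))
```

The weights grow rapidly with N, so in double precision the method is limited to moderate N and smooth target densities; this trade-off is why the choice of inversion scheme matters in practice.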


An excursion set model of hierarchical clustering: ellipsoidal collapse and the moving barrier

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 1 2002
Ravi K. Sheth
The excursion set approach allows one to estimate the abundance and spatial distribution of virialized dark matter haloes efficiently and accurately. The predictions of this approach depend on how the non-linear processes of collapse and virialization are modelled. We present simple analytic approximations that allow us to compare the excursion set predictions associated with spherical and ellipsoidal collapse. In particular, we present formulae for the universal unconditional mass function of bound objects and the conditional mass function which describes the mass function of the progenitors of haloes in a given mass range today. We show that the ellipsoidal collapse based moving barrier model provides a better description of what we measure in the numerical simulations than the spherical collapse based constant barrier model, although the agreement between model and simulations is better at large lookback times. Our results for the conditional mass function can be used to compute accurate approximations to the local-density mass function, which quantifies the tendency for massive haloes to populate denser regions than less massive haloes. This happens because low-density regions can be thought of as being collapsed haloes viewed at large lookback times, whereas high-density regions are collapsed haloes viewed at small lookback times. Although we have applied our analytic formulae only to two simple barrier shapes, we show that they are, in fact, accurate for a wide variety of moving barriers. We suggest how they can be used to study the case in which the initial dark matter distribution is not completely cold. [source]
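For the constant-barrier (spherical collapse) case, the first-crossing distribution has the closed Press–Schechter form; the sketch below checks numerically that it integrates to one, since every Brownian walk eventually crosses a constant barrier. (This is the textbook constant-barrier result, not the paper's moving-barrier formula; delta_c = 1.686 is the usual spherical-collapse value.)

```python
import math

def first_crossing(S, delta_c):
    """First-crossing distribution of a constant barrier delta_c by
    Brownian random walks in the variance S (Press-Schechter form)."""
    return (delta_c / math.sqrt(2.0 * math.pi * S**3)
            * math.exp(-delta_c**2 / (2.0 * S)))

delta_c = 1.686
h, total = 0.01, 0.0
for i in range(1_000_000):                   # midpoint rule on (0, 10000)
    total += first_crossing((i + 0.5) * h, delta_c) * h
print(total)   # approaches 1 as the upper limit grows
```

The slow S^(-3/2) tail of this distribution is what makes the truncated numerical integral fall slightly short of one; for a moving barrier the shape changes but the first-crossing interpretation is the same.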


Conditional length distributions induced by the coverage of two points by a Poisson Voronoï tessellation: application to a telecommunication model

APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 4 2006
Catherine Gloaguen
Abstract The end points of a fixed segment in the Euclidian plane covered by a Poisson Voronoļ tessellation belong to the same cell or to two distinct cells. This marks off one or two points of the underlying Poisson process that are the nucleus(i) of the cell(s). Our interest lies in the geometrical relationship between these nuclei and the segment end points as well as between the nuclei. We investigate their probability distribution functions conditioning on the number of nuclei, taking into account the length of the segment. The aim of the study is to establish some tools to be used for the analysis of a telecommunication problem related to the pricing of leased lines. We motivate and give accurate approximations of the probability of common coverage and of the length distributions that can be included in spreadsheet codes as an element of simple cost functions. Copyright © 2006 John Wiley & Sons, Ltd. [source]