Approximation Used

Selected Abstracts


A posteriori error estimation for extended finite elements by an extended global recovery

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 8 2008
Marc Duflot
Abstract This contribution presents an extended global derivative recovery for enriched finite element methods (FEMs), such as the extended FEM, along with an associated error indicator. Owing to its simplicity, the proposed scheme is ideally suited to industrial applications. The procedure is based on global minimization of the L2 norm of the difference between the raw strain field (C⁻¹) and the recovered (C⁰) strain field. The methodology engineered in this paper extends the ideas of Oden and Brauchli (Int. J. Numer. Meth. Engng 1971; 3) and Hinton and Campbell (Int. J. Numer. Meth. Engng 1974; 8) by enriching the approximation used for the construction of the recovered derivatives (strains) with the gradients of the functions employed to enrich the approximation employed for the primal unknown (displacements). We show linear elastic fracture mechanics examples, both in simple two-dimensional settings and for a three-dimensional structure. Numerically, we show that the effectivity index of the proposed indicator converges to unity upon mesh refinement. Consequently, the approximate error converges to the exact error, indicating that the error indicator is valid. Additionally, the numerical examples suggest a novel adaptive strategy for enriched approximations in which the dimensions of the enrichment zone are first increased, before standard h- and p-adaptivities are applied; we suggest coining this methodology e-adaptivity. Copyright © 2008 John Wiley & Sons, Ltd. [source]
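
For readers unfamiliar with recovery-based estimators, the sketch below illustrates the underlying Oden-Brauchli idea in its simplest, unenriched form: a one-dimensional global L2 projection of the element-wise (C⁻¹) strain onto a continuous (C⁰) basis. It is a minimal illustration only; the mesh, the quadratic test field, and the plain linear basis are placeholders, and the enrichment of the recovery basis described in the abstract is not included.

```python
import numpy as np

# Minimal 1D illustration of global L2 strain recovery: project the
# element-wise constant ("raw") strain onto the continuous (C0) linear
# finite element basis by solving M s = f, where M is the global mass
# matrix and f_i is the integral of N_i times the raw strain.

def recover_strain(nodes, u):
    n = len(nodes)
    M = np.zeros((n, n))
    f = np.zeros(n)
    for e in range(n - 1):
        h = nodes[e + 1] - nodes[e]
        eps_raw = (u[e + 1] - u[e]) / h           # constant strain on element e
        # element mass matrix for linear shape functions
        Me = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
        fe = eps_raw * h / 2.0 * np.ones(2)       # integral of N_i * eps_raw
        M[e:e + 2, e:e + 2] += Me
        f[e:e + 2] += fe
    return np.linalg.solve(M, f)                  # nodal values of the recovered C0 strain

# Example: u(x) = x^2 on [0, 1], so the exact strain is 2x; the recovered
# field converges to it under mesh refinement.
nodes = np.linspace(0.0, 1.0, 11)
u = nodes**2
s = recover_strain(nodes, u)
print(np.max(np.abs(s - 2.0 * nodes)))            # small, shrinks as the mesh is refined
```

The associated error indicator would then be built from the L2 norm of the difference between the recovered and raw strain fields, in the spirit of the minimization described above.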


Numerical approximation of a thermally driven interface using finite elements

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 11 2003
P. Zhao
Abstract A two-dimensional finite element model for dendritic solidification has been developed that is based on the direct solution of the energy equation over a fixed mesh. The model tracks the position of the sharp solid–liquid interface using a set of marker points placed on the interface. The simulations require calculation of the temperature gradients on both sides of the interface in the direction normal to it; at the interface the heat flux is discontinuous due to the release of latent heat during the solidification (melting) process. Two ways to calculate the temperature gradients at the interface, evaluating their interpolants at Gauss points, were proposed. Using known one- and two-dimensional solutions to stable solidification problems (the Stefan problem), it was shown that the method converges with second-order accuracy. When applied to the unstable solidification of a crystal into an undercooled liquid, it was found that the numerical solution is extremely sensitive to the mesh size and the type of approximation used to calculate the temperature gradients at the interface, i.e. different approximations and different meshes can yield different solutions. The cause of these difficulties is examined, the effect of different types of interpolation on the simulations is investigated, and the necessary criteria to ensure converged solutions are established. Copyright © 2003 John Wiley & Sons, Ltd. [source]
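
The key interface condition in the abstract, a discontinuous heat flux balancing the release of latent heat, can be made concrete with a one-dimensional Stefan-condition sketch. This is a hedged illustration, not the paper's finite element scheme: the grid, the one-sided finite differences standing in for the two interpolants, and all material parameters are assumed for demonstration.

```python
import numpy as np

# 1D illustration of the sharp-interface condition used in solidification
# models: the jump in the normal temperature gradient across the front
# releases latent heat and drives the front,
#     rho * L * V = k_s * dT/dx|_solid - k_l * dT/dx|_liquid.
# Gradients on each side are taken one-sidedly, mimicking the evaluation
# of separate solid-side and liquid-side interpolants at the interface.

def interface_velocity(x, T, x_int, k_s, k_l, rho, L):
    """One-sided finite-difference gradients at the marker position x_int."""
    i = np.searchsorted(x, x_int)         # first grid node on the liquid side
    # backward difference on the solid side, forward difference on the liquid side
    grad_solid = (T[i - 1] - T[i - 2]) / (x[i - 1] - x[i - 2])
    grad_liquid = (T[i + 1] - T[i]) / (x[i + 1] - x[i])
    return (k_s * grad_solid - k_l * grad_liquid) / (rho * L)

# Illustrative, non-dimensional values: solid on the left, isothermal melt
# on the right, so the front advances at a positive velocity.
x = np.linspace(0.0, 1.0, 101)
x_int = 0.4                                # current marker position
T = np.where(x < x_int, -1.0 + x / x_int, 0.0)
print(interface_velocity(x, T, x_int, k_s=1.0, k_l=1.0, rho=1.0, L=1.0))
```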


On the Nitroxide Quasi-Equilibrium in the Alkoxyamine-Mediated Radical Polymerization of Styrene

MACROMOLECULAR THEORY AND SIMULATIONS, Issue 2 2006
Enrique Saldívar-Guerra
Abstract The range of validity of two popular versions of the nitroxide quasi-equilibrium (NQE) approximation used in the theory of the kinetics of alkoxyamine-mediated styrene polymerization is systematically tested by simulation, comparing the approximate and exact solutions of the equations describing the system. The validity of the different versions of the NQE approximation is analysed in terms of the relative magnitude of (dN/dt)/(dP/dt). The approximation with a rigorous NQE, kc[P][N] = kd[PN], where P, N and PN are living radicals, nitroxide radicals and dormant species, respectively, with kinetic constants kc and kd, is found valid only for small values of the equilibrium constant K (10⁻¹¹–10⁻¹² mol·L⁻¹), and its validity is found to depend strongly on the value of K. On the other hand, the relaxed NQE approximation of Fischer and Fukuda, kc[P][N] = kd[PN]₀, was found to be remarkably good up to values of K around 10⁻⁸ mol·L⁻¹. This upper bound is numerically found to be 2–3 orders of magnitude smaller than the theoretical one given by Fischer. The relaxed NQE is the better approximation because it never completely neglects dN/dt. It is found that the difference between these approximations lies essentially in the number of significant figures taken for the approximation; still, this subtle difference results in dramatic changes in the predicted course of the reaction. Some results confirm previous findings, but a deeper understanding of the physico-chemical phenomena and their mathematical representation, and another viewpoint of the theory, is offered. Additionally, experiments and simulations indicate that polymerization rate data alone are not reliable for estimating the value of K, as recently suggested. Validity of the rigorous nitroxide quasi-equilibrium assumption as a function of the nitroxide equilibrium constant. [source]
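
A minimal kinetic sketch helps fix ideas about the two NQE versions. The model below integrates the activation/deactivation/termination equations exactly and compares the living-radical concentration with the relaxed quasi-equilibrium estimate kc[P][N] = kd[PN]₀. The rate constants and initial concentration are illustrative values, not those of the paper, and thermal self-initiation of styrene is omitted.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal persistent-radical-effect model (illustrative constants): dormant
# alkoxyamine PN dissociates (kd) into a living radical P and a persistent
# nitroxide N, recombines (kc), and P is lost by irreversible termination (kt).
kd, kc, kt = 1e-3, 1e7, 1e8           # s^-1, L mol^-1 s^-1, L mol^-1 s^-1
PN0 = 0.02                            # mol L^-1, initial alkoxyamine

def rhs(t, y):
    P, N, PN = y
    ex = kd * PN - kc * P * N         # net activation minus deactivation
    return [ex - 2.0 * kt * P**2, ex, -ex]

sol = solve_ivp(rhs, (0.0, 1e4), [0.0, 0.0, PN0], method="LSODA",
                rtol=1e-10, atol=1e-14, dense_output=True)

# Relaxed quasi-equilibrium estimate of the living-radical concentration:
# kc [P][N] = kd [PN]0  =>  [P] ~ K [PN]0 / [N], with K = kd / kc.
K = kd / kc
P_exact, N_exact, _ = sol.sol(1e4)
print(P_exact, K * PN0 / N_exact)     # nearly equal when the relaxed NQE holds
```

With K = 10⁻¹⁰ mol·L⁻¹, inside the validity window quoted above, the two printed values agree to about a percent; pushing K upward in this toy model is a simple way to watch the approximation degrade.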


The local theory of the cosmic skeleton

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 2 2009
D. Pogosyan
ABSTRACT The local theory of the critical lines of two- and three-dimensional random fields that underlie the cosmic structures is presented. In the context of cosmological matter distribution, the subset of critical lines of the three-dimensional density field serves to delineate the skeleton of the observed filamentary structure at large scales. A stiff approximation used to quantitatively describe the filamentary skeleton shows that the flux of the skeleton lines is related to the average Gaussian curvature of the (N − 1)-dimensional sections of the field. The distribution of the length of the critical lines with threshold is analysed in detail, while the extended descriptors of the skeleton, its curvature and singular points, are introduced and briefly described. Theoretical predictions are compared to measurements of the skeleton in realizations of Gaussian random fields in two and three dimensions. It is found that the stiff approximation accurately predicts the shape of the differential length and allows for analytical insight and explicit closed-form solutions. Finally, it provides a simple classification of the singular points of the critical lines: (i) critical points; (ii) bifurcation points and (iii) slopping plateaux. [source]
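
In two dimensions the local criterion for the critical lines, that the gradient of the field be an eigenvector of its Hessian, can be evaluated pointwise on a gridded realization. The sketch below is a schematic illustration under assumed choices (the power-law spectrum, smoothing scale, and finite-difference derivatives are all placeholders); it is not the stiff approximation itself.

```python
import numpy as np

# 2D illustration of the local critical-line criterion: on the skeleton the
# gradient g of the field is an eigenvector of its Hessian H, i.e. the scalar
#     S = g_x (H g)_y - g_y (H g)_x
# vanishes, so critical lines are the zero-level contour of S.
rng = np.random.default_rng(0)
n = 256
k = np.fft.fftfreq(n)
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2
k2[0, 0] = 1.0                                    # avoid division by zero at k = 0
amp = k2 ** (-0.75) * np.exp(-k2 / 0.05**2)       # amplitude ~ |k|^-1.5, Gaussian smoothing
phase = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
rho = np.real(np.fft.ifft2(amp * phase))          # Gaussian random field realization

gx, gy = np.gradient(rho)                         # finite-difference gradient
hxx, hxy = np.gradient(gx)                        # Hessian components
_, hyy = np.gradient(gy)

# alignment criterion: z-component of grad(rho) x (Hessian * grad(rho))
S = gx * (hxy * gx + hyy * gy) - gy * (hxx * gx + hxy * gy)
print(S.shape)  # trace the skeleton as the zero contour of S, e.g. with
                # matplotlib: plt.contour(S, levels=[0])
```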


Calculation of conformational energies and optical rotation of the most simple chiral alkane

CHIRALITY, Issue 9 2008
Stefan Grimme
Abstract Quantum chemical calculations have been performed to investigate the conformer distribution of 4-ethyl-4-methyloctane and its optical rotation. With the reference methods MP2 and SCS-MP2, the energies of seven conformers are found within a range of about 1.5 kcal mol⁻¹. It is demonstrated that the relative energies cannot be reliably predicted with conventional GGA or hybrid density functionals, Hartree-Fock, semiempirical AM1, and classical force field (MM3) calculations. An empirical dispersion correction to GGA (PBE-D), hybrid (B3LYP-D), or double-hybrid (B2PLYP-D) functionals corrects these errors and results in very good agreement with the reference energies. Optical rotations have been calculated for all seven conformers at the TDDFT(BHLYP/aTZV2P) level. The computed macroscopic rotation is derived from a classical Boltzmann average. The result (1.9–3.2 deg dm⁻¹ (g/mL)⁻¹) is very close to the experimental value of 0.2 deg dm⁻¹ (g/mL)⁻¹ for the (R)-enantiomer and has the right sign. Because six conformers are significantly populated at room temperature and the rotations of individual conformers differ in sign and magnitude, the calculated average rotation is rather sensitive to the conformer populations used. From the electronic structure point of view, this example emphasizes the importance of long-range dispersion effects for the evaluation of population-averaged quantities in large molecules. Computations based on free enthalpies are in worse agreement with experiment, which is attributed to artefacts of the harmonic approximation used to compute the vibrational entropy terms. Chirality, 2008. © 2008 Wiley-Liss, Inc. [source]
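
The Boltzmann averaging step is simple enough to show directly. The sketch below computes population weights from relative conformer energies and the weighted mean rotation; the seven energies and rotations are placeholders, not the paper's computed values, chosen only to mimic the situation described (individual rotations differing in sign and much larger in magnitude than the average).

```python
import numpy as np

# Boltzmann averaging of a conformer property, as used for the macroscopic
# optical rotation: weights from relative energies, then a population-
# weighted mean. All numbers below are illustrative placeholders.
R = 0.0019872            # gas constant, kcal mol^-1 K^-1
T = 298.15               # temperature, K

E = np.array([0.0, 0.3, 0.5, 0.8, 1.0, 1.2, 1.5])              # rel. energies, kcal/mol
alpha = np.array([25.0, -18.0, 7.0, -30.0, 12.0, -5.0, 20.0])  # conformer rotations

w = np.exp(-E / (R * T))
w /= w.sum()                                  # normalized populations
print("populations:", np.round(w, 3))
print("averaged rotation:", np.round(np.dot(w, alpha), 2))
```

With numbers of this kind, shifting a few tenths of a kcal mol⁻¹ between conformers visibly changes the average, which is exactly the sensitivity to the population scheme that the abstract points out.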


Yodel: A Yield Stress Model for Suspensions

JOURNAL OF THE AMERICAN CERAMIC SOCIETY, Issue 4 2006
Robert J. Flatt
A model for the yield stress of particulate suspensions is presented that incorporates microstructural parameters, taking into account the volume fraction of solids, particle size, particle size distribution, maximum packing, percolation threshold, and interparticle forces. The model relates the interparticle forces between particles of dissimilar size to the statistical distribution of particle pairs expected for measured or log-normal size distributions. The model is tested on published data for sub-micron ceramic suspensions and represents the measured data very well over a wide range of volume fractions of solids. The model shows the variation of the yield stress of particulate suspensions to be inversely proportional to the particle diameter. Not all the parameters in the model could be directly evaluated; thus, two were used as adjustable variables: the maximum packing fraction and the minimum interparticle separation distance. The values for these two adjustable variables provided by the model are in good agreement with separate determinations of these parameters. This indicates that the model and the approximations used in its derivation capture the main parameters that influence the yield stress of particulate suspensions and should help us to better predict changes in the rheological properties of complex suspensions. The model predicts the variation of the yield stress of particulate suspensions to be inversely proportional to the particle diameter, but the experimental results do not show a clear dependence on diameter. This result is consistent with previous evaluations, which have shown significant variations in this dependence; the reasons behind the yield stress dependence on particle size are discussed in the context of the radius of curvature of particles at contact. [source]
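
The structure of such a model can be illustrated with a schematic fit. The functional form below is not the published YODEL expression; it is an assumed stand-in that reproduces the ingredients named above (growth from a percolation threshold, divergence at maximum packing, inverse dependence on particle diameter), with a force prefactor G and the maximum packing fraction as the two adjustable variables, and synthetic data in place of the published measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Schematic yield-stress fit in the spirit of the model: stress rises from a
# percolation threshold phi_perc, diverges at maximum packing phi_max, and
# scales as 1/d with particle diameter. Illustrative form and data only.
d = 0.5e-6                       # particle diameter, m (assumed)
phi_perc = 0.30                  # percolation threshold (assumed known)

def tau_y(phi, G, phi_max):
    # G lumps the interparticle-force prefactor; phi_max and G are the
    # two adjustable variables, mirroring the fitting strategy above.
    return (G / d) * phi * (phi - phi_perc) / (phi_max * (phi_max - phi))

# Synthetic "measured" data standing in for the published suspension data:
phi_data = np.array([0.35, 0.40, 0.45, 0.50, 0.55])
tau_data = np.array([4.0, 12.0, 28.0, 65.0, 160.0])    # Pa, illustrative

popt, _ = curve_fit(tau_y, phi_data, tau_data, p0=[1e-5, 0.62])
G_fit, phi_max_fit = popt
print("fitted phi_max:", round(phi_max_fit, 3))
```

The point of the sketch is the fitting strategy, not the numbers: the physical plausibility of the fitted maximum packing fraction (and, in the real model, the separation distance) is what the abstract uses to argue that the approximations capture the controlling parameters.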


Statistical hypothesis testing in intraspecific phylogeography: nested clade phylogeographical analysis vs. approximate Bayesian computation

MOLECULAR ECOLOGY, Issue 2 2009
ALAN R. TEMPLETON
Abstract Nested clade phylogeographical analysis (NCPA) and approximate Bayesian computation (ABC) have been used to test phylogeographical hypotheses. Multilocus NCPA tests null hypotheses, whereas ABC discriminates among a finite set of alternatives. The interpretive criteria of NCPA are explicit and allow complex models to be built from simple components. The interpretive criteria of ABC are ad hoc and require the specification of a complete phylogeographical model. The conclusions from ABC are often influenced by implicit assumptions arising from the many parameters needed to specify a complex model. These complex models confound many assumptions so that biological interpretations are difficult. Sampling error is accounted for in NCPA, but ABC ignores important sources of sampling error that create pseudo-statistical power. NCPA generates the full sampling distribution of its statistics, but ABC only yields local probabilities, which in turn make it impossible to distinguish between a good-fitting model, a non-informative model, and an over-determined model. Both NCPA and ABC use approximations, but the convergence of the approximations used in NCPA is well defined whereas that of those used in ABC is not. NCPA can analyse a large number of locations, but ABC cannot. Finally, the dimensionality of the tested hypothesis is known in NCPA, but not for ABC. As a consequence, the 'probabilities' generated by ABC are not true probabilities and are statistically non-interpretable. Accordingly, ABC should not be used for hypothesis testing, but simulation approaches are valuable when used in conjunction with NCPA or other methods that do not rely on highly parameterized models. [source]
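
To make the criticism of "local probabilities" concrete, the sketch below implements plain rejection ABC for a choice between two toy models: each model's "probability" is simply its share of the simulations whose summary statistic falls within a tolerance of the observed value. Both models, the prior, and the summary statistic are invented placeholders, not an actual phylogeographical analysis.

```python
import numpy as np

# Minimal rejection-ABC for model choice between two toy scenarios. The
# acceptance fractions near the observed summary statistic are the "local
# probabilities" referred to above: they say nothing about the full
# sampling distribution of the statistic under either model.
rng = np.random.default_rng(1)

def simulate(model):
    # toy summary statistic under two hypothetical demographic scenarios
    theta = rng.uniform(0.0, 10.0)                          # prior on one parameter
    if model == 0:                                          # "panmixia" stand-in
        return rng.exponential(theta)
    return rng.exponential(theta) + rng.normal(2.0, 0.5)    # "fragmentation" stand-in

obs, eps, n_sims = 5.0, 0.25, 200_000
accept = {0: 0, 1: 0}
for _ in range(n_sims):
    m = int(rng.integers(2))                                # uniform prior over models
    if abs(simulate(m) - obs) < eps:
        accept[m] += 1

total = accept[0] + accept[1]
for m in (0, 1):
    print(f"model {m}: ABC 'probability' = {accept[m] / total:.3f}")
```

A poorly fitting model, a non-informative model, and an over-parameterized model can all produce similar acceptance fractions in a scheme like this, which is the indistinguishability problem the abstract raises.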