Approximation
Selected Abstracts

A MULTINOMIAL APPROXIMATION FOR AMERICAN OPTION PRICES IN LÉVY PROCESS MODELS
MATHEMATICAL FINANCE, Issue 4 2006. Ross A. Maller
This paper gives a tree-based method for pricing American options in models where the stock price follows a general exponential Lévy process. A multinomial model for approximating the stock price process, which can be viewed as generalizing the binomial model of Cox, Ross, and Rubinstein (1979) for geometric Brownian motion, is developed. Under mild conditions, it is proved that the stock price process and the prices of American-type options on the stock, calculated from the multinomial model, converge to the corresponding prices under the continuous-time Lévy process model. Explicit illustrations are given for the variance gamma model and the normal inverse Gaussian process when the option is an American put, but the procedure is applicable to a much wider class of derivatives, including some path-dependent options. Our approach overcomes some practical difficulties that have previously been encountered when the Lévy process has infinite activity. [source]
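To make the lattice construction above concrete, here is a minimal sketch of the binomial special case of Cox, Ross, and Rubinstein (1979) that the multinomial model generalizes: backward induction on a recombining tree with an early-exercise check at every node. All parameter values are illustrative assumptions; the paper's Lévy-process lattice, with several branches per step matched to the Lévy increment distribution, is not reproduced here.

    # Minimal sketch: CRR binomial pricing of an American put.
    # Parameter values are illustrative, not taken from the paper.
    import math

    def american_put_crr(S0, K, r, sigma, T, n):
        """Price an American put on a recombining binomial tree."""
        dt = T / n
        u = math.exp(sigma * math.sqrt(dt))    # up factor
        d = 1.0 / u                            # down factor
        p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
        disc = math.exp(-r * dt)
        # terminal payoffs
        values = [max(K - S0 * u**j * d**(n - j), 0.0) for j in range(n + 1)]
        # backward induction with an early-exercise check at every node
        for i in range(n - 1, -1, -1):
            for j in range(i + 1):
                cont = disc * (p * values[j + 1] + (1 - p) * values[j])
                exercise = K - S0 * u**j * d**(i - j)
                values[j] = max(cont, exercise)
        return values[0]

    print(american_put_crr(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, n=500))

The multinomial generalization replaces the two branches per step with several, with jump sizes and probabilities fitted to the Lévy model; the backward-induction skeleton stays the same.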
STOCK LIQUIDATION VIA STOCHASTIC APPROXIMATION USING NASDAQ DAILY AND INTRA-DAY DATA
MATHEMATICAL FINANCE, Issue 1 2006. G. Yin
By focusing on computational aspects, this work is concerned with numerical methods for stock selling decisions using stochastic approximation methods. Concentrating on the class of decisions depending on threshold values, an optimal stopping problem is converted to a parametric stochastic optimization problem. The algorithms are model free and are easily implementable on-line. Convergence of the algorithms is established, a second moment bound on the estimation error is obtained, and the escape probability from a neighborhood of the true parameter is also derived. Numerical examples using both daily closing prices and intra-day data are provided to demonstrate the performance of the algorithms. [source]
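As a rough illustration of the threshold-parameterization idea above, the following toy Kiefer–Wolfowitz-style stochastic approximation tunes a selling threshold on simulated geometric Brownian motion paths. The price model, reward, step-size schedules, and all constants are illustrative assumptions, not the algorithms analyzed in the paper.

    # Toy stochastic approximation for a threshold selling rule.
    # GBM price model and step sizes are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    def discounted_sale(theta, s0=1.0, mu=0.02, sigma=0.3, r=0.05, horizon=250):
        """One GBM path; sell when the price first reaches theta, else at the end."""
        s = s0
        for t in range(1, horizon + 1):
            s *= np.exp((mu - 0.5 * sigma**2) / 250
                        + sigma * np.sqrt(1 / 250) * rng.standard_normal())
            if s >= theta:
                return np.exp(-r * t / 250) * s
        return np.exp(-r) * s

    theta = 1.1                          # initial threshold guess
    for n in range(1, 2001):
        a_n, c_n = 0.5 / n, 0.1 / n**0.25            # decaying step sizes
        # finite-difference gradient estimate from two independent simulations
        g = (discounted_sale(theta + c_n) - discounted_sale(theta - c_n)) / (2 * c_n)
        theta += a_n * g                 # ascend the estimated gradient
    print("estimated threshold:", theta)

The attraction of this family of methods, as the abstract notes, is that nothing about the price model enters the update itself: only simulated (or observed) rewards at perturbed thresholds are needed.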
AN APPROXIMATION FOR THE OPTIMAL LINEAR INCOME TAX RATE
AUSTRALIAN ECONOMIC PAPERS, Issue 3 2009. JOHN CREEDY
This paper derives a convenient method of calculating an approximation to the optimal tax rate in a linear income tax structure. Individuals are assumed to have Cobb-Douglas preferences and the wage rate distribution is lognormal. First, the optimal tax rate is shown, for a general form of social welfare function, to be the smallest root of a quadratic equation involving a welfare-weighted average wage rate. Second, an approximation to this average is derived for an isoelastic social welfare function. This average depends on the degree of inequality aversion of the welfare function and the coefficient on consumption in individuals' utility functions. Calculations show that the method performs well in comparison with standard simulation methods of computing the optimal tax rate. [source]

Hierarchical Convex Approximation of 3D Shapes for Fast Region Selection
COMPUTER GRAPHICS FORUM, Issue 5 2008. Marco Attene
Given a 3D solid model S represented by a tetrahedral mesh, we describe a novel algorithm to compute a hierarchy of convex polyhedra that tightly enclose S. The hierarchy can be browsed at interactive speed on a modern PC and is useful for implementing an intuitive feature-selection paradigm for 3D editing environments. Convex parts often coincide with perceptually relevant shape components and, for their identification, existing methods rely on the boundary surface only. In contrast, we show that the notion of part concavity can be expressed and implemented more intuitively and efficiently by exploiting a tetrahedrization of the shape volume. The method proposed is completely automatic and generates a tree of convex polyhedra in which the root is the convex hull of the whole shape and the leaves are the tetrahedra of the input mesh. The algorithm proceeds bottom-up by hierarchically clustering tetrahedra into nearly convex aggregations, and the whole process is significantly fast. We prove that, in the average case, O(n log2 n) operations are sufficient to compute the whole tree for a mesh of n tetrahedra. [source]

Applied Geometry: Discrete Differential Calculus for Graphics
COMPUTER GRAPHICS FORUM, Issue 3 2004. Mathieu Desbrun
Geometry has been extensively studied for centuries, almost exclusively from a differential point of view. However, with the advent of the digital age, the interest directed to smooth surfaces has now partially shifted due to the growing importance of discrete geometry. From 3D surfaces in graphics to higher-dimensional manifolds in mechanics, computational sciences must deal with sampled geometric data on a daily basis; hence our interest in Applied Geometry. In this talk we cover different aspects of Applied Geometry. First, we discuss the problem of Shape Approximation, where an initial surface is accurately discretized (i.e., remeshed) using anisotropic elements through error minimization. Second, once we have a discrete geometry to work with, we briefly show how to develop a full-blown discrete calculus on such discrete manifolds, allowing us to manipulate functions, vector fields, or even tensors while preserving the fundamental structures and invariants of the differential case. We will emphasize the applicability of our discrete variational approach to geometry by showing results on surface parameterization, smoothing, and remeshing, as well as virtual actors and thin-shell simulation. Joint work with: Pierre Alliez (INRIA), David Cohen-Steiner (Duke U.), Eitan Grinspun (NYU), Anil Hirani (Caltech), Jerrold E. Marsden (Caltech), Mark Meyer (Pixar), Fred Pighin (USC), Peter Schröder (Caltech), Yiying Tong (USC). [source]

On the Approximation of Transport Phenomena – a Dynamical Systems Approach
GAMM-MITTEILUNGEN, Issue 1 2009. Michael Dellnitz
Transport phenomena are studied in a large variety of dynamical systems, with applications ranging from the analysis of fluid flow in the ocean and predator-prey interaction in jellyfish to the investigation of blood flow in the cardiovascular system. Our approach to analyzing transport is based on the methodology of so-called transfer operators associated with a dynamical system, which is particularly suitable for this task. We describe the approach and illustrate it by two real-world applications: the computation of transport for asteroids in the solar system and the approximation of macroscopic structures in the Southern Ocean. (© 2009 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
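The transfer-operator methodology in the last abstract above can be illustrated with Ulam's method: partition the state space into boxes, estimate box-to-box transition probabilities from sample points, and read an approximate invariant density off the leading left eigenvector. The 1D logistic map below is an illustrative stand-in for the flows studied in the paper; box counts and sample sizes are arbitrary choices.

    # Ulam-type discretization of the Perron-Frobenius (transfer) operator.
    # The logistic map is a stand-in test system; all sizes are illustrative.
    import numpy as np

    T = lambda x: 4.0 * x * (1.0 - x)        # test dynamical system on [0, 1]
    n_boxes, n_samples = 200, 1000
    P = np.zeros((n_boxes, n_boxes))
    edges = np.linspace(0.0, 1.0, n_boxes + 1)

    for i in range(n_boxes):
        # sample points in box i and record which boxes their images land in
        xs = np.random.default_rng(i).uniform(edges[i], edges[i + 1], n_samples)
        idx = np.minimum((T(xs) * n_boxes).astype(int), n_boxes - 1)
        for j in idx:
            P[i, j] += 1.0 / n_samples

    # invariant measure: fixed point of left multiplication by P
    mu = np.full(n_boxes, 1.0 / n_boxes)
    for _ in range(2000):
        mu = mu @ P
        mu /= mu.sum()
    density = mu * n_boxes                   # convert box mass to density
    print(density[:5])                       # large near x = 0, as expected

For the logistic map the exact invariant density is 1/(pi*sqrt(x(1-x))), so the blow-up of the computed density near the endpoints is a quick sanity check on the discretized operator.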
Reliability Analysis of Technical Systems/Structures by means of Polyhedral Approximation of the Safe/Unsafe Domain
GAMM-MITTEILUNGEN, Issue 2 2007. K. Marti
Reliability analysis of technical structures and systems is based on an appropriate (limit) state function separating the safe and unsafe states in the space of random parameters. Starting with the survival conditions, hence the state equation and the condition for the admissibility of states, an optimizational representation of the state function can be given in terms of the minimum function of a certain minimization problem. Selecting a certain number of boundary points of the safe/unsafe domain, hence points on the limit state surface, the safe/unsafe domain is approximated by a convex polyhedron defined by the intersection of the half-spaces in the parameter space generated by the tangent hyperplanes to the safe/unsafe domain at the selected boundary points. The resulting approximate probability functions are then defined by means of probabilistic linear constraints in the parameter space, where, after an appropriate transformation, the probability distribution of the parameter vector can be assumed to be normal with zero mean vector and unit covariance matrix. Working with separate linear constraints, approximation formulas for the probability of survival of the structure are obtained immediately. More exact approximations are obtained by considering joint probability constraints, which, in a second approximation step, can be evaluated by using probability inequalities and/or discretization of the underlying probability distribution. (© 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]

An Approximation for the Rank Adjacency Statistic for Spatial Clustering with Sparse Data
GEOGRAPHICAL ANALYSIS, Issue 1 2001. John Paul Ekwaru
The rank adjacency statistic D provides a simple method to assess regional clustering. It is defined as the weighted average absolute difference in ranks of the data, taken over all possible pairs of adjacent regions. In this paper the usual normal approximation to the D statistic is found to give inaccurate results if the data are sparse and some regions have tied ranks. Adjusted formulae for the moments of D that allow for the existence of ties are derived. An example analysis of sparse mortality data (with many regions having no deaths, and hence tied ranks) showed satisfactory agreement between the adjusted formulae and the empirical distribution of the D statistic. We conclude that the D statistic, when used with adjusted moments, provides a valid approximate method to evaluate spatial clustering, even in sparse-data situations. [source]
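A small sketch of the D statistic defined above: the weighted average absolute rank difference over all adjacent region pairs, with midranks assigned to ties. The paper's contribution is the adjusted moment formulas for the normal approximation; those are not reproduced here, so this sketch falls back on a Monte Carlo permutation null instead. The data values and adjacency structure are made up.

    # Rank adjacency statistic D with midranks for ties, tested against a
    # permutation null (the paper's adjusted moments are not reproduced here).
    import numpy as np
    from scipy.stats import rankdata

    values = np.array([0, 0, 3, 1, 0, 5, 2, 0])     # e.g. sparse death counts
    adj = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (0, 7)]
    w = np.ones(len(adj))                            # unit weights

    def D(vals):
        r = rankdata(vals)                           # midranks for tied values
        diffs = np.array([abs(r[i] - r[j]) for i, j in adj])
        return float((w * diffs).sum() / w.sum())

    rng = np.random.default_rng(1)
    null = np.array([D(rng.permutation(values)) for _ in range(5000)])
    d_obs = D(values)
    # small D means neighbouring regions have similar ranks, i.e. clustering
    p = (null <= d_obs).mean()
    print(f"D = {d_obs:.3f}, permutation p-value for clustering = {p:.3f}")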
Lattice-Registered Two-Photon Polymerized Features within Colloidal Photonic Crystals and Their Optical Properties
ADVANCED FUNCTIONAL MATERIALS, Issue 13 2008. Erik C. Nelson
In this work we demonstrate a significant advance in the introduction of embedded defects in 3D photonic crystals by means of two-photon polymerization. We have developed the ability to precisely position embedded defects with respect to the lattice of 3D photonic crystals by imaging the structure concurrently with two-photon writing. Defects are written with near-perfect lattice registration and at specifically defined depths within the crystal. The effect of precise defect position on the optical response is investigated for embedded planar cavities written in a photonic crystal. The experimental data are compared to spectra calculated using the Scalar Wave Approximation (SWA). [source]

Approximation to the interface velocity in phase change front tracking
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 2 2002. P. H. Zhao
Numerical models for front tracking in the sharp interface limit must calculate the interface velocity by means of a differentiation of the temperature field on both sides of the interface; the resulting velocity shows an oscillatory error that introduces noise into the solution. In unstable solidification problems, the noise can actually change the resulting solution. In this work, we look at the effect of the noise on the solution of dendritic solidification in an undercooled melt and analyse ways to control it. We conclude that at this point we cannot suppress the noise, and that methods to reduce it can actually lead to different solutions to the same problem. Copyright © 2001 John Wiley & Sons, Ltd. [source]

Approximation of Cahn–Hilliard diffuse interface models using parallel adaptive mesh refinement and coarsening with C1 elements
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 5 2008. Roy H. Stogner
A variational formulation and C1 finite element scheme with adaptive mesh refinement and coarsening are developed for phase-separation processes described by the Cahn–Hilliard diffuse interface model of transport in a mixture or alloy. The adaptive scheme is guided by a Laplacian jump indicator based on the corresponding term arising from the weak formulation of the fourth-order non-linear problem, and is implemented in a parallel solution framework. It is then applied to resolve complex evolving interfacial solution behavior for 2D and 3D simulations of the classic spinodal decomposition problem from a random initial mixture, and to other phase-transformation applications of interest. Simulation results and adaptive performance are discussed. The scheme permits efficient, robust multiscale resolution and interface characterization. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Human Capital and Stock Returns: Is the Value Premium an Approximation for Return on Human Capital?
JOURNAL OF BUSINESS FINANCE & ACCOUNTING, Issue 3-4 2004. Bo Hansson. Article first published online: 28 MAY 200
This study, using a direct measure of the wage growth rate within firms, examines the value premium in relation to human capital. The results suggest that the dispersion in wage growth between value and growth stocks explains a large portion of the differences in stock returns. Value stocks appear to be less exposed to shocks in rents to human capital. Differences in labor force characteristics between value and growth stocks also proved to be an important factor in determining both the impact of future changes in the labor income growth rate and firm value. The present findings are understood to mean that the ability of investors to forecast the dispersion in wage growth in firms is limited. [source]

Papillary Muscle Approximation for Ischemic Mitral Valve Regurgitation
JOURNAL OF CARDIAC SURGERY, Issue 6 2008. Akhtar Rama M.D.
Several procedures have been described to restore a more normal alignment between the mitral annulus and the laterally displaced papillary muscles. We report a new approach to relocate the displaced papillary muscles toward the mitral annulus and to reduce tethering. This procedure is believed to be technically easy and beneficial in terms of mitral repair. [source]

The parameterization and validation of generalized Born models using the pairwise descreening approximation
JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 14 2004. Julien Michel
Generalized Born Surface Area (GBSA) models for water using the Pairwise Descreening Approximation (PDA) have been parameterized by two different methods. The first method, similar to that used in previously reported parameterizations, optimizes all parameters against the experimental free energies of hydration of organic molecules. The second method optimizes the PDA parameters to compensate only for systematic errors of the PDA. The best models are compared to Poisson–Boltzmann calculations and applied to the computation of potentials of mean force (PMFs) for the association of various molecules. PMFs present a more rigorous test of the ability of a solvation model to correctly reproduce the screening of intermolecular interactions by the solvent than its accuracy at predicting free energies of hydration of small molecules. Models derived with the first method are sometimes shown to fail to compute accurate potentials of mean force because of large errors in the computation of Born radii, while no such difficulties are observed with the second method. Furthermore, accurate computation of the Born radii appears to be more important than good agreement with experimental free energies of solvation. We discuss the source of errors in the potentials of mean force and suggest means to reduce them. Our findings suggest that Generalized Born models that use the Pairwise Descreening Approximation and that are derived solely by unconstrained optimization of parameters against free energies of hydration should be applied to the modeling of intermolecular interactions with caution. © 2004 Wiley Periodicals, Inc. J Comput Chem 25: 1760–1770, 2004 [source]
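For readers unfamiliar with generalized Born models such as those parameterized in the last abstract, the following sketch evaluates a Still-style GB polarization energy for a given set of Born radii. In a real GBSA model the pairwise descreening approximation estimates the radii from the atomic positions and scaled radii; here they are simply supplied, and all charges, radii, and coordinates are illustrative assumptions.

    # Still-style generalized Born polarization energy; Born radii supplied
    # directly (a real PDA model would compute them from the structure).
    # Units: angstrom, elementary charge, kcal/mol. Numbers are illustrative.
    import numpy as np

    COULOMB = 332.06     # kcal*A/(mol*e^2)
    EPS_W = 78.5         # water dielectric constant

    def gb_energy(coords, charges, born_radii):
        coords, q, R = map(np.asarray, (coords, charges, born_radii))
        E = 0.0
        n = len(q)
        for i in range(n):
            for j in range(n):       # i == j gives the self (Born) term q_i^2/R_i
                r2 = np.sum((coords[i] - coords[j]) ** 2)
                f = np.sqrt(r2 + R[i] * R[j] * np.exp(-r2 / (4 * R[i] * R[j])))
                E += q[i] * q[j] / f
        return -0.5 * COULOMB * (1.0 - 1.0 / EPS_W) * E

    print(gb_energy([[0, 0, 0], [0, 0, 3.0]], [0.5, -0.5], [1.5, 1.7]))

The abstract's central point can be read directly off this formula: the energy depends on the charges only through the effective radii and distances, so systematic errors in the Born radii propagate straight into the screened interaction, and hence into potentials of mean force.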
Approximation of the Navier–Stokes system with variable viscosity by a system of Cauchy–Kowaleska type
MATHEMATICAL METHODS IN THE APPLIED SCIENCES, Issue 12 2008. G. M. de Araújo
In this paper, we study the existence of weak solutions, when n ≤ 4, of the mixed problem for the Navier–Stokes equations defined in a bounded domain Q, using approximation by a system of Cauchy–Kowaleska type. Periodic solutions are also analyzed. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Approximation by Herglotz wave functions
MATHEMATICAL METHODS IN THE APPLIED SCIENCES, Issue 2 2004. Norbert Weck
By a general argument, it is shown that Herglotz wave functions are dense (with respect to the C∞(Ω)-topology) in the space of all solutions to the reduced wave equation in Ω. This is used to provide corresponding approximation results in global spaces (e.g. in L2-Sobolev spaces Hm(Ω)) and for boundary data. Copyright © 2004 John Wiley & Sons, Ltd. [source]

Approximation of vector valued smooth functions
MATHEMATISCHE NACHRICHTEN, Issue 1 2004. Eva C. Farkas
A real locally convex space is said to be convenient if it is separated, bornological and Mackey-complete. These spaces serve as underlying objects for a whole theory of differentiation and integration (see [4]) upon which infinite dimensional differential geometry is based (cf. [8]). We investigate the question of density of the subspaces C∞(E) ⊗ F and 𝒫f(E) ⊗ F of smooth (polynomial) decomposable functions in the space C∞(E, F) of smooth functions between convenient vector spaces E, F with respect to various natural structures. A characterization is given for density with respect to the c∞-topology and also for some classical locally convex topologies on C∞(E, F). It is shown furthermore that for the space 𝒟(Ω) the convenient analogon of the Schwartz kernel theorem for C∞-functions holds. Spaces of C∞-functions on both separable and non-separable manifolds are considered, and an example of a non-separable manifold is given failing the above property of approximability by decomposable functions. Those notions and features of the theory of convenient vector spaces which are essential for the results of this paper are explained in the introductory section below and where needed. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]

Approximation of time-dependent, viscoelastic fluid flow: Crank-Nicolson, finite element approximation
NUMERICAL METHODS FOR PARTIAL DIFFERENTIAL EQUATIONS, Issue 2 2004. Vincent J. Ervin
In this article we analyze a fully discrete approximation to the time-dependent viscoelasticity equations with an Oldroyd B constitutive equation in ℝd, d = 2, 3. We use a Crank-Nicolson discretization for the time derivatives. At each time level a linear system of equations is solved. To resolve the nonlinearities we use a three-step extrapolation for the prediction of the velocity and stress at the new time level. The approximation is stabilized by using a discontinuous Galerkin approximation for the constitutive equation. For the mesh parameter, h, and the temporal step size, Δt, sufficiently small and satisfying Δt ≤ Ch, existence of the approximate solution is proven. A priori error estimates for the approximation in terms of Δt and h are also derived. © 2003 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq 20: 248–283, 2004 [source]
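The Crank-Nicolson time discretization used in the last paper above can be shown on a scalar model problem. The sketch below applies it to the 1D heat equation u_t = u_xx with homogeneous boundary conditions; the viscoelastic system itself additionally needs the extrapolation and discontinuous Galerkin machinery described in the abstract, none of which is attempted here.

    # Crank-Nicolson for u_t = u_xx on (0,1) with u = 0 at both ends.
    # A scalar stand-in for the time stepping in the paper; sizes illustrative.
    import numpy as np

    n, dt, steps = 50, 1e-3, 200
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)
    u = np.sin(np.pi * x)                    # initial condition

    # second-difference matrix A and the CN operators (I -/+ dt/2 * A)
    A = (np.diag(-2 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2
    I = np.eye(n)
    left, right = I - 0.5 * dt * A, I + 0.5 * dt * A

    for _ in range(steps):
        # one step: solve (I - dt/2 A) u_new = (I + dt/2 A) u_old
        u = np.linalg.solve(left, right @ u)

    # exact solution decays like exp(-pi^2 t); compare the peak amplitude
    print(u.max(), np.exp(-np.pi**2 * dt * steps))

Averaging the right-hand side between the old and new time levels is what gives Crank-Nicolson its second-order accuracy in Δt, which is why the paper's error estimates are stated in terms of both Δt and h.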
(217) Selective Nerve Root Injections Can Accurately Predict Level of Nerve Impairment and Outcome for Surgical Decompression: A Retrospective Analysis
PAIN MEDICINE, Issue 3 2001. Kevin Macadaeg
There remains significant controversy regarding the use of a vertebral selective nerve root injection (SNI) as a diagnostic and therapeutic tool. In addition, the frequency of use of such procedures in patients with radiculopathy has increased dramatically in the last few years. Based on a Medline review, there have been no studies combining cervical and lumbar SNI results and comparing preoperative diagnosis to surgical findings and outcome. The purpose of this paper is to retrospectively examine and compare the sensitivity, specificity, and predictive value of a good surgical outcome in patients who had an SNI and subsequent surgical intervention. The study comprises 101 patients from a 1996 through 1999 database who were referred to 10 spine surgeons (2 orthopedic surgeons and 8 neurosurgeons) for either cervical or lumbar radiculopathy and who had an SNI, various imaging studies, and subsequent surgery. Patients receive SNIs at our institution if there is a discrepancy between the physical exam and radiologic imaging, or to confirm a putative pain generator in multilevel pathology. These patients were retrospectively analyzed with regard to correlation with surgical level and surgical outcome. SNIs were performed by one of three pain specialists in our clinic. Approximation of the appropriate nerve root sleeve was performed using fluoroscopic imaging, a nerve stimulator, and contrast. After nerve root stimulation and neurography, 0.5–0.75 cc of lidocaine 2% was injected. Pre- and post-procedural visual analog scale (VAS) pain scores were obtained from the non-sedated patient. An SNI was considered positive or negative if the patient had immediate appendicular pain relief of greater or less than ninety percent, respectively. The study was designed to include only those patients who had an SNI, regardless of result, and subsequently had surgical decompression in an attempt to treat the pain that initially prompted the SNI. A statistical analysis was then performed comparing preoperative data to surgical findings and outcome. Overall, 101 patients had SNIs and subsequently underwent surgical decompression. Average duration of symptoms prior to SNI was 1.5–12 months (mean 4 months). Fifteen patients presented with cervical and 86 with lumbar radiculopathy. A total of 110 procedures were performed on these patients. VAS scores of <2 and overall pain reduction of >90% with respect to pre-procedural appendicular pain were used to determine whether an SNI was positive, negative, or indeterminate. All of these patients had an MRI or CT, with or without a myelogram, and all went to surgery. The results show that SNIs are able to predict surgical findings with 94% sensitivity and 90% specificity. A good surgical outcome was determined if the patient would do the surgery again, was satisfied or very satisfied, and had a VAS of <3 at 6- and 12-month intervals. Our data revealed that a positive SNI was able to predict a good 6-month outcome with 95% sensitivity and 64% specificity. At 12 months, similar results were obtained: 95% and 56%. Preoperative MRI results were also evaluated and showed 92% sensitivity in predicting surgical findings; we had 24 false positive MRI results and 0 true negatives. Interestingly, we had 8 diabetic (IDDM or NIDDM) patients, nearly 8% of our total. The odds ratio of a diabetic having a bad outcome at 12 months was 5.4 to 1. Diabetics had a 50% likelihood of a bad 12-month outcome versus 16% for non-diabetics, with a p value of 0.066. We also looked at gender, smoking history, and presence of cardiovascular disease and found no significant relationship with outcomes. Our data indicate that the SNI, when performed with rigorous method, is a highly valuable tool that can accurately determine the level of nerve root impairment and the outcome in patients being considered for surgical decompression. With a sensitivity of 94% and a specificity of 90%, SNIs offer a major advantage over other diagnostic modalities in patients with difficult-to-diagnose radiculopathies. [source]
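The screening statistics quoted above (sensitivity, specificity, odds ratio) all come from 2x2 tables; a minimal sketch of the arithmetic follows. The counts are hypothetical, chosen only so that the 110 procedures reproduce the quoted 94%/90% figures; the abstract itself reports only the resulting percentages.

    # Sensitivity, specificity and odds ratio from a 2x2 table.
    # Counts below are hypothetical, not taken from the study.
    def screening_stats(tp, fp, fn, tn):
        sens = tp / (tp + fn)                 # true positive rate
        spec = tn / (tn + fp)                 # true negative rate
        odds_ratio = (tp * tn) / (fp * fn) if fp and fn else float("inf")
        return sens, spec, odds_ratio

    sens, spec, or_ = screening_stats(tp=85, fp=2, fn=5, tn=18)
    print(f"sensitivity={sens:.0%} specificity={spec:.0%} OR={or_:.1f}")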
Approximation of magnetic behavior of complex nanomagnetic materials, using the "P" curves for structural characterization of magnetic suspensions
PHYSICA STATUS SOLIDI (A) APPLICATIONS AND MATERIALS SCIENCE, Issue 8 2008. N. C. Popa
The "P" curves for the structural characterization of magnetic nanoparticle suspensions (complex fluids, complex powders, complex composite materials, or living biological materials having magnetic properties) are the graphical representation of the first derivative (relative to the magnetic field strength H) of the magnetization curve normalized by its saturation magnetization. In the case of the above materials, the magnetic properties are conferred on various carrier liquids by artificially integrating in their structure ferromagnetic particles of different sizes. The magnetic properties are usually shown by the hysteresis curve, and the structure can be seen by (electron) micrography. The P curves offer another possibility to determine the structure of the magnetic component of a complex fluid, by numerical analysis of the experimentally obtained magnetization curve. Starting from these P curves, the paper presents the possibility to approximate the magnetic behavior of these complex nanomagnetic materials. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
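A sketch of a P curve as defined above: the derivative, with respect to H, of the magnetization curve normalized by its saturation value. A Langevin magnetization curve stands in here for measured data; the particle moment, temperature, and field grid are illustrative assumptions.

    # P curve = d(M/Ms)/dH, computed numerically from a model magnetization
    # curve (Langevin). Particle moment and field grid are illustrative.
    import numpy as np

    H = np.linspace(1e3, 8e5, 400)            # field strength, A/m
    m, kT = 2e-19, 4.1e-21                    # moment (A*m^2), thermal energy (J)
    xi = m * 4e-7 * np.pi * H / kT            # Langevin argument mu0*m*H/kT
    M_over_Ms = 1.0 / np.tanh(xi) - 1.0 / xi  # Langevin function L(xi)
    P = np.gradient(M_over_Ms, H)             # the P curve
    print(P[:3])

For a real suspension with a distribution of particle sizes, the measured curve is a superposition of such terms, which is why the shape of the P curve carries structural information about the particle population.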
The electronic and electrochemical properties of the TiFe1−xNix alloys
PHYSICA STATUS SOLIDI (A) APPLICATIONS AND MATERIALS SCIENCE, Issue 1 2003. A. Szajek
A mechanical alloying (MA) process was introduced to produce nanocrystalline TiFe1−xNix alloys (0 ≤ x ≤ 1). XRD analysis showed that, firstly, after 25 h of milling the starting mixture of the elements had decomposed into an amorphous phase and, secondly, annealing in high-purity argon at 750 °C for 0.5 h led to the formation of CsCl-type (B2) structures with crystallite sizes of about 30 nm. These materials, used as negative electrodes for Ni–MH batteries, showed an increase in discharge capacity with a maximum for x = 3/4. The band structure has been studied by the Tight Binding version of the Linear Muffin-Tin Orbital method in the Atomic Sphere Approximation (TB LMTO ASA). Increasing the content of Ni atoms intensified charge transfer from Ti atoms, extended the valence bands, and increased the values of the densities of electronic states at the Fermi level. [source]

Electronic structure of binary and ternary components of CdTe:O thin films
PHYSICA STATUS SOLIDI (C) - CURRENT TOPICS IN SOLID STATE PHYSICS, Issue S1 2004. E. Menéndez-Proupin
We report first-principles calculations of the electronic structure of the simplest compounds that may be present in Cd–Te–O mixtures: CdTe, CdO, α-TeO2, CdTeO3 and Cd3TeO6. The calculations are carried out in the Local Density Approximation (LDA) and predict the insulating character of these compounds, underestimating the optical bandgaps by nearly 1 eV, as usual for LDA. In the four oxides, the top valence bands originate mainly from the O 2p atomic levels. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]

Approximation for convenience yield in commodity futures pricing
THE JOURNAL OF FUTURES MARKETS, Issue 10 2002. Richard Heaney. Article first published online: 13 AUG 200
The pricing of commodity futures contracts is important both for professionals and academics. It is often argued that futures prices include a convenience yield, and this article uses a simple trading strategy to approximate the impact of convenience yields. The approximation requires only three variables: underlying asset price volatility, futures contract price volatility, and the futures contract time to maturity. The approximation is tested using spot and futures prices from the London Metals Exchange contracts for copper, lead, and zinc, with quarterly observations drawn from a 25-year period from 1975 to 2000. Matching Euro-market interest rates are used to estimate the risk-free rate. The convenience yield approximation is both statistically and economically important in explaining variation between the futures price and the spot price after adjustment for interest rates. © 2002 Wiley Periodicals, Inc. Jrl Fut Mark 22:1005–1017, 2002 [source]
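For orientation, the quantity targeted by the approximation in the last abstract can be backed out of the textbook cost-of-carry relation F = S*exp((r − y)*tau). The sketch below implements that standard implied convenience yield; it is not the paper's volatility-based approximation, whose formula the abstract does not reproduce. Input values are illustrative.

    # Implied (net) convenience yield from the cost-of-carry relation
    # F = S * exp((r - y) * tau). Inputs are illustrative assumptions.
    import math

    def implied_convenience_yield(spot, futures, r, tau):
        """Solve F = S*exp((r - y)*tau) for the net convenience yield y."""
        return r - math.log(futures / spot) / tau

    print(implied_convenience_yield(spot=1500.0, futures=1510.0, r=0.05, tau=0.25))

The paper's contribution is to approximate the same effect without observing y directly, using only the spot volatility, the futures volatility, and the time to maturity.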
Reduction of quantum fluctuations by anisotropy fields in Heisenberg ferro- and antiferromagnets
ANNALEN DER PHYSIK, Issue 10-11 2009. B. Vogt
The physical properties of quantum systems described by the anisotropic Heisenberg model are influenced by thermal as well as by quantum fluctuations. Such a quantum Heisenberg system can be profoundly changed towards a classical system by tuning two parameters, namely the total spin and the anisotropy field: large easy-axis anisotropy fields, which drive the system towards the classical Ising model, as well as large spin quantum numbers suppress the quantum fluctuations and lead to a classical limit. We elucidate the incipience of this reduction of quantum fluctuations. In order to illustrate the resulting effects we determine the critical temperatures for ferro- and antiferromagnets and the ground-state sublattice magnetization for antiferromagnets. The outcome depends on the dimension, the spin quantum number, and the anisotropy field, and is studied for a widespread range of these parameters. We compare the results obtained by Classical Mean Field, Quantum Mean Field, Linear Spin Wave, and Random Phase Approximation. Our findings are confirmed and quantitatively improved by numerical Quantum Monte Carlo simulations. The differences between the ferromagnet and antiferromagnet are investigated. We finally arrive at a comprehensive picture of the classical trends and elucidate the suppression of quantum fluctuations in anisotropic spin systems. In particular, we find that the quantum fluctuations are extraordinarily sensitive to the presence of small anisotropy fields. This sensitivity can be quantified by introducing an "anisotropy susceptibility". [source]

GPR microwave tomography for diagnostic analysis of archaeological sites: the case of a highway construction in Pontecagnano (Southern Italy)
ARCHAEOLOGICAL PROSPECTION, Issue 3 2009. R. Castaldo
Interpretation of ground-penetrating radar (GPR) data usually involves data processing similar to that used for seismic data analysis, including migration techniques. Alternatively, in the past few years, microwave tomographic approaches exploiting more accurate models of the electromagnetic scattering have gained interest, owing to their capability of providing accurate results and stable images. Within this framework, this paper deals with the application of a microwave tomography approach based on the Born Approximation and working in the frequency domain. The case study is a survey performed during the construction of the third lane of the most important highway in southern Italy (the Salerno-Reggio Calabria, near Pontecagnano, Italy). It is shown that such an inversion approach produces well-focused images, from which buried structures can be identified more easily than from traditional radar images. Moreover, the visualization of the reconstruction results is further enhanced through a three-dimensional volumetric rendering of the surveyed region, achieved simply by staggering the reconstructed two-dimensional GPR profiles. By means of this rendering it is possible to follow the spatial continuity of the buried structures in the subsurface, thus obtaining a very effective geometrical characterization. The results are very useful in our case where, due to important civil engineering works, a fast diagnosis of the archaeological situation was needed. The quality of our GPR data modelling was confirmed by a test excavation, in which a corner of a building and the eastern part of another house, with its courtyard, were found at the depth and horizontal position suggested by our interpretation. Copyright © 2009 John Wiley & Sons, Ltd. [source]

Approximation and complexity trade-off by TP model transformation in controller design: A case study of the TORA system
ASIAN JOURNAL OF CONTROL, Issue 5 2010. Zoltán Petres
The main objective of the paper is to study the approximation and complexity trade-off capabilities of the recently proposed tensor product distributed compensation (TPDC) based control design framework. The TPDC is the combination of the TP model transformation and the parallel distributed compensation (PDC) framework. The Tensor Product (TP) model transformation includes a Higher Order Singular Value Decomposition (HOSVD) based technique to solve the approximation and complexity trade-off. In this paper we generate TP models with different complexity and approximation properties, and then we derive controllers for them. We analyze how the trade-off affects the model behavior and control performance. All these properties are studied via the state feedback controller design of the Translational Oscillations with an Eccentric Rotational Proof Mass Actuator (TORA) system. Copyright © 2010 John Wiley and Sons Asia Pte Ltd and Chinese Automatic Control Society [source]
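The HOSVD at the heart of the TP model transformation above trades approximation accuracy for complexity by truncating the mode-wise singular values. Below is a compact truncated HOSVD on a random third-order tensor; it is a generic illustration of that trade-off, not a TORA model, and the tensor sizes and ranks are arbitrary choices.

    # Truncated higher-order SVD: per-mode factor matrices plus a core tensor.
    # Random data stands in for a sampled TP model; sizes are illustrative.
    import numpy as np

    def hosvd(T, ranks):
        Us, core = [], T
        for mode, r in enumerate(ranks):
            # mode-n unfolding, then keep the r leading left singular vectors
            unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
            U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
            Us.append(U[:, :r])
        for mode, U in enumerate(Us):        # core = T x_0 U0^T x_1 U1^T ...
            core = np.moveaxis(
                np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
        return core, Us

    T = np.random.default_rng(2).standard_normal((8, 8, 8))
    core, Us = hosvd(T, ranks=(4, 4, 4))
    approx = core
    for mode, U in enumerate(Us):            # rebuild from the truncated factors
        approx = np.moveaxis(
            np.tensordot(U, np.moveaxis(approx, mode, 0), axes=1), 0, mode)
    print("relative error:", np.linalg.norm(T - approx) / np.linalg.norm(T))

Keeping fewer columns per mode shrinks the core (fewer components, cheaper controller) at the cost of a larger reconstruction error, which is exactly the trade-off the paper studies.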
Bayesian Inference for Stochastic Kinetic Models Using a Diffusion Approximation
BIOMETRICS, Issue 3 2005. A. Golightly
This article is concerned with the Bayesian estimation of stochastic rate constants in the context of dynamic models of intracellular processes. The underlying discrete stochastic kinetic model is replaced by a diffusion approximation (or stochastic differential equation approach) where a white noise term models stochastic behavior, and the model is identified using equispaced time course data. The estimation framework involves the introduction of m − 1 latent data points between every pair of observations. MCMC methods are then used to sample the posterior distribution of the latent process and the model parameters. The methodology is applied to the estimation of parameters in a prokaryotic autoregulatory gene network. (A sketch of such a diffusion approximation appears after this list.) [source]

Temporal changes in retinal thickness after removal of the epiretinal membrane
ACTA OPHTHALMOLOGICA, Issue 4 2009. Hitoshi Aso
Purpose: We aimed to study the temporal aspects of the postoperative reduction of retinal thickness in eyes with epiretinal membrane after vitrectomy with peeling of the epiretinal membrane and internal limiting membrane. Methods: In a retrospective study performed as a non-comparative, interventional case series, 16 eyes from 15 patients with idiopathic epiretinal membrane who underwent vitrectomy and removal of the epiretinal membrane were followed up using optical coherence tomography measurements. Retinal thickness in the macular area was assessed by the foveal thickness and the macular volume in a circle 6 mm in diameter. Results: Scattergrams of the foveal thickness and macular volume were best fitted with exponential curves. The average time constants of the exponential curves for foveal thickness and macular volume changes were 31 days (range 4–109 days) and 36 days (range 5–100 days), respectively. The average expected final values for foveal thickness and macular volume were 334 μm (range 206–408 μm) and 7.53 mm³ (range 6.57–8.66 mm³), respectively, which were significantly greater than those in normal controls (p < 0.0001, t-test). Conclusions: Retinal thickness decreases rapidly immediately after surgical removal of the epiretinal membrane, and the reduction rate gradually slows thereafter. Approximation by an exponential curve provides an estimate of final retinal thickness after surgical removal of the epiretinal membrane; final thickness is expected to be greater than in normal eyes. [source]

Approximation by Γ-convergence of a curvature-depending functional in visual reconstruction
COMMUNICATIONS ON PURE & APPLIED MATHEMATICS, Issue 1 2006. Andrea Braides
First page of article [source]
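Returning to the Golightly abstract above, flagged there: a minimal Euler–Maruyama sketch of a chemical-Langevin-type diffusion approximation for a single-species immigration–death system, dX = (k1 − k2*X) dt + sqrt(k1 + k2*X) dW. This forward simulator is the kind of building block over which the paper's MCMC scheme introduces latent points between observations; the rate constants, step size, and initial state below are illustrative assumptions, and the MCMC machinery itself is not reproduced.

    # Euler-Maruyama simulation of a diffusion approximation to a simple
    # immigration-death kinetic model. All constants are illustrative.
    import numpy as np

    def simulate_cle(x0=10.0, k1=5.0, k2=0.5, dt=0.01, n_steps=2000, seed=3):
        rng = np.random.default_rng(seed)
        x = np.empty(n_steps + 1)
        x[0] = x0
        for t in range(n_steps):
            drift = k1 - k2 * x[t]                     # net mean rate of change
            diff = np.sqrt(max(k1 + k2 * x[t], 0.0))   # keep the sqrt well defined
            x[t + 1] = x[t] + drift * dt + diff * np.sqrt(dt) * rng.standard_normal()
        return x

    path = simulate_cle()
    print("long-run mean, expected near k1/k2 = 10:", path[1000:].mean())

The drift and diffusion coefficients come from the same reaction rates, which is what makes the diffusion a tractable stand-in for the discrete stochastic kinetic model when building the likelihood.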