Simulation Procedure
Selected Abstracts

Response simulation and seismic assessment of highway overcrossings
EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 9 2010. Anastasios Kotsoglou

Abstract: Interaction of bridge structures with the adjacent embankment fills and pile foundations is generally responsible for modifying the system response to strong ground excitations, to a degree that depends on soil compliance, support conditions, and the soil mass mobilized in the dynamic response. This paper presents a general modeling and assessment procedure targeted at simulating the dynamic response of short bridges such as highway overcrossings, where embankment soil–structure interaction is most prevalent. Previous studies have shown that in this type of interaction, seismic displacement demands are magnified in critical bridge components such as the central piers. This issue is of particular relevance not only in new design but also in the assessment of existing infrastructure. Among the wide range of issues relevant to soil–structure interaction, the paper investigates in detail typical highway overcrossings whose flexible abutments are supported on earth embankments. Simulation procedures are proposed for including bridge–embankment interaction effects in practical analysis of these structures when estimating their seismic performance. Results from extensive parametric studies are used to extract ready-to-use, general, parameterized capacity curves for a wide range of possible material properties and geometric characteristics of the bridge–embankment assembly. Using two instrumented highway overpasses as benchmark examples, the capacity curves estimated with the proposed practical procedures correlate successfully with the results of explicit incremental dynamic analysis, verifying the applicability of the simple tools developed herein to the seismic assessment of existing short bridges.
Copyright © 2009 John Wiley & Sons, Ltd. [source]

Backtesting Derivative Portfolios with Filtered Historical Simulation (FHS)
EUROPEAN FINANCIAL MANAGEMENT, Issue 1 2002. Giovanni Barone-Adesi

Filtered historical simulation provides the general framework for our backtests of portfolios of derivative securities held by a large sample of financial institutions. We allow for stochastic volatility and exchange rates. Correlations are preserved implicitly by our simulation procedure. Options are repriced at each node. Overall, the results support the adequacy of our framework, but our VaR numbers are too high for swap portfolios at long horizons and too low for options and futures portfolios at short horizons. [source]

Waveform distortion caused by high power adjustable speed drives, part I: High computational efficiency models
EUROPEAN TRANSACTIONS ON ELECTRICAL POWER, Issue 6 2003. F. De Rosa

Waveform distortion caused by high power adjustable speed drives is considered and two high computational efficiency models are proposed. Both models are essentially based on switching function theory and on a new simplified representation of the control system. The first applies to drives based on line commutated inverters, the second to drives based on pulse width modulated inverters. The models' accuracy and computational efficiency are demonstrated. In a companion paper, the proposed models are applied within a simulation procedure for the probabilistic analysis of waveform distortion on both the supply and motor sides of the two types of adjustable speed drives considered here. [source]

Waveform distortion caused by high power adjustable speed drives, part II: Probabilistic analysis
EUROPEAN TRANSACTIONS ON ELECTRICAL POWER, Issue 6 2003. D. Castaldo

Waveform distortion caused by high power adjustable speed drives is considered in a probabilistic scenario.
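The filtered historical simulation framework summarized in the Barone-Adesi abstract can be sketched in a few lines. The EWMA volatility filter, its decay factor, and the helper names below are illustrative assumptions only; the original framework uses a GARCH-type filter and reprices options at each node, which is omitted here.

```python
import math
import random

def filtered_historical_simulation(returns, lam=0.94, n_paths=1000, seed=0):
    """Standardize historical returns by an EWMA volatility filter, then
    bootstrap the standardized residuals and rescale them by the current
    volatility forecast to obtain simulated one-step returns."""
    var = returns[0] ** 2 or 1e-8          # seed the variance recursion
    residuals = []
    for r in returns:
        sigma = math.sqrt(var)
        residuals.append(r / sigma)        # filtered (standardized) return
        var = lam * var + (1 - lam) * r ** 2
    current_sigma = math.sqrt(var)         # today's volatility forecast
    rng = random.Random(seed)
    return [current_sigma * rng.choice(residuals) for _ in range(n_paths)]

def var_quantile(pnl, alpha=0.01):
    """Empirical value-at-risk: loss exceeded with probability alpha."""
    return -sorted(pnl)[int(alpha * len(pnl))]
```

A backtest would then compare each day's realized loss against `var_quantile` of that day's simulated distribution.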
In a companion paper, two high computational efficiency drive models, one using the line commutated inverter and the other the pulse width modulated inverter, were proposed. These models are used within a simulation procedure for the probabilistic analysis of waveform distortion on both the supply and motor sides of the two kinds of drives considered. The results obtained, considering both mechanical and supply voltage variability, are presented and discussed. [source]

Assessing the performance of intumescent coatings using bench-scaled cone calorimeter and finite difference simulations
FIRE AND MATERIALS, Issue 3 2007. M. Bartholmai

Abstract: A method was developed to assess the heat insulation performance of intumescent coatings. The method combines temperature measurements in the bench-scaled experimental set-up of a cone calorimeter with finite difference simulation to calculate the effective thermal conductivity as a function of time/temperature. The simulation procedure was also adapted to the small-scale test furnace, in which the standard time–temperature curve is applied to a larger sample and which thus provides results relevant for approval. Temperature measurements and calculations of the effective thermal conductivity were performed on intumescent coatings in both experimental set-ups using various coating thicknesses. The results correspond to each other and also show the limits of transferability between the two fire tests. It is shown that bench-scaled cone calorimeter tests are a valuable tool for assessing and predicting the performance of intumescent coatings in the larger approval-relevant tests. The correlation fails for processes at surface temperatures above 750°C, which are not reached in the cone calorimeter but are attained in the small-scale furnace set-up. Copyright © 2006 John Wiley & Sons, Ltd.
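The finite-difference half of the intumescent-coating procedure can be illustrated with a minimal explicit 1D conduction solver: given a trial effective thermal conductivity, it marches the temperature through a coating slab and returns the cold-face temperature, which would then be compared with the measured curve. All material values and boundary conditions here are placeholders, not the paper's data.

```python
def cold_side_temperature(k, rho_cp, thickness, t_hot, t_init,
                          n_nodes=21, dt=0.01, t_end=60.0):
    """Explicit 1D finite-difference conduction through a slab.
    Hot face held at t_hot (Dirichlet); cold face insulated (Neumann).
    Returns the final cold-face temperature."""
    dx = thickness / (n_nodes - 1)
    alpha = k / rho_cp                       # thermal diffusivity
    r = alpha * dt / dx ** 2                 # stability number
    assert r <= 0.5, "explicit scheme unstable for this time step"
    T = [t_init] * n_nodes
    for _ in range(int(t_end / dt)):
        T[0] = t_hot                         # heated face
        new = T[:]
        for i in range(1, n_nodes - 1):
            new[i] = T[i] + r * (T[i - 1] - 2 * T[i] + T[i + 1])
        new[-1] = new[-2]                    # insulated back face
        T = new
    return T[-1]
```

In an inverse procedure like the paper's, `k` would be adjusted (possibly as a function of temperature) until the computed cold-face history matches the measurement. Note the explicit scheme's stability constraint r ≤ 0.5 on the time step.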
[source]

Numerical simulation of the joint motion process of various modes of caisson breakwater under wave excitation
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 6 2006. Wang Yuan-Zhan

Abstract: A caisson breakwater may experience various modes of motion under wave action. The elementary motion modes are classified into two categories: the coupled horizontal and rotational vibration mode, and the coupled horizontal slide and rotational vibration mode. The motion modes of a caisson transform from one to another depending on the wave forces and the motion behaviour of the caisson. Numerical models of the two motion modes are developed, and a numerical simulation procedure for the joint motion process of the two modes of a caisson breakwater under wave excitation is presented and tested against a physical model experiment. It is concluded that the simulation procedure is reliable and can be applied to the dynamic stability analysis of caisson breakwaters. Copyright © 2005 John Wiley & Sons, Ltd. [source]

New approach for the analysis and design of negative-resistance oscillators: Application to a quasi-MMIC VCO
INTERNATIONAL JOURNAL OF RF AND MICROWAVE COMPUTER-AIDED ENGINEERING, Issue 4 2006. Jeffrey Chuan

Abstract: This article proposes a new approach for the analysis and design of negative-resistance oscillators using computer-aided engineering tools. The method presented does not require any special probe and makes oscillator design similar to the methodology applied to amplifiers. It speeds up convergence and avoids uncertainties in the solution. The negative-resistance oscillator is split into two parts: an active-amplifying part and a resonator part. A chain is constructed by linking both parts and repeating them several times, which is known as the repeated circuit simulation procedure. This method allows the separation of the signal flowing between them.
Small-signal AC-sweep and harmonic-balance techniques, both available in several commercial software packages, are applied. The method is theoretically justified and converges in fewer iterations. Furthermore, it is more robust than standard harmonic-balance probes in the case of multiple frequencies of oscillation. It has been demonstrated in the design of a quasi-MMIC VCO with an external resonator circuit (coaxial resonator and varactor) and a MMIC negative-resistance circuit manufactured using ED02AH p-HEMT technology (OMMIC). © 2006 Wiley Periodicals, Inc. Int J RF and Microwave CAE, 2006. [source]

The generalized Rice lognormal channel model: first- and second-order statistical characterization and simulation
INTERNATIONAL JOURNAL OF SATELLITE COMMUNICATIONS AND NETWORKING, Issue 1 2002. F. Vatalaro

Abstract: The Rice-lognormal (RLN) channel model was recently generalized to include an additive scattering component having constant average power. The generalized Rice-lognormal (GRLN) model includes as limiting cases many well-known narrowband models such as the Rice, lognormal (and their combinations), and Loo models. The paper provides the GRLN first-order statistical description of envelope and phase, and the second-order statistics in terms of level crossing rate. The paper then provides a fitting procedure to extract model parameters from experimental data. Finally, it presents a new simulation procedure and validates it through comparison with theoretical results. Copyright © 2002 John Wiley & Sons, Ltd. [source]

Partition-based algorithm for estimating transportation network reliability with dependent link failures
JOURNAL OF ADVANCED TRANSPORTATION, Issue 3 2008. Agachai Sumalee

Evaluating the reliability of a transportation network often involves an intensive simulation exercise to randomly generate and evaluate different possible network states.
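A rough sketch of generating envelope samples for the generalized Rice-lognormal channel described in the Vatalaro abstract: a Rice component (direct ray plus diffuse part) whose power is lognormally shadowed, combined with an additive scattering term of constant average power. The parameter values and the exact way the components combine are illustrative assumptions, not the paper's formulation.

```python
import math
import random

def grln_envelope_samples(n, a=1.0, sigma=0.5, mu_ln=0.0, sigma_ln=0.3,
                          sigma_add=0.2, seed=0):
    """Draw n envelope samples from a sketch of the GRLN model:
    shadowed Rice component plus constant-power additive scatter."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        shadow = math.exp(rng.gauss(mu_ln, sigma_ln))  # lognormal shadowing
        x = shadow * (a + rng.gauss(0.0, sigma))       # shadowed in-phase part
        y = shadow * rng.gauss(0.0, sigma)             # shadowed quadrature part
        x += rng.gauss(0.0, sigma_add)                 # constant-power scatter
        y += rng.gauss(0.0, sigma_add)
        out.append(math.hypot(x, y))                   # envelope
    return out
```

Setting `sigma_add = 0` recovers an RLN-style envelope; setting `sigma_ln = 0` removes the shadowing, giving a Rice-like limit, in the spirit of the limiting cases the abstract mentions.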
This paper proposes an algorithm for approximating network reliability that minimizes the use of such simulation procedures. The algorithm dissects and classifies the network states into reliable, unreliable, and undetermined partitions. By postulating the monotone property of the reliability function, each reliable and/or unreliable state can be used to determine a number of other reliable and/or unreliable states without evaluating all of them with an equilibrium assignment procedure. The paper also proposes a cause-based failure framework for representing dependent link degradation probabilities. The algorithm and framework are tested on a medium-sized network to illustrate the algorithm's performance. [source]

Hildebrand and Hansen solubility parameters from Molecular Dynamics with applications to electronic nose polymer sensors
JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 15 2004. M. Belmares

Abstract: We introduce the Cohesive Energy Density (CED) method, a multiple-sampling Molecular Dynamics computer simulation procedure that may offer higher consistency in the estimation of Hildebrand and Hansen solubility parameters. The use of a multiple-sampling technique, combined with a simple but consistent molecular force field and quantum mechanically determined atomic charges, allows for the precise determination of solubility parameters in a systematic way (σ = 0.4 hildebrands). The CED method yields first-principles Hildebrand parameter predictions in good agreement with experiment [root-mean-square (rms) error = 1.1 hildebrands]. We apply the CED method to model the Caltech electronic nose, an array of 20 polymer sensors. The sensors are built with conducting leads connected through thin-film polymers loaded with carbon black. Odorant detection relies on a change in the electric resistivity of the polymer film as a function of the amount of swelling caused by the odorant compound.
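The monotonicity argument behind the partition-based reliability algorithm can be shown with a toy classifier: states are 0/1 link-condition vectors, any state componentwise above a known reliable state is inferred reliable, any state below a known unreliable state is inferred unreliable, and only the remainder needs an explicit evaluation (the `is_reliable` callback below stands in for the paper's equilibrium assignment procedure).

```python
from itertools import product

def classify_states(n_links, is_reliable):
    """Classify all 0/1 network states, exploiting monotonicity of the
    reliability function to avoid evaluating every state explicitly.
    Returns (reliable states, unreliable states, number of evaluations)."""
    reliable, unreliable = [], []
    evaluations = 0
    for state in product((1, 0), repeat=n_links):
        if any(all(s >= r for s, r in zip(state, known)) for known in reliable):
            reliable.append(state)            # inferred: dominates a reliable state
        elif any(all(s <= u for s, u in zip(state, known)) for known in unreliable):
            unreliable.append(state)          # inferred: dominated by an unreliable state
        else:
            evaluations += 1                  # explicit (expensive) evaluation
            (reliable if is_reliable(state) else unreliable).append(state)
    return reliable, unreliable, evaluations
```

The savings grow with network size and with a smarter choice of which undetermined state to evaluate next; this sketch just enumerates states in a fixed order.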
The amount of swelling depends upon the chemical composition of the polymer and the odorant molecule. The resulting response pattern is unique and unambiguously identifies the compound. Experimentally determined changes in the relative resistivity of seven polymer sensors upon exposure to 24 solvent vapors were modeled with the CED-estimated Hansen solubility components. Predictions of polymer sensor responses yield Pearson R² coefficients between 0.82 and 0.99. © 2004 Wiley Periodicals, Inc. J Comput Chem 25: 1814–1826, 2004 [source]

Generating Dichotomous Item Scores with the Four-Parameter Beta Compound Binomial Model
JOURNAL OF EDUCATIONAL MEASUREMENT, Issue 3 2007. Patrick O. Monahan

A Monte Carlo simulation technique for generating dichotomous item scores is presented that implements (a) a psychometric model with explicit assumptions different from those of traditional parametric item response theory (IRT) models, and (b) item characteristic curves without restrictive assumptions concerning their mathematical form. The four-parameter beta compound-binomial (4PBCB) strong true score model (with a two-term approximation to the compound binomial) is used to estimate and generate the true score distribution. The nonparametric item-true score step functions are estimated from classical item difficulties conditional on the proportion-correct total score. The technique performed very well in replicating inter-item correlations, item statistics (point-biserial correlation coefficients and item proportion-correct difficulties), the first four moments of the total score distribution, and coefficient alpha for three real data sets consisting of educational achievement test scores. The technique replicated real data (including subsamples of differing proficiency) as well as the three-parameter logistic (3PL) IRT model (and much better than the 1PL model) and is therefore a promising alternative simulation technique.
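The core arithmetic of the CED method above is simple: the cohesive energy density is the energy difference between a molecule in vacuum and in the bulk, divided by the molar volume, and the Hildebrand parameter is its square root. Averaging over several snapshots mirrors the multiple-sampling idea; all numbers in the example are invented, and unit conventions (cal/mol with cm³/mol giving (cal/cm³)^0.5, i.e. hildebrands) are an assumption.

```python
import math

def hildebrand_parameter(vacuum_energies, bulk_energies_per_molecule, molar_volume):
    """Hildebrand solubility parameter delta = sqrt(E_coh / V), with the
    cohesive energy E_coh averaged over paired MD snapshots (vacuum vs bulk).
    Energies in cal/mol and volume in cm^3/mol give delta in hildebrands."""
    n = len(vacuum_energies)
    e_coh = sum(ev - eb for ev, eb in
                zip(vacuum_energies, bulk_energies_per_molecule)) / n
    return math.sqrt(e_coh / molar_volume)
```

The Hansen decomposition used for the sensor modeling would further split E_coh into dispersion, polar, and hydrogen-bonding contributions before taking square roots.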
This 4PBCB technique may be particularly useful as a more neutral simulation procedure for comparing methods that use different IRT models. [source]

CLIMATE CHANGE IMPACTS ON WATER RESOURCES OF THE TSENGWEN CREEK WATERSHED IN TAIWAN
JOURNAL OF THE AMERICAN WATER RESOURCES ASSOCIATION, Issue 1 2001. Ching-pin Tung

ABSTRACT: This study presents a methodology to evaluate the vulnerability of water resources in the Tsengwen creek watershed, Taiwan. Tsengwen reservoir, located in the watershed, is a multipurpose reservoir whose primary function is to supply water to the ChiaNan Irrigation District. A simulation procedure was developed to evaluate the impacts of climate change on the water resources system; it includes a streamflow model, a weather generation model, a sequent peak algorithm, and a risk assessment process. Three climate change scenarios were constructed based on the predictions of three General Circulation Models (CCCM, GFDL, and GISS). The impacts of climate change on streamflows were simulated and, for each scenario, the agricultural water demand was adjusted based on the change in potential evapotranspiration. Simulation results indicated that climate change may increase the annual and seasonal streamflows in the Tsengwen creek watershed, and the increase in streamflows during wet periods may cause serious flooding. In addition, despite the increase in streamflows, the risk of water deficit may still rise from between 4 and 7 percent to between 7 and 13 percent because of higher agricultural water demand. The simulation results suggest that the reservoir capacity may need to be expanded. In response to climate change, four strategies are suggested: (1) strengthen flood mitigation measures, (2) enhance drought protection strategies, (3) develop new water resources technology, and (4) educate the public.
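The sequent peak algorithm named in the water-resources abstract is a classic, compact calculation: it accumulates the storage deficit whenever demand exceeds inflow, draws it down in wet periods, and reports the largest deficit as the required reservoir capacity. A minimal version:

```python
def sequent_peak_capacity(inflows, demands):
    """Sequent peak algorithm: reservoir capacity needed to meet the demand
    sequence given the inflow sequence (same volume units per period)."""
    storage_deficit = 0.0
    capacity = 0.0
    for inflow, demand in zip(inflows, demands):
        # deficit grows in dry periods, is refilled (floored at zero) in wet ones
        storage_deficit = max(0.0, storage_deficit + demand - inflow)
        capacity = max(capacity, storage_deficit)
    return capacity
```

In a study like the one above, the inflow series would come from the streamflow model under each climate scenario and the demand series from the evapotranspiration-adjusted irrigation requirement.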
[source]

Simulation of Real-Valued Discrete-Time Periodically Correlated Gaussian Processes with Prescribed Spectral Density Matrices
JOURNAL OF TIME SERIES ANALYSIS, Issue 2 2007. A. R. Soltani

Abstract: In this article, we provide a spectral characterization for a real-valued discrete-time periodically correlated process, and then proceed to establish a simulation procedure for such a Gaussian process with a given spectral density. We also prove that the simulated process, at each time index, converges to the actual process in mean square. [source]

Reliability of a machine service life prediction in thermal diagnostic test
LUBRICATION SCIENCE, Issue 1 2007. L. Burstein

Abstract: The reliability of machine service life determined in a thermal diagnostic test is estimated in this investigation. The time dependence of a thermal diagnostic parameter, and the residual service life (RSL) predicted on its basis, are examined with the aid of a Monte-Carlo simulation procedure. The relationships involved are derived from the machine heat balance under two approaches: short term (starting stage) and long term (service time). The diagnostic parameter considered is the rate of temperature change during the short-term period. The constants in its expression are determined from experimental data obtained, as an example, on the gearboxes of three heavy portal cranes with different service times. The diagnostic parameter–time graph (long-term approach) derived from the data is used as the reference for predicting the RSL. The reference dependence obtained from theoretical values of the diagnostic parameter, and the RSL calculated from this dependence, were repeatedly varied with a view to estimating the reliability of the prediction. The uniform and Weibull distributions were used for generating, respectively, the random fluctuations of the reference gearbox temperature and of the diagnostic parameter of the tested one.
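A periodically correlated Gaussian sequence of the kind in the Soltani abstract can also be simulated directly from its covariance matrix. The sketch below replaces the paper's spectral-density construction with a plain Cholesky factorization of an assumed covariance, cov(X_s, X_t) = σ(s)σ(t)ρ^|s−t|, where σ(·) repeats with a fixed period; the covariance form and parameters are illustrative only.

```python
import math
import random

def cholesky(C):
    """Plain Cholesky factorization of a symmetric positive-definite matrix."""
    n = len(C)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(C[i][i] - s)
            else:
                L[i][j] = (C[i][j] - s) / L[j][j]
    return L

def simulate_pc_gaussian(n, sigmas, rho=0.6, seed=0):
    """Simulate a zero-mean periodically correlated Gaussian sequence:
    the standard deviation sigma(t) repeats with period len(sigmas)."""
    period = len(sigmas)
    sig = [sigmas[t % period] for t in range(n)]
    C = [[sig[s] * sig[t] * rho ** abs(s - t) for t in range(n)]
         for s in range(n)]
    L = cholesky(C)
    rng = random.Random(seed)
    z = [rng.gauss(0.0, 1.0) for _ in range(n)]
    return [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(n)]
```

Since X = Lz with z standard normal has covariance LLᵀ = C exactly, each simulated X_t has the prescribed periodic variance by construction.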
It is shown that in 95% of cases the thermal method entails a two-sided error of at most 3.6%, and that for the tested gearbox the discrepancy between the deterministic and simulated averages does not exceed 1.1%. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Direct meso-scale simulations of fibres in turbulent liquid flow
THE CANADIAN JOURNAL OF CHEMICAL ENGINEERING, Issue 4 2010. J. J. Derksen

Abstract: A procedure for direct, meso-scale simulations of flexible fibres immersed in liquid flow is introduced. The fibres are composed of chains of spherical particles connected through ball joints, with the bending stiffness of the joints as a variable. The motion of the fibres and the liquid is two-way coupled, with full resolution of the solid–liquid interface. The simulation procedure is first validated by means of an analytical solution for sphere doublets in zero-Reynolds-number simple shear flow. Subsequently, the numerical method is used to study inertial flows with fibres, more specifically the interaction of a fibre with isotropic turbulence.
[source]

[Ru(py)4Cl(NO)](PF6)2·0.5H2O: a model system for structural determination and ab initio calculations of photo-induced linkage NO isomers
ACTA CRYSTALLOGRAPHICA SECTION B, Issue 5 2009. Benoît Cormary

Structure analysis of the ground state (GS) and two light-induced metastable linkage NO isomers (SI and SII) of [Ru(py)4Cl(NO)](PF6)2·0.5H2O is presented. Illumination of the crystal with a laser at λ = 473 nm and T = 80 K transfers around 92% of the NO ligands from the Ru–N–O configuration into the isomeric configuration Ru–O–N (SI). Subsequent irradiation at λ = 980 nm generates about 48% of the side-on configuration (SII). Heating to temperatures above 200 K, or irradiation with light in the red spectral range, transfers both metastable isomers reversibly back to the GS. Photodifference maps clearly show the N–O configurations of both isomers, and they could be used to find a proper starting model for subsequent refinements. Both metastable isomers have slightly but significantly different cell parameters with respect to the GS. The main structural changes, besides the Ru–O–N linkage itself, are shortenings of the trans Ru–Cl bonds and the equatorial Ru–N bonds. The experimental results are compared with solid-state calculations based on density functional theory (DFT), which reproduce the observed structures with high accuracy in bond lengths and angles. The problem of how the different occupancies of SI and GS could affect refinement results was addressed by a simulation procedure using the DFT data as starting values. [source]

Regression-based Multivariate Linkage Analysis with an Application to Blood Pressure and Body Mass Index
ANNALS OF HUMAN GENETICS, Issue 1 2007. T. Wang

Summary: Multivariate linkage analysis has been suggested for the analysis of correlated traits, such as blood pressure (BP) and body mass index (BMI), because it may offer greater power and provide clearer results than univariate analyses.
Currently, the most commonly used multivariate linkage methods are extensions of the univariate variance component model. One concern about these methods is their inherent sensitivity to the assumption of multivariate normality, which cannot be easily guaranteed in practice. Another problem, possibly shared by all multivariate linkage analysis methods, is the difficulty of interpreting nominal p-values, because the asymptotic distribution of the test statistic has not been well characterized. Here we propose a regression-based multivariate linkage method in which a robust score statistic is used to detect linkage. The p-value of the statistic is evaluated by a simple and rapid simulation procedure. In principle, this method can be used for any number and type of traits and for general pedigree data. We apply this approach to a genome linkage analysis of blood pressure and body mass index data from the Beaver Dam Eye Study. [source]

Estimates of the twinning fraction for macromolecular crystals using statistical models accounting for experimental errors
ACTA CRYSTALLOGRAPHICA SECTION D, Issue 11 2007. Vladimir Y. Lunin

An advanced statistical model is suggested for estimating the twinning fraction in merohedrally (or pseudo-merohedrally) twinned crystals. The model takes the experimental errors of the measured intensities into account and is adapted to the accuracy of a particular X-ray experiment through the standard deviations of the reflection intensities. The theoretical probability distributions for the improved model are calculated using a Monte Carlo-type simulation procedure. The use of different statistical criteria (including likelihood) to estimate the optimal twinning fraction is discussed.
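The "simple and rapid simulation procedure" for p-values in the linkage abstract can be illustrated generically: refer the observed statistic to a Monte Carlo null distribution. The permutation null used below is a stand-in for the paper's actual resampling scheme, chosen only to make the idea concrete.

```python
import random

def simulation_p_value(statistic, data, n_sim=999, seed=0):
    """Monte Carlo p-value: the proportion of simulated-null statistics at
    least as large as the observed one, with the +1 correction so the
    p-value can never be exactly zero."""
    rng = random.Random(seed)
    observed = statistic(data)
    exceed = 0
    for _ in range(n_sim):
        shuffled = data[:]
        rng.shuffle(shuffled)           # null hypothesis: no association with order
        if statistic(shuffled) >= observed:
            exceed += 1
    return (exceed + 1) / (n_sim + 1)
```

With 999 simulations the smallest attainable p-value is 1/1000, which is why such procedures are "rapid" only down to the resolution the study actually needs.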
The improved model gives better agreement between the theoretical and observed cumulative distribution functions, and produces twinning-fraction estimates that are closer to the refined values than those of the conventional model, which disregards experimental errors. The results of the two approaches converge when applied to selected subsets of measured intensities of high accuracy. [source]

Initialization Strategies in Simulation-Based SFE Eigenvalue Analysis
COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 5 2005. Song Du

Poor initializations often result in slow convergence and, in certain instances, may lead to an incorrect or irrelevant answer. The problem of selecting an appropriate starting vector becomes even more complicated when the structure involved is characterized by properties that are random in nature: a good initialization for one sample could be poor for another. Proper eigenvector initialization is therefore essential for efficient random eigenvalue analysis in uncertainty studies involving Monte Carlo simulation. Most simulation procedures to date have been sequential in nature: a random vector describing the structural system is simulated, an FE analysis is conducted, the response quantities are identified by post-processing, and the process is repeated until the standard error in the response of interest is within desired limits. A different approach is to generate all the sample (random) structures prior to performing any FE analysis, rank-order them according to an appropriate measure of distance between the realizations, and perform the FE analyses in that order, using the results of the previous analysis as the initialization for the current one.
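Why measurement error matters for twinning-fraction estimation can be shown with a small Monte Carlo sketch built on Yeates' H statistic, a simpler relative of the models in the Lunin abstract. For ideal acentric data the mean of H = |J1 − J2|/(J1 + J2) over twin-related pairs is (1 − 2α)/2, so α can be estimated as 0.5 − mean(H); everything below (exponential true intensities, Gaussian errors, the specific estimator) is an illustrative assumption.

```python
import random

def estimate_twin_fraction(alpha_true, n_pairs=20000, noise=0.0, seed=0):
    """Simulate twin-related intensity pairs with twin fraction alpha_true,
    optionally perturbed by Gaussian measurement error, and estimate the
    twin fraction from the mean of Yeates' H statistic."""
    rng = random.Random(seed)
    h_sum = 0.0
    for _ in range(n_pairs):
        i1, i2 = rng.expovariate(1.0), rng.expovariate(1.0)  # true intensities
        j1 = (1 - alpha_true) * i1 + alpha_true * i2 + rng.gauss(0.0, noise)
        j2 = alpha_true * i1 + (1 - alpha_true) * i2 + rng.gauss(0.0, noise)
        if j1 + j2 > 1e-6:                                   # guard the ratio
            h_sum += abs(j1 - j2) / (j1 + j2)
    return 0.5 - h_sum / n_pairs
```

Running this with `noise > 0` visibly biases the estimate downward, which is exactly the effect the error-aware model in the abstract is designed to correct.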
The sample structures may also be organized into a tree-type data structure, where each node represents a random sample and the traversal starts at the root and visits every node exactly once. This approach differs from sequential ordering in that it uses the solution of the "closest" node to initialize the iterative solver. The computational efficiencies that result from such orderings (at a modest expense of additional data storage) are demonstrated through the stability analysis of a system with closely spaced buckling loads and the modal analysis of a simply supported beam. [source]

Control-oriented nonlinear modelling of molten carbonate fuel cells
INTERNATIONAL JOURNAL OF ENERGY RESEARCH, Issue 5 2004. Cheng Shen

Abstract: The performance and availability of a molten carbonate fuel cell (MCFC) stack depend strongly on its operating temperature, so controlling that temperature within a specified range and reducing its fluctuation are highly desirable. Existing MCFC stack models are too complicated to be suitable for controller design because they lack clear input–output relations. In this paper, to meet the demands of control design, a control-oriented model of the quantitative relations between the stack temperatures and the flowrates of the input gases is developed, based on conservation laws. It is an affine nonlinear multi-input, multi-output model, with the flowrates of the fuel and oxidant gases as the manipulated vector and the temperatures of the MCFC electrode–electrolyte plates and separator plates as the controlled vector. The modelling and simulation procedures are given in detail. Simulation tests reveal that the model is accurate and suitable for use in designing a controller for an MCFC stack. Copyright © 2004 John Wiley & Sons, Ltd.
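The sample-ordering idea from the SFE eigenvalue abstract can be sketched with a greedy nearest-neighbour pass: order the pre-generated random realizations so that consecutive samples are close, letting each FE solve warm-start from the previous solution (the tree-structured variant would instead warm-start from the closest already-solved node). The Euclidean distance and the greedy heuristic are illustrative choices.

```python
import math
import random

def nearest_neighbour_order(samples):
    """Greedy ordering of realizations (tuples of random parameters) so that
    consecutive samples are close in Euclidean distance."""
    remaining = list(range(len(samples)))
    order = [remaining.pop(0)]                 # start from the first sample
    while remaining:
        last = samples[order[-1]]
        nxt = min(remaining, key=lambda j: math.dist(last, samples[j]))
        remaining.remove(nxt)
        order.append(nxt)
    return order

def path_length(samples, order):
    """Total distance between consecutive samples in the given order."""
    return sum(math.dist(samples[a], samples[b])
               for a, b in zip(order, order[1:]))
```

A shorter path means smaller parameter jumps between successive analyses, so the previous eigenvector is a better starting vector for the iterative solver.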
[source]

Phylogenetic autocorrelation and heritability of geographic range size, shape and position of fiddler crabs, genus Uca (Crustacea, Decapoda)
JOURNAL OF ZOOLOGICAL SYSTEMATICS AND EVOLUTIONARY RESEARCH, Issue 2 2010. J. C. Nabout

Abstract: The aim of this study was to evaluate the levels of phylogenetic heritability of geographical range size, shape and position for 88 species of fiddler crabs of the world, using phylogenetic comparative methods and simulation procedures to evaluate their fit to the neutral Brownian-motion model. The geographical range maps were compiled from the literature; range size was based on the entire length of coastline occupied by each species, and the position of each range was calculated as its latitudinal and longitudinal midpoint. The range shape of each species was based on fractal dimension (box-counting technique). The evolutionary patterns in the geographical range metrics were explored with phylogenetic correlograms using Moran's I autocorrelation coefficients, the autoregressive method (ARM) and phylogenetic eigenvector regression (PVR). The correlograms were compared with those obtained from simulations of Brownian motion processes across the phylogeny. The distribution of geographical range size of fiddler crabs is right-skewed, and only weak phylogenetic autocorrelation was observed. On the other hand, there was a strong phylogenetic pattern in the position of the range (mainly along the longitudinal axis). Indeed, the ARM and PVR showed that about 86% and 91%, respectively, of the longitudinal midpoint could be explained by the phylogenetic relationships among the species. The strong longitudinal phylogenetic pattern may be due to vicariant allopatric speciation and geographically structured cladogenesis in the group.
The traits analysed (geographical range size and position) did not follow a Brownian motion process, suggesting that both adaptive ecological and evolutionary processes must be invoked to explain their dynamics, rather than simple neutral inheritance over fiddler-crab evolution.
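The Moran's I coefficient used in the fiddler-crab correlograms is straightforward to compute once a proximity matrix is fixed; in the phylogenetic setting the weights encode distance classes on the tree. A minimal implementation, with the weight matrix left to the caller:

```python
def morans_i(x, w):
    """Moran's I autocorrelation of trait values x under a symmetric
    proximity matrix w with zero diagonal. Under no autocorrelation the
    expected value is -1/(n-1); values near +1 indicate that close taxa
    (large w[i][j]) have similar trait values."""
    n = len(x)
    mean = sum(x) / n
    dev = [v - mean for v in x]
    w_sum = sum(sum(row) for row in w)
    num = sum(w[i][j] * dev[i] * dev[j] for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / w_sum) * (num / den)
```

A correlogram then plots I across successive phylogenetic distance classes, each class supplying its own 0/1 weight matrix.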
[source]

Conditional and Unconditional Simulation of Healthy Patients' Visual Fields
BIOMETRICAL JOURNAL, Issue 4 2004. M. V. Ibáñez

Abstract: This paper describes a simulation problem motivated by the study of glaucoma, a very serious and widespread ocular illness. To ascertain whether a patient suffers from glaucoma, a perimetric test is performed; however, the evolution of the disease is very slow, and large longitudinal sets of tests taken on the same patient are needed to study its evolution, to analyze the efficiency of existing methods for detecting the progression of glaucoma, and to develop new ones. Simulation can be a very useful procedure for obtaining appropriate data sets to work with. Our aim in this work is to simulate several visual fields (VFs) for a healthy patient so as to reflect his evolution in time. We simulate from a spatio-temporal model, taking into account the correlation existing between the observed (or simulated) values in space and time. Two different simulation procedures (unconditional and conditional) are studied and applied to obtain the simulations of interest. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
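The distinction between unconditional and conditional simulation in the visual-field abstract can be sketched for a zero-mean Gaussian vector: draw an unconditional realization from the covariance via Cholesky, then apply the standard conditioning-by-kriging correction so the realization passes exactly through an observed value while the covariance of the remaining components is preserved. The single-datum specialization and the example covariance are simplifying assumptions.

```python
import math
import random

def conditional_gaussian_sim(cov, obs_i, obs_val, seed=0):
    """Conditional simulation of a zero-mean Gaussian vector given one
    observed component: unconditional draw x = L z, then a simple-kriging
    correction that pins component obs_i to obs_val."""
    n = len(cov)
    # Cholesky factorization of the covariance
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = (math.sqrt(cov[i][i] - s) if i == j
                       else (cov[i][j] - s) / L[j][j])
    rng = random.Random(seed)
    z = [rng.gauss(0.0, 1.0) for _ in range(n)]
    x = [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(n)]  # unconditional
    # kriging weights for a single conditioning point: cov[t][i] / cov[i][i]
    corr = (obs_val - x[obs_i]) / cov[obs_i][obs_i]
    return [x[t] + cov[t][obs_i] * corr for t in range(n)]
```

Dropping the correction step (returning `x` directly) gives the unconditional simulation; with several observed test locations the scalar correction becomes a linear solve against the observed-block covariance.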