Discretization


Kinds of Discretization

  • boundary discretization
  • difference discretization
  • element discretization
  • finite difference discretization
  • finite element discretization
  • numerical discretization
  • semi-implicit discretization
  • space discretization
  • spatial discretization
  • temporal discretization
  • time discretization
  • vertical discretization

Terms modified by Discretization

  • discretization error
  • discretization method
  • discretization methods
  • discretization scheme

Selected Abstracts


    Adaptive integral method combined with the loose GMRES algorithm for planar structures analysis

    INTERNATIONAL JOURNAL OF RF AND MICROWAVE COMPUTER-AIDED ENGINEERING, Issue 1 2009
    W. Zhuang
    Abstract In this article, the adaptive integral method (AIM) is used to analyze large-scale planar structures. Discretization of the corresponding integral equations by the method of moments (MoM) with Rao-Wilton-Glisson (RWG) basis functions can model arbitrarily shaped planar structures, but usually leads to a fully populated matrix. AIM maps these basis functions onto a rectangular grid, where the Toeplitz property of the Green's function is exploited, enabling the matrix-vector multiplication to be computed with the fast Fourier transform (FFT). This reduces the memory requirement from O(N²) to O(N) and the operation complexity from O(N²) to O(N log N), where N is the number of unknowns. The resultant equations are then solved with the loose generalized minimal residual method (LGMRES) to accelerate iteration, which converges much faster than the conventional conjugate gradient (CG) method. Furthermore, several preconditioning techniques are employed to enhance the computational efficiency of the LGMRES. Typical microstrip circuits and microstrip antenna arrays are analyzed, and numerical results show that the preconditioned LGMRES converges much faster than the conventional LGMRES. © 2008 Wiley Periodicals, Inc. Int J RF and Microwave CAE, 2009. [source]
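
The O(N log N) matrix-vector product quoted above comes from embedding the Toeplitz matrix produced by the regular grid into a circulant matrix, which the FFT diagonalizes. A minimal one-dimensional NumPy sketch of that kernel follows (illustrative only, not the authors' implementation; `toeplitz_matvec_fft` is a hypothetical name):

```python
import numpy as np
from scipy.linalg import toeplitz  # dense reference, only for the check below

def toeplitz_matvec_fft(c, r, x):
    """y = T @ x for a Toeplitz T (first column c, first row r) in O(N log N)."""
    n = len(x)
    v = np.concatenate([c, r[:0:-1]])           # first column of the circulant embedding
    xp = np.concatenate([x, np.zeros(n - 1)])   # zero-pad x to the circulant size 2n-1
    y = np.fft.ifft(np.fft.fft(v) * np.fft.fft(xp))
    return y[:n].real                           # the first n entries recover T @ x

rng = np.random.default_rng(0)
n = 256
c, r = rng.standard_normal(n), rng.standard_normal(n)
r[0] = c[0]                                     # Toeplitz consistency condition
x = rng.standard_normal(n)
assert np.allclose(toeplitz_matvec_fft(c, r, x), toeplitz(c, r) @ x)
```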


    Non-linear dynamic contact of thin-walled structures

    PROCEEDINGS IN APPLIED MATHEMATICS & MECHANICS, Issue 1 2008
    Thomas Cichosz
    In many areas of mechanical engineering, contact problems of thin-walled structures play a crucial role; car crash tests and incremental sheet metal forming can be named as examples. Contact also occurs in civil engineering, for instance when determining the moment-rotation characteristics of a bolted beam-column joint. Effective simulation of these and other contact problems, especially in three-dimensional non-linear implicit structural mechanics, is still a challenging task. Modelling of those problems needs a robust method which takes the thin-walled character and dynamic effects into account. We use a segment-to-segment approach for discretization of the contact and introduce Lagrange multipliers, which physically represent the contact pressure. The geometric impenetrability condition is formulated in a weak, integral sense. Choosing dual shape functions for the interpolation of the Lagrange multipliers, we obtain decoupled nodal constraint conditions. Combining this with an active set strategy, the Lagrange multipliers can easily be eliminated, so that the size of the resulting system of equations remains constant. Discretization in time is done with the implicit Generalized-α Method and the Generalized Energy-Momentum Method. Using the "Velocity-Update" method, the total energy is conserved for frictionless contact. Various examples show the performance of the presented strategies. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
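
The Lagrange-multiplier treatment of impenetrability can be seen on a one-degree-of-freedom toy problem: a spring pushed against a rigid wall, with a two-state active set and the multiplier playing the role of contact pressure. A hedged sketch of the idea only, far from the paper's segment-to-segment dynamic setting:

```python
import numpy as np

def spring_contact(k, f, gap):
    """1-DOF frictionless contact: min 0.5*k*u^2 - f*u  s.t.  u <= gap.
    Returns displacement u and contact pressure lam (the KKT multiplier)."""
    u = f / k                      # trial step: assume the contact is inactive
    if u <= gap:
        return u, 0.0              # no contact, zero pressure
    # Active constraint: solve the saddle-point (KKT) system
    #   [k  1][u  ]   [f  ]
    #   [1  0][lam] = [gap]
    A = np.array([[k, 1.0], [1.0, 0.0]])
    u, lam = np.linalg.solve(A, np.array([f, gap]))
    return float(u), float(lam)    # lam >= 0 signals a physically valid contact

print(spring_contact(k=100.0, f=50.0, gap=1.0))   # inactive: (0.5, 0.0)
print(spring_contact(k=100.0, f=250.0, gap=1.0))  # active:   (1.0, 150.0)
```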


    High order boundary integral methods for Maxwell's equations using Microlocal Discretization and Fast Multipole Methods

    PROCEEDINGS IN APPLIED MATHEMATICS & MECHANICS, Issue 1 2007
    E. Darrigrand
    An efficient method for solving the time-harmonic Maxwell's equations in an exterior domain at high frequencies is obtained by using the integral formulation of Després combined with a coupling method (MLFMD) based on the Microlocal Discretization method (MD) and the Multi-Level Fast Multipole Method (MLFMM) [1]. In this paper, we consider curved finite elements of higher order in the MLFMD method. Moreover, we improve the MLFMD method by sparsifying the translation matrix of the MLFMM, which involves privileged directions in this application. This improvement leads to a significant reduction of the algorithm's complexity. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


    Discovering Maximal Generalized Decision Rules Through Horizontal and Vertical Data Reduction

    COMPUTATIONAL INTELLIGENCE, Issue 4 2001
    Xiaohua Hu
    We present a method to learn maximal generalized decision rules from databases by integrating discretization, generalization and rough set feature selection. Our method reduces the data both horizontally and vertically. In the first phase, discretization and generalization are integrated: the numeric attributes are discretized into a few intervals, the primitive values of symbolic attributes are replaced by high-level concepts, and some obviously superfluous or irrelevant symbolic attributes are eliminated. Horizontal reduction is accomplished by merging identical tuples after the substitution of an attribute value by its higher-level value in a pre-defined concept hierarchy for symbolic attributes, or after the discretization of continuous (numeric) attributes. This phase greatly decreases the number of tuples in the database. In the second phase, a novel context-sensitive feature merit measure is used to rank the features, and a subset of relevant attributes is chosen based on rough set theory and the merit values of the features. A reduced table is obtained by removing those attributes that are not in the relevant attribute subset, and the data set is thus further reduced vertically without destroying the interdependence relationships between classes and attributes. Rough set-based value reduction is then performed on the reduced table and all redundant condition values are dropped. Finally, the tuples in the reduced table are transformed into a set of maximal generalized decision rules. The experimental results on UCI data sets and a real market database demonstrate that our method can dramatically reduce the feature space and improve learning accuracy. [source]
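
As an illustration of the first phase, the sketch below bins a numeric attribute into a few equal-width intervals and then merges tuples that have become identical, keeping a support count. A minimal stand-in for the paper's discretization plus horizontal reduction; all names and data are invented for the example:

```python
from collections import Counter

def equal_width_bins(values, k=3):
    """Discretize numeric values into k equal-width intervals (bin index per value)."""
    lo, hi = min(values), max(values)
    w = (hi - lo) / k or 1.0
    return [min(int((v - lo) / w), k - 1) for v in values]

def horizontal_reduction(rows):
    """Merge tuples that became identical after discretization, with a support count."""
    return Counter(map(tuple, rows))

ages   = [23, 25, 31, 47, 52, 24, 49]
labels = ['y', 'y', 'n', 'n', 'n', 'y', 'n']
binned = equal_width_bins(ages, k=3)
table  = horizontal_reduction(zip(binned, labels))
print(table)  # Counter({(0, 'y'): 3, (2, 'n'): 3, (0, 'n'): 1}) -- 7 tuples became 3
```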


    On Floating-Point Normal Vectors

    COMPUTER GRAPHICS FORUM, Issue 4 2010
    Quirin Meyer
    Abstract In this paper we analyze normal vector representations. We derive the error of the most widely used representation, namely 3D floating-point normal vectors. Based on this analysis, we show that, in theory, the discretization error inherent to single-precision floating-point normals can be achieved by 2^50.2 uniformly distributed normals, addressable by 51 bits. We review common sphere parameterizations and show that octahedron normal vectors perform best: they are fast and stable to compute, have a controllable error, and require only 1 bit more than the theoretical optimal discretization with the same error. [source]
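
Octahedron normal vectors are compact to implement; the sketch below round-trips a unit normal through the octahedral parameterization with an assumed quantization width of 16 bits per component (an arbitrary choice for the demo, not the paper's exact scheme):

```python
import numpy as np

def oct_encode(n, bits=16):
    """Map a unit vector to octahedron coordinates, quantized to `bits` per axis."""
    n = n / np.abs(n).sum()                      # project onto the octahedron |x|+|y|+|z| = 1
    if n[2] < 0:                                 # fold the lower hemisphere over
        n[:2] = (1 - np.abs(n[1::-1])) * np.sign(n[:2])
    return np.round((n[:2] * 0.5 + 0.5) * (2**bits - 1)).astype(np.uint32)

def oct_decode(q, bits=16):
    """Inverse map: quantized octahedron coordinates back to a unit vector."""
    p = q.astype(np.float64) / (2**bits - 1) * 2 - 1
    v = np.array([p[0], p[1], 1 - np.abs(p[0]) - np.abs(p[1])])
    if v[2] < 0:                                 # undo the hemisphere fold
        v[:2] = (1 - np.abs(v[1::-1])) * np.sign(v[:2])
    return v / np.linalg.norm(v)

n = np.array([0.3, -0.5, 0.81])
n /= np.linalg.norm(n)
print(n, oct_decode(oct_encode(n)))              # agree to roughly 1e-5
```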


    Are Points the Better Graphics Primitives?

    COMPUTER GRAPHICS FORUM, Issue 3 2001
    Markus Gross
    Since the early days of graphics, the computer-based representation of three-dimensional geometry has been one of the core research fields. Today, various sophisticated geometric modelling techniques, including NURBS and implicit surfaces, allow the creation of 3D graphics models with increasingly complex shape. In spite of these methods, the triangle has survived over decades as the king of graphics primitives, striking the right balance between descriptive power and computational burden. As a consequence, today's consumer graphics hardware is heavily tailored for high-performance triangle processing. In addition, a new generation of geometry processing methods, including hierarchical representations, geometric filtering, and feature detection, fosters the concept of triangle meshes for graphics modelling. Unlike triangles, points have, surprisingly, been neglected as a graphics primitive. Although they have been included in APIs for many years, it is only recently that point samples have experienced a renaissance in computer graphics. Conceptually, points provide a mere discretization of geometry without explicit storage of topology. Thus, point samples reduce the representation to the essentials needed for rendering and enable the generation of highly optimized object representations. Although the loss of topology poses great challenges for graphics processing, the latest generation of algorithms features high-performance rendering, point/pixel shading, anisotropic texture mapping, and advanced signal processing of point-sampled geometry. This talk will give an overview of how recent research results in the processing of triangles and points are changing our traditional way of thinking about surface representations in computer graphics, and will discuss the question: are points the better graphics primitives? [source]


    Direct Manipulation and Interactive Sculpting of PDE Surfaces

    COMPUTER GRAPHICS FORUM, Issue 3 2000
    Haixia Du
    This paper presents an integrated approach and a unified algorithm that combine the benefits of PDE surfaces and powerful physics-based modeling techniques within a single modeling framework, in order to realize the full potential of PDE surfaces. We have developed a novel system that allows direct manipulation and interactive sculpting of PDE surfaces at arbitrary locations, hence supporting various interactive techniques beyond conventional boundary control. Our prototype software allows users to interactively modify the point, normal, curvature, and arbitrary regions of PDE surfaces in a predictable way. We employ several simple yet effective numerical techniques, including finite-difference discretization of the PDE surface, multigrid-like subdivision on the PDE surface, and a mass-spring approximation of the elastic PDE surface, to achieve real-time performance. In addition, our dynamic PDE surfaces can also be approximated using standard bivariate B-spline finite elements, which can subsequently be sculpted and deformed directly in real time subject to intrinsic PDE constraints. Our experiments demonstrate many attractive advantages of our dynamic PDE formulation, such as intuitive control, real-time feedback, and usability for the general public. [source]
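
As a stand-in for the finite-difference machinery above, the following sketch relaxes Laplace's equation on a height grid with fixed boundary values: the second-order analogue of the fourth-order PDE surfaces actually used in the paper (grid size and iteration count are arbitrary demo values):

```python
import numpy as np

def relax_surface(z, iters=500):
    """Jacobi relaxation of Laplace's equation: each interior height becomes the
    average of its four neighbours; boundary rows/columns stay fixed."""
    z = z.copy()
    for _ in range(iters):
        z[1:-1, 1:-1] = 0.25 * (z[:-2, 1:-1] + z[2:, 1:-1] +
                                z[1:-1, :-2] + z[1:-1, 2:])
    return z

# Boundary control only: a bump prescribed on one edge diffuses smoothly inward.
z0 = np.zeros((32, 32))
z0[0, :] = np.sin(np.linspace(0, np.pi, 32))
surface = relax_surface(z0)
print(surface[16, 16].round(4))   # interior height determined by the boundary data
```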


    An Adaptive Strategy for the Local Discontinuous Galerkin Method Applied to Porous Media Problems

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 4 2008
    Esov S. Velázquez
    DG methods may be viewed as high-order extensions of the classical finite volume method and, since no interelement continuity is imposed, they can be defined on very general meshes, including nonconforming ones, which makes them well suited to h-adaptivity. The technique starts with an initial conforming spatial discretization of the domain and, at each step, the error of the solution is estimated. The mesh is locally modified according to the error estimate by performing two local operations: refinement and agglomeration. This procedure is repeated until the solution reaches the desired accuracy. The performance of this technique is examined through several numerical experiments, and results are compared with globally refined meshes in examples with known analytic solutions. [source]
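
The refine-until-accurate loop has a simple generic shape, sketched here in one dimension: estimate a per-cell error from two discretization levels, split the worst cell, and stop once the summed estimate meets the tolerance. Refinement only; agglomeration is omitted from this toy, and all names are invented:

```python
import heapq, math

def adaptive_refine(f, a, b, tol=1e-6):
    """Greedy 1-D h-adaptivity: the per-cell error is estimated as the difference
    between a one-trapezoid and a two-trapezoid integral of f over the cell."""
    def cell(lo, hi):
        mid = 0.5 * (lo + hi)
        coarse = 0.5 * (hi - lo) * (f(lo) + f(hi))
        fine = 0.25 * (hi - lo) * (f(lo) + 2 * f(mid) + f(hi))
        return (-abs(fine - coarse), lo, hi, fine)   # max-heap on the error estimate

    heap = [cell(a, b)]
    while sum(-e for e, *_ in heap) > tol:
        _, lo, hi, _ = heapq.heappop(heap)           # worst cell first
        mid = 0.5 * (lo + hi)
        heapq.heappush(heap, cell(lo, mid))          # refine: split into two cells
        heapq.heappush(heap, cell(mid, hi))
    return sum(v for *_, v in heap), len(heap)

val, ncells = adaptive_refine(math.exp, 0.0, 1.0)
print(round(val, 6), ncells)   # ~ e - 1, with cells concentrated where curvature demands
```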


    A Polymorphic Dynamic Network Loading Model

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 2 2008
    Nie Yu (Marco)
    The polymorphism, realized through a general node-link interface and proper discretization, offers several prominent advantages. First, PDNL allows road facilities in the same network to be represented by different traffic flow models, based on the trade-off between efficiency and realism and/or the characteristics of the targeted problem. Second, new macroscopic link/node models can easily be plugged into the framework and compared against existing ones. Third, PDNL decouples links and nodes in network loading, and thus opens the door to parallel computing. Finally, PDNL keeps track of individual vehicular quanta of arbitrary size, which makes it possible to replicate analytical loading results as closely as desired. PDNL thus offers an ideal platform for studying both analytical dynamic traffic assignment problems of different kinds and macroscopic traffic simulation. [source]
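
A minimal sketch of such a node-link interface, with two interchangeable link models behind one polymorphic API (class and method names are invented for the example, not PDNL's):

```python
from abc import ABC, abstractmethod

class Link(ABC):
    """Common node-link interface: any flow model that can report its sending and
    receiving capacity over a time step can plug into the network loader."""
    @abstractmethod
    def sending_flow(self, dt: float) -> float: ...
    @abstractmethod
    def receiving_flow(self, dt: float) -> float: ...

class PointQueueLink(Link):
    def __init__(self, capacity, queue=0.0):
        self.capacity, self.queue = capacity, queue
    def sending_flow(self, dt):
        return min(self.queue, self.capacity * dt)
    def receiving_flow(self, dt):
        return self.capacity * dt                    # no spatial extent: never spills back

class CellLink(Link):
    def __init__(self, capacity, jam, density=0.0):
        self.capacity, self.jam, self.density = capacity, jam, density
    def sending_flow(self, dt):
        return min(self.density, self.capacity * dt)
    def receiving_flow(self, dt):
        return min(self.capacity * dt, self.jam - self.density)   # models spillback

def node_transfer(up: Link, down: Link, dt: float) -> float:
    """A node moves the feasible flow between any two Link implementations."""
    return min(up.sending_flow(dt), down.receiving_flow(dt))

print(node_transfer(PointQueueLink(10, queue=4), CellLink(8, jam=20, density=18), dt=1.0))
```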


    SECM Visualization of Spatial Variability of Enzyme-Polymer Spots.

    ELECTROANALYSIS, Issue 19-20 2006
    2: Complex Interference Elimination by Means of Selection of Highest Sensitivity Sensor Substructures and Artificial Neural Networks
    Abstract Polymer spots with entrapped glucose oxidase were fabricated on glass surfaces and the localized enzymatic response was subsequently visualized using scanning electrochemical microscopy (SECM) in the generator–collector mode. SECM images were obtained under simultaneous variation of the concentration of glucose (0–6 mM) and ascorbic acid (0–200 µM), or, in a second set of experiments, of glucose (0–2 mM) and 2-deoxy-D(+)-glucose (0–4 mM). Aiming at the quantification of the mixture components, discretization of the response surfaces of the overall enzyme/polymer spot into numerous spatially defined microsensor substructures was performed. The sensitivity of the sensor substructures to the measured analytes was calculated, and patterns of variability in the data were analyzed before and after elimination of interferences using principal component analysis. Using artificial neural networks fed with the data provided by the sensor substructures showing the highest sensitivity for glucose, the glucose concentration could be calculated in solutions containing unknown amounts of ascorbic acid with good accuracy (RMSE 0.17 mM). Using, as an input data set, measurements provided by the sensing substructures showing the highest sensitivity for ascorbic acid, in combination with the response of the sensors showing the highest dependence on the glucose concentration, the error of the ascorbic acid concentration calculation in solutions containing an unknown amount of glucose was 10 µM. Similarly, prediction of the glucose concentration in the presence of 2-deoxy-D(+)-glucose was possible with an RMSE of 0.1 mM, while the error of the calculation of 2-deoxy-D(+)-glucose concentrations in the presence of unknown concentrations of glucose was 0.36 mM. [source]


    Scaling of water flow through porous media and soils

    EUROPEAN JOURNAL OF SOIL SCIENCE, Issue 1 2008
    K. Roth
    Summary Scaling of fluid flow in general is outlined and contrasted to the scaling of water flow in porous media. It is then applied to deduce Darcy's law, thereby demonstrating that stationarity of the flow field at the scale of the representative elementary volume (REV) is a critical prerequisite. The focus is on the implications of the requirement of stationarity, or local equilibrium, in particular on the validity of the Richards equation for the description of water movement through soils. Failure to satisfy this essential requirement may occur at the scale of the REV or, particularly in numerical simulations, at the scale of the model discretization. The latter can be alleviated by allocation of more computational resources and by working on a finer-grained representation. In contrast, the former is fundamental and leads to an irrevocable failure of the Richards equation as is observed with infiltration instabilities that lead to fingered flow. [source]


    The perturbation method and the extended finite element method.

    FATIGUE & FRACTURE OF ENGINEERING MATERIALS AND STRUCTURES, Issue 8 2006
    An application to fracture mechanics problems
    ABSTRACT The extended finite element method has been successful in the numerical simulation of fracture mechanics problems. With this methodology, unlike the conventional finite element method, discretization of the domain with a mesh adapted to the geometry of the discontinuity is not required. On the other hand, in traditional fracture mechanics all variables are considered to be deterministic (uniquely defined by a given numerical value). However, the uncertainty associated with these variables (external loads, geometry and material properties, among others) is well known. This paper presents a novel application of the perturbation method, along with the extended finite element method, to treat these uncertainties. The methodology has been implemented in commercial software and the results are compared with those obtained by means of a Monte Carlo simulation. [source]
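
The perturbation method propagates uncertainty through a first-order Taylor expansion of the response. The toy below checks that estimate against Monte Carlo for a hypothetical stress-intensity-like response K = C·s·√(πa) with an uncertain load s; this is an invented illustration, not the paper's XFEM model, and K happens to be linear in s, so the two agree closely:

```python
import numpy as np

def perturbation_moments(g, dg, mu, var):
    """First-order perturbation (Taylor) estimate of the moments of g(X) for
    X ~ N(mu, var): E[g] ~ g(mu), Var[g] ~ g'(mu)^2 * var."""
    return g(mu), dg(mu) ** 2 * var

C, a = 1.12, 0.02                                  # arbitrary demo constants
g  = lambda s: C * s * np.sqrt(np.pi * a)          # toy response
dg = lambda s: C * np.sqrt(np.pi * a)              # its derivative w.r.t. the load

mu_s, var_s = 100.0, 8.0 ** 2                      # uncertain load: mean and variance
m_pert, v_pert = perturbation_moments(g, dg, mu_s, var_s)

s = np.random.default_rng(1).normal(mu_s, np.sqrt(var_s), 200_000)
print(m_pert, v_pert)                              # perturbation estimate
print(g(s).mean(), g(s).var())                     # Monte Carlo reference
```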


    Numerical determination of 3D temperature fields in steel joints

    FIRE AND MATERIALS, Issue 2-4 2004
    Jean-Marc Franssen
    Abstract A numerical study was undertaken to investigate the temperature field in steel joints and to compare the temperatures in the joints with those of the adjacent steel members, on the hypothesis that the thermal protection is the same on the joint and on the members. Very brief information is given on the numerical model, supplemented with parametric studies made in order to determine the required level of discretization in the time and space domains. A simplified assumption for representing the thermal insulation is also discussed and validated. Different numerical analyses are performed, with variation of the following parameters: (i) type of joint, from very simple to more complex configurations, with welds and/or bolts, all of them representing joints between elements located in the same plane; (ii) joints unprotected or protected by a sprayed material; (iii) ISO, hydrocarbon or a natural fire scenario. The fact that the thermal attack from the fire might be less severe because joints are usually located in the corner of the compartment is not taken into account. Copyright © 2004 John Wiley & Sons, Ltd. [source]


    Effects of Phonon Confinement on Anomalous Thermalization, Energy Transfer, and Upconversion in Ln3+ -Doped Gd2O3 Nanotubes

    ADVANCED FUNCTIONAL MATERIALS, Issue 4 2010
    Andreia G. Macedo
    Abstract There is a growing interest in understanding how size-dependent quantum confinement affects the photoluminescence efficiency, excited-state dynamics, energy-transfer and thermalization phenomena in nanophosphors. For lanthanide (Ln3+)-doped nanocrystals, despite the localized 4f states, confinement effects are induced mostly via electron–phonon interactions. In particular, the anomalous thermalization reported so far for a handful of Ln3+ -doped nanocrystals has been rationalized by the absence of low-frequency phonon modes. This nanoconfinement may further impact the Ln3+ luminescence dynamics, such as phonon-assisted energy transfer or upconversion processes. Here, intriguing and unprecedented anomalous thermalization in Gd2O3:Eu3+ and Gd2O3:Yb3+,Er3+ nanotubes is reported, up to one order of magnitude larger than previously reported for similar materials. This anomalous thermalization induces unexpected energy transfer from Eu3+ C2 to S6 crystallographic sites at 11 K, and 2H11/2 → 4I15/2 Er3+ upconversion emission; it is interpreted on the basis of the discretization of the phonon density of states, which is easily tuned by varying the annealing temperature (923–1123 K) in the synthesis procedure and/or the Ln3+ concentration (0.16–6.60%). [source]


    Reliability Analysis of Technical Systems/Structures by means of Polyhedral Approximation of the Safe/Unsafe Domain

    GAMM - MITTEILUNGEN, Issue 2 2007
    K. Marti
    Abstract Reliability analysis of technical structures and systems is based on an appropriate (limit) state function separating the safe and unsafe states in the space of random parameters. Starting from the survival conditions, i.e. the state equation and the condition for the admissibility of states, an optimizational representation of the state function can be given in terms of the minimum function of a certain minimization problem. Selecting a certain number of boundary points of the safe/unsafe domain, hence on the limit state surface, the safe/unsafe domain is approximated by a convex polyhedron defined by the intersection of the half spaces in the parameter space generated by the tangent hyperplanes to the safe/unsafe domain at the selected boundary points on the limit state surface. The resulting approximative probability functions are then defined by means of probabilistic linear constraints in the parameter space, where, after an appropriate transformation, the probability distribution of the parameter vector can be assumed to be normal with zero mean vector and unit covariance matrix. Working with separate linear constraints, approximation formulas for the probability of survival of the structure are obtained immediately. More exact approximations are obtained by considering joint probability constraints, which, in a second approximation step, can be evaluated by using probability inequalities and/or discretization of the underlying probability distribution. (© 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
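
For a single tangent hyperplane a·u ≤ b in standard normal space, the survival probability is exactly Φ(b/‖a‖); intersecting several half spaces gives the polyhedral approximation, which can be checked by sampling. A small sketch under those assumptions (constraint values are invented):

```python
import numpy as np
from scipy.stats import norm

def survival_prob_halfspace(a, b):
    """P(a.u <= b) for u ~ N(0, I): one linear (tangent-hyperplane) constraint in
    standard normal space has exact probability Phi(b / ||a||)."""
    return norm.cdf(b / np.linalg.norm(a))

# One tangent hyperplane; the reliability index is beta = b / ||a||.
a, b = np.array([0.6, 0.8]), 3.0
p_lin = survival_prob_halfspace(a, b)

# Monte Carlo check, plus a crude two-constraint polyhedral refinement.
U = np.random.default_rng(2).standard_normal((1_000_000, 2))
print(p_lin, (U @ a <= b).mean())                 # ~0.99865 both ways
a2, b2 = np.array([1.0, 0.0]), 3.5
print(((U @ a <= b) & (U @ a2 <= b2)).mean())     # joint constraints: smaller survival prob.
```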


    Parsimonious finite-volume frequency-domain method for 2-D P-SV-wave modelling

    GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 2 2008
    R. Brossier
    SUMMARY A new numerical technique for solving 2-D elastodynamic equations based on a finite-volume frequency-domain approach is proposed. This method has been developed as a tool to perform 2-D elastic frequency-domain full-waveform inversion. In this context, the system of linear equations that results from the discretization of the elastodynamic equations is solved with a direct solver, allowing efficient multiple-source simulations at the partial expense of the memory requirement. The discretization of the finite-volume approach is through triangles. Only fluxes with the required quantities are shared between the cells, relaxing the meshing conditions, as compared to finite-element methods. The free surface is described along the edges of the triangles, which can have different slopes. By applying a parsimonious strategy, the stress components are eliminated from the discrete equations and only the velocities are left as unknowns in the triangles. Together with the local support of the P0 finite-volume stencil, the parsimonious approach allows the minimizing of core memory requirements for the simulation. Efficient perfectly matched layer absorbing conditions have been designed for damping the waves around the grid. The numerical dispersion of this FV formulation is similar to that of O(Δx²) staggered-grid finite-difference (FD) formulations when considering structured triangular meshes. The validation has been performed with analytical solutions of several canonical problems and with numerical solutions computed with a well-established FD time-domain method in heterogeneous media. In the presence of a free surface, the finite-volume method requires 10 triangles per wavelength for a flat topography, and 15 triangles per wavelength for more complex shapes, well below the criteria required by the staircase approximation of O(Δx²) FD methods. Comparisons between the frequency-domain finite-volume and the O(Δx²) rotated FD methods also show that the former is faster and less memory demanding for a given accuracy level, an attractive feature for frequency-domain seismic inversion. We have thus developed an efficient method for 2-D P-SV-wave modelling on structured triangular meshes as a tool for frequency-domain full-waveform inversion. Further work is required to improve the accuracy of the method on unstructured meshes. [source]


    Modelling elastic media with the wavelet transform

    GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 2 2001
    João Willy Corrêa Rosa
    Summary We present a new method for modelling 2-D elastic media with the application of the wavelet transform, which is also extended to cases where discontinuities simulate geological faults between two different elastic media. The basic method consists of the discretization of the polynomial expansion for the boundary conditions of the 2-D problem involving the stress and strain relations for the media. This parametrization leads to a system of linear equations that is solved for the expansion coefficients, which are the model parameters; determining them solves the problem. The wavelet transform is applied with two main objectives: to decrease the error related to the truncation of the polynomial expansion and to make the system of linear equations more compact for computation. This is possible due to the properties of this finite-length transform. The method proposed here was tested on six different cases for which the analytical solutions are known. In all tests considered, we obtained very good matches with the corresponding known analytical solutions, which validates the theoretical and computational parts of the project. We hope that the new method will be useful for modelling real media. [source]


    Energy Group optimization for forward and inverse problems in nuclear engineering: application to downwell-logging problems

    GEOPHYSICAL PROSPECTING, Issue 2 2006
    Elsa Aristodemou
    ABSTRACT Simulating radiation transport of neutral particles (neutrons and γ-ray photons) within subsurface formations has been an area of research in the nuclear well-logging community since the 1960s, with many researchers exploiting existing computational tools already available within the nuclear reactor community. Deterministic codes became a popular tool, with the radiation transport equation being solved using a discretization of the phase-space of the problem (energy, angle, space and time). The energy discretization in such codes is based on the multigroup approximation, or equivalently the discrete finite-difference energy approximation. One of the uncertainties of simulating radiation transport problems has therefore become the multigroup energy structure. The nuclear reactor community has tackled the problem by optimizing existing nuclear cross-sectional libraries using a variety of group-collapsing codes, whilst the nuclear well-logging community has relied, until now, on libraries used in the nuclear reactor community. However, although the utilization of such libraries has been extremely useful in the past, it has also become clear that a larger number of energy groups were available than was necessary for well-logging problems. It was obvious, therefore, that a multigroup energy structure specific to the needs of the nuclear well-logging community needed to be established. This would have the benefit of reducing computational time (the ultimate aim of this work) for both the stochastic and deterministic calculations, since computational time increases with the number of energy groups. We therefore present in this study two methodologies that enable the optimization of any multigroup neutron–γ energy structure. Although we test our theoretical approaches on nuclear well-logging synthetic data, the methodologies can be applied to other radiation transport problems that use the multigroup energy approximation. The first approach considers the effect of collapsing the neutron groups by solving the forward transport problem directly using the deterministic code EVENT, obtaining neutron and γ-ray fluxes deterministically for the different group-collapsing options. The best collapsing option is chosen as the one which minimizes the effect on the γ-ray spectrum. In this methodology, parallel processing is implemented to reduce computational times. The second approach uses the uncollapsed output from neural network simulations in order to estimate the new, collapsed fluxes for the different collapsing cases. Subsequently, an inversion technique is used which calculates the properties of the subsurface based on the collapsed fluxes. The best collapsing option is chosen as the one that predicts the subsurface properties with minimal error. The fundamental difference between the two methodologies relates to their effect on the generated γ-rays. The first methodology takes the generation of γ-rays fully into account by solving the transport equation directly. The second methodology assumes that the reduction of the neutron groups has no effect on the γ-ray fluxes. It does, however, utilize an inversion scheme to predict the subsurface properties reliably, and it looks at the effect of collapsing the neutron groups on these predictions. Although the second procedure is favoured because of (a) the speed with which a solution can be obtained and (b) the application of an inversion scheme, its results need to be validated against a physically more stringent methodology. A comparison of the two methodologies is therefore given. [source]


    2D data modelling by electrical resistivity tomography for complex subsurface geology

    GEOPHYSICAL PROSPECTING, Issue 2 2006
    E. Cardarelli
    ABSTRACT A new tool for two-dimensional apparent-resistivity data modelling and inversion is presented. The study is developed according to the idea that the best way to deal with the ill-posedness of geoelectrical inverse problems lies in constructing algorithms which allow a flexible control of the physical and mathematical elements involved in the resolution. The forward problem is solved through a finite-difference algorithm, whose main features are a versatile user-defined discretization of the domain and a new approach to the solution of the inverse Fourier transform. The inversion procedure is based on an iterative smoothness-constrained least-squares algorithm. As mentioned, the code is constructed to ensure flexibility in resolution. This is first achieved by starting the inversion from an arbitrarily defined model. In our approach, a Jacobian matrix is calculated at each iteration, using a generalization of Cohn's network sensitivity theorem. Another versatile feature is the possibility of introducing a priori information about the solution. Regions of the domain can be constrained to vary between two limits (the lower and upper bounds) by using inequality constraints. A second possibility is to include the starting model in the objective function used to determine an improved estimate of the unknown parameters and to constrain the solution to the above model. Furthermore, the possibility either of defining a discretization of the domain that exactly fits the underground structures or of refining the mesh of the grid certainly leads to more accurate solutions. Control over the mathematical elements in the inversion algorithm is also allowed. The smoothness matrix can be modified in order to penalize roughness in any one direction. An empirical way of assigning the regularization parameter (damping) is defined, but the user can also decide to assign it manually at each iteration. An appropriate tool was constructed with the purpose of handling the inversion results, for example to correct reconstructed models and to check the effects of such changes on the calculated apparent resistivity. Tests on synthetic and real data, in particular in handling indeterminate cases, show that the flexible approach is a good way to build a detailed picture of the prospected area. [source]
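
The core of such a smoothness-constrained step is a Tikhonov-regularized normal system with a roughness penalty, optionally anchored to a starting model. A generic sketch of the regularization idea only (random stand-in Jacobian, invented names; not the authors' geoelectrical code):

```python
import numpy as np

def smoothness_constrained_ls(G, d, lam, m0=None):
    """One step of smoothness-constrained least squares:
    minimize ||G m - d||^2 + lam * ||L (m - m0)||^2, with L a first-difference
    (roughness) operator and m0 an optional starting/reference model."""
    n = G.shape[1]
    L = np.diff(np.eye(n), axis=0)              # first-difference smoothness matrix
    A = G.T @ G + lam * L.T @ L
    b = G.T @ d + (lam * L.T @ L @ m0 if m0 is not None else 0.0)
    return np.linalg.solve(A, b)

rng = np.random.default_rng(3)
m_true = np.repeat([1.0, 3.0, 2.0], 10)         # blocky 'resistivity' profile
G = rng.standard_normal((60, 30))               # random stand-in for the Jacobian
d = G @ m_true + 0.05 * rng.standard_normal(60)
for lam in (0.1, 10.0):
    m = smoothness_constrained_ls(G, d, lam)
    print(lam, np.abs(m - m_true).mean().round(3))  # damping trades data fit vs. smoothness
```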


    Rapid simulated hydrologic response within the variably saturated near surface

    HYDROLOGICAL PROCESSES, Issue 3 2008
    Brian A. Ebel
    Abstract Column and field experiments have shown that the hydrologic response to increases in rainfall rates can be more rapid than expected from simple estimates. Physics-based hydrologic response simulation, with the Integrated Hydrology Model (InHM), is used here to investigate rapid hydrologic response, within the variably saturated near surface, to temporal variations in applied flux at the surface boundary. The factors controlling the speed of wetting front propagation are discussed within the Darcy–Buckingham conceptual framework, including kinematic wave approximations. The Coos Bay boundary-value problem is employed to examine simulated discharge, pressure head, and saturation responses to a large increase in applied surface flux. The results presented here suggest that physics-based simulations are capable of representing rapid hydrologic response within the variably saturated near surface. The new InHM simulations indicate that the temporal discretization and measurement precision needed to capture the rapid subsurface response to a spike increase in surface flux, necessary for both data-based analyses and evaluation of physics-based models, are finer than the capabilities of the instrumentation deployed at the Coos Bay experimental catchment. Copyright © 2007 John Wiley & Sons, Ltd. [source]


    Evaluation of model complexity and space–time resolution on the prediction of long-term soil salinity dynamics, western San Joaquin Valley, California

    HYDROLOGICAL PROCESSES, Issue 13 2006
    G. Schoups
    Abstract The numerical simulation of long-term large-scale (field to regional) variably saturated subsurface flow and transport remains a computational challenge, even with today's computing power. Therefore, it is appropriate to develop and use simplified models that focus on the main processes operating at the pertinent time and space scales, as long as the error introduced by the simpler model is small relative to the uncertainties associated with the spatial and temporal variation of boundary conditions and parameter values. This study investigates the effects of various model simplifications on the prediction of long-term soil salinity and salt transport in irrigated soils. Average root-zone salinity and cumulative annual drainage salt load were predicted for a 10-year period using a one-dimensional numerical flow and transport model (i.e. UNSATCHEM) that accounts for solute advection, dispersion and diffusion, and complex salt chemistry. The model uses daily values for rainfall, irrigation, and potential evapotranspiration rates. Model simulations consist of benchmark scenarios for different hypothetical cases that include shallow and deep water tables, different leaching fractions and soil gypsum content, and shallow groundwater salinity, with and without soil chemical reactions. These hypothetical benchmark simulations are compared with the results of various model simplifications that considered (i) annual average boundary conditions, (ii) coarser spatial discretization, and (iii) reducing the complexity of the salt-soil reaction system. Based on the 10-year simulation results, we conclude that salt transport modelling does not require daily boundary conditions, a fine spatial resolution, or complex salt chemistry. Instead, if the focus is on long-term salinity, then a simplified modelling approach can be used, using annually averaged boundary conditions, a coarse spatial discretization, and inclusion of soil chemistry that only accounts for cation exchange and gypsum dissolution–precipitation. We also demonstrate that prediction errors due to these model simplifications may be small, when compared with effects of parameter uncertainty on model predictions. The proposed model simplifications lead to larger time steps and reduced computer simulation times by a factor of 1000. Copyright © 2006 John Wiley & Sons, Ltd. [source]


    Assessing the impact of the hydraulic properties of a crusted soil on overland flow modelling at the field scale

    HYDROLOGICAL PROCESSES, Issue 8 2006
    Nanée Chahinian
    Abstract Soil surface crusts are widely reported to favour Hortonian runoff, but are not explicitly represented in most rainfall-runoff models. The aim of this paper is to assess the impact of soil surface crusts on infiltration and runoff modelling at two spatial scales, i.e. the local scale and the plot scale. At the local scale, two separate single ring infiltration experiments are undertaken. The first is performed on the undisturbed soil, whereas the second is done after removal of the soil surface crust. The HYDRUS 2D two-dimensional vertical infiltration model is then used in an inverse modelling approach, first to estimate the soil hydraulic properties of the crust and the subsoil, and then the effective hydraulic properties of the soil represented as a single uniform layer. The results show that the crust hydraulic conductivity is 10 times lower than that of the subsoil, thus illustrating the limiting role the crust has on infiltration. Moving up to the plot scale, a rainfall-runoff model coupling the Richards equation to a transfer function is used to simulate Hortonian overland flow hydrographs. The previously calculated hydraulic properties are used, and a comparison is undertaken between a single-layer and a double-layer representation of the crusted soil. The results of the rainfall-runoff model show that the soil hydraulic properties calculated at the local scale give acceptable results when used to model runoff at the plot scale directly, without any numerical calibration. Also, at the plot scale, no clear improvement of the results can be seen when using a double-layer representation of the soil in comparison with a single homogeneous layer. This is due to the hydrological characteristics of Hortonian runoff, which is triggered by a rainfall intensity exceeding the saturated hydraulic conductivity of the soil surface. Consequently, the rainfall-runoff model is more sensitive to rainfall than to the subsoil's hydrodynamic properties. Therefore, the use of a double-layer soil model to represent runoff on a crusted soil does not seem necessary, as the increase of precision in the soil discretization is not justified by a better performance of the model. Copyright © 2005 John Wiley & Sons, Ltd. [source]


    Appropriate vertical discretization of Richards' equation for two-dimensional watershed-scale modelling

    HYDROLOGICAL PROCESSES, Issue 1 2004
    Charles W. Downer
    Abstract A number of watershed-scale hydrological models include Richards' equation (RE) solutions, but the literature provides little guidance on the appropriate application of RE at the watershed scale. In most published applications of RE in distributed watershed-scale hydrological modelling, coarse vertical resolutions are used to decrease the computational burden. Compared to point- or field-scale studies, application at the watershed scale is complicated by diverse runoff production mechanisms, groundwater effects on runoff production, run-on phenomena and heterogeneous watershed characteristics. An essential element of the numerical solution of RE is that the solution converges as the spatial resolution increases. Spatial convergence studies can be used to identify the resolution that accurately describes the solution with maximum computational efficiency, when using physically realistic parameter values. In this study, spatial convergence studies are conducted using the two-dimensional, distributed-parameter, gridded surface subsurface hydrological analysis (GSSHA) model, which solves RE to simulate vadose zone fluxes. Tests to determine whether the required discretization is strongly a function of the dominant runoff production mechanism are conducted using data from two very different watersheds, the Hortonian Goodwin Creek Experimental Watershed and the non-Hortonian Muddy Brook watershed. Total infiltration, stream flow and evapotranspiration for the entire simulation period are used to compute comparison statistics. The influences of upper and lower boundary conditions on solution accuracy are also explored. Results indicate that accurate simulation of hydrological fluxes at both watersheds requires small vertical cell sizes, of the order of 1 cm, near the soil surface, but not throughout the soil column. An appropriate choice of approximations for calculating the near-surface unsaturated hydraulic conductivity can yield modest increases in the required cell size. Results for both watersheds are quite similar, even though the soils and runoff production mechanisms differ greatly between the two catchments. Copyright © 2003 John Wiley & Sons, Ltd. [source]
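
A vertical-discretization convergence study has a simple generic shape: solve the same column problem with successively smaller cells and watch a target flux settle. The sketch below does this for a linear diffusion stand-in, not Richards' equation itself; all parameter values are arbitrary demo choices:

```python
import numpy as np

def infiltration_demo(dz, T=1.0, D=0.5):
    """Implicit-Euler solve of a linear diffusion stand-in for Richards' equation
    on a 1 m column; returns the surface flux at time T for cell size dz."""
    n = int(1.0 / dz) + 1
    u = np.zeros(n); u[0] = 1.0                  # 'saturated' surface, dry column below
    dt = 0.01
    r = D * dt / dz**2
    A = (np.eye(n) * (1 + 2 * r)
         + np.diag([-r] * (n - 1), 1) + np.diag([-r] * (n - 1), -1))
    A[0, :] = 0;  A[0, 0] = 1                    # Dirichlet condition at the surface
    A[-1, :] = 0; A[-1, -1] = 1                  # and at the bottom of the column
    for _ in range(int(T / dt)):
        rhs = u.copy(); rhs[0] = 1.0; rhs[-1] = 0.0
        u = np.linalg.solve(A, rhs)
    return D * (u[0] - u[1]) / dz                # surface-flux estimate

for dz in (0.2, 0.1, 0.05, 0.025, 0.0125):
    print(dz, round(infiltration_demo(dz), 4))   # the flux converges as dz shrinks
```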


    Optimal use of high-resolution topographic data in flood inundation models

    HYDROLOGICAL PROCESSES, Issue 3 2003
    P. D. Bates
    Abstract In this paper we explore the optimum assimilation of high-resolution data into numerical models using the example of topographic data provision for flood inundation simulation. First, we explore problems with current assimilation methods in which numerical grids are generated independent of topography. These include possible loss of significant length scales of topographic information, poor representation of the original surface and data redundancy. These are resolved through the development of a processing chain consisting of: (i) assessment of significant length scales of variation in the input data sets; (ii) determination of significant points within the data set; (iii) translation of these into a conforming model discretization that preserves solution quality for a given numerical solver; and (iv) incorporation of otherwise redundant sub-grid data into the model in a computationally efficient manner. This processing chain is used to develop an optimal finite element discretization for a 12 km reach of the River Stour in Dorset, UK, for which a high-resolution topographic data set derived from airborne laser altimetry (LiDAR) was available. For this reach, three simulations of a 1 in 4 year flood event were conducted: a control simulation with a mesh developed independent of topography, a simulation with a topographically optimum mesh, and a further simulation with the topographically optimum mesh incorporating the sub-grid topographic data within a correction algorithm for dynamic wetting and drying in fixed grid models. The topographically optimum model is shown to represent the 'raw' topographic data set better, and the differences between this surface and the control are shown to be hydraulically significant. Incorporation of sub-grid topographic data has a less marked impact than getting the explicit hydraulic calculation correct, but still leads to important differences in model behaviour. The paper highlights the need for better validation data capable of discriminating between these competing approaches and begins to indicate what the characteristics of such a data set should be. More generally, the techniques developed here should prove useful for any data set where the resolution exceeds that of the model in which it is to be used. Copyright © 2002 John Wiley & Sons, Ltd. [source]


    Kalman filter finite element method applied to dynamic ground motion

    INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 9 2009
    Yusuke Kato
    Abstract The purpose of this paper is to investigate the estimation of dynamic elastic behavior of the ground using the Kalman filter finite element method. In the present paper, as the state equation, the balance of stress equation, the strain–displacement equation and the stress–strain equation are used. For temporal discretization, the Newmark ¼ method is employed, and for the spatial discretization the Galerkin method is applied. The Kalman filter finite element method is a combination of the Kalman filter and the finite element method. The present method is adaptable to estimations not only in time but also in space, as we have confirmed by its application to the Futatsuishi quarry site. The input data are the measured velocity, acceleration, etc., which may include mechanical noise. It has been shown in numerical studies that the estimated velocity, acceleration, etc., at any other spatial and temporal point can be obtained by removing the noise included in the observation. Copyright © 2008 John Wiley & Sons, Ltd. [source]
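
The filtering half of the method is the standard discrete Kalman predict/update cycle. A self-contained sketch on a toy two-state ground-motion model, observing a noisy velocity (matrices and noise levels are invented for the example, not the paper's FEM state equation):

```python
import numpy as np

def kalman_step(x, P, F, Q, H, R, z):
    """One predict/update cycle of the discrete Kalman filter: the state estimate
    x and covariance P are propagated, then corrected by the noisy observation z."""
    x, P = F @ x, F @ P @ F.T + Q                 # predict
    S = H @ P @ H.T + R                           # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x = x + K @ (z - H @ x)                       # update with the measurement residual
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

dt = 0.01
F = np.array([[1, dt], [0, 1]])                   # constant-velocity state transition
H = np.array([[0.0, 1.0]])                        # we observe velocity only
Q, R = 1e-4 * np.eye(2), np.array([[0.05]])
x, P = np.zeros(2), np.eye(2)
rng = np.random.default_rng(4)
for _ in range(200):
    z = np.array([1.0 + rng.normal(0, 0.2)])      # noisy measurements of v = 1
    x, P = kalman_step(x, P, F, Q, H, R, z)
print(np.round(x, 3))                             # velocity estimate settles near 1.0
```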


    Bifurcation modeling in geomaterials: From the second-order work criterion to spectral analyses

    INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 9 2009
    F. Prunier
    Abstract The present paper investigates bifurcation analysis based on the second-order work criterion, in the framework of rate-independent constitutive models and rate-independent boundary-value problems. The approach applies mainly to nonassociated materials such as soils, rocks, and concretes. The bifurcation analysis usually performed at the material point level is extended to quasi-static boundary-value problems, by considering the stiffness matrix arising from finite element discretization. Lyapunov's definition of stability (Annales de la faculté des sciences de Toulouse 1907; 9:203–274), as well as definitions of bifurcation criteria (Rice's localization criterion (Theoretical and Applied Mechanics. Fourteenth IUTAM Congress, Amsterdam, 1976; 207–220) and the plasticity limit criterion), are revived in order to clarify the application field of the second-order work criterion and to contrast these criteria. The first part of this paper analyses the second-order work criterion at the material point level. The bifurcation domain is presented in the 3D stress space, as well as 3D cones of unstable loading directions, for an incrementally nonlinear constitutive model. The relevance of this criterion, when the nonlinear constitutive model is expressed in the classical form (dσ = M dε) or in the dual form (dε = N dσ), is discussed. In the second part, the analysis is extended to boundary-value problems in quasi-static conditions. Nonlinear finite element computations are performed and the global tangent stiffness matrix is analyzed. For several examples, the eigenvector associated with the first vanishing eigenvalue of the symmetrical part of the stiffness matrix gives an accurate estimation of the failure mode in the homogeneous and nonhomogeneous boundary-value problem. Copyright © 2008 John Wiley & Sons, Ltd. [source]
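
For a discrete system, the second-order work criterion reduces to watching the smallest eigenvalue of the symmetric part of the tangent stiffness matrix. The sketch below shows a nonsymmetric (nonassociated-like) matrix whose second-order work vanishes while the matrix itself is still nonsingular (the matrix is an invented 2×2 example):

```python
import numpy as np

def second_order_work_check(K):
    """Second-order work criterion: d2W = du . K du first vanishes when the smallest
    eigenvalue of the symmetric part of K reaches zero; the associated eigenvector
    approximates the failure mode."""
    Ks = 0.5 * (K + K.T)                          # only the symmetric part enters du.K du
    w, V = np.linalg.eigh(Ks)
    return w[0], V[:, 0]                          # smallest eigenvalue and its mode

# Nonsymmetric tangent stiffness: d2W vanishes although det(K) > 0.
K = np.array([[1.0, -3.0],
              [1.0,  1.0]])
lam_min, mode = second_order_work_check(K)
print(round(np.linalg.det(K), 2), round(lam_min, 3), np.round(mode, 3))
# det = 4.0 (nonsingular), lam_min = 0.0, mode ~ (0.707, 0.707): du . K du = 0 there
```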


    Some numerical issues using element-free Galerkin mesh-less method for coupled hydro-mechanical problems

    INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 7 2009
    Mohammad Norouz Oliaei
    Abstract A new formulation of the element-free Galerkin (EFG) method is developed for solving coupled hydro-mechanical problems. The numerical approach is based on solving the two governing partial differential equations of equilibrium and continuity of pore water simultaneously. Spatial variables in the weak form, i.e. displacement increment and pore water pressure increment, are discretized using the same EFG shape functions. An incremental constrained Galerkin weak form is used to create the discrete system equations and a fully implicit scheme is used for discretization in the time domain. Implementation of essential boundary conditions is based on a penalty method. Numerical stability of the developed formulation is examined in order to achieve appropriate accuracy of the EFG solution for coupled hydro-mechanical problems. Examples are studied and compared with closed-form or finite element method solutions to demonstrate the validity of the developed model and its capabilities. The results indicate that the EFG method is capable of handling coupled problems in saturated porous media and can predict well both the soil deformation and variation of pore water pressure over time. Some guidelines are proposed to guarantee the accuracy of the EFG solution for coupled hydro-mechanical problems. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    Thermal reservoir modeling in petroleum geomechanics

    INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 4 2009
    Shunde Yin
    Abstract Thermal oil recovery processes involve high pressures and temperatures, leading to large volume changes and induced stresses. These cannot be handled by traditional reservoir simulation because it does not consider coupled geomechanics effects. In this paper we present a fully coupled, thermal half-space model using a hybrid DDFEM method. A finite element method (FEM) solution is adopted for the reservoir and the surrounding thermally affected zone, and a displacement discontinuity method is used for the surrounding elastic, non-thermal zone. This approach analyzes stress, pressure, temperature and volume change in the reservoir; it also provides stresses and displacements around the reservoir (including transient ground surface movements) in a natural manner without introducing extra spatial discretization outside the FEM zone. To overcome spurious spatial temperature oscillations in the convection-dominated thermal advection–diffusion problem, we place the transient problem into an advection–diffusion–reaction problem framework, which is then efficiently addressed by a stabilized finite element approach, the subgrid-scale/gradient subgrid-scale method. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    Numerical modelling for earthquake engineering: the case of lightly RC structural walls

    INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 7-8 2004
    J. Mazars
    Abstract Different types of numerical models exist to describe the non-linear behaviour of reinforced concrete structures. Based on the level of discretization they are often classified as refined or simplified ones. The efficiency of two simplified models using beam elements and damage mechanics in describing the global and local behaviour of lightly reinforced concrete structural walls subjected to seismic loadings is investigated in this paper. The first model uses an implicit and the second an explicit numerical scheme. For each case, the results of the CAMUS 2000 experimental programme are used to validate the approaches. Copyright © 2004 John Wiley & Sons, Ltd. [source]


    An integral equation solution for three-dimensional heat extraction from planar fracture in hot dry rock

    INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 12 2003
    A. Ghassemi
    Abstract In the numerical simulation of heat extraction by circulating water in a fracture embedded in a geothermal reservoir, the heat conduction in the reservoir is typically assumed to be one-dimensional and perpendicular to the fracture, in order to avoid the discretization of the three-dimensional reservoir geometry. In this paper we demonstrate that, by utilizing the integral equation formulation with a Green's function, the three-dimensional heat flow in the reservoir can be modelled without the need to discretize the reservoir. Numerical results show that the three-dimensional heat conduction effect can significantly alter the prediction of the heat extraction temperature and the reservoir life, as compared to its one-dimensional simplification. Copyright © 2003 John Wiley & Sons, Ltd. [source]