Solver

Kinds of Solver

  • differential equation solver
  • direct solver
  • element solver
  • equation solver
  • finite element solver
  • flow solver
  • frontal solver
  • iterative solver
  • linear solver
  • multigrid solver
  • new solver
  • problem solver
  • programming solver


  • Selected Abstracts


    A Semi-Lagrangian CIP Fluid Solver without Dimensional Splitting

    COMPUTER GRAPHICS FORUM, Issue 2 2008
    Doyub Kim
    Abstract In this paper, we propose a new constrained interpolation profile (CIP) method that is stable and accurate but requires less computation than existing CIP-based solvers. CIP is a high-order fluid advection solver that can reproduce rich details of fluids. It has third-order accuracy, yet its computation is performed over a compact stencil. These advantageous features of CIP are, however, diluted by two shortcomings: (1) CIP contains a defect in its use of the grid data, which makes the method suitable only for simulations with a tight CFL restriction; and (2) CIP does not guarantee unconditional stability. There have been several attempts to fix these problems in CIP, but they have been only partially successful. The solutions that fixed both problems ended up introducing other undesirable features, namely increased computation time and/or reduced accuracy. This paper proposes a novel modification of the original CIP method that fixes all of the above problems without increasing the computational load or reducing the accuracy. Both quantitative and visual experiments were performed to test the performance of the new CIP in comparison to existing fluid solvers. The results show that the proposed method brings significant improvements in both accuracy and speed. [source]
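
    The semi-Lagrangian idea at the core of CIP-type advection is simple to illustrate: each grid point traces the velocity field backwards over one timestep and samples the advected quantity at the departure point. Below is a minimal first-order 1D sketch with linear interpolation; CIP itself additionally advects derivative information over the same compact stencil to reach third-order accuracy. All names and parameter values are illustrative, not taken from the paper.

        import numpy as np

        def semi_lagrangian_advect(q, u, dx, dt):
            """One first-order semi-Lagrangian advection step on a periodic 1D grid.

            q -- advected quantity at the grid points; u -- velocity at the grid
            points. Unlike CIP, only q is interpolated (linearly), not its derivative.
            """
            n = len(q)
            x = np.arange(n) * dx
            # Backtrace the characteristic through each node to its departure point.
            x_dep = (x - u * dt) % (n * dx)
            # Linear interpolation of q at the departure points.
            i = np.floor(x_dep / dx).astype(int)
            w = x_dep / dx - i
            return (1.0 - w) * q[i] + w * q[(i + 1) % n]

        # Usage: advect a smooth bump with a constant velocity field.
        n, dx, dt = 128, 1.0 / 128, 0.5 / 128
        q = np.exp(-200.0 * (np.arange(n) * dx - 0.3) ** 2)
        q = semi_lagrangian_advect(q, np.ones(n), dx, dt)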


    A Parallel PCG Solver for MODFLOW

    GROUND WATER, Issue 6 2009
    Yanhui Dong
    In order to simulate large-scale ground water flow problems more efficiently with MODFLOW, the OpenMP programming paradigm was used in this study to parallelize the preconditioned conjugate-gradient (PCG) solver. Incremental parallelization, a significant advantage of OpenMP on shared-memory computers, allowed the solver to transition smoothly to a parallel program one block of code at a time. The parallel PCG solver, suitable for both MODFLOW-2000 and MODFLOW-2005, was verified on an 8-processor computer. Both the impact of compilers and of different model domain sizes were considered in the numerical experiments. Based on the timing results, execution times using the parallel PCG solver are typically about 1.40 to 5.31 times faster than those using the serial one. In addition, the simulation results are exactly the same as those of the original PCG solver, because the majority of the serial code was not changed. It is worth noting that this parallelizing approach reduces software maintenance costs because only a single-source PCG solver code needs to be maintained in the MODFLOW source tree. [source]
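
    As background for the parallelization described above, the sketch below shows a serial preconditioned conjugate-gradient iteration in NumPy with a simple Jacobi preconditioner. The paper parallelizes MODFLOW's own PCG package with OpenMP; this standalone Python version is only meant to indicate which operations (the matrix-vector product and the vector updates) a shared-memory parallelization would distribute across threads.

        import numpy as np

        def pcg(A, b, tol=1e-10, max_iter=500):
            """Preconditioned conjugate gradients with a Jacobi (diagonal) preconditioner."""
            M_inv = 1.0 / np.diag(A)          # Jacobi preconditioner
            x = np.zeros_like(b)
            r = b - A @ x
            z = M_inv * r
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = A @ p                    # dominant cost: the (sparse) mat-vec
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol * np.linalg.norm(b):
                    break
                z = M_inv * r
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        # Usage on a small symmetric positive-definite system.
        A = np.array([[4.0, 1.0], [1.0, 3.0]])
        b = np.array([1.0, 2.0])
        print(pcg(A, b))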


    IT project managers' construction of successful project management practice: a repertory grid investigation

    INFORMATION SYSTEMS JOURNAL, Issue 3 2009
    Nannette P. Napier
    Abstract Although effective project management is critical to the success of information technology (IT) projects, little empirical research has investigated skill requirements for IT project managers (PMs). This study addressed this gap by asking 19 practicing IT PMs to describe the skills that successful IT PMs exhibit. A semi-structured interview method known as the repertory grid (RepGrid) technique was used to elicit these skills. Nine skill categories emerged: client management, communication, general management, leadership, personal integrity, planning and control, problem solving, systems development and team development. Our study complements existing research by providing a richer understanding of several skills that were narrowly defined (client management, planning and control, and problem solving) and by introducing two new skill categories that had not been previously discussed (personal integrity and team development). Analysis of the individual RepGrids revealed four distinct ways in which study participants combined skill categories to form archetypes of effective IT PMs. We describe these four IT PM archetypes (General Manager, Problem Solver, Client Representative and Balanced Manager) and discuss how this knowledge can be useful for practitioners, researchers and educators. The paper concludes with suggestions for future research. [source]


    A finite volume solver for 1D shallow-water equations applied to an actual river

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 1 2002
    N. Gouta
    Abstract This paper describes the numerical solution of the 1D shallow-water equations by a finite volume scheme based on the Roe solver. In the first part, the 1D shallow-water equations are presented. These equations model free-surface flows in a river and are widely used for applications: dam-break waves, reservoir emptying, flooding, etc. The main feature of these equations is the presence of a non-conservative term in the momentum equation in the case of an actual river. In order to apply schemes well adapted to conservative equations, this term is split into two parts: a conservative part, which is kept on the left-hand side of the momentum equation, and a non-conservative part, which is introduced as a source term on the right-hand side. In the second section, we describe the scheme based on a Roe solver for the homogeneous problem. Next, the numerical treatment of the source term, which is the essential point of the numerical modelling, is described. The source term is split into two components: one is upwinded and the other is treated with a centred discretization. This discretization of the source term yields the right behaviour for steady flow. Finally, in the last part, the problem of validation is tackled. Most of the numerical tests were defined by a working group on dam-break wave simulation. A real dam-break wave simulation is shown. Copyright © 2002 John Wiley & Sons, Ltd. [source]
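
    The Roe solver mentioned above computes the numerical flux at each cell interface from a locally linearized Riemann problem. A minimal sketch for the homogeneous 1D shallow-water equations (flat bottom, no source term) follows; variable names are illustrative, and the treatment of the non-conservative source term described in the abstract is deliberately omitted.

        import numpy as np

        G = 9.81  # gravitational acceleration

        def roe_flux(hL, huL, hR, huR):
            """Roe numerical flux for the 1D shallow-water equations U = (h, hu).

            The physical flux is F(U) = (hu, hu^2 + g h^2 / 2); the interface flux
            is the centred average minus the upwind dissipation of the two Roe waves.
            """
            uL, uR = huL / hL, huR / hR
            fL = np.array([huL, huL * uL + 0.5 * G * hL**2])
            fR = np.array([huR, huR * uR + 0.5 * G * hR**2])

            # Roe-averaged velocity and wave speed.
            sL, sR = np.sqrt(hL), np.sqrt(hR)
            u = (sL * uL + sR * uR) / (sL + sR)
            c = np.sqrt(0.5 * G * (hL + hR))

            # Project the jump in U onto the eigenvectors r1 = (1, u-c), r2 = (1, u+c).
            dh, dhu = hR - hL, huR - huL
            a2 = (dhu - (u - c) * dh) / (2.0 * c)
            a1 = dh - a2
            r1 = np.array([1.0, u - c])
            r2 = np.array([1.0, u + c])

            return 0.5 * (fL + fR) - 0.5 * (abs(u - c) * a1 * r1 + abs(u + c) * a2 * r2)

        # Usage: dam-break initial state (deep water on the left, shallow on the right).
        print(roe_flux(hL=2.0, huL=0.0, hR=1.0, huR=0.0))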


    Performance improvements for olive oil refining plants

    INTERNATIONAL JOURNAL OF ENERGY RESEARCH, Issue 6 2010
    Elif Bozoglan
    Abstract The main objective of this study, which is conducted for the first time to the best of the authors' knowledge, is to identify improvements in the performance of olive oil refinery plants. In the analyses, actual operational data are used for performance assessment purposes. The refinery plant investigated is located in Izmir, Turkey and has an oil capacity of 6250 kg h⁻¹. It basically incorporates steam generators, several tanks, heat exchangers, a distillation column, flash tanks and several pumps. The values for exergy efficiency and exergy destruction of operating components are determined based on a reference (dead state) temperature of 25°C. An Engineering Equation Solver (EES) software program is utilized for the analyses of the plant. The exergy transports between the components and the consumptions in each component of the whole plant are determined for the average parameters obtained from the actual data. The exergy loss and flow diagram (the so-called Grassmann diagram) is also presented for the entire plant to give quantitative information on the proportion of the exergy input that is dissipated in the various plant components. Among the components examined, the most efficient equipment is found to be the shell-and-tube-type heat exchanger, with an exergy efficiency value of 85%. The overall exergetic efficiency of the plant (the so-called functional exergy efficiency) is found to be about 12%, while the exergy efficiency on the exergetic fuel–product basis is calculated to be about 65%. Copyright © 2009 John Wiley & Sons, Ltd. [source]
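
    The exergy bookkeeping used in such analyses rests on two standard definitions: the specific flow exergy of a stream, e = (h - h0) - T0(s - s0), and a component's exergy efficiency, the exergy of its product divided by the exergy of its fuel. A small sketch of such a balance, with made-up stream values rather than data from the plant, is:

        def flow_exergy(h, s, h0, s0, T0=298.15):
            """Specific flow exergy e = (h - h0) - T0*(s - s0); T0 in kelvin (25°C).

            h, h0 in kJ/kg; s, s0 in kJ/(kg K); result in kJ/kg.
            """
            return (h - h0) - T0 * (s - s0)

        def exergy_efficiency(E_product, E_fuel):
            """Functional exergy efficiency and exergy destruction of a component."""
            return E_product / E_fuel, E_fuel - E_product

        # Illustrative heat-exchanger balance (hypothetical numbers, kW of exergy).
        m_dot = 6250.0 / 3600.0                 # plant oil capacity, kg/s
        e_in = flow_exergy(h=350.0, s=1.10, h0=105.0, s0=0.37)
        e_out = flow_exergy(h=420.0, s=1.28, h0=105.0, s0=0.37)
        E_gain_cold = m_dot * (e_out - e_in)    # exergy picked up by the cold stream
        E_drop_hot = 1.25 * E_gain_cold         # assumed exergy released by hot stream
        eta, E_destroyed = exergy_efficiency(E_gain_cold, E_drop_hot)
        print(f"exergy efficiency = {eta:.0%}, destruction = {E_destroyed:.1f} kW")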


    Interactive animation of virtual humans based on motion capture data

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 5-6 2009
    Franck Multon
    Abstract This paper presents a novel, parametric framework for synthesizing new character motions from existing motion capture data. Our framework can conduct morphological adaptation as well as kinematic and physically based corrections. All these solvers are organized in layers so that they can easily be combined. Given locomotion as an example, the system automatically adapts the motion data to the size of the synthetic figure and to its environment; the character will correctly step over complex ground shapes and counteract external forces applied to the body. Our framework is based on a frame-based solver, which makes it possible to animate hundreds of humanoids with different morphologies in real time. It is particularly suitable for interactive applications such as video games and virtual reality, where a user interacts in an unpredictable way. Copyright © 2009 John Wiley & Sons, Ltd. [source]


    The application of spreadsheets to the analysis and optimization of systems and processes in the teaching of hydraulic and thermal engineering

    COMPUTER APPLICATIONS IN ENGINEERING EDUCATION, Issue 4 2006
    A. Rivas
    Abstract This article shows the capability of current spreadsheets to define, analyze and optimize models of systems and processes. Specifically, Microsoft Excel is used, with its built-in solver, to analyze and optimize systems and processes of medium complexity whose mathematical models are expressed as nonlinear systems of equations. Two application examples from hydraulic and thermal engineering are presented: the analysis and optimization of vapor power cycles, and the analysis and design of piping networks. The mathematical models of these examples have been implemented in Excel and solved with the solver. For the power cycles, the thermodynamic properties of water have been calculated by means of the add-in TPX (Thermodynamic Properties for Excel). Performance and optimum designs are presented in case studies, according to the optimization criteria of maximum efficiency for the power cycle and minimum cost for the piping networks. © 2006 Wiley Periodicals, Inc. Comput Appl Eng Educ 14: 256–268, 2006; Published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/cae.20085 [source]
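
    The same workflow (formulate the model as a nonlinear system of equations, then hand it to a solver) can be reproduced outside a spreadsheet with a numerical library. A minimal analogue of the Excel Solver setup, applied to a hypothetical two-pipe network rather than the article's models, might look like this in Python:

        from scipy.optimize import fsolve

        def network_residuals(x):
            """Toy piping network: two parallel pipes between the same reservoirs.

            Unknowns x = (Q1, Q2), flows in m^3/s. Equations: mass balance at the
            junction and equal head loss h = r * Q * |Q| across parallel pipes
            (hypothetical resistance coefficients r1, r2).
            """
            Q1, Q2 = x
            r1, r2, Q_total = 100.0, 250.0, 0.5
            return [
                Q1 + Q2 - Q_total,                       # continuity
                r1 * Q1 * abs(Q1) - r2 * Q2 * abs(Q2),   # equal head loss
            ]

        Q1, Q2 = fsolve(network_residuals, x0=[0.25, 0.25])
        print(f"Q1 = {Q1:.4f} m^3/s, Q2 = {Q2:.4f} m^3/s")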


    Magnetostatic analysis of a brushless DC motor using a two-dimensional partial differential equation solver

    COMPUTER APPLICATIONS IN ENGINEERING EDUCATION, Issue 2 2001
    A. Kostaridis
    Abstract A finite element magnetostatic analysis of a brushless direct current motor containing non-linear materials and permanent magnets is presented. The analysis is performed with PDEase™, a low-cost, two-dimensional partial differential equation solver. The descriptor file is remarkably short and easy to understand, enabling students to focus on the application and not on the finite element method. © 2001 John Wiley & Sons, Inc. Comput Appl Eng Educ 9: 93–100, 2001 [source]
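
    The essence of a grid-based field solver of this kind can be conveyed in a few lines: discretize the domain and iterate a stencil until the field converges. The sketch below uses a finite-difference Jacobi iteration on a linear magnetostatic Poisson problem, a deliberate simplification of the non-linear finite element problem PDEase solves; all values are illustrative.

        import numpy as np

        def solve_poisson_2d(J, dx, mu0=4e-7 * np.pi, tol=1e-8, max_iter=20000):
            """Jacobi iteration for the 2D magnetostatic Poisson equation.

            Solves -laplacian(A_z) = mu0 * J_z with A_z = 0 on the boundary,
            i.e. uniform permeability -- far simpler than the non-linear,
            permanent-magnet problem in the article, but it shows the structure
            of a grid-based field solver.
            """
            A = np.zeros_like(J)
            for _ in range(max_iter):
                A_new = A.copy()
                A_new[1:-1, 1:-1] = 0.25 * (
                    A[2:, 1:-1] + A[:-2, 1:-1] + A[1:-1, 2:] + A[1:-1, :-2]
                    + dx**2 * mu0 * J[1:-1, 1:-1]
                )
                if np.max(np.abs(A_new - A)) < tol:
                    return A_new
                A = A_new
            return A

        # Usage: a square current-carrying region in the middle of the domain.
        n, dx = 65, 1e-3                       # 64x64 cells, 1 mm spacing
        J = np.zeros((n, n))
        J[28:37, 28:37] = 1e6                  # current density patch, A/m^2
        A = solve_poisson_2d(J, dx)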


    Practical CFD Simulations on Programmable Graphics Hardware using SMAC

    COMPUTER GRAPHICS FORUM, Issue 4 2005
    Carlos E. Scheidegger
    Abstract The explosive growth in integration technology and the parallel nature of rasterization-based graphics APIs (Application Programming Interfaces) changed the panorama of consumer-level graphics: today, GPUs (Graphics Processing Units) are cheap, fast and ubiquitous. We show how to harness the computational power of GPUs and solve the incompressible Navier–Stokes fluid equations significantly faster (more than one order of magnitude on average) than on CPU solvers of comparable cost. While past approaches typically used Stam's implicit solver, we use a variation of SMAC (Simplified Marker and Cell). SMAC is widely used in engineering applications, where experimental reproducibility is essential. Thus, we show that the GPU is a viable and affordable processor for scientific applications. Our solver works with general rectangular domains (possibly with obstacles), implements a variety of boundary conditions and incorporates energy transport through the traditional Boussinesq approximation. Finally, we discuss the implications of our solver in light of future GPU features, and possible extensions such as three-dimensional domains and free-boundary problems. [source]


    Initialization Strategies in Simulation-Based SFE Eigenvalue Analysis

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 5 2005
    Song Du
    Poor initializations often result in slow convergence, and in certain instances may lead to an incorrect or irrelevant answer. The problem of selecting an appropriate starting vector becomes even more complicated when the structure involved is characterized by properties that are random in nature. Here, a good initialization for one sample could be poor for another sample. Thus, proper eigenvector initialization for uncertainty analysis involving Monte Carlo simulations is essential for efficient random eigenvalue analysis. Most simulation procedures to date have been sequential in nature: a random vector describing the structural system is simulated, an FE analysis is conducted, the response quantities are identified by post-processing, and the process is repeated until the standard error in the response of interest is within desired limits. A different approach is to generate all the sample (random) structures prior to performing any FE analysis, sequentially rank order them according to some appropriate measure of distance between the realizations, and perform the FE analyses in similar rank order, using the results from the previous analysis as the initialization for the current analysis. The sample structures may also be ordered into a tree-type data structure, where each node represents a random sample, and the tree is traversed from the root until every node is visited exactly once. This approach differs from the sequential ordering approach in that it uses the solution of the "closest" node to initialize the iterative solver. The computational efficiencies that result from such orderings (at a modest expense of additional data storage) are demonstrated through a stability analysis of a system with closely spaced buckling loads and the modal analysis of a simply supported beam. [source]


    Integration of General Sparse Matrix and Parallel Computing Technologies for Large-Scale Structural Analysis

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 6 2002
    Shang-Hsien Hsieh
    Both general sparse matrix and parallel computing technologies are integrated in this study for the finite element solution of large-scale structural problems in a PC cluster environment. The general sparse matrix technique is first employed to reduce execution time and storage requirements for solving the simultaneous equilibrium equations in finite element analysis. To further reduce the time required for large-scale structural analyses, two parallel processing approaches for sharing computational workloads among collaborating processors are then investigated. One approach adopts a publicly available parallel equation solver, called SPOOLES, to directly solve the sparse finite element equations, while the other employs a parallel substructure method for the finite element solution. This work focuses more on integrating the general sparse matrix technique and the parallel substructure method for large-scale finite element solutions. Additionally, numerical studies have been conducted on several large-scale structural analyses using a PC cluster to investigate the effectiveness of the general sparse matrix and parallel computing technologies in reducing time and storage requirements in large-scale finite element structural analyses. [source]


    Complex version of high performance computing LINPACK benchmark (HPL)

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 5 2010
    R. F. Barrett
    Abstract This paper describes our effort to enhance the performance of the AORSA fusion energy simulation program through the use of the high-performance LINPACK (HPL) benchmark, commonly used in ranking the top 500 supercomputers. The algorithm used by HPL, enhanced by a set of tuning options, is more effective than that found in the ScaLAPACK library. Retrofitting these algorithms, such as look-ahead processing of pivot elements, into ScaLAPACK is considered a major undertaking. Moreover, HPL is configured as a benchmark, and only for real-valued coefficients. We therefore developed software to convert HPL for use within an application program that generates linear systems with complex coefficients. Although HPL is not normally perceived as a part of an application, our results show that the modified HPL software brings a significant increase in the performance of the solver when simulating the highest-resolution experiments configured thus far, achieving 87.5 TFLOPS on over 20 000 processors on the Cray XT4. Copyright © 2009 John Wiley & Sons, Ltd. [source]


    Usability levels for sparse linear algebra components

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 12 2008
    M. Sosonkina
    Abstract Sparse matrix computations are ubiquitous in high-performance computing applications and often are their most computationally intensive part. In particular, efficient solution of large-scale linear systems may drastically improve the overall application performance. Thus, the choice and implementation of the linear system solver are of paramount importance. It is difficult, however, to navigate through a multitude of available solver packages and to tune their performance to the problem at hand, mainly because of the plethora of interfaces, each requiring application adaptations to match the specifics of solver packages. For example, different ways of setting parameters and a variety of sparse matrix formats hinder smooth interactions of sparse matrix computations with user applications. In this paper, interfaces designed for components that encapsulate sparse matrix computations are discussed in the light of their matching with application usability requirements. Consequently, we distinguish three levels of interfaces, high, medium, and low, corresponding to the degree of user involvement in the linear system solution process and in sparse matrix manipulations. We demonstrate when each interface design choice is applicable and how it may be used to further users' scientific goals. Component computational overheads caused by various design choices are also examined, ranging from low level, for matrix manipulation components, to high level, in which a single component contains the entire linear system solver. Published in 2007 by John Wiley & Sons, Ltd. [source]


    Parallel space-filling curve generation through sorting

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 10 2007
    J. Luitjens
    Abstract In this paper we consider the scalability of parallel space-filling curve generation as implemented through parallel sorting algorithms. Multiple sorting algorithms are studied, and the results show that space-filling curves can be generated quickly in parallel on thousands of processors. In addition, performance models are presented that are consistent with measured performance and offer insight into performance on still larger numbers of processors. At large numbers of processors, the scalability of adaptive mesh refinement codes depends on the individual components of the adaptive solver. One such component is the dynamic load balancer. In adaptive mesh refinement codes, the mesh is constantly changing, resulting in load imbalance among the processors and requiring a load-balancing phase. The load balancing may occur often, requiring the load balancer to perform quickly. One common method for dynamic load balancing is to use space-filling curves. Space-filling curves, in particular the Hilbert curve, generate good partitions quickly in serial. However, at tens and hundreds of thousands of processors, serial generation of space-filling curves will hinder scalability. In order to avoid this issue we have developed a method that generates space-filling curves quickly in parallel by reducing the generation to integer sorting. Copyright © 2007 John Wiley & Sons, Ltd. [source]
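
    The reduction of curve generation to sorting works because a space-filling curve induces a total order on cells: compute each cell's key (its position along the curve), sort by key, and cut the sorted order into contiguous chunks. A serial sketch using the simpler Morton (Z-order) curve in place of the paper's Hilbert curve:

        def morton_key(ix, iy, bits=16):
            """Interleave the bits of integer cell coordinates (ix, iy).

            The resulting key is the cell's position along the Z-order
            space-filling curve; the Hilbert curve used in the paper has better
            locality but a more involved key computation.
            """
            key = 0
            for b in range(bits):
                key |= ((ix >> b) & 1) << (2 * b)      # x bits in even positions
                key |= ((iy >> b) & 1) << (2 * b + 1)  # y bits in odd positions
            return key

        def partition_by_curve(cells, n_parts):
            """Order cells along the curve, then cut the order into equal chunks.

            In the parallel setting, the sorted() call is what a distributed
            integer sort replaces.
            """
            ordered = sorted(cells, key=lambda c: morton_key(*c))
            size = -(-len(ordered) // n_parts)          # ceiling division
            return [ordered[i:i + size] for i in range(0, len(ordered), size)]

        # Usage: partition an 8x8 grid of cells among 4 processors.
        cells = [(x, y) for x in range(8) for y in range(8)]
        parts = partition_by_curve(cells, 4)
        print([len(p) for p in parts])  # -> [16, 16, 16, 16]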


    Parallelization and scalability of a spectral element channel flow solver for incompressible Navier–Stokes equations

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 10 2007
    C. W. Hamman
    Abstract Direct numerical simulation (DNS) of turbulent flows is widely recognized to demand fine spatial meshes, small timesteps, and very long runtimes to properly resolve the flow field. To overcome these limitations, most DNS is performed on supercomputing machines. With the rapid development of terascale (and, eventually, petascale) computing on thousands of processors, it has become imperative to consider the development of DNS algorithms and parallelization methods that are capable of fully exploiting these massively parallel machines. A highly parallelizable algorithm for the simulation of turbulent channel flow that allows for efficient scaling on several thousand processors is presented. A model that accurately predicts the performance of the algorithm is developed and compared with experimental data. The results demonstrate that the proposed numerical algorithm is capable of scaling well on petascale computing machines and thus will allow for the development and analysis of high Reynolds number channel flows. Copyright © 2007 John Wiley & Sons, Ltd. [source]


    Full waveform seismic inversion using a distributed system of computers

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 11 2005
    Indrajit G. Roy
    Abstract The aim of seismic waveform inversion is to estimate the elastic properties of the Earth's subsurface layers from recordings of seismic waveform data. This is usually accomplished by using constrained optimization often based on very simplistic assumptions. Full waveform inversion uses a more accurate wave propagation model but is extremely difficult to use for routine analysis and interpretation. This is because computational difficulties arise due to: (1) strong nonlinearity of the inverse problem; (2) extreme ill-posedness; and (3) large dimensions of data and model spaces. We show that some of these difficulties can be overcome by using: (1) an improved forward problem solver and efficient technique to generate sensitivity matrix; (2) an iteration adaptive regularized truncated Gauss–Newton technique; (3) an efficient technique for matrix–matrix and matrix–vector multiplication; and (4) a parallel programming implementation with a distributed system of processors. We use a message-passing interface in the parallel programming environment. We present inversion results for synthetic and field data, and a performance analysis of our parallel implementation. Copyright © 2005 John Wiley & Sons, Ltd. [source]


    Optimization Study of ICRF Heating in the LHD and HSX Configurations

    CONTRIBUTIONS TO PLASMA PHYSICS, Issue 6-7 2010
    S. Murakami
    Abstract Two global simulation codes, TASK/WM (a full wave solver) and GNET (a 5-D drift kinetic equation solver), are combined to simulate ICRF heating in 3D magnetic configurations. The combined code is applied to ICRF minority heating in the LHD configuration. An optimization of the ICRF heating is considered by changing the magnetic configurations and the resonance surfaces in LHD plasmas using the GNET code. It is found that the heating efficiency is improved by about 30% at a heating power of 10 MW in the optimized heating scenario relative to the present standard off-axis heating scenario. ICRF minority heating is also studied in the HSX plasma, where heating of about 100 kW is found to remain effective for tail ions even at ρ/a ≈ 1/7.5. (© 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


    A parallel multigrid solver for high-frequency electromagnetic field analyses with small-scale PC cluster

    ELECTRONICS & COMMUNICATIONS IN JAPAN, Issue 9 2008
    Kuniaki Yosui
    Abstract Finite element analyses of electromagnetic fields are commonly used for designing various electronic devices. The scale of these analyses is becoming larger and larger; therefore, a fast linear solver is needed to solve the linear equations arising from the finite element method. Since a multigrid solver is the fastest linear solver for these problems, parallelization of a multigrid solver is quite a useful approach. From the viewpoint of industrial applications, effective usage of a small-scale PC cluster is important because of the initial cost of introducing parallel computers. In this paper, a distributed parallel multigrid solver for a small-scale PC cluster is developed. In high-frequency electromagnetic analyses, a special block Gauss–Seidel smoother is used in the multigrid solver instead of general smoothers such as a Gauss–Seidel or Jacobi smoother in order to improve the convergence rate. The block multicolor ordering technique is applied to parallelize the smoother. A numerical example shows that a 3.7-fold speed-up in computational time and a 3.0-fold increase in the scale of the analysis were attained when the number of CPUs was increased from one to five. © 2009 Wiley Periodicals, Inc. Electron Comm Jpn, 91(9): 28–36, 2008; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecj.10160 [source]
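
    Multicolor ordering makes a Gauss–Seidel sweep parallel by grouping the unknowns into colors with no mutual dependence, so all points of one color can be updated concurrently. For a 5-point stencil two colors suffice; the red-black sketch below illustrates the principle on a Poisson problem, though the paper's smoother is a block variant tailored to high-frequency electromagnetic matrices, which this does not reproduce.

        import numpy as np

        def red_black_gauss_seidel(u, f, dx, sweeps=1):
            """Red-black Gauss-Seidel smoothing for -laplacian(u) = f on a 2D grid.

            Points of one color depend only on points of the other color, so all
            points of a color can be updated in parallel (e.g. one MPI rank per
            subdomain in a distributed multigrid solver).
            """
            for _ in range(sweeps):
                for color in (0, 1):
                    for i in range(1, u.shape[0] - 1):
                        # Points in row i carrying this color: i + j parity is fixed.
                        start = 1 + (i + color) % 2
                        j = np.arange(start, u.shape[1] - 1, 2)
                        u[i, j] = 0.25 * (
                            u[i - 1, j] + u[i + 1, j] + u[i, j - 1] + u[i, j + 1]
                            + dx**2 * f[i, j]
                        )
            return u

        # Usage: smooth a random initial guess for a zero right-hand side.
        n, dx = 33, 1.0 / 32
        u = np.random.default_rng(0).random((n, n))
        u[0, :] = u[-1, :] = u[:, 0] = u[:, -1] = 0.0   # Dirichlet boundary
        u = red_black_gauss_seidel(u, np.zeros((n, n)), dx, sweeps=5)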


    Application of the equivalent multipole moment method with polar translations to forward calculation of neuromagnetic fields

    ELECTRONICS & COMMUNICATIONS IN JAPAN, Issue 4 2008
    Shoji Hamada
    Abstract This paper describes an application of the equivalent multipole moment method (EMMM) with polar translations to the calculation of magnetic fields induced by a current dipole placed in a human head model. Although the EMMM is a conventional Laplacian field solver based on spherical harmonic functions, the polar translations enable it to treat eccentric and exclusive spheres in arbitrary arrangements. The head model is composed of seven spheres corresponding to skin, two eyeballs, skull, cerebral spinal fluid, gray matter, and white matter. The validity of the calculated magnetic fields and the magnetic flux linkages with a loop coil located near the model is successfully confirmed by the reciprocity theorem derived by Eaton. © 2008 Wiley Periodicals, Inc. Electron Comm Jpn, 91(4): 34–44, 2008; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecj.10079 [source]


    Modelling of Hot Ductility during Solidification of Steel Grades in Continuous Casting – Part II

    ADVANCED ENGINEERING MATERIALS, Issue 3 2010
    Bernd Böttger
    In continuous casting, the probability of hot cracks developing strongly depends on the local solidification process and the microstructure formation. In ref. 1, an integrative model for hot cracking of the initial solid shell is developed. This paper focuses on solidification modelling, which plays an important role in the integrated approach. Solidification is simulated using a multiphase-field model, coupled online to thermodynamic and diffusion databases and using an integrated 1D temperature solver to describe the local temperature field. Less-complex microsegregation models are discussed for comparison. The results are compared to EDX results from strand samples of different steel grades. [source]


    Design of an estimator of the kinematics of AC contactors

    EUROPEAN TRANSACTIONS ON ELECTRICAL POWER, Issue 7 2009
    Jordi-Roger Riba Ruiz
    Abstract This paper develops an estimator of the kinematics of the movable parts of any AC-powered contactor. The estimator uses easily measurable electrical variables such as the voltage across the coil terminals and the current flowing through the main coil of the contactor. Hence, a low-cost microcontroller would be able to implement a control algorithm to reduce the undesirable phenomenon of contact bounce, which causes severe erosion of the contacts and dramatically reduces their electrical life and reliability. To develop such an estimator, it is essential to have a robust model of the contactor at our disposal. Therefore, a rigorous parametric model that allows us to predict the dynamic response of the AC contactor is proposed. It solves the coupled mechanical and electromagnetic differential equations that govern the dynamics of the contactor by applying a Runge–Kutta-based solver. Several approaches have been described in the technical literature; most are based on computationally expensive finite element methods or on simplified parametric models. The parametric model presented here takes into account the fringing flux and deals with shading-ring interaction from a general point of view, thus avoiding simplified assumptions. Copyright © 2008 John Wiley & Sons, Ltd. [source]
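
    The Runge–Kutta treatment of the coupled equations can be illustrated with a stripped-down contactor model: one electrical equation for the coil current and a mechanical pair for the armature position and velocity. The sketch below integrates such a toy system with classical RK4; the constant inductance, the i² force law and all parameter values are illustrative assumptions, far simpler than the paper's model with fringing flux and shading rings.

        import numpy as np

        def rk4_step(f, t, y, dt):
            """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
            k1 = f(t, y)
            k2 = f(t + dt / 2, y + dt / 2 * k1)
            k3 = f(t + dt / 2, y + dt / 2 * k2)
            k4 = f(t + dt, y + dt * k3)
            return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

        # Toy AC contactor: state y = (i, x, v) = coil current, position, velocity.
        R, L, m, k_spring, k_force = 20.0, 0.5, 0.05, 400.0, 2.0

        def contactor(t, y):
            i, x, v = y
            V = 230.0 * np.sqrt(2) * np.sin(2 * np.pi * 50 * t)  # 230 V, 50 Hz supply
            di = (V - R * i) / L                  # coil circuit (constant L assumed)
            F = k_force * i * i                   # crude magnetic force ~ i^2
            dv = (F - k_spring * x) / m           # armature: magnetic force vs spring
            return np.array([di, v, dv])

        # Integrate one supply period.
        t, dt, y = 0.0, 1e-5, np.array([0.0, 0.0, 0.0])
        while t < 0.02:
            y = rk4_step(contactor, t, y, dt)
            t += dt
        print(y)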


    Survivable wavelength-routed optical network design using genetic algorithms

    EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 3 2008
    Y. S. Kavian
    The provision of acceptable service in the presence of failures and attacks is a major issue in the design of next-generation dense wavelength division multiplexing (DWDM) networks. Survivability is provided by the establishment of a spare lightpath for each connection request to protect the working lightpath. This paper presents a genetic algorithm (GA) solver for the routing and wavelength assignment problem with working and spare lightpaths, used to design survivable optical networks in the presence of a single link failure. Lightpaths are encoded into chromosomes made up of a fixed number of genes equal to the number of entries in the traffic demand matrix. Each gene represents one valid path and is thus coded as a variable-length binary string. After crossover and mutation, each member of the population represents a set of valid but possibly incompatible paths, and those that do not satisfy the problem constraints are discarded. The best paths are then found by use of a fitness function and are assigned the minimum number of wavelengths according to the problem constraints. The proposed approach has been evaluated on dedicated path protection and shared path protection. Simulation results show that the GA method is efficient and able to design survivable real-world DWDM optical mesh networks. Copyright © 2007 John Wiley & Sons, Ltd. [source]
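
    The encoding described above can be made concrete with a minimal GA loop: a chromosome holds one candidate-path choice per demand-matrix entry, a fitness function scores each chromosome, and crossover plus mutation evolve the population. The sketch below is deliberately simplified (integer genes selecting among precomputed candidate paths, a toy hop-count fitness); the paper's binary path encoding, wavelength assignment and constraint handling are not reproduced.

        import random

        def evolve(candidate_paths, fitness, pop_size=30, generations=100, p_mut=0.05):
            """Minimal GA over chromosomes of path choices, one gene per demand.

            candidate_paths[d] is the list of precomputed valid paths for demand d;
            a gene stores an index into that list, so offspring stay feasible by
            construction (the paper instead discards infeasible offspring).
            """
            n_genes = len(candidate_paths)
            pop = [[random.randrange(len(candidate_paths[d])) for d in range(n_genes)]
                   for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=fitness)                    # lower fitness = better
                survivors = pop[: pop_size // 2]
                children = []
                while len(survivors) + len(children) < pop_size:
                    a, b = random.sample(survivors, 2)
                    cut = random.randrange(1, n_genes)   # one-point crossover
                    child = a[:cut] + b[cut:]
                    for d in range(n_genes):             # mutation: re-pick a path
                        if random.random() < p_mut:
                            child[d] = random.randrange(len(candidate_paths[d]))
                    children.append(child)
                pop = survivors + children
            return min(pop, key=fitness)

        # Usage: 4 demands, 3 candidate paths each, tagged with their hop counts;
        # the fitness is the total hop count of the chosen paths.
        paths = [[(3,), (4,), (5,)]] * 4
        best = evolve(paths, fitness=lambda c: sum(paths[d][g][0]
                                                   for d, g in enumerate(c)))
        print(best)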


    2-D/3-D multiply transmitted, converted and reflected arrivals in complex layered media with the modified shortest path method

    GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 1 2009
    Chao-Ying Bai
    SUMMARY Grid-cell based schemes for tracing seismic arrivals, such as the finite difference eikonal equation solver or the shortest path method (SPM), are conventionally confined to locating first arrivals only. However, later arrivals are numerous and sometimes of greater amplitude than the first arrivals, making them valuable information, with the potential to be used for precise earthquake location, high-resolution seismic tomography, real-time automatic onset picking and identification of multiple events on seismic exploration data. The purpose of this study is to introduce a modified SPM (MSPM) for tracking multiple arrivals comprising any kind of combination of transmissions, conversions and reflections in complex 2-D/3-D layered media. A practical approach known as the multistage scheme is incorporated into the MSPM to propagate seismic wave fronts from one interface (or subsurface structure for 3-D application) to the next. By treating each layer that the wave front enters as an independent computational domain, one obtains a transmitted and/or converted branch of later arrivals by reinitializing it in the adjacent layer, and a reflected and/or converted branch of later arrivals by reinitializing it in the incident layer. A simple local grid refinement scheme at the layer interface is used to maintain the same accuracy as in the one-stage MSPM application in tracing first arrivals. Benchmark tests against the multistage fast marching method are undertaken to assess the solution accuracy and the computational efficiency. Several examples are presented that demonstrate the viability of the multistage MSPM in highly complex layered media. Even in the presence of velocity variations, such as the Marmousi model, or interfaces exhibiting a relatively high curvature, later arrivals composed of any combination of the transmitted, converted and reflected events are tracked accurately. This is because the multistage MSPM retains the desirable properties of a single-stage MSPM: high computational efficiency and a high accuracy compared with the multistage FMM scheme. [source]
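
    At its core, the shortest path method computes first-arrival traveltimes with Dijkstra's algorithm on a graph whose nodes are grid points and whose edge weights are distance times local slowness; the multistage scheme then reinitializes such a sweep at each interface to pick up reflected, transmitted and converted branches. A minimal single-stage 2D sketch (names and parameters illustrative):

        import heapq
        import numpy as np

        def spm_traveltimes(slowness, src, dx=1.0):
            """First-arrival traveltimes by a shortest-path (Dijkstra) sweep.

            slowness -- 2D array of 1/velocity per node; src -- (i, j) source node.
            Edges connect the 8 neighbours; weight = distance * average slowness.
            A multistage version would rerun this from interface nodes to obtain
            reflected/transmitted branches of later arrivals.
            """
            t = np.full(slowness.shape, np.inf)
            t[src] = 0.0
            heap = [(0.0, src)]
            nbrs = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di, dj) != (0, 0)]
            while heap:
                ti, (i, j) = heapq.heappop(heap)
                if ti > t[i, j]:
                    continue                      # stale heap entry
                for di, dj in nbrs:
                    ni, nj = i + di, j + dj
                    if 0 <= ni < t.shape[0] and 0 <= nj < t.shape[1]:
                        dist = dx * np.hypot(di, dj)
                        tn = ti + dist * 0.5 * (slowness[i, j] + slowness[ni, nj])
                        if tn < t[ni, nj]:
                            t[ni, nj] = tn
                            heapq.heappush(heap, (tn, (ni, nj)))
            return t

        # Usage: two-layer medium with a faster lower layer.
        s = np.full((50, 50), 1 / 2.0)           # v = 2 km/s
        s[25:, :] = 1 / 4.0                      # v = 4 km/s below the interface
        times = spm_traveltimes(s, src=(0, 25), dx=0.1)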


    Parsimonious finite-volume frequency-domain method for 2-D P–SV-wave modelling

    GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 2 2008
    R. Brossier
    SUMMARY A new numerical technique for solving 2-D elastodynamic equations based on a finite-volume frequency-domain approach is proposed. This method has been developed as a tool to perform 2-D elastic frequency-domain full-waveform inversion. In this context, the system of linear equations that results from the discretization of the elastodynamic equations is solved with a direct solver, allowing efficient multiple-source simulations at the partial expense of the memory requirement. The discretization of the finite-volume approach is through triangles. Only fluxes with the required quantities are shared between the cells, relaxing the meshing conditions, as compared to finite-element methods. The free surface is described along the edges of the triangles, which can have different slopes. By applying a parsimonious strategy, the stress components are eliminated from the discrete equations and only the velocities are left as unknowns in the triangles. Together with the local support of the P0 finite-volume stencil, the parsimonious approach allows the minimizing of core memory requirements for the simulation. Efficient perfectly matched layer absorbing conditions have been designed for damping the waves around the grid. The numerical dispersion of this FV formulation is similar to that of O(Δx²) staggered-grid finite-difference (FD) formulations when considering structured triangular meshes. The validation has been performed with analytical solutions of several canonical problems and with numerical solutions computed with a well-established FD time-domain method in heterogeneous media. In the presence of a free surface, the finite-volume method requires 10 triangles per wavelength for a flat topography, and 15 triangles per wavelength for more complex shapes, well below the criteria required by the staircase approximation of O(Δx²) FD methods. Comparisons between the frequency-domain finite-volume and the O(Δx²) rotated FD methods also show that the former is faster and less memory demanding for a given accuracy level, an attractive feature for frequency-domain seismic inversion. We have thus developed an efficient method for 2-D P–SV-wave modelling on structured triangular meshes as a tool for frequency-domain full-waveform inversion. Further work is required to improve the accuracy of the method on unstructured meshes. [source]


    A practical grid-based method for tracking multiple refraction and reflection phases in three-dimensional heterogeneous media

    GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 1 2006
    M. De Kool
    SUMMARY We present a practical grid-based method in 3-D spherical coordinates for computing multiple phases comprising any number of reflection and transmission branches in heterogeneous layered media. The new scheme is based on a multistage approach which treats each layer that the wave front enters as a separate computational domain. A finite-difference eikonal solver known as the fast-marching method (FMM) is reinitialized at each interface to track the evolving wave front as either a reflection back into the incident layer or a transmission through to the adjacent layer. Unlike the standard FMM, which only finds first arrivals, this multistage approach can track those later arriving phases explicitly caused by the presence of discontinuities. Notably, the method does not require an irregular mesh to be constructed in order to connect interface nodes to neighbouring velocity nodes which lie on a regular grid. To improve accuracy, local grid refinement is used in the neighbourhood of a source point where wave front curvature is high. The method also provides a way to trace reflections from an interface that are not the first arrival (e.g. the global PP phase). These are computed by initializing the multistage FMM from both the source and receiver, propagating the two wave fronts to the reflecting interface, and finding stationary points of the sum of the two traveltime fields on the reflecting interface. A series of examples are presented to test the efficiency, accuracy and robustness of the new scheme. As well as efficiently computing various global phases to an acceptable accuracy through the ak135 model, we also demonstrate the ability of the scheme to track complex crustal phases that may be encountered in coincident reflection, wide-angle reflection/refraction or local earthquake surveys. In one example, a variety of phases are computed in the presence of a realistic subduction zone, which includes several layer pinch-outs and a subducting slab. Our numerical tests show that the new scheme is a practical and robust alternative to conventional ray tracing for finding various phases in layered media at a variety of scales. [source]


    Traveltime computation with the linearized eikonal equation for anisotropic media

    GEOPHYSICAL PROSPECTING, Issue 4 2002
    Tariq Alkhalifah
    A linearized eikonal equation is developed for transversely isotropic (TI) media with a vertical symmetry axis (VTI). It is linear with respect to perturbations in the horizontal velocity or the anisotropy parameter η. An iterative linearization of the eikonal equation is used as the basis for an algorithm of finite-difference traveltime computations. A practical implementation of this iterative technique is to start with a background model that consists of an elliptically anisotropic, inhomogeneous medium, since traveltimes for this type of medium can be calculated efficiently using eikonal solvers, such as the fast marching method. This constrains the perturbation to changes in the anisotropy parameter η (the parameter most responsible for imaging improvements in anisotropic media). The iterative implementation includes repetitive calculation of η from traveltimes, which is then used to evaluate the perturbation needed for the next round of traveltime calculations using the linearized eikonal equation. Unlike isotropic media, interpolation is needed to estimate η in areas where the traveltime field is independent of η, such as areas where the wave propagates vertically. Typically, two to three iterations can give sufficient accuracy in traveltimes for imaging applications. The cost of each iteration is slightly less than the cost of a typical eikonal solver. However, this method will ultimately provide traveltime solutions for VTI media. The main limitation of the method is that some smoothness of the medium is required for the iterative implementation to work, especially since we evaluate derivatives of the traveltime field as part of the iterative approach. If a single perturbation is sufficient for the traveltime calculation, which may be the case for weak anisotropy, no smoothness of the medium is necessary. Numerical tests demonstrate the robustness and efficiency of this approach. [source]


    Impact of Simulation Model Solver Performance on Ground Water Management Problems

    GROUND WATER, Issue 5 2008
    David P. Ahlfeld
    Ground water management models require the repeated solution of a simulation model to identify an optimal solution to the management problem. Limited precision in simulation model calculations can cause optimization algorithms to produce erroneous solutions. Experiments are conducted on a transient field application with a streamflow depletion control management formulation solved with a response matrix approach. The experiment consists of solving the management model with different levels of simulation model solution precision and comparing the differences in optimal solutions obtained. The precision of simulation model solutions is controlled by choice of solver and convergence parameter and is monitored by observing reported budget discrepancy. The difference in management model solutions results from errors in computation of response coefficients. Error in the largest response coefficients is found to have the most significant impact on the optimal solution. Methods for diagnosing the adequacy of precision when simulation models are used in a management model framework are proposed. [source]


    Numerical simulation of a dam break for an actual river terrain environment

    HYDROLOGICAL PROCESSES, Issue 4 2007
    C. B. Liao
    Abstract A two-dimensional (2D) finite-difference shallow water model based on a second-order hybrid type of total variation diminishing (TVD) approximate solver with a MUSCL limiter function was developed to model flooding and inundation problems, where the evolution of the drying and wetting interface is numerically challenging. Both a minimum positive depth (MPD) scheme and a non-MPD scheme were employed to handle the advancement of drying and wetting fronts. We used several model problems to verify the model, including a dam break in a sloped channel, a dam-break flood over a triangular obstacle, an idealized circular dam break, and a tide flow over a mound. Computed results agreed well with the experimental data and other numerical results available. The model was then applied to simulate the dam breaking and flooding of Hsindien Creek, Taiwan, with detailed river basin topography. Computed flooding scenarios show reasonable flow characteristics. Though the average speed of flooding is 6–7 m s⁻¹, which corresponds to the subcritical flow condition (Fr < 1), the local maximum speed of flooding is 14.12 m s⁻¹, which corresponds to the supercritical flow condition (Fr ≈ 1.31). Comparison of the numerical results with measurements and experiments remains necessary in further studies. Nevertheless, the model exhibits the capability to capture the essential features of dam-break flows with drying and wetting fronts, and the potential to provide the basis for computationally efficient flood routing and warning information. Copyright © 2006 John Wiley & Sons, Ltd. [source]
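
    The MUSCL limiter mentioned above is what raises the scheme to second order while keeping it TVD: cell averages are reconstructed as linear profiles whose slopes are limited so that no new extrema appear near the drying/wetting fronts. A minimal 1D reconstruction with the minmod limiter, illustrative rather than the paper's exact hybrid scheme:

        import numpy as np

        def minmod(a, b):
            """Minmod limiter: the smaller slope when signs agree, else zero."""
            return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

        def muscl_interface_states(q):
            """Second-order TVD reconstruction of left/right states at interfaces.

            q -- 1D array of cell averages. Returns (qL, qR) at the interior
            interfaces; these states feed an approximate Riemann solver instead
            of the first-order choice qL = q[i], qR = q[i+1].
            """
            dq = np.diff(q)
            # Limited slope in each interior cell.
            slope = np.zeros_like(q)
            slope[1:-1] = minmod(dq[:-1], dq[1:])
            # Extrapolate cell averages to the cell faces.
            qL = q[:-1] + 0.5 * slope[:-1]       # left state at interface i+1/2
            qR = q[1:] - 0.5 * slope[1:]         # right state at interface i+1/2
            return qL, qR

        # Usage: reconstruction around a discontinuity stays non-oscillatory.
        q = np.where(np.arange(20) < 10, 2.0, 1.0)
        qL, qR = muscl_interface_states(q)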