Computational Time



Selected Abstracts


A parallel multigrid solver for high-frequency electromagnetic field analyses with small-scale PC cluster

ELECTRONICS & COMMUNICATIONS IN JAPAN, Issue 9 2008
Kuniaki Yosui
Abstract Finite element analyses of electromagnetic fields are commonly used for designing various electronic devices. The scale of the analyses becomes larger and larger; therefore, a fast linear solver is needed to solve the linear equations arising from the finite element method. Since a multigrid solver is the fastest linear solver for these problems, parallelization of a multigrid solver is quite a useful approach. From the viewpoint of industrial applications, effective usage of a small-scale PC cluster is important because of the initial cost of introducing parallel computers. In this paper, a distributed parallel multigrid solver for a small-scale PC cluster is developed. In high-frequency electromagnetic analyses, a special block Gauss–Seidel smoother is used for the multigrid solver instead of general smoothers such as a Gauss–Seidel or Jacobi smoother in order to improve the convergence rate. The block multicolor ordering technique is applied to parallelize the smoother. A numerical example shows that a 3.7-fold speed-up in computational time and a 3.0-fold increase in the scale of the analysis were attained when the number of CPUs was increased from one to five. © 2009 Wiley Periodicals, Inc. Electron Comm Jpn, 91(9): 28–36, 2008; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecj.10160
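The abstract does not spell out the smoother itself; as a rough illustration of why multicolour ordering makes Gauss–Seidel parallelizable, the sketch below applies the classical two-colour (red-black) variant to a 1-D Poisson model problem. The function name and the model problem are illustrative assumptions, not the paper's block scheme for electromagnetic systems.

```python
def redblack_gauss_seidel(n, f, sweeps=500):
    # Solve -u'' = f on (0,1) with u(0) = u(1) = 0 on n interior nodes.
    # Two-colour ("red-black") Gauss-Seidel: all odd-indexed nodes are
    # updated first, then all even-indexed ones.  Nodes of the same
    # colour do not couple through the 3-point stencil, so each
    # half-sweep parallelizes -- the idea that block multicolor
    # ordering generalizes.
    h = 1.0 / (n + 1)
    u = [0.0] * (n + 2)               # includes the two boundary nodes
    for _ in range(sweeps):
        for colour in (1, 2):         # odd nodes, then even nodes
            for i in range(colour, n + 1, 2):
                u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f(i * h))
    return u
```

Within one colour the updates read only nodes of the other colour, so they can be distributed across processors without changing the result.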


Optimal measurement placement for security constrained state estimation using hybrid genetic algorithm and simulated annealing

EUROPEAN TRANSACTIONS ON ELECTRICAL POWER, Issue 2 2009
T. Kerdchuen
Abstract This paper proposes a hybrid genetic algorithm and simulated annealing (HGS) method for solving optimal measurement placement for power system state estimation. Even though the minimum number of measurement pairs is N considering single measurement loss, their positions are required to make the system observable. The HGS algorithm is a genetic algorithm (GA) that uses the acceptance criterion of simulated annealing (SA) for chromosome selection. The Pθ observable concept is used to check network observability with and without single measurement pair loss contingency and single branch outage. Test results for the 10-bus and IEEE 14-, 30-, 57-, and 118-bus systems indicate that HGS is superior to tabu search (TS), GA, and SA in terms of a higher frequency of best hits and shorter computational time. Copyright © 2007 John Wiley & Sons, Ltd.
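The abstract names the hybrid but not its mechanics; the sketch below shows the one ingredient it does describe, the SA (Metropolis) acceptance criterion applied to GA chromosome selection. The function signature and the cost/temperature interface are assumptions.

```python
import math
import random

def hgs_select(parent, child, cost, temperature, rng=random):
    # GA survivor selection with the simulated-annealing (Metropolis)
    # acceptance criterion: an improving child always replaces the
    # parent; a worse child is accepted with probability exp(-delta/T),
    # so early (hot) generations explore and later (cool) ones exploit.
    delta = cost(child) - cost(parent)
    if delta <= 0 or rng.random() < math.exp(-delta / temperature):
        return child
    return parent
```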


Development of the extended parametric meshless Galerkin method to predict the crack propagation path in two-dimensional damaged structures

FATIGUE & FRACTURE OF ENGINEERING MATERIALS AND STRUCTURES, Issue 7 2009
M. MUSIVAND-ARZANFUDI
ABSTRACT The parametric meshless Galerkin method (PMGM) enhances the promising features of meshless methods by utilizing parametric spaces and parametric mapping, and improves their efficiency from a practical viewpoint. The computation of meshless shape functions has usually been a time-consuming and complicated task in meshless methods. In the PMGM, the meshless shape functions are mapped from the parametric space to the physical space, and therefore the computational time needed to generate the meshless shape functions is saved. The extended parametric meshless Galerkin method (X-PMGM) further improves the parametric property of the PMGM by incorporating partition-of-unity concepts. In this paper, the X-PMGM is extended by incorporating a crack-tip formulation for fracture analysis and prediction of the crack propagation path in damaged structures. In this formulation, the meshless shape functions are enriched by a discontinuous enrichment function as well as crack-tip enrichment functions. The obtained results show that the predicted crack growth path is in good agreement with experimental results.


Approximation methods for reliability-based design optimization problems

GAMM - MITTEILUNGEN, Issue 2 2007
Irfan Kaymaz
Abstract Deterministic optimum designs are obtained without considering uncertainties in problem parameters such as material parameters (yield stress, allowable stresses, moment capacities, etc.), external loadings, manufacturing errors, tolerances, and cost functions, which can lead to unreliable designs. Several methods have therefore been developed to treat uncertainties in engineering analysis and, more recently, to carry out design optimization with the additional requirement of reliability, which is referred to as reliability-based design optimization. In this paper, the two most common approaches for reliability-based design optimization are reviewed: the reliability-index based approach and the performance-measure approach. Although both approaches can be used to evaluate the probabilistic constraint, their use can be prohibitive when the function evaluations required by the probabilistic constraint are expensive, especially for real engineering problems. Therefore, an adaptive response surface method is proposed in which the probabilistic constraint is replaced with a simple polynomial function, so that the computational time can be reduced significantly, as shown in the example given in this paper. (© 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
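As a hedged, one-dimensional illustration of the substitution a response surface method performs, the sketch below replaces an expensive limit-state function with the quadratic polynomial interpolating it at three design points. The paper's method is adaptive and multidimensional; the names and the Lagrange-interpolation choice here are assumptions.

```python
def quad_surrogate(g, x0, x1, x2):
    # Replace the expensive limit-state function g by the quadratic
    # polynomial interpolating it at three design points (Lagrange
    # form).  Every later constraint evaluation then costs a handful of
    # multiplications instead of a full reliability analysis.
    y0, y1, y2 = g(x0), g(x1), g(x2)

    def surrogate(x):
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
                + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
                + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))

    return surrogate
```

The surrogate is exact whenever g itself is quadratic; an adaptive scheme would re-fit it around the current design point as the optimization proceeds.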


Inversion of time-dependent nuclear well-logging data using neural networks

GEOPHYSICAL PROSPECTING, Issue 1 2008
Laura Carmine
ABSTRACT The purpose of this work was to investigate a new and fast inversion methodology for the prediction of subsurface formation properties such as porosity, salinity and oil saturation, using time-dependent nuclear well-logging data. Although the ultimate aim is to apply the technique to real-field data, an initial investigation as described in this paper was first required; this has been carried out using simulation results from the time-dependent radiation transport problem within a borehole. Simulated neutron and γ-ray fluxes at two sodium iodide (NaI) detectors, one near and one far from a pulsed neutron source emitting at ~14 MeV, were used for the investigation. A total of 67 energy groups from the BUGLE96 cross-section library together with 567 property combinations were employed for the original flux response generation, achieved by solving numerically the time-dependent Boltzmann radiation transport equation in its even-parity form. Material property combinations (scenarios) and their corresponding teaching outputs (flux responses at the detectors) are used to train the artificial neural networks (ANNs), and test data are used to assess the accuracy of the ANNs. The trained networks are then used to produce a surrogate of the forward model, which is expensive in terms of computational time and resources; with this surrogate, a simple inversion method is applied to calculate material properties from the time evolution of flux responses at the two detectors. The inversion technique uses a fast surrogate model comprising 8026 artificial neural networks, each of which consists of an input layer with three input units (neurons) for porosity, salinity and oil saturation, two hidden layers, and one output neuron representing the scalar photon or neutron flux prediction at the detector. This is the first time this technique has been applied to invert pulsed neutron logging tool information, and the results produced are very promising.
The next step in the procedure is to apply the methodology to real data.
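The inversion step on top of a cheap surrogate can be illustrated by a brute-force misfit minimization over candidate property sets. The candidate-search strategy and names below are assumptions standing in for the paper's simple inversion method.

```python
def invert(surrogate, measured, candidates):
    # Surrogate-based inversion: evaluate the cheap forward surrogate
    # on each candidate property set and return the one whose predicted
    # response best matches the measurement (least-squares misfit).
    def misfit(c):
        pred = surrogate(c)
        return sum((p - m) ** 2 for p, m in zip(pred, measured))
    return min(candidates, key=misfit)
```

Because each surrogate evaluation is cheap, even an exhaustive sweep over hundreds of property combinations is feasible, which is what makes the trained-network approach fast compared with running the transport solver inside the inversion loop.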


Energy Group optimization for forward and inverse problems in nuclear engineering: application to downwell-logging problems

GEOPHYSICAL PROSPECTING, Issue 2 2006
Elsa Aristodemou
ABSTRACT Simulating radiation transport of neutral particles (neutrons and γ-ray photons) within subsurface formations has been an area of research in the nuclear well-logging community since the 1960s, with many researchers exploiting existing computational tools already available within the nuclear reactor community. Deterministic codes became a popular tool, with the radiation transport equation being solved using a discretization of the phase-space of the problem (energy, angle, space and time). The energy discretization in such codes is based on the multigroup approximation, or equivalently the discrete finite-difference energy approximation. One of the uncertainties of simulating radiation transport problems has therefore become the multigroup energy structure. The nuclear reactor community has tackled the problem by optimizing existing nuclear cross-section libraries using a variety of group-collapsing codes, whilst the nuclear well-logging community has relied, until now, on libraries used in the nuclear reactor community. However, although the utilization of such libraries has been extremely useful in the past, it has also become clear that a larger number of energy groups was available than was necessary for well-logging problems. It was obvious, therefore, that a multigroup energy structure specific to the needs of the nuclear well-logging community needed to be established. This would have the benefit of reducing computational time (the ultimate aim of this work) for both the stochastic and deterministic calculations, since computational time increases with the number of energy groups. We therefore present in this study two methodologies that enable the optimization of any multigroup neutron–γ energy structure. Although we test our theoretical approaches on nuclear well-logging synthetic data, the methodologies can be applied to other radiation transport problems that use the multigroup energy approximation.
The first approach considers the effect of collapsing the neutron groups by solving the forward transport problem directly using the deterministic code EVENT, obtaining neutron and γ-ray fluxes deterministically for the different group-collapsing options. The best collapsing option is chosen as the one which minimizes the effect on the γ-ray spectrum. In this methodology, parallel processing is implemented to reduce computational times. The second approach uses the uncollapsed output from neural network simulations in order to estimate the new, collapsed fluxes for the different collapsing cases. Subsequently, an inversion technique is used to calculate the properties of the subsurface based on the collapsed fluxes. The best collapsing option is chosen as the one that predicts the subsurface properties with minimal error. The fundamental difference between the two methodologies relates to their effect on the generated γ-rays. The first methodology takes the generation of γ-rays fully into account by solving the transport equation directly. The second methodology assumes that the reduction of the neutron groups has no effect on the γ-ray fluxes. It does, however, utilize an inversion scheme to predict the subsurface properties reliably, and it looks at the effect of collapsing the neutron groups on these predictions. Although the second procedure is favoured because of (a) the speed with which a solution can be obtained and (b) the application of an inversion scheme, its results need to be validated against a physically more stringent methodology. A comparison of the two methodologies is therefore given.
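The group-collapsing operation at the heart of both methodologies can be sketched as a simple index-based summation of fine-group fluxes into coarse groups. Plain summation is an assumption here (group flux is an integral quantity, so it is the natural conserving choice); production group-collapsing codes use flux-weighted cross-section averaging as well.

```python
def collapse_groups(fluxes, boundaries):
    # Collapse a fine multigroup flux into coarse groups: sum the group
    # fluxes between successive boundary indices.  boundaries must start
    # at 0 and end at len(fluxes); each (a, b) pair defines one coarse
    # group spanning fine groups a..b-1.
    return [sum(fluxes[a:b]) for a, b in zip(boundaries[:-1], boundaries[1:])]
```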


Compressing infrared spectrum of exhaust plume by wavelets

HEAT TRANSFER - ASIAN RESEARCH (FORMERLY HEAT TRANSFER-JAPANESE RESEARCH), Issue 2 2010
Yanming Wang
Abstract A study on multivariate calibration for the infrared spectrum of a rocket exhaust plume is presented. As samples in the data set, the apparent infrared radiative properties of the high-temperature plume flowfield, consisting of gas components with variable concentrations, were obtained using a flux method combined with a narrow-band model and Mie theory. The discrete wavelet transform was used as a pre-processing tool to decompose the infrared spectrum and compress the data set. The compressed-data regression model was applied to the simultaneous determination of multi-component concentrations in the exhaust plume. The compression performance of several wavelet functions at different resolution scales was studied, and the prediction reliability of the compressed regression model was investigated. Numerical experiments show that the wavelet transform is an effective compression pre-processing technique in multivariate calibration and enhances the extraction of characteristic features from the exhaust plume infrared spectrum. Using the compressed-data regression model, the reconstructed spectra are almost identical to the original spectrum, while the original size of the data set is reduced to about 5% and the computational time needed decreases significantly. © 2009 Wiley Periodicals, Inc. Heat Trans Asian Res; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/htj.20280
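A minimal sketch of wavelet compression of a sampled spectrum follows, using the Haar wavelet as a stand-in for whichever wavelet family the study selected (the abstract does not name one).

```python
def haar_step(data):
    # One level of the Haar wavelet transform: pairwise averages
    # (approximation coefficients) and pairwise half-differences
    # (detail coefficients).  len(data) must be even.
    a = [(data[2 * i] + data[2 * i + 1]) / 2 for i in range(len(data) // 2)]
    d = [(data[2 * i] - data[2 * i + 1]) / 2 for i in range(len(data) // 2)]
    return a, d

def compress(spectrum, levels):
    # Keep only the coarse approximation after `levels` Haar steps,
    # discarding the detail coefficients: the data set shrinks by a
    # factor 2**levels while the broad shape of the spectrum survives.
    a = list(spectrum)
    for _ in range(levels):
        a, _ = haar_step(a)
    return a
```

Retaining a few of the largest detail coefficients as well, rather than none, would trade a little compression for a more faithful reconstruction.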


A data-driven algorithm for constructing artificial neural network rainfall-runoff models

HYDROLOGICAL PROCESSES, Issue 6 2002
K. P. Sudheer
Abstract A new approach for designing the network structure in an artificial neural network (ANN)-based rainfall-runoff model is presented. The method utilizes statistical properties of the data series, such as the cross-, auto- and partial autocorrelation, to identify a unique input vector that best represents the process for the basin, and a standard algorithm for training. The methodology has been validated using data for a river basin in India. The results of the study are highly promising and indicate that it could significantly reduce the effort and computational time required in developing an ANN model. Copyright © 2002 John Wiley & Sons, Ltd.
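The statistical input-selection idea can be sketched as follows: compute sample autocorrelations of the series and keep the lags outside the usual 95% significance band as candidate ANN inputs. The threshold and function names are assumptions, and the paper also uses cross- and partial autocorrelations, which this sketch omits.

```python
import math

def autocorr(series, lag):
    # Sample autocorrelation of a series at a given lag
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    cov = sum((series[i] - mean) * (series[i + lag] - mean)
              for i in range(n - lag))
    return cov / var

def significant_lags(series, max_lag, threshold=None):
    # Keep lags whose autocorrelation lies outside the approximate 95%
    # significance band ~1.96/sqrt(N); the corresponding lagged values
    # become candidate inputs of the rainfall-runoff ANN.
    if threshold is None:
        threshold = 1.96 / math.sqrt(len(series))
    return [k for k in range(1, max_lag + 1)
            if abs(autocorr(series, k)) > threshold]
```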


Investigation of a modified sequential iteration approach for solving coupled reactive transport problems

INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 2 2006
David J. Z. Chen
Abstract When contaminants enter the soil or groundwater, they may interact physically, geochemically and biochemically with the native water, microorganisms and solid matrix. A realistic description of a reactive transport regime that includes these processes requires joint consideration of multiple chemical species. Currently there are three common numerical approaches for coupling multispecies reaction and solute transport: the one-step approach, the sequential non-iterative approach (SNIA), and the sequential iterative approach (SIA). A modification of the SNIA method is the Strang-splitting method. In this study, a new modified sequential iteration approach (MSIA) for solving multicomponent reactive transport in steady state groundwater flow is presented. This coupling approach has been applied to two realistic reactive transport problems and its performance compared with the SIA and the Strang-splitting methods. The comparison shows that MSIA consistently converges faster than the other two coupling schemes. For the simulation of nitrogen and related species transport and reaction in a riparian aquifer, the total CPU time required by MSIA is only about 38% of the total CPU time required by the SIA, and only 50% of the CPU time required by the Strang-splitting method. The test problem results indicate that the SIA has superior accuracy, while the accuracy of MSIA is marginally better than that of the Strang-splitting method. The overall performance of MSIA is considered to be good, especially for simulations in which computational time is a critical factor. Copyright © 2005 John Wiley & Sons, Ltd.
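The generic coupling loop behind a sequential iterative approach can be sketched as alternating a transport operator and a reaction operator on the same time step until the concentrations stop changing. This is a sketch of the plain SIA idea only; the operators, tolerances and the MSIA modification itself are not specified by the abstract and are assumptions here.

```python
def sia_step(c, transport, reaction, tol=1e-10, max_iter=100):
    # One time step of a sequential iterative coupling: apply the
    # transport operator, then the chemistry operator, and repeat the
    # pair on the same step until successive iterates agree to `tol`.
    for _ in range(max_iter):
        c_new = reaction(transport(c))
        if max(abs(a - b) for a, b in zip(c_new, c)) < tol:
            return c_new
        c = c_new
    return c
```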


A linearized implicit pseudo-spectral method for some model equations: the regularized long wave equations

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 11 2003
K. Djidjeli
Abstract An efficient numerical method is developed for the numerical solution of non-linear wave equations typified by the regularized long wave equation (RLW) and its generalization (GRLW). The method uses a pseudo-spectral (Fourier transform) treatment of the space dependence together with a linearized implicit scheme in time. An important advantage gained from the use of this method is the ability to vary the mesh length, thereby reducing the computational time. Using a linearized stability analysis, it is shown that the proposed method is unconditionally stable. The method is second order in time and all-order in space. The method is presented here for the RLW equation and its generalized form, but it can be applied to a broad class of non-linear long wave equations (Equation (2)), with obvious changes in the various formulae. Test problems, including the simulation of a single soliton and the interaction of solitary waves, are used to validate the method, which is found to be accurate and efficient. The three invariants of the motion are evaluated to determine the conservation properties of the algorithm. Copyright © 2003 John Wiley & Sons, Ltd.
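The core pseudo-spectral operation, differentiation by multiplying Fourier modes by ik, can be sketched as follows. A naive O(n²) DFT is used for readability where the paper would use an FFT; the accuracy is the same, and the 2π-periodic sampling convention is an assumption of this sketch.

```python
import cmath
import math

def spectral_derivative(u):
    # Pseudo-spectral differentiation of samples u_j = u(2*pi*j/n) of a
    # periodic function: transform, multiply mode k by i*k, transform
    # back.  Errors decay faster than any power of 1/n for smooth u,
    # which is the "all-order in space" accuracy claimed above.
    n = len(u)
    U = [sum(u[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n)) / n
         for k in range(n)]
    du = []
    for j in range(n):
        s = 0j
        for k in range(n):
            kk = k - n if k > n // 2 else k   # signed wavenumber
            if 2 * k == n:
                kk = 0                        # drop the Nyquist mode
            s += 1j * kk * U[k] * cmath.exp(2j * math.pi * k * j / n)
        du.append(s.real)
    return du
```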


Multi-scale domain decomposition method for large-scale structural analysis with a zooming technique: Application to plate assembly

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 4 2009
A. Mobasher Amini
Abstract This article is concerned with a multi-scale domain decomposition method (DDM), based on the FETI-DP solver, for large-scale structural elastic analysis, suited to problems that exhibit structural heterogeneities, such as plate assemblies in the presence of structural details. In this approach, once a partition of the global fine mesh into subdomains has been performed (all subdomains initially possess a fine mesh), the fine mesh is preserved only in the zones of interest (those with local phenomena due to discontinuities, holes, etc.) in order to optimize the computational time, while the remaining subdomains are replaced by numerically homogenized coarse elements. The multi-scale aspect is thus introduced by describing subdomains with either a fine- or a coarse-scale mesh. As a result, an extension of the FETI-DP DDM is proposed in this article (called herein FETI-DP micro–macro) that allows the simultaneous use of different discretizations: a fine (microscopic) mesh for subdomains in zones of interest and a coarse (macroscopic or homogenized) mesh for the complementary part of the structure. This strategy raises the problems of determining the stiffness of the coarse subdomains and of the incompatible finite element connection between fine and coarse subdomains. Two approaches (collocation and Mortar) are presented and compared. The article ends with patch tests and some numerical examples in 2D and 3D. The numerical results exemplify the efficiency and capability of the FETI-DP micro–macro approach and reveal that the Mortar approach is more accurate, at constant cost, than the collocation approach. Copyright © 2009 John Wiley & Sons, Ltd.


Dispersion analysis of the meshfree radial point interpolation method for the Helmholtz equation

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 12 2009
Christina Wenterodt
Abstract When numerical methods such as the finite element method (FEM) are used to solve the Helmholtz equation, the solutions suffer from the so-called pollution effect, which leads to inaccurate results, especially for high wave numbers. The main reason for this is that the wave number of the numerical solution disagrees with the wave number of the exact solution, which is known as dispersion. In order to obtain admissible results, a very high element resolution is necessary, with increased computational time and memory requirements as the consequence. In this paper a meshfree method, namely the radial point interpolation method (RPIM), is investigated with respect to the pollution effect in the 2D case. It is shown that this methodology is able to reduce the dispersion significantly. Two modifications of the RPIM, one with polynomial reproduction and another with a problem-dependent sine/cosine basis, are also described and tested. Numerical experiments are carried out to demonstrate the advantages of the method compared with the FEM. For identical discretizations, the RPIM yields considerably better results than the FEM. Copyright © 2008 John Wiley & Sons, Ltd.


Development of a genetic algorithm-based lookup table approach for efficient numerical integration in the method of finite spheres with application to the solution of thin beam and plate problems

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 12 2006
Suleiman BaniHani
Abstract It is observed that for the solution of thin beam and plate problems using the meshfree method of finite spheres, Gaussian and adaptive quadrature schemes are computationally inefficient. In this paper, we develop a novel technique in which the integration points and weights are generated using genetic algorithms and stored in a lookup table using normalized coordinates as part of an offline computational step. During online computations, this lookup table is used much like a table of Gaussian integration points and weights in the finite element computations. This technique offers significant reduction of computational time without sacrificing accuracy. Example problems are solved which demonstrate the effectiveness of the procedure. Copyright © 2006 John Wiley & Sons, Ltd.
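The online lookup-and-apply mechanism can be sketched as below. The standard 2-point Gauss entry in the table is a placeholder standing in for the GA-optimized points and weights the paper would tabulate offline; the table keys and function names are assumptions.

```python
import math

# Offline step (placeholder): a table of integration points and weights
# per rule label, given on the normalized interval [-1, 1].  The paper
# would fill this with GA-optimized rules; a classical 2-point
# Gauss-Legendre rule stands in here.
QUADRATURE_TABLE = {
    "2pt": ([-1 / math.sqrt(3.0), 1 / math.sqrt(3.0)], [1.0, 1.0]),
}

def integrate(f, a, b, rule="2pt"):
    # Online step: fetch the tabulated rule and apply it to [a, b]
    # through an affine map, exactly as one would use a stored table of
    # Gauss points in a finite element code.
    pts, wts = QUADRATURE_TABLE[rule]
    mid, half = 0.5 * (a + b), 0.5 * (b - a)
    return half * sum(w * f(mid + half * x) for x, w in zip(pts, wts))
```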


Forced vibrations in the medium frequency range solved by a partition of unity method with local information

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 9 2005
E. De Bel
Abstract A new approach for the computation of forced vibrations up to the medium frequency range is formulated for thin plates. It is based on the partition of unity method (PUM), first proposed by Babuška, and used here to solve the elastodynamic problem. The paper focuses on the introduction of local information, coming from previous approximations, into the basis of the PUM in order to enhance the accuracy of the solution. The method may be iterated and generates a PUM approximation leading to smaller models than the finite element ones required for the same accuracy level. It shows very promising results in terms of frequency range, accuracy and computational time. Copyright © 2004 John Wiley & Sons, Ltd.


A parallel Galerkin boundary element method for surface radiation and mixed heat transfer calculations in complex 3-D geometries

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 12 2004
X. Cui
Abstract This paper presents a parallel Galerkin boundary element method for the solution of surface radiation exchange problems and its coupling with the finite element method for mixed mode heat transfer computations in general 3-D geometries. The computational algorithm for surface radiation calculations is enhanced with the implementation of ideas used for 3-D computer graphics applications and with data structure management involving creating and updating various element lists optimized for numerical performance. The algorithm for detecting the internal third party blockages of thermal rays is presented, which involves a four-step procedure, i.e. the primary clip, secondary clip and adaptive integration with checking. Case studies of surface radiation and mixed heat transfer in both simple and complex 3-D geometric configurations are presented. It is found that a majority of computational time is spent on the detection of foreign element blockages and parallel computing is ideally suited for surface radiation calculations. Results show that the decrease of the CPU time approaches asymptotically to an inverse rule for parallel computing of surface radiation exchanges. For large-scale computations involving complex 3-D geometries, an iterative procedure is a preferred approach for the coupling of the Galerkin boundary and finite elements for mixed mode heat transfer calculations. Copyright © 2004 John Wiley & Sons, Ltd.


An iterative defect-correction type meshless method for acoustics

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 15 2003
V. Lacroix
Abstract Accurate numerical simulation of acoustic wave propagation is still an open problem, particularly for medium frequencies. We have thus formulated a new numerical method better suited to the acoustical problem: the element-free Galerkin method (EFGM) improved by appropriate basis functions computed by a defect-correction approach. One of the advantages of the EFGM is that the shape functions are customizable: the basis of the approximation can be constructed with terms suited to the problem to be solved. Acoustical problems in a cavity Ω with boundary Γ are governed by the Helmholtz equation completed with appropriate boundary conditions. As the pressure p(x,y) is a complex variable, it can always be expressed as a function of cos φ(x,y) and sin φ(x,y), where φ(x,y) is the phase of the wave at each point (x,y). If the exact distribution φ(x,y) of the phase is known and a meshless basis {1, cos φ(x,y), sin φ(x,y)} is used, then the exact solution of the acoustic problem can be obtained. Obviously, in real-life cases, the distribution of the phase is unknown. The aim of our work is to solve, as a first step, the acoustic problem using a polynomial basis to obtain a first approximation of the pressure field. As a second step, from this field we compute the distribution of the phase φ(x,y) and introduce it into the meshless basis in order to compute a second approximated pressure field. From this second field, a new distribution of the phase is computed in order to obtain a third approximated pressure field, and so on until a convergence criterion, concerning the pressure or the phase, is satisfied. Thus, an iterative defect-correction type meshless method has been developed to compute the pressure field in Ω. This work shows the efficiency of this meshless method in terms of accuracy and computational time. We also compare the performance of this method with the classical finite element method.
Copyright © 2003 John Wiley & Sons, Ltd.


Boundary elements for half-space problems via fundamental solutions: A three-dimensional analysis

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 11 2001
J. Liang
Abstract An efficient solution technique is proposed for the three-dimensional boundary element modelling of half-space problems. The proposed technique uses alternative fundamental solutions of the half-space (Mindlin's solutions for the isotropic case) and full-space (Kelvin's solutions) problems. Three-dimensional infinite boundary elements are frequently employed when the stresses at internal points are required to be evaluated. In contrast to published works, the strongly singular line integrals are avoided in the proposed solution technique, while the discretization of infinite elements is independent of the finite boundary elements. This algorithm also leads to better numerical accuracy while the computational time is reduced. Illustrative numerical examples for typical isotropic and transversely isotropic half-space problems demonstrate the potential applications of the proposed formulations. Incidentally, the results of the illustrative examples also provide a parametric study for the imperfect contact problem. Copyright © 2001 John Wiley & Sons, Ltd.


Influence of reaction mechanisms, grid spacing, and inflow conditions on the numerical simulation of lifted supersonic flames

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 12 2010
P. Gerlinger
Abstract The simulation of supersonic combustion requires finite-rate chemistry because chemical and fluid mechanical time scales may be of the same order of magnitude. The size of the chosen reaction mechanism (number of species and reactions involved) has a strong influence on the computational time and thus should be chosen carefully. This paper investigates several hydrogen/air reaction mechanisms frequently used in supersonic combustion. It is shown that at low flight Mach numbers of a supersonic combustion ramjet (scramjet), some kinetic schemes can cause highly erroneous results. Moreover, extremely fine computational grids are required in the lift-off region of supersonic flames to obtain grid-independent solutions. The fully turbulent Mach 2 combustion experiment of Cheng et al. (Comb. Flame 1994; 99: 157–173) is chosen to investigate the influences of different reaction mechanisms, grid spacing, and inflow conditions (contaminations caused by precombustion). A detailed analysis of the experiment is given and errors of previous simulations are identified. Thus, the paper provides important information for an accurate simulation of the Cheng et al. experiment. The importance of this experiment results from the fact that it is the only supersonic combustion test case where temperature and species fluctuations have been measured simultaneously. Such data are needed for the validation of probability density function methods. Copyright © 2009 John Wiley & Sons, Ltd.


Defining and optimizing algorithms for neighbouring particle identification in SPH fluid simulations

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 6 2008
G. Viccione
Abstract Lagrangian particle methods such as smoothed particle hydrodynamics (SPH) are very demanding in terms of computing time for large domains. Since the numerical integration of the governing equations is only carried out for each particle on a restricted number of neighbouring ones located inside a cut-off radius rc, a substantial part of the computational burden depends on the actual search procedure; it is therefore vital that efficient methods are adopted for such a search. The cut-off radius is indeed much lower than the typical domain's size; hence, the number of neighbouring particles is only a little fraction of the total number. Straightforward determination of which particles are inside the interaction range requires the computation of all pair-wise distances, a procedure whose computational time would be unpractical or totally impossible for large problems. Two main strategies have been developed in the past in order to reduce the unnecessary computation of distances: the first based on dynamically storing each particle's neighbourhood list (Verlet list) and the second based on a framework of fixed cells. The paper presents the results of a numerical sensitivity study on the efficiency of the two procedures as a function of such parameters as the Verlet size and the cell dimensions. An insight is given into the relative computational burden; a discussion of the relative merits of the different approaches is also given and some suggestions are provided on the computational and data structure of the neighbourhood search part of SPH codes. Copyright © 2008 John Wiley & Sons, Ltd.
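A minimal fixed-cell (cell-list) neighbour search of the kind discussed above can be sketched in 2-D as follows; the data layout and function name are illustrative assumptions.

```python
from collections import defaultdict

def cell_list_neighbours(points, rc):
    # Fixed-cell neighbour search in 2-D: bin particles into square
    # cells of side rc, so every neighbour within the cut-off radius of
    # a particle lies in its own cell or one of the 8 adjacent cells.
    # Only those candidates get an actual distance check, avoiding the
    # O(N^2) all-pairs sweep.
    cells = defaultdict(list)
    for idx, (x, y) in enumerate(points):
        cells[(int(x // rc), int(y // rc))].append(idx)
    pairs = set()
    for (cx, cy), members in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for i in members:
                    for j in cells.get((cx + dx, cy + dy), ()):
                        if i < j:
                            xi, yi = points[i]
                            xj, yj = points[j]
                            if (xi - xj) ** 2 + (yi - yj) ** 2 <= rc * rc:
                                pairs.add((i, j))
    return pairs
```

A Verlet list would instead cache, per particle, all candidates within rc plus a skin distance, and rebuild the list only once particles have moved further than the skin, which is the trade-off the paper's sensitivity study quantifies.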


A pseudospectral Fourier method for a 1D incompressible two-fluid model

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 6 2008
H. Holmås
Abstract This paper presents an accurate and efficient pseudospectral (PS) Fourier method for a standard 1D incompressible two-fluid model. To the knowledge of the authors, it is the first PS method developed for the purpose of modelling waves in multiphase pipe flow. Contrary to conventional numerical methods, the PS method combines high accuracy and low computational costs with flexibility in terms of handling higher order derivatives and different types of partial differential equations. In an effort to improve the description of the stratified wavy flow regime, it can thus serve as a valuable tool for testing out new two-fluid model formulations. The main part of the algorithm is based on mathematical reformulations of the governing equations combined with extensive use of fast Fourier transforms. All the linear operations, including differentiations, are performed in Fourier space, whereas the nonlinear computations are performed in physical space. Furthermore, by exploiting the concept of an integrating factor, all linear parts of the problem are integrated analytically. The remaining nonlinear parts are advanced in time using a Runge,Kutta solver with an adaptive time step control. As demonstrated in the results section, these steps in sum yield a very accurate, fast and stable numerical method. A grid refinement analysis is used to compare the spatial convergence with the convergence rates of finite difference (FD) methods of up to order six. It is clear that the exponential convergence of the PS method is by far superior to the algebraic convergence of the FD schemes. Combined with the fact that the scheme is unconditionally linearly stable, the resulting increase in accuracy opens for several orders of magnitude savings in computational time. Finally, simulations of small amplitude, long wavelength sinusoidal waves are presented to illustrate the remarkable ability of the PS method to reproduce the linear stability properties of the two-fluid model. 
Copyright © 2008 John Wiley & Sons, Ltd. [source]
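The integrating-factor idea described in the abstract can be sketched for the linear (diffusive) part of a model problem. The naive O(N²) DFT below stands in for the FFTs the paper uses, and the equation u_t = ν u_xx is only an illustration, not the two-fluid model itself:

```python
import cmath
import math

def dft(u):
    # Naive O(N^2) DFT; the paper uses FFTs, this keeps the sketch dependency-free
    N = len(u)
    return [sum(u[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(U):
    N = len(U)
    return [sum(U[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def wavenumbers(N):
    # Integer wavenumbers on a 2*pi-periodic grid, in FFT ordering
    return [k if k <= N // 2 else k - N for k in range(N)]

def diffuse_spectral(u0, nu, t):
    # The linear (stiff) part u_t = nu * u_xx is integrated analytically in
    # Fourier space: each mode decays as exp(-nu * k^2 * t), so there is no
    # diffusive time-step restriction -- the essence of the integrating factor.
    U = dft(u0)
    return idft([Uk * math.exp(-nu * k * k * t)
                 for Uk, k in zip(U, wavenumbers(len(u0)))])

N, nu, t = 32, 0.1, 0.5
x = [2 * math.pi * n / N for n in range(N)]
u = diffuse_spectral([math.sin(3 * xi) for xi in x], nu, t)
# Compare with the exact solution exp(-nu * 9 * t) * sin(3x)
err = max(abs(ui - math.exp(-nu * 9 * t) * math.sin(3 * xi))
          for ui, xi in zip(u, x))
```

Because the linear part is integrated exactly, the time step of a nonlinear solver layered on top (absent here) is no longer limited by the stiff diffusive modes.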


A 3-D non-hydrostatic pressure model for small amplitude free surface flows

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 6 2006
J. W. Lee
Abstract A three-dimensional, non-hydrostatic pressure, numerical model with k–ε equations for small-amplitude free surface flows is presented. By decomposing the pressure into hydrostatic and non-hydrostatic parts, the numerical model uses an integrated time step with two fractional steps. In the first fractional step the momentum equations are solved without the non-hydrostatic pressure term, using Newton's method in conjunction with the generalized minimal residual (GMRES) method so that most terms can be solved implicitly. This method needs only the product of a Jacobian matrix and a vector rather than the Jacobian matrix itself, limiting the amount of storage and significantly decreasing the overall computational time required. In the second step the pressure Poisson equation is solved iteratively with a preconditioned linear GMRES method. It is shown that preconditioning reduces the central processing unit (CPU) time dramatically. In order to prevent pressure oscillations which may arise in collocated grid arrangements, transformed velocities are defined at cell faces by interpolating velocities at grid nodes. After the new pressure field is obtained, the intermediate velocities, which are calculated from the previous fractional step, are updated. The newly developed model is verified against analytical solutions, published results, and experimental data, with excellent agreement. Copyright © 2005 John Wiley & Sons, Ltd. [source]
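The matrix-free idea of the first fractional step, where Krylov methods such as GMRES need only Jacobian-vector products rather than the Jacobian itself, can be illustrated with a finite-difference directional derivative. The residual F below is a hypothetical stand-in for the discretized momentum equations:

```python
def F(u):
    # Hypothetical nonlinear residual (stand-in for the discretized momentum eqs)
    u0, u1 = u
    return [u0 * u0 + u1, u0 + u1 ** 3]

def jacobian_vector_product(F, u, v, eps=1e-7):
    # Finite-difference directional derivative: J(u) v ~ (F(u + eps*v) - F(u)) / eps.
    # GMRES needs only this product, never the matrix J itself, which is what
    # keeps the storage requirement low.
    Fu = F(u)
    Fp = F([ui + eps * vi for ui, vi in zip(u, v)])
    return [(fp - f) / eps for fp, f in zip(Fp, Fu)]

u, v = [2.0, 1.0], [1.0, -1.0]
Jv = jacobian_vector_product(F, u, v)
# Analytic Jacobian at u: [[2*u0, 1], [1, 3*u1^2]] = [[4, 1], [1, 3]],
# so J v = [4 - 1, 1 - 3] = [3, -2]
```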


Progressive optimization on unstructured grids using multigrid-aided finite-difference sensitivities

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 10-11 2005
L. A. Catalano
Abstract This paper proposes an efficient and robust progressive-optimization procedure, employing cheap, flexible and easy-to-program multigrid-aided finite differences for the computation of the sensitivity derivatives. The approach is combined with an upwind finite-volume method for the Euler and the Navier–Stokes equations on cell-vertex unstructured (triangular) grids, and validated on the inverse design of an airfoil under inviscid (subsonic and transonic) and laminar flow conditions. The methodology turns out to be robust and highly efficient, the converged design optimization being obtained in a computational time equal to that required by 11–17 (depending on the application) multigrid flow analyses on the finest grid. Copyright © 2004 John Wiley & Sons, Ltd. [source]
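A minimal sketch of central finite-difference sensitivity derivatives of the kind used in such a procedure. The design objective `cost` here is a hypothetical smooth function; in the paper each evaluation is a multigrid-accelerated flow analysis:

```python
def fd_sensitivities(cost, alpha, h=1e-6):
    # Central finite differences: dJ/dalpha_i ~ (J(a + h e_i) - J(a - h e_i)) / (2h).
    # Cheap to program but costs two objective evaluations per design variable.
    grads = []
    for i in range(len(alpha)):
        up, dn = list(alpha), list(alpha)
        up[i] += h
        dn[i] -= h
        grads.append((cost(up) - cost(dn)) / (2.0 * h))
    return grads

def cost(a):
    # Hypothetical design objective with known gradient (2*a0, 3)
    return a[0] ** 2 + 3.0 * a[1]

g = fd_sensitivities(cost, [2.0, 1.0])
# Analytic gradient at (2, 1) is (4, 3)
```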


On the influence of numerical schemes and subgrid-stress models on large eddy simulation of turbulent flow past a square cylinder

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 3 2002
A. Nakayama
Abstract The influence of finite-difference schemes and subgrid-stress models on large eddy simulation of turbulent flow around a square-cylinder bluff body at a laboratory Reynolds number has been examined. It is found that the type and order of accuracy of the finite-difference scheme and the subgrid-stress model required for satisfactory results depend on each other, as well as on the grid resolution and the Reynolds number. Using computational grids manageable by workstation-level computers, with which the near-wall region of the separating boundary layer cannot be resolved, central-difference schemes of realistic orders of accuracy, either fully conservative or non-conservative, suffer stability problems. Third-order upwind-biased schemes with the Smagorinsky eddy-viscosity subgrid model can give reasonable results, resolving much of the energy-containing turbulent eddies in the boundary layers and in the wake and representing the subgrid stresses in most parts of the flow. Noticeable improvements can be obtained by using higher-order difference schemes, increasing the grid resolution, and/or implementing a dynamic subgrid-stress model, but each at a cost of increased computational time. For further improvements, the very small-scale eddies near the upstream corners and in the laminar sublayers would need to be resolved, but this would require a substantially larger number of grid points, beyond the range of easily accessible computers. Copyright © 2002 John Wiley & Sons, Ltd. [source]


An implicit velocity decoupling procedure for the incompressible Navier–Stokes equations

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 2 2002
Kyoungyoun Kim
Abstract An efficient numerical method to solve the unsteady incompressible Navier–Stokes equations is developed. A fully implicit time advancement is employed to avoid the Courant–Friedrichs–Lewy restriction, where the Crank–Nicolson discretization is used for both the diffusion and convection terms. Based on a block LU decomposition, velocity–pressure decoupling is achieved in conjunction with the approximate factorization. The main emphasis is placed on the additional decoupling of the intermediate velocity components using only the nth time-step velocity. The temporal second-order accuracy is preserved with the approximate factorization without any modification of boundary conditions. Since the decoupled momentum equations are solved without iteration, the computational time is reduced significantly. The present decoupling method is validated by solving several test cases, in particular the turbulent minimal channel flow unit. Copyright © 2002 John Wiley & Sons, Ltd. [source]
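The benefit of a non-iterative implicit solve can be illustrated on a scalar model problem: one Crank–Nicolson step for 1D diffusion, solved directly with the Thomas algorithm. This is a sketch of the general principle, not the paper's block LU velocity-pressure factorization:

```python
import math

def thomas(a, b, c, d):
    # Direct O(n) tridiagonal solve (a: sub-, b: main, c: super-diagonal)
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def crank_nicolson_step(u, r):
    # One Crank-Nicolson step for u_t = nu*u_xx with r = nu*dt/dx^2 and
    # homogeneous Dirichlet ends; implicit yet solved without iteration
    n = len(u)
    a = [-r / 2.0] * n
    b = [1.0 + r] * n
    c = [-r / 2.0] * n
    d = []
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        d.append((r / 2.0) * left + (1.0 - r) * u[i] + (r / 2.0) * right)
    return thomas(a, b, c, d)

# Interior nodes of [0, pi]; the sine mode should decay roughly like exp(-nu*dt)
n = 31
dx = math.pi / (n + 1)
nu = 1.0
dt = 0.5 * dx * dx
u0 = [math.sin((i + 1) * dx) for i in range(n)]
u1 = crank_nicolson_step(u0, nu * dt / (dx * dx))
```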


Neural Signal Manager: a collection of classical and innovative tools for multi-channel spike train analysis

INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 11 2009
Antonio Novellino
Abstract Recent developments in the neuroengineering field and the widespread use of microelectrode arrays (MEAs) for electrophysiological investigations have made available new approaches for studying the dynamics of dissociated neuronal networks as well as acute/organotypic slices maintained ex vivo. Importantly, the extraction of relevant parameters from these neural populations is likely to involve long-term measurements, lasting from a few hours to entire days. The processing of huge amounts of electrophysiological data, in terms of computational time and automation of the procedures, is actually one of the major bottlenecks for both in vivo and in vitro recordings. In this paper we present a collection of algorithms implemented within a new software package, named the Neural Signal Manager (NSM), aimed at analyzing a huge quantity of data recorded by means of MEAs in a fast and efficient way. The NSM offers different approaches for both spike and burst analysis, and integrates classical statistical algorithms, such as the inter-spike interval histogram or the post-stimulus time histogram, with some recent ones, such as burst detection and its related statistics. In order to show the potential of the software, the application of the developed algorithms to a set of spontaneous activity recordings from dissociated cultures at different ages is presented in the Results section. Copyright © 2008 John Wiley & Sons, Ltd. [source]
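Two of the statistics mentioned, the inter-spike interval histogram and ISI-threshold burst detection, can be sketched in a few lines. The thresholds and bin sizes below are illustrative choices, not the NSM's defaults:

```python
def isi_histogram(spike_times_ms, bin_ms, n_bins):
    # Inter-spike interval histogram from sorted spike timestamps (ms)
    isis = [t1 - t0 for t0, t1 in zip(spike_times_ms, spike_times_ms[1:])]
    hist = [0] * n_bins
    for isi in isis:
        b = int(isi // bin_ms)
        if b < n_bins:
            hist[b] += 1
    return hist

def detect_bursts(spike_times_ms, max_isi_ms=100.0, min_spikes=3):
    # Simple ISI-threshold burst detector: consecutive spikes closer than
    # max_isi_ms belong to one burst; returns (start, end) times of bursts
    # with at least min_spikes spikes.
    bursts, current = [], [spike_times_ms[0]]
    for t0, t1 in zip(spike_times_ms, spike_times_ms[1:]):
        if t1 - t0 <= max_isi_ms:
            current.append(t1)
        else:
            if len(current) >= min_spikes:
                bursts.append((current[0], current[-1]))
            current = [t1]
    if len(current) >= min_spikes:
        bursts.append((current[0], current[-1]))
    return bursts

spikes = [0.0, 10.0, 20.0, 500.0, 510.0, 520.0, 530.0, 2000.0]
bursts = detect_bursts(spikes)          # [(0.0, 20.0), (500.0, 530.0)]
hist = isi_histogram(spikes, 20.0, 5)   # [5, 0, 0, 0, 0]
```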


A simple reactive gasdynamic model for the computation of gas temperature and species concentrations behind reflected shock waves

INTERNATIONAL JOURNAL OF CHEMICAL KINETICS, Issue 4 2008
H. Li
A simple gasdynamic model, called CHEMSHOCK, has been developed to predict the temporal evolution of combustion gas temperature and species concentrations behind reflected shock waves with significant energy release. CHEMSHOCK provides a convenient simulation method for studying combustion mechanisms of various sizes over a wide range of conditions. The model consists of two successive suboperations performed on a control mass during each infinitesimal time step: (1) first the gas mixture is allowed to combust at constant internal energy and volume; (2) then the gas is isentropically expanded (or compressed) at frozen composition to the measured pressure. The CHEMSHOCK model is first validated against results from a one-dimensional reacting computational fluid dynamics (CFD) code for a representative case of a heptane/O2/Ar mixture using a reduced mechanism. CHEMSHOCK is found to accurately reproduce the results of the CFD calculation with significantly reduced computational time. The CHEMSHOCK simulation results are then compared to experimental results, for gas temperature and water vapor concentration, obtained using a novel laser sensor based on fixed-wavelength absorption of two H2O rovibrational transitions near 1.4 µm. Excellent agreement is found between CHEMSHOCK simulations and measurements in a progression of shock wave tests: (1) in H2O/Ar, with no energy release; (2) in H2/O2/Ar, with relatively small energy release; and (3) in heptane/O2/Ar, with large energy release. © 2008 Wiley Periodicals, Inc. Int J Chem Kinet 40: 189–198, 2008 [source]
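The two-suboperation split can be sketched with a toy one-step reaction model. All rate and energy constants below, and the single progress variable `lam`, are illustrative assumptions standing in for CHEMSHOCK's detailed chemistry:

```python
def chemshock_step(T, p, lam, p_meas, dt,
                   gamma=1.3, q_over_cv=2000.0, k_rate=50.0):
    # One time step of the two-suboperation split. `lam` is a toy one-step
    # reaction progress variable; gamma, q_over_cv and k_rate are assumed
    # constants, not a real mechanism.
    # (1) React at constant internal energy and volume: heat release raises T,
    #     and at fixed volume the pressure follows the temperature.
    dlam = k_rate * (1.0 - lam) * dt
    T_new = T + q_over_cv * dlam
    p = p * T_new / T            # ideal gas at constant volume: p proportional to T
    T = T_new
    lam = lam + dlam
    # (2) Isentropic expansion/compression at frozen composition back to the
    #     measured pressure.
    T = T * (p_meas / p) ** ((gamma - 1.0) / gamma)
    return T, p_meas, lam

T, p, lam = 1200.0, 1.5e5, 0.0
for _ in range(200):
    T, p, lam = chemshock_step(T, p, lam, p_meas=1.5e5, dt=1e-3)
```

After the loop the mixture has essentially burned out (`lam` near 1), the temperature has risen through the heat release, and the pressure matches the imposed measured value by construction.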


A theory of tie-set graph and its application to information network management

INTERNATIONAL JOURNAL OF CIRCUIT THEORY AND APPLICATIONS, Issue 4 2001
Norihiko Shinomiya
Abstract This paper presents a new circuit-theoretical concept, based on the principal partition theorem, for distributed network management focusing on the loops of an information network. To realize simple network management with the minimum number of local agents, namely the topological degrees of freedom of a graph, a reduced loop agent graph generated by contracting the minimal principal minor is proposed. To investigate the optimal distribution of the loop agents, a theory of tie-set graphs is proposed. Considering the total processing load of the loop agents, the complexity of a tie-set graph is introduced in order to obtain the simplest tie-set graph, i.e. the one with minimum complexity. As for the search for the simplest tie-set graph, an experimental result shows that the computational time depends heavily on the nullity of the original graph. Therefore, a tie-set graph with the smallest nullity is essential for network management. Copyright © 2001 John Wiley & Sons, Ltd. [source]
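The tie-sets (independent loops) and the nullity that governs the search cost can be computed from a spanning tree. This is a minimal sketch of the standard fundamental-loop construction, not the paper's tie-set graph theory:

```python
from collections import deque

def fundamental_tie_sets(n_nodes, edges):
    # Fundamental loops (tie-sets) of a connected graph: build a BFS spanning
    # tree; every chord (non-tree edge) then closes exactly one tie-set.
    # The number of tie-sets equals the nullity E - V + 1.
    adj = {v: [] for v in range(n_nodes)}
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    parent = {0: None}
    tree_edges = set()
    dq = deque([0])
    while dq:
        u = dq.popleft()
        for v, i in adj[u]:
            if v not in parent:
                parent[v] = (u, i)
                tree_edges.add(i)
                dq.append(v)

    def tree_path(v):
        # Edge indices on the tree path from v up to the root
        path = []
        while parent[v] is not None:
            u, i = parent[v]
            path.append(i)
            v = u
        return path

    tie_sets = []
    for i, (u, v) in enumerate(edges):
        if i not in tree_edges:
            # Chord + symmetric difference of the two root paths = one loop
            loop = (set(tree_path(u)) ^ set(tree_path(v))) | {i}
            tie_sets.append(sorted(loop))
    return tie_sets

# Square with one diagonal: nullity = 5 - 4 + 1 = 2 independent loops
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
loops = fundamental_tie_sets(4, edges)
```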


Thermodynamic optimization of a solar system for cogeneration of water heating and absorption cooling

INTERNATIONAL JOURNAL OF ENERGY RESEARCH, Issue 13 2008
R. Hovsapian
Abstract This paper presents a contribution to understanding the behavior of solar-powered air conditioning and refrigeration systems, with a view to determining how refrigeration rate, mass flows, heat transfer areas, and internal architecture are related. A cogeneration system consisting of a solar concentrator, a cavity-type receiver, a gas burner, and a thermal storage reservoir is devised to simultaneously produce heat (hot water) and cooling (absorption refrigeration system). A simplified mathematical model, which combines fundamental and empirical correlations with principles of classical thermodynamics and mass and heat transfer, is developed. The proposed model is then utilized to simulate numerically the system's transient and steady-state response under different operating and design conditions. A global optimization of the system for maximum performance (or minimum exergy destruction), seeking minimum pull-down and pull-up times and maximum system second-law efficiency, is performed with low computational time. Appropriate dimensionless groups are identified and the results are presented in normalized charts for general application. The numerical results show that the three-way maximized system second-law efficiency, ηII,max,max,max, occurs when three characteristic mass flow rates of the system are optimally selected in general terms as dimensionless heat capacity rates, i.e. (ψss, ψwx, ψHs)opt = (0.335, 0.28, 0.2). The minimum pull-down and pull-up times and maximum second-law efficiencies found with respect to the optimized operating parameters are sharp and therefore important to consider in actual design. As a result, the model is expected to be a useful tool for simulation, design, and optimization of solar energy systems in the context of distributed power generation. Copyright © 2008 John Wiley & Sons, Ltd. [source]


Implementation of the symmetric doped double-gate MOSFET model in Verilog-A for circuit simulation

INTERNATIONAL JOURNAL OF NUMERICAL MODELLING: ELECTRONIC NETWORKS, DEVICES AND FIELDS, Issue 2 2010
Joaquín Alvarado
Abstract Recently we developed a model for symmetric doped double-gate MOSFETs (SDDGM) that, for the first time, considers the doping concentration in the Si film over the complete range from 1×10^14 to 3×10^18 cm^-3. The model covers a wide range of technological parameters and includes short-channel effects. It was validated for different devices using data from simulations as well as measurements on real devices. In this paper, we present the implementation of this model in Verilog-A code, which allows its introduction into commercial simulators. The Verilog-A implementation was optimized to achieve a reduction in computational time as well as good accuracy. Results are compared with data from 2D simulations, showing very good agreement in all transistor operation regions. Copyright © 2009 John Wiley & Sons, Ltd. [source]


Modelling of photonic bandgap devices by the leaky mode propagation method

INTERNATIONAL JOURNAL OF NUMERICAL MODELLING: ELECTRONIC NETWORKS, DEVICES AND FIELDS, Issue 3 2003
Agostino Giorgio
Abstract Main modelling approaches used for investigating the Photonic bandgap (PBG) devices are reviewed. In particular, the model based on Leaky Mode Propagation (LMP) method is described. A complete analysis of the propagation characteristics, including the determination of modal propagation constants, electromagnetic field harmonics and total field distribution, transmission and reflection coefficients, total forward and backward power flow in the structure, guided and radiated power, and total losses, can be carried out by a computer program based on the LMP approach. The numerical results have been validated by comparisons with those obtained by using other more complex and expensive models. The new model shows some significant advantages in terms of very low computational time, absence of any a priori theoretical assumptions and approximations, capability of simulating the actual physical behaviour of the device and fast determination of the bandgap position.Copyright © 2003 John Wiley & Sons, Ltd. [source]