High Computational Cost


Selected Abstracts


Adaptive Zooming in Web Cartography

COMPUTER GRAPHICS FORUM, Issue 4 2002
Alessandro Cecconi
Abstract Beyond any doubt, many of the current web mapping and web GIS applications lack cartographic quality. The reasons are not only the technical limitations related to Internet delivery, but also the neglect of one of the main cartographic principles of digital mapping, namely adaptive zooming. Adaptive zooming describes the adjustment of a map, its contents and its symbolization to the target scale as a consequence of a zooming operation. The approach described in this paper proposes the combination of two commonly known concepts: on the one hand, levels of detail (LoD) for those object classes that require high computational cost for the automated generalization process (e.g. buildings, road network); on the other hand, on-the-fly generalization for those object classes which can be generalized by less complex methods and algorithms (e.g. rivers, lakes). Realizing such an interactive and dynamic concept for web mapping requires the use of vector-based visualization tools. The data format best meeting the criteria is the W3C standard Scalable Vector Graphics (SVG). Thus, it has been used to implement the presented ideas in a prototype application for topographic web mapping based on the landscape model VECTOR25 of the Swiss Federal Office of Topography. [source]
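
The two-track concept in this abstract lends itself to a small illustration. The Python sketch below shows one way a viewer could assemble layers after a zoom: expensive classes (buildings, roads) draw from precomputed LoDs, while cheap classes (rivers, lakes) are simplified on the fly with a scale-dependent tolerance. All file names, class names, and the 0.25 mm tolerance rule are illustrative assumptions, not details from the paper or from VECTOR25.

```python
# Illustrative sketch (not the paper's implementation) of adaptive zooming:
# precomputed LoDs for expensive object classes, on-the-fly simplification
# for cheap ones. Every name and number here is hypothetical.

# LoDs keyed by the scale denominator they were generalized for.
PRECOMPUTED_LOD = {
    "buildings": {25_000: "buildings_lod25k.svg",
                  50_000: "buildings_lod50k.svg",
                  200_000: "buildings_lod200k.svg"},
    "roads":     {25_000: "roads_lod25k.svg",
                  100_000: "roads_lod100k.svg"},
}

# Object classes that are cheap enough to generalize at request time.
ON_THE_FLY = {"rivers", "lakes"}


def simplification_tolerance(scale_denominator, mm_on_map=0.25):
    """Ground tolerance (metres) for line simplification at the target scale."""
    return mm_on_map / 1000.0 * scale_denominator


def layers_for_scale(scale_denominator):
    """Assemble the layers to render after a zoom operation."""
    layers = []
    for cls, lods in PRECOMPUTED_LOD.items():
        # pick the finest LoD that is not more detailed than the target scale allows
        suitable = [s for s in sorted(lods) if s >= scale_denominator]
        chosen = suitable[0] if suitable else max(lods)
        layers.append((cls, "lod", lods[chosen]))
    for cls in ON_THE_FLY:
        layers.append((cls, "simplify", simplification_tolerance(scale_denominator)))
    return layers


if __name__ == "__main__":
    for layer in layers_for_scale(60_000):
        print(layer)
```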


Restoration of degraded moving image for predicting a moving object

ELECTRONICS & COMMUNICATIONS IN JAPAN, Issue 2 2009
Kei Akiyama
Abstract Iterative optimal calculation methods have been proposed for degraded static image restoration based on multiresolution wavelet decomposition. However, it is quite difficult to apply these methods to moving images due to the high computational cost. In this paper, we propose an effective restoration method for degraded moving images by modeling the motion of the moving object and predicting the future object position. We verified our method by computer simulations and experiments to show that it achieves favorable results. © 2009 Wiley Periodicals, Inc. Electron Comm Jpn, 92(2): 38–48, 2009; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecj.10013 [source]
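
As a rough illustration of what "modeling the motion of the moving object and predicting the future object position" can look like in the simplest case, the sketch below extrapolates the object's next position with a constant-velocity model. The actual motion model used by the authors is not given here; the transition matrix and frame interval are assumptions for illustration only.

```python
# Minimal sketch of a motion predictor: constant-velocity extrapolation of the
# object centroid to the next frame. This is an illustrative assumption, not
# the authors' actual model.
import numpy as np

dt = 1.0  # frame interval
# State x = [px, py, vx, vy]; constant-velocity transition matrix.
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)

def predict_next_position(track):
    """Fit a velocity from the last two observed centroids and extrapolate."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    state = np.array([x1, y1, (x1 - x0) / dt, (y1 - y0) / dt])
    nxt = F @ state
    return nxt[0], nxt[1]

print(predict_next_position([(10.0, 5.0), (12.0, 6.5)]))  # -> (14.0, 8.0)
```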


FOIST: Fluid–object interaction subcomputation technique

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 9 2009
V. Udoewa
Abstract Our target is to develop computational techniques for studying aerodynamic interactions between multiple objects. The computational challenge is to predict the dynamic behavior and path of the object, so that separation (the process of objects falling or moving away from each other) is safe and effective. This is a very complex problem because it has an unsteady, 3D nature and requires the solution of the complex equations that govern the fluid dynamics (FD) of the object and the aircraft together, with their relative positions changing in time. Large-scale 3D FD simulations come at a high computational cost: not only must one solve the time-dependent Navier–Stokes equations governing the fluid flow, but one must also handle the equations of motion of the object as well as the treatment of the moving domain, usually treated as a type of pseudo-solid. These costs include mesh update methods, distortion-limiting techniques, and remeshing and projection tactics. To save computational cost, point force calculations have been performed in the past. This paper presents a hybrid between full mesh-moving simulations and the point force calculation. This mesh-moving alternative is called FOIST: fluid–object interaction subcomputation technique. Copyright © 2009 John Wiley & Sons, Ltd. [source]
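
The point-force idea that FOIST hybridizes with full mesh-moving simulation can be sketched as follows: the separating object's trajectory is advanced by integrating its rigid-body equations of motion under gravity plus a single resultant aerodynamic force. The drag model, constants, and release conditions below are illustrative assumptions, not the paper's method.

```python
# Hedged sketch of a point-force trajectory calculation (the cheap alternative
# the abstract contrasts with mesh-moving simulation). All constants and the
# quadratic drag model are illustrative assumptions.
import numpy as np

m = 250.0                                      # object mass [kg]
g = np.array([0.0, 0.0, -9.81])                # gravity [m/s^2]
rho, Cd, A = 1.2, 0.3, 0.05                    # air density, drag coeff., ref. area

def aero_force(v_rel):
    """Point aerodynamic force: quadratic drag opposing the relative velocity."""
    speed = np.linalg.norm(v_rel)
    if speed == 0.0:
        return np.zeros(3)
    return -0.5 * rho * Cd * A * speed * v_rel

def step(x, v, v_air, dt=1e-2):
    """One explicit-Euler step of the object's translational motion."""
    a = g + aero_force(v - v_air) / m
    return x + dt * v, v + dt * a

# Object released at aircraft speed into a uniform free stream.
x, v = np.zeros(3), np.array([200.0, 0.0, 0.0])
for _ in range(100):                           # 1 s of flight
    x, v = step(x, v, v_air=np.array([200.0, 0.0, 0.0]))
print("position:", x, "velocity:", v)
```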


A unified formulation for continuum mechanics applied to fluid–structure interaction in flexible tubes

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 12 2005
C. J. Greenshields
Abstract This paper outlines the development of a new procedure for analysing continuum mechanics problems, with a particular focus on fluid–structure interaction in flexible tubes. A review of current methods of fluid–structure coupling highlights common limitations of high computational cost and solution instability. It is proposed that these limitations can be overcome by an alternative approach in which both fluid and solid components are solved within a single discretized continuum domain. A single system of momentum and continuity equations is therefore derived that governs both fluids and solids, and it is solved on a single mesh using finite volume discretization schemes. The method is validated first by simulating dynamic oscillation of a clamped elastic beam. It is then applied to study the case of interest, wave propagation in highly flexible tubes, in which a predicted wave speed of 8.58 m/s falls within 2% of an approximate analytical solution. The method shows further good agreement with analytical solutions for tubes of increasing rigidity, covering a range of wave speeds from those found in arteries to that in the undisturbed fluid. Copyright © 2005 John Wiley & Sons, Ltd. [source]
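
Schematically, a single-domain formulation of this kind can be written with one momentum equation and one continuity equation holding everywhere, with only the constitutive law for the stress switching between fluid and solid regions. The equations below are a generic sketch of that idea, not the paper's exact formulation or notation.

```latex
% Schematic single-domain formulation (illustrative notation, not the paper's):
% one momentum and one continuity equation, with the stress closure selecting
% viscous (fluid) or elastic (solid) behaviour.
\begin{align}
  \frac{\partial (\rho \mathbf{u})}{\partial t}
    + \nabla \cdot (\rho\, \mathbf{u} \otimes \mathbf{u})
    &= \nabla \cdot \boldsymbol{\sigma} + \rho\, \mathbf{g}, \\
  \frac{\partial \rho}{\partial t} + \nabla \cdot (\rho\, \mathbf{u}) &= 0, \\
  \boldsymbol{\sigma} &=
  \begin{cases}
    -p\,\mathbf{I} + 2\mu\, \mathbf{D}(\mathbf{u}) & \text{in the fluid},\\[2pt]
    2G\,\boldsymbol{\varepsilon} + \lambda\, \operatorname{tr}(\boldsymbol{\varepsilon})\,\mathbf{I} & \text{in the solid},
  \end{cases}
\end{align}
% where D(u) is the rate-of-strain tensor and eps the accumulated strain.
```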


Adaptive moving mesh methods for simulating one-dimensional groundwater problems with sharp moving fronts

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 11 2002
Weizhang Huang
Abstract Accurate modelling of groundwater flow and transport with sharp moving fronts often involves high computational cost when a fixed or uniform mesh is used. In this paper, we investigate the modelling of groundwater problems using a particular adaptive mesh method called the moving mesh partial differential equation approach. With this approach, the mesh is dynamically relocated through a partial differential equation to capture the evolving sharp fronts with a relatively small number of grid points. The mesh movement and physical system modelling are realized by solving the mesh movement and physical partial differential equations alternately. The method is applied to the modelling of a range of groundwater problems, including advection-dominated chemical transport and reaction, non-linear infiltration in soil, and the coupling of density-dependent flow and transport. Numerical results demonstrate that sharp moving fronts can be accurately and efficiently captured by the moving mesh approach. Also addressed are important implementation strategies, e.g. the construction of the monitor function based on the interpolation error, control of mesh concentration, and two-layer mesh movement. Copyright © 2002 John Wiley & Sons, Ltd. [source]
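
The relocation idea, grid points migrating toward the front so that a monitor function is equidistributed over the cells, can be illustrated in one dimension. The sketch below applies a generic arc-length-type monitor and a direct equidistribution step; it is not the paper's moving mesh PDE (MMPDE) solver, which relocates the mesh by solving a mesh PDE alternately with the physical equations.

```python
# Illustrative 1-D equidistribution step: cluster mesh points where an
# arc-length-type monitor M(x) = sqrt(1 + (du/dx)^2) is large (i.e. at steep
# fronts). Generic sketch only; not the paper's MMPDE approach.
import numpy as np

def equidistribute(x, u):
    """Return a new mesh whose cells carry equal shares of the monitor integral."""
    M = np.sqrt(1.0 + np.gradient(u, x) ** 2)           # monitor at the nodes
    cell_M = 0.5 * (M[:-1] + M[1:]) * np.diff(x)        # monitor integral per cell
    cum = np.concatenate(([0.0], np.cumsum(cell_M)))    # cumulative monitor
    targets = np.linspace(0.0, cum[-1], len(x))         # equal shares
    return np.interp(targets, cum, x)                   # invert the cumulative map

# A sharp front (e.g. an infiltration profile) resolved on a uniform mesh:
x = np.linspace(0.0, 1.0, 41)
u = np.tanh(50.0 * (x - 0.5))
x_new = equidistribute(x, u)
print(np.round(x_new[18:23], 3))   # nodes crowd around the front at x = 0.5
```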


Studies of molecular docking between fibroblast growth factor and heparin using generalized simulated annealing

INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, Issue 13 2008
Samuel Silva da Rocha Pita
Abstract Since the mid-1970s, the main molecular docking problem has been the limited ability to treat adequately the degrees of freedom of the protein (or receptor), owing to the roughness of the energy landscape and the high computational cost. Until recently, only a few algorithms had been developed that treat both ligand and receptor as simultaneously flexible at low computational cost. Generalized simulated annealing (GSA), a recently proposed method from statistical mechanics, has been employed in diverse works on global optimization problems. In this work, we used this method to explore the molecular docking problem, taking the FGF-2–heparin complex into account. Since the requirements of an efficient docking algorithm are accuracy and speed, we tested the influence of the GSA parameters qA (new-configuration acceptance index), qV (energy-surface visiting index), and qT (temperature-decrease control) on the performance of the GSADOCK program. Our simulations showed that as the temperature parameter qT increases, the qA parameter follows this behavior in the interval ranging from 1.1 to 2.3. We found that the GSA parameters perform best for qA values ranging from 1.1 to 1.3, qV values from 1.3 to 1.5, and qT values from 1.1 to 1.7. Most of the good qV values were equal or close to the good qT values. Finally, the implemented algorithm is trustworthy and can be employed as a molecular modeling tool. The final version of the program will be free of charge and will be accessible at our home page, or it can be requested from the authors by e-mail. © 2008 Wiley Periodicals, Inc. Int J Quantum Chem, 2008 [source]
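
For readers unfamiliar with the role of qA and qT, the sketch below shows one commonly quoted Tsallis-type temperature schedule and generalized acceptance rule from the GSA literature. The exact expressions, their coupling to qV, and the schedules implemented in GSADOCK may differ; the numbers below are for illustration only.

```python
# Hedged sketch of two GSA ingredients controlled by qT and qA: a Tsallis-type
# temperature schedule and a generalized Metropolis acceptance rule. These are
# commonly cited forms, not necessarily the ones used in GSADOCK.
import numpy as np

def gsa_temperature(t, q_T, T1=100.0):
    """Temperature after t annealing steps (t >= 1, q_T > 1); larger q_T cools faster."""
    return T1 * (2.0 ** (q_T - 1.0) - 1.0) / ((1.0 + t) ** (q_T - 1.0) - 1.0)

def gsa_acceptance(delta_E, T, q_A):
    """Probability of accepting an uphill move of size delta_E at temperature T."""
    if delta_E <= 0.0:
        return 1.0
    base = 1.0 + (q_A - 1.0) * delta_E / T
    return 0.0 if base <= 0.0 else base ** (-1.0 / (q_A - 1.0))

for t in (1, 10, 100, 1000):
    T = gsa_temperature(t, q_T=1.5)
    p = gsa_acceptance(5.0, T, q_A=1.2)
    print(f"step {t:4d}: T = {T:7.2f}, P(accept dE=5) = {p:.3f}")
```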


Recent advances of neural network-based EM-CAD

INTERNATIONAL JOURNAL OF RF AND MICROWAVE COMPUTER-AIDED ENGINEERING, Issue 5 2010
Humayun Kabir
Abstract In this article, we provide an overview of recent advances in computer-aided design techniques using neural networks for electromagnetic (EM) modeling and design applications. A summary of various recent neural network modeling techniques is given, including passive component modeling and design and optimization using the models. Training data for the models are generated from EM simulations. The trained neural networks become fast and accurate models of EM structures. The models are then incorporated into various optimization methods and commercially available circuit simulators for fast design and optimization. We also provide an overview of a recently developed neural network inverse modeling technique. Training a neural network inverse model directly may become difficult due to the nonuniqueness of the input–output relationship in the inverse model. Training data containing multivalued solutions are divided into groups according to derivative information. Multiple inverse submodels are built based on the divided data groups and are then combined to form a complete model. A comparison between the conventional EM-based design approach and the inverse design approach is also discussed. These computer-aided design techniques using neural models provide circuit-level simulation speed with EM-level accuracy, avoiding the high computational cost of EM simulation. © 2010 Wiley Periodicals, Inc. Int J RF and Microwave CAE, 2010. [source]
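
The forward-modeling workflow described above (generate training data with an EM solver, fit a neural network, then reuse it as a fast surrogate inside an optimizer) can be illustrated generically. In the sketch below the "EM solver", the geometry parameters, and the network size are placeholders, not the structures or models from the article.

```python
# Generic illustration of neural-network EM surrogate modeling. The synthetic
# "EM solver", parameter ranges, and target quantity are placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

def em_simulation(geom):
    """Stand-in for a slow full-wave solver: geometry -> resonant frequency (GHz)."""
    length, gap = geom[:, 0], geom[:, 1]
    return 30.0 / length + 0.5 * gap   # purely synthetic relationship

# Training data from (expensive) EM simulations sampled over the design space.
X = rng.uniform([2.0, 0.1], [10.0, 1.0], size=(400, 2))   # length, gap in mm
y = em_simulation(X)

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
model.fit(scaler.transform(X), y)

# The trained surrogate is then cheap enough to call inside optimization loops.
test = np.array([[5.0, 0.4]])
print("surrogate:", model.predict(scaler.transform(test))[0],
      " solver:", em_simulation(test)[0])
```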


Recursive estimation in constrained nonlinear dynamical systems

AICHE JOURNAL, Issue 3 2005
Pramod Vachhani
In any modern chemical plant or refinery, process operation and the quality of product depend on the reliability of data used for process monitoring and control. The task of improving the quality of data to be consistent with material and energy balances is called reconciliation. Because chemical processes often operate dynamically in nonlinear regimes, techniques such as the extended Kalman filter (EKF) and nonlinear dynamic data reconciliation (NDDR) have been developed for reconciliation. There are various issues that arise with the use of either of these techniques. EKF cannot handle inequality or equality constraints, whereas NDDR has a high computational cost. Therefore, a more efficient and robust method is required for reconciling process measurements and estimating parameters involved in nonlinear dynamic processes. Two solution techniques are presented: recursive nonlinear dynamic data reconciliation (RNDDR) and a combined predictor–corrector optimization (CPCO) method for efficient state and parameter estimation in nonlinear systems. The proposed approaches combine the efficiency of EKF and the ability of NDDR to handle algebraic inequality and equality constraints. Moreover, the CPCO technique allows deterministic parameter variation, thus relaxing another restriction of EKF where the parameter changes are modeled through a discrete stochastic equation. The proposed techniques are compared against the EKF and the NDDR formulations through simulation studies on a continuous stirred tank reactor and a polymerization reactor. In general, the RNDDR performs as well as the two traditional approaches, whereas the CPCO formulation provides more accurate results than RNDDR at a marginal increase in computational cost. © 2005 American Institute of Chemical Engineers AIChE J, 51: 946–959, 2005 [source]
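
One simple way to see how algebraic constraints can be respected where a plain EKF cannot is to project an unconstrained filter estimate onto the feasible region by solving a small constrained least-squares problem, as sketched below. The RNDDR and CPCO formulations in the paper are more elaborate; the state, covariance, and constraints here are invented purely for illustration.

```python
# Minimal sketch of constraint handling in recursive estimation: project an
# unconstrained estimate (e.g. from an EKF update) onto the feasible region.
# Illustrative example only; not the RNDDR or CPCO formulation itself.
import numpy as np
from scipy.optimize import minimize

x_hat = np.array([1.2, -0.3])          # unconstrained state estimate
P = np.diag([0.04, 0.09])              # its covariance
P_inv = np.linalg.inv(P)

# Minimize the covariance-weighted distance to the unconstrained estimate.
objective = lambda x: (x - x_hat) @ P_inv @ (x - x_hat)

constraints = [
    {"type": "eq", "fun": lambda x: x[0] + x[1] - 1.0},  # e.g. mole fractions sum to 1
]
bounds = [(0.0, None), (0.0, None)]                       # e.g. non-negative concentrations

res = minimize(objective, x_hat, bounds=bounds,
               constraints=constraints, method="SLSQP")
print("constrained estimate:", res.x)
```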


Reliability-based design optimization with equality constraints

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 11 2007
Xiaoping Du
Abstract Equality constraints have been well studied and widely used in deterministic optimization, but they have rarely been addressed in reliability-based design optimization (RBDO). The inclusion of an equality constraint in RBDO results in dependency among random variables. Theoretically, given an equality constraint, one random variable can be expressed in terms of the remaining random variables and the equality constraint can then be eliminated. However, in practice, eliminating an equality constraint may be difficult or impossible because of complexities such as coupling, recursion, high dimensionality, non-linearity, implicit formats, and high computational costs. The objective of this work is to develop a methodology to model equality constraints and a numerical procedure to solve an RBDO problem with equality constraints. Equality constraints are classified into demand-based and physics-based types. A sequential optimization and reliability analysis strategy is used to solve RBDO with physics-based equality constraints. The first-order reliability method is employed for reliability analysis. The proposed method is illustrated by a mathematical example and a two-member frame design problem. Copyright © 2007 John Wiley & Sons, Ltd. [source]
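
The first-order reliability method mentioned above reduces, in standard normal space, to finding the most probable point: the point on the limit state g(u) = 0 closest to the origin, whose distance is the reliability index beta. The sketch below solves that problem for an arbitrary example limit state, which is not the paper's design problem.

```python
# Hedged sketch of the FORM step used inside RBDO: find the most probable
# point (MPP) on g(u) = 0 in standard normal space. The limit state below is
# an invented example.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def g(u):
    """Example limit state in standard normal space (failure when g <= 0)."""
    return 3.0 - u[0] - 0.5 * u[1] ** 2

# beta = min ||u|| subject to g(u) = 0.
res = minimize(lambda u: np.dot(u, u), x0=np.array([0.5, 0.5]),
               constraints=[{"type": "eq", "fun": g}], method="SLSQP")
beta = np.linalg.norm(res.x)
print(f"MPP = {res.x}, beta = {beta:.3f}, Pf ~ {norm.cdf(-beta):.2e}")
```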


Fast liquid composite molding simulation of unsaturated flow in dual-scale fiber mats using the imbibition characteristics of a fabric-based unit cell

POLYMER COMPOSITES, Issue 10 2010
Hua Tan
The use of dual-scale fiber mats in the liquid composite molding (LCM) process for making composite parts gives rise to unsaturated flow during the mold-filling process. The usual approaches for modeling such flows involve using a sink term in the mass balance equation along with Darcy's law. Sink functions involving complex microflows inside tows with realistic tow geometries have not been attempted in the past because of the high computational costs arising from the coupling of the macroscopic gap flows with the microscopic tow flows. In this study, a new "lumped" sink function is proposed for the isothermal flow simulation, which is a function of the gap pressure, capillary pressure, and tow saturation, and which is estimated without solving the microscopic tow simulations at each node of the FE mesh in the finite element/control volume algorithm. The sink function is calibrated with the help of a tow microflow simulation in a stand-alone unit cell of the dual-scale fiber mat. This new approach, which does not use any fitting parameters, was validated well against a previously published result on 1D unsaturated flow in a biaxial stitched mat; satisfactory comparisons of the inlet-pressure history as well as the saturation distributions were achieved. Finally, the unsaturated flow is studied in a car-hood-type LCM mold geometry using the code PORE-FLOW© based on the proposed algorithm. POLYM. COMPOS., 31: 1790–1807, 2010. © 2010 Society of Plastics Engineers. [source]
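
The sink-term formulation referred to in the opening sentences can be written schematically as Darcy flow in the inter-tow gaps with a sink S that drains resin into the partially saturated tows; the "lumped" contribution of the paper is the calibrated form of S. The notation below is illustrative and not taken from the paper.

```latex
% Schematic of a sink-term formulation for dual-scale flow (illustrative
% notation): Darcy flow in the gaps, with a sink S absorbing resin into tows.
\begin{align}
  \nabla \cdot \left( \frac{\mathbf{K}_{\mathrm{gap}}}{\mu}\, \nabla P \right) &= S, \\
  S &= f\!\left(P,\; P_c,\; S_{\mathrm{tow}}\right),
\end{align}
% where P is the gap pressure, P_c the capillary pressure inside the tows, and
% S_tow the tow saturation. In the paper, the lumped form of f is calibrated
% against a unit-cell microflow simulation instead of being solved at every
% node of the macroscopic mesh.
```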