Data Structure (data + structure)

Selected Abstracts


BqR-Tree: A Data Structure for Flights and Walkthroughs in Urban Scenes with Mobile Elements

COMPUTER GRAPHICS FORUM, Issue 6 2010
J.L. Pina
I.3.6 [Computer Graphics]: Graphics data structures and data types Abstract The BqR-Tree, the data structure presented in this paper, is an improved R-tree based on a quadtree spatial partitioning, which improves the rendering speed of the usual R-trees when view culling is implemented, especially in urban scenes. The city is split by means of a spatial quadtree partition, and the block is adopted as the basic urban unit. One advantage of blocks is that they can be easily identified in any urban environment, regardless of the origins and structure of the input data. The aim of the structure is to accelerate the visualization of complex scenes containing not only static but also dynamic elements. The usefulness of the structure has been tested with low-structured data, which makes it applicable to almost all city data. The results of the tests show that when the BqR-Tree structure is used to perform walkthroughs and flights, rendering times vastly improve in comparison with the data structures that have yielded the best results to date, with average improvements of around 30%. [source]
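
The quadtree-partition idea lends itself to a compact illustration: city blocks are inserted into a quadtree by their bounding boxes, and each frame renders only the blocks whose boxes overlap the view region. The following is a minimal, generic sketch with invented class names and node capacity; it omits the R-tree layer and the mobile-element handling of the actual BqR-Tree.

```python
# Generic quadtree over axis-aligned block bounding boxes, with rectangular
# view culling. Illustrative only; not the BqR-Tree implementation.

def intersects(a, b):
    """Overlap test for two (xmin, ymin, xmax, ymax) rectangles."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

class QuadTree:
    CAPACITY = 8  # split a node once it holds more blocks than this (invented)

    def __init__(self, bounds, depth=0, max_depth=10):
        self.bounds, self.depth, self.max_depth = bounds, depth, max_depth
        self.blocks = []        # (bbox, block_id) pairs stored at this node
        self.children = []

    def insert(self, bbox, block_id):
        if self.children:
            for child in self.children:
                if intersects(child.bounds, bbox):
                    child.insert(bbox, block_id)  # spanning blocks go to each quadrant
            return
        self.blocks.append((bbox, block_id))
        if len(self.blocks) > self.CAPACITY and self.depth < self.max_depth:
            x0, y0, x1, y1 = self.bounds
            xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
            self.children = [QuadTree(q, self.depth + 1, self.max_depth)
                             for q in ((x0, y0, xm, ym), (xm, y0, x1, ym),
                                       (x0, ym, xm, y1), (xm, ym, x1, y1))]
            pending, self.blocks = self.blocks, []
            for bb, bid in pending:
                self.insert(bb, bid)

    def visible(self, view, out=None):
        """IDs of blocks overlapping the view rectangle (a set, because a block
        spanning several quadrants is stored more than once)."""
        out = set() if out is None else out
        if intersects(self.bounds, view):
            out.update(bid for bb, bid in self.blocks if intersects(bb, view))
            for child in self.children:
                child.visible(view, out)
        return out

city = QuadTree((0.0, 0.0, 1000.0, 1000.0))
for k in range(500):
    x, y = (k * 37) % 1000, (k * 91) % 1000
    city.insert((x, y, x + 8.0, y + 8.0), block_id=k)
print(len(city.visible((100, 100, 300, 300))))  # blocks to render this frame
```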


Data structures in Java for matrix computations

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 8 2004
Geir Gundersen
Abstract In this paper we show how to utilize Java's native arrays for matrix computations. The disadvantages of using Java arrays as 2D arrays for dense matrix computations are discussed, and ways to improve the performance are examined. We show how to create efficient dynamic data structures for sparse matrix computations using Java's native arrays. This data structure is unique to Java and is shown to be more dynamic and efficient than the traditional storage schemes for large sparse matrices. Numerical testing indicates that this new data structure, called Java Sparse Array, is competitive with the traditional Compressed Row Storage scheme on matrix computation routines. Java gives increased flexibility without losing efficiency. Compared with other object-oriented data structures, Java Sparse Array is shown to have the same flexibility. Copyright © 2004 John Wiley & Sons, Ltd. [source]
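
The structural contrast between the two schemes is easy to show in code: Compressed Row Storage packs all rows into three shared flat arrays, whereas the Java Sparse Array layout gives every row its own index and value arrays (in Java, int[][] and double[][]), so a single row can grow or be swapped without touching the others. Below is a rough Python mirror of the two layouts with illustrative names; the paper's implementation is in Java.

```python
import numpy as np

# Compressed Row Storage: three flat arrays shared by all rows.
class CRSMatrix:
    def __init__(self, n, row_ptr, col_idx, values):
        self.n = n
        self.row_ptr = row_ptr   # length n+1; row i occupies [row_ptr[i], row_ptr[i+1])
        self.col_idx = col_idx
        self.values = values

    def matvec(self, x):
        y = np.zeros(self.n)
        for i in range(self.n):
            lo, hi = self.row_ptr[i], self.row_ptr[i + 1]
            y[i] = np.dot(self.values[lo:hi], x[self.col_idx[lo:hi]])
        return y

# Java Sparse Array style: one index array and one value array per row,
# so inserting into row i touches only that row's two arrays.
class JSAMatrix:
    def __init__(self, n):
        self.n = n
        self.cols = [np.empty(0, dtype=int) for _ in range(n)]
        self.vals = [np.empty(0) for _ in range(n)]

    def set(self, i, j, v):
        self.cols[i] = np.append(self.cols[i], j)
        self.vals[i] = np.append(self.vals[i], v)

    def matvec(self, x):
        y = np.zeros(self.n)
        for i in range(self.n):
            y[i] = np.dot(self.vals[i], x[self.cols[i]])
        return y

A = JSAMatrix(4)
for i, j, v in [(0, 0, 2.0), (1, 1, 3.0), (2, 0, 1.0), (2, 2, 4.0), (3, 3, 1.0)]:
    A.set(i, j, v)
print(A.matvec(np.ones(4)))   # [2. 3. 5. 1.]
```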


Out-of-Core and Dynamic Programming for Data Distribution on a Volume Visualization Cluster

COMPUTER GRAPHICS FORUM, Issue 1 2009
S. Frank
I.3.2 [Computer Graphics]: Distributed/network graphics; C.2.4 [Distributed Systems]: Distributed applications Abstract Ray-directed volume-rendering algorithms are well suited to parallel implementation in a distributed cluster environment. For distributed ray casting, the scene must be partitioned between nodes for good load balancing, and a strict view-dependent priority order is required for image composition. In this paper, we define the load-balanced network distribution (LBND) problem and map it to the NP-complete precedence-constrained job-shop scheduling problem. We introduce a kd-tree solution and a dynamic programming solution. To process a massive data set, either a parallel or an out-of-core approach is required. Parallel preprocessing is performed by render nodes on data that are allocated using a static data structure. Volumetric data sets often contain a large portion of voxels that will never be rendered, i.e. empty space, and parallel preprocessing fails to take advantage of this. Our slab-projection slice, introduced in this paper, tracks empty space across consecutive slices of data to reduce the amount of data distributed and rendered. It is used to facilitate out-of-core bricking and kd-tree partitioning. Load balancing using each of our approaches is compared with traditional methods using several segmented regions of the Visible Korean data set. [source]


A Hierarchical Topology-Based Model for Handling Complex Indoor Scenes

COMPUTER GRAPHICS FORUM, Issue 2 2006
D. Fradin
Abstract This paper presents a topology-based representation dedicated to complex indoor scenes. It accounts for memory management and performance during modelling, visualization and lighting simulation. We propose to enlarge a topological model (called generalized maps) with multipartition and hierarchy. Multipartition allows the user to group objects together according to semantics. Hierarchy provides a coarse-to-fine description of the environment. The topological model we propose has been used for devising a modeller prototype and generating efficient data structures in the context of visualization, global illumination and 1 GHz wave propagation simulation. We presently handle buildings composed of up to one billion triangles. [source]


Incremental Updates for Rapid Glossy Global Illumination

COMPUTER GRAPHICS FORUM, Issue 3 2001
Xavier Granier
We present an integrated global illumination algorithm, including non-diffuse light transport, which can handle complex scenes and enables rapid incremental updates. We build on a unified algorithm which uses hierarchical radiosity with clustering and particle tracing for diffuse and non-diffuse transport, respectively. We present a new algorithm which chooses between reconstructing specular effects such as caustics on the diffuse radiosity mesh or on special-purpose caustic textures when high frequencies are present. Algorithms are presented to choose the resolution of these textures and to reconstruct the high-frequency non-diffuse lighting effects. We use a dynamic spatial data structure to restrict the number of particles re-emitted during local modifications of the scene. By combining this incremental particle trace with a line-space hierarchy for incremental update of diffuse illumination, we can locally modify complex scenes rapidly. We also develop an algorithm which, by permitting slight quality degradation during motion, achieves quasi-interactive updates. We present an implementation of our new method and its application to indoor and outdoor scenes. [source]


Fast Volume Rendering and Data Classification Using Multiresolution in Min-Max Octrees

COMPUTER GRAPHICS FORUM, Issue 3 2000
Feng Dong
Large-sized volume datasets have recently become commonplace and users are now demanding that volume-rendering techniques used to visualise such data provide acceptable results on relatively modest computing platforms. The widespread use of the Internet for the transmission and/or rendering of volume data is also exerting increasing demands on software providers. Multiresolution can address these issues in an elegant way. One of the fastest volume-rendering algorithms is that proposed by Lacroute & Levoy [1], which is based on shear-warp factorisation and min-max octrees (MMOs). Unfortunately, since an MMO captures only a single resolution of a volume dataset, this method is unsuitable for rendering datasets in a multiresolution form. This paper adapts the above algorithm to multiresolution volume rendering to enable near-real-time interaction to take place on a standard PC. It also permits the user to modify classification functions and/or resolution during rendering with no significant loss of rendering speed. A newly developed data structure based on the MMO is employed, the multiresolution min-max octree (M³O), which captures the spatial coherence of datasets at all resolutions. Speed is enhanced by the use of multiresolution opacity transfer functions for rapidly determining and discarding transparent dataset regions. Some experimental results on sample volume datasets are presented. [source]
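
A min-max octree stores, for each node, the minimum and maximum voxel value of its subvolume, so an entire subtree can be discarded whenever the transfer function assigns zero opacity to that whole value range; the M³O extends these bounds to every resolution level. A bare-bones sketch of the single-resolution skipping test, with invented names and an assumed opacity_max query on the transfer function:

```python
import itertools
import numpy as np

class MinMaxNode:
    """Octree node caching the min/max voxel value of its subvolume."""
    def __init__(self, data, origin=(0, 0, 0), leaf_size=8):
        self.origin, self.shape = origin, data.shape
        self.vmin, self.vmax = float(data.min()), float(data.max())
        self.children = []
        if max(data.shape) > leaf_size:
            halves = [((0, s // 2), (s // 2, s)) for s in data.shape]
            for (x0, x1), (y0, y1), (z0, z1) in itertools.product(*halves):
                sub = data[x0:x1, y0:y1, z0:z1]
                if sub.size:
                    child_origin = (origin[0] + x0, origin[1] + y0, origin[2] + z0)
                    self.children.append(MinMaxNode(sub, child_origin, leaf_size))

def visible_leaves(node, opacity_max):
    """Prune any subtree whose whole value range is transparent.
    opacity_max(lo, hi): max opacity the transfer function takes on [lo, hi]."""
    if opacity_max(node.vmin, node.vmax) == 0.0:
        return []                      # empty/transparent region: skip it entirely
    if not node.children:
        return [node]
    return [leaf for c in node.children for leaf in visible_leaves(c, opacity_max)]

vol = np.random.rand(32, 32, 32)
root = MinMaxNode(vol)
# Transfer function where only values above 0.5 are opaque at all.
leaves = visible_leaves(root, lambda lo, hi: 1.0 if hi > 0.5 else 0.0)
print(len(leaves), "leaf bricks to composite")
```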


Initialization Strategies in Simulation-Based SFE Eigenvalue Analysis

COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 5 2005
Song Du
Poor initializations often result in slow convergence, and in certain instances may lead to an incorrect or irrelevant answer. The problem of selecting an appropriate starting vector becomes even more complicated when the structure involved is characterized by properties that are random in nature. Here, a good initialization for one sample could be poor for another sample. Thus, proper eigenvector initialization for uncertainty analysis involving Monte Carlo simulations is essential for efficient random eigenvalue analysis. Most simulation procedures to date have been sequential in nature: a random vector describing the structural system is simulated, an FE analysis is conducted, the response quantities are identified by post-processing, and the process is repeated until the standard error in the response of interest is within desired limits. A different approach is to generate all the sample (random) structures prior to performing any FE analysis, sequentially rank-order them according to some appropriate measure of distance between the realizations, and perform the FE analyses in that rank order, using the results from the previous analysis as the initialization for the current analysis. The sample structures may also be ordered into a tree-type data structure, where each node represents a random sample; the traversal of the tree starts from the root and continues until every node in the tree has been visited exactly once. This approach differs from the sequential ordering approach in that it uses the solution of the "closest" node to initialize the iterative solver. The computational efficiencies that result from such orderings (at a modest expense of additional data storage) are demonstrated through a stability analysis of a system with closely spaced buckling loads and the modal analysis of a simply supported beam. [source]
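
The benefit of ordering the realizations can be shown with a toy experiment: arrange the samples so that consecutive systems are close, then warm-start an iterative eigensolver with the eigenvector from the previous solve. The sketch below uses plain power iteration and a greedy nearest-neighbour chain as a stand-in for the paper's tree-based ordering; the matrices and all names are invented.

```python
import numpy as np

def power_iteration(A, v0, tol=1e-8, max_iter=5000):
    """Dominant eigenpair of symmetric A from start vector v0; also returns
    the number of iterations used, which warm starts should reduce."""
    v = v0 / np.linalg.norm(v0)
    lam = v @ A @ v
    for k in range(1, max_iter + 1):
        w = A @ v
        v = w / np.linalg.norm(w)
        lam_new = v @ A @ v
        if abs(lam_new - lam) < tol * abs(lam_new):
            return lam_new, v, k
        lam = lam_new
    return lam, v, max_iter

rng = np.random.default_rng(0)
n, n_samples = 60, 40
base = rng.standard_normal((n, n))
base = base + base.T                       # symmetric "mean" system

perturbs = rng.standard_normal((n_samples, n)) * 0.1
def sample_matrix(p):                      # stand-in for a random FE system
    return base + np.diag(p)

# Greedy nearest-neighbour chain: a simple surrogate for the tree ordering.
order, remaining = [0], set(range(1, n_samples))
while remaining:
    last = perturbs[order[-1]]
    nxt = min(remaining, key=lambda j: np.linalg.norm(perturbs[j] - last))
    order.append(nxt); remaining.remove(nxt)

total = 0
v = rng.standard_normal(n)                 # only the first solve starts cold
for idx in order:
    lam, v, iters = power_iteration(sample_matrix(perturbs[idx]), v)
    total += iters                         # v is reused for the next, nearby sample
print("total iterations with warm starts:", total)
```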


From a Product Model to Visualization: Simulation of Indoor Flows with Lattice-Boltzmann Methods

COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 6 2004
Siegfried Kühner
All models are derived from a product data model based on Industry Foundation Classes. Concepts of the Lattice-Boltzmann method, used as the numerical kernel of our simulation system, are described. We take advantage of spacetrees as a central data structure for all geometry-related objects. Finally, we describe some advanced postprocessing and visualization techniques that allow huge amounts of simulation data to be analyzed efficiently. [source]


Plug-and-play remote portlet publishing

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 12 2007
X. D. Wang
Abstract Web Services for Remote Portlets (WSRP) is gaining attention among portal developers and vendors as a way to enable easy development, increased richness in functionality, pluggability, and flexibility of deployment. Whilst not currently supporting all WSRP functionalities, open-source portal frameworks could in future use WSRP Consumers to access remote portlets found through a WSRP Producer registry service. This implies that we need a central registry for the remote portlets and a more expressive WSRP Consumer interface to implement the remote portlet functions. This paper reports on an investigation into a new system architecture, which includes a Web Services repository, registry, and client interface. The Web Services repository holds portlets as remote resource producers. A new data structure for expressing remote portlets is devised and published by populating a Universal Description, Discovery and Integration (UDDI) registry. A remote portlet publish and search engine for UDDI has also been developed. Finally, a remote portlet client interface was developed as a Web application. The client interface supports remote portlet features, as well as window status and mode functions. Copyright © 2007 John Wiley & Sons, Ltd. [source]


Object-oriented distributed computing based on remote class reference

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 1 2003
Yan Huang
Abstract Java RMI, Jini and CORBA provide effective mechanisms for implementing a distributed computing system. Recently, many numerical libraries have been developed that take advantage of Java as an object-oriented and portable language. The widely used client-server method limits the extent to which the benefits of the object-oriented approach can be exploited, because of the difficulties arising when a remote object is the argument or return value of a remote or local method. In this paper this problem is solved by introducing a data object that stores the data structure of the remote object and related access methods. By using this data object, the client can easily instantiate a remote object, and use it as the argument or return value of either a local or remote method. Copyright © 2003 John Wiley & Sons, Ltd. [source]


Individual-based Computational Modeling of Smallpox Epidemic Control Strategies

ACADEMIC EMERGENCY MEDICINE, Issue 11 2006
Donald S. Burke MD
In response to concerns about possible bioterrorism, the authors developed an individual-based (or "agent-based") computational model of smallpox epidemic transmission and control. The model explicitly represents an "artificial society" of individual human beings, each implemented as a distinct object, or data structure, in a computer program. These agents interact locally with one another in code-represented social units such as homes, workplaces, schools, and hospitals. Over many iterations, these microinteractions generate large-scale macroscopic phenomena of fundamental interest, such as the course of an epidemic in space and time. Model variables (incubation periods, clinical disease expression, contagiousness, and physical mobility) were assigned realistic values agreed on by an advisory group of experts on smallpox. Eight response scenarios were evaluated at two epidemic scales, one being an introduction of ten smallpox cases into a 6,000-person town and the other an introduction of 500 smallpox cases into a 50,000-person town. The modeling exercise showed that contact tracing and vaccination of household, workplace, and school contacts, along with prompt reactive vaccination of hospital workers and isolation of diagnosed cases, could contain smallpox at both epidemic scales examined. [source]
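
Stripped of the calibrated disease-stage timing and interventions, the underlying agent-based pattern, individuals as objects, contacts inside shared social units, iterated microinteractions producing an epidemic curve, fits in a short script. The sketch below is deliberately minimal and uses invented parameter values; it reproduces the pattern, not the paper's model.

```python
import random

random.seed(1)

class Person:
    def __init__(self, home, workplace):
        self.home, self.workplace = home, workplace
        self.state = "S"   # S susceptible, E just exposed, I infectious, R recovered
        self.days_infected = 0

P_TRANSMIT, INFECTIOUS_DAYS = 0.08, 10     # invented values, not the paper's

def make_town(n_people, home_size=4, work_size=20):
    return [Person(i // home_size, i // work_size) for i in range(n_people)]

def step(people):
    # Contacts occur only inside shared social units (here, home and workplace).
    for unit in ("home", "workplace"):
        groups = {}
        for p in people:
            groups.setdefault(getattr(p, unit), []).append(p)
        for members in groups.values():
            infectious = [p for p in members if p.state == "I"]
            if not infectious:
                continue
            for p in members:
                if p.state == "S" and any(random.random() < P_TRANSMIT
                                          for _ in infectious):
                    p.state = "E"          # exposed; turns infectious after the sweep
    for p in people:
        if p.state == "E":
            p.state = "I"
        elif p.state == "I":
            p.days_infected += 1
            if p.days_infected >= INFECTIOUS_DAYS:
                p.state = "R"

town = make_town(6000)
for p in random.sample(town, 10):          # ten index cases, as in the smaller scenario
    p.state = "I"
for day in range(120):
    step(town)
print("ever infected:", sum(p.state != "S" for p in town))
```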


Processing methods for partially encrypted data in multihop Web services

ELECTRONICS & COMMUNICATIONS IN JAPAN, Issue 5 2008
Kojiro Nakayama
Abstract Message layer security is necessary to ensure the end-to-end security of Web services. To provide confidentiality against the intermediaries along the message path, XML encryption is used to partially encrypt the message. Because the data structure is changed by the partial encryption, the encrypted message is no longer valid with respect to the original schema definition. Thus, problems occur in the schema validation and data binding performed by the intermediary. In this paper, we discuss two possible methods of solving these problems. The first method is to transform the original schema definition. The second is to transform the received message. We examined these methods by applying them in a demonstration experiment with Web services. © 2008 Wiley Periodicals, Inc. Electron Comm Jpn, 91(5): 26–32, 2008; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecj.10112 [source]


Efficient finite element simulation of crack propagation using adaptive iterative solvers

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 2 2006
A. Meyer
Abstract This paper delivers an efficient solution technique for the numerical simulation of crack propagation in 2D linear elastic formulations based on finite elements, using the conjugate gradient method to solve the corresponding linear systems of equations. The developed iterative numerical approach using hierarchical preconditioners has the interesting feature that the hierarchical data structure is not destroyed during crack propagation. Thus, it is possible to simulate crack advance in a very effective numerical manner, including adaptive mesh refinement and mesh coarsening. Test examples are presented to illustrate the efficiency of the given approach. Numerical simulations of crack propagation are compared with experimental data. Copyright © 2005 John Wiley & Sons, Ltd. [source]


An adaptive multiresolution method for parabolic PDEs with time-step control

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 6 2009
M. O. Domingues
Abstract We present an efficient adaptive numerical scheme for parabolic partial differential equations based on a finite volume (FV) discretization with explicit time discretization using embedded Runge–Kutta (RK) schemes. A multiresolution strategy allows local grid refinement while controlling the approximation error in space. The costly fluxes are evaluated on the adaptive grid only. Compact RK methods of second and third order are then used to automatically choose the new time step while controlling the approximation error in time. Non-admissible choices of the time step are avoided by limiting its variation. The implementation of the multiresolution representation uses a dynamic tree data structure, which allows memory compression and CPU time reduction. This new numerical scheme is validated using different classical test problems in one, two and three space dimensions. The gain in memory and CPU time with respect to the FV scheme on a regular grid is reported, which demonstrates the efficiency of the new method. Copyright © 2008 John Wiley & Sons, Ltd. [source]
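
The time-step control rests on an embedded pair: two results of different orders are computed from shared stages, their difference serves as a local error estimate, and the next step size is scaled from it, with the variation limited so that non-admissible jumps are avoided. A generic sketch using the classic Bogacki–Shampine 2(3) pair, which stands in for (and may differ from) the compact RK methods the paper employs:

```python
import numpy as np

def rk23_step(f, t, y, h):
    """One Bogacki–Shampine step: 3rd-order result + embedded 2nd-order error estimate."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(t + 0.75 * h, y + 0.75 * h * k2)
    y3 = y + h * (2 * k1 + 3 * k2 + 4 * k3) / 9.0          # 3rd order
    k4 = f(t + h, y3)
    y2 = y + h * (7 * k1 / 24 + k2 / 4 + k3 / 3 + k4 / 8)  # 2nd order
    return y3, np.linalg.norm(y3 - y2)

def integrate(f, t0, y0, t_end, h0=1e-2, tol=1e-6, grow_max=2.0, shrink_min=0.2):
    t, y, h = t0, np.asarray(y0, float), h0
    while t < t_end:
        h = min(h, t_end - t)
        y_new, err = rk23_step(f, t, y, h)
        if err <= tol:
            t, y = t + h, y_new                            # accept the step
        # Step proposal from the error estimate, with limited variation.
        factor = 0.9 * (tol / max(err, 1e-16)) ** (1.0 / 3.0)
        h *= min(grow_max, max(shrink_min, factor))
    return t, y

# Example: dy/dt = -50 (y - cos t); steps shrink where the solution is steep.
tf, yf = integrate(lambda t, y: -50.0 * (y - np.cos(t)), 0.0, np.array([0.0]), 2.0)
print(tf, yf)
```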


Extrinsic cohesive modelling of dynamic fracture and microbranching instability in brittle materials

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 8 2007
Zhengyu (Jenny) Zhang
Abstract Dynamic crack microbranching processes in brittle materials are investigated by means of a computational fracture mechanics approach using the finite element method with special interface elements and a topological data structure representation. Experiments indicate the presence of a limiting crack speed for dynamic cracks in brittle materials, as well as increasing fracture resistance with crack speed. These phenomena are numerically investigated by means of a cohesive zone model (CZM) to characterize the fracture process. A critical evaluation of intrinsic versus extrinsic CZMs is briefly presented, which highlights the necessity of adopting an extrinsic approach in the current analysis. A novel topology-based data structure is employed to enable fast and robust manipulation of evolving mesh information when extrinsic cohesive elements are inserted adaptively. Compared to intrinsic CZMs, which include an initial hardening segment in the traction–separation curve, extrinsic CZMs involve additional issues both in implementing the procedure and in interpreting simulation results. These include time discontinuity in stress history, fracture pattern dependence on time step control, and numerical energy balance. These issues are investigated in detail through a 'quasi-steady-state' crack propagation problem in polymethylmethacrylate. The simulation results compare reasonably well with experimental observations, both globally and locally, and demonstrate certain advantageous features of the extrinsic CZM with respect to the intrinsic CZM. Copyright © 2007 John Wiley & Sons, Ltd. [source]


Lower bound limit analysis of cohesive-frictional materials using second-order cone programming

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 4 2006
A. Makrodimopoulos
Abstract The formulation of limit analysis by means of the finite element method leads to an optimization problem with a large number of variables and constraints. Here we present a method for obtaining strict lower bound solutions using second-order cone programming (SOCP), for which efficient primal-dual interior-point algorithms have recently been developed. Following a review of previous work, we provide a brief introduction to SOCP and describe how lower bound limit analysis can be formulated in this way. Some methods for exploiting the data structure of the problem are also described, including an efficient strategy for detecting and removing linearly dependent constraints at the assembly stage. The benefits of employing SOCP are then illustrated with numerical examples. Through the use of an effective algorithm/software, very large optimization problems with up to 700 000 variables are solved in minutes on a desktop machine. The numerical examples concern plane strain conditions and the Mohr–Coulomb criterion; however, we show that SOCP can also be applied to any other problem of lower bound limit analysis involving a yield function with a conic quadratic form (notable examples being the Drucker–Prager criterion in 2D or 3D, and Nielsen's criterion for plates). Copyright © 2005 John Wiley & Sons, Ltd. [source]
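
One housekeeping step the abstract singles out, detecting and removing linearly dependent constraints, can be illustrated with a rank-revealing QR factorization: column pivoting applied to the transposed constraint matrix ranks the rows by independence. The dense numpy/scipy sketch below shows the principle only (assuming scipy is available); the paper performs this on sparse data at the assembly stage.

```python
import numpy as np
from scipy.linalg import qr

def drop_dependent_rows(A, b, tol=1e-10):
    """Keep a maximal linearly independent subset of the rows of A (and of b)."""
    # QR with column pivoting on A^T: the pivot order ranks the rows of A.
    q, r, piv = qr(A.T, pivoting=True)
    rank = int(np.sum(np.abs(np.diag(r)) > tol * abs(r[0, 0])))
    keep = np.sort(piv[:rank])
    return A[keep], b[keep], keep

A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, 1.0, 0.0],   # dependent: row 0 + row 1
              [0.0, 0.0, 1.0]])
b = np.array([1.0, 2.0, 3.0, 4.0])
A2, b2, keep = drop_dependent_rows(A, b)
print(keep)   # rank 3: one of the mutually dependent rows 0, 1, 2 is dropped
```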


Parallel computing of high-speed compressible flows using a node-based finite-element method

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 3 2003
T. Fujisawa
Abstract An efficient parallel computing method for high-speed compressible flows is presented. The numerical analysis of flows with shocks requires very fine computational grids, and grid generation requires a great deal of time. In the proposed method, all computational procedures, from mesh generation to the solution of a system of equations, can be performed seamlessly in parallel in terms of nodes. A local finite-element mesh is generated robustly around each node, even for severe boundary shapes such as cracks. The algorithm and the data structure of the finite-element calculation are based on nodes, and parallel computing is realized by dividing the system of equations by the rows of the global coefficient matrix. Inter-processor communication is minimized by renumbering the nodal identification numbers using ParMETIS. The numerical scheme for high-speed compressible flows is based on the two-step Taylor–Galerkin method. The proposed method is implemented on distributed memory systems, such as an Alpha PC cluster, and a parallel supercomputer, the Hitachi SR8000. The performance of the method is illustrated by the computation of supersonic flows over a forward-facing step. The numerical examples show that crisp shocks are effectively computed on multiprocessors at high efficiency. Copyright © 2003 John Wiley & Sons, Ltd. [source]


Defining and optimizing algorithms for neighbouring particle identification in SPH fluid simulations

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 6 2008
G. Viccione
Abstract Lagrangian particle methods such as smoothed particle hydrodynamics (SPH) are very demanding in terms of computing time for large domains. Since the numerical integration of the governing equations is only carried out for each particle on a restricted number of neighbouring particles located inside a cut-off radius r_c, a substantial part of the computational burden depends on the actual search procedure; it is therefore vital that efficient methods are adopted for such a search. The cut-off radius is indeed much smaller than the typical domain size; hence, the number of neighbouring particles is only a small fraction of the total number. Straightforward determination of which particles are inside the interaction range requires the computation of all pair-wise distances, a procedure whose computational time would be impractical or totally impossible for large problems. Two main strategies have been developed in the past in order to reduce the unnecessary computation of distances: the first based on dynamically storing each particle's neighbourhood list (the Verlet list) and the second based on a framework of fixed cells. The paper presents the results of a numerical sensitivity study on the efficiency of the two procedures as a function of such parameters as the Verlet list size and the cell dimensions. An insight is given into the relative computational burden; a discussion of the relative merits of the different approaches is also given, and some suggestions are provided on the computational and data structure of the neighbourhood search part of SPH codes. Copyright © 2008 John Wiley & Sons, Ltd. [source]
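
The fixed-cell strategy can be written down in a few lines: particles are binned into cubes of edge r_c, and distances are tested only against particles in the same or adjacent cells, shrinking the candidate set from all N particles to a handful. A compact sketch of the cell-based search follows; a Verlet-list variant would additionally cache each particle's neighbour list, built with a skin margin, and reuse it over several steps.

```python
import itertools
import numpy as np

def cell_list_neighbours(pos, rc):
    """All pairs (i, j), i < j, with |pos[i] - pos[j]| < rc, via a fixed-cell grid."""
    cells = {}
    for i, p in enumerate(pos):
        cells.setdefault(tuple((p // rc).astype(int)), []).append(i)

    offsets = list(itertools.product((-1, 0, 1), repeat=pos.shape[1]))
    pairs = []
    for cell, members in cells.items():
        for off in offsets:
            other = tuple(c + o for c, o in zip(cell, off))
            if other not in cells or other < cell:
                continue                        # visit each pair of cells only once
            for i in members:
                for j in cells[other]:
                    if (other != cell or i < j) and \
                            np.linalg.norm(pos[i] - pos[j]) < rc:
                        pairs.append((min(i, j), max(i, j)))
    return pairs

pos = np.random.default_rng(2).random((1000, 3))
pairs = cell_list_neighbours(pos, rc=0.08)
print(len(pairs), "interacting pairs without the ~500k brute-force distance tests")
```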


Automatic CAD model topology generation

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 8 2006
Paresh S. Patel
Abstract Computer aided design (CAD) models often need to be processed, due to data translation issues and the requirements of downstream applications such as computational field simulation, rapid prototyping, computer graphics, computational manufacturing, and real-time rendering, before they can be used. Automatic CAD model processing tools can significantly reduce the time and cost associated with manual processing. The topology generation algorithm, commonly known as CAD repairing/healing, is presented to detect commonly found geometrical and topological issues such as cracks, gaps, overlaps, intersections, T-connections, and missing or invalid topology in the model, process them, and build correct topological information. The present algorithm is based on iterative vertex pair contraction and expansion operations, called stitching and filling, respectively, to process the model accurately. Moreover, the topology generation algorithm can process manifold as well as non-manifold models, which makes the procedure more general and flexible. In addition, a spatial data structure is used for searching and neighbour finding, to process large models efficiently. In this way, the combination of generality, accuracy, and efficiency of this algorithm appears to be a significant improvement over existing techniques. Results are presented showing the effectiveness of the algorithm in processing two- and three-dimensional configurations. Copyright © 2006 John Wiley & Sons, Ltd. [source]


A parallel cell-based DSMC method on unstructured adaptive meshes

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 12 2004
Min Gyu Kim
Abstract A parallel DSMC method based on a cell-based data structure is developed for the efficient simulation of rarefied gas flows on PC clusters. Parallel computation is made by decomposing the computational domain into several subdomains. Dynamic load balancing between processors is achieved based on the number of simulation particles and the number of cells allocated in each subdomain. Adjustment of cell size is also made through mesh adaptation for the improvement of solution accuracy and the efficient usage of meshes. For validation, applications were made to a two-dimensional supersonic leading-edge flow, the axisymmetric Rothe nozzle, and the open hollow cylinder flare flow. It was found that the present method is an efficient tool for the simulation of rarefied gas flows on PC-based parallel machines. Copyright © 2004 John Wiley & Sons, Ltd. [source]


An implicit edge-based ALE method for the incompressible Navier–Stokes equations

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 3 2003
Richard W. Smith
Abstract A new finite volume method for the incompressible Navier–Stokes equations, expressed in arbitrary Lagrangian–Eulerian (ALE) form, is presented. The method uses a staggered storage arrangement for the pressure and velocity variables and adopts an edge-based data structure and assembly procedure which is valid for arbitrary n-sided polygonal meshes. Edge formulas are presented for assembling the ALE form of the momentum and pressure equations. An implicit multi-stage time integrator is constructed that is geometrically conservative to the precision of the arithmetic used in the computation. The method is shown to be second-order-accurate in time and space for general time-dependent polygonal meshes. The method is first evaluated using several well-known unsteady incompressible Navier–Stokes problems before being applied to a periodically forced aeroelastic problem and a transient free surface problem. Published in 2003 by John Wiley & Sons, Ltd. [source]
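
The edge-based idea is that the mesh is reduced to a list of edges, each knowing its two endpoints and a precomputed geometric weight; assembly then becomes a single loop over edges that scatters equal-and-opposite contributions, which is conservative by construction and indifferent to how many sides a cell has. A schematic sketch with invented names and a simple central face value, not the paper's edge formulas:

```python
import numpy as np

class EdgeMesh:
    """Edge-based mesh: edge e joins nodes edge_nodes[e] = (i, j) and carries a
    precomputed area-weighted normal edge_normals[e] pointing from i to j."""
    def __init__(self, edge_nodes, edge_normals, n_nodes):
        self.edge_nodes = np.asarray(edge_nodes)       # (n_edges, 2) ints
        self.edge_normals = np.asarray(edge_normals)   # (n_edges, dim)
        self.n_nodes = n_nodes

def assemble_divergence(mesh, u):
    """One edge loop; each edge adds equal-and-opposite fluxes to its endpoints,
    so the global sum telescopes to the boundary contribution."""
    div = np.zeros(mesh.n_nodes)
    for (i, j), n in zip(mesh.edge_nodes, mesh.edge_normals):
        flux = np.dot(0.5 * (u[i] + u[j]), n)          # central face value
        div[i] += flux
        div[j] -= flux
    return div

# Tiny example: a 1D chain of 4 control volumes (3 interior faces).
mesh = EdgeMesh(edge_nodes=[(0, 1), (1, 2), (2, 3)],
                edge_normals=[[1.0], [1.0], [1.0]], n_nodes=4)
u = np.array([[0.0], [1.0], [2.0], [3.0]])
print(assemble_divergence(mesh, u))                    # [ 0.5  1.  1. -2.5]
```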


Numerical simulation of three-dimensional free surface flows

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 7 2003
V. Maronnier
Abstract A numerical model is presented for the simulation of complex fluid flows with free surfaces in three space dimensions. The model described in Maronnier et al. (J. Comput. Phys. 1999; 155(2): 439) is extended to three-dimensional situations. The mathematical formulation of the model is similar to that of the volume of fluid (VOF) method, but the numerical procedures are different. A splitting method is used for the time discretization. At each time step, two advection problems, one for the predicted velocity field and the other for the volume fraction of liquid, are to be solved. Then, a generalized Stokes problem is solved and the velocity field is corrected. Two different grids are used for the space discretization. The two advection problems are solved on a fixed, structured grid made out of small cubic cells, using a forward characteristic method. The generalized Stokes problem is solved using continuous, piecewise linear stabilized finite elements on a fixed, unstructured mesh of tetrahedrons. The three-dimensional implementation is discussed. Efficient postprocessing algorithms enhance the quality of the numerical solution. A hierarchical data structure reduces memory requirements. Numerical results are presented for complex geometries arising in mold filling. Copyright © 2003 John Wiley & Sons, Ltd. [source]


Simulating three-dimensional aeronautical flows on mixed block-structured/semi-structured/unstructured meshes

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 3 2002
J. A. Shaw
Abstract The design requirements of a computational fluid dynamics (CFD) method for modelling high Reynolds number flows over complete aircraft are reviewed. It is found that the specifications are unlikely to be met by an approach based on the sole use of either structured or unstructured grids. Instead, it is proposed that a hybrid combination of these grids is appropriate. Techniques for developing such meshes are given, and the process of establishing the data structure defining the meshes is described. Details of a flow algorithm which operates on a hybrid mesh are presented. A description is given of the suitability and generation of hybrid grids for a number of examples, and results from flow simulations are shown. Finally, issues still to be addressed in the practical use of these meshes are discussed. Copyright © 2002 John Wiley & Sons, Ltd. [source]


The shallow flow equations solved on adaptive quadtree grids

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 6 2001
A. G. L. Borthwick
Abstract This paper describes an adaptive quadtree grid-based solver of the depth-averaged shallow water equations. The model is designed to approximate flows in complicated large-scale shallow domains while focusing on important smaller-scale localized flow features. Quadtree grids are created automatically by recursive subdivision of a rectangle about discretized boundary, bathymetric or flow-related seeding points. It can be fitted in a fractal-like sense by local grid refinement to any boundary, however distorted, provided absolute convergence to the boundary is not required and a low level of stepped boundary can be tolerated. Grid information is stored as a tree data structure, with a novel indexing system used to link information on the quadtree to a finite volume discretization of the governing equations. As the flow field develops, the grids may be adapted using a parameter based on vorticity and grid cell size. The numerical model is validated using standard benchmark tests, including seiches, Coriolis-induced set-up, jet-forced flow in a circular reservoir, and wetting and drying. Wind-induced flow in the Nichupté Lagoon, México, provides an illustrative example of an application to flow in extremely complicated multi-connected regions. Copyright © 2001 John Wiley & Sons, Ltd. [source]
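
The automatic grid creation described above, recursive subdivision about seeding points, reduces to a short recursion: any cell containing a seed point is split into four children until a maximum level is reached, concentrating resolution along boundaries or bathymetric features. A minimal sketch with illustrative names; the paper's tree additionally carries the indexing that links leaves to the finite volume discretization.

```python
import math

def build_quadtree(bounds, seeds, level=0, max_level=6):
    """Nested-dict quadtree refined wherever a cell contains a seeding point."""
    x0, y0, x1, y1 = bounds
    inside = [(x, y) for x, y in seeds if x0 <= x < x1 and y0 <= y < y1]
    node = {"bounds": bounds, "level": level, "children": []}
    if inside and level < max_level:
        xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
        for q in ((x0, y0, xm, ym), (xm, y0, x1, ym),
                  (x0, ym, xm, y1), (xm, ym, x1, y1)):
            node["children"].append(build_quadtree(q, inside, level + 1, max_level))
    return node

def leaves(node):
    if not node["children"]:
        return [node]
    return [leaf for c in node["children"] for leaf in leaves(c)]

# Refine about a circular "shoreline" of seed points in the unit square.
shore = [(0.5 + 0.3 * math.cos(2 * math.pi * k / 50),
          0.5 + 0.3 * math.sin(2 * math.pi * k / 50)) for k in range(50)]
tree = build_quadtree((0.0, 0.0, 1.0, 1.0), shore)
print(len(leaves(tree)), "leaf cells, finest along the seeded boundary")
```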


An efficient pursuit automata approach for estimating stable all-pairs shortest paths in stochastic network environments

INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 4 2009
Sudip Misra
Abstract This paper presents a new solution to the dynamic all-pairs shortest-path routing problem using a fast-converging pursuit automata learning approach. The particular instance of the problem that we have investigated concerns finding the all-pairs shortest paths in a stochastic graph, where there are continuous probabilistically based updates in edge weights. We present the details of the algorithm with an illustrative example. The algorithm can be used to find the all-pairs shortest paths for the 'statistical' average graph, and the solution converges irrespective of whether there are new changes in edge weights or not. On the other hand, the existing popular algorithms fail to exhibit such behavior and would recalculate the affected all-pairs shortest paths after each edge-weight update. There are two important contributions of the proposed algorithm. The first contribution is that not all the edges in a stochastic graph are probed and, even if they are, they are not all probed equally often. Indeed, the algorithm attempts to almost always probe only those edges that will be included in the final list involving all pairs of nodes in the graph, while probing the other edges minimally. This increases the performance of the proposed algorithm. The second contribution is the design of a data structure whose elements represent the probability that a particular edge in the graph lies on the shortest path between a pair of nodes. All the algorithms were tested in environments where edge weights change stochastically, and where the graph topologies undergo multiple simultaneous edge-weight updates. Its superiority in terms of the average number of processed nodes, scanned edges and the time per update operation, when compared with the existing algorithms, was experimentally established. Copyright © 2008 John Wiley & Sons, Ltd. [source]
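
The second contribution, a structure whose entries estimate the probability that each edge lies on some shortest path, can be mocked up with a per-edge score that is pulled toward 1 when the edge appears in the current all-pairs solution and toward 0 otherwise. The toy below captures that flavour only: the graph, learning rate, and update rule are invented, and genuine pursuit automata maintain and sample full action-probability vectors rather than this simple running estimate.

```python
import itertools
import random

random.seed(3)

nodes = range(5)
mean_w = {(0, 1): 2, (1, 2): 2, (0, 2): 5, (2, 3): 1, (1, 3): 4, (3, 4): 2, (0, 4): 9}
edges = {e: w for (u, v), w in mean_w.items() for e in ((u, v), (v, u))}

prob = {e: 0.5 for e in edges}      # estimate: P(edge lies on a shortest path)
LAMBDA = 0.05                       # learning rate (invented)

def floyd_warshall(w):
    d = {(i, j): (0 if i == j else w.get((i, j), float("inf")))
         for i in nodes for j in nodes}
    nxt = {(i, j): j for (i, j) in w}
    for k, i, j in itertools.product(nodes, repeat=3):
        if d[i, k] + d[k, j] < d[i, j]:
            d[i, j] = d[i, k] + d[k, j]
            nxt[i, j] = nxt[i, k]
    return nxt

for it in range(500):
    # One noisy observation of the stochastic edge weights.
    noisy = {e: max(0.1, w + random.gauss(0, 0.5)) for e, w in edges.items()}
    nxt = floyd_warshall(noisy)
    used = set()
    for i, j in itertools.product(nodes, repeat=2):
        a = i
        while a != j:                       # walk the shortest path i -> j
            b = nxt[a, j]
            used.add((a, b))
            a = b
    for e in prob:                          # reward-style update toward 0 or 1
        target = 1.0 if e in used else 0.0
        prob[e] += LAMBDA * (target - prob[e])

print(sorted(prob.items(), key=lambda kv: -kv[1])[:4])  # most "stable" edges
```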


Nonuniform video coding by means of multifoveal geometries

INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 1 2002
J.A. Rodríguez
This paper presents a control mechanism for video transmission that relies on transmitting nonuniform resolution images depending on the delay of the communication channel. These images are built in an active way to keep the areas of interest of the image at the highest resolution available. In order to shift the areas of high resolution over the image and to achieve a data structure that is easy to process by using conventional algorithms, a shifted foveal multiresolution geometry of adaptive size is used. If delays are too high, the resolution areas of the image can be transmitted at different rates. A functional system has been developed for corridor surveillance with static cameras. Tests with real video images have proven that the method allows an almost constant rate of images per second as long as the channel is not collapsed. A new method for determining the areas of interest is also proposed, based on hierarchical object tracking by means of adaptive stabilization of pyramidal structures. © 2002 John Wiley & Sons, Inc. Int J Imaging Syst Technol 12, 27–34, 2002 [source]
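
The bandwidth saving of a foveal geometry comes from transmitting full resolution only inside a movable window of interest while the periphery is decimated. A minimal two-level sketch follows; real multifoveal geometries use several nested resolution rings, and the window size and decimation factor here are invented.

```python
import numpy as np

def foveate(image, cx, cy, half, factor=4):
    """Full resolution inside a (2*half)-pixel window centred on (cx, cy);
    the rest of the frame is decimated by `factor` in each axis."""
    h, w = image.shape[:2]
    x0, x1 = max(0, cx - half), min(w, cx + half)
    y0, y1 = max(0, cy - half), min(h, cy + half)
    fovea = image[y0:y1, x0:x1].copy()
    periphery = image[::factor, ::factor].copy()   # crude decimation
    return fovea, (x0, y0), periphery

def reconstruct(fovea, origin, periphery, shape, factor=4):
    out = periphery.repeat(factor, axis=0).repeat(factor, axis=1)
    out = out[:shape[0], :shape[1]].copy()
    x0, y0 = origin
    out[y0:y0 + fovea.shape[0], x0:x0 + fovea.shape[1]] = fovea
    return out

frame = (np.random.rand(240, 320) * 255).astype(np.uint8)
fovea, origin, peri = foveate(frame, cx=200, cy=120, half=40)
approx = reconstruct(fovea, origin, peri, frame.shape)
print(f"transmitted fraction: {(fovea.size + peri.size) / frame.size:.2f}")  # ~0.15
```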


From fuzzy sets to shadowed sets: Interpretation and computing

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 1 2009
Witold Pedrycz
In this study, we discuss the concept of shadowed sets and present their applications. To establish a sound compromise between the qualitative Boolean (two-valued) description of data and quantitative membership grades, we introduce an interpretation framework for shadowed sets. Shadowed sets are discussed as three-valued constructs induced by fuzzy sets, assuming three values (interpretable as full membership, full exclusion, and uncertain membership). The algorithm for converting membership functions into this quantification results from an optimization problem guided by the principle of uncertainty localization. We revisit fundamental ideas of relational calculus in the setting of shadowed sets. We demonstrate how shadowed sets help with problems of data interpretation in fuzzy clustering, by leading to a three-valued quantification of data structure that consists of core, shadowed, and uncertain structure. © 2008 Wiley Periodicals, Inc. [source]
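
The conversion itself is direct: grades below a threshold α are reduced to 0, grades above 1−α are elevated to 1, and the remainder forms the shadow, with α chosen so that the membership mass removed and added balances the size of the shadow, which is one common reading of the uncertainty-localization principle. A numeric sketch over a sampled membership function:

```python
import numpy as np

def shadowed_set(mu, n_grid=500):
    """Three-valued quantification of membership grades mu in [0, 1]:
    returns (alpha, labels) with 0 = excluded, 1 = full member, 0.5 = shadow."""
    mu = np.asarray(mu, float)

    def imbalance(a):
        reduced = mu[mu <= a].sum()                 # mass lost by cutting to 0
        elevated = (1.0 - mu[mu >= 1.0 - a]).sum()  # mass gained by raising to 1
        shadow = np.count_nonzero((mu > a) & (mu < 1.0 - a))
        return abs(reduced + elevated - shadow)

    grid = np.linspace(1e-3, 0.5 - 1e-3, n_grid)
    alpha = min(grid, key=imbalance)                # uncertainty-balance criterion

    labels = np.full(mu.shape, 0.5)                 # shadow by default
    labels[mu <= alpha] = 0.0
    labels[mu >= 1.0 - alpha] = 1.0
    return alpha, labels

# Example: triangular membership function sampled on [0, 4].
x = np.linspace(0, 4, 81)
mu = np.clip(1 - np.abs(x - 2) / 2, 0, 1)
alpha, labels = shadowed_set(mu)
print(f"alpha = {alpha:.3f}; core: {int((labels == 1).sum())} points, "
      f"shadow: {int((labels == 0.5).sum())} points")
```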


Regional Inequalities in Consumption Patterns: A Multilevel Approach to the Case of Italy

INTERNATIONAL STATISTICAL REVIEW, Issue 1 2007
Filippa Bono
Summary The main aim of this paper is to evaluate the disparities between the Italian regions on the demand side. In more detail, an attempt is made to establish whether the consumption behaviour of Italian households differs across regions. With this in mind, Istat's 2000 Italian Family Budget data set was analysed. The data in question, which were collected through a two-stage sample over Italy's 20 regions, contain information regarding the expenses of approximately 23,000 households. In this analysis, both households and regions are considered as units: households are nested in the regions, so the basic data structure is hierarchical. In order to take this hierarchical structure into account, a multilevel model was used, making it possible for parameters to vary randomly from region to region. The model in question also made it possible to treat heterogeneity across different groups (regions) as stochastic variation. First, regional inequalities were tested using a simple model in which households constituted the first level of analysis and were grouped according to their region (the second level). As a second step, and in order to investigate the interaction between geographical context and income distribution, another model was used, cross-classified by income and region. The most relevant results showed that consumption behaviour is widely fragmented, with various differentiated types of behaviour across the regions under analysis. These territorial differentials emerge clearly by income class and item of consumption. Résumé The aim of this work is to analyse the differences in consumer behaviour between the Italian regions. The statistical treatment of the individual data is original in that it is carried out with a multilevel model. The multilevel model takes the hierarchical structure of the data into account and allows the estimated parameters to vary randomly. Moreover, this model allows the heterogeneity between the different groups of households (the statistical units) to vary randomly across regions. For the analysis of regional differences, a first multilevel model was estimated with households at the first hierarchical level and regions at the second. Since the geographical factor interacts with the income distribution within each region, another, cross-classified model was estimated in which households are grouped by income. [source]
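
A random-intercept specification of the kind described, households at level one nested in regions at level two, can be sketched with statsmodels' mixed-effects API on synthetic data; this assumes statsmodels is installed, and the variable names and data-generating values are invented, not taken from the Istat data set.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_regions, per_region = 20, 200
region = np.repeat(np.arange(n_regions), per_region)
u = rng.normal(0.0, 0.3, n_regions)                 # random regional intercepts
log_income = rng.normal(3.0, 0.4, region.size)
consumption = 0.5 + 0.6 * log_income + u[region] + rng.normal(0.0, 0.2, region.size)
df = pd.DataFrame({"consumption": consumption,
                   "log_income": log_income,
                   "region": region})

# Households (level 1) nested in regions (level 2): region-level random intercept.
fit = smf.mixedlm("consumption ~ log_income", df, groups=df["region"]).fit()
print(fit.params["log_income"])                     # fixed income effect
print(float(fit.cov_re.iloc[0, 0]))                 # between-region variance
```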