Iteration


Kinds of Iteration

  • first iteration
  • Newton iteration
  • non-linear iteration
  • several iteration
  • subsequent iteration

Terms modified by Iteration

  • iteration algorithm
  • iteration method
  • iteration methods
  • iteration number
  • iteration procedure
  • iteration scheme
  • iteration step
  • iteration strategy

Selected Abstracts


    Global Illumination as a Combination of Continuous Random Walk and Finite-Element Based Iteration

    COMPUTER GRAPHICS FORUM, Issue 3 2001
    László Szirmay-Kalos
    The paper introduces a global illumination method that combines continuous and finite-element approaches, preserving the speed of finite-element based iteration and the accuracy of continuous random walks. The basic idea is to decompose the radiance function into a finite-element component that is only a rough estimate and a difference component that is obtained by Monte-Carlo techniques. Iteration and random walk are handled uniformly in the framework of stochastic iteration. This uniform treatment allows the finite-element component to be built up adaptively, aiming at minimizing the Monte-Carlo component. The method is also suited for interactive walkthrough animation in glossy scenes since, when the viewpoint changes, only the small Monte-Carlo component needs to be recomputed. [source]
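
    The stochastic iteration framework the abstract refers to can be illustrated on a generic linear transport equation L = E + T(L): the operator is applied through a cheap unbiased random estimate at each step, and the running average of the iterates is taken as the solution. The sketch below (Python; operator, sizes, and sample counts are illustrative, not the paper's renderer) shows the principle.

        import numpy as np

        # Solve L = E + T L, applying T only through a one-column random
        # estimate per step (E[n * T[:, j] * L[j]] = T L for uniform j);
        # the running average of the iterates approaches the fixed point.
        rng = np.random.default_rng(0)
        n = 20
        T = rng.uniform(0.5, 1.0, (n, n))
        T *= 0.2 / T.sum(axis=1, keepdims=True)   # contractive transport operator
        E = rng.random(n)
        L_exact = np.linalg.solve(np.eye(n) - T, E)

        L, L_avg = E.copy(), np.zeros(n)
        for k in range(1, 100001):
            j = rng.integers(n)                   # one random "transport path"
            L = E + n * T[:, j] * L[j]            # stochastic iteration step
            L_avg += (L - L_avg) / k              # average the iterates
        print(np.max(np.abs(L_avg - L_exact)))    # small; shrinks with more steps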


    Symbolic methods for invariant manifolds in chemical kinetics

    INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, Issue 1 2006
    Simon J. Fraser
    Abstract Chemical reactions show a separation of time scales in transient decay due to the stiffness of the ordinary differential equations (ODEs) that describe their evolution. This evolution can be represented as motion in the phase space spanned by the concentration variables of the chemical reaction. Transient decay corresponds to a collapse of the "compressible fluid" representing the continuum of possible dynamical states of the system. Collapse occurs sequentially through a hierarchy of nested, attracting, slow invariant manifolds (SIMs), i.e., sets that map into themselves under the action of the phase flow, eventually reaching the asymptotic attractor of the system. Using a symbolic manipulation language, explicit formulas for the SIMs can be found by iterating functional equations obtained from the system's ODEs. Iteration converges geometrically fast to a SIM at large concentrations and, if necessary, can be stabilized at small concentrations. Three different chemical models are examined in order to show how finding the SIM for a model depends on its underlying dynamics. For every model the iterative method provides a global SIM formula; however, formal series expansions for the SIM diverge in some models. Repelling SIMs can also be found by iterative methods because of the invariance of trajectory geometry under time reversal. © 2005 Wiley Periodicals, Inc. Int J Quantum Chem, 2006 [source]
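
    The functional-equation iteration is easy to reproduce on a toy system. Below is a hedged sympy sketch for the linear model x' = -x, y' = -g*y + x (an illustrative system, not one of the paper's three models): substituting y = Y(x) into the ODEs gives the invariance equation Y(x) = (x + x*Y'(x))/g, whose iterates converge geometrically to the exact slow manifold Y(x) = x/(g - 1).

        import sympy as sp

        x = sp.symbols('x')
        g = sp.Integer(10)

        # Iterate the invariance functional equation Y <- (x + x*Y')/g,
        # starting from the trivial guess Y = 0.
        Y = sp.Integer(0)
        for _ in range(8):
            Y = sp.expand((x + x * sp.diff(Y, x)) / g)

        print(Y)            # 11111111*x/100000000, approaching x/9
        print(x / (g - 1))  # exact slow invariant manifold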


    Effect of wear on EHD film thickness in sliding contacts

    LUBRICATION SCIENCE, Issue 1 2006
    R. Michalczewski
    Abstract A theoretical solution to the elastohydrodynamic (EHD) lubrication problem in sliding contacts, which takes into consideration the effect of the change in shape of the gap due to wear on the load-carrying capacity, is presented. The model of such a contact is based on assumptions of Grubin and Ertel (von Mohrenstein). The resultant dimensionless Reynolds and film profile equations have been solved numerically for a number of cases with several values of thickness of the worn layer. Iteration of the EHD film thickness is performed by means of the secant method. Values of the calculated dimensionless film thickness are presented as a function of dimensionless wear. The conclusions concern the influence of the linear wear on the film thickness in heavily loaded sliding contacts. Copyright © 2006 John Wiley & Sons, Ltd. [source]
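
    The abstract names the secant method for iterating the film thickness; a minimal, self-contained sketch of that root-finding iteration is below (Python). The residual function is a hypothetical stand-in, not the paper's coupled Reynolds and film-profile equations.

        # Secant iteration: find h with f(h) = 0 from two starting guesses,
        # requiring no derivative of the residual.
        def secant(f, h0, h1, tol=1e-12, max_iter=50):
            f0, f1 = f(h0), f(h1)
            for _ in range(max_iter):
                h2 = h1 - f1 * (h1 - h0) / (f1 - f0)   # secant update
                if abs(h2 - h1) < tol:
                    return h2
                h0, f0, h1, f1 = h1, f1, h2, f(h2)
            return h1

        # Illustrative residual only (stand-in for a load-balance equation).
        print(secant(lambda h: h**3 - 2*h - 5, 1.0, 3.0))   # ~2.0946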


    Asymmetric Conjugate Silyl Transfer in Iterative Catalytic Sequences: Synthesis of the C7–C16 Fragment of (+)-Neopeltolide

    ANGEWANDTE CHEMIE, Issue 35 2010
    Eduard Hartmann
    If it doesn't fit, make it fit! The anti,anti configuration of the C7–C16 fragment of (+)-neopeltolide is built up stereoselectively in an iterative sequence of catalyst-controlled Si- and Me-group transfers, unimpaired by the mismatched selectivity in the first iteration (Si = Me2PhSi, see Scheme; TBS = tert-butyldimethylsilyl). [source]


    Fuzzy Sarsa Learning and the proof of existence of its stationary points

    ASIAN JOURNAL OF CONTROL, Issue 5 2008
    Vali Derhami
    Abstract This paper provides a new Fuzzy Reinforcement Learning (FRL) algorithm based on critic-only architecture. The proposed algorithm, called Fuzzy Sarsa Learning (FSL), tunes the parameters of conclusion parts of the Fuzzy Inference System (FIS) online. Our FSL is based on Sarsa, which approximates the Action Value Function (AVF) and is an on-policy method. In each rule, actions are selected according to the proposed modified Softmax action selection so that the final inferred action selection probability in FSL is equivalent to the standard Softmax formula. We prove the existence of fixed points for the proposed Approximate Action Value Iteration (AAVI). Then, we show that FSL satisfies the necessary conditions that guarantee the existence of stationary points for it, which coincide with the fixed points of the AAVI. We prove that the weight vector of FSL with stationary action selection policy converges to a unique value. We also compare by simulation the performance of FSL and Fuzzy Q-Learning (FQL) in terms of learning speed and action quality. Moreover, we show by another example the convergence of FSL and the divergence of FQL when both algorithms use a stationary policy. Copyright © 2008 John Wiley and Sons Asia Pte Ltd and Chinese Automatic Control Society [source]
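
    FSL builds on tabular Sarsa with Softmax action selection; the sketch below shows those two ingredients on a toy problem (Python). The fuzzy inference layer, the paper's modified Softmax, and its convergence machinery are omitted; the environment and constants are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(1)
        n_states, n_actions = 5, 2
        Q = np.zeros((n_states, n_actions))
        alpha, gamma, tau = 0.1, 0.95, 0.5   # learning rate, discount, temperature

        def softmax_action(q_row):
            p = np.exp((q_row - q_row.max()) / tau)
            return rng.choice(len(q_row), p=p / p.sum())

        def step(s, a):                      # toy ring environment (illustrative)
            s2 = (s + (1 if a == 1 else -1)) % n_states
            return s2, 1.0 if s2 == 0 else 0.0

        s = 2
        a = softmax_action(Q[s])
        for _ in range(5000):
            s2, r = step(s, a)
            a2 = softmax_action(Q[s2])       # on-policy: next action, same policy
            Q[s, a] += alpha * (r + gamma * Q[s2, a2] - Q[s, a])  # Sarsa update
            s, a = s2, a2
        print(Q.round(2))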


    ARMS: an algebraic recursive multilevel solver for general sparse linear systems

    NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, Issue 5 2002
    Y. Saad
    Abstract This paper presents a general preconditioning method based on a multilevel partial elimination approach. The basic step in constructing the preconditioner is to separate the initial points into two parts. The first part consists of 'block' independent sets, or 'aggregates'. Unknowns of two different aggregates have no coupling between them, but those in the same aggregate may be coupled. The nodes not in the first part constitute what might be called the 'coarse' set. It is natural to call the nodes in the first part 'fine' nodes. The idea of the method is to form the Schur complement related to the coarse set. This leads to a natural block LU factorization which can be used as a preconditioner for the system. This system is then solved recursively using as preconditioner the factorization that could be obtained from the next level. Iterations between levels are allowed. One interesting aspect of the method is that it provides a common framework for many other techniques. Numerical experiments are reported which indicate that the method can be fairly robust. Copyright © 2002 John Wiley & Sons, Ltd. [source]
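
    One level of the block factorization described here is straightforward to write down. The sketch below (Python/NumPy, dense toy matrix) orders unknowns as fine then coarse, eliminates the fine block, and forms the coarse Schur complement; in ARMS the coarse solve would itself be handled recursively and inexactly, which this sketch replaces with an exact solve.

        import numpy as np

        rng = np.random.default_rng(0)
        n, nf = 8, 5                              # total unknowns, fine unknowns
        A = rng.random((n, n)) + n * np.eye(n)    # diagonally dominant toy matrix
        B, F = A[:nf, :nf], A[:nf, nf:]           # [ B  F ]
        E, C = A[nf:, :nf], A[nf:, nf:]           # [ E  C ]

        S = C - E @ np.linalg.solve(B, F)         # Schur complement on coarse set

        def precond_solve(b):
            """Apply the block-LU preconditioner (exact on this one toy level)."""
            bf, bc = b[:nf], b[nf:]
            yf = np.linalg.solve(B, bf)           # eliminate fine unknowns
            xc = np.linalg.solve(S, bc - E @ yf)  # coarse solve (recursive in ARMS)
            xf = yf - np.linalg.solve(B, F @ xc)  # back-substitute fine unknowns
            return np.concatenate([xf, xc])

        b = rng.random(n)
        print(np.allclose(precond_solve(b), np.linalg.solve(A, b)))   # True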


    Strategies for identifying pregnancies in the automated medical records of the General Practice Research Database

    PHARMACOEPIDEMIOLOGY AND DRUG SAFETY, Issue 11 2004
    Janet R. Hardy
    Abstract Purpose To develop a method for identifying the beginning and ending records of pregnancies in the automated medical records of the General Practice Research Database (GPRD). Methods Women's records from 1991 to 1999 were searched for codes from 17 pregnancy marker and 7 pregnancy outcome categories. Using the retrieved records, all possible pregnancy marker-outcome combinations were formed per woman. For each combination, the difference in days between record event dates was calculated. Restrictions were applied to select the combination with the earliest pregnancy marker mapped to the first outcome for each pregnancy. Iterations of the algorithm identified multiple pregnancies per woman when present. The algorithm was evaluated by analyzing time between marker and outcome event dates of mapped pregnancies and by analyzing unmapped pregnancy markers and outcomes. Results A total of 297,082 pregnancies were identified: 80% by general practitioner (GP) visit codes as the earliest pregnancy marker and 14% by laboratory or procedure codes. Limiting pregnancies to one per woman aged 15–44 years yielded 209,266 pregnancies. Pregnancy mapping success was greater than 80%. Plotting the pregnancies by weeks from earliest pregnancy marker to outcome and by pregnancy marker category showed two peaks in the distribution: 2–3 weeks and 33 weeks. Conclusions Arranging codes and time into algorithms provides a useful tool for pregnancy identification in databases whose size prohibits the audit of printed records. Evaluation of our algorithm confirmed a high degree of mapping success and a sensible time distribution from pregnancy marker to outcome. Copyright © 2004 John Wiley & Sons, Ltd. [source]
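
    The mapping step can be sketched as follows (Python): pair each pregnancy outcome with the earliest unused marker falling inside a plausible gestational window, and iterate so that one woman can contribute several pregnancies. The records, the 310-day window, and the single restriction shown are illustrative stand-ins for the paper's full set of restrictions.

        from datetime import date

        # Toy records for one woman (illustrative dates only).
        markers = [date(1995, 1, 10), date(1995, 3, 2), date(1996, 6, 1)]
        outcomes = [date(1995, 9, 20), date(1997, 2, 1)]

        pregnancies, used = [], set()
        for outcome in sorted(outcomes):
            candidates = [m for m in markers
                          if m not in used and 0 < (outcome - m).days <= 310]
            if candidates:
                first_marker = min(candidates)   # earliest marker maps to outcome
                used.add(first_marker)
                pregnancies.append((first_marker, outcome))

        for m, o in pregnancies:
            print(m, "->", o, f"({(o - m).days} days)")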


    Isotropic Remeshing with Fast and Exact Computation of Restricted Voronoi Diagram

    COMPUTER GRAPHICS FORUM, Issue 5 2009
    Dong-Ming Yan
    Abstract We propose a new isotropic remeshing method, based on Centroidal Voronoi Tessellation (CVT). Constructing a CVT requires repeatedly computing the Restricted Voronoi Diagram (RVD), defined as the intersection between a 3D Voronoi diagram and an input mesh surface. Existing methods use some approximations of the RVD. In this paper, we introduce an efficient algorithm that computes the RVD exactly and robustly. As a consequence, we achieve better remeshing quality than approximation-based approaches, without sacrificing efficiency. Our method for RVD computation uses a simple procedure and a kd-tree to quickly identify and compute the intersection of each triangle face with its incident Voronoi cells. Its time complexity is O(m log n), where n is the number of seed points and m is the number of triangles of the input mesh. Fast convergence of CVT is achieved using a quasi-Newton method, which proved much faster than Lloyd's iteration. Examples are presented to demonstrate the better quality of remeshing results with our method than with state-of-the-art approaches. [source]
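
    For reference, the classical alternative the paper improves on, Lloyd's iteration for a centroidal Voronoi tessellation, looks as follows in the plane (Python/SciPy, with a kd-tree doing the nearest-seed queries). The paper works on a mesh surface with an exact RVD and a quasi-Newton solver; this planar sketch only illustrates the CVT fixed-point iteration.

        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(0)
        samples = rng.random((20000, 2))          # dense stand-in for the domain
        seeds = rng.random((50, 2))

        for _ in range(30):                       # Lloyd fixed-point iteration
            owner = cKDTree(seeds).query(samples)[1]   # nearest seed per sample
            for i in range(len(seeds)):           # move each seed to its centroid
                members = samples[owner == i]
                if len(members):
                    seeds[i] = members.mean(axis=0)
        print(seeds[:3])                          # seeds spread out isotropically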


    Scalable, Versatile and Simple Constrained Graph Layout

    COMPUTER GRAPHICS FORUM, Issue 3 2009
    Tim Dwyer
    Abstract We describe a new technique for graph layout subject to constraints. Compared to previous techniques the proposed method is much faster and scales to much larger graphs. For a graph with n nodes, m edges and c constraints it computes incremental layout in time O(n log n + m + c) per iteration. Also, it supports a much more powerful class of constraints: inequalities or equalities over the Euclidean distance between nodes. We demonstrate the power of this technique by application to a number of diagramming conventions which previous constrained graph layout methods could not support. Further, the constraint-satisfaction method, inspired by recent work in position-based dynamics, is far simpler to implement than previous methods. [source]
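
    The position-based-dynamics flavor of the constraint satisfaction can be sketched directly (Python): after each unconstrained layout step, every violated constraint is enforced by moving its endpoints symmetrically. The graph, step sizes, and the single minimum-separation constraint are illustrative assumptions; the paper supports a much richer constraint class.

        import numpy as np

        rng = np.random.default_rng(2)
        X = rng.random((6, 2))                    # node positions
        edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
        min_sep = [(0, 3, 0.8)]                   # keep nodes 0 and 3 >= 0.8 apart

        for _ in range(200):
            for u, v in edges:                    # spring step toward length 0.3
                d = X[v] - X[u]; L = np.linalg.norm(d) + 1e-12
                corr = 0.05 * (L - 0.3) * d / L
                X[u] += corr; X[v] -= corr
            for u, v, s in min_sep:               # project violated constraints
                d = X[v] - X[u]; L = np.linalg.norm(d) + 1e-12
                if L < s:
                    push = 0.5 * (s - L) * d / L
                    X[u] -= push; X[v] += push
        print(np.linalg.norm(X[3] - X[0]))        # >= 0.8 (approximately)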


    Hierarchical Higher Order Face Cluster Radiosity for Global Illumination Walkthroughs of Complex Non-Diffuse Environments

    COMPUTER GRAPHICS FORUM, Issue 3 2003
    Enrico Gobbetti
    We present an algorithm for simulating global illumination in scenes composed of highly tessellated objects with diffuse or moderately glossy reflectance. The solution method is a higher order extension of the face cluster radiosity technique. It combines face clustering, multiresolution visibility, vector radiosity, and higher order bases with a modified progressive shooting iteration to rapidly produce visually continuous solutions with limited memory requirements. The output of the method is a vector irradiance map that partitions input models into areas where global illumination is well approximated using the selected basis. The programming capabilities of modern commodity graphics architectures are exploited to render illuminated models directly from the vector irradiance map, exploiting hardware acceleration for approximating view dependent illumination during interactive walkthroughs. Using this algorithm, visually compelling global illumination solutions for scenes of over one million input polygons can be computed in minutes and examined interactively on common graphics personal computers. Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Picture and Image Generation; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism. [source]


    Ray Tracing Triangular Bézier Patches

    COMPUTER GRAPHICS FORUM, Issue 3 2001
    S. H. Martin Roth
    We present a new approach to finding ray–patch intersections with triangular Bernstein–Bézier patches of arbitrary degree. This paper extends and complements the short presentation [17]. Unlike a previous approach which was based on a combination of hierarchical subdivision and a Newton-like iteration scheme [21], this work adapts the concept of Bézier clipping to the triangular domain. The problem of reporting wrong intersections, inherent to the original Bézier clipping algorithm [14], is investigated and contrasted with the triangular case. It turns out that reporting wrong hits is very improbable, even close to impossible, in the triangular set-up. A combination of Bézier clipping and a simple hierarchy of nested bounding volumes offers a reliable and accurate solution to the problem of ray tracing triangular Bézier patches. [source]


    The Optimization of Signal Settings on a Signalized Roundabout Using the Cross-entropy Method

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 2 2008
    Mike Maher
    The cross-entropy method (CEM) is an iterative process that consists of generating solutions from some probability distribution whose parameter values are updated in each iteration using information from the best solutions found in that iteration. The article applies the method to the problem of the optimization of signal settings on a signalized roundabout. The performance of any given set of timings is evaluated using the cell transmission model, a deterministic macroscopic traffic flow model that permits the modeling of the spatial extent of queues and the possibility of "blocking back." The results from the investigations are encouraging, and show that the CEM has the potential to be a useful technique for tackling global optimization problems. [source]
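
    A generic cross-entropy iteration of the kind described is only a few lines (Python): sample candidate timings from a parametric distribution, rank them, and refit the distribution to the elite fraction. The quadratic cost below is a hypothetical stand-in for the cell-transmission-model evaluation used in the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        dim, n_samples, n_elite = 4, 100, 10
        mu, sigma = np.full(dim, 30.0), np.full(dim, 10.0)  # green times (s), spread

        def cost(timings):            # illustrative performance index
            return np.sum((timings - np.array([12.0, 25.0, 33.0, 18.0]))**2, axis=1)

        for it in range(40):
            pop = rng.normal(mu, sigma, size=(n_samples, dim))
            elite = pop[np.argsort(cost(pop))[:n_elite]]   # best of this iteration
            mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6
        print(mu.round(1))            # approaches the optimum timings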


    Deterministic parallel selection algorithms on coarse-grained multicomputers

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 18 2009
    M. Cafaro
    Abstract We present two deterministic parallel selection algorithms for distributed memory machines, under the coarse-grained multicomputer model. Both are based on the use of two weighted 3-medians, which allows discarding at least 1/3 of the elements in each iteration. The first algorithm slightly improves the experimentally fastest known algorithm, by Saukas and Song, in which at least 1/4 of the elements are discarded in each iteration, while the second one is a fast, special-purpose algorithm working for a particular class of input, namely input that can be sorted in linear time using RadixSort. Copyright © 2009 John Wiley & Sons, Ltd. [source]
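
    A sequential model of the discard iteration is sketched below (Python): each simulated processor reports its block median, the pivot is the weighted median of those medians, and the side of the pivot that cannot contain the answer is discarded. This mirrors the Saukas and Song style of iteration mentioned in the abstract; the paper's two weighted 3-medians refinement is not reproduced.

        import numpy as np

        def parallel_select(data, k, p=8):
            """Return the k-th smallest element (0-indexed)."""
            data = list(data)
            rng = np.random.default_rng(0)
            while len(data) > 32:
                blocks = np.array_split(rng.permutation(data), p)
                meds = sorted((float(np.median(b)), len(b)) for b in blocks)
                half, acc = len(data) / 2, 0
                for pivot, w in meds:        # weighted median of block medians
                    acc += w
                    if acc >= half:
                        break
                lo = [v for v in data if v < pivot]
                if k < len(lo):
                    data = lo                # answer lies below the pivot
                else:
                    k -= len(lo)
                    data = [v for v in data if v >= pivot]
            return sorted(data)[k]

        vals = np.random.default_rng(1).random(10000)
        print(parallel_select(vals, 2500), np.sort(vals)[2500])   # agree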


    Detecting particle swarm optimization

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 4 2009
    Ying-Nan Zhang
    Abstract Here, we propose a detecting particle swarm optimization (DPSO). In DPSO, we define several detecting particles that are randomly selected from the population. The detecting particles use the newly proposed velocity formula to search the adjacent domains of a settled position in approximate spiral trajectories. In addition, we define the particles that use the canonical velocity updating formula as common particles. In each iteration, the common particles use the canonical velocity updating formula to update their velocities and positions, and then the detecting particles search in approximate spiral trajectories created by the new velocity updating formula in order to find better solutions. Together, the detecting and common particles carry out a high-performance search. DPSO combines the common particles' swarm search behavior with the detecting particles' individual search behavior, thereby trying to improve PSO's performance in terms of swarm diversity, speed of convergence, and the ability to escape local optima. The experimental results from several benchmark functions demonstrate the good performance of DPSO. Copyright © 2008 John Wiley & Sons, Ltd. [source]
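
    The split between common and detecting particles can be sketched as follows (Python). Common particles use the canonical PSO velocity update; the detecting particles probe the neighbourhood of the best-known position along a shrinking spiral. The spiral rule, constants, and benchmark below are illustrative assumptions, not the paper's exact velocity formula.

        import numpy as np

        rng = np.random.default_rng(0)
        n, dim, w, c1, c2 = 20, 2, 0.7, 1.5, 1.5
        X, V = rng.uniform(-5, 5, (n, dim)), np.zeros((n, dim))
        pbest, detectors = X.copy(), {0, 1}       # particles 0 and 1 probe spirally

        def f(x):                                 # sphere benchmark function
            return np.sum(x**2, axis=-1)

        gbest = X[np.argmin(f(X))].copy()
        for t in range(1, 201):
            for i in range(n):
                if i in detectors:                # spiral probe of gbest's vicinity
                    theta, r = 0.5 * t, 2.0 * 0.97**t
                    X[i] = gbest + r * np.array([np.cos(theta), np.sin(theta)])
                else:                             # canonical PSO velocity update
                    r1, r2 = rng.random(dim), rng.random(dim)
                    V[i] = w*V[i] + c1*r1*(pbest[i]-X[i]) + c2*r2*(gbest-X[i])
                    X[i] += V[i]
                if f(X[i]) < f(pbest[i]):
                    pbest[i] = X[i].copy()
                    if f(X[i]) < f(gbest):
                        gbest = X[i].copy()
        print(f(gbest))                           # near 0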


    Bayesian Networks and Adaptive Management of Wildlife Habitat

    CONSERVATION BIOLOGY, Issue 4 2010
    ALISON L. HOWES
    decision-support tools; ecological uncertainty; feral grazing; burning regimes; model validation Abstract: Adaptive management is an iterative process of gathering new knowledge regarding a system's behavior and monitoring the ecological consequences of management actions to improve management decisions. Although the concept originated in the 1970s, it is rarely actively incorporated into ecological restoration. Bayesian networks (BNs) are emerging as efficient ecological decision-support tools well suited to adaptive management, but examples of their application in this capacity are few. We developed a BN within an adaptive-management framework that focuses on managing the effects of feral grazing and prescribed burning regimes on avian diversity within woodlands of subtropical eastern Australia. We constructed the BN with baseline data to predict bird abundance as a function of habitat structure, grazing pressure, and prescribed burning. Results of sensitivity analyses suggested that grazing pressure increased the abundance of aggressive honeyeaters, which in turn had a strong negative effect on small passerines. Management interventions to reduce the pressure of feral grazing and prescribed burning were then conducted, after which we collected a second set of field data to test the response of small passerines to these measures. We used these data, which incorporated ecological changes that may have resulted from the management interventions, to validate and update the BN. The network predictions of small passerine abundance under the new habitat and management conditions were very accurate. The updated BN concluded the first iteration of adaptive management and will be used in planning the next round of management interventions. The unique belief-updating feature of BNs provides land managers with the flexibility to predict outcomes and evaluate the effectiveness of management interventions. [source]


    Bridging the gap between field data and global models: current strategies in aeolian research

    EARTH SURFACE PROCESSES AND LANDFORMS, Issue 4 2010
    Joanna Bullard
    Abstract Modern global models of earth-atmosphere-ocean processes are becoming increasingly sophisticated but still require validation against empirical data and observations. This commentary reports on international initiatives amongst aeolian researchers that seek to combine field-based data sets and geomorphological frameworks for improving the quality of data available to constrain and validate global models. These include a second iteration of the Dust Indicators and Records of Terrestrial and Marine Palaeoenvironments (DIRTMAP2) database, the Digital Atlas of Sand Seas and Dunefields of the World and a new geomorphology-based land surface map produced by the QUEST (Quantifying Uncertainties in the Earth System) Working Group on Dust. Copyright © 2010 John Wiley & Sons, Ltd. [source]


    Non-iterative equivalent linearization of inelastic SDOF systems for earthquakes in Japan and California

    EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 11 2010
    Katsuichiro Goda
    Abstract The seismic performance of existing structures can be assessed based on nonlinear static procedures, such as the Capacity Spectrum Method. This method essentially approximates peak responses of an inelastic single-degree-of-freedom (SDOF) system using peak responses of an equivalent linear SDOF model. In this study, the equivalent linear models of inelastic SDOF systems are developed based on the constant strength approach, which does not require iteration for assessing the seismic performance of existing structures. To investigate the effects of earthquake type and seismic region on the equivalent linear models, four ground-motion data sets (Japanese crustal/interface/inslab records and California crustal records) are compiled and used for nonlinear dynamic analysis. The analysis results indicate that: (1) the optimal equivalent linear model parameters (i.e. equivalent vibration period ratio and damping ratio) decrease with the natural vibration period, whereas they increase with the strength reduction factor; (2) the impacts of earthquake type and seismic region on the equivalent linear model parameters are not significant except for short vibration periods; and (3) the degradation and pinching effects affect the equivalent linear model parameters. We develop prediction equations for the optimal equivalent linear model parameters based on nonlinear least-squares fitting, which improve and extend the current nonlinear static procedure for existing structures with degradation and pinching behavior. Copyright © 2010 John Wiley & Sons, Ltd. [source]
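
    The evaluation step behind such equivalent linear models is direct integration of a linear SDOF oscillator with the equivalent period and damping. The sketch below (Python; a standard Newmark average-acceleration integrator, a synthetic ground motion, and illustrative parameter values, none taken from the paper) computes the peak displacement that would be compared with the peak inelastic response.

        import numpy as np

        def peak_linear_response(ag, dt, T, zeta):
            """Peak displacement of a linear SDOF under ground acceleration ag."""
            m = 1.0
            k = m * (2 * np.pi / T)**2
            c = 2 * zeta * np.sqrt(k * m)
            beta, gamma = 0.25, 0.5               # average acceleration method
            keff = k + gamma*c/(beta*dt) + m/(beta*dt**2)
            u = v = 0.0
            a = -ag[0]
            umax = 0.0
            for agi in ag[1:]:
                p = (-m*agi
                     + m*(u/(beta*dt**2) + v/(beta*dt) + (1/(2*beta) - 1)*a)
                     + c*(gamma*u/(beta*dt) + (gamma/beta - 1)*v
                          + dt*(gamma/(2*beta) - 1)*a))
                u_new = p / keff
                v_new = (gamma*(u_new - u)/(beta*dt) + (1 - gamma/beta)*v
                         + dt*(1 - gamma/(2*beta))*a)
                a_new = (u_new - u)/(beta*dt**2) - v/(beta*dt) - (1/(2*beta) - 1)*a
                u, v, a = u_new, v_new, a_new
                umax = max(umax, abs(u))
            return umax

        dt = 0.01
        ag = 0.3*9.81*np.sin(2*np.pi*1.5*np.arange(0, 10, dt))  # toy ground motion
        T_eq, zeta_eq = 1.2*0.5, 0.12             # equivalent period and damping
        print(peak_linear_response(ag, dt, T_eq, zeta_eq))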


    Kinematic transformations for planar multi-directional pseudodynamic testing

    EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 9 2009
    Oya Mercan
    Abstract The pseudodynamic (PSD) test method imposes command displacements to a test structure for a given time step. The measured restoring forces and displaced position achieved in the test structure are then used to integrate the equations of motion to determine the command displacements for the next time step. Multi-directional displacements of the test structure can introduce error in the measured restoring forces and displaced position. The subsequently determined command displacements will not be correct unless the effects of the multi-directional displacements are considered. This paper presents two approaches for correcting kinematic errors in planar multi-directional PSD testing, where the test structure is loaded through a rigid loading block. The first approach, referred to as the incremental kinematic transformation method, employs linear displacement transformations within each time step. The second method, referred to as the total kinematic transformation method, is based on accurate nonlinear displacement transformations. Using three displacement sensors and the trigonometric law of cosines, this second method enables the simultaneous nonlinear equations that express the motion of the loading block to be solved without using iteration. The formulation and example applications for each method are given. Results from numerical simulations and laboratory experiments show that the total transformation method maintains accuracy, while the incremental transformation method may accumulate error if the incremental rotation of the loading block is not small over the time step. A procedure for estimating the incremental error in the incremental kinematic transformation method is presented as a means to predict and possibly control the error. Copyright © 2009 John Wiley & Sons, Ltd. [source]


    Real-time hybrid testing using the unconditionally stable explicit CR integration algorithm

    EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 1 2009
    Cheng Chen
    Abstract Real-time hybrid testing combines experimental testing and numerical simulation, and provides a viable alternative for the dynamic testing of structural systems. An integration algorithm is used in real-time hybrid testing to compute the structural response based on feedback restoring forces from experimental and analytical substructures. Explicit integration algorithms are usually preferred over implicit algorithms as they do not require iteration and are therefore computationally efficient. The time step size for explicit integration algorithms, which are typically conditionally stable, can be extremely small in order to avoid numerical instability when the number of degrees-of-freedom of the structure becomes large. This paper presents the implementation and application of a newly developed unconditionally stable explicit integration algorithm for real-time hybrid testing. The development of the integration algorithm is briefly reviewed. An extrapolation procedure is introduced in the implementation of the algorithm for real-time testing to ensure the continuous movement of the servo-hydraulic actuator. The stability of the implemented integration algorithm is investigated using control theory. Real-time hybrid test results of single-degree-of-freedom and multi-degree-of-freedom structures with a passive elastomeric damper subjected to earthquake ground motion are presented. The explicit integration algorithm is shown to enable excellent real-time hybrid test results to be achieved. Copyright © 2008 John Wiley & Sons, Ltd. [source]
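
    For a linear SDOF system, an explicit unconditionally stable scheme of the kind referenced here (the CR algorithm, after Chen and Ricles) reduces to two explicit updates with constant coefficients. The free-vibration sketch below (Python) uses illustrative mass, stiffness, and damping values; treat the coefficient formula as a sketch under that SDOF assumption.

        import numpy as np

        m, k, zeta = 1.0, 400.0, 0.05
        c = 2 * zeta * np.sqrt(k * m)
        dt = 0.02

        alpha = 4 * m / (4 * m + 2 * c * dt + k * dt**2)   # alpha1 = alpha2 (SDOF)
        u, v = 0.01, 0.0                                   # initial conditions
        a = (-c * v - k * u) / m
        for _ in range(500):                               # free vibration
            v_new = v + alpha * dt * a                     # explicit velocity update
            u_new = u + dt * v + alpha * dt**2 * a         # explicit displacement
            a_new = (-c * v_new - k * u_new) / m           # equilibrium at new step
            u, v, a = u_new, v_new, a_new
        print(u)                                           # decayed free response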


    Equivalent force control method for generalized real-time substructure testing with implicit integration

    EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 9 2007
    Bin Wu
    Abstract This paper presents a new method, called the equivalent force control method, for solving the nonlinear equations of motion in a real-time substructure test using an implicit time integration algorithm. The method replaces the numerical iteration in implicit integration with a force-feedback control loop, while displacement control is retained to control the motion of an actuator. The method is formulated in such a way that it represents a unified approach that also encompasses the effective force test method. The accuracy and effectiveness of the method have been demonstrated with numerical simulations of real-time substructure tests with physical substructures represented by spring and damper elements, respectively. The method has also been validated with actual tests in which a Magnetorheological damper was used as the physical substructure. Copyright © 2007 John Wiley & Sons, Ltd. [source]


    A reduced-order modeling technique for tall buildings with active tuned mass damper

    EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 3 2001
    Zu-Qing Qu
    Abstract It is impractical to install sensors on every floor of a tall building to measure the full state vector because of the large number of degrees of freedom. This makes it necessary to introduce reduced-order control. A kind of system reduction scheme (dynamic condensation method) is proposed in this paper. This method is iterative, and Guyan condensation is looked upon as an initial approximation of the iteration. Since the reduced-order system is updated repeatedly until a desired one is obtained, the accuracy of the reduced-order system resulting from the proposed method is much higher than that obtained from the Guyan condensation method. Another advantage of the method is that the reduced-order system is defined in the subspace of the original physical space, which makes the state vectors have physical meaning. An eigenvalue shifting technique is applied to accelerate the convergence of iteration and to make the reduced system retain all the dynamic characteristics of the full system within a given frequency range. Two schemes to establish the reduced-order system by using the proposed method are also presented and discussed in this paper. The results for a tall building with active tuned mass damper show that the proposed method is efficient for reduced-order modelling and that the accuracy is very close to exact after only two iterations. Copyright © 2001 John Wiley & Sons, Ltd. [source]
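
    Guyan condensation, the initial approximation of the proposed iterative scheme, is shown below on a toy shear building (Python/SciPy). Slave DOFs are eliminated statically and the reduced matrices live on the retained master DOFs; the paper's iterative refinement and eigenvalue shifting are not reproduced, and the matrices are illustrative.

        import numpy as np
        from scipy.linalg import eigh

        n, k = 6, 1000.0                           # 6-storey shear building (toy)
        K = k * (2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
        K[-1, -1] = k                              # free top storey
        M = np.eye(n)
        masters = [1, 3, 5]                        # retained (sensor) DOFs
        slaves = [i for i in range(n) if i not in masters]

        Kss, Ksm = K[np.ix_(slaves, slaves)], K[np.ix_(slaves, masters)]
        T = np.zeros((n, len(masters)))            # Guyan transformation matrix
        T[masters] = np.eye(len(masters))
        T[slaves] = -np.linalg.solve(Kss, Ksm)     # static elimination of slaves

        K_red, M_red = T.T @ K @ T, T.T @ M @ T
        w_full = np.sqrt(eigh(K, M, eigvals_only=True))[:3]
        w_red = np.sqrt(eigh(K_red, M_red, eigvals_only=True))
        print(w_full.round(2), w_red.round(2))     # low modes agree reasonably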


    Leaf Processing by Wild Chimpanzees: Physically Defended Leaves Reveal Complex Manual Skills

    ETHOLOGY, Issue 8 2002
    Nadia Corp
    The manual processing of eight species of leaf was investigated in the M-group chimpanzees of Mahale Mountains National Park, Tanzania. Leaf species varied in the extent to which physical defences made consumption difficult. In all, 96 distinct techniques for leaf processing were identified, but two species with defended leaves (Ficus asperifolia and F. exasperata) required 2.5 times as many techniques as any of the six undefended species. Moreover, chimpanzees made more multiple leaf detachments, and made more subsequent modifications of the leaves, when dealing with the leaves of these two Ficus species, compared with the undefended leaf species. This greater complexity was associated with evidence of flexible, hierarchical organization of the process: iteration of modules consisting of several processing elements, facultative omission of modules, or substitutions of alternative modules. Comparison with data from mountain gorillas is made, and is consistent with similar cognitive architecture in the two species. We consider that, not only is hierarchical organization currently associated with mechanical difficulty in food processing, but that over evolutionary time-scales difficulties in food processing may have selected for cognitive advance. [source]


    Adaptive resource allocation in OFDMA systems with fairness and QoS constraints

    EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 6 2007
    Liang Chen
    This paper describes several practical and efficient adaptive subchannel, power and bit allocation algorithms for orthogonal frequency-division multiple-access (OFDMA) systems. Assuming perfect knowledge of channel state information (CSI) at the transmitter, we look at the problem of minimising the total power consumption while maintaining individual rate requirements and QoS constraints. An average signal-to-noise ratio (SNR) approximation is used to determine the allocation while substantially reducing the computational complexity. The proposed algorithms guarantee improvement through each iteration and converge quickly to stable suboptimal solutions. Numerical results and complexity analysis show that the proposed algorithms offer beneficial cost versus performance trade-offs compared to existing approaches. Copyright © 2007 John Wiley & Sons, Ltd. [source]
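
    As a flavor of the power-allocation subproblem, the classical water-filling allocation over subchannel gains is sketched below (Python, bisection on the water level). The paper's joint subchannel, power and bit allocation with per-user rate and QoS constraints is considerably more involved; gains and powers here are illustrative.

        import numpy as np

        def waterfill(gains, total_power, noise=1.0):
            """Allocate p_i = max(0, mu - noise/g_i) with sum p_i = total_power."""
            lo, hi = 0.0, total_power + noise / gains.min()
            for _ in range(100):                  # bisection on water level mu
                mu = 0.5 * (lo + hi)
                p = np.maximum(0.0, mu - noise / gains)
                if p.sum() > total_power:
                    hi = mu
                else:
                    lo = mu
            return p

        gains = np.array([2.0, 1.0, 0.5, 0.1])
        p = waterfill(gains, total_power=4.0)
        print(p.round(3), p.sum())                # strong subchannels get more power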


    On extrinsic information of good binary codes operating over Gaussian channels

    EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 2 2007
    M. Peleg
    We show that the extrinsic information about the coded bits of any good (capacity achieving) binary code operating over a Gaussian channel is zero when the channel capacity is lower than the code rate and unity when capacity exceeds the code rate, that is, the extrinsic information transfer (EXIT) chart is a step function of the signal to noise ratio and independent of the code. It follows that, for a common class of iterative receivers where the error correcting decoder must operate at first iteration at rate above capacity (such as in turbo equalization, iterative channel estimation, parallel and serial concatenated coding and the like), classical good codes which achieve capacity over the Additive White Gaussian Noise Channel are not effective and should be replaced by different new ones. Copyright © 2006 AEIT. [source]


    THE EVOLVABILITY OF GROWTH FORM IN A CLONAL SEAWEED

    EVOLUTION, Issue 12 2009
    Keyne Monro
    Although modular construction is considered the key to adaptive growth or growth-form plasticity in sessile taxa (e.g., plants, seaweeds and colonial invertebrates), the serial expression of genes in morphogenesis may compromise its evolutionary potential if growth forms emerge as integrated wholes from module iteration. To explore the evolvability of growth form in the red seaweed, Asparagopsis armata, we estimated genetic variances, covariances, and cross-environment correlations for principal components of growth-form variation in contrasting light environments. We compared variance–covariance matrices across environments to test environmental effects on heritable variation and examined the potential for evolutionary change in the direction of plastic responses to light. Our results suggest that growth form in Asparagopsis may constitute only a single genetic entity whose plasticity affords only limited evolutionary potential. We argue that morphological integration arising from modular construction may constrain the evolvability of growth form in Asparagopsis, emphasizing the critical distinction between genetic and morphological modularity in this and other modular taxa. [source]


    The use of transient pressure analysis at the Dounreay Shaft Isolation Project

    GEOMECHANICS AND TUNNELLING, Issue 5 2009
    Grouting; Innovative methods Abstract This paper provides an assessment of the use of pressure fall-off data during the Dounreay Shaft Isolation Project. The instrumentation controlling the injection of grout monitors and records both the pressure and the flow rate throughout the process, so pressure fall-off data is collected during any pauses to, and at the end of, each grout injection. The shapes of the pressure fall-off vs. time curves have been examined qualitatively and categorised. The fall-off data has also been examined using PanSystem well test software, which creates the pressure change and pressure derivative curves, then attempts to simulate the fall-off curve by iteration after selection of a flow and boundary model chosen from the wide range available. The implications that the shapes of the pressure and derivative curves and the flow and boundary models have for the grout curtain have been examined. The caveats that surround the quantitative use of results from PanSystem analyses for a cement grout rather than a Newtonian fluid are discussed. [source]
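
    The derivative curve central to such analyses is the logarithmic pressure derivative dP/d(ln t). The sketch below (Python, synthetic fall-off data) computes it with a plain finite difference in log time; well-test packages such as PanSystem apply smoothed variants of this derivative before model matching.

        import numpy as np

        t = np.logspace(-2, 2, 200)               # elapsed time since shut-in (h)
        dP = 10.0 * np.log(1.0 + t / 0.05)        # synthetic fall-off response

        lnt = np.log(t)
        deriv = np.gradient(dP, lnt)              # dP / d(ln t)
        # On a log-log plot, dP and its derivative characterize the flow regime:
        # a flat derivative suggests radial flow, a unit slope suggests storage.
        print(deriv[:3].round(3), deriv[-3:].round(3))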


    Spectral estimation on a sphere in geophysics and cosmology

    GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 3 2008
    F. A. Dahlen
    SUMMARY We address the problem of estimating the spherical-harmonic power spectrum of a statistically isotropic scalar signal from noise-contaminated data on a region of the unit sphere. Three different methods of spectral estimation are considered: (i) the spherical analogue of the one-dimensional (1-D) periodogram, (ii) the maximum-likelihood method and (iii) a spherical analogue of the 1-D multitaper method. The periodogram exhibits strong spectral leakage, especially for small regions of area A ≪ 4π, and is generally unsuitable for spherical spectral analysis applications, just as it is in 1-D. The maximum-likelihood method is particularly useful in the case of nearly-whole-sphere coverage, A ≈ 4π, and has been widely used in cosmology to estimate the spectrum of the cosmic microwave background radiation from spacecraft observations. The spherical multitaper method affords easy control over the fundamental trade-off between spectral resolution and variance, and is easily implemented regardless of the region size, requiring neither non-linear iteration nor large-scale matrix inversion. As a result, the method is ideally suited for most applications in geophysics, geodesy or planetary science, where the objective is to obtain a spatially localized estimate of the spectrum of a signal from noisy data within a pre-selected and typically small region. [source]
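
    The 1-D multitaper estimator that the spherical method generalizes is a short computation (Python/SciPy): several orthogonal prolate spheroidal tapers give nearly independent spectrum estimates that are averaged, trading resolution for variance. The signal and taper parameters below are illustrative.

        import numpy as np
        from scipy.signal.windows import dpss

        rng = np.random.default_rng(0)
        n, nw, k = 1024, 4.0, 7                   # length, time-bandwidth, tapers
        x = np.sin(2*np.pi*0.2*np.arange(n)) + rng.normal(0, 1, n)

        tapers = dpss(n, nw, Kmax=k)              # shape (k, n), orthogonal tapers
        spectra = np.abs(np.fft.rfft(tapers * x, axis=1))**2
        S = spectra.mean(axis=0)                  # multitaper spectrum estimate
        print(np.argmax(S) / n)                   # ~0.2, the true frequency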


    Models of Earth's main magnetic field incorporating flux and radial vorticity constraints

    GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 1 2007
    A. Jackson
    SUMMARY We describe a new technique for implementing the constraints on magnetic fields arising from two hypotheses about the fluid core of the Earth, namely the frozen-flux hypothesis and the hypothesis that the core is in magnetostrophic force balance with negligible leakage of current into the mantle. These hypotheses lead to time-independence of the integrated flux through certain 'null-flux patches' on the core surface, and to time-independence of their radial vorticity. Although the frozen-flux hypothesis has received attention before, constraining the radial vorticity has not previously been attempted. We describe a parametrization and an algorithm for preserving topology of radial magnetic fields at the core surface while allowing morphological changes. The parametrization is a spherical triangle tesselation of the core surface. Topology with respect to a reference model (based on data from the Oersted satellite) is preserved as models at different epochs are perturbed to optimize the fit to the data; the topology preservation is achieved by the imposition of inequality constraints on the model, and the optimization at each iteration is cast as a bounded value least-squares problem. For epochs 2000, 1980, 1945, 1915 and 1882 we are able to produce models of the core field which are consistent with flux and radial vorticity conservation, thus providing no observational evidence for the failure of the underlying assumptions. These models are a step towards the production of models which are optimal for the retrieval of frozen-flux velocity fields at the core surface. [source]
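
    The bounded value least-squares step can be illustrated with SciPy's lsq_linear (Python): a linear least-squares fit subject to bound constraints on selected parameters. The design matrix, data, and sign constraints below are hypothetical stand-ins for the field parametrization and the topology-preserving inequalities.

        import numpy as np
        from scipy.optimize import lsq_linear

        rng = np.random.default_rng(0)
        G = rng.normal(size=(40, 6))              # design matrix (data kernel)
        m_true = np.array([0.5, -1.0, 2.0, 0.2, -0.3, 1.5])
        d = G @ m_true + rng.normal(0, 0.05, 40)  # noisy data

        lb = np.array([0.0, -np.inf, 0.0, -np.inf, -np.inf, 0.0])  # sign bounds
        ub = np.full(6, np.inf)
        res = lsq_linear(G, d, bounds=(lb, ub))   # bounded value least squares
        print(res.x.round(2))                     # honors the bound constraints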


    P- and S-velocity images of the lithosphere–asthenosphere system in the Central Andes from local-source tomographic inversion

    GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 1 2006
    Ivan Koulakov
    SUMMARY About 50 000 P and S arrival times and 25 000 values of t* recorded at seismic arrays operated in the Central Andes between 20°S and 25°S in the time period from 1994 to 1997 have been used for locating more than 1500 deep and crustal earthquakes and creating 3-D P, S velocity and Qp models. The study volume in the reference model is subdivided into three domains: slab, continental crust and mantle wedge. A starting velocity distribution in each domain is set from a priori information: in the crust it is based on the controlled sources seismic studies; in slab and mantle wedge it is defined using relations between P and S velocities, temperature and composition given by mineral physics. Each iteration of tomographic inversion consists of the following steps: (1) absolute location of sources in 3-D velocity model using P and S arrival times; (2) double-difference relocation of the sources and (3) simultaneous determination of P and S velocity anomalies, P and S station corrections and source parameters by inverting one matrix. Velocity parameters are computed in a mesh with the density of nodes proportional to the ray density with double-sided nodes at the domain boundaries. The next iteration is repeated with the updated velocity model and source parameters obtained at the previous step. Different tests aimed at checking the reliability of the obtained velocity models are presented. In addition, we present the results of inversion for Vp and Vp/Vs parameters, which appear to be practically equivalent to Vp and Vs inversion. A separate inversion for Qp has been performed using the ray paths and source locations in the final velocity model. The resulting Vp, Vs and Qp distributions show complicated, essentially 3-D structure in the lithosphere and asthenosphere. P and S velocities appear to be well correlated, suggesting the important role of variations of composition, temperature, water content and degree of partial melting. [source]