Test Cases: Selected Abstracts
The Bible among Lutherans in America: The ELCA as a Test Case
DIALOG, Issue 1 2006
By Erik M. Heen
Abstract: This article describes the biblical hermeneutics that inform the Evangelical Lutheran Church in America by comparing the ELCA's tradition of biblical interpretation with that of the Lutheran Church-Missouri Synod. It sets both against the great social and intellectual challenges of the early twentieth century, including the modernist/fundamentalist controversy. One commonality that surfaces is that both church bodies appropriated pre-modern hermeneutical impulses for "counter-modern" biblical apologetics. In this process the LC-MS privileged the period of Lutheran Orthodoxy (17th century), while the ELCA constructed its hermeneutical paradigm through a recovery of the early Reformation (Luther). This observation suggests that both interpretive trajectories need further historical as well as theological review and revision. [source]

Constraint of Oxygen Fugacity During Field-Assisted Sintering: TiO2 as a Test Case
JOURNAL OF THE AMERICAN CERAMIC SOCIETY, Issue 3 2008
Dat V. Quach
Field-assisted sintering exposes samples in a graphite die to reducing conditions. Using TiO2 as a test case, this work shows that internal redox equilibria in the sample, rather than the graphite–CO–O2 equilibrium, appear to control the oxygen fugacity. Samples sintered at 1160°C for 20 min are homogeneous in oxygen content and have an average composition of TiO1.983±0.001. The oxygen fugacity during these sintering experiments is calculated to be about 10^-16 atm, which is higher than the value obtained from the thermodynamic equilibrium of graphite–CO–O2 at the given temperature. The oxygen fugacity is similar to that for the quasi-two-phase region, or hysteresis loop, representing the coexistence of reduced rutile with random crystallographic shear (CS) planes and the first ordered CS phase.
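The fugacity figure quoted in the TiO2 abstract above comes from an equilibrium-constant relation of the form ΔG° = -RT ln K. As a hedged illustration only (the ΔG° value and CO pressure below are placeholder numbers, not the paper's data), the oxygen fugacity fixed by the graphite–CO–O2 couple 2 C + O2 = 2 CO can be sketched as:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def fo2_graphite_co(dg_std, p_co, temp_k):
    """Oxygen fugacity (atm) fixed by 2 C + O2 = 2 CO.
    K = p_CO**2 / f_O2 and K = exp(-dG_std / (R T)),
    so f_O2 = p_CO**2 * exp(dG_std / (R T))."""
    return p_co ** 2 * math.exp(dg_std / (R * temp_k))

# Illustrative, hypothetical inputs (dG_std in J/mol, p_CO in atm):
f_o2 = fo2_graphite_co(dg_std=-450e3, p_co=1.0, temp_k=1433.0)
```

Comparing such a graphite–CO–O2 value against the sample's internal redox equilibria is exactly the comparison the abstract describes; with strongly negative ΔG°, the computed f_O2 falls many orders of magnitude below 1 atm.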
[source]

The Ordovician Trilobite Carolinites, a Test Case for Microevolution in a Macrofossil Lineage
PALAEONTOLOGY, Issue 2 2002
Tim McCormick
We use geometric morphometrics to test a claim that the Ordovician trilobite Carolinites exhibits gradualistic evolution. We follow a previously proposed definition of gradualism, and define the criteria an ideal microevolutionary case study should satisfy. We consider the Lower–Middle Ordovician succession at Ibex, western Utah, to meet these criteria. We discovered examples of: (1) morphometric characters which fluctuate with little or no net change; (2) characters which show abrupt 'step' change; (3) characters which show transitional change through intermediate states. Examples belonging to (2) and (3) exhibit reversals. The transitional characters were tested against a null hypothesis of symmetrical random walk. The tests indicated that they were not changing under sustained directional selection. Two alternative interpretations are possible. (1) The characters are responding to random causes (genetic drift or rapidly fluctuating selection pressures) or to causes that interact in so complex a way that they appear random. This observation may be applicable to most claimed cases of gradualistic evolution in the literature. (2) Sampling was at too poor a resolution to allow meaningful testing against the random walk. If so, then this situation is likely to apply in most evolutionary case studies involving Palaeozoic macrofossils. [source]

Concerning the Different Roles of Cations in Metallic Zintl Phases: Ba7Ga4Sb9 as a Test Case
CHEMINFORM, Issue 46 2006
Pere Alemany
Abstract: ChemInform is a weekly Abstracting Service, delivering concise information at a glance that was extracted from about 200 leading journals. To access a ChemInform Abstract, please click on HTML or PDF.
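The random-walk null hypothesis in the Carolinites abstract above is typically checked by asking whether the observed net change of a character exceeds what symmetric, undirected steps of the same sizes would produce. A minimal sign-permutation sketch of that idea (not the authors' actual test statistic):

```python
import numpy as np

def random_walk_test(series, n_sim=2000, seed=0):
    """Monte Carlo test of a trait time series against a symmetric random
    walk: the observed increments are given random signs many times, and
    the observed |net change| is compared with the simulated distribution.
    Returns the fraction of simulations with |net change| >= observed
    (a small value suggests directional change)."""
    rng = np.random.default_rng(seed)
    steps = np.diff(series)
    observed = abs(series[-1] - series[0])
    signs = rng.choice([-1.0, 1.0], size=(n_sim, steps.size))
    simulated = np.abs((signs * np.abs(steps)).sum(axis=1))
    return (simulated >= observed).mean()
```

A steadily trending series yields a near-zero fraction (inconsistent with the random walk), whereas a fluctuating series with little net change does not reject the null, matching the abstract's distinction between transitional and fluctuating characters.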
[source]

Civil Disobedience and Test Cases
RATIO JURIS, Issue 3 2004
María José Falcón y Tella
The novelty of our focus resides in the priority given to the legal aspect of civil disobedience, especially to the possible legal justification of civil disobedience, a perspective that is generally overlooked in analysing the phenomenon. This is where the Achilles heel is to be found, though it may provide unexploited insights into the issue from which significant conclusions can be drawn. [source]

A novel selectivity technique for high impedance arcing fault detection in compensated MV networks
EUROPEAN TRANSACTIONS ON ELECTRICAL POWER, Issue 4 2008
Nagy I. Elkalashy
Abstract: In this paper, the initial transients due to arc reignitions associated with high impedance faults caused by leaning trees are extracted using the discrete wavelet transform (DWT). In this way, the fault occurrence is localized. The feature extraction is carried out for the phase quantities corresponding to the frequency band 12.5–6.25 kHz. The detection security is enhanced because the DWT corresponds to the periodicity of these transients. The selectivity term of the faulty feeder is based on a novel technique, in which the power polarity is examined. This power is mathematically processed by multiplying the DWT detail coefficients of the phase voltage and current for each feeder. Its polarity identifies the faulty feeder. In order to reduce the computational burden of the technique, the extraction of the fault features from the residual components is examined. The same methodology of computing the power is considered by taking into account the residual voltage and current detail coefficients, where the proposed algorithm performs best. Test cases provide evidence of the efficacy of the proposed technique. Copyright © 2007 John Wiley & Sons, Ltd.
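The faulty-feeder selectivity rule above multiplies the DWT detail coefficients of voltage and current and inspects the sign of the resulting "power". A toy sketch of that rule, with a hand-rolled one-level Haar detail filter standing in for the paper's wavelet and its 12.5–6.25 kHz band (all signals hypothetical):

```python
import numpy as np

def haar_detail(x):
    """One-level Haar DWT detail coefficients: a simple stand-in for the
    band-limited detail level used in the paper."""
    x = x[: len(x) // 2 * 2]          # drop an odd trailing sample
    return (x[0::2] - x[1::2]) / np.sqrt(2.0)

def feeder_power_polarity(voltage, current):
    """Sign of the wavelet-domain power: the detail coefficients of the
    phase voltage and of one feeder's current are multiplied sample by
    sample and summed; the sign of that sum is the selectivity term."""
    p = haar_detail(voltage) * haar_detail(current)
    return np.sign(p.sum())
```

In the paper's scheme the polarity of this quantity differs between the faulty feeder and the healthy ones; the sketch only shows the mechanics of forming it, not a calibrated detector.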
[source]

Computation of turbulent free-surface flows around modern ships
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 4 2003
Tingqiu Li
Abstract: This paper presents the calculated results for three classes of typical modern ships in modelling of ship-generated waves. Simulations of turbulent free-surface flows around ships are performed in a numerical water tank, based on the FINFLO-RANS SHIP solver developed at Helsinki University of Technology. The Reynolds-averaged Navier–Stokes (RANS) equations with artificial compressibility and the non-linear free-surface boundary conditions are discretized by means of a cell-centred finite-volume scheme. The convergence performance is improved with the multigrid method. The free surface is tracked using a moving-mesh technique, in which the non-linear free-surface boundary conditions are imposed on the actual location of the free surface. The test cases considered are a container ship, a US Navy combatant and a tanker. The calculated results are compared with the experimental data available in the literature in terms of the wave profiles, wave pattern and turbulent flow fields for two turbulence models, Chien's low-Reynolds-number k–ε model and the Baldwin–Lomax model. Furthermore, the convergence performance, the grid refinement study and the effect of turbulence models on the waves have been investigated. Additionally, a comparison of two types of dynamic free-surface boundary conditions is made. Copyright © 2003 John Wiley & Sons, Ltd. [source]

Hardware implementation of CNN architecture-based test bed for studying synchronization phenomenon in oscillatory and chaotic networks
INTERNATIONAL JOURNAL OF CIRCUIT THEORY AND APPLICATIONS, Issue 4 2009
Ákos Tar
Abstract: A 3D modular cellular nonlinear network (CNN) architecture-based test bed, with four-neighbor connectivity, used to study synchronization phenomena in oscillatory and chaotic networks is designed.
The architecture is implemented as hardware panels, including a standalone robust Chua's circuit kit. The details of the electronic implementation, along with several test cases of connecting Chua's circuits in different topologies, are provided. The test cases are adequately supported by oscilloscope traces. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Minimum sequence requirements for selective RNA-ligand binding: A molecular mechanics algorithm using molecular dynamics and free-energy techniques
JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 14 2006
Peter C. Anderson
Abstract: In vitro evolution techniques allow RNA molecules with unique functions to be developed. However, these techniques do not necessarily identify the simplest RNA structures for performing their functions. Determining the simplest RNA that binds to a particular ligand is currently limited to experimental protocols. Here, we introduce a molecular-mechanics-based algorithm employing molecular dynamics simulations and free-energy methods to predict the minimum sequence requirements for selective ligand binding to RNA. The algorithm involves iteratively deleting nucleotides from an experimentally determined structure of an RNA-ligand complex, performing energy minimizations and molecular dynamics on each truncated structure, and assessing which truncations do not prohibit RNA binding to the ligand. The algorithm allows prediction of the effects of sequence modifications on RNA structural stability and ligand-binding energy. We have implemented the algorithm in the AMBER suite of programs, but it could be implemented in any molecular mechanics force field parameterized for nucleic acids. Test cases are presented to show the utility and accuracy of the methodology. © 2006 Wiley Periodicals, Inc. J Comput Chem, 2006 [source]

Motifs in nucleic acids: Molecular mechanics restraints for base pairing and base stacking
JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 1 2003
Stephen C. Harvey
Abstract: In building and refining nucleic acid structures, it is often desirable to enforce particular base pairing and/or base stacking interactions. Energy-based modeling programs with classical molecular mechanics force fields do not lend themselves to the easy imposition of penalty terms corresponding to such restraints, because the requirement that two bases lie in or near the same plane (pairing) or that they lie in parallel planes (stacking) cannot be easily expressed in terms of traditional interactions involving two atoms (bonds), three atoms (angles), or four atoms (torsions). Here we derive expressions that define a collection of pseudobonds and pseudoangles through which molecular mechanics restraints for base pairing and stacking can be imposed. We have implemented these restraints in the JUMNA package for modeling DNA and RNA structures. JUMNA scripts can specify base pairing with a variety of standard geometries (Watson–Crick, Hoogsteen, wobble, etc.), or with user-defined geometries; they can also specify stacking arrangements. We have also implemented "soft-core" functions to modify van der Waals and electrostatic interactions to avoid steric conflicts in particularly difficult refinements where two backbones need to pass through one another. Test cases are presented to show the utility of the method. The restraints could be adapted for implementation in other molecular mechanics packages. © 2002 Wiley Periodicals, Inc. J Comput Chem 24: 1–9, 2003 [source]

Affective Modelling: Profiling Geometrical Models with Human Emotional Responses
COMPUTER GRAPHICS FORUM, Issue 7 2009
Cheng-Hung Lo
Abstract: In this paper, a novel concept, Affective Modelling, is introduced to encapsulate the idea of creating 3D models based on the emotional responses that they may invoke. Research on perceptually-related issues in Computer Graphics focuses mostly on the rendering aspect.
Low-level perceptual criteria taken from established Psychology theories or identified by purposefully-designed experiments are utilised to reduce rendering effort or derive quality evaluation schemes. For modelling, similar ideas have been applied to optimise the level of geometrical detail. High-level cognitive responses such as emotions/feelings are less addressed in the graphics literature. This paper investigates the possibility of incorporating emotional/affective factors into 3D model creation. Using a glasses frame model as our test case, we demonstrate a methodological framework to build the links between human emotional responses and geometrical features. We design and carry out a factorial experiment to systematically analyse how certain shape factors individually and interactively influence the viewer's impression of the shape of glasses frames. The findings serve as a basis for establishing computational models that facilitate emotionally-guided 3D modelling. [source]

Ibis: a flexible and efficient Java-based Grid programming environment
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 7-8 2005
Rob V. van Nieuwpoort
Abstract: In computational Grids, performance-hungry applications need to simultaneously tap the computational power of multiple, dynamically available sites. The crux of designing Grid programming environments stems exactly from the dynamic availability of compute cycles: Grid programming environments (a) need to be portable to run on as many sites as possible, (b) need to be flexible to cope with different network protocols and dynamically changing groups of compute nodes, while (c) they need to provide efficient (local) communication that enables high-performance computing in the first place. Existing programming environments are either portable (Java), flexible (Jini, Java Remote Method Invocation (RMI)), or highly efficient (Message Passing Interface).
No system combines all three properties that are necessary for Grid computing. In this paper, we present Ibis, a new programming environment that combines Java's 'run everywhere' portability both with flexible treatment of dynamically available networks and processor pools, and with highly efficient, object-based communication. Ibis can transfer Java objects very efficiently by combining streaming object serialization with a zero-copy protocol. Using RMI as a simple test case, we show that Ibis outperforms existing RMI implementations, achieving up to nine times higher throughput with trees of objects. Copyright © 2005 John Wiley & Sons, Ltd. [source]

Smaller and more numerous harvesting gaps emulate natural forest disturbances: a biodiversity test case using rove beetles (Coleoptera, Staphylinidae)
DIVERSITY AND DISTRIBUTIONS, Issue 6 2008
Jan Klimaszewski
Abstract: Aim To evaluate changes in the abundance, species richness and community composition of rove beetles (Coleoptera, Staphylinidae) in response to three configurations of experimental gap cuts and to the effects of ground scarification in early-succession yellow-birch-dominated boreal forest. In each experimental treatment, the total forest removed was held constant (35% removal by partial cutting, with a concomitant decrease in gap size) but the total number of gaps was increased (two, four and eight gaps, respectively), resulting in an experimental increase in the total amount of 'edge' within each stand. Location Early-succession yellow-birch-dominated forests, Quebec, Canada. Methods Pitfall traps, ANOVA, the MIXED procedure in SAS®, post hoc Tukey's adjustment, rarefaction estimates, and sum-of-squares and distance-based multivariate regression trees (ssMRT, dbMRT). Results Estimates of species richness using rarefaction were highest in clearcut and two-gap treatments, decreased in smaller and more numerous gaps, and were significantly higher in scarified areas than in unscarified areas.
ANOVA indicated a significant impact of harvesting on the overall standardized catch. Post hoc Tukey's tests indicated that the total catch of all rove beetles was significantly higher in uncut forests than in the treated areas. Both sum-of-squares and distance-based multivariate regression trees indicated that the community structure of rove beetles differed among treatments. Assemblages were grouped into (a) control plots, (b) four- and eight-gap treatments and (c) two-gap and clearcut treatments. Main conclusions Rove beetle composition responded significantly to increasing gap size. Composition among intermediate and small-sized gap treatments (four- and eight-gap treatments) was more similar to uncut control forests than were larger gap treatments (two-gap) and clearcuts. Effects of scarification were nested within the harvested treatments. When the total area of forest removed is held constant, smaller, more numerous gaps are more similar to uncut control stands than to larger gaps and fall more closely within the natural forest heterogeneity. [source]

GENOMICS IN THE LIGHT OF EVOLUTIONARY TRANSITIONS
EVOLUTION, Issue 6 2010
Pierre M. Durand
Molecular biology has entrenched the gene as the basic hereditary unit, and genomes are often considered little more than collections of genes. However, new concepts and genomic data have emerged which suggest that the genome has a unique place in the hierarchy of life. Despite this, a framework for the genome as a major evolutionary transition has not been fully developed. Instead, genome origin and evolution are frequently considered as a series of neutral or nonadaptive events. In this article, we argue for a Darwinian multilevel selection interpretation of the origin of the genome. We base our arguments on the multilevel selection theory of hypercycles of cooperating genes and on predictions that gene-level trade-offs in viability and reproduction can help drive evolutionary transitions.
We consider genomic data involving mobile genetic elements as a test case of our view. A new concept of the genome as a discrete evolutionary unit emerges, and the gene–genome juncture is positioned as a major evolutionary transition in individuality. This framework offers a fresh perspective on the origin of macromolecular life and sets the scene for a new, emerging line of inquiry: the evolutionary ecology of the genome. [source]

Application of a simple enthalpy-based pyrolysis model in numerical simulations of pyrolysis of charring materials
FIRE AND MATERIALS, Issue 1 2010
S. R. Wasan
Abstract: A new, simple pyrolysis model for charring materials is applied to several numerical and experimental test cases with variable externally imposed heat fluxes. The model is based on enthalpy. A piecewise-linear temperature field representation is adopted, in combination with an estimate for the pyrolysis front position. Chemical kinetics are not accounted for: the pyrolysis process takes place in an infinitely thin front, at the 'pyrolysis temperature'. The evolution in time of the pyrolysis gas mass flow rates and surface temperatures is discussed. The presented model is able to reproduce numerical reference results, which were obtained with the more complex moving-mesh model. It performs better than the integral model. We illustrate good agreement with numerical reference results for variable thickness and boundary conditions. This reveals that the model provides good results for the entire range of thermally thin and thermally thick materials. It also shows that possible interruption of the pyrolysis process, due to excessive heat losses, is automatically predicted with the present approach. Finally, an experimental test case is considered. Copyright © 2009 John Wiley & Sons, Ltd.
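The infinitely-thin-front assumption in the pyrolysis abstract above reduces the quasi-steady gas mass flux to an energy balance: whatever net heat reaches the front is spent on pyrolysis, and the front simply stalls when losses exceed the imposed flux. A hedged sketch of that balance (the flux, loss and heat-of-pyrolysis numbers below are hypothetical, not the paper's):

```python
def pyrolysis_mass_flux_history(q_ext_history, q_loss, dh_pyr):
    """Quasi-steady mass flux (kg/m2/s) of pyrolysis gases for an
    infinitely thin front: m'' = max(q_ext - q_loss, 0) / dh_pyr.
    A zero entry means the front has stalled, i.e. the automatic
    interruption of pyrolysis noted in the abstract."""
    return [max(q - q_loss, 0.0) / dh_pyr for q in q_ext_history]

# Hypothetical history: 50 kW/m2 then 10 kW/m2 imposed, 20 kW/m2 losses,
# 1 MJ/kg heat of pyrolysis.
fluxes = pyrolysis_mass_flux_history([50e3, 10e3], q_loss=20e3, dh_pyr=1.0e6)
```

The second entry drops to zero because the imposed flux no longer covers the losses, which is the interruption behaviour the model predicts automatically.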
[source]

Distinguishing between naturally and culturally flaked cobbles: A test case from Alberta, Canada
GEOARCHAEOLOGY: AN INTERNATIONAL JOURNAL, Issue 7 2004
Jason David Gillespie
Distinguishing between naturally and culturally produced, simply flaked cobbles has been a problem for proponents of a pre-Clovis occupation in the Americas. Several sites in Alberta have been assigned a pre-Clovis status based on the presence of simply flaked cobbles found in Late Pleistocene till deposits. Historically, these types of assemblages have been assigned a cultural status based on subjective criteria and appeals to the analyst's expertise. To determine the archaeological status of two such assemblages from Alberta (Varsity Estates and Silver Springs), they were compared to a known natural assemblage and two known cultural assemblages. Chi-square testing was used to evaluate several lithic attributes. Only those attributes that statistically differentiated between natural and cultural assemblages were used for further analyses. All cobbles were then scored using these attributes. A point was awarded when a statistically significant attribute of human manufacture was present. These points were then totaled, providing an aggregate score for each cobble. These scores were plotted to determine whether the test assemblages had closer affinities with the known natural or known cultural assemblages. The results indicate that the proposed pre-Clovis assemblages have closer affinities to known natural assemblages than to cultural assemblages. Our results suggest that these sites provide no evidence for a pre-Clovis occupation in the Americas. © 2004 Wiley Periodicals, Inc. [source]

Wavefield Migration plus Monte Carlo Imaging of 3D Prestack Seismic Data
GEOPHYSICAL PROSPECTING, Issue 5 2006
Ernesto Bonomi
Abstract: Prestack wave-equation migration has proved to be a very accurate shot-by-shot imaging tool.
However, 3D imaging with this technique of a large field acquisition, especially one with hundreds of thousands of shots, is prohibitively costly. Simply adapting the technique to migrate many superposed shot-gathers simultaneously would render 3D wavefield prestack migration cost-effective, but it introduces uncontrolled non-physical interference among the shot-gathers, making the final image useless. However, it has been observed that multishot signal interference can be kept under some control by averaging over many such images, if each multishot migration is modified by a random phase encoding of the frequency spectra of the seismic traces. In this article, we analyse this technique, giving a theoretical basis for its observed behaviour: the error of the image produced by averaging over M phase-encoded migrations decreases as 1/M. Furthermore, we expand the technique and define a general class of Monte Carlo encoding methods for which the noise variance of the average imaging condition decreases as 1/M; these methods thus all converge asymptotically to the correct reflectivity map, without generating prohibitive costs. The theoretical asymptotic behaviour is illustrated for three such methods on a 2D test case. Numerical verification in 3D is then presented for one such method implemented with a 3D PSPI extrapolation kernel for two test cases: the SEG–EAGE salt model and a real test constructed from field data. [source]

If the Armada Had Landed: A Reappraisal of England's Defences in 1588
HISTORY, Issue 311 2008
NEIL YOUNGER
The defeat of the Spanish Armada in 1588 stands as one of the greatest triumphs of Elizabeth I's reign but, the success of the navy notwithstanding, received wisdom presents the land defences as woefully inadequate. This article shows that the existing picture of the English preparations is flawed in several ways and that they were better organized, more efficient and more willing than has been recognized.
The privy council was called upon to deploy limited forces to defend a long coastline against an unpredictable attacker, and the evidence shows that they contrived to maximize the effectiveness of the available resources whilst balancing the calls of military practicality, financial necessity and political constraints. An assessment is also made of the response from the counties, using the mobilization as a test case of the structures put in place by the Elizabethan regime to deal with such an emergency. [source]

Novel tools for extraction and validation of disease-related mutations applied to Fabry disease
HUMAN MUTATION, Issue 9 2010
Remko Kuipers
Abstract: Genetic disorders are often caused by nonsynonymous nucleotide changes in one or more genes associated with the disease. Specific amino acid changes, however, can lead to large variability of phenotypic expression. For many genetic disorders this results in an increasing number of publications describing phenotype-associated mutations in disorder-related genes. Keeping up with this stream of publications is essential for molecular diagnostics and translational research purposes, but often impossible due to time constraints: there are simply too many articles to read. To help solve this problem, we have created Mutator, an automated method to extract mutations from full-text articles. Extracted mutations are cross-referenced to sequence data, and a scoring method is applied to distinguish false positives. To analyze stored and new mutation data for their (potential) effect we have developed Validator, a Web-based tool specifically designed for DNA diagnostics. Fabry disease, a monogenic disorder of the GLA gene, was used as a test case. A structure-based sequence alignment of the alpha-amylase superfamily was used to validate results. We have compared our data with existing Fabry mutation data sets obtained from the HGMD and Swiss-Prot databases.
Compared to these data sets, Mutator extracted 30% additional mutations from the literature. Hum Mutat 31:1026–1032, 2010. © 2010 Wiley-Liss, Inc. [source]

Testing and improving experimental parameters for the use of low molecular weight targets in array-CGH experiments
HUMAN MUTATION, Issue 11 2006
Marianne Stef
Abstract: Array-comparative genomic hybridization (CGH) has evolved into a useful technique for the detection and characterization of deletions and, to a lesser extent, of duplications. The resolution of the technique is dictated by the genomic distance between targets spotted on the microarray, and by the targets' sizes. The use of region-specific, high-resolution microarrays is a specific goal when studying regions that are prone to rearrangements, such as those involved in deletion syndromes. The aim of the present study was to evaluate the best experimental conditions to be used for array-CGH analysis using low molecular weight (LMW) targets. The parameters tested were: the target concentration, the way LMW targets are prepared (either as linearized plasmids or as purified PCR products), and the way the targets are attached to the array-CGH slide (in a random fashion on aminosilane-coated slides, or by one amino-modified end on epoxysilane-coated slides). As a test case, we constructed a microarray harboring LMW targets located in the CREBBP gene, mutations of which cause Rubinstein-Taybi syndrome (RTS). From 10 to 15% of RTS patients have a CREBBP deletion. We showed that aminosilane- and epoxysilane-coated slides were equally efficient with targets above 1,000 bp in size. On the other hand, with the smallest targets, especially those below 500 bp, epoxysilane-coated slides were superior to aminosilane-coated slides, which did not allow deletion detection.
Use of the high-resolution array allowed us to map intragenic breakpoints with precision and to identify a very small deletion and a duplication that were not detected by the currently available techniques for finding CREBBP deletions. Hum Mutat 27(11), 1143–1150, 2006. © 2006 Wiley-Liss, Inc. [source]

Darfur and the failure of the responsibility to protect
INTERNATIONAL AFFAIRS, Issue 6 2007
ALEX DE WAAL
When official representatives of more than 170 countries adopted the principle of the 'responsibility to protect' (R2P) at the September 2005 World Summit, Darfur was quickly identified as the test case for this new doctrine. The general verdict is that the international community has failed the test due to lack of political will. This article argues that the failure is real but that it is more fundamentally located within the doctrine of R2P itself. Fulfilling the aspiration of R2P demands an international protection capability that does not exist now and cannot be realistically expected. The critical weakness in R2P is that the 'responsibility to react' has been framed as coercive protection, which attempts to be a middle way between classic peacekeeping and outright military intervention that can be undertaken without the consent of the host government. Thus far, theoretical and practical attempts to create this intermediate space for coercive protection have failed to resolve basic strategic and operational issues. In addition, the very act of raising the prospect of external military intervention for human protection purposes changes and distorts the political process and can in fact make a resolution more difficult.
Following an introductory section that provides background to the war in Darfur and international engagement, this article examines the debates over R2P that swirled around the Darfur crisis and the operational concepts developed for the African Union Mission in Sudan (AMIS) and its hybrid successor, the UN–African Union Mission in Darfur (UNAMID), especially during the Abuja peace negotiations. Three operational concepts are examined: ceasefire, disarmament and civilian protection. Unfortunately, the international policy priority of bringing UN troops to Darfur had an adverse impact on the Darfur peace talks without grappling with the central question of what international forces would do to resolve the crisis. Advocacy for R2P set an unrealistic ideal which became the enemy of achievable goals. [source]

On the stability and convergence of a Galerkin reduced order model (ROM) of compressible flow with solid wall and far-field boundary treatment
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 10 2010
I. Kalashnikova
Abstract: A reduced order model (ROM) based on the proper orthogonal decomposition (POD)/Galerkin projection method is proposed as an alternative discretization of the linearized compressible Euler equations. It is shown that the numerical stability of the ROM is intimately tied to the choice of inner product used to define the Galerkin projection. For the linearized compressible Euler equations, a symmetry transformation motivates the construction of a weighted L2 inner product that guarantees certain stability bounds satisfied by the ROM. Sufficient conditions for well-posedness and stability of the present Galerkin projection method applied to a general linear hyperbolic initial boundary value problem (IBVP) are stated and proven. Well-posed and stable far-field and solid wall boundary conditions are formulated for the linearized compressible Euler ROM using these more general results.
A convergence analysis employing a stable penalty-like formulation of the boundary conditions reveals that the ROM solution converges to the exact solution with refinement both of the numerical solution used to generate the ROM and of the POD basis. An a priori error estimate for the computed ROM solution is derived and examined using a numerical test case. Published in 2010 by John Wiley & Sons, Ltd. [source]

FLEXMG: A new library of multigrid preconditioners for a spectral/finite element incompressible flow solver
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 12 2010
M. Rasquin
Abstract: A new library called FLEXMG has been developed for a spectral/finite element incompressible flow solver called SFELES. FLEXMG allows the use of various types of iterative solvers preconditioned by algebraic multigrid methods. Two families of algebraic multigrid preconditioners have been implemented, namely smooth aggregation-type and non-nested finite element-type. Unlike pure gridless multigrid, both of these families use the information contained in the initial fine mesh. A hierarchy of coarse meshes is also needed for the non-nested finite element-type multigrid, so that our approaches can be considered as hybrid. Our aggregation-type multigrid is smoothed with either a constant or a linear least-square fitting function, whereas the non-nested finite element-type multigrid is already smooth by construction. All these multigrid preconditioners are tested as stand-alone solvers or coupled with a GMRES method. After analyzing the accuracy of the solutions obtained with our solvers on a typical test case in fluid mechanics, their performance in terms of convergence rate, computational speed and memory consumption is compared with the performance of a direct sparse LU solver as a reference. Finally, the importance of using smooth interpolation operators is also underlined in the study. Copyright © 2010 John Wiley & Sons, Ltd.
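The POD/Galerkin construction described in the ROM abstract above amounts to extracting modes from solution snapshots in a weighted L2 inner product and projecting the linear operator onto them. A plain-NumPy sketch of that construction (identity weight by default; the specific weighting matrix arising from the paper's symmetry transformation is not reproduced here):

```python
import numpy as np

def pod_basis(snapshots, k, weight=None):
    """First k POD modes of a snapshot matrix (columns = state vectors)
    in the inner product <u, v> = u^T W v. Modes are W-orthonormal."""
    if weight is None:
        weight = np.eye(snapshots.shape[0])
    L = np.linalg.cholesky(weight)          # W = L L^T
    u, s, _ = np.linalg.svd(L @ snapshots, full_matrices=False)
    return np.linalg.solve(L.T, u[:, :k])   # map back: phi = L^{-T} u_k

def galerkin_rom(A, phi, weight=None):
    """Galerkin projection of x' = A x onto span(phi) in the W inner
    product: reduced operator phi^T W A phi (phi must be W-orthonormal)."""
    if weight is None:
        weight = np.eye(A.shape[0])
    return phi.T @ weight @ A @ phi
```

The abstract's stability argument is precisely about choosing W so that this reduced operator inherits energy bounds from the full system; with W = I one recovers the plain L2 Galerkin ROM.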
[source] Numerical analysis of a new Eulerian-Lagrangian finite element method applied to steady-state hot rolling processes INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 5 2005 Josef Synka Abstract A finite element code for steady-state hot rolling processes of rigid-visco-plastic materials under plane-strain conditions was developed in a mixed Eulerian-Lagrangian framework. This special setup allows for a direct calculation of the local deformations occurring at the free surfaces outside the contact region between the strip and the work roll. It further simplifies the implementation of displacement boundary conditions, such as the impenetrability condition. When applied to different practical hot rolling situations, ranging from thick slab to ultra-thin strip rolling, the velocity-displacement-based model (briefly denoted as vu-model) in this mixed Eulerian-Lagrangian reference system proves to be a robust and efficient method. The vu-model is validated against a solely velocity-based model (vv-model) and against elementary methods based on the Kármán-Siebel and Orowan differential equations. The latter methods, when calibrated, are known to be in line with experimental results for homogeneous deformation cases. For a massive deformation, it is further validated against the commercial finite-element software package Abaqus/Explicit. It is shown that the results obtained with the vu-model are in excellent agreement with the predictions of the vv-model and that the vu-model is even more robust than its vv-counterpart. Throughout the study we assumed a rigid cylindrical work roll; only for the homogeneous test case, we also investigated the effect of an elastically deformable work roll within the framework of the Jortner Green's function method. The new modelling approach combines the advantages of conventional Eulerian and Lagrangian modelling concepts and can be extended to three dimensions in a straightforward manner.
Copyright © 2004 John Wiley & Sons, Ltd. [source] Classical and advanced multilayered plate elements based upon PVD and RMVT. INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 3 2002 Part 2: Numerical implementations Abstract This paper presents numerical evaluations related to the multilayered plate elements which were proposed in the companion paper (Part 1). Two-dimensional modellings with linear and higher-order (up to fourth order) expansion in the z-plate/layer thickness direction have been implemented for both displacements and transverse stresses. Layer-wise as well as equivalent single-layer modellings are considered within both frameworks of the principle of virtual displacements (PVD) and the Reissner mixed variational theorem (RMVT). Such a variety has led to the implementation of 22 plate theories. As far as finite element approximation is concerned, three quadrilaterals have been considered (four-, eight- and nine-noded plate elements). As a result, 22×3 different finite plate elements have been compared in the present analysis. The automatic procedure described in Part 1, which made extensive use of indicial notations, has herein been referred to in the considered computer implementations. An assessment has been made as far as convergence rates, numerical integrations and comparison to corresponding closed-form solutions are concerned. Extensive comparison to early and recently available results has been made for sample problems related to laminated and sandwich structures. Classical formulations, full mixed, hybrid, as well as three-dimensional solutions have been considered in such a comparison. Numerical substantiation of the importance of fulfilling zig-zag effects and interlaminar equilibria is given. The superiority of RMVT-formulated finite elements over those related to PVD has been concluded. Two test cases are proposed as 'desk-beds' to establish the accuracy of the several theories.
Results related to all the developed theories are presented for the first test case. The second test case, which is related to sandwich plates, restricts the comparison to the most significant implemented finite elements. It is proposed to refer to these test cases to establish the accuracy of existing or new higher-order, refined or improved finite elements for multilayered plate analyses. Copyright © 2002 John Wiley & Sons, Ltd. [source] Shoreline tracking and implicit source terms for a well balanced inundation model INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 10 2010 Giovanni Franchello Abstract The HyFlux2 model has been developed to simulate severe inundation scenarios due to dam break, flash flood and tsunami-wave run-up. The model solves the conservative form of the two-dimensional shallow water equations using the finite volume method. The interface flux is computed by a Flux Vector Splitting method for shallow water equations based on a Godunov-type approach. A second-order scheme is applied to the water surface level and velocity, providing results with high accuracy and ensuring the balance between fluxes and sources even for complex bathymetry and topography. Physical models are included to deal with bottom steps and shorelines. The second-order scheme, together with the shoreline-tracking method and the implicit source term treatment, makes the model well balanced with respect to mass and momentum conservation laws, providing reliable and robust results. The developed model is validated in this paper with a 2D numerical test case and with the Okushiri tsunami run-up problem. It is shown that the HyFlux2 model is able to simulate inundation problems, with a satisfactory prediction of the major flow characteristics such as water depth, water velocity, flood extent, and flood-wave arrival time. The results provided by the model are of great importance for risk assessment and management.
Copyright © 2009 John Wiley & Sons, Ltd. [source] Influence of reaction mechanisms, grid spacing, and inflow conditions on the numerical simulation of lifted supersonic flames INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 12 2010 P. Gerlinger Abstract The simulation of supersonic combustion requires finite-rate chemistry because chemical and fluid mechanical time scales may be of the same order of magnitude. The size of the chosen reaction mechanism (number of species and reactions involved) has a strong influence on the computational time and thus should be chosen carefully. This paper investigates several hydrogen/air reaction mechanisms frequently used in supersonic combustion. It is shown that at low flight Mach numbers of a supersonic combustion ramjet (scramjet), some kinetic schemes can cause highly erroneous results. Moreover, extremely fine computational grids are required in the lift-off region of supersonic flames to obtain grid-independent solutions. The fully turbulent Mach 2 combustion experiment of Cheng et al. (Comb. Flame 1994; 99: 157-173) is chosen to investigate the influences of different reaction mechanisms, grid spacing, and inflow conditions (contaminations caused by precombustion). A detailed analysis of the experiment is given and errors of previous simulations are identified. Thus, the paper provides important information for an accurate simulation of the Cheng et al. experiment. The importance of this experiment results from the fact that it is the only supersonic combustion test case where temperature and species fluctuations have been measured simultaneously. Such data are needed for the validation of probability density function methods. Copyright © 2009 John Wiley & Sons, Ltd. [source] PID adaptive control of incremental and arclength continuation in nonlinear applications INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 11 2009 A. M. P.
Valli Abstract A proportional-integral-derivative (PID) control approach is developed, implemented and investigated numerically in conjunction with continuation techniques for nonlinear problems. The associated algorithm uses PID control to adapt the parameter stepsize for branch-following strategies such as those applicable to turning point and bifurcation problems. As representative continuation strategies, incremental Newton, Euler-Newton and pseudo-arclength continuation techniques are considered. Supporting numerical experiments are conducted for finite element simulation of the 'driven cavity' Navier-Stokes benchmark over a range in Reynolds number, the classical Bratu turning point problem over a reaction parameter range, and for coupled fluid flow and heat transfer over a range in Rayleigh number. Computational performance using PID stepsize control in conjunction with inexact Newton-Krylov solution for coupled flow and heat transfer is also examined for a 3D test case. Copyright © 2009 John Wiley & Sons, Ltd. [source] Momentum/continuity coupling with large non-isotropic momentum source terms INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 9 2009 J. D. Franklin Abstract Pressure-based methods such as the SIMPLE algorithm are frequently used to determine a coupled solution of the component momentum equations and the continuity equation. This paper presents a colocated variable pressure correction algorithm for control volumes of polyhedral/polygonal cell topologies. The correction method is presented independently of the spatial approximation. The presence of non-isotropic momentum source terms is included in the proposed algorithm to ensure its applicability to multi-physics applications such as gas and particulate flows. Two classic validation test cases are included along with a newly proposed test case specific to multiphase flows.
The classic validation test cases demonstrate the application of the proposed algorithm on truly arbitrary polygonal/polyhedral cell meshes. A comparison between the current algorithm and commercially available software is made to demonstrate that the proposed algorithm is competitively efficient. The newly proposed test case demonstrates the benefits of the current algorithm when applied to a multiphase flow situation. The numerical results from this case show that the proposed algorithm is more robust than other methods previously proposed. Copyright © 2009 John Wiley & Sons, Ltd. [source] Numerical simulation of cavitating flow in 2D and 3D inducer geometries INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 2 2005 O. Coutier-Delgosha Abstract A computational method is proposed to simulate 3D unsteady cavitating flows in spatial turbopump inducers. It is based on the code FineTurbo, adapted to take into account two-phase flow phenomena. The initial model is a time-marching algorithm devoted to compressible flow, associated with a low-speed preconditioner to treat low Mach number flows. The presented work covers the 3D implementation of a physical model developed at LEGI over several years to simulate 2D unsteady cavitating flows. It is based on a barotropic state law that relates the fluid density to the pressure variations. A modification of the preconditioner is proposed to treat efficiently both highly compressible two-phase flow areas and weakly compressible single-phase flow conditions. The numerical model is applied to time-accurate simulations of cavitating flow in spatial turbopump inducers. The first geometry is a 2D Venturi-type section designed to simulate an inducer blade suction side. Results obtained with this simple test case, including the study of its general cavitating behaviour, numerical tests, and precise comparisons with previous experimental measurements inside the cavity, lead to a satisfactory validation of the model.
A complete three-dimensional rotating inducer geometry is then considered, and its quasi-static behaviour in cavitating conditions is investigated. Numerical results are compared to experimental measurements and visualizations, and a promising agreement is obtained. Copyright © 2004 John Wiley & Sons, Ltd. [source] |
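As a closing illustration, the PID stepsize adaptation described in the Valli et al. abstract earlier on this page can be sketched generically. The update rule below is one common form of a PID stepsize controller for continuation and timestepping; the gains, the error measure, and the function name are illustrative assumptions, not values or notation taken from the paper.

```python
def pid_stepsize(ds, errs, tol, kP=0.075, kI=0.175, kD=0.01):
    """One PID update of a continuation parameter stepsize.

    ds   : current stepsize
    errs : (e_nm2, e_nm1, e_n), the last three error measures, e.g. the
           relative change of the solution between continuation steps
    tol  : target error level
    The gains kP, kI, kD are illustrative defaults, not the paper's values.
    """
    e_nm2, e_nm1, e_n = errs
    # proportional, integral and derivative factors acting on the
    # history of the error measure
    factor = ((e_nm1 / e_n) ** kP
              * (tol / e_n) ** kI
              * (e_nm1 ** 2 / (e_n * e_nm2)) ** kD)
    return ds * factor

# if the error sits exactly at the target, the stepsize is unchanged;
# a decreasing error history lets the stepsize grow for the next step
ds_same = pid_stepsize(0.1, (1e-3, 1e-3, 1e-3), 1e-3)
ds_grow = pid_stepsize(0.1, (2e-3, 1.5e-3, 1e-3), 1e-3)
```

The multiplicative form keeps stepsize changes smooth, which is the property such controllers exploit when following a solution branch through turning points.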