Appropriate Choice (appropriate + choice)



Selected Abstracts


Migration velocity analysis and waveform inversion

GEOPHYSICAL PROSPECTING, Issue 6 2008
William W. Symes
ABSTRACT Least-squares inversion of seismic reflection waveform data can reconstruct remarkably detailed models of subsurface structure and take into account essentially any physics of seismic wave propagation that can be modelled. However, the waveform inversion objective has many spurious local minima, hence convergence of descent methods (mandatory because of problem size) to useful Earth models requires accurate initial estimates of long-scale velocity structure. Migration velocity analysis, on the other hand, is capable of correcting substantially erroneous initial estimates of velocity at long scales. Migration velocity analysis is based on prestack depth migration, which is in turn based on linearized acoustic modelling (Born or single-scattering approximation). Two major variants of prestack depth migration, using binning of surface data and Claerbout's survey-sinking concept respectively, are in widespread use. Each type of prestack migration produces an image volume depending on redundant parameters and supplies a condition on the image volume, which expresses consistency between data and velocity model and is hence a basis for velocity analysis. The survey-sinking (depth-oriented) approach to prestack migration is less subject to kinematic artefacts than is the binning-based (surface-oriented) approach. Because kinematic artefacts strongly violate the consistency or semblance conditions, this observation suggests that velocity analysis based on depth-oriented prestack migration may be more appropriate in kinematically complex areas. Appropriate choice of objective (differential semblance) turns either form of migration velocity analysis into an optimization problem, for which Newton-like methods exhibit little tendency to stagnate at nonglobal minima. The extended modelling concept links migration velocity analysis to the apparently unrelated waveform inversion approach to estimation of Earth structure: from this point of view, migration velocity analysis is a solution method for the linearized waveform inversion problem. Extended modelling also provides a basis for a nonlinear generalization of migration velocity analysis. Preliminary numerical evidence suggests a new approach to nonlinear waveform inversion, which may combine the global convergence of velocity analysis with the physical fidelity of model-based data fitting. [source]
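
A minimal numerical sketch of the differential semblance idea mentioned above (illustrative only, not Symes' operator): the objective penalises variation of a migrated image volume along its redundant parameter (e.g. offset), so it vanishes when the image is flat in that parameter, i.e. when data and velocity model are consistent. The synthetic image volumes and the moveout used below are assumptions made purely for illustration.

```python
# Illustrative sketch (not the paper's implementation): a differential-semblance
# style objective on a synthetic image volume I(z, h), where h is the redundant
# parameter (e.g. offset). The objective penalises variation of the image along h;
# it is minimised when the image is flat in h, i.e. data and velocity are consistent.
import numpy as np

def differential_semblance(image_volume, dh=1.0):
    """image_volume: 2-D array indexed as [depth, redundant parameter h]."""
    d_dh = np.diff(image_volume, axis=1) / dh   # derivative along the redundant axis
    return 0.5 * np.sum(d_dh ** 2)

# A velocity-consistent volume (flat in h) scores ~0; an inconsistent one does not.
z = np.linspace(0, 1, 200)
flat = np.tile(np.sin(20 * z)[:, None], (1, 16))                        # same trace at every h
tilted = np.array([np.sin(20 * (z + 0.02 * h)) for h in range(16)]).T   # residual moveout with h
print(differential_semblance(flat), differential_semblance(tilted))
```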


MPP programs emerging around the world

JOURNAL OF POLICY ANALYSIS AND MANAGEMENT, Issue 1 2008
Iris Geva-May
This paper examines public policy and management programs in Canada, Europe, Australia, and New Zealand, and makes comparisons with similar programs in the United States. Our study of public policy programs shows that there are many challenges ahead in terms of making good decisions on the form and content of programs that will add value to governments and citizens. Appropriate choices in terms of program design and pedagogy will reflect different economic, social, environmental, and cultural influences and will be shaped by history, values, and the roles of public policy and management professionals within a particular governmental context. [source]


Reconstructing ancestral ecologies: challenges and possible solutions

DIVERSITY AND DISTRIBUTIONS, Issue 1 2006
Christopher R. Hardy
ABSTRACT There are several ways to extract information about the evolutionary ecology of clades from their phylogenies. Of these, character state optimization and 'ancestor reconstruction' are perhaps the most widely used despite their being fraught with assumptions and potential pitfalls. Requirements for robust inferences of ancestral traits in general (i.e. those applicable to all types of characters) include accurate and robust phylogenetic hypotheses, complete species-level sampling and the appropriate choice of optimality criterion. Ecological characters, however, also require careful consideration of methods for accounting for intraspecific variability. Such methods include 'Presence Coding' and 'Polymorphism Coding' for discrete ecological characters, and 'Range Coding' and 'MaxMin Coding' for continuously variable characters. Ultimately, however, historical inferences such as these are, as with phylogenetic inference itself, associated with a degree of uncertainty. Statistically based uncertainty estimates are available within the context of model-based inference (e.g. maximum likelihood and Bayesian); however, these measures are only as reliable as the chosen model is appropriate. Although parsimony is generally thought to preclude the possibility of measuring relative uncertainty or support for alternative possible reconstructions, certain useful non-statistical support measures (i.e. 'Sharkey support' and 'Parsimony support') are applicable to parsimony reconstructions. [source]
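
As a concrete illustration of the parsimony-based character state optimization referred to above, the sketch below runs Fitch's small-parsimony down-pass on a toy four-taxon tree; the species names and the discrete ecological states are hypothetical, and a real analysis would use the coding schemes and uncertainty measures discussed in the abstract.

```python
# Minimal sketch of parsimony-based character state optimization (Fitch's
# small-parsimony down-pass) on a toy tree; species names and states are hypothetical.
def fitch_down_pass(tree, tip_states):
    """tree: nested tuples of tip names; returns (state set at root, parsimony steps)."""
    if isinstance(tree, str):                       # a tip
        return {tip_states[tree]}, 0
    left, right = tree
    sl, cl = fitch_down_pass(left, tip_states)
    sr, cr = fitch_down_pass(right, tip_states)
    inter = sl & sr
    if inter:                                       # agreement: no extra step
        return inter, cl + cr
    return sl | sr, cl + cr + 1                     # conflict: one extra step

tree = (("sp_A", "sp_B"), ("sp_C", "sp_D"))         # hypothetical 4-taxon tree
states = {"sp_A": "aquatic", "sp_B": "aquatic", "sp_C": "terrestrial", "sp_D": "aquatic"}
root_set, steps = fitch_down_pass(tree, states)
print(root_set, steps)   # ancestral state set at the root and the parsimony length
```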


Tensile and compressive damage coupling for fully-reversed bending fatigue of fibre-reinforced composites

FATIGUE & FRACTURE OF ENGINEERING MATERIALS AND STRUCTURES, Issue 6 2002
W. Van Paepegem
ABSTRACT Due to their high specific stiffness and strength, fibre-reinforced composite materials are winning through in a wide range of applications in the automotive, naval and aerospace industries. Their design for fatigue is a complicated problem and a large research effort is being spent on it today. However, there is still a need for extensive experimental testing or for large safety factors to be adopted, because numerical simulations of the fatigue damage behaviour of fibre-reinforced composites are often found to be unreliable. This is due to the limited applicability of the theoretical models developed so far, compared to the complex multi-axial fatigue loadings that composite components often have to sustain under in-service loading conditions. In this paper a new phenomenological fatigue model is presented. It is basically a residual stiffness model, but through an appropriate choice of the stress measure, the residual strength and thus final failure can be predicted as well. Two coupled growth rate equations for tensile and compressive damage describe the damage growth under tension–compression loading conditions and provide a much more general approach than the use of the stress ratio R. The model has been applied to fully-reversed bending of plain woven glass/epoxy specimens. Stress redistributions and the three stages of stiffness degradation (sharp initial decline, gradual deterioration, final failure) could be simulated satisfactorily. [source]
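
To make the idea of coupled tensile and compressive damage growth more tangible, the following sketch integrates two generic damage variables cycle by cycle and feeds them back into a residual stiffness. The rate laws, constants and load level are invented for illustration and are not Van Paepegem's actual growth-rate equations.

```python
# Schematic sketch only: two coupled damage variables (tension, compression) grown
# cycle by cycle and coupled back into a residual stiffness E = E0*(1 - D).
# The rate laws and constants below are illustrative, not the paper's equations.
import numpy as np

def simulate(n_cycles, stress_amp, E0=24e3, c_t=0.05, c_c=0.02, coupling=0.5):
    Dt, Dc = 0.0, 0.0
    stiffness = []
    for n in range(n_cycles):
        E = E0 * (1.0 - max(Dt, Dc))               # residual stiffness
        eps = stress_amp / E                        # stress measure grows as damage grows
        dDt = c_t * eps**2 * (1.0 + coupling * Dc)  # tensile growth, accelerated by Dc
        dDc = c_c * eps**2 * (1.0 + coupling * Dt)  # compressive growth, accelerated by Dt
        Dt, Dc = min(Dt + dDt, 0.99), min(Dc + dDc, 0.99)
        stiffness.append(E)
    return np.array(stiffness)

E_hist = simulate(200_000, stress_amp=200.0)
print(E_hist[0], E_hist[-1])   # initial vs. degraded stiffness: sharp decline, plateau, final failure
```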


The pros and cons of noninferiority trials

FUNDAMENTAL & CLINICAL PHARMACOLOGY, Issue 4 2003
Stuart J. Pocock
Abstract Noninferiority trials comparing new treatment with an active standard control are becoming increasingly common. This article discusses relevant issues regarding their need, design, analysis and interpretation: the appropriate choice of control group, types of noninferiority trial, ethical considerations, sample size determination and potential pitfalls to consider. [source]
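
For readers unfamiliar with how sample size enters such designs, the sketch below applies a standard textbook approximation for a noninferiority comparison of two proportions (normal approximation, equal true event rates). It is offered only as an illustration of the kind of calculation involved, not as the specific method discussed by Pocock, and it assumes SciPy is available.

```python
# A common per-arm sample-size approximation for a noninferiority trial on
# proportions (normal approximation, both treatments assumed to have the same
# true event rate p). Textbook formula given for illustration only.
from scipy.stats import norm

def n_per_arm(p, margin, alpha=0.025, power=0.9):
    z_a, z_b = norm.ppf(1 - alpha), norm.ppf(power)
    return 2 * p * (1 - p) * (z_a + z_b) ** 2 / margin ** 2

# e.g. 75% success rate, 10 percentage-point noninferiority margin
print(round(n_per_arm(0.75, 0.10)))   # roughly 394 patients per arm
```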


Using DC resistivity tomography to detect and characterize mountain permafrost

GEOPHYSICAL PROSPECTING, Issue 4 2003
Christian Hauck
ABSTRACT Direct-current (DC) resistivity tomography has been applied to different mountain permafrost regions. Despite problems with the very high resistivities of the frozen material, plausible results were obtained. Inversions with synthetic data revealed that an appropriate choice of regularization constraints was important, and that a joint analysis of several tomograms computed with different constraints was required to judge the reliability of individual features. The theoretical results were verified with three field experiments conducted in the Swiss and the Italian Alps. At the first site, near Zermatt, Switzerland, the location and the approximate lateral and vertical extent of an ice core within a moraine could be delineated. On the Murtel rock glacier, eastern Swiss Alps, a steeply dipping boundary at its frontal part was observed, and extremely high resistivities of several MΩm indicated a high ice content. The base of the rock glacier remained unresolved by the DC resistivity measurements, but it could be constrained with transient EM soundings. On another rock glacier near the Stelvio Pass, eastern Italian Alps, DC resistivity tomography allowed delineation of the rock glacier base, and the only moderately high resistivities within the rock glacier body indicated that the ice content must be lower compared with the Murtel rock glacier. [source]
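
The sensitivity to the regularization constraint noted above can be illustrated with a generic linear Tikhonov inversion, in which the same data are inverted once with a damping constraint and once with a first-difference smoothness constraint. This is a toy one-dimensional linear problem, not the 2-D DC resistivity tomography code used in the study.

```python
# Generic sketch of how the regularization constraint changes an inverted model:
# a linear Tikhonov inversion  m = argmin ||G m - d||^2 + lam * ||R m||^2
# solved for two choices of R (identity damping vs. first-difference smoothing).
import numpy as np

rng = np.random.default_rng(0)
n = 60
G = rng.normal(size=(40, n))                      # toy, under-determined kernel
m_true = np.zeros(n); m_true[20:35] = 1.0         # block anomaly (e.g. high resistivity)
d = G @ m_true + 0.05 * rng.normal(size=40)

def invert(R, lam=1.0):
    A = G.T @ G + lam * (R.T @ R)
    return np.linalg.solve(A, G.T @ d)

R_damp = np.eye(n)                                # minimum-norm (damping) constraint
R_smooth = np.diff(np.eye(n), axis=0)             # first-difference (smoothness) constraint
for name, R in [("damping", R_damp), ("smoothness", R_smooth)]:
    m = invert(R)
    print(name, "model misfit:", round(float(np.linalg.norm(m - m_true)), 2))
```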


Balancing Intermolecular and Molecule–Substrate Interactions in Supramolecular Assemblies

ADVANCED FUNCTIONAL MATERIALS, Issue 2 2009
Dimas G. de Oteyza
Abstract Self-assembly of functional supramolecular nanostructures is among the most promising strategies for the further development of organic electronics. However, poor control of the interactions driving the assembly phenomena still hampers the tailored growth of designed structures. Here, an exploration of how non-covalent molecule–substrate interactions can be modified on a molecular level is described. For that purpose, mixtures of DIP and F16CuPc, two molecules with donor and acceptor character, respectively, are investigated. A detailed study of their structural and electronic properties is performed. In reference to the associated single-component layers, the growth of binary layers results in films with strongly enhanced intermolecular interactions and consequently reduced molecule–substrate interactions. This new insight into the interplay among the aforementioned interactions provides a novel strategy to balance the critical interactions in the assembly processes by the appropriate choice of molecular species in binary supramolecular assemblies, and thereby control the self-assembly of functional organic nanostructures. [source]


Computational mechanics of the steel–concrete interface

INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 2 2002
M. R. Ben Romdhane
Abstract Concrete cracking in reinforced concrete structures is governed by two mechanisms: the activation of bond forces at the steel–concrete interface and the bridge effects of the reinforcement crossing a macro-crack. The computational modelling of these two mechanisms, acting at different scales, is the main objective of this paper. The starting point is the analysis of the micro-mechanisms, leading to an appropriate choice of (measurable) state variables describing the energy state in the surface systems: on the one hand, the relative displacement between the steel and the concrete, modelling the bond activation; on the other hand, the crack opening governing the bridge effects. These displacement jumps are implemented in the constitutive model using the thermodynamics of surfaces of discontinuity. On the computational side, the constitutive model is implemented in a discrete crack approach. A truss element with slip degrees of freedom is developed; this degree of freedom represents the relative displacement due to bond activation. In turn, the bridge effect is numerically taken into account by modifying the post-cracking behaviour of the contact elements representing discrete concrete cracks crossed by a rebar. First simulation results obtained with this model show good agreement in crack pattern and steel stress distribution with micro-mechanical and experimental results. Copyright © 2001 John Wiley & Sons, Ltd. [source]


Iterative correlation-based controller tuning

INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 8 2004
A. Karimi
Abstract This paper gives an overview of the theoretical results of recently developed algorithms for iterative controller tuning based on the correlation approach. The basic idea is to decorrelate the output error between the achieved and designed closed-loop systems by iteratively tuning the controller parameters. Two different approaches are investigated. In the first one, a correlation equation involving a vector of instrumental variables is solved using the stochastic approximation method. It is shown that, with an appropriate choice of instrumental variables and a finite number of data at each iteration, the algorithm converges to the solution of the correlation equation. The convergence conditions are derived and the accuracy of the estimates is studied. The second approach is based on the minimization of a correlation criterion. The frequency analysis of the criterion shows that the two-norm of the error between the desired and achieved closed-loop transfer functions is minimized independently of the noise characteristics. This analysis leads to the definition of a generalized correlation criterion which allows the mixed sensitivity problem to be handled in the two-norm. Copyright © 2004 John Wiley & Sons, Ltd. [source]
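
A structural sketch of the first (stochastic approximation) approach may help: at each iteration a closed-loop experiment is run, the output error is correlated with a set of instrumental variables, and the controller parameters are updated with a decreasing gain so that the correlation is driven to zero. The toy experiment below, in which the error is linear in the parameter mismatch, is an assumption made only to show the update converging; it is not the controller or plant setup analysed by Karimi.

```python
# Structural sketch of the correlation approach to iterative controller tuning:
# run an experiment, correlate the output error with instrumental variables, and
# take a decreasing-gain stochastic-approximation step on the controller parameters.
import numpy as np

def tune(experiment, theta0, iters=50, gamma0=0.5):
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, iters + 1):
        eps, zeta = experiment(theta)             # output error and instruments from one run
        f = zeta.T @ eps / len(eps)               # empirical correlation, driven to zero
        theta = theta - (gamma0 / k) * f
    return theta

# Toy stand-in: the output error is linear in the parameter mismatch plus noise,
# and the regressors themselves serve as instruments.
rng = np.random.default_rng(1)
theta_star = np.array([0.8, -0.3])
def experiment(theta):
    zeta = rng.normal(size=(400, 2))
    eps = zeta @ (theta - theta_star) + 0.1 * rng.normal(size=400)
    return eps, zeta

print(tune(experiment, theta0=[0.0, 0.0]))        # converges towards theta_star
```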


Modelling and simulation of a double-star induction machine vector control using copper-losses minimization and parameters estimation

INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 9 2002
M.F. Mimouni
Abstract This paper shows that it is possible to extend the principle of the field-oriented control (FOC) approach to a double-star induction motor (DSIM). In the first stage, a robust variable-structure current controller based on a space-phasor voltage PWM scheme is established. In this current controller design, only the stator current and rotor speed sensors are used. In the second stage, the FOC method developed for the DSIM is motivated by the minimization of the copper losses. The developed approach uses a loss model controller (LMC) and an adaptive rotor flux observer to compute the rotor flux value that minimizes the copper losses. The control variables are the stator currents or the machine input power. Compared to the constant rotor flux approach, it is proved that higher performance is achieved. However, the sensitivity of the FOC to parameter errors of the machine still remains a problem. To guarantee the performance of the vector control, the stator and rotor resistances are adapted on-line, based on Lyapunov theory. An appropriate choice of the reference model allows a Lyapunov function to be built, by means of which the updating law can be found. The simulation results show the satisfactory behaviour of the proposed identification algorithm. Copyright © 2002 John Wiley & Sons, Ltd. [source]
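
The loss-model idea can be illustrated with a toy calculation (not the paper's actual LMC): for a given torque demand, copper losses of the form Rs·(id² + iq²) + Rr·iq² are minimised subject to torque proportional to id·iq, which fixes an optimal id/iq ratio and hence a loss-minimising rotor flux that grows with the square root of the torque. The parameter values below are arbitrary.

```python
# Toy loss-model calculation in the spirit of an LMC (not the paper's controller):
# minimise P = Rs*(id^2 + iq^2) + Rr*iq^2 subject to torque ~ k*id*iq, which fixes
# the optimal id/iq ratio and hence the optimal rotor flux (~ Lm*id).
import numpy as np

def optimal_currents(torque, Rs=1.2, Rr=0.9, k=1.5):
    ratio = np.sqrt((Rs + Rr) / Rs)                 # optimal id/iq from the minimisation
    iq = np.sqrt(torque / (k * ratio))
    id_ = ratio * iq
    return id_, iq

for T in (2.0, 5.0, 10.0):
    id_, iq = optimal_currents(T)
    losses = 1.2 * (id_**2 + iq**2) + 0.9 * iq**2
    print(f"torque {T:>4}: id={id_:.2f} A, iq={iq:.2f} A, copper losses={losses:.2f} W")
# The loss-minimising flux grows with the square root of the torque demand,
# rather than being held at a constant rated value.
```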


Effect of load distribution in path protection of MPLS

INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 4 2003
Sook-Yeon Kim
Abstract We analyse and compare a protection mechanism based on load distribution with a typical protection mechanism in a multiprotocol label switching (MPLS) network. The protection mechanism based on load distribution is modelled as a fully shared mechanism (FSM) and the typical protection mechanism is a partially shared mechanism (PSM). By comparing the FSM and the PSM, we numerically analyse the effect of load distribution in path protection of MPLS. The comparison is based on numerical equations representing the relationship between service reliability and resource utilization. From the equations, we show that both the FSM and the PSM have a tradeoff between service reliability and resource utilization. In addition, we provide solutions for the FSM and the PSM to determine the amount of bandwidth occupied according to the requested service reliability. The comparison of the FSM and the PSM shows that the PSM cannot provide greater service reliability than the FSM under the same utilization. In addition, the PSM is not capable of accommodating greater resource utilization than the FSM for the same level of service reliability. However, an appropriate choice of the number of protection paths allows the PSM to provide the same level of service reliability as the FSM. The choice is the maximum among the possible numbers of protection paths of the PSM. In short, the typical protection mechanism is as good as the FSM in terms of service reliability and resource utilization even though the FSM is an attractive alternative to the typical protection mechanism. Copyright © 2003 John Wiley & Sons, Ltd. [source]
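
The reliability/utilization trade-off discussed above can be illustrated with a deliberately crude toy model (not the numerical equations of the paper): N working paths fail independently and K shared protection paths can each restore one failure, so service reliability is the probability that the number of failures does not exceed K, while utilization falls as more protection bandwidth is reserved.

```python
# Toy illustration of the reliability/utilization trade-off in shared path
# protection (NOT the paper's equations): N working paths fail independently with
# probability q, and K shared protection paths can each restore one failure.
# Reliability = P(failures <= K); utilization = working / (working + reserved) bandwidth.
from math import comb

def reliability(N, K, q):
    return sum(comb(N, f) * q**f * (1 - q)**(N - f) for f in range(0, min(K, N) + 1))

def utilization(N, K):
    return N / (N + K)

N, q = 10, 0.02
for K in range(0, 4):
    print(K, round(reliability(N, K, q), 6), round(utilization(N, K), 3))
# More protection paths raise reliability but lower utilization; the "appropriate
# choice" of K trades one against the other.
```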


On stabilization of nonlinear systems under data rate constraints using output measurements

INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 6 2006
C. De Persis
Abstract We present a contribution to the problem of semi-global asymptotic stabilization of nonlinear continuous-time systems under data rate constraints when only output measurements are available. We consider systems which are uniformly observable and we point out that the design of an 'embedded-observer' decoder and a controller which semi-globally stabilize this class of systems under data-rate constraints requires an appropriate choice of the observer gain and the data rate. Copyright © 2006 John Wiley & Sons, Ltd. [source]


Synthesis of Chiral 2-Phospha[3]ferrocenophanes and their Behaviour as Organocatalysts in [3+2] Cyclization Reactions

ADVANCED SYNTHESIS & CATALYSIS (PREVIOUSLY: JOURNAL FUER PRAKTISCHE CHEMIE), Issue 11-12 2009
Arnaud Voituriez
Abstract Planar chiral 2-phospha[3]ferrocenophanes have been prepared via a stereoselective three-step synthesis. The key step is the lithiation of the 1,1′-disubstituted ferrocene 11 bearing (S)-2-(methoxymethyl)pyrrolidines as the chiral ortho-directing groups. The diastereoselectivity of these reactions has been mastered by an appropriate choice of the metallating agent, so as to afford a suitable access to C2-symmetrical, tetrasubstituted ferrocenes. These compounds have been converted into the enantiomerically pure 2-phospha[3]ferrocenophanes 16, via the corresponding acetates and their reactions with primary phosphines. Phosphines 16 have been used as nucleophilic catalysts in model cyclization reactions. Unlike 2-phospha[3]ferrocenophanes with stereogenic α-carbons, the planar chiral derivatives 16 proved to be suitable catalysts for these processes. Thus, for instance, phosphine 16c successfully promotes the enantioselective [3+2] annulations of allenes and enones into functionalized cyclopentenes (ees up to 96%). Among others, spirocyclic derivatives have been obtained in good yields and ees in the range 77–85%. The robustness of this catalyst has been demonstrated by recycling experiments. [source]


Solvent models for protein–ligand binding: Comparison of implicit solvent Poisson and surface generalized Born models with explicit solvent simulations

JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 6 2001
Linda Yu Zhang
Abstract Solvent effects play a crucial role in mediating the interactions between proteins and their ligands. Implicit solvent models offer some advantages for modeling these interactions, but they have not been parameterized on such complex problems, and therefore, it is not clear how reliable they are. We have studied the binding of an octapeptide ligand to the murine MHC class I protein using both explicit solvent and implicit solvent models. The solvation free energy calculations are more than 10³ faster using the Surface Generalized Born implicit solvent model compared to FEP simulations with explicit solvent. For some of the electrostatic calculations needed to estimate the binding free energy, there is near quantitative agreement between the explicit and implicit solvent model results; overall, the qualitative trends in the binding predicted by the explicit solvent FEP simulations are reproduced by the implicit solvent model. With an appropriate choice of reference system based on the binding of the discharged ligand, electrostatic interactions are found to enhance the binding affinity because the favorable Coulomb interaction energy between the ligand and protein more than compensates for the unfavorable free energy cost of partially desolvating the ligand upon binding. Some of the effects of protein flexibility and thermal motions on charging the peptide in the solvated complex are also considered. © 2001 John Wiley & Sons, Inc. J Comput Chem 22: 591–607, 2001 [source]


Utility transversality: a value-based approach

JOURNAL OF MULTI CRITERIA DECISION ANALYSIS, Issue 5-6 2005
James E. Matheson
Abstract We examine multiattribute decision problems where a value function is specified over the attributes of a decision problem, as is typically done in the deterministic phase of a decision analysis. When uncertainty is present, a utility function is assigned over the value function to represent the decision maker's risk attitude towards value, which we refer to as a value-based approach. A fundamental result of using the value-based approach is a closed form expression that relates the risk aversion functions of the individual attributes to the trade-off functions between them. We call this relation utility transversality. The utility transversality relation asserts that once the value function is specified there is only one dimension of risk attitude in multiattribute decision problems. The construction of multiattribute utility functions using the value-based approach provides the flexibility to model more general functional forms that do not require assumptions of utility independence. For example, we derive a new family of multiattribute utility functions that describes richer preference structures than the usual multilinear family. We also show that many classical results of utility theory, such as risk sharing and the notion of a corporate risk tolerance, can be derived simply from the utility transversality relations by appropriate choice of the value function. Copyright © 2007 John Wiley & Sons, Ltd. [source]
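
A concrete, simplified instance of the value-based approach may help fix ideas: an additive value function over two attributes with an exponential utility assigned over value, so that a single risk-aversion coefficient is the only "dimension" of risk attitude, in the spirit of the utility transversality result. The weights, the coefficient and the gamble below are invented for illustration and are not taken from the paper.

```python
# Simplified instance of the value-based approach: an additive value function over
# two attributes and an exponential (constant risk aversion) utility over value.
# The single coefficient gamma captures the one remaining dimension of risk attitude.
import numpy as np

def value(x1, x2, w1=0.6, w2=0.4):
    return w1 * x1 + w2 * x2                    # value function from the deterministic phase

def utility(v, gamma=2.0):
    return -np.exp(-gamma * v)                  # exponential utility assigned over value

def certainty_equivalent(outcomes, probs, gamma=2.0):
    eu = sum(p * utility(value(*x), gamma) for x, p in zip(outcomes, probs))
    return -np.log(-eu) / gamma                 # invert the exponential utility

gamble = [(1.0, 0.2), (0.2, 0.9)]               # two (x1, x2) outcomes, equally likely
print(certainty_equivalent(gamble, [0.5, 0.5])) # CE in value units, below the mean value
```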


Effect of lipid bilayer alteration on transdermal delivery of a high-molecular-weight and lipophilic drug: Studies with paclitaxel

JOURNAL OF PHARMACEUTICAL SCIENCES, Issue 9 2004
Ramesh Panchagnula
Abstract Skin forms an excellent barrier against drug permeation, due to the rigid lamellar structure of the stratum corneum (SC) lipids. Poor permeability of drugs can be enhanced through alteration in partition and diffusion coefficients, or the concentration gradient of drug, with an appropriate choice of solvent system along with penetration enhancers. The aim of the current investigation was to assess the applicability of lipid bilayer alteration by fatty acids and terpenes toward the permeation enhancement of a high-molecular-weight, lipophilic drug, paclitaxel (PCL), through rat skin. From among the fatty acids studied using an ethanol/isopropyl myristate (1:1) vehicle, no significant enhancement in flux of PCL was observed (p > 0.05). In the case of cis mono- and polyunsaturated fatty acids, lag time was found to be similar to control (p > 0.05). This suggests that the permeation of a high-molecular-weight, lipophilic drug may not be enhanced by the alteration of the lipid bilayer, or the main barrier to permeation could lie in lower hydrophilic layers of skin. A significant increase in lag time was observed with trans unsaturated fatty acids, unlike the cis isomers, and this was explained on the basis of conformation and preferential partitioning of fatty acids into skin. From among the terpenes, flux of PCL with cineole was significantly different from other studied terpenes and controls, and after treatment with menthol and menthone permeability was found to be reduced. Menthol and menthone cause loosening of the SC lipid bilayer due to breaking of hydrogen bonding between ceramides, resulting in penetration of water into the lipids of the SC lipid bilayer that leads to creation of new aqueous channels and is responsible for increased hydrophilicity of SC. This increased hydrophilicity of the SC bilayer might have resulted in unfavorable conditions for ethanol/isopropyl myristate (1:1) along with PCL to penetrate into skin, therefore permeability was reduced. The findings of this study suggest that the permeation of a high-molecular-weight and lipophilic drug cannot be enhanced through bilayer alteration by penetration enhancers, and alteration in partitioning of drug into skin could be a feasible mode to enhance the permeation of drug. © 2004 Wiley-Liss, Inc. and the American Pharmacists Association J Pharm Sci 93:2177–2183, 2004 [source]


Impact of lubricant formulation on the friction properties of carbon fiber clutch plates

LUBRICATION SCIENCE, Issue 1 2006
R. C. Oldfield
Since their introduction over ten years ago, carbon fiber based friction materials have been employed by transmission builders in a wide variety of applications, including torque converter clutches, synchronizers, limited slip devices and shifting clutches. This new generation of materials gives improved durability relative to cellulose; carbon fiber materials offer inherently greater wear resistance and improved resistance to thermal degradation. However, carbon fiber based materials also bring inherently different friction characteristics than their cellulose based counterparts. As a result, a different approach to lubricant formulation is required to provide optimized friction control in applications where they are used. It is well known that in order to achieve and maintain the required friction in a clutch, the correct combination of surface properties and additive chemistry is required. In this paper the impact of different additive chemistries on the friction of carbon fiber clutch plates has been investigated. It will be shown that with the appropriate choice of additive system, carbon fiber based friction plates can offer a number of performance improvements over more conventional materials. Copyright © 2006 John Wiley & Sons, Ltd. [source]


Dissipative Particle Dynamics Simulations of Polymer Brushes: Comparison with Molecular Dynamics Simulations

MACROMOLECULAR THEORY AND SIMULATIONS, Issue 9 2006
Sandeep Pal
Abstract Summary: The structure of polymer brushes is investigated by dissipative particle dynamics (DPD) simulations that include explicit solvent particles. With an appropriate choice of the DPD interaction parameters, we obtain good agreement with previous molecular dynamics (MD) results in which the good solvent behavior was modeled by an effective Lennard–Jones potential. The present results confirm that DPD simulation techniques can be applied for large length scale simulations of polymer brushes. A relation between the characteristic length scales of the two methods is established. Polymer brush at a solid–liquid interface. [source]
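
For context, the interaction parameters referred to above are the coefficients of the soft conservative force used in standard (Groot–Warren) DPD, F_C = a_ij (1 − r/r_c) r̂ for r < r_c. The sketch below evaluates that pairwise force only; dissipative and random forces, and the specific a_ij values used in the paper, are omitted.

```python
# The soft conservative force used in standard DPD, F_C = a_ij * (1 - r/r_c) * r_hat
# for r < r_c (Groot-Warren form); the a_ij are the "interaction parameters" whose
# choice the abstract refers to. Dissipative and random forces are omitted here.
import numpy as np

def dpd_conservative_force(r_i, r_j, a_ij=25.0, r_c=1.0):
    rij = np.asarray(r_i) - np.asarray(r_j)
    r = np.linalg.norm(rij)
    if r >= r_c or r == 0.0:
        return np.zeros(3)
    return a_ij * (1.0 - r / r_c) * (rij / r)     # soft repulsion along the pair axis

print(dpd_conservative_force([0.0, 0.0, 0.0], [0.5, 0.0, 0.0]))  # [-12.5, 0, 0]
```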


Appropriate SCF basis sets for orbital studies of galaxies and a 'quantum-mechanical' method to compute them

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 3 2008
Constantinos Kalapotharakos
ABSTRACT We address the question of an appropriate choice of basis functions for the self-consistent field (SCF) method of simulation of the N-body problem. Our criterion is based on a comparison of the orbits found in N-body realizations of analytical potential–density models of triaxial galaxies, in which the potential is fitted by the SCF method using a variety of basis sets, with those of the original models. Our tests refer to maximally triaxial Dehnen γ-models for values of γ in the range 0 ≤ γ ≤ 1, i.e. from the harmonic core up to the weak cusp limit. When an N-body realization of a model is fitted by the SCF method, the choice of radial basis functions affects significantly the way the potential, forces or derivatives of the forces are reproduced, especially in the central regions of the system. We find that this results in serious discrepancies in the relative amounts of chaotic versus regular orbits, or in the distributions of the Lyapunov characteristic exponents, as found by different basis sets. Numerical tests include the Clutton-Brock and the Hernquist–Ostriker basis sets, as well as a family of numerical basis sets which are 'close' to the Hernquist–Ostriker basis set (according to a given definition of distance in the space of basis functions). The family of numerical basis sets is parametrized in terms of a quantity that appears in the kernel functions of the Sturm–Liouville equation defining each basis set; the Hernquist–Ostriker basis set is the member of the family for which this quantity is zero. We demonstrate that grid solutions of the Sturm–Liouville equation yielding numerical basis sets introduce large errors in the variational equations of motion. We propose a quantum-mechanical method of solution of the Sturm–Liouville equation which overcomes these errors. We finally give criteria for a choice of the optimal value of this parameter and calculate the latter as a function of γ, i.e. of the power-law exponent of the radial density profile at the central regions of the galaxy. [source]


Reionization bias in high-redshift quasar near-zones

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 2 2008
J. Stuart B. Wyithe
ABSTRACT Absorption spectra of high-redshift quasars exhibit an increasingly thick Lyα forest, suggesting that the fraction of neutral hydrogen in the intergalactic medium (IGM) is increasing towards z ≈ 6. However, the interpretation of these spectra is complicated by the fact that the Lyα optical depth is already large for neutral hydrogen fractions in excess of 10⁻⁴, and also because quasars are expected to reside in dense regions of the IGM. We present a model for the evolution of the ionization state of the IGM which is applicable to the dense, biased regions around high-redshift quasars as well as more typical regions in the IGM. We employ a cold dark matter based model in which the ionizing photons for reionization are produced by star formation in dark matter haloes spanning a wide range of masses, combined with numerical radiative transfer simulations which model the resulting opacity distribution in quasar absorption spectra. With an appropriate choice for the parameter which controls the star formation efficiency, our model is able to simultaneously reproduce the observed Lyα forest opacity at 4 < z < 6, the ionizing photon mean-free-path at z ≈ 4 and the rapid evolution of highly ionized near-zone sizes around high-redshift quasars at 5.8 < z < 6.4. In our model, reionization extends over a wide redshift range, starting at z ≈ 10 and completing as H ii regions overlap at z ≈ 6–7. We find that within 5 physical Mpc of a high-redshift quasar, the evolution of the ionization state of the IGM precedes that in more typical regions by around 0.3 redshift units. More importantly, when combined with the rapid increase in the ionizing photon mean-free-path expected shortly after overlap, this offset results in an ionizing background near the quasar which exceeds the value in the rest of the IGM by a factor of ~2–3. We further find that in the post-overlap phase of reionization the size of the observed quasar near-zones is not directly sensitive to the neutral hydrogen fraction of the IGM. Instead, these sizes probe the level of the background ionization rate and the temperature of the surrounding IGM. The observed rapid evolution of the quasar near-zone sizes at 5.8 < z < 6.4 can thus be explained by the rapid evolution of the ionizing background, which in our model is caused by the completion of overlap at the end of reionization by 6 ≲ z ≲ 7. [source]
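
The statement that the Lyα optical depth is already large for neutral fractions above 10⁻⁴ follows from the Gunn–Peterson scaling, sketched below with an indicative prefactor of order a few times 10⁵ for a fully neutral IGM at z ≈ 6; the exact normalization depends on the assumed cosmology and is not taken from this paper.

```python
# Rough illustration of why the Ly-alpha forest saturates for tiny neutral fractions:
# the Gunn-Peterson optical depth scales roughly as tau ~ tau0 * x_HI * ((1+z)/7)^1.5,
# with tau0 of order a few times 10^5 for a fully neutral IGM near z ~ 6 (the exact
# prefactor is cosmology-dependent; the value below is indicative only).
def tau_gp(x_HI, z, tau0=4e5):
    return tau0 * x_HI * ((1 + z) / 7.0) ** 1.5

for x in (1e-5, 1e-4, 1e-3):
    print(f"x_HI = {x:.0e}  ->  tau ~ {tau_gp(x, z=6):.1f}")
# Already at x_HI ~ 1e-4 the optical depth is of order 40, so the transmitted flux
# e^-tau is essentially zero, consistent with the statement in the abstract.
```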


A multilevel finite element method in space-time for the Navier-Stokes problem

NUMERICAL METHODS FOR PARTIAL DIFFERENTIAL EQUATIONS, Issue 6 2005
Yinnian He
Abstract A multilevel finite element method in space-time for the two-dimensional nonstationary Navier-Stokes problem is considered. The method is a multi-scale method in which the fully nonlinear Navier-Stokes problem is only solved on a single coarsest space-time mesh; subsequent approximations are generated on a succession of refined space-time meshes by solving a linearized Navier-Stokes problem about the solution on the previous level. The a priori estimates and error analysis are also presented for the J-level finite element method. We demonstrate theoretically that, for an appropriate choice of the space and time mesh widths hj and kj, j = 2, …, J, the J-level finite element method in space-time provides the same accuracy as the one-level method in space-time in which the fully nonlinear Navier-Stokes problem is solved on a final finest space-time mesh. © 2005 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq, 2005 [source]


The Jurassic Bivalve Genus Placunopsis: New Evidence On Anatomy and Affinities

PALAEONTOLOGY, Issue 3 2002
Jonathan A. Todd
The Jurassic bivalve genus Placunopsis Morris and Lycett, 1853 is shown to be an anomiid on account of the detailed anatomy of its hitherto unknown right valve and the corresponding musculature in the left valve. Herein the most appropriate choice for type species is considered to be P. inaequalis (Phillips, 1829), which accommodates a number of the larger Late Jurassic nominal species. A species from the English Bathonian previously confused with P. inaequalis is described as P. fuersichi sp. nov. Placunopsis inaequalis is shown to be closely related to Recent Pododesmus, which has previously been interpreted as the most 'primitive' of the extant anomiids on the basis of its anatomy. There is thus no need to retain a separate family for the genus, as has been proposed by some workers. The distinct small species P. socialis Morris and Lycett, 1853 can also be assigned to the anomiids on the basis of the differences between the structure of the outer layers in the two valves, and the presence of a byssal foramen. There is some suggestion of calcification of the byssus, but not enough detail is known of its musculature to justify transferring it to the genus Juranomia Fürsich and Werner, 1989 at this stage. The cemented bivalves traditionally referred to Placunopsis that are so common in the European Muschelkalk (Triassic) are not anomiids and thus require systematic revision. [source]


Quality awards as a public sector benchmarking concept in OECD member countries: some guidelines for quality award organizers

PUBLIC ADMINISTRATION & DEVELOPMENT, Issue 1 2001
Elke Löffler
In many OECD member countries, quality awards have become an important benchmarking instrument for public and especially private sector organizations. Quality awards pursue two main goals: one is to introduce elements of competition in areas of the public and private sectors that lack market competition; the other is to encourage organizational learning. The problem is that in a public sector context these aims seem to be mutually exclusive. The aim of the article is to show quality award organizers how to realize the full potential of quality awards by making the appropriate choices in the design of a public sector quality award. The conclusion is that the stage of public sector quality management and the degree of 'publicness' of the public sector in a given country will influence the competition-inducing and learning effect of a national quality award in an adverse way. Nevertheless, the negative effects on one or the other element of quality awards can be counterbalanced by the appropriate choice of the scope of the quality award, the area to be evaluated, the evaluation criteria as well as the benchmarking concept. Last but not least, quality award organizers should keep in mind that quality awards are not a benchmarking instrument for all seasons. Copyright © John Wiley & Sons, Ltd. [source]


On a fast calculation of structure factors at a subatomic resolution

ACTA CRYSTALLOGRAPHICA SECTION A, Issue 1 2004
P. V. Afonine
In the last decade, the progress of protein crystallography has allowed several protein structures to be solved at a resolution higher than 0.9 Å. Such studies provide researchers with important new information reflecting very fine structural details. The signal from these details is very weak with respect to that corresponding to the whole structure. Its analysis requires high-quality data, which previously were available only for crystals of small molecules, and a high accuracy of calculations. The calculation of structure factors using direct formulae, traditional for 'small-molecule' crystallography, allows a relatively simple accuracy control. For macromolecular crystals, diffraction data sets at a subatomic resolution contain hundreds of thousands of reflections, and the number of parameters used to describe the corresponding models may reach the same order. Therefore, the direct way of calculating structure factors becomes very time expensive when applied to large molecules. These problems of high accuracy and computational efficiency require a re-examination of computer tools and algorithms. The calculation of model structure factors through an intermediate generation of an electron density [Sayre (1951). Acta Cryst. 4, 362–367; Ten Eyck (1977). Acta Cryst. A33, 486–492] may be much more computationally efficient, but contains some parameters (grid step, 'effective' atom radii etc.) whose influence on the accuracy of the calculation is not straightforward. At the same time, the choice of parameters within safety margins that largely ensure a sufficient accuracy may result in a significant loss of CPU time, making it close to the time for the direct-formulae calculations. The impact of the different parameters on the computational efficiency of structure-factor calculation is studied. It is shown that an appropriate choice of these parameters allows the structure factors to be obtained with a high accuracy and in a significantly shorter time than that required when using the direct formulae. Practical algorithms for the optimal choice of the parameters are suggested. [source]
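
The two routes compared in the abstract can be mimicked in one dimension: a direct summation over (Gaussian-shaped) atoms versus sampling the electron density on a grid followed by an FFT, with the grid step controlling how closely the second route reproduces the first. The coordinates, scattering factors, atom width and grid sizes below are arbitrary illustration values, not the algorithm or parameters of the paper.

```python
# 1-D toy comparison of the two routes to model structure factors discussed above:
# (a) the direct sum over atoms and (b) sampling the electron density on a grid
# followed by an FFT. Atoms are given Gaussian shapes of width sigma so that both
# routes target the same quantity; only the grid step differs between trials.
import numpy as np

xs = np.array([0.12, 0.37, 0.81])          # fractional atomic coordinates
fj = np.array([6.0, 8.0, 16.0])            # toy scattering factors
sigma = 0.01                               # Gaussian atom width (fraction of the cell)
H = np.arange(1, 11)                       # reflection indices 1..10

# (a) direct summation over Gaussian atoms
F_direct = np.array([np.sum(fj * np.exp(-2 * (np.pi * sigma * h) ** 2)
                            * np.exp(2j * np.pi * h * xs)) for h in H])

# (b) density sampled on an N-point grid, then FFT
def F_via_density(n_grid):
    x = np.arange(n_grid) / n_grid
    rho = np.zeros(n_grid)
    for xj, f in zip(xs, fj):
        d = (x - xj + 0.5) % 1.0 - 0.5     # periodic (minimum-image) distance
        rho += f * np.exp(-d ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma * n_grid)
    return np.conj(np.fft.fft(rho))[H]     # conjugate matches the e^{+2*pi*i*h*x} convention

for n in (64, 256, 1024):
    err = np.max(np.abs(F_via_density(n) - F_direct))
    print(n, f"max |F_fft - F_direct| = {err:.2e}")
# Refining the grid step drives the FFT route towards the direct sum, at growing cost.
```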


Evaluation of Culture and Antibiotic Use in Patients With Pharyngitis

THE LARYNGOSCOPE, Issue 10 2006
Justin B. Rufener MD
Abstract Objectives: The objectives of this study were to evaluate practice patterns for treatment of patients with pharyngitis with regard to testing for group A beta hemolytic streptococcal (GABHS) infection, frequency of antibiotic use, and appropriate choice of antibiotics. Study Design: The authors conducted a retrospective review of billing data for 10,482 office visits for pharyngitis. Methods: The 2004 billing database for a tertiary institution was queried for outpatient visits for pharyngitis or tonsillitis, group A Streptococcus tests (GAST), and antibiotic prescriptions filled after the visit. Patients were separated by age group and analyzed for the proportion of patients that received a GAST and proportion prescribed an antibiotic. Antibiotic prescriptions were also analyzed to determine whether they were appropriate for treatment of GABHS. Results: A total of 68.7% of all patients and 82.2% of pediatric patients were tested for GAST. A total of 47.1% of adult patients and 44.9% of pediatric patients received an antibiotic. For adult patients for whom GAST was obtained, 48.6% were prescribed an antibiotic versus 53.6% of those not tested. Streptococcus testing was a significant predictor of antibiotic use (P < .0001), whereas age was not (P = .22). A total of 82.1% of all antibiotics prescribed were recommended for treatment of GABHS. Conclusions: Most patients seen for pharyngitis were tested for GABHS, but pediatric patients were tested more frequently than adults. Patients who received a GAST were less likely to receive antibiotics. The rates experienced in our tertiary academic institution are higher than previously quoted for community practice. When antibiotics were prescribed, they were usually appropriate for the treatment of GABHS based on current recommendations. [source]


Evaluation of Information Loss in Digital Elevation Models With Digital Photogrammetric Systems

THE PHOTOGRAMMETRIC RECORD, Issue 95 2000
Y. D. Huang
Information loss is caused when a surface is sampled with a finite interval, such as in the production of a digital elevation model (DEM). This information loss can become the dominant part of the error in a DEM. The ability to quantify information loss enables guidance to be provided for an appropriate choice of grid interval and better accuracy assessment for the DEM. With the use of digital photogrammetric systems, evaluation of information loss has become much easier. This paper describes three methods of evaluating information loss. An example is given of the method which is most appropriate for use with a digital photogrammetric system, based on rock cliff surface data and the VirtuoZo system. [source]
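
A simple numerical illustration of sampling-induced information loss, in the same spirit: a synthetic one-dimensional terrain profile is sampled at progressively coarser intervals, interpolated back, and compared with the original, so that the RMS difference quantifies the information lost at each grid interval. The profile and intervals are invented; the paper evaluates this on real photogrammetric data.

```python
# Synthetic illustration of sampling-induced information loss in a DEM: a 1-D
# terrain profile is sampled at several grid intervals, linearly interpolated back,
# and compared with the original surface.
import numpy as np

x = np.linspace(0.0, 100.0, 10_001)                       # "true" surface, 1 cm step
z = 2.0 * np.sin(0.3 * x) + 0.5 * np.sin(2.1 * x) + 0.1 * np.sin(9.0 * x)

for interval in (0.5, 2.0, 5.0):                          # DEM grid intervals (m)
    xs = np.arange(0.0, 100.0 + 1e-9, interval)
    zs = np.interp(xs, x, z)                              # sampled DEM
    z_back = np.interp(x, xs, zs)                         # reconstructed surface
    rmse = np.sqrt(np.mean((z_back - z) ** 2))
    print(f"grid interval {interval:>4} m  ->  information loss (RMSE) {rmse:.3f} m")
# Coarser intervals lose the high-frequency relief first, which is the dominant
# error component the abstract refers to.
```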


Adaptive output feedback control for a class of planar nonlinear systems

ASIAN JOURNAL OF CONTROL, Issue 5 2009
Fang Shang
Abstract This paper is concerned with the problem of global adaptive stabilization by output feedback for a class of planar nonlinear systems with uncertain control coefficient and unknown growth rate. The control coefficient is not assumed to have a known upper bound, which relaxes the corresponding requirement in the existing literature (see e.g. [1, 2]). First, by the universal control method, an observer is constructed based on the dynamic high-gain K-filters. Then, the control design procedure is developed to obtain the stabilizing controller and dynamic compensator for the uncertainties in the control coefficient. It is shown that the global stability of the closed-loop system can be guaranteed by the appropriate choice of the design parameters. A simulation example is also provided to illustrate the correctness of the theoretical results. © 2009 John Wiley and Sons Asia Pte Ltd and Chinese Automatic Control Society. [source]


THE SYSTEMATIC RISK OF DEBT: AUSTRALIAN EVIDENCE

AUSTRALIAN ECONOMIC PAPERS, Issue 1 2005
Kevin Davis
This paper examines systematic risk (betas) of Australian government debt securities for the period 1979–2004 and makes three contributions to academic research and practical debate. First, the empirical work provides direct evidence on the systematic risk of government debt, and provides a benchmark for estimating the systematic risk of corporate debt which is relevant for cost of capital estimation and for optimal portfolio selection by asset managers such as superannuation funds. Second, analysis of reasons for non-zero (and time varying) betas for fixed income securities aids understanding of the primary sources of systematic risk. Third, the results cast light on the appropriate choice of maturity of risk free interest rate for use in the Capital Asset Pricing Model and have implications for the current applicability of historical estimates of the market risk premium. Debt betas are found to be, on average, significantly positive and (as expected) closely related, cross sectionally, to duration. They are, however, subject to significant time series variation, and over the past few years the pre-existing positive correlation between bond and stock returns appears to have vanished. [source]
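
As background on how such debt betas are typically estimated, the sketch below regresses bond excess returns on equity-market excess returns using simulated series in which longer-duration bonds load more heavily on the common factor, mimicking the cross-sectional duration/beta relation described in the abstract. The data are synthetic, not the 1979–2004 Australian series used in the paper.

```python
# Sketch of a standard debt-beta estimate: regress bond excess returns on market
# excess returns. The return series are simulated; the duration -> loading link
# below is an assumption made purely to illustrate the cross-sectional pattern.
import numpy as np

rng = np.random.default_rng(7)
T = 300                                               # months
market = rng.normal(0.005, 0.04, T)                   # market excess returns

def simulated_bond_returns(duration):
    loading = 0.02 * duration                         # assumed: exposure grows with duration
    return loading * market + rng.normal(0.0, 0.01, T)

for duration in (1, 5, 10):
    bond = simulated_bond_returns(duration)
    beta = np.cov(bond, market)[0, 1] / np.var(market, ddof=1)
    print(f"duration {duration:>2} yrs  ->  estimated beta {beta:.2f}")
```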


Total alpha-fetoprotein and Lens culinaris agglutinin-reactive alpha-fetoprotein in fetal chromosomal abnormalities

BJOG : AN INTERNATIONAL JOURNAL OF OBSTETRICS & GYNAECOLOGY, Issue 11 2001
Ritsu Yamamoto
Objective To examine the differences in multiples of the median (MoM) of total alpha-fetoprotein, and the proportion of Lens culinaris agglutinin reactive alpha-fetoprotein (% alpha-fetoprotein-L2+L3) in the maternal serum and amniotic fluid of pregnant women whose fetuses were diagnosed with autosomal or sex chromosomal abnormalities. Design Prospective consecutive series. Setting University hospital. Sample Maternal sera and amniotic fluids from 46 pregnant women with trisomy 21 fetuses, 10 pregnant women with trisomy 18 fetuses, one pregnant woman with a trisomy 13 fetus, six pregnant women with fetal sex chromosomal abnormalities, and 100 pregnant women for whom the fetal karyotype was diagnosed as normal following a genetic amniocentesis. Results The proportion of alpha-fetoprotein-L2+L3 in maternal serum for trisomy 21 (40.3%, P<0.0001) and trisomy 18 (39.8%, P<0.05) showed a significantly higher value compared with normal (32.6%). The proportion of alpha-fetoprotein-L2+L3 in amniotic fluid was significantly higher (P<0.0001) for trisomy 21 (46.6%) than for a normal karyotype (41.5%). Only for the trisomy 21 group was there a strong correlation in the % alpha-fetoprotein-L2+L3 between maternal serum and amniotic fluid (r=0.840, P<0.0001). For all groups, there was no correlation between alpha-fetoprotein MoM and % alpha-fetoprotein-L2+L3 in maternal serum and amniotic fluid. Conclusion The proportion of alpha-fetoprotein-L2+L3 in maternal serum is an appropriate choice for a trisomy 21 biochemical marker, and it is possible that combining alpha-fetoprotein-L2+L3 analysis with assays of alpha-fetoprotein in maternal serum could further improve the sensitivity and specificity of multiple marker screening. [source]


A Polylinker Approach to Reductive Loop Swaps in Modular Polyketide Synthases

CHEMBIOCHEM, Issue 16 2008
Laurenz Kellenberger
Abstract Multiple versions of the DEBS 1-TE gene, which encodes a truncated bimodular polyketide synthase (PKS) derived from the erythromycin-producing PKS, were created by replacing the DNA encoding the ketoreductase (KR) domain in the second extension module by either of two synthetic oligonucleotide linkers. This made available a total of nine unique restriction sites for engineering. The DNA for donor "reductive loops," which are sets of contiguous domains comprising either KR or KR and dehydratase (DH), or KR, DH and enoylreductase (ER) domains, was cloned from selected modules of five natural PKS multienzymes and spliced into module 2 of DEBS 1-TE using alternative polylinker sites. The resulting hybrid PKSs were tested for triketide production in vivo. Most of the hybrid multienzymes were active, vindicating the treatment of the reductive loop as a single structural unit, but yields were dependent on the restriction sites used. Further, different donor reductive loops worked optimally with different splice sites. For those reductive loops comprising DH, ER and KR domains, premature TE-catalysed release of partially reduced intermediates was sometimes seen, which provided further insight into the overall stereochemistry of reduction in those modules. Analysis of loops containing KR only, which should generate stereocentres at both C-2 and C-3, revealed that the 3-hydroxy configuration (but not the 2-methyl configuration) could be altered by appropriate choice of a donor loop. The successful swapping of reductive loops provides an interesting parallel to a recently suggested pathway for the natural evolution of modular PKSs by recombination. [source]