Straightforward Application (straightforward + application)


Selected Abstracts


Mutation analysis in mitochondrial fatty acid oxidation defects: Exemplified by acyl-CoA dehydrogenase deficiencies, with special focus on genotype–phenotype relationship

HUMAN MUTATION, Issue 3 2001
Niels Gregersen
Abstract Mutation analysis of metabolic disorders, such as the fatty acid oxidation defects, offers an additional, and often superior, tool for specific diagnosis compared to traditional enzymatic assays. With the advancement of the structural part of the Human Genome Project and the creation of mutation databases, procedures for convenient and reliable genetic analyses are being developed. The most straightforward application of mutation analysis is specific diagnosis in suspected patients, particularly in the context of family studies and for prenatal/preimplantation analysis. In addition, from these practical uses emerges the possibility to study genotype–phenotype relationships and investigate the molecular pathogenesis resulting from specific mutations or groups of mutations. In the present review we summarize current knowledge regarding genotype–phenotype relationships in three disorders of mitochondrial fatty acid oxidation: very-long-chain acyl-CoA dehydrogenase (VLCAD, also ACADVL), medium-chain acyl-CoA dehydrogenase (MCAD, also ACADM), and short-chain acyl-CoA dehydrogenase (SCAD, also ACADS) deficiencies. On the basis of this knowledge we discuss current understanding of the structural implications of mutation type, as well as the modulating effect of the mitochondrial protein quality control systems, composed of molecular chaperones and intracellular proteases. We propose that the unraveling of the genetic and cellular determinants of the modulating effects of protein quality control systems may help to assess the balance between genetic and environmental factors in the clinical expression of a given mutation. The realization that the effect of a single gene, such as disease-causing mutations in the VLCAD, MCAD, and SCAD genes, may be modified by variations in other genes presages the need for profile analyses of additional genetic variations. The rapid development of mutation detection systems, such as the chip technologies, makes such profile analyses feasible. However, it remains to be seen to what extent mutation analysis will be used for diagnosis of fatty acid oxidation defects and other metabolic disorders. Hum Mutat 18:169–189, 2001. © 2001 Wiley-Liss, Inc. [source]


An embedded Dirichlet formulation for 3D continua

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 5 2010
A. Gerstenberger
Abstract This paper presents a new approach for imposing Dirichlet conditions weakly on non-fitting finite element meshes. Such conditions, also called embedded Dirichlet conditions, are typically, but not exclusively, encountered when prescribing Dirichlet conditions in the context of the eXtended finite element method (XFEM). The method's key idea is the use of an additional stress field as the constraining Lagrange multiplier function. The resulting mixed/hybrid formulation is applicable to 1D, 2D and 3D problems. The method does not require stabilization for the Lagrange multiplier unknowns and allows the complete condensation of these unknowns at the element level. Furthermore, only non-zero diagonal terms are present in the tangent stiffness, which allows the straightforward application of state-of-the-art iterative solvers, like algebraic multigrid (AMG) techniques. Within this paper, the method is applied to the linear momentum equation of an elastic continuum and to the transient, incompressible Navier–Stokes equations. Steady and unsteady benchmark computations show excellent agreement with reference values. The general formulation presented in this paper can also be applied to other continuous field problems. Copyright © 2009 John Wiley & Sons, Ltd. [source]
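
The core of the abstract's mixed formulation, imposing a Dirichlet value through an extra multiplier unknown instead of modifying matrix rows, can be illustrated on a toy problem. The sketch below is our own construction, not the paper's code or its stress-field multiplier: it enforces u(0) = g weakly on a 1D Poisson problem; mesh size and data are arbitrary.

```python
import numpy as np

# Toy sketch: weak imposition of u(0) = g on -u'' = f over [0, 1] with linear
# finite elements, using a scalar Lagrange multiplier.  The paper's method
# uses a stress field as the multiplier on embedded interfaces and condenses
# it out at the element level; here we simply solve the saddle-point system.

n = 10                       # number of elements
h = 1.0 / n
f, g = 1.0, 2.0              # source term and Dirichlet value at x = 0

# Standard stiffness matrix and load vector for linear elements.
K = np.zeros((n + 1, n + 1))
F = np.full(n + 1, f * h)
F[[0, -1]] = f * h / 2.0
for e in range(n):
    K[e:e + 2, e:e + 2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h

# Impose u(1) = 0 strongly by eliminating the last node.
K, F = K[:-1, :-1], F[:-1]

# Constraint row B u = g selects the value at node 0.
B = np.zeros((1, n))
B[0, 0] = 1.0

# Saddle-point system [[K, B^T], [B, 0]]; its zero diagonal block is what
# condensation of the multiplier unknowns removes in the paper.
A = np.block([[K, B.T], [B, np.zeros((1, 1))]])
sol = np.linalg.solve(A, np.concatenate([F, [g]]))
u, lam = sol[:n], sol[n]

# Exact solution for these data: u(x) = 2 - 1.5 x - 0.5 x^2.
print("u(0) =", u[0], " multiplier =", lam)
```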


Composite adaptive and input observer-based approaches to the cylinder flow estimation in spark ignition automotive engines

INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 2 2004
A. Stotsky
Abstract The performance of air charge estimation algorithms in spark ignition automotive engines can be enhanced using advanced estimation techniques available in the controls literature. This paper illustrates two approaches of this kind that can improve the cylinder flow estimation for gasoline engines without external exhaust gas recirculation (EGR). The first approach is based on an input observer, while the second approach relies on an adaptive estimator. Assuming that the cylinder flow is nominally estimated via a speed-density calculation, and that the uncertainty is additive to the volumetric efficiency, the straightforward application of an input observer provides an easy-to-implement algorithm that corrects the nominal air flow estimate. The experimental results that we report in the paper point to a sufficiently good transient behaviour of the estimator. The signal quality may deteriorate, however, for extremely fast transients. This motivates the development of an adaptive estimator that relies mostly on the feedforward speed-density calculation during transients, while during engine operation close to steady-state conditions it relies mostly on the adaptation. In our derivation of the adaptive estimator, the uncertainty is modelled as an unknown parameter multiplying the intake manifold temperature. We use the tracking error between the measured and modelled intake manifold pressure together with an appropriately defined prediction error estimate to develop an adaptation algorithm with improved identifiability and convergence rate. A robustness enhancement, via a σ-modification with the σ-factor depending on the prediction error estimate, ensures that in transients the parameter estimate converges to a pre-determined a priori value. Close to steady-state conditions, the σ-modification is rendered inactive and the evolution of the parameter estimate is determined by both the tracking error and the prediction error estimate. Further enhancements are made by incorporating a functional dependence of the a priori value on the engine operating conditions, such as the intake manifold pressure. The coefficients of this function can be learned during engine operation from the values to which the parameter estimate converges in close-to-steady-state conditions. This feedforward learning functionality improves transient estimation accuracy and reduces the convergence time of the parameter estimate. Copyright © 2004 John Wiley & Sons, Ltd. [source]
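
A drastically simplified sketch of the adaptive branch follows. It is our reconstruction under stated assumptions, not the authors' algorithm: a single correction parameter theta on a biased nominal volumetric efficiency is adapted from the intake-manifold pressure tracking error via a normalized gradient law with σ-modification pulling theta toward an a priori value. Here σ is held constant, whereas in the paper it is scheduled on a prediction-error estimate, and all physical values are illustrative.

```python
import numpy as np

# Toy adaptive cylinder-flow estimator: speed-density flow W_cyl = eta*k*p
# with an adaptive correction theta on a biased nominal volumetric
# efficiency.  All values are illustrative, not engine calibration data.

R, T, Vm = 287.0, 300.0, 0.005        # gas constant, intake temp [K], manifold vol [m^3]
Vd, N = 0.002, 2000.0                 # displacement [m^3], engine speed [rpm]
k = Vd * N / (120.0 * R * T)          # speed-density factor
eta_true, eta_nom = 0.85, 0.75        # true vs. nominal volumetric efficiency
W_thr = 0.02                          # throttle mass flow [kg/s]

dt, gamma, L, sigma, theta0 = 1e-3, 100.0, 50.0, 0.1, 1.0
p = p_hat = 50e3                      # true and predicted manifold pressure [Pa]
theta = theta0

for _ in range(20000):                # 20 s of simulated operation
    p += dt * (R * T / Vm) * (W_thr - eta_true * k * p)        # plant dynamics
    e = p - p_hat                                              # tracking error
    p_hat += dt * ((R * T / Vm) * (W_thr - theta * eta_nom * k * p) + L * e)
    phi = (R * T / Vm) * eta_nom * k * p                       # regressor
    # Normalized gradient adaptation with sigma-modification toward theta0;
    # the constant sigma slightly biases the estimate toward the a priori value.
    theta += dt * (-gamma * phi * e / (1.0 + phi * phi) - sigma * (theta - theta0))

print("estimated volumetric efficiency:", theta * eta_nom, " true:", eta_true)
```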


Bayesian estimation of traffic lane state

INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 1 2003
Ivan Nagy
Abstract Modelling of large transportation systems requires a reliable description of their elements that can be easily adapted to the specific situation. This paper offers the mixture model as a flexible candidate for modelling such elements. The mixture model describes particular, and possibly very different, states of a specific system by its individual components. A hierarchical model built on such elements can describe complex big-city road systems as well as railway or highway networks. The Bayesian paradigm is adopted for estimation of the parameters and the actual component label of the mixture model, as it serves well for the subsequent decision making. As a straightforward application of the Bayesian method to mixture models leads to infeasible computations, an approximation is applied. For normal stochastic variations, the resulting estimation algorithm reduces to simple recursive weighted least squares. The elementary modelling is demonstrated on a model of the traffic flow state at a single point of a roadway. The examples with simulated as well as real data show excellent properties of the suggested model, and are representative of a much wider set of extensive tests. Copyright © 2003 John Wiley & Sons, Ltd. [source]
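
The flavour of the approximate estimator can be conveyed by a toy two-component example. The sketch below is our illustration, not the paper's traffic model: each mixture component is a linear (autoregressive) model of a traffic quantity, the infeasible exact Bayesian filter over component labels is approximated by posterior component weights, and each component is updated by recursive least squares weighted with those probabilities.

```python
import numpy as np

# Toy quasi-Bayes mixture estimation: data switch between a "free flow" and a
# "congested" AR(1) regime; two recursive least-squares estimators are each
# updated with the posterior probability that the current point came from
# their component.  Model structure and noise levels are illustrative.

rng = np.random.default_rng(0)
true_theta = [np.array([0.9, 5.0]), np.array([0.5, 40.0])]   # [AR coeff, offset]
y, regime = [50.0], 0
for t in range(1, 400):
    if t % 100 == 0:
        regime = 1 - regime                                  # regime switch
    a, b = true_theta[regime]
    y.append(a * y[-1] + b + rng.normal(0.0, 1.0))
y = np.array(y)

K = 2
theta = [np.array([0.5, 10.0]), np.array([0.5, 50.0])]       # initial estimates
P = [np.eye(2) * 100.0 for _ in range(K)]                    # RLS covariances
alpha = np.ones(K)                                           # Dirichlet counts

for t in range(1, len(y)):
    x = np.array([y[t - 1], 1.0])                            # regressor
    resid = np.array([y[t] - th @ x for th in theta])
    # Posterior component weights from predictive likelihoods (unit noise).
    loglik = -0.5 * resid**2 + np.log(alpha / alpha.sum())
    w = np.exp(loglik - loglik.max())
    w /= w.sum()
    alpha += w
    # Recursive least squares, weighted by the component probability.
    for j in range(K):
        denom = 1.0 + w[j] * x @ P[j] @ x
        gain = w[j] * P[j] @ x / denom
        theta[j] = theta[j] + gain * resid[j]
        P[j] = P[j] - np.outer(gain, x @ P[j])

print("estimated components:", theta, " true:", true_theta)
```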


Non-smooth structured control design with application to PID loop-shaping of a process

INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 14 2007
Pierre Apkarian
Abstract Feedback controllers with specific structure arise frequently in applications because they are easily apprehended by design engineers and facilitate on-board implementations and re-tuning. This work is dedicated to H∞ synthesis with structured controllers. In this context, straightforward application of traditional synthesis techniques fails, which explains why only a few ad hoc methods have been developed over the years. In response, we propose a more systematic way to design H∞ optimal controllers with fixed structure using local optimization techniques. Our approach addresses in principle all those controller structures which can be built into mathematical programming constraints. We apply non-smooth optimization techniques to compute locally optimal solutions, and provide practical tests for descent and optimality. In the experimental part we apply our technique to H∞ loop-shaping proportional-integral-derivative (PID) controllers for MIMO systems and demonstrate its use for PID control of a chemical process. Copyright © 2007 John Wiley & Sons, Ltd. [source]
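
A scaled-down illustration of structured H∞ design is sketched below. It is our toy, not the paper's algorithm: the controller is restricted to a filtered-derivative PID, the H∞ norm of a weighted sensitivity is approximated by its peak on a frequency grid, and a derivative-free Nelder-Mead search stands in for the dedicated non-smooth optimization the paper develops; plant, weight and starting point are arbitrary choices.

```python
import numpy as np
from scipy.optimize import minimize

# Toy structured H-infinity loop-shaping: tune a filtered-derivative PID for
# G(s) = 1/(s+1)^3 by minimizing a grid approximation of ||W S||_inf, where
# S = 1/(1+GC) and W is a standard sensitivity weight.  Nelder-Mead is a
# crude derivative-free stand-in for the paper's non-smooth techniques.

Tf = 0.01                                    # derivative filter time constant
num_G = np.array([1.0])
den_G = np.polymul([1, 1], np.polymul([1, 1], [1, 1]))      # (s+1)^3
omega = np.logspace(-3, 3, 600)              # frequency grid [rad/s]
jw = 1j * omega

def weighted_peak(gains):
    kp, ki, kd = gains
    # C = kp + ki/s + kd s/(Tf s + 1) as a single rational function.
    num_C = np.polyadd(np.polyadd(kp * np.polymul([1, 0], [Tf, 1]),
                                  ki * np.array([Tf, 1.0])),
                       kd * np.array([1.0, 0, 0]))
    den_C = np.polymul([1, 0], [Tf, 1])
    # Reject closed-loop unstable candidates via the characteristic polynomial.
    char = np.polyadd(np.polymul(den_G, den_C), np.polymul(num_G, num_C))
    if np.any(np.roots(char).real >= -1e-9):
        return 1e6
    Lw = (np.polyval(num_G, jw) * np.polyval(num_C, jw) /
          (np.polyval(den_G, jw) * np.polyval(den_C, jw)))
    S = 1.0 / (1.0 + Lw)
    M, wb, A = 2.0, 0.5, 1e-3                # peak bound, bandwidth, ss error
    W = (jw / M + wb) / (jw + wb * A)
    return np.max(np.abs(W * S))             # grid approximation of ||W S||_inf

res = minimize(weighted_peak, x0=[1.0, 0.5, 0.5], method="Nelder-Mead")
print("PID gains (kp, ki, kd):", res.x, " peak |W S| ~", res.fun)
```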


Principles and applications of control in quantum systems

INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 15 2005
Hideo Mabuchi
Abstract We describe in this article some key themes that emerged during a Caltech/AFOSR Workshop on 'Principles and Applications of Control in Quantum Systems' (PRACQSYS), held 21–24 August 2004 at the California Institute of Technology. This workshop brought together engineers, physicists and applied mathematicians to construct an overview of new challenges that arise when applying constitutive methods of control theory to nanoscale systems whose behaviour is manifestly quantum. Its primary conclusions were that the number of experimentally accessible quantum control systems is steadily growing (with a variety of motivating applications), that appropriate formal perspectives enable straightforward application of the essential ideas of classical control to quantum systems, and that quantum control motivates extensive study of model classes that have previously received scant consideration. Copyright © 2005 John Wiley & Sons, Ltd. [source]


Novel highly elastic magnetic materials for dampers and seals: part II. Material behavior in a magnetic field

POLYMERS FOR ADVANCED TECHNOLOGIES, Issue 7 2007
Abstract The combination of polymers with magnetic particles yields novel and often enhanced properties compared to traditional materials, and can open up possibilities for new technological applications. Magnetic-field-sensitive elastomers represent a new type of composite consisting of small particles, usually in the nanometer-to-micron range, dispersed in a highly elastic polymeric matrix. In this paper, we show that in the presence of built-in magnetic particles it is possible to tune the elastic modulus by an external magnetic field. We propose a phenomenological equation to describe the effect of the external magnetic field on the elastic modulus. We demonstrate the engineering potential of the new materials with two example devices. The first is a new type of seal fundamentally different from those used before. In the simplest case, the sealing assembly includes a magnetoelastic strip and a permanent magnet, which attract each other by magnetic forces. Owing to the high elasticity and good adhesion properties of the proposed composites, the magnetoelastic strip adopts the shape of the surface to be sealed, leading to excellent sealing. Another straightforward application of the magnetic composites is based on their magnetic-field-dependent elastic modulus: we demonstrate in this paper the possible application of these materials as adjustable vibration dampers. Copyright © 2007 John Wiley & Sons, Ltd. [source]


A Contractualist Defense of Democratic Authority

RATIO JURIS, Issue 3 2005
DAVID LEFKOWITZ
My first argument is a straightforward application of contractualist reasoning, and mirrors T. M. Scanlon's defense of a principle of fairness for the distribution of benefits produced by a cooperative scheme. My second argument develops and defends the intuition that treating others morally requires respecting their exercise of moral judgment, or a sense of justice. I conclude by addressing the problem of disagreement over the design of the democratic decision procedure itself, and rebutting Jeremy Waldron's claim that democratic authority is incompatible with judicial review. [source]


Trend estimation of financial time series

APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 3 2010
Víctor M. Guerrero
Abstract We propose to decompose a financial time series into trend plus noise by means of the exponential smoothing filter. This filter produces statistically efficient estimates of the trend that can be calculated by a straightforward application of the Kalman filter. It can also be interpreted in the context of penalized least squares, where an objective function that depends on a smoothing constant is minimized by trading off fidelity to the data against smoothness of the trend. The smoothing constant determines the degree of smoothness, and the problem is how to choose it objectively. We suggest a procedure that allows the user to decide at the outset the desired percentage of smoothness and derive from it the corresponding value of that constant. A definition of smoothness is first proposed, as well as an index of relative precision attributable to the smoothing element of the time series. The procedure is extended to series with different frequencies of observation, so that comparable trends can be obtained for, say, daily, weekly or intraday observations of the same variable. The theoretical results are derived from an integrated moving average model of order (1, 1) underlying the statistical interpretation of the filter. Expressions for equivalent smoothing constants are derived for series generated by temporal aggregation or systematic sampling of another series. Hence, comparable trend estimates can be obtained for the same time series with different lengths, for different time series of the same length, and for series with different frequencies of observation of the same variable. Copyright © 2009 John Wiley & Sons, Ltd. [source]
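
The filter itself fits in a few lines. The sketch below is a minimal illustration of the standard state-space route the abstract alludes to: the local level model, whose reduced form is an IMA(1, 1) process and whose steady-state Kalman filter is the exponential smoothing filter. The signal-to-noise ratio q plays the role of the smoothing constant; the paper's mapping from a desired percentage of smoothness to that constant is not reproduced here.

```python
import numpy as np

# Trend extraction with the exponential smoothing filter, computed as the
# Kalman filter of the local level model y_t = mu_t + eps_t,
# mu_t = mu_{t-1} + eta_t, with q = var(eta)/var(eps) as smoothing constant.

def exponential_smoothing_trend(y, q):
    n = len(y)
    mu = np.empty(n)
    m, p = y[0], 1e6                  # near-diffuse initialization
    for t in range(n):
        p = p + q                     # time update (variances in units of var(eps))
        k = p / (p + 1.0)             # Kalman gain; its steady state is the
        m = m + k * (y[t] - m)        # exponential smoothing constant
        p = (1.0 - k) * p
        mu[t] = m
    return mu

rng = np.random.default_rng(1)
n = 300
trend = np.cumsum(rng.normal(0.0, 0.05, n))      # slowly drifting trend
y = trend + rng.normal(0.0, 0.5, n)              # observed noisy series

smooth = exponential_smoothing_trend(y, q=0.01)  # small q: smoother trend
rough = exponential_smoothing_trend(y, q=1.0)    # large q: trend hugs the data
print("RMSE, q=0.01:", np.sqrt(np.mean((smooth - trend) ** 2)),
      " RMSE, q=1.0:", np.sqrt(np.mean((rough - trend) ** 2)))
```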


Variation, Natural Selection, and Information Content – A Simulation

CHEMISTRY & BIODIVERSITY, Issue 10 2007
Bernard Testa
Abstract In Neo-Darwinism, variation and natural selection are the two evolutionary mechanisms that propel biological evolution. Variation implies changes in the gene pool of a population, enlarging the genetic variability from which natural selection can choose. But in the absence of natural selection, variation causes dissipation and randomization. Natural selection, in contrast, constrains this variability by decreasing the survival and fertility of the less-adapted organisms. The objective of this study is to propose a highly simplified simulation of variation and natural selection, and to relate the observed evolutionary changes in a population to its information content. The model involves an imaginary population of individuals. A quantifiable character allows the individuals to be categorized into bins. The distribution of bin contents (a histogram) was assumed to be Gaussian. The content of each bin was calculated after one to twelve cycles, each cycle spanning N generations (N being undefined). In a first study, selection was simulated in the absence of variation. This was modeled by assuming a differential fertility factor F that increased linearly from the lower bins (F<1.00) to the higher bins (F>1.00). The fertility factor was applied as a multiplication factor during each cycle. Several ranges of fertility were investigated. The resulting histograms became skewed to the right. In a second study, variation was simulated in the absence of selection. This was modeled by assuming that during each cycle each bin lost a fixed percentage of its content (variation factor Y) to its two adjacent bins. The resulting histograms became broader and flatter, while retaining their bilateral symmetry. Different values of Y were monitored. In a third study, various values of F and Y were combined. Our model allows the straightforward application of Shannon's equation and the calculation of a Shannon entropy (SE) value for each histogram. Natural selection was thus shown to result in a progressive decrease in SE as a function of F. In other words, natural selection, when acting alone, progressively increased the information content of the population. In contrast, variation resulted in a progressive increase in SE as a function of Y. In other words, variation acting alone progressively decreased the information content of a population. When both factors, F and Y, were applied simultaneously, their relative weight determined the progressive change in SE. [source]
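
The simulation is simple enough to reproduce directly from the description above. The sketch below follows it step by step; the bin count, the F range, and Y are our illustrative choices within the ranges the paper explores, and since edge-bin handling is not specified, mass leaking past the first and last bins is simply discarded here.

```python
import numpy as np

# Bins with a Gaussian initial histogram; selection multiplies each bin by a
# linear fertility factor F, variation leaks a fraction Y of each bin to its
# two neighbours, and the Shannon entropy (SE) of the normalized histogram
# is tracked over twelve cycles.  Set Y = 0.0 or F[:] = 1.0 to watch either
# mechanism act alone, as in the paper's first two studies.

n_bins = 21
x = np.arange(n_bins)
pop = np.exp(-0.5 * ((x - 10) / 3.0) ** 2)     # Gaussian initial distribution
F = np.linspace(0.9, 1.1, n_bins)              # fertility: <1 low bins, >1 high bins
Y = 0.1                                        # variation (leak) fraction

def shannon_entropy(h):
    p = h / h.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

for cycle in range(1, 13):
    pop = pop * F                              # selection
    leak = Y * pop                             # variation
    pop = pop - leak
    pop[1:] += leak[:-1] / 2.0                 # half of each leak moves up a bin,
    pop[:-1] += leak[1:] / 2.0                 # half moves down a bin
    print(f"cycle {cycle:2d}: SE = {shannon_entropy(pop):.4f} bits")
```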


A multilayered approach to approximating solute polarization

JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 5 2004
Richard I. Maurer
Abstract A hybrid multilayered "ONIOM"-type approach to solvation is presented in which the basic free energy of hydration is taken from the Poisson–Boltzmann method and the contribution of solute polarization is taken from a quantum mechanical implementation of the Born method. The method has been tested on the 52 neutral molecules used in the AM1-SM2 parameterization, and the polarized continuum method is taken as the standard by which the results are assessed. Regression analysis shows that the method gives a small improvement over the standard Poisson–Boltzmann method and a dramatic improvement over the Born method. The system presented here represents one of the more straightforward applications of the multilayered approach to solvation, but other more sophisticated approaches are discussed. © 2004 Wiley Periodicals, Inc. J Comput Chem 25:627–631, 2004 [source]
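
One plausible reading of the layered combination, sketched here with placeholder numbers rather than anything computed in the paper, follows the usual ONIOM extrapolation E = E_low(real) + E_high(model) − E_low(model): a Poisson–Boltzmann base value is corrected by the difference between a quantum mechanical and a classical Born energy. The Born formula is standard; the layer partitioning and all numerical values below are our assumptions.

```python
# ONIOM-style layering sketch for hydration free energies, in kcal/mol:
# dG = dG_low(real) + dG_high(model) - dG_low(model), where the "low" level
# is a classical continuum estimate and the "high" level a QM Born treatment
# that includes solute polarization.  All numbers are illustrative.

def born_energy(q, radius, eps=78.4):
    """Classical Born hydration free energy (q in e, radius in Angstrom)."""
    return -166.0 * (1.0 - 1.0 / eps) * q * q / radius

dG_pb_low_real = -9.1                        # Poisson-Boltzmann, full solute (placeholder)
dG_born_low_model = born_energy(0.31, 2.0)   # classical Born on the model system
dG_born_qm_model = -9.5                      # QM Born incl. polarization (placeholder)

dG_hydration = dG_pb_low_real + (dG_born_qm_model - dG_born_low_model)
print(f"layered hydration free energy estimate: {dG_hydration:.1f} kcal/mol")
```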


Joint projections of temperature and precipitation change from multiple climate models: a hierarchical Bayesian approach

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 1 2009
Claudia Tebaldi
Summary. Posterior distributions for the joint projections of future temperature and precipitation trends and changes are derived by applying a Bayesian hierarchical model to a rich data set of simulated climate from general circulation models. The simulations that are analysed here constitute the future projections on which the Intergovernmental Panel on Climate Change based its recent summary report on the future of our planet's climate, albeit without any sophisticated statistical handling of the data. Here we quantify the uncertainty that is represented by the variable results of the various models and their limited ability to represent the observed climate both at global and at regional scales. We do so in a Bayesian framework, by estimating posterior distributions of the climate change signals in terms of trends or differences between future and current periods, and we fully characterize the uncertain nature of a suite of other parameters, like biases, correlation terms and model-specific precisions. Besides presenting our results in terms of posterior distributions of the climate signals, we offer as an alternative representation of the uncertainties in climate change projections the use of the posterior predictive distribution of a new model's projections. The results from our analysis can find straightforward applications in impact studies, which necessitate not only best guesses but also a full representation of the uncertainty in climate change projections. For water resource and crop models, for example, it is vital to use joint projections of temperature and precipitation to represent the characteristics of future climate best, and our statistical analysis delivers just that. [source]
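
A univariate toy version of the hierarchical machinery, our illustration rather than the paper's full model (which adds bias terms, a temperature-precipitation correlation and an observational constraint), treats each model's projection as a noisy observation of a common climate signal with a model-specific precision and samples the joint posterior by Gibbs sampling:

```python
import numpy as np

# Toy hierarchical combination of multi-model projections: X_i ~ N(mu, 1/lam_i)
# with a flat prior on the common signal mu and vague gamma priors on the
# model-specific precisions lam_i.  Projection values are illustrative.

rng = np.random.default_rng(42)
x = np.array([1.8, 2.4, 2.1, 3.0, 2.6, 1.5, 2.2, 2.9, 2.0])  # degC, made up

a0, b0 = 0.1, 0.1                 # gamma prior (shape, rate) on each precision
mu, lam = x.mean(), np.ones_like(x)
draws = []

for it in range(6000):
    # mu | lam: precision-weighted normal (flat prior on mu).
    prec = lam.sum()
    mu = rng.normal((lam * x).sum() / prec, 1.0 / np.sqrt(prec))
    # lam_i | mu: conjugate gamma update (numpy's gamma takes shape and scale).
    lam = rng.gamma(a0 + 0.5, 1.0 / (b0 + 0.5 * (x - mu) ** 2))
    if it >= 1000:                # discard burn-in
        draws.append(mu)

draws = np.array(draws)
print("posterior mean:", draws.mean(),
      " 90% credible interval:", np.quantile(draws, [0.05, 0.95]))
# The paper additionally reports the posterior predictive distribution of a
# new model's projection, which needs a hierarchical prior linking the lam_i.
```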


Titanium dioxide thin films deposited by the sol-gel technique starting from titanium oxy-acetyl acetonate: gas sensing and photocatalyst applications

PHYSICA STATUS SOLIDI (C) - CURRENT TOPICS IN SOLID STATE PHYSICS, Issue 9 2010
A. Maldonado
Abstract Titanium dioxide (TiO2) thin films were deposited onto sodocalcic glass plates by the sol-gel technique, starting from a non-alkoxide route, namely, titanium oxy-acetyl acetonate as the Ti precursor. The effect of film thickness on both gas sensing and photocatalytic degradation performance was studied. The as-deposited films were annealed in air at 400 °C. All the X-ray spectra of the films show a very broad peak centered at a 2θ angle of around 30°. In the case of the thinnest films the surface morphology is uniform and very smooth, whereas for the thickest films the surface is covered by grains with a rod-like shape and a length on the order of 140 nm. The films were tested for two straightforward applications: ultraviolet-assisted degradation of methylene blue dissolved in water, at different times, and gas sensing in a controlled propane (C3H8) atmosphere. As the film thickness increases, the degradation of methylene blue (MB) also increases. The thickest TiO2 thin films, after 5 hours of catalytic degradation promoted by ultraviolet illumination, showed a final MB solution degradation on the order of 48%. This result can be associated with the increase in the effective exposed area of the TiO2 thin films. On the other hand, exposing the films to a controlled propane atmosphere produced a significant change in the surface electrical resistance at operating temperatures of 200 °C and above. In fact, in the case of the thickest TiO2 films, a dramatic change in electrical resistance between the non-exposed and propane-exposed states, from about 560 to 0.7 MΩ, was registered. The results show that TiO2 films deposited by an economical deposition technique, as is the case of the sol-gel technique, could have important potential for industrial applications. (© 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]