Direct Computation (direct + computation)
Selected Abstracts

Direct computation of thermodynamic properties of chemically reacting air with consideration to CFD
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 4 2003
Joe Iannelli
Article first published online: 2 SEP 200

Abstract: This paper details a two-equation procedure that calculates exactly the mass and mole fractions, pressure, temperature, specific heats, speed of sound, and the thermodynamic and Jacobian partial derivatives of pressure and temperature for five-species chemically reacting equilibrium air. The procedure generates these thermodynamic properties using as independent variables either pressure and temperature or density and internal energy, for CFD applications. An original element of the procedure is the exact, physically meaningful solution of the mass-fraction and mass-action equations. Air-equivalent molecular masses for oxygen and nitrogen are then developed to account, within a mixture of only oxygen and nitrogen, for the presence of carbon dioxide, argon, and the other noble gases in atmospheric air (an illustrative sketch of such a lumping step follows this group of abstracts). The mathematical formulation also introduces a versatile system non-dimensionalization that makes the procedure uniformly applicable to flows ranging from shock-tube flows with zero initial velocity to aerothermodynamic flows with supersonic/hypersonic free-stream Mach numbers. Over a temperature range of more than 10,000 K, and over pressure and density ranges corresponding to an increase in altitude of 30,000 m above sea level in the standard atmosphere, the predicted distributions of mole fractions, constant-volume specific heat, and speed of sound for the five-species model agree with independently published results, and all the calculated thermodynamic properties, including their partial derivatives, remain continuous, smooth, and physically meaningful. Copyright © 2003 John Wiley & Sons, Ltd. [source]

Simultaneous prediction intervals for ARMA processes with stable innovations
JOURNAL OF FORECASTING, Issue 3 2009
John P. Nolan

Abstract: We describe a method for calculating simultaneous prediction intervals for ARMA time series with heavy-tailed stable innovations. The spectral measure of the vector of prediction errors is shown to be discrete. Direct computation of high-dimensional stable probabilities is not feasible, but we show that Monte Carlo estimation of the interval widths is practical (a simulation-based sketch follows this group of abstracts). Copyright © 2008 John Wiley & Sons, Ltd. [source]

An approximate-state Riemann solver for the two-dimensional shallow water equations with porosity
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 12 2010
P. Finaud-Guyot

Abstract: PorAS, a new approximate-state Riemann solver, is proposed for hyperbolic systems of conservation laws with source terms and porosity. The use of porosity enables a simple representation of urban floodplains by taking into account the global reduction in the exchange sections and storage. The introduction of the porosity coefficient induces modified expressions for the fluxes and source terms in the continuity and momentum equations. The solution is considered to be made of rarefaction waves and is determined using the Riemann invariants. To allow a direct computation of the flux through the computational cell interfaces, the Riemann invariants are expressed as functions of the flux vector. The application of the PorAS solver to the shallow water equations is presented, and several computational examples are given for comparison with the HLLC solver. Copyright © 2009 John Wiley & Sons, Ltd. [source]
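For the first abstract above (Iannelli, chemically reacting equilibrium air), the following is a minimal sketch of one way "air-equivalent" molecular masses for nitrogen and oxygen can be formed by lumping the trace constituents of standard dry air. The grouping rule (argon folded into the nitrogen equivalent, carbon dioxide into the oxygen equivalent) and the composition values are assumptions for illustration, not the paper's actual derivation.

```python
# Hedged sketch: one mass- and mole-conserving way to lump trace species of
# standard dry air into "air-equivalent" N2 and O2 molecular masses. The
# grouping rule (Ar and other inerts -> N2-equivalent, CO2 -> O2-equivalent)
# is an assumption for illustration, not necessarily the rule used in the paper.

# Approximate dry-air composition (mole fraction) and molar masses (g/mol).
composition = {
    "N2":  (0.78084, 28.0134),
    "O2":  (0.20946, 31.9988),
    "Ar":  (0.00934, 39.948),
    "CO2": (0.00036, 44.0095),
}

def equivalent(species_names):
    """Mole-fraction-weighted molar mass of a group of species."""
    x_tot = sum(composition[s][0] for s in species_names)
    m_avg = sum(composition[s][0] * composition[s][1] for s in species_names) / x_tot
    return x_tot, m_avg

x_n2_eq, m_n2_eq = equivalent(["N2", "Ar"])    # fold argon into "nitrogen"
x_o2_eq, m_o2_eq = equivalent(["O2", "CO2"])   # fold CO2 into "oxygen"

# The two-species surrogate reproduces the mean molar mass of full air.
m_air_full = sum(x * m for x, m in composition.values())
m_air_eq = x_n2_eq * m_n2_eq + x_o2_eq * m_o2_eq
print(f"N2-equivalent: x = {x_n2_eq:.5f}, M = {m_n2_eq:.4f} g/mol")
print(f"O2-equivalent: x = {x_o2_eq:.5f}, M = {m_o2_eq:.4f} g/mol")
print(f"mean molar mass check: {m_air_full:.4f} vs {m_air_eq:.4f}")
```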
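For the Nolan abstract on ARMA processes with stable innovations, the sketch below illustrates why Monte Carlo calibration of simultaneous prediction bands is practical even though high-dimensional stable probabilities cannot be computed directly. It uses a plain simulation of AR(1) forecast errors and a max-type calibration; this is a hedged stand-in, not the paper's spectral-measure construction, and all parameter values are assumptions.

```python
# Hedged sketch: Monte Carlo calibration of *simultaneous* prediction bands for
# an AR(1) process with symmetric alpha-stable innovations. The model and all
# parameter values are illustrative assumptions.
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)
alpha, phi = 1.7, 0.6          # stability index and AR(1) coefficient (assumed)
horizon, n_paths = 10, 20_000  # forecast horizon and Monte Carlo sample size

# h-step-ahead prediction error of an AR(1): e_h = sum_{j=0}^{h-1} phi^j * Z_{h-j}
z = levy_stable.rvs(alpha, 0.0, size=(n_paths, horizon), random_state=rng)
errors = np.zeros((n_paths, horizon))
acc = np.zeros(n_paths)
for h in range(horizon):
    acc = phi * acc + z[:, h]
    errors[:, h] = acc

# Standardize each horizon by a robust scale (stable laws have no variance),
# then calibrate one constant c so the band covers the whole path 95% of the time.
scale = np.subtract(*np.percentile(errors, [75, 25], axis=0))  # per-horizon IQR
c = np.quantile(np.max(np.abs(errors) / scale, axis=1), 0.95)
half_width = c * scale          # simultaneous half-widths for horizons 1..10
print(np.round(half_width, 2))
```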
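For the Finaud-Guyot abstract on the shallow water equations with porosity, the snippet below shows, in one dimension and under the usual porosity model, how the coefficient phi modifies the fluxes and adds a non-conservative source term. It is only flux/source bookkeeping, not the PorAS approximate-state Riemann solver itself; bed slope and friction are omitted.

```python
# Hedged sketch: how a porosity coefficient phi modifies the 1D shallow-water
# fluxes and adds a non-conservative momentum source term. This is not the
# PorAS solver; the 1D restriction and the omission of bed slope and friction
# are simplifications.
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def porous_flux(h, u, phi):
    """Flux of (phi*h, phi*h*u) for the shallow-water equations with porosity."""
    return np.array([phi * h * u,
                     phi * (h * u**2 + 0.5 * G * h**2)])

def porosity_source(h, phi_left, phi_right, dx):
    """Non-conservative momentum source g*h^2/2 * d(phi)/dx (cell-centred estimate)."""
    return np.array([0.0,
                     0.5 * G * h**2 * (phi_right - phi_left) / dx])

# Example: the same free-surface state carries less mass and momentum flux
# through a cell whose porosity is reduced by buildings.
print(porous_flux(h=2.0, u=1.0, phi=1.0))   # open floodplain
print(porous_flux(h=2.0, u=1.0, phi=0.6))   # 40% of the section blocked
print(porosity_source(h=2.0, phi_left=1.0, phi_right=0.6, dx=10.0))
```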
MAP fusion method for superresolution of images with locally varying pixel quality
INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 4 2008
Kio Kim

Abstract: Superresolution is a procedure that produces a high-resolution image from a set of low-resolution images. Many superresolution techniques are designed for optical cameras, which produce pixel values with well-defined uncertainty, whereas there are still various imaging modalities for which the uncertainty of the images is difficult to control. To construct a superresolution image from low-resolution images with varying uncertainty, one needs to keep track of the uncertainty values in addition to the pixel values. In this paper, we develop a probabilistic approach to superresolution to address the problem of varying uncertainty. As direct computation of the analytic solution for the superresolution problem is difficult, we suggest a novel algorithm for computing the approximate solution. As this algorithm is a noniterative method based on Kalman filter-like recursion relations, there is a potential for real-time implementation of the algorithm (a sketch of this style of recursive fusion follows the next abstract). To show the efficiency of our method, we apply this algorithm to a video sequence acquired by a forward-looking sonar system. © 2008 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 18, 242-250, 2008; Published online in Wiley InterScience (www.interscience.wiley.com). [source]

Development of a skew µ lower bound
INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 11 2005
Rod Holland

Abstract: Exploiting the structure of the NP-hard mixed µ problem provides a polynomial-time algorithm that approximates µ, usually with reasonable accuracy. When the problem is extended to skew µ, the existing method must likewise be extended to the skew µ formulation. The focus of this paper is to extend the µ lower-bound derivation to a skew µ lower bound and to show its direct computation by way of a power algorithm. Copyright © 2005 John Wiley & Sons, Ltd. [source]
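For the Kim abstract on MAP fusion for superresolution, the sketch below shows the kind of noniterative, Kalman filter-like recursion that can fuse frames whose pixel uncertainty varies locally. Registration and the point-spread-function model are omitted, so this only illustrates the per-pixel uncertainty bookkeeping; it is not the paper's algorithm.

```python
# Hedged sketch: Kalman-style, non-iterative fusion of frames with locally
# varying pixel quality. Frames are assumed already registered to the
# high-resolution grid; only the per-pixel variance bookkeeping is shown.
import numpy as np

def fuse_frame(mean, var, frame, frame_var):
    """One recursive (Kalman-like) update of the fused image and its variance."""
    gain = var / (var + frame_var)          # per-pixel Kalman gain
    mean = mean + gain * (frame - mean)     # precision-weighted correction
    var = (1.0 - gain) * var                # uncertainty shrinks where data are good
    return mean, var

rng = np.random.default_rng(1)
truth = rng.uniform(0.0, 1.0, size=(4, 4))

# Start from the first frame; fuse later frames one by one as they arrive.
mean = truth + rng.normal(0.0, 0.3, truth.shape)
var = np.full(truth.shape, 0.3**2)
for _ in range(10):
    frame_var = rng.uniform(0.05, 0.5, truth.shape) ** 2   # locally varying quality
    frame = truth + rng.normal(0.0, np.sqrt(frame_var))
    mean, var = fuse_frame(mean, var, frame, frame_var)

print(np.round(np.abs(mean - truth).mean(), 4), np.round(var.mean(), 4))
```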
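For the Holland abstract on the skew µ lower bound, the snippet below shows only the generic power-iteration skeleton on which µ lower-bound power algorithms are built. The structured scalings specific to mixed and skew µ are not reproduced; the matrix is an arbitrary example.

```python
# Hedged sketch: generic power iteration for a dominant eigenvalue. Mixed- and
# skew-mu lower bounds are computed with specialised power algorithms built on
# this kind of fixed-point iteration; the actual skew-mu recursions of the
# paper are not reproduced here.
import numpy as np

def power_iteration(a, iters=200, tol=1e-10, seed=0):
    """Dominant eigenvalue/eigenvector of a square matrix by repeated application."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=a.shape[0])
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(iters):
        w = a @ v
        lam_new = np.linalg.norm(w)
        v_new = w / lam_new
        if abs(lam_new - lam) < tol:
            break
        lam, v = lam_new, v_new
    return lam_new, v_new

a = np.array([[2.0, 1.0], [1.0, 3.0]])
lam, v = power_iteration(a)
print(lam)   # close to the largest eigenvalue (~3.618 for this matrix)
```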
A study on the optimization of the deployment of targeted observations using adjoint-based methods
THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 583 2002
Thierry Bergot

Abstract: A new adjoint-based method to find the optimal deployment of targeted observations, called Kalman Filter Sensitivity (KFS), is introduced. The major advantage of this adjoint-based method is that it allows direct computation of the reduction of the forecast-score error variance that would result from future deployment of targeted observations (the underlying Kalman-filter algebra is sketched at the end of this page). This method is applied in a very simple one-dimensional context, and is then compared to other adjoint-based products, such as classical gradients and gradients with respect to observations. The major conclusion is that the deployment of targeted observations is strongly constrained by the aspect ratio between the length-scale of the sensitivity area and the length-scale of the analysis-error covariance matrix. This very simple example also clearly illustrates that the reduction of forecast-error variance is stronger for assimilation schemes with a smaller characteristic length-scale. Finally, the KFS technique is applied in a diagnostic way (i.e. once the observations have been made) to four FASTEX cases. For these cases, the reduction of the forecast-error variance is in agreement with the efficiency of targeted observations found in previous studies. A preliminary step towards operational use has been performed on FASTEX IOP18, and the results seem to validate the KFS approach to targeting. Copyright © 2002 Royal Meteorological Society. [source]

Using Empirical Likelihood to Combine Data: Application to Food Risk Assessment
BIOMETRICS, Issue 1 2009
Amélie Crépet

Summary: This article introduces an original methodology based on empirical likelihood, which aims at combining different food contamination and consumption surveys to provide risk managers with a risk measure that takes into account all the available information. This risk index is defined as the probability that exposure to a contaminant exceeds a safe dose. It is naturally expressed as a nonlinear functional of the different consumption and contamination distributions, more precisely as a generalized U-statistic. This nonlinearity and the huge size of the data sets make direct computation of the problem unfeasible. Using linearization techniques and incomplete versions of the U-statistic, a tractable "approximated" empirical likelihood program is solved, yielding asymptotic confidence intervals for the risk index (a Monte Carlo sketch of the incomplete-U-statistic idea appears at the end of this page). An alternative "Euclidean likelihood program" is also considered, replacing the Kullback-Leibler distance involved in the empirical likelihood by the Euclidean distance. Both methodologies are tested on simulated data and applied to assess the risk due to the presence of methyl mercury in fish and other seafood. [source]

Atomic Properties of Amino Acids: Computed Atom Types as a Guide for Future Force-Field Design
CHEMPHYSCHEM, Issue 8 2003
Paul L. A. Popelier

Abstract: Quantum chemical topology (QCT) is able to propose atom types by direct computation rather than by chemical intuition. In previous work, the molecular electron densities of 20 amino acids and smaller derived molecules were partitioned into a set of 760 topological atoms. Each atom was characterised by seven atomic properties and subjected to cluster analysis element by element, that is, for C, H, O, N, and S. From the respective dendrograms, 21 carbon atom types were distinguished, along with 7 hydrogen, 2 nitrogen, 6 oxygen, and 6 sulfur atom types. Herein, we contrast the QCT atom types with those of the assisted model building with energy refinement (AMBER) force field. We conclude that, despite fair agreement between QCT and AMBER atom types, the latter are sometimes underdifferentiated and sometimes overdifferentiated. In summary, we suggest that QCT is a useful guide in designing new force fields or improving existing ones. The computational origin of QCT atom types makes their determination unbiased compared with atom-type determination by chemical intuition and a priori assumptions. We provide a list of specific recommendations. [source]
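For the Bergot abstract on adjoint-based targeting (KFS), the sketch below shows the Kalman-filter algebra that yields the reduction in forecast-score error variance from assimilating one candidate observation. The adjoint sensitivity, covariance matrix, and observation error variance are illustrative assumptions, not values from the paper or from FASTEX.

```python
# Hedged sketch: Kalman-filter algebra for ranking candidate targeted
# observations by how much they would reduce the forecast-score error variance.
# The adjoint sensitivity s = dJ/dx0 is taken as given (in practice it comes
# from an adjoint model run); B, the observation operators, and r are assumed.
import numpy as np

def variance_reduction(B, s, h, r):
    """Drop in Var(J), J ~ s.x0, after assimilating one obs y = h.x0 + eps, Var(eps) = r."""
    return float((h @ B @ s) ** 2 / (h @ B @ h + r))

n = 4
B = 0.5 * np.eye(n) + 0.1                      # background error covariance (assumed)
s = np.array([1.0, 0.2, 0.0, 0.0])             # adjoint sensitivity of the forecast score
r = 0.2                                        # observation error variance

# Rank candidate observation locations (unit "point" observations here).
candidates = [np.eye(n)[i] for i in range(n)]
gains = [variance_reduction(B, s, h, r) for h in candidates]
best = int(np.argmax(gains))
print(np.round(gains, 3), "-> best site:", best)
```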
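For the Crépet abstract on combining food surveys, the sketch below illustrates the incomplete-U-statistic idea: the risk index P(exposure > safe dose) is estimated on a random subset of (consumption, contamination) pairs rather than on every combination. All data are simulated, and the empirical-likelihood confidence intervals of the paper are not reproduced.

```python
# Hedged sketch: "incomplete U-statistic" Monte Carlo estimate of the risk
# index P(exposure > safe dose), pairing draws from a consumption survey with
# draws from a separate contamination survey. All data below are simulated.
import numpy as np

rng = np.random.default_rng(2)

# Simulated surveys (independent sources): weekly fish intake (kg per kg body
# weight) and methyl-mercury contamination (mg per kg of fish).
consumption = rng.lognormal(mean=-6.0, sigma=0.8, size=3_000)
contamination = rng.lognormal(mean=-1.0, sigma=0.6, size=500)
safe_dose = 1.6e-3   # illustrative tolerable weekly intake, mg per kg bw per week

# Incomplete U-statistic: evaluate the kernel 1{exposure > d} on B random
# (consumption, contamination) pairs rather than on all 3000 x 500 pairs.
B = 200_000
i = rng.integers(len(consumption), size=B)
j = rng.integers(len(contamination), size=B)
exposure = consumption[i] * contamination[j]
risk_index = np.mean(exposure > safe_dose)
print(f"estimated P(exposure > safe dose) = {risk_index:.4f}")
```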
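For the Popelier abstract on QCT atom types, the sketch below mirrors the dendrogram step: per-atom property vectors are standardized, clustered hierarchically, and cut into atom types. The seven properties are represented by placeholder data; the real study clusters topological-atom properties computed from electron densities.

```python
# Hedged sketch: deriving atom types by hierarchical clustering of per-atom
# property vectors. The seven QCT properties are replaced by random placeholder
# data; only the clustering workflow is illustrated.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(3)

# Placeholder: 120 carbon atoms x 7 atomic properties (charge, energy, volume, ...),
# drawn from three artificial chemical "environments".
props = np.vstack([
    rng.normal(loc=c, scale=0.2, size=(40, 7))
    for c in (-0.5, 0.0, 0.7)
])

# Standardize each property, then cluster with Ward linkage and cut the tree.
z = (props - props.mean(axis=0)) / props.std(axis=0)
tree = linkage(z, method="ward")
atom_types = fcluster(tree, t=3, criterion="maxclust")

print("atoms per derived atom type:", np.bincount(atom_types)[1:])
```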