Linear Combination (linear + combination)
Selected Abstracts

Local characteristics of the electronic structure of MgO: LCAO and plane-wave calculations
INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, Issue 2 2005. R. A. Evarestov
Abstract: Linear combinations of atomic orbitals and plane-wave calculations of the electronic structure of the ionic crystal MgO were performed. Local characteristics of the electronic structure of this crystal are obtained using the traditional approaches and the method based on Wannier-type atomic orbitals (WTAOs). It is demonstrated that the results of the conventional methods for chemical bonding analysis in MgO are contradictory and unreasonable. On the contrary, the results of the WTAO method for both types of basis correctly exhibit the ionic nature of chemical bonding in this crystal. © 2005 Wiley Periodicals, Inc. Int J Quantum Chem, 2005 [source]

Smoothness adaptive average derivative estimation
THE ECONOMETRICS JOURNAL, Issue 1 2010. Marcia M. A. Schafgans
Summary: Many important models utilize estimation of average derivatives of the conditional mean function. Asymptotic results in the literature on density-weighted average derivative estimators (ADE) focus on convergence at parametric rates; this requires stringent assumptions on the smoothness of the underlying density. Here we derive asymptotic properties under relaxed smoothness assumptions. We adapt to the unknown smoothness in the model by consistently estimating the optimal bandwidth rate and using linear combinations of ADE estimators for different kernels and bandwidths. Linear combinations of estimators (i) can have smaller asymptotic mean squared error (AMSE) than an estimator with an optimal bandwidth and (ii) when based on the estimated optimal rate bandwidth can adapt to unknown smoothness and achieve rate optimality. Our combined estimator minimizes the trace of the estimated MSE of linear combinations. Monte Carlo results for ADE confirm good performance of the combined estimator. [source]
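To make the weighting step concrete, here is a minimal numerical sketch of choosing the linear combination of several candidate estimates that minimizes the trace of an estimated MSE matrix, subject to the weights summing to one. It is a generic illustration under those assumptions, not the authors' estimator; the function name, argument shapes and conventions are invented for the example.

```python
import numpy as np

# Minimal sketch of the weighting step: combine K candidate estimates by the
# weights that minimize the trace of the estimated MSE of the combination,
# subject to the weights summing to one.  Shapes and names are illustrative
# assumptions, not the authors' implementation.

def combine_estimators(estimates, mse_blocks):
    """estimates  : (K, p) array, one estimate per kernel/bandwidth choice.
    mse_blocks : (K, K, p, p) array of estimated MSE blocks
                 E[(theta_k - theta)(theta_l - theta)'].
    Returns the weights and the combined estimate."""
    K = estimates.shape[0]
    M = np.trace(mse_blocks, axis1=2, axis2=3)   # M[k, l] = trace of the (k, l) block
    ones = np.ones(K)
    Minv_1 = np.linalg.solve(M, ones)
    w = Minv_1 / (ones @ Minv_1)                 # argmin w'Mw  subject to  sum(w) = 1
    return w, w @ estimates
```

Under the sum-to-one constraint the minimizer has the closed form w proportional to M⁻¹1, which is what the linear solve above computes.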
Neural Network Earnings per Share Forecasting Models: A Comparative Analysis of Alternative Methods
DECISION SCIENCES, Issue 2 2004. Wei Zhang
Abstract: In this paper, we present a comparative analysis of the forecasting accuracy of univariate and multivariate linear models that incorporate fundamental accounting variables (i.e., inventory, accounts receivable, and so on) with the forecast accuracy of neural network models. Unique to this study is the focus of our comparison on the multivariate models to examine whether the neural network models incorporating the fundamental accounting variables can generate more accurate forecasts of future earnings than the models assuming a linear combination of these same variables. We investigate four types of models: univariate-linear, multivariate-linear, univariate-neural network, and multivariate-neural network using a sample of 283 firms spanning 41 industries. This study shows that the application of the neural network approach incorporating fundamental accounting variables results in forecasts that are more accurate than linear forecasting models. The results also reveal limitations of the forecasting capacity of investors in the security market when compared to neural network models. [source]

Predicting intra-urban variation in air pollution concentrations with complex spatio-temporal dependencies
ENVIRONMETRICS, Issue 6 2010. Adam A. Szpiro
Abstract: We describe a methodology for assigning individual estimates of long-term average air pollution concentrations that accounts for a complex spatio-temporal correlation structure and can accommodate spatio-temporally misaligned observations. This methodology has been developed as part of the Multi-Ethnic Study of Atherosclerosis and Air Pollution (MESA Air), a prospective cohort study funded by the US EPA to investigate the relationship between chronic exposure to air pollution and cardiovascular disease. Our hierarchical model decomposes the space-time field into a "mean" that includes dependence on covariates and spatially varying seasonal and long-term trends and a "residual" that accounts for spatially correlated deviations from the mean model. The model accommodates complex spatio-temporal patterns by characterizing the temporal trend at each location as a linear combination of empirically derived temporal basis functions, and embedding the spatial fields of coefficients for the basis functions in separate linear regression models with spatially correlated residuals (universal kriging). This approach allows us to implement a scalable single-stage estimation procedure that easily accommodates a significant number of missing observations at some monitoring locations. We apply the model to predict long-term average concentrations of oxides of nitrogen (NOx) from 2005 to 2007 in the Los Angeles area, based on data from 18 EPA Air Quality System regulatory monitors. The cross-validated R2 is 0.67. The MESA Air study is also collecting additional concentration data as part of a supplementary monitoring campaign. We describe the sampling plan and demonstrate in a simulation study that the additional data will contribute to improved predictions of long-term average concentrations. Copyright © 2009 John Wiley & Sons, Ltd. [source]
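The two-stage structure described in this abstract, fitting shared temporal basis functions at the monitors and then carrying the basis coefficients across space with a spatial regression, can be sketched as follows. Plain least squares stands in for the universal kriging step, and all array shapes and names are placeholders, so this is an illustration of the general recipe rather than the MESA Air model.

```python
import numpy as np

# Sketch of the basis-function idea: each monitor's time series is written as
# a linear combination of shared temporal basis functions, and the basis
# coefficients are then related to site covariates by a spatial regression
# (ordinary least squares here stands in for universal kriging).

def fit_site_coefficients(conc, basis):
    """conc: (n_times, n_sites) concentrations; basis: (n_times, n_basis)."""
    coef, *_ = np.linalg.lstsq(basis, conc, rcond=None)
    return coef                                          # (n_basis, n_sites)

def predict_long_term_average(site_covariates, coef, basis, new_covariates):
    """Regress each coefficient field on site covariates, predict the
    coefficients at a new location, and average the rebuilt series in time."""
    X = np.column_stack([np.ones(len(site_covariates)), site_covariates])
    beta, *_ = np.linalg.lstsq(X, coef.T, rcond=None)    # (n_cov + 1, n_basis)
    x_new = np.concatenate(([1.0], new_covariates))
    coef_new = x_new @ beta                              # coefficients at the new site
    return (basis @ coef_new).mean()
```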
Non-parametric tests and confidence regions for intrinsic diversity profiles of ecological populations
ENVIRONMETRICS, Issue 8 2003. Tonio Di Battista
Abstract: Evaluation of diversity profiles is useful for ecologists to quantify the diversity of biological communities. Measures of diversity profile can be expressed as a function of the unknown abundance vector. Thus, the estimators and related confidence regions and tests of hypotheses involve aspects of multivariate analysis. In this setting, using a suitable sampling design, inference is developed assuming an asymptotic specific distribution of the profile estimator. However, in a biological framework, ecologists work with small sample sizes, and the use of any probability distribution is hazardous. Assuming that a sample belongs to the family of replicated sampling designs, we show that the diversity profile estimator can be expressed as a linear combination of the ranked abundance vector estimators. Hence we are able to develop a non-parametric approach based on a bootstrap in order to build balanced simultaneous confidence sets and tests of hypotheses for diversity profiles. Finally, the proposed procedure is applied on the avian populations of four parks in Milan, Italy. Copyright © 2003 John Wiley & Sons, Ltd. [source]
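A rough sketch of the bootstrap idea: compute a diversity profile from ranked relative abundances, resample the replicated samples, and read off percentile bands. The right-tail-sum profile and the pointwise (rather than balanced simultaneous) bands used below are simplifications chosen only to illustrate the mechanics, not the authors' exact procedure.

```python
import numpy as np

def intrinsic_profile(counts):
    """Right-tail-sum profile of the ranked relative abundances
    (one common definition; used here only for illustration)."""
    p = np.sort(counts / counts.sum())[::-1]
    return np.cumsum(p[::-1])[::-1][1:]          # tail sums of the ranked abundances

def bootstrap_profile_band(replicates, n_boot=2000, alpha=0.05, seed=0):
    """replicates: (n_rep, n_species) counts from a replicated design.
    Returns the pooled-sample profile and pointwise percentile bands."""
    rng = np.random.default_rng(seed)
    est = intrinsic_profile(replicates.sum(axis=0))
    boot = np.empty((n_boot, est.size))
    for b in range(n_boot):
        idx = rng.integers(0, len(replicates), len(replicates))
        boot[b] = intrinsic_profile(replicates[idx].sum(axis=0))
    lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2], axis=0)
    return est, lo, hi
```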
Economic Sentiment and Yield Spreads in Europe
EUROPEAN FINANCIAL MANAGEMENT, Issue 2 2008. Eva Ferreira. JEL codes: G12; E43
Abstract: According to Harvey (1988), the forecasting ability of the term spread on economic growth is due to the fact that interest rates reflect investors' expectations about the future economic situation when deciding their plans for consumption and investment. Past literature has used ex post data on output or consumption growth as proxies for their expected value. In this paper, we employ a direct measure of economic agents' expectations, the Economic Sentiment Indicator elaborated by the European Commission, to test this hypothesis. Our results indicate that a linear combination of European yield spreads explains a surprising 93.7% of the variability of the Economic Sentiment Indicator. This ability of yield spreads to capture economic agent expectations may be the actual reason for the predictive power of yield spreads about the future business cycle. [source]

A Model of Factors Correlated to Homeownership: The Case of Utah
FAMILY & CONSUMER SCIENCES RESEARCH JOURNAL, Issue 1 2001. Lucy Delgadillo
This article examines the relationship between homeownership and socioeconomic, demographic, and market factors in Utah. Units of analyses were census-designated places. The goal was to provide a model that can be replicated by housing specialists and consumer scientists to gain a better understanding of how homeownership (dependent variable) differs from place to place and how this variation relates to socioeconomic index, population density, affordability ratio, and the median value of owner-occupied housing units (independent variables). The 1990 data set was analyzed using bivariate and multivariate analyses. Homeownership percentages were regressed on the linear combination of the socioeconomic scale, log of population density, and affordability ratios. Log of population density was the factor that explained most of the variance. The interaction equation slightly improved the explanatory power, accounting for more than 50% of the variance. [source]

A frequency-domain formulation of MCE method for multi-axial random loadings
FATIGUE & FRACTURE OF ENGINEERING MATERIALS AND STRUCTURES, Issue 11 2008. D. Benasciutti
Abstract: Many multi-axial fatigue limit criteria are formalized as a linear combination of a shear stress amplitude and a normal stress. To identify the shear stress amplitude, appropriate conventional definitions, such as the minimum circumscribed circle (MCC) or ellipse (MCE) proposals, are in use. Despite computational improvements, deterministic algorithms implementing the MCC/MCE methods are exceptionally time-demanding when applied to "coiled" random loading paths resulting from in-service multi-axial loadings, and they may also provide insufficiently robust and reliable results. It would then be preferable to characterize multi-axial random loadings by statistical re-formulations of the deterministic MCC/MCE methods. Following an early work of Pitoiset et al., this paper presents a statistical re-formulation for the MCE method. Numerical simulations are used to compare both statistical re-formulations with their deterministic counterparts. The observed general good trend, with some better performance of the statistical approach, confirms the validity, reliability and robustness of the proposed formulation. [source]

Entanglement and symmetry effects in the transition to the Schrödinger cat regime
FORTSCHRITTE DER PHYSIK/PROGRESS OF PHYSICS, Issue 11-12 2009. F. de Pasquale
Abstract: We study two-spin entanglement and order parameter fluctuations as a function of the system size in the XY model in a transverse field and in the isotropic XXX model. Both models are characterized by the occurrence of ground state degeneracy also when systems of finite size are considered. This is always true for the XXX model, but only at the factorizing field for the XY model. We study the size dependence of symmetric states, which, in the presence of degeneracy, can be expanded as a linear combination of broken symmetry states. We show that, while the XY model loses its quantum superposition content exponentially with the size N, a decrease of the order of 1/N is observed when the XXX model is considered. The emergence of two qualitatively different regimes is directly related to the difference in the symmetry of the models. [source]

Pleiotropy and principal components of heritability combine to increase power for association analysis
GENETIC EPIDEMIOLOGY, Issue 1 2008. Lambertus Klei
Abstract: When many correlated traits are measured the potential exists to discover the coordinated control of these traits via genotyped polymorphisms. A common statistical approach to this problem involves assessing the relationship between each phenotype and each single nucleotide polymorphism (SNP) individually (PHN), and taking a Bonferroni correction for the effective number of independent tests conducted. Alternatively, one can apply a dimension reduction technique, such as estimation of principal components, and test for an association with the principal components of the phenotypes (PCP) rather than the individual phenotypes. Building on the work of Lange and colleagues we develop an alternative method based on the principal component of heritability (PCH). For each SNP the PCH approach reduces the phenotypes to a single trait that has a higher heritability than any other linear combination of the phenotypes. As a result, the association between a SNP and the derived trait is often easier to detect than an association with any of the individual phenotypes or the PCP. When applied to unrelated subjects, PCH has a drawback. For each SNP it is necessary to estimate the vector of loadings that maximizes the heritability over all phenotypes. We develop a method of iterated sample splitting that uses one portion of the data for training and the remainder for testing. This cross-validation approach maintains the type I error control and yet utilizes the data efficiently, resulting in a powerful test for association. Genet. Epidemiol. 2007. © 2007 Wiley-Liss, Inc. [source]

Joint inversion of multiple data types with the use of multiobjective optimization: problem formulation and application to the seismic anisotropy investigations
GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 2 2007. E. Kozlovskaya
Summary: In geophysical studies the problem of joint inversion of multiple experimental data sets obtained by different methods is conventionally considered as a scalar one. Namely, a solution is found by minimization of a linear combination of functions describing the fit of the values predicted from the model to each set of data. In the present paper we demonstrate that this standard approach is not always justified and propose to consider a joint inversion problem as a multiobjective optimization problem (MOP), for which the misfit function is a vector. The method is based on analysis of two types of solutions to the MOP considered in the space of misfit functions (objective space). The first one is a set of complete optimal solutions that minimize all the components of a vector misfit function simultaneously. The second one is a set of Pareto optimal solutions, or trade-off solutions, for which it is not possible to decrease any component of the vector misfit function without increasing at least one other. We investigate the connection between the standard formulation of a joint inversion problem and the multiobjective formulation and demonstrate that the standard formulation is a particular case of scalarization of a multiobjective problem using a weighted sum of component misfit functions (objectives). We illustrate the multiobjective approach with a non-linear problem of the joint inversion of shear wave splitting parameters and longitudinal wave residuals. Using synthetic data and real data from three passive seismic experiments, we demonstrate that random noise in the data and inexact model parametrization destroy the complete optimal solution, which degenerates into a fairly large Pareto set. As a result, non-uniqueness of the problem of joint inversion increases. If the random noise in the data is the only source of uncertainty, the Pareto set expands around the true solution in the objective space. In this case the 'ideal point' method of scalarization of multiobjective problems can be used. If the uncertainty is due to inexact model parametrization, the Pareto set in the objective space deviates strongly from the true solution. In this case all scalarization methods fail to find the solution close to the true one and a change of model parametrization is necessary. [source]
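The contrast drawn in this abstract between weighted-sum scalarization and the Pareto (trade-off) set can be illustrated with a toy two-objective problem; the one-parameter model space and the quadratic misfits below are stand-ins, not the seismic inversion.

```python
import numpy as np

# Toy illustration: a joint inverse problem with two misfit functions can be
# scalarized as a weighted sum, or characterized by its Pareto (trade-off) set.
# The model space, misfits and grid search are placeholders.

def pareto_mask(F):
    """F: (n, 2) array of misfit pairs.  True where the point is non-dominated
    (no other point is at least as good in both objectives and better in one)."""
    mask = np.ones(len(F), dtype=bool)
    for i, f in enumerate(F):
        dominated = np.any(np.all(F <= f, axis=1) & np.any(F < f, axis=1))
        mask[i] = not dominated
    return mask

m = np.linspace(-2.0, 2.0, 401)                       # one-parameter model space
F = np.column_stack([(m - 1.0) ** 2, (m + 1.0) ** 2])  # two competing misfits

pareto_models = m[pareto_mask(F)]                      # trade-off solutions
print(f"Pareto models span [{pareto_models.min():.2f}, {pareto_models.max():.2f}]")

# weighted-sum scalarization: each weight picks out one Pareto-optimal model
for w in (0.25, 0.5, 0.75):
    best = m[np.argmin(w * F[:, 0] + (1.0 - w) * F[:, 1])]
    print(f"w = {w:.2f} -> model {best:+.2f}")
```

Each weight in the scan selects a single Pareto-optimal model, which is the sense in which the standard joint-inversion formulation is a particular scalarization of the multiobjective one.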
Magnetic quantification of urban pollution sources in atmospheric particulate matter
GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 2 2004. S. Spassov
Summary: A new method is presented for fast quantification of urban pollution sources in atmospheric particulate matter (PM). The remanent magnetization of PM samples collected in Switzerland at sites with different exposures to pollution sources is analysed. The coercivity distribution of each sample is calculated from detailed demagnetization curves of anhysteretic remanent magnetization (ARM) and is modelled using a linear combination of appropriate functions which represent the contribution of different sources of magnetic minerals to the total magnetization. Two magnetic components, C1 and C2, are identified in all samples. The low-coercivity component C1 predominates in less polluted sites, whereas the concentration of the higher-coercivity component C2 is large in urban areas. The same sites were monitored independently by Hüglin using detailed chemical analysis and a quantitative source attribution of the PM. His results are compared with the magnetic component analysis. The absolute and relative magnetic contributions of component C2 correlate very well with absolute and relative mass contributions of exhaust emissions, respectively. Traffic is the most important PM pollution source in Switzerland: it includes exhaust emissions and abrasion products released by vehicle brakes. Component C2 and traffic-related PM sources correlate well, which is encouraging for the implementation of non-destructive magnetic methods as an economic alternative to chemical analysis when mapping urban dust pollution. [source]

Network-magnetotelluric method and its first results in central and eastern Hokkaido, NE Japan
GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 1 2001. Makoto Uyeshima
Summary: A new field observation technique based on the magnetotelluric (MT) method has been developed to determine deep and large-scale 3-D electrical conductivity distributions in the Earth. The method is named 'Network-MT', and employs a commercial telephone network to measure voltage differences with long dipole lengths ranging from 10 to several tens of kilometres. This observation configuration enables us to obtain the telluric field distribution with nearly continuous coverage over a target region. Response functions are estimated between the respective voltage differences and the horizontal magnetic fields at a reference point. Owing to the long electrode spacing, the observed responses are relatively free from the effects of small-scale near-surface heterogeneity with a scalelength shorter than the typical electrode spacing. Therefore, physically meaningful direct comparison between the observations and model responses is feasible even if the fine-scale features of near-surface heterogeneity are ignored. This extensively reduces the difficulty, especially in 3-D MT interpretation. The first Network-MT experiment was performed in central and eastern Hokkaido, NE Japan, in 1989. It took about five months to complete all of the measurements, and used 209 dipoles to cover the target area of 200 (EW) × 200 (NS) km². The long electrode spacing enabled us to obtain the voltage differences with a high signal-to-noise ratio. For 175 dipoles, the squared multiple coherency between the voltage difference and the horizontal magnetic field at Memambetsu Geomagnetic Observatory was determined to be more than 0.9 in the period from 10² to 10⁴ s. 193 MT impedances were computed in tensor form by linear combination of the response functions. The estimated impedances generally possessed smooth period dependence throughout the period range. No drastic spatial change was observed in the characteristics of the tensors for neighbouring sites, and some regional trend could be detected in the spatial distribution. Thus, we confirmed the merit of the Network-MT method, that its responses are little affected by small-scale near-surface structures. The regional feature of the response implied a significant influence of the coast effect, and was well correlated with the regional geological setting in Hokkaido. Conventional Groom–Bailey tensor decomposition analysis revealed that the target region is not regionally one- or two-dimensional. Therefore, we developed a 3-D forward modelling scheme specially designed for the Network-MT experiment, and tried to reproduce the Network-MT responses directly. In the 3-D model, a realistic land–sea distribution was considered. The resistivity of sea water was fixed to be 0.25 Ωm and, as a first trial of 3-D modelling, the resistivity of the land was assumed to be uniform and its value was determined to be 200 Ωm by a simple one-parameter inversion. Overall agreements between the observations and the best-fit model responses indicated the importance of the 3-D coast effect in the target region. However, there remained significant discrepancies, especially in the phase of the responses, which provide a clue to determining a regional deep 3-D structure. [source]
Detecting the impact of oceano-climatic changes on marine ecosystems using a multivariate index: The case of the Bay of Biscay (North Atlantic-European Ocean)
GLOBAL CHANGE BIOLOGY, Issue 1 2008. Georges Hemery
Abstract: Large-scale univariate climate indices (such as the NAO) are thought to outperform local weather variables in the explanation of trends in animal numbers but are not always suitable to describe regional-scale patterns. We advocate the use of a Multivariate Oceanic and Climatic index (MOCI), derived from 'synthetic' and independent variables obtained objectively from a Principal Component Analysis as a linear combination of the total initial variables. We test the efficacy of the index using long-term data from marine animal populations. The study area is the southern half of the Bay of Biscay (43°–47°N; western Europe). Between 1974 and 2000 we monitored cetaceans and seabirds along 131,000 standardized line transects from ships. Fish abundance was derived from commercial fishery landings. We used 44 initial variables describing the oceanic and atmospheric conditions and characterizing the four annual seasons in the Bay of Biscay. The first principal component of our MOCI is called the South Biscay Climate (SBC) index. The winter NAO index was correlated to this SBC index. Inter-annual fluctuations for most seabird, cetacean and fish populations were significant. Boreal species (e.g. gadiform fish species, European Storm Petrel and Razorbill) with affinities to cold temperate waters declined significantly over time, while two (Puffin and Killer Whale) totally disappeared from the area during the study period. Meridional species with affinities to hotter waters increased in population size. Those medium-term demographic trends may reveal a regime shift for this part of the Atlantic Ocean. Most of the specific observed trends were highly correlated to the SBC index and not to the NAO. Between 40% and 60% of the temporal variations in species abundance were explained by the multivariate SBC index, suggesting that the whole marine ecosystem is strongly affected by a limited number of physical parameters revealed by the multivariate SBC index. Aside from the statistical error of the field measurements, the remaining variation unexplained by the physical characteristics of the environment corresponds to the impact of anthropogenic activities such as overfishing and oil spills. [source]
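The index construction described here, the first principal component of a set of standardized oceano-climatic variables, can be sketched in a few lines; the data below are random placeholders with the same dimensions mentioned in the abstract (27 years, 44 variables), so the numbers are meaningless and serve only to show the mechanics.

```python
import numpy as np

# Sketch of deriving a multivariate index as the first principal component of
# standardized climate variables, then correlating it with a population series.
# Data and names are toy placeholders, not the Bay of Biscay data set.

def first_principal_component(X):
    """X: (n_years, n_variables).  Returns the PC1 scores (the 'index')
    and the loadings of each variable on PC1."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # standardize each variable
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    scores = Z @ Vt[0]                                  # index value per year
    return scores, Vt[0]

rng = np.random.default_rng(1)
climate = rng.normal(size=(27, 44))        # 1974-2000, 44 variables (toy data)
index, loadings = first_principal_component(climate)
abundance = rng.normal(size=27)            # toy species abundance series
r = np.corrcoef(index, abundance)[0, 1]
print(f"correlation of index with abundance: {r:+.2f}")
```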
Correction for pulse height variability reduces physiological noise in functional MRI when studying spontaneous brain activity
HUMAN BRAIN MAPPING, Issue 2 2010. Petra J. van Houdt
Abstract: EEG-correlated functional MRI (EEG-fMRI) allows the delineation of the areas corresponding to spontaneous brain activity, such as epileptiform spikes or alpha rhythm. A major problem of fMRI analysis in general is that spurious correlations may occur because fMRI signals are not only correlated with the phenomena of interest, but also with physiological processes, like cardiac and respiratory functions. The aim of this study was to reduce the number of falsely detected activated areas by taking the variation in physiological functioning into account in the general linear model (GLM). We used the photoplethysmogram (PPG), since this signal is based on a linear combination of oxy- and deoxyhemoglobin in the arterial blood, which is also the basis of fMRI. We derived a regressor from the variation in pulse height (VIPH) of the PPG and added this regressor to the GLM. When this regressor was used as a predictor it appeared that VIPH explained a large part of the variance of fMRI signals acquired from five epilepsy patients and thirteen healthy volunteers. As a confounder VIPH reduced the number of activated voxels by 30% for the healthy volunteers, when studying the generators of the alpha rhythm. Although for the patients the number of activated voxels either decreased or increased, the identification of the epileptogenic zone was substantially enhanced in one out of five patients, whereas for the other patients the effects were smaller. In conclusion, applying VIPH as a confounder diminishes physiological noise and allows a more reliable interpretation of fMRI results. Hum Brain Mapp, 2010. © 2009 Wiley-Liss, Inc. [source]
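The correction amounts to adding one nuisance column to the design matrix of a voxelwise GLM. The sketch below uses synthetic signals and ordinary least squares to show the effect of including a VIPH-style confound; it is not the authors' pipeline, and all signal names are invented.

```python
import numpy as np

# Minimal sketch of a voxelwise GLM with and without a physiological confound:
# the design matrix holds an intercept, the regressor of interest and a
# nuisance column standing in for the VIPH regressor.  Signals are synthetic.

def glm_fit(y, X):
    """Ordinary least-squares fit; returns coefficients and residual SS."""
    beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, float(rss[0]) if rss.size else float(np.sum((y - X @ beta) ** 2))

rng = np.random.default_rng(0)
n = 240
task = rng.normal(size=n)                              # regressor of interest
viph = rng.normal(size=n)                              # VIPH-style confound
y = 0.8 * task + 1.5 * viph + rng.normal(size=n)       # synthetic voxel signal

X_plain = np.column_stack([np.ones(n), task])
X_viph = np.column_stack([np.ones(n), task, viph])

for name, X in [("without VIPH", X_plain), ("with VIPH", X_viph)]:
    beta, rss = glm_fit(y, X)
    print(f"{name}: task beta = {beta[1]:+.2f}, residual SS = {rss:.1f}")
```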
Random fields–union intersection tests for detecting functional connectivity in EEG/MEG imaging
HUMAN BRAIN MAPPING, Issue 8 2009. Felix Carbonell
Abstract: Electrophysiological (EEG/MEG) imaging challenges statistics by providing two views of the same underlying spatio-temporal brain activity: a topographic view (EEG/MEG) and a tomographic view (EEG/MEG source reconstructions). It is common practice that statistical parametric mapping (SPM) for these two situations is developed separately. In particular, assessing the statistical significance of functional connectivity is a major challenge in these types of studies. This work introduces statistical tests for assessing simultaneously the significance of the spatio-temporal correlation structure between ERP/ERF components as well as that of their generating sources. We introduce a greatest root statistic as the multivariate test statistic for detecting functional connectivity between two sets of EEG/MEG measurements at a given time instant. We use some new results in random field theory to solve the multiple comparisons problem resulting from the correlated test statistics at each time instant. In general, our approach using the union-intersection (UI) principle provides a framework for hypothesis testing about any linear combination of sensor data, which allows the analysis of the correlation structure of both topographic and tomographic views. The performance of the proposed method is illustrated with real ERP data obtained from a face recognition experiment. Hum Brain Mapp 2009. © 2009 Wiley-Liss, Inc. [source]

Relations between load and settlement of circular foundations on or in a dense sand expressed by a function of diameter and depth
INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 6 2005. Hiroaki Nagaoka
Abstract: When load acts on a circular foundation on or in a dense sand, the average contact pressure on the lower surface of the foundation is q and the settlement of the foundation is s. The diameter and depth of the foundation are B and Df. When the sand, B and Df are given, we can know the relation between q and s/B by, e.g., a loading test, i.e. the relation is determined by B and Df for the sand. Using the results of numerical analyses, we express the relation between q and s/B up to s = 0.1B by functions of a single variable which is a linear combination of B and Df. Consequently, when two foundations have different B's and different Df's but have the same value of the variable, the relations are the same. We then examine whether the functions can express the results of eleven tests of model foundations over a wide range of B and/or Df. In all the tests, the relations are expressed with sufficient accuracy. [source]

Nonlinear transient dynamic analysis by explicit finite element with iterative consistent mass matrix
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 3 2009. Shen Rong Wu
Abstract: Various mass matrices in the explicit finite element analyses of nonlinear transient dynamic problems are investigated. The matrices are obtained as a linear combination of lumped and consistent mass matrices. An iterative procedure to calculate the inverse of the consistent and the mixed mass matrices in the framework of the explicit finite element method is presented. The convergence of the iterative procedure is proved. The inverse of the consistent and mixed mass matrices is approximated by the iteration and is used to compare the results with those from the lumped mass matrix. For the impact of a structural component and a vehicle, some difference in the results is observed when a coarse mesh is used. For the component using a fine mesh, no significant difference is found. Copyright © 2008 John Wiley & Sons, Ltd. [source]
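Two ingredients mentioned in the last abstract, a mixed mass matrix formed as a linear combination of lumped and consistent matrices and an iterative application of its inverse, can be illustrated with a small sketch. The lumped-diagonal-preconditioned fixed-point iteration below is a generic choice offered as an assumption, not the paper's exact scheme, and the tiny bar-element matrix is only a demonstration.

```python
import numpy as np

# Sketch: mixed mass matrix M = theta * M_lumped + (1 - theta) * M_consistent,
# and a simple iteration that applies M^{-1} to a force vector using the
# lumped diagonal as preconditioner (generic illustration).

def mixed_mass(M_consistent, theta):
    """theta = 1 gives the lumped (row-sum) matrix, theta = 0 the consistent one."""
    M_lumped = np.diag(M_consistent.sum(axis=1))
    return theta * M_lumped + (1.0 - theta) * M_consistent, np.diag(M_lumped)

def apply_inverse(M, diag_lumped, f, n_iter=10):
    """Approximate a = M^{-1} f by a_{k+1} = a_k + D^{-1} (f - M a_k)."""
    a = f / diag_lumped
    for _ in range(n_iter):
        a += (f - M @ a) / diag_lumped
    return a

# tiny example: consistent mass of two linear bar elements (unit length and density)
Mc = np.array([[2.0, 1.0, 0.0],
               [1.0, 4.0, 1.0],
               [0.0, 1.0, 2.0]]) / 6.0
M, d = mixed_mass(Mc, theta=0.5)
f = np.array([0.0, 1.0, 0.0])
print(apply_inverse(M, d, f), np.linalg.solve(M, f))   # iterate vs direct solve
```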
Error estimation in a stochastic finite element method in electrokinetics
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 11 2010. S. Clénet
Abstract: Input data to a numerical model are not necessarily well known. Uncertainties may exist both in material properties and in the geometry of the device. They can be due, for instance, to ageing or imperfections in the manufacturing process. Input data can be modelled as random variables leading to a stochastic model. In electromagnetism, this leads to the solution of a stochastic partial differential equation system. The solution can be approximated by a linear combination of basis functions arising from the tensorial product of the basis functions used to discretize the space (nodal shape functions, for example) and basis functions used to discretize the random dimension (a polynomial chaos expansion, for example). Some methods (SSFEM, collocation) have been proposed in the literature to calculate such an approximation. The issue is then how to compare the different approaches in an objective way. One solution is to use an appropriate a posteriori numerical error estimator. In this paper, we present an error estimator based on the constitutive relation error in electrokinetics, which allows the calculation of the distance between an average solution and the unknown exact solution. The method of calculation of the error is detailed in this paper from two solutions that satisfy the two equilibrium equations. In an example, we compare two different approximations (Legendre and Hermite polynomial chaos expansions) for the random dimension using the proposed error estimator. In addition, we show how to choose the appropriate order of the polynomial chaos expansion for the proposed error estimator. Copyright © 2009 John Wiley & Sons, Ltd. [source]

Lower-bound limit analysis by using the EFG method and non-linear programming
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 3 2008. Shenshen Chen
Abstract: Intended to avoid the complicated computations of elasto-plastic incremental analysis, limit analysis is an appealing direct method for determining the load-carrying capacity of structures. On the basis of the static limit analysis theorem, a solution procedure for lower-bound limit analysis is presented, making use of the element-free Galerkin (EFG) method rather than traditional numerical methods such as the finite element method and the boundary element method. The numerical implementation is very simple and convenient because it is only necessary to construct an array of nodes in the domain under consideration. The reduced-basis technique is adopted to solve the mathematical programming iteratively in a sequence of reduced self-equilibrium stress subspaces with very low dimensions. The self-equilibrium stress field is expressed by a linear combination of several self-equilibrium stress basis vectors with parameters to be determined. These self-equilibrium stress basis vectors are generated by performing an equilibrium iteration procedure during elasto-plastic incremental analysis. The Complex method is used to solve these non-linear programming sub-problems and determine the maximal load amplifier. Numerical examples show that it is feasible and effective to solve the problems of limit analysis by using the EFG method and non-linear programming. Copyright © 2007 John Wiley & Sons, Ltd. [source]
Explicit calculation of smoothed sensitivity coefficients for linear problems
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 2 2003. R. A. Białecki
Abstract: A technique of explicit calculation of sensitivity coefficients based on the approximation of the retrieved function by a linear combination of trial functions of compact support is presented. The method is applicable to steady state and transient linear inverse problems where unknown distributions of boundary fluxes, temperatures, initial conditions or source terms are retrieved. The sensitivity coefficients are obtained by solving a sequence of boundary value problems with boundary conditions and source term being homogeneous except for one term. This inhomogeneous term is taken as subsequent trial functions. Depending on the type of the retrieved function, it may appear on boundary conditions (Dirichlet or Neumann), initial conditions or the source term. Commercial software and analytic techniques can be used to solve this sequence of boundary value problems producing the required sensitivity coefficients. The choice of the approximating functions guarantees a filtration of the high frequency errors. Several numerical examples are included where the sensitivity coefficients are used to retrieve the unknown values of boundary fluxes in transient state and volumetric sources. Analytic, boundary-element and finite-element techniques are employed in the study. Copyright © 2003 John Wiley & Sons, Ltd. [source]

An eigenvector-based linear reconstruction scheme for the shallow-water equations on two-dimensional unstructured meshes
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 1 2007. Sandra Soares Frazão
Abstract: This paper presents a new approach to MUSCL reconstruction for solving the shallow-water equations on two-dimensional unstructured meshes. The approach takes advantage of the particular structure of the shallow-water equations. Indeed, their hyperbolic nature allows the flow variables to be expressed as a linear combination of the eigenvectors of the system. The particularity of the shallow-water equations is that the coefficients of this combination only depend upon the water depth. Reconstructing only the water depth with second-order accuracy and using only a first-order reconstruction for the flow velocity proves to be as accurate as the classical MUSCL approach. The method also appears to be more robust in cases with very strong depth gradients such as the propagation of a wave on a dry bed. Since only one reconstruction is needed (against three reconstructions in the MUSCL approach) the EVR method is shown to be 1.4–5 times as fast as the classical MUSCL scheme, depending on the computational application. Copyright © 2006 John Wiley & Sons, Ltd. [source]
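A one-dimensional sketch of the reconstruction idea in the last abstract: the water depth is reconstructed to second order with a limited slope while the velocity is kept piecewise constant (first order). The real scheme operates on 2-D unstructured meshes, so this only illustrates the "reconstruct h, not u" ingredient with invented cell data.

```python
import numpy as np

# 1-D illustration: limited second-order reconstruction of the depth h at cell
# faces, first-order (piecewise constant) velocity u.  Toy data, not the paper's
# 2-D unstructured-mesh scheme.

def minmod(a, b):
    return np.where(a * b > 0.0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def face_states(h, u):
    """Left/right states of h and u at the interior cell faces."""
    dh = minmod(np.diff(h)[:-1], np.diff(h)[1:])   # limited slope per inner cell
    h_left = h[1:-2] + 0.5 * dh[:-1]               # from cell i toward face i+1/2
    h_right = h[2:-1] - 0.5 * dh[1:]               # from cell i+1 toward face i+1/2
    u_left, u_right = u[1:-2], u[2:-1]             # first-order velocity
    return h_left, h_right, u_left, u_right

h = np.array([1.0, 1.0, 0.8, 0.3, 0.1, 0.1])       # depth approaching a dry bed
u = np.zeros_like(h)
print(face_states(h, u))
```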
A general methodology for investigating flow instabilities in complex geometries: application to natural convection in enclosures
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 2 2001. E. Gadoin
Abstract: This paper presents a general methodology for studying instabilities of natural convection flows enclosed in cavities of complex geometry. Different tools have been developed, consisting of time integration of the unsteady equations, steady state solving, and computation of the most unstable eigenmodes of the Jacobian and its adjoint. The methodology is validated in the classical differentially heated cavity, where the steady solution branch is followed for very large values of the Rayleigh number and the most unstable eigenmodes are computed at selected Rayleigh values. Its effectiveness for complex geometries is illustrated on a configuration consisting of a cavity with internal heated partitions. We finally propose to reduce the Navier–Stokes equations to a differential system by expanding the unsteady solution as the sum of the steady state solution and a linear combination of the leading eigenmodes. The principle of the method is outlined and preliminary results are presented. Copyright © 2001 John Wiley & Sons, Ltd. [source]

Application of Krylov subspaces to SPECT imaging
INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 5 2002. P. Calvini
The application of the conjugate gradient (CG) algorithm to the problem of data reconstruction in SPECT imaging indicates that most of the useful information is already contained in Krylov subspaces of small dimension, ranging from 9 (two-dimensional case) to 15 (three-dimensional case). On this basis, a new, proposed approach can be basically summarized as follows: construction of a basis spanning a Krylov subspace of suitable dimension and projection of the projector–backprojector matrix (a 10⁶ × 10⁶ matrix in the three-dimensional case) onto such a subspace. In this way, one is led to a problem of low dimensionality, for which regularized solutions can be easily and quickly obtained. The required SPECT activity map is expanded as a linear combination of the basis elements spanning the Krylov subspace, and the regularization acts by modifying the coefficients of such an expansion. By means of a suitable graphical interface, the tuning of the regularization parameter(s) can be performed interactively on the basis of visual inspection of one or some slices cut from a reconstruction. © 2003 Wiley Periodicals, Inc. Int J Imaging Syst Technol 12, 217–228, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.10026 [source]
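The reduced-basis recipe in the SPECT abstract, build a small Krylov subspace of the projector–backprojector matrix, project the problem onto it, and regularize the small system, can be sketched with a toy symmetric matrix in place of the real 10⁶ × 10⁶ operator; the subspace dimension of 9 echoes the two-dimensional case quoted above, and everything else is a placeholder.

```python
import numpy as np

# Sketch: orthonormal Krylov basis of a symmetric matrix, projection of the
# system onto it, and Tikhonov regularization of the small projected problem.
# A, b and the dimensions are toy stand-ins for the SPECT operator and data.

def krylov_basis(A, b, k):
    """Orthonormal basis of span{b, Ab, ..., A^{k-1} b}."""
    Q = np.zeros((len(b), k))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(1, k):
        v = A @ Q[:, j - 1]
        v -= Q[:, :j] @ (Q[:, :j].T @ v)        # orthogonalize against previous vectors
        Q[:, j] = v / np.linalg.norm(v)
    return Q

def regularized_reduced_solution(A, b, k=9, alpha=1e-2):
    Q = krylov_basis(A, b, k)
    Ak = Q.T @ A @ Q                             # k x k projected matrix
    bk = Q.T @ b
    yk = np.linalg.solve(Ak + alpha * np.eye(k), bk)   # Tikhonov in the subspace
    return Q @ yk                                # expand back to the full space

rng = np.random.default_rng(2)
M = rng.normal(size=(200, 200))
A = M.T @ M / 200.0                              # symmetric positive semidefinite stand-in
b = A @ rng.normal(size=200)
x = regularized_reduced_solution(A, b, k=9)
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))   # relative residual
```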
A compact spin-free combinatoric open-shell coupled cluster theory applied to single-reference doublets
INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, Issue 12 2008. Dipayan Datta
Abstract: In this article, we present an explicitly spin-free compact coupled cluster (CC) theory for simple open-shell systems, e.g., doublets and biradicals, which can be described either by a single open-shell determinant or by a configuration state function (CSF) which corresponds to a single spatial configuration but is a linear combination of determinants with different spin allocations. A new cluster expansion Ansatz for the wave operator is introduced, in which the spin-free cluster operators are either of the form of closed-shell-like n hole-n particle excitations or contain valence excitations, which may involve exchange spectator scatterings. These latter types of operators are allowed to contract among themselves through the spectator orbitals. The novelty of the Ansatz is in the choice of a suitable automorphic factor accompanying each composite of noncommuting operators, ensuring that each such composite appears only once. The resulting CC equations consist of two types of terms: one is direct and the other is folded, and the latter involves the effective Hamiltonian operator. We emphasize that while the direct term terminates exactly at the quartic power of the cluster amplitudes, termination of the folded term is dictated by the valence rank of the effective Hamiltonian operator, just as in the spin-free open-shell CC theory with a normal ordered exponential Ansatz. Example applications are presented by computing the core- and valence-ionized state energies of the H2O molecule and comparing the results with benchmark full CI results. The results show the efficacy of the method. © 2008 Wiley Periodicals, Inc. Int J Quantum Chem, 2008 [source]

Improvements of parametric quantum methods with new elementary parametric functionals
INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, Issue 10 2008. Fernando Ruette
Abstract: A series of elementary parametric functionals (EPFs) for the resonance integral, electron-electron repulsion, electron-nucleus attraction, core-core interaction, and bond correlation correction were included in the new version of the CATIVIC method [Int J Quantum Chem 2004, 96, 321]. In the present work, a systematic way to improve the precision of parametric quantum methods (PQMs) by using several EPFs in the parameterization of a set of molecules is proposed. Based on the fact that a linear combination of elementary functionals from the exact Hamiltonian is also a functional, it is shown that a linear combination of EPFs satisfying the convex condition can enhance the accuracy of PQMs. A general formulation of simulation techniques for molecular properties is presented and a formal extension of the minimax principle to PQMs is also considered. © 2008 Wiley Periodicals, Inc. Int J Quantum Chem, 2008 [source]

Analysis, design, and performance limitations of H∞ optimal filtering in the presence of an additional input with known frequency
INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 16 2007. Ali Saberi
Abstract: A generalized γ-level H∞ sub-optimal input decoupling (SOID) filtering problem is formulated. It is a generalization of the γ-level H∞ SOID filtering problem to the case when, besides an input with unknown statistical properties but with a finite RMS norm, there exists an additional input to the given plant or system. The additional input is a linear combination of sinusoidal signals, each of which has an unknown amplitude and phase but known frequency. The analysis, design, and performance limitations of generalized H∞ optimal filters are presented. Copyright © 2007 John Wiley & Sons, Ltd. [source]

Estimating Long-term Trends in Tropospheric Ozone Levels
INTERNATIONAL STATISTICAL REVIEW, Issue 1 2002. Michael Smith
Summary: This paper develops Bayesian methodology for estimating long-term trends in the daily maxima of tropospheric ozone. The methods are then applied to study long-term trends in ozone at six monitoring sites in the state of Texas. The methodology controls for the effects of meteorological variables because it is known that variables such as temperature, wind speed and humidity substantially affect the formation of tropospheric ozone. A semiparametric regression model is estimated in which a nonparametric trivariate surface is used to model the relationship between ozone and these meteorological variables because, while it is known that the relationship is a complex nonlinear one, its functional form is unknown. The model also allows for the effects of wind direction and seasonality. The errors are modeled as an autoregression, which is methodologically challenging because the observations are unequally spaced over time. Each function in the model is represented as a linear combination of basis functions located at all of the design points. We also estimate an appropriate data transformation simultaneously with the functions. The functions are estimated nonparametrically by a Bayesian hierarchical model that uses indicator variables to allow a non-zero probability that the coefficient of each basis term is zero. The entire model, including the nonparametric surfaces, data transformation and autoregression for the unequally spaced errors, is estimated using a Markov chain Monte Carlo sampling scheme with a computationally efficient transition kernel for generating the indicator variables. The empirical results indicate that key meteorological variables explain most of the variation in daily ozone maxima through a nonlinear interaction and that their effects are consistent across the six sites. However, the estimated trends vary considerably from site to site, even within the same city. [source]
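One ingredient of the ozone model, a function written as a linear combination of basis functions located at the design points, can be sketched with a Gaussian radial basis and a ridge penalty. The Bayesian machinery the paper actually uses (indicator variables, data transformation, autoregressive errors, MCMC) is deliberately omitted, so this is only a simplified stand-in for that step with synthetic data.

```python
import numpy as np

# Sketch: fit a smooth trivariate surface as a linear combination of Gaussian
# radial basis functions centred at the design points, with a ridge penalty in
# place of the paper's Bayesian indicator-variable selection.  Toy data only.

def rbf_design(X, centers, scale=1.0):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * scale ** 2))

def fit_surface(X, y, ridge=1e-3, scale=1.0):
    """Penalized least-squares fit of y on basis functions at all design points."""
    B = rbf_design(X, X, scale)
    coef = np.linalg.solve(B.T @ B + ridge * np.eye(len(X)), B.T @ y)
    return lambda Xnew: rbf_design(Xnew, X, scale) @ coef

rng = np.random.default_rng(3)
X = rng.uniform(size=(300, 3))                 # e.g. temperature, wind speed, humidity (toy)
y = np.sin(3 * X[:, 0]) * X[:, 1] + 0.1 * rng.normal(size=300)
predict = fit_surface(X, y)
print(predict(X[:5]))
```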
A score test for non-nested hypotheses with applications to discrete data models
JOURNAL OF APPLIED ECONOMETRICS, Issue 5 2001. J. M. C. Santos Silva
In this paper it is shown that a convenient score test against non-nested alternatives can be constructed from the linear combination of the likelihood functions of the competing models. This is essentially a test for the correct specification of the conditional distribution of the variable of interest. Given its characteristics, the proposed test is particularly attractive to check the distributional assumptions in models for discrete data. The usefulness of the test is illustrated with an application to models for recreational boating trips. Copyright © 2001 John Wiley & Sons, Ltd. [source]
Preoperative Electrocardiographic Risk Assessment of Atrial Fibrillation After Coronary Artery Bypass Grafting
JOURNAL OF CARDIOVASCULAR ELECTROPHYSIOLOGY, Issue 12 2004. Yi Gang, M.D., Ph.D.
Introduction: This study evaluated the role of the surface ECG in assessment of the risk of new-onset atrial fibrillation (AF) after coronary artery bypass grafting surgery (CABG).
Methods and Results: One hundred fifty-one patients (126 men and 25 women; age 65 ± 10 years) without a history of AF undergoing primary elective and isolated CABG were studied. Standard 12-lead ECGs and P wave signal-averaged ECGs (PSAE) were recorded 24 hours before CABG using a MAC VU ECG recorder. In addition to routine ECG measurements, two P wave descriptors (P wave complexity ratio [pCR]; P wave morphology dispersion [PMD]), six T wave morphology descriptors (total cosine R to T [TCRT]; T wave morphology dispersion of the ascending and descending parts of the T wave [aTMD and dTMD]; and others), and three PSAE indices (filtered P wave duration [PD]; root mean square voltage of the terminal 20 msec of the averaged P wave [RMS20]; and integral of the P wave [Pi]) were investigated. During a mean hospital stay of 7.3 ± 6.2 days after CABG, 40 (26%) patients developed AF (AF group) and 111 remained AF-free (no AF group). AF patients were older (69 ± 9 years vs 64 ± 10 years, P = 0.005). PD (135 ± 9 msec vs 133 ± 12 msec, P = NS) and RMS20 (4.5 ± 1.7 μV vs 4.0 ± 1.6 μV, P = NS) in AF were similar to those in no AF, whereas Pi was significantly increased in AF (757 ± 230 μV·msec vs 659 ± 206 μV·msec, P = 0.007). Both pCR (32 ± 11 vs 27 ± 10) and PMD (31.5 ± 14.0 vs 26.4 ± 12.3) were significantly greater in AF (P = 0.012 and 0.048, respectively). TCRT (0.028 ± 0.596 vs 0.310 ± 0.542, P = 0.009) and dTMD (0.63 ± 0.03 vs 0.64 ± 0.02, P = 0.004) were significantly reduced in AF compared with no AF. Measurements of aTMD and three other T wave descriptors were similar in AF and no AF. Significant variables by univariate analysis, including advanced age (P = 0.014), impaired left ventricular function (P = 0.02), greater Pi (P = 0.012), and lower TCRT (P = 0.007) or dTMD, were entered into multiple logistic regression models. Increased Pi (P = 0.038), reduced TCRT (P = 0.040), and lower dTMD (P = 0.014) predicted AF after CABG independently. In patients <70 years, a linear combination of increased pCR and lower TCRT separated AF and no AF with a sensitivity of 74% and specificity of 62% (P = 0.005).
Conclusion: ECG assessment identifies patients vulnerable to AF after CABG. Combination of ECG parameters assessed preoperatively may play an important role in predicting new-onset AF after CABG. [source]