Proposed Methodology (proposed + methodology)



Selected Abstracts


Multi-component analysis: blind extraction of pure components mass spectra using sparse component analysis

JOURNAL OF MASS SPECTROMETRY (INCORP BIOLOGICAL MASS SPECTROMETRY), Issue 9 2009
Ivica Kopriva
Abstract The paper presents sparse component analysis (SCA)-based blind decomposition of mixtures of mass spectra into pure components, wherein the number of mixtures is less than the number of pure components. Standard solutions of the related blind source separation (BSS) problem published in the open literature require the number of mixtures to be greater than or equal to the unknown number of pure components. Specifically, we have demonstrated experimentally the capability of SCA to blindly extract the mass spectra of five pure components from two mixtures only. Two approaches to SCA are tested: the first based on ℓ1-norm minimization implemented through linear programming, and the second implemented through multilayer hierarchical alternating least squares nonnegative matrix factorization with sparseness constraints imposed on the pure components' spectra. In contrast to many existing blind decomposition methods, no a priori information about the number of pure components is required; it is estimated from the mixtures, together with the pure components' concentration matrix, using a robust data clustering algorithm. The proposed methodology can be implemented as part of software packages used for the analysis of mass spectra and identification of chemical compounds. Copyright © 2009 John Wiley & Sons, Ltd. [source]
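The ℓ1-norm-minimization route mentioned above can be illustrated with a small linear-programming sketch. It assumes the 2 × 5 concentration (mixing) matrix has already been estimated (e.g. by the clustering step) and works one m/z channel at a time; the matrix, the sparse component vector and the use of scipy.optimize.linprog are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the l1-norm / linear-programming route: given an estimated
# 2 x 5 concentration (mixing) matrix A and one m/z channel of the two mixture
# spectra x, look for a sparse non-negative contribution s of the five pure
# components.  All names and values here are illustrative.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
A = rng.random((2, 5))                         # assumed concentration matrix (2 mixtures, 5 components)
s_true = np.array([0.0, 0.8, 0.0, 0.0, 0.3])   # a sparse "pure component" intensity vector
x = A @ s_true                                 # the two observed mixture intensities at this m/z value

# minimise ||s||_1 = sum(s) subject to A s = x, s >= 0 (underdetermined system)
res = linprog(c=np.ones(5), A_eq=A, b_eq=x, bounds=(0, None), method="highs")
print("recovered:", np.round(res.x, 3), " true:", s_true)
```

Repeating the same solve for every m/z channel and stacking the recovered vectors yields estimates of the pure components' spectra; exact recovery is of course not guaranteed for such a tiny synthetic example.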


Structural Health Monitoring via Measured Ritz Vectors Utilizing Artificial Neural Networks

COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 4 2006
Heung-Fai Lam
Unlike most other pattern recognition methods, an artificial neural network (ANN) technique is employed as a tool for systematically identifying the damage pattern corresponding to an observed feature. An important aspect of using an ANN is its design but this is usually skipped in the literature on ANN-based SHM. The design of an ANN has significant effects on both the training and performance of the ANN. As the multi-layer perceptron ANN model is adopted in this work, ANN design refers to the selection of the number of hidden layers and the number of neurons in each hidden layer. A design method based on a Bayesian probabilistic approach for model selection is proposed. The combination of the pattern recognition method and the Bayesian ANN design method forms a practical SHM methodology. A truss model is employed to demonstrate the proposed methodology. [source]
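As a rough illustration of design-by-model-selection, the sketch below scores a few candidate hidden-layer layouts of a damage-pattern classifier with the Bayesian information criterion, a large-sample surrogate for the Bayesian evidence used in the paper. The training data, the candidate layouts and the BIC surrogate itself are assumptions for illustration, not the authors' procedure.

```python
# Hedged sketch: rank candidate MLP architectures for a damage-pattern classifier
# with a BIC score as a crude stand-in for a Bayesian model-selection criterion.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import log_loss

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 8))          # e.g. features derived from measured Ritz vectors (invented)
y = rng.integers(0, 4, size=300)       # four hypothetical damage patterns

def n_params(mlp):
    """Total number of trained weights and biases in the fitted network."""
    return sum(w.size for w in mlp.coefs_) + sum(b.size for b in mlp.intercepts_)

candidates = [(4,), (8,), (16,), (8, 4)]       # hidden-layer layouts to compare
for layers in candidates:
    mlp = MLPClassifier(hidden_layer_sizes=layers, max_iter=2000, random_state=0).fit(X, y)
    nll = len(y) * log_loss(y, mlp.predict_proba(X))   # total negative log-likelihood
    bic = 2 * nll + n_params(mlp) * np.log(len(y))     # BIC = -2 ln L + k ln n
    print(layers, round(bic, 1))
```

The layout with the lowest score would be retained; the paper's Bayesian approach weighs model fit against complexity in a similar spirit but more rigorously.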


Multimode Project Scheduling Based on Particle Swarm Optimization

COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 2 2006
Hong Zhang
This article introduces a methodology for solving the multi-mode resource-constrained project scheduling problem (MRCPSP) based on particle swarm optimization (PSO), which has not previously been applied to this or other construction-related problems. The framework of the PSO-based methodology is developed. A particle representation formulation is proposed to represent a potential solution to the MRCPSP in terms of a priority combination and a mode combination for the activities. Each particle-represented solution is checked for nonrenewable resource infeasibility, which is handled by adjusting the mode combination; the feasible particle-represented solution is then transformed into a schedule through a serial generation scheme. Experimental analyses are presented to investigate the performance of the proposed methodology. [source]
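A minimal sketch of the particle representation and the nonrenewable-resource repair step is given below; the activity data, the resource budget and the greedy repair rule are invented for illustration and do not reproduce the authors' PSO implementation.

```python
# Illustrative sketch of the particle encoding: each particle carries a priority
# value and a mode index for every activity, and an infeasible mode combination is
# repaired against a (hypothetical) nonrenewable resource budget.
import numpy as np

rng = np.random.default_rng(2)
n_act, n_modes = 6, 3
nonrenew_use = rng.integers(1, 6, size=(n_act, n_modes))   # resource demand per activity and mode
budget = 18                                                 # nonrenewable resource availability

priorities = rng.random(n_act)                 # position, part 1: activity priorities
modes = rng.integers(0, n_modes, size=n_act)   # position, part 2: mode combination

def repair(modes):
    """Greedily switch activities to cheaper modes until the budget is met (or no move helps)."""
    modes = modes.copy()
    for _ in range(n_act * n_modes):                       # bounded number of repair moves
        use = nonrenew_use[np.arange(n_act), modes]
        if use.sum() <= budget:
            break
        savings = use - nonrenew_use.min(axis=1)           # what each activity could still save
        i = int(np.argmax(savings))
        if savings[i] == 0:
            break                                          # already the cheapest combination
        modes[i] = int(np.argmin(nonrenew_use[i]))
    return modes

modes = repair(modes)
order = np.argsort(-priorities)   # a serial generation scheme would schedule activities in this order
print("schedule order:", order, "repaired modes:", modes)
```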


A probabilistic framework for quantification of aftershock ground-motion hazard in California: Methodology and parametric study

EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 1 2009
Gee Liek Yeo
Abstract This paper presents a method of aftershock probabilistic seismic hazard analysis (APSHA) that is similar to conventional 'mainshock' PSHA in that it estimates the likelihood of ground motion intensity (in terms of peak ground acceleration, spectral acceleration or other ground motion intensity measures) due to aftershocks following a mainshock occurrence. The proposed methodology differs from conventional mainshock PSHA in that mainshock occurrence rates remain constant under a conventional (homogeneous Poisson) earthquake occurrence model, whereas aftershock occurrence rates decrease with increasing elapsed time from the initial occurrence of the mainshock. In addition, the aftershock ground motion hazard at a site depends on the magnitude and location of the causative mainshock, and the location of aftershocks is limited to an aftershock zone, which is also dependent on the location and magnitude of the initial mainshock. APSHA is useful for post-earthquake safety evaluation, where there is a need to quantify the rates of occurrence of ground motions caused by aftershocks following the initial rupture. This knowledge will permit, for example, more informed decisions to be made on building tagging and on entry into damaged buildings for rescue, repair or normal occupancy. Copyright © 2008 John Wiley & Sons, Ltd. [source]
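The key ingredient that distinguishes APSHA from mainshock PSHA, a time-decaying aftershock occurrence rate, can be sketched with a modified Omori (Reasenberg-Jones type) model. The parameter values below are generic placeholders rather than the values calibrated in the paper.

```python
# Sketch of the time-decaying aftershock rate underlying APSHA.  The modified-Omori
# parameters are generic placeholders, not the paper's California values.
import numpy as np

a, b, c, p = -1.67, 0.91, 0.05, 1.08        # assumed generic aftershock parameters
M_main, M_min = 7.0, 5.0                    # mainshock magnitude, minimum magnitude of interest

def aftershock_rate(t_days):
    """Mean daily rate of aftershocks with magnitude >= M_min at t days after the mainshock."""
    return 10.0 ** (a + b * (M_main - M_min)) / (t_days + c) ** p

def expected_count(t1, t2, n=2000):
    """Expected number of aftershocks in the window [t1, t2] days (numerical integration)."""
    t = np.linspace(t1, t2, n)
    return np.trapz(aftershock_rate(t), t)

for window in [(0.01, 1), (1, 7), (7, 30), (30, 365)]:
    print(window, round(expected_count(*window), 2))
```

In the full analysis this decaying rate replaces the constant Poisson rate inside the hazard integral, so the aftershock hazard depends on the time window elapsed since the mainshock.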


Evaluation of the influence of vertical irregularities on the seismic performance of a nine-storey steel frame

EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 12 2006
Fragiadakis Michalis
Abstract A methodology based on incremental dynamic analysis (IDA) is presented for the evaluation of structures with vertical irregularities. Four types of storey irregularity are considered: stiffness, strength, combined stiffness and strength, and mass irregularities. Using the well-known nine-storey LA9 steel frame as a base, the objective is to quantify the effect of irregularities, both for individual storeys and for combinations of storeys, on its response. In this context a rational methodology for comparing the seismic performance of different structural configurations is proposed by means of IDA. This entails performing non-linear time history analyses for a suite of ground motion records scaled to several intensity levels and suitably interpolating the results to calculate capacities for a number of limit-states, from elasticity to final global instability. By expressing all limit-state capacities with a common intensity measure, the reference and each modified structure can be naturally compared without needing to have the same period or yield base shear. Using the bootstrap method to construct appropriate confidence intervals, it becomes possible to isolate the effect of irregularities from the record-to-record variability. Thus, the proposed methodology enables a full-range performance evaluation using a highly accurate analysis method that pinpoints the effect of any source of irregularity for each limit-state. Copyright © 2006 John Wiley & Sons, Ltd. [source]
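The bootstrap step used to separate irregularity effects from record-to-record variability can be sketched as follows: resample a suite of per-record limit-state capacities (expressed in the common intensity measure) and form a confidence interval on their median. The capacity sample below is invented for illustration.

```python
# Sketch of the bootstrap step: a confidence interval on the median limit-state
# capacity (in terms of the intensity measure) from a suite of per-record IDA
# capacities.  The capacity sample is invented.
import numpy as np

rng = np.random.default_rng(3)
capacities = rng.lognormal(mean=np.log(0.9), sigma=0.35, size=20)   # Sa(T1) capacities, one per record

boot_medians = np.array([
    np.median(rng.choice(capacities, size=capacities.size, replace=True))
    for _ in range(5000)
])
lo, hi = np.percentile(boot_medians, [2.5, 97.5])
print(f"median capacity {np.median(capacities):.3f} g, 95% CI [{lo:.3f}, {hi:.3f}] g")
# Non-overlapping intervals for the reference and a modified frame would indicate an
# irregularity effect larger than the record-to-record variability.
```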


System identification of linear structures based on Hilbert–Huang spectral analysis.

EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 10 2003
Part 2: Complex modes
Abstract A method, based on the Hilbert–Huang spectral analysis, has been proposed by the authors to identify linear structures in which normal modes exist (i.e., real eigenvalues and eigenvectors). Frequently, all the eigenvalues and eigenvectors of linear structures are complex. In this paper, the method is extended further to identify general linear structures with complex modes using the free vibration response data polluted by noise. Measured response signals are first decomposed into modal responses using the method of Empirical Mode Decomposition with intermittency criteria. Each modal response contains the contribution of a complex conjugate pair of modes with a unique frequency and a damping ratio. Then, each modal response is decomposed in the frequency–time domain to yield instantaneous phase angle and amplitude using the Hilbert transform. Based on a single measurement of the impulse response time history at one appropriate location, the complex eigenvalues of the linear structure can be identified using a simple analysis procedure. When the response time histories are measured at all locations, the proposed methodology is capable of identifying the complex mode shapes as well as the mass, damping and stiffness matrices of the structure. The effectiveness and accuracy of the method presented are illustrated through numerical simulations. It is demonstrated that dynamic characteristics of linear structures with complex modes can be identified effectively using the proposed method. Copyright © 2003 John Wiley & Sons, Ltd. [source]
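A sketch of the final identification stage is given below: for one modal free-decay response (here synthesised, as if already isolated by Empirical Mode Decomposition), the Hilbert transform yields instantaneous amplitude and phase, from which the modal frequency and damping ratio follow. The signal parameters are assumptions for illustration.

```python
# Sketch of the last identification stage: Hilbert transform of a single modal
# free-decay response, then frequency and damping from phase and log-amplitude.
import numpy as np
from scipy.signal import hilbert

fs, f_n, zeta = 200.0, 2.5, 0.02                  # sampling rate, modal frequency, damping (assumed)
t = np.arange(0, 20, 1 / fs)
wn = 2 * np.pi * f_n
x = np.exp(-zeta * wn * t) * np.cos(wn * np.sqrt(1 - zeta**2) * t)   # synthetic modal response

z = hilbert(x)                                    # analytic signal
amp, phase = np.abs(z), np.unwrap(np.angle(z))

# Fit straight lines to the instantaneous phase and log-amplitude (discard edge effects)
i = slice(100, -100)
wd = np.polyfit(t[i], phase[i], 1)[0]             # damped circular frequency [rad/s]
sigma = -np.polyfit(t[i], np.log(amp[i]), 1)[0]   # decay rate = zeta * wn
wn_id = np.sqrt(wd**2 + sigma**2)
print(f"identified f = {wn_id / (2 * np.pi):.3f} Hz, zeta = {sigma / wn_id:.4f}")
```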


Probabilistic seismic demand analysis of controlled steel moment-resisting frame structures

EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 12 2002
Luciana R. Barroso
Abstract This paper describes a proposed methodology, referred to as probabilistic seismic control analysis, for the development of probabilistic seismic demand curves for structures with supplemental control devices. The resulting curves may be used to determine the probability that any response measure, whether for a structure or control device, exceeds a pre-determined allowable limit. This procedure couples conventional probabilistic seismic hazard analysis with non-linear dynamic structural analyses to provide system-specific information. The method is demonstrated by evaluating the performance of specific controlled systems under seismic excitations, using the SAC Phase II structures for the Los Angeles region and three different control systems: (i) base isolation; (ii) linear viscous brace dampers; and (iii) active tendon braces. The use of a probabilistic format allows for consideration of structural response over a range of seismic hazards. The resulting annual hazard curves provide a basis for comparison between the different control strategies. Results for these curves indicate that no single control strategy is the most effective at all hazard levels. For example, at low return periods the viscous system has the lowest drift demands. However, at higher return periods, the isolation system becomes the most effective strategy. Copyright © 2002 John Wiley & Sons, Ltd. [source]
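The coupling of a ground-motion hazard curve with structural response results can be sketched as a numerical integration: the probability that the drift demand exceeds a limit, conditioned on the intensity level, is weighted by the hazard-curve increments. The hazard curve, demand model and dispersion below are invented placeholders, not the SAC/Los Angeles values.

```python
# Sketch of a probabilistic seismic demand calculation: combine a spectral-acceleration
# hazard curve with a drift-given-Sa model (assumed lognormal) to get the mean annual
# frequency of exceeding a drift limit.  All numbers are illustrative.
import numpy as np
from scipy.stats import norm

sa = np.linspace(0.05, 2.0, 200)               # intensity levels [g]
haz = 4e-4 * sa ** -2.6                        # assumed hazard curve: lambda(Sa > sa)

median_drift = 0.02 * sa ** 1.1                # assumed median drift demand given Sa
beta = 0.35                                    # assumed dispersion of drift given Sa
drift_limit = 0.03

p_exceed = 1 - norm.cdf(np.log(drift_limit / median_drift) / beta)   # P(drift > limit | Sa)
lam = np.trapz(p_exceed, x=-haz)               # integrate over hazard increments |d lambda|
print(f"mean annual frequency of drift > {drift_limit:.0%}: {lam:.2e}")
```

Repeating the same integration for each controlled system (and for device response measures) gives the demand hazard curves that the paper compares across control strategies.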


Utilization of a Copper Solid Amalgam Electrode for the Analytical Determination of Atrazine

ELECTROANALYSIS, Issue 22 2005
Djenaine De Souza
Abstract A copper solid amalgam electrode was prepared and used for the voltammetric determination of atrazine in natural water samples by square wave voltammetry. This electrode is a convenient substitute for the hanging mercury electrode since it is selective, sensitive, reliable and inexpensive, and presents low toxicity. The detection limit of atrazine obtained in pure water (laboratory samples) was shown to be lower than the maximum residue limit established for natural water by the Brazilian Environmental Agency. The relative standard deviation for 10 different measurements was found to be only 3.98% in solutions containing 8.16×10⁻⁶ mol L⁻¹ of atrazine. In polluted stream water samples, the recovery measurements were approximately 70.00%, supporting the applicability of the proposed methodology to the analysis of atrazine in such matrices. [source]


Pattern recognition in capillary electrophoresis data using dynamic programming in the wavelet domain

ELECTROPHORESIS, Issue 13 2008
Gerardo A. Ceballos
Abstract A novel approach for CE data analysis based on pattern recognition techniques in the wavelet domain is presented. Low-resolution, denoised electropherograms are obtained by applying several preprocessing algorithms, including denoising, baseline correction, and detection of the region of interest in the wavelet domain. The resultant signals are mapped into character sequences using first-derivative information and multilevel peak-height quantization. Next, a local alignment algorithm is applied to the coded sequences for peak pattern recognition. We also propose 2-D and 3-D representations of the found patterns for fast visual evaluation of the variability of chemical substance concentrations in the analyzed samples. The proposed approach is tested on the analysis of intracerebral microdialysate data obtained by CE and LIF detection, achieving a correct detection rate of about 85% with a processing time of less than 0.3 s per 25,000-point electropherogram. Using a local alignment algorithm on low-resolution denoised electropherograms might have a great impact on high-throughput CE, since the proposed methodology substitutes fast automatic pattern recognition analysis for slow, time-consuming, human-based visual pattern recognition methods. [source]
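A toy version of the coding and local-alignment idea is sketched below: peak heights of a low-resolution signal are quantised into characters, and two such strings are compared with a Smith-Waterman style dynamic program. The quantisation rule and scoring parameters are assumptions, not the authors' scheme.

```python
# Toy illustration (not the authors' exact coding or scoring scheme): quantise a
# low-resolution, denoised electropherogram into a character string and locally
# align two such strings with a Smith-Waterman-style dynamic program.
import numpy as np

def to_sequence(signal, levels="abcd"):
    """Multilevel quantisation of peak heights into characters ('-' = baseline)."""
    q = np.digitize(signal, np.linspace(signal.max() / 4, signal.max(), len(levels)))
    return "".join("-" if s < 0.05 * signal.max() else levels[min(k, len(levels) - 1)]
                   for s, k in zip(signal, q))

def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Return the best local-alignment score between sequences a and b."""
    H = np.zeros((len(a) + 1, len(b) + 1))
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i - 1, j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i, j] = max(0, diag, H[i - 1, j] + gap, H[i, j - 1] + gap)
    return H.max()

x = np.abs(np.sin(np.linspace(0, 6, 60))) * np.linspace(1, 3, 60)      # two fake low-resolution
y = np.abs(np.sin(np.linspace(0.2, 6.2, 60))) * np.linspace(1, 3, 60)  # electropherograms
print(smith_waterman(to_sequence(x), to_sequence(y)))
```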


Baghouse system design based on economic optimization

ENVIRONMENTAL PROGRESS & SUSTAINABLE ENERGY, Issue 4 2000
Antonio C. Caputo
In this paper a method is described for using economic optimization in the design of baghouse systems. That is, for a given emission control problem, the total filtration surface area, the overall pressure drop, fabric material effects, and the cleaning cycle frequency may all be evaluated simultaneously. In fact, as baghouse design parameters affect capital and operating expenses in interrelated and counteracting ways, a minimum total cost may be sought, defining the best arrangement of dust collection devices. With this in mind, detailed cost functions have been developed with the aim of providing an overall economic model. As a result, a discounted total annual cost has been obtained that may be minimized to yield an optimal baghouse configuration. Finally, in order to highlight the capabilities of the proposed methodology, some optimized solutions are also presented, which consider the economic impact of both bag materials and dust properties. [source]
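The economic trade-off described above can be sketched with a toy total-annual-cost model: annualised capital cost grows with the filtration area while fan energy cost grows with the pressure drop, which rises with the air-to-cloth ratio. All coefficients below are invented placeholders, not the paper's cost functions.

```python
# Toy baghouse cost trade-off: capital cost rises with cloth area, fan energy cost
# rises with pressure drop (proportional here to the air-to-cloth ratio).
import numpy as np
from scipy.optimize import minimize_scalar

Q = 40.0            # gas flow rate [m^3/s]
k1 = 120.0          # assumed pressure-drop coefficient [Pa per (m/s)] for fabric + dust cake
c_area = 35.0       # assumed annualised capital + bag replacement cost [$ per m^2 per year]
c_energy = 0.07     # electricity price [$ per kWh]
hours = 8000.0      # operating hours per year
eta = 0.65          # fan efficiency

def annual_cost(area_m2):
    v = Q / area_m2                        # air-to-cloth ratio [m/s]
    dp = k1 * v                            # simplified pressure drop [Pa]
    fan_kw = Q * dp / eta / 1000.0         # fan power [kW]
    return c_area * area_m2 + c_energy * fan_kw * hours

res = minimize_scalar(annual_cost, bounds=(50, 5000), method="bounded")
print(f"optimal cloth area ~ {res.x:.0f} m^2, total annual cost ~ ${res.fun:,.0f}")
```

The paper's model adds cleaning-cycle frequency, fabric choice and dust properties to this trade-off, but the minimisation over a discounted total annual cost follows the same logic.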


Estimation of trends in high urban ozone levels using the quantiles of the generalized extreme value distribution (GEV)

ENVIRONMETRICS, Issue 5 2010
Hortensia J. Reyes
Abstract In this paper we propose a statistical methodology to analyze the trends of very high values of tropospheric ozone. The methodology is based on the estimation of percentiles of the distribution of extreme values. The asymptotic distribution of the estimated percentiles is derived and shown to be normal, which allows us to use linear regression to investigate linear and non-linear trends. To illustrate the proposed methodology we use the information on ozone levels from some monitoring stations in Mexico City during the period from 1986 to 2005. The analysis of the information indicates a decrease in the very high ozone levels in the later years. Copyright © 2009 John Wiley & Sons, Ltd. [source]
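The core steps can be sketched as follows: fit a generalized extreme value distribution to each year's extreme ozone values, extract a high quantile, and regress the quantiles on time. The simulated data and the simple least-squares trend below stand in for the paper's estimators and asymptotic results.

```python
# Sketch: yearly GEV fits to extreme ozone values, followed by a regression of a
# high quantile on time.  The "observations" are simulated, not Mexico City data.
import numpy as np
from scipy.stats import genextreme, linregress

rng = np.random.default_rng(4)
years = np.arange(1986, 2006)
q95 = []
for k, year in enumerate(years):
    # simulated weekly maxima of ozone (ppm), with a slow downward drift over the years
    weekly_max = 0.28 - 0.002 * k + 0.04 * rng.gumbel(size=52)
    c, loc, scale = genextreme.fit(weekly_max)
    q95.append(genextreme.ppf(0.95, c, loc=loc, scale=scale))   # 95th percentile of extremes

trend = linregress(years, q95)
print(f"trend = {trend.slope:+.4f} ppm/year (p = {trend.pvalue:.3f})")
```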


Determination of cypermethrin in palm oil matrices

EUROPEAN JOURNAL OF LIPID SCIENCE AND TECHNOLOGY, Issue 10 2009
Badrul Hisyam Zainudin
Abstract In this study, a new method was developed for the determination of cypermethrin residue in both crude palm oil (CPO) and crude palm kernel oil (CPKO) using GC with an electron capture detector. In this method, the oil was extracted with acetonitrile. Aliquots were cleaned up using combined solid phase extraction (SPE) and a primary-secondary amine in combination with graphitized carbon black. The SPE cartridges were first conditioned and then eluted with acetonitrile. Cypermethrin recoveries from the fortified CPO samples were 87–98% with relative standard deviation (RSD) values of 4–8%, while those for the fortified CPKO samples were 83–100% with RSD values of 3–10%. Since good recoveries were obtained with RSD values below 10% in most cases, the proposed methodology will be useful for the analyses of palm oil samples. The method was successfully applied to the analysis of cypermethrin in real palm oil samples from various parts of Malaysia. No cypermethrin residue was found among the 30 samples analyzed. [source]


Transmission network expansion planning with security constraints based on bi-level linear programming

EUROPEAN TRANSACTIONS ON ELECTRICAL POWER, Issue 3 2009
Hong Fan
Abstract In a deregulated power market, multiple conflicting objectives with many constraints must be balanced in transmission planning. The primary objective is to ensure reliable supply to the demand as economically as possible. In this paper, a new bi-level linear programming model for transmission network expansion planning (TNEP) with security constraints is proposed. The model improves on the traditional formulation by adding reliability planning to economic planning as constraints, so that the optimal planning strategy is both economical and highly reliable. A hybrid algorithm that integrates an improved niching genetic algorithm with a primal-dual interior point method is proposed to solve the bi-level TNEP problem. The advantages of the new methodology are that (1) the most reliable planning scheme can be acquired as economically as possible; (2) the new model avoids the contradictions of conflicting objectives in TNEP and explores new ideas for TNEP modeling; and (3) the proposed hybrid algorithm is able to solve the bi-level program and exploits the merits of both constituent algorithms. Simulation results obtained from two well-known test systems and a comparison analysis show that the proposed methodology is valid. Copyright © 2008 John Wiley & Sons, Ltd. [source]


Inverse determination of the elastoplastic and damage parameters on small punch tests

FATIGUE & FRACTURE OF ENGINEERING MATERIALS AND STRUCTURES, Issue 11 2009
I. PEÑUELAS
ABSTRACT The small punch test (SPT) is very useful in situations where only small volumes of material are available. The aim of this paper is to create and validate a methodology for determining the mechanical and damage properties of steels from the load-displacement curve obtained by means of SPTs. This methodology is based on the inverse method, design of experiments, polynomial curve fitting and evolutionary multi-objective optimization, and also allows the SPTs themselves to be simulated. In order to validate the proposed methodology, the numerical results have been compared with experimental results obtained by means of normalized tests. Two-dimensional axisymmetric and three-dimensional simulations have been performed to allow the analysis of isotropic and anisotropic materials, respectively. [source]


Frequency-based fatigue analysis of non-stationary switching random loads

FATIGUE & FRACTURE OF ENGINEERING MATERIALS AND STRUCTURES, Issue 11 2007
D. BENASCIUTTI
ABSTRACT The service loadings in real systems are not only random, but also non-stationary. The spectral methods based on a frequency-domain characterization of random loads, which have been used as an alternative to classical time-domain approaches, cannot be applied to non-stationary loads, because the conventional spectral density is not able to capture the evolutionary frequency characteristics of non-stationary loads. This clearly restricts the applicability of the existing frequency-based methods to loads which are stationary. At the same time, it is also very difficult to propose general models valid for all types of load non-stationarity encountered in practice. Therefore, a practical approach is to restrict the analysis to a specific class of non-stationary loads; in this work, we consider particular non-stationary loads (i.e. switching loads), which are piecewise stationary in their variance. A frequency-domain analysis of such loads is proposed, which is based on a combination of the frequency-based analyses of the adjacent stationary segments, each of which can be either Gaussian or non-Gaussian. Numerically simulated load histories, as well as loads measured on mountain bikes on special tracks, are analysed to validate the proposed methodology. The presented results also show the correlation between load non-stationarity and non-Gaussianity. [source]
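One way to sketch the segment-wise frequency-domain treatment is to assign each stationary segment a damage intensity from the classical narrow-band (Rayleigh) approximation, computed from its spectral moments, and weight it by the segment duration. The S-N constants, spectral shape and segment variances below are illustrative, and the narrow-band formula is a simplification of the estimators discussed in the paper.

```python
# Sketch: combine frequency-domain damage over adjacent stationary segments of a
# switching load using the narrow-band (Rayleigh) approximation per segment.
import numpy as np
from scipy.special import gamma

k_sn, C_sn = 5.0, 1e12                       # assumed S-N curve: N * s^k = C

def narrowband_damage_rate(freq, psd):
    """Damage per second from the narrow-band approximation of a stationary segment."""
    m0 = np.trapz(psd, freq)
    m2 = np.trapz(freq**2 * psd, freq)
    nu0 = np.sqrt(m2 / m0)                   # rate of zero up-crossings [1/s]
    return nu0 / C_sn * (np.sqrt(2 * m0)) ** k_sn * gamma(1 + k_sn / 2)

freq = np.linspace(0.1, 50, 500)
base_psd = np.exp(-((freq - 10) / 3) ** 2)               # common spectral shape (illustrative)
segments = [(3.0, 600.0), (12.0, 150.0), (5.0, 400.0)]   # (variance, duration in s) per segment

damage = sum(dur * narrowband_damage_rate(freq, var * base_psd / np.trapz(base_psd, freq))
             for var, dur in segments)
print(f"total accumulated damage: {damage:.3e}")
```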


Heterogeneity testing in meta-analysis of genome searches

GENETIC EPIDEMIOLOGY, Issue 2 2005
Elias Zintzaras
Abstract Genome searches for identifying susceptibility loci for the same complex disease often give inconclusive or inconsistent results. Genome Search Meta-analysis (GSMA) is an established non-parametric method to identify genetic regions that rank high on average in terms of linkage statistics (e.g., lod scores) across studies. Meta-analysis typically aims not only to obtain average estimates, but also to quantify heterogeneity. However, heterogeneity testing between studies included in GSMA has not been developed yet. Heterogeneity may be produced by differences in study designs, study populations, and chance, and the extent of heterogeneity might influence the conclusions of a meta-analysis. Here, we propose and explore metrics that indicate the extent of heterogeneity for specific loci in GSMA based on Monte Carlo permutation tests. We have also developed software that performs both the GSMA and the heterogeneity testing. To illustrate the concept, the proposed methodology was applied to published data from meta-analyses of rheumatoid arthritis (4 scans) and schizophrenia (20 scans). In the first meta-analysis, we identified 11 bins with statistically low heterogeneity and 8 with statistically high heterogeneity. The respective numbers were 9 and 6 for the schizophrenia meta-analysis. For rheumatoid arthritis, bins 6.2 (the HLA region that is a well-documented susceptibility locus for the disease) and 16.3 (16q12.2-q23.1) had both high average ranks and low between-study heterogeneity. For schizophrenia, this was seen for bin 3.2 (3p25.3-p22.1) and heterogeneity was still significantly low after adjusting for its high average rank. Concordance was high between the proposed metrics and between weighted and unweighted analyses. Data from genome searches should be synthesized and interpreted considering both average ranks and heterogeneity between studies. Genet. Epidemiol. 28:123–137, 2005. © 2004 Wiley-Liss, Inc. [source]
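A Monte Carlo permutation test of the kind described above can be sketched for a single genomic bin: compare an observed heterogeneity metric (here simply the between-study variance of the bin's within-study ranks) with its distribution under random permutation of each study's ranks. The rank matrix and metric are invented for illustration; the GSMA metrics in the paper are more elaborate.

```python
# Sketch of a Monte Carlo permutation test for between-study heterogeneity in one bin.
import numpy as np

rng = np.random.default_rng(5)
n_studies, n_bins = 20, 120
ranks = np.array([rng.permutation(n_bins) + 1 for _ in range(n_studies)])   # within-study bin ranks

bin_of_interest = 7
obs_het = ranks[:, bin_of_interest].var()          # observed between-study variance of this bin's ranks

null = np.empty(5000)
for b in range(null.size):
    perm = np.array([rng.permutation(row) for row in ranks])   # permute ranks within each study
    null[b] = perm[:, bin_of_interest].var()

p_low = (null <= obs_het).mean()       # significantly LOW heterogeneity (studies agree on this bin)
p_high = (null >= obs_het).mean()      # significantly HIGH heterogeneity
print(f"observed variance = {obs_het:.1f}, P(low) = {p_low:.3f}, P(high) = {p_high:.3f}")
```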


Geostatistical Prediction and Simulation of Point Values from Areal Data

GEOGRAPHICAL ANALYSIS, Issue 2 2005
Phaedon C. Kyriakidis
The spatial prediction and simulation of point values from areal data are addressed within the general geostatistical framework of change of support (the term support referring to the domain informed by each measurement or unknown value). It is shown that the geostatistical framework (i) can explicitly and consistently account for the support differences between the available areal data and the sought-after point predictions, (ii) yields coherent (mass-preserving or pycnophylactic) predictions, and (iii) provides a measure of reliability (standard error) associated with each prediction. In the case of stochastic simulation, alternative point-support simulated realizations of a spatial attribute reproduce (i) a point-support histogram (Gaussian in this work), (ii) a point-support semivariogram model (possibly including anisotropic nested structures), and (iii) when upscaled, the available areal data. Such point-support-simulated realizations can be used in a Monte Carlo framework to assess the uncertainty in spatially distributed model outputs operating at a fine spatial resolution because of uncertain input parameters inferred from coarser spatial resolution data. Alternatively, such simulated realizations can be used in a model-based hypothesis-testing context to approximate the sampling distribution of, say, the correlation coefficient between two spatial data sets, when one is available at a point support and the other at an areal support. A case study using synthetic data illustrates the application of the proposed methodology in a remote sensing context, whereby areal data are available on a regular pixel support. It is demonstrated that point-support (sub-pixel scale) predictions and simulated realizations can be readily obtained, and that such predictions and realizations are consistent with the available information at the coarser (pixel-level) spatial resolution. [source]


A stabilized pseudo-shell approach for surface parametrization in CFD design problems

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 4 2002
O. Soto
Abstract A surface representation for computational fluid dynamics (CFD) shape design problems is presented. The surface representation is based on the solution of a simplified pseudo-shell problem on the surface to be optimized. A stabilized finite element formulation is used to perform this step. The methodology has the advantage of being completely independent of the CAD representation. Moreover, the user does not have to predefine any set of shape functions to parameterize the surface. The scheme uses a reasonable discretization of the surface to automatically build the shape deformation modes, by using the pseudo-shell approach and the design parameter positions. Almost every point of the surface grid can be chosen as a design parameter, which leads to a very rich design space. Most of the design variables are chosen in an automatic way, which makes the scheme easy to use. Furthermore, the surface grid is not distorted through the design cycles, which avoids remeshing procedures. An example is presented to demonstrate the proposed methodology. Copyright © 2002 John Wiley & Sons, Ltd. [source]


Coupling BEM/TBEM and MFS for the simulation of transient conduction heat transfer

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 2 2010
António Tadeu
Abstract The coupling between the boundary element method (BEM)/the traction boundary element method (TBEM) and the method of fundamental solutions (MFS) is proposed for the transient analysis of conduction heat transfer in the presence of inclusions, thereby overcoming the limitations posed by each method. The full domain is divided into sub-domains, which are modeled using the BEM/TBEM and the MFS, and the coupling of the sub-domains is achieved by imposing the required boundary conditions. The accuracy of the proposed algorithms, using different combinations of BEM/TBEM and MFS formulations, is checked by comparing the resulting solutions against referenced solutions. The applicability of the proposed methodology is shown by simulating the thermal behavior of a solid ring incorporating a crack or a thin inclusion in its wall. The crack is assumed to have null thickness and does not allow diffusion of energy; hence, the heat fluxes are null along its boundary. The thin inclusion is modeled as filled with thermal insulating material. Copyright © 2010 John Wiley & Sons, Ltd. [source]


Two-level multiscale enrichment methodology for modeling of heterogeneous plates

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 9 2009
Caglar Oskay
Abstract A new two-level multiscale enrichment methodology for analysis of heterogeneous plates is presented. The enrichments are applied at the displacement and strain levels: the displacement field of a Reissner–Mindlin plate is enriched using the multiscale enrichment functions based on the partition of unity principle; the strain field is enriched using the mathematical homogenization theory. The proposed methodology is implemented for linear and failure analysis of brittle heterogeneous plates. The eigendeformation-based model reduction approach is employed to efficiently evaluate the non-linear processes in case of failure. The capabilities of the proposed methodology are verified against direct three-dimensional finite element models with full resolution of the microstructure. Copyright © 2009 John Wiley & Sons, Ltd. [source]


A rational approach to mass matrix diagonalization in two-dimensional elastodynamics

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 15 2004
E. A. Paraskevopoulos
Abstract A variationally consistent methodology is presented, which yields diagonal mass matrices in two-dimensional elastodynamic problems. The proposed approach avoids ad hoc procedures and applies to arbitrary quadrilateral and triangular finite elements. As a starting point, a modified variational principle in elastodynamics is used. The time derivatives of displacements, the velocities, and the momentum type variables are assumed as independent variables and are approximated using piecewise linear or constant functions and combinations of piecewise constant polynomials and Dirac distributions. It is proved that the proposed methodology ensures consistency and stability. Copyright © 2004 John Wiley & Sons, Ltd. [source]


Anisotropic mesh adaption by metric-driven optimization

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 3 2004
Carlo L. Bottasso
Abstract We describe a Gauss–Seidel algorithm for optimizing a three-dimensional unstructured grid so as to conform to a given metric. The objective function for the optimization process is based on the maximum value of an elemental residual measuring the distance of any simplex in the grid to the local target metric. We analyse different possible choices for the objective function, and we highlight their relative merits and deficiencies. Alternative strategies for conducting the optimization are compared and contrasted in terms of resulting grid quality and computational costs. Numerical simulations are used for demonstrating the features of the proposed methodology, and for studying some of its characteristics. Copyright © 2004 John Wiley & Sons, Ltd. [source]


Robust diagnosis and fault-tolerant control of distributed processes over communication networks

INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 8 2009
Sathyendra Ghantasala
Abstract This paper develops a robust fault detection and isolation (FDI) and fault-tolerant control (FTC) structure for distributed processes modeled by nonlinear parabolic partial differential equations (PDEs) with control constraints, time-varying uncertain variables, and a finite number of sensors that transmit their data over a communication network. The network imposes limitations on the accuracy of the output measurements used for diagnosis and control purposes that need to be accounted for in the design methodology. To facilitate the controller synthesis and fault diagnosis tasks, a finite-dimensional system that captures the dominant dynamic modes of the PDE is initially derived and transformed into a form where each dominant mode is excited directly by only one actuator. A robustly stabilizing bounded output feedback controller is then designed for each dominant mode by combining a bounded Lyapunov-based robust state feedback controller with a state estimation scheme that relies on the available output measurements to provide estimates of the dominant modes. The controller synthesis procedure facilitates the derivation of: (1) an explicit characterization of the fault-free behavior of each mode in terms of a time-varying bound on the dissipation rate of the corresponding Lyapunov function, which accounts for the uncertainty and network-induced measurement errors and (2) an explicit characterization of the robust stability region where constraint satisfaction and robustness with respect to uncertainty and measurement errors are guaranteed. Using the fault-free Lyapunov dissipation bounds as thresholds for FDI, the detection and isolation of faults in a given actuator are accomplished by monitoring the evolution of the dominant modes within the stability region and declaring a fault when the threshold is breached. The effects of network-induced measurement errors are mitigated by confining the FDI region to an appropriate subset of the stability region and enlarging the FDI residual thresholds appropriately. It is shown that these safeguards can be tightened or relaxed by proper selection of the sensor spatial configuration. Finally, the implementation of the networked FDI–FTC architecture on the infinite-dimensional system is discussed and the proposed methodology is demonstrated using a diffusion–reaction process example. Copyright © 2008 John Wiley & Sons, Ltd. [source]


Development of a New Comprehensive Ocean Atlas for Indian Ocean utilizing ARGO Data

INTERNATIONAL JOURNAL OF CLIMATOLOGY, Issue 2 2010
B. Prasad Kumar
Abstract The World Ocean Atlas (WOA), also termed 'Levitus Climatology', is a global ocean climatology containing monthly, seasonal and annual means of temperature (T) and salinity (S) fields at standard ocean depths. The monthly climatology for T and S is available for standard depths up to 1000 m. The database used in the preparation of this climatology consists of historical records of Conductivity, Temperature and Depth (CTD) casts and other available marine observations collected in the past. The methodology used in the preparation of the WOA is objective analysis of data that are essentially non-synoptic and widely scattered in space. We understand that ARGO data has so far not been blended with the WOA, nor has its impact on improving the WOA climatology been assessed. Presently, with the wealth of marine data from ARGO profilers in the Indian Ocean, we propose a new approach to reconstruct T and S fields optimally utilizing the ARGO data. Here we develop a new model using Delaunay Tessellation with the QHull algorithm, delivering three-dimensional T and S fields from a non-uniformly scattered database up to a depth of 1000 m. For gaps in data-sparse regions, we use all available quality-checked Ocean Station Data (OSD) and Profiling Float Data (PFL) information on T and S, in addition to the existing ARGO data. The initiative here was to replace WOA data points with realistic information from ARGO and in situ data, thereby producing a new climatology atlas. We demonstrate the robustness of our approach, and the final T and S climatology compares favourably with the existing state-of-the-art WOA. The advantage of the proposed methodology is the scope for improving the ocean atlas with the addition of more ARGO data in the near future. The clustered approach in modelling enables ocean parameter retrieval in geometrically disconnected regions, with an option for hot restart. We believe that the new climatology will benefit the research community immensely. Copyright © 2009 Royal Meteorological Society [source]
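A simplified stand-in for the tessellation-based reconstruction is scipy.interpolate.griddata, which builds a Qhull Delaunay triangulation of the scattered profile positions and interpolates linearly within it. The scattered "profiles" below are synthetic, and the nearest-neighbour fill for gaps is an illustrative choice, not the paper's clustered approach.

```python
# Sketch of the tessellation step: Qhull/Delaunay-based interpolation of scattered
# temperature values onto a regular grid at one depth level.  Data are synthetic.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(6)
lon = rng.uniform(40, 100, 400)            # scattered ARGO / OSD / PFL positions (Indian Ocean box)
lat = rng.uniform(-30, 25, 400)
temp = 28 - 0.35 * np.abs(lat) + rng.normal(0, 0.3, lon.size)   # fake temperature at, say, 100 m

grid_lon, grid_lat = np.meshgrid(np.arange(40, 100.5, 0.5), np.arange(-30, 25.5, 0.5))
t_grid = griddata((lon, lat), temp, (grid_lon, grid_lat), method="linear")   # NaN outside the hull
t_grid = np.where(np.isnan(t_grid),
                  griddata((lon, lat), temp, (grid_lon, grid_lat), method="nearest"),
                  t_grid)                  # crude fill for gaps outside the tessellation
print(t_grid.shape, round(float(np.nanmean(t_grid)), 2))
```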


Unsupervized aggregation of commensurate correlated attributes by means of the Choquet integral and entropy functionals

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 2 2008
Ivan Kojadinovic
In the framework of aggregation by the discrete Choquet integral, the unsupervized method for the identification of the underlying capacity initially put forward in Kojadinovic, Eur J Oper Res 2004; 155:741–751, is presented and improvements are proposed. The suggested approach consists in replacing the subjective notion of the importance of a subset of attributes by that of the information content of a subset of attributes, which can be estimated from the set of profiles by means of an entropy measure. An example of the application of the proposed methodology is given: in the absence of initial preferences, the approach is applied to the evaluation of students. © 2008 Wiley Periodicals, Inc. [source]
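For reference, the discrete Choquet integral used as the aggregation operator can be computed as below; the capacity (a set function on the attributes) is given as an illustrative dict and is not one identified by the entropy-based procedure described above.

```python
# Sketch of the discrete Choquet integral with respect to a capacity on 3 attributes.
def choquet(x, mu):
    """Discrete Choquet integral of the profile x w.r.t. the capacity mu (dict on frozensets)."""
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])          # indices of x in increasing order
    total, prev = 0.0, 0.0
    for k, i in enumerate(order):
        coalition = frozenset(order[k:])                  # attributes with value >= x[i]
        total += (x[i] - prev) * mu[coalition]
        prev = x[i]
    return total

# Illustrative capacity on 3 attributes (e.g. three course grades); it must be
# monotone, with mu(empty set) = 0 and mu(all attributes) = 1.
mu = {frozenset(): 0.0, frozenset({0}): 0.3, frozenset({1}): 0.4, frozenset({2}): 0.2,
      frozenset({0, 1}): 0.6, frozenset({0, 2}): 0.6, frozenset({1, 2}): 0.7,
      frozenset({0, 1, 2}): 1.0}

print(choquet([14.0, 11.0, 17.0], mu))     # aggregated score of one student profile
```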


SOM-based estimation of climatic profiles

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 5 2006
Tatiana Tambouratzis
This article introduces a self-organizing map-based approach for estimating the climatic profile of locations of interest situated within an area of known morphology. The potential of the proposed methodology is illustrated on a number of locations within the Greek territory, and its superiority over other climatic profile estimation methodologies, both customarily used and novel, is demonstrated and numerically evaluated. It is envisioned that, after further development, the proposed methodology can be employed for creating accurate climatic maps of areas of known morphology. © 2006 Wiley Periodicals, Inc. Int J Int Syst 21: 503–522, 2006. [source]


Krylov model order reduction of finite element approximations of electromagnetic devices with frequency-dependent material properties

INTERNATIONAL JOURNAL OF NUMERICAL MODELLING: ELECTRONIC NETWORKS, DEVICES AND FIELDS, Issue 5 2007
Hong Wu
Abstract A methodology is presented for the Krylov subspace-based model order reduction of finite element models of electromagnetic structures with material properties and impedance boundary conditions exhibiting arbitrary frequency dependence. The proposed methodology is a generalization of an equation-preserving Krylov model order reduction scheme for second-order, linear dynamical systems. The emphasis of this paper is on the application of this method to the broadband model order reduction of planar circuits including lossy strips of arbitrary thickness and lossy reference planes. In particular, it is shown that the proposed model order reduction methodology provides for accurate modelling of the impact of the frequency dependence of the internal impedance per unit length of the thick lossy strip on the electromagnetic response of the stripline structure over a very broad, multi-GHz frequency band, extending all the way down to frequencies in the DC neighbourhood. In addition, the application of the proposed methodology to the broadband modelling of electromagnetic coupling between strips on either side of a lossy ground plane is demonstrated. Copyright © 2007 John Wiley & Sons, Ltd. [source]


Passive rational fitting of a network transfer function from its real part

INTERNATIONAL JOURNAL OF RF AND MICROWAVE COMPUTER-AIDED ENGINEERING, Issue 3 2008
Anne Y. Woo
Abstract A methodology is presented for the rational function approximation of a passive network function from sampled values of its real part over the bandwidth of interest. The accuracy and validity of the proposed methodology are demonstrated through its application to the fitting of several broadband, multiport transfer functions. © 2008 Wiley Periodicals, Inc. Int J RF and Microwave CAE, 2008. [source]


Artificial neural network modeling of RF MEMS resonators

INTERNATIONAL JOURNAL OF RF AND MICROWAVE COMPUTER-AIDED ENGINEERING, Issue 4 2004
Yongjae Lee
Abstract In this article, a novel and efficient approach for modeling radio-frequency microelectromechanical system (RF MEMS) resonators by using artificial neural network (ANN) modeling is presented. In the proposed methodology, the relationship between physical-input parameters and corresponding electrical-output parameters is obtained by combined circuit/full-wave/ANN modeling. More specifically, in order to predict the electrical responses from a resonator, an analytical representation of the electrical equivalent-network model (EENM) is developed from the well-known electromechanical analogs. Then, the reduced-order, nonlinear, dynamic macromodels from 3D finite-element method (FEM) simulations are generated to provide training, validating, and testing datasets for the ANN model. The developed ANN model provides an accurate prediction of an electrical response for various sets of driving parameters and it is suitable for integration with an RF/microwave circuit simulator. Although the proposed approach is demonstrated on a clamped-clamped (C-C) beam resonator, it can be readily adapted for the analysis of other micromechanical resonators. © 2004 Wiley Periodicals, Inc. Int J RF and Microwave CAE 14: 302–316, 2004. [source]


A General Algorithm for Univariate Stratification

INTERNATIONAL STATISTICAL REVIEW, Issue 3 2009
Sophie Baillargeon
Summary This paper presents a general algorithm for constructing strata in a population using X, a univariate stratification variable known for all the units in the population. Stratum h consists of all the units with an X value in the interval [b_{h-1}, b_h). The stratum boundaries {b_h} are obtained by minimizing the anticipated sample size for estimating the population total of a survey variable Y with a given level of precision. The stratification criterion allows the presence of a take-none and of a take-all stratum. The sample is allocated to the strata using a general rule that features proportional allocation, Neyman allocation, and power allocation as special cases. The optimization can take into account a stratum-specific anticipated non-response and a model for the relationship between the stratification variable X and the survey variable Y. A loglinear model with stratum-specific mortality for Y given X is presented in detail. Two numerical algorithms for determining the optimal stratum boundaries, attributable to Sethi and Kozak, are compared in a numerical study. Several examples illustrate the stratified designs that can be constructed with the proposed methodology. All the calculations presented in this paper were carried out with stratification, an R package that will be available on CRAN (Comprehensive R Archive Network). [source]