Proposed Methods


Selected Abstracts

Estimation of backward impedance on low-voltage distribution system using measured resonant current

Toru Miki
Abstract Two estimation methods for the backward impedance of a power distribution system are proposed in this paper. In the first method, the backward impedance is estimated from the frequency response of a transient current flowing into a capacitor connected to a distribution line. The backward impedance is determined from the attenuation constant and the resonant frequency calculated using the capacitance and the impedance of the power distribution system. These parameters can be reliably obtained from the frequency response of the transient current using the least-squares method. The accuracy of the method depends strongly on the choice of origin on the time axis for the Fourier transform, so an additional estimate of the time origin is required for an accurate estimation of the backward impedance. The second method estimates the backward impedance using two transient current waveforms obtained by alternately connecting different capacitors to a distribution line. The backward impedance can be represented as a function of the frequency responses of these currents. Since this method is independent of the time origin, it is suitable for automatic measurement of the backward impedance. The proposed methods are applicable to the estimation of harmonic currents in distribution systems. In this paper, harmonic currents flowing through a distribution line are calculated from the estimated backward impedance and the measured voltage harmonics obtained with an instrument developed by the authors. © 2010 Wiley Periodicals, Inc. Electr Eng Jpn, 171(3): 28–40, 2010; Published online in Wiley InterScience (DOI 10.1002/eej.20900) [source]
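As a minimal illustration of the first method's final step (a sketch, not the authors' code): once the attenuation constant and damped resonant frequency have been extracted from the transient current by least squares, a simple series-RLC view of the known capacitance plus the backward impedance yields R and L directly. The function name and the series-RLC simplification are assumptions for illustration.

```python
import math

def backward_impedance(alpha, omega_d, C):
    """Series-RLC sketch: the transient current decays as exp(-alpha*t)
    at damped frequency omega_d, with omega_0^2 = 1/(L*C) = omega_d^2 + alpha^2
    and alpha = R/(2*L).  C is the known capacitance switched onto the line."""
    L = 1.0 / (C * (omega_d**2 + alpha**2))
    R = 2.0 * alpha * L
    return R, L

# Round-trip check with R = 0.5 ohm, L = 1 mH, C = 100 uF:
C = 100e-6
alpha = 0.5 / (2 * 1e-3)                        # R/(2L) = 250 1/s
omega_d = math.sqrt(1 / (1e-3 * C) - alpha**2)  # damped resonance
print(backward_impedance(alpha, omega_d, C))
```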

The efficiency frontier approach to economic evaluation of health-care interventions

J. Jaime Caro
Abstract Background: IQWiG commissioned an international panel of experts to develop methods for assessing the relation of benefits to costs in the German statutory health-care system. Proposed methods: The panel recommended that IQWiG inform German decision makers of the net costs and the value of additional benefits of an intervention in the context of the other relevant interventions in that indication. To facilitate guidance regarding maximum reimbursement, this information is presented in an efficiency plot with costs on the horizontal axis and value of benefits on the vertical. The efficiency frontier links the interventions that are not dominated and provides guidance. A technology that lies on the frontier or to its left is reasonably efficient, while one falling to the right requires further justification for reimbursement at that price. This information does not automatically give the maximum reimbursement, as other considerations may be relevant. Given that the estimates are for a specific indication, they do not address priority setting across the health-care system. Conclusion: This approach informs decision makers about the efficiency of interventions, conforms to the mandate, and is consistent with basic economic principles. Empirical testing of its feasibility and usefulness is required. Copyright © 2010 John Wiley & Sons, Ltd. [source]
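The dominance step behind such a frontier can be sketched in a few lines (illustrative only: names and numbers are invented, and extended dominance across concave stretches of the frontier is not handled here):

```python
def efficiency_frontier(interventions):
    """interventions: (name, cost, value-of-benefit) triples.
    Keep the non-dominated ones: scanning in order of increasing cost,
    an intervention stays only if it offers more benefit than every
    cheaper (or equally cheap) alternative."""
    ranked = sorted(interventions, key=lambda t: (t[1], -t[2]))
    frontier, best_benefit = [], float("-inf")
    for name, cost, benefit in ranked:
        if benefit > best_benefit:
            frontier.append((name, cost, benefit))
            best_benefit = benefit
    return frontier

# "C" is dominated: it costs more than "B" yet delivers less benefit.
print(efficiency_frontier([("A", 0, 0), ("C", 12, 4), ("B", 10, 5), ("D", 20, 9)]))
```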

Traffic Estimation and Optimal Counting Location Without Path Enumeration Using Bayesian Networks

Enrique Castillo
A bi-level combination of an OD-pair matrix estimation model based on Bayesian networks and a Wardrop-minimum-variance model, which identifies the origins and destinations of link flows, is used to estimate OD-pair and unobserved link flows from observations of link and/or OD-pair flows. The Bayesian network model is also used to select the optimal number and locations of the link counters based on maximum correlation. Finally, the proposed methods are illustrated by their application to the Nguyen–Dupuis and Ciudad Real networks. [source]

Two Parallel Computing Methods for Coupled Thermohydromechanical Problems

B. A. Schrefler
Two different approaches are presented for the parallel implementation of computer programs for the solution of coupled thermohydromechanical problems. The first is an asynchronous method used in connection with staggered and monolithic solution procedures. The second is a domain decomposition method making use of substructuring techniques and a Newton–Raphson procedure. The advantages of the proposed methods are illustrated by examples. Both methods are promising, but no direct comparison between them is yet possible, because one works on a linear program with only two interacting fields and the other on a fully nonlinear set of (multifield) equations. [source]

Analysis of b-value calculations in diffusion weighted and diffusion tensor imaging

Daniel Güllmar
Abstract Diffusion weighted imaging has opened new diagnostic possibilities by using the microscopic diffusion of water molecules as a means of image contrast. The directional dependence of diffusion has led to the development of diffusion tensor imaging, which allows us to characterize microscopic tissue geometry. The link between the measured NMR signal and the self-diffusion tensor is established by the so-called b-matrices, which depend on the gradients' direction, strength, and timing. However, in the calculation of b-matrix elements, the influence of the imaging gradients on each element of the b-matrix is often neglected. This may cause errors, which in turn lead to an incorrect extraction of diffusion coefficients. In cases where the imaging gradients are strong (high spatial resolution), these errors may be substantial. Using a generic pulsed gradient spin-echo (PGSE) imaging sequence, the effects of neglecting the imaging gradients in the b-matrix calculation are demonstrated. By measuring an isotropic phantom with this sequence, it can be shown both analytically and experimentally that large deviations in single b-matrix elements are generated. These deviations arise when the diffusion weighting is applied in the readout direction of the imaging dimension in combination with relatively large imaging gradients. The systematic errors can be avoided by a full b-matrix calculation considering all the gradients of the sequence, or by generating cross-term-free signals using the geometric average of two diffusion weighted images with opposite polarity. The importance of calculating the exact b-matrices by the proposed methods lies in the fact that more precise diffusion parameters are obtained for extracting correct property maps, such as fractional anisotropy, volume ratio, or conductivity tensor maps. © 2005 Wiley Periodicals, Inc. Concepts Magn Reson Part A 25A: 53–66, 2005 [source]
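For orientation, a textbook simplification (not the paper's full derivation): the diffusion-only part of the b-value for a rectangular PGSE gradient pair is the Stejskal–Tanner expression, and the geometric-average trick works because flipping the diffusion-gradient polarity flips the sign of the cross term alone. The synthetic b-term values in the check below are illustrative.

```python
import math

GAMMA = 2.675e8  # proton gyromagnetic ratio, rad s^-1 T^-1

def b_value_pgse(G, delta, Delta):
    """Stejskal-Tanner b-value (s/m^2) for rectangular diffusion lobes of
    amplitude G (T/m), duration delta (s), separation Delta (s).  Imaging
    and cross terms are deliberately ignored here -- the very neglect the
    abstract warns about."""
    return GAMMA**2 * G**2 * delta**2 * (Delta - delta / 3.0)

def cross_term_free(S_plus, S_minus):
    """Geometric average of two acquisitions with opposite diffusion-gradient
    polarity; the cross term enters the two exponents with opposite sign
    and cancels."""
    return math.sqrt(S_plus * S_minus)

# Roughly b ~ 1500 s/mm^2 for G = 40 mT/m, delta = 20 ms, Delta = 40 ms:
print(b_value_pgse(0.04, 0.020, 0.040) * 1e-6)  # convert s/m^2 -> s/mm^2
```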

Simultaneous determination of metronidazole and spiramycin in bulk powder and in tablets using different spectrophotometric techniques

Fatma I. Khattab
Abstract Metronidazole (MZ) is an anti-infective drug used in the treatment of anaerobic bacterial and protozoal infections in humans. It is also used as a veterinary antiparasitic drug. Spiramycin (SP) is a medium-spectrum antibiotic highly effective against Gram-positive bacteria. Three simple, sensitive, selective and precise spectrophotometric methods were developed and validated for the simultaneous determination of MZ and SP in their pure form and in pharmaceutical formulations. In methods A and B, MZ was determined by direct spectrophotometry, measuring its zero-order (D0) absorption spectrum at its λmax = 311 nm. In method A, SP was determined by first-derivative spectrophotometry (D1), measuring the amplitude at 218.3 nm. In method B, the first derivative of the ratio spectra (DD1) was applied, and SP was determined by measuring the peak amplitude at 245.6 nm. Method C entailed mean centring of the ratio spectra (MCR), which allows the determination of both MZ and SP. The methods developed were used for the determination of MZ and SP over a concentration range of 5–25 µg ml−1. The proposed methods were used to determine both drugs in their pure, powdered forms with mean percentage recoveries of 100.16 ± 0.73 for MZ in methods A and B and 101.10 ± 0.90 in method C, and 100.09 ± 0.70, 100.02 ± 0.88 and 100.49 ± 1.26 for SP in methods A, B and C, respectively. The proposed methods were further tested using laboratory-prepared mixtures of the two drugs and were successfully applied to the analysis of MZ and SP in a tablet formulation without interference from each other or from the excipients. The results obtained by applying the proposed methods were compared statistically with a reported HPLC method, and no significant difference was observed between the methods regarding either accuracy or precision. Copyright © 2010 John Wiley & Sons, Ltd. [source]
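The principle behind the ratio-derivative step (method B) can be sketched numerically. The spectra and concentrations below are synthetic, chosen only to show why dividing by the interferent's spectrum and differentiating removes its contribution:

```python
def ratio_first_derivative(mixture, divisor):
    """Divide the mixture spectrum by the divisor (the spectrum of the
    interfering component), then take a first difference: any contribution
    proportional to the divisor becomes a constant and drops out."""
    ratio = [m / d for m, d in zip(mixture, divisor)]
    return [b - a for a, b in zip(ratio, ratio[1:])]

# Synthetic check: mixture = a*spec1 + b*spec2.  Dividing by spec1 and
# differentiating leaves a signal that depends on b only.
spec1 = [1.0, 1.0, 1.0, 1.0]      # interferent (flat, for illustration)
spec2 = [0.0, 1.0, 2.0, 3.0]      # analyte
mix = [2.0 * s1 + 0.5 * s2 for s1, s2 in zip(spec1, spec2)]
print(ratio_first_derivative(mix, spec1))   # amplitude tracks 0.5 alone
```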

Approximate analysis methods for asymmetric plan base-isolated buildings

Keri L. Ryan
Abstract An approximate method for linear analysis of asymmetric-plan, multistorey buildings is specialized for a single-storey, base-isolated structure. To find the mode shapes of the torsionally coupled system, the Rayleigh–Ritz procedure is applied using the torsionally uncoupled modes as Ritz vectors. This approach reduces to the analysis of two single-storey systems, each with vibration properties and eccentricities (labelled 'effective eccentricities') similar to the corresponding properties of the isolation system or the fixed-base structure. With certain assumptions, the vibration properties of the coupled system can be expressed explicitly in terms of these single-storey system properties. Three different methods are developed: the first is a direct application of the Rayleigh–Ritz procedure; the second and third use simplifications of the effective eccentricities, assuming a relatively stiff superstructure. The accuracy of these proposed methods and of the rigid-structure method in determining responses is assessed for a range of system parameters including eccentricity and structure flexibility. For a subset of systems with equal isolation and structural eccentricities, two of the methods are exact and the third is sufficiently accurate; all three are preferred to the rigid-structure method. For systems with zero isolation eccentricity, however, all the approximate methods considered are inconsistent and should be applied with caution, only to systems with small structural eccentricities or stiff structures. Copyright © 2001 John Wiley & Sons, Ltd. [source]

On the analysis of non-linear allometries

Abstract 1. Non-linear allometries are those where a log–log scatterplot of trait size against body size deviates from simple linearity. These are found in many insects, including the horns of beetles, the forceps of earwigs, and the heads of certain castes of ant. 2. Non-linear allometries are often associated with polyphenism that is itself related to behaviour: for example, the alternative mating tactics displayed by many species of beetle are widely associated with dimorphisms in horn size. 3. This paper critically reviews the current techniques used to analyse these datasets. 4. Recommendations include the use of scatterplots and assessment of the goodness of fit of simple linear models as an initial screen for non-linear allometry; the use of recently developed algorithms for 'segmented' regression to analyse continuous allometric relationships; and a pragmatic approach to the analysis of discontinuous relationships that recognises that in some cases there is no simple way to distinguish between morphs, and that all of the proposed methods for doing so have some drawbacks. 5. Worked examples of the analysis of two datasets from animals whose allometric relationships have been the subject of controversy are given; further worked examples are provided as online Supporting Information. [source]
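As an illustration of the 'segmented' regression idea in point 4 (a brute-force sketch under stated assumptions, not one of the reviewed algorithms): every candidate split of the sorted data is tried, each side gets its own least-squares line, and the split minimising the pooled residual sum of squares wins. The helper names are invented, and the two segments are not forced to join continuously.

```python
def _ols(x, y):
    """Slope, intercept and SSE of an ordinary least-squares line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    intercept = my - slope * mx
    sse = sum((b - intercept - slope * a) ** 2 for a, b in zip(x, y))
    return slope, intercept, sse

def two_segment_fit(x, y, min_pts=3):
    """Fit separate lines to x[:k] and x[k:] for every admissible split k;
    return (total SSE, split index, slope 1, slope 2) with minimal SSE.
    Assumes x is sorted in increasing order."""
    best = None
    for k in range(min_pts, len(x) - min_pts + 1):
        s1, _, e1 = _ols(x[:k], y[:k])
        s2, _, e2 = _ols(x[k:], y[k:])
        if best is None or e1 + e2 < best[0]:
            best = (e1 + e2, k, s1, s2)
    return best
```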

A design for robust power system stabilizer by means of H∞ control and particle swarm optimization method

Yoshifumi Zoka
Abstract This paper proposes two PSS design methods that take robustness into account for comparatively large power systems. The first is a design method based on H∞ control theory, and the second is a parameter determination method for a standard PSS using Particle Swarm Optimization (PSO). In order to deal with large-scale systems, a reduced model is developed to obtain a target system that preserves only the major oscillation modes. The major oscillation modes are selected using the residue concept, and the PSS is designed based on the target system. To verify their effectiveness, the proposed methods are compared with a previously proposed method based on a Genetic Algorithm (GA) through numerous numerical simulations. © 2008 Wiley Periodicals, Inc. Electron Comm Jpn, 91(8): 34–43, 2008; Published online in Wiley InterScience (DOI 10.1002/ecj.10132) [source]

Simultaneous analysis of multiple PCR amplicons enhances capillary SSCP discrimination of MHC alleles

Miguel Alcaide
Abstract Major histocompatibility complex (MHC) genotyping still remains one of the most challenging issues for evolutionary ecologists. To date, none of the proposed methods has proven perfect, and all have important pros and cons. Although denaturing capillary electrophoresis has become a popular alternative, allele identification commonly relies upon the conformational polymorphisms of at most two single-stranded DNA molecules. Using MHC class II (β chain, exon 2) of the black kite (Aves: Accipitridae) as our model system, we show that the simultaneous analysis of overlapping PCR amplicons from the same target region substantially enhances allele discrimination. To this end, we designed a multiplex PCR capable of generating four differentially sized and labeled amplicons from the same allele. The informative peaks available to assist allele calling are then four times as numerous as those generated by the analysis of a single PCR amplicon. Our approach proved successful in differentiating all the alleles (N=13) isolated from eight unrelated birds at a single optimal run temperature and under the same electrophoretic conditions. In particular, we emphasize that this approach may constitute a straightforward and cost-effective alternative for the genotyping of single or duplicated MHC genes displaying low to moderate sets of divergent alleles. [source]

A risk-based approach for bidding strategy in an electricity pay-as-bid auction

Javad Sadeh
Abstract With the reform of the electric power industry and the development of electrical energy markets in many countries, it is important to develop bidding strategies for generation companies (GenCos). In this environment, one of the most challenging tasks for a GenCo is developing effective strategies to optimize its hourly offer curve. In this paper, focusing on the structure of Iran's electricity market, we model the bidding problem from the viewpoint of a GenCo in a pay-as-bid (PAB) auction. Our goal is to present a tool for determining the optimal bidding strategy of a price-taking producer in an electricity PAB auction, taking the relevant risks into account. Owing to uncertainties in the power market, the market-clearing price (MCP) of each hour is assumed to be known as a probability density function (pdf). The optimal solution of the bidding problem is obtained analytically based on classical optimization theory. The analytical solution is also generalized to a multi-step bid protocol, and the properties of the generalized solution are discussed. A model is developed to incorporate risk using two different methods. The two proposed methods are then compared and the results are interpreted using numerical examples. In addition, the effect of variation of the MCP's pdf parameters on the supplier's profit is studied. Copyright © 2007 John Wiley & Sons, Ltd. [source]
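Stripped of the risk modelling, the price-taker's PAB trade-off reduces to maximising expected profit (bid − cost) · Pr(MCP ≥ bid): bidding higher earns more per accepted MWh but is accepted less often. A grid-search sketch (the uniform MCP distribution and all numbers are illustrative, not Iran-market data):

```python
def optimal_pab_bid(cost, mcp_survival, bids):
    """Pick the bid maximising (bid - cost) * Pr(MCP >= bid) for a
    price-taking producer paid its own bid when accepted."""
    return max(bids, key=lambda b: (b - cost) * mcp_survival(b))

# Toy case: MCP uniform on [0, 100] $/MWh, marginal cost 20 $/MWh.
surv = lambda p: max(0.0, min(1.0, (100.0 - p) / 100.0))
bids = [i / 10 for i in range(0, 1001)]
print(optimal_pab_bid(20.0, surv, bids))   # analytic optimum is 60 $/MWh
```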

Prediction of missing enzyme genes in a bacterial metabolic network

FEBS JOURNAL, Issue 9 2007
Reconstruction of the lysine-degradation pathway of Pseudomonas aeruginosa
The metabolic network is an important biological network consisting of enzymes and chemical compounds. However, a large number of metabolic pathways remain unknown, and most organism-specific metabolic pathways contain many missing enzymes. We present a novel method to identify the genes coding for missing enzymes using available genomic and chemical information from bacterial genomes. The proposed method consists of two steps: (a) estimation of the functional association between genes with respect to chromosomal proximity and evolutionary association, using supervised network inference; and (b) selection of gene candidates for missing enzymes based on the original candidate score and the chemical reaction information encoded in the EC number. We applied the proposed methods to infer the metabolic network of the bacterium Pseudomonas aeruginosa from two genomic datasets: gene position and phylogenetic profiles. Next, we predicted several missing enzyme genes to reconstruct the lysine-degradation pathway in P. aeruginosa using EC number information. As a result, we identified PA0266 as a putative 5-aminovalerate aminotransferase (EC and PA0265 as a putative glutarate semialdehyde dehydrogenase (EC To verify our prediction, we conducted biochemical assays and examined the activity of the products of the predicted genes, PA0265 and PA0266, in a coupled reaction. We observed that the predicted gene products catalyzed the expected reactions; no activity was seen when both gene products were omitted from the reaction. [source]

Gene, region and pathway level analyses in whole-genome studies

Omar De la Cruz
Abstract In the setting of genome-wide association studies, we propose a method for assigning a measure of significance to pre-defined sets of markers in the genome. The sets can be genes, conserved regions, or groups of genes such as pathways. Using the proposed methods and algorithms, evidence for association between a particular functional unit and a disease status can be obtained not only from the presence of a strong signal from a SNP within it, but also from the combination of several simultaneous weaker signals that are not strongly correlated. This approach has several advantages. First, moderately strong signals from different SNPs are combined to obtain a much stronger signal for the set, thereby increasing power. Second, in combination with methods that provide information on untyped markers, it leads to results that can readily be combined across studies and platforms that might use different SNPs. Third, the results are easy to interpret, since they refer to functional sets of markers that are likely to behave as a unit in their phenotypic effect. Finally, the availability of gene-level P -values for association is the first step in developing methods that integrate information from pathways and networks with genome-wide association data, which can lead to a better understanding of the genetic architecture of complex traits. The power of the approach is investigated in simulated and real datasets. Novel Crohn's disease associations are found using the WTCCC data. Genet. Epidemiol. 34: 222–231, 2010. © 2009 Wiley-Liss, Inc. [source]
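The flavour of combining several individually weak SNP signals into one set-level p-value can be conveyed with Fisher's method. This is a stand-in for illustration only: the abstract does not specify the authors' statistic, and Fisher's method assumes the p-values are independent, which SNPs in a region generally are not.

```python
import math

def fisher_combined_p(pvals):
    """Fisher's combination: X = -2*sum(log p_i) is chi-square with 2k
    degrees of freedom under the null.  For even df the chi-square survival
    function has a closed form, so no external stats library is needed."""
    k = len(pvals)
    x = -2.0 * sum(math.log(p) for p in pvals)
    term, total = 1.0, 1.0
    for i in range(1, k):            # sum_{i=0}^{k-1} (x/2)^i / i!
        term *= (x / 2.0) / i
        total += term
    return math.exp(-x / 2.0) * total

# Three individually unremarkable signals combine to a much stronger one:
print(fisher_combined_p([0.04, 0.06, 0.05]))
```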

Genetic association tests in the presence of epistasis or gene-environment interaction

Kai Wang
Abstract A genetic variant is very likely to manifest its effect on disease through its main effect as well as through its interaction with other genetic variants or environmental factors. Power to detect genetic variants can be greatly improved by modeling their main effects and their interaction effects through a common set of parameters or "generalized association parameters" (Chatterjee et al. [2006] Am. J. Hum. Genet. 79:1002–1016) because of the reduced number of degrees of freedom. Following this idea, I propose two models that extend the work by Chatterjee and colleagues. Particularly, I consider not only the case of relatively weak interaction effect compared to the main effect but also the case of relatively weak main effect. This latter case is perhaps more relevant to genetic association studies. The proposed methods are invariant to the choice of the allele for scoring genotypes or the choice of the reference genotype score. For each model, the asymptotic distribution of the likelihood ratio statistic is derived. Simulation studies suggest that the proposed methods are more powerful than existing ones under certain circumstances. Genet. Epidemiol. 2008. © 2008 Wiley-Liss, Inc. [source]

Candidate-gene association studies with pedigree data: Controlling for environmental covariates

S.L. Slager
Abstract Case-control studies provide an important epidemiological tool to evaluate candidate genes. There are many different study designs available. We focus on a more recently proposed design, which we call a multiplex case-control (MCC) design. This design compares allele frequencies between related cases, each of whom are sampled from multiplex families, and unrelated controls. Since within-family genotype correlations will exist, statistical methods will need to take this into account. Moreover, there is a need to develop methods to simultaneously control for potential confounders in the analysis. Generalized estimating equations (GEE) are one approach to analyze this type of data; however, this approach can have singularity problems when estimating the correlation matrix. To allow for modeling of other covariates, we extend our previously developed method to a more general model-based approach. Our proposed methods use the score statistic, derived from a composite likelihood. We propose three different approaches to estimate the variance of this statistic. Under random ascertainment of pedigrees, score tests have correct type I error rates; however, pedigrees are not randomly ascertained. Thus, through simulations, we test the validity and power of the score tests under different ascertainment schemes, and an illustration of our methods, applied to data from a prostate cancer study, is presented. We find that our robust score statistic has estimated type I error rates within the expected range for all situations we considered whereas the other two statistics have inflated type I error rates under nonrandom ascertainment schemes. We also find GEE to fail at least 5% of the time for each simulation configuration; at times, the failure rate reaches above 80%. In summary, our robust method may be the only current regression analysis method available for MCC data. Genet Epidemiol 24:273–283, 2003. © 2003 Wiley-Liss, Inc. [source]

Inferences for Selected Location Quotients with Applications to Health Outcomes

Gemechis Dilba Djira
The location quotient (LQ) is an index frequently used in geography and economics to measure the relative concentration of activities. This quotient is calculated in a variety of ways depending on which group is used as a reference. Here, we focus on simultaneous inference for the ratios of the individual proportions to the overall proportion based on binomial data. This is a multiple comparison problem, and inferences for LQs with adjustments for multiplicity have not been addressed before. The comparisons are negatively correlated. The quotients can be simultaneously tested against unity, and simultaneous confidence intervals can be constructed for the LQs based on existing probability inequalities and by directly using the asymptotic joint distribution of the associated test statistics. The proposed inferences are appropriate for analyses based on sample surveys. Two real data sets are used to demonstrate the application of multiplicity-adjusted LQs. A simulation study is also carried out to assess the ability of the proposed methods to achieve a nominal coverage probability. For the LQs considered, the coverage of the simple Bonferroni-adjusted Fieller intervals is observed to be almost as good as that of the method that directly takes the correlations into account. [source]
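A minimal numerical sketch of multiplicity-adjusted LQ intervals, under stated assumptions: a delta-method interval on the log scale with a Bonferroni-adjusted normal quantile. This is simpler than the Fieller intervals the paper studies, and it treats the group and overall proportions as independent, ignoring the negative correlation the abstract notes.

```python
import math
from statistics import NormalDist

def lq_bonferroni_cis(counts, totals, alpha=0.05):
    """Location quotients (group share over overall share) with simultaneous
    log-scale confidence intervals, Bonferroni-adjusted for k comparisons."""
    k = len(counts)
    X, N = sum(counts), sum(totals)
    z = NormalDist().inv_cdf(1.0 - alpha / (2.0 * k))   # adjusted quantile
    results = []
    for x, n in zip(counts, totals):
        lq = (x / n) / (X / N)
        # delta-method SE of log(LQ), independence assumed:
        se = math.sqrt((1.0 / x - 1.0 / n) + (1.0 / X - 1.0 / N))
        results.append((lq, lq * math.exp(-z * se), lq * math.exp(z * se)))
    return results

# Three regions with 100 observations each; activity counts 30, 20, 50:
for lq, lo, hi in lq_bonferroni_cis([30, 20, 50], [100, 100, 100]):
    print(round(lq, 3), round(lo, 3), round(hi, 3))
```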

Slug Test Analysis to Evaluate Permeability of Compressible Materials

GROUND WATER, Issue 4 2008
Hangseok Choi
Line-fitting methods such as the Hvorslev method and the Bouwer and Rice method provide a rapid and simple means of analyzing slug test data to estimate the in situ hydraulic conductivity (k) of geologic materials. However, when analyzing a slug test in a relatively compressible geologic formation, these conventional methods may have difficulty fitting a straight line to the semilogarithmic plot of the test data. Data from relatively compressible geologic formations frequently show a concave-upward curvature because of the effect of compressibility, or specific storage (Ss). To take the compressibility of geologic formations into account, a modified line-fitting method is introduced, which expands Chirlin's (1989) approach to the case of a partially penetrating well using the basic-time-lag fitting method. A case study of a compressible till is presented to verify the proposed method by comparing its results with those obtained using a type-curve method (the Kansas Geological Survey method [Hyder et al. 1994]). [source]
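The basic-time-lag step of the conventional Hvorslev analysis can be sketched as follows (an illustrative simplification, not the paper's modified method: one common shape factor for a screen of length Le and radius r_s in a casing of radius r_c, and perfectly exponential synthetic data, i.e. a formation with negligible compressibility):

```python
import math

def hvorslev_k(r_casing, r_screen, Le, times, heads, H0):
    """Hvorslev line-fitting sketch: regress ln(H/H0) on t through the
    origin to get the basic time lag T0 (the time at which H/H0 = 0.37),
    then k = r_c^2 * ln(Le/r_s) / (2 * Le * T0).  In a compressible
    formation this semilog plot curves concave upward, which is exactly
    the difficulty the abstract describes."""
    ln_ratio = [math.log(h / H0) for h in heads]
    slope = (sum(t * y for t, y in zip(times, ln_ratio))
             / sum(t * t for t in times))
    T0 = -1.0 / slope
    return r_casing**2 * math.log(Le / r_screen) / (2.0 * Le * T0)
```

With synthetic exponential recovery data the true k is recovered exactly, which is the sanity check used below.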

Simple modifications for stabilization of the finite point method

B. Boroomand
Abstract A stabilized version of the finite point method (FPM) is presented. A source of instability due to the evaluation of the base function using a least-squares procedure is discussed. A suitable mapping is proposed and employed to eliminate the ill-conditioning effect due to the directional arrangement of the points. A step-by-step algorithm is given for finding the local rotated axes and the dimensions of the cloud using the local average spacing and the inertia moments of the point distribution. It is shown that the conventional version of FPM may lead to wrong results when the proposed mapping algorithm is not used. It is also shown that another source of instability and non-monotonic convergence in collocation methods lies in the treatment of Neumann boundary conditions. Unlike the conventional FPM, in this work the Neumann boundary conditions and the equilibrium equations appear simultaneously in a weight equation similar to that of weighted residual methods. The stabilization procedure may be considered as an interpretation of the finite calculus (FIC) method. The main difference between the two stabilization procedures lies in the choice of the characteristic length in FIC and of the weight of the boundary residual in the proposed method. The new approach also provides a unique definition for the sign of the stabilization terms. The reasons for using stabilization terms only at the boundaries are discussed and the two methods are compared. Several numerical examples are presented to demonstrate the performance and convergence of the proposed methods. Copyright © 2005 John Wiley & Sons, Ltd. [source]

Voxel-based meshing and unit-cell analysis of textile composites

Hyung Joo Kim
Abstract Unit-cell homogenization techniques are frequently used together with the finite element method to compute effective mechanical properties for a wide range of different composites and heterogeneous materials systems. For systems with very complicated material arrangements, mesh generation can be a considerable obstacle to usage of these techniques. In this work, pixel-based (2D) and voxel-based (3D) meshing concepts borrowed from image processing are thus developed and employed to construct the finite element models used in computing the micro-scale stress and strain fields in the composite. The potential advantage of these techniques is that generation of unit-cell models can be automated, thus requiring far less human time than traditional finite element models. Essential ideas and algorithms for implementation of proposed techniques are presented. In addition, a new error estimator based on sensitivity of virtual strain energy to mesh refinement is presented and applied. The computational costs and rate of convergence for the proposed methods are presented for three different mesh-refinement algorithms: uniform refinement; selective refinement based on material boundary resolution; and adaptive refinement based on error estimation. Copyright © 2003 John Wiley & Sons, Ltd. [source]

Strain-driven homogenization of inelastic microstructures and composites based on an incremental variational formulation

Christian Miehe
Abstract The paper investigates computational procedures for the treatment of a homogenized macro-continuum with locally attached micro-structures of inelastic constituents undergoing small strains. The point of departure is a general internal variable formulation that determines the inelastic response of the constituents of a typical micro-structure as a generalized standard medium in terms of an energy storage and a dissipation function. Consistent with this type of inelasticity we develop a new incremental variational formulation of the local constitutive response where a quasi-hyperelastic micro-stress potential is obtained from a local minimization problem with respect to the internal variables. It is shown that this local minimization problem determines the internal state of the material for finite increments of time. We specify the local variational formulation for a setting of smooth single-surface inelasticity and discuss its numerical solution based on a time discretization of the internal variables. The existence of the quasi-hyperelastic stress potential allows the extension of homogenization approaches of elasticity to the incremental setting of inelasticity. Focusing on macro-strain-driven micro-structures, we develop a new incremental variational formulation of the global homogenization problem where a quasi-hyperelastic macro-stress potential is obtained from a global minimization problem with respect to the fine-scale displacement fluctuation field. It is shown that this global minimization problem determines the state of the micro-structure for finite increments of time. We consider three different settings of the global variational problem for prescribed linear displacements, periodic fluctuations and constant stresses on the boundary of the micro-structure and discuss their numerical solutions based on a spatial discretization of the fine-scale displacement fluctuation field. 
The performance of the proposed methods is demonstrated for the model problem of von Mises-type elasto-visco-plasticity of the constituents and applied to a comparative study of micro-to-macro transitions of inelastic composites. Copyright © 2002 John Wiley & Sons, Ltd. [source]

Design of adaptive variable structure controllers for T–S fuzzy time-delay systems

Tai-Zu Wu
Abstract In this paper, the adaptive variable structure control problem is presented for Takagi–Sugeno (T–S) fuzzy time-delay systems with uncertainties and external disturbances. The fuzzy sliding surfaces for the T–S fuzzy time-delay system are proposed by using a Lyapunov function, and we design adaptive variable structure controllers such that the global T–S fuzzy time-delay system confined to the fuzzy sliding surfaces is asymptotically stable. One example is given to illustrate the effectiveness of the proposed methods. Copyright © 2009 John Wiley & Sons, Ltd. [source]

Blind identification of sparse Volterra systems

Hong-Zhou Tan
Abstract This paper is concerned with blind identification of single-input single-output Volterra systems of finite order and memory using second- and third-order statistics. For a full-sized Volterra system (i.e. one whose kernels are all nonzero) excited by an unknown independently and identically distributed stationary random sequence, it is shown that blind identifiability does not hold in the second-order moment (SOM) or the third-order moment (TOM) domain. However, under some sufficient conditions, a class of truncated sparse Volterra systems, in which some kernels are restricted to being zero, can be identified blindly, and more Volterra parameters can be estimated in TOM than in SOM. Numerical examples illustrate the effectiveness of the proposed methods. Copyright © 2007 John Wiley & Sons, Ltd. [source]
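The SOM-versus-TOM distinction can be seen in a toy sparse Volterra system. The two-kernel model and its parameters below are illustrative assumptions, not the paper's parameterization: with a zero-mean i.i.d. Gaussian input, the chosen SOM slice carries no kernel information, while a TOM slice exposes the second-order kernel.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical sparse second-order Volterra system (illustrative only,
# not the paper's model): y[n] = h1*u[n] + h2*u[n]*u[n-2],
# driven by an unobserved i.i.d. standard-normal input u.
h1, h2 = 1.0, 0.5
u = rng.standard_normal(200_000)
y = h1 * u + h2 * u * np.roll(u, 2)      # np.roll(u, 2)[n] == u[n-2]

# Output-only statistics of the kind used for blind identification:
som = np.mean(y * np.roll(y, 2))         # second-order moment at lag 2
tom = np.mean(y**2 * np.roll(y, 2))      # a third-order moment slice

# For this system, E[y[n]*y[n-2]] = 0 (this SOM slice is blind to the
# kernels), while E[y[n]^2 * y[n-2]] = 2*h1^2*h2, so the TOM slice
# exposes h2 once h1 is normalized.
```

Here `som` is close to zero while `tom` is close to 2·h1²·h2 = 1.0, matching the abstract's point that more parameters are recoverable from third-order than from second-order statistics.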

On parameter estimation of a simple real-time flow aggregation model

Huirong Fu
Abstract There exists a clear need for a comprehensive framework for accurately analysing and realistically modelling the key traffic statistics that determine network performance. Recently, a novel traffic model, sinusoid with uniform noise (SUN), has been proposed, which outperforms other models in that it simultaneously achieves tractability, parsimony, accuracy (in predicting network performance), and efficiency (in real-time capability). In this paper, we design, evaluate and compare several approaches to estimating the key parameters of the SUN model: variance-based estimation (Var), minimum mean-square-error-based estimation (MMSE), MMSE with the constraint of variance (Var+MMSE), MMSE of the autocorrelation function with the constraint of variance (Var+AutoCor+MMSE), and variance of secondary demand-based estimation (Secondary Variance). Integrated with the SUN model, all the proposed methods are able to capture the basic behaviour of the aggregation reservation system and closely approximate its performance. In addition, we find that: (1) Var is very simple to operate and provides both upper and lower performance bounds; it can be integrated into other methods to yield a very accurate approximation of the aggregation's performance and thus an accurate solution; (2) Var+AutoCor+MMSE is the most accurate of the proposed methods in determining system performance; and (3) Var+MMSE and Var+AutoCor+MMSE differ from the other three methods in that both adopt an experimental analysis method, which helps to improve prediction accuracy while reducing computational complexity. Copyright © 2005 John Wiley & Sons, Ltd. [source]
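The flavour of variance-based estimation (Var) can be sketched on a toy signal. The exact SUN parameterization is in the cited work; the sinusoid-plus-known-width-uniform-noise model below is an assumption made only to illustrate moment matching on the sample variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical SUN-style signal: a sinusoid of unknown amplitude A plus
# zero-mean uniform noise of known width W. (Illustrative assumptions,
# not the paper's exact model.)
A_true, W = 3.0, 1.0
t = np.arange(100_000)
x = A_true * np.sin(0.01 * t + 0.3) + rng.uniform(-W / 2, W / 2, t.size)

# Variance-based moment matching: Var[x] = A^2/2 + W^2/12, hence
A_hat = np.sqrt(2.0 * (np.var(x) - W**2 / 12.0))
```

With 100,000 samples the estimate `A_hat` recovers the true amplitude to within about one percent, which is the appeal the abstract notes: Var is trivial to operate, at the cost of using only one moment of the data.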

On the small signal modeling of advanced microwave FETs: A comparative study

Giovanni Crupi
Abstract Although many successful techniques for extracting the small-signal equivalent circuit of microwave transistors from scattering-parameter measurements have been proposed in recent decades, small-signal modeling is still the object of intense research. Continuous improvement and development of the proposed methods are required to keep pace with the rapid evolution of transistor technology. The purpose of this article is to facilitate the choice of the most appropriate strategy for each particular case. To that end, we present a brief but thorough comparative study of analytical techniques developed for modeling different types of advanced microwave transistors: GaAs HEMTs, GaN HEMTs, and FinFETs. It will be shown that a crucial step for successful modeling is to accurately adapt the small-signal equivalent-circuit topology under the "cold" condition to each investigated technology. © 2008 Wiley Periodicals, Inc. Int J RF and Microwave CAE, 2008. [source]

Delay-dependent stability and stabilization of neutral time-delay systems

Jian Sun
Abstract This paper is concerned with the problem of stability and stabilization of neutral time-delay systems. A new delay-dependent stability condition is derived in terms of linear matrix inequality by constructing a new Lyapunov functional and using some integral inequalities without introducing any free-weighting matrices. On the basis of the obtained stability condition, a stabilizing method is also proposed. Using an iterative algorithm, the state feedback controller can be obtained. Numerical examples illustrate that the proposed methods are effective and lead to less conservative results. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Integrated fault detection and control for LPV systems

Heng Wang
Abstract This paper studies the integrated fault detection and control problem for linear parameter-varying systems. A parameter-dependent detector/controller is designed to generate two signals: residual and control signals that are used to detect faults and simultaneously meet some control objectives. The low-frequency faults and certain finite-frequency disturbances are considered. With the aid of the newly developed linearization techniques, the design methods are presented in terms of solutions to a set of linear matrix inequalities. A numerical example is given to illustrate the effectiveness of the proposed methods. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Numerical nonlinear observers using pseudo-Newton-type solvers

Shigeru Hanba
Article first published online: 12 DEC 200
Abstract In constructing a globally convergent numerical nonlinear observer of Newton type for a continuous-time nonlinear system, a globally convergent nonlinear equation solver with a guaranteed rate of convergence is necessary. In particular, the solver should be Jacobian-free, because an analytic form of the state transition map of the nonlinear system is generally unavailable. In this paper, two Jacobian-free nonlinear equation solvers of pseudo-Newton type that fulfill these requirements are proposed. One of them is based on a finite-difference approximation of the Jacobian with variable step size, together with a line search. The other uses a similar idea, but the estimate of the Jacobian is mostly updated through a BFGS-type law. Then, by using these solvers, globally stable numerical nonlinear observers are constructed. Numerical results are included to illustrate the effectiveness of the proposed methods. Copyright © 2007 John Wiley & Sons, Ltd. [source]
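The idea behind the first solver can be sketched as follows. This is a minimal finite-difference Newton iteration with a variable step size and a backtracking line search, under the stated assumptions; it omits the paper's convergence safeguards and the BFGS-type variant entirely.

```python
import numpy as np

def fd_newton_solve(f, x0, tol=1e-10, max_iter=50):
    """Jacobian-free pseudo-Newton solver: each Jacobian column is a
    forward difference with a step scaled to the iterate, and each
    Newton step is damped by backtracking on the residual norm."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        h = 1e-7 * max(1.0, np.linalg.norm(x))   # variable step size
        J = np.empty((x.size, x.size))
        for j in range(x.size):
            e = np.zeros(x.size)
            e[j] = h
            J[:, j] = (f(x + e) - fx) / h        # finite-difference column
        dx = np.linalg.solve(J, -fx)
        t = 1.0                                   # backtracking line search
        while (np.linalg.norm(f(x + t * dx)) >= np.linalg.norm(fx)
               and t > 1e-8):
            t *= 0.5
        x = x + t * dx
    return x

# Solve f(x) = 0 for f(x) = (x0^2 + x1 - 2, x0 - x1); the root nearest
# the start point is (1, 1).
root = fd_newton_solve(lambda x: np.array([x[0]**2 + x[1] - 2.0,
                                           x[0] - x[1]]),
                       np.array([2.0, 0.5]))
```

In the observer setting, `f` would be the residual between predicted and measured outputs along a trajectory; evaluating it requires only forward simulation, which is why the Jacobian-free property matters.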

Augmented Lyapunov functional and delay-dependent stability criteria for neutral systems

Yong He
Abstract In this paper, an augmented Lyapunov functional is proposed to investigate the asymptotic stability of neutral systems. Two methods, with and without decoupling of the Lyapunov matrices and system matrices, are developed and shown to be equivalent to each other. The resulting delay-dependent stability criteria are less conservative than existing ones owing to the augmented Lyapunov functional and the introduction of free-weighting matrices. The delay-independent criteria are obtained as an easy corollary. Numerical examples are given to demonstrate the effectiveness and reduced conservatism of the proposed methods. Copyright © 2005 John Wiley & Sons, Ltd. [source]

Low-complexity unambiguous acquisition methods for BOC-modulated CDMA signals

Elena Simona Lohan
Abstract The new M-code signals of GPS and the signals proposed for the future Galileo systems are of split-spectrum type, where the pseudorandom (PRN) code is multiplied with rectangular sub-carriers in one or several stages. Sine and cosine binary-offset-carrier (BOC) modulations are examples of modulations that split the signal spectrum and create ambiguities in the envelope of the autocorrelation function (ACF) of the modulated signals. Thus, the acquisition of split-spectrum signals, based on the ambiguous ACF, poses some challenges, which might be overcome at the expense of higher complexity (e.g. by decreasing the step in searching the timing hypotheses). Recently, two techniques that deal with the ambiguities of the ACF have been proposed, referred to as 'sideband (SB) techniques' (by Betz, Fishman et al.) or 'BPSK-like' techniques (by Martin, Heiries et al.), since they use SB correlation channels and the obtained ACF looks similar to the ACF of a BPSK-modulated PRN code. These techniques allow the use of a higher search step compared with the ambiguous ACF situation. However, both these techniques use SB-selection filters and modified reference PRN codes at the receivers, which affect the implementational complexity. Moreover, the 'BPSK-like' techniques have so far been studied for even BOC-modulation orders (i.e. integer ratio between the sub-carrier frequency and the chip rate) and they fail to work for odd BOC-modulation orders (or equivalently for split-spectrum signals with significant zero-frequency content). We propose here three reduced-complexity methods that remove the ambiguities of the ACF of the split-spectrum signals and work for both even and odd BOC-modulation orders. Two of the proposed methods are extensions of the previously mentioned techniques, and the third one is introduced by the authors and called the unsuppressed adjacent lobes (UAL) technique.
We justify the choice of parameters for the proposed methods via theoretical analysis, and we compare the alternative methods in terms of complexity and performance. Copyright © 2008 John Wiley & Sons, Ltd. [source]
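The ACF ambiguity itself is easy to reproduce numerically. The sketch below uses a random ±1 code and a sine-BOC(1,1)-style square sub-carrier with illustrative sampling parameters (not the GPS/Galileo signal definitions): the correlation envelope has secondary peaks of magnitude about 0.5 at ±half a chip, which is exactly what unambiguous acquisition methods must suppress.

```python
import numpy as np

rng = np.random.default_rng(1)

# Random +/-1 PRN code, modulated by a sine-BOC(1,1)-style sub-carrier
# (one +1/-1 square period per chip). Parameters are illustrative.
n_chips, sps = 1023, 8                     # sps = samples per chip (even)
code = rng.choice([-1.0, 1.0], n_chips)
subcarrier = np.concatenate([np.ones(sps // 2), -np.ones(sps // 2)])
boc = np.repeat(code, sps) * np.tile(subcarrier, n_chips)

def acf(x, max_lag):
    """Circular autocorrelation, normalized so the zero-lag value is 1."""
    return np.array([np.dot(x, np.roll(x, k)) / x.size
                     for k in range(-max_lag, max_lag + 1)])

r = acf(boc, sps)                # lags within +/- one chip; r[sps] is lag 0
main_peak = r[sps]               # = 1 at zero lag
side_peak = abs(r[sps + sps // 2])   # envelope peak at +half a chip
```

For BPSK the same experiment gives a single triangular peak with no secondary maxima, which is why the 'BPSK-like' techniques aim to recover that shape.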

Water sorption kinetics in light-cured poly-HEMA and poly(HEMA-co-TEGDMA); determination of the self-diffusion coefficient by new iterative methods

Irini D. Sideridou
Abstract The present investigation is concerned with the determination of the self-diffusion coefficient (D) of water in methacrylate-based biomaterials following Fickian sorption by two new methods: the Iterative and the Graphical methods. The D value is traditionally determined from the initial slope of the corresponding sorption curve using the so-called Stefan approximation. The proposed methods, which use equations without approximations and data from the whole sorption range, yield accurate values of D even when the sorption curve does not present an initial linear portion. In addition to D, the Graphical method allows extrapolation of the mass of the sorbed water at equilibrium (M∞), even when the specimen's equilibrium mass fluctuates around its limiting value (m∞). Testing the proposed procedures with ideal and Monte Carlo simulated data revealed that these methods are readily applicable. The obtained D values, compared with those determined by Stefan's method, showed that the proposed methods provide more accurate results. Finally, the proposed methods were successfully applied to the experimental determination of the diffusion coefficient of water (50°C) in the homopolymer of 2-hydroxyethyl methacrylate (HEMA) and in the copolymer of HEMA with triethylene glycol dimethacrylate (98/2 mol/mol). These polymers were prepared by light curing (λ = 470 nm) at room temperature in the presence of camphorquinone and N,N-dimethylaminoethyl methacrylate as initiator. © 2007 Wiley Periodicals, Inc. J Appl Polym Sci 2007 [source]
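For contrast with the Iterative and Graphical methods, the classical initial-slope estimate the paper improves on can be sketched on synthetic data. The film thickness and D below are assumed values, not the paper's measurements: for short-time Fickian sorption in a film of thickness L exposed on both faces, Mt/M∞ ≈ (4/L)·√(Dt/π), so a straight-line fit of Mt/M∞ against √t gives D = π·(slope·L/4)².

```python
import numpy as np

# Synthetic short-time Fickian sorption data (assumed values, for
# illustration only).
L = 1.0e-3            # film thickness, m
D_true = 2.0e-12      # water self-diffusion coefficient, m^2/s

t = np.linspace(0.0, 2000.0, 50)[1:]             # s, short-time regime
frac = (4.0 / L) * np.sqrt(D_true * t / np.pi)   # Mt/Minf, short-time law

# Classical estimate: slope of Mt/Minf versus sqrt(t), then
# D = pi * (slope * L / 4)^2.
slope = np.polyfit(np.sqrt(t), frac, 1)[0]
D_est = np.pi * (slope * L / 4.0) ** 2
```

On clean data the slope method recovers D exactly; the paper's point is that real sorption curves often lack a clean initial linear portion, which is where whole-range methods pay off.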