Upper Bound (upper + bound)

Distribution by Scientific Domains
Distribution within Mathematics and Statistics

Kinds of Upper Bound

  • new upper bound

Terms modified by Upper Bound

  • upper bound solution

Selected Abstracts


    UPPER BOUNDS ON THE MINIMUM COVERAGE PROBABILITY OF CONFIDENCE INTERVALS IN REGRESSION AFTER MODEL SELECTION

    AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 3 2009
    Paul Kabaila
    Summary We consider a linear regression model, with the parameter of interest a specified linear combination of the components of the regression parameter vector. We suppose that, as a first step, a data-based model selection (e.g. by preliminary hypothesis tests or by minimizing the Akaike information criterion, AIC) is used to select a model. It is common statistical practice to then construct a confidence interval for the parameter of interest, based on the assumption that the selected model had been given to us a priori. This assumption is false, and it can lead to a confidence interval with poor coverage properties. We provide an easily computed finite-sample upper bound (calculated by repeated numerical evaluation of a double integral) to the minimum coverage probability of this confidence interval. This bound applies for model selection by any of the following methods: minimum AIC, minimum Bayesian information criterion (BIC), maximum adjusted R², minimum Mallows' Cp, and t-tests. The importance of this upper bound is that it delineates general categories of design matrices and model selection procedures for which this confidence interval has poor coverage properties. This upper bound is shown to be a finite-sample analogue of an earlier large-sample upper bound due to Kabaila and Leeb. [source]
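
    The bound itself is computed by numerical integration, but its message can be reproduced with a small simulation. The sketch below is an illustrative Monte Carlo check, not the paper's double-integral computation: in a two-regressor model with known error variance, a preliminary t-test decides whether to drop the second regressor, and the naive 95% interval for the first coefficient is evaluated over a grid of the nuisance coefficient; the minimum observed coverage falls well below the nominal level. All model settings here are assumptions made for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        n, sigma, z = 50, 1.0, 1.96                      # sample size, known error sd, 95% quantile
        x1 = rng.normal(size=n)
        x2 = 0.7 * x1 + rng.normal(scale=0.7, size=n)    # correlated regressors (assumed design)

        def coverage(beta1, beta2, reps=2000):
            hits = 0
            for _ in range(reps):
                y = beta1 * x1 + beta2 * x2 + rng.normal(scale=sigma, size=n)
                X = np.column_stack([x1, x2])
                b = np.linalg.lstsq(X, y, rcond=None)[0]
                cov = sigma**2 * np.linalg.inv(X.T @ X)
                if abs(b[1]) / np.sqrt(cov[1, 1]) < z:   # preliminary t-test: drop x2
                    b1 = np.linalg.lstsq(x1[:, None], y, rcond=None)[0][0]
                    se = sigma / np.sqrt(np.sum(x1**2))
                else:                                    # keep the full model
                    b1, se = b[0], np.sqrt(cov[0, 0])
                hits += abs(b1 - beta1) <= z * se        # naive CI that ignores selection
            return hits / reps

        # approximate the minimum coverage over the nuisance parameter beta2
        print(min(coverage(1.0, b2) for b2 in np.linspace(0.0, 1.0, 11)))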


    On the absence of large-order divergences in superstring theory

    FORTSCHRITTE DER PHYSIK/PROGRESS OF PHYSICS, Issue 1 2003
    S. Davis
    The genus-dependence of multi-loop superstring amplitudes is estimated at large orders in perturbation theory using the super-Schottky group parameterization of supermoduli space. Restriction of the integration region to a subset of supermoduli space and a single fundamental domain of the super-modular group suggests an exponential dependence on the genus. Upper bounds for these estimates are obtained for arbitrary N-point superstring scattering amplitudes and are shown to be consistent with exact results obtained for special type II string amplitudes for orbifold or Calabi-Yau compactifications. The genus-dependence is then obtained by considering the effect of the remaining contribution to the superstring amplitudes after the coefficients of the formally divergent parts of the integrals vanish as a result of a sum over spin structures. The introduction of supersymmetry therefore leads to the elimination of large-order divergences in string perturbation theory, a result which is based only on the supersymmetric generalization of the Polyakov measure and not the gauge group of the string model. [source]


    Upper bounds for single-source uncapacitated concave minimum-cost network flow problems

    NETWORKS: AN INTERNATIONAL JOURNAL, Issue 4 2003
    Dalila B. M. M. Fontes
    Abstract In this paper, we describe a heuristic algorithm based on local search for the Single-Source Uncapacitated (SSU) concave Minimum-Cost Network Flow Problem (MCNFP). We present a new technique for creating different and informed initial solutions to restart the local search, thereby improving the quality of the resulting feasible solutions (upper bounds). Computational results on different classes of test problems indicate the effectiveness of the proposed method in generating basic feasible solutions for the SSU concave MCNFP very near to a global optimum. A maximum upper bound percentage error of 0.07% is reported for all problem instances for which an optimal solution has been found by a branch-and-bound method. © 2003 Wiley Periodicals, Inc. [source]


    Area efficient layouts of the Batcher sorting networks

    NETWORKS: AN INTERNATIONAL JOURNAL, Issue 4 2001
    Shimon Even
    Abstract In the early 1980s, the grid area required by the sorting nets of Batcher for input vectors of length N was investigated by Thompson. He showed that Θ(N²) area is necessary and sufficient, but the hidden constant factors, both for the lower and upper bounds, were not discussed. In this paper, a lower bound of (N − 1)²/2 is proven for the area required by any sorting network. Upper bounds of 4N² and 3N² are shown for the bitonic sorter and the odd–even sorter, respectively. In the layouts, which are presented to establish these upper bounds, slanted lines are used and there are no knock-knees. © 2001 John Wiley & Sons, Inc. [source]
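
    As a concrete reminder of what these networks are, the sketch below generates the comparator schedule of Batcher's odd–even merge sort for N a power of two and verifies it on all 0/1 inputs (the zero–one principle). It is an illustrative reconstruction of the classical construction, not code from the paper.

        from itertools import product

        def oddeven_merge(lo, hi, r):
            # merge the sequence indexed lo..hi (inclusive), comparing elements r apart
            step = 2 * r
            if step < hi - lo:
                yield from oddeven_merge(lo, hi, step)
                yield from oddeven_merge(lo + r, hi, step)
                yield from ((i, i + r) for i in range(lo + r, hi - r, step))
            else:
                yield (lo, lo + r)

        def oddeven_merge_sort(lo, hi):
            # Batcher's odd-even merge sort over indices lo..hi (length a power of two)
            if hi - lo >= 1:
                mid = lo + (hi - lo) // 2
                yield from oddeven_merge_sort(lo, mid)
                yield from oddeven_merge_sort(mid + 1, hi)
                yield from oddeven_merge(lo, hi, 1)

        def apply(network, values):
            v = list(values)
            for i, j in network:                     # each pair is a compare-exchange wire
                if v[i] > v[j]:
                    v[i], v[j] = v[j], v[i]
            return v

        N = 8
        network = list(oddeven_merge_sort(0, N - 1))
        assert all(apply(network, bits) == sorted(bits) for bits in product((0, 1), repeat=N))
        print(len(network), "comparators for N =", N)    # 19 comparators for N = 8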


    Upper bounds for ruin probabilities in two dependent risk models under rates of interest

    APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 4 2010
    Dingjun Yao
    Abstract In this article, we consider two discrete-time risk models in which dependence structures for the payments and the interest force are considered. Two autoregressive moving-average (ARMA) models are introduced to model the premiums and rates of interest, and the claims are assumed to be independent. Generalized Lundberg inequalities for the ruin probabilities are derived by using a renewal recursive technique, which extend some known results. Copyright © 2009 John Wiley & Sons, Ltd. [source]
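
    A hedged illustration of the quantity being bounded: the simulation below estimates a finite-horizon ruin probability for a discrete-time surplus process with random interest, U_t = U_{t-1}(1 + i_t) + premium_t − claim_t, using i.i.d. premiums, claims and interest rates for simplicity (the paper's models allow ARMA dependence). Such estimates are what a Lundberg-type exponential upper bound is compared against; all distributions and parameters below are assumptions.

        import numpy as np

        rng = np.random.default_rng(1)

        def ruin_probability(u0=10.0, horizon=100, paths=5000):
            ruined = 0
            for _ in range(paths):
                u = u0
                for _ in range(horizon):
                    interest = rng.uniform(0.01, 0.05)       # random interest rate this period
                    claim = rng.exponential(0.9)             # claim this period
                    u = u * (1.0 + interest) + 1.0 - claim   # unit premium per period
                    if u < 0:
                        ruined += 1
                        break
            return ruined / paths

        print("estimated finite-horizon ruin probability:", ruin_probability())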


    A Probabilistic Framework for Bayesian Adaptive Forecasting of Project Progress

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 3 2007
    Paolo Gardoni
    An adaptive Bayesian updating method is used to assess the unknown model parameters based on recorded data and pertinent prior information. Recorded data can include equality, upper bound, and lower bound data. The proposed approach properly accounts for all the prevailing uncertainties, including model errors arising from an inaccurate model form or missing variables, measurement errors, statistical uncertainty, and volitional uncertainty. As an illustration of the proposed approach, the project progress and final time-to-completion of an example project are forecasted. For this illustration, the construction of civilian nuclear power plants in the United States is considered. This application considers two cases: (1) no information is available prior to observing the actual progress data of a specified plant, and (2) the construction progress of eight other nuclear power plants is available. The example shows that an informative prior is important for making accurate predictions when only a few records are available. This is also the time when forecasts are most valuable to the project manager. Having or not having prior information has no practical effect on the forecast once progress on a significant portion of the project has been recorded. [source]
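
    A minimal grid-approximation sketch of how equality, upper-bound and lower-bound observations can enter a Bayesian update of a single unknown progress rate is given below. It is only meant to illustrate mixing the three data types and is not the paper's framework; conventions for bound data vary, and every number is an assumption.

        import numpy as np
        from scipy import stats

        theta = np.linspace(0.1, 5.0, 500)                      # candidate progress rates (%/month)
        prior = stats.lognorm(s=0.5, scale=1.5).pdf(theta)      # informative prior (assumed)
        sigma = 0.3                                             # measurement error sd (assumed)

        posterior = prior.copy()
        for kind, value in [("equality", 1.2), ("upper", 1.0), ("lower", 0.8)]:
            if kind == "equality":
                like = stats.norm(theta, sigma).pdf(value)      # exact (noisy) observation
            elif kind == "upper":
                like = stats.norm(theta, sigma).cdf(value)      # only known to lie below value
            else:
                like = stats.norm(theta, sigma).sf(value)       # only known to lie above value
            posterior *= like

        posterior /= posterior.sum()
        print("posterior mean progress rate:", float((theta * posterior).sum()))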


    Metropolitan Open-Space Protection with Uncertain Site Availability

    CONSERVATION BIOLOGY, Issue 2 2005
    ROBERT G. HAIGHT
    Abstract: Urban planners acquire open space to protect natural areas and provide public access to recreation opportunities. Because of limited budgets and dynamic land markets, acquisitions take place sequentially depending on available funds and sites. To address these planning features, we formulated a two-period site selection model with two objectives: maximize the expected number of species represented in protected sites and maximize the expected number of people with access to protected sites. These objectives were both maximized subject to an upper bound on the area protected over the two periods. The trade-off between species representation and public access was generated by the weighting method of multiobjective programming. Uncertainty was represented with a set of probabilistic scenarios of site availability in a linear-integer formulation. We used data for 27 rare species in 31 candidate sites in western Lake County, near the city of Chicago, to illustrate the model. Each trade-off curve had a concave shape in which species representation dropped at an increasing rate as public accessibility increased, with the trade-off being smaller at higher levels of the area budget. Several sites were included in optimal solutions regardless of objective function weights, and these core sites had high species richness and public access per unit area. The area protected in period one depended on current site availability and on the probabilities of sites being undeveloped and available in the second period. Although the numerical results are specific to our study, the methodology is general and applicable elsewhere. [source]
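
    The weighting method described above can be illustrated with a drastically simplified, single-period version: a handful of hypothetical sites, deterministic availability, and brute-force enumeration of subsets under an area budget, sweeping the weight w to trace the species-versus-access trade-off. The sketch below uses made-up numbers and is not the paper's two-period stochastic formulation.

        from itertools import combinations

        # (area, people with access, species present) -- illustrative numbers only
        sites = [
            (3.0, 1200, {"A", "B"}),
            (2.0,  400, {"B", "C", "D"}),
            (4.0, 2500, {"A"}),
            (1.5,  300, {"E"}),
            (2.5,  800, {"C", "E", "F"}),
        ]
        budget = 6.0        # upper bound on total protected area

        def best_plan(w):
            best = (-1.0, ())
            for k in range(len(sites) + 1):
                for plan in combinations(range(len(sites)), k):
                    if sum(sites[i][0] for i in plan) > budget:
                        continue
                    covered = set()
                    for i in plan:
                        covered |= sites[i][2]
                    access = sum(sites[i][1] for i in plan)
                    score = w * len(covered) + (1 - w) * access / 1000.0   # crude scaling
                    best = max(best, (score, plan))
            return best

        for w in (0.0, 0.25, 0.5, 0.75, 1.0):
            print("w =", w, "->", best_plan(w))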


    Column restraint in post-tensioned self-centering moment frames

    EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 7 2010
    Chung-Che Chou
    Abstract Gaps that open at beam-to-column interfaces in a post-tensioned (PT) self-centering frame with more than one column are constrained by the columns, which makes the beam compression force differ from the applied PT force. This study proposes an analytical method for evaluating column bending stiffness and beam compression force by modeling column deformation according to the gap openings at all stories. The predicted compression forces in the beams are validated by a cyclic analysis of a three-story PT frame and by cyclic tests of a full-scale, two-bay, first-story PT frame, which represents a substructure of the three-story PT frame. The proposed method shows that, compared with the strand tensile force, the beam compression force is increased at the 1st story but decreased at the 2nd and 3rd stories due to column deformation compatibility. The PT frame tests show that the proposed method reasonably predicts the beam compression force and strand force, and that the beam compression force is 2% and 60% larger than the strand force for a minor restraint and a pin-supported boundary condition, respectively, at the tops of the columns. Therefore, the earlier method using a pin-supported boundary condition at the upper-story columns represents an upper bound of the effect and is shown to be overly conservative for cases where a structure responds primarily in its first mode. The proposed method allows more accurate prediction of the column restraint effects for structures that respond in a pre-determined mode shape, which is more typical of low- and mid-rise structures. Copyright © 2009 John Wiley & Sons, Ltd. [source]


    Energy input and zooplankton species richness

    ECOGRAPHY, Issue 6 2007
    Dag O. Hessen
    What are the relative contributions of temperature and solar irradiance, as types of energy delivery, to species richness at the ecosystem level? To address this question in lake ecosystems, we assessed zooplankton species richness in 1891 Norwegian lakes covering a wide range of latitude, altitude, and lake area. Geographical variables could largely be replaced by temperature-related variables, e.g. annual monthly maximum temperature or growth season. Multivariate analysis (PCA) revealed that not only maximum monthly temperature but also energy input in terms of solar radiation was closely associated with species richness. This was confirmed by stepwise linear regression analysis, in which lake area was also found to be significant. We tested the predictive power of the "metabolic scaling laws" for species richness by regressing the natural logarithm of species richness on the inverse of air temperature (in Kelvin), scaled by the Boltzmann constant so that the slope estimates the activation energy (eV). A significant negative slope of 0.78 for ln richness over temperature, given as 1/kT, was found, thus slightly higher than the range of slopes predicted by the scaling law (0.60–0.70). Temperature basically constrained the upper bound of species number, but it was only a modest predictor of actual richness. Both the PCA and the linear regression models left a large unexplained variance, probably due to lake-specific properties such as catchment influence, lake productivity, food-web structure, immigration constraints, or more stochastic effects. [source]
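
    The regression described above has a simple computational form: fit ln(richness) = a − E·(1/kT) and read the activation energy E (in eV) from the slope. The sketch below does this on synthetic data with an assumed E of 0.65 eV; it is only meant to make the formula concrete, not to reproduce the study's data.

        import numpy as np

        k = 8.617e-5                                    # Boltzmann constant, eV per Kelvin
        rng = np.random.default_rng(2)

        temp_C = rng.uniform(5, 20, size=200)           # hypothetical lake temperatures (deg C)
        inv_kT = 1.0 / (k * (temp_C + 273.15))
        true_E = 0.65                                   # assumed activation energy (eV)
        ln_richness = 30.0 - true_E * inv_kT + rng.normal(scale=0.4, size=200)

        slope, intercept = np.polyfit(inv_kT, ln_richness, 1)
        print("fitted activation energy (eV):", -slope)  # compare with the 0.60-0.70 range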


    Traffic analysis in optical burst switching networks: a trace-based case study

    EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 7 2009
    Ahmad Rostami
    Optical burst switching (OBS) appears to be a promising technology for building dynamic optical transport networks. The main advantage of OBS is that it allows dynamic allocation of resources at sub-wavelength granularity. Nevertheless, the burst contention problem, which occurs frequently inside the network, has to be addressed before OBS can be deployed as the next-generation optical transport network. Recently, much attention has been devoted to different approaches for resolving contention in OBS networks. Although the performance of these approaches depends strongly on the traffic characteristics in the network, the majority of studies so far are based on rather hypothetical traffic assumptions. In this study we use traces of real measurements in the Internet to derive realistic data about the traffic injected into the OBS network. Specifically, we investigate the marginal distributions of burst size, burst interdeparture time, assembly delay, and number of packets per burst, as well as the burstiness of the burst traces. We demonstrate that the performance of an OBS core node fed with the real traces is very similar to the results obtained when the traffic arriving at the core node is assumed to be Poisson. In fact, using a Poisson burst arrival process at the core node leads, in all the investigated cases, to an upper bound on the burst drop rate at that node. Copyright © 2009 John Wiley & Sons, Ltd. [source]
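
    For a bufferless core node, the classical benchmark for Poisson burst arrivals offered to W wavelengths is the Erlang B loss formula; the small sketch below computes it with the standard numerically stable recursion. Treating the node as an Erlang loss system is an assumption made here for illustration, not something stated in the abstract.

        def erlang_b(offered_load_erlangs: float, servers: int) -> float:
            """Erlang B blocking via the stable recursion B(m) = A*B(m-1) / (m + A*B(m-1))."""
            b = 1.0
            for m in range(1, servers + 1):
                b = offered_load_erlangs * b / (m + offered_load_erlangs * b)
            return b

        # e.g. 25 Erlangs of burst traffic offered to a node with 32 wavelengths
        print(erlang_b(25.0, 32))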


    Variable-length channel coding with noisy feedback

    EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 4 2008
    Stark C. Draper
    It is known that perfect noiseless feedback can be used to improve the reliability of communication systems. We show how to make those gains robust to noise on the feedback link. We focus on feedback links that are themselves discrete memoryless channels. We demonstrate that Forney's erasure-decoding exponent is achievable given any positive-capacity feedback channel. We also demonstrate that as the desired rate of communication approaches the capacity of the forward channel, the Burnashev upper bound on the reliability function is achievable given any positive-capacity noisy feedback channel. Finally, we demonstrate that our scheme dominates the erasure-decoding exponent at all rates and, for instance, at zero rate can achieve up to three-quarters of Burnashev's zero-rate reliability. This implies that in a shared medium, to maximise the reliability function some degrees of freedom should be allocated to feedback. Copyright © 2008 John Wiley & Sons, Ltd. [source]
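
    For reference, the benchmark referred to above is Burnashev's reliability function for a discrete memoryless channel with capacity C under perfect feedback, where C1 is the largest Kullback–Leibler divergence between the output distributions induced by two input letters; its value at R = 0 is the zero-rate reliability mentioned in the abstract. The statement below is the standard form and is included only as background.

        \[
          E_B(R) \;=\; C_1\left(1 - \frac{R}{C}\right), \qquad 0 \le R < C,
          \qquad
          C_1 \;=\; \max_{x,\,x'} D\bigl(P(\cdot \mid x)\,\Vert\,P(\cdot \mid x')\bigr).
        \]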


    THE RATE OF GENOME STABILIZATION IN HOMOPLOID HYBRID SPECIES

    EVOLUTION, Issue 2 2008
    C. Alex Buerkle
    Homoploid hybrid speciation has been recognized for its potential rapid completion, an idea that has received support from experimental and modeling studies. Following initial hybridization, the genomes of parental species recombine and junctions between chromosomal blocks of different parental origin leave a record of recombination and the time period before homogenization of the derived genome. We use detailed genetic maps of three hybrid species of sunflowers and models to estimate the time required for the stabilization of the new hybrid genome. In contrast to previous estimates of 60 or fewer generations, we find that the genomes of three hybrid sunflower species were not stabilized for hundreds of generations. These results are reconciled with previous research by recognizing that the stabilization of a hybrid species' genome is not synonymous with hybrid speciation. Segregating factors that contribute to initial ecological or intrinsic genetic isolation may become stabilized quickly. The remainder of the genome likely becomes stabilized over a longer time interval, with recombination and drift dictating the contributions of the parental genomes. Our modeling of genome stabilization provides an upper bound for the time interval for reproductive isolation to be established and confirms the rapid nature of homoploid hybrid speciation. [source]


    Measurement of the Debonding Resistance of Strongly Adherent Thick Coatings on Metals via In-plane Tensile Straining

    ADVANCED ENGINEERING MATERIALS, Issue 5 2007
    S. Ryelandt
    When the ratio hc/hs of coating and substrate thicknesses is large enough, interfacial debonding can be induced to propagate from the root of a transverse crack under in-plane loading. An energy balance analysis accounting for the flow rule of the substrate allows translating the load for steady state debonding into an upper bound for the debonding toughness. The method is validated by FEM simulations using a cohesive zone model. [source]


    A glassy lowermost outer core

    GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 1 2009
    Vernon F. Cormier
    SUMMARY New theories for the viscosity of metallic melts at core pressures and temperatures, together with observations of translational modes of oscillation of Earth's solid inner core, suggest a rapid increase in dynamic viscosity near the bottom of the liquid outer core. If the viscosity of the lowermost outer core (F region) is sufficiently high, it may be in a glassy state, characterized by a frequency-dependent shear modulus and increased viscoelastic attenuation. In testing this hypothesis, the amplitudes of high-frequency PKiKP waves are found to be consistent with an upper bound on the shear velocity in the lowermost outer core of 0.5 km s⁻¹ at 1 Hz. The fit of a Maxwell rheology to the frequency-dependent shear modulus constrained by seismic observations at both low and high frequency favours a model of the F region as a 400-km-thick chemical boundary layer. This layer has both a higher density and a higher viscosity than the bulk of the outer core, with a peak viscosity on the order of 10⁹ Pa s or higher near the inner core boundary. If lateral variations in the F region are confirmed to correlate with lateral variations observed in the structure of the uppermost inner core, they may be used to map differences in the solidification process of the inner core and flow in the lowermost outer core. [source]
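
    For background, the standard Maxwell-model relations behind the phrase 'frequency-dependent shear modulus' are shown below, with relaxation time τ = η/μ∞ (η the viscosity, μ∞ the unrelaxed rigidity, ρ the density); the paper's specific parameterization and fit may differ.

        \[
          \mu(\omega) \;=\; \mu_\infty\,\frac{\omega^2\tau^2}{1+\omega^2\tau^2}
          \;+\; i\,\mu_\infty\,\frac{\omega\tau}{1+\omega^2\tau^2},
          \qquad
          v_s(\omega) \;\approx\; \sqrt{\frac{\operatorname{Re}\,\mu(\omega)}{\rho}},
          \qquad
          \tau = \frac{\eta}{\mu_\infty}.
        \]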


    Stochastic Study of Solute Transport in a Nonstationary Medium

    GROUND WATER, Issue 2 2006
    Bill X. Hu
    A Lagrangian stochastic approach is applied to develop a method of moments for solute transport in a physically and chemically nonstationary medium. Stochastic governing equations for the mean solute flux and the solute covariance are obtained analytically, to first-order accuracy in the log conductivity and/or chemical sorption variances, and solved numerically using the finite-difference method. The developed method, the numerical method of moments (NMM), is used to predict radionuclide solute transport processes in the saturated zone below the Yucca Mountain project area. The mean, variance, and upper bound of the radionuclide mass flux through a control plane 5 km downstream of the footprint of the repository are calculated. According to their chemical sorption capacities, the various radionuclides are grouped as nonreactive, weakly sorbing, and strongly sorbing chemicals. The NMM method is used to study their transport processes and influencing factors. To verify the method of moments, a Monte Carlo simulation is conducted for nonreactive chemical transport. Results indicate that the two methods are consistent, but the NMM method is computationally more efficient than the Monte Carlo method. This study adds to the ongoing debate in the literature on the effect of heterogeneity on solute transport prediction, especially on prediction uncertainty, by showing that the standard deviation of the solute flux is larger than the mean solute flux even when the heterogeneity of the hydraulic conductivity within each geological layer is mild. This study provides a method that may become an efficient calculation tool for many environmental projects. [source]


    Optimal drug pricing, limited use conditions and stratified net benefits for Markov models of disease progression

    HEALTH ECONOMICS, Issue 11 2008
    Gregory S. Zaric
    Abstract Limited use conditions (LUCs) are a method of directing treatment with new drugs to those populations where they will be most cost effective. In this paper we investigate how a drug manufacturer could determine pricing and LUCs to maximize profits. We assume that the payer makes formulary decisions on the basis of net monetary benefits, that the disease can be modeled using a Markov model of disease progression, and that the drug reduces the probability of progression between states of the Markov model. LUCs are expressed as a range of probabilities of disease progression over which patients would have access to the new drug. We assume that the manufacturer determines the price and LUCs in order to maximize profits. We show that an explicit trade-off exists between the drug's price and the use conditions, that there is an upper bound on the drug price, that the proportion of the population targeted by the LUC does not depend on quality of life or costs in each health state or the payer's willingness to pay, and that high drug prices do not always correspond with high profits. Copyright © 2008 John Wiley & Sons, Ltd. [source]
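
    A toy version of the payer's calculation makes the structure concrete: a three-state Markov model (Well, Progressed, Dead) in which the drug scales the progression probability, discounted QALYs and costs accumulated over a horizon, and the incremental net monetary benefit compared against zero. Every number below is an illustrative assumption, not data or results from the paper.

        import numpy as np

        def discounted_outcomes(p_progress, horizon=40, discount=0.03):
            # transition matrix over states [Well, Progressed, Dead]
            P = np.array([[1 - p_progress - 0.01, p_progress, 0.01],
                          [0.0,                   0.90,       0.10],
                          [0.0,                   0.00,       1.00]])
            qaly = np.array([0.85, 0.55, 0.0])          # quality of life per state-year
            cost = np.array([1000.0, 12000.0, 0.0])     # annual cost per state
            state = np.array([1.0, 0.0, 0.0])           # everyone starts Well
            q_total = c_total = 0.0
            for t in range(horizon):
                df = 1.0 / (1.0 + discount) ** t
                q_total += df * state @ qaly
                c_total += df * state @ cost
                state = state @ P
            return q_total, c_total

        wtp, annual_drug_cost = 50000.0, 8000.0
        q0, c0 = discounted_outcomes(p_progress=0.12)            # standard care
        q1, c1 = discounted_outcomes(p_progress=0.12 * 0.6)      # drug cuts progression by 40%
        c1 += annual_drug_cost * 40                              # crude full-horizon drug cost
        print("incremental net monetary benefit:", wtp * (q1 - q0) - (c1 - c0))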


    Standing Facilities and Interbank Borrowing: Evidence from the Federal Reserve's New Discount Window

    INTERNATIONAL FINANCE, Issue 3 2003
    Craig Furfine
    Standing facilities are designed to place an upper bound on the rates at which financial institutions lend to one another overnight, reducing the volatility of the overnight interest rate, typically the rate targeted by central banks. However, improper design of the facility might decrease a bank's incentive to participate actively in the interbank market. Thus, the mere availability of central-bank-provided credit may lead to its use being greater than what would be expected based on the characteristics of the interbank market. By contrast, however, banks may perceive a stigma from using such facilities, and thus borrow less than what one might expect, thereby reducing the facilities' effectiveness at reducing interest rate volatility. We develop a model demonstrating these two alternative implications of a standing facility. Empirical predictions of the model are then tested using data from the Federal Reserve's new primary credit facility and the US federal funds market. A comparison of data from before and after recent changes to the discount window suggests continued reluctance to borrow from the Federal Reserve. [source]


    A new fast hybrid adaptive grid generation technique for arbitrary two-dimensional domains

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 3 2010
    Mohamed S. Ebeida
    Abstract This paper describes a new fast hybrid adaptive grid generation technique for arbitrary two-dimensional domains. This technique is based on a Cartesian background grid with square elements and quadtree decomposition. A new algorithm is introduced for the distribution of boundary points based on the curvature of the domain boundaries. The quadtree decomposition is governed either by the distribution of the boundary points or by a size function when a solution-based adaptive grid is desired. The resulting grid is quad-dominant and ready for the application of finite element, multi-grid, or line-relaxation methods. All the internal angles in the final grid have a lower bound of 45° and an upper bound of 135°. Although our main interest is in grid generation for unsteady flow simulations, the technique presented in this paper can be employed in many other fields. Several application examples are provided to illustrate the main features of this new approach. Copyright © 2010 John Wiley & Sons, Ltd. [source]
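
    A bare-bones sketch of the size-function-driven quadtree step is shown below: a unit-square background cell is recursively split until each leaf is no larger than the requested size at its centre. The boundary-point distribution, balancing, quad conversion and the 45°–135° angle guarantees of the actual method are deliberately omitted, and the size function is made up.

        def size_function(x, y):
            # desired cell size: finer near the point (0.3, 0.7) -- illustrative only
            return 0.02 + 0.3 * ((x - 0.3) ** 2 + (y - 0.7) ** 2) ** 0.5

        def refine(x, y, h, leaves, max_depth=8):
            cx, cy = x + h / 2, y + h / 2
            if h <= size_function(cx, cy) or max_depth == 0:
                leaves.append((x, y, h))                 # keep this cell as a leaf
                return
            for dx in (0.0, h / 2):                      # otherwise split into four children
                for dy in (0.0, h / 2):
                    refine(x + dx, y + dy, h / 2, leaves, max_depth - 1)

        leaves = []
        refine(0.0, 0.0, 1.0, leaves)
        print(len(leaves), "leaf cells; smallest size:", min(h for _, _, h in leaves))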


    Guaranteed computable error bounds for conforming and nonconforming finite element analyses in planar elasticity

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 9 2010
    Mark Ainsworth
    Abstract We obtain fully computable a posteriori error estimators for the energy norm of the error in second-order conforming and nonconforming finite element approximations in planar elasticity. These estimators are completely free of unknown constants and give a guaranteed numerical upper bound on the norm of the error. The estimators are shown to also provide local lower bounds, up to a constant and higher-order data oscillation terms. Numerical examples are presented illustrating the theory and confirming the effectiveness of the estimator. Copyright © 2009 John Wiley & Sons, Ltd. [source]


    Limit analysis and convex programming: A decomposition approach of the kinematic mixed method

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 3 2009
    Franck Pastor
    Abstract This paper proposes an original decomposition approach to the upper bound method of limit analysis. It is based on a mixed finite element approach and on a convex interior point solver, using linear or quadratic discontinuous velocity fields. Presented in plane strain, this method appears to be rapidly convergent, as verified in the Tresca compressed bar problem in the linear velocity case. Then, using discontinuous quadratic velocity fields, the method is applied to the celebrated problem of the stability factor of a Tresca vertical slope: the upper bound is lowered to 3.7776 (a value to be compared with the best published lower bound of 3.772) by succeeding in solving non-linear optimization problems with millions of variables and constraints. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    Upper and lower bounds in limit analysis: Adaptive meshing strategies and discontinuous loading

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 4 2009
    J. J. Muñoz
    Abstract Upper and lower bounds of the collapse load factor are here obtained as the optimum values of two discrete constrained optimization problems. The membership constraints for the Von Mises and Mohr–Coulomb plasticity criteria are written as a set of quadratic constraints, which permits one to solve the optimization problem using specific algorithms for Second-Order Conic Programming (SOCP). From the stress field at the lower bound and the velocities at the upper bound, we construct a novel error estimate based on elemental and edge contributions to the bound gap. These contributions are employed in an adaptive remeshing strategy that is able to reproduce fan-type mesh patterns around points with discontinuous surface loading. The solution of this type of problem is analysed in detail, and from this study some additional meshing strategies are also described. We particularise the resulting formulation and strategies to two-dimensional problems in plane strain, and we demonstrate the effectiveness of the method with a set of numerical examples extracted from the literature. Copyright © 2008 John Wiley & Sons, Ltd. [source]
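
    As an example of how such a membership constraint becomes conic, one common plane-strain form of the Von Mises criterion is shown below (k is the shear strength); the paper's exact discretized constraints for Von Mises and Mohr–Coulomb may be written differently.

        \[
          \left\lVert
          \begin{pmatrix} \tfrac{1}{2}\bigl(\sigma_{xx}-\sigma_{yy}\bigr) \\[2pt] \sigma_{xy} \end{pmatrix}
          \right\rVert_2 \;\le\; k,
          \qquad k = \frac{\sigma_y}{\sqrt{3}},
        \]

    which is a second-order cone constraint on the stress components and can be handed directly to an SOCP solver.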


    Robust exponential stability for discrete-time interval BAM neural networks with delays and Markovian jump parameters

    INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 9 2010
    Jiqing Qiu
    Abstract This paper investigates the problem of global robust exponential stability for discrete-time interval BAM neural networks with mode-dependent time delays and Markovian jump parameters, by utilizing a Lyapunov–Krasovskii functional combined with the linear matrix inequality (LMI) approach. A new discrete-time, discrete-state Markov process is considered. An exponential stability performance analysis result is first established for the error systems, without ignoring any terms in the derivative of the Lyapunov functional, by considering the relationship between the time-varying delay and its upper bound. The delay factor depends on the mode of operation. Three numerical examples are given to demonstrate the merits of the proposed method. Copyright © 2010 John Wiley & Sons, Ltd. [source]


    Parameter identifiability with Kullback–Leibler information divergence criterion

    INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 10 2009
    Badong Chen
    Abstract We study the problem of parameter identifiability with the Kullback–Leibler information divergence (KLID) criterion. The KLID-identifiability is defined, which can be related to many other concepts of identifiability, such as identifiability with the Fisher information matrix criterion, identifiability with the least-squares criterion, and identifiability with the spectral density criterion. We also establish a simple check criterion for the Gaussian process and derive an upper bound for the minimal identifiable horizon of a Markov process. Furthermore, we define the asymptotic KLID-identifiability and prove that, under certain constraints, the KLID-identifiability will be a sufficient or necessary condition for the asymptotic KLID-identifiability. The consistency problems of several parameter estimation methods are also discussed. Copyright © 2008 John Wiley & Sons, Ltd. [source]
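
    A small numerical illustration of the KLID idea: a parameter is identifiable at θ₀ when the divergence D(p_{θ₀} ‖ p_θ) is strictly positive for every θ ≠ θ₀. Below, with Gaussian output distributions (closed-form KL divergence), a mean parameterized directly by θ is identifiable, while a mean parameterized by θ² is not, since ±θ₀ give the same distribution. This is an illustrative construction, not an example from the paper.

        import numpy as np

        def kl_gauss(mu0, s0, mu1, s1):
            # closed-form KL divergence D( N(mu0, s0^2) || N(mu1, s1^2) )
            return np.log(s1 / s0) + (s0**2 + (mu0 - mu1) ** 2) / (2 * s1**2) - 0.5

        theta0, sigma = 1.5, 1.0
        for th in np.linspace(-3, 3, 13):
            d_mean = kl_gauss(theta0, sigma, th, sigma)          # mean = theta   (identifiable)
            d_square = kl_gauss(theta0**2, sigma, th**2, sigma)  # mean = theta^2 (sign ambiguity)
            print(f"theta = {th:+.1f}   KLID(mean) = {d_mean:.3f}   KLID(mean^2) = {d_square:.3f}")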


    Direct adaptive command following and disturbance rejection for minimum phase systems with unknown relative degree

    INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 1 2007
    Jesse B. Hoagg
    Abstract This paper considers parameter-monotonic direct adaptive command following and disturbance rejection for single-input single-output minimum-phase linear time-invariant systems with knowledge of the sign of the high-frequency gain (first non-zero Markov parameter) and an upper bound on the magnitude of the high-frequency gain. We assume that the command and disturbance signals are generated by a linear system with known characteristic polynomial. Furthermore, we assume that the command signal is measured, but the disturbance signal is unmeasured. The first part of the paper is devoted to a fixed-gain analysis of a high-gain-stabilizing dynamic compensator for command following and disturbance rejection. The compensator utilizes a Fibonacci series construction to control systems with unknown-but-bounded relative degree. We then introduce a parameter-monotonic adaptive law and guarantee asymptotic command following and disturbance rejection. Copyright © 2006 John Wiley & Sons, Ltd. [source]


    Robust adaptive tracking control of uncertain discrete time systems

    INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 9 2005
    Shengping Li
    Abstract In this paper, the problem of robust adaptive tracking for uncertain discrete-time systems is considered from the slowly varying systems point of view. The class of uncertain discrete-time systems considered is subject to both ℓ∞-to-ℓ∞ bounded unstructured uncertainty and external additive bounded disturbances. The dynamic model of the reference signal to be tracked is not completely known a priori. For this problem, an indirect adaptive tracking controller is obtained from frozen-time controllers that, at each time, optimally robustly stabilize the estimated models of the plant and minimize the worst-case steady-state absolute value of the tracking error of the estimated model over the model uncertainty. Based on the ℓ∞-to-ℓ∞ stability and performance results for slowly varying systems found in the literature, the proposed adaptive tracking scheme is shown to have good robust stability. Moreover, a computable upper bound on the size of the unstructured uncertainty permitted by the adaptive system and a computable tight upper bound on the asymptotic robust steady-state tracking performance are provided. Copyright © 2005 John Wiley & Sons, Ltd. [source]


    A combined iterative scheme for identification and control redesigns

    INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 8 2004
    Paresh Date
    Abstract This work proposes a unified algorithm for identification and control. Frequency-domain data of the plant is weighted to satisfy the given performance specifications. A model is then identified from this weighted frequency-domain data, and a controller is synthesised using the H∞ loop-shaping design procedure. The cost function used in the identification stage essentially minimizes a tight upper bound on the difference between the achieved and the designed performance in the sense of the H∞ loop-shaping design paradigm. Given a model, a method is also suggested to re-adjust the model and weighting transfer functions to further reduce the worst-case chordal distance between the weighted true plant and the model. Copyright © 2004 John Wiley & Sons, Ltd. [source]


    Model reference adaptive control using a low-order controller

    INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 3 2001
    Daniel E. Miller
    Abstract In the model reference adaptive control problem, the goal is to force the error between the plant output and the reference model output asymptotically to zero. The classical assumptions on a single-input, single-output (SISO) plant are that it is minimum phase, and that the plant relative degree, the sign of the high-frequency gain, and an upper bound on the plant order are known. Here we consider a modified problem in which the objective is weakened slightly to requiring that the asymptotic value of the error be less than an (arbitrarily small) pre-specified constant. Using recent results on the design of generalized holds for model reference tracking, we present a new switching adaptive controller of dimension two which achieves this new objective for every minimum-phase SISO system; no structural information is required. Copyright © 2001 John Wiley & Sons, Ltd. [source]


    Variable structure robust state and parameter estimator

    INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 2 2001
    Alex S. Poznyak
    Abstract The problem of simultaneous robust state and parameter estimation for a class of SISO non-linear systems under mixed uncertainties (unmodelled dynamics as well as observation noise) is addressed. A non-linear variable structure robust 'observer-identifier' is introduced to obtain the corresponding estimates. The Lie derivative technique is used to obtain the observability conditions for the equivalent extended non-linear system. It is shown that, in general, the extended system can lose the global observability property, and a special procedure is needed to work well in this situation. The suggested adaptive observer has the non-linear high-gain observer structure with adjusted parameters, which provides a good upper bound for the identification error performance index. The Vandermonde transformation is used to derive this bound, which turns out to be tight. Three examples, dealing with a simple pendulum, the Duffing equation, and the van der Pol oscillator, are considered to illustrate the effectiveness of the suggested approach. Copyright © 2001 John Wiley & Sons, Ltd. [source]


    Kalman filter-based channel estimation and ICI suppression for high-mobility OFDM systems

    INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 10 2008
    Prerana Gupta
    Abstract The use of orthogonal frequency division multiplexing (OFDM) in frequency-selective fading environments has been well explored. However, OFDM is more prone to time-selective fading compared with single-carrier systems. Rapid time variations destroy the subcarrier orthogonality and introduce inter-carrier interference (ICI). Besides this, obtaining reliable channel estimates for receiver equalization is a non-trivial task in rapidly fading systems. Our work addresses the problem of channel estimation and ICI suppression by viewing the system as a state-space model. The Kalman filter is employed to estimate the channel; this is followed by a time-domain ICI mitigation filter that maximizes the signal-to-interference plus noise ratio (SINR) at the receiver. This method is seen to provide good estimation performance apart from significant SINR gain with low training overhead. Suitable bounds on the performance of the system are described; bit error rate (BER) performance over a time-invariant Rayleigh fading channel serves as the lower bound, whereas BER performance over a doubly selective system with ICI as the dominant impairment provides the upper bound. Copyright © 2008 John Wiley & Sons, Ltd. [source]
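
    The state-space idea can be illustrated with a deliberately scalar sketch: a single complex channel tap follows a first-order Gauss–Markov (AR(1)) model and is tracked by a Kalman filter from known pilot symbols observed in noise. The full OFDM state-space model and the SINR-maximizing ICI filter of the paper are omitted, and all parameters are assumptions.

        import numpy as np

        rng = np.random.default_rng(3)
        T, a = 500, 0.98                     # time steps, AR(1) fading coefficient
        q = 1 - a**2                         # process noise variance (unit-power channel)
        r = 0.05                             # observation noise variance

        h_true = np.zeros(T, dtype=complex)
        for t in range(1, T):
            h_true[t] = a * h_true[t - 1] + np.sqrt(q / 2) * (rng.normal() + 1j * rng.normal())

        pilots = rng.choice([1.0, -1.0], size=T) + 0j     # known BPSK pilot symbols
        noise = np.sqrt(r / 2) * (rng.normal(size=T) + 1j * rng.normal(size=T))
        y = pilots * h_true + noise                       # y_t = c_t * h_t + v_t

        h_hat, P, mse = 0.0 + 0j, 1.0, 0.0
        for t in range(T):
            h_hat, P = a * h_hat, a**2 * P + q                               # predict
            K = P * np.conj(pilots[t]) / (abs(pilots[t]) ** 2 * P + r)       # Kalman gain
            h_hat = h_hat + K * (y[t] - pilots[t] * h_hat)                   # update
            P = (1 - K * pilots[t]) * P
            mse += abs(h_hat - h_true[t]) ** 2
        print("channel tracking MSE:", mse / T)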


    A new traffic model for backbone networks and its application to performance analysis

    INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 6 2008
    Ming Yu
    Abstract In this paper, we present a new traffic model constructed from a random number of shifting level processes (SLPs) aggregated over time, in which the lengths of the active periods of the SLPs follow a Pareto or truncated Pareto distribution. For both cases, the model is proved to be asymptotically second-order self-similar. However, based on extensive traffic data we collected from a backbone network, we find that the active periods of the constituent SLPs are better approximated by a truncated Pareto distribution, rather than the Pareto distribution assumed in existing traffic model constructions. The queueing problem of a single server fed with traffic described by the model is equivalently converted to a problem with traffic described by Norros' model. For the tail probability of the queue-length distribution, an approximate expression and an upper bound are found in terms of large deviation estimates; these are mathematically more tractable than existing results. The effectiveness of the traffic model and the performance results are demonstrated by our simulations and experimental studies on a backbone network. Copyright © 2007 John Wiley & Sons, Ltd. [source]
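
    The construction direction can be sketched in a few lines: draw active-period lengths from a truncated Pareto distribution by inverse-CDF sampling, build independent ON/OFF (shifting level) sources, and aggregate their rates. The parameters below are illustrative assumptions, and the sketch does not reproduce the paper's estimation procedure or queueing analysis.

        import numpy as np

        rng = np.random.default_rng(4)

        def truncated_pareto(alpha, lo, hi, size):
            # inverse CDF of a Pareto(alpha, lo) distribution truncated to [lo, hi]
            u = rng.uniform(size=size)
            return lo / (1.0 - u * (1.0 - (lo / hi) ** alpha)) ** (1.0 / alpha)

        def on_off_source(T, alpha=1.4, lo=1, hi=1000, mean_off=20):
            rate = np.zeros(T)
            t = 0
            while t < T:
                on = int(truncated_pareto(alpha, lo, hi, 1)[0])   # truncated-Pareto active period
                rate[t:t + on] = 1.0
                t += on + int(rng.exponential(mean_off)) + 1      # followed by an OFF gap
            return rate

        T, n_sources = 10000, 50
        aggregate = sum(on_off_source(T) for _ in range(n_sources))
        print("mean rate:", aggregate.mean(), "  peak-to-mean ratio:", aggregate.max() / aggregate.mean())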