Lower Bound (lower + bound)


  • Selected Abstracts


    LOWER BOUNDS TO THE POPULATION SIZE WHEN CAPTURE PROBABILITIES VARY OVER INDIVIDUALS

    AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 2 2008
    Chang Xuan Mao
    Summary The problem of estimating population sizes has a wide range of applications. Although the size is non-identifiable when a population is heterogeneous, it is often useful to estimate the lower bounds and to construct lower confidence limits. A sequence of lower bounds, including the well-known Chao lower bound, is proposed. The bounds have closed-form expressions and are estimated by the method of moments or by maximum likelihood. Real examples from epidemiology, wildlife management and ecology are investigated. Simulation studies are used to assess the proposed estimators. [source]
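    As an aside for readers unfamiliar with the Chao bound mentioned above, the sketch below computes the classical moment-based version from capture frequencies. It is illustrative only (the paper proposes a whole sequence of bounds); the function name and toy data are invented.

```python
def chao_lower_bound(counts):
    """Moment-based Chao lower bound on population size.

    `counts` holds the capture frequency of each distinct individual that
    was observed (counts[i] = number of times individual i was captured).
    """
    s_obs = len(counts)                    # distinct individuals observed
    f1 = sum(1 for c in counts if c == 1)  # captured exactly once
    f2 = sum(1 for c in counts if c == 2)  # captured exactly twice
    if f2 > 0:
        return s_obs + f1 * f1 / (2.0 * f2)
    return s_obs + f1 * (f1 - 1) / 2.0     # bias-corrected form when f2 = 0

# toy data: 8 singletons, 3 doubletons, 2 individuals seen three times
print(chao_lower_bound([1] * 8 + [2] * 3 + [3] * 2))  # about 23.7
```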


    On-line algorithms for minimizing makespan on batch processing machines

    NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 3 2001
    Gouchuan Zhang
    Abstract We consider the problem of scheduling jobs on-line on batch processing machines with dynamic job arrivals to minimize makespan. A batch machine can handle up to B jobs simultaneously. The jobs that are processed together form a batch, and all jobs in a batch start and complete at the same time. The processing time of a batch is given by the longest processing time of any job in the batch. Each job becomes available at its arrival time, which is unknown in advance, and its processing time becomes known upon its arrival. In the first part of this paper, we address the single batch processing machine scheduling problem. First we deal with two variants: the unbounded model where B is sufficiently large and the bounded model where jobs have two distinct arrival times. For both variants, we provide on-line algorithms with worst-case ratio (√5 + 1)/2 (the inverse of the Golden ratio) and prove that these results are the best possible. Furthermore, we generalize our algorithms to the general case and show a worst-case ratio of 2. We then consider the unbounded case for parallel batch processing machine scheduling. Lower bounds are given, and two on-line algorithms are presented. © 2001 John Wiley & Sons, Inc. Naval Research Logistics 48: 241–258, 2001 [source]
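    The following small helper (not from the paper) simply evaluates the makespan of a given batch sequence under the problem definition stated in the abstract: a batch starts once all of its jobs have arrived, and its length is its longest job. The example batches and capacity are made up.

```python
def batch_makespan(batches):
    """Makespan of a given sequence of batches on one batch machine.

    Each batch is a list of (arrival_time, processing_time) jobs processed
    together: the batch can start only after all of its jobs have arrived,
    and its length equals the longest processing time it contains.
    """
    t = 0.0
    for batch in batches:
        ready = max(r for r, _ in batch)   # latest arrival in the batch
        length = max(p for _, p in batch)  # batch processing time
        t = max(t, ready) + length         # start as early as possible
    return t

# two batches on a machine of capacity B = 3 (jobs per batch <= B)
print(batch_makespan([[(0, 4), (1, 2)], [(3, 5), (3, 1), (4, 2)]]))  # 10.0
```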


    Lower bounds to the Weizsäcker energy

    INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, Issue 6 2005
    M. G. Marmorino
    Abstract Gram determinants can be used to obtain lower bounds to the Weizsäcker energy for an N-particle density using expectation values of a wide variety of functions of position. This work focuses on expectation values of radial moments to easily derive many previous bounds and some new bounds and to show that improved bounds are easily obtained by increasing the size of the determinant. © 2005 Wiley Periodicals, Inc. Int J Quantum Chem, 2005 [source]


    Lower bounds in communication complexity based on factorization norms

    RANDOM STRUCTURES AND ALGORITHMS, Issue 3 2009
    Nati Linial
    Abstract We introduce a new method to derive lower bounds on randomized and quantum communication complexity. Our method is based on factorization norms, a notion from Banach space theory. This approach gives us access to several powerful tools from this area such as normed-space duality and Grothendieck's inequality. This extends the arsenal of methods for deriving lower bounds in communication complexity. As we show, our method subsumes most of the previously known general approaches to lower bounds on communication complexity. Moreover, we extend all (but one) of these lower bounds to the realm of quantum communication complexity with entanglement. Our results also shed some light on the question of how much communication can be saved by using entanglement. It is known that entanglement can save one of every two qubits, and examples for which this is tight are also known. It follows from our results that this bound on the saving in communication is tight almost always. © 2008 Wiley Periodicals, Inc. Random Struct. Alg., 2009 [source]


    A computerized treatment of dyslexia: Benefits from treating lexico-phonological processing problems

    DYSLEXIA, Issue 1 2005
    Jurgen Tijms
    Abstract Two hundred sixty-seven 10- to 14-year-old Dutch children with dyslexia were randomly assigned to one of two samples that received a treatment for reading and spelling difficulties. The treatment was computer-based and focused on learning to recognise and use the phonological and morphological structure of Dutch words. The inferential algorithmic basis of the program ensured that the instruction was highly structured. The present study examined the reliability of the effects of the treatment, and provided an evaluation of the attained levels of reading and spelling by relating them to normal levels. Both samples revealed large, generalized treatment effects on reading accuracy, reading rate, and spelling skills. Following the treatment, participants attained an average level of reading accuracy and spelling. The attained level of reading rate was comparable to the lower bound of the average range. Copyright © 2004 John Wiley & Sons, Ltd. [source]


    Low-cost J-R curve estimation based on CVN upper shelf energy

    FATIGUE & FRACTURE OF ENGINEERING MATERIALS AND STRUCTURES, Issue 8 2001
    K. Wallin
    J-R curve testing is costly and difficult. The results may also sometimes be unreliable. For less demanding structures, J-R curve testing is therefore not practical. The only way to introduce tearing instability analysis for such cases is to estimate the J-R curves indirectly from some simpler test. The Charpy-V notch test provides information about the energy needed to fracture a small specimen in half. On the upper shelf this energy relates to ductile fracture resistance and it is possible to correlate it to the J-R curve. Here, 112 multispecimen J-R curves from a wide variety of materials were analysed and a simple power-law-based description of the J-R curves was correlated to the CVNUS energy. This new correlation corresponds essentially to a 5% lower bound and conforms well with the earlier correlations, regardless of the definition of the ductile fracture toughness parameter. [source]


    Localized spectral analysis on the sphere

    GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 3 2005
    Mark A. Wieczorek
    SUMMARY It is often advantageous to investigate the relationship between two geophysical data sets in the spectral domain by calculating admittance and coherence functions. While there exist powerful Cartesian windowing techniques to estimate spatially localized (cross-)spectral properties, the inherent sphericity of planetary bodies sometimes necessitates an approach based in spherical coordinates. Direct localized spectral estimates on the sphere can be obtained by tapering, or multiplying the data by a suitable windowing function, and expanding the resultant field in spherical harmonics. The localization of a window in space and its spectral bandlimitation jointly determine the quality of the spatiospectral estimation. Two kinds of axisymmetric windows are here constructed that are ideally suited to this purpose: bandlimited functions that maximize their spatial energy within a cap of angular radius θ0, and spacelimited functions that maximize their spectral power within a spherical harmonic bandwidth L. Both concentration criteria yield an eigenvalue problem that is solved by an orthogonal family of data tapers, and the properties of these windows depend almost entirely upon the space–bandwidth product N0 = (L + 1)θ0/π. The first N0 − 1 windows are near perfectly concentrated, and the best-concentrated window approaches a lower bound imposed by a spherical uncertainty principle. In order to make robust localized estimates of the admittance and coherence spectra between two fields on the sphere, we propose a method analogous to Cartesian multitaper spectral analysis that uses our optimally concentrated data tapers. We show that the expectation of localized (cross-)power spectra calculated using our data tapers is nearly unbiased for stochastic processes when the input spectrum is white and when averages are made over all possible realizations of the random variables. In physical situations, only one realization of such a process will be available, but in this case, a weighted average of the spectra obtained using multiple data tapers well approximates the expected spectrum. While developed primarily to solve problems in planetary science, our method has applications in all areas of science that investigate spatiospectral relationships between data fields defined on a sphere. [source]
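    A one-line worked example of the space–bandwidth product quoted above; the chosen bandwidth and cap radius are arbitrary.

```python
import math

def space_bandwidth_product(L, theta0_deg):
    """N0 = (L + 1) * theta0 / pi for a cap of angular radius theta0 and
    spherical-harmonic bandwidth L; roughly the first N0 - 1 tapers are
    near perfectly concentrated."""
    return (L + 1) * math.radians(theta0_deg) / math.pi

print(space_bandwidth_product(L=40, theta0_deg=30))  # about 6.8
```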


    Errors in technological systems

    HUMAN FACTORS AND ERGONOMICS IN MANUFACTURING & SERVICE INDUSTRIES, Issue 4 2003
    R.B. Duffey
    Massive data and experience exist on the rates and causes of errors and accidents in modern industrial and technological society. We have examined the available human record, and have shown the existence of learning curves, and that there is an attainable and discernible minimum or asymptotic lower bound for error rates. The major common contributor is human error, including in the operation, design, manufacturing, procedures, training, maintenance, management, and safety methodologies adopted for technological systems. To analyze error and accident rates in many diverse industries and activities, we used a combined empirical and theoretical approach. We examine the national and international reported error, incident and fatal accident rates for multiple modern technologies, including shipping losses, industrial injuries, automobile fatalities, aircraft events and fatal crashes, chemical industry accidents, train derailments and accidents, medical errors, nuclear events, and mining accidents. We selected national and worldwide data sets for time spans of up to ~200 years, covering many millions of errors in diverse technologies. We developed and adopted a new approach using the accumulated experience; thus, we show that all the data follow universal learning curves. The vast amounts of data collected and analyzed exhibit trends consistent with the existence of a minimum error rate, and follow failure rate theory. There are potential and key practical impacts for the management of technological systems, the regulatory practices for complex technological processes, the assignment of liability and blame, the assessment of risk, and for the reporting and prediction of errors and accident rates. The results are of fundamental importance to society as we adopt, manage, and use modern technology. © 2003 Wiley Periodicals, Inc. Hum Factors Man 13: 279–291, 2003. [source]
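    The sketch below fits an illustrative exponential decay toward an asymptotic minimum error rate, the qualitative behaviour the abstract describes; the functional form, data, and parameter values are assumptions, not the paper's model.

```python
import numpy as np
from scipy.optimize import curve_fit

def learning_curve(experience, e_min, e0, k):
    """Error rate decaying with accumulated experience toward a floor e_min."""
    return e_min + (e0 - e_min) * np.exp(-experience / k)

# synthetic accumulated-experience and error-rate data (illustrative only)
exp_acc = np.array([1, 2, 5, 10, 20, 50, 100], dtype=float)
rates = np.array([0.90, 0.75, 0.48, 0.30, 0.18, 0.11, 0.10])

params, _ = curve_fit(learning_curve, exp_acc, rates, p0=[0.05, 1.0, 10.0])
print("estimated asymptotic minimum error rate:", params[0])
```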


    No pharmacokinetic interaction between paliperidone extended-release tablets and trimethoprim in healthy subjects,

    HUMAN PSYCHOPHARMACOLOGY: CLINICAL AND EXPERIMENTAL, Issue 7 2009
    An Thyssen
    Abstract Objective The effect of trimethoprim, a potent organic cation transport inhibitor, on the pharmacokinetics (PK) of paliperidone extended-release tablets (paliperidone ER), an organic cation mainly eliminated via renal excretion, was assessed. Methods Open-label, two-period, randomized, crossover study in 30 healthy males. A single dose of paliperidone ER 6 mg was administered either alone on day 1, or on day 5 of an 8-day treatment period with trimethoprim 200 mg twice daily. Serial blood and urine samples were collected for PK and plasma protein binding of paliperidone and its enantiomers. The 90% confidence intervals (CI) of the ratios with/without trimethoprim for the PK parameters of paliperidone and its enantiomers were calculated. Results Creatinine clearance decreased from 119 to 102 mL min−1 with trimethoprim. Addition of trimethoprim increased the unbound fraction of paliperidone by 16%, renal clearance by 13%, AUC∞ by 9%, and t½ by 19%. The 90% CIs for ratios with/without trimethoprim were within the 80–125% range for Cmax, AUClast, and renal clearance. For AUC∞, the 90% CI was 79.37–101.51%, marginally below the lower bound of the acceptance range. Paliperidone did not affect steady-state plasma concentrations of trimethoprim. Conclusions No clinically important drug interactions are expected when paliperidone ER is administered with organic cation transport inhibitors. Copyright © 2009 John Wiley & Sons, Ltd. [source]
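    For readers unfamiliar with the 80–125% acceptance range used above, the sketch below computes a 90% CI for a geometric-mean ratio from paired log-transformed data and checks it against that range. It is a simplified illustration, not the study's crossover analysis, and the data are invented.

```python
import numpy as np
from scipy import stats

def ratio_90ci_percent(test, ref):
    """90% CI for the geometric-mean ratio test/reference, from paired
    log-transformed values, expressed in percent."""
    d = np.log(np.asarray(test, float)) - np.log(np.asarray(ref, float))
    mean, sem = d.mean(), d.std(ddof=1) / np.sqrt(d.size)
    lo, hi = stats.t.interval(0.90, df=d.size - 1, loc=mean, scale=sem)
    return np.exp(lo) * 100.0, np.exp(hi) * 100.0

lo, hi = ratio_90ci_percent(test=[92, 105, 88, 99, 110],
                            ref=[100, 98, 95, 103, 107])
print(round(lo, 1), round(hi, 1), "within 80-125%:", lo >= 80 and hi <= 125)
```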


    Non-random reassortment in human influenza A viruses

    INFLUENZA AND OTHER RESPIRATORY VIRUSES, Issue 1 2008
    Raul Rabadan
    Background: The influenza A virus has two basic modes of evolution. Because of a high error rate in the process of replication by RNA polymerase, the viral genome drifts via accumulated mutations. The second mode of evolution is termed a shift, which results from the reassortment of the eight segments of this virus. When two different influenza viruses co-infect the same host cell, new virions can be released that contain segments from both parental strains. This type of shift has been the source of at least two of the influenza pandemics in the 20th century (H2N2 in 1957 and H3N2 in 1968). Objectives: The methods to measure these genetic shifts have not yet provided a quantitative answer to questions such as: what is the rate of genetic reassortment during a local epidemic? Are all possible reassortments equally likely or are there preferred patterns? Methods: To answer these questions and provide a quantitative way to measure genetic shifts, a new method for detecting reassortments from nucleotide sequence data was created that does not rely upon phylogenetic analysis. Two different sequence databases were used: human H3N2 viruses isolated in New York State between 1995 and 2006, and human H3N2 viruses isolated in New Zealand between 2000 and 2005. Results: Using this new method, we were able to reproduce all the reassortments found in earlier works, as well as detect, with very high confidence, many reassortments that were not detected by previous authors. We obtain a lower bound on the reassortment rate of 2–3 events per year, and find a clear preference for reassortments involving only one segment, most often hemagglutinin or neuraminidase. At a lower frequency several segments appear to reassort in vivo in defined groups as has been suggested previously in vitro. Conclusions: Our results strongly suggest that the patterns of reassortment in the viral population are not random. Deciphering these patterns can be a useful tool in attempting to understand and predict possible influenza pandemics. [source]


    Piece Rates, Fixed Wages, and Incentive Effects: Statistical Evidence from Payroll Records

    INTERNATIONAL ECONOMIC REVIEW, Issue 1 2000
    Harry J. Paarsch
    We develop and estimate an agency model of worker behavior under piece rates and fixed wages. The model implies optimal decision rules for the firm's choice of a compensation system as a function of working conditions. Our model also implies an upper and lower bound to the incentive effect (the productivity gain realized by paying workers piece rates rather than fixed wages) that can be estimated using regression methods. Using daily productivity data collected from the payroll records of a British Columbia tree-planting firm, we estimate these bounds to be an 8.8 and a 60.4 percent increase in productivity. Structural estimation, which accounts for the firm's optimal choice of a compensation system, suggests that incentives caused a 22.6 percent increase in productivity. However, only part of this increase represents valuable output because workers respond to incentives, in part, by reducing quality. [source]


    Certified solutions for hydraulic structures using the node-based smoothed point interpolation method (NS-PIM)

    INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 15 2010
    J. Cheng
    Abstract A meshfree node-based smoothed point interpolation method (NS-PIM), which has been recently developed for solid mechanics problems, is applied to obtain certified solutions with bounds for hydraulic structure designs. In this approach, shape functions for displacements are constructed using the point interpolation method (PIM), and the shape functions possess the Kronecker delta property and permit the straightforward enforcement of essential boundary conditions. The generalized smoothed Galerkin weak form is then applied to construct discretized system equations using the node-based smoothed strains. As a very novel and important property, the approach can obtain the upper bound solution in energy norm for hydraulic structures. A 2D gravity dam problem and a 3D arch dam problem are solved, respectively, using the NS-PIM and the simulation results of NS-PIM are found to be the upper bounds. Together with standard fully compatible FEM results as a lower bound, we have successfully determined the solution bounds to certify the accuracy of numerical solutions. This confirms that the NS-PIM is very useful for producing certified solutions for the analysis of huge hydraulic structures. Copyright © 2009 John Wiley & Sons, Ltd. [source]
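    A tiny sketch of how the two bounds certify a solution: the exact strain energy is bracketed between the compatible-FEM (lower) and NS-PIM (upper) values, so the midpoint carries an error bound of half the gap. The numbers below are invented.

```python
def certified_energy(e_fem_lower, e_nspim_upper):
    """Bracket the exact strain energy between a compatible-FEM solution
    (lower bound) and an NS-PIM solution (upper bound); the midpoint is in
    error by at most half the bound gap."""
    assert e_fem_lower <= e_nspim_upper
    mid = 0.5 * (e_fem_lower + e_nspim_upper)
    half_gap = 0.5 * (e_nspim_upper - e_fem_lower)
    return mid, half_gap

estimate, error_bound = certified_energy(1.982, 2.014)  # illustrative values
print(estimate, "+/-", error_bound)
```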


    A new fast hybrid adaptive grid generation technique for arbitrary two-dimensional domains

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 3 2010
    Mohamed S. Ebeida
    Abstract This paper describes a new fast hybrid adaptive grid generation technique for arbitrary two-dimensional domains. This technique is based on a Cartesian background grid with square elements and quadtree decomposition. A new algorithm is introduced for the distribution of boundary points based on the curvature of the domain boundaries. The quadtree decomposition is governed either by the distribution of the boundary points or by a size function when a solution-based adaptive grid is desired. The resulting grid is quad-dominant and ready for the application of finite element, multi-grid, or line-relaxation methods. All the internal angles in the final grid have a lower bound of 45° and an upper bound of 135°. Although our main interest is in grid generation for unsteady flow simulations, the technique presented in this paper can be employed in many other fields. Several application examples are provided to illustrate the main features of this new approach. Copyright © 2010 John Wiley & Sons, Ltd. [source]
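    A crude sketch (not the paper's algorithm) of size-function-driven quadtree decomposition; the size function is arbitrary, and the balancing step that enforces the 45°–135° angle bounds is omitted.

```python
def refine(x, y, size, target_size, leaves, depth=0, max_depth=12):
    """Quadtree decomposition of the square [x, x+size] x [y, y+size]:
    split a cell until it meets the local target size (a stand-in for the
    boundary-point / size-function criterion; 2:1 balancing is omitted)."""
    if depth == max_depth or size <= target_size(x + size / 2, y + size / 2):
        leaves.append((x, y, size))
        return
    half = size / 2
    for dx in (0.0, half):
        for dy in (0.0, half):
            refine(x + dx, y + dy, half, target_size, leaves, depth + 1, max_depth)

leaves = []
# finer cells near the origin, coarser cells away from it (toy size function)
refine(0.0, 0.0, 1.0, lambda px, py: 0.05 + 0.3 * (px + py), leaves)
print(len(leaves), "leaf cells")
```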


    A novel singular node-based smoothed finite element method (NS-FEM) for upper bound solutions of fracture problems

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 11 2010
    G. R. Liu
    Abstract It is well known that the lower bound to exact solutions in linear fracture problems can be easily obtained by the displacement compatible finite element method (FEM) together with the singular crack tip elements. It is, however, much more difficult to obtain the upper bound solutions for these problems. This paper aims to formulate a novel singular node-based smoothed finite element method (NS-FEM) to obtain the upper bound solutions for fracture problems. In the present singular NS-FEM, the calculation of the system stiffness matrix is performed using the strain smoothing technique over the smoothing domains (SDs) associated with nodes, which leads to the line integrations using only the shape function values along the boundaries of the SDs. A five-node singular crack tip element is used within the framework of NS-FEM to construct singular shape functions via direct point interpolation with proper order of fractional basis. The mix-mode stress intensity factors are evaluated using the domain forms of the interaction integrals. The upper bound solutions of the present singular NS-FEM are demonstrated via benchmark examples for a wide range of material combinations and boundary conditions. Copyright © 2010 John Wiley & Sons, Ltd. [source]


    Limit analysis and convex programming: A decomposition approach of the kinematic mixed method

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 3 2009
    Franck Pastor
    Abstract This paper proposes an original decomposition approach to the upper bound method of limit analysis. It is based on a mixed finite element approach and on a convex interior point solver using linear or quadratic discontinuous velocity fields. Presented in plane strain, this method appears to be rapidly convergent, as verified in the Tresca compressed bar problem in the linear velocity case. Then, using discontinuous quadratic velocity fields, the method is applied to the celebrated problem of the stability factor of a Tresca vertical slope: the upper bound is lowered to 3.7776, a value to be compared with the best published lower bound of 3.772, by succeeding in solving non-linear optimization problems with millions of variables and constraints. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    Upper and lower bounds in limit analysis: Adaptive meshing strategies and discontinuous loading

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 4 2009
    J. J. Muñoz
    Abstract Upper and lower bounds of the collapse load factor are here obtained as the optimum values of two discrete constrained optimization problems. The membership constraints for von Mises and Mohr–Coulomb plasticity criteria are written as a set of quadratic constraints, which permits one to solve the optimization problem using specific algorithms for Second-Order Conic Program (SOCP). From the stress field at the lower bound and the velocities at the upper bound, we construct a novel error estimate based on elemental and edge contributions to the bound gap. These contributions are employed in an adaptive remeshing strategy that is able to reproduce fan-type mesh patterns around points with discontinuous surface loading. The solution of this type of problems is analysed in detail, and from this study some additional meshing strategies are also described. We particularise the resulting formulation and strategies to two-dimensional problems in plane strain and we demonstrate the effectiveness of the method with a set of numerical examples extracted from the literature. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    Theory and finite element computation of cyclic martensitic phase transformation at finite strain

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 1 2008
    Erwin Stein
    Abstract A generalized variational formulation, including quasi-convexification of energy wells for arbitrarily many martensitic variants in case of mono-crystals for linearized strains, was developed by Govindjee and Miehe (Comp. Meth. Appl. Mech. Eng. 2001; 191(3–5):215–238) and computationally extended by Stein and Zwickert (Comput. Mech. 2006; in press). This work is generalized here for finite strain kinematics with monotonous hyperelastic stress–strain functions in order to account for large transformation strains that can reach up to 15%. A major theoretical and numerical difficulty herein is the convexification of the finite deformation phase transformation (PT) problems for multiple phase variants, n ≥ 2. A lower bound of the mixing energy is provided by the Reuss bound in case of linear kinematics and an arbitrary number of variants, shown by Govindjee et al. (J. Mech. Phys. Solids 2003; 51(4):I–XXVI). In case of finite strains, a generalized representation of free energy of mixing is introduced for a quasi-Reuss bound, which in general holds for n ≥ 2. Numerical validation of the used micro–macro material model is presented by comparing verified numerical results with the experimental data for Cu82Al14Ni4 monocrystals for quasiplastic PT, provided by Xiangyang et al. (J. Mech. Phys. Solids 2000; 48:2163–2182). The zigzag-type experimental stress–strain curve within PT at loading, called 'yield tooth', is approximated within the finite element analysis by a smoothly decreasing and then increasing axial stress, which could not be achieved with linearized kinematics yielding a constant axial stress during PT. Copyright © 2007 John Wiley & Sons, Ltd. [source]


    Variational h-adaption in finite deformation elasticity and plasticity

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 5 2007
    J. Mosler
    Abstract We propose a variational h-adaption strategy in which the evolution of the mesh is driven directly by the governing minimum principle. This minimum principle is the principle of minimum potential energy in the case of elastostatics, and a minimum principle for the incremental static problem in the case of elasto-viscoplasticity. In particular, the mesh is refined locally when the resulting energy or incremental pseudo-energy released exceeds a certain threshold value. In order to avoid global recomputes, we estimate the local energy released by mesh refinement by means of a lower bound obtained by relaxing a local patch of elements. This bound can be computed locally, which reduces the complexity of the refinement algorithm to O(N). We also demonstrate how variational h-refinement can be combined with variational r-refinement to obtain a variational hr-refinement algorithm. Because of the strict variational nature of the h-refinement algorithm, the resulting meshes are anisotropic and outperform other refinement strategies based on aspect ratio or other purely geometrical measures of mesh quality. The versatility and rate of convergence of the resulting approach are illustrated by means of selected numerical examples. Copyright © 2007 John Wiley & Sons, Ltd. [source]


    Adaptive control of Burgers' equation with unknown viscosity

    INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 7 2001
    Wei-Jiu Liu
    Abstract In this paper, we propose a fortified boundary control law and an adaptation law for Burgers' equation with unknown viscosity, where no a priori knowledge of a lower bound on viscosity is needed. This control law is decentralized, i.e., implementable without the need for central computer and wiring. Using the Lyapunov method, we prove that the closed-loop system, including the parameter estimator as a dynamic component, is globally H1 stable and well posed. Furthermore, we show that the state of the system is regulated to zero by developing an alternative to Barbalat's Lemma which cannot be used in the present situation. Copyright © 2001 John Wiley & Sons, Ltd. [source]


    Kalman filter-based channel estimation and ICI suppression for high-mobility OFDM systems

    INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 10 2008
    Prerana Gupta
    Abstract The use of orthogonal frequency division multiplexing (OFDM) in frequency-selective fading environments has been well explored. However, OFDM is more prone to time-selective fading compared with single-carrier systems. Rapid time variations destroy the subcarrier orthogonality and introduce inter-carrier interference (ICI). Besides this, obtaining reliable channel estimates for receiver equalization is a non-trivial task in rapidly fading systems. Our work addresses the problem of channel estimation and ICI suppression by viewing the system as a state-space model. The Kalman filter is employed to estimate the channel; this is followed by a time-domain ICI mitigation filter that maximizes the signal-to-interference plus noise ratio (SINR) at the receiver. This method is seen to provide good estimation performance apart from significant SINR gain with low training overhead. Suitable bounds on the performance of the system are described; bit error rate (BER) performance over a time-invariant Rayleigh fading channel serves as the lower bound, whereas BER performance over a doubly selective system with ICI as the dominant impairment provides the upper bound. Copyright © 2008 John Wiley & Sons, Ltd. [source]
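    A toy real-valued sketch of the state-space view described above: a Kalman filter tracking a channel vector under a first-order Gauss-Markov model from scalar pilot observations. All model parameters and data are invented, and the paper's ICI-suppression filter is not included.

```python
import numpy as np

def kalman_track(pilots, observations, a=0.99, q=1e-4, r=1e-2):
    """Track a slowly varying channel vector h[k] from pilot observations
    y[k] = x[k] . h[k] + noise, under a first-order Gauss-Markov model
    h[k+1] = a h[k] + w[k] (real-valued toy version)."""
    n = pilots.shape[1]
    h = np.zeros(n)
    P = np.eye(n)
    estimates = []
    for x, y in zip(pilots, observations):
        # predict
        h = a * h
        P = (a * a) * P + q * np.eye(n)
        # update with the scalar observation y = x . h + v,  var(v) = r
        S = x @ P @ x + r          # innovation variance
        K = P @ x / S              # Kalman gain
        h = h + K * (y - x @ h)
        P = P - np.outer(K, x @ P)
        estimates.append(h.copy())
    return np.array(estimates)

# toy run: 200 random pilot vectors observing a 4-tap channel
rng = np.random.default_rng(0)
true_h = np.array([1.0, 0.5, -0.3, 0.1])
pilots = rng.standard_normal((200, 4))
obs = pilots @ true_h + 0.1 * rng.standard_normal(200)
print(kalman_track(pilots, obs)[-1])   # roughly recovers true_h
```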


    Price competition under universal service obligations

    INTERNATIONAL JOURNAL OF ECONOMIC THEORY, Issue 3 2010
    Axel Gautier
    JEL classification: L13; L51. In industries like telecom, postal services or energy provision, universal service obligations (uniform price and universal coverage) are often imposed on one market participant. Universal service obligations are likely to alter firms' strategic behavior in such competitive markets. In the present paper, we show that, depending on the entrant's market coverage and the degree of product differentiation, the Nash equilibrium in prices involves either pure or mixed strategies. We show that the pure strategy market sharing equilibrium, as identified by Valletti, Hoernig, and Barros (2002), defines a lower bound on the level of equilibrium prices. [source]


    Structure of the optimal income tax in the quasi-linear model

    INTERNATIONAL JOURNAL OF ECONOMIC THEORY, Issue 1 2007
    Nigar Hashimzade
    JEL classification: H21; H24. Existing numerical characterizations of the optimal income tax have been based on a limited number of model specifications. As a result, they do not reveal which properties are general. We determine the optimal tax in the quasi-linear model under weaker assumptions than have previously been used; in particular, we remove the assumption of a lower bound on the utility of zero consumption and the need to permit negative labor incomes. A Monte Carlo analysis is then conducted in which economies are selected at random and the optimal tax function constructed. The results show that in a significant proportion of economies the marginal tax rate rises at low skills and falls at high. The average tax rate is equally likely to rise or fall with skill at low skill levels, rises in the majority of cases in the centre of the skill range, and falls at high skills. These results are consistent across all the specifications we test. We then extend the analysis to show that these results also hold for Cobb-Douglas utility. [source]


    The persistence in international real interest rates

    INTERNATIONAL JOURNAL OF FINANCE & ECONOMICS, Issue 4 2004
    David E. Rapach
    Abstract In this paper, we investigate the degree of persistence in quarterly postwar tax-adjusted ex post real interest rates for 13 industrialized countries using two recently developed econometric procedures. Our results show that international tax-adjusted real interest rates are typically very persistent, with the lower bound of the 95% confidence interval for the sum of the autoregressive coefficients very close to 0.90 for nearly every country. A highly persistent real interest rate has important theoretical implications. Copyright © 2004 John Wiley & Sons, Ltd. [source]
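    The sketch below estimates the sum of AR coefficients and a delta-method confidence interval by plain OLS, to illustrate the persistence measure discussed above; it is not the median-unbiased procedures used in the paper, and the series is synthetic.

```python
import numpy as np

def ar_sum_ci(y, p=4, z=1.96):
    """OLS estimate of the sum of AR(p) coefficients for a series y, with a
    delta-method 95% CI (a plain OLS illustration only)."""
    y = np.asarray(y, float)
    X = np.column_stack([np.ones(y.size - p)] +
                        [y[p - i - 1:y.size - i - 1] for i in range(p)])
    Y = y[p:]
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    sigma2 = resid @ resid / (Y.size - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    w = np.r_[0.0, np.ones(p)]            # selects the AR coefficients
    s = beta[1:].sum()
    se = np.sqrt(w @ cov @ w)
    return s - z * se, s, s + z * se

rng = np.random.default_rng(1)
e = rng.standard_normal(400)
y = np.zeros(400)
for t in range(1, 400):
    y[t] = 0.95 * y[t - 1] + e[t]         # persistent AR(1) as a toy series
print(ar_sum_ci(y))                       # (lower 95% bound, estimate, upper)
```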


    Using matching distance in size theory: A survey

    INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 5 2006
    Michele d'Amico
    Abstract In this survey we illustrate how the matching distance between reduced size functions can be applied for shape comparison. We assume that each shape can be thought of as a compact connected manifold with a real continuous function defined on it, that is, a pair (M, φ: M → R), called a size pair. In some sense, the function φ focuses on the properties and the invariance of the problem at hand. In this context, matching two size pairs (M, φ) and (N, ψ) means looking for a homeomorphism between M and N that minimizes the difference of values taken by φ and ψ on the two manifolds. Measuring the dissimilarity between two shapes amounts to the difficult task of computing the value δ = inf_f max_{P ∈ M} |φ(P) − ψ(f(P))|, where f varies among all the homeomorphisms from M to N. From another point of view, shapes can be described by reduced size functions associated with size pairs. The matching distance between reduced size functions allows for a comparison of shapes that is robust to perturbations. The link between reduced size functions and the dissimilarity measure δ is established by a theorem stating that the matching distance provides an easily computable lower bound for δ. Throughout this paper we illustrate this approach to shape comparison by means of examples and experiments. © 2007 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 16, 154–161, 2006 [source]


    Solution of fuzzy matrix games: An application of the extension principle

    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 8 2007
    Shiang-Tai Liu
    Conventional game theory is concerned with how rational individuals make decisions when they are faced with known payoffs. This article develops a solution method for the two-person zero-sum game where the payoffs are only approximately known and can be represented by fuzzy numbers. Because the payoffs are fuzzy, the value of the game is fuzzy as well. Based on the extension principle, a pair of two-level mathematical programs is formulated to obtain the upper bound and lower bound of the value of the game at possibility level α. By applying a dual formulation and a variable substitution technique, the pair of two-level mathematical programs is transformed to a pair of ordinary one-level linear programs so they can be manipulated. From different values of α, the membership function of the fuzzy value of the game is constructed. It is shown that the two players have the same fuzzy value of the game. An example illustrates the whole idea of a fuzzy matrix game. © 2007 Wiley Periodicals, Inc. Int J Int Syst 22: 891–903, 2007. [source]
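    One simple way to see where the bounds come from (not necessarily the paper's dual-formulation route): the crisp game value is monotone in the payoffs, so at a given possibility level the lower and upper bounds of the fuzzy value can be obtained by solving ordinary LPs at the endpoints of the entry-wise α-cuts. The payoff intervals below are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def game_value(A):
    """Value of the zero-sum game with payoff matrix A (row player maximizes),
    via the standard LP: max v s.t. A^T p >= v, sum(p) = 1, p >= 0."""
    m, n = A.shape
    c = np.r_[np.zeros(m), -1.0]            # variables: p (m entries), v
    A_ub = np.c_[-A.T, np.ones(n)]          # v - (A^T p)_j <= 0 for each column j
    b_ub = np.zeros(n)
    A_eq = np.c_[np.ones((1, m)), 0.0]      # probabilities sum to one
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * m + [(None, None)])
    return -res.fun

def fuzzy_game_alpha_cut(A_lo, A_hi):
    """Lower/upper bounds of the fuzzy game value at one possibility level,
    assuming entry-wise alpha-cuts [A_lo, A_hi]; monotonicity of the game
    value in the payoffs puts the bounds at the interval endpoints."""
    return game_value(np.asarray(A_lo)), game_value(np.asarray(A_hi))

print(fuzzy_game_alpha_cut([[2, -1], [-1, 1]], [[3, 0], [0, 2]]))  # about (0.2, 1.2)
```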


    Towards an automated deduction system for first-order possibilistic logic programming with fuzzy constants

    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 9 2002
    Teresa Alsinet
    In this article, we present a first-order logic programming language for fuzzy reasoning under possibilistic uncertainty and poorly known information. Formulas are represented by a pair (φ, α), in which φ is a first-order Horn clause or a query with fuzzy constants and regular predicates, and α ∈ [0, 1] is a lower bound on the belief on φ in terms of necessity measures. Since fuzzy constants can occur in the logic component of formulas, the truth value of formulas is many-valued instead of Boolean. Moreover, since we have to reason about the possibilistic uncertainty of formulas with fuzzy constants, belief states are modeled by normalized possibility distributions on a set of many-valued interpretations. In this framework, (1) we define a syntax and a semantics of the underlying logic; (2) we give a sound modus ponens-style calculus by derivation based on a semantic unification pattern of fuzzy constants; (3) we develop a directional fuzzy unification algorithm based on the distinction between general and specific object constants; and (4) we describe a backward first-order proof procedure oriented to queries that is based on the calculus of the language and the computation of the unification degree between fuzzy constants in terms of a necessity measure for fuzzy events. © 2002 Wiley Periodicals, Inc. [source]


    Approximate lower bounds of the Weinstein and Temple variety

    INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, Issue 6 2007
    M. G. Marmorino
    Abstract By using the Weinstein interval or coupling the Temple lower bound to a variational upper bound one can in principle construct an error bar about the ground-state energy of an electronic system. Unfortunately there are theoretical and calculational issues which complicate this endeavor so that at best only an upper bound to the electronic energy has been practical in systems with more than a few electrons. The calculational issue is the complexity of ⟨H²⟩, which is necessary in the Temple or Weinstein approach. In this work we provide a way to approximate ⟨H²⟩ to any desired accuracy using much simpler ⟨H⟩-like information so that the lower bound calculations are more practical. The helium atom is used as a testing ground in which we obtain approximate error bars for the ground-state energy of [−2.904230, −2.903721] hartree using the variational energy with the Temple lower bound and [−2.919098, −2.888344] hartree for the Weinstein interval. For comparison, the slightly larger error bars using the exact value of ⟨H²⟩ are: [−2.904358, −2.903721] hartree and [−2.919765, −2.887677] hartree, respectively. © 2006 Wiley Periodicals, Inc. Int J Quantum Chem, 2007 [source]
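    For reference, the Weinstein interval and Temple lower bound mentioned above have simple closed forms in terms of ⟨H⟩, ⟨H²⟩, and a lower bound E1 to the first excited energy; the sketch below evaluates them on illustrative numbers (not the paper's helium values).

```python
import math

def weinstein_interval(h1, h2):
    """Weinstein bracket [<H> - s, <H> + s], with s = sqrt(<H^2> - <H>^2)."""
    s = math.sqrt(h2 - h1 * h1)
    return h1 - s, h1 + s

def temple_lower_bound(h1, h2, e1):
    """Temple lower bound E0 >= <H> - (<H^2> - <H>^2)/(e1 - <H>),
    valid when <H> lies below e1 (a lower bound to the first excited energy)."""
    assert h1 < e1
    return h1 - (h2 - h1 * h1) / (e1 - h1)

# illustrative numbers only
h1, h2, e1 = -2.9036, 8.4317, -2.1458
print(weinstein_interval(h1, h2))
print(temple_lower_bound(h1, h2, e1))
```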


    Signal reconstruction in the presence of finite-rate measurements: finite-horizon control applications

    INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 1 2010
    Sridevi V. Sarma
    Abstract In this paper, we study finite-length signal reconstruction over a finite-rate noiseless channel. We allow the class of signals to belong to a bounded ellipsoid and derive a universal lower bound on a worst-case reconstruction error. We then compute upper bounds on the error that arise from different coding schemes and under different causality assumptions. When the encoder and decoder are noncausal, we derive an upper bound that either achieves the universal lower bound or is comparable to it. When the decoder and encoder are both causal operators, we show that within a very broad class of causal coding schemes, memoryless coding prevails as optimal, imposing a hard limitation on reconstruction. Finally, we map our general reconstruction problem into two important control problems in which the plant and controller are local to each other, but are together driven by a remote reference signal that is transmitted through a finite-rate noiseless channel. The first problem is to minimize a finite-horizon weighted tracking error between the remote system output and a reference command. The second problem is to navigate the state of the remote system from a nonzero initial condition to as close to the origin as possible in finite-time. Our analysis enables us to quantify the tradeoff between time horizon and performance accuracy, which is not well studied in the area of control with limited information as most works address infinite-horizon control objectives (e.g. stability, disturbance rejection). Copyright © 2009 John Wiley & Sons, Ltd. [source]


    Tree search algorithm for assigning cooperating UAVs to multiple tasks

    INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 2 2008
    Steven J. Rasmussen
    Abstract This paper describes a tree search algorithm for assigning cooperating homogeneous uninhabited aerial vehicles to multiple tasks. The combinatorial optimization problem is posed in the form of a decision tree, the structure of which enforces the required group coordination and precedence for cooperatively performing the multiple tasks. For path planning, a Dubins car model is used so that the vehicles' minimum-turning-radius constraint is taken into account. Due to the prohibitive computational complexity of the problem, exhaustive enumeration of all the assignments encoded in the tree is not feasible. The proposed optimization algorithm is initialized by a best-first search, and candidate optimal solutions serve as a monotonically decreasing upper bound for the assignment cost. Euclidean distances are used for estimating the path length encoded in branches of the tree that have not yet been evaluated by the computationally intensive Dubins optimization subroutine. This provides a lower bound for the cost of unevaluated assignments. We apply these upper and lower bounding procedures iteratively on active subsets within the feasible set, enabling efficient pruning of the solution tree. Using Monte Carlo simulations, the performance of the search algorithm is analyzed for two different cost functions and different limits on the vehicles' minimum turn radius. It is shown that the selection of the cost function and the limit have a considerable effect on the level of cooperation between the vehicles. The proposed deterministic search method can be applied on line to different sized problems. For small-sized problems, it provides the optimal solution. For large-sized problems, it provides an immediate feasible solution that improves as the algorithm's run time increases. When the proposed method is applied off line, it can be used to obtain the optimal solution, which can be used to evaluate the performance of other sub-optimal search methods. Copyright © 2007 John Wiley & Sons, Ltd. [source]
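    A generic best-first branch-and-bound sketch in the spirit of the search described above: incumbent leaves tighten an upper bound, and nodes whose (admissible) lower bound exceeds it are pruned. The Dubins path evaluation and the UAV task structure are not modelled; the toy example uses straight-line distances only.

```python
import heapq
import itertools
import math

def branch_and_bound(root, children, cost, lower_bound):
    """Best-first branch and bound.  `lower_bound(node)` must never
    overestimate the cost of the best completion of `node`; incumbent leaves
    give a monotonically decreasing upper bound used to prune the tree."""
    best_cost, best = float("inf"), None
    counter = itertools.count()                    # tie-breaker for the heap
    heap = [(lower_bound(root), next(counter), root)]
    while heap:
        lb, _, node = heapq.heappop(heap)
        if lb >= best_cost:
            continue                               # prune: cannot beat incumbent
        kids = children(node)
        if not kids:                               # leaf = complete assignment
            c = cost(node)
            if c < best_cost:
                best_cost, best = c, node
            continue
        for child in kids:
            clb = lower_bound(child)
            if clb < best_cost:
                heapq.heappush(heap, (clb, next(counter), child))
    return best, best_cost

# toy use: pick a visiting order for two targets starting from the origin,
# with path-so-far plus nearest remaining straight-line distance as the bound
targets = [(3.0, 4.0), (6.0, 8.0)]

def children(seq):
    if len(seq) == len(targets):
        return []
    return [seq + (i,) for i in range(len(targets)) if i not in seq]

def cost(seq):
    pos, total = (0.0, 0.0), 0.0
    for i in seq:
        total += math.dist(pos, targets[i])
        pos = targets[i]
    return total

def lower_bound(seq):
    pos = targets[seq[-1]] if seq else (0.0, 0.0)
    rest = [math.dist(pos, targets[i]) for i in range(len(targets)) if i not in seq]
    return cost(seq) + (min(rest) if rest else 0.0)

print(branch_and_bound((), children, cost, lower_bound))  # ((0, 1), 10.0)
```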


    Development of a skew µ lower bound

    INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 11 2005
    Rod Holland
    Abstract Exploiting the structure of the NP-hard mixed µ problem yields a polynomial-time algorithm that approximates µ, usually with reasonable answers. When the problem is extended to the skew µ problem, an extension of the existing method to the skew µ formulation is required. The focus of this paper is to extend the µ lower bound derivation to the skew µ lower bound and show its direct computation by way of a power algorithm. Copyright © 2005 John Wiley & Sons, Ltd. [source]