Optimality

Kinds of Optimality

  • Pareto optimality

  • Terms modified by Optimality

  • optimality condition
  • optimality criterion
  • optimality models
  • optimality property
  • optimality theory

  • Selected Abstracts


    SUBSIDY IN LICENSING: OPTIMALITY AND WELFARE IMPLICATIONS

    THE MANCHESTER SCHOOL, Issue 3 2005
    CHUN-HSIUNG LIAO
    This paper shows that a subsidy can emerge naturally as part of the equilibrium strategy of the innovator of a cost-reducing innovation in a Cournot oligopoly when the innovator can use combinations of an upfront fee and a royalty. It is further shown that there are robust regions where social welfare is higher under subsidy-based licensing than under a regime in which licensing involving subsidy is not allowed. The analysis is carried out for both outsider and incumbent innovators. [source]


    Optimality for the linear quadratic non-Gaussian problem via the asymmetric Kalman filter

    INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 1 2004
    Rosario Romera
    Abstract In the linear non-Gaussian case, the classical solution of the linear quadratic Gaussian (LQG) control problem is known to provide the best solution in the class of linear transformations of the plant output if optimality refers to the classical least-squares minimization criterion. In this paper, the adaptive linear quadratic control problem is solved with optimality based on an asymmetric least-squares approach, which includes the least-squares criterion as a special case. Our main result gives explicit solutions to this optimal quadratic control problem for partially observable dynamic linear systems with asymmetric observation errors. The main difficulty is finding the optimal state estimate. For this purpose, an asymmetric version of the Kalman filter, based on asymmetric least-squares estimation, is used. We illustrate the applicability of our approach with numerical results. Copyright © 2004 John Wiley & Sons, Ltd. [source]
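
    A sketch of the asymmetric least-squares idea the abstract builds on: an expectile-type quadratic loss that weights positive and negative residuals differently (tau versus 1 - tau), fitted by iteratively reweighted least squares. This is the standard asymmetric least-squares criterion, not the paper's filter itself; the data-generating process below is invented for illustration.

```python
import numpy as np

def expectile_fit(X, y, tau=0.5, n_iter=100):
    """Asymmetric least squares (expectile) regression via iteratively
    reweighted least squares; tau = 0.5 recovers ordinary least squares."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS starting point
    for _ in range(n_iter):
        r = y - X @ beta
        w = np.where(r >= 0, tau, 1.0 - tau)      # asymmetric weights
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ y)
    return beta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=3, size=500)  # non-Gaussian noise
print(expectile_fit(X, y, tau=0.5))   # close to (1, 2)
print(expectile_fit(X, y, tau=0.9))   # upper expectile: larger intercept
```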


    Dynamic Optimality of Yield Curve Strategies

    INTERNATIONAL REVIEW OF FINANCE, Issue 1-2 2003
    Takao Kobayashi
    This paper formulates and analyzes a dynamic optimization problem of bond portfolios within Markovian Heath-Jarrow-Morton term structure models. In particular, we investigate optimal yield curve strategies analytically and numerically, and provide theoretical justification for a typical strategy recommended in practice for an expected change in the shape of the yield curve. In the numerical analysis, we utilize a new technique based on the asymptotic expansion approach in order to increase computational efficiency. [source]


    Estimation Optimality of Corrected AIC and Modified Cp in Linear Regression

    INTERNATIONAL STATISTICAL REVIEW, Issue 2 2006
    Simon L. Davies
    Summary Model selection criteria often arise by constructing unbiased or approximately unbiased estimators of measures known as expected overall discrepancies (Linhart & Zucchini, 1986, p. 19). Such measures quantify the disparity between the true model (i.e., the model which generated the observed data) and a fitted candidate model. For linear regression with normally distributed error terms, the "corrected" Akaike information criterion and the "modified" conceptual predictive statistic have been proposed as exactly unbiased estimators of their respective target discrepancies. We expand on previous work to additionally show that these criteria achieve minimum variance within the class of unbiased estimators. [source]
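
    For reference, a small sketch of the two classical criteria the paper refines, using the standard Gaussian linear-model formulas: the corrected AIC in its Hurvich-Tsai form and Mallows' Cp. The paper's exactly unbiased "modified" Cp is a small-sample variant of the classical Cp shown here and is not reproduced.

```python
import numpy as np

def aicc(X, y):
    """Corrected AIC for a Gaussian linear model (Hurvich-Tsai form):
    AICc = n log(RSS/n) + n (n + p) / (n - p - 2),
    where p counts the regression coefficients."""
    n, p = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    rss = float(np.sum((y - X @ beta) ** 2))
    return n * np.log(rss / n) + n * (n + p) / (n - p - 2)

def mallows_cp(X, y, sigma2_full):
    """Classical Mallows' Cp; sigma2_full is the error-variance estimate
    from the largest candidate model."""
    n, p = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    rss = float(np.sum((y - X @ beta) ** 2))
    return rss / sigma2_full - n + 2 * p

# Compare nested candidate models on simulated data.
rng = np.random.default_rng(0)
n = 40
Z = np.column_stack([np.ones(n)] + [rng.normal(size=n) for _ in range(5)])
y = Z[:, :3] @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)
resid_full = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
sigma2_full = float(np.sum(resid_full ** 2)) / (n - Z.shape[1])
for p in range(1, 6):
    print(p, round(aicc(Z[:, :p], y), 2), round(mallows_cp(Z[:, :p], y, sigma2_full), 2))
```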


    Optimality of greedy and sustainable policies in the management of renewable resources

    OPTIMAL CONTROL APPLICATIONS AND METHODS, Issue 1 2003
    A. Rapaport
    Abstract We consider a discrete-time model of renewable resources, which regenerate after a delay once harvested. We study the qualitative behaviour of harvesting policies that are optimal with respect to a discounted utility function over an infinite horizon. Using Bellman's equation, we derive analytically the conditions under which two types of policies (greedy and sustainable) are optimal, depending on the discount rate and the marginal utility. For this particular class of problems, we also show that the greedy policy is attractive in a certain sense. The techniques of proof rely on concavity, comparison of value functions, and Lyapunov-like functions. Copyright © 2003 John Wiley & Sons, Ltd. [source]
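
    A minimal value-iteration sketch of the kind of Bellman-equation analysis described above, under simplifying assumptions: immediate logistic regrowth in place of the paper's delayed regeneration, u(h) = sqrt(h) as the utility, and an illustrative discount factor.

```python
import numpy as np

S = np.linspace(0.0, 1.0, 201)                 # discretized stock grid
delta = 0.95                                   # discount factor (assumption)

def regrow(s):                                 # logistic regrowth (stand-in)
    return np.clip(s + 0.3 * s * (1.0 - s), 0.0, 1.0)

V = np.zeros_like(S)
for _ in range(1000):                          # value iteration on Bellman's equation
    V_new = np.empty_like(V)
    for i, s in enumerate(S):
        h = S[S <= s]                          # feasible harvest levels
        cont = np.interp(regrow(s - h), S, V)  # value of the remaining stock
        V_new[i] = np.max(np.sqrt(h) + delta * cont)   # u(h) = sqrt(h)
    if np.max(np.abs(V_new - V)) < 1e-9:
        break
    V = V_new

# Optimal harvest at full stock: interior ('sustainable') for patient
# agents; as delta shrinks the maximizer moves to full ('greedy') harvest.
h = S
print(h[np.argmax(np.sqrt(h) + delta * np.interp(regrow(1.0 - h), S, V))])
```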


    Photosynthetic Acclimation to Simultaneous and Interacting Environmental Stresses Along Natural Light Gradients: Optimality and Constraints

    PLANT BIOLOGY, Issue 3 2004
    Ü. Niinemets
    Abstract: There is a strong natural light gradient from the top to the bottom of plant canopies and along gap-understorey continua. Leaf structure and photosynthetic capacities change nearly proportionally along these gradients, leading to maximisation of whole-canopy photosynthesis. However, other environmental factors also vary within the light gradients in a correlative manner. Specifically, leaves exposed to higher irradiance suffer more severe heat, water, and photoinhibition stresses. Research in tree canopies and across gap-understorey gradients demonstrates that plants have a large potential to acclimate to interacting environmental limitations. The optimum temperature for photosynthetic electron transport increases with increasing growth irradiance in the canopy, improving the resistance of the photosynthetic apparatus to heat stress. Stomatal constraints on photosynthesis are also larger at higher irradiance because leaves under greater evaporative demand regulate water use more efficiently. Furthermore, upper-canopy leaves are more rigid and have lower leaf osmotic potentials, improving water extraction from drying soil. The current review highlights that such an array of complex interactions significantly modifies the potential and realized whole-canopy photosynthetic productivity, but also that the interactive effects cannot simply be predicted as composites of additive partial environmental stresses. We hypothesize that plant photosynthetic capacities deviate from the theoretical optimum values because of the interacting stresses in plant canopies and evolutionary trade-offs between leaf- and canopy-level plastic adjustments in light capture and use. [source]


    Construction and Optimality of a Special Class of Balanced Designs

    QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 5 2006
    Stefano Barone
    Abstract The use of balanced designs is generally advisable in experimental practice. In technological experiments, balanced designs optimize the exploitation of experimental resources, whereas in marketing research experiments they avoid erroneous conclusions caused by misinterpretation on the part of interviewed customers. In general, the balancing property assures the minimum variance of first-order effect estimates. In this work the authors consider situations in which all factors are categorical and minimum run size is required. In the symmetrical case, it is often possible to find an economical balanced design by algebraic methods. Conversely, in the asymmetrical case algebraic methods lead to expensive designs, and it is therefore necessary to adopt heuristic methods. The existing methods implemented in widespread statistical packages do not guarantee the balancing property, as they are designed to pursue other optimality criteria. To deal with this problem, the authors recently proposed a new method for generating balanced asymmetrical designs aimed at estimating first- and second-order effects. To reduce the run size as much as possible, orthogonality cannot be guaranteed; however, the method yields designs that approach orthogonality as closely as possible (near orthogonality). A collection of designs with two- and three-level factors and run sizes below 100 was prepared. In this work an empirical study was conducted to understand how much is lost in terms of other optimality criteria when balance is pursued. To show the potential applications of these designs, an illustrative example is provided. Copyright © 2006 John Wiley & Sons, Ltd. [source]


    On the Optimality of Restricting Credit: Inflation Avoidance and Productivity

    THE JAPANESE ECONOMIC REVIEW, Issue 3 2000
    Max Gillman
    The paper presents a model in which the consumer uses up resources in order to avoid the inflation tax through the use of exchange credit. In an example economy without capital, the credit tax is optimal when the resource loss from credit use dominates both the productivity effect and the inefficiency of substitution towards leisure that results from the credit tax. The paper also examines second-best inflation policy in this context, given a credit tax. It then extends the economy to an endogenous growth setting and shows how restricting inflation avoidance can increase productivity. JEL Classification Numbers: E13, E51, E61, G21. [source]


    Prenatal Antecedents of Newborn Neurological Maturation

    CHILD DEVELOPMENT, Issue 1 2010
    Janet A. DiPietro
    Fetal neurobehavioral development was modeled longitudinally using data collected at weekly intervals from 24 to 38 weeks gestation in a sample of 112 healthy pregnancies. Predictive associations of 3 measures of fetal neurobehavioral functioning, and their developmental trajectories, with neurological maturation in the first weeks after birth were examined. Prenatal measures included fetal heart rate (FHR) variability, fetal movement, and coupling between fetal motor activity and heart rate patterning; neonatal outcomes included a standard neurologic examination (n = 97) and brainstem auditory evoked potential (BAEP; n = 47). Optimality in newborn motor activity and reflexes was predicted by fetal motor activity, while FHR variability and somatic-cardiac coupling predicted BAEP parameters. Maternal pregnancy-specific psychological stress was associated with accelerated neurologic maturation. [source]


    PREPROCESSING RULES FOR TRIANGULATION OF PROBABILISTIC NETWORKS

    COMPUTATIONAL INTELLIGENCE, Issue 3 2005
    Hans L. Bodlaender
    Currently, the most efficient algorithm for inference with a probabilistic network builds upon a triangulation of the network's graph. In this paper, we show that preprocessing can help in finding good triangulations for probabilistic networks, that is, triangulations with a maximum clique size as small as possible. We provide a set of rules for stepwise reducing a graph without losing optimality. This reduction allows us to solve the triangulation problem on a smaller graph. From the smaller graph's triangulation, a triangulation of the original graph is obtained by reversing the reduction steps. Our experimental results show that the graphs of some well-known real-life probabilistic networks can be triangulated optimally just by preprocessing; for other networks, huge reductions in their graph's size are obtained. [source]
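
    One classic reduction rule of the kind the paper describes is the simplicial-vertex rule: a vertex whose neighbourhood already forms a clique can be removed without losing optimality, and the cliques encountered give a lower bound on the optimal maximum clique size. A sketch using networkx; the example graph is made up.

```python
import networkx as nx

def reduce_simplicial(G):
    """Repeatedly remove simplicial vertices (neighbourhood is a clique).
    Returns the reduced graph and a lower bound on the optimal maximum
    clique size of any triangulation."""
    G = G.copy()
    low = 0
    changed = True
    while changed:
        changed = False
        for v in list(G.nodes):
            nbrs = list(G.neighbors(v))
            # simplicial test: every pair of neighbours is adjacent
            if all(G.has_edge(a, b)
                   for i, a in enumerate(nbrs) for b in nbrs[i + 1:]):
                low = max(low, len(nbrs) + 1)
                G.remove_node(v)
                changed = True
    return G, low

G = nx.Graph([("a", "b"), ("b", "c"), ("a", "c"),
              ("c", "d"), ("d", "e"), ("c", "e")])
H, bound = reduce_simplicial(G)
print(H.number_of_nodes(), bound)   # fully reduced: 0 nodes, bound 3
```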


    An optimal multimedia object allocation solution in multi-powermode storage systems

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 13 2010
    Yingwei Jin
    Abstract Given a set of multimedia objects R = {o1, o2, ..., ok}, each of which has a set of multiple versions oi.v = {Ai.0, Ai.1, ..., Ai.m}, i = 1, 2, ..., k, there is a problem of distributing these objects in a server system so that user requests for accessing specified multimedia objects can be fulfilled with minimum energy consumption and without significantly degrading system performance. This paper considers the allocation problem of multimedia objects in multi-powermode storage systems, where the objects are distributed among multi-powermode storages based on the access pattern to the objects. We design an underlying storage-system infrastructure, propose a dynamic multimedia object allocation policy based on the designed infrastructure, and prove the optimality of the proposed policy. Copyright © 2010 John Wiley & Sons, Ltd. [source]


    A role for Pareto optimality in mining performance data

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 1 2005
    Joël M. Malard
    Abstract Improvements in performance modeling and identification of computational regimes within software libraries are a critical first step in developing software libraries that are truly agile with respect to the application as well as to the hardware. It is shown here that Pareto ranking, a concept from multi-objective optimization, can be an effective tool for mining large performance datasets. The approach is illustrated using software performance data gathered using both the public-domain LAPACK library and an asynchronous communication library based on the IBM LAPI active-message library. Copyright © 2005 John Wiley & Sons, Ltd. [source]
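
    Pareto ranking itself is easy to state in code: peel off successive non-dominated fronts, with rank 0 being the Pareto set. A sketch assuming smaller-is-better objectives such as (runtime, memory); the data points are invented.

```python
import numpy as np

def pareto_rank(points):
    """Iteratively peel non-dominated fronts; rank 0 is the Pareto set.
    Convention: smaller is better in every objective."""
    pts = np.asarray(points, dtype=float)
    rank = np.full(len(pts), -1)
    r, remaining = 0, list(range(len(pts)))
    while remaining:
        front = []
        for i in remaining:
            dominated = any(
                np.all(pts[j] <= pts[i]) and np.any(pts[j] < pts[i])
                for j in remaining if j != i)
            if not dominated:
                front.append(i)
        for i in front:
            rank[i] = r
        remaining = [i for i in remaining if i not in set(front)]
        r += 1
    return rank

# e.g. (runtime, memory) measurements for library configurations
data = [(1.0, 8.0), (2.0, 4.0), (3.0, 3.0), (2.5, 5.0), (4.0, 9.0)]
print(pareto_rank(data))   # rank 0 marks the efficient frontier
```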


    On the optimality of Feautrier's scheduling algorithm

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 11-12 2003
    Frédéric Vivien
    Abstract Feautrier's scheduling algorithm is the most powerful existing algorithm for parallelism detection and extraction, but it has always been known to be suboptimal. However, whether it may miss some parallelism because of its design had never been established. We show that this is not the case: within its framework, the algorithm does not miss any parallelism. Therefore, for an algorithm to find more parallelism than this algorithm, one needs to remove some of the hypotheses underlying that framework. Copyright © 2003 John Wiley & Sons, Ltd. [source]


    Decision Theory Applied to an Instrumental Variables Model

    ECONOMETRICA, Issue 3 2007
    Gary Chamberlain
    This paper applies some general concepts in decision theory to a simple instrumental variables model. There are two endogenous variables linked by a single structural equation; k of the exogenous variables are excluded from this structural equation and provide the instrumental variables (IV). The reduced-form distribution of the endogenous variables conditional on the exogenous variables corresponds to independent draws from a bivariate normal distribution with linear regression functions and a known covariance matrix. A canonical form of the model has parameter vector (β, λ, ω), where β is the parameter of interest and is normalized to be a point on the unit circle. The reduced-form coefficients on the instrumental variables are split into a scalar parameter λ and a parameter vector ω, which is normalized to be a point on the (k-1)-dimensional unit sphere; λ measures the strength of the association between the endogenous variables and the instrumental variables, and ω is a measure of direction. A prior distribution is introduced for the IV model. The parameters β, λ, and ω are treated as independent random variables. The distribution for β is uniform on the unit circle; the distribution for ω is uniform on the unit sphere with dimension k-1. These choices arise from the solution of a minimax problem. The prior for λ is left general. It turns out that, given any positive value for λ, the Bayes estimator of β does not depend on λ; it equals the maximum-likelihood estimator. This Bayes estimator has constant risk; because it minimizes average risk with respect to a proper prior, it is minimax. The same general concepts are applied to obtain confidence intervals. The prior distribution is used in two ways. The first way is to integrate out the nuisance parameter ω in the IV model. This gives an integrated likelihood function with two scalar parameters, β and λ. Inverting a likelihood ratio test, based on the integrated likelihood function, provides a confidence interval for β. This lacks finite sample optimality, but invariance arguments show that the risk function depends only on λ and not on β or ω. The second approach to confidence sets aims for finite sample optimality by setting up a loss function that trades off coverage against the length of the interval. The automatic uniform priors are used for β and ω, but a prior is also needed for the scalar λ, and no guidance is offered on this choice. The Bayes rule is a highest posterior density set. Invariance arguments show that the risk function depends only on λ and not on β or ω. The optimality result combines average risk and maximum risk. The confidence set minimizes the average, with respect to the prior distribution for λ, of the maximum risk, where the maximization is with respect to β and ω. [source]


    Choosing the Number of Instruments

    ECONOMETRICA, Issue 5 2001
    Stephen G. Donald
    Properties of instrumental variable estimators are sensitive to the choice of valid instruments, even in large cross-section applications. In this paper we address this problem by deriving simple mean-square error criteria that can be minimized to choose the instrument set. We develop these criteria for two-stage least squares (2SLS), limited information maximum likelihood (LIML), and a bias adjusted version of 2SLS (B2SLS). We give a theoretical derivation of the mean-square error and show optimality. In Monte Carlo experiments we find that the instrument choice generally yields an improvement in performance. Also, in the Angrist and Krueger (1991) returns to education application, when the instrument set is chosen in the way we consider, it turns out that both 2SLS and LIML give similar (large) returns to education. [source]
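
    A toy Monte Carlo illustrating the trade-off the paper's criteria are designed to manage: with one strong and many irrelevant instruments, adding instruments pushes 2SLS toward OLS-style bias. The data-generating process and instrument strengths are illustrative assumptions, not the authors' mean-square error criterion.

```python
import numpy as np

rng = np.random.default_rng(1)
n, K_max, beta0 = 200, 20, 1.0

def tsls(y, x, Z):
    """Two-stage least squares for a single endogenous regressor."""
    xh = Z @ np.linalg.solve(Z.T @ Z, Z.T @ x)   # first-stage fitted values
    return float((xh @ y) / (xh @ x))

for K in (1, 5, 20):
    est = []
    for _ in range(500):
        Z = rng.normal(size=(n, K_max))
        pi = np.r_[0.5, np.zeros(K_max - 1)]     # only the first instrument is strong
        u = rng.normal(size=n)
        x = Z @ pi + 0.8 * u + 0.6 * rng.normal(size=n)  # endogenous regressor
        y = beta0 * x + u
        est.append(tsls(y, x, Z[:, :K]))
    est = np.array(est)
    print(K, "bias %.3f  sd %.3f" % (est.mean() - beta0, est.std()))
```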


    Experiments on stabilizing receding horizon control of a direct drive manipulator

    ELECTRONICS & COMMUNICATIONS IN JAPAN, Issue 5 2008
    Yasunori Kawai
    Abstract In this paper, the application of receding horizon control to a two-link direct-drive robot arm is demonstrated. Instead of terminal constraints, which are computationally demanding, a terminal cost is used to guarantee the stability of receding horizon control. The key idea of this paper is to apply receding horizon control with a terminal cost derived from the energy function of the robot system. The energy function is defined as a control Lyapunov function by considering inverse optimality. In the experimental results, stability and performance are compared with respect to the horizon length by applying receding horizon control and inverse optimal control to the robot arm. © 2008 Wiley Periodicals, Inc. Electron Comm Jpn, 91(5): 33-40, 2008; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecj.10113 [source]
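
    A minimal receding-horizon sketch with a stabilizing terminal cost. The paper derives its terminal cost from the robot's energy function (a control Lyapunov function); here a linear system with the discrete algebraic Riccati solution as terminal weight stands in for that construction, and all system matrices are illustrative.

```python
import numpy as np
from scipy.linalg import solve_discrete_are
from scipy.optimize import minimize

A = np.array([[1.0, 0.1], [0.0, 1.0]])       # toy double-integrator dynamics
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.eye(1)
P = solve_discrete_are(A, B, Q, R)            # terminal weight (stand-in CLF)

N = 10                                         # horizon length
def cost(u_flat, x0):
    u = u_flat.reshape(N, 1)
    x, J = x0, 0.0
    for k in range(N):
        J += x @ Q @ x + u[k] @ R @ u[k]      # running cost
        x = A @ x + B @ u[k]
    return J + x @ P @ x                      # terminal cost

x = np.array([1.0, 0.0])
for t in range(30):                            # closed-loop receding horizon
    res = minimize(cost, np.zeros(N), args=(x,), method="SLSQP")
    u0 = res.x[0]
    x = A @ x + B.flatten() * u0              # apply the first input only
print(x)                                       # near the origin if stabilized
```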


    How to find what's in a name: Scrutinizing the optimality of five scoring algorithms for the name-letter task

    EUROPEAN JOURNAL OF PERSONALITY, Issue 2 2009
    Etienne P. LeBel
    Abstract Although the name-letter task (NLT) has become an increasingly popular technique for measuring implicit self-esteem (ISE), researchers have relied on different algorithms to compute NLT scores, and the psychometric properties of these differently computed scores have never been thoroughly investigated. Based on 18 independent samples including 2690 participants, the current research examined the optimality of five scoring algorithms against the following criteria: reliability; variability in reliability estimates across samples; types of systematic error variance controlled for; systematic production of outliers; and shape of the distribution of scores. Overall, an ipsatized version of the original algorithm exhibited the most optimal psychometric properties and is recommended for future research using the NLT. Copyright © 2009 John Wiley & Sons, Ltd. [source]
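
    For concreteness, a sketch of one plausible ipsatized scoring scheme: ratings are centred within each participant, each initial's centred rating is compared with a baseline from participants who do not have that initial, and the two initials are averaged. The exact algorithm recommended by the paper may differ in detail; this only illustrates the construction.

```python
import numpy as np
import string

LETTERS = list(string.ascii_uppercase)

def nlt_scores(ratings, initials):
    """ratings: (n_people, 26) letter-liking ratings; initials: list of
    (first_initial, last_initial) uppercase letter pairs."""
    R = np.asarray(ratings, dtype=float)
    ips = R - R.mean(axis=1, keepdims=True)         # ipsatize per person
    n, scores = len(R), []
    for i, pair in enumerate(initials):
        advantage = 0.0
        for letter in pair:
            j = LETTERS.index(letter)
            others = [k for k in range(n) if letter not in initials[k]]
            baseline = ips[others, j].mean()        # non-initial baseline
            advantage += ips[i, j] - baseline
        scores.append(advantage / 2.0)
    return np.array(scores)

rng = np.random.default_rng(0)
ratings = rng.uniform(1, 9, size=(100, 26))
inits = [tuple(rng.choice(LETTERS, size=2, replace=False)) for _ in range(100)]
print(nlt_scores(ratings, inits)[:5])               # near 0 under random ratings
```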


    Physical apertures as constraints on egg size and shape in the Common Musk Turtle, Sternotherus odoratus

    FUNCTIONAL ECOLOGY, Issue 1 2001
    P. J. Clark
    Summary
    1. Egg size in turtles often increases with female size, contrary to expectations of optimality. Functional constraints on egg width imposed by the pelvic aperture or the gap between the carapace and plastron (the caudal gap) have been inferred for a few populations but appear inapplicable in others.
    2. For Sternotherus odoratus (the Common Musk Turtle), the pelvic aperture was always wider than the width of the female's largest egg by at least 3.7 mm. The caudal gap was narrower than the widest egg for 25.7% of the females.
    3. Egg width increased, and elongation (length/width) decreased, as female size and clutch size increased.
    4. Females at three ecologically contrasting sites differed appreciably in size but produced eggs of the same mean shape and size, despite the strong within-site changes in both egg size and shape with female size. As the younger females at all sites were of similar age and produced eggs of similar size and shape (again, despite differences in body size), egg size and shape may be age-specific.
    5. No optimal egg size prevailed, but the scaled residuals of egg size with female mass were less variable than those for clutch size. [source]


    Locational Equilibria in Weberian Agglomeration

    GEOGRAPHICAL ANALYSIS, Issue 4 2008
    Dean M. Hanink
    A simple Weberian agglomeration model is developed and then extended as an innovative fixed-charge colocation model over a large set of locational possibilities. The model is applied to cases in which external economies (EE) arise from colocation alone and to cases in which EE arise from city size. Solutions to the model are interpreted in the context of contemporary equilibrium analysis, which allows Weberian agglomeration to be interpreted in a more general way than in previous analyses. Within that context, the Nash points and Pareto-efficient points in the location patterns derived in the model are shown to rarely coincide. The applications consider agglomeration from two perspectives: one is the colocation behavior of producers as the agents of agglomeration, and the other is the interaction between government and those agents in the interest of agglomeration policy. Extending the analysis to games, potential Pareto efficiency and Hicks optimality are considered with respect to side payments between producers and with respect to appropriate government incentives toward agglomeration. [source]


    The concept of work compatibility: An integrated design criterion for improving workplace human performance in manufacturing systems

    HUMAN FACTORS AND ERGONOMICS IN MANUFACTURING & SERVICE INDUSTRIES, Issue 4 2004
    S. Abdallah
    In this paper, we present the concept of work compatibility as an integrated work design criterion that simultaneously improves human health and safety, productivity, and work quality in manufacturing systems. In this respect, we have modeled work compatibility as a work design parameter that mathematically integrates the energizing (i.e., system resources) and the demand (i.e., system requirements) forces in the work system. A mathematical equation has been derived for the work compatibility matrix. Furthermore, an operating zone has been developed in which there is a region of optimality where the employee can function on practical grounds with a good degree of efficiency and sustainability. An application example is provided to demonstrate the potential of work compatibility to improve productivity and quality along with improvements in worker safety and health. © 2004 Wiley Periodicals, Inc. Hum Factors Man 14: 379-402, 2004. [source]


    Energy Saving Speed and Charge/Discharge Control of a Railway Vehicle with On-board Energy Storage by Means of an Optimization Model

    IEEJ TRANSACTIONS ON ELECTRICAL AND ELECTRONIC ENGINEERING, Issue 6 2009
    Masafumi Miyatake Member
    Abstract The optimal operation of a rail vehicle minimizing total energy consumption is discussed in this paper. In recent years, energy storage devices have attained sufficient energy and power density to be used on board trains. The on-board storage can assist the acceleration/deceleration of the train and may decrease energy consumption. Many works on the application of energy storage devices to trains have been reported; however, they did not deal adequately with the optimality of the control of the devices. The authors point out that the charging/discharging command and the vehicle speed profile should be optimized together, based on optimality analysis. The authors have developed a mathematical model based on a general optimization technique, sequential quadratic programming. The proposed method can determine the optimal acceleration/deceleration and current commands at every sampling point under fixed conditions of transfer time and distance. Using the proposed method, simulations were implemented in several cases. The electric double-layer capacitor (EDLC) is assumed as the energy storage device in our study because of its high power density and related advantages. The trend of optimal solutions, such as the values of the control inputs and energy consumption, is finally discussed. Copyright © 2009 Institute of Electrical Engineers of Japan. Published by John Wiley & Sons, Inc. [source]
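
    A crude sketch of the optimization step described above: a train speed profile chosen by sequential quadratic programming (SciPy's SLSQP) under fixed trip time and distance. The mass, bounds, traction-energy model, and the omission of the storage device are all simplifying assumptions.

```python
import numpy as np
from scipy.optimize import minimize

T, dt, D = 60.0, 1.0, 600.0               # trip time [s], step [s], distance [m]
N = int(T / dt)
m = 3.0e5                                  # train mass [kg] (assumption)

def speeds(a):                             # end-of-step speeds from accelerations
    return np.cumsum(a) * dt

def energy(a):                             # traction energy, no regeneration
    v = speeds(a)
    P = m * a * v                          # mechanical power per step
    return float(np.sum(np.maximum(P, 0.0)) * dt)

cons = [
    {"type": "eq", "fun": lambda a: np.sum(speeds(a)) * dt - D},  # cover distance
    {"type": "eq", "fun": lambda a: speeds(a)[-1]},               # stop at the end
]
a0 = np.concatenate([np.full(N // 2, 0.5), np.full(N - N // 2, -0.5)])
res = minimize(energy, a0, method="SLSQP", constraints=cons,
               bounds=[(-1.0, 1.0)] * N, options={"maxiter": 300})
print(res.success, energy(res.x))
```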


    Estimating Labor Demand with Fixed Costs

    INTERNATIONAL ECONOMIC REVIEW, Issue 1 2004
    Paola Rota
    We consider a dynamic model in which firms decide whether or not to vary labor in the presence of fixed costs. By exploiting the first-order condition for optimality, we derive a semireduced form in which firms' intertemporal employment is defined by a standard marginal productivity condition augmented by a forward-looking term. We obtain a marginal productivity equilibrium relation that takes into account the future alternatives of adjustment or nonadjustment that firms face. We use the structural parameter from this condition to estimate the fixed cost within a discrete decision process. Fixed costs are about 15 months' labor cost. [source]


    Local maximum-entropy approximation schemes: a seamless bridge between finite elements and meshfree methods

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 13 2006
    M. Arroyo
    Abstract We present a one-parameter family of approximation schemes, which we refer to as local maximum-entropy approximation schemes, that bridges continuously two important limits: Delaunay triangulation and maximum-entropy (max-ent) statistical inference. Local max-ent approximation schemes represent a compromise, in the sense of Pareto optimality, between the competing objectives of unbiased statistical inference from the nodal data and the definition of local shape functions of least width. Local max-ent approximation schemes are entirely defined by the node set and the domain of analysis, and the shape functions are positive, interpolate affine functions exactly, and have a weak Kronecker-delta property at the boundary. Local max-ent approximation may be regarded as a regularization, or thermalization, of Delaunay triangulation which effectively resolves the degenerate cases resulting from the lack of uniqueness of the triangulation. Local max-ent approximation schemes can be taken as a convenient basis for the numerical solution of PDEs in the style of meshfree Galerkin methods. In test cases characterized by smooth solutions we find that the accuracy of local max-ent approximation schemes is vastly superior to that of finite elements. Copyright © 2005 John Wiley & Sons, Ltd. [source]
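
    In one dimension, the local max-ent construction reduces to a small convex dual problem per evaluation point, solvable by Newton iteration on a single Lagrange multiplier. A sketch; the node set and the locality parameter beta are arbitrary choices.

```python
import numpy as np

def maxent_shape(x, xa, beta, tol=1e-12, max_newton=100):
    """1D local max-ent shape functions at point x for nodes xa.
    Solves the convex dual problem by Newton iteration on the single
    Lagrange multiplier enforcing first-order consistency."""
    lam = 0.0
    for _ in range(max_newton):
        w = np.exp(-beta * (x - xa) ** 2 + lam * (x - xa))
        p = w / w.sum()
        r = p @ (x - xa)                  # consistency residual, want 0
        if abs(r) < tol:
            break
        J = p @ (x - xa) ** 2 - r ** 2    # second derivative (a variance)
        lam -= r / J
    return p

xa = np.linspace(0.0, 1.0, 6)             # node set (arbitrary)
p = maxent_shape(0.37, xa, beta=50.0)
print(p.sum(), p @ xa)                    # 1.0 and 0.37: both constraints hold
```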


    Displacement/pressure mixed interpolation in the method of finite spheres

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 3 2001
    Suvranu De
    Abstract The displacement-based formulation of the method of finite spheres is observed to exhibit volumetric 'locking' when incompressible or nearly incompressible deformations are encountered. In this paper, we present a displacement/pressure mixed formulation as a solution to this problem. We analyse the stability and optimality of the formulation for several discretization schemes using numerical inf-sup tests. Issues concerning computational efficiency are also discussed. Copyright © 2001 John Wiley & Sons, Ltd. [source]


    Simplified group interference cancelling for asynchronous DS-CDMA

    INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 10 2006
    David W. Matolak
    Abstract A simplified group interference cancelling (IC) approach is investigated for asynchronous direct-sequence code-division multiple access on flat fading channels. The technique employs grouping by estimated signal-to-noise-plus-interference ratio (SNIR), and interference cancellation is performed blockwise, for a subset of the total number of users. We consider long random spreading codes, and include the effects of imperfect amplitude, carrier phase, and delay estimation. Performance of the technique shows SNIR gains of several dB, and concomitant improvements in error probability, with lower computational complexity than that of parallel or serial interference cancelling techniques. We also show that our SNIR expressions are applicable to both the AWGN and flat fading channels, and for moderate near-far conditions. In addition, we determine optimal group sizes for our technique, where optimality is in terms of average error probability over all users. Copyright © 2006 John Wiley & Sons, Ltd. [source]


    The Phelps-Koopmans theorem and potential optimality

    INTERNATIONAL JOURNAL OF ECONOMIC THEORY, Issue 1 2010
    Debraj Ray
    JEL codes: D90, O41. The Phelps-Koopmans theorem states that if every limit point of a path of capital stocks exceeds the "golden rule," then that path is inefficient: there is another feasible path from the same initial stock that provides at least as much consumption at every date and strictly more consumption at some date. I show that in a model with nonconvex technologies and preferences, the theorem is false in a strong sense. Not only can there be efficient paths with capital stocks forever above and bounded away from a unique golden rule, such paths can also be optimal under the infinite discounted sum of a one-period utility function. The paper makes clear, moreover, that this latter criterion is strictly more demanding than the efficiency of a path. [source]


    Reflections on the optimal currency area (OCA) criteria in the light of EMU

    INTERNATIONAL JOURNAL OF FINANCE & ECONOMICS, Issue 4 2003
    M.J. Artis
    Abstract Optimal Currency Area (OCA) theory offers criteria for evaluating the optimality of monetary union arrangements. This paper reviews the use that has been made of these criteria in the specific context of European Monetary Union. It reviews the use of business cycle synchronization data and the data produced by SVAR analyses, which led to the 'core-periphery' distinction. It also reviews extensions of the criteria that have been proposed or generated in this context: in particular, the proposition that the criteria may be 'endogenous'. It presents Taylor rule estimates to check for inhomogeneities in Euro Area performance. The paper concludes that OCA criteria provide a useful starting point for evaluating monetary union options. Copyright © 2003 John Wiley & Sons, Ltd. [source]


    A fuzzy goal programming procedure for solving quadratic bilevel programming problems

    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 5 2003
    Bijay Baran Pal
    This article presents a fuzzy goal programming (FGP) procedure for solving quadratic bilevel programming problems (QBLPP). In the proposed approach, membership functions for the defined fuzzy objective goals of the decision makers (DMs) at both levels are developed first. Then, a quadratic programming model is formulated by using the notion of a distance function that minimizes the degree of regret in satisfying both DMs. In the first phase of the solution process, the quadratic programming model is transformed into an equivalent nonlinear goal programming (NLGP) model to maximize the membership value of each of the fuzzy objective goals to the extent possible on the basis of their priorities in the decision context. Then, in the second phase, the concept of the linear approximation technique in goal programming is introduced for measuring the degree of satisfaction of the DMs at both levels, arriving at a compromise decision regarding the optimality of the two different sets of decision variables controlled separately by each of them. A numerical example is provided to illustrate the proposed approach. © 2003 Wiley Periodicals, Inc. [source]
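
    The membership-function machinery can be illustrated on a toy linear, single-level problem: linear membership functions for two fuzzy goals and a max-min (best worst-satisfaction) formulation solved as an LP. The objectives, aspiration levels, and tolerances below are invented, and the paper's quadratic bilevel structure is not modeled.

```python
import numpy as np
from scipy.optimize import linprog

# Variables x1, x2 >= 0 with x1 + x2 <= 10.
# f1 = 3*x1 + x2 (maximize; goal 24, lowest acceptable 10)
# f2 = x1 + 2*x2 (maximize; goal 18, lowest acceptable 8)
# Membership mu_i = (f_i - low_i) / (goal_i - low_i); maximize t <= mu_i.
c = [0.0, 0.0, -1.0]                      # maximize t  ->  minimize -t
A_ub = [
    [1.0, 1.0, 0.0],                      # x1 + x2 <= 10
    [-3.0 / 14, -1.0 / 14, 1.0],          # t <= mu_1, rearranged
    [-1.0 / 10, -2.0 / 10, 1.0],          # t <= mu_2, rearranged
]
b_ub = [10.0, -10.0 / 14, -8.0 / 10]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None), (0, 1)])
x1, x2, t = res.x
print(x1, x2, "worst membership:", t)     # balanced compromise solution
```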


    Modeling and optimization of cylindrical antennas using the mode-expansion method and genetic algorithms

    INTERNATIONAL JOURNAL OF RF AND MICROWAVE COMPUTER-AIDED ENGINEERING, Issue 6 2005
    Dawei Shen
    Abstract For monopole antennas with cylindrically symmetric structures, a mode-expansion method is highly time-efficient, making it a realistic basis for integrating function-optimization tools such as genetic algorithms (GAs) in order to extract the best bandwidth property. In this article, a mode-expansion method is used to simulate the impedance characteristics of cylindrical antennas. As examples, two new types of monopole antennas are presented, one of which possesses a two-step top-hat structure while the other has an annulus around the stem. After the modeling scheme is examined for convergence and data validity, the associated optimization problem, with dimensions as decision variables, structural limitations as linear constraints, and desired bandwidth performance as the objective function, is solved using GAs. The effects of the geometric parameters on the impedance characteristics are investigated in order to demonstrate the optimality of the calculated solutions. Two optimized practical antennas are designed based on our numerical studies. One has a broad bandwidth of 3 GHz while the other shows a dual-band property, which can satisfy the bandwidth requirements of both Bluetooth (2.45-GHz band) and WLAN (5-GHz band) systems. © 2005 Wiley Periodicals, Inc. Int J RF and Microwave CAE, 2005. [source]
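
    A bare-bones genetic algorithm of the kind coupled to the mode-expansion solver in the article. The evaluate() function is a stand-in: in the article's setting it would run the mode-expansion simulation and score the impedance bandwidth; here it is an arbitrary smooth test function, and all dimension names and bounds are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
LO = np.array([1.0, 0.5, 0.1])      # lower bounds on dimensions [cm] (made up)
HI = np.array([10.0, 5.0, 2.0])     # upper bounds (made up)

def evaluate(dims):                  # stand-in for the bandwidth objective
    target = np.array([6.0, 2.0, 0.7])
    return -float(np.sum((dims - target) ** 2))

pop = rng.uniform(LO, HI, size=(40, 3))
for gen in range(100):
    fit = np.array([evaluate(ind) for ind in pop])
    order = np.argsort(fit)[::-1]
    parents = pop[order[:20]]                               # truncation selection
    kids = []
    for _ in range(20):
        i, j = rng.integers(0, 20, size=2)
        alpha = rng.uniform(size=3)
        child = alpha * parents[i] + (1 - alpha) * parents[j]  # blend crossover
        child += rng.normal(scale=0.1, size=3)                 # Gaussian mutation
        kids.append(np.clip(child, LO, HI))
    pop = np.vstack([parents, kids])
best = pop[np.argmax([evaluate(ind) for ind in pop])]
print(best)    # converges near the stand-in optimum [6.0, 2.0, 0.7]
```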