Optimal Choice (optimal + choice)

Selected Abstracts


Optimal choice of granularity in commonsense estimation: Why half-orders of magnitude?

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 8 2006
Jerry R. Hobbs
It has been observed that when people make crude estimates, they feel comfortable choosing between alternatives that differ by a half-order of magnitude (e.g., were there 100, 300, or 1000 people in the crowd?) and less comfortable making a choice on a more detailed scale, with finer granules, or on a coarser scale (like 100 or 1000). In this article, we describe two models of choosing granularity in commonsense estimates, and we show that for both models, in the optimal granularity, the next estimate is three to four times larger than the previous one. Thus, these two optimization results explain the commonsense granularity. © 2006 Wiley Periodicals, Inc. Int J Int Syst 21: 843–855, 2006. [source]
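For orientation, a half-order of magnitude is a factor of 10^0.5 ≈ 3.16, which is exactly the three-to-four-times spacing referred to in the abstract. The snippet below is an illustrative sketch (not from the paper) that simply enumerates such granules starting from 100.

```python
import math

# A half-order-of-magnitude scale steps by a factor of 10**0.5 ~ 3.16,
# so successive granules starting from 100 run 100, 316, 1000, 3162, ...
step = math.sqrt(10)
granules = [100 * step ** k for k in range(4)]

print(round(step, 2))                 # 3.16 -- inside the 3-4x range cited above
print([round(g) for g in granules])   # [100, 316, 1000, 3162]
```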


Optimal choice of characteristics for a nonexcludable good

THE RAND JOURNAL OF ECONOMICS, Issue 1 2008
Isabelle Brocas
In this model, a principal decides whether to produce one indivisible good and which characteristics it contains. Agents are differentiated along two substitutable dimensions: a vertical parameter that captures their valuation for the good, and a horizontal parameter that captures their disutility when the characteristics are distant from their preferred ones. When valuations are private information, the principal produces a good with characteristics more along the lines of the preferences of the agent with the lowest valuation. Under asymmetric information on the horizontal dimension, the principal biases the decision in favor of the agent who incurs the highest disutility. [source]


Optimal Design of the Online Auction Channel: Analytical, Empirical, and Computational Insights

DECISION SCIENCES, Issue 4 2002
Ravi Bapna
ABSTRACT The focus of this study is on business-to-consumer (B2C) online auctions made possible by the advent of electronic commerce over an open-source, ubiquitous Internet Protocol (IP) computer network. This work presents an analytical model that characterizes the revenue generation process for a popular B2C online auction, namely, Yankee auctions. Such auctions sell multiple identical units of a good to multiple buyers using an ascending and open auction mechanism. The methodologies used to validate the analytical model range from empirical analysis to simulation. A key contribution of this study is the design of a partitioning scheme of the discrete valuation space of the bidders such that equilibrium points with higher revenue structures become identifiable and feasible. Our analysis indicates that the auctioneers are, most of the time, far away from the optimal choice of key control factors such as the bid increment, resulting in substantial losses in a market with already tight margins. With this in mind, we put forward a portfolio of tools, varying in their level of abstraction and information intensity requirements, which help auctioneers maximize their revenues. [source]


Mixture toxicity and gene inductions: Can we predict the outcome?

ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 3 2008
Freddy Dardenne
Abstract As a consequence of the nature of most real-life exposure scenarios, the last decade of ecotoxicological research has seen increasing interest in the assessment of mixture ecotoxicology. Often, mixtures are considered to follow one of two models, concentration addition (CA) or response addition (RA), both of which have been described in the literature. Nevertheless, mixtures that deviate from either or both models exist; they typically exhibit phenomena like synergism, ratio or concentration dependency, or inhibition. Moreover, both CA and RA have been challenged and evaluated mainly for acute responses at relatively high levels of biological organization (e.g., whole-organism mortality), and applicability to genetic responses has not received much attention. Genetic responses are considered to be the primary reaction in case of toxicant exposure and carry valuable mechanistic information. Effects at the gene-expression level are at the heart of the mode of action by toxicants and mixtures. The ability to predict mixture responses at this primary response level is an important asset in predicting and understanding mixture effects at different levels of biological organization. The present study evaluated the applicability of mixture models to stress gene inductions in Escherichia coli employing model toxicants with known modes of action in binary combinations. The results showed that even if the maximum of the dose–response curve is not known, making a classical ECx (concentration causing x% effect) approach impossible, mixture models can predict responses to the binary mixtures based on the single-toxicant response curves. In most cases, the mode of action of the toxicants does not determine the optimal choice of model (i.e., CA, RA, or a deviation thereof). [source]
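For reference, the two mixture reference models named in the abstract are usually written as below; these are the standard textbook formulations, and the exact parameterization used in the study may differ.

```latex
% Concentration addition (Loewe additivity): a mixture with component
% concentrations c_i produces an x% effect when the toxic units sum to one.
\sum_{i=1}^{n} \frac{c_i}{EC_{x,i}} = 1
% Response addition (independent action): effects combine as independent
% probabilities of response.
E(c_{\mathrm{mix}}) = 1 - \prod_{i=1}^{n} \bigl(1 - E(c_i)\bigr)
```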


On dichotomizing phenotypes in family-based association tests: quantitative phenotypes are not always the optimal choice

GENETIC EPIDEMIOLOGY, Issue 5 2007
David Fardo
Abstract In family-based association studies, quantitative traits are thought to provide higher statistical power than dichotomous traits. Consequently, it is standard practice to collect quantitative traits and to analyze them as such. However, in many situations, continuous measurements are more difficult to obtain and/or need to be adjusted for other factors/confounding variables which also have to be measured. In such scenarios, it can be advantageous to record and analyze a "simplified/dichotomized" version of the original trait. Under fairly general circumstances, we derive here rules for the dichotomization of quantitative traits that maintain power levels that are comparable to the analysis of the original quantitative trait. Using simulation studies, we show that the proposed rules are robust against phenotypic misclassification, making them an ideal tool for inexpensive phenotyping in large-scale studies. The guidelines are illustrated by an application to an asthma study. Genet. Epidemiol. 2007. © 2007 Wiley-Liss, Inc. [source]


On the spatial scaling of seismicity rate

GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 3 2005
G. Molchan
SUMMARY Scaling analysis of seismicity in the space–time–magnitude domain very often starts from the relation λ(m, L) = a_L 10^(−bm) L^c for the rate of seismic events of magnitude M > m in an area of size L. There is some evidence in favour of multifractality being present in seismicity. In this case, the optimal choice of the scale exponent c is not unique. It is shown how different values of c are related to different types of spatial averaging applied to λ(m, L) and what are the values of c for which the distributions of a_L best agree for small L. Theoretical analysis is tested using the California data. [source]
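Spelled out, the reconstructed scaling relation (standard seismicity notation assumed here; the paper's symbols may differ slightly) reads:

```latex
% lambda(m, L): rate of events with magnitude M > m in a cell of linear size L;
% b: Gutenberg-Richter slope; c: spatial scale exponent; a_L: L-dependent prefactor
% whose distributions should collapse for the optimal choice of c.
\lambda(m, L) \;=\; a_L \, 10^{-bm} \, L^{c}
```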


Retrospective selection bias (or the benefit of hindsight)

GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 2 2001
Francesco Mulargia
SUMMARY The complexity of geophysical systems makes modelling them a formidable task, and in many cases research studies are still in the phenomenological stage. In earthquake physics, long timescales and the lack of any natural laboratory restrict research to retrospective analysis of data. Such 'fishing expedition' approaches lead to optimal selection of data, albeit not always consciously. This introduces significant biases, which are capable of falsely representing simple statistical fluctuations as significant anomalies requiring fundamental explanations. This paper identifies three different strategies for discriminating real issues from artefacts generated retrospectively. The first attempts to identify ab initio each optimal choice and account for it. Unfortunately, a satisfactory solution can only be achieved in particular cases. The second strategy acknowledges this difficulty as well as the unavoidable existence of bias, and classifies all 'anomalous' observations as artefacts unless their retrospective probability of occurrence is exceedingly low (for instance, beyond six standard deviations). However, such a strategy is also likely to reject some scientifically important anomalies. The third strategy relies on two separate steps with learning and validation performed on effectively independent sets of data. This approach appears to be preferable in the case of small samples, such as are frequently encountered in geophysics, but the requirement for forward validation implies long waiting times before credible conclusions can be reached. A practical application to pattern recognition, which is the prototype of retrospective 'fishing expeditions', is presented, illustrating that valid conclusions are hard to find. [source]
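As a point of reference for the 'beyond six standard deviations' criterion of the second strategy, a minimal sketch (assuming a Gaussian null distribution) of the corresponding two-sided retrospective probability:

```python
from scipy.stats import norm

# Two-sided tail probability of a Gaussian fluctuation beyond six standard
# deviations -- the kind of threshold suggested above for accepting an
# "anomaly" as more than a statistical artefact.
p_two_sided = 2 * norm.sf(6.0)
print(f"{p_two_sided:.2e}")   # ~2.0e-09
```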


Factors Influencing the Course of the Macrocyclization of α,ω-Diamines with Esters of α,ω-Dicarboxylic Acids

HELVETICA CHIMICA ACTA, Issue 1 2004
Dorota Gryko
The efficient synthesis of eight new macrocyclic amides (lactams) via reaction of diesters with diamines under normal dilution conditions is described. The role of intermolecular H-bond formation and steric hindrance is discussed based on ¹H- and ¹⁵N-NMR studies of appropriate model compounds. Principles for the optimal choice of esters that can be efficiently transformed into diamides have been developed. [source]


Piece Rates, Fixed Wages, and Incentive Effects: Statistical Evidence from Payroll Records

INTERNATIONAL ECONOMIC REVIEW, Issue 1 2000
Harry J. Paarsch
We develop and estimate an agency model of worker behavior under piece rates and fixed wages. The model implies optimal decision rules for the firm's choice of a compensation system as a function of working conditions. Our model also implies an upper and lower bound to the incentive effect (the productivity gain realized by paying workers piece rates rather than fixed wages) that can be estimated using regression methods. Using daily productivity data collected from the payroll records of a British Columbia tree-planting firm, we estimate these bounds to be an 8.8 and a 60.4 percent increase in productivity. Structural estimation, which accounts for the firm's optimal choice of a compensation system, suggests that incentives caused a 22.6 percent increase in productivity. However, only part of this increase represents valuable output because workers respond to incentives, in part, by reducing quality. [source]


Reduced-order robust adaptive control design of uncertain SISO linear systems

INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 7 2008
Qingrong Zhao
Abstract In this paper, a stability and robustness preserving adaptive controller order-reduction method is developed for a class of uncertain linear systems affected by system and measurement noises. In this method, we immediately start the integrator backstepping procedure of the controller design without first stabilizing a filtered dynamics of the output. This relieves us from generating the reference trajectory for the filtered dynamics of the output and thus reduces the controller order by n, n being the dimension of the system state. The stability of the filtered dynamics is indirectly proved via an existing state signal. The trade-off for this order reduction is that the worst-case estimate for the expanded state vector has to be a suboptimal rather than the optimal choice. It is shown that the resulting reduced-order adaptive controller preserves the stability and robustness properties of the full-order adaptive controller in disturbance attenuation, boundedness of closed-loop signals, and output tracking. The proposed order-reduction scheme is also applied to a class of single-input single-output linear systems with partly measured disturbances. Two examples are presented to illustrate the performance of the reduced-order controller in this paper. Copyright © 2007 John Wiley & Sons, Ltd. [source]


Design of distributed controllers with constrained and noisy links

INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 18 2006
Shengxiang Jiang
Abstract In this paper we consider some design aspects of distributed controllers that guarantee an H∞ performance level. In particular, we consider two design problems. The first is the case where, without loss of generality, there are two distributed subcontrollers connected to a (generalized) plant and the interest is placed in minimizing the number of noise-free (and dynamics free) communication channels between the subcontrollers needed to provide a given performance. The second is the case where, given a distributed controller designed in the first case, communication noise is present and we seek an optimal choice of the communication signals to guarantee a performance level while keeping the communication signal to noise power limited. We take a linear matrix inequality (LMI) approach to provide solution procedures to these problems and present examples that demonstrate their efficiency. Copyright © 2006 John Wiley & Sons, Ltd. [source]
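For context, H∞ synthesis via LMIs typically rests on the bounded real lemma; the generic statement below is given only as background (the paper's actual synthesis conditions for the distributed, noisy-link case are more structured).

```latex
% Bounded real lemma: for \dot{x} = Ax + Bw, z = Cx + Dw, the H-infinity norm
% from w to z is below \gamma iff there exists P = P^T \succ 0 with
\begin{bmatrix}
  A^{\mathsf T}P + PA & PB        & C^{\mathsf T} \\
  B^{\mathsf T}P      & -\gamma I & D^{\mathsf T} \\
  C                   & D         & -\gamma I
\end{bmatrix} \prec 0
```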


An evaluation of current diagnostic tests for the detection of infectious salmon anaemia virus (ISAV) following experimental water-borne infection of Atlantic salmon, Salmo salar L.

JOURNAL OF FISH DISEASES, Issue 3 2003
M Snow
Abstract Four commonly used diagnostic tests [reverse transcription polymerase chain reaction (RT-PCR), indirect fluorescent antibody test (IFAT), virus culture and light microscopy] were evaluated for their ability to detect infectious salmon anaemia virus (ISAV) or tissue pathology following experimental infection of Atlantic salmon. Fish were infected with ISAV by water-borne exposure which mimics the route of natural infection. Forty-five per cent of pre-clinical fish tested yielded positive results by RT-PCR for at least one of the organs tested (kidney, heart, gill, liver, blood). No significant difference was detected between organs in the number or time of first occurrence of positive result. Virus culture identified a total of 14% of pre-clinical fish as ISAV-infected. The presence of ISAV in heart tissue was particularly notable (13% of fish sampled) as was the inability to culture virus from spleen tissue. In the case of IFAT, 15% of fish sampled were positive, although tissue other than kidney proved unsuitable for use in this method. Only limited ISAV-specific pathology was detectable by histological examination of fish prior to the onset of clinical disease. These findings reveal important information regarding the optimal choice of both tissue sample and diagnostic test for the routine diagnosis of ISAV. [source]


PROPOSAL OF ECTOCARPUS SILICULOSUS (ECTOCARPALES, PHAEOPHYCEAE) AS A MODEL ORGANISM FOR BROWN ALGAL GENETICS AND GENOMICS

JOURNAL OF PHYCOLOGY, Issue 6 2004
Akira F. Peters
The emergence of model organisms that permit the application of a powerful combination of genomic and genetic approaches has been a major factor underlying the advances that have been made in the past decade in dissecting the molecular basis of a wide range of biological processes. However, the phylogenetic distance separating marine macroalgae from these model organisms, which are mostly from the animal, fungi, and higher plant lineages, limits the latter's applicability to problems specific to macroalgal biology. There is therefore a pressing need to develop similar models for the macroalgae. Here we describe a survey of potential model brown algae in which particular attention was paid to characteristics associated with a strong potential for genomic and genetic analysis, such as a small nuclear genome size, sexuality, and a short life cycle. Flow cytometry of nuclei isolated from zoids showed that species from the Ectocarpales possess smaller haploid genomes (127–290 Mbp) than current models among the Laminariales (580–720 Mbp) and Fucales (1095–1271 Mbp). Species of the Ectocarpales may complete their life histories in as little as 6 weeks in laboratory culture and are amenable to genetic analyses. Based on this study, we propose Ectocarpus siliculosus (Dillwyn) Lyngbye as an optimal choice for a general model organism for the molecular genetics of the brown algae. [source]


Impact of the Sampling Rate on the Estimation of the Parameters of Fractional Brownian Motion

JOURNAL OF TIME SERIES ANALYSIS, Issue 3 2006
Zhengyuan Zhu
Primary 60G18; secondary 62D05, 62F12. Abstract. Fractional Brownian motion is a mean-zero self-similar Gaussian process with stationary increments. Its covariance depends on two parameters, the self-similar parameter H and the variance C. Suppose that one wants to estimate these parameters optimally by using n equally spaced observations. How should these observations be distributed? We show that the spacing of the observations does not affect the estimation of H (this is due to the self-similarity of the process), but the spacing does affect the estimation of the variance C. For example, the maximum likelihood estimator (MLE) of the variance C converges faster when the observations are equally spaced on [0, n] (unit spacing) than when they are equally spaced on [0, 1] (1/n-spacing) or on [0, n²] (n-spacing). We also determine the optimal choice of the spacing when it is constant, independent of the sample size n. While the rate of convergence of the MLE of C in this case is the same irrespective of the value of the spacing, the value of the optimal spacing depends on H. It is 1 (unit spacing) if H = 1/2 but is very large if H is close to 1. [source]
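For reference, the covariance referred to in the abstract is, in the usual parameterization of fractional Brownian motion with self-similarity parameter H and variance C (assumed here to match the paper's notation):

```latex
% Covariance of fractional Brownian motion B_H with Var(B_H(1)) = C.
\operatorname{Cov}\bigl(B_H(s), B_H(t)\bigr)
  \;=\; \frac{C}{2}\Bigl(|s|^{2H} + |t|^{2H} - |t - s|^{2H}\Bigr)
```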


Unified multipliers-free theory of dual-primal domain decomposition methods

NUMERICAL METHODS FOR PARTIAL DIFFERENTIAL EQUATIONS, Issue 3 2009
Ismael Herrera
Abstract The concept of dual-primal methods can be formulated in a manner that incorporates, as a subclass, the non-preconditioned case. Using such a generalized concept, in this article, without recourse to "Lagrange multipliers," we introduce an all-inclusive unified theory of nonoverlapping domain decomposition methods (DDMs). One-level methods, such as Schur-complement and one-level FETI, as well as two-level methods, such as Neumann-Neumann and preconditioned FETI, are incorporated in a unified manner. Different choices of the dual subspaces yield the different dual-primal preconditioners reported in the literature. In this unified theory, the procedures are carried out directly on the matrices, independently of the differential equations that originated them. This feature considerably reduces the code-development effort required for their implementation and permits, for example, transforming 2D codes into 3D codes easily. Another source of this simplification is the introduction of two projection matrices, generalizations of the average and jump of a function, which possess superior computational properties. In particular, on the basis of numerical results reported there, we claim that our jump matrix is the optimal choice of the B operator of the FETI methods. A new formula for the Steklov-Poincaré operator, at the discrete level, is also introduced. © 2008 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq, 2009 [source]
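For intuition, the two projection matrices mentioned above generalize the continuous average and jump of a function across a subdomain interface; in the simplest two-subdomain case (a sketch of the underlying idea, not the paper's exact discrete definition) they act as:

```latex
% u_1, u_2: traces of u on the interface from the two neighbouring subdomains.
\{u\} = \tfrac{1}{2}\,(u_1 + u_2), \qquad [\,u\,] = u_1 - u_2
```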


Minimum effort dead-beat control of linear servomechanisms with ripple-free response

OPTIMAL CONTROL APPLICATIONS AND METHODS, Issue 3 2001

Abstract A new and systematic approach to the problem of minimum effort ripple-free dead-beat (EFRFDB) control of the step response of a linear servomechanism is presented. A set of admissible discrete error-feedback controllers is specified, complying with the general conditions for the design of ripple-free dead-beat (RFDB) controllers, regardless of the introduced degree of freedom, defined as the number of steps exceeding their minimum number. The solution is unique for the minimum number of steps, while increasing the number of steps enables an optimal choice from a competitive set of controllers via their parametrization in a finite-dimensional space. As the objective function, Chebyshev's norm of an arbitrarily chosen linear projection of the control variable was chosen. A new, efficient algorithm has been developed for all stable systems of the given class with an arbitrary degree of freedom. A parametrized solution in a finite space of polynomials is obtained through the solution of a standard problem of mathematical programming, which simultaneously yields the maximization of the total position change of the servomechanism for a given number of steps and a given limitation on control effort. A problem formulated in this way is subsequently used in solving the time-optimal (minimum-step) control of a servomechanism to a given steady-state position with a specified limitation on control effort. The effect of EFRFDB control is illustrated and analysed on the example of a linear servomechanism with a torsion spring shaft, using the criteria of control effort and control difference effort. Copyright © 2001 John Wiley & Sons, Ltd. [source]


Phototherapy in the management of atopic dermatitis: a systematic review

PHOTODERMATOLOGY, PHOTOIMMUNOLOGY & PHOTOMEDICINE, Issue 4 2007
N. Bhavani Meduri
Background/purpose: Atopic dermatitis (AD) is a common and extremely burdensome skin disorder with limited therapeutic options. Ultraviolet (UV) phototherapy is a well tolerated, efficacious treatment for AD, but its use is limited by a lack of guidelines in the optimal choice of modality and dosing. Given this deficit, we aim to develop suggestions for the treatment of AD with phototherapy by systematically reviewing the current medical literature. Methods: Data sources: All data sources were identified through searches of MEDLINE via the Ovid interface, the Cochrane Central Register of Controlled Trials, and a complementary manual literature search. Study selection: Studies selected for review met these inclusion criteria, as applied by multiple reviewers: controlled clinical trials of UV phototherapy in the management of AD in human subjects as reported in the English-language literature. Studies limited to hand dermatitis and studies in which subjects were allowed unmonitored use of topical corticosteroids or immunomodulators were excluded. Data extraction: Included studies were assessed by multiple independent observers who extracted and compiled the following data: number of patients, duration of treatment, cumulative doses of UV radiation, adverse effects, and study results. Data quality was assessed by comparing data sets and rechecking source materials if a discrepancy occurred. Results: Nine trials that met the inclusion criteria were identified. Three studies demonstrated that UVA1 is both faster and more efficacious than combined UVAB for treating acute AD. Two trials disclosed the advantages of medium dose (50 J/cm²) UVA1 for treating acute AD. Two trials revealed the superiority of combined UVAB in the management of chronic AD. Two additional studies demonstrated that narrow-band UVB is more effective than either broad-band UVA or UVA1 for managing chronic AD. Conclusion: On the basis of available evidence, the following suggestions can be made: phototherapy with medium-dose (50 J/cm²) UVA1, if available, should be used to control acute flares of AD while UVB modalities, specifically narrow-band UVB, should be used for the management of chronic AD. [source]


A Short Note About Energy-Efficiency Performance of Thermally Coupled Distillation Sequences

THE CANADIAN JOURNAL OF CHEMICAL ENGINEERING, Issue 1 2006
Juan Gabriel Segovia-Hernández
Abstract In this work, we present a comparative study of the energy-efficiency performance between conventional distillation sequences and thermally coupled distillation arrangements (TCDS) for the separation of ternary mixtures of hydrocarbons under the action of feedback control loops. The influence of the relative ease of separation of the feed mixture and its composition was analyzed. The feedback analysis was conducted through servo tests with individual changes in the set points for each of the three product streams. Standard PI controllers were used for each loop. The results show an apparent trend regarding the sequence with a better dynamic performance. Generally, TCDS options performed better for the control of the extreme components of the ternary mixture (A and C), while the conventional sequences offered a better dynamic behaviour for the control of the intermediate component (B). The only case in which there was a dominant structure for all control loops was when the feed contained low amounts of the intermediate component and the mixture had similar relative volatilities. The Petlyuk column provided the optimal choice in such case, which contradicts the general expectations regarding its control behaviour. In addition, the energy demands during the dynamic responses were significantly lower than those observed for the other distillation sequences. TCDS options, therefore, are not only more energy efficient than the conventional sequences, but there are cases in which they also offer better feedback control properties. [source]


Bank Loans Versus Bond Finance: Implications for Sovereign Debtors

THE ECONOMIC JOURNAL, Issue 510 2006
Misa Tanaka
This article analyses the optimal choice between bank loans and bond finance for a sovereign debtor. It shows that if borrowers can be 'publicly monitored' by a rating agency that disseminates the information about their creditworthiness, their choice between bank loans and bond finance is determined by the trade-off between two deadweight costs: the crisis cost of default and the cost of debtor moral hazard. If crisis costs are large, sovereigns use bank loans for short-term financing and bond issuance for long-term financing. I also demonstrate that state contingent debt and IMF intervention can improve welfare. [source]


On a fast calculation of structure factors at a subatomic resolution

ACTA CRYSTALLOGRAPHICA SECTION A, Issue 1 2004
P. V. Afonine
In the last decade, the progress of protein crystallography allowed several protein structures to be solved at a resolution higher than 0.9 Å. Such studies provide researchers with important new information reflecting very fine structural details. The signal from these details is very weak with respect to that corresponding to the whole structure. Its analysis requires high-quality data, which previously were available only for crystals of small molecules, and a high accuracy of calculations. The calculation of structure factors using direct formulae, traditional for 'small-molecule' crystallography, allows a relatively simple accuracy control. For macromolecular crystals, diffraction data sets at a subatomic resolution contain hundreds of thousands of reflections, and the number of parameters used to describe the corresponding models may reach the same order. Therefore, the direct way of calculating structure factors becomes very expensive in computing time when applied to large molecules. These problems of high accuracy and computational efficiency require a re-examination of computer tools and algorithms. The calculation of model structure factors through an intermediate generation of an electron density [Sayre (1951). Acta Cryst. 4, 362–367; Ten Eyck (1977). Acta Cryst. A33, 486–492] may be much more computationally efficient, but contains some parameters (grid step, 'effective' atom radii etc.) whose influence on the accuracy of the calculation is not straightforward. At the same time, the choice of parameters within safety margins that largely ensure a sufficient accuracy may result in a significant loss of CPU time, making it close to the time for the direct-formulae calculations. The impact of the different parameters on the computational efficiency of structure-factor calculation is studied. It is shown that an appropriate choice of these parameters allows the structure factors to be obtained with a high accuracy and in a significantly shorter time than that required when using the direct formulae. Practical algorithms for the optimal choice of the parameters are suggested. [source]
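The two routes contrasted in the abstract can be summarized in standard crystallographic notation (the symbols below are assumed, not quoted from the paper): direct summation over atoms versus Fourier transformation of a model density sampled on a grid.

```latex
% Direct summation: cost proportional to (number of atoms) x (number of reflections).
F(\mathbf{h}) \;=\; \sum_{j=1}^{N_{\mathrm{at}}} f_j(\mathbf{h})\,
                    \exp\!\bigl(2\pi i\,\mathbf{h}\cdot\mathbf{x}_j\bigr)
% Density route: build the model density on a grid and apply an FFT; the grid
% step and the "effective" atom radii control the accuracy of the result.
F(\mathbf{h}) \;\approx\; \mathcal{F}\bigl[\rho_{\mathrm{model}}(\mathbf{x})\bigr](\mathbf{h})
```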


The response of protist and metazoan communities in permeable pavement structures to high oil loadings

THE JOURNAL OF EUKARYOTIC MICROBIOLOGY, Issue 2 2005
S. J. COUPE
Permeable pavement structures (PPS) have been demonstrated to provide an efficient and sustainable method of controlling urban derived hydrocarbon contamination. Until recently, laboratory PPS mesocosm models have used crushed granite as the load bearing sub-base material. However, the use of virgin stone may not be the optimal choice of substrate, as this is not necessarily sustainable or cost effective in the long term when compared to the use of recycled materials. However, recycled materials such as waste concrete may change the environmental conditions in PPS mesocosms, and the characteristics of the eukaryotic community may become different from those which have been previously described. In the current experiment, granite and recycled concrete sub-base materials were compared for their ability to retain 900 g/m² of clean mineral oil applied to the mesocosm surface. It was observed that, even at this very high oil loading, 99.95% of the applied oil was retained within granite and concrete-based structures, but the effluent was two pH units more alkaline in concrete mesocosms than granite. The eukaryotic microfauna in the effluent from both mesocosm types showed a ten-fold increase in protist abundance, and a doubling in the number of protist genera, compared with earlier work using only 18 g/m² of applied oil. Five genera of testate amoebae not previously recorded in PPS were identified; these included Arcella, Assulina, Cryptodifflugia, Cyclopyxis and Difflugia, in addition to the three genera observed previously using the lower oil application. Metazoan abundances increased from 1.5 × 10¹ organisms per ml using the lower oil loadings to 2.0 × 10³/ml in the current experiment. Rotifers and nematodes were the most numerous, but tardigrades were also observed in both concrete and granite-based mesocosms. Despite the differences in effluent pH, it was apparent that there were only marginal differences in the eukaryotic microbiology of the two mesocosm types. This was thought to be due to the layered structural arrangement of the pavement and the location of the highly oil-retentive polypropylene geotextile and extensive biofilm layer positioned above the concrete sub-base. Work is now underway to find oil loadings that will adversely affect the abundance and diversity of eukaryotic organisms in PPS mesocosms. [source]


Market power, price discrimination, and allocative efficiency in intermediate-goods markets

THE RAND JOURNAL OF ECONOMICS, Issue 4 2009
Roman Inderst
We consider a monopolistic supplier's optimal choice of two-part tariff contracts when downstream firms are asymmetric. We find that the optimal discriminatory contracts amplify differences in downstream firms' competitiveness. Firms that are larger, either because they are more efficient or because they sell a superior product, obtain a lower wholesale price than their rivals. This increases allocative efficiency by favoring the more productive firms. In contrast, we show that a ban on price discrimination reduces allocative efficiency and can lead to higher wholesale prices for all firms. As a result, consumer surplus, industry profits, and welfare are lower. [source]
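In the usual notation for two-part tariffs (a standard form assumed here, not quoted from the article), downstream firm i purchasing quantity q pays

```latex
% F_i: fixed fee; w_i: wholesale (per-unit) price for firm i.
T_i(q) \;=\; F_i + w_i\, q
```

so the result above says that the discriminatory optimum sets a lower w_i for the larger, more productive firm, whereas a ban on discrimination can lead to a higher wholesale price for all firms.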


A multi-agent control scheme for a supply chain model

ASIAN JOURNAL OF CONTROL, Issue 2 2008
Mauro Boccadoro
Abstract The reduction of the bullwhip effect on supply chain systems is generally achieved through the optimal choice of policies at the local level and also by setting some type of cooperation among the different agents of the system. Here, such constructive interaction is pursued by the introduction of a negotiation mechanism among neighboring sites, based on revenues/costs directly related to the impact of the bullwhip effect on the performance of each site. The concept is demonstrated for a policy which is quite common in the supply chain literature. The results obtained show the convergence properties of the negotiation for particular disturbance signals, and indicate how cooperating mechanisms can be devised on the basis of the proposed negotiation. Copyright © 2008 John Wiley and Sons Asia Pte Ltd and Chinese Automatic Control Society [source]


STEERING A MOBILE ROBOT: SELECTION OF A VELOCITY PROFILE SATISFYING DYNAMICAL CONSTRAINTS

ASIAN JOURNAL OF CONTROL, Issue 4 2000
M.A. Benayad
ABSTRACT We present an open-loop control design that allows a wheeled mobile robot to be steered along a prespecified smooth geometric path, minimizing a given cost index and satisfying a set of dynamical constraints. Using the concept of "differential flatness," the problem is equivalent to the selection of the optimal time parametrization of the geometric path. This parametrization is characterized by a differential equation involving a function of the curvilinear coordinate along the path. For the minimum time problem, as well as for another index (such as the maximum value of the centripetal acceleration) to be minimized over a given time interval, the problem then reduces to the optimal choice of this function of the curvilinear coordinate. Using spline function interpolation, the problem can be recast as a finite-parameter optimization problem. Numerical simulation results illustrate the procedure. [source]
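The velocity-profile idea can be illustrated with a minimal numerical sketch in which the only active constraint is a bound a_max on centripetal acceleration (the paper handles a richer constraint set and a spline parametrization of the time law); all names and the curvature profile below are illustrative assumptions.

```python
import numpy as np

def min_time_profile(s, kappa, a_max):
    """Speed limit v(s) <= sqrt(a_max/|kappa(s)|) and the resulting traversal time."""
    v_max = np.sqrt(a_max / np.maximum(np.abs(kappa), 1e-9))  # avoid division by zero
    dt = np.diff(s) / v_max[:-1]                               # crude forward integration
    return v_max, dt.sum()

s = np.linspace(0.0, 10.0, 501)          # curvilinear coordinate along the path [m]
kappa = 0.5 * np.sin(0.6 * s) ** 2       # illustrative curvature profile [1/m]
v, T = min_time_profile(s, kappa, a_max=2.0)
print(f"traversal time at the acceleration limit: {T:.2f} s")
```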


Risk Factor Adjustment in Marginal Structural Model Estimation of Optimal Treatment Regimes

BIOMETRICAL JOURNAL, Issue 5 2009
Erica E. M. Moodie
Abstract Marginal structural models (MSMs) are an increasingly popular tool, particularly in epidemiological applications, to handle the problem of time-varying confounding by intermediate variables when studying the effect of sequences of exposures. Considerable attention has been devoted to the optimal choice of treatment model for propensity score-based methods and, more recently, to variable selection in the treatment model for inverse weighting in MSMs. However, little attention has been paid to the modeling of the outcome of interest, particularly with respect to the best use of purely predictive, non-confounding variables in MSMs. Four modeling approaches are investigated in the context of both static treatment sequences and optimal dynamic treatment rules with the goal of estimating a marginal effect with the least error, both in terms of bias and variability. [source]
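For context, inverse weighting in MSMs is commonly implemented with stabilized inverse-probability-of-treatment weights of the following standard form (notation assumed, not quoted from the paper); the modeling question raised above concerns the outcome model, which sits on top of these weights.

```latex
% A_t: exposure at time t; \bar{A}_{t-1}: exposure history; \bar{L}_t: covariate
% (confounder) history up to time t; the product runs over the follow-up times.
sw_i \;=\; \prod_{t=0}^{T}
  \frac{f\bigl(A_{t} \mid \bar{A}_{t-1}\bigr)}
       {f\bigl(A_{t} \mid \bar{A}_{t-1},\, \bar{L}_{t}\bigr)}
```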


Use of Physicochemical Tools to Determine the Choice of Optimal Enzyme: Stabilization of D-Amino Acid Oxidase

BIOTECHNOLOGY PROGRESS, Issue 3 2003
Lorena Betancor
An evaluation of the stability of several forms (including soluble and two immobilized preparations) of D-amino acid oxidases from Trigonopsis variabilis (TvDAAO) and Rhodotorula gracilis (RgDAAO) is presented here. Initially, both soluble enzymes become inactivated via subunit dissociation, and the most thermostable enzyme seemed to be TvDAAO, which was 3–4 times more stable than RgDAAO at a protein concentration of 30 µg/mL. Immobilization on poorly activated supports was unable to stabilize the enzyme, while highly activated supports improved the enzyme stability. Better results were obtained when using highly activated glyoxyl agarose supports than when glutaraldehyde was used. Thus, multisubunit immobilization on highly activated glyoxyl agarose dramatically improved the stability of RgDAAO (by ca. 15 000-fold) while only marginally improving the stability of TvDAAO (by 15–20-fold), at a protein concentration of 6.7 µg/mL. Therefore, the optimal immobilized RgDAAO was much more stable than the optimal immobilized TvDAAO at this enzyme concentration. The lower stabilization effect on TvDAAO was associated with the inactivation of this enzyme by FAD dissociation that was not prevented by immobilization. Finally, nonstabilized RgDAAO was marginally more stable in the presence of H₂O₂ than TvDAAO, but after stabilization by multisubunit immobilization, its stability became 10 times higher than that of TvDAAO. Therefore, the most stable DAAO preparation and the optimal choice for an industrial application seems to be RgDAAO immobilized on glyoxyl agarose. [source]