Maximization
Selected Abstracts

SALES MAXIMIZATION AND PROFIT MAXIMIZATION: A NOTE ON THE DECISION OF A SALES MAXIMIZER TO THE INCREASE OF PER UNIT COST. PACIFIC ECONOMIC REVIEW, Issue 5 2007. Ke Li. A common mistake in currently used textbooks is pointed out, and a new proposition is proposed to replace a false statement found there. [source]

PROFIT MAXIMIZATION AND SOCIAL OPTIMUM WITH NETWORK EXTERNALITY. THE MANCHESTER SCHOOL, Issue 2 2006. URIEL SPIEGEL. The paper analyzes the options open to monopoly firms that sell Internet services. We consider two groups of customers that differ in their reservation prices. The monopoly price-discriminates between customers by producing two versions of the product: a high-quality version sold at a positive price and a lower-quality version given away at zero price. The monopoly can sell advertising space to increase its revenue but risks losing customers who are annoyed by advertising. Network externalities increase the incentive to increase output; thus we find cases where profit maximization is consistent with maximum social welfare. [source]

EXACT P-VALUES FOR DISCRETE MODELS OBTAINED BY ESTIMATION AND MAXIMIZATION. AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 4 2008. Chris J. Lloyd. Summary: In constructing exact tests from discrete data, one must deal with the possible dependence of the P-value on nuisance parameters as well as the discreteness of the sample space. A classical but heavy-handed approach is to maximize over the nuisance parameter. We prove what has previously been understood informally, namely that maximization produces the unique and smallest possible P-value subject to the ordering induced by the underlying test statistic and test validity. On the other hand, allowing for the worst case will be more attractive when the P-value is less dependent on the nuisance parameter. We investigate the extent to which estimating the nuisance parameter under the null reduces this dependence. An approach somewhere between full maximization and estimation is partial maximization, with an appropriate penalty, as introduced by Berger & Boos (1994, P values maximized over a confidence set for the nuisance parameter, J. Amer. Statist. Assoc., 89, 1012-1016). It is argued that estimation followed by maximization is an attractive, but computationally more demanding, alternative to partial maximization. We illustrate the ideas on a range of low-dimensional but important examples for which the alternative methods can be investigated completely numerically. [source]
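A small numerical sketch (not taken from the paper) of two of the devices discussed in the Lloyd abstract above, full maximization over the nuisance parameter and Berger-Boos partial maximization, for the exact unconditional comparison of two binomial proportions. The pooled Z statistic, grid resolution, confidence-interval construction and penalty gamma are arbitrary illustrative choices.

```python
# Illustrative sketch: exact unconditional P-values for comparing two binomial
# proportions, with the nuisance parameter (the common success probability p)
# handled by (a) full maximization and (b) Berger-Boos partial maximization.
import numpy as np
from scipy import stats

def z_stat(y1, y2, n1, n2):
    """Pooled-variance Z statistic for H0: p1 = p2 (larger => more extreme)."""
    p = (y1 + y2) / (n1 + n2)
    se = np.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    with np.errstate(divide="ignore", invalid="ignore"):
        z = (y1 / n1 - y2 / n2) / se
    return np.nan_to_num(z)

def tail_prob(t_obs, p, n1, n2):
    """P(T >= t_obs) under H0 when the common success probability is p."""
    y1, y2 = np.meshgrid(np.arange(n1 + 1), np.arange(n2 + 1), indexing="ij")
    probs = stats.binom.pmf(y1, n1, p) * stats.binom.pmf(y2, n2, p)
    return probs[z_stat(y1, y2, n1, n2) >= t_obs - 1e-12].sum()

def p_maximized(y1, y2, n1, n2, grid=np.linspace(1e-4, 1 - 1e-4, 400)):
    """Worst-case P-value: supremum over a grid of nuisance-parameter values."""
    t_obs = z_stat(y1, y2, n1, n2)
    return max(tail_prob(t_obs, p, n1, n2) for p in grid)

def p_berger_boos(y1, y2, n1, n2, gamma=1e-3):
    """Maximize only over a 1-gamma confidence set for p, then add gamma back."""
    t_obs = z_stat(y1, y2, n1, n2)
    # Approximate Jeffreys-style interval for the pooled proportion (illustration).
    lo, hi = stats.beta.ppf([gamma / 2, 1 - gamma / 2],
                            y1 + y2 + 0.5, n1 + n2 - y1 - y2 + 0.5)
    grid = np.linspace(max(lo, 1e-4), min(hi, 1 - 1e-4), 200)
    return min(1.0, max(tail_prob(t_obs, p, n1, n2) for p in grid) + gamma)

print(p_maximized(7, 2, 12, 12), p_berger_boos(7, 2, 12, 12))
```

Here p_maximized is the worst-case P-value over the whole parameter range, while p_berger_boos restricts the supremum to a confidence set for the nuisance parameter and adds the penalty gamma, in the spirit of the Berger & Boos reference.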
The Relation between Stakeholder Management, Firm Value, and CEO Compensation: A Test of Enlightened Value Maximization. FINANCIAL MANAGEMENT, Issue 3 2010. Bradley W. Benson. Whether firms pursue shareholder value maximization or the maximization of stakeholder welfare is a controversial issue whose outcomes seem irreconcilable. We propose that firms are likely to compensate their executives for pursuing the firm's goal, be it shareholder value maximization or the maximization of stakeholder welfare. In this paper, we examine the correlation between firm value, stakeholder management, and compensation. We find that stakeholder management is positively related to firm value. However, firms do not compensate managers for having good relationships with their stakeholders. These results do not support stakeholder theory. We also find an endogenous association between compensation and firm value. Our results are consistent with Jensen's (2001) enlightened value maximization theory. Managers are compensated for achieving the firm's ultimate goal, value maximization. However, managers optimize interaction with stakeholders to accomplish this objective. [source]

Hierarchical Zeolite Catalysts: Zeolite Catalysts with Tunable Hierarchy Factor by Pore-Growth Moderators (Adv. Funct. Mater. 24/2009). ADVANCED FUNCTIONAL MATERIALS, Issue 24 2009. On page 3972, Pérez-Ramírez et al. introduce the hierarchy factor as a valuable descriptor to categorize hierarchical zeolites and to optimize their design for catalytic applications. They demonstrate a direct correlation between the catalytic performance of ZSM-5 in benzene alkylation and the hierarchy factor. Maximization of the hierarchy factor is achieved by enhancing the mesopore surface area without reducing the micropore volume. For this purpose, a novel desilication variant involving NaOH treatment in the presence of pore-growth moderators (quaternary ammonium cations) is presented. [source]

Implications of Executive Hedge Markets for Firm Value Maximization. JOURNAL OF ECONOMICS & MANAGEMENT STRATEGY, Issue 2 2007. açhan Çelen. This paper analyzes the incentive implications of executive hedge markets. The manager can promise the return from his shares to third parties in exchange for a fixed payment (swap contracts), and/or he can trade a customized security correlated with his firm-specific risk. The customized security improves incentives by diversifying the manager's firm-specific risk. However, unless they are exclusive, swap contracts lead to a complete unraveling of incentives. When security customization is sufficiently high, the manager trades only the customized security, not any nonexclusive swap contracts, and incentives improve. Access to highly customized hedge securities and/or exclusive swap contracts increases the manager's pay-performance sensitivity. [source]

Compatibility factor for the power output of a thermogenerator. PHYSICA STATUS SOLIDI - RAPID RESEARCH LETTERS, Issue 6 2007. W. Seifert. Abstract: The compatibility approach introduced by Snyder and Ursell enables the description of both the thermoelectric generator (TEG) and the Peltier cooler (TEC) within the framework of a unified 1D model. Both the TEG's efficiency η and the TEC's coefficient of performance (C.O.P.) can be formulated in terms of the reduced current density u, which has been introduced as a new, intensive state variable of a thermoelectric system. For η and the C.O.P., integral expressions are formed from additive contributions of all length segments of a thermoelectric element, enabling exact calculation of these quantities even for arbitrarily graded elements. Maximization of these global performance parameters can thus be reduced to local maximization. Here the maximum power from a TEG with fixed length but variable heat supply is considered, which leads to the new concept of power-related compatibility and to the introduction of a new, different compatibility factor. (© 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
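For context, the standard quantities behind the compatibility approach of Snyder and Ursell can be written as below; these are the usual textbook forms and are not quoted from the paper, whose new power-related factor is a different quantity.

```latex
% Standard background (not from the paper): relative (reduced) current density
% and the efficiency-based compatibility factor of a thermoelectric generator.
u = \frac{J}{\kappa \nabla T}, \qquad
s = \frac{\sqrt{1 + zT} - 1}{\alpha T}, \qquad
z = \frac{\alpha^{2}}{\rho \kappa}
```

Here J is the electric current density, kappa the thermal conductivity, alpha the Seebeck coefficient and rho the electrical resistivity; the local reduced efficiency of a segment is maximized when u equals s, which is why compatibility between segments matters for graded elements.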
Analysis of Revenue Maximization Under Two Movie-Screening Policies. PRODUCTION AND OPERATIONS MANAGEMENT, Issue 1 2010. Milind Dawande. A few weeks before the start of a major season, movie distributors arrange a private screening of the movies to be released during that season for exhibitors and, subsequently, solicit bids for these movies from exhibitors. Since the number of such solicitations far exceeds the number of movies that can feasibly be screened at a multiplex (i.e., a theater with multiple screens), the problem of interest for an exhibitor is that of choosing a subset of movies for which to submit bids to the distributors. We consider the problem of the selection and screening of movies for a multiplex to maximize the exhibitor's cumulative revenue over a fixed planning horizon. The release times of the movies that can potentially be selected during the planning horizon are known a priori. If selected for screening, a movie must be scheduled through its obligatory period, after which its run may or may not be extended. The problem involves two primary decisions: (i) the selection of a subset of movies for screening from those that can potentially be screened during the planning horizon and (ii) the determination of the duration of screening for the selected movies. We investigate two basic and popular screening policies: preempt-resume and non-preempt. In the preempt-resume policy, the screening of a movie can be preempted and resumed in its post-obligatory period. In the non-preempt policy, a movie is screened continuously from its release time until the time it is permanently withdrawn from the multiplex. We show that optimizing under the preempt-resume policy is strongly NP-hard, while the problem under the non-preempt policy is polynomially solvable. We develop efficient algorithms for the problem under both screening policies and show that the revenue obtained from the preempt-resume policy can be significantly higher as compared with that from the non-preempt policy. Our work provides managers of multiplexes with valuable insights into the selection and screening of movies and offers an easy-to-use computational tool to compare the revenues obtainable from adopting these popular policies. [source]

Optimization of a model IV fluidized catalytic cracking unit. THE CANADIAN JOURNAL OF CHEMICAL ENGINEERING, Issue 4 2001. Rein Luus. Abstract: Maximization of a profit function related to a fluidized catalytic cracking unit model was carried out by the Luus-Jaakola optimization procedure. A 7-dimensional search is carried out on an FCC unit described by 113 nonlinear algebraic equations and 9 differential equations. Despite the low sensitivity and the existence of several local optima, the global optimum was obtained with a reasonable amount of computational effort. At the optimum, the profit function is 1% higher than when the air blowers are constrained to operate at their maximum capacity. [source]
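The Luus-Jaakola procedure named above is a direct random search with a contracting search region. The sketch below shows the generic scheme on a toy 7-dimensional profit surface; the actual FCC model with its 113 algebraic and 9 differential equations is not reproduced, and the contraction factor and iteration counts are illustrative.

```python
# Generic Luus-Jaakola direct random search (illustration only).
import numpy as np

def luus_jaakola(profit, lower, upper, n_outer=50, n_inner=100,
                 contraction=0.95, seed=0):
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    best_x = lower + rng.random(lower.size) * (upper - lower)
    best_f = profit(best_x)
    radius = (upper - lower) / 2.0          # initial search-region half-width
    for _ in range(n_outer):
        for _ in range(n_inner):
            x = best_x + (rng.random(lower.size) - 0.5) * 2.0 * radius
            x = np.clip(x, lower, upper)    # keep candidates feasible
            f = profit(x)
            if f > best_f:                  # maximize the profit function
                best_x, best_f = x, f
        radius *= contraction               # shrink the region after each pass
    return best_x, best_f

# Toy 7-dimensional profit surface standing in for the FCC unit model.
toy_profit = lambda x: -np.sum((x - 0.3) ** 2)
x_opt, f_opt = luus_jaakola(toy_profit, lower=np.zeros(7), upper=np.ones(7))
print(x_opt.round(3), f_opt)
```

The contracting region is what lets the method escape shallow local optima early on while still refining the solution near the end of the run.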
Modeling Longitudinal Data with Nonparametric Multiplicative Random Effects Jointly with Survival Data. BIOMETRICS, Issue 2 2008. Jimin Ding. Summary: In clinical studies, longitudinal biomarkers are often used to monitor disease progression and failure time. Joint modeling of longitudinal and survival data has certain advantages and has emerged as an effective way to mutually enhance information. Typically, a parametric longitudinal model is assumed to facilitate the likelihood approach. However, the choice of a proper parametric model turns out to be more elusive than in standard longitudinal studies in which no survival endpoint occurs. In this article, we propose a nonparametric multiplicative random effects model for the longitudinal process, which has many applications and leads to a flexible yet parsimonious nonparametric random effects model. A proportional hazards model is then used to link the biomarkers and event time. We use B-splines to represent the nonparametric longitudinal process, and select the number of knots and degrees based on a version of the Akaike information criterion (AIC). Unknown model parameters are estimated by maximizing the observed joint likelihood, which is iteratively maximized by the Monte Carlo Expectation Maximization (MCEM) algorithm. Due to the simplicity of the model structure, the proposed approach has good numerical stability and compares well with the competing parametric longitudinal approaches. The new approach is illustrated with primary biliary cirrhosis (PBC) data, aiming to capture nonlinear patterns of serum bilirubin time courses and their relationship with survival time of PBC patients. [source]

A Multiobjective and Stochastic System for Building Maintenance Management. COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 5 2000. Z. Lounis. Building maintenance management involves decision making under multiple objectives and uncertainty, in addition to budgetary constraints. This article presents the development of a multiobjective and stochastic optimization system for maintenance management of roofing systems that integrates stochastic condition-assessment and performance-prediction models with a multiobjective optimization approach. The maintenance optimization includes determination of the optimal allocation of funds and prioritization of roofs for maintenance, repair, and replacement that simultaneously satisfy the following conflicting objectives: (1) minimization of maintenance and repair costs, (2) maximization of network performance, and (3) minimization of risk of failure. A product model of the roof system is used to provide the data framework for collecting and processing data. Compromise programming is used to solve this multiobjective optimization problem and provides building managers with an effective decision support system that identifies the optimal projects for repair and replacement while achieving a satisfactory tradeoff between the conflicting objectives. [source]
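Compromise programming, used in the roofing abstract above, ranks alternatives by a weighted L_p distance from the ideal point. The candidate plans, weights and exponent below are invented for illustration and do not come from the paper.

```python
# Minimal compromise-programming sketch: pick the maintenance plan closest to
# the ideal point of (cost, performance, risk). All numbers are illustrative.
import numpy as np

# Rows: candidate repair/replacement plans. Columns: cost, performance, risk.
plans = np.array([
    [120.0, 0.70, 0.20],
    [ 90.0, 0.55, 0.35],
    [150.0, 0.85, 0.10],
])
maximize = np.array([False, True, False])   # performance is to be maximized
weights = np.array([0.4, 0.4, 0.2])
p = 2.0                                     # exponent of the L_p metric

ideal = np.where(maximize, plans.max(0), plans.min(0))
worst = np.where(maximize, plans.min(0), plans.max(0))
# Normalized regret of each plan on each objective, scaled to [0, 1].
regret = np.abs(ideal - plans) / np.abs(ideal - worst)
# One common weighting convention for the compromise-programming distance.
distance = (weights * regret ** p).sum(axis=1) ** (1.0 / p)
print("best plan index:", int(distance.argmin()), distance.round(3))
```

Varying p shifts the balance between total regret (p = 1) and the single worst objective (large p), which is how the tradeoff between the conflicting objectives is tuned.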
Maximizing revenue in Grid markets using an economically enhanced resource manager. CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 14 2010. M. Macías. Abstract: Traditional resource management has had as its main objective the optimization of throughput, based on parameters such as CPU, memory, and network bandwidth. With the appearance of Grid markets, new variables that determine economic expenditure, benefit and opportunity must be taken into account. The Self-organizing ICT Resource Management (SORMA) project aims at allowing resource owners and consumers to exploit market mechanisms to sell and buy resources across the Grid. SORMA's motivation is to achieve efficient resource utilization by maximizing revenue for resource providers and minimizing the cost of resource consumption within a market environment. An overriding factor in Grid markets is the need to ensure that the desired quality of service levels meet the expectations of market participants. This paper explains the proposed use of an economically enhanced resource manager (EERM) for resource provisioning based on economic models. In particular, this paper describes techniques used by the EERM to support revenue maximization across multiple service level agreements and provides an application scenario to demonstrate its usefulness and effectiveness. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Oil transnational corporations: corporate social responsibility and environmental sustainability. CORPORATE SOCIAL RESPONSIBILITY AND ENVIRONMENTAL MANAGEMENT, Issue 4 2008. Felix M. Edoho. Abstract: Corporate social responsibility (CSR) occupies the center stage of the debate on the operations of transnational corporations in developing countries. The quest for profit maximization as the overriding value, at the expense of corporate social responsibility, puts some transnational corporations on a collision path with their Niger Delta host communities, who are demanding environmental sustainability. Militant groups have shut down flow stations and taken oil workers hostage. Unresponsiveness of oil firms to community demands for CSR is heightening the volatility of the Nigerian oil industry. The problem will intensify until oil firms initiate authentic CSR strategies to address the environmental havoc emanating from their operations. At the core of such strategies is recognizing the host communities as bona fide stakeholders and addressing their socioeconomic needs. Copyright © 2007 John Wiley & Sons, Ltd and ERP Environment. [source]

Knowledge Life Cycle, Knowledge Inventory, and Knowledge Acquisition Strategies. DECISION SCIENCES, Issue 1 2010. Andrew N. K. Chen. ABSTRACT: For a knowledge- and skill-centric organization, the process of knowledge management encompasses three important and closely related elements: (i) task assignments, (ii) knowledge acquisition through training, and (iii) maintaining a proper level of knowledge inventory among the existing workforce. The trade-off between profit maximization in the short run and agility and flexibility in the long term is a vexing problem in knowledge management. In this study, we examine the effects of different training strategies on short-term operational efficiency and long-term workforce flexibility. We address our research objective by developing a computational model for task and training assignment in a dynamic knowledge environment consisting of multiple distinct knowledge dimensions. Overall, we find that organizational slack is an important variable in determining the effectiveness of training strategies. Training strategies focused on the most recent skills are found to be the preferred option in most of the considered scenarios. Interestingly, increased efficiencies in training can actually create preference conflict between employees and the firm. Our findings indicate that firms facing longer knowledge life cycles, higher slack in workforce capacity, and better training efficiencies actually face more difficult challenges in knowledge management. [source]
Peroxisome proliferator-activated receptor-γ co-activator-1α (PGC-1α) gene polymorphisms and their relationship to Type 2 diabetes in Asian Indians. DIABETIC MEDICINE, Issue 11 2005. K. S. Vimaleswaran. Abstract: Aims: The objective of the present investigation was to examine the relationship of three polymorphisms, Thr394Thr, Gly482Ser and +A2962G, of the peroxisome proliferator-activated receptor-γ co-activator-1 alpha (PGC-1α) gene with Type 2 diabetes in Asian Indians. Methods: The study group comprised 515 Type 2 diabetic and 882 normal glucose tolerant subjects chosen from the Chennai Urban Rural Epidemiology Study, an ongoing population-based study in southern India. The three polymorphisms were genotyped using polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP). Haplotype frequencies were estimated using an expectation-maximization (EM) algorithm. Linkage disequilibrium was estimated from the estimates of haplotypic frequencies. Results: The three polymorphisms studied were not in linkage disequilibrium. With respect to the Thr394Thr polymorphism, 20% of the Type 2 diabetic patients (103/515) had the GA genotype compared with 12% of the normal glucose tolerance (NGT) subjects (108/882) (P = 0.0004). The frequency of the A allele was also higher in Type 2 diabetic subjects (0.11) compared with NGT subjects (0.07) (P = 0.002). Regression analysis revealed the odds ratio for Type 2 diabetes for the susceptible genotype (XA) to be 1.683 (95% confidence interval: 1.264-2.241, P = 0.0004). Age-adjusted glycated haemoglobin (P = 0.003), serum cholesterol (P = 0.001) and low-density lipoprotein (LDL) cholesterol (P = 0.001) levels and systolic blood pressure (P = 0.001) were higher in the NGT subjects with the XA genotype compared with the GG genotype. There were no differences in genotype or allelic distribution between the Type 2 diabetic and NGT subjects with respect to the Gly482Ser and +A2962G polymorphisms. Conclusions: The A allele of the Thr394Thr (G>A) polymorphism of the PGC-1 gene is associated with Type 2 diabetes in Asian Indian subjects, and the XA genotype confers 1.6 times higher risk for Type 2 diabetes compared with the GG genotype in this population. [source]
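The expectation-maximization step used above for haplotype frequencies can be illustrated in its simplest two-locus form, where only double heterozygotes have ambiguous phase. The genotype list and starting values below are invented and are not the study's data.

```python
# EM sketch for estimating two-locus haplotype frequencies from unphased
# genotypes. Each subject is coded as (minor-allele count at locus 1,
# minor-allele count at locus 2); only (1, 1) subjects have ambiguous phase.
import numpy as np

genotypes = [(0, 0), (1, 1), (1, 1), (2, 1), (1, 0), (2, 2), (0, 1), (1, 1)]
# Haplotype index = 2*i + j with i, j the minor-allele indicators:
# 0 = AB, 1 = Ab, 2 = aB, 3 = ab.
freq = np.full(4, 0.25)                    # start from equal frequencies

for _ in range(200):                       # EM iterations
    counts = np.zeros(4)
    for g1, g2 in genotypes:
        if (g1, g2) == (1, 1):             # double heterozygote: AB/ab vs Ab/aB
            w_cis = freq[0] * freq[3]
            w_trans = freq[1] * freq[2]
            p_cis = w_cis / (w_cis + w_trans)
            counts[[0, 3]] += p_cis        # E-step: expected haplotype counts
            counts[[1, 2]] += 1.0 - p_cis
        else:                              # phase is unambiguous
            h1 = 2 * int(g1 >= 1) + int(g2 >= 1)
            h2 = 2 * int(g1 == 2) + int(g2 == 2)
            counts[h1] += 1
            counts[h2] += 1
    freq = counts / counts.sum()           # M-step: renormalize to frequencies

print(dict(zip(["AB", "Ab", "aB", "ab"], freq.round(3))))
```

With three loci, as in the abstract, the same E-step simply runs over all phase configurations compatible with each multilocus genotype.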
Decision Theory Applied to an Instrumental Variables Model. ECONOMETRICA, Issue 3 2007. Gary Chamberlain. This paper applies some general concepts in decision theory to a simple instrumental variables model. There are two endogenous variables linked by a single structural equation; k of the exogenous variables are excluded from this structural equation and provide the instrumental variables (IV). The reduced-form distribution of the endogenous variables conditional on the exogenous variables corresponds to independent draws from a bivariate normal distribution with linear regression functions and a known covariance matrix. A canonical form of the model has a three-part parameter vector in which the parameter of interest is normalized to be a point on the unit circle. The reduced-form coefficients on the instrumental variables are split into a scalar strength parameter and a direction vector, which is normalized to be a point on the (k-1)-dimensional unit sphere; the scalar measures the strength of the association between the endogenous variables and the instrumental variables, and the vector is a measure of direction. A prior distribution is introduced for the IV model, with the three parameters treated as independent random variables. The distribution for the parameter of interest is uniform on the unit circle; the distribution for the direction vector is uniform on the unit sphere of dimension k-1. These choices arise from the solution of a minimax problem. The prior for the strength parameter is left general. It turns out that, given any positive value of the strength parameter, the Bayes estimator of the parameter of interest does not depend on that value; it equals the maximum-likelihood estimator. This Bayes estimator has constant risk; because it minimizes average risk with respect to a proper prior, it is minimax. The same general concepts are applied to obtain confidence intervals. The prior distribution is used in two ways. The first way is to integrate out the direction nuisance parameter in the IV model. This gives an integrated likelihood function with two scalar parameters, the parameter of interest and the strength parameter. Inverting a likelihood ratio test based on the integrated likelihood function provides a confidence interval for the parameter of interest. This lacks finite-sample optimality, but invariance arguments show that the risk function depends only on the strength parameter and not on the other two parameters. The second approach to confidence sets aims for finite-sample optimality by setting up a loss function that trades off coverage against the length of the interval. The automatic uniform priors are used for the parameter of interest and the direction vector, but a prior is also needed for the scalar strength parameter, and no guidance is offered on this choice. The Bayes rule is a highest posterior density set. Invariance arguments show that the risk function depends only on the strength parameter and not on the other two parameters. The optimality result combines average risk and maximum risk: the confidence set minimizes the average, taken with respect to the prior distribution for the strength parameter, of the maximum risk, where the maximization is with respect to the parameter of interest and the direction vector. [source]

Defining Economics: The Long Road to Acceptance of the Robbins Definition. ECONOMICA, Issue 2009. ROGER E. BACKHOUSE. Robbins' Essay gave economics a definition that came to dominate the professional literature. This definition laid a foundation that could be seen as justifying both the narrowing of economic theory to the theory of constrained maximization or rational choice and economists' ventures into other social science fields. Though often presented as self-evidently correct, both the definition itself and the developments that it has been used to support were keenly contested. This paper traces the reception, diffusion and contesting of the Robbins definition, arguing that this process took around three decades and that even then there was still significant dissent. [source]
Optimization of Operating Temperature for Continuous Immobilized Glucose Isomerase Reactor with Pseudo Linear Kinetics. ENGINEERING IN LIFE SCIENCES (ELECTRONIC), Issue 5 2004. N.M. Faqir. Abstract: In this work, the optimal operating temperature for the enzymatic isomerization of glucose to fructose using a continuous immobilized glucose isomerase packed bed reactor is studied. This optimization problem describing the performance of such a reactor is based on reversible pseudo linear kinetics and is expressed in terms of a recycle ratio. The thermal deactivation of the enzyme as well as the substrate protection during reactor operation is considered. The problem is formulated as maximization of the productivity of fructose. This constrained nonlinear optimization problem is solved using the disjoint policy of the calculus of variations. Accordingly, this method of solution transforms the nonlinear optimization problem into a system of two coupled nonlinear ordinary differential equations (ODEs) of the initial value type, one equation for the operating temperature profile and the other for the enzyme activity. The ODE for the operating temperature profile depends on the recycle ratio, the operating time period, and the reactor residence time, as well as the kinetics of the reaction and enzyme deactivation. The optimal initial operating temperature is selected by solving the ODE system and maximizing the fructose productivity. This results in an unconstrained one-dimensional optimization problem with simple bounds on the operating temperature. Depending on the limits of the recycle ratio, which represents either a plug flow or a mixed flow reactor, it is found that the optimal temperature of operation is characterized by an increasing temperature profile. For higher residence times and low operating periods, the residual enzyme activity in the mixed flow reactor is higher than that for the plug flow reactor, which in turn allows the mixed flow reactor to operate at a lower temperature than the plug flow reactor. At long operating times and short residence times, the operating temperature profiles are almost the same for both reactors. This could be attributed to the effect of substrate protection on enzyme stability, which is almost the same for both reactors. An improvement in fructose productivity for both types of reactors is achieved when compared to operation at the constant optimum temperature. The improvement in fructose productivity for the plug flow reactor is significant in comparison with the mixed flow reactor. [source]

INAR(1) modeling of overdispersed count series with an environmental application. ENVIRONMETRICS, Issue 4 2008. Harry Pavlopoulos. Abstract: This paper is concerned with a novel version of the INAR(1) model, a non-linear auto-regressive Markov chain on the non-negative integers, with innovations following a finite mixture distribution of Poisson laws. For m ≥ 2 mixture components, the stationary marginal probability distribution of the chain is overdispersed relative to a Poisson, thus making INAR(1) suitable for modeling time series of counts with arbitrary overdispersion. The one-step transition probability function of the chain is also a finite mixture, of m Poisson-Binomial laws, facilitating likelihood-based inference for model parameters. An explicit EM algorithm is devised for inference by maximization of a conditional likelihood. Alternative options for inference are discussed along with criteria for selecting m. Integer-valued prediction (IP) is developed by a parametric bootstrap approach to 'coherent' forecasting, and a test statistic based on predictions is introduced for assessing performance of the fitted model. The proposed model is fitted to time series of counts of pixels where spatially averaged rain rate exceeds a given threshold level, illustrating its capabilities in challenging cases of highly overdispersed count data. Copyright © 2007 John Wiley & Sons, Ltd. [source]
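The INAR(1) construction described above, binomial thinning of the previous count plus an innovation drawn from a finite Poisson mixture, can be simulated directly. The thinning probability and mixture settings below are illustrative values, not estimates from the rainfall data.

```python
# Simulating an INAR(1) count series with binomial thinning and innovations
# drawn from a finite mixture of Poissons (illustrative parameters only).
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.6                         # thinning (survival) probability
mix_weights = np.array([0.7, 0.3])  # finite Poisson mixture for the innovations
mix_means = np.array([1.0, 6.0])    # well-separated means give overdispersion

def simulate_inar1(n, x0=0):
    x = np.empty(n, dtype=int)
    x_prev = x0
    for t in range(n):
        survivors = rng.binomial(x_prev, alpha)         # alpha o X_{t-1}
        component = rng.choice(len(mix_weights), p=mix_weights)
        innovation = rng.poisson(mix_means[component])  # epsilon_t
        x[t] = survivors + innovation
        x_prev = x[t]
    return x

series = simulate_inar1(5000)
print("mean:", series.mean().round(2),
      "variance:", series.var().round(2))  # variance > mean => overdispersion
```

The printed variance exceeding the mean is the overdispersion property that the mixture innovations are designed to produce.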
Algorithmic challenges and current problems in market coupling regimes. EUROPEAN TRANSACTIONS ON ELECTRICAL POWER, Issue 4 2009. Bernd Tersteegen. Abstract: Increasing cross-border trade at European borders has led to the necessity of an efficient allocation of scarce cross-border capacities. Explicit auctions used to be the commonly applied allocation method at most borders, but due to the separation of the trade of electrical energy and the allocation of cross-border capacity, market inefficiencies arise. As a consequence, a trend toward market coupling, which combines the trade of electrical energy with the allocation of cross-border capacity, can be observed across Europe. The most convincing approach to the complex optimization task associated with market coupling solves the problem by maximizing system-wide welfare in a closed-form optimization. Practical experience shows that problems remain with such an approach. This paper thoroughly analyzes problems that may occur in market coupling regimes with a closed-form optimization. An extension of formerly presented formulations of the optimization problem is presented, which avoids the described problems. The extended formulation still assures practically feasible calculation times of far less than 10 minutes, even for systems with up to 12 market areas. Further, a fair and transparent approach to determining feasible market clearing prices that does not neglect the temporal and market-coupling relationships between prices is shown, and it is demonstrated that this approach does not lead to practically infeasible calculation times. Copyright © 2009 John Wiley & Sons, Ltd. [source]

Improved genetic algorithm for multi-objective reactive power dispatch problem. EUROPEAN TRANSACTIONS ON ELECTRICAL POWER, Issue 6 2007. D. Devaraj. Abstract: This paper presents an improved genetic algorithm (GA) approach for solving the multi-objective reactive power dispatch problem. Loss minimization and maximization of the voltage stability margin are taken as the objectives. The maximum L-index of the system is used to specify the voltage stability level. Generator terminal voltages, reactive power generation of capacitor banks and tap-changing transformer settings are taken as the optimization variables. In the proposed GA, voltage magnitudes are represented as floating-point numbers, while transformer tap settings and reactive power generation of capacitor banks are represented as integers. This alleviates the problems conventional binary-coded GAs face in dealing with real variables and with integer variables whose total number of permissible choices is not a power of two. Crossover and mutation operators that can deal with mixed variables are proposed. The proposed method has been tested on the IEEE 30-bus system and is compared with conventional methods and a binary-coded GA. The proposed method produced a loss lower than the value reported earlier and is well suited to solving the mixed-integer optimization problem. Copyright © 2007 John Wiley & Sons, Ltd. [source]

Optimum adaptive OFDM systems. EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 3 2003. Lorenzo Piazzo. When Orthogonal Frequency Division Multiplexing (OFDM) is used to transmit information over a frequency selective channel, it is convenient to vary the power and the number of bits allocated to each subcarrier in order to optimize the system performance. In this paper, the three classical problems of transmission power minimization, error rate minimization and throughput maximization are investigated in a unified manner. The relations existing among these three problems are clarified and a precise definition of an optimum system is given. A general and rigorous way to extend the solution of any of the three problems in order to obtain the solution of the other two is presented. This result is used to devise an efficient algorithm for error rate minimization. Copyright © 2003 AEI. [source]
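A common way to make the per-subcarrier bit and power allocation concrete is greedy margin-adaptive loading in the style of Hughes-Hartogs: repeatedly give one more bit to the subcarrier that needs the least extra power. This is a generic illustration of the power-minimization problem, not the algorithm devised in the paper; the channel gains, SNR gap and bit target are arbitrary.

```python
# Greedy margin-adaptive bit loading for OFDM (Hughes-Hartogs style sketch):
# allocate a target number of bits so that total transmit power is minimized.
import numpy as np

rng = np.random.default_rng(7)
gains = np.abs(rng.normal(size=16) + 1j * rng.normal(size=16)) ** 2  # |H_k|^2
noise = 1.0
gap = 10 ** (6.0 / 10)            # roughly a 6 dB SNR gap for a target error rate
target_bits = 40
max_bits_per_carrier = 8

def power_for(bits, g):
    """Power needed to carry `bits` QAM bits on a subcarrier with gain g."""
    return (2.0 ** bits - 1.0) * gap * noise / g

bits = np.zeros(len(gains), dtype=int)
while bits.sum() < target_bits:
    # Extra power each subcarrier would need to carry one more bit.
    delta = np.array([power_for(b + 1, g) - power_for(b, g)
                      for b, g in zip(bits, gains)])
    delta[bits >= max_bits_per_carrier] = np.inf
    k = int(delta.argmin())       # the cheapest additional bit goes here
    bits[k] += 1

powers = power_for(bits, gains)
print("bits per subcarrier:", bits, "total power:", powers.sum().round(2))
```

Swapping the stopping rule (fixed total power instead of a fixed bit target) turns the same greedy loop into a throughput-maximization routine, which mirrors the duality among the three problems discussed in the abstract.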
Effect of sequence polymorphism and drug resistance on two HIV-1 Gag processing sites. FEBS JOURNAL, Issue 16 2002. Anita Fehér. The HIV-1 proteinase (PR) has proved to be a good target for antiretroviral therapy of AIDS, and various PR inhibitors are now in clinical use. However, there is rapid selection of viral variants bearing mutations in the proteinase that are resistant to clinical inhibitors. Drug resistance also involves mutations of the nucleocapsid/p1 and p1/p6 cleavage sites of Gag, both in vitro and in vivo. Cleavages at these sites have been shown to be rate-limiting steps for polyprotein processing and viral maturation. Furthermore, these sites show significant sequence polymorphism, which may also have an impact on virion infectivity. We have studied the hydrolysis of oligopeptides representing these cleavage sites with representative mutations found as natural variations or arising as resistance mutations. Wild-type and five drug-resistant PRs with mutations within or outside the substrate binding site were tested. While the natural variations showed either increased or decreased susceptibility of peptides toward the proteinases, the resistance mutations always had a beneficial effect on catalytic efficiency. Comparison of the specificity changes obtained for the various substrates suggested that maximization of the van der Waals contacts between substrate and PR is the major determinant of specificity; the same effect is crucial for inhibitor potency. The natural nucleocapsid/p1 and p1/p6 sites do not appear to be optimized for rapid hydrolysis. Hence, mutation of these rate-limiting cleavage sites can partly compensate for the reduced catalytic activity of drug-resistant mutant HIV-1 proteinases. [source]
Chinese Choices: A Poliheuristic Analysis of Foreign Policy Crises, 1950-1996. FOREIGN POLICY ANALYSIS, Issue 1 2005. Patrick James. This paper uses the Poliheuristic Theory (PH), developed by Mintz, which incorporates both psychological and rational choice components in a synthesis of these previously isolated approaches, to explain decision making in Chinese foreign policy crises. China is an interesting initial case for this project for two reasons. One is its importance as a permanent member of the UN Security Council and rising superpower. The other is China's reputation as a nearly unique "black box", an especially challenging case, with regard to decision making in foreign policy crises. Taken from the authoritative compilation of the International Crisis Behavior (ICB) Project, the nine cases (with available data) in which China is a crisis actor span the period from 1950 to 1996. A comparative analysis of Chinese decision making in times of crisis is used to test hypotheses derived from the PH. The hypotheses focus on how decisions are anticipated to occur over two stages. Principal expectations are that the noncompensatory rule, which places priority on political considerations, will determine viable alternatives at the first stage, while choices more in line with expected value maximization or lexicographic ordering will characterize the second stage. [source]

Zeolite Catalysts with Tunable Hierarchy Factor by Pore-Growth Moderators. ADVANCED FUNCTIONAL MATERIALS, Issue 24 2009. Javier Pérez-Ramírez. Abstract: The design of hierarchical zeolite catalysts is attempted through the maximization of the hierarchy factor (HF); that is, by enhancing the mesopore surface area without severe penalization of the micropore volume. For this purpose, a novel desilication variant involving NaOH treatment of ZSM-5 in the presence of quaternary ammonium cations is developed. The organic cation (TPA+ or TBA+) acts as a pore-growth moderator in the crystal by OH−-assisted silicon extraction, largely protecting the zeolite crystal during the demetallation process. The protective effect is not seen when using cations that are able to enter the micropores, such as TMA+. Engineering the pore structure at the micro- and mesolevel is essential to optimize transport properties and catalytic performance, as demonstrated in the benzene alkylation with ethylene, a representative mass-transfer-limited reaction. The hierarchy factor is an appropriate tool to classify hierarchically structured materials. The latter point is of wide interest to the scientific community, as it not only embraces mesoporous zeolites obtained by desilication methods but also makes it possible to quantitatively compare and correlate various materials obtained by different synthetic methodologies. [source]

Allometric relationships between lamina area, lamina mass and petiole mass of 93 temperate woody species vary with leaf habit, leaf form and altitude. FUNCTIONAL ECOLOGY, Issue 4 2008. Guoyong Li. Summary: 1. The allometric scaling relationship between lamina and lamina support has rarely been examined, such that its significance to plant life-history strategies has not been fully explored and understood so far. We investigated the sizes of leaf lamina and petiole for 93 temperate broad-leaved woody species at two altitudes of a southwestern mountain, and analysed the scaling relationship in relation to leaf habit (evergreen vs. deciduous), leaf form (simple- vs. compound-leaved species), and habitat type (low vs. high altitude).
2. Significant allometric scaling relationships were found between petiole mass and lamina mass, and between petiole mass and lamina area, with common slopes of 0·872 and 0·742, respectively, both significantly departing from the value of 1·0. The results of phylogenetic comparative analyses were in accordance with the observed positive scaling relationships. 3. The evergreen species were found to have a greater petiole mass than the deciduous species at a given lamina area, whereas a contrasting pattern was observed between lamina mass and petiole mass, in which the evergreens had a greater biomass allocation to lamina for the same petiole mass relative to the deciduous species. 4. The compound-leaved species were observed to be significantly greater in both lamina area and lamina support (including petioles, rachis and petiolules) than the simple-leaved species, whereas the former had a smaller lamina area or lamina mass at a given petiole mass than the latter. 5. The plants from the high altitude had less lamina area at a given petiole investment compared to those from the lower altitude, likely due to the larger mechanical and transport requirements of petioles in the species at high altitude. 6. Our results indicate that petioles act as a constraint on the maximization of lamina area and lamina biomass and that the allometric relationship between lamina and lamina support varies with leaf habit, leaf form and habitat. [source]
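Allometric common slopes such as the 0·872 and 0·742 reported above are typically obtained from log-log scaling fits. The sketch below contrasts an ordinary least squares slope with a standardized major axis (SMA) slope on synthetic lamina and petiole masses; the data, the assumed exponent of 0.85 and the choice of SMA are illustrative assumptions, not the study's methods or measurements.

```python
# Estimating an allometric scaling exponent from log-log data: OLS slope and
# standardized major axis (SMA) slope. The data are synthetic, not the
# 93-species data set of the study.
import numpy as np

rng = np.random.default_rng(3)
petiole_mass = rng.lognormal(mean=0.0, sigma=1.0, size=200)          # grams
lamina_mass = 3.0 * petiole_mass ** 0.85 * rng.lognormal(0, 0.2, 200)

x = np.log10(petiole_mass)         # support (petiole) mass
y = np.log10(lamina_mass)          # lamina mass

ols_slope = np.polyfit(x, y, 1)[0]
r = np.corrcoef(x, y)[0, 1]
sma_slope = np.sign(r) * np.std(y) / np.std(x)   # SMA (reduced major axis)

print(f"OLS slope: {ols_slope:.3f}  SMA slope: {sma_slope:.3f}")
# A common slope below 1.0 means lamina mass increases less than
# proportionally with petiole mass, i.e. diminishing returns on support.
```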
Individual differences in allocation of funds in the dictator game associated with length of the arginine vasopressin 1a receptor RS3 promoter region and correlation between RS3 length and hippocampal mRNA. GENES, BRAIN AND BEHAVIOR, Issue 3 2008. A. Knafo. Human altruism is a widespread phenomenon that has puzzled evolutionary biologists since Darwin. Economic games illustrate human altruism by showing that behavior deviates from economic predictions of profit maximization. A game that most plainly shows this altruistic tendency is the Dictator Game. We hypothesized that human altruistic behavior is to some extent hardwired and that a likely candidate that may contribute to individual differences in altruistic behavior is the arginine vasopressin 1a (AVPR1a) receptor, which in some mammals such as the vole has a profound impact on affiliative behaviors. In the current investigation, 203 male and female university students played an online version of the Dictator Game for real money payoffs. All subjects and their parents were genotyped for AVPR1a RS1 and RS3 promoter-region repeat polymorphisms. Parents did not participate in online game playing. As variation in the length of a repetitive element in the vole AVPR1a promoter region is associated with differences in social behavior, we examined the relationship between RS1 and RS3 repeat length (base pairs) and allocation sums. Participants with short versions (308-325 bp) of the AVPR1a RS3 repeat allocated significantly (likelihood ratio = 14.75, P = 0.001, df = 2) fewer shekels to the 'other' than participants with long versions (327-343 bp). We also implemented a family-based association test, UNPHASED, to confirm and validate the correlation between the AVPR1a RS3 repeat and monetary allocations in the dictator game. Dictator game allocations were significantly associated with the RS3 repeat (global P value: likelihood ratio χ2 = 11.73, df = 4, P = 0.019). The association between the AVPR1a RS3 repeat and altruism was also confirmed using two self-report scales (the Bardi-Schwartz Universalism and Benevolence Value-expressive Behavior scales). RS3 long alleles were associated with higher scores on both measures. Finally, long AVPR1a RS3 repeats were associated with higher AVPR1a human post-mortem hippocampal messenger RNA levels than short RS3 repeats (one-way analysis of variance (ANOVA): F = 15.04, P = 0.001, df = 14), suggesting a functional molecular genetic basis for the observation that participants with the long RS3 repeats allocate more money than participants with the short repeats. This is the first investigation showing that a common human polymorphism, with antecedents in lower mammals, contributes to decision making in an economic game. The finding that the same gene contributing to social bonding in lower animals also appears to operate similarly in human behavior suggests a common evolutionary mechanism. [source]

Affected-sib-pair test for linkage based on constraints for identical-by-descent distributions corresponding to disease models with imprinting. GENETIC EPIDEMIOLOGY, Issue 4 2004. Michael Knapp. Abstract: Holmans' possible triangle test for affected sib pairs has proven to be a powerful tool for linkage analysis. This test is a likelihood-ratio test for which maximization is restricted to the set of possible sharing probabilities. Here, we extend the possible triangle test to take into account genomic imprinting, which is also known as the parent-of-origin effect. While the classical test without imprinting looks at whether affected sib pairs share 0, 1, or 2 alleles identical-by-descent, the likelihood-ratio test allowing for imprinting further distinguishes whether the sharing of exactly one allele is through the father or the mother. Thus, if the disease gene is indeed subject to imprinting, the extended test presented here can take into account that affecteds will have inherited the mutant allele preferentially from one particular parent. We calculate the sharing probabilities at a marker locus linked to a disease susceptibility locus. Using our formulation, the constraints on these probabilities given by Dudoit and Speed ([1999] Statistics in Genetics; New York: Springer) can easily be verified. Next, we derive the asymptotic distribution of the restricted likelihood-ratio test statistic under the null hypothesis of no linkage, and give LOD-score criteria for various test sizes. We show, for various disease models, that the test allowing for imprinting has significantly higher power to detect linkage if imprinting is indeed present, at the cost of only a small reduction in power in case of no imprinting. Altogether, unlike many methods currently available, our novel model-free sib-pair test adequately models the epigenetic parent-of-origin effect, and will hopefully prove to be a useful tool for the genetic mapping of complex traits. © 2004 Wiley-Liss, Inc. [source]

Evaluations of maximization procedures for estimating linkage parameters under heterogeneity. GENETIC EPIDEMIOLOGY, Issue 3 2004. Swati Biswas. Abstract: Locus heterogeneity is a major problem plaguing the mapping of disease genes responsible for complex genetic traits via linkage analysis. A common feature of several available methods that account for heterogeneity is that they involve maximizing a multidimensional likelihood to obtain maximum likelihood estimates.
The high dimensionality of the likelihood surface may be due to multiple heterogeneity (mixing) parameters, linkage parameters, and/or regression coefficients corresponding to multiple covariates. Here, we focus on this nontrivial computational aspect of incorporating heterogeneity by considering several likelihood maximization procedures, including the expectation maximization (EM) algorithm and the stochastic expectation maximization (SEM) algorithm. The wide applicability of these procedures is demonstrated first through a general formulation of accounting for heterogeneity, and then by applying them to two specific formulations. Furthermore, our simulation studies as well as an application to the Genetic Analysis Workshop 12 asthma datasets show that, among other observations, SEM performs better than EM. As an aside, we illustrate a limitation of the popular admixture approach for incorporating heterogeneity, proved elsewhere. We also show how to obtain standard errors (SEs) for EM and SEM estimates, using methods available in the literature. These SEs can then be combined with the corresponding estimates to provide confidence intervals of the parameters. © 2004 Wiley-Liss, Inc. [source] |
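One concrete instance of the heterogeneity likelihood discussed above is the admixture model, in which a proportion alpha of families is linked. The sketch below contrasts an EM update for alpha with its stochastic (SEM) counterpart; the per-family likelihood values are invented placeholders standing in for pedigree likelihoods evaluated at a fixed recombination fraction.

```python
# EM and SEM updates for the admixture (locus heterogeneity) model: a fraction
# alpha of families is linked. L1 = per-family likelihood under linkage,
# L0 = per-family likelihood under no linkage (invented numbers).
import numpy as np

rng = np.random.default_rng(12)
L1 = np.array([0.9, 2.5, 0.4, 3.1, 1.8, 0.2, 2.2, 0.7])
L0 = np.ones_like(L1)

def em_alpha(alpha=0.5, iters=200):
    for _ in range(iters):
        w = alpha * L1 / (alpha * L1 + (1 - alpha) * L0)  # E-step: P(linked)
        alpha = w.mean()                                  # M-step
    return alpha

def sem_alpha(alpha=0.5, iters=2000, burn_in=500):
    draws = []
    for t in range(iters):
        w = alpha * L1 / (alpha * L1 + (1 - alpha) * L0)
        z = rng.random(len(L1)) < w        # S-step: impute linkage indicators
        alpha = max(z.mean(), 1e-3)        # M-step on imputed labels, kept off 0
        if t >= burn_in:
            draws.append(alpha)
    return np.mean(draws)                  # average over the stochastic chain

print("EM estimate of alpha:", round(em_alpha(), 3))
print("SEM estimate of alpha:", round(sem_alpha(), 3))
```

In a full linkage setting the linkage parameters would be re-estimated inside the M-step as well, which is exactly where the multidimensional maximization discussed in the abstract becomes demanding.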