Desirable Properties

Selected Abstracts


A Framework for Measuring the Importance of Variables with Applications to Management Research and Decision Models

DECISION SCIENCES, Issue 3 2000
Ehsan S. Soofi
In many disciplines, including various management science fields, researchers have shown interest in assigning relative importance weights to a set of explanatory variables in multivariable statistical analysis. This paper provides a synthesis of the relative importance measures scattered in the statistics, psychometrics, and management science literature. These measures are computed by averaging the partial contributions of each variable over all orderings of the explanatory variables. We define an Analysis of Importance (ANIMP) framework that reflects two desirable properties for the relative importance measures discussed in the literature: additive separability and order independence. We also provide a formal justification and generalization of the "averaging over all orderings" procedure based on the Maximum Entropy Principle. We then examine the question of relative importance in management research within the framework of the "contingency theory of organizational design" and provide an example of the use of relative importance measures in an actual management decision situation. Contrasts are drawn between the consequences of using statistical significance, which is an inappropriate indicator of relative importance, and the results of the appropriate ANIMP measures. [source]
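
As an illustration of the "averaging over all orderings" idea described in this abstract, here is a minimal sketch that averages each predictor's incremental contribution to R² over every ordering of the explanatory variables. It assumes an ordinary least-squares fit and uses made-up data; the function names and the example are ours, not the paper's.

```python
# Illustrative sketch (not the paper's code): average each predictor's partial
# contribution to R^2 over all orderings of the explanatory variables, the idea
# behind ANIMP-style relative importance weights.
from itertools import permutations
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS fit of y on the columns of X (with intercept)."""
    if X.shape[1] == 0:
        return 0.0
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

def importance_by_orderings(X, y):
    """Average incremental R^2 of each variable over all entry orders."""
    p = X.shape[1]
    scores = np.zeros(p)
    orders = list(permutations(range(p)))
    for order in orders:
        included, prev = [], 0.0
        for var in order:
            included.append(var)
            cur = r_squared(X[:, included], y)
            scores[var] += cur - prev   # partial contribution of `var` in this order
            prev = cur
    return scores / len(orders)        # importances sum to the full-model R^2

# Toy example with three predictors
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 1.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=200)
print(importance_by_orderings(X, y))
```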


2-D/3-D multiply transmitted, converted and reflected arrivals in complex layered media with the modified shortest path method

GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 1 2009
Chao-Ying Bai
SUMMARY Grid-cell based schemes for tracing seismic arrivals, such as the finite difference eikonal equation solver or the shortest path method (SPM), are conventionally confined to locating first arrivals only. However, later arrivals are numerous and sometimes of greater amplitude than the first arrivals, making them valuable information, with the potential to be used for precise earthquake location, high-resolution seismic tomography, real-time automatic onset picking and identification of multiple events on seismic exploration data. The purpose of this study is to introduce a modified SPM (MSPM) for tracking multiple arrivals comprising any kind of combination of transmissions, conversions and reflections in complex 2-D/3-D layered media. A practical approach known as the multistage scheme is incorporated into the MSPM to propagate seismic wave fronts from one interface (or subsurface structure for 3-D application) to the next. By treating each layer that the wave front enters as an independent computational domain, one obtains a transmitted and/or converted branch of later arrivals by reinitializing it in the adjacent layer, and a reflected and/or converted branch of later arrivals by reinitializing it in the incident layer. A simple local grid refinement scheme at the layer interface is used to maintain the same accuracy as in the one-stage MSPM application in tracing first arrivals. Benchmark tests against the multistage fast marching method are undertaken to assess the solution accuracy and the computational efficiency. Several examples are presented that demonstrate the viability of the multistage MSPM in highly complex layered media. Even in the presence of velocity variations, such as the Marmousi model, or interfaces exhibiting a relatively high curvature, later arrivals composed of any combination of the transmitted, converted and reflected events are tracked accurately. This is because the multistage MSPM retains the desirable properties of a single-stage MSPM: high computational efficiency and a high accuracy compared with the multistage FMM scheme. [source]
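
To make the starting point of this abstract concrete, here is a minimal single-stage sketch of a shortest path method (Dijkstra on a slowness grid) for first arrivals only. The multistage reinitialization at layer interfaces that produces the later transmitted, converted and reflected branches is the paper's contribution and is not reproduced; the grid spacing, connectivity and toy velocity model are assumptions.

```python
# Minimal single-stage shortest-path traveltime sketch (Dijkstra on a 2-D grid).
# The multistage MSPM described above re-runs such a solver layer by layer;
# only the first-arrival core is shown here.
import heapq
import numpy as np

def first_arrivals(slowness, src, h=1.0):
    """First-arrival traveltimes from grid node `src` over a slowness grid (s per unit length)."""
    ny, nx = slowness.shape
    t = np.full((ny, nx), np.inf)
    t[src] = 0.0
    heap = [(0.0, src)]
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, -1), (-1, 1), (1, -1), (1, 1)]
    while heap:
        t0, (i, j) = heapq.heappop(heap)
        if t0 > t[i, j]:
            continue
        for di, dj in offsets:
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx:
                dist = h * np.hypot(di, dj)
                # segment time from the average slowness of the two end nodes
                t1 = t0 + dist * 0.5 * (slowness[i, j] + slowness[ni, nj])
                if t1 < t[ni, nj]:
                    t[ni, nj] = t1
                    heapq.heappush(heap, (t1, (ni, nj)))
    return t

slowness = np.full((50, 50), 1.0 / 2000.0)   # uniform 2000 m/s medium
times = first_arrivals(slowness, (0, 0), h=10.0)
print(times[-1, -1])
```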


Luminescence of Nanocrystalline Erbium-Doped Yttria

ADVANCED FUNCTIONAL MATERIALS, Issue 5 2009
Yuanbing Mao
Abstract In this paper, the luminescence, including photoluminescence, upconversion and cathodoluminescence, from single-crystalline erbium-doped yttria nanoparticles with an average diameter of 80 nm, synthesized by a molten salt method, is reported. Outstanding luminescent properties, including sharp and well-resolved photoluminescent lines in the infrared region, strong green and red upconversion emissions, and excellent cathodoluminescence, are observed from the nanocrystalline erbium-doped yttria. Moreover, annealing by a high-power laser results in a relatively large increase in photoluminescent emission intensity without causing spectral line shift. These desirable properties make these nanocrystals promising for applications in displays, bioanalysis and telecommunications. [source]


The Materials Science of Functional Oxide Thin Films

ADVANCED MATERIALS, Issue 38-39 2009
Mark G. Blamire
Abstract Research in the area of functional oxides has progressed from study of their basic chemistry and structure to the point at which an enormous range of desirable properties are being explored for potential applications. The primary limitation on exploitation is the difficulty of achieving sufficiently precise control of the properties because of the range of possible defects in such materials and the remarkably strong effect of such defects on the properties. This review outlines the reasons underlying this sensitivity and recent results that demonstrate the levels of control which are now possible. [source]


PRIOR ELICITATION IN MULTIPLE CHANGE-POINT MODELS

INTERNATIONAL ECONOMIC REVIEW, Issue 3 2009
Gary Koop
This article discusses Bayesian inference in change-point models. The main existing approaches treat all change-points equally, a priori, using either a Uniform prior or an informative hierarchical prior. Both approaches assume a known number of change-points. Some undesirable properties of these approaches are discussed. We develop a new Uniform prior that allows some of the change-points to occur out of sample. This prior has desirable properties, can be interpreted as "noninformative," and treats the number of change-points as unknown. Artificial and real data exercises show how these different priors can have a substantial impact on estimation and prediction. [source]


A FETI-based multi-time-step coupling method for Newmark schemes in structural dynamics

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 13 2004
A. Prakash
Abstract We present an efficient and accurate multi-time-step coupling method using FETI domain decomposition for structural dynamics. Using this method one can divide a large structural mesh into a number of smaller subdomains, solve the individual subdomains separately, and couple the solutions together to obtain the solution to the original problem. The various subdomains can be integrated in time using different time steps and/or different Newmark schemes. This approach will be most effective for very large-scale simulations on complex geometries. Our coupling method builds upon a method previously proposed by Gravouil and Combescure (GC method). We show that for the simplest case, when the same time step is used in all subdomains of the mesh, our method reduces to the GC method and is unconditionally stable and energy preserving. In addition, we show that our method possesses these desirable properties for general multi-time-step cases as well, unlike the GC method, which is dissipative. Greater computational efficiency is also achieved through our method by limiting the computation of interface forces to the largest time step, as opposed to the smallest time step in the GC method. Copyright © 2004 John Wiley & Sons, Ltd. [source]
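
For readers unfamiliar with the time integrator named above, this is a minimal single-domain Newmark integrator (average acceleration, beta = 1/4, gamma = 1/2) for M·a + C·v + K·u = f(t). The FETI subdomain splitting and interface coupling the paper actually develops are not shown, and the one-degree-of-freedom example is ours.

```python
# Minimal Newmark time integrator (average acceleration: beta=1/4, gamma=1/2)
# for M*a + C*v + K*u = f(t); the paper's FETI multi-time-step coupling would
# run such integrators per subdomain and glue them with interface forces.
import numpy as np

def newmark(M, C, K, f, u0, v0, dt, nsteps, beta=0.25, gamma=0.5):
    u, v = np.array(u0, float), np.array(v0, float)
    a = np.linalg.solve(M, f(0.0) - C @ v - K @ u)            # initial acceleration
    Keff = K + gamma / (beta * dt) * C + M / (beta * dt**2)    # effective stiffness
    history = [u.copy()]
    for k in range(1, nsteps + 1):
        t = k * dt
        rhs = (f(t)
               + M @ (u / (beta * dt**2) + v / (beta * dt) + (0.5 / beta - 1.0) * a)
               + C @ (gamma / (beta * dt) * u + (gamma / beta - 1.0) * v
                      + dt * (gamma / (2 * beta) - 1.0) * a))
        u_new = np.linalg.solve(Keff, rhs)
        a_new = (u_new - u - dt * v - dt**2 * (0.5 - beta) * a) / (beta * dt**2)
        v_new = v + dt * ((1.0 - gamma) * a + gamma * a_new)
        u, v, a = u_new, v_new, a_new
        history.append(u.copy())
    return np.array(history)

# Single undamped oscillator (1 Hz), free vibration from an initial displacement
M = np.eye(1); C = np.zeros((1, 1)); K = np.array([[4.0 * np.pi**2]])
traj = newmark(M, C, K, lambda t: np.zeros(1), [1.0], [0.0], dt=0.01, nsteps=200)
print(traj[[0, 50, 100], 0])   # roughly cos(2*pi*t) at t = 0, 0.5, 1.0
```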


An overview of methods for determining OWA weights

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 8 2005
Zeshui Xu
The ordered weighted aggregation (OWA) operator has received increasing attention since its introduction. A key issue in applying the OWA operator is determining its associated weights. In this article, I first briefly review the existing main methods for determining the weights associated with the OWA operator, and then, motivated by the idea of the normal distribution, I develop a novel practical method for obtaining the OWA weights, which is distinctly different from the existing ones. The method can relieve the influence of unfair arguments on the decision results by weighting these arguments with small values. Some of its desirable properties have also been investigated. © 2005 Wiley Periodicals, Inc. Int J Int Syst 20: 843–865, 2005. [source]
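
A sketch of the normal-distribution-motivated weighting idea described in the abstract: OWA weights drawn from a Gaussian centred on the middle rank, so that extreme ("unfair") arguments receive small weights. The exact constants in the published method may differ from this reading, so treat it as illustrative only.

```python
# Sketch of normal-distribution-based OWA weighting: arguments are sorted in
# descending order and combined with Gaussian weights centred on the middle
# rank, so extreme (unfairly high/low) arguments receive small weights.
import numpy as np

def normal_owa_weights(n):
    i = np.arange(1, n + 1)
    mu = (n + 1) / 2.0                          # mean of the ranks
    sigma = np.sqrt(np.mean((i - mu) ** 2))     # standard deviation of the ranks
    w = np.exp(-((i - mu) ** 2) / (2.0 * sigma ** 2))
    return w / w.sum()

def owa(values, weights):
    return float(np.sort(values)[::-1] @ weights)   # weight the ordered arguments

w = normal_owa_weights(5)
print(np.round(w, 4))                  # symmetric, peaked at the middle rank
print(owa([0.9, 0.6, 0.55, 0.5, 0.1], w))
```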


Addressing agent loss in vehicle formations and sensor networks

INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 15 2009
Tyler H. Summers
Abstract In this paper, we address the problem of agent loss in vehicle formations and sensor networks via two separate approaches: (1) perform a 'self-repair' operation in the event of agent loss to recover desirable information architecture properties or (2) introduce robustness into the information architecture a priori such that agent loss does not destroy desirable properties. We model the information architecture as a graph G(V, E), where V is a set of vertices representing the agents and E is a set of edges representing information flow amongst the agents. We focus on two properties of the graph called rigidity and global rigidity, which are required for formation shape maintenance and sensor network self-localization, respectively. For the self-repair approach, we show that while previous results permit local repair involving only neighbours of the lost agent, the repair cannot always be implemented using only local information. We present new results that can be applied to make the local repair using only local information. We describe implementation and illustrate with algorithms and examples. For the robustness approach, we investigate the structure of graphs with the property that rigidity or global rigidity is preserved after removing any single vertex (we call this property 2-vertex-rigidity or 2-vertex-global-rigidity, respectively). Information architectures with such properties would allow formation shape maintenance or self-localization to be performed even in the event of agent failure. We review a characterization of a class of 2-vertex-rigidity and develop a separate class, making significant strides towards a complete characterization. We also present a characterization of a class of 2-vertex-global-rigidity. Copyright © 2008 John Wiley & Sons, Ltd. [source]
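
A numerical sketch of the rigidity notions used above: 2-D generic rigidity checked through the rank of the rigidity matrix (rank 2n − 3 at generic positions), and 2-vertex-rigidity checked by deleting each vertex in turn. Random coordinates stand in for a generic embedding; this is a back-of-the-envelope test, not the paper's characterization.

```python
# Sketch: test 2-D generic rigidity of a graph numerically via the rank of its
# rigidity matrix (rank 2n-3 at generic positions), and 2-vertex-rigidity by
# re-testing after removing each single vertex. Random coordinates approximate
# a generic embedding.
import numpy as np

def is_rigid_2d(n, edges, rng=np.random.default_rng(1)):
    if n <= 1:
        return True
    p = rng.random((n, 2))                       # "generic" positions
    R = np.zeros((len(edges), 2 * n))
    for row, (u, v) in enumerate(edges):
        d = p[u] - p[v]
        R[row, 2 * u:2 * u + 2] = d
        R[row, 2 * v:2 * v + 2] = -d
    return np.linalg.matrix_rank(R) == 2 * n - 3

def is_2_vertex_rigid(n, edges):
    for v in range(n):
        keep = [i for i in range(n) if i != v]
        relabel = {old: new for new, old in enumerate(keep)}
        sub = [(relabel[a], relabel[b]) for a, b in edges if v not in (a, b)]
        if not is_rigid_2d(n - 1, sub):
            return False
    return True

K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(is_rigid_2d(4, K4), is_2_vertex_rigid(4, K4))   # K4 is rigid; removing any vertex leaves a triangle
```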


Some nonlinear optimal control problems with closed-form solutions

INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 14 2001
Michael Margaliot
Abstract Optimal controllers guarantee many desirable properties, including stability and robustness of the closed-loop system. Unfortunately, the design of optimal controllers is generally very difficult because it requires solving an associated Hamilton–Jacobi–Bellman equation. In this paper we develop a new approach that allows the formulation of some nonlinear optimal control problems whose solution can be stated explicitly as a state-feedback controller. The approach is based on using Young's inequality to derive explicit conditions by which the solution of the associated Hamilton–Jacobi–Bellman equation is simplified. This allows us to formulate large families of nonlinear optimal control problems with closed-form solutions. We demonstrate this by developing optimal controllers for a Lotka–Volterra system. Copyright © 2001 John Wiley & Sons, Ltd. [source]
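
For reference, the Hamilton–Jacobi–Bellman equation mentioned above, written for a generic infinite-horizon problem with control-affine dynamics and quadratic control cost; the symbols q, R, f and g are generic placeholders and are not taken from the paper.

```latex
\[
\min_{u}\Big\{\, q(x) + u^{\top}R\,u + \nabla V(x)^{\top}\bigl(f(x)+g(x)\,u\bigr) \Big\} = 0,
\qquad
u^{*}(x) = -\tfrac{1}{2}\,R^{-1}\,g(x)^{\top}\nabla V(x),
\]
for the infinite-horizon cost $\int_0^{\infty}\bigl(q(x)+u^{\top}R\,u\bigr)\,dt$ subject to $\dot{x}=f(x)+g(x)\,u$.
```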


Mixture model equations for marker-assisted genetic evaluation

JOURNAL OF ANIMAL BREEDING AND GENETICS, Issue 4 2005
Y. Liu
Summary Marker-assisted genetic evaluation needs to infer genotypes at quantitative trait loci (QTL) based on the information of linked markers. As the inference usually provides the probability distribution of QTL genotypes rather than a specific genotype, marker-assisted genetic evaluation is characterized by the mixture model because of the uncertainty of QTL genotypes. It is, therefore, necessary to develop a statistical procedure useful for mixture model analyses. In this study, a set of mixture model equations was derived based on the normal mixture model and the EM algorithm for evaluating linear models with uncertain independent variables. The derived equations can be seen as an extension of Henderson's mixed model equations to mixture models and provide a general framework to deal with the issues of uncertain incidence matrices in linear models. The mixture model equations were applied to marker-assisted genetic evaluation with different parameterizations of QTL effects. A sire-QTL-effect model and a founder-QTL-effect model were used to illustrate the application of the mixture model equations. The potential advantages of the mixture model equations for marker-assisted genetic evaluation were discussed. The mixed-effect mixture model equations are flexible in modelling QTL effects and show desirable properties in estimating QTL effects, compared with Henderson's mixed model equations. [source]
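
The normal-mixture/EM machinery the abstract builds on, in its simplest generic form (a two-component normal mixture with a common variance). The paper's mixture model equations extend this to mixed linear models whose incidence matrices (QTL genotypes) are uncertain; that extension is not reproduced here, and the data below are synthetic.

```python
# Generic EM algorithm for a two-component normal mixture -- the statistical
# backbone referred to above; the paper extends this idea to mixed linear
# models with uncertain QTL genotypes.
import numpy as np

def em_normal_mixture(y, n_iter=200):
    y = np.asarray(y, float)
    pi, mu1, mu2, var = 0.5, y.min(), y.max(), y.var()
    for _ in range(n_iter):
        # E-step: posterior probability that each record belongs to component 1
        d1 = pi * np.exp(-0.5 * (y - mu1) ** 2 / var)
        d2 = (1 - pi) * np.exp(-0.5 * (y - mu2) ** 2 / var)
        r = d1 / (d1 + d2)
        # M-step: update mixing proportion, component means and common variance
        pi = r.mean()
        mu1 = (r * y).sum() / r.sum()
        mu2 = ((1 - r) * y).sum() / (1 - r).sum()
        var = (r * (y - mu1) ** 2 + (1 - r) * (y - mu2) ** 2).sum() / len(y)
    return pi, mu1, mu2, var

rng = np.random.default_rng(2)
y = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(3.0, 1.0, 700)])
print(np.round(em_normal_mixture(y), 3))   # roughly (0.3, 0.0, 3.0, 1.0)
```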


Finite sample improvements in statistical inference with I(1) processes

JOURNAL OF APPLIED ECONOMETRICS, Issue 3 2001
D. Marinucci
Robinson and Marinucci (1998) investigated the asymptotic behaviour of a narrow-band semiparametric procedure termed Frequency Domain Least Squares (FDLS) in the broad context of fractional cointegration analysis. Here we restrict discussion to the standard case when the data are I(1) and the cointegrating errors are I(0), proving that modifications of the Fully Modified Ordinary Least Squares (FM-OLS) procedure of Phillips and Hansen (1990) which use the FDLS idea have the same asymptotically desirable properties as FM-OLS, and, on the basis of a Monte Carlo study, finding evidence that they have superior finite-sample properties. The new procedures are also shown to compare satisfactorily with parametric estimates. Copyright © 2001 John Wiley & Sons, Ltd. [source]
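
A heavily hedged sketch of the narrow-band (averaged periodogram) regression idea behind FDLS: the slope is formed from cross- and auto-periodograms summed over only the first m Fourier frequencies. The normalization in the published estimator may differ in detail, and the simulated cointegrated pair is ours.

```python
# Hedged sketch of a narrow-band frequency-domain regression coefficient: the
# slope uses cross- and auto-periodograms at the first m Fourier frequencies
# only. This illustrates the FDLS idea; the exact normalisation in Robinson and
# Marinucci's estimator may differ.
import numpy as np

def narrow_band_slope(y, x, m):
    wx = np.fft.fft(x - x.mean())
    wy = np.fft.fft(y - y.mean())
    j = np.arange(1, m + 1)                      # low Fourier frequencies only
    sxy = np.sum(np.real(wx[j] * np.conj(wy[j])))
    sxx = np.sum(np.abs(wx[j]) ** 2)
    return sxy / sxx

rng = np.random.default_rng(3)
n = 500
x = np.cumsum(rng.normal(size=n))                # I(1) regressor
y = 2.0 * x + rng.normal(size=n)                 # I(0) cointegrating error
print(narrow_band_slope(y, x, m=20))             # close to the true slope of 2
```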


Blends of triazine-based hyperbranched polyether with LDPE and plasticized PVC

JOURNAL OF APPLIED POLYMER SCIENCE, Issue 1 2007
Jyotishmoy Borah
Abstract Triazine-based hyperbranched polyether was obtained by an earlier reported method and blended with low density polyethylene (LDPE) and plasticized poly(vinyl chloride) (PVC) separately to improve some desirable properties of those linear polymers. Properties such as processability, mechanical behavior, and flammability of those linear polymers were studied by blending with 1–7.5 phr of hyperbranched polyether. The mechanical properties were also measured after thermal aging and leaching in different chemical media. SEM study indicates that both polymers exhibit homogeneous morphology at all dose levels. The mechanical properties like tensile strength, elongation at break, hardness, etc. of LDPE and PVC increase with the increase of dose level of hyperbranched polyether. The flame retardant behavior as measured by limiting oxygen index (LOI) for all blends indicates an enhanced LOI value compared to the polymer without hyperbranched polyether. The processing behavior of both types of blends as measured by solution viscosity and melt flow rate value indicates that hyperbranched polyether acts as a process aid for those base polymers. The effect of leaching and heat aging of these linear polymers on the mechanical properties showed that hyperbranched polyether is a superior antidegradant compared to the commercially used N-isopropyl-N'-phenyl-p-phenylene diamine. © 2007 Wiley Periodicals, Inc. J Appl Polym Sci 104: 648–654, 2007 [source]


Recycling of nickel–metal hydride batteries.

JOURNAL OF CHEMICAL TECHNOLOGY & BIOTECHNOLOGY, Issue 9 2004
II: Electrochemical deposition of cobalt, nickel
Abstract A combination of hydrometallurgical and electrochemical processes has been developed for the separation and recovery of nickel and cobalt from cylindrical nickel–metal hydride rechargeable batteries. Leaching tests revealed that a 4 mol dm−3 hydrochloric acid solution at 95 °C was suitable to dissolve all metals from the battery after 3 h dissolution. The rare earths were separated from the leaching solution by solvent extraction with 25% bis(2-ethylhexyl)phosphoric acid (D2EHPA) in kerosene. The nickel and cobalt present in the aqueous phase were subjected to electrowinning. Galvanostatic tests on simulated aqueous solutions investigated the effect of current density, pH, and temperature with regard to current efficiency and deposit composition and morphology. The results indicated that achieving an Ni–Co composition with desirable properties was possible by varying the applied current density. Preferential cobalt deposition was observed at low current densities. Galvanostatic tests using solutions obtained from treatment of batteries revealed that the aqueous chloride phase, obtained from the extraction, was suitable for recovery of nickel and cobalt through simultaneous electrodeposition. Scanning electron microscopy and X-ray diffraction analysis gave detailed information on the morphology and the crystallographic orientation of the obtained deposits. Copyright © 2004 Society of Chemical Industry [source]


FREE-SPACE MICROWAVE MEASUREMENT OF LOW MOISTURE CONTENT IN POWDERED FOODS

JOURNAL OF FOOD PROCESSING AND PRESERVATION, Issue 1 2000
RAM M. NARAYANAN
A free-space microwave transmission technique has been developed and tested for rapid in-line noninvasive measurement of the moisture content of various types of food powders. The basis of this technique is the relation between the attenuation of X-band microwave radiation through a sample of the food powder and its moisture content by weight. Since food powders generally lose their utility and desirable properties, such as flowability and resistance to spoilage, at lower levels of moisture content, typically 3–7%, special techniques must be developed in order to accurately characterize the moisture content at these low levels. One such technique is to use frequency averaging to enhance the accuracy of the measurements by avoiding the multiple reflection effects prevalent in low-loss, low-moisture attenuation measurements. This technique was implemented in the moisture content estimation. Overall accuracies in moisture content estimation are generally less than 1%, although in some cases, accuracies are in the vicinity of 5%. [source]
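
A purely illustrative sketch of the frequency-averaging step described above: attenuation readings across an X-band sweep are averaged to suppress multiple-reflection ripple, and a linear calibration then maps mean attenuation to moisture content. The synthetic attenuation model, the numbers and the linear form are all assumptions, not the paper's data.

```python
# Illustrative only: average attenuation over many X-band frequencies to smooth
# multiple-reflection ripple, then fit/apply a linear calibration from mean
# attenuation (dB) to moisture content (%). Data and the linear form are assumed.
import numpy as np

rng = np.random.default_rng(4)
freqs = np.linspace(8.2e9, 12.4e9, 101)                     # X-band sweep

def mean_attenuation(true_moisture):
    ripple = 0.4 * np.sin(freqs / 2e8) + rng.normal(0, 0.1, freqs.size)
    return np.mean(1.5 * true_moisture + ripple)             # dB, synthetic model

# Calibration from samples of known moisture in the 3-7 % range
known = np.array([3.0, 4.0, 5.0, 6.0, 7.0])
readings = np.array([mean_attenuation(m) for m in known])
slope, intercept = np.polyfit(readings, known, 1)

# Predict moisture of an "unknown" sample
print(round(slope * mean_attenuation(5.5) + intercept, 2))   # about 5.5
```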


MODEL PREDICTION FOR SENSORY ATTRIBUTES OF NONGLUTEN PASTA

JOURNAL OF FOOD QUALITY, Issue 6 2001
JEN-CHIEH HUANG
ABSTRACT Response surface methodology was used to predict sensory attributes of a nongluten pasta and develop response surface plots to help visualize the optimum region. Optimum regions of xanthan gum, modified starch, and locust bean gum were selected by overlapping the contour plots of sensory properties of nongluten pasta as compared with the control pasta. The formula of nongluten pasta that possessed the most desirable properties was xanthan gum at 40 g, modified starch at 35 g, locust bean gum at 40 g, tapioca starch at 113 g, potato starch at 57 g, corn flour at 250 g, and rice flour at 50 g. The quality of nongluten pasta could be improved by using different levels of nongluten starches and flours, and nonstarch polysaccharides. [source]
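
A minimal sketch of the response-surface step mentioned above: fit a second-order polynomial in two coded ingredient levels to sensory scores and scan the fitted surface for the best region. The data are synthetic and the two-factor setup is a simplification of the study's multi-ingredient design.

```python
# Minimal response-surface sketch: fit a second-order model in two ingredient
# levels to sensory scores and scan a grid for the best region. The data here
# are synthetic, not the study's measurements.
import numpy as np

def quadratic_design(x1, x2):
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

# Synthetic "sensory score" experiments at coded ingredient levels (-1, 0, +1)
levels = np.array([(a, b) for a in (-1, 0, 1) for b in (-1, 0, 1)], float)
scores = 7 - 1.5 * levels[:, 0]**2 - 1.0 * levels[:, 1]**2 + 0.3 * levels[:, 0]

beta, *_ = np.linalg.lstsq(quadratic_design(levels[:, 0], levels[:, 1]), scores, rcond=None)

# Scan the fitted surface for the region of highest predicted score
g = np.linspace(-1, 1, 41)
G1, G2 = np.meshgrid(g, g)
pred = quadratic_design(G1.ravel(), G2.ravel()) @ beta
best = np.argmax(pred)
print(round(G1.ravel()[best], 2), round(G2.ravel()[best], 2))   # near (0.1, 0.0)
```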


A measure for the cohesion of weighted networks

JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 3 2003
Leo Egghe
A generalization of both the Botafogo-Rivlin-Shneiderman compactness measure and the Wiener index is presented. These new measures for the cohesion of networks can be used in case a dissimilarity value is given between nodes in a network or a graph. It is illustrated how a set of weights between connected nodes can be transformed into a set of dissimilarity measures for all nodes. The new compactness measure for the cohesion of weighted graphs has several desirable properties related to the disjoint union of two networks. Finally, an example is presented of the calculation of the new compactness measures for a co-citation and a bibliographic coupling network. [source]
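
A sketch of how a compactness value can be computed for a weighted network in the spirit described: dissimilarities are propagated by all-pairs shortest paths, disconnected pairs are charged a large constant K, and the distance sum is rescaled so that 1 is maximally compact. The normalizing constants of the paper's generalized measure may differ; this is illustrative only.

```python
# Sketch: a BRS-style compactness for a weighted network. Dissimilarities are
# propagated by all-pairs shortest paths (Floyd-Warshall); disconnected pairs
# are charged a large constant K, and the distance sum is rescaled so that
# 1 = maximally compact, 0 = totally disconnected. The exact normalisation in
# the paper's generalised measure may differ.
import numpy as np

def compactness(D, K):
    """D: matrix of dissimilarities (np.inf where no edge), K: penalty distance."""
    n = D.shape[0]
    dist = D.copy()
    np.fill_diagonal(dist, 0.0)
    for k in range(n):                                   # Floyd-Warshall
        dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])
    dist[np.isinf(dist)] = K
    total = dist.sum()
    d_max = K * n * (n - 1)                              # all pairs disconnected
    d_min = dist[dist > 0].min() * n * (n - 1)           # all pairs at the smallest dissimilarity
    return (d_max - total) / (d_max - d_min)

inf = np.inf
D = np.array([[0.0, 1.0, inf],
              [1.0, 0.0, 2.0],
              [inf, 2.0, 0.0]])
print(round(compactness(D, K=10.0), 3))
```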


Range Unit-Root (RUR) Tests: Robust against Nonlinearities, Error Distributions, Structural Breaks and Outliers

JOURNAL OF TIME SERIES ANALYSIS, Issue 4 2006
Felipe Aparicio
Abstract. Since the seminal paper by Dickey and Fuller in 1979, unit-root tests have conditioned the standard approaches to analysing time series with strong serial dependence in mean behaviour, the focus being placed on the detection of eventual unit roots in an autoregressive model fitted to the series. In this paper, we propose a completely different method to test for the type of long-wave patterns observed not only in unit-root time series but also in series following more complex data-generating mechanisms. To this end, our testing device analyses the unit-root persistence exhibited by the data while imposing very few constraints on the generating mechanism. We call our device the range unit-root (RUR) test since it is constructed from the running ranges of the series, from which we derive its limit distribution. These nonparametric statistics endow the test with a number of desirable properties: invariance to monotonic transformations of the series and robustness to the presence of important parameter shifts. Moreover, the RUR test outperforms the power of standard unit-root tests on near-unit-root stationary time series; it is invariant with respect to the innovations distribution and asymptotically immune to noise. An extension of the RUR test, called the forward–backward range unit-root (FB-RUR) test, improves the check in the presence of additive outliers. Finally, we illustrate the performance of both range tests and their discrepancies with the Dickey–Fuller unit-root test on exchange rate series. [source]
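
A hedged sketch of the running-range idea behind the RUR test: track the range of the first t observations and count how often it expands, since a unit-root series keeps producing new extremes while a stationary one does not. The exact statistic, its scaling and its critical values are given in the paper and are not reproduced here.

```python
# Hedged sketch of the running-range idea: a unit-root series keeps setting new
# extremes, so its running range R_t expands often; a stationary series does not.
# The paper's actual RUR statistic, scaling and critical values are not shown.
import numpy as np

def range_expansions(y):
    y = np.asarray(y, float)
    running_max = np.maximum.accumulate(y)
    running_min = np.minimum.accumulate(y)
    rng_t = running_max - running_min                 # running range R_t
    return np.sum(rng_t[1:] > rng_t[:-1]) / np.sqrt(len(y))

rng = np.random.default_rng(5)
e = rng.normal(size=2000)
print(round(range_expansions(np.cumsum(e)), 2))       # random walk: many new extremes
print(round(range_expansions(e), 2))                  # white noise: few new extremes
```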


A Dependence Metric for Possibly Nonlinear Processes

JOURNAL OF TIME SERIES ANALYSIS, Issue 5 2004
C. W. Granger
Abstract. A transformed metric entropy measure of dependence is studied which satisfies many desirable properties, including being a proper measure of distance. It is capable of good performance in identifying dependence even in possibly nonlinear time series, and is applicable for both continuous and discrete variables. A nonparametric kernel density implementation is considered here for many stylized models including linear and nonlinear MA, AR, GARCH, integrated series and chaotic dynamics. A related permutation test of independence is proposed and compared with several alternatives. [source]
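
A crude discretized sketch of a Hellinger-type dependence measure of the kind described (half the squared distance between the square roots of the joint density and the product of the marginals), estimated here on a 2-D histogram rather than with the kernel densities used in the paper.

```python
# Crude sketch of a Hellinger/metric-entropy style dependence measure:
# 0.5 * sum( (sqrt(p_xy) - sqrt(p_x * p_y))^2 ) over the cells of a 2-D
# histogram. The paper uses nonparametric kernel density estimates instead.
import numpy as np

def dependence_measure(x, y, bins=20):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    return 0.5 * np.sum((np.sqrt(pxy) - np.sqrt(px * py)) ** 2)

rng = np.random.default_rng(6)
x = rng.normal(size=5000)
print(round(dependence_measure(x, rng.normal(size=5000)), 3))                  # independent: near 0
print(round(dependence_measure(x, x**2 + 0.1 * rng.normal(size=5000)), 3))     # nonlinear dependence: larger
```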


Synthesis and Properties of Novel Fluorinated Poly(phenylene-co-imide)s

MACROMOLECULAR CHEMISTRY AND PHYSICS, Issue 3 2007
Wenmu Li
Abstract A new class of high-performance materials, fluorinated poly(phenylene-co-imide)s, was prepared by Ni(0)-catalyzed coupling of 2,5-dichlorobenzophenone with fluorinated dichlorophthalimide. The synthesized copolymers have high molecular weights (5.74 × 10^4 to 17.3 × 10^4 g mol^−1) and a combination of desirable properties such as high solubility in common organic solvents, film-forming ability, and excellent mechanical properties. The glass transition temperature (Tg) of the copolymers was readily tuned between 219 and 354 °C via systematic variation of the ratio of the two comonomers. The tough polymer films, obtained by casting from solution, had tensile strength, elongation at break, and tensile modulus values in the range of 66.7–266 MPa, 2.7–13.5%, and 3.13–4.09 GPa, respectively. The oxygen permeability coefficients and the permeability selectivity of oxygen to nitrogen of these copolymer membranes were in the range of 0.78–3.01 barrer [1 barrer = 10^−10 cm^3 (STP) cm/(cm^2·s·cmHg)] and 5.09–6.25, respectively. Consequently, these materials have shown promise as engineering plastics and gas-separation membrane materials. [source]


A Click Approach to Chiral-Dendronized Polyfluorene Derivatives

MACROMOLECULAR RAPID COMMUNICATIONS, Issue 23 2007
Zi-Tong Liu
Abstract A new kind of chiral-dendronized, binaphthyl-containing polyfluorene derivative has been synthesized efficiently through "click chemistry". The resulting copolymers exhibited desirable properties, such as excellent solubility, good thermal stability, and considerably high molecular weights. The photophysical properties of the copolymers were investigated in detail, and the results indicated that the combination of the chiral binaphthyl unit and the bulky dendron could effectively suppress intermolecular packing and aggregation. In addition, the investigation of the circular dichroism behavior of these chiral-dendronized copolymers showed a strong Cotton effect at long wavelength (373–379 nm), indicating that the chirality of the binaphthyl units was transferred to the whole polyfluorene backbone. [source]


Recent advances in selective opioid receptor agonists and antagonists

MEDICINAL RESEARCH REVIEWS, Issue 2 2004
Masakatsu Eguchi
Abstract Opioid analgesics provide outstanding benefits for relief of severe pain. The mechanisms of the analgesia, which is accompanied by some side effects, have been investigated by many scientists to shed light on the complex biological processes at the molecular level. New opioid drugs and therapies with more desirable properties can be developed on the basis of accurate insight into the opioid ligand–receptor interaction and clear knowledge of the pharmacological behavior of opioid receptors and the associated proteins. Toward this goal, recent advances in selective opioid receptor agonists and antagonists, including opioid ligand–receptor interactions, are summarized in this review article. © 2003 Wiley Periodicals, Inc. Med Res Rev, 24, No. 2, 182–212, 2004 [source]


Damped oscillations in the adaptive response of the iron homeostasis network of E. coli

MOLECULAR MICROBIOLOGY, Issue 2 2010
Amnon Amir
Summary Living organisms often have to adapt to sudden environmental changes and reach homeostasis. To achieve adaptation, cells deploy motifs such as feedback in their genetic networks, endowing the cellular response with desirable properties. We studied the iron homeostasis network of E. coli, which employs feedback loops to regulate iron usage and uptake, while maintaining intracellular iron at non-toxic levels. Using fluorescence reporters for iron-dependent promoters in bulk and microfluidics-based, single-cell experiments, we show that E. coli cells exhibit damped oscillations in gene expression, following sudden reductions in external iron levels. The oscillations, lasting for several generations, are independent of position along the cell cycle. Experiments with mutants in network components demonstrate the involvement of iron uptake in the oscillations. Our findings suggest that the response is driven by intracellular iron oscillations large enough to induce nearly full network activation/deactivation. We propose a mathematical model based on a negative feedback loop closed by rapid iron uptake, and including iron usage and storage, which captures the main features of the observed behaviour. Taken together, our results shed light on the control of iron metabolism in bacteria and suggest that the oscillations represent a compromise between the requirements of stability and speed of response. [source]
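
A toy negative-feedback loop in the spirit of the model described: free iron represses expression of the uptake machinery, and uptake resupplies iron. All parameters are invented and the volume is held constant, so this does not reproduce the paper's model; it only illustrates how a sudden drop in external iron can relax back through a damped, oscillatory transient.

```python
# Toy negative-feedback loop: free iron (fe) represses the uptake machinery (u)
# via a steep Hill function; uptake resupplies iron. Parameters are invented;
# the paper's variable-volume whole-cell model is far richer than this sketch.
import numpy as np

K_USE, D_U, V_MAX, K_FE, HILL = 0.2, 0.2, 1.0, 1.0, 6

def simulate(ext_iron, fe, u, t_end=120.0, dt=0.01):
    fe_traj = []
    for _ in range(int(t_end / dt)):
        dfe = ext_iron * u - K_USE * fe                        # uptake minus usage
        du = V_MAX / (1.0 + (fe / K_FE) ** HILL) - D_U * u     # iron represses uptake expression
        fe, u = fe + dt * dfe, u + dt * du
        fe_traj.append(fe)
    return np.array(fe_traj), fe, u

# Equilibrate at high external iron, then step external iron down
_, fe_eq, u_eq = simulate(ext_iron=1.0, fe=1.0, u=0.2)
traj, _, _ = simulate(ext_iron=0.3, fe=fe_eq, u=u_eq, t_end=200.0)
print(np.round(traj[::300][:10], 3))   # iron relaxes to a lower steady state with a damped over/undershoot
```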


Decentralized control strategies for dynamic routing

OPTIMAL CONTROL APPLICATIONS AND METHODS, Issue 6 2002
İftar
Abstract The routing problem in multi-destination data communication networks is considered. A dynamic model, which can incorporate arbitrary, different, time-varying processing delays at different nodes, is developed to describe the network dynamics. Based on this model, controllers for routing control are proposed. The structures of the proposed controllers are motivated by an optimal control problem. These proposed controllers are completely decentralized in the sense that all necessary on-line computations are done locally at each node. Furthermore, the information needed for these computations is related only to the queue lengths at the present node and the adjacent downstream nodes. Both cases when the controls can be continuously changed and when the controls are updated at discrete time instants are considered. In the latter case the controls at different nodes may be updated at different time instants (i.e. the network is not necessarily synchronous). It is shown that the controllers enjoy many desirable properties; in particular, they clear all the queues of the network in the absence of external message arrivals, in finite time. Furthermore, the controllers do not direct messages around a loop. They also have certain robustness properties. Some simulation results relating to a number of realistic problems are presented to illustrate various features of the controllers. Copyright © 2002 John Wiley & Sons, Ltd. [source]


Design Strategies for the Multivariate Exponentially Weighted Moving Average Control Chart

QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 6 2004
Murat Caner Testik
Abstract The multivariate exponentially weighted moving average (MEWMA) control chart has received significant attention from researchers and practitioners because of its desirable properties. There are several different approaches to the design of MEWMA control charts: statistical design; economic–statistical design; and robust design. In this paper a review and comparison of these design strategies is provided. Copyright © 2004 John Wiley & Sons, Ltd. [source]
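
For context, a sketch of the MEWMA statistic whose design parameters (the smoothing constant lambda and the control limit h) are exactly what the reviewed strategies choose: Z_i = lambda·X_i + (1 − lambda)·Z_{i−1} is charted through T²_i = Z_i' Σ_{Z_i}^{−1} Z_i. The in-control covariance, lambda and h below are placeholders, not recommended design values.

```python
# Sketch of the MEWMA statistic that the design strategies above tune:
# Z_i = lam*X_i + (1-lam)*Z_{i-1},  T2_i = Z_i' inv(Sigma_Z_i) Z_i,
# signal when T2_i exceeds a control limit h. Sigma, lam and h are placeholders;
# choosing lam and h is exactly the design problem the paper reviews.
import numpy as np

def mewma(X, Sigma, lam=0.1, h=10.0):
    p = X.shape[1]
    Z = np.zeros(p)
    signals = []
    for i, x in enumerate(X, start=1):
        Z = lam * x + (1 - lam) * Z
        Sigma_Z = lam * (1 - (1 - lam) ** (2 * i)) / (2 - lam) * Sigma
        T2 = Z @ np.linalg.solve(Sigma_Z, Z)
        signals.append(T2 > h)
    return np.array(signals)

rng = np.random.default_rng(7)
Sigma = np.eye(2)
X = rng.multivariate_normal([0, 0], Sigma, size=100)
X[50:] += [0.8, 0.8]                       # small sustained mean shift at t = 50
alarms = mewma(X, Sigma)
print(np.argmax(alarms) if alarms.any() else "no signal")
```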


Step-reset options: Design and valuation

THE JOURNAL OF FUTURES MARKETS, Issue 2 2002
L. Paul Hsueh
This study proposes a new design of reset options in which the option's exercise price adjusts gradually, based on the amount of time the underlying asset spends beyond prespecified reset levels. Relative to standard reset options, a step-reset design offers several desirable properties. First of all, it demands a lower option premium but preserves the same desirable reset attribute that appeals to market investors. Second, it overcomes the disturbing problem of the delta jump exhibited by standard reset options, and thus greatly reduces the difficulties in risk management for reset option sellers who hedge dynamically. Moreover, the step-reset feature makes the option more robust against short-term price movements of the underlying and removes the pressure of price manipulation often associated with standard reset options. To value this innovative option product, we develop a tree-based valuation algorithm in this study. Specifically, we parameterize the trinomial tree model to correctly account for the discrete nature of reset monitoring. The use of a lattice model gives us the flexibility to price step-reset options with American exercise rights. Finally, to accommodate the path-dependent exercise price, we introduce a state-to-state recursive pricing procedure to properly capture the path-dependent step-reset effect and enhance computational efficiency. © 2002 John Wiley & Sons, Inc. Jrl Fut Mark 22: 155–171, 2002 [source]
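
A plain (non-reset) trinomial-tree pricer for a European call, i.e. the lattice backbone that the study parameterizes and then extends with discrete reset monitoring and a state-to-state recursion for the path-dependent exercise price. The reset logic is not reproduced, and the up/middle/down probabilities below are one common parameterization rather than necessarily the paper's.

```python
# Plain trinomial-tree pricer for a European call -- the lattice backbone that
# the study extends with discrete reset monitoring and a state-to-state
# recursion for the path-dependent exercise price (not reproduced here).
import math

def trinomial_european_call(S0, K, r, sigma, T, steps):
    dt = T / steps
    u = math.exp(sigma * math.sqrt(2 * dt))
    a, b = math.exp(r * dt / 2), math.exp(sigma * math.sqrt(dt / 2))
    pu = ((a - 1 / b) / (b - 1 / b)) ** 2      # one common parameterisation
    pd = ((b - a) / (b - 1 / b)) ** 2
    pm = 1 - pu - pd
    disc = math.exp(-r * dt)
    # terminal payoffs on the 2*steps + 1 nodes
    values = [max(S0 * u ** (steps - j) - K, 0.0) for j in range(2 * steps + 1)]
    # backward induction through the lattice
    for n in range(steps - 1, -1, -1):
        values = [disc * (pu * values[j] + pm * values[j + 1] + pd * values[j + 2])
                  for j in range(2 * n + 1)]
    return values[0]

print(round(trinomial_european_call(100, 100, 0.05, 0.2, 1.0, steps=200), 3))
# close to the Black-Scholes value of about 10.45
```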


Network Competition and Access Charge Rules

THE MANCHESTER SCHOOL, Issue 1 2002
Toker Doganoglu
This paper presents a model of two competing local telecommunications networks which are mandated to interconnect. After negotiating the access charges, the companies engage in price competition. Given the prices, each consumer selects a network and determines the consumption of phone calls. Using a discrete/continuous consumer choice model, it is shown that a pure strategy equilibrium exists quite generally and satisfies desirable properties. This equilibrium can be implemented by a simple rule that sets the access charges at a common discount from the retail prices. The rule requires no information, and the discount factor is chosen by the companies through negotiations. Finally, if the networks are close substitutes, the retail prices obtained by imposing this rule will approximate the efficient prices. [source]


A Welfare Loss Measure of Unemployment with An Empirical Illustration

THE MANCHESTER SCHOOL, Issue 2 2001
Satya Paul
Based on interpersonal comparisons, a welfare loss measure of unemployment is developed. The proposed measure is additively decomposable, which enables us to assess the group-specific contributions to the aggregate welfare cost. It possesses certain other desirable properties: it is sensitive to the unemployment rate, the mean duration of unemployment, and the relative differences in the duration of unemployment. Since all of these can vary differently over the years and across regions, the proposed measure is most suitable for comparing the welfare cost of unemployment over a period of time or across regions. An empirical exercise based on the Australian labour force survey data illustrates the usefulness and easy applicability of the proposed measure. [source]


Lumped dynamic model for a bistable genetic regulatory circuit within a variable-volume whole-cell modelling framework

ASIA-PACIFIC JOURNAL OF CHEMICAL ENGINEERING, Issue 6 2009
Gheorghe Maria
Abstract Genetic regulatory circuits (GRCs), including switches, oscillators, signal amplifiers or filters, and signalling circuits, are responsible for the control of cell metabolism. Modelling such complex GRCs is a difficult task due to the high complexity of the (only partly known) process and the structural, functional and temporal hierarchical organisation of the cell system. Modular lumped representation, grouping some reactions/components and including different types of variables, is a promising alternative allowing individual module characterisation and the elaboration of extended simulation platforms for representing the GRC dynamic properties and designing new cell functions. Such models allow the in silico design of modified micro-organisms with desirable properties for practical applications in bioprocess engineering and biotechnology. In the present work, the analysis of a designed bistable switch formed by two gene expression modules is performed in a variable-volume and whole-cell modelling framework, by mimicking Escherichia coli cell growth. The advantages but also the limitations of such a new approach are investigated, by using Hill-type kinetics combined with a few elementary steps, with the aim of better representing the adjustable levels of key intermediates tuning the GRC regulatory properties in terms of stability strength, species connectivity, responsiveness, and regulatory efficiency under stationary and dynamic perturbations. Copyright © 2009 Curtin University of Technology and John Wiley & Sons, Ltd. [source]
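
A constant-volume sketch of the classical two-gene bistable switch with Hill-type mutual repression, integrated from two initial conditions to show the two stable states. The paper embeds such a circuit in a variable-volume whole-cell E. coli framework, which is far richer and is not reproduced; all parameter values here are invented.

```python
# Constant-volume sketch of a two-gene toggle switch with Hill-type mutual
# repression (the classical bistable-switch skeleton); the paper's
# variable-volume whole-cell E. coli setting is not reproduced.
import numpy as np

ALPHA1, ALPHA2, BETA, GAMMA, DEG = 10.0, 10.0, 3.0, 3.0, 1.0

def integrate(u, v, t_end=50.0, dt=0.01):
    for _ in range(int(t_end / dt)):
        du = ALPHA1 / (1.0 + v ** BETA) - DEG * u    # repressor 1, repressed by repressor 2
        dv = ALPHA2 / (1.0 + u ** GAMMA) - DEG * v   # repressor 2, repressed by repressor 1
        u, v = u + dt * du, v + dt * dv
    return u, v

# Two different initial conditions settle into two different stable states
print(np.round(integrate(u=5.0, v=0.1), 2))   # high-u / low-v state
print(np.round(integrate(u=0.1, v=5.0), 2))   # low-u / high-v state
```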


Exact Confidence Bounds Following Adaptive Group Sequential Tests

BIOMETRICS, Issue 2 2009
Werner Brannath
Summary We provide a method for obtaining confidence intervals, point estimates, and p-values for the primary effect size parameter at the end of a two-arm group sequential clinical trial in which adaptive changes have been implemented along the way. The method is based on applying the adaptive hypothesis testing procedure of Müller and Schäfer (2001, Biometrics 57, 886–891) to a sequence of dual tests derived from the stage-wise adjusted confidence interval of Tsiatis, Rosner, and Mehta (1984, Biometrics 40, 797–803). In the nonadaptive setting this confidence interval is known to provide exact coverage. In the adaptive setting exact coverage is guaranteed provided the adaptation takes place at the penultimate stage. In general, however, all that can be claimed theoretically is that the coverage is guaranteed to be conservative. Nevertheless, extensive simulation experiments, supported by an empirical characterization of the conditional error function, demonstrate convincingly that for all practical purposes the coverage is exact and the point estimate is median unbiased. No procedure has previously been available for producing confidence intervals and point estimates with these desirable properties in an adaptive group sequential setting. The methodology is illustrated by an application to a clinical trial of deep brain stimulation for Parkinson's disease. [source]


Recent Progress in Biomolecular Engineering

BIOTECHNOLOGY PROGRESS, Issue 1 2000
Dewey D. Y. Ryu
During the next decade or so, there will be significant and impressive advances in biomolecular engineering, especially in our understanding of the biological roles of various biomolecules inside the cell. The advances in high throughput screening technology for discovery of target molecules and the accumulation of functional genomics and proteomics data at accelerating rates will enable us to design and discover novel biomolecules and proteins on a rational basis in diverse areas of pharmaceutical, agricultural, industrial, and environmental applications. As an applied molecular evolution technology, DNA shuffling will play a key role in biomolecular engineering. In contrast to the point mutation techniques, DNA shuffling exchanges large functional domains of sequences to search for the best candidate molecule, thus mimicking and accelerating the process of sexual recombination in the evolution of life. The phage-display system of combinatorial peptide libraries will be extensively exploited to design and create many novel proteins, as a result of the relative ease of screening and identifying desirable proteins. Even though this system has so far been employed mainly in screening the combinatorial antibody libraries, its application will be extended further into the science of protein-receptor or protein-ligand interactions. The bioinformatics for genome and proteome analyses will contribute substantially toward ever more accelerated advances in the pharmaceutical industry. Biomolecular engineering will no doubt become one of the most important scientific disciplines, because it will enable systematic and comprehensive analyses of gene expression patterns in both normal and diseased cells, as well as the discovery of many new high-value molecules. When the functional genomics database, EST and SAGE techniques, microarray technique, and proteome analysis by 2-dimensional gel electrophoresis or capillary electrophoresis in combination with mass spectrometer are all put to good use, biomolecular engineering research will yield new drug discoveries, improved therapies, and significantly improved or new bioprocess technology. With the advances in biomolecular engineering, the rate of finding new high-value peptides or proteins, including antibodies, vaccines, enzymes, and therapeutic peptides, will continue to accelerate. The targets for the rational design of biomolecules will be broad, diverse, and complex, but many application goals can be achieved through the expansion of knowledge based on biomolecules and their roles and functions in cells and tissues. Some engineered biomolecules, including humanized Mab's, have already entered the clinical trials for therapeutic uses. Early results of the trials and their efficacy are positive and encouraging. Among them, Herceptin, a humanized Mab for breast cancer treatment, became the first drug designed by a biomolecular engineering approach and was approved by the FDA. Soon, new therapeutic drugs and high-value biomolecules will be designed and produced by biomolecular engineering for the treatment or prevention of not-so-easily cured diseases such as cancers, genetic diseases, age-related diseases, and other metabolic diseases. Many more industrial enzymes, which will be engineered to confer desirable properties for the process improvement and manufacturing of high-value biomolecular products at a lower production cost, are also anticipated. 
New metabolites, including novel antibiotics that are active against resistant strains, will also be produced soon by recombinant organisms having de novo engineered biosynthetic pathway enzyme systems. The biomolecular engineering era is here, and many of benefits will be derived from this field of scientific research for years to come if we are willing to put it to good use. [source]