Sequential Approach

Selected Abstracts


ChemInform Abstract: The Direct Synthesis of Unsymmetrical Vicinal Diamines from Terminal Alkynes: A Tandem Sequential Approach for the Synthesis of Imidazolidinones.

CHEMINFORM, Issue 20 2009
Alison V. Lee
Abstract ChemInform is a weekly abstracting service, delivering concise, at-a-glance information extracted from about 200 leading journals. To access the ChemInform Abstract of an article published elsewhere, please select a "Full Text" option. The original article is trackable via the "References" option. [source]


Welfare Comparisons: Sequential Procedures for Heterogeneous Populations

ECONOMICA, Issue 276 2002
Peter J. Lambert
Some analysts use sequential dominance criteria, and others use equivalence scales in combination with non-sequential dominance tests, to make welfare comparisons of joint distributions of income and needs. In this paper we present a new sequential procedure which copes with situations in which sequential dominance fails. We also demonstrate that the recommendations deriving from the sequential approach are valid for distributions of equivalent income whatever equivalence scale the analyst might adopt. Thus, the paper marries together the sequential and equivalizing approaches, seen as alternatives in much previous literature. All results are specified in forms that allow for demographic differences in the populations being compared. [source]
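The sequential procedure the abstract describes can be illustrated with a minimal sketch: merge subpopulations from the neediest group onward and test generalized Lorenz dominance at each stage. The function names, the ordering of groups, and the common quantile grid are assumptions of this illustration, not the authors' implementation.

```python
import numpy as np

def generalized_lorenz(incomes):
    """Generalized Lorenz ordinates: cumulative means of sorted incomes."""
    x = np.sort(np.asarray(incomes, dtype=float))
    return np.cumsum(x) / len(x)

def gl_dominates(a, b, grid_size=100):
    """True if distribution a generalized-Lorenz dominates b on a common quantile grid."""
    qs = np.linspace(0.01, 1.0, grid_size)
    ga = np.interp(qs, np.linspace(1 / len(a), 1.0, len(a)), generalized_lorenz(a))
    gb = np.interp(qs, np.linspace(1 / len(b), 1.0, len(b)), generalized_lorenz(b))
    return bool(np.all(ga >= gb))

def sequential_dominance(groups_a, groups_b):
    """Check dominance on successively merged subpopulations, starting from
    the neediest group (groups must be ordered most to least needy)."""
    merged_a, merged_b = [], []
    for ga, gb in zip(groups_a, groups_b):
        merged_a.extend(ga)
        merged_b.extend(gb)
        if not gl_dominates(merged_a, merged_b):
            return False
    return True
```

A failure at any stage is the "sequential dominance fails" situation the new procedure in the paper is designed to handle.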


Combining spatial and phylogenetic eigenvector filtering in trait analysis

GLOBAL ECOLOGY, Issue 6 2009
Ingolf Kühn
ABSTRACT Aim: To analyse the effects of simultaneously using spatial and phylogenetic information in removing spatial autocorrelation of residuals within a multiple regression framework of trait analysis. Location: Switzerland, Europe. Methods: We used an eigenvector filtering approach to analyse the relationship between spatial distribution of a trait (flowering phenology) and environmental covariates in a multiple regression framework. Eigenvector filters were calculated from ordinations of distance matrices. Distance matrices were either based on pure spatial information, pure phylogenetic information or spatially structured phylogenetic information. In the multiple regression, those filters were selected which best reduced Moran's I coefficient of residual autocorrelation. These were added as covariates to a regression model of environmental variables explaining trait distribution. Results: The simultaneous provision of spatial and phylogenetic information was effectively able to remove residual autocorrelation in the analysis. Adding phylogenetic information was superior to adding purely spatial information. Applying filters altered the results, i.e. different environmental predictors were seen to be significant. Nevertheless, mean annual temperature and calcareous substrate remained the most important predictors to explain the onset of flowering in Switzerland; namely, the warmer the temperature and the more calcareous the substrate, the earlier the onset of flowering. A sequential approach, i.e. first removing the phylogenetic signal from traits and then applying a spatial analysis, did not provide more information or yield less autocorrelation than simple or purely spatial models. Main conclusions: The combination of spatial and spatio-phylogenetic information is recommended in the analysis of trait distribution data in a multiple regression framework. 
This approach is an efficient means for reducing residual autocorrelation and for testing the robustness of results, including the indication of incomplete parameterizations, and can facilitate ecological interpretation. [source]
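The filtering idea in this abstract can be sketched in a few lines: build Moran eigenvector maps from a (symmetric) spatial weights matrix, then greedily add the filters that most reduce Moran's I of the regression residuals. All names and the greedy selection rule here are assumptions of this sketch, not the authors' code.

```python
import numpy as np

def moran_i(residuals, W):
    """Moran's I of residuals under symmetric spatial weights W (zero diagonal).
    A tiny constant guards against division by an exactly-zero residual norm."""
    z = residuals - residuals.mean()
    n = len(z)
    return (n / W.sum()) * (z @ W @ z) / (z @ z + 1e-12)

def eigenvector_filters(W):
    """Moran eigenvector maps: eigenvectors of the doubly centred weights matrix,
    ordered from the largest (most positively autocorrelated) eigenvalue down."""
    n = W.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n          # centring operator
    vals, vecs = np.linalg.eigh(H @ W @ H)       # W assumed symmetric
    return vecs[:, np.argsort(vals)[::-1]]

def filter_residual_autocorrelation(X, y, W, n_filters=3):
    """Greedily add the filters that most reduce |Moran's I| of OLS residuals."""
    E = eigenvector_filters(W)
    chosen = []
    for _ in range(n_filters):
        best, best_j = None, None
        for j in range(E.shape[1]):
            if j in chosen:
                continue
            Xj = np.column_stack([X] + [E[:, c] for c in chosen + [j]])
            beta, *_ = np.linalg.lstsq(Xj, y, rcond=None)
            i_abs = abs(moran_i(y - Xj @ beta, W))
            if best is None or i_abs < best:
                best, best_j = i_abs, j
        chosen.append(best_j)
    return chosen, best
```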


Bayesian meta-modelling of engineering design simulations: a sequential approach with adaptation to irregularities in the response behaviour

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 15 2005
A. Farhang-Mehr
Abstract Among current meta-modelling approaches, Bayesian-based interpolative methods have received significant attention in the literature. These methods are particularly known for their capability to adapt to the response function behaviour in order to generate good meta-models with fewer experiments. Current Bayesian adaptation techniques, however, are mainly based on the assumption that some variables are more important (or sensitive) than others. These less sensitive variables are weighted less or ignored to reduce the dimension of the design space. This assumption limits the scope and applicability of these models since in many practical cases none of the variables can be completely ignored or weighted less than others. This paper proposes a pragmatic approach that identifies regions of the design space where more experiments are needed based on the response function behaviour. The proposed approach adaptively utilizes the information obtained from previous experiments, builds interim meta-models, and identifies 'irregular' regions in which more experiments are needed. The behaviour of the interim meta-model is then quantified as a spatial function and incorporated into the next stage of the design to sequentially improve the accuracy of the obtained meta-model. The performance of the new approach is demonstrated using a numerical and an engineering example. Copyright © 2005 John Wiley & Sons, Ltd. [source]


Deterministic global optimization of nonlinear dynamic systems

AICHE JOURNAL, Issue 4 2007
Youdong Lin
Abstract A new approach is described for the deterministic global optimization of dynamic systems, including optimal control problems. The method is based on interval analysis and Taylor models and employs a type of sequential approach. A key feature of the method is the use of a new validated solver for parametric ODEs, which is used to produce guaranteed bounds on the solutions of dynamic systems with interval-valued parameters. This is combined with a new technique for domain reduction based on the use of Taylor models in an efficient constraint propagation scheme. The result is that an ε-global optimum can be found with both mathematical and computational certainty. Computational studies on benchmark problems are presented showing that this new approach provides significant improvements in computational efficiency, well over an order of magnitude in most cases, relative to other recently described methods. © 2007 American Institute of Chemical Engineers AIChE J, 2007 [source]
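The ε-global certificate idea can be shown on a static toy problem: branch-and-bound with guaranteed interval bounds terminates once the gap between the best verified upper bound and the smallest lower bound is at most ε. This sketch uses a hand-written natural interval extension of an assumed objective f(x) = x² − x; the paper's method uses validated ODE solvers and Taylor models, which this does not attempt to reproduce.

```python
import heapq

def interval_minimize(f_bounds, lo, hi, eps=1e-4):
    """Best-first branch-and-bound. f_bounds(a, b) must return guaranteed
    (lower, upper) bounds on f over [a, b]. Returns a box and the bound pair
    certifying an eps-global minimum."""
    lb0, ub0 = f_bounds(lo, hi)
    heap = [(lb0, lo, hi)]
    best_ub = ub0
    while heap:
        lb, a, b = heapq.heappop(heap)
        if best_ub - lb <= eps:              # eps-global certificate reached
            return a, b, lb, best_ub
        m = 0.5 * (a + b)
        for c, d in ((a, m), (m, b)):
            l, u = f_bounds(c, d)
            best_ub = min(best_ub, u)
            if l <= best_ub:                 # discard boxes that cannot hold the optimum
                heapq.heappush(heap, (l, c, d))

def fb(a, b):
    """Natural interval extension of the toy objective f(x) = x*x - x."""
    sq_lo = 0.0 if a <= 0.0 <= b else min(a * a, b * b)
    sq_hi = max(a * a, b * b)
    return sq_lo - b, sq_hi - a
```

The true minimum of x² − x on [−1, 2] is −0.25 at x = 0.5, and the returned bound pair brackets it to within ε.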


A Conceptual Framework for Computing U.S. Non-manufacturing PMI Indexes

JOURNAL OF SUPPLY CHAIN MANAGEMENT, Issue 3 2007
Danny I. Cho
SUMMARY This research develops a conceptual framework for computing new weighted composite indexes for the U.S. non-manufacturing sector using a two-step sequential approach: a correlation analysis, followed by a principal components analysis. The results suggest that different weights (i.e., the highest weight to New orders and the lowest weight to Supply deliveries) be assigned if all diffusion indexes in the initial set of six are retained. It also turns out that a simpler index based on two (New orders and Supply deliveries) of the six diffusion indexes, with equal weights, can be computed with little information loss. The new indexes are shown to correlate highly with many key business/economic indicators. [source]
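The second step of such a framework, deriving index weights from a principal components analysis, can be sketched as follows. Taking first-principal-component loadings (rescaled to sum to one) as weights is a common convention assumed here, not necessarily the exact scheme of the paper.

```python
import numpy as np

def composite_index_weights(X):
    """PCA-based weights: standardize the component series, then use the
    first-principal-component loadings, rescaled to sum to 1."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(Z, rowvar=False))
    w = np.abs(vecs[:, -1])                  # loadings of the largest eigenvalue
    return w / w.sum()

def composite_index(X):
    """Weighted composite of the raw component series."""
    return X @ composite_index_weights(X)
```

A series that co-moves strongly with the common factor receives a larger weight than one that is mostly noise, mirroring the unequal weights the abstract reports.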


Simultaneous Data Reconciliation and Parameter Estimation in Bulk Polypropylene Polymerizations in Real Time

MACROMOLECULAR SYMPOSIA, Issue 1 2006
Diego Martinez Prata
Abstract This work presents the implementation of a methodology for dynamic data reconciliation and simultaneous estimation of quality and productivity parameters in real time, using data from an industrial bulk Ziegler-Natta propylene polymerization process. A phenomenological model of the real process, based on mass and energy balances, was developed and implemented for interpretation of actual plant data. The resulting nonlinear dynamic optimization problem was solved using a sequential approach on a time window specifically tuned for the studied process. Despite the essentially isothermal operating conditions, the results show that including energy balance constraints increases information redundancy and consequently yields better parameter estimates than those obtained when the energy balance constraints are not considered (Prata et al., 2005). Examples indicate that the proposed technique can be used very effectively for monitoring polymer quality and identifying process malfunctions in real time, even when laboratory analyses are scarce. [source]
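The core of data reconciliation, adjusting redundant measurements as little as possible so that balance constraints hold exactly, has a closed form in the linear, steady-state case. This simplified sketch (the paper solves a nonlinear dynamic problem) uses a hypothetical three-stream mass balance:

```python
import numpy as np

def reconcile(m, sigma2, A):
    """Weighted least-squares reconciliation: minimally adjust measurements m
    (with variances sigma2) so the linear balance constraints A x = 0 hold.
    Closed form: x = m - S A' (A S A')^-1 A m, with S = diag(sigma2)."""
    m = np.asarray(m, dtype=float)
    S = np.diag(np.asarray(sigma2, dtype=float))
    return m - S @ A.T @ np.linalg.solve(A @ S @ A.T, A @ m)
```

Adding an extra (energy) balance row to A increases redundancy, which is why the abstract reports better estimates when energy constraints are included.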


RANDOM APPROXIMATED GREEDY SEARCH FOR FEATURE SUBSET SELECTION

ASIAN JOURNAL OF CONTROL, Issue 3 2004
Feng Gao
ABSTRACT In this paper we propose a sequential approach called Random Approximated Greedy Search (RAGS) and apply it to feature subset selection for regression. It extends the GRASP/Super-heuristics approach to complex stochastic combinatorial optimization problems in which performance estimation is very expensive. The key ideas of RAGS come from the methodology of Ordinal Optimization (OO): we soften the goal and define success as good enough rather than necessarily optimal. This allows us to use a cruder estimation model and to treat the performance estimation error as randomness, so it directly provides the random perturbations mandated by the GRASP/Super-heuristics approach while saving substantial computational effort. Through multiple independent runs of RAGS, we show that it obtains better solutions than standard greedy search at comparable computational effort. [source]
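A GRASP-style randomized greedy selection of the kind the abstract builds on can be sketched as follows: each greedy step picks randomly from a restricted candidate list of the best few candidates under a (possibly noisy, cheap) score, and the best of several independent runs is kept. Names, the restricted-candidate-list rule, and the scoring interface are assumptions of this sketch.

```python
import random

def rags_select(features, score, k, rcl_size=3, runs=10, seed=0):
    """Pick a k-feature subset. score(subset) may be a crude, noisy estimate;
    its error supplies the random perturbation that greedy search lacks."""
    rng = random.Random(seed)
    best_subset, best_score = None, float("-inf")
    for _ in range(runs):
        subset, remaining = [], list(features)
        for _ in range(k):
            ranked = sorted(remaining, key=lambda f: score(subset + [f]), reverse=True)
            choice = rng.choice(ranked[:rcl_size])   # restricted candidate list
            subset.append(choice)
            remaining.remove(choice)
        s = score(subset)
        if s > best_score:
            best_subset, best_score = subset, s
    return sorted(best_subset), best_score
```

With rcl_size = 1 this reduces to standard greedy search; larger lists plus multiple independent runs give the diversification the abstract credits for the improved solutions.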


Pluralist action research: a review of the information systems literature,

INFORMATION SYSTEMS JOURNAL, Issue 1 2009
Mike Chiasson
Abstract Action research (AR) has for many years been promoted and practised as one way to conduct field studies within the information systems (IS) discipline. Based on a review of articles published in leading journals, we explore how IS researchers practise AR. Our review suggests that AR lends itself strongly towards pluralist approaches which facilitate the production of both theoretical and practical knowledge. First, on the level of each study we analyse how research and problem-solving activities are mixed, in three ways: the research dominant, the problem-solving dominant and the interactive approaches. Second, in the context of the wider research programme in which the study is situated, we analyse how AR is mixed with other research methods, in two ways: the dominant and the sequential approaches. We argue that these pluralist practices of mixing types of research activities and types of research methods provide IS action researchers with a rich portfolio of approaches to knowledge production. This portfolio helps them address the risks involved in AR to ensure their efforts contribute to the literature as well as to practical problem-solving. [source]


Two-Stage Group Sequential Robust Tests in Family-Based Association Studies: Controlling Type I Error

ANNALS OF HUMAN GENETICS, Issue 4 2008
Lihan K. Yan
Summary In family-based association studies, an optimal test statistic with asymptotic normal distribution is available when the underlying genetic model is known (e.g., recessive, additive, multiplicative, or dominant). In practice, however, genetic models for many complex diseases are usually unknown. Using a single test statistic optimal for one genetic model may lose substantial power when the model is mis-specified. When a family of genetic models is scientifically plausible, the maximum of several tests, each optimal for a specific genetic model, is robust against the model mis-specification. This robust test is preferred over a single optimal test. Recently, cost-effective group sequential approaches have been introduced to genetic studies. The group sequential approach allows interim analyses and has been applied to many test statistics, but not to the maximum statistic. When the group sequential method is applied, type I error should be controlled. We propose and compare several approaches of controlling type I error rates when group sequential analysis is conducted with the maximum test for family-based candidate-gene association studies. For a two-stage group sequential robust procedure with a single interim analysis, two critical values for the maximum tests are provided based on a given alpha spending function to control the desired overall type I error. [source]
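The maximum statistic referred to above can be illustrated for case-control genotype counts: compute a Cochran-Armitage trend statistic under each plausible genetic model and take the maximum absolute value (often called MAX3). This sketch uses the asymptotic variance form without a finite-population correction and the conventional recessive/additive/dominant scores; it does not implement the paper's two-stage critical values.

```python
import numpy as np

def trend_z(cases, controls, x):
    """Cochran-Armitage trend statistic for genotype scores x (asymptotic form)."""
    r, s = np.asarray(cases, float), np.asarray(controls, float)
    n = r + s
    N, R = n.sum(), r.sum()
    U = np.sum(x * (r - R * n / N))
    var = (R / N) * (1 - R / N) * (np.sum(x ** 2 * n) - np.sum(x * n) ** 2 / N)
    return U / np.sqrt(var)

def max3(cases, controls):
    """Robust MAX3: maximum |Z| over recessive, additive and dominant scores."""
    scores = [np.array(s, float) for s in ([0, 0, 1], [0, 1, 2], [0, 1, 1])]
    return max(abs(trend_z(cases, controls, x)) for x in scores)
```

Because MAX3 is not normally distributed, its critical values (and the alpha spending across interim analyses) must be derived separately, which is the problem the paper addresses.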


Model–data synthesis in terrestrial carbon observation: methods, data requirements and data uncertainty specifications

GLOBAL CHANGE BIOLOGY, Issue 3 2005
M. R. Raupach
Systematic, operational, long-term observations of the terrestrial carbon cycle (including its interactions with water, energy and nutrient cycles and ecosystem dynamics) are important for the prediction and management of climate, water resources, food resources, biodiversity and desertification. To contribute to these goals, a terrestrial carbon observing system requires the synthesis of several kinds of observation into terrestrial biosphere models encompassing the coupled cycles of carbon, water, energy and nutrients. Relevant observations include atmospheric composition (concentrations of CO2 and other gases); remote sensing; flux and process measurements from intensive study sites; in situ vegetation and soil monitoring; weather, climate and hydrological data; and contemporary and historical data on land use, land use change and disturbance (grazing, harvest, clearing, fire). A review of model–data synthesis tools for terrestrial carbon observation identifies 'nonsequential' and 'sequential' approaches as major categories, differing according to whether data are treated all at once or sequentially. The structure underlying both approaches is reviewed, highlighting several basic commonalities in formalism and data requirements. An essential commonality is that for all model–data synthesis problems, both nonsequential and sequential, data uncertainties are as important as data values themselves and have a comparable role in determining the outcome. Given the importance of data uncertainties, there is an urgent need for soundly based uncertainty characterizations for the main kinds of data used in terrestrial carbon observation. The first requirement is a specification of the main properties of the error covariance matrix. 
As a step towards this goal, semi-quantitative estimates are made of the main properties of the error covariance matrix for four kinds of data essential for terrestrial carbon observation: remote sensing of land surface properties, atmospheric composition measurements, direct flux measurements, and measurements of carbon stores. [source]
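The claim that data uncertainties matter as much as data values is easy to see in the simplest sequential synthesis scheme, a scalar Kalman-style update, where the blending weight depends only on the variances. This is a generic textbook illustration, not a method from the review.

```python
def kalman_update(x, P, z, R):
    """Blend model state x (variance P) with one observation z (error variance R).
    The gain K, and hence the outcome, is set entirely by the uncertainties."""
    K = P / (P + R)
    return x + K * (z - x), (1 - K) * P

def assimilate(x, P, observations):
    """Sequentially assimilate (value, error-variance) pairs."""
    for z, R in observations:
        x, P = kalman_update(x, P, z, R)
    return x, P
```

An observation with a tiny error variance pulls the state almost entirely onto the data, while a noisy one barely moves it, whatever its value, which is why the error covariance specification is singled out as the first requirement.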