Popular Approach



Selected Abstracts


Power and sample size when multiple endpoints are considered

PHARMACEUTICAL STATISTICS: THE JOURNAL OF APPLIED STATISTICS IN THE PHARMACEUTICAL INDUSTRY, Issue 3 2007
Stephen Senn
Abstract A common approach to analysing clinical trials with multiple outcomes is to control the probability, for the trial as a whole, of making at least one incorrect positive finding under any configuration of true and false null hypotheses. Popular approaches are to use Bonferroni corrections or structured approaches such as, for example, closed-test procedures. As is well known, such strategies, which control the family-wise error rate, typically reduce the type I error for some or all of the tests of the various null hypotheses to below the nominal level. In consequence, there is generally a loss of power for individual tests. What is less well appreciated, perhaps, is that depending on approach and circumstances, the test-wise loss of power does not necessarily lead to a family-wise loss of power. In fact, it may be possible to increase the overall power of a trial by carrying out tests on multiple outcomes without increasing the probability of making at least one type I error when all null hypotheses are true. We examine two types of problems to illustrate this. Unstructured testing problems arise typically (but not exclusively) when many outcomes are being measured. We consider the case of more than two hypotheses when a Bonferroni approach is being applied, while for illustration we assume compound symmetry to hold for the correlation of all variables. Using the device of a latent variable it is easy to show that power is not reduced as the number of variables tested increases, provided that the common correlation coefficient is not too high (say, less than 0.75). Afterwards, we consider structured testing problems. Here, multiplicity problems arising from the comparison of more than two treatments, as opposed to more than one measurement, are typical. We conduct a numerical study and conclude again that power is not reduced as the number of tested variables increases. Copyright © 2007 John Wiley & Sons, Ltd. [source]
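
The power gain described above is easy to check with a small simulation. The following sketch is not Senn's derivation; it is an illustrative Monte Carlo, assuming k equicorrelated normal test statistics (compound symmetry with correlation rho), a common standardized effect, one-sided Bonferroni-adjusted tests, and "overall power" read as the probability of rejecting at least one null hypothesis.

```python
import numpy as np
from scipy import stats

def disjunctive_power(k, rho, delta=0.3, n_per_arm=100, alpha=0.025,
                      n_sim=20000, seed=1):
    """Monte Carlo power to reject at least one of k one-sided tests,
    each run at the Bonferroni level alpha/k, for equicorrelated
    (compound-symmetric) normal endpoints with common effect delta."""
    rng = np.random.default_rng(seed)
    cov = np.full((k, k), rho)                 # compound-symmetric correlation
    np.fill_diagonal(cov, 1.0)
    ncp = delta * np.sqrt(n_per_arm / 2.0)     # non-centrality per endpoint
    z = rng.multivariate_normal(np.full(k, ncp), cov, size=n_sim)
    z_crit = stats.norm.ppf(1.0 - alpha / k)   # Bonferroni cut-off
    return (z > z_crit).any(axis=1).mean()     # reject at least one null

for k in (1, 2, 5, 10):
    print(k, round(disjunctive_power(k, rho=0.5), 3))
```

With a moderate common correlation the printed disjunctive power rises with k despite the stricter per-test level, which is the point the abstract makes; pushing rho towards 1 reverses the effect.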


Germ Line Transformation of Mammals by Pronuclear Microinjection

EXPERIMENTAL PHYSIOLOGY, Issue 6 2000
T. Rülicke
The most popular approach for generating transgenic mammals is the direct injection of transgenes into one pronucleus of a fertilized oocyte. In the past 15 years microinjection has been successfully applied in laboratory as well as in farm animals. The frequency of transgenic founders, although highly variable between species, is high enough to render this technique applicable to a wide range of mammals. The expression levels and patterns of a transgene are initially influenced by the construction of the transgene. However, the overall phenotype of a transgenic organism is influenced by several genetic and environmental factors. Due to the features of this technique, not all of the genetic factors can be experimentally controlled by the scientist. In this article we emphasize some peculiarities which have to be taken into account for the successful performance of transgenesis by pronuclear microinjection. [source]


SOME PRACTICAL GUIDANCE FOR THE IMPLEMENTATION OF PROPENSITY SCORE MATCHING

JOURNAL OF ECONOMIC SURVEYS, Issue 1 2008
Marco Caliendo
Abstract Propensity score matching (PSM) has become a popular approach to estimating causal treatment effects. It is widely applied when evaluating labour market policies, but empirical examples can be found in very diverse fields of study. Once the researcher has decided to use PSM, he or she is confronted with a number of questions regarding its implementation. To begin with, a first decision has to be made concerning the estimation of the propensity score. Following that, one has to decide which matching algorithm to choose and determine the region of common support. Subsequently, the matching quality has to be assessed and treatment effects and their standard errors have to be estimated. Furthermore, questions like 'what to do if there is choice-based sampling?' or 'when to measure effects?' can be important in empirical studies. Finally, one might also want to test the sensitivity of estimated treatment effects with respect to unobserved heterogeneity or failure of the common support condition. Each implementation step involves a number of decisions, and different approaches can be considered. The aim of this paper is to discuss these implementation issues and give some guidance to researchers who want to use PSM for evaluation purposes. [source]
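
The implementation steps listed above (score estimation, common support, matching, effect estimation) can be illustrated in a few lines. The following is only a minimal sketch on simulated data using generic scikit-learn tools and one-to-one nearest-neighbour matching, not the authors' recommended procedure; the data-generating process, the crude support trimming, and the absence of balance checks and standard errors are all simplifications.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))                       # observed confounders
p_treat = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))
D = rng.binomial(1, p_treat)                      # treatment indicator
Y = 2.0 * D + X @ np.array([1.0, 1.0, 0.5]) + rng.normal(size=n)

# Step 1: estimate the propensity score.
ps = LogisticRegression().fit(X, D).predict_proba(X)[:, 1]

# Step 2: enforce a crude common-support region.
lo, hi = ps[D == 1].min(), ps[D == 0].max()
keep = (ps >= lo) & (ps <= hi)

# Step 3: one-to-one nearest-neighbour matching on the score.
treated = np.where(keep & (D == 1))[0]
controls = np.where(keep & (D == 0))[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[controls].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched_controls = controls[idx.ravel()]

# Step 4: average treatment effect on the treated (true effect here is 2).
att = np.mean(Y[treated] - Y[matched_controls])
print(f"ATT estimate: {att:.2f}")
```

In an actual application the matching-quality assessment and standard-error estimation steps the authors stress would follow the matching step.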


Government–nonprofit partnership: a defining framework

PUBLIC ADMINISTRATION & DEVELOPMENT, Issue 1 2002
Jennifer M. Brinkerhoff
Partnership has emerged as an increasingly popular approach to privatization and government–nonprofit relations generally. While in principle it offers many advantages, there is no consensus on what it means and its practice varies. Following a review of partnership literature, the article refines the partnership concept, developing two definitional dimensions: mutuality and organization identity. Based on these dimensions, partnership is defined on a relative scale and is distinguished from other relationship types: contracting, extension, and co-optation or gradual absorption. Examples of each are provided. The model enables actors to assess their relative tolerance for partnership approaches, and provides a common language among potential partners. Linking its defining dimensions to partnership's value-added assists partners to advocate for partnership approaches from an instrumental as well as normative perspective. The model and inter-organizational relationship matrix can inform continuing theory building and practical experimentation both to refine defining dimensions and indicators of partnership practice, and to enhance responsiveness to partners' expectations of partnership. Copyright © 2002 John Wiley & Sons, Ltd. [source]


Mixed-effects Logistic Approach for Association Following Linkage Scan for Complex Disorders

ANNALS OF HUMAN GENETICS, Issue 2 2007
H. Xu
Summary An association study to identify possible causal single nucleotide polymorphisms following linkage scanning is a popular approach for the genetic dissection of complex disorders. However, in association studies cases and controls are assumed to be independent, i.e., genetically unrelated. Choosing a single affected individual per family is statistically inefficient and leads to a loss of power. On the other hand, because of the relatedness of family members, using affected family members and unrelated normal controls directly leads to false-positive results in association studies. In this paper we propose a new approach using mixed-model logistic regression, in which association analyses are performed using family members and unrelated controls. Thus, important genetic information can be obtained from family members while retaining high statistical power. To examine the properties of this new approach we developed an efficient algorithm to simulate environmental risk factors and the genotypes at both the disease locus and a marker locus, with and without linkage disequilibrium (LD), in families. Extensive simulation studies showed that our approach can effectively control the type-I error probability. Our approach is better than family-based designs such as the TDT, because it allows the use of unrelated cases and controls and uses all of the affected members for whom DNA samples are possibly already available. Our approach also allows the inclusion of covariates such as age and smoking status. Power analysis showed that our method has higher statistical power than recent likelihood ratio-based methods when environmental factors contribute to disease susceptibility, which is true for most complex human disorders. Our method can be further extended to accommodate more complex pedigree structures. [source]
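
To make the kind of model concrete (this is not the authors' exact specification or their simulation algorithm), the sketch below fits a logistic regression with a family-level random intercept to simulated data in which affected relatives and unrelated controls are analysed together. It relies on statsmodels' Bayesian binomial mixed GLM; the variable names, effect sizes, and family structure are invented.

```python
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(2)
n_fam, per_fam, n_unrel = 100, 3, 200

# Affected family members share a family-level random effect; unrelated
# controls each form their own "family" of size one.
fam_id = np.concatenate([np.repeat(np.arange(n_fam), per_fam),
                         np.arange(n_fam, n_fam + n_unrel)])
n = len(fam_id)
snp = rng.binomial(2, 0.3, size=n)               # marker genotype (0/1/2)
age = rng.normal(50, 10, size=n)                 # environmental covariate
u = rng.normal(0, 1.0, size=fam_id.max() + 1)    # family random intercepts
logit = -3.0 + 0.6 * snp + 0.03 * age + u[fam_id]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

df = pd.DataFrame({"y": y, "snp": snp, "age": age, "family": fam_id})

# Logistic regression with a random intercept per family, fitted by
# variational Bayes.
model = BinomialBayesMixedGLM.from_formula(
    "y ~ snp + age", {"family": "0 + C(family)"}, df)
result = model.fit_vb()
print(result.summary())
```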


High-Dimensional Cox Models: The Choice of Penalty as Part of the Model Building Process

BIOMETRICAL JOURNAL, Issue 1 2010
Axel Benner
Abstract The Cox proportional hazards regression model is the most popular approach to modelling covariate information for survival times. In this context, the development of high-dimensional models where the number of covariates is much larger than the number of observations (p ≫ n) is an ongoing challenge. A practicable approach is to use ridge-penalized Cox regression in such situations. Besides focusing on finding the best prediction rule, one is often interested in determining a subset of covariates that are the most important ones for prognosis. This could be a gene set in the biostatistical analysis of microarray data. Covariate selection can then, for example, be done by L1-penalized Cox regression using the lasso (Tibshirani (1997). Statistics in Medicine 16, 385–395). Several approaches beyond the lasso that incorporate covariate selection have been developed in recent years. This includes modifications of the lasso as well as nonconvex variants such as smoothly clipped absolute deviation (SCAD) (Fan and Li (2001). Journal of the American Statistical Association 96, 1348–1360; Fan and Li (2002). The Annals of Statistics 30, 74–99). The purpose of this article is to implement them practically into the model building process when analyzing high-dimensional data with the Cox proportional hazards model. To evaluate penalized regression models beyond the lasso, we included SCAD variants and the adaptive lasso (Zou (2006). Journal of the American Statistical Association 101, 1418–1429). We compare them with "standard" applications such as ridge regression, the lasso, and the elastic net. Predictive accuracy, features of variable selection, and estimation bias are studied to assess the practical use of these methods. We observed that the performance of SCAD and the adaptive lasso is highly dependent on nontrivial preselection procedures. A practical solution to this problem does not yet exist. Since there is a high risk of missing relevant covariates when SCAD or the adaptive lasso is applied after an inappropriate initial selection step, we recommend staying with the lasso or the elastic net in actual data applications. But with respect to the promising results for truly sparse models, we see some advantage for SCAD and the adaptive lasso if better preselection procedures were available. This requires further methodological research. [source]
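
For readers who want to try the "standard" penalties compared here, one accessible option is the lifelines package, whose Cox fitter exposes ridge-, lasso- and elastic-net-style penalties through its penalizer and l1_ratio arguments. The sketch below is a low-dimensional illustration on the library's bundled recidivism data, not a reproduction of the authors' high-dimensional study, and it does not cover SCAD or the adaptive lasso.

```python
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

rossi = load_rossi()   # small demo data: duration 'week', event 'arrest'

# penalizer sets the overall penalty strength; l1_ratio moves between
# ridge (0.0), an elastic-net mix, and a lasso-like penalty (1.0).
for name, l1_ratio in [("ridge", 0.0), ("elastic net", 0.5), ("lasso", 1.0)]:
    cph = CoxPHFitter(penalizer=0.1, l1_ratio=l1_ratio)
    cph.fit(rossi, duration_col="week", event_col="arrest")
    print(name)
    print(cph.params_.round(3).to_dict())
```

Comparing the printed coefficients shows how increasing the L1 share shrinks weak covariates more aggressively, which is the behaviour the penalty comparison in the article is concerned with.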


Inference Based on Kernel Estimates of the Relative Risk Function in Geographical Epidemiology

BIOMETRICAL JOURNAL, Issue 1 2009
Martin L. Hazelton
Abstract Kernel smoothing is a popular approach to estimating relative risk surfaces from data on the locations of cases and controls in geographical epidemiology. The interpretation of such surfaces is facilitated by plotting tolerance contours which highlight areas where the risk is sufficiently high to reject the null hypothesis of unit relative risk. Previously it has been recommended that these tolerance intervals be calculated using Monte Carlo randomization tests. We examine a computationally cheap alternative whereby the tolerance intervals are derived from asymptotic theory. We also examine the performance of global tests of heterogeneous risk employing statistics based on kernel risk surfaces, paying particular attention to the effect of the choice of smoothing parameters on test power. (© 2009 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
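
A bare-bones version of the kernel relative-risk estimate (without the tolerance contours or asymptotic theory discussed in the paper) can be written with scipy's Gaussian kernel density estimator: smooth the case and control locations separately and take the log of the ratio of the two surfaces. The simulated locations, default bandwidths, and evaluation grid below are illustrative assumptions only.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)

# Simulated point data: controls uniform, cases with a local excess.
controls = rng.uniform(0, 10, size=(2, 500))
cases = np.hstack([rng.uniform(0, 10, size=(2, 150)),
                   rng.normal(loc=[[7.0], [7.0]], scale=0.7, size=(2, 50))])

f_case = gaussian_kde(cases)        # kernel density of case locations
f_ctrl = gaussian_kde(controls)     # kernel density of control locations

# Log relative-risk surface on a regular grid.
xx, yy = np.meshgrid(np.linspace(0, 10, 100), np.linspace(0, 10, 100))
grid = np.vstack([xx.ravel(), yy.ravel()])
log_rr = np.log(f_case(grid)) - np.log(f_ctrl(grid))
log_rr = log_rr.reshape(xx.shape)

print("highest log relative risk:", log_rr.max().round(2),
      "at grid point", np.unravel_index(log_rr.argmax(), log_rr.shape))
```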


ON MULTI-CLASS COST-SENSITIVE LEARNING

COMPUTATIONAL INTELLIGENCE, Issue 3 2010
Zhi-Hua Zhou
Rescaling is possibly the most popular approach to cost-sensitive learning. This approach works by rebalancing the classes according to their costs, and it can be realized in different ways, for example, re-weighting or resampling the training examples in proportion to their costs, moving the decision boundaries of classifiers away from high-cost classes in proportion to costs, etc. This approach is very effective in dealing with two-class problems, yet some studies showed that it is often not so helpful on multi-class problems. In this article, we try to explore why the rescaling approach is often helpless on multi-class problems. Our analysis discloses that the rescaling approach works well when the costs are consistent, while directly applying it to multi-class problems with inconsistent costs may not be a good choice. Based on this recognition, we advocate that before applying the rescaling approach, the consistency of the costs must be examined first. If the costs are consistent, the rescaling approach can be conducted directly; otherwise it is better to apply rescaling after decomposing the multi-class problem into a series of two-class problems. An empirical study involving 20 multi-class data sets and seven types of cost-sensitive learners validates our proposal. Moreover, we show that the proposal is also helpful for class-imbalance learning. [source]
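
The rebalancing-by-cost idea is easy to demonstrate with per-class weights. The snippet below is a generic scikit-learn illustration of rescaling via class weights, one of the realizations listed above; it is not the consistency check or decomposition procedure proposed in the article, and the data set and cost values are invented.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Three-class toy problem with unequal misclassification costs per class.
X, y = make_classification(n_samples=3000, n_features=10, n_informative=6,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# cost[i] = cost of misclassifying an example whose true class is i.
cost = {0: 1.0, 1: 5.0, 2: 10.0}

plain = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
# Rescaling: weight each class in proportion to its misclassification cost.
rescaled = LogisticRegression(max_iter=1000, class_weight=cost).fit(X_tr, y_tr)

def total_cost(model):
    errors = model.predict(X_te) != y_te
    return sum(cost[c] for c in y_te[errors])

print("total test cost, plain   :", total_cost(plain))
print("total test cost, rescaled:", total_cost(rescaled))
```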


Supply Chain Strategy, Product Characteristics, and Performance Impact: Evidence from Chinese Manufacturers

DECISION SCIENCES, Issue 4 2009
Yinan Qi
ABSTRACT Supply chain management has become one of the most popular approaches to enhancing the global competitiveness of business corporations today. Firms must have clear strategic thinking in order to effectively organize such complicated activities, resources, communications, and processes. An emerging body of literature offers a framework, based on in-depth case studies, that identifies three kinds of supply chain strategies: lean strategy, agile strategy, and lean/agile strategy. Extant research also suggests that supply chain strategies must be matched with product characteristics in order for firms to achieve better performance. This article investigates supply chain strategies and empirically tests the supply chain strategy model that posits lean, agile, and lean/agile approaches, using data collected from 604 manufacturing firms in China. Cluster analyses of the data indicate that Chinese firms are adopting a variation of the lean, agile, and lean/agile supply chain strategies identified in the western literature. However, the data reveal that some firms have a traditional strategy that does not emphasize either lean or agile principles. These firms perform worse than firms that have a strategy focused on lean, agile, or lean/agile supply chains. The strategies are examined with respect to product characteristics and financial and operational performance. The article makes significant contributions to the supply chain management literature by examining the supply chain strategies used by Chinese firms. In addition, this work empirically tests the applicability of supply chain strategy models that have not been rigorously tested empirically or in the fast-growing Chinese economy. [source]


Quantitative analysis of essential oils: a complex task

FLAVOUR AND FRAGRANCE JOURNAL, Issue 6 2008
Carlo Bicchi
Abstract This article provides a critical overview of current methods to quantify essential oil components. The fields of application and limits of the most popular approaches, in particular relative percentage abundance, normalized percentage abundance, concentration and true amount determination via calibration curves, are discussed in detail. A specific paragraph is dedicated to the correct use of the most widely used detectors and to analyte response factors. A set of applications for each approach is also included to illustrate the considerations. Copyright © 2008 John Wiley & Sons, Ltd. [source]
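
Two of the approaches contrasted in the article, relative percentage abundance and true amount determination via an external calibration curve, come down to simple arithmetic. The sketch below illustrates both on invented GC peak areas; the compound names, areas, and calibration points are assumptions, and no detector response-factor correction is applied.

```python
import numpy as np

# Made-up GC peak areas for four components of one essential oil sample.
areas = {"limonene": 1.85e6, "linalool": 0.92e6,
         "linalyl acetate": 1.10e6, "other": 0.33e6}

# Relative percentage abundance: each peak area over the total area.
total = sum(areas.values())
for name, a in areas.items():
    print(f"{name:16s} {100 * a / total:5.1f} %")

# External calibration curve for one analyte: peak area versus known
# concentration (mg/mL), fitted by least squares.
conc_std = np.array([0.05, 0.10, 0.20, 0.40, 0.80])
area_std = np.array([0.21e6, 0.44e6, 0.90e6, 1.79e6, 3.55e6])
slope, intercept = np.polyfit(conc_std, area_std, 1)

# True amount of linalool in the sample from its peak area.
c_linalool = (areas["linalool"] - intercept) / slope
print(f"linalool concentration: {c_linalool:.2f} mg/mL")
```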


Business failure prediction using decision trees

JOURNAL OF FORECASTING, Issue 6 2010
Adrian Gepp
Abstract Accurate business failure prediction models would be extremely valuable to many industry sectors, particularly financial investment and lending. The potential value of such models is emphasised by the extremely costly failures of high-profile companies in the recent past. Consequently, significant interest has been generated in business failure prediction within academia as well as in the finance industry. Statistical business failure prediction models attempt to predict the failure or success of a business. Discriminant and logit analyses have traditionally been the most popular approaches, but there is also a range of promising non-parametric techniques that can alternatively be applied. In this paper, the relatively new technique of decision trees is applied to business failure prediction. The numerical results suggest that decision trees could be superior predictors of business failure compared with discriminant analysis. Copyright © 2009 John Wiley & Sons, Ltd. [source]
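
As a minimal illustration of the technique (not the authors' data, features, or model specification), the sketch below fits a classification tree to a simulated stand-in for financial-ratio data and compares it with linear discriminant analysis, the traditional benchmark mentioned in the abstract.

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Simulated stand-in for financial ratios with a binary failed/healthy label.
X, y = make_classification(n_samples=1000, n_features=8, n_informative=5,
                           weights=[0.8, 0.2], random_state=1)

tree = DecisionTreeClassifier(max_depth=4, random_state=1)
lda = LinearDiscriminantAnalysis()

for name, model in [("decision tree", tree), ("discriminant", lda)]:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name:14s} mean cross-validated AUC: {auc.mean():.3f}")
```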


Protein Kinase Target Discovery From Genome-Wide Messenger RNA Expression Profiling

MOUNT SINAI JOURNAL OF MEDICINE: A JOURNAL OF PERSONALIZED AND TRANSLATIONAL MEDICINE, Issue 4 2010
Avi Ma'ayan
Abstract Genome-wide messenger RNA profiling provides a snapshot of the global state of the cell under different experimental conditions, such as diseased versus normal cellular states. However, because measurements are in the form of quantitative changes in messenger RNA levels, such experimental data do not provide a direct understanding of the regulatory molecular mechanisms responsible for the observed changes. Identifying potential cell signaling regulatory mechanisms responsible for changes in gene expression under different experimental conditions or in different tissues has been the focus of many computational systems biology studies. The most popular approaches include promoter analysis, gene ontology or pathway enrichment analysis, as well as reverse engineering of networks from messenger RNA expression data. Here we present a rational approach for identifying and ranking protein kinases that are likely responsible for observed changes in gene expression. By combining promoter analysis; data from various chromatin immunoprecipitation studies such as chromatin immunoprecipitation sequencing, chromatin immunoprecipitation coupled with paired-end ditag, and chromatin immunoprecipitation-on-chip; protein-protein interactions; and kinase-protein phosphorylation reactions collected from the literature, we can identify and rank candidate protein kinases for knock-down, or other types of functional validation, based on genome-wide changes in gene expression. We describe how protein kinase candidate identification and ranking can be made robust by cross-validation with phosphoproteomics data as well as through a literature-based text-mining approach. In conclusion, data integration can produce robust candidate rankings for understanding cell regulation through identification of protein kinases responsible for gene expression changes, thus rapidly advancing drug target discovery and unraveling drug mechanisms of action. Mt Sinai J Med 77:345–349, 2010. © 2010 Mount Sinai School of Medicine [source]
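
The ranking step described here reduces to an enrichment test: for each kinase, ask whether the genes it is connected to (through its substrates and their downstream transcription factors) are over-represented among the differentially expressed genes. The sketch below uses Fisher's exact test on toy gene sets; the kinase target sets and gene lists are invented placeholders, not data from the resources named in the abstract.

```python
from scipy.stats import fisher_exact

# Toy background of genes and a set of differentially expressed genes.
background = {f"gene{i}" for i in range(1000)}
deg = {f"gene{i}" for i in range(0, 200, 2)}          # 100 "changed" genes

# Invented kinase -> putative target-gene sets.
kinase_targets = {
    "KINASE_A": {f"gene{i}" for i in range(0, 60, 2)},    # enriched in deg
    "KINASE_B": {f"gene{i}" for i in range(500, 560)},    # not enriched
}

ranking = []
for kinase, targets in kinase_targets.items():
    a = len(deg & targets)                      # changed and targeted
    b = len(targets - deg)                      # targeted, not changed
    c = len(deg - targets)                      # changed, not targeted
    d = len(background - deg - targets)         # neither
    _, p = fisher_exact([[a, b], [c, d]], alternative="greater")
    ranking.append((p, kinase))

# Rank candidate kinases by enrichment p-value (smallest first).
for p, kinase in sorted(ranking):
    print(f"{kinase}: p = {p:.2e}")
```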


Not Giving the Skeptic a Hearing: Pragmatism and Radical Doubt

PHILOSOPHY AND PHENOMENOLOGICAL RESEARCH, Issue 1 2005
ERIK J. OLSSON
Pragmatist responses to radical skepticism do not receive much attention in contemporary analytic epistemology. This observation is my motivation for undertaking a search for a coherent pragmatist reply to radical doubt, one that can compete, in terms of clarity and sophistication, with the currently most popular approaches, such as contextualism and relevant alternatives theory. As my point of departure I take the texts of C. S. Peirce and William James. The Jamesian response is seen to consist in the application of a wager argument to the skeptical issue in analogy with Pascal's wager. The Peircean strategy, on the other hand, is to attempt a direct rejection of one of the skeptic's main premises: that we do not know we are not deceived. I argue that while the Jamesian attempt is ultimately incoherent, Peirce's argument contains the core of a detailed and characteristically "pragmatic" rebuttal of skepticism, one that deserves to be taken seriously in the contemporary debate. [source]


A Lepskij-type Stopping Rule for Newton-type Methods with Random Noise

PROCEEDINGS IN APPLIED MATHEMATICS & MECHANICS, Issue 1 2005
Frank Bauer
Regularized Newton methods are one of the most popular approaches for the solution of inverse problems in differential equations. Since these problems are usually ill-posed, an appropriate stopping rule is an essential ingredient of such methods. In this paper we suggest an a posteriori stopping rule of Lepskij type which is appropriate for data perturbed by random noise. The numerical results obtained with this rule look promising. (© 2005 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
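
A toy version of the balancing idea behind a Lepskij-type rule can be sketched as follows: among the iterates of a regularized iteration, select the smallest index whose iterate stays within a fixed multiple of the (growing) noise-propagation bound of every later iterate. The example below applies this to Landweber iteration on a small ill-posed linear system rather than to a Newton method for a differential equation, so it only illustrates the selection principle; the noise bound and the constant are ad hoc choices, not those analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Mildly ill-posed linear problem A x = y, observed with random noise.
n = 50
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
A = U @ np.diag(1.0 / np.arange(1, n + 1) ** 2) @ V.T
x_true = V[:, 0] + 0.5 * V[:, 1]
noise = rng.normal(size=n)
delta = 1e-3                                   # noise level ||y_obs - y||
y = A @ x_true + delta * noise / np.linalg.norm(noise)

# Landweber iterates (a stand-in for a regularized Newton iteration).
step = 0.5 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
iterates, bounds = [], []
for k in range(1, 201):
    x = x + step * A.T @ (y - A @ x)
    iterates.append(x.copy())
    # Ad hoc bound on the propagated data noise after k steps.
    bounds.append(delta * np.sqrt(step * k))

# Lepskij-type balancing: take the smallest index whose iterate lies
# within tau times the noise bound of every later iterate.
tau = 4.0
K = len(iterates)
k_star = K - 1
for k in range(K - 2, -1, -1):
    if all(np.linalg.norm(iterates[k] - iterates[m]) <= tau * bounds[m]
           for m in range(k + 1, K)):
        k_star = k
    else:
        break

print("selected stopping index:", k_star + 1)
print("reconstruction error at that index:",
      round(float(np.linalg.norm(iterates[k_star] - x_true)), 4))
```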