Inferential Procedures
Selected Abstracts

Detecting changes in the mean of functional observations
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 5 2009
István Berkes

Summary. Principal component analysis has become a fundamental tool of functional data analysis. It represents the functional data as X_i(t) = μ(t) + Σ_{1≤l<∞} η_{i,l} v_l(t), where μ is the common mean, the v_l are the eigenfunctions of the covariance operator and the η_{i,l} are the scores. Inferential procedures assume that the mean function μ(t) is the same for all values of i. If, in fact, the observations do not come from one population, but rather their mean changes at some point(s), the results of principal component analysis are confounded by the change(s). It is therefore important to develop a methodology to test the assumption of a common functional mean. We develop such a test using quantities which can be readily computed in the R package fda. The null distribution of the test statistic is asymptotically pivotal with a well-known asymptotic distribution. The asymptotic test has excellent finite sample performance. Its application is illustrated on temperature data from England. [source]

Analysis of single-locus tests to detect gene/disease associations
GENETIC EPIDEMIOLOGY, Issue 3 2005
Kathryn Roeder

Abstract. A goal of association analysis is to determine whether variation in a particular candidate region or gene is associated with liability to complex disease. To evaluate such candidates, ubiquitous single nucleotide polymorphisms (SNPs) are useful. It is critical, however, to select a set of SNPs that are in substantial linkage disequilibrium (LD) with all other polymorphisms in the region. Whether there is an ideal statistical framework to test such a set of 'tag SNPs' for association is unknown.
Compared to tests for association based on frequencies of haplotypes, recent evidence suggests that tests for association based on linear combinations of the tag SNPs (the Hotelling T² test) are more powerful. Following this logical progression, we wondered whether single-locus tests would prove generally more powerful than the regression-based tests. We answer this question by investigating four inferential procedures: the maximum of a series of test statistics corrected for multiple testing by the Bonferroni procedure, T_B, or by permutation of case-control status, T_P; a procedure that tests the maximum of a smoothed curve fitted to the series of test statistics, T_S; and the Hotelling T² procedure, which we call T_R. These procedures are evaluated by simulating data like that from human populations, including realistic levels of LD and realistic effects of alleles conferring liability to disease. We find that power depends on the correlation structure of SNPs within a gene, the density of tag SNPs, and the placement of the liability allele. The clearest pattern emerges between power and the number of SNPs selected. When a large fraction of the SNPs within a gene are tested, and multiple SNPs are highly correlated with the liability allele, T_S has better power. Using a SNP selection scheme that optimizes power but also requires a substantial number of SNPs to be genotyped (roughly 10–20 SNPs per gene), the power of T_P is generally superior to that of the other procedures, including T_R. Finally, when a SNP selection procedure that targets a minimal number of SNPs per gene is applied, the average performances of T_P and T_R are indistinguishable. Genet. Epidemiol. © 2005 Wiley-Liss, Inc. [source]

Adaptations for Nothing in Particular
JOURNAL FOR THE THEORY OF SOCIAL BEHAVIOUR, Issue 1 2004
Simon J. Hampton

An element of the contemporary dispute amongst evolution-minded psychologists and social scientists hinges on the conception of mind as being adapted as opposed to adaptive.
This dispute is not trivial. The possibility that human minds are both adapted and adaptive courtesy of selection pressures that were social in nature is of particular interest to a putative evolutionary social psychology. I suggest that the notion of an evolved psychological adaptation in social psychology can be retained only if it is accepted that this adaptation is for social interaction, has no rigidly fixed function, and cannot be described in terms of algorithmic decision rules or fixed inferential procedures. What is held to be the reason for encephalisation in the Homo lineage, together with some of the best attested ideas in social psychology, offers license for such an approach. [source]

Sensitivity analysis for incomplete contingency tables: the Slovenian plebiscite case
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 1 2001
Geert Molenberghs

Classical inferential procedures induce conclusions from a set of data to a population of interest, accounting for the imprecision resulting from the stochastic component of the model. Less attention is devoted to the uncertainty arising from (unplanned) incompleteness in the data. Through the choice of an identifiable model for non-ignorable non-response, one narrows the possible data-generating mechanisms to the point where inference only suffers from imprecision. Some proposals have been made for assessing the sensitivity to these modelling assumptions; many are based on fitting several plausible but competing models. For example, we could assume that the missing data are missing at random in one model, and then fit an additional model where non-random missingness is assumed. On the basis of data from a Slovenian plebiscite, conducted in 1991 to prepare for independence, it is shown that such an ad hoc procedure may be misleading.
We propose an approach which identifies and incorporates both sources of uncertainty in inference: imprecision due to finite sampling and ignorance due to incompleteness. A simple sensitivity analysis considers a finite set of plausible models. We take this idea one step further by considering more degrees of freedom than the data support. This produces sets of estimates (regions of ignorance) and sets of confidence regions (combined into regions of uncertainty). [source]

Inference for Clustered Inhomogeneous Spatial Point Processes
BIOMETRICS, Issue 2 2009
P. A. Henrys

Summary. We propose a method to test for significant differences in the levels of clustering between two spatial point processes (cases and controls) while taking into account differences in their first-order intensities. The key advance on earlier methods is that the controls are not assumed to be a Poisson process. Inference and diagnostics are based around the inhomogeneous K-function, with confidence envelopes obtained either by resampling events in a nonparametric bootstrap approach or by simulating new events as in a parametric bootstrap. The methods developed are demonstrated using the locations of adult and juvenile trees in a tropical forest. A simulation study briefly examines the accuracy and power of the inferential procedures. [source]

On Assessing Surrogacy in a Single Trial Setting Using a Semicompeting Risks Paradigm
BIOMETRICS, Issue 2 2009
Debashis Ghosh

Summary. There has been a recent emphasis on the identification of biomarkers and other biologic measures that may be potentially used as surrogate endpoints in clinical trials. We focus on the setting of data from a single clinical trial. In this article, we consider a framework in which the surrogate must occur before the true endpoint.
This suggests viewing the surrogate and true endpoints as semicompeting risks data; this approach is new to the literature on surrogate endpoints and leads to an asymmetrical treatment of the surrogate and true endpoints. However, such a data structure also conceptually complicates many of the previously considered measures of surrogacy in the literature. We propose novel estimation and inferential procedures for the relative effect and adjusted association quantities proposed by Buyse and Molenberghs (1998, Biometrics 54, 1014–1029). The proposed methodology is illustrated with application to simulated data, as well as to data from a leukemia study. [source]
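To give a concrete feel for one of the procedures named above, the sketch below implements the classical two-sample Hotelling T² test that underlies the tag-SNP comparison in the Roeder abstract. This is a minimal illustration in Python, not the authors' implementation: the function name, the 0/1/2 minor-allele-count coding, and the use of the exact F reference distribution are assumptions made here for the example.

```python
import numpy as np
from scipy import stats

def hotelling_t2(cases, controls):
    """Two-sample Hotelling T^2 test comparing mean SNP-coding vectors.

    cases, controls: (n, p) arrays, e.g. minor-allele counts (0/1/2) at p tag SNPs.
    Returns (T2, F, p-value), with F referred to an F(p, n1 + n2 - p - 1) distribution.
    """
    n1, p = cases.shape
    n2, _ = controls.shape
    # Difference of group mean vectors
    d = cases.mean(axis=0) - controls.mean(axis=0)
    # Pooled within-group covariance matrix
    S = ((n1 - 1) * np.cov(cases, rowvar=False) +
         (n2 - 1) * np.cov(controls, rowvar=False)) / (n1 + n2 - 2)
    # T^2 statistic: scaled Mahalanobis distance between the group means
    t2 = (n1 * n2) / (n1 + n2) * d @ np.linalg.solve(S, d)
    # Transform to an exact F statistic under multivariate normality
    f = (n1 + n2 - p - 1) / (p * (n1 + n2 - 2)) * t2
    pval = stats.f.sf(f, p, n1 + n2 - p - 1)
    return t2, f, pval
```

A linear-combination test of this form uses all tag SNPs jointly, which is the contrast the abstract draws against single-locus maxima such as T_B and T_P; with genuinely discrete genotype codes the F reference is only approximate, and a permutation of case-control labels is a common alternative calibration.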