Censoring


Kinds of Censoring

  • dependent censoring
  • right censoring

Terms modified by Censoring

  • censoring mechanism
  • censoring time

Selected Abstracts

    Semiparametric Analysis for Recurrent Event Data with Time-Dependent Covariates and Informative Censoring

    BIOMETRICS, Issue 1 2010
    C.-Y. Huang
    Summary Recurrent event data analyses are usually conducted under the assumption that the censoring time is independent of the recurrent event process. In many applications the censoring time can be informative about the underlying recurrent event process, especially in situations where a correlated failure event could potentially terminate the observation of recurrent events. In this article, we consider a semiparametric model of recurrent event data that allows correlation between the censoring time and the recurrent event process via a frailty. This flexible framework incorporates both time-dependent and time-independent covariates in the formulation, while leaving the distributions of the frailty and censoring times unspecified. We propose a novel semiparametric inference procedure that depends on neither the frailty nor the censoring time distribution. Large sample properties of the regression parameter estimates and the estimated baseline cumulative intensity functions are studied. Numerical studies demonstrate that the proposed methodology performs well for realistic sample sizes. An analysis of hospitalization data for patients in an AIDS cohort study is presented to illustrate the proposed method. [source]
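
    To make the informative-censoring mechanism concrete, the following small simulation (a sketch, not from the paper; the rates and the gamma frailty below are illustrative assumptions) shares one frailty between the recurrent-event intensity and the censoring hazard, so subjects censored early are systematically the high-rate subjects:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 2000
        base_rate, cens_rate = 1.0, 0.5          # illustrative baseline intensities

        # Shared gamma frailty (mean 1) links the recurrent-event process
        # to the censoring time, making the censoring informative.
        z = rng.gamma(shape=2.0, scale=0.5, size=n)

        # Censoring time: exponential with subject-specific rate z * cens_rate.
        c = rng.exponential(1.0 / (z * cens_rate))

        # Recurrent events: Poisson process with rate z * base_rate on [0, c],
        # so the observed count is Poisson with mean z * base_rate * c.
        n_events = rng.poisson(z * base_rate * c)

        # Subjects censored early tend to have high frailty and hence high
        # event rates: the correlation below is clearly negative.
        print(np.corrcoef(c, n_events / np.maximum(c, 1e-9))[0, 1])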


    Analysis of Times to Repeated Events in Two-Arm Randomized Trials with Noncompliance and Dependent Censoring

    BIOMETRICS, Issue 4 2004
    Shigeyuki Matsui
    Summary This article develops randomization-based methods for times to repeated events in two-arm randomized trials with noncompliance and dependent censoring. Structural accelerated failure time models are assumed to capture causal effects on repeated event times and dependent censoring time, but the dependence structure among repeated event times and dependent censoring time is unspecified. Artificial censoring techniques to accommodate nonrandom noncompliance and dependent censoring are proposed. Estimation of the acceleration parameters is based on rank-based estimating functions. A simulation study is conducted to evaluate the performance of the developed methods. An illustration of the methods using data from an acute myeloid leukemia trial is provided. [source]


    Addressing an Idiosyncrasy in Estimating Survival Curves Using Double Sampling in the Presence of Self-Selected Right Censoring

    BIOMETRICS, Issue 2 2001
    Constantine E. Frangakis
    Summary. We investigate the use of follow-up samples of individuals to estimate survival curves from studies that are subject to right censoring from two sources: (i) early termination of the study, namely, administrative censoring, or (ii) censoring due to lost data prior to administrative censoring, so-called dropout. We assume that, for the full cohort of individuals, administrative censoring times are independent of the subjects' inherent characteristics, including survival time. To address the loss to censoring due to dropout, which we allow to be possibly selective, we consider an intensive second phase of the study where a representative sample of the originally lost subjects is subsequently followed and their data recorded. As with double-sampling designs in survey methodology, the objective is to provide data on a representative subset of the dropouts. Despite assumed full response from the follow-up sample, we show that, in general in our setting, administrative censoring times are not independent of survival times within the two subgroups, nondropouts and sampled dropouts. As a result, the stratified Kaplan–Meier estimator is not appropriate for the cohort survival curve. Moreover, using the concept of potential outcomes, as opposed to observed outcomes, and thereby explicitly formulating the problem as a missing data problem, reveals and addresses these complications. We present an estimation method based on the likelihood of an easily observed subset of the data and study its properties analytically for large samples. We evaluate our method in a realistic situation by simulating data that match published margins on survival and dropout from an actual hip-replacement study. Limitations and extensions of our design and analytic method are discussed. [source]
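
    For orientation, the stratified estimator that the authors show to be inappropriate here is assembled from ordinary Kaplan–Meier curves computed within each stratum. A minimal product-limit sketch of that standard ingredient (not the authors' likelihood-based method):

        import numpy as np

        def kaplan_meier(time, event):
            """Product-limit estimate of S(t) at each distinct event time.

            time  : observed times (failure or censoring)
            event : 1 if the failure was observed, 0 if right-censored
            """
            time = np.asarray(time, float)
            event = np.asarray(event, int)
            order = np.argsort(time)
            time, event = time[order], event[order]
            n_at_risk = len(time)
            surv, t_out, s_out = 1.0, [], []
            for t in np.unique(time):
                here = time == t
                d = event[here].sum()        # failures at t
                if d > 0:
                    surv *= 1.0 - d / n_at_risk
                    t_out.append(t)
                    s_out.append(surv)
                n_at_risk -= here.sum()      # failures and censorings leave the risk set
            return np.array(t_out), np.array(s_out)

        # toy usage
        t, s = kaplan_meier([2, 3, 3, 5, 8, 9], [1, 1, 0, 1, 0, 1])
        print(dict(zip(t, np.round(s, 3))))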


    A non-linear and non-Gaussian state-space model for censored air pollution data

    ENVIRONMETRICS, Issue 2 2005
    Craig J. Johns
    Abstract Lidar technology is used to quantify airborne particulate matter less than 10 μm in diameter (PM10). These spatio-temporal lidar data on PM10 are subject to censoring due to detection limits. We modify a non-linear and non-Gaussian state-space model to accommodate data subject to detection limits and outline strategies for Markov chain Monte Carlo estimation and filtering. The methods are applied to spatio-temporal lidar measurements of dust particle concentrations. Copyright © 2004 John Wiley & Sons, Ltd. [source]


    Semiparametric variance-component models for linkage and association analyses of censored trait data

    GENETIC EPIDEMIOLOGY, Issue 7 2006
    G. Diao
    Abstract Variance-component (VC) models are widely used for linkage and association mapping of quantitative trait loci in general human pedigrees. Traditional VC methods assume that the trait values within a family follow a multivariate normal distribution and are fully observed. These assumptions are violated if the trait data contain censored observations. When the trait pertains to age at onset of disease, censoring is inevitable because of loss to follow-up and limited study duration. Censoring also arises when the trait assay cannot detect values below (or above) certain thresholds. The latent trait values tend to have a complex distribution. Applying traditional VC methods to censored trait data would inflate type I error and reduce power. We present valid and powerful methods for the linkage and association analyses of censored trait data. Our methods are based on a novel class of semiparametric VC models, which allows an arbitrary distribution for the latent trait values. We construct an appropriate likelihood for the observed data, which may contain left- or right-censored observations. The maximum likelihood estimators are approximately unbiased, normally distributed, and statistically efficient. We develop stable and efficient numerical algorithms to implement the corresponding inference procedures. Extensive simulation studies demonstrate that the proposed methods outperform the existing ones in practical situations. We provide an application to the age at onset of alcohol dependence data from the Collaborative Study on the Genetics of Alcoholism. A computer program is freely available. Genet. Epidemiol. 2006. © 2006 Wiley-Liss, Inc. [source]


    Estimating lifetime or episode-of-illness costs under censoring

    HEALTH ECONOMICS, Issue 9 2010
    Anirban Basu
    Abstract Many analyses of healthcare costs involve use of data with varying periods of observation and right censoring of cases before death or at the end of the episode of illness. The prominence of observations with no expenditure for some short periods of observation, and the extreme skewness typical of these data, raise concerns about the robustness of estimators based on inverse probability weighting (IPW) by the censoring survival probabilities. These estimators also cannot distinguish between the effects of covariates on survival and intensity of utilization, which jointly determine costs. In this paper, we propose a new estimator that extends the class of two-part models to deal with random right censoring and continuous death and censoring times. Our model also addresses issues about the time to death in these analyses and separates the survival effects from the intensity effects. Using simulations, we compare our proposed estimator to the inverse probability estimator, which shows bias when censoring is heavy and covariates affect survival. We find our estimator to be unbiased and also more efficient for these designs. We apply our method and compare it with the IPW method using data from the Medicare–SEER files on prostate cancer. Copyright © 2010 John Wiley & Sons, Ltd. [source]
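
    The comparison estimator, simple inverse probability weighting, can be sketched directly: each observed death contributes its accumulated cost weighted by the inverse of the Kaplan–Meier estimate of the probability of remaining uncensored, while censored subjects contribute only through those weights. A bare-bones version assuming untied times (the paper's two-part extension is not shown):

        import numpy as np

        def ipw_mean_cost(follow_up, died, cost):
            """IPW estimate of mean cost: weight deaths by 1 / K(T-), where K is
            the Kaplan-Meier survival function of the censoring time.
            Assumes no tied follow-up times, for simplicity of the sketch."""
            order = np.argsort(follow_up)
            t = np.asarray(follow_up, float)[order]
            d = np.asarray(died, int)[order]
            m = np.asarray(cost, float)[order]
            n = len(t)
            at_risk = n - np.arange(n)
            # censoring KM: a censoring at t_i multiplies K by (1 - 1/at_risk)
            step = np.where(d == 0, 1.0 - 1.0 / at_risk, 1.0)
            K_after = np.cumprod(step)
            K_before = np.concatenate(([1.0], K_after[:-1]))   # K just before t_i
            return np.mean(d * m / K_before)                   # censored cases drop out

        # toy usage: cost accrues at 100 per unit of follow-up time
        rng = np.random.default_rng(9)
        T = rng.exponential(5, 300)
        C = rng.exponential(8, 300)
        t_obs = np.minimum(T, C)
        died = (T <= C).astype(int)
        print(ipw_mean_cost(t_obs, died, 100 * t_obs))   # approx 100 * E[T] = 500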


    Calculus attainment and grades received in intermediate economic theory

    JOURNAL OF APPLIED ECONOMETRICS, Issue 6 2006
    Mingliang Li
    We revisit the work of Butler et al. (1998), who examine the effect of mathematical preparation on grades received in intermediate economic theory courses. Using a Bayesian approach under reasonably 'diffuse' priors, we are able to replicate their two-step point estimates almost exactly. We also introduce a new model specification that accounts for the censoring and discrete nature of the outcome variable (grade received). The results from this specification echo the conclusions of the original paper: the level of calculus attained plays an important role in explaining grades received in intermediate micro theory. Copyright © 2006 John Wiley & Sons, Ltd. [source]
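
    One standard way to respect both the discreteness and the censoring of grades is a latent-score model: each letter grade corresponds to an interval of an underlying continuous score, with the top grade right-censored at the highest cutpoint. A maximum likelihood sketch along those lines (the cutpoints and toy data are illustrative assumptions; the paper's specification is Bayesian and differs in detail):

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        # grade g is observed when cuts[g] <= latent score < cuts[g + 1]
        cuts = np.array([-np.inf, -1.0, 0.0, 1.0, np.inf])   # illustrative cutpoints

        def neg_loglik(params, X, grade):
            beta, log_sigma = params[:-1], params[-1]
            sigma = np.exp(log_sigma)
            mu = X @ beta
            lo = (cuts[grade] - mu) / sigma        # lower cutpoint per observation
            hi = (cuts[grade + 1] - mu) / sigma    # upper cutpoint (inf at the top)
            return -np.sum(np.log(norm.cdf(hi) - norm.cdf(lo) + 1e-300))

        # toy data: one covariate (calculus attainment) drives the latent score
        rng = np.random.default_rng(1)
        X = rng.normal(size=(300, 1))
        latent = 0.8 * X[:, 0] + rng.normal(size=300)
        grade = np.searchsorted(cuts[1:-1], latent)          # grades 0..3

        res = minimize(neg_loglik, x0=np.zeros(2), args=(X, grade), method="Nelder-Mead")
        print(res.x)   # estimated [beta, log_sigma]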


    Household vegetable demand in the Philippines: Is there an urban-rural divide?

    AGRIBUSINESS : AN INTERNATIONAL JOURNAL, Issue 4 2007
    Maria Erlinda M. Mutuc
    A Nonlinear Quadratic Almost Ideal Demand System (NQAIDS) that accounts for censoring and endogeneity problems is used to assess the vegetable demand behavior of rural and urban households in the Philippines. Detailed household consumption data for a number of vegetable commodities are utilized in the analysis. The results show that most of the expenditure and own-price elasticities of the vegetables analyzed are near or above unity in both rural and urban areas. For the majority of the vegetable commodities examined, only the expenditure elasticity is significantly different between rural and urban households. On the other hand, own-price and cross-price elasticities of most vegetables do not significantly differ between rural and urban households. The disaggregate vegetable demand elasticities in this study, as well as the insights from the rural/urban comparisons, provide valuable information that can be utilized for the analysis and design of various food-related policies in the Philippines. [JEL Classification: R21; Q11] © 2007 Wiley Periodicals, Inc. Agribusiness 23: 511–527, 2007. [source]


    Household meat demand in Greece: A demand systems approach using microdata

    AGRIBUSINESS : AN INTERNATIONAL JOURNAL, Issue 1 2003
    Panagiotis Lazaridis
    This article examines meat consumption patterns of households in Greece using data from family budget surveys. For that purpose the linear approximate Almost Ideal Demand System was employed to investigate the economic and demographic effects on the demand for four types of meat. Prices were adjusted for quality, and the demographic translation method was used to incorporate the demographic variables. Finally, the two-stage generalized Heckman procedure was employed to take into account censoring of the dependent variables. [EconLit citations: Q11, D12.] © 2003 Wiley Periodicals, Inc. Agribusiness 19: 43–59, 2003. [source]
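
    The classic Heckman two-step underlying the 'generalized' procedure works as follows: fit a probit for whether the household purchases the good at all, form the inverse Mills ratio from the fitted index, and add it as a regressor in the demand equation estimated on purchasers only. A sketch of the textbook version (toy data and variable names are assumptions; the paper's generalized variant differs in detail):

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        def probit_fit(Z, s):
            """First stage: probit of the purchase indicator s on covariates Z."""
            def nll(g):
                xb = Z @ g
                return -np.sum(s * norm.logcdf(xb) + (1 - s) * norm.logcdf(-xb))
            return minimize(nll, np.zeros(Z.shape[1]), method="BFGS").x

        def heckman_two_step(y, X, Z, s):
            g = probit_fit(Z, s)
            xb = Z @ g
            mills = norm.pdf(xb) / norm.cdf(xb)          # inverse Mills ratio
            obs = s == 1
            Xa = np.column_stack([X[obs], mills[obs]])   # augmented second stage
            coef, *_ = np.linalg.lstsq(Xa, y[obs], rcond=None)
            return coef                                  # last entry: selection term

        # toy usage: correlated selection and outcome errors (rho = 0.6)
        rng = np.random.default_rng(2)
        n = 1000
        z = rng.normal(size=n)
        Z = np.column_stack([np.ones(n), z])
        u, e = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n).T
        s = (0.5 + z + u > 0).astype(int)
        y = 1.0 + 2.0 * z + e
        print(heckman_two_step(y, Z, Z, s))   # approx [1.0, 2.0, 0.6]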


    Finding the best treatment under heavy censoring and hidden bias

    JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 1 2007
    Myoung-jae Lee
    Summary. We analyse male survival duration after hospitalization following an acute myocardial infarction with a large (N = 11,024) Finnish data set to find the best performing hospital district (and to disseminate its treatment protocol). This is a multiple-treatment problem with 21 treatments (i.e. 21 hospital districts). The task of choosing the best treatment is difficult owing to heavy right censoring (73%), which makes the usual location measures (the mean and median) unidentified; instead, only lower quantiles are identified. There is also a sample selection issue: only those who made it to a hospital alive are observed (54%), which becomes a problem if we wish to know potential survival duration after hospitalization, had a patient survived to a hospital, contrary to fact. The data set is limited in its covariates (only age is available) but includes the distance to the hospital, which plays an interesting role. Given that only age and distance are observed, it is likely that there are unobserved confounders. To account for them, a sensitivity analysis is conducted following pair matching. All estimators employed point to a clear winner, and the sensitivity analysis indicates that the finding is fairly robust. [source]


    Regression analysis based on semicompeting risks data

    JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 1 2008
    Jin-Jian Hsieh
    Summary. Semicompeting risks data are commonly seen in biomedical applications in which a terminal event censors a non-terminal event. Possible dependent censoring complicates statistical analysis. We consider regression analysis based on a non-terminal event, say disease progression, which is subject to censoring by death. The methodology proposed is developed for discrete covariates under two types of assumption. First, separate copula models are assumed for each covariate group and then a flexible regression model is imposed on the progression time, which is of major interest. Model checking procedures are also proposed to help choose the best-fitting model. Under a two-sample setting, Lin and co-workers proposed a competing method which requires an additional marginal assumption on the terminal event and implicitly assumes that the dependence structures in the two groups are the same. Using simulations, we compare the two approaches on the basis of their finite sample performances and robustness properties under model misspecification. The method proposed is applied to a bone marrow transplant data set. [source]
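
    The copula structure is easy to simulate, which also clarifies what censoring of the non-terminal event by the terminal one means in practice. Below, a Clayton copula (one Archimedean family used in this literature) links progression and death times via conditional inversion; the rates and θ are illustrative assumptions:

        import numpy as np

        rng = np.random.default_rng(3)
        n, theta = 5000, 2.0      # theta > 0 gives positive dependence (Clayton)

        # Conditional-inversion sampler for the Clayton copula
        u1 = rng.uniform(size=n)
        w = rng.uniform(size=n)
        u2 = ((w ** (-theta / (1.0 + theta)) - 1.0) * u1 ** (-theta) + 1.0) ** (-1.0 / theta)

        t_prog = -np.log(1.0 - u1) / 0.5    # non-terminal event (progression)
        t_death = -np.log(1.0 - u2) / 0.3   # terminal event (death)

        # Semicompeting structure: death censors progression, not vice versa
        x1 = np.minimum(t_prog, t_death)
        d1 = (t_prog <= t_death).astype(int)
        print("progression observed for", d1.mean(), "of subjects")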


    Mixture cure survival models with dependent censoring

    JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 3 2007
    Yi Li
    Summary. The paper is motivated by cure detection among the prostate cancer patients in the National Institutes of Health surveillance epidemiology and end results programme, wherein the main end point (e.g. deaths from prostate cancer) and the censoring causes (e.g. deaths from heart diseases) may be dependent. Although many researchers have studied the mixture survival model to analyse survival data with non-negligible cure fractions, none has studied the mixture cure model in the presence of dependent censoring. To account for such dependence, we propose a more general cure model that allows for dependent censoring. We derive the cure models from the perspective of competing risks and model the dependence between the censoring time and the survival time by using a class of Archimedean copula models. Within this framework, we consider the parameter estimation, the cure detection and the two-sample comparison of latency distributions in the presence of dependent censoring when a proportion of patients is deemed cured. Large sample results by using martingale theory are obtained. We examine the finite sample performance of the proposed methods via simulation and apply them to analyse the surveillance epidemiology and end results prostate cancer data. [source]


    Monitoring processes with data censored owing to competing risks by using exponentially weighted moving average control charts

    JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 3 2001
    Stefan H. Steiner
    In industry, process monitoring is widely employed to detect process changes rapidly. However, in some industrial applications observations are censored. For example, when testing breaking strengths and failure times, a limited stress test is often performed. With censored observations, a direct application of traditional monitoring procedures is not appropriate. When censoring occurs due to competing risks, we propose a control chart based on conditional expected values to detect changes in the mean strength. To protect against possible confounding caused by changes in the mean of the censoring mechanism, we also suggest a similar chart to detect changes in the mean censoring level. We provide an example of monitoring bond strength to illustrate the application of this methodology. [source]
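
    A conditional expected value (CEV) chart in the spirit described here replaces each censored strength by its expected value given that it exceeds the test limit, and then tracks an ordinary EWMA. A sketch for normally distributed strengths censored at a fixed stress-test limit (the in-control parameters, limit, and smoothing constant are illustrative assumptions):

        import numpy as np
        from scipy.stats import norm

        mu0, sigma0 = 100.0, 5.0    # in-control mean strength and sd (assumed known)
        limit = 105.0               # stress-test limit: larger values are censored
        lam = 0.1                   # EWMA smoothing constant

        def cev(x, censored):
            """Replace censored values by E[X | X > limit] under the in-control model."""
            z = (limit - mu0) / sigma0
            tail_mean = mu0 + sigma0 * norm.pdf(z) / norm.sf(z)   # normal tail mean
            return np.where(censored, tail_mean, x)

        def ewma(scores, start):
            w, path = start, []
            for s in scores:
                w = lam * s + (1 - lam) * w
                path.append(w)
            return np.array(path)

        # toy data: the mean strength drops by 4 units after observation 50
        rng = np.random.default_rng(4)
        raw = np.concatenate([rng.normal(mu0, sigma0, 50), rng.normal(mu0 - 4, sigma0, 50)])
        censored = raw > limit
        x = np.where(censored, limit, raw)
        print(ewma(cev(x, censored), start=mu0)[-5:])   # drifts below mu0 after the shift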


    Bayesian incidence analysis of animal tumorigenicity data

    JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 2 2001
    D. B. Dunson
    Statistical inference about tumorigenesis should focus on the tumour incidence rate. Unfortunately, in most animal carcinogenicity experiments, tumours are not observable in live animals and censoring of the tumour onset times is informative. In this paper, we propose a Bayesian method for analysing data from such studies. Our approach focuses on the incidence of tumours and accommodates occult tumours and censored onset times without restricting tumour lethality, relying on cause-of-death data, or requiring interim sacrifices. We represent the underlying state of nature by a multistate stochastic process and assume general probit models for the time-specific transition rates. These models allow the incorporation of covariates, historical control data and subjective prior information. The inherent flexibility of this approach facilitates the interpretation of results, particularly when the sample size is small or the data are sparse. We use a Gibbs sampler to estimate the relevant posterior distributions. The methods proposed are applied to data from a US National Toxicology Program carcinogenicity study. [source]


    Estimating the transmission probability of human immunodeficiency virus in injecting drug users in Thailand

    JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 1 2001
    Michael G. Hudgens
    We estimate the transmission probability for the human immunodeficiency virus from seroconversion data of a cohort of injecting drug users (IDUs) in Thailand. The transmission probability model developed accounts for interval censoring and incorporates each IDU's reported frequency of needle sharing and injecting acts. Using maximum likelihood methods, the estimated transmission probability per needle-sharing act between infectious and susceptible IDUs is 0.008. The effects of covariates, disease dynamics, mismeasured exposure information and the uncertainty of the disease prevalence on the transmission probability estimate are considered. [source]
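
    The backbone of such an estimate is the binomial "escape" model: with per-act transmission probability α and n sharing acts with infectious partners, the probability of seroconversion is 1 − (1 − α)^n. A maximum likelihood sketch that ignores the interval censoring and covariate effects handled in the paper:

        import numpy as np
        from scipy.optimize import minimize_scalar

        rng = np.random.default_rng(5)
        alpha_true = 0.008
        acts = rng.poisson(30, size=500) + 1        # sharing acts per subject
        p_inf = 1 - (1 - alpha_true) ** acts
        y = rng.binomial(1, p_inf)                  # 1 = observed seroconversion

        def neg_loglik(alpha):
            p = np.clip(1 - (1 - alpha) ** acts, 1e-12, 1 - 1e-12)
            return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

        res = minimize_scalar(neg_loglik, bounds=(1e-6, 0.5), method="bounded")
        print(res.x)   # close to alpha_true in large samples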


    Ascites after liver transplantation: A mystery

    LIVER TRANSPLANTATION, Issue 5 2004
    Charmaine A. Stewart
    Ascites after liver transplantation, although uncommon, presents a serious clinical dilemma. The hemodynamic changes that support the development of ascites before liver transplantation are resolved after transplant; therefore, persistent ascites (PA) after liver transplantation is unexpected and poorly characterized. The aim of this study was to define the clinical factors associated with PA after liver transplantation. This was a retrospective case–control analysis of patients who underwent liver transplantation at the University of Pennsylvania. PA occurring for more than 3 months after liver transplantation was confirmed by imaging studies. PA was correlated with multiple recipient and donor variables, including etiology of liver disease, preoperative ascites, prior portosystemic shunt (PS), donor age, and cold ischemic (CI) time. There were 2 groups: group 1, cases with PA transplanted from November 1990 to July 2001, and group 2, consecutive control subjects who underwent liver transplantation between September 1999 and December 2001. Both groups were followed to censoring, May 2002, or death. Twenty-five patients in group 1 had ascites after liver transplantation after a median follow-up of 2.6 years. In group 1 vs group 2 (n = 106), there was a male predominance (80% vs 61%, P = .10) with similar age (52 years); chronic hepatitis C virus (HCV) was diagnosed in 88% vs 44% (P < .0001); preoperative ascites and ascites refractory to treatment were more prevalent in group 1 (P = .0004 and P = .02, respectively); and CI time was longer in group 1 (8.5 hours vs 6.3 hours, P = .002). Eight of the 25 (group 1) had portal hypertension with a median portosystemic gradient of 16.5 mm Hg (range, 16–24). PS was performed in 7 of 25 cases, which resulted in partial resolution of ascites. The development of PA after liver transplantation is multifactorial; HCV, refractory ascites before liver transplantation, and prolonged CI time contribute to PA after liver transplantation. (Liver Transpl 2004;10:654–660.) [source]


    Exact likelihood inference for the exponential distribution under generalized Type-I and Type-II hybrid censoring

    NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 7 2004
    B. Chandrasekar
    Abstract Chen and Bhattacharyya [Exact confidence bounds for an exponential parameter under hybrid censoring, Commun Statist Theory Methods 17 (1988), 1857–1870] considered a hybrid censoring scheme and obtained the exact distribution of the maximum likelihood estimator of the mean of an exponential distribution along with an exact lower confidence bound. Childs et al. [Exact likelihood inference based on Type-I and Type-II hybrid censored samples from the exponential distribution, Ann Inst Statist Math 55 (2003), 319–330] recently derived an alternative simpler expression for the distribution of the MLE. These authors also proposed a new hybrid censoring scheme and derived similar results for the exponential model. In this paper, we propose two generalized hybrid censoring schemes which have some advantages over the hybrid censoring schemes already discussed in the literature. We then derive the exact distribution of the maximum likelihood estimator as well as exact confidence intervals for the mean of the exponential distribution under these generalized hybrid censoring schemes. © 2004 Wiley Periodicals, Inc. Naval Research Logistics, 2004 [source]
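
    Under the original Type-I hybrid scheme, the life test stops at min(X_(r), T), and the exponential-mean MLE is the total time on test divided by the number of observed failures; it fails to exist when no failures occur, which is part of what drives the exact distribution theory. A simulation sketch of that classic scheme (the paper's generalized schemes modify the stopping rule):

        import numpy as np

        rng = np.random.default_rng(6)
        theta, n, r, T = 10.0, 20, 15, 12.0   # true mean, sample size, r, time bound

        def hybrid_mle(x):
            x = np.sort(x)
            stop = min(x[r - 1], T)               # Type-I hybrid stopping time
            d = int(np.sum(x <= stop))            # number of observed failures
            if d == 0:
                return np.nan                     # MLE does not exist
            ttt = x[:d].sum() + (n - d) * stop    # total time on test
            return ttt / d

        estimates = [hybrid_mle(rng.exponential(theta, n)) for _ in range(2000)]
        print(np.nanmean(estimates))              # near theta (the MLE is biased)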


    Log-rank permutation tests for trend: saddlepoint p-values and survival rate confidence intervals

    THE CANADIAN JOURNAL OF STATISTICS, Issue 1 2009
    Ehab F. Abd-Elfattah
    MSC 2000: Primary 62N03; secondary 62N02 Abstract Suppose p + 1 experimental groups correspond to increasing dose levels of a treatment and all groups are subject to right censoring. In such instances, permutation tests for trend can be performed based on statistics derived from the weighted log-rank class. This article uses saddlepoint methods to determine the mid-P-values for such permutation tests for any test statistic in the weighted log-rank class. Permutation simulations are replaced by analytical saddlepoint computations which provide extremely accurate mid-P-values that are exact for most practical purposes and almost always more accurate than normal approximations. The speed of mid-P-value computation allows for the inversion of such tests to determine confidence intervals for the percentage increase in mean (or median) survival time per unit increase in dosage. The Canadian Journal of Statistics 37: 5–16; 2009 © 2009 Statistical Society of Canada [source]
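
    The statistic being calibrated is simple to state: at each death time, compare each group's observed deaths with their hypergeometric expectation, weight, and combine across groups with the dose scores. The sketch below calibrates it by brute-force Monte Carlo permutation, which is exactly the step the saddlepoint approximation replaces; groups are assumed coded 0, ..., p so they index the score vector:

        import numpy as np

        def logrank_trend_stat(time, event, group, scores):
            """Weighted log-rank statistic for trend (unit weights for brevity)."""
            time, event, group = map(np.asarray, (time, event, group))
            u = 0.0
            for t in np.unique(time[event == 1]):
                at_risk = time >= t
                n_j = at_risk.sum()
                dead = at_risk & (time == t) & (event == 1)
                d_j = dead.sum()
                for g, c in enumerate(scores):
                    n_gj = (at_risk & (group == g)).sum()
                    d_gj = (dead & (group == g)).sum()
                    u += c * (d_gj - d_j * n_gj / n_j)   # observed minus expected
            return u

        def perm_pvalue(time, event, group, scores, n_perm=2000, seed=0):
            rng = np.random.default_rng(seed)
            obs = logrank_trend_stat(time, event, group, scores)
            perms = np.array([logrank_trend_stat(time, event, rng.permutation(group), scores)
                              for _ in range(n_perm)])
            return np.mean(np.abs(perms) >= np.abs(obs))

        # toy usage: survival shortens with dose, scores 0, 1, 2
        rng = np.random.default_rng(10)
        grp = np.repeat([0, 1, 2], 40)
        tt = rng.exponential(1.0 / (1.0 + 0.4 * grp))
        cc = rng.exponential(2.0, size=120)
        print(perm_pvalue(np.minimum(tt, cc), (tt <= cc).astype(int), grp, [0, 1, 2]))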


    Identification and estimation of local average derivatives in non-separable models without monotonicity

    THE ECONOMETRICS JOURNAL, Issue 1 2009
    Stefan Hoderlein
    Summary. In many structural economic models there are no good arguments for additive separability of the error. Recently, this motivated intensive research on non-separable structures. For instance, in Hoderlein and Mammen (2007) a non-separable model in the single equation case was considered, and it was established that in the absence of the frequently employed monotonicity assumption local average structural derivatives (LASD) are still identified. In this paper, we introduce an estimator for the LASD. The estimator we propose is based on local polynomial fitting of conditional quantiles. We derive its large sample distribution through a Bahadur representation, and give some related results, e.g. about the asymptotic behaviour of the quantile process. Moreover, we generalize the concept of LASD to include endogeneity of regressors and discuss the case of a multivariate dependent variable. We also consider identification of structured non-separable models, including single index and additive models. We discuss specification testing, as well as testing for endogeneity and for the impact of unobserved heterogeneity. We also show that fixed censoring can easily be addressed in this framework. Finally, we apply some of the concepts to demand analysis using British Consumer Data. [source]


    Theory & Methods: Data Sharpening for Hazard Rate Estimation

    AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 3 2002
    Gerda Claeskens
    Data sharpening is a general tool for enhancing the performance of statistical estimators, by altering the data before substituting them into conventional methods. In one of the simplest forms of data sharpening, available for curve estimation, an explicit empirical transformation is used to alter the data. The attraction of this approach is diminished, however, if the formula has to be altered for each different application. For example, one could expect the formula for use in hazard rate estimation to differ from that for straight density estimation, since a hazard rate is a ratio-type functional of a density. This paper shows that, in fact, identical data transformations can be used in each case, regardless of whether the data involve censoring. This dramatically simplifies the application of data sharpening to problems involving hazard rate estimation, and makes data sharpening attractive. [source]


    Exact, Distribution Free Confidence Intervals for Late Effects in Censored Matched Pairs

    BIOMETRICAL JOURNAL, Issue 1 2009
    Shoshana R. Daniel
    Abstract When comparing censored survival times for matched treated and control subjects, a late effect on survival is one that does not begin to appear until some time has passed. In a study of provider specialty in the treatment of ovarian cancer, a late divergence in the Kaplan–Meier survival curves hinted at superior survival among patients of gynecological oncologists, who employ chemotherapy less intensively, when compared to patients of medical oncologists, who employ chemotherapy more intensively; we ask whether this late divergence should be taken seriously. Specifically, we develop exact permutation tests, and exact confidence intervals formed by inverting the tests, for late effects in matched pairs subject to random but heterogeneous censoring. Unlike other exact confidence intervals with censored data, the proposed intervals do not require knowledge of censoring times for patients who die. Exact distributions are consequences of two results about signs, signed ranks, and their conditional independence properties. One test, the late effects sign test, has the binomial distribution; the other, the late effects signed rank test, uses nonstandard ranks but nonetheless has the same exact distribution as Wilcoxon's signed rank test. A simulation shows that the late effects signed rank test has substantially more power to detect late effects than do conventional tests. The confidence statement provides information about both the timing and magnitude of late effects. (© 2009 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
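
    To convey the flavor of the binomial calibration: fix a landmark time, keep only the pairs in which both members' status beyond the landmark is determined despite censoring and the pair is discordant, and refer the number of treated "wins" to a Binomial(m, 1/2) null. The sketch below is a simplified caricature with an assumed landmark, not the authors' exact construction, which extracts more information from the censoring pattern:

        import numpy as np
        from scipy.stats import binomtest

        def late_effect_sign_test(t_treat, d_treat, t_ctrl, d_ctrl, landmark):
            """Sign test on survival beyond a landmark time in matched pairs.

            Status beyond the landmark is 'alive' if follow-up exceeds it,
            'dead' if a death is observed by then, and unknown if censored earlier.
            """
            def status(t, d):
                t, d = np.asarray(t, float), np.asarray(d, int)
                alive = t > landmark
                dead = (t <= landmark) & (d == 1)
                return alive, alive | dead          # status, status-known flag

            a_t, k_t = status(t_treat, d_treat)
            a_c, k_c = status(t_ctrl, d_ctrl)
            decidable = k_t & k_c & (a_t != a_c)    # discordant, fully determined
            wins = int((a_t & decidable).sum())     # treated alive, control dead
            m = int(decidable.sum())
            return binomtest(wins, m, 0.5) if m > 0 else None

        # toy usage
        res = late_effect_sign_test([5, 9, 2, 7], [0, 1, 1, 1],
                                    [3, 4, 6, 2], [1, 1, 0, 1], landmark=4)
        print(res.pvalue if res else "no decidable pairs")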


    A Note on Comparing Exposure Data to a Regulatory Limit in the Presence of Unexposed and a Limit of Detection

    BIOMETRICAL JOURNAL, Issue 6 2005
    Haitao Chu
    Abstract In some occupational health studies, observations occur in both exposed and unexposed individuals. If the levels of all exposed individuals have been detected, a two-part zero-inflated log-normal model is usually recommended, which assumes that the data has a probability mass at zero for unexposed individuals and a continuous response for values greater than zero for exposed individuals. However, many quantitative exposure measurements are subject to left censoring due to values falling below assay detection limits. A zero-inflated log-normal mixture model is suggested in this situation since unexposed zeros are not distinguishable from those exposed with values below detection limits. In the context of this mixture distribution, the information contributed by values falling below a fixed detection limit is used only to estimate the probability of being unexposed. We consider sample size and statistical power calculation when comparing the median of exposed measurements to a regulatory limit. We calculate the required sample size for the data presented in a recent paper comparing the benzene TWA exposure data to a regulatory occupational exposure limit. A simulation study is conducted to investigate the performance of the proposed sample size calculation methods. (© 2005 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
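
    The mixture likelihood has exactly two kinds of contribution: a nondetect contributes p0 + (1 − p0)Φ((log L − μ)/σ), because an unexposed zero is indistinguishable from an exposed value below the detection limit L, while a detected value contributes (1 − p0) times the lognormal density. A direct MLE sketch (the parameter values and the logit/log reparameterization are illustrative choices):

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        rng = np.random.default_rng(7)
        n, p0, mu, sigma, lod = 400, 0.3, 0.0, 1.0, 0.4   # illustrative truth and LOD
        exposed = rng.uniform(size=n) > p0
        conc = np.where(exposed, rng.lognormal(mu, sigma, n), 0.0)
        detected = conc > lod
        x = np.where(detected, conc, np.nan)              # only detected values recorded

        def neg_loglik(params):
            q, m, log_s = params                          # q is the logit of p0
            p = 1.0 / (1.0 + np.exp(-q))
            s = np.exp(log_s)
            below = norm.cdf((np.log(lod) - m) / s)
            ll_nd = np.log(p + (1 - p) * below) * (~detected).sum()
            z = (np.log(x[detected]) - m) / s             # lognormal log-density terms
            ll_d = np.sum(np.log(1 - p) + norm.logpdf(z) - np.log(s) - np.log(x[detected]))
            return -(ll_nd + ll_d)

        res = minimize(neg_loglik, x0=np.zeros(3), method="Nelder-Mead")
        print(1 / (1 + np.exp(-res.x[0])), res.x[1], np.exp(res.x[2]))   # p0, mu, sigma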


    Regression Analysis with a Misclassified Covariate from a Current Status Observation Scheme

    BIOMETRICS, Issue 2 2010
    Leilei Zeng
    Summary Naive use of misclassified covariates leads to inconsistent estimators of covariate effects in regression models. A variety of methods have been proposed to address this problem including likelihood, pseudo-likelihood, estimating equation methods, and Bayesian methods, with all of these methods typically requiring either internal or external validation samples or replication studies. We consider a problem arising from a series of orthopedic studies in which interest lies in examining the effect of a short-term serological response and other covariates on the risk of developing a longer term thrombotic condition called deep vein thrombosis. The serological response is an indicator of whether the patient developed antibodies following exposure to an antithrombotic drug, but the seroconversion status of patients is only available at the time of a blood sample taken upon discharge from hospital. The seroconversion time is therefore subject to a current status observation scheme, or Case I interval censoring, and subjects tested before seroconversion are misclassified as nonseroconverters. We develop a likelihood-based approach for fitting regression models that accounts for misclassification of the seroconversion status due to early testing using parametric and nonparametric estimates of the seroconversion time distribution. The method is shown to reduce the bias resulting from naive analyses in simulation studies and an application to the data from the orthopedic studies provides further illustration. [source]


    A Bayesian Chi-Squared Goodness-of-Fit Test for Censored Data Models

    BIOMETRICS, Issue 2 2010
    Jing Cao
    Summary We propose a Bayesian chi-squared model diagnostic for analysis of data subject to censoring. The test statistic has the form of Pearson's chi-squared test statistic and is easy to calculate from standard output of Markov chain Monte Carlo algorithms. The key innovation of this diagnostic is that it is based only on observed failure times. Because it does not rely on the imputation of failure times for observations that have been censored, we show that under heavy censoring it can have higher power for detecting model departures than a comparable test based on the complete data. In a simulation study, we show that tests based on this diagnostic exhibit power comparable to, and better nominal Type I error rates than, a commonly used alternative test proposed by Akritas (1988, Journal of the American Statistical Association 83, 222–230). An important advantage of the proposed diagnostic is that it can be applied to a broad class of censored data models, including generalized linear models and other models with nonidentically distributed and nonadditive error structures. We illustrate the proposed model diagnostic for testing the adequacy of two parametric survival models for Space Shuttle main engine failures. [source]


    Bayesian Inference for Smoking Cessation with a Latent Cure State

    BIOMETRICS, Issue 3 2009
    Sheng Luo
    Summary We present a Bayesian approach to modeling dynamic smoking addiction behavior processes when cure is not directly observed due to censoring. Subject-specific probabilities model the stochastic transitions among three behavioral states: smoking, transient quitting, and permanent quitting (absorbent state). A multivariate normal distribution for random effects is used to account for the potential correlation among the subject-specific transition probabilities. Inference is conducted using a Bayesian framework via Markov chain Monte Carlo simulation. This framework provides various measures of subject-specific predictions, which are useful for policy-making, intervention development, and evaluation. Simulations are used to validate our Bayesian methodology and assess its frequentist properties. Our methods are motivated by, and applied to, the Alpha-Tocopherol, Beta-Carotene Lung Cancer Prevention study, a large (29,133 individuals) longitudinal cohort study of smokers from Finland. [source]


    Regularized Estimation for the Accelerated Failure Time Model

    BIOMETRICS, Issue 2 2009
    T. Cai
    Summary In the presence of high-dimensional predictors, it is challenging to develop reliable regression models that can be used to accurately predict future outcomes. Further complications arise when the outcome of interest is an event time, which is often not fully observed due to censoring. In this article, we develop robust prediction models for event time outcomes by regularizing Gehan's estimator for the accelerated failure time (AFT) model (Tsiatis, 1996, Annals of Statistics 18, 305–328) with the least absolute shrinkage and selection operator (LASSO) penalty. Unlike existing methods based on inverse probability weighting and the Buckley and James estimator (Buckley and James, 1979, Biometrika 66, 429–436), the proposed approach does not require additional assumptions about the censoring and always yields a solution that is convergent. Furthermore, the proposed estimator leads to a stable regression model for prediction even if the AFT model fails to hold. To facilitate the adaptive selection of the tuning parameter, we detail an efficient numerical algorithm for obtaining the entire regularization path. The proposed procedures are applied to a breast cancer dataset to derive a reliable regression model for predicting patient survival based on a set of clinical prognostic factors and gene signatures. Finite sample performances of the procedures are evaluated through a simulation study. [source]
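
    The object being regularized is the convex Gehan loss G(β) = n⁻² Σᵢ Σⱼ δᵢ max{eⱼ(β) − eᵢ(β), 0}, with residuals eᵢ(β) = log Tᵢ − xᵢᵀβ, plus an L1 penalty. A small-scale sketch using a generic derivative-free optimizer rather than the authors' path algorithm (toy data; the tuning parameter is an arbitrary illustrative value):

        import numpy as np
        from scipy.optimize import minimize

        def gehan_lasso_loss(beta, logt, x, delta, lam):
            e = logt - x @ beta
            # pairwise residual differences, counted only when subject i failed
            diff = np.maximum(e[None, :] - e[:, None], 0.0)   # diff[i, j] = (e_j - e_i)+
            gehan = np.sum(delta[:, None] * diff) / len(e) ** 2
            return gehan + lam * np.abs(beta).sum()           # L1 (LASSO) penalty

        # toy AFT data: log T = x'beta + error, with independent censoring
        rng = np.random.default_rng(8)
        n, p = 150, 3
        x = rng.normal(size=(n, p))
        beta_true = np.array([1.0, -0.5, 0.0])
        logT = x @ beta_true + rng.normal(scale=0.5, size=n)
        logC = rng.normal(loc=1.0, scale=1.0, size=n)
        logt = np.minimum(logT, logC)
        delta = (logT <= logC).astype(float)

        res = minimize(gehan_lasso_loss, np.zeros(p), args=(logt, x, delta, 0.05),
                       method="Nelder-Mead")
        print(np.round(res.x, 2))   # the null third coefficient is shrunk toward zero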


    Asymptotic Distribution of Score Statistics for Spatial Cluster Detection with Censored Data

    BIOMETRICS, Issue 4 2008
    Daniel Commenges
    Summary Cook, Gold, and Li (2007, Biometrics 63, 540–549) extended the Kulldorff (1997, Communications in Statistics 26, 1481–1496) scan statistic for spatial cluster detection to survival-type observations. Their approach was based on the score statistic and they proposed a permutation distribution for the maximum of score tests. The score statistic makes it possible to apply the scan statistic idea to models including explanatory variables. However, we show that the permutation distribution requires strong assumptions of independence between potential cluster and both censoring and explanatory variables. In contrast, we present an approach using the asymptotic distribution of the maximum of score statistics in a manner not requiring these assumptions. [source]


    A Copula Approach for Detecting Prognostic Genes Associated With Survival Outcome in Microarray Studies

    BIOMETRICS, Issue 4 2007
    Kouros Owzar
    Summary A challenging and crucial issue in clinical studies in cancer involving gene microarray experiments is the discovery, among a large number of genes, of a relatively small panel of genes whose elements are associated with a relevant clinical outcome variable such as time-to-death or time-to-recurrence of disease. A semiparametric approach, using dependence functions known as copulas, is considered to quantify and estimate the pairwise association between the outcome and each gene expression. These time-to-event type endpoints are typically subject to censoring as not all events are realized at the time of the analysis. Furthermore, given that the total number of genes is typically large, it is imperative to control a relevant error rate in any gene discovery procedure. The proposed method addresses the two aforementioned issues by direct incorporation of the censoring mechanism and by appropriate statistical adjustment for multiplicity. The performance of the proposed method is studied through simulation and illustrated with an application using a case study in lung cancer. [source]


    Estimating Mean Response as a Function of Treatment Duration in an Observational Study, Where Duration May Be Informatively Censored

    BIOMETRICS, Issue 2 2004
    Brent A. Johnson
    Summary. After a treatment is found to be effective in a clinical study, attention often focuses on the effect of treatment duration on outcome. Such an analysis facilitates recommendations on the most beneficial treatment duration. In many studies, the treatment duration, within certain limits, is left to the discretion of the investigators. It is often the case that treatment must be terminated prematurely due to an adverse event, in which case a recommended treatment duration is part of a policy that treats patients for a specified length of time or until a treatment-censoring event occurs, whichever comes first. Evaluating mean response for a particular treatment-duration policy from observational data is difficult due to censoring and the fact that it may not be reasonable to assume patients are prognostically similar across all treatment strategies. We propose an estimator for mean response as a function of treatment-duration policy under these conditions. The method uses potential outcomes and embodies assumptions that allow consistent estimation of the mean response. The estimator is evaluated through simulation studies and demonstrated by application to the ESPRIT infusion trial coordinated at Duke University Medical Center. [source]