Failure Time Data


Selected Abstracts


Competing Risks and Time-Dependent Covariates

BIOMETRICAL JOURNAL, Issue 1 2010
Giuliana Cortese
Abstract Time-dependent covariates are frequently encountered in regression analysis for event history data and competing risks. They are often essential predictors, which cannot be substituted by time-fixed covariates. This study briefly recalls the different types of time-dependent covariates, as classified by Kalbfleisch and Prentice [The Statistical Analysis of Failure Time Data, Wiley, New York, 2002], with the intent of clarifying their role and emphasizing their limitations in standard survival models and in the competing risks setting. If random (internal) time-dependent covariates are included in the modeling process, then it is still possible to estimate cause-specific hazards, but prediction of the cumulative incidences and survival probabilities based on these is no longer feasible. This article aims at providing some possible strategies for dealing with these prediction problems. In a multi-state framework, a first approach uses internal covariates to define additional (intermediate) transient states in the competing risks model. Another approach is to apply the landmark analysis described by van Houwelingen [Scandinavian Journal of Statistics 2007, 34, 70–85] in order to study cumulative incidences at different subintervals of the entire study period. The final strategy is to extend the competing risks model by considering all possible combinations between internal covariate levels and cause-specific events as final states. In all of these proposals, it is possible to estimate the changes/differences of the cumulative risks associated with simple internal covariates. An illustrative example based on bone marrow transplant data is presented in order to compare the different methods.


Robust Joint Modeling of Longitudinal Measurements and Competing Risks Failure Time Data

BIOMETRICAL JOURNAL, Issue 1 2009
Ning Li
Abstract Existing methods for joint modeling of longitudinal measurements and survival data can be highly influenced by outliers in the longitudinal outcome. We propose a joint model for the analysis of longitudinal measurements and competing risks failure time data that is robust in the presence of outlying longitudinal observations during follow-up. Our model consists of a linear mixed effects sub-model for the longitudinal outcome and a proportional cause-specific hazards frailty sub-model for the competing risks data, linked together by latent random effects. Instead of the usual normality assumption for measurement errors in the linear mixed effects sub-model, we adopt a t-distribution, which has longer tails and thus is more robust to outliers. We derive an EM algorithm for the maximum likelihood estimates of the parameters and estimate their standard errors using a profile likelihood method. The proposed method is evaluated by simulation studies and is applied to a scleroderma lung study. (© 2009 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
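The gain from swapping the normal error distribution for a t-distribution can be seen in a setting far simpler than the paper's joint model. The sketch below is only an illustration of that robustness property, not the authors' EM algorithm: it compares maximum likelihood location estimates under normal and t(3) errors when the sample contains gross outliers.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)
# 50 well-behaved observations plus two gross outliers
y = np.concatenate([rng.normal(5.0, 1.0, 50), [25.0, 30.0]])

def neg_loglik(mu, dist):
    """Negative log-likelihood for a pure location model y = mu + error."""
    return -dist.logpdf(y - mu).sum()

mu_norm = optimize.minimize_scalar(neg_loglik, args=(stats.norm,)).x
mu_t = optimize.minimize_scalar(neg_loglik, args=(stats.t(df=3),)).x
# mu_norm is dragged toward the outliers; mu_t stays near the bulk at 5
```

The heavier tails of the t density down-weight extreme residuals, which is the same mechanism the measurement-error sub-model exploits.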


A Goodness-of-fit Test for the Marginal Cox Model for Correlated Interval-censored Failure Time Data

BIOMETRICAL JOURNAL, Issue 6 2006
Lianming Wang
Abstract The marginal Cox model approach is perhaps the most commonly used method in the analysis of correlated failure time data (Cai, 1999; Cai and Prentice, 1995; Lin, 1994; Wei, Lin and Weissfeld, 1989). It assumes that the marginal distributions for the correlated failure times can be described by the Cox model and leaves the dependence structure completely unspecified. This paper discusses the assessment of the marginal Cox model for correlated interval-censored data, and a goodness-of-fit test is presented for the problem. The method is applied to a set of correlated interval-censored data arising from an AIDS clinical trial. (© 2006 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)


An Independence Test for Doubly Censored Failure Time Data

BIOMETRICAL JOURNAL, Issue 5 2004
Jianguo Sun
Abstract The analysis of doubly censored failure time data has recently attracted a great deal of attention and for this, a number of methods have been proposed (De Gruttola and Lagakos, 1989; Kim et al., 1993; Pan, 2001; Sun, 2004). To simplify the analysis, most of these methods make an independence assumption: the distribution of the survival time of interest is independent of the occurrence of the initial event that defines the survival time. Although it is well known that the assumption may not be true, there does not seem to be any existing research discussing the checking of the assumption. In this article, a Wald test is developed for testing this assumption and the method is applied to an AIDS cohort study. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
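The abstract does not give the form of the statistic, but the generic Wald recipe it builds on is simple: estimate the parameter indexing the dependence, divide by its standard error, and compare the square to a chi-squared reference. A minimal sketch of that recipe; the numbers are illustrative, not from the paper.

```python
from scipy import stats

def wald_test(theta_hat, se, theta0=0.0):
    """Wald test of H0: theta = theta0. The statistic
    ((theta_hat - theta0) / se)**2 is chi-squared with 1 df under H0."""
    w = ((theta_hat - theta0) / se) ** 2
    p = stats.chi2.sf(w, df=1)
    return w, p

# hypothetical estimate of a dependence parameter and its standard error
w, p = wald_test(theta_hat=0.8, se=0.3)
```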


Incorporating Correlation for Multivariate Failure Time Data When Cluster Size Is Large

BIOMETRICS, Issue 2 2010
L. Xue
Summary We propose a new estimation method for multivariate failure time data using the quadratic inference function (QIF) approach. The proposed method efficiently incorporates within-cluster correlations and is therefore more efficient than methods that ignore them. Furthermore, the proposed method is easy to implement. Unlike the weighted estimating equations in Cai and Prentice (1995, Biometrika 82, 151–164), it is not necessary to explicitly estimate the correlation parameters. This simplification is particularly useful in analyzing data with large cluster sizes, where it is difficult to estimate intracluster correlation. Under certain regularity conditions, we show the consistency and asymptotic normality of the proposed QIF estimators. A chi-squared test is also developed for hypothesis testing. We conduct extensive Monte Carlo simulation studies to assess the finite sample performance of the proposed methods. We also illustrate the proposed methods by analyzing primary biliary cirrhosis (PBC) data.


Marginal Hazards Regression for Retrospective Studies within Cohort with Possibly Correlated Failure Time Data

BIOMETRICS, Issue 2 2009
Sangwook Kang
Summary A retrospective dental study was conducted to evaluate the degree to which pulpal involvement affects tooth survival. Due to the clustering of teeth, the survival times within each subject could be correlated, and thus the conventional method for case–control studies cannot be directly applied. In this article, we propose a marginal model approach for this type of correlated case–control within-cohort data. Weighted estimating equations are proposed for the estimation of the regression parameters. Different types of weights are also considered for improving efficiency. Asymptotic properties of the proposed estimators are investigated, and their finite sample properties are assessed via simulation studies. The proposed method is applied to the aforementioned dental study.


Regression Analysis of Doubly Censored Failure Time Data Using the Additive Hazards Model

BIOMETRICS, Issue 3 2004
Liuquan Sun
Summary Doubly censored failure time data arise when the survival time of interest is the elapsed time between two related events and observations on occurrences of both events could be censored. Regression analysis of doubly censored data has recently attracted considerable attention, and a few methods have been proposed for it (Kim et al., 1993, Biometrics 49, 13–22; Sun et al., 1999, Biometrics 55, 909–914; Pan, 2001, Biometrics 57, 1245–1250). However, all of these methods are based on the proportional hazards model, and it is well known that the proportional hazards model may sometimes not fit failure time data well. This article investigates regression analysis of such data using the additive hazards model, and an estimating equation approach is proposed for inference about the regression parameters of interest. The proposed method can be easily implemented, and the properties of the proposed estimates of the regression parameters are established. The method is applied to a set of doubly censored data from an AIDS cohort study.


Aspects of the Armitage–Doll gamma frailty model for cancer incidence data

ENVIRONMETRICS, Issue 3 2004
Shizue Izumi
Abstract Using solid cancer incidence data from atomic bomb survivors in Japan, we examine some aspects of the Armitage–Doll gamma frailty (ADF) model. We consider the following two interpretations for the lack of fit of the Armitage–Doll multistage (AD) model found with cancer data: the AD-type individual hazards are heterogeneous, or the individual hazards increase more slowly with age than the AD-type hazards. In order to examine these interpretations, we applied the ADF model and the modified AD model to radiation-related cancer incidence rates. We assessed the magnitude of frailty by a frailty parameter in the ADF model, and departures from the AD-type baseline hazard by a shape increment parameter in the modified AD model. Akaike's information criterion (AIC) was used to examine the goodness of fit of the models. The modified AD model provided as good a fit as the ADF model. Our results support both interpretations and imply that these interpretations may be practically unidentifiable in univariate failure time data. Thus, results from the frailty model for univariate failure time data should be interpreted carefully. Copyright © 2004 John Wiley & Sons, Ltd.
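The near-equivalence the authors report is easy to reproduce numerically. Under the AD model the individual hazard grows like a power of age; mixing those hazards over a gamma frailty with mean 1 and variance theta gives the standard marginalization h(t) / (1 + theta * Lambda(t)), which simply looks like a hazard rising more slowly with age. A sketch with illustrative parameter values (c, k, and theta are not from the paper):

```python
import numpy as np

def ad_hazard(t, c=1e-9, k=6):
    """Armitage-Doll multistage hazard: rises like age**(k-1)."""
    return c * t ** (k - 1)

def adf_marginal_hazard(t, c=1e-9, k=6, theta=2.0):
    """Population hazard when the AD individual hazards carry a gamma
    frailty with mean 1 and variance theta: h(t) / (1 + theta * Lambda(t))."""
    cum_hazard = c * t ** k / k          # Lambda(t), the cumulative AD hazard
    return ad_hazard(t, c, k) / (1.0 + theta * cum_hazard)

ages = np.array([40.0, 60.0, 80.0])
# The frailty-mixed population hazard falls ever further below the AD
# individual hazard with age; it mimics a smaller effective shape parameter.
ratio = adf_marginal_hazard(ages) / ad_hazard(ages)
```

A fit to population rates cannot separate the two stories, which is the identifiability caution the abstract ends on.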


Analysis of failure time data under competing censoring mechanisms

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 3 2007
Andrea Rotnitzky
Summary. We derive estimators of the survival curve of a failure time in the presence of competing right-censoring mechanisms. Our approach allows for the possibility that some or all of the competing censoring mechanisms are associated with the endpoint, even after adjustment for recorded prognostic factors. It also allows the degree of residual association to differ across distinct censoring processes. Our methods generalize the methods of Scharfstein and Robins from one to several competing censoring mechanisms.


Control Charts for Monitoring Field Failure Data

QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 7 2006
Robert G. Batson
Abstract One responsibility of the reliability engineer is to monitor failure trends for fielded units to confirm that pre-production life testing results remain valid. This research suggests an approach that is computationally simple and can be used with a small number of failures per observation period. The approach is based on converting failure time data from fielded units to normal distribution data, using simple logarithmic or power transformations. Appropriate normalizing transformations for the classic life distributions (exponential, lognormal, and Weibull) are identified from the literature. Samples of 500 field failure times are generated for seven different lifetime distributions (normal, lognormal, exponential, and four Weibulls of various shapes). Various control charts are then tested under three sampling schemes (individual, fixed, and random) and three system reliability degradations (large step, small step, and linear decrease in mean time between failures (MTBF)). The results of these tests are converted to performance measures of time to first out-of-control signal and persistence of signal after out-of-control status begins. Three of the well-known Western Electric sensitizing rules are used to recognize the assignable-cause signals. Based on this testing, the X̄-chart with fixed sample size is the best overall for field failure monitoring, although the individual chart was better for the transformed exponential and another highly skewed Weibull. As expected, the linear decrease in MTBF is the most difficult change for any of the charts to detect. Copyright © 2005 John Wiley & Sons, Ltd.
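The transform-then-chart idea can be sketched end to end. One normalizing power transformation reported in the reliability literature for exponential data is T**(1/3.6): the result is Weibull with shape 3.6, which is close to normal. The fragment below is only a sketch of that scheme (the parameter choices are illustrative, and the limits use a plain 3-sigma estimate without the usual c4 correction): it builds Phase-I limits from in-control data and then checks Phase-II subgroups after a large step drop in MTBF.

```python
import numpy as np

rng = np.random.default_rng(42)

def normalize_exponential(times):
    """Power transform for exponential failure times: T**(1/3.6) is
    Weibull with shape 3.6, which is approximately normal."""
    return np.asarray(times) ** (1.0 / 3.6)

def xbar_limits(subgroups):
    """3-sigma X-bar chart limits; sigma is estimated from the average
    subgroup standard deviation (no c4 correction, fine for a sketch)."""
    n = subgroups.shape[1]
    sigma = subgroups.std(axis=1, ddof=1).mean()
    center = subgroups.mean(axis=1).mean()
    half = 3.0 * sigma / np.sqrt(n)
    return center - half, center, center + half

# Phase I: failure times from an in-control exponential(MTBF=100) process,
# 25 subgroups of size 5.
phase1 = normalize_exponential(rng.exponential(100.0, size=(25, 5)))
lcl, center, ucl = xbar_limits(phase1)

# Phase II: MTBF drops sharply to 10; transformed subgroup means should
# fall below the lower control limit.
phase2 = normalize_exponential(rng.exponential(10.0, size=(25, 5)))
means2 = phase2.mean(axis=1)
signals = (means2 < lcl) | (means2 > ucl)
```

This reproduces only the simplest "large step" scenario and the plain 3-sigma rule; the paper additionally applies the Western Electric sensitizing rules and random sampling schemes.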


Bayesian inference for Rayleigh distribution under progressive censored sample

APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 3 2006
Shuo-Jye Wu
Abstract It is often the case that some information is available on the parameter of a failure time distribution from previous experiments or analyses of failure time data. The Bayesian approach provides the methodology for incorporating previous information with the current data. In this paper, given a progressively type-II censored sample from a Rayleigh distribution, Bayesian estimators and credible intervals are obtained for the parameter and the reliability function. We also derive the Bayes predictive estimator and the highest posterior density prediction interval for future observations. Two numerical examples are presented for illustration, and some simulation studies and comparisons are performed. Copyright © 2006 John Wiley & Sons, Ltd.
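For the Rayleigh density parameterized as f(t | lambda) = (t / lambda) * exp(-t^2 / (2 * lambda)), the progressively type-II censored likelihood is proportional to lambda**(-m) * exp(-sum((1 + R_i) * t_i**2) / (2 * lambda)), so an inverse-gamma prior on lambda is conjugate and the posterior is available in closed form. A sketch under that parameterization; the prior hyperparameters and toy data below are illustrative choices, not the paper's.

```python
import numpy as np
from scipy import stats

def rayleigh_progressive_posterior(t, R, a=2.0, b=1.0):
    """Posterior for lambda in the Rayleigh density
    f(t) = (t/lambda) exp(-t^2 / (2 lambda)) under progressive type-II
    censoring. t: observed failure times; R: units withdrawn at each
    failure. Inverse-gamma(a, b) prior is conjugate here."""
    t, R = np.asarray(t, float), np.asarray(R, float)
    m = len(t)
    S = np.sum((1.0 + R) * t ** 2) / 2.0
    a_post, b_post = a + m, b + S
    lam_bayes = b_post / (a_post - 1.0)   # posterior mean (squared-error loss)
    lo, hi = stats.invgamma.ppf([0.025, 0.975], a_post, scale=b_post)
    return lam_bayes, (lo, hi)

# toy progressively censored sample: 5 observed failures, removals R_i
t = [0.4, 0.9, 1.3, 1.8, 2.5]
R = [1, 0, 2, 0, 1]
lam, (lo, hi) = rayleigh_progressive_posterior(t, R)
```

The reliability function R(t) = exp(-t^2 / (2 lambda)) also has a closed-form posterior mean under this prior, via the moment generating function of the gamma-distributed precision 1/lambda.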




Inference in Spline-Based Models for Multiple Time-to-Event Data, with Applications to a Breast Cancer Prevention Trial

BIOMETRICS, Issue 4 2003
Kiros Berhane
Summary. As part of the National Surgical Adjuvant Breast and Bowel Project, a controlled clinical trial known as the Breast Cancer Prevention Trial (BCPT) was conducted to assess the effectiveness of tamoxifen as a preventive agent for breast cancer. In addition to the incidence of breast cancer, data were collected on several other, possibly adverse, outcomes, such as invasive endometrial cancer, ischemic heart disease, transient ischemic attack, deep vein thrombosis and/or pulmonary embolism. In this article, we present results from an illustrative analysis of the BCPT data, based on a new modeling technique, to assess the effectiveness of the drug tamoxifen as a preventive agent for breast cancer. We extended the flexible model of Gray (1994, Spline-based tests in survival analysis, Biometrics 50, 640–652) to allow inference on multiple time-to-event outcomes in the style of the marginal modeling setup of Wei, Lin, and Weissfeld (1989, Regression analysis of multivariate incomplete failure time data by modeling marginal distributions, Journal of the American Statistical Association 84, 1065–1073). This proposed model makes inference possible for multiple time-to-event data while allowing for greater flexibility in modeling the effects of prognostic factors with nonlinear exposure-response relationships. Results from simulation studies on the small-sample properties of the asymptotic tests will also be presented.


Analysis of Survival Data from Case–Control Family Studies

BIOMETRICS, Issue 3 2002
Joanna H. Shih
Summary. In case–control family studies with a survival endpoint, the age of onset of disease can be used to assess the familial aggregation of the disease and the relationship between the disease and genetic or environmental risk factors. Because of the retrospective nature of the case–control study, methods for analyzing prospectively collected correlated failure time data do not apply directly. In this article, we propose a semiparametric quasi-partial-likelihood approach to simultaneously estimate the effect of covariates on the age of onset and the association of ages of onset among family members that does not require specification of the baseline marginal distribution. We conducted a simulation study to evaluate the performance of the proposed approach and compare it with existing semiparametric approaches. Simulation results demonstrate that the proposed approach has better performance in terms of consistency and efficiency. We illustrate the methodology using a subset of data from the Washington Ashkenazi Study.


A Semiparametric Estimate of Treatment Effects with Censored Data

BIOMETRICS, Issue 3 2001
Ronghui Xu
Summary. A semiparametric estimate of an average regression effect with right-censored failure time data has recently been proposed under the Cox-type model where the regression effect β(t) is allowed to vary with time. In this article, we derive a simple algebraic relationship between this average regression effect and a measurement of group differences in K-sample transformation models when the random error belongs to the Gρ family of Harrington and Fleming (1982, Biometrika 69, 553–566), the latter being equivalent to the conditional regression effect in a gamma frailty model. The models considered here are suitable for the attenuating hazard ratios that often arise in practice. The results reveal an interesting connection among the above three classes of models as alternatives to the proportional hazards assumption and add to our understanding of the behavior of the partial likelihood estimate under nonproportional hazards. The algebraic relationship provides a simple estimator under the transformation model. We develop a variance estimator based on the empirical influence function that is much easier to compute than the previously suggested resampling methods. When there is truncation in the right tail of the failure times, we propose a method of bias correction to improve the coverage properties of the confidence intervals. The estimate, its estimated variance, and the bias correction term can all be calculated with minor modifications to standard software for proportional hazards regression.


Survival Analysis in Clinical Trials: Past Developments and Future Directions

BIOMETRICS, Issue 4 2000
Thomas R. Fleming
Summary. The field of survival analysis emerged in the 20th century and experienced tremendous growth during the latter half of the century. The developments in this field that have had the most profound impact on clinical trials are the Kaplan-Meier (1958, Journal of the American Statistical Association 53, 457–481) method for estimating the survival function, the log-rank statistic (Mantel, 1966, Cancer Chemotherapy Reports 50, 163–170) for comparing two survival distributions, and the Cox (1972, Journal of the Royal Statistical Society, Series B 34, 187–220) proportional hazards model for quantifying the effects of covariates on the survival time. The counting-process martingale theory pioneered by Aalen (1975, Statistical inference for a family of counting processes, Ph.D. dissertation, University of California, Berkeley) provides a unified framework for studying the small- and large-sample properties of survival analysis statistics. Significant progress has been achieved and further developments are expected in many other areas, including the accelerated failure time model, multivariate failure time data, interval-censored data, dependent censoring, dynamic treatment regimes and causal inference, joint modeling of failure time and longitudinal data, and Bayesian methods.
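Of the landmark methods listed above, the Kaplan-Meier product-limit estimator is compact enough to write out. At each distinct failure time the survival estimate is multiplied by one minus the observed failure fraction among subjects still at risk; censored subjects leave the risk set without contributing a factor. A minimal sketch on toy data:

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimate of the survival function.
    times: follow-up times; events: 1 = failure observed, 0 = censored.
    Returns (time, S(time)) pairs at the distinct failure times."""
    times, events = np.asarray(times, float), np.asarray(events, int)
    surv, s = [], 1.0
    for u in np.unique(times[events == 1]):   # distinct failure times
        at_risk = np.sum(times >= u)          # risk set just before u
        d = np.sum((times == u) & (events == 1))
        s *= 1.0 - d / at_risk
        surv.append((float(u), s))
    return surv

# the observation censored at t=3 is in the risk sets at t=1 and t=2
# but drops out before the failures at t=4 and t=5
est = kaplan_meier([1, 2, 3, 4, 5], [1, 1, 0, 1, 1])
```

On these five observations the estimate steps down to 0.8, 0.6, 0.3, and 0.0, showing how a censored subject shrinks later risk sets without itself counting as a failure.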