Hazard Functions
Kinds of Hazard Functions: Selected Abstracts

Kernel estimates of hazard functions for carcinoma data sets
ENVIRONMETRICS, Issue 3 2006, Ivana Horová
Abstract The present article focuses on kernel estimates of hazard functions and their derivatives. Our approach is based on the model introduced by Müller and Wang (1990). In order to estimate the hazard function effectively, an automatic procedure from Horová et al. (2002) is applied; the procedure chooses a bandwidth, a kernel and an order of kernel. As a by-product we propose a special procedure for estimating the optimal bandwidth. This is applied to the carcinoma data sets kindly provided by the Masaryk Memorial Cancer Institute in Brno. Attention is also paid to the points of the most rapid change of the hazard function. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Bayesian inference in a piecewise Weibull proportional hazards model with unknown change points
JOURNAL OF ANIMAL BREEDING AND GENETICS, Issue 4 2007, J. Casellas
Summary The main difference between parametric and non-parametric survival analyses lies in model flexibility. Parametric models have been suggested as preferable because of their lower programming needs, although they generally suffer from reduced flexibility when fitting field data. In this sense, parametric survival functions can be redefined as piecewise survival functions whose slopes change at given points, which substantially increases the flexibility of the parametric survival model. Unfortunately, we lack accurate methods to establish the required number of change points and their positions within the time space. In this study, a Weibull survival model with a piecewise baseline hazard function was developed, with change points included as unknown parameters in the model. Concretely, a Weibull log-normal animal frailty model was assumed, and it was solved with a Bayesian approach. The required fully conditional posterior distributions were derived.
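The piecewise baseline hazard idea can be made concrete with a short sketch. This is an illustration only, with piecewise-constant levels and invented change points rather than the authors' Weibull formulation; the function names are hypothetical:

```python
import math
from bisect import bisect_right

def piecewise_hazard(t, change_points, rates):
    """Hazard at time t for a piecewise-constant baseline.
    change_points: sorted interior change points tau_1 < ... < tau_k;
    rates: k + 1 hazard levels, one per interval (right-continuous at each tau)."""
    return rates[bisect_right(change_points, t)]

def cumulative_hazard(t, change_points, rates):
    """Integral of the piecewise-constant hazard from 0 to t."""
    H, prev = 0.0, 0.0
    for tau, lam in zip(change_points, rates):
        if t <= tau:
            return H + lam * (t - prev)
        H += lam * (tau - prev)
        prev = tau
    return H + rates[-1] * (t - prev)

def survival(t, change_points, rates):
    """Survival function S(t) = exp(-H(t)) implied by the piecewise hazard."""
    return math.exp(-cumulative_hazard(t, change_points, rates))
```

In the model described above, the change points themselves are unknown parameters, so a sampler would update them jointly with the levels; the sketch only evaluates a fixed configuration.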
During the sampling process, all the parameters in the model were updated using a Metropolis–Hastings step, with the exception of the genetic variance, which was updated with a standard Gibbs sampler. This methodology was tested on simulated data sets, each one analysed through several models with a different number of change points. The models were compared with the Deviance Information Criterion, with appealing results. Simulation results showed that the estimated marginal posterior distributions covered the true parameter values used in the simulation well and placed high density near them. Moreover, results showed that the piecewise baseline hazard function could appropriately fit survival data, as well as other smooth distributions, with a reduced number of change points. [source]

In Vivo Modulation of Post-Spike Excitability in Vasopressin Cells by κ-Opioid Receptor Activation
JOURNAL OF NEUROENDOCRINOLOGY, Issue 8 2000, C. H. Brown
Abstract An endogenous κ-opioid agonist reduces the duration of phasic bursts in vasopressin cells. Non-synaptic post-spike depolarizing after-potentials underlie activity during bursts by increasing post-spike excitability, and κ-receptor activation reduces depolarizing after-potential amplitude in vitro. To investigate the effects of κ-opioids on post-spike excitability in vivo, we analysed extracellular recordings of the spontaneous activity of identified supraoptic nucleus vasopressin cells in urethane-anaesthetized rats infused with Ringer's solution (n = 17) or the κ-agonist U50,488H (2.5 µg/h at 0.5 µl/h; n = 23) into the supraoptic nucleus over 5 days. We plotted the mean hazard function for the interspike interval distributions as a measure of the post-spike excitability of these cells. Following each spike, the probability of another spike firing in vasopressin cells recorded from U50,488H-infused nuclei was markedly reduced compared to Ringer's-treated vasopressin cells.
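The "mean hazard function for the interspike interval distributions" used here as an excitability measure can be sketched as a simple discrete-time hazard estimate; the function name and binning below are hypothetical, for illustration only:

```python
def interval_hazard(intervals, bin_width, n_bins):
    """Discrete hazard estimate for interspike-interval data:
    h[j] = (# intervals ending in bin j) / (# intervals lasting at least to bin j).
    A high early hazard means spikes tend to follow each other quickly, i.e.
    high post-spike excitability."""
    hazard = []
    for j in range(n_bins):
        lo = j * bin_width
        at_risk = sum(1 for x in intervals if x >= lo)
        ended = sum(1 for x in intervals if lo <= x < lo + bin_width)
        hazard.append(ended / at_risk if at_risk else 0.0)
    return hazard
```

Comparing such hazard curves between treatment groups, as the abstract does, shows where in the interval distribution the firing probability differs.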
To determine whether U50,488H could reduce post-spike excitability in cells that displayed spontaneous phasic activity, we infused U50,488H (50 µg/h at 1 µl/h, i.c.v.) for 1–12 h while recording vasopressin cell activity. Nine of 10 vasopressin cells were silenced by i.c.v. U50,488H 15 ± 5 min into the infusion. Six cells exhibited spontaneous phasic activity before U50,488H infusion, and recordings from three of these phasic cells were maintained until activity recovered; during U50,488H infusion, the activity of these three cells was irregular. Generation of the mean hazard function before and during U50,488H infusion revealed a reduction in post-spike excitability during U50,488H infusion. Thus, κ-receptor activation reduces post-spike excitability in vivo; this may reflect inhibition of depolarizing after-potentials and may thus underlie the reduction in burst duration of vasopressin cells caused by an endogenous κ-agonist in vivo. [source]

Proportional hazards estimate of the conditional survival function
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 4 2000, Ronghui Xu
We introduce a new estimator of the conditional survival function given some subset of the covariate values under a proportional hazards regression. The new estimate does not require estimating the base-line cumulative hazard function. An estimate of the variance is given and is easy to compute, involving only those quantities that are routinely calculated in a Cox model analysis. The asymptotic normality of the new estimate is shown by using a central limit theorem for Kaplan–Meier integrals. We indicate the straightforward extension of the estimation procedure under models with multiplicative relative risks, including non-proportional hazards, and to stratified and frailty models.
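The asymptotics above rest on Kaplan–Meier integrals. For orientation, the ordinary Kaplan–Meier survival estimator (not the authors' conditional estimator) can be sketched as:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate from right-censored data.
    times: observed times; events: 1 = failure, 0 = censored.
    Returns (time, S(time)) pairs at each distinct failure time."""
    data = sorted(zip(times, events))
    n = len(data)
    s, out, i = 1.0, [], 0
    while i < n:
        t = data[i][0]
        at_risk = n - i                      # subjects with observed time >= t
        deaths = sum(e for x, e in data if x == t)
        while i < n and data[i][0] == t:     # skip all ties at t
            i += 1
        if deaths:
            s *= 1.0 - deaths / at_risk      # product-limit step
            out.append((t, s))
    return out
```

Censored observations (events = 0) shrink the risk set without producing a step, which is what distinguishes the product-limit estimate from a naive empirical survival curve.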
The estimator is applied to a gastric cancer study where it is of interest to predict patients' survival based only on measurements obtained before surgery, the time at which the most important prognostic variable, stage, becomes known. [source]

Warranty costs: An age-dependent failure/repair model
NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 7 2004, Boyan Dimitrov
Abstract An age-dependent repair model is proposed. The notion of the "calendar age" of the product and the degree of repair are used to define the virtual age of the product. The virtual failure rate function and the virtual hazard function related to the lifetime of the product are discussed. Under a nonhomogeneous Poisson process scenario, the expected warranty costs for repairable products associated with linear pro-rata, nonrenewing free replacement and renewing free replacement warranties are evaluated. The results are illustrated by numerical and graphical examples. © 2004 Wiley Periodicals, Inc. Naval Research Logistics, 2004. [source]

Semiparametric Estimation of a Duration Model
OXFORD BULLETIN OF ECONOMICS & STATISTICS, Issue 5 2001, A. Alonso Anton
Within the framework of the proportional hazard model proposed in Cox (1972), Han and Hausman (1990) treat the logarithm of the integrated baseline hazard function as constant in each time period. We, however, propose an alternative semiparametric estimator of the parameters of the covariate part. The estimator is semiparametric in that no prespecified functional form is needed for the error terms (or a certain convolution of them). This estimator, proposed in Lewbel (2000) in another context, has at least four advantages: the distribution of the latent variable error is unknown and may be related to the regressors; it takes into account censored observations; it allows for heterogeneity of unknown form; and it is quite easy to implement, since the estimator does not require numerical searches.
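The warranty abstract above builds on the virtual-age idea: repairs rejuvenate the unit, so its effective age grows more slowly than calendar age. A minimal sketch, assuming a Kijima Type I-style rule (an assumed concretisation; the abstract does not pin down the rule, and the names are hypothetical):

```python
def virtual_ages(repair_times, degree):
    """Virtual age of a unit just after each repair, assuming each sojourn ages
    the unit by degree * elapsed calendar time (a Kijima Type I-style rule).
    degree = 1 is minimal repair ('as bad as old'); degree = 0 is perfect repair."""
    v, out, prev = 0.0, [], 0.0
    for t in repair_times:
        v += degree * (t - prev)
        out.append(v)
        prev = t
    return out
```

The virtual failure rate is then the baseline failure rate evaluated at this virtual age rather than at calendar age, which is what makes the warranty-cost expectations age-dependent.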
Using the Spanish Labour Force Survey, we compare empirically the results of estimating several alternative models, based on the estimator proposed in Han and Hausman (1990) and on our semiparametric estimator. [source]

NONPARAMETRIC ESTIMATION OF CONDITIONAL CUMULATIVE HAZARDS FOR MISSING POPULATION MARKS
AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 1 2010, Dipankar Bandyopadhyay
Summary A new function for the competing risks model, the conditional cumulative hazard function, is introduced, from which the conditional distribution of failure times of individuals failing due to cause j can be studied. The standard Nelson–Aalen estimator is not appropriate in this setting, as population membership (mark) information may be missing for some individuals owing to random right-censoring. We propose the use of imputed population marks for the censored individuals through fractional risk sets. Some asymptotic properties, including uniform strong consistency, are established. We study the practical performance of this estimator through simulation studies and apply it to a real data set for illustration. [source]

THE USE OF AGGREGATE DATA TO ESTIMATE GOMPERTZ-TYPE OLD-AGE MORTALITY IN HETEROGENEOUS POPULATIONS
AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 4 2009, Christopher R. Heathcote
Summary We consider two related aspects of the study of old-age mortality. One is the estimation of a parameterized hazard function from grouped data, and the other is its possible deceleration at extreme old age owing to heterogeneity described by a mixture of distinct sub-populations. The first is treated by half of a logistic transform, which is known to be free of discretization bias at older ages and preserves the increasing slope of the log hazard in the Gompertz case. It is assumed that data are available in the form published by official statistical agencies, that is, as aggregated frequencies in discrete time.
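The standard Nelson–Aalen estimator, which the missing-marks abstract above takes as its starting point (and finds inappropriate when marks are missing), has this familiar form for fully observed right-censored data:

```python
def nelson_aalen(times, events):
    """Nelson-Aalen estimate of the cumulative hazard from right-censored data.
    times: observed times; events: 1 = failure, 0 = censored.
    Returns (time, H(time)) pairs at each distinct failure time."""
    data = sorted(zip(times, events))
    n = len(data)
    H, out, i = 0.0, [], 0
    while i < n:
        t = data[i][0]
        at_risk = n - i                      # risk set size just before t
        deaths = sum(e for x, e in data if x == t)
        while i < n and data[i][0] == t:
            i += 1
        if deaths:
            H += deaths / at_risk            # increment d / Y at each failure time
            out.append((t, H))
    return out
```

With missing cause-of-failure marks the numerator for a cause-specific version is unknown for some subjects, which is what motivates the fractional-risk-set imputation the abstract proposes.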
Local polynomial modelling and weighted least squares are applied to cause-of-death mortality counts. The second, related, problem is to discover what conditions are necessary for population mortality to exhibit deceleration for a mixture of Gompertz sub-populations. The general problem remains open but, in the case of three groups, we demonstrate that heterogeneity may be such that a population can show decelerating mortality and then return to a Gompertz-like increase at a later age. This implies that there are situations, depending on the extent of heterogeneity, in which there is at least one age interval in which the hazard function decreases before increasing again. [source]

Multilevel Mixture Cure Models with Random Effects
BIOMETRICAL JOURNAL, Issue 3 2009, Xin Lai
Abstract This paper extends the multilevel survival model by allowing for a cured fraction. Random effects induced by the multilevel clustering structure are specified in the linear predictors of both the hazard function and the cured-probability parts. Adopting the generalized linear mixed model (GLMM) approach to formulate the problem, parameter estimation is achieved by maximizing a best linear unbiased prediction (BLUP) type log-likelihood at the initial step of estimation, and is then extended to obtain residual maximum likelihood (REML) estimators of the variance components. The proposed multilevel mixture cure model is applied to analyse (i) child survival study data with multilevel clustering and (ii) chronic granulomatous disease (CGD) data on recurrent infections as illustrations. A simulation study is carried out to evaluate the performance of the REML estimators and assess the accuracy of the standard error estimates. [source]

Joint Modelling of Repeated Transitions in Follow-up Data: A Case Study on Breast Cancer Data
BIOMETRICAL JOURNAL, Issue 3 2005, B. Genser
Abstract In longitudinal studies where time to a final event is the ultimate outcome, information is often available about intermediate events the individuals may experience during the observation period. Even though many extensions of the Cox proportional hazards model have been proposed to model such multivariate time-to-event data, these approaches are still rarely applied to real datasets. The aim of this paper is to illustrate the application of extended Cox models for multiple time-to-event data and to show their implementation in popular statistical software packages. We demonstrate a systematic way of jointly modelling similar or repeated transitions in follow-up data by analysing an event-history dataset consisting of 270 breast cancer patients who were followed up for different clinical events during treatment in metastatic disease. First, we show how this methodology can also be applied to non-Markovian stochastic processes by representing these processes as "conditional" Markov processes. Secondly, we compare the application of different Cox-related approaches to the breast cancer data by varying their key model components (i.e. analysis time scale, risk set and baseline hazard function). Our study showed that extended Cox models are a powerful tool for analysing complex event-history datasets, since the approach can address many dynamic data features such as multiple time scales, dynamic risk sets, time-varying covariates, transition-by-covariate interactions, autoregressive dependence or intra-subject correlation. (© 2005 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]

A Frailty-Model-Based Approach to Estimating the Age-Dependent Penetrance Function of Candidate Genes Using Population-Based Case-Control Study Designs: An Application to Data on the BRCA1 Gene
BIOMETRICS, Issue 4 2009, Lu Chen
Summary The population-based case–control study design is perhaps the most commonly used design for investigating the genetic and environmental contributions to disease risk in epidemiological studies. Ages at onset and disease status of family members are routinely and systematically collected from the participants in this design. Considering age at onset in relatives as an outcome, this article focuses on using the family history information to obtain the hazard function, i.e., the age-dependent penetrance function, of candidate genes from case–control studies. A frailty-model-based approach is proposed to accommodate the shared risk among family members that is not accounted for by observed risk factors. This approach is further extended to accommodate missing genotypes in family members and a two-phase case–control sampling design. Simulation results show that the proposed method performs well in realistic settings. Finally, a population-based two-phase case–control breast cancer study of the BRCA1 gene is used to illustrate the method. [source]

Nonparametric Association Analysis of Exchangeable Clustered Competing Risks Data
BIOMETRICS, Issue 2 2009, Yu Cheng
Summary The work is motivated by the Cache County Study of Aging, a population-based study in Utah, in which sibship associations in dementia onset are of interest. Complications arise because only a fraction of the population ever develops dementia, with the majority dying without dementia. The application of standard dependence analyses for independently right-censored data may not be appropriate with such multivariate competing risks data, where death may violate the independent censoring assumption.
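The frailty-model-based penetrance approach above averages an individual-level hazard over an unobserved shared risk term. As a sketch, assuming a gamma-distributed frailty with mean 1 and variance theta (one common choice; the abstract does not commit to a frailty distribution):

```python
import math

def marginal_survival(H0_t, theta):
    """Population-averaged survival at a time with baseline cumulative hazard
    H0_t, under an assumed gamma frailty with mean 1 and variance theta.
    theta = 0 recovers the no-frailty survival exp(-H0_t)."""
    if theta == 0.0:
        return math.exp(-H0_t)
    return (1.0 + theta * H0_t) ** (-1.0 / theta)
```

One minus this marginal survival, computed along the age axis for carriers, is an age-dependent penetrance curve; larger theta flattens the population hazard even when the individual hazard keeps rising.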
Nonparametric estimators of the bivariate cumulative hazard function and the bivariate cumulative incidence function are adapted from the simple nonexchangeable bivariate setup to exchangeable clustered data, as needed with the large sibships in the Cache County Study. Time-dependent association measures are evaluated using these estimators. Large-sample inferences are studied rigorously using empirical process techniques. The practical utility of the methodology is demonstrated with realistic samples both via simulations and via an application to the Cache County Study, where dementia onset clustering among siblings varies strongly by age. [source]

Polynomial Spline Estimation and Inference of Proportional Hazards Regression Models with Flexible Relative Risk Form
BIOMETRICS, Issue 3 2006, Jianhua Z. Huang
Summary The Cox proportional hazards model usually assumes an exponential form for the dependence of the hazard function on covariates. In practice, however, this assumption may be violated and other relative risk forms may be more appropriate. In this article, we consider the proportional hazards model with an unknown relative risk form. Issues in model interpretation are addressed. We propose a method to estimate the relative risk form and the regression parameters simultaneously, by first approximating the logarithm of the relative risk form by a spline and then employing maximum partial likelihood estimation. An iterative alternating optimization procedure is developed for efficient implementation. Statistical inference for the regression coefficients and the relative risk form, based on parametric asymptotic theory, is discussed. The proposed methods are illustrated using simulation and an application to the Veterans Administration lung cancer data.
[source]

Family-based association test for time-to-onset data with time-dependent differences between the hazard functions
GENETIC EPIDEMIOLOGY, Issue 2 2006, Hongyu Jiang
Abstract In genetic association studies, the differences between the hazard functions for the individual genotypes are often time-dependent. We address the non-proportional hazards data by using the weighted logrank approach of Fleming and Harrington [1981]: Commun Stat-Theor M 10:763–794. We introduce a weighted FBAT-Logrank whose weights are based on a non-parametric estimator for the genetic marker distribution function under the alternative hypothesis. We show that the computation of the marker distribution under the alternative does not bias the significance level of any subsequently computed FBAT statistic. Hence, we use the estimated marker distribution to select the Fleming-Harrington weights so that the power of the weighted FBAT-Logrank test is maximized. In simulation studies and applications to an asthma study, we illustrate the practical relevance of the new methodology.
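A two-sample Fleming–Harrington weighted logrank statistic of the kind underlying this test can be sketched as follows. This is a generic illustration, not the family-based FBAT version, and the function name is hypothetical:

```python
def fh_weighted_logrank(times, events, groups, rho, gamma):
    """Two-sample Fleming-Harrington G(rho, gamma) weighted logrank statistic.
    Returns (U, V): weighted observed-minus-expected sum and its variance;
    U / sqrt(V) is approximately standard normal under the null.
    Weights w(t) = S(t-)**rho * (1 - S(t-))**gamma use the left-continuous
    pooled Kaplan-Meier estimate, so rho > 0 stresses early differences and
    gamma > 0 stresses late ones."""
    data = sorted(zip(times, events, groups))
    n = len(data)
    s_left = 1.0          # pooled KM just before the current failure time
    U = V = 0.0
    i = 0
    while i < n:
        t = data[i][0]
        risk = data[i:]   # subjects still at risk at time t
        Y = len(risk)
        Y1 = sum(1 for _, _, g in risk if g == 1)
        d = sum(e for x, e, _ in data if x == t)
        d1 = sum(e for x, e, g in data if x == t and g == 1)
        w = (s_left ** rho) * ((1.0 - s_left) ** gamma)
        if d:
            U += w * (d1 - d * Y1 / Y)
            if Y > 1:
                V += w * w * d * (Y1 / Y) * (1.0 - Y1 / Y) * (Y - d) / (Y - 1)
            s_left *= 1.0 - d / Y
        while i < n and data[i][0] == t:
            i += 1
    return U, V
```

Setting rho = gamma = 0 recovers the ordinary logrank test; the abstract's contribution is choosing (rho, gamma) from an estimated marker distribution so that power is maximized without biasing the test level.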
In addition to power increases of 100% over the original FBAT-Logrank test, we also gain insight into the age at which a genotype exerts the greatest influence on disease risk. Genet. Epidemiol. 2006. © 2006 Wiley-Liss, Inc. [source]

Selective harvest of sooty shearwater chicks: effects on population dynamics and sustainability
JOURNAL OF ANIMAL ECOLOGY, Issue 4 2005, CHRISTINE M. HUNTER
Summary
1. Selectivity of harvest influences harvest sustainability because individuals with different characteristics contribute differently to population growth. We investigate the effects of selection based on chick weight on a traditional harvest of the sooty shearwater Puffinus griseus by Rakiura Maori in New Zealand.
2. We develop a periodic stage-structured matrix population model and incorporate seasonal harvest of three weight classes of chicks. Intensity and selectivity of harvest are defined in terms of weight-specific hazard functions.
3. We investigate the effect of harvest intensity and selectivity on the population growth rate, λ, and the chick exploitation rate, E. We also consider the interaction of chick harvest and adult mortality.
4. λ decreases and E increases as harvest intensity increases. At low harvest intensities, selection has little effect on λ. At high harvest intensities, λ increases as selectivity increases because of the non-linear relationship between harvest intensity and the probability of being harvested.
5. λ is determined almost completely by E, irrespective of the combination of harvest selectivity and intensity producing E. This is true for both general patterns of selectivity and specific patterns estimated from empirical data.
6. The elasticities of λ, the net reproductive rate and the generation time are unaffected by selectivity and show only small responses to harvest intensity.
7. Adult sooty shearwaters are killed as bycatch in long-line and driftnet fisheries.
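The population growth rate λ in a stage-structured matrix model is the dominant eigenvalue of the projection matrix, which can be found by power iteration. The two-stage matrix below is invented for illustration and is not the sooty shearwater model:

```python
def growth_rate(A, iters=200):
    """Asymptotic population growth rate lambda: the dominant eigenvalue of a
    non-negative projection matrix A, found by power iteration with max-norm
    rescaling (the scale factor converges to lambda)."""
    n = len(A)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam

# Toy two-stage matrix: adult fertility 1.5, juvenile survival 0.5,
# adult survival 0.4 (hypothetical numbers).
A = [[0.0, 1.5],
     [0.5, 0.4]]
```

λ > 1 means the population grows; a harvest regime is unsustainable when the induced reduction pushes λ below 1, which is the comparison the abstract makes between chick harvest and adult bycatch.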
Such mortality of adults has an effect on λ about 10-fold greater than an equivalent level of chick harvest.
8. The sustainability of any combination of chick harvest and adult mortality depends on the resulting reduction in λ. We explore these results in relation to indices of sustainability, particularly the United States Marine Mammal Protection Act (MMPA) standards. [source]

Submarket Dynamics of Time to Sale
REAL ESTATE ECONOMICS, Issue 3 2006, Gwilym Pryce
We argue that the rush to apply multiple regression estimation to time on the market (TOM) durations may have led to important details and idiosyncrasies in local housing market dynamics being overlooked. What is needed is a more careful examination of the fundamental properties of time-to-sale data. The approach promoted and presented here, therefore, provides an examination of housing sale dynamics using a step-by-step approach. We present three hypotheses about TOM: (i) there is nonmonotonic duration dependence in the hazard of sale, (ii) the hazard curve varies both over time and across intra-urban areas, providing evidence of the existence of submarkets, and (iii) institutional idiosyncrasies can have a profound effect on the shape and position of the hazard curve. We apply life tables, kernel-smoothed hazard functions and likelihood ratio tests for homogeneity to a large Scottish data set to investigate these hypotheses. Our findings have important implications for TOM analysis. [source]

Semiparametric estimation of single-index hazard functions without proportional hazards
THE ECONOMETRICS JOURNAL, Issue 1 2006, Tue Gørgens
Summary. This research develops a semiparametric kernel-based estimator of hazard functions which does not assume proportional hazards. The maintained assumption is that the hazard functions depend on regressors only through a linear index.
The estimator permits both discrete and continuous regressors as well as both discrete and continuous failure times, and it can be applied to right-censored data and to multiple-risks data, in which case the hazard functions are risk-specific. The estimator is root-n consistent and asymptotically normally distributed, and it performs well in Monte Carlo experiments. [source]

Non-monotonic hazard functions and the autoregressive conditional duration model
THE ECONOMETRICS JOURNAL, Issue 1 2000, Joachim Grammig
This paper shows that the monotonicity of the conditional hazard in traditional ACD models is both econometrically important and empirically invalid. To counter this problem we introduce a more flexible parametric model which is easy to fit and performs well both in simulation studies and in practice. In an empirical application to NYSE price duration processes, we show that non-monotonic conditional hazard functions are indicated for all stocks. Recently proposed specification tests for financial duration models clearly reject the standard ACD models, whereas the results for the new model are quite favorable. [source]

On a discrimination problem for a class of stochastic processes with ordered first-passage times
APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 2 2001, Antonio Di Crescenzo
Abstract Consider the Bayes problem in which one has to discriminate whether the random unknown initial state of a stochastic process is distributed according to either of two preassigned distributions, on the basis of the observation of the first-passage time of the process through 0. For processes whose first-passage times to state 0 are increasing in the initial state according to the likelihood ratio order, this problem is solved by determining the Bayes decision function and the corresponding Bayes error. The special case of fixed initial values, including a family of first-passage times with proportional reversed hazard functions, is then studied.
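The proportional reversed hazards family mentioned here can be checked numerically: if F1 = F0**theta, then the reversed hazard f/F scales exactly by theta. A small sketch with a unit-exponential base distribution (an arbitrary illustrative choice):

```python
import math

def reversed_hazard(f, F, t):
    """Reversed hazard rate r(t) = f(t) / F(t): the hazard analogue built from
    the distribution function rather than the survival function."""
    return f(t) / F(t)

# Base distribution: unit exponential (illustrative choice).
F0 = lambda t: 1.0 - math.exp(-t)
f0 = lambda t: math.exp(-t)

theta = 2.0
F1 = lambda t: F0(t) ** theta                          # proportional reversed hazards
f1 = lambda t: theta * F0(t) ** (theta - 1.0) * f0(t)  # density by the chain rule
```

Since r1(t) = theta * F0(t)**(theta - 1) * f0(t) / F0(t)**theta = theta * r0(t), the ratio of reversed hazards is constant in t, which is exactly the ordering structure the discrimination problem exploits.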
Finally, various applications to birth-and-death and to diffusion processes are discussed. Copyright © 2001 John Wiley & Sons, Ltd. [source]

Parameter Estimation for Partially Complete Time and Type of Failure Data
BIOMETRICAL JOURNAL, Issue 2 2004, Debasis Kundu
Abstract The theory of competing risks has been developed to assess a specific risk in the presence of other risk factors. In this paper we consider the parametric estimation of different failure modes under partially complete time and type of failure data, using latent failure times and cause-specific hazard function models. Uniformly minimum variance unbiased estimators and maximum likelihood estimators are obtained when the latent failure times and cause-specific hazard functions are exponentially distributed. We also consider the case when they follow Weibull distributions. One data set is used to illustrate the proposed techniques. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]

A Mixture Point Process for Repeated Failure Times, with an Application to a Recurrent Disease
BIOMETRICAL JOURNAL, Issue 7 2003, O. Pons
Abstract We present a model that describes the distribution of recurrence times of a disease in the presence of covariate effects. After a first occurrence of the disease in an individual, the time intervals between successive cases are supposed to be independent and to follow a mixture of two distributions according to the outcome of the previous treatment. Both sub-distributions of the model and the mixture proportion are allowed to involve covariates. Parametric inference is considered, and we illustrate the methods with data on a recurrent disease and with simulations, using piecewise constant baseline hazard functions. [source]

Cumulative Damage Survival Models for the Randomized Complete Block Design
BIOMETRICAL JOURNAL, Issue 2 2003, Gavin G. Gregory
Abstract A continuous-time discrete-state cumulative damage process {X(t), t ≥ 0} is considered, based on a non-homogeneous Poisson hit-count process and a discrete distribution of damage per hit, which can be negative binomial, Neyman type A, Polya-Aeppli or Lagrangian Poisson. Intensity functions considered for the Poisson process comprise a flexible three-parameter family. The survival function is S(t) = P(X(t) < L), where L is fixed. Individual variation is accounted for within the construction for the initial damage distribution {P(X(0) = x) | x = 0, 1, ...}. This distribution has an essential cut-off before x = L, and the distribution of L − X(0) may be considered a tolerance distribution. A multivariate extension appropriate for the randomized complete block design is developed by constructing dependence in the initial damage distributions. Our multivariate model is applied (via maximum likelihood) to litter-matched tumorigenesis data for rats. The litter effect accounts for 5.9 percent of the variance of the individual effect. Cumulative damage hazard functions are compared to nonparametric hazard functions and to hazard functions obtained from the PVF-Weibull frailty model. The cumulative damage model has greater dimensionality for interpretation compared to other models, owing principally to the intensity function part of the model. [source]

Flexible Maximum Likelihood Methods for Bivariate Proportional Hazards Models
BIOMETRICS, Issue 4 2003, Wenqing He
Summary. This article presents methodology for multivariate proportional hazards (PH) regression models. The methods employ flexible piecewise constant or spline specifications for baseline hazard functions in either marginal or conditional PH models, along with assumptions about the association among lifetimes.
Because the models are parametric, ordinary maximum likelihood can be applied; it deals easily with such data features as interval censoring or sequentially observed lifetimes, unlike existing semiparametric methods. A bivariate Clayton model (1978, Biometrika 65, 141–151) is used to illustrate the approach taken. Because a parametric assumption about association is made, efficiency and robustness comparisons are made between estimation based on the bivariate Clayton model and "working independence" methods that specify only marginal distributions for each lifetime variable. [source]
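The bivariate Clayton model referenced above links two marginal survival functions through a single association parameter. A sketch in the copula parametrisation (one of several equivalent conventions; here theta > 0, with Kendall's tau equal to theta / (theta + 2)):

```python
def clayton_joint_survival(s1, s2, theta):
    """Joint survival probability from marginal survivals s1 = S1(t1) and
    s2 = S2(t2) under a Clayton association model (copula parametrisation):
    S(t1, t2) = (s1**-theta + s2**-theta - 1)**(-1/theta), theta > 0.
    theta -> 0 approaches independence, s1 * s2."""
    return (s1 ** -theta + s2 ** -theta - 1.0) ** (-1.0 / theta)

def kendalls_tau(theta):
    """Kendall's tau implied by the Clayton dependence parameter."""
    return theta / (theta + 2.0)
```

Separating the marginals (where the piecewise-constant or spline baseline hazards live) from the association parameter is what lets the paper compare full-model estimation against "working independence" fits of the marginals alone.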