Standard Statistical Software

Selected Abstracts


Fitting Semiparametric Additive Hazards Models using Standard Statistical Software

BIOMETRICAL JOURNAL, Issue 5 2007
Douglas E. Schaubel
Abstract: The Cox proportional hazards model has become the standard in biomedical studies, particularly for settings in which the estimation of covariate effects (as opposed to prediction) is the primary objective. In spite of the obvious flexibility of this approach and its wide applicability, the model is not usually chosen for its fit to the data, but by convention and for reasons of convenience. It is quite possible that the covariates add to, rather than multiply, the baseline hazard, making an additive hazards model a more suitable choice. Typically, proportionality is assumed, with the potential for additive covariate effects not evaluated or even seriously considered. Contributing to this phenomenon is the fact that many popular software packages (e.g., SAS, S-PLUS/R) have standard procedures to fit the Cox model (e.g., proc phreg, coxph), but as yet no analogous procedures to fit its additive analog, the Lin and Ying (1994) semiparametric additive hazards model. In this article, we establish the connections between the Lin and Ying (1994) model and both Cox and least squares regression. We demonstrate how SAS's phreg and reg procedures may be used to fit the additive hazards model, after some straightforward data manipulations. We then apply the additive hazards model to examine the relationship between Model for End-stage Liver Disease (MELD) score and mortality among patients wait-listed for liver transplantation.
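
The Lin and Ying (1994) estimator referenced above has a closed form, which is what makes the data-manipulation tricks described in the abstract possible. The article itself works through SAS's phreg and reg procedures; as a language-neutral illustration, the following is a minimal NumPy sketch of the closed-form estimator for right-censored data with time-fixed covariates (the function name and interface are invented here, not taken from the article).

```python
import numpy as np

def lin_ying_additive(time, event, Z):
    """Closed-form Lin & Ying (1994) estimator for the additive hazards
    model h(t | Z) = h0(t) + beta'Z, assuming right-censored data and
    time-fixed covariates (a simplification of the general model)."""
    time = np.asarray(time, dtype=float)      # observed follow-up times
    event = np.asarray(event, dtype=bool)     # True = event, False = censored
    Z = np.asarray(Z, dtype=float)            # (n, p) covariate matrix
    n, p = Z.shape

    grid = np.concatenate(([0.0], np.unique(time)))
    A = np.zeros((p, p))    # integral of the risk-set covariate "covariance"
    b = np.zeros(p)         # sum of risk-set-centered covariates at event times

    for k in range(1, len(grid)):
        t = grid[k]
        at_risk = time >= t                   # risk set on (grid[k-1], grid[k]]
        Zbar = Z[at_risk].mean(axis=0)        # risk-set average of covariates
        Zc = Z[at_risk] - Zbar

        # A accumulates the integral of sum_i Y_i(t)(Z_i - Zbar(t))(Z_i - Zbar(t))'
        A += (t - grid[k - 1]) * (Zc.T @ Zc)

        # b accumulates sum over events at time t of (Z_i - Zbar(t))
        dying = (time == t) & event
        if dying.any():
            b += (Z[dying] - Zbar).sum(axis=0)

    return np.linalg.solve(A, b)              # beta_hat

# Toy usage: larger covariate values push the hazard up additively.
rng = np.random.default_rng(0)
Z = rng.normal(size=(500, 2))
T = rng.exponential(1.0 / np.clip(0.5 + Z @ np.array([0.3, 0.1]), 0.05, None))
C = rng.uniform(0.0, 4.0, 500)
print(lin_ying_additive(np.minimum(T, C), T <= C, Z))
```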


Estimating numbers of infectious units from serial dilution assays

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 1 2006
Nigel Stallard
Summary: The paper concerns the design and analysis of serial dilution assays to estimate the infectivity of a sample of tissue when it is assumed that the sample contains a finite number of indivisible infectious units such that a subsample will be infectious if it contains one or more of these units. The aim of the study is to estimate the number of infectious units in the original sample. The standard approach to the analysis of data from such a study is based on the assumption of independence of aliquots both at the same dilution level and at different dilution levels, so that the numbers of infectious units in the aliquots follow independent Poisson distributions. An alternative approach is based on calculation of the expected value of the total number of samples tested that are not infectious. We derive the likelihood for the data on the basis of the discrete number of infectious units, enabling calculation of the maximum likelihood estimate and likelihood-based confidence intervals. We use the exact probabilities that are obtained to compare the maximum likelihood estimate with those given by the other methods in terms of bias and standard error, and to compare the coverage of the confidence intervals. We show that the methods have very similar properties and conclude that, for practical use, the method based on the Poisson assumption is to be recommended, since it can be implemented by using standard statistical software. Finally we consider the design of serial dilution assays, concluding that it is important that neither the dilution factor nor the number of samples that remain untested should be too large.
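
The Poisson-assumption analysis that the authors end up recommending can indeed be run with very little code: under independence, an aliquot receiving a relative dose v of the original sample is infectious with probability 1 - exp(-lambda * v), so the dilution data give a simple binomial likelihood in lambda. The sketch below (Python/SciPy; the function name and the toy dilution series are illustrative, not taken from the paper) maximizes that likelihood directly; the paper's exact discrete-unit likelihood and its confidence intervals are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def poisson_dilution_mle(dose, n_tested, n_infected):
    """MLE of lambda, the expected number of infectious units per unit of
    the undiluted sample, under the Poisson assumption: an aliquot carrying
    a relative dose v is infectious with probability 1 - exp(-lambda * v)."""
    dose = np.asarray(dose, dtype=float)
    n_tested = np.asarray(n_tested, dtype=float)
    n_infected = np.asarray(n_infected, dtype=float)

    def neg_loglik(log_lam):
        lam = np.exp(log_lam)                   # optimize on the log scale
        p = 1.0 - np.exp(-lam * dose)           # P(aliquot is infectious)
        p = np.clip(p, 1e-12, 1.0 - 1e-12)      # numerical safety
        return -(n_infected * np.log(p)
                 + (n_tested - n_infected) * np.log(1.0 - p)).sum()

    res = minimize_scalar(neg_loglik, bounds=(-15.0, 15.0), method="bounded")
    return np.exp(res.x)

# Illustrative (made-up) tenfold dilution series, six aliquots per level.
print(poisson_dilution_mle(dose=[1, 0.1, 0.01, 0.001],
                           n_tested=[6, 6, 6, 6],
                           n_infected=[6, 5, 2, 0]))
```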


On Estimating the Relationship between Longitudinal Measurements and Time-to-Event Data Using a Simple Two-Stage Procedure

BIOMETRICS, Issue 3 2010
Paul S. Albert
Summary: Ye, Lin, and Taylor (2008, Biometrics 64, 1238-1246) proposed a joint model for longitudinal measurements and time-to-event data in which the longitudinal measurements are modeled with a semiparametric mixed model to allow for the complex patterns in longitudinal biomarker data. They proposed a two-stage regression calibration approach that is simpler to implement than a joint modeling approach. In the first stage of their approach, the mixed model is fit without regard to the time-to-event data. In the second stage, the posterior expectations of an individual's random effects from the mixed model are included as covariates in a Cox model. Although Ye et al. (2008) acknowledged that their regression calibration approach may be biased due to informative dropout and measurement error, they argued that the bias is small relative to alternative methods. In this article, we show that this bias may be substantial. We show how to alleviate much of this bias with an alternative regression calibration approach that can be applied for both discrete and continuous time-to-event data. Through simulations, the proposed approach is shown to have substantially less bias than the regression calibration approach proposed by Ye et al. (2008). In agreement with the methodology proposed by Ye et al. (2008), an advantage of our proposed approach over joint modeling is that it can be implemented with standard statistical software and does not require complex estimation techniques.
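
For readers who want to see the general shape of such a two-stage regression calibration analysis with standard software, here is a minimal Python sketch using statsmodels (linear mixed model) and lifelines (Cox model). It implements the generic two-stage idea described above (empirical Bayes estimates of subject-specific random effects carried into a Cox model) on simulated toy data; it is not the bias-reducing recalibration proposed in the article, and all variable names and the data-generating model are invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n, visit_times = 200, np.linspace(0.0, 2.0, 5)

# Simulate toy data: a biomarker with subject-specific intercept/slope,
# and an event time whose hazard depends on the random slope.
b0 = rng.normal(0.0, 1.0, n)
b1 = rng.normal(0.0, 0.5, n)
long_rows = [(i, t, 1.0 + b0[i] + (0.5 + b1[i]) * t + rng.normal(0.0, 0.3))
             for i in range(n) for t in visit_times]
long_df = pd.DataFrame(long_rows, columns=["id", "time", "biomarker"])

T = rng.exponential(1.0 / np.exp(0.1 + 0.8 * b1))   # latent event times
C = rng.uniform(0.5, 3.0, n)                         # censoring times
surv_df = pd.DataFrame({"id": np.arange(n),
                        "T": np.minimum(T, C),
                        "delta": (T <= C).astype(int)})

# Stage 1: linear mixed model for the longitudinal biomarker.
mm_fit = smf.mixedlm("biomarker ~ time", data=long_df,
                     groups=long_df["id"], re_formula="~time").fit()

# Empirical Bayes (posterior mean) estimates of each subject's random
# intercept and slope, one row per subject.
re_df = pd.DataFrame(mm_fit.random_effects).T
re_df.columns = ["b_intercept", "b_slope"]
re_df["id"] = re_df.index.astype(int)

# Stage 2: Cox model with the estimated random effects as covariates.
cox_df = surv_df.merge(re_df, on="id")
cph = CoxPHFitter()
cph.fit(cox_df[["T", "delta", "b_intercept", "b_slope"]],
        duration_col="T", event_col="delta")
cph.print_summary()
```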


A Latent-Class Mixture Model for Incomplete Longitudinal Gaussian Data

BIOMETRICS, Issue 1 2008
Caroline Beunckens
Summary: In the analysis of incomplete longitudinal clinical trial data, there has been a shift away from simple methods that are valid only if the data are missing completely at random, towards more principled ignorable analyses, which are valid under the less restrictive missing at random assumption. The availability of the necessary standard statistical software nowadays allows for such analyses in practice. While the possibility of data missing not at random (MNAR) cannot be ruled out, it is argued that analyses valid under MNAR are not well suited for the primary analysis in clinical trials. Rather than either forgetting about or blindly shifting to an MNAR framework, the optimal place for MNAR analyses is within a sensitivity-analysis context. One such route for sensitivity analysis is to consider, next to selection models, pattern-mixture models or shared-parameter models. The latter can also be extended to a latent-class mixture model, the approach taken in this article. The performance of the resulting flexible model is assessed through simulations and the model is applied to data from a depression trial.
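
The opening point of this abstract, that ignorable analyses valid under missing at random (MAR) are now routine with standard software, amounts in practice to fitting a likelihood-based longitudinal model to all available observations, with no imputation and no restriction to completers. A minimal Python/statsmodels illustration of that point is sketched below on simulated toy trial data (the data-generating mechanism, dropout threshold, and variable names are invented); the latent-class mixture model developed in the article itself is not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n, visit_times = 120, [0, 1, 2, 3]

rows = []
for i in range(n):
    arm = i % 2                          # 0 = control, 1 = active (toy trial)
    bi = rng.normal(0.0, 1.0)            # subject-specific random intercept
    y_prev = None
    for t in visit_times:
        # MAR dropout: once the previously *observed* value falls below a
        # threshold, the subject contributes no further visits.
        if y_prev is not None and y_prev < 8.5:
            break
        y = 10.0 + bi + (-0.5 - 0.7 * arm) * t + rng.normal(0.0, 1.0)
        rows.append((i, arm, t, y))
        y_prev = y

trial = pd.DataFrame(rows, columns=["id", "arm", "time", "y"])

# Direct-likelihood ("ignorable") analysis: fit the mixed model by maximum
# likelihood to all available observations; no imputation and no
# completers-only restriction is needed for validity under MAR.
fit = smf.mixedlm("y ~ time * arm", data=trial,
                  groups=trial["id"]).fit(reml=False)
print(fit.summary())
```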


Matched Case-Control Data Analysis with Selection Bias

BIOMETRICS, Issue 4 2001
I-Feng Lin
Summary: Case-control studies offer a rapid and efficient way to evaluate hypotheses. On the other hand, proper selection of the controls is challenging, and the potential for selection bias is a major weakness. Valid inferences about parameters of interest cannot be drawn if selection bias exists. Furthermore, the selection bias is difficult to evaluate. Even in situations where selection bias can be estimated, few methods are available to correct for it. In the matched case-control Northern Manhattan Stroke Study (NOMASS), stroke-free controls are sampled in two stages. First, a telephone survey ascertains demographic and exposure status from a large random sample. Then, in an in-person interview, detailed information is collected for the selected controls to be used in a matched case-control study. The telephone survey data provide information about the selection probability and the potential selection bias. In this article, we propose bias-corrected estimators in a case-control study using a joint estimating equation approach. The proposed bias-corrected estimate and its standard error can be easily obtained by standard statistical software.
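
As an illustration of how a selection-probability correction can be folded into a matched analysis with standard tools, the sketch below fits an inverse-probability-weighted conditional logistic model for 1:1 matched pairs in Python/SciPy, with a sandwich variance estimate. This is a generic weighted pseudo-likelihood, not the joint estimating equation estimator proposed in the article, and the function interface and toy data are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def weighted_pair_clogit(z_case, z_control, weight):
    """Inverse-probability-weighted conditional logistic regression for
    1:1 matched case-control pairs (an illustrative pseudo-likelihood).

    z_case, z_control : (S, p) covariates of the case / matched control
    weight            : (S,) pair weight, e.g. the inverse of the control's
                        estimated selection probability
    Returns (beta_hat, sandwich_se)."""
    d = np.asarray(z_case, float) - np.asarray(z_control, float)   # (S, p)
    w = np.asarray(weight, float)

    # Conditional likelihood per pair: P(the case is the case | pair)
    #   = expit(d'beta); each pair's log term is weighted by w.
    def neg_loglik(beta):
        return (w * np.logaddexp(0.0, -(d @ beta))).sum()

    res = minimize(neg_loglik, np.zeros(d.shape[1]), method="BFGS")
    beta = res.x

    # Sandwich variance A^{-1} B A^{-1} from pair-level score contributions.
    prob = 1.0 / (1.0 + np.exp(-(d @ beta)))
    score = (w * (1.0 - prob))[:, None] * d
    A = np.einsum("s,sp,sq->pq", w * prob * (1.0 - prob), d, d)
    B = score.T @ score
    A_inv = np.linalg.inv(A)
    return beta, np.sqrt(np.diag(A_inv @ B @ A_inv))

# Toy usage with simulated pairs and made-up selection weights.
rng = np.random.default_rng(2)
zc = rng.normal(size=(300, 2)) + 0.3
z0 = rng.normal(size=(300, 2))
w = 1.0 / rng.uniform(0.2, 0.9, 300)
beta_hat, se = weighted_pair_clogit(zc, z0, w)
print(beta_hat, se)
```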