Statistical Software
Selected Abstracts
Linear Mixed Models: a Practical Guide using Statistical Software
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 1, 2008. R. Allan Reese.
No abstract is available for this article. [source]

Linear Mixed Models – A Practical Guide Using Statistical Software
BIOMETRICAL JOURNAL, Issue 2, 2009.
No abstract is available for this article. [source]

Fitting Semiparametric Additive Hazards Models using Standard Statistical Software
BIOMETRICAL JOURNAL, Issue 5, 2007. Douglas E. Schaubel.
Abstract: The Cox proportional hazards model has become the standard in biomedical studies, particularly for settings in which the estimation of covariate effects (as opposed to prediction) is the primary objective. In spite of the obvious flexibility of this approach and its wide applicability, the model is not usually chosen for its fit to the data, but by convention and for reasons of convenience. It is quite possible that the covariates add to, rather than multiply, the baseline hazard, making an additive hazards model a more suitable choice. Typically, proportionality is assumed, with the potential for additive covariate effects not evaluated or even seriously considered. Contributing to this phenomenon is the fact that many popular software packages (e.g., SAS, S-PLUS/R) have standard procedures to fit the Cox model (e.g., proc phreg, coxph), but as yet no analogous procedures to fit its additive analogue, the Lin and Ying (1994) semiparametric additive hazards model. In this article, we establish the connections between the Lin and Ying (1994) model and both Cox and least squares regression. We demonstrate how SAS's phreg and reg procedures may be used to fit the additive hazards model after some straightforward data manipulations. We then apply the additive hazards model to examine the relationship between Model for End-stage Liver Disease (MELD) score and mortality among patients wait-listed for liver transplantation. (© 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
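Implementation note: in R, a model of the Lin and Ying form (a nonparametric baseline hazard plus constant additive covariate effects) can be fitted with the timereg package, whose aalen() function treats covariates wrapped in const() as having constant additive effects. The sketch below is illustrative only: the data frame 'liver' and its variables are hypothetical, and this is the generic package route rather than the SAS phreg/reg data-manipulation approach the abstract describes.

    # Minimal sketch: semiparametric additive hazards model in R (assumed data).
    library(survival)   # for Surv()
    library(timereg)

    # Hypothetical data frame 'liver' with follow-up time, event status, covariates.
    fit <- aalen(Surv(time, status) ~ const(meld) + const(age), data = liver)
    summary(fit)   # const() terms: additive hazard difference per unit of covariate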
A missing values imputation method for time series data: an efficient method to investigate the health effects of sulphur dioxide levels
ENVIRONMETRICS, Issue 2, 2010. Swarna Weerasinghe.
Abstract: Environmental data often contain lengthy records of sequential missing values. A practical problem arose in the analysis of the adverse health effects of sulphur dioxide (SO2) levels on asthma hospital admissions for Sydney, Nova Scotia, Canada. Reliable missing-value imputation techniques are required to obtain valid estimates of associations with sparse health outcomes such as asthma hospital admissions. In this paper, a new method that incorporates prediction errors into the imputation of missing values is described for mean daily average sulphur dioxide levels that follow a stationary time series with a random error. Existing imputation methods fail to incorporate prediction errors. An optimal method is developed by extending a between-forecast method to include prediction errors. Validity and efficacy are demonstrated by comparing performance against imputed values that do not include prediction errors. The performance of the optimal method is demonstrated by the increased validity and accuracy of the β coefficient of the Poisson regression model for the association with asthma hospital admissions. Visual inspection of the imputed sulphur dioxide values with prediction errors demonstrated that the variation is better captured. The method is computationally simple and can be incorporated into existing statistical software. Copyright © 2009 John Wiley & Sons, Ltd. [source]
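The general idea, although not the author's exact between-forecast formulation (which the abstract does not fully specify), can be sketched with base R time-series tools: fit an AR model to the observed stretch, forecast across the gap, and add a random draw scaled by the forecast standard error so that the imputed values retain realistic variation rather than collapsing onto the smooth forecast.

    # Toy sketch: forecast-based imputation with a prediction-error draw.
    # 'so2' is a simulated stand-in for a daily SO2 series with a missing run.
    set.seed(1)
    so2 <- as.numeric(arima.sim(list(ar = 0.6), n = 120, sd = 2)) + 10
    gap <- 61:70
    so2[gap] <- NA

    fit <- arima(so2[1:60], order = c(1, 0, 0))          # AR(1) on pre-gap data
    fc  <- predict(fit, n.ahead = length(gap))
    so2[gap] <- fc$pred + rnorm(length(gap), 0, fc$se)   # forecast + prediction error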
Advanced Statistics: Statistical Methods for Analyzing Cluster and Cluster-randomized Data
ACADEMIC EMERGENCY MEDICINE, Issue 4, 2002. Robert L. Wears MD.
Abstract: Sometimes interventions in randomized clinical trials are allocated not to individual patients but to patients in groups. This is called cluster allocation, or cluster randomization, and is particularly common in health services research. Similarly, in some types of observational studies, patients (or observations) are found in naturally occurring groups, such as neighborhoods. In either situation, observations within a cluster tend to be more alike than observations selected entirely at random. This violates the assumption of independence that is at the heart of common methods of statistical estimation and hypothesis testing. Failure to account for the dependence between individual observations and the cluster to which they belong can have profound implications for the design and analysis of such studies: p-values will be too small, confidence intervals too narrow, and sample size estimates too small, sometimes to a dramatic degree. This problem is similar to the more familiar "unit of analysis" error seen when repeated observations on the same subjects are treated as independent. The purpose of this paper is to provide an introduction to the problem of clustered data in clinical research. It provides guidance and examples of methods for analyzing clustered data and calculating sample sizes when planning studies. The article concludes with some general comments on statistical software for cluster data and principles for planning, analyzing, and presenting such studies. [source]
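Implementation note: two standard remedies are easy to sketch in R. One is to inflate a conventional sample-size estimate by the design effect 1 + (m - 1)ρ, where m is the average cluster size and ρ the intracluster correlation; the other is to fit a model that carries the clustering, such as a generalized estimating equation (GEE). The ICC, cluster size and data frame below are assumed for illustration.

    # Design-effect inflation of a conventional sample-size estimate.
    icc <- 0.02; m <- 20
    deff <- 1 + (m - 1) * icc   # here 1.38, i.e. 38% more patients needed
    ceiling(400 * deff)         # 400 = n from a standard, non-clustered calculation

    # GEE analysis that respects clustering (hypothetical data frame 'trial').
    library(geepack)
    fit <- geeglm(outcome ~ treatment, id = cluster, data = trial,
                  family = binomial, corstr = "exchangeable")
    summary(fit)   # sandwich standard errors account for within-cluster correlation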
Robust estimation of critical values for genome scans to detect linkage
GENETIC EPIDEMIOLOGY, Issue 1, 2005. Silviu-Alin Bacanu.
Abstract: Estimation of study-specific critical values for linkage scans (suggestive and significant thresholds) is important for identifying promising regions. In this report, I propose a fast and concrete recipe for finding study-specific critical values. Previously, critical values were derived theoretically or empirically. Theoretically derived values are often conservative because they assume fully informative transmissions. Empirically derived critical values are computer- and skill-intensive, and may not even be computationally feasible for large pedigrees. In this report, I propose a method to estimate critical values for multipoint linkage analysis using standard, widely used statistical software. The proposed method uses autoregressive (AR) models to estimate the correlation between standard normal statistics at adjacent map points, and then uses this correlation to estimate study-specific critical values. The AR-based method is evaluated for different family structures and marker densities, under both the null hypothesis of no linkage and the alternative hypothesis of linkage between marker and disease locus. Simulation results show that the AR-based method accurately predicts critical values for a wide range of study designs. © 2004 Wiley-Liss, Inc. [source]
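Implementation note: once the lag-one correlation between adjacent scan statistics has been estimated (for example, with R's ar() applied to the observed Z statistics), a study-specific threshold can be approximated by simulating AR(1) scans under the null and taking a quantile of their maxima. This is a generic illustration of the idea rather than the author's exact recipe; the correlation and number of map points are assumed values.

    # Sketch: empirical scan-wide critical value from simulated AR(1) statistics.
    set.seed(2)
    rho <- 0.8        # assumed correlation between adjacent map points
    n_points <- 1000  # assumed number of map points in the scan

    max_z <- replicate(5000, {
      z <- arima.sim(list(ar = rho), n = n_points, sd = sqrt(1 - rho^2))
      max(z)          # innovation sd chosen so each statistic has unit variance
    })
    quantile(max_z, 0.95)   # approximate 5% scan-wide critical value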
Spontaneous remission of primary hyperparathyroidism: A case report and meta-analysis of the literature
HEAD & NECK: JOURNAL FOR THE SCIENCES & SPECIALTIES OF THE HEAD AND NECK, Issue 1, 2006. Christopher T. Wootten MD.
Abstract: Background. In a minority of patients, primary hyperparathyroidism spontaneously remits, either by autoinfarction or by hemorrhage into or around the adenoma. We describe a case of autoparathyroidectomy occurring in a 63-year-old man 9 years after three parathyroid glands were removed during a total thyroidectomy. This case is compared with 50 previously reported cases of autoparathyroidectomy, and a meta-analysis is performed. Methods. Case report, literature review, and meta-analysis were done using statistical software (SigmaStat 2.0, SPSS, Chicago). Results. Fifty cases of autoparathyroidectomy were summarized according to the three etiologies. The greatest biochemical aberration was found in the acute intracapsular hemorrhage group, with [Ca++] falling from a mean of 15.1 mg/dL to 8.9 mg/dL. The average drop in parathyroid hormone was 69% across all groups, comparing favorably with surgical resection. Conclusions. Autoparathyroidectomy is a rare but documented outcome of unoperated primary hyperparathyroidism that may delay or supplant operative management. © 2005 Wiley Periodicals, Inc. Head Neck 27: XXX–XXX, 2005 [source]

Diagnostic value of FDG-PET in recurrent colorectal carcinoma: A meta-analysis
INTERNATIONAL JOURNAL OF CANCER, Issue 1, 2009. Chenpeng Zhang.
Abstract: Accurate detection of recurrent colorectal carcinoma remains a diagnostic challenge. The purpose of this study was to evaluate the diagnostic value of positron emission tomography (PET) using fluor-18-deoxyglucose (FDG) in recurrent colorectal carcinoma by meta-analysis. All published studies in English on the diagnostic value of FDG-PET in the detection of recurrent colorectal carcinoma were collected, and the methodological quality of the included studies was evaluated. Pooled sensitivity, specificity and diagnostic odds ratios, together with summary receiver operating characteristic (SROC) curves, were obtained using statistical software. Twenty-seven studies were included in the meta-analysis. The pooled sensitivity and specificity of FDG-PET for detecting distant metastasis or whole-body involvement in recurrent colorectal carcinoma were 0.91 (95% CI 0.88–0.92) and 0.83 (95% CI 0.79–0.87), respectively. The pooled sensitivity and specificity for detecting hepatic metastasis were 0.97 (95% CI 0.95–0.98) and 0.98 (95% CI 0.97–0.99). The pooled sensitivity and specificity for pelvic metastasis or locoregional recurrence were 0.94 (95% CI 0.91–0.97) and 0.94 (95% CI 0.92–0.96). FDG-PET is valuable for the assessment of recurrent colorectal carcinoma. © 2008 Wiley-Liss, Inc. [source]

Factors associated with the coping of parents with a child in psychiatric inpatient care
INTERNATIONAL JOURNAL OF NURSING PRACTICE, Issue 5, 2001. Tiina Puotiniemi MSc.
Abstract: The purpose of this study was to establish the factors associated with parental coping when a child is in psychiatric inpatient care. The data were collected from 19 hospitals with child psychiatry units; at the time of data collection, all parents of children in psychiatric inpatient care in these hospitals were recruited. Data were collected by questionnaire (n = 79) and analysed with the Statistical Package for the Social Sciences (SPSS) for Windows statistical software. Connections between variables were studied with cross-tabulation, and the χ2 test was used to determine significance. Changes in internal and external family relationships, and matters related to the upbringing of a child with mental problems, correlated significantly with parental coping (P < 0.001). Problem-oriented and emotionally oriented coping strategies, skills and palliative strategies correlated significantly with parental coping (P < 0.001). Emotional support, support for the care and upbringing of the child in inpatient care, and love and acceptance also had statistically significant associations with parental coping (P < 0.001). [source]

Do perineal exercises during pregnancy prevent the development of urinary incontinence? A systematic review
INTERNATIONAL JOURNAL OF UROLOGY, Issue 10, 2008.
Abstract: Objectives: The aim of this article was to conduct a systematic review of the performance of perineal exercises during pregnancy and their utility in the prevention of urinary incontinence. Methods: Randomized controlled trials (RCTs) of a low-risk obstetric population (primiparas or nulliparas) who had done perineal exercises only during pregnancy met the inclusion criteria. Articles published between 1966 and 2007 in periodicals indexed in the LILACS, SciELO, PubMed/MEDLINE, Scirus and Cochrane Library databases were selected, using the following keywords: 'urinary incontinence', 'pregnancy', 'pelvic floor' and 'exercise'. The Jadad scale was applied to assess the internal validity of the RCTs, and two meta-analyses (one fixed-effects and one random-effects) were carried out on data extracted from the RCTs, using Stata 9.2 statistical software and a significance level of 0.05. Results: Four RCTs with high methodological quality, involving a total of 675 women, were included. They indicated that perineal muscle exercise significantly reduced the development of urinary incontinence from 6 weeks to 3 months after delivery (odds ratio = 0.45; confidence interval: 0.3 to 0.66). However, when this effect was evaluated during the 34th and 35th gestational weeks, the meta-analysis showed that the results were not significant (odds ratio = 0.13; confidence interval: 0.00 to 3.77). Conclusion: Pelvic floor muscle exercises may be effective at reducing the development of postpartum urinary incontinence, despite clinical heterogeneity among the RCTs. [source]
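Implementation note: with study-level odds ratios and confidence limits like those reported in the review above, both fixed-effects and random-effects pooling take only a few lines in R's metafor package. The four effect sizes below are invented placeholders, not the data of the review.

    # Sketch: fixed- and random-effects meta-analysis of odds ratios.
    library(metafor)

    yi  <- log(c(0.52, 0.40, 0.38, 0.55))   # hypothetical per-trial log ORs
    sei <- c(0.25, 0.30, 0.28, 0.22)        # and their standard errors

    fe <- rma(yi = yi, sei = sei, method = "FE")    # fixed-effects pooling
    re <- rma(yi = yi, sei = sei, method = "REML")  # random-effects pooling
    predict(fe, transf = exp)   # pooled OR with 95% CI
    predict(re, transf = exp)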
Estimating numbers of infectious units from serial dilution assays
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 1, 2006. Nigel Stallard.
Summary: The paper concerns the design and analysis of serial dilution assays to estimate the infectivity of a sample of tissue when it is assumed that the sample contains a finite number of indivisible infectious units, such that a subsample will be infectious if it contains one or more of these units. The aim of the study is to estimate the number of infectious units in the original sample. The standard approach to the analysis of data from such a study is based on the assumption of independence of aliquots, both at the same dilution level and at different dilution levels, so that the numbers of infectious units in the aliquots follow independent Poisson distributions. An alternative approach is based on calculation of the expected value of the total number of samples tested that are not infectious. We derive the likelihood for the data on the basis of the discrete number of infectious units, enabling calculation of the maximum likelihood estimate and likelihood-based confidence intervals. We use the exact probabilities that are obtained to compare the maximum likelihood estimate with those given by the other methods in terms of bias and standard error, and to compare the coverage of the confidence intervals. We show that the methods have very similar properties and conclude that, for practical use, the method based on the Poisson assumption is to be recommended, since it can be implemented using standard statistical software. Finally, we consider the design of serial dilution assays, concluding that it is important that neither the dilution factor nor the number of samples that remain untested should be too large. [source]
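Implementation note: under the Poisson assumption, the probability that an aliquot of relative volume v is infectious is 1 - exp(-lambda*v), which is exactly a binomial GLM with a complementary log-log link and log(v) as an offset, so lambda can be estimated with any standard GLM routine. The dilution data below are invented for illustration.

    # Sketch: Poisson-based analysis of a serial dilution assay via a cloglog GLM.
    # P(infectious) = 1 - exp(-lambda * v)  <=>  cloglog(p) = log(lambda) + log(v)
    dilution <- data.frame(
      v   = c(1, 0.1, 0.01, 0.001),   # relative aliquot volume at each level
      pos = c(6, 5, 2, 0),            # infectious aliquots observed (hypothetical)
      n   = c(6, 6, 6, 6))            # aliquots tested per level

    fit <- glm(cbind(pos, n - pos) ~ 1, family = binomial(link = "cloglog"),
               offset = log(v), data = dilution)
    exp(coef(fit))      # estimated infectious units per undiluted aliquot
    exp(confint(fit))   # profile-likelihood confidence interval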
Is detrusor hypertrophy in women associated with voiding dysfunction?
AUSTRALIAN AND NEW ZEALAND JOURNAL OF OBSTETRICS AND GYNAECOLOGY, Issue 6, 2009. Orawan Lekskulchai.
Abstract: Background: In men, a bladder wall thickness of ≥5 mm seems to be a useful predictor of outlet obstruction, with a diagnostic value exceeding that of free uroflowmetry indices. There are no data in the literature examining whether this may also apply in women. Aims: To identify the relationships between detrusor wall thickness (DWT) and symptoms and urodynamic findings suggestive of voiding dysfunction. Methods: This is a retrospective study analysing data sets of 686 women seen for urodynamic testing in a tertiary urogynaecological unit. Hesitancy, poor stream and stop-start voiding were recorded as symptoms of voiding dysfunction. All women underwent free uroflowmetry and multichannel urodynamic testing. We used the urethral resistance factor (URA) and the obstruction coefficient (OCO), methods employed to quantify bladder outlet resistance in men. Transperineal ultrasound for DWT was performed after voiding and catheter removal. Statistical analysis was carried out using SPSS 15.0 (SPSS Inc., Chicago, IL, USA). Results: Symptoms of voiding dysfunction were reported by 33.1% of patients, and 22.4% had urodynamically diagnosed voiding dysfunction. The mean DWT in symptomatic women was not statistically different from that in women without symptoms. URA and OCO of symptomatic women were significantly different from those of asymptomatic women (P < 0.01). DWT was not associated with parameters of voiding function, URA or OCO. Conclusions: Contrary to the situation in men, increased DWT in women does not seem to be associated with symptoms or signs of voiding dysfunction; therefore, DWT cannot be used as a predictor of voiding difficulty in women. [source]

Substance use during pregnancy: risk factors and obstetric and perinatal outcomes in South Australia
AUSTRALIAN AND NEW ZEALAND JOURNAL OF OBSTETRICS AND GYNAECOLOGY, Issue 3, 2005. Robyn Kennare.
Abstract: Objective: To determine the prevalence of self-reported substance use during pregnancy in South Australia, the characteristics of substance users, their obstetric outcomes and the perinatal outcomes of their babies. Methods: Multivariable logistic regression with Stata statistical software was undertaken using the South Australian perinatal data collection 1998–2002. An audit was conducted on every fifth case coded as substance use to identify the actual substances used. Results: Substance use was reported by women in 707 of 89 080 confinements (0.8%). Marijuana (38.9%), methadone (29.9%), amphetamines (14.6%) and heroin (12.5%) were most commonly reported, with polydrug use among 18.8% of the women audited. Substance users were more likely than non-users to be smokers, to have a psychiatric condition, and to be single, indigenous, of lower socio-economic status and living in the metropolitan area. The outcome models had poor predictive power. Substance use was associated with increased risks of placental abruption (OR 2.53) and antepartum haemorrhage from other causes (OR 1.41). The exposed babies had increased risks of preterm birth (OR 2.63), small for gestational age (OR 1.79), congenital abnormalities (OR 1.52), nursery stays longer than 7 days (OR 4.07), stillbirth (OR 2.54) and neonatal death (OR 2.92). Conclusions: Substance use in pregnancy is associated with increased risks of antepartum haemorrhage and poor perinatal outcomes. However, only a small amount of the variance in outcomes can be explained by substance use alone. Recent initiatives to improve the identification and support of women exposed to adverse health, psychosocial and lifestyle factors will need evaluation. [source]

On Estimating the Relationship between Longitudinal Measurements and Time-to-Event Data Using a Simple Two-Stage Procedure
BIOMETRICS, Issue 3, 2010. Paul S. Albert.
Summary: Ye, Lin, and Taylor (2008, Biometrics 64, 1238–1246) proposed a joint model for longitudinal measurements and time-to-event data in which the longitudinal measurements are modeled with a semiparametric mixed model to allow for the complex patterns in longitudinal biomarker data. They proposed a two-stage regression calibration approach that is simpler to implement than a joint modeling approach: in the first stage, the mixed model is fit without regard to the time-to-event data; in the second stage, the posterior expectations of an individual's random effects from the mixed model are included as covariates in a Cox model. Although Ye et al. (2008) acknowledged that their regression calibration approach may cause bias due to informative dropout and measurement error, they argued that the bias is small relative to alternative methods. In this article, we show that this bias may be substantial, and we show how to alleviate much of it with an alternative regression calibration approach that can be applied to both discrete and continuous time-to-event data. Through simulations, the proposed approach is shown to have substantially less bias than the regression calibration approach proposed by Ye et al. (2008). In agreement with the methodology proposed by Ye et al. (2008), an advantage of our proposed approach over joint modeling is that it can be implemented with standard statistical software and does not require complex estimation techniques. [source]
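Implementation note: the generic two-stage idea (stage 1, fit the mixed model; stage 2, carry the predicted random effects into a Cox model) is straightforward with standard R packages. The sketch below uses a simple random-intercept-and-slope model rather than the semiparametric mixed model of Ye et al., the data frames and variable names are hypothetical, and it illustrates the naive calibration the article critiques, not the article's bias-corrected refinement.

    # Sketch: naive two-stage regression calibration.
    library(lme4)       # mixed model
    library(survival)   # Cox model

    # Stage 1: mixed model for the biomarker, ignoring the event times
    # (hypothetical long-format data 'long' with columns id, biomarker, obstime).
    m1 <- lmer(biomarker ~ obstime + (1 + obstime | id), data = long)
    blup <- coef(m1)$id   # subject-specific intercepts and slopes

    # Stage 2: Cox model with the predicted slope as a covariate
    # (hypothetical one-row-per-subject data 'surv' with id, time, status).
    surv$slope <- blup$obstime[match(surv$id, rownames(blup))]
    m2 <- coxph(Surv(time, status) ~ slope, data = surv)
    summary(m2)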
Analysis of Twin Data Using SAS
BIOMETRICS, Issue 2, 2009. Rui Feng.
Summary: Twin studies are essential for assessing disease inheritance. Data generated from twin studies are traditionally analyzed using specialized computational programs, and for many researchers, especially those who are new to twin studies, understanding and using those programs can be a daunting task. Given that SAS (Statistical Analysis Software) is the most popular software for statistical analysis, we suggest that the use of SAS procedures for twin data may be a helpful alternative, and we demonstrate that SAS produces results similar to those of specialized computational programs. This numerical validation is practically useful, because a natural concern with general statistical software is whether it can handle data generated from special study designs, such as twin studies, and test particular hypotheses. We conclude through extensive simulation that SAS procedures can be used easily as a very convenient alternative to specialized programs for twin data analysis. [source]

A Latent-Class Mixture Model for Incomplete Longitudinal Gaussian Data
BIOMETRICS, Issue 1, 2008. Caroline Beunckens.
Summary: In the analysis of incomplete longitudinal clinical trial data, there has been a shift away from simple methods that are valid only if the data are missing completely at random, towards more principled ignorable analyses, which are valid under the less restrictive missing at random assumption. The availability of the necessary standard statistical software nowadays allows such analyses in practice. While the possibility of data missing not at random (MNAR) cannot be ruled out, it is argued that analyses valid under MNAR are not well suited for the primary analysis in clinical trials. Rather than either forgetting about or blindly shifting to an MNAR framework, the optimal place for MNAR analyses is within a sensitivity-analysis context. One such route for sensitivity analysis is to consider, next to selection models, pattern-mixture models or shared-parameter models. The latter can be extended to a latent-class mixture model, the approach taken in this article. The performance of the resulting flexible model is assessed through simulations, and the model is applied to data from a depression trial. [source]

Matched Case-Control Data Analysis with Selection Bias
BIOMETRICS, Issue 4, 2001. I-Feng Lin.
Summary: Case-control studies offer a rapid and efficient way to evaluate hypotheses. On the other hand, proper selection of the controls is challenging, and the potential for selection bias is a major weakness: valid inferences about parameters of interest cannot be drawn if selection bias exists. Furthermore, selection bias is difficult to evaluate, and even in situations where it can be estimated, few methods are available. In the matched case-control Northern Manhattan Stroke Study (NOMASS), stroke-free controls are sampled in two stages. First, a telephone survey ascertains demographic and exposure status from a large random sample; then, in an in-person interview, detailed information is collected for the selected controls to be used in a matched case-control study. The telephone survey data provide information about the selection probability and the potential selection bias. In this article, we propose bias-corrected estimators for a case-control study using a joint estimating equation approach. The proposed bias-corrected estimate and its standard error can be obtained easily with standard statistical software. [source]

Linear regression analysis for comparing two measurers or methods of measurement: But which regression?
CLINICAL AND EXPERIMENTAL PHARMACOLOGY AND PHYSIOLOGY, Issue 7, 2010. John Ludbrook.
Summary: 1. There are two reasons for wanting to compare measurers or methods of measurement. One is to calibrate one method or measurer against another; the other is to detect bias. Fixed bias is present when one method gives higher (or lower) values across the whole range of measurement. Proportional bias is present when one method gives values that diverge progressively from those of the other. 2. Linear regression analysis is a popular method for comparing methods of measurement, but the familiar ordinary least squares (OLS) method is rarely acceptable. The OLS method requires that the x values are fixed by the design of the study, whereas it is usual that both y and x values are free to vary and are subject to error. In this case, special regression techniques must be used. 3. Clinical chemists favour techniques such as major axis regression ('Deming's method'), the Passing–Bablok method or the bivariate least median squares method. Other disciplines, such as allometry, astronomy, biology, econometrics, fisheries research, genetics, geology, physics and sports science, have their own preferences. 4. Many Monte Carlo simulations have been performed to try to decide which technique is best, but the results are almost uninterpretable. 5. I suggest that pharmacologists and physiologists should use ordinary least products regression analysis (geometric mean regression, reduced major axis regression): it is versatile, can be used for calibration or to detect bias, and can be executed by hand-held calculator or by using the loss function in popular, general-purpose statistical software (see the sketch at the end of this listing). [source]

Predicting the crease recovery performance and tear strength of cotton fabric treated with modified N-methylol dihydroxyethylene urea and polyethylene softener
COLORATION TECHNOLOGY, Issue 5, 2010. Tanveer Hussain.
This study aimed at developing a model for predicting the crease recovery performance and tear strength of cotton fabric, using modified N-methylol dihydroxyethylene urea, polyethylene softener, catalyst, curing time and curing temperature as the predictor variables. A quarter-factorial design was constructed and, based on the experimental results, regression models were built to predict the crease recovery angle and tear strength of the treated fabric. All experimental design and statistical analysis steps were implemented using Minitab statistical software. [source]
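Returning to the Ludbrook summary above: ordinary least products (reduced major axis) regression needs nothing beyond means, standard deviations and the correlation coefficient, so it is easily sketched in base R. The measurements below are simulated stand-ins for two hypothetical methods, both subject to error.

    # Sketch: ordinary least products (geometric mean) regression.
    set.seed(3)
    x <- rnorm(50, 100, 15)              # method A readings
    y <- 5 + 0.9 * x + rnorm(50, 0, 8)   # method B readings

    b <- sign(cor(x, y)) * sd(y) / sd(x)   # slope: sign(r) * SDy / SDx
    a <- mean(y) - b * mean(x)             # line passes through the means
    c(intercept = a, slope = b)
    # Fixed bias is suggested if the intercept differs from 0, proportional bias
    # if the slope differs from 1 (confidence intervals via bootstrap, not shown).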