Predicted Risk
Selected Abstracts

Effect of Off-Pump Coronary Artery Bypass Grafting on Risk-Adjusted and Cumulative Sum Failure Outcomes After Coronary Artery Surgery
JOURNAL OF CARDIAC SURGERY, Issue 6 2002
Richard J. Novick, M.D.

We applied cumulative sum (CUSUM) analysis, as well as standard statistical techniques, to a surgeon's experience with off-pump coronary artery bypass grafting (OPCAB) and on-pump procedures to determine whether the two techniques have similar or different outcomes. Methods: In 320 patients undergoing nonemergent, first-time coronary artery bypass grafting, preoperative patient characteristics, rates of mortality and major complications, and ICU and hospital lengths of stay were compared between the on-pump and OPCAB cohorts using Fisher's exact tests and Wilcoxon two-sample tests. Predicted mortality and length of stay were determined using previously validated models of the Cardiac Care Network of Ontario. Observed-versus-expected ratios of both variables were calculated for the two types of procedures. Furthermore, CUSUM curves were constructed for the on-pump and OPCAB cohorts. A multivariable analysis of the predictors of hospital length of stay was also performed to determine whether the type of coronary artery bypass procedure had an independent impact on this variable. Results: The predicted mortality risk and predicted hospital length of stay were almost identical in the 208 on-pump patients (2.2 ± 3.9%; 8.2 ± 2.5 days) and the 112 OPCAB patients (2.0 ± 2.2%; 7.8 ± 2.1 days). The incidence of hospital mortality and postoperative stroke was 2.9% and 2.4% in on-pump patients versus zero in OPCAB patients (p = 0.09 and 0.17, respectively). Mechanical ventilation for greater than 48 hours was significantly less common in OPCAB (1.8%) than in on-pump patients (7.7%, p = 0.04). The rate of 10 major complications was 14.9% in on-pump versus 8.0% in OPCAB patients (p = 0.08).
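The CUSUM curves used here track a running sum of observed failures minus model-predicted risk, so a flat or negative curve indicates performance at or better than predicted. A minimal sketch in Python (the function and all patient values below are hypothetical illustrations, not the authors' data):

```python
# Risk-adjusted CUSUM: cumulative sum of (observed outcome - predicted risk).
# Negative increments accumulate whenever observed failures undershoot the
# model's prediction, so a falling curve signals better-than-expected results.

def risk_adjusted_cusum(outcomes, predicted_risks):
    """outcomes: 1 = failure (death/major complication), 0 = success.
    predicted_risks: model-predicted probability of failure per patient."""
    curve, total = [], 0.0
    for obs, pred in zip(outcomes, predicted_risks):
        total += obs - pred
        curve.append(total)
    return curve

# Hypothetical series: one failure in six patients of low predicted risk.
outcomes = [0, 0, 1, 0, 0, 0]
predicted = [0.02, 0.03, 0.02, 0.05, 0.02, 0.02]
curve = risk_adjusted_cusum(outcomes, predicted)
print(round(curve[-1], 2))  # 0.84: one failure against 0.16 expected failures
```

A curve that stays at or below zero over a case series corresponds to the "negative and flatter" OPCAB failure curve described in the results.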
OPCAB patients experienced a hospital length of stay that was a median of 1.0 day shorter than that of on-pump patients (p = 0.01). The observed-versus-expected ratio for length of stay was 0.78 in OPCAB patients versus 0.95 in on-pump patients. On CUSUM analysis, the failure curve in OPCAB patients was negative and flatter than that of on-pump patients throughout the duration of the study. Furthermore, OPCAB was an independent predictor of a reduced hospital length of stay on multivariable analysis. Conclusions: OPCAB was associated with better outcomes than on-pump coronary artery bypass despite a similar predicted risk. This robust finding was documented by sensitive CUSUM analysis, by standard statistical techniques, and by a multivariable analysis of the independent predictors of hospital length of stay. (J Card Surg 2002;17:520-528)

Integrating DSM-IV Factors to Predict Violence in High-Risk Psychiatric Patients
JOURNAL OF FORENSIC SCIENCES, Issue 1 2010
Donna M. Lynch, M.S.N.

Abstract: This study incorporated Axis-II and Axis-IV factors in DSM-IV to test the relationship between predicted risk for violence assessed in the psychiatric emergency room and actual violence during hospitalization. Psychiatric nurses lack an objective instrument to use during the acute psychiatric assessment. The retrospective study comprised consecutive psychiatric admissions (n = 161) in one tertiary veterans' hospital. Statistical testing for the predictive power of risk factors, relationships between variables, and violent events included nonparametric tests, factor analysis, and logistic regression. Of the 32 patients who committed violence during hospitalization, 12 had committed violence in the psychiatric emergency room. Statistical significance was shown for violent incidents and dementia, court-ordered admission, mood disorder, and three or more risk factors.
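Nonparametric tests such as Fisher's exact test are the usual way to relate a categorical risk factor to a binary violence outcome in a 2×2 table. A self-contained sketch of the two-sided test (the counts are hypothetical, not taken from the study):

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables with the same
    margins that are no more likely than the observed table."""
    row1, row2, col1, n = a + b, c + d, a + c, a + b + c + d

    def p_table(x):  # probability of a table with x in the top-left cell
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    return sum(p for p in (p_table(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-9))

# Hypothetical counts: rows = risk factor present / absent,
# columns = violent / not violent during hospitalization.
p = fisher_exact_p(12, 20, 20, 109)
```

In practice a library routine such as `scipy.stats.fisher_exact` would be used; the hand-rolled version above only illustrates what the test computes.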
The 13-item Risk of Violence Assessment (ROVA) scale showed validity and sensitivity for rating DSM-IV factors and psychosocial stressors to predict risk for violence during hospitalization. Replication studies are recommended to strengthen the validity of the ROVA scale.

Simultaneous use of serum IgG and IgM for risk scoring of suspected early Lyme borreliosis: graphical and bivariate analyses
APMIS, Issue 4 2010
Ram B. Dessau

Dessau RB, Ejlertsen T, Hilden J. Simultaneous use of serum IgG and IgM for risk scoring of suspected early Lyme borreliosis: graphical and bivariate analyses. APMIS 2010; 118: 313–23.

The laboratory diagnosis of early disseminated Lyme borreliosis (LB) rests on IgM and IgG antibodies in serum. The purpose of this study was to refine the statistical interpretation of IgM and IgG by combining the diagnostic evidence provided by the two immunoglobulins and exploiting the whole range of the quantitative variation in test values. ELISA assays based on purified flagella antigen were performed on sera from 815 healthy Danish blood donors as negative controls and 117 consecutive patients with confirmed neuroborreliosis (NB). A logistic regression model combining the standardized units of the IgM and IgG ELISA assays was constructed, and the resulting disease risks were graphically evaluated by receiver operating characteristic and 'predictiveness' curves. The combined model improves the discrimination between NB patients and blood donors. Hence, it is possible to report a predicted risk of disease graded for each individual patient, as is theoretically preferable. The predictiveness curve, when adapted to the local pretest probability of LB, allows high-risk and low-risk thresholds to be defined instead of cut-offs based on the laboratory characteristics only, and it allows the extent of under- and over-treatment to be assessed.
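A logistic model of this kind combines the two standardized ELISA values into one predicted risk, which can then be compared against high-risk and low-risk action thresholds. The 20% threshold below mirrors the one evaluated in the study, but the coefficients are hypothetical placeholders, not the fitted model:

```python
import math

# Hypothetical logistic coefficients: intercept, IgM weight, IgG weight.
# These are illustrative stand-ins, not the values fitted by Dessau et al.
B0, B_IGM, B_IGG = -4.0, 1.2, 1.5

def predicted_risk(igm, igg, pretest_odds=1.0):
    """Combine standardized IgM and IgG units into one disease probability.
    pretest_odds shifts the intercept toward the local pretest probability."""
    logit = B0 + B_IGM * igm + B_IGG * igg + math.log(pretest_odds)
    return 1.0 / (1.0 + math.exp(-logit))

def triage(igm, igg, high=0.20, low=0.05):
    """Map a predicted risk to an action, instead of a single assay cut-off."""
    p = predicted_risk(igm, igg)
    if p >= high:
        return "consider treatment"
    if p <= low:
        return "borreliosis unlikely"
    return "indeterminate"
```

Because the two assays contribute jointly to the logit, a patient with two moderately elevated values can cross the 20% threshold even though each value alone would fall below a conventional single-assay cut-off.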
It is shown that an example patient with low ELISA results in IgM and IgG, considered negative by the conventional cut-off, has a relatively high risk of belonging to the truly diseased population and a low risk of being a false positive. Using a 20% high-risk threshold for advising the clinician to consider treatment, the sensitivity of the assay is increased from 76% to 85%, while the specificity is maintained at around 95%.

Land application of treated sewage sludge: quantifying pathogen risks from consumption of crops
JOURNAL OF APPLIED MICROBIOLOGY, Issue 2 2005
P. Gale

Abstract
Aims: To predict the number of humans in the UK infected through consumption of root crops grown on agricultural land to which treated sewage sludge has been applied in accordance with the current regulations and guidance (Safe Sludge Matrix).
Methods and Results: Quantitative risk assessments based on the source-pathway-receptor approach are developed for seven pathogens, namely salmonellas, Listeria monocytogenes, campylobacters, Escherichia coli O157, Cryptosporidium parvum, Giardia, and enteroviruses. Using laboratory data for pathogen destruction by mesophilic anaerobic digestion, and not extrapolating experimental data for pathogen decay in soil to the full 30-month harvest interval specified by the Matrix, predicts 50 Giardia infections per year but fewer than one infection per year for the other six pathogens. Assuming linear decay in the soil, a 12-month harvest interval eliminates the risks from all seven pathogens; the highest predicted being one infection of C. parvum in the UK every 45 years. Computer simulations show that a protective effect from binding of pathogens to particulate matter could potentially exaggerate the observed rate of decay in experimental systems.
Conclusions: The results confirm, assuming pathogens behave according to our current understanding, that the risks to humans from consumption of vegetable crops are remote.
Furthermore, the harvest intervals stipulated by the Safe Sludge Matrix compensate for potential lapses in the operational efficiency of sludge treatment.
Significance and Impact of the Study: The models demonstrate the huge potential impact of decay in the soil over the 12/30-month intervals specified by the Matrix, although lack of knowledge of the exact nature of soil decay processes is a source of uncertainty. The models enable the sensitivity of the predicted risks to changes in the operational efficiency of sewage sludge treatment to be assessed.
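The central mechanism in such models, decay in soil over the harvest interval followed by a dose-response step, can be sketched as follows. All rate, dose, and exposure values are invented for illustration, and the exponential dose-response form is one common quantitative-risk-assessment choice, not necessarily the one used by the author:

```python
import math

def surviving_fraction(k_per_day, days):
    """First-order (log-linear) decay in soil: N(t)/N0 = exp(-k * t)."""
    return math.exp(-k_per_day * days)

def infections_per_year(exposures, dose, k_per_day, harvest_days, r=0.005):
    """Exponential dose-response model: P(infection) = 1 - exp(-r * dose),
    scaled by the number of exposure events per year."""
    surviving_dose = dose * surviving_fraction(k_per_day, harvest_days)
    p_inf = 1.0 - math.exp(-r * surviving_dose)
    return exposures * p_inf

# Lengthening the harvest interval sharply reduces predicted infections
# (hypothetical values: 1e6 exposures/year, 10 organisms ingested per
# exposure at time of application, decay rate 0.02 per day).
at_12_months = infections_per_year(1e6, 10.0, 0.02, 365)
at_30_months = infections_per_year(1e6, 10.0, 0.02, 912)
```

The steep drop between the two intervals illustrates why the harvest intervals in the Safe Sludge Matrix dominate the predicted risk, and why uncertainty about the true shape of soil decay matters so much.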