Prospective Comparison
Selected Abstracts

NS13P A Prospective Comparison of Two Cervical Interbody Fusion Cages. ANZ Journal of Surgery, Issue 2007. M. A. Hansen.
Purpose: For some time the surgical management of chronic back pain has utilised interbody lumbar cages. More recently, interbody cages for use in the cervical spine have been produced; cervical cages provide initial stability during the fusion process. Because of their relatively recent introduction, there is little literature comparing the performance of cervical interbody cage systems. Methodology: Patients with symptomatic cervical degeneration or traumatic lesions were treated with the dynamic ABC 2 Aesculap anterior cervical plating system and either the B-Braun Samarys or the Zimmer cage system. A single surgeon performed all surgery. Pre- and post-operative radiological examinations were compared, and changes in disc height at affected and adjacent levels, lordosis and evidence of fusion were recorded. Patient outcome was measured with questionnaires; the modified Oswestry neck pain disability and Copenhagen neck disability scale scores were utilised to allow comparison between patients. Results: A total of 43 patients were involved in the study (30 with the Zimmer cage system and 13 with the Samarys cage). Patient follow-up has been up to 12 months. Improvement in disability scores was shown in 90% of patients. Follow-up imaging did not demonstrate subsidence of the cage or adjacent instability in either group. There was no statistical difference in complication rate between the two groups. Discussion: Initial stability was provided by both interbody cervical spine cage systems. Rates of fusion and symptomatic relief compared favourably with fusion using autogenous bone graft, without the associated morbidity. Longer follow-up is necessary to determine whether there is evidence of adjacent-level instability or vertebral end-plate subsidence.
[source] A Prospective Comparison of Ultrasound-guided and Blindly Placed Radial Arterial Catheters. Academic Emergency Medicine, Issue 12 2006. Stephen Shiver MD.
Abstract. Background: Arterial cannulation for continuous blood-pressure measurement and frequent arterial-blood sampling is commonly required in critically ill patients. Objectives: To compare ultrasound (US)-guided versus traditional palpation placement of arterial lines with respect to time to placement, number of attempts, sites used, and complications. Methods: This was a prospective, randomized interventional study at a Level 1 academic urban emergency department with an annual census of 78,000 patients. Patients were randomized to either the palpation or the US-guided group. Inclusion criteria were any adult patient who required an arterial line according to the treating attending. Patients who had previous attempts at an arterial line during the visit, or who could not be randomized because of time constraints, were excluded. Enrollment was on a convenience basis, during hours worked by researchers over a six-month period. Patients in either group who had three failed attempts were rescued with the other technique for patient comfort. Statistical analysis included Fisher's exact, Mann-Whitney, and Student's t-tests. Results: Sixty patients were enrolled, 30 randomized to each group. Patients randomized to the US group had a shorter time to arterial line placement (107 vs. 314 seconds; difference, 207 seconds; p = 0.0004), fewer placement attempts (1.2 vs. 2.2; difference, 1; p = 0.001), and fewer sites required for successful line placement (1.1 vs. 1.6; difference, 0.5; p = 0.001) compared with the palpation group. Conclusions: In this study, US guidance for arterial cannulation succeeded more frequently and established the arterial line in less time than the palpation method.
[source] A Randomized, Bilateral, Prospective Comparison of Calcium Hydroxylapatite Microspheres versus Human-Based Collagen for the Correction of Nasolabial Folds. Dermatologic Surgery, Issue 2007. Stacy Smith MD.
BACKGROUND: Current soft tissue fillers are a compromise between ease of use, duration of correction, reactivity, and cost. A product utilizing calcium hydroxylapatite (CaHA) is currently being used as a soft tissue filler. OBJECTIVE: To compare the efficacy and safety of CaHA microspheres versus human-based collagen for the correction of nasolabial folds. MATERIALS AND METHODS: Four centers enrolled 117 subjects with moderate to deep nasolabial folds. Subjects received CaHA on one side of the face and human collagen on the other; up to two touch-ups were allowed. A blinded panel of experts evaluated subject photographs from initial and follow-up visits. RESULTS: Seventy-nine percent of subjects had superior improvement on the CaHA side through 6 months (p < .0001). For optimal correction, significantly less volume and fewer injections were needed for CaHA than for collagen (p < .0001). Adverse event rates were comparable, with some increase in bruising and edema on CaHA-treated sides. Adverse event duration was similar in both groups, and events generally resolved within 14 to 21 days. CONCLUSION: This CaHA-based product gives significantly longer-lasting correction of nasolabial folds than human collagen, and less total material and fewer injections are required. The adverse event profile of the product is similar to that of the collagen-based product.
[source] Prospective comparison of course of disability in antipsychotic-treated and untreated schizophrenia patients. Acta Psychiatrica Scandinavica, Issue 3 2009. J. Thirthalli.
Objective: To compare the course of disability in schizophrenia patients receiving antipsychotics and those remaining untreated in a rural community.
Method: Of 215 schizophrenia patients identified in a rural south Indian community, 58% were not receiving antipsychotics. Trained raters assessed disability in 190 of these at baseline and after 1 year. The course of disability in those who remained untreated was compared with that in those who received antipsychotics. Results: Mean disability scores remained virtually unchanged in those who remained untreated, but showed a significant decline (indicating a decrease in disability) in those who continued to receive antipsychotics and in those in whom antipsychotic treatment was initiated (P < 0.001; group × occasion effect). The proportion of patients classified as 'disabled' declined significantly in the treated group (P < 0.01) but remained the same in the untreated group. Conclusion: Disability in untreated schizophrenia patients remains unchanged over time; treatment with antipsychotics in the community results in a considerable reduction in disability.
[source] Prospective comparison of subjective arousal during the pre-sleep period in primary sleep-onset insomnia and normal sleepers. Journal of Sleep Research, Issue 2 2007. Jennifer A. Robertson.
Summary: Psychophysiological insomnia (PI) is the most common insomnia subtype, representing 12–15% of all sleep centre referrals. Diagnostic guidelines describe PI as an intrinsic sleep disorder involving both hyperarousal and learned sleep-preventing associations. Whilst evidence for the first component is reasonably compelling, evidence for learned (conditioned) sleep effects is markedly lacking; indeed, to date no study has attempted to capture directly the conditioned arousal effect assumed to characterize the disorder. Accordingly, the present study explored variations in subjective arousal over time in 15 PI participants (sleep-onset type) and 15 normal sleepers (NS).
Self-report measures of cognitive arousal, somatic arousal and sleepiness were taken at three time points: 3 h before bedtime (early to mid-evening); 1 h before bedtime (late evening); and in the bedroom at lights out (bedtime), across four 24-h cycles. Fluctuations in mean arousal and sleepiness values, and in day-to-day variation, were examined using analyses of variance. Participants with PI were significantly more cognitively aroused, and significantly less sleepy, than NS within the bedroom environment. These results support the tenet of conditioned mental arousal to the bedroom, although competing explanations cannot be ruled out. Results are discussed with reference to extant insomnia models.
[source] Prospective comparison of [18F]fluorodeoxyglucose positron emission tomography with conventional assessment by computed tomography scans and serum tumor markers for the evaluation of residual masses in patients with nonseminomatous germ cell carcinoma. Cancer, Issue 9 2002. Christian Kollmannsberger M.D.
Abstract. BACKGROUND: To assess the ability of [18F]fluorodeoxyglucose (F-18 FDG) positron emission tomography (PET) to predict the viability of residual masses after chemotherapy in patients with metastatic nonseminomatous germ cell tumors (GCT), PET results were compared in a blinded analysis with computed tomography (CT) scans and serum tumor marker changes (TUM) as established methods of assessment. METHODS: Independent reviewers, blinded to each other's results, evaluated the PET results and the corresponding CT scan and TUM results in 85 residual lesions from 45 patients. All patients were treated within prospective clinical trials and received primary or salvage high-dose chemotherapy with autologous blood stem cell support for primary poor-prognosis or recurrent disease. PET results were assessed both visually and by quantifying glucose uptake (standardized uptake values).
Results were validated either by histologic examination of a resected mass and/or biopsy (n = 28 lesions) or by 6-month clinical follow-up after evaluation (n = 57 lesions). RESULTS: F-18 FDG PET showed increased tracer uptake in 32 of 85 residual lesions: 29 true positive (TP) and three false positive (FP). Fifty-three lesions were classified by PET as negative (no viable GCT): 33 were true negative (TN) and 20 false negative (FN). In the blinded reading of the corresponding CT scan and TUM results, 38 residual lesions were correctly assessed as containing viable carcinoma and/or teratoma, and 46 lesions were classified as nonsuspicious by CT scan/TUM (33 TN lesions and 14 falsely classified lesions). PET correctly predicted the presence of viable carcinoma in 5 of these 14 lesions and the absence of viable carcinoma in 3 of these 14. Resulting sensitivities and specificities for the prediction of residual mass viability were: PET, 59% sensitivity and 92% specificity; radiologic monitoring, 55% sensitivity and 86% specificity; TUM, 42% sensitivity and 100% specificity. The positive and negative predictive values for PET were 91% and 62%, respectively. The diagnostic efficacy of PET did not improve when patients with teratomatous elements in the primary tumor were excluded from the analysis. In patients with multiple residual masses, uniformly increased residual F-18 FDG uptake in all lesions was a strong predictor of viable carcinoma. CONCLUSIONS: F-18 FDG PET imaging performed in conjunction with conventional staging methods offers additional information for the prediction of residual mass histology in patients with nonseminomatous GCT. A positive PET is highly predictive of viable carcinoma. Other useful indications for a PET examination include patients with multiple residual masses and patients with marker-negative disease.
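The PET performance figures reported above follow directly from the lesion counts in the abstract (29 TP, 3 FP, 33 TN, 20 FN); a minimal arithmetic sketch:

```python
# Reproduce the PET diagnostic-performance figures from the reported
# lesion counts: 29 TP, 3 FP, 33 TN, 20 FN (85 residual lesions in total).
tp, fp, tn, fn = 29, 3, 33, 20

assert tp + fp == 32           # PET-positive lesions
assert tn + fn == 53           # PET-negative lesions
assert tp + fp + tn + fn == 85 # all residual lesions

sensitivity = tp / (tp + fn)   # viable lesions correctly called positive
specificity = tn / (tn + fp)   # non-viable lesions correctly called negative
ppv = tp / (tp + fp)           # positive predictive value
npv = tn / (tn + fn)           # negative predictive value

# Matches the abstract's 59% / 92% (sensitivity/specificity)
# and 91% / 62% (PPV/NPV) figures.
print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}")
print(f"PPV {ppv:.0%}, NPV {npv:.0%}")
```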
Cancer 2002;94:2353–62. © 2002 American Cancer Society. DOI 10.1002/cncr.10494
[source] A clinical prospective comparison of anesthetic sensitivity and hemodynamic effect among patients with or without obstructive jaundice. Acta Anaesthesiologica Scandinavica, Issue 7 2010. L.-Q. Yang.
Background: To compare isoflurane anesthesia in patients with or without hyperbilirubinemia undergoing hepatobiliary surgery. Methods: Forty-two patients with obstructive jaundice and 40 control patients with normal liver function, scheduled for hepatobiliary surgery under isoflurane anesthesia, were studied. Anesthesia was induced with propofol (1.5–2 mg/kg) and remifentanil (2 µg/kg). After tracheal intubation, anesthesia was titrated using isoflurane in oxygen-enriched air, adjusted to maintain a bispectral index (BIS) value of 46–54. Ephedrine, atropine and remifentanil were used to maintain hemodynamic parameters within 30% of baseline. Mean arterial blood pressure (MAP), heart rate (HR), drug doses and the time taken to recover from anesthesia were recorded. Results: Demographic data, duration and BIS values were similar in both groups. Anesthesia induction and maintenance were associated with more hemodynamic instability in the patients with jaundice, who received more ephedrine and atropine and less remifentanil and isoflurane (51.1 ± 24.2 vs. 84.6 ± 20.3 mg/min; P for all < 0.05) than control patients. Despite less anesthetic use, their times to recovery and extubation were significantly longer than those of controls. Conclusion: Patients with obstructive jaundice have increased sensitivity to isoflurane, more hypotension and bradycardia during anesthesia induction and maintenance, and a prolonged recovery time compared with controls.
[source] Dynamic Balance and Stepping Versus Tai Chi Training to Improve Balance and Stepping in At-Risk Older Adults. Journal of the American Geriatrics Society, Issue 12 2006. Joseph O.
Nnodim MD.
OBJECTIVES: To compare the effect of two 10-week balance training programs, Combined Balance and Step Training (CBST) versus tai chi (TC), on balance and stepping measures. DESIGN: Prospective intervention trial. SETTING: Local senior centers and congregate housing facilities. PARTICIPANTS: Adults aged 65 and older with at least mild impairment in the ability to perform unipedal stance and tandem walk. INTERVENTION: Participants were allocated to TC (n = 107, mean age 78) or to CBST, an intervention focused on improving dynamic balance and stepping (n = 106, mean age 78). MEASUREMENTS: At baseline and 10 weeks, participants were tested on static balance (Unipedal Stance and Tandem Stance (TS)), stepping (Maximum Step Length, Rapid Step Test), and Timed Up and Go (TUG). RESULTS: Performance improved more with CBST than with TC, by 5% to 10% on the stepping tests (Maximum Step Length and Rapid Step Test) and 9% on the TUG; the improvement in TUG represented more than 1 second. Greater improvements were also seen in static balance ability (on TS) with CBST than with TC. CONCLUSION: Of the two training programs, variants of both of which have been proven to reduce falls, CBST produced modest improvements in balance, stepping, and functional mobility relative to TC over a 10-week period. Future research should include a prospective comparison of fall rates in response to these two balance training programs.
[source] Superior effect of intravenous anti-D compared with IV gammaglobulin in the treatment of HIV-thrombocytopenia: results of a small, randomized prospective comparison. American Journal of Hematology, Issue 5 2007. Andromachi Scaradavou.
Abstract: This small, prospective, randomized study compared increases in platelet counts and duration of response after intravenous gammaglobulin (IVIG) and IV anti-D in patients with HIV-related thrombocytopenia (HIV-TP).
Nine Rh+, nonsplenectomized HIV-positive patients with thrombocytopenia were treated sequentially, in random order, with IVIG and IV anti-D in a crossover design, receiving each therapy for 3 months. Peak platelet counts and duration of effect after each treatment were compared. In addition, viral load measurements and CD4 counts were followed serially, as were thrombopoietin levels. IV anti-D resulted in a mean peak platelet count of 77 × 10⁹/L, compared with only 29 × 10⁹/L after IVIG (P = 0.07). The mean duration of response was significantly longer with anti-D (41 days) than with IVIG (19 days; P = 0.01). No consistent changes were seen in the CD4 counts or viral load measurements as a result of either therapy. Thrombopoietin levels were normal in all patients despite often severe thrombocytopenia. Anti-D was more efficacious than IVIG for the treatment of HIV-TP, confirming and extending previous results. Anti-D should be first-line therapy in HIV-positive, Rh+ patients when antiretroviral agents are not indicated or not effective, or when there is an urgent need to increase the platelet count. Am. J. Hematol. 82: 2007. © 2006 Wiley-Liss, Inc.
[source] Video rigid laryngeal endoscopy compared to laryngeal mirror examination: an assessment of patient comfort and clinical visualization. The Laryngoscope, Issue 2 2009. Joshua Dunklebarger MD.
Abstract. Objectives: To determine whether there are differences in patient preference and extent of laryngeal visualization between video rigid (30-degree endoscope) laryngoscopy (VRL) and laryngeal mirror examination (LME). Study Design: A prospective comparison by patients undergoing laryngeal examination by both VRL and LME, conducted by two examiners experienced in both mirror and rigid video endoscopy. Methods: Forty-three patients had laryngeal examination by both VRL and LME in alternating order. Patients were instructed to observe their exam on a monitor screen during the rigid exam.
At the conclusion of both laryngeal examinations, patients were asked to rank comfort and level of gagging on a 1-to-10 scale for both VRL and LME, as well as their preference between the two methods and whether seeing their laryngeal examination on the video screen was helpful. The extent of laryngeal visualization achieved by the clinician was recorded for each examination. Results: Patient comfort was greater with VRL (P < .001) and gagging was significantly less with VRL (P < .001) compared with LME. VRL provided a more complete examination of the larynx by the clinician (P < .001) than LME. Patient preference significantly favored VRL (79.1%) over LME (18.4%), with 2.3% expressing no preference (P < .001). A total of 83.7% found visualization of the laryngeal exam on the monitor during VRL helpful. Conclusions: VRL is superior to LME for most patients based on comfort, extent of laryngeal examination by the clinician, and patient preference. The majority of patients found visualization of their laryngeal examination during VRL to be helpful. Laryngoscope, 2009.
[source] Evaluating anatomical research in surgery: a prospective comparison of cadaveric and living anatomical studies of the abdominal wall. ANZ Journal of Surgery, Issue 12 2009. Warren M. Rozen.
Abstract. Background: Cadaveric research has widely influenced our understanding of clinical anatomy. However, while many soft-tissue structures remain quiescent after death, other tissues, such as viscera, undergo structural and functional changes that may influence their use in predicting living anatomy. In particular, our understanding of vascular anatomy has been based upon cadaveric studies, in which vascular tone and flow do not match the living situation. Methods: An angiographic analysis of the abdominal wall vasculature was performed using plain-film and computed tomography angiography in 60 cadaveric hemi-abdominal walls (from 31 cadavers) and 140 living hemi-abdominal walls (in 70 patients).
The deep inferior epigastric artery (DIEA) and all of its perforating branches larger than 0.5 mm were analysed for number, calibre and location. Results: Both large named vessels and small-calibre vessels showed marked differences between living anatomy and cadaveric specimens. The DIEA was of larger diameter (4.2 mm versus 3.1 mm, P < 0.01) and had more detectable branches in the cadaveric specimens. Perforators were of greater calibre (diameter 1.5 mm versus 0.8 mm, P < 0.01) and more plentiful (16 versus 6, P < 0.01) in cadaveric specimens. However, the location of individual vessels was similar. Conclusions: Cadaveric anatomy displays marked differences from in vivo anatomy, with the absence of living vascular dynamics affecting vessel diameters: blood vessels are of greater measurable calibre in cadaveric specimens than in the living. Consequently, cadaveric anatomy should be interpreted with consideration of post-mortem changes, while living anatomical studies, particularly those using imaging technologies, should be embraced in anatomical research.
[source] An unrandomized prospective comparison of urinary continence, bowel symptoms and the need for further procedures in patients with and without adjuvant radiation after radical prostatectomy. BJU International, Issue 4 2003. T. Hofmann.
OBJECTIVE: To prospectively assess, using a questionnaire-based study, the relative differences and changes in urinary continence and bowel symptoms, and the need for further surgery, within the first year after radical retropubic prostatectomy (RRP) in patients with and without adjuvant radiotherapy (aRT). PATIENTS AND METHODS: The study included 96 men with clinically organ-confined adenocarcinoma of the prostate who underwent RRP between March 1998 and June 1999. A subset of 36 patients was recommended aRT to the prostatic fossa (median dose 54 Gy) because of positive surgical margins and/or seminal vesicle involvement.
Using a mailed questionnaire, all patients were prospectively assessed at 4-month intervals for the first year after RRP. RESULTS: Valid data were analysed from 83 patients (overall response rate 86%), of whom 30 (36%) had received aRT. At 4 months a significantly lower proportion of the aRT group used no pads, and significantly more used one pad/day, than in the RRP-only group (both P < 0.05). At 8 and 12 months after RRP there was no statistically significant difference between the groups in urinary incontinence. However, 53% of men in the aRT group had stool urgency and 13% reported fecal incontinence at 4 months, compared with 1.9% and none (both P < 0.01) in the RRP-only group. At 1 year after RRP, bowel symptoms and fecal continence had improved in the aRT group, and there was no significant difference between the groups for these symptoms. Starting aRT early (≤ 12 weeks after RRP) or late (> 12 weeks) had no significant effect on urinary continence, bowel symptoms or fecal incontinence. Apart from dilatation of a urethral stricture in one patient in each group, no further procedures were reported during follow-up. CONCLUSION: A moderate dose of aRT after RRP had a temporary effect on subjective urinary continence at 4 months but not at 8 and 12 months. More patients receiving aRT reported significant bowel symptoms at 4 and 8 months than those with RRP only, but by 1 year most of these symptoms had resolved and there were no significant differences between the groups.