Clinical Database (clinical + database)
Selected Abstracts

Support of Daily ECG Procedures in a Cardiology Department via the Integration of an Existing Clinical Database and a Commercial ECG Management System. ANNALS OF NONINVASIVE ELECTROCARDIOLOGY, Issue 3 2002. Franco Chiarugi Dott. Background: In the context of HYGEIAnet, the regional health telematics network of Crete, a clinical cardiology database (CARDIS) has been installed in several hospitals. The large number of resting ECGs recorded daily made it a priority to have computerized support for the entire ECG procedure. Methods: Starting in late 2000, ICS-FORTH and Mortara Instrument, Inc., collaborated to integrate the Mortara E-Scribe/NT ECG management system with CARDIS in order to support daily ECG procedures. CARDIS was extended to allow automatic ordering of daily ECGs via E-Scribe/NT. The ECG order list is downloaded to the electrocardiographs and executed; the recorded ECGs are transmitted to E-Scribe/NT, where confirmed ECG records are linked back to CARDIS. A thorough testing period was used to identify and correct problems. An ECG viewer/printer was extended to read ECG files in E-Scribe/NT format. Results: The integration of E-Scribe/NT and CARDIS, enabling automatic scheduling of ECG orders and immediate availability of confirmed ECG records for viewing and printing in the clinical database, took approximately 4 man-months. The performance of the system is highly satisfactory and it is now ready for deployment in the hospital. Conclusions: Integration of a commercially available ECG management system with an existing clinical database can provide a rapid, practical solution that requires no major modifications to either software component. The success of this project makes us optimistic about extending CARDIS to support additional examination procedures such as digital coronary angiography and ultrasound examinations. A.N.E. 2002;7(3):263–270 [source]

Incidence and diagnostic diversity in first-episode psychosis. ACTA PSYCHIATRICA SCANDINAVICA, Issue 4 2010. R. Reay. Reay R, Mitford E, McCabe K, Paxton R, Turkington D. Incidence and diagnostic diversity in first-episode psychosis. Objective: To investigate the incidence and range of diagnostic groups in patients with first-episode psychosis (FEP) in a defined geographical area. Method: An observational database was set up on all patients aged 16 years and over presenting with FEP living in a county in Northern England between 1998 and 2005. Results: The incidence of all FEP was 30.95/100 000. The largest diagnostic groups were psychotic depression (19%) and acute and transient psychotic disorder (19%). Fifty-four per cent of patients were aged 36 years and over. Patients with schizophrenia spectrum disorder only accounted for 55% of cases. Conclusion: This clinical database revealed marked diversity in age and diagnostic groups in FEP, with implications for services and guidelines. These common presentations of psychoses are grossly under-researched, and no treatment guidelines currently exist for them. [source]

Tolerability and Safety of Frovatriptan With Short- and Long-term Use for Treatment of Migraine and in Comparison With Sumatriptan. HEADACHE, Issue 2002. Gilles Géraud MD. Objective: To evaluate the tolerability and safety of frovatriptan 2.5 mg in patients with migraine. Background: Frovatriptan is a new, selective serotonin agonist (triptan) developed for the acute treatment of migraine.
Dose range-finding studies identified 2.5 mg as the dose that conferred the optimal combination of efficacy and tolerability. Methods: The tolerability and safety of frovatriptan 2.5 mg were assessed during controlled, acute migraine treatment studies, including a study that compared frovatriptan 2.5 mg with sumatriptan 100 mg, as well as a 12-month open-label study during which patients could take up to three doses of frovatriptan 2.5 mg within a 24-hour period. Safety and tolerability were assessed through the collection of adverse events, monitoring of heart rate and blood pressure, performance of 12-lead electrocardiogram, hematology screen, and blood chemistry studies. Results: In the short-term studies, 1554 patients took frovatriptan 2.5 mg and 838 took placebo. In the 12-month study, 496 patients treated 13 878 migraine attacks. Frovatriptan was well tolerated in the short- and long-term studies, with 1% of patients in the short-term studies and 5% of patients in the long-term study withdrawing due to lack of tolerability. The incidence of adverse events was higher in the frovatriptan-treated patients than in the patients who took placebo (47% versus 34%), and the spectrum of adverse events was similar. When compared to sumatriptan 100 mg, significantly fewer patients taking frovatriptan experienced adverse events (43% versus 36%; P=.03) and the number of adverse events was lower (0.62 versus 0.91); there were also fewer adverse events suggestive of cardiovascular symptoms in the frovatriptan group. Analysis of the entire clinical database (n=2392) demonstrated that frovatriptan was well tolerated by the patients regardless of their age, gender, race, concomitant medication, or the presence of cardiovascular risk factors. No effects of frovatriptan on heart rate, blood pressure, 12-lead electrocardiogram, hematology screen, or blood chemistry were observed. No patient suffered any treatment-related serious adverse event. Conclusions: Short- and long-term use of frovatriptan 2.5 mg was well tolerated by a wide variety of patients. Frovatriptan treatment produced an adverse events profile similar to that of placebo, and in a direct comparison study was better tolerated than sumatriptan 100 mg. [source]

Fertility among female Hodgkin lymphoma survivors attempting pregnancy following ABVD chemotherapy. HEMATOLOGICAL ONCOLOGY, Issue 1 2007. David C. Hodgson. Abstract: Although ABVD (doxorubicin, bleomycin, vinblastine, dacarbazine) chemotherapy is infrequently associated with premature amenorrhea, little is known about the success rate of women attempting pregnancy following ABVD. In the present study, females treated for HL with ABVD chemotherapy without pelvic radiation therapy (RT) and who were alive without relapse ≥3 years after treatment were identified from a clinical database and screened for inclusion. Using a standardized questionnaire, we determined the pregnancy rate (i.e. time-to-pregnancy, TTP) among survivors who had become pregnant, tried to become pregnant, or who had been sexually active for over 2 months without using contraception at any time following ABVD. The cumulative incidence of pregnancy was calculated using the Kaplan–Meier method. Cox proportional hazards models were constructed to compare the pregnancy rate among HL survivors to that reported by friend or sibling controls. Thirty-six female HL survivors, who had attempted pregnancy after ABVD treatment, and 29 controls, completed the survey.
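As a brief aside on the Methods just described: the cumulative incidence of pregnancy in such a study comes from a Kaplan–Meier estimate of the time-to-pregnancy distribution. The sketch below is illustrative only; the follow-up times, event flags and variable names are invented and are not data from the study.

```python
# Minimal Kaplan-Meier estimate of the cumulative incidence of pregnancy.
# Illustrative only: the times (months to pregnancy or censoring) and event
# flags below are invented, not data from the study described above.
from collections import Counter

def kaplan_meier(times, events):
    """Return [(t, S(t))] at event times, where S(t) = P(no pregnancy by t)."""
    n_at_risk = len(times)
    surv = 1.0
    curve = []
    event_counts = Counter(t for t, e in zip(times, events) if e)
    censor_counts = Counter(t for t, e in zip(times, events) if not e)
    for t in sorted(set(times)):
        d = event_counts.get(t, 0)
        if d and n_at_risk:
            surv *= 1.0 - d / n_at_risk          # Kaplan-Meier product step
            curve.append((t, surv))
        n_at_risk -= d + censor_counts.get(t, 0)  # remove events and censorings
    return curve

ttp  = [2, 2, 3, 5, 6, 8, 12, 12, 14, 18]   # months of follow-up (invented)
preg = [1, 1, 1, 0, 1, 1, 1,  0,  1,  0]    # 1 = pregnancy, 0 = censored

for t, s in kaplan_meier(ttp, preg):
    print(f"month {t:2d}: cumulative incidence of pregnancy = {1 - s:.2f}")
```

A 12-month pregnancy rate such as the one quoted in the Results corresponds to one minus the Kaplan–Meier survivor estimate at 12 months; the Cox model then compares survivors with controls after adjustment.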
Eighteen patients (50%) received 2–4 cycles of ABVD, 16 (44%) received 4–6 cycles, and 2 (6%) received >6 cycles. The median TTP among both HL survivors and controls was 2.0 months. The 12-month pregnancy rates were 70% and 75%, respectively. The fertility ratio (FR) for HL survivors versus controls was 0.94 (95% CI = 0.53–1.66; p = 0.84) after adjusting for age and frequency of intercourse (where FR < 1 indicates subfertility). Age at treatment and the number of cycles of chemotherapy were not associated with pregnancy rate among HL survivors. Female HL patients who had survived without recurrence ≥3 years and who had attempted pregnancy after ABVD did not experience significant sub-fertility. Copyright © 2006 John Wiley & Sons, Ltd. [source]

The prevalence of reduced zidovudine susceptibility in zidovudine-naive, antiretroviral-experienced HIV-1-infected patients. HIV MEDICINE, Issue 4 2003. Y Gilleece. Objectives: There is increasing in vitro and in vivo evidence that reduced zidovudine (ZDV) susceptibility is generated by the selective pressure conferred by other nucleoside reverse transcriptase inhibitors (NRTIs). However, the degree to which this occurs in clinical practice remains unclear. We assessed phenotypic and genotypic resistance in ZDV-naive patients with virological failure on stavudine (d4T)-containing regimens, with particular reference to potential cross-resistance between d4T and ZDV. Methods: Patients were identified from a clinical database. Treatment history was confirmed by case note evaluation and discussion with patients. Genotypic and phenotypic analyses were undertaken by Virco (Virco BVBA, Mechelen, Belgium). Results: Sixty-seven drug-experienced, ZDV-naive patients who underwent a resistance test while failing a d4T-containing regimen were identified. Of these patients, 23% had received three or more NRTIs and 42% at least one non-nucleoside reverse transcriptase (RT) inhibitor; 22% had viruses with reduced d4T susceptibility (>1.8-fold resistance), and 25% had viruses with reduced ZDV susceptibility (>4-fold). The most frequently observed RT mutations were identified. A significant correlation was found between susceptibility to d4T and susceptibility to ZDV (r=0.36; P=0.003), and also between virtual resistance to d4T and that to ZDV (r=0.38; P=0.002). Conclusions: A significant minority of d4T-treated, ZDV-naive patients were found to have viruses with reduced ZDV susceptibility, with a variable association with classical ZDV resistance mutations. These data suggest that cross-resistance between d4T and ZDV may involve novel constellations of mutations. Correlations between d4T and ZDV susceptibilities and resistances further support cross-resistance between NRTIs. [source]

Urban-Rural Differences in a Memory Disorders Clinical Population. JOURNAL OF AMERICAN GERIATRICS SOCIETY, Issue 5 2001. Sarah B. Wackerbarth PhD. OBJECTIVES: To compare patient characteristics and family perceptions of patient function at one urban and one rural memory disorders clinic. DESIGN: Secondary, cross-sectional data analyses of an extant clinical database. SETTING/PARTICIPANTS: First time visits (n = 956) at two memory disorders clinics. MEASUREMENTS: Patient and family-member demographics and assessment results for the Mini-Mental State Examination (MMSE), instrumental activities of daily living (IADLs), activities of daily living (ADLs), the Memory Change and Personality Change components of the Blessed Dementia Rating Scale, and the Revised Memory and Behavior Problems Checklist.
RESULTS: In both clinics, patients and family members were more likely to be female. The typical urban clinic patient was significantly more likely to be living in a facility and more educated than the typical rural patient. Urban and rural patients did not show significant differences in age- and education-adjusted MMSE scores or raw ADL/IADL ratings, but the urban family members reported more memory problems, twice as many personality changes, more frequent behavior problems, and more adverse reactions to problems. CONCLUSION: Physicians who practice in both urban and rural areas can anticipate differences between patients, and their families, who seek a diagnosis of memory disorders. Our most important finding is that despite similarities in reported functional abilities, urban families appear to be more sensitive to and more distressed by patients' cognitive and behavioral symptoms than rural families. These differences may reflect different underlying needs, and should be explored in further research. [source]

Coding diagnoses and procedures using a high-quality clinical database instead of a medical record review. JOURNAL OF EVALUATION IN CLINICAL PRACTICE, Issue 3 2001. Carl van Walraven MSc MD FRCPC. Abstract: A discharge abstract must be completed for each hospitalization. The most time-consuming component of this task is a complete review of the doctors' progress notes to identify and code all diagnoses and procedures. We have developed a clinical database that creates hospital discharge summaries. The objective was to compare diagnostic and procedural coding from a clinical database with the standard chart review by health records analysts (HRAs). The study population comprised all patients admitted and discharged from general medical and surgical services at a teaching hospital in Ontario, Canada. Diagnostic and procedural codes were identified by reviewing discharge summaries generated from a clinical database. Independently, codes were identified by hospital health records analysts using chart review alone. Codes were compared with a gold standard case review conducted by a health records analyst and a doctor. The outcome measures were coding accuracy (percentage of codes in the gold standard review) and completeness (percentage of gold standard codes identified). The study included 124 patients (mean length of stay 5.5 days; 66.4% medical patients). The accuracy of the most responsible diagnosis was 68.5% and 62.9% for the database (D) and chart review (C), respectively (P = 0.18). Overall, the database significantly improved the accuracy (D = 78.9% vs. C = 74.5%; P = 0.02) and completeness (D = 63.9% vs. C = 36.7%; P < 0.0001) of diagnostic coding. Although completeness of procedural coding was similar (D = 5.4% vs. C = 64.2%; P = NS), accuracy decreased with the database (D = 70.3% vs. C = 92.2%; P < 0.0001). Mean resource intensity weightings calculated from the codes (D = 1.3 vs. C = 1.4; P = NS) were similar. Coding from a clinical database may circumvent the need for HRAs to review doctors' progress notes, while maintaining the quality of coding in the discharge abstract. [source]

Recipient and donor factors influence the incidence of graft-vs.-host disease in liver transplant patients. LIVER TRANSPLANTATION, Issue 4 2007. Edie Y. Chan. Acute cellular graft-vs.-host disease (GVHD) following liver transplantation has an incidence of 1 to 2% and a mortality rate of 85%. Our aim was to identify a patient population at high risk for developing GVHD using a large clinical database to study both recipient and donor factors.
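Returning briefly to the coding study above: its two outcome measures, accuracy (the percentage of assigned codes confirmed by the gold-standard review) and completeness (the percentage of gold-standard codes that were identified), are easy to make concrete. The toy code sets below are invented for illustration and are not data from the study.

```python
# Illustrative only: toy diagnosis code sets, not data from the study above.
gold_std = {"I50.0", "N17.9", "J18.9", "E11.9"}   # gold-standard case review
database = {"I50.0", "N17.9", "E11.9"}            # codes from the clinical database
chart    = {"I50.0", "J18.9", "R07.4"}            # codes from chart review alone

def accuracy(assigned, gold):
    """Percentage of assigned codes that appear in the gold-standard review."""
    return 100 * len(assigned & gold) / len(assigned)

def completeness(assigned, gold):
    """Percentage of gold-standard codes that were identified."""
    return 100 * len(assigned & gold) / len(gold)

for name, codes in [("database", database), ("chart review", chart)]:
    print(f"{name}: accuracy {accuracy(codes, gold_std):.0f}%, "
          f"completeness {completeness(codes, gold_std):.0f}%")
```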
We compared the liver transplant patients who developed GVHD with those who did not with respect to recipient and donor factors and combinations of factors. For 2003–2004 we had 205 first-time liver transplant patients surviving >30 days. From this group, 4 (1.9%) developed GVHD. Compared to the control group, there were no significant differences in recipient age, recipient gender, donor age, donor gender, total ischemia time, donor-recipient human leukocyte antigen (HLA) mismatch, or donor-recipient age difference. Percentages of liver disease etiologies among the patients who developed GVHD were as follows: 16% (1/6) autoimmune hepatitis (AIH) (P = 0.003), 5.6% (3/54) alcoholic liver disease (ALD) (P = 0.057), and 7.1% (3/42) hepatocellular carcinoma (HCC) (P = 0.026). The incidence of GVHD in patients with glucose intolerance (either Type I or Type II diabetes mellitus [DM]) was significant (P = 0.022). Focusing only on patients with high-risk factors for GVHD during the years 2003–2005, we had 19 such patients. Four of these high-risk patients developed GVHD. Three of these 4 patients had received a donor liver with steatosis of degree ≥mild, compared to only 2 of the 15 high-risk patients who did not develop GVHD (P = 0.037). In conclusion, we have identified liver transplant patients with AIH or the combination of ALD, HCC, and glucose intolerance who receive a steatotic donor liver as being at high risk for developing GVHD. Liver Transpl 13:516–522, 2007. © 2007 AASLD. [source]

Developing tools for the safety specification in risk management plans: lessons learned from a pilot project. PHARMACOEPIDEMIOLOGY AND DRUG SAFETY, Issue 5 2008. Andrew J. P. Cooper BSc. Abstract. Purpose: Following the adoption of the ICH E2E guideline, risk management plans (RMP) defining the cumulative safety experience and identifying limitations in safety information are now required for marketing authorisation applications (MAA). A collaborative research project was conducted to gain experience with tools for presenting and evaluating data in the safety specification. This paper presents those tools found to be useful and the lessons learned from their use. Methods: Archive data from a successful MAA were utilised. Methods were assessed for demonstrating the extent of clinical safety experience, evaluating the sensitivity of the clinical trial data to detect treatment differences and identifying safety signals from adverse event and laboratory data to define the extent of safety knowledge with the drug. Results: The extent of clinical safety experience was demonstrated by plots of patient exposure over time. Adverse event data were presented using dot plots, which display the percentages of patients with the events of interest, the odds ratio, and 95% confidence interval. Power and confidence interval plots were utilised for evaluating the sensitivity of the clinical database to detect treatment differences. Box and whisker plots were used to display laboratory data. Conclusions: This project enabled us to identify new evidence-based methods for presenting and evaluating clinical safety data. These methods represent an advance in the way safety data from clinical trials can be analysed and presented. This project emphasises the importance of early and comprehensive planning of the safety package, including evaluation of the use of epidemiology data. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Pharyngeal dilation in cricopharyngeus muscle dysfunction and Zenker diverticulum. THE LARYNGOSCOPE, Issue 5 2010. Peter C. Belafsky MD.
Abstract. Objectives/Hypothesis: Prolonged obstruction at the level of the lower esophageal sphincter is associated with a dilated, poorly contractile esophagus. The association between prolonged obstruction at the level of the upper esophageal sphincter (UES) and dilation and diminished contractility of the pharynx is uncertain. The purpose of this investigation was to evaluate the association between prolonged obstruction at the level of the UES and dilation and diminished contractility of the pharynx. Study Design: Case-control study. Methods: The fluoroscopic swallow studies of all persons with cricopharyngeus muscle dysfunction (CPD) diagnosed between January 1, 2006 and December 31, 2008 were retrospectively reviewed from a clinical database. Three categories of CPD were defined: nonobstructing cricopharyngeal bars (CPBs), obstructing CPBs, and Zenker diverticulum (ZD). The primary outcome measure was the pharyngeal constriction ratio (PCR), a surrogate measure of pharyngeal strength on fluoroscopy. Secondary outcome measures included pharyngeal area in the lateral fluoroscopic view and UES opening. The outcome measures were compared between groups and to a cohort of nondysphagic age- and gender-matched controls with the analysis of variance. Results: A total of 100 fluoroscopic swallow studies were evaluated. The mean age (±standard deviation) of the cohort was 70 years (±10 years). Thirty-six percent were female. The mean PCR progressively increased, indicating diminishing pharyngeal strength, from the normal (0.08), to the nonobstructing CPB (0.13), to the obstructing CPB (0.22), to the ZD group (0.28) (P < .001 with trend for linearity). There was a linear increase in pharyngeal area from the normal (8.75 cm²) to the nonobstructing CPB (10.00 cm²), to the obstructing CPB (10.46 cm²), to the ZD group (11.82 cm²) (P < .01 with trend for linearity). Conclusions: The data suggest that there is an association between cricopharyngeus muscle dysfunction and progressive dilation and weakness of the pharynx. Laryngoscope, 2010 [source]

Low-dose weekly platinum-based chemoradiation for advanced head and neck cancer. THE LARYNGOSCOPE, Issue 2 2010. John M. Watkins MD. Abstract. Objectives/Hypothesis: The optimal concurrent chemoradiotherapy regimen for definitive treatment of locoregionally advanced head and neck cancer remains to be determined. The present investigation reports toxicities, disease control, patterns of failure, and survival outcomes in a large mature cohort of patients treated with low-dose weekly platinum-based concurrent chemoradiotherapy. Study Design: Retrospective single-institution series. Methods: Toxicity and outcome data for locoregionally advanced head and neck cancer patients treated with low-dose weekly platinum-based chemotherapy concurrent with standard fractionation radiotherapy were retrospectively collected and analyzed from a clinical database. Survival analysis methods, including Kaplan-Meier estimation and competing risks analysis, were used to assess locoregional disease control, freedom from failure, and overall survival. Results: Ninety-six patients were eligible for the present analysis. Nearly all patients had American Joint Committee on Cancer clinical stage III to IVB disease (99%). Severe acute toxicities included grade 3 mucositis (61%), grade 3/4 nausea (27%/1%), and grade 3 neutropenia (8%). Thirty-seven patients (38%) required hospitalization for a median of 7 days (range, 1–121).
Ninety-two percent of patients completed the fully prescribed course of radiotherapy, and 87% completed ≥6 cycles of chemotherapy. At a median survivor follow-up of 40 months (range, 8–68), 47% of patients were without evidence of disease recurrence. The estimated 4-year freedom from failure and overall survival were 48% and 58%, respectively. Initial site(s) of disease failure were locoregional (22 patients), locoregional and distant (five patients), and distant only (14 patients). Conclusions: Weekly low-dose platinum-based chemotherapy with full-dose daily radiotherapy is a tolerable alternative regimen for locoregionally advanced head and neck cancers, with comparable efficacy and patterns of failure to alternative regimens. Laryngoscope, 2010 [source]

Effect of an episode of critical illness on subsequent hospitalisation: a linked data study. ANAESTHESIA, Issue 2 2010. T. A. Williams. Summary: Healthcare utilisation can affect quality of life and is important in assessing the cost-effectiveness of medical interventions. A clinical database was linked to two Australian state administrative databases to assess the difference in incidence of healthcare utilisation of 19 921 patients who survived their first episode of critical illness. The number of hospital admissions and days of hospitalisation per patient-year was respectively 150% and 220% greater after than before an episode of critical illness (assessed over the same time period). This was the case regardless of age or type of surgery (i.e. cardiac vs non-cardiac). After adjusting for the ageing effect of the cohort as a whole, there was still an unexplained two- to four-fold increase in hospital admissions per patient-year after an episode of critical illness. We conclude that an episode of critical illness is a robust predictor of subsequent healthcare utilisation. [source]

Robust QT Interval Estimation – From Algorithm to Validation. ANNALS OF NONINVASIVE ELECTROCARDIOLOGY, Issue 2009. Joel Q. Xue Ph.D. Background: This article presents an effort to measure the QT interval with automatic computerized algorithms. The aims of the algorithms are consistency as well as accuracy. Multilead and multibeat information from a given segment of ECG is used for more consistent QT interval measurement. Methods: A representative beat is generated from a selected segment of each lead, and then a composite beat is formed from the representative beats of all independent leads. The end result of the QT measure is a so-called global QT measurement, which usually correlates with the longest QT interval in multiple leads. The individual lead QT interval was estimated by using the global measurement as a starting point, and then adapted to the signal of the particular lead and beat. In general, beat-by-beat QT measurement is more prone to noise, and therefore less reliable, than the global estimation. It is usually difficult to know whether differences in beat-by-beat QT intervals are due to true physiological change or to noise fluctuation. Results: The algorithm was tested independently on a clinical database. It was also tested against action potential durations (APD) generated by a Cell-to-ECG forward-modeling-based simulation. The modeling approach provided an objective test for the QT estimation and allowed the QT measurement to be evaluated against APD. The mean error between the algorithm and cardiologist QT intervals is 3.95 ± 5.5 ms, based on the large clinical trial database consisting of 15,910 ECGs.
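A minimal sketch of the multilead, multibeat idea outlined in the Methods above: build a representative beat per lead, combine the leads into a composite beat, and take a single global QT from the composite. This is not the authors' algorithm; beat alignment is assumed to have been done already, and the QRS-onset index, the RMS lead combination and the threshold-based T-end rule used here are placeholder assumptions.

```python
import numpy as np

FS = 500  # sampling rate in Hz (assumed)

def representative_beat(beats):
    """Median across pre-aligned beats of one lead (robust to outlier beats)."""
    return np.median(np.stack(beats), axis=0)

def composite_beat(rep_beats_by_lead):
    """Combine representative beats of all leads, here simply by RMS amplitude."""
    stacked = np.stack(rep_beats_by_lead)
    return np.sqrt(np.mean(stacked ** 2, axis=0))

def global_qt_ms(composite, qrs_onset_idx):
    """Crude global QT: QRS onset to the point where the composite falls below
    5% of its post-QRS peak (a placeholder T-end rule, not the published one)."""
    tail = composite[qrs_onset_idx:]
    below = np.where(tail < 0.05 * tail.max())[0]
    t_end = below[below > np.argmax(tail)][0]  # first crossing after the T peak
    return 1000.0 * t_end / FS

# Usage with toy data: 2 leads x 3 aligned beats of 600 samples each.
rng = np.random.default_rng(0)
t = np.arange(600) / FS
toy_beat = np.exp(-((t - 0.35) ** 2) / 0.002)  # crude stand-in for a T wave
leads = [[toy_beat + 0.01 * rng.standard_normal(600) for _ in range(3)]
         for _ in range(2)]
reps = [representative_beat(b) for b in leads]
print(f"global QT ≈ {global_qt_ms(composite_beat(reps), qrs_onset_idx=50):.0f} ms")
```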
The mean error between QT intervals and maximum APD is 17 ± 2.4, and the correlation coefficient is 0.99. Conclusions: The global QT interval measurement method presented in this study shows very satisfactory results against the CSE database and a large clinical trial database. The modeling test approach used in this study provides an alternative "gold standard" for QT interval measurement. [source]

Fast track surgery: A clinical audit. AUSTRALIAN AND NEW ZEALAND JOURNAL OF OBSTETRICS AND GYNAECOLOGY, Issue 2 2010. Jonathan Carter. Background: Fast track surgery is a concept that utilises a variety of techniques to reduce the surgical stress response, allowing a shortened length of stay, improved outcomes and decreased time to full recovery. Aims: To evaluate a peri-operative Fast Track Surgical Protocol (FTSP) in patients referred for abdominal surgery. Methods: All patients undergoing a laparotomy over a 12-month period were entered prospectively on a clinical database. Data were retrospectively analysed. Results: Over the study period, 72 patients underwent a laparotomy. Average patient age was 54 years, and average weight and BMI were 67.2 kg and 26, respectively. Sixty-three (88%) patients had a vertical midline incision (VMI). There were no intraoperative blood transfusions. The median length of stay (LOS) was 3.0 days. Thirty-eight patients (53%) were discharged on or before postoperative day 3, seven (10%) of whom were discharged on postoperative day 2.
On stepwise regression analysis, the following were found to be independently associated with reduced LOS: ability to tolerate early enteral nutrition, good performance status, use of a COX inhibitor and transverse incision. In comparison with colleagues at the SGOG not undertaking FTS for their patients, the authors' LOS was lower and the RANZCOG modified Quality Indicators (QIs) did not demonstrate excess morbidity. Conclusions: Patients undergoing fast track surgery can be discharged from hospital with a reduced LOS, without an increased readmission rate and with outcomes comparable to those of non-fast-tracked patients. [source]

Automated quality evaluation of digital fundus photographs. ACTA OPHTHALMOLOGICA, Issue 6 2009. Herman Bartling. Abstract. Purpose: Retinal images acquired by means of digital photography are often used for evaluation and documentation of the ocular fundus, especially in patients with diabetes, glaucoma or age-related macular degeneration. The clinical usefulness of an image is highly dependent on its quality. We set out to develop and evaluate an automatic method of evaluating the quality of digital fundus photographs. Methods: A method for making a numerical quantification of image sharpness and illumination was developed using Matlab™ image analysis functions. Based on their sharpness and illumination measures, 1000 fundus photographs, randomly selected from a clinical database, were assigned to four predefined quality groups (not acceptable, acceptable, good, very good). Six independent observers, comprising three experienced ophthalmologists and three ophthalmic nurses with extensive experience in fundus image acquisition, classified a selection of 100 of these images into the corresponding quality groups. Results: Automatic quality evaluation was more sensitive than evaluation by human observers in terms of ability to discriminate between good and very good images. The median concordance between the six human observers and the automatic evaluation was substantial (kappa = 0.64). Conclusions: The proposed method provides an objective quality assessment of digital fundus photographs which agrees well with evaluations made by qualified human observers and which may be useful in clinical practice. [source]

Results from the International Cataract Surgery Outcomes Study. ACTA OPHTHALMOLOGICA, Issue thesis 2 2007. Jens Christian Norregaard MD. Abstract: It is widely accepted that cataract extraction with intraocular lens implantation is a highly effective and successful procedure. However, quality assessments and studies of effectiveness should still be undertaken. As with any surgical treatment modality, complications may occur, leading to suboptimal outcomes, additional health costs and deterioration in patients' functional capacity. International variation in clinical practice patterns and outcomes can serve as important pointers in the attempt to identify areas amenable to improvements in quality and cost-effectiveness. Once demonstrated, similar clinical results obtained in different health care systems can improve the level of confidence in a clinical standard against which the quality of care can be evaluated. The International Cataract Surgery Outcomes Study was established in 1992. The objective of this international comparative research project was to compare cataract management, outcomes of surgery and quality of care in four international sites. The study was conducted in the 1990s, since when many developments and refinements have emerged within cataract surgery.
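The fundus-photograph study above scores each image for sharpness and illumination and then bins the scores into four predefined quality groups. A rough sketch of that general idea follows; the original work used Matlab image analysis functions, so this Python/NumPy version, the particular metrics and the thresholds are assumptions rather than the published method.

```python
import numpy as np

def sharpness_score(gray):
    """Mean gradient magnitude: blurry images score low."""
    gy, gx = np.gradient(gray.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def illumination_score(gray):
    """Fraction of pixels inside a usable intensity band (neither dark nor saturated)."""
    g = gray.astype(float) / 255.0
    return float(np.mean((g > 0.1) & (g < 0.9)))

def quality_group(gray, sharp_thresholds=(2.0, 4.0, 6.0), illum_min=0.6):
    """Assign one of the four predefined groups from the two scores (thresholds invented)."""
    s, i = sharpness_score(gray), illumination_score(gray)
    if i < illum_min or s < sharp_thresholds[0]:
        return "not acceptable"
    if s < sharp_thresholds[1]:
        return "acceptable"
    if s < sharp_thresholds[2]:
        return "good"
    return "very good"

# Usage with a synthetic 8-bit "image" (random texture stands in for a fundus photo).
rng = np.random.default_rng(1)
img = rng.integers(60, 200, size=(256, 256)).astype(np.uint8)
print(quality_group(img))
```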
The actual figures reported in this thesis may no longer be of specific relevance as a decade has passed since their collection. However, the research questions and methods used in the study are still highly important and justify the publication of this report. The report deals with problems related to quality assessment, benchmarking, and the establishment and design of nationwide clinical databases – issues that are currently the focus of much attention. Moreover, the problems related to cross-national comparisons are increasingly relevant as more international databases are established. The study makes suggestions on how to report and compare objective as well as subjective criteria for surgery. The issue of how to report subjective criteria is a particular subject of current discussion. Four sites with high-quality health care systems were examined in this study: the USA, Denmark, the Province of Manitoba (Canada), and Barcelona (Spain). The design of the international research programme was based on methods developed by the US National Cataract Surgery Outcomes Study conducted by the US Cataract Patients Outcomes Research Team. The International Cataract Surgery Outcomes Study comprised three separate studies: a survey of ophthalmologists, a prospective cohort study, and a retrospective register-based cohort study. The survey study was based on data generated by a self-administered questionnaire completed by ophthalmologists in the four study areas. The questionnaire examined routine clinical practice involving patients considered for cataract surgery, and included questions on anaesthesia, monitoring and surgical techniques. The prospective cohort study was a large-scale, longitudinal observational study of patients undergoing first-eye cataract surgery in each study site. Patients were sampled consecutively from multiple clinics and followed for 4 months postoperatively. The retrospective cohort study was based on the Danish National Patient Register and claims data from the USA. This study could not be carried out in Barcelona or Manitoba as no suitable administrative databases were available. The papers based on register databases deal with retinal detachment and endophthalmitis but are not included in this thesis as the material was previously reported in my PhD thesis. The application of the studies was highly co-ordinated among the four sites, and similar methods and instruments were used for data collection. The development of the data collection strategy, questionnaires, clinical data forms and data analyses was co-ordinated through weekly telephone conferences, annual in-person conferences, correspondence by mail or fax, and the exchange of SAS programs and data files via the Internet. The survey study was based on responses from 1121 ophthalmologists in the four sites, and results were presented in two papers. Within the previous year the participating ophthalmologists had performed a total of 212 428 cataract surgeries. With regard to preoperative ophthalmic testing, the present study reveals that refraction, fundus examination and A-scanning were performed routinely by most surgeons in all four sites. Other tests were reported to be performed routinely by some surgeons. It is unclear why any surgeon would use these other tests routinely in cataract patients with no ocular comorbidity.
It appears that if this recommendation from the US Clinical Practice Guidelines Panel were broadly accepted, the use of these procedures and the costs of care could be reduced, especially in Barcelona, the USA and Canada. Restricted use of medical screening tests was reported in Denmark. If this restricted screening were to be implemented in the USA, Canada and Barcelona, it would have significant resource implications. The most striking finding concerned the difference in monitoring practice between Denmark and each of the other three sites. In Denmark, monitoring equipment is seldom used and only occasionally is an anaesthesiologist present during cataract surgery. By contrast, in the other study sites, the presence of an anaesthesiologist using monitoring equipment is the norm. Adopting the Danish model in other sites would potentially yield significant cost savings. The results represent part of the background data used to inform the decision to conduct the two large-scale, multicentre Studies of Medical Testing for Cataract Surgery. The current study is an example of how surveys of clinical practice can pinpoint topics that need to be examined in randomized clinical trials. For the second study, 1422 patients were followed from prior to surgery until 4 months postoperatively. Preoperatively, a medical history was obtained and an ophthalmic examination of each patient performed. After consent had been obtained, patients were contacted for an in-depth telephone interview. The interview was repeated 4 months postoperatively. The interview included the VF-14, an index of functional impairment in patients with cataract. Perioperative data were available for 1344 patients (95%). The 4-month postoperative interview and clinical examination were completed by 1284 patients (91%). The main reasons for not re-evaluating patients were: cancellation of surgery (3%), refusal to participate (2%), loss to follow-up (1%), and death or being too sick (1%). The results have been presented in several papers, of which four are included in this thesis. One paper compared the preoperative clinical status of patients across the four sites and showed differences in both visual acuity (VA) and VF-14 measures. The VF-14 is a questionnaire scoring disability related to vision. The findings suggest that indications for surgery in comparable patients were similar in the USA and Denmark and were more liberal than in Manitoba and Barcelona. The results highlight the need to control for patient case mix when making comparisons among providers in a clinical database. This information is important when planning national databases that aim to compare quality of care. A feasible method may be to use one of the recently developed systems for case severity grading before cataract surgery. In another paper, perioperative clinical practice and rates of early complications following cataract surgery were compared across the four health care systems. Once again, the importance of controlling for case mix was demonstrated. Significant differences in clinical practice patterns were revealed, suggesting a general trend towards slower diffusion of new medical technology in Europe compared with North America. There were significant differences across sites in rates of intra- and early postoperative events. The most important differences were seen for rates of capsular rupture, hyphaema, corneal oedema and elevated pressure.
Rates of these adverse events might potentially be minimized if factors responsible for the observed differences could be identified. Our results point towards the need for further research in this area. In a third paper, 4-month VA outcomes were compared across the four sites. When mean postoperative VA or crude proportions of patients with a visual outcome of <0.67 were compared across sites, a much poorer outcome was seen in Barcelona. However, higher age, poorer general health status, lower preoperative VA and presence of ocular comorbidity were found to be significant risk factors associated with increased likelihood of poorer postoperative VA. The proportions of patients with these risk factors varied across sites. After controlling for the different distributions of these factors, no significant difference remained across the four sites regarding risk of a poor visual outcome. Once again the importance of controlling for case mix was demonstrated. In the fourth paper, we examined the postoperative VF-14 score as a measure of visual outcomes for cataract surgery in health care settings in four countries. Controlling for case mix was also necessary for this variable. After controlling for patient case mix, the odds for achieving an optimal visual function outcome were similar across the four sites. Age, gender and coexisting ocular pathology were important predictors of visual functional outcome. Despite what seemed to be an optimal surgical outcome, a third of patients still experienced visual disabilities in everyday life. A measure of the VF-14 might help to elucidate this issue, especially in any study evaluating the benefits of cataract surgery in a public health care context. [source]

Side-effects of allergen-specific immunotherapy. A prospective multi-centre study. CLINICAL & EXPERIMENTAL ALLERGY, Issue 3 2006. Summary. Background and objective: The safety of allergen-specific immunotherapy (SIT) is a parameter of great interest in the overall assessment of the treatment. A clinical database was developed in order to obtain early warnings of changes in the frequency and severity of side-effects and sufficient data for the evaluation of possible risk factors. Methods: During a 3-year period, four allergy centres in Copenhagen, Denmark, included data from all patients initiating SIT to a common database. Information on initial allergic symptoms, allergens used for treatment, treatment regimens and systemic side-effects (SSEs) during the build-up phase was collected. Results: A total of 1038 patients received treatment with 1709 allergens (timothy, birch, mugwort, house dust mite (HDM), cat, and wasp and bee venom), 23 047 injections in total. Most SIT patients completed the updosing phase without side-effects, but there was a significant difference between allergens: wasp (89%), birch (82%), HDM (81%), cat (74%) and grass (70%) (P=0.004). A total of 582 SSEs were registered in 341 patients. Most side-effects were mild grade 2 reactions (78%). A difference in severity between allergens was observed (P=0.02), with grass giving most problems. The type of allergen but not patient- or centre-related parameters seemed predictive of side-effects. Conclusions: Allergen extracts differ in their tendency to produce side-effects. Multi-centre studies like the present one allow more patients to be evaluated, and thereby provide a more efficient surveillance of side-effects.
Online Internet-based registration to a central national database of every allergen injection would be an even more powerful tool for evaluation of risk factors and surveillance of side-effects. [source]

Does deprivation of area of residence influence the incidence, tumour site or T stage of cutaneous malignant melanoma? A population-based, clinical database study. CLINICAL & EXPERIMENTAL DERMATOLOGY, Issue 5 2007. Summary: This study aimed to document the incidence of malignant melanoma at specific subsites in men and women, stratified by deprivation of area of residence in southeast England, and to explore the association between deprivation and tumour thickness at diagnosis. Data were extracted on 6468 cases from the Thames Cancer Registry for the years 1998 to 2002, and on 508 cases from the clinical database of the Skin Tumour Unit, St Thomas' Hospital, for the years 1996 to 2004. The postcode of residence was used to assign quintiles of deprivation based on the income domain stated in the Indices of Deprivation 2000. For both males and females, the incidence was higher for those living in the most affluent areas. The trunk was the most common site in males and the lower limbs in females. All sites showed an affluence gradient, although this was least pronounced for head and neck tumours. Distribution of T stage at diagnosis did not differ by deprivation of area of residence. [source]

HURTHLE CELL NEOPLASM OF THE THYROID GLAND. ANZ JOURNAL OF SURGERY, Issue 3 2008. Mohammed Ahmed. Background: A clinicopathological analysis and long-term follow-up of 32 patients with Hurthle cell neoplasm (HCN) was undertaken to contrast the clinical and histological features of benign versus malignant HCN of the thyroid and to examine the effect of treatment on the outcome. Methods: This is a retrospective study of 32 patients with HCN who were identified out of an archival clinical/pathological/imaging database of 3752 thyroid cancer patients seen between 1976 and June 2006. All patients underwent thyroid surgery. Data for the non-surgical treatment along with follow-up were also analysed. Results: Seventeen patients were classified as malignant HCN (MHCN) and 15 as benign HCN (BHCN). Among the MHCN, there were 11 women and 6 men, whereas among the BHCN there were 14 women and 1 man. Three patients designated MHCN presented with metastases, one with pulmonary metastases and two others with skeletal metastases who developed lung metastases 9–19 months later. The mean tumour size was 4.43 ± 0.66 cm for MHCN, and 2.57 ± 0.32 cm for BHCN (P = 0.03). Multicentric tumour foci were evident in five cases (29%) of MHCN but none among the BHCN (P = 0.03). At neck exploration, cervical lymph node dissection was carried out in nine MHCN patients, with findings of tumour metastases in 33%. Postoperatively, three MHCN patients had no thyroid remnant on ultrasound and computed tomography of the neck and undetectable serum thyroglobulin; these were considered to be in remission. Fourteen other MHCN patients with postoperative thyroid remnant and/or distant metastases received 131I treatment. Eight of these patients had negative whole-body scans after 131I treatment and undetectable thyroglobulin. Accordingly, 11 MHCN patients (64.7%) showed evidence of remission and 6 patients did not respond to 131I treatment. After a mean follow-up of 35 months, all BHCN patients are alive with no evidence of disease. Of the MHCN, 11 (64.7%) were in remission and 35% had evidence of persistence/recurrence.
One patient who had a recurrence has died. A lack of effectiveness of 131I therapy in two patients with distant metastases is an important finding. Conclusion: Features of MHCN consisted of a large tumour size, unequivocal capsular and vascular invasion, multicentric tumour foci, metastatic lymph node deposits in one-third of patients and the presence of distant metastasis in a few. Findings of dominant Hurthle cell cytology in a fine-needle aspiration biopsy from a thyroid nodule should prompt surgical resection of the lesion to assess malignancy. [source]