Clinical Decisions (clinical + decision)


Terms modified by Clinical Decisions

  • clinical decision making
  • clinical decision rule
  • clinical decision support
  • clinical decision support system

  • Selected Abstracts


    Should the Heart Rate of Hypertensive Patients Influence Clinical Decisions?

    JOURNAL OF CLINICAL HYPERTENSION, Issue 12 2008
    Thomas G. Pickering MD, DPhil
    First page of article [source]


    Immediate Implant Placement: Clinical Decisions, Advantages, and Disadvantages

    JOURNAL OF PROSTHODONTICS, Issue 7 2008
    Monish Bhola DDS
    Abstract Implant placement in fresh extraction sockets in conjunction with appropriate guided bone regeneration is well documented. The decision to extract teeth and replace them with immediate implants is determined by many factors, which ultimately affect the total treatment plan. The goal of this article is to review some of the important clinical considerations when selecting patients for immediate implant placement, and to discuss the advantages and disadvantages of this mode of therapy. [source]


    Small and Medium-Sized Congenital Nevi in Children: A Comparison of the Costs of Excision and Long-Term Follow-Up

    DERMATOLOGIC SURGERY, Issue 12 2009
    Fernando Alfageme Roldán MD
    BACKGROUND Clinical decisions on whether to follow up or remove small and medium congenital melanocytic nevi (SMCMN) in children have cost implications that have not been studied. OBJECTIVES To compare the costs of excision of SMCMN in children with lifelong follow-up in a tertiary center. METHODS AND MATERIALS We developed models for the evaluation of the costs of excision and long-term follow-up. We retrospectively collected data on 113 consecutive excised SMCMN (105 single-step interventions and 8 multiple-step interventions) from the medical records of our pediatric dermatology unit from 2001 to 2007 and calculated and compared the costs (direct and indirect) of surgery and follow-up. RESULTS The mean ± standard deviation and total cohort costs for single-step interventions were €1,504.73 ± 198.33 and €157,996.20, respectively. Median and cohort lifelong follow-up costs were similar if performed every 4 years (€1,482.66 ± 34.98 and €156,679.63). For multiple-step interventions (3 or 4 steps), surgery costs were similar to those of annual lifelong follow-up. In the case of two-step surgery, costs were similar to lifelong follow-up every 2 years. CONCLUSIONS An analysis of the costs of surgery and long-term follow-up in children with SMCMN is possible. Although the clinical judgment of the dermatologist and parental opinion are the main determinants in the management of SMCMN, costs should also be taken into account. [source]
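
    The comparison at the heart of this abstract is simple arithmetic: a one-off surgical cost against a recurring follow-up cost accrued over the patient's remaining life. A minimal sketch of that trade-off, where only the €1,504.73 figure comes from the abstract; the per-visit cost, follow-up horizon, and helper function are illustrative assumptions:

    ```python
    # Sketch: one-off excision cost vs. lifelong periodic follow-up cost.
    # All inputs except the excision cost are invented for illustration.

    def lifelong_followup_cost(cost_per_visit: float,
                               years_of_followup: float,
                               visit_interval_years: float) -> float:
        """Total cost of periodic follow-up visits over the given horizon."""
        n_visits = int(years_of_followup // visit_interval_years)
        return n_visits * cost_per_visit

    excision_cost = 1504.73   # mean single-step surgery cost in EUR (from the abstract)
    followup_cost = lifelong_followup_cost(cost_per_visit=85.0,   # assumed per-visit cost
                                           years_of_followup=70,  # assumed remaining life expectancy
                                           visit_interval_years=4)
    print(f"excision {excision_cost:.2f} EUR vs 4-yearly follow-up {followup_cost:.2f} EUR")
    ```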


    Clinical utility of computed tomography in the assessment of dementia: a memory clinic study

    INTERNATIONAL JOURNAL OF GERIATRIC PSYCHIATRY, Issue 5 2004
    Kelly A. Condefer
    Abstract Objective To define the influence of computed tomography (CT) on clinical decision-making in the outpatient evaluation of dementia. Design A case series in which two physicians reviewed standardised data extracted from clinical records, first blind to CT results, and then with CT results. Clinical decisions made with and without the input of CT were compared. The study was based in an outpatient referral centre for the assessment of memory disorders and dementia. The study involved 146 participants who were diagnosed with dementia after their first clinic visit, had Mini Mental State Examination scores >12, were aged >65 years, and had no history of neurologic disease. Results CT impacted on diagnosis in an average of 12% (±2), and on treatment plan in 11% (±2) of cases. Physicians predicted a priori which cases CT might influence with an average sensitivity of 28% (±2), and specificity of 78.5% (±1.5). There was no statistically significant relationship between diagnostically uncertain cases and helpful CT scans [average χ² = 1.121 (±1.116), p = ns]. Blind to CT, physicians appropriately identified cerebrovascular disease with an average sensitivity of 63% (±3), and specificity of 93.5% (±3.5). Conclusions In the outpatient setting, CT may be expected to impact on diagnosis and treatment of dementia in 10% to 15% of cases. Memory clinic physicians recognise and treat cerebrovascular risk factors with reasonable sensitivity and specificity without the input of CT. Copyright © 2004 John Wiley & Sons, Ltd. [source]


    The management of heparin-induced thrombocytopenia

    BRITISH JOURNAL OF HAEMATOLOGY, Issue 3 2006
    David Keeling
    Abstract The Haemostasis and Thrombosis Task Force of the British Committee for Standards in Haematology has produced a concise practical guideline to highlight the key issues in the management of heparin-induced thrombocytopenia (HIT) for the practicing physician in the UK. The guideline is evidence-based and levels of evidence are included in the body of the article. All patients who are to receive heparin of any sort should have a platelet count on the day of starting treatment. For patients who have been exposed to heparin in the last 100 d, a baseline platelet count and a platelet count 24 h after starting heparin should be obtained. For all patients receiving unfractionated heparin (UFH), alternate-day platelet counts should be performed from days 4 to 14. For surgical and medical patients receiving low-molecular-weight heparin (LMWH), platelet counts should be performed every 2–4 d from days 4 to 14. Obstetric patients receiving treatment doses of LMWH should have platelet counts performed every 2–4 d from days 4 to 14. Obstetric patients receiving prophylactic LMWH are at low risk and do not need routine platelet monitoring. If the platelet count falls by 50% or more, or falls below the laboratory normal range, and/or the patient develops new thrombosis or skin allergy between days 4 and 14 of heparin administration, HIT should be considered and a clinical assessment made. If the pretest probability of HIT is high, heparin should be stopped and an alternative anticoagulant started at full dosage, unless there are significant contraindications, while laboratory tests are performed. Platelet activation assays using washed platelets have a higher sensitivity than platelet aggregation assays but are technically demanding and their use should be restricted to laboratories experienced in the technique. Non-expert laboratories should use an antigen-based assay of high sensitivity. Only IgG class antibodies need to be measured. Useful information is gained by reporting the actual optical density, inhibition by high concentrations of heparin, and the cut-off value for a positive test rather than simply reporting the test as positive or negative. In making a diagnosis of HIT, the clinician's estimate of the pretest probability of HIT, together with the type of assay used and its quantitative result (enzyme-linked immunosorbent assay, ELISA, only), should be used to determine the overall probability of HIT. Clinical decisions should be made following consideration of the risks and benefits of treatment with an alternative anticoagulant. For patients with strongly suspected or confirmed HIT, heparin should be stopped and full-dose anticoagulation with an alternative, such as lepirudin or danaparoid, commenced (in the absence of a significant contraindication). Warfarin should not be used until the platelet count has recovered. When introduced in combination with warfarin, an alternative anticoagulant must be continued until the International Normalised Ratio (INR) is therapeutic for two consecutive days. Platelets should not be given for prophylaxis. Lepirudin, at doses adjusted to achieve an activated partial thromboplastin time (APTT) ratio of 1·5–2·5, reduces the risk of reaching the composite endpoint of limb amputation, death or new thrombosis in patients with HIT and HIT with thrombosis (HITT). The risk of major haemorrhage is directly related to the APTT ratio, lepirudin levels and serum creatinine levels. The patient's renal function needs to be taken into careful consideration before treatment with lepirudin is commenced. Severe anaphylaxis occurs rarely in recipients of lepirudin and is more common in previously exposed patients. Danaparoid in a high-dose regimen is equivalent to lepirudin in the treatment of HIT and HITT. Danaparoid at prophylactic doses is not recommended for the treatment of HIT or HITT. Patients with previous HIT who are antibody negative (usually so after >100 d) who require cardiac surgery should receive intraoperative UFH in preference to other anticoagulants that are less validated for this purpose. Pre- and postoperative anticoagulation should be with an anticoagulant other than UFH or LMWH. Patients with recent or active HIT should have the need for surgery reviewed and delayed until the patient is antibody negative, if possible; they should then proceed as above. If deemed appropriate, early surgery should be carried out with an alternative anticoagulant. We recommend discussion of these complex cases requiring surgery with an experienced centre. The diagnosis must be clearly recorded in the patient's medical record. [source]
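
    The guideline's monitoring advice condenses into a screening trigger that can be written as a small decision rule. A minimal sketch, where the 50% fall and the day-4-to-14 window come from the abstract; the function itself and the assumed lower limit of normal (150 × 10⁹/L, a typical laboratory value) are illustrative, not part of the guideline text:

    ```python
    # Sketch of the guideline's screening trigger for suspected HIT.
    # The default lower limit of normal is an assumed typical lab value.

    def hit_assessment_indicated(baseline_count: int,
                                 current_count: int,
                                 day_of_heparin: int,
                                 lower_limit_normal: int = 150,
                                 new_thrombosis_or_skin_allergy: bool = False) -> bool:
        """Return True if a clinical assessment for HIT should be made."""
        fallen_50_percent = current_count <= 0.5 * baseline_count
        below_normal = current_count < lower_limit_normal
        in_window = 4 <= day_of_heparin <= 14
        return in_window and (fallen_50_percent or below_normal
                              or new_thrombosis_or_skin_allergy)

    # >50% fall on day 7 of heparin: assessment indicated.
    print(hit_assessment_indicated(baseline_count=320, current_count=140,
                                   day_of_heparin=7))   # True
    ```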


    Sadé and Tos classifications of the tympanic membrane: not reliable?

    CLINICAL OTOLARYNGOLOGY, Issue 3 2006
    D. Pothier
    Objectives. Retraction pockets of the tympanic membrane may progress to cholesteatoma.1 To assess and describe the condition of retraction pockets and to monitor their progress, classification systems are used; this classification is of particular importance when follow-up is conducted by more than one clinician. The Sadé classification is used for the pars tensa (PT) and the Tos classification system for the pars flaccida (PF). Although widely used, neither method has been validated or evaluated for reliability when applied by different users. Method. A series of 20 standardised slides of retraction pockets (10 each of PT and PF retractions) was shown to 22 otolaryngologists. Participants were asked to classify each lesion using the appropriate classification according to supplied, printed definitions. Results. Overall inter-rater reliability was very low (mean kappa of 0.39), with very high levels of variability. Levels of agreement between raters were lower when classifying PF retractions than PT retractions (P = 0.01). No slide was classified unanimously as a single grade. Some slides even had an equal distribution of classifications spread between all grades of the classification system. No significant difference in inter-rater reliability was found between grades (P = 0.7). Conclusions. This is the first attempt at validation of the Tos and Sadé classification systems for retraction pockets. Despite the reliance on classification of retraction pockets, it would appear from this study that the value of these classification systems may be limited. Clinical decisions based on classifications may be flawed, particularly when made on the basis of a classification made by another clinician. Reference 1 Akyildiz N., Akbay C., Ozgirgin O.N., et al. (1993) The role of retraction pockets in cholesteatoma development: an ultrastructural study. Ear Nose Throat J. 72, 210–212 [source]
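
    The headline statistic here is a weighted kappa, which penalises disagreements by their distance on the ordinal grading scale. A minimal sketch of one rater pair using scikit-learn; the grades below are invented, and the study averaged such pairwise kappas across its 22 raters:

    ```python
    # Sketch: inter-rater agreement on an ordinal grading scale via weighted kappa.
    from sklearn.metrics import cohen_kappa_score

    rater_a = [1, 2, 2, 3, 4, 1, 3, 2, 4, 3]  # e.g. Sade grades from one otolaryngologist
    rater_b = [1, 3, 2, 2, 4, 2, 3, 1, 3, 3]  # the same slides graded by a second rater

    # Linear weights penalise disagreements by their distance on the scale,
    # which suits graded classifications such as the Sade/Tos stages.
    kappa = cohen_kappa_score(rater_a, rater_b, weights="linear")
    print(f"weighted kappa = {kappa:.2f}")   # values near 0.4 mirror the study's low agreement
    ```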


    Charlson Index Is Associated with One-year Mortality in Emergency Department Patients with Suspected Infection

    ACADEMIC EMERGENCY MEDICINE, Issue 5 2006
    Scott B. Murray MD
    Abstract Objectives: A patient's baseline health status may affect the ability to survive an acute illness. Emergency medicine research requires tools to adjust for confounders such as comorbid illnesses. The Charlson Comorbidity Index has been validated in many settings but not extensively in the emergency department (ED). The purpose of this study was to examine the utility of the Charlson Index as a predictor of one-year mortality in a population of ED patients with suspected infection. Methods: The comorbid illness components of the Charlson Index were prospectively abstracted from the medical records of adult (age older than 18 years) ED patients at risk for infection (indicated by the clinical decision to obtain a blood culture) and weighted. Charlson scores were grouped into four previously established indices: 0 points (none), 1–2 points (low), 3–4 points (moderate), and ≥5 points (high). The primary outcome was one-year mortality assessed using the National Death Index and medical records. Cox proportional-hazards ratios were calculated, adjusting for age, gender, and markers of 28-day in-hospital mortality. Results: Between February 1, 2000, and February 1, 2001, 3,102 unique patients (96% of eligible patients) were enrolled at an urban teaching hospital. Overall one-year mortality was 22% (667/3,102). Mortality rates increased with increasing Charlson scores: none, 7% (95% confidence interval [CI] = 5.4% to 8.5%); low, 22% (95% CI = 19% to 24%); moderate, 31% (95% CI = 27% to 35%); and high, 40% (95% CI = 36% to 44%). Controlling for age, gender, and factors associated with 28-day mortality, and using the "none" group as a reference group, the Charlson Index predicted mortality as follows: low, odds ratio of 2.0; moderate, odds ratio of 2.5; and high, odds ratio of 4.7. Conclusions: This study suggests that the Charlson Index predicts one-year mortality among ED patients with suspected infection. [source]
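
    The stratification logic in this abstract is a simple mapping of the weighted Charlson score into four bands. A sketch of that mapping, with the observed one-year mortality per stratum taken from the abstract for reference only:

    ```python
    # Sketch: map a weighted Charlson comorbidity score onto the four risk
    # strata used in the abstract (0 none, 1-2 low, 3-4 moderate, >=5 high).

    def charlson_stratum(score: int) -> str:
        if score == 0:
            return "none"
        if score <= 2:
            return "low"
        if score <= 4:
            return "moderate"
        return "high"

    # One-year mortality observed per stratum in the study (from the abstract).
    observed_mortality = {"none": 0.07, "low": 0.22, "moderate": 0.31, "high": 0.40}

    for score in [0, 2, 4, 7]:
        stratum = charlson_stratum(score)
        print(score, stratum, observed_mortality[stratum])
    ```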


    Low health-related quality of life is associated with all-cause mortality in patients with diabetes on haemodialysis: the Japan Dialysis Outcomes and Practice Pattern Study

    DIABETIC MEDICINE, Issue 9 2009
    Y. Hayashino
    Abstract Aims: Whether health-related quality of life (HRQoL) accurately predicts mortality in patients with extremely low HRQoL as a result of diabetic complications is unclear. We investigated the impact of HRQoL on mortality risk in patients with diabetes on haemodialysis. Methods: Data from the Dialysis Outcomes and Practice Patterns Study (DOPPS) were analysed for randomly selected patients receiving haemodialysis in Japan. Information regarding the diagnosis of diabetes and clinical events during follow-up was abstracted from the medical records at baseline, and HRQoL was assessed by a self-reported short form (SF)-36 questionnaire. The association between the physical component score and mental component score of the SF-36 and mortality risk was analysed using a Cox proportional hazards model. Results: Data from 527 patients with diabetes on haemodialysis were analysed. The age-adjusted mortality hazard ratio of having a physical component score greater than or equal to the median was 0.27 [95% confidence interval (CI) 0.08–0.96], and the multivariable-adjusted mortality hazard ratio of having a mental component score greater than or equal to the median was 1.21 (95% CI 0.44–3.35). Conclusions: The physical component score derived from the SF-36 is an independent risk factor for mortality in patients with diabetes on haemodialysis, who generally had very low HRQoL scores. Baseline mental component score was not predictive of mortality. Patient self-reporting regarding the physical component of health status may aid in risk stratification and clinical decision making for patients with diabetes on haemodialysis. [source]
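
    The analysis dichotomises each SF-36 component score at the median and fits a Cox proportional hazards model. A minimal sketch of that model using the lifelines library; the column names, covariates, and data below are synthetic assumptions, not the DOPPS data:

    ```python
    # Sketch: Cox proportional hazards for survival vs. SF-36 component scores,
    # dichotomised at the median as in the abstract. All data are synthetic.
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(0)
    n = 200
    df = pd.DataFrame({
        "followup_years": rng.exponential(3.0, n),     # observed time
        "died": rng.integers(0, 2, n),                 # event indicator
        "pcs_above_median": rng.integers(0, 2, n),     # physical component score flag
        "mcs_above_median": rng.integers(0, 2, n),     # mental component score flag
        "age": rng.normal(65, 10, n),
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="followup_years", event_col="died")
    cph.print_summary()   # exp(coef) gives hazard ratios, cf. the study's 0.27 for PCS
    ```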


    Achieving Quality in Clinical Decision Making: Cognitive Strategies and Detection of Bias

    ACADEMIC EMERGENCY MEDICINE, Issue 11 2002
    Pat Croskerry MD
    Clinical decision making is a cornerstone of high-quality care in emergency medicine. The density of decision making is unusually high in this unique milieu, and a combination of strategies has necessarily evolved to manage the load. In addition to the traditional hypothetico-deductive method, emergency physicians use several other approaches, principal among which are heuristics. These cognitive short-cutting strategies are especially adaptive under the time and resource limitations that prevail in many emergency departments (EDs), but occasionally they fail. When they do, we refer to them as cognitive errors. They are costly but highly preventable. It is important that emergency physicians be aware of the nature and extent of these heuristics and biases, or cognitive dispositions to respond (CDRs). Thirty are catalogued in this article, together with descriptions of their properties as well as the impact they have on clinical decision making in the ED. Strategies are delineated in each case, to minimize their occurrence. Detection and recognition of these cognitive phenomena are a first step in achieving cognitive de-biasing to improve clinical decision making in the ED. [source]


    Clinical decision-making in the context of chronic illness

    HEALTH EXPECTATIONS, Issue 1 2000
    Susan Watt DSW CSW
    This paper develops a framework to compare clinical decision making in relation to chronic and acute medical conditions. Much of the literature on patient-physician decision making has focused on acute and often life-threatening medical situations in which the patient is highly dependent upon the expertise of the physician in providing the therapeutic options. Decision making is often constrained and driven by the overwhelming impact of the acute medical problem on all aspects of the individual's life. With chronic conditions, patients are increasingly knowledgeable, not only about their medical conditions, but also about traditional, complementary, and alternative therapeutic options. They must make multiple and repetitive decisions, with variable outcomes, about how they will live with their chronic condition. Consequently, they often know more than attending treatment personnel about their own situations, including symptoms, responses to previous treatment, and lifestyle preferences. This paper compares the nature of the illness, the characteristics of the decisions themselves, the role of the patient, the decision-making relationship, and the decision-making environment in acute and chronic illnesses. The author argues for a different understanding of the decision-making relationships and processes characteristic in chronic conditions that take into account the role of trade-offs between medical regimens and lifestyle choices in shaping both the process and outcomes of clinical decision-making. The paper addresses the concerns of a range of professional providers and consumers. [source]


    Wound care in the community setting: clinical decision making in context

    JOURNAL OF ADVANCED NURSING, Issue 4 2000
    Christine E. Hallett PhD BNurs BA Hons RGN HVCert DNCert PGDE
    Sixty-two community nurses in northern England of grades B and D to H were interviewed by a team of four researchers. The interviews were semi-structured, and were tape-recorded, fully transcribed and content analysed. They were conducted as part of a larger study, the aim of which was to examine community nurses' perceptions of quality in nursing care. One of the main themes the work focused on was decision-making as an element of quality. Data relating to wound care were considered from the perspective of the insights they offered into clinical decision-making. Data were interpreted in the light of a literature review in which a distinction had been made between theories which represented clinical decision-making as a linear or staged process and those which represented it as intuitive. Within the former category, three sub-categories were suggested: theorists could be divided into 'pragmatists', 'systematisers' and those who advocated 'diagnostic reasoning'. The interpretation of the data suggested that the clinical decisions made by community nurses in the area of wound care appeared largely intuitive, yet were also closely related to 'diagnostic reasoning'. They were furthermore based on a range of sources of information and justified by a number of different types of rationale. [source]


    Hematopoietic progenitor cells (HPC) and immature reticulocytes evaluations in mobilization process: new parameters measured by conventional blood cell counter

    JOURNAL OF CLINICAL LABORATORY ANALYSIS, Issue 4 2006
    J.F.A. Noronha
    Abstract Monitoring the timing of leukapheresis in peripheral blood stem cell (PBSC) mobilization is an important clinical decision that requires an accurate analytical tool. The present study assessed hematopoietic progenitor cell (HPC) and immature reticulocyte fraction (IRF) counts provided by a routine automated blood counter as potential parameters for predicting the appropriate time for harvesting. The HPC and IRF values were compared with white blood cell (WBC) and CD34+ cell counts obtained by flow cytometry in 30 adult patients with hematological malignancies undergoing PBSC mobilization. A significant correlation was observed between HPC counts and CD34+ cell counts in peripheral blood (r=0.61, P=0.0003) and between the number of HPC and CD34+ cells collected by leukapheresis (r=0.5733, P=0.0009). Comparing the HPC, IRF, WBC, and CD34+ cell parameters as signs of hematological recovery showed that the rise in immature reticulocyte counts preceded the increase of WBC (P=0.0002), HPC (P=0.0001), and CD34+ (P=0.0001) cells in peripheral blood counts. According to our results, HPC and IRF parameters may be integrated into clinical protocols to evaluate the timing of leukapheresis. IRF, as previously demonstrated in bone marrow transplantation, is the earliest sign of hematopoietic recovery in the mobilization process. J. Clin. Lab. Anal. 20:149–153, 2006. © 2006 Wiley-Liss, Inc. [source]
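
    The central analysis is a correlation between the counter's HPC channel and flow-cytometric CD34+ counts. A minimal sketch with SciPy on synthetic values; the abstract reports only r and P, so Pearson correlation is assumed here:

    ```python
    # Sketch: correlation between automated HPC counts and flow-cytometric
    # CD34+ counts, the analysis behind r = 0.61, P = 0.0003. Values are synthetic.
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(1)
    cd34 = rng.lognormal(mean=3.0, sigma=0.8, size=30)     # CD34+ cells/uL by flow cytometry
    hpc = cd34 * rng.normal(1.0, 0.4, size=30).clip(0.2)   # HPC channel as a noisy proxy

    r, p = pearsonr(hpc, cd34)
    print(f"r = {r:.2f}, P = {p:.4f}")
    # A cutoff on the cheap HPC count could then trigger a confirmatory
    # CD34+ measurement when deciding the timing of leukapheresis.
    ```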


    Clinical prediction rules for bacteremia and in-hospital death based on clinical data at the time of blood withdrawal for culture: an evaluation of their development and use

    JOURNAL OF EVALUATION IN CLINICAL PRACTICE, Issue 6 2006
    Tsukasa Nakamura MD (Research Fellow)
    Abstract Rationale, aims and objectives: To develop clinical prediction rules for true bacteremia, blood culture positive for gram-negative rods, and in-hospital death using the data at the time of blood withdrawal for culture. Methods: Data on all hospitalized adults who underwent blood cultures at a tertiary care hospital in Japan were collected from an integrated medical computing system. Logistic regression was used for developing prediction rules, followed by jackknife cross-validation. Results: Among 739 patients, 144 (19.5%) developed true bacteremia, 66 (8.9%) were positive for gram-negative rods, and 203 (27.5%) died during hospitalization. The prediction rule based on the data at the time of blood withdrawal for culture stratified patients into five groups with probabilities of true bacteremia of 6.5, 9.6, 21.9, 30.1, and 59.6%. For blood culture positive for gram-negative rods, the probabilities were 0.6, 4.7, 8.6, and 31.7%, and for in-hospital death, 6.7, 15.5, 26.0, 35.5, and 56.1%. The areas under the receiver operating characteristic curves for true bacteremia, blood culture positive for gram-negative rods, and in-hospital death were 0.73, 0.64, and 0.64, respectively, in the original cohort, and 0.72, 0.64, and 0.64, respectively, in validation. Conclusions: The clinical prediction rules are helpful for improved clinical decision making for bacteremia patients. [source]
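
    The method pairs logistic regression for rule derivation with jackknife (leave-one-out) cross-validation and ROC areas for evaluation. A minimal sketch of that pipeline with scikit-learn; the features and outcomes are synthetic stand-ins for the study's clinical variables:

    ```python
    # Sketch: derive a logistic-regression prediction rule and estimate its
    # ROC area with leave-one-out (jackknife) cross-validation.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import LeaveOneOut

    rng = np.random.default_rng(2)
    n = 300
    X = rng.normal(size=(n, 4))   # stand-ins for bedside variables at blood draw
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 1.0).astype(int)

    pred = np.empty(n)
    for train, test in LeaveOneOut().split(X):
        model = LogisticRegression().fit(X[train], y[train])
        pred[test] = model.predict_proba(X[test])[:, 1]

    apparent = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
    print(f"apparent AUC  = {roc_auc_score(y, apparent):.2f}")
    print(f"jackknife AUC = {roc_auc_score(y, pred):.2f}")  # cf. 0.73 vs 0.72 in the study
    ```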


    Integrating evidence into clinical practice: an alternative to evidence-based approaches

    JOURNAL OF EVALUATION IN CLINICAL PRACTICE, Issue 3 2006
    Mark R. Tonelli MD MA
    Abstract Evidence-based medicine (EBM) has thus far failed to adequately account for the appropriate incorporation of other potential warrants for medical decision making into clinical practice. In particular, EBM has struggled with the value and integration of other kinds of medical knowledge, such as those derived from clinical experience or based on pathophysiologic rationale. The general priority given to empirical evidence derived from clinical research in all EBM approaches is not epistemically tenable. A casuistic alternative to EBM approaches recognizes that five distinct topics, 1) empirical evidence, 2) experiential evidence, 3) pathophysiologic rationale, 4) patient goals and values, and 5) system features are potentially relevant to any clinical decision. No single topic has a general priority over any other and the relative importance of a topic will depend upon the circumstances of the particular case. The skilled clinician must weigh these potentially conflicting evidentiary and non-evidentiary warrants for action, employing both practical and theoretical reasoning, in order to arrive at the best choice for an individual patient. [source]


    Medullary thyroid carcinoma and biomarkers: past, present and future

    JOURNAL OF INTERNAL MEDICINE, Issue 1 2009
    W. Van Veelen
    Abstract. The clinical management of patients with persistent or recurrent medullary thyroid carcinoma (MTC) is still under debate, because these patients either have long-term survival, due to an indolent course of the disease, or develop rapidly progressing disease leading to death from distant metastases. At this moment, it cannot be predicted what will happen in most individual cases. Biomarkers, indicators which can be measured objectively, can be helpful in MTC diagnosis, molecular imaging and treatment, and/or identification of MTC progression. Several MTC biomarkers are already implemented in the daily management of MTC patients. More research is being aimed at the improvement of molecular imaging techniques and the development of molecular systemic therapies. Recent discoveries, like the prognostic value of plasma calcitonin and carcino-embryonic antigen doubling-time and the presence of somatic RET mutations in MTC tissue, may be useful tools in clinical decision making in the future. In this review, we provide an overview of different MTC biomarkers and their applications in the clinical management of MTC patients. [source]
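
    One biomarker named above, the calcitonin or CEA doubling time, follows from the standard exponential-growth formula applied to serial measurements. A sketch of the two-point version; the values are invented, and in practice a log-linear regression over several measurements is preferred:

    ```python
    # Sketch: biomarker doubling time from two serial measurements, assuming
    # exponential growth between samples. Example values are invented.
    import math

    def doubling_time(c1: float, c2: float, dt_months: float) -> float:
        """Doubling time in months; infinite if the marker is not rising."""
        if c2 <= c1:
            return math.inf
        return dt_months * math.log(2) / math.log(c2 / c1)

    # e.g. calcitonin rising from 150 to 400 pg/mL over 12 months
    print(f"{doubling_time(150, 400, 12):.1f} months")   # ~8.5 months
    ```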


    The impact of the model for end-stage liver disease on recipient selection for adult living liver donation

    LIVER TRANSPLANTATION, Issue 10C 2003
    Richard B. Freeman
    Key points
    1. The Model for End-Stage Liver Disease (MELD) system can be used to assess recipient pre-transplant risks and help select appropriate candidates for the adult to adult living donation liver transplant procedure.
    2. Selection of candidates for the adult to adult living donation liver transplant procedure requires assessment of candidate risk of death without a transplant, risk of death with a transplant, and donor risk of death.
    3. Understanding of the risks involved allows for development of clinical decision models to inform the risk-benefit analyses.
    4. MELD provides a useful, objective, and universal tool for clinicians around the world to estimate risks for clinical decision making in all forms of liver transplantation. [source]
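
    Key point 4 refers to MELD as an objective risk estimate; the score is a log-linear combination of three laboratory values. A sketch of the original UNOS-era formula with its conventional 1.0 floors and 4.0 mg/dL creatinine cap; treat the details as indicative, since allocation policies have since revised the formula:

    ```python
    # Sketch of the original MELD calculation. Lab values below 1.0 are floored
    # at 1.0 and creatinine is capped at 4.0 mg/dL per the conventional rules.
    import math

    def meld(bilirubin_mg_dl: float, inr: float, creatinine_mg_dl: float) -> int:
        bili = max(bilirubin_mg_dl, 1.0)
        inr_ = max(inr, 1.0)
        crea = min(max(creatinine_mg_dl, 1.0), 4.0)
        score = (3.78 * math.log(bili)
                 + 11.2 * math.log(inr_)
                 + 9.57 * math.log(crea)
                 + 6.43)
        return round(score)

    print(meld(bilirubin_mg_dl=2.5, inr=1.8, creatinine_mg_dl=1.2))   # 18
    ```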


    Interpreting the significance of drinking by alcohol-dependent liver transplant patients: Fostering candor is the key to recovery

    LIVER TRANSPLANTATION, Issue 6 2000
    Robert M. Weinrieb
    Few studies have examined the value of treating alcohol addiction either before or after liver transplantation. Nevertheless, most liver transplant programs and many insurance companies require 6 months to 1 year of abstinence from alcohol as a condition of eligibility for liver transplantation (the 6-month rule). We believe there are potentially harsh clinical consequences to the implementation of this rule. For example, the natural history of alcohol use disorders often involves brief fallbacks to drinking ("slips"), but when alcoholic liver transplant candidates slip, most are removed from consideration for transplantation or are required to accrue another 6 months of sobriety. Because there is no alternative treatment to liver transplantation for most patients with end-stage liver disease, the 6-month rule could be lethal in some circumstances. In this review, we survey the literature concerning the ability of the 6-month rule to predict drinking by alcoholic patients who undergo liver transplantation and examine its impact on the health consequences of drinking before and after liver transplantation. We believe that fostering candor between the alcoholic patient and the transplant team is the key to recovery from alcoholism. We conclude that it is unethical to force alcoholic liver patients who have resumed alcohol use while waiting for or after transplantation to choose between hiding their drinking to remain suitable candidates for transplantation or risk death by asking for treatment of alcoholism. Consequently, we advocate a flexible approach to clinical decision making for the transplant professional caring for an alcoholic patient who has resumed drinking and provide specific guidelines for patient management. [source]


    Insulin glargine improves hemoglobin A1c in children and adolescents with poorly controlled type 1 diabetes

    PEDIATRIC DIABETES, Issue 2 2003
    Anne Jackson
    Abstract: The pediatric diabetes team at the University of Minnesota made a clinical decision to switch patients with type 1 diabetes with a hemoglobin A1c level greater than 8.0% to insulin glargine in an effort to improve glycemic control. Retrospective chart analysis was performed on 37 patients 6 months after the switch to insulin glargine therapy. Results: After 6 months, the average hemoglobin A1c level in the entire cohort dropped from 10.1 ± 2.0 to 8.9 ± 1.6% (p = 0.001). Thirty patients responded with an average hemoglobin A1c drop of 1.7 ± 1.5%, from 10.3 ± 2.2 to 8.6 ± 1.5% (p < 0.001). Seven patients did not respond to insulin glargine therapy, with an average hemoglobin A1c rise of 1.0 ± 0.8% from a baseline of 9.5 ± 1.0% to 10.4 ± 1.4% (p = 0.01). The greatest response was seen in children with an A1c > 12.0%, who dropped their hemoglobin A1c by 3.5 ± 1.9%. Compared with responders, non-responders had significantly less contact with the diabetes team in the form of clinic visits and telephone conversations both before and after initiation of glargine therapy. Sixty-two per cent of patients received insulin glargine at lunchtime, when injections could be supervised at school. Three episodes of severe hypoglycemia occurred after initiation of insulin glargine therapy. Conclusions: Insulin glargine substantially improved glycemic control in children and adolescents with poorly controlled type 1 diabetes. This response was most remarkable in those with a baseline hemoglobin A1c level > 12.0%, and may have been related to increased supervision of injections. [source]
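
    The before/after comparisons reported here are paired tests on each patient's A1c. A minimal sketch of that analysis as a paired t-test on synthetic values; the study's raw data are not reproduced and its exact test is not stated in the abstract:

    ```python
    # Sketch: paired before/after comparison of hemoglobin A1c, the kind of
    # analysis behind the abstract's p-values. All values are synthetic.
    import numpy as np
    from scipy.stats import ttest_rel

    rng = np.random.default_rng(3)
    a1c_before = rng.normal(10.1, 2.0, 37)              # baseline, cf. 10.1 +/- 2.0
    a1c_after = a1c_before - rng.normal(1.2, 1.5, 37)   # 6 months on glargine

    t, p = ttest_rel(a1c_before, a1c_after)
    print(f"mean change = {(a1c_before - a1c_after).mean():.2f}%, p = {p:.4f}")
    ```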


    How neurologists think: A cognitive psychology perspective on missed diagnoses

    ANNALS OF NEUROLOGY, Issue 4 2010
    Barbara G. Vickrey MD
    Physicians use heuristics or shortcuts in their decision making to help them sort through complex clinical information and formulate diagnoses efficiently. Practice would come to a halt without them. However, there are pitfalls to the use of certain heuristics, the same ones to which humans are prone in everyday life. It may be possible to improve clinical decision making through techniques that minimize biases inherent in heuristics. Five common clinical heuristics or other sources of cognitive error are illustrated through neurological cases with missed diagnoses, and literature from cognitive psychology and medicine is presented to support the occurrence of these errors in diagnostic reasoning as general phenomena. Articulation of the errors inherent in certain common heuristics alerts clinicians to their weaknesses as diagnosticians and should be beneficial to practice. Analysis of cases with missed diagnoses in teaching conferences might proceed along formal lines that identify the type of heuristic used and the inherent potential cognitive errors. Addressing these cognitive errors by becoming conscious of them is a useful tool in neurologic education and should facilitate a career-long process of continuous self-improvement. ANN NEUROL 2010;67:425–433 [source]


    The impact of the evolution of invasive surgical procedures for low back pain: a population based study of patient outcomes and hospital utilization

    ANZ JOURNAL OF SURGERY, Issue 9 2009
    Rachael Elizabeth Moorin
    Abstract Background: Low back pain (LBP) is a ubiquitous health problem in Western societies, and while clinical decision making for patients requiring hospitalization for LBP has changed significantly over the past two decades, knowledge of the net impact on patient outcomes and health care utilization is lacking. The aim of this study was to evaluate the effectiveness of changes in the medical control of lumbar back pain in Western Australia in terms of the rate of patient readmission and the total bed days associated with readmissions. Methods: A record-linkage, population-based study of hospitalization for LBP from 1980 to 2003 in Western Australia was performed. The rate of admission for LBP, and changes in re-admission rates and the number of bed days accrued 1 and 3 years post-initial admission over time, adjusted for potential confounders, were evaluated. Results: The annual rate of first-time hospitalization for LBP halved. The proportion of females admitted increased (+6%). The disease severity increased and the proportion of individuals having an invasive procedure also increased (+75%) over the study period. While the rate of readmission for non-invasive procedures fell, readmission for invasive procedures increased over the study period. Overall, the number of bed days associated with readmission reduced over time. Conclusion: Between 1980 and 2003, there has been a shift from non-invasive procedural treatments towards invasive techniques both at the time of initial hospitalization and upon subsequent readmission. While overall readmission rates were unaffected, there was a reduction in the number of bed days associated with readmissions. [source]


    Analysis of risk factors for persistent gestational trophoblastic disease

    AUSTRALIAN AND NEW ZEALAND JOURNAL OF OBSTETRICS AND GYNAECOLOGY, Issue 6 2009
    Soo-Keat KHOO
    Setting: Persistent disease is a serious consequence of molar pregnancies. Its early detection is critical to effective chemotherapy. Therefore, determination of risk becomes an important clinical decision. Objectives: To determine the relative risk of persistent disease in a cohort of patients with partial and complete molar pregnancies by analysis of five factors derived from a database using multivariate analysis. Results: Of 686 patients, 78 developed persistent disease which required treatment (rate of 11.3%). Risk was markedly increased when serum human chorionic gonadotrophin (HCG) failed to reach negative by 12 weeks after evacuation [hazard ratio (HR) = 120.78, P < 0.001]. Risk was markedly decreased when the interval from last pregnancy exceeded 12 months (HR = 0.24, P = 0.005). Other factors such as patient's age, stage of gestation and serum HCG level at presentation were not found to be strongly associated with risk of persistent disease. Conclusion: These findings support the application of the following two factors in risk prediction for molar pregnancies: > 12 weeks to become HCG negative and interval from last pregnancy < 12 months. They will contribute to a greater awareness of persistent disease and assist in early detection and effective chemotherapy. [source]
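
    The two retained risk factors translate directly into a surveillance flag. A toy sketch of that rule; the cutoffs come from the abstract, but the function itself is an illustration, not a validated score:

    ```python
    # Sketch: the abstract's two risk factors folded into a simple flag for
    # intensified surveillance after a molar pregnancy.

    def persistent_gtd_high_risk(weeks_to_hcg_negative: float,
                                 months_since_last_pregnancy: float) -> bool:
        slow_hcg_clearance = weeks_to_hcg_negative > 12    # HR = 120.78 in the study
        short_interval = months_since_last_pregnancy < 12  # HR = 0.24 when >= 12 months
        return slow_hcg_clearance or short_interval

    print(persistent_gtd_high_risk(weeks_to_hcg_negative=14,
                                   months_since_last_pregnancy=30))   # True
    ```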


    Agreement on cardiotocogram interpretation and clinical decision using the STAN guidelines

    BJOG : AN INTERNATIONAL JOURNAL OF OBSTETRICS & GYNAECOLOGY, Issue 11 2009
    D Ayres-de-Campos
    No abstract is available for this article. [source]


    Agreement on cardiotocogram interpretation and clinical decision using the STAN guidelines: Authors' Reply

    BJOG : AN INTERNATIONAL JOURNAL OF OBSTETRICS & GYNAECOLOGY, Issue 11 2009
    MEMH Westerhuis
    No abstract is available for this article. [source]


    Using fractional exhaled nitric oxide to guide asthma therapy: design and methodological issues for ASthma TReatment ALgorithm studies

    CLINICAL & EXPERIMENTAL ALLERGY, Issue 4 2009
    Prof. P. G. Gibson
    Summary Background Current asthma guidelines recommend treatment based on the assessment of asthma control using symptoms and lung function. Noninvasive markers are an attractive way to modify therapy since they offer improved selection of active treatment(s) based on individual response, and improved titration of treatment using markers that are better related to treatment outcomes. Aims: To review the methodological and design features of noninvasive marker studies in asthma. Methods Systematic assessment of published randomized trials of asthma therapy guided by fraction of exhaled nitric oxide (FENO). Results FENO has appeal as a marker to adjust asthma therapy since it is readily measured, gives reproducible results, and is responsive to changes in inhaled corticosteroid doses. However, the five randomised trials of FENO-guided therapy have had mixed results. This may be because there are specific design and methodological issues that need to be addressed in the conduct of ASthma TReatment ALgorithm (ASTRAL) studies. There needs to be a clear dose–response relationship for the active drugs used and the outcomes measured. The algorithm decision points should be based on outcomes in the population of interest rather than the range of values in healthy people, and the algorithm used needs to provide a sufficiently different result to clinical decision making in order for there to be any discernible benefit. A new metric is required to assess the algorithm performance, and the discordance:concordance (DC) ratio can assist with this. Conclusion Incorporating these design features into future FENO studies should improve study performance and aid in obtaining a better estimate of the value of FENO-guided asthma therapy. [source]
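
    The DC ratio is not defined in detail in the abstract; one plausible reading is the ratio of visits where the FENO algorithm and conventional clinical assessment disagree on the treatment step to visits where they agree. A sketch under that assumption, with invented per-visit decisions:

    ```python
    # Sketch of one possible discordance:concordance (DC) ratio, comparing
    # treatment-step decisions from a FENO algorithm with conventional
    # clinical assessment. The pairing and the data are assumptions.

    def dc_ratio(feno_decisions, clinical_decisions) -> float:
        """Ratio of visits where the strategies disagree to visits where they agree."""
        pairs = list(zip(feno_decisions, clinical_decisions))
        discordant = sum(1 for f, c in pairs if f != c)
        concordant = len(pairs) - discordant
        return discordant / concordant if concordant else float("inf")

    # Decisions per visit: -1 step down, 0 no change, +1 step up (invented data).
    feno = [0, 1, -1, 0, 1, 0, -1, 0]
    clinical = [0, 0, -1, 1, 1, 0, 0, 0]
    print(f"DC ratio = {dc_ratio(feno, clinical):.2f}")   # 3 discordant / 5 concordant
    ```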


    Monitoring of acromegaly: what should be performed when GH and IGF-1 levels are discrepant?

    CLINICAL ENDOCRINOLOGY, Issue 2 2009
    Pamela U. Freda
    Summary Monitoring of a patient with acromegaly requires periodic evaluation of levels of GH and IGF-1, the biochemical markers of this disease. Although the results of these two tests are usually concordant, they can be discrepant and how to proceed when they are can be a challenging clinical problem. In some cases, IGF-1 levels are normal yet GH suppression after oral glucose is abnormal; this pattern may be due to persistent GH dysregulation despite remission. In other cases, IGF-1 levels are elevated yet GH suppression appears to be normal; this pattern may be observed if the cutoff for GH suppression is inappropriately high for the GH assay being used. Various conditions known to alter GH and IGF-1 including malnutrition, thyroid disease and oestrogen use as well as the potential for methodological or normative data issues with the GH and IGF-1 assays should be considered in the interpretation of discrepant results. When a known cause of the discrepancy other than acromegaly is not identified, a clinical decision about the patient's therapy needs to be made. We adjust treatment in most patients whose results are discrepant based on the IGF-1 level, continuing current treatment if it is persistently normal or modifying this if it is elevated. The clinical picture of the patient, however, also needs to be incorporated into this decision. All patients should have continued periodic surveillance of both GH and IGF-1 levels. [source]


    Complementary and integrative medical therapies, the FDA, and the NIH: definitions and regulation

    DERMATOLOGIC THERAPY, Issue 2 2003
    Michael H. Cohen
    ABSTRACT: The National Center for Complementary and Alternative Medicine (NCCAM) presently defines complementary and alternative medicine (CAM) as covering "a broad range of healing philosophies (schools of thought), approaches, and therapies that mainstream Western (conventional) medicine does not commonly use, accept, study, understand, or make available." The research landscape, including NCCAM-funded research, is continually changing and subject to vigorous methodologic and interpretive debates. Part of the impetus for greater research dollars in this arena has been the dramatic expansion of consumer reliance on CAM. State (not federal) law controls much of CAM practice. However, a significant federal role exists in the regulation of dietary supplements. The U.S. Food and Drug Administration (FDA) regulates foods, drugs, and cosmetics in interstate commerce. No new "drug" may be introduced into interstate commerce unless proven "safe" and "effective" for its intended use, as determined by FDA regulations. "Foods", however, are subject to different regulatory requirements, and need not go through trials proving safety and efficacy. The growing phenomenon of consumer use of vitamins, minerals, herbs, and other "dietary supplements" challenged the historical divide between drugs and foods. The federal Dietary Supplement Health and Education Act (DSHEA) allows manufacturers to distribute dietary supplements without having to prove safety and efficacy, so long as the manufacturers make no claims linking the supplements to a specific disease. State law regulates the use of CAM therapies through a variety of legal rules. Of these, several major areas of concern for clinicians are professional licensure, scope of practice, and malpractice. Regarding licensure, each state has enacted medical licensing laws that prohibit the unlicensed practice of medicine and thereby criminalize activity by unlicensed CAM providers who offer health care services to patients. Malpractice is defined as unskillful practice which fails to conform to a standard of care in the profession and results in injury. The definition is no different in CAM than in general medicine; its application to CAM, however, raises novel questions. Courts rely on medical consensus regarding the appropriateness of a given therapy. A framework for assessing potential liability risk involves assessing the medical evidence concerning safety and efficacy, and then aligning clinical decisions with liability concerns. Ultimately, research will or will not establish a specific CAM therapy as an important part of the standard of care for the condition in question. Legal rules governing CAM providers and practices are, in many cases, new and evolving. Further, laws vary by state and their application depends on the specific clinical scenario in question. New research is constantly emerging, as are federal and state legislative developments and judicial opinions resulting from litigation. [source]


    Variability in agreement between physicians and nurses when measuring the Glasgow Coma Scale in the emergency department limits its clinical usefulness

    EMERGENCY MEDICINE AUSTRALASIA, Issue 4 2006
    Anna Holdgate
    Abstract Objective: To assess the interrater reliability of the Glasgow Coma Scale (GCS) between nurses and senior doctors in the ED. Methods: This was a prospective observational study with a convenience sample of patients aged 18 or above who presented with a decreased level of consciousness to a tertiary hospital ED. A senior ED doctor (an emergency physician or trainee) and a registered nurse each independently scored the patient's GCS in blinded fashion within 15 min of each other. The data were then analysed to determine interrater reliability using the weighted kappa statistic, and the size and direction of differences between paired scores were examined. Results: A total of 108 eligible patients were enrolled, with GCS scores ranging from 3 to 14. Interrater agreement was excellent (weighted kappa > 0.75) for verbal scores and total GCS scores, and intermediate (weighted kappa 0.4–0.75) for motor and eye scores. Total GCS scores differed by more than two points in 10 of the 108 patients. Interrater agreement did not vary substantially across the range of actual numeric GCS scores. Conclusions: Although the level of agreement for GCS scores was generally high, a significant proportion of patients had GCS scores which differed by two or more points. This degree of disagreement indicates that clinical decisions should not be based solely on single GCS scores. [source]
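
    Alongside kappa, the clinically salient quantity here is how often paired assessments differ by two or more points. A minimal sketch of that disagreement summary on invented doctor/nurse score pairs:

    ```python
    # Sketch: summarise paired GCS disagreement, flagging clinically meaningful
    # differences (> 2 points), as in the abstract. Scores are invented.
    import numpy as np

    doctor = np.array([14, 9, 6, 12, 3, 10, 8, 13, 7, 11])
    nurse  = np.array([13, 11, 6, 9, 4, 10, 11, 13, 7, 12])

    diff = np.abs(doctor - nurse)
    print(f"median |difference| = {np.median(diff):.1f} points")
    print(f"pairs differing by > 2 points: {(diff > 2).mean():.0%}")  # cf. 10/108
    ```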


    The Status of Bedside Ultrasonography Training in Emergency Medicine Residency Programs

    ACADEMIC EMERGENCY MEDICINE, Issue 1 2003
    Francis L. Counselman MD
    Abstract Bedside ultrasonography (BU) is rapidly being incorporated into emergency medicine (EM) training programs and clinical practice. In the past decade, several organizations in EM have issued position statements on the use of this technology. Program training content is currently driven by the recently published "Model of the Clinical Practice of Emergency Medicine," which includes BU as a necessary skill. Objective: The authors sought to determine the current status of BU training in EM residency programs. Methods: A survey was mailed in early 2001 to all 122 Accreditation Council for Graduate Medical Education (ACGME)-accredited EM residency programs. The survey instrument asked whether BU was currently being taught, how much didactic and hands-on training time was incorporated into the curriculum, and what specialty representation was present in the faculty instructors. In addition, questions concerning the type of tests performed, the number considered necessary for competency, the role of BU in clinical decision making, and the type of quality assurance program were included in the survey. Results: A total of 96 out of 122 surveys were completed (response rate of 79%). Ninety-one EM programs (95% of respondents) reported they teach BU, either clinically and/or didactically, as part of their formal residency curriculum. Eighty-one (89%) respondents reported their residency program or primary hospital emergency department (ED) had a dedicated ultrasound machine. BU was performed most commonly for the following: the FAST scan (focused abdominal sonography for trauma, 79/87%); cardiac examination (for tamponade, pulseless electrical activity, etc., 65/71%); transabdominal (for intrauterine pregnancy, ectopic pregnancy, etc., 58/64%); and transvaginal (for intrauterine pregnancy, ectopic pregnancy, etc., 45/49%). One to ten hours of lecture on BU was provided in 43%, and one to ten hours of hands-on clinical instruction was provided in 48% of the EM programs. Emergency physicians were identified as the faculty most commonly involved in teaching BU to EM residents (86/95%). Sixty-one (69%) programs reported that EM faculty and/or residents made clinical decisions and patient dispositions based on the ED BU interpretation alone. Fourteen (19%) programs reported that no formal quality assurance program was in place. Conclusions: The majority of ACGME-accredited EM residency programs currently incorporate BU training as part of their curriculum. The majority of BU instruction is done by EM faculty. The most commonly performed BU study is the FAST scan. The didactic component and clinical time devoted to BU instruction are variable between programs. Further standardization of training requirements between programs may promote increasing standardization of BU in future EM practice. [source]


    Health-related quality of life assessment in randomised controlled trials in multiple myeloma: a critical review of methodology and impact on treatment recommendations

    EUROPEAN JOURNAL OF HAEMATOLOGY, Issue 4 2009
    Ann Kristin Kvam
    Abstract Objectives: Patients with multiple myeloma (MM) often have pronounced symptoms and substantially reduced quality of life. The aims of treatment are to control disease, maximise quality of life and prolong survival. Hence, health-related quality of life (HRQOL) should be an important end-point in randomised controlled trials (RCTs) in addition to traditional endpoints. We wanted to evaluate whether trials reporting HRQOL outcomes have influenced clinical decision making and whether HRQOL was assessed robustly according to predefined criteria. Methods: A systematic review identified RCTs in MM with HRQOL assessment as a study end-point. The methodological quality of these studies was assessed according to a checklist developed for evaluating HRQOL outcomes in clinical trials. The impact of the HRQOL results on clinical decision making was assessed, using published clinical guidelines as a reference. Results: Fifteen publications presenting RCTs with HRQOL as a study end-point were identified. In 13 trials, the authors stated that HRQOL results should influence clinical decision making. We found, however, that the HRQOL data had only a limited impact on published treatment guidelines for bisphosphonates, high-dose treatment, interferon, erythropoiesis-stimulating agents and novel agents. Conclusion: The present review indicates that there are still few RCTs in MM including HRQOL as a study end-point. Systematic incorporation of HRQOL measures into clinical trials allows for a comparison of treatment arms that includes the patients' perspective. Before the full impact on clinical decisions can be realised, the quality and methodology of collecting HRQOL data must be further improved and the results rendered more comprehensible to clinicians. [source]


    Random forest can predict 30-day mortality of spontaneous intracerebral hemorrhage with remarkable discrimination

    EUROPEAN JOURNAL OF NEUROLOGY, Issue 7 2010
    S. -Y.
    Background and purpose: Risk-stratification models based on patient and disease characteristics are useful for aiding clinical decisions and for comparing the quality of care between different physicians or hospitals. In addition, prediction of mortality is beneficial for optimizing resource utilization. We evaluated the accuracy and discriminating power of the random forest (RF) to predict 30-day mortality of spontaneous intracerebral hemorrhage (SICH). Methods: We retrospectively studied 423 patients admitted to the Taichung Veterans General Hospital who were diagnosed with spontaneous SICH within 24 h of stroke onset. The initial evaluation data of the patients were used to train the RF model. Areas under the receiver operating characteristic curves (AUC) were used to quantify the predictive performance. The performance of the RF model was compared to that of an artificial neural network (ANN), support vector machine (SVM), logistic regression model, and the ICH score. Results: The RF had an overall accuracy of 78.5% for predicting the mortality of patients with SICH. The sensitivity was 79.0%, and the specificity was 78.4%. The AUCs were as follows: RF, 0.87 (0.84–0.90); ANN, 0.81 (0.77–0.85); SVM, 0.79 (0.75–0.83); logistic regression, 0.78 (0.74–0.82); and ICH score, 0.72 (0.68–0.76). The discriminatory power of RF was superior to that of the other prediction models. Conclusions: The RF provided the best predictive performance amongst all of the tested models. We believe that the RF is a suitable tool for clinicians to use in predicting the 30-day mortality of patients after SICH. [source]
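
    The modelling pipeline in this abstract is a random-forest classifier evaluated by ROC area. A minimal sketch with scikit-learn; the synthetic columns below stand in for the study's admission variables (such as GCS and hematoma volume), which are not reproduced here:

    ```python
    # Sketch: random-forest 30-day mortality model with ROC-AUC evaluation.
    # Features and outcomes are synthetic placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(4)
    n = 423                                      # cohort size matching the study
    X = rng.normal(size=(n, 6))                  # stand-ins for admission features
    y = (X[:, 0] - X[:, 1] + rng.normal(size=n) > 0.8).astype(int)   # 30-day death

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=0, stratify=y)
    rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
    print(f"test AUC = {auc:.2f}")   # the study reports 0.87 for its RF model
    ```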