Concordance Test (concordance + test)

Kinds of Concordance Test

  • script concordance test


  • Selected Abstracts


    Poorly performing physicians: Does the script concordance test detect bad clinical reasoning?

    THE JOURNAL OF CONTINUING EDUCATION IN THE HEALTH PROFESSIONS, Issue 3 2010
    François Goulet MD
    Introduction: Evaluation of poorly performing physicians is a worldwide concern for licensing bodies. The Collège des Médecins du Québec currently assesses the clinical competence of physicians previously identified as having potential clinical competence difficulties through a day-long procedure called the Structured Oral Interview (SOI). Two peer physicians produce a qualitative report. In view of remediation activities and the potential for legal consequences, more information on the clinical reasoning process (CRP), and quantitative data on the quality of that process, is needed. This study examines the Script Concordance Test (SCT), a tool that provides a standardized and objective measure of a specific dimension of the CRP, clinical data interpretation (CDI), to determine whether it could be useful in that endeavor. Methods: Over a 2-year period, 20 family physicians took, in addition to the SOI, a 1-hour paper-and-pencil SCT. Three evaluators, blind to the purpose of the experiment, retrospectively reviewed the SOI reports and were asked to estimate clinical reasoning quality. Subjects were classified into 2 groups (below and above the median of the score distribution) for each of the 2 assessment methods. Agreement between the classifications was estimated with the Kappa coefficient. Results: The intraclass correlation for the SOI was 0.89. Cronbach's alpha coefficient for the SCT was 0.90. The two methods agreed for 13 participants (Kappa: 0.30, P = 0.18), but 7 of the 20 participants were classified differently by the two methods. All participants but 1 had SCT scores more than 2 SD below the panel mean, indicating serious deficiencies in CDI. Discussion: The finding that the majority of the referred group did so poorly on CDI tasks is of great interest for assessment as well as for remediation. In remediation of prescribing skills, adding the SCT to the SOI is useful for assessing cognitive reasoning in poorly performing physicians. The Structured Oral Interview should be improved with more precise reporting by those who assess the clinical reasoning process of examinees, and caution is recommended in interpreting SCT scores; they reflect only a part of the reasoning process. [source]
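
    The two statistics driving this abstract's conclusions, Cohen's kappa for agreement between the SOI and SCT classifications and the "more than 2 SD below the panel mean" cut-off, are straightforward to reproduce. The sketch below is a hypothetical illustration with invented scores, not the authors' analysis code; the function names and data are assumptions.

    ```python
    # Hypothetical sketch: Cohen's kappa between two binary classifications
    # and the "2 SD below the panel mean" flag described above.
    # All data values are invented for illustration only.
    from statistics import mean, stdev

    def cohens_kappa(a, b):
        """Cohen's kappa for two equally long lists of binary labels (0/1)."""
        n = len(a)
        observed = sum(x == y for x, y in zip(a, b)) / n
        p_a1 = sum(a) / n                      # proportion labelled 1 by method A
        p_b1 = sum(b) / n                      # proportion labelled 1 by method B
        expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
        return (observed - expected) / (1 - expected)

    # 0 = below median, 1 = above median, for 20 physicians (made-up data)
    soi_class = [0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1]
    sct_class = [0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1]
    print(f"kappa = {cohens_kappa(soi_class, sct_class):.2f}")

    # Flag SCT scores more than 2 SD below the reference-panel mean (made-up scores)
    panel_scores = [78.0, 81.5, 75.2, 80.1, 79.3, 77.8, 82.0, 76.4, 80.9, 78.6]
    cutoff = mean(panel_scores) - 2 * stdev(panel_scores)
    examinee_scores = [55.0, 61.2, 58.4, 72.9, 49.7]
    flagged = [s for s in examinee_scores if s < cutoff]
    print(f"cutoff = {cutoff:.1f}, flagged scores: {flagged}")
    ```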


    15 Assessing the Clinical Reasoning Skills of Emergency Medicine Clerkship Students Using a Script Concordance Test

    ACADEMIC EMERGENCY MEDICINE, Issue 2008
    Aloysius Humbert
    Fourth-year medical students in emergency medicine (EM) clerkships are evaluated by various methods. Multiple-choice examinations are frequently used to supplement clinical evaluations, but they are limited in their ability to evaluate students' clinical reasoning skills. The Script Concordance Test (SCT) is an innovative assessment method developed to evaluate clinical reasoning. The SCT consists of a series of clinical vignettes, each followed by specific questions that present an additional piece of data (a lab result, a physical finding, etc.) to the student. The students then indicate how the additional data affect their thinking regarding a possible diagnosis, an investigational strategy, or a therapeutic intervention, using a 5-point Likert scale (-2, -1, 0, +1, +2). SCT questions have no single correct answer; instead, students receive credit based upon the level of agreement between their answers and those of a panel of 10 to 20 expert physicians who take the test to derive the answer key. The SCT is easily administered. In other disciplines, the SCT has demonstrated the ability to differentiate between the clinical reasoning skills of experienced and novice clinicians. The clerkship directors developed an EM SCT using an expert panel of 10 EM attending physicians. For the 2007-08 academic year, SCT questions have been incorporated into the EM clerkship end-of-rotation written examination. The EM SCT shows promise as a measure of a student's clinical reasoning ability. Future studies will assess in greater detail the performance and statistical properties of the SCT in the setting of the EM clerkship. [source]
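
    The credit rule sketched above, agreement with an expert panel rather than a single keyed answer, is commonly implemented with so-called aggregate scoring: the answer chosen by the most panelists earns full credit, and any other answer earns credit proportional to the number of panelists who chose it. The snippet below is a minimal sketch of that rule with invented panel responses; it is not the clerkship's actual answer key or software.

    ```python
    # Minimal sketch of SCT aggregate scoring (invented panel data). Each item
    # is answered on the 5-point scale -2..+2; credit for an answer equals
    # (panelists choosing that answer) / (panelists choosing the modal answer).
    from collections import Counter

    def item_credit(panel_answers, student_answer):
        counts = Counter(panel_answers)
        modal_count = max(counts.values())
        return counts.get(student_answer, 0) / modal_count

    def sct_score(panel_key, student_answers):
        """Sum of per-item credits for one student."""
        return sum(item_credit(panel, ans)
                   for panel, ans in zip(panel_key, student_answers))

    # Ten hypothetical panelists' answers for three items
    panel_key = [
        [+1, +1, +2, +1, 0, +1, +2, +1, +1, 0],    # item 1: modal answer is +1
        [-2, -1, -2, -2, -2, -1, -2, -2, -1, -2],  # item 2: modal answer is -2
        [0, 0, +1, 0, -1, 0, 0, +1, 0, 0],         # item 3: modal answer is 0
    ]
    student = [+1, -1, +2]
    print(f"score = {sct_score(panel_key, student):.2f} / {len(panel_key)}")
    ```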


    Reasoning versus knowledge retention and ascertainment throughout a problem-based learning curriculum

    MEDICAL EDUCATION, Issue 9 2009
    Anne Collard
    Context: Since 2000, problem-based learning (PBL) seminars have been introduced into the curriculum of medical studies at the University of Liège. We aimed to carry out a cross-sectional investigation of the maturational increase in biomedical reasoning capacity in comparison with factual knowledge retention throughout the curriculum. Methods: We administered a factual knowledge test (i.e. a true/false test with ascertainment degree) and a biomedical reasoning test (i.e. an adapted script concordance test [SCT]) to 104 students (Years 3-6) and a reference panel. The selected topic was endocrinology. Results: On the SCT, the students obtained higher scores in Years 5 and 6 than in Years 3 and 4. In Year 3, the scores obtained on SCT questions in a new context indicated transfer of reasoning skills. On the true/false test, the scores of Year 3 students were significantly higher than those of students in the other three year groups. A positive correlation between SCT scores and true/false test scores was observed only for students in Years 3 and 4. In each group, the ascertainment degree scores were higher for correct than for incorrect responses, and the difference was calculated as an index of self-estimation of core knowledge. This index was found to be positively correlated with SCT scores in the four year groups studied. Conclusions: Biomedical reasoning skills are evidenced early in a curriculum involving PBL and further increase during training. This is accompanied by a decrease in factual knowledge retention. The self-estimation of core knowledge appears to be related to reasoning capacity, which suggests there is a link between the two processes. [source]
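
    The index of self-estimation of core knowledge used above is the difference in mean ascertainment (confidence) degree between correct and incorrect true/false answers, which is then correlated with SCT scores. The following sketch shows one way to compute it; the helper names and data are invented for illustration, not taken from the study.

    ```python
    # Hypothetical sketch: self-estimation index (mean confidence on correct
    # answers minus mean confidence on incorrect answers) and its Pearson
    # correlation with SCT scores. All data are invented for illustration.
    from statistics import mean, stdev

    def self_estimation_index(confidences, correct_flags):
        """confidences: ascertainment degrees; correct_flags: True/False per item."""
        right = [c for c, ok in zip(confidences, correct_flags) if ok]
        wrong = [c for c, ok in zip(confidences, correct_flags) if not ok]
        return mean(right) - mean(wrong)

    def pearson_r(xs, ys):
        mx, my = mean(xs), mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
        return cov / (stdev(xs) * stdev(ys))

    # Three made-up students: per-item confidence (0-100) and correctness
    students = [
        ([90, 80, 60, 70, 50], [True, True, False, True, False]),
        ([85, 75, 65, 55, 45], [True, False, True, True, False]),
        ([95, 70, 60, 80, 40], [True, True, True, False, False]),
    ]
    indices = [self_estimation_index(c, ok) for c, ok in students]
    sct_scores = [62.0, 55.5, 68.0]   # made-up SCT scores for the same students
    print(f"indices = {indices}")
    print(f"r(index, SCT) = {pearson_r(indices, sct_scores):.2f}")
    ```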


    Assessment in the context of uncertainty: how many members are needed on the panel of reference of a script concordance test?

    MEDICAL EDUCATION, Issue 3 2005
    R Gagnon
    Purpose: The script concordance test (SCT) assesses clinical reasoning in the context of uncertainty. Because there is no single correct answer, scoring is based on a comparison of answers provided by examinees with those provided by members of a panel of reference made up of experienced practitioners. This study aims to determine how many members are needed on the panel to obtain reliable scores against which to compare the scores of examinees. Methods: A group of 80 residents were tested on 73 items (Cronbach's α: 0.76). A total of 38 family doctors made up the pool of experienced practitioners, from which 1000 random panels of reference of increasing sizes (5, 10, 15, 20, 25 and 30) were generated with a resampling procedure. Residents' scores were computed for each panel sample. Units of analysis were the means of residents' scores, the test reliability coefficient, and the correlation coefficient between scores obtained with a given panel of reference and scores obtained with the full panel of 38. Statistics were averaged across the 1000 samples for each panel size for the mean and test reliability computations, and across 100 samples for the correlation computation. Results: For sample variability, the standard deviation of means fell 3-fold between a sample panel size of 5 (SD = 1.57) and a panel size of 30 (SD = 0.50). For reliability, there was a large difference in precision between a panel size of 5 (0.62) and a panel size of 10 (0.70). When the panel size was over 20, the gain became negligible (0.74 for 20 and 0.76 for 38). For correlation, the mean correlation coefficient values were 0.90 with 5 panel members, 0.95 with 10 members and 0.98 with 20 members. Conclusion: Any number over 10 is associated with acceptable reliability and good correlation between scores obtained with the sampled panels and those obtained with the full panel of 38. For high-stakes examinations, using a panel of 20 members is recommended; recruiting more than 20 panel members shows only a marginal benefit in terms of psychometric properties. [source]
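
    The resampling procedure, drawing many random sub-panels of each size from the pool of 38 practitioners, scoring the residents against each sub-panel, and averaging the resulting statistics, can be sketched as below. This is an illustrative reconstruction with invented answers and simplified aggregate scoring, not the authors' original analysis code.

    ```python
    # Illustrative sketch of the panel-size resampling procedure (invented data,
    # simplified scoring): for each panel size, draw random sub-panels from the
    # full pool, score the residents against each sub-panel, and average the
    # correlation with scores obtained from the full panel of 38.
    import random
    from collections import Counter
    from statistics import mean, stdev

    random.seed(0)
    N_ITEMS, N_POOL, N_RESIDENTS = 73, 38, 80

    # Invented raw data: each panelist's and each resident's answer per item (-2..+2)
    pool = [[random.randint(-2, 2) for _ in range(N_ITEMS)] for _ in range(N_POOL)]
    residents = [[random.randint(-2, 2) for _ in range(N_ITEMS)] for _ in range(N_RESIDENTS)]

    def score(panel, answers):
        """Aggregate score: per item, credit = panel votes for answer / modal votes."""
        total = 0.0
        for i, ans in enumerate(answers):
            counts = Counter(p[i] for p in panel)
            total += counts.get(ans, 0) / max(counts.values())
        return total

    def pearson_r(xs, ys):
        mx, my = mean(xs), mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
        return cov / (stdev(xs) * stdev(ys))

    full_panel_scores = [score(pool, r) for r in residents]
    for size in (5, 10, 20):
        rs = []
        for _ in range(50):              # 50 resamples per size here (100-1000 in the study)
            sub_panel = random.sample(pool, size)
            sub_scores = [score(sub_panel, r) for r in residents]
            rs.append(pearson_r(sub_scores, full_panel_scores))
        print(f"panel size {size:2d}: mean r with full panel = {mean(rs):.3f}")
    ```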

