Selected Abstracts

On the reliability of a dental OSCE, using SEM: effect of different days
EUROPEAN JOURNAL OF DENTAL EDUCATION, Issue 3 2008
M. Schoonheim-Klein

Abstract
Aim: The first aim was to study the reliability of a dental objective structured clinical examination (OSCE) administered over multiple days; the second was to assess the number of test stations required for a sufficiently reliable decision under three score-interpretation perspectives of a dental OSCE administered over multiple days.

Materials and methods: In four OSCE administrations, 463 students in 2005 and 2006 took the summative OSCE after a dental course in comprehensive dentistry. Each OSCE had 16–18 five-minute stations (scores 1–10) and was administered on four different days of one week. ANOVA was used to test for examinee performance variation across days. Generalizability theory was used for the reliability analyses. Reliability was studied from three interpretation perspectives: relative (norm) decisions, absolute (domain) decisions, and pass/fail (mastery) decisions. The standard error of measurement (SEM) was used as an indicator of the reproducibility of test scores in this dental OSCE, with the benchmark set at SEM < 0.51, corresponding to a 95% confidence interval (CI) of < 1 on the original 1–10 scoring scale.

Results: The mean weighted total OSCE score was 7.14 on a 10-point scale. With the pass/fail score set at 6.2 for the four OSCEs, 90% of the 463 students passed. There was no significant increase in scores over the different days on which the OSCE was administered. 'Wished' variance owing to students was 6.3%. Variance owing to the interaction between student and stations plus residual error was 66.3%, more than twice the variance owing to station difficulty (27.4%). The norm SEM was 0.42 with a CI of ±0.83, and the domain SEM was 0.50 with a CI of ±0.98.
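The SEM benchmark and the reported confidence intervals are linked by a one-line calculation: the half-width of a 95% CI is roughly 1.96 × SEM. A minimal sketch (the normal-theory z-value of 1.96 is an assumption about the paper's convention; small rounding differences from the reported values are expected):

```python
# Sketch of the SEM -> 95% CI conversion implied by the abstract above.
# Assumes the conventional normal-theory half-width of 1.96 * SEM.
Z_95 = 1.96

def ci_half_width(sem: float, z: float = Z_95) -> float:
    """Half-width of the 95% confidence interval around a true score."""
    return z * sem

print(round(ci_half_width(0.51), 2))  # benchmark SEM -> 1.0, i.e. CI < 1 scale point
print(round(ci_half_width(0.42), 2))  # norm SEM   -> 0.82 (abstract reports ±0.83)
print(round(ci_half_width(0.50), 2))  # domain SEM -> 0.98 (abstract reports ±0.98)
```

This is why a benchmark of SEM < 0.51 translates to a CI narrower than one point on the 1–10 scale: 1.96 × 0.51 ≈ 1.00.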
To make reliable relative decisions (SEM < 0.51), a minimum of 12 stations is necessary; for reliable absolute and pass/fail decisions, a minimum of 17 stations is necessary in this dental OSCE.

Conclusions: It appeared reliable, when testing large numbers of students, to administer the OSCE on different days. To make reliable decisions for this dental OSCE, a minimum of 17 stations is needed. Clearly, wide sampling of stations is at the heart of obtaining reliable scores in an OSCE, in dental education as elsewhere. [source]

Formative assessment of the consultation performance of medical students in the setting of general practice using a modified version of the Leicester Assessment Package
MEDICAL EDUCATION, Issue 7 2000
Robert K McKinley

Objective: To evaluate the use of a modified version of the Leicester Assessment Package (LAP) in the formative assessment of the consultation performance of medical students, with particular reference to validity, inter-assessor reliability, acceptability, feasibility and educational impact.

Design: 180 third- and fourth-year Leicester medical students were directly observed consulting with six general practice patients and independently assessed by a pair of assessors. A total of 70 practice and 16 departmental assessors took part. Performance scores were subjected to generalizability analysis, and students' views of the assessment were gathered by questionnaire.

Results: Four of the five categories of consultation performance (interviewing and history taking, patient management, problem solving, and behaviour and relationship with patients) were assessed in over 99% of consultations, and physical examination was assessed in 94%. Seventy-six per cent of assessors reported that the case mix was 'satisfactory' and 20% that it was 'borderline'; 85% of students believed it to have been satisfactory.
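The station counts reported for the dental OSCE above (12 for relative, 17 for absolute and pass/fail decisions) can be reproduced with a small decision-study sketch. The per-station variance components are not given in the abstract; back-calculating them from the reported SEMs at roughly 17 stations is an assumption of this sketch:

```python
import math

# Hedged decision-study (D-study) sketch for the dental OSCE abstract above.
# Per-station variance components are back-calculated (an assumption) from
# the reported SEMs at roughly 17 stations, using the standard G-theory forms:
#   relative (norm) error:    SEM_rel = sqrt(v_err / n)
#   absolute (domain) error:  SEM_abs = sqrt((v_err + v_stations) / n)
N_ACTUAL = 17                            # midpoint of the 16-18 stations used
v_err = 0.42**2 * N_ACTUAL               # ~3.00: student x station + residual
v_stations = 0.50**2 * N_ACTUAL - v_err  # ~1.25: station difficulty

def stations_needed(error_variance: float, sem_target: float = 0.51) -> int:
    """Smallest n for which sqrt(error_variance / n) falls below the benchmark."""
    return math.ceil(error_variance / sem_target**2)

print(stations_needed(v_err))               # relative decisions -> 12
print(stations_needed(v_err + v_stations))  # absolute/pass-fail -> 17
```

As a consistency check, the back-calculated ratio v_err / v_stations ≈ 2.4 matches the abstract's variance split of 66.3% versus 27.4%.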
Generalizability analysis indicates that two independent assessors assessing the performance of students across six consultations would achieve a reliability of 0.94 in making pass or fail decisions. Ninety-eight per cent of students perceived that their particular strengths and weaknesses were correctly identified, 99% that they were given specific advice on how to improve their performance, and 98% believed that the feedback they had received would have long-term benefit.

Conclusions: The modified version of the LAP is valid, reliable and feasible for the formative assessment of the consultation performance of medical students. Furthermore, almost all students found the process fair and believed it was likely to lead to improvements in their consultation performance. This approach may also be applicable to regulatory assessment, as it accurately identifies students at the pass/fail margin. [source]

Achieving acceptable reliability in oral examinations: an analysis of the Royal College of General Practitioners membership examination's oral component
MEDICAL EDUCATION, Issue 2 2003
Val Wass

Background: The membership examination of the Royal College of General Practitioners (RCGP) uses structured oral examinations to assess candidates' decision-making skills and professional values.

Aim: To estimate three indices of reliability for these oral examinations.

Methods: In summer 1998, a revised system was introduced for the oral examinations. Candidates took two 20-minute (five-topic) oral examinations with two examiner pairs. Areas for oral topics had been identified. Examiners set their own topics in three competency areas (communication, professional values and personal development) and four contexts (patient, teamwork, personal, society). They worked in two pairs (a quartet) to preplan questions on 10 topics. The results were analysed in detail.
Generalisability theory was used to estimate three indices of reliability: (A) intercase reliability, (B) pass/fail reliability and (C) the standard error of measurement (SEM). For each index, a benchmark requirement was preset: (A) 0.8, (B) 0.9 and (C) 0.5.

Results: There were 896 candidates in total. Of these, 87 candidates (9.7%) failed. Total score variance was attributed to: 41% candidates, 32% oral content, and 27% examiners and general error. Reliability coefficients were: (A) intercase 0.65; (B) pass/fail 0.85. The SEM was 0.52 (i.e. precise enough to distinguish within one unit on the rating scale). Extending testing time to four 20-minute oral examinations, each with two examiners, or to five orals, each with one examiner, would improve the intercase and pass/fail reliabilities to 0.78 and 0.94, respectively.

Conclusion: Structured oral examinations can achieve reliabilities appropriate to high-stakes examinations if sufficient resources are available. [source]

Consultants' opinion on a new practice-based assessment programme for first-year residents in anaesthesiology
ACTA ANAESTHESIOLOGICA SCANDINAVICA, Issue 9 2002
C. Ringsted

Background: Assessment in postgraduate education is moving towards using a broad spectrum of practice-based assessment methods. This approach was recently introduced in the first-year residency in anaesthesiology in Denmark. The new assessment programme covers clinical skills, communication skills, organizational skills and collaborative skills, scholarly proficiencies and professionalism. Eighteen out of a total of 21 assessment instruments were used for pass/fail decisions. The aim of this study was to survey consultants' opinions of the programme in terms of the representativeness of the competencies tested, the suitability of the programme as a basis for pass/fail decisions, and the relevance and sufficiency of the content of the different assessment instruments.
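The projected gain from doubling the oral-examination time, reported in the RCGP study above, can be approximated with the classical Spearman–Brown prophecy formula. This single-facet approximation is an assumption of the sketch: the paper's own figures come from a fuller generalisability D study that models examiner and content facets separately, so exact agreement is not expected.

```python
# Single-facet Spearman-Brown approximation (an assumption; the paper's
# D study handles examiner and content facets separately, so its projected
# values differ slightly from this classical formula).
def spearman_brown(r: float, k: float) -> float:
    """Projected reliability when test length is multiplied by a factor k."""
    return k * r / (1 + (k - 1) * r)

# Doubling from two to four 20-minute orals (k = 2):
print(round(spearman_brown(0.65, 2), 2))  # intercase 0.65 -> 0.79 (paper: 0.78)
print(round(spearman_brown(0.85, 2), 2))  # pass/fail 0.85 -> 0.92 (paper: 0.94)
```

The approximation lands close to the paper's projections, which illustrates the general point of the abstract: reliability rises with testing time, and the marginal gain depends on the starting reliability.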
Methods: A description of the assessment programme and a questionnaire were sent to all consultants in anaesthesiology in Denmark. The questionnaire consisted of items, answered on a five-point scale, asking the consultants' opinions about the representativeness, suitability and content of the programme.

Results: The response rate was 251/382 (66%). More than 75% of the respondents agreed that the assessment programme offered adequate coverage of the competencies of a first-year resident and was appropriate for making pass/fail decisions. There was strong agreement that the content of the 18 tests used for pass/fail decisions was relevant and sufficient for those decisions.

Conclusion: Judging from the consultants' opinions, the assessment programme for the first-year residency in anaesthesiology appears appropriate in the range of competencies assessed, as a basis for pass/fail decisions, and in the content of the tests used for pass/fail decisions. Further studies are needed to assess the feasibility and acceptability of the programme in practice. [source]