Competency Domains
Selected Abstracts

Setting school-level outcome standards
MEDICAL EDUCATION, Issue 2 2006
David T Stern

Background: To establish international standards for medical schools, an appropriate panel of experts must decide on performance standards. A pilot test of such standards was set in the context of a multidimensional (multiple-choice question examination, objective structured clinical examination, faculty observation) examination at 8 leading schools in China.

Methods: A group of 16 medical education leaders from a broad array of countries met over a 3-day period. These individuals considered competency domains, examination items, and the percentage of students who could fall below a cut-off score if the school was still to be considered as meeting competencies. This 2-step process started with a discussion of the borderline school and the relative difficulty such a school would have in achieving acceptable standards in a given competency domain. Committee members then estimated the percentage of students falling below the standard that would be tolerable at a borderline school, and were allowed to revise their ratings after viewing pilot data.

Results: Tolerable failure rates ranged from 10% to 26% across competency domains and examination types. As with other standard-setting exercises, the standard deviations of the estimated tolerable failure rates fell from the initial to the final round, but the cut-off scores did not change significantly. Final, but not initial, cut-off scores were correlated with student failure rates (r = 0.59, P = 0.03).

Discussion: This paper describes a method for setting school-level outcome standards at an international level, based on previously established standard-setting methods. Further refinement of this process, and validation using other examinations in other countries, will be needed to achieve accurate international standards. [source]
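The correlation reported above between final cut-off scores and observed student failure rates is a plain Pearson correlation across competency-domain/examination combinations. As a minimal sketch of how such a check could be run, the Python snippet below uses scipy's pearsonr; every number in it is a hypothetical placeholder, not data from the study.

```python
# Minimal sketch: correlate panel cut-off scores with observed student
# failure rates, as in the reported r = 0.59, P = 0.03. All values are
# hypothetical placeholders, not data from the study.
from scipy.stats import pearsonr

# One entry per competency-domain/examination combination (hypothetical).
final_cutoff_scores = [0.62, 0.58, 0.71, 0.65, 0.55, 0.68, 0.60, 0.74]
student_failure_rates = [0.12, 0.10, 0.22, 0.18, 0.11, 0.20, 0.14, 0.26]

r, p = pearsonr(final_cutoff_scores, student_failure_rates)
print(f"r = {r:.2f}, P = {p:.3f}")
```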
Using job analysis to identify core and specific competencies: implications for selection and recruitment
MEDICAL EDUCATION, Issue 12 2008
Fiona Patterson

Objective: Modern postgraduate medical training requires selection procedures that are both accurate and reliable. An essential first step is to conduct detailed job analysis studies. This paper reports data from a series of job analyses carried out to develop a competency model for three secondary care specialties (anaesthesia, obstetrics and gynaecology, and paediatrics).

Methods: Three independent job analysis studies were conducted. The content validity of the resulting competency domains was then tested in a questionnaire-based study with specialty trainees (specialist registrars [SpRs]) and consultants drawn from the three specialties. The job analyses were carried out in the Yorkshire and the Humber region in the UK; the validation study included additional participants from the West Midlands and Trent regions. This was an exploratory study. The outcome is a set of competency domains with data on their importance at senior house officer, SpR and consultant grade in each specialty.

Results: The study produced a model comprising 14 general competency domains common to all three specialties. However, there were significant between-specialty differences in both the definitions of the domains and the importance ratings attached to them.

Conclusions: The results indicate that a wide range of attributes beyond clinical knowledge and academic achievement need to be considered to ensure doctors train and work within a specialty for which they have a particular aptitude. This has significant implications for developing selection criteria for specialty training. Future research should explore the content validity of these competency domains in other secondary care specialties. [source]
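To make the questionnaire validation step concrete, the sketch below aggregates importance ratings by specialty and competency domain, the kind of tabulation that would underpin the between-specialty comparisons described above. The domain names, specialties, scale, and ratings are invented for illustration; none come from the study itself.

```python
# Sketch: mean importance rating per competency domain within each
# specialty, as a content-validity questionnaire might be tabulated.
# Specialties, domains and ratings are hypothetical placeholders.
from collections import defaultdict
from statistics import mean

# (specialty, competency domain, importance rating on a 1-5 scale)
responses = [
    ("anaesthesia", "vigilance", 5), ("anaesthesia", "vigilance", 4),
    ("paediatrics", "communication", 5), ("paediatrics", "communication", 5),
    ("obs_gynae", "manual dexterity", 4), ("obs_gynae", "manual dexterity", 3),
]

by_group = defaultdict(list)
for specialty, domain, rating in responses:
    by_group[(specialty, domain)].append(rating)

for (specialty, domain), ratings in sorted(by_group.items()):
    print(f"{specialty:12s} {domain:18s} mean importance = {mean(ratings):.2f}")
```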
Using Patient Care Quality Measures to Assess Educational Outcomes
ACADEMIC EMERGENCY MEDICINE, Issue 5 2007
Susan R. Swing PhD

Objectives: To report the results of a project designed to develop and implement a prototype methodology for identifying candidate patient care quality measures for potential use in assessing the outcomes and effectiveness of graduate medical education in emergency medicine.

Methods: A workgroup composed of experts in emergency medicine residency education and patient care quality measurement was convened. Workgroup members performed a modified Delphi process that included iterative review of potential measures; individual expert rating of the measures on four dimensions, including whether each measures quality of care and educational effectiveness; development of consensus on the measures to be retained; external stakeholder rating of the measures followed by a final workgroup review; and a post hoc stratification of the measures. The workgroup also completed a structured exercise to examine the linkage of patient care process and outcome measures to educational effectiveness.

Results: The workgroup selected 62 measures for inclusion in its final set: 43 measures for 21 clinical conditions, eight medication measures, seven measures for procedures, and four measures of department efficiency. Twenty-six measures met the more stringent criteria applied post hoc to further stratify and prioritize measures for development. Nineteen of these measures received high ratings from 75% of the workgroup and external stakeholder raters on importance for care in the ED, on measuring quality of care, and on measuring educational effectiveness; the majority of raters considered these indicators feasible to measure. The workgroup used a simple framework to explore the relationships among residency program educational activities, competencies from the six Accreditation Council for Graduate Medical Education general competency domains, patient care quality measures, and external factors that could intervene to affect care quality.

Conclusions: Numerous patient care quality measures have potential for use in assessing the educational effectiveness and performance of graduate medical education programs in emergency medicine. The measures identified in this report can be used as a starter set for further development, implementation, and study. Implementation of the measures, especially for high-stakes use, will require resolution of significant measurement issues. [source]
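The post hoc stratification described above amounts to a simple filter over rater data: a measure survives only if a sufficient share of raters scores it highly on each required dimension. The sketch below implements that rule using the 75% threshold and the three dimensions named in the abstract; the rating scale, the cut-point for a "high" rating, and the example data are assumptions for illustration.

```python
# Sketch of the post hoc stratification rule: retain a measure when at
# least 75% of raters give it a high rating on every required dimension.
# The 1-5 scale, the >= 4 cut-point and the example data are assumptions.
REQUIRED_DIMENSIONS = (
    "importance_for_ed_care",
    "measures_quality_of_care",
    "measures_educational_effectiveness",
)
THRESHOLD = 0.75  # share of raters whose rating must count as "high"
HIGH = 4          # minimum rating treated as "high" on a 1-5 scale

def retained(ratings: dict[str, list[int]]) -> bool:
    """True if >= THRESHOLD of raters rate the measure HIGH on each dimension."""
    return all(
        sum(r >= HIGH for r in ratings[dim]) / len(ratings[dim]) >= THRESHOLD
        for dim in REQUIRED_DIMENSIONS
    )

# Hypothetical ratings for one candidate measure from four raters.
example = {
    "importance_for_ed_care": [5, 4, 4, 5],
    "measures_quality_of_care": [4, 4, 3, 5],
    "measures_educational_effectiveness": [5, 4, 4, 4],
}
print(retained(example))  # True: at least 75% rate it high on each dimension
```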