Formative Assessment (formative + assessment)


Selected Abstracts


Debriefing as Formative Assessment: Closing Performance Gaps in Medical Education

ACADEMIC EMERGENCY MEDICINE, Issue 11 2008
Jenny W. Rudolph PhD
Abstract The authors present a four-step model of debriefing as formative assessment that blends evidence and theory from education research and the social and cognitive sciences with experience drawn from conducting over 3,000 debriefings and from teaching debriefing to approximately 1,000 clinicians worldwide. The steps are to: 1) note salient performance gaps related to predetermined objectives, 2) provide feedback describing the gap, 3) investigate the basis for the gap by exploring the frames and emotions contributing to the current performance level, and 4) help close the performance gap through discussion or targeted instruction about principles and skills relevant to performance. The authors propose that the model, designed for postsimulation debriefings, can also be applied to bedside teaching in the emergency department (ED) and other clinical settings. [source]


Formative assessment of the consultation performance of medical students in the setting of general practice using a modified version of the Leicester Assessment Package

MEDICAL EDUCATION, Issue 7 2000
Robert K McKinley
Objective: To evaluate the use of a modified version of the Leicester Assessment Package (LAP) in the formative assessment of the consultation performance of medical students, with particular reference to validity, inter-assessor reliability, acceptability, feasibility and educational impact. Design: 180 third- and fourth-year Leicester medical students were directly observed consulting with six general practice patients and independently assessed by a pair of assessors. A total of 70 practice and 16 departmental assessors took part. Performance scores were subjected to generalizability analysis and students' views of the assessment were gathered by questionnaire. Results: Four of the five categories of consultation performance (Interviewing and history taking, Patient management, Problem solving, and Behaviour and relationship with patients) were assessed in over 99% of consultations, and Physical examination was assessed in 94%. Seventy-six percent of assessors reported that the case mix was 'satisfactory' and 20% that it was 'borderline'; 85% of students believed it to have been satisfactory. Generalizability analysis indicates that two independent assessors assessing the performance of students across six consultations would achieve a reliability of 0.94 in making pass or fail decisions. Ninety-eight percent of students perceived that their particular strengths and weaknesses were correctly identified, 99% that they were given specific advice on how to improve their performance, and 98% believed that the feedback they had received would have long-term benefit. Conclusions: The modified version of the LAP is valid, reliable and feasible in formative assessment of the consultation performance of medical students. Furthermore, almost all students found the process fair and believed it was likely to lead to improvements in their consultation performance. This approach may also be applicable to regulatory assessment as it accurately identifies students at the pass/fail margin. [source]
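The 0.94 figure above is the output of a decision (D) study performed on the generalizability analysis. As a rough illustration of how such a coefficient is computed for a fully crossed student x assessor x consultation design, the Python sketch below uses invented variance components (the abstract does not report them), chosen only so that 2 assessors and 6 consultations yield a value near 0.94; it is not a reconstruction of the LAP analysis itself.

```python
# Hypothetical D-study sketch for a crossed person (student) x assessor x consultation design.
# Variance components are illustrative placeholders, NOT values from the LAP study.

def g_coefficient(var_person, var_pa, var_pc, var_residual, n_assessors, n_consultations):
    """Relative generalizability coefficient: person variance / (person variance + relative error)."""
    relative_error = (var_pa / n_assessors
                      + var_pc / n_consultations
                      + var_residual / (n_assessors * n_consultations))
    return var_person / (var_person + relative_error)

# With these made-up components, 2 assessors x 6 consultations give roughly the reported 0.94.
print(round(g_coefficient(4.0, 0.15, 0.5, 1.0, n_assessors=2, n_consultations=6), 2))  # 0.94
```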


Framing French Success in Elementary Mathematics: Policy, Curriculum, and Pedagogy

CURRICULUM INQUIRY, Issue 3 2004
FRANCES C. FOWLER
ABSTRACT For many decades Americans have been concerned about the effective teaching of mathematics, and educational and political leaders have often advocated reforms such as a return to the basics and strict accountability systems as the way to improve mathematical achievement. International studies, however, suggest that such reforms may not be the best path to successful mathematics education. Through this qualitative case study, the authors explore in depth the French approach to teaching elementary mathematics, using interviews, classroom observations, and documents as their data sets. They apply three theoretical frameworks to their data and find that the French use large-group instruction and a visible pedagogy, focusing on the discussion of mathematical concepts rather than on the completion of practice exercises. The national curriculum is relatively nonprescriptive, and teachers are somewhat empowered through site-based management. The authors conclude that the keys to French success with mathematics education are ongoing formative assessment, mathematically competent teachers, policies and practices that help disadvantaged children, and the use of constructivist methods. They urge comparative education researchers to look beyond international test scores to deeper issues of policy and practice. [source]


The academic environment: the students' perspective

EUROPEAN JOURNAL OF DENTAL EDUCATION, Issue 2008
K. Divaris
Abstract Dental education is regarded as a complex, demanding and often stressful pedagogical procedure. Undergraduates, while enrolled in programmes of 4–6 years' duration, are required to attain a unique and diverse collection of competences. Despite the major differences in educational systems, philosophies, methods and resources available worldwide, dental students' views regarding their education appear to be relatively convergent. This paper summarizes dental students' standpoint on their studies, showcases their experiences in different educational settings and discusses the characteristics of a positive academic environment. It is a consensus opinion that the 'students' perspective' should be taken into consideration in all discussions and decisions regarding dental education. Moreover, it is suggested that the set of recommendations proposed can improve students' quality of life and well-being, enhance their total educational experience and positively influence their future careers as oral health physicians. The 'ideal' academic environment may be defined as one that best prepares students for their future professional life and contributes towards their personal development, psychosomatic and social well-being. A number of diverse factors significantly influence the way students perceive and experience their education. These range from 'class size', 'leisure time' and 'assessment procedures' to 'relations with peers and faculty', 'ethical climate' and 'extra-curricular opportunities'. Research has revealed that stress symptoms, including psychological and psychosomatic manifestations, are prevalent among dental students. Apparently some stressors are inherent in dental studies. Nevertheless, suggested strategies and preventive interventions can reduce or eliminate many sources of stress, and appropriate support services should be readily available. A key point for the Working Group has been the distinction between 'teaching' and 'learning'. It is suggested that the educational content should be made available to students through a variety of methods, because individual learning styles and preferences vary considerably. Regardless of the educational philosophy adopted, students should be placed at the centre of the process. Moreover, it is critical that they are encouraged to take responsibility for their own learning. Other improvements suggested include increased formative assessment and self-assessment opportunities, reflective portfolios, collaborative learning, familiarization with and increased implementation of information and communication technology applications, early clinical exposure, greater emphasis on qualitative criteria in clinical education, community placements, and other extracurricular experiences such as international exchanges and awareness of minority and global health issues. The establishment of a global network in dental education is firmly supported, but to be effective it will need active student representation and involvement. [source]


Currents and eddies in the discourse of assessment: a learning-focused interpretation

INTERNATIONAL JOURNAL OF APPLIED LINGUISTICS, Issue 2 2006
Pauline Rea-Dickins
Keywords: formative language assessment; summative language assessment; teaching; English as an additional (second) language; classroom interaction. This article explores processes of classroom assessment, in particular the ways in which learners using English as an additional language engage in formative assessment within a primary school setting. Transcript evidence of teacher and learner interactions during activities viewed by teachers as formative or summative assessment opportunities is presented as the basis for an analysis of teacher feedback, learner responses to this feedback, and learner-initiated talk. The analyses suggest that there are different teacher orientations within assessment and highlight the potential that assessment dialogues might offer for assessment as a resource for language learning, thus situating this work at the interface between assessment and second language acquisition. The article also questions the extent to which learners are aware of the different assessment purposes embedded within instruction. [source]


Making formative assessment discernable to pre-service teachers of science

JOURNAL OF RESEARCH IN SCIENCE TEACHING, Issue 4 2010
Gayle A. Buck
Abstract The purpose of this pragmatic action research study was to explore our re-conceptualization efforts in preparing pre-service teachers to guide the inquiry process with formative assessment and subsequently use the understandings to improve our teacher preparation program. The process was guided by two questions: to what extent did course re-conceptualization efforts lead to a more informed understanding of formative assessment by pre-service teachers, and did strategies enacted in the re-conceptualized methods course foster or hinder pre-service teachers' understanding of formative assessment? Data from this study support the following findings: (1) a substantial pre- to post-methods course difference was realized in the pre-service teachers' understanding of formative assessment; (2) explicit and contextualized approaches to formative assessment in the methods course led to increased understandings by pre-service teachers; (3) an implicit approach led to improvements in course structure but did not foster pre-service teachers' understanding of the reflexive nature of formative assessment; and (4) a field-based case study on elementary science teaching both hindered and fostered our efforts with formative assessment. This study yields implications for pre-service teacher education on formative assessment. To foster pre-service teachers' knowledge and skills, we suggest explicit instruction on formative assessment combined with case studies, field experiences, and ongoing reflection. © 2009 Wiley Periodicals, Inc. J Res Sci Teach 47: 402–421, 2010 [source]


Preservice elementary teachers' views of their students' prior knowledge of science

JOURNAL OF RESEARCH IN SCIENCE TEACHING, Issue 4 2008
Valerie K. Otero
Abstract Pre-service teachers face many challenges as they learn to teach in ways that are different from their own educational experiences. Pre-service teachers often enter teacher education courses with pre-conceptions about teaching and learning that may or may not be consistent with contemporary learning theory. To build on preservice teachers' prior knowledge, we need to identify the types of views they have when entering teacher education courses and the views they develop throughout these courses. The study reported here focuses specifically on preservice teachers' views of their own students' prior knowledge and the implications these views have for their understanding of the formative assessment process. Sixty-one preservice teachers were studied from three sections of a science methods course. Results indicate that preservice teachers exhibited a limited number of views about students' prior knowledge. These views tended to privilege either academic or experience-based concepts for different aspects of formative assessment, in contrast to contemporary perspectives on teaching for understanding. Rather than considering these views as misconceptions, it is argued that it is more useful to consider them as resources for further development of a more flexible concept of formative assessment. Four common views are discussed in detail and applied to science teacher education. © 2008 Wiley Periodicals, Inc. J Res Sci Teach 45: 497–523, 2008 [source]


Exploring teachers' informal formative assessment practices and students' understanding in the context of scientific inquiry

JOURNAL OF RESEARCH IN SCIENCE TEACHING, Issue 1 2007
Maria Araceli Ruiz-Primo
This study explores teachers' informal formative assessment practices in three middle school science classrooms. We present a model for examining these practices based on three components of formative assessment (eliciting, recognizing, and using information) and the three domains linked to scientific inquiry (epistemic frameworks, conceptual structures, and social processes). We describe the informal assessment practices as ESRU cycles: the teacher Elicits a question; the Student responds; the teacher Recognizes the student's response; and then Uses the information collected to support student learning. By tracking the strategies teachers used in terms of ESRU cycles, we were able to capture differences in assessment practices across the three teachers during the implementation of four investigations of a physical science unit on buoyancy. Furthermore, based on information collected in a three-question embedded assessment administered to assess students' learning, we linked students' level of performance to the teachers' informal assessment practices. We found that the teacher who more frequently used complete ESRU cycles had students with higher performance on the embedded assessment as compared with the other two teachers. We conclude that the ESRU model is a useful way of capturing differences in teachers' informal assessment practices. Furthermore, the study suggests that effective informal formative assessment practices may be associated with student learning in scientific inquiry classrooms. © 2006 Wiley Periodicals, Inc. J Res Sci Teach [source]
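Because the ESRU model treats assessment talk as an ordered sequence, tallying complete cycles from a coded transcript is straightforward. The sketch below is not the authors' coding instrument; it simply assumes each conversational turn has already been hand-coded as E, S, R, or U and counts non-overlapping cycles, which is the kind of tally one could compare across teachers.

```python
# Minimal sketch: count complete E-S-R-U cycles in a transcript whose turns have been
# hand-coded as E (elicit), S (student response), R (recognize), or U (use).
# The coding scheme and the toy sequence below are hypothetical.

def count_esru_cycles(coded_turns):
    """Count non-overlapping E-S-R-U subsequences occurring in order."""
    count, state = 0, 0
    pattern = "ESRU"
    for code in coded_turns:
        if code == pattern[state]:
            state += 1
            if state == len(pattern):
                count += 1
                state = 0
        elif code == "E":   # a fresh elicitation restarts a partially completed cycle
            state = 1
        else:               # off-pattern talk breaks the cycle
            state = 0
    return count

print(count_esru_cycles(list("ESRESRUESU")))  # -> 1 complete cycle in this toy sequence
```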


Challenges in multisource feedback: intended and unintended outcomes

MEDICAL EDUCATION, Issue 6 2007
Joan Sargeant
Context: Multisource feedback (MSF) is a type of formative assessment intended to guide learning and performance change. However, in earlier research, some doctors questioned its validity and did not use it for improvement, raising questions about its consequential validity (i.e. its ability to produce intended outcomes related to learning and change). The purpose of this qualitative study was to increase understanding of the consequential validity of MSF by exploring how doctors used their feedback and the conditions influencing this use. Methods: We used interviews with open-ended questions. We purposefully recruited volunteer participants from 2 groups of family doctors who participated in a pilot assessment of MSF: those who received high (n = 25) and those who received average/lower (n = 44) scores. Results: Respondents included 12 in the higher- and 16 in the average/lower-scoring groups. Fifteen interpreted their feedback as positive (i.e. confirming current practice) and did not make changes. Thirteen interpreted feedback as negative in 1 or more domains (i.e. not confirming their practice and indicating need for change). Seven reported making changes. The most common changes were in patient and team communication; the least common were in clinical competence. Positive influences upon change included receiving specific feedback consistent with other sources of feedback from credible reviewers who were able to observe the subjects. These reviewers were most frequently patients. Discussion: Findings suggest circumstances that may contribute to low consequential validity of MSF for doctors. Implications for practice include enhancing procedural credibility by ensuring reviewers' ability to observe respective behaviours, enhancing feedback usefulness by increasing its specificity, and considering the use of more objective measures of clinical competence. [source]


Early identification of 'at-risk' students by the parents of paediatric patients

MEDICAL EDUCATION, Issue 9 2005
Maree O'Keefe
Introduction: Assessment of medical student clinical skills is best carried out using multiple assessment methods. A programme was developed to obtain parent evaluations of medical student paediatric interview skills for feedback and to identify students at risk of poor performance in summative assessments. Method: A total of 130 parent evaluations were obtained for 67 students (parent participation 72%, student participation 58%). Parents completed a 13-item questionnaire [Interpersonal Skills Rating Scale (IPS), maximum score 91, higher scores = higher student skill level]. Students received their individual parent scores and de-identified class mean scores as feedback, and participants were surveyed regarding the programme. Parent evaluation scores were compared with student performance in formative and summative faculty assessments of clinical interview skills. Results: Parents supported the programme and participating students valued parent feedback. Students with a parent score more than 1 standard deviation (SD) below the class mean (low IPS score students) obtained lower faculty summative assessment scores than did other students (mean ± SD, 59% ± 5 versus 64% ± 7; P < 0.05). Obtaining 1 low IPS score was associated with a subsequent faculty summative assessment score below the class mean (sensitivity 0.38, specificity 0.88). Parent evaluations combined with faculty formative assessments identified 50% of students who subsequently performed below the class mean in summative assessments. Conclusions: Parent evaluations provided useful feedback to students and identified 1 group of students at increased risk of weaker performance in summative assessments. They could be combined with other methods of formative assessment to enhance screening procedures for clinically weak students. [source]
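The screening rule described above (flag any student whose parent IPS score falls more than 1 SD below the class mean, then check the flag against below-average summative performance) amounts to a small sensitivity/specificity calculation. The Python sketch below illustrates that logic on made-up scores for eight hypothetical students; only the decision rule mirrors the abstract, not the data or the reported values of 0.38 and 0.88.

```python
# Illustrative only: the scores below are invented, not data from the study.
import statistics

def screen(parent_scores, summative_scores):
    """Flag students scoring >1 SD below the class mean parent (IPS) score and compare
    the flag against weak summative performance (below the class mean)."""
    p_mean, p_sd = statistics.mean(parent_scores), statistics.stdev(parent_scores)
    s_mean = statistics.mean(summative_scores)
    flagged = [p < p_mean - p_sd for p in parent_scores]
    weak = [s < s_mean for s in summative_scores]
    tp = sum(f and w for f, w in zip(flagged, weak))
    fn = sum(not f and w for f, w in zip(flagged, weak))
    tn = sum(not f and not w for f, w in zip(flagged, weak))
    fp = sum(f and not w for f, w in zip(flagged, weak))
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    specificity = tn / (tn + fp) if tn + fp else float("nan")
    return sensitivity, specificity

# Toy cohort of 8 students: parent IPS scores (max 91) and summative percentages.
print(screen([80, 85, 60, 88, 82, 65, 90, 84], [62, 66, 55, 70, 64, 58, 72, 63]))
```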


Teamwork Training for Interdisciplinary Applications

ACADEMIC EMERGENCY MEDICINE, Issue 2009
Bev Foster
Safe healthcare delivery in the emergency department is a team sport. Medical educators seek efficient and effective methods to teach and practice teamwork skills with all levels of interdisciplinary learners, with the goal of enhancing communication, ensuring smooth clinical operations, and improving patient safety. We present a new interdisciplinary, health professions teamwork curriculum, modified from TeamSTEPPS, that is efficient, effective, and can be delivered using multiple teaching modalities. This flexible curriculum structure begins with a brief didactic core designed to orient the learners to team concepts and invest them in the rationale for focusing on teamwork skills. This is followed by one of four additional instructional modalities: traditional didactic, interactive audience response didactic, low-fidelity simulation (role play), and high-fidelity patient simulation. Each of these additional modalities can be utilized singly or in combination to enhance the learners' attitudes, knowledge, and skills in team-based behaviors. Interdisciplinary cases have been defined, piloted, modified, and deployed at two major universities across more than 400 learners. Interdisciplinary simulation scenarios range from team-based role play to high-fidelity human patient simulation. Assessment cases using standardized patients are designed for interdisciplinary applications and focus on observable team-based behaviors rather than clinical knowledge. All of these cases have accompanying assessment instruments for attitudes, knowledge, and skills. These instruments may be used for formative assessment to provide feedback to the learners and standardize the faculty's information delivery. If used in a summative manner, they provide data for course completion criteria, remediation, or competency assessment. [source]


Enhancing learning through formative assessment and feedback – By Alastair Irons

BRITISH JOURNAL OF EDUCATIONAL TECHNOLOGY, Issue 5 2008
Liesbeth Baartman
No abstract is available for this article. [source]


Local perspective of the impact of the HIPAA privacy rule on research

CANCER, Issue 2 2006
Michael S. Wolf Ph.D., M.P.H.
Abstract BACKGROUND: The operational and economic impact of the Health Insurance Portability and Accountability Act (HIPAA) of 1996 was evaluated. The setting was a natural experiment involving a single-site clinical research study that was initiated before the enactment of HIPAA and subsequently modified to be compliant with the new policy. METHODS: A formative assessment was conducted of the recruitment process to a clinical trial evaluating the efficacy of an educational strategy to inform Veterans about the National Cancer Institute/Department of Veterans Affairs cosponsored Selenium and Vitamin E Cancer Prevention Trial (SELECT). Personnel time and costs were determined based on weekly accrual for study periods before and after the implementation of HIPAA. Root cause analysis was used to assess the recruitment protocol and to identify areas for improvement. RESULTS: The implementation of HIPAA resulted in a 72.9% decrease in patient accrual (7.0 patients/wk vs. 1.9 patients/wk, P < 0.001), and a threefold increase in mean personnel time spent recruiting (4.1 hrs/patient vs. 14.1 hrs/patient, P < 0.001) and mean recruitment costs ($49/patient vs. $169/patient, P < 0.001). Upon review of the modified HIPAA-compliant protocol, revisions in the recruitment procedure were adopted. The revised protocol improved weekly accrual by 73% (1.9 patients/wk vs. 7.1 patients/wk, P < 0.001) and resulted in improvements in personnel time (5.4 hrs/patient) and recruitment costs ($65/patient). CONCLUSION: Enactment of HIPAA initially placed a considerable burden on research time and costs. Establishing HIPAA-compliant recruitment policies can overcome some of these obstacles, although recruitment costs and time are likely to be greater than those observed before HIPAA. Cancer 2006. © 2005 American Cancer Society. [source]
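The per-patient figures reported above are consistent with a simple hours-times-rate calculation. In the sketch below, the weekly accrual and staff-hours numbers come from the abstract, while the hourly personnel cost of $12 is an assumed placeholder (it happens to reproduce the reported $49 and $169 per-patient costs, but the authors do not state an hourly rate).

```python
# Back-of-envelope check of the pre- vs. post-HIPAA recruitment metrics in the abstract.
# Accrual and hours/patient are taken from the abstract; the hourly cost is an assumption.

def recruitment_metrics(accrual_before, accrual_after, hours_before, hours_after,
                        assumed_hourly_cost=12.0):
    accrual_change = (accrual_after - accrual_before) / accrual_before
    return (accrual_change,
            hours_before * assumed_hourly_cost,   # approx. cost/patient before HIPAA
            hours_after * assumed_hourly_cost)    # approx. cost/patient after HIPAA

change, cost_pre, cost_post = recruitment_metrics(7.0, 1.9, 4.1, 14.1)
print(f"accrual change: {change:.1%}")                                # about -72.9%, as reported
print(f"cost/patient: ${cost_pre:.0f} pre vs ${cost_post:.0f} post")  # ~$49 vs ~$169
```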


Moving Toward a Comprehensive Assessment System: A Framework for Considering Interim Assessments

EDUCATIONAL MEASUREMENT: ISSUES AND PRACTICE, Issue 3 2009
Marianne Perie
Local assessment systems are being marketed as formative, benchmark, predictive, and a host of other terms. Many so-called formative assessments are not at all similar to the types of assessments and strategies studied by Black and Wiliam (1998), but instead are interim assessments. In this article, we clarify the definition and uses of interim assessments and argue that they can be an important piece of a comprehensive assessment system that includes formative, interim, and summative assessments. Interim assessments are given on a larger scale than formative assessments, have less flexibility, and are aggregated to the school or district level to help inform policy. Interim assessments are driven by their purposes, which fall into the categories of instructional, evaluative, or predictive. Our intent is to provide a specific definition for these "interim assessments" and to develop a framework that district and state leaders can use to evaluate these systems for purchase or development. The discussion lays out some concerns with the current state of these assessments as well as hopes for future directions and suggestions for further research. [source]

