Students' Scores

Selected Abstracts


The effectiveness and reliability of peer-marking in first-year medical students

MEDICAL EDUCATION, Issue 10 2006
Rachel English
Background: Peer-marking has been suggested as a method to enhance self-directed learning and reflection, although whether this improves performance is unclear. This study evaluated the impact of peer-marking on examination performance and investigated its reliability and acceptability to students. Methods: First-year medical students were randomised to peer-marking using a model answer or no intervention (control arm). Student scores were compared with tutor-marked scores. Two months later, students completed a summative assessment and performance was compared between students randomised to peer-marking and the control arm. A focus group was held with students in the intervention arm to capture their experiences and attitudes. Results: A total of 289 of 568 students consented to participate and 147 were randomised to peer-marking (142 controls). Students randomised to peer-marking achieved marginally higher examination marks (1.5% difference, 95% CI -0.8% to 3.9%, P = 0.19) than controls (adjusting for year and in-course assessment), although this may have been due to chance. Students were harsher markers than the tutors. Focus group analysis suggested that students valued peer-marking, although concerns about passing judgement on a colleague's work were expressed. Conclusions: Peer-marking did not have a substantial effect on examination performance, although a modest effect cannot be excluded. Students gained insight into examination technique but may not have gained deeper knowledge. Given its potential positive educational value, further work is required to understand how peer-marking can be used more effectively to enhance the learning experience. [source]
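The adjusted comparison reported here (a between-arm difference with a 95% CI after adjusting for year and in-course assessment) can be sketched as an ordinary least-squares model with covariates. A minimal sketch in Python with statsmodels; the file and column names (exam_results.csv, exam_pct, peer_marked, year, incourse) are hypothetical stand-ins, not the authors' analysis code.

```python
# Minimal sketch: adjusted difference in exam marks between randomised arms.
# All file/column names are hypothetical; this is not the study's analysis code.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("exam_results.csv")  # hypothetical: one row per student

# Exam mark (%) regressed on the randomisation indicator (peer_marked = 0/1),
# adjusting for year group and in-course assessment score.
model = smf.ols("exam_pct ~ peer_marked + C(year) + incourse", data=df).fit()

diff = model.params["peer_marked"]                # adjusted mean difference
low, high = model.conf_int().loc["peer_marked"]   # 95% confidence interval
print(f"Adjusted difference: {diff:.1f}% "
      f"(95% CI {low:.1f}% to {high:.1f}%, P = {model.pvalues['peer_marked']:.2f})")
```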


Relationship Between Perceived Clothing Comfort and Exam Performance

FAMILY & CONSUMER SCIENCES RESEARCH JOURNAL, Issue 4 2005
Rick Bell
Recent controlled laboratory studies have shown an effect of clothing comfort on cognitive performance. To test this relationship under naturalistic conditions, student scores on statistics exams were compared with comfort ratings. Prior to the exam, students reported their confidence in taking the exam, the number of hours studied, their comfort level, the type of clothes being worn, and other relevant variables. To maintain naturalistic conditions, clothing was not manipulated but was self-selected. Controlling for other variables associated with exam performance, multiple regression results indicated a significant positive relationship between comfort ratings and exam scores, with the model explaining 48% of the variance in exam scores (R2 = .48). As expected, the more formal the attire, the lower the comfort rating of that attire and the lower the exam score. This study provides further evidence of a relationship between perceived clothing comfort and cognitive performance. [source]
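For readers who want to see the shape of such an analysis, here is a minimal multiple-regression sketch in Python with statsmodels. The data file and variable names (exam_score, comfort, confidence, hours_studied) are hypothetical stand-ins for the measures described in the abstract.

```python
# Minimal sketch: multiple regression of exam score on comfort rating,
# controlling for other self-reported variables. Names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("exam_survey.csv")  # hypothetical: one row per student

model = smf.ols("exam_score ~ comfort + confidence + hours_studied", data=df).fit()
print(f"comfort coefficient = {model.params['comfort']:.2f} "
      f"(p = {model.pvalues['comfort']:.3f}), R-squared = {model.rsquared:.2f}")
```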


Putting Rubrics to the Test: The Effect of a Model, Criteria Generation, and Rubric-Referenced Self-Assessment on Elementary School Students' Writing

EDUCATIONAL MEASUREMENT: ISSUES AND PRACTICE, Issue 2 2008
Heidi L. Andrade
The purpose of this study was to investigate the effect of reading a model written assignment, generating a list of criteria for the assignment, and self-assessing according to a rubric, as well as gender, time spent writing, prior rubric use, and previous achievement on elementary school students' scores for a written assignment (N = 116). Participants were in grades 3 and 4. The treatment involved using a model paper to scaffold the process of generating a list of criteria for an effective story or essay, receiving a written rubric, and using the rubric to self-assess first drafts. The comparison condition involved generating a list of criteria for an effective story or essay, and reviewing first drafts. Findings include a main effect of treatment and of previous achievement on total writing scores, as well as main effects on scores for the individual criteria on the rubric. The results suggest that using a model to generate criteria for an assignment and using a rubric for self-assessment can help elementary school students produce more effective writing. [source]
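The reported "main effect of treatment and of previous achievement" corresponds to a standard two-way analysis of the total writing scores. A minimal sketch with statsmodels; the column names are hypothetical and this is not the authors' code.

```python
# Minimal sketch: main effects of condition and previous achievement on
# total writing score. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("writing_scores.csv")  # hypothetical: one row per student

model = smf.ols("total_score ~ C(condition) + C(prior_achievement)", data=df).fit()
print(anova_lm(model, typ=2))  # Type II F tests for each main effect
```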


The Impact of Performance Level Misclassification on the Accuracy and Precision of Percent at Performance Level Measures

JOURNAL OF EDUCATIONAL MEASUREMENT, Issue 2 2008
Damian W. Betebenner
No Child Left Behind (NCLB) performance mandates, embedded within state accountability systems, focus school AYP (adequate yearly progress) compliance squarely on the percentage of students at or above proficient. The singular importance of this quantity for decision-making purposes has initiated extensive research into percent proficient as a measure of school quality. In particular, technical discussions have scrutinized the impact of sampling, measurement, and other sources of error on percent proficient statistics. In this article, we challenge the received orthodoxy that measurement error associated with individual students' scores is inconsequential for aggregate percent proficient statistics. Synthesizing current classification accuracy research with techniques from randomized response designs, we establish results which specify the extent to which measurement error, manifest as performance level misclassifications, produces bias and increases error variability for percent at performance level statistics. The results have direct relevance for the design of coherent and fair accountability systems based upon assessment outcomes. [source]
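The core relationship analysed here can be written down directly: if the proficiency classification has a given sensitivity and specificity at the cut score, the expected observed percent proficient is a biased function of the true percent, and, as in randomized response designs, the relation can be inverted to correct the bias. A small illustrative sketch; the accuracy values are invented, not taken from the article.

```python
# Illustrative sketch of how classification error biases percent-proficient
# statistics. Sensitivity/specificity values below are invented for illustration.
def observed_percent(true_p, sensitivity, specificity):
    """Expected proportion *classified* as proficient given the true proportion."""
    return true_p * sensitivity + (1.0 - true_p) * (1.0 - specificity)

def corrected_percent(observed_p, sensitivity, specificity):
    """Invert the relation (randomized-response-style correction)."""
    return (observed_p - (1.0 - specificity)) / (sensitivity + specificity - 1.0)

true_p, sens, spec = 0.60, 0.85, 0.90   # illustrative values only
obs = observed_percent(true_p, sens, spec)
print(f"true 60.0% -> observed {obs:.1%} -> corrected {corrected_percent(obs, sens, spec):.1%}")
```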


Evaluation of HIV/AIDS Education in Russia Using a Video Approach

JOURNAL OF SCHOOL HEALTH, Issue 6 2000
Mohammad R. Torabi
ABSTRACT: HIV/AIDS has crossed geographic, political, ethnic, gender, and sexual orientation boundaries in communities all over the world. As of April 1999, Russia had recorded approximately 13,532 cases of HIV infection. Because treatment is expensive for many countries, and especially for Russia, educational intervention appears to offer the most effective and affordable solution. A quasi-experimental design, with pre/post tests and intervention (through video education)/control groups, was used to study 20 public schools in St. Petersburg, Russia. Results confirmed the lack of HIV/AIDS education in schools and insufficient information from parents, friends, and public health education. ANCOVA statistics demonstrated that use of video education significantly improved students' scores on knowledge and attitudes related to HIV/AIDS prevention. Thus, health educators should consider video education as an effective and efficient tool for presenting facts to a young audience when facing constraints such as a shortage of funds, a lack of trained teachers, and a scarcity of related information. [source]
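The ANCOVA mentioned above compares post-test scores between the video-education and control groups while adjusting for pre-test scores. A minimal sketch with hypothetical file and column names, not the study's code:

```python
# Minimal sketch: ANCOVA on post-test scores, adjusting for pre-test scores.
# File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("hiv_education.csv")  # hypothetical: one row per student

model = smf.ols("post_score ~ C(group) + pre_score", data=df).fit()
print(anova_lm(model, typ=2))  # F test for the group effect, adjusted for pre-test
```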


Influences of deep learning, need for cognition and preparation time on open- and closed-book test performance

MEDICAL EDUCATION, Issue 9 2010
Marjolein Heijne-Penninga
Medical Education 2010: 44: 884-891. Objectives: The ability to master discipline-specific knowledge is one of the competencies medical students must acquire. In this context, 'mastering' means being able to recall and apply knowledge. A way to assess this competency is to use both open- and closed-book tests. Student performance on both tests can be influenced by the way the student processes information. Deep information processing is expected to influence performance positively. The personal preferences of students in relation to how they process information in general (i.e. their level of need for cognition) may also be of importance. In this study, we examined the inter-relatedness of deep learning, need for cognition, preparation time, and scores on open- and closed-book tests. Methods: This study was conducted at the University Medical Centre Groningen. Participants were Year 2 students (n = 423). They were asked to complete a questionnaire on deep information processing and a need for cognition scale from a questionnaire on intellectualism, and, additionally, to write down the time they spent on test preparation. We related these measures to the students' scores on two tests, both consisting of open- and closed-book components, and used structural equation modelling to analyse the data. Results: Both questionnaires were completed by 239 students (57%). The results showed that need for cognition positively influenced both open- and closed-book test scores (β-coefficients 0.05 and 0.11, respectively). Furthermore, study outcomes measured by open-book tests predicted closed-book test results better than the other way around (β-coefficients 0.72 and 0.11, respectively). Conclusions: Students with a high need for cognition performed better on open- as well as closed-book tests. Deep learning did not influence their performance. Adding open-book tests to the regularly used closed-book tests seems to improve the recall of knowledge that has to be known by heart. Need for cognition may provide a valuable addition to existing theories on learning. [source]
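The β-coefficients above come from a structural equation model. As a rough illustration only (not the authors' model), path coefficients of this kind can be approximated with standardized regressions; all column names below are hypothetical and all columns are assumed numeric.

```python
# Simplified path-regression sketch approximating the reported relations.
# This is not the authors' structural equation model; names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("test_scores.csv")  # hypothetical: one row per student
z = (df - df.mean()) / df.std()      # standardize so slopes are beta-weights

open_model = smf.ols("open_book ~ need_for_cognition + deep_learning", data=z).fit()
closed_model = smf.ols("closed_book ~ open_book + need_for_cognition", data=z).fit()
print(open_model.params)    # paths into the open-book score
print(closed_model.params)  # paths into the closed-book score
```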


Reliability and validity of the direct observation clinical encounter examination (DOCEE)

MEDICAL EDUCATION, Issue 3 2003
Hossam Hamdy
Context: The College of Medicine and Medical Sciences at the Arabian Gulf University, Bahrain, replaced the traditional long case/short case clinical examination on the final MD examination with a direct observation clinical encounter examination (DOCEE). Each student encountered four real patients. Two pairs of examiners from different disciplines observed the students taking history and conducting physical examinations and jointly assessed their clinical competence. Objectives: To determine the reliability and validity of the DOCEE by investigating whether examiners agree when scoring, ranking and classifying students; to determine the number of cases and examiners necessary to produce a reliable examination; and to establish whether the examination has content and concurrent validity. Subjects: Fifty-six final year medical students and 22 examiners (in pairs) participated in the DOCEE in 2001. Methods: Generalisability theory, intraclass correlation, Pearson correlation and kappa were used to study reliability and agreement between the examiners. Case content and Pearson correlation between DOCEE and other examination components were used to study validity. Results: Cronbach's alpha for DOCEE was 0.85. The intraclass and Pearson correlation of scores given by specialists and non-specialists ranged from 0.82 to 0.93. Kappa scores ranged from 0.56 to 1.00. The overall intraclass correlation of students' scores was 0.86. The generalisability coefficient with four cases and two raters was 0.84. Decision studies showed that increasing the cases from one to four improved reliability to above 0.8. However, increasing the number of raters had little impact on reliability. The use of a pre-examination blueprint for selecting the cases improved the content validity. The disattenuated Pearson correlations between DOCEE and other performance measures, as a measure of concurrent validity, ranged from 0.67 to 0.79. Conclusions: The DOCEE was shown to have good reliability and inter-rater agreement between two independent specialist and non-specialist examiners on the scoring, ranking and pass/fail classification of student performance. It has adequate content and concurrent validity and provides unique information about students' clinical competence. [source]
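The decision-study finding (reliability rises above 0.8 when moving from one to four cases, while adding raters helps little) can be illustrated with the Spearman-Brown prophecy formula, a simplification of the generalisability-theory computation actually used in the paper. The single-case reliability below is back-calculated from the reported four-case coefficient of 0.84.

```python
# Illustrative sketch only: Spearman-Brown projection of reliability as the
# number of cases grows (a simplification of the G-theory decision study).
def spearman_brown(rel_single, k):
    """Projected reliability when the test is lengthened by a factor of k."""
    return k * rel_single / (1.0 + (k - 1.0) * rel_single)

rel_four = 0.84                              # reported coefficient with four cases
rel_one = rel_four / (4.0 - 3.0 * rel_four)  # invert the formula for k = 4

for k in (1, 2, 3, 4):
    print(f"{k} case(s): projected reliability {spearman_brown(rel_one, k):.2f}")
```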