Peer Assessment (peer + assessment)

Selected Abstracts


Peer assessment of competence

MEDICAL EDUCATION, Issue 6 2003
John J Norcini
Objective: This instalment in the series on professional assessment summarises how peers are used in the evaluation process and whether their judgements are reliable and valid. Method: The nature of the judgements peers can make, the aspects of competence they can assess and the factors limiting the quality of the results are described with reference to the literature. The steps in implementation are also provided. Results: Peers are asked to make judgements about structured tasks or to provide their global impressions of colleagues. Judgements are gathered on whether certain actions were performed, the quality of those actions and/or their suitability for a particular purpose. Peers are used to assess virtually all aspects of professional competence, including technical and non-technical aspects of proficiency. Factors influencing the quality of those assessments are reliability, relationships, stakes and equivalence. Conclusion: Given the broad range of ways peer evaluators can be used and the sizeable number of competencies they can be asked to judge, generalisations are difficult to derive, and this form of assessment can be good or bad depending on how it is carried out. [source]


Measuring the perceived impact of facilitation on implementing recommendations from external assessment: lessons from the Dutch visitatie programme for medical specialists

JOURNAL OF EVALUATION IN CLINICAL PRACTICE, Issue 6 2005
M. J. M. H. (Kiki) Lombarts PhD
Abstract Objective: To evaluate the impact of facilitation by management consultants on implementing recommendations from external quality assessment (visitatie). Design: Data collection through a postal survey amongst 205 medical specialists, representing 50 hospital-based specialist groups in the Netherlands. Setting: Under the auspices of the specialty societies of surgeons, paediatricians and gynaecologists, 25 groups were offered 20 h of management consulting to support the implementation of recommendations for quality improvement and were compared to 25 specialist groups not receiving the support. Intervention: The Quality Consultation (QC) took a site-specific multifaceted implementation approach. Main measures: Self-reported degree of implementation of recommendations; specialists' judgement of implementation result and process; experienced obstructing factors in implementing recommendations. Results: The response rate was 54% (n = 110). The supported specialist groups were more successful in partially or fully implementing the recommendations from external peer assessment: 66.1% vs. 53.8%. The implementation result and process were also rated significantly higher for the supported groups. The supported groups reported significantly fewer (P < 0.005) obstructing factors, in particular for the barriers 'expectation of implementation advantages', 'acceptance of the recommendations' and 'assessed self-efficacy'. The experienced obstructing factors were strongly related to the degree of implementation (Spearman's rho 0.57; 32.5%). Conclusions: This study suggests that QC is a powerful implementation strategy. It also shows the limitations of merely quantitatively analysing multifaceted strategies: it does not offer any insight into the 'black box' of the QC. It is recommended that these limitations be addressed by also exploring multifaceted strategies qualitatively. [source]


Are there better indices for evaluation purposes than the h index?

JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 5 2008
A comparison of nine different variants of the h index using data from biomedicine
In this study, we examined empirical results on the h index and its most important variants in order to determine whether the variants developed are associated with an incremental contribution for evaluation purposes. The results of a factor analysis using bibliographic data on postdoctoral researchers in biomedicine indicate that regarding the h index and its variants, we are dealing with two types of indices that load on one factor each. One type describes the most productive core of a scientist's output and gives the number of papers in that core. The other type of indices describes the impact of the papers in the core. Because an index for evaluative purposes is a useful yardstick for comparison among scientists if the index corresponds strongly with peer assessments, we calculated a logistic regression analysis with the two factors resulting from the factor analysis as independent variables and peer assessment of the postdoctoral researchers as the dependent variable. The results of the regression analysis show that peer assessments can be predicted better using the factor 'impact of the productive core' than using the factor 'quantity of the productive core'. [source]
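
The distinction the abstract draws between the 'quantity' and the 'impact' of the productive core follows directly from how the h index is defined. As a minimal illustration (not the authors' code; the function names and the sample citation counts are invented for this sketch), the following computes the standard h index and a simple core-impact measure from a list of per-paper citation counts:

```python
# Minimal sketch, assuming per-paper citation counts are available as a
# simple list of integers. h_index() and core_impact() are illustrative
# names, not functions or variants used in the study.

def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def core_impact(citations):
    """Total citations gathered by the papers inside the h core --
    a rough 'impact of the productive core' style measure."""
    h = h_index(citations)
    return sum(sorted(citations, reverse=True)[:h])

if __name__ == "__main__":
    cites = [24, 18, 12, 9, 7, 6, 3, 1, 0]
    print(h_index(cites))      # 6: six papers have at least 6 citations each
    print(core_impact(cites))  # 76: citations accumulated by that core
```

In the terms of the abstract, variants of the first type return a count like h itself (the size of the core), whereas variants of the second type, like core_impact here, summarise the citations within that core.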


Assessment of professional behaviour in undergraduate medical education: peer assessment enhances performance

MEDICAL EDUCATION, Issue 9 2007
Johanna Schönrock-Adema
Objectives: To examine whether peer assessment can enhance scores on professional behaviour, with the expectation that students who assess peers score more highly on professional behaviour than students who do not assess peers. Methods: Undergraduate medical students in their first and second trimesters were randomly assigned to conditions with or without peer assessment. Of the total group of 336 students, 278 students participated in the first trimester, distributed over 31 tutorial groups, 17 of which assessed peers. The second trimester involved 272 students distributed over 32 groups, 15 of which assessed peers. Professional behaviour was rated by tutors on three dimensions: Task Performance, Aspects of Communication, and Personal Performance. The rating scale ranged from 1 (poor) to 10 (excellent). Data were analysed using multivariate repeated measures multilevel analysis. Results: Assessment scores were found to have generally increased in the second trimester, especially the personal performance scores of students who assessed peers. In addition, female students were found to have significantly higher scores than male students. Conclusions: In undergraduate medical education, peer assessment has a positive influence on professional behaviour. However, the results imply that peer assessment is only effective after students have become adjusted to the complex learning environment. [source]


Physician peer assessments for compliance with methadone maintenance treatment guidelines

THE JOURNAL OF CONTINUING EDUCATION IN THE HEALTH PROFESSIONS, Issue 4 2007
Carol Strike PhD
Abstract Introduction: Medical associations and licensing bodies face pressure to implement quality assurance programs, but evidence-based models are lacking. To improve the quality of methadone maintenance treatment (MMT), the College of Physicians and Surgeons of Ontario, Canada, conducts an innovative quality assurance program based on peer assessments. Using data from this program, we assessed physician compliance with MMT guidelines and determined whether physician factors (e.g., training, years of practice), practice type, practice location, and/or caseload are associated with MMT guideline adherence. Methods: Secondary analysis of methadone practice assessment data collected by the College of Physicians and Surgeons of Ontario, Canada. Assessment data from methadone-prescribing physicians who completed their first year of methadone practice were analyzed. We calculated the mean percentage compliance per guideline per physician and global compliance across all guidelines per physician. Linear regression was used to assess factors associated with compliance. Results: Data from 149 physician practices and 1,326 patient charts were analyzed. Compliance across all charts was greater than 90% for most areas of care. Compliance was less than 90% for take-home medication procedures; urine toxicology screening; screening for hepatitis B virus (HBV), hepatitis C virus (HCV), human immunodeficiency virus (HIV), tuberculosis, and other sexually transmitted infections; and completion of a psychosocial assessment. Mean global compliance across all charts and guidelines per physician was 94.3% (standard deviation = 7.4%), with a range of 70% to 100%. Linear regression analysis revealed that only year of medical school graduation was a significant predictor of physician compliance. Discussion: This is the first report of MMT peer assessments in Canada. Compliance is high. Few countries conduct similar assessment processes; none report physician-level results. We cannot quantify the contribution of peer assessment, training, or self-selection to the compliance rates, but compared with other areas of practice these rates suggest that peer assessment may exert a significant effect on compliance. A similar assessment process may improve physician compliance in other areas of clinical practice. [source]
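
As a minimal sketch of the compliance summary the Methods section describes, assume chart-level data with one row per guideline item per chart; the column names, toy values, and use of pandas and statsmodels below are assumptions for illustration, not the study's actual data or code:

```python
# Hypothetical sketch: per-guideline compliance per physician, global
# compliance per physician, then a linear regression on a physician factor
# (graduation year, which the study reports as the only significant
# predictor). All values are toy data.
import pandas as pd
import statsmodels.formula.api as smf

# One row per patient chart per guideline item: 1 = compliant, 0 = not.
charts = pd.DataFrame({
    "physician_id": [1, 1, 2, 2, 3, 3, 4, 4],
    "guideline":    ["urine_screen", "take_home"] * 4,
    "compliant":    [1, 1, 1, 0, 1, 1, 0, 1],
})

# Mean percentage compliance per guideline per physician.
per_guideline = (charts.groupby(["physician_id", "guideline"])["compliant"]
                       .mean() * 100)

# Global compliance across all guidelines per physician.
global_pct = (per_guideline.groupby("physician_id").mean()
              .rename("global_pct").reset_index())

# Regress global compliance on a physician-level factor.
physicians = pd.DataFrame({"physician_id": [1, 2, 3, 4],
                           "grad_year": [1985, 1992, 1999, 2005]})
data = physicians.merge(global_pct, on="physician_id")
model = smf.ols("global_pct ~ grad_year", data=data).fit()
print(model.params)
```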


Effectiveness of an enhanced peer assessment program: Introducing education into regulatory assessment

THE JOURNAL OF CONTINUING EDUCATION IN THE HEALTH PROFESSIONS, Issue 3 2006
Elizabeth F. Wenghofer PhD
Abstract Introduction: The College of Physicians and Surgeons of Ontario developed an enhanced peer assessment (EPA), the goal of which was to provide participating physicians with educational value by helping them identify specific learning needs and aligning the assessment process with the principles of continuing education and professional development. In this article, we examine the educational value of the EPA and whether physicians will change their practice as a result of the recommendations received during the assessment. Methods: A group of 41 randomly selected physicians (23 general or family practitioners, 7 obstetrician-gynecologists, and 11 general surgeons) agreed to participate in the EPA pilot. Nine experienced peer assessors were trained in the principles of knowledge translation and the use of practice resources (tool kits) and clinical practice guidelines. The EPA was evaluated through a postassessment questionnaire and focus groups. Results: The physicians felt that the EPA was fair and educationally valuable. Most focus group participants indicated that they implemented recommendations made by the assessor and made changes to some aspect of their practice. The physicians' suggestions for improvement included expanding the assessment beyond the current medical record review and interview format (e.g., to include multisource feedback), having assessments occur at regular intervals (e.g., every 5 to 10 years), and improving the administrative process by which physicians apply for educational credit for EPA activities. Conclusions: The EPA pilot study has demonstrated that providing detailed individualized feedback and optimizing the one-to-one interaction between assessors and physicians is a promising method for changing physician behavior. The College has started the process of aligning all its peer assessments with the principles of continuing professional development outlined in the EPA model. [source]


The nature of publishing and assessment in Geography and Environmental Studies: evidence from the Research Assessment Exercise 2008

AREA, Issue 3 2009
Keith Richards
We present a summary of the kinds of outputs submitted to the Geography and Environmental Studies sub-panel (H-32) for the 2008 Research Assessment Exercise (RAE), and examine the relationships between the peer assessment of research quality that the RAE process has typified, and alternative modes of assessment based on bibliometrics. This comparison is effected using (in aggregate form) some of the results from the RAE, together with citation data gathered after completion of the RAE assessment, specifically for the purpose of this paper. We conclude that, if it continues to be necessary and desirable to assess, in some measure and however imprecisely, research quality, then peer assessment cannot be replaced by bibliometrics. Bibliometrics permit measurement of something that may be linked to quality but is essentially a different phenomenon: a measure of 'impact', for example. [source]


Assessor or assessee: How student learning improves by giving and receiving peer feedback

BRITISH JOURNAL OF EDUCATIONAL TECHNOLOGY, Issue 3 2010
Lan Li
This study investigated the relationship between the quality of peer assessment and the quality of student projects in a technology application course for teacher education students. Forty-three undergraduate student participants completed the assigned projects. During the peer assessment process, students first anonymously rated and commented on two randomly assigned peers' projects, and they were then asked to improve their projects based on the feedback they received. Two independent raters blindly evaluated students' initial and final projects. Data analysis indicated that, when controlling for the quality of the initial projects, there was a significant relationship between the quality of peer feedback students provided for others and the quality of the students' own final projects. However, no significant relationship was found between the quality of peer feedback students received and the quality of their own final projects. This finding supported a prior research claim that active engagement in reviewing peers' projects may facilitate student learning. [source]
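
A minimal sketch of the kind of analysis described: regress final project quality on the quality of feedback given and received while controlling for initial project quality. The variable names and toy scores below are invented for illustration, and the use of statsmodels is an assumption rather than the study's actual procedure:

```python
# Hypothetical sketch: OLS regression of final project quality on feedback
# given and feedback received, controlling for initial project quality.
# Scores and column names are toy values, not the study's data.
import pandas as pd
import statsmodels.formula.api as smf

students = pd.DataFrame({
    "initial_quality":   [62, 70, 55, 80, 68, 74],
    "feedback_given":    [3.5, 4.2, 2.8, 4.8, 3.9, 4.1],  # rated quality of reviews written
    "feedback_received": [4.0, 3.1, 3.6, 4.4, 2.9, 3.8],  # rated quality of reviews received
    "final_quality":     [70, 82, 60, 92, 78, 84],
})

model = smf.ols(
    "final_quality ~ initial_quality + feedback_given + feedback_received",
    data=students,
).fit()

# With initial_quality held constant, the coefficient on feedback_given
# plays the role of the 'assessor' effect the study reports, and the
# coefficient on feedback_received the 'assessee' effect.
print(model.params)
```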


Assessing Democracy in a Contested Polity

JCMS: JOURNAL OF COMMON MARKET STUDIES, Issue 4 2001
Christopher Lord
After reviewing difficulties with the literature on the democratic deficit, this article concludes that a method is needed for assessing democracy in a political system where there is no fundamental agreement on what would constitute adequately democratic institutions. It then goes on to explore two suggestions for such a method: the development of well-specified indicators of democratic performance for contrasting ideal-types of Euro-democracy; and the attribution of self- and peer assessments to institutional actors with competing perspectives on democratic standards in the EU. [source]

