Peer Evaluation



Selected Abstracts


Peer evaluation in nurses' professional development: a pilot study to investigate the issues

JOURNAL OF CLINICAL NURSING, Issue 2 2000
Riitta Vuorinen MNSc
Peer evaluation in nursing is a method by which a nurse evaluates the work of a peer according to set evaluation criteria. The aim of the study was to clarify the potential significance of peer evaluation for nurses' career development; it relates to the introduction of a career development programme for nurses in a Finnish university hospital. The research concepts were created on the basis of a literature analysis. These concepts served as a basis for data collection, and five open-ended questions were devised from them. Informants (n = 24) gave free-form essay-type answers to these questions. The material was analysed using qualitative content analysis. The results indicate that self-evaluation constitutes the basis for peer evaluation. Peer evaluation allows nurses to give and receive professional and personal support that promotes professional development. Professional support offers possibilities for change and alternative action. Personal support requires respect for the peer's equality and individuality. Personal peer support can decrease feelings of uncertainty and insecurity caused by work. The conclusion is drawn that peer evaluation is a means of promoting nurses' professional development and furthering on-the-job learning in collaboration with peers. [source]


Introducing peer evaluation during tutorial presentation

MEDICAL EDUCATION, Issue 11 2004
Shabih Manzar
No abstract is available for this article. [source]


ENTRY-LEVEL POLICE CANDIDATE ASSESSMENT CENTER: AN EFFICIENT TOOL OR A HAMMER TO KILL A FLY?

PERSONNEL PSYCHOLOGY, Issue 4 2002
KOBI DAYAN
The study examined the validity of the assessment center (AC) as a selection process for entry-level candidates to the police and its unique value beyond cognitive ability tests. The sample included 712 participants who completed personality and cognitive ability tests (CAT) and underwent an AC procedure. AC results included the overall assessment rating (OAR) and peer evaluation (PE). Seven criterion measures were collected for 585 participants from the training stage and from on-the-job performance. Results showed that the selection system was valid. Findings yielded significant unique validities of OAR and PE beyond CAT, and of PE beyond OAR, even after corrections for restriction of range. Results support the use of ACs for entry-level candidates. [source]
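The two statistics this abstract leans on, incremental validity and correction for restriction of range, can be illustrated with a short sketch. The snippet below uses simulated data and conventional formulas (ordinary least squares for the R² gain, Thorndike's Case II correction); the variable names and numbers are illustrative assumptions, not the study's data.

```python
# Minimal sketch (not the authors' code): incremental validity of PE beyond CAT
# and a Thorndike Case II correction for direct range restriction.
# All data below are simulated for illustration.
import numpy as np

def correct_range_restriction(r_obs: float, sd_unrestricted: float,
                              sd_restricted: float) -> float:
    """Thorndike Case II correction for direct restriction on the predictor."""
    u = sd_unrestricted / sd_restricted          # ratio > 1 when range is restricted
    return (r_obs * u) / np.sqrt(1 - r_obs**2 + (r_obs**2) * u**2)

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """R^2 from an ordinary least-squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 585                                            # criterion sample size in the abstract
    cat = rng.normal(size=n)                           # cognitive ability test score
    pe = 0.3 * cat + rng.normal(size=n)                # peer evaluation, partly overlapping CAT
    perf = 0.4 * cat + 0.3 * pe + rng.normal(size=n)   # training / on-the-job performance

    r2_cat = r_squared(cat.reshape(-1, 1), perf)
    r2_cat_pe = r_squared(np.column_stack([cat, pe]), perf)
    print(f"incremental validity of PE beyond CAT (delta R^2): {r2_cat_pe - r2_cat:.3f}")

    r_obs = np.corrcoef(pe, perf)[0, 1]
    print("corrected PE validity:",
          round(correct_range_restriction(r_obs, sd_unrestricted=1.2,
                                          sd_restricted=pe.std()), 3))
```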


Metrics or Peer Review?

POLITICAL STUDIES REVIEW, Issue 1 2009
Evaluating the 2001 UK Research Assessment Exercise in Political Science
Evaluations of research quality in universities are now widely used in the advanced economies. The UK's Research Assessment Exercise (RAE) is the most highly developed of these research evaluations. This article uses the results from the 2001 RAE in political science to assess the utility of citations as a measure of outcome, relative to other possible indicators. The data come from the 4,400 submissions to the RAE political science panel. The 28,128 citations analysed relate not only to journal articles but to all submitted publications, including authored and edited books and book chapters. The results show that citations are the most important predictor of the RAE outcome, followed by whether or not a department had a representative on the RAE panel. The results highlight the need to develop robust quantitative indicators to evaluate research quality, which would obviate the need for peer evaluation based on a large committee. Bibliometrics should form the main component of such a portfolio of quantitative indicators. [source]
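As an illustration of the kind of model the abstract describes, the sketch below regresses a simulated department-level RAE grade on citation counts and a panel-membership indicator. The data, coefficients, and sample size are invented for the example and do not reproduce the article's analysis.

```python
# Illustrative regression of an RAE-style grade on citations and panel membership.
# The dataset is simulated; only the structure of the model mirrors the abstract.
import numpy as np

rng = np.random.default_rng(1)
n_depts = 80                                     # hypothetical number of departments
citations = rng.poisson(lam=350, size=n_depts)   # total citations to submitted work
on_panel = rng.binomial(1, 0.2, size=n_depts)    # 1 if the department had a panel member
rae_grade = (2.5 + 0.004 * citations + 0.4 * on_panel
             + rng.normal(scale=0.5, size=n_depts))   # simulated graded outcome

X = np.column_stack([np.ones(n_depts), citations, on_panel])
beta, *_ = np.linalg.lstsq(X, rae_grade, rcond=None)
print(dict(zip(["intercept", "citations", "panel_member"], beta.round(3))))
```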


The intercultural communication motivation scale: An instrument to assess motivational training needs of candidates for international assignments

HUMAN RESOURCE MANAGEMENT, Issue 5 2009
Bernd Kupka
The Intercultural Communication Motivation Scale (ICMS) is a tool to assess the intercultural communication motivation of candidates for international assignments. The ICMS performed well in four studies conducted with undergraduate students in New Zealand, the United States, the United Arab Emirates, and Germany. Generally showing a stable five-factor structure, high test-retest correlations, very high Cronbach's alphas, and almost no social desirability bias in self and peer evaluations, the ICMS is sensitive enough to detect test-retest differences. Thus, socially responsible strategic international HR programs can use this scale to reliably evaluate employees and their families for specific international locations. © 2009 Wiley Periodicals, Inc. [source]
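Cronbach's alpha, the internal-consistency statistic reported for the ICMS, has a standard formula: alpha = k/(k−1) × (1 − Σ item variances / variance of the total score). A minimal sketch follows, using simulated item responses rather than the actual ICMS items.

```python
# Minimal sketch of Cronbach's alpha on a simulated item matrix.
# The 8 items below are invented; the real ICMS items are described in the article.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)        # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(2)
latent = rng.normal(size=(200, 1))                    # shared trait driving the items
items = latent + 0.6 * rng.normal(size=(200, 8))      # 8 correlated item scores
print(f"alpha = {cronbach_alpha(items):.2f}")
```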


Self-Monitoring and Performance Appraisal Satisfaction: An Exploratory Field Study

HUMAN RESOURCE MANAGEMENT, Issue 4 2001
Janice S. Miller
Members of 12 project teams in five organizations participated in a study that assessed their self-monitoring characteristics and their level of satisfaction with their performance appraisal system. Overall, taking part in self-ratings and upward appraisals of team leaders was associated with greater appraisal satisfaction than was participating in peer evaluations. Self-monitoring level was negatively associated with appraisal satisfaction after controlling for the level of ratings generated by peers, self, and leader. The paper discusses the results and offers practical implications in light of the social and interpersonal context that surrounds performance evaluation. © 2001 John Wiley & Sons, Inc. [source]
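The phrase "after controlling for the level of ratings" describes a partialling operation. A minimal sketch, with simulated data standing in for the study's variables, residualizes both self-monitoring and satisfaction on the control ratings and then correlates the residuals.

```python
# Illustrative partial correlation via residualization; all data are simulated.
import numpy as np

def residualize(y: np.ndarray, controls: np.ndarray) -> np.ndarray:
    """Return y with the linear effect of the control variables removed."""
    X = np.column_stack([np.ones(len(y)), controls])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

rng = np.random.default_rng(3)
n = 150
ratings = rng.normal(size=(n, 3))          # peer, self, and leader rating levels
self_monitoring = rng.normal(size=n)
satisfaction = (-0.3 * self_monitoring + ratings @ np.array([0.2, 0.3, 0.2])
                + rng.normal(size=n))

r_partial = np.corrcoef(residualize(self_monitoring, ratings),
                        residualize(satisfaction, ratings))[0, 1]
print(f"partial correlation: {r_partial:.2f}")
```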


A Global Ranking of Political Science Departments

POLITICAL STUDIES REVIEW, Issue 3 2004
Simon Hix
Rankings of academic institutions are key information tools for universities, funding agencies, students and faculty. The main method for ranking departments in political science, through peer evaluations, is subjective, biased towards established institutions, and costly in terms of time and money. The alternative method, based on supposedly 'objective' measures of outputs in scientific journals, has thus far only been applied narrowly in political science, using publications in a small number of US-based journals. An alternative method is proposed in this paper, that of ranking departments based on the quantity and impact of their publications in the 63 main political science journals in a given five-year period. The result is a series of global and easily updatable rankings that compare well with results produced by applying a similar method in economics. [source]
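A hypothetical scoring rule in the spirit of ranking by quantity and impact is sketched below; the journal list, impact weights, and blending parameter are illustrative assumptions, not the method actually used in the article.

```python
# Illustrative quantity-plus-impact department ranking; journals, weights,
# and department data are invented for the example.

# journal impact weights (e.g. derived from citation rates), keyed by journal name
impact = {"APSR": 3.0, "AJPS": 2.5, "BJPS": 2.0, "EJPR": 1.5}

# each department's publications over a five-year window: (journal, n_articles)
departments = {
    "Dept A": [("APSR", 4), ("BJPS", 6), ("EJPR", 3)],
    "Dept B": [("AJPS", 2), ("EJPR", 10)],
}

def score(pubs, alpha=0.5):
    """Blend quantity (raw article count) with impact (journal-weighted count)."""
    quantity = sum(n for _, n in pubs)
    weighted = sum(impact[journal] * n for journal, n in pubs)
    return alpha * quantity + (1 - alpha) * weighted

ranking = sorted(departments, key=lambda d: score(departments[d]), reverse=True)
for rank, dept in enumerate(ranking, start=1):
    print(rank, dept, round(score(departments[dept]), 1))
```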


Open evaluation of science: can we simply say "no, thank you?"

ACTA OPHTHALMOLOGICA, Issue 2008
G STEFANO
In today's world, where information doubles at ever faster rates because of rapid technological and biomedical advances, nations must pay closer attention to the productivity and creativity that can be obtained from their universities. A professor's intellectual property may have important patent consequences. Thus, universities must foster lines of communication that aid the professor in making critical decisions not only about the advance itself but also about its potential to generate a revenue stream. In the same light, universities must be able to evaluate a laboratory's contribution, and its potential to make future contributions, in an objective manner, since not all laboratories and projects can be funded within a nation or university given the high cost of doing so. In the past, this evaluation has taken the form of grants, which depend on peer evaluations. Now, however, owing to the ever-increasing flow of information that generates new technologies, additional evaluation processes must be in place so that funding can be prioritized and revenue is not wasted. This calls for a rapid evaluation process that takes advantage of the increase in informational flow. This process must be as objective as possible, providing documentation of the ability to generate successful projects without damaging continuing research or hurting the ability of high-risk projects to reach fruition. [source]