Validity Coefficients (validity + coefficient)
Selected Abstracts

A Review and Extension of Current Models of Dynamic Criteria
INTERNATIONAL JOURNAL OF SELECTION AND ASSESSMENT, Issue 3 2000
Debra Steele-Johnson
An important issue in personnel selection and test validation has been the nature of performance criteria and, more specifically, the existence of dynamic criteria. There is a continuing debate regarding the extent to which performance and validity coefficients remain stable over time. We examine research within work, laboratory, and academic settings and evaluate existing models of dynamic criteria. Building on previous models, we propose an integrative model of dynamic criteria that identifies important issues for ability and performance constructs and discusses how variables related to the task, job, and organization can affect the temporal stability of criterion performance and the ability–performance relationship. [source]

Creating a Progress-Monitoring System in Reading for Middle-School Students: Tracking Progress Toward Meeting High-Stakes Standards
LEARNING DISABILITIES RESEARCH & PRACTICE, Issue 2 2010
Christine Espin
In this study, we examined the reliability and validity of curriculum-based measures (CBM) in reading for indexing the performance of secondary-school students. Participants were 236 eighth-grade students (134 females and 102 males) in the classrooms of 17 English teachers. Students completed 1-, 2-, and 3-minute reading-aloud tasks and 2-, 3-, and 4-minute maze-selection tasks. The relation between performance on the CBMs and performance on the state reading test was examined. Results revealed that both reading aloud and maze selection were reliable and valid predictors of performance on the state standards tests, with validity coefficients above .70. An exploratory follow-up study was conducted in which the growth curves produced by the reading-aloud and maze-selection measures were compared for a subset of 31 students from the original study. For these 31 students, maze selection reflected change over time whereas reading aloud did not. This pattern of results was found for both lower- and higher-performing students. Results suggest that it is important to consider both performance and progress when examining the technical adequacy of CBMs. Implications for the use of the measures for progress monitoring with secondary-level students are discussed. [source]

INTERRATER CORRELATIONS DO NOT ESTIMATE THE RELIABILITY OF JOB PERFORMANCE RATINGS
PERSONNEL PSYCHOLOGY, Issue 4 2000
KEVIN R. MURPHY
Interrater correlations are widely interpreted as estimates of the reliability of supervisory performance ratings and are frequently used to correct the correlations between ratings and other measures (e.g., test scores) for attenuation. These interrater correlations do provide some useful information, but they are not reliability coefficients. There is clear evidence of systematic rater effects in performance appraisal, and variance associated with raters is not a source of random measurement error. We use generalizability theory to show why rater variance is not properly interpreted as measurement error, and show how such systematic rater effects can influence both reliability estimates and validity coefficients. We show conditions under which interrater correlations can either overestimate or underestimate reliability coefficients, and discuss reasons other than random measurement error for low interrater correlations. [source]
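For reference, the attenuation correction mentioned in the Murphy abstract is the standard Spearman formula; the notation below is ours, not the article's, where r_xy is the observed validity coefficient and r_xx and r_yy are the reliabilities of the predictor and the criterion:

\hat{\rho}_{xy} = \frac{r_{xy}}{\sqrt{r_{xx}\, r_{yy}}}

One practical consequence of the distinction the abstract draws: if the interrater correlation substituted for r_yy understates the true reliability of the ratings, the corrected validity coefficient is inflated, and if it overstates it, the correction is too small.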
Personality in nonhuman primates: a review and evaluation of past research
AMERICAN JOURNAL OF PRIMATOLOGY, Issue 8 2010
Hani D. Freeman
Abstract: Scientific reports of personality in nonhuman primates are now appearing with increasing frequency across a wide range of disciplines, including psychology, anthropology, endocrinology, and zoo management. To identify general patterns of research and summarize the major findings to date, we present a comprehensive review of the literature, allowing us to pinpoint the major gaps in knowledge and determine what research challenges lie ahead. An exhaustive search of five scientific databases identified 210 relevant research reports. These articles began to appear in the 1930s, but it was not until the 1980s that research on primate personality began to gather pace, with more than 100 articles published in the last decade. Our analyses of the literature indicate that some domains (e.g., sex, age, rearing conditions) are more evenly represented in the literature than are others (e.g., species, research location). Studies examining personality structure (e.g., with factor analysis) have identified personality dimensions that can be divided into 14 broad categories, with Sociability, Confidence/Aggression, and Fearfulness receiving the most research attention. Analyses of the findings pertaining to inter-rater agreement, internal consistency, and test–retest reliability generally support the reliability of primate personality rating scales, but they also point to the need for more psychometric studies and greater consistency in how the analyses are reported. When measured at the level of broad dimensions, Extraversion and Dominance generally demonstrated the highest levels of inter-rater reliability, with weaker findings for the dimensions of Agreeableness, Emotionality, and Conscientiousness. Few studies provided data with regard to convergent and discriminant validity; Excitability and Dominance demonstrated the strongest validity coefficients when validated against relevant behavioral criterion measures. Overall, the validity data present a somewhat mixed picture, suggesting that high levels of validity are attainable, but by no means guaranteed. Discussion focuses on delineating major theoretical and empirical questions facing research and practice in primate personality. Am. J. Primatol. 72:653–671, 2010. © 2010 Wiley-Liss, Inc. [source]
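As a point of reference for the internal-consistency statistics discussed in the review above, the following is a minimal Python sketch of Cronbach's alpha; the function and the ratings matrix are invented for illustration and are not drawn from any of the reviewed studies.

import numpy as np

def cronbach_alpha(ratings):
    # ratings: subjects x items matrix; alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)
    ratings = np.asarray(ratings, dtype=float)
    n_items = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1)        # sample variance of each item
    total_var = ratings.sum(axis=1).var(ddof=1)    # sample variance of the summed scale score
    return (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 6 animals rated on a 4-item "Sociability" scale (1-7 ratings).
scores = np.array([
    [6, 5, 6, 7],
    [2, 3, 2, 2],
    [5, 5, 4, 6],
    [3, 2, 3, 3],
    [7, 6, 7, 6],
    [4, 4, 5, 4],
])
print(round(cronbach_alpha(scores), 2))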
A construct validity study of clinical competence: A multitrait multimethod matrix approach
THE JOURNAL OF CONTINUING EDUCATION IN THE HEALTH PROFESSIONS, Issue 1 2010
Lubna Baig MBBS, PhD, Managing Director, Professor of Community Medicine
Abstract: Introduction: The purpose of the study was to adduce evidence for estimating the construct validity of clinical competence measured through assessment instruments used for high-stakes examinations. Methods: Thirty-nine international physicians (mean age = 41 ± 6.5 y) participated in a high-stakes examination and 3-month supervised clinical practice to determine the practice readiness of the physicians. Three traits (doctor–patient relationship, clinical competence, and communication skills) were assessed with objective structured clinical examinations, in-training evaluation reports, and clinical assessments. These traits were intercorrelated in a multitrait multimethod matrix (MTMM). Results: The reliability of the assessments ranged from moderate to high (Cronbach's α: 0.58–0.98; Ep² = 0.79). There is evidence for both convergent and divergent validity for clinical competence, followed by doctor–patient relationships and communications (validity coefficients = 0.12–0.85). The correlations between the same methods but different traits indicate that there is substantial method specificity in the assessment, accounting for nearly one-quarter of the variance (23.7%). Discussion: There is evidence for the construct validity of all 3 traits across 3 methods. The MTMM approach, currently underutilized, could be used to estimate the degree of evidence for validating complex constructs, such as clinical competence. [source]
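To illustrate how a multitrait multimethod matrix of the kind used in the study above is typically read, here is a minimal simulated sketch in Python. The trait and method labels echo the abstract, but the generative model, the weights (0.5 and 0.7), and the sample size are invented for illustration; nothing here reproduces the study's data.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 200
traits = ["doctor-patient", "competence", "communication"]
methods = ["OSCE", "ITER", "clinical"]

# Hypothetical latent trait scores and method effects for each examinee.
trait_true = {t: rng.normal(size=n) for t in traits}
method_bias = {m: rng.normal(size=n) for m in methods}

# Each observed measure = trait signal + shared method effect + noise.
data = {f"{t}/{m}": trait_true[t] + 0.5 * method_bias[m] + 0.7 * rng.normal(size=n)
        for t in traits for m in methods}
mtmm = pd.DataFrame(data).corr()   # the full 9 x 9 multitrait multimethod matrix

# Convergent validity coefficients: same trait measured by different methods.
convergent = [mtmm.loc[f"{t}/{m1}", f"{t}/{m2}"]
              for t in traits
              for i, m1 in enumerate(methods) for m2 in methods[i + 1:]]

# Method specificity: different traits measured by the same method.
monomethod = [mtmm.loc[f"{t1}/{m}", f"{t2}/{m}"]
              for m in methods
              for i, t1 in enumerate(traits) for t2 in traits[i + 1:]]

print("mean monotrait-heteromethod (convergent):", round(float(np.mean(convergent)), 2))
print("mean heterotrait-monomethod (method effect):", round(float(np.mean(monomethod)), 2))

Under these toy assumptions the monotrait-heteromethod correlations come out well above the heterotrait-monomethod ones; a nontrivial heterotrait-monomethod block is the kind of pattern behind the abstract's observation that method specificity accounts for roughly a quarter of the variance.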