Assessment Center (assessment + center)


Terms modified by Assessment Center

  • assessment center rating

  • Selected Abstracts


    ENTRY-LEVEL POLICE CANDIDATE ASSESSMENT CENTER: AN EFFICIENT TOOL OR A HAMMER TO KILL A FLY?

    PERSONNEL PSYCHOLOGY, Issue 4 2002
    KOBI DAYAN
    The study examined the validity of the assessment center (AC) as a selection process for entry-level candidates to the police and its unique value beyond cognitive ability tests. The sample included 712 participants who responded to personality and cognitive ability testing (CAT) and underwent an AC procedure. AC results included the overall assessment rating (OAR) and peer evaluation (PE). Seven criterion measures were collected for 585 participants from a training stage and on-the-job performance. Results showed that the selection system was valid. Findings yielded significant unique validities of OAR and PE beyond CAT, and of PE beyond OAR, even after corrections for restriction of range. Results support the use of ACs for entry-level candidates. [source]


    DEVELOPMENT ENGAGEMENT WITHIN AND FOLLOWING DEVELOPMENTAL ASSESSMENT CENTERS: CONSIDERING FEEDBACK FAVORABILITY AND SELF-ASSESSOR AGREEMENT

    PERSONNEL PSYCHOLOGY, Issue 4 2008
    SANG E. WOO
    This study sought to understand employees' level of behavioral engagement in response to feedback received in developmental assessment center (DAC) programs. Hypotheses were drawn from theories of self-enhancement and self-consistency and from findings in the multisource feedback and assessment center literatures regarding recipients' perceptions of feedback. Data were gathered from 172 U.S. middle managers participating in a DAC program. Results suggested that more favorable feedback was related to higher behavioral engagement. When discrepancies between self- and assessor ratings were examined, overraters (participants whose overall self-ratings were higher than their assessor ratings) tended to show less engagement in the program compared to underraters. However, pattern agreement on the participant's dimension profile did not significantly correlate with behavioral engagement. Based on these findings, avenues for future research are presented and practical implications are discussed. [source]


    Emotional response to the ano-genital examination of suspected sexual abuse

    JOURNAL OF FORENSIC NURSING, Issue 3 2009
    Gail Hornor RNC
    Abstract Introduction: Concerns have arisen among professionals working with children regarding potential emotional distress as a result of the ano-genital examination for suspected child sexual abuse. The purpose of this study was to describe and compare children's anxiety immediately preceding and immediately following the medical assessment of suspected child sexual abuse, including the ano-genital exam, and to examine demographic characteristics of those children reporting clinically significant anxiety. Method: In this descriptive study, children between 8 and 18 years of age requiring an ano-genital examination for concerns of suspected sexual abuse presenting to the Child Assessment Center of the Center for Child and Family Advocacy at Nationwide Children's Hospital were asked to participate. The Multidimensional Anxiety Scale for Children (MASC-10) was utilized in the study. The MASC-10 was completed by the child before and after the physical exam for suspected sexual abuse. Results: Although most (86%) children gave a history of sexual abuse during their forensic interview, the majority (83%) of children in this study did not report clinically significant anxiety before or after the child sexual abuse examination. Children reporting clinically significant anxiety were more likely to have a significant cognitive disability, give a history of more invasive forms of sexual abuse, have a chronic medical diagnosis, have a prior mental health diagnosis, have an ano-genital exam requiring anal or genital cultures, and lack private/public medical insurance. Discussion: A brief assessment of child demographics should be solicited prior to the exam. Children sharing the demographic characteristics listed above may benefit from interventions to decrease anxiety, regardless of the provider's ability to detect anxiety. [source]


    Assessment Center for Pilot Selection: Construct and Criterion Validity and the Impact of Assessor Type

    APPLIED PSYCHOLOGY, Issue 2 2003
    Marc Damitz
    This study examined the validity of an assessment center in pilot selection as a new field of application. Assessment center ratings of N = 1,036 applicants were used to examine the construct validity. A subsample of successful applicants was followed up, and peer ratings were chosen as criterion measures. The results provide first evidence of the construct and criterion validity of this assessment center approach for rating interpersonal and performance-related skills. Furthermore, the type of assessor (psychologist versus pilot) moderates the predictive validity of the assessment center ratings. This type-of-assessor effect depends on the kind of predictor variables. The results are discussed and practical implications are suggested. [source]


    It Is Not Yet Time to Dismiss Dimensions in Assessment Centers

    INDUSTRIAL AND ORGANIZATIONAL PSYCHOLOGY, Issue 1 2008
    KLAUS G. MELCHERS
    [source]


    Task-Based Assessment Centers: Empirical support for a systems model

    INTERNATIONAL JOURNAL OF SELECTION AND ASSESSMENT, Issue 2 2010
    Duncan J. R. Jackson
    Task-based assessment centers (TBACs) have been suggested to hold promise for practitioners and users of real-world ACs. However, a theoretical understanding of this approach is lacking in the literature, which leads to misunderstandings. The present study tested aspects of a systems model empirically, to help elucidate TBACs and explore their inner workings. When applied to data from an AC completed by 214 managers, canonical correlation analysis revealed that extraversion, abstract reasoning, and verbal reasoning, conceptualized as inputs into a system, explained around 21% of variance in manifest assessment center behavior. Behavior, in this regard, was found to consist of both general and situationally specific elements. Results are discussed in terms of their support for a systems model and as they pertain to the literature on TBACs. [source]
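The canonical correlation analysis described above can be sketched as follows. This is an illustrative reimplementation with synthetic data, not the study's analysis or data; the variable names and numbers are hypothetical.

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations between the columns of X (e.g., dispositional
    inputs) and Y (e.g., observed AC behaviors), computed as the singular
    values of the whitened cross-covariance matrix."""
    X = np.asarray(X, dtype=float) - np.mean(X, axis=0)
    Y = np.asarray(Y, dtype=float) - np.mean(Y, axis=0)
    n = X.shape[0]
    Sxx = X.T @ X / (n - 1)
    Syy = Y.T @ Y / (n - 1)
    Sxy = X.T @ Y / (n - 1)

    def inv_sqrt(S):
        # Symmetric inverse square root via eigendecomposition
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    M = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    return np.linalg.svd(M, compute_uv=False)

# Synthetic example: three "input" variables partially driving two
# behavior composites (sample size chosen to echo the study's n = 214;
# all other numbers are made up for illustration).
rng = np.random.default_rng(0)
inputs = rng.standard_normal((214, 3))
signal = inputs @ rng.standard_normal((3, 2))
behavior = 0.5 * signal + rng.standard_normal((214, 2))
rho = canonical_correlations(inputs, behavior)
shared = rho[0] ** 2  # variance shared along the first canonical variate
```

The squared first canonical correlation plays the role of the "variance explained" figure reported in the abstract; with real AC ratings in `behavior`, the same computation would apply.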


    Assessment Centers in Human Resource Management: Strategies for Prediction, Diagnosis, and Development

    PERSONNEL PSYCHOLOGY, Issue 1 2007
    Article first published online: 22 FEB 200
    First page of article [source]


    IMPLICATIONS OF TRAIT-ACTIVATION THEORY FOR EVALUATING THE CONSTRUCT VALIDITY OF ASSESSMENT CENTER RATINGS

    PERSONNEL PSYCHOLOGY, Issue 1 2002
    STEPHANIE HAALAND
    Assessment centers have often been criticized for lacking evidence supporting the construct validity of dimension ratings. This study examines whether the poor convergence of assessment center ratings is a result of correlating ratings from exercises that differ in the extent to which behavior relevant to personality traits can be observed. Using data from a promotional assessment center for law enforcement officers (n = 79), the convergence of assessment center ratings was evaluated within the context of the five-factor model by comparing the average within-dimension correlation of ratings from exercises that allowed for more opportunity to observe trait-relevant behavior to the average of those involving exercises where there was less opportunity. For each personality trait, ratings from exercises judged by experts to be high in trait-activation potential displayed stronger convergence (mean r = .30) than did ratings from exercises that were low in activation potential for that trait (mean r = .15). Implications for evaluating the construct validity of assessment centers are discussed along with future directions for classifying exercises based on situational similarity. [source]


    IMI's Aspire program feeds its senior leader pipeline through self-nominations

    GLOBAL BUSINESS AND ORGANIZATIONAL EXCELLENCE, Issue 5 2009
    Victoria Stage
    Self-nominations, combined with sophisticated assessment and selection tools, have produced a more diverse pool of highly qualified talent that IMI, a worldwide engineering company, is now grooming for its top 40 senior leadership roles. A three-step nomination and selection process for the enterprise-level Aspire program includes 360-degree-type performance assessments; online testing of potential that measures foundational capabilities and predispositions as well as accelerators in order to assign a norm-based percentile standing; and an assessment center with simulations for gauging readiness for senior leadership roles. Those selected as Aspire participants are afforded a range of activities, geared to individual and organizational needs, that include training/education, on-the-job and business-driven development, and relationship-driven development. © 2009 Wiley Periodicals, Inc. [source]


    The Preliminary Employment Interview as a Predictor of Assessment Center Outcomes

    INTERNATIONAL JOURNAL OF SELECTION AND ASSESSMENT, Issue 2 2008
    Kobi Dayan
    The current study examined the relationships between personnel employment interview scores (PEI), cognitive ability test scores (CAT) and assessment center (AC) scores, as well as the potential to circumvent the costly AC method for some of the candidates by using these less expensive selection methods. A total of 423 Israeli police force candidates participated in the study. Their PEI and CAT scores were collected in the first stage of the selection process. They subsequently participated in an AC, and a final decision was made regarding their acceptance to the police force. It was found that PEI and CAT scores significantly correlated with the AC scores and the recruitment decision, although the PEI scores demonstrated stronger correlations with the criteria. An actuarial analysis demonstrated the benefit of using the AC procedure for those achieving middle-range scores on the PEI and CAT, circumventing the costly ACs for those achieving high and low scores. This strategy resulted in minor costs of both Type I and Type II errors. Organizations can adopt this economical strategy when the costs of the AC exceed the manpower costs incurred by forgoing it. [source]


    Candidates' Ability to Identify Criteria in Nontransparent Selection Procedures: Evidence from an assessment center and a structured interview

    INTERNATIONAL JOURNAL OF SELECTION AND ASSESSMENT, Issue 3 2007
    Cornelius J. König
    In selection procedures like assessment centers (ACs) and structured interviews, candidates are often not informed about the targeted criteria. Previous studies have shown that candidates' ability to identify these criteria (ATIC) is related to their performance in the respective selection procedure. However, past research has studied ATIC in only one selection procedure at a time, even though it has been assumed that ATIC is consistent across situations, which is a prerequisite for ATIC to contribute to selection procedures' criterion-related validity. In this study, 95 candidates participated in an AC and a structured interview. ATIC scores showed cross-situational consistency across the two procedures and accounted for part of the relationship between performance in the two selection procedures. Furthermore, ATIC scores in one procedure predicted performance in the other procedure even after controlling for cognitive ability. Implications and directions for future research are discussed. [source]


    Higher Cost, Lower Validity and Higher Utility: Comparing the Utilities of Two Tests that Differ in Validity, Costs and Selectivity

    INTERNATIONAL JOURNAL OF SELECTION AND ASSESSMENT, Issue 2 2000
    George C. Thornton
    Traditional approaches to comparing the utility of two tests have not systematically considered the effects of different levels of selectivity that are feasible and appropriate in various selection situations. For example, employers who hope to avoid adverse impact often find they can be more selective with some tests than with others. We conducted two studies to compare the utilities of two tests that differ in costs, validity, and feasible levels of selectivity which can be employed. First, an analytical solution was conducted starting with a standard formula for utility. This analysis showed that for both fixed and variable hiring costs, a higher-cost, lower-validity procedure can have higher utility than a lower-cost, higher-validity procedure when the selection ratios permissible using the two procedures are sufficiently (yet realistically) different. Second, using a computer simulation method, several combinations of the critical variables were varied systematically to detect the limits of this effect in a finite set of specific selection situations. The results showed that the existence of more severe levels of adverse impact greatly reduced the utility of a written test with relatively high validity and low cost in comparison with an assessment center with lower validity and higher cost. Both studies showed that the consideration of selectivity can yield surprising conclusions about the comparative utility of two tests. Even if one test has lower validity and higher cost than a second test, the first may yield higher utility if it allows the organization to exercise stricter levels of selectivity. [source]
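The trade-off analyzed above can be sketched with the standard Brogden-Cronbach-Gleser utility formula. The numbers below are hypothetical, chosen only to illustrate how stricter selectivity can offset lower validity and higher cost; they are not taken from the article.

```python
from statistics import NormalDist

def mean_selected_z(selection_ratio):
    """Mean standardized criterion score of the selected group under
    top-down selection from a normal distribution (ordinate / SR)."""
    nd = NormalDist()
    cutoff = nd.inv_cdf(1.0 - selection_ratio)
    return nd.pdf(cutoff) / selection_ratio

def utility_per_hire(validity, sd_y, selection_ratio, cost_per_applicant):
    """Brogden-Cronbach-Gleser utility per hire: the dollar-valued gain
    from valid selection minus testing costs spread over the hires."""
    gain = validity * sd_y * mean_selected_z(selection_ratio)
    return gain - cost_per_applicant / selection_ratio

# Hypothetical comparison: a cheap written test usable only at a lenient
# selection ratio vs. a costlier, less valid assessment center that
# permits much stricter selectivity.
written_test = utility_per_hire(validity=0.45, sd_y=10_000,
                                selection_ratio=0.50, cost_per_applicant=25)
assessment_center = utility_per_hire(validity=0.35, sd_y=10_000,
                                     selection_ratio=0.10, cost_per_applicant=200)
```

With these made-up inputs, the assessment center's stricter selectivity more than compensates for its lower validity and higher per-applicant cost, mirroring the paper's analytical point.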


    SELF- VERSUS OTHERS' RATINGS AS PREDICTORS OF ASSESSMENT CENTER RATINGS: VALIDATION EVIDENCE FOR 360-DEGREE FEEDBACK PROGRAMS

    PERSONNEL PSYCHOLOGY, Issue 4 2002
    Paul W. B. Atkins
    Although 360-degree feedback programs are rapidly increasing in popularity, few studies have examined how well ratings from these programs predict an independent criterion. This study had two main aims: first, to examine the validity of ratings from a 360-degree feedback program using assessment center ratings as an independent criterion and to determine which source (i.e., self, supervisor, peers, or subordinates) provided the most valid predictor of the criterion measure of competency; second, to better understand the relationship between self-observer discrepancies and an independent criterion. The average of supervisor, peer, and subordinate ratings predicted performance on the assessment center, as did the supervisor ratings alone. The self-ratings were negatively and nonlinearly related to performance, with some of those who gave themselves the highest ratings having the lowest performance on the assessment center. Supervisor ratings successfully discriminated among overestimators but were less successful at discriminating among underestimators, suggesting that more modest feedback recipients might be underrated by their supervisors. Peers overestimated performance for poor performers. Explanations of the results and the implications for the use of self-ratings in evaluations, the design of feedback reports, and the use of 360-degree feedback programs for involving and empowering staff are discussed. [source]


    A Cross-Cultural Look at Assessment Center Practices: Survey Results from Western Europe and North America

    APPLIED PSYCHOLOGY, Issue 4 2009
    Diana E. Krause
    No recent survey documents assessment center (AC) practices across several countries. Therefore, we analyse AC practices in a sample of 97 organisations from nine countries in Western Europe and North America. We report findings regarding job analysis, dimensions, exercises, additional diagnostic methods, use of technology, assessor characteristics, contents and methods of assessor training, observational systems, information provided to participants, evaluation of participants' reactions, data integration, characteristics of feedback, and features after the AC. Finally, we compare our results with prior findings to identify trends over time and point out features of ACs that could be improved. In view of the challenges that ACs raise in international organisations, we propose a model that accounts for cross-cultural variation in these practices, variation arising from individual factors (the motivation and qualifications of human resource experts), cultural conditions ("uncertainty avoidance" and "power distance"), and institutional realities (differences in the official level of collectivism and divergences in legal norms and employment laws). This model is used to explain differences in the planning, execution, and evaluation of ACs in organisations located in nine countries of Western Europe and North America. We also highlight long-term trends in AC practices and discuss how these practices can be improved and where future research in this area should be directed. [source]


    Reasons for Being Selective When Choosing Personnel Selection Procedures

    INTERNATIONAL JOURNAL OF SELECTION AND ASSESSMENT, Issue 1 2010
    Cornelius J. König
    The scientist-practitioner gap in personnel selection is large. Thus, it is important to gain a better understanding of the reasons that make organizations use or not use certain selection procedures. Based on institutional theory, we predicted that six variables should determine the use of selection procedures: the procedures' diffusion in the field, legal problems associated with the procedures, applicant reactions to the procedures, their usefulness for organizational self-promotion, their predictive validity, and the costs involved. To test these predictions, 506 HR professionals from the German-speaking part of Switzerland filled out an online survey on the selection procedures used in their organizations. Respondents also evaluated five procedures (semi-structured interviews, ability tests, personality tests, assessment centers, and graphology) on the six predictor variables. Multilevel logistic regression was used to analyze the data. The results revealed that the highest odds ratios belonged to the factors applicant reactions, costs, and diffusion. Lower (but significant) odds ratios belonged to the factors predictive validity, organizational self-promotion, and perceived legality. [source]


    Are Internal Medicine Residency Programs Adequately Preparing Physicians to Care for the Baby Boomers?

    JOURNAL OF AMERICAN GERIATRICS SOCIETY, Issue 10 2006
    A National Survey from the Association of Directors of Geriatric Academic Programs Status of Geriatrics Workforce Study
    Patients aged 65 and older account for 39% of ambulatory visits to internal medicine physicians. This article describes the progress made in training internal medicine residents to care for older Americans. Program directors in internal medicine residency programs accredited by the Accreditation Council for Graduate Medical Education were surveyed in the spring of 2005. Findings from this survey were compared with those from a similar 2002 survey to determine whether any changes had occurred. A 60% response rate was achieved (n=235). In these 3-year residency training programs, 20 programs (9%) required less than 2 weeks of clinical instruction that was specifically structured to teach geriatric care principles, 48 (21%) at least 2 weeks but less than 4 weeks, 144 (62%) at least 4 weeks but less than 6 weeks, and 21 (9%) required 6 or more weeks. As in 2002, internal medicine residency programs continue to depend on nursing home facilities, geriatric preceptors in nongeriatric clinical ambulatory settings, and outpatient geriatric assessment centers for their geriatrics training. Training was most often offered in a block format. The mean number of physician faculty per residency program dedicated to teaching geriatric medicine was 3.5 full-time equivalents (FTEs) (range 0,50), compared with a mean of 2.2 FTE faculty in 2002 (P,.001). Internal medicine educators are continuing to improve the training of residents so that, as they become practicing physicians, they will have the knowledge and skills in geriatric medicine to care for older adults. [source]


    APPLICATION OF STEPWISE AMMONIUM SULFATE PRECIPITATION AS CLEANUP TOOL FOR AN ENZYME-LINKED IMMUNOSORBENT ASSAY OF GLYPHOSATE OXIDOREDUCTASE IN GENETICALLY MODIFIED RAPE OF GT73

    JOURNAL OF FOOD BIOCHEMISTRY, Issue 5 2009
    WENTAO XU
    ABSTRACT An enzyme-linked immunosorbent assay following stepwise ammonium sulfate (AS) purification (AS-ELISA) was developed and used to detect the genetically modified (GM) rape line GT73, which contains glyphosate oxidoreductase (Gox). Gox protein encoded by the Gox gene from Achromobacter sp. was highly expressed as inclusion bodies in Escherichia coli BL21 (DE3) and purified to homogeneity by Ni2+ affinity chromatography. A simple and efficient procedure for extracting and purifying Gox protein from the seeds and leaves of GM rape was developed by means of stepwise AS precipitation. Purified polyclonal antibodies against Gox were produced, and enzyme-linked immunosorbent assay (ELISA) procedures were established to measure the Gox protein. AS-ELISA allowed 5% GMO content to be detected in the seeds of GT73 and 0.5% GMO content in the leaves of GT73 rape, which makes it an acceptable method for assessing Gox protein in GM rape of GT73. PRACTICAL APPLICATIONS Many GMOs containing the Gox gene have been approved worldwide, such as GT73 rape, 1,445 cotton and Mon832 maize. Protein-based methods are more important than DNA-based methods because the protein performs a specific, concrete function and is closely interconnected with crop traits. The AS-ELISA method can be used in the screening of GM plants, in Gox protein expression assays, and in quantitative detection for GMO labeling. The AS-ELISA Gox detection method established in this paper is being evaluated by inter-laboratory comparison in some Chinese GMO detection and assessment centers. The method may become a national and international standard and will be a beneficial supplement to DNA-based GMO detection methods. [source]

