Test Content

Selected Abstracts


A meta-analysis of national research: Effects of teaching strategies on student achievement in science in the United States

JOURNAL OF RESEARCH IN SCIENCE TEACHING, Issue 10 2007
Carolyn M. Schroeder
This project was a meta-analysis of U.S. research published from 1980 to 2004 on the effects of specific science teaching strategies on student achievement. The six phases of the project were study acquisition, study coding, determination of intercoder objectivity, establishment of criteria for inclusion of studies, computation of effect sizes for statistical analysis, and conduct of the analyses. Studies were required to have been carried out in the United States, to have been experimental or quasi-experimental, and to have reported an effect size or the statistics necessary to calculate one. Sixty-one studies met the criteria for inclusion in the meta-analysis. Analysis of the studies revealed eight categories of teaching strategies (effect sizes in parentheses): Questioning Strategies (0.74); Manipulation Strategies (0.57); Enhanced Material Strategies (0.29); Assessment Strategies (0.51); Inquiry Strategies (0.65); Enhanced Context Strategies (1.48); Instructional Technology (IT) Strategies (0.48); and Collaborative Learning Strategies (0.95). All of these effect sizes were judged to be significant. Regression analysis revealed that internal validity was influenced by Publication Type, Type of Study, and Test Type; external validity was not influenced by Publication Year, Grade Level, Test Content, or Treatment Categories. The major implication of this research is that empirical evidence now supports the effectiveness of these alternative teaching strategies in science. © 2007 Wiley Periodicals, Inc. J Res Sci Teach 44: 1436–1460, 2007
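The abstract does not spell out the effect-size formula used; in meta-analyses of this kind the standardized mean difference (Cohen's d with a pooled standard deviation) is a common choice. A minimal sketch, with purely hypothetical group statistics:

```python
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference between a treatment and a control group,
    using the pooled standard deviation as the scaling unit."""
    pooled_sd = math.sqrt(
        ((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)
    )
    return (mean_t - mean_c) / pooled_sd

# Hypothetical achievement-test summaries for one study:
d = cohens_d(mean_t=78.0, mean_c=72.0, sd_t=10.0, sd_c=12.0, n_t=30, n_c=30)
print(round(d, 2))  # prints 0.54
```

On this scale, the reported category means (e.g., 0.65 for Inquiry Strategies, 1.48 for Enhanced Context Strategies) are averages of such per-study values.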


Use of Knowledge, Skill, and Ability Statements in Developing Licensure and Certification Examinations

EDUCATIONAL MEASUREMENT: ISSUES AND PRACTICE, Issue 1 2005
Ning Wang
The task inventory approach is commonly used in job analysis for establishing content validity evidence supporting the use and interpretation of licensure and certification examinations. Although the results of a task inventory survey provide job task-related information that can be used as a reliable and valid source for test development, it is often the knowledge, skills, and abilities (KSAs) required for performing the tasks, rather than the job tasks themselves, that are tested by licensure and certification examinations. This article presents a framework that addresses the important role of KSAs in developing and validating licensure and certification examinations. This includes the use of KSAs in linking job task survey results to the test content outline, transferring job task weights to test specifications, and eventually applying the results to the development of the test items. The impact of using KSAs in the development of test specifications is illustrated with job analyses from two diverse professions. One method for transferring job task weights from the job analysis to test specifications through KSAs is also presented, along with examples. The two examples presented in this article are taken from nursing certification and real estate licensure programs. However, the methodology for using KSAs to link job tasks and test content is also applicable in the development of teacher credentialing examinations.
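The abstract does not detail the article's weight-transfer method. One illustrative approach (the task weights, KSA labels, and task-KSA linkages below are all hypothetical) splits each surveyed task's weight evenly among the KSAs linked to it, producing KSA weights that still sum to 1 and can serve directly as test-specification percentages:

```python
# Hypothetical task weights from a job analysis survey (already normalized)
task_weights = {"T1": 0.40, "T2": 0.35, "T3": 0.25}

# Hypothetical linkage: each KSA mapped to the job tasks it supports
ksa_links = {
    "K1": ["T1", "T2"],
    "K2": ["T2", "T3"],
    "K3": ["T3"],
}

def ksa_weights(task_weights, ksa_links):
    """Transfer task weights to KSA weights: each task's weight is split
    evenly among the KSAs linked to it, so the total weight is preserved."""
    # Invert the linkage: which KSAs does each task map to?
    ksas_per_task = {}
    for ksa, tasks in ksa_links.items():
        for task in tasks:
            ksas_per_task.setdefault(task, []).append(ksa)

    weights = {ksa: 0.0 for ksa in ksa_links}
    for task, w in task_weights.items():
        share = w / len(ksas_per_task[task])
        for ksa in ksas_per_task[task]:
            weights[ksa] += share
    return weights

print(ksa_weights(task_weights, ksa_links))
# prints {'K1': 0.575, 'K2': 0.3, 'K3': 0.125}
```

Other splitting rules (e.g., weighting the split by rated KSA criticality rather than evenly) fit the same skeleton; the key property is that the survey-derived task weights flow through the linkage onto the test content outline.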


Teaching for the Test: Validity, Fairness, and Moral Action

EDUCATIONAL MEASUREMENT: ISSUES AND PRACTICE, Issue 3 2003
Linda Crocker
In response to heightened levels of assessment activity at the K-12 level to meet requirements of the No Child Left Behind Act of 2001, measurement professionals are called to focus greater attention on four fundamental areas of measurement research and practice: (a) improving the research infrastructure for validation methods involving judgments of test content; (b) expanding the psychometric definition of fairness in achievement testing; (c) developing guidelines for validation studies of test use consequences; and (d) preparing teachers for new roles in instruction and assessment practice. Illustrative strategies for accomplishing these goals are outlined.


Content Validation Is Useful for Many Things, but Validity Isn't One of Them

INDUSTRIAL AND ORGANIZATIONAL PSYCHOLOGY, Issue 4 2009
KEVIN R. MURPHY
Content-oriented validation strategies establish the validity of selection tests as predictors of performance by comparing the content of the tests with the content of the job. These comparisons turn out to have little if any bearing on the predictive validity of selection tests. There is little empirical support for the hypothesis that the match between job content and test content influences validity, and there are often structural factors in selection (e.g., positive correlations among selection tests) that strongly limit the possible influence of test content on validity. Comparisons between test content and job content have important implications for the acceptability of testing, the defensibility of tests in legal proceedings, and the transparency of test development and validation, but these comparisons have little if any bearing on validity.