Achievement Tests (achievement + test)
Selected Abstracts

Validity of the Comprehensive Receptive and Expressive Vocabulary Test in assessment of children with speech and learning problems
PSYCHOLOGY IN THE SCHOOLS, Issue 6, 2002. Teresa Smith.
The researchers investigated the construct, predictive, and differential validity of the Comprehensive Receptive and Expressive Vocabulary Test (CREVT). Participants were 243 public school students, ages 5.5 to 17.25 years, representing four primary disabilities: Learning Disability (n = 115), Learning Disability with Speech Impairment (n = 29), Mental Retardation (n = 40), and Speech Impairment (n = 59). Adequate construct validity for the CREVT was documented using the Wechsler Intelligence Scale for Children-III as a criterion. The CREVT also significantly predicted scores on the Wide Range Achievement Test-3, and it effectively differentiated among students with different disabilities. These findings suggest that the CREVT may be helpful in identifying the presence of learning problems. © 2002 Wiley Periodicals, Inc. Psychol Schs 39: 613-619, 2002. [source]

Effectiveness of problem-based learning on academic performance in genetics
BIOCHEMISTRY AND MOLECULAR BIOLOGY EDUCATION, Issue 6, 2007. Gülsüm Araz.
Abstract: This study compared the effectiveness of problem-based learning (PBL) and traditional lecture-based instruction on elementary school students' academic achievement and performance skills in a science unit on genetics, while controlling for reasoning ability. The two instructional methods were randomly assigned to intact classes of two different teachers; each teacher had both PBL and traditional classes. While students in PBL classes (n = 126) worked cooperatively on ill-structured problems with the guidance of the teacher, students in traditional classes (n = 91) received instruction based on the teacher's explanations, discussions, and textbooks.
A Genetics Achievement Test was developed by the researchers to measure academic achievement and performance skills. Multivariate analysis of covariance showed that PBL students had higher academic achievement and performance skills scores (M = 11.44 and M = 2.67, respectively) than students in traditional classes (M = 10.91 and M = 2.20, respectively), indicating that PBL students tend to better acquire scientific conceptions related to genetics and to integrate and organize that knowledge. Moreover, reasoning ability explained a significant portion of the variance in academic achievement and performance skills scores. [source]

Can High School Achievement Tests Serve to Select College Students?
EDUCATIONAL MEASUREMENT: ISSUES AND PRACTICE, Issue 2, 2010. Adriana D. Cimetta.
Postsecondary schools have traditionally relied on admissions tests such as the SAT and ACT to select students. With high school achievement assessments in place in many states, it is important to ascertain whether scores from those exams can either supplement or supplant conventional admissions tests. In this study we examined whether the Arizona Instrument to Measure Standards (AIMS) high school tests could serve as a useful predictor of college performance. Stepwise regression analyses with a predetermined order of variable entry revealed that AIMS generally did not account for additional performance variation when added to high school grade-point average (HSGPA) and SAT. However, in a cohort of students that took the test for graduation purposes, AIMS did account for about the same proportion of variance as the SAT when added to a model that included HSGPA. The predictive value of both SAT and AIMS was generally the same for Caucasian, Hispanic, and Asian American students. The ramifications of universities using high school achievement exams as predictors of college success, in addition to or in lieu of traditional measures, are discussed.
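The incremental-validity logic of the AIMS study above (does a state achievement test add predictive variance beyond HSGPA and SAT?) can be sketched as a hierarchical regression comparing R-squared before and after the new predictor enters. Everything below is synthetic and illustrative, not the study's data or exact model; the variable names simply mirror the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

# Synthetic predictors: HSGPA, SAT, and a state test ("AIMS")
# built to correlate strongly with SAT, as the abstract implies.
hsgpa = rng.normal(3.0, 0.5, n)
sat = 0.6 * (hsgpa - 3.0) + rng.normal(0.0, 0.4, n)
aims = 0.8 * sat + rng.normal(0.0, 0.3, n)
college_gpa = 0.5 * hsgpa + 0.4 * sat + rng.normal(0.0, 0.4, n)

def r_squared(X, y):
    """R^2 from an OLS fit with an intercept column."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

# Step 1: baseline model; Step 2: add the state test.
r2_base = r_squared(np.column_stack([hsgpa, sat]), college_gpa)
r2_full = r_squared(np.column_stack([hsgpa, sat, aims]), college_gpa)
print(f"R^2 with HSGPA + SAT      : {r2_base:.3f}")
print(f"Delta R^2 from adding AIMS: {r2_full - r2_base:.3f}")
```

Because the synthetic state test is largely redundant with SAT, the delta R-squared comes out near zero, which is the pattern the abstract reports for the full sample.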
[source]

The Quality of Content Analyses of State Student Achievement Tests and Content Standards
EDUCATIONAL MEASUREMENT: ISSUES AND PRACTICE, Issue 4, 2008. Andrew C. Porter.
This article examines the reliability of content analyses of state student achievement tests and state content standards. We use data from two states in three grades in mathematics and in English language arts and reading to explore differences by state, content area, grade level, and document type. Using a generalizability framework, we find that reliabilities for four coders are generally greater than .80; the two problematic reliabilities are partly explained by an odd rater out. We conclude that the content analysis procedures, when used with at least five raters, provide reliable information to researchers, policymakers, and practitioners about the content of assessments and standards. [source]

Identifying Sources of Differential Item and Bundle Functioning on Translated Achievement Tests: A Confirmatory Analysis
JOURNAL OF EDUCATIONAL MEASUREMENT, Issue 2, 2001. Mark J. Gierl.
Increasingly, tests are being translated and adapted into different languages. Differential item functioning (DIF) analyses are often used to identify non-equivalent items across language groups, but few studies have focused on understanding why some translated items produce DIF. The purpose of this study is to identify sources of differential item and bundle functioning on translated achievement tests using substantive and statistical analyses. A substantive analysis of existing DIF items was conducted by an 11-member committee of testing specialists, who identified four sources of translation DIF. Two certified translators used these four sources to categorize a new set of DIF items from Grade 6 and 9 Mathematics and Social Studies Achievement Tests. Each item was associated with a specific source of translation DIF, and each item was anticipated to favor a specific group of examinees.
Then a statistical analysis of the items in each category was conducted using SIBTEST. The translators sorted the mathematics DIF items into three of the sources and correctly predicted the favored group for seven of the eight items or bundles of items across the two grade levels. They sorted the social studies DIF items into four sources and correctly predicted the favored group for eight of the 13 items or bundles of items across the two grade levels. The majority of items in both mathematics and social studies were associated with differences in the words, expressions, or sentence structure of items that are not inherent to the language and/or culture. By combining substantive and statistical DIF analyses, researchers can study the sources of DIF and build a body of confirmed DIF hypotheses that may be used to develop guidelines and test construction principles for reducing DIF on translated tests. [source]

A Domain-level Approach to Describing Growth in Achievement
JOURNAL OF EDUCATIONAL MEASUREMENT, Issue 1, 2005. E. Matthew Schulz.
Descriptions of growth in educational achievement often rely on the notion that higher-level students can do whatever lower-level students can do, plus at least one more thing. This article presents a method of supporting such descriptions using data from a subject-area achievement test. Multiple content domains with an expected order of difficulty were defined within the Grade 8 National Assessment of Educational Progress (NAEP) in mathematics. Teachers were able to reliably classify items into the domains by content. Using expected percentage-correct scores on the domains, it was possible to describe each achievement level boundary (Basic, Proficient, and Advanced) on the NAEP scale by patterns of skill that include both mastery and non-mastery, and to show that higher achievement levels are associated with mastery of more skills.
We conclude that general achievement tests like NAEP can be used to provide criterion-referenced descriptions of growth in achievement as a sequential mastery of skills. [source]

Real world contexts in PISA science: Implications for context-based science education
JOURNAL OF RESEARCH IN SCIENCE TEACHING, Issue 8, 2009. Peter J. Fensham.
Abstract: The PISA assessment instruments for students' scientific literacy in 2000, 2003, and 2006 have each consisted of units built around a real-world context involving science and technology, about which students are asked a number of cognitive and affective questions. This article discusses several issues arising from this use of S&T contexts in PISA and their implications for the current renewed interest in context-based science education. Suitably chosen contexts can engage both boys and girls. Secondary analyses of student responses, using the contextual sets of items as the unit of analysis, provide new information about levels of performance in PISA 2006 Science. Embedding affective items in the achievement test did not lead to significant gender-by-context interactions, and context interactions were smaller than competency interactions. A number of implications for context-based science teaching and learning are outlined, and the PISA 2006 Science test is suggested as a model for its assessment. © 2009 Wiley Periodicals, Inc. J Res Sci Teach 46: 884-896, 2009. [source]

Testing Students with Special Needs: A Model for Understanding the Interaction Between Assessment and Student Characteristics in a Universally Designed Environment
EDUCATIONAL MEASUREMENT: ISSUES AND PRACTICE, Issue 3, 2008. Leanne R. Ketterlin-Geller.
This article presents a model of assessment development integrating student characteristics with the conceptualization, design, and implementation of standardized achievement tests. The model extends the assessment triangle proposed by the National Research Council (Pellegrino, Chudowsky, & Glaser, 2001) to consider the needs of students with disabilities and English learners on two dimensions: cognitive interaction and observation interaction. Specific steps in the test development cycle for including students with special needs are proposed, following the guidelines provided by Downing (2006). Because this model considers the range of student needs before test development commences, student characteristics are supported by applying the principles of universal design and by appropriately aligning accommodations to student needs. Specific guidelines for test development are presented. [source]

Academic Achievement Through FLES: A Case for Promoting Greater Access to Foreign Language Study Among Young Learners
MODERN LANGUAGE JOURNAL, Issue 1, 2010. Carolyn Taylor.
The No Child Left Behind Act of 2001 established foreign languages as a core curricular content area; however, instructional emphasis continues to be placed on curricular areas that factor into state educational accountability programs. The present study explored whether the foreign language study of first-year Grade 3 foreign language students who continued that study through Grade 5 in Louisiana public schools contributed to their academic achievement in curricular areas tested on the Iowa Tests of Basic Skills (ITBS) and the Louisiana Educational Assessment Program for the 21st Century (LEAP 21) test. Notable findings emerged. First, foreign language (FL) students significantly outperformed their non-FL peers on every test (English language arts, mathematics, science, and social studies) of the Grade 4 LEAP 21.
Second, regardless of the test, whether the Grade 4 criterion-referenced LEAP 21 or the Grade 5 norm-referenced ITBS, FL students significantly outperformed their non-FL counterparts on language achievement tests at each grade level. [source]

Modeling the effects of health status and the educational infrastructure on the cognitive development of Tanzanian schoolchildren
AMERICAN JOURNAL OF HUMAN BIOLOGY, Issue 3, 2005. Alok Bhargava.
This paper models the proximate determinants of school attendance and of scores on cognitive and educational achievement tests and on school examinations for over 600 schoolchildren in the Control group of a randomized trial in Tanzania, in which children in the Intervention group heavily infected with hookworm and schistosomiasis received treatment. The modeling approach used a random effects framework and incorporated the inter-relationships between school attendance and performance on the various tests, controlling for children's health status, socioeconomic variables, grade level, and the educational infrastructure. The empirical results showed the importance of variables such as children's height and hemoglobin concentration for the scores, especially on educational achievement tests that are easy to implement in developing countries. Teacher experience and work assignments were also significant predictors of scores on educational achievement tests, and there was some evidence of multiplicative effects of children's height and work assignments on the test scores. Lastly, some comparisons were made between changes in the test scores of treated children in the Intervention group and those of untreated children in the Control group. Am. J. Hum. Biol. 17: 280-292, 2005. © 2005 Wiley-Liss, Inc.
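The random effects framework named in the Tanzanian study above rests on decomposing repeated scores per child into a between-child component and a within-child component. The sketch below is not the paper's multivariate model (which also handles attendance and covariates); it is a minimal one-way random-intercept variance decomposition on synthetic data, with all sample sizes and variances invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
k, m = 200, 4                # 200 children, 4 repeated test scores each
sigma_b, sigma_w = 1.0, 0.5  # between- and within-child SDs (synthetic)

# Each child's scores share a child-specific random intercept.
child_effect = rng.normal(0.0, sigma_b, size=(k, 1))
scores = 50.0 + child_effect + rng.normal(0.0, sigma_w, size=(k, m))

# Balanced one-way random-effects ANOVA, method of moments:
# E[MSB] = sigma_w^2 + m * sigma_b^2 and E[MSW] = sigma_w^2.
child_means = scores.mean(axis=1)
msb = m * child_means.var(ddof=1)        # between-child mean square
msw = scores.var(axis=1, ddof=1).mean()  # within-child mean square
sigma2_b_hat = (msb - msw) / m
icc = sigma2_b_hat / (sigma2_b_hat + msw)
print(f"Estimated intraclass correlation: {icc:.2f} (true value 0.80)")
```

The intraclass correlation printed at the end is the share of score variance attributable to stable differences between children; it is what a random-intercept model separates from test-to-test noise before covariates such as height or hemoglobin are added.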
[source]

A test for geographers: the geography of educational achievement in Toronto and Hamilton, 1997
THE CANADIAN GEOGRAPHER / LE GÉOGRAPHE CANADIEN, Issue 3, 2000. Richard Harris.
The recent introduction of standardised achievement tests in several provinces has created an opportunity for Canadian geographers to contribute to public and theoretical debates. Geographers are well equipped to comprehend and analyse the effects that neighbourhoods have upon pupil achievement. Independent of family background and school funding, such effects may be stronger in education than in other fields, such as voting behaviour and health research, yet they have been ignored in recent public debates. They should be considered if informed judgements are to be made about whether specific teachers, schools, and boards are doing an adequate job. Analysis of the 1997 Ontario Grade 3 test results for public schools in the City of Toronto and in Hamilton-Wentworth indicates that social class had a greater effect on pupil achievement than language background. Differences in the determinants of achievement between these two urban centres may be attributable to local variations in occupational structure and residential patterns. [source]
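The Toronto/Hamilton finding above (social class mattering more than language background) is the kind of claim usually supported by comparing standardized regression coefficients, which put predictors measured in different units on a common scale. The sketch below uses entirely synthetic school-level data with invented variable names and magnitudes; it only illustrates the comparison, not the 1997 analysis itself.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300  # hypothetical schools

# Synthetic predictors: a social-class index and the share of pupils
# with a non-English home language, mildly correlated with each other.
social_class = rng.normal(0.0, 1.0, n)
lang = 0.3 * social_class + rng.normal(0.0, 1.0, n)
mean_score = 0.6 * social_class + 0.2 * lang + rng.normal(0.0, 0.7, n)

def standardized_betas(X, y):
    """OLS coefficients after z-scoring every variable (no intercept
    needed, since all z-scored variables have mean zero)."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    return beta

betas = standardized_betas(np.column_stack([social_class, lang]), mean_score)
print(f"standardized beta, social class      : {betas[0]:.2f}")
print(f"standardized beta, language background: {betas[1]:.2f}")
```

On these synthetic data the social-class coefficient dominates by construction; in a real analysis the same comparison would be read off the fitted model, alongside significance tests.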