Writing Sample (writing + sample)

Selected Abstracts


Computational methods in authorship attribution

JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 1 2009
Moshe Koppel
Statistical authorship attribution has a long history, culminating in the use of modern machine learning classification methods. Nevertheless, most of this work suffers from the limitation of assuming a small closed set of candidate authors and essentially unlimited training text for each. Real-life authorship attribution problems, however, typically fall short of this ideal. Thus, following a detailed discussion of previous work, three scenarios are considered here for which solutions to the basic attribution problem are inadequate. In the first variant, the profiling problem, there is no candidate set at all; in this case, the challenge is to provide as much demographic or psychological information as possible about the author. In the second variant, the needle-in-a-haystack problem, there are many thousands of candidates, for each of whom we might have only a very limited writing sample. In the third variant, the verification problem, there is no closed candidate set, but there is one suspect; in this case, the challenge is to determine whether the suspect is or is not the author. For each variant, it is shown how machine learning methods can be adapted to handle the special challenges of that variant. [source]
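
As a concrete illustration of the basic closed-set attribution problem the abstract takes as its starting point, the sketch below treats attribution as text classification: character n-gram features feed a linear classifier trained on texts of known authorship. The feature choice, the scikit-learn toolkit, and the toy corpus are assumptions made here for illustration; the abstract does not prescribe a particular method or toolkit.

```python
# Minimal sketch of closed-set authorship attribution as text classification.
# Assumptions (not from the abstract): scikit-learn, character n-gram features,
# a linear SVM, and a tiny invented corpus standing in for real training text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy training data: a few short texts per known candidate author.
texts = [
    "The ship cut through fog and cold water toward the grey harbour.",
    "Fog again; the harbour lights were weak and the water very cold.",
    "I maintain that the argument, as stated, rests on a false premise.",
    "The premise is false, and therefore the conclusion does not follow.",
]
authors = ["author_A", "author_A", "author_B", "author_B"]

# Character n-grams are a common stylistic feature in attribution work;
# they capture punctuation habits, affixes, and function-word usage.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LinearSVC(),
)
model.fit(texts, authors)

# Attribute an anonymous writing sample to one of the closed set of candidates.
unknown = "Cold water, grey fog, and the harbour somewhere ahead."
print(model.predict([unknown])[0])  # expected to lean toward author_A
```

The profiling, needle-in-a-haystack, and verification variants described in the abstract would each require modifying this basic setup (for example, similarity- or one-class-based methods for verification); the sketch does not attempt those adaptations.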


Developmental, gender, and practical considerations in scoring curriculum-based measurement writing probes

PSYCHOLOGY IN THE SCHOOLS, Issue 4 2003
Christine Kerres Malecki
The present study focused on CBM written language procedures by conducting an investigation of the developmental, gender, and practical considerations surrounding three categories of CBM written language scoring indices: production-dependent, production-independent, and accurate-production. Students in first through eighth grade generated a three-minute writing sample in the fall and spring of the school year using standard CBM procedures. The writing samples were scored using all three types of scoring indices to assess trends in the indices for students of varying ages and genders, and to assess the time required to score writing samples using the various indices. With only one exception, older students outperformed younger students on all of the scoring indices. Although at the middle school level students' levels of writing fluency and writing accuracy were not closely associated, at the younger grade levels the CBM indices were significantly related. With regard to gender differences, girls outperformed boys on measures of writing fluency at all grade levels. The average scoring time per writing sample ranged from 1.5 to 2.5 minutes, depending on grade level. © 2003 Wiley Periodicals, Inc. Psychol Schs 40: 379–390, 2003. [source]
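
As a rough illustration of how the three categories of indices differ, the sketch below computes one commonly cited example of each: total words written (production-dependent), words spelled correctly (accurate-production), and percentage of words spelled correctly (production-independent). The choice of these particular indices, the tiny word list standing in for a spelling check, and the sample text are assumptions made for illustration; the study itself used standard CBM scoring procedures, not this code.

```python
# Rough sketch of the three CBM scoring-index categories on a short writing sample.
# Assumptions (not from the abstract): the specific indices shown and the toy
# word list used as a stand-in for a real spelling dictionary.
import re

KNOWN_WORDS = {  # hypothetical stand-in for a spelling dictionary
    "the", "dog", "ran", "fast", "to", "park", "and", "played", "with", "a", "ball",
}

def score_sample(text: str) -> dict:
    words = re.findall(r"[A-Za-z']+", text.lower())
    correct = [w for w in words if w in KNOWN_WORDS]
    return {
        # Production-dependent: rewards sheer output (fluency).
        "total_words_written": len(words),
        # Accurate-production: rewards output that is also accurate.
        "words_spelled_correctly": len(correct),
        # Production-independent: accuracy regardless of how much was written.
        "percent_spelled_correctly": 100.0 * len(correct) / len(words) if words else 0.0,
    }

sample = "The dog ran fast to the park and playd with a ball"
print(score_sample(sample))
# {'total_words_written': 12, 'words_spelled_correctly': 11, 'percent_spelled_correctly': 91.67}
```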


Writing in the Secondary Foreign Language Classroom: The Effects of Prompts and Tasks on Novice Learners of French

MODERN LANGUAGE JOURNAL, Issue 2 2000
Denise Paige Way
This study investigated the effects of 3 different writing tasks (descriptive, narrative, and expository) and 3 different writing prompts (bare, vocabulary, and prose model) on 937 writing samples collected from 330 novice learners enrolled in 15 classes of Levels 1 and 2 high school French. To assess the quality, fluency, syntactic complexity, and accuracy of the writing samples, the researchers employed 4 evaluation methods: holistic scoring, length of product, mean length of T-units, and percentage of correct T-units. Results indicate that the descriptive task was the easiest and the expository task the most difficult. The prose model prompts produced the highest mean scores, and the bare prompts produced the lowest. Based on these findings, the researchers question whether the description of a novice writer in the ACTFL Proficiency Guidelines (1986) should be used as a blueprint for curriculum development and textbook construction for secondary novice foreign language learners. [source]
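
The two T-unit measures named above reduce to simple ratios once a writing sample has been segmented into T-units and each T-unit judged correct or not, both of which typically require a human rater. The sketch below computes mean T-unit length and percentage of correct T-units from such annotations; the data structure and the sample values are invented here for illustration and are not drawn from the study.

```python
# Sketch of the two T-unit measures named in the abstract, computed from
# hand-annotated data. A T-unit (minimal terminable unit) is an independent
# clause plus any attached dependent clauses; segmentation and the correct /
# incorrect judgement are assumed to have been made by a human rater.
from dataclasses import dataclass

@dataclass
class TUnit:
    words: int      # length of the T-unit in words
    correct: bool   # rater's judgement of grammatical correctness

def mean_tunit_length(tunits: list[TUnit]) -> float:
    return sum(t.words for t in tunits) / len(tunits)

def percent_correct_tunits(tunits: list[TUnit]) -> float:
    return 100.0 * sum(t.correct for t in tunits) / len(tunits)

# Hypothetical annotation of one novice learner's writing sample.
sample = [TUnit(6, True), TUnit(9, False), TUnit(4, True), TUnit(7, True)]
print(mean_tunit_length(sample))       # 6.5
print(percent_correct_tunits(sample))  # 75.0
```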

