Evaluation Design (evaluation + design)
Selected Abstracts

Evaluation of Professional Development for Language Teachers in California
FOREIGN LANGUAGE ANNALS, Issue 2 2002. Albert S. Lozano
This article describes the origin and rationale of the California Foreign Language Project (CFLP) and discusses the importance of professional development programs, a topic of growing interest given the nationwide focus on student performance and school reform. One of the nine content areas of the California Subject Matter Project, CFLP is a voluntary project that served 609 elementary, secondary, and postsecondary foreign language teachers from 43 counties in 1998/1999. Finally, the components of professional development program evaluation, and specifically of CFLP's evaluation design, are presented, along with the findings from the 1998/1999 program year. [source]

Looking at the evidence: What variations in practice might indicate
NEW DIRECTIONS FOR EVALUATION, Issue 113 2007. Lois-ellin Datta
This chapter presents the findings from a review of the practice of evaluation in federal agencies in an attempt to inform policies on method choice. The author explores whether federal agencies differ in their approaches to evaluation design and the factors that influence these differences. The nature of the programs, agency culture, evaluator training and experience, and the politics of methodology all emerge as possible context-appropriate influences on method choice. [source]

Evaluation of a teen dating violence social marketing campaign: Lessons learned when the null hypothesis was accepted
NEW DIRECTIONS FOR EVALUATION, Issue 110 2006. Emily F. Rothman
This chapter describes an evaluation of a teen dating violence prevention media campaign, including the evaluation design and results, and the challenges that arose during the evaluation process. It makes recommendations for future evaluations of mass media campaigns that target adolescents. [source]

Implementation of school-based wellness centers
PSYCHOLOGY IN THE SCHOOLS, Issue 5 2003. Nancy G. Guerra
This article describes the planning, implementation, and evaluation of school-based Wellness Centers operated by the Riverside Unified School District in Riverside, CA, as part of the Safe Schools/Healthy Students Initiative funded by the Substance Abuse and Mental Health Services Administration (SAMHSA). We describe the program as planned, in terms of the theoretical model for the intervention and the evaluation design, and discuss the actual implementation, including accomplishments and challenges. The program was designed to promote positive development and wellness for individual students via self- and teacher-referrals for personal and mental health problems handled through a case management and referral process, support groups, and other activities such as after-school programs, mentoring, tutoring, and parent training. An effort was also made to promote wellness at the school level by providing wellness campaigns, information, and compatible policies and procedures designed to enhance healthy development. Our observations are based on a qualitative assessment that was a component of the evaluation. A more detailed evaluation examining the impact of school-wide and student-focused activities on academic and behavioral outcomes is currently underway. However, we do include comments from students suggesting that the Wellness Center concept holds much promise for school-based mental health and violence prevention services. © 2003 Wiley Periodicals, Inc. Psychol Schs 40: 473–487, 2003. [source]
Comprehensive evaluation of an online tobacco control continuing education course in Canada
THE JOURNAL OF CONTINUING EDUCATION IN THE HEALTH PROFESSIONS, Issue 4 2008. Kirsten E. Sears MHSc
Introduction: To respond to the increasing need to build capacity for planning, implementing, and supporting tobacco control strategies, an evidence-based, online continuing education (CE) course aimed at Canadian public health professionals was developed. The purpose of this study was to comprehensively evaluate the course, Tobacco and Public Health: From Theory to Practice (http://tobaccocourse.otru.org). Methods: Rossett and McDonald's revision of Kirkpatrick's four-level evaluation model for training programs guided the evaluation design. A pre-, post-, and follow-up single group design assessed immediate reactions to course modules, knowledge change and retention, practice change, and overall perceived value of the course. Six external peer reviewers evaluated course module content. Results: Fifty-nine participants completed all three course modules and the final online questionnaire at time 3, representing a response rate of 78%. Significant knowledge gains occurred between times 1 and 2 (p < 0.001). Although time 3 scores remained higher than time 1 scores for each module (p < 0.001), they decreased significantly between times 2 and 3 (p < 0.001). The majority of participants (93%) felt the topics covered were useful to their daily work. All but one participant felt the course was a good investment of their time, and nearly all participants (97%) stated they would recommend the course to others. Peer reviewers found that module content flowed well and was comprehensive. Discussion: This comprehensive evaluation was valuable both for assessing whether course goals were achieved and for identifying areas for course improvement. We expect this design would be a useful model to evaluate other online continuing education courses. [source]
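The Sears abstract above reports a pre-, post-, and follow-up single group design in which knowledge scores are compared across three time points. As a rough illustration only, the sketch below shows how such pairwise time-point comparisons might be computed; the abstract does not state which statistical test was used, so the paired t-tests, the simulated scores, and all variable names here are assumptions rather than the authors' actual analysis.

    # Illustrative sketch only: simulated data standing in for the course's
    # knowledge scores at times 1 (pre), 2 (post), and 3 (follow-up).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 59  # participants who completed all three time points (per the abstract)
    pre = rng.normal(60, 10, size=n)            # time 1 scores (hypothetical)
    post = pre + rng.normal(15, 5, size=n)      # time 2: gain after the course
    followup = post - rng.normal(5, 4, size=n)  # time 3: partial decay of gains

    def paired_change(a, b, label):
        """Print the mean change and a paired t-test between two time points."""
        t, p = stats.ttest_rel(b, a)
        print(f"{label}: mean change = {np.mean(b - a):+.1f}, t = {t:.2f}, p = {p:.4g}")

    paired_change(pre, post, "time 1 -> time 2")       # expected: significant gain
    paired_change(post, followup, "time 2 -> time 3")  # expected: significant decline
    paired_change(pre, followup, "time 1 -> time 3")   # expected: net gain retained

A repeated-measures ANOVA, or a correction for the three pairwise comparisons, would be the more conventional treatment of a three-time-point design; the sketch simply mirrors the pairwise contrasts the abstract reports.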
Revised STandards for Reporting Interventions in Clinical Trials of Acupuncture (STRICTA): Extending the CONSORT Statement
JOURNAL OF EVIDENCE BASED MEDICINE, Issue 3 2010. Hugh MacPherson
The STandards for Reporting Interventions in Clinical Trials of Acupuncture (STRICTA) were published in five journals in 2001 and 2002. These guidelines, in the form of a checklist and explanations for use by authors and journal editors, were designed to improve reporting of acupuncture trials, particularly the interventions, thereby facilitating their interpretation and replication. Subsequent reviews of the application and impact of STRICTA have highlighted its value as well as scope for improvement and revision. To manage the revision process, a collaboration among the STRICTA Group, the CONSORT Group, and the Chinese Cochrane Centre was formed in 2008. An expert panel of 47 participants was convened and provided electronic feedback on a revised draft of the checklist. At a subsequent face-to-face meeting in Freiburg, a group of 21 participants further revised the STRICTA checklist and planned its dissemination. The new STRICTA checklist, which is an official extension of CONSORT, includes six items and 17 sub-items. These set out reporting guidelines for the acupuncture rationale, the details of needling, the treatment regimen, other components of treatment, the practitioner background, and the control or comparator interventions. As part of this revision process, the explanations for each item have also been elaborated, and examples of good reporting for each item are provided. In addition, the word "controlled" in STRICTA is replaced by "clinical," to indicate that STRICTA is applicable to a broad range of clinical evaluation designs, including uncontrolled outcome studies and case reports. It is intended that the revised STRICTA, in conjunction with both the main CONSORT Statement and its extension for nonpharmacologic treatment, will raise the quality of reporting of clinical trials of acupuncture. [source]

Consequences of No Child Left Behind on evaluation purpose, design, and impact
NEW DIRECTIONS FOR EVALUATION, Issue 117 2008. Linda Mabry
As an outgrowth of No Child Left Behind's narrow definition of scientifically based research, the priority given to certain quantitative evaluation designs has sparked debate within the evaluation community. Federal mandates for particular evaluation methodologies run counter to evaluation practice and to the direction of most evaluation theorists, who advocate for flexibility and adaptability in methods choices. The impact of this mandate for randomized clinical trials as the sine qua non of evaluation methods is not yet discernible, but its potential is explored through an analogous example involving the World Bank. © Wiley Periodicals, Inc. [source]