Evaluation Metrics
Selected Abstracts

Evaluation Metrics in Classification: A Quantification of Distance-Bias
COMPUTATIONAL INTELLIGENCE, Issue 3 2003
Ricardo Vilalta
This article provides a characterization of bias for evaluation metrics in classification (e.g., Information Gain, Gini, χ², etc.). Our characterization provides a uniform representation for all traditional evaluation metrics. This representation leads naturally to a measure of the distance between the biases of two evaluation metrics. We give practical value to our measure by observing the distance between the biases of two evaluation metrics and its correlation with differences in predictive accuracy when we compare two versions of the same learning algorithm that differ only in the evaluation metric. Experiments on real-world domains show how the expectations of accuracy differences generated by the distance-bias measure correlate with actual differences when the learning algorithm is simple (e.g., searching for the best single feature or the best single rule). The correlation, however, weakens with more complex algorithms (e.g., learning decision trees). Our results show how interaction among learning components is a key factor in understanding learning performance. [source]

Video tracking system optimization using evolution strategies
INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 2 2007
Jesús García
Abstract A video-based tracking system for airport surveillance, composed of modules performing vision tasks at different levels, is adapted for operational conditions by means of Evolution Strategies (ES). An optimization procedure has been carried out considering different scenes composed of representative trajectories, supported by a global evaluation metric proposed to quantify the system performance.
The generalization problem (the search for solutions appropriate to general situations, avoiding over-adaptation to particular conditions) is approached by evaluating ES individuals over combinations of trajectories to build the fitness function. In this way, the optimization procedure covers sets of trajectories representing different types of problems. In addition, alternative operators for aggregating partial evaluations have been analysed. Results show how the optimization strategy provides sensitive tuning of performance with respect to input parameters at different levels, and how the combination of different situations improves the generalization capability of the trained system. The global performance of the final system after optimization is also compared with representative algorithms in the state of the art of visual tracking. © 2007 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 17, 75–90, 2007 [source]

Integrating art as a trans-boundary element in a radical innovation framework
R & D MANAGEMENT, Issue 1 2010
Christian Stüer
Companies have learned that radical innovations (RIs) are a prerequisite for organic growth. However, companies struggle to identify and introduce RIs, as their inherent high uncertainty and novelty challenge established organisations and management routines. To address the first challenge, companies need to take a holistic approach and design a trans-boundary environment of creativity, trans-disciplinarity and entrepreneurial spirit. This environment attracts and retains visionary people, fosters the generation of new opportunities and cultivates adaptability. The second challenge can be addressed by adapting evaluation metrics for RI, setting up flexible processes, and promoting trans-disciplinary exchange. Research has increasingly concentrated on several aspects of RI lately, but a unifying framework is still missing. Our paper bridges this gap by developing an improved theoretical framework, enhancing the existing literature and introducing art as a method to advance trans-disciplinary interchange. In a case-study approach, we have applied our framework to the research and development department of Vodafone Research and Development, Germany, as they integrate art methodically into their research and development process. Analysing their RI capabilities, we identify trans-disciplinary exchange with artists as a novel initiator and driver of RI, one which has not yet been adequately considered. [source]

Competency Testing Using a Novel Eye Tracking Device
ACADEMIC EMERGENCY MEDICINE, Issue 2009
Paul Wetzel
Assessment and evaluation metrics currently rely upon interpretation of observed performance or end points by an 'expert' observer.
Such metrics are subject to bias since they rely upon the traditional medical education model of 'see one, do one, teach one'. The Institute of Medicine's Report and the Flexner Report have demanded improvements in education metrics as a means to improve patient safety. Additionally, advancements in adult learning methods are challenging traditional medical education measures. Educators are faced with the daunting task of developing rubrics for competency testing that are currently limited by judgment and interpretation bias. Medical education is demanding learner-centered metrics to reflect quantitative and qualitative measures to document competency. Using a novel eye tracking system, educators now have the ability to know how their learners think. The system can track the focus of the learner during task performance. The eye tracking system demonstrates a learner-centered measuring tool capable of identifying deficiencies in task performance. The device achieves the goal of timely and direct feedback of performance metrics based on the learner's perspective. Employment of the eye tracking system in simulation education may identify mastery and retention deficits before compliance and quality improvement issues develop into patient safety concerns. [source]
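The split-evaluation metrics named in the first abstract (Information Gain and Gini) can be made concrete with a small sketch. The code below is an illustrative toy showing how each metric scores a candidate split of class labels; it is not the distance-bias measure developed in the paper itself:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def information_gain(parent, splits):
    """Entropy reduction achieved by partitioning `parent` into `splits`."""
    n = len(parent)
    return entropy(parent) - sum(len(s) / n * entropy(s) for s in splits)

# A balanced two-class node, split perfectly by some candidate feature:
parent = ["a", "a", "a", "b", "b", "b"]
left, right = ["a", "a", "a"], ["b", "b", "b"]
print(information_gain(parent, [left, right]))  # 1.0 bit for a perfect split
print(gini(parent))                             # 0.5 for a balanced two-class node
```

Both metrics rank this perfect split highest, but on less clean splits they can disagree on which candidate is best; differences of exactly this kind are what a distance between metric biases would quantify.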