Subjective Judgments (subjective + judgment)
Selected Abstracts

Subjective and objective perception of upper incisors
JOURNAL OF ORAL REHABILITATION, Issue 7 2006
S. WOLFART

Summary: The purpose of this study was to evaluate the subjective judgment (SJ) of patients on their own dental appearance and to correlate the results with objective measurements (OM) of their dentition concerning the appearance of the upper incisors. Seventy-five participants (30 men and 45 women) with normal well-being were included in the study. In a questionnaire they judged the appearance of their upper incisors. Furthermore, OM were evaluated by the investigator with regard to the following points: (i) absolute length of the upper central incisors, (ii) their length exposed during laughing, (iii) width-to-length ratio of the central incisors and (iv) the proportion between the width of the lateral and central incisors. The subjective results were registered on visual-analogue scales. For the objective results, standardized photographs were taken. No gender-dependent differences could be found for the objectively measured parameters (median): OM1, 10.7 mm; OM2, 8.1 mm; OM3, 0.81; OM4, 0.79. However, significant correlations between subjective and objective results (SJ1/OM1, SJ2/OM2, SJ3/OM3) could be shown for men, but not for women. The maxima of the calculated regression curves for men reflect the 'golden standard' values well known from the literature. The degree of satisfaction with the appearance of the anterior incisors in accordance with golden standard values is higher for men than for women. [source]

Automating survey coding by multiclass text categorization techniques
JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 14 2003
Daniela Giorgetti

Survey coding is the task of assigning a symbolic code from a predefined set of such codes to the answer given in response to an open-ended question in a questionnaire (aka survey).
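Framed as supervised text categorization, the coding step can be sketched with a small multinomial naive Bayes classifier, one of the two learners the article evaluates (the other being multiclass support vector machines). A minimal, self-contained sketch; the toy answers and code labels below are invented for illustration, standing in for a corpus of precoded survey answers:

```python
import math
from collections import Counter, defaultdict

def train_nb(answers, codes):
    """Fit per-code word counts for multinomial naive Bayes."""
    word_counts = defaultdict(Counter)
    code_counts = Counter(codes)
    vocab = set()
    for text, code in zip(answers, codes):
        words = text.lower().split()
        word_counts[code].update(words)
        vocab.update(words)
    return word_counts, code_counts, vocab

def predict_nb(model, answer):
    """Return the code with the highest smoothed log-probability."""
    word_counts, code_counts, vocab = model
    total = sum(code_counts.values())
    best_code, best_lp = None, float("-inf")
    for code in code_counts:
        lp = math.log(code_counts[code] / total)            # class prior
        denom = sum(word_counts[code].values()) + len(vocab)
        for w in answer.lower().split():                    # add-one smoothing
            lp += math.log((word_counts[code][w] + 1) / denom)
        if lp > best_lp:
            best_code, best_lp = code, lp
    return best_code

# Hypothetical precoded answers to an open-ended satisfaction question.
answers = [
    "very satisfied with the service",
    "the product broke quickly",
    "fast friendly service",
    "broke after two days",
]
codes = ["positive", "complaint", "positive", "complaint"]

model = train_nb(answers, codes)
print(predict_nb(model, "friendly service"))   # positive
```

The classifier assigns a new answer the code whose training answers share the most vocabulary with it, weighted by class frequency.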
This task is usually carried out to group respondents according to a predefined scheme based on their answers. Survey coding has several applications, especially in the social sciences, ranging from the simple classification of respondents to the extraction of statistics on political opinions, health and lifestyle habits, customer satisfaction, brand fidelity, and patient satisfaction. Survey coding is a difficult task, because the code that should be attributed to a respondent based on the answer she has given is a matter of subjective judgment, and thus requires expertise. It is thus unsurprising that this task has traditionally been performed manually, by trained coders. Some attempts have been made at automating this task, most of them based on detecting the similarity between the answer and textual descriptions of the meanings of the candidate codes. We take a radically new stance and formulate the problem of automated survey coding as a text categorization problem, that is, as the problem of learning, by means of supervised machine learning techniques, a model of the association between answers and codes from a training set of precoded answers, and applying the resulting model to the classification of new answers. In this article we experiment with two different learning techniques: one based on naive Bayesian classification, and the other based on multiclass support vector machines, and test the resulting framework on a corpus of social surveys. The results we have obtained significantly outperform the results achieved by previous automated survey coding approaches. [source]

Making subjective judgments in quantitative studies: The importance of using effect sizes and confidence intervals
HUMAN RESOURCE DEVELOPMENT QUARTERLY, Issue 2 2006
Jamie L. Callahan

At least twenty-three journals in the social sciences purportedly require authors to report effect sizes and, to a much lesser extent, confidence intervals; yet these requirements are rarely clear in the information for contributors. This article reviews some of the literature criticizing the exclusive use of null hypothesis significance testing (NHST) and briefly highlights the state of NHST reporting in social science journals, including Human Resource Development Quarterly. Included are an overview of effect sizes and confidence intervals: their definitions, a brief historical review, and an argument regarding their importance. The article concludes with recommendations for changing the culture of quantitative research within human resource development (HRD) toward more systematic reporting of effect sizes and confidence intervals as supplements to NHST findings. [source]

Evidential reasoning-based nonlinear programming model for MCDA under fuzzy weights and utilities
INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 1 2010
Mi Zhou

In a multiple-criteria decision analysis (MCDA) problem, qualitative information with subjective judgments of ambiguity is often provided by people, together with quantitative data that may also be imprecise or incomplete. There are several uncertainties that may be considered in an MCDA problem, such as fuzziness and ambiguity. The evidential reasoning (ER) approach is well suited to such MCDA problems and can generate comprehensive distributed assessments for different alternatives. Much research on handling imprecise or uncertain belief structures has been conducted within the ER approach. In this paper, both triangular fuzzy weights of criteria and fuzzy utilities assigned to evaluation grades are introduced into the ER approach; these may arise in several circumstances, such as group decision making.
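The triangular fuzzy weights and utilities mentioned here are commonly represented as triples (l, m, u): lowest possible, most plausible, and highest possible value. A minimal sketch using the standard triangular approximation for products of positive fuzzy numbers; the numeric values are illustrative, not taken from the paper:

```python
# A triangular fuzzy number (TFN) is a triple (l, m, u).

def tfn_mul(a, b):
    """Approximate product of two positive TFNs (triangular approximation;
    the exact product membership function is not triangular)."""
    return (a[0] * b[0], a[1] * b[1], a[2] * b[2])

def tfn_centroid(a):
    """Centroid defuzzification of a TFN."""
    return (a[0] + a[1] + a[2]) / 3.0

weight  = (0.2, 0.3, 0.4)   # illustrative fuzzy weight of a criterion
utility = (0.6, 0.7, 0.8)   # illustrative fuzzy utility of a grade

weighted = tfn_mul(weight, utility)
print(tuple(round(x, 2) for x in weighted))   # (0.12, 0.21, 0.32)
print(round(tfn_centroid(weighted), 4))
```

Defuzzification collapses the fuzzy weighted utility to a crisp score so that alternatives can be ranked.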
The Hadamard multiplicative combination of judgment matrices is extended to the aggregation of triangular fuzzy judgment matrices, and the result is applied as the fuzzy weights used in the fuzzy ER approach. The consistency of the aggregated triangular fuzzy judgment matrix is also proved. Several pairs of ER-based programming models are designed to generate the total fuzzy belief degrees and the overall expected fuzzy utilities for the comparison of alternatives. A numerical example is presented to show the effectiveness of the proposed approach. © 2009 Wiley Periodicals, Inc. [source]

Similarity of drug names: Comparison of objective and subjective measures
PSYCHOLOGY & MARKETING, Issue 7-8 2002
Bruce L. Lambert

Previous research has shown that objective measures of orthographic (i.e., spelling) similarity can predict the probability of drug-name confusion, but it is not clear how these objective measures relate to subjective judgments of similarity. This study examined the association between one objective measure of orthographic similarity, the Dice coefficient on trigrams, and one subjective measure, based on the Proscale multidimensional scaling system. Twenty-seven participants, divided into three groups, performed a similarity grouping task on one of three sets of 70 drug names drawn at random from a larger set of similar and dissimilar name pairs. Subjective groupings were converted to dissimilarity scores with the use of the Proscale multidimensional scaling program. The association between subjective and objective measures was assessed by correlation and regression analyses. Correlations between subjective and objective measures were −0.70, −0.48, and −0.53 for the three groups, respectively (p < .001). Regression models with trigram similarity as the main predictor accounted for between 22 and 48% of the variance in subjective dissimilarity scores.
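The correlation analysis just described can be sketched as follows. The paired scores are invented for illustration only; in the study each pair would be a drug-name pair's Dice trigram similarity and its Proscale-derived subjective dissimilarity:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

objective_sim   = [0.9, 0.7, 0.5, 0.3, 0.1]   # hypothetical Dice trigram similarity
subjective_diss = [0.2, 0.3, 0.6, 0.7, 0.9]   # hypothetical subjective dissimilarity

r = pearson_r(objective_sim, subjective_diss)
print(round(r, 2))   # strongly negative: similar names get low dissimilarity
```

A negative correlation is exactly what the reported figures show, since one scale measures similarity and the other dissimilarity.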
It is concluded that objective measures of orthographic similarity between drug names are valid but incomplete measures of subjective similarity. © 2002 Wiley Periodicals, Inc. [source]
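The objective measure used above, the Dice coefficient on character trigrams, can be sketched in a few lines. The two-space padding scheme and the example name pair are illustrative choices, not necessarily the paper's exact tokenization:

```python
def trigrams(name):
    """Set of character trigrams, with two-space padding at each end
    so that word boundaries contribute trigrams too."""
    padded = "  " + name.lower() + "  "
    return {padded[i:i + 3] for i in range(len(padded) - 2)}

def dice(a, b):
    """Dice coefficient 2*|A & B| / (|A| + |B|) over trigram sets."""
    ta, tb = trigrams(a), trigrams(b)
    return 2 * len(ta & tb) / (len(ta) + len(tb))

# A classically confusable pair scores far higher than an unrelated pair.
print(round(dice("Celebrex", "Celexa"), 2))   # 0.44
print(round(dice("Celebrex", "Zantac"), 2))   # 0.0
```

Name pairs with high Dice scores share many spelling fragments, which is why the measure predicts confusion risk.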