Ordinal Data


Selected Abstracts


Measurement System Analysis for Bounded Ordinal Data

QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 5 2004
Jeroen de Mast
Abstract The precision of a measurement system is the consistency across multiple measurements of the same object. This paper studies the evaluation of the precision of measurement systems that measure on a bounded ordinal scale. A bounded ordinal scale consists of a finite number of categories that have a specific order. Based on an inventory of methods for the evaluation of precision for other types of measurement scales, the article proposes two approaches. The first approach is based on a latent variable model and is a variant of the intraclass correlation method. The second approach is a non-parametric approach, the results of which are, however, rather difficult to interpret. The approaches are illustrated with an artificial data set and an industrial data set. Copyright © 2004 John Wiley & Sons, Ltd. [source]
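The intraclass-correlation idea behind the first approach can be sketched numerically. The snippet below computes a plain one-way random-effects ICC from integer-coded ordinal ratings; the paper's latent-variable variant is more elaborate, so treat this only as an illustration of the underlying consistency measure (the function name and coding are my own):

```python
# Hedged sketch: one-way random-effects ICC from integer-coded ordinal
# ratings.  Values near 1 indicate a precise measurement system.

def icc_oneway(ratings):
    """ratings: list of per-object lists of repeated ordinal scores."""
    k = len(ratings[0])            # repeated measurements per object
    n = len(ratings)               # number of measured objects
    grand = sum(sum(r) for r in ratings) / (n * k)
    # between-object and within-object mean squares
    msb = k * sum((sum(r) / k - grand) ** 2 for r in ratings) / (n - 1)
    msw = sum((x - sum(r) / k) ** 2 for r in ratings for x in r) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

With perfectly repeatable ratings the ICC is 1; disagreement between repeats pulls it towards 0.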


Testing Equality between Two Diagnostic Procedures in Paired-Sample Ordinal Data

BIOMETRICAL JOURNAL, Issue 6 2004
Kung-Jong Lui
Abstract When a new diagnostic procedure is developed, it is important to assess whether the diagnostic accuracy of the new procedure is different from that of the standard procedure. For paired-sample ordinal data, this paper develops two test statistics for testing equality of the diagnostic accuracy between two procedures without assuming any parametric models. One is derived on the basis of the probability of correctly identifying the case for a randomly selected pair of a case and a non-case over all possible cutoff points, and the other is derived on the basis of the sensitivity and specificity directly. To illustrate the practical use of the proposed test procedures, this paper includes an example regarding the use of digitized and plain films for screening breast cancer. This paper also applies Monte Carlo simulation to evaluate the finite sample performance of the two statistics developed here and notes that they can perform well in a variety of situations. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
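The accuracy index underlying the first statistic is the probability that a randomly chosen case receives a higher ordinal score than a randomly chosen non-case, accumulated over all cutoff points; this equals the nonparametric area under the ROC curve. A minimal sketch of that index and of the resulting comparison of two procedures (the paper's paired-sample test statistics and their variance estimators are not reproduced):

```python
def auc_ordinal(cases, noncases):
    """P(case score > non-case score) + 0.5 * P(tie), over all
    case/non-case pairs; 0.5 means no discriminating ability."""
    wins = ties = 0
    for x in cases:
        for y in noncases:
            if x > y:
                wins += 1
            elif x == y:
                ties += 1
    return (wins + 0.5 * ties) / (len(cases) * len(noncases))

def paired_auc_difference(proc1, proc2):
    """proc1/proc2: (case_scores, noncase_scores) tuples obtained from
    the same subjects under the two diagnostic procedures."""
    return auc_ordinal(*proc1) - auc_ordinal(*proc2)
```

Testing equality of diagnostic accuracy then amounts to asking whether this difference is compatible with zero.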


Testing Marginal Homogeneity Against Stochastic Order in Multivariate Ordinal Data

BIOMETRICS, Issue 2 2009
B. Klingenberg
Summary Many assessment instruments used in the evaluation of toxicity, safety, pain, or disease progression consider multiple ordinal endpoints to fully capture the presence and severity of treatment effects. Contingency tables underlying these correlated responses are often sparse and imbalanced, rendering asymptotic results unreliable or model fitting prohibitively complex without overly simplistic assumptions on the marginal and joint distribution. Instead of a modeling approach, we look at stochastic order and marginal inhomogeneity as an expression or manifestation of a treatment effect under much weaker assumptions. Often, endpoints are grouped together into physiological domains or by the body function they describe. We derive tests based on these subgroups, which might supplement or replace the individual endpoint analysis because they are more powerful. The permutation or bootstrap distribution is used throughout to obtain global, subgroup, and individual significance levels as they naturally incorporate the correlation among endpoints. We provide a theorem that establishes a connection between marginal homogeneity and the stronger exchangeability assumption under the permutation approach. Multiplicity adjustments for the individual endpoints are obtained via stepdown procedures, while subgroup significance levels are adjusted via the full closed testing procedure. The proposed methodology is illustrated using a collection of 25 correlated ordinal endpoints, grouped into six domains, to evaluate toxicity of a chemical compound. [source]
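The permutation step can be sketched as follows: group labels are reshuffled, and a max-type statistic over endpoints yields a global p-value that automatically honours the correlation among endpoints. This is a generic illustration, not the authors' exact statistic (which targets stochastic order and marginal inhomogeneity); all names are mine:

```python
import random

def permutation_pvalue(group_a, group_b, n_perm=2000, seed=1):
    """Global p-value for any treatment effect across J ordinal endpoints.
    group_a/group_b: lists of per-subject score vectors (length J).
    Statistic: max over endpoints of |difference in mean score|, so the
    permutation null distribution respects the endpoint correlation."""
    rng = random.Random(seed)
    J = len(group_a[0])

    def stat(a, b):
        return max(abs(sum(x[j] for x in a) / len(a) -
                       sum(x[j] for x in b) / len(b)) for j in range(J))

    observed = stat(group_a, group_b)
    pooled = group_a + group_b
    n_a = len(group_a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)          # re-randomize group labels
        if stat(pooled[:n_a], pooled[n_a:]) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)
```

The same resampling also yields subgroup-level p-values by restricting the max to the endpoints within one domain.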


Using DCE and ranking data to estimate cardinal values for health states for deriving a preference-based single index from the sexual quality of life questionnaire

HEALTH ECONOMICS, Issue 11 2009
Julie Ratcliffe
Abstract There is increasing interest in using data derived from ordinal methods, particularly data derived from discrete choice experiments (DCEs), to estimate cardinal values for health states to calculate quality adjusted life years (QALYs). Ordinal measurement strategies such as DCE may have considerable practical advantages over more conventional cardinal measurement techniques, e.g. time trade-off (TTO), because they may not require such a high degree of abstract reasoning. However, there are a number of challenges to deriving cardinal values for health states using ordinal data, including anchoring the values on the full health-dead scale used to calculate QALYs. This paper reports on a study that addresses these problems in the context of using two ordinal techniques, DCE and ranking, to derive cardinal values for health states defined by a condition-specific sexual health measure. The results were compared with values generated using a commonly used cardinal valuation technique, the TTO. This study raises some important issues about the use of ordinal data to produce cardinal health state valuations. Copyright © 2009 John Wiley & Sons, Ltd. [source]
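One of the anchoring problems mentioned, placing latent values on the full health-dead scale required for QALYs, is often handled by a linear rescaling of the modelled utilities. A sketch under that assumption (the study's actual anchoring strategy may differ, and the state labels are invented):

```python
def anchor_to_qaly_scale(latent, full_health="11111", dead="dead"):
    """Linearly rescale latent health-state values (e.g. from a
    conditional-logit model of DCE or ranking data) so that full
    health maps to 1 and dead maps to 0, as QALYs require."""
    lo, hi = latent[dead], latent[full_health]
    return {state: (v - lo) / (hi - lo) for state, v in latent.items()}
```

States worse than dead then receive negative values, as is conventional in QALY calculations.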


Calculation of sample size for stroke trials assessing functional outcome: comparison of binary and ordinal approaches

INTERNATIONAL JOURNAL OF STROKE, Issue 2 2008
The Optimising Analysis of Stroke Trials (OAST) collaboration
Background Many acute stroke trials have given neutral results. Sub-optimal statistical analyses may be failing to detect efficacy. Methods which take account of the ordinal nature of functional outcome data are more efficient. We compare sample size calculations for dichotomous and ordinal outcomes for use in stroke trials. Methods Data from stroke trials studying the effects of interventions known to positively or negatively alter functional outcome (Rankin Scale and Barthel Index) were assessed. Sample size was calculated using comparisons of proportions, means, medians (according to Payne), and ordinal data (according to Whitehead). The sample sizes gained from each method were compared using Friedman two-way ANOVA. Results Fifty-five comparisons (54 173 patients) of active vs. control treatment were assessed. Estimated sample sizes differed significantly depending on the method of calculation (P<0·0001). The ordering of the methods showed that the ordinal method of Whitehead and comparison of means produced significantly lower sample sizes than the other methods. The ordinal data method on average reduced sample size by 28% (inter-quartile range 14-53%) compared with the comparison of proportions; however, a 22% increase in sample size was seen with the ordinal method for trials assessing thrombolysis. The comparison of medians method of Payne gave the largest sample sizes. Conclusions Choosing an ordinal rather than binary method of analysis allows most trials to be, on average, smaller by approximately 28% for a given statistical power. Smaller trial sample sizes may help by reducing time to completion, complexity, and financial expense. However, ordinal methods may not be optimal for interventions which both improve functional outcome and cause hazard in a subset of patients, e.g. thrombolysis. [source]
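Whitehead's ordinal sample-size formula used in the comparison can be sketched for 1:1 allocation: under a proportional-odds alternative with common odds ratio OR and mean category probabilities p̄ across arms, the per-group size is roughly 6(z₁₋α/₂ + z₁₋β)² / [(ln OR)²(1 − Σ p̄³)]. The code below, including the cumulative-odds shift used to obtain the treatment-arm probabilities, is my reading of that formula, not the OAST collaboration's code:

```python
from math import log
from statistics import NormalDist

def shift_by_odds_ratio(p, odds_ratio):
    """Apply a common odds ratio to the cumulative probabilities of an
    ordinal outcome (proportional-odds assumption)."""
    cum, prev, out = 0.0, 0.0, []
    for pi in p[:-1]:
        cum += pi
        shifted = odds_ratio * cum / (1 - cum + odds_ratio * cum)
        out.append(shifted - prev)
        prev = shifted
    out.append(1.0 - prev)
    return out

def whitehead_n_per_group(p_control, odds_ratio, alpha=0.05, power=0.9):
    """Per-group sample size (1:1 allocation) for an ordinal outcome,
    after Whitehead (1993).  p_control: control-arm category
    probabilities; odds_ratio: assumed common odds ratio."""
    z = NormalDist()
    za, zb = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    p_treat = shift_by_odds_ratio(p_control, odds_ratio)
    pbar = [(pc + pt) / 2 for pc, pt in zip(p_control, p_treat)]
    return 6 * (za + zb) ** 2 / (log(odds_ratio) ** 2 *
                                 (1 - sum(p ** 3 for p in pbar)))
```

With only two categories this reproduces the usual binary sample size, and finer ordinal scales yield smaller trials for the same odds ratio, which is the efficiency gain the paper quantifies.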


Multivariate exploratory analysis of ordinal data in ecology: Pitfalls, problems and solutions

JOURNAL OF VEGETATION SCIENCE, Issue 5 2005
János Podani
Abstract Questions: Are ordinal data appropriately treated by multivariate methods in numerical ecology? If not, what are the most common mistakes? Which dissimilarity coefficients, ordination and classification methods are best suited to ordinal data? Should we worry about such problems at all? Methods: A new classification model family, OrdClAn (Ordinal Cluster Analysis), is suggested for hierarchical and non-hierarchical classifications from ordinal ecological data, e.g. the abundance/dominance scores that are commonly recorded in relevés. During the clustering process, the objects are grouped so as to minimize a measure calculated from the ranks of within-cluster and between-cluster distances or dissimilarities. Results and Conclusions: Evaluation of the various steps of exploratory data analysis of ordinal ecological data shows that consistency of methodology throughout the study is of primary importance. In an optimal situation, each methodological step is order invariant. This property ensures that the results are independent of changes not affecting ordinal relationships, and guarantees that no illusory precision is introduced into the analysis. However, the multivariate procedures that are most commonly applied in numerical ecology do not satisfy these requirements and are therefore not recommended. For example, it is inappropriate to analyse Braun-Blanquet abundance/dominance data by methods assuming that Euclidean distance is meaningful. The solution to these problems is that the dissimilarity coefficient should be compatible with ordinal variables and the subsequent ordination or clustering method should consider only the rank order of dissimilarities. A range of artificial data sets exemplifying different subtypes of ordinal variables, e.g. indicator values or species scores from relevés, illustrates the advocated approach. Detailed analyses of an actual phytosociological data set demonstrate the classification by OrdClAn of relevés and species and the subsequent tabular rearrangement, in a numerical study that remains within the ordinal domain from the first step to the last. [source]
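The advocated principle, an ordinal-compatible dissimilarity followed by a method that uses only the rank order of dissimilarities, can be illustrated without reproducing OrdClAn itself (whose rank-based within/between-cluster criterion is more involved). Below, a Kendall-style discordance between relevés serves as the dissimilarity, and single linkage, which depends only on the ordering of dissimilarities and is therefore order invariant in the paper's sense, does the grouping; all names are mine:

```python
from itertools import combinations

def discordance(a, b):
    """Ordinal-compatible dissimilarity: fraction of species pairs whose
    ordinal score orderings disagree between two relevés."""
    pairs = list(combinations(range(len(a)), 2))
    bad = sum(1 for i, j in pairs if (a[i] - a[j]) * (b[i] - b[j]) < 0)
    return bad / len(pairs)

def single_linkage(releves, n_clusters):
    """Agglomerate relevés; merges depend only on the rank order of the
    pairwise dissimilarities."""
    clusters = [{i} for i in range(len(releves))]
    d = {(i, j): discordance(releves[i], releves[j])
         for i, j in combinations(range(len(releves)), 2)}
    while len(clusters) > n_clusters:
        p, q = min(((a, b) for a in range(len(clusters))
                    for b in range(a + 1, len(clusters))),
                   key=lambda ab: min(d[tuple(sorted((i, j)))]
                                      for i in clusters[ab[0]]
                                      for j in clusters[ab[1]]))
        clusters[p] |= clusters[q]
        del clusters[q]
    return clusters
```

Because both steps ignore the numeric spacing of the Braun-Blanquet codes, recoding the scale monotonically leaves the classification unchanged.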


The combined effect of SNP-marker and phenotype attributes in genome-wide association studies

ANIMAL GENETICS, Issue 2 2009
E. K. F. Chan
Summary The last decade has seen rapid improvements in high-throughput single nucleotide polymorphism (SNP) genotyping technologies that have consequently made genome-wide association studies (GWAS) possible. With tens to hundreds of thousands of SNP markers being tested simultaneously in GWAS, it is imperative to appropriately pre-process, or filter out, those SNPs that may lead to false associations. This paper explores the relationships between various SNP genotype and phenotype attributes and their effects on false associations. We show that (i) uniformly distributed ordinal data as well as binary data are more easily influenced, though not necessarily negatively, by differences in various SNP attributes compared with normally distributed data; (ii) filtering SNPs on minor allele frequency (MAF) and extent of Hardy-Weinberg equilibrium (HWE) deviation has little effect on the overall false positive rate; (iii) in some cases, filtering on MAF only serves to exclude SNPs from the analysis without reduction of the overall proportion of false associations; and (iv) HWE, MAF and heterozygosity are all dependent on minor genotype frequency, a newly proposed measure for genotype integrity. [source]


Assessment of Multiple Ordinal Endpoints

BIOMETRICAL JOURNAL, Issue 1 2009
Lothar Häberle
Abstract Ranking multivariate ordinal data and applying a non-parametric test is an analytical approach commonly employed to compare treatments. We study three types of ranking and demonstrate how to combine them. The ranking methods rest upon partial orders of the multidimensional measurements or upon the sum of ranks. Since they are simple in terms of statistical assumptions and technical realization, they are also suitable for health professionals without deep statistical knowledge. Our goal is to discuss differences between the approaches and to disclose possible statistical consequences of their use. (© 2009 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
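The sum-of-ranks variant is the easiest of the three to sketch: rank subjects on each ordinal endpoint separately (mid-ranks for ties) and add the per-endpoint ranks into one univariate score, to which a standard non-parametric two-sample test can then be applied. The partial-order variants are not shown; names are mine:

```python
def midranks(values):
    """Ranks with ties averaged (mid-ranks)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1                   # extend over the tied run
        mid = (i + j) / 2 + 1
        for k in order[i:j + 1]:
            r[k] = mid
        i = j + 1
    return r

def rank_sum_score(data):
    """Rank subjects on each ordinal endpoint separately, then sum the
    per-endpoint mid-ranks into one score per subject."""
    n, J = len(data), len(data[0])
    per_endpoint = [midranks([row[j] for row in data]) for j in range(J)]
    return [sum(per_endpoint[j][i] for j in range(J)) for i in range(n)]
```

Because only ranks enter the score, the result is unchanged by any monotone recoding of the individual endpoints.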


A New Nonparametric Approach for Baseline Covariate Adjustment for Two-Group Comparative Studies

BIOMETRICS, Issue 4 2008
Alexander Schacht
Summary We consider two-armed clinical trials in which the response and/or the covariates are observed on either a binary, ordinal, or continuous scale. A new general nonparametric (NP) approach for covariate adjustment is presented using the notion of a relative effect to describe treatment effects. The relative effect is defined by the probability of observing a higher response in the experimental than in the control arm. The notion is invariant under monotone transformations of the data and is therefore especially suitable for ordinal data. For a normal or binary distributed response the relative effect is the transformed effect size or the difference of response probability, respectively. An unbiased and consistent NP estimator for the relative effect is presented. Further, we suggest an NP procedure for correcting the relative effect for covariate imbalance and random covariate imbalance, yielding a consistent estimator for the adjusted relative effect. Asymptotic theory has been developed to derive test statistics and confidence intervals. The test statistic is based on the joint behavior of the estimated relative effect for the response and the covariates. It is shown that the test statistic can be used to evaluate the treatment effect in the presence of (random) covariate imbalance. Approximations for small sample sizes are considered as well. The sampling behavior of the estimator of the adjusted relative effect is examined. We also compare the probability of a type I error and the power of our approach to standard covariate adjustment methods by means of a simulation study. Finally, our approach is illustrated on three studies involving ordinal responses and covariates. [source]
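The relative effect at the heart of this approach has a direct plug-in estimator: count, over all control/experimental pairs, how often the experimental response is higher, with ties counted half. A minimal sketch of the unadjusted estimator only (the paper's covariate adjustment and asymptotic theory are beyond a few lines); values above 0.5 favour the experimental arm:

```python
def relative_effect(control, experimental):
    """Plug-in estimator of p = P(X < Y) + 0.5 * P(X = Y), where X is a
    control response and Y an experimental one.  Invariant under
    monotone transformations, hence well suited to ordinal responses."""
    wins = ties = 0
    for x in control:
        for y in experimental:
            if y > x:
                wins += 1
            elif y == x:
                ties += 1
    return (wins + 0.5 * ties) / (len(control) * len(experimental))
```

Applying the same estimator to a baseline covariate quantifies the covariate imbalance that the authors' adjusted estimator corrects for.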