Data Features (data + feature)

Selected Abstracts


Interaction-Dependent Semantics for Illustrative Volume Rendering

COMPUTER GRAPHICS FORUM, Issue 3 2008
Peter Rautek
In traditional illustration the choice of appropriate styles and rendering techniques is guided by the intention of the artist. For illustrative volume visualizations it is difficult to specify the mapping between the 3D data and the visual representation in a way that preserves the intention of the user. The semantic layers concept establishes this mapping with a linguistic formulation of rules that directly map data features to rendering styles. With semantic layers, fuzzy logic is used to evaluate the user-defined illustration rules in a preprocessing step. In this paper we introduce interaction-dependent rules that are evaluated for each frame and are therefore computationally more expensive. Enabling interaction-dependent rules, however, allows the use of a new class of semantics, resulting in more expressive interactive illustrations. We show that the evaluation of the fuzzy logic can be done on the graphics hardware, enabling the efficient use of interaction-dependent semantics. Further, we introduce the flat rendering mode and discuss how different rendering parameters are influenced by the rule base. Our approach provides high-quality illustrative volume renderings at interactive frame rates, guided by the specification of illustration rules. [source]
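The rule evaluation described here can be illustrated with a small sketch. The snippet below is a minimal CPU-side illustration of the fuzzy-logic idea, not the paper's GPU implementation: the attribute names, membership-function shapes and style opacity are all hypothetical, and fuzzy AND is taken as the pointwise minimum.

```python
import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership function on [a, d] with plateau [b, c]."""
    rising = (x - a) / max(b - a, 1e-9)
    falling = (d - x) / max(d - c, 1e-9)
    return np.clip(np.minimum(rising, falling), 0.0, 1.0)

# Hypothetical per-voxel data features, normalised to [0, 1].
rng = np.random.default_rng(0)
density = rng.random(32 ** 3)
gradient_magnitude = rng.random(32 ** 3)

# Rule: IF density IS high AND gradient magnitude IS low THEN style "tissue".
mu_high_density = trapezoid(density, 0.5, 0.7, 1.0, 1.1)
mu_low_gradient = trapezoid(gradient_magnitude, -0.1, 0.0, 0.2, 0.4)
membership = np.minimum(mu_high_density, mu_low_gradient)  # fuzzy AND

# The rule's membership weights how strongly the style contributes per voxel.
tissue_opacity = 0.8
voxel_opacity = membership * tissue_opacity
```

An interaction-dependent rule would add frame-dependent features (for example, distance to the cursor) to the antecedent, which is why per-frame evaluation, and hence a GPU formulation, matters.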


Robust Methods for the Analysis of Income Distribution, Inequality and Poverty

INTERNATIONAL STATISTICAL REVIEW, Issue 3 2000
Maria-Pia Victoria-Feser
Summary Income distribution embeds a large field of research subjects in economics. It is important to study how incomes are distributed among the members of a population, for example in order to determine tax policies for redistribution that decrease inequality, or to implement social policies that reduce poverty. The available data come mostly from surveys (and not censuses, as is often believed) and are often subject to long debates about their reliability, because the sources of error are numerous. Moreover, the form in which the data are available is not always as one would expect, i.e. complete and continuous (microdata): one may only have data in grouped form (in income classes) and/or truncated data where a portion of the original data has been omitted from the sample or simply not recorded. Because of these data features, it is important to complement classical statistical procedures with robust ones. In this paper such methods are presented, especially for model selection, model fitting with several types of data, inequality and poverty analysis, and ordering tools. The approach is based on the Influence Function (IF) introduced by Hampel (1974) and further developed by Hampel, Ronchetti, Rousseeuw & Stahel (1986). It is also shown, through the analysis of real UK and Tunisian data, that robust techniques can give another picture of income distribution, inequality or poverty when compared to classical ones. [source]
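The influence-function idea underlying these robust methods is easy to visualise with a finite-sample sensitivity curve. The sketch below uses synthetic data and textbook estimators, not the paper's own; it contrasts the unbounded influence of the sample mean with the bounded influence of the median as a single contaminated income is moved upward.

```python
import numpy as np

rng = np.random.default_rng(0)
incomes = rng.lognormal(mean=10.0, sigma=0.8, size=500)  # synthetic incomes

def sensitivity_curve(estimator, sample, z):
    """Finite-sample analogue of the influence function:
    (n + 1) * (T(sample with added point z) - T(sample))."""
    n = len(sample)
    return (n + 1) * (estimator(np.append(sample, z)) - estimator(sample))

zs = np.linspace(incomes.min(), 50 * incomes.max(), 200)
sc_mean = [sensitivity_curve(np.mean, incomes, z) for z in zs]      # grows without bound
sc_median = [sensitivity_curve(np.median, incomes, z) for z in zs]  # flattens out
```

Estimators whose influence function is bounded, like the median here or the robust estimators built from the IF, keep a single gross error from distorting inequality or poverty measures.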


Joint Modelling of Repeated Transitions in Follow-up Data – A Case Study on Breast Cancer Data

BIOMETRICAL JOURNAL, Issue 3 2005
B. Genser
Abstract In longitudinal studies where time to a final event is the ultimate outcome, information is often available about intermediate events that individuals may experience during the observation period. Even though many extensions of the Cox proportional hazards model have been proposed to model such multivariate time-to-event data, these approaches are still very rarely applied to real datasets. The aim of this paper is to illustrate the application of extended Cox models for multiple time-to-event data and to show their implementation in popular statistical software packages. We demonstrate a systematic way of jointly modelling similar or repeated transitions in follow-up data by analysing an event-history dataset of 270 breast cancer patients who were followed up for different clinical events during treatment of metastatic disease. First, we show how this methodology can also be applied to non-Markovian stochastic processes by representing these processes as "conditional" Markov processes. Secondly, we compare the application of different Cox-related approaches to the breast cancer data by varying their key model components (i.e. analysis time scale, risk set and baseline hazard function). Our study showed that extended Cox models are a powerful tool for analysing complex event-history datasets, since the approach can address many dynamic data features such as multiple time scales, dynamic risk sets, time-varying covariates, transition-by-covariate interactions, autoregressive dependence or intra-subject correlation. (© 2005 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
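As a concrete illustration of the counting-process layout such extended Cox models use, here is a minimal sketch with the Python lifelines package (the paper demonstrates mainstream statistical packages; the data below are simulated and the two-transition layout is schematic). Each subject contributes one start-stop interval per transition, and stratifying on the transition type gives each transition its own baseline hazard while sharing covariate effects.

```python
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(1)
rows = []
for i in range(200):
    age = rng.integers(35, 80)
    t1 = rng.exponential(20.0)             # time of intermediate event
    rows.append((i, 0.0, t1, 1, "1->2", age))
    t2 = t1 + rng.exponential(30.0)        # final event or censoring
    rows.append((i, t1, t2, int(rng.integers(0, 2)), "2->3", age))

df = pd.DataFrame(rows, columns=["id", "start", "stop", "event", "transition", "age"])

# Counting-process (start-stop) Cox model; time since study entry is the
# analysis time scale here, one of the model components the paper varies.
ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", event_col="event",
        start_col="start", stop_col="stop", strata=["transition"])
ctv.print_summary()
```

Representing a non-Markovian process as a "conditional" Markov process, as in the paper's first step, amounts to conditioning each row on the history, for example by adding the time of the previous event as a covariate.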


Flexible Maximum Likelihood Methods for Bivariate Proportional Hazards Models

BIOMETRICS, Issue 4 2003
Wenqing He
Summary. This article presents methodology for multivariate proportional hazards (PH) regression models. The methods employ flexible piecewise constant or spline specifications for baseline hazard functions in either marginal or conditional PH models, along with assumptions about the association among lifetimes. Because the models are parametric, ordinary maximum likelihood can be applied; it is able to deal easily with such data features as interval censoring or sequentially observed lifetimes, unlike existing semiparametric methods. A bivariate Clayton model (Clayton, 1978, Biometrika 65, 141–151) is used to illustrate the approach taken. Because a parametric assumption about association is made, efficiency and robustness comparisons are made between estimation based on the bivariate Clayton model and "working independence" methods that specify only marginal distributions for each lifetime variable. [source]
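A compact way to see how parametric maximum likelihood works here is to write the bivariate log-likelihood directly. The sketch below assumes fully observed (uncensored) pairs for brevity, piecewise-constant baseline hazards on hypothetical cut points, and the Clayton copula density; the article additionally handles censoring, interval censoring and spline baselines.

```python
import numpy as np
from scipy.optimize import minimize

breaks = np.array([0.0, 2.0, 5.0, np.inf])   # hypothetical hazard pieces
K = len(breaks) - 1

def cum_haz(t, rates):
    """Cumulative hazard of a piecewise-constant hazard."""
    overlap = np.clip(np.minimum(breaks[1:], t[:, None]) - breaks[:-1], 0.0, None)
    return overlap @ rates

def haz(t, rates):
    return rates[np.searchsorted(breaks, t, side="right") - 1]

def neg_loglik(params, t1, t2):
    r1, r2 = np.exp(params[:K]), np.exp(params[K:2 * K])   # positive rates
    theta = np.exp(params[-1])                              # Clayton association
    S1, S2 = np.exp(-cum_haz(t1, r1)), np.exp(-cum_haz(t2, r2))
    # Joint density f(t1, t2) = c_theta(S1, S2) * f1(t1) * f2(t2) under the
    # Clayton survival copula, with both lifetimes observed.
    A = S1 ** -theta + S2 ** -theta - 1.0
    log_c = (np.log1p(theta) - (1.0 / theta + 2.0) * np.log(A)
             - (theta + 1.0) * (np.log(S1) + np.log(S2)))
    log_f = np.log(haz(t1, r1) * S1) + np.log(haz(t2, r2) * S2)
    return -np.sum(log_c + log_f)

# Simulate Clayton-dependent exponential pairs (conditional method).
rng = np.random.default_rng(2)
th = 1.5
u1, w = rng.uniform(size=300), rng.uniform(size=300)
u2 = ((w ** (-th / (1 + th)) - 1.0) * u1 ** -th + 1.0) ** (-1.0 / th)
t1, t2 = -3.0 * np.log(u1), -4.0 * np.log(u2)

fit = minimize(neg_loglik, np.zeros(2 * K + 1), args=(t1, t2), method="Nelder-Mead")
theta_hat = np.exp(fit.x[-1])  # estimated association parameter
```

The "working independence" comparison in the abstract corresponds to dropping the copula term log_c and maximising the two marginal likelihoods separately.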