Object Shapes (object + shape)

Selected Abstracts


Enhanced effectiveness in visuo-haptic object-selective brain regions with increasing stimulus salience

HUMAN BRAIN MAPPING, Issue 5 2010
Sunah Kim
Abstract The occipital and parietal lobes contain regions that are recruited for both visual and haptic object processing. The purpose of the present study was to characterize the underlying neural mechanisms for bimodal integration of vision and haptics in these visuo-haptic object-selective brain regions to find out whether these brain regions are sites of neuronal or areal convergence. Our sensory conditions consisted of visual-only (V), haptic-only (H), and visuo-haptic (VH), which allowed us to evaluate integration using the superadditivity metric. We also presented each stimulus condition at two different levels of signal-to-noise ratio or salience. The salience manipulation allowed us to assess integration using the rule of inverse effectiveness. We were able to localize previously described visuo-haptic object-selective regions in the lateral occipital cortex (lateral occipital tactile-visual area) and the intraparietal sulcus, and also localized a new region in the left anterior fusiform gyrus. There was no evidence of superadditivity with the VH stimulus at either level of salience in any of the regions. There was, however, a strong effect of salience on multisensory enhancement: the response to the VH stimulus was more enhanced at higher salience across all regions. In other words, the regions showed enhanced integration of the VH stimulus with increasing effectiveness of the unisensory stimuli. We called the effect "enhanced effectiveness." The presence of enhanced effectiveness in visuo-haptic object-selective brain regions demonstrates neuronal convergence of visual and haptic sensory inputs for the purpose of processing object shape. Hum Brain Mapp, 2010. © 2009 Wiley-Liss, Inc.
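
The superadditivity criterion and the salience-dependent enhancement described in this abstract boil down to simple arithmetic comparisons of condition-wise responses. A minimal Python sketch with purely invented response values (the study's actual metrics are computed on fMRI response estimates, and its exact definitions may differ):

```python
import numpy as np

def superadditivity_index(v, h, vh):
    """Difference between the bimodal response and the sum of the unimodal
    responses; positive values indicate superadditivity."""
    return vh - (v + h)

def multisensory_enhancement(v, h, vh):
    """Percent enhancement of the bimodal response over the strongest
    unimodal response, a common alternative criterion."""
    best_unimodal = max(v, h)
    return 100.0 * (vh - best_unimodal) / best_unimodal

# Hypothetical region-averaged response estimates (arbitrary units) at two
# salience levels; illustrative only, not data from the study.
low_salience = dict(v=0.20, h=0.25, vh=0.30)
high_salience = dict(v=0.60, h=0.70, vh=1.10)

for label, resp in [("low salience", low_salience), ("high salience", high_salience)]:
    print(label,
          "superadditivity:", round(superadditivity_index(**resp), 2),
          "enhancement %:", round(multisensory_enhancement(**resp), 1))
```

With these illustrative numbers, neither salience level is superadditive, but the enhancement over the best unisensory response grows with salience, mirroring the "enhanced effectiveness" pattern the abstract reports.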


Selective visuo-haptic processing of shape and texture

HUMAN BRAIN MAPPING, Issue 10 2008
Randall Stilla
Abstract Previous functional neuroimaging studies have described shape-selectivity for haptic stimuli in many cerebral cortical regions, of which some are also visually shape-selective. However, the literature is equivocal on the existence of haptic or visuo-haptic texture-selectivity. We report here on a human functional magnetic resonance imaging (fMRI) study in which shape and texture perception were contrasted using haptic stimuli presented to the right hand, and visual stimuli presented centrally. Bilateral selectivity for shape, with overlap between modalities, was found in a dorsal set of parietal areas: the postcentral sulcus and anterior, posterior and ventral parts of the intraparietal sulcus (IPS), as well as ventrally in the lateral occipital complex. The magnitude of visually- and haptically-evoked activity was significantly correlated across subjects in the left posterior IPS and right lateral occipital complex, suggesting that these areas specifically house representations of object shape. Haptic shape-selectivity was also found in the left postcentral gyrus, the left lingual gyrus, and a number of frontal cortical sites. Haptic texture-selectivity was found in ventral somatosensory areas: the parietal operculum and posterior insula bilaterally, as well as in the right medial occipital cortex, overlapping a region that was texture-selective for visual stimuli. The present report corroborates and elaborates previous suggestions of specialized visuo-haptic processing of texture and shape. Hum Brain Mapp 2008. © 2007 Wiley-Liss, Inc.
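
The inter-subject correlation mentioned above (between visually- and haptically-evoked response magnitudes) is essentially a Pearson correlation over per-subject activation estimates. A hedged sketch with invented values:

```python
from scipy.stats import pearsonr

# Hypothetical per-subject shape-selective activation magnitudes
# (e.g., contrast estimates) for one region; not data from the study.
visual_shape = [0.8, 1.1, 0.6, 1.4, 0.9, 1.2, 0.7, 1.0]
haptic_shape = [0.7, 1.2, 0.5, 1.3, 1.0, 1.1, 0.6, 0.9]

r, p = pearsonr(visual_shape, haptic_shape)
print(f"inter-subject correlation r = {r:.2f}, p = {p:.3f}")
```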


Interactive shadowing for 2D Anime

COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 2-3 2009
Eiji Sugisaki
Abstract In this paper, we propose an instant shadow generation technique for 2D animation, especially Japanese Anime. In traditional 2D Anime production, the entire animation, including shadows, is drawn by hand, so it takes a long time to complete. Shadows play an important role in creating symbolic visual effects; however, they are not always drawn, owing to time constraints and a shortage of animators, especially when the production schedule is tight. To solve this problem, we develop a shadowing approach that enables animators to easily create a shadow layer and its animation based on the character's shapes. Our approach is both instant and intuitive. The only inputs required are the character or object shapes in the input animation sequence, together with the alpha values generally used in the Anime production pipeline. First, shadows are automatically rendered on a virtual plane by using a shadow map based on these inputs. The rendered shadows can then be edited with simple operations and simplified with a Gaussian filter, and several special effects, such as blurring, can be applied at the same time. Compared with existing approaches, ours handles automatic shadowing in real time more efficiently and effectively. Copyright © 2009 John Wiley & Sons, Ltd.
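
The pipeline described in this abstract renders shadows with a shadow map on a virtual plane and then lets the animator edit and blur them; reproducing that faithfully requires a 3D renderer. The sketch below only loosely illustrates the final steps, deriving a soft ground shadow from a character's alpha matte with a cheap 2D projection and a Gaussian filter; every function and parameter name here is hypothetical, not taken from the paper:

```python
import numpy as np
from scipy.ndimage import affine_transform, gaussian_filter

def drop_shadow_from_alpha(alpha, squash=0.35, shear=0.6, sigma=4.0, opacity=0.5):
    """Crude stand-in for a shadow layer: mirror the character's alpha matte
    about its bottom row (assumed to touch the ground), squash and shear it
    to fake a ground-plane projection, then soften it with a Gaussian filter.
    alpha: 2D float array in [0, 1]; returns a 2D shadow layer in [0, 1]."""
    flipped = alpha[::-1, :]                      # mirror about the bottom edge
    # Affine map: squash rows (fake perspective) and shear columns (light angle).
    matrix = np.array([[1.0 / squash, 0.0],
                       [-shear, 1.0]])
    projected = affine_transform(flipped, matrix, order=1,
                                 mode="constant", cval=0.0)
    shadow = gaussian_filter(projected, sigma=sigma)   # blur special effect
    return np.clip(shadow * opacity, 0.0, 1.0)
```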


Are surface properties integrated into visuohaptic object representations?

EUROPEAN JOURNAL OF NEUROSCIENCE, Issue 10 2010
Simon Lacey
Abstract Object recognition studies have almost exclusively involved vision, focusing on shape rather than surface properties such as color. Visual object representations are thought to integrate shape and color information because changing the color of studied objects impairs their subsequent recognition. However, little is known about integration of surface properties into visuohaptic multisensory representations. Here, participants studied objects with distinct patterns of surface properties (color in Experiment 1, texture in Experiments 2 and 3) and had to discriminate between object shapes when color or texture schemes were altered in within-modal (visual and haptic) and cross-modal (visual study followed by haptic test and vice versa) conditions. In Experiment 1, color changes impaired within-modal visual recognition but had no effect on cross-modal recognition, suggesting that the multisensory representation was not influenced by modality-specific surface properties. In Experiment 2, texture changes impaired recognition in all conditions, suggesting that both unisensory and multisensory representations integrated modality-independent surface properties. However, the cross-modal impairment might have reflected either the texture change or a failure to form the multisensory representation. Experiment 3 attempted to distinguish between these possibilities by combining changes in texture with changes in orientation, taking advantage of the known view-independence of the multisensory representation, but the results were not conclusive owing to the overwhelming effect of texture change. The simplest account is that the multisensory representation integrates shape and modality-independent surface properties. However, more work is required to investigate this and the conditions under which multisensory integration of structural and surface properties occurs.


A NEW TRUE ORTHO-PHOTO METHODOLOGY FOR COMPLEX ARCHAEOLOGICAL APPLICATION

ARCHAEOMETRY, Issue 3 2010
YAHYA ALSHAWABKEH
The ortho-photo is one of the most important photogrammetric products for archaeological documentation: a powerful textured representation that combines geometric accuracy with rich detail, such as areas of damage and decay. Archaeological applications usually involve complex object shapes, and with conventional algorithms the ortho-projection of such rough, curved objects remains a problem because their shape is difficult to describe analytically. Even with a detailed digital surface model, typical ortho-rectification software does not produce the desired outcome, as it cannot handle image visibility and model occlusions and is limited to 2.5-dimensional surface descriptions. This paper presents an approach for the automated production of true ortho-mosaics for the documentation of cultural objects. The algorithm uses precise three-dimensional surface representations derived from laser scanning, together with several digital images that entirely cover the object of interest. All model surface triangles facing the viewing direction are first identified and then projected back onto all initial images to establish visibility in every available image. Missing image information can be filled in from adjacent images, provided that they have been subjected to the same true ortho-photo procedure.
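
Two geometric ingredients of the described algorithm can be sketched compactly: selecting triangles that face the ortho-projection direction, and re-projecting model points into each source image to establish visibility. The pinhole camera model and all names below are assumptions; a real pipeline also needs a per-image depth test to resolve occlusions:

```python
import numpy as np

def facing_triangles(vertices, faces, view_dir):
    """Boolean mask of triangles whose normal faces the orthographic viewing
    direction (the first selection step described in the abstract).
    vertices: (N, 3) array, faces: (M, 3) vertex indices, view_dir: (3,) vector.
    The sign test depends on the mesh's normal-orientation convention."""
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    normals = np.cross(v1 - v0, v2 - v0)
    return normals @ np.asarray(view_dir, dtype=float) < 0.0

def project_points(points, camera_matrix):
    """Project 3D points into one source image with a hypothetical 3x4 pinhole
    projection matrix (lens distortion is ignored here)."""
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    uvw = homogeneous @ camera_matrix.T
    return uvw[:, :2] / uvw[:, 2:3]     # pixel coordinates (u, v)
```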


Segmentation of 3D microtomographic images of granular materials with the stochastic watershed

JOURNAL OF MICROSCOPY, Issue 1 2010
M. FAESSEL
Summary Segmentation of 3D images of granular materials obtained by microtomography is not an easy task. Because of the acquisition conditions and the nature of the media, the available images cannot be exploited without a reliable method for extracting the grains. The high connectivity of the medium, the disparity of object shapes and the presence of image imperfections make classical segmentation methods (based on the image gradient and a marker-constrained watershed) extremely difficult to apply effectively. In this paper, we propose a non-parametric method based on the stochastic watershed, which estimates a 3D probability map of contours. Procedures for extracting the final segmentation from this map are then presented.
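
The core Monte Carlo idea behind the stochastic watershed can be sketched with scikit-image: repeat a marker-constrained watershed with random markers and accumulate boundary frequencies into a contour-probability map. The uniform marker placement, marker count and gradient used here are assumptions; the paper's implementation may differ:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed, find_boundaries

def stochastic_watershed_pdf(image, n_realizations=50, n_markers=100, seed=0):
    """Monte Carlo estimate of a contour-probability map: flood the image
    gradient from uniformly random point markers many times and count how
    often each voxel falls on a region boundary. Works on 2D or 3D arrays;
    parameter values are illustrative only."""
    rng = np.random.default_rng(seed)
    gradient = ndi.generic_gradient_magnitude(image.astype(float), ndi.sobel)
    counts = np.zeros(image.shape, dtype=float)
    for _ in range(n_realizations):
        markers = np.zeros(image.shape, dtype=int)
        # Scatter n_markers random seed points, each with a distinct label.
        idx = tuple(rng.integers(0, s, n_markers) for s in image.shape)
        markers[idx] = np.arange(1, n_markers + 1)
        labels = watershed(gradient, markers)
        counts += find_boundaries(labels, mode="inner")
    return counts / n_realizations   # empirical probability of a contour
```

The final grain segmentation can then be obtained, for example, by thresholding this probability map or by running a last marker-constrained watershed on it, which is the role of the extraction procedures the abstract mentions.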