Virtual Objects

Selected Abstracts


Augmented reality agents for user interface adaptation

COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 1 2008
István Barakonyi
Abstract Most augmented reality (AR) applications are primarily concerned with letting a user browse a 3D virtual world registered with the real world. More advanced AR interfaces let the user interact with the mixed environment, but the virtual part is typically rather finite and deterministic. In contrast, autonomous behavior is often desirable in ubiquitous computing (Ubicomp), which requires the computers embedded into the environment to adapt to context and situation without explicit user intervention. We present an AR framework that is enhanced by typical Ubicomp features by dynamically and proactively exploiting previously unknown applications and hardware devices, and adapting the appearance of the user interface to persistently stored and accumulated user preferences. Our framework explores proactive computing, multi-user interface adaptation, and user interface migration. We employ mobile and autonomous agents embodied by real and virtual objects as an interface and interaction metaphor, where agent bodies are able to opportunistically migrate between multiple AR applications and computing platforms to best match the needs of the current application context. We present two pilot applications to illustrate design concepts. Copyright © 2007 John Wiley & Sons, Ltd. [source]


As-consistent-As-possible compositing of virtual objects and video sequences

COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2006
Guofeng Zhang
Abstract We present an efficient approach that merges virtual objects into video sequences taken by a freely moving camera in a realistic manner. The composition is made visually and geometrically consistent through three main steps. First, a robust key-frame-based camera tracking algorithm is proposed, which precisely recovers the focal length with a novel multi-frame strategy. Next, 3D models of the relevant real scenes are reconstructed by means of an extended multi-baseline algorithm. Finally, the virtual objects, in the form of 3D models, are integrated into the real scenes, with special care given to interaction consistency, including shadow casting, occlusion, and object animation. A variety of experiments demonstrate the robustness and efficiency of our approach. Copyright © 2006 John Wiley & Sons, Ltd. [source]
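The abstract does not spell out how the final integration step enforces occlusion consistency. As a rough illustration under assumed inputs (a depth map for the reconstructed real scene, plus a rendered virtual layer with its own depth and alpha; function name and array layout are hypothetical, not from the paper), a per-pixel depth-aware composite might look like this:

```python
import numpy as np

def composite_frame(real_rgb, real_depth, virt_rgb, virt_depth, virt_alpha):
    """Blend a rendered virtual layer into a video frame, drawing a virtual
    fragment only where it lies in front of the reconstructed real geometry.

    real_rgb:   (H, W, 3) video frame
    real_depth: (H, W)    per-pixel distance to the reconstructed real scene
    virt_rgb:   (H, W, 3) rendered virtual layer
    virt_depth: (H, W)    per-pixel distance to the virtual geometry
    virt_alpha: (H, W)    coverage of the virtual layer (0 = empty pixel)
    """
    # A virtual fragment is visible only where it is closer than the real scene.
    visible = (virt_depth < real_depth) & (virt_alpha > 0.0)
    a = virt_alpha[..., None]                      # broadcast alpha over RGB
    blended = a * virt_rgb + (1.0 - a) * real_rgb  # standard alpha blend
    out = real_rgb.copy()
    out[visible] = blended[visible]
    return out
```

Shadow casting and animation would need additional passes; this sketch covers only the visibility decision.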


Haptic Cues for Image Disambiguation

COMPUTER GRAPHICS FORUM, Issue 3 2000
G. Faconti
Haptic interfaces represent a revolution in human-computer interface technology, since they make it possible for users to touch and manipulate virtual objects. In this work we describe a cross-modal interaction experiment studying the effect of adding haptic cues to visual cues when vision alone is not enough to disambiguate an image. We relate the results to those obtained in experimental psychology as well as to more recent studies on the subject. [source]


Scalable Algorithm for Resolving Incorrect Occlusion in Dynamic Augmented Reality Engineering Environments

COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 1 2010
Amir H. Behzadan
As a result of introducing real-world objects into the visualization, fewer virtual models have to be deployed to create a realistic visual output, which directly translates into less time and effort required to create, render, manipulate, manage, and update the three-dimensional (3D) virtual content (CAD model engineering) of the animated scene. At the same time, using the existing layout of the land or plant as the background of the visualization significantly reduces the need to collect data about the surrounding environment before creating the final visualization, while providing visually convincing representations of the processes being studied. In an AR animation, virtual and real objects must be simultaneously managed and accurately displayed to the user to create a visually convincing illusion of their coexistence and interaction. A critical challenge impeding this objective is the problem of incorrect occlusion, which manifests itself when real objects in an AR scene partially or wholly block the view of virtual objects. In the presented research, a new AR occlusion handling algorithm based on depth sensing and frame buffer manipulation techniques was designed and implemented. The algorithm resolves incorrect occlusion in dynamic AR environments in real time using depth-sensing equipment such as laser detection and ranging (LADAR) devices, and can be integrated into any mobile AR platform that allows a user to navigate freely and observe a dynamic AR scene from any vantage point. [source]
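The abstract names depth sensing and frame-buffer manipulation without giving the mechanics. One common pattern for this class of system (shown here as an assumed sketch, not the paper's verified implementation) is to convert sensed metric ranges into window-space depth values matching the virtual camera's projection, pre-load them into the depth buffer, and let the ordinary hardware depth test cull occluded virtual fragments:

```python
def range_to_depth_buffer(z, near, far):
    """Map a metric range z (meters, with near <= z <= far) to a [0, 1]
    window-space depth value under an OpenGL-style perspective projection."""
    ndc = (far + near) / (far - near) - (2.0 * far * near) / ((far - near) * z)
    return 0.5 * ndc + 0.5  # NDC [-1, 1] -> window [0, 1]

def virtual_fragment_visible(sensed_range, virtual_range, near=0.1, far=100.0):
    """Emulate the depth test: the pre-loaded sensed depth occludes any
    virtual fragment that is farther from the camera."""
    d_real = range_to_depth_buffer(sensed_range, near, far)
    d_virt = range_to_depth_buffer(virtual_range, near, far)
    return d_virt < d_real
```

In a real renderer the first function would run per pixel over the LADAR range image before the virtual geometry is drawn; the near/far plane values here are illustrative defaults.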


Perception of safe robot speed in virtual and real industrial environments

HUMAN FACTORS AND ERGONOMICS IN MANUFACTURING & SERVICE INDUSTRIES, Issue 4 2006
Vincent G. Duffy
The purpose of this project was to study the influence of dynamic virtual objects in an Internet-based virtual industrial environment. The main objectives were to investigate the perception of safe robot speed and the perception of acceptability. Virtual industrial environments were designed and developed to conduct the experiment. Hypotheses specifying the relationships between robot size, robot type, initial robot speed condition, and gender were tested with data collected from 32 participants. The results indicated that the perception of safe speed differed significantly depending on robot size and the initial robot speed condition. This was consistent with results reported in previous literature for tests in a real industrial environment. © 2006 Wiley Periodicals, Inc. Hum Factors Man 16: 369-383, 2006. [source]


Effects of virtual lighting on visual performance and eye fatigue

HUMAN FACTORS AND ERGONOMICS IN MANUFACTURING & SERVICE INDUSTRIES, Issue 2 2002
Vincent G. Duffy
This study was designed to determine whether differences in eye fatigue and visual performance can be shown under varying virtual industrial lighting conditions. It builds on results from studies of more traditional video display terminal (VDT) tasks reported in the literature. One experiment tested whether the effects of virtual lighting on eye fatigue and visual performance in a simulated virtual industrial environment are similar to those in other VDT tasks with varying luminance contrast. Results from 20 participants in a vigilance task show a significant difference in performance and eye fatigue in the virtual environment under varying virtual lighting conditions. These results may help designers see that performance under some virtual "lighting" conditions, for some tasks, is consistent with that in the real environment. However, because of the difficulty of determining the appropriate virtual objects to consider for the luminance measures, additional research is needed before the results can be generalized to other industrial training scenarios. A second experiment was intended to test for the luminance decrement in a VDT that was shown in recent literature; its results would have had implications for the vigilance-task experiment. However, the luminance decrement demonstrated in recent literature did not occur, suggesting that the equipment used in the present experiments should not complicate interpretation of the vigilance-task results. © 2002 Wiley Periodicals, Inc. [source]
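The abstract does not state which contrast definition was used for the luminance-contrast manipulations. For reference, the two standard luminance-contrast metrics in display work are straightforward to compute (a generic illustration, not the study's actual measure):

```python
def michelson_contrast(l_max, l_min):
    """Michelson contrast, conventional for periodic patterns such as
    gratings: (Lmax - Lmin) / (Lmax + Lmin), ranging over [0, 1]."""
    return (l_max - l_min) / (l_max + l_min)

def weber_contrast(l_target, l_background):
    """Weber contrast, conventional for a small target on a uniform
    background: (Lt - Lb) / Lb; negative for targets darker than background."""
    return (l_target - l_background) / l_background
```

Luminances here would be in cd/m², as measured with a photometer on the display surface.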


What Am I? Virtual Machines and the Mind/Body Problem

PHILOSOPHY AND PHENOMENOLOGICAL RESEARCH, Issue 2 2008
JOHN L. POLLOCK
When your word processor or email program is running on your computer, this creates a "virtual machine" that manipulates windows, files, text, etc. What is this virtual machine, and what are the virtual objects it manipulates? Many standard arguments in the philosophy of mind have exact analogues for virtual machines and virtual objects, but we do not want to draw the wild metaphysical conclusions that have sometimes tempted philosophers in the philosophy of mind. A computer file is not made of epiphenomenal ectoplasm. I argue instead that virtual objects are "supervenient objects." The stereotypical example of supervenient objects is the statue and the lump of clay. To this end I propose a theory of supervenient objects. Then I turn to persons and mental states. I argue that my mental states are virtual states of a cognitive virtual machine implemented on my body, and a person is a supervenient object supervening on this cognitive virtual machine. [source]


Characteristic changes in the physiological components of cybersickness

PSYCHOPHYSIOLOGY, Issue 5 2005
Young Youn Kim
Abstract We investigated the characteristic physiological changes that occur during cybersickness when subjects are exposed to virtual reality. Sixty-one participants experienced a virtual navigation for a total of 9.5 min and were required to detect specific virtual objects. Three questionnaires on sickness susceptibility and immersive tendency were administered before the navigation. Sixteen electrophysiological signals were recorded before, during, and after the navigation. The severity of cybersickness experienced by participants was assessed with a simulator sickness questionnaire after the navigation. Total cybersickness severity had a significant positive correlation with gastric tachyarrhythmia, eyeblink rate, heart period, and EEG delta-wave activity, and a negative correlation with EEG beta-wave activity. These results suggest that cybersickness is accompanied by pattern changes in the activity of the central and autonomic nervous systems. [source]
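The reported positive and negative correlations between sickness severity and physiological measures are presumably product-moment coefficients; a minimal self-contained computation (illustrative only, with fabricated sample data, not the study's dataset) is:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical example: SSQ total scores vs. eyeblink rate (blinks/min)
# for five made-up subjects.
ssq_total = [10, 25, 40, 55, 70]
blink_rate = [12, 15, 21, 24, 30]
r = pearson_r(ssq_total, blink_rate)  # close to +1: strong positive correlation
```

In practice one would also report a significance test (e.g. a t-test on r with n-2 degrees of freedom), as the study's "significant correlation" phrasing implies.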