Virtual Environment (virtual + environment)
Kinds of Virtual Environment: Selected Abstracts

An Eye Gaze Model for Dyadic Interaction in an Immersive Virtual Environment: Practice and Experience
COMPUTER GRAPHICS FORUM, Issue 1 2004, V. Vinayagamoorthy
Abstract: This paper describes a behavioural model used to simulate realistic eye-gaze behaviour and body animations for avatars representing participants in a shared immersive virtual environment (IVE). The model was used in a study designed to explore the impact of avatar realism on the perceived quality of communication within a negotiation scenario. Our eye-gaze model was based on data and studies carried out on the behaviour of eye-gaze during face-to-face communication. The technical features of the model are reported here. Information about the motivation behind the study, experimental procedures and a full analysis of the results obtained are given in [17]. [source]

DRIVE: Dispatching Requests Indirectly through Virtual Environment
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 4 2010, Hyung Won Choi
Abstract: Dispatching a large number of dynamically changing requests directly to a small number of servers exposes the disparity between the requests and the machines. In this paper, we present a novel approach that dispatches requests to servers through virtual machines, called Dispatching Requests Indirectly through Virtual Environment (DRIVE). Client requests are first dispatched to virtual machines that are subsequently dispatched to actual physical machines. This buffering of requests helps to reduce the complexity involved in dispatching a large number of requests to a small number of machines. To demonstrate the effectiveness of the DRIVE framework, we set up an experimental environment consisting of a PC cluster and four benchmark suites.
With the experimental results, we demonstrate that the use of virtual machines indeed abstracts away the client requests and hence helps to improve the overall performance of a dynamically changing computing environment. Copyright © 2009 John Wiley & Sons, Ltd. [source]

A comparative study of awareness methods for peer-to-peer distributed virtual environments
COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 5 2008, S. Rueda
Abstract: The increasing popularity of multi-player online games is leading to the widespread use of large-scale Distributed Virtual Environments (DVEs) nowadays. In these systems, peer-to-peer (P2P) architectures have been proposed as an efficient and scalable solution for supporting massively multi-player applications. However, the main challenge for P2P architectures consists of providing each avatar with updated information about which other avatars are its neighbors. This problem is known as the awareness problem. In this paper, we propose a comparative study of the performance provided by those awareness methods that are supposed to fully solve the awareness problem. This study is performed using well-known performance metrics in distributed systems. Moreover, while the evaluations shown in the literature are performed by executing P2P simulations on a single (sequential) computer, this paper evaluates the performance of the considered methods on actually distributed systems. The evaluation results show that only a single method actually provides full awareness to avatars. This method also provides the best performance results. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Immersive Integration of Physical and Virtual Environments
COMPUTER GRAPHICS FORUM, Issue 3 2004, Henry Fuchs
We envision future work and play environments in which the user's computing interface is more closely integrated with the physical surroundings than today's conventional computer display screens and keyboards.
We are working toward realizable versions of such environments, in which multiple video projectors and digital cameras enable every visible surface to be both measured in 3D and used for display. If the 3D surface positions were transmitted to a distant location, they might also enable distant collaborations to become more like working in adjacent offices connected by large windows. With collaborators at the University of Pennsylvania, Brown University, Advanced Network and Services, and the Pittsburgh Supercomputing Center, we at Chapel Hill have been working to bring these ideas to reality. In one system, depth maps are calculated from streams of video images and the resulting 3D surface points are displayed to the user in head-tracked stereo. Among the applications we are pursuing for this tele-presence technology is advanced training for trauma surgeons by immersive replay of recorded procedures. Other applications display onto physical objects to allow more natural interaction with them, "painting" a dollhouse, for example. More generally, we hope to demonstrate that the principal interface of a future computing environment need not be limited to a screen the size of one or two sheets of paper. Just as a useful physical environment is all around us, so too can the increasingly ubiquitous computing environment be all around us, integrated seamlessly with our physical surroundings. [source]

Priority-Driven Acoustic Modeling for Virtual Environments
COMPUTER GRAPHICS FORUM, Issue 3 2000, Patrick Min
Geometric acoustic modeling systems spatialize sounds according to reverberation paths from a sound source to a receiver to give an auditory impression of a virtual 3D environment. These systems are useful for concert hall design, teleconferencing, training and simulation, and interactive virtual environments.
In many cases, such as in an interactive walkthrough program, the reverberation paths must be updated within strict timing constraints, e.g. as the sound receiver moves under interactive control by a user. In this paper, we describe a geometric acoustic modeling algorithm that uses a priority queue to trace polyhedral beams representing reverberation paths in best-first order up to some termination criterion (e.g. an expired time-slice). The advantage of this algorithm is that it is more likely to find the highest-priority reverberation paths within a fixed time-slice, avoiding many geometric computations for lower-priority beams. Yet, there is overhead in computing priorities and managing the priority queue. The focus of this paper is to study the trade-offs of the priority-driven beam tracing algorithm with different priority functions. During experiments computing reverberation paths between a source and a receiver in a 3D building environment, we find that priority functions incorporating more accurate estimates of source-to-receiver path length are more likely to find early reverberation paths useful for spatialization, especially in situations where the source and receiver cannot reach each other through trivial reverberation paths. However, when receivers are added to the environment such that it becomes more densely and evenly populated, this advantage diminishes. [source]

Transformed Social Interaction, Augmented Gaze, and Social Influence in Immersive Virtual Environments
HUMAN COMMUNICATION RESEARCH, Issue 4 2005, Jeremy N. Bailenson
Immersive collaborative virtual environments (CVEs) are simulations in which geographically separated individuals interact in a shared, three-dimensional, digital space using immersive virtual environment technology.
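The best-first expansion described in the priority-driven acoustic modeling abstract above (a priority queue of beams, expanded until a time slice expires) can be sketched on a toy cell graph. Everything here is invented for illustration: beams are reduced to paths through a small graph, and the priority function and time budget are placeholders, not the paper's actual geometry or heuristics.

```python
import heapq
import itertools
import time

def best_first_paths(graph, source, receiver, estimate, budget_s=0.05):
    """Expand candidate paths in best-first order until the time slice expires.

    The priority of a partial path is its length so far plus an estimate of
    the remaining source-to-receiver distance, mirroring the kind of priority
    function the abstract discusses.
    """
    counter = itertools.count()  # tie-breaker so the heap never compares lists
    heap = [(estimate(source), next(counter), [source])]
    found = []
    deadline = time.monotonic() + budget_s
    while heap and time.monotonic() < deadline:
        _, _, path = heapq.heappop(heap)
        cell = path[-1]
        if cell == receiver:
            found.append(path)  # a complete source-to-receiver path
            continue
        for nxt in graph.get(cell, ()):
            if nxt not in path:  # avoid trivial cycles
                priority = len(path) + estimate(nxt)
                heapq.heappush(heap, (priority, next(counter), path + [nxt]))
    return found

# Toy scene: two ways from the source cell to the receiver cell.
graph = {"src": ["hall", "room"], "hall": ["rcv"], "room": ["hall"]}
paths = best_first_paths(graph, "src", "rcv",
                         estimate=lambda c: 0 if c == "rcv" else 1)
```

Because shorter estimated paths are popped first, the direct route is found before the detour, which is the property the paper exploits to get the most useful reverberation paths inside a fixed time budget.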
Unlike videoconference technology, which transmits direct video streams, immersive CVEs accurately track movements of interactants and render them nearly simultaneously (i.e., in real time) onto avatars, three-dimensional digital representations of the interactants. Nonverbal behaviors of interactants can be rendered veridically or transformed strategically (i.e., rendered nonveridically). This research examined augmented gaze, a transformation in which a given interactant's actual head movements are transformed by an algorithm that renders his or her gaze directly at multiple interactants simultaneously, such that each of the others perceives that the transformed interactant is gazing only at him or her. In the current study, a presenter read a persuasive passage to two listeners under various transformed gaze conditions, including augmented gaze. Results showed that women agreed with a persuasive message more during augmented gaze than other gaze conditions. Men recalled more verbal information from the passage than women. Implications for theories of social interaction and computer-mediated communication are discussed. [source]

Virtual environments in machinery safety analysis and participatory ergonomics
HUMAN FACTORS AND ERGONOMICS IN MANUFACTURING & SERVICE INDUSTRIES, Issue 5 2007, Timo J. Määttä
The objective of this work was to evaluate the impact of Virtual Environments (VEs) on safety analysis and participatory ergonomics. The developed method, Safety Analysis with Virtual Environments (SAVE), is based on Participatory Ergonomics (PE), Task Analysis (TA), Work Safety Analysis (WSA), the standard EN 1050, and three-dimensional (3-D) functional modeling of the objects being analyzed. The materials of this work comprised machinery systems of six plants in a steel factory, which were implementing ongoing modernization projects. The results indicate that the SAVE method was applicable for safety analysis in the machinery layout design phase.
According to the results, 58% of all identified hazards in a steel factory could be identified with VEs. A common understanding of the designs, the possibility for workers to evaluate and develop the system, and the provision of training for operators and maintenance personnel were the major contributions of using VEs in safety analysis with a participatory ergonomics approach. © 2007 Wiley Periodicals, Inc. Hum Factors Man 17: 435-443, 2007. [source]

Self-Representations in Immersive Virtual Environments
JOURNAL OF APPLIED SOCIAL PSYCHOLOGY, Issue 11 2008, Jeremy N. Bailenson
This experiment varied whether individuals interacted with virtual representations of themselves or of others in an immersive virtual environment. In the self-representation condition, half of the participants interacted with a self-representation that bore photographic resemblance to them, whereas the other half interacted with a self-representation that bore no resemblance to them. In the other-representation condition, participants interacted with a representation of another individual. The experimental design was a 2 (Participant Gender) × 3 (Agent Identity: high-similarity self-representation vs. low-similarity self-representation vs. other representation). Overall, participants displayed more intimacy-consistent behaviors for representations of themselves than of others. Implications of using immersive virtual environment technology for studying the self are discussed. [source]

Design of a virtual environment aided by a model-based formal approach using DEVS
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 11 2009, Azzedine Boukerche
Abstract: Virtual environment (VE) is a modern computer technique that aims to provide an attractive and meaningful human-computer interaction platform, which can essentially help human users to learn, to play or to be trained in a 'like-real' situation.
Recent advances in VE techniques have resulted in their being widely used in many areas, in particular in E-learning-based training applications. Many researchers have developed techniques for designing and implementing 3D virtual environments; however, the existing approaches cannot fully keep up with the increasing complexity of modern VE applications. In this paper, we designed and implemented a very attractive web-based 3D virtual environment application that aims to help the training practice of personnel working in the radiology department of a hospital. Furthermore, we presented a model-based formal approach using discrete event system specification (DEVS) to help us in validating the X3D components' behavior. As a step further, DEVS also helps to optimize our design through simulating the design alternatives. Copyright © 2009 John Wiley & Sons, Ltd. [source]

A social agent pedestrian model
COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2008, Andrew Park
Abstract: This paper presents a social agent pedestrian model based on experiments with human subjects. Research studies in criminology and environmental psychology show that certain features of the urban environment generate fear in people, causing them to take alternate routes. The Crime Prevention Through Environmental Design (CPTED) strategy has been implemented to reduce fear of crime and crime itself. Our initial prototype of a pedestrian model was developed based on these findings of criminology research. In the course of validating our model, we constructed a virtual environment (VE) that resembles a well-known fear-generating area, in which several decision points were set up. Sixty human subjects were invited to navigate the VE, and their choices of routes and comments during the post-experiment interviews were analyzed using statistical techniques and content analysis. Through our experimental results, we gained new insights into pedestrians' behavior and suggest a new, enhanced and articulated agent model of a pedestrian. Our research not only provides a realistic pedestrian model, but also a new methodology for criminology research. Copyright © 2008 John Wiley & Sons, Ltd.
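A minimal sketch of the kind of route-choice rule such a fear-aware pedestrian agent might use, assuming a simple additive cost of path length plus weighted fear-generating features. The weights, feature names, and values below are hypothetical illustrations, not data or formulas from the study.

```python
def route_score(length, fear, fear_weight=10.0):
    """Lower is better: path length plus a weighted sum of fear-generating features."""
    return length + fear_weight * sum(fear.values())

def choose_route(routes, fear_weight=10.0):
    """Pick the candidate route with the lowest combined score."""
    return min(routes, key=lambda r: route_score(r["length"], r["fear"], fear_weight))

# Hypothetical decision point: a short but fear-generating alley versus a
# slightly longer, well-lit street. Feature values are in [0, 1].
routes = [
    {"name": "alley",  "length": 40.0, "fear": {"poor_lighting": 0.9, "low_visibility": 0.8}},
    {"name": "street", "length": 45.0, "fear": {"poor_lighting": 0.1, "low_visibility": 0.2}},
]

best = choose_route(routes)  # a fearful agent detours via the street
```

Setting `fear_weight` to zero recovers a purely distance-minimizing pedestrian, which is one way such a model could expose the behavioral difference the experiments probed.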
[source]

Snap: A time-critical decision-making framework for MOUT simulations
COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2008, Shang-Ping Ting
Abstract: Deliberative reasoning based on the rational analysis of various alternatives often requires too much information and may be too slow in time-critical situations. In these situations, humans rely mainly on their intuitions rather than on structured decision-making processes. An important and challenging problem in Military Operations on Urban Terrain (MOUT) simulations is how to generate realistic tactical behaviors for the non-player characters (also known as bots), as these bots often need to make quick decisions in time-critical and uncertain situations. In this paper, we describe our work on Snap, a time-critical decision-making framework for the bots in MOUT simulations. The novel features of Snap include case-based reasoning (CBR) and thin slicing. CBR is used to make quick decisions by comparing the current situation with past experience cases. Thin slicing is used to model humans' ability to quickly form situation awareness in uncertain and complex situations using key cues from partial information. To assess the effectiveness of Snap, we have integrated it into Twilight City, a virtual environment for MOUT simulations. Experimental results show that Snap is very effective in generating quick decisions during time-critical situations for MOUT simulations. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Using computer vision to simulate the motion of virtual agents
COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 2 2007, Soraia R. Musse
Abstract: In this paper, we propose a new model to simulate the movement of virtual humans based on trajectories captured automatically from filmed video sequences.
These trajectories are grouped into similar classes using an unsupervised clustering algorithm, and an extrapolated velocity field is generated for each class. A physically based simulator is then used to animate virtual humans, aiming to reproduce the trajectories fed to the algorithm while avoiding collisions with other agents. The proposed approach provides an automatic way to reproduce the motion of real people in a virtual environment, allowing the user to change the number of simulated agents while keeping the same goals observed in the filmed video. Copyright © 2007 John Wiley & Sons, Ltd. [source]

Real-time navigating crowds: scalable simulation and rendering
COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2006, Julien Pettré
Abstract: This paper introduces a framework for real-time simulation and rendering of crowds navigating in a virtual environment. The solution first consists of a specific environment preprocessing technique giving rise to navigation graphs, which are then used by the navigation and simulation tasks. Second, navigation planning interactively provides various solutions to the user queries, allowing a crowd to be spread by individualizing trajectories. A scalable simulation model enables the management of large crowds, while saving computation time for rendering tasks. Pedestrian graphical models are divided into three rendering fidelities ranging from billboards to dynamic meshes, allowing close-up views of detailed digital actors with a large variety of locomotion animations. Examples illustrate our method in several environments with crowds of up to 35,000 pedestrians with real-time performance. Copyright © 2006 John Wiley & Sons, Ltd.
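The three rendering fidelities mentioned above (billboards up to dynamic meshes) suggest a distance-based level-of-detail switch. The sketch below is a generic illustration of that idea with invented thresholds and tier names; it is not the paper's actual policy.

```python
# Each tuple: (maximum camera distance, fidelity used up to that distance).
# Thresholds and the middle tier's name are assumptions for the example.
LEVELS = [
    (10.0, "dynamic mesh"),       # close-up: fully animated geometry
    (50.0, "static mesh"),        # mid-range: cheaper rigid geometry
    (float("inf"), "billboard"),  # far away: camera-facing image
]

def fidelity_for_distance(d):
    """Return the fidelity used at camera distance d (cheapest acceptable tier)."""
    for threshold, level in LEVELS:
        if d <= threshold:
            return level

def assign_fidelities(distances):
    """One fidelity per pedestrian, given each pedestrian's camera distance."""
    return [fidelity_for_distance(d) for d in distances]
```

Under this scheme most of a 35,000-strong crowd sits in the cheap billboard tier, which is what makes the per-frame rendering cost scale.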
[source]

An integrated perception for autonomous virtual agents: active and predictive perception
COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2006, Toni Conde
Abstract: This paper presents an original model with methodologies that integrate, in a novel way, different types of an autonomous virtual agent's perception in a virtual environment. Our first new approach permits the coherent management of the shared virtual environment for the simulations of an autonomous virtual agent (AVA). Our second approach allows the prediction or estimation of both the orientation and the attention of an AVA in a virtual environment. By means of a test application with a 'virtual goalkeeper', we demonstrate the speed and the robustness of our technique. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Behaviour-based multiplayer collaborative interaction management
COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 1 2006, Qingping Lin
Abstract: A collaborative virtual environment (CVE) allows geographically dispersed users to interact with each other and with objects in a common virtual environment via network connections. One of the successful applications of CVEs is the multiplayer on-line role-playing game. To support massive interactions among virtual entities in a large-scale CVE and maintain a consistent status of the interaction among users under the constraint of limited network bandwidth, an efficient collaborative interaction management method is required. In this paper, we propose a behaviour-based interaction management framework for supporting multiplayer role-playing CVE applications. It incorporates a two-tiered architecture comprising high-level role-behaviour-based interaction management and low-level message routing. At the high level, interaction management is achieved by enabling interactions based on collaborative behaviour definitions.
At the low level, message routing controls interactions according to the run-time status of the interactive entities. A Collaborative Behaviour Description Language is designed as a scripting interface for application developers to define the collaborative behaviours of interactive entities and the simulation logic/game rules in a CVE. We demonstrate and evaluate the performance of the proposed framework through a prototype system and simulations. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Multiple path-based approach to image-based street walkthrough
COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 2 2005, Dong Hoon Lee
Abstract: Image-based rendering for walkthrough in a virtual environment has many advantages over the geometry-based approach, due to the fast construction of the environment and the photo-realistic rendered results. In the image-based rendering technique, rays from a set of input images are collected and a novel view image is rendered by resampling the stored rays. Current techniques of this kind, however, are limited to a closed capture space. In this paper, we propose a multiple path-based capture configuration that can handle a large-scale scene, and a disparity-based warping method for novel view generation. To acquire the disparity image, we segment the input image into vertical slit segments using a robust and inexpensive way of detecting vertical depth discontinuity. The depth slit segments, instead of depth pixels, reduce the processing time for novel view generation. We also discuss a dynamic cache strategy that supports real-time walkthroughs in large and complex street environments. The efficiency of the proposed method is demonstrated with several experiments. Copyright © 2005 John Wiley & Sons, Ltd.
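Disparity-based warping of the kind described above can be illustrated in one dimension: each column shifts in proportion to its disparity and the normalized viewpoint offset, and nearer (larger-disparity) columns win occlusion conflicts. This is a generic sketch of the parallax idea under a linear shift model, not the paper's slit-segment algorithm.

```python
def warp_scanline(colors, disparities, alpha, width):
    """Forward-warp one scanline to a viewpoint at fractional offset alpha.

    colors[i] / disparities[i] describe source column i; per target column,
    the nearest (largest-disparity) contributor is kept. Unfilled columns
    stay None, standing in for the holes a real method must inpaint.
    """
    out = [None] * width
    best = [-1.0] * width
    for x, (c, d) in enumerate(zip(colors, disparities)):
        tx = round(x + alpha * d)  # shift proportional to disparity
        if 0 <= tx < width and d > best[tx]:
            out[tx], best[tx] = c, d
    return out
```

At `alpha = 0` the scanline is reproduced unchanged; as `alpha` grows, foreground columns slide past the static background, which is the depth cue the walkthrough exploits.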
[source]

A programming environment for behavioural animation
COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 5 2002, Frédéric Devillers
Abstract: Behavioural models offer the ability to simulate autonomous agents such as organisms and living beings. Psychological studies have shown that human behaviour can be described by a perception-decision-action loop, in which the decisional process should integrate several programming paradigms such as real time, concurrency and hierarchy. Building such systems for interactive simulation requires the design of a reactive system treating flows of data to and from the environment, and involving task control and preemption. Since a complete mental model based on vision and image processing cannot be constructed in real time using purely geometrical information, higher levels of information are needed in a model of the virtual environment. For example, the autonomous actors of a virtual world would exploit knowledge of the environment topology to navigate through it. Accordingly, in this paper we present our programming environment for real-time behavioural animation, which is composed of a general animation and simulation platform, a behavioural modelling language and a scenario-authoring tool. These tools have been used for different applications, such as pedestrian and car-driver interaction in urban environments, or a virtual museum populated by a group of visitors. Copyright © 2002 John Wiley & Sons, Ltd. [source]

Multiple animated characters motion fusion
COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 5 2002, Luo Zhongxiang
Abstract: One of the major problems of the motion-capture-based computer animation technique is the relatively high cost of equipment and the low reuse rate of data. To overcome this problem, many motion-editing methods have been developed.
However, most of them can only handle a single character whose motions are preset and which hence cannot interact with its environment automatically. In this paper, we construct a new architecture for multiple animated character motion fusion, which not only enables the characters to perceive and respond to the virtual environment, but also allows them to interact with each other. We will also discuss in detail the key issues, such as motion planning, coordination of multiple animated characters and generation of vivid continuous motions. Our experimental results will further testify to the effectiveness of the new methodology. Copyright © 2002 John Wiley & Sons, Ltd. [source]

Eye gaze in virtual environments: evaluating the need and initial work on implementation
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 11 2009, Norman Murray
Abstract: For efficient collaboration between participants, eye gaze is seen as critical for interaction. Video conferencing either does not attempt to support eye gaze (e.g. AccessGrid) or only approximates it in round-table conditions (e.g. life-size telepresence). Immersive collaborative virtual environments represent remote participants through avatars that follow their tracked movements. By additionally tracking people's eyes and representing their movement on their avatars, the line of gaze can be faithfully reproduced, as opposed to approximated. This paper presents the results of initial work that tested whether the focus of gaze could be more accurately gauged if tracked eye movement was added to that of the head of an avatar observed in an immersive VE.
An experiment was conducted to assess the difference between users' abilities to judge which objects an avatar is looking at when only head movements are displayed, while the eyes remain static, and when both eye gaze and head movement information are displayed. The results from the experiment show that eye gaze is of vital importance to subjects correctly identifying what a person is looking at in an immersive virtual environment. This is followed by a description of the work now being undertaken following the positive results from the experiment. We discuss the integration of an eye tracker more suitable for immersive mobile use, and the software and techniques that were developed to integrate the user's real-world eye movements into calibrated eye gaze in an immersive virtual world. This is to be used in the creation of an immersive collaborative virtual environment supporting eye gaze and in its ongoing experiments. Copyright © 2009 John Wiley & Sons, Ltd. [source]

Judgment and action knowledge in speed adjustment tasks: experiments in a virtual environment
DEVELOPMENTAL SCIENCE, Issue 2 2003, Susanne Huber
Two experiments were conducted to investigate children's and adults' knowledge of time and speed in action and judgment tasks. Participants had to set the speed of a moving car to a new speed so that it would reach a target line at the same time as a reference car moving at a higher speed and disappearing into a tunnel at the midway point. In Experiment 1 (24 10-year-olds, 24 adults), children's and adults' speed adjustments followed the normative pattern when responses had to be graded linearly as a function of the car's initial speed. In a non-linear condition, only adults' action responses corresponded with the normative function. Simplifying the task by systematically shortening the tunnel in Experiment 2 (24 10-year-olds, 24 adults) enabled children to grade the speeds adequately in the action conditions only.
Adults now produced normative response patterns in both judgment and action. Whether people show linearization biases was thus shown to depend on the interaction of age, task demands and response mode. [source]

Behavioral Syndromes in Stable Social Groups: An Artifact of External Constraints?
ETHOLOGY, Issue 12 2008, Ximena J. Nelson
Individuals of many species differ consistently in their behavioral reactions toward different stimuli, such as predators, rivals, and potential mates. These typical reactions, described as 'behavioral syndromes' or 'personalities', appear to be heritable and therefore subject to selection. We studied behavioral syndromes in 36 male fowl living in 12 social groups and found that individuals behaved consistently over time. Furthermore, responses to different contexts (anti-predator, foraging, and territorial) were inter-correlated, suggesting that males exhibited comparable behavioral traits in these functionally distinct situations. We subsequently isolated the same roosters and conducted tests in a 'virtual environment', using high-resolution digital video sequences to simulate the anti-predator, foraging, and territorial contexts that they had experienced outdoors. Under these controlled conditions, repeatability persisted, but individual responses to the three classes of stimuli failed to predict one another. These were instead context-specific. In particular, production of each type of vocal signal was independent, implying that calls in the repertoire are controlled by distinct mechanisms. Our results show that extrinsic factors, such as social position, can be responsible for the appearance of traits that could readily be mistaken for the product of endogenous characters.
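The cross-context correlation analysis implied by the study above can be sketched with fabricated scores; only the computation (Pearson correlation between per-individual scores in two contexts) is the point, and every number below is invented for illustration.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# One value per (hypothetical) male: response intensity in each context.
anti_predator = [1.0, 2.0, 3.0, 4.0, 5.0]
foraging      = [1.2, 2.1, 2.9, 4.2, 4.8]  # tracks the anti-predator scores
territorial   = [5.0, 1.0, 4.0, 2.0, 3.0]  # an unrelated pattern

r_consistent = pearson(anti_predator, foraging)     # high: a "syndrome" signature
r_unrelated  = pearson(anti_predator, territorial)  # near zero: context-specific
```

In the social-group data the contexts inter-correlated (like `r_consistent`), while in the isolated virtual-environment tests they did not (like `r_unrelated`), which is the contrast driving the paper's conclusion.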
[source]

Detection of unexpected events during spatial navigation in humans: bottom-up attentional system and neural mechanisms
EUROPEAN JOURNAL OF NEUROSCIENCE, Issue 4 2008, Giuseppe Iaria
Abstract: Navigation is a complex cognitive ability requiring the processing and integration of several different types of information extracted from the environment. While navigating, however, an unexpected event may suddenly occur, which individuals are required to detect promptly in order to apply an appropriate behavioural response. The alerting mechanism that is integral to the detection of unexpected events is referred to as the bottom-up attentional system. Using event-related functional magnetic resonance imaging, we investigated the neural basis of bottom-up detection of unexpected events while individuals moved within a virtual environment. We identified activation within a right fronto-temporo-parietal network in response to unexpected events while navigating in this virtual environment. Furthermore, when an unexpected event requires an adjusted behavioural response, a region of the right ventrolateral pre-frontal cortex (areas 45 and 47/12) is selectively activated. Our data replicate earlier findings on the neural mechanisms underlying visual attention and extend these findings to the more complex real-life ability of spatial navigation, thereby suggesting that these neural mechanisms subserve the bottom-up attentional systems that are crucial for effective locomotion in real surroundings. [source]

Effects of virtual lighting on visual performance and eye fatigue
HUMAN FACTORS AND ERGONOMICS IN MANUFACTURING & SERVICE INDUSTRIES, Issue 2 2002, Vincent G. Duffy
This study is designed to determine whether differences in eye fatigue and visual performance can be shown under varying virtual industrial lighting conditions. It is based on the results of studies of more traditional video display terminal (VDT) tasks reported in the literature.
One experiment was designed to determine whether the effects of virtual lighting on eye fatigue and visual performance in a simulated virtual industrial environment are similar to those in other VDT tasks with varying luminance contrast. Results of a test of 20 participants in a vigilance task show a significant difference in performance and eye fatigue in the virtual environment under varying virtual light conditions. These results may help designers see that performance under some virtual "lighting" conditions, for some tasks, is consistent with that in the real environment. However, due to the difficulty of determining the appropriate virtual objects to consider for the luminance measures, additional research is needed before the results can be generalized to other industrial training scenarios. A second experiment tested for the luminance decrement in a VDT reported in recent literature; its results would have had implications for the experiment that included the vigilance task. However, the luminance decrement demonstrated in recent literature did not occur. These results suggest that the equipment used in the present experiments should not complicate interpretation of the vigilance-task results. © 2002 Wiley Periodicals, Inc. [source]

Self-Representations in Immersive Virtual Environments
JOURNAL OF APPLIED SOCIAL PSYCHOLOGY, Issue 11 2008
Jeremy N. Bailenson
This experiment varied whether individuals interacted with virtual representations of themselves or of others in an immersive virtual environment. In the self-representation condition, half of the participants interacted with a self-representation that bore photographic resemblance to them, whereas the other half interacted with a self-representation that bore no resemblance to them. In the other-representation condition, participants interacted with a representation of another individual.
The experimental design was a 2 (Participant Gender) × 3 (Agent Identity: high-similarity self-representation vs. low-similarity self-representation vs. other representation) design. Overall, participants displayed more intimacy-consistent behaviors toward representations of themselves than of others. Implications of using immersive virtual environment technology for studying the self are discussed. [source]

Accuracy assessment of computer-assisted flapless implant placement in partial edentulism
JOURNAL OF CLINICAL PERIODONTOLOGY, Issue 4 2010
N. Van Assche
Van Assche N, van Steenberghe D, Quirynen M, Jacobs R. Accuracy assessment of computer-assisted flapless implant placement in partial edentulism. J Clin Periodontol 2010; 37: 398-403. doi: 10.1111/j.1600-051X.2010.01535.x
Abstract Aim: To assess the accuracy of implants placed flapless by a stereolithographic template in partially edentulous patients. Material and Methods: Eight patients, requiring two to four implants (maxilla or mandible), were consecutively recruited. Radiographical data were obtained by means of a cone-beam or multi-slice CT scan and imported into a software program. Implants (n=21) were planned in a virtual environment, leading to the manufacture of one stereolithographic template per patient to guide implant placement in a one-stage flapless procedure. A postoperative cone-beam CT was performed to calculate the difference between implant positions (n=21) in the preoperative planning and the postoperative situation. Results: A mean angular deviation of 2.7° (range 0.4-8, SD 1.9), with a mean deviation at the apex of 1.0 mm (range 0.2-3.0, SD 0.7), was observed. If one patient, a dropout because of non-conformity with the protocol, was excluded, the angular deviation was reduced to 2.2° (range 0.6-3.9, SD 1.1), and the apical deviation to 0.9 mm (range 0.2-1.8).
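As an aside, the two accuracy metrics reported above are simple geometry: the angular deviation is the angle between the planned and actually placed implant axes, and the apical deviation is the straight-line distance between the planned and placed apex points. A minimal sketch (illustrative function names, not taken from the study):

```python
import math

def angular_deviation_deg(planned_axis, placed_axis):
    # Angle, in degrees, between the planned and placed implant axis vectors.
    ax, ay, az = planned_axis
    bx, by, bz = placed_axis
    dot = ax * bx + ay * by + az * bz
    norm_a = math.sqrt(ax * ax + ay * ay + az * az)
    norm_b = math.sqrt(bx * bx + by * by + bz * bz)
    # Clamp to [-1, 1] to guard against floating-point drift before acos.
    cos_theta = max(-1.0, min(1.0, dot / (norm_a * norm_b)))
    return math.degrees(math.acos(cos_theta))

def apex_deviation_mm(planned_apex, placed_apex):
    # Euclidean distance, in mm, between planned and placed apex points.
    return math.dist(planned_apex, placed_apex)
```

For example, a placed axis perpendicular to the planned one would give an angular deviation of 90°, and apex points 3 mm and 4 mm apart along two axes give a 5 mm apical deviation.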
Conclusion: Based on this limited patient population, flapless implant installation appears to be a useful procedure when based on accurate and reliable 3D CT-based image data and dedicated implant-planning software. [source]

Virtual reality and hypermedia in learning to use a turning lathe
JOURNAL OF COMPUTER ASSISTED LEARNING, Issue 2 2001
A. Antonietti
Abstract A virtual reality environment with hypermedia was designed to help undergraduates understand the structure and functioning of a turning lathe. Study 1 was carried out with 30 novice students, and Study 2 involved 24 students attending a machining course. These studies demonstrated that the virtual lathe can foster the comprehension of some core machining concepts. Further, the studies suggest that novice students benefit most from early free navigation of the virtual environment, whereas expert students benefit from an analysis of the hypermedia. [source]

Flexible system for simulating and tele-operating robots through the Internet
JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 3 2005
F. A. Candelas
Simulation and teleoperation tools offer many advantages for training or learning in technological subjects, such as flexibility in timetables and student access to expensive and limited equipment. In this paper, we present a new system for simulating and tele-operating robot arms through the Internet, which allows many users to simulate and test positioning commands for a robot by means of a virtual environment, as well as execute the validated commands on a real remote robot with the same characteristics. The main feature of the system is its flexibility in managing different robots and in adding new robot models and equipment. © 2005 Wiley Periodicals, Inc. [source]
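The simulate-then-execute workflow described in the teleoperation abstract, in which positioning commands are first validated in a virtual environment and only then dispatched to the real remote robot, can be sketched as follows. This is a minimal illustration with hypothetical names (`JointCommand`, `SimulatedRobot`, `dispatch`), not the paper's actual API:

```python
from dataclasses import dataclass


@dataclass
class JointCommand:
    angles_deg: list  # one target angle per joint, in degrees


class SimulatedRobot:
    """Kinematic stand-in used to validate commands before remote execution."""

    def __init__(self, joint_limits_deg):
        # One (lo, hi) limit pair per joint, mirroring the real robot's model.
        self.joint_limits_deg = joint_limits_deg

    def validate(self, cmd: JointCommand) -> bool:
        # A command is valid only if every joint target is within its limits.
        if len(cmd.angles_deg) != len(self.joint_limits_deg):
            return False
        return all(lo <= angle <= hi
                   for angle, (lo, hi) in zip(cmd.angles_deg,
                                              self.joint_limits_deg))


def dispatch(cmd, sim, send_to_real_robot):
    # Only commands that pass the simulation step reach the real robot.
    if not sim.validate(cmd):
        return "rejected"
    send_to_real_robot(cmd)
    return "executed"
```

A user (or many users) can thus probe commands freely against the simulated model; the buffering step protects the single shared physical robot from invalid positioning requests.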