Robot Navigation

Selected Abstracts


Omnidirectional Vision and Inertial Clues for Robot Navigation

JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 1 2004
Irem Stratmann
The structural features inherent in the visual motion field of a mobile robot contain useful clues about its navigation. The combination of these visual clues with additional inertial sensor information may allow reliable detection of the robot's direction of travel, as well as of any independent motion present in the 3D scene. The motion field, the 2D projection of the 3D scene variations induced by the camera-robot system, is estimated through optical flow calculations. The singular points of the global optical flow field of omnidirectional image sequences indicate the translational direction of the robot as well as its deviation from the planned path. Motion patterns of nearby obstacles or independently moving objects in the scene can also be detected. In this paper, we introduce an analysis of the intrinsic features of omnidirectional motion fields in combination with gyroscopic information, and give some examples of this preliminary analysis. © 2004 Wiley Periodicals, Inc.
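The singular-point idea above can be sketched in a few lines: under pure translation, every flow vector points away from one singular point (the focus of expansion), and each vector contributes one linear constraint on that point. The following is a hypothetical illustration on synthetic data, not the paper's method; all names and values are assumptions.

```python
# Each flow vector v at image point p satisfies v x (p - foe) = 0, i.e.
# v_y*f_x - v_x*f_y = v_y*p_x - v_x*p_y. Solve the stacked constraints
# by 2x2 normal equations (pure-Python least squares).

def estimate_foe(points, flows):
    """Least-squares focus of expansion from (position, flow) pairs."""
    m00 = m01 = m11 = r0 = r1 = 0.0
    for (px, py), (vx, vy) in zip(points, flows):
        a0, a1 = vy, -vx                    # constraint row: (a0, a1) . (fx, fy) = b
        b = vy * px - vx * py
        m00 += a0 * a0; m01 += a0 * a1; m11 += a1 * a1
        r0 += a0 * b;  r1 += a1 * b
    det = m00 * m11 - m01 * m01
    return (m11 * r0 - m01 * r1) / det, (m00 * r1 - m01 * r0) / det

# Synthetic radial flow field expanding from (0.3, -0.2).
foe = (0.3, -0.2)
points = [(x * 0.1, y * 0.1) for x in range(-5, 6) for y in range(-5, 6)]
flows = [((px - foe[0]) * 2.0, (py - foe[1]) * 2.0) for px, py in points]
print(estimate_foe(points, flows))   # recovers (0.3, -0.2)
```

On real omnidirectional imagery the flow would come from an optical-flow estimator and the residual of this fit would flag deviation from the planned path or independently moving objects.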


A perspective factorization method for Euclidean reconstruction with uncalibrated cameras

COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 4 2002
Mei Han
Structure from motion (SFM), which recovers camera motion and scene structure from image sequences, has various applications such as scene modelling, robot navigation, object recognition, and virtual reality. Most previous research on SFM requires intrinsically calibrated cameras. In this paper we describe a factorization-based method to recover Euclidean structure from multiple perspective views with uncalibrated cameras. The method first performs a projective reconstruction using a bilinear factorization algorithm, and then converts the projective solution to a Euclidean one by enforcing metric constraints. The process of upgrading a projective solution to a full metric one is referred to as normalization in most factorization-based SFM methods. We present three normalization algorithms that enforce Euclidean constraints on the camera calibration parameters to recover the scene structure and the camera calibration simultaneously, assuming zero-skew cameras. The first two algorithms are linear: one handles the case in which only the focal lengths are unknown, and the other the case in which the focal lengths and a constant principal point are unknown. The third algorithm is bilinear, handling the case in which the focal lengths, the principal points, and the aspect ratios are all unknown. Experimental results are presented. Copyright © 2002 John Wiley & Sons, Ltd.
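The "bilinear" in the abstract refers to alternately solving for one set of unknowns while holding the other fixed. As a toy illustration of that alternation (not the authors' projective-depth algorithm), here is alternating least squares on an exactly rank-1 matrix M = a bᵀ; projective factorization iterates in the same spirit over depths and structure.

```python
# Alternating least squares for M ~= a b^T: fixing b makes the problem
# linear in a, and vice versa, so we alternate closed-form updates.

def rank1_als(M, iters=20):
    """Bilinear (alternating) factorization of M (list of rows) as a b^T."""
    m, n = len(M), len(M[0])
    b = [1.0] * n
    for _ in range(iters):
        bb = sum(x * x for x in b)
        a = [sum(M[i][j] * b[j] for j in range(n)) / bb for i in range(m)]
        aa = sum(x * x for x in a)
        b = [sum(M[i][j] * a[i] for i in range(m)) / aa for j in range(n)]
    return a, b

a_true, b_true = [1.0, 2.0, -1.5], [3.0, 0.5, 2.0, -1.0]
M = [[ai * bj for bj in b_true] for ai in a_true]
a, b = rank1_als(M)
residual = max(abs(M[i][j] - a[i] * b[j]) for i in range(3) for j in range(4))
print(residual)   # essentially zero: the factorization is exact
```

The real method factors the scaled measurement matrix into rank-4 camera and structure factors (via SVD) and then applies the metric normalization; this sketch only shows why a bilinear problem is tractable by alternation.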


Modeling hippocampal theta oscillation: Applications in neuropharmacology and robot navigation

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 9 2006
Tamás Kiss
This article introduces a biologically realistic mathematical and computational model of theta (~5 Hz) rhythm generation in the hippocampal CA1 region, together with some of its possible applications in drug discovery and in robotic/computational models of navigation. The model uses a conductance-based description of nerve cells: populations of basket cells, alveus/lacunosum-moleculare interneurons, and pyramidal cells model the hippocampal CA1 region, and a fast-spiking GABAergic interneuron population models the septal influence. Results show that the septo-hippocampal feedback loop is capable of robust theta rhythm generation, owing to the proper timing of pyramidal cells and to synchronization within the basket cell network via recurrent connections. © 2006 Wiley Periodicals, Inc. Int J Int Syst 21: 903–917, 2006.
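The "conductance-based description" mentioned above means each cell's membrane potential obeys a current-balance ODE. A minimal sketch, far below the complexity of the actual model: forward-Euler integration of a single passive membrane, C dV/dt = −g_L (V − E_L) + I_ext. Parameter values are illustrative only.

```python
# Passive conductance-based membrane, integrated with forward Euler.
# The full model couples many such equations with active and synaptic
# conductances; here there is only a leak conductance and a constant drive.

C, g_L, E_L = 1.0, 0.1, -65.0      # capacitance, leak conductance, leak reversal
I_ext = 0.5                        # constant injected current
dt, V = 0.01, -65.0                # time step (ms), initial potential (mV)
for _ in range(200000):            # 2000 ms of simulated time (>> tau = C/g_L = 10 ms)
    dV = (-g_L * (V - E_L) + I_ext) / C
    V += dt * dV
print(V)   # settles at E_L + I_ext / g_L = -60.0 mV
```

Adding voltage-gated and synaptic conductance terms to the same update, one population per cell type, yields the kind of network whose population activity can oscillate at theta frequency.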


State space sampling of feasible motions for high-performance mobile robot navigation in complex environments

JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 6-7 2008
Thomas M. Howard
Sampling in the space of controls or actions is a well-established method for ensuring feasible local motion plans. However, as mobile robots advance in performance and competence in complex environments, this classical motion-planning technique ceases to be effective. When environmental constraints severely limit the space of acceptable motions, or when global motion planning expresses strong preferences, a state space sampling strategy is more effective. Although this has been evident for some time, the practical question is how to achieve it while also satisfying the severe constraints of vehicle dynamic feasibility. This paper presents an effective algorithm for state space sampling utilizing a model-based trajectory generation approach. The method enables high-speed navigation in highly constrained and/or partially known environments such as trails, roadways, and dense off-road obstacle fields. © 2008 Wiley Periodicals, Inc.
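The contrast the abstract draws can be sketched concretely: instead of sampling curvatures (control space) and seeing where they land, sample terminal states and invert a motion model for the control that reaches each one, then keep the dynamically feasible, collision-free candidates. This hypothetical illustration uses a small-angle constant-curvature arc, not the paper's trajectory generator; all names and numbers are assumptions.

```python
import math

def sample_terminal_states(x_f, offsets, k_max, obstacles, clearance=0.3):
    """Sample terminal lateral offsets y_f at range x_f; invert the arc
    model y(x) = k x^2 / 2 for curvature k, then filter candidates."""
    feasible = []
    for y_f in offsets:
        k = 2.0 * y_f / (x_f * x_f)           # curvature reaching (x_f, y_f)
        if abs(k) > k_max:                     # vehicle cannot turn that sharply
            continue
        ok = all(math.hypot(ox - x, oy - k * x * x / 2.0) > clearance
                 for ox, oy in obstacles
                 for x in [x_f * s / 10.0 for s in range(11)])
        if ok:                                 # arc clears every obstacle
            feasible.append((y_f, k))
    return feasible

cands = sample_terminal_states(
    x_f=5.0,
    offsets=[-3.0, -1.5, 0.0, 1.5, 3.0],
    k_max=0.2,
    obstacles=[(2.5, 0.0)])
print(cands)   # straight path is blocked, sharp swerves are infeasible;
               # only the moderate offsets +/-1.5 survive
```

Sampling in state space guarantees the surviving candidates end exactly at useful terminal states (e.g. lane offsets on a roadway), which control-space sampling cannot promise.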


A new space and time sensor fusion method for mobile robot navigation

JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 7 2004
TaeSeok Jin
To fully utilize the information from the sensors of a mobile robot, this paper proposes a new sensor-fusion technique in which the sample data set obtained at a previous instant is properly transformed and fused with the current data sets to produce a reliable estimate for navigation control. Exploration of an unknown environment is an important task for the new generation of mobile service robots, which may navigate by means of a number of monitoring systems such as sonar or vision. In conventional fusion schemes, the measurement depends on the current data sets only; therefore, more sensors are required to measure a given physical parameter or to improve the reliability of the measurement. In this approach, instead of adding more sensors to the system, the temporal sequences of the data sets are stored and utilized for this purpose. The basic principle is illustrated by examples, and its effectiveness is demonstrated through simulations and experiments. The newly proposed STSF (space and time sensor fusion) scheme is applied to the navigation of a mobile robot in an environment containing landmarks, and the experimental results demonstrate the effective performance of the system. © 2004 Wiley Periodicals, Inc.
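The core idea — transform an old measurement into the current frame and fuse it with the new one, instead of adding a second sensor — can be sketched for a scalar range to a fixed landmark. This is an illustrative toy under assumed numbers, not the STSF scheme itself.

```python
# A range measurement taken at t-1 is carried forward by odometry (its
# variance inflated by the motion uncertainty) and combined with the
# current reading by inverse-variance weighting.

def fuse(z1, var1, z2, var2):
    """Inverse-variance fusion of two scalar estimates of the same quantity."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    return (w1 * z1 + w2 * z2) / (w1 + w2), 1.0 / (w1 + w2)

z_prev, var_prev = 10.0, 0.04        # range at t-1 (m, m^2)
d_odom, var_odom = 1.0, 0.01         # forward motion since t-1
z_pred = z_prev - d_odom             # previous sample, current frame
var_pred = var_prev + var_odom       # motion uncertainty inflates variance
z_curr, var_curr = 9.1, 0.04         # fresh reading from the same sensor
z_fused, var_fused = fuse(z_pred, var_pred, z_curr, var_curr)
print(z_fused, var_fused)            # variance below either input alone
```

The fused variance 1/(1/var_pred + 1/var_curr) is always smaller than either input variance, which is exactly the reliability gain the abstract attributes to reusing the temporal sequence of data sets.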


Three-dimensional map building for mobile robot navigation environments using a self-organizing neural network

JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 6 2004
Min Young Kim
In recent years, mobile robots have been required to become increasingly autonomous, able to sense and recognize the three-dimensional space in which they live or work. This paper deals with the problem of building an environment map from three-dimensional sensing data for mobile robot navigation. In particular, it addresses how to extract and model obstacles that exist in the real environment but are not yet represented on the map, so that the map can be updated with the modeled obstacle information. To achieve this, we propose a three-dimensional map building method based on a self-organizing neural network technique called the "growing neural gas network." Using obstacle data acquired through the 3D data acquisition process of an active laser range finder, the neural network is trained to generate a graphical structure that reflects the topology of the input space. For evaluation, a series of simulations and experiments are performed to build 3D maps of several environments surrounding the robot. The usefulness and robustness of the proposed method are investigated and discussed in detail. © 2004 Wiley Periodicals, Inc.
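The competitive-learning core of a growing neural gas can be sketched without the growth machinery: each sample pulls its nearest node strongly and the runner-up weakly, so a fixed set of nodes spreads to reflect the input distribution. This stripped-down 1-D sketch (no node insertion, edge aging, or error accumulation) is an assumption-laden illustration, not the full GNG algorithm.

```python
import random

def quantization_error(nodes, data):
    """Mean squared distance from each sample to its nearest node."""
    return sum(min((x - n) ** 2 for n in nodes) for x in data) / len(data)

random.seed(0)
# Samples from two clusters, standing in for laser range data.
data = [random.gauss(-2.0, 0.3) for _ in range(200)] + \
       [random.gauss(3.0, 0.3) for _ in range(200)]
nodes = [random.uniform(-5, 5) for _ in range(6)]   # random initial codebook

err_before = quantization_error(nodes, data)
for _ in range(10):                        # training epochs
    for x in data:
        ranked = sorted(range(len(nodes)), key=lambda i: abs(x - nodes[i]))
        w, s = ranked[0], ranked[1]        # winner and runner-up
        nodes[w] += 0.10 * (x - nodes[w])  # strong pull on the winner
        nodes[s] += 0.01 * (x - nodes[s])  # weak pull on the runner-up
err_after = quantization_error(nodes, data)
print(err_before, err_after)               # error drops as nodes fit the data
```

In the full GNG, connecting winner and runner-up with an edge on every sample is what yields the graph whose topology mirrors the input space, which is the structure the paper uses as the obstacle map.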


New Traversability Indices and Traversability Grid for Integrated Sensor/Map-Based Navigation

JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 3 2003
Homayoun Seraji
This paper presents new measures of terrain traversability at the short and long ranges of a mobile robot, namely the local and global traversability indices. The sensor-based local traversability index is related by a set of linguistic rules to large obstacles and surface softness within a short range of the robot, as measured by on-board sensors. The map-based global traversability index is obtained from the terrain topographic map and is based on major surface features, such as hills and lakes, within a long range of the robot. These traversability indices complement the mid-range, sensor-based regional traversability index introduced earlier. Each traversability index is represented by four fuzzy sets with the linguistic labels {POOR, LOW, MODERATE, HIGH}, corresponding to surfaces that are unsafe, moderately unsafe, moderately safe, and safe for traversal, respectively. The global terrain analysis also leads to the new concepts of a traversability map and a traversability grid for representing terrain quality based on global map information. The traversability indices drive two sensor-based behaviors, traverse-local and traverse-regional, and one map-based behavior, traverse-global. These behaviors are integrated with a map-based seek-goal behavior to ensure that the mobile robot reaches its goal safely while avoiding both sensed and mapped terrain hazards. This yields a unified system in which the two independent sources of terrain-quality information, i.e., prior maps and on-board sensors, are integrated for reactive robot navigation. The paper concludes with a graphical simulation study. © 2003 Wiley Periodicals, Inc.
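A four-set fuzzy representation like the one described can be sketched with shoulder and triangular membership functions over a normalized traversability score in [0, 1]. The breakpoints below are illustrative assumptions; the paper's actual membership shapes and rule base are not given here.

```python
def tri(x, a, b, c):
    """Triangular membership: feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def left_shoulder(x, b, c):    # full membership below b, ramps to 0 at c
    return 1.0 if x <= b else (0.0 if x >= c else (c - x) / (c - b))

def right_shoulder(x, a, b):   # 0 below a, full membership above b
    return 0.0 if x <= a else (1.0 if x >= b else (x - a) / (b - a))

SETS = {
    "POOR":     lambda x: left_shoulder(x, 0.1, 0.35),
    "LOW":      lambda x: tri(x, 0.1, 0.35, 0.6),
    "MODERATE": lambda x: tri(x, 0.35, 0.6, 0.85),
    "HIGH":     lambda x: right_shoulder(x, 0.6, 0.85),
}

def classify(score):
    """Linguistic label with the highest membership for a crisp score."""
    return max(SETS, key=lambda label: SETS[label](score))

print(classify(0.05), classify(0.7))   # POOR MODERATE
```

In the paper's architecture, linguistic rules over obstacle size and surface softness would set such memberships per grid cell, and the behaviors would reason over the resulting traversability grid rather than over raw sensor values.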