Ground Vehicle (ground + vehicle)

Selected Abstracts


The DARPA LAGR program: Goals, challenges, methodology, and phase I results

JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 11-12 2006
L. D. Jackel
The DARPA Learning Applied to Ground Vehicles (LAGR) program is accelerating progress in autonomous, perception-based, off-road navigation in unmanned ground vehicles (UGVs) by incorporating learned behaviors. In addition, the program is using passive optical systems to accomplish long-range scene analysis. By combining long-range perception with learned behavior, LAGR expects to make a qualitative break with the myopic, brittle behavior that characterizes most UGV autonomous navigation in unstructured environments. The very nature of testing navigation in unstructured, off-road environments makes accurate, objective measurement of progress a challenging task. While no absolute measure of performance has been defined by LAGR, the Government Team managing the program has created a relative measure: the Government Team tests navigation software by comparing its effectiveness to that of fixed, but state-of-the-art, navigation software running on a standardized vehicle on a series of varied test courses. Starting in March 2005, eight performers have been submitting navigation code for Government testing on such a standardized Government vehicle. As this text is being written, several teams have already demonstrated leaps in performance. In this paper we report observations on the state of the art in autonomous, off-road UGV navigation, we explain how LAGR intends to change current methods, we discuss the challenges we face in implementing technical aspects of the program, we describe early results, and we suggest where major opportunities for breakthroughs exist as LAGR progresses. © 2007 Wiley Periodicals, Inc.
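The abstract describes a relative, not absolute, performance measure: each team's navigation code is timed against a fixed baseline on the same courses. The paper does not give a formula, so the scoring below is purely a hypothetical sketch of how such a course-averaged comparison could be computed.

```python
def relative_score(candidate_times, baseline_times):
    """Score a candidate navigator against baseline software run on the
    same courses. A score above 1.0 means the candidate completed the
    courses faster, on average, than the fixed baseline.

    This is an illustrative metric, not the one LAGR actually used.
    """
    if len(candidate_times) != len(baseline_times):
        raise ValueError("need one candidate time per baseline course time")
    # Per-course speedup ratio, averaged over all test courses.
    ratios = [b / c for b, c in zip(baseline_times, candidate_times)]
    return sum(ratios) / len(ratios)

# Candidate is twice as fast on one course, matches baseline on another.
print(relative_score([50.0, 100.0], [100.0, 100.0]))  # 1.5
```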


Learning in a hierarchical control system: 4D/RCS in the DARPA LAGR program

JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 11-12 2006
Jim Albus
The Defense Applied Research Projects Agency (DARPA) Learning Applied to Ground Vehicles (LAGR) program aims to develop algorithms for autonomous vehicle navigation that learn how to operate in complex terrain. Over many years, the National Institute of Standards and Technology (NIST) has developed a reference model control system architecture called 4D/RCS that has been applied to many kinds of robot control, including autonomous vehicle control. For the LAGR program, NIST has embedded learning into a 4D/RCS controller to enable the small robot used in the program to learn to navigate through a range of terrain types. The vehicle learns in several ways. These include learning by example, learning by experience, and learning how to optimize traversal. Learning takes place in the sensory processing, world modeling, and behavior generation parts of the control system. The 4D/RCS architecture is explained in the paper, its application to LAGR is described, and the learning algorithms are discussed. Results are shown of the performance of the NIST control system on independently conducted tests. Further work on the system and its learning capabilities is discussed. © 2007 Wiley Periodicals, Inc.
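The abstract names the three parts of a 4D/RCS node where learning occurs: sensory processing, world modeling, and behavior generation. The loose sketch below shows only that control-loop structure; the class, its rules, and its state are invented for illustration and greatly simplify the real architecture.

```python
class RCSNode:
    """Minimal sketch of one 4D/RCS control node: sensory processing (SP)
    filters raw input, world modeling (WM) maintains an internal state
    estimate, and behavior generation (BG) emits a command.

    Hypothetical structure only; real 4D/RCS nodes are hierarchical and
    far richer than this.
    """

    def __init__(self):
        self.world_model = {}

    def sensory_processing(self, raw):
        # SP stage: discard readings flagged invalid (None).
        return {k: v for k, v in raw.items() if v is not None}

    def world_modeling(self, observations):
        # WM stage: fold filtered observations into the internal model.
        self.world_model.update(observations)

    def behavior_generation(self):
        # BG stage: a trivial placeholder rule.
        return "stop" if self.world_model.get("obstacle_ahead") else "go"

    def cycle(self, raw):
        self.world_modeling(self.sensory_processing(raw))
        return self.behavior_generation()

node = RCSNode()
print(node.cycle({"obstacle_ahead": True, "speed": None}))  # stop
```

In the real architecture, learning would adjust the SP filters, the WM contents, and the BG rules over time rather than leaving them fixed as here.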


Shared environment representation for a human-robot team performing information fusion

JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 11-12 2007
Tobias Kaupp
This paper addresses the problem of building a shared environment representation by a human-robot team. Rich environment models are required in real applications both for autonomous operation of robots and to support human decision-making. Two probabilistic models are used to describe outdoor environment features such as trees: geometric (position in the world) and visual. The visual representation is used to improve data association and to classify features. Both models are able to incorporate observations from robotic platforms and human operators. Physically, humans and robots form a heterogeneous sensor network. In our experiments, the human-robot team consists of an unmanned air vehicle, a ground vehicle, and two human operators. They are deployed for an information gathering task and perform information fusion cooperatively. All aspects of the system, including the fusion algorithms, are fully decentralized. Experimental results are presented in the form of the acquired multi-attribute feature map, information exchange patterns demonstrating human-robot information fusion, and quantitative model evaluation. Lessons learned from deploying the system in the field are also presented. © 2007 Wiley Periodicals, Inc.
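The probabilistic geometric model described above lets observations of the same feature from different platforms (UAV, ground vehicle, or a human operator) be combined. A standard way to do this for Gaussian estimates, and a common building block of decentralized fusion, is to add them in information (inverse-covariance) form; the one-dimensional sketch below illustrates the idea under that assumption, not the paper's exact algorithm.

```python
def fuse_gaussian(mean_a, var_a, mean_b, var_b):
    """Fuse two independent Gaussian estimates of the same quantity
    (e.g. a tree's east coordinate in meters) in information form.

    Information (inverse variance) adds, so the fused estimate is
    tighter than either input and pulled toward the more certain one.
    """
    info = 1.0 / var_a + 1.0 / var_b
    mean = (mean_a / var_a + mean_b / var_b) / info
    return mean, 1.0 / info

# Hypothetical readings: UAV sees the tree at 10.0 m (variance 4.0);
# the ground vehicle sees it at 12.0 m (variance 1.0).
mean, var = fuse_gaussian(10.0, 4.0, 12.0, 1.0)
print(round(mean, 2), round(var, 2))  # 11.6 0.8
```

Because the update is symmetric and associative, each node in a decentralized network can apply it locally as estimates arrive, with no central fusion server.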


Hazard avoidance for high-speed mobile robots in rough terrain

JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 5 2006
Matthew Spenko
Unmanned ground vehicles have important applications in high-speed, rough-terrain scenarios. In these scenarios, unexpected and dangerous situations can occur that require rapid hazard avoidance maneuvers. At high speeds, there is limited time to perform navigation and hazard avoidance calculations based on detailed vehicle and terrain models. This paper presents a method for high-speed hazard avoidance based on the "trajectory space," which is a compact model-based representation of a robot's dynamic performance limits in rough, natural terrain. Simulation and experimental results on a small gasoline-powered unmanned ground vehicle demonstrate the method's effectiveness on sloped and rough terrain. © 2006 Wiley Periodicals, Inc.
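The key idea above is precomputing the vehicle's dynamic limits so that, at speed, a candidate maneuver can be accepted or rejected with a cheap lookup rather than a full dynamics simulation. The check below is a deliberately simplified stand-in for such a limit (a single lateral-acceleration bound); the actual trajectory space in the paper encodes much richer terrain-dependent limits.

```python
def within_limits(speed, turn_rate, max_lateral_accel=4.0):
    """Cheap runtime test of a candidate maneuver against a precomputed
    dynamic limit: reject commands whose lateral acceleration v * omega
    exceeds what the terrain allows (a hypothetical slip/rollover bound).

    speed in m/s, turn_rate in rad/s, max_lateral_accel in m/s^2.
    """
    return abs(speed * turn_rate) <= max_lateral_accel

print(within_limits(5.0, 0.5))  # True  (2.5 m/s^2)
print(within_limits(8.0, 0.8))  # False (6.4 m/s^2)
```

The point of the representation is that this test is constant-time, so many candidate maneuvers can be screened within a single control cycle.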


Visual odometry for ground vehicle applications

JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 1 2006
David Nistér
We present a system that estimates the motion of a stereo head, or a single moving camera, based on video input. The system operates in real time with low delay, and the motion estimates are used for navigational purposes. The front end of the system is a feature tracker. Point features are matched between pairs of frames and linked into image trajectories at video rate. Robust estimates of the camera motion are then produced from the feature tracks using a geometric hypothesize-and-test architecture. This generates motion estimates from visual input alone. No prior knowledge of the scene or the motion is necessary. The visual estimates can also be used in conjunction with information from other sources, such as a global positioning system, inertial sensors, wheel encoders, etc. The pose estimation method has been applied successfully to video from aerial, automotive, and handheld platforms. We focus on results obtained with a stereo head mounted on an autonomous ground vehicle. We give examples of camera trajectories estimated in real time purely from images over previously unseen distances (600 m) and periods of time. © 2006 Wiley Periodicals, Inc.
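The "geometric hypothesize-and-test architecture" mentioned above is a RANSAC-style loop: draw a minimal sample of feature matches, hypothesize a motion, and keep the hypothesis supported by the most matches. The sketch below shows that loop for a toy 2-D translation model; the actual system estimates full camera pose from calibrated image correspondences.

```python
import random

def ransac_translation(matches, threshold=0.1, iterations=200, seed=0):
    """Hypothesize-and-test sketch: estimate a 2-D translation from noisy
    feature matches. Each match is ((x1, y1), (x2, y2)); inliers satisfy
    p1 + t == p2 up to the threshold. Toy model, not full camera pose.
    """
    rng = random.Random(seed)
    best_t, best_inliers = None, -1
    for _ in range(iterations):
        (x1, y1), (x2, y2) = rng.choice(matches)   # minimal sample: 1 match
        t = (x2 - x1, y2 - y1)                     # hypothesized motion
        inliers = sum(                             # test against all matches
            1 for (a, b), (c, d) in matches
            if abs(c - a - t[0]) < threshold and abs(d - b - t[1]) < threshold
        )
        if inliers > best_inliers:                 # keep best-supported
            best_t, best_inliers = t, inliers
    return best_t, best_inliers

# Three matches agree on motion (1, 2); one is an outlier.
matches = [((0, 0), (1, 2)), ((3, 1), (4, 3)), ((5, 5), (6, 7)), ((2, 2), (9, 9))]
t, n = ransac_translation(matches)
print(t, n)  # (1, 2) 3
```

Because each hypothesis comes from a minimal sample, a single outlier match cannot corrupt the estimate; it simply produces a poorly supported hypothesis that loses the vote.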


Design evolution of the Trinity College IGVC robot ALVIN

JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 9 2004
Michelle Bovard
In this paper we discuss the design and evolution of Trinity College's ALVIN robot, an autonomous ground vehicle that has participated in the Association for Unmanned Vehicle Systems International Intelligent Ground Vehicle Competition (IGVC) since 2000. The paper first discusses the Trinity Robot Study Team, which has been responsible for developing ALVIN. We then illustrate the four generations of ALVIN, focusing on improvements made as the result of performance shortcomings and outright failures. The discussion considers the robot's body design, drive system, sensors, navigation algorithms, and vision systems. We focus especially on the vision and navigation systems developed for Trinity's fourth-generation IGVC robot, ALVIN IV. The paper concludes with a plan for future work on ALVIN and with a discussion of educational outcomes resulting from the ALVIN project. © 2004 Wiley Periodicals, Inc.


Design of an unmanned ground vehicle, Bearcat III, theory and practice

JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 9 2004
Masoud Ghaffari
The purpose of this paper is to describe the design and implementation of an unmanned ground vehicle, called the Bearcat III, named after the University of Cincinnati mascot. The Bearcat III is an electric-powered, three-wheeled vehicle that was designed for the Intelligent Ground Vehicle Competition and has been tested in the contest for 5 years. The dynamic model, control system, and design of the sensory systems are described. For the autonomous challenge, line following, obstacle detection, and pothole avoidance are required. Line following is accomplished with a dual camera system and video tracker. Obstacle detection is accomplished with either a rotating ultrasonic sensor or a laser scanner. Pothole detection is implemented with a video frame grabber. For the navigation challenge, waypoint following and obstacle detection are required. The waypoint navigation is implemented with a global positioning system. The Bearcat III has provided an educational test bed not only for the contest requirements but also for other studies in developing artificial intelligence algorithms such as adaptive learning, creative control, automatic calibration, and internet-based control. The significance of this effort is in helping engineering and technology students understand the transition from theory to practice. © 2004 Wiley Periodicals, Inc.
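GPS waypoint following of the kind described above reduces, at its core, to repeatedly steering toward the bearing of the next waypoint. The helper below shows that one step for local planar (east, north) coordinates; it is a generic illustration with invented names, not code from the Bearcat III.

```python
import math

def bearing_to_waypoint(current, waypoint):
    """Heading in degrees clockwise from north, from the current position
    to a waypoint, both as local (east, north) coordinates in meters.

    A GPS fix would first be projected into this local frame; that step
    is omitted here.
    """
    d_east = waypoint[0] - current[0]
    d_north = waypoint[1] - current[1]
    # atan2(east, north) gives the compass convention: 0 = north, 90 = east.
    return math.degrees(math.atan2(d_east, d_north)) % 360.0

print(bearing_to_waypoint((0.0, 0.0), (10.0, 10.0)))  # 45.0 (northeast)
print(bearing_to_waypoint((0.0, 0.0), (0.0, -5.0)))   # 180.0 (due south)
```

A waypoint controller would feed the difference between this bearing and the vehicle's current heading into the steering loop, advancing to the next waypoint once within some arrival radius.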


A Human–Automation Interface Model to Guide Automation Design of System Functions

NAVAL ENGINEERS JOURNAL, Issue 1 2007
JOSHUA S. KENNEDY
A major component of the US Army's Future Combat Systems (FCS) will be a fleet of eight different manned ground vehicles (MGV). There are promises that "advanced automation" will accomplish many of the tasks formerly performed by soldiers in legacy vehicle systems. However, the current approach to automation design does not relieve the soldier operator of tasks; rather, it changes the role of the soldiers and the work they must do, often in ways unintended and unanticipated. This paper proposes a coherent, top-down, overarching approach to the design of a human–automation interaction model. First, a qualitative model is proposed to drive the functional architecture and human–automation interface scheme for the MGV fleet. Second, the proposed model is applied to a portion of the functional flow of the common crew station on the MGV fleet. Finally, the proposed model is demonstrated quantitatively via a computational task-network modeling program (Improved Performance Research and Integration Tool). The modeling approach offers insights into the impacts on human task-loading, workload, and human performance. Implications for human systems integration domains are discussed, including Manpower and Personnel, Human Factors Engineering, Training, System Safety, and Soldier Survivability. The proposed model gives engineers and scientists a top-down approach to explicitly define and design the interactions between proposed automation schemes and the human crew. Although this paper focuses on the Army's FCS MGV fleet, the model and analytical processes proposed, or similar approaches, are appropriate for many manned systems in multiple domains (aviation, space, maritime, ground transportation, manufacturing, etc.).