Sensor Data (sensor + data)
Selected Abstracts

Urban Textural Analysis from Remote Sensor Data: Lacunarity Measurements Based on the Differential Box Counting Method
GEOGRAPHICAL ANALYSIS, Issue 4 2006
Soe W. Myint
Lacunarity is related to the spatial distribution of gap or hole sizes. For low lacunarity, all gap sizes are the same and geometric objects are deemed homogeneous; conversely, for high lacunarity, gap sizes are variable and objects are therefore heterogeneous. Textures that are homogeneous at small scales can be quite heterogeneous at large scales and vice versa; hence, lacunarity can be considered a scale-dependent measure of heterogeneity or texture. In this article, we use a lacunarity method based on a differential box counting approach to identify urban land-use and land-cover classes from satellite sensor data. Our methodology focuses on two different gliding box methods to compute lacunarity values and demonstrates a mirror extension approach for a local moving window. The extension approach overcomes, or at least minimizes, the boundary problem. The results from our study suggest that the overlapping box approach is more effective than the skipping box approach, but that there is no significant difference between window sizes. Our work represents a contribution not only to advances in textural and spatial metrics used in remote-sensing pattern interpretation but also to a broader understanding of the computational geometry of nonlinear shape models, of which lacunarity is the reciprocal of fractal theory. [source]
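The gliding-box computation at the heart of such lacunarity measures is compact enough to sketch. The snippet below is a minimal illustration, not the authors' implementation: it computes lacunarity for a single box size on a grey-level image using overlapping (gliding) boxes; the differential box counting refinement and the mirror-extension window handling described in the abstract would be layered on top. All function and variable names are placeholders introduced here.

```python
import numpy as np

def gliding_box_lacunarity(image, box_size):
    """Lacunarity of a 2-D array for one gliding-box size.

    Lacunarity is E[M^2] / E[M]^2, where M is the 'mass' (sum of pixel
    values) inside a box as it glides over every position in the image.
    """
    rows, cols = image.shape
    masses = []
    for i in range(rows - box_size + 1):          # overlapping (gliding) boxes
        for j in range(cols - box_size + 1):
            masses.append(image[i:i + box_size, j:j + box_size].sum())
    masses = np.asarray(masses, dtype=float)
    mean = masses.mean()
    return (masses ** 2).mean() / mean ** 2 if mean > 0 else float("nan")

# A patchy texture yields a higher value than a uniform one at the same mean.
rng = np.random.default_rng(0)
patchy = (rng.random((64, 64)) < 0.1).astype(float)   # sparse bright pixels
uniform = np.full((64, 64), 0.1)
print(gliding_box_lacunarity(patchy, 8), gliding_box_lacunarity(uniform, 8))
```

The "skipping box" variant contrasted in the abstract would correspond to stepping the two loops by box_size rather than by one pixel.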
Optimal linear LQG control over lossy networks without packet acknowledgment
ASIAN JOURNAL OF CONTROL, Issue 1 2008
Bruno Sinopoli
Abstract This paper is concerned with control applications over lossy data networks. Sensor data is transmitted to an estimation-control unit over a network, and control commands are issued to subsystems over the same network. Sensor and control packets may be randomly lost according to a Bernoulli process. In this context, the discrete-time linear quadratic Gaussian (LQG) optimal control problem is considered. It is known that in the scenario described above, and for protocols for which there is no acknowledgment of successful delivery of control packets (e.g. UDP-like protocols), the LQG optimal controller is in general nonlinear. However, the simplicity of a linear sub-optimal solution is attractive for a variety of applications. Accordingly, this paper characterizes the optimal linear static controller and compares its performance to the case when there is acknowledgment of delivery of packets (e.g. TCP-like protocols). Copyright © 2008 John Wiley and Sons Asia Pte Ltd and Chinese Automatic Control Society [source]
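To make the setting concrete, the fragment below is a small, hypothetical sketch of the estimation side of such a networked loop: a discrete-time Kalman filter whose measurement update is simply skipped whenever the sensor packet is lost, with losses drawn from a Bernoulli process. The system matrices, noise levels, and arrival probability are placeholder values, and the actuation path whose lack of acknowledgment distinguishes the UDP-like case is not modeled here.

```python
import numpy as np

# Placeholder system: x_{k+1} = A x_k + w_k,  y_k = C x_k + v_k
A = np.array([[1.0, 0.1], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)          # process noise covariance
R = np.array([[0.1]])         # measurement noise covariance
p_arrival = 0.7               # Bernoulli probability that a sensor packet arrives

rng = np.random.default_rng(1)
x = np.zeros(2)               # true state
x_hat = np.zeros(2)           # estimate
P = np.eye(2)                 # estimate covariance

for k in range(50):
    # The plant evolves regardless of the network.
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q)
    y = C @ x + rng.normal(0.0, np.sqrt(R[0, 0]), size=1)

    # Time update (always performed).
    x_hat = A @ x_hat
    P = A @ P @ A.T + Q

    # Measurement update only when the packet gets through.
    if rng.random() < p_arrival:
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        x_hat = x_hat + (K @ (y - C @ x_hat)).ravel()
        P = (np.eye(2) - K @ C) @ P

print("final estimation error:", np.linalg.norm(x - x_hat))
```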
Algorithms for time synchronization of wireless structural monitoring sensors
EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 6 2005
Ying Lei
Abstract Dense networks of wireless structural health monitoring systems can effectively remove the disadvantages associated with current wire-based sparse sensing systems. However, recorded data sets may have relative time-delays due to interference in radio transmission or inherent internal sensor clock errors. For structural system identification and damage detection purposes, sensor data must be time synchronized. The need for time synchronization of sensor data is illustrated through a series of tests on asynchronous data sets. Results from the identification of structural modal parameters show that frequencies and damping ratios are not influenced by the asynchronous data; however, the error in identifying structural mode shapes can be significant. The objective of this paper is to present algorithms for measurement data synchronization. Two algorithms are proposed for this purpose. The first algorithm is applicable when the input signal to a structure can be measured. The time-delay between an output measurement and the input is identified based on an ARX (auto-regressive model with exogenous input) model for the input–output pair recordings. The second algorithm can be used for a structure subject to ambient excitation, where the excitation cannot be measured. An ARMAV (auto-regressive moving average vector) model is constructed from two output signals and the time-delay between them is evaluated. The proposed algorithms are verified with simulation data and recorded seismic response data from multi-story buildings. The influence of noise on the time-delay estimates is also assessed. Copyright © 2004 John Wiley & Sons, Ltd. [source]
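The underlying task, estimating a relative time-delay between two records, can be illustrated with a much simpler cross-correlation approach than the ARX/ARMAV models used in the paper. The sketch below is only that generic illustration, under the assumption of a common, known sampling interval; the function name and synthetic signal are ours.

```python
import numpy as np

def estimate_delay(reference, delayed, dt):
    """Estimate the lag (in seconds) of `delayed` relative to `reference`
    by locating the peak of their cross-correlation."""
    ref = reference - reference.mean()
    sig = delayed - delayed.mean()
    corr = np.correlate(sig, ref, mode="full")
    lag_samples = np.argmax(corr) - (len(ref) - 1)
    return lag_samples * dt

# Synthetic check: a damped sinusoid shifted by 12 samples (0.12 s at 100 Hz).
dt = 0.01
t = np.arange(0, 10, dt)
x = np.exp(-0.2 * t) * np.sin(2 * np.pi * 1.5 * t)
y = np.roll(x, 12)
print(estimate_delay(x, y, dt))   # approximately 0.12
```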
Ultralow-power CMOS/SOI circuit technology
ELECTRICAL ENGINEERING IN JAPAN, Issue 3 2008
Yuichi Kado
Abstract We have introduced an example of a system that embodies the concept of a ubiquitous communication service and explained the importance of low power consumption in the communicator that will serve as the bridge between the real world and the network for real-time services in which sensor data is acquired every second. An effective route to high energy efficiency is to employ the synergy of combining low-voltage analog circuit technology and FD-SOI devices. Taking advantage of that synergy to reduce the power consumption of the communicator during operation to about 10 mW, and employing intermittent operation with an activity rate of less than 1%, would make it possible to support operation for 1 year or more with a commercial coin-type lithium battery. © 2007 Wiley Periodicals, Inc. Electr Eng Jpn, 162(3): 38–43, 2008; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/eej.20543 [source]

Random fields–union intersection tests for detecting functional connectivity in EEG/MEG imaging
HUMAN BRAIN MAPPING, Issue 8 2009
Felix Carbonell
Abstract Electrophysiological (EEG/MEG) imaging challenges statistics by providing two views of the same underlying spatio-temporal brain activity: a topographic view (EEG/MEG) and a tomographic view (EEG/MEG source reconstructions). Statistical parametric mapping (SPM) for these two situations is commonly developed separately. In particular, assessing the statistical significance of functional connectivity is a major challenge in these types of studies. This work introduces statistical tests for simultaneously assessing the significance of the spatio-temporal correlation structure between ERP/ERF components as well as that of their generating sources. We introduce a greatest root statistic as the multivariate test statistic for detecting functional connectivity between two sets of EEG/MEG measurements at a given time instant. We use some new results in random field theory to solve the multiple comparisons problem resulting from the correlated test statistics at each time instant. In general, our approach using the union-intersection (UI) principle provides a framework for hypothesis testing about any linear combination of sensor data, which allows the analysis of the correlation structure of both topographic and tomographic views. The performance of the proposed method is illustrated with real ERP data obtained from a face recognition experiment. Hum Brain Mapp 2009. © 2009 Wiley-Liss, Inc. [source]

Delineating runoff processes and critical runoff source areas in a pasture hillslope of the Ozark Highlands
HYDROLOGICAL PROCESSES, Issue 21 2008
M. D. Leh
Abstract The identification of runoff contributing areas would provide the ideal focal points for water quality monitoring and Best Management Practice (BMP) implementation. The objective of this study was to use a field-scale approach to delineate critical runoff source areas and to determine the runoff mechanisms in a pasture hillslope of the Ozark Highlands in the USA. Three adjacent hillslope plots located at the Savoy Experimental Watershed, north-west Arkansas, were bermed to isolate runoff. Each plot was equipped with paired subsurface saturation and surface runoff sensors, shallow groundwater wells, H-flumes and rain gauges to quantify runoff mechanisms and rainfall characteristics at continuous 5-minute intervals. The spatial extent of runoff source areas was determined by incorporating sensor data into a geographic information system and performing geostatistical computations (inverse distance weighting method). Results indicate that both infiltration excess runoff and saturation excess runoff mechanisms occur to varying extents (0–58% for infiltration excess and 0–26% for saturation excess) across the plots. Rainfall events that occurred 1–5 January 2005 are used to illustrate the spatial and temporal dynamics of the critical runoff source areas. The methodology presented can serve as a framework upon which critical runoff source areas can be identified and managed for water quality protection in other watersheds. Copyright © 2008 John Wiley & Sons, Ltd. [source]
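The interpolation step mentioned in the runoff study, turning scattered point sensor observations into a continuous surface, can be sketched with a bare-bones inverse distance weighting routine. This is a generic illustration under assumed coordinates and values, not the study's GIS workflow; the sensor locations, indicator values, and grid below are made up.

```python
import numpy as np

def idw(sample_xy, sample_values, query_xy, power=2.0):
    """Inverse-distance-weighted estimate at each query point.

    Each known sample contributes with weight 1 / distance**power; a query
    point that coincides with a sample simply takes that sample's value.
    """
    sample_xy = np.asarray(sample_xy, float)
    sample_values = np.asarray(sample_values, float)
    estimates = []
    for q in np.asarray(query_xy, float):
        d = np.linalg.norm(sample_xy - q, axis=1)
        if np.any(d == 0):
            estimates.append(sample_values[d == 0][0])
            continue
        w = 1.0 / d ** power
        estimates.append(np.sum(w * sample_values) / np.sum(w))
    return np.array(estimates)

# Hypothetical saturation-sensor locations (m) and binary runoff indicators.
sensors = [(0, 0), (10, 0), (0, 10), (10, 10)]
runoff = [1, 0, 0, 1]
grid = [(x, y) for x in range(0, 11, 5) for y in range(0, 11, 5)]
print(idw(sensors, runoff, grid).round(2))
```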
Delay aware reliable transport in wireless sensor networks
INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 10 2007
Vehbi C. Gungor
Abstract Wireless sensor networks (WSN) are event-based systems that rely on the collective effort of several sensor nodes. Reliable event detection at the sink is based on collective information provided by the sensor nodes and not on any individual sensor data. Hence, conventional end-to-end reliability definitions and solutions are inapplicable in the WSN regime and would only lead to a waste of scarce sensor resources. Moreover, the reliability objective of WSN must be achieved within a certain real-time delay bound posed by the application. Therefore, the WSN paradigm necessitates a collective delay-constrained event-to-sink reliability notion rather than the traditional end-to-end reliability approaches. To the best of our knowledge, there is no transport protocol solution which addresses both the reliability and real-time delay bound requirements of WSN simultaneously. In this paper, the delay aware reliable transport (DART) protocol is presented for WSN. The objective of the DART protocol is to transport event features from the sensor field to the sink in a timely and reliable manner with minimum energy consumption. In this regard, the DART protocol simultaneously addresses congestion control and timely event transport reliability objectives in WSN. In addition to its efficient congestion detection and control algorithms, it incorporates the time critical event first (TCEF) scheduling mechanism to meet the application-specific delay bounds at the sink node. Importantly, the algorithms of the DART protocol mainly run on the resource-rich sink node, with minimal functionality required at resource-constrained sensor nodes. Furthermore, the DART protocol can accommodate multiple concurrent event occurrences in a wireless sensor field. Performance evaluation via simulation experiments shows that the DART protocol achieves high performance in terms of real-time communication requirements, reliable event detection and energy consumption in WSN. Copyright © 2007 John Wiley & Sons, Ltd. [source]

Error modeling and calibration of exteroceptive sensors for accurate mapping applications
JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 1 2010
James P. Underwood
Reliable robotic perception and planning are critical to performing autonomous actions in uncertain, unstructured environments. In field robotic systems, automation is achieved by interpreting exteroceptive sensor information to infer something about the world. This is then mapped to provide a consistent spatial context, so that actions can be planned around the predicted future interaction of the robot and the world. The whole system is as reliable as the weakest link in this chain. In this paper, the term mapping is used broadly to describe the transformation of range-based exteroceptive sensor data (such as LIDAR or stereo vision) to a fixed navigation frame, so that it can be used to form an internal representation of the environment. The coordinate transformation from the sensor frame to the navigation frame is analyzed to produce a spatial error model that captures the dominant geometric and temporal sources of mapping error. This allows the mapping accuracy to be calculated at run time. A generic extrinsic calibration method for exteroceptive range-based sensors is then presented to determine the sensor location and orientation. This allows systematic errors in individual sensors to be minimized, and when multiple sensors are used, it minimizes the systematic contradiction between them to enable reliable multisensor data fusion. The mathematical derivations at the core of this model are not particularly novel or complicated, but the rigorous analysis and application to field robotics seem to be largely absent from the literature to date. The techniques in this paper are simple to implement, and they offer a significant improvement to the accuracy, precision, and integrity of mapped information. Consequently, they should be employed whenever maps are formed from range-based exteroceptive sensor data. © 2009 Wiley Periodicals, Inc. [source]
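The coordinate transformation that this abstract refers to can be sketched in a few lines: a range/bearing return expressed in the sensor frame is pushed through a fixed sensor-to-body extrinsic calibration and then through a time-stamped body-to-navigation pose. The sketch below is a minimal illustration of that chain of homogeneous transforms; all mounting offsets, poses, and the single return are placeholder numbers, and the paper's error model is not reproduced.

```python
import numpy as np

def rot_z(yaw):
    """Rotation matrix for a yaw angle about the z axis."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def homogeneous(R, t):
    """4x4 homogeneous transform from a rotation matrix and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Placeholder extrinsic calibration: sensor mounted 0.5 m forward, 0.2 m up,
# and rotated 5 degrees in yaw relative to the vehicle body frame.
T_body_sensor = homogeneous(rot_z(np.deg2rad(5.0)), [0.5, 0.0, 0.2])

# Placeholder vehicle pose in the navigation frame at the measurement time.
T_nav_body = homogeneous(rot_z(np.deg2rad(30.0)), [12.0, -3.0, 0.0])

# A single range return: 8 m at a 10-degree bearing in the sensor frame.
r, bearing = 8.0, np.deg2rad(10.0)
p_sensor = np.array([r * np.cos(bearing), r * np.sin(bearing), 0.0, 1.0])

# Chain the transforms: sensor frame -> body frame -> navigation frame.
p_nav = T_nav_body @ T_body_sensor @ p_sensor
print(p_nav[:3])
```

Errors in the extrinsic calibration (the first transform) or in the time stamp of the pose (the second) propagate directly into the mapped point, which is the systematic effect the calibration method is designed to minimize.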
Generation and visualization of large-scale three-dimensional reconstructions from underwater robotic surveys
JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 1 2010
Matthew Johnson-Roberson
Robust, scalable simultaneous localization and mapping (SLAM) algorithms support the successful deployment of robots in real-world applications. In many cases these platforms deliver vast amounts of sensor data from large-scale, unstructured environments. These data may be difficult to interpret by end users without further processing and suitable visualization tools. We present a robust, automated system for large-scale three-dimensional (3D) reconstruction and visualization that takes stereo imagery from an autonomous underwater vehicle (AUV) and SLAM-based vehicle poses to deliver detailed 3D models of the seafloor in the form of textured polygonal meshes. Our system must cope with thousands of images, lighting conditions that create visual seams when texturing, and possible inconsistencies between stereo meshes arising from errors in calibration, triangulation, and navigation. Our approach breaks down the problem into manageable stages by first estimating local structure and then combining these estimates to recover a composite georeferenced structure using SLAM-based vehicle pose estimates. A texture-mapped surface at multiple scales is then generated that is interactively presented to the user through a visualization engine. We adapt established solutions when possible, with an emphasis on quickly delivering approximate yet visually consistent reconstructions on standard computing hardware. This allows scientists on a research cruise to use our system to design follow-up deployments of the AUV and complementary instruments. To date, this system has been tested on several research cruises in Australian waters and has been used to reliably generate and visualize reconstructions for more than 60 dives covering diverse habitats and representing hundreds of linear kilometers of survey. © 2009 Wiley Periodicals, Inc. [source]

A perception-driven autonomous urban vehicle
JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 10 2008
John Leonard
This paper describes the architecture and implementation of an autonomous passenger vehicle designed to navigate using locally perceived information in preference to potentially inaccurate or incomplete map data. The vehicle architecture was designed to handle the original DARPA Urban Challenge requirements of perceiving and navigating a road network with segments defined by sparse waypoints. The vehicle implementation includes many heterogeneous sensors with significant communications and computation bandwidth to capture and process high-resolution, high-rate sensor data. The output of the comprehensive environmental sensing subsystem is fed into a kinodynamic motion planning algorithm to generate all vehicle motion. The requirements of driving in lanes, three-point turns, parking, and maneuvering through obstacle fields are all generated with a unified planner. A key aspect of the planner is its use of closed-loop simulation in a rapidly exploring random trees (RRT) algorithm, which can randomly explore the space while efficiently generating smooth trajectories in a dynamic and uncertain environment. The overall system was realized through the creation of a powerful new suite of software tools for message passing, logging, and visualization. These innovations provide a strong platform for future research in autonomous driving in global positioning system (GPS)-denied and highly dynamic environments with poor a priori information. © 2008 Wiley Periodicals, Inc. [source]
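For readers unfamiliar with the planner family mentioned above, the snippet below is a deliberately minimal 2-D RRT: it grows a tree of nodes toward random samples until the goal region is reached. It omits the closed-loop simulation, vehicle dynamics, collision checking, and trajectory smoothing that distinguish the planner described in the abstract; the workspace bounds, step size, and goal bias are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
start, goal = np.array([0.0, 0.0]), np.array([9.0, 9.0])
step, goal_tol = 0.5, 0.5

nodes = [start]       # tree vertices
parents = [0]         # index of each node's parent

for _ in range(2000):
    # Sample a point, biasing 10% of samples toward the goal.
    sample = goal if rng.random() < 0.1 else rng.uniform(0.0, 10.0, size=2)

    # Extend the nearest existing node one fixed step toward the sample.
    dists = [np.linalg.norm(n - sample) for n in nodes]
    nearest = int(np.argmin(dists))
    direction = sample - nodes[nearest]
    norm = np.linalg.norm(direction)
    if norm == 0:
        continue
    new_node = nodes[nearest] + step * direction / norm

    # A real planner would check collisions / dynamic feasibility here.
    nodes.append(new_node)
    parents.append(nearest)

    if np.linalg.norm(new_node - goal) < goal_tol:
        # Walk back up the tree to recover the path.
        path, idx = [new_node], len(nodes) - 1
        while idx != 0:
            idx = parents[idx]
            path.append(nodes[idx])
        print("path found with", len(path), "nodes")
        break
```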
A Self-Consistent Bathymetric Mapping Algorithm
JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 1-2 2007
Chris Roman
The achievable accuracy of bathymetric mapping in the deep ocean using robotic systems is most often limited by the available guidance or navigation information used to combine the measured sonar ranges during the map making process. This paper presents an algorithm designed to mitigate the effects of poor ground-referenced navigation by applying the principles of map registration and pose filtering commonly used in simultaneous localization and mapping (SLAM) algorithms. The goal of the algorithm is to produce a self-consistent point cloud representation of the bottom terrain with errors that are on a scale similar to the sonar range resolution rather than any direct positioning measurement. The presented algorithm operates causally and utilizes sensor data that are common to instrumented underwater robotic vehicles used for mapping and scientific exploration. Real-world results are shown for data taken on several expeditions with the JASON remotely operated vehicle (ROV). Comparisons are made between more standard mapping approaches, and the proposed method is shown to significantly improve the map quality and reveal scene information that would have otherwise been obscured due to poor direct navigation information. © 2007 Wiley Periodicals, Inc. [source]

Range error detection caused by occlusion in non-coaxial LADARs for scene interpretation
JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 10 2005
Bingbing Liu
When processing laser detection and ranging (LADAR) sensor data for scene interpretation, for example, for the purposes of feature extraction and/or data association in mobile robotics, most previous work models such devices as producing range data that follows a normal distribution. In this paper, it is demonstrated that commonly used LADARs suffer from incorrect range readings at changes in surface reflectivity and/or range discontinuities, which can have a much more detrimental effect on such algorithms than random noise. Most LADARs fall into two categories: coaxial, and separated transmitter and receiver configurations. The latter offer the advantage that optical crosstalk is eliminated, since it can be guaranteed that all of the transmitted light leaves the LADAR and is not in any way partially reflected within it due to the beam-splitting techniques necessary in coaxial LADARs. However, they can introduce a significant disparity effect, as the reflected laser energy from the target can be partially occluded from the receiver. As well as demonstrating that false range values can result from this occlusion effect in scanned LADARs, the main contribution of this paper is to show that the occurrence of these values can be reliably predicted by monitoring the received signal strength and a quantity we refer to as the "transceiver separation angle" of the rotating mirror. This paper will demonstrate that a correct understanding of such systematic errors is essential for the correct further processing of the data. A useful design criterion for the optical separation of the receiver and transmitter is also derived for noncoaxial LADARs, based on the minimum detectable signal amplitude of a LADAR and environmental edge constraints. By investigating the effects of various sensor and environmental parameters on occlusion, some advice is given on how to make use of noncoaxial LADARs correctly so as to avoid range errors when scanning environmental discontinuities. © 2005 Wiley Periodicals, Inc. [source]

An autonomous tracked vehicle with omnidirectional sensing
JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 8 2004
R. David Hampton
Operation of an autonomous vehicle along a marked path, in an obstacle-laden environment, requires path detection, relative position detection and control, and obstacle detection and avoidance. The design solution of the team from the U.S. Military Academy is a tracked vehicle operating open-loop in response to position information from an omnidirectional mirror, and to obstacle-detection input from the mirror and from a scanning laser. The use of a tracked rather than a wheeled vehicle is the team's open-loop solution to the problem of wheeled-vehicle slippage on wet and sandy surfaces. The vehicle responds to sensor information from (1) a digital camera-mounted parabolic omnidirectional mirror for visual inputs and (2) a scanning laser for detecting obstacles in relief. Raw sensor data is converted synchronously into a global virtual context, which places the vehicle's center at the origin of a 2-D Cartesian coordinate system. A four-phase process is used to convert the camera's inputs into the data structures needed to reason about the vehicle's position relative to the course. Development of the path plan proceeds incrementally, using a space-sweeping algorithm to identify safe paths along waypoints within the course boundaries. An attempt is made to minimize translation errors by favoring paths which exhibit fewer sharp turns. Integration of Intel's OpenCV computer vision library and the Independent JPEG Group's JPEG library allows for very good encapsulation of the low-level functions needed to do most of the image processing. Ada95 is the language of choice for the majority of the team-developed software, except where needed to interface to motors and sensors. Use of an object-oriented high-level language has been invaluable in leveraging the efforts of previous years' development activities, and for maximizing the ability to log or otherwise respond to anomalous behavior. © 2004 Wiley Periodicals, Inc. [source]

A Dynamic Analysis of a Spatial Manipulator to Determine Payload Weight
JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 7 2003
Carl D. Crane III
This paper presents a methodology whereby the payload weight of a serial manipulator can be determined from a minimum set of sensor data, i.e., joint angle and joint torque measurements. The particular manipulator geometry that is analyzed is a four degree-of-freedom serial chain that is commonly used in excavator systems. It was quite remarkable that a relatively simple solution was obtained for the payload weight, considering that there are a total of nine unknown moments and cross moments of inertia of the payload together with the unknown location of the center of mass. Example calculations are presented. © 2003 Wiley Periodicals, Inc. [source]
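The basic idea of inferring a payload weight from joint measurements can be illustrated with a much simpler static, planar two-link case than the four degree-of-freedom chain analyzed in the paper. In the sketch below, the torque residual at the first joint (measured torque minus the torque predicted by a no-load model) is divided by the payload's horizontal moment arm; link lengths, joint angles, and torque values are invented placeholders, and the dynamic terms the paper accounts for are ignored.

```python
import numpy as np

# Placeholder planar two-link geometry (m) and joint angles (rad).
l1, l2 = 2.0, 1.5
theta1, theta2 = np.deg2rad(40.0), np.deg2rad(-25.0)
g = 9.81

# Horizontal moment arm from joint 1 to a payload carried at the end effector.
moment_arm = l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2)

# Pretend these came from a torque sensor and a calibrated no-load model (N*m).
tau_measured = 310.0     # joint-1 torque with the payload attached
tau_no_load = 180.0      # joint-1 torque predicted without any payload

# Static balance: the torque residual equals payload weight times moment arm.
payload_weight = (tau_measured - tau_no_load) / moment_arm   # newtons
print(f"estimated payload weight: {payload_weight:.1f} N "
      f"({payload_weight / g:.1f} kg)")
```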
Use of the DirecNet Applied Treatment Algorithm (DATA) for diabetes management with a real-time continuous glucose monitor (the FreeStyle Navigator)
PEDIATRIC DIABETES, Issue 2 2008
Diabetes Research In Children Network (DirecNet) Study Group
Background: There are no published guidelines for use of real-time continuous glucose monitoring data by a patient; we therefore developed the DirecNet Applied Treatment Algorithm (DATA). The DATA provides algorithms for making diabetes management decisions using glucose values (i) in real time, taking into account the direction and rate of change of glucose levels, and (ii) retrospectively, based on downloaded sensor data. Objective: To evaluate the use and effectiveness of the DATA in children with diabetes using a real-time continuous glucose sensor (the FreeStyle Navigator). Subjects: Thirty children and adolescents (mean ± standard deviation age = 11.2 ± 4.1 yr) receiving insulin pump therapy. Methods: Subjects were instructed on use of the DATA and were asked to download their Navigator weekly to review glucose patterns. An Algorithm Satisfaction Questionnaire was completed at 3, 7, and 13 wk. Results: At 13 wk, all of the subjects and all but one parent thought that the DATA gave good, clear directions for insulin dosing, and thought the guidelines improved their postprandial glucose levels. In responding to alarms, 86% of patients used the DATA at least 50% of the time at 3 wk, and 59% reported doing so at 13 wk. Similar results were seen in using the DATA to adjust premeal bolus doses of insulin. Conclusions: These results show the feasibility of implementing the DATA when real-time continuous glucose monitoring is initiated and support its use in future clinical trials of real-time continuous glucose monitoring. [source]
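The real-time branch of such an algorithm hinges on the direction and rate of change of the glucose trace. The fragment below is only a generic sketch of how a rate of change might be computed from a few recent sensor samples with a least-squares slope; it is not the DATA, the sample values are hypothetical, and the trend cut-offs are illustrative placeholders rather than clinical rules.

```python
import numpy as np

def glucose_trend(times_min, glucose_mgdl):
    """Least-squares rate of change (mg/dL per minute) over recent samples."""
    slope, _ = np.polyfit(times_min, glucose_mgdl, 1)
    return slope

# Five hypothetical sensor readings taken one minute apart.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
bg = np.array([138.0, 141.0, 145.0, 148.0, 152.0])

rate = glucose_trend(t, bg)
# Illustrative bucketing only; the thresholds are placeholders, not guidance.
if rate > 1.0:
    label = "rising"
elif rate < -1.0:
    label = "falling"
else:
    label = "stable"
print(f"{rate:.1f} mg/dL/min ({label})")
```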