Beowulf Cluster

Selected Abstracts


On-Line Control Architecture for Enabling Real-Time Traffic System Operations

COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 5 2004
Srinivas Peeta
Critical to their effectiveness are the control architectures that provide a blueprint for the efficient transmission and processing of large amounts of real-time data, and consistency-checking and fault tolerance mechanisms to ensure seamless automated functioning. However, the lack of low-cost, high-performance, and easy-to-build computing environments is a key impediment to the widespread deployment of such architectures in the real-time traffic operations domain. This article proposes an Internet-based on-line control architecture that uses a Beowulf cluster as its computational backbone and provides an automated mechanism for real-time route guidance to drivers. To investigate this concept, the computationally intensive optimization modules are implemented on a low-cost 16-processor Beowulf cluster and a commercially available supercomputer, and the performance of these systems on representative computations is measured. The results highlight the effectiveness of the cluster in generating substantial computational performance scalability, and suggest that its performance is comparable to that of the more expensive supercomputer. [source]
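The route-guidance computation described above is naturally parallel across origin-destination queries, which is what makes a Beowulf cluster a good fit as the computational backbone. A minimal sketch of that decomposition (toy road network and function names are hypothetical, not the article's implementation; worker processes stand in for cluster nodes):

```python
# Hypothetical sketch: farm independent shortest-path queries for many
# origin-destination pairs out to worker processes, in the spirit of a
# Beowulf-style master-worker decomposition.
import heapq
from multiprocessing import Pool

GRAPH = {  # toy road network: node -> {neighbor: travel time}
    "A": {"B": 2, "C": 5},
    "B": {"C": 1, "D": 4},
    "C": {"D": 1},
    "D": {},
}

def shortest_time(pair):
    """Dijkstra's algorithm for one origin-destination pair."""
    origin, dest = pair
    dist = {origin: 0}
    heap = [(0, origin)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dest:
            return pair, d
        if d > dist.get(node, float("inf")):
            continue
        for nbr, w in GRAPH[node].items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return pair, float("inf")

if __name__ == "__main__":
    pairs = [("A", "D"), ("B", "D"), ("A", "C")]
    with Pool(processes=2) as pool:  # workers stand in for cluster nodes
        results = dict(pool.map(shortest_time, pairs))
    print(results)
```

Because each query is independent, adding nodes to the cluster increases throughput without inter-worker communication, which is consistent with the scalability the article reports.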


Parallel heterogeneous CBIR system for efficient hyperspectral image retrieval using spectral mixture analysis

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 9 2010
Antonio J. Plaza
Abstract The purpose of content-based image retrieval (CBIR) is to retrieve, from real data stored in a database, information that is relevant to a query. In remote sensing applications, the wealth of spectral information provided by latest-generation (hyperspectral) instruments has quickly introduced the need for parallel CBIR systems able to effectively retrieve features of interest from ever-growing data archives. To address this need, this paper develops a new parallel CBIR system that has been specifically designed to be run on heterogeneous networks of computers (HNOCs). These platforms have quickly become a standard computing architecture in remote sensing missions due to the distributed nature of data repositories. The proposed heterogeneous system first extracts an image feature vector able to characterize image content with sub-pixel precision using spectral mixture analysis concepts, and then uses the obtained feature as a search reference. The system is validated using a complex hyperspectral image database, and implemented on several networks of workstations and a Beowulf cluster at NASA's Goddard Space Flight Center. Our experimental results indicate that the proposed parallel system can efficiently retrieve hyperspectral images from complex image databases by adapting to the underlying parallel platform on which it is run, regardless of the heterogeneity in the compute nodes and communication links that form such a platform. Copyright © 2009 John Wiley & Sons, Ltd. [source]
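The core idea of spectral mixture analysis is that each pixel spectrum is modeled as a combination of pure "endmember" spectra, and the estimated abundance vector serves as a sub-pixel content descriptor. A minimal sketch, assuming unconstrained least-squares unmixing and Euclidean nearest-neighbor search (a simplification, not the paper's heterogeneous implementation):

```python
# Minimal sketch: characterize content with sub-pixel precision by
# least-squares spectral unmixing, then use the abundance vector as a
# CBIR search feature. Endmember values below are hypothetical.
import numpy as np

def unmix(pixel, endmembers):
    """Estimate endmember abundances for one pixel spectrum.
    endmembers: (n_bands, n_endmembers) matrix of pure spectra."""
    abund, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    return abund

def retrieve(query_feature, db_features):
    """Index of the database entry whose feature is closest to the query."""
    dists = np.linalg.norm(db_features - query_feature, axis=1)
    return int(np.argmin(dists))

# Two hypothetical endmembers measured over four spectral bands
E = np.array([[1.0, 0.0], [0.8, 0.2], [0.5, 0.5], [0.1, 0.9]])
pixel = E @ np.array([0.7, 0.3])  # a pixel that is a 70/30 mixture
print(unmix(pixel, E))            # recovers ~[0.7, 0.3]
```

A production system would typically add sum-to-one and nonnegativity constraints on the abundances; the unconstrained version above is only meant to show the feature-extraction and retrieval pipeline.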


Parallel processing of remotely sensed hyperspectral imagery: full-pixel versus mixed-pixel classification

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 13 2008
Antonio J. Plaza
Abstract The rapid development of space and computer technologies allows for the possibility to store huge amounts of remotely sensed image data, collected using airborne and satellite instruments. In particular, NASA is continuously gathering high-dimensional image data with Earth observing hyperspectral sensors such as the Jet Propulsion Laboratory's airborne visible/infrared imaging spectrometer (AVIRIS), which measures reflected radiation in hundreds of narrow spectral bands at different wavelength channels for the same area on the surface of the Earth. The development of fast techniques for transforming massive amounts of hyperspectral data into scientific understanding is critical for space-based Earth science and planetary exploration. Despite the growing interest in hyperspectral imaging research, only a few efforts have been devoted to the design of parallel implementations in the literature, and detailed comparisons of standardized parallel hyperspectral algorithms are currently unavailable. This paper compares several existing and new parallel processing techniques for pure and mixed-pixel classification in hyperspectral imagery. The distinction of pure versus mixed-pixel analysis is linked to the considered application domain, and results from the very rich spectral information available from hyperspectral instruments. In some cases, such information allows image analysts to overcome the constraints imposed by limited spatial resolution. In most cases, however, the spectral bands collected by hyperspectral instruments have high statistical correlation, and efficient parallel techniques are required to reduce the dimensionality of the data while retaining the spectral information that allows for the separation of the classes. In order to address this issue, this paper also develops a new parallel feature extraction algorithm that integrates the spatial and spectral information.
The proposed technique is evaluated (from the viewpoint of both classification accuracy and parallel performance) and compared with other parallel techniques for dimensionality reduction and classification in the context of three representative application case studies: urban characterization, land-cover classification in agriculture, and mapping of geological features, using AVIRIS data sets with detailed ground truth. Parallel performance is assessed using Thunderhead, a massively parallel Beowulf cluster at NASA's Goddard Space Flight Center. The detailed cross-validation of parallel algorithms conducted in this work may specifically help image analysts in the selection of parallel algorithms for specific applications. Copyright © 2008 John Wiley & Sons, Ltd. [source]
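The dimensionality-reduction step the abstract motivates exploits exactly the high statistical correlation among spectral bands: a handful of components can retain almost all of the variance. A minimal serial sketch using PCA via SVD (an illustrative standard choice, not the paper's spatial-spectral feature extraction algorithm; data shapes are assumed):

```python
# Illustrative sketch: reduce highly correlated spectral bands with PCA.
# Synthetic data mimics hyperspectral correlation: 50 bands driven by
# only 2 latent signals plus small noise.
import numpy as np

def pca_reduce(X, n_components):
    """X: (n_pixels, n_bands). Returns (n_pixels, n_components) scores."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered data gives the principal axes in Vt
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(0)
base = rng.normal(size=(100, 2))                       # two latent signals
X = base @ rng.normal(size=(2, 50)) + 0.01 * rng.normal(size=(100, 50))
Z = pca_reduce(X, 2)        # 50 correlated bands -> 2 informative scores
print(Z.shape)              # (100, 2)
```

In a data-parallel setting, each node would typically apply the same projection to its local block of pixels after the covariance (or SVD) step is computed collectively, which is one common way such reductions are parallelized.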


Retrieval of spectral and dynamic properties from two-dimensional infrared pump-probe experiments

JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 9 2008
Riccardo Chelli
Abstract We have developed a fitting algorithm able to extract spectral and dynamic properties of a three-level oscillator from a two-dimensional infrared spectrum (2D-IR) detected in time-resolved nonlinear experiments. Such properties range from the frequencies of the ground-to-first and first-to-second vibrational transitions (and hence the anharmonicity) to the frequency-fluctuation correlation function. The latter is represented through a general expression that allows one to approach the various modeling strategies proposed in the literature. The model is based on the Kubo picture of stochastic fluctuations of the transition frequency as a result of perturbations by a fluctuating surrounding. To account for the line-shape broadening due to pump pulse spectral width in double-resonance measurements, we supply the fitting algorithm with the option to perform the convolution of the spectral signal with a Lorentzian function in the pump-frequency dimension. The algorithm is tested here on 2D-IR pump-probe spectra of a Gly-Ala dipeptide recorded at various pump-probe delay times. Speedup benchmarks have been performed on a small Beowulf cluster. The program is written in Fortran for both serial and parallel architectures and is available free of charge to the interested reader. © 2008 Wiley Periodicals, Inc. J Comput Chem, 2008 [source]
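In the Kubo picture, an exponentially decaying frequency-fluctuation correlation function C(t) = Δ² exp(−t/τc) double-integrates to a closed-form lineshape function g(t) = Δ²τc²(exp(−t/τc) + t/τc − 1). A minimal numerical sketch of this textbook relation (parameter values are hypothetical; this is not the authors' fitting code):

```python
# Minimal sketch of the Kubo stochastic-modulation model: the lineshape
# function g(t) for an exponential frequency-fluctuation correlation
# function C(t) = delta**2 * exp(-t / tau_c). Parameters are hypothetical.
import numpy as np

def kubo_g(t, delta, tau_c):
    """Closed-form lineshape function for exponential C(t)."""
    x = t / tau_c
    return (delta * tau_c) ** 2 * (np.exp(-x) + x - 1.0)

t = np.linspace(0.0, 10.0, 2001)   # time axis in units of tau_c
delta, tau_c = 1.0, 1.0
g = kubo_g(t, delta, tau_c)
# Fast-modulation (motional-narrowing) limit: for t >> tau_c,
# g(t) -> delta**2 * tau_c * t, i.e. Lorentzian lines.
print(g[0], g[-1])
```

The linear absorption line shape then follows from the Fourier transform of exp(−g(t)); a fitting code like the one described would vary Δ and τc (and the transition frequencies) to match the measured 2D-IR signal.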


SnB version 2.2: an example of crystallographic multiprocessing

JOURNAL OF APPLIED CRYSTALLOGRAPHY, Issue 3 2002
Jason Rappleye
The computer program SnB implements a direct-methods algorithm, known as Shake-and-Bake, which optimizes trial structures consisting of randomly positioned atoms. Although large Shake-and-Bake applications require significant amounts of computing time, the algorithm can be easily implemented in parallel in order to decrease the real time required to achieve a solution. By using a master-worker model, SnB version 2.2 is amenable to all of the prevalent modern parallel-computing platforms, including (i) shared-memory multiprocessor machines, such as the SGI Origin2000, (ii) distributed-memory multiprocessor machines, such as the IBM SP, and (iii) collections of workstations, including Beowulf clusters. A linear speedup in the processing of a fixed number of trial structures can be obtained on each of these platforms. [source]
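The linear speedup follows from the trials being mutually independent: a master hands each randomly seeded trial to a free worker and collects the scores. A toy sketch of that master-worker pattern (a stand-in objective, not the Shake-and-Bake minimal function or the SnB code):

```python
# Toy master-worker sketch: independent trial "structures" are refined in
# parallel and the master keeps the best score. Because trials never
# communicate, throughput scales roughly linearly with worker count.
import random
from multiprocessing import Pool

def refine_trial(seed):
    """Stand-in for optimizing one randomly positioned trial structure:
    crude local descent on f(x) = x**2, deterministic for a given seed."""
    rng = random.Random(seed)
    x = rng.uniform(-10, 10)
    for _ in range(100):
        step = rng.uniform(-0.5, 0.5)
        if (x + step) ** 2 < x ** 2:   # keep only improving moves
            x += step
    return seed, x ** 2

if __name__ == "__main__":
    seeds = range(8)                    # fixed number of trials
    with Pool(processes=4) as pool:     # workers ~ cluster nodes
        results = pool.map(refine_trial, seeds)
    best_seed, best_score = min(results, key=lambda r: r[1])
    print(best_seed, round(best_score, 4))
```

The same pattern maps onto all three platform classes the abstract lists; only the transport (threads, MPI ranks, or socket connections between workstations) changes.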