Performance Models


Selected Abstracts


Incorporating Maintenance Effectiveness in the Estimation of Dynamic Infrastructure Performance Models

COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 3 2008
Chih-Yuan Chu
Specifically, we consider state-space specifications of autoregressive moving average with exogenous inputs (ARMAX) models to develop deterioration and inspection models for infrastructure facilities, and intervention analysis to estimate the transitory and permanent effects of maintenance, for example, performance jumps or deterioration rate changes. To illustrate the methodology, we analyze the effectiveness of an overlay on a flexible pavement section from the AASHO Road Test. The results show that the overlay improves both surface distress, that is, rutting and slope variance, and the pavement's underlying serviceability. The results also provide evidence that the overlay changes the pavement's response to traffic, that is, it reduces the rate at which traffic damages the pavement. [source]
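A minimal sketch of the intervention-analysis idea described above, assuming synthetic rutting data and the SARIMAX state-space class from statsmodels; the overlay is encoded as hypothetical step (permanent) and pulse (transitory) exogenous inputs, and this is not the paper's exact specification:

```python
# Illustrative only: an ARMAX-type model in state-space form with intervention
# variables for a maintenance action. Data, overlay time, and coefficients are
# synthetic; the paper's actual model specification is not reproduced here.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
n, t0 = 120, 60                              # observations and overlay time (hypothetical)
traffic = rng.uniform(0.8, 1.2, n)           # exogenous traffic loading (synthetic)
step = (np.arange(n) >= t0).astype(float)    # permanent effect after the overlay
pulse = (np.arange(n) == t0).astype(float)   # transitory effect at the overlay

# Synthetic rut-depth series: traffic-driven deterioration whose rate is reduced
# and whose level drops after the overlay.
rut = np.cumsum(0.05 * traffic * (1 - 0.5 * step)) - 0.4 * step + rng.normal(0, 0.05, n)

exog = np.column_stack([traffic, step, pulse])
fit = SARIMAX(rut, exog=exog, order=(1, 1, 1)).fit(disp=False)
print(fit.params)   # the step/pulse coefficients estimate permanent vs. transitory effects
```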


Grouping Pavement Condition Variables for Performance Modeling Using Self-Organizing Maps

COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 2 2001
Nii O. Attoh-Okine
Different modeling techniques have been employed for the evaluation of pavement performance, the determination of structural capacity, and performance prediction. The evaluation of performance involves the functional analysis of pavements based on the history of riding quality. Riding comfort and pavement performance can be conveniently defined in terms of roughness and pavement distresses, and different models have therefore been developed that relate roughness to distresses in order to predict pavement performance. These models tend to be too complex; parsimonious equations involving fewer variables are needed. Artificial neural networks have been used successfully in the development of performance-prediction models. This article demonstrates the use of self-organizing maps, an artificial neural network technique, for grouping pavement condition variables when developing pavement performance models that evaluate pavement conditions on the basis of pavement distresses. [source]
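As a rough illustration of the grouping step, the following numpy-only sketch trains a small self-organizing map on hypothetical condition variables (the variable names, data, and SOM settings are invented, not taken from the article); variables with similar profiles across sections map to nearby nodes:

```python
# Toy SOM for grouping pavement condition variables by profile similarity.
import numpy as np

rng = np.random.default_rng(1)
variables = ["roughness", "rutting", "alligator_crack", "transverse_crack", "ravelling"]
sections = 200
# Synthetic condition data: rows = pavement sections, columns = condition variables.
data = rng.normal(size=(sections, len(variables)))
data[:, 1] += 0.8 * data[:, 0]            # make rutting correlate with roughness
data[:, 3] += 0.8 * data[:, 2]            # make the two cracking measures correlate

# Group variables: each SOM input is one variable's standardized profile.
X = ((data - data.mean(0)) / data.std(0)).T      # shape: (n_variables, n_sections)

grid_w, grid_h, iters = 3, 3, 2000
weights = rng.normal(size=(grid_w * grid_h, X.shape[1]))
coords = np.array([(i, j) for i in range(grid_w) for j in range(grid_h)], float)

for t in range(iters):
    lr = 0.5 * (1 - t / iters)            # decaying learning rate
    sigma = 1.5 * (1 - t / iters) + 0.3   # decaying neighbourhood radius
    x = X[rng.integers(len(X))]
    bmu = np.argmin(((weights - x) ** 2).sum(1))          # best-matching unit
    d2 = ((coords - coords[bmu]) ** 2).sum(1)
    h = np.exp(-d2 / (2 * sigma ** 2))                    # neighbourhood function
    weights += lr * h[:, None] * (x - weights)

for name, x in zip(variables, X):
    bmu = int(np.argmin(((weights - x) ** 2).sum(1)))
    node = (int(coords[bmu][0]), int(coords[bmu][1]))
    print(f"{name:18s} -> SOM node {node}")
```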


Optimizing process allocation of parallel programs for heterogeneous clusters

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 4 2009
Shuichi Ichikawa
Abstract The performance of a conventional parallel application is often degraded by load imbalance on heterogeneous clusters. Although it is simple to invoke multiple processes on fast processing elements to alleviate load imbalance, the optimal process allocation is not obvious. Kishimoto and Ichikawa presented performance models for High-Performance Linpack (HPL), with which sub-optimal configurations of heterogeneous clusters were estimated. Their results on HPL are encouraging, but their approach has not yet been verified with other applications. This study presents several enhancements of Kishimoto's scheme, which are evaluated with four typical scientific applications: computational fluid dynamics (CFD), the finite-element method (FEM), HPL (a linear algebraic system solver), and the fast Fourier transform (FFT). According to our experiments, our new models (NP-T models) are superior to Kishimoto's models, particularly when the non-negative least squares method is used for parameter extraction. The average errors of the derived models were 0.2% for the CFD benchmark, 2% for the FEM benchmark, 1% for HPL, and 28% for the FFT benchmark. This study also emphasizes the importance of predictability in clusters, listing practical examples derived from our study. Copyright © 2008 John Wiley & Sons, Ltd. [source]
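The parameter-extraction step mentioned above can be illustrated with non-negative least squares; the basis terms, measurements, and process counts below are invented placeholders rather than the authors' NP-T models:

```python
# Hedged sketch: fit a simple execution-time model with non-negative least
# squares (scipy.optimize.nnls), then evaluate it for a candidate allocation.
import numpy as np
from scipy.optimize import nnls

# Hypothetical measurements: problem size N, process count p, measured time T (s).
N = np.array([2000, 2000, 4000, 4000, 8000, 8000], float)
p = np.array([4, 8, 4, 8, 8, 16], float)
T = np.array([11.0, 6.5, 80.0, 44.0, 340.0, 190.0])

# Candidate cost terms: O(N^3/p) compute, O(N^2/sqrt(p)) communication, constant overhead.
A = np.column_stack([N**3 / p, N**2 / np.sqrt(p), np.ones_like(N)])
coef, residual = nnls(A, T)            # coefficients constrained to be >= 0
print("coefficients:", coef, "residual norm:", residual)

# The fitted model can then be evaluated for candidate process allocations
# to pick the configuration with the lowest predicted time.
pred = np.array([8000.0**3 / 32, 8000.0**2 / np.sqrt(32), 1.0]) @ coef
print("predicted time for N=8000, p=32:", pred)
```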


Parallel space-filling curve generation through sorting

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 10 2007
J. Luitjens
Abstract In this paper we consider the scalability of parallel space-filling curve generation as implemented through parallel sorting algorithms. Multiple sorting algorithms are studied, and results show that space-filling curves can be generated quickly in parallel on thousands of processors. In addition, performance models are presented that are consistent with measured performance and offer insight into performance on still larger numbers of processors. At large processor counts, the scalability of adaptive mesh refinement codes depends on the individual components of the adaptive solver. One such component is the dynamic load balancer. In adaptive mesh refinement codes the mesh is constantly changing, resulting in load imbalance among the processors and requiring a load-balancing phase. Load balancing may occur often, so the load balancer must run quickly. One common method for dynamic load balancing is to use space-filling curves. Space-filling curves, in particular the Hilbert curve, generate good partitions quickly in serial. However, at tens and hundreds of thousands of processors, serial generation of space-filling curves will hinder scalability. In order to avoid this issue we have developed a method that generates space-filling curves quickly in parallel by reducing the generation to integer sorting. Copyright © 2007 John Wiley & Sons, Ltd. [source]
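A toy sketch of the reduce-to-sorting idea: compute an integer space-filling-curve key per mesh patch, then sort the keys. For brevity it uses Morton (Z-order) keys instead of the Hilbert keys discussed above, and a serial sort stands in for the distributed integer sort; the patch coordinates are synthetic:

```python
# Generating a space-filling-curve ordering by integer sorting (Morton keys).
import numpy as np

def morton_key_2d(ix, iy, bits=16):
    """Interleave the bits of non-negative integers ix, iy into one Z-order key."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (2 * b)
        key |= ((iy >> b) & 1) << (2 * b + 1)
    return key

rng = np.random.default_rng(2)
patches = [(int(x), int(y)) for x, y in rng.integers(0, 2**16, size=(10, 2))]

keys = [morton_key_2d(x, y) for x, y in patches]
order = sorted(range(len(patches)), key=lambda i: keys[i])  # stands in for a parallel integer sort
print([patches[i] for i in order])                          # patches in curve order
```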


Performance of computationally intensive parameter sweep applications on Internet-based Grids of computers: the mapping of molecular potential energy hypersurfaces

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 4 2007
S. Reyes
Abstract This work focuses on the use of computational Grids for processing the large set of jobs arising in parameter sweep applications. In particular, we tackle the mapping of molecular potential energy hypersurfaces. For computationally intensive parameter sweep problems, performance models are developed to compare parallel computation in a multiprocessor system with computation on an Internet-based Grid of computers. We find that the relative performance of the Grid approach increases with the number of processors and is independent of the number of jobs. The experimental data, obtained using electronic structure calculations, fit the proposed performance expressions accurately. To automate the mapping of potential energy hypersurfaces, an application based on GRID superscalar is developed. It is tested on the prototypical case of the internal dynamics of acetone. Copyright © 2006 John Wiley & Sons, Ltd. [source]
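A hedged, illustrative makespan model for a sweep of independent jobs on a dedicated multiprocessor versus an Internet-based Grid with per-job overhead; the overheads, slowdown factor, and formulas are assumptions for illustration and are not the performance expressions derived in the paper:

```python
# Toy comparison of parameter-sweep makespans on two platforms.
import math

def makespan_multiprocessor(n_jobs, n_procs, t_job):
    # identical, independent jobs on homogeneous dedicated processors
    return math.ceil(n_jobs / n_procs) * t_job

def makespan_grid(n_jobs, n_workers, t_job, overhead, slowdown=1.2):
    # each Grid job pays a fixed overhead (scheduling, file staging) and an
    # assumed slowdown factor for less powerful, non-dedicated workers
    return math.ceil(n_jobs / n_workers) * (slowdown * t_job + overhead)

for procs in (16, 64, 256):
    t_mp = makespan_multiprocessor(n_jobs=10_000, n_procs=procs, t_job=300.0)
    t_grid = makespan_grid(n_jobs=10_000, n_workers=procs, t_job=300.0, overhead=30.0)
    print(f"{procs:4d} processors: multiprocessor/Grid makespan ratio = {t_mp / t_grid:.2f}")
```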


A performance comparison between the Earth Simulator and other terascale systems on a characteristic ASCI workload

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 10 2005
Darren J. Kerbyson
Abstract This work gives a detailed analysis of the relative performance of the recently installed Earth Simulator and the next top four systems in the Top500 list using predictive performance models. The Earth Simulator uses vector processing nodes interconnected by a single-stage crossbar network, whereas the next top four systems are built from commodity superscalar microprocessors and interconnection networks. The performance that can be achieved results from an interplay of system characteristics, application requirements, and scalability behavior. Detailed performance models are used here to predict the performance of two codes representative of the ASCI workload, namely SAGE and Sweep3D. The performance models fully encapsulate the behavior of these codes and have previously been validated on many large-scale systems. One result of this analysis is the sizing of systems, built from the same nodes and networks as those in the top five, that would match the performance of the Earth Simulator. In particular, the largest ASCI machine, ASCI Q, is expected to achieve a performance similar to the Earth Simulator on the representative workload. Published in 2005 by John Wiley & Sons, Ltd. [source]
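Such studies typically build analytic models of the form runtime = computation + communication, parameterized by node speed, network latency, and bandwidth. The sketch below is generic and uses placeholder numbers; it is not the validated SAGE or Sweep3D model, nor measured parameters of the Earth Simulator or ASCI Q:

```python
# Generic analytic performance model: plug in system parameters to compare machines.
def predicted_time(work_flops, node_flops, msgs, msg_bytes, latency, bandwidth, procs):
    compute = work_flops / (procs * node_flops)          # perfectly divisible compute
    comm = msgs * (latency + msg_bytes / bandwidth)      # per-message cost over the run
    return compute + comm

# Placeholder system characteristics (illustrative only).
systems = {
    "vector-like system":     dict(node_flops=8e9, latency=5e-6,  bandwidth=12e9),
    "commodity-like cluster": dict(node_flops=2e9, latency=20e-6, bandwidth=0.3e9),
}
for name, s in systems.items():
    t = predicted_time(work_flops=1e15, msgs=2000, msg_bytes=1e6, procs=2048, **s)
    print(f"{name:24s} predicted time: {t:.1f} s")
```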


Towards a methodology for the characterization of fire resistive materials with respect to thermal performance models

FIRE AND MATERIALS, Issue 4 2006
Dale P. Bentz
Abstract A methodology is proposed for the characterization of fire resistive materials with respect to thermal performance models. Typically in these models, materials are characterized by their densities, heat capacities, thermal conductivities, and any enthalpies (of reaction or phase changes). For true performance modelling, these thermophysical properties need to be determined as a function of temperature over a wide range, from room temperature to over 1000°C. Here, a combined experimental/theoretical/modelling approach is proposed for providing these critical input parameters. In particular, the relationship between the three-dimensional microstructure of the fire resistive materials and their thermal conductivities is highlighted. Published in 2005 by John Wiley & Sons, Ltd. [source]
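To see why temperature-dependent properties are the critical inputs, consider a simplified 1-D transient conduction model of a fire-resistive layer with k(T) and cp(T) supplied as functions of temperature; the property curves, boundary conditions, and dimensions below are hypothetical placeholders, not measured data:

```python
# Simplified thermal performance model: explicit 1-D conduction with
# temperature-dependent conductivity k(T) and heat capacity cp(T).
import numpy as np

def k(T):  return 0.10 + 1.5e-4 * T        # W/(m.K), assumed to rise with temperature
def cp(T): return 900.0 + 0.4 * T          # J/(kg.K), assumed
rho = 300.0                                # kg/m^3, held constant for simplicity

L, nx = 0.025, 51                          # 25 mm thick layer, 51 grid nodes
dx = L / (nx - 1)
T = np.full(nx, 20.0)                      # initial temperature, deg C
dt, t_end = 0.01, 600.0                    # time step (s) and 10 min exposure

for step in range(int(t_end / dt)):
    T[0] = 20.0 + 800.0 * min(1.0, step * dt / 300.0)  # ramped fire-side temperature (assumed)
    alpha = k(T) / (rho * cp(T))                        # diffusivity evaluated at current T
    # simplified explicit update using the local diffusivity (gradient of k neglected)
    T[1:-1] += dt * alpha[1:-1] * (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    T[-1] = T[-2]                                       # insulated cold face (assumed)

print(f"cold-face temperature after {t_end / 60:.0f} min: {T[-1]:.1f} deg C")
```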