Data Center (data + center)


Selected Abstracts


An analysis of P times reported in the Reviewed Event Bulletin for Chinese underground explosions

GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 2 2005
A. Douglas
SUMMARY Analysis of variance is used to estimate the measurement error and path effects in the P times reported in the Reviewed Event Bulletins (REBs, produced by the provisional International Data Center, Arlington, USA) and in times we have read, for explosions at the Chinese Test Site. Path effects are those differences between traveltimes calculated from tables and the true times that result in epicentre error. The main conclusions of the study are: (1) the estimated variance of the measurement error for P times reported in the REB at large signal-to-noise ratio (SNR) is 0.04 s², the bulk of the readings being analyst-adjusted automatic detections, whereas for our times the variance is 0.01 s²; and (2) the standard deviation of the path effects for both sets of observations is about 0.6 s. The study shows that measurement error is about twice (~0.2 s rather than ~0.1 s) and path effects about half the values assumed for the REB times. However, uncertainties in the estimated epicentres are poorly described by treating path effects as a random variable with a normal distribution. Only by estimating path effects and using these to correct onset times can reliable estimates of epicentre uncertainty be obtained. There is currently an international programme to do just this. The results imply that, with P times from three or four stations with good SNR (so that the measurement error is around 0.1 s) and well distributed in azimuth, and with correction for path effects, the area of the 90 per cent coverage ellipse should be much less than 1000 km², the area allowed for an on-site inspection under the Comprehensive Test Ban Treaty, and should cover the true epicentre with the given probability. [source]
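To make the kind of variance decomposition described above concrete, the following Python sketch runs a balanced one-way random-effects analysis of variance on synthetic P-time residuals (station path effect plus reading error). The station counts, noise levels and balanced layout are assumptions for illustration, not the paper's data, which involve unbalanced readings and SNR-dependent errors.

```python
import numpy as np

# Hypothetical illustration (not the paper's data): traveltime residuals
# r[i, j] = observed P time - tabulated time, for station i and explosion j.
# Model: r[i, j] = path_effect[i] + measurement_error[i, j].
rng = np.random.default_rng(0)
n_stations, n_events = 20, 8
path = rng.normal(0.0, 0.6, size=(n_stations, 1))           # ~0.6 s path-effect s.d. (as in the abstract)
noise = rng.normal(0.0, 0.2, size=(n_stations, n_events))   # ~0.2 s reading error (REB-like)
r = path + noise

# Balanced one-way random-effects ANOVA (stations as the random factor).
grand = r.mean()
station_means = r.mean(axis=1, keepdims=True)
ss_between = n_events * np.sum((station_means - grand) ** 2)
ss_within = np.sum((r - station_means) ** 2)
ms_between = ss_between / (n_stations - 1)
ms_within = ss_within / (n_stations * (n_events - 1))

var_measurement = ms_within                                   # estimates the reading-error variance
var_path = max((ms_between - ms_within) / n_events, 0.0)      # estimates the path-effect variance

print(f"measurement-error variance ~ {var_measurement:.3f} s^2")
print(f"path-effect variance       ~ {var_path:.3f} s^2 (s.d. ~ {np.sqrt(var_path):.2f} s)")
```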


Accuracy assessment of the MODIS snow products

HYDROLOGICAL PROCESSES, Issue 12 2007
Dorothy K. Hall
Abstract A suite of Moderate-Resolution Imaging Spectroradiometer (MODIS) snow products at various spatial and temporal resolutions from the Terra satellite has been available since February 2000. Standard products include daily and 8-day composite 500 m resolution swath and tile products (which include fractional snow cover (FSC) and snow albedo), and 0·05° resolution products on a climate-modelling grid (CMG) (which also include FSC). These snow products (from Collection 4 (C4) reprocessing) are mature, most have been validated to varying degrees, and they are available to order through the National Snow and Ice Data Center. The overall absolute accuracy of the well-studied 500 m resolution swath (MOD10_L2) and daily tile (MOD10A1) products is ~93%, but varies by land-cover type and snow condition. The most frequent errors are due to snow/cloud discrimination problems; however, improvements in the MODIS cloud mask, an input product, have occurred in 'Collection 5' (C5) reprocessing. Detection of very thin snow (<1 cm thick) can also be problematic. Validation of MOD10_L2 and MOD10A1 applies to all higher-level products because the higher-level products are all created from these two products. The composited products may have larger errors due, in part, to errors propagated from the daily products. Recently, new products have been developed. A fractional snow cover algorithm for the 500 m resolution products was developed and is part of the C5 daily swath and tile products; a monthly CMG snow product at 0·05° resolution and a daily 0·25° resolution CMG snow product are also now available. Similar, but not identical, products are also produced from the MODIS on the Aqua satellite, launched in May 2002, but the accuracy of those products has not yet been assessed in detail. Published in 2007 by John Wiley & Sons, Ltd. [source]
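The headline ~93% figure is an overall absolute accuracy, i.e. the fraction of cases in which the MODIS snow flag agrees with a reference observation. The Python sketch below uses made-up arrays, not the MOD10_L2/MOD10A1 validation data, to show one plausible way such a figure, together with the commission and omission errors that separate snow/cloud confusion from missed thin snow, is computed.

```python
import numpy as np

# Hypothetical agreement check between a MODIS snow flag and a ground observation
# (e.g. station snow depth), one entry per station-day. Values are illustrative only.
modis_snow = np.array([1, 1, 0, 0, 1, 0, 1, 1, 0, 1], dtype=bool)   # 1 = snow mapped
ground_snow = np.array([1, 0, 0, 0, 1, 0, 1, 1, 1, 1], dtype=bool)  # 1 = snow observed

overall_accuracy = np.mean(modis_snow == ground_snow)   # the "overall absolute accuracy"

# Commission (snow mapped but not observed) and omission (snow observed but not mapped)
# errors correspond to the two error sources highlighted in the abstract.
commission = np.mean(modis_snow & ~ground_snow)
omission = np.mean(~modis_snow & ground_snow)

print(f"overall absolute accuracy: {overall_accuracy:.0%}")
print(f"commission error: {commission:.0%}, omission error: {omission:.0%}")
```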


A regional climate study of Central America using the MM5 modeling system: results and comparison to observations

INTERNATIONAL JOURNAL OF CLIMATOLOGY, Issue 15 2006
Jose L. Hernandez
Abstract The Mesoscale Model version 3.6 (MM5) regional modeling system has been applied to Central America and has been evaluated against National Oceanic and Atmospheric Administration/National Climatic Data Center (NOAA/NCDC) daily observations and the Global Precipitation Climatology Project (GPCP) precipitation data. We compare model results and observations for 1997 and evaluate various climate parameters (temperature, wind speed, precipitation and water vapor mixing ratio), emphasizing the differences within the context of the station-dependent geographical features and the land use (LU) categories. At 9 of the 16 analyzed stations the modeled temperature, wind speed and vapor mixing ratio are in agreement with observations, with average model-observation differences consistently lower than 25%. MM5 performs better at stations strongly impacted by monsoon systems, regions typified by low topography in coastal areas, and areas characterized by evergreen, broad-leaf and shrub land vegetation types. At four stations the model precipitation is about a factor of 3–5 higher than the observations, while the simulated wind is roughly twice what is observed. These stations include two inland stations characterized by croplands close to water bodies; one coastal station in El Salvador adjacent to a mountain-based cropland area; and one station at sea level. This suggests that the model does not adequately represent the influence of topographic features and water bodies close to these stations. In general, the model agrees reasonably well with measurements and therefore provides an acceptable description of regional climate. The simulations in this study use only two seasonal maps of land cover. The main model discrepancies are likely attributable to the actual annual cycle of land–atmosphere vapor and energy exchange, which has a temporal scale of days to weeks. These fluxes are impacted by surface moisture availability, albedo and thermal inertia parameters. Copyright © 2006 Royal Meteorological Society. [source]
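A station-by-station evaluation of the sort summarized above reduces to relative differences between modeled and observed means. The sketch below uses invented station names and numbers, not the NOAA/NCDC or GPCP records, to illustrate flagging stations whose model-observation difference exceeds the 25% level or whose precipitation ratio approaches the 3–5x range mentioned in the abstract.

```python
import numpy as np

# Illustrative daily precipitation (mm) at hypothetical stations; not real data.
observed = {
    "station_A": np.array([5.0, 0.0, 12.0, 3.0]),
    "station_B": np.array([1.0, 2.0, 0.5, 4.0]),
    "station_C": np.array([10.0, 8.0, 6.0, 9.0]),
}
modeled = {
    "station_A": np.array([6.0, 0.5, 10.0, 2.5]),
    "station_B": np.array([3.5, 7.0, 2.0, 13.0]),   # strong wet bias, like the outlier stations
    "station_C": np.array([11.0, 7.0, 7.0, 10.0]),
}

for name in observed:
    mean_obs = observed[name].mean()
    mean_mod = modeled[name].mean()
    rel_diff = abs(mean_mod - mean_obs) / mean_obs      # relative difference of the means
    flag = "outlier" if rel_diff > 0.25 else "ok"       # 25% threshold from the abstract
    print(f"{name}: model/obs ratio = {mean_mod / mean_obs:.1f}, "
          f"relative difference = {rel_diff:.0%} ({flag})")
```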


Prediction of sea surface temperature from the global historical climatology network data

ENVIRONMETRICS, Issue 3 2004
Samuel S. P. Shen
Abstract This article describes a spatial prediction method that predicts the monthly sea surface temperature (SST) anomaly field from land-only data. The land data are from the Global Historical Climatology Network (GHCN). The prediction period is 1880–1999 and the prediction ocean domain extends from 60°S to 60°N with a spatial resolution of 5° × 5°. The prediction method is a regression over the basis of empirical orthogonal functions (EOFs). The EOFs are computed from the following data sets: (a) the Climate Prediction Center's optimally interpolated sea surface temperature (OI/SST) data (1982–1999); (b) the National Climatic Data Center's blended product of land-surface air temperature (1992–1999) produced from combining the Special Satellite Microwave Imager and GHCN; and (c) the National Centers for Environmental Prediction/National Center for Atmospheric Research Reanalysis data (1982–1999). The optimal prediction method minimizes the first-M-mode mean square error between the true and predicted anomalies over both land and ocean. In the optimization process, the data errors of the GHCN boxes are used, and their contribution to the prediction error is taken into account. The area-averaged root mean square error of prediction is calculated. Numerical experiments demonstrate that this EOF prediction method can accurately recover the global SST anomalies during some circulation patterns and add value to the SST bias correction in the early history of SST observations and to the validation of general circulation models. Our results show that (i) the land-only data can accurately predict the SST anomaly in El Niño months, when the temperature anomaly structure has very large correlation scales, and (ii) the predictions for La Niña, neutral, or transient months require more EOF modes because of the presence of small-scale structures in the anomaly field. Copyright © 2004 John Wiley & Sons, Ltd. [source]
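The core of the method, regressing the anomaly field onto leading EOFs fitted from land boxes only, can be sketched in a few lines of linear algebra. The Python example below uses synthetic fields rather than the GHCN and OI/SST data, retains M = 5 modes, and omits the paper's weighting by GHCN box data errors; it is a minimal sketch of the idea, not the authors' implementation.

```python
import numpy as np

# Synthetic training anomalies on a combined land+ocean grid (rows = grid boxes,
# columns = months); illustrative only.
rng = np.random.default_rng(1)
n_time, n_land, n_ocean = 200, 50, 150
patterns = rng.normal(size=(n_land + n_ocean, 5))            # 5 underlying spatial patterns
amps = rng.normal(size=(5, n_time))
training = patterns @ amps + 0.1 * rng.normal(size=(n_land + n_ocean, n_time))

# EOFs of the training anomalies via SVD (columns of U are the spatial EOFs).
U, s, Vt = np.linalg.svd(training, full_matrices=False)
M = 5                                                        # number of retained modes
E = U[:, :M]
E_land, E_ocean = E[:n_land], E[n_land:]

# Prediction month: only land anomalies are observed.
true_amps = rng.normal(size=M)
land_obs = E_land @ true_amps + 0.05 * rng.normal(size=n_land)

# Least-squares fit of the mode amplitudes from land boxes, then reconstruct over the ocean.
a_hat, *_ = np.linalg.lstsq(E_land, land_obs, rcond=None)
sst_pred = E_ocean @ a_hat
sst_true = E_ocean @ true_amps
rmse = np.sqrt(np.mean((sst_pred - sst_true) ** 2))
print(f"area-mean RMSE of the ocean prediction: {rmse:.3f}")
```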


Concepts for computer center power management

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 2 2010
A. DiRienzo
Abstract Electrical power usage contributes significantly to the operational costs of large computer systems. At the Hypersonic Missile Technology Research and Operations Center (HMT-ROC) our system usage patterns provide a significant opportunity to reduce operating costs, since there are a small number of dedicated users. The relatively predictable nature of our usage patterns allows for the scheduling of computational resource availability, and we take advantage of this predictability to shut down systems during periods of low usage to reduce power consumption. With interconnected computer cluster systems, reducing the number of online nodes is more than a simple matter of throwing the power switch on a portion of the cluster. The paper discusses these issues and presents power reduction strategies for a computational system with a heterogeneous system mix that includes a large (1560-node) Apple Xserve PowerPC supercluster. In practice, the average load on computer systems may be much less than the peak load, although the infrastructure supporting the operation of large computer systems in a computer or data center must still be designed with peak loads in mind. Given that system loads can be well below peak for a significant portion of the time, an opportunity exists for cost savings if idle systems can be dynamically throttled back, slept, or shut off entirely. The paper describes two separate strategies that meet the requirements for both power conservation and system availability at HMT-ROC. The first approach, for legacy systems, is little more than a brute-force approach to power management, which we call Time-Driven System Management (TDSM). The second approach, which we call Dynamic-Loading System Management (DLSM), applies to more current systems with 'Wake-on-LAN' capability and takes a more granular approach to the management of system resources. The paper details the rule sets that we have developed and implemented in the two approaches to system power management and discusses results obtained with these approaches. Copyright © 2009 John Wiley & Sons, Ltd. [source]
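For readers unfamiliar with the mechanics, the Python sketch below illustrates the two ingredients such strategies rely on: a time-driven schedule check and the standard 'Wake-on-LAN' magic packet used to bring a sleeping node back online. The schedule window, MAC address, and rule are assumptions for illustration, not HMT-ROC's actual TDSM or DLSM rule sets.

```python
import socket
from datetime import datetime

def in_scheduled_window(now: datetime, start_hour: int = 7, end_hour: int = 19) -> bool:
    """Time-driven rule: keep nodes online only during an assumed working-hours window."""
    return start_hour <= now.hour < end_hour

def send_wake_on_lan(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send a standard WoL magic packet: 6 x 0xFF followed by the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

if __name__ == "__main__":
    if in_scheduled_window(datetime.now()):
        send_wake_on_lan("00:11:22:33:44:55")   # hypothetical node MAC address
    # Outside the window, a real controller would drain jobs and power nodes down.
```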


Automated application component placement in data centers using mathematical programming

INTERNATIONAL JOURNAL OF NETWORK MANAGEMENT, Issue 6 2008
Xiaoyun Zhu
In this article we address the application component placement (ACP) problem for a data center. The problem is defined as follows: for a given topology of a network consisting of switches, servers and storage devices with varying capabilities, and for a given specification of a component-based distributed application, decide which physical server should be assigned to each application component, such that the application's processing, communication and storage requirements are satisfied without creating bottlenecks in the infrastructure, and that scarce resources are used most efficiently. We explain how the ACP problem differs from traditional task assignment in distributed systems, or existing grid scheduling problems. We describe our approach of formalizing this problem using a mathematical optimization framework and further formulating it as a mixed integer program (MIP). We then present our ACP solver using GAMS and CPLEX to automate the decision-making process. The solver was numerically tested on a number of examples, ranging from a 125-server real data center to a set of hypothetical data centers with increasing size. In all cases the ACP solver found an optimal solution within a reasonably short time. In a numerical simulation comparing our solver to a random selection algorithm, our solver resulted in much more efficient use of scarce network resources and allowed more applications to be placed in the same infrastructure. Copyright © 2008 John Wiley & Sons, Ltd. [source]
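To give a flavour of the MIP formulation, here is a deliberately tiny placement model written with the PuLP modeling library (an assumption for illustration; the authors formulate a much richer model in GAMS and solve it with CPLEX, including network topology and communication constraints). It assigns each component to exactly one server subject to CPU and storage capacities and minimizes the number of servers used as a stand-in for efficient use of scarce resources.

```python
import pulp

# Toy component demands and server capacities (cpu units, storage units); illustrative only.
components = {"web": (2, 10), "app": (4, 20), "db": (4, 80)}
servers = {"s1": (8, 100), "s2": (8, 100), "s3": (4, 50)}

prob = pulp.LpProblem("component_placement", pulp.LpMinimize)
x = pulp.LpVariable.dicts("assign", (components, servers), cat="Binary")   # x[c][s] = 1 if c on s
used = pulp.LpVariable.dicts("used", servers, cat="Binary")

# Objective: use as few servers as possible.
prob += pulp.lpSum(used[s] for s in servers)

for c in components:                                    # each component placed exactly once
    prob += pulp.lpSum(x[c][s] for s in servers) == 1
for s, (cpu_cap, sto_cap) in servers.items():           # capacity constraints per server
    prob += pulp.lpSum(components[c][0] * x[c][s] for c in components) <= cpu_cap * used[s]
    prob += pulp.lpSum(components[c][1] * x[c][s] for c in components) <= sto_cap * used[s]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for c in components:
    placed = next(s for s in servers if pulp.value(x[c][s]) > 0.5)
    print(f"{c} -> {placed}")
```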


Enhanced energy efficiency and reliability of telecommunication equipment with the introduction of novel air cooled thermal architectures

BELL LABS TECHNICAL JOURNAL, Issue 2 2010
Domhnaill Hernon
In the past, thermal management was an afterthought in product design because heat dissipation loads and densities were minute and did not adversely affect component reliability. Historically, the sole purpose of thermal management was to ensure component operation below a critical temperature, thereby providing reliable equipment operation for a given time period. This mindset has evolved in recent years, however, given current economic and energy concerns. Concern over climate change driven by greenhouse gas emissions, increasing fuel and electricity costs, and a general trend towards energy-efficiency awareness have promoted thermal management to the forefront of "green" innovation within the information and communications technology (ICT) sector. Considering that up to 50 percent of the energy budget of a data center is spent on cooling equipment and that two percent of the United States' annual electricity is consumed by telecommunications equipment, it becomes obvious that thermal management has a key role to play in the development of eco-sustainable solutions. This paper provides an overview of the importance of thermal management for reliable component operation and highlights the research areas where improved energy efficiency can be achieved. Novel air-cooled thermal solutions demonstrating significant energy savings and improved reliability over existing technology are presented, including three-dimensional (3D) monolithic heat sinks and vortex generators. © 2010 Alcatel-Lucent. [source]
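A back-of-the-envelope calculation shows why the cooling share of the energy budget matters; the load, electricity price, and savings figures below are assumed for illustration and are not taken from the paper.

```python
# Back-of-the-envelope arithmetic (assumed numbers) showing why "up to 50 percent of a
# data center's energy budget goes to cooling" makes thermal design a first-order cost lever.
it_load_kw = 500.0           # hypothetical IT equipment load
cooling_fraction = 0.50      # cooling share of the total energy budget (upper bound cited)
electricity_price = 0.10     # assumed price, $/kWh

# If cooling is 50% of the budget (the rest being IT load), cooling power roughly equals IT power.
cooling_kw = it_load_kw * cooling_fraction / (1.0 - cooling_fraction)
annual_cooling_kwh = cooling_kw * 24 * 365
annual_cooling_cost = annual_cooling_kwh * electricity_price

print(f"cooling load: {cooling_kw:.0f} kW")
print(f"annual cooling energy: {annual_cooling_kwh:,.0f} kWh "
      f"(~${annual_cooling_cost:,.0f}/yr at ${electricity_price}/kWh)")

# A thermal solution that cuts cooling power by 20% would then save:
print(f"a 20% cooling-efficiency gain saves ~${0.20 * annual_cooling_cost:,.0f}/yr")
```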


CyberCarrier service and network management

BELL LABS TECHNICAL JOURNAL, Issue 4 2000
Michael R. Brenner
This paper presents an overview of the service and network management architecture of Lucent Technologies' CyberCarrier Solution. Businesses of all sizes and from all sectors are choosing to outsource large portions of their information technology (IT) operations to Internet-based data centers that host application service providers (ASPs). Many network service providers (NSPs) have decided to become CyberCarrier service providers (CCSPs), that is, they have decided to expand their businesses to include ASP data center hosting services. Managing these new ASP data center hosting services is one of the most urgent challenges encountered by a CCSP, and solving it is arguably critical to a CCSP's long-term success. Although introducing ASP data center hosting services increases and diversifies a CCSP's revenue, it also significantly complicates the CCSP's management processes. This paper defines an abstract management functional architecture that divides the CCSP management problem into tractable pieces and addresses each of them. It then explains how the CyberCarrier Solution maps onto that functional architecture. Finally, it explores how Lucent will evolve its CyberCarrier Solution through future management system innovations. [source]