Metrics Used
Selected Abstracts

CLIMATE FORECASTS IN FLOOD PLANNING: PROMISE AND AMBIGUITY
JOURNAL OF THE AMERICAN WATER RESOURCES ASSOCIATION, Issue 6 2002
Kris Wernstedt

ABSTRACT: Recent technical and scientific advances have increased the potential use of long-term, seasonal climate forecasts for improving water resource management. This paper examines the role that forecasts, in particular those based on the El Niño–Southern Oscillation (ENSO) cycle, can play in flood planning in the Pacific Northwest. While strong evidence exists of an association between ENSO signals and flooding in the region, this association is open to more than one interpretation depending on: (a) the metric used to test the strength of the association; (b) the definition of critical flood events; (c) site-specific features of watersheds; and (d) the decision environment of flood management institutions. A better understanding and appreciation of such ambiguities, both social and statistical, will help facilitate the use of climate forecast information for flood planning and response. [source]

Optimization of Internet Protocol network design and routing
NETWORKS: AN INTERNATIONAL JOURNAL, Issue 1 2004
Kaj Holmberg

Abstract: We consider network design and routing for Internet Protocol (IP) traffic. The design problem concerns capacity dimensioning of communication links, where the design cost consists of fixed charges and linear capacity expansion costs. The optimization problem also concerns determining the amount of traffic demand to be carried by the network and the metric used by a shortest-path routing protocol. We present a novel linear mixed-integer mathematical formulation and two heuristic solution procedures. The first heuristic uses mixed-integer programming to generate a sequence of routing solutions. The second solution approach is a simulated annealing metaheuristic. Computational experiments for synthesized and real-life networks show that high-quality solutions can be obtained by both approaches. © 2003 Wiley Periodicals, Inc. [source]

Library analysis of SCHEMA-guided protein recombination
PROTEIN SCIENCE, Issue 8 2003
Michelle M. Meyer

Abstract: The computational algorithm SCHEMA was developed to estimate the disruption caused when amino acid residues that interact in the three-dimensional structure of a protein are inherited from different parents upon recombination. To evaluate how well SCHEMA predicts disruption, we have shuffled the distantly related β-lactamases PSE-4 and TEM-1 at 13 sites to create a library of 2^14 (16,384) chimeras and examined which ones retain lactamase function. Sequencing the genes from ampicillin-selected clones revealed that the percentage of functional clones decreased exponentially with increasing calculated disruption (E = the number of residue-residue contacts that are broken upon recombination). We also found that chimeras with low E have a higher probability of maintaining lactamase function than chimeras with the same effective level of mutation but chosen at random from the library. Thus, the simple distance metric used by SCHEMA to identify interactions and compute E allows one to predict which chimera sequences are most likely to retain their function. This approach can be used to evaluate crossover sites for recombination and to create highly mosaic, folded chimeras. [source]
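To make the disruption count concrete, here is a minimal sketch, assuming a precomputed contact list (residue pairs that are close in the parent structure) and a chimera encoded by which parent contributes each residue. The function name, the pair-preservation rule as written, and the toy data are illustrative assumptions, not the published SCHEMA implementation.

```python
def schema_disruption(contacts, parent_of, parents):
    """Count broken residue-residue contacts (E) for one chimera.

    contacts:  (i, j) residue index pairs that interact in the 3-D structure
               (e.g. heavy atoms within some distance cutoff).
    parent_of: parent_of[i] is the parent (0 or 1) contributing residue i.
    parents:   parents[p][i] is the amino acid at position i in parent p.
    """
    broken = 0
    for i, j in contacts:
        pair = (parents[parent_of[i]][i], parents[parent_of[j]][j])
        # A contact counts as broken only if the chimera's amino acid pair
        # at (i, j) occurs in neither parent sequence.
        if pair not in {(parents[p][i], parents[p][j]) for p in (0, 1)}:
            broken += 1
    return broken

# Toy example: length-6 parents, one crossover after residue 2, two contacts.
parents = ["ACDEFG", "AMDKYG"]
parent_of = [0, 0, 0, 1, 1, 1]
print(schema_disruption([(1, 4), (2, 3)], parent_of, parents))  # -> 1
```

In the toy example only the (1, 4) contact is broken: the chimera pairs C with Y, a combination found in neither parent, while the (2, 3) contact reproduces a pair already present in the second parent.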
The effects of aerosols on intense convective precipitation in the northeastern United States
THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 643 2009
Alexandros A. Ntelekos

Abstract: A fully coupled meteorology-chemistry-aerosol mesoscale model (WRF-Chem) is used to assess the effects of aerosols on intense convective precipitation over the northeastern United States. Numerical experiments are performed for three intense convective storm days and for two scenarios representing 'typical' and 'low' aerosol conditions. The results of the simulations suggest that increasing concentrations of aerosols can lead to either enhancement or suppression of precipitation. Quantification of the aerosol effect is sensitive to the metric used, owing to a shift in the rainfall accumulation distribution when realistic aerosol concentrations are included in the simulations. Maximum rainfall accumulation amounts and areas with rainfall accumulations exceeding specified thresholds provide robust metrics of the aerosol effect on convective precipitation. Storms developing over areas with medium to low aerosol concentrations showed a suppression effect on rainfall independent of the meteorological environment. Storms developing in areas of relatively high particulate concentrations showed enhancement of rainfall when there were simultaneous high values of convective available potential energy, relative humidity and wind shear. In these cases, elevated aerosol concentrations resulted in stronger updraughts and downdraughts and more coherent organization of convection. For the extreme case, maximum rainfall accumulation differences exceeded 40 mm. The modelling results suggest that areas of the northeastern US urban corridor that are close to, or downwind of, intense sources of aerosols could be more favourable for rainfall enhancement due to aerosols, given the aerosol concentrations typical of this area. Copyright © 2009 Royal Meteorological Society [source]

The characteristics of Hessian singular vectors using an advanced data assimilation scheme
THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 642 2009
A. R. Lawrence

Abstract: Initial condition uncertainty is a significant source of forecast error in numerical weather prediction. Singular vectors of the tangent-linear propagator can identify directions in phase space where initial errors are likely to make the largest contribution to forecast-error variance. The physical characteristics of these singular vectors depend on the choice of initial-time metric used to represent analysis-error covariances: the total-energy norm serves as a proxy for the analysis-error covariance matrix, whereas the Hessian of the cost function of a 4D-Var assimilation scheme represents a more sophisticated estimate of the analysis-error covariances, consistent with the observation and background-error covariances used in the 4D-Var scheme. This study examines and compares the structure of singular vectors computed with the European Centre for Medium-Range Weather Forecasts (ECMWF) Integrated Forecasting System using these two types of initial metric. Unlike earlier studies that used background errors derived from lagged forecast differences (the NMC method), the background-error covariance matrix in the Hessian metric is based on statistics from an ensemble of 4D-Vars using perturbed observations, which produces tighter correlations in the background-error statistics than previous formulations. In light of these new background-error statistics, this article re-examines the properties of Hessian singular vectors (and their relationship to total-energy singular vectors) using cases from different periods between 2003 and 2005. Energy profiles and wavenumber spectra reveal that the total-energy singular vectors are similar to Hessian singular vectors that use all observation types in the operational 4D-Var assimilation. This is in contrast to the structure of Hessian singular vectors computed without observations. Increasing the observation density tends to reduce the spatial scale of the Hessian singular vectors. Copyright © 2009 Royal Meteorological Society [source]
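For orientation, the following is a standard formulation from the singular-vector literature (the notation is assumed here, not quoted from the paper): the leading singular vectors maximize final-time energy relative to a chosen initial-time norm, and the choice of that norm is exactly where the total-energy and Hessian metrics differ.

```latex
% A standard singular-vector formulation (notation assumed, not from the paper):
% x is an initial perturbation, M the tangent-linear propagator over the
% optimization interval, E the total-energy norm at final time, and A the
% initial-time metric. The leading singular vectors maximize
\[
  \lambda \;=\; \max_{x \neq 0}
  \frac{\langle \mathbf{M}x,\; \mathbf{E}\,\mathbf{M}x \rangle}
       {\langle x,\; \mathbf{A}\,x \rangle},
  \qquad\text{equivalently}\qquad
  \mathbf{M}^{\mathsf{T}}\mathbf{E}\,\mathbf{M}\,x \;=\; \lambda\,\mathbf{A}\,x .
\]
% Total-energy singular vectors: A = E (the energy norm as a proxy for
%   analysis errors).
% Hessian singular vectors:      A = the Hessian of the 4D-Var cost function,
%   whose inverse approximates the analysis-error covariance matrix.
```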
Metrics: HRM's Holy Grail? A New Zealand case study
HUMAN RESOURCE MANAGEMENT JOURNAL, Issue 4 2009

Abstract: What gets measured in business is noticed and acted on. That human resource management (HRM) is a vital key to business success, and deserves to be noticed as such, has been argued profusely by the HRM profession over the last three decades. While the importance of human resource (HR) measurement is not disputed by business managers, the search for meaningful generic HR metrics is like HRM's Holy Grail. The purpose of this research is to investigate the measurement issues confronting a sample of business organisations. It examines the current measurement practices used and their HR measurement needs. Developing appropriate HR measures, in terms of adding value, allows organisations to refocus their resources for leverage. Inappropriate measures simply encourage inappropriate behaviours not in the long-term interests of the business. We know that HRM is less prepared than other business functions (like finance or management information systems) to quantify its impact on business performance. Our results suggest that HR metrics as the Holy Grail of HRM remain elusive. This research signals the importance of developing relevant and meaningful HR measurement models, while acknowledging that the actual metrics used (unlike accounting measures) may vary from business to business. [source]

Evaluating forecasts: a look at aggregate bias and accuracy measures
JOURNAL OF FORECASTING, Issue 6 2005
Benito E. Flores

Abstract: In this paper an investigation is made of the properties and use of two aggregate measures of forecast bias and accuracy. These are metrics used in business to calculate aggregate forecasting performance for a family (group) of products. We find that the aggregate measures are not particularly informative if some of the one-step-ahead forecasts are biased. This is likely to be the case in practice if frequently employed forecasting methods are used to generate a large number of individual forecasts. In the paper, examples are constructed to illustrate some potential problems in the use of the metrics. We propose a simple graphical display of forecast bias and accuracy to supplement the information yielded by the accuracy measures. This support includes relevant boxplots of measures of individual forecasting success. This tool is simple but helpful, as the graphic display has the potential to indicate forecast deterioration that can be masked by one or both of the aggregate metrics. The procedures are illustrated with data representing sales of food items. Copyright © 2005 John Wiley & Sons, Ltd. [source]
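A small numerical illustration (constructed here, not drawn from the paper) of the masking effect described above: when individual forecasts carry offsetting biases, an aggregate bias measure over the product family can sit near zero while an aggregate accuracy measure stays poor. Mean error for bias and mean absolute deviation for accuracy are common textbook choices, assumed here for concreteness.

```python
import numpy as np

# Forecast errors (forecast - actual) for four products over six periods:
# two products biased high, two biased low by the same amount.
rng = np.random.default_rng(0)
bias = np.array([+5.0, +5.0, -5.0, -5.0])
errors = bias[:, None] + rng.normal(0.0, 1.0, size=(4, 6))

aggregate_bias = errors.mean()         # near zero: item biases cancel out
aggregate_mad = np.abs(errors).mean()  # stays large: inaccuracy persists
item_bias = errors.mean(axis=1)        # a per-item view (cf. the boxplots)

print(f"aggregate bias {aggregate_bias:+.2f}, aggregate MAD {aggregate_mad:.2f}")
print("per-item bias:", np.round(item_bias, 2))
```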
Investigating the interaction between the homeostatic and circadian processes of sleep-wake regulation for the prediction of waking neurobehavioural performance
JOURNAL OF SLEEP RESEARCH, Issue 3 2003
Hans P. A. Van Dongen

Summary: The two-process model of sleep regulation has been applied successfully to describe, predict, and understand sleep-wake regulation in a variety of experimental protocols such as sleep deprivation and forced desynchrony. A non-linear interaction between the homeostatic and circadian processes was reported when the model was applied to describe alertness and performance data obtained during forced desynchrony. However, this non-linear interaction could also be due to intrinsic non-linearity in the metrics used to measure alertness and performance. Distinguishing these possibilities would be of theoretical interest, but could also have important implications for the design and interpretation of experiments placing sleep at different circadian phases or varying the duration of sleep and/or wakefulness. Although to date no resolution to this controversy has been found, here we show that the issue can be addressed with existing data sets. The interaction between the homeostatic and circadian processes of sleep-wake regulation was investigated using neurobehavioural performance data from a laboratory experiment involving total sleep deprivation. The results provided evidence of an actual non-linear interaction between the homeostatic and circadian processes of sleep-wake regulation for the prediction of waking neurobehavioural performance. [source]
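As a toy sketch of what is at stake (illustrative assumptions throughout; this is not the authors' model or data), compare an additive coupling of the homeostatic process S and the circadian process C with a multiplicative one during total sleep deprivation: under the non-linear coupling, the circadian dip in predicted performance deepens as time awake accumulates.

```python
import numpy as np

hours = np.arange(0.0, 48.0, 0.5)                # time awake (total sleep deprivation)
S = 1.0 - np.exp(-hours / 18.0)                  # homeostatic pressure: saturating build-up
C = np.cos(2.0 * np.pi * (hours - 16.0) / 24.0)  # circadian process, ~24 h period

additive = -(S + 0.3 * C)                # linear interaction of the two processes
multiplicative = -(S * (1.0 + 0.3 * C))  # circadian amplitude scales with S

# The circadian modulation has constant amplitude in the additive model but
# grows with hours awake in the multiplicative (non-linear) model.
for h in (8, 32):
    i = int(h / 0.5)
    print(f"t={h:2d} h  additive {additive[i]:+.2f}  multiplicative {multiplicative[i]:+.2f}")
```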
An Integrated Framework for Measuring Product Development Performance in High Technology Industries
PRODUCTION AND OPERATIONS MANAGEMENT, Issue 2 2005
Debasish N. Mallick

Abstract: We present an integrated framework for measuring product development performance. The framework consists of a three-stage model for exploring the relationships between metrics used by the design, manufacturing and marketing functions, and overall commercial success. Using a cross-sectional survey of 383 product development professionals working on 38 product development projects in the high-tech electronic assembled goods manufacturing sector, we provide empirical evidence for the proposed framework. The findings indicate that in the high-tech manufacturing sector (1) commercial success of new product development projects is primarily determined by market share, (2) gain in market share is primarily driven by lower unit cost and not by technical performance, and (3) reduction in unit cost is primarily driven by the increased speed of new product development and not by the R&D budget. The study failed to identify any significant association between R&D budget and technical performance, or between development speed and technical performance. [source]

Evaluating Emergency Care Research Networks: What Are the Right Metrics?
ACADEMIC EMERGENCY MEDICINE, Issue 10 2009
Jill M. Baren MD

Abstract: Research networks can enable the inclusion of large, diverse patient populations in different settings. However, the optimal measures of a research network's failure or success are not well defined or standardized. To define a framework for metrics used to measure the performance and effectiveness of emergency care research networks (ECRNs), a conference for emergency care investigators, funding agencies, patient advocacy groups, and other stakeholders was held and yielded the following major recommendations: 1) ECRN metrics should be measurable, explicitly defined, and customizable for the multiple stakeholders involved; and 2) continuing to develop and institute metrics to evaluate ECRNs will be critical for their accountability and sustainability. [source]

Quality of protein crystal structures
ACTA CRYSTALLOGRAPHICA SECTION D, Issue 9 2007
Eric N. Brown

Abstract: The genomics era has seen the propagation of numerous databases containing easily accessible data that are routinely used by investigators to interpret results and generate new ideas. Most investigators consider data extracted from scientific databases to be error-free. However, data generated by all experimental techniques contain errors, and some, including the coordinates in the Protein Data Bank (PDB), also integrate the subjective interpretations of experimentalists. This paper explores the determinants of the protein structure quality metrics used routinely by protein crystallographers. These metrics are available for most structures in the database, including the R factor, Rfree, real-space correlation coefficient, Ramachandran violations, etc. All structures in the PDB were analyzed for their overall quality based on nine different quality metrics. Multivariate statistical analysis revealed that while technological improvements have increased the number of structures determined, the overall quality of structures has remained constant. The quality of structures deposited by structural genomics initiatives is generally better than the quality of structures from individual investigator laboratories. The most striking result is the association between structure quality and the journal in which the structure was first published. The worst offenders are the apparently high-impact general science journals. The rush to publish high-impact work in a competitive atmosphere may have led to the proliferation of poor-quality structures. [source]

Measuring corporate environmental performance: the trade-offs of sustainability ratings
BUSINESS STRATEGY AND THE ENVIRONMENT, Issue 4 2010
Magali Delmas

Abstract: Socially responsible investing (SRI) represents an investment process that reflects environmental and social preferences. The financial industry is in a unique position to move corporations towards corporate sustainability. However, there is often little transparency regarding the metrics used to evaluate corporate social and environmental performance and the trade-offs involved in the evaluation. In this paper we discuss the various trade-offs of sustainability screening methodologies. We show that the rating of companies varies significantly according to whether the screening is based on toxic releases and regulatory compliance or on the quality of environmental policy and disclosure. We base our analysis on the evaluation of the performance of 15 firms in the chemical sector. The analysis indicates that firms that have the most advanced reporting and environmental management practices also tend to have higher levels of toxic releases and lower environmental compliance. We provide methodological recommendations to help stakeholders evaluate corporate environmental performance. Copyright © 2010 John Wiley & Sons, Ltd and ERP Environment. [source]
Corporate environmental reporting: what's in a metric?
BUSINESS STRATEGY AND THE ENVIRONMENT, Issue 2 2003
R. Scott Marshall, Assistant Professor of Management

Abstract: Although there has been increased attention to corporate environmental reports (CERs), there has yet to be a close examination of the metrics used in these reports. Metrics do not address the content of CERs but, perhaps more importantly, provide the means for conveying that content. In this paper, we analyze the metrics used in 79 corporations' recent CERs. We define and use an 'environmental sustainability' lens, and apply two environmental metrics taxonomies to CER metrics. We also consider the implications of key internal and external firm factors for CER metrics. Our findings suggest that (i) firms' compliance with ISO 14001 increases the presence of future-oriented metrics, (ii) a majority of CER content uses lagging metrics with descriptive and operational performance information, (iii) larger firms are more likely than smaller firms to use future-oriented metrics and (iv) there are noticeable differences across countries/regions in terms of CER metrics. Several important issues seem evident from the study. First, the metrics most commonly used in CERs provide little information about future performance. Second, the majority of metrics describe operations performance rather than environmental impact. Third, even though the sample was chosen based on a priori indicators of corporate environmental awareness, only about half of the companies sampled had a CER available. Copyright © 2003 John Wiley & Sons, Ltd. and ERP Environment [source]