Distribution by Scientific Domains
Distribution within Life Sciences

Kinds of Metrics

  • community metric
  • different metric
  • distance metric
  • diversity metric
  • dti metric
  • evaluation metric
  • macroinvertebrate metric
  • new metric
  • other metric
  • performance metric
  • quality metric
  • quantitative metric
  • same metric
  • several metric
  • similarity metric
  • useful metric

  • Terms modified by Metrics

  • metric data
  • metric space
  • metric tensor
  • metric ton
  • metric used

  • Selected Abstracts

    Application of the Levenshtein Distance Metric for the Construction of Longitudinal Data Files

    Harold C. Doran
The analysis of longitudinal data in education is becoming more prevalent given the nature of testing systems constructed for the No Child Left Behind Act (NCLB). However, constructing the longitudinal data files remains a significant challenge. Students move into new schools, but in many cases the unique identifiers (IDs) that should remain constant for each student change. As a result, different students frequently share the same ID, and merging records for an ID that is erroneously assigned to different students clearly becomes problematic. In small data sets, quality assurance of the merge can proceed through human review of the data to ensure all merged records are properly joined. However, in data sets with hundreds of thousands of cases, quality assurance via human review is impossible. While the record linkage literature has many applications in other disciplines, the educational measurement literature lacks details of formal protocols that can be used for quality assurance of longitudinal data files. This article presents an empirical quality assurance procedure that may be used to verify the integrity of the merges performed for longitudinal analysis. We also discuss possible extensions that would permit merges to occur even when unique identifiers are not available. [source]
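The Levenshtein distance at the heart of such a procedure is a standard dynamic-programming computation. The sketch below is a generic implementation for illustration, not the article's code, and the mismatch threshold in `suspicious_merge` is a hypothetical choice:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of insertions, deletions and substitutions
    needed to turn string a into string b (dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def suspicious_merge(name1: str, name2: str, threshold: int = 3) -> bool:
    """Flag a candidate ID merge whose associated names are far apart.
    The threshold of 3 edits is illustrative, not from the study."""
    return levenshtein(name1.lower(), name2.lower()) > threshold
```

In a quality-assurance pass, records sharing an ID but flagged by `suspicious_merge` would be routed for closer inspection rather than merged automatically.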

    Bizygomatic breadth determination in damaged skulls

    C. L. Oskam
Abstract Metric and discriminant function analyses of the skull have been used successfully to determine ancestry and sex from human skeletal remains in both forensic and archaeological contexts. However, skulls are frequently discovered in damaged condition. One structure that is commonly fragmented, even when the rest of the skull is preserved, is the zygomatic arch. The bizygomatic width is an important measurement in craniometry and in forensic facial reconstruction for determining facial width; we therefore propose a simple linear regression model to predict the bizygomatic width of skulls with damaged zygomatic arches. Thirty-one adult skulls originating from the Indian sub-continent were used to measure the bizygomatic width. Then, on the same skulls, a straight steel wire was placed at the superior surface of the temporal and zygomatic origins of the zygomatic arch to simulate zygomatic arch reconstruction on damaged skulls. These wire measurements were used to fit a simple linear regression model between the bizygomatic widths and the wire measurements, and the estimated regression model, Bizygomatic Width (bone) = 0.61 + 1.02 × (wire measurement), has a very high R² value of 0.91. Hence, this model could effectively be used to predict bizygomatic widths based on wire measurements. In addition, bizygomatic widths and wire measurements were collected from 14 New Zealand European skulls to test the ability of the regression model to determine bizygomatic widths for different ethnic groups. The model accurately predicted the bizygomatic widths of the New Zealand skulls of European origin, suggesting that it could be used for other ethnic groups. The importance of the bizygomatic width for craniometric analysis makes this regression model particularly useful for analysing archaeological samples.
Furthermore, this regression line can be used in the field of forensic facial reconstruction to reconstruct damaged zygomatic arches prior to facial reconstructions. Copyright 2009 John Wiley & Sons, Ltd. [source]
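The reported model is directly usable for prediction, and refitting such a model from paired measurements is a few lines of ordinary least squares. The sketch below is illustrative; the study's raw data are not reproduced here, and `fit_simple_ols` is a generic helper rather than the authors' code:

```python
def predict_bizygomatic_width(wire_mm: float) -> float:
    """Predict bizygomatic width (mm) from a wire measurement (mm)
    using the reported model: width = 0.61 + 1.02 * wire."""
    return 0.61 + 1.02 * wire_mm

def fit_simple_ols(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx
    return my - b * mx, b
```

Given a new set of (wire, bone) pairs from another sample, `fit_simple_ols` would recover population-specific coefficients for comparison with the published 0.61 and 1.02.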

    A Metric of Maternal Prenatal Risk Drinking Predicts Neurobehavioral Outcomes in Preschool Children

    ALCOHOLISM, Issue 4 2009
    Lisa M. Chiodo
Background: Fetal Alcohol Spectrum Disorders (FASDs), including Fetal Alcohol Syndrome, continue to be high-incidence developmental disorders. Detection of patterns of maternal drinking that place fetuses at risk for these disorders is critical to diagnosis, treatment, and prevention, but is challenging and often insufficient during pregnancy. Various screens and measures have been used to identify maternal risk drinking but their ability to predict child outcome has been inconsistent. This study hypothesized that a metric of fetal "at-risk" alcohol exposure (ARAE) derived from several indicators of maternal self-reported drinking would predict alcohol-related neurobehavioral dysfunctions in children better than individual measures of maternal alcohol consumption alone. Methods: Self-reported peri-conceptional and repeated maternal drinking during pregnancy were assessed with semi-structured interviews and standard screens, i.e., the CAGE, T-ACE, and MAST, in a prospective sample of 75 African-American mothers. Drinking volumes per beverage type were converted to standard quantity and frequency measures. From these individual measures and screening instruments, a simple dichotomous index of prenatal ARAE was defined and used to predict neurobehavioral outcomes in the 4- to 5-year-old offspring of these women. Study outcomes included IQ, attention, memory, visual-motor integration, fine motor skill, and behavior. Statistical analyses controlled for demographic and other potential confounders.
Results: The current "at-risk" drinking metric identified over 62% of the mothers as drinking at risk levels (23% more than the selection criterion identified) and outperformed all individual quantity and frequency consumption measures, including averages of weekly alcohol use and "binge" alcohol exposures (assessed as intake per drinking occasion), as well as an estimate of the Maternal Substance Abuse Checklist (Coles et al., 2000), in predicting prenatal alcohol-related cognitive and behavioral dysfunction in 4- to 5-year-old children. Conclusions: A metric reflecting multiple indices of "at-risk" maternal alcohol drinking in pregnancy had greater utility in predicting various prenatal alcohol-related neurobehavioral dysfunctions and deficits in children than individual measures of maternal self-reported alcohol consumption or a previous maternal substance abuse index. Assessment of fetal risk drinking in pregnant women was improved by including multiple indicators of both alcohol consumption and alcohol-related consequences and, if appropriate practical applications are devised, may facilitate intervention by health care workers during pregnancy and potentially reduce the incidence or severity of FASDs. [source]

    Assessing maintainability change over multiple software releases

    Denis Kozlov
Abstract The focus of this paper is to reveal the relationships between software maintainability and other internal software quality attributes. The source code characteristics of five Java-based open-source software products are analyzed using the software measurement tool SoftCalc. The relationships between maintainability and internal quality attributes are identified using Pearson product-moment correlation analysis. Our results show negative correlations between maintainability and some well-known internal software quality attributes, as well as between maintainability and complexity metrics. In particular, according to our results, the Number of Data Variables Declared and the Decisional Complexity McClure Metric have the strongest correlations with maintainability. The results of our study, namely knowledge of the relationships between internal software quality attributes and maintainability, can be used as a basis for improving software maintainability at earlier stages of the software development process. Copyright 2007 John Wiley & Sons, Ltd. [source]
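The correlation analysis used here is the standard Pearson product-moment coefficient. A minimal sketch, with made-up numbers standing in for the paper's measurements (a negative r mirrors the reported direction between maintainability and complexity):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative (not the paper's) data: a maintainability index against
# a complexity metric for five hypothetical products.
maintainability = [85, 78, 70, 61, 55]
complexity = [10, 14, 19, 25, 30]
r = pearson_r(maintainability, complexity)
```

A strongly negative r on such data is what "negative correlations between maintainability and complexity metrics" amounts to numerically.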

    A Dependence Metric for Possibly Nonlinear Processes

    C. W. Granger
Abstract. A transformed metric entropy measure of dependence is studied which satisfies many desirable properties, including being a proper measure of distance. It is capable of good performance in identifying dependence even in possibly nonlinear time series, and is applicable for both continuous and discrete variables. A nonparametric kernel density implementation is considered here for many stylized models including linear and nonlinear MA, AR, GARCH, integrated series and chaotic dynamics. A related permutation test of independence is proposed and compared with several alternatives. [source]
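The continuous measure requires kernel density estimation; for intuition, a discretized analogue of the same idea (half the squared Hellinger distance between the joint distribution and the product of its marginals) conveys the key properties: it is zero under independence, symmetric, and bounded. This is a sketch of the concept, not the authors' estimator:

```python
from math import sqrt
from collections import Counter

def dependence_metric(xs, ys):
    """Discrete analogue of a metric-entropy dependence measure: half the
    squared Hellinger distance between the joint distribution of (x, y)
    and the product of its marginals. Zero iff empirically independent;
    bounded in [0, 1]."""
    n = len(xs)
    joint = Counter(zip(xs, ys))        # missing pairs count as 0
    px, py = Counter(xs), Counter(ys)
    return 0.5 * sum(
        (sqrt(joint[(x, y)] / n) - sqrt(px[x] * py[y] / n ** 2)) ** 2
        for x in px for y in py)
```

On perfectly independent samples the measure is 0; on identical series it is strictly positive, which is the behavior that lets it flag nonlinear dependence that a linear correlation can miss.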

    Metric and nonmetric dental variation and the population structure of the Ainu

    Tsunehiko Hanihara
Gene flow and genetic drift are important factors affecting geographic variation in human phenotypic traits. In the present study, the effects of gene flow from an outside source on the pattern of within- and among-group variation of the Ainu from Sakhalin Island and three local groups of Hokkaido are examined by applying an R-matrix approach to metric and nonmetric dental data. The comparative samples consist of their ancestral and neighboring populations, such as the Neolithic Jomon, the subsequent Epi-Jomon/Satsumon, the Okhotsk culture people who migrated from Northeast Asia to the northeastern part of Hokkaido during the period 1600–900 years B.P., and modern non-Ainu Japanese. The results obtained by using the census population sizes of the regional groups of the Ainu as an estimate of relative effective population size suggest the possibility of admixture between the Okhotsk culture people and the indigenous inhabitants of Hokkaido, at least in the coastal region along the Sea of Okhotsk. Such gene flow from the Northeast Asian continent may have exerted an effect on the genetic structure of the contemporary Ainu. The present findings indicate that population structure, as represented by genetic drift and gene flow, tends to be obscured in the results obtained by standard statistical methods such as Mahalanobis' generalized distance and Smith's MMDs. The present extension of the R-matrix approach to metric and nonmetric dental data provides results that can be interpreted in terms of a genetically, archaeologically, and prehistorically suggested pattern of gene flow and isolation. Am. J. Hum. Biol., 2010. 2009 Wiley-Liss, Inc. [source]

    Tiling among stereotyped dendritic branches in an identified Drosophila motoneuron

    F. Vonhoff
    Abstract Different types of neurons can be distinguished by the specific targeting locations and branching patterns of their dendrites, which form the blueprint for wiring the brain. Unraveling which specific signals control different aspects of dendritic architecture, such as branching and elongation, pruning and cessation of growth, territory formation, tiling, and self-avoidance requires a quantitative comparison in control and genetically manipulated neurons. The highly conserved shapes of individually identified Drosophila neurons make them well suited for the analysis of dendritic architecture principles. However, to date it remains unclear how tightly dendritic architecture principles of identified central neurons are regulated. This study uses quantitative reconstructions of dendritic architecture of an identified Drosophila flight motoneuron (MN5) with a complex dendritic tree, comprising more than 4,000 dendritic branches and 6 mm total length. MN5 contains a fixed number of 23 dendritic subtrees, which tile into distinct, nonoverlapping volumes of the diffuse motor neuropil. Across-animal comparison and quantitative analysis suggest that tiling of the different dendritic subtrees of the same neuron is caused by competitive and repulsive interactions among subtrees, perhaps allowing different dendritic compartments to be connected to different circuit elements. We also show that dendritic architecture is similar among different wildtype and GAL4 driver fly lines. Metric and topological dendritic architecture features are sufficiently constant to allow for studies of the underlying control mechanisms by genetic manipulations. Dendritic territory and certain topological measures, such as tree compactness, are most constant, suggesting that these reflect the intrinsic molecular identity of the neuron. J. Comp. Neurol. 518:2169–2185, 2010. 2010 Wiley-Liss, Inc. [source]

    Littoral macroinvertebrates as indicators of lake acidification within the UK

    Ben McFarland
    Abstract 1. The Water Framework Directive (WFD) requires the assessment of acidification in sensitive water bodies. Chemical and littoral macroinvertebrate samples were collected to assess acidification of clear and humic lakes in the UK. 2. Of three acid-sensitive metrics that were regressed against acid neutralizing capacity (ANC) and pH, highly significant responses were detected using the Lake Acidification Macroinvertebrate Metric (LAMM). This metric was used to assign high, good, moderate, poor and bad status classes, as required by the WFD. 3. In clear-water lakes, macroinvertebrate changes with increasing acidification did not indicate any discontinuities, so a chemical model was used to define boundaries. In humic lakes, biological data were able to indicate a distinct good–moderate boundary between classes. 4. Humic lakes had significantly lower pH than clear lakes in the same class, not only at the good–moderate boundary, where different methods were used to set boundaries, but also at the high–good boundary, where the same chemical modelling was used for both lake types. These findings support the hypothesis that toxic effects are reduced in waters rich in dissolved organic carbon (DOC). 5. A typology is needed that splits humic and clear lakes, to prevent naturally acidic lakes from being inappropriately labelled as acidified. 6. Validation using data from independent lakes demonstrated that the LAMM is transportable, with predicted environmental quality ratios (EQRs) derived from mean observed ANC accurately reflecting the observed EQR and final status class. 7. Detecting and quantifying acidification is important for conservation, in the context of appropriate restoration, for example by ensuring that naturally acid lakes are not treated as anthropogenically acidified. Copyright 2009 John Wiley & Sons, Ltd and Crown Copyright 2009 [source]

    Diversity of Interactions: A Metric for Studies of Biodiversity

    BIOTROPICA, Issue 3 2010
    Lee A. Dyer
    ABSTRACT Multitrophic interactions play key roles in the origin and maintenance of species diversity, and the study of these interactions has contributed to important theoretical advances in ecology and evolutionary biology. Nevertheless, most biodiversity inventories focus on static species lists, and prominent theories of diversity still ignore trophic interactions. The lack of a simple interaction metric that is analogous to species richness is one reason why diversity of interactions is not examined as a response or predictor variable in diversity studies. Using plant–herbivore–enemy trophic chains as an example, we develop a simple metric of diversity in which richness, diversity indices (e.g., Simpson's 1/D), and rarefaction diversity are calculated with links as the basic unit rather than species. Interactions include all two-link (herbivore–plant and enemy–herbivore) and three-link (enemy–herbivore–plant) chains found in a study unit. This metric is different from other indices, such as traditional diversity measures, connectivity and interaction diversity in food-web studies, and the diversity of interaction index in behavioral studies, and it is easier to compute. Using this approach to studying diversity provides novel insight into debates about neutrality and correlations between diversity, stability, productivity, and ecosystem services. Abstract in Spanish is available at [source]
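The proposed metric treats each distinct link (two- or three-species chain) the way species richness treats a species. A minimal sketch with invented links; the taxa are placeholders, not the article's data:

```python
from collections import Counter

def interaction_diversity(observed_links):
    """Richness and Simpson's inverse index (1/D), computed with trophic
    links rather than species as the counting unit."""
    counts = Counter(observed_links)
    n = sum(counts.values())
    simpson_d = sum((c / n) ** 2 for c in counts.values())
    return len(counts), 1 / simpson_d

# Illustrative two-link (herbivore, plant) / (enemy, herbivore) and
# three-link (enemy, herbivore, plant) chains from hypothetical surveys.
links = [("caterpillar", "oak"), ("caterpillar", "oak"),
         ("wasp", "caterpillar"), ("wasp", "caterpillar", "oak"),
         ("aphid", "rose")]
richness, inv_simpson = interaction_diversity(links)
```

With links as the unit, the same machinery used for species (richness, 1/D, rarefaction) carries over unchanged, which is the simplicity the authors emphasize.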

    Evaluation Metrics in Classification: A Quantification of Distance-Bias

    Ricardo Vilalta
    This article provides a characterization of bias for evaluation metrics in classification (e.g., Information Gain, Gini, χ2, etc.). Our characterization provides a uniform representation for all traditional evaluation metrics. Such a representation leads naturally to a measure of the distance between the biases of two evaluation metrics. We give a practical value to our measure by observing the distance between the biases of two evaluation metrics and its correlation with differences in predictive accuracy when we compare two versions of the same learning algorithm that differ in the evaluation metric only. Experiments on real-world domains show how the expectations of accuracy differences generated by the distance-bias measure correlate with actual differences when the learning algorithm is simple (e.g., search for the best single feature or the best single rule). The correlation, however, weakens with more complex algorithms (e.g., learning decision trees). Our results show how interaction among learning components is a key factor in understanding learning performance. [source]
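The metrics being characterized are standard split-evaluation functions. A textbook sketch of two of them follows; this illustrates the metrics themselves, not the paper's distance-between-biases measure:

```python
from math import log2

def entropy(pos, neg):
    """Binary entropy of a class distribution, in bits."""
    total = pos + neg
    h = 0.0
    for c in (pos, neg):
        if c:
            p = c / total
            h -= p * log2(p)
    return h

def gini(pos, neg):
    """Gini impurity of a binary class distribution."""
    total = pos + neg
    return 1 - (pos / total) ** 2 - (neg / total) ** 2

def information_gain(parent, children):
    """Entropy reduction of a split; parent and each child are
    (pos, neg) class counts."""
    n = sum(parent)
    return entropy(*parent) - sum(
        sum(ch) / n * entropy(*ch) for ch in children)
```

Two such metrics can rank the same candidate splits differently on some distributions, and that systematic disagreement is the bias the article sets out to quantify.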

    Metric spaces in NMR crystallography

    David M. Grant
    Abstract The anisotropic character of the chemical shift can be measured by experiments that provide shift tensor values, and comparing these experimental components, obtained from microcrystalline powders, with 3D nuclear shielding tensor components calculated with quantum chemistry yields structural models of the analyzed molecules. The use of a metric tensor for evaluating the mean squared deviations, d², between two or more tensors provides a statistical way to verify the molecular structure governing the theoretical shielding components. The sensitivity of the method is comparable with that of diffraction methods for the heavier organic atoms (i.e., C, O, N, etc.) but considerably better for the positions of H atoms. Thus, the method is especially powerful for H-bond structure, the position of water molecules in biomolecular species, and other structural features in which protons are important. Unfortunately, the traditional Cartesian tensor components appear as reducible metric representations and lack the orthogonality of irreducible icosahedral and irreducible spherical tensors, both of which are also easy to normalize. Metrics give weighting factors that carry important statistical significance in a structure determination. Details of the mathematical analysis are presented and examples given to illustrate the reason nuclear magnetic resonance is rapidly assuming an important synergistic relationship with diffraction methods (X-ray, neutron scattering, and high energy synchrotron irradiation). 2009 Wiley Periodicals, Inc. Concepts Magn Reson Part A 34A: 217–237, 2009. [source]

    Integrated Environmental and Financial Performance Metrics for Investment Analysis and Portfolio Management

    Simon Thomas
    This paper introduces a new measure, based on a study by Trucost and Dr Robert Repetto, combining external environmental costs with established measures of economic value added, and demonstrates how this measure can be incorporated into financial analysis. We propose that external environmental costs are relevant to all investors: universal investors are concerned about the scale of external costs whether or not regulations to internalise them are likely; mainstream investors need to understand external costs as an indication of future regulatory compliance costs; and SRI investors need to evaluate companies on both financial and social performance. The paper illustrates our new measure with data from US electric utilities and illustrates how the environmental exposures of different fund managers and portfolios can be compared. With such measures fund managers can understand and control portfolio-wide environmental risks, demonstrate their environmental credentials quantitatively and objectively and compete for the increasing number of investment mandates that have an environmental component. [source]

    Metrics in the Science of Surge

    Jonathan A. Handler MD
    Metrics are the driver of positive change toward better patient care. However, the research into the metrics of the science of surge is incomplete, research funding is inadequate, and we lack a criterion standard metric for identifying and quantifying surge capacity. Therefore, a consensus working group was formed through a "viral invitation" process. With a combination of online discussion through a group e-mail list and in-person discussion at a breakout session of the Academic Emergency Medicine 2006 Consensus Conference, "The Science of Surge," seven consensus statements were generated. These statements emphasize the importance of funded research in the area of surge capacity metrics; the utility of an emergency medicine research registry; the need to make the data available to clinicians, administrators, public health officials, and internal and external systems; the importance of real-time data, data standards, and electronic transmission; seamless integration of data capture into the care process; the value of having data available from a single point of access through which data mining, forecasting, and modeling can be performed; and the basic necessity of a criterion standard metric for quantifying surge capacity. Further consensus work is needed to select a criterion standard metric for quantifying surge capacity. These consensus statements cover the future research needs, the infrastructure needs, and the data that are needed for a state-of-the-art approach to surge and surge capacity. [source]

    Assessing macroinvertebrate metrics for classifying acidified rivers across northern Europe

    FRESHWATER BIOLOGY, Issue 7 2010
    Summary 1. The effects of acidification on ecological status of rivers in Northern Europe must be assessed according to the EU Water Framework Directive (WFD). Several acidification metrics based on macroinvertebrates already exist in different countries, and the WFD requires that they be comparable across northern Europe. Thus, we compiled macroinvertebrate monitoring data from the U.K. (n = 191 samples), Norway (n = 740) and Sweden (n = 531) for analysis against pH. 2. We tested new and existing acidification metrics developed nationally and used within the Northern Geographical Intercalibration Group. The new metrics were based on the acidification sensitivity of selected species and are proposed as a first step towards a new common indicator for acidification for Northern Europe. 3. Metrics were assessed according to responsiveness to mean pH, degree of nonlinearity in response and consistency in responses across countries. We used flexible, nonparametric regression models to explore various properties of the pressure–response relationships. Metrics were also analysed with humic content (total organic carbon above/below 5 mg L⁻¹) as a covariate. 4. Most metrics responded clearly to pH, with the following metrics explaining most of the variance: Acid Water Indicator Community, Number of ephemeropteran families, Medin's index, Multimetric Indicator of Stream Acidification and the new metric 'Proportion of sensitive Ephemeroptera'. 5. Most metrics were significantly higher in humic than in clear-water rivers, suggesting smaller acidification effects in humic rivers. This result supports the proposed use of humic level as a typological factor in the assessment of acidification. 6. Some potentially important effects could not be considered in this study, such as the additional effects of metals, episodic acidification and the contrasting effects of natural versus anthropogenic acidity. 
We advocate further data collection and testing of metrics to incorporate these factors. [source]

    An application of canonical correspondence analysis for developing ecological quality assessment metrics for river macrophytes

    FRESHWATER BIOLOGY, Issue 5 2005
    Summary 1. Aquatic macrophyte composition and abundance are required by the European Union's Water Framework Directive for determining ecological status. Five metrics were produced that can be combined to determine the deviation of aquatic macrophytes from reference conditions in Northern Ireland's rivers. 2. Species optima and niche breadths along silt, nitrate, pH, conductivity and dissolved oxygen gradients were generated from aquatic macrophyte and water quality surveys conducted at 273 sites throughout Northern Ireland using Canonical Correspondence Analysis (CCA). Five metric scores based on these environmental gradients were determined at new monitoring sites using the mean optima of the species occurring at the site, weighted by the percentage cover and niche breadth of each species. 3. A preliminary reference network of 32 sites of high physico-chemical and hydromorphological quality, representative of the range of river types in Northern Ireland, enabled reference metric scores to be produced for each river type. Five unimpacted and twenty impacted sites were used for testing the performance of the metrics. By subtracting reference metric scores from metric scores at a monitoring site, measures of ecological impact could be determined along five different impact gradients. Metrics were also combined to give a measure of total ecological change. 4. The metrics system distinguished unimpacted from impacted sites and correctly identified 77% of the known impacts. The metrics distinguished different types of impact, e.g. silt and nitrate. 5. Aquatic macrophyte occurrence and abundance have high natural variability at a site, both temporally and spatially. This method was designed to be sensitive to ecological change whilst reducing noise caused by natural variation. [source]
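The site-scoring step can be sketched as a weighted average of species optima. The exact weighting form is not given in the summary; weighting by inverse niche breadth (so tolerant, wide-niche species count less) is an assumption here, and all names and numbers are hypothetical:

```python
def site_metric_score(site_cover, optima, niche_breadth):
    """Score a site on one environmental gradient: the mean of species
    optima weighted by percentage cover and, as an assumed form, by
    inverse niche breadth."""
    num = den = 0.0
    for species, cover in site_cover.items():
        w = cover / niche_breadth[species]
        num += w * optima[species]
        den += w
    return num / den

# Hypothetical optima and niche breadths on a nitrate gradient.
optima = {"sp_a": 4.0, "sp_b": 8.0}
breadth = {"sp_a": 1.0, "sp_b": 1.0}
```

Subtracting the corresponding reference score for the river type from this site score would then give the impact measure along that gradient.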

    Metrics: HRM's Holy Grail?

    A New Zealand case study
    What gets measured in business is noticed and acted on. The importance of human resource management (HRM) as a vital key to business success has been argued profusely by the HRM profession over the last three decades. While the importance of human resource (HR) measurement is not disputed by business managers, the search for meaningful generic HR metrics is like the quest for HRM's Holy Grail. The purpose of this research is to investigate the measurement issues confronting a sample of business organisations. It examines the current measurement practices used and their HR measurement needs. Developing appropriate HR measures, in terms of adding value, allows organisations to refocus their resources for leverage. Inappropriate measures simply encourage inappropriate behaviours that are not in the long-term interests of the business. We know that HRM is less prepared than other business functions (like finance or management information systems) to quantify its impact on business performance. Our results suggest that HR metrics as the Holy Grail of HRM remain elusive. This research signals the importance of developing relevant and meaningful HR measurement models, while acknowledging that the actual metrics used (unlike accounting measures) may vary from business to business. [source]

    Characterizing user-perceived impairment events using end-to-end measurements

    Soshant Bali
    Abstract Measures of quality of service (QoS) must correlate to end-user experience. For multimedia services, these metrics should focus on the phenomena that are observable by the end-user. Metrics such as delay and loss may have little direct meaning to the end-user because knowledge of specific coding and/or adaptive techniques is required to translate delay and loss to the user-perceived performance. Impairment events, as defined in this paper, are observable by the end-users independent of coding, adaptive playout or packet loss concealment techniques employed by their multimedia applications. Time between impairments and duration of impairments are metrics that are easily understandable by a network user. Methods to detect these impairment events using end-to-end measurements are developed here. In addition, techniques to identify Layer 2 route changes and congestion events using end-to-end measurements are also developed. These are useful in determining what caused the impairments. End-to-end measurements were conducted for about 26 days on 9 different node pairs to evaluate the developed techniques. Impairments occurred at a high rate on the two paths on which congestion events were detected. On these two paths, congestion occurred for 6–8 hours during the day on weekdays. Impairments caused by route changes were rare but lasted for several minutes. Copyright 2005 John Wiley & Sons, Ltd. [source]

    Priority research areas for ecosystem services in a changing world

    Emily Nicholson
    Summary 1. Ecosystem services are the benefits humans obtain from ecosystems. The importance of research into ecosystem services has been widely recognized, and rapid progress is being made. However, the prevailing approach to quantifying ecosystem services is still based on static analyses and single services, ignoring system dynamics, uncertainty and feedbacks. This is due partly to a lack of mechanistic understanding of processes and a dearth of empirical data, but also to a failure to engage fully with the interdisciplinarity of the problem. 2. We argue that there is a tendency to ignore the feedbacks between and within both social and ecological systems, and a lack of explicit consideration of uncertainty. Metrics need to be developed that can predict thresholds, which requires strong linkages to underlying processes, while the development of policy for the management of ecosystem services needs to be based on a broader understanding of value and the drivers of human well-being. 3. We highlight the complexities, the gaps in current knowledge and research, and potentially promising avenues for future investigation in four priority research areas: agendas, processes, metrics and uncertainty. 4. Synthesis and applications. Research interest in the field of ecosystem services is rapidly expanding, and can contribute significantly to the sustainable management of natural resources. However, a narrow disciplinary approach, or an approach that does not consider feedbacks within and between ecological and social systems, has the potential to produce dangerously misleading policy recommendations. In contrast, if we explicitly acknowledge and address uncertainties and complexities in the provision of ecosystem services, progress may appear slower but our models will be substantially more robust and informative about the effects of environmental change. [source]

    Metrics for the scope of a collection

    Robert B. Allen
    Some collections cover many topics, while others are narrowly focused on a limited number of topics. We introduce the concept of the "scope" of a collection of documents and we compare two ways of measuring it. These measures are based on the distances between documents. The first uses the overlap of words between pairs of documents. The second uses a novel method that calculates the semantic relatedness of pairs of words from the documents. Those values are combined to obtain an overall distance between the documents. The main validation of the measures compared Web pages categorized by Yahoo. Sets of pages sampled from broad categories were determined to have a higher scope than sets derived from subcategories. The measure was significant and confirmed the expected difference in scope. Finally, we discuss other measures related to scope. [source]
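The first, word-overlap measure can be sketched as a mean pairwise distance over the collection. The Jaccard coefficient used below is an assumption for illustration, since the abstract does not specify the exact overlap formula:

```python
def word_overlap_distance(doc1: str, doc2: str) -> float:
    """1 minus the Jaccard overlap of the word sets of two documents."""
    w1, w2 = set(doc1.lower().split()), set(doc2.lower().split())
    return 1 - len(w1 & w2) / len(w1 | w2)

def collection_scope(docs) -> float:
    """Mean pairwise word-overlap distance: broad collections score
    near 1, narrowly focused ones near 0."""
    pairs = [(a, b) for i, a in enumerate(docs) for b in docs[i + 1:]]
    return sum(word_overlap_distance(a, b) for a, b in pairs) / len(pairs)
```

Pages drawn from unrelated categories share few words and so score near 1, while pages from one subcategory share vocabulary and score lower, mirroring the broad-versus-subcategory validation described above.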

    Metrics-based process redesign with the MIT Process Handbook

    Alessandro Margherita
    This article describes how the business process re-engineering (BPR) concepts, knowledge base and software tools developed as part of the MIT Process Handbook can be extended and improved through the use of a systematic taxonomization of business process metrics. After introducing the key concepts underlying the Handbook, we propose a taxonomy of cost, dimension and value-related process metrics and show how the capture of such measures, combined with a set of special query types, enables more effective BPR. These innovations are illustrated using an example wherein the manager of a travel agency redesigns a booking process to reduce costs and increase customer satisfaction. Copyright 2007 John Wiley & Sons, Ltd. [source]

    Metrics and meaning for environmental sustainability

    Gioia Thompson
    Indicators of environmental performance are complex, critical, and increasingly in demand. Colleges and universities will be best served by focusing on core indicators and collaboration between sustainability and institutional research offices. [source]

    Pain: Memories, Moods, and Metrics

    PAIN PRACTICE, Issue 3 2006
    Craig T. Hartrick MD, FIPP Editor-in-Chief
    No abstract is available for this article. [source]

    Metrics or Peer Review?

    Evaluating the 2001 UK Research Assessment Exercise in Political Science
    Evaluations of research quality in universities are now widely used in the advanced economies. The UK's Research Assessment Exercise (RAE) is the most highly developed of these research evaluations. This article uses the results from the 2001 RAE in political science to assess the utility of citations as a measure of outcome, relative to other possible indicators. The data come from the 4,400 submissions to the RAE political science panel. The 28,128 citations analysed relate not only to journal articles, but to all submitted publications, including authored and edited books and book chapters. The results show that citations are the most important predictor of the RAE outcome, followed by whether or not a department had a representative on the RAE panel. The results highlight the need to develop robust quantitative indicators to evaluate research quality which would obviate the need for a peer evaluation based on a large committee. Bibliometrics should form the main component of such a portfolio of quantitative indicators. [source]

    Metrics versus Peer Review?

    Albert Weale
    First page of article [source]

    Evaluating Emergency Care Research Networks: What Are the Right Metrics?

    Jill M. Baren MD
    Abstract Research networks can enable the inclusion of large, diverse patient populations in different settings. However, the optimal measures of a research network's failure or success are not well defined or standardized. To define a framework for metrics used to measure the performance and effectiveness of emergency care research networks (ECRN), a conference for emergency care investigators, funding agencies, patient advocacy groups, and other stakeholders was held and yielded the following major recommendations: 1) ECRN metrics should be measurable, explicitly defined, and customizable for the multiple stakeholders involved and 2) continuing to develop and institute metrics to evaluate ECRNs will be critical for their accountability and sustainability. [source]

    Assessment of cognitive function in systemic lupus erythematosus, rheumatoid arthritis, and multiple sclerosis by computerized neuropsychological tests

    ARTHRITIS & RHEUMATISM, Issue 5 2010
    John G. Hanly
    Objective Computerized neuropsychological testing may facilitate screening for cognitive impairment in systemic lupus erythematosus (SLE). This study was undertaken to compare patients with SLE, patients with rheumatoid arthritis (RA), and patients with multiple sclerosis (MS) with healthy controls using the Automated Neuropsychological Assessment Metrics (ANAM). Methods Patients with SLE (n = 68), RA (n = 33), and MS (n = 20) were compared with healthy controls (n = 29). Efficiency of cognitive performance on 8 ANAM subtests was examined using throughput (TP), inverse efficiency (IE), and adjusted IE scores. The latter is more sensitive to higher cognitive functions because it adjusts for the impact of simple reaction time on performance. The results were analyzed using O'Brien's generalized least squares test. Results Control subjects were the most efficient in cognitive performance. MS patients were least efficient overall (as assessed by TP and IE scores) and were less efficient than both SLE patients (P = 0.01) and RA patients (P < 0.01), who did not differ. Adjusted IE scores were similar between SLE patients, RA patients, and controls, reflecting the impact of simple reaction time on cognitive performance. Overall, 50% of SLE patients, 61% of RA patients, and 75% of MS patients had impaired performance on ≥1 ANAM subtest. Only 9% of RA patients and 11% of SLE patients had impaired performance on ≥4 subtests, whereas this was true for 20% of MS patients. Conclusion ANAM is sensitive to cognitive impairment. While such computerized testing may be a valuable screening tool, our results emphasize the lack of specificity of slowed performance as a reliable indicator of impairment of higher cognitive function in SLE patients. [source]
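    Throughput and inverse efficiency are standard ways of collapsing speed and accuracy into a single efficiency score. The sketch below uses the common textbook definitions (throughput as correct responses per minute, inverse efficiency as mean reaction time divided by proportion correct); the exact ANAM formulas may differ, so treat this as an illustration of the concepts only.

```python
def throughput(n_correct: int, test_minutes: float) -> float:
    """Correct responses per minute of test time; higher is more efficient."""
    return n_correct / test_minutes

def inverse_efficiency(mean_rt_ms: float, prop_correct: float) -> float:
    """Mean reaction time divided by accuracy; lower is more efficient.

    Dividing by accuracy penalizes fast-but-inaccurate responding,
    so speed cannot be traded for errors without cost.
    """
    return mean_rt_ms / prop_correct

# Two subjects with equal accuracy: the slower one is less efficient
# (has the higher IE score), which is why IE tracks simple reaction time.
assert inverse_efficiency(600.0, 0.95) > inverse_efficiency(450.0, 0.95)
```

    An "adjusted" IE score of the kind the abstract describes would additionally subtract or regress out each subject's simple reaction time before forming the ratio.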

    Real-time measurement of cytosolic free calcium concentration in DEM-treated HL-60 cells during static magnetic field exposure and activation by ATP

    Camilla Rozanski
    Abstract This study investigated whether glutathione depletion affected the sensitivity of HL-60 cells to static magnetic fields. The effect of diethylmaleate (DEM) on static magnetic field-induced changes in cytosolic free calcium concentration ([Ca2+]c) was examined. Cells were loaded with a fluorescent dye and exposed to a uniform static magnetic field at a strength of 0 mT (sham) or 100 mT. [Ca2+]c was monitored during field and sham exposure using a ratiometric fluorescence spectroscopy system. Cells were activated by the addition of ATP. Metrics extracted from the [Ca2+]c time series included: average [Ca2+]c during the Pre-Field and Field Conditions, peak [Ca2+]c following ATP activation and the full width at half maximum (FWHM) of the peak ATP response. Comparison of each calcium metric between the sham and 100 mT experiments revealed the following results: average [Ca2+]c measured during the Field condition was 53 ± 2 nM and 58 ± 2 nM for sham and 100 mT groups, respectively. Average FWHM was 51 ± 3 s and 54 ± 3 s for sham and 100 mT groups, respectively. An effect of experimental order on the peak [Ca2+]c response to ATP in sham/sham experiments complicated the statistical analysis and did not allow pooling of the first and second order experiments. No statistically significant difference between the sham and 100 mT groups was observed for any of the calcium metrics. These data suggested that manipulation of free radical buffering capacity in HL-60 cells did not affect the sensitivity of the cells to a 100 mT static magnetic field. Bioelectromagnetics 30:213-221, 2009. © 2008 Wiley-Liss, Inc. [source]
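    The full width at half maximum (FWHM) of a calcium peak can be extracted from a sampled time series as follows. This is a generic sketch, not the authors' analysis code; it assumes a single peak and a caller-supplied baseline level.

```python
def fwhm(times: list[float], values: list[float], baseline: float = 0.0) -> float:
    """Full width at half maximum of a single peak in a sampled trace.

    Computes the half-maximum level relative to the baseline, then returns
    the time span over which the signal stays at or above that level.
    Resolution is limited by the sampling interval (no interpolation).
    """
    peak = max(values)
    half = baseline + (peak - baseline) / 2.0
    above = [t for t, v in zip(times, values) if v >= half]
    return max(above) - min(above) if above else 0.0

# Triangular peak rising from 0 to 10 and back: half max = 5,
# reached at t = 1 and t = 3, so FWHM = 2.
assert fwhm([0, 1, 2, 3, 4], [0, 5, 10, 5, 0]) == 2
```

    For real traces one would interpolate between samples at the half-maximum crossings; with a ~1 s sampling interval the simple version above is accurate to about one sample on each side.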

    Statistical Metrics for Quality Assessment of High-Density Tiling Array Data

    BIOMETRICS, Issue 2 2010
    Hui Tang
    Summary High-density tiling arrays are designed to blanket an entire genomic region of interest using tiled oligonucleotides at very high resolution and are widely used in various biological applications. Experiments are usually conducted in multiple stages, in which unwanted technical variations may be introduced. As tiling arrays become more popular and are adopted by many research labs, there is a pressing need to develop quality control tools, as was done for expression microarrays. We propose a set of statistical quality metrics analogous to those in expression microarrays with application to tiling array data. We also develop a method to estimate the significance level of an observed quality measurement using randomization tests. These methods have been applied to multiple real data sets, including three independent ChIP-chip experiments and one transcriptome mapping study, and they have successfully identified good quality chips as well as outliers in each study. [source]
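    A randomization test of the kind described estimates significance by recomputing the quality metric on shuffled copies of the data. The sketch below is a generic illustration, not the authors' method: lag-1 autocorrelation is used as a stand-in quality metric (neighboring probes in a tiled region are expected to be correlated, and shuffling destroys that structure).

```python
import math
import random

def lag1_autocorrelation(x: list[float]) -> float:
    """Correlation of a series with itself shifted by one position."""
    n = len(x)
    mean = sum(x) / n
    num = sum((x[i] - mean) * (x[i + 1] - mean) for i in range(n - 1))
    den = sum((v - mean) ** 2 for v in x)
    return num / den

def randomization_p_value(metric, data, n_perm=200, seed=0):
    """One-sided randomization p-value for an observed quality metric.

    Counts how often the metric on a shuffled copy of the data is at
    least as large as the observed value, with a +1 correction so the
    estimate can never be exactly zero.
    """
    rng = random.Random(seed)
    observed = metric(data)
    hits = sum(
        1
        for _ in range(n_perm)
        if metric(rng.sample(data, len(data))) >= observed
    )
    return (hits + 1) / (n_perm + 1)

# A smooth signal has strong local correlation that shuffling destroys,
# so the observed metric is judged significant.
signal = [math.sin(i / 5.0) for i in range(100)]
assert randomization_p_value(lag1_autocorrelation, signal) < 0.05
```

    The same `randomization_p_value` wrapper works for any scalar quality metric; only the choice of metric and the shuffling scheme (here, a full permutation of probe values) are specific to the application.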

    Teaching and Assessing Procedural Skills Using Simulation: Metrics and Methodology

    Richard L. Lammers MD
    Abstract Simulation allows educators to develop learner-focused training and outcomes-based assessments. However, the effectiveness and validity of simulation-based training in emergency medicine (EM) requires further investigation. Teaching and testing technical skills require methods and assessment instruments that are somewhat different than those used for cognitive or team skills. Drawing from work published by other medical disciplines as well as educational, behavioral, and human factors research, the authors developed six research themes: measurement of procedural skills; development of performance standards; assessment and validation of training methods, simulator models, and assessment tools; optimization of training methods; transfer of skills learned on simulator models to patients; and prevention of skill decay over time. The article reviews relevant and established educational research methodologies and identifies gaps in our knowledge of how physicians learn procedures. The authors present questions requiring further research that, once answered, will advance understanding of simulation-based procedural training and assessment in EM. [source]

    Defining Team Performance for Simulation-based Training: Methodology, Metrics, and Opportunities for Emergency Medicine

    Marc J. Shapiro MD
    Abstract Across health care, teamwork is a critical element for effective patient care. Yet, numerous well-intentioned training programs may fail to achieve the desired outcomes in team performance. Hope for the improvement of teamwork in health care is provided by the success of the aviation and military communities in utilizing simulation-based training (SBT) for training and evaluating teams. This consensus paper 1) proposes a scientifically based methodology for SBT design and evaluation, 2) reviews existing team performance metrics in health care along with recommendations, and 3) focuses on leadership as a target for SBT because it has a high likelihood to improve many team processes and ultimately performance. It is hoped that this discussion will assist those in emergency medicine (EM) and the larger health care field in the design and delivery of SBT for training and evaluating teamwork. [source]