New Metric

Selected Abstracts


A universal metric for sequential MIMO detection

EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 8 2007
Christian Kuhn
Conventionally, detection in multiple-antenna systems is based on a tree-search or a lattice-search with a metric that can be computed by recursively accumulating the corresponding metric increments for a given hypothesis. For that purpose, a multiple-antenna detector traditionally applies a preprocessing step to obtain the search-metric in a suitable form. In contrast to that, we present a reformulation of the search-metric that directly allows for an appropriate evaluation of the metric on the underlying structure without the need for a computationally costly preprocessing step. Unlike the traditional approach, the new metric can also be applied when the system has fewer receive than transmit antennas. We present simulation results in which the new metric is applied for turbo detection involving the list-sequential (LISS) detector that was pioneered by Joachim Hagenauer. Copyright 2007 John Wiley & Sons, Ltd.
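For orientation, the conventional accumulated metric that the paper reformulates can be sketched as follows. This is a minimal illustration, not the paper's new metric: it assumes the standard model y = Hs + n and the usual QR preprocessing step (which requires at least as many receive as transmit antennas); all names are illustrative.

```python
import numpy as np

def accumulated_metric(y, H, hypothesis):
    """Conventional tree-search metric: after QR preprocessing,
    ||y - H s||^2 decomposes into per-antenna increments that a
    sequential detector accumulates while descending the tree.
    y: receive vector, H: channel matrix, hypothesis: candidate
    symbol vector (all complex ndarrays)."""
    n_tx = H.shape[1]
    Q, R = np.linalg.qr(H)            # the costly preprocessing step
    y_hat = Q.conj().T @ y            # rotated receive vector
    total, partials = 0.0, []
    for i in range(n_tx - 1, -1, -1): # root level = last transmit antenna
        increment = abs(y_hat[i] - R[i, i:] @ hypothesis[i:]) ** 2
        total += increment            # metric increment for this level
        partials.append(total)
    return total, partials
```

A list-sequential detector extends only the most promising partial hypotheses ranked by these accumulated metrics; the paper's contribution is a metric that can be evaluated without the QR step.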


Assessing macroinvertebrate metrics for classifying acidified rivers across northern Europe

FRESHWATER BIOLOGY, Issue 7 2010
S. JANNICKE MOE
Summary 1. The effects of acidification on ecological status of rivers in Northern Europe must be assessed according to the EU Water Framework Directive (WFD). Several acidification metrics based on macroinvertebrates already exist in different countries, and the WFD requires that they be comparable across northern Europe. Thus, we compiled macroinvertebrate monitoring data from the U.K. (n = 191 samples), Norway (n = 740) and Sweden (n = 531) for analysis against pH. 2. We tested new and existing acidification metrics developed nationally and used within the Northern Geographical Intercalibration Group. The new metrics were based on the acidification sensitivity of selected species and are proposed as a first step towards a new common indicator for acidification for Northern Europe. 3. Metrics were assessed according to responsiveness to mean pH, degree of nonlinearity in response and consistency in responses across countries. We used flexible, nonparametric regression models to explore various properties of the pressure–response relationships. Metrics were also analysed with humic content (total organic carbon above/below 5 mg L⁻¹) as a covariate. 4. Most metrics responded clearly to pH, with the following metrics explaining most of the variance: Acid Water Indicator Community, Number of ephemeropteran families, Medin's index, Multimetric Indicator of Stream Acidification and the new metric 'Proportion of sensitive Ephemeroptera'. 5. Most metrics were significantly higher in humic than in clear-water rivers, suggesting smaller acidification effects in humic rivers. This result supports the proposed use of humic level as a typological factor in the assessment of acidification. 6. Some potentially important effects could not be considered in this study, such as the additional effects of metals, episodic acidification and the contrasting effects of natural versus anthropogenic acidity. We advocate further data collection and testing of metrics to incorporate these factors.
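The abstract names but does not define the 'Proportion of sensitive Ephemeroptera' metric; below is a minimal sketch of one plausible computation, assuming a predefined list of acid-sensitive ephemeropteran taxa. The taxon list and function name are invented for illustration.

```python
# Hypothetical sketch: share of ephemeropteran individuals in a sample
# that belong to acid-sensitive taxa. The sensitivity list here is
# invented for illustration; the paper defines the actual metric.
SENSITIVE_EPHEMEROPTERA = {"Baetis rhodani", "Heptagenia sulphurea"}

def proportion_sensitive_ephemeroptera(counts):
    """counts: dict mapping ephemeropteran taxon -> abundance."""
    total = sum(counts.values())
    if total == 0:
        return 0.0
    sensitive = sum(n for taxon, n in counts.items()
                    if taxon in SENSITIVE_EPHEMEROPTERA)
    return sensitive / total

sample = {"Baetis rhodani": 12, "Leptophlebia vespertina": 30}
print(proportion_sensitive_ephemeroptera(sample))  # 12/42 ≈ 0.29
```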


A new metric for evaluating the correspondence of spatial patterns in vegetation models

GLOBAL ECOLOGY AND BIOGEOGRAPHY, Issue 4 2008
Guoping Tang
ABSTRACT Aim: To present a new metric, the 'opposite and identity' (OI) index, for evaluating the correspondence between two sets of simulated time-series dynamics of an ecological variable. Innovation: The OI index is introduced and its mathematical expression is defined using vectors to denote simulated variations of an ecological variable on the basis of the vector addition rule. The value of the OI index varies from 0 to 1, with a value of 0 (or 1) indicating that compared simulations are opposite (or identical). An OI index with a value near 0.5 suggests that the difference in the amplitudes of variations between compared simulations is large. The OI index can be calculated in a grid cell, for a given biome and for time-series simulations. The OI indices calculated in each grid cell can be used to map the spatial agreement between compared simulations, allowing researchers to pinpoint the extent of agreement or disagreement between two simulations. The OI indices calculated for time-series simulations allow researchers to identify the time at which one simulation differs from another. A case study demonstrates the application and reliability of the OI index for comparing two simulated time-series dynamics of terrestrial net primary productivity in Asia from 1982 to 2000. In the case study, the OI index performs better than the correlation coefficient at accurately quantifying the agreement between the two simulations. Main conclusions: The OI index provides researchers with a useful tool and multiple flexible ways to compare two simulation results or to evaluate simulation results against observed spatiotemporal data. The OI index can, in some cases, quantify the agreement between compared spatiotemporal data more accurately than the correlation coefficient because of its insensitivity to influential data, outliers and the autocorrelation of simulated spatiotemporal data.
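The abstract describes the OI index only qualitatively. The sketch below is one assumption-laden reading based on the vector-addition rule: it reproduces the endpoints (identical changes give 1, exactly opposite changes give 0) but is not the paper's published formula, and it does not capture the stated behaviour near 0.5 for amplitude mismatches.

```python
import numpy as np

def oi_index(sim_a, sim_b):
    """Hypothetical reading of the 'opposite and identity' index:
    treat each simulation's interannual changes as a vector and compare
    the length of their sum with the sum of their lengths."""
    a = np.diff(np.asarray(sim_a, dtype=float))  # year-to-year changes
    b = np.diff(np.asarray(sim_b, dtype=float))
    denom = np.linalg.norm(a) + np.linalg.norm(b)
    if denom == 0:
        return 1.0  # both series constant: trivially identical
    return np.linalg.norm(a + b) / denom  # 1 = identical, 0 = opposite
```

Computed per grid cell, a quantity like this yields the spatial agreement maps the abstract describes; computed over successive time windows, it localizes when two simulations diverge.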


A probability-based analysis of temporal and spatial co-occurrence in grassland birds

JOURNAL OF BIOGEOGRAPHY, Issue 12 2006
Joseph A. Veech
Abstract Aim: To test for non-random co-occurrence in 36 species of grassland birds using a new metric and the C-score. The analysis used presence–absence data of birds distributed among 305 sites (or landscapes) over a period of 35 years. This analysis departs from traditional analyses of species co-occurrence in its use of temporal data and of individual species' probabilities of occurrence to derive analytically the expected co-occurrence between paired species. Location: Great Plains region, USA. Methods: Presence–absence data for the bird species were obtained from the North American Breeding Bird Survey. The analysis was restricted to species pairs whose geographic ranges overlapped. Each of 541 species pairs was classified as a positive, negative, or non-significant association depending on the mean difference between the observed and expected frequencies of co-occurrence over the 35-year time-span. Results: Of the 541 species pairs that were examined, 202 to 293 (37–54%) were positively associated, depending on which of two null models was used. However, only a few species pairs (<5%) were negatively associated. An additional 89 species pairs did not have overlapping ranges and hence represented de facto negative associations. The results from analyses based on C-scores generally agreed with the analyses based on the difference between observed and expected co-occurrence, although the latter analyses were slightly more powerful. Main conclusions: Grassland birds within the Great Plains region are primarily distributed among landscapes either independently or in conjunction with one another. Only a few species pairs exhibited repulsed or segregated distributions. This indicates that the shared preference for grassland habitat may be more important in producing coexistence than are negative species interactions in preventing it. The large number of non-significant associations may represent random associations and thereby indicate that the presence/absence of other grassland bird species may have little effect on whether a given focal species is also found within the landscape. In a broader context, the probability-based approach used in this study may be useful in future studies of species co-occurrence.
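The core of the probability-based approach can be illustrated: if two species occupy sites independently, with occupancy probabilities estimated from their observed incidences, the expected number of co-occupied sites among N sites is N·p1·p2. A minimal sketch under that independence assumption; names are illustrative, and the paper's two null models differ in detail.

```python
import numpy as np

def cooccurrence_obs_exp(pres1, pres2):
    """pres1, pres2: boolean presence/absence arrays over the same
    sites for one year. Expected co-occurrence assumes independent
    occupancy with probabilities given by each species' incidence."""
    pres1 = np.asarray(pres1, dtype=bool)
    pres2 = np.asarray(pres2, dtype=bool)
    observed = np.sum(pres1 & pres2)
    expected = pres1.size * pres1.mean() * pres2.mean()
    return observed, expected

# Classify a pair by the mean of (observed - expected) across the
# 35 yearly surveys: consistently positive -> positive association.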


Hyperspectral NIR image regression part II: dataset preprocessing diagnostics

JOURNAL OF CHEMOMETRICS, Issue 3-4 2006
James Burger
Abstract When known reference values such as concentrations are available, the spectra from near infrared (NIR) hyperspectral images can be used for building regression models. The sets of spectra must be corrected for errors, transformed to reflectance or absorbance values, and trimmed of bad pixel outliers in order to build robust models and minimize prediction errors. Calibration models can be computed from small (<100) sets of spectra, where each spectrum summarizes an individual image or spatial region of interest (ROI), and used to predict large (>20,000) test sets of spectra. When the distributions of these large populations of predicted values are viewed as histograms they provide mean sample concentrations (peak centers) as well as uniformity (peak widths) and purity (peak shape) information. The same predicted values can also be viewed as concentration maps or images adding spatial information to the uniformity or purity presentations. Estimates of large population statistics enable a new metric for determining the optimal number of model components, based on a combination of global bias and pooled standard deviation values computed from multiple test images or ROIs. Two example datasets are presented: an artificial mixture design of three chemicals with distinct NIR spectra and samples of different cheeses. In some cases it was found that baseline correction by taking first derivatives gave more useful prediction results by reducing optical problems. Other data pretreatments resulted in negligible changes in prediction errors, overshadowed by the variance associated with sample preparation or presentation and other physical phenomena. Copyright 2007 John Wiley & Sons, Ltd.
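A sketch of how the proposed population-statistics metric might be assembled: combine the global bias with the pooled standard deviation of predicted values over multiple test images or ROIs, and pick the number of model components that minimizes the combination. The root-sum-of-squares combination and all names here are assumptions, not the paper's exact recipe.

```python
import numpy as np

def population_error(preds_by_image, ref_by_image):
    """preds_by_image: list of arrays of per-pixel predicted values,
    one array per test image/ROI; ref_by_image: the matching known
    reference concentrations. Lower is better."""
    biases = [np.mean(p) - r for p, r in zip(preds_by_image, ref_by_image)]
    global_bias = np.mean(np.abs(biases))
    pooled_sd = np.sqrt(
        sum((p.size - 1) * np.var(p, ddof=1) for p in preds_by_image)
        / sum(p.size - 1 for p in preds_by_image))
    return np.hypot(global_bias, pooled_sd)  # root-sum-of-squares

# Choose the component count k that minimizes
# population_error(predict(model_k, test_images), references).
```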


STORM: software for testing hypotheses of relatedness and mating patterns

MOLECULAR ECOLOGY RESOURCES, Issue 6 2008
TIMOTHY R. FRASIER
Abstract STORM is a software package that allows users to test a variety of hypotheses regarding patterns of relatedness and patterns of mate choice and/or mate compatibility within a population. These functions are based on four main calculations that can be conducted either independently or in the hypothesis-testing framework: internal relatedness; homozygosity by loci; pairwise relatedness; and a new metric called allele inheritance, which calculates the proportion of loci at which an offspring inherits a paternal allele different from that inherited from its mother. STORM allows users to test four hypotheses based on these calculations and Monte Carlo simulations: (i) are individuals within observed associations or groupings more/less related than expected; (ii) do observed offspring have more/less genetic variability (based on internal relatedness or homozygosity by loci) than expected from the gene pool; (iii) are observed mating pairs more/less related than expected if mating is random with respect to relatedness; and (iv) do observed offspring inherit paternal alleles different from those inherited from the mother more/less often than expected based on Mendelian inheritance.
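The allele-inheritance calculation is the most concretely described of the four; a minimal sketch, assuming the parental origin of each offspring allele has already been resolved (e.g. against the mother's genotype). How STORM handles loci where origin is ambiguous is not described here.

```python
def allele_inheritance(offspring_loci):
    """offspring_loci: list of (maternal_allele, paternal_allele) pairs,
    one per locus, with parental origin already resolved. Returns the
    proportion of loci at which the paternal allele differs from the
    maternal one."""
    if not offspring_loci:
        return float("nan")
    differing = sum(1 for maternal, paternal in offspring_loci
                    if maternal != paternal)
    return differing / len(offspring_loci)

# e.g. microsatellite alleles coded by fragment length:
print(allele_inheritance([(120, 124), (98, 98), (141, 137)]))  # 2/3
```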


Voltage dependent photocurrent collection in CdTe/CdS solar cells

PROGRESS IN PHOTOVOLTAICS: RESEARCH & APPLICATIONS, Issue 7 2007
Steven Hegedus
Abstract The voltage dependence of the photocurrent JL(V) of CdTe/CdS solar cells has been characterized by separating the forward current from the photocurrent at several illumination intensities. JL(V) reduces the fill factor (FF) of typical cells by 10–15 points, the open circuit voltage (VOC) by 20–50 mV, and the efficiency by 2–4 points. Eliminating the effect of JL(V) establishes superposition between light and dark J(V) curves for some cells. Two models for voltage dependent collection give reasonable fits to the data: (1) a single carrier Hecht model developed for drift collection in p-i-n solar cells, in which fitting yields a parameter consistent with lifetimes of 10⁻⁹ s as measured by others; or (2) the standard depletion region and bulk diffusion length model, which fits almost as well. The simple Hecht-like drift collection model for photocurrent gives very good agreement with J(V) curves measured under AM1.5 light on CdTe/CdS solar cells with FF from 53% to 70%, CdTe thickness from 1.8 to 7.0 µm, in initial and stressed states. Accelerated thermal and bias stressing increases JL(V) losses, as does insufficient Cu. This method provides a new metric for tracking device performance, characterizes transport in the high field depletion region, and quantifies a significant FF loss in CdTe solar cells. Copyright 2007 John Wiley & Sons, Ltd.
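The single-carrier Hecht expression has a standard closed form; below is a sketch under common simplifying assumptions (fully depleted absorber, uniform field F = (V_bi − V)/d, drift length µτF). All parameter values are illustrative placeholders, not the paper's fitted numbers.

```python
import numpy as np

def hecht_collection(V, d=4e-4, mu_tau=1e-6, V_bi=0.8):
    """Single-carrier Hecht collection efficiency for drift collection:
    eta = (x/d)(1 - exp(-d/x)), with drift length x = mu*tau*F and a
    uniform field F = (V_bi - V)/d. Units: d in cm, mu_tau in cm^2/V;
    all defaults are illustrative placeholders."""
    F = np.maximum(V_bi - np.asarray(V, dtype=float), 1e-6) / d
    x = mu_tau * F                 # drift length shrinks as V -> V_bi
    return (x / d) * (1.0 - np.exp(-d / x))

# Voltage-dependent photocurrent: J_L(V) = J_L0 * hecht_collection(V),
# which visibly erodes the fill factor as V approaches V_bi.
print(hecht_collection([0.0, 0.5, 0.7]))  # declining collection
```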


A new metric for continuing medical education credit

THE JOURNAL OF CONTINUING EDUCATION IN THE HEALTH PROFESSIONS, Issue 3 2004
Nancy L. Davis, PhD, Director
Abstract The two major continuing medical education (CME) credit systems for allopathic physicians in the United States are administered by the American Medical Association (AMA) and the American Academy of Family Physicians (AAFP). This article explores the history of AMA and AAFP CME credit and its value to physicians and the patients they serve. Historically, CME credit has been awarded as hours for participation, but this approach is inadequate as a measure of CME and its impact on improving physician practice. New credit systems are needed to measure a CME activity by its value in bettering the physician's knowledge base, competence, and performance in practice.


Using fractional exhaled nitric oxide to guide asthma therapy: design and methodological issues for ASthma TReatment ALgorithm studies

CLINICAL & EXPERIMENTAL ALLERGY, Issue 4 2009
Prof. P. G. Gibson
Summary Background: Current asthma guidelines recommend treatment based on the assessment of asthma control using symptoms and lung function. Noninvasive markers are an attractive way to modify therapy, since they offer improved selection of active treatment(s) based on individual response, and improved titration of treatment using markers that are better related to treatment outcomes. Aims: To review the methodological and design features of noninvasive marker studies in asthma. Methods: Systematic assessment of published randomized trials of asthma therapy guided by fraction of exhaled nitric oxide (FENO). Results: FENO has appeal as a marker to adjust asthma therapy since it is readily measured, gives reproducible results, and is responsive to changes in inhaled corticosteroid doses. However, the five randomized trials of FENO-guided therapy have had mixed results. This may be because there are specific design and methodological issues that need to be addressed in the conduct of ASthma TReatment ALgorithm (ASTRAL) studies. There needs to be a clear dose-response relationship for the active drugs used and the outcomes measured. The algorithm decision points should be based on outcomes in the population of interest rather than the range of values in healthy people, and the algorithm used needs to provide a sufficiently different result to clinical decision making in order for there to be any discernible benefit. A new metric is required to assess the algorithm performance, and the discordance:concordance (DC) ratio can assist with this. Conclusion: Incorporating these design features into future FENO studies should improve study performance and aid in obtaining a better estimate of the value of FENO-guided asthma therapy.
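The abstract introduces the discordance:concordance (DC) ratio without defining it; the sketch below is one plausible reading (treatment decisions compared visit by visit between the marker-based algorithm and conventional clinical assessment), offered as an assumption rather than the authors' definition.

```python
def dc_ratio(algorithm_decisions, clinical_decisions):
    """Hypothetical discordance:concordance ratio: occasions on which a
    FENO-guided algorithm and clinical assessment would adjust therapy
    differently, divided by occasions on which they agree."""
    pairs = list(zip(algorithm_decisions, clinical_decisions))
    discordant = sum(1 for a, c in pairs if a != c)
    concordant = len(pairs) - discordant
    return discordant / concordant if concordant else float("inf")

# Decisions coded per visit, e.g. "step_up", "step_down", "no_change":
print(dc_ratio(["step_up", "no_change", "step_down"],
               ["step_up", "step_up", "step_down"]))  # 1/2 = 0.5
```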


Family Networks of Obesity and Type 2 Diabetes in Rural Appalachia

CLINICAL AND TRANSLATIONAL SCIENCE, Issue 6 2009
Petr Pancoska Ph.D.
Abstract The prevalence of obesity and diabetes has been studied in adolescent and adult populations in poor, medically underserved rural Appalachia of West Virginia. A web-based questionnaire about obesity and diabetes was completed by 989 family members of 210 Community-Based Participatory Research (CBPR)-trained adolescent members of a network of 18 science clubs, incorporating 142 families. After age-correction in those < 20 years old, 50% of both adolescents and adults were obese. The frequency distribution of obesity was trimodal. In the overall population 10.4% had type 2 diabetes, while 24% of obese adults had type 2 diabetes. A new metric, the family diabetes risk potential, identified a trimodal distribution of risk potential. In the lowest, most common distribution, 43% of families had a diabetic family member. In the intermediate distribution, 69% had a diabetic family member, and in the distribution with the highest scores all the families had a diabetic member. In conclusion, the poorest counties of rural Appalachia are at a crisis level in the prevalence of obesity and diabetes. The distributions of age-corrected obesity and family diabetes risk potential are not normal. We suggest that targeting the individual family units at greatest risk offers the most efficient strategy for ameliorating this epidemic.


Making the case for objective performance metrics in newborn screening by tandem mass spectrometry

DEVELOPMENTAL DISABILITIES RESEARCH REVIEW, Issue 4 2006
Piero Rinaldo
Abstract The expansion of newborn screening programs to include multiplex testing by tandem mass spectrometry requires understanding and close monitoring of performance metrics. This is not done consistently because of a lack of defined targets, and interlaboratory comparison is almost nonexistent. Between July 2004 and April 2006 (N = 176,185 cases), the overall performance metrics of the Minnesota program, limited to MS/MS testing, were as follows: detection rate 1:1,816, positive predictive value 37% (54% in 2006 to date), and false positive rate 0.09%. The repeat rate and the proportion of cases with abnormal findings actually reported are new metrics proposed here as an objective means of expressing the overall noise in a program, where noise is defined as the total number of abnormal results obtained using a given set of cut-off values. On the basis of our experience, we propose the following targets as evidence of adequate analytical and postanalytical performance: detection rate 1:3,000 or higher, positive predictive value >20%, and false positive rate <0.3%. 2006 Wiley-Liss, Inc. MRDD Research Reviews 2006;12:255–261.
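The three headline metrics are simple ratios; the sketch below shows their arithmetic, with true/false positive counts back-calculated approximately from the Minnesota figures quoted above (the exact counts are assumptions for illustration).

```python
def screening_metrics(true_pos, false_pos, total_screened):
    """Headline performance metrics for a newborn screening program."""
    detection_rate = true_pos / total_screened        # target 1:3,000+
    ppv = true_pos / (true_pos + false_pos)           # target > 20%
    fpr = false_pos / total_screened                  # target < 0.3%
    return detection_rate, ppv, fpr

# ~97 true and ~165 false positives reproduce the quoted figures:
dr, ppv, fpr = screening_metrics(97, 165, 176185)
print(f"detection 1:{round(1 / dr):,}, PPV {ppv:.0%}, FPR {fpr:.2%}")
# -> detection 1:1,816, PPV 37%, FPR 0.09%
```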


Three target document range metrics for university web sites

JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 6 2003
Mike Thelwall
Three new metrics are introduced that measure the range of use of a university Web site by its peers, through different heuristics for counting links targeted at its pages. All three give results that correlate significantly with the research productivity of the target institution. The directory range model, which is based upon summing the number of distinct directories targeted by each other university, produces the most promising results of any link metric yet. Based upon an analysis of changes between models, it is suggested that range models measure essentially the same quantity as their predecessors but are less susceptible to spurious causes of multiple links and are therefore more robust.
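A sketch of the directory range model as described: for each linking university, count the distinct directories containing the pages it targets, then sum over universities. Function and variable names are illustrative.

```python
from os.path import dirname
from urllib.parse import urlparse

def directory_range(links_by_university):
    """links_by_university: dict mapping each linking university to the
    list of URLs it targets at the studied site. Repeated links into
    the same directory count once, damping spurious multiple links."""
    return sum(len({dirname(urlparse(url).path) for url in urls})
               for urls in links_by_university.values())

links = {
    "uni-a.ac.uk": ["http://x.ac.uk/maths/p1.html",
                    "http://x.ac.uk/maths/p2.html"],   # one directory
    "uni-b.ac.uk": ["http://x.ac.uk/physics/p.html"],  # one directory
}
print(directory_range(links))  # 2
```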


Assessing suturing techniques using a virtual reality surgical simulator

MICROSURGERY, Issue 6 2010
Hamed Kazemi M.Eng.
Advantages of virtual-reality simulators for surgical skill assessment and training include more training time, no risk to patients, repeatable difficulty levels, and reliable feedback, without the resource demands and ethical issues of animal-based training. We tested this for a key subtask and showed a strong link between skill in the simulator and in reality. Suturing performance was assessed for four groups of participants, including experienced surgeons and naive subjects, on a custom-made virtual-reality simulator. Each subject repeated the experiment 30 times, using five different types of needles to perform a standardized suture placement task. Traditional metrics of performance as well as new metrics enabled by our system were proposed, and the data indicate differences between trained and untrained performance. In all traditional parameters, such as time, number of attempts, and motion quantity, the medical surgeons outperformed the other three groups, though the differences were not significant. However, motion smoothness, penetration and exit angles, tear size areas, and orientation change were statistically significant in the trained group when compared with the untrained group. This suggests that these parameters can be used in virtual microsurgery training. 2010 Wiley-Liss, Inc. Microsurgery 30:479–486, 2010.
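Motion smoothness, one of the discriminating parameters, is commonly quantified with a jerk-based cost; the sketch below uses one standard normalization (dimensionless, lower = smoother). This is a conventional formulation offered as an assumption, not necessarily the paper's exact definition.

```python
import numpy as np

def normalized_jerk(positions, dt):
    """positions: (n, 3) array of tool-tip coordinates sampled every dt
    seconds (n >= 4). Dimensionless jerk cost, one common convention:
    sqrt(0.5 * integral(|jerk|^2) * duration^5 / path_length^2)."""
    positions = np.asarray(positions, dtype=float)
    jerk = np.diff(positions, n=3, axis=0) / dt**3     # third derivative
    duration = (len(positions) - 1) * dt
    path = np.sum(np.linalg.norm(np.diff(positions, axis=0), axis=1))
    integral = np.sum(jerk**2) * dt
    return np.sqrt(0.5 * integral * duration**5 / path**2)
```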


Science, Ethics, and the "Problems" of Governing Nanotechnologies

THE JOURNAL OF LAW, MEDICINE & ETHICS, Issue 4 2009
Linda F. Hogle
Commentators continue to weigh in on whether there are ethical, social, and policy issues unique to nanotechnology, whether new regulatory schemes should be devised, and if so, how. Many of these commentaries fail to take into account the historical and political environment for nanotechnologies. That context affects regulatory and oversight systems as much as any new metrics to measure the effects of nanoscale materials, or organizational changes put in place to facilitate data analysis. What comes to count as a technical or social "problem" says much about the sociotechnical and political-historical networks in which technologies exist. This symposium's case studies provide insight into procedural successes and failures in the regulation of novel products, and ethical or social analyses that have attended to implications of novel, disruptive technologies. Yet what may be needed is a more fundamental consideration of forms of governance that may not just handle individual products or product types more effectively, but may also be flexible enough to respond to radically new technological systems. Nanotechnology presents an opportunity to think in transdisciplinary terms about both scientific and social concerns, rethink "knowns" about risk and how best to ameliorate or manage it, and consider how to incorporate ethical, social, and legal analyses in the conceptualization, planning, and execution of innovations.