Error Rate (error + rate)

Kinds of Error Rate

  • bit error rate
  • family-wise error rate
  • genotyping error rate
  • type I error rate
  • type II error rate

Terms modified by Error Rate

  • error rate performance

Selected Abstracts


    Multicarrier Modulation with Multistage Encoding/Decoding for Nakagami Fading Channels

    EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 5 2000
    Lev Goldfeld
    The Multi Carrier Modulation (MCM) system with a multistage encoding/decoding scheme, based on repetition and erasures-correcting decoding of block codes and applied to a Nakagami fading channel, is considered. The analytical Bit Error Rate (BER) as a function of Signal-to-Noise Ratio (SNR) is found to agree well with simulation results. It is shown that for low SNR the proposed system has a lower BER than both the MCM with Forward Error Correction (FEC) and the MCM with optimal diversity reception and FEC. [source]
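    As a rough illustration of how a BER-versus-SNR curve of this kind can be checked by simulation, the sketch below Monte Carlo-estimates the BER of plain BPSK over a flat Nakagami-m fading channel. It is not the authors' MCM system with multistage encoding/decoding; the modulation, the value m = 2, and the bit counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ber_bpsk_nakagami(snr_db, m=2.0, n_bits=200_000):
    """Monte Carlo BER of BPSK over a flat Nakagami-m fading channel (illustrative only)."""
    snr = 10 ** (snr_db / 10.0)
    bits = rng.integers(0, 2, n_bits)
    symbols = 1.0 - 2.0 * bits                       # BPSK mapping: 0 -> +1, 1 -> -1
    # Nakagami-m amplitude: square root of a Gamma(m, 1/m) power with unit mean
    power = rng.gamma(shape=m, scale=1.0 / m, size=n_bits)
    h = np.sqrt(power)
    noise = rng.normal(scale=np.sqrt(1.0 / (2.0 * snr)), size=n_bits)
    received = h * symbols + noise
    decisions = (received < 0).astype(int)           # coherent detection, fading known
    return np.mean(decisions != bits)

for snr_db in (0, 5, 10, 15):
    print(f"SNR = {snr_db:2d} dB  ->  BER ~ {ber_bpsk_nakagami(snr_db):.4f}")
```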


    Generalized Additive Models and Lucilia sericata Growth: Assessing Confidence Intervals and Error Rates in Forensic Entomology

    JOURNAL OF FORENSIC SCIENCES, Issue 4 2008
    Aaron M. Tarone Ph.D.
    Abstract: Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error. [source]
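    A minimal sketch of fitting a GAM with smooth terms, in the spirit of the models described. It assumes the pygam package is available and uses synthetic stand-in data (length and temperature predicting percent development); it is not the authors' 18 models or their data.

```python
import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(1)

# Synthetic stand-in data: percent development as a function of larval length and temperature
n = 500
length = rng.uniform(2, 20, n)             # mm
temp = rng.uniform(16, 34, n)              # deg C
percent_dev = np.clip(5 * np.sqrt(length) + 0.3 * temp + rng.normal(0, 2, n), 0, 100)

X = np.column_stack([length, temp])
gam = LinearGAM(s(0) + s(1)).fit(X, percent_dev)    # smooth terms for length and temperature

gam.summary()
print("first predictions:", np.round(gam.predict(X[:5]), 1))
```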


    Resampling-Based Empirical Bayes Multiple Testing Procedures for Controlling Generalized Tail Probability and Expected Value Error Rates: Focus on the False Discovery Rate and Simulation Study

    BIOMETRICAL JOURNAL, Issue 5 2008
    Sandrine Dudoit
    Abstract This article proposes resampling-based empirical Bayes multiple testing procedures for controlling a broad class of Type I error rates, defined as generalized tail probability (gTP) error rates, gTP (q,g) = Pr(g (Vn,Sn) > q), and generalized expected value (gEV) error rates, gEV (g) = E [g (Vn,Sn)], for arbitrary functions g (Vn,Sn) of the numbers of false positives Vn and true positives Sn. Of particular interest are error rates based on the proportion g (Vn,Sn) = Vn /(Vn + Sn) of Type I errors among the rejected hypotheses, such as the false discovery rate (FDR), FDR = E [Vn /(Vn + Sn)]. The proposed procedures offer several advantages over existing methods. They provide Type I error control for general data generating distributions, with arbitrary dependence structures among variables. Gains in power are achieved by deriving rejection regions based on guessed sets of true null hypotheses and null test statistics randomly sampled from joint distributions that account for the dependence structure of the data. The Type I error and power properties of an FDR-controlling version of the resampling-based empirical Bayes approach are investigated and compared to those of widely-used FDR-controlling linear step-up procedures in a simulation study. The Type I error and power trade-off achieved by the empirical Bayes procedures under a variety of testing scenarios allows this approach to be competitive with or outperform the Storey and Tibshirani (2003) linear step-up procedure, as an alternative to the classical Benjamini and Hochberg (1995) procedure. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
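    The quantities gTP(q, g) and gEV(g) for g(V, S) = V/(V + S) can be illustrated with a toy Monte Carlo. The sketch below uses naive per-test thresholding rather than the resampling-based empirical Bayes procedure of the paper; the number of tests, effect size, and thresholds are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def fdp_error_rates(n_tests=100, n_true_null=80, effect=2.5, q=0.1, n_sims=2000):
    """Monte Carlo estimates of gTP(q, g) = Pr(g(V, S) > q) and gEV(g) = E[g(V, S)]
    for g(V, S) = V / (V + S), using naive per-test thresholding."""
    fdp = np.empty(n_sims)
    for i in range(n_sims):
        # null test statistics ~ N(0, 1); alternatives shifted by `effect`
        z = rng.normal(size=n_tests)
        z[n_true_null:] += effect
        reject = np.abs(z) > 1.96             # two-sided test at alpha = 0.05
        V = np.sum(reject[:n_true_null])      # false positives
        S = np.sum(reject[n_true_null:])      # true positives
        fdp[i] = V / max(V + S, 1)            # false discovery proportion, 0 if nothing rejected
    return np.mean(fdp > q), np.mean(fdp)

gtp, gev = fdp_error_rates()
print(f"gTP(0.1) ~ {gtp:.3f}   gEV (FDR) ~ {gev:.3f}")
```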


    Error rate on the antisaccade task: Heritability and developmental change in performance among preadolescent and late-adolescent female twin youth

    PSYCHOPHYSIOLOGY, Issue 5 2002
    Stephen M. Malone
    We examined heritability of error rate on the antisaccade task among female twin youths. This task appears to be sensitive to prefrontal functioning, providing a measure of individual differences in inhibitory control associated with genetic risk for schizophrenia. The sample consisted of 674 11-year-olds and 616 17-year-olds, comprising the two cohorts of female twins from the Minnesota Twin Family Study, a population-based investigation of substance abuse and related psychopathology. We used biometric model-fitting methods to determine the relative magnitude of genetic and environmental influences on performance. In both age cohorts, the best fitting model contained additive genes and nonshared environment. Despite substantial age-related differences in mean performance levels (effect size = .81), additive genes accounted for greater than half the variance in performance in both age cohorts. These results are consistent with the hypothesis that antisaccade error rate might serve as an endophenotype for behavior disorders reflecting frontal lobe dysfunction or problems with inhibitory control. [source]
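    The study fits full biometric (ACE-type) models; as a simplified stand-in, Falconer's formulas give rough variance components directly from MZ and DZ twin correlations. The correlations used below are hypothetical, not the study's estimates.

```python
def falconer_estimates(r_mz, r_dz):
    """Rough variance-component estimates from twin correlations (Falconer's formulas).
    A simplification of the biometric model fitting used in the study."""
    a2 = 2 * (r_mz - r_dz)        # additive genetic variance (heritability)
    c2 = 2 * r_dz - r_mz          # shared environment
    e2 = 1 - r_mz                 # nonshared environment (plus measurement error)
    return a2, c2, e2

# Hypothetical twin correlations for antisaccade error rate (illustrative numbers only)
a2, c2, e2 = falconer_estimates(r_mz=0.55, r_dz=0.25)
print(f"A (additive genes) ~ {a2:.2f}, C (shared env) ~ {c2:.2f}, E (nonshared env) ~ {e2:.2f}")
```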


    3D ultrasound in robotic surgery: performance evaluation with stereo displays

    THE INTERNATIONAL JOURNAL OF MEDICAL ROBOTICS AND COMPUTER ASSISTED SURGERY, Issue 3 2006
    Paul M. Novotny
    Abstract Background The recent advent of real-time 3D ultrasound (3DUS) imaging enables a variety of new surgical procedures. These procedures are hampered by the difficulty of manipulating tissue guided by the distorted, low-resolution 3DUS images. To lessen the effects of these limitations, we investigated stereo displays and surgical robots for 3DUS-guided procedures. Methods By integrating real-time stereo rendering of 3DUS with the binocular display of a surgical robot, we compared stereo-displayed 3DUS with normally displayed 3DUS. To test the efficacy of stereo-displayed 3DUS, eight surgeons and eight non-surgeons performed in vitro tasks with the surgical robot. Results Error rates dropped by 50% with a stereo display. In addition, subjects completed tasks faster with the stereo-displayed 3DUS as compared to normal-displayed 3DUS. A 28% decrease in task time was seen across all subjects. Conclusions The results highlight the importance of using a stereo display. By reducing errors and increasing speed, it is an important enhancement to 3DUS-guided robotics procedures. Copyright © 2006 John Wiley & Sons, Ltd. [source]


    SORTAL ANAPHORA RESOLUTION IN MEDLINE ABSTRACTS

    COMPUTATIONAL INTELLIGENCE, Issue 1 2007
    Manabu Torii
    This paper reports our investigation of machine learning methods applied to anaphora resolution for biology texts, particularly paper abstracts. Our primary concern is the investigation of features and their combinations for effective anaphora resolution. In this paper, we focus on the resolution of demonstrative phrases and definite determiner phrases, the two most prevalent forms of anaphoric expressions that we find in biology research articles. Different resolution models are developed for demonstrative and definite determiner phrases. Our work shows that models may be optimized differently for each of the phrase types. Also, because a significant number of definite determiner phrases are not anaphoric, we induce a model to detect anaphoricity, i.e., a model that classifies phrases as either anaphoric or nonanaphoric. We propose several novel features that we call highlighting features, and consider their utility particularly for processing paper abstracts. The system using the highlighting features achieved accuracies of 78% and 71% for demonstrative phrases and definite determiner phrases, respectively. The use of the highlighting features reduced the error rate by about 10%. [source]


    Assessment of Optical Coherence Tomography Imaging in the Diagnosis of Non-Melanoma Skin Cancer and Benign Lesions Versus Normal Skin: Observer-Blinded Evaluation by Dermatologists and Pathologists

    DERMATOLOGIC SURGERY, Issue 6 2009
    METTE MOGENSEN MD
    BACKGROUND Optical coherence tomography (OCT) is an optical imaging technique that may be useful in diagnosis of non-melanoma skin cancer (NMSC). OBJECTIVES To describe OCT features in NMSC such as actinic keratosis (AK) and basal cell carcinoma (BCC) and in benign lesions and to assess the diagnostic accuracy of OCT in differentiating NMSC from benign lesions and normal skin. METHODS AND MATERIALS OCT and polarization-sensitive (PS) OCT from 104 patients were studied. Observer-blinded evaluation of OCT images from 64 BCCs, 1 baso-squamous carcinoma, 39 AKs, two malignant melanomas, nine benign lesions, and 105 OCT images from perilesional skin was performed; 50 OCT images of NMSC and 50 PS-OCT images of normal skin were evaluated twice. RESULTS Sensitivity was 79% to 94% and specificity 85% to 96% in differentiating normal skin from lesions. Important features were absence of well-defined layering in OCT and PS-OCT images and dark lobules in BCC. Discrimination of AK from BCC had an error rate of 50% to 52%. CONCLUSION OCT features in NMSC are identified, but AK and BCC cannot be differentiated. OCT diagnosis is less accurate than clinical diagnosis, but high accuracy in distinguishing lesions from normal skin, crucial for delineating tumor borders, was obtained. [source]
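    Sensitivity and specificity of the kind reported here come straight from the observer ratings cross-tabulated against the true lesion status. A minimal sketch with hypothetical counts:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: lesion images rated "lesion" vs perilesional skin rated "normal"
sens, spec = sensitivity_specificity(tp=98, fn=17, tn=95, fp=10)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
```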


    Performance analysis of optically preamplified DC-coupled burst mode receivers

    EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 3 2009
    T. J. Zuo
    Bit error rate and threshold acquisition penalty evaluation is performed for an optically preamplified DC-coupled burst mode receiver using a moment generating function (MGF) description of the signal plus noise. The threshold itself is a random variable and is also described using an appropriate MGF. Chernoff bound (CB), modified Chernoff bound (MCB) and the saddle-point approximation (SPA) techniques make use of the MGF to provide the performance analyses. This represents the first time that these widely used approaches to receiver performance evaluation have been applied to an optically preamplified burst mode receiver and it is shown that they give threshold acquisition penalty results in good agreement with a prior existing approach, whilst having the facility to incorporate arbitrary receiver filtering, receiver thermal noise and non-ideal extinction ratio. A traditional Gaussian approximation (GA) is also calculated and comparison shows that it is clearly less accurate (it exceeds the upper bounds provided by CB and MCB) in the realistic cases examined. It is deduced, in common with the equivalent continuous mode analysis, that the MCB is the most sensible approach. Copyright © 2009 John Wiley & Sons, Ltd. [source]
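    The Chernoff bound referred to here upper-bounds a tail probability using the MGF: Pr(X >= a) <= min over s > 0 of exp(-s*a) * M_X(s). The sketch below applies it to a stand-in Gaussian decision statistic, not to the optically preamplified burst mode receiver model of the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def chernoff_bound(mgf, threshold, s_max=10.0):
    """Chernoff bound on Pr(X >= threshold): min_{s>0} exp(-s*threshold) * mgf(s)."""
    objective = lambda s: np.exp(-s * threshold) * mgf(s)
    res = minimize_scalar(objective, bounds=(1e-6, s_max), method="bounded")
    return res.fun

# Stand-in example: zero-mean, unit-variance Gaussian decision statistic crossing a threshold
gaussian_mgf = lambda s: np.exp(0.5 * s ** 2)
for thr in (2.0, 3.0, 4.0):
    print(f"threshold {thr}: Chernoff bound {chernoff_bound(gaussian_mgf, thr):.3e}")
```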


    Analysis of the effects of Nyquist pulse-shaping on the performance of OFDM systems with carrier frequency offset

    EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 1 2009
    Peng Tan
    An exact method for calculating the bit error rate (BER) of an uncoded orthogonal frequency-division multiplexing (OFDM) system with transmitter Nyquist pulse-shaping over AWGN channels in the presence of frequency offset is derived. This method represents a unified way to calculate the BER of this system with different one- and two-dimensional subcarrier modulation formats. The precise BER expressions are obtained using a characteristic function method. The effects of several widely referenced Nyquist pulse-shapings, including the Franks pulse, the raised-cosine pulse, the 'better than' raised-cosine (BTRC) pulse, the second-order continuous window (SOCW), the double-jump pulse and the polynomial pulse on intercarrier interference (ICI) reduction and BER improvement of the system with carrier frequency offset are examined in the AWGN channel. The dependence of the BER on the roll-off factor of the pulse employed for a specific system in the presence of frequency offset is investigated. Analysis and numerical results show that the Franks pulse exhibits the best performance among the Nyquist pulses considered in most cases. Copyright © 2008 John Wiley & Sons, Ltd. [source]
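    Of the pulses listed, the raised-cosine is the most standard; a small sketch of its frequency response shows how the roll-off factor reshapes the spectrum. The other pulses (Franks, BTRC, SOCW, double-jump, polynomial) are not reproduced here, and the sample frequencies are arbitrary.

```python
import numpy as np

def raised_cosine_spectrum(f, T=1.0, beta=0.35):
    """Frequency response of the raised-cosine Nyquist pulse with roll-off factor beta (0 < beta <= 1)."""
    f = np.abs(np.asarray(f, dtype=float))
    f1 = (1 - beta) / (2 * T)                 # end of the flat region
    f2 = (1 + beta) / (2 * T)                 # end of the roll-off region
    H = np.zeros_like(f)
    H[f <= f1] = T
    mid = (f > f1) & (f <= f2)
    H[mid] = (T / 2) * (1 + np.cos(np.pi * T / beta * (f[mid] - f1)))
    return H

freqs = np.linspace(0, 1.0, 6)
for beta in (0.25, 0.5, 1.0):
    print(f"beta = {beta}:", np.round(raised_cosine_spectrum(freqs, beta=beta), 3))
```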


    Wireless signal-preamble assisted Mach–Zehnder modulator bias stabilisation in wireless signal transmission over optical fibre

    EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 6 2008
    Debashis Chanda
    Lithium niobate based Mach–Zehnder electro-optic modulators are increasingly being used in high-speed digital as well as analog optical links. Depending on the application, digital or analog, the bias point of such a modulator is held constant at a particular point on the sinusoidal electrical to optical power transfer characteristics of the modulator. Bias point drift is one of the major limitations of lithium niobate based Mach–Zehnder electro-optic modulators. This increases the bit error rate of the system and affects adjacent channel performances. In one of the most popular methods of bias control, a pilot tone is used to track the bias point drift. However, pilot tone based bias tracking reduces overall intermodulation free dynamic range of the link. In this paper we propose a method where Mach–Zehnder modulator bias drift is tracked and maintained at the desired point by tracking the power variation of the preamble of wireless signal data frames. The method has no detrimental effects on system performances as no external signal is exclusively injected into the system for bias tracking purposes. Copyright © 2007 John Wiley & Sons, Ltd. [source]
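    The sinusoidal electrical-to-optical power transfer characteristic mentioned here can be written as P_out = (P_in/2)(1 + cos(pi*V/V_pi)). Below is a small sketch showing the effect of bias drift around the quadrature point; V_pi, the drift value, and the quadrature bias choice are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def mzm_output_power(v_in, p_in=1.0, v_pi=5.0, v_bias=None, drift=0.0):
    """Sinusoidal power transfer of a Mach-Zehnder modulator:
    P_out = (P_in / 2) * (1 + cos(pi * (v_in + v_bias + drift) / V_pi)).
    Quadrature bias (v_bias = V_pi / 2) puts operation at the most linear point."""
    if v_bias is None:
        v_bias = v_pi / 2.0
    phase = np.pi * (v_in + v_bias + drift) / v_pi
    return 0.5 * p_in * (1.0 + np.cos(phase))

v = np.linspace(-1.0, 1.0, 5)            # small RF drive swing around the bias point
print("no drift:      ", np.round(mzm_output_power(v), 3))
print("0.5 V of drift:", np.round(mzm_output_power(v, drift=0.5), 3))
```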


    Fuzzy-based multiuser detector for impulsive CDMA channel

    EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 7 2007
    Adel M. Hmidat
    A new fuzzy multiuser detector for non-Gaussian synchronous direct sequence code division multiple access (DS-CDMA) is proposed for jointly mitigating the effects of impulsive noise and multiple access interference (MAI). The proposed scheme combines a linear decorrelator and antenna array with a nonlinear preprocessor based on fuzzy logic and rank ordering. The fuzzy rule is incorporated to combat impulsive noise by eliminating outliers from the received signal. The performance of the proposed scheme is assessed by Monte Carlo simulations and the obtained results demonstrate that the proposed fuzzy detector outperforms other reported schemes in terms of bit error rate (BER) and channel capacity. Copyright © 2007 John Wiley & Sons, Ltd. [source]


    Joint data detection and estimation of time-varying multipath Rayleigh fading channels in asynchronous DS-CDMA systems with long spreading sequences

    EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 2 2007
    Pei Xiao
    In this paper, we present a joint approach to data detection and channel estimation for the asynchronous direct-sequence code-division multiple access (DS-CDMA) systems employing orthogonal signaling formats and long scrambling codes. Our emphasis is placed on different channel estimation algorithms since the performance of a communication system depends largely on its ability to retrieve an accurate measurement of the underlying channel. We investigate channel estimation algorithms under different conditions. The estimated channel information is used to enable coherent data detection to combat the detrimental effect of the multiuser interference and the multipath propagation of the transmitted signal. In the considered multiuser detector, we mainly use interference cancellation techniques, which are suitable for long-code CDMA systems. Interference cancellation and channel estimation using soft estimates of the transmitted signal is also proposed in this paper. Different channel estimation schemes are evaluated and compared in terms of mean square error (MSE) of channel estimation and bit error rate (BER) performance. Based on our analysis and numerical results, some recommendations are made on how to choose appropriate channel estimators in practical systems. Copyright © 2006 AEIT [source]


    Serially concatenated continuous phase modulation with symbol interleavers: performance, properties and design principles

    EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 4 2006
    Ming Xiao
    Serially concatenated continuous phase modulation (SCCPM) systems with symbol interleavers are investigated. The transmitted signals are disturbed by additive white Gaussian noise (AWGN). Compared to bit interleaved SCCPM systems, this scheme shows a substantial improvement in the convergence threshold at the price of a higher error floor. In addition to showing this property, we also investigate the underlying reason by error event analysis. In order to estimate bit error rate (BER) performance, we generalise traditional union bounds for a bit interleaver to this non-binary interleaver. For the latter, both the order and the position of permuted non-zero symbols have to be considered. From the analysis, some principal properties are identified. Finally, some design principles are proposed. Our paper concentrates on SCCPM, but the proposed analysis methods and conclusions can be widely used in many other systems such as serially concatenated trellis coded modulation (SCTCM) et cetera. Copyright © 2006 AEIT [source]


    Load-Adaptive MUI/ISI-Resilient Generalized Multi-Carrier CDMA with Linear and DF Receivers

    EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 6 2000
    Georgios B. Giannakis
    A plethora of single-carrier and multi-carrier (MC) CDMA systems have been proposed recently to mitigate intersymbol interference (ISI) and eliminate multiuser interference (MUI). We present a unifying all-digital Generalized Multicarrier CDMA framework which enables us to describe existing CDMA schemes and to highlight thorny problems associated with them. To improve the bit error rate (BER) performance of existing schemes, we design block FIR transmitters and decision feedback (DF) receivers based on an inner-code/outer-code principle, which guarantees MUI/ISI-elimination regardless of the frequency-selective physical channel. The flexibility of our framework allows further BER enhancements by taking into account the load in the system (number of active users), while blind channel estimation results in bandwidth savings. Simulations illustrate the superiority of our framework over competing MC CDMA alternatives especially in the presence of uplink multipath channels. [source]


    WHY DOES A METHOD THAT FAILS CONTINUE TO BE USED? THE ANSWER

    EVOLUTION, Issue 4 2009
    It has been claimed that hundreds of researchers use nested clade phylogeographic analysis (NCPA) based on what the method promises rather than requiring objective validation of the method. The supposed failure of NCPA is based upon the argument that validating it by using positive controls ignored type I error, and that computer simulations have shown a high type I error. The first argument is factually incorrect: the previously published validation analysis fully accounted for both type I and type II errors. The simulations that indicate a 75% type I error rate have serious flaws and only evaluate outdated versions of NCPA. These outdated type I error rates fall precipitously when the 2003 version of single-locus NCPA is used or when the 2002 multilocus version of NCPA is used. It is shown that the tree-wise type I errors in single-locus NCPA can be corrected to the desired nominal level by a simple statistical procedure, and that multilocus NCPA reconstructs a simulated scenario used to discredit NCPA with 100% accuracy. Hence, NCPA is not a failed method at all, but rather has been validated both by actual data and by simulated data in a manner that satisfies the published criteria given by its critics. The critics have come to different conclusions because they have focused on the pre-2002 versions of NCPA and have failed to take into account the extensive developments in NCPA since 2002. Hence, researchers can choose to use NCPA based upon objective critical validation that shows that NCPA delivers what it promises. [source]


    STATISTICAL ANALYSIS OF DIVERSIFICATION WITH SPECIES TRAITS

    EVOLUTION, Issue 1 2005
    Emmanuel Paradis
    Abstract Testing whether some species traits have a significant effect on diversification rates is central to the assessment of macroevolutionary theories. However, we still lack a powerful method to tackle this objective. I present a new method for the statistical analysis of diversification with species traits. The required data are observations of the traits on recent species, the phylogenetic tree of these species, and reconstructions of ancestral values of the traits. Several traits, either continuous or discrete, and in some cases their interactions, can be analyzed simultaneously. The parameters are estimated by the method of maximum likelihood. The statistical significance of the effects in a model can be tested with likelihood ratio tests. A simulation study showed that past random extinction events do not affect the Type I error rate of the tests, whereas statistical power is decreased, though some power is still kept if the effect of the simulated trait on speciation is strong. The use of the method is illustrated by the analysis of published data on primates. The analysis of these data showed that the apparent overall positive relationship between body mass and species diversity is actually an artifact due to a clade-specific effect. Within each clade the effect of body mass on speciation rate was in fact negative. The present method allows both effects (clade and body mass) to be taken into account simultaneously. [source]
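    The likelihood ratio testing machinery mentioned here is standard: twice the log-likelihood difference between nested models is compared with a chi-square reference. A minimal sketch with placeholder log-likelihood values (not results from the primate analysis):

```python
from scipy.stats import chi2

def likelihood_ratio_test(loglik_null, loglik_alt, df):
    """LRT statistic 2 * (l_alt - l_null) compared with a chi-square(df) reference."""
    stat = 2.0 * (loglik_alt - loglik_null)
    return stat, chi2.sf(stat, df)

# Placeholder log-likelihoods: model with a trait effect on speciation vs without it
stat, p = likelihood_ratio_test(loglik_null=-512.4, loglik_alt=-508.1, df=1)
print(f"LRT statistic = {stat:.2f}, p-value = {p:.4f}")
```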


    Free space quantum key distribution: Towards a real life application

    FORTSCHRITTE DER PHYSIK/PROGRESS OF PHYSICS, Issue 8-10 2006
    H. Weier
    Abstract Quantum key distribution (QKD) [1] is the first method of quantum information science that will find its way into our everyday life. It employs fundamental laws of quantum physics to ensure provably secure symmetric key generation between two parties. The key can then be used to encrypt and decrypt sensitive data with unconditional security. Here, we report on a free space QKD implementation using strongly attenuated laser pulses over a distance of 480 m. It is designed to work continuously without human interaction. So far, it has produced quantum keys unattended at night for more than 12 hours with a sifted key rate of more than 50 kbit/s and a quantum bit error rate between 3% and 5%. [source]


    Modeling maternal-offspring gene-gene interactions: the extended-MFG test

    GENETIC EPIDEMIOLOGY, Issue 5 2010
    Erica J. Childs
    Abstract Maternal-fetal genotype (MFG) incompatibility is an interaction between the genes of a mother and offspring at a particular locus that adversely affects the developing fetus, thereby increasing susceptibility to disease. Statistical methods for examining MFG incompatibility as a disease risk factor have been developed for nuclear families. Because families collected as part of a study can be large and complex, containing multiple generations and marriage loops, we create the Extended-MFG (EMFG) Test, a model-based likelihood approach, to allow for arbitrary family structures. We modify the MFG test by replacing the nuclear-family based "mating type" approach with Ott's representation of a pedigree likelihood and calculating MFG incompatibility along with the Mendelian transmission probability. In order to allow for extension to arbitrary family structures, we make a slightly more stringent assumption of random mating with respect to the locus of interest. Simulations show that the EMFG test has appropriate type-I error rate, power, and precise parameter estimation when random mating holds. Our simulations and real data example illustrate that the chief advantages of the EMFG test over the earlier nuclear family version of the MFG test are improved accuracy of parameter estimation and power gains in the presence of missing genotypes. Genet. Epidemiol. 34:512–521, 2010. © 2010 Wiley-Liss, Inc. [source]


    Quantifying bias due to allele misclassification in case-control studies of haplotypes

    GENETIC EPIDEMIOLOGY, Issue 7 2006
    Usha S. Govindarajulu
    Abstract Objectives: Genotyping errors can induce biases in frequency estimates for haplotypes of single nucleotide polymorphisms (SNPs). Here, we considered the impact of SNP allele misclassification on haplotype odds ratio estimates from case-control studies of unrelated individuals. Methods: We calculated bias analytically, using the haplotype counts expected in cases and controls under genotype misclassification. We evaluated the bias due to allele misclassification across a range of haplotype distributions using empirical haplotype frequencies within blocks of limited haplotype diversity. We also considered simple two- and three-locus haplotype distributions to understand the impact of haplotype frequency and number of SNPs on misclassification bias. Results: We found that for common haplotypes (>5% frequency), realistic genotyping error rates (0.1–1% chance of miscalling an allele), and moderate relative risks (2–4), the bias was always towards the null and increases in magnitude with increasing error rate and increasing odds ratio. For common haplotypes, bias generally increased with increasing haplotype frequency, while for rare haplotypes, bias generally increased with decreasing frequency. When the chance of miscalling an allele is 0.5%, the median bias in haplotype-specific odds ratios for common haplotypes was generally small (<4% on the log odds ratio scale), but the bias for some individual haplotypes was larger (10–20%). Bias towards the null leads to a loss in power; the relative efficiency using a test statistic based upon misclassified haplotype data compared to a test based on the unobserved true haplotypes ranged from roughly 60% to 80%, and worsened with increasing haplotype frequency. Conclusions: The cumulative effect of small allele-calling errors across multiple loci can induce noticeable bias and reduce power in realistic scenarios. This has implications for the design of candidate gene association studies that utilize multi-marker haplotypes. Genet. Epidemiol. 2006. © 2006 Wiley-Liss, Inc. [source]
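    A toy calculation showing the direction of this bias: if carrier status flips symmetrically with a small error probability, the expected 2x2 counts pull the odds ratio toward the null. The counts and error rates below are hypothetical, and this is not the haplotype-level analytic calculation of the paper.

```python
def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    return (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)

def misclassify(exposed, unexposed, error):
    """Expected counts after each subject's carrier status flips with probability `error`."""
    e = exposed * (1 - error) + unexposed * error
    u = unexposed * (1 - error) + exposed * error
    return e, u

# Hypothetical true counts for a haplotype with odds ratio 3
cases = (150.0, 350.0)        # (carriers, non-carriers) among cases
controls = (100.0, 700.0)     # (carriers, non-carriers) among controls
print("true OR:", round(odds_ratio(*cases, *controls), 2))

for err in (0.005, 0.02, 0.05):
    oc = misclassify(*cases, err)
    ok = misclassify(*controls, err)
    print(f"error rate {err:.1%}: observed OR ~ {odds_ratio(*oc, *ok):.2f}")
```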


    Comparison of single-nucleotide polymorphisms and microsatellite markers for linkage analysis in the COGA and simulated data sets for Genetic Analysis Workshop 14: Presentation Groups 1, 2, and 3

    GENETIC EPIDEMIOLOGY, Issue S1 2005
    Marsha A. Wilcox
    Abstract The papers in presentation groups 1–3 of Genetic Analysis Workshop 14 (GAW14) compared microsatellite (MS) markers and single-nucleotide polymorphism (SNP) markers for a variety of factors, using multiple methods in both data sets provided to GAW participants. Group 1 focused on data provided from the Collaborative Study on the Genetics of Alcoholism (COGA). Group 2 focused on data simulated for the workshop. Group 3 contained analyses of both data sets. Issues examined included: information content, signal strength, localization of the signal, use of haplotype blocks, population structure, power, type I error, control of type I error, the effect of linkage disequilibrium, and computational challenges. There were several broad resulting observations. 1) Information content was higher for dense SNP marker panels than for MS panels, and dense SNP marker sets appeared to provide slightly higher linkage scores and slightly higher power to detect linkage than MS markers. 2) Dense SNP panels also gave higher type I errors, suggesting that increased test thresholds may be needed to maintain the correct error rate. 3) Dense SNP panels provided better trait localization, but only in the COGA data, in which the MS markers were relatively loosely spaced. 4) The strength of linkage signals did not vary with the density of SNP panels, once the marker density was ~1 SNP/cM. 5) Analyses with SNPs were computationally challenging, and identified areas where improvements in analysis tools will be necessary to make analysis practical for widespread use. Genet. Epidemiol. 29(Suppl. 1):S7–S28, 2005. © 2005 Wiley-Liss, Inc. [source]


    Analysis of multilocus models of association

    GENETIC EPIDEMIOLOGY, Issue 1 2003
    B. Devlin
    Abstract It is increasingly recognized that multiple genetic variants, within the same or different genes, combine to affect liability for many common diseases. Indeed, the variants may interact among themselves and with environmental factors. Thus realistic genetic/statistical models can include an extremely large number of parameters, and it is by no means obvious how to find the variants contributing to liability. For models of multiple candidate genes and their interactions, we prove that statistical inference can be based on controlling the false discovery rate (FDR), which is defined as the expected number of false rejections divided by the number of rejections. Controlling the FDR automatically controls the overall error rate in the special case that all the null hypotheses are true. So do more standard methods such as Bonferroni correction. However, when some null hypotheses are false, the goals of Bonferroni and FDR differ, and FDR will have better power. Model selection procedures, such as forward stepwise regression, are often used to choose important predictors for complex models. By analysis of simulations of such models, we compare a computationally efficient form of forward stepwise regression against the FDR methods. We show that model selection includes numerous genetic variants having no impact on the trait, whereas FDR maintains a false-positive rate very close to the nominal rate. With good control over false positives and better power than Bonferroni, the FDR-based methods we introduce present a viable means of evaluating complex, multivariate genetic models. Naturally, as for any method seeking to explore complex genetic models, the power of the methods is limited by sample size and model complexity. Genet Epidemiol 25:36–47, 2003. © 2003 Wiley-Liss, Inc. [source]
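    For reference, the two classical corrections contrasted here are easy to state in code: Bonferroni rejects p-values below alpha/m, while the Benjamini-Hochberg linear step-up procedure controls the FDR. The sketch below applies both to an arbitrary p-value vector; it is not the forward stepwise comparison performed in the paper.

```python
import numpy as np

def bonferroni(pvals, alpha=0.05):
    """Reject H_i when p_i <= alpha / m (family-wise error rate control)."""
    pvals = np.asarray(pvals)
    return pvals <= alpha / len(pvals)

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg linear step-up procedure controlling the FDR at level alpha."""
    pvals = np.asarray(pvals)
    m = len(pvals)
    order = np.argsort(pvals)
    thresholds = alpha * np.arange(1, m + 1) / m
    below = pvals[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()       # largest i with p_(i) <= alpha * i / m
        reject[order[: k + 1]] = True
    return reject

pvals = [0.001, 0.008, 0.012, 0.03, 0.04, 0.2, 0.5, 0.8]
print("Bonferroni rejections:", bonferroni(pvals).sum())
print("BH (FDR) rejections:  ", benjamini_hochberg(pvals).sum())
```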


    Detecting genotype combinations that increase risk for disease: Maternal-Fetal genotype incompatibility test

    GENETIC EPIDEMIOLOGY, Issue 1 2003
    Janet S. Sinsheimer
    Abstract Biological mechanisms that involve gene-by-environment interactions have been hypothesized to explain susceptibility to complex familial disorders. Current research provides compelling evidence that one environmental factor, which acts prenatally to increase susceptibility, arises from a maternal-fetal genotype incompatibility. Because it is genetic in origin, a maternal-fetal incompatibility is one possible source of an adverse environment that can be detected in genetic analyses and precisely studied, even years after the adverse environment was present. Existing statistical models and tests for gene detection are not optimal or even appropriate for identifying maternal-fetal genotype incompatibility loci that may increase the risk for complex disorders. We describe a new test, the maternal-fetal genotype incompatibility (MFG) test, that can be used with case-parent triad data (affected individuals and their parents) to identify loci for which a maternal-fetal genotype incompatibility increases the risk for disease. The MFG test adapts a log-linear approach for case-parent triads in order to detect maternal-fetal genotype incompatibility at a candidate locus, and allows the incompatibility effects to be estimated separately from direct effects of either the maternal or the child's genotype. Through simulations of two biologically plausible maternal-fetal genotype incompatibility scenarios, we show that the type-I error rate of the MFG test is appropriate, that the estimated parameters are accurate, and that the test is powerful enough to detect a maternal-fetal genotype incompatibility of moderate effect size. Genet Epidemiol 24:1–13, 2003. © 2003 Wiley-Liss, Inc. [source]


    Errors in technological systems

    HUMAN FACTORS AND ERGONOMICS IN MANUFACTURING & SERVICE INDUSTRIES, Issue 4 2003
    R.B. Duffey
    Massive data and experience exist on the rates and causes of errors and accidents in modern industrial and technological society. We have examined the available human record, and have shown the existence of learning curves, and that there is an attainable and discernible minimum or asymptotic lower bound for error rates. The major common contributor is human error, including in the operation, design, manufacturing, procedures, training, maintenance, management, and safety methodologies adopted for technological systems. To analyze error and accident rates in many diverse industries and activities, we used a combined empirical and theoretical approach. We examine the national and international reported error, incident and fatal accident rates for multiple modern technologies, including shipping losses, industrial injuries, automobile fatalities, aircraft events and fatal crashes, chemical industry accidents, train derailments and accidents, medical errors, nuclear events, and mining accidents. We selected national and worldwide data sets for time spans of up to ~200 years, covering many millions of errors in diverse technologies. We developed and adopted a new approach using the accumulated experience; thus, we show that all the data follow universal learning curves. The vast amounts of data collected and analyzed exhibit trends consistent with the existence of a minimum error rate, and follow failure rate theory. There are potential and key practical impacts for the management of technological systems, the regulatory practices for complex technological processes, the assignment of liability and blame, the assessment of risk, and for the reporting and prediction of errors and accident rates. The results are of fundamental importance to society as we adopt, manage, and use modern technology. © 2003 Wiley Periodicals, Inc. Hum Factors Man 13:279–291, 2003. [source]
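    One common way to express a learning curve with an asymptotic floor is an exponential decay toward a minimum error rate. The sketch below fits that form to synthetic accumulated-experience data; it follows the spirit of the universal learning curves described, but the functional form and data are assumptions, not the authors' model or datasets.

```python
import numpy as np
from scipy.optimize import curve_fit

def learning_curve(experience, e_min, e0, k):
    """Exponential-decay learning curve: error rate falls from e0 toward a floor e_min."""
    return e_min + (e0 - e_min) * np.exp(-experience / k)

# Synthetic accumulated-experience data (arbitrary units), illustrative only
rng = np.random.default_rng(3)
experience = np.linspace(0, 10, 40)
observed = learning_curve(experience, 0.02, 0.30, 2.5) * rng.normal(1.0, 0.05, experience.size)

params, _ = curve_fit(learning_curve, experience, observed, p0=[0.01, 0.2, 1.0])
e_min, e0, k = params
print(f"fitted floor (minimum error rate) ~ {e_min:.3f}, initial rate ~ {e0:.3f}, scale ~ {k:.2f}")
```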


    Cognitive effects of a Ginkgo biloba/vinpocetine compound in normal adults: systematic assessment of perception, attention and memory

    HUMAN PSYCHOPHARMACOLOGY: CLINICAL AND EXPERIMENTAL, Issue 5 2001
    John Polich
    Abstract A computerized test battery was used in a double-blind design to assess the cognitive effects of a nutrient compound containing Ginkgo biloba in 24 normal adults. Ten tasks (perceptual, attention and short-term memory) were presented in a standardized manner designed to maximize performance, with substantial pre-test practice employed to minimize response variability. Subjects were given either placebo or Ginkgo biloba extract capsules to consume for 14 days, after which they performed all tasks twice. They then received the other condition, and after 14 days completed the final test session. Response time and error rate stabilized after pre-test practice. A 'working memory capacity' paradigm demonstrated a reliable 50 ms response time decrease between the placebo and Ginkgo biloba testing, suggesting that Ginkgo biloba speeds short-term working memory processing in normal adults. Copyright © 2001 John Wiley & Sons, Ltd. [source]


    Non-random reassortment in human influenza A viruses

    INFLUENZA AND OTHER RESPIRATORY VIRUSES, Issue 1 2008
    Raul Rabadan
    Background: The influenza A virus has two basic modes of evolution. Because of a high error rate in the process of replication by RNA polymerase, the viral genome drifts via accumulated mutations. The second mode of evolution is termed a shift, which results from the reassortment of the eight segments of this virus. When two different influenza viruses co-infect the same host cell, new virions can be released that contain segments from both parental strains. This type of shift has been the source of at least two of the influenza pandemics in the 20th century (H2N2 in 1957 and H3N2 in 1968). Objectives: The methods to measure these genetic shifts have not yet provided a quantitative answer to questions such as: what is the rate of genetic reassortment during a local epidemic? Are all possible reassortments equally likely or are there preferred patterns? Methods: To answer these questions and provide a quantitative way to measure genetic shifts, a new method for detecting reassortments from nucleotide sequence data was created that does not rely upon phylogenetic analysis. Two different sequence databases were used: human H3N2 viruses isolated in New York State between 1995 and 2006, and human H3N2 viruses isolated in New Zealand between 2000 and 2005. Results: Using this new method, we were able to reproduce all the reassortments found in earlier works, as well as detect, with very high confidence, many reassortments that were not detected by previous authors. We obtain a lower bound on the reassortment rate of 2–3 events per year, and find a clear preference for reassortments involving only one segment, most often hemagglutinin or neuraminidase. At a lower frequency several segments appear to reassort in vivo in defined groups as has been suggested previously in vitro. Conclusions: Our results strongly suggest that the patterns of reassortment in the viral population are not random. Deciphering these patterns can be a useful tool in attempting to understand and predict possible influenza pandemics. [source]


    Improved GMM with parameter initialization for unsupervised adaptation of Brain–Computer interface

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 6 2010
    Guangquan Liu
    Abstract An important property of brain signals is their nonstationarity. How to adapt a brain–computer interface (BCI) to the changing brain states is one of the challenges faced by BCI researchers, especially in real application where the subject's real intent is unknown to the system. Gaussian mixture model (GMM) has been used for the unsupervised adaptation of the classifier in BCI. In this paper, a method of initializing the model parameters is proposed for expectation maximization-based GMM parameter estimation. This improved GMM method and two other existing unsupervised adaptation methods are applied to groups of constructed artificial data with different data properties. Performances of these methods in different situations are analyzed. Compared with the other two unsupervised adaptation methods, this method shows a better ability to adapt to changes and discover class information from unlabelled data. The methods are also applied to real EEG data recorded in 19 experiments. For real data, the proposed method achieves an error rate significantly lower than the other two unsupervised methods. Results of the real data agree with the analysis based on the artificial data, which confirms not only the effectiveness of our method but also the validity of the constructed data. Copyright © 2009 John Wiley & Sons, Ltd. [source]
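    The general idea of seeding EM with informed parameter values before adapting to unlabelled data can be sketched with scikit-learn's GaussianMixture and its means_init argument. This is illustrative only: the initialization rule, the two-dimensional synthetic features, and the component count are assumptions, not the authors' method or their EEG data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)

# Synthetic two-class "feature" data standing in for unlabelled trials
class_a = rng.normal([-1.0, -1.0], 0.8, size=(200, 2))
class_b = rng.normal([+1.5, +1.0], 0.8, size=(200, 2))
X = np.vstack([class_a, class_b])

# Initialize component means from prior (e.g., supervised calibration) estimates,
# then let EM adapt them to the new, unlabelled data
initial_means = np.array([[-0.5, -0.5], [1.0, 0.5]])
gmm = GaussianMixture(n_components=2, covariance_type="full",
                      means_init=initial_means, random_state=0)
gmm.fit(X)

labels = gmm.predict(X)
print("adapted means:\n", np.round(gmm.means_, 2))
print("cluster sizes:", np.bincount(labels))
```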


    Corruption-aware adaptive increase and adaptive decrease algorithm for TCP error and congestion controls in wireless networks

    INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 5 2009
    Lin Cui
    Abstract The conventional TCP tends to suffer from performance degradation due to packet corruptions in the wireless lossy channels, since any corruption event is regarded as an indication of network congestion. This paper proposes a TCP error and congestion control scheme using a corruption-aware adaptive increase and adaptive decrease algorithm to improve TCP performance over wireless networks. In the proposed scheme, the available network bandwidth is estimated based on the amount of the received integral data as well as the received corrupted data. The slow start threshold is updated only when a lost but not corrupted segment is detected by the sender, since the corrupted packets still arrive at the TCP receiver. In the proposed scheme, the duplicated ACKs are processed differently by the sender depending on whether there are any lost but not corrupted segments at present. Simulation results show that the proposed scheme could significantly improve TCP throughput over the heterogeneous wired and wireless networks with a high bit error rate, compared with the existing TCP and its variants. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    Performance evaluation for asynchronous MC-CDMA systems with a symbol timing offset

    INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 4 2009
    Myonghee Park
    Abstract This paper models a symbol timing offset (STO) with respect to the guard period and the maximum access delay time for asynchronous multicarrier-code division multiple access systems over frequency-selective multipath fading channels. Analytical derivation shows that STO causes desired signal power degradation and generates self-interferences. This effect of the STO on the average bit error rate (BER) and the effective signal-to-noise ratio (SNR) is evaluated using the semi-analytical method, and the approximated BER and the SNR loss caused by STO are then obtained as closed-form expressions. The tightness between the semi-analytical result and the approximated one is verified for the different STOs and SNRs. Furthermore, the derived analytical results are verified via Monte Carlo simulations. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    Error-aware and energy-efficient routing approach in MANETs

    INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 1 2009
    Liansheng Tan
    Abstract The lifetime of a network is the key design factor of mobile ad hoc networks (MANETs). To prolong the lifetime of MANETs, one is forced to attain a tradeoff between minimizing the energy consumption and balancing the load. In MANETs, energy waste resulting from retransmission due to high bit error rate (BER) and high frame error rate (FER) of wireless channel is significant. In this paper, we propose two novel protocols termed multi-threshold routing protocol (MTRP) and enhanced multi-threshold routing protocol (EMTRP). MTRP divides the total energy of a wireless node into multiple ranges. The lower bound of each range corresponds to a threshold. The protocol iterates from the highest threshold to the lowest one and chooses those routes with bottleneck energy being larger than the current threshold during each iteration. This approach thus avoids overusing certain routes and achieves load balancing. If multiple routes satisfy the threshold constraint, MTRP selects a route with the smallest hop count to further attain energy efficiency. Based on MTRP, EMTRP further takes channel condition into consideration and selects routes with better channel condition and consequently reduces the number of retransmissions and saves energy. We analyze the average loss probability (ALP) of the uniform error model and Gilbert error model and give a distributed algorithm to obtain the maximal ALP along a route. Descriptions of MTRP and EMTRP are given in pseudocode form. Simulation results demonstrate that our proposed EMTRP outperforms the representative protocol CMMBCR in terms of total energy consumption and load balancing. Copyright © 2008 John Wiley & Sons, Ltd. [source]
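    As a simple complement to the protocol description, per-hop loss probabilities combine into a route-level loss probability as 1 - prod(1 - p_i) when links fail independently. The sketch below uses that independence assumption and made-up link error rates; the paper itself analyses uniform and Gilbert error models.

```python
import numpy as np

def route_loss_probability(link_loss_probs):
    """Probability a packet is lost somewhere along a route, assuming independent links:
    1 - prod(1 - p_i). The paper analyses uniform and Gilbert error models instead."""
    link_loss_probs = np.asarray(link_loss_probs)
    return 1.0 - np.prod(1.0 - link_loss_probs)

route_a = [0.01, 0.02, 0.01]             # three relatively clean hops
route_b = [0.01, 0.10]                   # two hops, one with a poor channel
for name, route in (("A", route_a), ("B", route_b)):
    print(f"route {name}: loss probability ~ {route_loss_probability(route):.3f}")
```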


    Kalman filter-based channel estimation and ICI suppression for high-mobility OFDM systems

    INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 10 2008
    Prerana Gupta
    Abstract The use of orthogonal frequency division multiplexing (OFDM) in frequency-selective fading environments has been well explored. However, OFDM is more prone to time-selective fading compared with single-carrier systems. Rapid time variations destroy the subcarrier orthogonality and introduce inter-carrier interference (ICI). Besides this, obtaining reliable channel estimates for receiver equalization is a non-trivial task in rapidly fading systems. Our work addresses the problem of channel estimation and ICI suppression by viewing the system as a state-space model. The Kalman filter is employed to estimate the channel; this is followed by a time-domain ICI mitigation filter that maximizes the signal-to-interference plus noise ratio (SINR) at the receiver. This method is seen to provide good estimation performance apart from significant SINR gain with low training overhead. Suitable bounds on the performance of the system are described; bit error rate (BER) performance over a time-invariant Rayleigh fading channel serves as the lower bound, whereas BER performance over a doubly selective system with ICI as the dominant impairment provides the upper bound. Copyright © 2008 John Wiley & Sons, Ltd. [source]
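    Below is a minimal sketch of Kalman-filter channel tracking for a single time-varying tap observed through known pilots, assuming a first-order autoregressive fading model. The paper's full state-space formulation, ICI-suppression filter, and OFDM structure are not reproduced; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# AR(1) model for one complex channel tap: h[n] = a * h[n-1] + w[n]
a, q, r, n_sym = 0.98, 1 - 0.98 ** 2, 0.05, 200
h_true = np.zeros(n_sym, dtype=complex)
for n in range(1, n_sym):
    h_true[n] = a * h_true[n - 1] + np.sqrt(q / 2) * (rng.normal() + 1j * rng.normal())

pilots = np.ones(n_sym)                               # known pilot symbols
y = h_true * pilots + np.sqrt(r / 2) * (rng.normal(size=n_sym) + 1j * rng.normal(size=n_sym))

# Scalar Kalman filter: predict, then correct with the pilot observation
h_hat, p = 0.0 + 0.0j, 1.0
estimates = np.zeros(n_sym, dtype=complex)
for n in range(n_sym):
    h_pred, p_pred = a * h_hat, a ** 2 * p + q        # time update
    k = p_pred * np.conj(pilots[n]) / (abs(pilots[n]) ** 2 * p_pred + r)   # Kalman gain
    h_hat = h_pred + k * (y[n] - pilots[n] * h_pred)  # measurement update
    p = (1 - k * pilots[n]).real * p_pred
    estimates[n] = h_hat

mse = np.mean(np.abs(estimates - h_true) ** 2)
print(f"tracking MSE ~ {mse:.4f}  (observation noise variance = {r})")
```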