Error Correction (error + correction)

Kinds of Error Correction

  • measurement error correction

Terms modified by Error Correction

  • error correction model
  • error correction models

Selected Abstracts


    Multicarrier Modulation with Multistage Encoding/Decoding for a Nakagami Fading Channel

    EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 5 2000
    Lev Goldfeld
    The Multi Carrier Modulation (MCM) system with a multistage encoding/decoding scheme based on repetition and erasures-correcting decoding of block codes, applied to a Nakagami fading channel, is considered. Bit Error Rate (BER) as a function of Signal-to-Noise Ratio (SNR) has been found to agree well with the simulated results. It is shown that for low SNR the proposed system has a lower BER than both the MCM with Forward Error Correction (FEC) and the MCM with optimal diversity reception and FEC. [source]


    An Algorithmic Approach to Error Correction: An Empirical Study

    FOREIGN LANGUAGE ANNALS, Issue 1 2006
    Alice Y. W. Chan
    This article reports on the results of a research study that investigated the effectiveness of using an algorithmic approach to error correction to help Hong Kong English-as-a-second-language (ESL) learners overcome persistent lexico-grammatical problems. Ten error types were selected for the experiment, and one set of remedial instructional materials was designed for each error type. The materials were implemented with more than 450 students at both secondary and tertiary levels. Pretests, posttests, and delayed posttests were administered to test the effectiveness of the approach, and a plenary review meeting was organized to gather feedback. The results showed that the approach was versatile and effective and that the students showed significant improvements for the items taught. It is argued that form-focused remedial instruction is effective in enhancing learners' language accuracy in their second language (L2) output. [source]


    Notice of Error Correction

    ACCOUNTING PERSPECTIVES, Issue 1 2010
    Article first published online: 3 JUN 2010
    No abstract is available for this article. [source]


    Price Dynamics in the International Wheat Market: Modeling with Error Correction and Directed Acyclic Graphs

    JOURNAL OF REGIONAL SCIENCE, Issue 1 2003
    David A Bessler
    In this paper we examine dynamic relationships among wheat prices from five countries for the years 1981–1999. Error correction models and directed acyclic graphs are employed with observational data to sort out the dynamic causal relationships among prices from major wheat producing regions: Canada, the European Union, Argentina, Australia, and the United States. An ambiguity related to the cyclic or acyclic flow of information between Canada and Australia is uncovered. We condition our analysis on the assumption that information flow is acyclic. The empirical results show that Canada and the U.S. are leaders in the pricing of wheat in these markets. The U.S. has a significant effect on three markets excluding Canada. [source]
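
    A minimal sketch (not from the paper, which adds directed acyclic graphs on the innovations) of the error correction idea behind such models: a two-step Engle–Granger estimation on simulated price series, where the coefficient on the lagged equilibrium error measures how fast one price adjusts back toward the common trend. All series and names here are illustrative.

        # Two-step Engle-Granger error correction model (ECM) sketch.
        # Simulated data; not the authors' specification.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 500
        trend = np.cumsum(rng.normal(size=n))           # shared stochastic trend
        p_us = trend + rng.normal(scale=0.5, size=n)    # simulated "U.S." price
        p_ca = trend + rng.normal(scale=0.5, size=n)    # simulated "Canada" price

        # Step 1: cointegrating regression; residuals = deviation from equilibrium.
        z = sm.OLS(p_ca, sm.add_constant(p_us)).fit().resid

        # Step 2: regress the price change on the lagged equilibrium error.
        X = sm.add_constant(np.column_stack([z[:-1], np.diff(p_us)]))
        ecm = sm.OLS(np.diff(p_ca), X).fit()
        print(ecm.params)  # coefficient on z[:-1] is the error correction speed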


    Testing for Error Correction in Panel Data

    OXFORD BULLETIN OF ECONOMICS & STATISTICS, Issue 6 2007
    Joakim Westerlund
    Abstract This paper proposes new error correction-based cointegration tests for panel data. The limiting distributions of the tests are derived and critical values provided. Our simulation results suggest that the tests have good small-sample properties with small size distortions and high power relative to other popular residual-based panel cointegration tests. In our empirical application, we present evidence suggesting that international healthcare expenditures and GDP are cointegrated once the possibility of an invalid common factor restriction has been accounted for. [source]


    A Markov regime switching approach for hedging stock indices

    THE JOURNAL OF FUTURES MARKETS, Issue 7 2004
    Amir Alizadeh
    In this paper we describe a new approach for determining the time-varying minimum variance hedge ratio in stock index futures markets by using Markov Regime Switching (MRS) models. The rationale behind the use of these models stems from the fact that the dynamic relationship between spot and futures returns may be characterized by regime shifts, which, in turn, suggests that by allowing the hedge ratio to be dependent upon the "state of the market," one may obtain more efficient hedge ratios and, hence, superior hedging performance compared to other methods in the literature. The performance of the MRS hedge ratios is compared to that of alternative models such as GARCH, Error Correction and OLS in the FTSE 100 and S&P 500 markets. In- and out-of-sample tests indicate that MRS hedge ratios outperform the other models in reducing portfolio risk in the FTSE 100 market. In the S&P 500 market the MRS model outperforms the other hedging strategies only within sample. Overall, the results indicate that by using MRS models market agents may be able to increase the performance of their hedges, measured in terms of variance reduction and increase in their utility. © 2004 Wiley Periodicals, Inc. Jrl Fut Mark 24:649–674, 2004 [source]
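
    The minimum variance hedge ratio underlying all of these models is h* = Cov(Δs, Δf)/Var(Δf). A rough sketch of the regime-dependent idea follows, with a crude volatility split standing in for a fitted Markov regime switching model; the paper's actual MRS estimator is more involved, and the data here are simulated.

        # Minimum-variance hedge ratio: h* = Cov(ds, df) / Var(df).
        # The volatility split below is a stand-in for fitted MRS regimes.
        import numpy as np

        rng = np.random.default_rng(1)
        df_ = rng.normal(scale=0.010, size=2000)              # futures returns
        ds = 0.9 * df_ + rng.normal(scale=0.004, size=2000)   # spot returns

        h_static = np.cov(ds, df_)[0, 1] / np.var(df_, ddof=1)  # OLS-style ratio

        calm = np.abs(df_) < np.quantile(np.abs(df_), 0.8)    # "state of the market"
        h_calm = np.cov(ds[calm], df_[calm])[0, 1] / np.var(df_[calm], ddof=1)
        h_turb = np.cov(ds[~calm], df_[~calm])[0, 1] / np.var(df_[~calm], ddof=1)
        print(h_static, h_calm, h_turb)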


    A Flexible Approach to Measurement Error Correction in Case–Control Studies

    BIOMETRICS, Issue 4 2008
    A. Guolo
    Summary We investigate the use of prospective likelihood methods to analyze retrospective case–control data where some of the covariates are measured with error. We show that prospective methods can be applied and the case–control sampling scheme can be ignored if one adequately models the distribution of the error-prone covariates in the case–control sampling scheme. Indeed, subject to this, the prospective likelihood methods result in consistent estimates, and the information standard errors are asymptotically correct. However, the distribution of such covariates is not the same in the population and under case–control sampling, dictating the need to model the distribution flexibly. In this article, we illustrate the general principle by modeling the distribution of the continuous error-prone covariates using the skew-normal distribution. The performance of the method is evaluated through simulation studies, which show satisfactory results in terms of bias and coverage. Finally, the method is applied to the analysis of two data sets which refer, respectively, to a cholesterol study and a study on breast cancer. [source]


    Occupational exposure to methyl tertiary butyl ether in relation to key health symptom prevalence: the effect of measurement error correction

    ENVIRONMETRICS, Issue 6 2003
    Aparna P. Keshaviah
    Abstract In 1995, White et al. reported that methyl tertiary butyl ether (MTBE), an oxygenate added to gasoline, was significantly associated with key health symptoms, including headaches, eye irritation, and burning of the nose and throat, among 44 people occupationally exposed to the compound and for whom serum MTBE measurements were available (odds ratio (OR) = 8.9, 95% CI = [1.2, 75.6]). However, these serum MTBE measurements were available for only 29 per cent of the 150 subjects enrolled. Around the same time, Mannino et al. conducted a similar study among individuals occupationally exposed to low levels of MTBE and did not find a significant association between exposure to MTBE and the presence of one or more key health symptoms among the 264 study participants (OR = 0.60, 95% CI = [0.3, 1.21]). In this article, we evaluate the effect of MTBE on the prevalence of key health symptoms by applying a regression calibration method to White et al.'s and Mannino et al.'s data. Unlike White et al., who classified exposure using actual MTBE levels among a subset of the participants, and Mannino et al., who classified exposure based on job category among all participants, we use all of the available data to obtain an estimate of the effect of MTBE in units of serum concentration, adjusted for measurement error due to using job category instead of measured exposure. After adjusting for age, gender and smoking status, MTBE exposure was found to be significantly associated with a 50 per cent increase in the prevalence of one or more key health symptoms per order of magnitude increase in blood concentration on the log10 scale, using data from the 409 study participants with complete information on the covariates (95% CI = [1.00, 2.25]). Simulation results indicated that under conditions similar to those observed in these data, the estimator is unbiased and has a coverage probability close to the nominal value. The methodology illustrated in this article is advantageous because all of the available data were used in the analysis, obtaining a more precise estimate of exposure effect on health outcome, and the estimate is adjusted for measurement error due to using job category instead of measured exposure. Copyright © 2003 John Wiley & Sons, Ltd. [source]
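
    A minimal sketch of the regression calibration step described above, under simplifying assumptions (linear calibration model, logistic outcome model, naive standard errors): the error-prone exposure is replaced by its conditional expectation given job category and covariates, fitted on the validation subsample with serum measurements. All variable names are hypothetical.

        # Regression calibration sketch (names hypothetical, SEs naive).
        import numpy as np
        import statsmodels.api as sm

        def regression_calibration(job_dummies, covars, serum, measured, symptom):
            X_cal = sm.add_constant(np.column_stack([job_dummies, covars]))
            cal = sm.OLS(serum[measured], X_cal[measured]).fit()  # validation fit
            exposure_hat = cal.predict(X_cal)                     # impute everyone
            X_out = sm.add_constant(np.column_stack([exposure_hat, covars]))
            # Logistic outcome model on the calibrated exposure; the paper also
            # corrects the variance for the imputation step, omitted here.
            return sm.Logit(symptom, X_out).fit(disp=0)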


    Measuring Monetary Policy in Germany: A Structural Vector Error Correction Approach

    GERMAN ECONOMIC REVIEW, Issue 3 2003
    Imke Brüggemann
    Keywords: monetary policy; cointegration; structural VAR analysis. Abstract. A structural vector error correction (SVEC) model is used to investigate several monetary policy issues. While being data-oriented, the SVEC framework allows structural modeling of the short-run and long-run properties of the data. The statistical model is estimated with monthly German data for 1975–98, where a structural break is detected in 1984. After splitting the sample, three stable long-run relations are found in each subsample, which can be interpreted in terms of a money-demand equation, a policy rule and a relation for real output, respectively. Since the cointegration restrictions imply a particular shape of the long-run covariance matrix, this information can be used to distinguish between permanent and transitory innovations in the estimated system. Additional restrictions are introduced to identify a monetary policy shock. [source]


    An opportunistic cross-layer architecture for voice in multi-hop wireless LANs

    INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 4 2009
    Suhaib A. Obeidat
    Abstract We propose an opportunistic cross-layer architecture for adaptive support of Voice over IP in multi-hop wireless LANs. As opposed to providing high call quality, we target emergencies where it is important to communicate, even if at low quality, no matter the harshness of the network conditions. With the importance of delay on voice quality in mind, we select adaptation parameters that control the ratio of real-time traffic load to available bandwidth. This is achieved in two ways: minimizing the load and maximizing the bandwidth. The PHY/MAC interaction improves the use of the spectral resources by opportunistically exploiting rate-control and packet bursts, while the MAC/application interaction controls the demand per source through voice compression. The objective is to maximize the number of calls admitted that satisfy the end-to-end delay budget. The performance of the protocol is studied extensively in the ns-2 network simulator. Results indicate that call quality degrades as load increases and over longer paths, and that a larger packet size improves performance. For long paths having low-quality channels, forward error correction, header compression, and relaxing the delay budget of the system are required to maintain call admission and quality. The proposed adaptive protocol achieves high performance improvements over the traditional, non-adaptive approach. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    A backstepping controller for path-tracking of an underactuated autonomous airship

    INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 4 2009
    José Raul Azinheira
    Abstract In this paper we propose a nonlinear control approach for the path-tracking of an autonomous underactuated airship. A backstepping controller is designed from the airship nonlinear dynamic model, including wind disturbances, and further enhanced to consider actuator saturation. Control implementation issues related to airship underactuation are also addressed, namely control allocation and an attitude reference shaping to obtain faster error correction with smoother input requests. The results obtained demonstrate the capacity of an underactuated unmanned airship to execute a realistic mission including vertical take-off and landing, stabilization and path-tracking, in the presence of wind disturbances, with a single robust control law. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    Vision-Enhancing Interventions in Nursing Home Residents and Their Short-Term Effect on Physical and Cognitive Function

    JOURNAL OF AMERICAN GERIATRICS SOCIETY, Issue 2 2009
    Amanda F. Elliott PhD
    OBJECTIVES: To evaluate the effect of vision-enhancing interventions (cataract surgery or refractive error correction) on physical function and cognitive status in nursing home residents. DESIGN: Longitudinal cohort study. SETTING: Seventeen nursing homes in Birmingham, Alabama. PARTICIPANTS: A total of 187 English-speaking adults aged 55 and older. INTERVENTION: Participants took part in one of two vision-enhancing interventions: cataract surgery or refractive error correction. Each group was compared against a control group (persons eligible for but who declined cataract surgery or who received delayed correction of refractive error). MEASUREMENTS: Physical function (ability to perform activities of daily living and mobility) was assessed using a series of self-report and certified nursing assistant ratings at baseline and at 2 months for the refractive error correction group and at 4 months for the cataract surgery group. The Mini Mental State Examination was also administered. RESULTS: No significant differences existed within or between groups from baseline to follow-up on any of the measures of physical function. Mental status scores significantly declined from baseline to follow-up for the immediate (P=.05) and delayed (P<.02) refractive error correction groups and for the cataract surgery control group (P=.05). CONCLUSION: Vision-enhancing interventions did not lead to short-term improvements in physical functioning or cognitive status in this sample of elderly nursing home residents. [source]


    Beating the random walk in Central and Eastern Europe

    JOURNAL OF FORECASTING, Issue 3 2005
    Jesús Crespo Cuaresma
    Abstract We compare the accuracy of vector autoregressive (VAR), restricted vector autoregressive (RVAR), Bayesian vector autoregressive (BVAR), vector error correction (VEC) and Bayesian error correction (BVEC) models in forecasting the exchange rates of five Central and Eastern European currencies (Czech Koruna, Hungarian Forint, Slovak Koruna, Slovenian Tolar and Polish Zloty) against the US Dollar and the Euro. Although these models tend to outperform the random walk model for long-term predictions (6 months ahead and beyond), even the best models in terms of average prediction error fail to reject the test of equality of forecasting accuracy against the random walk model in short-term predictions. Copyright © 2005 John Wiley & Sons, Ltd. [source]
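
    A minimal sketch of the benchmark exercise described here: rolling one-step forecasts from a VAR compared against the no-change (random walk) forecast by RMSE. Simulated series stand in for the actual exchange rate data, and the plain VAR stands in for the full set of models the authors compare.

        # Random-walk benchmark sketch: VAR(1) one-step forecasts vs. no-change.
        import numpy as np
        from statsmodels.tsa.api import VAR

        rng = np.random.default_rng(2)
        y = np.cumsum(rng.normal(size=(300, 2)), axis=0)  # two simulated log rates

        split = 250
        err_var, err_rw = [], []
        for t in range(split, len(y)):
            fit = VAR(y[:t]).fit(1)                       # refit on expanding window
            fc = fit.forecast(y[t - 1:t], steps=1)[0]     # VAR one-step forecast
            err_var.append(y[t] - fc)
            err_rw.append(y[t] - y[t - 1])                # random walk: no change
        rmse = lambda e: np.sqrt(np.mean(np.square(e)))
        print("VAR:", rmse(np.array(err_var)), "RW:", rmse(np.array(err_rw)))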


    An error correction almost ideal demand system for meat in Greece

    AGRICULTURAL ECONOMICS, Issue 1 2000
    G. Karagiannis
    Abstract This paper presents a dynamic specification of the Almost Ideal Demand System (AIDS) based on recent developments in cointegration techniques and error correction models. Based on Greek meat consumption data over the period 1958–1993, it was found that the proposed formulation performs well on both theoretical and statistical grounds, as the theoretical properties of homogeneity and symmetry are supported by the data and the Le Chatelier principle holds. Regardless of the time horizon, beef and chicken may be considered as luxuries while mutton-lamb and pork as necessities. In the short run, beef was found to have price elastic demand, pork an almost unitary elasticity, whereas mutton-lamb, chicken and sausages had inelastic demands; in the long run, beef and pork were found to have a demand elasticity greater than one, whereas mutton-lamb, chicken, and sausages still had inelastic demands. All meat items are found to be substitutes to each other except chicken and mutton-lamb, and pork and chicken. [source]


    A STRUCTURAL EQUATION MODELING OF ALCOHOL USE AMONG YOUNG ADULTS IN THE U.S. MILITARY: COMPLEXITIES AMONG STRESS, DRINKING MOTIVES, IMPULSIVITY, ALCOHOL USE AND JOB PERFORMANCE

    ALCOHOLISM, Issue 2008
    Sunju Sohn
    Aims: Young male adults in the U.S. military drink at much higher rates than civilians and females of the same age. Drinking has been shown to be associated with stress and individuals' ability to effectively cope with stressors. Despite numerous studies conducted on young adults' drinking behaviors such as college drinking, current literature is limited in fully understanding alcohol use patterns of the young military population. The aim of the present study was to develop and test the hypothesized Structural Equation Model (SEM) of alcohol use to determine if stress coping styles moderate the relationship between stress, drinking motives, impulsivity, alcohol consumption and job performance. Methods: Structural equation models for multiple group comparisons were estimated based on a sample of 1,715 young (aged 18 to 25) male military personnel using the 2005 Department of Defense Survey of Health Related Behaviors among Military Personnel. Coping style was used as the grouping factor in the multi-group analysis, and this variable was developed through numerous steps to reflect positive and negative behaviors of coping. The equivalences of the structural relations between the study variables were then compared across two groups at a time, controlling for installation region, race/ethnicity, marital status, education, and pay grade, resulting in two model comparisons with four coping groups. If the structural weights showed differences across groups, each parameter was constrained and tested one at a time to see where the models differed. Results: The results showed that the hypothesized model applies across all groups. The structural weights revealed that a moderation effect exists between a group whose tendency is to mostly use positive coping strategies and a group whose tendency is to mostly use negative coping strategies (Δχ2(39) = 65.116, p < .05). More specifically, the models were different (with and without Bonferroni Type I error correction) in the paths between "motive and alcohol use" and "alcohol use and alcohol-related consequences (job performance)." Conclusions: It seems plausible that coping style significantly factors into moderating alcohol use among young male military personnel, who reportedly drink more excessively than civilians of the same age. The results indicate that it may be particularly important for the military to assess different stress coping styles of young male military personnel so as to limit excessive drinking as well as to promote individual wellness and improve job performance. [source]


    Question-driven segmentation of lecture speech text: Towards intelligent e-learning systems

    JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 2 2008
    Ming Lin
    Recently, lecture videos have been widely used in e-learning systems. Envisioning intelligent e-learning systems, this article addresses the challenge of information seeking in lecture videos by retrieving relevant video segments based on user queries, through dynamic segmentation of lecture speech text. In the proposed approach, shallow parsing techniques such as part-of-speech tagging and noun phrase chunking are used to parse both questions and Automated Speech Recognition (ASR) transcripts. A sliding-window algorithm is proposed to identify the start and ending boundaries of returned segments. Phonetic and partial matching is utilized to correct the errors from automated speech recognition and noun phrase chunking. Furthermore, extra knowledge such as lecture slides is used to facilitate the ASR transcript error correction. The approach also makes use of proximity to approximate the deep parsing and structure match between question and sentences in ASR transcripts. The experimental results showed that both phonetic and partial matching improved the segmentation performance, slides-based ASR transcript correction improves information coverage, and proximity is also effective in improving the overall performance. [source]
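
    A toy sketch of the sliding-window idea: score each window of transcript sentences by token overlap with the question and return the best-scoring boundaries. Plain token overlap stands in for the paper's phonetic and partial matching, and the example data are made up.

        # Sliding-window segmentation sketch: pick the window of transcript
        # sentences with the highest token overlap against the question.
        def best_segment(question, sentences, window=5):
            q = set(question.lower().split())
            best, best_score = (0, window), 0.0
            for start in range(max(1, len(sentences) - window + 1)):
                tokens = set(
                    w for s in sentences[start:start + window]
                    for w in s.lower().split()
                )
                score = len(q & tokens) / (len(q) or 1)
                if score > best_score:
                    best, best_score = (start, start + window), score
            return best  # (start_index, end_index) of the returned segment

        sentences = ["the decoder corrects symbol errors",
                     "reed solomon codes add parity symbols",
                     "retransmission waits for a nack"]
        print(best_segment("how do reed solomon codes correct errors", sentences, 2))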


    Improving Wikipedia's accuracy: Is edit age a solution?

    JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 2 2008
    Brendan Luyt
    Wikipedia is fast becoming a key information source for many despite criticism that it is unreliable and inaccurate. A number of recommendations have been made to sort the chaff from the wheat in Wikipedia, among which is the idea of color-coding article segment edits according to age (Cross, 2006). Using data collected as part of a wider study published in Nature, this article examines the distribution of errors throughout the life of a select group of Wikipedia articles. The survival time of each "error edit" in terms of edit counts and days was calculated, and the hypothesis that surviving material added by older edits is more trustworthy was tested. Surprisingly, we find that roughly 20% of errors can be attributed to surviving text added by the first edit, which confirms the existence of a "first-mover" effect (Viegas, Wattenberg, & Kushal, 2004) whereby material added by early edits is less likely to be removed. We suggest that the sizable number of errors added by early edits is simply a result of more material being added near the beginning of the life of the article. Overall, the results do not provide support for the idea of trusting surviving segments attributed to older edits because such edits tend to add more material and hence contain more errors, which do not seem to be offset by greater opportunities for error correction by later edits. [source]


    Nonlinear error correction models

    JOURNAL OF TIME SERIES ANALYSIS, Issue 5 2002
    ALVARO ESCRIBANO
    The relationship between cointegration and error correction (EC) models is well characterized in a linear context, but the extension to the nonlinear context is still a challenge. Few extensions of the linear framework have been made in the context of nonlinear error correction (NEC) or asymmetric and time-varying error correction models. In this paper, we propose a theoretical framework based on the concept of near epoch dependence (NED) that allows us to formally address these issues. In particular, we partially extend the Granger Representation Theorem to the nonlinear case. [source]
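
    For reference, a sketch in LaTeX of the two forms being contrasted, under the simplifying assumption of a single cointegrating relation z_{t-1} = y_{t-1} - β'x_{t-1} (notation mine, not the authors'):

        % Linear ECM vs. a nonlinear error correction (NEC) specification;
        % g(.) is a nonlinear adjustment function (e.g., asymmetric or
        % time-varying) replacing the constant adjustment speed alpha.
        \begin{aligned}
        \text{linear EC:}\quad \Delta y_t &= \alpha\, z_{t-1}
            + \sum_{j=1}^{p} \gamma_j' \Delta w_{t-j} + \varepsilon_t,\\
        \text{NEC:}\quad \Delta y_t &= g(z_{t-1})
            + \sum_{j=1}^{p} \gamma_j' \Delta w_{t-j} + \varepsilon_t.
        \end{aligned}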


    High Raman gain of dispersion compensation fiber using RZ-DPSK format for long-haul DWDM transmission system

    MICROWAVE AND OPTICAL TECHNOLOGY LETTERS, Issue 11 2010
    Hsiu-Sheng Lin
    Abstract We investigated the transport of a 16-channel 40-Gb/s dense wavelength division multiplexing (DWDM) system which uses the high Raman gain of dispersion compensation fiber (DCF) for long-haul DWDM transmission. Using the return-to-zero differential phase shift keying (RZ-DPSK) modulation format with an optimized dispersion compensation format, we demonstrate DWDM transmission with a capacity of 640 Gb/s with 0.4 nm channel spacing over 4500 km of transmission fiber. The transmission system structure uses 120 km of single-mode fiber and 30 km of DCF for each of thirty spans in the C-band wavelength range and a Raman amplifier with high Raman gain to achieve long-haul transmission. We also used enhanced forward error correction for high capacity transmission over several thousand kilometers with the RZ-DPSK modulation format. © 2010 Wiley Periodicals, Inc. Microwave Opt Technol Lett 52:2548–2551, 2010; View this article online at wileyonlinelibrary.com. DOI 10.1002/mop.25552 [source]


    Evaluation of a High Exposure Solar UV Dosimeter for Underwater Use

    PHOTOCHEMISTRY & PHOTOBIOLOGY, Issue 4 2007
    Peter W. Schouten
    ABSTRACT Solar ultraviolet radiation (UV) is known to have a significant effect upon the marine ecosystem. This has been documented by many previous studies using a variety of measurement methods in aquatic environments such as oceans, streams and lakes. Evidence gathered from these investigations has shown that UVB radiation (280–320 nm) can negatively affect numerous aquatic life forms, while UVA radiation (320–400 nm) can both damage and possibly even repair certain types of underwater life. Chemical dosimeters such as polysulphone have been tested to record underwater UV exposures and in turn quantify the relationship between water column depth and dissolved organic carbon levels to the distribution of biologically damaging UV underwater. However, these studies have only been able to intercept UV exposures over relatively short time intervals. This paper reports on the evaluation of a high exposure UV dosimeter for underwater use. The UV dosimeter was fabricated from poly 2,6-dimethyl-1,4-phenylene oxide (PPO) film. This paper presents the dose response, cosine response, exposure additivity and watermarking effect relating to the PPO dosimeter as measured in a controlled underwater environment and will also detail the overnight dark reaction and UVA and visible radiation response of the PPO dosimeter, which can be used for error correction to improve the reliability of the UV data measured by the PPO dosimeters. These results show that this dosimeter has the potential for long-term underwater UV exposure measurements. [source]


    Coin flipping from a cosmic source: On error correction of truly random bits

    RANDOM STRUCTURES AND ALGORITHMS, Issue 4 2005
    Elchanan Mossel
    We study a problem related to coin flipping, coding theory, and noise sensitivity. Consider a source of truly random bits x ∈ {0, 1}^n and k parties, who have noisy versions of the source bits y^i ∈ {0, 1}^n, where for all i and j it holds that P[y^i_j = x_j] = 1 − ε, independently for all i and j. That is, each party sees each bit correctly with probability 1 − ε, and incorrectly (flipped) with probability ε, independently for all bits and all parties. The parties, who cannot communicate, wish to agree beforehand on balanced functions f_i: {0, 1}^n → {0, 1} such that P[f_1(y^1) = … = f_k(y^k)] is maximized. In other words, each party wants to toss a fair coin so that the probability that all parties have the same coin is maximized. The function f_i may be thought of as an error correcting procedure for the source x. When k = 2, 3, no error correction is possible, as the optimal protocol is given by f_i(y^i) = y^i_1. On the other hand, for large values of k, better protocols exist. We study general properties of the optimal protocols and the asymptotic behavior of the problem with respect to k, n, and ε. Our analysis uses tools from probability, discrete Fourier analysis, convexity, and discrete symmetrization. © 2005 Wiley Periodicals, Inc. Random Struct. Alg., 2005 [source]
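
    A quick Monte Carlo sketch of the setup (parameters arbitrary, not from the paper): k parties apply the same balanced function to independently noised copies of the source, and we estimate the probability that all k outputs agree, for the first-bit protocol and for majority.

        # Cosmic coin-flipping sketch: estimate P[all k parties agree] when each
        # party applies f to an independently eps-noised copy of the source bits.
        import numpy as np

        def agreement(f, k=10, n=101, eps=0.1, trials=20000, seed=3):
            rng = np.random.default_rng(seed)
            x = rng.integers(0, 2, size=(trials, n))        # source bits
            outs = []
            for _ in range(k):
                flips = rng.random(size=(trials, n)) < eps  # each bit flipped w.p. eps
                outs.append(f(np.where(flips, 1 - x, x)))
            outs = np.array(outs)
            return np.all(outs == outs[0], axis=0).mean()

        first_bit = lambda y: y[:, 0]
        majority = lambda y: (y.sum(axis=1) * 2 > y.shape[1]).astype(int)
        print("first bit:", agreement(first_bit), "majority:", agreement(majority))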


    Order imbalance and the dynamics of index and futures prices

    THE JOURNAL OF FUTURES MARKETS, Issue 12 2007
    Joseph K.W. Fung
    This study uses transaction records of index futures and index stocks, with bid/ask price quotes, to examine the impact of stock market order imbalance on the dynamic behavior of index futures and cash index prices. Spurious correlation in the index is purged by using an estimate of the "true" index with highly synchronous and active quotes of individual stocks. A smooth transition autoregressive error correction model is used to describe the nonlinear dynamics of the index and futures prices. Order imbalance in the cash stock market is found to affect significantly the error correction dynamics of index and futures prices. Order imbalance impedes error correction particularly when the market impact of order imbalance works against the error correction force of the cash index, explaining why real potential arbitrage opportunities may persist over time. Incorporating order imbalance in the framework significantly improves its explanatory power. The findings indicate that a stock market microstructure that allows a quick resolution of order imbalance promotes dynamic arbitrage efficiency between futures and underlying stocks. The results also suggest that the unloading of cash stocks by portfolio managers in a falling market situation aggravates the price decline and increases the real cost of hedging with futures. © 2007 Wiley Periodicals, Inc. Jrl Fut Mark 27:1129–1157, 2007 [source]
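
    As a rough illustration in LaTeX (my notation, not necessarily the authors' exact specification), a smooth transition error correction equation augmented with order imbalance might take the form:

        % z_{t-1}: futures-cash mispricing; G: logistic transition function;
        % OIB: signed order imbalance in the cash market. Notation illustrative.
        \Delta y_t = \alpha_1 z_{t-1} + \alpha_2 z_{t-1}\, G(z_{t-1}; \gamma, c)
            + \delta\, \mathrm{OIB}_{t-1} + \sum_{j} \phi_j \Delta y_{t-j} + \varepsilon_t,
        \qquad G(z; \gamma, c) = \frac{1}{1 + e^{-\gamma (z - c)}}.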


    MPE-IFEC: An enhanced burst error protection for DVB-SH systems

    BELL LABS TECHNICAL JOURNAL, Issue 1 2009
    Bessem Sayadi
    Digital video broadcasting-satellite services to handhelds (DVB-SH) is a new hybrid satellite/terrestrial system for the broadcasting of multimedia services to mobile receivers. To improve the link budget, DVB-SH uses a long interleaver to cope with land mobile satellite (LMS) channel impairments. Multi-protocol encapsulation–inter-burst forward error correction (MPE-IFEC) is an attractive alternative to the long physical interleaving option of the standard and is suited for terminal receivers with limited de-interleaving memory. In this paper, we present a tutorial overview of this powerful error-correcting technique and report new simulation results that show MPE-IFEC improves the quality of broadcast mobile television (TV) reception. © 2009 Alcatel-Lucent. [source]


    Efficient repair mechanism of real-time broadcast services in hybrid DVB-SH and cellular systems

    BELL LABS TECHNICAL JOURNAL, Issue 1 2009
    Bessem Sayadi
    In order to ensure good video quality and location-independent access to multimedia content, digital video broadcasting-satellite service to handhelds (DVB-SH) takes advantage of several innovative techniques. The major one is the forward error correction (FEC) scheme implemented at the link layer called multiprotocol encapsulation-inter-burst FEC (MPE-IFEC). MPE-IFEC supports reception in situations of long-duration erasures spanning several consecutive time slice bursts (lasting several seconds) due to characteristics of the land mobile satellite channel, which is easily hampered by obstacles such as trees, buildings, or overpasses. However, when deep signal fades last for longer durations, the MPE-IFEC correction capacity is insufficient and MPE-IFEC fails, causing a service interruption. In this paper, we propose a repair mechanism and a suitable architecture for real-time streaming service error handling with reduced degrading effects such as picture freeze, video frame degradation, and video lag. Based on an analytical model of the performance of the MPE-IFEC, an iterative algorithm is proposed where the probability of recovery of lost bursts is computed and updated. The proposed algorithm controls the retransmission request on the cellular network. Simulation results show that by recovering only some specific lost bursts via the cellular path, the quality of experience, here expressed in terms of burst error rate, is improved. © 2009 Alcatel-Lucent. [source]


    Protecting IPTV against packet loss: Techniques and trade-offs

    BELL LABS TECHNICAL JOURNAL, Issue 1 2008
    Natalie Degrande
    Packet loss ratios that are harmless to the quality of data and Voice over Internet Protocol (VoIP) services may still seriously jeopardize that of Internet Protocol television (IPTV) services. In digital subscriber line (DSL)-based access networks, the last mile in particular suffers from packet loss, but other parts of the network may do so too. While on the last mile link the packet loss is due to bit errors, in other parts of the network it is caused by buffers overflowing or the network experiencing (short) outages due to link or node failures. To retrieve lost packets, the application layer (AL) can use either a forward error correction (FEC) or a retransmission scheme. These schemes, when properly tuned, increase the quality of an IPTV service to an adequate level, at the expense of some overhead bit rate, extra latency, and possibly an increase in channel change time. This paper compares the performance of FEC schemes based on Reed-Solomon (RS) codes with that of retransmission schemes, all tuned to conform to the same maximum overhead bit rate allowed on the last mile link and on the feeder link, and their possible impact on the channel change time. We take into account two kinds of loss processes that can occur: isolated packet losses and burst packet losses. In almost all scenarios, retransmission outperforms FEC. © 2008 Alcatel-Lucent. [source]
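
    A small back-of-envelope sketch of the FEC side of this trade-off, under the simplifying assumption of independent packet loss (real last-mile loss is bursty, as the paper stresses): an RS(n, k) source block of n packets, k media plus n − k parity, is recovered if and only if at most n − k of its packets are lost.

        # Residual loss after RS(n, k) application-layer FEC under i.i.d. loss p.
        from math import comb

        def block_failure(n, k, p):
            # P[block unrecoverable] = P[more than n - k of the n packets lost]
            return sum(comb(n, j) * p**j * (1 - p)**(n - j)
                       for j in range(n - k + 1, n + 1))

        for n, k in [(100, 95), (255, 239)]:
            print(f"RS({n},{k}) overhead {(n - k) / k:.1%}:",
                  f"{block_failure(n, k, 1e-3):.2e}")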


    Modeling Data with Excess Zeros and Measurement Error: Application to Evaluating Relationships between Episodically Consumed Foods and Health Outcomes

    BIOMETRICS, Issue 4 2009
    Victor Kipnis
    Summary Dietary assessment of episodically consumed foods gives rise to nonnegative data that have excess zeros and measurement error. Tooze et al. (2006, Journal of the American Dietetic Association 106, 1575–1587) describe a general statistical approach (National Cancer Institute method) for modeling such food intakes reported on two or more 24-hour recalls (24HRs) and demonstrate its use to estimate the distribution of the food's usual intake in the general population. In this article, we propose an extension of this method to predict individual usual intake of such foods and to evaluate the relationships of usual intakes with health outcomes. Following the regression calibration approach for measurement error correction, individual usual intake is generally predicted as the conditional mean intake given 24HR-reported intake and other covariates in the health model. One feature of the proposed method is that additional covariates potentially related to usual intake may be used to increase the precision of estimates of usual intake and of diet-health outcome associations. Applying the method to data from the Eating at America's Table Study, we quantify the increased precision obtained from including reported frequency of intake on a food frequency questionnaire (FFQ) as a covariate in the calibration model. We then demonstrate the method in evaluating the linear relationship between log blood mercury levels and fish intake in women by using data from the National Health and Nutrition Examination Survey, and show increased precision when including the FFQ information. Finally, we present simulation results evaluating the performance of the proposed method in this context. [source]
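
    A stripped-down sketch of the two-part structure that underlies the NCI method (person-level random effects and the full measurement error model are omitted; data and variable names are hypothetical): a logistic model for whether the food is consumed on a recall day, and a log-scale amount model given consumption, combined to predict usual intake.

        # Two-part model sketch: usual intake = P(consume) * E[amount | consume].
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(4)
        n = 1000
        ffq = rng.normal(size=n)                              # FFQ frequency covariate
        p_true = 1 / (1 + np.exp(-(-1.0 + 0.8 * ffq)))        # true consumption prob.
        consumed = rng.random(n) < p_true                     # 24HR: any intake today?
        amount = np.where(consumed,
                          np.exp(1.0 + 0.3 * ffq + rng.normal(scale=0.5, size=n)),
                          0.0)

        # Part 1: logistic model for any consumption; Part 2: log-amount model.
        X = sm.add_constant(ffq)
        part1 = sm.Logit(consumed.astype(int), X).fit(disp=0)
        part2 = sm.OLS(np.log(amount[consumed]), X[consumed]).fit()

        phat = part1.predict(X)
        amt_hat = np.exp(part2.predict(X) + part2.scale / 2)  # lognormal mean
        usual_intake = phat * amt_hat                         # predicted usual intake
        print(usual_intake[:5])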


    RETURNS TO EQUITY, INVESTMENT AND Q: EVIDENCE FROM THE UK

    THE MANCHESTER SCHOOL, Issue 2005
    SIMON PRICE
    Conventional wisdom has it that Tobin's Q cannot help explain aggregate investment. However, the standard linearized present-value asset price decomposition suggests that it should be able to predict other variables, such as stock returns. Using a new data set for the UK, we find that Q has strong predictive power for debt accumulation, stock returns and UK business investment. The correctly signed results on both returns and investment appear to be robust, and are supported by the commonly used and bootstrapped standard error corrections, as well as recently developed asymptotic corrections. [source]


    Haplotype-Based Regression Analysis and Inference of Case,Control Studies with Unphased Genotypes and Measurement Errors in Environmental Exposures

    BIOMETRICS, Issue 3 2008
    Iryna Lobach
    Summary It is widely believed that risks of many complex diseases are determined by genetic susceptibilities, environmental exposures, and their interaction. Chatterjee and Carroll (2005, Biometrika92, 399,418) developed an efficient retrospective maximum-likelihood method for analysis of case,control studies that exploits an assumption of gene,environment independence and leaves the distribution of the environmental covariates to be completely nonparametric. Spinka, Carroll, and Chatterjee (2005, Genetic Epidemiology29, 108,127) extended this approach to studies where certain types of genetic information, such as haplotype phases, may be missing on some subjects. We further extend this approach to situations when some of the environmental exposures are measured with error. Using a polychotomous logistic regression model, we allow disease status to have K+ 1 levels. We propose use of a pseudolikelihood and a related EM algorithm for parameter estimation. We prove consistency and derive the resulting asymptotic covariance matrix of parameter estimates when the variance of the measurement error is known and when it is estimated using replications. Inferences with measurement error corrections are complicated by the fact that the Wald test often behaves poorly in the presence of large amounts of measurement error. The likelihood-ratio (LR) techniques are known to be a good alternative. However, the LR tests are not technically correct in this setting because the likelihood function is based on an incorrect model, i.e., a prospective model in a retrospective sampling scheme. We corrected standard asymptotic results to account for the fact that the LR test is based on a likelihood-type function. The performance of the proposed method is illustrated using simulation studies emphasizing the case when genetic information is in the form of haplotypes and missing data arises from haplotype-phase ambiguity. An application of our method is illustrated using a population-based case,control study of the association between calcium intake and the risk of colorectal adenoma. [source]