Heavy Tails
Selected Abstracts

The distribution of file transmission duration in the web
INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 5 2004
R. Nossenson

Abstract: It is well known that the distribution of file transmission durations in the Web is heavy-tailed (A Practical Guide to Heavy Tails: Statistical Techniques and Applications. Birkhäuser: Boston, 1998; 3–26). This paper attempts to understand the reasons for this phenomenon by isolating the three major factors influencing the transmission duration: file size, network conditions and server load. We present evidence that the transmission-duration distribution (TDD) of the same file from the same server to the same client in the Web is Pareto and therefore heavy-tailed. Furthermore, the transmission delay of text files for a specific client/server pair is not significantly affected by file size: all files transmitted from the same server to the same client have very similar transmission duration distributions, regardless of their size. We use simulations to estimate the impact of network conditions and server load on the TDD. When the server and the client are on the same local network, the TDD of each file is usually Pareto as well (for server files and client requests that are distributed in a realistic way). By examining a wide-area network situation, we conclude that network conditions do not have a major influence on the heavy-tailed behaviour of the TDD. In contrast, the server load is shown to have a significant impact on the high variability of this distribution. Copyright © 2004 John Wiley & Sons, Ltd. [source]

On the Quantile Regression Based Tests for Asymmetry in Stock Return Volatility
ASIAN ECONOMIC JOURNAL, Issue 2 2002
Beum-Jo Park

Abstract: This paper examines whether the asymmetry of stock return volatility varies with the level of volatility. To this end, quantile regression based tests (τ-tests) are proposed. These tests differ from the diagnostic tests introduced by Engle and Ng (1993) insofar as they provide a complete picture of asymmetries in volatility across the quantiles of the variance distribution and, in the case of non-normal errors, they have improved power owing to their robustness against non-normality. Small-scale Monte Carlo evidence suggests that the Wald and likelihood ratio (LR) tests among the τ-tests are reasonable, outperforming the Lagrange multiplier (LM) test based on least squares residuals when the innovations exhibit heavy tails. Using the normalized residuals obtained from AR(1)-GARCH(1,1) estimation, the test results demonstrated that only the TOPIX out of six stock-return series had asymmetry in volatility at the moderate level, while all stock-return series except the FAZ and FA100 had more significant asymmetry in volatility at higher levels. Interestingly, it is clear from the empirical findings that, in line with the hypothesis of leverage effects, the volatility of the TOPIX, CAC40 and MIB tends to respond significantly to extremely negative shocks at a high level, but is not correlated with any positive shock. These might be valuable findings that have not been seriously considered in past research, which has focussed only on the mean level of volatility. [source]

Multi-step forecasting for nonlinear models of high frequency ground ozone data: a Monte Carlo approach
ENVIRONMETRICS, Issue 4 2002
Alessandro Fassò

Abstract: Multi-step prediction using high frequency environmental data is considered. The complex dynamics of ground ozone often requires models involving covariates, multiple frequency periodicities, long memory, nonlinearity and heteroscedasticity. For these reasons, parametric models, which include seasonal fractionally integrated components, self-exciting threshold autoregressive components, covariates and autoregressive conditionally heteroscedastic errors with heavy tails, have recently been introduced. Here, to obtain an h-step-ahead forecast for these models we use a Monte Carlo approach. The performance of the forecast is evaluated on different nonlinear models by comparing some statistical indices with respect to the prediction horizon. As an application of this method, the forecast precision of a 2 year hourly ozone data set from an air traffic pollution station located in Bergamo, Italy, is analyzed. Copyright © 2002 John Wiley & Sons, Ltd. [source]
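The Monte Carlo multi-step forecasting idea in the abstract above is straightforward to sketch: an h-step-ahead predictive distribution for a nonlinear model has no closed form, so one simulates many future paths and summarizes them. The sketch below uses a hypothetical two-regime SETAR model with Student-t innovations as a stand-in for the far richer ozone models described in the paper; all coefficients, the threshold and the parameter names are illustrative assumptions, not the authors' specification.

```python
import numpy as np

rng = np.random.default_rng(0)

def setar_step(y, rng):
    """One step of a toy two-regime SETAR model with heavy-tailed t(4) noise."""
    eps = rng.standard_t(df=4)
    # Regime switches on the current level (assumed threshold at 0).
    return (0.6 * y if y <= 0.0 else -0.4 * y) + eps

def mc_forecast(y0, h, n_paths, rng):
    """h-step-ahead forecast: simulate n_paths futures, then average step by step."""
    paths = np.empty((n_paths, h))
    for i in range(n_paths):
        y = y0
        for t in range(h):
            y = setar_step(y, rng)
            paths[i, t] = y
    # Point forecast plus a 90% simulation band at each horizon.
    return paths.mean(axis=0), np.percentile(paths, [5, 95], axis=0)

point, bands = mc_forecast(y0=1.5, h=24, n_paths=5000, rng=rng)
print(point[:5])   # first five forecast steps
```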
Class-based weighted fair queueing: validation and comparison by trace-driven simulation
INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 10 2005
Rachid El Abdouni Khayari

Abstract: World-wide web servers as well as proxy servers rely for their scheduling on services provided by the underlying operating system. In practice, this means that some form of first-come-first-served (FCFS) scheduling is utilized. Although FCFS is a reasonable scheduling strategy for job sequences that do not show much variance, for the world-wide web it has been shown that the requested-object sizes do exhibit heavy tails. Under these circumstances, job scheduling on the basis of shortest-job-first (SJF) or shortest remaining processing time (SRPT) has been shown to minimize the total average waiting time. However, these methods have the disadvantage of potential job starvation. In order to avoid the problems of both FCFS and SJF, we present in this paper a new scheduling approach called class-based interleaving weighted fair queueing (CI-WFQ). This scheduling approach exploits the specific characteristics of the job stream being served, that is, the distribution of the sizes of the objects being requested, to set its parameters such that good mean response times are obtained and starvation does not occur. In that sense, the new scheduling strategy can be made adaptive to the characteristics of the job stream being served. In this paper we compare the new scheduling approach (using trace-driven simulations) to FCFS, SJF and the recently introduced α-scheduling, and show that CI-WFQ combines very good performance (as far as mean and variance of response time and blocking probability are concerned) with a scheduling complexity almost as low as that of FCFS (and hence lower than that of SJF and α-scheduling). The use of trace-driven simulation is essential, since the special properties of the arrival process make analytical solutions very difficult to achieve. Copyright © 2005 John Wiley & Sons, Ltd. [source]
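Why heavy-tailed object sizes make FCFS costly is easy to demonstrate: a few enormous jobs at the head of the queue delay everything behind them. The toy below, a deliberately simplified batch queue rather than the paper's trace-driven setup, compares mean waiting times under FCFS and SJF for Pareto-distributed job sizes; the tail index 1.2 and the sample size are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Pareto sizes with tail index 1.2: finite mean, infinite variance,
# mimicking the heavy-tailed requested-object sizes discussed above.
sizes = rng.pareto(1.2, size=10_000) + 1.0

def mean_wait(order):
    """Mean waiting time when all jobs are queued at time 0 and served in `order`."""
    finish = np.cumsum(sizes[order])
    return (finish - sizes[order]).mean()   # wait = time until service starts

fcfs = mean_wait(np.arange(len(sizes)))     # serve in arrival order
sjf = mean_wait(np.argsort(sizes))          # serve shortest job first
print(f"FCFS mean wait: {fcfs:.1f}, SJF mean wait: {sjf:.1f}")
```

With heavy tails the gap is dramatic, which is the motivation for SJF-like policies; the starvation problem that CI-WFQ addresses does not show up in a batch toy like this, since every job eventually runs.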
A comparison between multivariate Slash, Student's t and probit threshold models for analysis of clinical mastitis in first lactation cows
JOURNAL OF ANIMAL BREEDING AND GENETICS, Issue 5 2006
Y-M. Chang

Summary: Robust threshold models with multivariate Student's t or multivariate Slash link functions were employed to infer genetic parameters of clinical mastitis at different stages of lactation, with each cow defining a cluster of records. The robust fits were compared with that from a multivariate probit model via a pseudo-Bayes factor and an analysis of residuals. Clinical mastitis records on 36 178 first-lactation Norwegian Red cows from 5286 herds, daughters of 245 sires, were analysed. The opportunity-for-infection interval, going from 30 days pre-calving to 300 days postpartum, was divided into four periods: (i) −30 to 0 days pre-calving; (ii) 1–30 days; (iii) 31–120 days; and (iv) 121–300 days of lactation. Within each period, absence or presence of clinical mastitis was scored as 0 or 1, respectively. Markov chain Monte Carlo methods were used to draw samples from the posterior distributions of interest. Pseudo-Bayes factors strongly favoured the multivariate Slash and Student's t models over the probit model. The posterior mean of the degrees-of-freedom parameter for the Slash model was 2.2, indicating heavy tails of the liability distribution. The posterior mean of the degrees of freedom for the Student's t model was 8.5, also pointing away from a normal liability for clinical mastitis. A residual was the observed phenotype (0 or 1) minus the posterior mean of the probability of mastitis. The Slash and Student's t models tended to have smaller residuals than the probit model in cows that contracted mastitis. Heritability of liability to clinical mastitis was 0.13–0.14 before calving, and ranged from 0.05 to 0.08 after calving in the robust models. Genetic correlations were between 0.50 and 0.73, suggesting that clinical mastitis resistance is not the same trait across periods, corroborating earlier findings with probit models. [source]

Asymptotic self-similarity and wavelet estimation for long-range dependent fractional autoregressive integrated moving average time series with stable innovations
JOURNAL OF TIME SERIES ANALYSIS, Issue 2 2005
Stilian Stoev

MSC: Primary 60G18, 60E07; Secondary 62M10, 62G20

Abstract: Methods for parameter estimation in the presence of long-range dependence and heavy tails are scarce. Fractional autoregressive integrated moving average (FARIMA) time series for positive values of the fractional differencing exponent d can be used to model long-range dependence in the case of heavy-tailed distributions. In this paper, we focus on the estimation of the Hurst parameter H = d + 1/α for long-range dependent FARIMA time series with symmetric α-stable (1 < α < 2) innovations. We establish the consistency and the asymptotic normality of two types of wavelet estimators of the parameter H. We do so by exploiting the fact that the integrated series is asymptotically self-similar with parameter H. When the parameter α is known, we also obtain consistent and asymptotically normal estimators for the fractional differencing exponent d = H − 1/α. Our results hold for a larger class of causal linear processes with stable symmetric innovations. As the wavelet-based estimation method used here is semi-parametric, it allows for a more robust treatment of long-range dependent data than parametric methods. [source]
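The wavelet estimation strategy in the abstract above rests on self-similarity: the energy of the wavelet detail coefficients grows geometrically across scales at a rate governed by H, so a regression of log-energies on scale recovers it. The sketch below implements this idea with a hand-rolled Haar transform on a toy heavy-tailed random walk; the scaling convention used (slope = 2H + 1) and all tuning choices are assumptions of this illustration, not the estimators analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def haar_detail_energies(x, levels):
    """Mean squared Haar detail coefficients at scales j = 1..levels."""
    energies = []
    for _ in range(levels):
        n = len(x) // 2 * 2
        even, odd = x[:n:2], x[1:n:2]
        detail = (even - odd) / np.sqrt(2.0)
        energies.append(np.mean(detail ** 2))
        x = (even + odd) / np.sqrt(2.0)   # approximation coefficients feed the next scale
    return np.array(energies)

# Toy self-similar input: a random walk with heavy-tailed t(1.5) increments.
x = np.cumsum(rng.standard_t(df=1.5, size=2**14))
j = np.arange(1, 11)
# Under the convention assumed here, log2 E[d_j^2] is linear in j with slope 2H + 1.
slope = np.polyfit(j, np.log2(haar_detail_energies(x, 10)), 1)[0]
H_hat = (slope - 1) / 2
print(f"estimated H: {H_hat:.2f}")
```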
Prediction and nonparametric estimation for time series with heavy tails
JOURNAL OF TIME SERIES ANALYSIS, Issue 3 2002
Peter Hall

Motivated by prediction problems for time series with heavy-tailed marginal distributions, we consider methods based on 'local least absolute deviations' for estimating a regression median from dependent data. Unlike more conventional 'local median' methods, which are in effect based on locally fitting a polynomial of degree 0, techniques founded on local least absolute deviations have quadratic bias right up to the boundary of the design interval. Also, in contrast to local least-squares methods based on linear fits, the order of magnitude of the variance does not depend on the tail-weight of the error distribution. To make these points clear, we develop theory describing local applications to time series of both least-squares and least-absolute-deviations methods, showing, for example, that in the case of heavy-tailed data the conventional local-linear least-squares estimator suffers from an additional bias term as well as increased variance. [source]

Portfolio Value-at-Risk with Heavy-Tailed Risk Factors
MATHEMATICAL FINANCE, Issue 3 2002
Paul Glasserman

This paper develops efficient methods for computing portfolio value-at-risk (VAR) when the underlying risk factors have a heavy-tailed distribution. In modeling heavy tails, we focus on multivariate t distributions and some extensions thereof. We develop two methods for VAR calculation that exploit a quadratic approximation to the portfolio loss, such as the delta-gamma approximation. In the first method, we derive the characteristic function of the quadratic approximation and then use numerical transform inversion to approximate the portfolio loss distribution. Because the quadratic approximation may not always yield accurate VAR estimates, we also develop a low-variance Monte Carlo method. This method uses the quadratic approximation to guide the selection of an effective importance sampling distribution that samples risk factors so that large losses occur more often. Variance is further reduced by combining the importance sampling with stratified sampling. Numerical results on a variety of test portfolios indicate that large variance reductions are typically obtained. Both methods developed in this paper overcome difficulties associated with VAR calculation with heavy-tailed risk factors. The Monte Carlo method also extends to the problem of estimating the conditional excess, sometimes known as the conditional VAR. [source]
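As a point of reference for the methods described above, the naive Monte Carlo version of the delta-gamma setup is short to write down: draw multivariate t risk-factor moves, push them through a quadratic loss, and read off quantiles. The sketch below is the plain baseline that the paper's transform-inversion and importance/stratified-sampling methods improve upon; the delta vector and Gamma matrix are randomly generated placeholders, not a real portfolio.

```python
import numpy as np

rng = np.random.default_rng(3)

d = 5                                   # number of risk factors (assumed)
delta = rng.normal(size=d)              # placeholder portfolio delta
A = rng.normal(size=(d, d))
gamma = 0.1 * (A + A.T)                 # placeholder symmetric Gamma matrix

def mv_t(n, df, dim, rng):
    """Multivariate t samples: standard normals scaled by sqrt(df / chi2_df)."""
    z = rng.normal(size=(n, dim))
    w = rng.chisquare(df, size=(n, 1))
    return z * np.sqrt(df / w)

X = mv_t(200_000, df=4, dim=d, rng=rng)  # heavy-tailed risk-factor moves
# Delta-gamma approximation: loss is minus the quadratic value change.
loss = -(X @ delta + 0.5 * np.einsum('ni,ij,nj->n', X, gamma, X))
var_99 = np.quantile(loss, 0.99)
cvar_99 = loss[loss > var_99].mean()     # conditional excess / conditional VAR
print(f"99% VaR: {var_99:.2f}, conditional VaR: {cvar_99:.2f}")
```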
Robust modelling of DTARCH models
THE ECONOMETRICS JOURNAL, Issue 2 2005
Yer Van Hui

Summary: Autoregressive conditional heteroscedastic (ARCH) models and their extensions are widely used in modelling volatility in financial time series. One of the variants, the double-threshold autoregressive conditional heteroscedastic (DTARCH) model, has been proposed to model a conditional mean and a conditional variance that are piecewise linear. The DTARCH model is also useful for modelling conditional heteroscedasticity with nonlinear structures such as asymmetric cycles, jump resonance and amplitude-frequency dependence. Since asset returns often display heavy tails and outliers, it is worth studying robust DTARCH modelling without a specific distributional assumption. This paper studies DTARCH structures for the conditional scale instead of the conditional variance. We examine L1-estimation of the DTARCH model and derive limiting distributions for the proposed estimators. A robust portmanteau statistic based on the L1-norm fit is constructed to test model adequacy. This approach captures various nonlinear phenomena and stylized facts with desirable robustness. Simulations show that the L1-estimators are robust against innovation distributions and accurate for a moderate sample size, and that the proposed test is not only robust against innovation distributions but also powerful in discriminating the delay parameters and ARCH models. It is noted that the quasi-likelihood modelling approach used in ARCH models is inappropriate for DTARCH models in the presence of outliers and heavy-tailed innovations. [source]

Time variation in the tail behavior of Bund future returns
THE JOURNAL OF FUTURES MARKETS, Issue 4 2004
Thomas Werner

The literature on the tail behavior of asset prices focuses mainly on the foreign exchange and stock markets, with only a few articles dealing with bonds or bond futures. The present article addresses this omission. It focuses on three questions using extreme value analysis: (a) Does the distribution of Bund future returns have heavy tails? (b) Do the tails change over time? (c) Does the tail index provide information that is not captured by a standard VaR approach? The results are as follows: (a) The distribution of high-frequency returns of the Bund future is indeed characterized by heavy tails. The tails are thinner for lower frequencies, but remain significantly heavy even for daily data. (b) There are statistically significant breaks in the tails of the return distribution. (c) The likelihood of extreme price movements suggested by extreme value theory differs from that obtained by standard risk measures. This suggests that the tail index does indeed provide information not contained in volatility measures. © 2004 Wiley Periodicals, Inc. Jrl Fut Mark 24:387–398, 2004 [source]

Robust linear mixed models using the skew t distribution with application to schizophrenia data
BIOMETRICAL JOURNAL, Issue 4 2010
Hsiu J. Ho

Abstract: We consider an extension of linear mixed models by assuming a multivariate skew t distribution for the random effects and a multivariate t distribution for the error terms. The proposed model provides flexibility in capturing the effects of skewness and heavy tails simultaneously among continuous longitudinal data. We present an efficient alternating expectation-conditional maximization (AECM) algorithm for the computation of maximum likelihood estimates of parameters on the basis of two convenient hierarchical formulations. The techniques for the prediction of random effects and intermittent missing values under this model are also investigated. Our methodologies are illustrated through an application to schizophrenia data. [source]
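To make the distributional assumptions in the last abstract concrete, the sketch below generates toy longitudinal data with skew-t random intercepts and symmetric-t errors, using the standard hidden-truncation construction of the skew-t. Everything here (the univariate simplification, parameter values, variable names) is an illustrative assumption; the paper's multivariate formulation and AECM fitting algorithm are not shown.

```python
import numpy as np

rng = np.random.default_rng(4)

def skew_t(n, loc=0.0, scale=1.0, alpha=3.0, df=5.0, rng=rng):
    """Skew-t draws via the hidden-truncation (Azzalini-style) construction."""
    delta = alpha / np.sqrt(1.0 + alpha**2)
    z0, z1 = np.abs(rng.normal(size=n)), rng.normal(size=n)
    sn = delta * z0 + np.sqrt(1.0 - delta**2) * z1   # skew-normal core
    w = rng.chisquare(df, size=n) / df               # chi-square mixing variable
    return loc + scale * sn / np.sqrt(w)

# Toy longitudinal data in the spirit of the model: skewed, heavy-tailed
# subject-specific intercepts plus symmetric t residuals.
n_subj, n_obs = 200, 6
b = skew_t(n_subj, alpha=4.0, df=6.0)                # skew-t random effects
t_err = rng.standard_t(df=4, size=(n_subj, n_obs))   # multivariate-t-like errors
time = np.arange(n_obs)
y = 1.0 + 0.5 * time + b[:, None] + t_err            # fixed trend + random intercept + error
print(y.shape, y.mean())
```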