Random Walk Model


Selected Abstracts


Measurement Error in a Random Walk Model with Applications to Population Dynamics

BIOMETRICS, Issue 4 2006
John Staudenmayer
Summary Population abundances are rarely, if ever, known. Instead, they are estimated with some amount of uncertainty. The resulting measurement error has consequences for subsequent analyses that model population dynamics and estimate probabilities about abundances at future points in time. This article addresses some outstanding questions on the consequences of measurement error in one such dynamic model, the random walk with drift model, and proposes some new ways to correct for measurement error. We present a broad and realistic class of measurement error models that allows both heteroskedasticity and possible correlation in the measurement errors, and we provide analytical results about the biases of estimators that ignore the measurement error. Our new estimators include both method of moments estimators and "pseudo"-estimators that proceed from both observed estimates of population abundance and estimates of parameters in the measurement error model. We derive the asymptotic properties of our methods and existing methods, and we compare their finite-sample performance with a simulation experiment. We also examine the practical implications of the methods by using them to analyze two existing population dynamics data sets. [source]
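The bias described above is easy to reproduce in a few lines. This is a minimal sketch, assuming (illustratively) a Gaussian random walk with drift observed with independent, homoskedastic measurement error; the lag-1 autocovariance correction shown stands in for the paper's broader class of method-of-moments estimators:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sig_proc, sig_obs, n = 0.05, 0.1, 0.2, 20000   # illustrative values

# Latent log-abundance follows a random walk with drift;
# observations add independent measurement error.
x = np.cumsum(mu + sig_proc * rng.standard_normal(n))
y = x + sig_obs * rng.standard_normal(n)

d = np.diff(y)          # observed increments
naive_var = d.var()     # estimates sig_proc**2 + 2*sig_obs**2: biased upward

# Method-of-moments correction: the lag-1 autocovariance of the
# increments identifies the measurement-error variance, since
# Cov(d_t, d_{t-1}) = -sig_obs**2.
acov1 = np.mean((d[1:] - d.mean()) * (d[:-1] - d.mean()))
mom_obs_var = -acov1
mom_proc_var = naive_var - 2 * mom_obs_var
```

The naive variance estimator lands near 0.09 rather than the true process variance 0.01, while the corrected estimator recovers both components.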


Shrinkability Maps for Content-Aware Video Resizing

COMPUTER GRAPHICS FORUM, Issue 7 2008
Yi-Fei Zhang
Abstract A novel method is given for content-aware video resizing, i.e. targeting video to a new resolution (which may involve aspect ratio change) from the original. We precompute a per-pixel cumulative shrinkability map which takes into account both the importance of each pixel and the need for continuity in the resized result. (If both x and y resizing are required, two separate shrinkability maps are used, otherwise one suffices). A random walk model is used for efficient offline computation of the shrinkability maps. The latter are stored with the video to create a multi-sized video, which permits arbitrary-sized new versions of the video to be later very efficiently created in real-time, e.g. by a video-on-demand server supplying video streams to multiple devices with different resolutions. These shrinkability maps are highly compressible, so the resulting multi-sized videos are typically less than three times the size of the original compressed video. A scaling function operates on the multi-sized video, to give the new pixel locations in the result, giving a high-quality content-aware resized video. Despite the great efficiency and low storage requirements for our method, we produce results of comparable quality to state-of-the-art methods for content-aware image and video resizing. [source]
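A one-dimensional sketch of the scaling idea, under the simplifying assumption that a per-column shrinkability map has already been computed (the values below are invented, not from the paper): the scaling function moves each column left in proportion to the shrinkability accumulated up to it, so unimportant regions absorb most of the size change.

```python
import numpy as np

# Hypothetical per-column shrinkability for a strip of an image:
# low values mark important content, high values mark safe-to-shrink regions.
shrink = np.array([0.1, 0.1, 1.0, 1.0, 1.0, 0.1, 0.1, 1.0])
W, W_new = len(shrink), 6          # remove two columns' worth of width

# Cumulative shrinkability map, precomputed once and stored with the video.
cum = np.cumsum(shrink)

# Scaling function: each column's new position shifts left in proportion
# to the cumulative shrinkability, normalised so the total width is W_new.
new_x = np.arange(W) - (W - W_new) * cum / cum[-1]
```

The mapping is monotone (no column crossings), which is the continuity requirement the shrinkability map is designed to preserve.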


Measuring dispersal and detecting departures from a random walk model in a grasshopper hybrid zone

ECOLOGICAL ENTOMOLOGY, Issue 2 2003
R. I. Bailey
Abstract. 1. The grasshopper species Chorthippus brunneus and C. jacobsi form a complex mosaic hybrid zone in northern Spain. Two mark–release–recapture studies were carried out near the centre of the zone in order to make direct estimates of lifetime dispersal. 2. A model framework based on a simple random walk in homogeneous habitat was extended to include the estimation of philopatry and flying propensity. Each model was compared with the real data, correcting for spatial and temporal biases in the data sets. 3. All four data sets (males and females at each site) deviated significantly from a random walk. Three of the data sets showed strong philopatry and three had a long dispersal tail, indicating a low propensity to move further than predicted by the random walk model. 4. Neighbourhood size estimates were 76 and 227 for the two sites. These estimates may underestimate effective population size, which could be increased by the long tail to the dispersal function. The random walk model overestimates lifetime dispersal and hence the minimum spatial scale of adaptation. 5. Best estimates of lifetime dispersal distance of 7–33 m per generation were considerably lower than a previous indirect estimate of 1344 m per generation. This discrepancy could be influenced by prezygotic isolation, an inherent by-product of mosaic hybrid zone structure. [source]
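The null model in point 2 can be sketched as a simple simulation, assuming a fixed step length and uniformly random directions (both parameters are illustrative, not estimates from the study); field data with strong philopatry would fall well below the random walk's predicted displacement.

```python
import numpy as np

rng = np.random.default_rng(1)
n_hoppers, n_moves, step = 5000, 50, 2.0   # hypothetical parameters

# Simple random walk in homogeneous habitat: each move has a
# uniform random direction and a fixed step length.
theta = rng.uniform(0, 2 * np.pi, size=(n_hoppers, n_moves))
dx = step * np.cos(theta)
dy = step * np.sin(theta)
dist = np.hypot(dx.sum(axis=1), dy.sum(axis=1))   # lifetime displacement

# Under the null model the RMS displacement is step * sqrt(n_moves).
rms = np.sqrt(np.mean(dist ** 2))
expected_rms = step * np.sqrt(n_moves)
```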


Forecasting the Direction of Policy Rate Changes: The Importance of ECB Words

ECONOMIC NOTES, Issue 1-2 2009
Carlo Rosa
This paper evaluates the predictive power of different information sets for the European Central Bank (ECB) interest-rate-setting behaviour. We employ an ordered probit model, i.e. a limited dependent variable framework, to take into account the discreteness displayed by policy rate changes. The results show that the forecasting ability of standard Taylor-type variables, such as inflation and output gap, is fairly low both in-sample and out-of-sample, and is comparable to the performance of the random walk model. Instead, by using broader information sets that include measures of core inflation, exchange rates, monetary aggregates and financial conditions, the accuracy of the forecasts about ECB future actions substantially improves. Moreover, ECB rhetoric considerably contributes to a better understanding of its policy reaction function. Finally, we find that the ECB has been fairly successful in educating the public to anticipate the overall future direction of its monetary policy, but has been less successful in signalling the exact timing of rate changes. [source]
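A minimal sketch of an ordered probit of the kind described, fit by maximum likelihood with scipy on simulated data. The single regressor and the cutpoints are illustrative stand-ins; the paper's actual specification uses many more variables.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Hypothetical data: one explanatory variable (say, an inflation gap)
# drives a latent desired rate change, observed only as a discrete
# decision: 0 = cut, 1 = hold, 2 = hike.
x = rng.standard_normal(500)
latent = 1.5 * x + rng.standard_normal(500)
y = np.digitize(latent, [-1.0, 1.0])          # true cutpoints at -1 and +1

def neg_loglik(params):
    beta, c1, gap = params
    c2 = c1 + np.exp(gap)                     # keeps the cutpoints ordered
    z = beta * x
    p = np.select([y == 0, y == 1, y == 2],
                  [norm.cdf(c1 - z),
                   norm.cdf(c2 - z) - norm.cdf(c1 - z),
                   1 - norm.cdf(c2 - z)])
    return -np.sum(np.log(np.clip(p, 1e-12, None)))

fit = minimize(neg_loglik, x0=[0.0, -0.5, 0.0],
               method="Nelder-Mead", options={"maxiter": 2000})
beta_hat = fit.x[0]
```

The recovered slope should be close to the true value of 1.5, which is the sense in which the latent-variable framework accommodates discrete rate decisions.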


Automated generation of new knowledge to support managerial decision-making: case study in forecasting a stock market

EXPERT SYSTEMS, Issue 4 2004
Se-Hak Chun
Abstract: The deluge of data available to managers underscores the need to develop intelligent systems to generate new knowledge. Such tools are available in the form of learning systems from artificial intelligence. This paper explores how these novel tools can support decision-making in the ubiquitous managerial task of forecasting. For concreteness, the methodology is examined in the context of predicting a financial index whose chaotic properties render the time series difficult to predict. The study investigates the circumstances under which enough new knowledge is extracted from temporal data to overturn the efficient markets hypothesis. The efficient markets hypothesis precludes the possibility of anticipating price movements in financial markets. More precisely, the markets are deemed to be so efficient that the best forecast of a price level for the subsequent period is precisely the current price. Certain anomalies to the efficient market premise have been observed, such as calendar effects. Even so, forecasting techniques have been largely unable to outperform the random walk model which corresponds to the behavior of prices under the efficient markets hypothesis. This paper tests the validity of the efficient markets hypothesis by developing knowledge-based tools to forecast a market index. The predictions are examined across several horizons: single-period forecasts as well as multiple periods. For multiperiod forecasts, the predictive methodology takes two forms: a single jump from the current period to the end of the forecast horizon, and a multistage web of forecasts which progresses systematically from one period to the next. These models are first evaluated using neural networks and case-based reasoning, and are then compared against a random walk model. The computational models are examined in the context of forecasting a composite for the Korean stock market. [source]
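The random walk benchmark is easy to make concrete. This sketch simulates a log-price random walk and evaluates the "no-change" forecast at a multiperiod horizon, the baseline any learning model must beat out-of-sample (all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated log-price series: a pure random walk, the process implied
# by the efficient markets hypothesis.
logp = np.cumsum(0.01 * rng.standard_normal(1000))

h = 5                     # forecast horizon in periods
actual = logp[h:]

# Random walk forecast: the best guess at any horizon is today's price
# (a "single jump" to the end of the horizon).
rw_pred = logp[:-h]

# A learning model must beat this RMSE out-of-sample to count as
# evidence against market efficiency.
rw_rmse = np.sqrt(np.mean((actual - rw_pred) ** 2))
theoretical = 0.01 * np.sqrt(h)   # RMSE of an h-step random walk
```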


Multiple pathology and tails of disability: Space–time structure of disability in longevity

GERIATRICS & GERONTOLOGY INTERNATIONAL, Issue 4 2003
Satoru Matsushita
Disability and the resulting lowered quality of life are serious issues accompanying increased longevity. Curiously, despite its potential contribution to aging theory, complete statistical and etiological structures of this common and unwelcome aging phenotype before death have not been well identified. Another neglected issue in aging and disability is the principles of phylogenesis and morphogenesis, which contemporary life science invariably starts with. In the present review these two related subjects are addressed, with an introduction of an analysis on patients and published data. Statistically rigorous log-normal and normal distributions distinguish disability for its duration and age-wise distribution, respectively. Multiple pathology and diverse effects of various endogenous diseases on disability are confirmed. The robust long-tailed log-normal distribution for various phases of disability validates the fact that patients in disability undergo a series of stochastic subprocesses of many independent endogenous diseases until death. For 60% of patients, the log-normal distribution is mimicked by a random walk model. Diseases of core organs are major causes of the long tails. A declining force of natural selection after reproduction and trade-off of life history through pleiotropy of the genes are considered to be the roots of aging. The attenuated selection pressure and the resulting decrease of genetic constraints produce an increased opportunity for chance and stochastics. Elucidated stochastic behaviors of disability underscore the key role of chance in aging. Evolutionary modifications in the development of the structure tend to favor developmentally later stages first. Distal parts are developmentally last, therefore most subject to modification. The rate of molecular evolution of the genes is also found to be relatively slow at the core and rapid at the edge of cells and organs.
Therefore, systems at the core must be relatively slow and inactive to comply with pleiotropy and trade-offs in comparison with systems at the edge. Hence, against flat and probabilistic aging, the core organs must be moulded to be more robust with a lower threshold for dysfunction, to age relatively slowly, and should have less of a disease quota in aging. The principle of core protective aging assures possibilities not only to reduce disability but also to accomplish the Third Age as well. Finally, it must also be acknowledged that the principle is a double-edged sword. Paradoxically, the developed biological and societal organization provides protection for the injured core, and so develops long tails of disability. The principle of core protective aging re-emphasizes the key role of prevention in order to reduce the amount of disability. [source]


Forecasting and Finite Sample Performance of Short Rate Models: International Evidence,

INTERNATIONAL REVIEW OF FINANCE, Issue 3-4 2005
SIRIMON TREEPONGKARUNA
ABSTRACT This paper evaluates the forecasting and finite sample performance of short-term interest rate models in a number of countries. Specifically, we run a series of in-sample and out-of-sample tests for both the conditional mean and volatility of one-factor short rate models, and compare the results to the random walk model. Overall, we find that the out-of-sample forecasting performance of one-factor short rate models is poor, stemming from the inability of the models to accommodate jumps and discontinuities in the time series data. In addition, we perform a series of Monte Carlo analyses similar to Chapman and Pearson to document the finite sample performance of the short rate models when the elasticity parameter γ is not restricted to be equal to one. Our results indicate the potential dangers of over-parameterization and highlight the limitations of short-term interest rate models. [source]
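A minimal Euler-discretised sketch of a one-factor short rate model of the kind tested, written here as a mean-reverting CKLS-style diffusion with level-effect exponent γ (all parameter values are illustrative, not estimates from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)

# Euler discretisation of dr = kappa*(theta - r)*dt + sigma*r**gamma*dW.
# gamma = 1 is the level-effect restriction discussed in the text.
kappa, theta, sigma, gamma = 0.5, 0.05, 0.1, 1.0
dt, n = 1 / 252, 252 * 40            # daily steps over 40 years

r = np.empty(n)
r[0] = theta
for t in range(1, n):
    drift = kappa * (theta - r[t - 1]) * dt
    diffusion = sigma * max(r[t - 1], 0.0) ** gamma * np.sqrt(dt)
    r[t] = r[t - 1] + drift + diffusion * rng.standard_normal()

mean_rate = r.mean()
```

Mean reversion pulls the simulated rate toward theta; a pure random walk benchmark is the kappa = 0 special case.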


Predictability in Financial Analyst Forecast Errors: Learning or Irrationality?

JOURNAL OF ACCOUNTING RESEARCH, Issue 4 2006
STANIMIR MARKOV
ABSTRACT In this paper, we propose a rational learning-based explanation for the predictability in financial analysts' earnings forecast errors documented in prior literature. In particular, we argue that the serial correlation pattern in analysts' quarterly earnings forecast errors is consistent with an environment in which analysts face parameter uncertainty and learn rationally about the parameters over time. Using simulations and real data, we show that the predictability evidence is more consistent with rational learning than with irrationality (fixation on a seasonal random walk model or some other dogmatic belief). [source]
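The seasonal-random-walk fixation mentioned above can be illustrated with a small simulation, assuming (illustratively) that true earnings follow a seasonal random walk with drift: an analyst whose forecast is simply earnings four quarters earlier makes errors with a predictable positive mean, the kind of pattern the paper attributes instead to rational learning about unknown parameters.

```python
import numpy as np

rng = np.random.default_rng(5)

# Quarterly "earnings": a seasonal random walk with drift.
drift = 0.5
e = np.zeros(400)
e[:4] = rng.standard_normal(4)
for t in range(4, 400):
    e[t] = e[t - 4] + drift + rng.standard_normal()

# Fixated forecast: earnings four quarters ago (plain seasonal
# random walk, ignoring the drift).
fixated_forecast = e[:-4]
err = e[4:] - fixated_forecast   # systematically positive errors
mean_err = err.mean()
```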


A patent analysis of global food and beverage firms: The persistence of innovation

AGRIBUSINESS : AN INTERNATIONAL JOURNAL, Issue 3 2002
Oscar Alfranca
We explore whether current innovation has an enduring effect on future innovative activity in large, global food and beverage (F&B) companies. We analyze a sample of 16,698 patents granted in the United States over the period 1977 to 1994 to 103 F&B firms selected from the world's largest F&B multinationals. We test whether patent time series are trend stationary or difference stationary in order to detect how large the autoregressive parameter is and how enduring the impact of past innovation in these companies is. We conclude that the patent series are not consistent with the random walk model. The null hypothesis of a unit root can be rejected at the 5% level when a constant and a time trend are considered. Both utility and design patent series are stationary around a constant and a time trend. Moreover, there is a permanent component in the patent time series. Thus, global F&B firms show a stable pattern of technological accumulation in which "success breeds success." "Old" innovators are the ones to foster both important changes and new ways of packaging products among F&B multinationals. The effect of past innovation is almost permanent. By contrast, other potential stimuli to technological change have only transitory effects on innovation. Patterns of technological accumulation vary in specific F&B industries. Past experience in design is important in highly processed foods and beverages, but not in agribusinesses and basic foodstuffs. Patterns of technological accumulation are similar in both smaller multinationals/newcomers and large, established multinationals. [EconLit citations: O330, F230, L660] © 2002 Wiley Periodicals, Inc. [source]


The variance ratio and trend stationary model as extensions of a constrained autoregressive model

JOURNAL OF FORECASTING, Issue 5 2010
Shlomo Zilca
Abstract This paper shows that a constrained autoregressive model that assigns linearly decreasing weights to past observations of a stationary time series has important links to the variance ratio methodology and trend stationary model. It is demonstrated that the proposed autoregressive model is asymptotically related to the variance ratio through the weighting schedules that these two tools use. It is also demonstrated that under a trend stationary time series process the proposed autoregressive model approaches a trend stationary model when the memory of the autoregressive model is increased. These links create a theoretical foundation for tests that confront the random walk model simultaneously against a trend stationary and a variety of short- and long-memory autoregressive alternatives. Copyright © 2009 John Wiley & Sons, Ltd. [source]
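A sketch of the constrained autoregressive predictor, assuming weights that decrease linearly over the last q observations and sum to one (q and the test series are illustrative; the link to the variance ratio lies in this weighting schedule):

```python
import numpy as np

rng = np.random.default_rng(7)

# Linearly decreasing weights on the last q observations, normalised
# to sum to one: the most recent observation gets the largest weight.
q = 10
w = np.arange(q, 0, -1, dtype=float)   # q, q-1, ..., 1
w /= w.sum()

# Example series to forecast.
x = np.cumsum(rng.standard_normal(500))

# np.convolve reverses the kernel, so w[0] (the largest weight) lands
# on the most recent observation of each window.
preds = np.convolve(x, w, mode="valid")[:-1]   # forecast of x[t] from x[t-q:t]
actual = x[q:]
rmse = np.sqrt(np.mean((actual - preds) ** 2))
```

Increasing q lengthens the model's memory, which is the limit in which the paper relates it to a trend stationary model.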


Assessing the forecasting accuracy of alternative nominal exchange rate models: the case of long memory

JOURNAL OF FORECASTING, Issue 5 2006
David Karemera
Abstract This paper presents an autoregressive fractionally integrated moving-average (ARFIMA) model of nominal exchange rates and compares its forecasting capability with the monetary structural models and the random walk model. Monthly observations are used for Canada, France, Germany, Italy, Japan and the United Kingdom for the period of April 1973 through December 1998. The estimation method is Sowell's (1992) exact maximum likelihood estimation. The forecasting accuracy of the long-memory model is formally compared to the random walk and the monetary models, using the recently developed Harvey, Leybourne and Newbold (1997) test statistics. The results show that the long-memory model is more efficient than the random walk model in steps-ahead forecasts beyond 1 month for most currencies and more efficient than the monetary models in multi-step-ahead forecasts. This new finding strongly suggests that the long-memory model of nominal exchange rates be studied as a viable alternative to the conventional models. Copyright © 2006 John Wiley & Sons, Ltd. [source]
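The fractional differencing at the heart of an ARFIMA model reduces to a simple recursion for the coefficients of (1 − L)^d. This sketch (not the paper's estimator, which is Sowell's exact maximum likelihood) shows that d = 1 recovers exact first differencing, the random walk case, while 0 < d < 1 gives the long-memory middle ground:

```python
import numpy as np

def frac_diff_weights(d, n):
    """First n coefficients of the binomial expansion of (1 - L)**d."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

# Long-memory case: weights decay slowly (hyperbolically).
w = frac_diff_weights(0.4, 50)

# Integer case d = 1: the filter collapses to [1, -1, 0, 0, ...],
# i.e. plain first differencing of a random walk.
w_rw = frac_diff_weights(1.0, 50)
```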


Random walk hypothesis in exchange rate reconsidered

JOURNAL OF FORECASTING, Issue 4 2006
Chia-Shang J. Chu
Abstract An econometric model for exchange rate based on the behavior of dynamic international asset allocation is considered. The capital movement intensity index is constructed from the adjustment of a fully hedged international portfolio. Including this index as an additional explanatory variable helps to explain the fluctuation of the exchange rate and predict better than the competing random walk model. Supporting empirical evidence is found in Germany–USA, Japan–USA, Singapore–USA and Taiwan–USA exchange markets. Copyright © 2006 John Wiley & Sons, Ltd. [source]


Beating the random walk in Central and Eastern Europe

JOURNAL OF FORECASTING, Issue 3 2005
Jesús Crespo Cuaresma
Abstract We compare the accuracy of vector autoregressive (VAR), restricted vector autoregressive (RVAR), Bayesian vector autoregressive (BVAR), vector error correction (VEC) and Bayesian error correction (BVEC) models in forecasting the exchange rates of five Central and Eastern European currencies (Czech Koruna, Hungarian Forint, Slovak Koruna, Slovenian Tolar and Polish Zloty) against the US Dollar and the Euro. Although these models tend to outperform the random walk model for long-term predictions (6 months ahead and beyond), even the best models in terms of average prediction error fail to reject the test of equality of forecasting accuracy against the random walk model in short-term predictions. Copyright © 2005 John Wiley & Sons, Ltd. [source]


Deterministic random walks on regular trees

RANDOM STRUCTURES AND ALGORITHMS, Issue 3 2010
Joshua Cooper
Abstract Jim Propp's rotor–router model is a deterministic analog of a random walk on a graph. Instead of distributing chips randomly, each vertex serves its neighbors in a fixed order. Cooper and Spencer (Comb Probab Comput 15 (2006) 815–822) show a remarkable similarity of both models. If an (almost) arbitrary population of chips is placed on the vertices of a grid Z^d and does a simultaneous walk in the Propp model, then at all times and on each vertex, the number of chips on this vertex deviates from the expected number the random walk would have gotten there by at most a constant. This constant is independent of the starting configuration and the order in which each vertex serves its neighbors. This result raises the question of whether all graphs have this property. With quite some effort, we are now able to answer this question negatively. For the graph being an infinite k-ary tree (k ≥ 3), we show that for any deviation D there is an initial configuration of chips such that after running the Propp model for a certain time there is a vertex with at least D more chips than expected in the random walk model. However, to achieve a deviation of D it is necessary that at least exp(Ω(D^2)) vertices contribute by being occupied by a number of chips not divisible by k at a certain time. © 2010 Wiley Periodicals, Inc. Random Struct. Alg., 2010 [source]
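The rotor–router mechanism is simple to state in code. This sketch runs Propp's model on the integer line (not the trees of the paper) with 16 chips at the origin; since every pile here happens to split evenly, the deterministic counts after four steps land exactly on the binomial weights a random walk gives in expectation:

```python
from collections import defaultdict

def propp_step(chips, rotors):
    """One simultaneous step of the rotor-router model on the integers.

    Each vertex sends half its chips to each neighbour; if the count
    is odd, the vertex's rotor decides the leftover chip and then flips.
    """
    new = defaultdict(int)
    for v, c in chips.items():
        half, odd = divmod(c, 2)
        new[v - 1] += half
        new[v + 1] += half
        if odd:
            new[v + rotors[v]] += 1
            rotors[v] = -rotors[v]
    return dict(new)

chips = {0: 16}                       # 16 chips at the origin
rotors = defaultdict(lambda: 1)       # all rotors initially point right
for _ in range(4):
    chips = propp_step(chips, rotors)

# After 4 steps a fair random walk puts 16 * C(4, k) / 16 chips in
# expectation on positions -4, -2, 0, 2, 4: the weights 1, 4, 6, 4, 1.
```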


Forecasting oil price movements: Exploiting the information in the futures market

THE JOURNAL OF FUTURES MARKETS, Issue 1 2008
Andrea Coppola
Relying on the cost of carry model, the long-run relationship between spot and futures prices is investigated and the information implied in these cointegrating relationships is used to forecast out of sample oil spot and futures price movements. To forecast oil price movements, a vector error correction model (VECM) is employed, where the deviations from the long-run relationships between spot and futures prices constitute the equilibrium error. To evaluate forecasting performance, the random walk model (RWM) is used as a benchmark. It was found that (a) in-sample, the information in the futures market can explain a sizable portion of oil price movements; and (b) out-of-sample, the VECM outperforms the RWM in forecasting price movements of 1-month futures contracts. © 2008 Wiley Periodicals, Inc. Jrl Fut Mark 28:34–56, 2008 [source]
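One equation of the error-correction idea can be sketched by OLS on simulated cointegrated spot and futures series (all parameters are illustrative): the basis f − s plays the role of the equilibrium error, and its lagged value enters the spot-price change with a positive coefficient.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 1000

# Hypothetical cointegrated (log) spot and futures prices: both share
# a random walk trend, so the basis f - s is stationary.
trend = np.cumsum(rng.standard_normal(n))
s = trend + 0.3 * rng.standard_normal(n)
f = trend + 0.1 + 0.3 * rng.standard_normal(n)

# One VECM equation by OLS: the change in spot regressed on the
# lagged deviation from the long-run relationship.
ds = np.diff(s)
ecm = (f - s)[:-1]                    # lagged equilibrium error
X = np.column_stack([np.ones(n - 1), ecm])
alpha = np.linalg.lstsq(X, ds, rcond=None)[0][1]
```

A positive alpha means the spot price adjusts back toward the futures-implied equilibrium, which is the information the forecasts exploit.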


Numerical simulation of flow and heat transfer in connection of gasifier to the radiant syngas cooler

ASIA-PACIFIC JOURNAL OF CHEMICAL ENGINEERING, Issue 5 2009
Jianjun Ni
Abstract The connection of the gasifier to the radiant syngas cooler has been regarded as a key technology for the heat recovery system. The multiphase flow and heat transfer model presented in this work considers particle deposition and radiation in a mixture of non-gray gas and particles. An axisymmetric simulation of the multiphase flow in an industrial-scale connection is performed. Three turbulence closures are compared: the standard k-ε model, the renormalization group (RNG) k-ε model and the realizable k-ε model. The particle motion is modeled by a discrete random walk model. The discrete ordinates model (DOM), P-1 and discrete transfer model (DTRM) are used to model the radiative heat transfer. The effect of particles on the radiative heat transfer was taken into account when the DOM and P-1 model were used. The absorption coefficient of the gas mixture is calculated by means of a weighted-sum-of-gray-gases (WSGG) model. The results with the DOM and P-1 model are very similar and close to practical conditions. A large number of particles are deposited on the cone of the gasifier, which is the top of the connection. The maximum temperature difference is approximately 7 K when the cooling tube height changes from 0.5 m to 1.5 m. The temperature inside has a linear relationship with the operating temperature. Copyright © 2009 Curtin University of Technology and John Wiley & Sons, Ltd. [source]
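A sketch of a discrete random walk (eddy-interaction) particle model of the kind named above, assuming isotropic turbulence so each velocity-fluctuation component has RMS sqrt(2k/3) and is held for one eddy interaction time. All numbers are illustrative, not conditions from the simulation in the paper.

```python
import numpy as np

rng = np.random.default_rng(10)

# Illustrative flow quantities.
k_turb = 0.5                          # turbulence kinetic energy, m^2/s^2
sigma = np.sqrt(2.0 * k_turb / 3.0)   # RMS of one velocity component
u_mean = 10.0                         # mean axial gas velocity, m/s
dt, n_eddies, n_particles = 1e-3, 200, 2000

# Each particle keeps a sampled velocity fluctuation for one eddy
# interaction time, then draws a new one: the discrete random walk.
x = np.zeros(n_particles)
for _ in range(n_eddies):
    u_fluct = sigma * rng.standard_normal(n_particles)
    x += (u_mean + u_fluct) * dt

mean_x = x.mean()    # mean convection: u_mean * n_eddies * dt = 2.0 m
spread = x.std()     # turbulent dispersion about the mean
```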