Discrete Time
Selected Abstracts

MODELING LIQUIDITY EFFECTS IN DISCRETE TIME
MATHEMATICAL FINANCE, Issue 1 2007
Umut Çetin
We study optimal portfolio choices for an agent with the aim of maximizing utility from terminal wealth within a market with liquidity costs. Under some mild conditions, we show the existence of optimal portfolios and that the marginal utility of the optimal terminal wealth serves as a change of measure to turn the marginal price process of the optimal strategy into a martingale. Finally, we illustrate our results numerically in a Cox-Ross-Rubinstein binomial model with liquidity costs and find the reservation ask prices for simple European put options. [source]

The Fundamental Theorem of Asset Pricing under Proportional Transaction Costs in Finite Discrete Time
MATHEMATICAL FINANCE, Issue 1 2004
Walter Schachermayer (article first published online: 24 DEC 200)
We prove a version of the Fundamental Theorem of Asset Pricing, which applies to Kabanov's modeling of foreign exchange markets under transaction costs. The financial market is described by a d×d matrix-valued stochastic process (Π_t)_{t=0}^T specifying the mutual bid and ask prices between d assets. We introduce the notion of "robust no arbitrage," which is a version of the no-arbitrage concept, robust with respect to small changes of the bid-ask spreads of (Π_t)_{t=0}^T. The main theorem states that the bid-ask process (Π_t)_{t=0}^T satisfies the robust no-arbitrage condition iff it admits a strictly consistent pricing system. This result extends the theorems of Harrison-Pliska and Kabanov-Stricker pertaining to the case of finite Ω, as well as the theorem of Dalang, Morton, and Willinger and Kabanov, Rásonyi, and Stricker, pertaining to the case of general Ω. An example of a 5×5-dimensional process (Π_t)_{t=0}^2 shows that, in this theorem, the robust no-arbitrage condition cannot be replaced by the so-called strict no-arbitrage condition, thus answering negatively a question raised by Kabanov, Rásonyi, and Stricker.
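The Cox-Ross-Rubinstein binomial model named in the Çetin abstract above is easy to sketch in its frictionless baseline form. The pricer below uses invented parameter values and ignores liquidity costs entirely, so it illustrates only the underlying tree, not the paper's method for reservation ask prices.

```python
import math

def crr_put(S0, K, r, sigma, T, n):
    """Price a European put in a frictionless Cox-Ross-Rubinstein
    binomial tree by backward induction over n time steps."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))   # up factor
    d = 1 / u                             # down factor
    q = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    disc = math.exp(-r * dt)
    # payoffs at maturity, indexed by the number of up moves j
    values = [max(K - S0 * u**j * d**(n - j), 0.0) for j in range(n + 1)]
    # roll back through the tree, discounting expected values
    for step in range(n, 0, -1):
        values = [disc * (q * values[j + 1] + (1 - q) * values[j])
                  for j in range(step)]
    return values[0]

price = crr_put(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, n=500)
```

With 500 steps the result is close to the Black-Scholes put value for the same (made-up) inputs; adding liquidity costs, as the paper does, changes the backward-induction step itself.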
[source]

No Arbitrage in Discrete Time Under Portfolio Constraints
MATHEMATICAL FINANCE, Issue 3 2001
Laurence Carassus
In frictionless securities markets, the characterization of the no-arbitrage condition by the existence of equivalent martingale measures in discrete time is known as the fundamental theorem of asset pricing. In the presence of convex constraints on the trading strategies, we extend this theorem under a closedness condition and a nondegeneracy assumption. We then provide connections with the superreplication problem solved in Föllmer and Kramkov (1997). [source]

Social Learning in One-Arm Bandit Problems
ECONOMETRICA, Issue 6 2007
Dinah Rosenberg
We study a two-player one-arm bandit problem in discrete time, in which the risky arm can have two possible types, high and low, the decision to stop experimenting is irreversible, and players observe each other's actions but not each other's payoffs. We prove that all equilibria are in cutoff strategies and provide several qualitative results on the sequence of cutoffs. [source]

Threshold Dynamics of Short-term Interest Rates: Empirical Evidence and Implications for the Term Structure
ECONOMIC NOTES, Issue 1 2008
Theofanis Archontakis
This paper studies a nonlinear one-factor term structure model in discrete time. The short-term interest rate follows a self-exciting threshold autoregressive (SETAR) process that allows for shifts in the intercept and the variance. In comparison with a linear model, we find empirical evidence in favour of the threshold model for Germany and the US. Based on the estimated short-rate dynamics we derive the implied arbitrage-free term structure of interest rates. Since analytical solutions are not feasible, bond prices are computed by means of Monte Carlo integration. The resulting term structure captures stylized facts of the data. In particular, it implies a nonlinear relation between long rates and the short rate.
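A SETAR short-rate process of the kind described in the Archontakis abstract can be simulated in a few lines: the intercept and shock variance switch when the lagged rate crosses a threshold. The regime parameters below are invented for illustration; the paper estimates them from German and US data and then prices bonds by Monte Carlo on top of such paths.

```python
import random

def simulate_setar(r0, n, threshold, params_low, params_high, seed=0):
    """Simulate a two-regime SETAR(1) short rate.  Each params_* tuple is
    (intercept, AR coefficient, shock std dev); the regime is chosen by
    comparing the lagged rate to the threshold (illustrative values only)."""
    rng = random.Random(seed)
    path = [r0]
    for _ in range(n):
        r_prev = path[-1]
        c, phi, sigma = params_low if r_prev <= threshold else params_high
        path.append(c + phi * r_prev + rng.gauss(0.0, sigma))
    return path

path = simulate_setar(
    r0=0.03, n=250, threshold=0.05,
    params_low=(0.002, 0.95, 0.002),   # calm regime: low intercept/variance
    params_high=(0.006, 0.90, 0.006),  # regime above the threshold
)
```

Averaging discounted payoffs over many such paths is the Monte Carlo integration step the abstract refers to.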
[source]

Compression of time-generated matrices in two-dimensional time-domain elastodynamic BEM analysis
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 8 2004
D. Soares Jr
Abstract This paper describes a new scheme to improve the efficiency of time-domain BEM algorithms. The discussion is focused on the two-dimensional elastodynamic formulation; however, the ideas presented apply equally to any step-by-step convolution-based algorithm whose kernels decay as time increases. The algorithm presented interpolates the time-domain matrices generated along the time-stepping process, for time steps sufficiently far from the current time. Two interpolation procedures are considered here (a large number of alternative approaches is possible): Chebyshev-Lagrange polynomials and linear interpolation. A criterion to indicate the discrete time at which interpolation should start is proposed. Two numerical examples and conclusions are presented at the end of the paper. Copyright © 2004 John Wiley & Sons, Ltd. [source]

Linear quadratic optimal sliding mode flow control for connection-oriented communication networks
INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 4 2009
Przemysław Ignaciuk
Abstract In this paper, a new sliding mode flow controller for multi-source connection-oriented communication networks is proposed. The networks are modelled as discrete-time, nth-order systems. On the basis of the system state space description, novel sliding mode controllers with linear quadratic (LQ) optimal and sub-optimal switching planes are designed. The control law derivation focuses on the minimization of the LQ cost functional and solving the resultant matrix Riccati equation. Closed-loop system stability is demonstrated, and conditions for no data loss and full bottleneck link bandwidth utilization in the network are presented and strictly proved.
To the best of our knowledge, this paper presents the first attempt to design a discrete-time sliding mode flow control algorithm for connection-oriented communication networks. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Plant zero structure and further order reduction of a singular H∞ controller
INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 7 2002
Takao Watanabe
Abstract A new class of reduced-order controllers is obtained for the H∞ problem. The reduced-order controller does not compromise the performance attained by the full-order controller. Algorithms for deriving reduced-order H∞ controllers are presented in both continuous and discrete time. The reduction in order is related to unstable transmission zeros of the subsystem from disturbance inputs to measurement outputs. In the case where the subsystem has no infinite zeros, the resulting order of the H∞ controller is lower than that of the existing reduced-order H∞ controller designs, which are based on reduced-order observer design. Furthermore, the mechanism of the controller order reduction is analysed on the basis of the two-Riccati-equation approach. The structure of the reduced-order H∞ controller is investigated. Copyright © 2002 John Wiley & Sons, Ltd. [source]

Discrete-continuous analysis of optimal equipment replacement
INTERNATIONAL TRANSACTIONS IN OPERATIONAL RESEARCH, Issue 5 2010
Yuri Yatsenko
Abstract In operations research, the equipment replacement process is usually modeled in discrete time. An alternative approach is represented by continuous-time vintage capital models that explicitly involve the equipment lifetime and are described by nonlinear integral equations. The paper introduces and analyzes a model that unites both these approaches. The structure of optimal replacement, transition and long-term dynamics, and the clustering and splitting of replaced machines are discussed and illustrated with numeric examples.
Equipment splitting is demonstrated when the optimal equipment lifetime increases. [source]

Stochastic modeling of particle motion along a sliding conveyor
AICHE JOURNAL, Issue 1 2010
Kevin Cronin
Abstract The sliding conveyor consists of a plane surface, known as the track, along which particles are induced to move by vibrating the bed sinusoidally with respect to time. The forces on the particle include gravity, bed reaction force and friction. Because friction coefficients are inherently variable, particle motion along the bed is erratic and unpredictable. A deterministic model of particle motion (where friction is considered to be known and invariant) is selected and its output validated by experiment. Two probabilistic solution techniques are developed and applied to the deterministic model in order to account for the randomness that is present. The two methods consider particle displacement to be represented by discrete-time and continuous-time random processes, respectively, and permit analytical solutions for the mean and variance of displacement versus time to be found. These are compared with experimental measurements of particle motion. Ultimately this analysis can be employed to calculate residence-time distributions for such items of process equipment. © 2009 American Institute of Chemical Engineers AIChE J, 2010
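The discrete-time random-process view of displacement in the conveyor abstract can be illustrated with a crude Monte Carlo sketch. The per-step model below, a Gaussian advance truncated at zero standing in for the friction-dependent dynamics, is a placeholder assumption rather than Cronin's validated model; it only shows how the mean and spread of displacement grow with the number of vibration cycles.

```python
import random
import statistics

def simulate_displacement(n_steps, n_particles, mu_step=1.0, cv=0.2, seed=1):
    """Monte Carlo sketch of erratic particle transport: each step the
    particle advances by a random amount (hypothetical truncated-Gaussian
    model with coefficient of variation cv; no backward slip allowed).
    Returns the mean and standard deviation of final displacement."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_particles):
        x = 0.0
        for _ in range(n_steps):
            x += max(rng.gauss(mu_step, cv * mu_step), 0.0)
        finals.append(x)
    return statistics.mean(finals), statistics.stdev(finals)

mean_x, sd_x = simulate_displacement(n_steps=100, n_particles=2000)
```

Under this independent-increments assumption the mean grows linearly in the step count while the standard deviation grows like its square root, the same qualitative behaviour the analytical discrete-time solution in the paper is after.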
Continuous accumulation games on discrete locations
NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 1 2002
Kensaku Kikuta
Abstract In an accumulation game, a HIDER attempts to accumulate a certain number of objects or a certain quantity of material before a certain time, and a SEEKER attempts to prevent this. In a continuous accumulation game the HIDER can pile material either at locations 1, 2, …, n, or over a region in space. The HIDER wins (payoff 1) if it accumulates N units of material before a given time, and the SEEKER wins (payoff 0) otherwise. We assume the HIDER can place continuous material such as fuel at discrete locations i = 1, 2, …, n, and the game is played in discrete time. At each time k > 0 the HIDER acquires h units of material and can distribute it among all of the locations. At the same time, k, the SEEKER can search a certain number s < n of the locations and will confiscate (or destroy) all material found. After explicitly describing what we mean by a continuous accumulation game on discrete locations, we prove a theorem that gives a condition under which the HIDER can always win by using a uniform distribution at each stage of the game. When this condition does not hold, special cases and examples show that the resulting game becomes complicated even when played only for a single stage. We reduce the single-stage game to an optimization problem, and also obtain some partial results on its solution. We also consider accumulation games where the locations are arranged in either a circle or in a line segment and the SEEKER must search a series of adjacent locations. © 2002 John Wiley & Sons, Inc.
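The uniform strategy in the accumulation-game abstract has a simple single-stage guarantee: if fresh material is spread evenly over the n locations, any search of s of them confiscates exactly the fraction s/n. The helper below computes only this one-stage retention; it is not the paper's theorem, and multi-stage play is subtler because the SEEKER can also target piles accumulated in earlier stages.

```python
from fractions import Fraction

def uniform_stage_retention(h, n, s):
    """One stage of the accumulation game: the HIDER splits h units evenly
    over n locations; whichever s locations the SEEKER searches, s/n of the
    fresh material is confiscated.  Returns the guaranteed retained amount
    as an exact fraction (single stage only)."""
    return Fraction(h) * (n - s) / n

# e.g. 10 units spread over 5 locations with 2 searched: 6 units survive
retained = uniform_stage_retention(10, 5, 2)
```

Exact rational arithmetic is used so the guarantee is not blurred by floating-point rounding when h, n, s do not divide evenly.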
Naval Research Logistics, 49: 60-77, 2002; DOI 10.1002/nav.1048 [source]

Set theoretic formulation of performance reliability of multiple response time-variant systems due to degradations in system components
QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 2 2007
Young Kap Son
Abstract This paper presents a design-stage method for assessing the performance reliability of systems with multiple time-variant responses due to component degradation. Herein the system component degradation profiles over time are assumed to be known, and the degradation of the system is related to component degradation using mechanistic models. Selected performance measures (e.g. responses) are related to their critical levels by time-dependent limit-state functions. System failure is defined as the non-conformance of any response, so unions of the multiple failure regions are required. For discrete time, set theory establishes the minimum union size needed to identify a true incremental failure region. A cumulative failure distribution function is built by summing incremental failure probabilities. A practical implementation of the theory can be achieved by approximating the probability of the unions by second-order bounds. Further, for numerical efficiency, probabilities are evaluated by first-order reliability methods (FORM). The presented method is quite different from Monte Carlo sampling methods. The proposed method can be used to assess mean and tolerance design through simultaneous evaluation of quality and performance reliability. The work herein sets the foundation for an optimization method to control both quality and performance reliability and thus, for example, estimate warranty costs and product recall. An example from power engineering shows the details of the proposed method and the potential of the approach. Copyright © 2006 John Wiley & Sons, Ltd.
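Second-order bounds on the probability of a union of failure events, as mentioned in the reliability abstract, are commonly computed as Ditlevsen-style bounds from the marginal and pairwise joint failure probabilities. The sketch below is a generic implementation of that bounding step, not the paper's full time-variant procedure, and the probabilities in the example are invented.

```python
def ditlevsen_bounds(p, p2):
    """Second-order (Ditlevsen) bounds on P(F_1 or ... or F_m), given the
    marginal failure probabilities p[i] and the pairwise joint failure
    probabilities p2[i][j] = P(F_i and F_j) for j < i.  The event ordering
    affects how tight the bounds are."""
    lower = upper = p[0]
    for i in range(1, len(p)):
        joint = [p2[i][j] for j in range(i)]
        lower += max(0.0, p[i] - sum(joint))  # lower bound term
        upper += p[i] - max(joint)            # upper bound term
    return lower, upper

p  = [0.010, 0.020, 0.015]          # marginal failure probabilities (made up)
p2 = [[], [0.001], [0.002, 0.003]]  # p2[i][j] = P(F_i and F_j), j < i
lo, hi = ditlevsen_bounds(p, p2)
```

In a FORM setting, the marginals come from the reliability indices of the individual limit states and the pairwise joints from the correlations between their linearizations.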
[source]

Moment based regression algorithms for drift and volatility estimation in continuous-time Markov switching models
THE ECONOMETRICS JOURNAL, Issue 2 2008
Robert J. Elliott
Summary: We consider a continuous-time Markov switching model (MSM) which is widely used in mathematical finance. The aim is to estimate the parameters given observations in discrete time. Since there is no finite-dimensional filter for estimating the underlying state of the MSM, it is not possible to compute numerically the maximum likelihood parameter estimate via the well-known expectation maximization (EM) algorithm. Therefore, in this paper we propose a method-of-moments-based parameter estimator. The moments of the observed process are computed explicitly as a function of the time discretization interval of the discrete-time observation process. We then propose two algorithms for parameter estimation of the MSM. The first algorithm is based on a least-squares fit to the exact moments over different time lags, while the second is based on estimating the coefficients of the expansion (with respect to time) of the moments. Extensive numerical results comparing the algorithms with the EM algorithm for the discretized model are presented. [source]

Nonlinear asymmetric models of the short-term interest rate
THE JOURNAL OF FUTURES MARKETS, Issue 9 2006
K. Ozgur Demirtas (article first published online: 18 JUL 200)
This study introduces a generalized discrete-time framework to evaluate the empirical performance of a wide variety of well-known models in capturing the dynamic behavior of short-term interest rates. A new class of models that displays nonlinearity and asymmetry in the drift, and incorporates the level effect and stochastic volatility in the diffusion function, is introduced in discrete time and tested against the popular diffusion, GARCH, and level-GARCH models.
Based on the statistical test results, the existing models are strongly rejected in favor of the newly proposed models because of the nonlinear asymmetric drift of the short rate, and the presence of nonlinearity, GARCH, and level effects in its volatility. The empirical results indicate that the nonlinear asymmetric models are better than the existing models in forecasting the future level and volatility of interest rate changes. © 2006 Wiley Periodicals, Inc. Jrl Fut Mark 26: 869-894, 2006 [source]

THE USE OF AGGREGATE DATA TO ESTIMATE GOMPERTZ-TYPE OLD-AGE MORTALITY IN HETEROGENEOUS POPULATIONS
AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 4 2009
Christopher R. Heathcote
Summary: We consider two related aspects of the study of old-age mortality. One is the estimation of a parameterized hazard function from grouped data, and the other is its possible deceleration at extreme old age owing to heterogeneity described by a mixture of distinct sub-populations. The first is treated by half of a logistic transform, which is known to be free of discretization bias at older ages, and also preserves the increasing slope of the log hazard in the Gompertz case. It is assumed that data are available in the form published by official statistical agencies, that is, as aggregated frequencies in discrete time. Local polynomial modelling and weighted least squares are applied to cause-of-death mortality counts. The second, related, problem is to discover what conditions are necessary for population mortality to exhibit deceleration for a mixture of Gompertz sub-populations. The general problem remains open but, in the case of three groups, we demonstrate that heterogeneity may be such that it is possible for a population to show decelerating mortality and then return to a Gompertz-like increase at a later age.
This implies that there are situations, depending on the extent of heterogeneity, in which there is at least one age interval in which the hazard function decreases before increasing again. [source] |
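The mixture mechanism in the last abstract, where frailer subpopulations die out first and drag the aggregate hazard below the Gompertz trend, can be made concrete numerically. The parameters below are invented for illustration; whether the aggregate hazard actually dips and then resumes a Gompertz-like rise depends on the mixing weights and baseline rates, as the paper shows for three groups.

```python
import math

def mixture_hazard(x, weights, a, b):
    """Population hazard at age x for a mixture of Gompertz subpopulations
    with baseline rates a[i] and a common slope b.  Each subgroup has hazard
    a[i]*exp(b*x) and survival exp(-(a[i]/b)*(exp(b*x)-1)); the aggregate
    hazard is the survival-weighted average, so the mix shifts toward the
    robust groups as the frail ones die out (illustrative parameters only)."""
    num = den = 0.0
    for w, ai in zip(weights, a):
        surv = math.exp(-(ai / b) * math.expm1(b * x))  # Gompertz survival
        haz = ai * math.exp(b * x)                      # subgroup hazard
        num += w * surv * haz
        den += w * surv
    return num / den

# three hypothetical groups: robust, intermediate, frail
weights, rates, slope = [0.6, 0.3, 0.1], [0.01, 0.05, 0.2], 0.1
curve = [mixture_hazard(t, weights, rates, slope) for t in range(0, 51, 10)]
```

At age 0 the aggregate hazard is just the weighted average of the baseline rates; at later ages the frail group's share of survivors shrinks, which is what can make the curve decelerate before the common Gompertz slope reasserts itself.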