Poisson Process


Kinds of Poisson Process

  • compound Poisson process
  • inhomogeneous Poisson process


  • Selected Abstracts


    Analysis of Single-Molecule Fluorescence Spectroscopic Data with a Markov-Modulated Poisson Process

    CHEMPHYSCHEM, Issue 14 2009
    Mark Jäger Dr.
    Abstract We present a photon-by-photon analysis framework for the evaluation of data from single-molecule fluorescence spectroscopy (SMFS) experiments using a Markov-modulated Poisson process (MMPP). An MMPP combines a discrete (and hidden) Markov process with an additional Poisson process reflecting the observation of individual photons. The algorithmic framework is used to automatically analyze the dynamics of the complex formation and dissociation of Cu2+ ions with the bidentate ligand 2,2′-bipyridine-4,4′-dicarboxylic acid (dcbpy) in aqueous media. The process of association and dissociation of Cu2+ ions is monitored with SMFS. The dcbpy-DNA conjugate can exist in two or more distinct states which influence the photon emission rates. The advantage of a photon-by-photon analysis is that no information is lost in preprocessing steps. Different model complexities are investigated in order to best describe the recorded data and to determine transition rates on a photon-by-photon basis. The main strength of the method is that it can detect intermittent phenomena that are masked by binning and that, when short-lived, are difficult to find using correlation techniques. [source]
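
    As a reading aid, here is a minimal sketch of how a two-state MMPP photon stream of the kind analyzed above can be simulated; the emission and switching rates are illustrative, not taken from the article:

        import numpy as np

        rng = np.random.default_rng(0)
        rates = np.array([5.0, 50.0])    # assumed photon emission rates per hidden state (1/ms)
        switch = np.array([0.2, 0.5])    # assumed switching rates out of each state (1/ms)

        def simulate_mmpp(t_end, state=0):
            """Photon arrival times of a two-state Markov-modulated Poisson process."""
            t, photons = 0.0, []
            while t < t_end:
                seg_end = min(t + rng.exponential(1.0 / switch[state]), t_end)
                # while the hidden state is constant, photons form a homogeneous Poisson process
                n = rng.poisson(rates[state] * (seg_end - t))
                photons.extend(np.sort(rng.uniform(t, seg_end, n)))
                t, state = seg_end, 1 - state
            return np.array(photons)

        print(simulate_mmpp(10.0)[:5])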


    Effective page refresh policy

    COMPUTER APPLICATIONS IN ENGINEERING EDUCATION, Issue 3 2007
    Kai Gao
    Abstract Web pages are created or updated randomly. A search engine must keep up with the evolving Web, but previous studies have shown that a crawler's refresh ability is limited, because changes are not easy to detect instantly, especially when resources are limited. This article concerns modeling an effective Web page refresh policy and finding the refresh interval with minimum total waiting time. The major concerns are how to model the change process and which pages should be updated more often. Toward this goal, the Poisson process is used to model the change process. Further, relevance is used to adjust the process: the change probability assigned to some sites is higher than to others, so those sites are given more opportunities to be updated. This is essential when the bandwidth is not wide enough or resources are limited. The experimental results validate the feasibility of the approach. On the basis of the above work, an educational search engine has been developed. © 2007 Wiley Periodicals, Inc. Comput Appl Eng Educ 14: 240–247, 2007; Published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/cae.20155 [source]
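
    A toy illustration of the underlying idea, under the assumption of Poisson page changes (all rates and weights below are made up): the probability that a page has changed within a refresh interval τ is 1 − e^(−λτ), and relevance can skew the crawl budget toward high-rate, high-relevance sites:

        import numpy as np

        change_rate = np.array([0.1, 0.5, 2.0, 5.0])   # assumed changes per day per site
        relevance = np.array([1.0, 2.0, 1.0, 3.0])     # assumed relevance weights

        def stale_prob(lam, tau):
            # Poisson change model: P(at least one change within interval tau)
            return 1.0 - np.exp(-lam * tau)

        # Crude allocation: refresh frequency proportional to rate * relevance,
        # normalized to a total crawl budget of B refreshes per day.
        B = 20.0
        freq = B * change_rate * relevance / np.sum(change_rate * relevance)
        for lam, f in zip(change_rate, freq):
            print(f"rate {lam:4.1f}/day -> refresh every {1/f:5.2f} days, "
                  f"P(stale at refresh) = {stale_prob(lam, 1/f):.3f}")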


    Stochastic and Relaxation Processes in Argon by Measurements of Dynamic Breakdown Voltages

    CONTRIBUTIONS TO PLASMA PHYSICS, Issue 7 2005
    V. Lj.
    Abstract Statistically based measurements of breakdown voltages Ub and breakdown delay times td, and of their variation in transient regimes of establishment and relaxation of discharges, are a convenient method to study the stochastic processes of electrical breakdown of gases, as well as relaxation kinetics in the afterglow. In this paper, measurements and statistical analysis of the dynamic breakdown voltages Ub for linearly rising (ramp) pulses in argon at 1.33 mbar, with rates of voltage rise k up to 800 V s−1, are presented. It was found that electrical breakdown by linearly rising (ramp) pulses is an inhomogeneous Poisson process, caused by the variation of the primary and secondary ionization coefficients α and γ and of the electron yield Y with voltage (i.e., with time). The experimental breakdown voltage distributions were fitted by theoretical distributions obtained from approximate analytical and numerical models. The afterglow kinetics in argon was studied based on the dependence of the initial electron yield Y0(τ) on the relaxation time τ, derived from the fitting of the distributions. The space-charge decay was explained by the surface recombination of nitrogen atoms present as impurities. The afterglow kinetics and the surface recombination coefficients on the gas tube and cathode were determined from a gas-phase model. (© 2005 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
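
    The inhomogeneous Poisson process invoked here is easy to sample by Lewis-Shedler thinning; the ramp-shaped intensity below is a stand-in for the voltage-dependent breakdown rate, not the fitted model of the paper:

        import numpy as np

        rng = np.random.default_rng(1)

        def thinning(intensity, t_end, lam_max):
            """Sample event times of an inhomogeneous Poisson process by thinning."""
            t, events = 0.0, []
            while True:
                t += rng.exponential(1.0 / lam_max)        # candidate from rate lam_max
                if t > t_end:
                    return np.array(events)
                if rng.random() < intensity(t) / lam_max:  # keep with prob intensity(t)/lam_max
                    events.append(t)

        # toy intensity rising linearly with the ramp voltage (i.e., with time)
        events = thinning(lambda t: 0.5 * t, t_end=10.0, lam_max=5.0)
        print(len(events), events[:3])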


    Spatial point-process statistics: concepts and application to the analysis of lead contamination in urban soil

    ENVIRONMETRICS, Issue 4 2005
    Christian Walter
    Abstract This article explores the use of spatial point-process analysis as an aid to describing topsoil lead distribution in urban environments. The data used were collected in Glebe, an inner suburb of Sydney. The approach focuses on the locations of punctual events defining a point pattern, which can be statistically described through local intensity estimates and between-point distance functions. F-, G- and K-surfaces of a marked spatial point pattern were described and used to estimate nearest-distance functions over a sliding band of quantiles of the marking variable. This provided a continuous view of the point-pattern properties as a function of the marking variable. Several random fields were simulated by selecting points from random, clustered or regular point processes and diffusing them. Recognition of the underlying point process using variograms derived from dense sampling was difficult because, structurally, the variograms were very similar. Point-event distance functions were useful complementary tools that, in most cases, enabled clear recognition of the clustered processes. Spatial sampling quantile point-pattern analysis was defined and applied to the Glebe data set. The analysis showed that the highest lead concentrations were strongly clustered. The comparison of this data set with the simulation confidence limits of a Poisson process, a short-radius clustered point process and a geostatistical simulation showed a random process for the third quartile of lead concentrations but strong clustering for the data in the upper quartile. Thus the distribution of topsoil lead concentrations over Glebe may have resulted from several contamination processes, mainly regular or random processes with large diffusion ranges together with short-range clustered processes for the hot spots. Point patterns with the same characteristics as the Glebe experimental pattern could be generated by separate additive geostatistical simulation. Spatial sampling quantile point-pattern statistics can, in an easy and accurate way, be used in complement with geostatistical methods. Copyright © 2005 John Wiley & Sons, Ltd. [source]
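
    For readers unfamiliar with the distance functions used above, the sketch below estimates Ripley's K for a simulated homogeneous pattern and compares it with the Poisson benchmark K(r) = πr²; it uses no edge correction, so values are biased low near the window boundary:

        import numpy as np

        rng = np.random.default_rng(2)

        def ripley_k(points, r, area):
            """Naive Ripley K estimate (no edge correction) for a 2-D point pattern."""
            n = len(points)
            d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
            np.fill_diagonal(d, np.inf)                  # exclude self-pairs
            return area * np.sum(d < r) / (n * (n - 1))

        pts = rng.uniform(0.0, 1.0, (500, 2))            # complete spatial randomness
        for r in (0.02, 0.05, 0.10):
            print(f"r={r:.2f}  K_hat={ripley_k(pts, r, 1.0):.5f}  pi r^2={np.pi*r*r:.5f}")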


    Earnings-Based Bonus Compensation

    FINANCIAL REVIEW, Issue 4 2009
    António Câmara
    JEL classification: G39; M52. Abstract This article studies the cost of contingent earnings-based bonus compensation. We assume that the firm has normal and abnormal earnings. The normal earnings result from normal firm activities and are modeled as an arithmetic Brownian motion. The abnormal earnings result from surprising activities (e.g., introduction of an unexpected new product, an unexpected strike) and are modeled as a compound Poisson process where the earnings jump sizes have a normal distribution. We investigate, in a simple general equilibrium model, how normal and abnormal earnings affect the cost of contingent bonus compensation to the firm. [source]
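
    A minimal simulation of the earnings process described here (arithmetic Brownian motion plus compound Poisson jumps with normal sizes); all parameter values are illustrative:

        import numpy as np

        rng = np.random.default_rng(3)

        def earnings_path(T=1.0, n=252, mu=10.0, sigma=2.0,
                          lam=4.0, jump_mu=0.0, jump_sd=1.5, x0=100.0):
            """One path of normal earnings (ABM) plus abnormal earnings (compound Poisson)."""
            dt = T / n
            diffusion = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
            n_jumps = rng.poisson(lam * dt, n)           # number of surprises per step
            jumps = np.array([rng.normal(jump_mu, jump_sd, k).sum() for k in n_jumps])
            return x0 + np.cumsum(diffusion + jumps)

        print(earnings_path()[-1])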


    Variability of shallow overland flow velocity and soil aggregate transport observed with digital videography

    HYDROLOGICAL PROCESSES, Issue 20 2008
    A. Sidorchuk
    Abstract Field experiments at Tiramoana station, 30 km north of Christchurch, New Zealand, using an erosion plot 16.5 m long, 0.6 m wide, and with a slope of 14–14.5° on rendzina soil, aimed to measure the variability of flow velocity and of the soil-aggregate transport rate in shallow overland flow. The discharge/cross-section-area ratio was used to estimate mean velocity, and a high-speed digital video camera and image analysis provided information about flow and sediment-transport variability. Six flow runs with discharges of 0.5–3.0 L s−1 were supercritical, with Froude numbers close to or greater than 1. Mean flow velocity followed the Poiseuille law, float numbers were greater than 1.5, and hydraulic resistance was an inversely proportional function of the Reynolds number, which is typical of laminar flows. Nevertheless, the actual velocity varied significantly through time, and the power spectrum was of the 'red noise' type, which is typical of turbulent flow. Sediment-transport rates had even higher variability, and soil-aggregate transport was a compound Poisson process. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    Expedited forwarding end-to-end delay and jitter in DiffServ

    INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 8 2008
    Hamada Alshaer
    Abstract Scheduling disciplines and active buffer management are the main components employed in the differentiated services (DiffServ) data plane, which provide the qualitative per-hop behaviors corresponding to the QoS required by the supported traffic classes. In the first part of this paper, we compute the per-hop delay bound that should be guaranteed by the different multiservice scheduling disciplines so that the end-to-end (e2e) delay required by expedited forwarding (EF) traffic can be guaranteed. Consequently, we derive the e2e delay bound of EF traffic served by priority queuing–weighted fair queuing (PQ-WFQ) at every hop along its routing path. Although real-time flows are principally offered the EF service class, some simulations on a DiffServ-enabled network show that these flows suffer from delay jitter and are negatively impacted by lower-priority traffic. In the second part of this paper, we clarify the negative impact of delay jitter on EF traffic, where EF flows are represented by renewal periodic ON–OFF flows and the background (BG) flows are characterized by a Poisson process. We analyze through different scenarios the jitter effects of these BG flows on EF flow patterns when they are served by a single-class scheduling discipline, such as first-in first-out, and by a multiclass or multiservice scheduling discipline, such as the static priority service discipline. As a result, we have found that the EF per-hop behaviors (PHBs) configured according to RFCs 2598 and 3246 (IETF RFC 2598, June 1999; RFC 3246, IETF, March 2002) cannot alone guarantee the delay jitter required by EF flows. Therefore, playout buffers must be added to DiffServ-enabled networks to handle the delay jitter problem from which EF flows suffer. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    Keeping off the grass?

    JOURNAL OF APPLIED ECONOMETRICS, Issue 4 2004
    An econometric model of cannabis consumption in Britain
    This paper presents estimates of a dynamic individual-level model of cannabis consumption, using data from a 1998 survey of young people in Britain. The econometric model is a split-population generalization of the non-stationary Poisson process, allowing for separate dynamic processes for initiation into cannabis use and for subsequent consumption. The model allows for heterogeneity in consumption levels and for behavioural shifts induced by leaving education and the parental home. Copyright © 2004 John Wiley & Sons, Ltd. [source]


    The Effect of Exercise Date Uncertainty on Employee Stock Option Value

    JOURNAL OF BUSINESS FINANCE & ACCOUNTING, Issue 5-6 2003
    Brian A. Maris
    The IASC recently recommended that employee compensation in the form of stock options be measured at 'fair value' based on an option pricing model, with the value recognized in financial statements. This follows the adoption of SFAS No. 123 in the United States, which requires firms to estimate the value of employee stock options using either a Black-Scholes (B-S) or binomial model. Most US firms used the B-S model for their 1996 financial statements. This study assumes that option life follows a Gamma distribution, allowing the variance of option life to be separate from its expected life. The results indicate the adjusted Black-Scholes model could overvalue employee stock options on the grant date by as much as 72 percent for non-dividend-paying firms and by as much as 84 percent for dividend-paying firms. The results further demonstrate the sensitivity of ESO values to the volatility of the expected option life, a parameter that neither the B-S model nor a Poisson process can accommodate. The variability of option life has an especially large impact on ESO value for firms whose ESOs have a relatively short life (5 years, for example) and high employee turnover. For such firms, the results indicate a binomial option pricing model is more appropriate for estimating ESO value than the B-S type model. [source]


    Martingale Approach for Moments of Discounted Aggregate Claims

    JOURNAL OF RISK AND INSURANCE, Issue 2 2004
    Ji-Wook Jang
    We examine the Laplace transform of the distribution of the shot noise process using the martingale approach. Applying the theory of piecewise deterministic Markov processes and using the relationship between the shot noise process and the accumulated/discounted aggregate claims process, the Laplace transform of the distribution of the accumulated aggregate claims is obtained. Assuming that the claim arrival process follows a Poisson process and that claim sizes are exponential or mixtures of exponentials, we derive explicit expressions for the actuarial net premiums and the variances of the discounted aggregate claims, which are the annuities paid continuously. Numerical examples are also provided based on these results. [source]
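
    The flavor of the result can be checked by Monte Carlo: for Poisson arrivals with rate λ, exponential claims with mean m and discount rate δ, the expected discounted aggregate claims over [0, T] are λm(1 − e^(−δT))/δ. A sketch with made-up parameters:

        import numpy as np

        rng = np.random.default_rng(4)
        lam, m, delta, T, n_paths = 2.0, 10.0, 0.05, 5.0, 20000

        totals = np.empty(n_paths)
        for p in range(n_paths):
            n = rng.poisson(lam * T)
            t = rng.uniform(0.0, T, n)          # given N, Poisson arrival times are uniform
            x = rng.exponential(m, n)           # exponential claim sizes
            totals[p] = np.sum(np.exp(-delta * t) * x)

        analytic = lam * m * (1.0 - np.exp(-delta * T)) / delta
        print(f"Monte Carlo {totals.mean():.3f} vs analytic {analytic:.3f}")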


    Exploring spatiotemporal patterns in early stages of primary succession on former lignite mining sites

    JOURNAL OF VEGETATION SCIENCE, Issue 2 2008
    Birgit Felinks
    Abstract Questions: 1. Does random colonization predominate in early stages of primary succession? 2. Do pioneer species facilitate the establishment of later-arriving species? 3. Does an initially random distribution change to an aggregated pattern with ongoing succession? Location: Lignite mining region of Lower Lusatia, eastern Germany. Methods: Individual plants were mapped along a 2 m × 28 m transect during three successive years and classified into two groups: (1) the pioneer Corynephorus canescens and (2) 'all other species'. Using the pair-correlation function, univariate point-pattern analysis was carried out by applying a heterogeneous Poisson process as the null model. Bivariate analysis and a toroidal-shift null model were applied to test for independence between the spatial patterns of the two groups separately for each year, as well as by exploring spatiotemporal patterns across different years. Results: In the first year Corynephorus and 'all other species' showed an aggregated pattern on a spatial scale > 40 cm, and in the second and third years a significant attraction for distances between 4 and 12 cm, with an increasing radius in the third year. The analyses of interspecific spatiotemporal dynamics revealed a change from independence to attraction at distances between 4 cm and 16 cm when Corynephorus was used as the focal species. However, using 'all other species' as focal points resulted in a significant attraction at distances up to 60 cm in the first year and a diminishing attraction in the second and third years at distances ≤ 6 cm. Conclusions: Facilitative species-species interactions are present in early stages of primary succession, resulting mainly from pioneer species acting as physical barriers and their ability to capture diaspores drifted by secondary dispersal along the substrate surface. However, due to the gradual establishment of perennial species and their capacity for lateral extension by vegetative dispersal, facilitation may influence spatial pattern formation predominantly on short temporal and fine spatial scales. [source]


    OPTIMAL CONTINUOUS-TIME HEDGING WITH LEPTOKURTIC RETURNS

    MATHEMATICAL FINANCE, Issue 2 2007

    We examine the behavior of optimal mean–variance hedging strategies at high rebalancing frequencies in a model where stock prices follow a discretely sampled exponential Lévy process and one hedges a European call option to maturity. Using elementary methods we show that all the attributes of a discretely rebalanced optimal hedge, i.e., the mean value, the hedge ratio, and the expected squared hedging error, converge pointwise in the state space as the rebalancing interval goes to zero. The limiting formulae represent 1-D and 2-D generalized Fourier transforms, which can be evaluated much faster than backward recursion schemes, with the same degree of accuracy. In the special case of a compound Poisson process we demonstrate that the convergence results hold true if, instead of using an infinitely divisible distribution from the outset, one models log returns by multinomial approximations thereof. This result represents an important extension of Cox, Ross, and Rubinstein to markets with leptokurtic returns. [source]


    PORTFOLIO OPTIMIZATION WITH JUMPS AND UNOBSERVABLE INTENSITY PROCESS

    MATHEMATICAL FINANCE, Issue 2 2007
    Nicole Bäuerle
    We consider a financial market with one bond and one stock. The dynamics of the stock price process allow jumps which occur according to a Markov-modulated Poisson process. We assume that there is an investor who is only able to observe the stock price process and not the driving Markov chain. The investor's aim is to maximize the expected utility of terminal wealth. Using a classical result from filter theory it is possible to reduce this problem with partial observation to one with complete observation. With the help of a generalized Hamilton–Jacobi–Bellman equation where we replace the derivative by Clarke's generalized gradient, we identify an optimal portfolio strategy. Finally, we discuss some special cases of this model and prove several properties of the optimal portfolio strategy. In particular, we derive bounds and discuss the influence of uncertainty on the optimal portfolio strategy. [source]


    Models of sensor operations for border surveillance

    NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 1 2008
    Roberto Szechtman
    Abstract This article is motivated by the diverse array of border threats, ranging from terrorists to arms dealers and human traffickers. We consider a moving sensor that patrols a certain section of a border with the objective of detecting infiltrators who attempt to penetrate that section. Infiltrators arrive according to a Poisson process along the border, with a specified distribution of arrival location, and disappear a random amount of time after their arrival. The measures of effectiveness are the target (infiltrator) detection rate and the time elapsed from target arrival to target detection. We study two types of sensor trajectories that have constant endpoints, are periodic, and maintain constant speed: (1) a sensor that jumps instantaneously from the endpoint back to the starting point, and (2) a sensor that moves continuously back and forth. The controlled parameters (decision variables) are the starting and end points of the patrolled sector and the velocity of the sensor. General properties of these trajectories are investigated. © 2007 Wiley Periodicals, Inc. Naval Research Logistics, 2008 [source]
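
    A rough simulation of the second (back-and-forth) trajectory type, under assumed parameter values: infiltrators arrive as a Poisson process in time, uniformly along the sector, and persist an exponential time; a target counts as detected if the sensor passes within radius r while it is present:

        import numpy as np

        rng = np.random.default_rng(5)

        def detection_fraction(v=0.2, r=0.02, lam=1.0, mean_life=2.0,
                               T=2000.0, a=0.0, b=1.0):
            """Fraction of infiltrators detected by a back-and-forth sensor (illustrative)."""
            n = rng.poisson(lam * T)
            arrive = rng.uniform(0.0, T, n)
            depart = arrive + rng.exponential(mean_life, n)
            loc = rng.uniform(a, b, n)
            period = 2.0 * (b - a) / v
            detected = 0
            for t0, t1, x in zip(arrive, depart, loc):
                ts = np.arange(t0, min(t1, T), r / (2.0 * v))   # fine grid: no crossing missed
                phase = (ts % period) * v                       # triangle-wave sensor position
                pos = np.where(phase < b - a, a + phase, 2.0 * b - a - phase)
                if pos.size and np.abs(pos - x).min() <= r:
                    detected += 1
            return detected / n

        print(f"detected fraction ~ {detection_fraction():.3f}")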


    STOCK LEVELS AND DELIVERY RATES IN VENDOR-MANAGED INVENTORY PROGRAMS

    PRODUCTION AND OPERATIONS MANAGEMENT, Issue 1 2001
    BEN A. CHAOUCH
    Using the latest information technology, powerful retailers like Wal-Mart have taken the lead in forging automated supply systems with shorter replenishment cycles with their suppliers. With the objective of reducing cost, these retailers are directing suppliers to take full responsibility for managing stocks and deliveries. Suppliers' performance is measured by how often inventory is shipped to the retailer and how often customers are unable to purchase the product because it is out of stock. This emerging trend also implies that suppliers are absorbing a large part of the inventory and delivery costs and, therefore, must plan delivery programs, including delivery frequency, to ensure that the inherent costs are minimized. To incorporate this shift in focus, this paper looks at the problem facing the supplier who wants quicker replenishment at lower cost. In particular, we present a model that seeks the best trade-off among inventory investment, delivery rates, and permitting shortages to occur, given some random demand pattern for the product. The process generating demand consists of two components: one deterministic and one random. The random part is assumed to follow a compound Poisson process. Furthermore, we assume that the supplier may fail to meet uniform shipping schedules, so uncertainty is present in delivery times. The solution to this transportation–inventory problem requires jointly determining the delivery rates and stock levels that minimize transportation, inventory, and shortage costs. Several numerical results are presented to give a feel for the optimal policy's general behavior. [source]
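
    A small sketch of the demand side of such a model: demand over a delivery cycle is a deterministic stream plus a compound Poisson stream, and the shortage probability for a given shipped stock level can be estimated directly (all rates are invented for illustration):

        import numpy as np

        rng = np.random.default_rng(6)

        def shortage_prob(stock, cycle, det_rate=5.0, lam=2.0, mean_size=1.5, n=20000):
            """P(cycle demand exceeds stock) for deterministic + compound Poisson demand."""
            n_orders = rng.poisson(lam * cycle, n)
            random_part = np.array([rng.exponential(mean_size, k).sum() for k in n_orders])
            demand = det_rate * cycle + random_part
            return np.mean(demand > stock)

        mean_rate = 5.0 + 2.0 * 1.5                      # deterministic + compound Poisson mean
        for cycle in (0.5, 1.0, 2.0):
            s = 1.3 * mean_rate * cycle                  # ship 130% of mean cycle demand
            print(f"cycle {cycle}: P(shortage) = {shortage_prob(s, cycle):.4f}")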


    Transient behavior of time-between-failures of complex repairable systems

    QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 4 2002
    J. Bert Keats
    Abstract It is well known for complex repairable systems (with as few as four components) that, regardless of the time-to-failure (TTF) distribution of each component, the time between failures (TBF) tends toward the exponential. This is a long-term, or 'steady-state', property. Aware of this property, many of those modeling such systems tend to base spares provisioning, maintenance personnel availability, and other decisions on an exponential TBF distribution. Such a policy may suffer serious drawbacks. A non-homogeneous Poisson process (NHPP) accounts for these intervals for some time prior to 'steady state'. Using computer simulation, the nature of transient TBF behavior is examined. The number of system failures until the exponential TBF assumption is valid is of particular interest. We show, using a number of system configurations and failure and repair distributions, that the transient behavior quickly drives the TBF distribution to the exponential. We feel comfortable with assuming an exponential TBF after 30 system failures. This number may be smaller for configurations with more components; however, at this point, we recommend 30 system failures as the threshold for using the exponential assumption. Copyright © 2002 John Wiley & Sons, Ltd. [source]
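
    The effect is easy to reproduce: superpose the renewal processes of a few components with a non-exponential (here Weibull) time to failure, and the coefficient of variation of the TBFs beyond the transient approaches 1, the exponential value. A sketch under assumed parameters:

        import numpy as np

        rng = np.random.default_rng(7)

        def system_tbf(n_comp=6, n_fail=100, shape=3.0):
            """TBFs of a series system whose components are renewed on failure (Weibull TTF)."""
            next_fail = rng.weibull(shape, n_comp)       # first failure time of each component
            times = []
            for _ in range(n_fail):
                i = int(np.argmin(next_fail))
                times.append(next_fail[i])
                next_fail[i] = times[-1] + rng.weibull(shape)   # instant renewal
            return np.diff(times)

        late = np.concatenate([system_tbf()[30:] for _ in range(200)])
        print(f"CV of TBF after 30 failures: {late.std() / late.mean():.3f} (exponential -> 1)")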


    Properties of the Duane plot for repairable systems

    QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 1 2002
    Steven E. Rigdon
    Abstract The Duane plot is a plot of the cumulative failure rate against the cumulative operating time on double logarithmic axes. It is often claimed that the power law process, a non-homogeneous Poisson process with intensity function λ(t) = (β/θ)(t/θ)^(β−1), implies a linear Duane plot. We show that this implication and its converse are false. Copyright © 2002 John Wiley & Sons, Ltd. [source]
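
    The point is easily illustrated numerically: simulate a power law process via its mean function Λ(t) = (t/θ)^β and inspect the realized Duane plot, whose points only approximately follow a line of slope β − 1; parameters below are arbitrary:

        import numpy as np

        rng = np.random.default_rng(8)
        beta, theta, n = 0.7, 100.0, 500          # beta < 1 models reliability growth

        s = np.cumsum(rng.exponential(1.0, n))    # unit-rate Poisson arrival times
        t = theta * s ** (1.0 / beta)             # invert Lambda(t) = (t/theta)**beta
        i = np.arange(1, n + 1)

        # Duane plot: log cumulative failure rate N(t)/t versus log t.
        slope = np.polyfit(np.log(t), np.log(i / t), 1)[0]
        print(f"fitted slope {slope:.3f} vs beta - 1 = {beta - 1.0:.3f}")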


    Pricing and hedging of quanto range accrual notes under Gaussian HJM with cross-currency Lévy processes

    THE JOURNAL OF FUTURES MARKETS, Issue 10 2009
    Szu-Lang Liao
    This study analyzes the pricing and hedging problems for quanto range accrual notes (RANs) under the Heath-Jarrow-Morton (HJM) framework with Lévy processes for the instantaneous domestic and foreign forward interest rates. We consider the effects of jump risk on both interest rates and exchange rates in the pricing of the notes. We first derive the pricing formula for quanto double interest rate digital options and quanto contingent payoff options; then we apply the method proposed by Turnbull (Journal of Derivatives, 1995, 3, 92–101) to replicate the quanto RAN by a combination of the quanto double interest rate digital options and the quanto contingent payoff options. Using the pricing formulas derived in this study, we obtain the hedging position for each issue of quanto RANs. In addition, by simulation and assuming the jump risk to follow a compound Poisson process, we further analyze the effects of jump risk and exchange rate risk on the coupons receivable in holding a RAN. © 2009 Wiley Periodicals, Inc. Jrl Fut Mark 29:973–998, 2009 [source]


    Towards quantifying droplet clustering in clouds

    THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 582 2002
    R. A. Shaw
    Abstract Droplet positions in atmospheric clouds are random but possibly correlated on some scales. This 'clustering' must be quantified in order to account for it in theories of cloud evolution and radiative transfer. Tools as varied as the droplet-concentration power spectrum, the Fishing test, and fractal correlation analysis have been used to describe the small-scale nature of clouds, and it has been difficult to compare conclusions systematically. Here we show, by using the correlation-fluctuation theorem and the Wiener–Khinchin theorem, that all of these measures can be related to the pair-correlation function. It is argued that the pair-correlation function is ideal for quantifying droplet clustering because it contains no scale memory and because of its quantitative link to the Poisson process. Copyright © 2002 Royal Meteorological Society [source]


    Risk-minimizing hedging strategies with restricted information and cost

    APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 4 2010
    Jianqi Yang
    Abstract With the assumption that information cost is characterized by a Poisson process, this paper presents risk-minimizing problems under jump-diffusion models. First, the explicit optimal strategy under complete information is given using the Itô formula. Second, the optimal strategy problem under restricted information is solved by projection. Copyright © 2009 John Wiley & Sons, Ltd. [source]


    On a class of renewal risk model with random income

    APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 6 2009
    Hu Yang
    Abstract In this paper, we consider a renewal risk process with random premium income based on a Poisson process. The generating function for the discounted penalty function is obtained. We show that the discounted penalty function satisfies a defective renewal equation and that the corresponding explicit expression can be obtained via a compound geometric tail. Finally, we consider the Laplace transform of the time to ruin, and derive the closed-form expression for it when the claims have a discrete Km distribution (i.e. the generating function of the distribution function is a ratio of two polynomials of order m ∈ ℕ+). Copyright © 2008 John Wiley & Sons, Ltd. [source]


    The compound Poisson process perturbed by a diffusion with a threshold dividend strategy

    APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 1 2009
    Kam C. Yuen
    Abstract In this paper, we consider the compound Poisson process perturbed by a diffusion in the presence of the so-called threshold dividend strategy. Within this framework, we prove the twice continuous differentiability of the expected discounted value of all dividends until ruin. We also derive integro-differential equations for the expected discounted value of all dividends until ruin and obtain explicit expressions for the solution to the equations. Along the same line, we establish explicit expressions for the Laplace transform of the time of ruin and the Laplace transform of the aggregate dividends until ruin. In the case of exponential claims, some examples are provided. Copyright © 2008 John Wiley & Sons, Ltd. [source]
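
    A crude Euler-scheme sketch of the surplus process studied here, with a threshold dividend strategy (dividends flow at a constant rate while the surplus is above the threshold b); all numerical values are placeholders:

        import numpy as np

        rng = np.random.default_rng(9)

        def discounted_dividends(u=10.0, c=2.0, lam=1.0, mean_claim=1.5, sigma=0.5,
                                 b=12.0, div_rate=1.0, delta=0.03, T=50.0, dt=0.01):
            """Present value of dividends on one path of the perturbed surplus process."""
            t, x, pv = 0.0, u, 0.0
            while t < T:
                paying = x > b
                x += (c - div_rate * paying) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
                if rng.random() < lam * dt:              # compound Poisson claim arrival
                    x -= rng.exponential(mean_claim)
                if paying:
                    pv += np.exp(-delta * t) * div_rate * dt
                if x < 0.0:                              # ruin: dividends stop
                    break
                t += dt
            return pv

        vals = [discounted_dividends() for _ in range(200)]
        print(f"mean discounted dividends ~ {np.mean(vals):.3f}")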


    A new risk model based on policy entrance process and its weak convergence properties

    APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 3 2007
    Zehui Li
    Abstract In this paper, we construct a new risk model based on the policy entrance process. The model is concerned with n kinds of independent policies, and each policy is allowed to claim more than once before it expires. As each kind of policy is issued according to a non-homogeneous Poisson process, the long run behaviour of the new risk process is investigated. When the tail of the claim size distribution is regularly varying, the standardized risk process is proved to converge to a stable law. When each kind of policy is issued according to a homogeneous Poisson process, we also give a diffusion approximation of the new risk process. Copyright © 2007 John Wiley & Sons, Ltd. [source]


    Conditional length distributions induced by the coverage of two points by a Poisson Voronoï tessellation: application to a telecommunication model

    APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 4 2006
    Catherine Gloaguen
    Abstract The end points of a fixed segment in the Euclidean plane covered by a Poisson Voronoï tessellation belong to the same cell or to two distinct cells. This marks off one or two points of the underlying Poisson process that are the nucleus (or nuclei) of the cell(s). Our interest lies in the geometrical relationship between these nuclei and the segment end points, as well as between the nuclei themselves. We investigate their probability distribution functions conditioning on the number of nuclei, taking into account the length of the segment. The aim of the study is to establish some tools to be used for the analysis of a telecommunication problem related to the pricing of leased lines. We motivate and give accurate approximations of the probability of common coverage and of the length distributions that can be included in spreadsheet codes as an element of simple cost functions. Copyright © 2006 John Wiley & Sons, Ltd. [source]
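
    The probability of common coverage can be approximated by brute force: drop a Poisson pattern and check whether both segment end points have the same nearest nucleus, which is exactly the same-cell event for a Voronoï tessellation. Intensity and window size below are arbitrary:

        import numpy as np

        rng = np.random.default_rng(10)

        def same_cell_prob(length, lam=1.0, window=20.0, n_rep=2000):
            """P(both end points of a centered segment lie in the same Voronoi cell)."""
            a = np.array([(window - length) / 2.0, window / 2.0])
            b = np.array([(window + length) / 2.0, window / 2.0])
            hits = 0
            for _ in range(n_rep):
                n = rng.poisson(lam * window ** 2)
                pts = rng.uniform(0.0, window, (n, 2))
                na = np.argmin(((pts - a) ** 2).sum(axis=1))   # nearest nucleus to each end
                nb = np.argmin(((pts - b) ** 2).sum(axis=1))
                hits += na == nb
            return hits / n_rep

        for ell in (0.2, 0.5, 1.0, 2.0):
            print(f"segment length {ell}: P(same cell) ~ {same_cell_prob(ell):.3f}")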


    Modelling the process of incoming problem reports on released software products

    APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 2 2004
    Geurt Jongbloed
    Abstract For big software-developing companies, it is important to know the number of problems with a new software product that are expected to be reported in each week of a period after the release date. For each of a number of past releases, weekly data are available on the number of such reports. Based on the type of data present, we construct a stochastic model for the weekly number of problems to be reported. The (non-parametric) maximum likelihood estimator for the crucial model parameter, the intensity of an inhomogeneous Poisson process, is defined. Moreover, the expectation-maximization algorithm is described, which can be used to compute this estimate. The method is illustrated using simulated data. Copyright © 2004 John Wiley & Sons, Ltd. [source]


    On a cumulative damage process and resulting first passage times

    APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 1 2004
    Waltraud Kahle
    Abstract The reliability of a product is often affected by a damage process. In this paper we consider a shock model where, at random times, a shock causes a random amount of damage. The cumulative damage process is assumed to be generated by a position-dependent marking of a doubly stochastic Poisson process. Some characteristics of this process are described. For general and special cases, the probability that the damage process does not cross a certain threshold before a time t is calculated. Copyright © 2004 John Wiley & Sons, Ltd. [source]


    Cluster Pattern Detection in Spatial Data Based on Monte Carlo Inference

    BIOMETRICAL JOURNAL, Issue 4 2007
    Radu Stefan Stoica
    Abstract Clusters in a data point field exhibit spatially specified regions in the observation window. The method proposed in this paper addresses the cluster detection problem from the perspective of detecting these spatial regions. These regions are supposed to be formed of overlapping random disks driven by a marked point process. The distribution of such a process has two components. The first is related to the locations of the disks in the field of observation and is defined as an inhomogeneous Poisson process. The second is related to the interaction between disks and is constructed by the superposition of an area-interaction process and a pairwise-interaction process. The model is applied to spatial data coming from animal epidemiology. The proposed method tackles several aspects related to cluster pattern detection: heterogeneity of data, smoothing effects, statistical descriptors, probability of cluster presence, and testing for cluster presence. (© 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


    Cumulative Damage Survival Models for the Randomized Complete Block Design

    BIOMETRICAL JOURNAL, Issue 2 2003
    Gavin G. Gregory
    Abstract A continuous-time, discrete-state cumulative damage process {X(t), t ≥ 0} is considered, based on a non-homogeneous Poisson hit-count process and a discrete distribution of damage per hit, which can be negative binomial, Neyman type A, Pólya-Aeppli or Lagrangian Poisson. The intensity functions considered for the Poisson process comprise a flexible three-parameter family. The survival function is S(t) = P(X(t) < L), where L is fixed. Individual variation is accounted for within the construction for the initial damage distribution {P(X(0) = x) | x = 0, 1, ...}. This distribution has an essential cut-off before x = L, and the distribution of L − X(0) may be considered a tolerance distribution. A multivariate extension appropriate for the randomized complete block design is developed by constructing dependence in the initial damage distributions. Our multivariate model is applied (via maximum likelihood) to litter-matched tumorigenesis data for rats. The litter effect accounts for 5.9 percent of the variance of the individual effect. Cumulative damage hazard functions are compared to nonparametric hazard functions and to hazard functions obtained from the PVF-Weibull frailty model. The cumulative damage model has greater dimensionality for interpretation compared to other models, owing principally to the intensity function part of the model. [source]


    Joint Spatial Modeling of Recurrent Infection and Growth with Processes under Intermittent Observation

    BIOMETRICS, Issue 2 2010
    F. S. Nathoo
    Summary In this article, we present a new statistical methodology for longitudinal studies in forestry, where trees are subject to recurrent infection, and the hazard of infection depends on tree growth over time. Understanding the nature of this dependence has important implications for reforestation and breeding programs. Challenges arise for statistical analysis in this setting with sampling schemes leading to panel data, exhibiting dynamic spatial variability, and incomplete covariate histories for hazard regression. In addition, data are collected at a large number of locations, which poses computational difficulties for spatiotemporal modeling. A joint model for infection and growth is developed wherein a mixed nonhomogeneous Poisson process, governing recurring infection, is linked with a spatially dynamic nonlinear model representing the underlying height growth trajectories. These trajectories are based on the von Bertalanffy growth model and a spatially varying parameterization is employed. Spatial variability in growth parameters is modeled through a multivariate spatial process derived through kernel convolution. Inference is conducted in a Bayesian framework with implementation based on hybrid Monte Carlo. Our methodology is applied for analysis in an 11-year study of recurrent weevil infestation of white spruce in British Columbia. [source]


    Partial-Likelihood Analysis of Spatio-Temporal Point-Process Data

    BIOMETRICS, Issue 2 2010
    Peter J. Diggle
    Summary We investigate the use of a partial likelihood for estimation of the parameters of interest in spatio-temporal point-process models. We identify an important distinction between spatially discrete and spatially continuous models. We focus our attention on the spatially continuous case, which has not previously been considered. We use an inhomogeneous Poisson process and an infectious disease process, for which maximum-likelihood estimation is tractable, to assess the relative efficiency of partial versus full likelihood, and to illustrate the relative ease of implementation of the former. We apply the partial-likelihood method to a study of the nesting pattern of common terns in the Ebro Delta Natural Park, Spain. [source]