Random Number (random + number)

Selected Abstracts


On Models for Binomial Data with Random Numbers of Trials

BIOMETRICS, Issue 2 2007
W. Scott Comulada
Summary A binomial outcome is a count s of the number of successes out of the total number of independent trials n = s + f, where f is a count of the failures. In many studies, n is a random variable rather than a quantity fixed by design. Joint modeling of (s, f) can provide additional insight into the science and into the probability of success that cannot be directly incorporated by the logistic regression model. Observations where n = 0 are excluded from the binomial analysis yet may be important to understanding how the probability of success is influenced by covariates. Correlation between s and f may exist and be of direct interest. We propose Bayesian multivariate Poisson models for the bivariate response (s, f), correlated through random effects. We extend our models to the analysis of longitudinal and multivariate longitudinal binomial outcomes. Our methodology was motivated by two disparate examples, one from teratology and one from an HIV tertiary intervention study. [source]
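A small simulation helps make the joint (s, f) construction concrete. The sketch below (Python, my own illustration rather than the authors' Bayesian model; all parameter names and values are assumptions) draws success and failure counts as Poisson variables correlated through a shared random effect, so that n = s + f is itself random and observations with n = 0 arise naturally.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_counts(n_obs, beta_s, beta_f, sigma_b, x):
    """Simulate correlated (s, f) counts: a shared random effect b induces
    correlation between the success and failure counts."""
    b = rng.normal(0.0, sigma_b, size=n_obs)          # subject-level random effect
    mu_s = np.exp(beta_s[0] + beta_s[1] * x + b)      # Poisson mean for successes
    mu_f = np.exp(beta_f[0] + beta_f[1] * x + b)      # Poisson mean for failures
    s = rng.poisson(mu_s)
    f = rng.poisson(mu_f)
    return s, f, s + f                                # n = s + f is itself random

x = rng.normal(size=500)
s, f, n = simulate_counts(500, beta_s=(0.5, 0.3), beta_f=(1.0, -0.2), sigma_b=0.4, x=x)
print("corr(s, f) =", np.corrcoef(s, f)[0, 1])        # induced correlation
print("observations with n = 0:", np.sum(n == 0))     # kept in the joint model, not discarded
```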


The influences of data precision on the calculation of temperature percentile indices

INTERNATIONAL JOURNAL OF CLIMATOLOGY, Issue 3 2009
Xuebin Zhang
Abstract Percentile-based temperature indices are part of the suite of indices developed by the WMO CCl/CLIVAR/JCOMM Expert Team on Climate Change Detection and Indices. They have been used to analyse changes in temperature extremes for various parts of the world. We identify a bias in percentile-based indices which consist of annual counts of threshold exceedance. This bias occurs when there is insufficient precision in temperature data, and affects the estimation of the means and trends of percentile-based indices. Such imprecision occurs when temperature observations are truncated or rounded prior to being recorded and archived. The impacts on the indices depend upon the type of relation (i.e. temperature greater than or greater than or equal to) used to determine the exceedance rate. This problem can be solved when the loss of precision is not overly severe by adding a small random number to artificially restore data precision. While these adjustments do not improve the accuracy of individual observations, the exceedance rates that are computed from data adjusted in this way have properties, such as long-term mean and trend, which are similar to those directly estimated from data that are originally of the same precision as the adjusted data. Copyright © 2008 Royal Meteorological Society [source]
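The precision-restoring adjustment lends itself to a short illustration. The following sketch is a hypothetical example, not the authors' code: temperatures rounded to whole degrees bias a strict "greater than" exceedance count, and adding a small uniform random number spanning the rounding interval brings the rate back toward the value computed from full-precision data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical daily temperatures, then rounded to whole degrees to mimic
# the loss of precision described in the abstract.
t_true = rng.normal(15.0, 5.0, size=10_000)
t_rounded = np.round(t_true)                     # archived with 1 degree C precision

threshold = np.percentile(t_true, 90)            # a 90th-percentile threshold

def exceedance_rate(t, thr):
    return np.mean(t > thr)                      # strict "greater than" relation

# Restore precision by adding a small uniform random number spanning the
# rounding interval (here +/- 0.5 degrees C).
t_adjusted = t_rounded + rng.uniform(-0.5, 0.5, size=t_rounded.shape)

print("true data:    ", exceedance_rate(t_true, threshold))
print("rounded data: ", exceedance_rate(t_rounded, threshold))
print("adjusted data:", exceedance_rate(t_adjusted, threshold))
```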


A new traffic model for backbone networks and its application to performance analysis

INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 6 2008
Ming Yu
Abstract In this paper, we present a new traffic model constructed from a random number of shifting level processes (SLPs) aggregated over time, in which the lengths of the active periods of the SLPs follow a Pareto or truncated Pareto distribution. For both cases, the model is proved to be asymptotically second-order self-similar. However, based on extensive traffic data collected from a backbone network, we find that the active periods of the constructing SLPs are better approximated by a truncated Pareto distribution than by the Pareto distribution assumed in existing traffic model constructions. The queueing problem of a single server fed with traffic described by the model is equivalently converted to a problem with traffic described by Norros' model. For the tail probability of the queue length distribution, an approximate expression and an upper bound have been found in terms of large deviation estimates; these are mathematically more tractable than existing results. The effectiveness of the traffic model and the performance results are demonstrated by our simulations and experimental studies on a backbone network. Copyright © 2007 John Wiley & Sons, Ltd. [source]
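The construction of the model can be sketched by aggregating a number of on/off (shifting level) sources whose active periods follow a truncated Pareto law. The code below is a simplified illustration under assumed parameter values; the idle-period distribution and the rates are placeholders rather than quantities taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def truncated_pareto(alpha, x_min, x_max, size):
    """Inverse-CDF sampling from a Pareto law truncated to [x_min, x_max]."""
    u = rng.uniform(size=size)
    c = 1.0 - (x_min / x_max) ** alpha
    return x_min / (1.0 - c * u) ** (1.0 / alpha)

def aggregate_traffic(n_sources, horizon, alpha=1.4, on_min=1, on_max=1000):
    """Aggregate n_sources on/off shifting level processes over `horizon` time slots."""
    load = np.zeros(horizon)
    for _ in range(n_sources):
        t = 0
        while t < horizon:
            on = int(truncated_pareto(alpha, on_min, on_max, 1)[0])   # active period length
            off = int(rng.exponential(50.0)) + 1                      # idle period (placeholder law)
            load[t:t + on] += 1.0                                     # one unit of rate while active
            t += on + off
    return load

trace = aggregate_traffic(n_sources=50, horizon=20_000)
print("mean load:", trace.mean(), " peak load:", trace.max())
```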


Optical switch dimensioning and the classical occupancy problem

INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 2-3 2002
Vincenzo Eramo
Abstract Results for optical switch dimensioning are obtained by analysing an urn occupancy problem in which a random number of balls is used. This analysis is applied to a high speed bufferless optical switch which uses tuneable wavelength converters to resolve contention between packets at the output fibres. Under symmetric packet routing the urn problem reduces to the classical occupancy problem. Since the problem is large scale and the loss probabilities are small, exact analysis by combinatorial methods is problematic. As an alternative, we outline a large deviations approximation which may be generalised in various ways. Copyright © 2002 John Wiley & Sons, Ltd. [source]
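For modest configurations, the urn formulation admits a direct Monte Carlo check. The sketch below is an illustrative occupancy simulation under symmetric routing with made-up switch parameters; it is not the large deviations approximation developed in the paper, which is needed precisely because such brute-force estimation breaks down for large switches and very small loss probabilities.

```python
import numpy as np

rng = np.random.default_rng(7)

def loss_probability(n_inputs, n_outputs, n_wavelengths, p_arrival, n_trials=50_000):
    """Monte Carlo estimate of packet loss for a bufferless switch: a random
    (binomial) number of packets is thrown uniformly at the output fibres,
    and packets exceeding the wavelength count at a fibre are lost."""
    lost = 0
    total = 0
    for _ in range(n_trials):
        n_packets = rng.binomial(n_inputs, p_arrival)        # random number of balls
        if n_packets == 0:
            continue
        outputs = rng.integers(0, n_outputs, size=n_packets)  # symmetric routing
        counts = np.bincount(outputs, minlength=n_outputs)
        lost += np.sum(np.maximum(counts - n_wavelengths, 0))
        total += n_packets
    return lost / total

print(loss_probability(n_inputs=32, n_outputs=32, n_wavelengths=8, p_arrival=0.8))
```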


The influence of spatial errors in species occurrence data used in distribution models

JOURNAL OF APPLIED ECOLOGY, Issue 1 2008
Catherine H Graham
Summary 1. Species distribution modelling is used increasingly in both applied and theoretical research to predict how species are distributed and to understand attributes of species' environmental requirements. In species distribution modelling, various statistical methods are used that combine species occurrence data with environmental spatial data layers to predict the suitability of any site for that species. While the number of data sharing initiatives involving species' occurrences in the scientific community has increased dramatically over the past few years, various data quality and methodological concerns related to using these data for species distribution modelling have not been addressed adequately. 2. We evaluated how uncertainty in georeferences and associated locational error in occurrences influence species distribution modelling using two treatments: (1) a control treatment where models were calibrated with original, accurate data and (2) an error treatment where data were first degraded spatially to simulate locational error. To incorporate error into the coordinates, we moved each coordinate with a random number drawn from the normal distribution with a mean of zero and a standard deviation of 5 km. We evaluated the influence of error on the performance of 10 commonly used distributional modelling techniques applied to 40 species in four distinct geographical regions. 3. Locational error in occurrences reduced model performance in three of these regions; relatively accurate predictions of species distributions were possible for most species, even with degraded occurrences. Two species distribution modelling techniques, boosted regression trees and maximum entropy, were the best performing models in the face of locational errors. The results obtained with boosted regression trees were only slightly degraded by errors in location, and the results obtained with the maximum entropy approach were not affected by such errors. 4. Synthesis and applications. To use the vast array of occurrence data that exists currently for research and management relating to the geographical ranges of species, modellers need to know the influence of locational error on model quality and whether some modelling techniques are particularly robust to error. We show that certain modelling techniques are particularly robust to a moderate level of locational error and that useful predictions of species distributions can be made even when occurrence data include some error. [source]
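The error treatment is easy to reproduce. The sketch below assumes occurrence coordinates in a projected system measured in kilometres and perturbs each record with Gaussian noise of mean zero and standard deviation 5 km; it illustrates the treatment only and is not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(2024)

def degrade_coordinates(xy_km, sd_km=5.0):
    """Add independent Gaussian error (mean 0, sd 5 km) to projected
    x/y coordinates, mimicking the error treatment described above."""
    return xy_km + rng.normal(0.0, sd_km, size=xy_km.shape)

# Hypothetical occurrence records in a projected (km) coordinate system.
occurrences = np.column_stack([rng.uniform(0, 500, 200), rng.uniform(0, 500, 200)])
occurrences_error = degrade_coordinates(occurrences)

shift = np.linalg.norm(occurrences_error - occurrences, axis=1)
print("median locational error: %.1f km" % np.median(shift))
```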


Increasing the Homogeneity of CAT's Item-Exposure Rates by Minimizing or Maximizing Varied Target Functions While Assembling Shadow Tests

JOURNAL OF EDUCATIONAL MEASUREMENT, Issue 3 2005
Yuan H. Li
A computerized adaptive testing (CAT) algorithm with the potential to increase the homogeneity of CAT's item-exposure rates without significantly sacrificing the precision of ability estimates was proposed and assessed in the shadow-test (van der Linden & Reese, 1998) CAT context. This CAT algorithm was formed by a combination of maximizing or minimizing varied target functions while assembling shadow tests. Four target functions were used separately in the first, second, third, and fourth quarters of the test. The elements used in the four functions were associated with (a) a random number assigned to each item, (b) the absolute difference between an examinee's current ability estimate and an item difficulty, (c) the absolute difference between an examinee's current ability estimate and an optimum item difficulty, and (d) item information. The results indicated that this combined CAT fully utilized all the items in the pool, reduced the maximum exposure rates, and achieved more homogeneous exposure rates. Moreover, its precision in recovering ability estimates was similar to that of the maximum item-information method. The combined CAT method produced the best overall results compared with the other individual CAT item-selection methods. The findings from the combined CAT are encouraging. Future uses are discussed. [source]
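The four quarter-specific criteria can be sketched schematically. The code below only illustrates how the elements (a) through (d) might rank candidate items in each quarter; the actual method embeds these targets in shadow-test assembly, which is not reproduced here, and all pool values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

def item_score(quarter, theta_hat, b, b_optimal, info, random_value):
    """Schematic ranking criteria for the four quarters of the test:
    (a) a random number, (b) |theta - b|, (c) |theta - b_optimal|, (d) item information.
    Lower scores are preferred; information is negated so a single argmin applies."""
    if quarter == 1:
        return random_value                       # spread exposure early in the test
    if quarter == 2:
        return abs(theta_hat - b)
    if quarter == 3:
        return abs(theta_hat - b_optimal)
    return -info                                  # quarter 4: maximize item information

# Rank a hypothetical candidate pool for an examinee with theta_hat = 0.2 in quarter 1.
pool_b = rng.normal(size=10)                      # item difficulties
pool_info = rng.uniform(0.2, 1.0, size=10)        # item information values
scores = [item_score(1, 0.2, b, 0.0, i, rng.uniform()) for b, i in zip(pool_b, pool_info)]
print("selected item:", int(np.argmin(scores)))
```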


The Use of the RAFT-Technique for the Preparation of Temperature/pH Sensitive Polymers in Different Architectures

MACROMOLECULAR SYMPOSIA, Issue 1 2009
Angel Licea-Claveríe
Abstract In this contribution we report the use of the RAFT-technique for the preparation of three types of responsive polymeric materials with high potential for application in the biomedical field: (1) diblock copolymers with reversible self-assembly capacity as a function of pH, based on N,N-diethylaminoethyl methacrylate (DEAEM) and 2-methacryloyloxy benzoic acid (MAOB); (2) diblock copolymers with reversible self-assembly capacity as a function of temperature, based on N -isopropylacrylamide (NIPAAm) and n-hexyl acrylate (HA); and (3) polymeric stars with a random number of arms, consisting either of NIPAAm arms or of copolymeric NIPAAm arms, with a hydrophobic core. [source]


The cutoff phenomenon for randomized riffle shuffles

RANDOM STRUCTURES AND ALGORITHMS, Issue 3 2008
Guan-Yu Chen
Abstract We study the cutoff phenomenon for generalized riffle shuffles where, at each step, the deck of cards is cut into a random number of packs of multinomial sizes which are then riffled together. © 2007 Wiley Periodicals, Inc. Random Struct. Alg., 2008 [source]
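A single step of such a shuffle is short to code. The sketch below implements one a-shuffle in the Gilbert–Shannon–Reeds sense: pack sizes are multinomial, and the packs are riffled together by dropping cards with probability proportional to pack size. It is an illustration for a fixed number of packs a; the randomized cut distributions studied in the paper are not reproduced here.

```python
import random

def a_shuffle(deck, a):
    """One generalized riffle (a-)shuffle: cut the deck into a packs of
    multinomial sizes, then riffle the packs together by repeatedly dropping
    the next card of a pack chosen with probability proportional to its size."""
    n = len(deck)
    # Multinomial pack sizes: each card independently assigned to one of a packs.
    cuts = sorted(random.randrange(a) for _ in range(n))
    packs, start = [], 0
    for k in range(a):
        size = cuts.count(k)
        packs.append(list(deck[start:start + size]))
        start += size
    # Riffle: drop from pack i with probability |pack i| / (remaining cards).
    out = []
    while any(packs):
        sizes = [len(p) for p in packs]
        r = random.randrange(sum(sizes))
        for i, s in enumerate(sizes):
            if r < s:
                out.append(packs[i].pop(0))
                break
            r -= s
    return out

print(a_shuffle(list(range(10)), a=3))
```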


Asymptotics in Knuth's parking problem for caravans

RANDOM STRUCTURES AND ALGORITHMS, Issue 1 2006
Jean Bertoin
Abstract We consider a generalized version of Knuth's parking problem, in which caravans consisting of a random number of cars arrive at random on the unit circle. Then each car turns clockwise until it finds a free space to park. Extending a recent work by Chassaing and Louchard, Random Struct Algor 21(1) (2002), 76–119, we relate the asymptotics for the sizes of blocks formed by occupied spots with the dynamics of the additive coalescent. According to the behavior of the caravans' size tail distribution, several qualitatively different versions of the eternal additive coalescent are involved. © 2005 Wiley Periodicals, Inc. Random Struct. Alg., 2006 [source]
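A discrete caricature of the model makes the "blocks of occupied spots" concrete. The sketch below uses a finite circle and a geometric caravan-size law purely for illustration; the paper works on the continuous unit circle with general size distributions.

```python
import numpy as np

rng = np.random.default_rng(11)

def park_caravans(n_spots, n_caravans, mean_size=3.0):
    """Discrete caricature of the caravan parking problem: caravans of a
    random (geometric) number of cars arrive at uniform positions on a
    circle of n_spots spots; each car drives clockwise to the first free spot."""
    occupied = np.zeros(n_spots, dtype=bool)
    for _ in range(n_caravans):
        arrival = rng.integers(n_spots)                 # uniform arrival point
        size = rng.geometric(1.0 / mean_size)           # random caravan size
        for _ in range(size):
            pos = arrival
            while occupied[pos]:
                pos = (pos + 1) % n_spots               # drive clockwise
            occupied[pos] = True
            if occupied.all():
                return occupied
    return occupied

spots = park_caravans(n_spots=1000, n_caravans=200)

# Sizes of maximal blocks of occupied spots (ignoring circular wrap-around).
blocks, run = [], 0
for filled in spots:
    if filled:
        run += 1
    elif run:
        blocks.append(run)
        run = 0
if run:
    blocks.append(run)
print("largest occupied block:", max(blocks))
```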


Implementing quality control on a random number stream to improve a stochastic weather generator

HYDROLOGICAL PROCESSES, Issue 8 2008
Charles R. Meyer
Abstract For decades, stochastic modellers have used computerized random number generators to produce random numeric sequences fitting a specified statistical distribution. Unfortunately, none of the random number generators we tested satisfactorily produced the target distribution. The result is that the generated distributions have means that diverge from the means used to generate them, regardless of the length of the run. Non-uniform distributions from short sequences of random numbers are a major problem in stochastic climate generation, because truly uniform distributions are required to produce the intended climate parameter distributions. In order to ensure generation of a representative climate with the stochastic weather generator CLIGEN within a 30-year run, we tested the climate output resulting from various random number generators. The resulting distributions of climate parameters showed significant departures from the target distributions in all cases. We traced this failure back to the uniform random number generators themselves. This paper proposes a quality control approach in which only those numbers that conform to the expected distribution are retained for subsequent use. The approach is based on goodness-of-fit analysis applied to the random numbers generated. Normally distributed deviates are further tested with confidence interval tests on their means and standard deviations. The positive effect of the new approach on the generated climate characteristics and on the subsequent deterministic process-based hydrology and soil erosion modelling is illustrated for four climatologically diverse sites. Copyright © 2007 John Wiley & Sons, Ltd. [source]
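The quality-control idea can be sketched as a filter over the raw random number stream. The code below is a simplified illustration rather than the CLIGEN implementation: it applies a Kolmogorov–Smirnov goodness-of-fit test to fixed-size batches of uniform deviates and regenerates any batch that fails; the batch size and significance level are placeholders.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(123)

def quality_controlled_uniforms(n_batches, batch_size=365, alpha=0.05):
    """Retain only batches of uniform deviates that pass a Kolmogorov-Smirnov
    goodness-of-fit test against U(0, 1); rejected batches are regenerated.
    (A CLIGEN-style application would further test normal deviates' means and
    standard deviations with confidence-interval checks.)"""
    accepted = []
    while len(accepted) < n_batches:
        batch = rng.uniform(size=batch_size)
        if stats.kstest(batch, "uniform").pvalue > alpha:
            accepted.append(batch)
    return np.concatenate(accepted)

u = quality_controlled_uniforms(30)          # e.g. 30 "years" of daily deviates
print("mean of retained deviates:", u.mean())
```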


Sampling Procedures for Coordinating Stratified Samples: Methods Based on Microstrata

INTERNATIONAL STATISTICAL REVIEW, Issue 3 2008
Desislava Nedyalkova
Summary The aim of sampling coordination is to maximize or minimize the overlap between several samples drawn successively in a population that changes over time. Therefore, the selection of a new sample will depend on the samples previously drawn. In order to obtain a larger (or smaller) overlap of the samples than the one obtained by independent selection of samples, a dependence between the samples must be introduced. This dependence will emphasize (or limit) the number of common units in the selected samples. Several methods for coordinating stratified samples, such as the Kish & Scott method, the Cotton & Hesse method, and the Rivière method, have already been developed. Using simulations, we compare the optimality of these methods and their quality of coordination. We present six new methods based on permanent random numbers (PRNs) and microstrata. These new methods have the advantage of allowing us to choose between positive or negative coordination with each of the previous samples. Simulations are run to test the validity of each of them. [source]
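The permanent random number (PRN) mechanism underlying these methods can be illustrated in a few lines. The sketch below shows only the basic idea within a single stratum, with a simple shift used for negative coordination; the microstrata machinery proposed in the paper is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)

def prn_sample(prn, n, shift=0.0):
    """Select the n units with the smallest shifted permanent random numbers.
    shift = 0 gives positive coordination with earlier samples drawn the same
    way; a different shift steers selection toward other units (negative coordination)."""
    return np.argsort((prn + shift) % 1.0)[:n]

population = 1000
prn = rng.uniform(size=population)           # one permanent random number per unit

sample_1 = prn_sample(prn, 100)              # first occasion
sample_2 = prn_sample(prn, 100)              # second occasion, positive coordination
sample_3 = prn_sample(prn, 100, shift=0.5)   # negative coordination via a shift

print("overlap 1-2:", len(np.intersect1d(sample_1, sample_2)))   # maximal overlap
print("overlap 1-3:", len(np.intersect1d(sample_1, sample_3)))   # minimal overlap
```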


Transport-equilibrium schemes for computing nonclassical shocks. Scalar conservation laws

NUMERICAL METHODS FOR PARTIAL DIFFERENTIAL EQUATIONS, Issue 4 2008
Abstract This paper presents a new numerical strategy for computing the nonclassical weak solutions of scalar conservation laws which fail to be genuinely nonlinear. We concentrate on the typical situation of concave–convex and convex–concave flux functions. In such situations the so-called nonclassical shocks, violating the classical Oleinik entropy criterion and selected by a prescribed kinetic relation, naturally arise in the resolution of the Riemann problem. Enforcing the kinetic relation from a numerical point of view is known to be a crucial but challenging issue. By means of an algorithm made of two steps, namely an Equilibrium step and a Transport step, we show how to enforce the validity of the kinetic relation at the discrete level. The proposed strategy is based on the use of a numerical flux function and random numbers. We prove that the resulting scheme enjoys important consistency properties. Numerous numerical experiments illustrate the validity of our approach. © 2007 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq, 2008 [source]


Sequential design in quality control and validation of land cover databases

APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 2 2009
Elisabetta Carfagna
Abstract We have faced the problem of evaluating the quality of land cover databases produced through photo-interpretation of remote-sensing data according to a legend of land cover types. First, we consider quality control, that is, the comparison of a land cover database with the result of the photo-interpretation made by a more expert photo-interpreter on a sample of the polygons. Then we analyse the problem of validation, that is, the check of the photo-interpretation through a ground survey. We use the percentage of area correctly photo-interpreted as a quality measure. Since the kind of land cover type and the size of the polygons affect the probability of making mistakes in the photo-interpretation, we stratify the polygons according to two variables: the land cover type of the photo-interpretation and the size of the polygons. We propose an adaptive sequential procedure with permanent random numbers in which the sample size per stratum depends on the previously selected units but the sample selection does not, and the stopping rule is not based on the estimates of the quality parameter. We prove that this quality control and validation procedure yields unbiased and efficient estimates of the quality parameters and allows high precision of the estimates to be reached with the smallest sample size. Copyright © 2009 John Wiley & Sons, Ltd. [source]


Dynamics of an age-structured population drawn from a random numbers table

AUSTRAL ECOLOGY, Issue 4 2000
Bertram G. Murray JR
Abstract I constructed age-structured populations by drawing numbers from a random numbers table, the constraints being that within a cohort each number be smaller than the preceding number (indicating that some individuals died between one year and the next) and that the first two-digit number following 00 or 01 ending one cohort's life be the number born into the next cohort. Populations constructed in this way showed prolonged existence with total population numbers fluctuating about a mean size and with long-term growth rate (r) ≈ 0. The populations' birth rates and growth rates and the females' per capita fecundity decreased significantly with population size, whereas the death rates showed no significant relationship to population size. These results indicate that age-structured populations can persist for long periods of time with long-term growth rates of zero in the absence of negative-feedback loops between a population's present or prior density and its birth rate, growth rate, and fecundity, contrary to the assumption of density-dependent regulation hypotheses. Thus, a long-term growth rate of zero found in natural populations need not indicate that a population's numbers are regulated by density-dependent factors. [source]
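A loose computational analogue of the construction can be written by substituting a pseudo-random generator for the printed table. In the sketch below, details such as how non-decreasing draws are handled are my assumptions rather than the author's stated procedure.

```python
import random

random.seed(6)

def next_cohort_counts(initial):
    """Follow one cohort: draw two-digit 'table' values that must decrease
    year by year (survivors), stopping when 00 or 01 is reached."""
    counts = [initial]
    while counts[-1] > 1:
        draw = random.randrange(100)          # next two-digit entry in the table
        if draw < counts[-1]:                 # accept only strictly smaller values
            counts.append(draw)
    return counts

def simulate_population(years=200):
    cohorts = []
    for _ in range(years):
        births = random.randrange(2, 100)     # two-digit number starting the cohort
        cohorts.append(next_cohort_counts(births))
    return cohorts

cohorts = simulate_population()
print("mean cohort lifespan (years):", sum(len(c) for c in cohorts) / len(cohorts))
```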

