Lognormal Distribution


Selected Abstracts


Application of Sartwell's Model (Lognormal Distribution of Incubation Periods) to Age at Onset and Age at Death of Foals with Rhodococcus equi Pneumonia as Evidence of Perinatal Infection

JOURNAL OF VETERINARY INTERNAL MEDICINE, Issue 3 2001
Miriam L. Horowitz
The distributions of the incubation periods for infectious and neoplastic diseases originating from point-source exposures, and for genetic diseases, follow a lognormal distribution (Sartwell's model). Conversely, incubation periods in propagated outbreaks and diseases with strong environmental components do not follow a lognormal distribution. In this study Sartwell's model was applied to the age at onset and age at death of foals with Rhodococcus equi pneumonia. The age at onset of clinical signs and age at death were compiled for 107 foals that had been diagnosed with R. equi pneumonia at breeding farms in Argentina and Japan. For each outcome (disease and death), these data followed a lognormal distribution. A group of 115 foals with colic from the University of California was used as a comparison group. The age at onset of clinical signs for these foals did not follow a lognormal distribution. These results were consistent with the hypothesis that foals are infected with R. equi during the first several days of life, similar to a point-source exposure. [source]
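
Sartwell's criterion amounts to a normality test on the log scale. A minimal sketch in Python, with synthetic ages standing in for the study's 107 foals (Shapiro-Wilk is one reasonable choice of test, not necessarily the one used):

```python
import numpy as np
from scipy import stats

# Hypothetical age-at-onset data in days; the real study compiled 107 foals.
rng = np.random.default_rng(1)
age_at_onset = rng.lognormal(mean=np.log(40), sigma=0.5, size=107)

# Sartwell's model: ages are lognormal iff log(ages) are normal,
# so a normality test on the log scale serves as the model check.
w, p = stats.shapiro(np.log(age_at_onset))
print(f"Shapiro-Wilk on log(age): W={w:.3f}, p={p:.3f}")
# A large p-value gives no evidence against lognormality (consistent
# with Sartwell's model); a small one rejects it, as for the colic group.
```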


Experimental determination of saltating glass particle dispersion in a turbulent boundary layer

EARTH SURFACE PROCESSES AND LANDFORMS, Issue 14 2006
H. T. Wang
Abstract A horizontal saltation layer of glass particles in air is investigated experimentally over a flat bed and over a triangular ridge in a wind tunnel. Particle concentrations are measured by light scattering diffusion (LSD) and digital image processing, and velocities by particle image velocimetry (PIV). All the statistical moments of the particle concentration are determined, such as the mean concentration, root mean square concentration fluctuations, and the skewness and flatness coefficients. Over the flat bed, it is confirmed that the mean concentration decreases exponentially with height, the mean dispersion height being a significant length scale. The concentration distribution is shown to follow a lognormal distribution quite well. Over the ridge, measurements were made at the top of the ridge and in the cavity region and are compared with measurements without the ridge. On the hill crest, particles are retarded, the saltation layer decreases in thickness and the concentration increases. Downwind of the ridge, the particle flow behaves like a jet; in particular, no particle return flow is observed. Copyright © 2006 John Wiley & Sons, Ltd. [source]


Applying species-sensitivity distributions in ecological risk assessment: Assumptions of distribution type and sufficient numbers of species

ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 2 2000
Michael C. Newman
Abstract Species-sensitivity distribution methods assemble single-species toxicity data to predict hazardous concentrations (HCps) affecting a certain percentage (p) of species in a community. The fit of the lognormal model and required number of individual species values were evaluated with 30 published data sets. The increasingly common assumption that a lognormal model best fits these data was not supported. Fifteen data sets failed a formal test of conformity to a lognormal distribution; other distributions often provided better fit to the data than the lognormal distribution. An alternate bootstrap method provided accurate estimates of HCp without the assumption of a specific distribution. Approximate sample sizes producing HC5 estimates with minimal variance ranged from 15 to 55, and had a median of 30 species-sensitivity values. These sample sizes are higher than those suggested in recent regulatory documents. A bootstrap method is recommended that predicts with 95% confidence the concentration affecting 5% or fewer species. [source]
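
The recommended distribution-free bootstrap can be sketched in a few lines; the toxicity values below are synthetic stand-ins, not one of the 30 published data sets:

```python
import numpy as np

rng = np.random.default_rng(7)
tox = rng.lognormal(3.0, 1.2, size=30)   # hypothetical species sensitivities

# Resample species values and take the 5th percentile of each resample...
boot_hc5 = [np.percentile(rng.choice(tox, size=tox.size, replace=True), 5)
            for _ in range(5000)]
# ...then the lower 5th percentile of those HC5s is a concentration
# expected to affect 5% or fewer species with ~95% confidence.
print(f"HC5 point estimate:         {np.percentile(tox, 5):.2f}")
print(f"95% lower confidence bound: {np.percentile(boot_hc5, 5):.2f}")
```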


A compound Poisson model for the annual area burned by forest fires in the province of Ontario

ENVIRONMETRICS, Issue 5 2010
Justin J. Podur
Abstract We use the compound Poisson probability distribution to model the annual area burned by forest fires in the Canadian province of Ontario. Models for sums of random variables, which are relevant for modeling aggregate insurance claims and assessing insurance risk, are also relevant for modeling the aggregate area burned as the sum of the sizes of individual fires. Researchers have fitted the distribution of fire sizes with the truncated power-law (or Pareto) distribution (Ward et al., 2001) and a four-parameter Weibull distribution (Reed and McKelvey, 2002). Armstrong (1999) fitted a lognormal distribution to the annual proportion of area burned by forest fires in a region of Alberta. We derive expressions and moments for the aggregate area burned in Ontario using fire data from the Ontario Ministry of Natural Resources (OMNR). We derive expressions for the distribution of area burned for "severe" and "mild" fire weather scenarios and for "intensive suppression" and "no suppression" scenarios (represented by the intensive and extensive fire protection zones of the province). These distributions can be used to perform risk analysis of annual area burned. Copyright © 2009 John Wiley & Sons, Ltd. [source]
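
The compound Poisson structure is easy to state: annual area S = X1 + ... + XN with N ~ Poisson(lambda) fires and Xi the individual fire sizes, giving E[S] = lambda*E[X] and Var[S] = lambda*E[X^2]. A simulation sketch with illustrative parameters (lognormal fire sizes stand in for the truncated Pareto and Weibull fits cited above):

```python
import numpy as np

rng = np.random.default_rng(42)
lam = 200.0                # hypothetical mean number of fires per year
mu, sigma = 2.0, 1.5       # hypothetical lognormal fire-size parameters (ha)

def annual_area(rng):
    n = rng.poisson(lam)                       # number of fires this year
    return rng.lognormal(mu, sigma, size=n).sum()

sims = np.array([annual_area(rng) for _ in range(10_000)])
ex = np.exp(mu + sigma**2 / 2)                 # E[X]
ex2 = np.exp(2 * mu + 2 * sigma**2)            # E[X^2]
print(f"mean: simulated {sims.mean():9.0f} vs theory {lam * ex:9.0f}")
print(f"var : simulated {sims.var():9.0f} vs theory {lam * ex2:9.0f}")
```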


Estimating the unknown change point in the parameters of the lognormal distribution

ENVIRONMETRICS, Issue 2 2007
V. K. Jandhyala
Abstract We develop change-point methodology for identifying dynamic trends in the parameters of a two-parameter lognormal distribution. The methodology primarily considers the asymptotic distribution of the maximum likelihood estimate of the unknown change point. Among other things, the asymptotic distribution enables one to construct confidence interval estimates for the unknown change point. The methodology is applied to identify changes in the monthly water discharges of the Nacetinsky Creek in the German part of the Erzgebirge Mountains. Copyright © 2006 John Wiley & Sons, Ltd. [source]
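
On the log scale the problem reduces to a change in the mean and variance of a normal sequence, so the maximum likelihood change point can be located by a profile-likelihood scan. A sketch on synthetic data (the asymptotic confidence intervals are beyond this snippet):

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.concatenate([rng.lognormal(1.0, 0.4, 120),    # regime 1
                    rng.lognormal(1.6, 0.4, 80)])    # regime 2
x = np.log(y)

def max_loglik(seg):
    # Gaussian log-likelihood with MLE mean and variance plugged in.
    n, v = seg.size, seg.var()
    return -0.5 * n * (np.log(2 * np.pi * v) + 1)

n = x.size
profile = [max_loglik(x[:k]) + max_loglik(x[k:]) for k in range(2, n - 1)]
tau_hat = 2 + int(np.argmax(profile))
print(f"estimated change point: {tau_hat} (true: 120)")
```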


Predictive distributions in risk analysis and estimation for the triangular distribution

ENVIRONMETRICS, Issue 7 2001
Yongsung Joo
Abstract Many Monte Carlo simulation studies have been done in the field of risk analysis. This article demonstrates the importance of using predictive distributions (the estimated distributions of the explanatory variable accounting for uncertainty in point estimation of parameters) in the simulations. We explore different types of predictive distributions for the normal distribution, the lognormal distribution and the triangular distribution. The triangular distribution poses particular problems, and we found that estimation using quantile least squares was preferable to maximum likelihood. Copyright © 2001 John Wiley & Sons, Ltd. [source]
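
Quantile least squares for the triangular distribution can be sketched as follows: match the closed-form triangular quantile function to the empirical quantiles and minimize the squared discrepancy over (a, c, b). Data and starting values are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
data = np.sort(rng.triangular(2.0, 5.0, 10.0, size=200))
p = (np.arange(1, data.size + 1) - 0.5) / data.size   # plotting positions

def tri_quantile(p, a, c, b):
    fc = (c - a) / (b - a)                   # CDF value at the mode
    return np.where(p <= fc,
                    a + np.sqrt(p * (b - a) * (c - a)),
                    b - np.sqrt((1 - p) * (b - a) * (b - c)))

def qls_loss(theta):
    a, c, b = theta
    if not (a < c < b):
        return np.inf                        # keep parameters ordered
    return np.sum((tri_quantile(p, a, c, b) - data) ** 2)

res = minimize(qls_loss, x0=[data.min(), np.median(data), data.max()],
               method="Nelder-Mead")
print("QLS estimates (a, c, b):", np.round(res.x, 2))
```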


Undersampling bias: the null hypothesis for singleton species in tropical arthropod surveys

JOURNAL OF ANIMAL ECOLOGY, Issue 3 2009
Jonathan A. Coddington
Summary
1. Frequency of singletons (species represented by single individuals) is anomalously high in most large tropical arthropod surveys (average, 32%).
2. We sampled 5965 adult spiders of 352 species (29% singletons) from 1 ha of lowland tropical moist forest in Guyana.
3. Four common hypotheses (small body size, male-biased sex ratio, cryptic habits, clumped distributions) failed to explain singleton frequency. Singletons are larger than other species, not gender-biased, share no particular lifestyle, and are not clumped at 0.25–1 ha scales.
4. Monte Carlo simulation of the best-fit lognormal community shows that the observed data fit a random sample from a community of ~700 species and 1–2 million individuals, implying approximately 4% true singleton frequency.
5. Undersampling causes systematic negative bias of species richness, and should be the default null hypothesis for singleton frequencies.
6. Drastically greater sampling intensity in tropical arthropod inventory studies is required to yield realistic species richness estimates.
7. The lognormal distribution deserves greater consideration as a richness estimator when undersampling bias is severe. [source]
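
The Monte Carlo argument in point 4 can be sketched directly: draw a sample of the observed size from a hypothetical lognormal community and count how many species appear exactly once. The community parameters below are illustrative, not the fitted Guyana values:

```python
import numpy as np

rng = np.random.default_rng(11)
S, N_comm, n_sample = 700, 1_500_000, 5965

abund = rng.lognormal(mean=0.0, sigma=1.8, size=S)   # species abundances
abund = np.round(abund / abund.sum() * N_comm).astype(int)
pool = np.repeat(np.arange(S), abund)                # labelled individuals

def singleton_fraction(rng):
    draw = rng.choice(pool, size=n_sample, replace=False)
    counts = np.bincount(draw, minlength=S)
    return (counts == 1).sum() / (counts > 0).sum()

frac = np.mean([singleton_fraction(rng) for _ in range(20)])
print(f"singleton fraction expected from undersampling alone: {frac:.0%}")
```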


Small-angle neutron and X-ray scattering of dispersions of oleic-acid-coated magnetic iron particles

JOURNAL OF APPLIED CRYSTALLOGRAPHY, Issue 6 2004
Karen Butter
This paper describes the characterization of dispersions of oleic-acid-coated magnetic iron particles by small-angle neutron and X-ray scattering (SANS and SAXS). Both oxidized and non-oxidized dilute samples were studied by SANS at different contrasts. The non-oxidized samples are found to consist of non-interacting superparamagnetic single dipolar particles, with a lognormal distribution of iron core sizes, surrounded by a surfactant shell that is partially penetrated by solvent. This model is supported by SAXS measurements on the same dispersion. Small iron particles are expected to oxidize upon exposure to air. SANS was used to study the effect of this oxidation, both on single particles and on interparticle interactions. It is found that on exposure to air a non-magnetic oxide layer forms around the iron cores, which increases the particle size. In addition, particles are found to aggregate upon oxidation, presumably because the surfactant density on the particle surfaces is decreased. [source]


Fatigue behaviour of industrial polymers – a microbeam small-angle X-ray scattering investigation

JOURNAL OF APPLIED CRYSTALLOGRAPHY, Issue 3-1 2003
Stephan V. Roth
The results of a microbeam small-angle X-ray scattering investigation, performed at ID13/ESRF, of the fatigue induced in an industrial polystyrene sample by short-term, high-force bending are presented. A clear indication of craze formation in the deformation zone is seen. For this zone the results suggest a model of cylindrical voids whose radii follow a lognormal distribution. Furthermore, the orientation of the crazes shows a locally varying angular distribution around the axis normal to the force direction. [source]


Immediate drug release from solid oral dosage forms

JOURNAL OF PHARMACEUTICAL SCIENCES, Issue 1 2005
Thomas Schreiner
Abstract Fast drug release from solid dosage forms requires very fast contact of the vast majority of the drug particles with the solvent; this contact, however, is particularly delayed in tablets and granulations. Starch and cellulose substances favor matrix disintegration during the starting phase and thereby the generation of the effective dissolution surface of the drug substance. To investigate the very complex interrelation between the functionality of commonly used excipients and the structural effects of the production processes, the wettability, porosity, water uptake, and drug release rates of several ketoprofen-excipient preparations (powder blends, granulations, tablets) were measured. Significant linear correlation between these parameters, however, was not achieved; only qualitative tendencies of the effects could be detected. Consequently, a fully correct general mathematical model describing the mechanistic steps of drug dissolution from solid dosage forms was not realized. However, the time-dependent change of the effective dissolution surface follows stochastic models: a new dissolution equation is based on the differential Noyes-Whitney equation combined with a distribution function, e.g. the lognormal distribution, and is numerically solved with the software system EASY-FIT by fitting to the observations. This new model coincides with the data to a considerably higher degree of accuracy than the Weibull function alone, particularly during the starting, matrix disintegration, and end phases. In combination with a procedure continuously quantifying the dissolved drug, this mathematical model is suitable for the characterization and optimization of immediate drug release by the choice and modification of excipients and unit operations. The interdependence of some characteristic effects of excipients and production methods is discussed. © 2004 Wiley-Liss, Inc. and the American Pharmacists Association J Pharm Sci 94:120–133, 2005 [source]
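
The modelling idea, a Noyes-Whitney rate driven by an effective surface that grows according to a stochastic disintegration law, can be sketched with a lognormal gate. All parameter values are hypothetical; the authors' actual fitting used EASY-FIT:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.stats import lognorm

k, A_max, Cs = 0.05, 1.0, 1.0          # rate constant, max surface, solubility
wetting = lognorm(s=0.8, scale=5.0)    # lognormal disintegration times (min)

def dCdt(t, C):
    A_eff = A_max * wetting.cdf(t)     # dissolution surface exposed by time t
    return k * A_eff * (Cs - C)        # Noyes-Whitney driving term

sol = solve_ivp(dCdt, [0, 120], [0.0], dense_output=True)
for t in (5, 15, 30, 60, 120):
    print(f"t={t:3d} min: dissolved fraction {sol.sol(t)[0] / Cs:.2f}")
```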


Influence of Dissolved Oxygen Concentration on the Pharmacokinetics of Alcohol in Humans

ALCOHOLISM, Issue 5 2010
In-hwan Baek
Background: Ethanol oxidation by the microsomal ethanol oxidizing system requires oxygen for alcohol metabolism, and a higher oxygen uptake increases the rate of ethanol oxidation. We investigated the effect of dissolved oxygen on the pharmacokinetics of alcohol in healthy humans (n = 49). The concentrations of dissolved oxygen were 8, 20, and 25 ppm in alcoholic drinks of 240 and 360 ml (19.5% v/v). Methods: Blood alcohol concentrations (BACs) were determined by converting breath alcohol concentrations. Breath samples were collected every 30 min when the BAC was higher than 0.015%, every 20 min at BAC ≤0.015%, every 10 min at BAC ≤0.010%, and every 5 min at BAC ≤0.006%. Results: The high dissolved oxygen groups (20, 25 ppm) descended to 0.000% and 0.050% BAC faster than the normal dissolved oxygen group (8 ppm; p < 0.05). Among the pharmacokinetic parameters, AUCinf and Kel of the high oxygen groups were lower than in the normal oxygen group, while Cmax and Tmax were not significantly affected. In a Monte Carlo simulation, the lognormal distribution of mean values of AUCinf and t1/2 was expected to be reduced in the high oxygen group compared to the normal oxygen group. Conclusions: Elevated dissolved oxygen concentrations in alcoholic drinks accelerate the metabolism and elimination of alcohol. Thus, enhanced dissolved oxygen concentrations in alcohol may have a role to play in reducing alcohol-related side effects and accidents. [source]
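
The Monte Carlo step can be sketched with lognormal between-subject variability on the elimination rate. A first-order, one-compartment simplification is used here purely for illustration (real ethanol kinetics are largely zero-order at higher BACs), and all values are hypothetical rather than study estimates:

```python
import numpy as np

rng = np.random.default_rng(5)
c0 = 0.08                                  # hypothetical initial BAC (%)

def sim_auc(kel_mean, n=10_000):
    kel = rng.lognormal(np.log(kel_mean), 0.25, n)   # lognormal Kel (1/h)
    return c0 / kel                        # AUCinf = C0/Kel (1-compartment)

auc_normal = sim_auc(0.50)                 # normal dissolved oxygen
auc_high = sim_auc(0.58)                   # faster elimination, high oxygen
print(f"mean AUCinf, normal O2: {auc_normal.mean():.3f} %*h")
print(f"mean AUCinf, high O2  : {auc_high.mean():.3f} %*h")
```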


Spatial variation of soil test phosphorus in a long-term grazed experimental grassland field

JOURNAL OF PLANT NUTRITION AND SOIL SCIENCE, Issue 3 2010
Weijun Fu, Hubert Tunney
Abstract The spatial variation of soil test P (STP) in grassland soils is becoming important because of the use of STP as a basis for policies such as the recently introduced EU Nitrates Directive. This research investigates the spatial variation of soil P in grazed grassland plots from a long-term (38 y) experiment. A total of 326 soil samples (including 14 samples from an adjacent grass-wood buffer zone) were collected on a 10 m × 10 m grid. The samples were measured for STP and other nutrients. The results were analyzed using conventional statistics, geostatistics, and a geographic information system (GIS). Soil test P concentrations followed a lognormal distribution, with a median of 5.30 mg L⁻¹ and a geometric mean of 5.35 mg L⁻¹. A statistically significant (p < 0.01) positive correlation between STP and pH was found. Spatial clusters and spatial outliers were detected using the local Moran's I index (a local indicator of spatial association) and were mapped using GIS. An obvious low-value spatial cluster was observed on the plots that received zero-P fertilizer application from 1968 to 1998, and a large high-value spatial cluster was found on the plots with relatively high P-fertilizer application (15 kg ha⁻¹ y⁻¹). The local Moran's I index was also effective in detecting spatial outliers, especially at locations close to spatial-cluster areas. To obtain a reliable and stable spatial structure, the semivariogram of the soil-P data was produced after elimination of spatial outliers. A spherical model with a nugget effect was chosen to fit the experimental semivariogram. The spatial-distribution map of soil P was produced using kriging interpolation. The interpolated map was dominated by medium STP values, ranging from 3 to 8 mg L⁻¹. An evidently low-P area was present in the upper part of the study area, where zero or short-term P fertilizer was applied to the plots. Meanwhile, a high-P area was located mainly on the plots receiving 15 kg P ha⁻¹ y⁻¹ (for 38 y), as these plots accumulated excess P after long-term P-fertilizer spreading. The high- and low-value patterns were in line with the spatial clusters. Geostatistics, combined with GIS and the local spatial autocorrelation index, provides a useful tool for analyzing the spatial variation in soil nutrients. [source]
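
The local Moran's I used here can be computed without specialist software; a minimal sketch on a hypothetical regular grid with rook neighbours (a real analysis would add permutation-based significance tests):

```python
import numpy as np

rng = np.random.default_rng(9)
grid = rng.lognormal(np.log(5.3), 0.5, size=(18, 18))  # hypothetical STP, mg/L

z = (grid - grid.mean()) / grid.std()
m2 = (z**2).mean()

# Sum over rook neighbours; zero padding means edge cells simply
# receive no contribution from outside the grid.
pad = np.pad(z, 1, mode="constant")
neigh_sum = (pad[:-2, 1:-1] + pad[2:, 1:-1] +
             pad[1:-1, :-2] + pad[1:-1, 2:])

local_I = z * neigh_sum / m2
# Positive I: value resembles its neighbours (spatial cluster);
# negative I: value contrasts with its neighbours (spatial outlier).
print("strongest cluster statistic:", float(local_I.max().round(2)))
print("strongest outlier statistic:", float(local_I.min().round(2)))
```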


A SIMPLE METHOD FOR ESTIMATING BASEFLOW AT UNGAGED LOCATIONS

JOURNAL OF THE AMERICAN WATER RESOURCES ASSOCIATION, Issue 1 2001
Kenneth W. Potter
ABSTRACT: Baseflow, or water that enters a stream from slowly varying sources such as ground water, can be critical to humans and ecosystems. We evaluate a simple method for estimating baseflow parameters at ungaged sites. The method uses one or more baseflow discharge measurements at the ungaged site and long-term streamflow data from a nearby gaged site. A given baseflow parameter, such as the median, is estimated as the product of the corresponding gage-site parameter and the geometric mean of the ratios of the measured baseflow discharges to the concurrent discharges at the gage site. If baseflows at the gaged and ungaged sites have a bivariate lognormal distribution with high correlation and nearly equal log variances, the estimated baseflow parameters are very accurate. We tested the proposed method using long-term streamflow data from two watershed pairs in the Driftless Area of southwestern Wisconsin. For one watershed pair, the theoretical assumptions are well met; for the other, the log variances are substantially different. In the first case, the method performs well for estimating both annual and long-term baseflow parameters. In the second, the method performs remarkably well for estimating annual mean and annual median baseflow discharge, but less well for estimating the annual lower decile and the long-term mean, median, and lower decile. In general, the use of four measurements in a year is not substantially better than the use of two. [source]
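
The estimator itself is a one-liner; a sketch with made-up numbers (two baseflow measurements at the ungaged site and the concurrent gaged discharges):

```python
import numpy as np

gage_median = 12.0                        # long-term median at gaged site (cfs)
ungaged_meas = np.array([3.1, 2.6])       # baseflow measured at ungaged site
gage_concurrent = np.array([10.4, 8.7])   # gaged discharge on the same days

geo_mean = np.exp(np.mean(np.log(ungaged_meas / gage_concurrent)))
print(f"estimated ungaged median baseflow: {gage_median * geo_mean:.2f} cfs")
```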


An empirical model for the polarization of pulsar radio emission

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 2 2006
Don Melrose
ABSTRACT We present an empirical model for single pulses of radio emission from pulsars based on Gaussian probability distributions for relevant variables. The radiation at a specific pulse phase is represented as the superposition of radiation in two (approximately) orthogonally polarized modes (OPMs) from one or more subsources in the emission region of the pulsar. For each subsource, the polarization states are drawn randomly from statistical distributions, with the mean and the variance on the Poincaré sphere as free parameters. The intensity of one OPM is chosen from a lognormal distribution, and the intensity of the other OPM is assumed to be partially correlated, with the degree of correlation also chosen from a Gaussian distribution. The model is used to construct simulated data described in the same format as real data: distributions of the polarization of pulses on the Poincaré sphere and histograms of the intensity and other parameters. We concentrate on the interpretation of data for specific phases of PSR B0329+54 for which the OPMs are not orthogonal, with one well defined and the other spread out around an annulus on the Poincaré sphere at some phases. The results support the assumption that the radiation emerges in two OPMs with closely correlated intensities, and that in a statistical fraction of pulses one OPM is invisible. [source]
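
The statistical core of the model, a lognormal intensity for one OPM and a partially correlated intensity for the other, with the degree of correlation itself Gaussian, can be sketched as follows (all distribution parameters are illustrative placeholders):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
log_i1 = rng.normal(0.0, 0.6, n)                  # log-intensity of OPM 1
rho = np.clip(rng.normal(0.7, 0.15, n), -1, 1)    # per-pulse correlation
log_i2 = rho * log_i1 + np.sqrt(1 - rho**2) * rng.normal(0.0, 0.6, n)

i1, i2 = np.exp(log_i1), np.exp(log_i2)
frac = (i1 - i2) / (i1 + i2)     # mode dominance; sign flips mimic OPM jumps
print(f"median |mode dominance|: {np.median(np.abs(frac)):.2f}")
```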


Abundance and co-occurrence patterns of core and satellite species of ground beetles on small lake islands

OIKOS, Issue 2 2006
Werner Ulrich
Fifteen lake islands and two mainland sites of Lake Mamry in Poland were sampled to investigate community structure and patterns of co-occurrence of ground beetles (Carabidae). The total ground beetle metacommunity of 71 species was divided into a group of core species occupying at least half of all study sites and a group of satellite species, which occurred at two sites or fewer. This division is mirrored by reduced dispersal abilities and non-random patterns of site occupancy. Core and satellite species also differed in patterns of relative abundance: the core group followed a lognormal distribution, the satellite group a power function, as predicted by the self-similarity model of occurrence. We conclude that the division into core and satellite species is not a sampling artefact but reflects different life-history strategies. We also conclude that current models of niche division and co-occurrence might miss important aspects of community structure if they do not refer to patterns of dispersal. From these findings we infer that the regional distribution of core species might be shaped by species interactions and processes of niche division, whereas the spatial distribution of satellite species is best interpreted as stemming from random dispersal. [source]


Comparison of sample size formulae for 2 × 2 cross-over designs applied to bioequivalence studies

PHARMACEUTICAL STATISTICS: THE JOURNAL OF APPLIED STATISTICS IN THE PHARMACEUTICAL INDUSTRY, Issue 4 2005
Arminda Lucia Siqueira
Abstract We consider the comparison of two formulations in terms of average bioequivalence using the 2 × 2 cross-over design. In a bioequivalence study, the primary outcome is a pharmacokinetic measure, such as the area under the plasma concentration by time curve, which is usually assumed to have a lognormal distribution. The criterion typically used for claiming bioequivalence is that the 90% confidence interval for the ratio of the means should lie within the interval (0.80, 1.25), or equivalently that the 90% confidence interval for the difference in the means on the natural log scale should be within the interval (−0.2231, 0.2231). We compare the gold standard method for calculating the sample size, based on the non-central t distribution, with those based on the central t and normal distributions. In practice, the differences between the various approaches are likely to be small. Further approximations to the power function are sometimes used to simplify the calculations. These approximations should be used with caution, because the sample size required for a desirable level of power might be under- or overestimated compared to the gold standard method. However, in some situations the approximate methods produce very similar sample sizes to the gold standard method. Copyright © 2005 John Wiley & Sons, Ltd. [source]
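
A sketch of the gold-standard calculation: power of the two one-sided tests from the noncentral t distribution, with the total sample size stepped up until the target power is met. The CV and the assumed true ratio are hypothetical inputs:

```python
import numpy as np
from scipy import stats

alpha, target_power = 0.05, 0.80
theta_l, theta_u = np.log(0.80), np.log(1.25)   # bioequivalence limits
cv = 0.25                                       # within-subject CV
sigma_w = np.sqrt(np.log(1 + cv**2))            # log-scale within-subject SD
delta = np.log(1.05)                            # assumed true log-ratio

def power_tost(n_total):
    df = n_total - 2
    se = sigma_w * np.sqrt(2.0 / n_total)
    tcrit = stats.t.ppf(1 - alpha, df)
    ncp1, ncp2 = (delta - theta_l) / se, (delta - theta_u) / se
    return max(0.0, stats.nct.cdf(-tcrit, df, ncp2)
               - stats.nct.cdf(tcrit, df, ncp1))

n = 4
while power_tost(n) < target_power:
    n += 2                                      # keep sequences balanced
print(f"total subjects: {n} (power {power_tost(n):.3f})")
```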


Reciprocal-space mapping of epitaxic thin films with crystallite size and shape polydispersity

ACTA CRYSTALLOGRAPHICA SECTION A, Issue 1 2006
A. Boulle
A development is presented that allows the simulation of reciprocal-space maps (RSMs) of epitaxic thin films exhibiting fluctuations in the size and shape of the crystalline domains over which diffraction is coherent (crystallites). Three different crystallite shapes are studied, namely parallelepipeds, trigonal prisms and hexagonal prisms. For each shape, two cases are considered. Firstly, the overall size is allowed to vary but with a fixed thickness/width ratio. Secondly, the thickness and width are allowed to vary independently. The calculations are performed assuming three different size probability density functions: the normal distribution, the lognormal distribution and a general histogram distribution. In all cases considered, the computation of the RSM only requires a two-dimensional Fourier integral whose integrand has a simple analytical expression, i.e. there is no significant increase in computing time from taking size and shape fluctuations into account. The approach presented is compatible with most lattice disorder models (dislocations, inclusions, mosaicity, etc.) and allows a straightforward account of the instrumental resolution. The applicability of the model is illustrated with the case of an yttria-stabilized zirconia film grown on sapphire. [source]
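
The central ingredient, intensity as a size-distribution average of a squared form factor, can be illustrated in one dimension with a lognormal thickness distribution (a deliberately reduced stand-in for the paper's full two-dimensional RSM treatment):

```python
import numpy as np
from scipy.stats import lognorm

q = np.linspace(0.01, 1.0, 500)          # scattering vector (1/nm)
dist = lognorm(s=0.3, scale=20.0)        # lognormal thickness, median 20 nm
D = np.linspace(5.0, 60.0, 400)          # thickness grid (nm)
w = dist.pdf(D)
w /= w.sum()                             # normalized size weights

# |F(q, D)|^2 for a uniform slab of thickness D: (sin(qD/2) / (q/2))^2.
F2 = (np.sin(np.outer(q, D) / 2) / (q[:, None] / 2)) ** 2
intensity = F2 @ w                       # size-averaged intensity profile
print("size-averaged intensity computed at", q.size, "q-points")
```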


A Generalization of the Brennan–Rubinstein Approach for the Pricing of Derivatives

THE JOURNAL OF FINANCE, Issue 2 2003
António Câmara
This paper derives preference-free option pricing equations in a discrete-time economy where asset returns have continuous distributions. There is a representative agent who has risk preferences with an exponential representation. Aggregate wealth and the underlying asset price have transformed normal distributions which may or may not belong to the same family of distributions. These pricing results are particularly valuable (a) to show new sufficient conditions for existing risk-neutral option pricing equations (e.g., the Black–Scholes model), and (b) to obtain new analytical solutions for the price of European-style contingent claims when the underlying asset has a transformed normal distribution (e.g., a negatively skewed lognormal distribution). [source]
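
The lognormal special case mentioned above is the familiar one; as a reference point, a minimal Black–Scholes call price with illustrative inputs:

```python
import numpy as np
from scipy.stats import norm

def bs_call(s, k, t, r, sigma):
    """European call under a lognormal terminal price (Black-Scholes)."""
    d1 = (np.log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * np.sqrt(t))
    d2 = d1 - sigma * np.sqrt(t)
    return s * norm.cdf(d1) - k * np.exp(-r * t) * norm.cdf(d2)

print(f"call price: {bs_call(s=100, k=105, t=0.5, r=0.03, sigma=0.2):.2f}")
```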


A hybrid multivariate Normal and lognormal distribution for data assimilation

ATMOSPHERIC SCIENCE LETTERS, Issue 2 2006
Steven J. Fletcher
Abstract In this article, we define, and prove properties of, a distribution that combines a multivariate Normal and a lognormal distribution. From this distribution, we apply a Bayesian probability framework to derive a non-linear cost function similar to the one used in current variational data assimilation (DA) applications. Copyright © 2006 Royal Meteorological Society [source]
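
The flavour of the resulting non-linearity can be sketched: the lognormal block contributes a quadratic in log space plus a Jacobian term sum(log x), on top of the usual Gaussian quadratic. This is an illustrative stand-in, not the paper's exact derivation:

```python
import numpy as np

def hybrid_cost(x_norm, x_logn, mu_n, p_n_inv, mu_l, p_l_inv):
    dn = x_norm - mu_n                   # Gaussian misfit
    dl = np.log(x_logn) - mu_l           # lognormal misfit, in log space
    return (0.5 * dn @ p_n_inv @ dn
            + 0.5 * dl @ p_l_inv @ dl
            + np.sum(np.log(x_logn)))    # Jacobian of the log transform

i2 = np.eye(2)
j = hybrid_cost(np.array([0.1, -0.2]), np.array([1.2, 0.9]),
                np.zeros(2), i2, np.zeros(2), i2)
print(f"cost: {j:.4f}")
```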


Population fluctuations, power laws and mixtures of lognormal distributions

ECOLOGY LETTERS, Issue 1 2001
A.P. Allen
A number of investigators have invoked a cascading local interaction model to account for power-law-distributed fluctuations in ecological variables. Invoking such a model requires that species be tightly coupled, and that local interactions among species influence ecosystem dynamics over a broad range of scales. Here we reanalyse bird population data used by Keitt & Stanley (1998, Dynamics of North American breeding bird populations. Nature, 393, 257–260) to support a cascading local interaction model. We find that the power law they report can be attributed to mixing of lognormal distributions. More tentatively, we propose that mixing of distributions accounts for other empirical power laws reported in the ecological literature. [source]
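
The mixing effect is easy to reproduce: pool lognormals with widely varying log-variances and the body of the mixture can track a power law over several decades. Parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(8)
sigmas = rng.uniform(0.5, 3.0, size=200)       # one sigma per "population"
fluct = np.concatenate([rng.lognormal(0.0, s, 500) for s in sigmas])

# Fit a straight line to the log-log histogram of the pooled fluctuations.
bins = np.logspace(-2, 2, 40)
hist, edges = np.histogram(fluct, bins=bins, density=True)
mid = np.sqrt(edges[:-1] * edges[1:])          # geometric bin centres
keep = hist > 0
slope = np.polyfit(np.log(mid[keep]), np.log(hist[keep]), 1)[0]
print(f"apparent power-law exponent of the mixture: {slope:.2f}")
```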


A cost analysis of ranked set sampling to estimate a population mean

ENVIRONMETRICS, Issue 3 2005
Rebecca A. Buchanan
Abstract Ranked set sampling (RSS) can be a useful environmental sampling method when measurement costs are high but ranking costs are low. RSS estimates of the population mean can have higher precision than estimates from a simple random sample (SRS) of the same size, leading to potentially lower sampling costs from RSS than from SRS for a given precision. However, RSS introduces ranking costs not present in SRS; these costs must be considered in determining whether RSS is cost effective. We use a simple cost model to determine the minimum ratio of measurement to ranking costs (cost ratio) necessary in order for RSS to be as cost effective as SRS for data from the normal, exponential, and lognormal distributions. We consider both equal and unequal RSS allocations and two types of estimators of the mean: the typical distribution-free (DF) estimator and the best linear unbiased estimator (BLUE). The minimum cost ratio necessary for RSS to be as cost effective as SRS depends on the underlying distribution of the data, as well as the allocation and type of estimator used. Most minimum necessary cost ratios are in the range of 1–6, and are lower for BLUEs than for DF estimators. The higher the prior knowledge of the distribution underlying the data, the lower the minimum necessary cost ratio and the more attractive RSS is over SRS. Copyright © 2005 John Wiley & Sons, Ltd. [source]
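
The precision gain that RSS buys (which the ranking cost must pay for) is easy to see by simulation; a sketch of balanced RSS with perfect ranking on lognormal data:

```python
import numpy as np

rng = np.random.default_rng(4)
m, r, reps = 3, 10, 4000                 # set size, cycles, replications
draw = lambda size: rng.lognormal(0.0, 1.0, size)

def rss_mean():
    # Balanced RSS: in each cycle, rank m sets of m units and measure
    # the i-th order statistic of the i-th set (i = 1..m).
    vals = [np.sort(draw(m))[i] for _ in range(r) for i in range(m)]
    return np.mean(vals)

rss = [rss_mean() for _ in range(reps)]
srs = [draw(m * r).mean() for _ in range(reps)]
print(f"var(SRS) / var(RSS) = {np.var(srs) / np.var(rss):.2f}")
```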


Optimum step-stress accelerated life test plans for log-location-scale distributions

NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 6 2008
Haiming Ma
Abstract This article presents new tools and methods for finding optimum step-stress accelerated life test plans. First, we present an approach to calculate the large-sample approximate variance of the maximum likelihood estimator of a quantile of the failure time distribution at use conditions from a step-stress accelerated life test. The approach allows for multistep stress changes and censoring for general log-location-scale distributions based on a cumulative exposure model. As an application of this approach, the optimum variance is studied as a function of shape parameter for both Weibull and lognormal distributions. Graphical comparisons among test plans using step-up, step-down, and constant-stress patterns are also presented. The results show that depending on the values of the model parameters and quantile of interest, each of the three test plans can be preferable in terms of optimum variance. © 2008 Wiley Periodicals, Inc. Naval Research Logistics, 2008 [source]
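
The cumulative exposure model that underlies these plans can be sketched by simulation for a two-step plan with Weibull life (shape shared across stresses; all numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(6)
beta = 2.0                        # Weibull shape, assumed stress-independent
eta1, eta2 = 2000.0, 400.0        # hypothetical scales at stress 1 and 2 (h)
tau = 300.0                       # time of the step-up in stress (h)

u = rng.uniform(size=100_000)     # one failure quantile per unit
t1 = eta1 * (-np.log(1 - u)) ** (1 / beta)   # failure time if stress 1 held
t2 = eta2 * (-np.log(1 - u)) ** (1 / beta)   # failure time if stress 2 held
fail_step1 = t1 <= tau
# Cumulative exposure: survivors of step 1 resume on the stress-2 curve
# at the equivalent time s with F2(s) = F1(tau), i.e. s = tau*eta2/eta1.
s = tau * eta2 / eta1
t = np.where(fail_step1, t1, tau + (t2 - s))
print(f"fraction failed before the step: {fail_step1.mean():.3f}")
print(f"median failure time under step-stress: {np.median(t):.0f} h")
```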