Cumulative Distribution Function

Selected Abstracts


A NEW CONFIDENCE BAND FOR CONTINUOUS CUMULATIVE DISTRIBUTION FUNCTIONS

AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 3 2009
Xingzhong Xu
Summary We consider confidence bands for continuous distribution functions. Following a review of the literature we find that previously considered confidence bands, which have exact coverage, are all step-functions jumping only at the sample points. We find that the step-function bands can be constructed through rectangular tolerance regions for an ordered sample from the uniform distribution R(0, 1). We then construct a set of new bands. Two criteria for assessing confidence bands are presented. One is the power criterion, and the other is the average-width criterion that we propose. Numerical comparisons between our new bands and the old bands are carried out, and show that our new bands perform much better than the old ones. [source]
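
For readers wanting a concrete picture of the step-function bands reviewed here, the sketch below builds a Dvoretzky–Kiefer–Wolfowitz band around an empirical CDF. This is a standard band of that general type, not the new bands proposed in the paper, and the data are simulated.

```python
import numpy as np

def dkw_band(sample, alpha=0.05):
    """Step-function confidence band for a continuous CDF via the
    Dvoretzky-Kiefer-Wolfowitz inequality (illustrative only)."""
    x = np.sort(sample)
    n = x.size
    ecdf = np.arange(1, n + 1) / n                       # empirical CDF at the sample points
    eps = np.sqrt(np.log(2.0 / alpha) / (2.0 * n))       # DKW half-width
    lower = np.clip(ecdf - eps, 0.0, 1.0)
    upper = np.clip(ecdf + eps, 0.0, 1.0)
    return x, lower, upper

rng = np.random.default_rng(0)
x, lo, hi = dkw_band(rng.uniform(size=100))
print(lo[:3], hi[:3])
```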


Fuzzy Monte Carlo Simulation and Risk Assessment in Construction

COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 4 2010
N. Sadeghi
However, subjective and linguistically expressed information results in added non-probabilistic uncertainty in construction management. Fuzzy logic has been used successfully for representing such uncertainties in construction projects. In practice, an approach that can handle both random and fuzzy uncertainties in a risk assessment model is necessary. This article discusses the deficiencies of the available methods and proposes a Fuzzy Monte Carlo Simulation (FMCS) framework for risk analysis of construction projects. In this framework, we construct a fuzzy cumulative distribution function as a novel way to represent uncertainty. To verify the feasibility of the FMCS framework and demonstrate its main features, the authors have developed a special purpose simulation template for cost range estimating. This template is employed to estimate the cost of a highway overpass project. [source]
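
A rough sketch of the general idea of a fuzzy cumulative distribution function, not the authors' FMCS template: a triangular fuzzy input is propagated through Monte Carlo simulation at the two ends of an alpha-cut, giving lower and upper CDF envelopes. All names and numbers below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def cost_cdf_samples(labour_rate, n=10_000):
    """Monte Carlo samples of total cost for one crisp labour rate."""
    quantity = rng.normal(100.0, 10.0, n)      # random (probabilistic) input
    return np.sort(quantity * labour_rate)     # sorted samples define the CDF

# Triangular fuzzy labour rate (low, mode, high) -- the fuzzy input
low, mode, high = 40.0, 50.0, 65.0
alpha = 0.5                                    # alpha-cut level
cut_lo = low + alpha * (mode - low)
cut_hi = high - alpha * (high - mode)

# Simulating at both ends of the alpha-cut yields lower and upper CDF
# envelopes, i.e. a crude fuzzy cumulative distribution function for cost.
lower_env = cost_cdf_samples(cut_lo)
upper_env = cost_cdf_samples(cut_hi)
print(np.median(lower_env), np.median(upper_env))
```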


Approaches for linking whole-body fish tissue residues of mercury or DDT to biological effects thresholds

ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 8 2005
Nancy Beckvar
Abstract A variety of methods have been used by numerous investigators attempting to link tissue concentrations with observed adverse biological effects. This paper is the first to evaluate in a systematic way different approaches for deriving protective (i.e., unlikely to have adverse effects) tissue residue-effect concentrations in fish using the same datasets. Guidelines for screening papers and a set of decision rules were formulated to provide guidance on selecting studies and obtaining data in a consistent manner. Paired no-effect (NER) and low-effect (LER) whole-body residue concentrations in fish were identified for mercury and DDT from the published literature. Four analytical approaches of increasing complexity were evaluated for deriving protective tissue residues. The four methods were: Simple ranking, empirical percentile, tissue threshold-effect level (t-TEL), and cumulative distribution function (CDF). The CDF approach did not yield reasonable tissue residue thresholds based on comparisons to synoptic control concentrations. Of the four methods evaluated, the t-TEL approach best represented the underlying data. A whole-body mercury t-TEL of 0.2 mg/kg wet weight, based largely on sublethal endpoints (growth, reproduction, development, behavior), was calculated to be protective of juvenile and adult fish. For DDT, protective whole-body concentrations of 0.6 mg/kg wet weight in juvenile and adult fish, and 0.7 mg/kg wet weight for early life-stage fish were calculated. However, these DDT concentrations are considered provisional for reasons discussed in this paper (e.g., paucity of sublethal studies). [source]


Masticatory performance in patients with anterior disk displacement without reduction in comparison with symptom-free volunteers

EUROPEAN JOURNAL OF ORAL SCIENCES, Issue 5 2002
Ingrid Peroz
Masticatory function can be impaired by craniomandibular disorders. The aim of this study was to assess masticatory performance in patients with an anterior disc displacement (ADD) without reduction. In the experiments, 29 patients and 33 age- and gender-matched volunteers chewed artificial test food for 60 chewing strokes. The collected remains of the test food were filtered, dried, fractionated by a sieving procedure, and weighed. The particle size distribution was then described using a cumulative distribution function. Patients and controls were clinically examined, and patients were asked to complete a pain questionnaire. Compared with controls, patients showed significantly reduced masticatory performance. Patients who had had the disorder for longer than 3 yr tended to display less reduction of their masticatory performance. Neither the treatment methods used, nor restriction of daily life activities, nor pain intensity was significantly correlated with masticatory performance. Jaw mobility was significantly reduced in patients. More than half of the patients, and none of the controls, had joint noises and trigger points in the masticatory muscles. Pain was present, in particular, during chewing and maximal opening of the mouth. It was concluded that patients with ADD without reduction have a significantly reduced masticatory performance. [source]
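
Chewed-particle sieve data of this kind are often summarized with a Rosin–Rammler (Weibull-type) cumulative distribution; a hypothetical fit, not the study's data, might look like this:

```python
import numpy as np
from scipy.optimize import curve_fit

# Rosin-Rammler style cumulative weight distribution commonly used for
# chewed-particle sieve data: Qw(x) = 1 - 2**(-(x / x50)**b)
def rosin_rammler(x, x50, b):
    return 1.0 - 2.0 ** (-((x / x50) ** b))

sieve_mm = np.array([0.5, 1.0, 2.0, 2.8, 4.0, 5.6])         # aperture sizes (invented)
cum_weight = np.array([0.05, 0.18, 0.45, 0.62, 0.85, 0.96])  # fraction passing (invented)

(x50, b), _ = curve_fit(rosin_rammler, sieve_mm, cum_weight, p0=(2.0, 1.5))
print(f"median particle size X50 = {x50:.2f} mm, breadth b = {b:.2f}")
```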


Stochastic approach for output SINR computation at SC diversity systems with correlated Nakagami-m fading

EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 5 2009
Daniela M. Milović
In this paper, we derive the cumulative distribution function of the signal-to-interference-plus-noise ratio (SINR) achieved by the selection combining (SC) diversity receiver operating over a correlated Nakagami-m channel in the presence of co-channel interference. Numerical and simulation results are presented to show the effects of fading severity and of signal and interference imbalance on the system's performance. Copyright © 2009 John Wiley & Sons, Ltd. [source]
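
A Monte Carlo sketch of the quantity being characterized, under the simplifying assumption of independent branches (the paper's analytical result covers the correlated case):

```python
import numpy as np

rng = np.random.default_rng(2)

def sc_sinr_samples(m=2.0, omega=1.0, branches=2, snr_db=10.0, inr_db=5.0,
                    trials=100_000):
    """Monte Carlo samples of the output SINR of an L-branch selection
    combiner. Branches are independent here; the paper treats the harder
    correlated-Nakagami case analytically."""
    # Nakagami-m fading: instantaneous power is Gamma(m, omega/m)
    sig = rng.gamma(m, omega / m, (trials, branches)) * 10 ** (snr_db / 10)
    intf = rng.gamma(m, omega / m, (trials, branches)) * 10 ** (inr_db / 10)
    sinr = sig / (intf + 1.0)          # noise power normalised to 1
    return sinr.max(axis=1)            # SC selects the strongest branch

out = sc_sinr_samples()
# Empirical CDF evaluated at an SINR threshold of 0 dB (outage probability)
print("P(output SINR < 0 dB) ~", np.mean(out < 1.0))
```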


Cost-effectiveness acceptability curves – facts, fallacies and frequently asked questions

HEALTH ECONOMICS, Issue 5 2004
Elisabeth Fenwick
Abstract Cost-effectiveness acceptability curves (CEACs) have been widely adopted as a method to quantify and graphically represent uncertainty in economic evaluation studies of health-care technologies. However, there remain some common fallacies regarding the nature and shape of CEACs that largely result from the 'textbook' illustration of the CEAC. This 'textbook' CEAC shows a smooth curve starting at probability 0, with an asymptote to 1 for higher money values of the health outcome (λ). But this familiar 'ogive' shape, which makes the 'textbook' CEAC look like a cumulative distribution function, is just one special case of the CEAC. The reality is that the CEAC can take many shapes and turns because it is a graphic transformation from the cost-effectiveness plane, where the joint density of incremental costs and effects may 'straddle' quadrants with attendant discontinuities and asymptotes. In fact CEACs: (i) do not have to cut the y-axis at 0; (ii) do not have to asymptote to 1; (iii) are not always monotonically increasing in λ; and (iv) do not represent cumulative distribution functions (cdfs). Within this paper we present a 'gallery' of CEACs in order to identify the fallacies and illustrate the facts surrounding the CEAC. The aim of the paper is to serve as a reference tool to accompany the increased use of CEACs within major medical journals. Copyright © 2004 John Wiley & Sons, Ltd. [source]
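
A CEAC of the kind discussed can be traced directly from simulated incremental costs and effects. The minimal sketch below uses invented normal samples and evaluates the probability of positive net monetary benefit over a grid of λ values.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical joint samples of incremental effects (QALYs) and costs,
# e.g. from a probabilistic sensitivity analysis
d_effect = rng.normal(0.05, 0.04, 5000)
d_cost = rng.normal(1500.0, 800.0, 5000)

lambdas = np.linspace(0, 100_000, 201)          # willingness to pay per QALY
ceac = [(lam * d_effect - d_cost > 0).mean() for lam in lambdas]

# The curve is the probability the intervention is cost-effective at each
# lambda; as the abstract stresses, it need not start at 0, reach 1, or be
# monotone, and it is not a cumulative distribution function.
print(ceac[0], ceac[-1])
```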


Performance analysis of system with L-branch selection combining over correlated Weibull fading channels in the presence of cochannel interference

INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 2 2010
Mihajlo Stefanović
Abstract In this paper, the performance of an L-branch selection combining receiver over correlated Weibull fading channels in the presence of correlated Weibull-distributed cochannel interference is analyzed. Closed-form expressions for the probability density function and cumulative distribution function of the signal-to-interference ratio at the output of the selection combining receiver are the main contribution of this paper. Numerical results are also presented to show the effects of various parameters, such as the fading severity, correlation and number of branches, on the outage probability. Copyright © 2009 John Wiley & Sons, Ltd. [source]
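
An illustrative Monte Carlo counterpart to the closed-form result, with independent (uncorrelated) Weibull branches assumed for simplicity:

```python
import numpy as np

rng = np.random.default_rng(4)

def weibull_sc_outage(beta=2.0, branches=3, sir_threshold_db=0.0,
                      trials=200_000):
    """Outage probability of SC over independent Weibull fading branches.
    Correlation between branches (the paper's focus) is omitted here."""
    thr = 10 ** (sir_threshold_db / 10)
    desired = rng.weibull(beta, size=(trials, branches))
    interference = rng.weibull(beta, size=(trials, branches))
    sir = desired / interference
    # CDF of the output SIR evaluated at the threshold = outage probability
    return np.mean(sir.max(axis=1) < thr)

print(weibull_sc_outage())
```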


Fade correlation and diversity effects in satellite broadcasting to mobile users in S-band

INTERNATIONAL JOURNAL OF SATELLITE COMMUNICATIONS AND NETWORKING, Issue 5 2008
Albert Heuberger
Abstract In this paper, we present measurement results for fade correlation in time and space of signals from two satellites in geostationary orbit with 30° separation. Fade data for urban, residential and rural environments are analyzed. In addition to the fade cumulative distribution function, Rice factor and coherence length of the individual fade signals, the joint probability density function and the cross-correlation of the fades from the two satellites are also presented. The coherence length of single-satellite fades extends to about 18 m in the urban area and is around 2 m in the rural area. The correlation coefficient of dual-satellite fades is below 0.3 in the residential and rural areas; in the urban area larger correlations around 0.7 occur. Based on the measured fade data, the diversity gain for various network configurations is determined by simulation for a forward error correction scheme using concatenated codes in combination with random interleavers. Network configurations of interest are single-satellite space diversity, two-satellite space diversity, one-satellite time diversity and two-satellite space and time diversity. For short interleavers of 5 m, the diversity gain in the residential area is 2.3 dB for two-satellite space diversity, 0.3 dB for one-satellite time diversity and 4.1 dB for two-satellite space and time diversity. Copyright © 2008 John Wiley & Sons, Ltd. [source]


Protection of FS receivers from the interference produced by HEO FSS satellites in the 18 GHz band: Effect of the roll-off characteristics of the HEO system satellite antenna beams

INTERNATIONAL JOURNAL OF SATELLITE COMMUNICATIONS AND NETWORKING, Issue 3 2008
Anna Carolina Finamore
Abstract This paper focuses on the protection of fixed service (FS) receivers from the aggregate interference produced by the satellites of multiple highly elliptical orbit satellite systems (HEOs). It analyzes the protection given to FS receivers operating in the 18 GHz frequency band by the power flux-density (pfd) mask contained in Article 21 of the 2003 edition of the Radio Regulations [International Telecommunication Union, 2003]. This mask establishes the maximum allowable value for the pfd produced by any of the satellites of a non-geostationary system at the Earth's surface. The protection offered to FS receivers by this mask is analyzed in four interfering environments, each containing three identical HEO systems. Four types of HEO systems, with different orbital characteristics, are considered: three having satellites that operate only in the northern hemisphere and one having satellites that operate in both hemispheres. All satellite antennas are assumed to radiate 0.3° spot beams. Each HEO satellite is modelled so that the maximum pfd it produces at the Earth's surface just meets the RR Article 21 mask, and the analysis takes into account the roll-off characteristics of the satellite antenna beams. To reflect the multiplicity of possibilities concerning the geographical location and technical characteristics of the victim FS receiver (e.g. latitude, longitude, azimuth and elevation of its receiving antenna, antenna gain, receiver noise temperature, etc.) a number of cases were evaluated. The concept of interference in excess [Int. J. Satellite Commun. Networking 2006; 24: 73–95] was used to combine the results corresponding to FS receivers located at the same latitude and having the same receiving antenna elevation angle but for which the location longitude and the azimuth of the pointing direction of its receiving antenna are randomly chosen. Results are expressed in terms of the cumulative distribution function of the interference in excess. Copyright © 2008 John Wiley & Sons, Ltd. [source]


Minimizing errors in identifying Lévy flight behaviour of organisms

JOURNAL OF ANIMAL ECOLOGY, Issue 2 2007
DAVID W. SIMS
Summary 1. Lévy flights are specialized random walks with fundamental properties such as superdiffusivity and scale invariance that have recently been applied in optimal foraging theory. Lévy flights have movement lengths chosen from a probability distribution with a power-law tail, which theoretically increases the chances of a forager encountering new prey patches and may represent an optimal solution for foraging across complex, natural habitats. 2. An increasing number of studies are detecting Lévy behaviour in diverse organisms such as microbes, insects, birds, and mammals including humans. A principal method for detecting Lévy flight is whether the exponent (µ) of the power-law distribution of movement lengths falls within the range 1 < µ ≤ 3. The exponent can be determined from the histogram of frequency vs. movement (step) lengths, but different plotting methods have been used to derive the Lévy exponent across different studies. 3. Here we investigate using simulations how different plotting methods influence the µ-value and show that the power-law plotting method based on 2^k (logarithmic) binning with normalization prior to log transformation of both axes yields low error (1·4%) in identifying Lévy flights. Furthermore, increasing sample size reduced variation about the recovered values of µ, for example by 83% as sample number increased from n = 50 up to 5000. 4. Simple log transformation of the axes of the histogram of frequency vs. step length underestimated µ by c. 40%, whereas two other methods, 2^k (logarithmic) binning without normalization and calculation of a cumulative distribution function for the data, both estimate the regression slope as 1 − µ. Correction of the slope therefore yields an accurate Lévy exponent, with estimation errors of 1·4 and 4·5%, respectively. 5. Empirical reanalysis of data in published studies indicates that simple log transformation results in significant errors in estimating µ, which in turn affects the reliability of the biological interpretation. The potential for detecting Lévy flight motion when it is not present is minimized by the approach described. We also show that using a large number of steps in movement analysis such as this will also increase the accuracy with which optimal Lévy flight behaviour can be detected. [source]
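
The slope-correction point in item 4 is easy to reproduce: simulate power-law step lengths, regress the log survival function (1 − CDF) on log step length, and recover µ as one minus the slope. This is a sketch with simulated data, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulate Pareto-distributed move lengths with a known Levy exponent mu
mu_true = 2.0
x_min = 1.0
steps = x_min * (1.0 - rng.uniform(size=5000)) ** (-1.0 / (mu_true - 1.0))

# The survival function (1 - CDF) of a power law with exponent mu falls off
# as x**(1 - mu), so the log-log regression slope estimates 1 - mu and must
# be corrected, as the abstract notes.
x = np.sort(steps)
surv = 1.0 - np.arange(1, x.size + 1) / (x.size + 1.0)
slope, _ = np.polyfit(np.log(x), np.log(surv), 1)
print("estimated mu =", 1.0 - slope)
```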


MATHEMATICAL MODEL FOR THE SURVIVAL OF LISTERIA MONOCYTOGENES IN MEXICAN-STYLE SAUSAGE

JOURNAL OF FOOD SAFETY, Issue 4 2005
M.N. HAJMEER
ABSTRACT Survival of Listeria monocytogenes in chorizos (Mexican-style sausages) was modeled in relation to initial water activity (aw0) and storage conditions using the Weibull cumulative distribution function. Twenty survival curves were generated from chorizos formulated at aw0 = 0.85–0.97 and then stored under four temperature (T) and air inflow velocity (F) conditions. The Weibull model parameters (scale and shape) were determined for every curve. Predicted survival curves agreed with experimental curves with R2 = 0.945–0.992. Regression models (R2 = 0.981–0.984) were developed to relate the Weibull parameters to the operating conditions. The times to one- and two-log reductions in count (t1D and t2D) were derived from the Weibull model in terms of these parameters. A parametric study revealed that L. monocytogenes survival was most sensitive to aw0 between 0.90 and 0.95. The inactivation of L. monocytogenes could be maximized with higher T and lower aw0; however, F did not significantly influence survival. [source]
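
In a generic Weibull survival parameterisation (which may differ from the paper's own symbols), the times to one- and two-log reductions follow directly from the scale and shape; a hypothetical sketch:

```python
import numpy as np

def weibull_log_reduction(t, alpha, beta):
    """log10(N/N0) under a Weibull-type survival curve (generic form;
    the paper's own parameterisation may differ)."""
    return -((t / alpha) ** beta)

def time_to_k_log_reduction(k, alpha, beta):
    """Solve -(t/alpha)**beta = -k for t."""
    return alpha * k ** (1.0 / beta)

alpha, beta = 12.0, 1.4          # hypothetical scale (days) and shape
t = np.linspace(0, 40, 5)
print(weibull_log_reduction(t, alpha, beta))
print("t1D =", time_to_k_log_reduction(1, alpha, beta),
      "t2D =", time_to_k_log_reduction(2, alpha, beta))
```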


A new reconstruction of multivariate normal orthant probabilities

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 1 2008
Peter Craig
Summary. A new method is introduced for geometrically reconstructing orthant probabilities for non-singular multivariate normal distributions. Orthant probabilities are expressed in terms of those for auto-regressive sequences and an efficient method is developed for numerical approximation of the latter. The approach allows more efficient and accurate evaluation of the multivariate normal cumulative distribution function than previously, for many situations where the original distribution arises from a graphical model. An implementation is available as a package for the statistical software R and an application is given to multivariate probit models. [source]
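
For comparison, the brute-force route to an orthant probability is direct numerical evaluation of the multivariate normal CDF, as below (scipy's numerical integrator, not the reconstruction method of the paper):

```python
import numpy as np
from scipy.stats import multivariate_normal

# Orthant probability P(X1 > 0, X2 > 0, X3 > 0) for a trivariate normal
# with an illustrative correlation structure.
cov = np.array([[1.0, 0.5, 0.3],
                [0.5, 1.0, 0.4],
                [0.3, 0.4, 1.0]])
mvn = multivariate_normal(mean=np.zeros(3), cov=cov)

# By symmetry of the centred normal, P(all components > 0) = P(all < 0),
# which is the CDF evaluated at the origin.
print(mvn.cdf(np.zeros(3)))
```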


A Pareto model for classical systems

MATHEMATICAL METHODS IN THE APPLIED SCIENCES, Issue 1 2008
Saralees Nadarajah
Abstract A new Pareto distribution is introduced for pooling knowledge about classical systems. It takes the form of the product of two Pareto probability density functions (pdfs). Various structural properties of this distribution are derived, including its cumulative distribution function (cdf), moments, mean deviation about the mean, mean deviation about the median, entropy, asymptotic distribution of the extreme order statistics, maximum likelihood estimates and the Fisher information matrix. Copyright © 2007 John Wiley & Sons, Ltd. [source]


Constraints on the angular distribution of satellite galaxies about spiral hosts

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 3 2008
Jason H. Steffen
ABSTRACT We present, using a novel technique, a study of the angular distribution of satellite galaxies around a sample of isolated, blue host galaxies selected from the sixth data release of the Sloan Digital Sky Survey. As a complement to previous studies, we subdivide the sample of galaxies into bins of differing inclination and use the systematic differences that would exist between the different bins as the basis for our approach. We parametrize the cumulative distribution function of satellite galaxies and apply a maximum likelihood, Monte Carlo technique to determine allowable distributions, which we show as an exclusion plot. We find that the allowed distributions of the satellites of spiral hosts are very nearly isotropic. We outline our formalism and our analysis and discuss how this technique may be refined for future studies and future surveys. [source]


Profit Maximizing Warranty Period with Sales Expressed by a Demand Function

QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 3 2007
Shaul P. Ladany
Abstract The problem of determining the optimal warranty period, assumed to coincide with the manufacturer's lower specification limit for the lifetime of the product, is addressed. It is assumed that the quantity sold depends via a Cobb–Douglas-type demand function on the sale price and on the warranty period, and that both the cost incurred for a non-conforming item and the sale price increase with the warranty period. A general solution is derived using Response Modeling Methodology (RMM) and a new approximation for the standard normal cumulative distribution function. The general solution is compared with the exact optimal solutions derived under various distributional scenarios. Relative to the exact optimal solutions, RMM-based solutions are accurate to at least the first three significant digits. Some exact results are derived for the uniform and the exponential distributions. Copyright © 2006 John Wiley & Sons, Ltd. [source]
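
The paper's RMM-based approximation is not reproduced here, but the role a closed-form normal-CDF approximation plays can be illustrated with the familiar logistic approximation:

```python
import numpy as np
from scipy.stats import norm

def phi_logistic(x):
    """Simple closed-form stand-in for the standard normal CDF
    (logistic approximation; NOT the RMM-based approximation of the paper)."""
    return 1.0 / (1.0 + np.exp(-1.702 * x))

x = np.linspace(-3, 3, 7)
# Maximum absolute error of this approximation is roughly 0.01
print(np.max(np.abs(phi_logistic(x) - norm.cdf(x))))
```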


Distributional properties of estimated capability indices based on subsamples

QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 2 2003
Kerstin Vännman
Abstract Under the assumption of normality, the distribution of estimators of a class of capability indices, which contains several of the most commonly used indices, is derived when the process parameters are estimated from subsamples. The process mean is estimated using the grand average and the process variance is estimated using the pooled variance from subsamples collected over time for an in-control process. The derived theory is then applied to study the use of hypothesis testing to assess process capability. Numerical investigations are made to explore the effect of the size and number of subsamples on the efficiency of the hypothesis test for some indices in the studied class. The results indicate that, even when the total number of sampled observations remains constant, the power of the test decreases as the subsample size decreases. It is shown how the power of the test is dependent not only on the subsample size and the number of subsamples, but also on the relative location of the process mean from the target value. As part of this investigation, a simple form of the cumulative distribution function for the non-central t-distribution is also provided. Copyright © 2003 John Wiley & Sons, Ltd. [source]


ROBUST ESTIMATION OF SMALL-AREA MEANS AND QUANTILES

AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 2 2010
Nikos Tzavidis
Summary Small-area estimation techniques have typically relied on plug-in estimation based on models containing random area effects. More recently, regression M-quantiles have been suggested for this purpose, thus avoiding conventional Gaussian assumptions, as well as problems associated with the specification of random effects. However, the plug-in M-quantile estimator for the small-area mean can be shown to be the expected value of this mean with respect to a generally biased estimator of the small-area cumulative distribution function of the characteristic of interest. To correct this problem, we propose a general framework for robust small-area estimation, based on representing a small-area estimator as a functional of a predictor of this small-area cumulative distribution function. Key advantages of this framework are that it naturally leads to integrated estimation of small-area means and quantiles and is not restricted to M-quantile models. We also discuss mean squared error estimation for the resulting estimators, and demonstrate the advantages of our approach through model-based and design-based simulations, with the latter using economic data collected in an Australian farm survey. [source]


Fitting and comparing seed germination models with a focus on the inverse normal distribution

AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 3 2004
Michael E. O'Neill
Summary This paper reviews current methods for fitting a range of models to censored seed germination data and recommends adoption of a probability-based model for the time to germination. It shows that, provided the probability of a seed eventually germinating is not on the boundary, maximum likelihood estimates, their standard errors and the resultant deviances are identical whether only those seeds which have germinated are used or all seeds (including seeds ungerminated at the end of the experiment). The paper recommends analysis of deviance when exploring whether replicate data are consistent with a hypothesis that the underlying distributions are identical, and when assessing whether data from different treatments have underlying distributions with common parameters. The inverse normal distribution, otherwise known as the inverse Gaussian distribution, is discussed, as a natural distribution for the time to germination (including a parameter to measure the lag time to germination). The paper explores some of the properties of this distribution, evaluates the standard errors of the maximum likelihood estimates of the parameters and suggests an accurate approximation to the cumulative distribution function and the median time to germination. Additional material is on the web, at http://www.agric.usyd.edu.au/staff/oneill/. [source]
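
A small illustration of the recommended distribution, using scipy's parameterisation of the inverse Gaussian with a hypothetical mean, shape and lag:

```python
from scipy.stats import invgauss

# Inverse Gaussian (inverse normal) time to germination with mean m, shape
# lam and a lag (shift) before any germination can occur; scipy's invgauss
# is parameterised via mu = m/lam and scale = lam.  Values are hypothetical.
m, lam, lag = 6.0, 20.0, 1.0                 # days
dist = invgauss(mu=m / lam, loc=lag, scale=lam)

print("P(germinated by day 7):", dist.cdf(7.0))
print("median time to germination:", dist.median(), "days")
```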


Small-scale variability in surface moisture on a fine-grained beach: implications for modeling aeolian transport

EARTH SURFACE PROCESSES AND LANDFORMS, Issue 10 2009
Brandon L. Edwards
Abstract Small-scale variations in surface moisture content were measured on a fine-grained beach using a Delta-T Theta probe. The resulting data set was used to examine the implications of small-scale variability for estimating aeolian transport potential. Surface moisture measurements were collected on a 40 cm × 40 cm grid at 10 cm intervals, providing a total of 25 measurements for each grid data set. A total of 44 grid data sets were obtained from a representative set of beach sub-environments. Measured moisture contents ranged from about 0% (dry) to 25% (saturated), by weight. The moisture content range within a grid data set was found to vary from less than 1% to almost 15%. The magnitude of within-grid variability varied consistently with the mean moisture content of the grid sets, following an approximately normal distribution. Both very wet and very dry grid data sets exhibited little internal variability in moisture content, while intermediate moisture contents were associated with higher levels of variability. Thus, at intermediate moisture contents it was apparent that some portions of the beach surface could be dry enough to allow aeolian transport (i.e. moisture content is below the critical threshold), while adjacent portions are too wet for transport to occur. To examine the implications of this finding, cumulative distribution functions were calculated to model the relative proportions of beach surface area expected to be above or below specified threshold moisture levels (4%, 7%, and 14%). It was found that the implicit inclusion of small-scale variability in surface moisture levels typically resulted in changes of less than 1% in the beach area available for transport, suggesting that this parameter can be ignored at larger spatial scales. Copyright © 2009 John Wiley & Sons, Ltd. [source]
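
The threshold calculation described can be sketched with a normal CDF, using invented grid statistics rather than the measured data:

```python
from scipy.stats import norm

# Fraction of the beach surface drier than a transport threshold, assuming
# small-scale moisture varies roughly normally around the grid mean
# (illustrative values, not the measured data).
grid_mean, grid_sd = 6.0, 2.0          # % moisture by weight
for threshold in (4.0, 7.0, 14.0):
    frac_below = norm.cdf(threshold, loc=grid_mean, scale=grid_sd)
    print(f"{threshold:>4.0f}% threshold: {frac_below:.0%} of surface available")
```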


Characterization of yeast strains by fluorescence lifetime imaging microscopy

FEMS YEAST RESEARCH, Issue 1 2008
Hemant Bhatta
Abstract The results of fluorescence lifetime imaging microscopy of selected yeast strains are presented, and it is demonstrated that the lifetime distributions can be successfully used for strain characterization and differentiation. Four strains of the industrially relevant yeast Saccharomyces were excited at 405 nm and the autofluorescence observed within 440–540 nm. Using statistical tools such as empirical cumulative distribution functions with Kolmogorov–Smirnov testing, the four studied strains were categorized into three different groups for a normal sample size of 70 cells slide−1 at a significance level of 5%. The differentiation of all of the examined strains from one another was shown to be possible by increasing the sample size to 420 cells, which is achievable by taking the lifetime data at six different positions in the slide. [source]
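
The comparison of empirical cumulative distribution functions via Kolmogorov–Smirnov testing can be sketched as follows (lifetime values are invented, not the measured data):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(6)

# Hypothetical per-cell fluorescence lifetimes (ns) for two strains
strain_a = rng.normal(2.4, 0.3, 70)
strain_b = rng.normal(2.6, 0.3, 70)

# The two-sample Kolmogorov-Smirnov test compares the empirical cumulative
# distribution functions of the lifetimes; reject identity at the 5% level.
stat, p = ks_2samp(strain_a, strain_b)
print(f"D = {stat:.3f}, p = {p:.4f}, different: {p < 0.05}")
```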


Rainfall propagation impairments for medium elevation angle satellite-to-earth 12 GHz in the tropics

INTERNATIONAL JOURNAL OF SATELLITE COMMUNICATIONS AND NETWORKING, Issue 4 2008
J.S. Mandeep
Abstract Rain attenuation is the dominant propagation impairment for satellite communication systems operating at frequencies above about 10 GHz. The rainfall path attenuation at 12.255 GHz measured at Universiti Sains Malaysia (USM) for 4 years (2 January to 5 December) is presented. This paper presents an empirical analysis of rain rate and rain attenuation cumulative distribution functions obtained using 1-min integrated rainfall data, and a comparison of the measured data with predictions from well-established rain attenuation models. Copyright © 2008 John Wiley & Sons, Ltd. [source]


A general class of hierarchical ordinal regression models with applications to correlated ROC analysis

THE CANADIAN JOURNAL OF STATISTICS, Issue 4 2000
Hemant Ishwaran
Abstract The authors discuss a general class of hierarchical ordinal regression models that includes both location and scale parameters, allows link functions to be selected adaptively as finite mixtures of normal cumulative distribution functions, and incorporates flexible correlation structures for the latent scale variables. Exploiting the well-known correspondence between ordinal regression models and parametric ROC (Receiver Operating Characteristic) curves makes it possible to use a hierarchical ROC (HROC) analysis to study multilevel clustered data in diagnostic imaging studies. The authors present a Bayesian approach to model fitting using Markov chain Monte Carlo methods and discuss HROC applications to the analysis of data from two diagnostic radiology studies involving multiple interpreters. [source]


Estimates of the twinning fraction for macromolecular crystals using statistical models accounting for experimental errors

ACTA CRYSTALLOGRAPHICA SECTION D, Issue 11 2007
Vladimir Y. Lunin
An advanced statistical model is suggested that is designed to estimate the twinning fraction in merohedrally (or pseudo-merohedrally) twinned crystals. The model takes experimental errors of the measured intensities into account and is adapted to the accuracy of a particular X-ray experiment through the standard deviations of the reflection intensities. The theoretical probability distributions for the improved model are calculated using a Monte Carlo-type simulation procedure. The use of different statistical criteria (including likelihood) to estimate the optimal twinning-fraction value is discussed. The improved model enables better agreement of theoretical and observed cumulative distribution functions to be obtained and produces twinning-fraction estimates that are closer to the refined values in comparison to the conventional model, which disregards experimental errors. The results of the two approaches converge when applied to selected subsets of measured intensities of high accuracy. [source]


Closed-form approximations to the error and complementary error functions and their applications in atmospheric science

ATMOSPHERIC SCIENCE LETTERS, Issue 3 2007
C. Ren
Abstract The error function, as well as related functions, occurs in theoretical aspects of many parts of atmospheric science. This note presents a closed-form approximation for the error, complementary error, and scaled complementary error functions, with maximum relative errors within 0.8%. Unlike other approximate solutions, this single equation gives answers within the stated accuracy for real variable x ∈ [0, ∞). The approximation is very useful in solving atmospheric science problems by providing analytical solutions. Examples of the utility of the approximation are: the computation of cirrus cloud physics inside a general circulation model, the cumulative distribution functions of normal and log-normal distributions, and the recurrence period for risk assessment. Copyright © 2007 Royal Meteorological Society [source]
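
The paper's particular closed form is not reproduced here; for context, a classical closed-form approximation to erf (Abramowitz & Stegun 7.1.26) and its link to the normal CDF look like this:

```python
import math

def erf_approx(x):
    """Abramowitz & Stegun 7.1.26 rational approximation to erf(x), x >= 0
    (shown for context; it is not the closed form proposed in the paper)."""
    p = 0.3275911
    a = (0.254829592, -0.284496736, 1.421413741, -1.453152027, 1.061405429)
    t = 1.0 / (1.0 + p * x)
    poly = sum(c * t ** (i + 1) for i, c in enumerate(a))
    return 1.0 - poly * math.exp(-x * x)

# The normal CDF follows directly: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
for x in (0.5, 1.0, 2.0):
    print(x, erf_approx(x), math.erf(x))
```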