Exponential Distribution
Selected Abstracts
Polyaniline-multiwalled carbon nanotube composites: Characterization by WAXS and TGA
JOURNAL OF APPLIED POLYMER SCIENCE, Issue 1 2008
T. Jeevananda
Abstract: Polyaniline/carboxylated multi-walled carbon nanotube (PAni/c-MWNT) nanocomposites have been synthesized by micellar-aided emulsion polymerization with various c-MWNT compositions, viz., 0.5, 1, 5, and 10 wt %. The microcrystalline parameters such as the nanocrystal size ⟨N⟩, lattice strain (g), interplanar distance (d_hkl), width of the crystallite size distribution, surface-weighted crystal size (Ds), and volume of the ordered regions were calculated from the X-ray data by using two mathematical models, namely the exponential distribution and Reinhold distribution methods. The effects of heat ageing on the microcrystalline parameters of the PAni/c-MWNT nanocomposites were also studied and the results are correlated. The thermal stability and electrical resistivity of the PAni/c-MWNT nanocomposites were examined with thermogravimetric analysis (TGA) and a conventional two-probe method. The TGA data indicate that the thermal stability of the nanocomposites improved after the incorporation of c-MWNTs. The influence of temperature on the resistivity of the nanocomposites was also measured. © 2008 Wiley Periodicals, Inc. J Appl Polym Sci 2008

Investigation of the Influence of Overvoltage, Auxiliary Glow Current and Relaxation Time on the Electrical Breakdown Time Delay Distributions in Neon
CONTRIBUTIONS TO PLASMA PHYSICS, Issue 2 2005
Č. A. Maluckov
Abstract: Results of the statistical analysis of the electrical breakdown time delay for a neon-filled tube at 13.3 mbar are presented in this paper. Experimental distributions of the breakdown time delay were established on the basis of 200 successive and independent measurements, for different overvoltages, relaxation times and auxiliary glows. The obtained experimental distributions deviate from the usual exponential distribution. Breakdown time delay distributions are numerically generated, using the Monte Carlo method, as compositions of two independent random variables with an exponential and a Gaussian distribution. The theoretical breakdown time delay distribution is obtained from the convolution of the exponential and Gaussian distributions. The analysis performed shows that the crucial parameter determining the complex structure of the time delay is the overvoltage; if it is of the order of a few per cent, the distribution of the time delay must be treated as a convolution of two random variables. (© 2005 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

Damage-based design with no repairs for multiple events and its sensitivity to seismicity model
EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 3 2007
S. Das
Abstract: Conventional design methodology for earthquake-resistant structures is based on the concept of ensuring 'no collapse' during the most severe earthquake event. This methodology does not envisage the possibility of continuous damage accumulation over several not-so-severe earthquake events, as may be the case in areas of moderate to high seismicity, particularly when it is economically infeasible to carry out repairs after damaging events. As a result, the structure may collapse, or may necessitate large-scale repairs, well before the design life of the structure is over.
This study considers the use of a design force ratio (DFR) spectrum for taking an informed decision on the extent to which yield strength levels should be raised to avoid such a scenario. The DFR spectrum gives the ratios by which the yield strength levels of single-degree-of-freedom oscillators of different initial periods should be increased in order to limit the total damage caused by all earthquake events during the lifetime to a specified level. The DFR spectra are compared for three different seismicity models in the case of elasto-plastic oscillators: one corresponding to the exponential distribution for return periods of large events and the other two corresponding to the lognormal and Weibull distributions. It is shown through a numerical study for a hypothetical seismic region that the use of the simple exponential model may be acceptable only for small values of the seismic gap length. For moderately large to large seismic gap lengths, it may be conservative to use the lognormal model, while the Weibull model may be assumed for very large seismic gap lengths. Copyright © 2006 John Wiley & Sons, Ltd.

The exponentiated Gumbel distribution with climate application
ENVIRONMETRICS, Issue 1 2006
Saralees Nadarajah
Abstract: The Gumbel distribution is perhaps the most widely applied statistical distribution for climate modeling. In this article we introduce a distribution that generalizes the standard Gumbel distribution in the same way the exponentiated exponential distribution generalizes the standard exponential distribution. We refer to this new distribution as the exponentiated Gumbel distribution. We provide a comprehensive treatment of the mathematical properties of this new distribution and illustrate its use for modeling rainfall data from Orlando, Florida. Among the mathematical properties, we derive the analytical shapes of the corresponding probability density function and the hazard rate function, calculate expressions for the nth moment and the asymptotic distribution of the extreme order statistics, and investigate the variation of the skewness and kurtosis measures. We also discuss estimation by the method of maximum likelihood. Copyright © 2005 John Wiley & Sons, Ltd.

Hydraulic pathways in the crystalline rock of the KTB
GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 1 2000
Günter Zimmermann
Abstract: Fracture systems and fluid pathways must be analysed in order to understand the dynamical processes in the upper crust. Various deterministic as well as stochastic fracture networks in the depth section of the Franconian Lineament (6900 to 7140 m), which appears as a brittle-ductile shear zone and prominent seismic reflector, were modelled to simulate the hydraulic situation at the two boreholes of the Continental Deep Drilling Program (KTB). They led to estimates of the hydraulic permeability in crystalline rock. The geometrical parameters of the fractures, such as fracture locations and orientations, were determined from structural borehole measurements, which create an image of the borehole wall. The selection of potentially open fractures was decided according to the stress field. Only fractures with the dip direction (azimuth) of the fracture plane perpendicular to the maximum horizontal stress field were assumed to be open. The motivation for this assumption is the fact that the maximum horizontal stress is higher than the vertical stress from the formation, indicating a strike-slip faulting state of stress.
Therefore, the probability of open fractures due to this particular stress field at the KTB sites is enhanced. Length scales for fracture apertures and extensions were stochastically varied and calibrated by hydraulic experiments. The mean fracture aperture was estimated to be 25 μm, assuming an exponential distribution, with corresponding permeability in the range of 10⁻¹⁶ m². Similar results were also obtained for log-normal and normal distributions, with a variation of permeability of the order of a factor of 2. The influence of the fracture length on the permeability of the stochastic networks was also studied. Decreasing the fracture length below a specific threshold of 10 m led to networks with vanishing connectivity and hence vanishing permeability. Therefore, we assume a mean fracture length exceeding the threshold of 10 m as a necessary condition for a macroscopic hydraulically active fracture system at the KTB site. The calculated porosity due to the fracture network is of the order of 10⁻³ per cent, which at first sight contradicts the estimated matrix porosity of 1 to 2 per cent from borehole measurements and core measurements. It can be concluded from these results, however, that if the fluid transport is due to a macroscopic fracture system, only very low porosity is needed for hydraulic flow with permeabilities up to several 10⁻¹⁶ m², and hence the contribution of matrix porosity to the hydraulic transport is of a subordinate nature.

Solute transport in sand and chalk: a probabilistic approach
HYDROLOGICAL PROCESSES, Issue 5 2006
E. Carlier
Abstract: A probabilistic approach is used to simulate particle tracking for two types of porous medium. The first is sand grains with a single intergranular porosity. Particle tracking is carried out by advection and dispersion. The second is chalk granulates with intergranular and matrix porosities. Sorption can occur with advection and dispersion during particle tracking. Particle tracking is modelled as the sum of elementary steps with independent random variables in the sand medium. An exponential distribution is obtained for each elementary step and shows that the whole process is Markovian. A gamma distribution or probability density function is then deduced. The relationships between dispersivity and the elementary step are given using the central limit theorem. Particle tracking in the chalky medium is a non-Markovian process. The probability density function depends on a power of the distance. Experimental simulations by dye tracer tests on a column have been performed for different distances and discharges. The probabilistic approach computations are in good agreement with the experimental data. The probabilistic computation seems an interesting and complementary approach to simulate transfer phenomena in porous media with respect to the traditional numerical methods. Copyright © 2006 John Wiley & Sons, Ltd.

A geomorphological explanation of the unit hydrograph concept
HYDROLOGICAL PROCESSES, Issue 4 2004
C. Cudennec
Abstract: The water path from any point of a basin to the outlet through the self-similar river network was considered. This hydraulic path was split into components within the Strahler ordering scheme. For the entire basin, we assumed the probability density functions of the lengths of these components, reduced by the scaling factor, to be independent and isotropic.
Based on these assumptions, we propose a statistical physics argument (similar to Maxwell's reasoning) that considers a hydraulic length symbolic space, built on the self-similar lengths of the components. Theoretical expressions of the probability density functions of the hydraulic length and of the lengths of all the components were derived. These expressions are gamma laws expressed in terms of simple geomorphological parameters. We validated our theory with experimental observations from two French basins, which differ in size and relief. From the comparisons, we discuss the relevance of the assumptions and show how a gamma law structure underlies the river network organization, but under the influence of a strong hierarchy constraint. These geomorphological results have been translated into travel time probability density functions, through the hydraulic linear hypothesis. This translation provides deterministic explanations of some famous a priori assumptions of the unit hydrograph and the geomorphological unit hydrograph theories, such as the gamma law general shape and the exponential distribution of residence time in Strahler states. Copyright © 2004 John Wiley & Sons, Ltd.

Radio-tracking gravel particles in a large braided river in New Zealand: a field test of the stochastic theory of bed load transport proposed by Einstein
HYDROLOGICAL PROCESSES, Issue 3 2001
H. M. Habersack
Abstract: Hans A. Einstein initiated a probabilistic approach to modelling sediment transport in rivers. His formulae were based on theory and were stimulated by laboratory investigations. The theory assumes that bed load movement occurs in individual steps of rolling, sliding or saltation and rest periods. So far very few attempts have been made to measure stochastic elements in nature. For the first time this paper presents results of radio-tracing the travel path of individual particles in a large braided gravel-bed river: the Waimakariri River of New Zealand. As proposed by Einstein, it was found that rest periods can be modelled by an exponential distribution, but particle step lengths are better represented by a gamma distribution. Einstein assumed an average travel distance of 100 grain diameters for any bed load particle between consecutive points of deposition, but larger values of 6.7 m or 150 grain diameters and 6.1 m or 120 grain diameters were measured for two test particle sizes. Together with other available large-scale field data, a dependence of the mean step length on particle diameter relative to the D50 of the bed surface was found. During small floods the time spent in movement represents only 2.7% of the total time from erosion to deposition. As this percentage of time used for transport increases, it has to be accounted for in stochastic transport models. Tracing the flow path of bed load particles between erosion and deposition sites is a step towards explaining the interactions between sediment transport and river morphology. Copyright © 2001 John Wiley & Sons, Ltd.

The distribution of QTL additive and dominance effects in porcine F2 crosses
JOURNAL OF ANIMAL BREEDING AND GENETICS, Issue 3 2010
J. Bennewitz
Summary: The present study used published quantitative trait loci (QTL) mapping data from three F2 crosses in pigs for 34 meat quality and carcass traits to derive the distribution of additive QTL effects as well as dominance coefficients.
Dominance coefficients were calculated as the observed QTL dominance deviation divided by the absolute value of the observed QTL additive effect. The error variance of this ratio was approximated using the delta method. Mixtures of normal distributions (mixtures of normals) were fitted to the dominance coefficients using a modified EM algorithm that considered the heterogeneous error variances of the data points. The results clearly suggested fitting one component, which means that the dominance coefficients are normally distributed, with an estimated mean (standard deviation) of 0.193 (0.312). For the additive effects, mixtures of normals and a truncated exponential distribution were fitted. Two components were fitted by the mixtures of normals. The mixtures of normals did not predict enough QTL with small effects compared with the exponential distribution and with literature reports. The estimated rate parameter of the exponential distribution was 5.81, resulting in a mean effect of 0.172.

Distribution of Patients' Paroxysmal Atrial Tachyarrhythmia Episodes: Implications for Detection of Treatment Efficacy
JOURNAL OF CARDIOVASCULAR ELECTROPHYSIOLOGY, Issue 2 2001
WILLIAM F. KAEMMERER Ph.D.
Distribution of Paroxysmal Atrial Tachyarrhythmia Episodes. Introduction: Clinical trials of treatments for paroxysmal atrial tachyarrhythmia (pAT) often compare different treatment groups using the time to first episode recurrence. This approach assumes that the time to the first recurrence is representative of all times between successive episodes in a given patient. We subjected this assumption to an empiric test. Methods and Results: Records of pAT onsets from a chronologic series of 134 patients with dual-chamber implantable defibrillators were analyzed; 14 had experienced > 10 pAT episodes, which is sufficient for meaningful statistical modeling of the time intervals between episodes. Episodes were independent and randomly distributed in 9 of 14 patients, but a fit of the data to an exponential distribution, required by the stated assumption, was rejected in 13 of 14. In contrast, a Weibull distribution yielded an adequate goodness of fit in 5 of the 9 cases with independent and randomly distributed data. Monte Carlo methods were used to determine the impact of violations of the exponential distribution assumption on clinical trials using time from cardioversion to first episode recurrence as the dependent measure. In a parallel-groups design, substantial loss of power occurs with sample sizes < 500 patients per group. In a cross-over design, there is insufficient power to detect a 30% reduction in episode frequency even with 300 patients. Conclusion: Clinical trials that rely on time to first episode recurrence may be considerably less able to detect efficacious treatments than may have been supposed. Analysis of multiple episode onsets recorded over time should be used to avoid this pitfall.

A new method for analyzing scientific productivity
JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 13 2001
John C. Huber
Previously, a new method for measuring scientific productivity was demonstrated for authors in mathematical logic and some subareas of 19th-century physics. The purpose of this article is to apply this new method to other fields to support its general applicability. We show that the method yields the same results for modern physicists, biologists, psychologists, inventors, and composers.
That is, each individual's production is constant over time, and the time-period fluctuations follow the Poisson distribution. However, the productivity (e.g., papers per year) varies widely across individuals. We show that the distribution of productivity does not follow the normal (i.e., bell curve) distribution, but rather follows the exponential distribution. Thus, most authors produce at the lowest rate and very few authors produce at the higher rates. We also show that the career duration of individuals follows the exponential distribution. Thus, most authors have a very short career and very few have a long career. The principal advantage of the new method is that the detailed structure of author productivity, such as trends, can be examined. Another advantage is that information science studies have guidance for the length of the time interval being examined and for estimating when an author's entire body of work has been recorded.

Exact likelihood inference for the exponential distribution under generalized Type-I and Type-II hybrid censoring
NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 7 2004
B. Chandrasekar
Abstract: Chen and Bhattacharyya [Exact confidence bounds for an exponential parameter under hybrid censoring, Commun Statist Theory Methods 17 (1988), 1857-1870] considered a hybrid censoring scheme and obtained the exact distribution of the maximum likelihood estimator of the mean of an exponential distribution along with an exact lower confidence bound. Childs et al. [Exact likelihood inference based on Type-I and Type-II hybrid censored samples from the exponential distribution, Ann Inst Statist Math 55 (2003), 319-330] recently derived an alternative, simpler expression for the distribution of the MLE. These authors also proposed a new hybrid censoring scheme and derived similar results for the exponential model. In this paper, we propose two generalized hybrid censoring schemes which have some advantages over the hybrid censoring schemes already discussed in the literature. We then derive the exact distribution of the maximum likelihood estimator as well as exact confidence intervals for the mean of the exponential distribution under these generalized hybrid censoring schemes. © 2004 Wiley Periodicals, Inc. Naval Research Logistics, 2004

Optimal service rates of a service facility with perishable inventory items
NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 5 2002
O. Berman
In this paper we optimally control service rates for an inventory system of service facilities with perishable products. We consider a finite-capacity system where arrivals are Poisson-distributed, lifetimes of items have an exponential distribution, and replenishment is instantaneous. We determine the service rates to be employed at each instant of time so that the long-run expected cost rate is minimized for fixed maximum inventory level and capacity. The problem is modelled as a semi-Markov decision problem. We establish the existence of a stationary optimal policy and we solve it by employing linear programming. Several numerical examples which provide insight into the behavior of the system are presented. © 2002 Wiley Periodicals, Inc. Naval Research Logistics 49: 464-482, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/nav.10021
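The hybrid censoring schemes in the Chandrasekar entry above build on the Type-I hybrid scheme of Chen and Bhattacharyya, in which a life test on n units stops at the earlier of the r-th ordered failure and a fixed time T. The following Python snippet is a minimal sketch of that scheme for exponential lifetimes and of the usual total-time-on-test MLE of the mean; it does not reproduce the exact-distribution results of the paper, and the sample size, time limit and function name are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def hybrid_type1_mle(lifetimes, r, T):
    """MLE of the exponential mean under a Type-I hybrid censoring scheme.

    The test on n units stops at tau = min(r-th ordered failure, T); units
    still running at tau are right-censored.  Returns (theta_hat, D, tau),
    where D is the number of failures observed by tau; theta_hat is None
    when no failure occurs before tau (the MLE does not exist then).
    """
    n = len(lifetimes)
    ordered = np.sort(lifetimes)
    tau = min(ordered[r - 1], T)            # stopping time of the experiment
    observed = ordered[ordered <= tau]      # failure times actually seen
    D = observed.size
    if D == 0:
        return None, 0, tau
    total_time_on_test = observed.sum() + (n - D) * tau
    return total_time_on_test / D, D, tau

# Small check: true mean 100 h, n = 20 units, r = 15, time limit T = 150 h.
true_mean, n, r, T = 100.0, 20, 15, 150.0
estimates = [
    est for est in (
        hybrid_type1_mle(rng.exponential(true_mean, size=n), r, T)[0]
        for _ in range(5000)
    ) if est is not None
]
print("average MLE over replications:", round(float(np.mean(estimates)), 2))
```

Averaging the replications shows the estimator centring near the true mean, while its exact sampling distribution (and hence exact confidence bounds) is what the papers cited above derive analytically.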
Full counting statistics for electron number in quantum dots
PHYSICA STATUS SOLIDI (C) - CURRENT TOPICS IN SOLID STATE PHYSICS, Issue 1 2008
Yasuhiro Utsumi
Abstract: Measurements of the average current and its fluctuations (noise) have been powerful tools to study quantum transport in mesoscopic systems. Recently it became possible to measure the probability distribution of current, the 'full counting statistics' (FCS), by using quantum point-contact charge detectors. Motivated by recent experiments, we developed the FCS theory for the joint probability distribution of the current and the electron number inside quantum dots (QDs). We show that a non-Gaussian exponential distribution appears when there is no dot state close to the lead chemical potentials. We show that the measurement of the joint probability distribution of current and electron number would reveal nontrivial correlations, which reflect the asymmetry of the tunnel barriers. We also show that for increasing strength of tunneling, the quantum fluctuations of charge qualitatively change the probability distribution of the electron number. (© 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

A Bayesian zero-failure reliability demonstration test of high quality electro-explosive devices
QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 8 2009
Tsai-Hung Fan
Abstract: Usually, for high-reliability products the production cost is high and the lifetime is very long, so failures may not be observable within a limited testing time. In this paper, an accelerated experiment is employed in which the lifetime follows an exponential distribution whose failure rate is related to the acceleration factor exponentially. The underlying parameters are also assumed to have exponential prior distributions. A Bayesian zero-failure reliability demonstration test is conducted to design in advance the minimum sample size and testing length subject to a certain specified reliability criterion. The probability of passing the test design as well as the predictive probability for additional experiments is also derived. Sensitivity analysis of the design is investigated by a simulation study. Copyright © 2009 John Wiley & Sons, Ltd.

A study of time-between-events control chart for the monitoring of regularly maintained systems
QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 7 2009
Michael B. C. Khoo
Abstract: Owing to usage, environment and aging, the condition of a system deteriorates over time. Regular maintenance is often conducted to restore its condition and to prevent failures from occurring. In this kind of situation, the process is considered to be stable, and thus statistical process control charts can be used to monitor the process. The monitoring can help in making a decision on whether further maintenance is worthwhile or whether the system has deteriorated to a state where regular maintenance is no longer effective. When modeling a deteriorating system, lifetime distributions with increasing failure rate are more appropriate. However, for a regularly maintained system, the failure time distribution can be approximated by the exponential distribution with an average failure rate that depends on the maintenance interval. In this paper, we adopt a modification of a time-between-events control chart, i.e. the exponential chart, for monitoring the failure process of a maintained Weibull-distributed system.
We study the effect that changes in the scale parameter of the Weibull distribution, with the shape parameter held at the same level, have on the sensitivity of the exponential chart. This paper illustrates an approach for integrating maintenance decisions with statistical process monitoring methods. Copyright © 2008 John Wiley & Sons, Ltd.

The Tobit model with a non-zero threshold
THE ECONOMETRICS JOURNAL, Issue 3 2007
Richard T. Carson
Summary: The standard Tobit maximum likelihood estimator under a zero censoring threshold produces inconsistent parameter estimates when the constant censoring threshold is non-zero and unknown. Unfortunately, the recording of a zero rather than the actual censoring threshold value is typical of economic data. Non-trivial minimum purchase prices for most goods, fixed costs for doing business or trading, social customs such as those involving charitable donations, and informal administrative recording practices represent common examples of a non-zero constant censoring threshold where the constant threshold is not readily available to the econometrician. Monte Carlo results show that this bias can be extremely large in practice. A new estimator is proposed to estimate the unknown censoring threshold. It is shown that the estimator is superconsistent and follows an exponential distribution in large samples. Due to the superconsistency, the asymptotic distribution of the maximum likelihood estimator of the other parameters is not affected by the estimation uncertainty of the censoring threshold.

Dividend payments in the classical risk model under absolute ruin with debit interest
APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 3 2009
Chunwei Wang
Abstract: This paper attempts to study the dividend payments in a compound Poisson surplus process with debit interest. Dividends are paid to the shareholders according to a barrier strategy. An alternative assumption is that business can go on after ruin, as long as it is profitable. When the surplus is negative, a debit interest is applied. At first, we obtain the integro-differential equations satisfied by the moment-generating function and moments of the discounted dividend payments and we also prove their continuity at zero. Then, applying these results, we get the explicit expressions of the moment-generating function and moments of the discounted dividend payments for exponential claims. Furthermore, we discuss the optimal dividend barrier when the claim sizes have a common exponential distribution. Finally, we give numerical examples for exponential claims and Erlang(2) claims. Copyright © 2008 John Wiley & Sons, Ltd.

Modelling a general standby system and evaluation of its performance
APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 2 2008
Ji Hwan Cha
Abstract: Redundancy or standby is a technique that has been widely applied to improving system reliability and availability in system design. In this paper, a general method for modelling standby systems is proposed and system performance measures are derived. It is shown that the proposed general standby system includes the cases of cold, hot and warm standby systems with units of exponential distribution, which were studied in the literature, as special cases. An optimal allocation problem for a standby system is also discussed. Copyright © 2007 John Wiley & Sons, Ltd.
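The cold, hot and warm standby cases with exponential units mentioned in the Cha abstract above can be illustrated with a short Monte Carlo comparison of mean time to failure (MTTF) for a two-unit system. This is only a sketch under assumed failure rates (active rate 1.0, dormant rate 0.3), not the general model or the allocation analysis of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
lam_active, lam_dormant, n_sim = 1.0, 0.3, 100_000   # assumed illustrative rates

def cold_standby():
    # the standby unit cannot fail while dormant and starts fresh at switch-over
    return rng.exponential(1 / lam_active) + rng.exponential(1 / lam_active)

def hot_standby():
    # both units operate from t = 0; the system fails when the last one fails
    return max(rng.exponential(1 / lam_active), rng.exponential(1 / lam_active))

def warm_standby():
    # the standby ages at the dormant rate; by the memoryless property it
    # behaves like a fresh active unit once it is switched in
    active_life = rng.exponential(1 / lam_active)
    dormant_life = rng.exponential(1 / lam_dormant)
    if dormant_life < active_life:        # standby already failed at switch-over
        return active_life
    return active_life + rng.exponential(1 / lam_active)

for name, system in [("cold", cold_standby), ("hot", hot_standby), ("warm", warm_standby)]:
    mttf = np.mean([system() for _ in range(n_sim)])
    print(f"{name:4s} standby MTTF ~ {mttf:.3f}")
# analytic values for comparison:
#   cold 2.0, hot 1.5, warm 1/lam_active + 1/(lam_active + lam_dormant) ~ 1.769
```

The simulated values reproduce the textbook ordering cold > warm > hot, which is the kind of special-case behaviour the general standby model above subsumes.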
Comparison of Trap-state Distribution and Carrier Transport in Nanotubular and Nanoparticulate TiO2 Electrodes for Dye-Sensitized Solar Cells
CHEMPHYSCHEM, Issue 10 2010
Raheleh Mohammadpour
Abstract: Dye-sensitized solar cells (DSCs) with nanotubular TiO2 electrodes of varying thicknesses are compared to DSCs based on conventional nanoparticulate electrodes. Despite the higher degree of order in one-dimensional nanotubular electrodes, electron transport times and diffusion coefficients, determined under short-circuit conditions, are comparable to those of nanoparticulate electrodes. The quasi-Fermi level, however, is much lower in the nanotubes, suggesting a lower concentration of conduction band electrons. This provides evidence for a much higher diffusion coefficient for conduction band electrons in nanotubes than in nanoparticulate films. The electron lifetime and the diffusion length are significantly longer in nanotubular TiO2 electrodes than in nanoparticulate films. Nanotubular electrodes have a trap distribution that differs significantly from nanoparticulate electrodes; they possess relatively deeper traps and have a characteristic energy of the exponential distribution that is more than two times that of nanoparticulate electrodes.

THE ADDITIVE GENETIC VARIANCE AFTER BOTTLENECKS IS AFFECTED BY THE NUMBER OF LOCI INVOLVED IN EPISTATIC INTERACTIONS
EVOLUTION, Issue 4 2003
Yamama Naciri-Graven
Abstract: We investigated the role of the number of loci coding for a neutral trait on the release of additive variance for this trait after population bottlenecks. Different bottleneck sizes and durations were tested for various matrices of genotypic values, with initial conditions covering the allele frequency space. We used three different types of matrices. First, we extended Cheverud and Routman's model by defining matrices of "pure" epistasis for three and four independent loci; second, we used genotypic values drawn randomly from uniform, normal, and exponential distributions; and third, we used two models of simple metabolic pathways leading to physiological epistasis. For all these matrices of genotypic values except the dominant metabolic pathway, we find that, as the number of loci increases from two to three and four, an increase in the release of additive variance occurs. The amount of additive variance released for a given set of genotypic values is a function of the inbreeding coefficient, independently of the size and duration of the bottleneck. The level of inbreeding necessary to achieve maximum release of additive variance increases with the number of loci. We find that additive-by-additive epistasis is the type of epistasis most easily converted into additive variance. For a wide range of models, our results show that epistasis, rather than dominance, plays a significant role in the increase of additive variance following bottlenecks.

PROBABILISTIC CHARACTERISTICS OF STRESS CHANGES DURING CEREAL SNACK PUNCTURE
JOURNAL OF TEXTURE STUDIES, Issue 2 2007
YOSHIKI TSUKAKOSHI
ABSTRACT: During puncture tests of Japanese cereal snacks, the force increases and decreases alternately. We herein compare the force-deformation curves recorded by two different testing machines and show that the number of changes in the curves depends on the testing machine. Thus, it is impossible to compare results obtained using different instruments. By removing the higher-frequency components of the force-deformation curves, small events are easily missed.
The number of large events decreases when lower-frequency components are eliminated. This suggests the importance of providing information on the frequency range of the testing machines. Nevertheless, the number of large force changes is similar between the examined machines. To model the size-frequency distribution, we selected a parametric probabilistic model from among the Weibull, exponential and Pareto distributions using the Akaike information criterion and found that the Weibull and exponential distributions fit better than the Pareto distribution. PRACTICAL APPLICATIONS: The methods developed in this work can be used to evaluate the quality of crisp snack food. By analyzing the samples obtained from a lot, samples with poor texture because of abnormal moisture levels and/or ingredients can be discerned, and the result can be used to accept or reject the lot.

Endogenous Random Asset Prices in Overlapping Generations Economies
MATHEMATICAL FINANCE, Issue 1 2000
Volker Böhm
This paper derives a general explicit sequential asset price process for an economy with overlapping generations of consumers. They maximize expected utility with respect to subjective transition probabilities given by Markov kernels. The process is determined primarily by the interaction of exogenous random dividends and the characteristics of consumers, given by arbitrary preferences and expectations, yielding an explicit random dynamical system with expectations feedback. The paper studies asset prices and equity premia for a parametrized class of examples with CARA utilities and exponential distributions. It provides a complete analysis of the role of risk aversion and of subjective as well as rational beliefs.

Profit Maximizing Warranty Period with Sales Expressed by a Demand Function
QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 3 2007
Shaul P. Ladany
Abstract: The problem of determining the optimal warranty period, assumed to coincide with the manufacturer's lower specification limit for the lifetime of the product, is addressed. It is assumed that the quantity sold depends, via a Cobb-Douglas-type demand function, on the sale price and on the warranty period, and that both the cost incurred for a non-conforming item and the sale price increase with the warranty period. A general solution is derived using Response Modeling Methodology (RMM) and a new approximation for the standard normal cumulative distribution function. The general solution is compared with the exact optimal solutions derived under various distributional scenarios. Relative to the exact optimal solutions, RMM-based solutions are accurate to at least the first three significant digits. Some exact results are derived for the uniform and the exponential distributions. Copyright © 2006 John Wiley & Sons, Ltd.

Maximum weight independent sets and matchings in sparse random graphs. Exact results using the local weak convergence method
RANDOM STRUCTURES AND ALGORITHMS, Issue 1 2006
Let G(n,c/n) and G_r(n) be an n-node sparse random graph and a sparse random r-regular graph, respectively, and let I(n,c) and I(n,r) be the sizes of the largest independent sets in G(n,c/n) and G_r(n). The asymptotic value of I(n,c)/n as n → ∞ can be computed using the Karp-Sipser algorithm when c ≤ e. For random cubic graphs, r = 3, it is only known that .432 ≤ lim inf_n I(n,3)/n ≤ lim sup_n I(n,3)/n ≤ .4591 with high probability (w.h.p.)
as n → ∞, as shown in Frieze and Suen [Random Structures Algorithms 5 (1994), 649-664] and Bollobás [European J Combin 1 (1980), 311-316], respectively. In this paper we assume in addition that the nodes of the graph are equipped with nonnegative weights, independently generated according to some common distribution, and we consider instead the maximum weight of an independent set. Surprisingly, we discover that for certain weight distributions, the limit lim_n I(n,c)/n can be computed exactly even when c > e, and lim_n I(n,r)/n can be computed exactly for some r ≥ 1. For example, when the weights are exponentially distributed with parameter 1, lim_n I(n,2e)/n ≈ .5517, and lim_n I(n,3)/n ≈ .6077. Our results are established using the recently developed local weak convergence method further reduced to a certain local optimality property exhibited by the models we consider. We extend our results to maximum weight matchings in G(n,c/n) and G_r(n). For the case of exponential distributions, we compute the corresponding limits for every c > 0 and every r ≥ 2. © 2005 Wiley Periodicals, Inc. Random Struct. Alg., 2006
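For a rough feel of the quantity studied in the last abstract, the sketch below brute-forces the maximum weight independent set on small instances of G(n, c/n) with unit-mean exponential weights and reports the per-node value. At such small n the estimate is only loosely comparable to the asymptotic limit quoted above (≈ .5517 at c = 2e), and the graph size, trial count and branching heuristic are illustrative choices, not the machinery of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mwis_weight(avail, adj, w):
    """Exact maximum-weight independent set value by branching on a
    maximum-degree vertex; adequate for small sparse graphs only."""
    if not avail:
        return 0.0
    v = max(avail, key=lambda u: len(adj[u] & avail))
    if not adj[v] & avail:                      # no edges left: take everything
        return sum(w[u] for u in avail)
    skip_v = mwis_weight(avail - {v}, adj, w)
    take_v = w[v] + mwis_weight(avail - ({v} | adj[v]), adj, w)
    return max(skip_v, take_v)

def sparse_random_graph(n, c):
    """Erdos-Renyi G(n, c/n) as an adjacency dictionary."""
    adj = {v: set() for v in range(n)}
    p = c / n
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

n, c, trials = 30, 2 * np.e, 20
per_node_values = []
for _ in range(trials):
    adj = sparse_random_graph(n, c)
    weights = {v: rng.exponential(1.0) for v in range(n)}   # parameter-1 weights
    per_node_values.append(mwis_weight(frozenset(range(n)), adj, weights) / n)
print("finite-n estimate of I(n, 2e)/n:", round(float(np.mean(per_node_values)), 4))
```

Exact enumeration like this scales exponentially, which is precisely why the local weak convergence method in the abstract is needed to pin down the n → ∞ limits analytically.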