Randomness


Selected Abstracts


Reliability-based preform shape design in forging

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 11 2005
Jalaja Repalle
Abstract A reliability-based optimization method is developed for preform shape design in forging. Forging is a plastic deformation process that transforms a simply shaped workpiece into a predetermined complex shape through a number of intermediate shapes by the application of compressive forces. Traditionally, these intermediate shapes are designed in a deterministic manufacturing domain. In reality, there exist various uncertainties in the forging environment, such as variations in process conditions, billet/die temperatures, and material properties. Randomness in these parameters could lead to variations in product quality and often induce heavy manufacturing losses. In this research, a robust preform design methodology is developed in which the various sources of randomness in the parameters are quantified and incorporated through reliability analysis and uncertainty quantification techniques. The stochastic response surface approach is used to reduce computation time by establishing a relationship between the process performance and the shape and random parameters. Finally, reliability-based optimization is utilized for preform shape design of an engine component to improve the product quality and robustness. Copyright © 2005 John Wiley & Sons, Ltd. [source]


Altered mating behaviour in a Cry1Ac-resistant strain of Helicoverpa armigera (Lepidoptera: Noctuidae)

JOURNAL OF APPLIED ENTOMOLOGY, Issue 5 2008
X. C. Zhao
Abstract Randomness of mating between susceptible and resistant individuals is a major factor that closely relates to the refuge strategy of resistance management for Helicoverpa armigera (Hübner) to Bacillus thuringiensis cotton. The mating behaviour of Cry1Ac-susceptible and Cry1Ac-resistant strains of H. armigera was compared to investigate the randomness of their mating. The percentage of mating was lower for Cry1Ac-resistant H. armigera compared with that of the susceptible strain under both no-choice and multiple-choice conditions. The low percentage of mating in the resistant strain indicates a reduced incidence of successful mating. The percentage of spermatophore-containing mated female H. armigera in the crossing of susceptible females × resistant males was significantly lower than in the crossing of resistant females × susceptible males, but the observed mating frequencies of these two types of cross were similar to each other. This indicates that resistant males reduce the incidence of mating paternity more than they do their mating frequency. The percentages of heterogametic matings (susceptible females × resistant males, resistant females × susceptible males) in the multiple-choice experiment were lower than those of homogametic matings (susceptible × susceptible, resistant × resistant) on peak mating nights. However, the difference between heterogametic and homogametic mating was not significant, indicating that there was random mating between susceptible and resistant strains. The results presented here do not reflect reality in mating associated with Cry1Ac resistance but can provide insight into variable expression. [source]


Modeling Randomness in Judging Rating Scales with a Random-Effects Rating Scale Model

JOURNAL OF EDUCATIONAL MEASUREMENT, Issue 4 2006
Wen-Chung Wang
This study presents the random-effects rating scale model (RE-RSM) which takes into account randomness in the thresholds over persons by treating them as random effects and adding a random variable for each threshold in the rating scale model (RSM) (Andrich, 1978). The RE-RSM turns out to be a special case of the multidimensional random coefficients multinomial logit model (MRCMLM) (Adams, Wilson, & Wang, 1997) so that the estimation procedures for the MRCMLM can be directly applied. The results of the simulation indicated that when the data were generated from the RSM, using the RSM and the RE-RSM to fit the data made little difference, with both resulting in accurate parameter recovery. When the data were generated from the RE-RSM, using the RE-RSM to fit the data resulted in unbiased estimates, whereas using the RSM resulted in biased estimates, large fit statistics for the thresholds, and inflated test reliability. An empirical example of 10 items with four-point rating scales is presented, in which four models were compared: the RSM, the RE-RSM, the partial credit model (Masters, 1982), and the constrained random-effects partial credit model. In this real data set, the need for a random-effects formulation becomes clear. [source]
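As a hedged sketch of the two formulations being compared (notation assumed here, not taken verbatim from the paper), the RSM and its random-effects extension can be written as:

```latex
% Rating scale model (Andrich, 1978): person ability \theta_n, item location \delta_i,
% thresholds \tau_1, \dots, \tau_K shared by all items (empty sum for k = 0)
P(X_{ni} = k) =
  \frac{\exp\bigl(k(\theta_n - \delta_i) - \sum_{j=1}^{k} \tau_j\bigr)}
       {\sum_{l=0}^{K} \exp\bigl(l(\theta_n - \delta_i) - \sum_{j=1}^{l} \tau_j\bigr)},
  \qquad k = 0, 1, \dots, K.

% RE-RSM: each threshold receives a person-specific random effect,
% so the thresholds vary over persons
\tau_{nj} = \tau_j + \varepsilon_{nj}, \qquad \varepsilon_{nj} \sim N(0, \sigma_j^2).
```

Setting all the variances \sigma_j^2 to zero recovers the ordinary RSM, which is why the head-to-head fit comparison in the simulation study is well defined.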


An Improved Non-Isothermal Kinetic Model for Prediction of Extent of Transesterification Reaction and Degree of Randomness in PET/PEN Blends

MACROMOLECULAR THEORY AND SIMULATIONS, Issue 4-5 2008
Mahdi Golriz
Abstract An improved non-isothermal kinetic model was developed based on mass balance and Arrhenius laws using a second-order reversible reaction capable of predicting the extent of transesterification reaction (X) and the degree of randomness (RD) in poly(ethylene terephthalate) (PET)/poly(ethylene naphthalene-2,6-dicarboxylate) (PEN) blends prepared by a twin screw micro-compounder over a full composition range under different processing conditions. The experimental values of X and RD were determined by 1H-NMR, and a direct relationship between them was developed. The model constants were tuned by an optimization method using half of the experimental data, with the other half used for verification. Good agreement was found between the experimental and theoretical data in the verification stage and, therefore, it was concluded that the model is capable of predicting the extent of transesterification reaction with a high level of confidence for a wide range of processing conditions. Consistent with the experimental results, the model showed that, among all parameters affecting the extent of transesterification reaction, time and temperature play the major role, whereas the blend composition does not have a significant role. [source]


Pathological gambling: an increasing public health problem

ACTA PSYCHIATRICA SCANDINAVICA, Issue 4 2001
Article first published online: 7 JUL 200
Gambling has always existed, but only recently has it taken on the endlessly variable and accessible forms we know today. Gambling takes place when something valuable, usually money, is staked on the outcome of an event that is entirely unpredictable. It was only two decades ago that pathological gambling was formally recognized as a mental disorder, when it was included in the DSM-III in 1980. For most people, gambling is a relaxing activity with no negative consequences. For others, however, gambling becomes excessive. Pathological gambling is a disorder that manifests itself through the irrepressible urge to wager money. This disorder ultimately dominates the gambler's life, and has a multitude of negative consequences for both the gambler and the people they interact with, i.e. friends, family members, employers. In many ways, gambling might seem a harmless activity. In fact, it is not the act of gambling itself that is harmful, but the vicious cycle that can begin when a gambler wagers money they cannot afford to lose, and then continues to gamble in order to recuperate their losses. The gambler's 'tragic flaw' of logic lies in their failure to understand that gambling is governed solely by random, chance events. Gamblers fail to recognize this and continue to gamble, attempting to control outcomes by concocting strategies to 'beat the game'. Most, if not all, gamblers try in some way to predict the outcome of a game when they are gambling. A detailed analysis of gamblers' self-verbalizations reveals that most of them behave as though the outcome of the game relied on their personal 'skills'. From the gambler's perspective, skill can influence chance, but in reality, the random nature of chance events is the only determinant of the outcome of the game. The gambler, however, either ignores or simply denies this fundamental rule (1). Experts agree that the social costs of pathological gambling are enormous.
Changes in gaming legislation have led to a substantial expansion of gambling opportunities in most industrialized countries around the world, mainly in Europe, America and Australia. Figures for the United States' leisure economy in 1996 show gross gambling revenues of $47.6 billion, which was greater than the combined revenue of $40.8 billion from film box offices, recorded music, cruise ships, spectator sports and live entertainment (2). Several factors appear to be motivating this growth: the desire of governments to identify new sources of revenue without invoking new or higher taxes; tourism entrepreneurs developing new destinations for entertainment and leisure; and the rise of new technologies and forms of gambling (3). As a consequence, prevalence studies have shown increased gambling rates among adults. It is currently estimated that 1-2% of the adult population gambles excessively (4, 5). Given that the prevalence of gambling is related to the accessibility of gambling activities, and that new forms of gambling are constantly being legalized throughout most western countries, this figure is expected to rise. Consequently, physicians and mental health professionals will need to know more about the diagnosis and treatment of pathological gamblers. This disorder may be under-diagnosed because, clinically, pathological gamblers usually seek help for the problems associated with gambling such as depression, anxiety or substance abuse, rather than for the excessive gambling itself. This issue of Acta Psychiatrica Scandinavica includes the first national survey of problem gambling completed in Sweden, conducted by Volberg et al. (6). This paper is based on a large sample (N=9917) with an impressively high response rate (89%). Two instruments were used to assess gambling activities: the South Oaks Gambling Screen-Revised (SOGS-R) and an instrument derived from the DSM-IV criteria for pathological gambling.
Current (1 year) and lifetime prevalence rates were collected. Results show that 0.6% of the respondents were classified as probable pathological gamblers, and 1.4% as problem gamblers. These data reveal that the prevalence of pathological gamblers in Sweden is significantly less than what has been observed in many western countries. The authors have pooled the rates of problem (1.4%) and probable pathological gamblers (0.6%), to provide a total of 2.0% for the current prevalence. This 2% should be interpreted with caution, however, as we do not have information on the long-term evolution of these subgroups of gamblers; for example, we do not know how many of each subgroup will become pathological gamblers, and how many will decrease their gambling or stop gambling altogether. Until this information is known, it would be preferable to keep in mind that only 0.6% of the Swedish population has been identified as pathological gamblers. In addition, recent studies show that the SOGS-R may be producing inflated estimates of pathological gambling (7). Thus, future research in this area might benefit from the use of an instrument based on DSM criteria for pathological gambling, rather than the SOGS-R only. Finally, the authors suggest in their discussion that the lower rate of pathological gamblers obtained in Sweden compared to many other jurisdictions may be explained by the greater availability of games based on chance rather than games based on skill or a mix of skill and luck. Before accepting this interpretation, researchers will need to demonstrate that the outcomes of all games are determined by factors other than chance and randomness. Many studies have shown that the notion of randomness is the only determinant of gambling (1). Inferring that skill is an important issue in gambling may be misleading. While these are important issues to consider, the Volberg et al. survey nevertheless provides crucial information about gambling in a Scandinavian country.
Gambling will be an important issue over the next few years in Sweden, and the publication of the Volberg et al. study is a landmark for the Swedish community (scientists, industry, policy makers, etc.). This paper should stimulate interesting discussions and inspire new, much-needed scientific investigations of pathological gambling. Guido Bondolfi and Robert Ladouceur, Invited Guest Editors, Acta Psychiatrica Scandinavica. References: 1. Ladouceur R & Walker M. The cognitive approach to understanding and treating pathological gambling. In: Bellack AS, Hersen M, eds. Comprehensive clinical psychology. New York: Pergamon, 1998:588-601. 2. Christiansen EM. Gambling and the American economy. In: Frey JH, ed. Gambling: socioeconomic impacts and public policy. Thousand Oaks, CA: Sage, 1998:556:36-52. 3. Korn DA & Shaffer HJ. Gambling and the health of the public: adopting a public health perspective. J Gambling Stud 2000;15:289-365. 4. Volberg RA. Problem gambling in the United States. J Gambling Stud 1996;12:111-128. 5. Bondolfi G, Osiek C, Ferrero F. Prevalence estimates of pathological gambling in Switzerland. Acta Psychiatr Scand 2000;101:473-475. 6. Volberg RA, Abbott MW, Rönnberg S, Munck IM. Prevalence and risks of pathological gambling in Sweden. Acta Psychiatr Scand 2001;104:250-256. 7. Ladouceur R, Bouchard C, Rhéaume N et al. Is the SOGS an accurate measure of pathological gambling among children, adolescents and adults? J Gambling Stud 2000;16:1-24. [source]


Incremental dynamic analysis for estimating seismic performance sensitivity and uncertainty

EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 2 2010
Dimitrios Vamvatsikos
Abstract Incremental dynamic analysis (IDA) is presented as a powerful tool to evaluate the variability in the seismic demand and capacity of non-deterministic structural models, building upon existing methodologies of Monte Carlo simulation and approximate moment-estimation. A nine-story steel moment-resisting frame is used as a testbed, employing parameterized moment-rotation relationships with non-deterministic quadrilinear backbones for the beam plastic-hinges. The uncertain properties of the backbones include the yield moment, the post-yield hardening ratio, the end-of-hardening rotation, the slope of the descending branch, the residual moment capacity and the ultimate rotation reached. IDA is employed to accurately assess the seismic performance of the model for any combination of the parameters by performing multiple nonlinear time-history analyses for a suite of ground motion records. Sensitivity analyses on both the IDA and the static pushover level reveal the yield moment and the two rotational-ductility parameters to be the most influential for the frame behavior. To propagate the parametric uncertainty to the actual seismic performance we employ (a) Monte Carlo simulation with Latin hypercube sampling, (b) point-estimate and (c) first-order second-moment techniques, thus offering competing methods that represent different compromises between speed and accuracy. The final results provide firm ground for challenging current assumptions in seismic guidelines on using a median-parameter model to estimate the median seismic performance and employing the well-known square-root-sum-of-squares rule to combine aleatory randomness and epistemic uncertainty. Copyright © 2009 John Wiley & Sons, Ltd. [source]
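Latin hypercube sampling, one of the uncertainty-propagation methods named in the abstract, can be sketched minimally as follows (a generic NumPy illustration, not the paper's implementation; the dimension count standing in for the backbone parameters is an assumption):

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """One stratified uniform sample per equal-probability bin, per dimension."""
    u = rng.random((n_samples, n_dims))      # jitter inside each bin
    bins = np.arange(n_samples)[:, None]     # bin indices 0..n-1
    samples = (bins + u) / n_samples         # stratified points in [0, 1)
    for d in range(n_dims):                  # shuffle bins independently per dimension
        rng.shuffle(samples[:, d])
    return samples

rng = np.random.default_rng(0)
pts = latin_hypercube(10, 6, rng)            # e.g. 6 uncertain backbone parameters
# Every column hits each of the 10 equal-width bins exactly once:
print(np.sort((pts[:, 0] * 10).astype(int)))  # → [0 1 2 3 4 5 6 7 8 9]
```

In a study like this, each column of uniforms would then be pushed through the inverse CDF of the corresponding parameter's distribution to obtain stratified parameter samples for the nonlinear analyses.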


Considerations for the use of SADIE statistics to quantify spatial patterns

ECOGRAPHY, Issue 6 2003
Xiangming Xu
The SADIE methodology has recently been developed to quantify spatial patterns, particularly in determining the randomness of observed patterns, estimating patches and gaps, and quantifying correlation between species. We used artificially generated data sets to investigate the relationship of the SADIE pattern and association statistics with patch number and spatial positions of the patches. Results showed that SADIE statistics are critically dependent on the absolute positions of quadrat counts, and not just on their relative positions. In particular, the degree of patchiness increased as the patch was displaced away from the centre. Therefore, the dependence of SADIE statistics on absolute sampling positions should be taken into consideration when using and interpreting SADIE-based results. [source]


Metacommunity patterns of highly diverse stream midges: gradients, chequerboards, and nestedness, or is there only randomness?

ECOLOGICAL ENTOMOLOGY, Issue 5 2005
Jani Heino
Abstract. 1. Several non-random patterns in the distribution of species have been observed, including Clementsian gradients, Gleasonian gradients, nestedness, chequerboards, and evenly spaced gradients. Few studies have examined these patterns simultaneously, although they have often been studied in isolation and contrasted with random distribution of species across sites. 2. This study examined whether assemblages of chironomid midges exhibit any of the idealised distribution patterns as opposed to random distribution of species across sites within the metacommunity context in a boreal drainage system. Analyses were based on stream surveys conducted during three consecutive years. Analytical approaches included ordinations, cluster analysis, null models, and associated randomisation methods. 3. Midge assemblages did not conform to Clementsian gradients, which was evidenced by the absence of clearly definable assemblage types with numerous species exclusive to each assemblage type. Rather, there were signs of continuous Gleasonian variability of assemblage composition, as well as significant nested subset patterns of species distribution. 4. Midge assemblages showed only weak relationships with any of the measured environmental variables, and even these weak environmental relationships varied among years. 5. Midge assemblages did not appear to be structured by competition. This finding was somewhat problematic, however, because the two indices measuring co-occurrence provided rather different signs of distribution patterns. This was probably a consequence of how they actually measure co-occurrence. 6. Although midge assemblages did not show a perfect match with any of the idealised distribution patterns, they nevertheless showed a resemblance to the empirical patterns found previously for several plant and animal groups. [source]


The Effects of Random and Discrete Sampling when Estimating Continuous-Time Diffusions

ECONOMETRICA, Issue 2 2003
Yacine Aït-Sahalia
High-frequency financial data are not only discretely sampled in time but the time separating successive observations is often random. We analyze the consequences of this dual feature of the data when estimating a continuous-time model. In particular, we measure the additional effects of the randomness of the sampling intervals over and beyond those due to the discreteness of the data. We also examine the effect of simply ignoring the sampling randomness. We find that in many situations the randomness of the sampling has a larger impact than the discreteness of the data. [source]


Spatial-temporal marked point processes: a spectrum of stochastic models

ENVIRONMETRICS, Issue 3-4 2010
Eric Renshaw
Abstract Many processes that develop through space and time do so in response not only to their own individual growth mechanisms but also in response to interactive pressures induced by their neighbours. The growth of trees in a forest which compete for light and nutrient resources, for example, provides a classic illustration of this general spatial-temporal growth-interaction process. Not only has its mathematical representation proved to be a powerful tool in the study and analysis of marked point patterns since it may easily be simulated, but it has also been shown to be highly flexible in terms of its application since it is robust with respect to incorrect choice of model selection. Moreover, it is highly amenable to maximum likelihood and least squares parameter estimation techniques. Currently the algorithm comprises deterministic growth and interaction coupled with a stochastic arrival and departure mechanism. So for systems with a fixed number of particles there is an inherent lack of randomness. A variety of different stochastic approaches are therefore presented, from the exact event-time model through to the associated stochastic differential equation, taking in time-increment and tau- and Langevin-leaping approximations en route. The main algorithm is illustrated through application to forest management and high-intensity packing of hard particle systems, and comparisons are made with the established force-biased approach. Copyright © 2009 John Wiley & Sons, Ltd. [source]


Cardiac Dysrhythmia Associated with the Immediate Postictal State after Maximal Electroshock in Freely Moving Rat

EPILEPSIA, Issue 4 2002
Olivier Darbin
Summary: Purpose: Cardiac autonomic changes accompany complex partial seizures and generalized tonic-clonic seizures, and participate, at least partially, in the sudden and unexpected death in epilepsy (SUDEP). The analysis of the heart rate variability (HRV) is one of the simplest ways of providing insight into autonomic functions. The entropy quantifies the repetition of complex patterns in a signal and reflects a system's randomness, regularity, and predictability. Clinical investigations have reported that entropy decreases in patients with a high risk of sudden cardiac death. The goal of this study was to evaluate the effects of the maximal electroshock (MES) on the entropy of HRV, monitored in the immediate postictal stage in the model of the freely moving rat. Methods: Entropy changes were correlated with the high and low frequencies of spectral analysis, which reflect the participation of the sympathetic and parasympathetic activities. Results: MES-induced arrhythmia is characterized by an HRV increase, an imbalance in favor of the parasympathetic activity, and a decrease in the entropy. Entropy decrease was restricted to the duration of the arrhythmia, suggesting that the postictal arrhythmia may be associated with a higher risk of lethal cardiac complications. Nevertheless, entropy changes did not correlate with spectral changes. Conclusions: The results suggest that the imbalance demonstrated in the spectral domain explains only partially the contribution of each autonomic system in the complexity of the heart rate during the postictal state. [source]
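As an illustration of entropy as a regularity measure for a signal such as HRV, here is a minimal sample-entropy sketch (a generic formulation, not necessarily the estimator used in the study; the test signals are synthetic stand-ins):

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy: -ln of the conditional probability that runs matching
    for m points (within tolerance r * std) still match at the (m+1)th point."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def match_count(length):
        t = np.array([x[i:i + length] for i in range(len(x) - length)])
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
        return np.sum(d <= tol) - len(t)   # ordered pairs i != j within tolerance

    return -np.log(match_count(m + 1) / match_count(m))

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0.0, 20.0 * np.pi, 500))   # predictable signal
irregular = rng.normal(size=500)                        # white noise
# Low entropy means a regular, predictable signal; high entropy means a random
# one - the property the study uses to grade heart-rate complexity.
print(sample_entropy(regular) < sample_entropy(irregular))  # → True
```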


Optimal observability of sustained stochastic competitive inhibition oscillations at organellar volumes

FEBS JOURNAL, Issue 1 2006
Kevin L. Davis
When molecules are present in small numbers, such as is frequently the case in cells, the usual assumptions leading to differential rate equations are invalid and it is necessary to use a stochastic description which takes into account the randomness of reactive encounters in solution. We display a very simple biochemical model, ordinary competitive inhibition with substrate inflow, which is only capable of damped oscillations in the deterministic mass-action rate equation limit, but which displays sustained oscillations in stochastic simulations. We define an observability parameter, which is essentially just the ratio of the amplitude of the oscillations to the mean value of the concentration. A maximum in the observability is seen as the volume is varied, a phenomenon we name system-size observability resonance by analogy with other types of stochastic resonance. For the parameters of this study, the maximum in the observability occurs at volumes similar to those of bacterial cells or of eukaryotic organelles. [source]
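The stochastic description referred to above is commonly realised with Gillespie's direct method. A minimal sketch for a much simpler inflow/removal system (not the competitive-inhibition model of the paper) shows the copy-number fluctuations that the deterministic rate equation averages away:

```python
import numpy as np

def gillespie_birth_death(k_in, k_out, s0, t_end, rng):
    """Gillespie direct method for substrate inflow (0 -> S, propensity k_in)
    and first-order removal (S -> 0, propensity k_out * S)."""
    t, s = 0.0, s0
    times, counts = [t], [s]
    while t < t_end:
        a_in, a_out = k_in, k_out * s
        a_total = a_in + a_out
        t += rng.exponential(1.0 / a_total)   # waiting time to the next event
        if rng.random() < a_in / a_total:     # pick which reaction fired
            s += 1
        else:
            s -= 1
        times.append(t)
        counts.append(s)
    return np.array(times), np.array(counts)

rng = np.random.default_rng(1)
_, counts = gillespie_birth_death(k_in=50.0, k_out=1.0, s0=0, t_end=20.0, rng=rng)
# The deterministic rate equation settles at k_in/k_out = 50 molecules; the
# stochastic trajectory keeps fluctuating around that value (std ~ sqrt(50)).
print(int(counts[-1]))
```

At organellar volumes the molecule numbers are small enough that such fluctuations can dominate, which is the regime in which the paper's sustained oscillations appear.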


Review and comparison between the Wells-Riley and dose-response approaches to risk assessment of infectious respiratory diseases

INDOOR AIR, Issue 1 2010
G. N. Sze To
Abstract Infection risk assessment is very useful in understanding the transmission dynamics of infectious diseases and in predicting the risk of these diseases to the public. Quantitative infection risk assessment can provide quantitative analysis of disease transmission and the effectiveness of infection control measures. The Wells-Riley model has been extensively used for quantitative infection risk assessment of respiratory infectious diseases in indoor premises. Some newer studies have also proposed the use of dose-response models for this purpose. This study reviews and compares these two approaches to infection risk assessment of respiratory infectious diseases. The Wells-Riley model allows quick assessment and does not require interspecies extrapolation of infectivity. Dose-response models can consider other disease transmission routes in addition to the airborne route and can calculate the infectious source strength of an outbreak in terms of the quantity of the pathogen rather than a hypothetical unit. Spatial distribution of airborne pathogens is one of the most important factors in infection risk assessment of respiratory disease. Respiratory deposition of aerosol induces heterogeneous infectivity of intake pathogens and randomness in the intake dose, which are not well accounted for in current risk models. Some suggestions for further development of the risk assessment models are proposed. Practical Implications This review article summarizes the strengths and limitations of the Wells-Riley and the dose-response models for risk assessment of respiratory diseases. Even with many efforts by various investigators to develop and modify the risk assessment models, some limitations still persist. This review serves as a reference for further development of infection risk assessment models of respiratory diseases. The Wells-Riley model and dose-response model offer specific advantages.
Risk assessors can select the approach that is suitable to their particular conditions to perform risk assessment. [source]
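For reference, the Wells-Riley equation estimates infection probability from the number of infectors I, the quanta generation rate q, the occupants' breathing rate p, the exposure time t, and the outdoor-air ventilation rate Q. The parameter values below are illustrative only, not taken from the review:

```python
import math

def wells_riley_infection_risk(infectors, quanta_rate, breathing_rate,
                               exposure_time, ventilation_rate):
    """Wells-Riley: P = 1 - exp(-I*q*p*t/Q), where a 'quantum' is the
    hypothetical infectious dose unit the abstract refers to."""
    return 1.0 - math.exp(-infectors * quanta_rate * breathing_rate
                          * exposure_time / ventilation_rate)

# Illustrative numbers: one infector emitting 10 quanta/h, occupants breathing
# 0.6 m^3/h, 2 h of exposure, 120 m^3/h of outdoor air supplied to the room.
risk = wells_riley_infection_risk(1, 10.0, 0.6, 2.0, 120.0)
print(f"{risk:.3f}")  # → 0.095
```

The well-mixed-room assumption behind Q is precisely where the spatial-distribution limitation discussed in the review enters.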


Assessing early warning signals of currency crises: a fuzzy clustering approach

INTELLIGENT SYSTEMS IN ACCOUNTING, FINANCE & MANAGEMENT, Issue 4 2006
Shuhua Liu
In the 1990s alone, four waves of financial crises occurred around the world. The repeated occurrence of financial crises stimulated a large number of theoretical and empirical studies on the phenomena, in particular studies on the determinants of or early warning signals of financial crises. Nonetheless, the different studies of early warning systems have achieved mixed results and there remains much room for further investigation. Since, so far, the empirical studies have focused on conventional economic modelling methods such as simplified probabilistic models and regression models, in this study we examine whether new insights can be gained from the application of the fuzzy clustering method. The theories of fuzzy sets and fuzzy logic offer us the means to deal with uncertainties inherent in a wide variety of tasks, especially when the uncertainty is not the result of randomness but the result of unknown factors and relationships that are difficult to explain. They also provide us with the instruments to treat vague and imprecise linguistic values and to model nonlinear relationships. This paper presents empirical results from analysing the Finnish currency crisis in 1992 using the fuzzy C-means clustering method. We first provide the relevant background knowledge and introduce the fuzzy clustering method. We then show how the use of the fuzzy C-means method can help us to identify the critical levels of important economic indicators for predicting financial crises. Copyright © 2007 John Wiley & Sons, Ltd. [source]
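A minimal fuzzy C-means sketch may help fix ideas (the generic algorithm with the usual fuzzifier m = 2; the two-dimensional synthetic data standing in for economic indicators are an assumption, not the study's dataset):

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, n_iter=100, rng=None):
    """Fuzzy C-means: soft memberships u[i, k] minimising
    sum_ik u[i, k]**m * ||x_i - c_k||**2, each row of u summing to 1."""
    rng = rng or np.random.default_rng()
    u = rng.random((X.shape[0], n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        w = u ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]              # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)                  # membership update
    return centers, u

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)),   # stand-in for "tranquil" periods
               rng.normal(3.0, 0.3, (50, 2))])  # stand-in for "crisis" periods
centers, u = fuzzy_c_means(X, n_clusters=2, rng=rng)
print(np.round(np.sort(centers[:, 0]), 1))      # cluster centres near 0 and 3
```

Unlike hard clustering, each observation retains a graded membership in every cluster, which is what lets borderline periods signal an approaching crisis before they tip over.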


More on Marriage, Fertility, and the Distribution of Income*

INTERNATIONAL ECONOMIC REVIEW, Issue 3 2003
Jeremy Greenwood
According to Pareto (1896), the distribution of income depends on "the nature of the people comprising a society, on the organization of the latter, and, also, in part, on chance." In the model developed here the "nature of the people" is captured by attitudes toward marriage, divorce, fertility, and children. Singles search for mates in a marriage market. Married agents bargain about work, and the quantity and quality of children. They can divorce. Social policies, such as child support requirements, reflect the "organization of the (society)." Finally, "chance" is modeled by randomness in income, marriage opportunities, and marital bliss. [source]


Stochastic computational modelling of highly heterogeneous poroelastic media with long-range correlations

INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 1 2004
Diego G. Frias
Abstract The compaction of highly heterogeneous poroelastic reservoirs with the geology characterized by long-range correlations displaying fractal character is investigated within the framework of stochastic computational modelling. The influence of reservoir heterogeneity upon the magnitude of the stresses induced in the porous matrix during fluid withdrawal and rock consolidation is analysed by performing ensemble averages over realizations of a log-normally distributed stationary random hydraulic conductivity field. Considering the statistical distribution of this parameter characterized by a coefficient of variation governing the magnitude of heterogeneity and a correlation function which decays with a power-law scaling behaviour, we show that the combination of these two effects results in an increase in the magnitude of effective stresses of the rock during reservoir depletion. Further, within the framework of a perturbation analysis we show that the randomness in the hydraulic conductivity gives rise to non-linear corrections in the upscaled poroelastic equations. These corrections are illustrated by a self-consistent recursive hierarchy of solutions of the stochastic poroelastic equations parametrized by a scale parameter representing the fluctuating log-conductivity standard deviation. A classical example of land subsidence caused by fluid extraction of a weak reservoir is numerically simulated by performing Monte Carlo simulations in conjunction with finite element discretizations of the poroelastic equations associated with an ensemble of geologies. Numerical results illustrate the effects of the spatial variability and fractal character of the permeability distribution upon the evolution of the Mohr-Coulomb function of the rock. Copyright © 2004 John Wiley & Sons, Ltd. [source]
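A toy Monte Carlo illustration of why log-normal conductivity heterogeneity matters (uncorrelated cells and 1D series flow only, unlike the long-range correlated fields and full poroelastic coupling of the paper):

```python
import numpy as np

def effective_k_series(k):
    """Harmonic mean: effective conductivity of cells in series (1D flow)."""
    return len(k) / np.sum(1.0 / k)

rng = np.random.default_rng(42)
sigma = 1.0                 # std of log-conductivity (magnitude of heterogeneity)
k_eff = np.mean([
    effective_k_series(np.exp(rng.normal(0.0, sigma, 1000)))  # one realisation
    for _ in range(200)     # ensemble average over realisations
])
# For log-normal K the harmonic mean tends to exp(-sigma**2 / 2) ~ 0.61, well
# below the arithmetic mean exp(+sigma**2 / 2) ~ 1.65: heterogeneity lowers the
# effective conductivity, slowing drainage during depletion.
print(round(float(k_eff), 2))
```

The gap between the ensemble-averaged effective value and the naive mean is the kind of upscaling correction that the paper's perturbation hierarchy makes systematic.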


An improved perturbation method for stochastic finite element model updating

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 13 2008
X. G. Hua
Abstract In this paper, an improved perturbation method is developed for the statistical identification of structural parameters by using the measured modal parameters with randomness. On the basis of the first-order perturbation method and sensitivity-based finite element (FE) model updating, two recursive systems of equations are derived for estimating the first two moments of random structural parameters from the statistics of the measured modal parameters. A regularization technique is introduced to alleviate the ill-conditioning in solving the equations. The numerical studies of stochastic FE model updating of a truss bridge are presented to verify the improved perturbation method under three different types of uncertainties, namely natural randomness, measurement noise, and the combination of the two. The results obtained using the perturbation method are in good agreement with, although less accurate than, those obtained using the Monte Carlo simulation (MCS) method. It is also revealed that neglecting the correlation of the measured modal parameters may result in an unreliable estimation of the covariance matrix of the updating parameters. The statistically updated FE model enables structural design and analysis, damage detection, condition assessment, and evaluation in the framework of probability and statistics. Copyright © 2007 John Wiley & Sons, Ltd. [source]


A test of homogeneity for autoregressive processes

INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 3 2002
Rafael Martínez Pedro Gómez
Abstract In this paper, we introduce a new hypothesis test to determine whether or not two spectral densities are proportional. We deliberately limit our study to autoregressive processes and derive the asymptotic behaviour of the test. A test for autoregressive coefficient nullity or randomness is deduced. We derive the asymptotic behaviour of these tests and demonstrate the usefulness of our test for detecting speech in a noisy environment. Copyright © 2002 John Wiley & Sons, Ltd. [source]
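A simplified stand-in for a coefficient-nullity test can illustrate the idea: fit an AR(1) coefficient by least squares and compare a normalized statistic against its asymptotic null distribution. This is not the authors' test — the sketch below, with its invented series, only shows the principle that under white noise the scaled coefficient estimate is asymptotically standard normal.

```python
import numpy as np

rng = np.random.default_rng(2)

def ar1_coefficient_test(x):
    """Least-squares AR(1) fit with an approximate z-statistic for phi = 0.

    Under white noise, z is approximately standard normal, so large |z|
    indicates a non-null autoregressive coefficient (non-randomness).
    """
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    phi = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
    z = phi * np.sqrt(len(x))
    return phi, z

# White-noise series: the test should not reject phi = 0.
white = rng.normal(size=2000)
phi_w, z_w = ar1_coefficient_test(white)

# Strongly autocorrelated AR(1) series: the test should reject phi = 0.
ar = np.empty(2000)
ar[0] = 0.0
for t in range(1, 2000):
    ar[t] = 0.7 * ar[t - 1] + rng.normal()
phi_a, z_a = ar1_coefficient_test(ar)
```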


Spatial dynamics of predation by carabid beetles on slugs

JOURNAL OF ANIMAL ECOLOGY, Issue 3 2000
David A. Bohan
Summary 1. An explicitly spatial sampling approach was employed to test the null hypothesis that the predation on slugs by the carabid beetle Pterostichus melanarius (Illiger) was opportunistic. 2. The beetles and slugs were sampled across a nested series of grids of sampling points, in a field of winter wheat during June and July 1997. 3. The spatial distribution of all slugs in June was found to change with the scale of the sampling grid, from random on the 0.25 m scale, through aggregation at 1 m, to random at 4 m. At the highest scale of 16 m, the slugs were significantly spatially aggregated. 4. The distribution of beetles in June was also spatially dynamic, with randomness observed at the 4 m and 8 m scales. At 16 m, significant aggregation was observed. 5. The dynamic distributions of slugs and beetles, at 16 m, were found not to be associated with, and thus were not determined by, soil or crop factors. 6. Comparison of slug and beetle populations showed, however, that the distributions at 16 m were dynamically associated with each other. In June where there were many slugs there were also many carabids, whilst in July where there were many carabids there were few slugs. 7. Approximately 11% of the beetles sampled across the 16 m grid in June and July were found to have ingested slug protein, following intensive enzyme-linked immunosorbent assay (ELISA) testing. 8. The spatial distribution of these slug-positive beetles was significantly associated with the distribution of the larger slug classes, over 25 mg. Where there were many large slugs in June there were many slug-positive beetles. Conversely, in July few large slugs were found where there were many slug-positive beetles. 9. Parametric analysis revealed that these changes in the large slug class, at each sampling point between June and July (growth), were negatively related to the local numbers of slug-positive beetles, and that growth declined as the local numbers of beetles increased. 
10. These findings suggest that predation was not opportunistic, but direct and dynamic, falsifying the null hypothesis. Moreover, this predation elicited significant changes in the spatial distribution and local density of the slugs, in a manner that may be termed spatially density dependent. [source]
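The idea of testing counts for departure from spatial randomness, as in points 3 and 4 above, can be sketched with a generic variance-to-mean (index of dispersion) Monte Carlo test. The counts and simulation settings below are invented for illustration and are not the study's actual analysis; a ratio near 1 is consistent with randomness, a high ratio with aggregation.

```python
import numpy as np

rng = np.random.default_rng(3)

def dispersion_test(counts, n_sim=2000, rng=rng):
    """Monte Carlo test of spatial randomness via the variance-to-mean ratio.

    Counts per sampling point are compared against simulated Poisson fields
    with the same mean; a high ratio indicates aggregation.
    """
    counts = np.asarray(counts)
    observed = counts.var(ddof=1) / counts.mean()
    sims = rng.poisson(counts.mean(), size=(n_sim, counts.size))
    sim_ratios = sims.var(axis=1, ddof=1) / sims.mean(axis=1)
    p = (np.sum(sim_ratios >= observed) + 1) / (n_sim + 1)
    return observed, p

# Hypothetical aggregated counts: most sampling points empty, a few heavily occupied.
aggregated = np.array([0, 0, 0, 0, 12, 0, 0, 9, 0, 0, 0, 14, 0, 0, 0, 11])
ratio, p_value = dispersion_test(aggregated)
```

Applying such a test at each grid scale (0.25 m, 1 m, 4 m, 16 m) is one way to detect the scale-dependent switch between randomness and aggregation described above.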


Altered mating behaviour in a Cry1Ac-resistant strain of Helicoverpa armigera (Lepidoptera: Noctuidae)

JOURNAL OF APPLIED ENTOMOLOGY, Issue 5 2008
X. C. Zhao
Abstract Randomness of mating between susceptible and resistant individuals is a major factor closely related to the refuge strategy for managing resistance of Helicoverpa armigera (Hübner) to Bacillus thuringiensis cotton. The mating behaviour of Cry1Ac-susceptible and Cry1Ac-resistant strains of H. armigera was compared to investigate the randomness of their mating. The percentage of mating was lower for Cry1Ac-resistant H. armigera compared with that of the susceptible strain under both no-choice and multiple-choice conditions. The low percentage of mating in the resistant strain indicates a reduced incidence of successful mating. The percentage of spermatophore-containing mated female H. armigera in the crossing of susceptible females × resistant males was significantly lower than in the crossing of resistant females × susceptible males, but the observed mating frequencies of these two types of cross were similar to each other. This indicates that resistant males reduce the incidence of mating paternity more than they do their mating frequency. The percentages of heterogametic matings (susceptible females × resistant males, resistant females × susceptible males) in the multiple-choice experiment were lower than those of homogametic matings (susceptible × susceptible, resistant × resistant) on peak mating nights. However, the difference between heterogametic and homogametic mating was not significant, indicating random mating between the susceptible and resistant strains. The results presented here may not fully reflect mating in the field associated with Cry1Ac resistance, but they provide insight into its variable expression. [source]


Clustering of RR Intervals Predicts Effective Electrical Cardioversion for Atrial Fibrillation

JOURNAL OF CARDIOVASCULAR ELECTROPHYSIOLOGY, Issue 9 2004
MAARTEN P. VAN DEN BERG M.D.
Introduction: Atrial fibrillation (AF) is characterized by an irregularly irregular ("random") heartbeat. However, controversy exists as to whether the ventricular rhythm in AF is truly random. We investigated randomness by constructing three-dimensional RR interval plots (3D plots), allowing identification of "clustering" of RR intervals. It was hypothesized that electrical cardioversion (ECV) would be more effective in AF patients with clustering, because clustering might reflect a higher degree of organization of atrial fibrillatory activity. Methods and Results: The study group consisted of 66 patients (44 men and 22 women; mean age 68 ± 11 years) who were referred for ECV because of persistent AF. Twenty-four-hour Holter recordings were used to construct 3D plots by plotting each RR interval (x axis) against the previous RR interval (y axis) and the number of occurrences of each of these x–y combinations (z axis). A clustering index was calculated as the percentage of beats within the peaks in the 3D plot. Based on the 3D plots, clustering of RR intervals was present in 31 (47%) of the 66 patients. ECV was effective in restoring sinus rhythm in 29 (94%) of these 31 patients, whereas sinus rhythm was restored in only 25 (71%) of the remaining 35 patients without clustering (P = 0.020). The clustering index ranged from <2% in the 12 patients with failed ECV to >8% in the 32 patients with sinus rhythm at the end of the study (4 weeks after the ECV); the clustering index in the 22 patients with a relapse of AF after effective ECV was intermediate (P = 0.034 and P = 0.042, respectively). Conclusion: This study indicates that ECV is more effective in restoring sinus rhythm in AF patients with clustering compared to patients in whom no clustering is apparent on 3D plots. In addition, the degree of clustering appears to be predictive of the overall outcome of ECV; the higher the degree of clustering, the higher the likelihood of sinus rhythm at follow-up. [source]
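A clustering index of this kind can be approximated in a few lines: build the (RR_n, RR_n+1) histogram and report the percentage of beats falling in bins that stand out as peaks. The bin width, peak criterion, and synthetic RR series below are assumptions for illustration, not the authors' exact definition.

```python
import numpy as np

rng = np.random.default_rng(4)

def clustering_index(rr, bin_ms=25, peak_factor=4.0):
    """Percentage of beats falling into peaks of the (RR_n, RR_n+1) histogram.

    Bins whose count exceeds peak_factor times the mean occupied-bin count
    are treated as peaks; both thresholds are illustrative assumptions.
    """
    x, y = rr[:-1], rr[1:]
    edges = np.arange(rr.min(), rr.max() + bin_ms, bin_ms)
    hist, _, _ = np.histogram2d(x, y, bins=[edges, edges])
    occupied = hist[hist > 0]
    peaks = hist > peak_factor * occupied.mean()
    return 100.0 * hist[peaks].sum() / hist.sum()

# Synthetic series: fully irregular intervals vs. a series in which 60% of
# beats fall near a strongly preferred interval of 800 ms.
irregular = rng.uniform(400, 1200, size=5000)
clustered = np.where(rng.random(5000) < 0.6,
                     800 + rng.normal(0, 5, 5000),
                     rng.uniform(400, 1200, 5000))
ci_irregular = clustering_index(irregular)
ci_clustered = clustering_index(clustered)
```

Ranking patients by such an index is the kind of stratification the study uses to relate RR clustering to ECV outcome.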


Underlying cognitions in the selection of lottery tickets

JOURNAL OF CLINICAL PSYCHOLOGY, Issue 6 2001
Karen K. Hardoon
There is evidence that the faulty cognitions underlying an individual's playing behavior maintain and support their gambling behavior. Sixty undergraduate students completed the South Oaks Gambling Screen (SOGS), a measure to assess pathological gambling, and a questionnaire ascertaining the type and frequency of their gambling activities. Sixteen Loto 6/49 tickets were presented to participants and ranked according to their perceived likelihood of being the winning ticket. The numbers on the tickets were categorized as: long sequences (e.g., 1,2,3,4,5,6), patterns and series in a pseudo-psychological order (e.g., 16,21,26,31,36,41), unbalanced (e.g., six numbers from 1–24 or 25–49), and those appearing to be random (e.g., 11,14,20,29,37,43). Verbal protocols of ticket selections were classified into eight heuristics. Results revealed that for the entire sample the greatest percentage of tickets chosen for the first four selections were "random" tickets. Further, the most commonly cited reason for selecting and changing a lottery ticket was perceived randomness. The results are discussed with reference to the cognitions used when purchasing lottery tickets. © 2001 John Wiley & Sons, Inc. J Clin Psychol 57: 749–763, 2001. [source]


Fine-scale spatial structure in a grassland community: quantifying the plant's-eye view

JOURNAL OF ECOLOGY, Issue 1 2002
D. W. Purves
Summary 1. The fine-scale spatial patterns of Agrostis stolonifera, Holcus lanatus and Lolium perenne were recorded in an English lowland grassland as presence/absence maps from 400-cell quadrats at two different scales (2 × 2 cm or 8 × 8 cm cells). 2. Local spatial structure in these patterns was quantified using spatial covariance functions. Distance- and direction-dependent components were examined separately for both intra- and interspecific patterns. The significance of departures from randomness was determined using Monte Carlo techniques. 3. The smaller-scale data showed that all three species were significantly aggregated, Agrostis to a greater distance (8 cm) than Holcus or Lolium (4 cm). The intensity of aggregation decreased in the order Lolium > Holcus > Agrostis. The larger-scale data suggested that this aggregation extended to greater distances, and that it was most intense in Agrostis. 4. Despite the lack of visual directionality in the environment, Agrostis showed a directional pattern at both scales, with Lolium varying in the same direction at the larger scale. 5. Only Agrostis and Lolium showed a significant interspecific relationship (segregated to 2 cm at the small scale, but aggregated to 8 cm at the larger scale). There was no evidence of directionality in the interspecific components of pattern. 6. The nature of spatial structure appears to depend on the scale of observation, but the smaller-scale data are more likely to provide a biologically interpretable measure of local spatial structure in this community. [source]
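Monte Carlo significance testing of aggregation in a presence/absence grid, of the kind mentioned in point 2, can be illustrated with a join-count randomization: count occupied-occupied neighbour pairs and compare against random permutations of cell occupancy. This is a generic sketch with a synthetic grid, not the covariance-function machinery actually used in the study.

```python
import numpy as np

rng = np.random.default_rng(5)

def join_count_test(grid, n_sim=999, rng=rng):
    """Monte Carlo test for aggregation in a presence/absence grid.

    The statistic counts occupied-occupied neighbour pairs (rook adjacency);
    significance comes from randomly permuting cell occupancy.
    """
    def joins(g):
        return np.sum(g[:, :-1] & g[:, 1:]) + np.sum(g[:-1, :] & g[1:, :])

    observed = joins(grid)
    flat = grid.ravel().copy()
    sim = np.empty(n_sim)
    for i in range(n_sim):
        rng.shuffle(flat)
        sim[i] = joins(flat.reshape(grid.shape))
    p = (np.sum(sim >= observed) + 1) / (n_sim + 1)
    return observed, p

# Strongly aggregated pattern: one solid occupied block in a 20 x 20 quadrat.
grid = np.zeros((20, 20), dtype=int)
grid[5:11, 5:11] = 1
obs, p_value = join_count_test(grid)
```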


Modeling Randomness in Judging Rating Scales with a Random-Effects Rating Scale Model

JOURNAL OF EDUCATIONAL MEASUREMENT, Issue 4 2006
Wen-Chung Wang
This study presents the random-effects rating scale model (RE-RSM), which takes into account randomness in the thresholds over persons by treating them as random effects and adding a random variable for each threshold in the rating scale model (RSM) (Andrich, 1978). The RE-RSM turns out to be a special case of the multidimensional random coefficients multinomial logit model (MRCMLM) (Adams, Wilson, & Wang, 1997), so that the estimation procedures for the MRCMLM can be directly applied. The results of the simulation indicated that when the data were generated from the RSM, using the RSM and the RE-RSM to fit the data made little difference: both resulted in accurate parameter recovery. When the data were generated from the RE-RSM, using the RE-RSM to fit the data resulted in unbiased estimates, whereas using the RSM resulted in biased estimates, large fit statistics for the thresholds, and inflated test reliability. An empirical example of 10 items with four-point rating scales is presented in which four models are compared: the RSM, the RE-RSM, the partial credit model (Masters, 1982), and the constrained random-effects partial credit model. In this real data set, the need for a random-effects formulation becomes clear. [source]
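The fixed-effects RSM that the random-effects model extends can be written down compactly. The sketch below computes category probabilities for one person-item encounter under Andrich's rating scale model; the parameter values are illustrative, and the random-effects extension would add a person-specific normal perturbation to each threshold.

```python
import numpy as np

def rsm_probabilities(theta, delta, taus):
    """Category probabilities under Andrich's rating scale model.

    theta: person ability; delta: item location; taus: threshold parameters
    shared across items. Category k (k = 0..m) has log-odds built from the
    cumulative sum of (theta - delta - tau_j) over thresholds j <= k.
    """
    taus = np.asarray(taus, dtype=float)
    exponents = np.concatenate(([0.0], np.cumsum(theta - delta - taus)))
    probs = np.exp(exponents - exponents.max())  # stabilized softmax
    return probs / probs.sum()

# Illustrative values: a four-point scale has three thresholds.
p = rsm_probabilities(theta=0.5, delta=0.0, taus=[-1.0, 0.0, 1.0])
```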


Does disturbance affect the structure of tropical fish assemblages?

JOURNAL OF FISH BIOLOGY, Issue 2 2007
A test using null models
Null model analyses of fish presence-absence data from tropical river assemblages of the Western Ghats, India, revealed a difference in the extent of randomness in species assemblage structure between impacted and unimpacted rivers in the region. [source]


Evaluating animal welfare with choice experiments: an application to Swedish pig production

AGRIBUSINESS : AN INTERNATIONAL JOURNAL, Issue 1 2008
Carolina Liljenstolpe
In this study, the demand for animal welfare attributes when buying pork fillet is investigated among Swedish respondents. The issue is important for ensuring an economically viable pig industry while an increasing number of animal-friendly practices are applied. In order to obtain information about consumer demand, an indirect utility function and willingness to pay (WTP) for animal welfare attributes are estimated. The attributes are solely associated with animal-friendly practices; an investigation spanning this many housing and managerial practices of pig production has not previously been performed. The indirect utility function is estimated using a random parameter logit model, since a realistic approach when modeling consumer choice is to allow for heterogeneity in preferences. The relevance of assuming randomness of some of the parameters is evaluated using a specification test developed by McFadden and Train (2000). The WTP is also estimated at the individual level. The results indicate that WTP for animal welfare attributes may be negative or positive. The preferences are also heterogeneous among respondents, which may be explained by a segmentation of preferences. Finally, the WTP estimates for animal welfare practices are compared with cost estimates for such production systems. [Econlit subject codes: C010, C500, Q100] © 2008 Wiley Periodicals, Inc. [source]
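The link between a random attribute coefficient and a distribution of WTP, including respondents with negative WTP, can be sketched by simulation. The coefficients below are invented; the only substantive point is the standard construction in which WTP for an attribute is minus the attribute coefficient divided by the price coefficient.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical coefficients: price is fixed across respondents, while the
# welfare-attribute coefficient is normally distributed over respondents,
# mimicking a random parameter (mixed) logit specification.
beta_price = -2.0
mu_welfare, sd_welfare = 1.0, 0.8

# Simulated respondent-level draws of the attribute coefficient.
draws = rng.normal(mu_welfare, sd_welfare, size=100000)

# Mean WTP for the attribute: E[-beta_welfare / beta_price].
wtp_mean = np.mean(-draws / beta_price)

# Share of respondents whose WTP for the attribute is negative.
share_negative = np.mean(draws < 0)
```

With these assumed values, mean WTP is positive, yet roughly a tenth of respondents dislike the attribute — the kind of sign heterogeneity the study reports.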


Stochastic modeling of particle motion along a sliding conveyor

AICHE JOURNAL, Issue 1 2010
Kevin Cronin
Abstract The sliding conveyor consists of a plane surface, known as the track, along which particles are induced to move by vibrating the bed sinusoidally with respect to time. The forces on the particle include gravity, the bed reaction force, and friction. Because friction coefficients are inherently variable, particle motion along the bed is erratic and unpredictable. A deterministic model of particle motion (where friction is considered known and invariant) is selected and its output validated by experiment. Two probabilistic solution techniques are then developed and applied to the deterministic model to account for the randomness present. The two methods consider particle displacement to be represented by discrete-time and continuous-time random processes, respectively, and permit analytical solutions for the mean and variance of displacement versus time to be found. These are compared with experimental measurements of particle motion. Ultimately, this analysis can be employed to calculate residence-time distributions for such items of process equipment. © 2009 American Institute of Chemical Engineers AIChE J, 2010 [source]
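The discrete-time random-process view of displacement can be sketched as follows. The per-cycle step statistics are invented for illustration (they stand in for friction variability and are not fitted to the conveyor experiments); the point is the linear growth of the mean and variance of displacement with cycle count that analytical solutions of such models predict.

```python
import numpy as np

rng = np.random.default_rng(7)

# Per-cycle particle advance modelled as an independent random increment;
# mean and spread are illustrative stand-ins for friction variability.
mean_step, sd_step = 2.0, 0.6   # mm per vibration cycle
n_cycles, n_particles = 500, 5000

steps = rng.normal(mean_step, sd_step, size=(n_particles, n_cycles))
displacement = steps.cumsum(axis=1)  # running displacement of each particle

# For an i.i.d.-increment process, mean and variance both grow linearly:
# E[X_n] = n * mean_step, Var[X_n] = n * sd_step**2.
mean_final = displacement[:, -1].mean()
var_final = displacement[:, -1].var()
```

The spread of `displacement[:, -1]` across particles is exactly the kind of quantity from which a residence-time distribution for the conveyor could be built.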


Methods of studying the planar distribution of objects in histological sections of brain tissue

JOURNAL OF MICROSCOPY, Issue 3 2006
R. A. ARMSTRONG
Summary This article reviews the statistical methods that have been used to study the planar distribution, and especially clustering, of objects in histological sections of brain tissue. The objective of these studies is usually quantitative description, comparison between patients or correlation between histological features. Objects of interest such as neurones, glial cells, blood vessels or pathological features such as protein deposits appear as sectional profiles in a two-dimensional section. These objects may not be randomly distributed within the section but exhibit a spatial pattern, a departure from randomness either towards regularity or clustering. The methods described include simple tests of whether the planar distribution of a histological feature departs significantly from randomness using randomized points, lines or sample fields and more complex methods that employ grids or transects of contiguous fields, and which can detect the intensity of aggregation and the sizes, distribution and spacing of clusters. The usefulness of these methods in understanding the pathogenesis of neurodegenerative diseases such as Alzheimer's disease and Creutzfeldt-Jakob disease is discussed. [source]
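One of the simplest diagnostics in this family, an edge-uncorrected Clark-Evans nearest-neighbour ratio, can be sketched as follows. The point patterns are synthetic and the method is a generic illustration of testing for departure from planar randomness, not one drawn specifically from the article.

```python
import numpy as np

rng = np.random.default_rng(8)

def clark_evans_ratio(points, area):
    """Clark-Evans nearest-neighbour ratio R for a planar point pattern.

    R is near 1 for randomness, below 1 for clustering, above 1 for
    regularity. Edge effects are ignored in this simplified sketch.
    """
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude self-distances
    observed = d.min(axis=1).mean()      # mean nearest-neighbour distance
    expected = 0.5 / np.sqrt(len(pts) / area)  # expectation under randomness
    return observed / expected

# Synthetic "sections": randomly scattered profiles vs. a tight cluster
# of profiles (e.g. protein deposits) in a 100 x 100 field.
random_pts = rng.uniform(0, 100, size=(300, 2))
clustered_pts = rng.normal(50, 3, size=(300, 2))
R_random = clark_evans_ratio(random_pts, area=100 * 100)
R_clustered = clark_evans_ratio(clustered_pts, area=100 * 100)
```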


An interactive fuzzy satisficing method for multiobjective stochastic linear programming problems using chance constrained conditions

JOURNAL OF MULTI CRITERIA DECISION ANALYSIS, Issue 3 2002
Masatoshi Sakawa
Abstract Two major approaches have been developed to deal with the randomness or ambiguity involved in mathematical programming problems: stochastic programming approaches and fuzzy programming approaches. In this paper, we focus on multiobjective linear programming problems with random variable coefficients in the objective functions and/or constraints. Using chance constrained programming techniques, the stochastic programming problems are transformed into deterministic ones. As a fusion of the stochastic and fuzzy approaches, an interactive fuzzy satisficing method is presented that, after the fuzzy goals of the decision maker have been determined, derives a satisficing solution for the decision maker by updating the reference membership levels. Copyright © 2003 John Wiley & Sons, Ltd. [source]
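The chance-constrained transformation can be illustrated in one dimension: for a normally distributed constraint coefficient, the probabilistic constraint P(a·x ≤ b) ≥ α has a closed-form deterministic equivalent built from the normal quantile. All numbers below are assumptions for illustration.

```python
from statistics import NormalDist

# Chance constraint P(a * x <= b) >= alpha with a ~ Normal(mu, sigma).
# For x >= 0 this reduces to the deterministic constraint
#   mu * x + z_alpha * sigma * x <= b,
# where z_alpha is the alpha-quantile of the standard normal.
mu, sigma = 2.0, 0.5   # assumed coefficient distribution
alpha = 0.95           # required satisfaction probability
b = 10.0               # right-hand side
z = NormalDist().inv_cdf(alpha)

def satisfies_chance_constraint(x):
    """Deterministic equivalent of the chance constraint for x >= 0."""
    return mu * x + z * sigma * x <= b

# Largest feasible x under the deterministic equivalent.
x_max = b / (mu + z * sigma)
```

In the multiobjective setting of the paper, each random constraint is replaced by such a deterministic equivalent before the interactive fuzzy satisficing step is applied.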


Hydrodynamic investigation of USP dissolution test apparatus II

JOURNAL OF PHARMACEUTICAL SCIENCES, Issue 9 2007
Ge Bai
Abstract The USP Apparatus II is the device commonly used to conduct dissolution testing in the pharmaceutical industry. Despite its widespread use, dissolution testing remains susceptible to significant error and test failures, and limited information is available on the hydrodynamics of this apparatus. In this work, laser-Doppler velocimetry (LDV) and computational fluid dynamics (CFD) were used, respectively, to experimentally map and computationally predict the velocity distribution inside a standard USP Apparatus II under the typical operating conditions mandated by the dissolution test procedure. The flow in the apparatus is strongly dominated by the tangential component of the velocity. Secondary flows consist of an upper and lower recirculation loop in the vertical plane, above and below the impeller, respectively. A low-recirculation zone was observed in the lower part of the hemispherical vessel bottom where the tablet dissolution process takes place. The radial and axial velocities in the region just below the impeller were found to be very small. This is the most critical region of the apparatus since the dissolving tablet will likely be at this location during the dissolution test. The velocities in this region change significantly over short distances along the vessel bottom. This implies that small variations in the location of the tablet on the vessel bottom caused by the randomness of the tablet descent through the liquid are likely to result in significantly different velocities and velocity gradients near the tablet. This is likely to introduce variability in the test. © 2007 Wiley-Liss, Inc. and the American Pharmacists Association J Pharm Sci 96: 2327–2349, 2007 [source]