Histograms


Kinds of Histograms

  • frequency histogram

Terms modified by Histograms

  • histogram analysis

Selected Abstracts


    Comparison of Langmuir Probe and Laser Thomson Scattering Methods in the Electron Temperature Measurement in Divertor Simulator MAP-II

    CONTRIBUTIONS TO PLASMA PHYSICS, Issue 5-6 2006
    A. Okamoto
    Abstract In order to investigate details of an anomaly in the Langmuir probe current (I)–voltage (V) characteristics, electron temperatures and densities are measured by both the Langmuir probe and laser Thomson scattering methods. The electron densities measured with the two methods show good agreement in hydrogen-molecular assisted recombination (H2-MAR) plasmas. On the other hand, the electron temperatures measured with the Langmuir probe are overestimated compared with those obtained from the Thomson scattering spectrum in the H2-MAR plasmas. A histogram of the deviation of the electron current from its average shows that the fluctuation in the electron current becomes large and the histogram becomes distorted under the temperature-overestimated condition, especially when the probe voltage is negatively biased. (© 2006 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


    AV Nodal Pathways in the R-R Interval Histogram of the 24-Hour Monitoring ECG in Patients with Atrial Fibrillation

    ANNALS OF NONINVASIVE ELECTROCARDIOLOGY, Issue 4 2001
    Peter Weismüller M.D.
    Background: Patients with more than one AV nodal pathway show two or more peaks in the histogram of the R-R intervals of the Holter monitoring ECG during atrial fibrillation. The purpose of the present study was to determine the number of patients showing more than one AV nodal pathway in a larger patient group with permanent atrial fibrillation by analyzing the Holter monitoring ECG. Methods: 250 patients with permanent atrial fibrillation during Holter monitoring ECG were studied; 203 patients had structural heart disease. The number of peaks in the R-R interval histogram of each patient was determined. The distribution of the number of peaks in the R-R interval histogram in different patient groups was analyzed. Results: 153 patients (61%) had one peak, 80 patients (32%) two peaks, 13 patients (5%) three peaks, and four patients (2%) four peaks, reflecting the number of different AV nodal pathways. In the different patient groups, in the patients with or without structural heart disease, with coronary heart disease, with a history of syncope, and in patients with a mean heart rate of more than 100/min, there was no significant difference in the distribution of the number of peaks in the R-R interval histogram. Conclusions: In more than one third of all patients with permanent atrial fibrillation there are two, three, or four AV nodal pathways. It is suggested that this number of different AV nodal pathways found in the studied group can be applied to all humans. 38.8% of all patients with permanent atrial fibrillation have more than one AV nodal pathway; 6.4% of all patients with atrial fibrillation would benefit from an ablation of AV nodal pathways with shorter refractory periods for reduction of the heart rate. A.N.E. 2001;6(4):285–289 [source]
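
    A minimal sketch of the kind of peak counting described above, using synthetic R-R intervals rather than Holter data; the bimodal mixture, 25-ms bins and prominence threshold are illustrative assumptions, not the authors' protocol.

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic R-R intervals (ms): a mixture suggesting two conduction pathways
# (hypothetical data; the study used 24-hour Holter recordings).
rng = np.random.default_rng(0)
rr = np.concatenate([rng.normal(600, 40, 4000),   # fast-pathway peak
                     rng.normal(900, 60, 2000)])  # slow-pathway peak

counts, edges = np.histogram(rr, bins=np.arange(300, 1500, 25))
# Require a minimum prominence so noise bumps are not counted as pathways.
peaks, _ = find_peaks(counts, prominence=0.05 * counts.max())
print(f"{len(peaks)} peak(s) in the R-R interval histogram")
```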


    Comparing methods for analysing mortality profiles in zooarchaeological and palaeontological samples

    INTERNATIONAL JOURNAL OF OSTEOARCHAEOLOGY, Issue 6 2005
    T. E. Steele
    Abstract In this study, I examine three methods that are currently used for comparing mortality profiles from zooarchaeological and palaeontological samples: (1) histograms with 10% of life-span age classes; (2) boxplots showing tooth crown height medians; and (3) triangular plots of the proportions of young, prime and old animals. I assess the advantages and disadvantages of each method using data collected on two samples of Northern Yellowstone elk (Cervus elaphus nelsoni) with known, or cementum annuli-determined, ages at death. One sample was hunted by wolves (n = 96), and the other was hunted by recent humans using rifles (n = 226). I tested each method with the known or cementum annuli age distributions and with age estimation techniques appropriate for archaeological assemblages. Histograms are best used when the relationship between dental eruption/attrition and age is well established so that individuals can be confidently assigned into 10% of life-span groups, and when more than 30 or 40 individuals are present in the assemblage. Boxplots employ raw crown heights, thus removing the error introduced by assigning specimens to age classes, and therefore they allow the analysis of species where the relationship between dental eruption/attrition and age is unknown. Confidence intervals around the medians allow samples to be statistically compared. Triangular plots are easy to use and allow multiple samples and species to be considered simultaneously, but samples cannot be statistically compared. Modified triangular plots bootstrap samples to provide 95% confidence ellipses, allowing for statistical comparisons between samples. When possible, samples should be examined using multiple methods to increase confidence in the results. Copyright © 2005 John Wiley & Sons, Ltd. [source]
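
    The modified triangular plots mentioned above rest on bootstrapping the young/prime/old proportions. A minimal sketch of that resampling step, assuming a hypothetical assemblage of 96 individuals; the class probabilities are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical age classes for one assemblage: 0=young, 1=prime, 2=old.
ages = rng.choice([0, 1, 2], size=96, p=[0.3, 0.5, 0.2])

boot = np.empty((2000, 3))
for i in range(2000):
    resample = rng.choice(ages, size=ages.size, replace=True)
    boot[i] = np.bincount(resample, minlength=3) / ages.size

lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
for name, l, h in zip(["young", "prime", "old"], lo, hi):
    print(f"{name}: 95% CI ({l:.2f}, {h:.2f})")
```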


    Effect of moisture and pressure on tablet compaction studied with FTIR spectroscopic imaging

    JOURNAL OF PHARMACEUTICAL SCIENCES, Issue 2 2007
    Noha Elkhider
    Abstract FTIR spectroscopic imaging using a diamond ATR accessory has been applied to examine the influence of moisture and compression pressure on the density and component distribution of compacted pharmaceutical tablets. The model drug and excipient used in this study were ibuprofen and hydroxypropylmethylcellulose (HPMC). Chemical images of the compacted tablets were captured in situ without removing the tablet between measurements. A powder mixture of drug and excipient was subjected, prior to compaction, to a controlled environment using a controlled-humidity cell. Histograms were plotted to assess the density distribution quantitatively. This FTIR spectroscopic imaging approach enabled both measurement of water sorption and enhanced visualization of the density distribution of the compacted tablets. © 2006 Wiley-Liss, Inc. and the American Pharmacists Association J Pharm Sci 96:351–360, 2007 [source]


    Oesophageal pH has a power-law distribution in control and gastro-oesophageal reflux disease subjects

    ALIMENTARY PHARMACOLOGY & THERAPEUTICS, Issue 11-12 2004
    J. D. Gardner
    Summary Background: We are unaware of any solid theoretical or pathophysiological basis for selecting pH 4 or any other pH value to assess oesophageal acid exposure or to define oesophageal reflux episodes. Aim: To examine the frequency of different oesophageal pH values in control and GERD subjects. Methods: Oesophageal pH was measured for 24 h in 57 gastro-oesophageal reflux disease subjects and 26 control subjects. Histograms were constructed using the 21,600 values from each recording and bins of 0.25 pH units. Results: Compared with controls, gastro-oesophageal reflux disease subjects had significantly more low pH values and significantly fewer high pH values. In both gastro-oesophageal reflux disease and control subjects, the frequency of oesophageal pH values was characterized by a power-law distribution, indicating that the same relationship that describes low pH values also describes high pH values, as well as all values in between. Conclusions: The distribution of oesophageal pH values indicates that a variety of different pH values can be used to assess oesophageal acid exposure, but raises important questions regarding how oesophageal reflux episodes are defined. [source]
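
    The histogram construction in the Methods (21,600 samples per 24-h recording, i.e. one every 4 s, in 0.25-pH-unit bins) is easy to reproduce. The sketch below uses a synthetic pH trace (the lognormal shape is an arbitrary stand-in, not the observed distribution) and checks for power-law behaviour by fitting a straight line on log-log axes; the fitting choice is mine, not necessarily the authors' analysis.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical 24-h oesophageal pH trace: 21,600 samples (one per 4 s).
ph = np.clip(rng.lognormal(mean=1.8, sigma=0.25, size=21600), 0.5, 8.5)

bins = np.arange(0.0, 9.0 + 0.25, 0.25)   # 0.25-pH-unit bins, as in the study
counts, edges = np.histogram(ph, bins=bins)
centers = 0.5 * (edges[:-1] + edges[1:])

mask = counts > 0
slope, intercept = np.polyfit(np.log10(centers[mask]), np.log10(counts[mask]), 1)
print(f"log-log slope = {slope:.2f}  (a straight line suggests a power law)")
```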


    Frequency analyses of gastric pH in control and gastro-oesophageal reflux disease subjects treated with a proton-pump inhibitor

    ALIMENTARY PHARMACOLOGY & THERAPEUTICS, Issue 11-12 2004
    J. D. Gardner
    Summary Background: We are unaware of any solid theoretical or pathophysiological basis for selecting pH 4 or any other pH value to assess gastric acidity. Aim: To examine the frequency of different gastric pH values in control and GERD subjects. Methods: Gastric pH was measured for 24 h in 26 control subjects, 26 gastro-oesophageal reflux disease subjects at baseline and the same 26 gastro-oesophageal reflux disease subjects during treatment with a proton-pump inhibitor. Histograms were constructed using the 21,600 values generated from each recording and bins of 0.25 pH units. Results: The distribution of gastric pH values in gastro-oesophageal reflux disease subjects was significantly different from that in controls, and in some instances the distributions detected significant differences that were not detected by integrated acidity. Proton-pump inhibitor treatment significantly altered the distribution of gastric pH values, and the nature of this alteration during the postprandial period was different from that during the nocturnal period. Using time pH > 4 can significantly underestimate the magnitude of inhibition of gastric acidity caused by a proton-pump inhibitor. Conclusions: The distribution of gastric pH values provides a rationale for selecting a particular pH value to assess gastric acidity. In some instances, the distribution of gastric pH values detects significant differences between gastro-oesophageal reflux disease and normal subjects that are not detected by integrated acidity. [source]


    Evaluation of probabilistic prediction systems for a scalar variable

    THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 609 2005
    G. Candille
    Abstract A systematic study is performed of a number of scores that can be used for objective validation of probabilistic prediction of scalar variables: Rank Histograms, Discrete and Continuous Ranked Probability Scores (DRPS and CRPS, respectively). The reliability-resolution-uncertainty decomposition, defined by Murphy for the DRPS, and extended here to the CRPS, is studied in detail. The decomposition is applied to the results of the Ensemble Prediction Systems of the European Centre for Medium-range Weather Forecasts and the National Centers for Environmental Prediction. Comparison is made with the decomposition of the CRPS defined by Hersbach. The possibility of determining an accurate reliability-resolution decomposition of the RPSs is severely limited by the unavoidably (relatively) small number of available realizations of the prediction system. The Hersbach decomposition may be an appropriate compromise between the competing needs for accuracy and practical computability. Copyright © 2005 Royal Meteorological Society. [source]
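
    For a sample-based predictive distribution, the CRPS can be estimated directly from the ensemble members. A minimal sketch using the standard identity CRPS = E|X − y| − ½E|X − X′|; the 51-member Gaussian ensemble is a hypothetical example, not ECMWF or NCEP output, and the decompositions discussed above require aggregating many such cases.

```python
import numpy as np

def crps_ensemble(members: np.ndarray, obs: float) -> float:
    """Empirical CRPS estimate for one forecast/observation pair.

    Uses the identity CRPS = E|X - y| - 0.5 * E|X - X'|, with the
    ensemble members taken as samples of the predictive distribution X.
    """
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

rng = np.random.default_rng(3)
ens = rng.normal(15.0, 2.0, size=51)   # hypothetical 51-member ensemble (deg C)
print(f"CRPS = {crps_ensemble(ens, obs=16.3):.3f}")
```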


    Position-Invariant Neural Network for Digital Pavement Crack Analysis

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 2 2004
    Byoung Jik Lee
    This system includes three neural networks: (1) an image-based neural network, (2) a histogram-based neural network, and (3) a proximity-based neural network. These three neural networks were developed to classify various crack types based on subimages (crack tiles) rather than crack pixels in digital pavement images. These spatial neural networks were trained using artificially generated data following the Federal Highway Administration (FHWA) guidelines. The optimal architecture of each neural network was determined based on testing results from different sets of the number of hidden units, learning coefficients, and the number of training epochs. To validate the system, actual pavement images as well as the computer-generated data were used. The proximity value is determined by computing the relative distribution of crack tiles within the image. The proximity-based neural network effectively searches the patterns of various crack types in both horizontal and vertical directions while maintaining its position invariance. The final result indicates that the proximity-based neural network produced the best result, with an accuracy of 95.2%, despite having the simplest structure and the lowest computing requirements. [source]


    A method of new filter design based on the co-occurrence histogram

    ELECTRICAL ENGINEERING IN JAPAN, Issue 1 2009
    Takayuki Fujiwara
    Abstract We have proposed that the co-occurrence frequency image (CFI) based on the co-occurrence frequency histogram of the gray value of an image can be used in a new scheme for image feature extraction. This paper proposes new enhancement filters to achieve sharpening and smoothing of images. These filters are very similar in result but quite different in process from those which have been used previously. Thus, we show the possibility of a new paradigm for basic image enhancement filters making use of the CFI. © 2008 Wiley Periodicals, Inc. Electr Eng Jpn, 166(1): 36–42, 2009; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/eej.20699 [source]
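
    The co-occurrence frequency histogram underlying the CFI can be accumulated with a few array operations. A sketch for the simplest case, horizontally adjacent pixel pairs at unit displacement; the displacement choice and the random test image are assumptions for illustration, not the paper's filter design.

```python
import numpy as np

def cooccurrence_histogram(img: np.ndarray, levels: int = 256) -> np.ndarray:
    """2-D histogram of horizontally adjacent gray-value pairs.

    Counts how often gray value i occurs immediately left of gray value j;
    CFI-style processing starts from this kind of pairwise statistic.
    """
    left = img[:, :-1].ravel()
    right = img[:, 1:].ravel()
    hist = np.zeros((levels, levels), dtype=np.int64)
    np.add.at(hist, (left, right), 1)   # accumulate pair counts
    return hist

rng = np.random.default_rng(4)
img = rng.integers(0, 256, size=(64, 64), dtype=np.int64)  # hypothetical image
H = cooccurrence_histogram(img)
print(H.sum(), "pairs counted")  # 64 rows x 63 horizontal pairs = 4032
```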


    Geostatistical Prediction and Simulation of Point Values from Areal Data

    GEOGRAPHICAL ANALYSIS, Issue 2 2005
    Phaedon C. Kyriakidis
    The spatial prediction and simulation of point values from areal data are addressed within the general geostatistical framework of change of support (the term support referring to the domain informed by each measurement or unknown value). It is shown that the geostatistical framework (i) can explicitly and consistently account for the support differences between the available areal data and the sought-after point predictions, (ii) yields coherent (mass-preserving or pycnophylactic) predictions, and (iii) provides a measure of reliability (standard error) associated with each prediction. In the case of stochastic simulation, alternative point-support simulated realizations of a spatial attribute reproduce (i) a point-support histogram (Gaussian in this work), (ii) a point-support semivariogram model (possibly including anisotropic nested structures), and (iii) when upscaled, the available areal data. Such point-support-simulated realizations can be used in a Monte Carlo framework to assess the uncertainty in spatially distributed model outputs operating at a fine spatial resolution because of uncertain input parameters inferred from coarser spatial resolution data. Alternatively, such simulated realizations can be used in a model-based hypothesis-testing context to approximate the sampling distribution of, say, the correlation coefficient between two spatial data sets, when one is available at a point support and the other at an areal support. A case study using synthetic data illustrates the application of the proposed methodology in a remote sensing context, whereby areal data are available on a regular pixel support. It is demonstrated that point-support (sub-pixel scale) predictions and simulated realizations can be readily obtained, and that such predictions and realizations are consistent with the available information at the coarser (pixel-level) spatial resolution. [source]


    A view from the bridge: agreement between the SF-6D utility algorithm and the Health Utilities Index

    HEALTH ECONOMICS, Issue 11 2003
    Bernie J. O'Brien
    Abstract Background: The SF-6D is a new health state classification and utility scoring system based on 6 dimensions ('6D') of the Short Form 36, and permits a "bridging" transformation between SF-36 responses and utilities. The Health Utilities Index, mark 3 (HUI3) is a valid and reliable multi-attribute health utility scale that is widely used. We assessed within-subject agreement between SF-6D utilities and those from HUI3. Methods: Patients at increased risk of sudden cardiac death and participating in a randomized trial of implantable defibrillator therapy completed both instruments at baseline. Score distributions were inspected by scatterplot and histogram, and mean score differences were compared by paired t-test. Pearson correlation was computed between instrument scores and also between dimension scores within instruments. Between-instrument agreement was assessed by intra-class correlation coefficient (ICC). Results: SF-6D and HUI3 forms were available from 246 patients. Mean scores for HUI3 and SF-6D were 0.61 (95% CI 0.60–0.63) and 0.58 (95% CI 0.54–0.62), respectively; a difference of 0.03 (p < 0.03). Score intervals for HUI3 and SF-6D were (−0.21 to 1.0) and (0.30 to 0.95). Correlation between the instrument scores was 0.58 (95% CI 0.48–0.68) and agreement by ICC was 0.42 (95% CI 0.31–0.52). Correlations between dimensions of the SF-6D were higher than for the HUI3. Conclusions: Our study casts doubt on whether utilities and QALYs estimated via SF-6D are comparable with those from HUI3. Utility differences may be due to differences in the underlying concepts of health being measured, or different measurement approaches, or both. No gold standard exists for utility measurement, and the SF-6D is a valuable addition that permits SF-36 data to be transformed into utilities to estimate QALYs. The challenge is developing a better understanding as to why these classification-based utility instruments differ so markedly in their distributions and point estimates of derived utilities. Copyright © 2003 John Wiley & Sons, Ltd. [source]


    Neural Signal Manager: a collection of classical and innovative tools for multi-channel spike train analysis

    INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 11 2009
    Antonio Novellino
    Abstract Recent developments in the neuroengineering field and the widespread use of microelectrode arrays (MEAs) for electrophysiological investigations have made available new approaches for studying the dynamics of dissociated neuronal networks as well as acute/organotypic slices maintained ex vivo. Importantly, the extraction of relevant parameters from these neural populations is likely to involve long-term measurements, lasting from a few hours to entire days. The processing of huge amounts of electrophysiological data, in terms of computational time and automation of the procedures, is actually one of the major bottlenecks for both in vivo and in vitro recordings. In this paper we present a collection of algorithms implemented within a new software package, named the Neural Signal Manager (NSM), aimed at analyzing a huge quantity of data recorded by means of MEAs in a fast and efficient way. The NSM offers different approaches for both spike and burst analysis, and integrates state-of-the-art statistical algorithms, such as the inter-spike interval histogram or the post-stimulus time histogram, with some recent ones, such as burst detection and its related statistics. In order to show the potentialities of the software, the application of the developed algorithms to a set of spontaneous activity recordings from dissociated cultures at different ages is presented in the Results section. Copyright © 2008 John Wiley & Sons, Ltd. [source]
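
    An inter-spike interval histogram, one of the classical statistics mentioned above, reduces to binning the differences between successive spike times. A minimal sketch on a synthetic Poisson-like spike train; the 2-ms bins and ~10-Hz rate are illustrative choices, and this is not the NSM's own code.

```python
import numpy as np

def isi_histogram(spike_times_s, bin_ms=2.0, max_ms=200.0):
    """Inter-spike interval histogram from an array of spike times (s)."""
    isis_ms = np.diff(np.sort(np.asarray(spike_times_s))) * 1000.0
    bins = np.arange(0.0, max_ms + bin_ms, bin_ms)
    counts, edges = np.histogram(isis_ms, bins=bins)
    return counts, edges

rng = np.random.default_rng(5)
# Hypothetical Poisson-like spike train, ~10 Hz for 60 s.
spikes = np.cumsum(rng.exponential(0.1, size=600))
counts, edges = isi_histogram(spikes)
print("modal ISI bin starts at", edges[np.argmax(counts)], "ms")
```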


    Linear discriminant analysis in network traffic modelling

    INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 1 2006
    Bing-Yi Zhang
    Abstract It is difficult to give an accurate judgement of whether a traffic model fits the actual traffic. The traditional method is to compare the Hurst parameter, data histogram and autocorrelation function. Comparing Hurst parameters cannot give exact results and judgements, while comparing data histograms and autocorrelations gives only a qualitative judgement. Based on linear discriminant analysis, we propose a novel algorithm. Utilizing this algorithm we analysed several sets of data with large and small differences, as well as sets of data generated by a network simulator. The analysis results are accurate. Compared with the traditional methods, this algorithm is useful and can conveniently give an accurate judgement for complex network traffic traces. Copyright © 2005 John Wiley & Sons, Ltd. [source]


    Analysis of the current methods used to size a wind/hydrogen/fuel cell-integrated system: A new perspective

    INTERNATIONAL JOURNAL OF ENERGY RESEARCH, Issue 12 2010
    H. G. Geovanni
    Abstract As an alternative for the production and storage of intermittent renewable energy, it has been suggested that several renewable energy technologies be combined in one system, known as an integrated or hybrid system, that integrates wind technology with a hydrogen production unit and fuel cells. This work assesses the various methods used to size such systems. Most published papers report the use of simulation tools such as HOMER, HYBRID2 and TRNSYS to simulate the operation of different configurations for a given application in order to select the best economic option. With these methods, however, one may not accurately determine certain characteristics of the energy resources available at a particular site, the profiles of estimated consumption and the demand for hydrogen, among other factors that set the optimal parameters of each subsystem: for example, the design velocity and power rating of the wind turbine, the power required for the fuel cell and electrolyzer, and the storage capacity needed for the system. Moreover, the bi-parametric Weibull distribution function is often used to approximate the histogram of observed wind to a theoretical distribution, which is not appropriate when the wind frequency distribution is bimodal, as is the case in several places in the world. A new perspective is addressed in this paper, based on general system theory, modeling and simulation with a systematic approach, and the use of exergoeconomic analysis. Some general ideas are given on the advantages of this method, which is intended for the implementation of wind/hydrogen/fuel cell-integrated systems and in-situ clean hydrogen production. Copyright © 2009 John Wiley & Sons, Ltd. [source]
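
    The Weibull step criticized above is straightforward to reproduce, which also makes its limitation easy to see: a single two-parameter fit cannot follow a bimodal wind histogram. A sketch of the conventional fit on a hypothetical unimodal series; weibull_min and kstest are standard SciPy routines, and all parameter values are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
# Hypothetical hourly wind speeds (m/s); a real series may be bimodal,
# in which case a single Weibull fit misrepresents the histogram.
wind = rng.weibull(2.0, size=8760) * 7.0

shape, loc, scale = stats.weibull_min.fit(wind, floc=0)
print(f"Weibull k = {shape:.2f}, c = {scale:.2f} m/s")

# Goodness of fit against the empirical distribution:
ks = stats.kstest(wind, "weibull_min", args=(shape, loc, scale))
print(f"KS statistic = {ks.statistic:.3f}")
```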


    Color reduction for complex document images

    INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 1 2009
    Nikos Nikolaou
    Abstract A new technique for color reduction of complex document images is presented in this article. It reduces significantly the number of colors of the document image (less than 15 colors in most of the cases) so as to have solid characters and uniform local backgrounds. Therefore, this technique can be used as a preprocessing step by text information extraction applications. Specifically, using the edge map of the document image, a representative set of samples is chosen that constructs a 3D color histogram. Based on these samples in the 3D color space, a relatively large number of colors (usually no more than 100 colors) are obtained by using a simple clustering procedure. The final colors are obtained by applying a mean-shift based procedure. Also, an edge preserving smoothing filter is used as a preprocessing stage that enhances significantly the quality of the initial image. Experimental results prove the method's capability of producing correctly segmented complex color documents where the character elements can be easily extracted as connected components. © 2009 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 19, 14–26, 2009 [source]


    Color invariant object recognition using entropic graphs

    INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 5 2006
    Jan C. van Gemert
    Abstract We present an object recognition approach using higher-order color invariant features with an entropy-based similarity measure. Entropic graphs offer an unparameterized alternative to common entropy estimation techniques, such as a histogram or assuming a probability distribution. An entropic graph estimates entropy from a spanning graph structure of sample data. We extract color invariant features from object images invariant to illumination changes in intensity, viewpoint, and shading. The Henze–Penrose similarity measure is used to estimate the similarity of two images. Our method is evaluated on the ALOI collection, a large collection of object images. This object image collection consists of 1000 objects recorded under various imaging circumstances. The proposed method is shown to be effective under a wide variety of imaging conditions. © 2007 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 16, 146–153, 2006 [source]


    A new multi-state fading model for mobile satellite channels based upon AFD and LCR data

    INTERNATIONAL JOURNAL OF SATELLITE COMMUNICATIONS AND NETWORKING, Issue 2 2004
    David W. Matolak
    Abstract Using measured data on average fade duration (AFD) and level crossing rate (LCR), we obtain new analytical expressions for the probability density function (pdf) of received signal envelope in a mobile satellite channel, via a new method. The measured data for an urban environment comes from Kanatas et al., Proceedings of the 1997 International Mobile Satellite Conference, Pasadena, CA, 16–18 June 1997; 169–175, but the new method is general in nature and can be applied to other environments. The method is less direct than curve-fitting to a histogram of the 'raw' measured fading amplitude data, but is comparable in complexity and yields good results. Our new model is a composite one, similar to other composite models given in the literature, e.g. the Loo and Lutz models, but in contrast to these, the new model affords a completely closed-form expression for the pdf. As with these other composite models, the new model is amenable to the development of computer simulations of mobile satellite channel amplitude time series realizations, and can be combined with state transition models to provide a complete multi-state fading model. Copyright © 2004 John Wiley & Sons, Ltd. [source]


    Minimizing errors in identifying Lévy flight behaviour of organisms

    JOURNAL OF ANIMAL ECOLOGY, Issue 2 2007
    DAVID W. SIMS
    Summary 1. Lévy flights are specialized random walks with fundamental properties such as superdiffusivity and scale invariance that have recently been applied in optimal foraging theory. Lévy flights have movement lengths chosen from a probability distribution with a power-law tail, which theoretically increases the chances of a forager encountering new prey patches and may represent an optimal solution for foraging across complex, natural habitats. 2. An increasing number of studies are detecting Lévy behaviour in diverse organisms such as microbes, insects, birds, and mammals including humans. A principal criterion for detecting Lévy flight is whether the exponent (µ) of the power-law distribution of movement lengths falls within the range 1 < µ ≤ 3. The exponent can be determined from the histogram of frequency vs. movement (step) lengths, but different plotting methods have been used to derive the Lévy exponent across different studies. 3. Here we investigate using simulations how different plotting methods influence the µ-value and show that the power-law plotting method based on 2^k (logarithmic) binning with normalization prior to log transformation of both axes yields low error (1.4%) in identifying Lévy flights. Furthermore, increasing sample size reduced variation about the recovered values of µ, for example by 83% as sample number increased from n = 50 up to 5000. 4. Simple log transformation of the axes of the histogram of frequency vs. step length underestimated µ by c. 40%, whereas two other methods, 2^k (logarithmic) binning without normalization and calculation of a cumulative distribution function for the data, both estimate the regression slope as 1 − µ. Correction of the slope therefore yields an accurate Lévy exponent, with estimation errors of 1.4 and 4.5%, respectively. 5. Empirical reanalysis of data in published studies indicates that simple log transformation results in significant errors in estimating µ, which in turn affects reliability of the biological interpretation. The potential for detecting Lévy flight motion when it is not present is minimized by the approach described. We also show that using a large number of steps in movement analysis such as this will also increase the accuracy with which optimal Lévy flight behaviour can be detected. [source]
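
    A sketch of the recommended estimator, 2^k logarithmic binning with counts normalized by bin width before log transformation of both axes, tested on a synthetic power-law sample; the sample size, µ = 2 and binning details are my illustrative reading of the method, not the authors' code.

```python
import numpy as np

def levy_exponent(step_lengths: np.ndarray) -> float:
    """Estimate mu from a power-law tail using normalized 2^k binning.

    Counts are divided by bin width before log transformation of both
    axes, the variant reported to give ~1.4% error in the study.
    """
    x = np.asarray(step_lengths, dtype=float)
    kmax = int(np.ceil(np.log2(x.max()))) + 1
    edges = 2.0 ** np.arange(0, kmax + 1)          # 1, 2, 4, 8, ...
    counts, _ = np.histogram(x, bins=edges)
    widths = np.diff(edges)
    centers = np.sqrt(edges[:-1] * edges[1:])      # geometric bin centers
    density = counts / widths                      # the normalization step
    mask = density > 0
    slope, _ = np.polyfit(np.log10(centers[mask]), np.log10(density[mask]), 1)
    return -slope                                  # mu is minus the slope

rng = np.random.default_rng(7)
mu_true = 2.0
steps = (1.0 - rng.random(5000)) ** (-1.0 / (mu_true - 1.0))  # Pareto, x_min = 1
print(f"estimated mu = {levy_exponent(steps):.2f}")
```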


    Bimodal RR Interval Distribution in Chronic Atrial Fibrillation:

    JOURNAL OF CARDIOVASCULAR ELECTROPHYSIOLOGY, Issue 5 2000
    Impact of Dual Atrioventricular Nodal Physiology on Long-Term Rate Control after Catheter Ablation of the Posterior Atrionodal Input
    Introduction: Radiofrequency (RF) catheter modification of the AV node in patients with atrial fibrillation (AF) is limited by an unpredictable decrease of the ventricular rate and a high incidence of permanent AV block. A bimodal RR histogram has been suggested to serve as a predictor of successful outcome, but the corresponding AV node properties have never been characterized. We hypothesized that a bimodal histogram indicates dual AV nodal physiology and predicts a better outcome after AV node modification in chronic AF. Methods and Results: Thirty-seven patients were prospectively subdivided into two groups according to the RR histogram of 24-hour ECG monitoring. Prior to RF ablation, internal cardioversion and programmed stimulation were performed. Among the 22 patients (group I) with a bimodal RR histogram, dual AV nodal physiology was found in 17 (77%) patients. Ablation significantly decreased the ventricular rate, with loss of the peak of short RR cycles after ablation (mean and maximal ventricular rates: 32% and 35% rate reduction, respectively; P < 0.01). In the 15 patients with a unimodal RR histogram (group II), dual AV nodal physiology was found in 2 (13%), and rate reductions were 16% and 17%, respectively. At 6 months, 3 (14%) patients in group I and 6 (40%) in group II underwent elective AV nodal ablation with pacemaker implantation due to an intolerably rapid ventricular response to AF. Conclusion: Bimodal RR interval distribution during chronic AF suggests the presence of dual AV nodal physiology and predicts a better outcome of RF ablation of the posterior atrionodal input. [source]


    Systematic and statistical error in histogram-based free energy calculations

    JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 12 2003
    Mark N. Kobrak
    Abstract A common technique for the numerical calculation of free energies involves estimation of the probability density along a given coordinate from a set of configurations generated via simulation. The process requires discretization of one or more reaction coordinates to generate a histogram from which the continuous probability density is inferred. We show that the finite size of the intervals used to construct the histogram leads to quantifiable systematic error. The width of these intervals also determines the statistical error in the free energy, and the choice of the appropriate interval is therefore driven by the need to balance the two sources of error. We present a method for the construction of the optimal histogram for a given system, and show that the use of this technique requires little additional computational expense. We demonstrate the efficacy of the technique for a model system, and discuss how the principles governing the choice of discretization interval could be used to improve extended sampling techniques. © 2003 Wiley Periodicals, Inc. J Comput Chem 24: 1437–1446, 2003 [source]
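
    The bin-width trade-off can be seen directly on a model system where the exact free energy is known, e.g. samples from a harmonic well in units of kT. A minimal sketch of that comparison, not the paper's optimal-histogram construction: too wide a bin inflates the systematic (discretization) error, too narrow a bin inflates the statistical one.

```python
import numpy as np

rng = np.random.default_rng(8)
# Hypothetical samples of a reaction coordinate from a harmonic well,
# so the exact free energy profile F(q) = q^2/2 + ln(sqrt(2*pi)) in kT.
q = rng.normal(0.0, 1.0, size=20000)

for width in (0.05, 0.5, 2.0):
    bins = np.arange(-4, 4 + width, width)
    counts, edges = np.histogram(q, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p = counts / (counts.sum() * width)          # probability density estimate
    mask = counts > 0
    F = -np.log(p[mask])                         # free energy in kT
    exact = 0.5 * centers[mask] ** 2 + 0.5 * np.log(2 * np.pi)
    print(f"width {width}: mean |error| = {np.mean(np.abs(F - exact)):.3f} kT")
```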


    SEGMENTATION OF BEEF JOINT IMAGES USING HISTOGRAM THRESHOLDING

    JOURNAL OF FOOD PROCESS ENGINEERING, Issue 6 2006
    CHAOXIN ZHENG
    ABSTRACT Four histogram-based thresholding methods, i.e., one-dimensional (1-D) histogram variance, 1-D histogram entropy, two-dimensional (2-D) histogram variance and 2-D histogram entropy, were proposed to segment the images of beef joints (raw, cooked and cooled) automatically from the background. The 2-D histogram-based methods incorporate a fast algorithm to reduce the calculation time, thus increasing the speed greatly. All four methods were applied to 15 beef joint images captured from a video camera, and evaluation measures including pixel classification, object overlap and object contrast were then employed to compare the performance of the four segmentation methods. Results indicate that the 2-D histogram variance thresholding method accomplishes the segmentation task with the most satisfactory performance. [source]
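
    The 1-D histogram variance method corresponds to the classic Otsu criterion: pick the gray level that maximizes the between-class variance of the histogram. A sketch under that reading, on a synthetic two-population image; the image itself and the 8-bit assumption are illustrative, and the paper's 2-D variants and fast algorithm are not shown.

```python
import numpy as np

def otsu_threshold(img: np.ndarray) -> int:
    """1-D histogram variance (Otsu) threshold for an 8-bit image.

    Chooses the gray level maximizing the between-class variance
    sigma_b^2(t) = (mu_T * omega(t) - mu(t))^2 / (omega(t) * (1 - omega(t))).
    """
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                 # class-0 probability up to level t
    mu = np.cumsum(p * np.arange(256))   # cumulative mean
    mu_total = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b))

rng = np.random.default_rng(9)
# Hypothetical image: dark background plus a brighter object.
img = np.concatenate([rng.normal(60, 15, 5000), rng.normal(180, 20, 3000)])
img = np.clip(img, 0, 255).astype(np.uint8).reshape(80, 100)
print("Otsu threshold:", otsu_threshold(img))
```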


    A Magnetization Transfer MRI Study of Deep Gray Matter Involvement in Multiple Sclerosis

    JOURNAL OF NEUROIMAGING, Issue 4 2006
    Jitendra Sharma MD
    ABSTRACT Background/Purpose: Gray matter involvement in multiple sclerosis (MS) is of growing interest with respect to disease pathogenesis. Magnetization transfer imaging (MTI), an advanced MRI technique, is sensitive to disease in normal appearing white matter (NAWM) in patients with MS. Design/Methods: We tested if MTI detected subcortical (deep) gray matter abnormalities in patients with MS (n = 60) vs. age-matched normal controls (NL, n = 20). Magnetization transfer ratio (MTR) maps were produced from axial proton density, conventional spin-echo, 5 mm gapless slices covering the whole brain. Region-of-interest-derived MTR histograms for the caudate, putamen, globus pallidus, thalamus, and NAWM were obtained. Whole brain MTR was also measured. Results: Mean whole brain MTR and the peak position of the NAWM MTR histogram were lower in patients with MS than NL (P < .001) and mean whole brain MTR was lower in secondary progressive (SP, n = 10) than relapsing-remitting (RR, n = 50, P < .001) patients. However, none of the subcortical gray matter nuclei showed MTR differences in MS vs. NL, RR vs. SP, or SP vs. NL. Conclusions: The MTI technique used in this cohort was relatively insensitive to disease in the deep gray matter nuclei despite showing sensitivity for whole brain disease in MS. It remains to be determined if other MRI techniques are more sensitive than MTI for detecting pathology in these areas. [source]


    Vapor–liquid equilibria of mixtures containing alkanes, carbon dioxide, and nitrogen

    AICHE JOURNAL, Issue 7 2001
    Jeffrey J. Potoff
    New force fields for carbon dioxide and nitrogen are introduced that quantitatively reproduce the vapor–liquid equilibria (VLE) of the neat systems and their mixtures with alkanes. In addition to the usual VLE calculations for pure CO2 and N2, calculations of the binary mixtures with propane were used in the force-field development to achieve a good balance between dispersive and electrostatic (quadrupole–quadrupole) interactions. The transferability of the force fields was then assessed from calculations of the VLE for the binary mixtures with n-hexane, the binary mixture of CO2/N2, and the ternary mixture of CO2/N2/propane. The VLE calculations were carried out using configurational-bias Monte Carlo simulations in either the grand canonical ensemble with histogram reweighting or in the Gibbs ensemble. [source]
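
    Histogram reweighting in its simplest, single-histogram form extrapolates canonical averages to a nearby temperature by reweighting sampled energies with exp(−(β′ − β)E). A minimal sketch with hypothetical Gaussian-distributed energies; the multi-histogram machinery actually used with grand canonical simulations is considerably more involved.

```python
import numpy as np

def reweight_mean_energy(energies: np.ndarray, beta: float, beta_new: float) -> float:
    """Single-histogram reweighting of a canonical-ensemble sample.

    Energies sampled at inverse temperature beta are reweighted by
    exp(-(beta_new - beta) * E) to estimate <E> at beta_new; reliable
    only for small shifts where the two energy histograms overlap.
    """
    energies = np.asarray(energies, dtype=float)
    dE = energies - energies.mean()          # subtract mean for numerical stability
    w = np.exp(-(beta_new - beta) * dE)
    return float(np.sum(w * energies) / np.sum(w))

rng = np.random.default_rng(10)
E = rng.normal(-500.0, 20.0, size=50000)     # hypothetical sampled energies
print(f"<E> at beta' = {reweight_mean_energy(E, beta=1.0, beta_new=1.02):.1f}")
```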


    Evaluation of Antineutrophil IgG Antibodies in Persistently Neutropenic Dogs

    JOURNAL OF VETERINARY INTERNAL MEDICINE, Issue 3 2007
    Douglas J. Weiss DVM
    Background: Immune-mediated neutropenia (IMN) is one of several causes of persistent neutropenia in dogs. A test to detect IMN in dogs is not available. Hypothesis: A flow cytometric immunofluorescence assay will provide a sensitive method for detection of antineutrophil antibodies in dogs. Animals: The study included 12 neutropenic dogs and 20 healthy dogs. Methods: An indirect immunofluorescence assay was used to detect immunoglobulin G (IgG) binding to dog neutrophils. Leukoagglutination was evaluated by light microscopy. Neutrophil distribution in scatter plots, neutrophil fluorescence intensity, and the percentage of neutrophils with increased fluorescence intensity were evaluated by use of flow cytometry. Results: Antineutrophil antibodies were detected in the serum of 5 of 6 dogs with a clinical diagnosis of IMN. Leukoagglutination was present in 3 dogs. Four dogs had altered neutrophil distribution in forward-angle versus side-angle light scatter plots. Five of 6 dogs had increased neutrophil fluorescence intensity and 4 of 6 dogs had an increased percentage of neutrophils with increased fluorescence intensity. Conclusions and Clinical Importance: The flow cytometric test for antineutrophil antibodies detects dogs with a clinical diagnosis of IMN. Testing for antineutrophil antibodies should include observation for leukoagglutination, observation of scatter plots for altered distribution of the neutrophil population, observation of the shape of the fluorescence histogram, determination of neutrophil fluorescence intensity, and determination of the percentage of neutrophils with increased fluorescence intensity. [source]


    Probabilistic temperature forecast by using ground station measurements and ECMWF ensemble prediction system

    METEOROLOGICAL APPLICATIONS, Issue 4 2004
    P. Boi
    The ECMWF Ensemble Prediction System 2-metre temperature forecasts are affected by systematic errors due mainly to resolution inadequacies. Moreover, other error sources are present: differences in height above sea level between the station and the corresponding grid point, boundary layer parameterisation, and description of the land surface. These errors are more marked in regions of complex orography. A recursive statistical procedure to adapt ECMWF EPS 2-metre temperature fields to 58 meteorological stations on the Mediterranean island of Sardinia is presented. The correction is made in three steps: (1) bias correction of systematic errors; (2) calibration to adapt the EPS temperature distribution to the station temperature distribution; and (3) doubling the ensemble size with the aim of taking into account the analysis errors. Two years of probabilistic forecasts of freezing are tested by Brier Score, reliability diagram, rank histogram and Brier Skill Score with respect to the climatological forecast. The score analysis shows much better performance in comparison with the climatological forecast and direct model output, for all forecast times, even after the first step (bias correction). Further gains in skill are obtained by calibration and by doubling the ensemble size. Copyright © 2004 Royal Meteorological Society. [source]
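
    A rank histogram records, for each forecast case, the rank of the verifying observation within the sorted ensemble; a flat histogram indicates a statistically reliable ensemble, a U shape an under-dispersive one. A minimal sketch on hypothetical Gaussian forecasts and observations, not the Sardinian station data.

```python
import numpy as np

def rank_histogram(ensembles: np.ndarray, observations: np.ndarray) -> np.ndarray:
    """Rank histogram (Talagrand diagram) for an ensemble of size m.

    For each case the observation's rank among the m members falls in one
    of m + 1 bins; ties are ignored, which is fine for continuous data.
    """
    n_cases, m = ensembles.shape
    ranks = np.sum(ensembles < observations[:, None], axis=1)  # 0 .. m
    return np.bincount(ranks, minlength=m + 1)

rng = np.random.default_rng(11)
ens = rng.normal(0.0, 1.0, size=(730, 50))   # hypothetical 2 years of forecasts
obs = rng.normal(0.0, 1.3, size=730)         # truth wider than ensemble -> U shape
print(rank_histogram(ens, obs))
```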


    Twenty-five pitfalls in the analysis of diffusion MRI data

    NMR IN BIOMEDICINE, Issue 7 2010
    Derek K. Jones
    Abstract Obtaining reliable data and drawing meaningful and robust inferences from diffusion MRI can be challenging and is subject to many pitfalls. The process of quantifying diffusion indices and eventually comparing them between groups of subjects and/or correlating them with other parameters starts at the acquisition of the raw data, followed by a long pipeline of image processing steps. Each one of these steps is susceptible to sources of bias, which may not only limit the accuracy and precision, but can lead to substantial errors. This article provides a detailed review of the steps along the analysis pipeline and their associated pitfalls. These are grouped into: (1) pre-processing of data; (2) estimation of the tensor; (3) derivation of voxelwise quantitative parameters; (4) strategies for extracting quantitative parameters; and finally (5) intra-subject and inter-subject comparison, including region of interest, histogram, tract-specific and voxel-based analyses. The article covers important aspects of diffusion MRI analysis, such as motion correction, susceptibility and eddy current distortion correction, model fitting, region of interest placement, histogram and voxel-based analysis. We have assembled 25 pitfalls (several previously unreported) into a single article, which should serve as a useful reference for those embarking on new diffusion MRI-based studies, and as a check for those who may already be running studies but may have overlooked some important confounds. While some of these problems are well known to diffusion experts, they might not be to other researchers wishing to undertake a clinical study based on diffusion MRI. Copyright © 2010 John Wiley & Sons, Ltd. [source]


    Slow-responders to IV β2-adrenergic agonist therapy: Defining a novel phenotype in pediatric asthma

    PEDIATRIC PULMONOLOGY, Issue 7 2008
    Christopher L. Carroll MD
    Abstract Objectives: While aerosolized administration of β2-adrenergic receptor (β2-AR) agonists is the mainstay of treatment for pediatric asthma exacerbations, the efficacy of intravenous (IV) delivery is controversial. Failure to demonstrate improved outcomes with IV β2-AR agonists may be due to phenotypic differences within this patient population. Our hypothesis is that children who respond more slowly to IV β2-AR agonist therapy comprise a distinct phenotype. Methods: Retrospective chart review of all children admitted to the ICU for status asthmaticus who were treated with IV terbutaline between December 2002 and September 2006. Results: Seventy-eight children were treated with IV terbutaline according to guidelines that adjusted the dose by clinical asthma score. After examining the histogram of the duration of terbutaline infusions, a 48-hr cutoff was chosen to define responsiveness. Thirty-eight (49%) children were slow-responders by this definition. There were no significant differences in baseline asthma severity or severity on admission between the slow-responders and responders. Slow-responders required significantly higher total doses of IV terbutaline and higher maximum administration rates, and had longer ICU and hospital lengths of stay. Conclusion: There were significant differences in outcomes between the responders and slow-responders without differences in acute or chronic illness severity. Other factors may have led to the slower response to IV β2-agonist therapy. Pediatr Pulmonol. 2008; 43:627–633. © 2008 Wiley-Liss, Inc. [source]


    Complementary and alternative medicine inclusion in physical therapist education in the United States

    PHYSIOTHERAPY RESEARCH INTERNATIONAL, Issue 4 2009
    Paula Richley Geigle
    Abstract Purpose. The purpose of this study was to determine the current prevalence, and at what level, complementary and alternative medicine (CAM) content is included in physical therapist (PT) education in the United States. This survey study provides self-report data regarding reasons why faculty members choose to include or not include CAM in programme content. Background/Significance. Determining the current prevalence of CAM content, and its level of inclusion (minimal, moderate, advanced) in PT curricula, will assist programmes as they modify existing curricula and develop new programmes. Subjects. All 196 US-accredited programmes were included in our survey. Materials and Methods. An IRB-approved (Institutional Review Board), pilot-tested, two-page survey was emailed to all programme chairpersons of accredited PT programmes. A hard-copy survey was mailed to non-responding programmes. Analyses. Returned surveys were analyzed descriptively to characterize the data shape, tendency and variability. Data were summarized in a frequency distribution and graphically depicted in a histogram for each category. In addition, qualitative analysis was completed for the explanatory data. Results. Forty-seven per cent (92) of all 196 accredited PT programmes responded. The most commonly included CAM areas were: manipulative and body-based methods, alternative medical systems and biologically based therapies. The most frequent responses regarding limitations to including CAM in the PT curriculum were: limited curriculum time, lack of evidence supporting CAM practices and trouble locating qualified CAM presenters. Conclusions. This survey suggests the following: CAM techniques are included in entry-level PT education in the United States; the majority of these techniques are offered at the minimum or exposure level; manipulative and body-based methods, alternative medical systems and biologically based therapies are the most frequently included CAM techniques. Copyright © 2009 John Wiley & Sons, Ltd. [source]


    Accelerated Reliability Qualification in Automotive Testing

    QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 2 2004
    Alex Porter
    Abstract Products must come to market quickly, be more reliable and cost less. The problem is that statistical measures take time. There is a clear need for actionable information about the robustness or durability of a product early in the development process. In a Failure Mode Verification Test (FMVT), the analysis is not statistical but is designed to check two assumptions. First, that the design is capable of producing a viable product for the environments applied. Second, that a good design and fabrication of the product would last for a long period of time under all of the stresses that it is expected to see and would accumulate stress damage throughout the product in a uniform way. Testing a product in this way leads to three measures of the product's durability: (1) design maturity, the ratio between time to first failure and the average time between failures after the first failure; (2) technological limit, the time under test at which fixing additional failures would not provide a significant improvement in the life of the product; and (3) failure mode histogram, which indicates the repeatability of failures in a product. Using techniques like FMVT can provide a means of breaking the tyranny of statistics over durability and reliability testing in a competitive business climate. Copyright © 2004 John Wiley & Sons, Ltd. [source]
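
    The design maturity measure defined in (1) is a simple ratio once the failure times from the test are known. A minimal sketch with hypothetical failure times; how failures are detected, classified and counted in an actual FMVT is outside its scope.

```python
import numpy as np

def design_maturity(failure_times_h) -> float:
    """Design maturity: time to first failure divided by the average
    time between subsequent failures (as defined for an FMVT)."""
    t = np.sort(np.asarray(failure_times_h, dtype=float))
    gaps = np.diff(t)                 # times between successive failures
    return float(t[0] / gaps.mean())

# Hypothetical failure times (hours under stepped stress).
times = [40.0, 55.0, 62.0, 70.0, 81.0]
print(f"design maturity = {design_maturity(times):.2f}")
```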