Required Number (required + number)


Selected Abstracts


Impact of Grain Size on the Cerchar Abrasiveness Test

GEOMECHANICS AND TUNNELLING, Issue 1 2008
Klaus Lassnig Mag.
The Cerchar abrasiveness test is a common index test for predicting tool wear, but no consistent and detailed recommendations for the testing procedure exist to date. One point of disagreement is the number of scratch tests required per sample to obtain reliable results as a function of grain size; the only existing recommendation is that ten tests, instead of the usual five, be performed on coarse-grained rocks. The focus of this work was to verify the influence of grain size on the number of single scratch tests required per sample. Grain size analyses were performed to obtain cumulative grain-size curves for each tested rock sample, from which the median and the interquartile range of the grain sizes were calculated. CAI values after 5 and after 10 scratch tests were then compared with the median and the interquartile range of the grain size. No grain-size dependency of the CAI deviation between 5 and 10 tests was observed in the analysed range, from which it can be concluded that, contrary to previous assumptions, grain size has no measurable influence on the CAI result within that range. [source]
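A minimal sketch of the comparison described above, using hypothetical grain-size and scratch-test values (the actual data and CAI procedure are only in the paper): the median and interquartile range of the grain-size distribution are computed and set against the deviation between the mean CAI of the first five and of all ten scratch tests.

```python
import numpy as np

# Hypothetical input data (illustrative only): grain sizes in mm from a
# grain-size analysis, and CAI readings from ten single scratch tests.
grain_sizes_mm = np.array([0.2, 0.4, 0.5, 0.8, 1.1, 1.5, 2.0, 2.6, 3.2, 4.0])
cai_scratches = np.array([2.1, 2.4, 2.2, 2.6, 2.3, 2.5, 2.2, 2.7, 2.4, 2.3])

# Statistical parameters of the grain-size distribution.
median_grain = np.median(grain_sizes_mm)
q1, q3 = np.percentile(grain_sizes_mm, [25, 75])
iqr_grain = q3 - q1

# CAI after 5 tests vs. CAI after 10 tests (mean of the single scratches).
cai_5 = cai_scratches[:5].mean()
cai_10 = cai_scratches.mean()
cai_deviation = abs(cai_10 - cai_5)

print(f"median grain size: {median_grain:.2f} mm, IQR: {iqr_grain:.2f} mm")
print(f"CAI(5) = {cai_5:.2f}, CAI(10) = {cai_10:.2f}, deviation = {cai_deviation:.2f}")
```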


Bayesian inference in a piecewise Weibull proportional hazards model with unknown change points

JOURNAL OF ANIMAL BREEDING AND GENETICS, Issue 4 2007
J. Casellas
Summary The main difference between parametric and non-parametric survival analyses lies in model flexibility. Parametric models have been suggested as preferable because of their lower programming needs, although they generally suffer from reduced flexibility when fitting field data. In this sense, parametric survival functions can be redefined as piecewise survival functions whose slopes change at given points, which substantially increases the flexibility of the parametric survival model. Unfortunately, accurate methods to establish the required number of change points and their positions along the time axis are lacking. In this study, a Weibull survival model with a piecewise baseline hazard function was developed, with the change points included as unknown parameters in the model. Specifically, a Weibull log-normal animal frailty model was assumed and solved with a Bayesian approach. The required fully conditional posterior distributions were derived. During the sampling process, all parameters in the model were updated using a Metropolis–Hastings step, with the exception of the genetic variance, which was updated with a standard Gibbs sampler. This methodology was tested on simulated data sets, each analysed with several models differing in the number of change points. The models were compared with the Deviance Information Criterion, with appealing results. Simulation results showed that the estimated marginal posterior distributions covered the true parameter values used in the simulation well and placed high density on them. Moreover, the results showed that the piecewise baseline hazard function could appropriately fit survival data, as well as other smooth distributions, with a reduced number of change points. [source]
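A minimal sketch of one common piecewise Weibull baseline hazard of the kind described, with the change points treated as given rather than sampled; the parameterisation, names and values here are illustrative assumptions, and the paper's full model additionally includes the log-normal animal frailty and the Metropolis–Hastings/Gibbs updates.

```python
import numpy as np

def piecewise_weibull_hazard(t, rho, scales, change_points):
    """Baseline hazard h0(t) = rho * lambda_k * t**(rho - 1) on the k-th
    interval between change points (one common piecewise parameterisation,
    not necessarily the paper's)."""
    edges = np.concatenate(([0.0], change_points, [np.inf]))
    k = np.searchsorted(edges, t, side="right") - 1
    return rho * scales[k] * t ** (rho - 1)

def piecewise_weibull_cum_hazard(t, rho, scales, change_points):
    """Cumulative hazard H0(t), integrating each piece analytically."""
    edges = np.concatenate(([0.0], change_points, [np.inf]))
    H = 0.0
    for k, lam in enumerate(scales):
        lo, hi = edges[k], min(edges[k + 1], t)
        if hi <= lo:
            break
        H += lam * (hi ** rho - lo ** rho)
    return H

# Hypothetical values: shape 1.3, two change points, three scale parameters.
rho, cps, lams = 1.3, np.array([100.0, 250.0]), np.array([0.002, 0.004, 0.001])
print(piecewise_weibull_hazard(120.0, rho, lams, cps))
print(piecewise_weibull_cum_hazard(120.0, rho, lams, cps))
```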


Correlation of hepatic vein Doppler waveform and hepatic artery resistance index with the severity of nonalcoholic fatty liver disease

JOURNAL OF CLINICAL ULTRASOUND, Issue 7 2010
Amir Reza Mohammadinia MD
Abstract Purpose. The study was conducted to evaluate the effect of various degrees of fatty infiltration on the hepatic artery resistance index and hepatic vein waveform patterns in patients with nonalcoholic fatty liver disease. Methods. After identification and grading of fatty infiltration, 60 patients and 20 normal healthy subjects were examined using color and spectral Doppler sonography. The level of fatty liver infiltration was ascertained and graded by biopsy in patients and excluded by MRI in controls. The subjects were allocated to four study groups consecutively, until the required number was reached, according to infiltration level as follows: normal (group A), mild (group B), moderate (group C), and severe (group D). The hepatic vein waveforms were classified into three groups: triphasic, biphasic, and monophasic. The hepatic artery resistance index was calculated as the mean of three different measurements. Results. The incidence of monophasic and biphasic hepatic vein waveforms was 2 (10%) for group B, 11 (55%) for group C, 16 (80%) for group D, and none for group A. The difference in the distribution of the triphasic Doppler waveform pattern between the patients and the control group was significant (p < 0.001). The hepatic artery resistance index was 0.81 (±0.02), 0.78 (±0.03), 0.73 (±0.03), and 0.68 (±0.05) in groups A, B, C, and D, respectively, and differed significantly between groups (p < 0.001). Conclusion. As the severity of nonalcoholic fatty infiltration increases, the incidence of abnormal hepatic vein waveforms increases and the hepatic artery resistance index decreases. © 2010 Wiley Periodicals, Inc. J Clin Ultrasound 38:346-352, 2010 [source]
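The abstract does not define the resistance index; a minimal sketch using the standard Doppler definition RI = (peak systolic velocity − end-diastolic velocity) / peak systolic velocity, averaged over three measurements as the study describes. The velocity values below are hypothetical.

```python
def resistance_index(peak_systolic, end_diastolic):
    """Standard Doppler resistance index: (PSV - EDV) / PSV."""
    return (peak_systolic - end_diastolic) / peak_systolic

# Hypothetical triplet of hepatic artery measurements (velocities in cm/s);
# the study reports the RI as the mean of three separate measurements.
measurements = [(72.0, 18.0), (68.0, 16.5), (75.0, 19.0)]
ri_values = [resistance_index(psv, edv) for psv, edv in measurements]
mean_ri = sum(ri_values) / len(ri_values)
print(f"hepatic artery RI = {mean_ri:.2f}")
```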


Use of a structured interview to assess portfolio-based learning

MEDICAL EDUCATION, Issue 9 2008
Vanessa C Burch
Context: Portfolio-based learning is a popular educational tool usually examined by document review, sometimes accompanied by an oral examination. This labour-intensive assessment method prohibits its use in the resource-constrained settings typical of developing countries. Objectives: We aimed to determine the feasibility and internal consistency of a portfolio-based structured interview and its impact on student learning behaviour. Methods: Year 4 medical students (n = 181) recorded 25 patient encounters during a 14-week medical clerkship. Portfolios were examined in a 30-minute, single-examiner interview in which four randomly selected cases were discussed. Six standard questions were used to guide examiners in determining the ability of candidates to interpret and synthesise clinical data gathered during patient encounters. Examiners were trained to score responses using a global rating scale. Pearson's correlation coefficient, Cronbach's α coefficient and the standard error of measurement (SEM) of the assessment tool were determined. The number of students completing more than the required number of portfolio entries was also recorded. Results: The mean (± standard deviation [SD], 95% confidence interval [CI]) interview score was 67.5% (SD ± 10.5, 95% CI 66.0–69.1). The correlation coefficients for the interview compared with the other component examinations of the assessment process were: multiple-choice question (MCQ) examination 0.42; clinical case-based examination 0.37; in-course global rating 0.08; and overall final score 0.54. Cronbach's α coefficient was 0.88 and the SEM was 3.6. Of 181 students, 45.3% completed more than 25 portfolio entries. Conclusions: Portfolio assessment using a 30-minute structured interview is a feasible, internally consistent assessment method that requires less examination time per candidate than methods described in the published literature and that may encourage desirable student learning behaviour. [source]
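A minimal sketch of the two reliability statistics reported, computed from a hypothetical matrix of interview ratings (candidates × the four discussed cases). The formulas are the standard ones (Cronbach's α from item and total-score variances, SEM = SD × √(1 − reliability)), not code from the study.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (candidates x items) matrix of ratings."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

def standard_error_of_measurement(sd, reliability):
    """SEM = SD * sqrt(1 - reliability)."""
    return sd * np.sqrt(1.0 - reliability)

# Hypothetical ratings for 5 candidates on the 4 discussed cases (0-100 scale).
ratings = np.array([
    [60, 65, 58, 62],
    [72, 70, 75, 68],
    [55, 50, 52, 58],
    [80, 78, 82, 79],
    [66, 60, 64, 63],
])
alpha = cronbach_alpha(ratings)
# With the values reported in the abstract (SD 10.5, alpha 0.88) the second
# formula gives SEM = 10.5 * sqrt(0.12) = 3.6, matching the reported figure.
print(f"alpha = {alpha:.2f}, SEM = {standard_error_of_measurement(10.5, 0.88):.1f}")
```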


Laser guide stars for extremely large telescopes: efficient Shack–Hartmann wavefront sensor design using the weighted centre-of-gravity algorithm

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 3 2009
L. Schreiber
ABSTRACT Over the last few years, increasing consideration has been given to the study of laser guide stars (LGS) for measuring the disturbance introduced by the atmosphere in optical and near-infrared (near-IR) astronomical observations from the ground. A possible method for generating an LGS is excitation of the sodium layer in the upper atmosphere at an altitude of approximately 90 km. Since the sodium layer is approximately 10 km thick, the artificial reference source looks elongated, especially when observed from the edge of a large aperture. This spot elongation strongly limits the performance of the most common wavefront sensors. The centroiding accuracy in a Shack–Hartmann wavefront sensor, for instance, decreases proportionally to the elongation (in a photon-noise-dominated regime). To compensate for this effect, a straightforward solution is to increase the laser power, i.e. to increase the number of detected photons per subaperture. The scope of the work presented in this paper is twofold: an analysis of the performance of the weighted centre-of-gravity algorithm for centroiding with elongated spots, and the determination of the number of photons required to achieve a given average wavefront error over the telescope aperture. [source]
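A minimal sketch of a weighted centre-of-gravity centroid on a single Shack–Hartmann subaperture image: the plain centre of gravity is compared with a version weighted by a window centred on the expected spot position. The window shape, spot model and parameters below are illustrative assumptions; the paper analyses the algorithm's behaviour for elongated LGS spots in detail.

```python
import numpy as np

def centre_of_gravity(img, weights=None):
    """(Weighted) centre of gravity of a 2-D spot image, in pixel coordinates."""
    img = np.asarray(img, dtype=float)
    if weights is not None:
        img = img * weights
    y, x = np.indices(img.shape)
    total = img.sum()
    return img.ravel() @ x.ravel() / total, img.ravel() @ y.ravel() / total

def gaussian_window(shape, centre, sigma_x, sigma_y):
    """Elliptical Gaussian weighting window (an illustrative choice)."""
    y, x = np.indices(shape)
    cx, cy = centre
    return np.exp(-0.5 * (((x - cx) / sigma_x) ** 2 + ((y - cy) / sigma_y) ** 2))

# Hypothetical elongated spot: a stretched Gaussian plus a weak noisy background.
rng = np.random.default_rng(0)
shape = (32, 32)
spot = gaussian_window(shape, centre=(17.0, 15.0), sigma_x=2.0, sigma_y=6.0)
frame = 200.0 * spot + rng.poisson(2.0, shape)

cog = centre_of_gravity(frame)
wcog = centre_of_gravity(frame, gaussian_window(shape, centre=(16.0, 16.0),
                                                sigma_x=3.0, sigma_y=8.0))
print("plain CoG:", cog, "weighted CoG:", wcog)
```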


Minimum effort dead-beat control of linear servomechanisms with ripple-free response

OPTIMAL CONTROL APPLICATIONS AND METHODS, Issue 3 2001

Abstract A new and systematic approach to the problem of minimum-effort ripple-free dead-beat (EFRFDB) control of the step response of a linear servomechanism is presented. A set of admissible discrete error-feedback controllers is specified, complying with the general conditions for the design of ripple-free dead-beat (RFDB) controllers for any introduced degree of freedom, defined as the number of steps exceeding the minimum number. The solution is unique for the minimum number of steps, while increasing the number of steps enables an optimal choice from a competing set of controllers via their parametrization in a finite-dimensional space. The Chebyshev norm of an arbitrarily chosen linear projection of the control variable was adopted as the objective function. A new, efficient algorithm has been elaborated for all stable systems of the given class with an arbitrary degree of freedom. A parametrized solution in a finite space of polynomials is obtained by solving a standard mathematical programming problem, which simultaneously yields the solution of maximizing the total position change of the servomechanism for a given number of steps and a given limit on control effort. The problem formulated in this way is subsequently used to solve the time-optimal (minimum-step) control of a servomechanism to a given steady-state position under a specified limitation on control effort. The effect of EFRFDB control is illustrated and analysed on the example of a linear servomechanism with a torsion spring shaft, using control effort and control-difference effort as criteria. Copyright © 2001 John Wiley & Sons, Ltd. [source]
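The paper's algorithm is not reproduced in the abstract; as a generic illustration of the objective it names, a minimal sketch of minimizing the Chebyshev (max-absolute-value) norm of a linear projection P·u of the control sequence subject to linear equality constraints A·u = b, posed as a standard linear program. The matrices below are hypothetical placeholders, not the servomechanism's actual RFDB conditions.

```python
import numpy as np
from scipy.optimize import linprog

def min_chebyshev_norm(P, A_eq, b_eq):
    """Minimize max_i |(P u)_i| subject to A_eq u = b_eq, via the standard
    LP reformulation with an auxiliary bound variable t."""
    m, n = P.shape
    # Decision vector: [u (n entries), t (1 entry)]; objective: minimize t.
    c = np.zeros(n + 1)
    c[-1] = 1.0
    ones = np.ones((m, 1))
    # |P u| <= t  <=>  P u - t <= 0  and  -P u - t <= 0
    A_ub = np.vstack([np.hstack([P, -ones]), np.hstack([-P, -ones])])
    b_ub = np.zeros(2 * m)
    A_eq_full = np.hstack([A_eq, np.zeros((A_eq.shape[0], 1))])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq_full, b_eq=b_eq,
                  bounds=[(None, None)] * n + [(0, None)])
    return res.x[:n], res.x[-1]

# Hypothetical 4-step example: P projects onto control increments, and the
# single equality constraint stands in for the ripple-free dead-beat conditions.
P = np.diff(np.eye(4), axis=0)           # control-difference projection
A_eq = np.array([[1.0, 1.0, 1.0, 1.0]])  # e.g. total control action fixed
b_eq = np.array([1.0])
u, effort = min_chebyshev_norm(P, A_eq, b_eq)
print("control sequence:", u, "Chebyshev effort:", effort)
```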


Hydrodynamic Cell Model: General Formulation and Comparative Analysis of Different Approaches

THE CANADIAN JOURNAL OF CHEMICAL ENGINEERING, Issue 5 2007
Emiliy K. Zholkovskiy
Abstract This paper is concerned with the Cell Model method of describing hydrodynamic flow through a system of solid particles. The starting point of the analysis is a general problem formulation intended to describe pressure-driven flow through a diaphragm, which can be considered as a set of representative cells of arbitrary shape containing any number of particles. Using this general formulation, the hydrodynamic field inside an individual representative cell is related to the applied pressure difference and the external flow velocity. To this end, four relationships containing integrals over the outer boundary of a representative cell are derived in the paper. Assuming that the representative cell is a sphere containing a single particle at its centre, the derived general relationships are transformed into the outer cell boundary conditions employed in the literature by different authors. The number of outer boundary conditions obtained is larger than the required number. Accordingly, by choosing different sets of outer boundary conditions, different models are considered and compared with each other and with results obtained by others for regular particle arrays. The common and different features of the hydrodynamic and electrodynamic versions of the Cell Model approaches are analysed. Finally, it is discussed which version of the cell model gives the best approximation when describing pressure-driven and electrically driven flows through a diaphragm and the sedimentation of particles. [source]


The minimum crystal size needed for a complete diffraction data set

ACTA CRYSTALLOGRAPHICA SECTION D, Issue 4 2010
James M. Holton
In this work, classic intensity formulae were united with an empirical spot-fading model in order to calculate the diameter of a spherical crystal that will scatter the required number of photons per spot at a desired resolution over the radiation-damage-limited lifetime. The influences of molecular weight, solvent content, Wilson B factor, X-ray wavelength and attenuation on scattering power and dose were all included. Taking the net photon count in a spot as the only source of noise, a complete data set with a signal-to-noise ratio of 2 at 2 Å resolution was predicted to be attainable from a perfect lysozyme crystal sphere 1.2 µm in diameter; two different models of photoelectron escape reduced this to 0.5 or 0.34 µm. These represent 15-fold to 700-fold less scattering power than the smallest experimentally determined crystal size to date, but the gap was shown to be consistent with the background scattering level of the relevant experiment. These results suggest that reduction of background photons and of diffraction spot size on the detector are the principal paths to improving crystallographic data quality beyond current limits. [source]
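The intensity formulae themselves are not reproduced in the abstract; as a rough illustration only, a sketch of the cube-root scaling that connects photons scattered per spot to the diameter of a small spherical crystal (scattering power taken as proportional to illuminated volume), relative to a hypothetical reference measurement. The paper's actual calculation additionally folds in molecular weight, solvent content, Wilson B factor, wavelength, attenuation and the radiation-damage-limited dose.

```python
def required_diameter_um(photons_needed, reference_diameter_um, reference_photons):
    """Diameter giving `photons_needed` per spot, assuming photons per spot
    scale with crystal volume (d**3) for a small sphere fully in the beam.
    A back-of-the-envelope scaling, not the paper's full model."""
    return reference_diameter_um * (photons_needed / reference_photons) ** (1.0 / 3.0)

# Hypothetical reference: a 10 um sphere yielding 4e5 photons in a given spot;
# how small could the crystal be if only ~100 photons per spot were needed?
print(f"{required_diameter_um(100.0, 10.0, 4e5):.2f} um")
```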


Application of pure and mixed probiotic lactic acid bacteria and yeast cultures for oat fermentation

JOURNAL OF THE SCIENCE OF FOOD AND AGRICULTURE, Issue 12 2005
Associate Professor Dr Angel Angelov
Abstract Fermentation of a prebiotic containing oat substrate with probiotic lactic acid bacteria and yeast strains is an intriguing approach for the development of new synbiotic functional products. This approach was applied in the present work by using pure and mixed microbial cultures to ferment a heat-treated oat mash. Results show that the strains studied were appropriate for oat fermentation and the process could be completed for 6,10 h depending on the strain. The viable cell counts achieved within this time were above the required levels of 106,107 cfu ml,1 for probiotic products. Both single lactic acid bacteria strains and mixed cultures of the same strains with yeast were found suitable for oat fermentation. However, the pure LAB cultures attributed better flavour and shelf life of the oat drinks. The content of the prebiotic oat component beta-glucan remained within 0.30,0.36% during fermentation and storage of the drinks obtained with each of the strains used. Thus, these products would contribute diet with the valuable functional properties of beta-glucan. Also, the viability of pure and mixed cultures in the oat products was good: levels of cell counts remained above the required numbers for probiotic products throughout the estimated shelf-life period. Copyright © 2005 Society of Chemical Industry [source]