Size Determination (size + determination)
Selected Abstracts

PARTICLE SIZE DETERMINATION OF FOOD SUSPENSIONS: APPLICATION TO CLOUDY APPLE JUICE
JOURNAL OF FOOD PROCESS ENGINEERING, Issue 6 2000
D.B. GENOVESE
Abstract: Three different techniques were applied to determine the particle size distribution (PSD) of cloudy apple juice: sedimentation-photometry (S-F), scanning electron microscopy (SEM) and photon correlation spectroscopy (PCS). All three techniques found particles in the range of about 0.05 to 3 micrometres (µm = 10⁻⁶ m). Calculation of the PSD by SEM was based on particle number, whereas calculation by PCS was based on the intensity of scattered light, and calculation by S-F on both weight (or volume) and projected area (or absorbed light). To compare results from these techniques, appropriate equations were used to convert distributions from one basis to another. Three characteristic diameters were also obtained from each distribution: mean, median and mode. Characteristic diameters ranged from 0.88 to 2.50 µm on a weight basis, 0.77 to 2.50 µm on a projected area basis and 0.08 to 0.23 µm on a number basis. Differences between these diameters were due to asymmetry in the distributions.

Sample Size Determination for Categorical Responses
JOURNAL OF FORENSIC SCIENCES, Issue 1 2009
Dimitris Mavridis Ph.D.
Abstract: Procedures are reviewed and recommendations made for the choice of the size of a sample used to estimate the characteristics (sometimes known as parameters) of a population consisting of discrete items which may belong to one and only one of a number of categories, with examples drawn from forensic science. Four sampling procedures are described for binary responses, where the number of possible categories is only two, e.g., licit or illicit pills. One is based on priors informed from historical data. The other three are sequential. The first of these is a sequential probability ratio test with a stopping rule derived by controlling the probabilities of type 1 and type 2 errors. The second is a sequential variation of a procedure based on the predictive distribution of the data yet to be inspected and the distribution of the data that have been inspected, with a stopping rule determined by a prespecified threshold on the probability of a wrong decision. The third is a two-sided sequential criterion which stops sampling when one of two competing hypotheses has a probability of being accepted that is larger than another prespecified threshold. A fifth procedure extends the ideas developed for binary responses to multinomial responses, where the number of possible categories (e.g., types of drug or types of glass) may be more than two. The procedure is sequential and recommends stopping when the joint probability interval or ellipsoid for the estimates of the proportions is smaller than a given threshold. For trinomial data this last procedure is illustrated with a ternary diagram with an ellipse formed around the sample proportions. There is a straightforward generalization of this approach to multinomial populations with more than three categories. A conclusion provides recommendations for sampling procedures in various contexts.
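As a rough illustration of the first sequential procedure described above, the sketch below implements Wald's sequential probability ratio test for a binary inspection problem such as classifying pills as licit or illicit. It is a minimal sketch only; the hypothesised proportions, error rates and data are illustrative assumptions, not values from the paper.

```python
import math

def sprt_bernoulli(observations, p0=0.5, p1=0.9, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test for a binomial proportion.

    Inspect items one at a time (1 = illicit, 0 = licit) and stop as soon as
    the cumulative log-likelihood ratio crosses a decision boundary.
    p0, p1, alpha and beta are illustrative choices, not values from the paper.
    """
    upper = math.log((1 - beta) / alpha)   # accept H1 (proportion near p1) above this
    lower = math.log(beta / (1 - alpha))   # accept H0 (proportion near p0) below this
    llr = 0.0
    for n, x in enumerate(observations, start=1):
        # Update the log-likelihood ratio with the current observation.
        llr += math.log(p1 / p0) if x == 1 else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return n, "accept H1: proportion consistent with p1"
        if llr <= lower:
            return n, "accept H0: proportion consistent with p0"
    return len(observations), "no decision yet - keep sampling"

# Example: the first eight inspected pills all test positive.
print(sprt_bernoulli([1, 1, 1, 1, 1, 1, 1, 1]))
```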
Shape and Size Determination by Laser Diffraction: Average Aspect Ratio and Size Distributions by Volume; Feasibility of Data Analysis by Neural Networks
PARTICLE & PARTICLE SYSTEMS CHARACTERIZATION, Issue 1 2005
Luc Deriemaeker
Abstract: A new strategy for recovering the average shape factor and the volume-weighted size distribution from laser diffraction data using neural networks is presented. The method yields reliable estimates for both the shape factor and the volume-weighted size distribution.

Sample Size Determination for Establishing Equivalence/Noninferiority via Ratio of Two Proportions in Matched-Pair Design
BIOMETRICS, Issue 4 2002
Man-Lai Tang
Summary: In this article, we propose approximate sample size formulas for establishing equivalence or noninferiority of two treatments in a matched-pairs design. Using the ratio of two proportions as the equivalence measure, we derive sample size formulas based on a score statistic for two types of analyses: hypothesis testing and confidence interval estimation. Depending on the purpose of a study, these formulas can be used to provide a sample size estimate that guarantees a prespecified power of a hypothesis test at a certain significance level, or to control the width of a confidence interval with a certain confidence level. Our empirical results confirm that these score methods are reliable in terms of true size, coverage probability and skewness. A liver scan detection study is used to illustrate the proposed methods.

Precise, Small Sample Size Determinations of Lithium Isotopic Compositions of Geological Reference Materials and Modern Seawater by MC-ICP-MS
GEOSTANDARDS & GEOANALYTICAL RESEARCH, Issue 1 2004
Alistair B. Jeffcoate
Keywords: Li isotopic composition; silicate reference materials; seawater; MC-ICP-MS; Li standard
Abstract: The Li isotope ratios of four international rock reference materials, USGS BHVO-2, GSJ JB-2, JG-2 and JA-1, and of modern seawater (Mediterranean, Pacific and North Atlantic) were determined using multi-collector inductively coupled plasma-mass spectrometry (MC-ICP-MS). These natural reference materials were chosen to span a considerable range in Li isotope ratios and to cover several different matrices, in order to provide a useful benchmark for future studies. Our new analytical technique achieves significantly higher precision and reproducibility (< ±0.3‰, 2s) than previous methods, with the additional advantage of requiring very low sample masses of ca. 2 ng of Li.
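Lithium isotope compositions of the kind reported above are conventionally expressed in delta notation relative to a reference standard. The short sketch below shows that calculation; the measured ratio and the reference 7Li/6Li value used here are illustrative assumptions rather than figures from the paper.

```python
def delta_li7(sample_ratio, reference_ratio=12.02):
    """Return delta-7Li in per mil: the relative deviation of the sample's
    7Li/6Li ratio from a reference ratio, multiplied by 1000.

    The default reference ratio (nominally that of the L-SVEC standard) is an
    illustrative value; in practice use the ratio measured for your standard.
    """
    return (sample_ratio / reference_ratio - 1.0) * 1000.0

# Example with an illustrative seawater-like 7Li/6Li measurement.
print(f"delta-7Li = {delta_li7(12.39):+.2f} per mil")
```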
Capillary electrophoresis analysis of glucooligosaccharide regioisomers
ELECTROPHORESIS, Issue 6 2004
Gilles Joucla
Abstract: Complex gluco-oligosaccharide mixtures of two regioisomer series were successfully separated by CE. The gluco-oligosaccharide series were synthesized, employing a dextransucrase from Leuconostoc mesenteroides NRRL B-512F, by successive glucopyranosyl transfers from sucrose to the acceptor glucose or maltose. The glucosyl transfer to both acceptors occurred through the formation of α(1→6) linkages, so the two series differed only in the glucosidic bond at the reducing end, namely an α(1→6) or α(1→4) bond for the glucose or maltose acceptor, respectively. Thus, combining the two series gives mixed pairs of gluco-oligosaccharide regioisomers with different degrees of polymerization (DP). These regioisomer series were first derivatized by reductive amination with 8-aminopyrene-1,3,6-trisulfonate (APTS). Under acidic conditions, using triethylammonium acetate as electrolyte, the APTS-gluco-oligosaccharides of each series were separated, enabling unambiguous size determination by coupling CE to electrospray mass spectrometry. However, neither these acidic conditions nor alkaline buffer systems could be adapted to separate the gluco-oligosaccharide regioisomers arising from the two combined series. By contrast, increased resolution was observed in an alkaline borate buffer, exploiting differential complexation of the regioisomers with borate anions. Such conditions were also successfully applied to the separation of glucodisaccharide regioisomers composed of α(1→2), α(1→3), α(1→4) and α(1→6) linkages commonly synthesized by glucansucrase enzymes.

The pros and cons of noninferiority trials
FUNDAMENTAL & CLINICAL PHARMACOLOGY, Issue 4 2003
Stuart J. Pocock
Abstract: Noninferiority trials comparing a new treatment with an active standard control are becoming increasingly common. This article discusses relevant issues regarding their need, design, analysis and interpretation: the appropriate choice of control group, types of noninferiority trial, ethical considerations, sample size determination and potential pitfalls to consider.
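Sample size determination for a noninferiority trial usually starts from a prespecified margin. As a minimal sketch (not a method taken from Pocock's article), the code below gives the standard normal-approximation sample size per group for a binary endpoint, assuming both treatments share the same true response rate; the response rate, margin and error rates are illustrative.

```python
from scipy.stats import norm

def n_per_group_noninferiority(p, margin, alpha=0.025, power=0.9):
    """Approximate per-group sample size for a noninferiority trial with a
    binary endpoint, assuming both treatments share the true response rate p.

    Uses the usual normal approximation with a one-sided significance level
    alpha and an absolute noninferiority margin; inputs are illustrative.
    """
    z_alpha = norm.ppf(1 - alpha)
    z_beta = norm.ppf(power)
    return (z_alpha + z_beta) ** 2 * 2 * p * (1 - p) / margin ** 2

# Example: 80% response rate, 10 percentage-point margin, 90% power.
print(round(n_per_group_noninferiority(p=0.80, margin=0.10)))  # about 336 per group
```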
Influence of cervical preflaring on apical file size determination
INTERNATIONAL ENDODONTIC JOURNAL, Issue 7 2005
J. D. Pecora
Abstract:
Aim: To investigate the influence of cervical preflaring with different instruments (Gates-Glidden drills, Quantec Flare series instruments and LA Axxess burs) on the first file that binds at working length (WL) in maxillary central incisors.
Methodology: Forty human maxillary central incisors with complete root formation were used. After standard access cavities were prepared, a size 06 K-file was inserted into each canal until the apical foramen was reached. The WL was set 1 mm short of the apical foramen. Group 1 received the initial apical instrument without previous preflaring of the cervical and middle thirds of the root canal. Group 2 had the cervical and middle portions of the root canals enlarged with Gates-Glidden drills sizes 90, 110 and 130. Group 3 had the cervical and middle thirds of the root canals enlarged with nickel-titanium Quantec Flare series instruments. Titanium-nitride-treated stainless steel LA Axxess burs were used to preflare the cervical and middle portions of the root canals in group 4. Each canal was sized using manual K-files, starting with size 08 files advanced with passive movements until the WL was reached. File sizes were increased until a binding sensation was felt at the WL, and the instrument size was recorded for each tooth. The apical region was then observed under a stereoscopic magnifier, images were recorded digitally and the difference between the root canal diameter and the maximum file diameter was evaluated for each sample.
Results: Significant differences were found between the experimental groups regarding the anatomical diameter at the WL and the first file to bind in the canal (P < 0.01, 95% confidence interval). The largest discrepancy was found when no preflaring was performed (0.151 mm on average). The LA Axxess burs produced the smallest difference between the anatomical diameter and the first file to bind (0.016 mm on average). Gates-Glidden drills and Flare instruments ranked in an intermediate position, with no statistically significant difference between them (0.093 mm on average).
Conclusions: The instrument-binding technique for determining the anatomical diameter at the WL is not precise. Preflaring of the cervical and middle thirds of the root canal improved anatomical diameter determination, and the instrument used for preflaring played a major role in determining the anatomical diameter at the WL. Canals preflared with LA Axxess burs showed a more accurate relationship between file size and anatomical diameter.

Design of a clustered observational study to predict emergency admissions in the elderly: statistical reasoning in clinical practice
JOURNAL OF EVALUATION IN CLINICAL PRACTICE, Issue 2 2007
Gillian A. Lancaster MSc PhD CStat
Abstract:
Objective: To describe the statistical design issues and practical considerations that had to be addressed in setting up a clustered observational study of emergency admission to hospital of elderly people.
Study design and setting: Clustered observational study (sample survey) of elderly people registered with 18 general practices in Halton Primary Care Trust in the north-west of England.
Results: The statistical design features that warranted particular attention were sample size determination, intra-class correlation, sampling and recruitment, bias and confounding. Pragmatic decisions based on derived scenarios of different design effects are discussed. A pilot study was carried out in one practice. From the remaining practices, a total of 4000 people were sampled, stratified by gender. The average cluster size was 200 and the intra-class correlation coefficient for the emergency admission outcome was 0.00034, 95% confidence interval (0, 0.008).
Conclusion: Studies that involve sampling from clusters of people are common in a wide range of healthcare settings. The clustering adds an extra level of complexity to the study design. This study provides an empirical illustration of the importance of statistical as well as clinical reasoning in study design in clinical practice.
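The design effect referred to above quantifies how much clustering inflates the required sample size, via the standard formula DEFF = 1 + (m - 1) * ICC for equal-sized clusters. The sketch below uses the average cluster size and intra-class correlation reported in the abstract; the target sample size under simple random sampling is an illustrative assumption, not a figure from the study.

```python
def design_effect(cluster_size, icc):
    """Design effect for equal-sized clusters: 1 + (m - 1) * ICC."""
    return 1 + (cluster_size - 1) * icc

def inflated_sample_size(n_independent, cluster_size, icc):
    """Sample size needed under clustering to match the precision of a
    simple random sample of n_independent people."""
    return n_independent * design_effect(cluster_size, icc)

# Values from the abstract: average cluster size 200, ICC = 0.00034.
print(f"design effect = {design_effect(200, 0.00034):.3f}")
# Illustrative target of 3750 under simple random sampling becomes roughly 4000:
print(f"clustered sample size = {inflated_sample_size(3750, 200, 0.00034):.0f}")
```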
Using historical data for Bayesian sample size determination
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 1 2007
Fulvio De Santis
Summary: We consider the sample size determination (SSD) problem, a basic yet extremely important aspect of experimental design. Specifically, we deal with the Bayesian approach to SSD, which gives researchers the possibility of taking into account pre-experimental information and uncertainty about unknown parameters. At the design stage, this offers the advantage of removing or mitigating typical drawbacks of classical methods, which might otherwise lead to serious miscalculation of the sample size. In this context, the leading idea is to choose the minimal sample size that guarantees probabilistic control of the performance of quantities that are derived from the posterior distribution and used for inference on parameters of interest. We are concerned with the use of historical data, i.e. observations from previous similar studies, for SSD. We illustrate how the class of power priors can be fruitfully employed to deal with lack of homogeneity between the historical data and the observations of the upcoming experiment. This problem, in fact, creates the need to discount prior information and to evaluate the effect of heterogeneity on the optimal sample size. Some of the most popular Bayesian SSD methods are reviewed and their use, in concert with power priors, is illustrated in several medical experimental contexts.

Optimal predictive sample size for case-control studies
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 3 2004
Fulvio De Santis
Summary: The identification of factors that increase the chances of a certain disease is one of the classical and central issues in epidemiology. In this context, a typical measure of the association between a disease and a risk factor is the odds ratio. We deal with design problems that arise for Bayesian inference on the odds ratio in the analysis of case-control studies. We consider sample size determination and allocation criteria for both interval estimation and hypothesis testing. These criteria are then employed to determine the sample size and the proportions of units to be assigned to cases and controls for planning a study on the association between the incidence of non-Hodgkin's lymphoma and exposure to pesticides, by eliciting prior information from a previous study.

Sample Size Reassessment in Adaptive Clinical Trials Using a Bias Corrected Estimate
BIOMETRICAL JOURNAL, Issue 7 2003
Silke Coburger
Abstract: Point estimation in group sequential and adaptive trials is an important issue in analysing a clinical trial. Most literature in this area is concerned only with estimation after completion of a trial. Since adaptive designs allow reassessment of the sample size during the trial, reliable point estimation of the true effect when continuing the trial is additionally needed. We present a bias-adjusted estimator which allows a more exact sample size determination based on the conditional power principle than the naive sample mean does.
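To make mid-trial sample size reassessment concrete, the sketch below recomputes the per-group sample size of a two-sample z-test from an interim estimate of the treatment effect. It deliberately uses the naive interim mean that the abstract argues should be replaced by a bias-corrected estimate; all numbers are illustrative assumptions.

```python
from scipy.stats import norm

def reassessed_n_per_group(interim_effect, sd, alpha=0.025, power=0.9):
    """Per-group sample size for a two-sample z-test, recomputed from an
    interim estimate of the treatment effect (naive reassessment).

    The abstract's point is that plugging in the raw interim mean is biased;
    a bias-corrected estimate should be used instead. Inputs are illustrative.
    """
    z_alpha = norm.ppf(1 - alpha)
    z_beta = norm.ppf(power)
    return 2 * (sd * (z_alpha + z_beta) / interim_effect) ** 2

# Example: interim effect estimate 0.4 with outcome standard deviation 1.0.
print(round(reassessed_n_per_group(interim_effect=0.4, sd=1.0)))  # about 131 per group
```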
Computer-assisted calculation of myocardial infarct size shortens the evaluation time of contrast-enhanced cardiac MRI
CLINICAL PHYSIOLOGY AND FUNCTIONAL IMAGING, Issue 1 2008
Lene Rosendahl
Summary:
Background: Delayed enhancement magnetic resonance imaging depicts scar in the left ventricle, which can be measured quantitatively. Manual segmentation and scar determination is time consuming. The purpose of this study was to evaluate a software tool for infarct quantification, to compare it with manual scar determination and to measure the time saved.
Methods: Delayed enhancement magnetic resonance imaging was performed in 40 patients in whom myocardial perfusion single photon emission computed tomography imaging showed an irreversible uptake reduction suggesting a myocardial scar. After segmentation, the semi-automatic software was applied. A scar area was displayed, which could be corrected and compared with manual delineation. The time taken for the individual steps was recorded with both methods.
Results: The software shortened the average evaluation time by 12.4 min per cardiac examination compared with manual delineation. There was good correlation of myocardial volume, infarct volume and infarct percentage (%) between the two methods (r = 0.95, r = 0.92 and r = 0.91, respectively).
Conclusion: Computer software for myocardial volume and infarct size determination cut the evaluation time by more than 50% compared with manual assessment, with maintained clinical accuracy.

Involvement of Iron (Ferric) Reduction in the Iron Absorption Mechanism of a Trivalent Iron-Protein Complex (Iron Protein Succinylate)
BASIC AND CLINICAL PHARMACOLOGY & TOXICOLOGY, Issue 3 2000
Kishor B. Raja
Abstract: Iron protein succinylate is a non-toxic therapeutic iron compound. We set out to characterise the structure of this compound and to investigate the importance of digestion and intestinal reduction in determining its absorption. The structure of the compound was investigated by variable-temperature Mössbauer spectroscopy, molecular size determinations and the kinetics of iron release to chelators. Intestinal uptake was determined with radioactive compound force-fed to mice. Reduction of the compound was determined by in vitro incubation with intestinal fragments. The compound was found to contain only ferric iron, present as small particles including sizes below 10 nm. The iron was released rapidly to chelators. Digestion with trypsin reduced the molecular size of the compound. Intestinal absorption of the compound was inhibited by a ferrous chelator (ferrozine), indicating that reduction to ferrous iron may be important for absorption. The native compound was a poor substrate for duodenal reduction activity, but digestion with pepsin, followed by pancreatin, released soluble iron complexes with an increased reduction rate. We conclude that iron protein succinylate is absorbed by a mechanism involving digestion to release soluble, available ferric species, which may be reduced at the mucosal surface to provide ferrous iron for membrane transport into enterocytes.