Sample Size (sample + size)

Distribution by Scientific Domains
Distribution within Medical Sciences

Kinds of Sample Size

  • adequate sample size
  • calculating sample size
  • expected sample size
  • insufficient sample size
  • large sample size
  • larger sample size
  • limited sample size
  • required sample size
  • small sample size
  • smaller sample size
  • smallest sample size
  • total sample size
  • very large sample size

Terms modified by Sample Size

  • sample size calculation
  • sample size determination
  • sample size estimation
  • sample size formula
  • sample size requirement

Selected Abstracts


    INFLUENCE OF SAMPLE SIZE AND SHAPE ON TRANSPORT PARAMETERS DURING DRYING OF SHRINKING BODIES

    JOURNAL OF FOOD PROCESS ENGINEERING, Issue 2 2007
    NAJMUR RAHMAN
    ABSTRACT An experimental investigation on the influence of sample size and shape on heat and mass transport parameters under natural convection air-drying is presented. Potato cylinders with a length of 0.05 m and thicknesses of 0.005, 0.008, 0.010 and 0.016 m, and circular slices with a diameter of 0.05 m and a thickness of 0.01 m were dried in a laboratory-scale hot-air cabinet dryer. Results indicate that each transport parameter exhibits a linear relationship with sample thickness. Convective heat and mass transfer coefficients (hc and hm) decreased whereas the moisture diffusion coefficient (Deff) increased with increasing thickness. When no sample shrinkage effect is considered in the parameter analysis, for the thickness range considered, the values of hc are underestimated in the range of 29.0–30.6%, whereas those of hm and Deff are overestimated in the range of 33.7–38.0% and 75.9–128.1%, respectively. Using the Levenberg–Marquardt algorithm for optimization, a correlation for the Biot number for mass transfer (Bim) as a function of drying time and sample thickness is proposed. Close agreement was observed between dimensionless moisture contents predicted by this relation and those obtained from experiments for different sample thicknesses at a drying air temperature of 60°C. For the same thickness and drying conditions, circular slices increased each transport parameter significantly. [source]
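
    The correlation above is fitted with the Levenberg–Marquardt algorithm. As a minimal illustration, SciPy's curve_fit (whose default method for unconstrained problems is Levenberg–Marquardt) can fit a Biot-number correlation of assumed power-law form; the functional form, variable names and data below are placeholders, not the study's actual correlation or measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical correlation form: Bim = a * t**b * L**c
# (the paper's actual functional form is not given in the abstract)
def biot_model(X, a, b, c):
    t, L = X                      # drying time (min), slab thickness (m)
    return a * t**b * L**c

# Illustrative data only -- not measurements from the study
rng = np.random.default_rng(1)
t = np.repeat(np.arange(30, 331, 30), 4).astype(float)
L = np.tile([0.005, 0.008, 0.010, 0.016], 11)
bim_obs = 0.8 * t**0.25 * L**-0.1 * rng.normal(1.0, 0.03, t.size)

# method='lm' is the Levenberg-Marquardt algorithm (SciPy's default for unconstrained fits)
popt, pcov = curve_fit(biot_model, (t, L), bim_obs, p0=[1.0, 0.2, -0.1], method='lm')
perr = np.sqrt(np.diag(pcov))     # 1-sigma parameter uncertainties
print("a, b, c =", popt, "+/-", perr)
```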


    MEASUREMENT OF FIRMNESS OF FRESH-CUT SLICED TOMATO USING PUNCTURE TESTS – STUDIES ON SAMPLE SIZE, PROBE SIZE AND DIRECTION OF PUNCTURE

    JOURNAL OF TEXTURE STUDIES, Issue 5 2007
    MILZA M. LANA
    ABSTRACT In order to investigate the firmness of tomato slices, two experiments were performed. In the first one, Monte Carlo simulation was used to study the variation in firmness within and between slices. Adding more slices and more measurements per slice reduced the SD, but in general, the efficiency of adding more slices was higher. In the second experiment, the firmness of tomato slices was measured by puncture test during storage, using one of three flat-tipped cylindrical probes (3.5-, 2.5- and 1.5-mm diameter) in two directions, along or perpendicular to the main axis of the fruit. Changes in firmness were studied by nonlinear regression analysis. The same model could be applied to all combinations of probe size and direction with the same correction for shear and compression. It suggests that shear and compression forces decay with storage time according to the same mechanism, irrespective of the measurement direction. PRACTICAL APPLICATIONS Methodologies for both firmness evaluation and data analysis were presented. Monte Carlo simulation was used to optimize the number of samples for firmness assays. After calculating the experimental SD from preliminary experimental results, simulations were performed with different numbers of replicates and measurements per replicate, to find an optimal experimental design where the SD is minimized. Using nonlinear regression, the effects on firmness of probe size, puncture direction in relation to the plant tissue and storage time can be analyzed simultaneously. The incorporation of a correction factor to account for differences in firmness due to probe size was proposed. The relative influence of shear (s) and compression force (c) on the observed force is estimated. Results of interest for the industry were presented, confirming previous findings that the firmness of ripened tomato slices measured by puncture analysis does not change significantly during short-term storage at low temperature. [source]
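
    The Monte Carlo step described above can be sketched with a simple two-level variance-components model: simulate designs with different numbers of slices and punctures per slice, then compare the resulting SD of the overall mean. The variance values below are placeholders, not estimates from the tomato data.

```python
import numpy as np

rng = np.random.default_rng(0)
SIGMA_BETWEEN = 1.0   # SD of firmness between slices (placeholder)
SIGMA_WITHIN = 0.6    # SD of repeated punctures within a slice (placeholder)

def sd_of_design_mean(n_slices, n_meas, n_sim=20_000):
    """Monte Carlo SD of the overall mean firmness for a given design."""
    slice_means = rng.normal(0.0, SIGMA_BETWEEN, size=(n_sim, n_slices))
    noise = rng.normal(0.0, SIGMA_WITHIN, size=(n_sim, n_slices, n_meas))
    overall_mean = (slice_means[:, :, None] + noise).mean(axis=(1, 2))
    return overall_mean.std()

for n_slices in (2, 4, 8):
    for n_meas in (1, 2, 4):
        print(n_slices, "slices x", n_meas, "punctures:",
              round(sd_of_design_mean(n_slices, n_meas), 3))
```

    With these placeholder variances, adding slices shrinks the SD faster than adding punctures per slice, which is the pattern the abstract reports.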


    DIET OF HARBOR PORPOISES IN THE KATTEGAT AND SKAGERRAK SEAS: ACCOUNTING FOR INDIVIDUAL VARIATION AND SAMPLE SIZE

    MARINE MAMMAL SCIENCE, Issue 1 2003
    Patrik Börjesson
    Abstract Stomach contents of 112 bycaught harbor porpoises (Phocoena phocoena) collected between 1989 and 1996 in the Kattegat and Skagerrak seas were analyzed to describe diet composition and estimate prey size, to examine sample size requirements, and to compare juvenile and adult diets. Although porpoises preyed on a variety of species, only a few contributed substantially to the diet. Atlantic herring (Clupea harengus) was the dominant prey species for both juveniles and adults. Our results, in combination with those from previous studies, suggest that where herring is a dominant food source, porpoises prey primarily on size classes containing mature or maturing individuals. Further, we also show that Atlantic hagfish (Myxine glutinosa) may be an important food resource, at least for adult porpoises. Examination of sample size requirements showed that, depending on the taxonomic level used to describe the diet, a minimum of 35–71 stomachs is needed to be confident that all common prey species will be found. [source]
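
    The sample size requirement (35–71 stomachs) comes from asking how many randomly ordered stomachs are needed before the common prey taxa have all been observed. A minimal randomized cumulative prey curve sketch follows; the presence–absence matrix is simulated, not the porpoise data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated stomach x prey presence-absence matrix (rows: stomachs, cols: prey taxa)
n_stomachs, n_taxa = 112, 15
prevalence = rng.uniform(0.05, 0.6, n_taxa)          # per-taxon occurrence probability
stomachs = rng.random((n_stomachs, n_taxa)) < prevalence

def cumulative_taxa_curve(matrix, n_perm=500):
    """Mean number of prey taxa detected as stomachs are added in random order."""
    n = matrix.shape[0]
    curves = np.empty((n_perm, n))
    for b in range(n_perm):
        order = rng.permutation(n)
        seen = np.cumsum(matrix[order], axis=0) > 0   # taxon detected by stomach i?
        curves[b] = seen.sum(axis=1)
    return curves.mean(axis=0)

curve = cumulative_taxa_curve(stomachs)
target = 0.95 * stomachs.any(axis=0).sum()            # 95% of taxa present in the data
print("stomachs needed:", int(np.argmax(curve >= target)) + 1)
```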


    HOW TO EXPAND THE 'SAMPLE SIZE' OF STUDIES OF DRUG MARKET DISRUPTIONS

    ADDICTION, Issue 3 2009
    JONATHAN CAULKINS
    No abstract is available for this article. [source]


    Detecting the Effects of Fishing on Seabed Community Diversity: Importance of Scale and Sample Size

    CONSERVATION BIOLOGY, Issue 2 2003
    Michel J. Kaiser
    Anthropogenic disturbances of terrestrial and marine environments, such as logging and fishing, are generally assumed to have negative impacts on species diversity, but empirical observations often do not support this assumption. I investigated the importance of the extent of area sampled to the observed outcome of comparisons of the diversity of seabed assemblages in different areas of the seabed that experience either low or high levels of fishing disturbance. Using a finite data set within each disturbance regime, I pooled samples of the benthic communities at random. Thus, although individual sample size increased with each additional level of pooled data, the number of samples decreased accordingly. Detecting the effects of disturbance on species diversity was strongly scale-dependent. Despite increased replication at smaller scales, disturbance effects were more apparent when larger but less numerous samples were collected. The detection of disturbance effects was also affected by the choice of sampling device. Disturbance effects were apparent with pooled anchor-dredge samples but were not apparent with pooled beam-trawl samples. A more detailed examination of the beam-trawl data emphasized that a whole-community approach to the investigation of changes in diversity can miss responses in particular components of the community (e.g., decapod crustacea). The latter may be more adversely affected by disturbance than the majority of the taxa found within the benthic assemblage. Further, the diversity of some groups (e.g., echinoderms) actually increased with disturbance. Experimental designs and sampling regimes that focus on diversity at only one scale may miss important disturbance effects that occur at larger or smaller scales. [source]


    Using Profile Monitoring Techniques for a Data-rich Environment with Huge Sample Size

    QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 7 2005
    Kaibo Wang
    Abstract In-process sensors with huge sample size are becoming popular in the modern manufacturing industry, due to the increasing complexity of processes and products and the availability of advanced sensing technology. Under such a data-rich environment, a sample with huge size usually violates the assumption of homogeneity and degrades the detection performance of a conventional control chart. Instead of charting summary statistics such as the mean and standard deviation of observations that assume homogeneity within a sample, this paper proposes charting schemes based on the quantile–quantile (Q–Q) plot and profile monitoring techniques to improve the performance. Different monitoring schemes are studied based on various shift patterns in a huge sample and compared via simulation. Guidelines are provided for applying the proposed schemes to similar industrial applications in a data-rich environment. Copyright © 2005 John Wiley & Sons, Ltd. [source]
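
    To make the idea concrete, one way to monitor a huge, possibly non-homogeneous sample is to regress its ordered observations on reference quantiles and chart the coefficients of the fitted Q–Q line. The sketch below assumes a normal reference and simple 3-sigma limits; it illustrates the general idea only and is not one of the specific schemes compared in the paper.

```python
import numpy as np
from scipy.stats import norm

def qq_profile(sample):
    """Slope and intercept of the sample's Q-Q line against standard normal quantiles."""
    x = np.sort(np.asarray(sample, float))
    n = x.size
    q = norm.ppf((np.arange(1, n + 1) - 0.5) / n)     # reference quantiles
    slope, intercept = np.polyfit(q, x, 1)
    return slope, intercept

rng = np.random.default_rng(3)
# Phase I: in-control samples used to set 3-sigma limits on the profile coefficients
phase1 = [qq_profile(rng.normal(10, 2, 5000)) for _ in range(50)]
slopes, intercepts = np.array(phase1).T
s_lim = slopes.mean() + 3 * slopes.std() * np.array([-1, 1])
i_lim = intercepts.mean() + 3 * intercepts.std() * np.array([-1, 1])

# Phase II: a sample whose second half has shifted, violating within-sample homogeneity
shifted = np.concatenate([rng.normal(10, 2, 2500), rng.normal(12, 2, 2500)])
s, i = qq_profile(shifted)
print("slope in limits:", s_lim[0] <= s <= s_lim[1],
      "| intercept in limits:", i_lim[0] <= i <= i_lim[1])
```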


    Optimal Design of the Adaptive Sample Size and Sampling Interval np Control Chart

    QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 6 2004
    Zhang Wu
    Abstract Recent research has shown that the adaptive control charts are quicker than the traditional static charts in detecting process shifts. This paper develops the algorithm for the optimization designs of the adaptive np control charts for monitoring the process fraction non-conforming p. It includes the variable sample size chart, the variable sampling interval chart, and the variable sample size and sampling interval chart. The performance of the adaptive np charts is measured by the average time to signal under the steady-state mode, which allows the shift in p to occur at any time, even during the sampling inspection. By studying the performance of the adaptive np charts systematically, it is found that they do improve effectiveness significantly, especially for detecting small or moderate process shifts. Copyright © 2004 John Wiley & Sons, Ltd. [source]
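
    For context, the static np chart that the adaptive versions improve on uses fixed 3-sigma limits around n·p0. A minimal sketch of those baseline limits follows; the adaptive sample-size and sampling-interval optimization developed in the paper is not reproduced here.

```python
import math

def np_chart_limits(n, p0, k=3.0):
    """Center line and k-sigma limits for a classical (static) np chart."""
    center = n * p0
    sigma = math.sqrt(n * p0 * (1.0 - p0))
    lcl = max(0.0, center - k * sigma)
    ucl = center + k * sigma
    return lcl, center, ucl

# Example: samples of 200 items, in-control fraction non-conforming p0 = 0.02
print(np_chart_limits(200, 0.02))   # counts outside these limits signal a shift in p
```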


    Calculating Sample Size for Studies with Expected All-or-None Nonadherence and Selection Bias

    BIOMETRICS, Issue 2 2009
    Michelle D. Shardell
    Summary We develop sample size formulas for studies aiming to test mean differences between a treatment and control group when all-or-none nonadherence (noncompliance) and selection bias are expected. Recent work by Fay, Halloran, and Follmann (2007, Biometrics 63, 465–474) addressed the increased variances within groups defined by treatment assignment when nonadherence occurs, compared to the scenario of full adherence, under the assumption of no selection bias. In this article, we extend the authors' approach to allow selection bias in the form of systematic differences in means and variances among latent adherence subgroups. We illustrate the approach by performing sample size calculations to plan clinical trials with and without pilot adherence data. Sample size formulas and tests for normally distributed outcomes are also developed in a Web Appendix that account for uncertainty of estimates from external or internal pilot data. [source]
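
    For orientation, a common first-order adjustment for all-or-none nonadherence simply dilutes the detectable mean difference by the crossover proportions and inflates the usual two-sample size accordingly. The sketch below shows that classical adjustment only; it is not the variance-corrected approach of Fay et al. or the selection-bias extension developed in this article.

```python
from math import ceil
from scipy.stats import norm

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Standard two-sample normal-theory sample size per group."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sigma / delta) ** 2

def n_with_nonadherence(delta, sigma, drop_out, drop_in, alpha=0.05, power=0.80):
    """Classic ITT dilution adjustment for all-or-none nonadherence:
    the observable mean difference shrinks to (1 - drop_out - drop_in) * delta."""
    diluted = (1.0 - drop_out - drop_in) * delta
    return ceil(n_per_group(diluted, sigma, alpha, power))

print(ceil(n_per_group(0.5, 1.0)))                # full adherence
print(n_with_nonadherence(0.5, 1.0, 0.15, 0.05))  # 15% drop-out, 5% drop-in
```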


    Rejoinder to "Modification of the Computational Procedure in Parker and Bregman's Method of Calculating Sample Size from Matched Case–Control Studies with a Dichotomous Exposure"

    BIOMETRICS, Issue 4 2005
    Robert A. Parker
    No abstract is available for this article. [source]


    A New Method for Choosing Sample Size for Confidence Interval–Based Inferences

    BIOMETRICS, Issue 3 2003
    Michael R. Jiroutek
    Summary. Scientists often need to test hypotheses and construct corresponding confidence intervals. In designing a study to test a particular null hypothesis, traditional methods lead to a sample size large enough to provide sufficient statistical power. In contrast, traditional methods based on constructing a confidence interval lead to a sample size likely to control the width of the interval. With either approach, a sample size so large as to waste resources or introduce ethical concerns is undesirable. This work was motivated by the concern that existing sample size methods often make it difficult for scientists to achieve their actual goals. We focus on situations which involve a fixed, unknown scalar parameter representing the true state of nature. The width of the confidence interval is defined as the difference between the (random) upper and lower bounds. An event width is said to occur if the observed confidence interval width is less than a fixed constant chosen a priori. An event validity is said to occur if the parameter of interest is contained between the observed upper and lower confidence interval bounds. An event rejection is said to occur if the confidence interval excludes the null value of the parameter. In our opinion, scientists often implicitly seek to have all three occur: width, validity, and rejection. New results illustrate that neglecting rejection or width (and less so validity) often provides a sample size with a low probability of the simultaneous occurrence of all three events. We recommend considering all three events simultaneously when choosing a criterion for determining a sample size. We provide new theoretical results for any scalar (mean) parameter in a general linear model with Gaussian errors and fixed predictors. Convenient computational forms are included, as well as numerical examples to illustrate our methods. [source]
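
    The width, validity and rejection events can be estimated jointly by simulation for any candidate sample size. A minimal sketch for a one-sample t interval follows; the effect size, SD and width threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy import stats

def event_probabilities(n, theta=0.4, sigma=1.0, width_max=0.5,
                        alpha=0.05, n_sim=20_000, seed=7):
    """Monte Carlo probabilities of the width, rejection and joint
    width-validity-rejection events for a one-sample t interval (null value 0)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(theta, sigma, size=(n_sim, n))
    m, s = x.mean(axis=1), x.std(axis=1, ddof=1)
    half = stats.t.ppf(1 - alpha / 2, n - 1) * s / np.sqrt(n)
    lo, hi = m - half, m + half
    width = 2 * half < width_max                 # interval narrower than the target width
    validity = (lo <= theta) & (theta <= hi)     # interval covers the true parameter
    rejection = (lo > 0) | (hi < 0)              # interval excludes the null value
    return width.mean(), rejection.mean(), (width & validity & rejection).mean()

for n in (30, 60, 90, 120):
    print(n, [round(p, 3) for p in event_probabilities(n)])
```

    Running the loop shows the point made in the abstract: a sample size chosen for power (rejection) alone can leave the probability of the joint event well below the nominal target until n also controls the interval width.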


    Sample Size in Pilot Genetic Studies

    CLINICAL AND TRANSLATIONAL SCIENCE, Issue 2 2008
    Terry Hyslop Ph.D.
    No abstract is available for this article. [source]


    Determination of Sample Sizes for Demonstrating Efficacy of Radiation Countermeasures

    BIOMETRICS, Issue 1 2010
    Ralph L. Kodell
    Summary In response to the ever-increasing threat of radiological and nuclear terrorism, active development of nontoxic new drugs and other countermeasures to protect against and/or mitigate adverse health effects of radiation is ongoing. Although the classical LD50 study used for many decades as a first step in preclinical toxicity testing of new drugs has been largely replaced by experiments that use fewer animals, the need to evaluate the radioprotective efficacy of new drugs necessitates the conduct of traditional LD50 comparative studies (FDA, 2002, Federal Register 67, 37988–37998). There is, however, no readily available method to determine the number of animals needed for establishing efficacy in these comparative potency studies. This article presents a sample-size formula based on Student's t for comparative potency testing. It is motivated by the U.S. Food and Drug Administration's (FDA's) requirements for robust efficacy data in the testing of response modifiers in total body irradiation experiments, where human studies are not ethical or feasible. Monte Carlo simulation demonstrated the formula's performance for Student's t, Wald, and likelihood ratio tests in both logistic and probit models. Importantly, the results showed clear potential for justifying the use of substantially fewer animals than are customarily used in these studies. The present article may thus initiate a dialogue among researchers who use animals for radioprotection survival studies, institutional animal care and use committees, and drug regulatory bodies to reach a consensus on the number of animals needed to achieve statistically robust results for demonstrating efficacy of radioprotective drugs. [source]
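
    As background, a generic Student's-t sample size for comparing two means can be solved iteratively because the degrees of freedom depend on n. The sketch below uses the standard approximation with central t quantiles; it is not the comparative-potency formula derived in the article.

```python
import math
from scipy import stats

def n_per_group_t(delta, sigma, alpha=0.05, power=0.80):
    """Smallest per-group n for a two-sided two-sample t test, iterating on the df."""
    n = 2  # starting value
    for _ in range(100):
        df = 2 * (n - 1)
        t_a = stats.t.ppf(1 - alpha / 2, df)
        t_b = stats.t.ppf(power, df)
        n_new = math.ceil(2 * ((t_a + t_b) * sigma / delta) ** 2)
        if n_new == n:
            return n
        n = n_new
    return n

print(n_per_group_t(delta=1.0, sigma=1.5))   # e.g. detect a 1-unit difference, SD 1.5
```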


    Comparison of Neural Networks and Gravity Models in Trip Distribution

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 2 2006
    Frans Tillema
    Modeling the distribution of trips between zones is complex and dependent on the quality and availability of field data. This research explores the performance of neural networks in trip distribution modeling and compares the results with commonly used doubly constrained gravity models. The approach differs from other research in several respects; the study is based on both synthetic data, varying in complexity, as well as real-world data. Furthermore, neural networks and gravity models are calibrated using different percentages of hold out data. Extensive statistical analyses are conducted to obtain necessary sample sizes for significant results. The results show that neural networks outperform gravity models when data are scarce in both synthesized as well as real-world cases. Sample size for statistically significant results is forty times lower for neural networks. [source]
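
    The benchmark referenced above is the doubly constrained gravity model T_ij = A_i·O_i·B_j·D_j·f(c_ij), whose balancing factors are found iteratively (Furness/IPF). A minimal sketch with a negative-exponential deterrence function and made-up zone totals:

```python
import numpy as np

def doubly_constrained_gravity(origins, destinations, cost, beta=0.1,
                               tol=1e-8, max_iter=1000):
    """Iteratively balance T_ij = A_i * O_i * B_j * D_j * exp(-beta * c_ij)."""
    f = np.exp(-beta * cost)
    A = np.ones(len(origins))
    B = np.ones(len(destinations))
    for _ in range(max_iter):
        A_new = 1.0 / (f @ (B * destinations))        # A_i = 1 / sum_j B_j D_j f_ij
        B_new = 1.0 / (f.T @ (A_new * origins))       # B_j = 1 / sum_i A_i O_i f_ij
        if np.allclose(A_new, A, rtol=tol) and np.allclose(B_new, B, rtol=tol):
            A, B = A_new, B_new
            break
        A, B = A_new, B_new
    return (A * origins)[:, None] * (B * destinations)[None, :] * f

O = np.array([400., 300., 300.])          # trips produced per origin zone (made up)
D = np.array([350., 400., 250.])          # trips attracted per destination zone
c = np.array([[2., 5., 9.], [5., 3., 7.], [9., 7., 4.]])   # travel costs between zones
T = doubly_constrained_gravity(O, D, c)
print(T.round(1), T.sum(axis=1), T.sum(axis=0))   # row sums ~ O, column sums ~ D
```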


    Evaluation of statistical methods for left-censored environmental data with nonuniform detection limits

    ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 9 2006
    Parikhit Sinha
    Abstract Monte Carlo simulations were used to evaluate statistical methods for estimating 95% upper confidence limits of mean constituent concentrations for left-censored data with nonuniform detection limits. Two primary scenarios were evaluated: data sets with 15 to 50% nondetected samples and data sets with 51 to 80% nondetected samples. Sample size and the percentage of nondetected samples were allowed to vary randomly to generate a variety of left-censored data sets. All statistical methods were evaluated for efficacy by comparing the 95% upper confidence limits for the left-censored data with the 95% upper confidence limits for the noncensored data and by determining percent coverage of the true mean (μ). For data sets with 15 to 50% nondetected samples, the trimmed mean, Winsorization, Aitchison's, and log-probit regression methods were evaluated. The log-probit regression was the only method that yielded sufficient coverage (99–100%) of μ, as well as a high correlation coefficient (r2 = 0.99) and small average percent residuals (≤0.1%) between upper confidence limits for censored versus noncensored data sets. For data sets with 51 to 80% nondetected samples, a bounding method was effective (r2 = 0.96–0.99, average residual = −5% to −7%, 95–98% coverage of μ), except when applied to distributions with low coefficients of variation (standard deviation/μ < 0.5). Thus, the following recommendations are supported by this research: for data sets with 15 to 50% nondetected samples, the log-probit regression method and use of the Chebyshev theorem to estimate 95% upper confidence limits; for data sets with 51 to 80% nondetected samples, the bounding method and use of the Chebyshev theorem to estimate 95% upper confidence limits. [source]
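
    The recommended Chebyshev 95% upper confidence limit of the mean is distribution-free: UCL = xbar + sqrt(1/alpha − 1)·s/sqrt(n). The sketch below applies it after a simple half-detection-limit substitution, which stands in for the log-probit regression step described in the abstract.

```python
import numpy as np

def chebyshev_ucl(values, alpha=0.05):
    """Distribution-free (1 - alpha) upper confidence limit of the mean based on
    the Chebyshev inequality: xbar + sqrt(1/alpha - 1) * s / sqrt(n)."""
    x = np.asarray(values, float)
    n = x.size
    return x.mean() + np.sqrt(1.0 / alpha - 1.0) * x.std(ddof=1) / np.sqrt(n)

# Left-censored concentrations: (value, detected?) with nonuniform detection limits
data = [(0.8, True), (1.2, True), (0.5, False), (2.4, True), (0.2, False), (3.1, True)]
# Placeholder handling of nondetects: substitute half the detection limit
# (the paper's log-probit regression instead estimates these values from the detects)
x = np.array([v if det else v / 2.0 for v, det in data])
print("95% Chebyshev UCL:", round(chebyshev_ucl(x), 2))
```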


    Outcome of secondary root canal treatment: a systematic review of the literature

    INTERNATIONAL ENDODONTIC JOURNAL, Issue 12 2008
    Y.-L. Ng
    Abstract Aims: (i) To investigate the effects of study characteristics on the reported success rates of secondary root canal treatment (2°RCT or root canal retreatment); and (ii) to investigate the effects of clinical factors on the success of 2°RCT. Methodology: Longitudinal human clinical studies investigating the outcome of 2°RCT which were published up to the end of 2006 were identified electronically (MEDLINE and Cochrane database 1966–2006 Dec, week 4). Four journals (Dental Traumatology, International Endodontic Journal, Journal of Endodontics, Oral Surgery Oral Medicine Oral Pathology Endodontics Radiology), bibliographies of all relevant papers and review articles were hand-searched. Two reviewers (Y-LN, KG) independently assessed and selected the studies based on specified inclusion criteria and extracted the data onto a pre-designed proforma, independently. The criteria were: (i) clinical studies on 2°RCT; (ii) stratified analyses available for 2°RCT where 1°RCT data were included; (iii) sample size given and larger than 10; (iv) at least 6-month post-operative review; (v) success based on clinical and/or radiographic criteria (strict = absence of apical radiolucency; loose = reduction in size of radiolucency); and (vi) overall success rate given or calculable from the raw data. Three strands of evidence or analyses were used to triangulate a consensus view. The reported findings from individual studies, including those excluded from quantitative analysis, were utilized for the intuitive synthesis, which constituted the first strand of evidence. Secondly, the pooled weighted success rates by each study characteristic and potential prognostic factor were estimated using the random effect model. Thirdly, the effects of study characteristics and prognostic factors (expressed as odds ratios) on success rates were estimated using fixed and random effects meta-analysis with DerSimonian and Laird's methods. Meta-regression models were used to explore potential sources of statistical heterogeneity. Study characteristics considered in the meta-regression analyses were: decade of publication, study-specific criteria for success (radiographic, combined radiographic and clinical), unit of outcome measure (tooth, root), duration after treatment when assessing success ('at least 4 years' or '<4 years'), geographic location of the study (North American, Scandinavian, other countries), and qualification of the operator (undergraduate students, postgraduate students, general dental practitioners, specialist or mixed group). Results: Of the 40 papers identified, 17 studies published between 1961 and 2005 were included; none were published in 2006. The majority of studies were retrospective (n = 12) and only five prospective. The pooled weighted success rate of 2°RCT judged by complete healing was 76.7% (95% CI 73.6%, 89.6%) and by incomplete healing, 77.2% (95% CI 61.1%, 88.1%). The success rates by 'decade of publication' and 'geographic location of study' were not significantly different at the 5% level. Eighteen clinical factors had been investigated in various combinations in previous studies. The most frequently and thoroughly investigated were 'periapical status' (n = 13), 'size of lesion' (n = 7), and 'apical extent of RF' (n = 5), which were found to be significant prognostic factors. The effect of different aspects of primary treatment history and re-treatment procedures has been poorly tested. Conclusions: The pooled estimated success rate of secondary root canal treatment was 77%. The presence of a pre-operative periapical lesion, the apical extent of the root filling and the quality of the coronal restoration proved significant prognostic factors, with concurrence between all three strands of evidence, whilst the effects of 1°RCT history and 2°RCT protocol have been poorly investigated. [source]


    Antibacterial efficacy of calcium hydroxide intracanal dressing: a systematic review and meta-analysis

    INTERNATIONAL ENDODONTIC JOURNAL, Issue 1 2007
    C. Sathorn
    Abstract Aim: To determine to what extent calcium hydroxide intracanal medication eliminates bacteria from human root canals, compared with the same canals before medication, as measured by the number of positive cultures, in patients undergoing root canal treatment for apical periodontitis (teeth with an infected root canal system). Methodology: CENTRAL, MEDLINE and EMBASE databases were searched. Reference lists from identified articles were scanned. A forward search was undertaken on the authors of the identified articles. Papers that had cited these articles were also identified through the Science Citation Index to identify potentially relevant subsequent primary research. Review methods: The included studies were pre-/post-test clinical trials comparing the number of positive bacterial cultures from treated canals. Data in those studies were independently extracted. Risk differences of included studies were combined using the generic inverse variance and random effect method. Results: Eight studies were identified and included in the review, covering 257 cases. Sample size varied from 18 to 60 cases; six studies demonstrated a statistically significant difference between pre- and post-medicated canals, whilst two did not. There was considerable heterogeneity among studies. Pooled risk difference was −21%; 95% CI: −47% to 6%. The difference between pre- and post-medication was not statistically significant (P = 0.12). Conclusions: Calcium hydroxide has limited effectiveness in eliminating bacteria from human root canals when assessed by culture techniques. [source]
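
    The pooling described above uses generic inverse-variance weights with a random-effects component. A minimal DerSimonian–Laird sketch for risk differences follows; the study-level numbers are placeholders, not the eight included trials.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate and 95% CI using the DerSimonian-Laird tau^2."""
    y = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1.0 / v                                   # fixed-effect (inverse-variance) weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)            # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)       # between-study variance
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), tau2

# Placeholder risk differences (post- minus pre-medication positive-culture rates)
rd = [-0.45, -0.30, -0.05, 0.10, -0.35, -0.20, -0.15, -0.25]
var = [0.012, 0.020, 0.015, 0.018, 0.010, 0.022, 0.016, 0.014]
print(dersimonian_laird(rd, var))
```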


    Calculation of sample size for stroke trials assessing functional outcome: comparison of binary and ordinal approaches

    INTERNATIONAL JOURNAL OF STROKE, Issue 2 2008
    The Optimising Analysis of Stroke Trials (OAST) collaboration
    Background Many acute stroke trials have given neutral results. Sub-optimal statistical analyses may be failing to detect efficacy. Methods which take account of the ordinal nature of functional outcome data are more efficient. We compare sample size calculations for dichotomous and ordinal outcomes for use in stroke trials. Methods Data from stroke trials studying the effects of interventions known to positively or negatively alter functional outcome (Rankin Scale and Barthel Index) were assessed. Sample size was calculated using comparisons of proportions, means, medians (according to Payne), and ordinal data (according to Whitehead). The sample sizes gained from each method were compared using Friedman two-way ANOVA. Results Fifty-five comparisons (54 173 patients) of active vs. control treatment were assessed. Estimated sample sizes differed significantly depending on the method of calculation (P < 0.0001). The ordering of the methods showed that the ordinal method of Whitehead and the comparison of means produced significantly lower sample sizes than the other methods. The ordinal data method on average reduced sample size by 28% (interquartile range 14–53%) compared with the comparison of proportions; however, a 22% increase in sample size was seen with the ordinal method for trials assessing thrombolysis. The comparison of medians method of Payne gave the largest sample sizes. Conclusions Choosing an ordinal rather than binary method of analysis allows most trials to be, on average, smaller by approximately 28% for a given statistical power. Smaller trial sample sizes may help by reducing time to completion, complexity, and financial expense. However, ordinal methods may not be optimal for interventions which both improve functional outcome and cause hazard in a subset of patients, e.g. thrombolysis. [source]
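
    The ordinal calculation "according to Whitehead" is based on the proportional-odds model; for 1:1 allocation the total sample size is N = 6(z_{1-a/2} + z_{1-b})^2 / [(log OR)^2 (1 - sum p_bar_i^3)], where p_bar_i are the mean category proportions across arms. A minimal sketch follows; the control modified Rankin Scale distribution is a placeholder, not trial data.

```python
import numpy as np
from scipy.stats import norm

def whitehead_ordinal_n(p_control, odds_ratio, alpha=0.05, power=0.90):
    """Total sample size (1:1 allocation) for an ordinal outcome analysed under a
    proportional-odds model, using Whitehead's (1993) formula."""
    p_c = np.asarray(p_control, float)
    # shift the control distribution by the common odds ratio to get the treated arm
    cum_c = np.cumsum(p_c)[:-1]
    cum_t = odds_ratio * cum_c / (1 + (odds_ratio - 1) * cum_c)
    p_t = np.diff(np.concatenate(([0.0], cum_t, [1.0])))
    p_bar = (p_c + p_t) / 2.0
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(np.ceil(6 * z**2 / (np.log(odds_ratio) ** 2 * (1 - np.sum(p_bar**3)))))

# Placeholder control distribution over the 7 modified Rankin Scale categories
mrs_control = [0.10, 0.15, 0.15, 0.20, 0.15, 0.10, 0.15]
print(whitehead_ordinal_n(mrs_control, odds_ratio=1.5))
```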


    Muscle Weakness and Falls in Older Adults: A Systematic Review and Meta-Analysis

    JOURNAL OF AMERICAN GERIATRICS SOCIETY, Issue 7 2004
    Julie D. Moreland MSc
    Objectives: To evaluate and summarize the evidence of muscle weakness as a risk factor for falls in older adults. Design: Random-effects meta-analysis. Setting: English-language studies indexed in MEDLINE and CINAHL (1985–2002) under the key words aged and accidental falls and risk factors; bibliographies of retrieved papers. Participants: Fifty percent or more subjects in a study were aged 65 and older. Studies of institutionalized and community-dwelling subjects were included. Measurements: Prospective cohort studies that included measurement of muscle strength at inception (in isolation or with other factors) with follow-up for occurrence of falls. Methods: Sample size, population, setting, measure of muscle strength, and length of follow-up, raw data if no risk estimate, odds ratios (ORs), rate ratios, or incidence density ratios. Each study was assessed using the validity criteria: adjustment for confounders, objective definition of fall outcome, reliable method of measuring muscle strength, and blinded outcome measurement. Results: Thirty studies met the selection criteria; data were available from 13. For lower extremity weakness, the combined OR was 1.76 (95% confidence interval (CI) = 1.31–2.37) for any fall and 3.06 (95% CI = 1.86–5.04) for recurrent falls. For upper extremity weakness, the combined OR was 1.53 (95% CI = 1.01–2.32) for any fall and 1.41 (95% CI = 1.25–1.59) for recurrent falls. Conclusion: Muscle strength (especially lower extremity) should be one of the factors that is assessed and treated in older adults at risk for falls. More clinical trials are needed to isolate whether muscle-strengthening exercises are effective in preventing falls. [source]


    Meta-analysis of the effect of scaling and root planing, surgical treatment and antibiotic therapies on periodontal probing depth and attachment loss

    JOURNAL OF CLINICAL PERIODONTOLOGY, Issue 11 2002
    Hsin-Chia Hung
    Abstract Objective: This paper reports a meta-analysis of studies that have investigated the effect of scaling and root planing on periodontal probing depth and attachment loss. Material and methods: The criteria used for inclusion of studies were as follows: root planing and scaling alone was one of the primary treatment arms; patients or quadrants of each patient were randomly assigned to study groups; 80% of patients enrolled were included in first year follow-up examinations; periodontal probing depth and attachment loss were reported in mm; the sample size of each study and substudy was reported. Sample size was used to weight the relative contribution of each study since standard errors were not reported by many studies and sample size is highly correlated with standard error and therefore statistically able to explain a substantial portion of the standard error in studies that use similar measures. Results: The meta-analysis results show that periodontal probing depth and gain of attachment level do not improve significantly following root planing and scaling for patients with shallow initial periodontal probing depths. However, there was about a 1-mm reduction for medium initial periodontal probing depths and a 2-mm reduction for deep initial periodontal probing depths. Similarly, there was about a 0.50-mm gain in attachment for medium initial periodontal probing depth measurements and slightly more than a 1-mm gain in attachment for deep initial periodontal probing depth measurements. Surgical therapy for patients with deep initial probing depths showed better results than scaling and root planing in reducing probing depths. When patients were followed up over 3 years or more, these differences were reduced to less than 0.4 mm. Antibiotic therapy showed similar results to scaling and root planing. However, a consistent improvement in periodontal probing depth and gain of attachment is demonstrated when local antibiotic therapy is combined with root planing and scaling. [source]


    Mixing calcium chloride with commercial fungicide formulations results in very slow penetration of calcium into apple fruits

    JOURNAL OF PLANT NUTRITION AND SOIL SCIENCE, Issue 3 2004
    Thomas K. Schlegel
    Abstract Foliar applications of calcium salts are usually combined with fungicides. In 2002 and 2003, it was tested whether this practice assures high rates of penetration of calcium. Amounts that penetrated in 24 h were measured at 20 °C using 45CaCl2. To ensure maximum penetration rates, humidity was maintained at 100%. Sample size was 40 to 50 fruits, and data are presented as box plots because the distribution of the data was not normal. Median rates of penetration of CaCl2, measured with mixtures of fungicides and CaCl2 at 5 or 10 g l−1, respectively, were very slow and amounted to only a few percent of the dose applied. Rates were a little higher with very young fruits (55 days after full bloom, DAFB). Adding alkyl polyglycoside surfactants at 0.2 g l−1 significantly decreased surface tensions and increased rates of penetration by up to 15-fold. Still, total penetration of CaCl2 rarely exceeded 20% of the dose applied (median penetration), even in the presence of an additional surfactant. In all treatments, outliers with 60 to 100% penetration in 24 h occurred, and this was attributed to penetration into lenticels. This is expected to result in unequal concentrations of calcium in fruits, especially in the sub-epidermal layers. Addition of a suitable surfactant to mixtures of fungicides with CaCl2 is strongly recommended as it enhances wetting and greatly increases penetration rates of CaCl2. [source]


    Sample size and the detection of a hump-shaped relationship between biomass and species richness in Mediterranean wetlands

    JOURNAL OF VEGETATION SCIENCE, Issue 2 2006
    J.L. Espinar
    Abstract Questions: What is the observed relationship between biomass and species richness across both spatial and temporal scales in communities of submerged annual macrophytes? Does the number of plots sampled affect detection of a hump-shaped pattern? Location: Doñana National Park, southwestern Spain. Methods: A total of 102 plots were sampled during four hydrological cycles. In each hydrological cycle, the plots were distributed randomly along an environmental flooding gradient in three contrasting microhabitats located in the transition zone just below the upper marsh. In each plot (0.5 m × 0.5 m), plant density and above- and below-ground biomass of submerged vegetation were measured. The hump-shaped model was tested by using a generalized linear model (GLM). A bootstrap procedure was used to test the effect of the number of plots on the ability to detect hump-shaped patterns. Results: The area exhibited low species density, with a range of 1–9 species, and low values of biomass, with a range of 0.2–87.6 g DW/0.25 m2. When data from all years and all microhabitats were combined, the relationship between biomass and species richness showed a hump-shaped pattern. The number of plots was large enough to allow detection of the hump-shaped pattern across microhabitats but it was too small to confirm the hump-shaped pattern within each individual microhabitat. Conclusion: This study provides evidence of hump-shaped patterns across microhabitats when GLM analysis is used. In communities of submerged annual macrophytes in Mediterranean wetlands, the highest species density occurs at intermediate values of biomass. The bootstrap procedure indicates that the number of plots affects the detection of hump-shaped patterns. [source]
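
    The GLM test for a hump can be sketched as a Poisson regression of richness on biomass with a quadratic term, and the bootstrap as resampling plots to see how often the hump is detected at smaller sample sizes. The data below are simulated stand-ins for the wetland plots, not the study data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)

# Simulated plots: richness peaks at intermediate (log) biomass
n_plots = 102
log_biomass = rng.uniform(-1.5, 4.5, n_plots)
mu = np.exp(1.0 + 0.8 * log_biomass - 0.2 * log_biomass**2)
richness = rng.poisson(mu)

def hump_detected(x, y, alpha=0.05):
    """True if the quadratic biomass term in a Poisson GLM is negative and significant."""
    X = sm.add_constant(np.column_stack([x, x**2]))
    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    return (fit.params[2] < 0) and (fit.pvalues[2] < alpha)

print("all 102 plots, hump detected:", hump_detected(log_biomass, richness))

# Bootstrap: how often is the hump detected with fewer plots?
for m in (20, 40, 80):
    hits = sum(hump_detected(*(lambda i: (log_biomass[i], richness[i]))(
        rng.integers(0, n_plots, m))) for _ in range(200))
    print(m, "plots: hump detected in", hits / 200, "of bootstrap samples")
```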


    Sample size for post-marketing safety studies based on historical controls

    PHARMACOEPIDEMIOLOGY AND DRUG SAFETY, Issue 8 2010
    Yu-te Wu
    Abstract Purpose As part of a drug's entire life cycle, post-marketing studies are an important part of the identification of rare, serious adverse events. Recently, the US Food and Drug Administration (FDA) has begun to implement new post-marketing safety mandates as a consequence of increased emphasis on safety. The purpose of this research is to provide an exact sample size formula for the proposed hybrid design, based on a two-group cohort study with incorporation of historical external data. Methods An exact sample size formula based on the Poisson distribution is developed, because the detection of rare events is our outcome of interest. Performance of the exact method is compared to that of its approximate large-sample counterpart. Results The proposed hybrid design requires a smaller sample size compared to the standard, two-group prospective study design. In addition, the exact method reduces the number of subjects required in the treatment group by up to 30% compared to the approximate method for the study scenarios examined. Conclusions The proposed hybrid design satisfies the advantages and rationale of the two-group design with smaller sample sizes generally required. Copyright © 2010 John Wiley & Sons, Ltd. [source]
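
    For orientation, an exact Poisson power calculation searches for the smallest cohort whose exact test against a fixed historical event rate reaches the target power. The sketch below is that generic single-group version; the paper's hybrid two-group formula incorporating historical data is more involved.

```python
from scipy.stats import poisson

def min_n_exact_poisson(rate0, rate1, follow_up, alpha=0.05, power=0.80, n_max=100_000):
    """Smallest cohort size whose exact one-sided Poisson test of H0: rate = rate0
    (events per person-year, historical control value) reaches the target power
    when the true rate is rate1 > rate0."""
    for n in range(10, n_max):
        mu0 = n * follow_up * rate0
        mu1 = n * follow_up * rate1
        crit = poisson.ppf(1 - alpha, mu0) + 1        # reject if observed events >= crit
        if poisson.sf(crit - 1, mu1) >= power:
            return int(n)
    return None

# Example: historical rate 1 per 1000 person-years, detect a 3-fold increase, 1-year follow-up
print(min_n_exact_poisson(rate0=0.001, rate1=0.003, follow_up=1.0))
```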


    Placebo-controlled trial of rituximab in IgM anti-myelin-associated glycoprotein antibody demyelinating neuropathy

    ANNALS OF NEUROLOGY, Issue 3 2009
    Marinos C. Dalakas MD
    Objective Report a double-blind, placebo-controlled study of rituximab in patients with anti-MAG demyelinating polyneuropathy (A-MAG-DP). Methods Twenty-six patients were randomized to four weekly infusions of 375 mg/m2 rituximab or placebo. Sample size was calculated to detect changes of ≥1 Inflammatory Neuropathy Course and Treatment (INCAT) leg disability score at month 8. IgM levels, anti-MAG titers, B cells, antigen-presenting cells, and immunoregulatory T cells were monitored every 2 months. Results Thirteen A-MAG-DP patients were randomized to rituximab and 13 to placebo. Randomization was balanced for age, electrophysiology, disease duration, disability scores, and baseline B cells. After 8 months, by intention to treat, 4 of 13 rituximab-treated patients improved by ≥1 INCAT score compared with 0 of 13 patients taking placebo (p = 0.096). Excluding one rituximab-randomized patient who had a normal INCAT score at entry, and thus could not improve, the results were significant (p = 0.036). The time to walk 10 m was significantly reduced in the rituximab group (p = 0.042) (intention to treat). Clinically, walking improved in 7 of 13 rituximab-treated patients. At month 8, IgM was reduced by 34% and anti-MAG titers by 50%. CD25+CD4+Foxp3+ regulatory cells significantly increased by month 8. The most improved patients were those with high anti-MAG titers and the most severe sensory deficits at baseline. Interpretation Rituximab is the first drug that improves some patients with A-MAG-DP in a controlled study. The benefit may be exerted by reducing the putative pathogenic antibodies or by inducing immunoregulatory T cells. The results warrant confirmation with a larger trial. Ann Neurol 2009;65:286–293 [source]


    A Statistical Framework for Quantile Equivalence Clinical Trials with Application to Pharmacokinetic Studies that Bridge from HIV-Infected Adults to Children

    BIOMETRICS, Issue 4 2008
    Lixia Pei
    Summary Bridging clinical trials are sometimes designed to evaluate whether a proposed dose for use in one population, for example, children, gives similar pharmacokinetic (PK) levels, or has similar effects on a surrogate marker as an established effective dose used in another population, for example, adults. For HIV bridging trials, because of the increased risk of viral resistance to drugs at low PK levels, the goal is often to determine whether the doses used in different populations result in similar percentages of patients with low PK levels. For example, it may be desired to evaluate that a proposed pediatric dose gives approximately 10% of children with PK levels below the 10th percentile of PK levels for the established adult dose. However, the 10th percentile for the adult dose is often imprecisely estimated in studies of relatively small size. Little attention has been given to the statistical framework for such bridging studies. In this article, a formal framework for the design and analysis of quantile-based bridging studies is proposed. The methodology is then developed for normally distributed outcome measures from both frequentist and Bayesian directions. Sample size and other design considerations are discussed. [source]


    Pelvic floor disorders 4 years after first delivery: a comparative study of restrictive versus systematic episiotomy

    BJOG : AN INTERNATIONAL JOURNAL OF OBSTETRICS & GYNAECOLOGY, Issue 2 2008
    X Fritel
    Objective: To compare two policies for episiotomy: restrictive and systematic. Design: Quasi-randomised comparative study. Setting: Two French university hospitals with contrasting policies for episiotomy: one using episiotomy restrictively and the second routinely. Population: Seven hundred and seventy-four nulliparous women delivered during 1996 of a singleton in cephalic presentation at a term of 37–41 weeks. Methods: A questionnaire was mailed 4 years after delivery. Sample size was calculated to allow us to show a 10% difference in the prevalence of urinary incontinence with 80% power. Main outcome measures: Urinary incontinence, anal incontinence, perineal pain, and pain during intercourse. Results: We received 627 responses (81%), 320 from women delivered under the restrictive policy, 307 from women delivered under the routine policy. In the restrictive group, 186 (49%) deliveries included mediolateral episiotomies and in the routine group, 348 (88%). Four years after the first delivery, there was no difference in the prevalence of urinary incontinence (26 versus 32%), perineal pain (6 versus 8%), or pain during intercourse (18 versus 21%) between the two groups. Anal incontinence was less prevalent in the restrictive group (11 versus 16%). The difference was significant for flatus (8 versus 13%) but not for faecal incontinence (3% for both groups). Logistic regression confirmed that a policy of routine episiotomy was associated with a risk of anal incontinence nearly twice as high as the risk associated with a restrictive policy (OR = 1.84, 95% CI: 1.05–3.22). Conclusions: A policy of routine episiotomy does not protect against urinary or anal incontinence 4 years after first delivery. [source]
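
    The stated calculation (80% power to show a 10% difference in urinary incontinence prevalence) corresponds to the standard two-proportion sample size formula. A minimal sketch follows; the 30% baseline prevalence assumed here is taken from the reported results and may differ from the authors' planning value.

```python
from math import ceil
from scipy.stats import norm

def n_per_group_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size for comparing two independent proportions (normal approximation)."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return ceil((z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2)

# Detect a 10-percentage-point difference in urinary incontinence prevalence
print(n_per_group_two_proportions(0.30, 0.20))
```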


    Sample size for kappa

    ACTA NEUROPSYCHIATRICA, Issue 4 2010
    Dusan Hadzi-Pavlovic
    No abstract is available for this article. [source]


    Ocular findings among young men: a 12-year prevalence study of military service in Poland

    ACTA OPHTHALMOLOGICA, Issue 5 2010
    Michal S. Nowak
    Abstract. Purpose: To determine the prevalence of ocular diseases among young men and to assess the main ocular causes of discharge from military service in Poland. Methods: A retrospective review of the medical records of 105 017 men undergoing a preliminary examination for military service during the period 1993–2004. Sample size for the study was calculated with 99% confidence within an error margin of 5%. All of the study participants were White men of European origin, most of whom live or lived in Poland. Data regarding vision status were assessed in 1938 eyes of 969 participants. Two groups were distinguished based on the age of the participants: group I aged 18–24 years, and group II aged 25–34 years. Results: Presenting visual impairment [visual acuity (VA) < 20/40], followed by colour vision defects, was the most common ocular disorder, accounting for 13.2%. There were statistically significant differences in uncorrected VA as well as in the rates of particular refractive errors between the age groups (p < 0.05). The prevalence of glaucoma and ocular hypertension was significantly higher in older participants. Six hundred and sixty-seven (68.8%) participants examined medically in the study period were accepted for military service. However, 302 (31.2%) failed their examination and were temporarily or permanently discharged from duty. Fifty-two of them (17.2%) were discharged because of various ocular disorders. The most common causes were high refractive errors, which accounted for 38.5% of all the ocular discharges, followed by chronic and recurrent diseases of the posterior segment of the eye, which accounted for 19.2%. Conclusion: The prevalence of ocular disorders among young men in an unselected military population was close to the results obtained in other population-based studies comprising both men and women in the same age group. High refractive errors followed by chronic and recurrent diseases of the posterior segment of the eye are important causes of medical discharges from military service in Poland. [source]


    A randomised controlled trial of clinics in secondary schools for adolescents with asthma

    CHILD: CARE, HEALTH AND DEVELOPMENT, Issue 1 2004
    Cliona Ni Bhrolchain
    Aim To test the hypothesis that delivery of a programme of asthma care in nurse-led clinics in school would improve access to care and health outcomes compared with care in general practice. Methods Pupils at four secondary schools in Bristol, North Somerset and South Gloucestershire, UK, were included in the randomized controlled trial. Another two schools were included to control for any cross-contamination between school clinic attenders and general practice attenders in the trial schools. Pupils in trial schools were randomly assigned to receive an invitation for an asthma review at school or in general practice. Schools were stratified for deprivation and covered rural, urban and suburban populations. Pupils with asthma were identified using a screening questionnaire and then cross-referenced with practice prescribing records. Four school nurses with additional specialist asthma training ran the school clinics weekly. Consultations concentrated on the needs and interests of adolescents and followed national guidelines for treatment changes. Reviews were arranged at 1 and 6 months, with an additional 3-month review if needed. The pupil's GP was kept informed. General practice care was according to the practice's usual treatment protocols. Primary outcomes were the proportion of pupils who had had an asthma review in 6 months, health-related quality of life and level of symptoms. Secondary outcomes were pupil knowledge and attitude to asthma, inhaler technique, the proportion taking inhaled steroids daily, school absence due to asthma, PEFR and pupil preference for the setting of care. Sample size was calculated to have an 80% chance of detecting an increase from 40% to 60% in the proportion having a review in 6 months and an improvement of half a standard deviation on the quality-of-life measure. Analysis was on an intention-to-treat basis. Results School clinic pupils were more likely to attend (91% vs. 51%). However, symptom control and quality of life were no better. School clinic pupils knew more about asthma, had a more positive attitude and better inhaler technique. Absence and PEFR showed no difference. Sixty-three per cent of those who attended school clinics preferred this model but, taking both groups together, just over half would prefer to attend their GP for follow-up. Cost of care (including practice, school clinic, hospital and medication) was £32.10 at school, £19.80 at the trial practices and £18.00 at control practices. Conclusions Previous evaluations of nurse-led asthma clinics in practice have also failed to show improvements in outcomes, though process measures do improve. This may be due to the need for nurses to refer patients to doctors for changes in medication, rather than making those changes themselves. There were some weaknesses in study design that may have affected the outcome, but the essential conclusion is that nurse-led asthma clinics in school are not cost-effective. The study does suggest that such clinics can reach a high proportion of adolescents, but for asthma at least this does not result in any measurable improvement in outcome. [source]


    Nested distributions of bat flies (Diptera: Streblidae) on Neotropical bats: artifact and specificity in host-parasite studies

    ECOGRAPHY, Issue 3 2009
    Bruce D. Patterson
    We examined the structure of ectoparasitic bat fly infestations on 31 well-sampled bat species, representing 4 Neotropical families. Sample sizes varied from 22 to 1057 bats per species, and bat species were infested by 4 to 27 bat fly species. Individual bats supported smaller infracommunities (the set of parasites co-occurring on an individual host), ranging from 1 to 5 fly species in size, and no bat species had more than 6 bat fly species characteristically associated with it (its primary fly species). Nestedness analyses used system temperature (BINMATNEST algorithm) because it is particularly well-suited for analysis of interaction networks, where parasite records may be nested among hosts and host individuals simultaneously nested among parasites. Most species exhibited very low system temperatures (mean 3.14°; range 0.14–12.28°). Simulations showed that nested structure for all 31 species was significantly stronger than simulated values under 2 of the 3 null hypotheses, and about half the species were also nested under the more stringent conditions of the third null hypothesis. Yet this structure disappears when analyses are restricted to "primary" associations of fly species (flies on their customary host species), which exclude records thought to be atypical, transient, or potential contaminants. Despite comprising a small fraction of total parasite records, such anomalies represent a considerable part of the statistical state-space, offering the illusion of significant ecological structure. Only well understood and well documented systems can make distinctions between primary and other occurrence records. Generally, nestedness appears best developed in host-parasite systems where infestations are long-term and accumulate over time. Dynamic, short-term infestations by highly mobile parasites like bat flies may appear to be nested, but such structure is better understood in terms of host specificity and accidental occurrences than in terms of prevalence, persistence, or hierarchical niche relations of the flies. [source]


    Semiparametric M-quantile regression for estimating the proportion of acidic lakes in 8-digit HUCs of the Northeastern US

    ENVIRONMETRICS, Issue 7 2008
    Monica Pratesi
    Abstract Between 1991 and 1995, the Environmental Monitoring and Assessment Program of the US Environmental Protection Agency conducted a survey of lakes in the Northeastern states of the US to determine the ecological condition of these waters. Here, to this end, we want to obtain estimates of the proportion of lakes at (high) risk of acidification or acidified already for each 8-digit hydrologic unit code (HUC) within the region of interest. Sample sizes for the 113 HUCs are very small and 27 HUCs are not even observed. Therefore, small area estimation techniques should be invoked for the estimation of the distribution function of acid neutralizing capacity (ANC) for each HUC. The procedure is based on a semiparametric M-quantile regression model in which ANC depends on elevation and the year of the survey linearly, and on the geographical position of the lake through an unknown smooth bivariate function estimated by low-rank thin plate splines. Copyright © 2008 John Wiley & Sons, Ltd. [source]