Simple Methods



Selected Abstracts


Low-Tech, High-Touch: Pain Management with Simple Methods

PAIN PRACTICE, Issue 1 2003
Ramani Vijayan
First page of article [source]


Simple methods for convection in porous media: scale analysis and the intersection of asymptotes

INTERNATIONAL JOURNAL OF ENERGY RESEARCH, Issue 10 2003
Adrian Bejan
Abstract This article outlines the basic rules and promise of two of the simplest methods for solving problems of convection in porous media. First, scale analysis is the method that produces order-of-magnitude estimates and trends (scaling laws) for concrete, applicable quantities such as heat transfer rates, flow rates, and time intervals. Scale analysis also reveals the correct dimensionless form in which to present more exact results produced by more complicated methods. Second, the intersection of asymptotes method identifies the correct flow configuration (e.g. Bénard convection in a porous medium) by intersecting the two extremes in which the flow may exist: the many-cells limit and the few-plumes limit. Every important feature of the flow and its transport characteristics is found at the intersection, i.e. at the point where the two extremes compete and find themselves in balance. The intersection is also the flow configuration that minimizes the global resistance to heat transfer through the system. This is an example of the constructal principle of deducing flow patterns by optimizing the flow geometry for minimal global resistance. The article stresses the importance of trying the simplest method first, and the researcher's freedom to choose the appropriate problem-solving method. Copyright © 2003 John Wiley & Sons, Ltd. [source]
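The intersection-of-asymptotes step can be sketched numerically: write the two extreme estimates of the quantity that governs the flow, and locate their crossing. The asymptote forms and constants below are hypothetical placeholders chosen for illustration, not Bejan's actual porous-medium expressions.

```python
def intersect_asymptotes(f_a, f_b, lo, hi, tol=1e-12):
    """Locate the crossing of two asymptotic estimates by bisection on their difference."""
    g = lambda x: f_a(x) - f_b(x)
    assert g(lo) * g(hi) < 0, "asymptotes must bracket a crossing"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Hypothetical extremes for a geometric parameter x of the flow:
# the "many cells" estimate grows like x, the "few plumes" estimate decays like 1/x.
many_cells = lambda x: 2.0 * x
few_plumes = lambda x: 8.0 / x
x_star = intersect_asymptotes(many_cells, few_plumes, 0.1, 10.0)
# analytically, 2 x* = 8 / x*  =>  x* = 2
```

The optimum falls where the two competing estimates balance, which is the essence of the method: no detailed solution of the intermediate regime is needed.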


Structure determination of diclofenac in a diclofenac-containing chitosan matrix using conventional X-ray powder diffraction data

JOURNAL OF APPLIED CRYSTALLOGRAPHY, Issue 2 2004
Nongnuj Muangsin
The structure determination of diclofenac embedded in a diclofenac-containing chitosan matrix using conventional X-ray powder diffraction data is demonstrated. It reveals that sodium diclofenac, the starting material in the preparation of a controlled-release diclofenac-containing chitosan matrix, changes to diclofenac acid in space group C2/c in the matrix. Simple methods were employed in handling the sample to obtain X-ray powder diffraction data of sufficiently high quality for the determination of the crystal structure of diclofenac embedded in chitosan. These involved grinding and sieving several times through a micro-mesh sieve to obtain a suitable particle size and a uniformly spherical particle shape. A traditional technique for structure solution from X-ray powder diffraction data was applied. The X-ray diffraction intensities were extracted using Le Bail's method. The structure was solved by direct methods from the extracted powder data and refined using the Rietveld method. For comparison, the single-crystal structure of the same drug was also determined. The result shows that the crystal structure solved from conventional X-ray powder diffraction data is in good agreement with that of the single crystal. The differences in bond lengths and angles are of the order of 0.030 Å and 0.639°, respectively. [source]


Using Population Count Data to Assess the Effects of Changing River Flow on an Endangered Riparian Plant

CONSERVATION BIOLOGY, Issue 4 2006
DIANE M. THOMSON
Keywords: population viability analysis; riparian management; diffusion approximation; dams; extinction risk. Abstract: Methods for using simple population count data to project extinction risk have been the focus of much recent theoretical work, but few researchers have used these approaches to address management questions. We analyzed 15 years of census data on the federally endangered endemic riparian plant Pityopsis ruthii (Small) with the diffusion approximation (DA). Our goals were to evaluate relative extinction risk among populations in two different watersheds (in Tennessee, U.S.A.) and potential effects of variation in managed river flow on population dynamics. Populations in both watersheds had high projected risks of extinction within 50 years, but the causes of this risk differed. Populations of P. ruthii on the Hiwassee River had higher initial population sizes but significantly lower average growth rates than those on the Ocoee River. The only populations with low predicted short-term extinction risk were on the Ocoee. Growth rates for populations on both rivers were significantly reduced during periods of lower river flow. We found only marginal evidence of a quadratic relationship between population performance and flow. These patterns are consistent with the idea that low flows affect P. ruthii due to growth of competing vegetation, but the degree to which very high flows may reduce population growth is still unclear. Simulations indicated that populations were most sensitive to growth rates in low-flow years, but small changes in the frequency of these periods did not strongly increase risk for most populations. Consistent with results of other studies, DA estimates of extinction risk had wide confidence limits. Still, our results yielded several valuable insights, including the need for greater monitoring of populations on the Hiwassee and the importance of low-flow years to population growth. Our work illustrates the potential value of simple methods for analyzing count data despite the challenges posed by uncertainty in estimates of extinction risk. [source]
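A minimal sketch of the diffusion-approximation (DA) calculation this kind of study relies on: estimate the mean and variance of log annual growth from census counts, then compute a quasi-extinction probability. The census counts and threshold below are invented for illustration; this is the textbook Dennis-style estimator, not the authors' full analysis.

```python
import math
import statistics

def da_quasi_extinction(counts, threshold):
    """Diffusion-approximation quasi-extinction probability from annual census counts:
    mu and sigma^2 are the mean and variance of the log annual growth rate."""
    logs = [math.log(b / a) for a, b in zip(counts, counts[1:])]
    mu = statistics.mean(logs)
    sigma2 = statistics.variance(logs)
    d = math.log(counts[-1] / threshold)  # log distance from current size to the threshold
    # with negative drift, the threshold is eventually hit with probability 1
    p = 1.0 if mu <= 0 else math.exp(-2.0 * mu * d / sigma2)
    return mu, sigma2, p

counts = [120, 100, 95, 80, 78, 70, 66, 60]  # hypothetical declining census series
mu, sigma2, p_ext = da_quasi_extinction(counts, threshold=10)
# a negative mean log growth rate implies certain eventual quasi-extinction
```

The wide confidence limits the abstract mentions arise because mu and sigma² are estimated from short series; the point estimate of p alone understates that uncertainty.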


Road pricing: lessons from London

ECONOMIC POLICY, Issue 46 2006
Georgina Santos
SUMMARY This paper assesses the original London Congestion Charging Scheme (LCCS) and its impacts, and simulates the proposed extension, which will include most of Kensington and Chelsea. It also touches upon the political economy of the congestion charge and the increase of the charge from £5 to £8 per day. The possibility of transferring the experience to Paris, Rome and New York is also discussed. The LCCS has had positive impacts, despite the considerable political influences on the charge level and location. It is difficult to assess the impacts of the increase from £5 to £8, which took place in July 2005, because no data have yet been released by Transport for London. The proposed extension of the charging zone does not seem to be an efficient change on economic grounds, at least for the specific boundaries, method of charging and level of charging that are currently planned. Our benefit-cost ratios computed under different assumptions of costs and benefits are all below unity. Overall, the experience shows that simple methods of congestion charging, though in no way resembling first-best Pigouvian taxes, can do a remarkably good job of creating benefits from the reduction of congestion. Nevertheless, the magnitude of these benefits can be highly sensitive to the details of the scheme, which therefore need to be developed with great care. – Georgina Santos and Gordon Fraser [source]


On the measurement of growth with applications to the modelling and analysis of plant growth

FUNCTIONAL ECOLOGY, Issue 2 2000
Roderick M. L.
Abstract 1. In this paper, a theoretical framework for the analysis of growth is described. Growth is equated with change in volume (V) and the growth rate is given by the equation dV/dt = (dm/dt)(1/ρ) − (dρ/dt)(m/ρ²), where m is the mass and ρ the density. The volume is inclusive of internal air spaces. 2. The second term of the growth equation (see above) can be ignored if density is constant over time. Data for humans (and presumably other large animals) show that while composition changes over time, the density is approximately constant at about that of water. In that case, the growth rate can be estimated from measures of the rate of change of mass. However, the density of plants is variable (c. 0.4–1.2 g cm⁻³) and measures of mass and density are necessary to analyse plant growth. 3. To use the theory as the basis of plant growth models, it is necessary to develop simple methods for estimating the surface area of roots, stems and leaves assuming that the mass and volume are known. A literature review found that the surface area to volume ratios of leaves and roots generally increase with the mass concentration of water. Theoretical arguments are used to predict that in woody stems the situation should be reversed, such that the surface area to volume ratio increases with the mass concentration of dry matter. Those relationships should be very useful in the development of plant growth models. 4. Measures of plant dry mass and estimates of the rate of change in dry mass are shown to be very difficult to interpret because of differences in the mass concentration of dry matter between individuals and over time. 5. It is concluded that measures of mass and density will be necessary before plant growth analysis can achieve its full potential. A framework for extending the theory to include the forces necessary for growth to occur is described. [source]
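The growth equation in point 1 is the product rule applied to V = m/ρ; a quick numerical check with hypothetical mass and density trajectories confirms that the formula matches direct differentiation of V.

```python
def growth_rate(m, dm_dt, rho, drho_dt):
    """dV/dt = (dm/dt)(1/rho) - (drho/dt)(m/rho^2), with V = m/rho."""
    return dm_dt / rho - drho_dt * m / rho**2

# hypothetical smooth trajectories for mass and density
m   = lambda t: 2.0 + 0.5 * t
rho = lambda t: 1.0 + 0.1 * t
V   = lambda t: m(t) / rho(t)

t, h = 1.0, 1e-6
numeric  = (V(t + h) - V(t - h)) / (2 * h)       # central difference of V
analytic = growth_rate(m(t), 0.5, rho(t), 0.1)   # the paper's formula
```

With constant density the second term vanishes and dV/dt reduces to (dm/dt)/ρ, which is the human/large-animal special case noted in point 2.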


Genome-wide association studies for discrete traits

GENETIC EPIDEMIOLOGY, Issue S1 2009
Duncan C. Thomas
Abstract Genome-wide association studies of discrete traits generally use simple methods of analysis based on χ² tests for contingency tables or logistic regression, at least for an initial scan of the entire genome. Nevertheless, more power might be obtained by using various methods that analyze multiple markers in combination. Methods based on sliding windows, wavelets, Bayesian shrinkage, or penalized likelihood methods, among others, were explored by various participants of Genetic Analysis Workshop 16 Group 1 to combine information across multiple markers within a region, while others used Bayesian variable selection methods for genome-wide multivariate analyses of all markers simultaneously. Imputation can be used to fill in missing markers on individual subjects within a study or in a meta-analysis of studies using different panels. Although multiple imputation theoretically should give more robust tests of association, one participant contribution found little difference between results of single and multiple imputation. Careful control of population stratification is essential, and two contributions found that previously reported associations with two genes disappeared after more precise control. Other issues considered by this group included subgroup analysis, gene-gene interactions, and the use of biomarkers. Genet. Epidemiol. 33 (Suppl. 1):S8–S12, 2009. © 2009 Wiley-Liss, Inc. [source]
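The initial single-marker scan described above reduces, per SNP, to a Pearson χ² statistic on a genotype-by-status contingency table. A minimal sketch with hypothetical counts (the 5% critical value for 2 degrees of freedom, 5.991, is hard-coded rather than computed):

```python
def chi2_stat(table):
    """Pearson chi-squared statistic for an r x c contingency table."""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    n = sum(row)
    stat = 0.0
    for i, r in enumerate(table):
        for j, o in enumerate(r):
            e = row[i] * col[j] / n      # expected count under independence
            stat += (o - e) ** 2 / e
    return stat

# hypothetical genotype counts (AA, Aa, aa) at one SNP, cases vs controls
cases    = [60, 90, 50]
controls = [90, 85, 25]
stat = chi2_stat([cases, controls])
# df = (2 - 1) * (3 - 1) = 2; compare to the 5% critical value 5.991
significant = stat > 5.991
```

In a genome-wide scan this comparison would of course use a far more stringent threshold to account for the number of markers tested.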


Improved In vitro Model Systems for Gastrointestinal Infection by Choice of Cell Line, pH, Microaerobic Conditions, and Optimization of Culture Conditions

HELICOBACTER, Issue 4 2007
Sara K. Lindén
Abstract Background: Commonly used in vitro infection cultures do not mimic the human gastrointestinal tract with regard to pH and microaerobic conditions. Furthermore, despite the importance of mucin–Helicobacter interactions, the cell lines used have not been selected for appropriate mucin expression. To make in vitro studies more applicable to human disease, we have developed coculture methods taking these factors into account. Materials and methods: Nine human gastrointestinal epithelial cell lines (MKN1, MKN7, MKN28, MKN45, KATO3, HFE145, PCAA/C11, Caco-2, and LS513) were investigated. Expression and glycosylation of mucins (MUC1, 2, 3, 4, 5AC, 5B, 6, 12, 13, and 16) were determined by immunohistochemistry. We analyzed the effect of microaerobic conditions and acidic pH on cell proliferation, viability, and apoptosis. Results: Microaerobic culture, which is more physiological for the bacteria, did not adversely affect mammalian cell viability or proliferation, nor did it induce apoptosis. The cell lines varied in mucin expression, with MKN7 and MKN45 being most similar to gastric mucosa and Caco-2 and LS513 to intestinal mucosa, although none exactly matched normal mucosa. However, changes in culture conditions did not cause major changes in mucin expression within cell lines. Conclusions: Culture conditions mimicking the natural environment and allowing the bacterial cells to thrive had no effect on cell viability or apoptosis, and very little influence on mucin expression of human gastrointestinal cells. Thus, it is feasible, using the simple methods we present here, to substantially improve bacterial–mammalian cell in vitro coculture studies and make them more reflective of human infection. [source]


Realization of contact resolving approximate Riemann solvers for strong shock and expansion flows

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 10 2010
Sung Don Kim
Abstract The Harten–Lax–van Leer contact (HLLC) and Roe schemes are good approximate Riemann solvers that have the ability to resolve shock, contact, and rarefaction waves. However, they can produce spurious solutions, called shock instabilities, in the vicinity of strong shocks. In strong expansion flows, the Roe scheme can admit nonphysical solutions such as expansion shocks, and it sometimes fails. We carefully examined both schemes and propose simple methods to prevent such problems. High-order accuracy is achieved using the weighted average flux (WAF) and MUSCL-Hancock schemes. Using the WAF scheme, the HLLC and Roe schemes can be expressed in similar form. The HLLC and Roe schemes are tested against Quirk's test problems, and shock instability appears in both schemes. To remedy shock instability, we propose a method of controlling the flux difference across the contact and shear waves. To catch shock waves, an appropriate pressure sensing function is defined. Using the proposed method, shock instabilities are successfully controlled. For the Roe scheme, a modified Harten–Hyman entropy fix method using Harten–Lax–van Leer-type switching is suggested. A suitable criterion for switching is established, and the modified Roe scheme works successfully with the suggested method. Copyright © 2009 John Wiley & Sons, Ltd. [source]


Feeding habits of the gilthead seabream (Sparus aurata) from the Ria Formosa (southern Portugal) as compared to the black seabream (Spondyliosoma cantharus) and the annular seabream (Diplodus annularis)

JOURNAL OF APPLIED ICHTHYOLOGY, Issue 2 2002
C. Pita
The feeding habits of Sparus aurata L., Diplodus annularis L. and Spondyliosoma cantharus L. in the Ria Formosa (southern Portugal) lagoon system were studied using three simple methods (frequency of occurrence, numeric percentage and percentage weight) and a composite index [the index of relative importance (IRI)]. The Ivlev index was used to evaluate diet selectivity, the Schoener overlap index was used to compare diets, and diet diversity was characterized by the Simpson index. The diets of the three species consist of a wide variety of food organisms; nevertheless, S. aurata seems to be the most specialized. No significant dietary overlap was found, with S. aurata preferentially selecting gastropods and bivalves, S. cantharus preferentially selecting a wide variety of crustaceans, and D. annularis a wider array, including crustaceans, gastropods and bivalves. [source]
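The composite index combines the three simple measures; in its standard form IRI = %FO × (%N + %W). A small sketch with hypothetical prey categories (the exact IRI variant used in the paper is not spelled out in the abstract):

```python
def iri(pct_fo, pct_n, pct_w):
    """Index of Relative Importance: %FO * (%N + %W)."""
    return pct_fo * (pct_n + pct_w)

# hypothetical stomach-content summaries for one seabream species
prey = {
    "bivalves":    dict(fo=80.0, n=45.0, w=50.0),
    "gastropods":  dict(fo=60.0, n=30.0, w=30.0),
    "crustaceans": dict(fo=20.0, n=25.0, w=20.0),
}
scores = {name: iri(v["fo"], v["n"], v["w"]) for name, v in prey.items()}
top = max(scores, key=scores.get)  # dominant prey category by IRI
```

The IRI deliberately down-weights prey that score high on only one measure (e.g. numerous but tiny items), which is why it is preferred over any single simple method alone.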


Analysis of latent structures in linear models

JOURNAL OF CHEMOMETRICS, Issue 12 2003
Agnar Höskuldsson
Abstract In chemometrics the emphasis is on latent structure models. The latent structure is the part of the data that the modeling task is based upon. This paper addresses some fundamental issues that arise when latent structures are used. The paper consists of three parts. The first part is concerned with defining the latent structure of a linear model. Here the 'atomic' parts of the algorithms that generate the latent structure for linear models are analyzed. It is shown how the PLS algorithm fits within this way of presenting the numerical procedures. The second part concerns graphical illustrations, which are useful when studying latent structures. It is shown how loading weight vectors are generated and how they can be interpreted in analyzing the latent structure. It is shown how the covariance can be used to get useful a priori information on the modeling task. Some simple methods are presented for deciding whether a single or multiple latent structures should be used. The last part is about choosing the variables that should be used in the analysis. The traditional procedures for selecting variables to include in the model are presented and the insufficiencies of such approaches are demonstrated. A case study to illustrate the use of CovProc methods is presented. The CovProc methods are discussed and some of their advantages are presented. Copyright © 2004 John Wiley & Sons, Ltd. [source]


Synthesis, characterization and crystal structures of two 2-naphthyl substituted pyrazoles

JOURNAL OF HETEROCYCLIC CHEMISTRY, Issue 4 2003
Guang Yang
The synthesis, characterization and X-ray crystal structures of two 2-naphthyl-substituted pyrazoles - 3-(2-naphthyl) pyrazole (1) and 5-(2-naphthyl)-3-trifluoromethyl-pyrazole (3) - are reported. In addition, the isolation and structural characterization of 5-hydroxy-3-(2-naphthyl)-5-trifluoromethyl-4,5-dihydropyrazole (2), an intermediate of the synthesis of 3, is included. Two simple methods of dehydration of 2 are also presented. [source]


Bias modelling in evidence synthesis

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 1 2009
Rebecca M. Turner
Summary. Policy decisions often require synthesis of evidence from multiple sources, and the source studies typically vary in rigour and in relevance to the target question. We present simple methods of allowing for differences in rigour (or lack of internal bias) and relevance (or lack of external bias) in evidence synthesis. The methods are developed in the context of reanalysing a UK National Institute for Clinical Excellence technology appraisal in antenatal care, which includes eight comparative studies. Many were historically controlled, only one was a randomized trial and doses, populations and outcomes varied between studies and differed from the target UK setting. Using elicited opinion, we construct prior distributions to represent the biases in each study and perform a bias-adjusted meta-analysis. Adjustment had the effect of shifting the combined estimate away from the null by approximately 10%, and the variance of the combined estimate was almost tripled. Our generic bias modelling approach allows decisions to be based on all available evidence, with less rigorous or less relevant studies downweighted by using computationally simple methods. [source]
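The core of such a bias-adjusted pooling can be sketched simply: subtract each study's elicited mean bias, inflate its variance by the bias uncertainty, then combine by inverse-variance weighting. The study values below are invented, and this sketch omits the elicitation machinery and the internal/external bias decomposition of the paper.

```python
def bias_adjusted_pool(studies):
    """Inverse-variance meta-analysis after subtracting an elicited mean bias
    and inflating each study's variance by the bias uncertainty."""
    num = den = 0.0
    for y, v, bias_mean, bias_var in studies:
        y_adj = y - bias_mean       # shift by the expected bias
        v_adj = v + bias_var        # extra uncertainty downweights the study
        w = 1.0 / v_adj
        num += w * y_adj
        den += w
    return num / den, 1.0 / den     # pooled estimate and its variance

# hypothetical log-odds-ratio studies: (estimate, variance, bias mean, bias variance)
studies = [(-0.5, 0.04, -0.1, 0.05),
           (-0.3, 0.02,  0.0, 0.00),
           (-0.8, 0.09, -0.2, 0.10)]
est, var = bias_adjusted_pool(studies)
naive, naive_var = bias_adjusted_pool([(y, v, 0.0, 0.0) for y, v, *_ in studies])
# adjustment shifts the pooled estimate and inflates its variance
```

Less rigorous studies receive larger bias variances and so smaller weights, which is exactly the downweighting mechanism the summary describes.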


Simple devices for measuring complex ultrashort pulses

LASER & PHOTONICS REVIEWS, Issue 3 2009
R. Trebino
Abstract We describe experimentally simple, accurate, and reliable methods for measuring ultrashort laser pulses ranging from very simple to potentially very complex. With only a few easily aligned components, these methods allow the measurement of a wide range of pulses, including those with time-bandwidth products greater than 1000 and those with energies of only a few hundred photons. In addition, two new, very simple methods allow the measurement of the complete spatio-temporal intensity and phase of even complex pulses on a single shot or at a tight focus. [source]


On the Rate of Convergence of Discrete-Time Contingent Claims

MATHEMATICAL FINANCE, Issue 1 2000
Steve Heston
This paper characterizes the rate of convergence of discrete-time multinomial option prices. We show that the rate of convergence depends on the smoothness of option payoff functions, and is much lower than commonly believed because option payoff functions are often of all-or-nothing type and are not continuously differentiable. To improve the accuracy, we propose two simple methods, an adjustment of the discrete-time solution prior to maturity and smoothing of the payoff function, which yield solutions that converge to their continuous-time limit at the maximum possible rate enjoyed by smooth payoff functions. We also propose an intuitive approach that systematically derives multinomial models by matching the moments of a normal distribution. A highly accurate trinomial model also is provided for interest rate derivatives. Numerical examples are carried out to show that the proposed methods yield fast and accurate results. [source]
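The lattice convergence the paper analyses can be seen with a standard example: a Cox–Ross–Rubinstein binomial price of a European call approaching its Black–Scholes limit as the number of steps grows. This illustrates plain multinomial convergence only, not the paper's specific adjustment and payoff-smoothing methods.

```python
import math

def crr_call(S, K, r, sigma, T, n):
    """Cox-Ross-Rubinstein binomial price of a European call with n steps."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    q = (math.exp(r * dt) - d) / (u - d)      # risk-neutral up probability
    disc = math.exp(-r * dt)
    vals = [max(S * u**j * d**(n - j) - K, 0.0) for j in range(n + 1)]
    for _ in range(n):                         # backward induction
        vals = [disc * (q * vals[j + 1] + (1 - q) * vals[j]) for j in range(len(vals) - 1)]
    return vals[0]

def bs_call(S, K, r, sigma, T):
    """Black-Scholes European call (the continuous-time limit)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

exact = bs_call(100, 100, 0.05, 0.2, 1.0)
err_50 = abs(crr_call(100, 100, 0.05, 0.2, 1.0, 50) - exact)
err_200 = abs(crr_call(100, 100, 0.05, 0.2, 1.0, 200) - exact)
```

For the discontinuous all-or-nothing payoffs the paper emphasizes, the raw lattice error oscillates and decays more slowly, which is what motivates the proposed adjustment and smoothing.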


From linear to non-linear scales: analytical and numerical predictions for weak-lensing convergence

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 2 2004
Andrew J. Barber
ABSTRACT Weak-lensing convergence can be used directly to map and probe the dark-mass distribution in the Universe. Building on earlier studies, we recall how the statistics of the convergence field are related to the statistics of the underlying mass distribution, in particular to the many-body density correlations. We describe two model-independent approximations which provide two simple methods to compute the probability distribution function (pdf) of the convergence. We apply one of these to the case where the density field can be described by a lognormal pdf. Next, we discuss two hierarchical models for the high-order correlations which allow us to perform exact calculations and evaluate the previous approximations in such specific cases. Finally, we apply these methods to a very simple model for the evolution of the density field from linear to highly non-linear scales. Comparisons with results from numerical simulations, based on a number of different realizations, show excellent agreement with our theoretical predictions. We have probed various angular scales in the numerical work and considered sources at 14 different redshifts in each of two different cosmological scenarios, an open cosmology and a flat cosmology with a non-zero cosmological constant. Our simulation technique employs computations of the full three-dimensional shear matrices along the line of sight from the source redshift to the observer and is complementary to more popular ray-tracing algorithms. Our results therefore provide a valuable cross-check for such complementary simulation techniques, as well as for our simple analytical model, from the linear to the highly non-linear regime. [source]


A Latent-Class Mixture Model for Incomplete Longitudinal Gaussian Data

BIOMETRICS, Issue 1 2008
Caroline Beunckens
Summary In the analyses of incomplete longitudinal clinical trial data, there has been a shift away from simple methods that are valid only if the data are missing completely at random to more principled ignorable analyses, which are valid under the less restrictive missing at random assumption. The availability of the necessary standard statistical software nowadays allows for such analyses in practice. While the possibility of data missing not at random (MNAR) cannot be ruled out, it is argued that analyses valid under MNAR are not well suited for the primary analysis in clinical trials. Rather than either forgetting about or blindly shifting to an MNAR framework, the optimal place for MNAR analyses is within a sensitivity-analysis context. One such route for sensitivity analysis is to consider, next to selection models, pattern-mixture models or shared-parameter models. The latter can also be extended to a latent-class mixture model, the approach taken in this article. The performance of the so-obtained flexible model is assessed through simulations and the model is applied to data from a depression trial. [source]


An Exact Test for the Association Between the Disease and Alleles at Highly Polymorphic Loci with Particular Interest in the Haplotype Analysis

BIOMETRICS, Issue 3 2001
Chihiro Hirotsu
Summary. The association analysis between disease and genetic alleles is one of the simple methods for localizing a susceptibility locus in the genes. Several statistical tests have been proposed for revealing such associations, without explicitly discussing the alternative hypotheses. We therefore specify two types of alternative hypotheses (i.e., there is only one susceptibility allele in the locus, and there is an extension or shortening of alleles associated with the disease) and derive exact tests for the respective hypotheses. We also propose to combine these two tests when the prior knowledge is not sufficient to specify one of these two hypotheses. In particular, these ideas are extended to the haplotype analysis of three-way association between the disease and bivariate allele frequencies at two closely linked loci. As a by-product, a factorization of the probability distribution of the three-way cell frequencies under the null hypothesis of no three-way interaction is obtained. [source]
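In the simplest biallelic case, an exact association test reduces to Fisher's exact test on a 2×2 table of allele counts; the paper's tests for highly polymorphic loci and haplotypes are more elaborate extensions of the same idea. A self-contained sketch with hypothetical counts:

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact test p-value for the 2x2 table [[a, b], [c, d]],
    summing all tables at least as extreme (i.e. no more probable) as the observed one."""
    row1, row2, col1, n = a + b, c + d, a + c, a + b + c + d

    def p_table(x):  # hypergeometric probability of x in the top-left cell
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo = max(0, col1 - row2)
    hi = min(row1, col1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1.0 + 1e-9))

# hypothetical counts: carriers vs non-carriers of one allele, cases vs controls
p = fisher_exact_2x2(30, 70, 15, 85)
```

Enumerating tables exactly, rather than relying on a χ² approximation, is what makes such tests reliable for the sparse cells that highly polymorphic loci produce.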


Advanced Statistics: Missing Data in Clinical Research,Part 2: Multiple Imputation

ACADEMIC EMERGENCY MEDICINE, Issue 7 2007
Craig D. Newgard MD
In part 1 of this series, the authors describe the importance of incomplete data in clinical research, and provide a conceptual framework for handling incomplete data by describing typical mechanisms and patterns of censoring, and detailing a variety of relatively simple methods and their limitations. In part 2, the authors will explore multiple imputation (MI), a more sophisticated and valid method for handling incomplete data in clinical research. This article will provide a detailed conceptual framework for MI, comparative examples of MI versus naive methods for handling incomplete data (and how different methods may impact subsequent study results), plus a practical user's guide to implementing MI, including sample statistical software MI code and a deidentified precoded database for use with the sample code. [source]
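A toy illustration of the MI idea: impute each missing value several times, analyze each completed data set, and pool the results with Rubin's rules. The hot-deck draw used here is a deliberately crude imputation model and the data are invented; the article recommends properly specified imputation models, not this shortcut.

```python
import random
import statistics

def multiple_imputation_mean(data, m=20, seed=0):
    """Rubin-style MI for the mean of a variable with missing entries (None).
    Each of the m imputations draws missing values from the observed ones."""
    rng = random.Random(seed)
    observed = [x for x in data if x is not None]
    estimates, variances = [], []
    for _ in range(m):
        filled = [x if x is not None else rng.choice(observed) for x in data]
        estimates.append(statistics.mean(filled))
        variances.append(statistics.variance(filled) / len(filled))
    q_bar = statistics.mean(estimates)       # pooled point estimate
    u_bar = statistics.mean(variances)       # within-imputation variance
    b = statistics.variance(estimates)       # between-imputation variance
    total_var = u_bar + (1 + 1 / m) * b      # Rubin's rules
    return q_bar, total_var

data = [5.1, 4.8, None, 5.5, None, 5.0, 4.9, 5.2]  # hypothetical measurements
est, var = multiple_imputation_mean(data)
```

The between-imputation component b is what single imputation discards, which is why single imputation understates uncertainty.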


Employing following eye movements to discriminate normal from glaucoma subjects

CLINICAL & EXPERIMENTAL OPHTHALMOLOGY, Issue 3 2000
Wl Severt MDPhD
ABSTRACT We recorded optokinetic nystagmus (OKN) to see if slow phase velocity, duration or other measures were affected by glaucoma. Drifting grating patterns that either weakly or strongly evoked the spatial frequency doubling illusion were employed. Analysis of 68 variables characterizing the OKN revealed that small subsets of these variables were good at discriminating normal from primary open angle glaucoma subjects. The variables were related to the regularity of following eye movements. Models including the best five variables, selected in two different ways, classified about 90% of subjects correctly. Impaired accuracy of eye movements suggests that glaucoma changes the signal-to-noise ratio available to the brain. The gross changes observed permit the use of electro-oculography or other simple methods in the clinic. [source]


Comparison of practical methods for urinary glycosaminoglycans and serum hyaluronan with clinical activity scores in patients with Graves' ophthalmopathy

CLINICAL ENDOCRINOLOGY, Issue 6 2004
João R. M. Martins
Summary Objective: Immunosuppressive treatment of Graves' ophthalmopathy (GO) should be restricted to patients with active eye disease, but assessing disease activity is difficult. Several methods to evaluate GO activity have been introduced, but none of them is satisfactory. Glycosaminoglycans (GAGs) are complex polysaccharides that participate in the pathogenesis of GO, and attempts have been made to correlate their local increase with urinary GAGs (uGAGs) or serum hyaluronan (sHA), but the available techniques are laborious, time-consuming and difficult for routine use. The aim of the present study is to develop practical and simple methods for uGAGs and sHA and compare them to the activity and severity of thyroid-associated ophthalmopathy. Design, patients and measurements: We developed a microelectrophoresis technique for uGAGs and a fluoroassay for sHA and assessed each in 152 patients with Graves' disease, 25 without GO and 127 with GO, classified according to the Clinical Activity Score (CAS). All patients had been euthyroid for > 2 months. Results: Patients with inactive disease (CAS ≤ 2, n = 100) had uGAGs (4.2 ± 1.3 µg/mg creatinine) and sHA (11.1 ± 7.2 µg/l) that did not differ from normal subjects (3.1 ± 1.1 µg/mg creatinine, n = 138, and 13.9 ± 9.6 µg/l, n = 395). In contrast, patients with active eye disease (CAS ≥ 3, n = 27) had uGAGs (8.4 ± 2.7 µg/mg creatinine) and sHA (32.3 ± 17.8 µg/l) 2–3 times higher than patients with inactive eye disease. Using a cut-off of 6.1 µg/mg creatinine for uGAGs and 20.7 µg/l for sHA we found, respectively, 85% and 81% sensitivity and 93% and 91% specificity for each test. The positive and negative predictive values were 77% and 96% for uGAGs and 71% and 95% for sHA. Conclusion: Employing these two new methods we have established a significant relationship between the levels of uGAGs and/or sHA and the clinical activity of GO. Therefore, together with CAS, uGAG determination, and, to a lesser degree, sHA, would be very useful in discriminating between active and inactive ocular disease and in deciding on the best therapy for GO patients. [source]
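The sensitivity/specificity/predictive-value arithmetic behind the reported cut-offs can be sketched as follows. The marker values below are hypothetical; only the 6.1 µg/mg creatinine cut-off is taken from the abstract.

```python
def diagnostic_metrics(active, inactive, cutoff):
    """Sensitivity, specificity, PPV and NPV of a marker for 'active' disease at a cutoff."""
    tp = sum(x >= cutoff for x in active)      # active patients above the cut-off
    fn = len(active) - tp
    tn = sum(x < cutoff for x in inactive)     # inactive patients below the cut-off
    fp = len(inactive) - tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return sens, spec, ppv, npv

# hypothetical uGAG levels (µg/mg creatinine), at the paper's 6.1 cut-off
active   = [8.4, 7.2, 9.1, 6.5, 5.9, 8.8]
inactive = [4.2, 3.9, 5.1, 6.3, 4.4, 3.5, 4.8, 5.0]
sens, spec, ppv, npv = diagnostic_metrics(active, inactive, 6.1)
```

Note that PPV and NPV depend on the proportion of active cases in the sample, so the paper's values apply to its own case mix rather than to any clinic population.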