Optimal Protocol (optimal + protocol)

Selected Abstracts


The preparation of periapical lesions for flow cytometry

INTERNATIONAL ENDODONTIC JOURNAL, Issue 2 2000
K. Fernando
Aim To devise an optimal protocol and to analyse the leucocyte composition of periapical (PA) lesions by flow cytometry. Methodology PA lesions were mechanically agitated, with and without proteolysis, using either 0.2% collagenase alone or in combination with 0.02% DNA-ase, in serial incubations until all tissue was digested. The efficacy of each method was assessed by counting total cell yield and cell viability. Phenotype stability was gauged by the percentage of peripheral blood leucocytes (PBL) which expressed CD45RB, CD3, CD20, CD4 and CD8 before and after mechanical and collagenase treatment. Results Disaggregation of PA lesions was superior when collagenase was present, but cell clumping was problematic unless DNA-ase was also added; serial digestion with this combination produced the optimal cell yield and viability. Nevertheless, the total number of cells released rarely exceeded 10⁵, although viability was in excess of 80%. Mechanical agitation and proteolysis adversely affected PBL phenotypes, but collagenase digestion limited to 10 min caused the least damage. Flow cytometric analysis of disaggregated PA lesions failed to identify more than 7.9% (mean; range 6–10%) CD45RB+ cells. Conclusions Because of the necessity for single-cell suspensions, flow cytometry is not easily applied to the analysis of leucocytes in PA lesions, and further refinements in tissue disaggregation and cell preparation are required. [source]
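
For orientation, the sketch below shows how total cell yield and viability are typically computed from a haemocytometer count, the two readouts used above to compare disaggregation methods. The counts, dilution factor, and suspension volume are illustrative assumptions, not data from the study.

```python
# Hypothetical sketch: total cell yield and % viability from a trypan blue
# haemocytometer count. All numbers are illustrative, not from the study.

def cell_yield(live_count: int, dead_count: int,
               squares_counted: int, dilution_factor: float,
               suspension_volume_ml: float):
    """Return (viable cells in suspension, % viability); each large
    haemocytometer square corresponds to 0.1 µL, i.e. 1e-4 mL."""
    cells_per_ml = ((live_count + dead_count) / squares_counted) \
        * dilution_factor * 1e4
    total_cells = cells_per_ml * suspension_volume_ml
    viability = 100.0 * live_count / (live_count + dead_count)
    return total_cells * viability / 100.0, viability

viable, pct = cell_yield(live_count=85, dead_count=15,
                         squares_counted=4, dilution_factor=2,
                         suspension_volume_ml=0.2)
print(f"viable cells: {viable:.2e}, viability: {pct:.0f}%")
# These illustrative counts give ~8.5e4 viable cells at 85% viability,
# i.e. the same order as the reported yields of under 10^5 cells at >80%.
```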


Popliteal lymph node assay: facts and perspectives

JOURNAL OF APPLIED TOXICOLOGY, Issue 6 2005
Guillaume Ravel
Abstract The popliteal lymph node assay (PLNA) derives from the hypothesis that some supposedly immune-mediated adverse effects induced by certain pharmaceuticals involve a mechanism resembling a graft-versus-host reaction. The injection of many, but not all, of these compounds into the footpad of mice or rats produces an increase in the weight and/or cellularity of the popliteal lymph node in the treated limb (direct PLNA). Some of the compounds known to cause these adverse effects in humans, however, failed to induce a positive PLNA response, leading to refinements of the technique to include pretreatment with enzyme inducers, depletion of CD4+ T cells or additional endpoints such as histological examination, lymphocyte subset analysis and cytokine fingerprinting. Alternative approaches have been used to improve further the predictability of the assay. In the secondary PLNA, the test compound is injected twice in order to elicit a greater secondary response, thus suggesting a memory-specific T cell response. In the adoptive PLNA, popliteal lymph node cells from treated mice are injected into the footpad of naive mice; a marked response to a subsequent footpad challenge demonstrates the involvement of T cells. Finally, the reporter antigens TNP-Ficoll and TNP-ovalbumin are used to differentiate compounds that induce responses involving neo-antigen help or co-stimulatory signals (modified PLNA). The PLNA is increasingly considered as a tool for detecting the potential to induce both sensitization and autoimmune reactions. A major current limitation is validation: a small inter-laboratory validation study of the direct PLNA found consistent results, but no such study has been performed using an alternative protocol. Other issues include selection of the optimal protocol for an improved prediction of sensitization versus autoimmunity, and the elimination of false-positive responses due to primary irritation. Finally, a better understanding of the underlying mechanisms is essential to determine the most relevant endpoints. The confusion resulting from use of the PLNA to predict autoimmune-like reactions as well as sensitization should be clarified. Interestingly, most drugs that were positive in the direct PLNA are also known to cause drug hypersensitivity syndrome in treated patients. This observation is expected to open new avenues of research. Copyright © 2005 John Wiley & Sons, Ltd. [source]
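
As a rough illustration of how the direct PLNA readout described above is commonly scored, the sketch below computes weight and cellularity indices as treated-to-control ratios; the positivity cutoff of 2 and all numerical values are assumptions for illustration, not taken from this review.

```python
# Illustrative sketch (not from the paper): scoring a direct PLNA by the
# ratio of treated to control popliteal lymph node weight and cellularity.
# The cutoff of 2 is a commonly cited convention and is an assumption here.

def pln_indices(treated_weight_mg, control_weight_mg,
                treated_cells, control_cells):
    """Return (weight index, cellularity index) as treated/control ratios."""
    weight_index = treated_weight_mg / control_weight_mg
    cellularity_index = treated_cells / control_cells
    return weight_index, cellularity_index

wi, ci = pln_indices(treated_weight_mg=4.8, control_weight_mg=1.6,
                     treated_cells=12e6, control_cells=3e6)
positive = wi >= 2 or ci >= 2  # assumed cutoff for calling a response positive
print(f"weight index {wi:.1f}, cellularity index {ci:.1f}, positive: {positive}")
```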


Liver fat is reproducibly measured using computed tomography in the Framingham Heart Study

JOURNAL OF GASTROENTEROLOGY AND HEPATOLOGY, Issue 6 2008
Elizabeth K Speliotes
Abstract Background and Aims: Fatty liver is the hepatic manifestation of obesity, but community-based assessment of fatty liver among unselected patients is limited. We sought to determine the feasibility of, and optimal protocol for, quantifying liver fat content in the Framingham Heart Study using multidetector computed tomography (MDCT) scanning. Methods: Participants (n = 100, 49% women, mean age 59.4 years, mean body mass index 27.8 kg/m²) were drawn from the Framingham Heart Study cohort. Two readers measured the attenuation of the liver, spleen, paraspinal muscles, and an external standard from MDCT scans using multiple slices in chest and abdominal scans. Results: The mean measurement variation was larger within a single axial computed tomography (CT) slice than between multiple axial CT slices for the liver and spleen, whereas it was similar for the paraspinal muscles. Measurement variation in the liver, spleen, and paraspinal muscles was smaller in the abdomen than in the chest. Three versus six measures of attenuation in the liver, and two versus three measures in the spleen, gave reproducible measurements of tissue attenuation (intraclass correlation coefficient [ICCC] of 1 in the abdomen). Intrareader and interreader reproducibility (ICCC) of the liver-to-spleen ratio was 0.98 and 0.99, of the liver-to-phantom ratio was 0.99 and 0.99, and of the liver-to-muscle ratio was 0.93 and 0.86, respectively. Conclusion: One cross-sectional slice is adequate to capture the majority of the variance in liver fat content per individual. Abdominal scan measures of liver fat content are more precise than chest scan measures. The measurement of liver fat content on MDCT scans is feasible and reproducible. [source]
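
The sketch below illustrates the attenuation ratios reported above: a small number of region-of-interest measurements are averaged and the mean liver attenuation is divided by the spleen, external phantom, and paraspinal muscle values. All Hounsfield-unit numbers and variable names are assumptions for illustration, not study data.

```python
# Minimal sketch (illustrative values): averaging Hounsfield-unit ROI
# measurements on one abdominal MDCT slice and forming the liver-to-spleen,
# liver-to-phantom, and liver-to-muscle attenuation ratios.
from statistics import mean

liver_hu   = [52.0, 55.0, 50.0]   # three liver ROIs on one abdominal slice
spleen_hu  = [48.0, 50.0]         # two spleen ROIs
phantom_hu = 118.0                # external calibration standard
muscle_hu  = 55.0                 # paraspinal muscle ROI

liver = mean(liver_hu)
liver_to_spleen  = liver / mean(spleen_hu)
liver_to_phantom = liver / phantom_hu
liver_to_muscle  = liver / muscle_hu

print(f"L/S {liver_to_spleen:.2f}  L/P {liver_to_phantom:.2f}  "
      f"L/M {liver_to_muscle:.2f}")
# Lower liver attenuation relative to spleen (a ratio falling below ~1) is
# the usual CT sign of increasing hepatic fat content.
```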


Coin flipping from a cosmic source: On error correction of truly random bits

RANDOM STRUCTURES AND ALGORITHMS, Issue 4 2005
Elchanan Mossel
We study a problem related to coin flipping, coding theory, and noise sensitivity. Consider a source of truly random bits x , {0, 1}n, and k parties, who have noisy version of the source bits yi , {0, 1}n, when for all i and j, it holds that P[y = xj] = 1 , ,, independently for all i and j. That is, each party sees each bit correctly with probability 1 , ,, and incorrectly (flipped) with probability ,, independently for all bits and all parties. The parties, who cannot communicate, wish to agree beforehand on balanced functions fi: {0, 1}n , {0, 1} such that P[f1(y1) = , = fk(yk)] is maximized. In other words, each party wants to toss a fair coin so that the probability that all parties have the same coin is maximized. The function fi may be thought of as an error correcting procedure for the source x. When k = 2,3, no error correction is possible, as the optimal protocol is given by fi(yi) = y. On the other hand, for large values of k, better protocols exist. We study general properties of the optimal protocols and the asymptotic behavior of the problem with respect to k, n, and ,. Our analysis uses tools from probability, discrete Fourier analysis, convexity, and discrete symmetrization. © 2005 Wiley Periodicals, Inc. Random Struct. Alg., 2005 [source]