Used Methods (used + methods)


Selected Abstracts


Cytogenetic status in newborns and their parents in Madrid: The BioMadrid study

ENVIRONMENTAL AND MOLECULAR MUTAGENESIS, Issue 4 2010
Virginia Lope
Abstract Monitoring cytogenetic damage is frequently used to assess population exposure to environmental mutagens. The cytokinesis-block micronucleus assay is one of the methods most widely employed in these studies. In the present study we used this assay to assess the baseline frequency of micronuclei in a healthy population of father-pregnant woman-newborn trios drawn from two Madrid areas. We also investigated the association between micronucleus frequency and specific socioeconomic, environmental, and demographic factors collected by questionnaire. Mercury, arsenic, lead, and cadmium blood levels were measured by atomic absorption spectrometry. The association between micronucleated cell frequency and the variables collected by questionnaire, as well as the risk associated with the presence of elevated levels of metals in blood, was estimated using Poisson models, taking the number of micronucleated cells in 1,000 binucleated cells (MNBCs) as the dependent variable. Separate analyses were conducted for the 110 newborns, 136 pregnant women, and 134 fathers in whom micronuclei could be assessed. The mean number of micronucleated cells per 1,000 binucleated cells was 3.9, 6.5, and 6.1, respectively. Our results show a statistically significant correlation in MNBC frequency between fathers and mothers, and between parents and newborns. Elevated blood mercury levels in fathers were associated with significantly higher MNBC frequency compared with fathers who had normal mercury levels (RR: 1.21; 95% CI: 1.02–1.43). This last result suggests the need to implement greater control over populations which, by reason of their occupation or lifestyle, are among those most exposed to this metal. Environ. Mol. Mutagen., 2010. © 2009 Wiley-Liss, Inc. [source]
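
As a rough illustration of the kind of model described above, the sketch below fits a Poisson regression of micronucleated-cell counts on an exposure indicator. The data frame, its column names (mnbc, high_mercury, area) and all values are hypothetical; the original study's variables and covariates are not reproduced here.

```python
# Minimal sketch of a Poisson model for micronucleated-cell counts, assuming a
# hypothetical pandas DataFrame `df` with one row per subject and columns
# `mnbc` (micronucleated cells per 1,000 binucleated cells), `high_mercury`
# (0/1 indicator of elevated blood mercury) and `area` (study area).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "mnbc": rng.poisson(6, size=134),              # counts per 1,000 binucleated cells
    "high_mercury": rng.integers(0, 2, size=134),  # elevated blood mercury (0/1)
    "area": rng.choice(["A", "B"], size=134),      # study area
})

# Poisson GLM with a log link; exponentiated coefficients are rate ratios (RR).
model = smf.glm("mnbc ~ high_mercury + area", data=df,
                family=sm.families.Poisson()).fit()
print(np.exp(model.params))       # rate ratios
print(np.exp(model.conf_int()))   # 95% CI for the rate ratios
```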


Measuring metabolic rate in the field: the pros and cons of the doubly labelled water and heart rate methods

FUNCTIONAL ECOLOGY, Issue 2 2004
P. J. Butler
Summary 1. Measuring the metabolic rate of animals in the field (FMR) is central to the work of ecologists in many disciplines. In this article we discuss the pros and cons of the two most commonly used methods for measuring FMR. 2. Both methods are constantly under development, but at present can be used accurately only to estimate the mean rate of energy expenditure of groups of animals. The doubly labelled water (DLW) method uses stable isotopes of hydrogen and oxygen to trace the flow of water and carbon dioxide through the body over time. From these data, it is possible to derive a single estimate of the rate of oxygen consumption (V̇O2) for the duration of the experiment. The duration of the experiment depends on the rate of flow of the isotopes of oxygen and hydrogen through the body, which in turn depends on the animal's size, ranging from 24 h for small vertebrates up to 28 days in humans. 3. This technique has been used widely, partly as a result of its relative simplicity and potentially low cost, though there is some uncertainty over the determination of the standard error of the estimate of mean V̇O2. 4. The heart rate (fH) method depends on the physiological relationship between heart rate and V̇O2. 5. If these two quantities are calibrated against each other under controlled conditions, fH can then be measured in free-ranging animals and used to estimate V̇O2. 6. The latest generation of small implantable data loggers makes it possible to measure fH for over a year on a very fine temporal scale, though the current size of the data loggers limits the size of experimental animals to around 1 kg. However, externally mounted radio-transmitters are now sufficiently small to be used with animals of less than 40 g body mass. This technique is gaining in popularity owing to its high accuracy and versatility, though the logistic constraint of performing calibrations can make its use a relatively extended process. [source]
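
The heart-rate method described in points 4 and 5 can be illustrated with a minimal calibration-and-prediction sketch. All numbers below are hypothetical, and a simple linear fit is assumed purely for illustration; real calibrations often use more elaborate models.

```python
# Minimal sketch of the heart-rate (fH) method, assuming hypothetical
# calibration data: V̇O2 measured by respirometry at several controlled
# heart rates, then a fitted regression used to predict field V̇O2.
import numpy as np

# Hypothetical calibration data (units are illustrative only).
fh_cal = np.array([60, 80, 100, 120, 140, 160])         # beats per minute
vo2_cal = np.array([5.1, 7.8, 10.6, 13.9, 17.2, 20.8])  # mL O2 per minute

# Ordinary least-squares fit of V̇O2 on fH.
slope, intercept = np.polyfit(fh_cal, vo2_cal, 1)

# Heart rates logged in the field by an implanted data logger (hypothetical).
fh_field = np.array([95, 110, 87, 132])
vo2_field = slope * fh_field + intercept
print(vo2_field.round(1), "estimated mL O2 / min")
```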


Correction of anemia in patients with terminal-stage chronic renal insufficiency on haemodialysis

HEMODIALYSIS INTERNATIONAL, Issue 1 2005
R.Z. Ismagilov
One of the basic symptoms of terminal-stage chronic renal insufficiency is anemia. Of the methods used to correct anemia, administration of recombinant human erythropoietin (r-HuEPO) preparations is considered the most effective. The Scientific Centre of Surgery began using r-HuEPO in 1994. In patients with terminal-stage chronic renal insufficiency, r-HuEPO had a positive effect in 90–95% of cases, but 5–10% of patients were intolerant to erythropoietin, which prompted a search for new effective methods of correcting anemia. During the study, erythrocyte, hemoglobin and reticulocyte counts in peripheral blood and the acid–base status of the blood were determined. All hematology parameters were measured at the start of treatment and on days 5 and 15 of bone marrow stimulation. By day 15 after laser stimulation of the bone marrow there was a significant increase in erythrocyte count, hemoglobin and hematocrit. The initial erythrocyte count was 2.22 ± 0.1 × 10¹²/L, hemoglobin 67.7 ± 3.2 g/L and hematocrit 18.2 ± 1.2%. During laser treatment the erythrocyte count increased to 2.9 ± 0.8 × 10¹²/L, hemoglobin to 89.6 ± 2.9 g/L and hematocrit to 28.2 ± 1.3% (P < 0.005). Hematology parameters in the blood of the control group did not change significantly. [source]


Using feedforward neural networks and forward selection of input variables for an ergonomics data classification problem

HUMAN FACTORS AND ERGONOMICS IN MANUFACTURING & SERVICE INDUSTRIES, Issue 1 2004
Chuen-Lung Chen
A method was developed to accurately predict the risk of injuries in industrial jobs from datasets that do not meet the assumptions of parametric statistical tools or that are incomplete. Previous research used a backward-elimination process for feedforward neural network (FNN) input variable selection. Simulated annealing (SA) was used as a local search method in conjunction with a conjugate-gradient algorithm to develop an FNN. This article presents an incremental step in the use of FNNs for ergonomics analyses, specifically the use of forward selection of input variables. Advantages of this approach include enhancing the effectiveness of neural networks when observations are missing from ergonomics datasets, and preventing overspecification or overfitting of an FNN to training data. Classification performance across two methods involving the use of SA combined with either forward selection or backward elimination of input variables was comparable for complete datasets, and the forward-selection approach produced results superior to previously used methods of FNN development, including the error back-propagation algorithm, when dealing with incomplete data. © 2004 Wiley Periodicals, Inc. Hum Factors Man 14: 31–49, 2004. [source]
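
The forward-selection idea can be sketched as a greedy wrapper around a small feedforward network. The snippet below is not the authors' simulated-annealing/conjugate-gradient implementation; it uses scikit-learn's MLPClassifier and cross-validation on synthetic data purely to show the selection loop.

```python
# Sketch of greedy forward selection of input variables for a feedforward
# neural network, assuming a hypothetical ergonomics dataset `X` (n x p) and
# binary injury-risk labels `y`.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))    # 8 candidate input variables (synthetic)
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=200) > 0).astype(int)

selected, remaining, best_score = [], list(range(X.shape[1])), -np.inf
while remaining:
    # Score each candidate variable added to the current subset.
    scores = {}
    for j in remaining:
        cols = selected + [j]
        net = MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000, random_state=0)
        scores[j] = cross_val_score(net, X[:, cols], y, cv=5).mean()
    j_best = max(scores, key=scores.get)
    if scores[j_best] <= best_score:   # stop when no variable improves the score
        break
    best_score = scores[j_best]
    selected.append(j_best)
    remaining.remove(j_best)

print("selected variables:", selected, "CV accuracy:", round(best_score, 3))
```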


Nanopatterning Soluble Multifunctional Materials by Unconventional Wet Lithography

ADVANCED MATERIALS, Issue 10-11 2009
Massimiliano Cavallini
Abstract Molecular multifunctional materials have potential applications in many fields of technology, such as electronics, optics and optoelectronics, information storage, sensing, and energy conversion and storage. These materials are designed to exhibit enhanced properties and, at the same time, are endowed with functional groups that control their interactions, and hence their self-organization, into a variety of supramolecular architectures. Since most multifunctional materials are soluble, lithographic methods suitable for solutions are attracting increasing interest for the manufacturing of the new materials and their applications. The aim of this paper is to highlight some of the recent advances in solution-based fabrication of multifunctional materials. We explain and examine the principles, processes, materials, and limitations of this class of patterning techniques, which we term unconventional wet lithographies (UWLs). We describe their ability to yield patterns and structures whose feature sizes range from nanometers to micrometers. In the following sections, we focus our attention on micromolding in capillaries, lithographically controlled wetting, and grid-assisted deposition, the most widely used methods that have been demonstrated to lead to fully operating devices. [source]


Fully stressed frame structures unobtainable by conventional design methodology

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 12 2001
Keith M. Mueller
Abstract A structure is said to be fully stressed if every member of the structure is stressed to its maximum allowable limit for at least one of the loading conditions. Fully stressed design is most commonly used for small and medium-sized frames where drift is not a primary concern. There are several potential methods available to the engineer for proportioning a fully stressed frame structure. The most commonly used methods are those taught to all structural engineering students and are very easy to understand and to implement. These conventional methods are based on the intuitive idea that if a member is overstressed, it should be made larger. If a member is understressed, it can be made smaller, saving valuable material. It has been found that a large number of distinct fully stressed designs can exist for a single frame structure subjected to multiple loading conditions. This study will demonstrate that conventional methods are unable to converge to many, if not most, of these designs. These unobtainable designs are referred to as 'repellers' under the action of conventional methods. Other, more complicated methods can be used to locate these repelling fully stressed designs. For example, Newton's method can be used to solve a non-linear system of equations that defines the fully stressed state. However, Newton's method can be plagued by divergence and also by convergence to physically meaningless solutions. This study will propose a new fully stressed design technique that does not have these problems. Copyright © 2001 John Wiley & Sons, Ltd. [source]
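
The conventional, intuitive rule mentioned above (enlarge overstressed members, shrink understressed ones) is essentially the stress-ratio method. The sketch below illustrates it for a hypothetical set of axially loaded members with fixed member forces; in a real frame the structural analysis would be repeated after each resizing, because the forces change with the stiffness distribution.

```python
# Sketch of the conventional stress-ratio resizing rule behind fully stressed
# design, shown for hypothetical axially loaded members (frame members with
# bending would need section-dependent stress recovery).
import numpy as np

allowable = 150.0                         # allowable stress (MPa), assumed
areas = np.array([10.0, 10.0, 10.0])      # initial member areas (cm^2)
forces = np.array([120.0, 45.0, 210.0])   # member forces from analysis (kN)

for _ in range(20):
    stress = forces * 10.0 / areas        # kN/cm^2 -> MPa
    ratio = np.abs(stress) / allowable
    if np.all(np.abs(ratio - 1.0) < 1e-3):
        break
    areas *= ratio                        # enlarge overstressed, shrink understressed
    # A real implementation would re-run the frame analysis here; the forces
    # are kept fixed in this sketch purely for illustration.

print(areas.round(2))
```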


Modelling of wetting and drying of shallow water using artificial porosity

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 11 2005
B. van't Hof
Abstract A new method for wetting and drying in two-dimensional shallow water flow models is proposed. The method is closely related to the artificial porosity method used by different authors in Boussinesq-type models, but is further extended for use in a semi-implicit (ADI-type) time integration scheme. The method is implemented in the simulation model WAQUA using general boundary-fitted coordinates and is applied to a realistic schematization of a portion of the river Meuse in the Netherlands. A large advantage of the artificial porosity method over traditionally used methods based on 'screens' is a strongly reduced sensitivity of the model results. Instead of blocking all water transport in grid points where the water level becomes small, as in screen-based methods, the flow is gradually closed off. Small changes in parameters such as the initial conditions or bottom topography therefore no longer lead to large changes in the model results. Copyright © 2005 John Wiley & Sons, Ltd. [source]
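
The difference between a hard 'screen' and a gradual artificial-porosity closure can be sketched with a simple depth-dependent transport factor. The functional form and parameters below are illustrative assumptions, not the formulation implemented in WAQUA.

```python
# Sketch of the idea behind an artificial porosity treatment of wetting and
# drying: instead of a hard screen that blocks all transport below a threshold
# depth, the wet fraction of a cell is reduced smoothly as the depth -> 0.
import numpy as np

def screen_factor(h, h_min=0.05):
    """Traditional screen: transport fully open or fully blocked."""
    return np.where(h > h_min, 1.0, 0.0)

def porosity_factor(h, delta=0.10):
    """Smooth artificial porosity: gradually closes off the flow as h -> 0."""
    return np.clip(h / delta, 0.0, 1.0) ** 2

depths = np.array([0.0, 0.02, 0.05, 0.08, 0.12, 0.30])
print(screen_factor(depths))    # abrupt on/off behaviour
print(porosity_factor(depths))  # gradual closure, less parameter sensitivity
```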


DHPLC is superior to SSCP in screening p53 mutations in esophageal cancer tissues

INTERNATIONAL JOURNAL OF CANCER, Issue 1 2005
Osamu Yamanoshita
Abstract Mutations of the p53 tumor-suppressor gene universally occur in exons 5–8 in human cancer. We analyzed these mutations in esophageal cancer tissue from 207 patients in China using two methods, single-strand conformation polymorphism (SSCP), one of the most frequently used methods, and the recently developed denaturing high-performance liquid chromatography (DHPLC), and compared their sensitivity and efficiency. Exons 5–8 of p53 were amplified from esophageal cancer tissue genomes, screened for mutations and polymorphisms by SSCP and DHPLC in a blind study, and confirmed by direct sequencing. The numbers detected by DHPLC were greater than those detected by SSCP; that is, the rate of mutations and polymorphisms detected was lower with SSCP than with DHPLC, which also appeared to detect smaller mutations (substitutions and 1 bp insertions/deletions). Of the mutations with substitutions detected by DHPLC but not by SSCP, 50% substituted adenosine for other nucleotides, suggesting that these mutations are often missed when SSCP is used. According to these data, the sensitivity of SSCP and DHPLC was 81% and 97%, respectively, and the specificity was 97% and 85%, respectively. Our results suggest that DHPLC may be recommended over SSCP when screening gene mutations. Rates of p53 mutations and polymorphisms in esophageal cancer tissue in Chinese patients were 49% and 41% by DHPLC and SSCP, respectively. © 2004 Wiley-Liss, Inc. [source]
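
For reference, the sensitivity and specificity quoted above follow from the usual confusion-matrix definitions, with direct sequencing as the reference standard. The counts in the sketch below are hypothetical and chosen only to produce values of a similar magnitude.

```python
# Sketch of how screening sensitivity and specificity are computed from a
# confusion matrix; the counts are hypothetical, not the published data.
def sensitivity_specificity(tp, fn, tn, fp):
    """Return (sensitivity, specificity) from confusion-matrix counts."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for one screening method versus direct sequencing.
sens, spec = sensitivity_specificity(tp=97, fn=3, tn=85, fp=15)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```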


Mixed venous oxygen saturation is a prognostic marker after surgery for aortic stenosis

ACTA ANAESTHESIOLOGICA SCANDINAVICA, Issue 5 2010
J. HOLM
Background: Adequate monitoring of the hemodynamic state is essential after cardiac surgery and is vital for medical decision making, particularly concerning hemodynamic management. Unfortunately, commonly used methods to assess the hemodynamic state are not well documented with regard to outcome. Mixed venous oxygen saturation (SvO2) was therefore investigated after cardiac surgery. Methods: Detailed data regarding mortality were available on all patients undergoing aortic valve replacement for isolated aortic stenosis during a 5-year period in the southeast region of Sweden (n=396). SvO2 was routinely measured on admission to the intensive care unit (ICU) and registered in a database. A receiver operating characteristics (ROC) analysis of SvO2 in relation to post-operative mortality related to cardiac failure and all-cause mortality within 30 days was performed. Results: The area under the curve (AUC) was 0.97 (95% CI 0.96–1.00) for mortality related to cardiac failure (P=0.001) and 0.76 (95% CI 0.53–0.99) for all-cause mortality (P=0.011). The best cutoff for mortality related to cardiac failure was SvO2 53.7%, with a sensitivity of 1.00 and a specificity of 0.94. The negative predictive value was 100%. The best cutoff for all-cause mortality was SvO2 58.1%, with a sensitivity of 0.75 and a specificity of 0.84. The negative predictive value was 99.4%. Post-operative morbidity was also markedly increased in patients with a low SvO2. Conclusion: SvO2, on admission to the ICU after surgery for aortic stenosis, demonstrated excellent sensitivity and specificity for post-operative mortality related to cardiac failure and a fairly good AUC for all-cause mortality, with an excellent negative predictive value. [source]
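
A ROC analysis of this kind can be sketched as follows. The SvO2 values and outcomes below are simulated, and the cutoff is chosen by the Youden index purely for illustration; the study's actual data and cutoff-selection procedure are not reproduced.

```python
# Sketch of a ROC analysis with an optimal cutoff chosen by the Youden index,
# assuming hypothetical SvO2 values and 30-day outcome labels (1 = death).
# A *low* SvO2 indicates risk, so the negated value is used as the risk score.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
svo2 = np.concatenate([rng.normal(68, 7, 390), rng.normal(50, 6, 6)])
died = np.concatenate([np.zeros(390), np.ones(6)])

fpr, tpr, thresholds = roc_curve(died, -svo2)   # low SvO2 -> higher risk score
auc = roc_auc_score(died, -svo2)
best = np.argmax(tpr - fpr)                     # Youden's J statistic
print(f"AUC={auc:.2f}, best cutoff SvO2={-thresholds[best]:.1f}%, "
      f"sens={tpr[best]:.2f}, spec={1 - fpr[best]:.2f}")
```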


Stress and elastic-constant analysis by X-ray diffraction in thin films

JOURNAL OF APPLIED CRYSTALLOGRAPHY, Issue 3-2 2003
F. Badawi
Residual stresses influence most physical properties of thin films and are closely related to their microstructure. Among the most widely used methods, X-ray diffraction is the only one that allows determination of both the mechanical and the microstructural state of each diffracting phase. Diffracting planes are used as a strain gauge to measure elastic strains in one or several directions of the diffraction vector. Important information on the thin-film microstructure may also be extracted from the width of the diffraction peaks: in particular, the deconvolution of these peaks allows values of coherently diffracting domain size and microdistortions to be obtained. The genesis of residual stresses in thin films results from multiple mechanisms. Stresses may be divided into three major types: epitaxial stresses, thermal stresses and intrinsic stresses. Diffraction methods require knowledge of the thin-film elastic constants, which may differ from the bulk-material values as a result of the particular microstructure. By combining an X-ray diffractometer with a tensile tester, it is possible to determine the X-ray elastic constants of each diffracting phase in a thin-film/substrate system, in particular the Poisson ratio and the Young modulus. It is important to note that numerous difficulties relative to the application of diffraction methods may arise in the case of thin films. [source]


Comparison of the Performance of Varimax and Promax Rotations: Factor Structure Recovery for Dichotomous Items

JOURNAL OF EDUCATIONAL MEASUREMENT, Issue 1 2006
Holmes Finch
Nonlinear factor analysis is a tool commonly used by measurement specialists to identify both the presence and the nature of multidimensionality in a set of test items, an important issue given that standard Item Response Theory models assume a unidimensional latent structure. Results from most factor-analytic algorithms include loading matrices, which are used to link items with factors. Interpretation of the loadings typically occurs after they have been rotated in order to amplify the presence of simple structure. The purpose of this simulation study is to compare two commonly used methods of rotation, Varimax and Promax, in terms of their ability to correctly link items to factors and to identify the presence of simple structure. Results suggest that the two approaches are equally able to recover the underlying factor structure, regardless of the correlations among the factors, though the oblique method is better able to identify the presence of a "simple structure." These results suggest that for identifying which items are associated with which factors, either approach is effective, but that for identifying simple structure when it is present, the oblique method is preferable. [source]
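
As a compact illustration of what an orthogonal rotation does, the sketch below implements the standard varimax criterion in numpy and applies it to a synthetic loading matrix; Promax differs in that it allows the rotated factors to correlate. This is a generic textbook algorithm, not the simulation code of the study.

```python
# Minimal numpy sketch of varimax rotation of a loading matrix toward simple
# structure. The loading matrix below is synthetic, purely for illustration.
import numpy as np

def varimax(L, gamma=1.0, max_iter=100, tol=1e-6):
    """Orthogonally rotate loading matrix L (items x factors)."""
    p, k = L.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (Lr ** 3 - (gamma / p) * Lr @ np.diag((Lr ** 2).sum(axis=0))))
        R = u @ vt
        d_new = s.sum()
        if d_new < d * (1 + tol):   # stop when the criterion no longer improves
            break
        d = d_new
    return L @ R

# Two-factor synthetic loadings: items 0-2 load on factor 1, items 3-5 on factor 2.
L = np.array([[.7, .3], [.8, .2], [.6, .3], [.3, .7], [.2, .8], [.3, .6]])
print(varimax(L).round(2))
```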


The biometry of gills of 0-group European flounder

JOURNAL OF FISH BIOLOGY, Issue 4 2000
M. G. J. Hartl
The gill surface area of 0-group, post-metamorphic Pleuronectes flesus L. was examined using digital image analysis software and expressed in relation to body mass according to the equation log Y = log a + c log W (a = 239.02; c = 0.723). The components that constitute gill area, total filament length, interlamellar space and unilateral lamellar area, were measured. Measurement of the length of every filament on all eight arches showed that commonly used methods of calculation can lead to an under-estimation of up to 24% of total filament length. Direct measurements of unilateral lamellar area with digital image analysis showed that previously reported gill area data for the same species were over-estimated by as much as 58%. In addition, in this species the neglect of gill pouch asymmetry after metamorphosis can bring about a 14% over-estimation of total gill area. [source]
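
The reported allometric relationship can be applied directly, since log Y = log a + c log W is equivalent to Y = a·W^c. The body masses in the sketch below are hypothetical and the units are those of the original study (not restated here), so the printed values are illustrative only.

```python
# Sketch of the reported allometric relationship for gill surface area,
# Y = a * W^c with a = 239.02 and c = 0.723; body masses are hypothetical.
import numpy as np

a, c = 239.02, 0.723
W = np.array([0.5, 1.0, 2.0, 5.0])   # hypothetical body masses
Y = a * W ** c                       # equivalent to log Y = log a + c log W
print(np.round(Y, 1))
```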


Measuring Brain Atrophy in Multiple Sclerosis

JOURNAL OF NEUROIMAGING, Issue 2007
Nicola De Stefano MD
ABSTRACT The last decade has seen the development of methods that use conventional magnetic resonance imaging (MRI) to provide sensitive and reproducible assessments of brain volumes. This has increased interest in brain atrophy measurement as a reliable indicator of disease progression in many neurological disorders, including multiple sclerosis (MS). After a brief introduction in which we discuss the most commonly used methods for assessing brain atrophy, we review the most relevant MS studies that have used MRI-based quantitative measures of brain atrophy, the clinical importance of these results, and the potential for future application of these measures to understand MS pathology and progression. Despite the number of issues that still need to be solved, the measurement of brain atrophy by MRI is sufficiently precise and accurate. It represents one of the most promising in vivo measures of neuroaxonal degeneration in MS, and it should be used extensively in the future to assess and monitor pathological evolution and treatment efficacy in this disease. [source]


Review of methods for measuring and comparing center performance after organ transplantation

LIVER TRANSPLANTATION, Issue 10 2010
James Neuberger
The assessment of outcomes after transplantation is important for several reasons: it provides patients with data so that they can make informed decisions about the benefits of transplantation and the success of the transplant unit; it informs commissioners that resources are allocated properly; and it provides clinicians with reassurance that results are acceptable or, if they are not, provides early warning so that problems can be identified, corrections can be instituted early, and all interested parties can be reassured that scarce resources are used fairly. The need for greater transparency in reporting outcomes after liver transplantation, and for comparisons both between and within centers, has led to a number of approaches being adopted for monitoring center performance. We review some of the commonly used methods, highlight their strengths and weaknesses, and concentrate on methods that incorporate risk adjustment. Measuring and comparing outcomes after transplantation is complex, and there is no single approach that gives a complete picture. All those using analyses of outcomes must understand the merits and limitations of the individual methods. When used properly, such methods are invaluable in ensuring that a scarce resource is used effectively, that any adverse trend in outcomes is identified promptly and remedied, and that the best performers are identified; they thus allow the sharing of best practices. However, when they are used inappropriately, such measurements may lead to inappropriate conclusions, encourage risk-averse behavior, and discourage innovation. Liver Transpl 16:1119–1128, 2010. © 2010 AASLD. [source]


Evaluation of Insulin Sensitivity in Clinical Practice and in Research Settings

NUTRITION REVIEWS, Issue 12 2003
Lais U. Monzillo MD
Insulin resistance is the core metabolic abnormality in type 2 diabetes. Its high prevalence and its association with dyslipidemia, hypertension, hyperinsulinemia, and high coronary and cerebrovascular mortality put it in the forefront as a plausible target for aggressive intervention. Measurements of insulin sensitivity provide clinicians and clinical researchers with invaluable instruments to objectively evaluate the efficiency of both current and potentially useful interventional tools. Although several methods have been developed and validated to evaluate insulin sensitivity, none of them can be used universally in all patients. Moreover, a method suitable for use in clinical or basic research may not necessarily be practical for use in clinical practice or for epidemiologic research. We reviewed the currently used methods for assessment of insulin sensitivity. For each method, we summarize its procedure, normal value, cut-off value for defining insulin resistance, advantages and limitations, validity, accuracy in each patient population, and suitability for use in clinical practice and in research settings. The methods reviewed include fasting plasma insulin, the homeostatic model assessment, the quantitative insulin sensitivity check index, the glucose-to-insulin ratio, continuous infusion of glucose with model assessment, indices based on the oral glucose tolerance test, the insulin tolerance test, and the so-called "gold standard" methods, the hyperinsulinemic euglycemic clamp and the frequently sampled intravenous glucose tolerance test. [source]
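
Two of the fasting surrogate indices listed above have simple closed-form definitions that can be computed directly; the sketch below uses their standard published formulas with hypothetical fasting values.

```python
# Sketch of two fasting surrogate indices of insulin sensitivity, using their
# standard published formulas; the example values are hypothetical.
import math

def homa_ir(glucose_mmol_l, insulin_uU_ml):
    """Homeostatic model assessment of insulin resistance (HOMA-IR)."""
    return glucose_mmol_l * insulin_uU_ml / 22.5

def quicki(glucose_mg_dl, insulin_uU_ml):
    """Quantitative insulin sensitivity check index (QUICKI)."""
    return 1.0 / (math.log10(insulin_uU_ml) + math.log10(glucose_mg_dl))

print(round(homa_ir(5.5, 12.0), 2))   # hypothetical fasting glucose/insulin
print(round(quicki(99.0, 12.0), 3))
```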


Large-scale extrusion processing and characterization of hybrid nylon-6/SiO2 nanocomposites

POLYMERS FOR ADVANCED TECHNOLOGIES, Issue 4 2004
Monserrat García
Abstract Solution impregnation, pultrusion and film stacking are widely used methods for preparing thermoplastic composite materials. Extruders are used to melt the polymer and to incorporate fibers into the polymer in order to modify its physical properties. In this article, the compounding of polyamide-6 (PA-6) filled with colloidal silica nanoparticles is achieved using a twin-screw extruder, which has a significant market share due to its low cost and easy maintenance. The experiments were performed at 250 rpm and the bulk throughput was 6 kg/h with a pump pressure of 30 bar. The composites were characterized with nuclear magnetic resonance (NMR), wide-angle X-ray diffraction (WAXD), differential scanning calorimetry (DSC) and transmission electron microscopy (TEM). As determined by WAXD, the PA-6 showed higher amounts of γ-phase when compared to other synthesis methods such as in situ polymerization. TEM pictures showed that the silica particles aggregated; nevertheless, upon addition of 14% (w/w) silica the E-modulus increased from 2.7 to 3.9 GPa, indicating that an effective mechanical coupling with the polymer was achieved. The behavior, illustrated with dynamic mechanical analysis (DMA) curves, indicated that in general, when a filled system is compared to the unfilled material, the values of the moduli (E′ and E″) increase and tan δ decreases. Determination of the molecular mass distribution of the samples by means of size exclusion chromatography (SEC) coupled to refractive index (RI), viscosity (DV) and light scattering (LS) detectors revealed that the addition of silica did not decrease the average molecular weight of the polymer matrix, which is of importance for composite applications. Copyright © 2004 John Wiley & Sons, Ltd. [source]


A New Method for Constructing Confidence Intervals for the Index Cpm

QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 7 2004
Michael Perakis
Abstract In the statistical literature on the study of process capability through the use of indices, Cpm appears to have been one of the most widely used capability indices, and its estimation has attracted much interest. In this article, a new method for constructing approximate confidence intervals or lower confidence limits for this index is suggested. The method is based on an approximation of the non-central chi-square distribution that was proposed by Pearson. Its coverage appears to be more satisfactory than that achieved by either of the two most widely used methods, which were proposed by Boyles, in situations where one is interested in assessing a lower confidence limit for Cpm. This is supported by the results of an extensive simulation study. Copyright © 2004 John Wiley & Sons, Ltd. [source]
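
For orientation, the sketch below computes the usual point estimator of Cpm together with one widely cited approximate lower confidence limit based on a chi-square approximation (commonly attributed to Boyles). It is not the new method proposed in the article, and the sample data are simulated.

```python
# Sketch of the Cpm point estimator and an approximate lower confidence limit;
# spec limits, target and process data are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(10.1, 0.5, size=50)           # hypothetical process measurements
USL, LSL, T = 12.0, 8.0, 10.0                # spec limits and target value

n, xbar, s = len(x), x.mean(), x.std(ddof=1)
cpm_hat = (USL - LSL) / (6 * np.sqrt(s**2 + (xbar - T)**2))

lam = (xbar - T) / s
nu = n * (1 + lam**2)**2 / (1 + 2 * lam**2)  # effective degrees of freedom
alpha = 0.05
lower = cpm_hat * np.sqrt(stats.chi2.ppf(alpha, nu) / nu)
print(round(cpm_hat, 3), round(lower, 3))    # point estimate, 95% lower limit
```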


A new high-resolution computed tomography (CT) segmentation method for trabecular bone architectural analysis

AMERICAN JOURNAL OF PHYSICAL ANTHROPOLOGY, Issue 1 2009
Heike Scherf
Abstract In the last decade, high-resolution computed tomography (CT) and microcomputed tomography (micro-CT) have been increasingly used in anthropological studies and as a complement to traditional histological techniques. This is due in large part to the ability of CT techniques to nondestructively extract three-dimensional representations of bone structures. Despite prior studies employing CT techniques, no completely reliable method of bone segmentation has been established. Accurate preprocessing of digital data is crucial for measurement accuracy, especially when subtle structures such as trabecular bone are investigated. The research presented here is a new, reproducible, accurate, and fully automated computerized segmentation method for high-resolution CT datasets of fossil and recent cancellous bone: the Ray Casting Algorithm (RCA). We compare this technique with commonly used methods of image thresholding (i.e., the half-maximum height protocol and the automatic, adaptive iterative thresholding procedure). While the quality of the input images is crucial for conventional image segmentation, the RCA method is robust with regard to the signal-to-noise ratio, beam hardening, ring artifacts, and blurriness. Tests with data from extant and fossil material demonstrate the superior quality of RCA compared with conventional thresholding procedures, and emphasize the need for careful consideration of optimal CT scanning parameters. Am J Phys Anthropol 2009. © 2009 Wiley-Liss, Inc. [source]


Estimating chimpanzee population size with nest counts: validating methods in Taï National Park

AMERICAN JOURNAL OF PRIMATOLOGY, Issue 6 2009
Célestin Yao Kouakou
Abstract Successful conservation and management of wild animals require reliable estimates of their population size. Ape surveys almost always rely on counts of sleeping nests, as the animals occur at low densities and visibility is low in tropical forests. The reliability of standing-crop nest counts and marked-nest counts, the most widely used methods, has not been tested on populations of known size. Therefore, the answer to the question of which method is more appropriate for surveying chimpanzee populations remains problematic, and comparisons among sites are difficult. This study aimed to test the validity of these two methods by comparing their estimates to the known population size of three habituated chimpanzee communities in Taï National Park [Boesch et al., Am J Phys Anthropol 130:103–115, 2006; Boesch et al., Am J Primatol 70:519–532, 2008]. In addition to transect surveys, we made observations on nest production rate and nest lifetime. Taï chimpanzees built 1.143 nests per day. The mean lifetime of 141 fresh nests was 91.22 days. Estimate precision for the two methods did not differ considerably (difference in coefficient of variation <5%). The estimate of mean nest decay time was more precise (CV=6.46%) when we used covariates (tree species, rainfall, nest height and age) to model nest decay rate than when we took a simple mean of nest decay times (CV=9.17%). The two survey methods produced point estimates of chimpanzee abundance that were similar and reliable: i.e., for both methods the true chimpanzee abundance was included within the 95% confidence interval of the estimate. We recommend further research on covariate modeling of nest decay times as one way to improve precision and to reduce the costs of conducting nest surveys. Am. J. Primatol. 71:447–457, 2009. © 2009 Wiley-Liss, Inc. [source]
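
The standing-crop logic links the three quantities reported above: chimpanzee density is the nest density divided by the product of the nest production rate and the mean nest lifetime. The nest density in the sketch below is a hypothetical survey value; only the production rate and mean lifetime come from the abstract.

```python
# Sketch of the standard conversion from a standing-crop nest density to a
# chimpanzee density, using the production rate and nest lifetime reported above.
nest_density = 120.0      # nests per km^2 (hypothetical survey estimate)
production_rate = 1.143   # nests built per individual per day (reported)
nest_lifetime = 91.22     # mean nest decay time in days (reported)

chimp_density = nest_density / (production_rate * nest_lifetime)
print(round(chimp_density, 2), "individuals per km^2")
```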


Acrosome reaction: methods for detection and clinical significance

ANDROLOGIA, Issue 6 2000
T. Zeginiadou
The present article reviews the methods for detection of the acrosome reaction and their clinical significance. The best method for detecting the acrosome reaction is electron microscopy, but it is expensive and labour-intensive and therefore cannot be used routinely. The most widely used methods utilize optical microscopy, in which spermatozoa are stained for visualization of their acrosomal status. Different dyes are used for this purpose, as well as fluorescently labelled lectins and antibodies. The acrosome reaction following ionophore challenge (ARIC) can separate spermatozoa that undergo spontaneous acrosome reaction from those in which it is induced, making the result of the inducible acrosome reaction more meaningful. Many different stimuli have been used for the induction of the acrosome reaction, with different results. The ARIC test can provide information on the fertilizing capability of a sample. The ARIC test has also been used to evaluate patients undergoing in vitro fertilization, since a low percentage of induced acrosome reaction was found to be associated with lower rates of fertilization. The cut-off value that could be used to identify infertile patients is under debate. Therapeutic decisions can also be made on the basis of the value of the ARIC test. [source]


Haplotype Misclassification Resulting from Statistical Reconstruction and Genotype Error, and Its Impact on Association Estimates

ANNALS OF HUMAN GENETICS, Issue 5 2010
Claudia Lamina
Summary Haplotypes are an important concept for genetic association studies, but involve uncertainty due to statistical reconstruction from single nucleotide polymorphism (SNP) genotypes and genotype error. We developed a re-sampling approach to quantify haplotype misclassification probabilities and implemented the MC-SIMEX approach to tackle this as a 3 × 3 misclassification problem. Using a previously published approach as a benchmark for comparison, we evaluated the performance of our approach by simulations and exemplified it on real data from 15 SNPs of the APM1 gene. Misclassification due to reconstruction error was small for most, but notable for some, especially rarer haplotypes. Genotype error added misclassification to all haplotypes resulting in a non-negligible drop in sensitivity. In our real data example, the bias of association estimates due to reconstruction error alone reached −48.2% for a 1% genotype error, indicating that haplotype misclassification should not be ignored if high genotype error can be expected. Our 3 × 3 misclassification view of haplotype error adds a novel perspective to currently used methods based on genotype intensities and expected number of haplotype copies. Our findings give a sense of the impact of haplotype error under realistic scenarios and underscore the importance of high-quality genotyping, in which case the bias in haplotype association estimates is negligible. [source]


Comparing Methods of Measurement for Detecting Drug-Induced Changes in the QT Interval: Implications for Thoroughly Conducted ECG Studies

ANNALS OF NONINVASIVE ELECTROCARDIOLOGY, Issue 2 2004
Nkechi E. Azie M.D.
Background: The aim of this study was to compare the reproducibility and sensitivity of four commonly used methods for QT interval assessment when applied to ECG data obtained after infusion of ibutilide. Methods: Four methods were compared: (1) 12-lead simultaneous ECG (12-SIM), (2) lead II ECG (LEAD II), both measured on a digitizing board, (3) 3-LEAD ECG using a manual tangential method, and (4) a computer-based, proprietary algorithm, the 12SL™ ECG Analysis software (AUT). QT intervals were measured in 10 healthy volunteers at multiple time points during 24 hours at baseline and after single intravenous doses of ibutilide 0.25 and 0.5 mg. Changes in QT interval from baseline were calculated and compared across ECG methods using Bland–Altman plots. Variability was studied using a mixed linear model. Results: Baseline QT values differed between methods (range 376–395 ms), mainly based on the number of leads incorporated into the measurement, with LEAD II and 3-LEAD providing the shortest intervals. The 3-LEAD method generated the largest QT change from baseline, whereas LEAD II and 12-SIM generated essentially identical results within narrow limits of agreement (0.4 ms mean difference, 95% confidence interval ± 20.5 ms). Variability with AUT (standard deviation 15.8 ms for within-subject values) was clearly larger than with 3-LEAD, LEAD II, and 12-SIM (9.6, 10.0, and 11.3 ms). Conclusion: This study demonstrated significant differences among four commonly used methods for QT interval measurement after pharmacological prolongation of cardiac repolarization. The observed large differences in measurement variability will have a substantial impact on the sample size required to detect QT prolongation in the range that is currently advised in regulatory guidance. [source]
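
The Bland–Altman comparison referred to above can be sketched as follows; the paired QT-change measurements are simulated, and the limits of agreement are the mean difference ± 1.96 standard deviations of the differences.

```python
# Sketch of a Bland-Altman comparison of QT changes measured by two methods,
# using hypothetical paired measurements in milliseconds.
import numpy as np

rng = np.random.default_rng(2)
qt_method_a = rng.normal(20, 10, size=40)             # change from baseline, ms
qt_method_b = qt_method_a + rng.normal(0.4, 10, 40)   # second method, with noise

diff = qt_method_b - qt_method_a
mean_diff = diff.mean()
loa = 1.96 * diff.std(ddof=1)                         # half-width of limits of agreement
print(f"bias={mean_diff:.1f} ms, limits of agreement "
      f"[{mean_diff - loa:.1f}, {mean_diff + loa:.1f}] ms")
```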


The Complementarity of the Technical Tools of Tissue Engineering and the Concepts of Artificial Organs for the Design of Functional Bioartificial Tissues

ARTIFICIAL ORGANS, Issue 9 2008
Petros Lenas
Abstract: Although tissue engineering uses powerful biological tools, it still has a weak conceptual foundation, which is restricted to the cell level. The design criteria at the cell level are not directly related to tissue functions, and consequently such functions cannot be implemented in bioartificial tissues with the currently used methods. In contrast, the field of artificial organs focuses on the function of artificial organs, which are treated in the design as integral entities, rather than on the optimization of their individual components. The field of artificial organs has already developed and tested methodologies that are based on system concepts and mathematical-computational methods connecting component properties with the desired global organ function. Such methodologies are needed in tissue engineering for the design of bioartificial tissues with tissue functions. Under the framework of biomedical engineering, artificial organs and tissue engineering are therefore not competing but complementary approaches, and should design a common future for the benefit of patients. [source]


Pairing mechanisms for binary stars

ASTRONOMISCHE NACHRICHTEN, Issue 9-10 2008
M.B.N. Kouwenhoven
Abstract Knowledge of the binary population in stellar groupings provides important information about the outcome of the star-forming process in different environments. Binarity is also a key ingredient in stellar population studies and is a prerequisite to calibrate the binary evolution channels. In these proceedings we present an overview of several commonly used methods to pair individual stars into binary systems, which we refer to as the pairing function. Many pairing functions are frequently used by observers and computational astronomers, either for mathematical convenience or because they roughly describe the expected outcome of the star-forming process. We discuss the consequences of each pairing function for the interpretation of observations and numerical simulations. The binary fraction and mass ratio distribution generally depend strongly on the selection of the range in primary spectral type in a sample. These quantities, when derived from a binary survey with a mass-limited sample of target stars, are thus not representative of the population as a whole. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


Notch: Implications of endogenous inhibitors for therapy

BIOESSAYS, Issue 6 2010
Ivan Dikic
Abstract Soluble components of Notch signalling can be applied to manipulate a central pathway essential for the development of metazoans and often deregulated in illnesses such as stroke, cancer or cardiovascular diseases. Commonly, the Notch cascade is inhibited by small compound inhibitors, which either block the proteolysis of Notch receptors by γ-secretases or interfere with the transcriptional activity of the Notch intracellular domain. Specific antibodies can also be used to inhibit ligand-induced activation of Notch receptors. Alternatively, naturally occurring endogenous inhibitors of Notch signalling might offer a specific way to block receptor activation. Examples are the soluble variants of the canonical Notch ligand Jagged1 and the non-canonical Notch ligand Dlk1, both deprived of their transmembrane regions upon ectodomain shedding, or the bona fide secreted molecule EGFL7. We present frequently used methods to decrease Notch signalling, and we discuss how soluble Notch inhibitors may be used to treat diseases. [source]


Purification of a crystallin domain of Yersinia crystallin from inclusion bodies and its comparison to native protein from the soluble fraction

BIOMEDICAL CHROMATOGRAPHY, Issue 9 2006
M. K. Jobby
Abstract It has been established that many heterologously produced proteins in E. coli accumulate as insoluble inclusion bodies. Methods for protein recovery from inclusion bodies involve solubilization using chemical denaturants such as urea and guanidine hydrochloride, followed by removal of the denaturant from the solution to allow the protein to refold. In this work, we applied on-column refolding and purification to the second crystallin domain (D2) of Yersinia crystallin isolated from inclusion bodies. We also purified the protein from the soluble fraction (without using any denaturant) to compare the biophysical properties and conformation, although the yield was poor. The on-column refolding method allows rapid removal of the denaturant and refolding at high protein concentration, which is a limitation of the traditionally used methods of dialysis or dilution. We were also able to develop methods to remove the co-eluting nucleic acids from the protein preparation during chromatography. Using this protocol, we were able to rapidly refold and purify the crystallin domain using a two-step process with high yield. We used biophysical techniques to compare the conformation and calcium-binding properties of the protein isolated from the soluble fraction and from inclusion bodies. Copyright © 2006 John Wiley & Sons, Ltd. [source]


The generation of stable, high MAb expressing CHO cell lines based on the artificial chromosome expression (ACE) technology

BIOTECHNOLOGY & BIOENGINEERING, Issue 3 2009
Malcolm L. Kennard
Abstract The manufacture of recombinant proteins at industrially relevant levels requires technologies that can engineer stable, high expressing cell lines rapidly, reproducibly and with relative ease. Commonly used methods incorporate transfection of mammalian cell lines with plasmid DNA containing the gene of interest. Identifying stable high expressing transfectants is normally laborious and time consuming. To improve this process, the ACE System has been developed based on pre-engineered artificial chromosomes with multiple recombination acceptor sites. This system allows for the targeted transfection of single or multiple genes and eliminates the need for random integration into native host chromosomes. To illustrate the utility of the ACE System in generating stable, high expressing cell lines, CHO based candidate cell lines were generated to express a human monoclonal IgG1 antibody. Candidate cell lines were generated in under 6 months and expressed over 1,g/L and with specific productivities of up to 45,pg/cell/day under non-fed, non-optimized shake flask conditions. These candidate cell lines were shown to have stable expression of the monoclonal antibody for up to 70 days of continuous culture. The results of this study demonstrate that clonal, stable monoclonal antibody expressing CHO based cell lines can be generated by the ACE System rapidly and perform competitively with those cell lines generated by existing technologies. The ACE System, therefore, provides an attractive and practical alternative to conventional methods of cell line generation. Biotechnol. Bioeng. 2009; 104: 540,553 © 2009 Wiley Periodicals, Inc. [source]