One Method (one + method)

Selected Abstracts


Advanced Analysis of Steel Frames Using Parallel Processing and Vectorization

COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 5 2001
C. M. Foley
Advanced methods of analysis have shown promise in providing economical building structures through accurate evaluation of inelastic structural response. One method of advanced analysis is the plastic zone (distributed plasticity) method. Plastic zone analysis has often been deemed impractical because of its computational expense. The purpose of this article is to illustrate applications of plastic zone analysis on large steel frames using advanced computational methods. To this end, a plastic zone analysis algorithm capable of using parallel processing and vector computation is discussed. Applicable measures for evaluating program speedup and efficiency on a Cray Y-MP C90 multiprocessor supercomputer are described. Program performance (speedup and efficiency) for parallel and vector processing is evaluated. Nonlinear response, including postcritical branches, of three large-scale fully restrained and partially restrained steel frameworks is computed using the proposed method. The results of the study indicate that advanced analysis of practical steel frames can be accomplished using plastic zone analysis methods and alternate computational strategies. [source]
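
Speedup and efficiency here are the standard parallel-performance ratios, S_p = T_1/T_p and E_p = S_p/p. A minimal sketch of the computation; the timing numbers are hypothetical, not figures from the study:

```python
def speedup(t_serial: float, t_parallel: float) -> float:
    """Speedup S_p = T_1 / T_p for a run on p processors."""
    return t_serial / t_parallel

def efficiency(t_serial: float, t_parallel: float, p: int) -> float:
    """Parallel efficiency E_p = S_p / p (1.0 = ideal scaling)."""
    return speedup(t_serial, t_parallel) / p

# Hypothetical example: an analysis taking 1200 s serially and 180 s on 8 processors
print(speedup(1200, 180))        # ~6.7
print(efficiency(1200, 180, 8))  # ~0.83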


Use of Knowledge, Skill, and Ability Statements in Developing Licensure and Certification Examinations

EDUCATIONAL MEASUREMENT: ISSUES AND PRACTICE, Issue 1 2005
Ning Wang
The task inventory approach is commonly used in job analysis for establishing content validity evidence supporting the use and interpretation of licensure and certification examinations. Although the results of a task inventory survey provide job task-related information that can be used as a reliable and valid source for test development, it is often the knowledge, skills, and abilities (KSAs) required for performing the tasks, rather than the job tasks themselves, that are tested by licensure and certification exams. This article presents a framework that addresses the important role of KSAs in developing and validating licensure and certification examinations. This includes the use of KSAs in linking job task survey results to the test content outline, transferring job task weights to test specifications, and eventually applying the results to the development of the test items. The impact of using KSAs in the development of test specifications is illustrated with job analyses for two diverse professions. One method for transferring job task weights from the job analysis to test specifications through KSAs is also presented, along with examples. The two examples demonstrated in this article are taken from nursing certification and real estate licensure programs. However, the methodology for using KSAs to link job tasks and test content is also applicable in the development of teacher credentialing examinations. [source]


Computational economy improvements in PRISM

INTERNATIONAL JOURNAL OF CHEMICAL KINETICS, Issue 9 2003
Shaheen R. Tonse
The Piecewise Reusable Implementation of Solution Mapping (PRISM) procedure is applied to reactive flow simulations of (9-species) H2 + air combustion. PRISM takes the solution of the chemical kinetic ordinary differential equation system and parameterizes it with quadratic polynomials. To increase the accuracy, the parameterization is done piecewise, by dividing the multidimensional chemical composition space into hypercubes and constructing polynomials for each hypercube on demand. The polynomial coefficients are stored for subsequent repeated reuse. The initial cost of polynomial construction is high, but it is recouped as the hypercube is reused; hence the computational gain depends on the degree of hypercube reuse. We present two methods that help identify hypercubes that will ultimately have high reuse, before the expense of constructing polynomials has been incurred. One method utilizes the rate of movement of the chemical trajectory to estimate the number of steps the trajectory would make through the hypercube. The other method defers polynomial construction until a preset threshold of reuse has been met; an empirical method which, nevertheless, produces a substantial gain. The methods are tested on a 0-D chemical mixture and 1-D and 2-D reactive flow simulations of selected laminar and turbulent H2 + air flames. The computational performance of PRISM is improved by a factor of about 2 for both methods. © 2003 Wiley Periodicals, Inc., Int J Chem Kinet 35: 438–452, 2003 [source]
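
The deferred-construction idea lends itself to a small caching sketch. The following Python outline of the reuse-threshold strategy is illustrative, not the authors' implementation; the cell indexing, threshold value, and fitting hook are assumptions:

```python
import numpy as np

class PrismCache:
    """Sketch of PRISM-style deferred polynomial construction.

    A hypercube's quadratic surrogate is fitted only after the chemical
    trajectory has visited it `threshold` times; until then the exact
    (expensive) ODE step is used. Names and the fitting routine are
    illustrative placeholders.
    """
    def __init__(self, cell_size: float, threshold: int = 10):
        self.cell_size = cell_size
        self.threshold = threshold
        self.visits = {}   # hypercube index -> visit count
        self.polys = {}    # hypercube index -> fitted surrogate (callable)

    def _cell(self, x):
        # Index of the hypercube containing composition-space point x
        return tuple(np.floor(np.asarray(x) / self.cell_size).astype(int))

    def step(self, x, exact_step, fit_poly):
        cell = self._cell(x)
        if cell in self.polys:
            return self.polys[cell](x)          # cheap polynomial reuse
        self.visits[cell] = self.visits.get(cell, 0) + 1
        if self.visits[cell] >= self.threshold:
            self.polys[cell] = fit_poly(cell)   # pay construction cost once
            return self.polys[cell](x)
        return exact_step(x)                    # not yet worth parameterizing
```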


Introduction of an intensive case management style of delivery for a new mental health service

INTERNATIONAL JOURNAL OF MENTAL HEALTH NURSING, Issue 3 2006
Catherine Hangan
ABSTRACT: Mental health case management emerged in the 1960s in response to the shift in focus from inpatient to community care. Case management per se had been used by other service industries for some time previously, particularly those involved with people with intellectual disability. The term case management describes a range of service approaches and strategies in mental health rather than a single model of care. One method of delivering case management is with an intensive model of care. Intensive case management is differentiated from other forms of case management through factors such as a smaller caseload size, team management, an outreach emphasis, a decreased brokerage role, and an assertive approach to maintaining contact with clients. Research has demonstrated that case management, in particular intensive case management, can improve clients' and families' experience of mental health services, but only when it is introduced for appropriately targeted client populations and is suitably resourced. Determining which model of case management best suits the client population and how to introduce it is a major challenge for any mental health service. With a focus on intensive case management, a review of this process is outlined. [source]


Improved pKa prediction: Combining empirical and semimicroscopic methods

JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 15 2008
Gernot Kieseritzky
Abstract Using three different methods, we tried to compute 171 experimentally known pKa values of ionizable residues from 15 different proteins and compared the accuracies of the computed pKa values in terms of the root mean square deviation (RMSD) from experiment. One method is based on a continuum electrostatic model of the protein including conformational flexibility (KBPLUS). The others are empirical approaches, with PROPKA deploying physically motivated energy terms with adjustable parameters and PKAcal using an empirical function with no physical basis. PROPKA reproduced the pKa values with the highest overall accuracy. Differentiating the data set into weakly and strongly shifted experimental pKa values, however, we found that PROPKA's accuracy is better if the pKa values are weakly shifted but is on an equal footing with that of KBPLUS for more strongly shifted values. On the other hand, PKAcal reproduces strongly shifted pKa values poorly but weakly shifted values with the same accuracy as PROPKA. We tested different consensus approaches combining data from all three methods to find a general procedure for the most accurate pKa predictions. In most cases we found that the consensus approach reproduced experimental data with better accuracy than any of the individual methods alone. © 2008 Wiley Periodicals, Inc. J Comput Chem 2008 [source]
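
A consensus scheme of the general kind tested could be sketched as below. The selection rule and cutoff are hypothetical illustrations of combining an empirical predictor with an electrostatics-based one, not the paper's exact procedure:

```python
import numpy as np

def rmsd(predicted, experimental):
    """Root mean square deviation between predicted and experimental pKa values."""
    p, e = np.asarray(predicted, float), np.asarray(experimental, float)
    return float(np.sqrt(np.mean((p - e) ** 2)))

def consensus_pka(pka_propka, pka_kbplus, model_pka, shift_cutoff=1.0):
    """One plausible consensus rule (illustrative only): trust the empirical
    PROPKA value when its predicted shift from the model-compound pKa is
    small, otherwise average it with the electrostatics-based KBPLUS value."""
    shift = abs(pka_propka - model_pka)
    if shift < shift_cutoff:
        return pka_propka
    return 0.5 * (pka_propka + pka_kbplus)

# Hypothetical example for a single Asp residue (model-compound pKa ~4.0)
print(consensus_pka(pka_propka=6.8, pka_kbplus=7.6, model_pka=4.0))  # 7.2
```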


Using Quality Circles to Enhance Student Involvement and Course Quality in a Large Undergraduate Food Science and Human Nutrition Course

JOURNAL OF FOOD SCIENCE EDUCATION, Issue 1 2005
S.J. Schmidt
ABSTRACT: Large undergraduate classes are a challenge to manage, to engage, and to assess, yet such formidable classes can flourish when student participation is facilitated. One method of generating authentic student involvement is the implementation of quality circles by means of a Student Feedback Committee (SFC), a volunteer problem-solving and decision-making group that communicates student-generated input to the teaching team for the purpose of improving the course content, structure, and environment in the present and redesigning it for the future. Our objective was to implement an SFC in a large introductory Food Science and Human Nutrition (FSHN 101) course to enhance student involvement and course quality. Overall, the SFC provided a continuous and dynamic feedback mechanism for the teaching team, a beneficial experience for the SFC members, and an opportunity for class members to confidentially share their input to enhance the quality of the course throughout the semester. This article includes a brief introduction to the use of quality circles in higher education classrooms, as well as our methods of implementation and assessment after using the SFC for 3 semesters (Spring 2003, Fall 2003, and Spring 2004). [source]


The marketing of industrial real estate: application of Taguchi loss functions

JOURNAL OF MULTI CRITERIA DECISION ANALYSIS, Issue 4 2001
Troy A. Festervand
Abstract The marketing of industrial real estate is a resource-consuming endeavour for all parties involved, consisting of many objectives that, in many cases, may be in conflict with one another. One method of minimizing resource requirements, especially time, while increasing the probability of a successful match is to select properties for presentation that maximize buyer utility. Zionts (1992) indicated that one area for future research in multiple criteria decision-making (MCDM) is the development of 'Eclectic Approaches' using old ideas in a new way to help develop MCDM approaches. In this paper Taguchi loss functions, a technique commonly used in quality control, are proposed as a tool that industrial real estate professionals can use to more efficiently determine the property that most closely matches the buyer's needs. Copyright © 2001 John Wiley & Sons, Ltd. [source]
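
The core of the approach is the nominal-the-best Taguchi loss, L(y) = k(y − m)², summed over the buyer's criteria so that candidate properties can be ranked by total loss. A minimal sketch, with hypothetical criteria, targets, and weights standing in for a real buyer profile:

```python
def taguchi_loss(y, target, k=1.0):
    """Nominal-the-best Taguchi loss: L(y) = k * (y - target)**2."""
    return k * (y - target) ** 2

def total_loss(property_attrs, buyer_targets, weights):
    """Aggregate loss of one property across all buyer criteria.
    Lower total loss = closer match to the buyer's stated needs.
    (Illustrative scoring; the paper's exact weighting may differ.)"""
    return sum(w * taguchi_loss(property_attrs[c], buyer_targets[c])
               for c, w in weights.items())

# Rank candidate properties for presentation, best match first (hypothetical data)
properties = {
    "site_A": {"acreage": 12, "ceiling_ft": 24, "price_psf": 4.1},
    "site_B": {"acreage": 9,  "ceiling_ft": 30, "price_psf": 3.6},
}
targets = {"acreage": 10, "ceiling_ft": 28, "price_psf": 3.5}
weights = {"acreage": 1.0, "ceiling_ft": 0.5, "price_psf": 2.0}
ranked = sorted(properties, key=lambda p: total_loss(properties[p], targets, weights))
print(ranked)  # ['site_B', 'site_A']
```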


Hyperosmotic stress induces Axl activation and cleavage in cerebral endothelial cells

JOURNAL OF NEUROCHEMISTRY, Issue 1 2008
Imola Wilhelm
Abstract Because of the relative impermeability of the blood-brain barrier (BBB), many drugs are unable to reach the CNS in therapeutically relevant concentrations. One method to deliver drugs to the CNS is the osmotic opening of the BBB using mannitol. Hyperosmotic mannitol induces strong phosphorylation on tyrosine residues in a broad spectrum of proteins in cerebral endothelial cells, the principal components of the BBB. Previously, we have shown that among the targets of tyrosine phosphorylation are β-catenin, extracellular signal-regulated kinase 1/2 and the non-receptor tyrosine kinase Src. The aim of this study was to identify new signalling pathways activated by hypertonicity in cerebral endothelial cells. Using an antibody array and immunoprecipitation, we identified the receptor tyrosine kinase Axl as becoming tyrosine phosphorylated in response to hyperosmotic mannitol. Besides activation, Axl was also cleaved in response to osmotic stress. Degradation of Axl proved to be metalloproteinase- and proteasome-dependent and resulted in 50–55 kDa C-terminal products which remained phosphorylated even after degradation. Specific knockdown of Axl increased the rate of apoptosis in hyperosmotic mannitol-treated cells; therefore, we assume that activation of Axl may be a protective mechanism against hypertonicity-induced apoptosis. Our results identify Axl as an important element of osmotic stress-induced signalling. [source]


Mitochondrial transport proteins of the brain

JOURNAL OF NEUROSCIENCE RESEARCH, Issue 15 2007
D.A. Berkich
Abstract In this study, cellular distribution and activity of glutamate and γ-aminobutyric acid (GABA) transport as well as oxoglutarate transport across brain mitochondrial membranes were investigated. A goal was to establish cell-type-specific expression of key transporters and enzymes involved in neurotransmitter metabolism in order to estimate neurotransmitter and metabolite traffic between neurons and astrocytes. Two methods were used to isolate brain mitochondria. One method excludes synaptosomes and the organelles may therefore be enriched in astrocytic mitochondria. The other method isolates mitochondria derived from all regions of the brain. Immunological and enzymatic methods were used to measure enzymes and carriers in the different preparations, in addition to studying transport kinetics. Immunohistochemistry was also employed using brain slices to confirm cell type specificity of enzymes and carriers. The data suggest that the aspartate/glutamate carriers (AGC) are expressed predominantly in neurons, not astrocytes, and that one of two glutamate/hydroxyl carriers is expressed predominantly in astrocytes. The GABA carrier and the oxoglutarate carrier appear to be equally distributed in astrocytes and neurons. As expected, pyruvate carboxylase and branched-chain aminotransferase were predominantly astrocytic. Insofar as the aspartate/glutamate exchange carriers are required for the malate/aspartate shuttle and for reoxidation of cytosolic NADH, the data suggest a compartmentation of glucose metabolism in which astrocytes catalyze glycolytic conversion of glucose to lactate, whereas neurons are capable of oxidizing both lactate and glucose to CO2 + H2O. © 2007 Wiley-Liss, Inc. [source]


Hydrophobic ion pairing of isoniazid using a prodrug approach

JOURNAL OF PHARMACEUTICAL SCIENCES, Issue 6 2002
Huiyu Zhou
Abstract Inhalation therapy for infectious lung diseases, such as tuberculosis, is currently being explored, with microspheres being used to target alveolar macrophages. One method of encapsulating drugs into polymeric microspheres is to form hydrophobic ion-paired (HIP) complexes and then coprecipitate the complex and polymer using supercritical fluid methodology. For the potent antituberculosis drug isoniazid (isonicotinic acid hydrazide, INH) to be used in this fashion, it was modified into an ionizable form suitable for HIP. The charged prodrug, sodium isoniazid methanesulfonate (Na–INHMS), was then ion paired with hydrophobic cations, such as alkyltrimethylammonium or tetraalkylammonium. The logarithms of the apparent partition coefficients (log P′) of various HIP complexes of INHMS display a roughly linear relationship with the number of carbon atoms in the organic counterions. The water solubility of the tetraheptylammonium–INHMS complex is about 220-fold lower than that of Na–INHMS, while the solubility in dichloromethane exceeds 10 mg/mL, which is sufficient for microencapsulation of the drug into poly(lactide) microspheres. The actual logarithm of the dichloromethane/water partition coefficient (log P) for tetraheptylammonium–INHMS is 1.55, compared to a value of −1.8 for the sodium salt of INHMS. The dissolution kinetics of the tetraheptylammonium–INHMS complex in 0.9% aqueous NaCl solution was also investigated. Dissolution of tetraheptylammonium–INHMS exhibited a first-order time constant of about 0.28 min⁻¹, followed by a slower reverse ion exchange process to form Na–INHMS. The half-life of this HIP complex is on the order of 30 min, making enhanced transport of the drug across biological barriers possible. This work represents the first use of a prodrug approach to introduce functionality that would allow HIP complex formation for a neutral molecule. © 2002 Wiley-Liss, Inc. and the American Pharmaceutical Association J Pharm Sci 91:1502–1511, 2002 [source]
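
First-order dissolution of the kind reported (time constant ≈ 0.28 min⁻¹) can be characterized by fitting C(t) = C∞(1 − e^(−kt)) to concentration–time data. A sketch with hypothetical data points, not the paper's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, c_inf, k):
    """First-order dissolution profile: C(t) = C_inf * (1 - exp(-k * t))."""
    return c_inf * (1.0 - np.exp(-k * t))

# Hypothetical dissolution data: time in minutes, dissolved fraction
t = np.array([0.5, 1, 2, 4, 8, 15, 30])
c = np.array([0.13, 0.24, 0.43, 0.67, 0.89, 0.98, 1.0])

(c_inf, k), _ = curve_fit(first_order, t, c, p0=(1.0, 0.3))
print(f"rate constant k = {k:.2f} 1/min, half-life = {np.log(2) / k:.1f} min")
```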


Single-cell gel electrophoresis: a tool for investigation of DNA protection or damage mediated by dietary antioxidants

JOURNAL OF THE SCIENCE OF FOOD AND AGRICULTURE, Issue 13 2007
Yim Tong Szeto
Abstract One method of assessing DNA damage is the comet assay, which was developed in 1988. The comet assay enables the detection of DNA strand breaks in individual cells. This test has also been used to study the in vitro and in vivo genotoxic or genoprotective effects of certain agents such as dietary antioxidants. This paper aims to consolidate findings on the antioxidant and pro-oxidant effects of a series of dietary agents that have been evaluated by the comet assay. Copyright © 2007 Society of Chemical Industry [source]


Effective Borel measurability and reducibility of functions

MLQ- MATHEMATICAL LOGIC QUARTERLY, Issue 1 2005
Vasco Brattka
Abstract The investigation of computational properties of discontinuous functions is an important concern in computable analysis. One method to deal with this subject is to consider effective variants of Borel measurable functions. We introduce such a notion of Borel computability for single-valued as well as for multi-valued functions by a direct effectivization of the classical definition. On Baire space the finite levels of the resulting hierarchy of functions can be characterized using a notion of reducibility for functions and corresponding complete functions. We use this classification and an effective version of a Selection Theorem of Bhattacharya-Srivastava in order to prove a generalization of the Representation Theorem of Kreitz-Weihrauch for Borel measurable functions on computable metric spaces: such functions are Borel measurable on a certain finite level, if and only if they admit a realizer on Baire space of the same quality. This Representation Theorem enables us to introduce a realizer reducibility for functions on metric spaces and we can extend the completeness result to this reducibility. Besides being very useful by itself, this reducibility leads to a new and effective proof of the Banach-Hausdorff-Lebesgue Theorem which connects Borel measurable functions with the Baire functions. Hence, for certain metric spaces the class of Borel computable functions on a certain level is exactly the class of functions which can be expressed as a limit of a pointwise convergent and computable sequence of functions of the next lower level. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


Accuracy of reproducing hand position when using active compared with passive movement

PHYSIOTHERAPY RESEARCH INTERNATIONAL, Issue 2 2001
Yocheved Laufer PT PhD Head of Physical Therapy Department
Abstract Background and Purpose: Evaluating proprioception is relevant to physical rehabilitation because of its significance in motor control. One method of proprioceptive testing involves having subjects either imitate or point at a joint position or movement which was presented via a passive movement. However, as the muscle spindles are subject to central fusimotor control, the proprioceptive system may be better-tuned to movements created by active muscular contraction than to passive movements. The objective of the present study was to determine whether accuracy of reproducing hand position is dependent on whether proprioceptive input is obtained via an active or a passive movement. Method: Thirty-nine healthy volunteers (mean age (±SD) 24.6 (±3.6) years) participated in the study. Subjects' right hands, which were obscured from view, were acoustically guided to five targets on a digitizer tablet with either an active or passive upper extremity movement. Subjects were then asked to reproduce the targets' location by either reaching to them with the unseen hand or by use of a laser beam. Distance from target and angular deviations were calculated in both absolute and relative terms. Repeated measures analysis of variance (ANOVA) was performed for each variable followed by predetermined contrasts. Results: Comparison between the active and passive conditions when reconstruction of target location was guided kinaesthetically indicates significant differences in absolute distance, range and angular deviation. The comparison when reconstruction of target location was guided visually indicates significant differences in absolute distance, absolute angle and angular deviation. Conclusions: The ability to reproduce hand position accurately is enhanced when position is encoded by active upper extremity movement compared with passive movement. The results have implications for the design of strategies for evaluating as well as treating patients with impaired proprioception and limited movement. Copyright © 2001 Whurr Publishers Ltd. [source]
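
The distance and angular error measures described can be computed directly from target and reproduced coordinates on the tablet. A sketch; the reference origin used for the angle is an assumption for illustration:

```python
import numpy as np

def reproduction_errors(target_xy, reproduced_xy, origin=(0.0, 0.0)):
    """Absolute distance error and signed angular deviation (degrees) between
    a target location and its reproduction on the digitizer tablet.
    Angles are measured about `origin` (e.g. the movement start point);
    this geometry is an illustrative assumption, not the study's protocol."""
    t = np.asarray(target_xy, float) - np.asarray(origin, float)
    r = np.asarray(reproduced_xy, float) - np.asarray(origin, float)
    distance_error = float(np.linalg.norm(r - t))
    ang = np.degrees(np.arctan2(r[1], r[0]) - np.arctan2(t[1], t[0]))
    angular_deviation = float((ang + 180) % 360 - 180)  # wrap to [-180, 180)
    return distance_error, angular_deviation

print(reproduction_errors((10.0, 5.0), (11.2, 4.1)))
```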


The Essence of Linkage-based Imprinting Detection: Comparing Power, Type 1 Error, and the Effects of Confounders in Two Different Analysis Approaches

ANNALS OF HUMAN GENETICS, Issue 3 2010
David A. Greenberg
Summary Imprinting is critical to understanding disease expression. It can be detected using linkage information, but the effects of potential confounders (heterogeneity, sex-specific penetrance, and sex-biased ascertainment) have not been explored. We examine power and confounders in two imprinting detection approaches, and we explore imprinting-linkage interaction. One method (PP) models imprinting by maximising lod scores with respect to parent-specific penetrances. The second (DRF) approximates imprinting by maximising lods over differential male-female recombination fractions. We compared power, type 1 error, and confounder effects in these two methods, using computer-simulated data. We varied heterogeneity, penetrance, family and dataset size, and confounders that might mimic imprinting. Without heterogeneity, PP had more imprinting-detecting power than DRF. PP's power increased when parental affectedness status was ignored, but decreased with heterogeneity. With heterogeneity, type 1 error increased dramatically for both methods. However, DRF's power also increased under heterogeneity, more than was attributable to inflated type 1 error. Sex-specific penetrance could increase false positives for PP but not for DRF. False positives did not increase on ascertainment through an affected "mother". For PP, non-penetrant individuals increased information, arguing against using affecteds-only methods. The high type 1 error levels under some circumstances mean that these methods must be used cautiously. [source]


Hyperaccumulation of selenium in hybrid striped bass: a functional food for aquaculture?

AQUACULTURE NUTRITION, Issue 3 2008
P.A. COTTER
Abstract One method of increasing the value of an aquacultured product is to produce fillets that are fortified with minerals that are beneficial to human health, that is, to enhance the functionality of an already healthy product. A good candidate mineral in this regard is selenium (Se), which is of vital importance to normal metabolism in humans. In order to evaluate the dose response and tissue accumulation of supplemental dietary Se, a study was undertaken with hybrid striped bass (HSB). Animals were fed diets supplemented with either organic (0–3.2 mg kg⁻¹ as SelPlex®) or inorganic (0.2 and 0.4 mg kg⁻¹ as sodium selenite) Se for 6 weeks. Because the basal fishmeal-based diets contained 1.22 mg Se kg⁻¹, the doses of Se delivered equated to 1.22–4.42 mg kg⁻¹. At trial end, the greatest weight gain was observed in fish receiving 0.2 mg Se kg⁻¹, irrespective of form (organic/inorganic). Se accumulation in HSB liver and fillet revealed a classical dose-response once a threshold level of 0.2 mg Se kg⁻¹ was surpassed. The greatest tissue accumulation of Se was observed in fish fed the 3.2 mg Se kg⁻¹ level (P < 0.0001). A 100 g portion of Se-enhanced HSB fillet would contain between 33 and 109 µg Se, amounting to a dietary intake of between 25 and 80 µg Se; a level that would satisfy present daily intake recommendations. Comparison of tissue Se levels indicated that the muscle provides a more conspicuous gauge of dietary Se dose-response than does the liver. Dietary treatments of between 0.4 and 1.6 mg organic Se kg⁻¹ reduced (P < 0.024) plasma glutathione peroxidase (GSH-Px) activity. No differences were observed in ceruloplasmin, lysozyme or GSH-Px activities between organic and inorganic Se when delivered at the 0.2 mg Se kg⁻¹ level. Ceruloplasmin, lysozyme and GSH-Px levels were elevated (P ≤ 0.025) in fish fed the diet containing 0.4 mg inorganic Se kg⁻¹. [source]


Hemocompatibility Assessment of Carbonic Anhydrase Modified Hollow Fiber Membranes for Artificial Lungs

ARTIFICIAL ORGANS, Issue 5 2010
Heung-Il Oh
Abstract Hollow fiber membrane (HFM)-based artificial lungs can require a large blood-contacting membrane surface area to provide adequate gas exchange. However, such a large surface area presents significant challenges to hemocompatibility. One method to improve carbon dioxide (CO2) transfer efficiency might be to immobilize carbonic anhydrase (CA) onto the surface of conventional HFMs. By catalyzing the dehydration of bicarbonate in blood, CA has been shown to facilitate diffusion of CO2 toward the fiber membranes. This study evaluated the impact of surface modification of a commercially available microporous HFM-based artificial lung on fiber blood biocompatibility. A commercial poly(propylene) Celgard HFM surface was coated with a siloxane, grafted with amine groups, and then attached with CA. Results following acute ovine blood contact indicated no significant reduction in platelet deposition or activation with the siloxane coating or the siloxane coating with grafted amines relative to base HFMs. However, HFMs with attached CA showed a significant reduction in both platelet deposition and activation compared with all other fiber types. These findings, along with the improved CO2 transfer observed in CA-modified fibers, suggest that incorporating CA into HFM design may enable a smaller, more biocompatible HFM-based artificial lung. [source]


Estimation of Pump Flow Rate and Abnormal Condition of Implantable Rotary Blood Pumps During Long-Term In Vivo Study

ARTIFICIAL ORGANS, Issue 4 2000
K. Nakata
Abstract: The control system for an implantable rotary blood pump is not clearly defined. A detection system is considered necessary for monitoring pump flow and for identifying abnormal conditions such as back flow or a sucking phenomenon, in which the septum or left ventricular wall is sucked into the cannula. The ultrasound flowmeter is durable and reliable, but the control system should not be totally dependent on the flowmeter: if the flowmeter breaks, the rotary blood pump has no control mechanism. Therefore, the authors suggest controlling the pump by intrinsic parameters. One left ventricular assist device (LVAD) calf model was studied, in which the flow rate and waveform of the pump flow proved able to identify the sucking phenomenon. The pump flow rate was calculated from the required power, motor speed, and heart rate. The value of the coefficient of determination (R²) between the measured and estimated pump flow rate was 0.796. To detect the abnormal condition, 2 methods were evaluated. One method used the total pressure head, estimated from the pump flow rate and motor speed. Under normal conditions the total pressure head is 79.5 ± 7.0 mm Hg, whereas in the abnormal condition it is 180.0 ± 2.8 mm Hg. There was a statistical difference (p < 0.01). The other method uses the current waveform. There is an association between the current and pump flow waves. The current was differentiated and squared to calculate the power of the differentiated current. The normal range of this value was 0.025 ± 0.029; in the abnormal condition it was 11.25 ± 15.13. There was a statistical difference (p < 0.01). The flow estimation method and the sucking detection method use intrinsic parameters of the pump and need no sensors. These 2 methods are simple, yet effective and reliable control methods for a rotary blood pump. [source]
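
Both intrinsic-parameter methods reduce to simple signal processing. The sketch below illustrates a linear flow estimate (the regression coefficients are placeholders; only the reported R² comes from the study) and the mean squared derivative of the motor current used as a sucking index; the alarm threshold is an assumption:

```python
import numpy as np

def estimate_flow(power, speed, heart_rate,
                  coef=(0.8, 0.004, -0.01), intercept=0.5):
    """Linear estimate of pump flow from intrinsic parameters.
    The coefficients here are placeholders; the study fitted its own
    model (R^2 = 0.796 against the measured flow)."""
    b1, b2, b3 = coef
    return intercept + b1 * power + b2 * speed + b3 * heart_rate

def sucking_index(current, dt):
    """Power of the differentiated motor current: mean of (dI/dt)^2.
    The study reported ~0.03 in normal and ~11 in abnormal conditions."""
    di = np.gradient(current, dt)
    return float(np.mean(di ** 2))

# Hypothetical current trace sampled at 100 Hz
current = np.sin(np.linspace(0, 20, 2000)) + 0.02 * np.random.randn(2000)
if sucking_index(current, dt=0.01) > 1.0:   # assumed alarm threshold
    print("possible sucking event: reduce pump speed")
```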


Crystallization and preliminary X-ray analysis of the complexes between a Fab and two forms of human insulin-like growth factor II

ACTA CRYSTALLOGRAPHICA SECTION F (ELECTRONIC), Issue 9 2009
Janet Newman
Elevated expression of insulin-like growth factor II (IGF-II) is frequently observed in a variety of human malignancies, including breast, colon and liver cancer. As IGF-II can deliver a mitogenic signal through both the type 1 insulin-like growth factor receptor (IGF-IR) and an alternately spliced form of the insulin receptor (IR-A), neutralizing the biological activity of this growth factor directly is an attractive therapeutic option. One method of doing this would be to find antibodies that bind tightly and specifically to the peptide, which could be used as protein therapeutics to lower the peptide levels in vivo and/or to block the peptide from binding to the IGF-IR or IR-A. To address this, Fabs were selected from a phage-display library using a biotinylated precursor form of the growth factor known as IGF-IIE as a target. Fabs were isolated that were specific for the E-domain C-terminal extension and for mature IGF-II. Four Fabs selected from the library were produced, complexed with IGF-II and set up in crystallization trials. One of the Fab–IGF-II complexes (M64-F02–IGF-II) crystallized readily, yielding crystals that diffracted to 2.2 Å resolution and belonged to space group P212121, with unit-cell parameters a = 50.7, b = 106.9, c = 110.7 Å. There was one molecule of the complete complex in the asymmetric unit. The same Fab was also crystallized with a longer form of the growth factor, IGF-IIE. This complex crystallized in space group P212121, with unit-cell parameters a = 50.7, b = 107, c = 111.5 Å, and also diffracted X-rays to 2.2 Å resolution. [source]


Bioanalysis of pentoxifylline and related metabolites in plasma samples through LC-MS/MS

BIOMEDICAL CHROMATOGRAPHY, Issue 6 2010
Daniela Iuliana Sora
Abstract Analytical aspects related to the assay of pentoxifylline (PTX) and its lisofylline (M1) and carboxypropyl dimethylxanthine (M5) metabolites are discussed through comparison of two alternative analytical methods based on liquid chromatography separation and atmospheric pressure electrospray ionization tandem mass spectrometry detection. One method is based on a 'pure' reversed-phase liquid chromatography mechanism, while the second uses additional polar interactions with embedded amide spacers linking octadecyl moieties to the silica gel surface (C-18 Aqua stationary phase). In both cases, elution is isocratic. Both methods are equally selective and allow separation of unknowns (four species associated with PTX, two species associated with M1) detected through specific mass transitions of the parent compounds, with corresponding structural confirmation. The plasma concentration–time patterns of these compounds follow typical metabolic profiles. It has been advanced that in-vivo formation of conjugates of PTX and M1 is possible, such compounds being cleaved back to the parent ones within the ion source. The first method was associated with a sample preparation procedure based on plasma protein precipitation by strong organic acid addition. The second method used protein precipitation by addition of a water-miscible organic solvent. Both analytical methods were fully validated and used to assess bioequivalence between a prolonged release generic formulation and the reference product, under multidose and single dose approaches. Copyright © 2009 John Wiley & Sons, Ltd. [source]


Statistical approaches in landscape genetics: an evaluation of methods for linking landscape and genetic data

ECOGRAPHY, Issue 5 2009
Niko Balkenhol
The goal of landscape genetics is to detect and explain landscape effects on genetic diversity and structure. Despite the increasing popularity of landscape genetic approaches, the statistical methods for linking genetic and landscape data remain largely untested. This lack of method evaluation makes it difficult to compare studies utilizing different statistics, and compromises the future development and application of the field. To investigate the suitability and comparability of various statistical approaches used in landscape genetics, we simulated data sets corresponding to five landscape-genetic scenarios. We then analyzed these data with eleven methods, and compared the methods based on their statistical power, type-1 error rates, and their overall ability to lead researchers to accurate conclusions about landscape-genetic relationships. Results suggest that some of the most commonly applied techniques (e.g. Mantel and partial Mantel tests) have high type-1 error rates, and that multivariate, non-linear methods are better suited for landscape genetic data analysis. Furthermore, different methods generally show only moderate levels of agreement. Thus, analyzing a data set with only one method could yield method-dependent results, potentially leading to erroneous conclusions. Based on these findings, we give recommendations for choosing optimal combinations of statistical methods, and identify future research needs for landscape genetic data analyses. [source]
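
For concreteness, the Mantel test, one of the commonly applied techniques whose type-1 error behaviour was evaluated, correlates two distance matrices and assesses significance by permuting individuals. A minimal sketch of the test, not the study's simulation code:

```python
import numpy as np

def mantel_test(dist_genetic, dist_landscape, n_perm=999, seed=None):
    """Simple Mantel test: correlation between the upper triangles of two
    square distance matrices, with a p-value from random row/column
    permutations of one matrix."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(dist_genetic, k=1)
    obs = np.corrcoef(dist_genetic[iu], dist_landscape[iu])[0, 1]
    n = dist_genetic.shape[0]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(n)
        perm = dist_genetic[np.ix_(p, p)]           # permute individuals
        r = np.corrcoef(perm[iu], dist_landscape[iu])[0, 1]
        if abs(r) >= abs(obs):
            count += 1
    return obs, (count + 1) / (n_perm + 1)          # correlation, p-value
```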


Proposed Curriculum for an "Observational" International Emergency Medicine Fellowship Program

ACADEMIC EMERGENCY MEDICINE, Issue 4 2000
C. James Holliman MD
Abstract. This article presents information on considerations involved in setting up and conducting fellowship training programs in emergency medicine (EM) for physicians from other countries. The general goal of these programs is to provide physicians from other countries with the knowledge and skills needed to further develop EM in their home countries. The authors report their opinions, based on their extensive cumulative experience, on the necessary and optional structural elements to consider for international EM fellowship programs. Because of U.S. medical licensing restrictions, much of the proposed programs' content would be "observational" rather than involving direct "hands-on" clinical EM training. Because these programs have only recently been initiated in the United States, no scientific evaluation of their structure or efficacy has yet been reported. International EM fellowship programs involving mainly observational EM experience can serve as one method to assist in EM development in other countries. Future studies should assess the impact and efficacy of these programs. [source]


On a multilevel preconditioning module for unstructured mesh Krylov solvers: two-level Schwarz

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 6 2002
R. S. Tuminaro
Abstract Multilevel methods offer the best promise to attain both fast convergence and parallel efficiency in the numerical solution of parabolic and elliptic partial differential equations. Unfortunately, they have not been widely used, in part because of implementation difficulties for unstructured mesh solvers. To facilitate use, a multilevel preconditioner software module, ML, has been constructed. Several methods are provided, requiring relatively modest programming effort on the part of the application developer. This report discusses the implementation of one method in the module: a two-level Krylov–Schwarz preconditioner. To illustrate the use of these methods in computational fluid dynamics (CFD) engineering applications, we present results for 2D and 3D CFD benchmark problems. Copyright © 2002 John Wiley & Sons, Ltd. [source]
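
A two-level additive Schwarz preconditioner combines local solves on overlapping subdomains with a coarse-grid correction. The dense toy sketch below illustrates the idea for a 1-D Laplacian inside conjugate gradients; it is not the ML module's API, and the subdomain partition and coarse space are assumptions:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def two_level_schwarz_prec(A, subdomains, Rc):
    """Two-level additive Schwarz preconditioner (minimal dense sketch):
    M^{-1} r = sum_i R_i^T A_ii^{-1} R_i r + Rc^T (Rc A Rc^T)^{-1} Rc r
    subdomains: list of overlapping index arrays; Rc: coarse restriction."""
    local_inv = [(idx, np.linalg.inv(A[np.ix_(idx, idx)])) for idx in subdomains]
    Ac_inv = np.linalg.inv(Rc @ A @ Rc.T)

    def apply(r):
        z = Rc.T @ (Ac_inv @ (Rc @ r))   # coarse-grid correction
        for idx, Ai in local_inv:        # local subdomain solves
            z[idx] += Ai @ r[idx]
        return z

    return LinearOperator(A.shape, matvec=apply)

# Usage: 1-D Laplacian, two overlapping subdomains, piecewise-constant coarse space
n = 40
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
subs = [np.arange(0, 24), np.arange(16, 40)]
Rc = np.zeros((2, n)); Rc[0, :20] = 1; Rc[1, 20:] = 1
x, info = cg(A, np.ones(n), M=two_level_schwarz_prec(A, subs, Rc))
print(info)  # 0 = converged
```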


Highly accurate solutions of the bifurcation structure of mixed-convection heat transfer using spectral method

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 5 2002
M. Selmi
Abstract This paper is concerned with producing a highly accurate solution and bifurcation structure using the pseudo-spectral method for the two-dimensional pressure-driven flow through a horizontal duct of square cross-section that is heated by a uniform flux in the axial direction with a uniform temperature on the periphery. Two approaches are presented. In one approach, the streamwise vorticity, streamwise momentum and energy equations are solved for the stream function, axial velocity, and temperature. In the second approach, the streamwise vorticity and a combination of the energy and momentum equations are solved for stream function and temperature only. While the second approach solves fewer equations than the first, a grid sensitivity analysis has shown no distinct advantage of one method over the other. The overall solution structure, composed of two symmetric and four asymmetric branches in the range of Grashof number (Gr) of 0–2 × 10⁶ for a Prandtl number (Pr) of 0.73, has been computed using the first approach. The computed structure is comparable to that found by Nandakumar and Weinitschke (1991) using a finite difference scheme for Grashof numbers in the range of 0–1 × 10⁶. The stability properties of some solution branches, however, are different. In particular, the two-cell structure of the isolated symmetric branch that was found to be unstable by the study of Nandakumar and Weinitschke is found to be stable by the current study. Copyright © 2002 John Wiley & Sons, Ltd. [source]


Use of the first rib for adult age estimation: a test of one method

INTERNATIONAL JOURNAL OF OSTEOARCHAEOLOGY, Issue 5 2005
H. Kurki
Abstract The human first rib is relatively easy to identify and is often preserved, in comparison with elements such as the fourth rib and pubic symphysis. Therefore it is potentially a valuable skeletal element for estimating age in forensic and archaeological contexts. A method of adult age estimation using the first rib (Kunos et al., 1999) is tested on a sample of known-age skeletons from the J.C.B. Grant Collection (n = 29, mean age = 55.7 years). The high correlation coefficient (r = 0.69) and moderate coefficient of determination (r² = 0.47) demonstrate agreement between the known and estimated ages, suggesting that the first rib demonstrates morphological changes with age. The inaccuracy and bias are high (all ages: inaccuracy = 10.4 years, bias = 4.7 years) but comparable to several other age estimation methods in common use. Although the results are not as good for younger age categories (<50 years: inaccuracy and bias rank ninth of nine age estimation methods), the inaccuracy and bias for the older age categories are relatively low (60+ years: inaccuracy = 8.9 years, ranks third out of nine; bias = −5.8 years, ranks first out of nine) compared with other age estimation methods. The first rib method is reasonably precise (93% of individuals fall within the limits of agreement of the mean difference between two trials). The first rib method is therefore a useful addition to the methods available for biological profile reconstruction from skeletal remains, especially if it is suspected that the remains represent an older individual. Copyright © 2005 John Wiley & Sons, Ltd. [source]
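
Inaccuracy and bias are the standard error measures in age-estimation tests: the mean absolute difference and the mean signed difference between estimated and known ages. A minimal sketch with hypothetical ages:

```python
import numpy as np

def inaccuracy_and_bias(estimated_ages, known_ages):
    """Standard age-estimation error measures:
    inaccuracy = mean |estimated - known|  (magnitude of error)
    bias       = mean (estimated - known)  (+ = over-aging, - = under-aging)"""
    est = np.asarray(estimated_ages, float)
    known = np.asarray(known_ages, float)
    diff = est - known
    return float(np.mean(np.abs(diff))), float(np.mean(diff))

# Hypothetical illustration on three individuals
print(inaccuracy_and_bias([48, 70, 61], [55, 62, 66]))  # (~6.7, ~-1.3)
```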


Multi-scale occupancy estimation and modelling using multiple detection methods

JOURNAL OF APPLIED ECOLOGY, Issue 5 2008
James D. Nichols
Summary 1. Occupancy estimation and modelling based on detection–nondetection data provide an effective way of exploring change in a species' distribution across time and space in cases where the species is not always detected with certainty. Today, many monitoring programmes target multiple species, or life stages within a species, requiring the use of multiple detection methods. When multiple methods or devices are used at the same sample sites, animals can be detected by more than one method. 2. We develop occupancy models for multiple detection methods that permit simultaneous use of data from all methods for inference about method-specific detection probabilities. Moreover, the approach permits estimation of occupancy at two spatial scales: the larger scale corresponds to species' use of a sample unit, whereas the smaller scale corresponds to presence of the species at the local sample station or site. 3. We apply the models to data collected on two different vertebrate species: striped skunks Mephitis mephitis and red salamanders Pseudotriton ruber. For striped skunks, large-scale occupancy estimates were consistent between two sampling seasons. Small-scale occupancy probabilities were slightly lower in the late winter/spring when skunks tend to conserve energy, and movements are limited to males in search of females for breeding. There was strong evidence of method-specific detection probabilities for skunks. As anticipated, large- and small-scale occupancy areas completely overlapped for red salamanders. The analyses provided weak evidence of method-specific detection probabilities for this species. 4. Synthesis and applications. Increasingly, many studies are utilizing multiple detection methods at sampling locations. The modelling approach presented here makes efficient use of detections from multiple methods to estimate occupancy probabilities at two spatial scales and to compare detection probabilities associated with different detection methods. The models can be viewed as another variation of Pollock's robust design and may be applicable to a wide variety of scenarios where species occur in an area but are not always near the sampled locations. The estimation approach is likely to be especially useful in multispecies conservation programmes by providing efficient estimates using multiple detection devices and by providing device-specific detection probability estimates for use in survey design. [source]
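
The two-scale structure can be made concrete in the likelihood for a single station surveyed once with several methods: the species uses the larger unit with probability ψ, is locally present with probability θ given use, and is detected by method m with probability p_m given local presence. A simplified single-occasion sketch (the full model in the paper handles repeated surveys and covariates):

```python
import numpy as np

def station_likelihood(y, psi, theta, p):
    """Likelihood of one station's detection record under a two-scale,
    multi-method occupancy model (single-occasion sketch):
      psi   - prob. the species uses the larger sample unit
      theta - prob. it is present at the local station, given use
      p[m]  - detection prob. of method m, given local presence
      y[m]  - 1 if method m detected the species, else 0"""
    y, p = np.asarray(y, float), np.asarray(p, float)
    detected_given_present = np.prod(p**y * (1 - p)**(1 - y))
    if y.any():
        # Any detection requires use AND local presence
        return psi * theta * detected_given_present
    # All-zero record: missed despite presence, locally absent, or unit unused
    return psi * (theta * detected_given_present + (1 - theta)) + (1 - psi)

# e.g. detected by camera but not by track plate or hair snare (hypothetical)
print(station_likelihood([1, 0, 0], psi=0.7, theta=0.5, p=[0.4, 0.3, 0.2]))
```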


Myth and reality: practical test system for the measurement of anti-DNA antibodies in the diagnosis of systemic lupus erythematosus (SLE)

JOURNAL OF CLINICAL LABORATORY ANALYSIS, Issue 2 2010
Laura J. McCloskey
Abstract The myth persists that only the labor-intensive Farr radioimmunoassay and Crithidia luciliae immunofluorescence (CL-IFA) are systemic lupus erythematosus (SLE)-specific tests. We compared them to ELISA with bacteriophage λ DNA (EL-dsDNA) and denatured calf thymus DNA (EL-ssDNA). By percentile ranking, the specificity cut-off level was set both out of clinical context (SOCC) on 100 blood bank donors, and in clinical context (SICC) on 100 patients with either rheumatoid arthritis or scleroderma (50/50). Clinical sensitivity was calculated on 100 random SLE patients. At 95% SICC, the sensitivity of Farr, CL-IFA, EL-dsDNA, and EL-ssDNA was similar (95% CI): 76% (66–84), 76% (66–84), 63% (53–72), and 75% (65–83), respectively; 87% of the patients were positive by at least one method and 55% by all methods. At 99% SICC, the sensitivity was also similar (95% CI): 57% (47–67), 47% (37–57), 58% (47–67), and 43% (33–53), respectively. The areas under the ROC curve were similar (95% CI) when patients were used as controls for specificity. At 99% SOCC, EL-ssDNA identified 89% positive, 2 negative but positive by another method at 95% SICC, and 9 negative (i.e. 89/2/9), followed by CL-IFA (80/6/14), Farr (76/12/12), and EL-dsDNA (64/23/13). Thus, at relatively low cost and easy automation, under the same conditions of specificity, the two ELISA tests combined were at least as good as, if not superior to, CL-IFA or Farr: they showed similar clinical sensitivity and also identified more patients with anti-DNA antibodies. J. Clin. Lab. Anal. 24:77–84, 2010. © 2010 Wiley-Liss, Inc. [source]
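
Percentile-ranked cutoffs of this kind are straightforward to reproduce: the cutoff is the 95th or 99th percentile of the control distribution, and sensitivity is the fraction of SLE patients above it. A sketch with simulated, hypothetical titres:

```python
import numpy as np

def cutoff_and_sensitivity(controls, patients, spec=0.95):
    """Set the assay cutoff at the `spec` percentile of control values
    (disease controls for 'in clinical context', blood bank donors for
    'out of clinical context'), then compute sensitivity on the patients."""
    cutoff = float(np.percentile(controls, 100 * spec))
    sens = float(np.mean(np.asarray(patients) > cutoff))
    return cutoff, sens

rng = np.random.default_rng(1)
controls = rng.lognormal(0.0, 0.5, 100)   # hypothetical control titres
patients = rng.lognormal(1.2, 0.8, 100)   # hypothetical SLE titres
print(cutoff_and_sensitivity(controls, patients, 0.95))
print(cutoff_and_sensitivity(controls, patients, 0.99))
```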


Self-injury: A research review for the practitioner

JOURNAL OF CLINICAL PSYCHOLOGY, Issue 11 2007
E. David Klonsky
Non-suicidal self-injury is the intentional destruction of body tissue without suicidal intent and for purposes not socially sanctioned. In this practice-friendly review, the authors summarize the empirical research on who self-injures, why people self-injure, and what treatments have demonstrated effectiveness. Self-injury is more common in adolescents and young adults as compared to adults. Common forms include cutting, severe scratching, burning, and banging or hitting; most individuals who self-injure have used more than one method. Although diagnostically heterogeneous, self-injurers typically exhibit two prominent characteristics: negative emotionality and self-derogation. Self-injury is most often performed to temporarily alleviate intense negative emotions, but may also serve to express self-directed anger or disgust, influence or seek help from others, end periods of dissociation or depersonalization, and help resist suicidal thoughts. Psychotherapies that emphasize emotion regulation, functional assessment, and problem solving appear to be most effective in treating self-injury. © 2007 Wiley Periodicals, Inc. J Clin Psychol: In Session 63: 1045,1056, 2007. [source]


Fast GC for the analysis of fats and oils

JOURNAL OF SEPARATION SCIENCE, JSS, Issue 17 2003
Luigi Mondello
Abstract Fast and conventional GC techniques were both applied to ten different lipidic matrices and the results then compared. The fats and oils were of fish, animal, and vegetable origin and were all simultaneously transesterified with acidic methanol before performing batch analysis of the fatty acid methyl esters (FAMEs) obtained. All FAMEs samples were consecutively analyzed three times by each method. The fast method significantly reduced the time required for analysis by a factor of 5 while maintaining a similar resolution. Furthermore, the reproducibility of the relative quantitative data was assessed when moving from one method to the other. Peak identification was achieved through conventional GC-MS in combination with linear retention index values contained in a home library and information derived from comprehensive 2D GC group patterns. [source]


Comparison of in vitro starch digestibility methods for predicting the glycaemic index of grain foods

JOURNAL OF THE SCIENCE OF FOOD AND AGRICULTURE, Issue 4 2008
Kirsty A Germaine
Abstract BACKGROUND: In vitro starch digestibility tests are useful for the prediction of glycaemic index (GI). However, there are no internationally recognised methods and no one method has been found to be suitable for all food types. This study compared six in vitro methods, using four grain foods, including those with a varied particle size and soluble fibre content. Method variations included using chewing or mincing, mincing with or without amylase, and incubation in a restricted versus non-restricted system. Hydrolysis index (HI) values, calculated from the starch digestibility curves, and GI prediction equations were used to compare the in vitro results to GI. RESULTS: HI values for five of the six methods ranked all foods in the same order as the GI values. Using a GI prediction equation (predicted GIHI), the mincing (without amylase) non-restricted method had the smallest standard error of prediction between the predicted GIHI and GI values. This method was then validated using 14 grain foods and demonstrated a significant correlation (r = 0.93, P < 0.01) between the in vitro starch digestibility and reported GI responses. CONCLUSIONS: The non-restricted mincing method showed good potential as a new in vitro starch digestibility method for predicting GI in grain foods. Copyright © 2007 Society of Chemical Industry [source]
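
The HI-to-GI step is a linear prediction from the ratio of areas under the starch-digestion curves. A sketch with hypothetical digestion data; the regression coefficients are placeholders standing in for the published prediction equations the study compared:

```python
import numpy as np

def hydrolysis_index(t, food, reference):
    """HI = area under the food's starch-digestion curve as a percentage of
    the area under the reference (e.g. white bread) curve, same time points."""
    return 100.0 * np.trapz(food, t) / np.trapz(reference, t)

def predicted_gi(hi, intercept=8.2, slope=0.86):
    """Linear GI prediction equation GI = a + b*HI; the coefficients are
    illustrative placeholders, not the study's fitted values."""
    return intercept + slope * hi

t = np.array([0, 30, 60, 90, 120, 180])      # minutes
food = np.array([0, 22, 41, 55, 63, 70])     # % starch hydrolysed
ref = np.array([0, 35, 60, 74, 80, 85])      # reference food
hi = hydrolysis_index(t, food, ref)
print(f"HI = {hi:.0f}, predicted GI = {predicted_gi(hi):.0f}")
```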


Swedish pre-school children's UVR exposure , a comparison between two outdoor environments

PHOTODERMATOLOGY, PHOTOIMMUNOLOGY & PHOTOMEDICINE, Issue 1 2004
C. Boldeman
Background: Overexposure to ultraviolet radiation (UVR) in childhood is a major risk factor for skin cancer. Shady environments are recommended as one method of protection. Methods: Environmental exposure to UVR and environmental protection were assessed by dosimeter measurements on 64 children aged 1–6 years at two geographically close and topographically similar pre-schools outside Stockholm. The outdoor play constructions of site 1 (34 children) were mainly exposed to the sun, and those of site 2 (30 children) were mainly shaded. Dosimetry was carried out during 11 work days in May–June 2002 under clear weather conditions. The reliability of the dosimeters was tested against meteorologically modelled data from SMHI and against stationary dosimeters exposed to free sky, and compared with other UV instruments. Differences in the children's outdoor stays were adjusted for. Results: The children's average daily exposures were approximately 200 J/m² of erythemally effective (CIE-weighted) UVR. The average relative UVR exposure (% of total available UVR, 08:30–18:30) was 6.4% (7.0% at site 1, 5.7% at site 2). Fractions of available UVR during outdoor stay were 14.4% (both sites), 15.3% (site 1), and 13.3% (site 2). In terms of relative differences, 5–6-year-old children at site 2 were exposed to 41% less UVR, and 1–4-year-old children 6% less, than those at site 1. Conclusion: The difference can be explained by the children's outdoor pre-school environments and the behaviours linked to these environments. It is recommended to consider the attractiveness of shady environments in the design of children's pre-school playgrounds, particularly if these are extremely exposed to the sun. [source]