Second Method (second + method)


Selected Abstracts


Use of a blocking antibody method for the flow cytometric measurement of ZAP-70 in B-CLL

CYTOMETRY, Issue 4 2006
Mark Shenkin
Abstract Background: In this study we developed a method to measure the amount of ZAP-70 [zeta accessory protein] in B-CLL cells without relying on the ZAP-70 expression of patient B or T cells to normalize fluorescence intensity. Methods: B-CLL cells were fixed with formaldehyde before surface staining with gating antibodies CD19PC5 and CD5FITC. The cells were permeabilized with saponin, and the ZAP-70 antigen was blocked in one tube with unlabeled antibody to ZAP-70 [clone 1E7.2]. ZAP-70-PE was then added to this tube. ZAP-70-PE was added to a second tube without unlabeled antibody to ZAP-70. The mean fluorescence intensity of the ZAP-70 in the tube without unlabeled antibody divided by the mean fluorescence intensity of the ZAP-70 in the tube with unlabeled antibody equals the RATIO of total fluorescence to non-specific ZAP-70 fluorescence in the B-CLL cells. In a second method of analysis, a region is created in the histogram showing ZAP-70 fluorescence intensity in the tube with unlabeled antibody to ZAP-70. This region is set to 0.9% positive cells. This same region is then used to measure the % positive [%POS] ZAP-70 cells in the tube without unlabeled antibody to ZAP-70. The brighter the ZAP-70 fluorescence above the non-specific background, the higher the %POS. Results: Due to the varying amount of non-specific staining between patient B-CLL cells and other cells, the blocking antibody method yielded a more quantitative and reproducible measure of ZAP-70 in B-CLL cells than other methods, which use the ratio of B-CLL fluorescence to normal B or T-cell fluorescence. Using this improved method, ZAP-70 was determined to be negative if the RATIO was less than 2:1 and positive if the RATIO was greater than 2:1. ZAP-70 was determined to be negative if the %POS was less than 5% and positive if the %POS was greater than 5%, a cut-off value lower than previously published values, due to the exclusion of non-specific staining. Both cut-offs were based upon patient specimen distribution profiling. Conclusions: Use of a blocking antibody resulted in a robust, reproducible clinical B-CLL assay that is not influenced by the need to measure the amount of ZAP-70 in other cells. ZAP-70 results segregate patients into indolent and aggressive groups suggested by published clinical outcomes. © 2006 International Society for Analytical Cytology [source]
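
The two read-outs described above amount to simple arithmetic on gated fluorescence intensities. A minimal numerical sketch (with made-up log-normal MFI values, not the study's data) is:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative log-normal fluorescence intensities (arbitrary units) for gated
# CD19+CD5+ events in the two tubes described in the abstract.
blocked   = rng.lognormal(mean=2.0, sigma=0.4, size=5000)   # ZAP-70 pre-blocked: non-specific signal only
unblocked = rng.lognormal(mean=2.9, sigma=0.4, size=5000)   # no blocking antibody: total signal

# Method 1: RATIO of total to non-specific mean fluorescence intensity.
ratio = unblocked.mean() / blocked.mean()

# Method 2: place a region capturing 0.9% of events in the blocked tube, then
# score the fraction of unblocked events falling above that threshold (%POS).
threshold = np.quantile(blocked, 1 - 0.009)
pct_pos = 100 * np.mean(unblocked > threshold)

print(f"RATIO = {ratio:.2f} -> {'positive' if ratio > 2 else 'negative'} (2:1 cut-off)")
print(f"%POS  = {pct_pos:.1f}% -> {'positive' if pct_pos > 5 else 'negative'} (5% cut-off)")
```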


Kinematic transformations for planar multi-directional pseudodynamic testing

EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 9 2009
Oya Mercan
Abstract The pseudodynamic (PSD) test method imposes command displacements on a test structure for a given time step. The measured restoring forces and displaced position achieved in the test structure are then used to integrate the equations of motion to determine the command displacements for the next time step. Multi-directional displacements of the test structure can introduce error in the measured restoring forces and displaced position. The subsequently determined command displacements will not be correct unless the effects of the multi-directional displacements are considered. This paper presents two approaches for correcting kinematic errors in planar multi-directional PSD testing, where the test structure is loaded through a rigid loading block. The first approach, referred to as the incremental kinematic transformation method, employs linear displacement transformations within each time step. The second method, referred to as the total kinematic transformation method, is based on accurate nonlinear displacement transformations. Using three displacement sensors and the trigonometric law of cosines, this second method enables the simultaneous nonlinear equations that express the motion of the loading block to be solved without using iteration. The formulation and example applications for each method are given. Results from numerical simulations and laboratory experiments show that the total transformation method maintains accuracy, while the incremental transformation method may accumulate error if the incremental rotation of the loading block is not small over the time step. A procedure for estimating the incremental error in the incremental kinematic transformation method is presented as a means to predict and possibly control the error. Copyright © 2009 John Wiley & Sons, Ltd. [source]
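
The distinction between the two transformations can be illustrated with a generic planar rigid-body rotation; the sketch below (not the paper's loading-block geometry or sensor layout) shows how linearized incremental updates drift while the exact total transformation does not:

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

p0 = np.array([1.0, 0.0])              # a reference point on the rigid loading block (m)
steps, dtheta = 50, np.deg2rad(2.0)    # 2 deg of incremental rotation per time step

# Incremental (linearized) update within each step: p_{k+1} = p_k + dtheta * J @ p_k
J = np.array([[0.0, -1.0], [1.0, 0.0]])   # derivative of the rotation at zero angle
p_inc = p0.copy()
for _ in range(steps):
    p_inc = p_inc + dtheta * J @ p_inc

# Total (exact nonlinear) transformation evaluated at the accumulated rotation
p_tot = rot(steps * dtheta) @ p0

print("incremental:", np.round(p_inc, 4))
print("total      :", np.round(p_tot, 4))
print("accumulated error:", np.linalg.norm(p_inc - p_tot))
```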


Unanticipated impacts of spatial variance of biodiversity on plant productivity

ECOLOGY LETTERS, Issue 8 2005
Lisandro Benedetti-Cecchi
Abstract Experiments on biodiversity have shown that productivity is often a decelerating monotonic function of biodiversity. A property of nonlinear functions, known as Jensen's inequality, predicts negative effects of the variance of predictor variables on the mean of response variables. One implication of this relationship is that an increase in spatial variability of biodiversity can cause dramatic decreases in the mean productivity of the system. Here I quantify these effects by conducting a meta-analysis of experimental data on biodiversity–productivity relationships in grasslands and using the empirically derived estimates of parameters to simulate various scenarios of levels of spatial variance and mean values of biodiversity. Jensen's inequality was estimated independently using Monte Carlo simulations and quadratic approximations. The median values of Jensen's inequality estimated with the first method ranged from 3.2 to 26.7%, whilst values obtained with the second method ranged from 5.0 to 45.0%. Meta-analyses conducted separately for each combination of simulated values of mean and spatial variance of biodiversity indicated that effect sizes were significantly larger than zero in all cases. Because patterns of biodiversity are becoming increasingly variable under intense anthropogenic pressure, the impact of loss of biodiversity on productivity may be larger than current estimates indicate. [source]
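
Both estimation routes can be sketched for a generic decelerating biodiversity–productivity curve; the power-law parameters, richness distribution, and variance level below are assumptions for illustration, not the meta-analysis estimates:

```python
import numpy as np

# Decelerating productivity-biodiversity function (illustrative power law)
a, b = 100.0, 0.26
f = lambda s: a * s**b
d2f = lambda s: a * b * (b - 1) * s**(b - 2)

mu, sd = 8.0, 3.0                       # mean and spatial SD of species richness (assumed)
rng = np.random.default_rng(1)
S = rng.gamma(shape=(mu / sd)**2, scale=sd**2 / mu, size=200_000)   # positive richness field

# Monte Carlo estimate of Jensen's inequality: relative drop in mean productivity
mc = 100 * (f(mu) - f(S).mean()) / f(mu)

# Quadratic (second-order Taylor) approximation: E[f(S)] ~ f(mu) + 0.5 f''(mu) var(S)
quad = 100 * (-0.5 * d2f(mu) * S.var()) / f(mu)

print(f"Jensen effect, Monte Carlo     : {mc:.1f}% of mean productivity")
print(f"Jensen effect, quadratic approx: {quad:.1f}%")
```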


Channel estimation methods for preamble-based OFDM/OQAM modulations

EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 7 2008
C. Lélé
In this paper, OFDM/OQAM is proposed as an alternative to conventional OFDM with cyclic prefix (CP) for transmission over multi-path fading channels. Two typical features of the OFDM/OQAM modulation are the absence of a guard interval (GI) and the fact that the orthogonality property only holds in the real field and for a distortion-free channel. Thus, the classical channel estimation (CE) methods used for OFDM cannot be directly applied to OFDM/OQAM. Therefore, we propose an analysis of the transmission of an OFDM/OQAM signal through a time-varying multi-path channel and we derive two new CE methods. The first proposed method only requires the use of a pair of real pilots (POP). In a second method, named the interference approximation method (IAM), we show how the imaginary interference can be used to improve the CE quality. Several preamble variants of the IAM are compared with respect to the resulting instantaneous power. Finally, the performance results obtained for the transmission of an OFDM/OQAM signal through an IEEE 802.22 channel using the POP method and three variants of IAM are compared to those obtained with CP-OFDM. Copyright © 2008 John Wiley & Sons, Ltd. [source]
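
The core idea behind an IAM-style estimate — building a stronger complex pseudo-pilot from the known real pilot plus its intrinsic imaginary interference — can be sketched for a single subcarrier; the channel value, pilot, and interference weight below are illustrative and no full OFDM/OQAM transmultiplexer is simulated:

```python
import numpy as np

rng = np.random.default_rng(9)

# Single pilot position: in OFDM/OQAM the demodulated pilot sees the real pilot d
# plus a purely imaginary interference term j*u contributed by neighbouring real
# symbols (values below are illustrative).
h = 0.8 * np.exp(1j * 0.6)          # unknown channel coefficient at the pilot position
d = 1.0                              # known real-valued pilot
u = 0.45                             # imaginary interference weight from neighbours
noise = rng.normal(0, 0.02) + 1j * rng.normal(0, 0.02)
y = h * (d + 1j * u) + noise         # received demodulated sample

# A CP-OFDM-style estimate ignores the intrinsic interference ...
h_naive = y / d
# ... whereas an IAM-style estimate uses an approximation of u (here taken as known,
# since the surrounding preamble symbols are known) to form the pseudo-pilot d + j*u.
h_iam = y / (d + 1j * u)

print("true h :", np.round(h, 3))
print("naive  :", np.round(h_naive, 3))
print("IAM    :", np.round(h_iam, 3))
```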


Feature removal and isolation in potential field data

GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 3 2004
F. Boschetti
SUMMARY With the aim of designing signal processing tools that act locally in space upon specific features of a signal, we compare two algorithms to remove or isolate individual anomalies in potential field profiles. The first method, based on multiscale edge analysis, leaves other features in the signal relatively untouched. A second method, based on iterative lateral continuation and subtraction of anomalies, accounts for the influence of adjacent anomalies on one another. This allows a potential field profile to be transformed into a number of single anomaly signals. Each single anomaly can then be individually processed, which considerably simplifies applications such as inversion and signal processing. [source]
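
As a schematic stand-in for either algorithm (not the multiscale-edge or lateral-continuation procedures themselves), a greedy fit-and-subtract loop on a synthetic 1D profile shows how a signal can be split into single-anomaly components:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, x0, width):
    return amp * np.exp(-((x - x0) / width) ** 2)

# Synthetic profile with two overlapping anomalies plus noise (illustrative only)
x = np.linspace(0, 100, 500)
rng = np.random.default_rng(2)
profile = gaussian(x, 5.0, 35, 8) + gaussian(x, 3.0, 60, 12) + rng.normal(0, 0.05, x.size)

# Isolate the dominant anomaly, subtract it, and repeat on the residual. Unlike
# the paper's second method, this naive loop only partially accounts for the
# influence of adjacent anomalies on one another.
residual = profile.copy()
single_anomalies = []
for _ in range(2):
    x0_guess = x[np.argmax(np.abs(residual))]
    popt, _ = curve_fit(gaussian, x, residual, p0=[residual.max(), x0_guess, 10.0])
    single_anomalies.append(gaussian(x, *popt))
    residual = residual - single_anomalies[-1]

print("fitted anomaly amplitudes:", [round(s.max(), 2) for s in single_anomalies])
```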


Extracting Parameters from the Current,Voltage Characteristics of Organic Field-Effect Transistors

ADVANCED FUNCTIONAL MATERIALS, Issue 11 2004
G. Horowitz
Abstract Organic field-effect transistors were fabricated with vapor-deposited pentacene on aluminum oxide insulating layers. Several methods are used in order to extract the mobility and threshold voltage from the transfer characteristic of the devices. In all cases, the mobility is found to depend on the gate voltage. The first method consists of deriving the drain current as a function of gate voltage (transconductance), leading to the so-called field-effect mobility. In the second method, we assume a power-law dependence of the mobility with gate voltage together with a constant contact resistance. The third method is the so-called transfer line method, in which several devices with various channel lengths are used. It is shown that the mobility is significantly enhanced by modifying the aluminum oxide layer with carboxylic acid self-assembled monolayers prior to pentacene deposition. The methods used to extract parameters yield threshold voltages with an absolute value of less than 2 V. It is also shown that there is a shift of the threshold voltage after modification of the aluminum oxide layer. These features seem to confirm the validity of the parameter-extraction methods. [source]
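
The first (transconductance) extraction reduces to differentiating the linear-regime transfer curve; the sketch below uses an assumed geometry, capacitance, and power-law mobility rather than the paper's devices, and shows why a gate-voltage-dependent mobility makes the extracted value differ from the underlying one:

```python
import numpy as np

# Illustrative OFET in the linear regime; geometry, capacitance and the power-law
# mobility are assumptions, not the pentacene devices of the paper.
W, L = 1e-3, 2e-5            # channel width and length (m)
Ci = 3e-4                    # gate-insulator capacitance per area (F/m^2, i.e. 30 nF/cm^2)
Vd = 1.0                     # small drain voltage -> linear regime (magnitudes only)
Vt = 2.0                     # threshold voltage (V)

Vg = np.linspace(3, 20, 200)
mu_true = 0.2e-4 * (Vg - Vt) ** 0.3            # gate-voltage-dependent mobility (m^2/Vs)
Id = (W / L) * mu_true * Ci * (Vg - Vt) * Vd   # linear-regime drain current (A)

# First extraction method: field-effect mobility from the transconductance gm = dId/dVg
gm = np.gradient(Id, Vg)
mu_FE = gm * L / (W * Ci * Vd)

# Because mu depends on Vg, the transconductance picks up an extra d(mu)/dVg term,
# so mu_FE overestimates the underlying mobility here.
print(f"mu_FE at Vg = 20 V  : {mu_FE[-1] * 1e4:.2f} cm^2/Vs")
print(f"mu_true at Vg = 20 V: {mu_true[-1] * 1e4:.2f} cm^2/Vs")
```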


Outlying Charge, Stability, Efficiency, and Algorithmic Enhancements in the Quantum-Mechanical Solvation Method, COSab-GAMESS

HELVETICA CHIMICA ACTA, Issue 12 2003
Laura
In this work, we present algorithmic modifications and extensions to our quantum-mechanical approach for the inclusion of solvent effects by means of molecule-shaped cavities. The theory of conductor-like screening, modified and extended for quantum-mechanical techniques, serves as the basis for our solvation methodology. The modified method is referred to as COSab-GAMESS and is available within the GAMESS package. Our previous work has emphasized the implementation of this model by way of a distributed multipole approach for handling the effects of outlying charge. The method has been enabled within the framework of open- and closed-shell RHF and MP2. In the present work, we present a) a second method to handle outlying charge effects, b) algorithmic extensions to open- and closed-shell density-functional theory, second-derivative analysis, and reaction-path following, and c) enhancements to improve performance, convergence, and predictability. The method is now suitable for large molecular systems. New features of the enhanced continuum model are highlighted by means of a set of neutral and charged species. Computations on a series of structures with roughly the same molecular shape and volume provide an evaluation of cavitation effects. [source]


On computing the forces from the noisy displacement data of an elastic body

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 11 2008
A. Narayana Reddy
Abstract This study is concerned with the accurate computation of the unknown forces applied on the boundary of an elastic body using its measured displacement data with noise. Vision-based minimally intrusive force-sensing using elastically deformable grasping tools is the motivation for undertaking this problem. Since this problem involves incomplete and inconsistent displacement/force of an elastic body, it leads to an ill-posed problem known as Cauchy's problem in elasticity. Vision-based displacement measurement necessitates large displacements of the elastic body for reasonable accuracy. Therefore, we use geometrically non-linear modelling of the elastic body, which was not considered by others who attempted to solve Cauchy's elasticity problem before. We present two methods to solve the problem. The first method uses the pseudo-inverse of an over-constrained system of equations. This method is shown to be not effective when the noise in the measured displacement data is high. We attribute this to the appearance of spurious forces at regions where there should not be any forces. The second method focuses on minimizing the spurious forces by varying the measured displacements within the known accuracy of the measurement technique. Both continuum and frame elements are used in the finite element modelling of the elastic bodies considered in the numerical examples. The performance of the two methods is compared using seven numerical examples, all of which show that the second method estimates the forces with an error that is not more than the noise in the measured displacements. An experiment was also conducted to demonstrate the effectiveness of the second method in accurately estimating the applied forces. Copyright © 2008 John Wiley & Sons, Ltd. [source]
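
The first method's sensitivity to noise can be seen in a toy linear problem (a fixed–free spring chain standing in for the geometrically non-linear finite-element models used in the paper): recovering boundary forces by pseudo-inverting an over-constrained system amplifies measurement noise into spurious forces at force-free locations.

```python
import numpy as np

rng = np.random.default_rng(3)

# Fixed-free chain of n unit springs: a stand-in linear elastic model.
n = 20
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
K[-1, -1] = 1.0
C = np.linalg.inv(K)                    # compliance: u = C @ f

# True load: a single force at the free end; candidate force DOFs: last 3 nodes.
f_true = np.zeros(n); f_true[-1] = 1.0
candidates = [n - 3, n - 2, n - 1]
u = C @ f_true

for noise in (0.0, 0.01, 0.05):
    u_meas = u + rng.normal(0, noise * np.abs(u).max(), n)
    # First method of the abstract: pseudo-inverse of the over-constrained system
    f_est = np.linalg.pinv(C[:, candidates]) @ u_meas
    print(f"noise {noise:>4}:", np.round(f_est, 3), " (true: [0, 0, 1])")
```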


2D nearly orthogonal mesh generation

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 7 2004
Yaoxin Zhang
Abstract The Ryskin and Leal (RL) system is the most widely used mesh generation system for the orthogonal mapping. However, when this system is used in domains with complex geometry, particularly in those with sharp corners and strong curvatures, serious distortion or overlapping of mesh lines may occur and an acceptable solution may not be possible. In the present study, two methods are proposed to generate nearly orthogonal meshes with smoothness control. In the first method, the original RL system is modified by introducing smoothness control functions, which are formulated through the blending of the conformal mapping and the orthogonal mapping; while in the second method, the RL system is modified by introducing contribution factors. A hybrid system of both methods is also developed. The proposed methods are illustrated by several test examples. Applications of these methods in a natural river channel are demonstrated. It is shown that the modified RL systems are capable of producing meshes with an adequate balance between the orthogonality and the smoothness for complex computational domains without mesh distortions and overlapping. Copyright © 2004 John Wiley & Sons, Ltd. [source]


Dynamic pricing based on asymmetric multiagent reinforcement learning

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 1 2006
Ville Könönen
A dynamic pricing problem is solved by using asymmetric multiagent reinforcement learning in this article. In the problem, there are two competing brokers that sell identical products to customers and compete on the basis of price. We model this dynamic pricing problem as a Markov game and solve it by using two different learning methods. The first method utilizes modified gradient descent in the parameter space of the value function approximator and the second method uses a direct gradient of the parameterized policy function. We present a brief literature survey of pricing models based on multiagent reinforcement learning, introduce the basic concepts of Markov games, and solve the problem by using proposed methods. © 2006 Wiley Periodicals, Inc. Int J Int Syst 21: 73–98, 2006. [source]
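
For readers unfamiliar with the setting, a deliberately simplified two-broker pricing game is sketched below using plain independent, stateless Q-learning; this is a generic stand-in for the topic, not the asymmetric value-gradient and policy-gradient methods developed in the article:

```python
import numpy as np

rng = np.random.default_rng(10)

# Two brokers repeatedly pick a price level; the cheaper broker captures most of
# the demand. All numbers (prices, demand split, learning rates) are illustrative.
prices = np.array([1.0, 1.5, 2.0, 2.5])
n_actions = len(prices)

def profits(i, j):
    """Illustrative demand split: the cheaper broker sells 80 units, the other 20."""
    if prices[i] < prices[j]:
        return 80 * prices[i], 20 * prices[j]
    if prices[i] > prices[j]:
        return 20 * prices[i], 80 * prices[j]
    return 50 * prices[i], 50 * prices[j]

Q = np.zeros((2, n_actions))
eps, lr = 0.1, 0.05
for _ in range(20_000):
    acts = [rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[k]))
            for k in range(2)]
    rewards = profits(*acts)
    for k in range(2):
        Q[k, acts[k]] += lr * (rewards[k] - Q[k, acts[k]])

print("learned prices:", prices[int(np.argmax(Q[0]))], prices[int(np.argmax(Q[1]))])
```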


Fluoride pit and fissure sealants: a review

INTERNATIONAL JOURNAL OF PAEDIATRIC DENTISTRY, Issue 2 2000
Tonia L. Morphis
There are two methods of fluoride incorporation into fissure sealants. In the first method, fluoride is added to the unpolymerized resin in the form of a soluble fluoride salt that releases fluoride ions by dissolution, following sealant application. In the second method, an organic fluoride compound is chemically bound to the resin and the fluoride is released by exchange with other ions (anion exchange system). This report reviews the literature on the effectiveness of all the fluoride-releasing sealants – commercial and experimental – that have been prepared using either the former or the latter method of fluoride incorporation. There is evidence for retention rates equal to those of conventional sealants, and for ex vivo fluoride release and reduced enamel demineralization. However, further research is necessary to ensure the clinical longevity of fluoride sealant retention and to establish the objective of greater caries inhibition through the fluoride released in saliva and enamel. [source]


Comment on the connected-moments polynomial approach

INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, Issue 7 2008
M. G. Marmorino
Abstract Bartashevich has recently proposed two new methods for approximating eigenvalues of a Hamiltonian. The first method uses Hamiltonian moments generated from a trial function and his second method is a generalization of local energy methods. We show that the first method is equivalent to a variational one, a matrix eigenvalue problem using a Lanczos subspace. © 2008 Wiley Periodicals, Inc. Int J Quantum Chem, 2008 [source]
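
The asserted equivalence can be made concrete for a toy Hermitian matrix: the moments ⟨v|H^k|v⟩ of a trial vector are exactly the matrix elements needed for a variational generalized eigenvalue problem in the Krylov–Lanczos subspace spanned by {v, Hv, H²v, ...}. The sketch below uses a random matrix, not Bartashevich's systems:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(4)

# Toy Hermitian "Hamiltonian" and a trial vector (illustrative only)
N = 12
A = rng.normal(size=(N, N))
H = (A + A.T) / 2
v = rng.normal(size=N); v /= np.linalg.norm(v)

# Hamiltonian moments mu_k = <v|H^k|v>
kmax = 3                                   # Krylov subspace {v, Hv, ..., H^kmax v}
mu = [v @ np.linalg.matrix_power(H, k) @ v for k in range(2 * kmax + 2)]

# In the (non-orthogonal) Krylov basis, the overlap and Hamiltonian matrices are
# built entirely from the moments: S_ij = mu_{i+j}, Hm_ij = mu_{i+j+1}
S  = np.array([[mu[i + j]     for j in range(kmax + 1)] for i in range(kmax + 1)])
Hm = np.array([[mu[i + j + 1] for j in range(kmax + 1)] for i in range(kmax + 1)])

E0_subspace = eigh(Hm, S, eigvals_only=True)[0]   # variational estimate from moments
E0_exact = np.linalg.eigvalsh(H)[0]
print(f"moment/Lanczos-subspace estimate: {E0_subspace:.4f}   exact ground state: {E0_exact:.4f}")
```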


Membranes of cellulose triacetate produced from sugarcane bagasse cellulose as alternative matrices for doxycycline incorporation

JOURNAL OF APPLIED POLYMER SCIENCE, Issue 6 2009
Guimes Rodrigues Filho
Abstract Cellulose triacetate (CTA) membranes were prepared using polyethylene glycol, 600 g mol−1, (PEG) as an additive and were utilized in assays of doxycycline (DOX) incorporation using two different procedures: (i) incorporation of the drug during the membrane preparation and (ii) incorporation of the drug into a previously prepared membrane. In the first method, the produced membrane presented high compatibility between DOX and CTA, which was evidenced by analysis of the DSC curve for a CTA/PEG 50%/DOX system. Results showed that the drug is molecularly and homogeneously distributed throughout the matrix. In the second method, the drug was molecularly and superficially adsorbed, as seen through the DSC curve for the system CTA/PEG 10%/DOX, which shows almost no alteration relative to the original material, and through the isotherm of drug adsorption, which follows the Langmuir model. Results showed that the membranes produced from sugarcane bagasse are adequate to produce matrices for drug-controlled release, both for enteric use (Method (i)) and topical use (Method (ii)). © 2009 Wiley Periodicals, Inc. J Appl Polym Sci, 2009 [source]


Quantitative analysis of total mitochondrial DNA: Competitive polymerase chain reaction versus real-time polymerase chain reaction

JOURNAL OF BIOCHEMICAL AND MOLECULAR TOXICOLOGY, Issue 4 2004
Hari K. Bhat
Abstract An efficient and effective method for quantification of small amounts of nucleic acids contained within a sample specimen would be an important diagnostic tool for determining the content of mitochondrial DNA (mtDNA) in situations where the depletion thereof may be a contributing factor to the exhibited pathology phenotype. This study compares two quantification assays for calculating the total mtDNA molecule number per nanogram of total genomic DNA isolated from human blood, through the amplification of a 613-bp region on the mtDNA molecule. In one case, the mtDNA copy number was calculated by standard competitive polymerase chain reaction (PCR) technique that involves co-amplification of target DNA with various dilutions of a nonhomologous internal competitor that has the same primer binding sites as the target sequence, and subsequent determination of an equivalence point of target and competitor concentrations. In the second method, the calculation of copy number involved extrapolation from the fluorescence versus copy number standard curve generated by real-time PCR using various dilutions of the target amplicon sequence. While the mtDNA copy number was comparable using the two methods (4.92 ± 1.01 × 10⁴ molecules/ng total genomic DNA using competitive PCR vs 4.90 ± 0.84 × 10⁴ molecules/ng total genomic DNA using real-time PCR), both inter- and intraexperimental variance were significantly lower using the real-time PCR analysis. On the basis of reproducibility, assay complexity, and overall efficiency, including the time requirement and number of PCR reactions necessary for the analysis of a single sample, we recommend the real-time PCR quantification method described here, as its versatility and effectiveness will undoubtedly be of great use in various kinds of research related to mitochondrial DNA damage- and depletion-associated disorders. © 2004 Wiley Periodicals, Inc. J Biochem Mol Toxicol 18:180–186, 2004 Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/jbt.20024 [source]
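
The second (real-time PCR) calculation is a standard-curve extrapolation; a minimal sketch with made-up Ct values for serial dilutions of the target amplicon (not the study's measurements) is:

```python
import numpy as np

# Illustrative real-time PCR standard curve: Ct values for serial dilutions of
# the target amplicon (numbers are invented for this sketch).
copies = np.array([1e3, 1e4, 1e5, 1e6, 1e7])
ct     = np.array([30.1, 26.8, 23.4, 20.0, 16.7])

# Fit Ct = slope * log10(copies) + intercept
slope, intercept = np.polyfit(np.log10(copies), ct, 1)
efficiency = 10 ** (-1 / slope) - 1          # ideal amplification gives slope ~ -3.32

# Extrapolate the copy number of an unknown sample from its Ct, then normalize
# per nanogram of input genomic DNA.
ct_unknown, ng_input = 21.5, 2.0
copies_unknown = 10 ** ((ct_unknown - intercept) / slope)
print(f"slope {slope:.2f}, efficiency {100 * efficiency:.0f}%")
print(f"estimated mtDNA: {copies_unknown / ng_input:.2e} copies per ng genomic DNA")
```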


The parameterization and validation of generalized Born models using the pairwise descreening approximation

JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 14 2004
Julien Michel
Abstract Generalized Born Surface Area (GBSA) models for water using the Pairwise Descreening Approximation (PDA) have been parameterized by two different methods. The first method, similar to that used in previously reported parameterizations, optimizes all parameters against the experimental free energies of hydration of organic molecules. The second method optimizes the PDA parameters to compensate only for systematic errors of the PDA. The best models are compared to Poisson–Boltzmann calculations and applied to the computation of potentials of mean force (PMFs) for the association of various molecules. PMFs present a more rigorous test of the ability of a solvation model to correctly reproduce the screening of intermolecular interactions by the solvent, than its accuracy at predicting free energies of hydration of small molecules. Models derived with the first method are sometimes shown to fail to compute accurate potentials of mean force because of large errors in the computation of Born radii, while no such difficulties are observed with the second method. Furthermore, accurate computation of the Born radii appears to be more important than good agreement with experimental free energies of solvation. We discuss the source of errors in the potentials of mean force and suggest means to reduce them. Our findings suggest that Generalized Born models that use the Pairwise Descreening Approximation and that are derived solely by unconstrained optimization of parameters against free energies of hydration should be applied to the modeling of intermolecular interactions with caution. © 2004 Wiley Periodicals, Inc. J Comput Chem 25: 1760–1770, 2004 [source]
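
To make the role of the Born radii concrete, the sketch below evaluates the standard Generalized Born polarization energy with Still's effective interaction distance; the charges, coordinates, and Born radii are toy values supplied directly rather than computed with the pairwise descreening approximation that the paper parameterizes:

```python
import numpy as np

# Minimal Generalized Born polarization energy with Still's f_GB (toy inputs).
coords = np.array([[0.0, 0.0, 0.0],
                   [1.2, 0.0, 0.0],
                   [0.0, 1.4, 0.0]])        # Angstrom
q = np.array([-0.5, 0.3, 0.2])              # partial charges (e)
R = np.array([1.5, 1.2, 1.3])               # effective Born radii (Angstrom)

eps_w = 78.5
prefac = -0.5 * (1.0 - 1.0 / eps_w) * 332.06   # kcal/mol with Angstrom and e units

E_pol = 0.0
n = len(q)
for i in range(n):
    for j in range(n):
        r2 = np.sum((coords[i] - coords[j]) ** 2)
        f_gb = np.sqrt(r2 + R[i] * R[j] * np.exp(-r2 / (4.0 * R[i] * R[j])))
        E_pol += prefac * q[i] * q[j] / f_gb    # i == j terms give the Born self-energies

print(f"GB polarization energy: {E_pol:.2f} kcal/mol")
```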


Related-variables selection in temporal disaggregation

JOURNAL OF FORECASTING, Issue 4 2009
Kosei Fukuda
Abstract Two related-variables selection methods for temporal disaggregation are proposed. In the first method, the hypothesis tests for a common feature (cointegration or serial correlation) are first performed. If there is a common feature between observed aggregated series and related variables, the conventional Chow–Lin procedure is applied. In the second method, alternative Chow–Lin disaggregating models with and without related variables are first estimated and the corresponding values of the Bayesian information criterion (BIC) are stored. It is determined on the basis of the selected model whether related variables should be included in the Chow–Lin model. The efficacy of these methods is examined via simulations and empirical applications. Copyright © 2008 John Wiley & Sons, Ltd. [source]
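
The decision rule in the second method can be sketched in a stripped-down form — comparing BIC for aggregated-frequency regressions with and without the related indicator. This simplification uses ordinary least squares on simulated data and omits the AR(1)-disturbance GLS machinery of a full Chow–Lin estimation:

```python
import numpy as np

def bic(y, X):
    """Gaussian BIC of an OLS regression of y on X (columns include intercept)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, k = X.shape
    sigma2 = resid @ resid / n
    return n * np.log(sigma2) + k * np.log(n)

rng = np.random.default_rng(5)
n = 60                                               # aggregated (e.g. annual) observations
x = np.cumsum(rng.normal(size=n))                    # annualized related indicator
y = 2.0 + 0.8 * x + rng.normal(scale=0.5, size=n)    # observed aggregated series

X0 = np.column_stack([np.ones(n)])                   # disaggregating model without related variable
X1 = np.column_stack([np.ones(n), x])                # disaggregating model with related variable

use_related = bic(y, X1) < bic(y, X0)
print(f"BIC without: {bic(y, X0):.1f}, with: {bic(y, X1):.1f} -> "
      f"{'include' if use_related else 'exclude'} the related variable")
```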


Parameters of drug antagonism: re-examination of two modes of functional competitive drug antagonism on intraocular muscles

JOURNAL OF PHARMACY AND PHARMACOLOGY: AN INTERNATIONAL JOURNAL OF PHARMACEUTICAL SCIENCE, Issue 8 2004
Popat N. Patil
There are two distinct kinetic functional pharmacological procedures by which the equilibrium affinity constant, KB, of a competitive reversible blocker is obtained. The classical method on an organ system requires the study of the parallel displacement of the agonist concentration-response curve in the presence of the blocker. In the second method, the agonist-evoked functional mechanical response is reduced to half by the blocker IC50 (the concentration required for 50% inhibition). In relation to these parameters, the roles of the ionization constant pKa and the liposolubility (log Pc or log D) of the blockers were examined. On the ciliary muscle from human eye, IC50/KB ratios for (±)-atropine, its quaternary analogue (±)-methylatropine, (-)-scopolamine, (±)-cyclopentolate, (-)-tropicamide, (±)-oxybutynin and pirenzepine were 15, 23, 4.4, 2.6, 1.66, 1.46 and 1.71, respectively. The ratios on the iris sphincter were comparable with those of ciliary muscle. Compared with (±)-atropine and (±)-methylatropine, which are largely ionized and water soluble, the relatively high un-ionized fractions and/or greater partitioning of all the other blockers into the lipoid barrier correlated well with low IC50/KB ratios, as predicted by the classical theory of competitive drug antagonism. It was hypothesized that, owing to limited receptor biophase access, the reduction of the mechanical response of the agonist by a highly ionized water-soluble antagonist at IC50 represented a time-distorted "pseudo-equilibrium" estimate, for which a higher concentration of the blocker was needed. On other cholinergic effectors, such as the rat anococcygeus muscle or frog rectus abdominis muscle, IC50/KB ratios of the respective blockers atropine or (+)-tubocurarine and hexamethonium were close to 1. Thus the physicochemical properties that determine the distribution coefficient log D, together with the tissue morphology (where asymmetric distribution of receptors may occur), appeared to be critical factors in the analysis of the affinity parameters of a competitive reversible blocker. On the intraocular muscles, the two functional pharmacological procedures for obtaining KB and IC50 values were not kinetically equivalent. [source]
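
As a worked illustration of the two parameters (with invented concentrations, not the paper's measurements), the classical estimate follows from the rightward shift of the agonist curve via the Schild relation, and the IC50/KB ratio can then be formed:

```python
import numpy as np

# Classical method: a blocker concentration [B] shifts the agonist curve rightward
# by a dose ratio DR, and the Schild relation gives KB = [B] / (DR - 1).
B = 10e-9                    # blocker concentration (M)
ec50_control = 0.20e-6       # agonist EC50 without blocker (M)
ec50_blocked = 2.1e-6        # agonist EC50 in the presence of the blocker (M)

DR = ec50_blocked / ec50_control
KB = B / (DR - 1)

# Second method: the IC50 that halves a fixed agonist response. With limited
# biophase access the IC50 can greatly exceed KB, as in the ratios quoted above.
IC50 = 20e-9                 # illustrative IC50 (M)
print(f"DR = {DR:.1f}, KB = {KB * 1e9:.2f} nM, pA2 = {-np.log10(KB):.2f}, IC50/KB = {IC50 / KB:.1f}")
```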


Small confidence sets for the mean of a spherically symmetric distribution

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 3 2005
Richard Samworth
Summary. Suppose that X has a k-variate spherically symmetric distribution with mean vector θ and identity covariance matrix. We present two spherical confidence sets for θ, both centred at a positive-part Stein estimator of θ. In the first, we obtain the radius by approximating the upper α-point of the sampling distribution of the distance between this estimator and θ by the first two non-zero terms of its Taylor series about the origin. We can analyse some of the properties of this confidence set and see that it performs well in terms of coverage probability, volume and conditional behaviour. In the second method, we find the radius by using a parametric bootstrap procedure. Here, even greater improvement in terms of volume over the usual confidence set is possible, at the expense of having a less explicit radius function. A real data example is provided, and extensions to the unknown covariance matrix and elliptically symmetric cases are discussed. [source]
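
A bare-bones version of the second (bootstrap) construction is sketched below: the set is a sphere centred at the positive-part Stein estimate, with radius taken as an upper quantile of resampled distances. The dimension, data, and resampling scheme are illustrative, and the paper's actual calibration may differ in detail:

```python
import numpy as np
from scipy.stats import chi2

def stein_plus(x):
    """Positive-part Stein estimator for X ~ N_k(theta, I_k), k >= 3."""
    k = x.size
    return max(0.0, 1.0 - (k - 2) / np.sum(x**2)) * x

rng = np.random.default_rng(6)
k, alpha, B = 8, 0.05, 20_000

x = rng.normal(loc=0.5, scale=1.0, size=k)       # one observed k-variate vector
center = stein_plus(x)

# Parametric bootstrap: resample X* ~ N_k(center, I), re-estimate, and take the
# upper-alpha quantile of the distance to the centre as the sphere's radius.
xstar = rng.normal(loc=center, scale=1.0, size=(B, k))
dist = np.linalg.norm(np.apply_along_axis(stein_plus, 1, xstar) - center, axis=1)
radius = np.quantile(dist, 1 - alpha)

usual = np.sqrt(chi2.ppf(1 - alpha, k))          # radius of the usual sphere centred at X
print(f"bootstrap radius: {radius:.3f}   usual chi-square radius: {usual:.3f}")
```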


Structural elucidation by 1D and 2D NMR of three isomers of a carotenoid lysophosphocholine and its synthetic precursors

MAGNETIC RESONANCE IN CHEMISTRY, Issue 4 2004
Bente Jeanette Foss
Abstract A carotenoic acid was used to obtain a long-chain unsaturated lysophosphocholine. The carotenoid lysophosphocholine was synthesized by two methods. The first method resulted in mixtures of regioisomers for each step in the synthetic route. Homo- and heteronuclear 1D and 2D NMR methods were employed to elucidate the structures of the individual isomers and their intermediates. The pure regioisomer [1-(β-apo-8′-carotenoyl)-2-lyso-glycero-3-phosphocholine] was obtained by a second method, but in low yield. The 1D 1H NMR subtraction spectrum of the mixture and the pure regioisomer was used to interpret the 1H shifts of the unsaturated acyl moieties. The 1H and 13C signals of the acyl chain show characteristic shifts depending on the positions of the choline and the acyl group attached to the glycerol backbone. Therefore, the unsaturated acyl chain signals have diagnostic values for the identification of isomers of unsaturated (lyso)phosphocholines. Chemical shifts and indirect coupling constants are reported for each of the major components of the mixtures. The methods used were 1D (1H, 13C and 31P) and 2D (H,H-COSY, HMBC, HSQC and HETCOR) NMR. Copyright © 2004 John Wiley & Sons, Ltd. [source]


MODEL UNCERTAINTY AND ITS IMPACT ON THE PRICING OF DERIVATIVE INSTRUMENTS

MATHEMATICAL FINANCE, Issue 3 2006
Rama Cont
Uncertainty on the choice of an option pricing model can lead to "model risk" in the valuation of portfolios of options. After discussing some properties which a quantitative measure of model uncertainty should verify in order to be useful and relevant in the context of risk management of derivative instruments, we introduce a quantitative framework for measuring model uncertainty in the context of derivative pricing. Two methods are proposed: the first method is based on a coherent risk measure compatible with market prices of derivatives, while the second method is based on a convex risk measure. Our measures of model risk lead to a premium for model uncertainty which is comparable to other risk measures and compatible with observations of market prices of a set of benchmark derivatives. Finally, we discuss some implications for the management of "model risk." [source]
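
The worst-case-spread idea behind such measures can be sketched with a small family of candidate models; here plain Black–Scholes prices at a few volatilities stand in for a set of models calibrated to benchmark quotes, which is only a caricature of the coherent and convex constructions developed in the paper:

```python
import numpy as np
from scipy.stats import norm

# Family of candidate pricing models: lognormal dynamics with different volatilities
# (illustrative values; the paper uses measures consistent with observed option prices).
S0, K, T, r = 100.0, 110.0, 0.5, 0.02
vols = [0.18, 0.20, 0.22, 0.25]

def digital_call(sigma):
    """Price of a cash-or-nothing call paying 1 if S_T > K under lognormal dynamics."""
    d2 = (np.log(S0 / K) + (r - 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return np.exp(-r * T) * norm.cdf(d2)

prices = np.array([digital_call(v) for v in vols])
model_uncertainty = prices.max() - prices.min()     # worst-case spread across the models
print("model prices:", np.round(prices, 4))
print(f"model-uncertainty measure (sup - inf): {model_uncertainty:.4f}")
```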


Atopy patch test in patients with atopic eczema/dermatitis syndrome: comparison of petrolatum and aqueous solution as a vehicle

ALLERGY, Issue 4 2004
J. M. Oldhoff
Background: The atopy patch test (APT) is an in vivo model to study the induction of eczema by inhalant allergens. This study was designed to compare two commonly used APT methods. Methods: In the first method, the allergen is dissolved in aqueous solution, which is applied on tape-stripped skin. In the second method, the allergen is dissolved in petrolatum and applied without tape stripping. Thirteen patients with atopic dermatitis sensitized to inhalant allergens were patch tested using both methods. Reactions were evaluated macroscopically and microscopically after 48 h. Results: Nine out of 13 patients displayed a positive reaction for both methods. One patient had a positive APT for the aqueous method alone and three for the petrolatum method alone. Reactions were significantly stronger when using the petrolatum method. Histological evaluation of the nine patients positive for both methods showed no significant differences in number of eosinophils, T-cells and neutrophils. Conclusion: The APT using the petrolatum vehicle induces a higher number of positive reactions and is significantly stronger relative to the APT using allergen in aqueous vehicle. The cellular influx in both test methods is comparable. Both methods can be used to study the mechanisms in the induction of eczema by inhalant allergens. [source]


Evaluation of erythrocyte dysmorphism by light microscopy with lowering of the condenser lens: A simple and efficient method

NEPHROLOGY, Issue 2 2010
GYL EANES BARROS SILVA
ABSTRACT: Aim: To demonstrate that the evaluation of erythrocyte dysmorphism by light microscopy with lowering of the condenser lens (LMLC) is useful to identify patients with a haematuria of glomerular or non-glomerular origin. Methods: A comparative double-blind study between phase contrast microscopy (PCM) and LMLC is reported to evaluate the efficacy of these techniques. Urine samples of 39 patients followed up for 9 months were analyzed, and classified as glomerular and non-glomerular haematuria. The different microscopic techniques were compared using receiver–operator curve (ROC) analysis and area under the curve (AUC). Reproducibility was assessed by the coefficient of variation (CV). Results: Specific cut-offs were set for each method according to their best rates of specificity and sensitivity, as follows: 30% for phase contrast microscopy and 40% for standard LMLC, reaching 95% sensitivity and 100% specificity with the first method, and 90% sensitivity and 100% specificity with the second. In ROC analysis, the AUC for PCM was 0.99 and the AUC for LMLC was 0.96. The CV was very similar in the glomerular haematuria group for PCM (35%) and LMLC (35.3%). Conclusion: LMLC proved to be effective in contributing to the direction of investigation of haematuria, toward the nephrological or urological side. This method can substitute for PCM when that equipment is not available. [source]
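
The cut-off selection and AUC comparison described can be reproduced on simulated scores; the dysmorphic-cell percentages below are invented for illustration (39 synthetic patients) and only mimic the shape of such an analysis:

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(7)

# Illustrative percentages of dysmorphic erythrocytes scored by LMLC in patients
# with glomerular (label 1) and non-glomerular (label 0) haematuria.
pct_dysmorphic = np.concatenate([rng.normal(60, 15, 20).clip(0, 100),   # glomerular
                                 rng.normal(15, 10, 19).clip(0, 100)])  # non-glomerular
label = np.concatenate([np.ones(20), np.zeros(19)])

fpr, tpr, thresholds = roc_curve(label, pct_dysmorphic)
print(f"AUC = {auc(fpr, tpr):.2f}")

# Pick the cut-off that maximizes sensitivity + specificity (Youden index),
# analogous to the 30%/40% cut-offs chosen for PCM and LMLC in the study.
best = np.argmax(tpr - fpr)
print(f"chosen cut-off: {thresholds[best]:.0f}% dysmorphic cells "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```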


Influence of Mobile Magnetic Resonance Imaging on Implanted Pacemakers

PACING AND CLINICAL ELECTROPHYSIOLOGY, Issue 1p2 2003
RYOJI KISHI
KISHI, R., et al.: Influence of Mobile Magnetic Resonance Imaging on Implanted Pacemakers. Purpose: Mobile magnetic resonance imaging (MRI) systems will be widely used in Japan. When traveling, mobile MRI systems generate alternating electromagnetic waves which may cause electromagnetic interference (EMI). This study was designed to determine whether this may influence the function of implanted pacemakers (PM). Methods and Results: The influence of the static magnetic fields was tested in the first method using a PM-human model (Phantom). Magnetic force was simultaneously measured. The PM was switched to the magnet mode within 90 cm from the vehicle, where the magnetic force was ≥2 mT. In the second method, six phantoms were placed on the side of the road, facing in three different directions in X-Y-Z axis orientations, at 1.3 m and 2.0 m above the ground. The mobile MRI passed by at a distance of 1 m from the phantoms at the speed of 20 or 40 km/h. In these experiments, magnet mode switching of the PM was observed for 2 seconds when the vehicle passed close to the phantoms, though no electrical noise was recorded. Conclusion: Mobile MRI vehicles can switch a PM to magnet mode when the distance between patient and vehicle is <90 cm, regardless of whether the vehicle is moving or at a stop. Patients with implanted PM should not approach within <1 m of a mobile MRI. No other EMI-induced PM dysfunction was detected. (PACE 2003; 26[Pt. II]:527–529) [source]


Grafting of functionalized silica particles with poly(acrylic acid)

POLYMERS FOR ADVANCED TECHNOLOGIES, Issue 6 2006
Jarkko J. Heikkinen
Abstract Two different methods to graft silica particles with poly(acrylic acid) (PAA) were studied. In the first method PAA was reacted with 1,1′-carbonyldiimidazole to give functionalized PAA. The resulting activated carbonyl group reacted easily with 3-aminopropyl-functionalized silica at low temperatures. In the second method 3-glycidoxypropyl-functionalized silica particles were reacted directly with PAA by using magnesium chloride as a catalyst. Different molecular weights of PAAs were used in order to investigate the effect of molecular weight on grafting yields in both methods. The grafting yields were determined with thermogravimetric analysis (TGA). All products were also investigated with IR. The results showed that the yields of reactions performed at ambient temperature by using 1,1′-carbonyldiimidazole-functionalized PAA were the same as with a direct reaction of unfunctionalized PAA and 3-aminopropyl-functionalized silica performed at 153°C. Also in reactions between 3-glycidoxypropyl-functionalized silica and PAA the yields were satisfactory. Copyright © 2006 John Wiley & Sons, Ltd. [source]


Managing platform architectures and manufacturing processes for nonassembled products

THE JOURNAL OF PRODUCT INNOVATION MANAGEMENT, Issue 4 2002
Marc H. Meyer
The article presents methods for defining product platforms and measuring business performance in process intensive industries. We first show how process intensive product platforms can be defined using the products and processes of a film manufacturer. We then present an empirical method for understanding the dynamics of process intensive platform innovation, allocating engineering and sales data to specific platform and product development efforts within a product family. We applied this method to a major product line of a materials manufacturer. We gathered ten years of engineering and manufacturing cost data and allocated these to successive platforms and products, and then generated R&D performance measures. These data show the dynamic of heavy capital spending relative to product engineering as one might expect in process intensive industries. The data also show how derivative products can be leveraged from underlying product platforms and processes for nonassembled products. Embedded within these data are strategies for creating reusable subsystems (comprising components, materials, etc.) and common production processes. Hard data on the degree to which subsystems and processes are shared across different products are typically not maintained by corporations for the duration needed to understand the dynamics of evolving product families. For this reason, we developed and applied a second method to assess the degree of reuse of subsystems and processes. This method asks engineering managers to provide subjective ratings on an ordinal scale regarding the use of technology and processes from one product to the next in a cumulative manner. We find that high levels of reuse generally indicate that a product family was developed with a platform discipline. We applied this measure of platform intensity to two product lines of integrated circuits from another large manufacturer. We used this method to gather approximately ten years of information for each product family. Upon analysis, one product family showed substantial platform discipline, emphasizing a common architecture and processes across specific products within the product line. The other product family was developed with significantly less sharing and reuse of architecture, components, and processes. We then found that the platform centric product family outperformed the other along a number of performance dimensions over the course of the decade under examination. [source]


Protein crystallization in hydrogel beads

ACTA CRYSTALLOGRAPHICA SECTION D, Issue 9 2005
Ronnie Willaert
The use of hydrogel beads for the crystallization of proteins is explored in this contribution. The dynamic behaviour of the internal precipitant, protein concentration and relative supersaturation in a gel bead upon submerging the bead in a precipitant solution is characterized theoretically using a transient diffusion model. Agarose and calcium alginate beads have been used for the crystallization of a low-molecular-weight (14.4 kDa, hen egg-white lysozyme) and a high-molecular-weight (636.0 kDa, alcohol oxidase) protein. Entrapment of the protein in the agarose-gel matrix was accomplished using two methods. In the first method, a protein solution is mixed with the agarose sol solution. Gel beads are produced by immersing drops of the protein–agarose sol mixture in a cold paraffin solution. In the second method (which was used to produce calcium alginate and agarose beads), empty gel beads are first produced and subsequently filled with protein by diffusion from a bulk solution into the bead. This latter method has the advantage that a supplementary purification step is introduced (for protein aggregates and large impurities) owing to the diffusion process in the gel matrix. Increasing the precipitant, gel concentration and protein loading resulted in a larger number of crystals of smaller size. Consequently, agarose as well as alginate gels act as nucleation promoters. The supersaturation in a gel bead can be dynamically controlled by changing the precipitant and/or the protein concentration in the bulk solution. Manipulation of the supersaturation allowed the nucleation rate to be varied and led to the production of large crystals which were homogeneously distributed in the gel bead. [source]
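
The transient diffusion picture invoked above has a classical closed form for a sphere suddenly immersed in a bath; the sketch below evaluates the centre concentration of precipitant from that series solution, with an assumed diffusivity and bead radius rather than the paper's fitted values:

```python
import numpy as np

# Fractional approach to the bath concentration at the centre of a gel bead of
# radius R suddenly immersed in precipitant solution, from the classical series
# solution for diffusion into a sphere (Crank). D and R are illustrative values.
D = 1.0e-9        # precipitant diffusivity in the gel (m^2/s)
R = 1.0e-3        # bead radius (m)

def centre_fraction(t, nterms=200):
    n = np.arange(1, nterms + 1)
    return 1 + 2 * np.sum((-1) ** n * np.exp(-D * n**2 * np.pi**2 * t / R**2))

for t in (10, 60, 300, 900):       # seconds after immersion
    print(f"t = {t:>4} s: C_centre/C_bath = {centre_fraction(t):.3f}")
```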


DATA-DRIVEN SMOOTH TESTS AND A DIAGNOSTIC TOOL FOR LACK-OF-FIT FOR CIRCULAR DATA

AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 4 2009
Heidi Wouters
Summary Two contributions to the statistical analysis of circular data are given. First we construct data-driven smooth goodness-of-fit tests for the circular von Mises assumption. As a second method, we propose a new graphical diagnostic tool for the detection of lack-of-fit for circular distributions. We illustrate our methods on two real datasets. [source]


Assessment of Agreement under Nonstandard Conditions Using Regression Models for Mean and Variance

BIOMETRICS, Issue 1 2006
Pankaj K. Choudhary
Summary The total deviation index of Lin (2000, Statistics in Medicine 19, 255–270) and Lin et al. (2002, Journal of the American Statistical Association 97, 257–270) is an intuitive approach for the assessment of agreement between two methods of measurement. It assumes that the differences of the paired measurements are a random sample from a normal distribution and works essentially by constructing a probability content tolerance interval for this distribution. We generalize this approach to the case when differences may not have identical distributions – a common scenario in applications. In particular, we use the regression approach to model the mean and the variance of differences as functions of observed values of the average of the paired measurements, and describe two methods based on asymptotic theory of maximum likelihood estimators for constructing a simultaneous probability content tolerance band. The first method uses bootstrap to approximate the critical point and the second method is an analytical approximation. Simulation shows that the first method works well for sample sizes as small as 30 and the second method is preferable for large sample sizes. We also extend the methodology for the case when the mean function is modeled using penalized splines via a mixed model representation. Two real data applications are presented. [source]
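
A stripped-down, pointwise version of this construction is sketched below: the mean and spread of the differences are modelled as linear functions of the average, and a 90% probability-content limit is read off a folded-normal quantile. Everything here is simulated and least-squares based, and it omits the simultaneous band and the bootstrap/analytic critical points that the paper develops:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# Simulated paired measurements by two methods whose differences have a mean and
# spread that change with the magnitude being measured (illustrative data only).
truth = rng.uniform(10, 100, 200)
m1 = truth + rng.normal(0, 0.02 * truth)
m2 = truth + 1.0 + 0.03 * truth + rng.normal(0, 0.03 * truth)
avg, diff = (m1 + m2) / 2, m2 - m1

# Regression models for the mean and (via absolute residuals) the SD of the differences.
mean_coef = np.polyfit(avg, diff, 1)
resid = diff - np.polyval(mean_coef, avg)
sd_coef = np.polyfit(avg, np.abs(resid), 1)     # E|resid| = sigma*sqrt(2/pi) for normal errors

def tolerance_limit(a, p=0.90):
    """Pointwise limit L(a) with P(|D| <= L) = p for D ~ N(mu(a), sigma(a)^2)."""
    mu = np.polyval(mean_coef, a)
    sigma = np.polyval(sd_coef, a) * np.sqrt(np.pi / 2)
    return stats.foldnorm.ppf(p, abs(mu) / sigma, scale=sigma)

for a in (20, 50, 90):
    print(f"average {a:>3}: 90% of differences lie within +/- {tolerance_limit(a):.2f}")
```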


Structure–Activity Relationships in Cholapod Anion Carriers: Enhanced Transmembrane Chloride Transport through Substituent Tuning

CHEMISTRY - A EUROPEAN JOURNAL, Issue 31 2008

Abstract Chloride transport by a series of steroid-based "cholapod" receptors/carriers was studied in vesicles. The principal method involved preincorporation of the cholapods in the vesicle membranes, and the use of lucigenin fluorescence quenching to detect inward-transported Cl−. The results showed a partial correlation between anion affinity and transport activity, in that changes at the steroidal 7 and 12 positions affected both properties in concert. However, changes at the steroidal 3-position yielded irregular effects. Among the new steroids investigated the bis-p-nitrophenylthiourea 3 showed unprecedented activity, giving measurable transport through membranes with a transporter/lipid ratio of 1:250,000 (an average of <2 transporter molecules per vesicle). Increasing transporter lipophilicity had no effect, and positively charged steroids had low activity. The p-nitrophenyl monourea 25 showed modest but significant activity. Measurements using a second method, requiring the addition of transporters to preformed vesicle suspensions, implied that transporter delivery was problematic in some cases. A series of measurements employing membranes of different thicknesses provided further evidence that the cholapods act as mobile anion carriers. [source]