Similar Approach (similar + approach)


Selected Abstracts


Host immune responses in ex vivo approaches to cutaneous gene therapy targeted to keratinocytes

EXPERIMENTAL DERMATOLOGY, Issue 10 2005
Z. Lu
Abstract: Epidermal gene therapy may benefit a variety of inherited skin disorders and certain systemic diseases. Both in vivo and ex vivo approaches of gene transfer have been used to target human epidermal stem cells and achieve long-term transgene expression in immunodeficient mouse/human chimera models. Immunological responses, however, especially in situations where a neoantigen is expressed, are likely to curtail expression and thereby limit the therapy. In vivo gene transfer to skin has been shown to induce transgene-specific immune responses. Ex vivo gene transfer approaches, in which keratinocytes are transduced in culture and transplanted back to the patient, may, however, avoid the signals provided to the immune system by in vivo administration of vectors. In the current study, we have developed a stable epidermal graft platform in immunocompetent mice to analyze host responses in ex vivo epidermal gene therapy. Using green fluorescent protein (GFP) as a neoantigen and ex vivo retrovirus-mediated gene transfer to mouse primary epidermal cultures depleted of antigen-presenting cells (APCs), we show induction of GFP-specific immune responses leading to the clearance of transduced cells. A similar approach in immunocompetent mice tolerant to GFP resulted in permanent engraftment of transduced cells and continued GFP expression. Activation of transgene-specific immune responses in ex vivo gene transfer targeted to keratinocytes requires cross-presentation of the transgene product to APCs, a process that is most amenable to immune modulation. This model may be used to explore strategies to divert transgene-specific immune responses to less destructive or tolerogenic ones. [source]


EVOLUTION UNDER RELAXED SEXUAL CONFLICT IN THE BULB MITE RHIZOGLYPHUS ROBINI

EVOLUTION, Issue 9 2006
Magdalena Tilszer
Abstract Experimental evolution under different levels of sexual conflict has been used to demonstrate antagonistic coevolution in muscids, but a similar approach has not been employed in other taxa. Here, we describe the results of 37 generations of evolution under either experimentally enforced monogamy or polygamy in the bulb mite Rhizoglyphus robini. Three replicates were maintained for each treatment. Monogamy makes male and female interests congruent; thus, selection is expected to decrease the harmfulness of males to their partners. Our results were consistent with this prediction in that females from monogamous lines achieved lower fecundity when housed with males from polygamous lines. Fecundity of polygamous females was not affected by the mating system under which their partners evolved, which suggests that they were more resistant to male-induced harm. As predicted by the antagonistic coevolution hypothesis, the decrease in harmfulness of monogamous males was accompanied by a decline in reproductive competitiveness. In contrast, female fecundity and embryonic viability, which were not expected to be correlated with male harmfulness, did not differ between monogamous and polygamous lines. None of the fitness components assayed differed between individuals obtained from crosses between parents from the same line and those obtained from crosses between parents from different lines within the same mating system. This indicates that inbreeding depression did not confound our results. However, interpretation of our results is complicated by the fact that both males and females from monogamous lines evolved smaller body size compared to individuals from polygamous lines. Although the decrease in reproductive performance of males from monogamous lines remained significant when body size was taken into account, we were not able to separate the effects of male body size and mating system in their influence on the fecundity of their female partners. [source]


Efficient killing of SW480 colon carcinoma cells by a signal transducer and activator of transcription (STAT) 3 hairpin decoy oligodeoxynucleotide: interference with interferon-γ-STAT1-mediated killing

FEBS JOURNAL, Issue 9 2009
Ali Tadlaoui Hbibi
The signal transducers and activators of transcription (STATs) convey signals from the membrane to the nucleus in response to cytokines or growth factors. STAT3 is activated in response to cytokines involved mostly in cell proliferation; STAT1 is activated by cytokines, including interferon-γ, involved in defence against pathogens and the inhibition of cell proliferation. STAT3, which is frequently activated in tumour cells, is a valuable target with respect to achieving inhibition of tumour cell proliferation. Indeed, its inhibition results in cell death. We previously observed that inhibition of the transcription factor nuclear factor-κB, a key regulator of cell proliferation, with decoy oligodeoxynucleotides results in cell death. We used a similar approach for STAT3. A hairpin STAT3 oligodeoxynucleotide was added to a colon carcinoma cell line in which it induced cell death as efficiently as the STAT3 inhibitor stattic. The hairpin STAT3 oligodeoxynucleotide co-localized with STAT3 within the cytoplasm, prevented STAT3 localization to the nucleus, blocked a cyclin D1 reporter promoter and associated with STAT3 in pull-down assays. However, the same cells were efficiently killed by interferon-γ. This effect was counteracted by the STAT3 oligodeoxynucleotide, which was found to efficiently inhibit STAT1. Thus, although it can inhibit STAT3, the hairpin STAT3 oligodeoxynucleotide appears also to inhibit STAT1-mediated interferon-γ cell killing, highlighting the need to optimize STAT3-targeting oligodeoxynucleotides. [source]


The SAAPdb web resource: A large-scale structural analysis of mutant proteins

HUMAN MUTATION, Issue 4 2009
Jacob M. Hurst
Abstract The Single Amino Acid Polymorphism database (SAAPdb) is a new resource for the analysis and visualization of the structural effects of mutations. Our analytical approach is to map single nucleotide polymorphisms (SNPs) and pathogenic deviations (PDs) to protein structural data held within the Protein Data Bank. By mapping mutations onto protein structures, we can hypothesize whether the mutant residues will have any local structural effect that may "explain" a deleterious phenotype. Our prior work used a similar approach to analyze mutations within a single protein. An analysis of the contents of SAAPdb indicates that there are clear differences in the sequence and structural characteristics of SNPs and PDs, and that PDs are more often explained by our structural analysis. This mapping and analysis is a useful resource for the mutation community and is publicly available at http://www.bioinf.org.uk/saap/db/. Hum Mutat 0, 1–9, 2009. © 2009 Wiley-Liss, Inc. [source]


Market Valuation of Research and Development Spending under Canadian GAAP

ACCOUNTING PERSPECTIVES, Issue 1 2004
ANTONELLO CALLIMACI
ABSTRACT Section 3450 of the Canadian Institute of Chartered Accountants (CICA) Handbook requires Canadian firms to capitalize development costs that meet certain criteria and to expense those that relate to research. International Accounting Standard (IAS) No. 38 favours a similar approach. In the United States, Statement of Financial Accounting Standard (SFAS) No. 2 recommends the immediate expensing of all research and development (R&D) spending. The only exception is SFAS No. 86, which requires software development costs to be capitalized when a product successfully passes a technological feasibility test. Consequently, the Canadian financial disclosure regime provides a rich setting for testing the market valuation of capitalized R&D. Our primary research question asks whether capitalized R&D provides useful information to market participants investing in Canadian firms. We use price-level and return models to assess the value relevance of capitalized R&D disclosed in the financial statements under Canadian GAAP. In line with expectations, using a price-level model, we find that capitalized R&D and R&D expense as disclosed in the financial statements provide information that is value relevant to market participants. However, we find that R&D capitalized during the year helps explain returns while R&D expense does not. Thus we conclude that the application of section 3450 of the CICA Handbook produces value-relevant information. [source]


Self-regular boundary integral equation formulations for Laplace's equation in 2-D

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 1 2001
A. B. Jorge
Abstract The purpose of this work is to demonstrate the application of the self-regular formulation strategy using Green's identity (potential-BIE) and its gradient form (flux-BIE) for Laplace's equation. Self-regular formulations lead to highly effective BEM algorithms that utilize standard conforming boundary elements and low-order Gaussian integrations. Both formulations are discussed and implemented for two-dimensional potential problems, and numerical results are presented. Results for the potential show that quartic interpolations are required for the flux-BIE to achieve accuracy comparable to the potential-BIE with quadratic interpolations. On the other hand, flux error results in the potential-BIE implementation can be dominated by the numerical integration of the logarithmic kernel of the remaining weakly singular integral. The accuracy of these flux results does not improve beyond a certain level when standard quadrature is used together with a special transformation, but when an alternative logarithmic quadrature scheme is used these errors reduce abruptly, and the flux results converge monotonically to the exact answer. In the flux-BIE implementation, where all integrals are regularized, the accuracy of the flux results improves systematically, even with some oscillations, when refining the mesh or increasing the order of the interpolating function. The flux-BIE approach exhibits great numerical sensitivity to the mesh generation scheme and refinement. Accurate results for the potential and the flux were obtained for coarse graded meshes in which the rate of change of the tangential derivative of the potential was better approximated. This numerical sensitivity and the need for graded meshes were not found in the elasticity problem, for which self-regular formulations have also been developed using a similar approach.
Logarithmic quadrature to evaluate the weakly singular integral is implemented in the self-regular potential-BIE, showing that the magnitude of the error is dependent only on the standard Gauss integration of the regularized integral, but not on this logarithmic quadrature of the weakly singular integral. The self-regular potential-BIE is compared with the standard (CPV) formulation, showing the equivalence between these formulations. The self-regular BIE formulations and computational algorithms are established as robust alternatives to singular BIE formulations for potential problems. Copyright © 2001 John Wiley & Sons, Ltd. [source]
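The integration issue described above can be illustrated outside the BEM context with a toy one-dimensional example (this is not the authors' code): an 8-point Gauss-Legendre rule converges slowly on a weakly singular logarithmic integrand, while a simple variable transformation that damps the singularity, standing in here for a dedicated logarithmic quadrature rule, recovers accuracy.

```python
import numpy as np

def gauss_legendre_01(f, n):
    """n-point Gauss-Legendre rule mapped from [-1, 1] to [0, 1]."""
    x, w = np.polynomial.legendre.leggauss(n)
    return 0.5 * np.sum(w * f(0.5 * (x + 1.0)))

exact = -1.0  # the weakly singular integral: int_0^1 ln(x) dx = -1

# Plain Gauss-Legendre struggles with the log singularity at x = 0.
plain = gauss_legendre_01(np.log, 8)

# Substituting x = t**2 (dx = 2 t dt) damps the singularity:
# int_0^1 ln(t**2) * 2 t dt = int_0^1 4 t ln(t) dt, same value, smoother integrand.
damped = gauss_legendre_01(lambda t: 4.0 * t * np.log(t), 8)

print(abs(plain - exact), abs(damped - exact))
```

With the same number of nodes, the transformed integrand is integrated orders of magnitude more accurately, which mirrors the abrupt error reduction the abstract reports when switching to a logarithmic quadrature scheme.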


Haplotyping of the canine MHC without the need for DLA typing

INTERNATIONAL JOURNAL OF IMMUNOGENETICS, Issue 6 2005
C. A. McLure
Summary The genomic matching technique has proven useful in MHC haplotyping in humans. We have adopted a similar approach in Australian cattle dogs and report that genotyping can be achieved with a single assay. [source]


A Conservative Approach to Performing Transseptal Punctures Without the Use of Intracardiac Echocardiography: Stepwise Approach with Real-Time Video Clips

JOURNAL OF CARDIOVASCULAR ELECTROPHYSIOLOGY, Issue 6 2007
ALAN CHENG M.D.
Atrial transseptal puncture as a means of accessing the left heart is a critical component of catheter ablation procedures for atrial fibrillation, left-sided accessory pathways, and access to the left ventricle in patients with certain types of prosthetic aortic valves. Although this technique has been performed successfully since the 1950s, severe and potentially life-threatening complications can still occur, including cardiac tamponade and/or death. Some have adopted the use of intracardiac echocardiography, but our laboratory and many others throughout the world have successfully relied on fluoroscopic imaging alone. The aim of this brief report is to describe in detail our technique for performing transseptal punctures during catheter ablation procedures for atrial fibrillation. We employ a similar approach when targeting left-sided accessory pathways, although only a single transseptal is performed in those cases. Utilizing a series of real-time video clips, we describe our technique of double transseptal puncture and illustrate in detail ways in which to avoid common pitfalls. [source]


Application of the frozen atom approximation to the GB/SA continuum model for solvation free energy

JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 2 2002
Olgun Guvench
Abstract The generalized Born/surface area (GB/SA) continuum model for solvation free energy is a fast and accurate alternative to using discrete water molecules in molecular simulations of solvated systems. However, computational studies of large solvated molecular systems such as enzyme–ligand complexes can still be computationally expensive even with continuum solvation methods, simply because of the large number of atoms in the solute molecules. Because often only a relatively small portion of such a system, such as the ligand binding site, is under study, it becomes less attractive to calculate energies and derivatives for all atoms in the system. To curtail computation while still maintaining high energetic accuracy, atoms distant from the site of interest are often frozen; that is, their coordinates are made invariant. Such frozen atoms do not require energetic and derivative updates during the course of a simulation. Herein we describe methodology and results for applying the frozen atom approach to both the generalized Born (GB) and the solvent accessible surface area (SASA) parts of the GB/SA continuum model for solvation free energy. For strictly pairwise energetic terms, such as the Coulombic and van der Waals energies, contributions from pairs of frozen atoms can be ignored. This leaves energetic differences unaffected for conformations that vary only in the positions of nonfrozen atoms. Due to the nonlocal nature of the GB analytical form, however, excluding such pairs from a GB calculation leads to unacceptable inaccuracies. To apply a frozen-atom scheme to GB calculations, a buffer region within the frozen-atom zone is generated based on a user-definable cutoff distance from the nonfrozen atoms. Certain pairwise interactions between frozen atoms in the buffer region are retained in the GB computation.
This allows high accuracy in conformational GB comparisons to be maintained while achieving significant savings in computational time compared to the full (nonfrozen) calculation. A similar approach of using a buffer region of frozen atoms is taken for the SASA calculation. The SASA calculation is local in nature, and thus exact SASA energies are maintained. With a buffer region of 8 Å for the frozen-atom cases, excellent agreement in differences in energies for three different conformations of cytochrome P450 with a bound camphor ligand is obtained with respect to the nonfrozen cases. For various minimization protocols, simulations run 2 to 10.5 times faster and memory usage is reduced by a factor of 1.5 to 5. Application of the frozen atom method for GB/SA calculations can thus render computationally tractable biologically and medically important simulations, such as those used to study ligand–receptor binding conformations and energies in a solvated environment. © 2002 Wiley Periodicals, Inc. J Comput Chem 23: 214–221, 2002 [source]


Simultaneous analysis of the behavioural phenotype, physical factors, and parenting stress in people with Cornelia de Lange syndrome

JOURNAL OF INTELLECTUAL DISABILITY RESEARCH, Issue 7 2009
J. Wulffaert
Abstract Background Studies into the phenotype of rare genetic syndromes largely rely on bivariate analysis. The aim of this study was to describe the phenotype of Cornelia de Lange syndrome (CdLS) in depth by examining a large number of variables with varying measurement levels. Virtually the only suitable multivariate technique for this is categorical principal component analysis. The characteristics of the CdLS phenotype measured were also analysed in relation to parenting stress. Method Data for 37 children and adults with CdLS were collected. The type of gene mutation and relevant medical characteristics were measured. Information on adaptive functioning, behavioural problems, the presence of the autistic disorder and parenting stress was obtained through questionnaires and semi-structured interviews with the parents. Chronological age and gender were also included in the analysis. Results All characteristics measured, except gender, were highly interrelated, and there was much variability in the CdLS phenotype. Parents perceived more stress when their children were older, were lower functioning, had more behavioural problems, and if the autistic disorder was present. A new perspective was acquired on the relation between the gene mutation type and medical and behavioural characteristics. In contrast with earlier research, the severity of medical characteristics did not appear to be a strong prognostic factor for the level of development. Conclusion Categorical principal component analysis proved particularly valuable for the description of this small group of participants given the large number of variables with different measurement levels. The success of the technique in the present study suggests that a similar approach to the characterisation of other rare genetic syndromes could prove extremely valuable. Given the high variability and interrelatedness of characteristics in persons with CdLS, parents should be informed about this differentiated perspective. [source]


Synthesis of β3 adrenergic receptor agonist LY377604 and its metabolite 4-hydroxycarbazole, labeled with carbon-14 and deuterium

JOURNAL OF LABELLED COMPOUNDS AND RADIOPHARMACEUTICALS, Issue 6 2005
Boris A. Czeskis
Abstract Synthesis of 14C-radiolabeled 4-hydroxycarbazole was accomplished starting from aniline-[U-14C], based on zinc chloride initiated Fischer cyclization of the phenylhydrazone prepared from phenylhydrazine-[U-14C] and cyclohexane-1,3-dione. The resulting tetrahydrooxocarbazole was subjected to dehydrogenation–aromatization using palladium on carbon. The aromatized 4-hydroxycarbazole-[4b,5,6,7,8,8a-14C] was then used for the synthesis of the 14C-labeled β3 adrenergic receptor agonist LY377604. The introduction of four deuteria into the carbazole fragment of LY377604 was accomplished by its initial bromination and subsequent catalytic deuteration of the resulting tetrabromide. A similar approach was used for the conversion of 4-hydroxycarbazole into its tetradeutero-isotopomer. Copyright © 2005 John Wiley & Sons, Ltd. [source]


Thermal machines based on surface energy of wetting: Thermodynamic analysis

AICHE JOURNAL, Issue 3 2003
A. Laouir
This work proposes an original thermodynamic-energetic analysis of the feasibility and ideal performance of thermal machines based on the wetting phenomenon proposed by V. A. Eroshenko. The extension or contraction of a liquid film is taken as a "tutorial" example to introduce the basic thermodynamic relations of this 2-D transformation. It implies both mechanical and thermal effects, and this coupling allows conversion of heat to work (thermal engine) or conversely to pump heat (refrigeration/heat pump effect). A similar approach is then developed for the interface between a liquid and a highly microporous solid, having a large internal surface area. The thermodynamic behavior of this interface involves as state variables the surface tension of the liquid, the contact angle, and their dependence on temperature. Depending on the relative magnitude and sign of these quantities, and, therefore, on the working couple and the temperature range, a variety of machine cycles are feasible, or excluded, and a method is proposed for a comprehensive inventory. Order-of-magnitude calculations of the energy densities are presented based on the existing experimental data for several systems involving water as the fluid. The tentative conclusions are that the energy densities are very small on a mass basis compared to conventional systems based on vaporization, but the contrary is true on a volume basis because the phase transformation (extension of the surface) occurs in a condensed state. There may, therefore, be some niches for thermal machines of this type, but they remain to be identified and validated. [source]


Stomatal evidence for a decline in atmospheric CO2 concentration during the Younger Dryas stadial: a comparison with Antarctic ice core records

JOURNAL OF QUATERNARY SCIENCE, Issue 1 2002
J. C. Mcelwain
Abstract A recent high-resolution record of Late-glacial CO2 change from Dome Concordia in Antarctica reveals a trend of increasing CO2 across the Younger Dryas stadial (GS-1). These results are in good agreement with previous Antarctic ice-core records. However, they contrast markedly with a proxy CO2 record based on the stomatal approach to CO2 reconstruction, which records a ca. 70 ppm mean CO2 decline at the onset of GS-1. To address these apparent discrepancies we tested the validity of the stomatal-based CO2 reconstructions from Kråkenes by obtaining further proxy CO2 records based on a similar approach using fossil leaves from two independent lakes in Atlantic Canada. Our Late-glacial CO2 reconstructions reveal an abrupt ca. 77 ppm decrease in atmospheric CO2 at the onset of the Younger Dryas stadial, which lagged climatic cooling by ca. 130 yr. Furthermore, the trends recorded in the most accurate high-resolution ice-core record of CO2, from Dome Concordia, can be reproduced from our stomatal-based CO2 records, when time-averaged by the mean age distribution of air contained within Dome Concordia ice (200 to 550 yr). If correct, our results indicate an abrupt drawdown of atmospheric CO2 within two centuries at the onset of GS-1, suggesting that some re-evaluation of the behaviour of atmospheric CO2 sinks and sources during times of rapid climatic change, such as the Late-glacial, may be required. Copyright © 2002 John Wiley & Sons, Ltd. [source]
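The time-averaging step described above, smoothing a high-resolution proxy record by the mean air-age distribution of the ice, can be sketched numerically. The series and the 400-yr boxcar window below are synthetic stand-ins, not the Kråkenes or Dome Concordia data.

```python
import numpy as np

def time_average(series, window):
    """Boxcar-smooth an annual-resolution series, mimicking the mean
    air-age distribution of the air trapped in ice-core bubbles."""
    kernel = np.ones(window) / window
    pad_left = np.full(window // 2, series[0])
    pad_right = np.full(window - window // 2 - 1, series[-1])
    return np.convolve(np.concatenate([pad_left, series, pad_right]),
                       kernel, mode="valid")

# Synthetic annual CO2 series: an abrupt ~77 ppm drawdown over 200 yr,
# loosely patterned on the reconstruction described in the abstract.
years = np.arange(2000)
co2 = np.where(years < 900, 265.0,
      np.where(years < 1100, 265.0 - 77.0 * (years - 900) / 200.0, 188.0))

smoothed = time_average(co2, 400)  # ~400 yr mean air age

# The abrupt step survives only as a gentler ramp in the smoothed record.
print(np.max(np.abs(np.diff(co2))), np.max(np.abs(np.diff(smoothed))))
```

The total drawdown is preserved, but the maximum year-to-year rate of change is roughly halved, which is the sense in which a time-averaged stomatal record can reproduce the more gradual ice-core trend.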


Detecting and creating oscillations using multifractal methods

MATHEMATISCHE NACHRICHTEN, Issue 11 2006
Stéphane Seuret
Abstract By comparing the Hausdorff multifractal spectrum with the large deviations spectrum of a given continuous function f, we find sufficient conditions ensuring that f possesses oscillating singularities. Using a similar approach, we study the nonlinear wavelet threshold operator which associates with any function f = Σ_{j,k} d_{j,k} ψ_{j,k} ∈ L²(ℝ) the function series f^t whose wavelet coefficients are d^t_{j,k} = d_{j,k}·1{|d_{j,k}| ≥ 2^(−tj)}, for some fixed real number t > 0. This operator creates a context propitious to the appearance of oscillating singularities. As a consequence, we prove that the series f^t may have a multifractal spectrum with a support larger than that of f. We exhibit an example of a function f ∈ L²(ℝ) such that the associated thresholded function series f^t effectively possesses oscillating singularities which were not present in the initial function f. This series f^t is a typical example of a function with a homogeneous non-concave multifractal spectrum which does not satisfy the classical multifractal formalisms. (© 2006 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
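As a hypothetical numerical aside (not from the paper), a coefficient-threshold operator of this general kind can be sketched with a plain Haar transform; the level-dependent cutoff 2^(−t·j) used here is an assumed stand-in for the paper's scale-dependent rule.

```python
import numpy as np

def haar_decompose(signal):
    """Full orthonormal Haar decomposition: returns (approx, [detail arrays]),
    finest level first. Signal length must be a power of two."""
    a = np.asarray(signal, dtype=float)
    details = []
    while len(a) > 1:
        s = (a[0::2] + a[1::2]) / np.sqrt(2.0)  # smooth part
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)  # detail coefficients d_{j,k}
        details.append(d)
        a = s
    return a, details

def haar_reconstruct(approx, details):
    a = approx
    for d in reversed(details):
        out = np.empty(2 * len(a))
        out[0::2] = (a + d) / np.sqrt(2.0)
        out[1::2] = (a - d) / np.sqrt(2.0)
        a = out
    return a

def hard_threshold(details, t):
    """Keep a detail coefficient only when |d| >= 2**(-t*j); the level
    indexing and cutoff form are assumptions, not the paper's exact rule."""
    return [np.where(np.abs(d) >= 2.0 ** (-t * j), d, 0.0)
            for j, d in enumerate(details, start=1)]

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 4 * np.pi, 256)) + 0.01 * rng.standard_normal(256)
approx, details = haar_decompose(signal)
ft = haar_reconstruct(approx, hard_threshold(details, 1.0))
print(np.max(np.abs(ft - signal)))  # error introduced by the dropped coefficients
```

Without thresholding the transform reconstructs the signal exactly; zeroing the sub-threshold coefficients is what perturbs the local regularity of the reconstructed series.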


Noninvasive assessment of energy expenditure in children

AMERICAN JOURNAL OF HUMAN BIOLOGY, Issue 5 2006
Isabelle Sarton-Miller
This study establishes an affordable, simple, and noninvasive method to assess energy expenditure (EE) in children, an underrepresented group. The method is based on regression modeling, where prediction of oxygen consumption (VO2), a proxy of EE, was deduced from heart rate (HR) and several variables that adjusted for interindividual variability. Limb activities (arms vs. legs) and posture (sitting vs. standing) were represented in the regression as dichotomous covariates. The order of activities and intensities was randomized. Seventy-four children (aged 7–10 years), raised at sea level (Seattle, WA), comprised the sample. Anthropometric measures were taken, and VO2 and HR were measured for activities using the arms in sitting and standing positions (mixing and punching), as well as walking at different velocities on a treadmill. Repeated measures and least square regression estimation were used. HR, body mass, number of hours of physical activity per week (HPA), an interaction term between sitting and standing resting HR, and the two dichotomous variables, sex and limbs, were significant covariates; posture was not. Several equations were developed for various field uses. The equations were built from sea-level data, but ultimately this method could serve as a baseline for developing a similar approach in other populations, where noninvasive estimation of EE is imperative in order to gain a better understanding of children's energetic issues. Am. J. Hum. Biol. 18:600–609, 2006. © 2006 Wiley-Liss, Inc. [source]
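The flavor of such a prediction equation can be sketched with synthetic data; all coefficients, units, and ranges below are invented for illustration and are not the study's fitted values.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200

# Hypothetical predictors patterned on the abstract: heart rate, body mass,
# weekly hours of physical activity, plus sex and limbs as 0/1 dummies.
hr   = rng.uniform(80, 160, n)   # beats per minute
mass = rng.uniform(20, 45, n)    # kg
hpa  = rng.uniform(0, 10, n)     # hours of physical activity per week
sex  = rng.integers(0, 2, n)     # dummy coding
limb = rng.integers(0, 2, n)     # 0 = arms, 1 = legs

# Synthetic "true" VO2 (mL/min); the generating coefficients are arbitrary.
vo2 = (50 + 6.0 * hr + 8.0 * mass + 3.0 * hpa + 40.0 * sex + 60.0 * limb
       + rng.normal(0, 25, n))

# Least-squares fit of the same linear form used for prediction.
X = np.column_stack([np.ones(n), hr, mass, hpa, sex, limb])
beta, *_ = np.linalg.lstsq(X, vo2, rcond=None)
print(np.round(beta, 1))
```

With enough observations the fitted coefficients recover the generating values, which is the basis for then predicting VO2, and hence EE, from heart rate alone plus the adjustment covariates.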


Caught in the trio trap?

AMERICAN JOURNAL OF MEDICAL GENETICS, Issue 4 2001
Potential selection bias inherent to association studies using parent-offspring trios
Abstract In recent years, the validity of classic case control studies in psychiatric genetic research has increasingly been called into question because of the risk of population stratification problems inherent to this type of association study. Consequently, the application of family-based association studies using parent-offspring trios has been strongly advocated. Recently, however, in a study comparing clinical characteristics between index patients from parent-offspring trios and singleton patients with bipolar affective disorder, the question was raised whether a systematic neglect of case control association studies could lead to a selection bias of susceptibility genes. In a similar approach, we compared demographic and clinical characteristics of 122 singleton bipolar patients with those of 54 bipolar patients derived from parent-offspring trios. The singleton patients presented not only with a higher age of onset, but also with a higher frequency of suicidal behavior and a higher familial loading for suicidality. These findings suggest that the genetic mechanism for disease might be different between trio-based and classic case control samples, where patients are examined whose parents are not available for genetic studies. Thus, giving up case control designs for the sake of family-based association studies could come at the risk of selecting against several genetically determined factors. © 2001 Wiley-Liss, Inc. [source]


The statistics of the highest E value

ACTA CRYSTALLOGRAPHICA SECTION A, Issue 4 2007
Grzegorz Chojnowski
In a previous publication, the Gumbel–Fisher–Tippett (GFT) extreme-value analysis has been applied to investigate the statistics of the intensity of the strongest reflection in a thin resolution shell. Here, a similar approach is applied to study the distribution, expectation value and standard deviation of the highest normalized structure-factor amplitude (E value). As before, acentric and centric reflections are treated separately, a random arrangement of scattering atoms is assumed, and E-value correlations are neglected. Under these assumptions, it is deduced that the highest E value is GFT distributed to a good approximation. Moreover, it is shown that the root of the expectation value of the highest 'normalized' intensity is not only an upper limit for the expectation value of the highest E value but also a very good estimate. Qualitatively, this can be attributed to the sharpness of the distribution of the highest E value. Although the formulas were derived with various simplifying assumptions and approximations, they turn out to be useful also for real small-molecule and protein crystal structures, for both thin and thick resolution shells. The only limitation is that low-resolution data (below 2.5 Å) have to be excluded from the analysis. These results have implications for the identification of outliers in experimental diffraction data. [source]
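The claim that the root of the expected highest normalized intensity closely estimates the expected highest E value can be checked numerically under the stated assumptions (acentric reflections, random atoms, no correlations), where the normalized intensity z = E² is exponentially distributed and its maximum over n reflections is approximately Gumbel. This simulation is an illustration, not the paper's derivation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_refl, n_trials = 1000, 4000

# Under Wilson statistics for acentric reflections with random atoms, the
# normalized intensity z = E**2 is exponential with unit mean, and the
# maximum of n such values is approximately Gumbel with location ln(n).
z_max = rng.exponential(1.0, size=(n_trials, n_refl)).max(axis=1)

euler_gamma = 0.5772156649
expected_max_z = np.log(n_refl) + euler_gamma  # mean of the limiting Gumbel

# Root of the expected highest intensity vs. the expected highest E value:
# an upper bound by Jensen's inequality and, as the abstract notes, a close one.
print(z_max.mean(), expected_max_z)
print(np.sqrt(z_max).mean(), np.sqrt(expected_max_z))
```

The gap between the two quantities is small precisely because the distribution of the maximum is sharply peaked relative to its location, which is the qualitative explanation offered in the abstract.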


The molecular architecture of the arachidonate-regulated Ca2+-selective ARC channel is a pentameric assembly of Orai1 and Orai3 subunits

THE JOURNAL OF PHYSIOLOGY, Issue 17 2009
Olivier Mignen
The activation of Ca2+ entry is a critical component of agonist-induced cytosolic Ca2+ signals in non-excitable cells. Although a variety of different channels may be involved in such entry, the recent identification of the STIM and Orai proteins has focused attention on the channels in which these proteins play a key role. To date, two distinct highly Ca2+-selective STIM1-regulated and Orai-based channels have been identified: the store-operated CRAC channels and the store-independent arachidonic-acid-activated ARC channels. In contrast to the CRAC channels, where the channel pore is composed of only Orai1 subunits, both Orai1 and Orai3 subunits are essential components of the ARC channel pore. Using an approach involving the co-expression of a dominant-negative Orai1 monomer along with different preassembled concatenated Orai1 constructs, we recently demonstrated that the functional CRAC channel pore is formed by a homotetrameric assembly of Orai1 subunits. Here, we use a similar approach to demonstrate that the functional ARC channel pore is a heteropentameric assembly of three Orai1 subunits and two Orai3 subunits. Expression of concatenated pentameric constructs with this stoichiometry results in the appearance of large currents that display all the key biophysical and pharmacological features of the endogenous ARC channels. They also replicate the essential regulatory characteristics of native ARC channels, including specific activation by low concentrations of arachidonic acid, complete independence of store depletion, and an absolute requirement for the pool of STIM1 that constitutively resides in the plasma membrane. [source]


Smooth Random Effects Distribution in a Linear Mixed Model

BIOMETRICS, Issue 4 2004
Wendimagegn Ghidey
Summary A linear mixed model with a smooth random effects density is proposed. A similar approach to the P-spline smoothing of Eilers and Marx (1996, Statistical Science 11, 89–121) is applied to yield a more flexible estimate of the random effects density. Our approach differs from theirs in that the B-spline basis functions are replaced by approximating Gaussian densities. Fitting the model involves maximizing a penalized marginal likelihood. The best penalty parameters minimize Akaike's Information Criterion employing Gray's (1992, Journal of the American Statistical Association 87, 942–951) results. Although our method is applicable to any dimension of the random effects structure, in this article the two-dimensional case is explored. Our methodology is conceptually simple, it is relatively easy to fit in practice, and it is applied to the cholesterol data first analyzed by Zhang and Davidian (2001, Biometrics 57, 795–802). A simulation study shows that our approach yields almost unbiased estimates of the regression and the smoothing parameters in small sample settings. Consistency of the estimates is shown in a particular case. [source]
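A minimal sketch of the general idea, a density built from equally spaced Gaussian basis densities with a difference penalty on the (log-)weights, fitted by penalized maximum likelihood. The basis spacing, width, and penalty weight below are arbitrary choices, and this is not the authors' implementation or their mixed-model setting.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(7)
data = np.concatenate([rng.normal(-2.0, 0.7, 300), rng.normal(1.5, 1.0, 300)])

# Equally spaced Gaussian basis densities standing in for B-splines.
centers = np.linspace(-5.0, 5.0, 15)
width = 1.2 * (centers[1] - centers[0])
B = norm.pdf(data[:, None], loc=centers[None, :], scale=width)  # shape (n, K)

K = len(centers)
D = np.diff(np.eye(K), n=2, axis=0)  # second-order difference penalty
P = D.T @ D

def neg_pen_loglik(theta, lam):
    w = np.exp(theta - theta.max())
    w /= w.sum()                      # weights constrained to the simplex
    return -np.sum(np.log(B @ w + 1e-300)) + 0.5 * lam * theta @ P @ theta

res = minimize(neg_pen_loglik, np.zeros(K), args=(1.0,), method="L-BFGS-B")
w = np.exp(res.x - res.x.max())
w /= w.sum()

def density(x):
    return norm.pdf(np.asarray(x)[:, None], centers, width) @ w

print(density(np.array([-2.0, 1.5, 4.5])))  # evaluated near the modes and in a tail
```

Because the estimate is a convex combination of proper densities, it integrates to one by construction; the penalty controls how rough the weight sequence, and hence the fitted density, is allowed to be.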


Large scale demonstration of a process analytical technology application in bioprocessing: Use of on-line high performance liquid chromatography for making real time pooling decisions for process chromatography

BIOTECHNOLOGY PROGRESS, Issue 2 2010
Anurag S. Rathore
Abstract Process Analytical Technology (PAT) has been gaining a lot of momentum in the biopharmaceutical community because of the potential for continuous real-time quality assurance, resulting in improved operational control and compliance. In previous publications, we demonstrated the feasibility of applications that use high performance liquid chromatography (HPLC) and ultra performance liquid chromatography (UPLC) for real-time pooling of a process chromatography column. In this article, we follow a similar approach to perform lab studies and create a model for a chromatography step of a different modality (hydrophobic interaction chromatography). The predictions of the model compare well to actual experimental data, demonstrating the usefulness of the approach across different modes of chromatography. Also, use of on-line HPLC when the step is scaled up to pilot scale (a 2294-fold scale-up from a 3.4 mL column in the lab to a 7.8 L column in the pilot plant) and eventually to manufacturing scale (a 45930-fold scale-up from a 3.4 mL column in the lab to a 158 L column in the manufacturing plant) is examined. Overall, the results confirm that for the application under consideration, on-line HPLC offers a feasible approach for analysis that can facilitate real-time decisions for column pooling based on product quality attributes. The observations demonstrate that the proposed analytical scheme allows us to meet two of the key goals that have been outlined for PAT, i.e., "variability is managed by the process" and "product quality attributes can be accurately and reliably predicted over the design space established for materials used, process parameters, manufacturing, environmental, and other conditions". The application presented here can be extended to other modes of process chromatography and/or HPLC analysis. © 2009 American Institute of Chemical Engineers Biotechnol. Prog., 2010 [source]


Large-Scale In Vivo Synthesis of the Carbohydrate Moieties of Gangliosides GM1 and GM2 by Metabolically Engineered Escherichia coli

CHEMBIOCHEM, Issue 5 2003
Tatiana Antoine
Abstract Two metabolically engineered Escherichia coli strains have been constructed to produce the carbohydrate moieties of gangliosides GM2 (GalNAcβ-4(NeuAcα-3)Galβ-4Glc; Gal=galactose, Glc=glucose, Ac=acetyl) and GM1 (Galβ-3GalNAcβ-4(NeuAcα-3)Galβ-4Glc). The GM2 oligosaccharide-producing strain TA02 was devoid of both β-galactosidase and sialic acid aldolase activities and overexpressed the genes for CMP-NeuAc synthase (CMP=cytidine monophosphate), α-2,3-sialyltransferase, UDP-GlcNAc (UDP=uridine diphosphate) C4 epimerase, and β-1,4-GalNAc transferase. When this strain was cultivated on glycerol, exogenously added lactose and sialic acid were shown to be actively internalized into the cytoplasm and converted into GM2 oligosaccharide. The in vivo synthesis of GM1 oligosaccharide was achieved by taking a similar approach but using strain TA05, which additionally overexpressed the gene for β-1,3-galactosyltransferase. In high-cell-density cultures, the production yields for the GM2 and GM1 oligosaccharides were 1.25 g L⁻¹ and 0.89 g L⁻¹, respectively. [source]


Total Synthesis of the Cyclodepsipeptide Apratoxin A and Its Analogues and Assessment of Their Biological Activities

CHEMISTRY - A EUROPEAN JOURNAL, Issue 29 2006
Dawei Ma Prof. Dr.
Abstract A novel total synthesis of apratoxin A is described, with key steps including the assembly of its ketide segment through a D-proline-catalyzed direct aldol reaction and Oppolzer's anti-aldol reaction, and the preparation of its thiazoline unit in a biomimetic synthesis. An oxazoline analogue of apratoxin A has also been elaborated by a similar approach. This compound has a potency against HeLa cell proliferation only slightly lower than that of apratoxin A, whilst a C(40)-demethylated oxazoline analogue of apratoxin A displays a much lower cytotoxicity, and the C(37)-epimer and C(37) demethylation product of this new analogue are inactive. These results suggest that the two methyl groups at C(37) and C(40) and the stereochemistry at C(37) are essential for the potent cellular activity of the oxazoline analogue of apratoxin A. Further biological analysis revealed that both synthetic apratoxin A and its oxazoline analogue inhibited cell proliferation by causing cell cycle arrest in the G1 phase. [source]


Modeling and Selection of Flexible Proteins for Structure-Based Drug Design: Backbone and Side Chain Movements in p38 MAPK

CHEMMEDCHEM, Issue 2 2008
Jyothi Subramanian
Abstract Receptor rearrangement upon ligand binding (induced fit) is a major stumbling block in docking and virtual screening. Even though numerous studies have stressed the importance of including protein flexibility in ligand docking, currently available methods provide only a partial solution to the problem. Most of these methods, being computer intensive, are often impractical to use in actual drug discovery settings. We had earlier shown that ligand-induced receptor side-chain conformational changes could be modeled statistically using data on known receptor–ligand complexes. In this paper, we show that a similar approach can be used to model more complex changes like backbone flips and loop movements. We have used p38 MAPK as a test case and have shown that a few simple structural features of ligands are sufficient to predict the induced variation in receptor conformations. Rigorous validation, both by internal resampling methods and on an external test set, corroborates this finding and demonstrates the robustness of the models. We have also compared our results with those from an earlier molecular dynamics simulation study on DFG loop conformations of p38 MAPK, and found that the results matched in the two cases. Our statistical approach enables one to predict the final ligand-induced conformation of the active site of a protein, based on a few ligand properties, prior to docking the ligand. We can do this without having to trace the step-by-step process by which this state is arrived at (as in molecular dynamics simulations), thereby drastically reducing computational effort. [source]
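The abstract's central claim, that a handful of ligand descriptors statistically predict a discrete receptor conformational change, can be illustrated with a generic classifier. The sketch below fits a logistic regression by gradient descent on synthetic data; the descriptors, labels, and coefficients are invented for illustration and have no connection to the paper's actual features or model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ligand descriptors (e.g., size, H-bond count, hydrophobicity)
# and a binary receptor state (0 = loop "in", 1 = loop "out"). Synthetic data.
n = 200
X = rng.normal(size=(n, 3))
true_w = np.array([1.5, -2.0, 0.5])              # invented "true" relationship
p = 1 / (1 + np.exp(-(X @ true_w)))
y = rng.binomial(1, p)

# Logistic regression fitted by plain gradient descent on the log-loss.
w = np.zeros(3)
for _ in range(2000):
    pred = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.1 * (X.T @ (pred - y)) / n

pred_state = (1 / (1 + np.exp(-(X @ w))) > 0.5).astype(int)
accuracy = (pred_state == y).mean()
```

The point of the sketch is the workflow, not the model class: once conformational states are labeled in known complexes, predicting the induced state reduces to ordinary supervised classification on ligand features, which is why it is cheap compared with tracing the transition dynamically.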


Seeking a second opinion: uncertainty in disease ecology

ECOLOGY LETTERS, Issue 6 2010
Brett T. McClintock
Ecology Letters (2010) 13: 659–674 Abstract Analytical methods accounting for imperfect detection are often used to facilitate reliable inference in population and community ecology. We contend that similar approaches are needed in disease ecology because these complicated systems are inherently difficult to observe without error. For example, wildlife disease studies often assign individuals, populations, or spatial units to states (e.g., susceptible, infected, post-infected), but the uncertainty associated with these state assignments remains largely ignored or unaccounted for. We demonstrate how recent developments incorporating observation error through repeated sampling extend quite naturally to hierarchical spatial models of disease effects, prevalence, and dynamics in natural systems. A highly pathogenic strain of avian influenza virus in migratory waterfowl and a pathogenic fungus recently implicated in the global loss of amphibian biodiversity are used as motivating examples. Both show that relatively simple modifications to study designs can greatly improve our understanding of complex spatio-temporal disease dynamics by rigorously accounting for uncertainty at each level of the hierarchy. [source]
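The "repeated sampling" idea the authors build on can be made concrete with a minimal single-season occupancy-style model: each site is visited several times, detection is imperfect, and maximum likelihood recovers both true prevalence and the detection probability. This is a generic sketch in the spirit of the approach, with invented parameter values; it is not one of the paper's models.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

rng = np.random.default_rng(2)

# 500 sites, each truly infected with probability psi; each site is sampled
# J times, and an infection present at a site is detected on a given visit
# with probability p.
psi_true, p_true, J = 0.6, 0.4, 5
z = rng.binomial(1, psi_true, 500)       # latent true infection state
y = rng.binomial(J, p_true * z)          # detections per site (always 0 if uninfected)

def neg_loglik(theta):
    psi, p = 1 / (1 + np.exp(-theta))    # optimize on the logit scale
    site_lik = np.where(
        y > 0,
        psi * binom.pmf(y, J, p),        # detected at least once: site must be infected
        psi * (1 - p) ** J + (1 - psi))  # never detected: infected-but-missed, or clean
    return -np.log(site_lik).sum()

res = minimize(neg_loglik, np.zeros(2), method="Nelder-Mead")
psi_hat, p_hat = 1 / (1 + np.exp(-res.x))

naive = (y > 0).mean()   # apparent prevalence, biased low by imperfect detection
```

The naive estimate (fraction of sites with at least one detection) systematically understates prevalence, while the likelihood that mixes over the latent state corrects for the missed infections; that correction is exactly what the repeated visits buy.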


CD4+ T-regulatory cells: toward therapy for human diseases

IMMUNOLOGICAL REVIEWS, Issue 1 2008
Sarah E. Allan
Summary T-regulatory cells (Tregs) have a fundamental role in the establishment and maintenance of peripheral tolerance. There is now compelling evidence that deficits in the numbers and/or function of different types of Tregs can lead to autoimmunity, allergy, and graft rejection, whereas an over-abundance of Tregs can inhibit anti-tumor and anti-pathogen immunity. Experimental models in mice have demonstrated that manipulating the numbers and/or function of Tregs can decrease pathology in a wide range of contexts, including transplantation, autoimmunity, and cancer, and it is widely assumed that similar approaches will be possible in humans. Research into how Tregs can be manipulated therapeutically in humans is most advanced for two main types of CD4+ Tregs: forkhead box protein 3 (FOXP3)+ Tregs and interleukin-10-producing type 1 Tregs (Tr1 cells). The aim of this review is to highlight current information on the characteristics of human FOXP3+ Tregs and Tr1 cells that make them an attractive therapeutic target. We discuss the progress and limitations that must be overcome to develop methods to enhance Tregs in vivo, expand or induce them in vitro for adoptive transfer, and/or inhibit their function in vivo. Although many technical and theoretical challenges remain, the next decade will see the first clinical trials testing whether Treg-based therapies are effective in humans. [source]


A visco-plastic constitutive model for granular soils modified according to non-local and gradient approaches

INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 2 2002
C. di Prisco
Abstract An already available non-associated elastic–viscoplastic constitutive model with anisotropic strain hardening is modified in order to describe both the dependency of the constitutive parameters on relative density and the spatio-temporal evolution of strain localization. To achieve the latter goal, two distinct but similar approaches are introduced: one inspired by the gradient theory and one by the non-local theory. A one-dimensional case concerning a simple shear test for a non-homogeneous, infinitely long, dense sand specimen is numerically discussed, and a finite difference scheme is employed for this purpose. The results obtained by following the two different approaches are critically analysed and compared. Copyright © 2001 John Wiley & Sons, Ltd. [source]
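The non-local ingredient mentioned above amounts to replacing a local field by its weighted spatial average over a neighbourhood whose size is set by an internal length. A minimal one-dimensional sketch follows; the grid, Gaussian kernel, and field values are arbitrary assumptions for illustration, not the paper's formulation.

```python
import numpy as np

# 1-D bar discretized by finite differences. A local strain field with a sharp
# localization band is replaced by its non-local (weighted) average,
#   eps_bar(x) = sum_s w(x - s) eps(s) / sum_s w(x - s),
# using a Gaussian weight with internal length l.
n, L, l = 200, 1.0, 0.05
x = np.linspace(0.0, L, n)
eps = 0.001 * np.ones(n)
eps[(x > 0.48) & (x < 0.52)] = 0.05      # sharp localization band at mid-bar

W = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * l ** 2))
W /= W.sum(axis=1, keepdims=True)        # row-normalize: each row averages to 1
eps_bar = W @ eps                        # smoothed, mesh-insensitive field
```

The averaging smears the band over a width controlled by l rather than by the mesh spacing, which is precisely why non-local (and gradient) enhancements restore mesh objectivity when localization develops.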


Quality control in scholarly publishing: A new proposal

JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 11 2003
Stefano Mizzaro
The Internet has fostered a faster, more interactive and effective model of scholarly publishing. However, as the quantity of information available is constantly increasing, its quality is threatened, since the traditional quality control mechanism of peer review is often not used (e.g., in online repositories of preprints, and by people publishing whatever they want on their Web pages). This paper describes a new kind of electronic scholarly journal, in which the standard submission-review-publication process is replaced by a more sophisticated approach, based on judgments expressed by the readers: in this way, each reader is, potentially, a peer reviewer. New ingredients, not found in similar approaches, are that each reader's judgment is weighted on the basis of the reader's skills as a reviewer, and that readers are encouraged to express correct judgments by a feedback mechanism that estimates their own quality. The new electronic scholarly journal is described in both intuitive and formal ways. Its effectiveness is tested by several laboratory experiments that simulate what might happen if the system were deployed and used. [source]
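The weighting-plus-feedback scheme described above can be sketched as a toy loop: an article's score is the weight-weighted mean of reader judgments, and each reader's weight is nudged up or down according to how far their judgment sits from the emerging consensus. The update rule and all numbers below are illustrative inventions, not Mizzaro's actual formulas.

```python
# Toy sketch of reader-weighted quality control with feedback.

def weighted_score(judgments, weights):
    """Article score = weight-weighted mean of reader judgments (0-10 scale)."""
    total = sum(weights[r] for r in judgments)
    return sum(weights[r] * j for r, j in judgments.items()) / total

def update_weights(judgments, weights, lr=0.1):
    """Feedback: readers near the consensus gain weight, outliers lose it."""
    consensus = weighted_score(judgments, weights)
    for r, j in judgments.items():
        error = abs(j - consensus) / 5.0          # normalized disagreement
        weights[r] = max(0.05, weights[r] * (1 + lr * 2 * (0.5 - error)))

weights = {"alice": 1.0, "bob": 1.0, "carol": 1.0}
judgments = {"alice": 8.0, "bob": 7.5, "carol": 1.0}   # carol is the outlier
for _ in range(5):                                      # repeated feedback rounds
    update_weights(judgments, weights)
score = weighted_score(judgments, weights)              # outlier's influence shrinks
```

Each round the consensus drifts toward the agreeing majority and the outlier's weight decays, mimicking the paper's idea that a reader's influence should track their estimated quality as a reviewer.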


A Human,Automation Interface Model to Guide Automation Design of System Functions

NAVAL ENGINEERS JOURNAL, Issue 1 2007
JOSHUA S. KENNEDY
A major component of the US Army's Future Combat Systems (FCS) will be a fleet of eight different manned ground vehicles (MGV). There are promises that "advanced automation" will accomplish many of the tasks formerly performed by soldiers in legacy vehicle systems. However, the current approach to automation design does not relieve the soldier operator of tasks; rather, it changes the role of the soldiers and the work they must do, often in ways unintended and unanticipated. This paper proposes a coherent, top-down, overarching approach to the design of a human–automation interaction model. First, a qualitative model is proposed to drive the functional architecture and human–automation interface scheme for the MGV fleet. Second, the proposed model is applied to a portion of the functional flow of the common crew station on the MGV fleet. Finally, the proposed model is demonstrated quantitatively via a computational task-network modeling program (Improved Performance Research and Integration Tool). The modeling approach offers insights into the impacts on human task-loading, workload, and human performance. Implications for human systems integration domains are discussed, including Manpower and Personnel, Human Factors Engineering, Training, System Safety, and Soldier Survivability. The proposed model gives engineers and scientists a top-down approach to explicitly define and design the interactions between proposed automation schemes and the human crew. Although this paper focuses on the Army's FCS MGV fleet, the model and analytical processes proposed, or similar approaches, are appropriate for many manned systems in multiple domains (aviation, space, maritime, ground transportation, manufacturing, etc.). [source]


Linear system solution by null-space approximation and projection (SNAP)

NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, Issue 1 2007
M. Ilić
Abstract Solutions of large sparse linear systems of equations are usually obtained iteratively by constructing a smaller dimensional subspace such as a Krylov subspace. The convergence of these methods is sometimes hampered by the presence of small eigenvalues, in which case some form of deflation can help improve convergence. The method presented in this paper enables the solution to be approximated by focusing attention directly on the 'small' eigenspace ('singular vector' space). It is based on embedding the solution of the linear system within the eigenvalue problem (singular value problem) in order to facilitate the direct use of methods such as implicitly restarted Arnoldi or Jacobi–Davidson for the linear system solution. The proposed method, called 'solution by null-space approximation and projection' (SNAP), differs from other similar approaches in that it converts the non-homogeneous system into a homogeneous one by constructing an annihilator of the right-hand side. The solution then lies in the null space of the resulting matrix. We examine the construction of a sequence of approximate null spaces using a Jacobi–Davidson style singular value decomposition method, called restarted SNAP-JD, from which an approximate solution can be obtained. Relevant theory is discussed and the method is illustrated by numerical examples where SNAP is compared with both GMRES and GMRES-IR. Copyright © 2006 John Wiley & Sons, Ltd. [source]
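The annihilator construction at the heart of SNAP is easy to demonstrate on a small dense system. In the sketch below, P = I - bbᵀ/(bᵀb) annihilates the right-hand side, so the solution of Ax = b spans the null space of PA; a dense SVD stands in for the paper's iterative Jacobi–Davidson machinery, and the test matrix is an arbitrary well-conditioned example rather than anything from the paper.

```python
import numpy as np

# SNAP idea in miniature: since P b = 0, any x with P A x = 0 satisfies
# A x in span(b); for nonsingular A the null space of P A is spanned by
# A^{-1} b up to scale, so the solution is recovered from a null vector.
rng = np.random.default_rng(3)
n = 8
A = rng.normal(size=(n, n)) + n * np.eye(n)   # arbitrary well-conditioned matrix
b = rng.normal(size=n)

P = np.eye(n) - np.outer(b, b) / (b @ b)      # annihilator of the RHS: P b = 0
M = P @ A                                     # homogeneous system matrix, rank n-1

# Null vector of M = right singular vector for the smallest singular value.
_, s, Vt = np.linalg.svd(M)
v = Vt[-1]                                    # M v ~ 0

# A v = beta * b for some scalar beta; rescale so that A x = b exactly.
beta = (A @ v) @ b / (b @ b)
x = v / beta

residual = np.linalg.norm(A @ x - b)
```

The rescaling step is the only extra work: the null vector fixes the direction of the solution, and projecting A v onto b recovers the scale.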


The Anglo-American Origins and International Diffusion of the "Third Way"

POLITICS & POLICY, Issue 1 2003
Donley T. Studlar
Although much has been written about the meanings of the "Third Way," a term popularized by Prime Minister Tony Blair in Britain and U.S. President Bill Clinton to characterize their similar approaches to governing, little analysis has been done of the phenomenon of the rapid diffusion of this concept internationally. Although the Democratic Leadership Council used the term first in the United States in 1991, it was decided at a high-level meeting between Clinton and New Labour executive officials in 1997 to popularize the term to describe their common approach to governing. This paper describes both the intellectual and political sources of this concept and how it has spread, not only as a label for its originators, but also to other governments and parties in the world. The test of whether the Third Way becomes recognized as a coherent ideology will be whether, over time, those who advocate it become identified with distinctive, consistent policies. [source]