Drawbacks
Selected Abstracts

Qualitative model of concrete acidification due to cathodic protection. MATERIALS AND CORROSION/WERKSTOFFE UND KORROSION, Issue 2 2008. W. H. A. Peelen.
In this paper a mathematical description and numerical implementation for ion transport in concrete due to current passage is developed, in which the heterogeneous equilibrium between Ca2+, OH− and the solid Ca(OH)2 is incorporated. The description is based on the Nernst–Planck equation for ion transport, and on reaction terms for the dissolution/precipitation of Ca(OH)2. This description was implemented in the finite element package Comsol Multiphysics. In this way Ca(OH)2 depletion in a zone at a CP anode adjacent to a bulk of concrete with Ca(OH)2 could be modelled in one calculation. A drawback of this model is that the kinetic parameters in the reaction terms are not known, and must be chosen high to ensure that the dissolution of Ca(OH)2 remains in equilibrium. This proved numerically challenging and sometimes caused long calculation times. The growth rate of the zone without solid depends on the applied current density, the concrete cover, the pore liquid composition and the diffusion constants of Ca2+ and OH−. This rate must be evaluated numerically. This qualitative model of anode acidification shows no participation of Na+; therefore the transport properties of this ion do not affect the acidification rate of concrete. The same holds for any other ion included in the model that is not involved in electrochemical or chemical reactions. [source]

Fast proton spectroscopic imaging using steady-state free precession methods. MAGNETIC RESONANCE IN MEDICINE, Issue 3 2003. Wolfgang Dreher.
Abstract: Various pulse sequences for fast proton spectroscopic imaging (SI) using the steady-state free precession (SSFP) condition are proposed. The sequences use either only the FID-like signal S1, only the echo-like signal S2, or both signals in separate but adjacent acquisition windows. As in SSFP imaging, S1 and S2 are separated by spoiler gradients. RF excitation is performed by slice-selective or chemical-shift-selective pulses. The signals are detected in the absence of a B0 gradient. Spatial localization is achieved by phase-encoding gradients which are applied prior to and rewound after each signal acquisition. Measurements with 2D or 3D spatial resolution were performed at 4.7 T on phantoms and healthy rat brain in vivo, allowing the detection of uncoupled and J-coupled spins. The main advantages of SSFP-based SI are the short minimum total measurement time (Tmin) and the high signal-to-noise ratio per unit measurement time (SNRt). The methods are of particular interest at higher magnetic field strength B0, as TR can be reduced with increasing B0, leading to a reduced Tmin and an increased SNRt. Drawbacks consist of the limited spectral resolution, particularly at lower B0, and the dependence of the signal intensities on T1 and T2. Further improvements are discussed, including optimized data processing and signal detection under oscillating B0 gradients leading to a further reduction in Tmin. Magn Reson Med 50:453–460, 2003. © 2003 Wiley-Liss, Inc. [source]

Drawbacks to Noninteger Scoring for Ordered Categorical Data. BIOMETRICS, Issue 1 2007. Stephen Senn.
Summary: A proposal to improve trend tests by using noninteger scores is examined. It is concluded that despite improved power such tests are usually inferior to the simpler integer-scored approach. [source]
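The Biometrics abstract above contrasts integer and noninteger scoring for trend tests. The following minimal sketch (hypothetical counts and score choices, not taken from the paper) shows how a Cochran–Armitage-style trend test is computed and where the choice of scores enters it:

```python
import numpy as np
from scipy.stats import norm

def trend_test(cases, totals, scores):
    """z statistic of a Cochran-Armitage-style trend test for a 2 x k table."""
    cases, totals, scores = map(np.asarray, (cases, totals, scores))
    p = cases.sum() / totals.sum()                    # pooled proportion
    t = np.sum(scores * (cases - totals * p))         # score-weighted deviation
    var = p * (1 - p) * (np.sum(totals * scores**2)
                         - np.sum(totals * scores)**2 / totals.sum())
    return t / np.sqrt(var)

cases, totals = [10, 15, 22, 30], [50, 50, 50, 50]    # hypothetical 2 x 4 table
for scores in ([1, 2, 3, 4], [1.0, 1.9, 3.2, 4.4]):   # integer vs noninteger
    z = trend_test(cases, totals, scores)
    print(scores, "z =", round(z, 2), "two-sided p =", round(2 * norm.sf(abs(z)), 5))
```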
Drawbacks of endoscopic thoracic sympathectomy (Br J Surg 2004; 91: 264-269). BRITISH JOURNAL OF SURGERY (NOW INCLUDES EUROPEAN JOURNAL OF SURGERY), Issue 8 2004. W. T. Ng. Article first published online: 27 JUL 200. No abstract is available for this article. [source]

Psychiatric Nosology Is Ready for a Paradigm Shift in DSM-V. CLINICAL PSYCHOLOGY: SCIENCE AND PRACTICE, Issue 1 2009. Jack D. Maser.
Data since 1980 demonstrate that the DSM-III model requires revisions in its assumptions and format. Problems inherent in the DSM-III model are considered and a paradigm shift toward a mixed categorical–dimensional classification system for DSM-V is recommended. This will reduce comorbidity, allow symptom weighting, introduce noncriterion symptoms, eliminate NOS categories, and provide new directions to biological researchers. We suggest reevaluating the threshold concept and the use of quality-of-life assessment. A framework for such a revision is presented. Drawbacks to change include retraining of clinicians, administrative and policy changes, and possible reinterpretation of data collected under the DSM-III model. Nevertheless, clinicians and clinical researchers are ready for a diagnostic system that more accurately reflects the patients that they treat and study. [source]

A Multiresolution Model for Soft Objects Supporting Interactive Cuts and Lacerations. COMPUTER GRAPHICS FORUM, Issue 3 2000. Fabio Ganovelli.
Performing a truly interactive and physically-based simulation of complex soft objects is still an open problem in computer animation/simulation. Given the application domain of virtual surgery training, a complete model should be realistic and interactive, and should enable the user to modify the topology of the objects. Recent papers propose the adoption of multiresolution techniques to optimize time performance by representing at high resolution only the object parts considered more important or critical. The speed-ups obtainable at simulation time are counterbalanced by the need for a preprocessing phase strongly dependent on the topology of the object, with the drawback that performing dynamic topology modifications becomes a prohibitive issue. In this paper we present an approach that couples multiresolution and topological modifications, based on the adoption of a particle-system approach to the physical simulation. Our approach is based on a tetrahedral decomposition of space, chosen both for its suitability to support a particle system and for the ready availability of many techniques recently proposed for the simplification and multiresolution management of 3D simplicial decompositions. The multiresolution simulation system is designed to ensure the required speedup and to support dynamic changes of the topology, e.g. due to cuts or lacerations of the represented tissue. [source]
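The Computer Graphics Forum abstract above builds on a particle system over a tetrahedral decomposition. As a rough illustration of that style of simulation (a generic sketch, not the authors' model; the single-tetrahedron mesh and all constants are hypothetical):

```python
import numpy as np

# Hypothetical data: particle positions/velocities and the edges of one tet.
pos = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
vel = np.zeros_like(pos)
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
rest = {e: np.linalg.norm(pos[e[0]] - pos[e[1]]) for e in edges}

k, damping, mass, dt = 50.0, 0.98, 1.0, 1e-3   # assumed material constants

def step(pos, vel):
    """One explicit-Euler step of a mass-spring system on the tet edges."""
    force = np.zeros_like(pos)
    for (i, j) in edges:
        d = pos[j] - pos[i]
        length = np.linalg.norm(d)
        f = k * (length - rest[(i, j)]) * d / length   # linear spring force
        force[i] += f
        force[j] -= f
    vel = damping * (vel + dt * force / mass)
    return pos + dt * vel, vel

pos, vel = step(pos, vel)
```

A topology modification such as a cut then amounts to deleting edges or tetrahedra from the connectivity lists, which is what makes a particle-system formulation attractive for interactive cuts.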
Development and validation of a smoothing-splines-based correction method for improving the analysis of CEST-MR images. CONTRAST MEDIA & MOLECULAR IMAGING, Issue 4 2008. J. Stancanello.
Abstract: Chemical exchange saturation transfer (CEST) imaging is an emerging MRI technique relying on the use of endogenous or exogenous molecules containing exchangeable proton pools. The heterogeneity of the water resonance frequency offset plays a key role in the occurrence of artifacts in CEST-MR images. To limit this drawback, a new smoothing-splines-based method for fitting and correcting Z-spectra was developed to compensate for low signal-to-noise ratio (SNR) without any a priori model. Global and local voxel-by-voxel Z-spectra were interpolated by smoothing splines with smoothing terms aimed at suppressing noise. Thus, a map of the water frequency offset ('zero' map) was used to correctly calculate the saturation transfer (ST) for each voxel. Simulations were performed to compare the method to polynomials and zero-only-corrected splines on the basis of SNR improvement. In vitro acquisitions of capillaries containing solutions of LIPOCEST agents at different concentrations were performed to experimentally validate the results from the simulations. Additionally, ex vivo investigations of bovine muscle mass injected with LIPOCEST agents were performed as a function of increasing pulse power. The results from simulations and experiments highlighted the importance of a proper 'zero' correction (15% decrease of fictitious CEST signal in phantoms and ex vivo preparations) and proved the method to be more accurate than previously published ones, often providing an SNR higher than 5 under different simulated and experimental noise conditions. In conclusion, the proposed method offers an accurate tool for CEST investigation. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Comparison of Two Methods of Interpretation of Langmuir Probe Data for an Inductively Coupled Oxygen Plasma. CONTRIBUTIONS TO PLASMA PHYSICS, Issue 5-6 2006. T. H. Chung.
Abstract: The Langmuir probe technique has a drawback when applied to electronegative plasmas, since it is difficult to interpret the probe I–V data: the positive ion flux to the probe is modified by the presence of negative ions. In this study, an inductively coupled oxygen RF plasma is employed to perform Langmuir probe measurements of the electronegative discharge. Plasma parameters are obtained from the Langmuir probe measurements using two different methods, one based on electron energy distribution function (EEDF) integrals and one based on the fluid model for the modified ion flux. The EEDF is measured by double differentiation of the I–V characteristic according to the Druyvesteyn formula. The electron densities estimated with the two methods are compared; the EEDF integral method gives slightly higher values than the modified ion flux method. It is observed that at low pressure the EEDF is close to a Maxwellian. Generally, as the pressure increases, the distributions switch to bi-Maxwellian and then to Druyvesteyn, and suggest some depletion of electrons with larger energies. (© 2006 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]

The current opinions and use of MTA for apical barrier formation of non-vital immature permanent incisors by consultants in paediatric dentistry in the UK. DENTAL TRAUMATOLOGY, Issue 1 2008. Gillian Catherine Mooney.
A semi-structured postal questionnaire was sent to all known consultants in paediatric dentistry in the UK. The response rate was 78.6% (44 of 56). Thirty-eight consultants (86.3%) agreed that the use of this material was a good idea, with 68.2% having used or arranged for its use in apical barrier formation. Forty-two consultants (95.5%) agreed that the reduced number of visits was an advantage of the technique, while only 34.1% agreed that the procedure was less likely to weaken the tooth; 63.6% agreed that material and equipment costs were a drawback, and 50% agreed that the lack of available evidence was a disadvantage to its use. The results from this study give an indication of the extent of MTA use by consultant-led services in paediatric dentistry in the UK and highlight the need for a multi-centre randomised controlled clinical trial. [source]
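In the plasma-physics abstract above, the EEDF is obtained by double differentiation of the probe I–V characteristic via the Druyvesteyn formula. A minimal numerical sketch follows, using a synthetic current trace and an assumed probe area (not the authors' data):

```python
import numpy as np

e, m_e = 1.602e-19, 9.109e-31        # electron charge (C) and mass (kg)
A = 1.0e-6                           # assumed probe area in m^2 (hypothetical)

# Synthetic electron-retardation sweep: V is the retarding potential (V),
# I_e the electron current (A); a Maxwellian-like decay with Te ~ 3 eV.
V = np.linspace(0.1, 20.0, 400)
I_e = 1e-3 * np.exp(-V / 3.0)

d2I_dV2 = np.gradient(np.gradient(I_e, V), V)   # d^2 I / dV^2

# Druyvesteyn formula: g(V) is the EEDF per unit energy (per volt here);
# integrating it over the sweep estimates the electron density.
g = (2.0 * m_e / (e**2 * A)) * np.sqrt(2.0 * e * V / m_e) * d2I_dV2
n_e = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(V))   # trapezoidal integral
print(f"estimated n_e = {n_e:.2e} m^-3")
```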
CE frontal analysis based on simultaneous UV and contactless conductivity detection: A general setup for studying noncovalent interactions. ELECTROPHORESIS, Issue 3 2007. Henrik Jensen.
Abstract: CE frontal analysis (CE-FA) has been established as a powerful tool to study noncovalent interactions between macromolecules and small molecules such as drug substances or pharmaceutical excipients. However, when using traditional commercial CE instrumentation, a serious drawback is that only UV-active compounds can be studied. In recent years, contactless conductivity detection has become an attractive alternative to UV detection in CE due to its high versatility. In this study, we combine contactless conductivity detection and UV detection in a highly versatile setup for profiling noncovalent interactions between low-molecular-weight molecules and macromolecules. For molecules having a chromophore, the setup allows determination of binding constants using two independent detectors. The new contactless conductivity detection cell is compatible with commercial CE instrumentation and is therefore easily implemented in any analysis laboratory with CE expertise. [source]

Comprehensive proteome analysis by chromatographic protein prefractionation. ELECTROPHORESIS, Issue 7-8 2004. Pierre Lescuyer.
Abstract: Protein copy number is distributed over 7 to 8 orders of magnitude in cells and probably up to 12 orders of magnitude in plasma. Classical silver-stained two-dimensional electrophoresis (2-DE) can only display up to four orders of magnitude. This is a major drawback, since it is assumed that most regulatory proteins are low-abundance gene products. It is thus clear that the separation of low-copy-number proteins in amounts sufficient for postseparation analysis is an important issue in proteome studies to complete the comprehensive description of the proteome of any given cell type. The visualization of a polypeptide on a 2-DE gel will depend on the copy number, on the quantity loaded onto the gel and on the method of detection. As the amount of protein that can be loaded onto a gel is limited, one efficient solution is to fractionate the sample prior to 2-DE analysis. Several approaches exist, including subcellular fractionation, affinity purification, and chromatographic and electrophoretic protein prefractionation. The chromatographic step adds a new dimension to the protein separation using specific protein properties: it allows proteins to be adsorbed to a surface and eluted differentially under certain conditions. This review article presents studies combining chromatography-based methods with 2-DE analysis and draws general conclusions on this strategy. [source]

Multiple polypeptide forms observed in two-dimensional gels of Methylococcus capsulatus (Bath) polypeptides are generated during the separation procedure. ELECTROPHORESIS, Issue 4 2003. Frode S. Berven.
Abstract: We have examined two-dimensional electrophoresis (2-DE) gel maps of polypeptides from the Gram-negative bacterium Methylococcus capsulatus (Bath) and found the same widespread trains of spots as often reported in 2-DE gels of polypeptides of other Gram-negative bacteria. Some of the trains of polypeptides, both from the outer membrane and the soluble protein fraction, were shown to be generated during the separation procedure of 2-DE, and not by covalent post-translational modifications. The trains were found to be regenerated when rerunning individual polypeptide spots. The polypeptides analysed that gave this type of train were all classified as stable polypeptides according to the instability index of Guruprasad et al. (Protein Eng. 1990, 4, 155–161). The phenomenon most likely reflects conformational equilibria of polypeptides arising from the experimental conditions used, and is a clear drawback of the standard 2-DE procedure, making the gel picture unnecessarily complex to analyse. [source]
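The CE frontal analysis abstract above mentions determining binding constants. For a 1:1 interaction, measuring the free-ligand concentration in an equilibrated mixture is sufficient to back out the constant; a minimal sketch with hypothetical concentrations:

```python
# K = [PL] / ([P][L]) for a 1:1 interaction P + L <-> PL.
# CE-FA measures the free ligand concentration L_free in the plateau region;
# the totals and L_free below are hypothetical numbers.

def binding_constant(P_total, L_total, L_free):
    """Return K (L/mol) from total and measured free concentrations (mol/L)."""
    PL = L_total - L_free          # bound ligand
    P_free = P_total - PL          # unbound macromolecule
    return PL / (P_free * L_free)

K = binding_constant(P_total=50e-6, L_total=100e-6, L_free=65e-6)
print(f"K = {K:.3g} L/mol")        # ~3.6e4 L/mol for these hypothetical inputs
```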
Ultrasonic treatment of waste activated sludge. ENVIRONMENTAL PROGRESS & SUSTAINABLE ENERGY, Issue 2 2006. Raf Dewil.
Abstract: Activated sludge processes are key technologies to treat wastewater. These biological processes produce huge amounts of waste activated sludge (WAS), now commonly called biosolids. Mechanical, thermal, and/or chemical WAS conditioning techniques have been proposed to reduce the sludge burden. The ultrasonic treatment of WAS is quite novel. The present paper reports on extensive investigations using an ultrasonic treatment of WAS, to study its potential to meet one or all of four objectives: (1) reduce WAS quantities; (2) achieve a better dewaterability; (3) provoke a release of soluble chemical oxygen demand (COD) from the biosolids, preferably transformed into biodegradable organics; and (4) possibly destroy the filamentous microorganisms responsible for sludge bulking. Although meeting these objectives would help to solve the problems cited, the energy consumption could be a considerable drawback: the paper thus assesses whether all or some objectives are met, and at what operational cost. A literature survey defines the occurring phenomena (cavitation) and the important operation parameters [such as frequency, duration, specific energy input (SE)]. The experiments were carried out in a batch reactor of volume up to 2.3 L. The ultrasonic equipment consisted of a generator, a converter, and a sonotrode, supplied by Alpha Ultrasonics under the brand name Telsonic. Three different kinds of sludge were tested, with concentrations of dry solids (DS) between approximately 3.5 and 14 g DS/L WAS. Ultrasonic energy was introduced in a continuous manner (as opposed to pulsed operation). The major operational parameters studied were the duration of the ultrasonic treatment and the specific energy input; the applied frequency was set at 20 kHz. The release of COD from the WAS phase into the filtrate phase is a function of the specific energy input, with yields of nearly 30% achievable at SE values of 30,000 kJ/kg DS. A major fraction of the COD is transformed into biodegradable organics (BOD). The reduction in the DS fraction of the sludge is proportional to the COD release rates. Although the DS content is reduced, the dewaterability of the sludge is not improved. This reflects itself in increased filtration times during vacuum filtration and in increased values of the capillary suction time (CST). The more difficult dewaterability is the result of considerably reduced floc sizes, offering an extended surface area: more surface water is bound (CST increases) and the filterability decreases as a result of clogging of the cake. To reach the same dryness as for the untreated cake, the required dosage of polyelectrolyte is nearly doubled when the SE of the ultrasound treatment is increased from 7,500 to 20,000 kJ/kg DS. The ultrasonic reduction of filamentous WAS organisms is not conclusive, and very little effect is seen at low intensities and short treatment durations. Microscopic analysis of the WAS identified the dominant presence of Actinomyces. The release of soluble COD and BOD certainly merits further research. © 2006 American Institute of Chemical Engineers Environ Prog, 2006 [source]
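The specific energy input SE quoted throughout the sludge abstract above (kJ/kg DS) is conventionally the delivered ultrasonic energy normalized by the dry-solids mass. A small worked example with hypothetical operating values chosen to land in the 7,500–30,000 kJ/kg DS range the abstract discusses:

```python
def specific_energy(power_W, time_s, volume_L, dry_solids_g_per_L):
    """SE in kJ/kg DS = (P * t) / (V * DS)."""
    energy_kJ = power_W * time_s / 1000.0
    ds_mass_kg = volume_L * dry_solids_g_per_L / 1000.0
    return energy_kJ / ds_mass_kg

se = specific_energy(power_W=200, time_s=30 * 60, volume_L=2.3,
                     dry_solids_g_per_L=8.0)
print(f"SE = {se:,.0f} kJ/kg DS")   # ~19,565 kJ/kg DS for these inputs
```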
Forecasting daily high ozone concentrations by classification trees. ENVIRONMETRICS, Issue 2 2004. F. Bruno.
Abstract: This article proposes the use of classification trees (CART) as a suitable technique for forecasting the daily exceedance of ozone standards established by Italian law. A model is formulated for predicting, 1 and 2 days beforehand, the most probable class of the maximum daily urban ozone concentration in the city of Bologna. The standard employed is the so-called 'warning level' (180 µg/m3). Forecasted meteorological variables are considered as predictors. The pollution data show a considerable discrepancy between the sizes of the two classes of events: the first class includes those days when the observed maximum value exceeds the established standard, while the second contains those when it does not. Due to this peculiarity, model selection procedures using cross-validation usually lead to overpruning. This drawback can be overcome by means of techniques which replicate observations, through the modification of their inclusion probabilities in the cross-validation sets. Copyright © 2004 John Wiley & Sons, Ltd. [source]

Upregulation of Brain Expression of P-Glycoprotein in MRP2-deficient TR− Rats Resembles Seizure-induced Up-regulation of This Drug Efflux Transporter in Normal Rats. EPILEPSIA, Issue 4 2007. Katrin Hoffmann.
Summary: Purpose: The multidrug resistance protein 2 (MRP2) is a drug efflux transporter that is expressed predominantly at the apical domain of hepatocytes but seems also to be expressed at the apical membrane of brain capillary endothelial cells that form the blood–brain barrier (BBB). MRP2 is absent in the transport-deficient (TR−) Wistar rat mutant, so this rat strain has been very helpful in defining substrates of MRP2 by comparing tissue concentrations or functional activities of compounds in MRP2-deficient rats with those in transport-competent Wistar rats. Using this strategy to study the involvement of MRP2 in brain access of antiepileptic drugs (AEDs), we recently reported that phenytoin is a substrate for MRP2 in the BBB. However, one drawback of such studies in genetically deficient rats is the fact that compensatory changes with upregulation of other transporters can occur. This prompted us to study the brain expression of P-glycoprotein (Pgp), a major drug efflux transporter in many tissues, including the BBB, in TR− rats compared with nonmutant (wild-type) Wistar rats. Methods: The expression of MRP2 and Pgp in brain and liver sections of TR− rats and normal Wistar rats was determined with immunohistochemistry, using a novel, highly selective monoclonal MRP2 antibody and the monoclonal Pgp antibody C219, respectively. Results: Immunofluorescence staining with the MRP2 antibody was found to label a high number of microvessels throughout the brain in normal Wistar rats, whereas such labeling was absent in TR− rats. TR− rats exhibited a significant up-regulation of Pgp in brain capillary endothelial cells compared with wild-type controls. No such obvious upregulation of Pgp was observed in liver sections. A comparable overexpression of Pgp in the BBB was obtained after pilocarpine-induced seizures in wild-type Wistar rats. Experiments with systemic administration of the Pgp substrate phenobarbital and the selective Pgp inhibitor tariquidar in TR− rats substantiated that Pgp is functional and compensates for the lack of MRP2 in the BBB. Conclusions: The data on TR− rats indicate that Pgp plays an important role in the compensation of MRP2 deficiency in the BBB. Because such a compensatory mechanism most likely occurs to reduce injury to the brain from cytotoxic compounds, the present data substantiate the concept that MRP2 performs a protective role in the BBB. Furthermore, our data suggest that TR− rats are an interesting tool to study the consequences of overexpression of Pgp in the BBB on access of drugs to the brain, without the need to induce seizures or other Pgp-enhancing events for this purpose. [source]
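The Environmetrics abstract above notes that the rarity of exceedance days causes overpruning under cross-validation and proposes reweighting observations via their inclusion probabilities. The sketch below illustrates the underlying imbalance effect on synthetic data, using scikit-learn's class_weight option as a simple stand-in for such reweighting (not the paper's exact scheme):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 3))                        # e.g. forecast covariates
y = (X[:, 0] + 0.5 * X[:, 1] > 2.0).astype(int)    # rare "exceedance" class
print("exceedance fraction:", y.mean())

for weight in (None, "balanced"):
    tree = DecisionTreeClassifier(max_depth=3, class_weight=weight,
                                  random_state=0).fit(X, y)
    recall = tree.predict(X)[y == 1].mean()        # recall on exceedance days
    print(f"class_weight={weight}: recall on rare class = {recall:.2f}")
```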
Neutral Group-IV Metal Catalysts for the Intramolecular Hydroamination of Alkenes. EUROPEAN JOURNAL OF ORGANIC CHEMISTRY, Issue 16 2008. Carsten Müller.
Abstract: A detailed comparison of the group-IV metal catalysts Ti(NMe2)4, Ind2TiMe2, Ind2ZrMe2 and Ind2HfMe2 in the intramolecular hydroamination of amino alkenes is presented. Among these catalysts, the benchmark catalyst Ti(NMe2)4 is the most active in the formation of pyrrolidines. A comparison between Ind2TiMe2, Ind2ZrMe2 and Ind2HfMe2 suggests that in the synthesis of pyrrolidines, Zr complexes show the highest catalytic activity of the group-IV metal catalysts. Although the Ind2TiMe2- and Ind2ZrMe2-catalyzed formation of a pyrrolidine is first-order in the concentration of the substrate, the corresponding Ti(NMe2)4-catalyzed cyclization is second-order in the concentration of the substrate. The results obtained for the formation of piperidines catalyzed by Ti(NMe2)4, Ind2TiMe2, Ind2ZrMe2 and Ind2HfMe2 suggest that for these reactions, Ti catalysts show increased catalytic activity compared with the corresponding Zr catalysts. Unfortunately, the formation of aminocyclopentane side-products by C–H activation processes is a severe drawback of the Ti catalysts. The corresponding side-products are not formed in Ind2ZrMe2- and Ind2HfMe2-catalyzed reactions; however, the former catalyst gives better yields of the desired piperidine products. In contrast to the results obtained for the synthesis of pyrrolidines, the formation of a piperidine is zero-order in the concentration of the substrate for the indenyl catalysts Ind2TiMe2 and Ind2ZrMe2, and first-order for the homoleptic catalyst Ti(NMe2)4. Interestingly, Ind2TiMe2 is able to catalyze a slow hydroamination of an N-methylated amino alkene, whereas the homoleptic complex Ti(NMe2)4 as well as Ind2ZrMe2 and Ind2HfMe2 do not catalyze the same reaction. (© Wiley-VCH Verlag GmbH & Co. KGaA, 69451 Weinheim, Germany, 2008) [source]
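The hydroamination abstract above assigns zero-, first- and second-order kinetics in substrate concentration. For reference, the standard integrated rate laws behind such assignments (textbook forms, not taken from the paper) are:

```latex
% Integrated rate laws for -\,d[S]/dt = k\,[S]^n
\begin{aligned}
n=0:&\quad [S](t) = [S]_0 - kt,\\
n=1:&\quad [S](t) = [S]_0\, e^{-kt},\\
n=2:&\quad \frac{1}{[S](t)} = \frac{1}{[S]_0} + kt.
\end{aligned}
```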
Vectorial summation of probabilistic current harmonics in power systems: From a bivariate distribution model towards a univariate probability function. EUROPEAN TRANSACTIONS ON ELECTRICAL POWER, Issue 1 2000. Y. J. Wang.
This paper extends the investigation into the bivariate normal distribution (BND) model, which has been widely used to study the asymptotic behaviour of the sum of a sufficiently large number of randomly-varying harmonic phasors (of the same frequency). Although the BND model is effective and applicable to most problems involving harmonic summation, its main drawback resides in the computation time required to extract the probability density function of the harmonic magnitude from the two-dimensional BND model. This paper proposes a novel approach to the problem by assimilating the generalized Gamma distribution (GGD) model to the marginal distribution (the magnitude) of the BND using the method of moments. The proposed method can accurately estimate the parameters of the GGD model without time-consuming calculation. A power system containing ten harmonic sources is taken as an example, where a comparison of the Monte-Carlo simulation, the BND model and the GGD model is given and discussed. The comparison shows that the GGD model approximates the BND model very well. [source]

Therapeutic angiogenesis and vasculogenesis for tissue regeneration. EXPERIMENTAL PHYSIOLOGY, Issue 3 2005. Paolo Madeddu.
Therapeutic angiogenesis/vasculogenesis holds promise for the cure of ischaemic disease. The approach postulates the manipulation of the spontaneous healing response by supplementation of growth factors or transplantation of vascular progenitor cells. These supplements are intended to foster the formation of arterial collaterals and promote the regeneration of damaged tissues. Angiogenic factors are generally delivered in the form of recombinant proteins or by gene transfer using viral vectors. In addition, new non-viral methods are gaining importance for their safer profile. The association of growth factors with different biological activity might offer distinct advantages in terms of efficacy, yet combined approaches require further optimization. Alternatively, substances with pleiotropic activity might be considered, by virtue of their ability to target multiple mechanisms. For instance, some angiogenic factors not only stimulate the growth of arterioles and capillaries, but also inhibit vascular destabilization triggered by metabolic and oxidative stress. Transplantation of endothelial progenitor cells was recently proposed for the treatment of peripheral and myocardial ischaemia. Progenitor cells can be transplanted either without any preliminary conditioning or after ex vivo genetic manipulation. Delivery of genetically modified progenitor cells eliminates the drawback of an immune response against viral vectors and makes it feasible to repeat the therapeutic procedure in case of injury recurrence. It is envisioned that these new approaches of regenerative medicine will open unprecedented opportunities for the care of life-threatening diseases. [source]

A Direct, Multiplex Biosensor Platform for Pathogen Detection Based on Cross-linked Polydiacetylene (PDA) Supramolecules. ADVANCED FUNCTIONAL MATERIALS, Issue 23 2009. Cheol Hee Park.
Abstract: This study focuses on the development of a multiplex pathogen-detection platform based on polydiacetylene (PDA) using a novel immobilization procedure. PDA liposome-based solid sensors have a critical drawback in that the PDA liposomes are not stably immobilized onto the solid substrate. To overcome this problem, an interlinker, ethylenediamine, is introduced, which acts as a cross-linker between individual PDA liposomes. The quantity of ethylenediamine added was optimized to 1 mM, as measured by the fluorescence signal emitted by the stably immobilized PDA liposomes; at this concentration the fluorescence signal is 10 times higher than for PDA chips made without the interlinker. This procedure is used to manufacture PDA liposome-based multiplex biosensor arrays for well-known water- and food-borne pathogens. The fabricated biosensor was able to perform simultaneous and quantitative detection of 6 species of pathogens. As such, the results demonstrated in this research can be exploited for the development of more advanced PDA-based biosensors and diagnostics. [source]
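A minimal Monte-Carlo sketch of the setting in the power-harmonics abstract above: the magnitude of a sum of randomly varying same-frequency phasors. The source count and the amplitude/phase distributions are hypothetical; the paper's contribution is to fit this magnitude distribution with a generalized Gamma model via the method of moments rather than extracting it from the bivariate normal model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sources, n_trials = 10, 100_000

amp = rng.uniform(0.5, 1.5, size=(n_trials, n_sources))        # per-source magnitude
phase = rng.uniform(0, 2 * np.pi, size=(n_trials, n_sources))  # per-source angle
total = np.abs(np.sum(amp * np.exp(1j * phase), axis=1))       # |vector sum|

# Raw moments of the magnitude: the quantities a method-of-moments fit of a
# three-parameter distribution would be matched against.
m1, m2, m3 = (np.mean(total**k) for k in (1, 2, 3))
print(f"E|S|={m1:.3f}  E|S|^2={m2:.3f}  E|S|^3={m3:.3f}")
```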
Optimal Control of Rigid-Link Manipulators by Indirect Methods. GAMM - MITTEILUNGEN, Issue 1 2008. Rainer Callies.
Abstract: The present paper is a survey and research paper on the treatment of optimal control problems for rigid-link manipulators by indirect methods. Maximum-Principle-based approaches provide an excellent tool to calculate optimal reference trajectories for multi-link manipulators with high accuracy. Their major drawback was the need to explicitly formulate the complicated system of adjoint differential equations and to apply the full apparatus of optimal control theory, which is necessary to convert the optimal control problem into a piecewise defined, nonlinear multi-point boundary value problem. Accurate and efficient access to first- and higher-order derivatives is crucial. The approach described in this paper allows all the derivative information to be generated recursively and simultaneously with the recursive formulation of the equations of motion. Nonlinear state and control constraints are treated without any simplifications by transforming them into sequences of systems of linear equations. By these means, the modeling of the complete optimal control problem and the accompanying boundary value problem is automated to a great extent. The fast numerical solution is by the advanced multiple shooting method JANUS. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]

The Design and Realization of Flexible, Long-Lived Light-Emitting Electrochemical Cells. ADVANCED FUNCTIONAL MATERIALS, Issue 16 2009. Junfeng Fang.
Abstract: Polymer light-emitting electrochemical cells (LECs) offer an attractive opportunity for low-cost production of functional devices in flexible and large-area configurations, but their critical drawback in comparison to competing light-emission technologies is a limited operational lifetime. Here, it is demonstrated that it is possible to improve the lifetime by straightforward and motivated means from a typical value of a few hours to more than one month of uninterrupted operation at significant brightness (>100 cd m−2) and relatively high power conversion efficiency (2 lm W−1 for orange-red emission). Specifically, by optimizing the composition of the active material and by employing an appropriate operational protocol, a desired doping structure is designed and detrimental chemical and electrochemical side reactions are identified and minimized. Moreover, the first functional flexible LEC with a similarly promising device performance is demonstrated. [source]

Pleiotropy and principal components of heritability combine to increase power for association analysis. GENETIC EPIDEMIOLOGY, Issue 1 2008. Lambertus Klei.
Abstract: When many correlated traits are measured, the potential exists to discover the coordinated control of these traits via genotyped polymorphisms. A common statistical approach to this problem involves assessing the relationship between each phenotype and each single nucleotide polymorphism (SNP) individually (PHN), and applying a Bonferroni correction for the effective number of independent tests conducted. Alternatively, one can apply a dimension reduction technique, such as estimation of principal components, and test for an association with the principal components of the phenotypes (PCP) rather than the individual phenotypes. Building on the work of Lange and colleagues, we develop an alternative method based on the principal component of heritability (PCH). For each SNP, the PCH approach reduces the phenotypes to a single trait that has a higher heritability than any other linear combination of the phenotypes. As a result, the association between a SNP and the derived trait is often easier to detect than an association with any of the individual phenotypes or the PCP. When applied to unrelated subjects, PCH has a drawback: for each SNP it is necessary to estimate the vector of loadings that maximizes the heritability over all phenotypes. We develop a method of iterated sample splitting that uses one portion of the data for training and the remainder for testing. This cross-validation approach maintains type I error control and yet utilizes the data efficiently, resulting in a powerful test for association. Genet. Epidemiol. 2007. © 2007 Wiley-Liss, Inc. [source]
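As background to the indirect-methods abstract above: the Maximum Principle converts the optimal control problem into a boundary value problem through the following standard first-order conditions (textbook form, not specific to the paper), where H(x, u, λ) = L + λᵀf is the Hamiltonian, x the state, λ the adjoint and u the control:

```latex
% Necessary conditions of the Maximum Principle for \dot{x} = f(x,u)
\begin{aligned}
\dot{x} &= \frac{\partial H}{\partial \lambda}, &
\dot{\lambda} &= -\frac{\partial H}{\partial x}, &
u^{*}(t) &= \arg\min_{u}\, H(x, u, \lambda).
\end{aligned}
```

The adjoint equations on the right are exactly the "complicated system" the abstract says the recursive scheme generates automatically alongside the equations of motion.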
Power and robustness of a score test for linkage analysis of quantitative traits using identity by descent data on sib pairs. GENETIC EPIDEMIOLOGY, Issue 4 2001. Darlene R. Goldstein.
Abstract: Identification of genes involved in complex traits by traditional (lod score) linkage analysis is difficult due to many complicating factors. An unfortunate drawback of non-parametric procedures in general, though, is their low power to detect genetic effects. Recently, Dudoit and Speed [2000] proposed using a (likelihood-based) score test for detecting linkage with IBD data on sib pairs. This method uses the likelihood for θ, the recombination fraction between a trait locus and a marker locus, conditional on the phenotypes of the two sibs, to test the null hypothesis of no linkage (θ = ½). Although a genetic model must be specified, the approach offers several advantages. This paper presents results of simulation studies characterizing the power and robustness properties of this score test for linkage, and compares the power of the test to the Haseman-Elston and modified Haseman-Elston tests. The score test is seen to have impressively high power across a broad range of true and assumed models, particularly under multiple ascertainment. Assuming an additive model with a moderate allele frequency, in the range of p = 0.2 to 0.5, along with heritability H = 0.3 and a moderate residual correlation ρ = 0.2, resulted in very good overall performance across a wide range of trait-generating models. Generally, our results indicate that this score test for linkage offers a high degree of protection against wrong assumptions, due to its strong robustness when used with the recommended additive model. Genet. Epidemiol. 20:415–431, 2001. © 2001 Wiley-Liss, Inc. [source]

Addressing non-uniqueness in linearized multichannel surface wave inversion. GEOPHYSICAL PROSPECTING, Issue 1 2009. Michele Cercato.
ABSTRACT: The multichannel analysis of surface waves method is based on the inversion of observed Rayleigh-wave phase-velocity dispersion curves to estimate the shear-wave velocity profile of the site under investigation. This inverse problem is nonlinear and is often solved using 'local' or linearized inversion strategies. Among linearized inversion algorithms, least-squares methods are widely used in research and prevail in commercial software; the main drawback of this class of methods is their limited capability to explore the model parameter space. The possibility for the estimated solution to be trapped in local minima of the objective function strongly depends on the degree of nonuniqueness of the problem, which can be reduced by an adequate model parameterization and/or by imposing constraints on the solution. In this article, a linearized algorithm based on inequality constraints is introduced for the inversion of observed dispersion curves; this provides a flexible way to insert a priori information as well as physical constraints into the inversion process. As linearized inversion methods are strongly dependent on the choice of the initial model and on the accuracy of partial derivative calculations, these factors are carefully reviewed. Attention is also focused on the appraisal of the inverted solution, using resolution analysis and uncertainty estimation together with a posteriori effective-velocity modelling. The efficiency and stability of the proposed approach are demonstrated using both synthetic and real data; in the latter case, cross-hole S-wave velocity measurements are blind-compared with the results of the inversion process. [source]
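A generic sketch, not the author's algorithm, of one linearized inversion step with inequality constraints in the spirit of the surface-wave abstract above: solve the local least-squares problem J·dm ≈ dd subject to bounds that keep each layer's shear velocity inside an a priori plausible range. The Jacobian, residuals, model and bounds are made-up numbers.

```python
import numpy as np
from scipy.optimize import lsq_linear

J = np.array([[0.8, 0.1, 0.0],      # sensitivity of each dispersion point
              [0.3, 0.6, 0.1],      # to each layer's shear velocity (made up)
              [0.1, 0.4, 0.5],
              [0.0, 0.2, 0.7]])
dd = np.array([30.0, 25.0, 12.0, 8.0])        # data residuals (m/s), made up

m0 = np.array([200.0, 300.0, 450.0])          # current model (m/s)
lower = np.array([150.0, 250.0, 400.0]) - m0  # a priori bounds on the update
upper = np.array([280.0, 380.0, 600.0]) - m0

res = lsq_linear(J, dd, bounds=(lower, upper))
print("model update:", res.x, "-> new model:", m0 + res.x)
```

Iterating such bounded steps is one simple way the a priori information can steer a linearized inversion away from implausible local minima.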
Validation of microarray-based resequencing of 93 worldwide mitochondrial genomes. HUMAN MUTATION, Issue 1 2009. Anne Hartmann.
Abstract: The human mitochondrial genome consists of a multicopy, circular dsDNA molecule of 16,569 base pairs. It encodes 13 proteins, two ribosomal genes, and 22 tRNAs that are essential in the generation of cellular ATP by oxidative phosphorylation in eukaryotic cells. Germline mutations in mitochondrial DNA (mtDNA) are an important cause of maternally inherited diseases, while somatic mtDNA mutations may play important roles in aging and cancer. mtDNA polymorphisms are also widely used in population and forensic genetics. Therefore, methods that allow the rapid, inexpensive and accurate sequencing of mtDNA are of great interest. One such method is the Affymetrix GeneChip® Human Mitochondrial Resequencing Array 2.0 (MitoChip v.2.0) (Santa Clara, CA). A direct comparison of 93 worldwide mitochondrial genomes sequenced by both the MitoChip and dideoxy terminator sequencing revealed an average call rate of 99.48% and an accuracy of ≥99.98% for the MitoChip. The good performance was achieved by using in-house software for the automated analysis of additional probes on the array that cover the most common haplotypes in the hypervariable regions (HVR). Failure to call a base was associated mostly with the presence of either a run of ≥4 C bases or a sequence variant within 12 bases up- or downstream of that base. A major drawback of the MitoChip is its inability to detect insertions/deletions and its low sensitivity and specificity in the detection of heteroplasmy. However, the vast majority of haplogroup-defining polymorphisms in the mtDNA phylogeny could be called unambiguously, and more rapidly than with conventional sequencing. Hum Mutat 0, 1–8, 2008. © 2008 Wiley-Liss, Inc. [source]

Invited reaction: Informal learning and the transfer of learning: How managers develop proficiency. HUMAN RESOURCE DEVELOPMENT QUARTERLY, Issue 4 2003. Victoria J. Marsick.
Enos, Kehrhahn, and Bell have made an important contribution to measuring informal learning and its transfer as proficiency in a set of company-identified managerial skills. Measurement of informal learning is at the crux of research that seeks to link learning outcomes to other indicators of effective performance. The ability to show how informal learning affects managerial proficiency would also help practitioners build a better business case for planning and supporting informal learning. A drawback of the research methodology employed in this study is its reliance on self-report, which the authors note but do not fully discuss. Questions also arise about the nature of the skills measured and the nature of managerial work in what appears to be a period of transition in the company they examined. I conclude with some thoughts on alternative lenses for considering implications for practice. [source]
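The two headline figures in the MitoChip abstract above, call rate and accuracy, are simple ratios of called and matching bases. A toy illustration with made-up sequences (an 'N' marks a base the array failed to call; the dideoxy sequence serves as the reference):

```python
array_calls = "ACGTNGCGTTACGNA"   # hypothetical array output, 'N' = no-call
reference   = "ACGTAACGTTACGTA"   # hypothetical dideoxy reference

called = [(a, r) for a, r in zip(array_calls, reference) if a != "N"]
call_rate = len(called) / len(array_calls)            # fraction of bases called
accuracy = sum(a == r for a, r in called) / len(called)  # agreement when called
print(f"call rate = {call_rate:.1%}, accuracy = {accuracy:.1%}")
```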
Simultaneous untangling and smoothing of moving grids. INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 7 2008. Ezequiel J. López.
Abstract: In this work, a technique for the simultaneous untangling and smoothing of meshes is presented. It is based on an extension of an earlier mesh-smoothing strategy developed to solve the computational mesh dynamics stage in fluid–structure interaction problems. In moving grid problems, mesh untangling is necessary when element inversion happens as a result of a moving domain boundary. The smoothing strategy, formerly published by the authors, is defined in terms of the minimization of a functional associated with the mesh distortion, using a geometric indicator of element quality. This functional becomes discontinuous when an element has null volume, making it impossible to obtain a valid mesh from an invalid one. To circumvent this drawback, the proposed functional is transformed so as to guarantee its continuity over the whole space of nodal coordinates, thus achieving the untangling technique. This regularization depends on one parameter, making the recovery of the original functional possible as this parameter tends to 0. This feature is very important: it is necessary to regularize the functional in order to make the mesh valid; afterwards, it is advisable to use the original functional to make the smoothing optimal. Finally, the simultaneous untangling and smoothing technique is applied to several test cases, including 2D and 3D meshes with simplicial elements. As an additional example, the application of this technique to a mesh generation case is presented. Copyright © 2008 John Wiley & Sons, Ltd. [source]

A control volume capacitance method for solidification modelling with mass transport. INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 12 2002. K. Davey.
Abstract: Capacitance methods are popular methods used for solidification modelling. Unfortunately, they suffer from a major drawback in that energy is not correctly transported through elements, which is a source of inaccuracy. This paper is concerned with the development and application of a control volume capacitance method (CVCM) to problems where mass transport and solidification are combined. The approach adopted is founded on theory that describes energy transfer through a control volume (CV) moving relative to the transporting mass. An equivalent governing partial differential equation is established, which is designed to be transformable into a finite element system of the kind commonly used to model transient heat-conduction problems. This approach circumvents the need to use the methods of Bubnov–Galerkin and Petrov–Galerkin, and thus eliminates many of the stability problems associated with those approaches. An integration scheme is described that accurately caters for the enthalpy fluxes generated by mass transport. Shrinkage effects are neglected in this paper, as all the problems considered involve magnitudes of velocity that make this assumption reasonable. The CV approach is tested against known analytical solutions and is shown to be accurate, stable and computationally competitive. Copyright © 2002 John Wiley & Sons, Ltd. [source]
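The moving-grids abstract above describes regularizing a distortion functional that is discontinuous at zero element volume. One common choice in the mesh-untangling literature (e.g. Escobar and co-workers) is of the kind the abstract suggests, though the paper's exact form is not given there: replace the element volume σ in the quality measure by a smooth positive surrogate with a parameter δ,

```latex
h_{\delta}(\sigma) \;=\; \tfrac{1}{2}\Bigl(\sigma + \sqrt{\sigma^{2} + 4\delta^{2}}\Bigr),
\qquad h_{\delta}(\sigma) > 0 \;\;\forall\,\sigma,
\qquad \lim_{\delta \to 0^{+}} h_{\delta}(\sigma) \;=\; \max(\sigma, 0).
```

Because h_δ is smooth and strictly positive even for inverted elements (σ < 0), the modified functional is defined over the whole space of nodal coordinates, and letting δ → 0 recovers the original smoothing functional, matching the one-parameter behaviour the abstract describes.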
A modification of the artificial compressibility algorithm with improved convergence characteristics. INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 4 2007. Frank Muldoon.
Abstract: The artificial compressibility algorithm has a significant drawback in the difficulty of choosing the artificial compressibility parameter, an improper choice of which leads either to slow convergence or to divergence. A simple modification of the equation for pressure in the artificial compressibility algorithm is proposed which removes the difficulty of choosing this parameter. It is shown that the choice of the relaxation parameters for the new algorithm is relatively straightforward, and that the same values can be used to provide robust convergence for a range of application problems. The new algorithm is easily parallelized, making it suitable for computations such as direct numerical simulation (DNS) which require the use of distributed memory machines. Two key benchmark problems are studied in evaluating the new algorithm: DNS of a fully developed turbulent channel flow, and DNS of a driven-cavity flow, using both explicit and implicit time integration schemes. The new algorithm is also validated for a more complex flow configuration, turbulent flow over a backward-facing step, and the computed results are shown to be in good agreement with experimental data and previous DNS work. Copyright © 2007 John Wiley & Sons, Ltd. [source]

Adaptive tracking control for electrically-driven robots without overparametrization. INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 2 2002. Yeong-Chan Chang.
Abstract: This paper addresses the motion tracking control of robot systems actuated by brushed direct-current motors in the presence of parametric uncertainties and external disturbances. By using the integrator backstepping technique, two kinds of adaptive control schemes are developed: one requires the measurement of link position, link velocity and armature current for feedback, and the other requires only the measurement of link position and armature current for feedback. The developed adaptive controllers guarantee that the resulting closed-loop system is locally stable, all the states and signals are bounded, and the tracking error can be made as small as desired. The region of attraction can be not only arbitrarily preassigned but also explicitly constructed. The main novelty of the developed adaptive control laws is that the number of parameter estimates is exactly equal to the number of unknown parameters throughout the entire electromechanical system. Consequently, the phenomenon of overparametrization, a significant drawback of employing the integrator backstepping technique for the control of electrically driven robots in the previous literature, is eliminated in this study. Finally, simulation examples are given to illustrate the tracking performance of electrically driven robot manipulators under the developed adaptive control schemes. Copyright © 2002 John Wiley & Sons, Ltd. [source]
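For context on the artificial compressibility abstract above: in the classic formulation (due to Chorin), the incompressible continuity equation is augmented with a pseudo-time pressure derivative, and β below is the artificial compressibility parameter whose choice the abstract identifies as the method's drawback (standard textbook form, not the paper's modified equations):

```latex
\frac{1}{\beta}\,\frac{\partial p}{\partial \tau} + \nabla \cdot \mathbf{u} = 0,
\qquad
\frac{\partial \mathbf{u}}{\partial \tau} + (\mathbf{u}\cdot\nabla)\mathbf{u}
= -\nabla p + \nu\,\nabla^{2}\mathbf{u}.
```

Marching in pseudo-time τ drives ∇·u toward zero at convergence; the paper's modification of the pressure equation is aimed at removing the sensitivity of this iteration to the value of β.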