Second Step (second + step)
Selected Abstracts

Sparse points matching by combining 3D mesh saliency with statistical descriptors
COMPUTER GRAPHICS FORUM, Issue 2 2008
U. Castellani
Abstract This paper proposes a new methodology for the detection and matching of salient points over several views of an object. The process is composed of three main phases. In the first step, detection is carried out by adopting a new perceptually inspired 3D saliency measure. This measure allows the detection of a few sparse salient points that characterize distinctive portions of the surface. In the second step, a statistical learning approach is taken to describe salient points across different views. Each salient point is modelled by a Hidden Markov Model (HMM), which is trained in an unsupervised way using contextual 3D neighborhood information, thus providing a robust and invariant point signature. Finally, in the third step, matching among points of different views is performed by evaluating a pairwise similarity measure among HMMs. An extensive comparative experimental session has been carried out on real objects acquired by a 3D scanner from different points of view, where the objects come from standard 3D databases. Results are promising: the detection of salient points is reliable, and the matching is robust and accurate. [source]

A framework for quad/triangle subdivision surface fitting: Application to mechanical objects
COMPUTER GRAPHICS FORUM, Issue 1 2007
Guillaume Lavoué
Abstract In this paper we present a new framework for subdivision surface approximation of three-dimensional models represented by polygonal meshes. Our approach, particularly suited for mechanical or Computer Aided Design (CAD) parts, produces a mixed quadrangle-triangle control mesh, optimized in terms of face and vertex numbers while remaining independent of the connectivity of the input mesh. Our algorithm begins with a decomposition of the object into surface patches. The main idea is to approximate the region boundaries first and then the interior data. Thus, for each patch, a first step approximates the boundaries with subdivision curves (associated with control polygons) and creates an initial subdivision surface by linking the boundary control points with respect to the lines of curvature of the target surface. Then, a second step optimizes the initial subdivision surface by iteratively moving control points and enriching regions according to the error distribution. The final control mesh defining the whole model is then created by assembling all the local subdivision control meshes. This control polyhedron is much more compact than the original mesh and visually represents the same shape after several subdivision steps, hence it is particularly suitable for compression and visualization tasks. Experiments conducted on several mechanical models demonstrate the coherency and efficiency of our algorithm compared with existing methods. [source]
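Because subdivision schemes are linear in their control points, the boundary-fitting step described above reduces to a linear least-squares problem. The sketch below illustrates the idea with Chaikin corner-cutting on a closed curve; the scheme, the uniform sample correspondence, and all names are illustrative stand-ins, not the authors' quad/triangle implementation.

```python
import numpy as np

def chaikin_matrix(n):
    """One round of Chaikin corner cutting on a closed n-gon, as a linear map."""
    S = np.zeros((2 * n, n))
    for i in range(n):
        j = (i + 1) % n
        S[2 * i, i], S[2 * i, j] = 0.75, 0.25          # point 1/4 along edge (i, j)
        S[2 * i + 1, i], S[2 * i + 1, j] = 0.25, 0.75  # point 3/4 along edge (i, j)
    return S

def fit_control_polygon(targets, n_ctrl, levels=3):
    """Least-squares control points whose subdivided curve approximates `targets`.

    Subdivision is linear, so the refined samples are A @ C for a fixed matrix A;
    fitting the control points C is then ordinary linear least squares.
    """
    A, size = np.eye(n_ctrl), n_ctrl
    for _ in range(levels):
        A = chaikin_matrix(size) @ A
        size *= 2
    C, *_ = np.linalg.lstsq(A, targets, rcond=None)  # assumes one target per refined vertex
    return C, A @ C

# Toy usage: fit an 8-point control polygon to 64 samples of a circle.
t = np.linspace(0, 2 * np.pi, 8 * 2**3, endpoint=False)
targets = np.c_[np.cos(t), np.sin(t)]
ctrl, curve = fit_control_polygon(targets, n_ctrl=8)
print("max fitting error:", np.abs(curve - targets).max())
```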
CAD-Based Photogrammetry for Reverse Engineering of Industrial Installations
COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 4 2003
Johan W. H. Tangelder
For instance, in the case of a servicing plant, such a library contains descriptions of simple components such as straight pipes, elbows, and T-junctions. A new installation is constructed by selecting and connecting the appropriate components from the library. This article demonstrates that the same approach can be used for reverse engineering by photogrammetry. In our technique, the operator interprets images and selects the appropriate CAD component from a library. By aligning the edges of the component's wire frame to the visible edges in the images, we implicitly determine the position, orientation, and shape of the real component. For fast object reconstruction, the alignment process has been split into two parts. Initially, the operator approximately aligns a component to the images. In a second step, a fitting algorithm is invoked for an automatic and precise alignment. Further improvement in the efficiency of the reconstruction is obtained by imposing geometric constraints on the CAD components of adjacent object parts. [source]

Study of the Complexation, Adsorption and Electrode Reaction Mechanisms of Chromium(VI) and (III) with DTPA Under Adsorptive Stripping Voltammetric Conditions
ELECTROANALYSIS, Issue 19 2003
Sylvia Sander
Abstract The complexation of Cr(III) and Cr(VI) with diethylenetriaminepentaacetic acid (DTPA), the redox behavior of these complexes and their adsorption on the mercury electrode surface were investigated by a combination of electrochemical techniques and UV/vis spectroscopy. A homogeneous two-step reaction was observed when mixing Cr(III), present as the hexaaqua complex, with DTPA. The first reaction product, the electroactive 1:1 complex, turns into an electroinactive form in the second step. The results indicate that the second reaction product is presumably a 1:2 Cr(III)/DTPA complex. The electroreduction of the DTPA-Cr(III) complex to Cr(II) was found to be diffusion rather than adsorption controlled. The Cr(III) ion, generated in situ from Cr(VI) at the mercury electrode at about −50 mV (vs. Ag|AgCl, 3 mol/L KCl), was found to instantly form an electroactive and adsorbable complex with DTPA. By means of electrocapillary measurements, its surface activity was shown to be 30 times higher than that of the complex formed by homogeneous reaction of DTPA with the hydrated Cr(III). Both components, DTPA and the in-situ formed Cr(III) complex, were found to adsorb on the mercury electrode. The effect of nitrate, used as a catalytic oxidant in the voltammetric determination method, on the complexation reaction and on the adsorption processes was found to be negligible. The proposed complex structures and an overall reaction scheme are shown. [source]
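The homogeneous two-step reaction reported above, an electroactive 1:1 intermediate that converts into an electroinactive 1:2 product, follows the classic consecutive first-order scheme A → B → C. A minimal sketch of the intermediate's rise and decay; the rate constants are invented for illustration, as the abstract quotes none:

```python
import numpy as np

def consecutive_first_order(t, a0, k1, k2):
    """Closed-form concentrations for A -> B -> C with first-order steps.

    B is the transient intermediate (here: the electroactive 1:1 complex).
    Requires k1 != k2 for this closed form.
    """
    a = a0 * np.exp(-k1 * t)
    b = a0 * k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))
    c = a0 - a - b
    return a, b, c

t = np.linspace(0, 600, 7)                                       # s
a, b, c = consecutive_first_order(t, a0=1.0, k1=0.02, k2=0.005)  # illustrative values
for ti, bi in zip(t, b):
    print(f"t = {ti:5.0f} s   [1:1 complex] = {bi:.3f}")
```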
2-DE using hemi-fluorinated surfactants
ELECTROPHORESIS, Issue 14 2007
Mireille Starita-Geribaldi
Abstract The synthesis of hemi-fluorinated zwitterionic surfactants was realized and assessed for 2-DE, a powerful separation method for proteomic analysis. These new fluorinated amidosulfobetaines (FASB-p,m) were compared to their hydrocarbon counterparts, the amidosulfobetaines (ASB-n), characterized by a hydrophilic polar head, a hydrophobic and lipophilic tail, and an amido group as connector. The tail of these FASB surfactants was partly fluorinated, resulting in the modulation of its lipophilicity (or oleophobicity). Their effect on the red blood cell (RBC) membrane showed a specific solubilization depending on the length of the hydrophobic part. A large number of polypeptide spots appeared in the 2-DE patterns when using FASB-p,m. The oleophobic character of these surfactants was confirmed by the fact that Band 3, a highly hydrophobic transmembrane protein, was not solubilized by these fluorinated structures. The corresponding pellet was very rich in Band 3 and could then be solubilized by using a strong detergent such as the amidosulfobetaine with an alkyl tail containing 14 carbon atoms (ASB-14). Thus, these hemi-fluorinated surfactants appear as powerful tools when used in the first step of a two-step solubilization strategy, with a hydrocarbon homologous surfactant in the second step. [source]

Manganese speciation in human cerebrospinal fluid using CZE coupled to inductively coupled plasma MS
ELECTROPHORESIS, Issue 9 2007
Bernhard Michalke
Abstract The neurotoxic effects of manganese (Mn) at elevated concentrations are well known. This raises the question which Mn species can cross neural barriers and appear in cerebrospinal fluid (CSF). CSF is the last matrix in a living human organism available for analysis before a compound reaches the brain cells, and it is therefore assumed to best reflect the internal exposure of brain tissue to Mn species. A previously developed CE method was modified for the separation of albumin, histidine, tyrosine, cystine, fumarate, malate, inorganic Mn, oxalacetate, α-keto-glutarate, nicotinamide adenine dinucleotide (NAD), citrate, adenosine, glutathione, and glutamine. These compounds are proposed in the literature as potential Mn carriers. In a first step, these compounds were analyzed by CZE-UV to check whether they are present in CSF. The CZE-UV method was simpler than the coupled CZE-inductively coupled plasma (ICP)-dynamic reaction cell (DRC)-MS method and was therefore chosen to obtain a first overview. In a second step, the coupled method (CZE-ICP-DRC-MS) was used to analyze in detail which of the compounds found in CSF by CZE-UV were actually bound to Mn. Finally, 13 Mn species were monitored in CSF samples, most of them being identified: Mn-histidine, Mn-fumarate, Mn-malate, inorganic Mn, Mn-oxalacetate, Mn-α-keto-glutarate, Mn-carrying NAD, Mn-citrate and Mn-adenosine. By far the most abundant Mn species was Mn-citrate, at a concentration of 0.7 ± 0.13 µg Mn/L. Interestingly, several other Mn species can be related to the citric acid cycle. [source]
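As a quick arithmetic check on the headline value, converting the reported Mn-citrate level from mass to molar concentration with the molar mass of Mn (54.94 g/mol) gives roughly 13 nmol/L:

```python
# Convert the reported Mn-citrate level from mass to molar concentration.
MN_MOLAR_MASS = 54.94      # g/mol
conc_ug_per_L = 0.7        # µg Mn per litre of CSF (value from the abstract)

conc_nmol_per_L = conc_ug_per_L * 1e-6 / MN_MOLAR_MASS * 1e9
print(f"{conc_nmol_per_L:.1f} nmol/L")   # ~12.7 nmol/L
```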
A study of copper recovery from copper-contaminated sludge with ferrite and selective leaching processes
ENVIRONMENTAL PROGRESS & SUSTAINABLE ENERGY, Issue 1 2007
S.H. Hu
Abstract The purpose of this study was to develop an effective resource recovery and leached-residue stabilization process for copper-contaminated sludge. To this end, a treatment procedure utilizing ferrite and selective leaching processes was developed. XRD examination of the ferrite complex revealed the crystalline phases to be mainly Fe3O4, CuO, and 6CuO·Cu2O. A selective leaching process followed to recover the copper content of the ferrite complex. To promote the dissolution percentage of copper and repress that of iron, additional 0.5 N sulfuric acid was added at intervals to the suspension in the second step of the selective leaching process. The purpose of this operation was to return the suspension pH to 3, promoting the dissolution of copper oxide while repressing the dissolution of iron. Finally, the heavy-metal (i.e., Cu, Pb, Cr, and Cd) dissolution of the residue was examined with toxicity characteristic leaching procedure (TCLP) testing, and all results met the regulatory standard. © 2007 American Institute of Chemical Engineers Environ Prog 26:104–112, 2007 [source]

The Reaction of (Bipyridyl)palladium(II) Complexes with Thiourea – Influence of DNA and Other Polyanions on the Rate of Reaction
EUROPEAN JOURNAL OF INORGANIC CHEMISTRY, Issue 2 2005
Matteo Cusumano
Abstract [Pd(bipy)(py)2](PF6)2 reacts stepwise with excess thiourea to give [Pd(tu)4](PF6)2. The kinetics of the second step, which refers to the replacement of bipyridyl in [Pd(bipy)(tu)2](PF6)2, have been studied in water and in the presence of calf thymus DNA, sodium polyriboadenylate, sodium polyvinylsulfonate or sodium polymetaphosphate, at 25 °C, pH 7 and fixed sodium chloride concentration. The reaction follows a first-order course, and a plot of kobs against [thiourea]² affords a straight line with a small intercept. DNA inhibits the process without altering the rate law. The kobs values decrease systematically with increasing DNA concentration, eventually tending to a limiting value. The values are larger at higher ionic strengths, and the other polyanions show similar behaviour. The influence of DNA on the kinetics can be related to steric inhibition caused by noncovalent binding with the complex. Upon interaction with DNA, [Pd(bipy)(tu)2]2+ gives rise to immediate spectroscopic changes in the UV/Vis region as well as induced circular dichroism, suggesting that the complex, like similar platinum(II) and palladium(II) species of bipyridyl, intercalates with the double helix. This type of interaction hampers the attack of the nucleophile at the metal centre, inhibiting the reaction. The decrease in the rate of ligand substitution upon decreasing salt concentration at a given DNA concentration is due to the influence of ionic strength on the complex–DNA interaction. The reactivity inhibition by single-stranded poly(A), polyvinylsulfonate or polymetaphosphate can be accounted for in terms of self-aggregation of the complex induced by the polyanion. (© Wiley-VCH Verlag GmbH & Co. KGaA, 69451 Weinheim, Germany, 2005) [source]
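The reported rate law, a straight line of kobs against [thiourea]², can be recovered from raw kinetic data with a one-line regression. The concentrations and rate constants below are synthetic placeholders, not values from the paper:

```python
import numpy as np

# kobs = k0 + k2 * [tu]^2  -- the straight-line form reported in the abstract.
tu = np.array([0.01, 0.02, 0.03, 0.04, 0.05])         # mol/L thiourea, synthetic
kobs = np.array([0.012, 0.021, 0.037, 0.059, 0.088])  # 1/s, synthetic

slope, intercept = np.polyfit(tu**2, kobs, 1)
print(f"k2 (slope)     = {slope:.1f} L^2 mol^-2 s^-1")
print(f"k0 (intercept) = {intercept:.4f} s^-1")
```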
Facial nerve injury-induced disinhibition in the primary motor cortices of both hemispheres
EUROPEAN JOURNAL OF NEUROSCIENCE, Issue 6 2000
Tamás Farkas
Abstract Unilateral facial nerve transection induces plastic reorganization of the somatotopic order in the primary motor cortex area (MI). This process is biphasic and starts with a transient disinhibition of connections between cortical areas in both hemispheres. Little is known about the underlying mechanisms. Here, cortical excitability has been studied by paired-pulse electrical stimulation, applied either within the MI or peripherally to the trigeminal nerve, while the responses were recorded bilaterally in the MI. The ratios between the amplitudes of the second and first evoked potentials (EPs or fEPSPs) were taken as measures of the inhibitory capacity in the MI ipsilateral or contralateral to the nerve injury. A skin wound or unilateral facial nerve exposure immediately caused a transient facilitation, which was followed by a reset to some level of inhibition in the MI on both sides. After facial nerve transection, a first, relatively mild reduction of inhibition started shortly (within 10 min) after denervation. This was followed by a second step, involving a stronger decrease in inhibition, 40–45 min later. Previous publications have proved that sensory nerve injury (deafferentation) induces disinhibition in corresponding areas of the sensory cortex. It is now demonstrated that sham operation and, to an even greater extent, unilateral transection of the purely motor facial nerve (deefferentation) each induce extended disinhibition in the MIs of both sides. [source]

Acquiring a Community: The Acquis and the Institution of European Legal Order
EUROPEAN LAW JOURNAL, Issue 4 2003
Hans Lindahl
The emblematic manifestation of this passage, in the framework of the European legal order, is the acquis communautaire: what is the nature of the process that leads from acquired community to acquiring a community? In a first, preparatory, step, it will be argued that determinate conceptions of truth, time and the giving and taking of reason underlie the process of acquiring a European community. These findings are confronted, in a second step, with Antonio Negri's theory of the multitude as a constituent power, which opposes revolutionary self-determination to representation. Deconstructing this massive opposition, this paper explores three ways in which representation is at work in revolutionary self-determination. As will become clear in the course of the debate, instituting (European) community turns on the interval linking and separating law 'and' disorganised civil society. [source]
POSTMATING SEXUAL SELECTION: ALLOPATRIC EVOLUTION OF SPERM COMPETITION MECHANISMS AND GENITAL MORPHOLOGY IN CALOPTERYGID DAMSELFLIES (INSECTA: ODONATA)
EVOLUTION, Issue 2 2004
A. Cordero Rivera
Abstract Postmating sexual selection theory predicts that in allopatry reproductive traits diverge rapidly and that the resulting differentiation in these traits may lead to restrictions to gene flow between populations and, eventually, reproductive isolation. In this paper we explore the potential for this premise in a group of damselflies of the family Calopterygidae, in which postmating sexual mechanisms are especially well understood. In particular, we tested whether in allopatric populations the sperm competition mechanisms and the genitalic traits involved in these mechanisms have indeed diverged as sexual selection theory predicts. We did so in two steps. First, we compared the sperm competition mechanisms of two allopatric populations of Calopteryx haemorrhoidalis (one Italian population studied here and one Spanish population previously studied). Our results indicate that in both populations males are able to displace spermathecal sperm, but the mechanism used for sperm removal is strikingly different between the two populations. In the Spanish population males seem to empty the spermathecae by stimulating females, whereas in the Italian population males physically remove sperm from the spermathecae. Both populations also exhibit differences in genital morphometry that explain the use of different mechanisms: the male lateral processes are narrower than the spermathecal ducts in the Italian population, whereas the reverse holds in the Spanish population. The estimated degree of phenotypic differentiation between these populations based on the genitalic traits involved in sperm removal was much greater than the differentiation based on a set of seven other morphological variables, suggesting that strong directional postmating sexual selection is indeed the main evolutionary force behind the reproductive differentiation between the studied populations. In a second step, we examined whether a similar pattern in genital morphometry emerges in allopatric populations of this and three other species of the same family (Calopteryx splendens, C. virgo and Hetaerina cruentata). Our results suggest that there is geographic variation in the sperm competition mechanisms in all four studied species. Furthermore, genitalic morphology was significantly divergent between populations within species even when different populations were using the same copulatory mechanism. These results can be explained by probable local coadaptation processes that have given rise to an ability or inability to reach and displace spermathecal sperm in different populations. This set of results provides the first direct evidence of intraspecific evolution of genitalic traits shaped by postmating sexual selection. [source]

Heterologous expression of a Rauvolfia cDNA encoding strictosidine glucosidase, a biosynthetic key to over 2000 monoterpenoid indole alkaloids
FEBS JOURNAL, Issue 8 2002
Irina Gerasimenko
Abstract Strictosidine glucosidase (SG) is an enzyme that catalyses the second step in the biosynthesis of various classes of monoterpenoid indole alkaloids. Based on the comparison of cDNA sequences of SG from Catharanthus roseus and raucaffricine glucosidase (RG) from Rauvolfia serpentina, primers for RT-PCR were designed and the cDNA encoding SG was cloned from R. serpentina cell suspension cultures. The active enzyme was expressed in Escherichia coli and purified to homogeneity. Analysis of its deduced amino-acid sequence assigned the SG from R. serpentina to family 1 of glycosyl hydrolases. In contrast to the SG from C. roseus, the enzyme from R. serpentina is predicted to lack an uncleavable N-terminal signal sequence, which is believed to direct proteins to the endoplasmic reticulum. The temperature and pH optimum, enzyme kinetic parameters and substrate specificity of the heterologously expressed SG were studied and compared to those of the C. roseus enzyme, revealing some differences between the two glucosidases. In vitro deglucosylation of strictosidine by R. serpentina SG proceeds by the same mechanism as has been shown for the C. roseus enzyme preparation. The reaction gives rise to the end product cathenamine and involves 4,21-dehydrocorynantheine aldehyde as an intermediate. The enzymatic hydrolysis of dolichantoside (Nβ-methylstrictosidine) leads to several products. One of them was identified as a new compound, 3-isocorreantine A. From the data it can be concluded that the divergence of the biosynthetic pathways leading to the different classes of indole alkaloids formed in R. serpentina and C. roseus cell suspension cultures occurs at a later stage than strictosidine deglucosylation. [source]
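Enzyme kinetic parameters like those compared for the two glucosidases are typically extracted by fitting the Michaelis-Menten equation to initial-rate data. A short sketch with synthetic data (the abstract quotes no numbers, so every value below is invented):

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """Initial rate v = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

# Synthetic initial-rate data; units are arbitrary placeholders.
s = np.array([0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1.0])   # mM substrate
v = np.array([0.8, 1.4, 2.6, 3.6, 4.4, 5.1, 5.4])      # rate, synthetic

(vmax, km), _ = curve_fit(michaelis_menten, s, v, p0=(5.0, 0.1))
print(f"Vmax ≈ {vmax:.2f}, Km ≈ {km:.3f} mM")
```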
Fire calorimetry relying on the use of the fire propagation apparatus. Part I: early learning from use in Europe
FIRE AND MATERIALS, Issue 2 2006
Abstract The fire propagation apparatus (FPA) is the bench-scale fire calorimeter recently described in its updated version in ASTM E 2058. The apparatus was originally developed in the USA by Tewarson and co-workers from the mid-1970s, under the name '50 kW lab-scale flammability apparatus', and is therefore still known in Europe as the 'Tewarson apparatus'. The paper focuses on the experience gained so far with the first modern version of the apparatus implemented in Europe (France). Part I in this series of articles reports on the main results achieved during the commissioning period of the apparatus. In a first step, preliminary experiments were carried out in order to check and calibrate different sub-equipment of the calorimeter. The results are principally presented for the load-cell system and the infrared heating system, which are essential pieces of sub-equipment. In a second step, a set of fire tests using methane or acetone as fuel was carried out in order to check and calibrate the overall working of the calorimeter under well-ventilated fire conditions. The performance of the calorimeter was also checked when operating in under-ventilated fires. Relevant testing procedures and potential technical problems are discussed. A set of recommendations is derived from the early learning obtained at the INERIS fire laboratory in order to check the consistency of the results obtained from bench-scale fire tests. These recommendations are thought to be applicable to all types of bench-scale fire calorimeters. Copyright © 2005 John Wiley & Sons, Ltd. [source]

Uncertainty analysis of heat release rate measurement from oxygen consumption calorimetry
FIRE AND MATERIALS, Issue 6 2005
Sylvain Brohez
Abstract Oxygen consumption calorimetry remains the most widespread method for the measurement of the heat release rate from experimental fire tests. In a first step, this paper examines by theoretical analysis the uncertainty associated with this measurement, especially when CO and soot corrections are applied. Application of the theoretical equations is presented for chlorobenzene, which leads to high values of CO and soot yields. It appears that the uncertainty of the CO and soot corrections is high when the fuel composition is unknown. In a second step, a theoretical analysis is provided for the case where the simplest measurement procedure is used for oxygen consumption calorimetry. The overall uncertainty can be dominated either by the uncertainty associated with the oxygen concentration, the assumed heat of combustion, the fumes mass flow rate or the assumed combustion expansion factor, depending on the oxygen depletion. Copyright © 2005 John Wiley & Sons, Ltd. [source]
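The simplest oxygen-consumption procedure mentioned above rests on the near-constant heat release per kilogram of oxygen consumed (Huggett's widely used average of about 13.1 MJ/kg O2), and first-order error propagation then combines the component uncertainties in quadrature. A minimal sketch; the measurement value and the uncertainty figures are assumptions, not numbers from the paper:

```python
import math

# Simplest oxygen-consumption relation: Q = E * m_dot_O2_consumed.
E = 13.1            # MJ/kg O2, assumed constant (Huggett's average)
m_dot_o2 = 0.010    # kg/s of O2 consumed, synthetic measurement
q = E * m_dot_o2    # heat release rate, MW

# First-order propagation for a product: relative errors add in quadrature.
rel_E = 0.05        # ~5% fuel-to-fuel scatter of E (assumed)
rel_m = 0.04        # combined flow + O2-concentration uncertainty (assumed)
rel_q = math.hypot(rel_E, rel_m)
print(f"HRR = {q * 1000:.0f} kW ± {rel_q * 100:.1f}%")
```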
Near IR Sensitization of Organic Bulk Heterojunction Solar Cells: Towards Optimization of the Spectral Response of Organic Solar Cells
ADVANCED FUNCTIONAL MATERIALS, Issue 2 2010
Markus Koppe
Abstract The spectroscopic response of a poly(3-hexylthiophene)/[6,6]-phenyl-C61-butyric acid methyl ester (P3HT/PCBM)-based bulk heterojunction solar cell is extended into the near-infrared (NIR) region of the spectrum by adding the low-bandgap polymer poly[2,6-(4,4-bis(2-ethylhexyl)-4H-cyclopenta[2,1-b;3,4-b′]dithiophene)-alt-4,7-(2,1,3-benzothiadiazole)] (PCPDTBT) to the blend. The dominant mechanism behind the enhanced photosensitivity of the ternary blend is found to be a two-step process: first, an ultrafast and efficient photoinduced charge transfer generates positive charges on P3HT and PCPDTBT and a negative charge on PCBM. In a second step, the positive charge on PCPDTBT is transferred to P3HT. Thus, P3HT serves two purposes. On the one hand it is involved in the generation of charge carriers by the photoinduced electron transfer to PCBM, and on the other hand it forms the charge-transport matrix for the positive carriers transferred from PCPDTBT. Other mechanisms, such as energy transfer or photoinduced charge transfer directly between the two polymers, are found to be absent or negligible. [source]

The effective action of D-branes in Calabi-Yau orientifold compactifications
FORTSCHRITTE DER PHYSIK/PROGRESS OF PHYSICS, Issue 10 2005
H. Jockers
Abstract In this review article we study type IIB superstring compactifications in the presence of space-time filling D-branes while preserving N = 1 supersymmetry in the effective four-dimensional theory. This amount of unbroken supersymmetry and the requirement to fulfill the consistency conditions imposed by the space-time filling D-branes lead to Calabi-Yau orientifold compactifications. For a generic Calabi-Yau orientifold theory with space-time filling D3- or D7-branes we derive the low-energy spectrum. In a second step we compute the effective N = 1 supergravity action which describes, in the low-energy regime, the massless open and closed string modes of the underlying type IIB Calabi-Yau orientifold string theory. These N = 1 supergravity theories are analyzed and, in particular, spontaneous supersymmetry breaking induced by non-trivial background fluxes is studied. For D3-brane scenarios we compute soft supersymmetry-breaking terms resulting from bulk background fluxes, whereas for D7-brane systems we investigate the structure of D- and F-terms originating from worldvolume D7-brane background fluxes. Finally we relate the geometric structure of D7-brane Calabi-Yau orientifold compactifications to N = 1 special geometry. [source]

Copper-Free Clickable Coatings
ADVANCED FUNCTIONAL MATERIALS, Issue 21 2009
Luiz A. Canalle
Abstract The copper-catalyzed azide–alkyne 1,3-dipolar cycloaddition (CuAAC) is extensively used for the functionalization of well-defined polymeric materials. However, the necessity for copper, which is inherently toxic, limits the potential applications of these materials in the areas of biology and biomedicine. Therefore, the first entirely copper-free procedure for the synthesis of clickable coatings for the immobilization of functional molecules is reported. In the first step, azide-functional coatings are prepared by thermal crosslinking of side-chain azide-functional polymers and dialkyne linkers. In a second step, three copper-free click reactions (i.e., the Staudinger ligation, the dibenzocyclooctyne-based strain-promoted azide–alkyne [3+2] cycloaddition, and the methyl-oxanorbornadiene-based tandem cycloaddition–retro-Diels–Alder (crDA) reaction) are used to functionalize the azide-containing surfaces with fluorescent probes, allowing qualitative comparison with the traditional CuAAC. [source]
Coxibs: evolution of prescription's behaviour in France
FUNDAMENTAL & CLINICAL PHARMACOLOGY, Issue 3 2007
Julie Biga
Abstract The aim of the present study was, first, to characterize the profiles of coxib prescribers [general practitioners (GPs) and rheumatologists] in 2002 in France and, second, to identify factors associated with modification of this profile 1 year later. All GPs and rheumatologists from the Midi-Pyrénées, Aquitaine, Languedoc-Roussillon and Pays de Loire areas (South of France: 11 050 000 inhabitants) were included in the study. For each practitioner, we used data concerning all non-steroidal anti-inflammatory drugs (NSAIDs), including coxibs, reimbursed during period 1 (P1; January–March 2002) and period 2 (P2; January–March 2003). The ratio between the number of coxib prescriptions and the total number of NSAID prescriptions (including coxibs) was used to define two profiles of prescribers, one with a low level of coxib prescriptions and one with a high level. Characteristics of practitioners and of their practices were compared according to this profile. In the second step, we investigated the characteristics (of practitioners and practices) associated with an increase in the level of coxib prescriptions in P2 for practitioners with a low level of coxib prescriptions in P1. Results are expressed as odds ratios with their 95% confidence intervals. A positive statistical link was found between a high level of coxib prescriptions and the speciality of rheumatologist or extra costs for consultation. In contrast, a negative association was observed with female gender or age below 44 years. No relationship was found with the status of referent. Concerning practice characteristics, there was a positive statistical link between a high ratio of coxib prescriptions and high co-prescription of gastroprotective agents, and a negative association with a high number of acts, a high proportion of patients with chronic disabling diseases (CDD) or a high number of patients between 15 and 64 years. There was no statistical link with the proportion of patients covered by Universal Medical Coverage (UMC) or aged more than 65 years. Among the factors involved in the increase in the ratio (between P1 and P2), no relationship was found with practitioners' characteristics. In contrast, some factors related to practices (level of gastroprotective co-prescriptions, number of acts, number of CDD patients) were related to a change in coxib prescriptions between P1 and P2. This study allowed discussion of relationships between coxib prescription and characteristics of practitioners (age, gender, medical speciality or extra costs for consultation) or practices (level of medical activity, patients' age, number of CDD patients or level of gastroprotective prescriptions). In contrast, some other factors, like referent status or the number of patients with UMC, were not related. Physicians who were initially low prescribers of coxibs and increased their coxib prescriptions during the study period were those with a high level of gastroprotective prescriptions, a low number of acts or a small proportion of CDD patients. [source]
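Results of this kind, odds ratios with 95% confidence intervals, come from 2×2 exposure-by-outcome tables. A compact sketch with invented counts (the study reports only the derived ORs, not raw tables):

```python
import math

# Odds ratio with a 95% Wald confidence interval from a 2x2 table.
# Counts are invented for illustration.
exposed_high, exposed_low = 120, 80        # e.g. rheumatologists: high/low coxib profile
unexposed_high, unexposed_low = 300, 500   # e.g. GPs: high/low coxib profile

odds_ratio = (exposed_high * unexposed_low) / (exposed_low * unexposed_high)
se_log_or = math.sqrt(1/exposed_high + 1/exposed_low + 1/unexposed_high + 1/unexposed_low)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```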
Graphene Monolayers: Chemical Vapor Deposition Repair of Graphene Oxide: A Route to Highly-Conductive Graphene Monolayers (Adv. Mater. 46/2009)
ADVANCED MATERIALS, Issue 46 2009
Graphene oxide (GO) is a promising precursor for the mass production of graphene. As an important step in this direction, the electrical conductivity of GO has been enhanced by six orders of magnitude, thus closely approaching that of exfoliated graphene. The novel two-step process reported by Cristina Gómez-Navarro and co-workers on p. 4683 involves hydrogen reduction and healing by a gaseous carbon feedstock. The inside cover shows a snapshot of the crucial second step. The oxidized regions in GO are represented in red, and the blue patches correspond to vacancies. [source]

Improved EEG source analysis using low-resolution conductivity estimation in a four-compartment finite element head model
HUMAN BRAIN MAPPING, Issue 9 2009
Seok Lew
Abstract Bioelectric source analysis of the human brain from scalp electroencephalography (EEG) signals is sensitive to the geometry and conductivity properties of the different head tissues. We propose a low-resolution conductivity estimation (LRCE) method using simulated annealing optimization on high-resolution finite element models that individually optimizes a realistically shaped four-layer volume conductor with regard to the brain and skull compartment conductivities. As input data, the method needs T1- and PD-weighted magnetic resonance images for improved modeling of the skull and cerebrospinal fluid compartments, and evoked potential data with a high signal-to-noise ratio (SNR). Our simulation studies showed that for EEG data with realistic SNR, the LRCE method was able to simultaneously reconstruct both the brain and the skull conductivity together with the underlying dipole source, and provided an improved source analysis result. We have also demonstrated the feasibility and applicability of the new method by simultaneously estimating brain and skull conductivity and a somatosensory source from measured tactile somatosensory-evoked potentials of a human subject. Our results show the viability of an approach that computes its own conductivity values, thus reducing the dependence on values assigned from the literature, and likely produces a more robust estimate of current sources. Using the LRCE method, the individually optimized four-compartment volume conductor model can, in a second step, be used for the analysis of clinical or cognitive data acquired from the same subject. Hum Brain Mapp, 2009. © 2008 Wiley-Liss, Inc. [source]

Simplified intersubject averaging on the cortical surface using SUMA
HUMAN BRAIN MAPPING, Issue 1 2006
Brenna D. Argall
Abstract Task and group comparisons in functional magnetic resonance imaging (fMRI) studies are often accomplished through the creation of intersubject average activation maps. Compared with traditional volume-based intersubject averages, averages made using computational models of the cortical surface have the potential to increase statistical power because they reduce intersubject variability in cortical folding patterns. We describe a two-step method for creating intersubject surface averages. In the first step, cortical surface models are created for each subject and the locations of the anterior and posterior commissures (AC and PC) are aligned. In the second step, each surface is standardized to contain the same number of nodes with identical indexing. An anatomical average from 28 subjects created using the AC–PC technique showed greater sulcal and gyral definition than the corresponding volume-based average. When applied to an fMRI dataset, the AC–PC method produced greater maximum, median, and mean t-statistics in the average activation map than did the volume average, and gave a better approximation to the theoretical ideal average calculated from individual subjects. The AC–PC method produced average activation maps equivalent to those produced with surface-averaging methods that use high-dimensional morphing. In comparison with morphing methods, the AC–PC technique does not require selection of a template brain and does not introduce deformations of sulcal and gyral patterns, allowing for group analysis within the original folded topology of each individual subject. The tools for performing AC–PC surface averaging are implemented and freely available in the SUMA software package. Hum Brain Mapp, 2005. © 2005 Wiley-Liss, Inc. [source]
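Once the second step has standardized every surface to the same node count and indexing, intersubject statistics reduce to per-node operations across subjects. A toy sketch of a node-wise group average and one-sample t-map, with synthetic data standing in for the 28-subject dataset:

```python
import numpy as np

# Synthetic stand-in: 28 subjects x 1000 standardized surface nodes.
rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=(28, 1000))
data[:, :100] += 0.8          # pretend the first 100 nodes are "active"

group_mean = data.mean(axis=0)
sem = data.std(axis=0, ddof=1) / np.sqrt(data.shape[0])
t_map = group_mean / sem      # one-sample t-statistic per node
print("peak t on the active patch:", t_map[:100].max().round(2))
```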
Source density-driven independent component analysis approach for fMRI data
HUMAN BRAIN MAPPING, Issue 3 2005
Baoming Hong
Abstract Independent component analysis (ICA) has become a popular tool for functional magnetic resonance imaging (fMRI) data analysis. Conventional ICA algorithms, including the Infomax and FastICA algorithms, employ the underlying assumption that data can be decomposed into statistically independent sources, and implicitly model the probability density functions of the underlying sources as highly kurtotic or symmetric. When source data violate these assumptions (e.g., are asymmetric), however, conventional ICA methods might not work well. As a result, modeling of the underlying sources becomes an important issue for ICA applications. We propose a source density-driven ICA (SD-ICA) method. The SD-ICA algorithm involves a two-step procedure. In the first step, it uses a conventional ICA algorithm to obtain initial independent source estimates; then, using a kernel estimator technique, the source density is calculated. A refitted nonlinear function is used for each source in the second step. We show that the proposed SD-ICA algorithm provides flexible source adaptivity and improves ICA performance. When SD-ICA is applied to fMRI signals, the physiologically meaningful components (e.g., activated regions) typically occupy only a small percentage of the whole-brain map in a task-related activation. Extra prior information (using a skewed-weighted distribution transformation) is thus additionally applied to the algorithm for the regions of interest of the data (e.g., visually activated regions) to emphasize the importance of the tail part of the distribution. Our experimental results show that the source density-driven ICA method can further improve performance by incorporating some a priori information into the ICA analysis of fMRI signals. Hum Brain Mapp, 2005. © 2005 Wiley-Liss, Inc. [source]

Numerical evaluation of eigenvalues in notch problems using a region searching method
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 12 2006
Y. Z. Chen
Abstract This paper presents a method for finding the eigenvalues of some equations, or the zeros of analytic functions. There are two steps in the method. In the first step, integration along the edges of a rectangle is performed for an analytic function. From the result of the integration, one can tell whether a zero exists in the rectangle or not. If a zero of the analytic function exists in the rectangle, we can perform the second step, in which the zero is obtained by iteration. The method is therefore called a region searching method. A particular advantage of the suggested method is that the process of finding zeros can be visualized; for example, one can clearly indicate the rectangles that contain the zeros of an analytic function. Three numerical examples are presented. The obtained results are satisfactory even in a complicated case, for example, finding the eigenvalues of a composed wedge of dissimilar materials. [source]
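The two steps of the region searching method map directly onto code: the argument principle turns the boundary integral into a winding-number count of zeros, and quadrisection then shrinks a rectangle known to contain one. A sketch under the assumption that no zero lies exactly on a boundary; the test function and box are illustrative, not the paper's notch eigenvalue equations:

```python
import numpy as np

def count_zeros(f, x0, x1, y0, y1, n=4000):
    """Zeros of analytic f inside [x0,x1] x [y0,y1], by the argument principle:
    the winding number of f around 0 along the boundary equals the zero count
    (assuming f has no zero on the boundary itself)."""
    c = [complex(x0, y0), complex(x1, y0), complex(x1, y1), complex(x0, y1)]
    edges = [np.linspace(c[k], c[(k + 1) % 4], n, endpoint=False) for k in range(4)]
    z = np.concatenate(edges + [np.array([c[0]])])          # closed boundary path
    total_turn = np.sum(np.diff(np.unwrap(np.angle(f(z)))))
    return int(round(total_turn / (2 * np.pi)))

def refine_by_quadrisection(f, x0, x1, y0, y1, tol=1e-8):
    """Second step: shrink a rectangle known to contain a zero, by quadrisection."""
    while max(x1 - x0, y1 - y0) > tol:
        xm, ym = 0.5 * (x0 + x1), 0.5 * (y0 + y1)
        for quad in [(x0, xm, y0, ym), (xm, x1, y0, ym),
                     (x0, xm, ym, y1), (xm, x1, ym, y1)]:
            if count_zeros(f, *quad) >= 1:
                x0, x1, y0, y1 = quad
                break
        else:
            raise RuntimeError("zero lost (it may lie on a subdivision line)")
    return complex(0.5 * (x0 + x1), 0.5 * (y0 + y1))

f = lambda z: z**2 + 1                                   # zeros at +i and -i
print(count_zeros(f, -1.5, 2.0, 0.5, 2.0))               # -> 1 (only +i in this box)
print(refine_by_quadrisection(f, -1.5, 2.0, 0.5, 2.0))   # -> approximately 1j
```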
An iterative defect-correction type meshless method for acoustics
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 15 2003
V. Lacroix
Abstract Accurate numerical simulation of acoustic wave propagation is still an open problem, particularly at medium frequencies. We have thus formulated a new numerical method better suited to the acoustical problem: the element-free Galerkin method (EFGM), improved by appropriate basis functions computed by a defect-correction approach. One of the advantages of the EFGM is that the shape functions are customizable: we can construct the basis of the approximation with terms that are suited to the problem to be solved. Acoustical problems in cavities Ω with boundary Γ are governed by the Helmholtz equation completed with appropriate boundary conditions. As the pressure p(x,y) is a complex variable, it can always be expressed as a function of cos θ(x,y) and sin θ(x,y), where θ(x,y) is the phase of the wave at each point (x,y). If the exact distribution θ(x,y) of the phase is known and a meshless basis {1, cos θ(x,y), sin θ(x,y)} is used, then the exact solution of the acoustic problem can be obtained. Obviously, in real-life cases the distribution of the phase is unknown. The aim of our work is to solve, as a first step, the acoustic problem using a polynomial basis to obtain a first approximation of the pressure field. As a second step, from this pressure field we compute the distribution of the phase θ(x,y) and introduce it into the meshless basis in order to compute a second approximation of the pressure field. From this second field, a new distribution of the phase is computed in order to obtain a third approximation, and so on, until a convergence criterion concerning the pressure or the phase is satisfied. Thus, an iterative defect-correction type meshless method has been developed to compute the pressure field in Ω. This work shows the efficiency of this meshless method in terms of accuracy and computational time, and compares its performance with the classical finite element method. [source]
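The defect-correction skeleton underlying the iteration above can be shown in its generic algebraic form: solve approximately, measure the defect, correct, repeat. In the sketch below, B is a cheap Jacobi-type approximation of a generic operator A; the acoustics-specific part (rebuilding the meshless basis from the current phase estimate at each pass) is deliberately abstracted away, so this is an illustration of the iteration pattern, not of the paper's Helmholtz solver:

```python
import numpy as np

# Generic defect correction: with an easily inverted approximation B of A,
# repeat  x <- x + B^{-1} (b - A x)  until the defect (residual) is small.
rng = np.random.default_rng(1)
n = 50
A = 4.0 * np.eye(n) + 0.2 * rng.standard_normal((n, n)) / np.sqrt(n)
b = rng.standard_normal(n)
B_inv = np.diag(1.0 / np.diag(A))        # cheap Jacobi-type approximate inverse

x = np.zeros(n)
for it in range(100):
    defect = b - A @ x                   # residual of the current approximation
    x += B_inv @ defect                  # correct with the approximate solver
    if np.linalg.norm(defect) < 1e-10 * np.linalg.norm(b):
        break
print(f"converged in {it} iterations, residual {np.linalg.norm(b - A @ x):.2e}")
```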
A general Riemann solver for Euler equations
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 11 2008
Hao Wu
Abstract In this paper, we present a general Riemann solver which is applied successfully to compute the Euler equations of fluid dynamics with many complex equations of state (EOS). The solver is based on a splitting method introduced by the authors. We add a linear advection term to the Euler equations in the first step, to make the numerical flux between cells easy to compute. The added linear advection term is discarded in the second step. The method needs neither an iterative technique nor a characteristic wave decomposition. This new solver is designed to permit the construction of high-order approximations, yielding high-order Godunov-type schemes. A number of numerical results show its robustness. Copyright © 2007 John Wiley & Sons, Ltd. [source]

A 3-D non-hydrostatic pressure model for small amplitude free surface flows
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 6 2006
J. W. Lee
Abstract A three-dimensional, non-hydrostatic pressure, numerical model with k–ε equations for small amplitude free surface flows is presented. By decomposing the pressure into hydrostatic and non-hydrostatic parts, the numerical model uses an integrated time step with two fractional steps. In the first fractional step, the momentum equations are solved without the non-hydrostatic pressure term, using Newton's method in conjunction with the generalized minimal residual (GMRES) method, so that most terms can be solved implicitly. This method only needs the product of a Jacobian matrix and a vector rather than the Jacobian matrix itself, limiting the amount of storage and significantly decreasing the overall computational time required. In the second step, the pressure–Poisson equation is solved iteratively with a preconditioned linear GMRES method. It is shown that preconditioning reduces the central processing unit (CPU) time dramatically. In order to prevent the pressure oscillations which may arise in collocated grid arrangements, transformed velocities are defined at cell faces by interpolating velocities at grid nodes. After the new pressure field is obtained, the intermediate velocities, which were calculated in the previous fractional step, are updated. The newly developed model is verified against analytical solutions, published results, and experimental data, with excellent agreement. Copyright © 2005 John Wiley & Sons, Ltd. [source]

Parallelization of a vorticity formulation for the analysis of incompressible viscous fluid flows
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 11 2002
Mary J. Brown
Abstract A parallel computer implementation of a vorticity formulation for the analysis of incompressible viscous fluid flow problems is presented. The vorticity formulation involves a three-step process: two kinematic steps followed by a kinetic step. The first kinematic step determines vortex sheet strengths along the boundary of the domain from a Galerkin implementation of the generalized Helmholtz decomposition; the vortex sheet strengths are related to the vorticity flux boundary conditions. The second kinematic step determines the interior velocity field from the regular form of the generalized Helmholtz decomposition. The third, kinetic, step solves the vorticity equation using a Galerkin finite element method, with boundary conditions determined in the first step and velocities determined in the second step. The accuracy of the numerical algorithm is demonstrated through the driven-cavity problem and the 2-D cylinder in a free-stream problem, which represent internal and external flows, respectively. Each of the three steps requires a unique parallelization effort, and these are evaluated in terms of parallel efficiency. Copyright © 2002 John Wiley & Sons, Ltd. [source]

Thermal dehydration kinetics of a rare earth hydroxide, Gd(OH)3
INTERNATIONAL JOURNAL OF CHEMICAL KINETICS, Issue 2 2007
Chengkang Chang
Abstract This paper reports the synthesis, characterization, and dehydration kinetics of a rare earth hydroxide, Gd(OH)3. Uniform rod-like Gd(OH)3 powder was prepared by a colloidal hydrothermal method. The powder thus obtained dehydrates into its oxide form in a two-step process, with crystalline GdOOH obtained as the intermediate phase. A crystal structure study revealed a monoclinic structure for GdOOH, with space group P21/m and lattice parameters a = 6.0633 Å, b = 3.7107 Å, c = 4.3266 Å, and β = 108.669°. The first dehydration step follows the F2 mechanism, while the second step follows the F1 model, indicating that both steps are controlled by a nucleation/growth mechanism. The activation energy Ea and frequency factor A are 231 ± 12 kJ/mol and 2.08 × 10^18 s^-1 for the first step, and 496 ± 32 kJ/mol and 7.88 × 10^33 s^-1 for the second step, respectively. Such high activation energies calculated from the experimental data can be ascribed to the high bonding energy of the Gd–O bond, and the difference in activation energy between the two steps is due to the difference in bond length between hexagonal Gd(OH)3 and monoclinic GdOOH. © 2006 Wiley Periodicals, Inc. Int J Chem Kinet 39: 75–81, 2007 [source]
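With the reported activation energies and frequency factors, the Arrhenius equation k(T) = A·exp(−Ea/RT) gives each step's rate constant at any temperature. The temperatures below, and the labeling of the second step as GdOOH → Gd2O3, are illustrative assumptions on top of the abstract's numbers:

```python
import math

# Arrhenius rate constants for the two dehydration steps, using the Ea and A
# values quoted in the abstract.
R = 8.314  # J/(mol K)
steps = {
    "Gd(OH)3 -> GdOOH": (231e3, 2.08e18),   # Ea in J/mol, A in 1/s
    "GdOOH -> Gd2O3":   (496e3, 7.88e33),   # step-2 product assumed
}
for label, (ea, a) in steps.items():
    for T in (500.0, 600.0, 700.0):         # K, illustrative temperatures
        k = a * math.exp(-ea / (R * T))
        print(f"{label:18s} T = {T:.0f} K   k = {k:.3e} 1/s")
```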
Use of image analysis techniques for objective quantification of the efficacy of different hair removal methods
INTERNATIONAL JOURNAL OF COSMETIC SCIENCE, Issue 2 2007
S. Bielfeldt
In the field of consumer-used cosmetics for hair removal and hair growth reduction, there is a need for improved quantitative methods to enable the evaluation of efficacy and claim support. Optimized study designs and investigated endpoints are lacking for comparing the efficacy of standard methods, like shaving or plucking, with that of new methods and products, such as depilating instruments or hair-growth-reducing cosmetics. Non-invasive image analysis, using a high-performance microscope combined with an optimized image analysis tool, was investigated to assess hair growth. In a first step, high-resolution macrophotographs of the legs of female volunteers after shaving and after plucking with cold wax were compared to observe short-term hair regrowth. In a second step, images obtained after plucking with cold wax were taken over a long-term period to assess the time after which depilated hairs reappeared on the skin surface. Using image analysis, parameters like hair length, hair width, and hair projection area were investigated. The projection area was found to be the parameter most independent of possible image artifacts, such as irregularities in skin or low contrast due to hair color. Therefore, the hair projection area was the most appropriate parameter for determining the time of hair regrowth. This point in time is suitable for assessing the efficacy of different hair removal methods or hair-growth-reduction treatments, by comparing the endpoint after use of the method under investigation with the endpoint after simple shaving. The closeness of hair removal and visible signs of skin irritation can be assessed as additional quantitative parameters from the same images. Discomfort and pain ratings by the volunteers complete the set of parameters required to benchmark a new hair removal method or hair-growth-reduction treatment. Image analysis combined with high-resolution imaging techniques is a powerful tool to objectively assess parameters like hair length, hair width, and projection area. To achieve reliable data and to reduce well-known image-analysis artifacts, it was important to optimize the technical equipment for use on human skin and to improve image analysis by adapting the image-processing procedure to the different skin characteristics of individuals, like skin color, hair color, and skin structure. [source]

Combining Krylov subspace methods and identification-based methods for model order reduction
INTERNATIONAL JOURNAL OF NUMERICAL MODELLING: ELECTRONIC NETWORKS, DEVICES AND FIELDS, Issue 6 2007
P. J. Heres
Abstract Many different techniques to reduce the dimensions of a model have been proposed in the recent past. Krylov subspace methods are relatively cheap, but generate non-optimal models. In this paper a combination of Krylov subspace methods and orthonormal vector fitting (OVF) is proposed. In that way a compact model for a large model can be generated. In the first step, a Krylov subspace method reduces the large model to a model of medium size; a compact model is then derived with OVF as a second step. Copyright © 2007 John Wiley & Sons, Ltd. [source]
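The first, Krylov, step of the proposed combination can be sketched as a one-sided Arnoldi projection that matches transfer-function moments about s = 0. Everything below (model, sizes, expansion point) is a synthetic assumption, and the second, OVF, identification step is omitted:

```python
import numpy as np

def arnoldi(M, v, m):
    """Orthonormal basis of the Krylov subspace span{v, Mv, ..., M^(m-1) v}."""
    V = np.zeros((len(v), m))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m - 1):
        w = M @ V[:, j]
        for i in range(j + 1):                  # modified Gram-Schmidt
            w -= (V[:, i] @ w) * V[:, i]
        V[:, j + 1] = w / np.linalg.norm(w)
    return V

# Project a large state-space model (A, b, c), with transfer function
# H(s) = c^T (sI - A)^{-1} b, onto a Krylov subspace built from A^{-1},
# which matches moments of H about s = 0.
rng = np.random.default_rng(2)
n, m = 400, 20
A = -np.diag(np.linspace(1.0, 50.0, n)) + 0.1 * rng.standard_normal((n, n)) / np.sqrt(n)
b, c = rng.standard_normal(n), rng.standard_normal(n)

Ainv = np.linalg.inv(A)                          # fine for a sketch; factor A in practice
V = arnoldi(Ainv, Ainv @ b, m)
Ar, br, cr = V.T @ A @ V, V.T @ b, V.T @ c       # medium-size reduced model

s = 2.0j                                          # evaluation frequency
H = c @ np.linalg.solve(s * np.eye(n) - A, b)
Hr = cr @ np.linalg.solve(s * np.eye(m) - Ar, br)
print(f"relative error near the expansion point: {abs(H - Hr) / abs(H):.2e}")
```

In the paper's scheme, the frequency response of this medium-size model would then be fed to orthonormal vector fitting to identify a still more compact rational model.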