Selected Abstracts

Quantification of Extinction Risk: IUCN's System for Classifying Threatened Species. CONSERVATION BIOLOGY, Issue 6 2008. GEORGINA M. MACE.

Keywords: setting of conservation priorities; threatened species; IUCN Red List; extinction risk

Abstract: The International Union for Conservation of Nature (IUCN) Red List of Threatened Species was increasingly used during the 1980s to assess the conservation status of species for policy and planning purposes. This use stimulated the development of a new set of quantitative criteria for listing species in the categories of threat: critically endangered, endangered, and vulnerable. These criteria, which were intended to be applicable to all species except microorganisms, were part of a broader system for classifying threatened species and were fully implemented by IUCN in 2000. The system and the criteria have been widely used by conservation practitioners and scientists and now underpin one indicator being used to assess the Convention on Biological Diversity 2010 biodiversity target. We describe the process and the technical background to the IUCN Red List system. The criteria refer to fundamental biological processes underlying population decline and extinction. But given major differences between species, the threatening processes affecting them, and the paucity of knowledge relating to most species, the IUCN system had to be both broad and flexible to be applicable to the majority of described species. The system was designed to measure the symptoms of extinction risk, and uses five independent criteria relating to aspects of population loss and decline of range size. A species is assigned to a threat category if it meets the quantitative threshold for at least one criterion.
The criteria and the accompanying rules and guidelines used by IUCN are intended to increase the consistency, transparency, and validity of its categorization system, but this necessitates some compromises that affect the applicability of the system and the species lists that result. In particular, choices were made over the assessment of uncertainty, poorly known species, depleted species, population decline, restricted ranges, and rarity; all of these affect the way red lists should be viewed and used. Processes related to priority setting and the development of national red lists need to take account of some assumptions in the formulation of the criteria.

[source] Minocycline-Induced Hyperpigmentation of the Tongue: Successful Treatment with the Q-Switched Ruby Laser. DERMATOLOGIC SURGERY, Issue 3 2002. Ilyse S. Friedman, MD.

Background: Minocycline-induced hyperpigmentation (MIH) is a benign condition that may persist for years despite abrogation of therapy. The Q-switched ruby laser (QSRL) has been successful in removing such lesions from the skin. To date there is no documentation of QSRL or any other laser being used to treat lingual hyperpigmentation associated with minocycline therapy. Objective: Long-term follow-up results are reported for the use of QSRL to treat lingual hyperpigmentation.
The literature is reviewed comparing the use of different laser systems on MIH. Methods: A 26-year-old woman with pigment changes of the tongue and buccal mucosa due to long-term minocycline therapy was treated in four consecutive sessions with the QSRL (694 nm, 20-ns pulse duration, 6.5 mm spot size) at 3.6–4.0 J/cm2. Results: A 90% resolution was achieved after three treatments. After the final treatment the lesions were completely gone. No side effects were reported, and no new pigment was detected at follow-up. Conclusion: Treatment with the QSRL is a safe and effective strategy for treating hyperpigmentation of the tongue associated with minocycline therapy.

[source] Non-viral gene therapy for diabetic retinopathy. DRUG DEVELOPMENT RESEARCH, Issue 11 2006 (article first published online: 9 FEB 200). Toshiyuki Oshitari.

Abstract: Diabetic retinopathy results from vascular abnormalities, such as an increase in the permeability of retinal vessels, and from retinal neurodegeneration, irreversible changes that occur early in the course of the disease. To block the vascular and neuronal complications associated with the development and progression of diabetic retinopathy, a reasonable strategy would be to prevent the increased vascular permeability and to block neuronal cell death. The purpose of this review is to present the non-viral strategies being used to block the neurovascular abnormalities and neuronal cell death observed in the early stages of diabetic retinopathy, in order to prevent the onset or progression of the disease. Among the non-viral gene therapeutic techniques being used are electroporation of selected genes, injection of antisense oligonucleotides, and injection of small interfering RNAs. The results obtained by these methods are discussed, as is the potential of these therapeutic strategies to prevent the onset or progression of the neurovascular abnormalities in diabetic retinopathy.
Drug Dev. Res. 67:835–841, 2006. © 2007 Wiley-Liss, Inc.

[source] Requiring suspended drunk drivers to install alcohol interlocks to reinstate their licenses: effective? ADDICTION, Issue 8 2010. Robert B. Voas.

ABSTRACT. Aims: To evaluate a new method being used by some states for motivating interlock installation by requiring it as a prerequisite to reinstatement of the driver's license. Design: The driving records of Florida DWI offenders convicted between July 2002 and June 2008 were analyzed to determine the proportion of offenders subject to the interlock requirement who installed interlocks. Setting: Most driving-while-impaired (DWI) offenders succeed in avoiding state laws requiring the installation of a vehicle alcohol interlock. Participants: A total of 82,318 Florida DWI offenders. Findings: Because of long periods of complete suspension during which no driving was permitted, and because of failure to complete all the requirements imposed by the court, only 21,377 of the 82,318 offenders studied qualified for reinstatement; however, 93% of those who qualified did install interlocks to be reinstated. Conclusions: Because of the lengthy license suspensions and other barriers that offenders face in qualifying for reinstatement, it is not clear that requiring a period on the interlock as a prerequisite to reinstatement will greatly increase the current installation rate.

[source] The Electrochemical Properties of a Co(TPP)/Tetraphenylborate Modified Glassy Carbon Electrode: Application to Dopamine and Uric Acid Analysis. ELECTROANALYSIS, Issue 5 2006. Yunlong Zeng.

Abstract: We report the combination of the charge-repelling property of the tetraphenylborate (TPB) anion and the electrooxidation catalytic effect of cobalt(II) tetraphenylporphyrin (CoTPP) embedded in a sol-gel ceramic film to develop a modified glassy carbon electrode (CoTPP-TPB-SGGCE) for the simultaneous determination of dopamine (DA) and uric acid (UA).
The optimized CoTPP-TPB-SGGCE shows excellent sensitivity and selectivity for DA and UA analysis. An acceptable tolerance of ascorbic acid (AA) as high as 2000-fold is reached for the determination of trace DA and UA. In the presence of 0.10 mM AA, the linear concentration range for DA is from 6.0×10−8 to 2.5×10−5 M, and the detection limit is 2.0×10−8 M. For UA, the linear concentration range is from 1.0×10−7 to 3.5×10−5 M, and the detection limit is 7.0×10−8 M. Our study has also demonstrated that the novel CoTPP-TPB-SGGCE shows high stability and reliability. For 6.00 µM DA and UA, a total of 12 measurements were taken in one week, and the relative standard deviations are 2.05% and 2.68%, respectively. No obvious shift of peak current or peak potential is observed over a three-month lifetime test. The response of the sensor is very quick, with a response time of approximately 1 s. Satisfactory results are also achieved when the CoTPP-TPB-SGGCEs are used to detect DA and UA in human urine samples.

[source] Diagnostic Implications of Uric Acid in Electroanalytical Measurements. ELECTROANALYSIS, Issue 14 2005.

Abstract: Urate has a long history in clinical analysis and has served as an important diagnostic in a number of contexts. The increasing interest in metabolic syndrome has led to urate being used in combination with a number of other biomarkers in the assessment of cardiovascular risk. The traditional view of urate as principally an interferent in electrochemical measurement is now gradually being replaced by the realization that its measurement could serve as an invaluable secondary (if not primary) marker when monitoring conditions such as diabetes and heart disease. Rather than attempting to wholly exclude urate electrochemistry, many strategies are being developed that can integrate the urate signal within the device architecture such that a range of biomarkers can be sequentially assessed.
The present review has sought to rationalize the clinical importance that urate measurements could hold in future diagnostic applications, particularly within near-patient testing contexts. The technologies harnessed for its detection, and also those previously employed for its removal, are reviewed with the aim of highlighting how these seemingly contrasting approaches are evolving to aid the development of new sensing devices for clinical analysis.

[source] Nanostructured pillars based on vertically aligned carbon nanotubes as the stationary phase in micro-CEC. ELECTROPHORESIS, Issue 12 2009. Ren-Guei Wu.

Abstract: We present a micro-CEC chip carrying out a highly efficient separation of dsDNA fragments through vertically aligned multi-wall carbon nanotubes (MWCNTs) in a microchannel. The vertically aligned MWCNTs were grown directly in the microchannel to form straight nanopillar arrays as ordered and directional chromatographic supports. 1-Pyrenedodecanoic acid was employed for surface modification of the MWCNT stationary phase to adsorb analytes through hydrophobic interactions. This device was used for separating dsDNA fragments of three different lengths (254, 360, and 572 bp), and fluorescence detection was employed to verify the electrokinetic transport in the MWCNT array. The micro-CEC separation of the three fragments was achieved in less than 300 s at a field strength of 66 V/cm, owing to superior laminar flow patterns and a lower flow resistance resulting from the vertically aligned MWCNTs being used as the stationary phase medium. In addition, a fivefold reduction of band broadening was obtained when the analyte was separated by the chromatographic MWCNT array channel instead of the CE channel. From all of these results, we suggest that an in situ grown and directional MWCNT array can potentially be useful for preparing more diversified forms of stationary phases for highly efficient chip-based electrochromatography.
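The abstract above reports only the field strength (66 V/cm) and an upper bound on the separation time (300 s). A rough apparent-mobility estimate can be sketched from the relation v = μE; note that the effective channel length is not given in the abstract, so the 3 cm used below is purely a hypothetical value for illustration.

```python
# Back-of-the-envelope estimate of apparent electrokinetic mobility in a
# micro-CEC channel. The channel length is NOT reported in the abstract;
# 3.0 cm is a hypothetical value chosen only to illustrate v = mu * E.

def apparent_mobility(length_cm: float, field_v_per_cm: float, time_s: float) -> float:
    """Return apparent mobility in cm^2 V^-1 s^-1 (migration velocity / field)."""
    velocity = length_cm / time_s        # average velocity in cm/s
    return velocity / field_v_per_cm     # cm^2 / (V s)

mu = apparent_mobility(length_cm=3.0, field_v_per_cm=66.0, time_s=300.0)
print(f"apparent mobility ~ {mu:.2e} cm^2/(V s)")
```

With these assumed numbers the estimate is on the order of 10−4 cm²/(V s), a typical magnitude for electrokinetic transport in microchannels; a real estimate would require the actual effective length from the paper.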
[source] Reproduction and metabolism at −10°C of bacteria isolated from Siberian permafrost. ENVIRONMENTAL MICROBIOLOGY, Issue 4 2003. Corien Bakermans.

Summary: We report the isolation and properties of several species of bacteria from Siberian permafrost. Half of the isolates were spore-forming bacteria unable to grow or metabolize at subzero temperatures. Other Gram-positive isolates metabolized, but never exhibited any growth, at −10°C. One Gram-negative isolate metabolized and grew at −10°C, with a measured doubling time of 39 days. Metabolic studies of several isolates suggested that as temperature decreases below +4°C, the partitioning of energy changes, with much more energy being used for cell maintenance. In addition, cells grown at −10°C exhibited major morphological changes at the ultrastructural level.

[source] PRECLINICAL STUDY: Pentylenetetrazole-induced status epilepticus following training does not impair expression of morphine-induced conditioned place preference. ADDICTION BIOLOGY, Issue 2 2009. Jie Zhang.

ABSTRACT: Learning and memory play an important role in morphine addiction. Status epilepticus (SE) can impair spatial and emotional learning and memory. However, little is known about the effects of SE on morphine-induced conditioned place preference (CPP). The present study was designed to investigate the effects of SE on morphine CPP, with food CPP being used as a control. The effects of SE on spatial memory in the Morris water maze (MWM) and Y-maze were also investigated. SE was induced in adult mice by intraperitoneal injection of pentylenetetrazole; control mice received saline. The data indicated that SE had no effect on the formation of morphine CPP; however, the formation of food CPP was blocked by SE. Meanwhile, spatial memory assayed in the MWM and Y-maze was impaired by SE. In addition, the data demonstrated that SE did not cause a lasting disturbance of motor activity or a change in the mice's appetite.
These results suggest that although SE had no effect on morphine CPP, it impaired food CPP and spatial memory in both the MWM and the Y-maze. The mechanisms underlying the memory processes of morphine CPP may therefore differ from those of other types of memory.

[source] Detection of marginal defects of composite restorations with conventional and digital radiographs. EUROPEAN JOURNAL OF ORAL SCIENCES, Issue 4 2002. Rainer Haak.

The purpose of this study was to determine the validity of detecting approximal imperfections of composite fillings using three intraoral radiographic systems in vitro. Class II composite resin restorations (n = 108) with three radiopacities (264, 306, and 443% Al 99.5), of which 27 had marginal openings or overhangs, respectively, were conventionally (Ektaspeed Plus) and digitally (Dexis, Digora) radiographed. Images were assessed by 10 observers for the presence of marginal gaps and overhangs, as well as for their need of restorative treatment, according to a five-point confidence rating scale. The validity of the observations was expressed as areas under receiver operating characteristic (ROC) curves (Aroc). Repeated-measures analysis of variance revealed significant effects of 'radiographic system' and 'diagnostic purpose'. Marginal overhangs (Aroc = 0.90) were significantly easier to diagnose than openings (Aroc = 0.63). Marginal gaps were better detected on conventional and Dexis radiographs than on Digora images. The ranges of sensitivities and specificities of the treatment decision were 0.53–0.56 and 0.87–0.88, respectively. It was concluded that the validity of detecting marginal defects of composite resin restorations on radiographs was only slightly affected by the radiographic system being used. The diagnosis of marginal gaps frequently resulted in false-positive and false-negative decisions.
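The sensitivity and specificity figures in the radiographic study above are standard confusion-matrix quantities; a minimal sketch of how they are computed follows. The counts are hypothetical, since the abstract reports only the resulting ranges (sensitivity 0.53–0.56, specificity 0.87–0.88), not raw data.

```python
# Sensitivity and specificity of a binary treatment decision, as reported in
# the radiographic study above. All counts below are made up for illustration.

def sensitivity(tp: int, fn: int) -> float:
    """True-positive rate: defective margins correctly judged to need treatment."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True-negative rate: sound margins correctly judged not to need treatment."""
    return tn / (tn + fp)

# Hypothetical counts chosen to fall near the reported ranges:
print(sensitivity(tp=54, fn=46))   # 0.54
print(specificity(tn=175, fp=25))  # 0.875
```

A low sensitivity with high specificity, as reported, means many defective margins are missed while sound margins are rarely over-treated, which is consistent with the authors' remark about frequent false-negative decisions.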
[source] Strain-life approach in thermo-mechanical fatigue evaluation of complex structures. FATIGUE & FRACTURE OF ENGINEERING MATERIALS AND STRUCTURES, Issue 9 2007.

ABSTRACT: This paper is a contribution to the strain-life approach for evaluating thermo-mechanically loaded structures. It takes into consideration the uncoupling of stress and damage evaluation and has the option of importing non-linear or linear stress results from finite element analysis (FEA). Multiaxiality is considered with the signed von Mises method. In the developed Damage Calculation Program (DCP), local temperature-stress-strain behaviour is modelled with an operator of the Prandtl type, and damage is estimated by use of the strain-life approach and Skelton's energy criterion. Material data were obtained from standard isothermal strain-controlled low-cycle fatigue (LCF) tests, with linear parameter interpolation or piecewise cubic Hermite interpolation being used to estimate values at unmeasured temperature points. The model is demonstrated with examples of constant-temperature loading and a random force-temperature history. Additional research was done regarding the temperature dependency of the Kp used in the Neuber approximate formula for stress-strain estimation from linear FEA results. The proposed model enables computationally fast thermo-mechanical fatigue (TMF) damage estimations for random load and temperature histories.

[source] Conflict and Rationality: Accounting in Northern Ireland's Devolved Assembly. FINANCIAL ACCOUNTABILITY & MANAGEMENT, Issue 1 2005. Mahmoud Ezzamel.

The purpose of this study is to explore the implications of the rationality of accounting thought and practice as a mediating mechanism in the highly charged, conflict-ridden situation in Northern Ireland (NI). The paper draws on a variety of data sources, including a series of interviews with key actors. There are some indications of accounting information being used to inform discussion and debate at the new Assembly.
However, a number of politicians, from a spectrum of political traditions, do not relate to this new language, and the instability of the process (evidenced by frequent suspensions) discourages learning and engagement. Overall, this suggests that, without greater continuity, there is a limitation on the ability of accounting practices to mediate tensions.

[source] Beyond Mule Kicks: The Poisson Distribution in Geographical Analysis. GEOGRAPHICAL ANALYSIS, Issue 2 2006. Daniel A. Griffith.

The Poisson model, discovered nearly two centuries ago, is the basis for analyses of rare events. Its first applications included descriptions of deaths from mule kicks. More than half a century ago the Poisson model began being used in geographical analysis. Its initial descriptions of geographic distributions of points, disease maps, and spatial flows were accompanied by an assumption of independence. Today this unrealistic assumption is replaced by one allowing for the presence of spatial autocorrelation in georeferenced counts. Contemporary statistical theory has led to the creation of powerful Poisson-based modeling tools for geographically distributed count data.

[source] Reading and Writing the Stasi File: On the Uses and Abuses of the File as (Auto)biography. GERMAN LIFE AND LETTERS, Issue 4 2003. Alison Lewis.

The opening of the Stasi files in 1992, made possible by the Stasi Documents Legislation, was an important symbolic act of reconciliation between victims and perpetrators. For victims, reading their file provided a means of re-appropriating stolen aspects of their lives and rewriting their life histories. This article argues that the Stasi file itself can be viewed as a form of hostile biography, authored by an oppressive state apparatus, that constituted in GDR times an all-powerful written 'technology of power'.
The analogy of secret police files to literary genres enables us to pose a number of questions about the current uses to which the files are being put by victims and perpetrators. Are victims and perpetrators making similar use of their Stasi files in the writing of their autobiographies? What happens when the secret police file is removed from its original bureaucratic context and 'regime of truth' and starts to circulate as a literary artefact in new contexts, for instance, as part of victims' and perpetrators' autobiographies? How is the value of the Stasi file now being judged? Is the file being used principally in the service of truth and reconciliation, as originally intended in the legislation, or does it now circulate in 'regimes of value' that place a higher premium on the accounts of perpetrators, as can be witnessed in the publication of the fictitious 'autobiography' of the notorious secret police informer, Sascha Anderson?

[source] Isolation and X-Ray Structures of Reactive Intermediates of Organocatalysis with Diphenylprolinol Ethers and with Imidazolidinones: A Survey, and Comparison with Computed Structures and with 1-Acyl-imidazolidinones; 1,5-Repulsion and the Geminal-Diaryl Effect at Work. HELVETICA CHIMICA ACTA, Issue 11 2008.

Abstract: Reaction of 2-phenylacetaldehyde with the Me3Si ether of diphenylprolinol, with removal of H2O, gives a crystalline enamine (1). The HBF4 salts of the MePh2Si ether of diphenylprolinol and of 2-(tert-butyl)-3-methyl- and 5-benzyl-2,2,3-trimethyl-1,3-imidazolidin-4-one react with cinnamaldehyde to give crystalline iminium salts 2, 3, and 4. Single crystals of the enamine and of two iminium salts, 2 and 3, were subjected to X-ray structure analysis (Figs. 1, 2, and 6), and a 2D-NMR spectrum of the third iminium salt was recorded (Fig. 7).
The crystal and NMR structures confirm the commonly accepted, general structures of the two types of reactive intermediates in organocatalysis with the five-membered heterocycles, i.e., D and E (Scheme 2). Fine details of the crystal structures are discussed in view of the observed stereoselectivities of the corresponding reactions with electrophiles and nucleophiles. Structures 1 and 2 are compared with those of other diphenylprolinol derivatives (from the Cambridge File CSD; Table 1) and discussed in connection with other reagents and ligands containing geminal diaryl groups that are being used in enantioselective synthesis (Fig. 4). The iminium ions 3 and 4 are compared with N-acylated imidazolidinones F and G (Figs. 9, 12, and 13, and Table 3), and common structural aspects, such as minimization of 1,5-repulsion (the 'A1,3 effect'), are discussed. The crystal structures of the simple diphenylprolinol·HBF4 salt (Fig. 3) and of Boc- and benzoyl-(tert-butyl)methyl-imidazolidinone (Boc-BMI and Bz-BMI, respectively; Figs. 10 and 11) are also reported. Finally, the crystal structures are compared with previously published theoretical structures obtained from high-level DFT calculations (Figs. 5 and 8, and Table 2). Delicate details, including pyramidalization of trigonal N-atoms, distortions around iminium C=N bonds, shielding of diastereotopic faces, and the π-interaction between a benzene ring and a Me group, match so well with, and actually predicted, the experimental results that one may ask whether such calculations will soon be carried out routinely before going to the laboratory for experimental optimization.
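One of the "delicate details" mentioned above, pyramidalization of a trigonal nitrogen, is commonly quantified as 360° minus the sum of the three bond angles at N (zero for a perfectly planar sp2 nitrogen). A minimal sketch with hypothetical coordinates, not taken from the reported structures:

```python
import math

# Pyramidalization of a trigonal N atom from atomic coordinates, measured as
# 360 deg minus the sum of the three substituent angles at N. Coordinates
# below are hypothetical and serve only to illustrate the calculation.

def angle_deg(a, b, c):
    """Angle a-b-c in degrees for 3D points given as (x, y, z) tuples."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

def pyramidalization_deg(n, s1, s2, s3):
    """360 deg minus the sum of the three substituent angles at nitrogen n."""
    return 360.0 - (angle_deg(s1, n, s2) + angle_deg(s2, n, s3) + angle_deg(s1, n, s3))

# A perfectly planar sp2 nitrogen (three substituents 120 deg apart) gives ~0:
planar = pyramidalization_deg(
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (-0.5, 0.8660254037844386, 0.0),
    (-0.5, -0.8660254037844386, 0.0),
)
print(abs(round(planar, 6)))
```

Any displacement of the nitrogen out of the substituent plane makes this quantity positive, which is how the distortions discussed in the abstract can be compared between X-ray and DFT structures.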
[source] Phase precession and phase-locking of hippocampal pyramidal cells. HIPPOCAMPUS, Issue 3 2001. Amitabha Bose.

Abstract: We propose that the activity patterns of CA3 hippocampal pyramidal cells in freely running rats can be described as a temporal phenomenon, where the timing of bursts is modulated by the animal's running speed. With this hypothesis, we explain why pyramidal cells fire in specific spatial locations, and how place cells phase-precess with respect to the EEG theta rhythm for rats running on linear tracks. We are also able to explain why wheel cells phase-lock with respect to the theta rhythm for rats running in a wheel. Using biophysically minimal models of neurons, we show how the same network of neurons displays these activity patterns. The different rhythms are the result of inhibition being used in different ways by the system. The inhibition is produced by anatomically and physiologically diverse types of interneurons, whose role in controlling the firing patterns of hippocampal cells we analyze. Each firing pattern is characterized by a different set of functional relationships between network elements. Our analysis suggests a way to understand these functional relationships and the transitions between them. Hippocampus 2001;11:204–215. © 2001 Wiley-Liss, Inc.

[source] Modulation of temporally coherent brain networks estimated using ICA at rest and during cognitive tasks. HUMAN BRAIN MAPPING, Issue 7 2008. Vince D. Calhoun.

Abstract: Brain regions that exhibit temporally coherent fluctuations have been increasingly studied using functional magnetic resonance imaging (fMRI). Such networks are often identified in the context of an fMRI scan collected during rest (and are thus called "resting-state networks"); however, they are also present during, and modulated by, the performance of a cognitive task. In this article, we refer to such networks as temporally coherent networks (TCNs).
Although there is still some debate over the physiological source of these fluctuations, TCNs are being studied in a variety of ways. Recent studies have examined ways TCNs can be used to identify patterns associated with various brain disorders (e.g., schizophrenia, autism, or Alzheimer's disease). Independent component analysis (ICA) is one method being used to identify TCNs. ICA is a data-driven approach that is especially useful for decomposing activation during complex cognitive tasks in which multiple operations occur simultaneously. In this article we review recent TCN studies, with emphasis on those that use ICA. We also present new results showing that TCNs are robust and can be consistently identified at rest and during performance of a cognitive task in healthy individuals and in patients with schizophrenia. In addition, multiple TCNs show temporal and spatial modulation during the cognitive task versus rest. In summary, TCNs show considerable promise as potential imaging biomarkers of brain diseases, though each network needs to be studied in more detail. Hum Brain Mapp, 2008. © 2008 Wiley-Liss, Inc.

[source] Radio-tracking gravel particles in a large braided river in New Zealand: a field test of the stochastic theory of bed load transport proposed by Einstein. HYDROLOGICAL PROCESSES, Issue 3 2001. H. M. Habersack.

Abstract: Hans A. Einstein initiated a probabilistic approach to modelling sediment transport in rivers. His formulae were based on theory and were stimulated by laboratory investigations. The theory assumes that bed load movement occurs in individual steps of rolling, sliding, or saltation, separated by rest periods. So far, very few attempts have been made to measure these stochastic elements in nature. For the first time, this paper presents results of radio-tracing the travel paths of individual particles in a large braided gravel-bed river: the Waimakariri River of New Zealand.
As proposed by Einstein, it was found that rest periods can be modelled by an exponential distribution, but particle step lengths are better represented by a gamma distribution. Einstein assumed an average travel distance of 100 grain-diameters for any bed load particle between consecutive points of deposition, but larger values of 6.7 m (about 150 grain-diameters) and 6.1 m (about 120 grain-diameters) were measured for the two test particle sizes. Together with other available large-scale field data, a dependence of the mean step length on particle diameter relative to the D50 of the bed surface was found. During small floods, the time spent in movement represents only 2.7% of the total time from erosion to deposition. The increase in the percentage of time spent in transport during larger floods means that it then has to be accounted for in stochastic transport models. Tracing the flow path of bed load particles between erosion and deposition sites is a step towards explaining the interactions between sediment transport and river morphology. Copyright © 2001 John Wiley & Sons, Ltd.

[source] A numerical method for the study of shear band propagation in soft rocks. INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 13 2009. Marta Castelli.

Abstract: This paper investigates the possibility of interpreting progressive shear failure in hard soils and soft rocks as the result of shear propagation of a pre-existing natural defect. This is done through the application of the principles of fracture mechanics, with a slip-weakening model (SWM) being used to simulate the non-linear zone at the tips of the discontinuity. A numerical implementation of the SWM in a computation method based on the boundary element technique of the displacement discontinuity method (DDM) is presented. The crack and the non-linear zone at the advancing tip are represented through a set of elements, where the displacement discontinuity (DD) in the tangential direction is determined on the basis of a friction law.
A residual friction angle is assumed on the crack elements. Shear resistance decreases on elements in the non-linear zone from a peak value at the tip, which is characteristic of the intact material, to the residual value. The simulation of a uniaxial compressive test in plane-strain conditions is carried out to exemplify the numerical methodology. The results emphasize the role played by the critical DD in the mechanical behaviour of the specimen. A validation of the model is shown through the back-analysis of some experimental observations. The results of this back-analysis show that a non-linear fracture mechanics approach seems very promising for simulating experimental results, in particular with regard to the shear band evolution pattern. Copyright © 2009 John Wiley & Sons, Ltd.

[source] An SPH shell formulation for plasticity and fracture analysis in explicit dynamics. INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 7 2008. B. Maurel.

Abstract: This paper introduces a new modeling method suitable for the simulation of shell fracture under impact. The method relies on an entirely meshless approach based on the smoothed particle hydrodynamics (SPH) method. The paper presents the SPH shell formulation being used, as well as the different test cases used for its validation. A plasticity model of the global type throughout the thickness is also proposed and validated. Finally, in order to illustrate the capabilities of the method, fracture simulations using a simplified fracture criterion are presented. Copyright © 2008 John Wiley & Sons, Ltd.

[source] Tunable Memory Characteristics of Nanostructured, Nonvolatile Charge Trap Memory Devices Based on a Binary Mixture of Metal Nanoparticles as a Charge Trapping Layer. ADVANCED MATERIALS, Issue 2 2009. Jang-Sik Lee.

Tunable memory characteristics are investigated according to the metal-nanoparticle species being used in the memory devices.
The memory devices are fabricated using diblock copolymer micelles as templates to synthesize nanoparticles of cobalt, gold, and a binary mixture thereof. Programmable memory characteristics show different charging/discharging behaviors according to the storage element configurations, as confirmed by nanoscale device characterization. [source] An algorithm for evaluating the ethics of a placebo-controlled trial INTERNATIONAL JOURNAL OF CANCER, Issue 5 2001 Robert J. Amdur M.D. Abstract The purpose of this article is to clarify the decision points that are important to consider when evaluating the ethics of a placebo-controlled trial. The ethical requirements for research involving human subjects are reviewed, and the rationale for and potential problems with concomitant placebo control are explained. A series of case discussions is used to illustrate each decision point. The critical decision points in the evaluation of the ethics of a placebo-controlled trial are as follows: (i) Is placebo being used in place of standard therapy? (ii) Is standard therapy likely to be effective? (iii) Is the toxicity of standard therapy such that patients routinely refuse this treatment? (iv) Could the use of placebo result in severe suffering or irreversible harm? (v) Is the variability in the placebo response such that it is reasonable to consider other options for the control group? (vi) Would a reasonable person with an average degree of altruism and risk aversion agree to participate in this study? The algorithm presented in this article gives researchers and research monitors (such as Institutional Review Board members) the tools they need to evaluate the ethics of a study that uses concomitant placebo control. © 2001 Wiley-Liss, Inc. [source] Analysis of enthalpy change with/without a heat pipe heat exchanger in a tropical air conditioning system INTERNATIONAL JOURNAL OF ENERGY RESEARCH, Issue 15 2006 Y. H. Yau Abstract In an earlier paper (Yau, 2006.
Application of a heat pipe heat exchanger to dehumidification enhancement in tropical HVAC systems – a baseline performance characteristics study. Int. J. Thermal Sci., accepted for publication), the baseline performance characteristics of the 8-row wickless heat pipe heat exchanger (HPHX) were established for it being used in a vertical configuration under tropical climate conditions. The present paper covers the tests and simulation conducted on the same experimental HVAC system without the HPHX installed, thereby determining the enthalpy change for the air passing through the chilled water coil (CWC) alone (i.e. without the pre-cooling or reheating effect of the HPHX). These experimental results, in comparison with those already obtained, would also allow an examination of how the reheat recovery with the 8-row HPHX installed was influenced by the same key inlet parameters. The final results show that, for all cases examined, the enthalpy change with an HPHX installed is significantly higher than the enthalpy change without an HPHX installed, demonstrating that the cooling capability of the CWC was enhanced by the HPHX. Copyright © 2006 John Wiley & Sons, Ltd. [source] Thermal performance of aluminium-foam CPU heat exchangers INTERNATIONAL JOURNAL OF ENERGY RESEARCH, Issue 11 2006 H. Mahdi Abstract This study investigates the performance of existing central processing unit (CPU) heat exchangers and compares it with that of aluminium-foam heat exchangers in natural convection using an industrial set-up. Kapton flexible heaters are used to replicate the heat produced by a computer's CPU. A number of thermocouples are connected between the heater and the heat sink being used to measure the component's temperature. The thermocouples are also connected to a data-acquisition card to collect the data using a LabVIEW program. The values obtained for traditional heat exchangers are compared to published data to validate the experiments and set-up.
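The enthalpy change of the air stream across the chilled water coil in the HPHX study above can be estimated from the standard psychrometric approximation for moist-air enthalpy (temperature in °C, humidity ratio w in kg of water vapour per kg of dry air). The inlet and outlet states used below are illustrative, not the paper's data:

```python
def moist_air_enthalpy(t_c, w):
    """Specific enthalpy of moist air, kJ per kg dry air, using the standard
    psychrometric approximation h = 1.006*T + w*(2501 + 1.86*T)."""
    return 1.006 * t_c + w * (2501.0 + 1.86 * t_c)

def coil_enthalpy_change(t_in, w_in, t_out, w_out):
    """Enthalpy removed from the air stream across a cooling coil."""
    return moist_air_enthalpy(t_in, w_in) - moist_air_enthalpy(t_out, w_out)

# Illustrative tropical inlet state cooled and dehumidified by the coil.
dh = coil_enthalpy_change(30.0, 0.020, 12.0, 0.008)  # kJ per kg dry air
```

Because the latent term dominates at tropical humidity ratios, most of the enthalpy change comes from dehumidification rather than sensible cooling, which is why pre-cooling by an HPHX can raise the coil's effective capacity.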
The validated set-up was then utilized to test the aluminium-foam heat exchangers and compare their performance to that of common heat sinks. It is found that thermal resistance is reduced by more than 70% by employing aluminium-foam CPU heat exchangers. The results demonstrate that this material provides an advantage in thermal dissipation under natural convection over most available technologies, as it considerably increases the surface-area-to-volume ratio. Furthermore, the aluminium-foam heat exchangers reduce the overall weight. Copyright © 2005 John Wiley & Sons, Ltd. [source] Directed Evolution of an Enantioselective Enoate-Reductase: Testing the Utility of Iterative Saturation Mutagenesis ADVANCED SYNTHESIS & CATALYSIS (PREVIOUSLY: JOURNAL FUER PRAKTISCHE CHEMIE), Issue 18 2009 Despina Abstract Directed evolution utilizing iterative saturation mutagenesis (ISM) has been applied to the old yellow enzyme homologue YqjM in the quest to broaden its substrate scope, while controlling the enantioselectivity in the bioreduction of a set of substituted cyclopentenone and cyclohexenone derivatives. Guided by the known crystal structure of YqjM, 20 residues were selected as sites for saturation mutagenesis, a pooling strategy based on the method of Phizicky [M. R. Martzen, S. M. McCraith, S. L. Spinelli, F. M. Torres, S. Fields, E. J. Grayhack, E. M. Phizicky, Science 1999, 286, 1153–1155] being used in the GC screening process. The genes of some of the hits were subsequently employed as templates for randomization experiments at the other putative hot spots. Both (R)- and (S)-selective variants were evolved using 3-methylcyclohexenone as the model substrate in the asymmetric bioreduction of the olefinic functionality, only small mutant libraries and thus minimal screening effort being necessary. Some of the best mutants also proved to be excellent catalysts when testing other prochiral substrates, without resorting to additional mutagenesis/screening experiments.
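The thermal-resistance comparison in the CPU heat-exchanger study above reduces to R = ΔT/Q. The temperatures and heater power below are illustrative assumptions, chosen only to show how a greater-than-70% reduction would be computed from thermocouple and heater readings:

```python
def thermal_resistance(t_component, t_ambient, power_w):
    """Heat-sink thermal resistance in K/W: temperature rise of the
    component above ambient divided by the dissipated power."""
    return (t_component - t_ambient) / power_w

# Illustrative readings (not the study's data) for a 10 W heater.
r_conventional = thermal_resistance(78.0, 25.0, 10.0)  # common heat sink
r_foam = thermal_resistance(40.0, 25.0, 10.0)          # aluminium-foam sink
reduction = 1.0 - r_foam / r_conventional              # fractional reduction
```
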
Thus, the results constitute an important step forward in generalizing the utility of ISM as an efficient method in the laboratory evolution of enzymes as catalysts in organic chemistry. [source] Catalytic synthesis of 1,6-dicarbamate hexane over MgO/ZrO2 JOURNAL OF CHEMICAL TECHNOLOGY & BIOTECHNOLOGY, Issue 2 2007 Fang Li Abstract A MgO/ZrO2 catalyst was prepared for the synthesis of 1,6-dicarbamate hexane (HDC) using dimethyl carbonate (DMC) and 1,6-diamine hexane (HDA) as raw materials. When the catalyst is calcined at 600 °C and the MgO loading is 6 wt%, the catalyst exhibits its best activity. When the catalyst concentration is 2 g (100 mL)⁻¹ DMC, n(HDA):n(DMC) = 1:10, and the reaction time is 6 h at reflux temperature, the yield of 1,6-dicarbamate hexane is 53.1%. The HDC yield decreases from 53.1% to 35.3% after the MgO/ZrO2 has been used three times. The deactivation of MgO/ZrO2 may be attributed to the decrease in its specific surface area. Copyright © 2007 Society of Chemical Industry [source] Atrophy and anarchy: third national survey of nursing skill-mix and advanced nursing practice in ophthalmology JOURNAL OF CLINICAL NURSING, Issue 12 2006 Dip Nursing, Wladyslawa J. Czuber-Dochan MSc Aims and objectives. The aims of the study were to investigate the advanced nursing practice and the skill-mix of nurses working in ophthalmology. Background. The expansion of new nursing roles in the United Kingdom in the past decade is set against the background of a nursing shortage. The plan to modernize the National Health Service and improve the efficiency and delivery of healthcare services, as well as to reduce junior doctors' hours, contributes towards a profusion of new and more specialized and advanced nursing roles in various areas of nursing, including ophthalmology. Design. A self-reporting quantitative questionnaire was employed. The study used comparative and descriptive statistical tests.
Method. The questionnaires were distributed to all ophthalmic hospitals and units in the United Kingdom. Hospital and unit managers were responsible for completing the questionnaires. Results. Out of a total of 181 questionnaires, 117 were returned. There is a downward trend in the total number of nurses working in ophthalmology. The results demonstrate more nurses working at an advanced level. However, there is general confusion regarding role interpretation at the advanced level of practice, evident through the wide range of job titles being used. There was inconsistency in the qualifications expected of these nurses. Conclusion. Whilst there are more nurses working at an advanced level, this is set against an ageing workforce and an overall decline in the number of nurses in ophthalmology. There is inconsistency in job titles, grades, roles and qualifications for nurses who work at an advanced or higher level of practice. The Agenda for Change, with its new structure for grading jobs in the United Kingdom, may offer protection and consistency in job titles, pay and qualifications for National Health Service nurse specialists. The Nursing and Midwifery Council needs to provide clear guidelines to practitioners on educational and professional requirements, to protect patients and nurses. Relevance to clinical practice. The findings indicate that there is a need for better regulation of nurses working in advanced nursing practice. [source] An exploration of the factors that influence the implementation of evidence into practice JOURNAL OF CLINICAL NURSING, Issue 8 2004 Jo Rycroft-Malone PhD Background. The challenges of implementing evidence-based practice are complex and varied. Against this background, a framework has been developed to represent the multiple factors that may influence the implementation of evidence into practice.
It is proposed that successful implementation is dependent upon the nature of the evidence being used, the quality of context, and the type of facilitation required to enable the change process. This study sets out to scrutinize the elements of the framework through empirical enquiry. Aims and objectives. The aim of the study was to address the following questions: (1) What factors do practitioners identify as the most important in enabling implementation of evidence into practice? (2) What are the factors practitioners identify that mediate the implementation of evidence into practice? (3) Do the concepts of evidence, context and facilitation constitute the key elements of a framework for getting evidence into practice? Design and methods. The study was conducted in two phases. Phase 1: Exploratory focus groups (n = 2) were conducted to inform the development of an interview guide. This was used with individual key informants in case study sites. Phase 2: Two sites with on-going or recent implementation projects were studied. Within the sites, semi-structured interviews were conducted (n = 17). Results. A number of key issues in relation to the implementation of evidence into practice emerged, including: the nature and role of evidence, relevance and fit with organizational and practice issues, multi-professional relationships and collaboration, the role of the project lead, and resources. Conclusions. The results are discussed with reference to the wider literature and in relation to the on-going development of the framework. Crucially, the growing body of evidence reveals that a focus on individual approaches to implementing evidence-based practice, such as skilling-up practitioners to appraise research evidence, will be ineffective by itself. Relevance to clinical practice. Key elements that require attention in implementing evidence into practice are presented and may provide a useful checklist for future implementation and evaluation projects.
[source] Non-Mandatory Approaches to Environmental Protection JOURNAL OF ECONOMIC SURVEYS, Issue 3 2001 Madhu Khanna The approach to environmental protection has been evolving from a regulation-driven, adversarial 'government-push' approach to a more proactive approach involving voluntary and often 'business-led' initiatives by firms to self-regulate their environmental performance. This has been accompanied by the increasing provision of environmental information about firms and products, which enlists market forces and communities in creating a demand for corporate environmental self-regulation by signaling their preferences for environmentally friendly firms. This paper provides an overview of the non-mandatory approaches being used for environmental protection and surveys the existing theoretical literature analyzing the economic efficiency of such approaches relative to mandatory approaches. It also discusses empirical findings on the factors motivating self-regulation by firms and its implications for their economic and environmental performance. It examines the existing evidence on the extent to which information disclosure is effective in generating pressure from investors and communities on firms to improve their environmental performance. [source] Selection of evolutionary models for phylogenetic hypothesis testing using parametric methods JOURNAL OF EVOLUTIONARY BIOLOGY, Issue 4 2001 B. C. Emerson Recent molecular studies have incorporated the parametric bootstrap method to test a priori hypotheses when the results of molecular-based phylogenies are in conflict with these hypotheses. The parametric bootstrap requires the specification of a particular substitution model, the parameters of which will be used to generate simulated, replicate DNA sequence data sets.
It has been suggested both that (a) the method appears robust to changes in the model of evolution and, alternatively, that (b) as realistic a model of DNA substitution as possible should be used to avoid false rejection of a null hypothesis. Here we empirically evaluate the effect of suboptimal substitution models when testing hypotheses of monophyly with the parametric bootstrap, using data sets of mtDNA cytochrome oxidase I and II (COI and COII) sequences for Macaronesian Calathus beetles, and mitochondrial 16S rDNA and nuclear ITS2 sequences for European Timarcha beetles. Whether a particular hypothesis of monophyly is rejected or accepted appears to be highly dependent on whether the nucleotide substitution model being used is optimal. It appears that a parameter-rich model is either equally or less likely to reject a hypothesis of monophyly when the optimal model is unknown. A comparison of the performance of the Kishino–Hasegawa (KH) test shows it is not as severely affected by the use of suboptimal models, and overall it appears to be a less conservative method with a higher rate of failure to reject null hypotheses. [source]
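The parametric bootstrap procedure discussed above can be outlined generically: fit a substitution model, simulate replicate data sets under the null (monophyly-constrained) model, and compare the observed test statistic with its simulated null distribution. The sketch below is schematic; `simulate_under_null` is a placeholder for whatever phylogenetics tooling generates and scores replicates, not part of any particular package:

```python
def parametric_bootstrap_pvalue(delta_observed, simulate_under_null, n_reps=100):
    """Parametric bootstrap test of monophyly (schematic sketch).

    delta_observed: observed difference in fit (e.g. log-likelihood) between
    the best unconstrained tree and the best tree constrained to the null
    hypothesis of monophyly.
    simulate_under_null: placeholder callable that simulates one replicate
    data set under the fitted null substitution model and returns the same
    statistic recomputed on that replicate.
    """
    null_deltas = [simulate_under_null() for _ in range(n_reps)]
    # Count replicates at least as extreme as the observed statistic; the
    # +1 correction keeps the estimated p-value away from exactly zero.
    n_extreme = sum(1 for d in null_deltas if d >= delta_observed)
    return (n_extreme + 1) / (n_reps + 1)
```

The abstract's point is that the null distribution generated in the simulation step depends on the substitution model chosen, so a suboptimal model can shift `null_deltas` and change whether monophyly is rejected.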