Ones

[Distribution charts omitted: distribution by scientific domains and distribution within chemistry]

Kinds of Ones

  • active ones
  • analytical ones
  • control ones
  • different ones
  • existing ones
  • experimental ones
  • good ones
  • important ones
  • large ones
  • larger ones
  • latter ones
  • longer ones
  • loved ones
  • major ones
  • measured ones
  • minor ones
  • negative ones
  • new ones
  • normal ones
  • novel ones
  • old ones
  • older ones
  • only ones
  • other ones
  • previous ones
  • rare ones
  • recent ones
  • same ones
  • similar ones
  • small ones
  • smaller ones
  • stable ones
  • susceptible ones
  • temperate ones
  • theoretical ones
  • traditional ones
  • urban ones
  • young ones
  • younger ones

Terms modified by Ones

  • ones used

Selected Abstracts


    Two Kinds of Creativity, But Which Ones?

    CREATIVITY AND INNOVATION MANAGEMENT, Issue 3 2004
    Geir Kaufmann
    It is argued that Kirton's theory of styles of creativity is conceptually and methodologically unsound. A solution to the conceptual and methodological dilemmas is offered by way of making a clear-cut distinction between novelty on the stimulus side and novelty on the response side. This distinction is used as a platform for the development of a new taxonomy of different kinds of creativity and intelligent behaviour. A major feature of this new model is the distinction made between proactive and reactive creativity. The implications of this distinction, both for opening new avenues for a more differentiated assessment of creativity and for developing a conceptually firmer and more differentiated platform for new practical training programmes in creativity, are suggested. [source]


    Synthesis and Physical Chemistry of s-Tetrazines: Which Ones are Fluorescent and Why?

    EUROPEAN JOURNAL OF ORGANIC CHEMISTRY, Issue 35 2009
    Yong-Hua Gong
    Abstract New fluorescent tetrazines have been prepared and their electrochemistry and fluorescence efficiency evaluated. The occurrence of fluorescence, as well as its wavelength, was found to be strongly dependent on the substituents, which must be electronegative heteroatoms. This has been rationalized through a computational study showing that the crucial factor is the nature of the HOMO, which determines whether fluorescence occurs. (© Wiley-VCH Verlag GmbH & Co. KGaA, 69451 Weinheim, Germany, 2009) [source]


    A Meta-Analytic Investigation of Job Applicant Faking on Personality Measures

    INTERNATIONAL JOURNAL OF SELECTION AND ASSESSMENT, Issue 4 2006
    Scott A. Birkeland
    This study investigates the extent to which job applicants fake their responses on personality tests. Thirty-three studies that compared job applicant and non-applicant personality scale scores were meta-analyzed. Across all job types, applicants scored significantly higher than non-applicants on extraversion (d=.11), emotional stability (d=.44), conscientiousness (d=.45), and openness (d=.13). For certain jobs (e.g., sales), however, the rank ordering of mean differences changed substantially, suggesting that job applicants distort responses on personality dimensions that are viewed as particularly job relevant. Smaller mean differences were found in this study than those reported by Viswesvaran and Ones (Educational and Psychological Measurement, 59(2), 197-210), who compared scores for induced "fake-good" vs. honest response conditions. Also, direct Big Five measures produced substantially larger differences than did indirect Big Five measures. [source]
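
    The d values quoted above are standardized mean differences (Cohen's d): the gap between applicant and non-applicant means divided by a pooled standard deviation. A minimal sketch of that computation, using made-up numbers rather than any figures from the study:

        import math

        def cohens_d(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
            # Pooled standard deviation across the two groups.
            pooled_sd = math.sqrt(((n_a - 1) * sd_a ** 2 + (n_b - 1) * sd_b ** 2)
                                  / (n_a + n_b - 2))
            return (mean_a - mean_b) / pooled_sd

        # Illustrative numbers only (not taken from the meta-analysis):
        # applicants scoring slightly higher on a conscientiousness scale.
        print(round(cohens_d(3.9, 0.6, 200, 3.6, 0.7, 200), 2))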


    Privatizing Medicaid-Funded Mental Health Services: Trading Old Political Challenges for New Ones

    AMERICAN JOURNAL OF ORTHOPSYCHIATRY, Issue 3 2002
    Matthew N. I. Oliver
    States have aggressively pursued privatizing the management of Medicaid-funded mental health services. Although privatized managed care addresses many concerns, it brings several challenges. This article evaluates the impact of privatization on Medicaid-funded mental health services and highlights several contracting issues that should be considered to ensure high-quality mental health care. [source]


    Ralph's Pretty-Good Grocery versus Ralph's Super Market: Separating Excellent Agencies from the Good Ones

    PUBLIC ADMINISTRATION REVIEW, Issue 1 2001
    Jeff Gill
    What makes a public agency perform at a high level? Some agencies are doing extremely well in their environment and it may be because they are lucky enough to have access to plentiful resources, excellent management, and a supportive public. Unfortunately, cases such as these provide little prescriptive evidence for public managers looking to improve their own agency's performance. We apply a new quantitative technique (SWAT) to educational outcome data for 534 school districts in Texas and identify those districts doing extremely well given their fixed and often limited inputs. This approach is useful because the truly superior agencies are those that do more with less, and public managers who lead their organizations to high performance levels despite limited resources provide potential solutions to others. [source]


    Rural Bioethical Issues of the Elderly: How Do They Differ From Urban Ones?

    THE JOURNAL OF RURAL HEALTH, Issue 4 2001
    Jacqueline J. Glover Ph.D.
    ABSTRACT: Typical ethical issues in health care for the elderly include decision making for elderly patients with and without capacity, advance directives, the use of life-sustaining technologies, and questions of access to services and justice. Obviously the same issues are relevant for elderly patients in rural settings. But the unique features of rural living add another dimension to ethical discourse and the care of patients, namely the primary importance of relationships. Rural bioethics is based on an ethic of familiarity, which alters our attention to such issues as confidentiality, multiple relationships, scope of practice, and access issues. The following article briefly outlines the unique features of rural bioethics and provides a case analysis. [source]


    Granularity in Relational Formalisms, with Application to Time and Space Representation

    COMPUTATIONAL INTELLIGENCE, Issue 4 2001
    Jérôme Euzenat
    Temporal and spatial phenomena can be seen at a more or less precise granularity, depending on the kind of perceivable details. As a consequence, the relationship between two objects may differ depending on the granularity considered. Merging representations of different granularity may therefore raise problems. This paper presents general rules of granularity conversion in relation algebras. Granularity is considered independently of the specific relation algebra, by investigating operators for converting a representation from one granularity to another and presenting six constraints that they must satisfy. The constraints are shown to be independent and consistent and general results about the existence of such operators are provided. The constraints are used to generate the unique pairs of operators for converting qualitative temporal relationships (upward and downward) from one granularity to another. Then two fundamental constructors (product and weakening) are presented: they permit the generation of new qualitative systems (e.g. space algebra) from existing ones. They are shown to preserve most of the properties of granularity conversion operators. [source]


    Impulse-based dynamic simulation in linear time

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 4-5 2007
    Jan Bender
    Abstract This paper describes an impulse-based dynamic simulation method for articulated bodies which has a linear time complexity. Existing linear-time methods are either based on a reduced-coordinate formulation or on Lagrange multipliers. The impulse-based simulation has advantages over these well-known methods. Unlike reduced-coordinate methods, it handles nonholonomic constraints, such as velocity-dependent ones, and is very easy to implement. In contrast to Lagrange multiplier methods, the impulse-based approach has no drift problem and needs no additional stabilisation. The presented method computes a simulation step in O(n) time for acyclic multi-body systems containing equality constraints. Closed kinematic chains can be handled by dividing the model into different acyclic parts. Each of these parts is solved independently of the others. The dependencies between the single parts are solved by an iterative method. In the same way, inequality constraints can be integrated in the simulation process in order to handle collisions and permanent contacts with dynamic and static friction. Copyright © 2007 John Wiley & Sons, Ltd. [source]
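
    As a rough illustration of the impulse idea only (not the paper's O(n) articulated-body algorithm, and, unlike it, subject to positional drift), the sketch below repeatedly applies impulses that cancel the relative velocity along each distance constraint of a small particle chain; the particle setup, masses and iteration count are invented for the example:

        import numpy as np

        def step(x, v, w, pairs, dt=0.01, iters=20):
            """One time step: gravity on the free particles, then an iterative
            impulse solver that removes the relative velocity along each
            distance constraint.  w holds inverse masses (0 = pinned)."""
            v = v + dt * np.array([0.0, -9.81]) * (np.array(w) > 0)[:, None]
            for _ in range(iters):                       # sweep constraints repeatedly
                for i, j in pairs:
                    n = x[j] - x[i]
                    n = n / np.linalg.norm(n)            # constraint direction
                    rel = np.dot(v[j] - v[i], n)         # relative speed along it
                    p = -rel / (w[i] + w[j])             # impulse magnitude
                    v[i] -= (p * w[i]) * n
                    v[j] += (p * w[j]) * n
            return x + dt * v, v

        # A two-link chain hanging from a pinned anchor particle.
        x = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
        v = np.zeros_like(x)
        w = [0.0, 1.0, 1.0]                              # first particle is pinned
        for _ in range(200):
            x, v = step(x, v, w, pairs=[(0, 1), (1, 2)])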


    A program for the design of linear time invariant control systems: CDMCAD

    COMPUTER APPLICATIONS IN ENGINEERING EDUCATION, Issue 3 2004
    M. Koksal
    Abstract The Coefficient Diagram Method (CDM) is a new method proposed for the analysis and design of linear time-invariant control systems. Control system design by this method results in satisfactory stability, time response and robustness properties compatible with, and in most cases better than, those obtained by other existing design methods. In this study, the design procedure described in the literature for CDM is improved so that a systematic and easy tool, with sufficient and understandable detail, is presented. A visual toolbox, which can be used efficiently both for education and research, is built on the presented procedure using MATLAB. © 2004 Wiley Periodicals, Inc. Comput Appl Eng Educ 12: 165-174, 2004; Published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/cae.20011 [source]
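
    For readers unfamiliar with CDM, its design targets are usually stated through the stability indices and the equivalent time constant of the closed-loop characteristic polynomial. A small sketch of those quantities, assuming the commonly cited definitions (gamma_i = a_i^2 / (a_{i+1} * a_{i-1}), tau = a_1 / a_0) and an invented example polynomial:

        def cdm_indices(a):
            """Stability indices and equivalent time constant for a
            characteristic polynomial a[0] + a[1]*s + ... + a[n]*s**n."""
            gammas = [a[i] ** 2 / (a[i + 1] * a[i - 1]) for i in range(1, len(a) - 1)]
            tau = a[1] / a[0]
            return gammas, tau

        # Invented third-order example; Manabe's standard form aims for
        # gamma_1 = 2.5 and gamma_i = 2 for the higher indices.
        gammas, tau = cdm_indices([1.0, 2.5, 2.5, 1.25])
        print([round(g, 2) for g in gammas], tau)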


    Detection of bacterial DNA by PCR and reverse hybridization in the 16S rRNA gene with particular reference to neonatal septicemia

    ACTA PAEDIATRICA, Issue 2 2001
    S Shang
    Aim: The clinical diagnosis of sepsis is difficult, particularly in neonates. It is necessary to develop a rapid and reliable method for detecting bacteria in blood and cerebrospinal fluid (CSF). Polymerase chain reaction (PCR) and reverse hybridization of the 16S rRNA gene would permit fast and sensitive determination of the presence of bacteria and differentiate gram-positive bacteria from gram-negative ones in clinical specimens. Methods: We developed a pair of primers according to the gene encoding 16S rRNA found in all bacteria. DNA fragments from different bacterial species and from clinical samples were detected with PCR, and with reverse hybridization using a universal bacterial probe, a gram-positive probe and a gram-negative probe. Results: A 371 bp DNA fragment was amplified from 20 different bacterial species. No signal was observed when human DNA and viruses were used as templates. The sensitivity could be improved to 10^-12 g. All 26 culture-positive clinical samples (22 blood samples and 4 CSF samples) were positive with PCR. The gram-negative and gram-positive probes hybridized to clinical samples and to known bacterial controls, as predicted by Gram's stain characteristics. Conclusions: Our results suggest that the method of PCR and reverse hybridization is rapid, sensitive and specific in detecting bacterial infections. This finding may be significant in the clinical diagnosis of sepsis in neonates. [source]


    Energy-Based Image Deformation

    COMPUTER GRAPHICS FORUM, Issue 5 2009
    Z. Karni
    Abstract We present a general approach to shape deformation based on energy minimization, and applications of this approach to the problems of image resizing and 2D shape deformation. Our deformation energy generalizes that found in the prior art, while still admitting an efficient algorithm for its optimization. The key advantage of our energy function is the flexibility with which the set of "legal transformations" may be expressed; these transformations are the ones which are not considered to be distorting. This flexibility allows us to pose the problems of image resizing and 2D shape deformation in a natural way and generate minimally distorted results. It also allows us to strongly reduce undesirable foldovers or self-intersections. Results of both algorithms demonstrate the effectiveness of our approach. [source]


    Automatic Facsimile of Chinese Calligraphic Writings

    COMPUTER GRAPHICS FORUM, Issue 7 2008
    Songhua Xu
    Abstract Imitating personal handwriting is non-trivial. In this paper, we attempt to address the challenging problem of automatic handwriting facsimile. We focus on Chinese calligraphic writings due to their rich variation in style, high artistic value and the fact that they are among the most difficult candidates for the problem. We first analyze the structures and shapes of the constituent components, i.e., strokes and radicals, of characters in sample calligraphic writings by the same writer. To generate calligraphic writing in the style of the writer, we facsimile the individual character elements as well as the layout relationships used to compose the character, both in the writer's personal writing style. To test our algorithm, we compare our facsimile results of Chinese calligraphic writings with the original writings. Our results are found to be acceptable for most cases, some of which are difficult to differentiate from the real ones. More results and supplementary materials are provided in our project website at http://www.cs.hku.hk/~songhua/facsimile/. [source]


    Automatic Conversion of Mesh Animations into Skeleton-based Animations

    COMPUTER GRAPHICS FORUM, Issue 2 2008
    Edilson De Aguiar
    Abstract Recently, it has become increasingly popular to represent animations not by means of a classical skeleton-based model, but in the form of deforming mesh sequences. The reason for this new trend is that novel mesh deformation methods as well as new surface-based scene capture techniques offer a great level of flexibility during animation creation. Unfortunately, the resulting scene representation is less compact than skeletal ones and there is not yet a rich toolbox available which enables easy post-processing and modification of mesh animations. To bridge this gap between the mesh-based and the skeletal paradigm, we propose a new method that automatically extracts a plausible kinematic skeleton, skeletal motion parameters, as well as surface skinning weights from arbitrary mesh animations. By this means, deforming mesh sequences can be fully automatically transformed into fully rigged virtual subjects. The original input can then be quickly rendered based on the new compact bone and skin representation, and it can be easily modified using the full repertoire of already existing animation tools. [source]


    A Polymorphic Dynamic Network Loading Model

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 2 2008
    Nie Yu (Marco)
    The polymorphism, realized through a general node-link interface and proper discretization, offers several prominent advantages. First of all, PDNL allows road facilities in the same network to be represented by different traffic flow models based on the tradeoff of efficiency and realism and/or the characteristics of the targeted problem. Second, new macroscopic link/node models can be easily plugged into the framework and compared against existing ones. Third, PDNL decouples links and nodes in network loading, and thus opens the door to parallel computing. Finally, PDNL keeps track of individual vehicular quanta of arbitrary size, which makes it possible to replicate analytical loading results as closely as desired. PDNL, thus, offers an ideal platform for studying both analytical dynamic traffic assignment problems of different kinds and macroscopic traffic simulation. [source]


    Risk Modeling of Dependence among Project Task Durations

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 6 2007
    I-Tung Yang
    The assessments, however, can be strongly influenced by the dependence between task durations. In light of the need to address the dependence, the present study proposes a computer simulation model to incorporate and augment NORTA, a method for multivariate random number generation. The proposed model allows arbitrarily specified marginal distributions for task durations (need not be members of the same distribution family) and any desired correlation structure. This level of flexibility is of great practical value when systematic data is not available and planners have to rely on experts' subjective estimation. The application of the proposed model is demonstrated through scheduling a road pavement project. The proposed model is validated by showing that the sample correlation coefficients between task durations closely match the originally specified ones. Empirical comparisons between the proposed model and two conventional approaches, PERT and conventional simulation (without correlations), are used to illustrate the usefulness of the proposed model. [source]
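
    NORTA ("NORmal To Anything") is the sampling engine the model builds on: draw correlated standard normals, map them to uniforms with the normal CDF, and push the uniforms through each task's inverse marginal CDF. A minimal sketch under assumed marginals and an assumed correlation matrix (and omitting the correlation-matching adjustment that a full NORTA implementation performs on the normal correlations):

        import numpy as np
        from scipy import stats

        def norta_sample(n, corr, marginals, seed=None):
            """Correlated normals -> uniforms -> arbitrary marginal distributions."""
            rng = np.random.default_rng(seed)
            z = rng.standard_normal((n, len(marginals))) @ np.linalg.cholesky(corr).T
            u = stats.norm.cdf(z)                        # uniforms in (0, 1)
            return np.column_stack([m.ppf(u[:, k]) for k, m in enumerate(marginals)])

        # Two correlated task durations with different (assumed) distributions.
        corr = np.array([[1.0, 0.6],
                         [0.6, 1.0]])
        marginals = [stats.lognorm(s=0.4, scale=10.0),          # e.g. earthwork duration
                     stats.triang(c=0.3, loc=5.0, scale=10.0)]  # e.g. paving duration
        durations = norta_sample(10_000, corr, marginals, seed=1)
        print(np.corrcoef(durations, rowvar=False).round(2))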


    Increasing data reuse of sparse algebra codes on simultaneous multithreading architectures

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 15 2009
    J. C. Pichel
    Abstract In this paper the problem of the locality of sparse algebra codes on simultaneous multithreading (SMT) architectures is studied. In this kind of architecture many hardware structures are dynamically shared among the running threads. This puts a lot of stress on the memory hierarchy, and poor locality, both inter-thread and intra-thread, may become a major bottleneck in the performance of a code. This behavior is even more pronounced when the code is irregular, which is the case for sparse matrix ones. Therefore, techniques that increase the locality of irregular codes on SMT architectures are important to achieve high performance. This paper proposes a data reordering technique specially tuned for this kind of architecture and code. It is based on a locality model developed by the authors in previous works. The technique has been tested, first, using a simulator of an SMT architecture, and subsequently, on a real architecture, Intel's Hyper-Threading. Important reductions in the number of cache misses have been achieved, even when the number of running threads grows. When applying the locality improvement technique, we also decrease the total execution time and improve the scalability of the code. Copyright © 2009 John Wiley & Sons, Ltd. [source]
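
    The reordering technique itself builds on the authors' locality model, which the abstract does not spell out. Purely to illustrate what "data reordering" of a sparse matrix means in practice, here is a toy heuristic (not the authors' method) that permutes CSR rows so that rows touching nearby columns become adjacent:

        import numpy as np
        from scipy import sparse

        def reorder_rows(A):
            """Toy heuristic: sort rows by the mean column index of their nonzeros,
            so consecutive rows reuse nearby entries of x when computing y = A @ x."""
            A = A.tocsr()
            keys = np.full(A.shape[0], np.inf)
            for i in range(A.shape[0]):
                cols = A.indices[A.indptr[i]:A.indptr[i + 1]]
                if cols.size:
                    keys[i] = cols.mean()
            perm = np.argsort(keys)
            return A[perm, :], perm

        # Illustrative use on a random sparse matrix.
        A = sparse.random(2000, 2000, density=0.005, format="csr", random_state=0)
        B, perm = reorder_rows(A)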


    On coordination and its significance to distributed and multi-agent systems

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 4 2006
    Sascha Ossowski
    Abstract Coordination is one of those words: it appears in most science and social fields, in politics, warfare, and it is even the subject of sports talks. While the usage of the word may convey different ideas to different people, the definition of coordination in all fields is quite similar: it relates to the control, planning, and execution of activities that are performed by distributed (perhaps independent) actors. Computer scientists involved in the field of distributed systems and agents focus on the distribution aspect of this concept. They see coordination as a separate field from all the others, a field that rather complements standard fields such as the ones mentioned above. This paper focuses on explaining the term coordination in relation to distributed and multi-agent systems. Several approaches to coordination are described and put in perspective. The paper finishes with a look at what we are calling emergent coordination and its potential for efficiently handling coordination in open environments. Copyright © 2005 John Wiley & Sons, Ltd. [source]


    Identification and authentication of integrated circuits

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 11 2004
    Blaise Gassend
    Abstract This paper describes a technique to reliably and securely identify individual integrated circuits (ICs) based on the precise measurement of circuit delays and a simple challenge-response protocol. This technique could be used to produce key-cards that are more difficult to clone than ones involving digital keys on the IC. We consider potential avenues of attack against our system, and present candidate implementations. Experiments on Field Programmable Gate Arrays show that the technique is viable, but that our current implementations could require some strengthening before they can be considered secure. Copyright © 2004 John Wiley & Sons, Ltd. [source]
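
    The protocol pairs delay-based measurements on the IC with stored challenge-response pairs. The toy model below mimics that flow in software (random per-chip "delays" stand in for manufacturing variation, and all sizes are arbitrary); it is only meant to show the enrollment/authentication shape, not the authors' circuit:

        import random

        class ToyDelayPUF:
            """Software stand-in for a chip: per-stage delay offsets are fixed at
            'manufacture' time; a challenge selects how they are combined."""
            def __init__(self, seed, stages=64):
                rng = random.Random(seed)
                self.delays = [rng.gauss(0.0, 1.0) for _ in range(stages)]

            def respond(self, challenge):            # challenge: tuple of 0/1 bits
                total = sum(d if bit else -d for bit, d in zip(challenge, self.delays))
                return 1 if total > 0 else 0

        rng = random.Random(0)
        chip = ToyDelayPUF(seed=42)

        # Enrollment: the verifier records challenge-response pairs for the genuine chip.
        challenges = [tuple(rng.randint(0, 1) for _ in range(64)) for _ in range(128)]
        crps = [(c, chip.respond(c)) for c in challenges]

        # Authentication: a presented device must reproduce the recorded responses.
        def authenticate(device, crps):
            return all(device.respond(c) == r for c, r in crps)

        print(authenticate(chip, crps))                   # the enrolled chip passes
        print(authenticate(ToyDelayPUF(seed=7), crps))    # a "clone" almost surely fails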


    Motility-induced but not vasoactive intestinal peptide-induced increase in luminal alkalinization in rat duodenum is dependent on luminal Cl−

    ACTA PHYSIOLOGICA, Issue 2 2010
    L. Pihl
    Abstract Aim: To investigate whether the motility- and the vasoactive intestinal peptide (VIP)-induced increase in luminal alkalinization in the duodenum is dependent on luminal Cl−. Methods: Experiments were performed in anaesthetized rats in vivo. The proximal duodenum was perfused luminally with an isotonic solution containing zero or low Cl−, and the effects on luminal alkalinization, motility, fluid flux and epithelial permeability were determined. Parecoxib, a COX-2 inhibitor, was used to induce duodenal contractions. Results: Control rats lacked duodenal wall contractions while parecoxib-treated ones exhibited contractions throughout the experiment. Most animals had a net fluid absorption during the perfusion with isotonic NaCl. Luminal alkalinization was about 100% higher in parecoxib-treated rats than in controls. Cl−-free solutions did not affect epithelial permeability or motility but decreased luminal alkalinization by ~50% and decreased net fluid absorption in both control and parecoxib-treated animals. Reduction in luminal Cl− decreased alkalinization in a concentration-dependent manner. The parecoxib-induced increase in alkalinization was markedly reduced in the absence of luminal Cl−. VIP increased luminal alkalinization and induced fluid secretion. The lack of luminal Cl− did not affect the VIP-induced increase in alkalinization but reduced fluid secretion. Conclusions: The parecoxib-induced increase in luminal alkalinization is highly dependent on luminal Cl−, and it is proposed that COX-2 inhibition, via induction of duodenal motility, enhances HCO3− efflux through stimulation of apical Cl−/HCO3− exchange in duodenal epithelial cells. Although the VIP-induced stimulation of fluid secretion is partly dependent on luminal Cl−, the VIP-induced increase in luminal alkalinization is not. [source]


    Considering effective divorce mediation: Three potential factors

    CONFLICT RESOLUTION QUARTERLY, Issue 4 2002
    Jerry Gale
    The purpose of this exploratory, qualitative study was to examine mediator effects by employing a repeated-measures research design in which we videotaped mediators working with actors and a scripted divorce case scenario. What factors distinguish higher-rated mediators from lower-rated ones? Our discourse analysis of four divorce mediations suggested three significant factors of influence that function interactively and affect both mediation outcome and process. Ethical implications regarding how mediators achieve success in these three domains are discussed. This article presents implications for researchers, trainers, and practitioners; it suggests important directions for future research with nonsimulated mediation. [source]


    Ethanol neurotoxicity and dentate gyrus development

    CONGENITAL ANOMALIES, Issue 3 2008
    Takanori Miki
    ABSTRACT Maternal alcohol ingestion during pregnancy adversely affects the developing fetus, often leading to fetal alcohol syndrome (FAS). One of the most severe consequences of FAS is brain damage that is manifested as cognitive, learning, and behavioral deficits. The hippocampus plays a crucial role in such abilities; it is also known as one of the brain regions most vulnerable to ethanol-induced neurotoxicity. Our recent studies using morphometric techniques have further shown that ethanol neurotoxicity appears to affect the development of the dentate gyrus in a region-specific manner; it was found that early postnatal ethanol exposure causes a transitory deficit in the hilus volume of the dentate gyrus. It is strongly speculated that such structural modifications, even transitory ones, appear to result in developmental abnormalities in the brain circuitry and lead to the learning disabilities observed in FAS children. Based on reports on possible factors deciding ethanol neurotoxicity to the brain, we review developmental neurotoxicity to the dentate gyrus of the hippocampal formation. [source]


    Serum and 24-hour Urine Analysis in Adult Cyanotic and Noncyanotic Congenital Heart Disease Patients

    CONGENITAL HEART DISEASE, Issue 3 2009
    Efrén Martínez-Quintana MD
    ABSTRACT Introduction: Glomerulopathy is a complication in congenital heart disease patients. The risk of developing renal impairment is particularly high in cyanotic patients. Objective: The aim of this study was to determine the prevalence of renal dysfunction and microalbuminuria in adult cyanotic and noncyanotic congenital heart disease patients. Methods: Fourteen cyanotic and 22 noncyanotic congenital heart disease patients were studied in the Adult Congenital Heart Disease Unit at the Complejo Hospitalario Universitario Insular-Materno Infantil. Demographic characteristics, complete blood count, and 24-hour urinalysis were obtained, including abdominal ultrasound in those with cyanosis. Results: No differences were seen in age (years) (27.4 ± 8.2 vs. 26.4 ± 8.3; P = .71), sex, size, weight, or glomerular filtration rate (mL/min/1.73 m²) (81.1 ± 22.9 vs. 84.9 ± 9.2; P = .482) between cyanotic and noncyanotic patients. However, Eisenmenger patients had significantly impaired renal function when compared with noncyanotic patients (73.0 ± 17.3 vs. 84.9 ± 9.2 mL/min/1.73 m², P = .023). Significant differences were obtained in oxygen saturation (%) (83.8 ± 5.8 vs. 97.8 ± 0.8; P = .000), hematocrit (%) (59.3 ± 8.1 vs. 40.9 ± 8.5; P = .000), platelets (10³/µL) (161.5 ± 70.5 vs. 277.9 ± 57.6; P = .000), serum uric acid (mg/dL) (7.5 ± 2.3 vs. 5.6 ± 1.5; P = .008) and microalbuminuria (mg/24 hours) (12.8 [0, 700.2] vs. 2.4 [0, 18.9]; P = .000) between cyanotic and noncyanotic patients. Five cyanotic patients (35.7%) had microalbuminuria (>30 mg/24 hours) and three of them (21.4%) proteinuria (>1 g/24 hours). No significant differences were seen in serum and urine parameters between cyanotic patients who had microalbuminuria (>30 mg/24 hours) and those who did not (<30 mg/24 hours). Conclusions: Renal impairment is frequently seen in congenital heart disease patients, and it is occasionally associated with proteinuria and microalbuminuria in cyanotic ones. [source]


    Informed Decisions Conservation Corridors and the Spread of Infectious Disease

    CONSERVATION, Issue 2 2002
    Leslie Bienen
    Reconnecting habitat will have consequences, in many forms and on many scales. The trick is to recognize these consequences, the negative ones as well as the positive, and understand them ahead of time, and in doing so to ensure that human-engineered reconnectivity does the least harm and the most good. [source]


    Quantifying Plant Population Persistence in Human-Dominated Landscapes

    CONSERVATION BIOLOGY, Issue 4 2008
    DAWN M. LAWSON
    California Natural Diversity Database; plant conservation; population growth; endangered species; urban landscapes. Abstract: We assessed population performance of rare plants across a gradient from rural to urban landscapes and evaluated 2 hypotheses central to strategic conservation planning: (1) population performance declines with increasing human dominance and (2) small populations perform poorly relative to larger ones. Assessing these hypotheses is critical to strategic conservation planning. The current conservation paradigm adheres to the well-established ecological theory that small isolated populations, particularly those in human-dominated landscapes, are the least likely to succeed over the long term. Consequently, conservation planning has strongly favored large, remote targets for protection. This shift in conservation toward ecosystem-based programs and protection of populations within large, remote systems has been at the expense of protection of the rarest of the rare species, the dominant paradigm for conservation driven by the Endangered Species Act. Yet avoiding conservation of small populations appears to be based more on theoretical understanding and expert opinion than on empiricism. We used Natural Heritage data from California in an assessment of population performance of rare plants across a landscape with an urban-rural gradient. Population performance did not decrease in urban settings or for populations that were initially small. Our results are consistent with a pattern of few species extinctions within these landscapes over the past several decades. We conclude that these populations within compromised landscapes can contribute to overall biodiversity conservation. We further argue that conservation planning for biodiversity preservation should allocate relatively more resources to protecting urban-associated plant taxa because they may provide conservation benefit beyond simply protecting isolated populations; they may be useful in building social interest in conservation. [source]


    Correlations among Extinction Risks Assessed by Different Systems of Threatened Species Categorization

    CONSERVATION BIOLOGY, Issue 6 2004
    JULIAN J. O'GRADY
    Population viability analysis; threat categories; endangered species; extinction risk. Abstract: Many different systems are used to assess levels of threat faced by species. Prominent ones are those used by the World Conservation Union, NatureServe, and the Florida Game and Freshwater Fish Commission (now the Florida Fish and Wildlife Conservation Commission). These systems assign taxa a threat ranking by assessing their demographic and ecological characteristics. These threat rankings support the legislative protection of species and guide the placement of conservation programs in order of priority. It is not known, however, whether these assessment systems rank species in a similar order. To resolve this issue, we assessed 55 mainly vertebrate taxa with widely differing life histories under each of these systems and determined the rank correlations among them. Moderate, significant positive correlations were seen among the threat rankings provided by the three systems (correlations 0.58-0.69). Further, the threat rankings for taxa obtained using these systems were significantly correlated with their rankings based on predicted probability of extinction within 100 years as determined by population viability analysis (correlations 0.28-0.37). The different categorization systems, then, yield related but not identical threat rankings, and these rankings are associated with predicted extinction risk. [source]


    Care for the Adult Family Members of Victims of Unexpected Cardiac Death

    ACADEMIC EMERGENCY MEDICINE, Issue 12 2006
    Robert Zalenski MD
    Abstract More than 300,000 sudden coronary deaths occur annually in the United States, despite declining cardiovascular death rates. In 2000, deaths from heart disease left an estimated 190,156 new widows and 68,493 new widowers. A major unanswered question for emergency providers is whether the immediate care of the loved ones left behind by the deceased should be a therapeutic task for the staff of the emergency department in the aftermath of a fatal cardiac arrest. Based on a review of the literature, the authors suggest that more research is needed to answer this question, to assess the current immediate needs and care of survivors, and to find ways to improve care of the surviving family of unexpected cardiac death victims. This would include improving the quality of death disclosure, improving care for relatives during cardiopulmonary resuscitation of their family member, and improving methods of referral for services to prevent psychological and cardiovascular morbidity during bereavement. [source]


    Assessment of balsam of Peru patch tests

    CONTACT DERMATITIS, Issue 6 2000
    Bolli Bjarnason
    To find an ideal test technique for as low a dose of balsam of Peru (Myroxylon Pereirae) as possible, subjects testing positive to balsam of Peru are re-tested with a 25% concentration of balsam of Peru in petrolatum. Applications are with Finn Chambers® for 6 different application times, and directly by foils for 96 h (4 days (D)). The goals are to confirm which subjects are positive and which are not, and, using that information, to see if it is possible to distinguish between these 2 groups, tested concomitantly at much lower serial dose levels, in terms of perfusion or by visual assessments. 5 different serial doses are applied with strips for 3-96 h (4D) and with foils for 96 h (4D). The Finn Chamber® tests allow a distinction between visually positive and negative subjects supported by perfusion assessments. With the foils, a 24× lower serial dose level than with the 25% test substance is sufficient to distinguish between positive and negative subjects in terms of perfusion values. This approach requires readings up to 9 days. With this test, the visual approach yields only 3 of 10 positive subjects. This study demonstrates that a lower test dose is possible with perfusion assessments compared to visual ones. [source]


    Development and validation of a smoothing-splines-based correction method for improving the analysis of CEST-MR images

    CONTRAST MEDIA & MOLECULAR IMAGING, Issue 4 2008
    J. Stancanello
    Abstract Chemical exchange saturation transfer (CEST) imaging is an emerging MRI technique relying on the use of endogenous or exogenous molecules containing exchangeable proton pools. The heterogeneity of the water resonance frequency offset plays a key role in the occurrence of artifacts in CEST-MR images. To limit this drawback, a new smoothing-splines-based method for fitting and correcting Z-spectra in order to compensate for low signal-to-noise ratio (SNR) without any a priori model was developed. Global and local voxel-by-voxel Z-spectra were interpolated by smoothing splines with smoothing terms aimed at suppressing noise. Thus, a map of the water frequency offset ('zero' map) was used to correctly calculate the saturation transfer (ST) for each voxel. Simulations were performed to compare the method to polynomials and zero-only-corrected splines on the basis of SNR improvement. In vitro acquisitions of capillaries containing solutions of LIPOCEST agents at different concentrations were performed to experimentally validate the results from simulations. Additionally, ex vivo investigations of bovine muscle mass injected with LIPOCEST agents were performed as a function of increasing pulse power. The results from simulations and experiments highlighted the importance of a proper 'zero' correction (15% decrease of fictitious CEST signal in phantoms and ex vivo preparations) and proved the method to be more accurate compared with the previously published ones, often providing a SNR higher than 5 in different simulated and experimentally noisy conditions. In conclusion, the proposed method offers an accurate tool in CEST investigation. Copyright © 2008 John Wiley & Sons, Ltd. [source]
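
    As a rough sketch of the general idea (not the validated pipeline described above), a voxel's Z-spectrum can be fitted with a smoothing spline, the spline minimum taken as the corrected water frequency ("zero"), and the saturation transfer evaluated symmetrically around that zero; the offsets, the CEST pool position and the smoothing factor below are all invented for illustration:

        import numpy as np
        from scipy.interpolate import UnivariateSpline

        def st_with_zero_correction(offsets_ppm, z_signal, cest_ppm=5.0, smooth=0.01):
            spl = UnivariateSpline(offsets_ppm, z_signal, s=smooth)
            fine = np.linspace(offsets_ppm.min(), offsets_ppm.max(), 2001)
            zero = fine[np.argmin(spl(fine))]          # corrected water offset
            s_label = float(spl(zero + cest_ppm))      # saturation at the CEST pool
            s_ref = float(spl(zero - cest_ppm))        # reference on the opposite side
            return 1.0 - s_label / s_ref, zero

        # Synthetic spectrum whose centre is shifted by 0.3 ppm from nominal zero.
        x = np.linspace(-10.0, 10.0, 81)
        z = (1.0 - 0.8 * np.exp(-((x - 0.3) / 1.5) ** 2)
             + 0.01 * np.random.default_rng(0).normal(size=x.size))
        st, zero = st_with_zero_correction(x, z)
        print(round(zero, 2), round(st, 3))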


    Hydrogen Balmer Spectrum from a High-Pressure Arc Discharge: Revisited

    CONTRIBUTIONS TO PLASMA PHYSICS, Issue 4-5 2007
    B. Omar
    Abstract The interpretation of hydrogen Balmer spectra emitted from a high-pressure arc discharge (Radtke and Günther, Contrib. Plasma Phys. 26, 143 (1986)) is re-examined. Assuming local thermodynamic equilibrium, synthetic Balmer spectra are calculated for given temperature and density conditions. Radiation transport is accounted for using a one-dimensional plasma layer model. The lineshape of bound-bound transitions is determined using a microscopic quantum-statistical approach. Free-free and free-bound contributions are added by taking expressions from the literature. Plasma temperature and density conditions are inferred by comparing the synthetic spectra with the experimental ones. The plasma parameters are compared with theoretical results for the composition of dense plasmas. (© 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


    Sustainable entrepreneurship in SMEs: a case study analysis

    CORPORATE SOCIAL RESPONSIBILITY AND ENVIRONMENTAL MANAGEMENT, Issue 3 2010
    Cheryl Rodgers
    Abstract Sustainability is often thought of as the privilege of the large corporation, with sufficient funds to invest in anything from effective green Public Relations (PR) to improving its carbon footprint. What is perhaps less well understood and documented is the range of activities undertaken by small and medium enterprises (SMEs), including very small entrepreneurial start-ups, some of which base their entire business rationale on sustainable principles. This paper uses a case study approach to explore the modus operandi of ecopreneurship and draws on both primary research and secondary data to develop and explore sustainable entrepreneurship in this sector. Preliminary findings suggest that ecopreneurial SMEs are looking to other goals alongside financial ones and are prepared to go to significant lengths to achieve such goals. Monetary measures are not, of course, entirely absent, but are very strongly conditioned by the ecoconscious nature of the business. In short, sustainability imperatives remain paramount. Copyright © 2010 John Wiley & Sons, Ltd and ERP Environment. [source]