Applicable

Kinds of Applicable

  • method applicable
  • model applicable
  • only applicable
  • system applicable
  • technique applicable

Terms modified by Applicable

  • applicable approach
  • applicable method
  • applicable only

Selected Abstracts


    PRINCIPLES OF POLITICS APPLICABLE TO ALL GOVERNMENTS BY BENJAMIN CONSTANT

    ECONOMIC AFFAIRS, Issue 3 2004
    Dennis O'Keeffe
    This review of Constant's Principles of Politics Applicable to All Governments analyses Constant's belief in liberal economics and the importance of tradition. [source]


    A Facile Route to Polymer Solar Cells with Optimum Morphology Readily Applicable to a Roll-to-Roll Process without Sacrificing High Device Performances

    ADVANCED MATERIALS, Issue 35 2010
    Hui Joon Park
    A new fabrication method for polymer solar cells that can produce optimized vertical distribution of components is reported. The favorable donor-acceptor morphology, showing a well-organized photo-induced charge-transporting pathway with fine nanodomains and high crystallinity, is achieved. This process is also readily scalable to a large-area and high-speed roll-to-roll process without sacrificing high device performances, even without a PEDOT:PSS layer. [source]


    Balance assessment in patients with peripheral arthritis: applicability and reliability of some clinical assessments

    PHYSIOTHERAPY RESEARCH INTERNATIONAL, Issue 4 2001
    Anne Marie Norén MSc PT
    Abstract Background and Purpose: Many individuals with peripheral arthritis cite decreased balance as a reason for limiting their physical activity. It is therefore important to assess and improve their balance. The purpose of the present study was to evaluate the applicability and the reliability of some clinical balance assessment methods for people with arthritis and various degrees of disability. Method: To examine the applicability and reliability of balance tests, 65, 19 and 22 patients with peripheral arthritis, respectively, participated in sub-studies investigating the applicability, inter-rater reliability and test-retest stability of the following methods: walking on a soft surface, walking backwards, walking in a figure-of-eight, the balance sub-scale of the Index of Muscle Function (IMF), the Timed Up and Go (TUG) test and the Berg Balance Scale. Results: For patients with moderate disability, walking in a figure-of-eight was found to be the most discriminative test, whereas ceiling effects were found for the Berg Balance Scale. Patients with severe disability were generally able to perform the TUG test and the Berg Balance Scale without ceiling effects. Inter-rater reliability was moderate to high and test-retest stability was satisfactory for all methods assessed. Conclusions: Applicable and reliable assessment methods of clinical balance were identified for individuals with moderate and severe disability, whereas more discriminative tests need to be developed for those with limited disability. Copyright © 2001 Whurr Publishers Ltd. [source]


    A Versatile Birth-Death Model Applicable to Four Distinct Problems

    AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 1 2004
    J. Gani
    Summary This paper revisits a simple birth-death model which arises in slightly different forms in four distinct stochastic problems. These are the barbershop queue, coupon collecting, vocabulary usage and geological dating. Discrete and continuous time Markov chains are used to characterize these problems. Somewhat different questions are posed for each particular case, and practical results are derived for each process. The paper concludes with some comments on the versatility of this applied probability model. [source]
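
A minimal sketch of the kind of model the summary describes: a finite birth-death chain whose stationary distribution follows from the standard detailed-balance product formula. The barbershop interpretation, rates, and capacity below are illustrative, not taken from the paper.

```python
import numpy as np

def birth_death_stationary(birth, death):
    """Stationary distribution of a finite birth-death chain.

    birth[i] is the rate from state i to i+1 (i = 0..K-1),
    death[i] is the rate from state i+1 back to i.
    Detailed balance gives pi_n = pi_0 * prod_{i<n} birth[i]/death[i].
    """
    ratios = np.cumprod(np.asarray(birth) / np.asarray(death))
    pi = np.concatenate(([1.0], ratios))
    return pi / pi.sum()

# Illustrative barbershop: arrivals at 3/hour, service at 4/hour,
# at most K = 5 customers in the shop (an M/M/1/K queue).
lam, mu, K = 3.0, 4.0, 5
pi = birth_death_stationary([lam] * K, [mu] * K)
print("P(shop empty) =", pi[0])
print("P(shop full)  =", pi[-1])
print("mean number in shop =", sum(n * p for n, p in enumerate(pi)))
```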


    ChemInform Abstract: Generally Applicable and Efficient Oxidative Heck Reaction of Arylboronic Acids with Olefins Catalyzed by Cyclopalladated Ferrocenylimine under Base- and Ligand-Free Conditions.

    CHEMINFORM, Issue 24 2010
    Yuting Leng
    Abstract ChemInform is a weekly Abstracting Service delivering, at a glance, concise information extracted from about 100 leading journals. To access a ChemInform Abstract of an article which was published elsewhere, please select a "Full Text" option. The original article is trackable via the "References" option. [source]


    ChemInform Abstract: Enantioselective Synthesis of (+)-Monobromophakellin and (+)-Phakellin: A Concise Phakellin Annulation Strategy Applicable to Palau'amine.

    CHEMINFORM, Issue 24 2008
    Shaohui Wang
    Abstract ChemInform is a weekly Abstracting Service delivering, at a glance, concise information extracted from about 200 leading journals. To access a ChemInform Abstract of an article which was published elsewhere, please select a "Full Text" option. The original article is trackable via the "References" option. [source]


    Regio- and Stereoselective Synthesis of a trans-4-[60]Fullerenobisacetic Acid Derivative by a Tether-Directed Biscyclopropanation: A Diacid Component Applicable for the Synthesis of Regio- and Stereo-Regular [60]Fullerene Pearl-Necklace Polyamides.

    CHEMINFORM, Issue 47 2002
    Tetsuo Hino
    No abstract is available for this article. [source]


    A MATLAB toolbox for solving acid-base chemistry problems in environmental engineering applications

    COMPUTER APPLICATIONS IN ENGINEERING EDUCATION, Issue 4 2005
    Chetan T. Goudar
    Abstract A MATLAB toolbox incorporating several computer programs has been developed in an attempt to automate laborious calculations in acid-base chemistry. Such calculations are routinely used in several environmental engineering applications, including the design of wastewater treatment systems and the prediction of contaminant fate and transport in the subsurface. The computer programs presented in this study do not replace the student thinking involved in formulating a problem-solving strategy but are merely tools that simplify the actual problem-solving process. They encompass a wide variety of acid-base chemistry topics including equilibrium constant calculations, construction of distribution diagrams for mono- and multiprotic systems, ionic strength and activity coefficient calculations, and buffer index calculations. All programs are characterized by an intuitive graphical user interface where the user supplies input information. Program outputs are either numerical or graphical depending upon the nature of the problem. The application of this approach to solving actual acid-base chemistry problems is illustrated by computing the pH and equilibrium composition of a 0.1 M Na2CO3 system at 30°C using several programs in the toolbox. As these programs simplify lengthy computations such as ionization fraction and activity coefficient calculations, it is hoped they will help bring more complicated problems to the environmental engineering classroom and enhance student understanding of important concepts that are applicable to real-world systems. The programs are available free of charge for academic use from the authors. © 2005 Wiley Periodicals, Inc. Comput Appl Eng Educ 13: 257-265, 2005; Published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/cae.20051 [source]
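
The toolbox itself is not reproduced here, but its closing example can be approximated in a few lines. The sketch below solves the charge balance for 0.1 M Na2CO3 by root finding, using 25°C equilibrium constants and ignoring activity corrections (the abstract's calculation is at 30°C and includes activity coefficients), so the result is only indicative.

```python
from math import log10
from scipy.optimize import brentq

# Carbonate system constants at 25 degC (the abstract's calculation is
# at 30 degC and includes activity corrections; this is approximate).
Ka1, Ka2, Kw = 10**-6.35, 10**-10.33, 10**-14.0
CT, Na = 0.1, 0.2   # 0.1 M Na2CO3: total carbonate and total sodium

def charge_balance(h):
    """Na+ + H+ - HCO3- - 2*CO3-- - OH-; zero at the equilibrium pH."""
    oh = Kw / h
    # Ionization fractions of the diprotic carbonate system at this [H+]
    denom = h * h + Ka1 * h + Ka1 * Ka2
    hco3 = CT * Ka1 * h / denom
    co3 = CT * Ka1 * Ka2 / denom
    return Na + h - hco3 - 2.0 * co3 - oh

h = brentq(charge_balance, 1e-14, 1.0)   # sign change brackets the root
print("pH =", round(-log10(h), 2))       # ~11.6 for 0.1 M Na2CO3
```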


    SecondSkin: An interactive method for appearance transfer

    COMPUTER GRAPHICS FORUM, Issue 7 2009
    A. van den Hengel
    Abstract SecondSkin estimates an appearance model for an object visible in a video sequence, without the need for complex interaction or any calibration apparatus. This model can then be transferred to other objects, allowing a non-expert user to insert a synthetic object into a real video sequence so that its appearance matches that of an existing object, and changes appropriately throughout the sequence. As the method does not require any prior knowledge about the scene, the lighting conditions, or the camera, it is applicable to video which was not captured with this purpose in mind. However, this lack of prior knowledge precludes the recovery of separate lighting and surface reflectance information. The SecondSkin appearance model therefore combines these factors. The appearance model does require a dominant light-source direction, which we estimate via a novel process involving a small amount of user interaction. The resulting model estimate provides exactly the information required to transfer the appearance of the original object to new geometry composited into the same video sequence. [source]


    DiFi: Fast 3D Distance Field Computation Using Graphics Hardware

    COMPUTER GRAPHICS FORUM, Issue 3 2004
    Avneesh Sud
    We present an algorithm for fast computation of discretized 3D distance fields using graphics hardware. Given a set of primitives and a distance metric, our algorithm computes the distance field for each slice of a uniform spatial grid by rasterizing the distance functions of the primitives. We compute bounds on the spatial extent of the Voronoi region of each primitive. These bounds are used to cull and clamp the distance functions rendered for each slice. Our algorithm is applicable to all geometric models and does not make any assumptions about connectivity or a manifold representation. We have used our algorithm to compute distance fields of large models composed of tens of thousands of primitives on high resolution grids. Moreover, we demonstrate its application to medial axis evaluation and proximity computations. As compared to earlier approaches, we are able to achieve an order of magnitude improvement in the running time. Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Distance fields, Voronoi regions, graphics hardware, proximity computations [source]
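
The paper's contribution is the GPU rasterization and the Voronoi-bound culling; as a point of reference, a brute-force CPU computation of one slice of a distance field over point primitives (no culling, Euclidean metric, made-up grid parameters) looks like this:

```python
import numpy as np

def slice_distance_field(points, z, nx=128, ny=128, extent=1.0):
    """Brute-force distance field for one z-slice of a uniform grid.

    points: (N, 3) array of point primitives. The paper's algorithm
    instead rasterizes per-primitive distance functions on the GPU and
    culls them with Voronoi-region bounds; this is a CPU reference only.
    """
    xs = np.linspace(0.0, extent, nx)
    ys = np.linspace(0.0, extent, ny)
    gx, gy = np.meshgrid(xs, ys, indexing="ij")
    grid = np.stack([gx, gy, np.full_like(gx, z)], axis=-1)   # (nx, ny, 3)
    # Distance from every voxel center to every primitive; keep the minimum.
    d = np.linalg.norm(grid[:, :, None, :] - points[None, None, :, :], axis=-1)
    return d.min(axis=2)

rng = np.random.default_rng(0)
pts = rng.random((100, 3))
field = slice_distance_field(pts, z=0.5)
print(field.shape, field.min(), field.max())
```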


    Near-Term Travel Speed Prediction Utilizing Hilbert-Huang Transform

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 8 2009
    Khaled Hamad
    In this study, we propose an innovative methodology for near-term travel speed prediction. Because travel time is directly derived from speed data, the study was limited to the use of speed as the single predictor. The proposed method is a hybrid that combines empirical mode decomposition (EMD) with a multilayer feedforward neural network trained by backpropagation. The EMD is the key part of the Hilbert-Huang transform, a method newly developed at NASA for the analysis of nonstationary, nonlinear time series. The rationale for using the EMD is that, because of the highly nonlinear and nonstationary nature of link speed series, decomposing the time series into its basic components should yield more accurate forecasts. We demonstrated the effectiveness of the proposed method by applying it to real-life loop detector data obtained from I-66 in Fairfax, Virginia. The prediction performance of the proposed method was found to be superior to previous forecasting techniques. Rigorous testing of the distribution of prediction errors revealed that the model produced unbiased predictions of speeds. The superiority of the proposed model was also verified during peak periods, midday, and night. In general, the method was accurate, computationally efficient, easy to implement in a field environment, and applicable to forecasting other traffic parameters. [source]
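
A schematic of the hybrid the abstract describes, decompose with EMD, forecast each component with a small feedforward network, then sum the forecasts, is sketched below. It assumes the PyEMD package (published on PyPI as EMD-signal) and scikit-learn; the window size, network shape, and synthetic data are arbitrary choices, not the authors' configuration.

```python
import numpy as np
from PyEMD import EMD                        # pip install EMD-signal
from sklearn.neural_network import MLPRegressor

def emd_nn_forecast(speeds, window=12):
    """One-step speed forecast: EMD the series, train one small
    backpropagation net per component on lagged windows, sum forecasts."""
    imfs = EMD()(np.asarray(speeds, dtype=float))   # rows: IMFs + residue
    forecast = 0.0
    for component in imfs:
        X = np.array([component[i:i + window]
                      for i in range(len(component) - window)])
        y = component[window:]
        net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                           random_state=0).fit(X, y)
        forecast += net.predict(component[-window:].reshape(1, -1))[0]
    return forecast

# Synthetic "loop detector" speeds: a daily cycle plus noise.
t = np.arange(500)
speeds = (60.0 + 8.0 * np.sin(2 * np.pi * t / 96)
          + np.random.default_rng(1).normal(0.0, 2.0, t.size))
print("next-interval speed estimate:", emd_nn_forecast(speeds))
```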


    A Comparative Study of Modal Parameter Identification Based on Wavelet and Hilbert-Huang Transforms

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 1 2006
    Banfu Yan
    Special attention is given to some implementation issues, such as the modal separation and end effect in the WT, the optimal parameter selection of the wavelet function, the new stopping criterion for the empirical mode decomposition (EMD) and the end effect in the HHT. The capabilities of these two techniques are compared and assessed by using three examples, namely a numerical simulation for a damped system with two very close modes, an impact test on an experimental model with three well-separated modes, and an ambient vibration test on the Z24-bridge benchmark problem. The results demonstrate that for the system with well-separated modes both methods are applicable when the time-frequency resolutions are sufficiently taken into account, whereas for the system with very close modes, the WT method seems to be more theoretical and effective than HHT from the viewpoint of parameter design. [source]


    Ductility of Reinforced Concrete Flat Slab-Column Connections

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 3 2005
    Maria Anna Polak
    Post-peak-load ductility of connections in reinforced concrete framed structures is essential for ensuring structural integrity and preventing local failure that may lead to progressive collapse of such systems. The importance of ductility for resistance against abnormal loading and the role of transverse reinforcement in providing ductility are discussed, and a new shear-strengthening technique, shear bolts, is presented. Shear bolts are a special type of reinforcement developed specially for retrofitting existing flat slabs. The results of an experimental study are presented which show how transverse reinforcement increases punching shear capacity and post-failure ductility of slab-column connections. The described work also applies a specially developed finite element formulation, based on layered shell elements, to the analysis of continuous reinforced concrete slabs. The formulation is applicable for global structural analysis of slabs failing in flexure or punching modes. The finite element and experimental results are compared in the article. [source]


    How accurately can parameters from exponential models be estimated?

    CONCEPTS IN MAGNETIC RESONANCE, Issue 2 2005
    A Bayesian view
    Abstract Estimating the amplitudes and decay rate constants of exponentially decaying signals is an important problem in NMR. Understanding how the uncertainty in the parameter estimates depends on the data acquisition parameters and on the "true" but unknown values of the exponential signal parameters is an important step in designing experiments and determining the amount and quality of the data that must be gathered to make good parameter estimates. In this article, Bayesian probability theory is applied to this problem. Explicit relationships between the data acquisition parameters and the "true" but unknown exponential signal parameters are derived for the cases of data containing one and two exponential signal components. Because uniform prior probabilities are purposely employed, the results are broadly applicable to experimental parameter estimation. © 2005 Wiley Periodicals, Inc. Concepts Magn Reson Part A 27A: 73-83, 2005 [source]
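
A quick frequentist stand-in for the article's Bayesian analysis makes the dependence concrete: simulate a single-exponential decay, fit it by least squares, and read approximate standard errors from the covariance matrix. All signal and acquisition values below are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, amplitude, rate):
    return amplitude * np.exp(-rate * t)

def fit_errors(n_points, t_max, sigma=0.02, amplitude=1.0, rate=1.0, seed=0):
    """Std errors of (amplitude, rate) from one simulated noisy decay."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, t_max, n_points)
    y = decay(t, amplitude, rate) + rng.normal(0.0, sigma, n_points)
    popt, pcov = curve_fit(decay, t, y, p0=(0.5, 0.5))
    return np.sqrt(np.diag(pcov))

# Sampling far beyond a few decay constants adds little information
# about the rate; sweeping the acquisition window shows this directly.
for t_max in (1.0, 3.0, 10.0):
    da, dk = fit_errors(64, t_max)
    print(f"t_max={t_max:5.1f}  sd(amplitude)={da:.4f}  sd(rate)={dk:.4f}")
```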


    A large-scale monitoring and measurement campaign for web services-based applications

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 10 2010
    Riadh Ben Halima
    Abstract Web Services (WS) can be considered the most influential enabling technology for the next generation of web applications. WS-based application providers will face challenging features related to nonfunctional properties in general and to performance and QoS in particular. Moreover, WS-based developers have to provide solutions that extend such applications with self-healing (SH) mechanisms, as required for autonomic computing, to face the complexity of interactions and to improve availability. Such solutions should be applicable whether the components implementing SH mechanisms are deployed on the provider side, the requester side, or both, depending on the deployment constraints. Associating application-specific performance requirements with monitoring-specific constraints leads to complex configurations where fine tuning is needed to provide SH solutions. To contribute to enhancing the design and the assessment of such solutions for WS technology, we designed and implemented a monitoring and measurement framework, which is part of the larger Self-Healing Architectures (SHA) developed during the European WS-DIAMOND project. We implemented the Conference Management System (CMS), a real WS-based complex application. We carried out a large-scale experimentation campaign by deploying CMS on top of SHA on the French grid Grid5000, approaching the problem as would a service provider who has to tune reconfiguration strategies. Our results are available on the web in a structured database for external use by the WS community. Copyright © 2010 John Wiley & Sons, Ltd. [source]


    Concepts for computer center power management

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 2 2010
    A. DiRienzo
    Abstract Electrical power usage contributes significantly to the operational costs of large computer systems. At the Hypersonic Missile Technology Research and Operations Center (HMT-ROC) our system usage patterns provide a significant opportunity to reduce operating costs since there are a small number of dedicated users. The relatively predictable nature of our usage patterns allows for the scheduling of computational resource availability. We take advantage of this predictability to shut down systems during periods of low usage to reduce power consumption. With interconnected computer cluster systems, reducing the number of online nodes is more than a simple matter of throwing the power switch on a portion of the cluster. The paper discusses these issues and an approach for power reduction strategies for a computational system with a heterogeneous system mix that includes a large (1560-node) Apple Xserve PowerPC supercluster. In practice, the average load on computer systems may be much less than the peak load although the infrastructure supporting the operation of large computer systems in a computer or data center must still be designed with the peak loads in mind. Given that a significant portion of the time, systems loads can be less than full peak, an opportunity exists for cost savings if idle systems can be dynamically throttled back, slept, or shut off entirely. The paper describes two separate strategies that meet the requirements for both power conservation and system availability at HMT-ROC. The first approach, for legacy systems, is not much more than a brute force approach to power management which we call Time-Driven System Management (TDSM). The second approach, which we call Dynamic-Loading System Management (DLSM), is applicable to more current systems with 'Wake-on-LAN' capability and takes a more granular approach to the management of system resources. The paper details the rule sets that we have developed and implemented in the two approaches to system power management and discusses some results with these approaches. Copyright © 2009 John Wiley & Sons, Ltd. [source]
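
DLSM's remote power-up relies on standard Wake-on-LAN, whose magic packet format is fixed: six 0xFF bytes followed by sixteen copies of the target's 6-byte MAC, usually sent as a UDP broadcast. A minimal sender (the MAC address below is a placeholder):

```python
import socket

def wake_on_lan(mac, broadcast="255.255.255.255", port=9):
    """Send a Wake-on-LAN magic packet: 6 x 0xFF then 16 x the MAC."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, (broadcast, port))

# Hypothetical node MAC; a DLSM-style scheduler would call this when
# queued work exceeds the capacity of the currently awake nodes.
wake_on_lan("00:11:22:33:44:55")
```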


    Usability levels for sparse linear algebra components

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 12 2008
    M. Sosonkina
    Abstract Sparse matrix computations are ubiquitous in high-performance computing applications and often are their most computationally intensive part. In particular, efficient solution of large-scale linear systems may drastically improve the overall application performance. Thus, the choice and implementation of the linear system solver are of paramount importance. It is difficult, however, to navigate through a multitude of available solver packages and to tune their performance to the problem at hand, mainly because of the plethora of interfaces, each requiring application adaptations to match the specifics of solver packages. For example, different ways of setting parameters and a variety of sparse matrix formats hinder smooth interactions of sparse matrix computations with user applications. In this paper, interfaces designed for components that encapsulate sparse matrix computations are discussed in the light of their matching with application usability requirements. Consequently, we distinguish three levels of interfaces, high, medium, and low, corresponding to the degree of user involvement in the linear system solution process and in sparse matrix manipulations. We demonstrate when each interface design choice is applicable and how it may be used to further users' scientific goals. Component computational overheads caused by various design choices are also examined, ranging from low level, for matrix manipulation components, to high level, in which a single component contains the entire linear system solver. Published in 2007 by John Wiley & Sons, Ltd. [source]
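
The interface-level distinction can be illustrated with SciPy's sparse module, which exposes both extremes: at the low level the user owns the CSR storage arrays; at the high level a single call solves the system. This illustrates the distinction only, not the component interfaces the paper defines.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import spsolve

# Low-level view: the user manipulates the CSR storage scheme directly.
data = np.array([4.0, -1.0, -1.0, 4.0, -1.0, -1.0, 4.0])
indices = np.array([0, 1, 0, 1, 2, 1, 2])
indptr = np.array([0, 2, 5, 7])   # row i spans data[indptr[i]:indptr[i+1]]
A = csr_matrix((data, indices, indptr), shape=(3, 3))

# High-level view: "solve Ax = b" with no exposure to formats or solvers.
b = np.array([1.0, 2.0, 3.0])
x = spsolve(A, b)
print("x =", x)
print("residual norm:", np.linalg.norm(A @ x - b))
```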


    Experimental analysis of a mass storage system

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 15 2006
    Shahid Bokhari
    Abstract Mass storage systems (MSSs) play a key role in data-intensive parallel computing. Most contemporary MSSs are implemented as redundant arrays of independent/inexpensive disks (RAID), in which commodity disks are tied together with proprietary controller hardware. The performance of such systems can be difficult to predict because most internal details of controller behavior are not public. We present a systematic method for empirically evaluating MSS performance by obtaining measurements on a series of RAID configurations of increasing size and complexity. We apply this methodology to a large MSS at Ohio Supercomputer Center that has 16 input/output processors, each connected to four 8 + 1 RAID5 units, and that provides 128 TB of storage (of which 116.8 TB are usable when formatted). Our methodology permits storage-system designers to evaluate empirically the performance of their systems with considerable confidence. Although we have carried out our experiments in the context of a specific system, our methodology is applicable to all large MSSs. The measurements obtained using our methods permit application programmers to be aware of the limits to the performance of their codes. Copyright © 2006 John Wiley & Sons, Ltd. [source]
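
The measurement side of such a campaign reduces to timed streaming transfers at varying block sizes, stream counts, and RAID configurations. A single-stream write probe in that spirit (path and sizes are placeholders; a real campaign must also defeat client-side caching and exercise concurrent streams):

```python
import os
import time

def write_throughput_mb_s(path, total_mb=256, block_kb=1024):
    """Time a sequential write of total_mb and return MB/s.
    fsync is included so the figure reflects the storage system,
    not the operating system's page cache."""
    block = os.urandom(block_kb * 1024)
    t0 = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range((total_mb * 1024) // block_kb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.perf_counter() - t0
    os.remove(path)
    return total_mb / elapsed

print(f"{write_throughput_mb_s('/tmp/mss_probe.bin'):.1f} MB/s")
```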


    Study of a highly accurate and fast protein-ligand docking method based on molecular dynamics

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 14 2005
    M. Taufer
    Abstract Few methods use molecular dynamics simulations in concert with atomically detailed force fields to perform protein-ligand docking calculations because they are considered too time demanding, despite their accuracy. In this paper we present a docking algorithm based on molecular dynamics which has a highly flexible computational granularity. We compare the accuracy and the time required with well-known, commonly used docking methods such as AutoDock, DOCK, FlexX, ICM, and GOLD. We show that our algorithm is accurate, fast and, because of its flexibility, applicable even to loosely coupled distributed systems such as desktop Grids for docking. Copyright © 2005 John Wiley & Sons, Ltd. [source]


    Leadership: a New Frontier in Conservation Science

    CONSERVATION BIOLOGY, Issue 4 2009
    JIM C. MANOLIS
    strategy; influence; leadership; management; policy
    Abstract: Leadership is a critical tool for expanding the influence of conservation science, but recent advances in leadership concepts and practice remain underutilized by conservation scientists. Furthermore, an explicit conceptual foundation and definition of leadership in conservation science are not available in the literature. Here we drew on our diverse leadership experiences, our reading of the leadership literature, and discussions with selected conservation science leaders to define conservation-science leadership, summarize an exploratory set of leadership principles that are applicable to conservation science, and recommend actions to expand leadership capacity among conservation scientists and practitioners. We define 2 types of conservation-science leadership: shaping conservation science through path-breaking research, and advancing the integration of conservation science into policy, management, and society at large. We focused on the second, integrative type of leadership because we believe it presents the greatest opportunity for improving conservation effectiveness. We identified 8 leadership principles derived mainly from the "adaptive leadership" literature: recognize the social dimension of the problem; cycle frequently through action and reflection; get and maintain attention; combine strengths of multiple leaders; extend your reach through networks of relationships; strategically time your effort; nurture productive conflict; and cultivate diversity. Conservation scientists and practitioners should strive to develop themselves as leaders, and the Society for Conservation Biology, conservation organizations, and academia should support this effort through professional development, mentoring, teaching, and research. [source]


    Quantification of Extinction Risk: IUCN's System for Classifying Threatened Species

    CONSERVATION BIOLOGY, Issue 6 2008
    GEORGINA M. MACE
    conservation priority setting; threatened species; IUCN Red List; extinction risk
    Abstract: The International Union for Conservation of Nature (IUCN) Red List of Threatened Species was increasingly used during the 1980s to assess the conservation status of species for policy and planning purposes. This use stimulated the development of a new set of quantitative criteria for listing species in the categories of threat: critically endangered, endangered, and vulnerable. These criteria, which were intended to be applicable to all species except microorganisms, were part of a broader system for classifying threatened species and were fully implemented by IUCN in 2000. The system and the criteria have been widely used by conservation practitioners and scientists and now underpin one indicator being used to assess the Convention on Biological Diversity 2010 biodiversity target. We describe the process and the technical background to the IUCN Red List system. The criteria refer to fundamental biological processes underlying population decline and extinction. But given major differences between species, the threatening processes affecting them, and the paucity of knowledge relating to most species, the IUCN system had to be both broad and flexible to be applicable to the majority of described species. The system was designed to measure the symptoms of extinction risk, and uses 5 independent criteria relating to aspects of population loss and decline of range size. A species is assigned to a threat category if it meets the quantitative threshold for at least one criterion. The criteria and the accompanying rules and guidelines used by IUCN are intended to increase the consistency, transparency, and validity of its categorization system, but this necessitates some compromises that affect the applicability of the system and the species lists that result. In particular, choices were made over the assessment of uncertainty, poorly known species, depleted species, population decline, restricted ranges, and rarity; all of these affect the way red lists should be viewed and used. Processes related to priority setting and the development of national red lists need to take account of some assumptions in the formulation of the criteria. [source]
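
The listing logic itself is mechanical: a species receives the highest threat category whose quantitative threshold it meets under at least one criterion. Below is a toy single-criterion version with decline thresholds patterned on IUCN Criterion A2 (80/50/30% decline over 10 years or 3 generations); the real system combines five criteria with many subconditions.

```python
def category_from_decline(percent_decline):
    """Toy Red List assignment from population decline alone.
    Thresholds patterned on IUCN Criterion A2; the actual system
    evaluates five criteria and assigns the highest category
    triggered by any one of them."""
    if percent_decline >= 80:
        return "Critically Endangered"
    if percent_decline >= 50:
        return "Endangered"
    if percent_decline >= 30:
        return "Vulnerable"
    return "Not threatened under this criterion"

for decline in (85, 60, 35, 10):
    print(f"{decline}% decline -> {category_from_decline(decline)}")
```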


    The Ecological Future of the North American Bison: Conceiving Long-Term, Large-Scale Conservation of Wildlife

    CONSERVATION BIOLOGY, Issue 2 2008
    ERIC W. SANDERSON
    Bison bison; species conservation; Vermejo Statement; conservation goals; ecological representation
    Abstract: Many wide-ranging mammal species have experienced significant declines over the last 200 years; restoring these species will require long-term, large-scale recovery efforts. We highlight 5 attributes of a recent range-wide vision-setting exercise for ecological recovery of the North American bison (Bison bison) that are broadly applicable to other species and restoration targets. The result of the exercise, the "Vermejo Statement" on bison restoration, is explicitly (1) large scale, (2) long term, (3) inclusive, (4) fulfilling of different values, and (5) ambitious. It reads, in part, "Over the next century, the ecological recovery of the North American bison will occur when multiple large herds move freely across extensive landscapes within all major habitats of their historic range, interacting in ecologically significant ways with the fullest possible set of other native species, and inspiring, sustaining and connecting human cultures." We refined the vision into a scorecard that illustrates how individual bison herds can contribute to the vision. We also developed a set of maps and analyzed the current and potential future distributions of bison on the basis of expert assessment. Although more than 500,000 bison exist in North America today, we estimated they occupy <1% of their historical range and in no place express the full range of ecological and social values of previous times. By formulating an inclusive, affirmative, and specific vision through consultation with a wide range of stakeholders, we hope to provide a foundation for conservation of bison, and other wide-ranging species, over the next 100 years. [source]


    Metropolitan Open-Space Protection with Uncertain Site Availability

    CONSERVATION BIOLOGY, Issue 2 2005
    ROBERT G. HAIGHT
    public access; Chicago; site selection model; optimization; species representation
    Abstract: Urban planners acquire open space to protect natural areas and provide public access to recreation opportunities. Because of limited budgets and dynamic land markets, acquisitions take place sequentially depending on available funds and sites. To address these planning features, we formulated a two-period site selection model with two objectives: maximize the expected number of species represented in protected sites and maximize the expected number of people with access to protected sites. Both objectives were maximized subject to an upper bound on the area protected over the two periods. The trade-off between species representation and public access was generated by the weighting method of multiobjective programming. Uncertainty was represented with a set of probabilistic scenarios of site availability in a linear-integer formulation. We used data for 27 rare species in 31 candidate sites in western Lake County, near the city of Chicago, to illustrate the model. Each trade-off curve had a concave shape in which species representation dropped at an increasing rate as public accessibility increased, with the trade-off being smaller at higher levels of the area budget. Several sites were included in optimal solutions regardless of objective function weights, and these core sites had high species richness and public access per unit area. The area protected in period one depended on current site availability and on the probabilities of sites being undeveloped and available in the second period. Although the numerical results are specific to our study, the methodology is general and applicable elsewhere. [source]
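
For small instances, the weighted-objective selection can be brute-forced, which makes the trade-off curve easy to trace. The single-period toy below uses invented data; the paper's actual model is a two-period stochastic integer program.

```python
from itertools import combinations

# Toy data (invented): per-site area, species present, residents served.
sites = {
    "A": dict(area=3, species={1, 2, 3}, access=500),
    "B": dict(area=2, species={3, 4}, access=900),
    "C": dict(area=4, species={5, 6, 7}, access=200),
    "D": dict(area=1, species={1, 7}, access=700),
}
AREA_BUDGET = 6

def best_selection(weight):
    """Maximize weight * (species represented) + (1 - weight) * access,
    access scaled to thousands, over all subsets within the budget."""
    best_subset, best_value = (), float("-inf")
    for r in range(len(sites) + 1):
        for subset in combinations(sites, r):
            if sum(sites[s]["area"] for s in subset) > AREA_BUDGET:
                continue
            represented = set()
            for s in subset:
                represented |= sites[s]["species"]
            access = sum(sites[s]["access"] for s in subset)
            value = weight * len(represented) + (1 - weight) * access / 1000.0
            if value > best_value:
                best_subset, best_value = subset, value
    return best_subset

# Sweeping the weight traces the species-versus-access trade-off curve.
for w in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"weight={w:.2f} -> protect {best_selection(w)}")
```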


    An in vivo model to evaluate the efficacy of barrier creams on the level of skin penetration of chemicals

    CONTACT DERMATITIS, Issue 1 2006
    Alexa Teichmann
    The reservoir function and the barrier function are important properties of the skin. The reservoir function depends on the barrier function, which, however, needs support from protective measures, particularly under working conditions. Barrier creams are one means of protecting the skin. In the present study, a method was developed to investigate the effectiveness of reservoir closure by different formulations. Patent Blue V in water was used as a model penetrant. Its penetration, with and without barrier cream treatment, was analyzed by tape stripping in combination with UV/VIS spectroscopic measurements. The investigations showed that the stratum corneum represents a reservoir for topically applied Patent Blue V in water. Furthermore, the barrier investigations showed that Vaseline and beeswax form a 100% barrier on the skin surface. The third barrier cream, containing waxes and surfactant, showed only a partial protective effect against the penetration of Patent Blue V in water, with strong interindividual differences. In conclusion, the application of barrier creams cannot replace other protective measures; at most, they should be used against low-grade irritants, in combination with other protectants, or in body areas where other protective measures are not applicable. [source]


    High-throughput screening of chemical exchange saturation transfer MR contrast agents

    CONTRAST MEDIA & MOLECULAR IMAGING, Issue 3 2010
    Guanshu Liu
    Abstract A new high-throughput MRI method for screening chemical exchange saturation transfer (CEST) agents is demonstrated, allowing simultaneous testing of multiple samples with minimal attention to sample configuration and shimming of the main magnetic field (B0). This approach, which is applicable to diamagnetic, paramagnetic and liposome CEST agents, employs a set of inexpensive glass or plastic capillary tubes containing CEST agents put together in a cheap plastic tube holder, without the need for liquid between the tubes to reduce magnetic susceptibility effects. In this setup, a reference image of direct water saturation spectra is acquired in order to map the absolute water frequency for each volume element (voxel) in the sample image, followed by an image of saturation transfer spectra to determine the CEST properties. Even though the field over the total sample is very inhomogeneous due to air-tube interfaces, the shape of the direct saturation spectra is not affected, allowing removal of susceptibility shift effects from the CEST data by using the absolute water frequencies from the reference map. As a result, quantitative information such as the mean CEST intensity for each sample can be extracted for multiple CEST agents at once. As an initial application, we demonstrate rapid screening of a library of 16 polypeptides for their CEST properties, but in principle the number of tubes is limited only by the available signal-to-noise ratio, field of view and gradient strength for imaging. Copyright © 2010 John Wiley & Sons, Ltd. [source]
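
The correction the abstract describes, using each voxel's absolute water frequency to undo susceptibility shifts before computing CEST contrast, is a standard Z-spectrum operation. A per-voxel sketch (all offsets, shifts, and saturation values below are illustrative):

```python
import numpy as np

def b0_corrected_mtr_asym(offsets_ppm, z_spectrum, b0_shift_ppm, cest_ppm):
    """Recentre one voxel's Z-spectrum on its measured water frequency,
    then compute MTR_asym = Z(-delta) - Z(+delta) at the CEST offset.
    offsets_ppm must be increasing for np.interp."""
    corrected = np.interp(offsets_ppm, offsets_ppm - b0_shift_ppm, z_spectrum)
    z_pos = np.interp(+cest_ppm, offsets_ppm, corrected)
    z_neg = np.interp(-cest_ppm, offsets_ppm, corrected)
    return z_neg - z_pos

# Illustrative voxel: direct water saturation shifted 0.3 ppm off
# resonance plus a small CEST dip that sits at +3.5 ppm after correction.
offsets = np.linspace(-6.0, 6.0, 121)
z = (1.0
     - 0.90 * np.exp(-((offsets - 0.3) / 0.8) ** 2)
     - 0.05 * np.exp(-((offsets - 3.8) / 0.5) ** 2))
print("MTR_asym at 3.5 ppm:", b0_corrected_mtr_asym(offsets, z, 0.3, 3.5))
```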


    Space-Charge Limited Current from Plasma-Facing Material Surface

    CONTRIBUTIONS TO PLASMA PHYSICS, Issue 1-3 2004
    S. Takamura
    Abstract We have derived an exact theoretical expression for the space-charge limited current from solid surfaces adjacent to plasmas that is applicable for an arbitrary sheath voltage. Our expression shows that the space-charge limited current tends to saturate with the sheath voltage. This new formula is evaluated by 1-D Particle-in-Cell (PIC) simulation and experiment, and is in good agreement with the simulation and experimental results. We have also obtained an analytical equation, fitted to the new formula, that extends the conventional Child-Langmuir formula by taking into account a more sophisticated dependence on the electrode potential and the plasma density through the effect of Debye shielding, and a sheath expansion due to increased voltage across the sheath. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
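
For reference, the conventional Child-Langmuir law that the new expression refines gives, for a planar gap of width d at voltage V and emitted carriers of charge e and mass m, a current density scaling as V^{3/2}; the saturation with sheath voltage reported in the abstract is precisely the departure from this classical scaling:

```latex
J_{\mathrm{CL}} = \frac{4}{9}\,\varepsilon_0\,\sqrt{\frac{2e}{m}}\,\frac{V^{3/2}}{d^{2}}
```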


    Situations of opportunity - Hammarby Sjöstad and Stockholm City's process of environmental management

    CORPORATE SOCIAL RESPONSIBILITY AND ENVIRONMENTAL MANAGEMENT, Issue 2 2008
    Örjan Svane
    Abstract Hammarby Sjöstad is a large brownfield development in Stockholm guided by extensive environmental objectives. This case study focuses on the environmental management of the city's project team. A main aim was methodology development related to the concept of situations of opportunity: how to study those periods when the team had great influence over the process. Goal conflicts were identified over, for example, energy use and the view of the lake. The team used policy instruments such as development contracts and design competitions. Some of the situations identified contributed little to the environmental management, for example the detailed planning. Others were more successful, for example the integration of infrastructural systems. Success situations were either unique or created by the team, and had less formal power. Other situations had more power, but were burdened with a prehistory of routines and agreements. The methodology should also be applicable to other processes of environmental management. Copyright © 2007 John Wiley & Sons, Ltd and ERP Environment. [source]


    On the concept of a universal audit of quality and environmental management systems

    CORPORATE SOCIAL RESPONSIBILITY AND ENVIRONMENTAL MANAGEMENT, Issue 3 2002
    Stanislav Karapetrovic
    There is a definite trend in industry today toward the integration of internal management systems (MSs), including those for managing quality, environment, health and safety, and social accountability. The standards describing the minimum requirements for such systems have been made largely compatible, but are not yet fully aligned or integrated. Apart from several national standards for integrated quality, environment and safety MSs, the world has yet to see a corresponding and internationally accepted guideline. In contrast, integrative standardization activities in the realm of MS auditing are proceeding in full force, with the introduction of the pioneering ISO 19011 guideline for quality and environmental auditing expected soon. This paper focuses on the concepts, principles and practices of a truly generic audit, applicable for the evaluation of diverse aspects of organizational performance against the criteria stated in MS standards. A universal audit model based on the systems approach and several important questions regarding the compatibility and integration of the current auditing schemes are discussed. These issues include the ability of integrated audits to foster unification of supported MSs, as well as different strategies for the development of a universal audit guideline (UAG) and integration of function-specific audits. Copyright © 2002 John Wiley & Sons, Ltd. and ERP Environment. [source]


    The Implementation of Innovation by a Multinational Operating in Two Different Environments: A Comparative Study

    CREATIVITY AND INNOVATION MANAGEMENT, Issue 2 2002
    Mohamed Zain
    The aim of the paper is to examine the innovation initiatives and processes followed by two subsidiaries of a German multinational company operating in Europe and Asia and to compare the innovativeness of their operations in these two locations. The study examined the innovation processes followed by the two subsidiary firms, operating in Germany and Malaysia, the actual problems they faced, the critical success factors involved in implementation, and the firms' work climates. Interestingly, it was found that both firms followed similar innovation processes. Nevertheless, different types of problems and critical success factors applied to each firm. The results showed that the Malaysian subsidiary faced more behavioural problems while the German subsidiary encountered more technical problems. Further, the study showed that a lack of knowledge was a problem common to both firms. The study demonstrated that the German subsidiary had a better working climate than its counterpart in Malaysia. Finally, the German subsidiary was found to be more innovation-active than the Malaysian subsidiary, as it introduced more types of innovation, interacted with more types of entity in the external environment and introduced more types of training. [source]


    Proof of principle: An HIV p24 microsphere immunoassay with potential application to HIV clinical diagnosis

    CYTOMETRY, Issue 3 2009
    Pascale Ondoa
    Abstract The measurement of CD4 counts and viral loads on a single instrument such as an affordable flow cytometer could considerably reduce the costs related to the follow-up of antiretroviral therapy in resource-poor settings. The aim of this study was to assess whether the HIV-1 p24 antigen could be measured using a microsphere-based flow cytometric (FC) assay, and to determine the experimental conditions necessary for processing plasma samples. A commercial anti-p24 antibody pair from Biomaric was used to develop a p24 microsphere immunoassay (MIA) using HIV culture supernatant as the source of antigen. The ultrasensitive Perkin Elmer enzyme immunoassay (EIA) served as the reference assay. Quantification of HIV p24 using the heat-mediated immune complex disruption format described for plasma samples was feasible using the Biomaric MIA and applicable to a broad range of HIV-1 Group M subtypes. The inclusion of a tyramide amplification step was successful and increased the fluorescence signal by up to 3 logs compared with the MIA without amplification. The analytical sensitivity of this ultrasensitive Biomaric assay reached 1 pg/mL, whereas the ultrasensitive Perkin Elmer EIA was sensitive to less than 0.17 pg/mL. Our data indicate, for the first time, that the principle of p24 detection using the heat-denatured ultrasensitive format can be applied to FC. © 2008 Clinical Cytometry Society [source]