Values Used (value + used)



Selected Abstracts


Survey of methodologies for developing media screening values for ecological risk assessment

INTEGRATED ENVIRONMENTAL ASSESSMENT AND MANAGEMENT, Issue 4 2005
Mace G. Barron
Abstract This review evaluates the methodologies of 13 screening value (SV) compilations that have been commonly used in ecological risk assessment (ERA), including compilations from state and U.S. federal agencies, the Oak Ridge National Laboratory (ORNL), Canada, The Netherlands, and Australia. The majority of surface water SVs were primarily derived for the protection of aquatic organisms using 2 approaches: (1) a statistical assessment of toxicity values by species groupings, such as "ambient water quality criteria," or (2) extrapolation of a lowest observed adverse effect level determined from limited toxicity data using an uncertainty factor. Sediment SVs were primarily derived for the protection of benthic invertebrates using 2 approaches: (1) statistical interpretations of databases on the incidence of biological effects and chemical concentrations in sediment, or (2) values derived from equilibrium partitioning based on a surface water SV. Soil SVs were derived using a diversity of approaches and were usually based on the lowest value determined from soil toxicity to terrestrial plants or invertebrates and, less frequently, from modeled, incidental soil ingestion or chemical accumulation in terrestrial organisms. The various SV compilations and methodologies had varying levels of conservatism and were not consistent in the pathways and receptors considered in the SV derivation. Many SVs were derived from other compilations, were based on outdated values, or relied only on older toxicity data. Risk assessors involved in ERA should carefully evaluate the technical basis of SVs and consider the uncertainty in any value used to determine the presence or absence of risk and the need for further assessment. [source]
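
To make the two surface water derivation routes concrete, here is a minimal Python sketch of both: a low percentile of a species toxicity distribution (the statistical, water-quality-criteria-style approach) and a LOAEL divided by an uncertainty factor. The toxicity values, the 5th-percentile choice, and the factor of 10 are illustrative assumptions, not numbers from any of the 13 compilations.

```python
# Minimal sketch of the two surface water SV derivation approaches described
# above. All numbers are illustrative placeholders, not values from any compilation.
import numpy as np

def sv_from_uncertainty_factor(loael_ug_per_l: float, uf: float = 10.0) -> float:
    """Approach 2: divide a LOAEL from limited toxicity data by an uncertainty factor."""
    return loael_ug_per_l / uf

def sv_from_species_distribution(toxicity_values_ug_per_l, percentile: float = 5.0) -> float:
    """Approach 1 (simplified): take a low percentile of the species toxicity
    distribution, in the spirit of ambient water quality criteria."""
    return float(np.percentile(toxicity_values_ug_per_l, percentile))

# Hypothetical acute toxicity values (ug/L) for several species groups.
tox = [12.0, 45.0, 88.0, 150.0, 420.0, 900.0]
print(sv_from_species_distribution(tox))   # statistical approach
print(sv_from_uncertainty_factor(12.0))    # LOAEL / UF approach
```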


Detection of four oxidation sites in viral prolyl-4-hydroxylase by top-down mass spectrometry

PROTEIN SCIENCE, Issue 10 2003
Ying Ge
Abstract Oxidative inactivation is a common problem for enzymatic reactions that proceed via iron oxo intermediates. In an investigation of the inactivation of a viral prolyl-4-hydroxylase (26 kD), electrospray mass spectrometry (MS) directly shows the degree of oxidation under varying experimental conditions, but indicates the addition of at most three oxygen atoms per molecule. Thus, molecular ion masses (M + nO) of one sample indicate the oxygen atom adducts n = 0, 1, 2, 3, and 4 at 35%, 41%, 19%, 5 ± 3%, and <2%, respectively; "top-down" MS/MS of these ions shows oxidation at the sites R28–V31, E95–F107, and K216 of 22%, 28%, and 34%, respectively, but with a possible (~4%) fourth site at V125–D150. However, for the doubly oxidized molecular ions (increasing the precursor oxygen content from 0.94 to 2), MS/MS showed an easily observable ~13% oxygen at the V125–D150 site. For the "bottom-up" approach, detection of the ~4% oxidation at the V125–D150 site by MS analysis of a proteolysis mixture would have been very difficult. The unmodified peptide containing this site would represent a few percent of the proteolysis mixture; the oxidized peptide not only would be just ~4% of this, but the uniqueness of its mass value (~1–2 kD) would be far less than the 11,933 Dalton value used here. Using different molecular ion precursors for top-down MS/MS also provides kinetic data from a single sample, that is, from molecular ions with 0.94 and 2 oxygens. Little oxidation occurs at V125–D150 until K216 is oxidized, suggesting that these are competitively catalyzed by the iron center; among several prolyl-4-hydroxylases the K216, H137, and D139 are conserved residues. [source]
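
The 0.94 average oxygen content cited for the less-oxidized precursor follows directly from the reported adduct abundances; a quick arithmetic check (treating the <2% n = 4 species as zero):

```python
# Recovering the ~0.94 average O atoms per molecule quoted above from the
# measured M + nO adduct distribution reported in the abstract.
fractions = {0: 0.35, 1: 0.41, 2: 0.19, 3: 0.05, 4: 0.0}  # n -> abundance

mean_oxygens = sum(n * f for n, f in fractions.items())
print(f"mean O atoms per molecule: {mean_oxygens:.2f}")  # -> 0.94
```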


Restoration of a species-rich fen-meadow after abandonment: response of 64 plant species to management

APPLIED VEGETATION SCIENCE, Issue 1 2000
A.B. Hald
Abstract. Eleven years of abandonment of a species-rich fen-meadow under undisturbed environmental conditions resulted in transformation into areas with tall herb-, sedge- and rush-dominated communities and areas with Alnus thicket. Species cover was measured in permanent plots in both community types and succession was monitored during 14 yr of restoration following reintroduction of management. The annual increase in accumulated species number followed a log-log-time linear regression during 10 yr of grazing management. The expected number of years before this annual rate equalled the annual extinction rate, i.e. a stable situation with respect to species density, was up to six. The response of 64 species to management was evaluated through paired statistical tests of changes in cover and frequency over time. In total, 55 species could each be allocated to one unique response model (monotone, or non-monotone concave) independently of the importance value used (cover or frequency) and type of management (grazing following felling or mowing, and mowing without grazing). Species which increased in response to grazing had the most persistent seed banks and CR-strategies, while species decreasing in response to grazing had less persistent seed banks and CS-strategies. Some of the species which increased due to grazing followed a model with a local maximum in cover and frequency. The results are discussed in relation to management of species with high cover value during restoration succession. [source]
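
As an illustration of the log-log-time regression of accumulated species number, the sketch below fits a power law to hypothetical yearly counts; the data are invented stand-ins, not the study's plot records.

```python
# Minimal sketch of the log-log-time regression described above, fit to
# hypothetical accumulated species counts over 10 yr of grazing management.
import numpy as np

years = np.arange(1, 11)
accum_species = np.array([30, 38, 43, 47, 50, 52, 54, 56, 57, 58])

slope, intercept = np.polyfit(np.log(years), np.log(accum_species), 1)
print(f"log S = {intercept:.2f} + {slope:.2f} * log t")
# The annual accumulation rate declines as a power of time; the study asks
# when it falls to the annual extinction rate, i.e. a stable species density.
```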


An Assessment of the Potential Value of Elevated Homocysteine in Predicting Alcohol-withdrawal Seizures

EPILEPSIA, Issue 5 2006
Stefan Bleich
Summary: Purpose: Higher homocysteine levels were found in actively drinking patients with alcohol dependence. Recent studies have shown that high homocysteine levels are associated with alcohol-withdrawal seizures. The aim of the present study was to calculate the best predictive cutoff value of plasma homocysteine levels in actively drinking alcoholics (n = 88) with first-onset alcohol-withdrawal seizures. Methods: The present study included 88 alcohol-dependent patients, of whom 18 had a first-onset withdrawal seizure. All patients were active drinkers and had an established diagnosis of alcohol dependence according to the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV). Sensitivity and specificity were calculated by using every homocysteine plasma level found in the study population as a cutoff value. Bayes' theorem was used to calculate positive (PPV) and negative (NPV) predictive values for all cutoff values used. Results: The highest combined sensitivity and specificity was reached at a homocysteine plasma cutoff value of 23.9 μM. Positive predictive values ranged from 0.23 to 0.745; the maximum was reached at a homocysteine plasma level of 41.7 μM. Negative predictive values ranged from 0.50 to 0.935, with a maximum at a homocysteine plasma level of 15.8 μM. Conclusions: Homocysteine levels above this cutoff value on admission are a useful screening tool to identify actively drinking patients at higher risk of alcohol-withdrawal seizures. This pilot study gives further hints that biologic markers may be helpful to predict patients at risk for first-onset alcohol-withdrawal seizures. [source]
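
The cutoff search is easy to sketch: try every observed level as a cutoff, compute sensitivity and specificity, and get PPV/NPV from Bayes' theorem with the study prevalence of 18/88. The homocysteine measurements below are simulated placeholders, not patient data.

```python
# Sketch of the exhaustive cutoff search described above. Plasma levels are
# simulated; only the group sizes (18 of 88) come from the abstract.
import numpy as np

rng = np.random.default_rng(0)
hcy_seizure = rng.normal(35, 10, 18)   # hypothetical levels (uM), seizure group
hcy_control = rng.normal(18, 6, 70)    # hypothetical levels, no-seizure group

prevalence = 18 / 88                   # study prevalence of first-onset seizures
eps = 1e-12                            # guards divisions at extreme cutoffs
best = None
for cut in np.sort(np.concatenate([hcy_seizure, hcy_control])):
    sens = np.mean(hcy_seizure >= cut)             # sensitivity at this cutoff
    spec = np.mean(hcy_control < cut)              # specificity at this cutoff
    ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence) + eps)
    npv = spec * (1 - prevalence) / (spec * (1 - prevalence) + (1 - sens) * prevalence + eps)
    if best is None or sens + spec > best[1] + best[2]:
        best = (cut, sens, spec, ppv, npv)
print("cutoff %.1f uM: sens=%.2f spec=%.2f PPV=%.2f NPV=%.2f" % best)
```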


Erosion models: quality of spatial predictions

HYDROLOGICAL PROCESSES, Issue 5 2003
Victor Jetten
Abstract An Erratum has been published for this article in Hydrological Processes 18(3) 2004, 595. An overview is given of the predictive quality of spatially distributed runoff and erosion models. A summary is given of the results of model comparison workshops organized by the Global Change and Terrestrial Ecosystems Focus 3 programme, as well as other results obtained by individual researchers. The results concur with the generally held viewpoint in the literature that the predictive quality of distributed models is moderately good for total discharge at the outlet, and not very good for net soil loss. This is only true if extensive calibration is done: uncalibrated results are generally poor. The simpler lumped models seem to perform as well as the more complex distributed models, although the latter produce more detailed spatially distributed results that can aid the researcher. All these results are outlet based: models are tested on lumped discharge and soil loss or on hydrographs and sedigraphs. Surprisingly few tests have compared simulated and observed erosion patterns, although this is arguably just as important for designing anti-erosion measures and determining source and sink areas. Two studies are shown in which the spatial performance of the erosion model LISEM (Limburg soil erosion model) is analysed. It seems that: (i) the model is very sensitive to the resolution (grid cell size); (ii) the spatial pattern prediction is not very good; (iii) the performance becomes better when the results are resampled to a lower resolution; and (iv) the results are improved when certain processes in the model (in this case gully incision) are restricted to so-called 'critical areas', selected from the digital elevation model with simple rules. The difficulties associated with calibrating and validating spatially distributed soil erosion models are, to a large extent, due to the large spatial and temporal variability of soil erosion phenomena and the uncertainty associated with the input parameter values used in models to predict these processes. They will, therefore, not be solved by constructing even more complete, and therefore more complex, models. However, the situation may be improved by using more spatial information for model calibration and validation rather than output data only, and by using 'optimal' models describing only the dominant processes operating in a given landscape. Copyright © 2003 John Wiley & Sons, Ltd. [source]
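
Point (iii), better pattern performance at lower resolution, can be illustrated with a toy comparison: correlate a simulated and an observed erosion map cell by cell, then again after block-averaging both. The maps below are synthetic stand-ins (a smooth hotspot plus white cell-level error), not LISEM output.

```python
# Toy illustration of resolution effects on spatial pattern agreement.
import numpy as np

def coarsen(grid, factor):
    """Average square blocks of cells, i.e. resample to a lower resolution."""
    n = grid.shape[0] // factor * factor
    g = grid[:n, :n]
    return g.reshape(n // factor, factor, n // factor, factor).mean(axis=(1, 3))

rng = np.random.default_rng(1)
x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
observed = np.exp(-8 * ((x - 0.4) ** 2 + (y - 0.6) ** 2))    # smooth hypothetical erosion hotspot
simulated = observed + rng.normal(0.0, 0.3, observed.shape)  # "model output" with cell-level error

for factor in (1, 4, 16):
    r = np.corrcoef(coarsen(observed, factor).ravel(),
                    coarsen(simulated, factor).ravel())[0, 1]
    print(f"aggregation x{factor}: pattern correlation r = {r:.2f}")
# Cell-by-cell agreement is poor but improves as both maps are aggregated,
# mirroring the resolution effect reported for LISEM.
```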


Thermochemistry for enthalpies and reaction paths of nitrous acid isomers

INTERNATIONAL JOURNAL OF CHEMICAL KINETICS, Issue 7 2007
Rubik Asatryan
Recent studies show that nitrous acid, HONO, a significant precursor of the hydroxyl radical in the atmosphere, is formed during the photolysis of nitrogen dioxide in soils. The term nitrous acid is largely used interchangeably in the atmospheric literature, and the analytical methods employed often do not distinguish between the HONO structure (nitrous acid) and HNO2 (nitryl hydride or isonitrous acid). The objective of this study is to determine the thermochemistry of the HNO2 isomer, which has not been determined experimentally, and to evaluate its thermal and atmospheric stability relative to HONO. The thermochemistry of these isomers is also needed for reference and internal consistency in the calculation of larger nitrite and nitryl systems. We review, evaluate, and compare the thermochemical properties of several small nitric oxide and hydrogen nitrogen oxide molecules. The enthalpies of HONO and HNO2 are calculated using computational chemistry, with atomization, isomerization, and work reactions employing closed- and open-shell reference molecules as the methods of analysis. Three high-level composite methods, G3, CBS-QB3, and CBS-APNO, are used for the computation of enthalpy. The enthalpy of formation, ΔHf°(298 K), for HONO is determined as −18.90 ± 0.05 kcal mol−1 (−79.08 ± 0.2 kJ mol−1) and as −10.90 ± 0.05 kcal mol−1 (−45.61 ± 0.2 kJ mol−1) for nitryl hydride (HNO2), which is significantly higher than values used in recent NOx combustion mechanisms. H–NO2 is the weakest bond in isonitrous acid, but HNO2 will isomerize to HONO over a barrier similar to the HONO bond energy; thus, it also serves as a source of OH in atmospheric chemistry. The kinetics of the isomerization are determined; a potential energy diagram of the H/N/O2 system is presented, and an analysis of the triplet surface is initiated. © 2007 Wiley Periodicals, Inc. Int J Chem Kinet 39: 378–398, 2007 [source]
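
A quick arithmetic check of the quoted values: the kcal-to-kJ conversions and the HONO → HNO2 isomerization enthalpy implied by the two formation enthalpies (HNO2 lies 8.0 kcal mol−1 higher):

```python
# Worked check of the numbers above: unit conversions and the isomerization
# enthalpy implied by the two formation enthalpies.
KCAL_TO_KJ = 4.184

dHf_HONO = -18.90   # kcal/mol, from the abstract
dHf_HNO2 = -10.90   # kcal/mol, from the abstract

print(f"HONO: {dHf_HONO * KCAL_TO_KJ:.2f} kJ/mol")   # -79.08, as quoted
print(f"HNO2: {dHf_HNO2 * KCAL_TO_KJ:.2f} kJ/mol")   # -45.61, as quoted
print(f"isomerization HONO -> HNO2: {dHf_HNO2 - dHf_HONO:+.2f} kcal/mol")  # +8.00
```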


Modelling and design considerations on CML gates under high-current effects

INTERNATIONAL JOURNAL OF CIRCUIT THEORY AND APPLICATIONS, Issue 6 2005
M. Alioto
Abstract In this paper, the effect of transit time degradation in bipolar transistors on the power-delay trade-off in CML gates, and on their design, is dealt with. A delay model which accounts for the transit time increase due to the high bias current values used in high-speed applications is derived by generalizing an approach previously proposed by the same authors (IEEE Trans. CAD 1999; 18(9):1369–1375; Model and Design of Bipolar and MOS Current-Mode Logic (CML, ECL and SCL Digital Circuits), Kluwer Academic Publishers: Dordrecht, 2005). The resulting closed-form delay expression is obtained by properly simplifying the SPICE model, and has an explicit dependence on the bias current, which determines the power consumption of CML gates. Accordingly, the delay model is used to gain insight into the power-delay trade-off by considering the effect of transit time degradation in high-speed designs. In particular, the cases where such effects can be neglected are identified, to better understand how transit time degradation affects the performance of CML gates in current bipolar technologies. The proposed model has a simple and compact expression, and is thus suitable for pencil-and-paper evaluations as well as fast timing analysis. Simulations of CML circuits with a 20-GHz bipolar process show that the model has very good accuracy over a wide range of current and loading conditions. Copyright © 2005 John Wiley & Sons, Ltd. [source]
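
The paper's closed-form delay expression is not reproduced in the abstract, so the sketch below only illustrates the qualitative trade-off it describes, using an assumed Kirk-effect-like transit-time degradation law and invented parameter values; it is not the authors' model.

```python
# Toy power-delay trade-off for a CML gate: an assumed quadratic transit-time
# degradation at high current plus a load-charging term. All constants invented.
def gate_delay(i_bias, tau_f0=2e-12, i_knee=8e-3, v_swing=0.3, c_load=30e-15):
    """Transit-time term that degrades at high current, plus an R*C term
    with the collector resistance set by R = v_swing / i_bias."""
    tau_f = tau_f0 * (1.0 + (i_bias / i_knee) ** 2)  # assumed degradation law
    return tau_f + (v_swing / i_bias) * c_load

for i in (1e-3, 2e-3, 4e-3, 8e-3, 16e-3):
    print(f"I = {i * 1e3:4.1f} mA   delay = {gate_delay(i) * 1e12:5.2f} ps")
# Delay first falls with bias current, then rises once transit-time degradation
# dominates: beyond that point extra power (proportional to I) buys no speed.
```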


Bayesian inference in a piecewise Weibull proportional hazards model with unknown change points

JOURNAL OF ANIMAL BREEDING AND GENETICS, Issue 4 2007
J. Casellas
Summary The main difference between parametric and non-parametric survival analyses lies in model flexibility. Parametric models have been suggested as preferable because of their lower programming needs, although they generally suffer from reduced flexibility in fitting field data. In this sense, parametric survival functions can be redefined as piecewise survival functions whose slopes change at given points, which substantially increases the flexibility of the parametric survival model. Unfortunately, we lack accurate methods to establish the required number of change points and their position within the time space. In this study, a Weibull survival model with a piecewise baseline hazard function was developed, with change points included as unknown parameters in the model. Specifically, a Weibull log-normal animal frailty model was assumed, and it was solved with a Bayesian approach. The required fully conditional posterior distributions were derived. During the sampling process, all the parameters in the model were updated using a Metropolis–Hastings step, with the exception of the genetic variance, which was updated with a standard Gibbs sampler. This methodology was tested with simulated data sets, each one analysed through several models with different numbers of change points. The models were compared with the Deviance Information Criterion, with appealing results. Simulation results showed that the estimated marginal posterior distributions covered the true parameter values used in the simulation data well and placed high density on them. Moreover, results showed that the piecewise baseline hazard function could appropriately fit survival data, as well as other smooth distributions, with a reduced number of change points. [source]
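
A minimal sketch of the piecewise baseline hazard idea: a Weibull hazard whose shape parameter switches at fixed change points. In the paper the change points are unknowns sampled within the MCMC; here they are fixed, and all parameter values are illustrative.

```python
# Piecewise Weibull baseline hazard: a different shape parameter rho on each
# interval between change points. Simplified sketch; the hazard may jump at
# the change points here, and the change points are fixed rather than sampled.
import numpy as np

def piecewise_weibull_hazard(t, change_points, rhos, lam=0.05):
    """Weibull hazard lam * rho * t**(rho - 1), with rho chosen per interval."""
    edges = np.concatenate([[0.0], np.asarray(change_points, dtype=float), [np.inf]])
    k = np.searchsorted(edges, t, side="right") - 1   # interval index per time
    rho = np.asarray(rhos)[k]
    return lam * rho * np.maximum(t, 1e-9) ** (rho - 1.0)

t = np.linspace(0.5, 10.0, 5)
print(piecewise_weibull_hazard(t, change_points=[2.0, 6.0], rhos=[0.8, 1.0, 1.6]))
```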


Unique Preparation of Hexaboride Nanocubes: A First Example of Boride Formation by Combustion Synthesis

JOURNAL OF THE AMERICAN CERAMIC SOCIETY, Issue 10 2010
Raghunath Kanakala
Nanocubes of LaB6 and Sm0.8B6 have been synthesized using low-temperature combustion synthesis, a technique that had only been used previously for the preparation of oxides. The hexaboride nanocubes were prepared using lanthanum nitrate or samarium nitrate, carbohydrazide, and boron powders. The furnace temperature for synthesis was kept at 320°C, lower than the typical temperature values used in combustion processes for the preparation of oxides. After synthesis, the nanocubes were characterized using X-ray diffraction, scanning electron microscopy, and X-ray photoelectron spectroscopy. The combustion process was analyzed using differential scanning calorimetry, which shows that the formation of the carbohydrazide and nitrate melts as well as the formation of a complex between metal ions and carbohydrazide are crucial steps for the reaction. The technique results in high-purity powders with a unique cubic morphology, in which the corners of the cubes can be used as point sources for efficient electron emission. [source]


Interspecies allometric scaling: prediction of clearance in large animal species: Part II: mathematical considerations

JOURNAL OF VETERINARY PHARMACOLOGY & THERAPEUTICS, Issue 5 2006
M. MARTINEZ
Interspecies scaling is a useful tool for the prediction of pharmacokinetic parameters from animals to humans, and it is often used for estimating a first-time-in-human dose. However, it is important to appreciate the mathematical underpinnings of this scaling procedure when using it to predict pharmacokinetic parameter values across animal species. When cautiously applied, allometry can be a tool for estimating clearance in veterinary species for the purpose of dosage selection. It is particularly valuable during the selection of dosages in large zoo animal species, such as elephants, large cats and camels, for which pharmacokinetic data are scant. In Part I, allometric predictions of clearance in large animal species were found to pose substantially greater risks of inaccuracy than those observed for humans. In this report, we examine the factors influencing the accuracy of our clearance estimates from the perspective of the relationship between prediction error and such variables as the distribution of body weight values used in the regression analysis, the influence of a particular observation on the clearance estimate, and the 'goodness of fit' (R2) of the regression line. Ultimately, these considerations are used to generate recommendations regarding the data to be included in the allometric prediction of clearance in large animal species. [source]
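
A sketch of the allometric fit and the leverage concern discussed here: regress log clearance on log body weight, then extrapolate to a much heavier species. All weights and clearances are invented for illustration.

```python
# Allometric scaling sketch: CL = a * BW^b fitted in log-log space, then
# extrapolated to a large species. All values invented for illustration.
import numpy as np

bw = np.array([0.02, 0.25, 2.5, 10.0, 70.0])   # body weight, kg (hypothetical)
cl = np.array([0.09, 0.80, 5.2, 15.0, 80.0])   # clearance, L/h (hypothetical)

b, log_a = np.polyfit(np.log10(bw), np.log10(cl), 1)
r2 = np.corrcoef(np.log10(bw), np.log10(cl))[0, 1] ** 2
pred = lambda w: 10 ** log_a * w ** b

print(f"CL = {10 ** log_a:.3f} * BW^{b:.2f}  (R^2 = {r2:.3f})")
print(f"predicted CL for a 3000 kg elephant: {pred(3000):.0f} L/h")
# A high R^2 over 0.02-70 kg says little about error at 3000 kg: the prediction
# is an extrapolation far outside the body-weight range used in the regression,
# which is exactly the leverage problem the abstract raises.
```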


Effects of temperature on isotopic enrichment in Daphnia magna: implications for aquatic food-web studies

RAPID COMMUNICATIONS IN MASS SPECTROMETRY, Issue 14 2003
M. Power
Laboratory experiments were conducted with Daphnia magna and Hyalella sp. grown on a single food source of known isotopic composition at a range of temperatures spanning the physiological optima for each species. Daphnia raised at 26.5°C were enriched in δ13C and δ15N by 3.1 and 2.8‰, respectively, relative to diet. Daphnia raised at 12.8°C were enriched 1.7 and 5.0‰ in δ13C and δ15N, respectively. Results imply a significant negative relationship between the δ13C and δ15N of primary consumers when a temperature gradient exists. Similar responses were observed for Hyalella. Results indicate a general increase in δ13C enrichment and decrease in δ15N enrichment as temperature rises. Deviations from the commonly applied isotopic enrichment values used in aquatic ecology were attributed to changes in temperature-mediated physiological rates. Field data from a variety of sources also showed a general trend toward δ13C enrichment with increasing temperature in marine and lacustrine zooplankton. Multivariate regression models demonstrated that, in oligotrophic and mesotrophic lakes, zooplankton δ13C was related to lake-specific POM δ13C, lake surface temperature and latitude. Temperature-dependent isotopic separation (enrichment) between predator and prey should be taken into consideration when interpreting the significance of isotopic differences within and among aquatic organisms and ecosystems, and when assigning organisms to food-web positions on the basis of observed isotope values. Copyright © 2003 John Wiley & Sons, Ltd. [source]
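
Applying such temperature-dependent enrichment when assigning trophic position could look like the sketch below, which linearly interpolates the two Daphnia treatments (1.7‰ to 3.1‰ for δ13C and 5.0‰ to 2.8‰ for δ15N between 12.8 and 26.5°C); the linear form is an assumption for illustration, not a relationship fitted by the authors.

```python
# Temperature-dependent trophic enrichment, linearly interpolated between the
# two Daphnia treatments reported above. Linear form is an assumption.
def d13c_consumer(d13c_diet: float, temp_c: float) -> float:
    # 1.7 per mil at 12.8 C rising to 3.1 per mil at 26.5 C
    delta = 1.7 + (3.1 - 1.7) * (temp_c - 12.8) / (26.5 - 12.8)
    return d13c_diet + delta

def d15n_consumer(d15n_diet: float, temp_c: float) -> float:
    # 5.0 per mil at 12.8 C falling to 2.8 per mil at 26.5 C
    delta = 5.0 + (2.8 - 5.0) * (temp_c - 12.8) / (26.5 - 12.8)
    return d15n_diet + delta

for t in (12.8, 20.0, 26.5):
    print(t, round(d13c_consumer(-30.0, t), 2), round(d15n_consumer(4.0, t), 2))
```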


Methodology and model for performance and cost comparison of innovative treatment technologies at wood preserving sites

REMEDIATION, Issue 1 2001
Mark L. Evans
Wood preserving facilities have used a variety of compounds, including pentachlorophenol (PCP), creosote, and certain metals, to extend the useful life of wood products. Past operations and waste management practices resulted in soil and water contamination at a portion of the more than 700 wood preserving sites in the United States (EPA, 1997). Many of these sites are currently being addressed under federal, state, or voluntary cleanup programs. The U.S. Environmental Protection Agency (EPA) National Risk Management Research Laboratory (NRMRL) has responded to the need for information aimed at facilitating remediation of wood preserving sites by conducting treatability studies, issuing guidance, and preparing reports. This article presents a practical methodology and computer model for screening the performance and comparing the costs of seven innovative technologies that could be used for the treatment of contaminated soils at user-specified wood preserving sites. The model incorporates a technology screening function and a cost-estimating function developed from literature searches and vendor information solicited for this study. This article also provides background information on the derivation of various assumptions and default values used in the model, common contaminants at wood preserving sites, and recent trends in the cleanup of such sites. © 2001 John Wiley & Sons, Inc. [source]
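
A minimal sketch of how the model's two functions could fit together: screen technologies against the site's contaminants, then rank survivors by rough unit cost times soil volume. Technology names, applicability sets, and costs are placeholders, not values from the EPA model.

```python
# Sketch of a screening function plus a cost-estimating function, in the
# spirit of the model described above. All entries are placeholders.
SITE = {"contaminants": {"PCP", "creosote"}, "soil_m3": 12_000}

TECHNOLOGIES = {
    "bioremediation":     {"treats": {"PCP", "creosote"}, "usd_per_m3": 90},
    "thermal desorption": {"treats": {"PCP", "creosote"}, "usd_per_m3": 260},
    "soil washing":       {"treats": {"metals"},          "usd_per_m3": 150},
}

# Screening step: keep technologies that cover every site contaminant.
candidates = {name: spec for name, spec in TECHNOLOGIES.items()
              if SITE["contaminants"] <= spec["treats"]}

# Cost step: rank the survivors by estimated total treatment cost.
for name, spec in sorted(candidates.items(), key=lambda kv: kv[1]["usd_per_m3"]):
    print(f"{name}: ~${spec['usd_per_m3'] * SITE['soil_m3']:,.0f}")
```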