Test Data (test + data)

Kinds of Test Data

  • patch test data

  • test data set


Selected Abstracts


    PATHOGEN DETECTION IN FOOD MICROBIOLOGY LABORATORIES: AN ANALYSIS OF QUALITATIVE PROFICIENCY TEST DATA, 1999–2007

    JOURNAL OF FOOD SAFETY, Issue 4 2009
    DANIEL C. EDSON
    ABSTRACT The objective of this study was to assess laboratories' ability to detect or rule out the presence of four common food pathogens: Escherichia coli O157:H7, Salmonella spp., Listeria monocytogenes and Campylobacter spp. To do this, qualitative proficiency test data provided by one proficiency test provider from 1999 to 2007 were examined. The annual and cumulative 9-year percentages of false-negative and false-positive responses were calculated. The cumulative 9-year false-negative rates were 7.8% for E. coli O157:H7, 5.9% for Salmonella spp., 7.2% for L. monocytogenes and 13.6% for Campylobacter spp. Atypical strains and low concentrations of bacteria were more likely to be missed, and the data showed no trend of improving performance over time. Percentages of false-positive results were below 5.0% for all four pathogens. PRACTICAL APPLICATIONS The results imply that food testing laboratories often fail to detect the presence of these four food pathogens in real food specimens. To improve pathogen detection, supervisors should ensure that testing personnel are adequately trained, that recommended procedures are followed correctly, that samples are properly prepared, that proper conditions (temperature, atmosphere and incubation time) are maintained for good bacterial growth and that recommended quality control procedures are followed. Supervisors should also always investigate reasons for unsatisfactory proficiency test results and take corrective action. Finally, more research is needed into testing practices and proficiency test performance in food testing laboratories. [source]


    Significance of Modeling Error in Structural Parameter Estimation

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 1 2001
    Masoud Sanayei
    Structural health monitoring systems rely on algorithms to detect potential changes in structural parameters that may be indicative of damage. Parameter-estimation algorithms seek to identify changes in structural parameters by adjusting parameters of an a priori finite-element model of a structure to reconcile its response with a set of measured test data. Modeling error, represented as uncertainty in the parameters of a finite-element model of the structure, curtails the capability of parameter estimation to capture the physical behavior of the structure. The performance of four error functions, two stiffness-based and two flexibility-based, is compared in the presence of modeling error in terms of the propagation rate of the modeling error and the quality of the final parameter estimates. Three different types of parameters are used in the parameter estimation procedure: (1) unknown parameters that are to be estimated, (2) known parameters assumed to be accurate, and (3) uncertain parameters that manifest the modeling error and are assumed known and not to be estimated. The significance of modeling error is investigated with respect to excitation and measurement type and locations, the type of error function, location of the uncertain parameter, and the selection of unknown parameters to be estimated. It is illustrated in two examples that the stiffness-based error functions perform significantly better than the corresponding flexibility-based error functions in the presence of modeling error. Additionally, the topology of the structure, excitation and measurement type and locations, and location of the uncertain parameters with respect to the unknown parameters can have a significant impact on the quality of the parameter estimates. Insight into the significance of modeling error and its potential impact on the resulting parameter estimates is presented through analytical and numerical examples using static and modal data. [source]
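
    The parameter-estimation loop described here can be illustrated in miniature. The sketch below is not the authors' algorithm or code: it adjusts the stiffnesses of a hypothetical two-spring finite-element model by least squares until the computed static response reconciles with noisy "measured" displacements; all names and values are invented.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical 2-DOF spring chain: K(p) u = f, with unknown stiffnesses p = (k1, k2).
def displacements(p, f):
    k1, k2 = p
    K = np.array([[k1 + k2, -k2],
                  [-k2,      k2]])
    return np.linalg.solve(K, f)

f = np.array([0.0, 100.0])                               # applied loads (N)
u_meas = displacements([4000.0, 2000.0], f)              # synthetic "measured" response
u_meas += np.random.default_rng(0).normal(0.0, 1e-4, 2)  # measurement noise

# Stiffness-side residual: difference between measured and model responses.
fit = least_squares(lambda p: displacements(p, f) - u_meas,
                    x0=[3000.0, 3000.0], bounds=(1.0, 1e6))
print(fit.x)  # recovered (k1, k2), close to the true (4000, 2000)
```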


    Structural testing criteria for message-passing parallel programs

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 16 2008
    S. R. S. Souza
    Abstract Parallel programs present features such as concurrency, communication and synchronization that make testing a challenging activity. Because of these characteristics, the direct application of traditional testing is not always possible, and adequate testing criteria and tools are necessary. In this paper we investigate the challenges of validating message-passing parallel programs and present a set of specific testing criteria. We introduce a family of structural testing criteria based on a test model. The model captures control and data flow of the message-passing programs, by considering their sequential and parallel aspects. The criteria provide a coverage measure that can be used for evaluating the progress of the testing activity and also provide guidelines for the generation of test data. We also describe a tool, called ValiPar, which supports the application of the proposed testing criteria. Currently, ValiPar is configured for parallel virtual machine (PVM) and message-passing interface (MPI). Results of the application of the proposed criteria to MPI programs are also presented and analyzed. Copyright © 2008 John Wiley & Sons, Ltd. [source]


    Skin sensitizing properties of the ethanolamines mono-, di-, and triethanolamine.

    CONTACT DERMATITIS, Issue 5 2009
    Data analysis of a multicentre surveillance network (IVDK) and review of the literature
    Numerous publications address the skin sensitizing potential of the short chain alkanolamines triethanolamine (TEA), diethanolamine (DEA), and monoethanolamine (MEA), which are not skin sensitizing according to animal studies. Regarding TEA, we analysed patch test data of 85 098 patients who had been tested with TEA 2.5% petrolatum by the Information Network of Departments of Dermatology (IVDK) to identify particular exposures possibly associated with an elevated risk of sensitization. Altogether, 323 patients (0.4%) tested positive. The profile of patch test reactions indicates a slight irritant potential rather than a true allergic response in many cases. Although TEA is used widely, no exposure associated with an increased risk of TEA sensitization was identified. Therefore, the risk of sensitization to TEA seems to be very low. MEA and DEA were patch tested in a much more targeted fashion, in 9602 and 8791 patients respectively, and the prevalence of contact allergy was 3.8% and 1.8%, respectively. MEA is the prominent allergen in metalworkers with exposure to water-based metalworking fluids (wbMWFs); DEA is probably used in cutting fluids less frequently nowadays. Chronic damage to the skin barrier resulting from wbMWF, the alkalinity of ethanolamines (increasing from TEA to MEA), and other cofactors may contribute to a notable sensitization risk. [source]


    Epidemiological data on consumer allergy to p-phenylenediamine

    CONTACT DERMATITIS, Issue 6 2008
    Jacob Pontoppidan Thyssen
    Many women and men now dye their hair. p-Phenylenediamine (PPD) is a frequent and important component of permanent hair dye products; exposure to it may cause allergic contact sensitization, acute dermatitis, and severe facial oedema. To increase our understanding of PPD allergy, we reviewed published literature containing PPD patch test data from dermatitis patients and individuals in the general population. This was performed to estimate the median prevalence and the weighted average of PPD sensitization and thereby assess the burden of PPD-containing hair care products on health. Literature was examined using PubMed/MEDLINE, Biosis, and Science Citation Index. The median prevalence among dermatitis patients was 4.3% in Asia, 4% in Europe, and 6.2% in North America. A widespread increase in the prevalence of PPD sensitization was observed among Asian dermatitis patients. In Europe, a decrease in the 1970s was replaced by a plateau with steady, high prevalences ranging between 2% and 6%. The prevalence remained high in North America, although a decreasing tendency was observed. Contact allergy to PPD is an important health issue for both women and men. More stringent regulation and enforcement are required as public health measures to reduce the burden of disease that exposure to PPD has brought to populations. [source]


    Skin-sensitizing and irritant properties of propylene glycol

    CONTACT DERMATITIS, Issue 5 2005
    Data analysis of a multicentre surveillance network (IVDK) and review of the literature
    In the several publications reviewed in this article, propylene glycol (PG; 1,2-propylene glycol) is described as a very weak contact sensitizer, if a sensitizer at all. However, particular exposures to PG-containing products might be associated with an elevated risk of sensitization. To identify such exposures, we analysed patch test data of 45 138 patients who had been tested with 20% PG in water between 1992 and 2002. Of these, 1044 patients (2.3%) tested positive, 1083 (2.4%) showed a doubtful, follicular or erythematous reaction, and 271 (0.6%) showed explicit irritant reactions. This profile of patch test reactions is indicative of a slightly irritant preparation, and thus many of the 'weak positive' reactions must probably be interpreted as false positives. No private or occupational exposures associated with an increased risk of PG sensitization were identified, except for lower leg dermatitis. Therefore, according to our patch test data, PG seems to exhibit very low sensitization potential, and the risk of sensitization to PG on uncompromised skin seems to be very low. [source]


    Strong irritants masquerading as skin allergens: the case of benzalkonium chloride

    CONTACT DERMATITIS, Issue 4 2004
    David A. Basketter
    Chemicals may pose a number of hazards to human health, including the ability to cause skin irritation and contact allergy. Identification and characterization of these properties should fall within predictive toxicology, but information derived from human exposure, including clinical experience, is also of importance. In this context, it is of interest to review the case of benzalkonium chloride, a cationic surfactant. This chemical is a well-known skin irritant, but on occasions it has also been reported to have allergenic properties, typically on the basis of positive diagnostic patch test data. Because the accumulated knowledge concerning the properties of a chemical is employed as the basis for its regulatory classification (e.g. in Europe), as well as for informing the clinical community with respect to the diagnosis of irritant versus allergic contact dermatitis (ACD), it is important to distinguish properly those chemicals that are simply irritants from those that are both irritant and allergenic on skin. A review of the information on benzalkonium chloride confirms that it is a significant skin irritant. However, both predictive test results and clinical data lead to the conclusion that benzalkonium chloride is, at most, an extremely rare allergen, except perhaps in the eye, with many supposed cases of ACD likely to arise from the misinterpretation of patch test data. As a consequence, this substance should not normally be regarded as, or classified as, a significant skin sensitizer. [source]


    One hundred males with Asperger syndrome: a clinical study of background and associated factors

    DEVELOPMENTAL MEDICINE & CHILD NEUROLOGY, Issue 10 2004
    Mats Cederlund MD
    The objective of this study was to investigate the background and associated factors in a representative group of young males with Asperger syndrome (AS) presenting at a specialized autism clinic. One hundred males aged 5 years 6 months to 24 years 6 months, with a mean age of 11 years 4 months (SD 3y 10mo), who had a clinical diagnosis of AS were included in the study. An in-depth review of their medical records and neuropsychological test data was performed. There was a high rate (51%) of non-verbal learning disability (defined as Verbal IQ more than 15 points higher than Performance IQ), but otherwise there was little or no support for the notion of right-hemisphere brain dysfunction being at the core of the syndrome. There was a very high rate of close relatives with autism spectrum problems, but also high rates of prenatal and perinatal problems, including prematurity and postmaturity. In comparison with general population data, those with AS very often had a combination of genetic and prenatal and perinatal risk factors. Non-verbal learning disability test results applied in about half the group. There was a subgroup of individuals with AS who had macrocephalus. However, there was no support for an association of AS with low body mass index. [source]


    Identification of structural and soil properties from vibration tests of the Hualien containment model

    EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 1 2005
    J. Enrique Luco
    Abstract Measurements of the response of the ¼-scale reinforced concrete Hualien (Taiwan) containment model obtained during forced vibration tests are used to identify some of the characteristics of the superstructure and the soil. In particular, attempts are made to determine the fixed-base modal frequencies, modal damping ratios, modal masses and participation factors associated with translation and rocking of the base. The shell superstructure appears to be softer than could have been predicted on the basis of the given geometry and of test data for the properties of concrete. Estimates of the shear-wave velocity and damping ratio in the top layer of soil are obtained by matching the observed and theoretical system frequency and peak amplitude of the response at the top of the structure. The resulting models for the superstructure and the soil lead to theoretical results for the displacement and rotations at the base and top of the structure which closely match the observed response. Copyright © 2004 John Wiley & Sons, Ltd. [source]


    Using Dimensionality-Based DIF Analyses to Identify and Interpret Constructs That Elicit Group Differences

    EDUCATIONAL MEASUREMENT: ISSUES AND PRACTICE, Issue 1 2005
    Mark J. Gierl
    In this paper I describe and illustrate the Roussos-Stout (1996) multidimensionality-based DIF analysis paradigm, with emphasis on its implication for the selection of a matching and studied subtest for DIF analyses. Standard DIF practice encourages an exploratory search for matching subtest items based on purely statistical criteria, such as a failure to display DIF. By contrast, the multidimensional DIF paradigm emphasizes a substantively-informed selection of items for both the matching and studied subtest based on the dimensions suspected of underlying the test data. Using two examples, I demonstrate that these two approaches lead to different interpretations about the occurrence of DIF in a test. It is argued that selecting a matching and studied subtest, as identified using the DIF analysis paradigm, can lead to a more informed understanding of why DIF occurs. [source]
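
    The matched-subtest logic can be made concrete with a toy calculation (simulated data; this is a simplified stand-in for the Roussos-Stout/SIBTEST machinery, not the author's procedure): examinees are stratified on a matching subtest score, and the studied item's performance is compared between reference and focal groups within each stratum.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2000
group = rng.integers(0, 2, n)            # 0 = reference, 1 = focal
theta = rng.normal(0.0, 1.0, n)          # primary ability

# Matching subtest: 20 items driven by the primary dimension only.
p_match = 1.0 / (1.0 + np.exp(-theta[:, None]))
match = (rng.random((n, 20)) < p_match).sum(axis=1)

# Studied item also loads on a secondary dimension that differs by group -> DIF.
eta = theta + 0.5 * group
item = (rng.random(n) < 1.0 / (1.0 + np.exp(-(eta - 0.2)))).astype(int)

diffs, weights = [], []
for k in range(21):                      # stratify on the matching score
    sel = match == k
    ref, foc = item[sel & (group == 0)], item[sel & (group == 1)]
    if len(ref) > 5 and len(foc) > 5:
        diffs.append(foc.mean() - ref.mean())
        weights.append(sel.sum())

print(np.average(diffs, weights=weights))  # > 0: item favours the focal group
```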


    Using a Geographic Information System to identify areas with potential for off-target pesticide exposure

    ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 8 2006
    Thomas G. Pfleeger
    Abstract In many countries, numerous tests are required as part of the risk assessment process before chemical registration to protect human health and the environment from unintended effects of chemical releases. Most of these tests are not based on ecological or environmental relevance but, rather, on consistent performance in the laboratory. A conceptual approach based on Geographic Information System (GIS) technology has been developed to identify areas that are vulnerable to nontarget chemical exposure. This GIS-based approach uses wind speed, frequency of those winds, pesticide application rates, and spatial location of agricultural crops to identify areas with the highest potential for pesticide exposure. A test scenario based on an incident in Idaho (USA) was used to identify the relative magnitude of risk from off-target movement of herbicides to plants in the conterminous United States. This analysis indicated that the western portion of the Corn Belt, the central California valley, southeastern Washington, the Willamette Valley of Oregon, and agricultural areas bordering the Great Lakes are among those areas in the United States that appear to have the greatest potential for off-target movement of herbicides via drift. Agricultural areas such as the Mississippi River Valley and the southeastern United States appear to have less potential, possibly due to lower average wind speeds. Ecological risk assessments developed for pesticide registration would be improved by using response data from species common to high-risk areas instead of extrapolating test data from species unrelated to those areas with the highest potential for exposure. [source]
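
    The layer-combination idea reduces to a raster overlay; a minimal NumPy sketch with synthetic grids (the study itself used measured wind, application-rate, and crop layers in a GIS) is:

```python
import numpy as np

rng = np.random.default_rng(6)
shape = (100, 100)                           # toy raster grid
wind_speed = rng.uniform(0.0, 10.0, shape)   # m/s
wind_freq = rng.uniform(0.0, 1.0, shape)     # fraction of time wind blows off-field
app_rate = rng.uniform(0.0, 3.0, shape)      # pesticide application, kg/ha
crop_mask = rng.random(shape) < 0.4          # cells in agricultural production

# Relative exposure potential: product of normalized layers, masked to cropland.
risk = (wind_speed / 10.0) * wind_freq * (app_rate / 3.0) * crop_mask
hotspots = np.argwhere(risk > np.quantile(risk[crop_mask], 0.95))
print(f"{len(hotspots)} cells flagged as high off-target potential")
```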


    From organisms to populations: Modeling aquatic toxicity data across two levels of biological organization

    ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 2 2006
    Sandy Raimondo
    Abstract A critical step in estimating the ecological effects of a toxicant is extrapolating organism-level response data across higher levels of biological organization. In the present study, the organism-to-population link is made for the mysid, Americamysis bahia, exposed to a range of concentrations of six toxicants. Organism-level responses observed were categorized as no effect, delayed reproduction, reduced overall reproduction, or both reduced overall reproduction and survival. Population multiplication rates for each toxicant concentration were obtained from matrix models developed from organism-level endpoints and placed into the four categories of organism-level responses. Rates within each category were compared with growth rates modeled for control populations. Population multiplication rates were significantly less than control growth rates only for concentrations at which overall reproduction and both reproduction and survival were significantly less than the control values on the organism level. Decomposition analysis of the significant population-level effects identified reduced reproduction as the primary contributor to a reduced population multiplication rate at all sublethal concentrations and most lethal concentrations. Mortality was the primary contributor to reduced population growth rate only when survival was less than 25% of control survival. These results suggest the importance of altered reproduction in population-level risk assessment and emphasize the need for complete life-cycle test data to make an explicit link between the organism and population levels. [source]
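
    For readers unfamiliar with matrix models: the population multiplication rate is the dominant eigenvalue of the stage-projection matrix. A minimal sketch with an invented three-stage matrix (not the paper's mysid parameterization):

```python
import numpy as np

# Invented stage-projection matrix (juvenile, subadult, adult); entries combine
# stage-specific survival and fecundity taken from organism-level endpoints.
A = np.array([
    [0.0, 0.0, 4.0],   # adult fecundity
    [0.5, 0.0, 0.0],   # juvenile -> subadult survival
    [0.0, 0.6, 0.7],   # subadult -> adult survival, adult persistence
])

# Dominant eigenvalue (spectral radius) = population multiplication rate.
lam = max(abs(np.linalg.eigvals(A)))
print(f"population multiplication rate = {lam:.3f}")  # > 1 means the population grows
```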


    Bacterial energetics, stoichiometry, and kinetic modeling of 2,4-Dinitrotoluene biodegradation in a batch respirometer

    ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 12 2004
    Chunlong Zhang
    Abstract A stoichiometric equation and kinetic model were developed and validated using experimental data from batch respirometer studies on the biodegradation of 2,4-dinitrotoluene (DNT). The stoichiometric equation integrates bacterial energetics and is revised from that in a previous study by including the mass balance of phosphorus (P) in the biomass. Stoichiometric results on O2 consumption, CO2 evolution, and nitrite evolution are in good agreement with respirometer data. However, the optimal P requirement is significantly higher than the stoichiometrically derived P, implying potentially limited bioavailability of P and the need for buffering capacity in the media to mitigate the adverse pH effect for optimal growth of DNT-degrading bacteria. An array of models was evaluated to fit the O2/CO2 data acquired experimentally and the DNT depletion data calculated from derived stoichiometric coefficients and cell yield. The deterministic, integrated Monod model provided a good fit to the test data on DNT depletion, and the Monod model parameters (Ks, X0, μmax, and Y) were estimated by nonlinear regression. Further analyses with an equilibrium model (MINTEQ) indicate the interrelated nature of medium chemical compositions in controlling the rate and extent of DNT biodegradation. Results from the present batch respirometer study help to unravel some key factors in controlling DNT biodegradation in complex remediation systems, in particular the interactions between acidogenic DNT bacteria and various parameters, including pH and P, the latter of which could serve as a nutrient, a buffer, and a controlling factor on the bioavailable fractions of minerals (Ca, Fe, Zn, and Mo) in the medium. [source]
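
    The kind of nonlinear-regression fit described can be sketched as follows (synthetic data; the actual study fitted respirometric O2/CO2 data and DNT depletion derived from the stoichiometry): integrate the Monod equations and estimate (μmax, Ks, Y) together with hypothetical initial conditions using SciPy.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import curve_fit

# Monod kinetics for substrate S (DNT) and biomass X:
#   dS/dt = -(mu_max / Y) * S / (Ks + S) * X,   dX/dt = mu_max * S / (Ks + S) * X
def monod(y, t, mu_max, Ks, Y):
    S, X = y
    growth = mu_max * S / (Ks + S) * X
    return [-growth / Y, growth]

def S_model(t, mu_max, Ks, Y, S0, X0):
    return odeint(monod, [S0, X0], t, args=(mu_max, Ks, Y))[:, 0]

t = np.linspace(0.0, 48.0, 13)                    # h
S_obs = S_model(t, 0.25, 8.0, 0.4, 50.0, 1.0)     # synthetic depletion curve
S_obs = S_obs + np.random.default_rng(1).normal(0.0, 0.5, t.size)

popt, _ = curve_fit(S_model, t, S_obs, p0=[0.2, 5.0, 0.5, 45.0, 2.0],
                    bounds=(1e-6, [2.0, 50.0, 1.0, 100.0, 10.0]))
print(dict(zip(["mu_max", "Ks", "Y", "S0", "X0"], np.round(popt, 3))))
```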


    A strategy to reduce the numbers of fish used in acute ecotoxicity testing of pharmaceuticals

    ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 12 2003
    Thomas H. Hutchinson
    Abstract The pharmaceutical industry gives high priority to animal welfare in the process of drug discovery and safety assessment. In the context of environmental assessments of active pharmaceutical ingredients (APIs), existing U.S. Food and Drug Administration and draft European regulations may require testing of APIs for acute ecotoxicity to algae, daphnids, and fish (base-set ecotoxicity data used to derive the predicted no-effect concentration [PNECwater] from the most sensitive of three species). Subject to regulatory approval, it is proposed that testing can be moved from fish median lethal concentration (LC50) testing (typically using ~42 fish/API) to acute threshold tests using fewer fish (typically 10 fish/API). To support this strategy, we have collated base-set ecotoxicity data from regulatory studies of 91 APIs (names coded for commercial reasons). For 73 of the 91 APIs, the algal median effect concentration (EC50) and daphnid EC50 values were lower than or equal to the fish LC50 data. Thus, for approximately 80% of these APIs, algal and daphnid acute EC50 data could have been used in the absence of fish LC50 data to derive PNECwater values. For the other 18 APIs, use of an acute threshold test with a step-down factor of 3.2 is predicted to give comparable PNECwater outcomes. Based on this preliminary scenario of 91 APIs, this approach is predicted to reduce the total number of fish used from 3,822 to 1,025 (~73%). The present study, although preliminary, suggests that the current regulatory requirement for fish LC50 data regarding APIs should be succeeded by fish acute threshold (step-down) test data, thereby achieving significant animal welfare benefits with no loss of data for PNECwater estimates. [source]
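
    The PNECwater arithmetic behind the strategy is simple enough to show directly. All endpoint values below are invented; the assessment factor of 1000 is the conventional base-set factor, and 3.2 is the step-down factor cited above.

```python
# Base-set PNEC: most sensitive of the three acute endpoints / assessment factor.
algal_EC50 = 1.2        # mg/L (hypothetical)
daphnid_EC50 = 0.8      # mg/L (hypothetical)
fish_threshold = 2.5    # mg/L, from an acute threshold test (hypothetical)

pnec_water = min(algal_EC50, daphnid_EC50, fish_threshold) / 1000.0
print(f"PNEC_water = {pnec_water * 1000:.2f} ug/L")

# A step-down test does not pinpoint the fish LC50; it brackets it within a
# factor of 3.2 while using far fewer fish.
lc50_upper_bound = fish_threshold * 3.2
print(f"fish LC50 lies between {fish_threshold} and {lc50_upper_bound:.1f} mg/L")
```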


    A new method for assessing high-temperature crack growth

    FATIGUE & FRACTURE OF ENGINEERING MATERIALS AND STRUCTURES, Issue 9 2005
    A. J. FOOKES
    ABSTRACT Experimental creep crack growth (CCG) test data are obtained by following standards that characterize CCG rates using the C* parameter. Such data are then used in high-temperature failure assessment procedures. An alternative approach to defect assessment at high temperature is an extension of the R6 failure assessment diagram (FAD). At high temperature, creep toughness, K^c_mat, can be estimated from CCG tests and replaces low-temperature toughness in R6. This approach has the advantage that it is not necessary to establish a creep fracture regime, such as small-scale, primary or widespread creep. Also, a new strain-based FAD has been developed, potentially allowing variations of stress and temperature to be accommodated. In this paper, the results of a series of crack growth tests performed on ex-service 316H stainless steel at 550 °C are examined in the light of the limitations imposed by ASTM for CCG testing. The results are then explored in terms of toughness and presented in FADs. [source]


    Interaction equations for multiaxial fatigue assessment of welded structures

    FATIGUE & FRACTURE OF ENGINEERING MATERIALS AND STRUCTURES, Issue 11 2004
    M. BÄCKSTRÖM
    ABSTRACT Multiaxial fatigue data from 233 welded test specimens taken from eight different studies have been evaluated based on three published interaction equations for normal and shear stress. The interaction equations were obtained from SFS 2378, Eurocode 3 and International Institute of Welding (IIW) recommendations. Fatigue classes for normal and shear stress were obtained directly from the design guidance documents. Additionally, mean fatigue strengths were determined by regression analysis of bending-only and torsion-only data for different specimen types. In some cases, the S–N slopes assumed by the different standards were not appropriate for the test data. Specimens that showed significantly different cracking locations or cracking mode between bending and torsion were not easily correlated by the interaction equations. Interaction equations work best in cases where both the normal stress and the shear stress tend to produce crack initiation and growth in the same location and in the same direction. The use of a damage summation of 0.5 for non-proportional loading as recommended by IIW was consistent with experimental observations for tube-to-plate specimens. Other codes used a damage sum of unity. [source]
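
    As an illustration of how such an interaction equation enters a design check, consider the sketch below. The quadratic form and the damage sum of 0.5 for non-proportional loading mirror the IIW rule cited above; the fatigue classes and stress ranges are invented.

```python
def interaction_sum(d_sigma, d_tau, fat_normal, fat_shear):
    """Quadratic normal/shear interaction sum for a welded detail."""
    return (d_sigma / fat_normal) ** 2 + (d_tau / fat_shear) ** 2

d_sigma, d_tau = 60.0, 45.0          # applied stress ranges, MPa (hypothetical)
D = interaction_sum(d_sigma, d_tau, fat_normal=80.0, fat_shear=100.0)

limit = 0.5                          # IIW damage sum for non-proportional loading
print(f"D = {D:.2f} -> {'OK' if D <= limit else 'fails the check'}")
```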


    Thermal properties of gypsum plasterboard at high temperatures

    FIRE AND MATERIALS, Issue 1 2002
    Geoff Thomas
    Light timber frame wall and floor assemblies typically use gypsum-based boards as a lining to provide fire resistance. In order to model the thermal behaviour of such assemblies, the thermo-physical properties of gypsum plasterboard must be determined. The relevant literature and the chemistry of the two consecutive endothermic dehydration reactions that gypsum undergoes when heated are reviewed. The values determined for the thermo-physical properties are modified to create smooth enthalpy and thermal conductivity curves suitable for input into a finite element heat transfer model. These values are calibrated within a reasonable range and then validated using furnace and fire test data. The type of plasterboard used in these tests is an engineered product similar to the North American type C board. The temperature at which the second dehydration reaction occurs is altered to be consistent with later research, with little apparent effect on the comparison with test results. Values for specific heat, mass loss rates and thermal conductivity for gypsum plasterboard that are suitable for use in finite element heat transfer modelling of light timber frame wall and floor assemblies are recommended. Copyright © 2002 John Wiley & Sons, Ltd. [source]


    Modelling the basin-scale demography of Calanus finmarchicus in the north-east Atlantic

    FISHERIES OCEANOGRAPHY, Issue 5 2005
    DOUGLAS C. SPEIRS
    Abstract In this paper, we report on a coupled physical–biological model describing the spatio-temporal distribution of Calanus finmarchicus over an area of the North Atlantic and Norwegian Sea from 56°N, 30°W to 72°N, 20°E. The model, which explicitly represents all the life-history stages, is implemented in a highly efficient discrete space–time format which permits wide-ranging dynamic exploration and parameter optimization. The underlying hydrodynamic driving functions come from the Hamburg Shelf-Ocean Model (HAMSOM). The spatio-temporal distribution of resources powering development and reproduction is inferred from SeaWiFS sea-surface colour observations. We confront the model with distributional data inferred from continuous plankton recorder observations, overwintering distribution data from a variety of EU, UK national and Canadian programmes which were collated as part of the Trans-Atlantic Study of Calanus (TASC) programme, and high-frequency stage-resolved point time-series obtained as part of the TASC programme. We test two competing hypotheses concerning the control of awakening from diapause and conclude that only a mechanism with characteristics similar to photoperiodic control can explain the test data. [source]


    Winter diatom blooms in a regulated river in South Korea: explanations based on evolutionary computation

    FRESHWATER BIOLOGY, Issue 10 2007
    DONG-KYUN KIM
    Summary 1. An ecological model was developed using genetic programming (GP) to predict the time-series dynamics of the diatom, Stephanodiscus hantzschii, for the lower Nakdong River, South Korea. Eight years of weekly data showed the river to be hypertrophic (chl. a, 45.1 ± 4.19 μg L−1, mean ± SE, n = 427), and S. hantzschii annually formed blooms during the winter to spring flow period (late November to March). 2. A simple non-linear equation was created to produce a 3-day sequential forecast of the species biovolume, by means of time series optimization genetic programming (TSOGP). Training data were used in conjunction with a GP algorithm utilizing 7 years of limnological variables (1995–2001). The model was validated by comparing its output with measurements for a specific year with severe blooms (1994). The model accurately predicted timing of the blooms although it slightly underestimated biovolume (training r² = 0.70, test r² = 0.78). The model consisted of the following variables: dam discharge and storage, water temperature, Secchi transparency, dissolved oxygen (DO), pH, evaporation and silica concentration. 3. The application of a five-way cross-validation test suggested that GP was capable of developing models whose input variables were similar, although the data are randomly used for training. The similarity of input variable selection was approximately 51% between the best model and the top 20 candidate models out of 150 in total (based on both Root Mean Squared Error and the determination coefficients for the test data). 4. Genetic programming was able to determine the ecological importance of different environmental variables affecting the diatoms. A series of sensitivity analyses showed that water temperature was the most sensitive parameter. In addition, the optimal equation was sensitive to DO, Secchi transparency, dam discharge and silica concentration. The analyses thus identified likely causes of the proliferation of diatoms in 'river-reservoir hybrids' (i.e. rivers which have the characteristics of a reservoir during the dry season). This result provides specific information about the bloom of S. hantzschii in river systems, as well as the applicability of inductive methods, such as evolutionary computation, to river-reservoir hybrid systems. [source]
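
    The two fit statistics quoted (Root Mean Squared Error and the determination coefficient) are computed as follows; the arrays below are invented stand-ins for observed and forecast biovolume on the test data.

```python
import numpy as np

def rmse(y, yhat):
    return np.sqrt(np.mean((y - yhat) ** 2))

def r_squared(y, yhat):
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

y_obs = np.array([0.2, 1.5, 6.0, 9.3, 4.1, 0.8])   # observed biovolume (hypothetical)
y_fc = np.array([0.4, 1.1, 5.2, 8.0, 4.6, 1.0])    # 3-day-ahead forecasts (hypothetical)
print(rmse(y_obs, y_fc), r_squared(y_obs, y_fc))
```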


    Assigning macroinvertebrate tolerance classifications using generalised additive models

    FRESHWATER BIOLOGY, Issue 5 2004
    Lester L. Yuan
    Summary 1. Macroinvertebrates are frequently classified in terms of their tolerance to human disturbance and pollution. These tolerance values have been used effectively to assess the biological condition of running waters. 2. Generalised additive models were used to associate the presence and absence of different macroinvertebrate genera with different environmental gradients. The model results were then used to classify each genus as sensitive, intermediately tolerant or tolerant to different stressor gradients as quantified by total phosphorus concentration, sulphate ion concentration, qualitative habitat score and stream pH. The analytical approach provided a means of estimating stressor-specific tolerance classifications while controlling for covarying, natural environmental gradients. 3. Computed tolerance classifications generally conformed with expectations and provided some capacity for distinguishing between different stressors in test data. [source]
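
    A rough stand-in for this approach (assuming nothing about the authors' implementation) is a spline-basis logistic regression, which behaves like a GAM smoother: fit presence/absence of a genus along a stressor gradient, then bin the genus by its fitted occurrence probability at the stressed end of the gradient.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer

rng = np.random.default_rng(2)
tp = rng.uniform(0.0, 2.0, 500).reshape(-1, 1)     # total phosphorus, mg/L
p_true = 1.0 / (1.0 + np.exp(4.0 * (tp[:, 0] - 0.8)))
presence = (rng.random(500) < p_true).astype(int)  # simulated genus occurrences

gam_like = make_pipeline(SplineTransformer(n_knots=6, degree=3),
                         LogisticRegression(max_iter=1000))
gam_like.fit(tp, presence)

# Classify by fitted occurrence probability at the high end of the gradient.
p_high = gam_like.predict_proba([[1.8]])[0, 1]
label = "tolerant" if p_high > 0.5 else "intermediate" if p_high > 0.1 else "sensitive"
print(f"p(presence | TP=1.8) = {p_high:.3f} -> {label}")
```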


    Using species distribution models to identify suitable areas for biofuel feedstock production

    GCB BIOENERGY, Issue 2 2010
    JASON M. EVANS
    Abstract The 2007 Energy Independence and Security Act mandates a five-fold increase in US biofuel production by 2022. Given this ambitious policy target, there is a need for spatially explicit estimates of landscape suitability for growing biofuel feedstocks. We developed a suitability modeling approach for two major US biofuel crops, corn (Zea mays) and switchgrass (Panicum virgatum), based upon the use of two presence-only species distribution models (SDMs): maximum entropy (Maxent) and support vector machines (SVM). SDMs are commonly used for modeling animal and plant distributions in natural environments, but have rarely been used to develop landscape models for cultivated crops. AUC, Kappa, and correlation measures derived from test data indicate that SVM slightly outperformed Maxent in modeling US corn production, although both models produced significantly accurate results. When compared with results from a mechanistic switchgrass model recently developed by Oak Ridge National Laboratory (ORNL), SVM results showed higher correlation than Maxent results with models fit using county-scale point inputs of switchgrass production derived from expert opinion estimates. However, Maxent results for an alternative switchgrass model developed with point inputs from research trial sites showed higher correlation to the ORNL model than the corresponding results obtained from SVM. Further analysis indicates that both modeling approaches were effective in predicting county-scale increases in corn production from 2006 to 2007, a time period in which US corn production increased by 24%. We conclude that presence-only methods are a powerful first-cut tool for estimating relative land suitability across geographic regions in which candidate biofuel feedstocks can be grown, and may also provide important insight into potential land-use change patterns likely to be associated with increased biofuel demand. [source]
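
    For concreteness, a one-class SVM can stand in for a presence-only SDM; the sketch below (synthetic covariates, not the paper's data or its exact SVM formulation) fits on presence sites only, then evaluates AUC and Kappa against background cells treated as pseudo-absences.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, roc_auc_score
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(3)
presence_env = rng.normal([25.0, 800.0], [2.0, 100.0], (200, 2))   # temp, rainfall
background = rng.uniform([5.0, 100.0], [40.0, 1500.0], (1000, 2))  # study region

svm = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(presence_env)

scores = np.concatenate([svm.decision_function(presence_env),
                         svm.decision_function(background)])
labels = np.concatenate([np.ones(200, int), np.zeros(1000, int)])
print("AUC:  ", roc_auc_score(labels, scores))
print("Kappa:", cohen_kappa_score(labels, (scores > 0).astype(int)))
```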


    A damage mechanics model for power-law creep and earthquake aftershock and foreshock sequences

    GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 1 2000
    Ian G. Main
    It is common practice to refer to three independent stages of creep under static loading conditions in the laboratory: namely transient, steady-state, and accelerating. Here we suggest a simple damage mechanics model for the apparently trimodal behaviour of the strain and event rate dependence, by invoking two local mechanisms of positive and negative feedback applied to constitutive rules for time-dependent subcritical crack growth. In both phases, the individual constitutive rule for measured strain ε takes the form ε(t) = ε0[1 + t/(mτ)]^m, where τ is the ratio of initial crack length to rupture velocity. For a local hardening mechanism (negative feedback), we find that transient creep dominates, with 0 < m < 1. Crack growth in this stage is stable and decelerating. For a local softening mechanism (positive feedback), m < 0, and crack growth is unstable and accelerating. In this case a quasi-static instability criterion ε → ∞ can be defined at a finite failure time, resulting in the localization of damage and the formation of a throughgoing fracture. In the hybrid model, transient creep dominates in the early stages of damage and accelerating creep in the latter stages. At intermediate times the linear superposition of the two mechanisms spontaneously produces an apparent steady-state phase of relatively constant strain rate, with a power-law rheology, as observed in laboratory creep test data. The predicted acoustic emission event rates in the transient and accelerating phases are identical to the modified Omori laws for aftershocks and foreshocks, respectively, and provide a physical meaning for the empirical constants measured. At intermediate times, the event rate tends to a relatively constant background rate. The requirement for a finite event rate at the time of the main shock can be satisfied by modifying the instability criterion to having a finite crack velocity at the dynamic failure time, dx/dt → VR, where VR is the dynamic rupture velocity. The same hybrid model can be modified to account for dynamic loading (constant stress rate) boundary conditions, and predicts the observed loading rate dependence of the breaking strength. The resulting scaling exponents imply systematically more non-linear behaviour for dynamic loading. [source]
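
    The constitutive rule and the superposition argument can be checked numerically; a short sketch with purely illustrative parameters:

```python
import numpy as np

# eps(t) = eps0 * (1 + t/(m*tau))**m.  For 0 < m < 1 crack growth decelerates
# (transient creep); for m < 0 it accelerates, with eps -> infinity at the
# finite failure time t_f = -m*tau.
def strain(t, eps0, m, tau):
    return eps0 * (1.0 + t / (m * tau)) ** m

t = np.linspace(0.0, 0.99, 200)                   # time, normalized so t_f = 1
transient = strain(t, 1.0, m=0.3, tau=1.0)        # hardening: stable
accelerating = strain(t, 1.0, m=-0.5, tau=2.0)    # softening: fails at t = 1

# Linear superposition gives a relatively constant strain rate at
# intermediate times (the apparent steady-state phase).
total = transient + accelerating
rate = np.gradient(total, t)
print(rate[80:120].std() / rate[80:120].mean())   # modest relative variation
```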


    Inversion of time-dependent nuclear well-logging data using neural networks

    GEOPHYSICAL PROSPECTING, Issue 1 2008
    Laura Carmine
    ABSTRACT The purpose of this work was to investigate a new and fast inversion methodology for the prediction of subsurface formation properties such as porosity, salinity and oil saturation, using time-dependent nuclear well logging data. Although the ultimate aim is to apply the technique to real-field data, an initial investigation, as described in this paper, was first required; this has been carried out using simulation results from the time-dependent radiation transport problem within a borehole. Simulated neutron and γ-ray fluxes at two sodium iodide (NaI) detectors, one near and one far from a pulsed neutron source emitting at ~14 MeV, were used for the investigation. A total of 67 energy groups from the BUGLE96 cross section library together with 567 property combinations were employed for the original flux response generation, achieved by solving numerically the time-dependent Boltzmann radiation transport equation in its even parity form. Material property combinations (scenarios) and their correspondent teaching outputs (flux response at detectors) are used to train the Artificial Neural Networks (ANNs) and test data is used to assess the accuracy of the ANNs. The trained networks are then used to produce a surrogate model of the expensive, in terms of computational time and resources, forward model with which a simple inversion method is applied to calculate material properties from the time evolution of flux responses at the two detectors. The inversion technique uses a fast surrogate model comprising 8026 artificial neural networks, which consist of an input layer with three input units (neurons) for porosity, salinity and oil saturation; and two hidden layers and one output neuron representing the scalar photon or neutron flux prediction at the detector. This is the first time this technique has been applied to invert pulsed neutron logging tool information and the results produced are very promising. The next step in the procedure is to apply the methodology to real data. [source]
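
    In miniature, the surrogate-then-invert scheme looks like the following; a toy three-output forward model and a single small network stand in for the radiation-transport code and the 8026 ANNs.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)

def forward(props):
    """Invented smooth forward map: (porosity, salinity, oil) -> 3 flux features."""
    por, sal, oil = props.T
    f1 = np.exp(-2.0 * por) * (1.0 + 0.5 * sal) - 0.8 * oil
    f2 = np.exp(-por) + 0.3 * sal * oil
    f3 = por + sal - oil
    return np.stack([f1, f2, f3], axis=1)

X = rng.uniform(0.0, 1.0, (2000, 3))      # property scenarios
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                   random_state=0).fit(X, forward(X))

# Invert a "measured" flux response by grid search over the surrogate.
target = forward(np.array([[0.3, 0.5, 0.2]]))
grid = np.stack(np.meshgrid(*[np.linspace(0.0, 1.0, 21)] * 3), axis=-1).reshape(-1, 3)
mismatch = ((net.predict(grid) - target) ** 2).sum(axis=1)
print(grid[np.argmin(mismatch)])          # should land near (0.3, 0.5, 0.2)
```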


    Slug Test Analysis to Evaluate Permeability of Compressible Materials

    GROUND WATER, Issue 4 2008
    Hangseok Choi
    The line-fitting methods such as the Hvorslev method and the Bouwer and Rice method provide a rapid and simple means to analyze slug test data for estimating in situ hydraulic conductivity (k) of geologic materials. However, when analyzing a slug test in a relatively compressible geologic formation, these conventional methods may have difficulties fitting a straight line to the semilogarithmic plot of the test data. Data from relatively compressible geologic formations frequently show a concave-upward curvature because of the effect of the compressibility or specific storage (Ss). To take into account the compressibility of geologic formations, a modified line-fitting method is introduced, which expands on Chirlin's (1989) approach to the case of a partially penetrating well with the basic-time-lag fitting method. A case study for a compressible till is made to verify the proposed method by comparing the results from the proposed methods with those obtained using a type-curve method (Kansas Geological Survey method [Hyder et al. 1994]). [source]
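
    For reference, the conventional Hvorslev computation that the proposed method modifies fits a straight line to ln(H/H0) versus time to obtain the basic time lag T0; the geometry and recovery data below are invented, and in a compressible formation the semilog plot would bend concave-upward instead of staying straight.

```python
import numpy as np

t = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])     # s since slug removal
H = np.array([0.81, 0.62, 0.38, 0.15, 0.022, 5e-4])  # normalized head H/H0

slope = np.polyfit(t, np.log(H), 1)[0]    # d ln(H/H0)/dt = -1/T0
T0 = -1.0 / slope                         # basic time lag, s

# Hvorslev formula for a screened interval with Le/R > 8 (illustrative geometry).
r, R, Le = 0.05, 0.05, 2.0                # casing radius, well radius, screen length (m)
K = r**2 * np.log(Le / R) / (2.0 * Le * T0)
print(f"T0 = {T0:.1f} s, K = {K:.2e} m/s")
```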


    Inverse Modeling of Coastal Aquifers Using Tidal Response and Hydraulic Tests

    GROUND WATER, Issue 6 2007
    Andrés Alcolea
    Remediation of contaminated aquifers demands a reliable characterization of hydraulic connectivity patterns. Hydraulic diffusivity is possibly the best indicator of connectivity. It can be derived using the tidal response method (TRM), which is based on fitting observations to a closed-form solution. Unfortunately, the conventional TRM assumes homogeneity. The objective of this study was to overcome this limitation and use tidal response to identify preferential flowpaths. Additionally, the procedure requires joint inversion with hydraulic test data. These provide further information on connectivity and are needed to resolve diffusivity into transmissivity and storage coefficient. Spatial variability is characterized using the regularized pilot points method. Actual application may be complicated by the need to filter tidal effects from the response to pumping and by the need to deal with different types of data, which we have addressed using maximum likelihood methods. Application to a contaminated artificial coastal fill leads to flowpaths that are consistent with the materials used during construction and to solute transport predictions that compare well with observations. We conclude that tidal response can be used to identify connectivity patterns. As such, it should be useful when designing measures to control sea water intrusion. [source]


    Estimating Aquifer Transmissivity from Specific Capacity Using MATLAB

    GROUND WATER, Issue 4 2005
    Stephen G. McLin
    Historically, specific capacity information has been used to calculate aquifer transmissivity when pumping test data are unavailable. This paper presents a simple computer program written in the MATLAB programming language that estimates transmissivity from specific capacity data while correcting for aquifer partial penetration and well efficiency. The program graphically plots transmissivity as a function of these factors so that the user can visually estimate their relative importance in a particular application. The program is compatible with any computer operating system running MATLAB, including Windows, Macintosh OS, Linux, and Unix. Two simple examples illustrate program usage. [source]
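
    The underlying computation is a fixed-point iteration on the Jacob approximation, T = (Q/s)/(4π) · ln(2.25·T·t/(r²·S)); a language-neutral sketch follows (the paper's program is MATLAB, and its partial-penetration and well-efficiency corrections are omitted here).

```python
import numpy as np

Q_over_s = 2.0e-3    # specific capacity Q/s, m^2/s
t = 86400.0          # pumping duration, s (1 day)
r = 0.1              # well radius, m
S = 1.0e-4           # storativity (assumed)

T = 1.0e-3           # initial guess, m^2/s
for _ in range(50):  # fixed-point iteration converges in a few steps
    T = (Q_over_s / (4.0 * np.pi)) * np.log(2.25 * T * t / (r**2 * S))
print(f"T = {T:.2e} m^2/s")
```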


    Notice of Plagiarism: A Single Recovery Type Curve from Theis' Exact Solution

    GROUND WATER, Issue 1 2004
    Article first published online: 9 OCT 200
    Shortly after the September-October 2003 issue of the journal was mailed, three readers called our attention to similarities between the paper by N. Samani and M. Pasandi (2003, "A single recovery type curve from Theis' exact solution," Ground Water 41, no. 5: 602-607) and a paper published in 1980 by Ram G. Agarwal. Agarwal's paper, "A new method to account for producing time effects when drawdown type curves are used to analyze pressure buildup and other test data," was published by the Society for Petroleum Engineers (1980, in Society of Petroleum Engineers 55th Annual Fall Technical Conference, September 21-24, Dallas, Texas: SPE Paper 9289). An investigation by the journal verified that the approach and some of the wording used in the two papers are identical. Dr. Samani and Mr. Pasandi acknowledge the similarity and offer an explanation and apology. [source]


    A Numerical Model and Spreadsheet Interface for Pumping Test Analysis

    GROUND WATER, Issue 4 2001
    Gary S. Johnson
    Curve-matching techniques have been the standard method of aquifer test analysis for several decades. A variety of techniques provide the capability of evaluating test data from confined, unconfined, leaky aquitard, and other conditions. Each technique, however, is accompanied by a set of assumptions, and evaluation of a combination of conditions can be complicated or impossible due to intractable mathematics or nonuniqueness of the solution. Numerical modeling of pumping tests provides two major advantages: (1) the user can choose which properties to calibrate and what assumptions to make; and (2) in the calibration process the user gains insight into the conceptual model of the flow system and uncertainties in the analysis. Routine numerical modeling of pumping tests is now practical due to computer hardware and software advances of the last decade. The RADFLOW model and spreadsheet interface presented in this paper provide an easy-to-use numerical model for estimation of aquifer properties from pumping test data. Layered conceptual models and their properties are evaluated in a trial-and-error estimation procedure. The RADFLOW model can treat most combinations of confined, unconfined, leaky aquitard, partial penetration, and borehole storage conditions. RADFLOW is especially useful in stratified aquifer systems with no identifiable lateral boundaries. It has been verified against several analytical solutions and has been applied in the Snake River Plain Aquifer to develop and test conceptual models and provide estimates of aquifer properties. Because the model assumes axially symmetrical flow, it is limited to representing multiple aquifer layers that are laterally continuous. [source]


    Analytic Determination of Hydrocarbon Transmissivity from Baildown Tests

    GROUND WATER, Issue 1 2000
    David Huntley
    Hydrocarbon baildown tests involve the rapid removal of floating hydrocarbon from an observation or production well, followed by monitoring the rate of recovery of both the oil/air and oil/water interfaces. This test has been used erroneously for several years to calculate the "true thickness" of hydrocarbon in the adjacent formation. More recent analyses of hydrocarbon distribution by Farr et al. (1990), Lenhard and Parker (1990), Huntley et al. (1994), and others have shown that, under vertical equilibrium conditions, there is no thickness exaggeration of hydrocarbon in a monitoring well, though there is a significant volume exaggeration. This body of work can be used to demonstrate that the calculation of a "true hydrocarbon thickness" using a baildown test has no basis in theory. The same body of work, however, also demonstrates that hydrocarbon saturations are typically much less than one, and are often below 0.5. Because the relative permeability decreases as hydrocarbon saturation decreases, the effective conductivity and mobility of the hydrocarbon are much less than those of water, even ignoring the effects of increased viscosity and decreased density. It is important to evaluate this decreased mobility of hydrocarbon due to partial pore saturation, as it has substantial impacts on both risk and remediation. This paper presents two analytic approaches to the analysis of hydrocarbon baildown test results to determine hydrocarbon transmissivity. The first approach is based on a modification of the Bouwer and Rice (1976) analysis of slug withdrawal test data. The second approach is based on a modification of Jacob and Lohman's (1952) constant-drawdown, variable-discharge aquifer test approach. The first approach can be applied only when the effective transmissivity of the screened interval to water is much greater than the effective hydrocarbon transmissivity. When this condition is met, the two approaches give effectively identical results. [source]
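
    A sketch of the first (Bouwer and Rice-style) approach, with invented recovery data and the ln(Re/R) shape factor treated as given rather than computed from the Bouwer and Rice empirical curves:

```python
import numpy as np

t = np.array([60.0, 180.0, 420.0, 900.0])    # s since hydrocarbon removal
y = np.array([0.45, 0.30, 0.14, 0.035])      # residual oil-level drawdown, m

slope = np.polyfit(t, np.log(y), 1)[0]       # d ln(y)/dt = -1/T0
rc, ln_Re_R = 0.05, 2.5                      # casing radius (m), shape factor

# Hydrocarbon transmissivity (m^2/s). Low relative permeability, higher
# viscosity and lower density all make this far smaller than the water
# transmissivity of the same interval.
T_oil = rc**2 * ln_Re_R * (-slope) / 2.0
print(f"T_oil = {T_oil:.2e} m^2/s")
```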


    Identifying rotation and oscillation in surface tension measurement using an oscillating droplet method

    HEAT TRANSFER - ASIAN RESEARCH (FORMERLY HEAT TRANSFER-JAPANESE RESEARCH), Issue 7 2008
    Shumpei Ozawa
    Abstract We proposed a new approach to identify the frequencies of droplet rotation and m=±2 oscillation that degrade the accuracy of surface tension measurement by an oscillating droplet method. Frequencies of droplet rotation and m=±2 oscillation can be identified by a phase unwrapping analysis of the time dependence of the deflection angle for the maximum diameter of the droplet image observed from above. The present method was validated using test data with given frequencies. © 2008 Wiley Periodicals, Inc. Heat Trans Asian Res, 37(7): 421–430, 2008; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/htj.20214 [source]
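
    The phase-unwrapping step can be demonstrated with NumPy: the deflection angle of the maximum diameter is only defined modulo π, so rotation at f Hz shows up as a sawtooth that unwrapping restores to a straight line whose slope gives the frequency (synthetic data below).

```python
import numpy as np

fs, f_rot = 500.0, 7.0                           # frame rate (Hz), true rotation (Hz)
t = np.arange(0.0, 2.0, 1.0 / fs)
angle = np.mod(2.0 * np.pi * f_rot * t, np.pi)   # observed angle, wrapped to [0, pi)

unwrapped = np.unwrap(angle, period=np.pi)       # remove the pi-jumps
f_est = np.polyfit(t, unwrapped, 1)[0] / (2.0 * np.pi)
print(f"estimated rotation frequency = {f_est:.2f} Hz")
```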