Overestimate
Kinds of Overestimate: Selected Abstracts

Techniques to measure the dry aeolian deposition of dust in arid and semi-arid landscapes: a comparative study in West Niger
EARTH SURFACE PROCESSES AND LANDFORMS, Issue 2 2008. Dirk Goossens
Abstract Seven techniques designed to measure the dry aeolian deposition of dust on a desert surface were tested during field experiments in Niger, central-west Africa. Deposition fluxes were measured during eight periods of 3–4 days each. Experimental techniques tested were the MDCO (marble dust collector) method, the Frisbee method, the glass plate method (optical analysis of dust deposited on glass surfaces using particle imaging software), the soil surface method (deposition on a simulated desert floor) and the CAPYR (capteur pyramidal) method. Theoretical techniques tested were the inferential method and the combination method (gradient method extended with a deposition term for coarse dust particles). The results obtained by the MDCO, Frisbee, inferential and combination methods could be directly compared by converting the data to identical standard conditions (deposition on a water surface producing no resuspension). The results obtained by the other methods (glass plate, soil surface, CAPYR) were compared relatively. The study shows that the crude (unconverted) deposition fluxes of the five experimental techniques were similar, while the crude deposition fluxes calculated by the two theoretical techniques were substantially higher, of the order of four to five times as high as for the experimental techniques. Recalculation of the data to identical environmental conditions (the standard water surface) resulted in nearly identical deposition fluxes for the MDCO, Frisbee, inferential and combination techniques, although the latter two still had slightly higher values (but the differences remained small). The measurements illustrate the need to include a grain shape factor in theoretical dust deposition models. Without such a factor, theoretical models overestimate the deposition. The paper also discusses the advantages and disadvantages of the techniques tested. Copyright © 2007 John Wiley & Sons, Ltd. [source]

Estimation of erosion and deposition volumes in a large, gravel-bed, braided river using synoptic remote sensing
EARTH SURFACE PROCESSES AND LANDFORMS, Issue 3 2003. Stuart N. Lane
Abstract System-scale detection of erosion and deposition is crucial in order to assess the transferability of findings from scaled laboratory and small field studies to larger spatial scales. Increasingly, synoptic remote sensing has the potential to provide the necessary data. In this paper, we develop a methodology for channel change detection, coupled to the use of synoptic remote sensing, for erosion and deposition estimation, and apply it to a wide, braided, gravel-bed river. This is based upon construction of digital elevation models (DEMs) using digital photogrammetry, laser altimetry and image processing. DEMs of difference were constructed by subtracting DEM pairs, and a method for propagating error into the DEMs of difference was used under the assumption that each elevation in each surface contains error that is random, independent and Gaussian. Data were acquired for the braided Waimakariri River, South Island, New Zealand. The DEMs had a 1·0 m pixel resolution and covered an area of riverbed that is more than 1 km wide and 3·3 km long. Application of the method showed the need to use survey-specific estimates of point precision, as project design and manufacturer estimates of precision overestimate a priori point quality. This finding aside, the analysis showed that even after propagation of error it was possible to obtain high quality DEMs of difference for process estimation, over a spatial scale that has not previously been achieved. In particular, there was no difference in the ability to detect erosion and deposition. The estimates of volumes of change, despite being downgraded as compared with traditional cross-section survey in terms of point precision, produced more reliable erosion and deposition estimates as a result of the large improvement in spatial density that synoptic methods provide. Copyright © 2003 John Wiley & Sons, Ltd. [source]
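A minimal sketch of the error-propagation step this abstract describes: differencing two co-registered DEM grids whose independent Gaussian elevation errors add in quadrature, and keeping only changes detectable above the noise. The array shapes and the 0.1 m precisions are illustrative, not values from the paper.

```python
import numpy as np

def dem_of_difference(dem_new, dem_old, sigma_new, sigma_old, t=1.96):
    """Subtract DEM pairs and propagate per-cell error.

    Assumes each elevation error is random, independent and Gaussian,
    so the error of the difference is the two errors added in quadrature.
    """
    dod = dem_new - dem_old                      # elevation change (m)
    sigma_dod = np.sqrt(sigma_new**2 + sigma_old**2)
    detectable = np.abs(dod) > t * sigma_dod     # change beyond noise (~95%)
    erosion = np.where(detectable & (dod < 0), dod, 0.0)
    deposition = np.where(detectable & (dod > 0), dod, 0.0)
    cell_area = 1.0 * 1.0                        # 1.0 m pixels, as in the study
    # erosion volume is returned as a negative number
    return erosion.sum() * cell_area, deposition.sum() * cell_area

# toy example: two 3x3 surfaces with 0.1 m point precision
rng = np.random.default_rng(0)
z0 = rng.normal(100.0, 1.0, (3, 3))
z1 = z0 + rng.normal(0.0, 0.3, (3, 3))
print(dem_of_difference(z1, z0, 0.1, 0.1))
```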
Influence of reed stem density on foredune development
EARTH SURFACE PROCESSES AND LANDFORMS, Issue 11 2001. S. M. Arens
Abstract Vegetation density on foredunes exerts an important control on aeolian sediment transport and deposition, and therefore on profile development. In a long-term monitoring field experiment, three plots were planted with regular grids of reed bundles in three different densities: 4, 2 and 1 bundles per m2. This study reports on the differences in profile development under the range of vegetation densities. Topographic profiles were measured between May 1996 and April 1997. Results indicate important differences in profile development for the three reed bundle densities: in the highest density plot a distinct, steep dune developed, while in the lowest density a more gradual and smooth sand ramp was deposited. When the stems had been completely buried, differences in profile evolution vanished. After a second planting of reed stems in January 1997 the process was repeated. In May 1997, all plots had gained a sand volume ranging from 11·5 to 12·3 m3 m−1, indicating that the sediment budget is relatively constant, regardless of the particular profile evolution. The field evidence is compared with simulations of profile development, generated by the foredune development model SAFE. The model successfully reproduces the overall profile development, but in general, the equations used for vegetation–transport interaction overestimate the effect of vegetation. This causes some deviations between field and model results. Several reasons for this are discussed. Based on the experiments reported here, recommendations are given for further research. Copyright © 2001 John Wiley & Sons, Ltd. [source]

Empirical estimate of fundamental frequencies and damping for Italian buildings
EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 8 2009. Maria Rosaria Gallipoli
Abstract The aim of this work is to estimate the fundamental translational frequencies and relative damping of a large number of existing buildings, performing ambient vibration measurements. The first part of the work is devoted to the comparison of the results obtained with microtremor measurements with those obtained from earthquake recordings using four different techniques: horizontal-to-vertical spectral ratio, standard spectral ratio, non-parametric damping analysis (NonPaDAn) and the half bandwidth method. We recorded local earthquakes on a five-floor reinforced concrete building with a pair of accelerometers located on the ground and top floors, and then collected microtremors at the same locations as the accelerometers. The agreement between the results obtained with microtremors and earthquakes has encouraged extending ambient noise measurements to a large number of buildings. We analysed the data with the above-mentioned methods to obtain the two main translational frequencies in orthogonal directions and their relative damping for 80 buildings in the urban areas of Potenza and Senigallia (Italy). The frequencies determined with different techniques are in good agreement. We do not have the same satisfactory results for the estimates of damping: the NonPaDAn provides estimates that are less dispersed and grouped around values that appear to be more realistic. Finally, we have compared the measured frequencies with other experimental results and theoretical models. Our results confirm, as reported by previous authors, that the theoretical period–height relationships overestimate the experimental data. Copyright © 2008 John Wiley & Sons, Ltd. [source]
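A sketch of the half bandwidth (half-power) method named in this abstract, applied to an amplitude spectrum: find the peak frequency f0 and the two frequencies where the amplitude falls to peak/√2, then ξ ≈ (f2 − f1)/(2·f0). The synthetic single-degree-of-freedom spectrum below is illustrative only.

```python
import numpy as np

def half_bandwidth_damping(freq, amp):
    """Fundamental frequency and damping ratio from an amplitude spectrum
    via the half-power bandwidth method: xi ~= (f2 - f1) / (2 * f0)."""
    i0 = np.argmax(amp)
    f0, half = freq[i0], amp[i0] / np.sqrt(2.0)
    left = i0                                   # walk to the left crossing
    while left > 0 and amp[left] > half:
        left -= 1
    right = i0                                  # walk to the right crossing
    while right < len(amp) - 1 and amp[right] > half:
        right += 1
    f1 = np.interp(half, [amp[left], amp[left + 1]], [freq[left], freq[left + 1]])
    f2 = np.interp(half, [amp[right], amp[right - 1]], [freq[right], freq[right - 1]])
    return f0, (f2 - f1) / (2.0 * f0)

# synthetic SDOF amplification curve with f0 = 4 Hz and xi = 0.05
f = np.linspace(0.5, 10, 2000)
beta = f / 4.0
amp = 1.0 / np.sqrt((1 - beta**2) ** 2 + (2 * 0.05 * beta) ** 2)
print(half_bandwidth_damping(f, amp))   # recovers roughly (4.0, 0.05)
```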
Modeling missing binary outcome data in a successful web-based smokeless tobacco cessation program
ADDICTION, Issue 6 2010. Keith Smolkowski
ABSTRACT Aim To examine various methods to impute missing binary outcomes from a web-based tobacco cessation intervention. Design The ChewFree randomized controlled trial used a two-arm design to compare tobacco abstinence at both the 3- and 6-month follow-up for participants randomized to either an enhanced web-based intervention condition or a basic information-only control condition. Setting Internet in the United States and Canada. Participants Secondary analyses focused upon 2523 participants in the ChewFree trial. Measurements Point-prevalence tobacco abstinence measured at 3- and 6-month follow-up. Findings The results of this study confirmed the findings for the original ChewFree trial and highlighted the use of different missing-data approaches to achieve intent-to-treat analyses when confronted with substantial attrition. The use of different imputation methods yielded results that differed in both the size of the estimated treatment effect and the standard errors. Conclusions The choice of imputation model used to analyze missing binary outcome data can substantially affect the size and statistical significance of the treatment effect. Without additional information about the missing cases, imputation models can overestimate the effect of treatment. Multiple imputation methods are recommended, especially those that permit a sensitivity analysis of their impact. [source]
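A toy illustration of why the imputation rule matters for a binary abstinence outcome. The simulated data and the two rules shown (the conservative "missing = smoking" rule versus a crude multiple imputation from observed arm-specific rates) are generic sketches, not the ChewFree analysis.

```python
import numpy as np
rng = np.random.default_rng(1)

n = 2000
treat = rng.integers(0, 2, n)                  # arm indicator
y = rng.binomial(1, 0.20 + 0.10 * treat)       # true abstinence, effect = 0.10
# dropout is more likely among non-abstinent participants (informative)
miss = rng.random(n) < np.where(y == 1, 0.2, 0.5)
obs = ~miss

def effect(y_imp):
    return y_imp[treat == 1].mean() - y_imp[treat == 0].mean()

# rule 1: missing = smoking (conservative intent-to-treat)
y1 = np.where(obs, y, 0)
# rule 2: 20 imputed data sets drawn from observed arm-specific rates
est = []
for _ in range(20):
    y2 = y.astype(float).copy()
    for t in (0, 1):
        rate = y[obs & (treat == t)].mean()
        k = miss & (treat == t)
        y2[k] = rng.binomial(1, rate, k.sum())
    est.append(effect(y2))
print("true: 0.10  missing=smoking:", round(effect(y1), 3),
      " MI:", round(float(np.mean(est)), 3))
```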
The accuracy of regulatory cost estimates: a study of the London congestion charging scheme
ENVIRONMENTAL POLICY AND GOVERNANCE, Issue 2 2007. Chris Sherrington
Abstract This paper considers the accuracy of regulatory cost estimates using the London congestion charging scheme as a case study. In common with other regulations, ex ante estimates of the direct costs of the scheme were produced by the regulator, Transport for London. Reviews of a number of environmental and industrial regulations have shown that ex ante costs tend to exceed the ex post (or outturn) costs. This study finds that while Transport for London moderately overestimated the total costs of the scheme (by 16%), there was a significant overestimate of chargepayer compliance costs (by 64%). The main reasons for this were greater than expected reductions in traffic and unanticipated technological innovation. As the compliance cost is essentially the cost of transacting payment of the charge, these results have wider implications for other similar regulations. One example is the proposed national road user charging in the UK, where it could reasonably be expected that the ex post cost of compliance will again be lower than the ex ante estimate, and that compliance costs will continue to reduce over time. Copyright © 2007 John Wiley & Sons, Ltd and ERP Environment. [source]

Bioenergetic and pharmacokinetic model for exposure of common loon (Gavia immer) chicks to methylmercury
ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 4 2007. William H. Karasov
Abstract A bioenergetics model was used to predict food intake of common loon (Gavia immer) chicks as a function of body mass during development, and a pharmacokinetics model, based on first-order kinetics in a single compartment, was used to predict blood Hg level as a function of food intake rate, food Hg content, body mass, and Hg absorption and elimination. Predictions were tested in captive growing chicks fed trout (Salmo gairdneri) with average MeHg concentrations of 0.02 (control), 0.4, and 1.2 μg/g wet mass (delivered as CH3HgCl). Predicted food intake matched observed intake through 50 d of age but then exceeded observed intake by an amount that grew progressively larger with age, reaching a significant overestimate of 28% by the end of the trial. Respiration in older, nongrowing birds probably was overestimated by using rates measured in younger, growing birds. Close agreement was found between simulations and measured blood Hg, which varied significantly with dietary Hg and age. Although chicks may hatch with different blood Hg levels, their blood level is determined mainly by dietary Hg level beyond approximately two weeks of age. The model also may be useful for predicting Hg levels in adults and in the eggs that they lay, but its accuracy in both chicks and adults needs to be tested in free-living birds. [source]
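A sketch of the first-order, single-compartment blood-Hg model this abstract describes, integrated with a simple Euler step: d(burden)/dt = absorption × intake × food concentration − k × burden. The absorption fraction, elimination rate, blood-mass fraction and growth curves below are placeholders, not the calibrated values from the study.

```python
import numpy as np

def blood_hg(days, intake_g_per_d, body_mass_g, c_food=0.4,
             absorb=0.8, k_elim=0.03, blood_frac=0.08):
    """One-compartment first-order kinetics, daily Euler steps.
    Blood Hg = body burden / blood mass (blood_frac * body mass).
    All Hg parameters here are illustrative placeholders."""
    burden = 0.0                       # micrograms Hg in the compartment
    conc = []
    for d in range(days):
        burden += absorb * intake_g_per_d[d] * c_food - k_elim * burden
        conc.append(burden / (blood_frac * body_mass_g[d]))
    return np.array(conc)              # micrograms Hg per gram blood

# toy growth and intake trajectories for a 100-day chick
t = np.arange(100)
mass = 80 + 3400 / (1 + np.exp(-(t - 30) / 8))   # logistic growth, g
intake = 0.15 * mass                              # food intake, g/day
print(blood_hg(100, intake, mass)[[7, 30, 99]].round(2))
```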
Measured partitioning coefficients for parent and alkyl polycyclic aromatic hydrocarbons in 114 historically contaminated sediments: Part 1. KOC values
ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 11 2006
Abstract Polycyclic aromatic hydrocarbon (PAH) partitioning coefficients between sediment organic carbon and water (KOC) values were determined using 114 historically contaminated and background sediments collected from eight different rural and urban waterways in the northeastern United States. More than 2,100 individual KOC values were measured in quadruplicate for PAHs ranging from two to six rings, along with the first reported KOC values for alkyl PAHs included in the U.S. Environmental Protection Agency's (U.S. EPA) sediment narcosis model for the prediction of PAH toxicity to benthic organisms. Sediment PAH concentrations ranged from 0.2 to 8,600 μg/g (U.S. EPA 16 parent PAHs), but no observable trends in KOC values with concentration were observed for any of the individual PAHs. Literature KOC values that are commonly used for environmental modeling are similar to the lowest measured values for a particular PAH, with actual measured values typically ranging up to two orders of magnitude higher for both background and contaminated sediments. For example, the median log KOC values we determined for naphthalene, pyrene, and benzo[a]pyrene were 4.3, 5.8, and 6.7, respectively, compared to typical literature KOC values for the same PAHs of 2.9, 4.8, and 5.8, respectively. Our results clearly demonstrate that the common practice of using PAH KOC values derived from spiked sediments and modeled values based on n-octanol–water coefficients can greatly overestimate the actual partitioning of PAHs into water from field sediments. [source]

THE MUTATION MATRIX AND THE EVOLUTION OF EVOLVABILITY
EVOLUTION, Issue 4 2007. Adam G. Jones
Evolvability is a key characteristic of any evolving system, and the concept of evolvability serves as a unifying theme in a wide range of disciplines related to evolutionary theory. The field of quantitative genetics provides a framework for the exploration of evolvability with the promise to produce insights of global importance. With respect to the quantitative genetics of biological systems, the parameters most relevant to evolvability are the G-matrix, which describes the standing additive genetic variances and covariances for a suite of traits, and the M-matrix, which describes the effects of new mutations on genetic variances and covariances. A population's immediate response to selection is governed by the G-matrix. However, evolvability is also concerned with the ability of mutational processes to produce adaptive variants, and consequently the M-matrix is a crucial quantitative genetic parameter. Here, we explore the evolution of evolvability by using analytical theory and simulation-based models to examine the evolution of the mutational correlation, rμ, the key parameter determining the nature of genetic constraints imposed by M. The model uses a diploid, sexually reproducing population of finite size experiencing stabilizing selection on a two-trait phenotype. We assume that the mutational correlation is a third quantitative trait determined by multiple additive loci. An individual's value of the mutational correlation trait determines the correlation between pleiotropic effects of new alleles when they arise in that individual. Our results show that the mutational correlation, despite the fact that it is not involved directly in the specification of an individual's fitness, does evolve in response to selection on the bivariate phenotype. The mutational variance exhibits a weak tendency to evolve to produce alignment of the M-matrix with the adaptive landscape, but is prone to erratic fluctuations as a consequence of genetic drift. The interpretation of this result is that the evolvability of the population is capable of a response to selection, and whether this response results in an increase or decrease in evolvability depends on the way in which the bivariate phenotypic optimum is expected to move. Interestingly, both analytical and simulation results show that the mutational correlation experiences disruptive selection, with local fitness maxima at −1 and +1. Genetic drift counteracts the tendency for the mutational correlation to persist at these extreme values, however. Our results also show that an evolving M-matrix tends to increase stability of the G-matrix under most circumstances. Previous studies of G-matrix stability, which assume nonevolving M-matrices, consequently may overestimate the level of instability of G relative to what might be expected in natural systems. Overall, our results indicate that evolvability can evolve in natural systems in a way that tends to result in alignment of the G-matrix, the M-matrix, and the adaptive landscape, and that such evolution tends to stabilize the G-matrix over evolutionary time. [source]
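A minimal numerical illustration of the M-matrix idea in this abstract: pleiotropic mutational effects drawn with correlation rμ, and the resulting mutational covariance matrix. The variances and the rμ value are arbitrary, and this is only the sampling step, not the full individual-based model of the paper.

```python
import numpy as np
rng = np.random.default_rng(2)

def m_matrix(sigma1, sigma2, r_mu):
    """Mutational covariance matrix for a two-trait phenotype."""
    cov = r_mu * sigma1 * sigma2
    return np.array([[sigma1**2, cov], [cov, sigma2**2]])

M = m_matrix(0.05, 0.05, 0.9)
effects = rng.multivariate_normal([0.0, 0.0], M, size=5000)
print(np.corrcoef(effects.T)[0, 1])        # recovers r_mu ~ 0.9
# leading eigenvector of M: the direction mutation most readily supplies
w, v = np.linalg.eigh(M)
print(v[:, np.argmax(w)])
```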
An evaluation of the self-heating hazards of cerium(IV) nitrate-treated towels using differential scanning calorimetry and thermogravimetric analysis
FIRE AND MATERIALS, Issue 6 2007. J. R. Hartman
Abstract This study measured the Arrhenius kinetic parameters and heat of reaction using thermogravimetric analysis (TGA) and differential scanning calorimetry (DSC) for the combustion of untreated towels and towels treated with cerium(IV) nitrate. These parameters were used to calculate the self-heating parameters, M and P (Self-heating: Evaluating and Controlling the Hazard. Her Majesty's Stationery Office: London, 1984), and the critical pile sizes of the towels at several temperatures. The results from the TGA/DSC experiments support the conclusions by Beyler et al. (Fire and Materials 2005; 30:223–240) that the cerium(IV) nitrate treatment of towels significantly enhances the ignitability of the towels but that self-heating is not a hazard for normal temperature storage scenarios other than bulk storage. It was found that the kinetic reaction data measured by TGA and DSC are only useful for predicting the specific reaction hazard for materials stored above 100°C. A comparison of the self-heating parameters measured by oven and kinetic reaction data methods for a number of materials suggests that the kinetic reaction data overestimate the critical pile size at temperatures below 100°C. In addition, it was found that the kinetic reaction data measured by TGA can be used to determine the relative self-heating hazards for modified materials. TGA testing with towels saturated with a 0.5 N solution of cerium(IV) nitrate (Ce(NO3)4) in a 2.0 N solution of nitric acid, a 2.0 N solution of sodium nitrate in 2.0 N nitric acid and simple 2.0 N nitric acid, showed that the sodium nitrate and nitric acid treated samples reacted at the same temperatures as the untreated towels, while cerium(IV) nitrate markedly reduced the reaction temperature. These tests clearly point to the importance of the cerium(IV) ion as an oxidizing agent. Thus, the TGA testing provided in a matter of days insights that would have required months of oven testing. Copyright © 2006 John Wiley & Sons, Ltd. [source]
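The abstract does not spell out how critical pile sizes follow from the Arrhenius parameters; one standard route is Frank-Kamenetskii self-heating theory, sketched below for a slab geometry. Every property value here is a placeholder, not the measured towel kinetics, and the lumped QA term is an assumption of this sketch.

```python
import numpy as np

R = 8.314  # J/(mol K)

def critical_half_thickness(T_amb, E, QA, rho, k_cond, delta_c=0.878):
    """Frank-Kamenetskii critical half-thickness (m) of a slab:
    delta_c = (E/(R*T^2)) * (rho*QA*L^2/k_cond) * exp(-E/(R*T)),
    solved for L. QA lumps heat of reaction times the pre-exponential
    factor (W/kg). Placeholder properties, not the paper's values."""
    term = delta_c * k_cond * R * T_amb**2 / (rho * QA * E)
    return np.sqrt(term * np.exp(E / (R * T_amb)))

E = 1.1e5        # activation energy, J/mol (illustrative)
QA = 5.0e10      # heat release * pre-exponential, W/kg (illustrative)
for T in (333.0, 373.0, 433.0, 473.0):
    L = critical_half_thickness(T, E, QA, rho=300.0, k_cond=0.08)
    print(f"{T - 273:5.0f} C   critical half-thickness ~ {L:7.2f} m")
# critical size shrinks rapidly with storage temperature, which is why
# sub-100 C storage behaves so differently from hot bulk storage
```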
Public Sector Decentralisation: Measurement Concepts and Recent International Trends
FISCAL STUDIES, Issue 3 2005. Dan Stegarescu
Abstract This paper deals with the problems encountered in defining and measuring the degree of fiscal decentralisation. Drawing on a recent analytical framework of the OECD, different measures of tax autonomy and revenue decentralisation are presented which consider the tax-raising powers of sub-central governments. Taking account of changes in the assignment of decision-making competencies over the course of time, new time series of annual data on the degree of fiscal decentralisation are provided for 23 OECD countries over the period between 1965 and 2001. It is shown that common measures usually employed tend to overestimate the extent of fiscal decentralisation considerably. Evidence is also provided of increasing fiscal decentralisation in a majority of OECD countries during the last three decades. [source]

Tissue Oxygenation Does Not Predict Central Venous Oxygenation in Emergency Department Patients With Severe Sepsis and Septic Shock
ACADEMIC EMERGENCY MEDICINE, Issue 4 2010. Anthony M. Napoli MD
Abstract Objectives: This study sought to determine whether tissue oxygenation (StO2) could be used as a surrogate for central venous oxygenation (ScVO2) in early goal-directed therapy (EGDT). Methods: The study enrolled a prospective convenience sample of patients aged ≥18 years with sepsis and systolic blood pressure <90 mm Hg after 2 L of normal saline, or lactate >4 mmol, who received a continuous central venous oximetry catheter. StO2 and ScVO2 were measured at 15-minute intervals. Data were analyzed using a random coefficients model, correlations, and Bland–Altman plots. Results: There were 284 measurements in 40 patients. While a statistically significant relationship existed between StO2 and ScVO2 (F(1,37) = 10.23, p = 0.002), StO2 appears to systematically overestimate at lower ScVO2 and underestimate at higher ScVO2. This was reflected in the fixed effect slope of 0.49 (95% confidence interval [CI] = 0.266 to 0.720) and intercept of 34 (95% CI = 14.681 to 50.830), which were significantly different from 1 and 0, respectively. The initial point correlation (r = 0.5) was fair, but there was poor overall agreement (bias = 4.3, limits of agreement = −20.8 to 29.4). Conclusions: Correlation between StO2 and ScVO2 was fair. The two measures trend in the same direction, but clinical use of StO2 in lieu of ScVO2 is unsubstantiated due to large and systematic biases. However, these biases may reflect real physiologic states. Further research may investigate whether these measures could be used in concert as prognostic indicators. ACADEMIC EMERGENCY MEDICINE 2010; 17:349–352 © 2010 by the Society for Academic Emergency Medicine [source]
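A sketch of the agreement statistics this abstract reports: Bland-Altman bias and 95% limits of agreement between two paired measures. The paired arrays are synthetic, generated using the abstract's own slope (0.49) and intercept (34) to mimic the systematic over- and under-reading.

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two paired measures."""
    diff = a - b
    bias = diff.mean()
    spread = 1.96 * diff.std(ddof=1)
    return bias, (bias - spread, bias + spread)

rng = np.random.default_rng(3)
scvo2 = rng.normal(65, 10, 284)                  # central venous O2 sat, %
# StO2 over-reads low ScvO2 and under-reads high ScvO2 (slope < 1)
sto2 = 34 + 0.49 * scvo2 + rng.normal(0, 8, 284)
bias, loa = bland_altman(sto2, scvo2)
print(round(bias, 1), tuple(round(x, 1) for x in loa))
```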
Analysis of a Distribution of Point Events Using the Network-Based Quadrat Method
GEOGRAPHICAL ANALYSIS, Issue 4 2008. Shino Shiode
Abstract This study proposes a new quadrat method that can be applied to the study of point distributions in a network space. While the conventional planar quadrat method remains one of the most fundamental spatial analytical methods on a two-dimensional plane, its quadrats are usually identified by regular, square grids. However, assuming that they are observed along a network, points in a single quadrat are not necessarily close to each other in terms of their network distance. Using planar quadrats in such cases may distort the representation of the distribution pattern of points on a network. The network-based units used in this article, on the other hand, consist of subsets of the actual network, providing more accurate aggregation of the data points along the network. The performance of the network-based quadrat method is compared with that of the conventional quadrat method through a case study on a point distribution on a network. The χ2 statistic and Moran's I statistic of the two quadrat types indicate that (1) the conventional planar quadrat method tends to overestimate the overall degree of dispersion and (2) the network-based quadrat method derives a more accurate estimate of the local similarity. The article also performs sensitivity analysis on network and planar quadrats across different scales and with different spatial arrangements, in which the above-mentioned statistical tendencies are also confirmed. [source]

BARGEN continuous GPS data across the eastern Basin and Range province, and implications for fault system dynamics
GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 3 2004. Nathan A. Niemi
SUMMARY We collected data from a transect of continuous Global Positioning System (GPS) sites across the eastern Basin and Range province at latitude 39°N from 1997–2000. Intersite velocities define a region ∼350 km wide of broadly distributed strain accumulation at ∼10 nstr yr−1. On the western margin of the region, site EGAN, ∼10 km north of Ely, Nevada, moved at a rate of 3.9 ± 0.2 mm yr−1 to the west relative to site CAST, which is on the Colorado Plateau. Most sites to the west of Ely moved at an average rate of ∼3 mm yr−1 relative to CAST, defining an area across central Nevada that does not appear to be extending significantly. The late Quaternary geological velocity field, derived using seismic reflection and neotectonic data, indicates a maximum velocity of EGAN with respect to the Colorado Plateau of ∼4 mm yr−1, also distributed relatively evenly across the region. The geodetic and late Quaternary geological velocity fields, therefore, are consistent, but strain release on the Sevier Desert detachment and the Wasatch fault appears to have been anomalously high in the Holocene. Previous models suggesting horizontal displacement rates in the eastern Basin and Range near 3 mm yr−1, which focused mainly along the Wasatch zone and Intermountain seismic belt, may overestimate the Holocene Wasatch rate by at least 50 per cent and the Quaternary rate by nearly an order of magnitude, while ignoring potentially major seismogenic faults further to the west. [source]
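The arithmetic behind the "∼10 nstr yr−1" figure in this abstract is simply a velocity difference divided by the baseline length; the numbers below come straight from the abstract.

```python
def strain_rate_nstr_per_yr(dv_mm_per_yr, baseline_km):
    """Average extensional strain rate across a baseline.
    1 nanostrain (nstr) = 1e-9, dimensionless."""
    return (dv_mm_per_yr * 1e-3) / (baseline_km * 1e3) / 1e-9

# EGAN moves 3.9 mm/yr relative to CAST across a ~350 km wide zone
print(strain_rate_nstr_per_yr(3.9, 350.0))   # ~11 nanostrain per year
```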
Geographical range size heritability: what do neutral models with different modes of speciation predict?
GLOBAL ECOLOGY, Issue 3 2007. David Mouillot
ABSTRACT Aim Phylogenetic conservatism or heritability of the geographical range sizes of species (i.e. the tendency for closely related species to share similar range sizes) has been predicted to occur because of the strong phylogenetic conservatism of niche traits. However, the extent of such heritability in range size is disputed and the role of biology in shaping this attribute remains unclear. Here, we investigate the level of heritability of geographical range sizes that is generated from neutral models assuming no biological differences between species. Methods We used three different neutral models, which differ in their speciation mode, to simulate the life-history of 250,000 individuals in a square lattice of 50 × 50 cells. These individuals can speciate, reproduce, migrate and die in the metacommunity according to stochastic events. We ran each model for 3000 steps and recorded the range size of each species at each step. The heritability of geographical range size was assessed using an asymmetry coefficient between range sizes of sister species and using the coefficient of correlation between the range sizes of ancestors and their descendants. Results Our results demonstrated the ability of neutral models to mimic some important observed patterns in the heritability of geographical range size. Consistently, sister species exhibited higher asymmetry in range sizes than expected by chance, and correlations between the range sizes of ancestor–descendant species pairs, although often weak, were almost invariably positive. Main conclusions Our findings suggest that, even without any biological trait differences, statistically significant heritability in the geographical range sizes of species can be found. This heritability is weaker than that observed in some empirical studies, but suggests that even here a substantial component of heritability may not necessarily be associated with niche conservatism. We also conclude that both present-day and fossil data sets may provide similar information on the heritability of the geographical range sizes of species, while the omission of rare species will tend to overestimate this heritability. [source]

Estimation of Degradation Rates by Satisfying Mass Balance at the Inlet
GROUND WATER, Issue 4 2010. Vedat Batu
Abstract Using a steady-state mass-conservative solute transport analytical solution that is based on the third-type (or flux-type or Cauchy) source condition, a method is developed to estimate the degradation parameters of solutes in groundwater. Then, the inadequacy of the methods based on the first-type source-based analytical solute transport solution is presented both theoretically and through an example. It is shown that the third-type source analytical solution exactly satisfies the mass balance constraint at the inlet location. It is also shown that the first-type source (or constant source concentration or Dirichlet) solution fails to satisfy the mass balance constraint at the inlet location and the degree of the failure depends on the value of the degradation as well as the flow and solute transport parameters. The error in the first-type source solution is determined with dimensionless parameters by comparing its results with the third-type source solution. Methods for estimating the degradation parameter values that are based on the first-type steady-state solute transport solution may significantly overestimate the degradation parameter values depending on the values of flow and solute transport parameters. It is recommended that the third-type source solution be used in estimating degradation parameters using measured concentrations instead of the first-type source solution. [source]

Gravel-Corrected Kd Values
GROUND WATER, Issue 6 2000. Daniel I. Kaplan
Abstract Standard measurements of solute sorption to sediments are typically made on the <2 mm sediment fraction. This fraction is used by researchers to standardize the method and to ease experimental protocol so that large labware is not required to accommodate the gravel fraction (>2 mm particles). Since sorption is a phenomenon directly related to surface area, sorption measurements based on the <2 mm fraction would be expected to overestimate actual whole-sediment values for sediments containing gravel. This inaccuracy is a problem for ground water contaminant transport modelers who use laboratory-derived sorption values, typically expressed as distribution coefficients (Kd), to calculate the retardation factor (Rf), a parameter that accounts for solute-sediment chemical interactions. The objectives of this laboratory study were to quantify the effect of gravel on strontium Kd and Rf values and to develop an empirical method to calculate gravel-corrected Kdgc values for the study site (Hanford Site in Richland, Washington). Three gravel-corrected Kd values were evaluated: a correction based on the assumption that the gravel simply diluted the Kd<2mm and had no sorption capacity (Kdgc,g=0), a correction based on the assumption that the Kd of the intact sediment (Kdtot) was a composite of the Kd<2mm and the Kd>2mm (Kdgc,g=x), and a correction based on surface area (Kdgc,surf). On average, Kd<2mm tended to overestimate Kdtot by 28% to 47%; Kdgc,g=x overestimated Kdtot by only 3% to 5%; and Kdgc,g=0 and Kdgc,surf underestimated Kdtot by 10% to 39%. Kdgc,g=x provided the best estimate of actual values (Kdtot); however, Kdgc,g=0 was appreciably easier to acquire. Although other contaminants will likely have different gravel-correction values, these results have important implications regarding the traditional approach to modeling contaminant transport, which uses Kd<2mm values. Such calculations may overestimate the tendency of gravel-containing sediments to retard contaminant migration. [source]
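The two simplest corrections described in the Kaplan abstract, written out, together with the usual linear-sorption retardation factor they feed into. The symbols follow the abstract; the gravel fractions, Kd input, bulk density and porosity are invented for illustration, and the mass-weighted reading of the composite correction is this sketch's assumption.

```python
def kd_gravel_corrected(kd_fine, gravel_frac, kd_gravel=None):
    """Kdgc,g=0: gravel dilutes sorption (no sorption capacity);
    Kdgc,g=x: whole sediment as a mass-weighted composite (assumed form)."""
    if kd_gravel is None:
        return (1.0 - gravel_frac) * kd_fine                     # Kdgc,g=0
    return (1.0 - gravel_frac) * kd_fine + gravel_frac * kd_gravel

def retardation_factor(kd_ml_per_g, bulk_density_g_per_ml, porosity):
    """Rf = 1 + (rho_b / n) * Kd, the standard linear-sorption form."""
    return 1.0 + (bulk_density_g_per_ml / porosity) * kd_ml_per_g

kd_fine = 25.0    # mL/g, strontium Kd on the <2 mm fraction (invented)
for g in (0.0, 0.3, 0.6):                        # gravel mass fraction
    kd = kd_gravel_corrected(kd_fine, g)
    print(g, round(kd, 1), round(retardation_factor(kd, 1.6, 0.3), 1))
```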
Cost–benefit analysis involving addictive goods: contingent valuation to estimate willingness-to-pay for smoking cessation
HEALTH ECONOMICS, Issue 2 2009. David L. Weimer
Abstract The valuation of changes in consumption of addictive goods resulting from policy interventions presents a challenge for cost–benefit analysts. Consumer surplus losses from reduced consumption of addictive goods that are measured relative to market demand schedules overestimate the social cost of cessation interventions. This article seeks to show that consumer surplus losses measured using a non-addicted demand schedule provide a better assessment of social cost. Specifically, (1) it develops an addiction model that permits an estimate of the smoker's compensating variation for the elimination of addiction; (2) it employs a contingent valuation survey of current smokers to estimate their willingness-to-pay (WTP) for a treatment that would eliminate addiction; (3) it uses the estimate of WTP from the survey to calculate the fraction of consumer surplus that should be viewed as consumer value; and (4) it provides an estimate of this fraction. The exercise suggests that, as a tentative first and rough rule-of-thumb, only about 75% of the loss of the conventionally measured consumer surplus should be counted as social cost for policies that reduce the consumption of cigarettes. Additional research to estimate this important rule-of-thumb is desirable to address the various caveats relevant to this study. Copyright © 2008 John Wiley & Sons, Ltd. [source]
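The abstract's rule of thumb reduces to a one-line adjustment, shown below; the 75% factor is the paper's own tentative estimate and the dollar figure is invented.

```python
def social_cost_of_cs_loss(cs_loss_market, countable_fraction=0.75):
    """Weimer's tentative rule of thumb: count only ~75% of the
    market-demand consumer-surplus loss as social cost for policies
    that reduce cigarette consumption."""
    return countable_fraction * cs_loss_market

print(social_cost_of_cs_loss(1_000_000.0))   # $750,000 of a $1M measured loss
```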
Measurement of informal care: an empirical study into the valid measurement of time spent on informal caregiving
HEALTH ECONOMICS, Issue 5 2006. Bernard van den Berg
Abstract The incorporation of informal care into economic evaluations of health care is troublesome. The debate focuses on the valuation of time spent on informal caregiving, while time measurement, a related and maybe even more important issue, tends to be neglected. Valid time measurement is a necessary condition for the valuation of informal care. In this paper, two methods of time measurement are compared and evaluated: the diary, which is considered the gold standard, and the recall method, which is applied more often. The main objective of this comparison is to explore the validity of the measurement of time spent on providing informal care. In addition, this paper gives empirical evidence regarding the measurement of joint production and the separation between 'normal' housework and additional housework due to the care demands of the care recipients. Finally, the test–retest stability of the recall method is assessed. A total of 199 persons giving informal care to a heterogeneous population of care recipients completed the diary and the recall questionnaire. Corrected for joint production, informal caregivers spent almost 5.8 h a day on providing informal care. If one assumes that respondents take into account joint production when completing the recall questionnaire, the recall method is a valid instrument to measure time spent on providing informal care compared to the diary. Otherwise, the recall method is likely to overestimate the time spent on providing informal care. Moreover, the recall method proves to be unstable over time. This could be due to learning effects from completing a diary. Copyright © 2005 John Wiley & Sons, Ltd. [source]

The impact of diabetes on employment: genetic IVs in a bivariate probit
HEALTH ECONOMICS, Issue 5 2005. H. Shelton Brown III
Abstract Diabetes has been shown to have a detrimental impact on employment and labor market productivity, which results in lost work days and higher mortality/disability. This study utilizes data from the Border Epidemiologic Study on Aging to analyze the endogeneity of diabetes in an employment model. We use family history of diabetes as genetic instrumental variables. We show that assuming that diabetes is an exogenous variable results in an overestimate (underestimate) of the negative impact of diabetes on female (male) employment. Our results are particularly relevant in the case of populations where genetic predisposition has an important role in the etiology of diabetes. Copyright © 2004 John Wiley & Sons, Ltd. [source]

Violated wishes about division of childcare labor predict early coparenting process during stressful and nonstressful family evaluations
INFANT MENTAL HEALTH JOURNAL, Issue 4 2008. Inna Khazan
Prior research has indicated that expectant parents overestimate the extent to which fathers will take part in the "work" of parenting, with mothers often becoming disenchanted when these expectations are violated following the baby's arrival. In this study, we examine the role of violated wishes concerning childcare involvement in accounting for variability in maternal and paternal marital satisfaction, and in early coparenting behavior as assessed during family-interaction sessions. The results indicate possible negative effects of violated wishes on the enacted family process and confirm previous findings regarding the effects of marital satisfaction. In addition, we uncovered differences in the way that violated maternal wishes are related to coparenting during playful and mildly stressful family interactions. [source]

Patient perceptions of the risks and benefits of infliximab for the treatment of inflammatory bowel disease
INFLAMMATORY BOWEL DISEASES, Issue 1 2008. Corey A. Siegel MD
Abstract Background: For a patient to make informed, preference-based decisions, they must be able to balance the risks and benefits of treatment. The aim of this study was to determine patients' and parents' perceptions of the risks and benefits of infliximab for the treatment of inflammatory bowel disease (IBD). Methods: Adult patients with IBD and parents of patients attending IBD patient education symposiums were asked to complete a questionnaire regarding the risks and benefits of infliximab. Results: One hundred and sixty-five questionnaires were completed. A majority (59%) of respondents expected a remission rate greater than 50% at 1 year and 18% expected a remission rate greater than 70% at 1 year. More than one-third (37%) of respondents answered that infliximab is not associated with a risk of lymphoma and 67% responded that the lymphoma risk is no higher than twice that of the general population.
When presented with a scenario of a hypothetical new drug for IBD with risks mirroring those estimated for infliximab, 64% of respondents indicated that they would not take the medication, despite its described benefits. Thirty percent of these patients were either currently taking or had previously taken infliximab. Patients actively taking infliximab predicted the highest remission rates for infliximab (P = 0.05), and parents of patients predicted the lowest (P = 0.01). Parents estimated a higher risk of lymphoma than patients (P = 0.003). Risk and benefit perception was independent of gender and age of patient respondents. Conclusions: Compared to published literature, patients and parents of patients overestimate the benefit of infliximab and underestimate its risks. We conclude that effective methods for communicating risks and benefits to patients need to be developed. (Inflamm Bowel Dis 2007) [source]

Chronic effects of polychlorinated dibenzofurans on mink in laboratory and field environments
INTEGRATED ENVIRONMENTAL ASSESSMENT AND MANAGEMENT, Issue 2 2009. Matthew J Zwiernik
Abstract Mink are often used as a sentinel species in ecological risk assessments of chemicals such as polychlorinated biphenyls (PCBs), dibenzo-p-dioxins (PCDDs), and dibenzofurans (PCDFs) that cause toxicity mediated through the aromatic hydrocarbon receptor. Considerable toxicological information is available on the effects of PCBs and PCDDs on mink, but limited toxicological information is available for PCDFs. Thus, exposure concentrations at which adverse effects occur could not be determined reliably for complex mixtures in which PCDFs dominate the total calculated concentration of 2,3,7,8-tetrachlorodibenzo-p-dioxin equivalents (TEQ). Two studies were conducted to evaluate the potential toxicity of PCDFs to mink. The first was a chronic exposure, conducted under controlled laboratory conditions, in which mink were exposed to 2,3,7,8-tetrachlorodibenzofuran (2,3,7,8-TCDF) concentrations as great as 2.4 × 10³ ng 2,3,7,8-TCDF/kg wet-weight (ww) diet or 2.4 × 10² ng TEQ2006-WHO-mammal/kg ww diet. In that study, a transient decrease in the body masses of kits relative to the controls was the only statistically significant effect observed. The second study was a 3-y field study during which indicators of individual health, including hematological and morphological parameters, were determined for mink exposed chronically to a mixture of PCDDs and PCDFs under field conditions. In the field study, there were no statistically significant differences in any of the measured parameters between mink exposed to a median estimated dietary dose of 31 ng TEQ2006-WHO-mammal/kg ww and mink from an upstream reference area where they had a median dietary exposure of 0.68 ng TEQ2006-WHO-mammal/kg ww. In both studies, concentrations of TEQ2006-WHO-mammal to which the mink were exposed exceeded those at which adverse effects, based on studies with PCDD and PCB congeners, would have been expected. Yet in both instances where PCDF congeners were the sole or predominant source of the TEQ2006-WHO-mammal, predicted adverse effects were not observed. Taken together, the results of these studies suggest that the values of the mammalian-specific toxicity equivalency factors suggested by the World Health Organization overestimate the toxic potency of PCDFs to mink. Therefore, hazard cannot be accurately predicted by making comparisons to toxicity reference values derived from exposure studies conducted with PCBs or PCDDs in situations where mink are exposed to TEQ mixtures dominated by PCDFs. [source]
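How a TEQ is assembled: TEQ = Σ (congener concentration × its toxic equivalency factor). The three TEFs below are, to the best of my knowledge, the WHO 2005 mammalian values, but verify against the published Van den Berg et al. tables before any real use; the example reproduces the abstract's laboratory diet.

```python
# WHO 2005 mammalian TEFs for a few congeners (verify against the
# published tables); concentrations in ng/kg wet-weight diet
TEF = {"2,3,7,8-TCDD": 1.0, "2,3,7,8-TCDF": 0.1, "2,3,4,7,8-PeCDF": 0.3}

def teq(conc_ng_per_kg):
    """TEQ = sum over congeners of concentration times its TEF."""
    return sum(c * TEF[name] for name, c in conc_ng_per_kg.items())

# the laboratory diet: 2400 ng/kg of 2,3,7,8-TCDF alone
print(teq({"2,3,7,8-TCDF": 2400.0}))   # 240 ng TEQ/kg, as in the abstract
```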
Numerical simulation of bolt-supported tunnels by means of a multiphase model conceived as an improved homogenization procedure
INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 13 2008. Patrick de Buhan
Abstract This paper examines the possibility of applying a homogenization procedure to analyze the convergence of a tunnel reinforced by bolts, regarded as periodically distributed linear inclusions. Owing to the fact that a classical homogenization method fails to account for the interactions prevailing between the bolts and the surrounding ground, and thus tends to significantly overestimate the reinforcement effect in terms of convergence reduction, a so-called multiphase model is presented and developed, aimed at improving the classical homogenization method. Indeed, according to this model, the bolt-reinforced ground is represented at the macroscopic scale as the superposition of two mutually interacting continuous phases, describing the ground and the reinforcement network, respectively. It is shown that such a multiphase approach can be interpreted as an extension of the homogenization procedure, thus making it possible to capture the ground–reinforcement interaction in a proper way, provided the constitutive parameters of the model, and notably those relating to the interaction law, can be identified from the reinforced ground characteristics. The numerical implementation of this model in a finite element method-based computer code is then carried out, and a first illustrative application is finally presented. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Modelling poroelastic hollow cylinder experiments with realistic boundary conditions
INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 12 2004. S. Jourine
Abstract A general poroelastic solution for axisymmetrical plane strain problems with time-dependent boundary conditions is developed in the Laplace domain. Time-domain results are obtained using numerical inversion of the Laplace transform. Previously published solutions can be considered as special cases of the proposed solution. In particular, we could reproduce numerical results for solid and hollow poroelastic cylinders with suddenly applied load/pressure (Rice and Cleary, Rev. Geophys. Space Phys. 1976; 14:227; Schmitt, Tait and Spann, Int. J. Rock Mech. Min. Sci. 1993; 30:1057; Cui and Abousleiman, ASCE J. Eng. Mech. 2001; 127:391). The new solution is used to model laboratory tests on thick-walled hollow cylinders of Berea sandstone subjected to intensive pressure drawdown. In the experiments, pressure at the inner boundary of the hollow cylinder is observed to decline exponentially with a decay constant of 3–5 1/s. It is found that solutions with idealized step-function type inner boundary conditions overestimate the induced tensile radial stresses considerably. Although basic poroelastic phenomena can be modelled properly at long time following a stepwise change in pressure, realistic time-varying boundary conditions predict actual rock behaviour better at early time. Experimentally observed axial stresses can be matched but appear to require different values for two of the poroelastic constants than are measured at long time.
The proposed solution can be used to calculate the stress and pore pressure distributions around boreholes under infinite/finite boundary conditions. Prospective applications include investigating the effect of gradually changing pore pressure, modelling open-hole cavity completions, and describing the phenomenon of wellbore collapse (bridging) during oil or gas blowouts. Copyright © 2004 John Wiley & Sons, Ltd. [source]
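The Jourine abstract does not name its numerical Laplace-inversion algorithm; the Gaver-Stehfest method below is one standard choice for this kind of problem, demonstrated on F(s) = 1/(s + 1), whose exact inverse is e^(−t).

```python
import math

def stehfest_weights(n):
    """Gaver-Stehfest coefficients V_i (n must be even)."""
    v = []
    for i in range(1, n + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, n // 2) + 1):
            s += (k ** (n // 2) * math.factorial(2 * k)) / (
                math.factorial(n // 2 - k) * math.factorial(k)
                * math.factorial(k - 1) * math.factorial(i - k)
                * math.factorial(2 * k - i))
        v.append((-1) ** (i + n // 2) * s)
    return v

def invert(F, t, n=12):
    """f(t) ~= (ln 2 / t) * sum_i V_i * F(i * ln 2 / t)."""
    a = math.log(2.0) / t
    V = stehfest_weights(n)
    return a * sum(V[i - 1] * F(i * a) for i in range(1, n + 1))

# check against the known pair F(s) = 1/(s+1)  <->  f(t) = exp(-t)
for t in (0.5, 1.0, 2.0):
    print(t, round(invert(lambda s: 1.0 / (s + 1.0), t), 6),
          round(math.exp(-t), 6))
```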
Friction and degradation of rock joint surfaces under shear loads
INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 10 2001. F. Homand
Abstract The morpho-mechanical behaviour of one artificial granite joint with hammered surfaces, one artificial regularly undulated joint and one natural schist joint was studied. The hammered granite joints underwent 5 cycles of direct shear under 3 normal stress levels ranging between 0.3 and 4 MPa. The regularly undulated joint underwent 10 cycles of shear under 6 normal stress levels ranging between 0.5 and 5 MPa and the natural schist replicas underwent a monotonic shear under 5 normal stress levels ranging between 0.4 and 2.4 MPa. These direct shear tests were performed using a new computer-controlled 3D-shear apparatus. To characterize the morphology evolution of the sheared joints, a laser sensor profilometer was used to perform surface data measurements prior to and after each shear test. Based on a new characterization of joint surface roughness viewed as a combination of primary and secondary roughness and termed the joint surface roughness, SRs, one parameter termed 'joint surface degradation', Dw, has been defined to quantify the degradation of the sheared joints. Examinations of SRs and Dw prior to and after shearing indicate that the hammered surfaces are more damaged than the two other surfaces. The peak strength of hammered joints with zero dilatancy, therefore, significantly differs from the classical formulation of dilatant joint strength. An attempt has been made to model the peak strength of hammered joint surfaces and dilatant joints with regard to their surface degradation in the course of shearing, and two peak strength criteria are proposed. Input parameters are the initial morphology and initial surface roughness. For the hammered surfaces, the degradation mechanism is dominant over the phenomenon of dilatancy, whereas for a dilatant joint both mechanisms are present. A comparison between the proposed models and the experimental results indicates a relatively good agreement. In particular, compared to the well-known shear strength criteria of Ladanyi and Archambault or Saeb, these classical criteria significantly underestimate and overestimate the observed peak strength, respectively, under low and high normal stress levels. In addition, and based on our experimental investigations, we put forward a model to predict the evolution of joint morphology and the degree of degradation during the course of shearing. Degradations of the artificial undulated joint and the natural schist joint enable us to verify the proposed model with a relatively good agreement. Finally, the model of Ladanyi and Archambault dealing with the proportion of total joint area sheared through asperities, as, once again tends to underestimate the observed degradation. Copyright © 2001 John Wiley & Sons, Ltd. [source]

Accuracy of self-reported weight and height: Relationship with eating psychopathology among young women
INTERNATIONAL JOURNAL OF EATING DISORDERS, Issue 4 2009. Caroline Meyer PhD
Abstract Objective: Self-reported height and weight data are commonly reported within eating disorders research. The aims of this study are to demonstrate the accuracy of self-reported height and weight and to determine whether that accuracy is associated with levels of eating psychopathology among a group of young nonclinical women. Method: One hundred and four women were asked to report their own height and weight. They then completed the Eating Disorders Examination-Questionnaire. Finally, they were weighed and their height was measured in a standardized manner. Accuracy scores for height and weight were calculated by subtracting their actual weight and height from their self-reports. Results: Overall, the women overestimated their heights and underestimated their weights, leading to significant errors in body mass index where self-report is used. Those women with high eating concerns were likely to overestimate their weight, whereas those with high weight concerns were more likely to underestimate it. Discussion: These data show that self-reports of height and weight are inaccurate in a way that skews any research that depends on them. The errors are influenced by eating psychopathology. These findings highlight the importance of obtaining objective height and weight data, particularly when comparing those data with those of patients with eating disorders. © 2008 by Wiley Periodicals, Inc. Int J Eat Disord 2009 [source]

Degree of discrepancy between self and other-reported everyday functioning by cognitive status: dementia, mild cognitive impairment, and healthy elders
INTERNATIONAL JOURNAL OF GERIATRIC PSYCHIATRY, Issue 9 2005. Sarah Tomaszewski Farias
Background Previous studies show individuals with dementia overestimate their cognitive and functional abilities compared to reports from caregivers. Few studies have examined whether individuals with Mild Cognitive Impairment (MCI) also tend to underestimate their deficits. In this study we examined whether the degree of discrepancy between patient- and informant-reported everyday functioning was associated with cognitive status. Methods The sample consisted of 111 ethnically diverse community-dwelling older adults (46 Caucasians and 65 Hispanic individuals), which was divided into four diagnostic categories: cognitively normal, MCI-memory impaired, MCI-nonmemory impaired, and demented. Everyday functional abilities were measured using both a self-report and an informant-report version of the Daily Function Questionnaire (DFQ). A Difference Score was calculated by subtracting patients' DFQ score from their informants' score. Results DFQ Difference Scores were significantly higher in the demented group compared to normals and both of the MCI groups. However, the Difference Scores for the MCI groups were not significantly different than the normals. Further, while patient-reported everyday functioning did not differ among the four diagnostic groups, informant-reported functional status was significantly different across all diagnostic groups except MCI-nonmemory impaired vs normals. Performance on objective memory testing was associated with informant-rated but not patient-rated functional status. Demographic characteristics of the patients and informants, including ethnicity, had no association with the degree of discrepancy between raters.
Conclusions Although there may be some mild functional changes associated particularly with the MCI-memory impaired subtype, individuals with MCI do not appear to under-report their functional status as can often be seen in persons with dementia. Copyright © 2005 John Wiley & Sons, Ltd. [source]

Global carbonate accumulation rates from Cretaceous to Present and their implications for the carbon cycle model
ISLAND ARC, Issue 1 2001. T. Nakamori
Abstract Global carbonate accumulation rates on the surface of the earth, including not only platforms but also continental margin slopes and the deep sea, from the Cretaceous to the Present are estimated by compiling previous geologic studies. These rates are revised taking account of the erosional effect of the sediments on the platform and in the deep sea. Long-term model carbonate fluxes from the ocean to the crust are calculated on the basis of the carbon cycle model (GEOCARB of Berner 1991). The rates based on the actual geologic data indicate much lower values than the model fluxes, excluding the Pliocene and Quaternary. The discrepancy could be attributed to two misunderstandings, namely an overestimate of the carbonate accumulation rate for the Quaternary and an incorrect use of the higher Quaternary rate as a boundary condition of the model. The carbonate accumulation rate for the Pliocene to Quaternary is lowered from 29.8 × 10¹⁸ mol/Ma (modified from Opdyke & Wilkinson 1988) to 14.8 × 10¹⁸ mol/Ma in the present study, assuming that the rate from the Quaternary to Pliocene is almost the same as the Miocene value. New model fluxes are recalculated with the new boundary condition in the Quaternary (14.8 × 10¹⁸ mol/Ma). Revised model fluxes show general trends of high rates at 120 Ma or 130 Ma and a low rate at 0 Ma, and are in agreement with the accumulation rate pattern. [source]

A risk-averse user equilibrium model for route choice problem in signal-controlled networks
JOURNAL OF ADVANCED TRANSPORTATION, Issue 4 2010. William H.K. Lam
Abstract This paper proposes a new risk-averse user equilibrium (RAUE) model to estimate the distribution of traffic flows over road networks, taking into account the effects of accident risks due to the conflicting traffic flows (left- and right-turning and through traffic flows) at signalized intersections. It is assumed in the proposed model that drivers consider simultaneously both the travel time and accident risk in their route choices. The accident risk of a route is measured by the potential accident rate on that route. The RAUE conditions are formulated as an equivalent path-based variational inequality problem which can be solved by a path-based solution algorithm. It is shown that the traditional user equilibrium (UE) model is in fact a special case of the proposed model. A numerical example on a grid network is used to illustrate the application of the proposed model and to compare the results with the conventional UE traffic assignment. Numerical results show that the traditional UE model may underestimate the total system travel time and overestimate the system accident rate. Sensitivity tests are also carried out to assess the effects of drivers' preferences, signal control parameters (i.e., green time proportions), and various network demand levels on the route choice problem. Copyright © 2010 John Wiley & Sons, Ltd. [source]
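A toy two-route illustration of the generalized-cost idea in this last abstract (travel time plus a weighted accident-risk term), solved with the method of successive averages. The cost functions, capacities and risk weights are all invented, and the paper itself uses a path-based variational-inequality algorithm rather than this simple scheme.

```python
def msa_two_routes(demand=100.0, risk_weight=0.5, iters=200):
    """Risk-averse user equilibrium on two parallel routes where
    generalized cost = travel time + risk_weight * accident-risk term.
    All functions and parameters are invented for illustration."""
    t0, cap = (10.0, 15.0), (60.0, 80.0)
    risk = (0.8, 0.2)                    # potential accident rate per route
    x = [demand / 2.0, demand / 2.0]
    for n in range(1, iters + 1):
        cost = [t0[i] * (1 + 0.15 * (x[i] / cap[i]) ** 4)       # BPR-type time
                + risk_weight * risk[i] * (x[i] / cap[i])       # risk term
                for i in range(2)]
        best = cost.index(min(cost))
        target = [demand if i == best else 0.0 for i in range(2)]
        x = [x[i] + (target[i] - x[i]) / n for i in range(2)]   # MSA step
    return x, cost

flows, costs = msa_two_routes()
print([round(f, 1) for f in flows], [round(c, 2) for c in costs])
# at equilibrium the used routes end up with (nearly) equal generalized cost
```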