Exponential
Selected Abstracts

Temporal analysis of spatial covariance of SO2 in Europe
ENVIRONMETRICS, Issue 4 2007. Marco Giannitrapani
In recent years, the number of applications of spatial statistics has enormously increased in environmental and ecological sciences. A typical problem is the sampling of a pollution field, with the common objective of spatial interpolation. In this paper, we present a spatial analysis across time, focusing on sulphur dioxide (SO2) concentrations monitored from 1990 to 2001 at 125 sites across Europe. Four different methods of trend estimation have been used, and comparisons among them are shown. Spherical, Exponential and Gaussian variograms have been fitted to the residuals and compared. Time series analyses of the range, sill and nugget have been undertaken, and a suggestion for defining a unique spatial correlation matrix for the overall time period of analysis is proposed. Copyright © 2006 John Wiley & Sons, Ltd. [source]
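The SO2 study above fits spherical, exponential and Gaussian variograms to residuals. As a minimal illustrative sketch (not the authors' code), the three classical variogram forms can be fitted to an empirical semivariogram with scipy; the synthetic lags, semivariances and starting values below are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Classical variogram models gamma(h) with nugget c0, partial sill c and range a.
def spherical(h, c0, c, a):
    h = np.asarray(h, dtype=float)
    g = c0 + c * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h < a, g, c0 + c)

def exponential(h, c0, c, a):
    return c0 + c * (1.0 - np.exp(-3.0 * h / a))        # "practical range" convention

def gaussian(h, c0, c, a):
    return c0 + c * (1.0 - np.exp(-3.0 * (h / a) ** 2))

# Synthetic empirical semivariogram (lags in km) standing in for the SO2 residuals;
# real values would come from the monitoring data.
lags = np.linspace(50, 1500, 15)
gamma_emp = exponential(lags, 0.1, 0.9, 800) + np.random.default_rng(1).normal(0, 0.02, lags.size)

for name, model in [("spherical", spherical), ("exponential", exponential), ("gaussian", gaussian)]:
    p, _ = curve_fit(model, lags, gamma_emp, p0=[0.1, 1.0, 700], maxfev=10000)
    rss = np.sum((gamma_emp - model(lags, *p)) ** 2)
    print(f"{name:12s} nugget={p[0]:.3f} sill={p[0] + p[1]:.3f} range={p[2]:.0f} km RSS={rss:.4f}")
```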
Empirical models of UV total radiation and cloud effect study
INTERNATIONAL JOURNAL OF CLIMATOLOGY, Issue 9 2010. David Mateos Villán
Several empirical models of hourly ultraviolet total radiation (UVT) have been proposed in this study. Measurements of UVT radiation, 290–385 nm, have been recorded at ground level from February 2001 to June 2008 in Valladolid, Spain (latitude 41°40′N, longitude 4°50′W, 840 m a.s.l.). The empirical models have emerged due to the lack of some radiometric variables at measuring stations; hence, good forecasts of them can be obtained from the usual measurements at these stations. Some advantages of the empirical models are therefore that they allow the estimation of past missing data in the database and the forecast of future ultraviolet solar availability. In this study, models reported in the bibliography have been assessed and recalibrated. New expressions have been proposed that allow hourly values of ultraviolet radiation to be obtained from global radiation measurements and parameters such as the clearness index and relative optical air mass. The accuracy of these models has been assessed through the following statistical indices: mean bias, mean-absolute bias and root-mean-square errors, whose values are close to zero, below 7% and below 10%, respectively. Two new clear-sky models have been used to evaluate two new parameters, the ultraviolet and global cloud modification factors, which can help to understand the role of clouds on solar radiation. The ultraviolet cloud modification factor depends on cloudiness in such a way that its value under overcast skies is half that under cloudless skies. Exponential and potential fits are the best relationships between the two cloud factors. Finally, these parameters have been used to build new UV empirical models which show low values of the statistical indices mentioned above. Copyright © 2009 Royal Meteorological Society [source]

An investigation of incident frequency, duration and lanes blockage for determining traffic delay
JOURNAL OF ADVANCED TRANSPORTATION, Issue 3 2009. Yi (Grace) Qi
Traffic delay caused by incidents is closely related to three variables: incident frequency, incident duration, and the number of lanes blocked by an incident, which is directly related to the bottleneck capacity. Incident duration has been studied much more extensively than incident frequency and the number of lanes blocked. In this study, we investigate the influencing factors for all three variables based on an incident data set collected in New York City (NYC). The information derived from this identification can be used by incident management agencies in NYC for strategic policy decisions and for daily incident management and traffic operations. In identifying the influencing factors for incident frequency, a set of models, including Poisson and Negative Binomial regression models and their zero-inflated counterparts, were considered, and an appropriate model was determined using a model decision-making tree. The influencing factors for incident duration were identified using hazard-based models in which Exponential, Weibull, Log-logistic, and Log-normal distributions were considered for incident duration. For the number of lanes blocked in an incident, the identification of influencing factors was based on an Ordered Probit model, which better captures the order inherent in the number of lanes blocked. As identified in this study, rain is the only factor that significantly influenced incident frequency, whereas various factors had a significant impact on incident duration and the number of lanes blocked. The study concludes that there is a strong need to identify the influencing factors separately for different types of incidents and for the roadways where the incidents occurred. [source]
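The incident-duration analysis above compares hazard-based models built on Exponential, Weibull, Log-logistic and Log-normal distributions. A hedged sketch of how such candidate duration distributions might be screened with scipy; the simulated durations and the AIC comparison are assumptions, not the paper's estimation procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic incident durations in minutes (a stand-in for the NYC data set).
durations = rng.weibull(1.4, size=500) * 45.0

candidates = {
    "exponential": stats.expon,
    "weibull": stats.weibull_min,
    "log-logistic": stats.fisk,       # scipy's name for the log-logistic distribution
    "log-normal": stats.lognorm,
}

for name, dist in candidates.items():
    params = dist.fit(durations, floc=0)          # fix the location parameter at zero
    loglik = np.sum(dist.logpdf(durations, *params))
    k = len(params) - 1                           # free parameters (loc is fixed)
    aic = 2 * k - 2 * loglik
    print(f"{name:13s} AIC = {aic:8.1f}")
```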
Individual response of growing pigs to sulphur amino acid intake
JOURNAL OF ANIMAL PHYSIOLOGY AND NUTRITION, Issue 1 2008. J. Heger
Two N balance experiments were conducted to study the individual response of growing pigs to limiting amino acid (AA) intake. Series of fifteen diets with increasing concentrations of sulphur amino acids (SAA, Expt 1) or of methionine in the presence of excess cystine (Expt 2) were fed sequentially to nine pigs during a 15-day experimental period. The concentration of the AA under test ranged from 50% to 140% of the requirement, while the other essential AA were given in a 25% excess relative to the limiting AA. N retention was related to the limiting AA intake using rectilinear and curvilinear models. In Expt 1, the quadratic-plateau model fitted the individual data significantly better (p = 0.01) than the linear-plateau model. No difference was found between the two models in Expt 2, presumably due to the sparing effect of excess cystine on methionine utilization. Exponential, saturation kinetics or four-parameter logistic models fitted to data for all pigs showed a goodness of fit similar to that of the quadratic-plateau or linear-plateau models. Significant differences (p < 0.05) were found between individual plateau values for N retention within each experiment, while the slopes of the regression lines did not differ significantly in either Expt 1 (p = 0.07) or Expt 2 (p = 0.45). There was a positive correlation between the slope and plateau values of the linear-plateau model in Expt 1 (r = 0.74, p = 0.02), but no significant correlation was found in Expt 2 (r = −0.48, p = 0.13). Marginal efficiencies of SAA and methionine utilization derived from the linear-plateau model were 0.43 and 0.65, respectively. Based on the linear-plateau and quadratic-plateau models, the daily requirements of SAA and methionine for a 50 kg pig were estimated to be 13.0 and 5.9 g and 14.3 and 6.1 g, respectively. [source]
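The pig study above compares linear-plateau and quadratic-plateau (rectilinear and curvilinear) response models. A minimal sketch of fitting both forms to synthetic intake-retention data; the functional forms are the standard plateau models, while the data and starting values are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def linear_plateau(x, a, b, x0):
    """N retention rises linearly with slope b up to the breakpoint x0, then plateaus."""
    return np.where(x < x0, a + b * x, a + b * x0)

def quadratic_plateau(x, a, b, x0):
    """Quadratic rise reaching a smooth plateau at x0 (zero slope at the breakpoint)."""
    c = -b / (2.0 * x0)
    return np.where(x < x0, a + b * x + c * x ** 2, a + b * x0 + c * x0 ** 2)

# Synthetic limiting-AA intake (g/day) versus N retention (g/day); illustrative only.
intake = np.linspace(5, 20, 15)
rng = np.random.default_rng(2)
retention = linear_plateau(intake, 2.0, 1.1, 13.0) + rng.normal(0, 0.4, intake.size)

for name, model in [("linear-plateau", linear_plateau), ("quadratic-plateau", quadratic_plateau)]:
    p, _ = curve_fit(model, intake, retention, p0=[1.0, 1.0, 12.0], maxfev=10000)
    rss = np.sum((retention - model(intake, *p)) ** 2)
    print(f"{name:18s} breakpoint = {p[2]:5.2f} g/day  RSS = {rss:.3f}")
```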
The effect of culture growth phase on induction of the heat shock response in Yersinia enterocolitica and Listeria monocytogenes
JOURNAL OF APPLIED MICROBIOLOGY, Issue 2 2000. C.M.M. McMahon
The effect of culture growth phase on induction of the heat shock response in Yersinia enterocolitica and Listeria monocytogenes was examined. Exponential or stationary preconditioned cultures were heat shocked and survivor numbers estimated using selective and overlay/resuscitation recovery techniques. The results indicate that prior heat shock induced increased heat resistance in both micro-organisms to higher heat treatments. Heat-shocked cells of each micro-organism were able to survive much longer than non-heat-shocked cells when heated at 55 °C. The size of the change in heat resistance between heat-shocked and non-heat-shocked cells was greatest for exponential cultures (X:X). The results indicate that the overall relative thermal resistance of each pathogen was dependent on cell growth phase. Stationary cultures (S:S) were significantly (P < 0.01) more thermotolerant than exponential cultures (X:X) under identical processing conditions. Under most conditions, the use of an overlay/resuscitation recovery medium resulted in higher D-values (P < 0.05) compared with a selective recovery medium. [source]

Is Post-War Economic Growth Exponential?
THE AUSTRALIAN ECONOMIC REVIEW, Issue 2 2006. Sören Wibe
In this article, we argue that there are strong reasons for using linear instead of exponential models when analysing post-war economic growth. Incorrect model specifications will lead to misinterpretations of the underlying economic reality and to erroneous economic forecasts. Our argument is based on an empirical investigation of real GDP per capita growth in 25 OECD countries (and three country aggregates) during the post-war period using the Box-Cox transformation method. The conclusion is that per capita growth is generally (more or less) linear, and definitely not exponential, for the level of economic development represented by these countries. Based on this, we argue that analyses of growth should use linear instead of exponential models. This change of model could give new insights into problems connected with economic growth. [source]
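The growth article above uses the Box-Cox transformation to ask whether per capita growth is linear or exponential. A hedged sketch of that idea: estimate the Box-Cox lambda of a trend regression by profile likelihood, where lambda near 1 supports a linear growth path and lambda near 0 an exponential (log-linear) one. The simulated GDP series is an assumption, not OECD data, and this is not the article's exact procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
years = np.arange(1950, 2001)
t = years - years[0]
# Simulated per capita GDP that grows roughly linearly (a stand-in for an OECD series).
y = 10000 + 350 * t + rng.normal(0, 400, t.size)

def profile_loglik(lam):
    """Profile log-likelihood of the Box-Cox trend model y^(lam) = a + b*t + error."""
    z = stats.boxcox(y, lmbda=lam)                  # Box-Cox transform with fixed lambda
    resid = z - np.polyval(np.polyfit(t, z, 1), t)  # residuals of a linear trend in transformed units
    n = y.size
    return -0.5 * n * np.log(np.sum(resid ** 2) / n) + (lam - 1.0) * np.sum(np.log(y))

lams = np.linspace(-0.5, 2.0, 251)
ll = np.array([profile_loglik(l) for l in lams])
lam_hat = lams[np.argmax(ll)]
# lambda near 1 favours a linear specification, lambda near 0 an exponential one.
print(f"Box-Cox lambda maximising the profile likelihood: {lam_hat:.2f}")
```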
An empirical analysis of multi-period hedges: Applications to commercial and investment assets
THE JOURNAL OF FUTURES MARKETS, Issue 6 2005. Jimmy E. Hilliard
This study measures the performance of stacked hedge techniques with applications to investment assets and to commercial commodities. The naive stacked hedge is evaluated along with three other versions of the stacked hedge, including those which use exponential and minimum variance ratios. Three commercial commodities (heating oil, light crude oil, and unleaded gasoline) and three investment assets (British Pounds, Deutsche Marks, and Swiss Francs) are examined. The evidence suggests that stacked hedges perform better with investment assets than with commercial commodities. Specifically, deviations from the cost-of-carry model result in nontrivial hedge errors in the stacked hedge. Exponential and minimum variance hedge ratios were found to marginally improve the hedging performance of the stack. © 2005 Wiley Periodicals, Inc. Jrl Fut Mark 25:587–606, 2005 [source]

Utility Functions for Ceteris Paribus Preferences
COMPUTATIONAL INTELLIGENCE, Issue 2 2004. Michael McGeachie
Ceteris paribus (all-else-equal) preference statements concisely represent preferences over outcomes or goals in a way natural to human thinking. Although deduction in a logic of such statements can compare the desirability of specific conditions or goals, many decision-making methods require numerical measures of degrees of desirability. To permit ceteris paribus specifications of preferences while providing quantitative comparisons, we present an algorithm that compiles a set of qualitative ceteris paribus preferences into an ordinal utility function. Our algorithm is complete for a finite universe of binary features. Constructing the utility function can, in the worst case, take time exponential in the number of features, but common independence conditions reduce the computational burden. We present heuristics using utility independence and constraint-based search to obtain efficient utility functions. [source]

NMR and the uncertainty principle: How to and how not to interpret homogeneous line broadening and pulse nonselectivity. IV. (Un?)certainty
CONCEPTS IN MAGNETIC RESONANCE, Issue 5 2008
Following the treatments presented in Parts I, II, and III, I herein address the popular notion that the frequency of a monochromatic RF pulse, as well as that of a monochromatic FID, is "in effect" uncertain due to the (Heisenberg) Uncertainty Principle, which also manifests itself in the fact that the FT-spectrum of these temporal entities is spread over a nonzero frequency band. I will show that the frequency spread should not be interpreted as "in effect" meaning a range of physical driving RF fields in the former case and "spin frequencies" in the latter. The fact that a shorter pulse or a more quickly decaying FID has a wider FT-spectrum is in fact solely due to the Fourier Uncertainty Principle, which is a less well known and easily misunderstood concept. A proper understanding of the Fourier Uncertainty Principle tells us that the FT-spectrum of a monochromatic pulse is not "broad" because of any "uncertainty" in the RF frequency, but because the spectrum profile carries all of the pulse's features (frequency, phase, amplitude, length, temporal location) coded into the complex amplitudes of the FT-spectrum's constituent eternal basis harmonic waves. A monochromatic RF pulse's capability to excite nonresonant magnetizations is in fact a purely classical off-resonance effect that has nothing to do with "uncertainty". Analogously, "Lorentzian lineshape" means exactly the same thing physically as "exponential decay", and all inferences as to the physical reasons for that decay must be based on independent assumptions or observations. © 2008 Wiley Periodicals, Inc. Concepts Magn Reson Part A 32A: 373–404, 2008. [source]
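The NMR essay above equates "Lorentzian lineshape" with "exponential decay". A small numerical sketch of that Fourier pair using a synthetic FID; the decay time and offset frequency are arbitrary assumptions. The width of the resulting absorption line should approach the analytic full width at half maximum of 1/(pi*T2).

```python
import numpy as np

# Synthetic FID: s(t) = exp(i*2*pi*f0*t) * exp(-t/T2)
T2 = 0.05            # s, decay constant (assumed)
f0 = 100.0           # Hz, offset frequency (assumed)
dt = 1e-4            # s, dwell time
t = np.arange(0, 1.0, dt)
fid = np.exp(1j * 2 * np.pi * f0 * t) * np.exp(-t / T2)

spec = np.fft.fftshift(np.fft.fft(fid)) * dt
freq = np.fft.fftshift(np.fft.fftfreq(t.size, dt))

# The real (absorption) part is a Lorentzian centred at f0.
absorption = np.real(spec)
half_max = absorption.max() / 2.0
above = freq[absorption >= half_max]
fwhm = above[-1] - above[0]
print(f"numerical FWHM = {fwhm:.1f} Hz, analytic 1/(pi*T2) = {1 / (np.pi * T2):.1f} Hz")
```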
Investigation of the Influence of Overvoltage, Auxiliary Glow Current and Relaxation Time on the Electrical Breakdown Time Delay Distributions in Neon
CONTRIBUTIONS TO PLASMA PHYSICS, Issue 2 2005. Č. A. Maluckov
Results of the statistical analysis of the electrical breakdown time delay for a neon-filled tube at 13.3 mbar are presented in this paper. Experimental distributions of the breakdown time delay were established on the basis of 200 successive and independent measurements, for different overvoltages, relaxation times and auxiliary glows. The obtained experimental distributions deviate from the usual exponential distribution. Breakdown time delay distributions are numerically generated, using the Monte Carlo method, as compositions of two independent random variables with an exponential and a Gaussian distribution. The theoretical breakdown time delay distribution is obtained from the convolution of the exponential and Gaussian distributions. The analysis shows that the crucial parameter determining the complex structure of the time delay is the overvoltage: if it is of the order of a few per cent, the distribution of the time delay must be treated as a convolution of two random variables. (© 2005 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
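The breakdown-time-delay study above composes an exponential and a Gaussian random variable and compares the result with the convolution of the two densities. A Monte Carlo sketch of that composition checked against scipy's exponentially modified Gaussian; all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Assumed components: statistical time delay ~ Exponential(tau), formative time ~ Normal(mu, sigma).
tau, mu, sigma = 40.0, 120.0, 10.0        # microseconds (illustrative values only)
n = 200                                   # the experiments used 200 measurements per condition

delays = rng.exponential(tau, n) + rng.normal(mu, sigma, n)

# The convolution of the two densities is the exponentially modified Gaussian,
# available in scipy as exponnorm with shape K = tau / sigma.
model = stats.exponnorm(K=tau / sigma, loc=mu, scale=sigma)

ks = stats.kstest(delays, model.cdf)
print(f"sample mean = {delays.mean():.1f} us (theory {mu + tau:.1f} us)")
print(f"Kolmogorov-Smirnov statistic vs. ex-Gaussian model: {ks.statistic:.3f}, p = {ks.pvalue:.2f}")
```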
Negative per capita effects of purple loosestrife and reed canary grass on plant diversity of wetland communities
DIVERSITY AND DISTRIBUTIONS, Issue 4 2006. Shon S. Schooler
Invasive plants can simplify plant community structure, alter ecosystem processes and undermine the ecosystem services that we derive from biotic diversity. Two invasive plants, purple loosestrife (Lythrum salicaria) and reed canary grass (Phalaris arundinacea), are becoming the dominant species in many wetlands across temperate North America. We used a horizontal, observational study to estimate per capita effects (PCEs) of purple loosestrife and reed canary grass on plant diversity in 24 wetland communities in the Pacific Northwest, USA. Four measures of diversity were used: the number of species (S), evenness of relative abundance (J), the Shannon–Wiener index (H′) and Simpson's index (D). We show that (1) the PCEs on biotic diversity were similar for both invasive species among the four measures of diversity we examined; (2) the relationship between plant diversity and invasive plant abundance ranges from linear (constant slope) to negative exponential (variable slope), the latter signifying that the PCEs are density-dependent; (3) the PCEs were density-dependent for measures of diversity sensitive to the number of species (S, H′, D) but not for the measure that relied solely upon relative abundance (J); and (4) invader abundance was not correlated with other potential influences on biodiversity (hydrology, soils, topography). These results indicate that both species are capable of reducing plant community diversity, and management strategies need to consider the simultaneous control of multiple species if the goal is to maintain diverse plant communities. [source]

Landslide inventories and their statistical properties
EARTH SURFACE PROCESSES AND LANDFORMS, Issue 6 2004. Bruce D. Malamud
Landslides are generally associated with a trigger, such as an earthquake, a rapid snowmelt or a large storm. The landslide event can include a single landslide or many thousands. The frequency–area (or volume) distribution of a landslide event quantifies the number of landslides that occur at different sizes. We examine three well-documented landslide events, from Italy, Guatemala and the USA, each with a different triggering mechanism, and find that the landslide areas for all three are well approximated by the same three-parameter inverse-gamma distribution. For small landslide areas this distribution has an exponential 'roll-over' and for medium and large landslide areas it decays as a power law with exponent −2.40. One implication of this landslide distribution is that the mean area of landslides in the distribution is independent of the size of the event. We also introduce a landslide-event magnitude scale m_L = log(N_LT), with N_LT the total number of landslides associated with a trigger. If a landslide-event inventory is incomplete (i.e. smaller landslides are not included), the partial inventory can be compared with our landslide probability distribution and the corresponding landslide-event magnitude inferred. This technique can be applied to inventories of historical landslides, inferring the total number of landslides that occurred over geologic time, and how many of these have been erased by erosion, vegetation, and human activity. We have also considered three rockfall-dominated inventories, and find that the frequency–size distributions differ substantially from those associated with other landslide types. We suggest that our proposed frequency–size distribution for landslides (excluding rockfalls) will be useful in quantifying the severity of landslide events and the contribution of landslides to erosion. Copyright © 2004 John Wiley & Sons, Ltd. [source]
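The landslide inventory paper above describes a three-parameter inverse-gamma frequency-area distribution with a power-law tail exponent of −2.40 and a magnitude scale m_L = log(N_LT). A sketch of how those two ingredients can be written down numerically; the scale and shift parameters are placeholders (assumptions), not the published fit, and the logarithm is taken here as base 10.

```python
import numpy as np
from scipy import stats

# Three-parameter inverse-gamma probability density for landslide area A (km^2):
#   p(A) = 1/(a*Gamma(rho)) * (a/(A - s))**(rho + 1) * exp(-a/(A - s)),
# which scipy expresses as invgamma(a=rho, loc=s, scale=a).
rho = 1.40            # gives a power-law tail exponent of -(rho + 1) = -2.40
a_scale = 1.0e-3      # km^2, scale parameter (placeholder value)
s = -1.0e-4           # km^2, shift parameter (placeholder value)
area_pdf = stats.invgamma(a=rho, loc=s, scale=a_scale)

areas = np.array([1e-3, 1e-2, 1e-1, 1.0])   # km^2
print("p(A):", area_pdf.pdf(areas))

# Landslide-event magnitude scale: m_L = log10(total number of landslides in the event).
def landslide_magnitude(n_total):
    return np.log10(n_total)

print("m_L for an 11,000-landslide event:", round(landslide_magnitude(11_000), 2))
```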
Seed weevils living on the edge: pressures and conflicts over body size in the endoparasitic Curculio larvae
ECOLOGICAL ENTOMOLOGY, Issue 3 2009. Raúl Bonal
1. Body size in parasitic insects can be subjected to contrasting selective pressures, especially if they complete their development within a single host. On the one hand, a larger body size is associated with higher fitness. On the other hand, the host offers a discrete amount of resources, thus constraining the evolution of a disproportionate body size. 2. The present study used the weevil Curculio elephas as a study model. Larvae develop within a single acorn, feeding on its cotyledons, and larval body size is strongly related to individual fitness. 3. The relationship between larval and acorn size was negatively exponential. Larval growth was constrained in small acorns, which did not provide enough food for the weevils to attain their potential size. Larval size increased and levelled off in acorns over a certain size (inflexion point), in which cotyledons were rarely depleted. When there was more than one larva per acorn, a larger acorn was necessary to avoid food depletion. 4. The results show that C. elephas larvae are sometimes endoparasitic, living on the edge of host holding capacity. If they were smaller they could avoid food depletion more easily, but the fitness benefits linked to a larger size have probably promoted body size increase. The strong negative effects of conspecific competition may have influenced the female strategy of laying a single egg per seed. 5. Being larger and fitter, but always within the limits of the available host sizes, may be one main evolutionary dilemma in endoparasites. [source]

A cost analysis of ranked set sampling to estimate a population mean
ENVIRONMETRICS, Issue 3 2005. Rebecca A. Buchanan
Ranked set sampling (RSS) can be a useful environmental sampling method when measurement costs are high but ranking costs are low. RSS estimates of the population mean can have higher precision than estimates from a simple random sample (SRS) of the same size, leading to potentially lower sampling costs from RSS than from SRS for a given precision. However, RSS introduces ranking costs not present in SRS; these costs must be considered in determining whether RSS is cost effective. We use a simple cost model to determine the minimum ratio of measurement to ranking costs (cost ratio) necessary for RSS to be as cost effective as SRS for data from the normal, exponential, and lognormal distributions. We consider both equal and unequal RSS allocations and two types of estimators of the mean: the typical distribution-free (DF) estimator and the best linear unbiased estimator (BLUE). The minimum cost ratio necessary for RSS to be as cost effective as SRS depends on the underlying distribution of the data, as well as on the allocation and type of estimator used. Most minimum necessary cost ratios are in the range of 1–6, and are lower for BLUEs than for DF estimators. The higher the prior knowledge of the distribution underlying the data, the lower the minimum necessary cost ratio and the more attractive RSS is over SRS. Copyright © 2005 John Wiley & Sons, Ltd. [source]
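The ranked set sampling abstract above rests on RSS mean estimators being more precise than SRS estimators based on the same number of measured units. A small simulation sketch under idealised assumptions (perfect ranking, balanced allocation, lognormal data); it illustrates only the variance comparison, not the paper's cost model.

```python
import numpy as np

rng = np.random.default_rng(5)

def srs_mean(pop_sampler, n):
    """Mean of a simple random sample of size n."""
    return pop_sampler(n).mean()

def rss_mean(pop_sampler, set_size, cycles):
    """Balanced ranked set sample mean with perfect ranking.

    For each rank r (1..set_size) and each cycle, draw a set of `set_size` units,
    rank them (here by their true values, i.e. perfect ranking) and measure only
    the r-th ranked unit. Total measured units: set_size * cycles.
    """
    measured = []
    for _ in range(cycles):
        for r in range(set_size):
            unit_set = np.sort(pop_sampler(set_size))
            measured.append(unit_set[r])
    return np.mean(measured)

sampler = lambda n: rng.lognormal(mean=0.0, sigma=0.5, size=n)
set_size, cycles = 3, 4            # both designs measure 12 units
reps = 5000

srs = np.array([srs_mean(sampler, set_size * cycles) for _ in range(reps)])
rss = np.array([rss_mean(sampler, set_size, cycles) for _ in range(reps)])
print(f"var(SRS mean) = {srs.var():.5f}")
print(f"var(RSS mean) = {rss.var():.5f}  (smaller under perfect ranking)")
```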
On the distribution of wildfire sizes
ENVIRONMETRICS, Issue 6 2003. Frederic Paik Schoenberg
A variety of models for the wildfire size distribution are examined using data on Los Angeles County wildfires greater than 100 acres between 1950 and 2000. In addition to graphs and likelihood criteria, Kolmogorov–Smirnov and Cramér–von Mises statistics are used to compare the models. The tapered Pareto distribution appears to fit the data quite well and offers some advantages over the untapered Pareto distribution, while alternatives including the lognormal, half-normal, exponential and extremal distributions fit poorly. The size distribution appears to be quite stable over the examination period, though inspection of the transformed wildfire sizes for the tapered Pareto reveals some limited trend in the residuals, indicating a very slight gradual decrease in the average fire size in Los Angeles County over these 50 years. Copyright © 2003 John Wiley & Sons, Ltd. [source]

Synaptic stimulation of nicotinic receptors in rat sympathetic ganglia is followed by slow activation of postsynaptic potassium or chloride conductances
EUROPEAN JOURNAL OF NEUROSCIENCE, Issue 8 2000. Oscar Sacchi
Two slow currents have been described in rat sympathetic neurons during and after tetanization of the whole preganglionic input. Both effects are mediated by nicotinic receptors activated by native acetylcholine (ACh). A first current, indicated as I_AHPsyn, is calcium dependent and voltage independent, and is consistent with an I_AHP-type potassium current sustained by calcium ions accompanying the nicotinic synaptic current. The conductance activated by a standard synaptic train was about 3.6 nS per neuron; it was detected in isolation in 14 out of a 52-neuron sample. A novel current, I_ADPsyn, was described in 42/52 of the sample as a post-tetanic inward current, which increased in amplitude with increasing membrane potential negativity and exhibited a null point close to the holding potential and the cell's momentary chloride equilibrium potential. I_ADPsyn developed during synaptic stimulation and decayed thereafter according to a single exponential (mean τ = 148.5 ms) in 18 neurons or according to a two-exponential time course (τ = 51.8 and 364.9 ms, respectively) in 19 different neurons. The mean peak conductance activated was about 20 nS per neuron. I_ADPsyn was calcium independent; it was affected by internal and external chloride concentration, but was insensitive to specific blockers (anthracene-9-carboxylic acid, 9AC) of the chloride channels open in the resting neuron. It is suggested that g_ADPsyn represents a specific chloride conductance activatable by intense nicotinic stimulation; in some neurons it is even associated with single excitatory postsynaptic potentials (EPSCs). Both I_AHPsyn and I_ADPsyn are apparently devoted to reducing neuronal excitability during and after intense synaptic stimulation. [source]
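The synaptic-current study above separates neurons whose post-tetanic current decays as a single exponential from those needing a two-exponential time course. A generic curve-fitting sketch of that distinction; the simulated trace and time constants are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_exp(t, a, tau):
    return a * np.exp(-t / tau)

def two_exp(t, a1, tau1, a2, tau2):
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

rng = np.random.default_rng(6)
t = np.linspace(0, 1500, 300)                      # ms
# Simulated decaying current (arbitrary units) with two components plus noise.
trace = two_exp(t, 0.7, 50.0, 0.3, 360.0) + rng.normal(0, 0.01, t.size)

p1, _ = curve_fit(one_exp, t, trace, p0=[1.0, 150.0], maxfev=10000)
p2, _ = curve_fit(two_exp, t, trace, p0=[0.5, 40.0, 0.5, 300.0], maxfev=10000)

rss1 = np.sum((trace - one_exp(t, *p1)) ** 2)
rss2 = np.sum((trace - two_exp(t, *p2)) ** 2)
print(f"single exponential: tau = {p1[1]:.1f} ms, RSS = {rss1:.4f}")
print(f"double exponential: tau1 = {p2[1]:.1f} ms, tau2 = {p2[3]:.1f} ms, RSS = {rss2:.4f}")
```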
Comparison of electromagnetic field for two different lightning pulse current models
EUROPEAN TRANSACTIONS ON ELECTRICAL POWER, Issue 4 2001. A. Andreotti
In this paper the electromagnetic field produced by a lightning current analytically described by the so-called double exponential model is compared with the field produced by the same current (same peak, rise and decay time) analytically described by the model proposed by Heidler. The exponential model has been widely used in the literature for its simplicity and its relatively good accuracy. The Heidler model is more complex, but removes the main problem shown by the double exponential: the non-zero derivative at the beginning of the lightning pulse, in contrast with the physical phenomenon. Both models are now used in lightning electromagnetic pulse (LEMP) simulations, and in the paper we aim to make a comparison between the two. In particular, we show that the two models are fairly equivalent in the frequency range up to 2 MHz, namely the range of interest of typical lightning phenomena. In addition, the double exponential model is shown to be conservative at higher frequencies. [source]
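The lightning paper above compares the double exponential and Heidler channel-base current models. A sketch of the two standard waveform expressions together with a crude FFT-based look at their spectra; the parameter values are assumptions chosen only for illustration, not those of the paper.

```python
import numpy as np

def double_exponential(t, i0=30e3, alpha=1.4e4, beta=4.0e6):
    """i(t) = I0 * (exp(-alpha*t) - exp(-beta*t)); note the non-zero initial derivative."""
    return i0 * (np.exp(-alpha * t) - np.exp(-beta * t))

def heidler(t, i0=30e3, tau1=1.8e-6, tau2=95e-6, n=10):
    """Heidler function: zero initial derivative and a controllable front steepness."""
    eta = np.exp(-(tau1 / tau2) * (n * tau2 / tau1) ** (1.0 / n))   # peak-correction factor
    x = (t / tau1) ** n
    return (i0 / eta) * x / (1.0 + x) * np.exp(-t / tau2)

dt = 1e-8
t = np.arange(0, 500e-6, dt)
for name, wave in [("double exponential", double_exponential(t)),
                   ("Heidler", heidler(t))]:
    spectrum = np.abs(np.fft.rfft(wave)) * dt
    freq = np.fft.rfftfreq(t.size, dt)
    # Compare spectral amplitude inside and outside the 2 MHz band of interest.
    for f_probe in (1e5, 1e7):
        k = np.argmin(np.abs(freq - f_probe))
        print(f"{name:18s} |I(f)| at {f_probe:.0e} Hz: {spectrum[k]:.3e} A/Hz")
```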
THE POPULATION GENETICS OF ADAPTATION: THE ADAPTATION OF DNA SEQUENCES
EVOLUTION, Issue 7 2002. H. Allen Orr
I describe several patterns characterizing the genetics of adaptation at the DNA level. Following Gillespie (1983, 1984, 1991), I consider a population presently fixed for the ith best allele at a locus and study the sequential substitution of favorable mutations that results in fixation of the fittest DNA sequence locally available. Given a wild-type sequence that is less than optimal, I derive the fitness rank of the next allele typically fixed by natural selection as well as the mean and variance of the jump in fitness that results when natural selection drives a substitution. Looking over the whole series of substitutions required to reach the best allele, I show that the mean fitness jumps occurring throughout an adaptive walk are constrained to a twofold window of values, assuming only that adaptation begins from a reasonably fit allele. I also show that the first substitution and the substitution of largest effect account for a large share of the total fitness increase during adaptation. I further show that the distribution of selection coefficients fixed throughout such an adaptive walk is exponential (ignoring mutations of small effect), a finding reminiscent of that seen in Fisher's geometric model of adaptation. Last, I show that adaptation by natural selection behaves in several respects as the average of two idealized forms of adaptation, perfect and random. [source]

THE FITNESS EFFECTS OF SPONTANEOUS MUTATIONS IN CAENORHABDITIS ELEGANS
EVOLUTION, Issue 4 2000. Larissa L. Vassilieva
Spontaneous mutation to mildly deleterious alleles has emerged as a potentially unifying component of a variety of observations in evolutionary genetics and molecular evolution. However, the biological significance of hypotheses based on mildly deleterious mutation depends critically on the rate at which new mutations arise and on their average effects. A long-term mutation-accumulation experiment with replicate lines of the nematode Caenorhabditis elegans maintained by single-progeny descent indicates that recurrent spontaneous mutation causes approximately a 0.1% decline in fitness per generation, which is about an order of magnitude less than that suggested by previous studies with Drosophila. Two rather different approaches, Bateman–Mukai and maximum likelihood, suggest that this observation, along with the observed rate of increase in the variance of fitness among lines, is consistent with a genomic deleterious mutation rate for fitness of approximately 0.03 per generation and with an average homozygous effect of approximately 12%. The distribution of mutational effects for fitness appears to have a relatively low coefficient of variation, being no more extreme than expected for a negative exponential, and for one composite fitness measure (total progeny production) approaches constancy of effects. These results are derived from assays in a benign environment. At stressful temperatures, estimates of the genomic deleterious mutation rate (for genes expressed at such temperatures) are sixfold lower, whereas those for the average homozygous effect are approximately eightfold higher. Our results are reasonably compatible with existing estimates for flies, when one considers the differences between these species in the number of germ-line cell divisions per generation and the magnitude of transposable element activity. [source]
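The mutation-accumulation abstract above uses the Bateman–Mukai method, which converts the per-generation decline in mean fitness (delta_M) and the per-generation increase in among-line variance (delta_V) into a lower bound on the genomic deleterious mutation rate and an upper bound on the average homozygous effect. A worked sketch with illustrative inputs; the delta_V value is an assumption, and the outputs are not the paper's estimates.

```python
# Bateman-Mukai moment estimators (standard formulas):
#   U_min = delta_M**2 / delta_V   -> lower bound on the genomic deleterious mutation rate
#   a_max = delta_V / delta_M      -> upper bound on the mean homozygous effect
def bateman_mukai(delta_m, delta_v):
    u_min = delta_m ** 2 / delta_v
    a_max = delta_v / delta_m
    return u_min, a_max

# Illustrative inputs: roughly a 0.1% fitness decline per generation and an assumed
# per-generation increase in among-line variance (placeholder value).
delta_m = 0.001      # per-generation decline in mean relative fitness
delta_v = 0.000033   # per-generation increase in among-line variance (assumed)

u_min, a_max = bateman_mukai(delta_m, delta_v)
print(f"U_min ~ {u_min:.3f} deleterious mutations per genome per generation")
print(f"mean homozygous effect <= {a_max:.3f} (about {100 * a_max:.1f}%)")
```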
Growth kinetics of microorganisms isolated from Alaskan soil and permafrost in solid media frozen down to −35°C
FEMS MICROBIOLOGY ECOLOGY, Issue 2 2007. Nicolai S. Panikov
We developed a procedure to culture microorganisms below the freezing point on solid media (cellulose powder or plastic film) with ethanol as the sole carbon source, without using artificial antifreezes. Enrichments from soil and permafrost obtained on such frozen solid media contained mainly fungi, and further purification resulted in the isolation of basidiomycetous yeasts of the genera Mrakia and Leucosporidium as well as ascomycetous fungi of the genus Geomyces. Contrary to solid frozen media, the enrichment of liquid nutrient solutions at 0°C or of supercooled solutions stabilized by glycerol at −1 to −5°C led to the isolation of bacteria representing the genera Polaromonas, Pseudomonas and Arthrobacter. The growth of fungi on ethanol–microcrystalline cellulose media at −8°C was exponential, with generation times of 4.6–34 days, while bacteria displayed a linear or progressively declining curvilinear dynamic. At −17 to 0°C the growth of the isolates and of the entire soil community on 14C-ethanol was continuous and characterized by yields of 0.27–0.52 g cell C (g C-substrate)−1, similar to growth above the freezing point. The 'state of maintenance', implying measurable catabolic activity of non-growing cells, was not confirmed. Below −18 to −35°C, the isolated organisms were able to grow only transiently for 3 weeks after cooling, with measurable respiratory and biosynthetic (14CO2 uptake) activity. Then metabolic activity declined to zero, and the microorganisms entered a state of reversible dormancy. [source]

Stream macroinvertebrate occurrence along gradients in organic pollution and eutrophication
FRESHWATER BIOLOGY, Issue 7 2010. Nikolai Friberg
1. We analysed a large number of concurrent samples of macroinvertebrate communities and chemical indicators of eutrophication and organic pollution [total-P, total-N, NH4-N, biological oxygen demand (BOD5)] from 594 Danish stream sites. Samples were taken over an 11-year time span as part of the Danish monitoring programme on the aquatic environment. Macroinvertebrate communities were sampled in spring using a standardised kick-sampling procedure, whereas chemical variables were sampled six to 24 times per year per site. Habitat variables were assessed once, when macroinvertebrates were sampled. 2. The plecopteran Leuctra showed a significant negative exponential relationship (r2 = 0.90) with BOD5 and occurred at only 16% of the sites with BOD5 above 1.6 mg L−1. Sharp declines with increasing BOD5 levels were found for the trichopteran families Sericostomatidae and Glossosomatidae, although they appeared to be slightly less sensitive than Leuctra. Other plecopterans such as Isoperla showed a similar type of response curve to Leuctra (negative exponential) but occurred at sites with relatively high concentrations of BOD5, up to 3–4 mg L−1. In contrast, the response curve of the isopod Asellus aquaticus followed a saturation function reaching a plateau above 3–4 mg L−1 BOD5, and the dipteran Chironomus showed an exponential increase in occurrence with increasing BOD5 concentration. 3. Macroinvertebrate occurrence appeared to be related primarily to concentrations of BOD5, NH4-N and total-P, whereas there were almost no relationships to total-N. Occurrence of a number of taxa showed a stronger relationship to habitat conditions (width and substrate) than to chemical variables. 4. Important macroinvertebrate taxa are reduced at concentrations of BOD5 that are normally perceived as indicating unimpacted stream site conditions. Our results confirmed sensitivity/tolerance patterns used by existing bioassessment systems only to some degree. [source]
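The stream-macroinvertebrate summary above reports a negative exponential decline of Leuctra occurrence with BOD5 and a saturating response for Asellus aquaticus. A hedged sketch of fitting those two response shapes to occurrence-frequency data; the data values are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Proportion of sites occupied in BOD5 bins (mg/L); the values are invented placeholders.
bod5 = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])
leuctra = np.array([0.80, 0.55, 0.35, 0.22, 0.10, 0.05, 0.02])
asellus = np.array([0.10, 0.30, 0.50, 0.65, 0.80, 0.85, 0.88])

def neg_exponential(x, a, b):
    """Occurrence declining exponentially with BOD5 (sensitive taxa such as Leuctra)."""
    return a * np.exp(-b * x)

def saturation(x, c, k):
    """Saturating increase towards a plateau (tolerant taxa such as Asellus)."""
    return c * x / (k + x)

p_l, _ = curve_fit(neg_exponential, bod5, leuctra, p0=[1.0, 1.0])
p_a, _ = curve_fit(saturation, bod5, asellus, p0=[1.0, 1.0])
print(f"Leuctra: occurrence ~ {p_l[0]:.2f} * exp(-{p_l[1]:.2f} * BOD5)")
print(f"Asellus: occurrence ~ {p_a[0]:.2f} * BOD5 / ({p_a[1]:.2f} + BOD5)")
```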
A comparative study of the dispersal of 10 species of stream invertebrates
FRESHWATER BIOLOGY, Issue 9 2003. J. M. Elliott
1. Apart from downstream dispersal through invertebrate drift, few quantitative data are available to model the dispersal of stream invertebrates, i.e. the outward spreading of animals from their point of origin or release. The present study provides comparative data for 10 species, using two independent methods: unmarked animals in six stream channels built over a stony stream and marked animals in the natural stream. Experiments were performed in April and June 1973 and 1974, with initial numbers of each species varying from 20 to 80 in the stream channels and 20 to 60 for marked animals. 2. Results were the same for marked invertebrates and those in the channels. Dispersal was not density-dependent; the number of dispersing animals was a constant proportion of the initial number for each species. The relationship between upstream or downstream dispersal distance and the number of animals travelling that distance was well described by an inverse power function for all species (exponential and log models were poorer fits). Results varied between species but were similar within species for the 4 months, and therefore were unaffected by variations in mean water velocity (range 0.04–0.35 m s−1) or water temperature (range 6.7–8.9 °C in April, 12.1–14.8 °C in June). 3. Species were arranged in order, according to their dispersal abilities. Three carnivores (Perlodes, Rhyacophila, Isoperla) dispersed most rapidly (70–91% in 24 h, maximum distances 9.5–13.5 m per day), followed by two species (Protonemura, Rhithrogena) in which about half the initial numbers dispersed (50–51% in 24 h, 7.5–8 m per day), and four species (Ecdyonurus, Hydropsyche, Gammarus, Baetis) in which less than half dispersed (33–40% in 24 h, 5.5–7 m per day). Dispersal was predominantly upstream for all nine species. Few larvae (20%) of Potamophylax dispersed, with similar maximum upstream and downstream distances of 3.5 m per day. The mean time spent drifting downstream was known for seven species from previous studies, and correlated positively with their dispersal distances. Therefore, the species formed a continuum from rapid to very slow dispersers. These interspecific differences should be considered when evaluating the role of dispersal in the maintenance of genetic diversity in stream invertebrates, and in their ability to colonise or re-colonise habitats. [source]

Biodiversity on land and in the sea
GEOLOGICAL JOURNAL, Issue 3-4 2001. Michael J. Benton
Life on land today is as much as 25 times as diverse as life in the sea. Paradoxically, this extraordinarily high level of continental biodiversity has been achieved in a shorter time and it occupies a much smaller area of the Earth's surface than does marine biodiversity. Raw palaeontological data suggest very different models for the diversification of life on land and in the sea. The well-studied marine fossil record appears to show evidence for an equilibrium model of diversification, with phases of rapid radiation followed by plateaux that may indicate times of equilibrium diversity. The continental fossil record shows exponential diversification from the Silurian to the present. These differences appear to be real: the continental fossil record is unlikely to be so poor that all evidence for a high initial equilibrial diversity has been lost. In addition, it is not clear that the apparently equilibrial marine model is correct, since it is founded on studies at familial level. At species level, a logistic family-level curve probably breaks down to an exponential. The rocketing diversification rates of flowering plants, insects, and other land life are evidently hugely different from the more sluggish rates of diversification of life in the sea, perhaps as a result of greater endemism and habitat complexity on land. Copyright © 2001 John Wiley & Sons, Ltd. [source]

Estimates of CO2 uptake and release among European forests based on eddy covariance data
GLOBAL CHANGE BIOLOGY, Issue 9 2004. Albert I. J. M. Van Dijk
The net ecosystem exchange (NEE) of forests represents the balance of gross primary productivity (GPP) and respiration (R). Methods to estimate these two components from eddy covariance flux measurements are usually based on a functional relationship between respiration and temperature that is calibrated for night-time (respiration) fluxes and subsequently extrapolated using daytime temperature measurements. However, respiration fluxes originate from different parts of the ecosystem, each of which experiences its own course of temperature. Moreover, if the temperature–respiration function is fitted to combined data from different stages of biological development or seasons, a spurious temperature effect may be included that will lead to overestimation of the direct effect of temperature and therefore to overestimates of daytime respiration. We used the EUROFLUX eddy covariance data set for 15 European forests and pooled data per site, per month and for conditions of low and sufficient soil moisture, respectively. We found that using air temperature (measured above the canopy) rather than soil temperature (measured 5 cm below the surface) yielded the most reliable and consistent exponential (Q10) temperature–respiration relationship. A fundamental difference in air-temperature-based Q10 values for different sites, times of year or soil moisture conditions could not be established; all were in the range 1.6–2.5. However, base respiration (R0, i.e. the respiration rate scaled to 0°C) did vary significantly among sites and over the course of the year, with increased base respiration rates during the growing season. We used the overall mean Q10 of 2.0 to estimate annual GPP and R. Testing suggested that the uncertainty in total GPP and R associated with the method of separation was generally well within 15%. For the sites investigated, we found a positive relationship between GPP and R, indicating that there is a latitudinal trend in NEE because the absolute decrease in GPP towards the pole is greater than that in R. [source]
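The eddy covariance abstract above partitions NEE into GPP and respiration using an exponential Q10 model calibrated on night-time fluxes. A minimal sketch of that calibration and daytime extrapolation, assuming R(T) = R0 * Q10^(T/10), the sign convention NEE = R − GPP, and synthetic flux values.

```python
import numpy as np
from scipy.optimize import curve_fit

def q10_model(temp_c, r0, q10):
    """Ecosystem respiration (umol CO2 m-2 s-1) as an exponential function of air temperature."""
    return r0 * q10 ** (temp_c / 10.0)

rng = np.random.default_rng(7)

# Synthetic night-time data: night-time NEE equals respiration (no photosynthesis).
t_night = rng.uniform(2, 18, 200)                       # deg C
nee_night = q10_model(t_night, r0=1.5, q10=2.0) + rng.normal(0, 0.3, t_night.size)

(r0_hat, q10_hat), _ = curve_fit(q10_model, t_night, nee_night, p0=[1.0, 2.0])
print(f"fitted R0 = {r0_hat:.2f}, Q10 = {q10_hat:.2f}")

# Daytime partitioning: with NEE = R - GPP, extrapolate R from daytime air temperature
# and recover GPP as R - NEE (negative NEE means net uptake).
t_day = np.array([12.0, 18.0, 24.0])
nee_day = np.array([-8.0, -12.0, -15.0])                # assumed daytime NEE values
r_day = q10_model(t_day, r0_hat, q10_hat)
gpp_day = r_day - nee_day
print("daytime respiration:", np.round(r_day, 2))
print("daytime GPP:        ", np.round(gpp_day, 2))
```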
Bird species numbers in an archipelago of reeds at Lake Velence, Hungary
GLOBAL ECOLOGY, Issue 6 2000. András Báldi
1. Bird species numbers were studied on 109 reed islands at Lake Velence, Hungary, in the 1993 and 1994 breeding seasons. The aim was to describe and account for the abundance and distribution patterns of the bird species. 2. It was expected that an exponential model would fit the calculated species–area curves. However, for the 1993 data, both the power function (LogS ~ LogArea) and the exponential (S ~ LogArea) models did so, while the power function, exponential and linear (S ~ A) models fitted the curves for the 1994 data. 3. The results showed that the pattern was not random: a collection of small islands held more species than a few large islands with the same total area. 4. The relative species richness of small islands is a result of the preference of most common passerine bird species for the edges of reed islands. Most individuals were found in the first 5 m of the reedbed, and no edge avoidance was detected on a local spatial scale. Large, rarer species (e.g. Great White Egret), however, were found to be dependent on large reed islands. 5. Comparison of the results with two other studies on bird communities of reed islands revealed that the type of landscape matrix (e.g. deep water, shallow water or agricultural land) among reed patches significantly influences bird communities. Deep water was dominated by grebes and coot, shallow water by reed-nesting passerines, and farmed areas by reed- and bush-nesting passerines. [source]

Predicting the Tails of Breakthrough Curves in Regional-Scale Alluvial Systems
GROUND WATER, Issue 4 2007. Yong Zhang
The late tail of the breakthrough curve (BTC) of a conservative tracer in a regional-scale alluvial system is explored using Monte Carlo simulations. The ensemble numerical BTC, for an instantaneous point source injected into the mobile domain, has a heavy late tail transforming from power law to exponential due to a maximum thickness of clayey material. Haggerty et al.'s (2000) multiple-rate mass transfer (MRMT) method is used to predict the numerical late-time BTCs for solutes in the mobile phase. We use a simple analysis of the thicknesses of fine-grained units noted in boring logs to construct the memory function that describes the slow decline of concentrations at very late time. The good fit between the predictions and the numerical results indicates that the late-time BTC can be approximated by a summation of a small number of exponential functions, and that its shape depends primarily on the thicknesses and the associated volume fractions of immobile water in "blocks" of fine-grained material. The prediction of the late-time BTC using the MRMT method relies on an estimate of the average advective residence time, t_ad. The predictions are not sensitive to estimation errors in t_ad, which can be approximated by t_ad ≈ L/v, where v is the arithmetic mean ground water velocity and L is the transport distance. This is the first example of deriving an analytical MRMT model from measured hydrofacies properties to predict the late-time BTC. The parsimonious model directly and quantitatively relates the observable subsurface heterogeneity to nonlocal transport parameters. [source]
A Simple Model of Soil-Gas Concentrations Sparged into an Unlined Unsaturated Zone
GROUND WATER MONITORING & REMEDIATION, Issue 2 2003. David W. Ostendorf
We derive an analytical model of soil-gas contamination sparged into an unlined unsaturated zone. A nonaqueous phase liquid (NAPL) source lies in the capillary fringe, with an exponential sparge constant within the radius of influence and a constant ambient evaporation rate beyond. Advection, diffusion, and dispersion govern the conservative soil-gas response, expressed as a quasi-steady series solution with radial Bessel and hyperbolic vertical dependence. Simulations suggest that sparged contamination initially spreads beyond the radius of influence down a negative gradient. This gradient eventually reverses, leading to a subsequent influx of ambient contamination. Soil-gas concentrations accordingly reflect slowly varying source conditions as well as slowly varying diffusive transport through the radius of influence. The two time scales are independent: one depends on NAPL, airflow, and capillary fringe characteristics, the other on soil moisture, gaseous diffusivity, and unsaturated zone thickness. The influx of ambient contamination generates an asymptotic soil-gas concentration much less than the initial source concentration. The simple model is applied to a pilot-scale sparging study at Plattsburgh Air Force Base in upstate New York, with physically plausible results. [source]

Is waiting-time prioritisation welfare improving?
HEALTH ECONOMICS, Issue 2 2008. Hugh Gravelle
Rationing by waiting time is commonly used in health care systems with zero or low money prices. Some systems prioritise particular types of patient and offer them lower waiting times. We investigate whether prioritisation is welfare improving when the benefit from treatment is the sum of two components, one of which is not observed by providers. We show that positive prioritisation (shorter waits for patients with higher observable benefit) is welfare improving if the mean observable benefit of the patients who are indifferent about receiving the treatment is smaller than the mean observable benefit of the patients who receive the treatment. This is true (a) if the distribution of the unobservable benefit is uniform, for any distribution of the observable benefit; or (b) if the distribution of the observable benefit is uniform and the distribution of the unobservable benefit is log-concave. We also show that prioritisation is never welfare increasing if and only if the distribution of unobservable benefit is negative exponential. Copyright © 2007 John Wiley & Sons, Ltd. [source]

On the use of partial probability weighted moments in the analysis of hydrological extremes
HYDROLOGICAL PROCESSES, Issue 10 2007. Ugo Moisello
The use of partial probability weighted moments (PPWM) for estimating hydrological extremes is compared to that of probability weighted moments (PWM). Firstly, estimates from at-site data are considered. Two Monte Carlo analyses, conducted using continuous and empirical parent distributions (of peak discharge and daily rainfall annual maxima) and applying four different distributions (Gumbel, Fréchet, GEV and generalized Pareto), show that the estimates obtained from PPWMs are better than those obtained from PWMs if the parent distribution is unknown, as happens in practice. Secondly, the use of partial L-moments (obtained from PPWMs) as diagnostic tools is considered. The theoretical partial L-diagrams are compared with the experimental data. Five different distributions (exponential, Pareto, Gumbel, GEV and generalized Pareto) and 297 samples of peak discharge annual maxima are considered. Finally, the use of PPWMs with regional data is investigated. Three different kinds of regional analyses are considered. The first is the regression of quantile estimates on basin area; this study, conducted applying the GEV distribution to peak discharge annual maxima, shows that the regressions obtained with PPWMs are slightly better than those obtained with PWMs. The second kind of regional analysis is the parametric one, of which four different models are considered; the congruence between local and regional estimates, examined using peak discharge annual maxima, is sometimes higher for PPWMs, sometimes for PWMs. The third kind of regional analysis uses the index flood method; this study, conducted applying the GEV distribution to synthetic data from a lognormal joint distribution, shows that better estimates are obtained sometimes from PPWMs, sometimes from PWMs. All the results seem to indicate that using PPWMs can constitute a valid tool, provided that the influence of outliers, of course higher with censored samples, is kept under control. Copyright © 2007 John Wiley & Sons, Ltd. [source]

Power function decay of hydraulic conductivity for a TOPMODEL-based infiltration routine
HYDROLOGICAL PROCESSES, Issue 18 2006. Jun Wang
TOPMODEL rainfall-runoff hydrologic concepts are based on soil saturation processes, where soil controls on hydrograph recession have been represented by linear, exponential, and power function decay with soil depth. Although these decay formulations have been incorporated into baseflow decay and topographic index computations, only the linear and exponential forms have been incorporated into infiltration subroutines. This study develops a power function formulation of the Green and Ampt infiltration equation for the cases where the power n = 1 and 2. This new function was created to represent field measurements in the Ward Pound Ridge drinking water supply area of New York City, USA, and to provide support for similar sites reported by other researchers. Derivation of the power-function-based Green and Ampt model begins with the Green and Ampt formulation used by Beven in deriving an exponential decay model. Differences between the linear, exponential, and power function infiltration scenarios are sensitive to the relative difference between rainfall rates and hydraulic conductivity. Using a low-frequency 30 min design storm with 4.8 cm h−1 rain, the n = 2 power function formulation allows for a faster decay of infiltration and more rapid generation of runoff. Infiltration excess runoff is rare in most forested watersheds, and the advantages of the power function infiltration routine may primarily include replication of field-observed processes in urbanized areas and numerical consistency with power function decay of baseflow and topographic index distributions. Equation development is presented within a TOPMODEL-based Ward Pound Ridge rainfall-runoff simulation. Copyright © 2006 John Wiley & Sons, Ltd. [source]
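The TOPMODEL study above builds a power-function variant of the Green and Ampt infiltration equation. As background only, here is a hedged sketch of the classical Green-Ampt capacity relation f = K*(1 + psi*dtheta/F), advanced by simple time stepping under the abstract's 30 min, 4.8 cm/h design storm; the soil parameters are assumptions, and the power-function extension itself is not reproduced.

```python
# Classical Green-Ampt infiltration capacity: f = K * (1 + psi * dtheta / F),
# where F is cumulative infiltration. This is the standard starting point that the
# abstract modifies; the parameter values below are illustrative assumptions.
K = 1.0          # cm/h, saturated hydraulic conductivity
psi = 11.0       # cm, wetting-front suction head
dtheta = 0.3     # initial soil moisture deficit (dimensionless)
rain = 4.8       # cm/h, design-storm intensity quoted in the abstract
dt = 0.01        # h, time step
duration = 0.5   # h (30 min design storm)

F = 1e-6         # cm, start from a tiny non-zero cumulative infiltration
runoff = 0.0
for _ in range(int(duration / dt)):
    f_cap = K * (1.0 + psi * dtheta / F)      # current infiltration capacity
    infil = min(rain, f_cap) * dt             # infiltrate at capacity or at the rainfall rate
    F += infil
    runoff += rain * dt - infil               # excess rainfall becomes runoff

print(f"cumulative infiltration after 30 min: {F:.2f} cm")
print(f"infiltration-excess runoff:           {runoff:.2f} cm")
```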