Threshold Value
Selected Abstracts

Changes in Quality of Life in Epilepsy: How Large Must They Be to Be Real?
EPILEPSIA, Issue 1 2001. Samuel Wiebe
Summary: Purpose: The study goal was to assess the magnitude of change in generic and epilepsy-specific health-related quality-of-life (HRQOL) instruments needed to exclude chance or error at various levels of certainty in patients with medically refractory epilepsy. Methods: Forty patients with temporal lobe epilepsy and clearly defined criteria of clinical stability received HRQOL measurements twice, 3 months apart, using the Quality of Life in Epilepsy Inventory-89 and -31 (QOLIE-89 and QOLIE-31), the Liverpool Impact of Epilepsy, adverse drug events and seizure severity scales, and the generic Health Utilities Index (HUI-III). The standard error of measurement and test-retest reliability were obtained for all scales and for the QOLIE-89 subscales. Using the Reliable Change Index described by Jacobson and Truax, we assessed the magnitude of change required by HRQOL instruments to be 90% and 95% certain that real change has occurred, as opposed to change due to chance or measurement error. Results: Clinical features, point estimates and distribution of HRQOL measures, and test-retest reliability (all > 0.70) were similar to those previously reported. Score changes of ±13 points in QOLIE-89, ±15 in QOLIE-31, ±6.3 in Liverpool seizure severity (ictal), ±11 in Liverpool adverse drug events, ±0.25 in HUI-III, and ±9.5 in impact of epilepsy exclude chance or measurement error with 90% certainty. These correspond, respectively, to 13, 15, 17, 18, 25, and 32% of the potential range of change of each instrument. Conclusions: Threshold values for real change varied considerably among HRQOL tools but were relatively small for QOLIE-89, QOLIE-31, Liverpool Seizure Severity, and adverse drug events. In some instruments, even relatively large changes cannot rule out chance or measurement error.
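The Reliable Change Index referenced above has a standard form: the threshold is z times the standard error of the difference between two administrations of the instrument. A minimal sketch in Python; the function name and the numeric inputs are illustrative, not the study's data.

```python
import math

def reliable_change_threshold(sd, retest_r, z=1.645):
    """Score change needed to exceed measurement error with the given
    certainty (Jacobson & Truax Reliable Change Index).

    sd       -- standard deviation of the instrument's scores
    retest_r -- test-retest reliability
    z        -- 1.645 for 90% certainty, 1.96 for 95%
    """
    sem = sd * math.sqrt(1.0 - retest_r)   # standard error of measurement
    s_diff = sem * math.sqrt(2.0)          # SE of the difference of two scores
    return z * s_diff

# Illustrative inputs, not the study's data:
print(round(reliable_change_threshold(sd=15.0, retest_r=0.85), 1))  # 13.5
```

With these assumed values the 90% threshold is about 13.5 points, the same order as the ±13-point QOLIE-89 threshold reported above.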
The relation between the Reliable Change Index and other measures of change, and its distinction from measures of minimum clinically important change, are discussed.

Determination of Transverse Dispersion Coefficients from Reactive Plume Lengths
GROUND WATER, Issue 2 2006. Olaf A. Cirpka
With most existing methods, transverse dispersion coefficients are difficult to determine. We present a new, simple, and robust approach based on steady-state transport of a reacting agent introduced over a certain height into the porous medium of interest. The agent reacts with compounds in the ambient water; in our application, we use an alkaline solution injected into acidic ambient water. Threshold values of pH are visualized by adding standard pH indicators. Since aqueous-phase acid-base reactions can be considered practically instantaneous and the only process leading to mixing of the reactants is transverse dispersion, the length of the plume is controlled by the ratio of transverse dispersion to advection. We use existing closed-form expressions for multidimensional steady-state transport of conservative compounds in order to evaluate the concentration distributions of the reacting compounds. Based on these results, we derive an easy-to-use expression for the length of the reactive plume: it is proportional to the injection height squared times the velocity, and inversely proportional to the transverse dispersion coefficient. Solving this expression for the transverse dispersion coefficient, we can estimate its value from the length of the alkaline plume. We apply the method to two experimental setups of different dimension. The computed transverse dispersion coefficients are rather small. We conclude that at slow but realistic ground water velocities, the contribution of effective molecular diffusion to transverse dispersion cannot be neglected. This results in plume lengths that increase with increasing velocity.
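The stated proportionality, plume length L ~ h²·v/Dt, can be inverted to estimate the transverse dispersion coefficient from an observed plume length. A sketch under stated assumptions: the dimensionless prefactor alpha stands in for the constant of the paper's closed-form expression, and the numerical inputs are hypothetical.

```python
def transverse_dispersion(h, v, plume_length, alpha=1.0):
    """Invert the stated proportionality L ~ h^2 * v / Dt for Dt.

    alpha is a placeholder for the dimensionless constant of the paper's
    closed-form expression, not its actual value.
    h in m, v in m/s, plume_length in m -> Dt in m^2/s.
    """
    return alpha * h ** 2 * v / plume_length

# Hypothetical setup: 1 cm injection height, 1 m/day velocity, 2 m plume.
Dt = transverse_dispersion(h=0.01, v=1.0 / 86400.0, plume_length=2.0)
print(f"{Dt:.2e} m^2/s")
```

The resulting value (order 10⁻¹⁰ m²/s for these inputs) illustrates why the abstract notes that the computed coefficients are rather small and molecular diffusion cannot be neglected.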
Efficiency of point abundance sampling by electro-fishing modified for short fishes
JOURNAL OF APPLIED ICHTHYOLOGY, Issue 5 2003. M. Scholten
Summary: The assessment of fish densities using point abundance sampling by electro-fishing requires information about the size of the sample area. For electro-fishing, the effective fishing range depends on biological effects such as species and length of fish, as well as physical effects such as conductivity of the water or substrate type. The present study systematically investigates the impact of conductivity and substrate type on the extension of the electrical field of a battery-powered electro-fishing gear (DEKA 3000, Marsberg, Germany) modified for larval and juvenile fishes. Threshold values for galvanotaxis were examined for juvenile fishes of five species in terms of current densities. Based on 71 experiments, a general function relating body length to current density threshold values was developed. An optimal electrical current flow period of 10 s was determined. For three different substrate types (gravel, sand, mud), formulae were developed to quantify biological and physical effects on the effective fishing range; each equation included information on the length of fish and the ambient conductivity. An increase in the effective fishing range of about 10% for every 0.1 mS cm⁻¹ was established. The reduction of the fishing range over muddy substrate was about 20–30% compared with coarse gravel or sand. This study provides a sufficient tool to calculate area-related densities of larval and juvenile fishes in different habitat types of a large river system using point abundance sampling by electro-fishing. Finally, calculated fish densities were evaluated against different types of fishing gear.
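The two corrections quoted above (about +10% range per 0.1 mS cm⁻¹ of conductivity, and a 20–30% reduction over mud) can be sketched as a simple scaling rule. The compounding form, the zero-conductivity reference point, and the 25% midpoint for mud are our assumptions, not the paper's fitted formulae.

```python
def effective_range(base_range_cm, conductivity_ms, substrate="gravel"):
    """Scale a baseline effective fishing range using the two rules quoted
    in the abstract. Illustrative functional form only.

    base_range_cm   -- range at the (assumed) zero-conductivity reference
    conductivity_ms -- ambient conductivity in mS/cm
    substrate       -- "gravel", "sand", or "mud"
    """
    steps = conductivity_ms / 0.1          # number of 0.1 mS/cm increments
    r = base_range_cm * 1.10 ** steps      # +10% per increment, compounded
    if substrate == "mud":
        r *= 0.75                          # midpoint of the 20-30% reduction
    return r
```

For example, a 10 cm baseline range at 0.3 mS/cm scales to about 13.3 cm over gravel, and the same baseline over mud is cut to 7.5 cm.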
SENSORY FLAVOR PROFILING AND MAPPING OF MARKET SAMPLES OF CUMIN (CUMINUM CYMINUM L.)
JOURNAL OF FOOD QUALITY, Issue 4 2004. ANUPAMA DATTATREYA
ABSTRACT: Eight market samples of cumin (R1, R2, R3, R4, R5, R6, R7 and R8) from different regions of India were examined for sensory quality by conducting threshold tests, time-intensity (TI) profiling and flavor profiling. Principal component analysis (PCA) was carried out to group the samples. Threshold values ranged from 0.006 to 0.017%, with the R7 and R8 lots showing the lowest thresholds (0.006%). The higher intensity of aroma of R7 and R8 was further confirmed by a more lingering aroma in the TI study. Flavor profiling by quantitative descriptive analysis showed that the market samples of cumin did not differ significantly (P > 0.05). Mapping of the samples using the PCA technique showed, based on intensity of attributes, four distinct groups comprising (a) R1 and R3, (b) R7, (c) R2 and R5 and (d) R4 and R8; R6 occupied a position between groups a and b.

Threshold values of visceral fat and waist girth in Japanese obese children
PEDIATRICS INTERNATIONAL, Issue 5 2005. Kohtaro Asayama
Abstract: Background: In order to define the diagnostic criteria for visceral adipose tissue (VAT) accumulation and abdominal obesity in Japanese youths, a cross-sectional, multicenter study was conducted. Methods: Subjects were 194 boys and 96 girls ranging in age from 6 to 15 years. Obese youths were classified according to the occurrence of abnormal values in serum triglyceride, alanine aminotransferase or insulin level. A threshold value for each criterion was calculated using receiver operating characteristic (ROC) curve analysis. The areas of total abdominal adipose tissue (AT), VAT and subcutaneous adipose tissue (SAT) were estimated by single-slice computed tomography at the level of the umbilicus. Results: VAT area was greater in boys than in girls.
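Threshold selection from a ROC curve, as described in the Methods above, is commonly done by maximising Youden's J (sensitivity + specificity − 1). The sketch below shows that generic procedure; the function, its inputs, and the scoring rule are our illustration, not the study's actual analysis code.

```python
def youden_threshold(values, labels):
    """Pick the cutoff maximising Youden's J = sensitivity + specificity - 1.

    values -- candidate marker values (e.g. VAT area in cm^2)
    labels -- 1 for abnormal (case), 0 for normal (control)
    Returns (best cutoff, best J).
    """
    pos = sum(labels)
    neg = len(labels) - pos
    best_c, best_j = None, -1.0
    for c in sorted(set(values)):
        tp = sum(1 for v, y in zip(values, labels) if v >= c and y == 1)
        tn = sum(1 for v, y in zip(values, labels) if v < c and y == 0)
        j = tp / pos + tn / neg - 1.0
        if j > best_j:
            best_c, best_j = c, j
    return best_c, best_j
```

On perfectly separated toy data the procedure returns the smallest case value as the cutoff with J = 1; on real data the chosen cutoff trades sensitivity against specificity, as in the 90.5%/79.5% pair reported below for VAT area.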
The critical values for VAT area and waist circumference in all subjects were 54.8 cm² and 83.5 cm, respectively. The values for the area under the ROC curves were in the order VAT area > total AT area > waist circumference > SAT area > percentage overweight > percentage body fat. The sensitivity and specificity for VAT area were 90.5% and 79.5%, respectively. Those for waist circumference were high enough (> 70%) for clinical use. In a linear regression analysis assigning VAT area as the independent variable and waist circumference as the dependent variable, the expected value for the waist circumference was 82 cm. Conclusion: In Japanese obese youths ranging in age from 6 to 15 years, the diagnostic criterion for waist circumference was 82 cm, and that for VAT area was 55 cm².

Threshold values of random K-SAT from the cavity method
RANDOM STRUCTURES AND ALGORITHMS, Issue 3 2006. Stephan Mertens
Abstract: Using the cavity equations of Mézard, Parisi, and Zecchina [Science 297 (2002), 812] and Mézard and Zecchina [Phys Rev E 66 (2002), 056126], we derive the various threshold values for the number of clauses per variable of the random K-satisfiability problem, generalizing the previous results to K ≥ 4. We also give an analytic solution of the equations, and some closed expressions for these thresholds, in an expansion around large K. The stability of the solution is also computed. For any K, the satisfiability threshold is found to be in the stable region of the solution, which adds further credit to the conjecture that this computation gives the exact satisfiability threshold. © 2005 Wiley Periodicals, Inc. Random Struct. Alg., 2006

Long-Term Monitoring and Identification of Bridge Structural Parameters
COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 2 2009. Serdar Soyoz
This three-span, 111-m long bridge is instrumented with 13 acceleration sensors at both the superstructure and the columns. The sensor data are transmitted to a server computer wirelessly.
Modal parameters of the bridge, that is, the frequencies and the mode shapes, were identified by processing 1,707 vibration data sets collected under traffic excitations, based on which the bridge structural parameters, stiffness and mass, and the soil spring values were identified by employing the neural network technique. The identified superstructure stiffness at the beginning of the monitoring was 97% of the stiffness value based on the design drawings. In the identified modal frequencies, a variation from −10% to +10% was observed over the monitoring period. In the identified stiffness values of the bridge superstructure, a variation from −3% to +3% was observed over the monitoring period. Based on the statistical analysis of the data collected each year, a 5% decrease in the first modal frequency and a 2% decrease in the superstructure stiffness were observed over the 5-year monitoring period. Probability density functions were obtained for the stiffness values each year. Stiffness threshold values for the collapse of the bridge under operational loading can be determined; the number of years can then be assessed for which the area under the proposed probability density functions is greater than the threshold value. The information obtained in this study is therefore valuable for studying aging and long-term performance assessment of similar bridges.

On the development of low-level auditory discrimination and deficits in dyslexia
DYSLEXIA, Issue 2 2004. Burkhart Fischer
Abstract: Absolute auditory thresholds, frequency resolution and temporal resolution develop with age. It is still debated whether low-level auditory performance is of clinical significance, specifically for delayed maturation of central auditory processing. Recently, five new auditory tasks were used to study the development of low-level auditory discrimination. It was found that this development lasts up to the age of 16–18 years (on average).
Very similar tasks were now used with 432 controls and 250 dyslexic subjects in the age range of 7–22 years. For both groups, the performance in one of the tasks was not related to the performance in another task, indicating that the five tasks challenge independent subfunctions of auditory processing. Surprisingly high numbers of subjects were classified as low performers (LP), because they could not perform one or the other task at its easiest level and no threshold value could be assigned. For the dyslexics, the incidence of LP was considerably increased in all tasks and age groups as compared with the age-matched controls. The development of dynamic visual and optomotor functions and the corresponding deficits in dyslexia are discussed in relation to the auditory data presented here. Copyright © 2004 John Wiley & Sons, Ltd.

Point process methodology for on-line spatio-temporal disease surveillance
ENVIRONMETRICS, Issue 5 2005. Peter Diggle
Abstract: We formulate the problem of on-line spatio-temporal disease surveillance in terms of predicting spatially and temporally localised excursions over a pre-specified threshold value for the spatially and temporally varying intensity of a point process in which each point represents an individual case of the disease in question. Our point process model is a non-stationary log-Gaussian Cox process in which the spatio-temporal intensity, λ(x,t), has a multiplicative decomposition into two deterministic components, one describing purely spatial and the other purely temporal variation in the normal disease incidence pattern, and an unobserved stochastic component representing spatially and temporally localised departures from the normal pattern. We give methods for estimating the parameters of the model, and for making probabilistic predictions of the current intensity. We describe an application to on-line spatio-temporal surveillance of non-specific gastroenteric disease in the county of Hampshire, UK.
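The exceedance probabilities that drive this kind of surveillance scheme are typically approximated by Monte Carlo: given posterior samples of the latent risk at a location, the probability of exceeding the threshold is just the fraction of samples above it. A minimal sketch; the Gaussian draws below are synthetic stand-ins for actual posterior samples of R(x,t).

```python
import random

def exceedance_probability(samples, c):
    """Monte Carlo estimate of P{R > c | data} from posterior samples of R.

    In the surveillance model above, `samples` would be draws of the latent
    risk R(x, t) at one location and day; here they are synthetic.
    """
    return sum(1 for r in samples if r > c) / len(samples)

rng = random.Random(0)
draws = [rng.gauss(0.0, 1.0) for _ in range(100_000)]
p = exceedance_probability(draws, 1.0)  # close to 0.159 for a standard normal
```

Mapping this probability over all locations, and flagging cells where it is high, gives exactly the kind of exceedance map described in the next passage.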
The results are presented as maps of exceedance probabilities, P{R(x,t) > c | data}, where R(x,t) is the current realisation of the unobserved stochastic component of λ(x,t) and c is a pre-specified threshold. These maps are updated automatically in response to each day's incident data using a web-based reporting system. Copyright © 2005 John Wiley & Sons, Ltd.

Design of change detection algorithms based on the generalized likelihood ratio test
ENVIRONMETRICS, Issue 8 2001. Giovanna Capizzi
Abstract: A design procedure for detecting additive changes in a state-space model is proposed. Since the mean of the observations after the change is unknown, detection algorithms based on the generalized likelihood ratio test (GLR), and on window-limited type GLR, are considered. As Lai (1995) pointed out, it is very difficult to find a satisfactory choice of both window size and threshold for these change detection algorithms. The basic idea of this article is to estimate, through the stochastic approximation of Robbins and Monro, the threshold value which satisfies a constraint on the mean time between false alarms, for a specified window size. A convenient stopping rule, based on the first passage time of an F-statistic below a fixed boundary, is used to terminate the iterative approximation. Then, the window size which produces the most desirable out-of-control ARL, for a fixed value of the in-control ARL, can be selected. These change detection algorithms are applied to detect biases in the measurements of ozone recorded at one monitoring site in Bologna (Italy). Comparisons of the ARL profiles reveal that the full-GLR scheme provides much more protection than the window-limited GLR schemes against small shifts in the process, but the modified window-limited GLR provides more protection against large shifts. Copyright © 2001 John Wiley & Sons, Ltd.
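The Robbins-Monro idea used above, nudging a threshold until a false-alarm constraint is met, can be illustrated on a toy problem: tune c so that P(statistic > c) converges to a target rate alpha. The statistic here is a stand-in standard normal draw, not the paper's GLR/F-statistic, and the gain sequence and averaging are our choices.

```python
import random

def robbins_monro_threshold(alpha, n_iter=200_000, seed=1):
    """Stochastic approximation of the threshold c with P(X > c) = alpha.

    Each iteration moves c up when the draw exceeds it (too many alarms)
    and down otherwise, with a decaying gain; Polyak averaging smooths
    the final estimate.
    """
    rng = random.Random(seed)
    c, c_avg = 0.0, 0.0
    for n in range(1, n_iter + 1):
        x = rng.gauss(0.0, 1.0)
        gain = n ** -0.7                        # slowly decaying step size
        c += gain * ((1.0 if x > c else 0.0) - alpha)
        c_avg += (c - c_avg) / n                # Polyak averaging
    return c_avg
```

For alpha = 0.05 the estimate settles near the 95th percentile of the standard normal (about 1.64), which is the threshold a direct calculation would give.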
Calcite and gypsum solubility products in water-saturated salt-affected soil samples at 25°C and at least up to 14 dS m⁻¹
EUROPEAN JOURNAL OF SOIL SCIENCE, Issue 2 2010. F. Visconti
Calcite and gypsum are salts of major ions characterized by poor solubility compared with other salts that may precipitate in soils. Knowledge of calcite and gypsum solubility products in water-saturated soil samples contributes substantially to a better assessment of the processes involved in soil salinity. The new SALSOLCHEMIS code for chemical equilibrium assessment was parameterized with published analytical data for synthetic aqueous calcite- and gypsum-saturated solutions. Once parameterized, SALSOLCHEMIS was applied to calculations of the ionic activity products of calcium carbonate and calcium sulphate in 133 water-saturated soil samples from an irrigated salt-affected agricultural area in a semi-arid Mediterranean climate. During parameterization, sufficiently constant values for the ionic activity products of calcium carbonate and calcium sulphate were obtained only when the following were used in SALSOLCHEMIS: (i) the equations of Sposito & Traina for the calculation of free ion activity coefficients, (ii) the assumption of the non-existence of the Ca(HCO3)+ and CaCO3° ion pairs and (iii) a paradigm of total ion activity coefficients. The value of 4.62 can be taken as a reliable gypsum solubility product (pKs) in both simple aqueous and soil solutions, while the value of 8.43 can be taken as a reliable calcite solubility product (pKs) only in simple aqueous solutions. The saturated pastes and saturation extracts were found to be calcite over-saturated, the former significantly less so (pIAP = 8.29) than the latter (pIAP = 8.22). The calcite over-saturation of the saturated pastes increased with the soil organic matter content.
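The comparison of pIAP against pKs above is conventionally expressed as a saturation index, SI = log10(IAP/Ks) = pKs − pIAP. A one-line sketch using the values quoted in the abstract:

```python
def saturation_index(p_iap, p_ks):
    """SI = log10(IAP/Ks) = pKs - pIAP: positive means over-saturated
    (precipitation favoured), negative under-saturated, near zero equilibrium.
    """
    return p_ks - p_iap

# Calcite values quoted in the abstract (pKs = 8.43):
print(round(saturation_index(8.29, 8.43), 2))  # saturated pastes: 0.14
print(round(saturation_index(8.22, 8.43), 2))  # saturation extracts: 0.21
```

Both indices are positive, i.e. over-saturated, with the extracts more so than the pastes, consistent with the abstract's conclusion.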
Nevertheless, the inhibition of calcite precipitation is caused by the soluble organic matter above a dissolved organic carbon threshold value that lies between 7 and 12 mM. The hypothesis of thermodynamic equilibrium is more adequate for the saturated pastes than for the saturation extracts.

Correlations between pyrolysis combustion flow calorimetry and conventional flammability tests with halogen-free flame retardant polyolefin compounds
FIRE AND MATERIALS, Issue 1 2009. Jeffrey M. Cogen
Abstract: Seven halogen-free flame retardant (FR) compounds were evaluated using pyrolysis combustion flow calorimetry (PCFC) and cone calorimetry. Performance of wires coated with the compounds was evaluated using industry-standard flame tests. The results suggest that time to peak heat release rate (PHRR) and total heat released (THR) in cone calorimetry (and THR and temperature at PHRR in PCFC) should be given more attention in FR compound evaluation. Results were analyzed using flame spread theory. As predicted, the lateral flame spread velocity was independent of PHRR and heat release capacity. However, no angular dependence of flame spread velocity was observed. Thus, the thermal theory of ignition and flame spread, which assumes that ignition at the flame front occurs at a particular flame and ignition temperature, provides little insight into the performance of the compounds. However, the results are consistent with a heat release rate greater than about 66 kW/m² during flame propagation being required for sustained ignition of insulated wires containing mineral fillers, in agreement with a critical heat release rate criterion for burning. Mineral fillers can reduce the heat release rate below this threshold value by lowering the flaming combustion efficiency and fuel content. A rapid screening procedure using PCFC is suggested by logistic regression of the binary (burn/no-burn) results. Copyright © 2008 John Wiley & Sons, Ltd.
Does high nitrogen loading prevent clear-water conditions in shallow lakes at moderately high phosphorus concentrations?
FRESHWATER BIOLOGY, Issue 1 2005. María A. González Sagrario
Summary
1. The effect of total nitrogen (TN) and phosphorus (TP) loading on trophic structure and water clarity was studied during summer in 24 field enclosures fixed in, and kept open to, the sediment in a shallow lake. The experiment involved a control treatment and five treatments to which nutrients were added: (i) high phosphorus, (ii) moderate nitrogen, (iii) high nitrogen, (iv) high phosphorus and moderate nitrogen and (v) high phosphorus and high nitrogen. To reduce zooplankton grazers, 1+ fish (Perca fluviatilis L.) were stocked in all enclosures at a density of 3.7 individuals m⁻².
2. With the addition of phosphorus, chlorophyll a and the total biovolume of phytoplankton rose significantly at moderate and high nitrogen. Cyanobacteria or chlorophytes dominated in all enclosures to which we added phosphorus, as well as in the high nitrogen treatment, while cryptophytes dominated in the moderate nitrogen enclosures and the controls.
3. At the end of the experiment, the biomass of the submerged macrophytes Elodea canadensis and Potamogeton sp. was significantly lower in the dual treatments (TN, TP) than in the single-nutrient treatments and controls, and the water clarity declined. The shift to a turbid state with low plant coverage occurred at TN > 2 mg N L⁻¹ and TP > 0.13–0.2 mg P L⁻¹. These results concur with a survey of Danish shallow lakes showing that high macrophyte coverage occurred only when summer mean TN was below 2 mg N L⁻¹, irrespective of the concentration of TP, which ranged between 0.03 and 1.2 mg P L⁻¹.
4. Zooplankton biomass and the zooplankton : phytoplankton biomass ratio, and probably also the grazing pressure on phytoplankton, remained low overall in all treatments, reflecting the high fish abundance chosen for the experiment.
We saw no response to nutrient addition in total zooplankton biomass, indicating that the loss of plants and the shift to the turbid state did not result from changes in zooplankton grazing. Shading by phytoplankton and periphyton was probably the key factor.
5. Nitrogen may play a far more important role than previously appreciated in the loss of submerged macrophytes at increased nutrient loading, and in delaying re-establishment after a nutrient loading reduction. We cannot yet specify, however, a threshold value for N that would cause a shift to a turbid state, as it may vary with fish density and climatic conditions. However, the focus should be widened to the control of both N and P in the restoration of eutrophic shallow lakes.

Cyclomorphosis in Daphnia lumholtzi induced by temperature
FRESHWATER BIOLOGY, Issue 2 2000. Peder M. Yurista
Summary
1. Cyclomorphosis is a well-known phenomenon in Daphnia that involves a regular, seasonal, or induced change in body allometry. Long helmets and tail spines were induced in laboratory cultures of Daphnia lumholtzi with a temperature of 31 °C as the proximal cue (the temperature of locally occurring peak abundance in Kentucky Lake). The effect was greater in embryos than in juveniles or adults exposed to the temperature cue.
2. The temperature cue appears to have a threshold value (animals cultured at 25 or 28 °C did not develop elongated helmets or spines). The helmet and spine length receded both in D. lumholtzi kept at a constant 31 °C and when the water temperature was decreased.
3. The induced helmet in this experiment (0.66 mm on a 1.0 mm animal) was significantly longer than values reported in the literature for induction by planktivorous fish kairomones (0.25 mm on a 1.2 mm animal). The strong response to a proximal cue of temperature may require the second, weaker chemical cue for maintenance.
It is suggested that a synergistic explanation with two cues may be more appropriate for cyclomorphosis induction and maintenance in Daphnia lumholtzi; this could be tested with further studies.

Electric-Field-Assisted Nanostructuring of a Mott Insulator
ADVANCED FUNCTIONAL MATERIALS, Issue 17 2009. Vincent Dubost
Abstract: Here, the first experimental evidence is reported for a strong electromechanical coupling in the Mott insulator GaTa4Se8 that allows highly reproducible nanoscale writing by means of scanning tunneling microscopy (STM). The local electric field across the STM junction is observed to have a threshold value above which the clean (100) surface of GaTa4Se8 becomes mechanically unstable: at voltage biases > 1.1 V, the surface suddenly inflates and comes into contact with the STM tip, resulting in nanometer-sized craters. The formed pattern can be indestructibly "read" by STM at a lower voltage bias, thus allowing dense writing/reading at 5 Tdots inch⁻² at room temperature. The discovery of the electromechanical coupling in GaTa4Se8 might give new clues to the understanding of the electric-pulse-induced resistive switching recently observed in this stoichiometric Mott insulator.

Geoelectric dimensionality in complex geological areas: application to the Spanish Betic Chain
GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 3 2004. Anna Martí
SUMMARY: Rotational invariants of the magnetotelluric impedance tensor may be used to obtain information on the geometry of underlying geological structures. The set of invariants proposed by Weaver et al. (2000) allows the determination of a suitable dimensionality for the modelling of observed data. The application of the invariants to real data must take into account the errors in the data and also the fact that geoelectric structures in the Earth will not exactly fit 1-D, 2-D or simple 3-D models.
In this work we propose a method to estimate the dimensionality of geoelectric structures based on the rotational invariants, bearing in mind the experimental error of real data. A data set from the Betic Chain (Spain) is considered. We compare the errors of the invariants estimated by different approaches: classical error propagation, generation of random Gaussian noise and bootstrap resampling, and we investigate the question of the threshold value to be used in the determination of dimensionality. We conclude that the errors of the invariants can be properly estimated by classical error propagation, but the generation of random values is better for ensuring stability in the errors of the strike direction and distortion parameters. The use of a threshold value between 0.1 and 0.15 is recommended for real data of medium to high quality. The results for the Betic Chain show that the general behaviour is 3-D with a disposition of 2-D structures, which may be correlated with the nature of the crust of the region.

Amplified Spontaneous Emission of Poly(ladder-type phenylene)s: The Influence of Photophysical Properties on ASE Thresholds
ADVANCED FUNCTIONAL MATERIALS, Issue 20 2008. Frédéric Laquai
Abstract: Amplified spontaneous emission (ASE) of a series of blue-emitting poly(ladder-type phenylene)s (LPPPs) has been studied in thin-film polymer waveguide structures. The chemically well-defined step-ladder polymers consist of an increasing number of bridged phenylene rings per monomer unit, starting from fully arylated poly(ladder-type indenofluorene) up to poly(ladder-type pentaphenylene). The ASE characteristics of the polymers, including the onset threshold values for ASE and the gain and loss coefficients, as well as the photoluminescence (PL) properties, i.e., the solid-state fluorescence lifetimes, decay kinetics and solid-state quantum efficiencies, have been studied by time-resolved PL spectroscopy.
A fully arylated polyfluorene has also been synthesized and its photophysical properties compared to those of the step-ladder polymers. Steady-state photoinduced absorption and ultrafast transient absorption spectroscopy have been used to study excited-state absorption of singlet and triplet states and polarons present in the solid state. The results demonstrate a minimum in the onset threshold value of ASE for a fully arylated poly(ladder-type indenofluorene) and a successive increase of the ASE threshold for the step-ladder polymers with more bridged phenylene rings. In particular, carbazole-containing step-ladder LPPPs exhibit significantly increased ASE threshold values as compared to their carbazole-free analogues, due to a pronounced overlap of stimulated emission (SE) and photoinduced absorption (PA).

Rain-gauge network evaluation and augmentation using geostatistics
HYDROLOGICAL PROCESSES, Issue 14 2008. Ke-Sheng Cheng
Abstract: Rain-gauge networks are often used to provide estimates of area-average rainfall or point rainfalls at ungauged locations. The level of accuracy a network can achieve depends on the total number and locations of gauges in the network. A geostatistical approach for evaluation and augmentation of an existing rain-gauge network is proposed in this study. Through variogram analysis, hourly rainfalls are shown to have higher spatial variability than annual rainfalls, with hourly Mei-Yu rainfalls having the highest spatial variability. A criterion using the ordinary kriging variance is proposed to assess the accuracy of rainfall estimation, with the acceptance probability defined as the probability that the estimation error falls within a desired range. Based on the criterion, the percentage of the total area with acceptable accuracy, Ap, under a certain network configuration can be calculated. A sequential algorithm is also proposed to prioritize the rain-gauges of the existing network, identify the base network, and relocate non-base gauges.
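Under a Gaussian error model, the acceptance probability described above, the probability that the estimation error falls within ±d, follows directly from the ordinary kriging standard deviation: P(|error| ≤ d) = 2Φ(d/σ) − 1. A minimal sketch; the Gaussian assumption is one way to formalise the criterion, and the paper's exact definition may differ.

```python
import math

def acceptance_probability(kriging_sd, d):
    """P(|estimation error| <= d) assuming a zero-mean Gaussian error with
    the ordinary kriging standard deviation.

    Uses 2*Phi(d/sd) - 1 = erf(d / (sd * sqrt(2))).
    """
    return math.erf(d / (kriging_sd * math.sqrt(2.0)))
```

For example, a point whose kriging standard deviation is half the tolerated error d is estimated acceptably with probability about 0.954 (the familiar two-sigma rule); mapping this over the study area and thresholding gives the percentage Ap.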
The percentage of the total area with acceptable accuracy is mostly contributed by the base network. In contrast, non-base gauges contribute little to Ap and are candidates for removal or relocation. Using a case study in northern Taiwan, the proposed approach demonstrates that the identified base network, which comprises approximately two-thirds of the total rain-gauges, can achieve almost the same level of performance (expressed in terms of percentage of the total area with acceptable accuracy) as the complete network for hourly Mei-Yu rainfall estimation. The percentage of area with acceptable accuracy can be raised from 56% to 88% using an augmented network. A threshold value for the percentage of area with acceptable accuracy is also recommended to help determine the number of non-base gauges that need to be relocated. Copyright © 2007 John Wiley & Sons, Ltd.

Predicting unit plot soil loss in Sicily, south Italy
HYDROLOGICAL PROCESSES, Issue 5 2008. V. Bagarello
Abstract: Predicting soil loss is necessary to establish soil conservation measures. Variability of soil and hydrological parameters complicates the mathematical simulation of soil erosion processes. Methods for predicting unit plot soil loss in Sicily were developed using 5 years of data from replicated plots. First, the variability of the soil water content, runoff, and unit plot soil loss values collected at fixed dates or after an erosive event was investigated. The applicability of the Universal Soil Loss Equation (USLE) was then tested. Finally, a method to predict event soil loss was developed. Measurement variability decreased as the mean increased above a threshold value, but it was also low for low values of the measured variable. The mean soil loss predicted by the USLE was lower than the measured value by 48%. The annual values of the soil erodibility factor varied by a factor of seven, whereas the mean monthly values varied between 1% and 244% of the mean annual value.
The event unit plot soil loss was directly proportional to an erosivity index equal to (QRRe)^b, where QRRe is the runoff ratio, QR, times the single-storm erosion index, Re, and b is an empirical exponent. It was concluded that a relatively low number of replicates of the variable of interest may be collected to estimate the mean, for both high and particularly low values of the variable. The USLE with the mean annual soil erodibility factor may be applied to estimate the order of magnitude of the mean soil loss, but it is not usable to estimate soil loss at shorter temporal scales. The relationship for estimating the event soil loss is a modified version of the USLE-M, given that it includes an exponent for the QRRe term. Copyright © 2007 John Wiley & Sons, Ltd.

Variational h-adaption in finite deformation elasticity and plasticity
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 5 2007. J. Mosler
Abstract: We propose a variational h-adaption strategy in which the evolution of the mesh is driven directly by the governing minimum principle: the principle of minimum potential energy in the case of elastostatics, and a minimum principle for the incremental static problem of elasto-viscoplasticity. In particular, the mesh is refined locally when the resulting energy or incremental pseudo-energy released exceeds a certain threshold value. In order to avoid global recomputes, we estimate the local energy released by mesh refinement by means of a lower bound obtained by relaxing a local patch of elements. This bound can be computed locally, which reduces the complexity of the refinement algorithm to O(N). We also demonstrate how variational h-refinement can be combined with variational r-refinement to obtain a variational hr-refinement algorithm. Because of the strict variational nature of the h-refinement algorithm, the resulting meshes are anisotropic and outperform other refinement strategies based on aspect ratio or other purely geometrical measures of mesh quality.
The versatility and rate of convergence of the resulting approach are illustrated by means of selected numerical examples. Copyright © 2007 John Wiley & Sons, Ltd. [source] Adverse trends in male reproductive health: we may have reached a crucial 'tipping point' INTERNATIONAL JOURNAL OF ANDROLOGY, Issue 2 2008 A.-M. Andersson Summary Healthy men produce an enormous number of sperm, far more than necessary for conception. However, several studies suggest that semen samples in which the sperm concentration is below 40 mill/mL may be associated with longer time to pregnancy or even subfertility, and specimens in which the sperm concentration is below 15 mill/mL may carry a high risk of infertility. Historic data from the 1940s show that the bulk of young men at that time had sperm counts far above 40 mill/mL, with averages higher than 100 mill/mL. However, recent surveillance studies of young men from the general populations of Northern Europe show that semen quality is much poorer. In Denmark, approximately 40% of men now have sperm counts below 40 mill/mL. A simulation assuming that the average sperm count had declined from 100 mill/mL in 'old times' to a current level close to 40 mill/mL indicated that the first decline in average sperm number of 20–40 mill/mL might not have had much effect on pregnancy rates, as the majority of men would still have had counts far above the threshold value. However, due to the assumed decline in semen quality, the sperm counts of the majority of 20-year-old European men are now so low that we may be close to the crucial tipping point of 40 mill/mL spermatozoa. Consequently, we must face the possibility of more infertile couples and lower fertility rates in the future. [source] HDL-c is a powerful lipid predictor of cardiovascular diseases INTERNATIONAL JOURNAL OF CLINICAL PRACTICE, Issue 11 2007 E.
Bruckert Summary Relationship between HDL-c and cardiovascular diseases: Beyond the role of low-density lipoprotein cholesterol (LDL-c) in the development of atherosclerosis, growing evidence suggests that high-density lipoprotein cholesterol (HDL-c) is a powerful predictor of cardiovascular disease. Indeed, epidemiological, mechanistic and intervention studies suggest that low HDL-c is a major cardiovascular risk factor and that increasing HDL-c plasma levels may be beneficial, particularly in patients with low HDL-c levels. The inverse association between HDL-c concentrations and cardiovascular risk is continuous, without a threshold value. Thus, any categorical definition of low HDL-c is arbitrary. Protective effects of HDL: HDL particles are highly heterogeneous in structure and intravascular metabolism. Antiatherogenic properties of HDL include its role in reverse cholesterol transport, besides its antioxidant, anti-inflammatory and antiapoptotic activities. What should clinicians do?: From a practical point of view, HDL-c should be systematically measured to assess cardiovascular risk in patients. The first step to consider in subjects with low HDL-c is to look for specific causes and give advice to change inappropriate lifestyle components associated with low HDL-c, such as smoking, lack of physical exercise and overweight. Patients with very low HDL-c need a thorough evaluation by specialist physicians. Statins are associated with a modest increase of HDL-c (5%), while fibrates and nicotinic acid increase HDL-c by 10% and 20%, respectively. [source] Non-linear interest rate dynamics and forecasting: evidence for US and Australian interest rates INTERNATIONAL JOURNAL OF FINANCE & ECONOMICS, Issue 2 2009 David G. McMillan Abstract Recent empirical finance research has suggested the potential for interest rate series to exhibit non-linear adjustment to equilibrium.
This paper examines a variety of models designed to capture these effects and compares both their in-sample and out-of-sample performance with a linear alternative. Using short- and long-term interest rates, we report evidence that a logistic smooth-transition error-correction model best characterizes the data and provides superior out-of-sample forecasts, especially for the short rate, over both linear and non-linear alternatives. This model suggests that market dynamics differ depending on whether the deviations from long-run equilibrium are above or below the threshold value. Copyright © 2007 John Wiley & Sons, Ltd. [source] Using quantitative real-time PCR to detect salmonid prey in scats of grey Halichoerus grypus and harbour Phoca vitulina seals in Scotland: an experimental and field study JOURNAL OF APPLIED ECOLOGY, Issue 2 2008 I. Matejusová Summary 1. There is considerable debate over the impact of seal predation on salmonid populations in both the Atlantic and Pacific oceans. Conventional hard-part analysis of scats has suggested that salmonids represent a minor component of the diet of grey seals (Halichoerus grypus) and harbour seals (Phoca vitulina) in the UK. However, it is unclear whether this is an accurate reflection of the diet or due to methodological problems. To investigate this issue, we applied quantitative PCR (qPCR) to examine the presence of salmonids in the diet of seals in the Moray Firth, UK, during the summers of 2003 and 2005. 2. Two qPCR assays were designed to detect Atlantic salmon Salmo salar and sea trout Salmo trutta DNA in field samples and experimentally spiked scats. The proportion of scats sampled in the field that were positive for salmonid DNA was low (≤10%). However, the DNA technique consistently resulted in more positive scats than when hard-part analysis was used.
3. An experimental study using spiked scat material revealed a highly significant negative relationship between Ct values obtained from the Atlantic salmon qPCR assay and the proportion of Atlantic salmon material added to scats. The Ct value denotes the cycle number at which the increasing fluorescence signal of target DNA crosses a threshold value. Ct values from field-collected seal scats suggested they contained a very low concentration of salmonid remains (1–5%), based on an approximate calibration curve constructed from the experimental data. 4. Synthesis and applications. The qPCR assay approach was shown to be highly efficient and consistent in detecting salmonids from seal scats, and to be more sensitive than conventional hard-part analysis. Nevertheless, our results confirm previous studies indicating that salmonids are not common prey for seals in these Scottish estuaries. These studies support current management practice, which focuses on control of the small number of seals that move into key salmonid rivers, rather than targeting the larger groups of animals that haul out in nearby estuaries. [source] Your Drug, My Drug, or Our Drugs: How Aggressive Should We Be With Antihypertensive Therapy? JOURNAL OF CLINICAL HYPERTENSION, Issue 2005 Joseph L. Izzo Jr. MD In the prevention of hypertensive complications, especially stroke and kidney disease, "lower is better" because for each decrease of 20 mm Hg systolic or 10 mm Hg diastolic pressure in the population, cardiovascular risk is halved. Ideally, the goal for each patient should be to reach the lowest blood pressure that is well tolerated, a value that may be well below the arbitrary threshold value of 140/90 mm Hg. For the majority of "uncomplicated hypertensives," the question of single-drug therapy is essentially moot, because more than one agent is almost always required to optimally control blood pressure.
In individuals who already have heart or kidney disease, there are compelling indications for the use of drugs that block the renin-angiotensin system, but the large outcome studies that spawned these recommendations are themselves combination trials. Thus, in virtually all patients, more than one drug is indicated. The best combinations take advantage of the long durations of action and complementary mechanisms of action of the components, and are not only able to effectively lower blood pressure, but also to favorably affect the natural history of hypertensive complications. Regimens, including fixed-dose combination products, that combine a thiazide diuretic or calcium antagonist with an angiotensin-converting enzyme inhibitor or angiotensin receptor blocker are most efficient. In summary, why would an astute clinician (or informed patient) be satisfied with the relatively limited effects of any single class of antihypertensive agents when better overall protection is possible? [source] Factual memories of ICU: recall at two years post-discharge and comparison with delirium status during ICU admission: a multicentre cohort study JOURNAL OF CLINICAL NURSING, Issue 9 2007 Brigit L Roberts RN, IC Cert Aims and objective. To examine the relationship between observed delirium in ICU and patients' recall of factual events up to two years after discharge. Background. People, the environment, and procedures are frequently cited memories of actual events encountered in ICU. These are often perceived as stressors to the patients, and the presence of several such stressors has been associated with the development of reduced health-related quality of life or post-traumatic stress syndrome. Design. Prospective cohort study using interview technique. Method. The cohort was assembled from 152 patients who participated in a previously conducted multi-centre study of delirium incidence in Australian ICUs. The interviews involved a mixture of closed- and open-ended questions.
Qualitative responses regarding factual memories were analysed using thematic analysis. A five-point Likert scale with answers from 'always' to 'never' was used to ask about current experiences of dreams, anxiety, sleep problems, fears, irritability and/or mood swings. Scoring ranged from 6 to 30, with a mid-point value of 18 indicating a threshold value for the diagnosis of post-traumatic stress syndrome. A P-value of <0·05 was considered significant for all analyses. Results. Forty-one (40%) of 103 potential participants consented to take part in the follow-up interview; 18 patients (44%) had been delirious and 23 patients (56%) non-delirious during the ICU admission. The non-participants (n = 62) formed a control group to ensure a representative sample; 83% (n = 34) reported factual memories either with or without recall of dreaming. Factual memories were significantly less common (66% cf. 96%) in delirious patients (OR 0·09, 95% CI 0·01–0·85, p = 0·035). Five topics emerged from the thematic analysis: 'procedures', 'staff', 'comfort', 'visitors', and 'events'. Based on their current experiences, five patients (12%; four non-delirious and one delirious) scored ≥18, indicative of symptoms of post-traumatic stress syndrome; this did not reach statistical significance. Memory of transfer out of ICU was less frequent among the delirious patients (56%, n = 10) than among the non-delirious patients (87%, n = 20) (p = 0·036). Conclusion. Most patients have factual memories of their ICU stay. However, delirious patients had significantly less factual recall than non-delirious patients. Adverse psychological sequelae expressed as post-traumatic stress syndrome were uncommon in our study. Every attempt must be made to ensure that the ICU environment is as hospitable as possible to decrease the stress of critical illness. Post-ICU follow-up should include filling in the 'missing gaps', particularly for delirious patients.
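The six-item scoring scheme described above is straightforward to reproduce. The sketch below is illustrative only: the function name and the assumption that 'never' scores 1 and 'always' scores 5 are mine, not the study's; only the six items, the 6–30 range and the ≥18 threshold come from the abstract.

```python
# Illustrative sketch of the six-item, five-point Likert score described
# above. The abstract reports only the items, the 6-30 range and the
# threshold (>= 18); scoring direction and names here are assumptions.

ITEMS = ("dreams", "anxiety", "sleep_problems", "fears",
         "irritability", "mood_swings")
THRESHOLD = 18  # mid-point of the 6-30 range

def ptsd_screen(responses: dict) -> tuple[int, bool]:
    """Sum six 1-5 responses (assumed 'never'=1 ... 'always'=5) and
    flag totals at or above the threshold of 18."""
    if set(responses) != set(ITEMS):
        raise ValueError("expected exactly the six scale items")
    if any(not 1 <= v <= 5 for v in responses.values()):
        raise ValueError("each response must be on the 1-5 Likert scale")
    total = sum(responses.values())
    return total, total >= THRESHOLD

# Example: a patient reporting mostly low-frequency symptoms.
score, flagged = ptsd_screen({item: 2 for item in ITEMS})  # (12, False)
```

With all six items answered at the scale mid-point (3), the total is exactly 18 and the flag is raised, matching the mid-point threshold reported in the abstract.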
Ongoing explanations and a caring environment may assist the patient in making a complete recovery, both physically and mentally. Relevance to clinical practice. This study highlights the need for continued patient information, reassurance and optimized comfort. While health care professionals cannot remove the stressors of the ICU treatments, we must minimize the impact of the stay. It must be remembered that most patients are aware of their surroundings while they are in the ICU; ICU education should therefore cover all aspects of care for this particularly vulnerable subset of patients, to optimize their feelings of security, comfort and self-respect. [source] The evolution of cooperation and altruism: a general framework and a classification of models JOURNAL OF EVOLUTIONARY BIOLOGY, Issue 5 2006 L. LEHMANN Abstract One of the enduring puzzles in biology and the social sciences is the origin and persistence of intraspecific cooperation and altruism in humans and other species. Hundreds of theoretical models have been proposed and there is much confusion about the relationship between these models. To clarify the situation, we developed a synthetic conceptual framework that delineates the conditions necessary for the evolution of altruism and cooperation. We show that at least one of the four following conditions needs to be fulfilled: direct benefits to the focal individual performing a cooperative act; direct or indirect information allowing a better than random guess about whether a given individual will behave cooperatively in repeated reciprocal interactions; preferential interactions between related individuals; and genetic correlation between genes coding for altruism and phenotypic traits that can be identified. When one or more of these conditions are met, altruism or cooperation can evolve if the cost-to-benefit ratio of altruistic and cooperative acts is greater than a threshold value.
The cost-to-benefit ratio can be altered by coercion, punishment and policing, which therefore act as mechanisms facilitating the evolution of altruism and cooperation. All the models proposed so far are explicitly or implicitly built on these general principles, allowing us to classify them into four general categories. [source] Pikeperch Sander lucioperca trapped between niches: foraging performance and prey selection in a piscivore on a planktivore diet JOURNAL OF FISH BIOLOGY, Issue 4 2008 A. Persson The foraging behaviour of planktivorous pikeperch Sander lucioperca during their first growing season was analysed. Field data showed that S. lucioperca feed on extremely rare prey at the end of the summer, suggesting the presence of a bottleneck. In experiments, the foraging ability of planktivorous S. lucioperca was determined when fish were feeding on different prey types (Daphnia magna or Chaoborus spp.) and sizes (D. magna of lengths 1 or 2·5 mm) when they occurred alone. From these results, the minimum density requirement of each prey type was analysed. The energy gain of three different foraging strategies was estimated: a specialized diet based on either large D. magna or Chaoborus spp., or a generalist diet combining both prey types. Prey value estimates showed that Chaoborus spp. should be the preferred prey, assuming an energy-maximizing principle. In prey choice experiments, S. lucioperca largely followed this principle, including D. magna in the diet only when the density of Chaoborus spp. was below a threshold value. Splitting the foraging bout into different sequences, however, resulted in a somewhat different pattern. During an initial phase, S. lucioperca captured both prey as encountered and then switched to Chaoborus spp. if prey density was above the threshold level. The prey selection observed was mainly explained by sampling behaviour and incomplete information about environmental quality, whereas satiation had only marginal effects.
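The threshold-density rule just described, in which the lower-ranked prey enters the diet only when the preferred prey is scarce, matches the classical "zero-one" prey-choice model of optimal foraging theory. The sketch below is a generic illustration of that model with hypothetical parameter values; it is not the authors' analysis or their estimated parameters.

```python
# Generic sketch of the classical prey-choice (zero-one) model: a forager
# encountering the better prey (energy e1, handling time h1) at rate lam1
# should add the poorer prey (e2, h2, with e2/h2 < e1/h1) to its diet only
# when specializing would yield a lower long-term intake rate.
# All parameter values here are hypothetical.

def include_second_prey(e1, h1, e2, h2, lam1):
    """True if the generalist diet beats specializing on prey 1, i.e.
    lam1 * e1 / (1 + lam1 * h1) < e2 / h2."""
    if e2 / h2 >= e1 / h1:
        raise ValueError("prey 2 must be the lower-ranked prey (e2/h2 < e1/h1)")
    specialist_rate = lam1 * e1 / (1.0 + lam1 * h1)
    return specialist_rate < e2 / h2

def density_threshold(e1, h1, e2, h2):
    """Encounter rate with prey 1 below which prey 2 enters the diet,
    obtained by solving lam1 * e1 / (1 + lam1 * h1) = e2 / h2."""
    return e2 / (e1 * h2 - e2 * h1)
```

Note that the inclusion rule depends only on the encounter rate with the preferred prey, not on how common the poorer prey is, which is the signature prediction the pikeperch experiments probe.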
It was concluded that the observed diet based on rare prey items was in accordance with an optimal foraging strategy and may generate positive growth in the absence of prey fish of suitable sizes. [source] Asymmetry in the link between the yield spread and industrial production: threshold effects and forecasting JOURNAL OF FORECASTING, Issue 5 2004 Ivan Paya Abstract We analyse the nonlinear behaviour of the information content in the spread for future real economic activity. The spread linearly predicts one-year-ahead real growth in nine industrial production sectors of the USA and four of the UK over the last 40 years. However, recent investigations of the spread–real activity relation have questioned both its linear nature and its time-invariant framework. Our in-sample empirical evidence suggests that the spread–real activity relationship exhibits asymmetries that allow for different predictive power of the spread when past spread values were above or below some threshold value. We then measure the out-of-sample forecast performance of the nonlinear model using predictive accuracy tests. The results show that a significant improvement in forecasting accuracy, at least for one-step-ahead forecasts, can be obtained over the linear model. Copyright © 2004 John Wiley & Sons, Ltd. [source] Optimal farm size in an uncertain land market: the case of Kyrgyz Republic AGRICULTURAL ECONOMICS, Issue 2009 Sara Savastano Option value theory; Farm size; Uncertainty; Irreversibility Abstract This article applies a real options model to the problem of land development. Making use of the 1998–2001 Kyrgyz Household Budget Survey, we show that when the hypothesis of decreasing returns to scale holds, the relation between the threshold value of revenue per hectare and the amount of land cultivated is positive.
In addition, the relation between the threshold and the amount of land owned is positive in the case of a continuous supply of land and negative when the supply of land is discontinuous. The direct consequence is that, in the first case, smaller farms will be more willing to rent land and exercise the option, whereas, in the second case, larger farms will exercise first. The results suggest three main conclusions: (i) the combination of uncertainty and irreversibility is an important factor in land development decisions, (ii) farmer behavior is consistent with the continuous profit maximization model, and (iii) farming unit revenue tends to be positively related to farm size, once uncertainty is properly accounted for. [source]
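The option-value mechanism invoked in this last abstract, where uncertainty plus irreversibility raises the revenue threshold that triggers land development, can be illustrated with the textbook one-factor real-options hurdle. The sketch below is a standard-theory illustration under assumed parameters (geometric Brownian revenue, constant discount and payout rates); it is not the authors' estimated model for Kyrgyz farms.

```python
# Illustrative one-factor real-options hurdle: under geometric Brownian
# revenue with discount rate r, payout rate delta and volatility sigma,
# irreversible development is optimal only once project value exceeds
# sunk cost by the multiple beta1 / (beta1 - 1) > 1. Parameter values
# below are assumptions for illustration, not estimates from the study.
import math

def hurdle_multiple(r: float, delta: float, sigma: float) -> float:
    """Option-value markup beta1 / (beta1 - 1), where beta1 > 1 is the
    positive root of 0.5*sigma^2*b*(b-1) + (r-delta)*b - r = 0."""
    a = 0.5 - (r - delta) / sigma**2
    beta1 = a + math.sqrt(a**2 + 2.0 * r / sigma**2)
    return beta1 / (beta1 - 1.0)

# Greater uncertainty raises the development threshold:
low = hurdle_multiple(r=0.06, delta=0.04, sigma=0.10)
high = hurdle_multiple(r=0.06, delta=0.04, sigma=0.30)
```

Tripling the assumed volatility roughly doubles the hurdle multiple in this example, which is the sense in which "the combination of uncertainty and irreversibility" inflates the threshold value of revenue per hectare.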