Simple Models (simple + models)
Selected Abstracts A QM/MM Study of Cisplatin–DNA Oligonucleotides: From Simple Models to Realistic Systems. CHEMISTRY - A EUROPEAN JOURNAL, Issue 22 2006. Arturo Robertazzi. Abstract QM/MM calculations were employed to investigate the role of hydrogen bonding and π stacking in several single- and double-stranded cisplatin–DNA structures. Computed geometrical parameters reproduce experimental structures of cisplatin and its complex with guanine–phosphate–guanine. Following QM/MM optimisation, single-point DFT calculations allowed estimation of intermolecular forces through atoms in molecules (AIM) analysis. Binding energies of platinated single-strand DNA qualitatively agree with myriad experimental and theoretical studies showing that complexes of guanine are stronger than those of adenine. The topology of all studied complexes confirms that platination strongly affects the stability of both single- and double-stranded DNAs: PtNH···X (X = N or O) interactions are ubiquitous in these complexes and account for over 70% of all H-bonding interactions. The π stacking is greatly reduced by both mono- and bifunctional complexation: the former causes a loss of about 3–4 kcal mol⁻¹, whereas the latter leads to more drastic disruption. The effect of platination on Watson–Crick GC is similar to that found in previous studies: major redistribution of energy occurs, but the overall stability is barely affected. The BH&H/AMBER/AIM approach was also used to study platination of a double-stranded DNA octamer d(CCTG*G*TCC)·d(GGACCAGG), for which an experimental structure is available. Comparison between theory and experiment is satisfactory, and also reproduces previous DFT-based studies of analogous structures. The effect of platination is similar to that seen in model systems, although the effect on GC pairing was more pronounced. These calculations also reveal weaker, secondary interactions of the form Pt···O and Pt···N, detected in several single- and double-stranded DNAs.
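AIM-based estimation of H-bond energies, as mentioned in the abstract above, is commonly done with the Espinosa approximation, which takes the interaction energy as roughly half the potential energy density V at the bond critical point. A minimal sketch of that bookkeeping; the contact list and all numbers below are entirely hypothetical, not data from the paper:

```python
# Illustrative sketch: Espinosa-style H-bond energy estimate E ~= |V(r_BCP)|/2
# and a tally of the PtNH···X share of contacts. The data are made up.
HARTREE_TO_KCAL = 627.509  # 1 hartree in kcal/mol

def espinosa_energy_kcal(v_bcp_au):
    """H-bond interaction energy (kcal/mol) from the potential energy
    density V at the bond critical point (atomic units): E ~= |V|/2."""
    return 0.5 * abs(v_bcp_au) * HARTREE_TO_KCAL

# Hypothetical (contact type, V at BCP in a.u.) pairs for one platinated duplex
contacts = [
    ("PtNH...O", -0.012), ("PtNH...N", -0.015), ("PtNH...O", -0.010),
    ("NH...O",   -0.009), ("PtNH...N", -0.014),
]
pt_share = sum(c.startswith("PtNH") for c, _ in contacts) / len(contacts)
total_e = sum(espinosa_energy_kcal(v) for _, v in contacts)
print(f"PtNH...X share: {pt_share:.0%}")            # 80% in this toy data
print(f"total H-bond energy: {total_e:.1f} kcal/mol")
```

The same tally over a real AIM critical-point list is how a "share of all H-bonding interactions" figure like the abstract's 70% would be computed.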
[source] Anisotropic contraction in forisomes: Simple models won't fit. CYTOSKELETON, Issue 5 2008. Winfried S. Peters. Abstract Forisomes are ATP-independent, Ca2+-driven contractile protein bodies acting as reversible valves in the phloem of plants of the legume family. Forisome contraction is anisotropic, as shrinkage in length is associated with radial expansion and vice versa. To test the hypothesis that changes in length and width are causally related, we monitored Ca2+- and pH-dependent deformations in the exceptionally large forisomes of Canavalia gladiata by high-speed photography, and computed time-courses of derived geometric parameters (including volume and surface area). Soybean forisomes, which in the resting state resemble those of Canavalia geometrically but have less than 2% of the volume, were also studied to identify size effects. Calcium induced sixfold volume increases in forisomes of both species; in soybean, responses were completed in 0.15 s, compared to about 0.5 s required for a rapid response in Canavalia followed by slow swelling for several minutes. This size-dependent behavior supports the idea that forisome contractility might rest on mechanisms similar to those of polyelectrolyte gels, a class of artificial "smart" materials. In both species, time-courses of forisome length and diameter were variable and lacked correlation, arguing against a simple causal relationship between changes in length and width. Moreover, changes in the geometry of soybean forisomes differed qualitatively between Ca2+- and pH-responses, suggesting that divalent cations and protons target different sites on the forisome proteins. Cell Motil. Cytoskeleton 2008. © 2008 Wiley-Liss, Inc.
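The derived geometric parameters in the abstract above (volume and surface area from length and diameter time-courses) can be sketched in closed form if, purely for illustration, a forisome body is approximated as a prolate spheroid; the paper does not state which geometric model it used, so the formulas below are an assumption:

```python
import math

def prolate_spheroid(length, diameter):
    """Volume and surface area of a prolate spheroid with major axis
    `length` and minor axis `diameter` (an illustrative stand-in for a
    forisome body; units are arbitrary but consistent)."""
    a, b = length / 2.0, diameter / 2.0      # semi-major, semi-minor axes
    volume = (4.0 / 3.0) * math.pi * a * b * b
    e = math.sqrt(max(0.0, 1.0 - (b * b) / (a * a)))  # eccentricity
    if e < 1e-12:                            # spherical limit: avoid 0/0
        surface = 4.0 * math.pi * b * b
    else:
        surface = 2.0 * math.pi * b * b * (1.0 + (a / (b * e)) * math.asin(e))
    return volume, surface

# A sixfold volume increase at constant shape scales every length by 6**(1/3):
v1, _ = prolate_spheroid(30.0, 10.0)
v2, _ = prolate_spheroid(30.0 * 6 ** (1 / 3), 10.0 * 6 ** (1 / 3))
print(v2 / v1)  # ~6.0
```

Applying such formulas frame by frame to measured length and diameter is one way the volume and surface-area time-courses described above could be derived.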
[source] On morphometric properties of basins, scale effects and hydrological response. HYDROLOGICAL PROCESSES, Issue 1 2003. Roger Moussa. Abstract One of the important problems in hydrology is the quantitative description of river system structure and the identification of relationships between geomorphological properties and hydrological response. Digital elevation models (DEMs) generally are used to delineate the basin's limits and to extract the channel network considering pixels draining an area greater than a threshold area S. In this paper, new catchment shape descriptors, the geometric characteristics of an equivalent ellipse that has the same centre of gravity, the same principal inertia axes, the same area and the same ratio of minimal inertia moment to maximal inertia moment as the basin, are proposed. They are applied in order to compare and classify the structure of seven basins located in southern France. These descriptors were correlated to hydrological properties of the basins' responses, such as the lag time and the maximum amplitude of a geomorphological unit hydrograph calculated at the basin outlet by routing an impulse function through the channel network using the diffusive wave model. Then, we analysed the effects of the threshold area S on the topological structure of the channel network and on the evolution of the source catchment's shape. Simple models based on empirical relationships between the threshold S and the morphometric properties were established and new catchment shape indexes, independent of the observation scale S, were defined. This methodology is useful for geomorphologists dealing with the shape of source basins and for hydrologists dealing with the problem of scale effects on basin topology and on relationships between the basin morphometric properties and the hydrological response. Copyright © 2002 John Wiley & Sons, Ltd.
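The equivalent-ellipse descriptors above can be computed from the second moments of the basin shape. A minimal sketch, assuming the basin arrives as a binary raster mask (e.g. delineated from a DEM); the function and variable names are illustrative, not the paper's:

```python
import numpy as np

def equivalent_ellipse(mask):
    """Semi-axes (p >= q) and orientation (radians) of the uniform ellipse
    sharing the basin's area, centre of gravity, principal inertia axes and
    ratio of minimal to maximal inertia moment. `mask` is a 2D boolean
    raster of the basin."""
    ys, xs = np.nonzero(mask)
    area = float(xs.size)                        # pixel count ~ basin area
    dx, dy = xs - xs.mean(), ys - ys.mean()      # centred coordinates
    # inertia tensor about the centroid (unit mass per pixel)
    m = np.array([[np.mean(dx * dx), np.mean(dx * dy)],
                  [np.mean(dx * dy), np.mean(dy * dy)]])
    evals, evecs = np.linalg.eigh(m)             # ascending eigenvalues
    r = evals[0] / evals[1]                      # I_min / I_max, in (0, 1]
    # a uniform ellipse has I_min/I_max = (q/p)**2 and area = pi*p*q
    p = np.sqrt(area / (np.pi * np.sqrt(r)))
    q = p * np.sqrt(r)
    theta = np.arctan2(evecs[1, 1], evecs[0, 1])  # major-axis direction
    return p, q, theta

# sanity check on a synthetic "basin": an axis-aligned ellipse with
# semi-axes 40 and 20 pixels should be recovered almost exactly
yy, xx = np.mgrid[0:200, 0:200]
basin = ((xx - 100) / 40.0) ** 2 + ((yy - 100) / 20.0) ** 2 <= 1.0
p, q, theta = equivalent_ellipse(basin)
print(round(float(p), 1), round(float(q), 1))  # close to 40.0 and 20.0
```

Matching the moment ratio rather than the moments themselves is what makes the descriptor a pure shape index, independent of basin size.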
[source] Is there safety in numbers? The effect of cattle herding on biting risk from tsetse flies. MEDICAL AND VETERINARY ENTOMOLOGY, Issue 4 2007. Abstract In sub-Saharan Africa, tsetse (Glossina spp.) transmit species of Trypanosoma which threaten 45–50 million cattle with trypanosomiasis. These livestock are subject to various herding practices which may affect biting rates on individual cattle and hence the probability of infection. In Zimbabwe, studies were made of the effect of herd size and composition on individual biting rates by capturing tsetse as they approached and departed from groups of one to 12 cattle. Flies were captured using a ring of electrocuting nets and bloodmeals were analysed using DNA markers to identify which individual cattle were bitten. Increasing the size of a herd from one to 12 adults increased the mean number of tsetse visiting the herd four-fold and the mean feeding probability from 54% to 71%; the increased probability with larger herds was probably a result of fewer flies per host, which, in turn, reduced the hosts' defensive behaviour. For adults and juveniles in groups of four to eight cattle, > 89% of bloodmeals were from the adults, even when these comprised just 13% of the herd. For groups comprising two oxen, four cows/heifers and two calves, a grouping that reflects the typical composition of communal herds in Zimbabwe, ~80% of bloodmeals were from the oxen. Simple models of entomological inoculation rates suggest that cattle herding practices may reduce individual trypanosomiasis risk by up to 90%. These results have several epidemiological and practical implications. First, the gregarious nature of hosts needs to be considered in estimating entomological inoculation rates. Secondly, heterogeneities in biting rates on different cattle may help to explain why disease prevalence is frequently lower in younger/smaller cattle.
Thirdly, the cost and effectiveness of tsetse control using insecticide-treated cattle may be improved by treating older/larger hosts within a herd. In general, the patterns observed with tsetse appear to apply to other genera of cattle-feeding Diptera (Stomoxys, Anopheles, Tabanidae) and thus may be important for the development of strategies for controlling other diseases affecting livestock. [source] Simple models for predicting transmission properties of photonic crystal fibers. MICROWAVE AND OPTICAL TECHNOLOGY LETTERS, Issue 7 2006. Rachad Albandakji. Abstract Simple, fast, and efficient 1D models for evaluating the transmission properties of photonic crystal fibers are proposed. Using these models, the axial propagation constant, chromatic dispersion, effective area, and leakage loss can be predicted with reasonable accuracy, much faster and with far fewer computational resources than the often time-consuming 2D analytical and numerical techniques. It is shown that the results are in good agreement with the published data available in the literature. © 2006 Wiley Periodicals, Inc. Microwave Opt Technol Lett 48: 1286–1290, 2006; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/mop.21624 [source] Simple models for evaluating effects of small leaks on the gas barrier properties of food packages. PACKAGING TECHNOLOGY AND SCIENCE, Issue 2 2003. Donghwan Chung. Abstract A detailed theoretical analysis and calculations were made to provide a simple and explicit means of evaluating the effects of small leaks on the barrier properties of food packages. Small leaks, such as pinholes and channel leaks, were approximated as cylindrical pores with diameters of 50–300 µm. The first part of the current study proposes a simple mathematical model based on Fick's law of diffusion, which accounts for both the gas leakage across small leaks and the gas permeation across package walls.
The model uses an effective permeability that depends on leak dimensions, type of diffusing gas, type of packaging material and gas conditions around the leak ends. In the second part of the study, three practical cases are presented to illustrate the application of the proposed model in examining the significance of leaks. These demonstrate in a simple and explicit manner that for LDPE packages: (a) leaks affect oxygen transfer more than water vapour transfer; (b) leak effects are more significant at lower storage temperatures; and (c) for high gas barrier packages, the effect of leaks is very important and should not be neglected. The model can also be used to draw conclusions about the significance of leaks in other packaging situations (e.g. with packaging materials other than LDPE) and to correct the shelf-life estimation of gas- and water vapour-sensitive foods for errors arising from package leaks. Copyright © 2003 John Wiley & Sons, Ltd. [source] Modulation by phytochrome of the blue light-induced extracellular acidification by leaf epidermal cells of pea (Pisum sativum L.): a kinetic analysis. THE PLANT JOURNAL, Issue 5 2000. J. Theo M. Elzenga. Summary Blue light induces extracellular acidification, a prerequisite of cell expansion, in epidermis cells of young pea leaves, by stimulation of the proton-pumping ATPase activity in the plasma membrane. A transient acidification, reaching a maximum 2.5–5 min after the start of the pulse, could be induced by pulses as short as 30 msec. A pulse of more than 3000 µmol m⁻² saturated this response. Responsiveness to a second light pulse was recovered with a time constant of about 7 min. The fluence rate-dependent lag time and sigmoidal increase of the acidification suggested the involvement of several reactions between light perception and activation of the ATPase.
In wild-type pea plants, the fluence response relation for short light pulses was biphasic, with a component that saturates at low fluence and one that saturates at high fluence. The phytochrome-deficient mutant pcd2 showed a selective loss of the high-fluence component, suggesting that the high-fluence component is phytochrome-dependent and the low-fluence component is phytochrome-independent. Treatment with the calmodulin inhibitor W7 also led to the elimination of the phytochrome-dependent high-fluence component. Simple models adapted from the one used to simulate blue light-induced guard cell opening failed to explain one or more elements of the experimental data. The hypothesis that phytochrome and a blue light receptor interact in a short-term photoresponse is supported by model calculations based upon a three-step signal transduction cascade, of which one component can be modulated by phytochrome. [source] BONDSYM: SIMULINK-based educational software for analysis of dynamic systems. COMPUTER APPLICATIONS IN ENGINEERING EDUCATION, Issue 2 2010. J.A. Calvo. Abstract This article presents an educational software package called BONDSYM, developed to allow engineering students to learn easily and quickly about the analysis of dynamic systems through the Bond Graph method. This software uses the SIMULINK library of MATLAB, which has proven to be an excellent choice for implementing and solving the dynamic equations involved. The application allows for the representation of the behavior of a dynamic system analyzed through Bond Graph theory in order to understand the dynamic equations and the physical phenomena involved. Based on the block diagrams of SIMULINK, the different "bonds" of a Bond Graph can be integrated as SIMULINK blocks in order to generate the dynamic model. A few simple models are analyzed through this application. © 2009 Wiley Periodicals, Inc.
Comput Appl Eng Educ 18: 238–251, 2010; Published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/cae.20246 [source] Plasma Edge Physics with B2-Eirene. CONTRIBUTIONS TO PLASMA PHYSICS, Issue 1-2 2006. R. Schneider. Abstract The B2-Eirene code package was developed to give better insight into the physics in the scrape-off layer (SOL), which is defined as the region of open field-lines intersecting walls. The SOL is characterised by the competition of parallel and perpendicular transport, thereby defining a 2D system. Plasma-wall interaction, arising from the existence of walls, and atomic processes are necessary ingredients for an understanding of the scrape-off layer. This paper concentrates on understanding the basic physics by combining the results of the code with experiments and analytical models or estimates. This work will mainly focus on divertor tokamaks, but most of the arguments and principles can be easily adapted to other concepts, such as island divertors in stellarators or limiter devices. The paper presents the basic equations for the plasma transport and the basic models for the neutral transport. These define the basic ingredients for the SOLPS (Scrape-Off Layer Plasma Simulator) code package. A first level of understanding is approached for pure hydrogenic plasmas, based both on simple models and on simulations with B2-Eirene neglecting drifts and currents. The influence of neutral transport on the different operation regimes is here the main topic. This part finishes with time-dependent phenomena for the pure plasma, the so-called Edge Localised Modes (ELMs). Then, the influence of impurities on the SOL plasma is discussed. Understanding impurity physics in the SOL requires a rather complex combination of different aspects: the impurity production process has to be understood, then the effects of impurities in terms of radiation losses have to be included, and finally impurity transport is necessary.
This will be introduced with rising complexity, starting with simple estimates, then analysing the detailed parallel force balance and the flow pattern of impurities. Using this, impurity compression and radiation instabilities will be studied. This part ends, combining all the elements introduced before, with specific, detailed results from different machines. Then, the effect of drifts and currents is introduced and their consequences presented. Finally, some work on deriving scaling laws for the anomalous turbulent transport, based on automatic edge transport code fitting procedures, will be described. (© 2006 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source] An answer to an important controversy and the need for caution when using simple models to predict inelastic earthquake response of buildings with torsion. EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 5 2010. Stavros A. Anagnostopoulos. Abstract This paper presents evidence that the extension of conclusions based on the widely used simplified, one-story, eccentric systems of the shear-beam type to actual, nonsymmetric buildings, and consequent assessments of the pertinent code provisions, can be quite erroneous, unless special care is taken to match the basic properties of the simplified models to those of the real buildings. The evidence comes from comparisons of results obtained using three variants of simplified models with results from the inelastic dynamic response of three- and five-story eccentric buildings computed with detailed MDOF systems, where the members are idealized with the well-known plastic hinge model. In addition, a convincing answer is provided to a pertinent lingering controversy: for frame-type buildings designed in accordance with the dynamic provisions of modern codes (such as EC8 or IBC2000), which allow reduced shears at the stiff edge due to torsion, the frames at the flexible sides are the critical elements in terms of ductility demands.
Copyright © 2009 John Wiley & Sons, Ltd. [source] Is CEO Pay Really Inefficient? A Survey of New Optimal Contracting Theories. EUROPEAN FINANCIAL MANAGEMENT, Issue 3 2009. JEL codes: D2; D3; G34; J3. Abstract Bebchuk and Fried (2004) argue that executive compensation is set by CEOs themselves rather than by boards on behalf of shareholders, since many features of observed pay packages may appear inconsistent with standard optimal contracting theories. However, it may be that simple models do not capture several complexities of real-life settings. This article surveys recent theories that extend traditional frameworks to incorporate these dimensions, and shows that the above features can be fully consistent with efficiency. For example, optimal contracting theories can explain the recent rapid increase in pay, the low level of incentives and their negative scaling with firm size, pay-for-luck, the widespread use of options (as opposed to stock), severance pay and debt compensation, and the insensitivity of incentives to risk. [source] LEAKY PREZYGOTIC ISOLATION AND POROUS GENOMES: RAPID INTROGRESSION OF MATERNALLY INHERITED DNA. EVOLUTION, Issue 4 2005. Kai M. A. Chan. Abstract Accurate phylogenies are crucial for understanding evolutionary processes, especially species diversification. It is commonly assumed that "good" species are sufficiently isolated genetically that gene genealogies represent accurate phylogenies. However, it is increasingly clear that good species may continue to exchange genetic material through hybridization (introgression). Many studies of closely related species reveal introgression of some genes without others, often with more rapid introgression of maternally inherited chloroplast or mitochondrial DNA (cpDNA, mtDNA). We seek a general explanation for this biased introgression using simple models of common reproductive isolating barriers (RIBs).
We compare empirically informed models of prezygotic isolation (for pre- and postinsemination mechanisms of both female choice and male competition) with postzygotic isolation, and demonstrate that the rate of introgression depends critically upon the type of RIB and the mode of genetic inheritance (maternal versus biparental versus paternal). Our frequency-dependent prezygotic RIBs allow much more rapid introgression of biparentally and maternally inherited genes than do commonly modeled postzygotic RIBs (especially for maternally inherited DNA). After considering the specific predictions in the context of empirical observations, we conclude that our model of prezygotic RIBs is a general explanation for the biased introgression of maternally inherited genomic components. These findings suggest that we should use extreme caution when interpreting single gene genealogies as species phylogenies, especially for cpDNA and mtDNA. [source] Kinetic Monte Carlo Simulations of Precipitation. ADVANCED ENGINEERING MATERIALS, Issue 12 2006. E. Clouet. Abstract We present some recent applications of the atomistic diffusion model and of the kinetic Monte Carlo (KMC) algorithm to systems of industrial interest, i.e. Al-Zr-Sc and Fe-Nb-C alloys, or to model systems. These applications include the study of homogeneous and heterogeneous precipitation as well as of phase transformation under irradiation. The KMC simulations are also used to test the main assumptions and limitations of simpler models and classical theories used in industry, e.g. the classical nucleation theory. [source] Climate, competition, and the coexistence of island lizards. FUNCTIONAL ECOLOGY, Issue 2 2006. L. B. BUCKLEY. Summary 1. The influences of environmental temperature and competition combine to determine the distributions of island lizards. Neither a bioenergetic model nor simple models of competition alone can account for the distributions.
A mechanistic, bioenergetic model successfully predicts how the abundance of a solitary Anolis lizard species will decline along an island's elevation gradient. However, the abundance trends for sympatric lizards diverge from the predictions of the non-interactive model. 2. Here we incorporate competition in the bioenergetic model and examine how different forms of competition modify the temperature-based abundance predictions. 3. Applying the bioenergetic model with competition to an island chain tests whether the model can successfully predict on which islands two lizard species will coexist. 4. Coexistence is restricted to the two largest islands, which the model predicts have substantially greater carrying capacities than the smaller islands. The model successfully predicts that competition prevents species coexistence on the smallest islands. However, the model predicts that the mid-sized islands are capable of supporting substantial populations of both species. Additional island characteristics, such as habitat diversity, resource availability and temporal disturbance patterns, may prevent coexistence. [source] MCMC-based linkage analysis for complex traits on general pedigrees: multipoint analysis with a two-locus model and a polygenic component. GENETIC EPIDEMIOLOGY, Issue 2 2007. Yun Ju Sung. Abstract We describe a new program lm_twoqtl, part of the MORGAN package, for parametric linkage analysis with a quantitative trait locus (QTL) model having one or two QTLs and a polygenic component, which models additional familial correlation from other unlinked QTLs. The program has no restriction on the number of markers or the complexity of pedigrees, facilitating the use of more complex models with general pedigrees. This is the first available program that can handle a model with both two QTLs and a polygenic component. Competing programs use only simpler models: one QTL, one QTL plus a polygenic component, or variance components (VC).
Use of simple models when they are incorrect, as for complex traits that are influenced by multiple genes, can bias estimates of QTL location or reduce power to detect linkage. We compute the likelihood with Markov chain Monte Carlo (MCMC) realization of segregation indicators at the hypothesized QTL locations conditional on marker data, summation over phased multilocus genotypes of founders, and peeling of the polygenic component. Simulated examples, with various sized pedigrees, show that two-QTL analysis correctly identifies the location of both QTLs, even when they are closely linked, whereas other analyses, including the VC approach, fail to identify the location of QTLs with modest contribution. Our examples illustrate the advantage of parametric linkage analysis with two QTLs, which provides higher power for linkage detection and better localization than the use of simpler models. Genet. Epidemiol. © 2006 Wiley-Liss, Inc. [source] Project Labor Agreements' Effect on School Construction Costs in Massachusetts. INDUSTRIAL RELATIONS, Issue 1 2010. DALE BELMAN. This paper investigates the impact of Project Labor Agreements (PLAs) on school construction costs in Massachusetts. Although simple models exhibit a large positive effect of PLAs on construction costs, such effects are absent from more completely specified models. Further investigation finds sufficient dissimilarity between schools built with and without PLAs that it is difficult to distinguish the cost effects of PLAs from the cost effects of the factors that underlie the use of PLAs. [source] AN ON-THE-JOB SEARCH MODEL OF CRIME, INEQUALITY, AND UNEMPLOYMENT. INTERNATIONAL ECONOMIC REVIEW, Issue 3 2004. Kenneth Burdett. Abstract We extend simple search models of crime, unemployment, and inequality to incorporate on-the-job search. This is valuable because, although simple models are useful, on-the-job search models are more interesting theoretically and more relevant empirically.
We characterize the wage distribution, unemployment rate, and crime rate theoretically, and use quantitative methods to illustrate key results. For example, we find that increasing the unemployment insurance replacement rate from 53 to 65 percent increases the unemployment and crime rates from 10 and 2.7 percent to 14 and 5.2 percent, respectively. We show that multiple equilibria arise for some fairly reasonable parameters; in one case, unemployment can be 6 or 23 percent, and crime 0 or 10 percent, depending on the equilibrium. [source] The evolution of, and revolution in, land surface schemes designed for climate models. INTERNATIONAL JOURNAL OF CLIMATOLOGY, Issue 5 2003. A. J. Pitman. Abstract The land surface is a key component of climate models. It controls the partitioning of available energy at the surface between sensible and latent heat, and it controls the partitioning of available water between evaporation and runoff. The land surface is also the location of the terrestrial carbon sink. Evidence is increasing that the influence of the land surface on climate is significant and that changes in the land surface can influence regional- to global-scale climate on time scales from days to millennia. Further, there is now a suggestion that the terrestrial carbon sink may decrease as global temperatures increase as a consequence of rising CO2 levels. This paper provides the theoretical background that explains why the land surface should play a central role in climate. It also provides evidence, sourced from climate model experiments, that the land surface is of central importance. This paper then reviews the development of land surface models designed for climate models, from the early, very simple models through to recent efforts, which include a coupling of biophysical processes to represent carbon exchange.
It is pointed out that significant problems remain to be addressed, including the difficulties in parameterizing hydrological processes, root processes, sub-grid-scale heterogeneity and biogeochemical cycles. It is argued that continued development of land surface models requires more multidisciplinary efforts by scientists with a wide range of skills. However, it is also argued that the framework is now in place within the international community to build and maintain the latest generation of land surface models. Further, there should be considerable optimism that consolidating the recent rapid advances in land surface modelling will enhance our capability to simulate the impacts of land-cover change and the impacts of increasing CO2 on the global and regional environment. Copyright © 2003 Royal Meteorological Society [source] Alternatives to pilot plant experiments in cheese-ripening studies. INTERNATIONAL JOURNAL OF DAIRY TECHNOLOGY, Issue 4 2001. Shakeel-ur-rehman. Experimental studies on cheese have several objectives, from assessing the influence of the microflora and enzymes indigenous to milk to evaluating starters and adjuncts. Several studies have been undertaken to evaluate the influence of an individual ripening agent in the complex environment of cheese. Cheesemaking experiments, even on a pilot scale, are expensive and time-consuming, and when controlled bacteriological conditions are needed, pilot plant experiments are difficult to perform. Cheese curd slurries are simple models that can be prepared under sterile conditions in the laboratory and can be used as an intermediate between test tubes and cheese trials, but probably cannot replace the latter. Miniature model cheeses are similar to pilot plant cheeses and can be manufactured under sterile conditions. Several approaches to assess the role of cheese-ripening agents are reviewed in this paper.
[source] Simplified models for the performance evaluation of desiccant wheel dehumidification. INTERNATIONAL JOURNAL OF ENERGY RESEARCH, Issue 1 2003. M. Beccali. Abstract In the present communication, simple models have been presented to evaluate the performance of rotary desiccant wheels based on different kinds of solid desiccants, e.g. silica gel and LiCl. The first part of the paper presents 'Model 54', which was developed for a silica gel desiccant rotor. The model has been derived from the interpolation of experimental data obtained from industry, and correlations have been developed for predicting outlet temperature and absolute humidity. 'Model 54' consists of 54 coefficients corresponding to each correlation for outlet absolute humidity and temperature, and it is found that the model predicts very well the performance of the silica gel desiccant rotor (Type I). In the second part of the paper, a psychrometric model has been presented to obtain relatively simple correlations for outlet temperature and absolute humidity. The developed psychrometric model is based on the correlations between the relative humidity and enthalpy of the supply and regeneration air streams. The model is used to predict the performance of three types of desiccant rotors manufactured using different kinds of solid desiccants (Types I, II and III). The model is tested against a wide range of measurement data. The developed psychrometric model is simple in nature and able to predict very well the performance of different kinds of desiccant rotors. Copyright © 2002 John Wiley & Sons, Ltd. [source] A model of bovine tuberculosis in the badger Meles meles: an evaluation of control strategies. JOURNAL OF APPLIED ECOLOGY, Issue 3 2001. G.C. Smith. Summary 1. An individual-based stochastic simulation model was used to investigate the control of bovine tuberculosis (TB) in the European badger Meles meles.
Nearly all population and epidemiological parameters were derived from one study site, and the transmission of TB from badgers to cattle was included. The latter is an essential step if reactive badger control strategies are to be modelled. 2. The model appeared to underestimate slightly the rate of population recovery following widespread culling. This may have been due to simulating an isolated population with no immigration and no compensatory increase in fecundity. This should not affect the relative efficacy of each control strategy, but does require further investigation. 3. Of the historical methods of badger control, gassing and the 'clean ring' strategies were the most effective at reducing disease prevalence in the badger and cattle herd breakdown rates. These results agree with those of earlier models. 4. The proactive badger removal operation as part of the current field trial should cause a dramatic decrease in the number of cattle herd breakdowns, but also has the greatest effect on the badger population size. 5. The proactive use of a live test to detect TB, followed by vaccination, appears to reduce substantially cattle herd breakdowns and disease prevalence in the badger. 6. Three combined control strategies gave the best initial reduction in cattle herd breakdown rate and disease prevalence in the badger: (i) a proactive cull followed by reactive test and cull; (ii) continued vaccination and proactive test and cull; and (iii) a continuous proactive test and cull. 7. The results of simulation models suggest that badger vaccination is a very good method of TB control. This is at odds with simple models and requires further investigation. [source] Predicting habitat distribution and frequency from plant species co-occurrence data. JOURNAL OF BIOGEOGRAPHY, Issue 6 2007. Christine Römermann. Abstract Aim: Species frequency data have been widely used in nature conservation to aid management decisions.
To determine species frequencies, information on habitat occurrence is important: a species with a low frequency is not necessarily rare if it occupies all suitable habitats. Often, information on habitat distribution is available for small geographic areas only. We aim to predict grid-based habitat occurrence from grid-based plant species distribution data in a meso-scale analysis. Location: The study was carried out over two spatial extents: Germany and Bavaria. Methods: Two simple models were set up to examine the number of characteristic plant species needed per grid cell to predict the occurrence of four selected habitats (species data from FlorKart, http://www.floraweb.de). Both models were calibrated in Bavaria using available information on habitat distribution, validated for other federal states, and applied to Germany. First, a spatially explicit regression model (a generalized linear model (GLM) with an assumed binomial error distribution of the response variable) was obtained. Second, a spatially independent optimization model was derived that estimated species numbers without using spatial information on habitat distribution. Finally, an additional uncalibrated model was derived that calculated the frequencies of 24 habitats; it was validated using NATURA2000 habitat maps. Results: Using the Bavarian models it was possible to predict habitat distribution and frequency from the co-occurrence of habitat-specific species per grid cell. As the model validations for other German federal states were successful, the models were applied to all of Germany, and habitat distributions and frequencies could be retrieved at the national scale on the basis of habitat-specific species co-occurrences per grid cell. Using the third, uncalibrated model, which includes species distribution data only, it was possible to predict the frequencies of 24 habitats based on the co-occurrence of 24% of formation-specific species per grid cell.
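The first (GLM) model above is a binomial regression of habitat occurrence on the number of characteristic species per grid cell. A minimal sketch of that prediction step, with invented coefficients standing in for the fitted Bavarian values:

```python
import math

def habitat_presence_prob(n_char_species, b0=-4.0, b1=0.9):
    """Logistic (binomial GLM) probability that a habitat occurs in a grid
    cell, given the number of habitat-characteristic plant species recorded
    there. b0 and b1 are illustrative placeholders, not fitted values."""
    eta = b0 + b1 * n_char_species          # linear predictor
    return 1.0 / (1.0 + math.exp(-eta))     # inverse logit link

# Cells with more characteristic species get higher predicted probabilities.
cells = {"cell_A": 1, "cell_B": 5, "cell_C": 12}
predicted = {c: habitat_presence_prob(n) for c, n in cells.items()}
```

Summing such per-cell probabilities (or thresholding them) over all grid cells would give the predicted habitat frequency used in the comparisons above.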
Predicted habitat frequencies deduced from this third model were strongly related to the frequencies in NATURA2000 habitat maps. Main conclusions: It is possible to deduce habitat distributions and frequencies from the co-occurrence of habitat-specific species. For areas partly covered by habitat mappings, calibrated models can be developed and extrapolated to larger areas. If information on habitat distribution is completely lacking, uncalibrated models can still be applied, providing coarse information on habitat frequencies. Predicted habitat distributions and frequencies can be used as a tool in nature conservation, for example as correction factors for species frequencies, as long as the species of interest is not included in the model set-up. [source] The accuracy of matrix population model projections for coniferous trees in the Sierra Nevada, California JOURNAL OF ECOLOGY, Issue 4 2005 PHILLIP J. VAN MANTGEM Summary 1. We assess the use of simple, size-based matrix population models for projecting population trends for six coniferous tree species in the Sierra Nevada, California. We used demographic data from 16 673 trees in 15 permanent plots to create 17 separate time-invariant, density-independent population projection models, and determined differences between trends projected from initial surveys with a 5-year interval and observed data during two subsequent 5-year time steps. 2. We detected departures from the assumptions of the matrix modelling approach in the form of strong growth autocorrelations. We also found evidence of observation errors for measurements of tree growth and, to a more limited degree, recruitment. Loglinear analysis provided evidence of significant temporal variation in demographic rates for only two of the 17 populations. 3. Total population sizes were strongly predicted by model projections, although population dynamics were dominated by carryover from the previous 5-year time step (i.e.
there were few cases of recruitment or death). Fractional changes to overall population sizes were less well predicted. Compared with a null model and a simple demographic model lacking size structure, matrix model projections were better able to predict total population sizes, although the differences were not statistically significant. Matrix model projections were also able to predict short-term rates of survival, growth and recruitment; mortality frequencies were not well predicted. 4. Our results suggest that simple size-structured models can accurately project future short-term changes for some tree populations. However, not all populations were well predicted, and these simple models would probably become more inaccurate over longer projection intervals. The predictive ability of these models would also be limited by disturbance or other events that destabilize demographic rates. [source] MHC studies in nonmodel vertebrates: what have we learned about natural selection in 15 years? JOURNAL OF EVOLUTIONARY BIOLOGY, Issue 3 2003 L. Bernatchez Abstract Elucidating how natural selection promotes local adaptation in interaction with migration, genetic drift and mutation is a central aim of evolutionary biology. While several conceptual and practical limitations are still restraining our ability to study these processes at the DNA level, genes of the major histocompatibility complex (MHC) offer several assets that make them unique candidates for this purpose. Yet, it is unclear what general conclusions can be drawn after 15 years of empirical research documenting MHC diversity in the wild. The general objective of this review is to complement earlier literature syntheses on this topic by focusing on MHC studies other than those of humans and mice. First, this review revealed a strong taxonomic bias, whereby many more studies of MHC diversity in natural populations have dealt with mammals than with all other vertebrate classes combined.
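The time-invariant, density-independent projections assessed in the conifer study above amount to repeated multiplication of a size-class vector by a fixed matrix. A minimal sketch with an invented 3-class matrix, not the authors' fitted values:

```python
def project(A, n, steps=1):
    """Project a size-structured population vector n forward with a
    time-invariant, density-independent matrix A (n_{t+1} = A n_t)."""
    for _ in range(steps):
        n = [sum(A[i][j] * n[j] for j in range(len(n))) for i in range(len(A))]
    return n

# Illustrative 3-class (small/medium/large trees) matrix: column j lists the
# per-capita contribution of class j to each class over one 5-year step.
A = [
    [0.85, 0.00, 0.12],   # stay small + recruitment from large trees
    [0.10, 0.90, 0.00],   # growth into / survival within the medium class
    [0.00, 0.05, 0.97],   # growth into / survival within the large class
]
n0 = [100.0, 50.0, 25.0]
n5 = project(A, n0, steps=1)   # projected sizes after one 5-year time step
```

Comparing such projected vectors against resurvey counts is, in essence, the accuracy test the study performs.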
Secondly, it confirmed that positive selection plays a determinant role in shaping patterns of nucleotide diversity in MHC genes in all vertebrates studied. Yet, future tests of positive selection would greatly benefit from making better use of the increasing number of models potentially offering more statistical rigour and higher resolution in detecting the effect and form of selection. Thirdly, studies that compared patterns of MHC diversity within and among natural populations with neutral expectations have reported higher population differentiation at the MHC than expected either under neutrality or under simple models of balancing selection. Fourthly, several studies showed that MHC-dependent mate preference and kin recognition may provide selective factors maintaining polymorphism in wild outbred populations. However, they also showed that such reproductive mechanisms are complex and context-dependent. Fifthly, several studies provided evidence that the MHC may significantly influence fitness, either by affecting reproductive success or by affecting progeny survival under pathogen infection. Overall, the evidence is compelling that the MHC currently represents the best system available in vertebrates for investigating how natural selection can promote local adaptation at the gene level despite the counteracting actions of migration and genetic drift. We conclude this review by proposing several directions in which future research is needed. [source] Stochastic Volatility Corrections for Interest Rate Derivatives MATHEMATICAL FINANCE, Issue 2 2004 Peter Cotton We study simple models of short rates, such as the Vasicek or CIR models, and compute corrections that come from the presence of fast mean-reverting stochastic volatility. We show how these small corrections can affect the shape of the term structure of interest rates, giving a simple and efficient calibration tool. This is used to price other derivatives such as bond options.
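The Vasicek base model mentioned in the interest-rate abstract above is a mean-reverting (Ornstein-Uhlenbeck) short-rate process. A minimal Euler-discretised sketch with illustrative parameters; the paper's actual contribution, the fast mean-reverting stochastic-volatility correction, would make sigma itself random and is not reproduced here:

```python
import random

def vasicek_path(r0=0.03, kappa=1.0, theta=0.05, sigma=0.01,
                 dt=1 / 252, steps=252, seed=1):
    """Simulate one path of the Vasicek SDE dr = kappa*(theta - r)*dt + sigma*dW
    by Euler discretisation. All parameter values are illustrative only."""
    rng = random.Random(seed)
    r, path = r0, [r0]
    for _ in range(steps):
        r += kappa * (theta - r) * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        path.append(r)
    return path
```

With sigma set to zero the path decays deterministically toward the long-run mean theta, which is the mean-reversion behaviour the term-structure corrections perturb.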
The analysis extends the asymptotic method developed for equity derivatives in Fouque, Papanicolaou, and Sircar (2000b). The assumptions and effectiveness of the theory are tested on yield curve data. [source] The accuracy of downward short- and long-wave radiation at the earth's surface calculated using simple models METEOROLOGICAL APPLICATIONS, Issue 1 2004 J. W. Finch Estimates of the downward global solar and long-wave radiation are commonly made using simple models. We have tested the estimates produced by a number of these simple models against the values predicted by the radiative transfer scheme used in a climate model, in order to determine their suitability for global applications. For clear sky, two simple models were comparable, but under cloudy conditions a combination of a clear-sky model based on the Angstrom-Prescott equation (which deals with the downwelling solar radiation) with a cloud transmissivity utilising total cloud fraction proved best. The lowest root mean square errors were 27 W m⁻² for clear-sky global solar radiation and 90 W m⁻² for cloudy conditions. For downward long-wave radiation in clear-sky conditions, the model of Garratt (1992) performed best, with a root mean square error of 24 W m⁻². However, in cloudy conditions the model of Idso & Jackson (1969) performed best, with a root mean square error of 22 W m⁻², and, as it performs nearly as well as that of Garratt (1992) in clear-sky conditions, it is probably the best choice. Copyright © 2004 Royal Meteorological Society. [source] Understanding the halo-mass and galaxy-mass cross-correlation functions MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 1 2008 Eric Hayashi ABSTRACT We use the Millennium Simulation (MS) to measure the cross-correlation between halo centres and mass (or, equivalently, the average density profiles of dark haloes) in a Lambda cold dark matter (ΛCDM) cosmology.
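The Angstrom-Prescott relation underlying the preferred clear-sky solar model in the radiation abstract above estimates global solar radiation from sunshine duration. A one-line sketch using the common FAO default coefficients, not values fitted in the paper:

```python
def global_solar_radiation(extraterrestrial, sunshine_fraction, a=0.25, b=0.50):
    """Angstrom-Prescott estimate of downward global solar radiation:
    Rs = (a + b * n/N) * Ra, where n/N is the relative sunshine duration and
    Ra the extraterrestrial radiation (same units as the result). a=0.25 and
    b=0.50 are the widely used FAO defaults, given here for illustration."""
    return (a + b * sunshine_fraction) * extraterrestrial
```

Under full sunshine (n/N = 1) this transmits a fraction a + b of the extraterrestrial radiation; under overcast conditions the cloud-transmissivity factor mentioned above would be applied on top.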
We present results for radii in the range 10 h⁻¹ kpc < r < 30 h⁻¹ Mpc and for halo masses in the range 4 × 10¹⁰ < M₂₀₀ < 4 × 10¹⁴ h⁻¹ M⊙. Both at z = 0 and at z = 0.76 these cross-correlations are surprisingly well fitted if the inner region is approximated by a density profile of NFW or Einasto form, the outer region by a biased version of the linear mass autocorrelation function, and the maximum of the two is adopted where they are comparable. We use a simulation of galaxy formation within the MS to explore how these results are reflected in cross-correlations between galaxies and mass. These are directly observable through galaxy–galaxy lensing. Here also we find that simple models can represent the simulation results remarkably well, typically to ~10 per cent. Such models can be used to extend our results to other redshifts, to cosmologies with other parameters, and to other assumptions about how galaxies populate dark haloes. Our galaxy formation simulation already reproduces current galaxy–galaxy lensing data quite well. The characteristic features predicted in the galaxy–galaxy lensing signal should provide a strong test of the ΛCDM cosmology as well as a route to understanding how galaxies form within it. [source] An XMM–Newton observation of Ark 120: the X-ray spectrum of a 'bare' Seyfert 1 nucleus MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 1 2004 S. Vaughan ABSTRACT We report on a long (100 ks) XMM–Newton observation of the bright Seyfert 1 galaxy Arakelian 120. The source previously showed no signs of intrinsic reddening in its infrared–ultraviolet continuum, and previous observations had shown no evidence for ionized absorption in either the ultraviolet or X-ray bands. The new XMM–Newton Reflection Grating Spectrometer data place tight limits on the presence of an ionized X-ray absorber and confirm that the X-ray spectrum of Ark 120 is essentially unmodified by intervening matter. Thus Ark 120 can be considered a 'bare' Seyfert 1 nucleus.
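The piecewise fit described in the halo abstract above (an NFW form for the inner region, a biased linear correlation function outside, taking the maximum where the two are comparable) can be sketched as follows, with all normalisations illustrative:

```python
def nfw_density(r, rho_s=1.0, r_s=1.0):
    """NFW density profile rho(r) = rho_s / ((r/r_s) * (1 + r/r_s)^2).
    rho_s and r_s are free characteristic-density and scale-radius
    parameters; the values here are placeholders."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def halo_mass_profile(r, linear_xi, bias=2.0, rho_s=1.0, r_s=1.0):
    """Piecewise model from the abstract: NFW in the inner region, a linearly
    biased correlation function (bias * xi_lin) in the outer region, adopting
    whichever is larger where they are comparable."""
    return max(nfw_density(r, rho_s, r_s), bias * linear_xi(r))

# Toy outer-region correlation function (a power law, for illustration only).
xi_toy = lambda r: 0.1 / r
```

At small radii the steep NFW term dominates; at large radii the biased linear term takes over, reproducing the two-regime behaviour of the measured cross-correlations.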
This observation therefore offers a clean view of the X-ray spectrum of a 'normal' Seyfert galaxy free from absorption effects. The spectrum shows a Doppler-broadened iron emission line (FWHM ≈ 3 × 10⁴ km s⁻¹) and a smooth, continuous soft excess which appears to peak at an energy ~0.5 keV. This adds weight to the claim that genuine soft excesses (i.e. those due to a real steepening of the underlying continuum below ~2 keV) are ubiquitous in Seyfert 1 spectra. However, the detailed shape of the excess could not be reproduced by any of the simple models tested (power laws, blackbodies, Comptonized blackbodies, accretion disc reflection). This observation therefore demonstrates both the need to understand the soft excess (as a significant contributor to the luminosity of most Seyfert 1s) and the inability of the existing, simple models to explain it. [source] Modelling the role of social behavior in the persistence of the alpine marmot Marmota marmota OIKOS, Issue 1 2003 Volker Grimm A general rule of thumb for biological conservation, obtained from simple models of hypothetical species, is that for populations with strong environmental noise, moderate increases in habitat size or quality do not substantially reduce extinction risk. However, whether this rule also holds for real species with complex behavior, such as social species with breeding units and reproductive suppression, is uncertain. Here we present a population viability analysis of the alpine marmot Marmota marmota, which displays marked social behavior, i.e. it lives in social groups of up to twenty individuals. Our analysis is based on a long-term field study carried out in the Bavarian Alps since 1982. During the first fifteen years of this study, 687 marmots were individually marked and the movements and fate of 98 dispersing marmots were recorded with radio-telemetry. Thus, in contrast to most other viability analyses of spatially structured populations, good data about dispersal exist.
A model was constructed which is individual-based, spatially explicit at the scale of clusters of neighbouring territories, and spatially implicit at larger scales. The decisive aspect of marmot life history, winter mortality, is described by logistic regression, in which mortality is increased by age and the severity of winter, and decreased by the number of subdominant individuals present in a group. Model predictions of group size distribution are in good agreement with the results of the field study. The model shows that the effect of sociality on winter mortality is very effective in buffering environmental harshness and fluctuations. This underpins theoretical results stating that the appropriate measure of the strength of environmental noise is the ratio between the variance of the population growth rate and the intrinsic rate of increase. The lessons from our study for biological conservation are that simple, unstructured models may not be sufficient to assess the viability of species with complex behavioral traits, and that even moderate increases in habitat capacity may substantially reduce extinction risk even if environmental fluctuations seem high. [source] Use of models to assess the reduction in contamination of water bodies by agricultural pesticides through the implementation of policy instruments: a case study of the Voluntary Initiative in the UK PEST MANAGEMENT SCIENCE (FORMERLY: PESTICIDE SCIENCE), Issue 12 2006 James Garratt Abstract Through normal agricultural use, pesticides may reach environmental water bodies via several routes of entry. Various policies and initiatives exist to reduce the effects of pesticides in the environment. One such initiative in the UK is the Voluntary Initiative (VI). The VI is a voluntary scheme put forward by the Crop Protection Association, with other crop protection and farming organisations, to reduce the environmental impacts of pesticides.
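The logistic winter-mortality submodel described in the marmot abstract above can be sketched with invented coefficients; only the signs of the effects (mortality rising with age and winter severity, falling with the number of subdominants) follow the abstract:

```python
import math

def winter_mortality_prob(age, winter_severity, n_subdominants,
                          b0=-2.0, b_age=0.15, b_sev=0.8, b_sub=-0.3):
    """Logistic-regression form of the winter-mortality submodel: the
    probability of dying over winter rises with age and winter severity and
    falls with the number of subdominant helpers present in the group.
    All coefficient values are hypothetical, not the fitted Bavarian ones."""
    eta = b0 + b_age * age + b_sev * winter_severity + b_sub * n_subdominants
    return 1.0 / (1.0 + math.exp(-eta))
```

The buffering effect of sociality discussed above corresponds to the negative helper term: larger groups pull the mortality probability down in harsh winters.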
Mathematical models of pesticide fate can usefully be applied to examine the impact of factors influencing the contamination of water bodies by pesticides. The work reported here used water quality models to examine how changes in farmer behaviour could potentially impact pesticide contamination of environmental water bodies. As far as possible, uncalibrated, standard regulatory models were used. Where suitable models were not available, simple models were defined for the purposes of the study and calibrated using literature data. Scenarios were developed to represent different standards of practice with respect to pesticide user behaviour. The development of these scenarios was guided by the Crop Protection Management Plan (CPMP) aspect of the VI. A framework for the use of modelling in the evaluation of the VI is proposed. The results of the modelling study suggest that, in several areas, widespread adoption of the measures proposed in the VI could lead to reductions in pesticide contamination of environmental water bodies. These areas include pesticide contamination from farmyards, spray drift and field runoff. In other areas (including pesticide leaching to groundwater and contamination of surface water from field drains) the benefits that may potentially be gained from the VI are less clear. 
A framework to evaluate the VI should take into consideration the following aspects: (1) groundwater is more at risk when there is a combination of leachable compounds, vulnerable soils, shallow groundwater and high product usage; (2) surface water contamination from drains is most likely when heavy rain falls soon after application, the soils are vulnerable and product usage is high; (3) surface water contamination from drift is most likely when the distance between the spray boom and water body is small and product usage is high; (4) surface water contamination from farmyards is dependent on the nature of the farmyard surface, the competence of the spray operator and the level of product usage. Any policy or initiative to reduce pesticide contamination should be measured against farmer behaviour in these areas. © Crown copyright 2006. Reproduced with the permission of Her Majesty's Stationery Office. Published by John Wiley & Sons, Ltd. [source]
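Point (3) above, that drift contamination grows as the boom-to-water distance shrinks and product usage rises, can be illustrated with a toy exponential drift curve; all constants are invented for illustration and are not taken from any regulatory drift table:

```python
import math

def drift_load(distance_m, applied_dose, near_fraction=0.05, k=0.15):
    """Toy spray-drift deposition model: the pesticide load reaching a water
    body falls off roughly exponentially with distance from the spray boom
    and scales linearly with the applied dose. near_fraction (deposition
    fraction at the boom) and k (per-metre decay) are hypothetical values."""
    return applied_dose * near_fraction * math.exp(-k * distance_m)
```

Such a curve captures the qualitative dependence only; a real evaluation framework would substitute measured drift curves and field-specific buffer distances.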