Modelling Approach



Selected Abstracts


Modelling Approach for Planar Self-Breathing PEMFC and Comparison with Experimental Results

FUEL CELLS, Issue 4 2004
A. Schmitz
Abstract This paper presents a model-based analysis of a proton exchange membrane fuel cell (PEMFC) with a planar design as the power supply for portable applications. The cell is operated with hydrogen and consists of an open cathode side allowing for passive, self-breathing operation. This planar fuel cell is fabricated using printed circuit board (PCB) technology. Long-term stability of this type of fuel cell has been demonstrated. A stationary, two-dimensional, isothermal, mathematical model of the planar fuel cell is developed. Fickian diffusion of the gaseous components (O2, H2, H2O) in the gas diffusion layers and the catalyst layers is accounted for. The transport of water is considered in the gaseous phase only. The electrochemical reactions are described by the Tafel equation. The potential and current balance equations are solved separately for protons and electrons. The resulting system of partial differential equations is solved by a finite element method using FEMLAB (COMSOL Inc.) software. Three different cathode opening ratios are realized and the corresponding polarization curves are measured. The measurements are compared to numerical simulation results. The model reproduces the shape of the measured polarization curves, and comparable limiting current density values, due to mass transport limitation, are obtained. The simulated distribution of gaseous water shows that an increase of the water concentration under the rib occurs. It is concluded that liquid water may condense under the rib, leading to a reduction of the open pore space accessible for gas transport. Thus, a broad rib not only hinders the oxygen supply itself, but may also cause additional mass transport problems due to the condensation of water. [source]
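
Since the abstract names the Tafel equation as the kinetic law, a standard form of that equation for the cathodic oxygen reduction reaction is sketched below; the exchange current density i0, transfer coefficient αc and the concentration dependence are generic textbook choices, not the authors' exact parameterization.

```latex
% Tafel kinetics for the cathodic oxygen reduction (generic sketch):
i_c \;=\; i_0 \, \frac{c_{O_2}}{c_{O_2}^{\mathrm{ref}}}
      \exp\!\left( -\frac{\alpha_c F}{R T}\, \eta \right)
% with eta the local overpotential, F Faraday's constant,
% R the gas constant and T the temperature.
```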


A Mixed Modelling Approach for Randomized Experiments with Repeated Measures

JOURNAL OF AGRONOMY AND CROP SCIENCE, Issue 4 2004
H. P. Piepho
Abstract Repeated measurements on the same experimental unit are common in plant research. Due to the lack of randomization and the serial ordering of observations on the same unit, such data give rise to correlations, which need to be accounted for in statistical analysis. Mixed modelling provides a flexible framework for this task. The present paper proposes a general method to formulate mixed models for designed experiments with repeated measurements. The approach is illustrated with several examples. [source]
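
To illustrate the kind of model formulation the paper generalizes, here is a minimal mixed-model fit for repeated measures using Python's statsmodels; the data file, column names and formula are hypothetical stand-ins, and the paper itself works with richer covariance structures than a plain random intercept.

```python
# Minimal sketch: mixed model for repeated measures on experimental units.
# Column names (yield_, week, treatment, plot) are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("repeated_measures.csv")  # hypothetical file

# A random intercept per plot accounts for correlation among
# repeated observations taken on the same experimental unit.
model = smf.mixedlm("yield_ ~ week * treatment", data, groups=data["plot"])
result = model.fit()
print(result.summary())
```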


Modelling approach to analyse the effects of nitrification inhibition on primary production

FUNCTIONAL ECOLOGY, Issue 1 2009
S. Boudsocq
Summary 1. Wet tropical savannas have high grass productivity despite the fact that nitrogen is generally limiting for primary production and soil nutrient content is typically very low. Nitrogen recycling, and especially nitrification, is supposed to be a strong determinant of the balance between conservation and loss of nutrients at the ecosystem level. The high primary production observed in wet tropical savannas might be due to tight nutrient cycling and the fact that some grass species inhibit soil nitrification. 2. Using a general theoretical ecosystem model taking both nitrate and ammonium into account, we investigate analytically, using a four-compartment differential-equation system, the general conditions under which nitrification inhibition enhances primary production. We then estimate the quantitative impact of such a mechanism on the dynamics and budget of nitrogen in a well-documented ecosystem, the Lamto savanna (Ivory Coast). This ecosystem is dominated by the grass Hyparrhenia diplandra, which drastically reduces nitrification in the whole savanna except for a small zone. While this small zone supports a lower grass primary production, nitrification is higher, most likely due to the presence of another genotype of H. diplandra, which has no effect on nitrification processes. Ultimately, we test whether differences in nitrification fluxes can alone explain this variation in primary production. 3. Model analysis shows that nitrification inhibition enhances primary production only if the recycling efficiency of ammonium (that is, the fraction of nitrogen passing through a compartment that stays inside the ecosystem) is higher than the recycling efficiency of nitrate. This condition probably holds in most soils, as ammonium is less mobile than nitrate and is not subject to denitrification. It also depends partially on the relative affinity of plants for ammonium or nitrate. The numerical predictions of this model for the Lamto savanna show that variations in nitrification inhibition capacity may explain observed differences in primary production. 4. In conclusion, we find that nitrification inhibition is a process which probably enhances ecosystem fertility in a sustainable way, particularly in situations of high nitrate leaching and denitrification fluxes. This mechanism could explain the ecological advantage exhibited by African grasses over indigenous grasses in South American pastures. [source]
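
A minimal sketch of the kind of four-compartment nitrogen model described above is given below; the compartments, fluxes and parameter values are simplified illustrations, not the authors' actual equations.

```python
# Schematic four-compartment N cycle: plants (P), soil organic N (N),
# ammonium (A), nitrate (T). All rate constants are hypothetical.
from scipy.integrate import solve_ivp

def n_cycle(t, y, uptake_a=0.3, uptake_t=0.2, mineralization=0.1,
            nitrification=0.4, death=0.05, leach_t=0.1, denitrif=0.05):
    P, N, A, T = y
    dP = uptake_a * A + uptake_t * T - death * P
    dN = death * P - mineralization * N
    dA = mineralization * N - uptake_a * A - nitrification * A
    dT = nitrification * A - uptake_t * T - (leach_t + denitrif) * T
    return [dP, dN, dA, dT]

# Lowering `nitrification` mimics inhibition by grasses; compare the
# equilibrium plant compartment with and without inhibition.
sol = solve_ivp(n_cycle, (0, 500), [1.0, 1.0, 0.1, 0.1])
print(sol.y[:, -1])  # approximate steady-state compartment sizes
```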


The adoption behaviour of information technology industry in increasing business-to-business integration sophistication

INFORMATION SYSTEMS JOURNAL, Issue 1 2010
William Y C Wang
Abstract A supply chain is not a linear type of inter-firm structure but is often considered as a network. Business networks are underpinned by the firms' resources, social legitimacy and associated power, which are also seen in the adoption theories of business-to-business integration (B2Bi) in the supply chain. However, there appears to be a scarcity of discussion on the theoretical relationship between them. This paper aims to enrich the previous findings of technology adoption theories in a business-to-business context by proposing a structural model and using a Structural Equation Modelling approach to test it. It focuses on the integrated supply chain to test, analyse and extend the adoption factors to the use of computer-based information systems (IS). The survey data were collected in the Taiwanese Information Technology Industry. The path analyses provide answers to three issues raised by the research framework and confirm the associations between a firm's existing system support readiness and the network determinants outside organizational boundaries. Further, the study identifies the interrelationships among these factors and indicates that some of them mediate enterprises' investment behaviour in extending current IS for B2Bi purposes. [source]


Fishery-induced demographic changes in the timing of spawning: consequences for reproductive success

FISH AND FISHERIES, Issue 3 2009
Peter J. Wright
Abstract Demography can have a significant effect on reproductive timing and the magnitude of such an effect can be comparable to environmentally induced variability. This effect arises because the individuals of many fish species spawn progressively earlier within a season and may produce more egg batches over a longer period as they get older, thus extending their lifetime spawning duration. Inter-annual variation in spawning time is a critical factor in reproductive success because it affects the early environmental conditions experienced by progeny and the period they have to complete phases of development. By reducing the average lifetime spawning duration within a fish stock, fishing pressure could be increasing the variability in reproductive success and reducing long-term stock reproductive potential. Empirical estimates of selection on birth date, from experiments and using otolith microstructure, demonstrate that there is considerable variation in selection on birth date both within a spawning season and between years. The few multi-year studies that have linked egg production with the survival of progeny to the juvenile stage further highlight the uncertainty that adults face in timing their spawning to optimize offspring survival. The production of many small batches of eggs over a long period of time within a season and over a lifetime is therefore likely to decrease the variance in, and increase the mean of, progeny survival. Quantifying this effect of demography on variability in survival requires a focus on lifetime reproductive success rather than year-specific relationships between recruitment and stock reproductive potential. Modelling approaches are suggested that can better quantify the likely impact of changing spawning times on year-class strength and lifetime reproductive potential. The evidence presented strengthens the need to avoid fishing severely age-truncated fish stocks. [source]


Estimation of the day-specific probabilities of conception: current state of the knowledge and the relevance for epidemiological research

PAEDIATRIC & PERINATAL EPIDEMIOLOGY, Issue 2006
Courtney D. Lynch
Summary Conception, as defined by the fertilisation of an ovum by a sperm, marks the beginning of human development. Currently, a biomarker of conception is not available; as conception occurs shortly after ovulation, the latter can be used as a proxy for the time of conception. In the absence of serial ultrasound examinations, ovulation cannot be readily visualised, leaving researchers to rely on proxy measures of ovulation that are subject to error. The most commonly used proxy measures include: charting basal body temperature, monitoring cervical mucus, and measuring urinary metabolites of oestradiol and luteinising hormone. Establishing the timing of ovulation and the fertile window has practical utility in that it will assist couples in appropriately timing intercourse to achieve or avoid pregnancy. Identifying the likely day of conception is clinically relevant because it has the potential to facilitate more accurate pregnancy dating, thereby reducing the iatrogenic risks associated with uncertain gestation. Using data from prospective studies of couples attempting to conceive, several researchers have developed models for estimating the day-specific probabilities of conception. Elucidating these will allow researchers to more accurately estimate the day of conception, thus spawning research initiatives that will expand our currently limited knowledge about the effect of exposures at critical periconceptional windows. While basal body temperature charting and cervical mucus monitoring have been used with success in field-based studies for many years, recent advances in science and technology have made it possible for women to get instant feedback regarding their daily fertility status by monitoring urinary metabolites of reproductive hormones in the privacy of their own homes. Not only are innovations such as luteinising hormone test kits and digital fertility monitors likely to increase study compliance and participation rates, they provide valuable prospective data that can be used in epidemiological research. Although we have made great strides in estimating the timing and length of the fertile window, more work is needed to elucidate the day-specific probabilities of conception using proxy measures of ovulation that are inherently subject to error. Modelling approaches that incorporate the use of multiple markers of ovulation offer great promise to fill these important data gaps. [source]
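
For concreteness, the most widely used family of such models (originating with Barrett and Marshall and extended by Schwartz and colleagues) combines day-specific probabilities into a cycle-level conception probability; the form below is that standard model, not necessarily the exact specification of every study reviewed.

```latex
% Cycle-level probability of conception given intercourse on days k of a
% fertile window W, with day-specific probabilities p_k (standard form):
P(\text{conception}) \;=\; 1 \;-\; \prod_{k \in W} (1 - p_k)^{X_k}
% where X_k = 1 if intercourse occurred on day k, and 0 otherwise.
```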


Modelling approaches to compare sorption and degradation of metsulfuron-methyl in laboratory micro-lysimeter and batch experiments

PEST MANAGEMENT SCIENCE (FORMERLY: PESTICIDE SCIENCE), Issue 12 2003
Maik Heistermann
Abstract Results of laboratory batch studies often differ from those of outdoor lysimeter or field plot experiments, with respect to degradation as well as sorption. Laboratory micro-lysimeters are a useful device for closing the gap between laboratory and field by both including relevant transport processes in undisturbed soil columns and allowing controlled boundary conditions. In this study, sorption and degradation of the herbicide metsulfuron-methyl in a loamy silt soil were investigated by applying inverse modelling techniques to data sets from different experimental approaches under laboratory conditions at a temperature of 10 °C: first, batch degradation studies and, second, column experiments with undisturbed soil cores (28 cm length × 21 cm diameter). The column experiments included leachate and soil profile analysis at two different run times. A sequential extraction method was applied in both parts of the study in order to determine different binding states of the test item within the soil. Data were modelled using ModelMaker and Hydrus-1D/2D. Metsulfuron-methyl half-life in the batch experiments (t1/2 = 66 days) was shown to be about four times longer than in the micro-lysimeter studies (t1/2 about 17 days). Kinetic sorption was found to be a significant process in both batch and column experiments. Applying the one-rate-two-site kinetic sorption model to the sequential extraction data, it was possible to associate the more strongly bound fraction of metsulfuron-methyl with its kinetically sorbed fraction in the model. Although the columns exhibited strong significance of multi-domain flow (soil heterogeneity), the comparison between bromide and metsulfuron-methyl leaching and profile data showed clear evidence for kinetic sorption effects. The use of soil profile data had significant impact on parameter estimates concerning sorption and degradation. The simulated leaching of metsulfuron-methyl as it resulted from parameter estimation was shown to decrease when soil profile data were considered in the parameter estimation procedure. Moreover, it was shown that the significance of kinetic sorption can only be demonstrated by the additional use of soil profile data in parameter estimation. Thus, the exclusive use of efflux data from leaching experiments at any scale can lead to fundamental misunderstandings of the underlying processes. Copyright © 2003 Society of Chemical Industry [source]
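
For reference, a common formulation of the one-rate, two-site kinetic sorption model mentioned above is sketched here; the symbols follow standard usage and the split into instantaneous and kinetic sites is the textbook form, which may differ in detail from the authors' implementation.

```latex
% Two-site sorption: an equilibrium fraction f of sites sorbs instantaneously,
% the remainder kinetically at rate alpha (standard form):
S_e = f \, K_d \, C, \qquad
\frac{\partial S_k}{\partial t} = \alpha \left[ (1 - f) \, K_d \, C - S_k \right]
% with total sorbed mass S = S_e + S_k, C the dissolved concentration,
% and K_d the sorption coefficient.
```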


Tangible Heritage: Production of Astrolabes on a Laser Engraver

COMPUTER GRAPHICS FORUM, Issue 8 2008
G. Zotti
I.3.5 [Computer Graphics]: Computational geometry and object modelling: geometric algorithms, languages and systems; I.3.8 [Computer Graphics]: Applications
Abstract The astrolabe, an analog computing device, used to be the iconic instrument of astronomers during the Middle Ages. It allowed a multitude of operations of practical astronomy which were otherwise cumbersome to perform in an epoch when mathematics had apparently almost been forgotten. Usually made from wood or sheet metal, a few hundred instruments, mostly of brass, have survived to the present day and are valuable museum showpieces. This paper explains a procedural modelling approach for the construction of the classical kinds of astrolabes, which allows a wide variety of applications from plain explanatory illustrations to three-dimensional (3D) models, and even the production of working physical astrolabes usable for public or classroom demonstrations. [source]
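
The geometric core of classical astrolabe construction is the stereographic projection from the south celestial pole onto the equatorial plane; the helper below sketches that projection for a circle of constant declination, with the plate scaling chosen arbitrarily for illustration.

```python
# Stereographic projection used in classical astrolabe plates (sketch).
# A circle of declination delta projects to a circle of radius
# r = R_eq * tan((90 deg - delta) / 2) centred on the projected pole.
import math

def declination_circle_radius(delta_deg: float, r_equator: float = 1.0) -> float:
    """Radius of the projected circle for declination delta (degrees)."""
    return r_equator * math.tan(math.radians(90.0 - delta_deg) / 2.0)

# Example: the tropics and equator on a plate with unit equatorial radius.
for name, delta in [("Capricorn", -23.44), ("Equator", 0.0), ("Cancer", 23.44)]:
    print(f"{name:10s} r = {declination_circle_radius(delta):.3f}")
```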


Assessing a numerical cellular braided-stream model with a physical model

EARTH SURFACE PROCESSES AND LANDFORMS, Issue 5 2005
Andrea B. Doeschl-Wilson
Abstract A. B. Murray and C. Paola (1994, Nature, vol. 371, pp. 54–57; 1997, Earth Surface Processes and Landforms, vol. 22, pp. 1001–1025) proposed a cellular model for braided river dynamics as an exploratory device for investigating the conditions necessary for the occurrence of braiding. The model reproduces a number of the general morphological and dynamic features of braided rivers in a simplified form. Here we test the representation of braided channel morphodynamics in the Murray–Paola model against the known characteristics (mainly from a sequence of high resolution digital elevation models) of a physical model of a braided stream. The overall aim is to further the goals of the exploratory modelling approach by first investigating the capabilities and limitations of the existing model and then by proposing modifications and alternative approaches to modelling of the essential features of braiding. The model confirms the general inferences of Murray and Paola (1997) about model performance. However, the modelled evolution shows little resemblance to the real evolution of the small-scale laboratory river, although this depends to some extent on the coarseness of the grid used in the model relative to the scale of the topography. The model does not reproduce the bar-scale topography and dynamics even when the grid scale and amplitude of topography are adapted to be equivalent to the original Murray–Paola results. Strong dependence of the modelled processes on local bed slopes and the tendency for the model to adopt its own intrinsic scale, rather than adapt to the scale of the pre-existing topography, appear to be the main causes of the differences between numerical model results and the physical model morphology and dynamics. The model performance can be improved by modification of the model equations to more closely represent the water surface, but as an exploratory approach hierarchical modelling promises greater success in overcoming the identified shortcomings. Copyright © 2005 John Wiley & Sons, Ltd. [source]
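
For context, the sediment-routing rule at the heart of the Murray–Paola cellular model is often summarized in the general form below; the original papers explore several rule variants, so the coefficient K and exponent m here are placeholders rather than calibrated values.

```latex
% Murray-Paola-type cellular sediment flux rule (schematic general form):
Q_{s,i} \;=\; K \,\left( Q_i \, S_i \right)^{m}
% where Q_i is the water discharge routed into downstream cell i,
% S_i the local bed slope, and K, m are model parameters.
```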


Testing a model for predicting the timing and location of shallow landslide initiation in soil-mantled landscapes

EARTH SURFACE PROCESSES AND LANDFORMS, Issue 9 2003
M. Casadei
Abstract The growing availability of digital topographic data and the increased reliability of precipitation forecasts invite modelling efforts to predict the timing and location of shallow landslides in hilly and mountainous areas in order to reduce risk to an ever-expanding human population. Here, we exploit a rare data set to develop and test such a model. In a 1·7 km2 catchment, near-annual aerial photographic coverage records just three single storm events over a 45 year period that produced multiple landslides. Such data enable us to test model performance by running the entire rainfall time series and determining whether just those three storms are correctly detected. To do this, we link a dynamic and spatially distributed shallow subsurface runoff model (similar to TOPMODEL) to an infinite slope model to predict the spatial distribution of shallow landsliding. The spatial distribution of soil depth, a strong control on local landsliding, is predicted from a process-based model. Because of its common availability, daily rainfall data were used to drive the model. Topographic data were derived from digitized 1 : 24 000 US Geological Survey contour maps. Analysis of the landslides shows that 97 occurred in 1955, 37 in 1982 and five in 1998, although the heaviest rainfall was in 1982. Furthermore, intensity–duration analysis of available daily and hourly rainfall from the closest raingauges does not discriminate those three storms from others that did not generate failures. We explore the question of whether a mechanistic modelling approach is better able to identify landslide-producing storms. Landslide and soil production parameters were fixed from studies elsewhere. Four hydrologic parameters characterizing the saturated hydraulic conductivity of the soil and underlying bedrock and its decline with depth were first calibrated on the 1955 landslide record. Success was characterized as the largest number of actual landslides predicted with the least total area predicted to be unstable. Because landslide area was consistently overpredicted, a threshold catchment area of predicted slope instability was used to define whether a rainstorm was a significant landslide producer. Many combinations of the four hydrological parameters performed equally well for the 1955 event, but only one combination successfully identified the 1982 storm as the only landslide-producing storm during the period 1980–86. Application of this parameter combination to the entire 45 year record successfully identified the three events, but also predicted that two other landslide-producing events should have occurred. This performance is significantly better than the empirical intensity–duration threshold approach, but requires considerable calibration effort. Overprediction of instability, both for storms that produced landslides and for non-producing storms, appears to arise from at least four causes: (1) coarse rainfall data time scale and inability to document short rainfall bursts and predict pressure wave response; (2) absence of local rainfall data; (3) legacy effect of previous landslides; and (4) inaccurate topographic and soil property data. Greater resolution of spatial and rainfall data, as well as topographic data, coupled with systematic documentation of landslides to create time series to test models, should lead to significant improvements in shallow landslide forecasting. Copyright © 2003 John Wiley & Sons, Ltd. [source]
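
For reference, the infinite slope stability criterion coupled to the subsurface flow model is conventionally written as a factor of safety; the expression below is the standard form, with symbols defined in the comments, and may differ in detail from the authors' implementation.

```latex
% Infinite slope factor of safety (standard form; failure when FS < 1):
FS \;=\; \frac{c' + \left( \rho_s g z - \rho_w g h \right) \cos^2\theta \,\tan\phi'}
             {\rho_s g z \,\sin\theta \cos\theta}
% with c' effective cohesion, z soil depth, h saturated thickness,
% theta slope angle, phi' friction angle, and rho_s, rho_w the bulk
% soil and water densities.
```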


An attenuation model for distant earthquakes

EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 2 2004
Adrian Chandler
Abstract Large magnitude earthquakes generated at source–site distances exceeding 100 km are typified by low-frequency (long-period) seismic waves. Such induced ground shaking can be disproportionately destructive due to its high displacement, and possibly high velocity, shaking characteristics. Distant earthquakes represent a potentially significant safety hazard in certain low and moderate seismic regions where seismic activity is governed by major distant sources as opposed to nearby (regional) background sources. Examples are parts of the Indian sub-continent, Eastern China and Indo-China. The majority of ground motion attenuation relationships currently available for applications in active seismic regions may not be suitable for handling long-distance attenuation, since the significance of distant earthquakes is mainly confined to certain low to moderate seismicity regions. Thus, the effects of distant earthquakes are often not accurately represented by conventional empirical models which were typically developed from curve-fitting earthquake strong-motion data from active seismic regions. Numerous well-known existing attenuation relationships are evaluated in this paper, to highlight their limitations in long-distance applications. In contrast, basic seismological parameters such as the Quality factor (Q-factor) could provide a far more accurate representation for the distant attenuation behaviour of a region, but such information is seldom used by engineers in any direct manner. The aim of this paper is to develop a set of relationships that provide a convenient link between the seismological Q-factor (amongst other factors) and response spectrum attenuation. The use of Q as an input parameter to the proposed model enables valuable local seismological information to be incorporated directly into response spectrum predictions. The application of this new modelling approach is demonstrated by examples based on the Chi-Chi earthquake (Taiwan and South China), Gujarat earthquake (Northwest India), Nisqually earthquake (region surrounding Seattle) and Sumatran-fault earthquake (recorded in Singapore). Field recordings have been obtained from these events for comparison with the proposed model. The accuracy of the stochastic simulations and the regression analysis has been confirmed by comparisons between the model calculations and the actual field observations. It is emphasized that obtaining representative estimates for Q for input into the model is equally important. Thus, this paper forms part of the long-term objective of the authors to develop more effective communications across the engineering and seismological disciplines. Copyright © 2003 John Wiley & Sons, Ltd. [source]
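
The standard seismological link between Q and attenuation referred to above can be sketched as the anelastic amplitude decay of a wave of frequency f over a travel path R; this generic whole-path form is illustrative only, and omits the geometric spreading and site terms a full model would include.

```latex
% Anelastic attenuation of spectral amplitude with distance (generic form):
A(f, R) \;=\; A_0(f)\, \exp\!\left( -\frac{\pi f R}{Q(f)\, c} \right)
% where c is the shear-wave velocity and Q(f) the regional quality factor.
```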


Modelling species distributions in Britain: a hierarchical integration of climate and land-cover data

ECOGRAPHY, Issue 3 2004
Richard G. Pearson
A modelling framework for studying the combined effects of climate and land-cover changes on the distribution of species is presented. The model integrates land-cover data into a correlative bioclimatic model in a scale-dependent hierarchical manner, whereby Artificial Neural Networks are used to characterise species' climatic requirements at the European scale and land-cover requirements at the British scale. The model has been tested against an alternative non-hierarchical approach and has been applied to four plant species in Britain: Rhynchospora alba, Erica tetralix, Salix herbacea and Geranium sylvaticum. Predictive performance has been evaluated using Cohen's Kappa statistic and the area under the Receiver Operating Characteristic curve, and a novel approach to identifying thresholds of occurrence which utilises three levels of confidence has been applied. Results demonstrate reasonable to good predictive performance for each species, with the main patterns of distribution simulated at both 10 km and 1 km resolutions. The incorporation of land-cover data was found to significantly improve purely climate-driven predictions for R. alba and E. tetralix, enabling regions with suitable climate but unsuitable land-cover to be identified. The study thus provides an insight into the roles of climate and land-cover as determinants of species' distributions and it is demonstrated that the modelling approach presented can provide a useful framework for making predictions of distributions under scenarios of changing climate and land-cover type. The paper confirms the potential utility of multi-scale approaches for understanding environmental limitations to species' distributions, and demonstrates that the search for environmental correlates with species' distributions must be addressed at an appropriate spatial scale. Our study contributes to the mounting evidence that hierarchical schemes are characteristic of ecological systems. [source]


Patterns and causes of species richness: a general simulation model for macroecology

ECOLOGY LETTERS, Issue 9 2009
Nicholas J. Gotelli
Abstract Understanding the causes of spatial variation in species richness is a major research focus of biogeography and macroecology. Gridded environmental data and species richness maps have been used in increasingly sophisticated curve-fitting analyses, but these methods have not brought us much closer to a mechanistic understanding of the patterns. During the past two decades, macroecologists have successfully addressed technical problems posed by spatial autocorrelation, intercorrelation of predictor variables and non-linearity. However, curve-fitting approaches are problematic because most theoretical models in macroecology do not make quantitative predictions, and they do not incorporate interactions among multiple forces. As an alternative, we propose a mechanistic modelling approach. We describe computer simulation models of the stochastic origin, spread, and extinction of species' geographical ranges in an environmentally heterogeneous, gridded domain and describe progress to date regarding their implementation. The output from such a general simulation model (GSM) would, at a minimum, consist of the simulated distribution of species ranges on a map, yielding the predicted number of species in each grid cell of the domain. In contrast to curve-fitting analysis, simulation modelling explicitly incorporates the processes believed to be affecting the geographical ranges of species and generates a number of quantitative predictions that can be compared to empirical patterns. We describe three of the 'control knobs' for a GSM that specify simple rules for dispersal, evolutionary origins and environmental gradients. Binary combinations of different knob settings correspond to eight distinct simulation models, five of which are already represented in the literature of macroecology. The output from such a GSM will include the predicted species richness per grid cell, the range size frequency distribution, the simulated phylogeny and simulated geographical ranges of the component species, all of which can be compared to empirical patterns. Challenges to the development of the GSM include the measurement of goodness of fit (GOF) between observed data and model predictions, as well as the estimation, optimization and interpretation of the model parameters. The simulation approach offers new insights into the origin and maintenance of species richness patterns, and may provide a common framework for investigating the effects of contemporary climate, evolutionary history and geometric constraints on global biodiversity gradients. With further development, the GSM has the potential to provide a conceptual bridge between macroecology and historical biogeography. [source]
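
To make the idea of a general simulation model concrete, the toy sketch below spreads species ranges stochastically on a gridded domain and tallies richness per cell; the dispersal rule, gradient and parameter values are invented for illustration and are far simpler than the knob settings the authors describe.

```python
# Toy range-spread simulation on a gridded domain (illustrative only).
import numpy as np

rng = np.random.default_rng(42)
ROWS, COLS, N_SPECIES, STEPS = 30, 30, 50, 40

# Simple environmental gradient: suitability declines from row 0 to ROWS-1.
suitability = np.linspace(1.0, 0.2, ROWS)[:, None] * np.ones((ROWS, COLS))

richness = np.zeros((ROWS, COLS), dtype=int)
for _ in range(N_SPECIES):
    occupied = np.zeros((ROWS, COLS), dtype=bool)
    r, c = rng.integers(ROWS), rng.integers(COLS)  # random origin cell
    occupied[r, c] = True
    for _ in range(STEPS):  # spread to suitable neighbouring cells
        fr, fc = np.nonzero(occupied)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr = np.clip(fr + dr, 0, ROWS - 1)
            nc = np.clip(fc + dc, 0, COLS - 1)
            accept = rng.random(nr.size) < suitability[nr, nc]
            occupied[nr[accept], nc[accept]] = True
    richness += occupied

print("max richness per cell:", richness.max())
```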


Are there general mechanisms of animal home range behaviour?

ECOLOGY LETTERS, Issue 6 2008
A review and prospects for future research
Abstract Home range behaviour is a common pattern of space use, having fundamental consequences for ecological processes. However, a general mechanistic explanation is still lacking. Research is split into three separate areas of inquiry (movement models based on random walks, individual-based models based on optimal foraging theory, and a statistical modelling approach) which have developed without much productive contact. Here we review recent advances in modelling home range behaviour, focusing particularly on the problem of identifying mechanisms that lead to the emergence of stable home ranges from unbounded movement paths. We discuss the issue of spatiotemporal scale, which is rarely considered in modelling studies, as well as highlighting the need to consider more closely the dynamical nature of home ranges. Recent methodological and theoretical advances may, however, soon lead to a unified approach that conceptually unifies our understanding of linkages among home range behaviour and ecological or evolutionary processes. [source]


Recruitment of burbot (Lota lota L.) in Lake Erie: an empirical modelling approach

ECOLOGY OF FRESHWATER FISH, Issue 3 2010
M. A. Stapanian
Stapanian MA, Witzel LD, Cook A. Recruitment of burbot (Lota lota L.) in Lake Erie: an empirical modelling approach. Ecology of Freshwater Fish 2010: 19: 326–337. Published 2010. This article is a US Government work and is in the public domain in the USA. Abstract World-wide, many burbot Lota lota (L.) populations have been extirpated or are otherwise in need of conservation measures. By contrast, burbot made a dramatic recovery in Lake Erie during 1993–2001 but declined during 2002–2007, due in part to a sharp decrease in recruitment. We used Akaike's Information Criterion to evaluate 129 linear regression models that included all combinations of one to seven ecological indices as predictors of burbot recruitment. Two models were substantially supported by the data: (i) the number of days in which water temperatures were within optimal ranges for burbot spawning and development, combined with biomass of yearling and older (YAO) yellow perch Perca flavescens (Mitchill); and (ii) biomass of YAO yellow perch. Warmer winter water temperatures and increases in yellow perch biomass were associated with decreases in burbot recruitment. Continued warm winter water temperatures could result in declines in burbot recruitment, particularly in the southern part of the species' range. [source]
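
The all-subsets AIC screening described above can be sketched in a few lines; the predictor names and data file are hypothetical stand-ins, and the paper's exact candidate set of 129 models presumably included terms beyond the simple main-effect combinations shown here.

```python
# All-subsets linear regression ranked by AIC (sketch; column names invented).
from itertools import combinations
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("burbot_indices.csv")  # hypothetical file
predictors = ["optimal_temp_days", "yao_perch_biomass", "forage_index",
              "water_level", "walleye_biomass", "mussel_index", "spring_temp"]

models = []
for k in range(1, len(predictors) + 1):
    for combo in combinations(predictors, k):
        fit = smf.ols("burbot_recruitment ~ " + " + ".join(combo), data).fit()
        models.append((fit.aic, combo))

for aic, combo in sorted(models)[:5]:  # best-supported candidate models
    print(f"AIC = {aic:7.2f}  predictors: {combo}")
```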


INTEGRATED MODELLING OF WATER POLICY SCENARIOS IN THE GREAT BARRIER REEF REGION

ECONOMIC PAPERS: A JOURNAL OF APPLIED ECONOMICS AND POLICY, Issue 3 2005
Alexander Smajgl
The Reef Water Quality Protection Plan defined a landmark in the political discussion on water use in the Great Barrier Reef (GBR) region. In order to develop a decision support tool that integrates market and non-market values, we combine Computable General Equilibrium (CGE) modelling with multi-attribute utility theory (MAUT) to integrate socio-economic, ecological and hydrological aspects of water use. The applied modelling approach of this paper is illustrated in two scenarios. [source]


Relationship between thermal conductivity and water content of soils using numerical modelling

EUROPEAN JOURNAL OF SOIL SCIENCE, Issue 3 2003
P. Cosenza
Summary There is no simple and general relationship between the thermal conductivity of a soil, λ, and its volumetric water content, θ, because the porosity, n, and the thermal conductivity of the solid fraction, λs, play a major part. Experimental data including measurements of all the variables are scarce. Using a numerical modelling approach, we have shown that the microscopic arrangement of water influences the relation between λ and θ. Simulated values for n ranging from 0.4 to 0.6, λs ranging from 2 to 5 W m−1 K−1 and θ from 0.1 to 0.4 can be fitted by a simple linear formula that takes into account n, λs and θ. The results given by this formula and by the quadratic parallel (QP) model widely used in physical property studies are in satisfactory agreement with published data both for saturated rocks and for unsaturated soils. Consequently, the linear formula and the QP model can be used as practical and efficient tools to investigate the effects of water content and porosity on the thermal conductivity of the soil and hence to optimize the design of thermal in situ techniques for monitoring water content. [source]
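
For reference, the quadratic parallel (QP) model cited above is conventionally written as the square of volume-fraction-weighted square roots of the phase conductivities; the three-phase form below is the standard expression, with n and θ as in the abstract and λw, λa the water and air conductivities.

```latex
% Quadratic parallel (QP) model for the effective thermal conductivity
% of a three-phase soil (solid / water / air), standard form:
\lambda \;=\; \left[ (1 - n)\sqrt{\lambda_s} \;+\; \theta \sqrt{\lambda_w}
              \;+\; (n - \theta)\sqrt{\lambda_a} \right]^{2}
```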


Coupled FEM and lumped circuit model of the electromagnetic response of coaxially insulated windings in two slot cores

EUROPEAN TRANSACTIONS ON ELECTRICAL POWER, Issue 6 2007
Pär Holmberg
Abstract This paper presents a coupled FEM and lumped circuit modelling approach that is primarily intended for high-frequency and overvoltage simulations of rotating electric machines with coaxially insulated windings, such as Powerformer and Motorformer. The magnetic fields and their interaction with the conductors of the winding are simulated with the aid of a FEM program. The displacement current and its losses are modelled with an external lumped circuit. To consider eddy current losses, the stranded conductors and the laminated steel cores are replaced by homogeneous bodies with similar losses over a wide frequency range. The approach is illustrated and experimentally verified for a set-up with a cable wound around two slot cores. The model agrees well with measurements up to 1 MHz. Copyright © 2007 John Wiley & Sons, Ltd. [source]


Analytical modelling of users' behaviour and performance metrics in key distribution schemes

EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 1 2010
Massimo Tornatore
Access control for group communications must ensure that only legitimate users can access the authorised data streams. This could be done by distributing an encrypting key to each member of the group to be secured. To achieve a high level of security, the group key should be changed every time a user joins or leaves the group, so that a former group member has no access to current communications and a new member has no access to previous communications. Since group memberships could be very dynamic, the group key should be changed frequently. So far, different schemes for efficient key distribution have been proposed to limit the key-distribution overhead. In previous works, the performance comparison among these different schemes has been based on simulative experiments, where users join and leave secure groups according to a basic statistical model of users' behaviour. In this paper, we propose a new statistical model to account for the behaviour of users and compare it to the modelling approach so far adopted in the literature. Our new model is able to lead the system to a steady state (allowing a superior statistical confidence in the results), as opposed to current models in which the system is permanently in a transient and diverging state. We also provide analytical formulations of the main performance metrics usually adopted to evaluate key distribution systems, such as rekey overheads and storage overheads. Then, we validate our simulative outcomes with results obtained by analytical formulations. Copyright © 2009 John Wiley & Sons, Ltd. [source]
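
One simple way to formalize a user-behaviour model that settles into a steady state, as described above, is to treat joins as Poisson arrivals and membership durations as exponential holding times; the abstract does not specify the authors' distributions, so the M/M/∞ form below is purely illustrative.

```latex
% Illustrative M/M/infinity membership model: joins arrive at rate lambda,
% each member stays an Exp(mu)-distributed time; the stationary group
% size N is then Poisson with mean lambda/mu:
P(N = k) \;=\; e^{-\lambda/\mu} \, \frac{(\lambda/\mu)^{k}}{k!}, \qquad k = 0, 1, 2, \dots
% In steady state, leaves balance joins, so membership-change (rekey)
% events occur at total rate 2*lambda.
```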


Towards predictive modelling of the electrophysiology of the heart

EXPERIMENTAL PHYSIOLOGY, Issue 5 2009
Edward Vigmond
The simulation of cardiac electrical function is an example of a successful integrative multiscale modelling approach that is directly relevant to human disease. Today we stand at the threshold of a new era, in which anatomically detailed, tomographically reconstructed models are being developed that integrate from the ion channel to the electromechanical interactions in the intact heart. Such models hold high promise for the interpretation of clinical and physiological measurements, for improving the basic understanding of the mechanisms of dysfunction in disease, such as arrhythmias, myocardial ischaemia and heart failure, and for the development and performance optimization of medical devices. The goal of this article is to present an overview of current state-of-the-art advances towards predictive computational modelling of the heart as developed recently by the authors of this article. We first outline the methodology for constructing electrophysiological models of the heart. We then provide three examples that demonstrate the use of these models, focusing specifically on the mechanisms of arrhythmogenesis and defibrillation in the heart. These include: (1) uncovering the role of ventricular structure in defibrillation; (2) examining the contribution of Purkinje fibres to the failure of the shock; and (3) using magnetic resonance imaging reconstructed heart models to investigate the re-entrant circuits formed in the presence of an infarct scar. [source]


Integrating modelling and experiments to assess dynamic musculoskeletal function in humans

EXPERIMENTAL PHYSIOLOGY, Issue 2 2006
J. W. Fernandez
Magnetic resonance imaging, bi-plane X-ray fluoroscopy and biomechanical modelling are enabling technologies for the non-invasive evaluation of muscle, ligament and joint function during dynamic activity. This paper reviews these various technologies in the context of their application to the study of human movement. We describe how three-dimensional, subject-specific computer models of the muscles, ligaments, cartilage and bones can be developed from high-resolution magnetic resonance images; how X-ray fluoroscopy can be used to measure the relative movements of the bones at a joint in three dimensions with submillimetre accuracy; how complex 3-D dynamic simulations of movement can be performed using new computational methods based on non-linear control theory; and how musculoskeletal forces derived from such simulations can be used as inputs to elaborate finite-element models of a joint to calculate contact stress distributions on a subject-specific basis. A hierarchical modelling approach is highlighted that links rigid-body models of limb segments with detailed finite-element models of the joints. A framework is proposed that integrates subject-specific musculoskeletal computer models with highly accurate in vivo experimental data. [source]


The validation of some methods of notch fatigue analysis

FATIGUE & FRACTURE OF ENGINEERING MATERIALS AND STRUCTURES, Issue 5 2000
Taylor
This paper is concerned with the testing and validation of certain methods of notch analysis which the authors have developed theoretically in earlier publications. These methods were developed for use with finite element (FE) analysis in order to predict the fatigue limits of components containing stress concentrations. In the present work we tested and compared these methods using data from standard notches taken from the literature, covering a range of notch geometries, loading types, R-ratios and materials: a total of 47 different data sets were analysed. The greatest predictive success was achieved with critical-distance methods known as the point, line and area methods: 94% of these predictions fell within 20% of the experimental fatigue limits. This was a significant improvement on previous methods of this kind, e.g. that of Klesnil and Lucas [(1980) Fatigue of Metallic Materials, Elsevier Science]. Methods based on the Smith and Miller [(1978) Int. J. Mech. Sci. 20, 201–206] concept of crack-like notches were successful in 42% of cases; they experienced difficulties dealing with very small notches, and could be improved by using an El Haddad-type correction factor, giving 87% success. An approach known as 'crack modelling' allowed the Smith and Miller method to be used with non-standard stress concentrations, where notch geometry is ill defined; this modification, with the same short-crack correction, had 68% success. It was concluded that the critical-distance approach is more accurate and can be more easily used to analyse components of complex shape; however, the crack modelling approach is sometimes preferable because it can be used with less mesh refinement. [source]
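
The point, line and area methods evaluated here all use a material critical distance derived from the threshold stress intensity range and the plain-specimen fatigue limit; the expressions below give the standard definitions (with the point method shown), which match the theory's usual presentation though not necessarily this paper's notation.

```latex
% Critical distance and the point method (standard definitions):
L \;=\; \frac{1}{\pi} \left( \frac{\Delta K_{th}}{\Delta\sigma_0} \right)^{2},
\qquad
\text{failure predicted if } \Delta\sigma\!\left( r = L/2 \right) \;\ge\; \Delta\sigma_0
% where Delta K_th is the threshold stress intensity range and
% Delta sigma_0 the plain-specimen fatigue limit.
```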


Population Ageing, Fiscal Pressure and Tax Smoothing: A CGE Application to Australia,

FISCAL STUDIES, Issue 2 2006
Ross Guest
Abstract This paper analyses the fiscal pressure from population ageing using an intertemporal CGE model, applied to Australia, and compares the results with those of a recent government-commissioned study. The latter study uses an alternative modelling approach based on extrapolation rather than optimising behaviour of consumers and firms. The deadweight losses from the fiscal pressure caused by population ageing are equivalent to a loss of consumption of $260 per person per year in 2003 dollars in the balanced-budget scenario. A feasible degree of tax smoothing would reduce this welfare loss by an equivalent of $70 per person per year. Unlike the extrapolation-based model, the CGE approach takes account of feedback effects of ageing-induced tax increases on consumption and labour supply, which in turn impact on the ultimate magnitude of fiscal pressure and therefore tax increases. However, a counterfactual simulation suggests that the difference in terms of deadweight losses between the two modelling approaches is modest, at about $30 per person per year. [source]


Potential changes in skipjack tuna (Katsuwonus pelamis) habitat from a global warming scenario: modelling approach and preliminary results

FISHERIES OCEANOGRAPHY, Issue 4-5 2003
Harilaos Loukos
Abstract Recent studies suggest a reduction of primary production in the tropical oceans because of changes in oceanic circulation under global warming conditions caused by increasing atmospheric CO2 concentration. This might affect the productivity of medium and higher trophic levels with potential consequences on marine resources such as tropical tuna. Here we combine the projections of up-to-date climate and ocean biogeochemical models with recent concepts of representation of fish habitat based on prey abundance and ambient temperature to gain some insight into the impact of climate change on skipjack tuna (Katsuwonus pelamis), the species that dominates present-day tuna catch. For a world with doubled atmospheric CO2 concentration, our results suggest significant large-scale changes of skipjack habitat in the equatorial Pacific. East of the date line, conditions could be improved by an extension of the present favourable habitat zones of the western equatorial Pacific, a feature reminiscent of warming conditions associated with El Niño events. Despite its simplicity and the associated underlying hypothesis, this first simulation is used to stress future research directions and key issues for modelling developments associated to global change. [source]


Pharmacokinetic predictions in children by using the physiologically based pharmacokinetic modelling

FUNDAMENTAL & CLINICAL PHARMACOLOGY, Issue 6 2008
F. Bouzom
Abstract Nowadays, 50–90% of drugs used in children have never been actually studied in this population. Consequently, our children are often exposed either to the risk of adverse drug events or to lack of efficacy, or they are unable to benefit from a number of therapeutic advances offered to adults, as no clinical study has been properly performed in children. Currently, the main methods used to calculate the dose for a child are based on allometric methods taking into account different categories of age, the body weight and/or the body surface area. Unfortunately, these calculation methods consider children as small adults, which is not the case. Physiologically based pharmacokinetics is one way to integrate the physiological changes occurring during childhood and to anticipate their impact on the pharmacokinetic processes: absorption, distribution, metabolism and excretion/elimination. Through different examples, the application of this modelling approach is discussed as a possible and valuable method to minimize the ethical and technical difficulties of conducting research in children. [source]
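
As a contrast to the PBPK approach advocated above, the allometric body-weight scaling the abstract criticises can be stated in one line; the exponent 0.75 and the 70 kg reference weight are the conventional textbook choices, used here purely for illustration.

```python
# Conventional allometric dose scaling (the simple approach the paper
# contrasts with PBPK modelling); the reference weight and exponent are
# the usual textbook conventions, not values from the paper.
def allometric_child_dose(adult_dose_mg: float, child_weight_kg: float,
                          adult_weight_kg: float = 70.0,
                          exponent: float = 0.75) -> float:
    return adult_dose_mg * (child_weight_kg / adult_weight_kg) ** exponent

# Example: scale a 500 mg adult dose to a 20 kg child (~195 mg).
print(f"{allometric_child_dose(500.0, 20.0):.0f} mg")
```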


Modelling patterned ground distribution in Finnish Lapland: an integration of topographical, ground and remote sensing information

GEOGRAFISKA ANNALER SERIES A: PHYSICAL GEOGRAPHY, Issue 1 2006
Jan Hjort
Abstract New data technologies and modelling methods have gained more attention in the field of periglacial geomorphology during the last decade. In this paper we present a new modelling approach that integrates topographical, ground and remote sensing information in predictive geomorphological mapping using generalized additive modelling (GAM). First, we explored the roles of different environmental variable groups in determining the occurrence of non-sorted and sorted patterned ground in a fell region of 100 km2 at a resolution of 1 ha in northern Finland. Second, we compared the predictive accuracy of ground-topography-based and remote-sensing-based models. The results indicate that non-sorted patterned ground is more common at lower altitudes, where ground moisture and vegetation abundance are relatively high, whereas sorted patterned ground is dominant at higher altitudes with relatively high slope angles and sparse vegetation cover. All modelling results ranged from good to excellent on model evaluation data, as measured by area under the curve (AUC) values derived from receiver operating characteristic (ROC) plots. Generally, models built with remotely sensed data were better than ground-topography-based models, and the combination of all environmental variables improved the predictive ability of the models. This paper confirms the potential utility of remote sensing information for modelling patterned ground distribution in subarctic landscapes. [source]
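
A minimal sketch of the GAM-plus-AUC workflow described above is given below using the pyGAM and scikit-learn libraries; the predictor set and file name are invented stand-ins for the paper's topographical, ground and remote sensing variables.

```python
# GAM for presence/absence of sorted patterned ground (illustrative).
# Predictor columns are hypothetical stand-ins for the paper's variables.
import pandas as pd
from pygam import LogisticGAM, s
from sklearn.metrics import roc_auc_score

data = pd.read_csv("patterned_ground_grid.csv")  # hypothetical file
X = data[["altitude", "slope_angle", "ndvi", "soil_moisture"]].to_numpy()
y = data["sorted_pg_present"].to_numpy()

# One smooth term per environmental predictor.
gam = LogisticGAM(s(0) + s(1) + s(2) + s(3)).fit(X, y)

# Evaluate with the area under the ROC curve, as in the paper.
print("AUC:", roc_auc_score(y, gam.predict_proba(X)))
```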


Patterns and Determinants of Historical Woodland Clearing in Central-Western New South Wales, Australia

GEOGRAPHICAL RESEARCH, Issue 4 2007
MICHAEL BEDWARD
Abstract We consider the history of woodland clearing in central western New South Wales, Australia, which has led to the present highly cleared and fragmented landscape. A combined approach is used examining available historical land-use data and using regression analysis to relate the pattern of cleared and wooded areas in the recent landscape to environmental variables, taking into account the contagious nature of clearing. We also ask whether it would be possible to apply a simple simulation modelling approach to reconstruct a credible historical sequence of clearing in the study area. The historical data indicate that annual clearing rates have varied substantially in the study area and selective tree removal (ringbarking and thinning) has been common. These findings make it unlikely that a simple simulation approach would replicate the spatial and temporal sequence of woodland loss. Our regression results show that clearing patterns can be related to environmental variables, particularly annual rainfall and estimated pre-European vegetation type, but that patterns are dominated by contagion. [source]


Geomorphology Fluid Flow Modelling: Can Fluvial Flow Only Be Modelled Using a Three-Dimensional Approach?

GEOGRAPHY COMPASS (ELECTRONIC), Issue 1 2008
R. J. Hardy
The application of numerical models to gain insight into flow processes is becoming a prevalent research methodology in fluvial geomorphology. The advantage of this approach is that models are particularly useful for identifying emergent behaviour in the landscape, where combinations of processes act over several scales. However, there is a wide range of available models and it is not always apparent which methodological approach should be chosen. The decision about the amount of process representation required needs to be balanced against both the spatial and temporal scales of interest. In this article, it is argued that in order to gain a complete, high resolution process understanding of flow within the fluvial system, a full three-dimensional modelling approach with a complete physical basis is required. [source]


Modelling of GPR waves for lossy media obeying a complex power law of frequency for dielectric permittivity

GEOPHYSICAL PROSPECTING, Issue 1 2004
Maksim Bano
ABSTRACT The attenuation of ground-penetrating radar (GPR) energy in the subsurface decreases and shifts the amplitude spectrum of the radar pulse to lower frequencies (absorption) with increasing traveltime, and also causes a distortion of wavelet phase (dispersion). The attenuation is often expressed by the quality factor Q. For GPR studies, Q can be estimated from the ratio of the real part to the imaginary part of the dielectric permittivity. We consider a complex power function of frequency for the dielectric permittivity, and show that this dielectric response corresponds to a frequency-independent-Q or simply a constant-Q model. The phase velocity (dispersion relationship) and the absorption coefficient of electromagnetic waves also obey a frequency power law. This approach is easy to use in the frequency domain and the wave propagation can be described by only two parameters, for example Q and the phase velocity at an arbitrary reference frequency. This simplicity makes it practical for any inversion technique. Furthermore, by using the Hilbert transform relating the velocity and the absorption coefficient (which obeys a frequency power law), we find the same dispersion relationship for the phase velocity. Both approaches are valid for a constant value of Q over a restricted frequency bandwidth, and are applicable in a material that is assumed to have no instantaneous dielectric response. Many GPR profiles acquired in a dry aeolian environment have shown a strong reflectivity inside dunes. Changes in water content are believed to be the origin of this reflectivity. We model the radar reflections from the bottom of a dry aeolian dune using the 1D wavelet modelling method. We discuss the choice of the reference wavelet in this modelling approach. A trial-and-error match of modelled and observed data was performed to estimate the optimum set of parameters characterizing the materials composing the site. Additionally, by combining the complex refractive index method (CRIM) and/or Topp equations for the bulk permittivity (dielectric constant) of moist sandy soils with a frequency power law for the dielectric response, we introduce them into the expression for the reflection coefficient. Using this method, we can estimate the water content and explain its effect on the reflection coefficient and on wavelet modelling. [source]
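
The constant-Q property asserted above follows directly from the power-law form: if the permittivity varies as a complex power of frequency, the ratio of its real to imaginary parts is independent of frequency. The sketch below is a standard way of writing this relationship; the reference frequency and exponent follow common usage and may differ from the paper's notation.

```latex
% Complex power-law dielectric response and the implied constant Q (sketch):
\varepsilon(\omega) \;=\; \varepsilon_r \left( \frac{i\omega}{\omega_r} \right)^{n-1},
\qquad
Q \;=\; \left| \frac{\operatorname{Re}\,\varepsilon}{\operatorname{Im}\,\varepsilon} \right|
  \;=\; \tan\!\left( \frac{n\pi}{2} \right), \quad 0 < n \le 1
% independent of frequency, so a single pair (Q, reference phase velocity)
% characterizes the propagation.
```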


Simulating short-circuiting flow in a constructed wetland: the implications of bathymetry and vegetation effects

HYDROLOGICAL PROCESSES, Issue 6 2009
Joong-Hyuk Min
Abstract Short-circuiting flow, commonly experienced in many constructed wetlands, reduces hydraulic retention times in unit wetland cells and decreases the treatment efficiency. A two-dimensional (2-D), physically based, distributed modelling approach was used to systematically address the effects of bathymetry and vegetation on short-circuiting flow, which previously have been neglected or lumped in one-dimensional wetland flow models. In this study, a 2-D transient hydrodynamics with advection-dispersion model was developed using MIKE 21 and calibrated with bromide tracer data collected at the Orlando Easterly Wetland Cell 7. The estimated topographic difference between short-circuiting flow zone and adjacent area ranged from 0·3 to 0·8 m. A range of the Manning roughness coefficient at the short-circuiting flow zone was estimated (0·022,0·045 s m,1/3). Sensitivity analysis of topographical and vegetative heterogeneity deduced during model calibration shows that relic ditches or other ditch-shaped landforms and the associated sparse vegetation along the main flow direction intensify the short-circuiting pattern, considerably affecting 2-D solute transport simulation. In terms of hydraulic efficiency, this study indicates that the bathymetry effect on short-circuiting flow is more important than the vegetation effect. Copyright © 2009 John Wiley & Sons, Ltd. [source]