Modeling Efforts

Selected Abstracts


GIS-BASED HYDROLOGIC MODELING OF RIPARIAN AREAS: IMPLICATIONS FOR STREAM WATER QUALITY

JOURNAL OF THE AMERICAN WATER RESOURCES ASSOCIATION, Issue 6 2001
Matthew E. Baker
ABSTRACT: Riparian buffers have potential for reducing excess nutrient levels in surface water. Spatial variation in riparian buffer effectiveness is well recognized, yet researchers and managers still lack effective general tools for understanding the relevance of different hydrologic settings. We present several terrain-based GIS models to predict spatial patterns of shallow, subsurface hydrologic flux and riparian hydrology. We then link predictions of riparian hydrology to patterns of nutrient export in order to demonstrate the potential for augmenting the predictive power of land use/land cover (LU/LC) maps. Using predicted hydrology in addition to LU/LC, we observed increases in the explained variation of nutrient exports from 290 sites across Lower Michigan. The results suggest that our hydrologic predictions relate more strongly to patterns of nutrient export than the presence or absence of wetland vegetation, and that in fact the influence of vegetative structure largely depends on its hydrologic context. Such GIS models are useful and complementary tools for exploring the role of hydrologic routing in riparian ecosystem function and stream water quality. Modeling efforts that take a similar GIS approach to material transport might be used to further explore the causal implications of riparian buffers in heterogeneous watersheds. [source]
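
The abstract does not specify which terrain models were used, but a standard index of shallow subsurface flow convergence is the topographic wetness index, TWI = ln(a / tan β). The sketch below is a minimal illustration of that idea; the choice of index, the grid values, and the cell size are assumptions for illustration, not details from the paper.

```python
import numpy as np

def topographic_wetness_index(upslope_area_m2, slope_radians, cell_width_m=30.0):
    """TWI = ln(a / tan(beta)): a is the specific catchment area
    (upslope contributing area per unit contour width), beta the local
    slope. High values flag likely zones of shallow subsurface flow
    convergence, e.g., riparian toeslopes."""
    a = upslope_area_m2 / cell_width_m                  # specific catchment area
    tan_beta = np.maximum(np.tan(slope_radians), 1e-6)  # guard against flat cells
    return np.log(a / tan_beta)

# Toy 2x2 grid: area accumulates and slopes flatten toward the stream.
area = np.array([[900.0, 1800.0], [3600.0, 9000.0]])    # upslope area, m^2
slope = np.radians(np.array([[8.0, 5.0], [3.0, 0.5]]))  # slope in degrees
print(topographic_wetness_index(area, slope))
```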


Growing Sovereignty: Modeling the Shift from Indirect to Direct Rule

INTERNATIONAL STUDIES QUARTERLY, Issue 1 2010
Lars-Erik Cederman
Drawing on theories of historical sociology, we model the emergence of the territorial state in early modern Europe. Our modeling effort focuses on systems change with respect to the shift from indirect to direct rule. We first introduce a one-dimensional model that captures the tradeoff between organizational and geographic distances. In a second step, we present an agent-based model that features states with a varying number of organizational levels. This model explicitly represents causal mechanisms of conquest and internal state-building through organizational bypass processes. The computational findings confirm our hypothesis that technological change is sufficient to trigger the emergence of modern, direct-state hierarchies. Our theoretical findings indicate that the historical transformation from indirect to direct rule presupposes a logistical, rather than the commonly assumed exponential, form of the loss-of-strength gradient. [source]
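
The abstract's key contrast is between an exponential and a logistic loss-of-strength gradient: under the logistic form, projected power stays near full strength out to a threshold distance before falling off, which is what lets a center sustain direct rule over outlying territory. A minimal sketch of the two functional forms (all parameter values are assumed for illustration, not taken from the model):

```python
import numpy as np

def exponential_losg(distance, decay=0.05):
    """The commonly assumed form: projected power decays exponentially
    with distance from the center."""
    return np.exp(-decay * distance)

def logistic_losg(distance, midpoint=60.0, steepness=0.1):
    """The logistic form: power stays near full strength out to a
    threshold distance and then falls off rapidly."""
    return 1.0 / (1.0 + np.exp(steepness * (distance - midpoint)))

for d in (0, 30, 60, 90, 120):
    print(f"d={d:3d}  exponential={exponential_losg(d):.3f}  "
          f"logistic={logistic_losg(d):.3f}")
```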


Water Resources Modeling of the Ganges-Brahmaputra-Meghna River Basins Using Satellite Remote Sensing Data

JOURNAL OF THE AMERICAN WATER RESOURCES ASSOCIATION, Issue 6 2009
Bushra Nishat
Nishat, Bushra and S.M. Mahbubur Rahman, 2009. Water Resources Modeling of the Ganges-Brahmaputra-Meghna River Basins Using Satellite Remote Sensing Data. Journal of the American Water Resources Association (JAWRA) 45(6):1313-1327. Abstract: Large-scale water resources modeling can provide useful insights on future water availability scenarios for downstream nations in anticipation of proposed upstream water resources projects in large international river basins (IRBs). However, model setup can be challenging due to the large amounts of data required on both static states (soils, vegetation, topography, drainage network, etc.) and dynamic variables (rainfall, streamflow, soil moisture, evapotranspiration, etc.) over the basin from multiple nations and data collection agencies. Under such circumstances, satellite remote sensing provides a more pragmatic and convenient alternative because of the vantage of space and easy availability from a single data platform. In this paper, we demonstrate a modeling effort to set up a water resources management model, MIKE BASIN, over the Ganges, Brahmaputra, and Meghna (GBM) river basins. The model is set up with the objective of providing Bangladesh, the lowermost riparian nation in the GBM basins, a framework for assessing proposed water diversion scenarios in the upstream transboundary regions of India and deriving quantitative impacts on water availability. Using an array of satellite remote sensing data on topography, vegetation, and rainfall from the transboundary regions, we demonstrate that it is possible to calibrate MIKE BASIN to a satisfactory level and predict streamflow in the Ganges and Brahmaputra rivers at the entry points of Bangladesh at relevant scales of water resources management. Simulated runoff for the Ganges and Brahmaputra rivers follows the trends in the rated discharge for the calibration period. However, monthly flow volume differs from the actual rated flow by (−) 8% to (+) 20% in the Ganges basin, by (−) 15% to (+) 12% in the Brahmaputra basin, and by (−) 15% to (+) 19% in the Meghna basin. Our large-scale modeling initiative is generic enough for other downstream nations in IRBs to adopt for their own modeling needs. [source]
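
The calibration statistic quoted above is the percent deviation of simulated monthly flow volume from the rated flow. A minimal sketch of that computation (the volumes below are hypothetical, not values from the study):

```python
def monthly_volume_bias(simulated, rated):
    """Percent deviation of simulated monthly flow volume from the
    rated (observed) volume; negative means underprediction."""
    return [100.0 * (s - r) / r for s, r in zip(simulated, rated)]

# Hypothetical monthly flow volumes (km^3) at a basin entry point.
sim = [42.0, 55.0, 130.0, 210.0]
obs = [45.0, 50.0, 120.0, 205.0]
print(["%+.1f%%" % b for b in monthly_volume_bias(sim, obs)])
```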


USE OF THE DELPHI METHOD IN RESOLVING COMPLEX WATER RESOURCES ISSUES

JOURNAL OF THE AMERICAN WATER RESOURCES ASSOCIATION, Issue 1 2003
Jonathan G. Taylor
ABSTRACT: The tri-state river basins, shared by Georgia, Alabama, and Florida, are being modeled by the U.S. Fish and Wildlife Service and the U.S. Army Corps of Engineers to help facilitate agreement in an acrimonious water dispute among the three state governments. Modeling of such basin reservoir operations requires parallel understanding of several river system components: hydropower production, flood control, municipal and industrial water use, navigation, and reservoir fisheries requirements. The Delphi method, which relies on repetitive surveying of experts, was applied to determine fisheries' water and lake-level requirements on 25 reservoirs in these interstate basins. The Delphi technique allowed the needs and requirements of fish populations to be brought into the modeling effort on equal footing with other water supply and demand components. When the subject matter is concisely defined and limited, this technique can rapidly assess expert opinion on any natural resource issue, and even move expert opinion toward greater agreement. [source]
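
As a rough illustration of how repetitive surveying can pull a panel toward agreement, here is a stylized Delphi loop in which each expert revises partway toward the panel median between rounds. The update rule, convergence threshold, and lake-level numbers are assumptions for illustration; the study's actual survey instrument was far richer.

```python
import statistics

def delphi_round(estimates, feedback_weight=0.5):
    """One stylized Delphi iteration: each expert revises partway
    toward the panel median after seeing the group summary."""
    med = statistics.median(estimates)
    return [e + feedback_weight * (med - e) for e in estimates]

# Hypothetical expert estimates of a minimum lake level (m) for fisheries.
panel = [181.0, 183.5, 185.0, 179.0, 184.0]
for rnd in range(1, 6):
    panel = delphi_round(panel)
    spread = max(panel) - min(panel)
    print(f"round {rnd}: median={statistics.median(panel):.2f} m, "
          f"spread={spread:.2f} m")
    if spread < 0.5:   # treat the panel as converged
        break
```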


Variances Are Not Always Nuisance Parameters

BIOMETRICS, Issue 2 2003
Raymond J. Carroll
Summary In classical problems, e.g., comparing two populations, fitting a regression surface, etc., variability is a nuisance parameter. The term "nuisance parameter" is meant here in both the technical and the practical sense. However, there are many instances where understanding the structure of variability is just as central as understanding the mean structure. The purpose of this article is to review a few of these problems. I focus in particular on two issues: (a) the determination of the validity of an assay; and (b) the issue of the power for detecting health effects from nutrient intakes when the latter are measured by food frequency questionnaires. I will also briefly mention the problems of variance structure in generalized linear mixed models, robust parameter design in quality technology, and the signal in microarrays. In these and other problems, treating variance structure as a nuisance instead of a central part of the modeling effort not only leads to inefficient estimation of means, but also to misleading conclusions. [source]
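
A small simulation makes the closing point concrete: when the variance grows with the mean structure, ordinary least squares remains unbiased but is inefficient relative to a fit that models the variance. This is a generic illustration, not an analysis from the article:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(1.0, 10.0, n)
sigma = 0.5 * x                        # noise grows with the mean structure
y = 2.0 + 3.0 * x + rng.normal(0.0, sigma)

X = np.column_stack([np.ones(n), x])

# OLS: variance treated as a nuisance and ignored.
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# WLS: the variance structure is modeled, weighting each point by 1/sigma^2.
w = np.sqrt(1.0 / sigma**2)
beta_wls, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)

print("OLS:", beta_ols)   # unbiased but inefficient
print("WLS:", beta_wls)   # typically closer to the true (2, 3)
```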


Integrating Formalized User Experience within Building Design Models

COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 2 2007
Debajyoti Pati
As a result, they do not consider information on how settings perform during use in areas critical to building owners and users. The authors argue that integrating performance data from buildings-in-use into building models will greatly enhance the end uses of conceptual modeling efforts in AEC, in design evolution as well as in the traditional domain of facility management. The authors describe a modeling study that integrates descriptive and evaluative/performance concepts in existing building models, discuss the paradigms and philosophical issues posed in such endeavors, and illustrate several end-use scenarios to support their assertions. [source]


Population Synthesis: Comparing the Major Techniques Using a Small, Complete Population of Firms

GEOGRAPHICAL ANALYSIS, Issue 2 2009
Justin Ryan
Recently, disaggregate modeling efforts that rely on microdata have received wide attention from scholars and practitioners. Synthetic population techniques have been devised and are used as a viable alternative to the collection of microdata that normally are inaccessible because of confidentiality concerns or incomplete because of high acquisition costs. The two most widely discussed synthetic techniques are the synthetic reconstruction method (IPFSR), which makes use of iterative proportional fitting (IPF) techniques, and the combinatorial optimization (CO) method. Both methods are described in this article and then evaluated in terms of their ability to recreate a known population of firms, using limited data extracted from the parent population of the firms. Testing a synthetic population against a known population is seldom done, because obtaining an entire population usually is too difficult. The case presented here uses a small, complete population of firms for the City of Hamilton, Ontario, for the year 1990; firm attributes compiled are number of employees, 3-digit standard industrial classification, and geographic location. Results are summarized for experiments based upon various combinations of sample size and tabulation detail designed to maximize the accuracy of resulting synthetic populations while holding input data costs to a minimum. The output from both methods indicates that increases in sample size and tabulation detail result in higher quality synthetic populations, although the quality of the generated population is more sensitive to increases in tabular detail. Finally, most tests conducted with the created synthetic populations suggest that the CO method is superior to the IPFSR method. [source]
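
The IPF step at the heart of the synthetic reconstruction method rescales a sample-based seed table until its margins match published control totals. A minimal two-dimensional version (the seed and targets are hypothetical):

```python
import numpy as np

def ipf(seed, row_targets, col_targets, tol=1e-6, max_iter=100):
    """Iterative proportional fitting: rescale a seed cross-tabulation
    until its margins match target row and column totals."""
    table = seed.astype(float).copy()
    for _ in range(max_iter):
        table *= (row_targets / table.sum(axis=1))[:, None]  # fit row margins
        table *= col_targets / table.sum(axis=0)             # fit column margins
        if np.allclose(table.sum(axis=1), row_targets, atol=tol):
            break
    return table

# Seed from a small sample: firm size class (rows) x SIC group (columns).
seed = np.array([[8, 3], [2, 7]])
fitted = ipf(seed, row_targets=np.array([60.0, 40.0]),
             col_targets=np.array([55.0, 45.0]))
print(fitted)
print(fitted.sum(axis=1), fitted.sum(axis=0))  # margins now match the targets
```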


The Deciduous Forest–Boreal Forest Ecotone

GEOGRAPHY COMPASS (ELECTRONIC), Issue 7 2010
David Goldblum
Ecotones have been subject to significant attention over the past 25 years as a consensus emerged that they might be uniquely sensitive to the effects of climate change. Most ecotone field studies and modeling efforts have focused on transitions between forest and non-forest biomes (e.g. boreal forest to Arctic tundra, forest to prairie, subalpine forests to alpine tundra), while little effort has been made to evaluate or simply understand forest–forest ecotones, specifically the deciduous forest–boreal forest ecotone. Geographical shifts and changes at this ecotone because of anthropogenic factors are tied to the broader survival of both the boreal and deciduous forest communities as well as global factors such as biodiversity loss and dynamics of the carbon cycle. This review summarizes what is known about the location, controlling mechanisms, disturbance regimes, anthropogenic impacts, and sensitivity to climate change of the deciduous forest–boreal forest ecotone. [source]


Geochemical Factors Controlling Radium Activity in a Sandstone Aquifer

GROUND WATER, Issue 4 2006
Tim Grundl
Geochemical processes behind the occurrence of radium activities in excess of the U.S. EPA's drinking water limit of 5 pCi/L combined radium were investigated in a regional sandstone aquifer located in southeastern Wisconsin. Geochemical speciation modeling (PHREEQC 2.7) combined with a detailed understanding of the regional flow system provided by recent flow modeling efforts was used to determine that radium coprecipitation into barite controls radium activity in the unconfined portion of the aquifer. As the aquifer transitions from unconfined to confined conditions, radium levels rise and the water becomes more sulfate-rich, yet the aquifer remains at saturation with barite throughout. Calculations based on published distribution coefficients and the observed Ra:Ba atomic ratios indicate that barite contains ~12 µg/kg coprecipitated radium. Confined portions of the aquifer have high concentrations of sulfate, and barium concentrations become too low to be an effective control on radium activity. Additional, as yet undefined, controls on radium are operative in the downgradient, confined portion of the aquifer. [source]
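
As a hedged re-creation of the style of calculation described, the sketch below estimates coprecipitated radium in barite from a Ra/Ba distribution coefficient and dissolved Ra and Ba concentrations. All input values are assumptions chosen to land near the reported magnitude; the paper's actual inputs are not given here.

```python
# Equilibrium partitioning sketch; every input here is an assumption.
D = 1.0                      # Ra/Ba distribution coefficient for barite (assumed)
ra_activity_pci_l = 7.0      # dissolved 226Ra, pCi/L (assumed)
ba_mg_l = 0.30               # dissolved Ba, mg/L (assumed)

# 226Ra specific activity is ~1 Ci/g, so 1 pCi corresponds to ~1e-12 g Ra.
ra_mol_l = (ra_activity_pci_l * 1e-12) / 226.0
ba_mol_l = (ba_mg_l / 1000.0) / 137.33

# Partitioning: (Ra/Ba) in barite = D * (Ra/Ba) in solution.
ra_ba_solid = D * (ra_mol_l / ba_mol_l)

# Convert the molar ratio to mass of Ra per kg of barite (BaSO4, 233.4 g/mol).
ug_per_kg = ra_ba_solid * (226.0 / 233.4) * 1e9
print(f"~{ug_per_kg:.0f} ug coprecipitated Ra per kg barite")
```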


Bioaccumulation Assessment Using Predictive Approaches

INTEGRATED ENVIRONMENTAL ASSESSMENT AND MANAGEMENT, Issue 4 2009
John W Nichols
Abstract Mandated efforts to assess chemicals for their potential to bioaccumulate within the environment are increasingly moving into the realm of data inadequacy. Consequently, there is an increasing reliance on predictive tools to complete regulatory requirements in a timely and cost-effective manner. The kinetic processes of absorption, distribution, metabolism, and elimination (ADME) determine the extent to which chemicals accumulate in fish and other biota. Current mathematical models of bioaccumulation implicitly or explicitly consider these ADME processes, but there is a lack of data needed to specify critical model input parameters. This is particularly true for compounds that are metabolized, exhibit restricted diffusion across biological membranes, or do not partition simply to tissue lipid. Here we discuss the potential of in vitro test systems to provide needed data for bioaccumulation modeling efforts. Recent studies demonstrate the utility of these systems and provide a "proof of concept" for the prediction models. Computational methods that predict ADME processes from an evaluation of chemical structure are also described. Most regulatory agencies perform bioaccumulation assessments using a weight-of-evidence approach. A strategy is presented for incorporating predictive methods into this approach. To implement this strategy it is important to understand the "domain of applicability" of both in vitro and structure-based approaches, and the context in which they are applied. [source]
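
A generic one-compartment kinetic model shows why the ADME data gaps matter: the steady-state bioconcentration factor falls directly out of the competing rate constants, so an unmeasured biotransformation rate can change the answer severalfold. The functional form is standard for such models, but the rate constants below are assumed:

```python
def steady_state_bcf(k1, k2, km, ke):
    """One-compartment bioconcentration factor (L/kg): uptake from
    water (k1) balanced against gill elimination (k2),
    biotransformation (km), and egestion/growth dilution (ke) -- a
    generic form of the kinetic models the abstract refers to."""
    return k1 / (k2 + km + ke)

# Assumed rate constants for a moderately hydrophobic chemical.
no_metabolism = steady_state_bcf(k1=500.0, k2=0.05, km=0.0, ke=0.01)
with_metabolism = steady_state_bcf(k1=500.0, k2=0.05, km=0.20, ke=0.01)
print(f"BCF without biotransformation: {no_metabolism:.0f} L/kg")
print(f"BCF with km = 0.2/d:           {with_metabolism:.0f} L/kg")
```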


Carbon monoxide poisoning of proton exchange membrane fuel cells

INTERNATIONAL JOURNAL OF ENERGY RESEARCH, Issue 8 2001
J. J. Baschuk
Abstract Proton exchange membrane fuel cell (PEMFC) performance degrades when carbon monoxide (CO) is present in the fuel gas; this is referred to as CO poisoning. This paper investigates CO poisoning of PEMFCs by reviewing work on the electrochemistry of CO and hydrogen, the experimental performance of PEMFCs exhibiting CO poisoning, methods to mitigate CO poisoning and theoretical models of CO poisoning. It is found that CO poisons the anode reaction by preferentially adsorbing to the platinum surface and blocking active sites, and that the CO poisoning effect is slow and reversible. Three methods exist to mitigate the effect of CO poisoning: (i) the use of a platinum alloy catalyst, (ii) higher cell operating temperature and (iii) introduction of oxygen into the fuel gas flow. Of these three methods, the third is the most practical. There are several models available in the literature for the effect of CO poisoning on a PEMFC, and from the modeling efforts it is clear that even small CO oxidation rates can substantially improve anode performance. However, none of the existing models have considered the effect of transport phenomena in a cell, nor the effect of oxygen crossover from the cathode, which may be a significant contributor to CO tolerance in a PEMFC. In addition, there is a lack of data for CO oxidation and adsorption at low temperatures, which is needed for detailed modeling of CO poisoning in PEMFCs. Copyright © 2001 John Wiley & Sons, Ltd. [source]
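
A stylized site-blocking sketch (not one of the reviewed models) captures the mechanism described: CO slowly occupies Pt sites and the hydrogen oxidation current falls with the free-site fraction. The rate constants and the squared-coverage current dependence are assumptions for illustration:

```python
# Stylized CO poisoning dynamics: CO adsorbs onto free Pt sites and is
# slowly electro-oxidized away; H2 current scales with (1 - coverage)^2.
k_ads = 0.5      # CO adsorption rate constant, 1/s per unit CO mole fraction (assumed)
k_ox = 2.0e-5    # CO electro-oxidation rate constant, 1/s (assumed)
x_co = 100e-6    # 100 ppm CO in the fuel stream

theta, dt = 0.0, 1.0                 # CO coverage; 1 s forward-Euler steps
for step in range(36001):            # ten hours of operation
    if step % 7200 == 0:
        print(f"t={step:6d} s  CO coverage={theta:.3f}  "
              f"relative H2 current={(1.0 - theta) ** 2:.3f}")
    theta += dt * (k_ads * x_co * (1.0 - theta) - k_ox * theta)
```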


λ-Dynamics free energy simulation methods

JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 11 2009
Jennifer L. Knight
Abstract Free energy calculations are fundamental to obtaining accurate theoretical estimates of many important biological phenomena including hydration energies, protein-ligand binding affinities and energetics of conformational changes. Unlike traditional free energy perturbation and thermodynamic integration methods, λ-dynamics treats the conventional "λ" as a dynamic variable in free energy simulations and simultaneously evaluates thermodynamic properties for multiple states in a single simulation. In the present article, we provide an overview of the theory of λ-dynamics, including the use of biasing and restraining potentials to facilitate conformational sampling. We review how λ-dynamics has been used to rapidly and reliably compute relative hydration free energies and binding affinities for series of ligands, to accurately identify crystallographically observed binding modes starting from incorrect orientations, and to model the effects of mutations upon protein stability. Finally, we suggest how λ-dynamics may be extended to facilitate modeling efforts in structure-based drug design. © 2009 Wiley Periodicals, Inc. J Comput Chem 2009 [source]
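
For two end states, the λ-dynamics hybrid potential is V(x, λ) = (1 − λ)V0(x) + λV1(x), with λ propagated as a dynamic variable under the force −∂V/∂λ. A minimal sketch of those two expressions (the energies are made-up numbers for a single frame):

```python
def hybrid_potential(lam, v0, v1):
    """Two-state lambda-dynamics hybrid energy:
    V(x, lambda) = (1 - lambda) * V0(x) + lambda * V1(x)."""
    return (1.0 - lam) * v0 + lam * v1

def lambda_force(v0, v1):
    """-dV/dlambda = V0 - V1: lambda, carrying a fictitious mass, is
    pushed toward whichever end state is currently lower in energy."""
    return v0 - v1

# Assumed instantaneous end-state energies (kcal/mol) for one frame.
v0, v1 = -102.4, -105.1
lam = 0.3
print("V_hybrid =", hybrid_potential(lam, v0, v1))
print("F_lambda =", lambda_force(v0, v1))  # > 0: lambda drifts toward state 1
```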


Nonlinear quantitative structure-property relationship modeling of skin permeation coefficient

JOURNAL OF PHARMACEUTICAL SCIENCES, Issue 11 2009
Brian J. Neely
Abstract The permeation coefficient characterizes the ability of a chemical to penetrate the dermis, and the current study describes our efforts to develop structure-based models for the permeation coefficient. Specifically, we have integrated nonlinear, quantitative structure-property relationship (QSPR) models, genetic algorithms (GAs), and neural networks to develop a reliable model. Case studies were conducted to investigate the effects of structural attributes on permeation using a carefully characterized database. Upon careful evaluation, a permeation coefficient data set consisting of 333 data points for 258 molecules was identified, and these data were added to our extensive thermophysical database. Of these data, permeation values for 160 molecular structures were deemed suitable for our modeling efforts. We employed established descriptors and constructed new descriptors to aid the development of a reliable QSPR model for the permeation coefficient. Overall, our new nonlinear QSPR model had an absolute-average percentage deviation, root-mean-square error, and correlation coefficient of 8.0%, 0.34, and 0.93, respectively. Cause-and-effect analysis of the structural descriptors obtained in this study indicates that three size/shape and two polarity descriptors accounted for ~70% of the permeation information conveyed by the descriptors. © 2009 Wiley-Liss, Inc. and the American Pharmacists Association J Pharm Sci 98:4069,4084, 2009 [source]
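
The three fit statistics quoted above are straightforward to compute; the sketch below evaluates them for a handful of hypothetical permeation values (the data are illustrative, not from the study's 160-structure set):

```python
import numpy as np

def qspr_fit_statistics(predicted, observed):
    """The three statistics quoted in the abstract: absolute-average
    percentage deviation, root-mean-square error, and correlation
    coefficient between predicted and observed values."""
    p, o = np.asarray(predicted), np.asarray(observed)
    aapd = 100.0 * np.mean(np.abs((p - o) / o))
    rmse = np.sqrt(np.mean((p - o) ** 2))
    r = np.corrcoef(p, o)[0, 1]
    return aapd, rmse, r

# Hypothetical log10 permeation coefficients (cm/h) for five compounds.
obs = [-2.1, -2.8, -3.4, -1.9, -2.5]
pred = [-2.3, -2.6, -3.5, -2.0, -2.4]
print("AAPD=%.1f%%  RMSE=%.2f  r=%.3f" % qspr_fit_statistics(pred, obs))
```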


Models, Assumptions, and Stakeholders: Planning for Water Supply Variability in the Colorado River Basin

JOURNAL OF THE AMERICAN WATER RESOURCES ASSOCIATION, Issue 2 2008
Dustin Garrick
Abstract: Declining reservoir storage has raised the specter of the first water shortage on the Lower Colorado River since the completion of Glen Canyon and Hoover Dams. This focusing event spurred modeling efforts to frame alternatives for managing the reservoir system during prolonged droughts. This paper addresses the management challenges that arise when using modeling tools to manage water scarcity under variable hydroclimatology, shifting use patterns, and institutional complexity. Assumptions specified in modeling simulations are an integral feature of public processes. The policymaking and management implications of assumptions are examined by analyzing four interacting sources of physical and institutional uncertainty: inflow (runoff), depletion (water use), operating rules, and initial reservoir conditions. A review of planning documents and model reports generated during two recent processes to plan for surplus and shortage in the Colorado River demonstrates that modeling tools become useful to stakeholders by clarifying the impacts of modeling assumptions at several temporal and spatial scales. A high reservoir storage-to-runoff ratio elevates the importance of assumptions regarding initial reservoir conditions over the three-year outlook used to assess the likelihood of reaching surplus and shortage triggers. An ensemble of initial condition predictions can provide more robust initial conditions estimates. This paper concludes that water managers require model outputs that encompass a full range of future potential outcomes, including best and worst cases. Further research into methods of representing and communicating about hydrologic and institutional uncertainty in model outputs will help water managers and other stakeholders to assess tradeoffs when planning for water supply variability. [source]
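
One way to read the ensemble recommendation: sample over initial-condition estimates and historical inflows, then report the fraction of traces that cross the shortage trigger within the three-year outlook. The sketch below is a toy Monte Carlo in that spirit; all volumes and the trigger are assumed, not the Basin's actual operating numbers:

```python
import random

def shortage_probability(initial_ensemble, inflows, demand, trigger,
                         n_years=3, n_draws=5000):
    """Toy three-year outlook: sample an initial storage estimate and a
    historical inflow for each year, subtract demand, and count how
    often storage ends below the shortage trigger."""
    hits = 0
    for _ in range(n_draws):
        storage = random.choice(initial_ensemble)
        for _ in range(n_years):
            storage += random.choice(inflows) - demand
        hits += storage < trigger
    return hits / n_draws

random.seed(1)
initial = [12.0, 13.5, 15.0]                  # ensemble of initial conditions (maf)
historic = [7.0, 8.5, 9.0, 10.5, 12.0, 13.0]  # resampled annual inflows (maf)
print(shortage_probability(initial, historic, demand=11.0, trigger=8.0))
```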


Easing the Inferential Leap in Competency Modelling: The Effects of Task-related Information and Subject Matter Expertise

PERSONNEL PSYCHOLOGY, Issue 4 2004
FILIP LIEVENS
Despite the rising popularity of the practice of competency modeling, research on competency modeling has lagged behind. This study begins to close this practice–science gap through 3 studies (1 lab study and 2 field studies), which employ generalizability analysis to shed light on (a) the quality of inferences made in competency modeling and (b) the effects of incorporating elements of traditional job analysis into competency modeling to raise the quality of competency inferences. Study 1 showed that competency modeling resulted in poor interrater reliability and poor between-job discriminant validity amongst inexperienced raters. In contrast, Study 2 suggested that the quality of competency inferences was higher among a variety of job experts in a real organization. Finally, Study 3 showed that blending competency modeling efforts and task-related information increased both interrater reliability among SMEs and their ability to discriminate among jobs. In general, this set of results highlights that the inferences made in competency modeling should not be taken for granted, and that practitioners can improve competency modeling efforts by incorporating some of the methodological rigor inherent in job analysis. [source]
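
Generalizability analysis partitions rating variance into components for jobs, raters, and residual error; interrater reliability then falls out as the share of variance attributable to true job differences. The sketch below computes a single-rater G coefficient for a small crossed design; the formulas for this simple case and the ratings are illustrative, not the study's actual models:

```python
import numpy as np

def g_coefficient(ratings):
    """Jobs x raters crossed design: estimate variance components from
    mean squares and return the single-rater generalizability
    coefficient, var(job) / (var(job) + var(residual))."""
    n_j, n_r = ratings.shape
    grand = ratings.mean()
    job_means = ratings.mean(axis=1)
    rater_means = ratings.mean(axis=0)
    ms_job = n_r * ((job_means - grand) ** 2).sum() / (n_j - 1)
    resid = ratings - job_means[:, None] - rater_means[None, :] + grand
    ms_resid = (resid ** 2).sum() / ((n_j - 1) * (n_r - 1))
    var_job = max((ms_job - ms_resid) / n_r, 0.0)
    return var_job / (var_job + ms_resid)

# Hypothetical competency ratings: 4 jobs (rows) by 3 raters (columns).
ratings = np.array([[4, 5, 4], [2, 3, 2], [5, 5, 4], [3, 3, 3]], dtype=float)
print(f"single-rater G = {g_coefficient(ratings):.2f}")
```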