Model Components (model + component)
Selected Abstracts

Coupling integrated Earth System Model components with BFG2
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 6 2009. C. W. Armstrong

Abstract: GENIE is a suite of modular Earth System Model components coupled in a variety of configurations used to investigate climate phenomena. As part of the GENIEfy project, there is a desire to make the activity of coupling GENIE configurations more flexible in order to ease the integration of new components, permit experimentation with alternative model orderings and connectivity, and execute GENIE components in distributed environments. The current coupling framework is inflexible because models are run in a fixed order by a prescriptive main code. This paper shows how the BFG2 (Bespoke Framework Generator, version 2) coupling tool offers significantly more flexibility. Using BFG2, scientists describe GENIE configurations as metadata that can then be transformed automatically into the desired framework. It is demonstrated that BFG2 provides flexibility in composition and deployment, improvements that are achieved without modification to the GENIE components, without loss of performance, and in such a manner that it is possible to produce exactly the same results as under the original framework. We also demonstrate how BFG2 may be used to improve the performance of future GENIE coupled models. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Optimization of integrated Earth System Model components using Grid-enabled data management and computation
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 2 2007. A. R. Price

Abstract: In this paper, we present the Grid-enabled data management system that has been deployed for the Grid ENabled Integrated Earth system model (GENIE) project. The database system is an augmented version of the Geodise Database Toolbox and provides a repository for scripts, binaries and output data in the GENIE framework.
By exploiting the functionality available in the Geodise toolboxes we demonstrate how the database can be employed to tune parameters of coupled GENIE Earth System Model components to improve their match with observational data. A Matlab client provides a common environment for the project Virtual Organization and allows the scripting of bespoke tuning studies that can exploit multiple heterogeneous computational resources. We present the results of a number of tuning exercises performed on GENIE model components using multi-dimensional optimization methods. In particular, we find that it is possible to successfully tune models with up to 30 free parameters using Kriging and Genetic Algorithm methods. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Modelling runoff from highly glacierized alpine drainage basins in a changing climate
HYDROLOGICAL PROCESSES, Issue 19 2008. Matthias Huss

Abstract: The future runoff from three highly glacierized alpine catchments is assessed for the period 2007–2100 using a glacio-hydrological model including the change in glacier coverage. We apply scenarios for the seasonal change in temperature and precipitation derived from regional climate models. Glacier surface mass balance and runoff are calculated in daily time-steps using a distributed temperature-index melt and accumulation model. Model components account for changes in glacier extent and surface elevation, evaporation and runoff routing. The model is calibrated and validated using decadal ice volume changes derived from four digital elevation models (DEMs) between 1962 and 2006, and monthly runoff measured at a gauging station (1979–2006). Annual runoff from the drainage basins shows an initial increase, which is due to the release of water from glacial storage. After some decades, depending on catchment characteristics and the applied climate change scenario, runoff stabilizes and then drops below the current level.
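The temperature-index approach mentioned in the Huss abstract computes melt from positive air temperatures via a degree-day factor. A minimal sketch of that relation (the factor and threshold values here are illustrative, not the study's calibrated parameters):

```python
def degree_day_melt(temps_c, ddf=4.5, t_thresh=0.0):
    """Daily melt (mm water equivalent) from a temperature-index model:
    melt = DDF * max(T - T_thresh, 0). DDF and T_thresh are illustrative."""
    return [ddf * max(t - t_thresh, 0.0) for t in temps_c]

# Example: three days of air temperature (deg C); sub-freezing days give no melt
daily_melt = degree_day_melt([1.0, -2.0, 3.0])
total_melt = sum(daily_melt)  # mm w.e. over the period
```

In a distributed model of this kind, the same relation is evaluated per grid cell with temperature extrapolated to each cell's elevation.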
In all climate projections, the glacier area shrinks dramatically. There is an increase in runoff during spring and early summer, whereas the runoff in July and August decreases significantly. This study highlights the impact of glaciers and their future changes on runoff from high alpine drainage basins. Copyright © 2008 John Wiley & Sons, Ltd. [source]

SWAT2000: current capabilities and research opportunities in applied watershed modelling
HYDROLOGICAL PROCESSES, Issue 3 2005. J. G. Arnold

Abstract: SWAT (Soil and Water Assessment Tool) is a conceptual, continuous time model that was developed in the early 1990s to assist water resource managers in assessing the impact of management and climate on water supplies and non-point source pollution in watersheds and large river basins. SWAT is the continuation of over 30 years of model development within the US Department of Agriculture's Agricultural Research Service and was developed to 'scale up' past field-scale models to large river basins. Model components include weather, hydrology, erosion/sedimentation, plant growth, nutrients, pesticides, agricultural management, stream routing and pond/reservoir routing. The latest version, SWAT2000, has several significant enhancements that include: bacteria transport routines; urban routines; Green and Ampt infiltration equation; improved weather generator; ability to read in daily solar radiation, relative humidity, wind speed and potential ET; Muskingum channel routing; and modified dormancy calculations for tropical areas. A complete set of model documentation for equations and algorithms, a user manual describing model inputs and outputs, and an ArcView interface manual are now complete for SWAT2000. The model has been recoded into Fortran 90 with a complete data dictionary, dynamic allocation of arrays and modular subroutines.
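Among the hydrology components listed above, SWAT's default surface-runoff estimate is classically based on the SCS curve-number method (the Green and Ampt equation is the alternative the abstract mentions). A minimal sketch of the curve-number relation, with an illustrative curve number:

```python
def scs_runoff(p_mm, cn):
    """SCS curve-number surface runoff (mm) for storm rainfall P (mm).
    Retention S = 25400/CN - 254 (mm); initial abstraction Ia = 0.2*S;
    Q = (P - Ia)^2 / (P - Ia + S) once rainfall exceeds Ia, else zero."""
    s = 25400.0 / cn - 254.0
    ia = 0.2 * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Higher curve numbers (wetter or less permeable surfaces) give more runoff
q_low = scs_runoff(40.0, 60)
q_high = scs_runoff(40.0, 90)
```

CN = 100 represents a fully impervious surface, for which all rainfall becomes runoff.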
Current research is focusing on bacteria, riparian zones, pothole topography, forest growth, channel downcutting and widening, and input uncertainty analysis. SWAT is now used in many countries around the world. Recent developments in European environmental policy, such as the adoption of the European Water Framework Directive in December 2000, demand tools for integrative river basin management. SWAT is applicable for this purpose: it is a flexible model that can be used under a wide range of different environmental conditions, as this special issue will show. The papers compiled here are the result of the first International SWAT Conference held in August 2001 in Rauischholzhausen, Germany. More than 50 participants from 14 countries discussed their modelling experiences with the model development team from the USA. Nineteen selected papers, covering topics ranging from the newest developments, the evaluation of river basin management, interdisciplinary approaches for river basin management, the impact of land use change, methodological aspects and models derived from SWAT, are published in this special issue. Copyright © 2005 John Wiley & Sons, Ltd. [source]

PERSPECTIVE: Establishing an NPD Best Practices Framework
THE JOURNAL OF PRODUCT INNOVATION MANAGEMENT, Issue 2 2006. Kenneth B. Kahn

Achieving NPD best practices is a top-of-mind issue for many new product development (NPD) managers and is often an overarching implicit, if not explicit, goal. The question is what one means when talking about NPD best practices, and how a manager moves toward achieving them. This article proposes a best practices framework as a starting point for much-needed discussion on this topic. Originally presented during the 2004 Product Development Management Association (PDMA) Research Conference in Chicago, the article and the authors' presentation spurred a significant, expansive discussion that included all conference attendees.
Given the interest generated, the decision was made to move forward with a series of rejoinders on the topic of NPD best practice, using the Kahn, Barczak, and Moss framework as a focal launching point. A total of five rejoinders were received and accompany the best practices framework in this issue of JPIM. Each rejoinder brings out a distinct issue because each of the five authors has a unique perspective. The first rejoinder is written by Dr. Marjorie Adams-Bigelow, director of the PDMA's Comparative Performance Assessment Study (CPAS), PDMA Foundation. Based on her findings during the CPAS study, Adams-Bigelow comments on the proposed framework, suggesting limitations in scope. She particularly points out discrepancies between the proposed framework and the framework offered by PDMA's emerging body of knowledge. Dr. Elko Kleinschmidt, professor of marketing and international business at McMaster University, wrote the second rejoinder. Based on his extensive research with Robert G. Cooper on NPD practices, he points out that best practices really raise more questions than answers. Thomas Kuczmarski, president of Kuczmarski and Associates, is the author of the third rejoinder. Kuczmarski highlights that company mindset and metrics are critical elements needing keen attention. Where do these fit, or should they fit, in the proposed framework? The fourth rejoinder is written by Richard Notargiacomo, consultant for the integrated product delivery process at Eastman Kodak Company. Notargiacomo compares the proposed framework to a best practices framework Kodak has used for new product commercialization and management since 1998. The distinction of the Kodak framework is the inclusion of a product maturity model component. Dr. Lois Peters, associate professor at Rensselaer Polytechnic Institute (RPI), is the author of the fifth rejoinder. She brings out issues of radical innovation, a natural focal issue of RPI's radical innovation project (RRIP).
It is highlighted that radical innovation may require unique, distinctive process characteristics that a single framework cannot illustrate. Multiple layers of frameworks may be more appropriate, each corresponding to a level of innovation desired. The overall hope is that the discourse on best practices in this issue of JPIM generates more discussion and debate, and ultimately that such discourse will lead to continued study to help discern what NPD best practice means for our discipline. [source]

Evaluation of the PESERA model in two contrasting environments
EARTH SURFACE PROCESSES AND LANDFORMS, Issue 5 2009. F. Licciardello

Abstract: The performance of the Pan-European Soil Erosion Risk Assessment (PESERA) model was evaluated by comparison with existing soil erosion data collected in plots under different land uses and climate conditions in Europe. In order to identify the most important sources of error, the PESERA model was evaluated by comparing model output with measured values as well as by assessing the effect of the various model components on prediction accuracy through a multistep approach. First, the performance of the hydrological and erosion components of PESERA was evaluated separately by comparing both runoff and soil loss predictions with measured values. In order to assess the performance of the vegetation growth component of PESERA, the predictions of the model based on observed values of vegetation ground cover were also compared with predictions based on the simulated vegetation cover values. Finally, in order to evaluate the sediment transport model, predicted monthly erosion rates were also calculated using observed values of runoff and vegetation cover instead of simulated values. Moreover, in order to investigate the capability of PESERA to reproduce seasonal trends, the observed and simulated monthly runoff and erosion values were aggregated at different temporal scales, and we investigated to what extent the model prediction error could be reduced by output aggregation. PESERA showed promise in predicting annual average spatial variability quite well. In its present form, short-term temporal variations are not well captured, probably for several reasons. The multistep approach showed that this is not only due to unrealistic simulation of cover and runoff; the erosion prediction itself is also an important source of error. Although variability between the investigated land uses and climate conditions is well captured, absolute rates are strongly underestimated.
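The effect of output aggregation on prediction error can be illustrated with a toy calculation: a simulation that misplaces erosion in time scores poorly month by month but well on annual totals (all numbers below are synthetic, not from the study):

```python
import math

def rmse(obs, sim):
    """Root-mean-square error between paired observed and simulated values."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def aggregate(values, block):
    """Sum consecutive blocks, e.g. 12 monthly values -> 1 annual value."""
    return [sum(values[i:i + block]) for i in range(0, len(values), block)]

# Synthetic year: the simulation shifts each erosion pulse by one month
obs = [5, 0, 3, 0, 2, 0, 4, 0, 1, 0, 6, 0]
sim = [0, 5, 0, 3, 0, 2, 0, 4, 0, 1, 0, 7]
monthly_err = rmse(obs, sim)
annual_err = rmse(aggregate(obs, 12), aggregate(sim, 12))
```

The monthly error is large even though the annual totals nearly agree, which is the pattern the multistep evaluation above describes.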
A calibration procedure, focused on a soil erodibility factor, is proposed to reduce the significant underestimation of soil erosion rates. Copyright © 2009 John Wiley & Sons, Ltd. [source]

Application of Multivariate curve resolution-alternating least square methods on the resolution of overlapping CE peaks from different separation conditions
ELECTROPHORESIS, Issue 20 2007. Fang Zhang

Abstract: This paper develops a new strategy to improve the resolution of overlapping CE peaks by using second-order multivariate curve resolution with alternating least squares (second-order MCR-ALS) methods. Different organic reagents are added to the buffers, and sets of overlapping peaks with different separations are obtained. An augmented matrix is formed from the corresponding matrices of the overlapping peaks and is then analyzed by the second-order MCR-ALS method in order to use all data information to improve the precision of the resolution. Similarity between the resolved unit spectrum and the true one is used to assess the quality of the solutions provided by the above method. 3,4-Dihydropyrimidin-2-one derivatives (DHPOs) are used as model components and mixed artificially in order to obtain overlapping peaks. Three different impurity levels, 100, 20, and 10% relative to the main component, are used. With this strategy, the concentration profiles and spectra of impurities that are no more than 10% of the main component can be resolved from the overlapping peaks without pure standards participating in the analysis. The effects of changes in the component spectra in the buffer with different organic reagents on the resolution are also evaluated; they are slight and can thus be ignored in the analysis.
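The core of MCR-ALS is the bilinear model D ≈ C Sᵀ (concentration profiles times spectra), solved by alternating least squares under nonnegativity. A bare-bones sketch on synthetic rank-2 data; real applications add further constraints (unimodality, closure) and the recovered profiles carry the usual scale ambiguity:

```python
import numpy as np

def mcr_als(D, C0, n_iter=100):
    """Alternating least squares for the bilinear model D ~ C @ S.T,
    with nonnegativity enforced by clipping (bare-bones MCR-ALS)."""
    C = C0.copy()
    for _ in range(n_iter):
        S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0.0, None)
        C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0.0, None)
    return C, S

# Synthetic data: two elution profiles (rows = times) x two spectra (rows = wavelengths)
C_true = np.array([[1., 0.], [2., 1.], [3., 2.], [2., 3.], [1., 2.], [0., 1.]])
S_true = np.array([[1.0, 0.0], [0.2, 0.5], [0.0, 1.0], [0.3, 0.1]])
D = C_true @ S_true.T
C_fit, S_fit = mcr_als(D, C0=C_true * np.array([1.3, 0.7]))  # deliberately mis-scaled start
```

Because the factorization is only defined up to scale, the fit is judged by how well C_fit @ S_fit.T reconstructs D.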
Individual data matrices (two-way data) are also analyzed using MCR-ALS and heuristic evolving latent projections (HELP) methods, and their results are compared with those obtained when MCR-ALS is applied to augmented data matrix (three-way data) analysis. [source]

Adapted DAX-8 fractionation method for dissolved organic matter (DOM) from soils: development, calibration with test components and application to contrasting soil solutions
EUROPEAN JOURNAL OF SOIL SCIENCE, Issue 6 2009. F. Amery

Summary: Most methods to fractionate natural dissolved organic matter (DOM) rely on sorption of acidified DOM samples onto XAD-8 or DAX-8 resin. Procedural differences among methods are large and their interpretation is limited because there is a lack of calibration with DOM model molecules. An automated column-based DOM fractionation method was set up for 10-ml DOM samples, dividing DOM into hydrophilic (HPI), hydrophobic acid (HPOA) and hydrophobic neutral (HPON) fractions. Fifteen DOM model components were tested in isolation and in combination. Three reference DOM samples of the International Humic Substances Society were included to facilitate comparison with other methods. Aliphatic low-molecular-weight acids (LMWAs) and carbohydrates were classified as HPI DOM, but some LMWAs also showed a partial HPO character. Aromatic LMWAs and polyphenols partitioned into the HPOA fraction, menadione (quinone) and geraniol (terpenoid) into HPON DOM. Molecules with log Kow > 0.5 had negligible HPI fractions. The HPO molecules except geraniol had specific UV absorbance (SUVA, a measure of aromaticity) >3 litres g⁻¹ cm⁻¹, while HPI molecules had SUVA values <3 litres g⁻¹ cm⁻¹. Distributions of DOM from eight soils ranged from 31 to 72% HPI, 25 to 46% HPOA and 2 to 28% HPON of total dissolved organic carbon. The SUVA of the HPI DOM was consistently smaller than that of the HPOA DOM.
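SUVA as used above is UV absorbance normalized by optical path length and DOC concentration; in the litres g⁻¹ cm⁻¹ units of this study that is a one-line calculation (the sample values below are hypothetical):

```python
def suva(a254, doc_g_per_l, path_cm=1.0):
    """Specific UV absorbance (litres g^-1 cm^-1): absorbance at 254 nm
    divided by cuvette path length (cm) and DOC concentration (g/L)."""
    return a254 / (doc_g_per_l * path_cm)

# Hypothetical sample: A254 = 0.15 in a 1 cm cell at 50 mg C per litre (0.05 g/L)
sample_suva = suva(0.15, 0.05)   # ~3 litres per g per cm, the HPI/HPO cutoff above
```

Per the classification above, values above about 3 litres g⁻¹ cm⁻¹ indicate aromatic, hydrophobic material.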
The SUVA of the natural DOM samples was not explained statistically by fractionation, and the coefficient of variation of SUVA among samples was not reduced by fractionation. Hence, fractionation did not reduce the variability in this DOM property, which casts some doubt on the practical role of DOM fractionation in predicting DOM properties. [source]

Multilevel Analysis of the Chronic Care Model and 5A Services for Treating Tobacco Use in Urban Primary Care Clinics
HEALTH SERVICES RESEARCH, Issue 1 2009. Dorothy Y. Hung

Objective: To examine the chronic care model (CCM) as a framework for improving provider delivery of 5A tobacco cessation services. Methods: Cross-sectional surveys were used to obtain data from 497 health care providers in 60 primary care clinics serving low-income patients in New York City. A hierarchical generalized linear modeling approach to ordinal regression was used to estimate the probability of full 5A service delivery, adjusting for provider covariates and clustering effects. We examined associations between provider delivery of 5A services, clinic implementation of CCM elements tailored for treating tobacco use, and the degree of CCM integration in clinics. Principal Findings: Providers practicing in clinics with enhanced delivery system design, clinical information systems, and self-management support for cessation were 2.04–5.62 times more likely to perform all 5A services (p < .05). CCM integration in clinics was also positively associated with 5As delivery. Compared with none, implementation of one to six CCM elements corresponded with 3.69–30.9 times increased odds of providers delivering the full spectrum of 5As (p < .01). Conclusions: Findings suggest that the CCM facilitates provider adherence to the Public Health Service 5A clinical guideline. Achieving the full benefits of systems change may require synergistic adoption of all model components.
[source]

TOPCAT-NP: a minimum information requirement model for simulation of flow and nutrient transport from agricultural systems
HYDROLOGICAL PROCESSES, Issue 14 2008. P. F. Quinn

Abstract: Future catchment planning requires a good understanding of the impacts of land use and management, especially with regard to nutrient pollution. A range of readily usable tools, including models, can play a critical role in underpinning robust decision-making. Modelling tools must articulate our process understanding, make links to a range of catchment characteristics and scales, and have the capability to reflect future land-use management changes. Hence, the model application can play an important part in giving confidence to policy makers that positive outcomes will arise from any proposed land-use changes. Here, a minimum information requirement (MIR) modelling approach is presented that creates simple, parsimonious models based on more complex physically based models, making them more appropriate to catchment-scale applications. This paper shows three separate MIR models that represent flow, nitrate losses and phosphorus losses. These models are integrated into a single catchment model (TOPCAT-NP), which has the advantage that certain model components (such as soil type and flow paths) are shared by all three MIR models. The integrated model can simulate a number of land-use activities that relate to typical land-use management practices. The modelling process also gives insight into the seasonal and event nature of nutrient losses exhibited at a range of catchment scales. Three case studies are presented to reflect the range of applicability of the model. The three studies show how different runoff and nutrient loss regimes in different soil/geological and global locations can be simulated using the same model.
The first case study models intense agricultural land uses in Denmark (Gjern, 114 km²), the second is an intense agricultural area dominated by high superphosphate applications in Australia (Ellen Brook, 66 km²) and the third is a small research-scale catchment in the UK (Bollington Hall, 2 km²). Copyright © 2007 John Wiley & Sons, Ltd. [source]

A field-scale infiltration model accounting for spatial heterogeneity of rainfall and soil saturated hydraulic conductivity
HYDROLOGICAL PROCESSES, Issue 7 2006. Renato Morbidelli

Abstract: This study first explores the role of spatial heterogeneity, in both the saturated hydraulic conductivity Ks and rainfall intensity r, on the integrated hydrological response of a natural slope. On this basis, a mathematical model for estimating the expected areal-average infiltration is then formulated. Both Ks and r are considered as random variables with assessed probability density functions. The model relies upon a semi-analytical component, which describes the directly infiltrated rainfall, and an empirical component, which accounts further for the infiltration of surface water running downslope into pervious soils (the run-on effect). Monte Carlo simulations over a clay loam soil and a sandy loam soil were performed for constructing the ensemble averages of field-scale infiltration used for model validation. The model produced very accurate estimates of the expected field-scale infiltration rate, as well as of the outflow generated by significant rainfall events. Furthermore, the two model components were found to interact appropriately for different weights of the two infiltration mechanisms involved. Copyright © 2005 John Wiley & Sons, Ltd.
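The Monte Carlo construction of an expected areal-average infiltration rate can be sketched with a deliberately simplified closure in which a point infiltrates min(r, Ks); the study's actual semi-analytical component and run-on term are richer than this, and the distributions and parameters below are purely illustrative:

```python
import random

def expected_infiltration(n=100_000, seed=42):
    """Monte Carlo estimate of areal-average infiltration rate when both
    saturated conductivity Ks and rainfall intensity r vary randomly in
    space. Toy closure: a point infiltrates min(r, Ks); run-on ignored.
    Ks lognormal, r uniform; all parameters illustrative."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        ks = rng.lognormvariate(mu=1.0, sigma=0.8)   # mm/h
        r = rng.uniform(2.0, 10.0)                   # mm/h
        total += min(r, ks)
    return total / n
```

The estimated mean is necessarily below the mean rainfall rate (6 mm/h here), since low-Ks points shed part of their rainfall as runoff.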
[source]

Multi-variable and multi-site calibration and validation of SWAT in a large mountainous catchment with high spatial variability
HYDROLOGICAL PROCESSES, Issue 5 2006. Wenzhi Cao

Abstract: Many methods developed for calibration and validation of physically based distributed hydrological models are time consuming and computationally intensive. Only a small set of input parameters can be optimized, and the optimization often results in unrealistic values. In this study we adopted a multi-variable and multi-site approach to calibration and validation of the Soil Water Assessment Tool (SWAT) model for the Motueka catchment, making use of extensive field measurements. Not only were a number of hydrological processes (model components) in the catchment evaluated, but a number of subcatchments were also used in the calibration. The internal variables used were PET, annual water yield, daily streamflow, baseflow, and soil moisture. The study was conducted using an 11-year historical flow record (1990–2000); 1990–94 was used for calibration and 1995–2000 for validation. SWAT generally predicted the PET, water yield and daily streamflow well. The predicted daily streamflow matched the observed values, with a Nash–Sutcliffe coefficient of 0.78 during calibration and 0.72 during validation. However, values for subcatchments ranged from 0.31 to 0.67 during calibration, and 0.36 to 0.52 during validation. The predicted soil moisture remained wetter than the measurements. About 50% of the extra soil water storage predicted by the model can be ascribed to overprediction of precipitation; the remaining 50% discrepancy was likely to be a result of poor representation of soil properties.
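The Nash–Sutcliffe coefficient used above is one minus the ratio of the model error variance to the variance of the observations about their mean:

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - sum((obs-sim)^2) / sum((obs-mean)^2).
    1.0 is a perfect fit; 0.0 is no better than the observed mean."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ssm = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / ssm

ns_perfect = nash_sutcliffe([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])  # perfect fit
ns_mean = nash_sutcliffe([1.0, 2.0, 3.0], [2.0, 2.0, 2.0])     # constant-mean model
```

A simulation equal to the observed mean everywhere scores exactly zero, which is why values like 0.78 indicate substantial skill while values near 0.3 indicate weak skill.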
Hydrological compensations in the modelling results are derived from water balances in the various pathways and storages (evaporation, streamflow, surface runoff, soil moisture and groundwater) and the contributions to streamflow from different geographic areas (hill slopes, variable source areas, sub-basins, and subcatchments). The use of an integrated multi-variable and multi-site method improved the model calibration and validation and highlighted the areas and hydrological processes requiring greater calibration effort. Copyright © 2005 John Wiley & Sons, Ltd. [source]

Multi-variable parameter estimation to increase confidence in hydrological modelling
HYDROLOGICAL PROCESSES, Issue 2 2002. Sten Bergström

Abstract: The expanding use and increased complexity of hydrological runoff models has given rise to concern about overparameterization and risks of compensating errors. One proposed way out is calibration and validation against additional observations, such as snow, soil moisture, groundwater or water quality. A general problem, however, when calibrating the model against more than one variable is the strategy for parameter estimation. The most straightforward method is to calibrate the model components sequentially. Recent results show that in this way the model may be locked into a parameter setting which is good enough for one variable but excludes proper simulation of other variables. This is particularly the case for water quality modelling, where a small compromise in terms of runoff simulation may lead to dramatically better simulations of water quality. This calls for an integrated model calibration procedure with a criterion that integrates more aspects of model performance than just river runoff. The use of multi-variable parameter estimation and internal control of the HBV hydrological model is discussed and highlighted by two case studies.
The first example is from a forested basin in northern Sweden and the second one is from an agricultural basin in the south of the country. A new calibration strategy, which is integrated rather than sequential, is proposed and tested. It is concluded that comparison of model results with more measurements than only runoff can lead to increased confidence in the physical relevance of the model, and that the new calibration strategy can be useful for further model development. Copyright © 2002 John Wiley & Sons, Ltd. [source]

Hyperspectral NIR image regression part II: dataset preprocessing diagnostics
JOURNAL OF CHEMOMETRICS, Issue 3-4 2006. James Burger

Abstract: When known reference values such as concentrations are available, the spectra from near infrared (NIR) hyperspectral images can be used for building regression models. The sets of spectra must be corrected for errors, transformed to reflectance or absorbance values, and trimmed of bad pixel outliers in order to build robust models and minimize prediction errors. Calibration models can be computed from small (<100) sets of spectra, where each spectrum summarizes an individual image or spatial region of interest (ROI), and used to predict large (>20,000) test sets of spectra. When the distributions of these large populations of predicted values are viewed as histograms they provide mean sample concentrations (peak centers) as well as uniformity (peak widths) and purity (peak shape) information. The same predicted values can also be viewed as concentration maps or images, adding spatial information to the uniformity or purity presentations. Estimates of large population statistics enable a new metric for determining the optimal number of model components, based on a combination of global bias and pooled standard deviation values computed from multiple test images or ROIs. Two example datasets are presented: an artificial mixture design of three chemicals with distinct NIR spectra and samples of different cheeses.
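The two population statistics named above, global bias and pooled standard deviation over multiple test images or ROIs, can be sketched as follows (the exact rule for combining them into a single model-selection metric may differ from the paper's):

```python
import math

def global_bias(pred_groups, ref_values):
    """Average of (mean predicted value - reference value) over test images/ROIs."""
    diffs = [sum(p) / len(p) - r for p, r in zip(pred_groups, ref_values)]
    return sum(diffs) / len(diffs)

def pooled_sd(pred_groups):
    """Square root of the mean within-group sample variance."""
    variances = []
    for p in pred_groups:
        m = sum(p) / len(p)
        variances.append(sum((x - m) ** 2 for x in p) / (len(p) - 1))
    return math.sqrt(sum(variances) / len(variances))

# Two test images, each with a (tiny) cloud of per-pixel predictions
groups = [[1.0, 3.0], [2.0, 4.0]]
refs = [2.0, 2.0]
bias = global_bias(groups, refs)
spread = pooled_sd(groups)
```

One would compute both quantities for each candidate number of model components and choose the count where bias and spread are jointly smallest.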
In some cases it was found that baseline correction by taking first derivatives gave more useful prediction results by reducing optical problems. Other data pretreatments resulted in negligible changes in prediction errors, overshadowed by the variance associated with sample preparation or presentation and other physical phenomena. Copyright © 2007 John Wiley & Sons, Ltd. [source]

Stability analysis of an additive spline model for respiratory health data by using knot removal
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 5 2009. Harald Binder

Summary: In many settings with possibly non-linear influence of covariates, such as in the present application with children's respiratory health data, generalized additive models are an attractive choice. Although techniques for fitting these have been extensively investigated, there are fewer results on stability of replication, i.e. stability of fitted model components with respect to perturbations in the data. Nevertheless, this aspect is essential for judging how useful the present model is for understanding predictors of lung function. We therefore investigate existing tools for stability analysis based on bootstrap samples, such as quantities for variability and bias, for our application. Furthermore, as the focus is on models based on B-splines, knot removal techniques are available. These can help to provide more insight into the stability of local features that are fitted in bootstrap samples. We analyse the bootstrap result matrix via log-linear models. Specifically, the relationship between the influence functions of potential lung function predictors, with respect to local features, is investigated.
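Stability of replication can be probed by refitting the model on bootstrap resamples and examining the spread of a fitted component. A minimal sketch with an ordinary least-squares slope standing in for a spline term (the paper's actual machinery, B-spline fits with knot removal, is far richer):

```python
import random

def fit_slope(xs, ys):
    """Ordinary least-squares slope (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def bootstrap_slopes(xs, ys, n_boot=200, seed=0):
    """Refit on bootstrap resamples; the spread of the refitted values
    measures how stably the fitted component replicates."""
    rng = random.Random(seed)
    n = len(xs)
    out = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        out.append(fit_slope([xs[i] for i in idx], [ys[i] for i in idx]))
    return out

xs = list(range(10))
ys = [2 * x + 1 for x in xs]              # exactly linear, true slope 2
slopes = bootstrap_slopes(xs, ys, n_boot=100, seed=1)
```

With noise-free data every resampled fit recovers the same slope; with real data the bootstrap distribution of the component quantifies its stability.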
[source]

Preventing and ameliorating young children's chronic problem behaviors: An ecological classroom-based approach
PSYCHOLOGY IN THE SCHOOLS, Issue 1 2009. Maureen Conroy

The number of young children who demonstrate chronic problem behaviors placing them at high risk for the future development of emotional and behavioral disorders is increasing. These children's problem behaviors often exist prior to entering school and become apparent as they interact with their parents at home. In fact, researchers have suggested that children who demonstrate chronic problem behaviors and their parents often end up developing well-established negative interaction patterns that can evolve into coercive relationships and persist upon entry into school. This article describes an ecological classroom-based approach, which emphasizes changing teacher–student interaction patterns as a means for preventing and possibly ameliorating coercive interaction patterns demonstrated by young children and their teachers. First, a brief overview of current service delivery models and intervention programs addressing young children's behavioral excesses is presented. Next, the ecological classroom-based intervention model for addressing the behavioral needs of these children is described, including the theoretical frameworks on which the model is based and an overview of model components. Additionally, the application of the model to a school-wide systems approach is explored. Finally, future research directions are discussed. © 2008 Wiley Periodicals, Inc.
[source]

Structured additive regression for overdispersed and zero-inflated count data
APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 4 2006. Ludwig Fahrmeir

Abstract: In count data regression there can be several problems that prevent the use of the standard Poisson log-linear model: overdispersion, caused by unobserved heterogeneity or correlation, excess of zeros, non-linear effects of continuous covariates or of time scales, and spatial effects. We develop Bayesian count data models that can deal with these issues simultaneously and within a unified inferential approach. Models for overdispersed or zero-inflated data are combined with semiparametrically structured additive predictors, resulting in a rich class of count data regression models. Inference is fully Bayesian and is carried out by computationally efficient MCMC techniques. Simulation studies investigate performance, in particular how well different model components can be identified. Applications to patent data and to data from a car insurance illustrate the potential and, to some extent, limitations of our approach. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Joint Modelling of Repeated Transitions in Follow-up Data – A Case Study on Breast Cancer Data
BIOMETRICAL JOURNAL, Issue 3 2005. B. Genser

Abstract: In longitudinal studies where time to a final event is the ultimate outcome, information is often available about intermediate events the individuals may experience during the observation period. Even though many extensions of the Cox proportional hazards model have been proposed to model such multivariate time-to-event data, these approaches are still very rarely applied to real datasets. The aim of this paper is to illustrate the application of extended Cox models for multiple time-to-event data and to show their implementation in popular statistical software packages.
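The zero-inflated count models in the Fahrmeir abstract above mix a point mass at zero with a Poisson component: a zero can be structural (probability pi) or an ordinary Poisson zero. A minimal sketch of the resulting probability mass function and log-likelihood:

```python
import math

def zip_pmf(k, lam, pi):
    """Zero-inflated Poisson pmf: with probability pi the count is a
    structural zero; otherwise it follows Poisson(lam)."""
    pois = math.exp(-lam) * lam ** k / math.factorial(k)
    if k == 0:
        return pi + (1.0 - pi) * pois
    return (1.0 - pi) * pois

def zip_loglik(counts, lam, pi):
    """Log-likelihood of observed counts under the ZIP model."""
    return sum(math.log(zip_pmf(k, lam, pi)) for k in counts)
```

In the Bayesian structured additive setting, log(lam) (and possibly the logit of pi) would carry the semiparametric predictor; here both are held as scalars for clarity.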
We demonstrate a systematic way of jointly modelling similar or repeated transitions in follow-up data by analysing an event-history dataset consisting of 270 breast cancer patients who were followed up for different clinical events during treatment in metastatic disease. First, we show how this methodology can also be applied to non-Markovian stochastic processes by representing these processes as "conditional" Markov processes. Secondly, we compare the application of different Cox-related approaches to the breast cancer data by varying their key model components (i.e. analysis time scale, risk set and baseline hazard function). Our study showed that extended Cox models are a powerful tool for analysing complex event history datasets, since the approach can address many dynamic data features such as multiple time scales, dynamic risk sets, time-varying covariates, transition by covariate interactions, autoregressive dependence or intra-subject correlation. (© 2005 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]

Spatial Modeling of Wetland Condition in the U.S. Prairie Pothole Region
BIOMETRICS, Issue 2 2002. J. Andrew Royle

Summary: We propose a spatial modeling framework for wetland data produced from a remote-sensing-based waterfowl habitat survey conducted in the U.S. Prairie Pothole Region (PPR). The data produced from this survey consist of the area containing water on many thousands of wetland basins (i.e., prairie potholes). We propose a two-state model containing wet and dry states. This model provides a concise description of wet probability, i.e., the probability that a basin contains water, and the amount of water contained in wet basins. The two model components are spatially linked through a common latent effect, which is assumed to be spatially correlated. Model fitting and prediction are carried out using Markov chain Monte Carlo methods.
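The two-state structure just described links the wet/dry state and the water amount through a common latent effect. A simulation-only sketch, using a logit link for the wet probability and a log link for the amount; the link functions and all parameters are illustrative assumptions, and the spatial correlation of the latent effect is omitted here:

```python
import math
import random

def simulate_basin(rng, alpha=0.0, beta=1.0, sigma_z=1.0):
    """Two-state sketch: a shared latent effect z raises both the
    probability that a basin is wet (logit link) and, if wet, the
    water amount (log link). All parameters are illustrative."""
    z = rng.gauss(0.0, sigma_z)
    p_wet = 1.0 / (1.0 + math.exp(-(alpha + z)))
    if rng.random() < p_wet:
        return True, math.exp(beta * z + rng.gauss(0.0, 0.2))
    return False, 0.0

rng = random.Random(3)
basins = [simulate_basin(rng) for _ in range(2000)]
wet_fraction = sum(w for w, _ in basins) / len(basins)
```

Because the same z enters both components, wet basins tend to be the ones with large simulated water amounts, which is exactly the coupling the shared latent effect is meant to capture.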
The model primarily facilitates mapping of habitat conditions, which is useful in varied monitoring and assessment capacities. More importantly, the predictive capability of the model provides a rigorous statistical framework for directing management and conservation activities by enabling characterization of habitat structure at any point on the landscape. [source]