Model Framework (model + framework)

Kinds of Model Framework

  • mixed model framework


  • Selected Abstracts


    International evidence on alternative models of the term structure of volatilities

    THE JOURNAL OF FUTURES MARKETS, Issue 7 2009
    Antonio Díaz
    The term structure of instantaneous volatilities (TSV) of forward rates for different monetary areas (euro, U.S. dollar and British pound) is examined using daily data from at-the-money cap markets. During the sample period (two and a half years), the TSV experienced severe changes both in level and shape. Two new functional forms of the instantaneous volatility of forward rates are proposed and tested within the LIBOR Market Model framework. Two other alternatives are calibrated and used as benchmarks to test the accuracy of the new models. The two new models provide more flexibility to adequately calibrate the observed cap prices, although this improved accuracy in replicating cap prices produces some instability in parameter estimates. © 2009 Wiley Periodicals, Inc. Jrl Fut Mark 29:653–683, 2009 [source]
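
    The abstract does not reproduce the paper's two new functional forms, but the flavour of such a calibration can be sketched with the standard "abcd" instantaneous-volatility parameterization, a common LMM benchmark, fitted here to a hypothetical term structure of cap-implied volatilities (all numbers invented for illustration):

```python
# Minimal sketch: fitting Rebonato's "abcd" instantaneous-volatility form,
# a standard LMM benchmark. The paper's two new functional forms are not
# given in the abstract; all data below are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def abcd_vol(tau, a, b, c, d):
    """sigma(tau) = (a + b*tau) * exp(-c*tau) + d, tau = time to maturity."""
    return (a + b * tau) * np.exp(-c * tau) + d

tau = np.array([0.5, 1.0, 2.0, 3.0, 5.0, 7.0, 10.0])          # years
observed_vol = np.array([0.14, 0.17, 0.18, 0.17, 0.15, 0.14, 0.13])

params, _ = curve_fit(abcd_vol, tau, observed_vol, p0=[0.1, 0.1, 0.5, 0.1])
print("fitted (a, b, c, d):", params.round(3))
print("fitted TSV:", abcd_vol(tau, *params).round(3))
```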


    One Hundred Fifty Years of Change in Forest Bird Breeding Habitat: Estimates of Species Distributions

    CONSERVATION BIOLOGY, Issue 6 2005
    LISA A. SCHULTE
    Abstract: Evaluating bird population trends requires baseline data. In North America the earliest population data available are those from the late 1960s. Forest conditions in the northern Great Lake states (U.S.A.), however, have undergone succession since the region was originally cut over around the turn of the twentieth century, and it is expected that bird populations have undergone concomitant change. We propose pre-Euro-American settlement as an alternative baseline for assessing changes in bird populations. We evaluated the amount, quality, and distribution of breeding bird habitat during the mid-1800s and early 1990s for three forest birds: the Pine Warbler (Dendroica pinus), Blackburnian Warbler (D. fusca), and Black-throated Green Warbler (D. virens). We constructed models of bird and habitat relationships based on literature review and regional data sets of bird abundance and applied these models to widely available vegetation data. Original public-land survey records represented historical habitat conditions, and a combination of forest inventory and national land-cover data represented current conditions. We assessed model robustness by comparing current habitat distribution to actual breeding bird locations from the Wisconsin Breeding Bird Atlas. The model showed little change in the overall amount of Pine Warbler habitat, whereas both the Blackburnian Warbler and the Black-throated Green Warbler have experienced substantial habitat losses. For the species we examined, habitat quality has degraded since presettlement and the spatial distribution of habitat shifted among ecoregions, with range expansion accompanying forest incursion into previously open habitats or the replacement of native forests with pine plantations. Sources of habitat loss and degradation include loss of conifers and loss of large trees. Using widely available data sources in a habitat suitability model framework, our method provides a long-term analysis of change in bird habitat and a presettlement baseline for assessing current conservation priority. [source]


    A Comparison of Arbitration Procedures for Risk-Averse Disputants

    DECISION SCIENCES, Issue 4 2004
    Michael J. Armstrong
    ABSTRACT We propose an arbitration model framework that generalizes many previous quantitative models of final offer arbitration, conventional arbitration, and some proposed alternatives to them. Our model allows the two disputants to be risk averse and assumes that the issue(s) in dispute can be summarized by a single quantifiable value. We compare the performance of the different arbitration procedures by analyzing the gap between the disputants' equilibrium offers and the width of the contract zone that these offers imply. Our results suggest that final offer arbitration should give results superior to those of conventional arbitration. [source]


    Bayes nets and babies: infants' developing statistical reasoning abilities and their representation of causal knowledge

    DEVELOPMENTAL SCIENCE, Issue 3 2007
    David M. Sobel
    A fundamental assumption of the causal graphical model framework is the Markov assumption, which posits that learners can discriminate between two events that are dependent because of a direct causal relation between them and two events that are independent conditional on the value of another event or events. Sobel and Kirkham (2006) demonstrated that 8-month-old infants registered conditional independence information among a sequence of events; infants responded according to the Markov assumption in a way that was inconsistent with models that rely on simple calculations of associative strength. The present experiment extends these findings to younger infants, and demonstrates that such responses potentially develop during the second half of the first year of life. These data are discussed in terms of a developmental trajectory between associative mechanisms and causal graphical models as representations of infants' causal and statistical learning. [source]
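
    The Markov (screening-off) property at the heart of this framework can be illustrated with a small simulation, a minimal sketch assuming a causal chain A → B → C with invented probabilities:

```python
# Toy illustration of the Markov assumption: in a causal chain A -> B -> C,
# C depends on A marginally but is independent of A conditional on B.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
A = rng.random(n) < 0.5
B = np.where(A, rng.random(n) < 0.9, rng.random(n) < 0.2)  # B caused by A
C = np.where(B, rng.random(n) < 0.8, rng.random(n) < 0.1)  # C caused by B only

def p(c, mask):
    return round(c[mask].mean(), 3)

print("P(C|A=1) =", p(C, A), " P(C|A=0) =", p(C, ~A))      # marginally dependent
# Conditioning on B screens off A:
print("P(C|B=1,A=1) =", p(C, B & A), " P(C|B=1,A=0) =", p(C, B & ~A))
```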


    Dynamic distribution modelling: predicting the present from the past

    ECOGRAPHY, Issue 1 2009
    Stephen G. Willis
    Confidence in projections of the future distributions of species requires demonstration that recently-observed changes could have been predicted adequately. Here we use a dynamic model framework to demonstrate that recently-observed changes at the expanding northern boundaries of three British butterfly species can be predicted with good accuracy. Previous work established that the distributions of the study species currently lag behind climate change, and so we presumed that climate is not currently a major constraint at the northern range margins of our study species. We predicted 1970–2000 distribution changes using a colonisation model, MIGRATE, superimposed on a high-resolution map of habitat availability. Thirty-year rates and patterns of distribution change could be accurately predicted for each species (goodness-of-fit of models >0.64 for all three species, corresponding to >83% of grid cells correctly assigned), using a combination of individual species traits, species-specific habitat associations and distance-dependent dispersal. Sensitivity analyses showed that population productivity was the most important determinant of the rate of distribution expansion (variation in dispersal rate was not studied because the species are thought to be similar in dispersal capacity), and that each species' distribution prior to expansion was critical in determining the spatial pattern of the current distribution. In future, modelling approaches that combine climate suitability and spatially-explicit population models, incorporating demographic variables and habitat availability, are likely to be valuable tools in projecting species' responses to climatic change and hence in anticipating management to facilitate species' dispersal and persistence. [source]
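
    A minimal sketch of this kind of colonisation model, not the MIGRATE implementation itself, might couple a habitat map with a distance-dependent dispersal kernel scaled by population productivity (all parameter values hypothetical):

```python
# Sketch of a distance-dependent colonisation model in the spirit of the
# approach described: occupied habitat cells seed colonisations whose
# probability decays with distance and scales with productivity.
import numpy as np

rng = np.random.default_rng(1)
size, years = 50, 25
productivity, alpha = 0.08, 0.5          # alpha: inverse mean dispersal (cells)

habitat = rng.random((size, size)) < 0.4  # suitable-habitat map
occupied = np.zeros_like(habitat)
occupied[-5:, :] = habitat[-5:, :]        # start at the southern range margin

yy, xx = np.mgrid[0:size, 0:size]
for _ in range(years):
    risk = np.zeros((size, size))
    for (i, j) in np.argwhere(occupied):
        dist = np.hypot(yy - i, xx - j)
        risk += productivity * np.exp(-alpha * dist)   # dispersal kernel
    occupied |= habitat & (rng.random((size, size)) < 1 - np.exp(-risk))

print("occupied fraction of habitat:", round(occupied.sum() / habitat.sum(), 2))
```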


    Measuring dispersal and detecting departures from a random walk model in a grasshopper hybrid zone

    ECOLOGICAL ENTOMOLOGY, Issue 2 2003
    R. I. Bailey
    Abstract. 1. The grasshopper species Chorthippus brunneus and C. jacobsi form a complex mosaic hybrid zone in northern Spain. Two mark–release–recapture studies were carried out near the centre of the zone in order to make direct estimates of lifetime dispersal. 2. A model framework based on a simple random walk in homogeneous habitat was extended to include the estimation of philopatry and flying propensity. Each model was compared with the real data, correcting for spatial and temporal biases in the data sets. 3. All four data sets (males and females at each site) deviated significantly from a random walk. Three of the data sets showed strong philopatry and three had a long dispersal tail, indicating a low propensity to move further than predicted by the random walk model. 4. Neighbourhood size estimates were 76 and 227 for the two sites. These estimates may underestimate effective population size, which could be increased by the long tail to the dispersal function. The random walk model overestimates lifetime dispersal and hence the minimum spatial scale of adaptation. 5. Best estimates of lifetime dispersal distance of 7–33 m per generation were considerably lower than a previous indirect estimate of 1344 m per generation. This discrepancy could be influenced by prezygotic isolation, an inherent by-product of mosaic hybrid zone structure. [source]


    The seasonal temperature dependency of photosynthesis and respiration in two deciduous forests

    GLOBAL CHANGE BIOLOGY, Issue 6 2004
    Andrew J. Jarvis
    Abstract Novel nonstationary and nonlinear dynamic time series analysis tools are applied to multiyear eddy covariance CO2 flux and micrometeorological data from the Harvard Forest and University of Michigan Biological Station field study sites. Firstly, the utility of these tools for partitioning the gross photosynthesis and bulk respiration signals within these series is demonstrated when employed within a simple model framework. This same framework offers a promising new method for gap filling missing CO2 flux data. Analysing the dominant seasonal components extracted from the CO2 flux data using these tools, models are inferred for daily gross photosynthesis and bulk respiration. Despite their simplicity, these models fit the data well and yet are characterized by well-defined parameter estimates when the models are optimized against calibration data. Predictive validation of the models also demonstrates faithful forecasts of annual net cumulative CO2 fluxes for these sites. [source]


    Modelling canopy CO2 fluxes: are 'big-leaf' simplifications justified?

    GLOBAL ECOLOGY, Issue 6 2001
    A. D. Friend
    Abstract 1. The 'big-leaf' approach to calculating the carbon balance of plant canopies is evaluated for inclusion in the ETEMA model framework. This approach assumes that canopy carbon fluxes have the same relative responses to the environment as any single leaf, and that the scaling from leaf to canopy is therefore linear. 2. A series of model simulations was performed with two models of leaf photosynthesis, three distributions of canopy nitrogen, and two levels of canopy radiation detail. Leaf- and canopy-level responses to light and nitrogen, both as instantaneous rates and daily integrals, are presented. 3. Observed leaf nitrogen contents of unshaded leaves are over 40% lower than the big-leaf approach requires. Scaling from these leaves to the canopy using the big-leaf approach may underestimate canopy photosynthesis by ~20%. A leaf photosynthesis model that treats within-leaf light extinction displays characteristics that contradict the big-leaf theory. Observed distributions of canopy nitrogen are closer to those required to optimize this model than the homogeneous model used in the big-leaf approach. 4. It is theoretically consistent to use the big-leaf approach with the homogeneous photosynthesis model to estimate canopy carbon fluxes if canopy nitrogen and leaf area are known and if the distribution of nitrogen is assumed optimal. However, real nitrogen profiles are not optimal for this photosynthesis model, and caution is necessary in using the big-leaf approach to scale satellite estimates of leaf physiology to canopies. Accurate prediction of canopy carbon fluxes requires canopy nitrogen, leaf area, declining nitrogen with canopy depth, the heterogeneous model of leaf photosynthesis and the separation of sunlit and shaded leaves. The exact nitrogen profile is not critical, but realistic distributions can be predicted using a simple model of canopy nitrogen allocation. [source]
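
    The underestimation argument can be reproduced numerically. The sketch below, with hypothetical parameter values, compares a multilayer integration of a saturating leaf light response under Beer's-law extinction against linear big-leaf scaling:

```python
# Sketch: multilayer versus big-leaf canopy photosynthesis under Beer's-law
# light extinction and a saturating leaf light response. All parameter
# values are hypothetical.
import numpy as np

LAI, k, I0 = 4.0, 0.5, 1500.0      # leaf area index, extinction coeff., top PAR
amax, alpha = 20.0, 0.05           # light-saturated rate, quantum efficiency

def leaf_photosynthesis(I):
    """Rectangular-hyperbola (saturating) light response."""
    return amax * alpha * I / (amax + alpha * I)

# Multilayer: integrate the leaf response over cumulative leaf area L.
L = np.linspace(0.0, LAI, 400)
I_L = I0 * np.exp(-k * L)                              # Beer's law
canopy_multilayer = leaf_photosynthesis(I_L).mean() * LAI

# Big-leaf: scale the top-leaf rate linearly by the absorbed-light profile.
canopy_bigleaf = leaf_photosynthesis(I0) * (1.0 - np.exp(-k * LAI)) / k

print(f"multilayer: {canopy_multilayer:.1f}, big-leaf: {canopy_bigleaf:.1f} "
      f"({100 * (1 - canopy_bigleaf / canopy_multilayer):.0f}% underestimate)")
```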


    Impact of Simulation Model Solver Performance on Ground Water Management Problems

    GROUND WATER, Issue 5 2008
    David P. Ahlfeld
    Ground water management models require the repeated solution of a simulation model to identify an optimal solution to the management problem. Limited precision in simulation model calculations can cause optimization algorithms to produce erroneous solutions. Experiments are conducted on a transient field application with a streamflow depletion control management formulation solved with a response matrix approach. The experiment consists of solving the management model with different levels of simulation model solution precision and comparing the differences in optimal solutions obtained. The precision of simulation model solutions is controlled by choice of solver and convergence parameter and is monitored by observing reported budget discrepancy. The difference in management model solutions results from errors in computation of response coefficients. Error in the largest response coefficients is found to have the most significant impact on the optimal solution. Methods for diagnosing the adequacy of precision when simulation models are used in a management model framework are proposed. [source]
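
    The mechanism, response-coefficient error propagating into the optimum, can be illustrated with a toy management model; the sketch below uses invented numbers and scipy's linprog rather than the authors' groundwater codes:

```python
# Sketch of how imprecise response coefficients can change a management
# optimum: maximize pumping from two wells subject to a streamflow-depletion
# limit, with depletion responses r computed at two precisions.
# All numbers are hypothetical.
import numpy as np
from scipy.optimize import linprog

def solve(r):
    # maximize q1 + q2  s.t.  r . q <= depletion limit, 0 <= q <= 10
    res = linprog(c=[-1, -1], A_ub=[r], b_ub=[5.0], bounds=[(0, 10)] * 2)
    return res.x

r_tight = np.array([0.50, 0.55])                      # tight solver tolerance
r_loose = r_tight + np.array([0.04, -0.04])           # loose-tolerance error

print("tight-tolerance optimum:", solve(r_tight))     # pumps well 1
print("loose-tolerance optimum:", solve(r_loose))     # flips to well 2
```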


    Integrated model framework for the evaluation of an SOFC/GT system as a centralized power source

    INTERNATIONAL JOURNAL OF ENERGY RESEARCH, Issue 1 2004
    Michihisa Koyama
    Abstract New power generation technologies are expected to reduce various environmental impacts of providing electricity to urban regions for some investment cost. Determining which power generation technologies are most suitable for meeting the demand of a particular region requires analysis of tradeoffs between costs and environmental impacts. Models simulating different power generation technologies can help quantify these tradeoffs. An Internet-based modelling infrastructure called DOME (distributed object-based modelling environment) provides a flexible mechanism to create integrated models from independent simulation models for different power generation technologies. As new technologies appear, corresponding simulation models can readily be added to the integrated model. DOME was used to combine a simulation model for hybrid SOFC (solid oxide fuel cell) and gas turbine system with a power generation capacity and dispatch optimization model. The integrated models were used to evaluate the effectiveness of the system as a centralized power source for meeting the power demand in Japan. Evaluation results indicate that a hybrid system using micro-tube SOFC may reduce CO2 emissions from power generation in Japan by about 50%. Copyright © 2004 John Wiley & Sons, Ltd. [source]


    Modelling the impact of energy taxation

    INTERNATIONAL JOURNAL OF ENERGY RESEARCH, Issue 6 2002
    Jörgen Sjödin
    Abstract Energy taxation in Sweden is complicated and strongly guides and governs district energy production. Consequently, there is a need for methods for accurate calculation and analysis of the effects that different energy tax schemes may have on district energy utilities. Here, a practicable method to analyse the influence of such governmental policy measures is demonstrated. The Swedish Government has for some years now been working on a reform of energy taxation, and during this process, several interest groups have expressed their own proposals for improving and developing the system of energy taxation. Together with the present system of taxation, four new alternatives, including the proposed directive of the European Commission, are outlined in the paper. In a case study, an analysis is made of how the different tax alternatives may influence the choice of profitable investments and use of energy carriers in a medium-sized district-heating utility. The calculations are made with a linear-programming model framework. By calculating suitable types and sizes of new investments, if any, and the operation of existing and potential plants, total energy costs are minimized. Results of the analysis include the most profitable investments, which fuel should be used, roughly when during a year plants should be in operation, and at what output. In most scenarios, the most profitable measure is to invest in a waste incineration plant. However, a crucial assumption is, with reference to the new Swedish waste disposal act, a significant income from incinerating refuse. Without this income, different tax schemes result in different technical solutions being most profitable. An investment in cogeneration seems possible in only one scenario. It is also found that particular features of some alternatives seem to oppose both main governmental policy goals and intentions of the district heating company. Copyright © 2002 John Wiley & Sons, Ltd. [source]
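
    A toy version of such a linear-programming framework, with invented plants, costs and tax levels, shows how a change in tax scheme can flip the cost-minimizing dispatch:

```python
# Toy version of the linear-programming framework: dispatch two heat plants
# to meet demand at minimum cost under two hypothetical tax schemes.
# All plants, costs and tax levels are invented.
from scipy.optimize import linprog

demand = 100.0                          # MWh of heat
fuel_cost = [26.0, 20.0]                # biomass plant, fossil plant (cost/MWh)
capacity = [(0.0, 60.0), (0.0, 100.0)]  # output bounds per plant

for scheme, carbon_tax in [("current", [0.0, 2.0]), ("proposed", [0.0, 10.0])]:
    cost = [f + t for f, t in zip(fuel_cost, carbon_tax)]
    res = linprog(c=cost, A_eq=[[1.0, 1.0]], b_eq=[demand], bounds=capacity)
    print(f"{scheme}: dispatch (biomass, fossil) = {res.x.round(1)}, "
          f"total cost = {res.fun:.0f}")
```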


    Impact of interviewing by proxy in travel survey conducted by telephone

    JOURNAL OF ADVANCED TRANSPORTATION, Issue 1 2002
    Daniel A. Badoe
    Telephone-interview surveys are a very efficient way of conducting large-scale travel surveys. Recent advancements in computer technology have made it possible to improve upon the quality of data collected by telephone surveys through computerization of the entire sample-control process, and through the direct recording of the collected data into a computer. Notwithstanding these technological advancements, potential sources of bias still exist, including the reliance on an adult member of the household to report the travel information of other household members. Travel data collected in a recent telephone interview survey in the Toronto region is used to examine this issue. The statistical tool used in the research was the Analysis of Variance (ANOVA) technique as implemented within the general linear model framework in SAS. The study results indicate that reliance on informants to provide travel information for non-informant members of their respective households led to the underreporting of some categories of trips. These underreported trip categories were primarily segments of home-based discretionary trips, and non-home-based trips. Since these latter two categories of trips are made primarily outside the morning peak period, estimated factors to adjust for their underreporting were time-period sensitive. Further, the number of vehicles available to the household, gender, and driver license status were also found to be strongly associated with the underreporting of trips and thus were important considerations in the determination of adjustment factors. Work and school trips were found not to be underreported, a not surprising result given the almost daily repetitiveness of trips made for these purposes and hence the ability of the informant to provide relatively more precise information on them. [source]
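
    The original analysis used the general linear model framework in SAS; a rough Python analogue, with simulated data and hypothetical variable names, might look like this:

```python
# Rough Python analogue of the ANOVA-within-GLM analysis described (the
# original used SAS). Data and variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)
n = 1000
df = pd.DataFrame({
    "proxy": rng.integers(0, 2, n),        # 1 = trips reported by an informant
    "gender": rng.integers(0, 2, n),
    "licensed": rng.integers(0, 2, n),
    "vehicles": rng.integers(0, 3, n),
})
# Proxy reporting depresses discretionary-trip counts in this fake data.
df["trips"] = (2.0 - 0.5 * df["proxy"] + 0.2 * df["vehicles"]
               + rng.normal(0, 1, n))

model = smf.ols("trips ~ C(proxy) + C(gender) + C(licensed) + vehicles", df).fit()
print(anova_lm(model, typ=2))              # Type II ANOVA table
```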


    Hybrid Framework for Managing Uncertainty in Life Cycle Inventories

    JOURNAL OF INDUSTRIAL ECOLOGY, Issue 6 2009
    Eric D. Williams
    Summary Life cycle assessment (LCA) is increasingly being used to inform decisions related to environmental technologies and policies, such as carbon footprinting and labeling, national emission inventories, and appliance standards. However, LCA studies of the same product or service often yield very different results, affecting the perception of LCA as a reliable decision tool. This does not imply that LCA is intrinsically unreliable; we argue instead that future development of LCA requires that much more attention be paid to assessing and managing uncertainties. In this article we review past efforts to manage uncertainty and propose a hybrid approach combining process and economic input–output (I-O) approaches to uncertainty analysis of life cycle inventories (LCI). Different categories of uncertainty are sometimes not tractable to analysis within a given model framework but can be estimated from another perspective. For instance, cutoff or truncation error induced by some processes not being included in a bottom-up process model can be estimated via a top-down approach such as the economic I-O model. A categorization of uncertainty types is presented (data, cutoff, aggregation, temporal, geographic) with a quantitative discussion of methods for evaluation, particularly for assessing temporal uncertainty. A long-term vision for LCI is proposed in which hybrid methods are employed to quantitatively estimate different uncertainty types, which are then reduced through an iterative refinement of the hybrid LCI method. [source]
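
    The core hybrid idea, estimating bottom-up truncation error from a top-down I-O total, reduces to simple arithmetic; the sketch below uses invented numbers:

```python
# Numeric sketch of the hybrid idea: estimate the cutoff (truncation) error
# of a bottom-up process LCI from a top-down economic I-O estimate.
# All numbers are hypothetical.
process_stages_kgCO2 = {"fab": 120.0, "assembly": 35.0, "transport": 10.0}
process_total = sum(process_stages_kgCO2.values())        # bottom-up, truncated

product_price_usd = 400.0
sector_intensity_kgCO2_per_usd = 0.55                     # from an I-O model
io_total = product_price_usd * sector_intensity_kgCO2_per_usd  # top-down

cutoff_estimate = io_total - process_total
print(f"process LCI: {process_total:.0f} kg, I-O total: {io_total:.0f} kg, "
      f"estimated truncation: {cutoff_estimate:.0f} kg "
      f"({100 * cutoff_estimate / io_total:.0f}% of I-O total)")
```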


    Kinetic analysis of a reactive model enediyne

    JOURNAL OF PHYSICAL ORGANIC CHEMISTRY, Issue 9 2004
    M. F. Semmelhack
    Abstract A model framework related to the calicheamicin family of enediyne toxins was evaluated in its reactivity toward cycloaromatization through the arene 1,4-diyl intermediates and hydrogen atom abstractions. The keto version 9 is relatively unreactive, with t1/2 = 1.5 h at 60°C. The product from hydride reduction of the ketone, alcohol 13b, is much more reactive, with t1/2 = 50 min at 0°C. In both cases, the rate of rearrangement is independent of the hydrogen atom donor, consistent with a rate-determining first step (cyclization). Copyright © 2004 John Wiley & Sons, Ltd. [source]
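
    Because the rearrangement is first order with a rate-determining cyclization step, the reported half-lives convert directly to rate constants via k = ln(2)/t1/2:

```python
# First-order rate constants from the reported half-lives: k = ln(2) / t_half.
import math

k_keto = math.log(2) / (1.5 * 3600)     # compound 9, t_half = 1.5 h at 60 °C
k_alcohol = math.log(2) / (50 * 60)     # alcohol 13b, t_half = 50 min at 0 °C

print(f"k(9, 60 °C)  = {k_keto:.2e} s^-1")
print(f"k(13b, 0 °C) = {k_alcohol:.2e} s^-1")
```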


    Capital Allocation for Insurance Companies – What Good Is It?

    JOURNAL OF RISK AND INSURANCE, Issue 2 2007
    Helmut Gründl
    In their 2001 Journal of Risk and Insurance article, Stewart C. Myers and James A. Read Jr. propose to use a specific capital allocation method for pricing insurance contracts. We show that in their model framework no capital allocation to lines of business is needed for pricing insurance contracts. In the case of having to cover frictional costs, the suggested allocation method may even lead to inappropriate insurance prices. Besides the purpose of pricing insurance contracts, capital allocation methods proposed in the literature and used in insurance practice are typically intended to help derive capital budgeting decisions in insurance companies, such as expanding or contracting lines of business. We also show that net present value analyses provide better capital budgeting decisions than capital allocation in general. [source]


    Wavelet-based functional mixed models

    JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 2 2006
    Jeffrey S. Morris
    Summary. Increasingly, scientific studies yield functional data, in which the ideal units of observation are curves and the observed data consist of sets of curves that are sampled on a fine grid. We present new methodology that generalizes the linear mixed model to the functional mixed model framework, with model fitting done by using a Bayesian wavelet-based approach. This method is flexible, allowing functions of arbitrary form and the full range of fixed effects structures and between-curve covariance structures that are available in the mixed model framework. It yields nonparametric estimates of the fixed and random-effects functions as well as the various between-curve and within-curve covariance matrices. The functional fixed effects are adaptively regularized as a result of the non-linear shrinkage prior that is imposed on the fixed effects' wavelet coefficients, and the random-effect functions experience a form of adaptive regularization because of the separately estimated variance components for each wavelet coefficient. Because we have posterior samples for all model quantities, we can perform pointwise or joint Bayesian inference or prediction on the quantities of the model. The adaptiveness of the method makes it especially appropriate for modelling irregular functional data that are characterized by numerous local features like peaks. [source]
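
    A greatly simplified sketch of the wavelet-domain workflow (the paper's actual model is Bayesian, with adaptive shrinkage priors and full covariance estimation) projects curves into the wavelet domain, shrinks the group-difference coefficients, and reconstructs the effect curve; it assumes the PyWavelets package is available:

```python
# Simplified sketch of the wavelet-domain idea: estimate a group-difference
# (fixed-effect) curve coefficient-by-coefficient with soft-threshold
# shrinkage standing in for the paper's adaptive Bayesian shrinkage.
import numpy as np
import pywt

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 256)
base = np.exp(-((t - 0.3) / 0.05) ** 2)            # a sharp local peak
curves_a = base + rng.normal(0, 0.3, (20, t.size))
curves_b = 1.5 * base + rng.normal(0, 0.3, (20, t.size))

def dwt(x):
    return pywt.wavedec(x, "db4", level=5)

coeffs_a = dwt(curves_a.mean(axis=0))
coeffs_b = dwt(curves_b.mean(axis=0))
diff = [cb - ca for ca, cb in zip(coeffs_a, coeffs_b)]
diff = [pywt.threshold(c, value=0.1, mode="soft") for c in diff]  # shrinkage

effect = pywt.waverec(diff, "db4")                 # estimated effect curve
print("peak of estimated effect curve:", effect.max().round(2))
```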


    A spatiotemporal model for Mexico City ozone levels

    JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 2 2004
    Gabriel Huerta
    Summary. We consider hourly readings of concentrations of ozone over Mexico City and propose a model for spatial as well as temporal interpolation and prediction. The model is based on a time-varying regression of the observed readings on air temperature. Such a regression requires interpolated values of temperature at locations and times where readings are not available. These are obtained from a time-varying spatiotemporal model that is coupled to the model for the ozone readings. Two location-dependent harmonic components are added to account for the main periodicities that ozone presents during a given day and that are not explained through the covariate. The model incorporates spatial covariance structure for the observations and the parameters that define the harmonic components. Using the dynamic linear model framework, we show how to compute smoothed means and predictive values for ozone. We illustrate the methodology on data from September 1997. [source]
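
    The single-site time-series core of such a model, a dynamic level, a temperature regression, and harmonic daily components, can be sketched with statsmodels' UnobservedComponents (the paper additionally couples sites through spatial covariance structure; the data below are simulated):

```python
# Single-site sketch of the dynamic-linear-model core: local level +
# temperature regression + harmonic daily seasonal. Simulated data;
# assumes statsmodels' UnobservedComponents is available.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
hours = np.arange(24 * 30)                          # one month of hourly data
temp = 20 + 5 * np.sin(2 * np.pi * hours / 24 - 2) + rng.normal(0, 1, hours.size)
ozone = (30 + 1.5 * temp + 10 * np.sin(2 * np.pi * hours / 24)
         + rng.normal(0, 5, hours.size))

model = sm.tsa.UnobservedComponents(
    ozone,
    level="local level",                            # time-varying baseline
    exog=temp,                                      # temperature regression
    freq_seasonal=[{"period": 24, "harmonics": 2}], # daily harmonics
)
res = model.fit(disp=False)
print(res.summary().tables[1])
```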


    Effects of loading rate on viscoplastic properties of polymer geosynthetics and its constitutive modeling

    POLYMER ENGINEERING & SCIENCE, Issue 3 2010
    Fang-Le Peng
    On the basis of the special tensile test results under various loading histories, the rate-dependent behaviors of three polymer geosynthetics due to their viscous properties have been investigated. All the investigated polymer geosynthetics show significant loading rate effects, creep deformation, and stress relaxation. Except for the polyester geogrid showing the combined viscosity, all the investigated polymer geosynthetics exhibit the isotach viscosity. An elasto-viscoplastic constitutive model described in a nonlinear three-component model framework is developed to simulate the rate-dependent behaviors of polymer geosynthetics. The developed constitutive model is verified by comparing its simulated results with the experimental data of polymer geosynthetics presented in this study and those available from the literature. The comparison indicates that the developed model can reasonably interpret the rate-dependent behaviors of polymer geosynthetics under arbitrary loading histories, including the step-changed strain rate loading, creep, and stress relaxation applied during otherwise monotonic loading (ML). POLYM. ENG. SCI., 2010. © 2009 Society of Plastics Engineers [source]


    Unobserved Heterogeneity in Models of Competing Mortgage Termination Risks

    REAL ESTATE ECONOMICS, Issue 2 2006
    John M. Clapp
    This article extends unobserved heterogeneity to the multinomial logit (MNL) model framework in the context of mortgages terminated by refinance, move or default. It tests for the importance of unobserved heterogeneity when borrower characteristics such as income, age and credit score are included to capture lender-observed heterogeneity. It does this by comparing the proportional hazard model to MNL with and without mass-point estimates of unobserved heterogeneous groups of borrowers. The mass-point mixed hazard (MMH) model yields larger and more significant coefficients for several important variables in the move model, whereas the MNL model without unobserved heterogeneity performs well with the refinance estimates. The MMH clearly dominates the alternative models in sample and out of sample. However, it is sometimes difficult to obtain convergence for the models estimated jointly with mass points. [source]


    VARIATIONAL BAYESIAN ANALYSIS FOR HIDDEN MARKOV MODELS

    AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 2 2009
    C. A. McGrory
    Summary The variational approach to Bayesian inference enables simultaneous estimation of model parameters and model complexity. An interesting feature of this approach is that it also leads to an automatic choice of model complexity. Empirical results from the analysis of hidden Markov models with Gaussian observation densities illustrate this. If the variational algorithm is initialized with a large number of hidden states, redundant states are eliminated as the method converges to a solution, thereby leading to a selection of the number of hidden states. In addition, through the use of a variational approximation, the deviance information criterion for Bayesian model selection can be extended to the hidden Markov model framework. Calculation of the deviance information criterion provides a further tool for model selection, which can be used in conjunction with the variational approach. [source]
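
    The state-elimination behaviour can be demonstrated by fitting a variational Gaussian HMM initialized with more components than the data support and counting the states the posterior actually uses; the sketch below assumes the hmmlearn package (version 0.3 or later, which provides hmmlearn.vhmm):

```python
# Sketch of variational state pruning: two true regimes, eight initial
# components; redundant states are eliminated as the method converges.
# Assumes hmmlearn >= 0.3 (hmmlearn.vhmm).
import numpy as np
from hmmlearn.vhmm import VariationalGaussianHMM

rng = np.random.default_rng(5)
states = (rng.random(1000) < 0.5).astype(int)       # two true regimes
obs = np.where(states, rng.normal(5, 1, 1000), rng.normal(0, 1, 1000))[:, None]

model = VariationalGaussianHMM(n_components=8, n_iter=200, random_state=0)
model.fit(obs)
used = np.unique(model.predict(obs))
print("states retained by the variational fit:", used.size)  # expect ~2
```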


    Semiparametric Models of Time-Dependent Predictive Values of Prognostic Biomarkers

    BIOMETRICS, Issue 1 2010
    Yingye Zheng
    Summary Rigorous statistical evaluation of the predictive values of novel biomarkers is critical before they are applied in routine standard care. It is important to identify factors that influence the performance of a biomarker in order to determine the optimal conditions for test performance. We propose a covariate-specific time-dependent positive predictive values curve to quantify the predictive accuracy of a prognostic marker measured on a continuous scale and with censored failure time outcome. The covariate effect is accommodated with a semiparametric regression model framework. In particular, we adopt a smoothed survival time regression technique (Dabrowska, 1997, The Annals of Statistics 25, 1510–1540) to account for the situation where risk for the disease occurrence and progression is likely to change over time. In addition, we provide asymptotic distribution theory and resampling-based procedures for making statistical inference on the covariate-specific positive predictive values. We illustrate our approach with numerical studies and a dataset from a prostate cancer study. [source]


    Variable Selection for Semiparametric Mixed Models in Longitudinal Studies

    BIOMETRICS, Issue 1 2010
    Xiao Ni
    Summary We propose a double-penalized likelihood approach for simultaneous model selection and estimation in semiparametric mixed models for longitudinal data. Two types of penalties are jointly imposed on the ordinary log-likelihood: the roughness penalty on the nonparametric baseline function and a nonconcave shrinkage penalty on linear coefficients to achieve model sparsity. Compared to existing estimation equation based approaches, our procedure provides valid inference for data that are missing at random, and will be more efficient if the specified model is correct. Another advantage of the new procedure is its easy computation for both regression components and variance parameters. We show that the double-penalized problem can be conveniently reformulated into a linear mixed model framework, so that existing software can be directly used to implement our method. For the purpose of model inference, we derive both frequentist and Bayesian variance estimation for estimated parametric and nonparametric components. Simulation is used to evaluate and compare the performance of our method to the existing ones. We then apply the new method to a real data set from a lactation study. [source]


    Nested Markov Compliance Class Model in the Presence of Time-Varying Noncompliance

    BIOMETRICS, Issue 2 2009
    Julia Y. Lin
    Summary We consider a Markov structure for partially unobserved time-varying compliance classes in the Imbens–Rubin (1997, The Annals of Statistics 25, 305–327) compliance model framework. The context is a longitudinal randomized intervention study where subjects are randomized once at baseline, outcomes and patient adherence are measured at multiple follow-ups, and patient adherence to their randomized treatment could vary over time. We propose a nested latent compliance class model where we use time-invariant subject-specific compliance principal strata to summarize longitudinal trends of subject-specific time-varying compliance patterns. The principal strata are formed using Markov models that relate current compliance behavior to compliance history. Treatment effects are estimated as intent-to-treat effects within the compliance principal strata. [source]


    A Two-Part Joint Model for the Analysis of Survival and Longitudinal Binary Data with Excess Zeros

    BIOMETRICS, Issue 2 2008
    Dimitris Rizopoulos
    Summary Many longitudinal studies generate both the time to some event of interest and repeated measures data. This article is motivated by a study on patients with a renal allograft, in which interest lies in the association between longitudinal proteinuria (a dichotomous variable) measurements and the time to renal graft failure. An interesting feature of the sample at hand is that nearly half of the patients were never tested positive for proteinuria (≥1 g/day) during follow-up, which introduces a degenerate part in the random-effects density for the longitudinal process. In this article we propose a two-part shared parameter model framework that effectively takes this feature into account, and we investigate sensitivity to the various dependence structures used to describe the association between the longitudinal measurements of proteinuria and the time to renal graft failure. [source]


    An Empirical Bayes Method for Estimating Epistatic Effects of Quantitative Trait Loci

    BIOMETRICS, Issue 2 2007
    Shizhong Xu
    Summary The genetic variance of a quantitative trait is often controlled by the segregation of multiple interacting loci. Linear model regression analysis is usually applied to estimating and testing effects of these quantitative trait loci (QTL). Including all the main effects and the effects of interaction (epistatic effects), the dimension of the linear model can be extremely high. Variable selection via stepwise regression or stochastic search variable selection (SSVS) is the common procedure for epistatic effect QTL analysis. These methods are computationally intensive, yet they may not be optimal. The LASSO (least absolute shrinkage and selection operator) method is computationally more efficient than the above methods. As a result, it has been widely used in regression analysis for large models. However, LASSO has never been applied to genetic mapping for epistatic QTL, where the number of model effects is typically many times larger than the sample size. In this study, we developed an empirical Bayes method (E-BAYES) to map epistatic QTL under the mixed model framework. We also tested the feasibility of using LASSO to estimate epistatic effects, examined the fully Bayesian SSVS, and reevaluated the penalized likelihood (PENAL) methods in mapping epistatic QTL. Simulation studies showed that all the above methods performed satisfactorily well. However, E-BAYES appears to outperform all other methods in terms of minimizing the mean-squared error (MSE) with relatively short computing time. Application of the new method to real data was demonstrated using a barley dataset. [source]
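
    A minimal sketch of the LASSO arm of this comparison, applied to simulated genotypes with main plus pairwise-interaction (epistatic) effects in a p >> n design:

```python
# Sketch: LASSO over main + pairwise-interaction (epistatic) effects with
# far more effects than individuals. Genotypes and effect sizes are simulated.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(6)
n, m = 150, 40                                   # individuals, markers
X = rng.integers(0, 2, (n, m)).astype(float)     # biallelic genotype codes

# True model: two main effects and one epistatic (interaction) effect.
y = 2 * X[:, 3] - 1.5 * X[:, 17] + 2.5 * X[:, 3] * X[:, 17] + rng.normal(0, 1, n)

expand = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False)
Z = expand.fit_transform(X)                      # 40 main + 780 interaction terms
fit = Lasso(alpha=0.1).fit(Z, y)
names = expand.get_feature_names_out([f"m{j}" for j in range(m)])
print("selected effects:",
      [nm for nm, b in zip(names, fit.coef_) if abs(b) > 0.2])
```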


    Joint Analysis of Time-to-Event and Multiple Binary Indicators of Latent Classes

    BIOMETRICS, Issue 1 2004
    Klaus Larsen
    Summary. Multiple categorical variables are commonly used in medical and epidemiological research to measure specific aspects of human health and functioning. To analyze such data, models have been developed considering these categorical variables as imperfect indicators of an individual's "true" status of health or functioning. In this article, the latent class regression model is used to model the relationship between covariates, a latent class variable (the unobserved status of health or functioning), and the observed indicators (e.g., variables from a questionnaire). The Cox model is extended to encompass a latent class variable as predictor of time-to-event, while using information about latent class membership available from multiple categorical indicators. The expectation-maximization (EM) algorithm is employed to obtain maximum likelihood estimates, and standard errors are calculated based on the profile likelihood, treating the nonparametric baseline hazard as a nuisance parameter. A sampling-based method for model checking is proposed. It allows for graphical investigation of the assumption of proportional hazards across latent classes. It may also be used for checking other model assumptions, such as no additional effect of the observed indicators given latent class. The usefulness of the model framework and the proposed techniques is illustrated in an analysis of data from the Women's Health and Aging Study concerning the effect of severe mobility disability on time-to-death for elderly women. [source]


    PERCEPTIONS OF BENEFIT FRAUD STAFF IN THE UK: GIVING P.E.A.C.E. A CHANCE?

    PUBLIC ADMINISTRATION, Issue 2 2007
    This article reports a study of benefit fraud staff's and managers' perceptions of their own interviewing techniques and standards, and their views on a preferred model of interviewing. Interviewing fraud suspects forms an important task performed by Fraud Investigators (FIs) within the Department for Work and Pensions (DWP) in the UK. Given this significance, it is surprising that there has been little analysis of the skills used to do this task. Current training consists of a course centred on an interviewing framework called the PEACE model, which was originally developed for police use. The research outlined in this paper examined both FIs' and their managers' perceptions of, and attitudes towards, the model and their own practices. It was found that, while there was general support for the model, reservations were voiced over how effective PEACE may actually be in practice. These reservations centred on insufficient time to prepare for investigations along with a perceived inflexibility of the model's framework. In addition, it was highlighted that the absence of any national supervisory framework for investigative interviews should give the organization cause for concern in ensuring standards. [source]