Adjustment Factors

Selected Abstracts


Impact of inter-individual differences in drug metabolism and pharmacokinetics on safety evaluation

FUNDAMENTAL & CLINICAL PHARMACOLOGY, Issue 6 2004
J.L.C.M. Dorne
Abstract: Safety evaluation aims to assess the dose-response relationship to determine a dose/level of exposure for food contaminants below which no deleterious effect is measurable, that is, 'without appreciable health risk' when consumed daily over a lifetime. These safe levels, such as the acceptable daily intake (ADI), have been derived from animal studies using surrogates for the threshold such as the no-observed-adverse-effect level (NOAEL). The extrapolation from the NOAEL to the human safe intake uses a 100-fold uncertainty factor, defined as the product of two 10-fold factors allowing for human variability and interspecies differences. The 10-fold factor for human variability has been further subdivided into two factors of 10^0.5 (3.16) to cover toxicokinetics and toxicodynamics, and this subdivision allows for the replacement of an uncertainty factor with a chemical-specific adjustment factor (CSAF) when compound-specific data are available. Recently, an analysis of human variability in pharmacokinetics for phase I metabolism (CYP1A2, CYP2A6, CYP2C9, CYP2C19, CYP2D6, CYP2E1, CYP3A4, hydrolysis, alcohol dehydrogenase), phase II metabolism (N-acetyltransferase, glucuronidation, glycine conjugation, sulphation) and renal excretion was used to derive pathway-related uncertainty factors in subgroups of the human population (healthy adults, effects of ethnicity and age). Overall, the pathway-related uncertainty factors (99th centile) were above the toxicokinetic uncertainty factor for healthy adults exposed to xenobiotics handled by polymorphic metabolic pathways (assuming the parent compound was the proximate toxicant), such as CYP2D6 poor metabolizers (26), CYP2C19 poor metabolizers (52) and NAT-2 slow acetylators (5.2). Neonates were the most susceptible subgroup of the population for pathways with available data [CYP1A2 and glucuronidation (12), CYP3A4 (14), glycine conjugation (28)]. Data for polymorphic pathways were not available in neonates, but uncertainty factors of up to 45 and 9 would allow for the variability observed in children for CYP2D6 and CYP2C19 metabolism, respectively. This review presents an overview of the history of uncertainty factors, the main conclusions drawn from the analysis of inter-individual differences in metabolism and pharmacokinetics, the development of pathway-related uncertainty factors and their use in chemical risk assessment.
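The subdivision described in the abstract can be written compactly. In the default scheme, the 100-fold factor decomposes as

```latex
\[
\mathrm{UF}_{\mathrm{total}}
  = \underbrace{10}_{\text{interspecies}} \times \underbrace{10}_{\text{human variability}} = 100,
\qquad
\underbrace{10}_{\text{human variability}}
  = \underbrace{10^{0.5}}_{\text{toxicokinetics}} \times \underbrace{10^{0.5}}_{\text{toxicodynamics}}
  \approx 3.16 \times 3.16 .
\]
```

On this scale, the pathway-related kinetic factors quoted above (26 for CYP2D6 poor metabolizers, 52 for CYP2C19 poor metabolizers) exceed the 3.16 default by roughly an order of magnitude, which is what motivates replacing the default with a chemical-specific adjustment factor where data permit.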


Coding Response to a Case-Mix Measurement System Based on Multiple Diagnoses

HEALTH SERVICES RESEARCH, Issue 4p1 2004
Colin Preyra
Objective. To examine the hospital coding response to a payment model using a case-mix measurement system based on multiple diagnoses, and the resulting impact on a hospital cost model. Data Sources. Financial, clinical, and supplementary data for all Ontario short-stay hospitals from 1997 to 2002. Study Design. Disaggregated trends in hospital case-mix growth are examined for five years following the adoption of an inpatient classification system making extensive use of combinations of secondary diagnoses. Hospital case mix is decomposed into base and complexity components. The longitudinal effects of coding variation on a standard hospital payment model are examined in terms of payment accuracy and impact on adjustment factors. Principal Findings. Introduction of the refined case-mix system gave hospitals incentives to increase reporting of secondary diagnoses and resulted in growth in the highest-complexity cases that was not matched by increased resource use over time. Despite a pronounced coding response on the part of hospitals, the increase in measured complexity and case mix neither reduced the unexplained variation in hospital unit cost nor reduced the reliance on the teaching adjustment factor, a potential proxy for case mix. The main implication was changes in the size and distribution of predicted hospital operating costs. Conclusions. Jurisdictions introducing extensive refinements to standard diagnostic related group (DRG)-type payment systems should consider the effects of induced changes to hospital coding practices. Assessing model performance should include analysis of the robustness of classification systems to hospital-level variation in coding practices. Unanticipated coding effects imply that case-mix models hypothesized to perform well ex ante may not meet expectations ex post.
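A minimal sketch of the base/complexity decomposition described in the study design, assuming each case carries a base-group cost weight plus a complexity top-up driven by secondary diagnoses; the hospitals, field names and weights below are illustrative, not the Ontario grouper's:

```python
import pandas as pd

# Hypothetical per-case cost weights: base_weight is what the case would
# receive from its base group alone; full_weight adds the complexity
# layers triggered by reported secondary diagnoses.
cases = pd.DataFrame({
    "hospital":    ["A", "A", "B", "B"],
    "base_weight": [1.00, 0.80, 1.20, 0.90],
    "full_weight": [1.10, 0.80, 1.90, 1.35],
})

by_hosp = cases.groupby("hospital").mean(numeric_only=True)
by_hosp["complexity_component"] = by_hosp["full_weight"] - by_hosp["base_weight"]
print(by_hosp)  # growth in the complexity component with flat costs flags a coding response
```

Tracking the complexity component over time, as the paper does, separates genuine case-mix change from a pure coding response.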


Segmentation and Estimation for SNP Microarrays: A Bayesian Multiple Change-Point Approach

BIOMETRICS, Issue 3 2010
Yu Chuan Tai
Summary: High-density single-nucleotide polymorphism (SNP) microarrays provide a useful tool for the detection of copy number variants (CNVs). The analysis of such large amounts of data is complicated, especially with regard to determining where copy numbers change and what their corresponding values are. In this article, we propose a Bayesian multiple change-point model (BMCP) for segmentation and estimation of SNP microarray data. Segmentation separates a chromosome into regions of equal copy number difference between the sample of interest and some reference, and involves detecting the locations where the copy number difference changes. Estimation concerns determining the true copy number for each segment. Our approach not only gives posterior estimates for the parameters of interest, namely the locations of copy number difference changes and the true copy numbers, but also provides useful confidence measures. In addition, our algorithm can segment multiple samples simultaneously, and infer both common and rare CNVs across individuals. Finally, for studies of CNVs in tumors, we incorporate an adjustment factor for signal attenuation due to tumor heterogeneity or normal contamination that can improve copy number estimates.
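For intuition only, here is a toy posterior over a single change-point location in simulated log-ratio data; the BMCP itself handles multiple change points and multiple samples, and estimates copy numbers with confidence measures, none of which this sketch attempts:

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated probe log ratios with a copy-number difference change at probe 60.
y = np.concatenate([rng.normal(0.0, 0.2, 60), rng.normal(0.5, 0.2, 40)])

sigma2 = 0.2 ** 2
n = len(y)
log_post = np.full(n, -np.inf)
for k in range(5, n - 5):                  # candidate change-point locations
    left, right = y[:k], y[k:]
    # Segment means at their MLEs; with a flat prior over k this profile
    # log-likelihood is proportional to the log posterior.
    rss = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
    log_post[k] = -rss / (2 * sigma2)

post = np.exp(log_post - log_post.max())
post /= post.sum()                          # normalized posterior over locations
print("MAP change point:", post.argmax())   # should recover ~60
```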


Impact of interviewing by proxy in travel survey conducted by telephone

JOURNAL OF ADVANCED TRANSPORTATION, Issue 1 2002
Daniel A. Badoe
Telephone-interview surveys are a very efficient way of conducting large-scale travel surveys. Recent advances in computer technology have made it possible to improve the quality of data collected by telephone surveys through computerization of the entire sample-control process and through direct recording of the collected data into a computer. Notwithstanding these technological advances, potential sources of bias still exist, including the reliance on an adult member of the household to report the travel information of other household members. Travel data collected in a recent telephone-interview survey in the Toronto region are used to examine this issue. The statistical tool used in the research was the analysis of variance (ANOVA) technique as implemented within the general linear model framework in SAS. The study results indicate that reliance on informants to provide travel information for non-informant members of their respective households led to the underreporting of some categories of trips. These underreported trip categories were primarily segments of home-based discretionary trips and non-home-based trips. Since these latter two categories of trips are made primarily outside the morning peak period, the factors estimated to adjust for their underreporting were time-period sensitive. Further, the number of vehicles available to the household, gender, and driver's licence status were also found to be strongly associated with the underreporting of trips and thus were important considerations in the determination of adjustment factors. Work and school trips were found not to be underreported, a not surprising result given the almost daily repetitiveness of trips made for these purposes and hence the ability of the informant to provide relatively more precise information on them.
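A minimal sketch of how time-period-specific adjustment factors could be backed out by comparing trips informants report for themselves against trips reported by proxy; the strata and rates below are illustrative assumptions, not the Toronto survey's estimates:

```python
import pandas as pd

# Hypothetical person-day trip rates by reporting mode and time period.
rates = pd.DataFrame({
    "period":    ["am_peak", "am_peak", "offpeak", "offpeak"],
    "report":    ["self", "proxy", "self", "proxy"],
    "trip_rate": [0.92, 0.90, 1.40, 1.10],
})

wide = rates.pivot(index="period", columns="report", values="trip_rate")
# Factor to inflate proxy-reported trips toward self-reported levels:
# near 1.0 in the peak (work/school trips), larger off-peak.
wide["adjustment_factor"] = wide["self"] / wide["proxy"]
print(wide)
```

In practice the paper conditions such factors on household vehicles, gender and driver's licence status as well, via the ANOVA/GLM framework.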


Multiplicative random regression model for heterogeneous variance adjustment in genetic evaluation for milk yield in Simmental

JOURNAL OF ANIMAL BREEDING AND GENETICS, Issue 3 2008
M.H. Lidauer
Summary: A multiplicative random regression (M-RRM) test-day (TD) model was used to analyse daily milk yields from all available parities of German and Austrian Simmental dairy cattle. The method to account for heterogeneous variance (HV) was based on the multiplicative mixed-model approach of Meuwissen. The variance model for the heterogeneity parameters included a fixed region × year × month × parity effect and a random herd × test-month effect with a within-herd first-order autocorrelation between test-months. Accelerating the variance model solutions after each multiplicative model cycle enabled fast convergence of the adjustment factors and reduced total computing time significantly. Maximum likelihood estimation of within-strata residual variances was enhanced by including approximate information on the loss in degrees of freedom due to the estimation of location parameters, which improved heterogeneity estimates for very small herds. The multiplicative model was compared with a model that assumed homogeneous variance. Re-estimated genetic variances, based on Mendelian sampling deviations, were homogeneous for the M-RRM TD model but heterogeneous for the homogeneous random regression TD model. Accounting for HV had a large effect on cow ranking but a moderate effect on bull ranking.
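A toy illustration of the multiplicative idea, assuming each stratum s has a heterogeneity multiplier lambda_s with Var(y_s) = lambda_s * sigma^2, so records are standardized by sqrt(lambda_s); the iterative estimation of the multipliers within the mixed-model cycles, which is the substance of the Meuwissen approach, is omitted here:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2 = 4.0                                   # assumed base residual variance
lam = {"herd1": 0.6, "herd2": 1.8}             # assumed stratum multipliers
y = {s: rng.normal(30.0, np.sqrt(l * sigma2), 500) for s, l in lam.items()}

for s, obs in y.items():
    # Divide deviations by sqrt(lambda_s) to homogenize residual variance.
    adj = (obs - obs.mean()) / np.sqrt(lam[s]) + obs.mean()
    print(s, "raw var %.2f -> adjusted var %.2f" % (obs.var(), adj.var()))
```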


Modelling risk and uncertainty with the analytic hierarchy process

JOURNAL OF MULTI CRITERIA DECISION ANALYSIS, Issue 2 2002
Ido Millet
Abstract: This paper proposes methods for modelling risk and uncertainty with the analytic hierarchy process (AHP). We start by showing why benefit/risk ratios, as described in previous literature, might be an improper modelling approach. We then introduce prototypical case studies where risk plays a role in multicriteria decision making. These cases demonstrate how the AHP can be used to derive relative probabilities, multiple criteria outcome measures, risk criteria, and risk adjustment factors. Copyright © 2003 John Wiley & Sons, Ltd.
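As a generic illustration of the AHP machinery the paper builds on (not the authors' specific models), the principal eigenvector of a pairwise comparison matrix yields the priority weights, with Saaty's consistency ratio as a sanity check:

```python
import numpy as np

# Illustrative pairwise comparisons among three risk criteria:
# a_ij = how much more important criterion i is than criterion j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                   # priority (weight) vector

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)           # consistency index
print("weights:", np.round(w, 3), " CR: %.3f" % (ci / 0.58))  # RI = 0.58 for n = 3
```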


The software maintenance project effort estimation model based on function points

JOURNAL OF SOFTWARE MAINTENANCE AND EVOLUTION: RESEARCH AND PRACTICE, Issue 2 2003
Yunsik Ahn
Abstract: In this study, software maintenance size is discussed and the software maintenance project effort estimation model (SMPEEM) is proposed. The SMPEEM uses function points to calculate the volume of the maintenance function. Ten value adjustment factors (VAFs) are considered, grouped into three categories of maintenance characteristics: the engineer's skill (people domain), the technical characteristics of the product (product domain) and the maintenance environment (process domain). Finally, we suggest an exponential function model which can show the relationships among maintenance effort, maintenance environment factors, and the function points of the software maintenance project. Regression analysis of some small maintenance projects demonstrates the significance of the SMPEEM. Copyright © 2003 John Wiley & Sons, Ltd.
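The abstract names an exponential functional form but does not give it; a hypothetical instance, with coefficients and the form itself chosen purely for illustration, might scale a function-point count by an exponential in the summed VAF ratings:

```python
import math

def maintenance_effort(fp, vaf_scores, a=0.5, b=0.05):
    """Illustrative effort model: person-months from function points (fp)
    and ten VAF ratings; a, b and the functional form are assumptions."""
    assert len(vaf_scores) == 10
    return a * fp * math.exp(b * sum(vaf_scores))

print(round(maintenance_effort(fp=120, vaf_scores=[3] * 10), 1))
```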


Global Daily Reference Evapotranspiration Modeling and Evaluation,

JOURNAL OF THE AMERICAN WATER RESOURCES ASSOCIATION, Issue 4 2008
G.B. Senay
Abstract: Accurate and reliable evapotranspiration (ET) datasets are crucial in regional water and energy balance studies. Due to the complex instrumentation requirements, actual ET values are generally estimated from reference ET values by adjustment factors using coefficients for water stress and vegetation conditions, commonly referred to as crop coefficients. Until recently, the modeling of reference ET has been based solely on important weather variables collected from weather stations that are generally located in selected agro-climatic locations. Since 2001, the National Oceanic and Atmospheric Administration's Global Data Assimilation System (GDAS) has been producing six-hourly climate parameter datasets that are used to calculate daily reference ET for the whole globe at 1-degree spatial resolution. The U.S. Geological Survey Center for Earth Resources Observation and Science has been producing daily reference ET (ETo) since 2001, and it has been used in a variety of operational hydrological models for drought and streamflow monitoring all over the world. With the increasing availability of local station-based reference ET estimates, we evaluated the GDAS-based reference ET estimates using data from the California Irrigation Management Information System (CIMIS). Daily CIMIS reference ET estimates from 85 stations were compared with GDAS-based reference ET at different spatial and temporal scales using five years of daily data from 2002 through 2006. Despite the large difference in spatial scale (point vs. ~100 km grid cell) between the two datasets, the correlations between station-based ET and GDAS-ET were very high, exceeding 0.97 on a daily basis and 0.99 at time scales of more than 10 days. Both the temporal and spatial correspondence in trend/pattern and magnitude between the two datasets was satisfactory, suggesting the reliability of using GDAS parameter-based reference ET for regional water and energy balance studies in many parts of the world. While the study revealed the potential of GDAS ETo for large-scale hydrological applications, site-specific use of GDAS ETo in complex hydro-climatic regions such as coastal areas and rugged terrain may require the application of bias correction and/or disaggregation of the GDAS ETo using downscaling techniques.
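The abstract does not state which formulation underlies the GDAS and CIMIS estimates, but daily reference ET of this kind is commonly computed with the FAO-56 Penman-Monteith equation; a self-contained sketch with illustrative inputs:

```python
import math

def eto_fao56(t_mean, rn, g, u2, es, ea, elev=0.0):
    """Daily reference ET (mm/day), FAO-56 Penman-Monteith.

    t_mean: mean air temperature (deg C); rn: net radiation (MJ/m2/day);
    g: soil heat flux (MJ/m2/day); u2: 2-m wind speed (m/s);
    es, ea: saturation and actual vapour pressure (kPa); elev: metres.
    """
    # Slope of the saturation vapour pressure curve (kPa per deg C).
    delta = (4098 * 0.6108 * math.exp(17.27 * t_mean / (t_mean + 237.3))
             / (t_mean + 237.3) ** 2)
    p = 101.3 * ((293 - 0.0065 * elev) / 293) ** 5.26   # air pressure (kPa)
    gamma = 0.000665 * p                                # psychrometric constant
    num = 0.408 * delta * (rn - g) + gamma * 900 / (t_mean + 273) * u2 * (es - ea)
    return num / (delta + gamma * (1 + 0.34 * u2))

print(round(eto_fao56(t_mean=22, rn=14, g=0, u2=2.1, es=2.65, ea=1.60), 2))
```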


Testing for differences in benefit transfer values between state and regional frameworks

AUSTRALIAN JOURNAL OF AGRICULTURAL & RESOURCE ECONOMICS, Issue 2 2008
John Rolfe
Policy makers are often interested in transferring non-market estimates of environmental values from a 'source' study to predict economic values at a 'target' site. While most applications of the benefit transfer process involve an opportunistic search for suitable source studies, there are some examples of more systematic approaches to developing a framework of values for benefit transfer. A key issue in developing such a framework is how to deal with adjustment factors, where value estimates might vary systematically according to the context of the trade-offs. Previous research has identified that large differences in scope, such as between national and regional contexts, do affect values and hence benefit transfer. The research reported in this paper indicates that such differences are not significant for smaller scope variations, such as between state and regional contexts. These results provide some promise that systematic databases for benefit transfer can be developed.


Spatial Yield Risk Across Region, Crop and Aggregation Method

CANADIAN JOURNAL OF AGRICULTURAL ECONOMICS, Issue 2-3 2005
Michael Popp
A researcher interested in crop yield risk analysis often has to contend with a lack of field- or farm-level data. While spatially aggregated yield data are often readily available from various agencies, aggregation distortions may exist for farm-level analysis. This paper addresses how much aggregation distortion might be expected and whether the findings are robust across wheat, canola and flax grown in two central Canadian production regions that differ mainly in rainfall, frost-free growing days and soil type. Using Manitoba Crop Insurance Corporation data from 1980 to 1990, this research, regardless of the crop or region analyzed, indicates that (i) spatial patterns in risk are absent; (ii) use of aggregate data overwhelmingly underestimates field-level yield risk; and (iii) use of a relative risk measure rather than an absolute risk measure leads to slightly less aggregation distortion. Analysts interested in conducting farm-level analysis using aggregate data are offered a range of adjustment factors to correct for the potential bias.
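A toy simulation of the aggregation effect the paper measures: averaging fields into a regional series cancels field-specific noise, so the aggregate standard deviation understates field-level risk, and the ratio of the two is the kind of adjustment factor the paper offers; the numbers below are simulated, not from the Manitoba data:

```python
import numpy as np

rng = np.random.default_rng(2)
# 50 fields x 11 years: a common year effect plus field-specific noise.
yields = rng.normal(40, 8, (50, 11)) + rng.normal(0, 4, 11)

field_sd = yields.std(axis=1).mean()       # absolute risk at the field level
region_sd = yields.mean(axis=0).std()      # risk in the aggregated series
print("mean field-level SD: %.2f" % field_sd)
print("regional SD:         %.2f" % region_sd)
print("implied adjustment factor: %.2f" % (field_sd / region_sd))
```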