Point Estimates (point + estimate)

Selected Abstracts


Bayesian inference strategies for the prediction of genetic merit using threshold models with an application to calving ease scores in Italian Piemontese cattle

JOURNAL OF ANIMAL BREEDING AND GENETICS, Issue 4 2002
K. Kizilkaya
Summary: First-parity calving difficulty scores from Italian Piemontese cattle were analysed using a threshold mixed-effects model. The model included the fixed effects of age of dam and sex of calf and their interaction, and the random effects of sire, maternal grandsire, and herd-year-season. Covariances between sire and maternal grandsire effects were modelled using a numerator relationship matrix based on male ancestors. Field data consisted of 23 953 records collected between 1989 and 1998 from 4741 herd-year-seasons. Variance and covariance components were estimated using two alternative approximate marginal maximum likelihood (MML) methods, one based on expectation-maximization (EM) and the other based on Laplacian integration. Inferences were compared with those based on three separate runs or sequences of Markov chain Monte Carlo (MCMC) sampling in order to assess the validity of approximate MML estimates derived from data of similar size and design structure. Point estimates of direct heritability were 0.24, 0.25 and 0.26 for EM, Laplacian and MCMC (posterior mean), respectively, whereas the corresponding maternal heritability estimates were 0.10, 0.11 and 0.12. The covariance between additive direct and maternal effects was found to be not different from zero, based on MCMC-derived confidence sets. The conventional joint modal estimates of sire effects and associated standard errors, based on MML estimates of variance and covariance components, differed little from the respective posterior means and standard deviations derived from MCMC. Therefore, there may be little need to pursue computation-intensive MCMC methods for inference on genetic parameters and genetic merits using conventional threshold sire and maternal grandsire models for large datasets on calving ease.
Zusammenfassung: Calving difficulties in Italian Piemontese first-calving cows were investigated using a mixed threshold model. The model included the fixed effects of age of dam and sex of calf and their interaction, and the random effects of sire, maternal grandsire and herd-year-season class. The covariance between sire and maternal grandsire was accounted for via a relationship matrix based only on paternal relationships. In total, 23 953 records from the years 1989 to 1998 and from 4741 herd-year-season classes were analysed. The variance and covariance components were estimated using two different approximate marginal maximum likelihood (MML) methods, the first based on expectation-maximization (EM) and the second on Laplacian integration. Inferences were compared with those based on three separate runs or sequences of Markov chain Monte Carlo (MCMC) sampling, in order to check the validity of the approximate MML estimators on data of similar size and structure. The point estimates of direct heritability were 0.24, 0.25 and 0.26 for EM, Laplacian and MCMC (posterior mean), while the corresponding maternal heritabilities were 0.10, 0.11 and 0.12. The covariance between the direct additive and the maternal effect was estimated as not different from zero, based on MCMC-derived confidence intervals. The conventional estimates of sire effects and their standard errors from the MML estimates of the variance and covariance components differed only slightly from those of the MCMC analysis. Consequently, there is little need to apply computation-intensive MCMC methods to estimate genetic parameters and genetic merit when conventional threshold sire and maternal grandsire models are used for large calving-ease datasets. [source]


Pharmacokinetics and pharmacodynamics of prasugrel in subjects with moderate liver disease

JOURNAL OF CLINICAL PHARMACY & THERAPEUTICS, Issue 5 2009
D. S. Small PhD
Summary Background and Objective: Prasugrel is a thienopyridine antiplatelet agent under investigation for the prevention of atherothrombotic events in patients with acute coronary syndrome who undergo percutaneous coronary intervention. Patients with chronic liver disease are among those in the target population for prasugrel. As hepatic enzymes play a key role in formation of prasugrel's active metabolite, hepatic impairment could affect the safety and/or efficacy of prasugrel in such patients. Methods: This was a parallel-design, open-label, multiple-dose study of 30 subjects, 10 with moderate hepatic impairment (Child-Pugh Class B) and 20 with normal hepatic function. Prasugrel was administered orally as a 60-mg loading dose (LD) and daily 10-mg maintenance doses (MDs) for 5 days. Pharmacokinetic parameters (AUC0–t, Cmax and tmax) and maximal platelet aggregation (MPA) by light transmission aggregometry were assessed after the LD and final MD. Results and Discussion: Exposure to prasugrel's active metabolite was comparable between healthy subjects and those with moderate hepatic impairment. Point estimates for the ratios of geometric least square means for AUC0–t and Cmax after the LD and last MD ranged from 0·91 to 1·14. MPA to 20 µM ADP was similar between subjects with moderate hepatic impairment and healthy subjects for both the LD and MD. Prasugrel was well tolerated by all subjects, and adverse events were mild in severity. Conclusion: Moderate hepatic impairment appears to have no effect on exposure to prasugrel's active metabolite. Furthermore, MPA results suggest that moderate hepatic impairment has little or no effect on platelet aggregation relative to healthy controls. Overall, these results suggest that a dose adjustment would not be required in moderately hepatically impaired patients taking prasugrel. [source]
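The ratio-of-geometric-means analysis described above has a simple core: analyse the log-transformed PK parameters and back-transform. A minimal sketch, assuming a plain two-sample comparison with invented AUC values (the study itself used geometric least-squares means from a model):

```python
import numpy as np
from scipy import stats

# Invented AUC0–t values for the active metabolite (ng*h/mL); not study data
auc_hepatic = np.array([161.0, 142.0, 155.0, 171.0, 149.0, 166.0, 158.0, 139.0, 175.0, 152.0])
auc_healthy = np.array([150.0, 138.0, 162.0, 147.0, 170.0, 156.0, 143.0, 168.0, 151.0, 160.0])

# Work on the log scale: a difference in means back-transforms to a ratio
# of geometric means (moderate hepatic impairment / normal hepatic function).
log_h, log_n = np.log(auc_hepatic), np.log(auc_healthy)
diff = log_h.mean() - log_n.mean()
se = np.sqrt(log_h.var(ddof=1)/len(log_h) + log_n.var(ddof=1)/len(log_n))
t90 = stats.t.ppf(0.95, len(log_h) + len(log_n) - 2)  # two-sided 90% CI

ratio = np.exp(diff)
ci = np.exp([diff - t90*se, diff + t90*se])
print(f"GMR = {ratio:.2f}, 90% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```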


Bioavailability of generic ritonavir and lopinavir/ritonavir tablet products in a dog model

JOURNAL OF PHARMACEUTICAL SCIENCES, Issue 2 2010
Kevin W. Garren
Abstract In this study, we explored the bioavailability in dogs and the chemical potency of generic ritonavir and lopinavir/ritonavir tablet products manufactured by various pharmaceutical companies. Chemical potency of the products was examined by HPLC quantitation of ritonavir and lopinavir. Using a dog model, we determined point estimates for Cmax and AUC of ritonavir and lopinavir/ritonavir for eight generic products compared to Abbott's Norvir® capsule and Kaletra® tablet. Chemical potencies ranged from 79.0% to 104.6%. Point estimates for AUC in the generic tablet products ranged from 0.01 to 1.11, indicating that the relative bioavailability of these formulations was in the range of 1–111% compared to the branded products. This study showed significant variability in bioavailability in a dog model amongst generic tablet products containing the protease inhibitors ritonavir or lopinavir/ritonavir. The chemical potency of the generic products was not indicative of the plasma levels of ritonavir or lopinavir that were achieved. These results reinforce the need for human bioequivalence testing of generic products containing ritonavir or lopinavir/ritonavir, to assure that efficacy in patients is not compromised before these products are made available to patients. Procurement policies of funding agencies should require such quality assurance processes. © 2009 Wiley-Liss, Inc. and the American Pharmacists Association J Pharm Sci 99:626–631, 2010 [source]


Monetary Policy Reaction Functions in Australia

THE ECONOMIC RECORD, Issue 253 2005
GORDON De BROUWER
Interest-rate reaction functions are estimated to assess the stability of Australian monetary policy in the post-float period. The results indicate that the Reserve Bank of Australia (RBA) is forward-looking, focusing on outcomes one year ahead. The weight on inflation in the RBA reaction function has increased, and that on output has decreased, since the adoption of inflation targeting. This is robust to various definitions of the output gap. The RBA also appears to take modest account of sustained movements in the effective exchange rate. Point estimates of the implied neutral rate of interest are from 5 to 5½ per cent. [source]
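For intuition, a forward-looking reaction function of this kind is often written as i_t = c + β·E[π_{t+4}] + γ·gap_t and fitted by least squares. The sketch below is a hedged stand-in with synthetic data and invented coefficients, not the paper's specification or estimator:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 120
infl_exp = 2.5 + rng.normal(0, 1.0, T)   # expected inflation one year ahead (%)
gap = rng.normal(0, 1.5, T)              # output gap (%)
# Policy rate generated from an assumed rule; all coefficients invented
rate = 3.0 + 1.5*infl_exp + 0.5*gap + rng.normal(0, 0.4, T)

X = sm.add_constant(np.column_stack([infl_exp, gap]))
res = sm.OLS(rate, X).fit(cov_type="HAC", cov_kwds={"maxlags": 4})
print(res.params)  # [constant, weight on expected inflation, weight on output gap]

# Implied neutral nominal rate when inflation is at an assumed 2.5% target
c, b_pi, b_gap = res.params
print("implied neutral rate:", c + b_pi*2.5)
```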


Optimal hedging with a regime-switching time-varying correlation GARCH model

THE JOURNAL OF FUTURES MARKETS, Issue 5 2007
Hsiang-Tai Lee
The authors develop a Markov regime-switching time-varying correlation generalized autoregressive conditional heteroscedasticity (RS-TVC GARCH) model for estimating optimal hedge ratios. The RS-TVC nests within it both the time-varying correlation GARCH (TVC) and the constant correlation GARCH (CC). Point estimates based on the Nikkei 225 and the Hang Seng index futures data show that the RS-TVC outperforms the CC and the TVC both in- and out-of-sample in terms of variance reduction. Based on H. White's (2000) reality check, the null hypothesis of no improvement of the RS-TVC over the TVC is rejected for the Nikkei 225 index contract but is not rejected for the Hang Seng index contract. © 2007 Wiley Periodicals, Inc. Jrl Fut Mark 27:495–516, 2007 [source]
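All three models target the minimum-variance hedge ratio h* = Cov(Δs, Δf)/Var(Δf). As a hedged sketch of the constant-vs-time-varying contrast (a rolling window standing in for the GARCH machinery, on invented return series):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 500
fut = rng.normal(0, 1.0, n)                  # futures returns (synthetic)
spot = 0.9*fut + rng.normal(0, 0.4, n)       # correlated spot returns
df = pd.DataFrame({"spot": spot, "fut": fut})

# Constant hedge ratio (analogue of a constant-correlation model)
h_const = df["spot"].cov(df["fut"]) / df["fut"].var()

# Time-varying hedge ratio from a 60-observation rolling window
h_t = df["spot"].rolling(60).cov(df["fut"]) / df["fut"].rolling(60).var()

# Variance reduction of each hedged position (use yesterday's ratio out-of-sample)
print("unhedged variance  :", df["spot"].var())
print("constant hedge     :", (df["spot"] - h_const*df["fut"]).var())
print("time-varying hedge :", (df["spot"] - h_t.shift(1)*df["fut"]).dropna().var())
```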


Pharmacokinetic assessment of a five-probe cocktail for CYPs 1A2, 2C9, 2C19, 2D6 and 3A

BRITISH JOURNAL OF CLINICAL PHARMACOLOGY, Issue 6 2009
Sandrine Turpault
WHAT IS ALREADY KNOWN ABOUT THIS SUBJECT: Numerous cocktails using concurrent administration of several cytochrome P450 (CYP) isoform-selective probe drugs have been reported for investigating drug–drug interactions in vivo. This approach has several advantages: it characterizes in vivo the inhibitory or induction potential of compounds in development toward the CYP enzymes identified in vitro, it assesses several enzymes in the same trial, and it provides complete in vivo information about potential CYP-based drug interactions. WHAT THIS STUDY ADDS: This study describes a new, previously unpublished cocktail containing five probe drugs. The cocktail can be used to test the effects of a new chemical entity on multiple CYP isoforms in a single clinical study, CYP1A2 (caffeine), CYP2C9 (warfarin), CYP2C19 (omeprazole), CYP2D6 (metoprolol) and CYP3A (midazolam), and was designed to overcome potential liabilities of other reported cocktails. AIMS: To assess the pharmacokinetics (PK) of selective substrates of CYP1A2 (caffeine), CYP2C9 (S-warfarin), CYP2C19 (omeprazole), CYP2D6 (metoprolol) and CYP3A (midazolam) when administered orally and concurrently as a cocktail, relative to the drugs administered alone. METHODS: This was an open-label, single-dose, randomized, six-treatment, six-period, six-sequence Williams design study with a wash-out of 7 or 14 days. Thirty healthy male subjects received 100 mg caffeine, 100 mg metoprolol, 0.03 mg kg−1 midazolam, 20 mg omeprazole and 10 mg warfarin individually and in combination (cocktail). Poor metabolizers of CYP2C9, 2C19 and 2D6 were excluded. Plasma samples were obtained up to 48 h for caffeine, metoprolol and omeprazole, 12 h for midazolam, and 312 h for warfarin and the cocktail. Three different validated liquid chromatography tandem mass spectrometry methods were used. Noncompartmental PK parameters were calculated. Log-transformed Cmax, AUClast and AUC for each analyte were analysed with a linear mixed-effects model with fixed terms for treatment, sequence and period, and a random term for subject within sequence. Point estimates (90% CI) for treatment ratios (individual/cocktail) were computed for each analyte's Cmax, AUClast and AUC. RESULTS: There was no PK interaction between the probe drugs when administered in combination as a cocktail, relative to the probes administered alone, as the 90% CIs of the PK parameters were within the prespecified bioequivalence limits of 0.80–1.25. CONCLUSION: The lack of interaction between probes indicates that this cocktail could be used to evaluate the potential for multiple drug–drug interactions in vivo. [source]
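As a hedged illustration of the noncompartmental parameters mentioned above, the sketch below computes Cmax, tmax and AUClast by the linear trapezoidal rule from an invented concentration–time profile; real NCA additionally handles terminal-slope selection with more care:

```python
import numpy as np

# Invented plasma concentration–time data for one subject and one probe drug
t = np.array([0, 0.25, 0.5, 1, 2, 4, 8, 12, 24, 48.0])           # time (h)
c = np.array([0, 1.2, 3.5, 5.1, 4.2, 2.9, 1.4, 0.7, 0.2, 0.05])  # conc (ng/mL)

cmax = c.max()
tmax = t[c.argmax()]
auc_last = np.trapz(c, t)  # linear trapezoidal rule up to the last sample

# Naive extrapolation to infinity from the last three points (assumes a log-linear tail)
k = -np.polyfit(t[-3:], np.log(c[-3:]), 1)[0]   # terminal elimination rate constant
auc_inf = auc_last + c[-1]/k
print(f"Cmax={cmax}, tmax={tmax} h, AUClast={auc_last:.2f}, AUCinf={auc_inf:.2f}")
```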


Long-term prognostic value of B-type natriuretic peptide in cardiac and non-cardiac causes of acute dyspnoea

EUROPEAN JOURNAL OF CLINICAL INVESTIGATION, Issue 11 2007
M. Christ
Abstract Background: B-type natriuretic peptide (BNP) levels significantly predict increased risk of death in heart failure. The predictive role of BNP levels in patients with non-cardiac causes of acute dyspnoea presenting to the emergency department is not well characterized. Materials and methods: The B-type natriuretic peptide for Acute Shortness of Breath EvaLuation (BASEL) study enrolled consecutive patients with acute dyspnoea. Results: Cumulative mortality was 14·8%, 33·1% and 51·9% in 452 patients (age: 19–97 years; 58% male) with low (< 100 pg mL−1), intermediate (100–500 pg mL−1) and high (> 500 pg mL−1) BNP plasma levels at 18 months of follow-up. BNP classes (point estimate: 1·55, 95%CI: 1·19–2·03, P = 0·001), in addition to age, increased heart rate and diuretic use, emerged as significant predictors of long-term mortality in multivariable Cox regression analyses. The BNP concentration alone had an area under the receiver operating characteristic curve of 0·71 (95%CI: 0·66–0·76; P < 0·001) for predicting 18-month mortality. BNP plasma levels independently predicted long-term risk of death in patients with non-cardiac (point estimate: 1·72, 95%CI: 1·16–2·56; P = 0·007) and with cardiac causes of acute dyspnoea (point estimate: 2·21, 95%CI: 1·34–3·64; P = 0·002). Conclusions: BNP levels are strong and independent predictors of long-term mortality in unselected dyspnoeic patients presenting to the emergency department, independent of the cause of dyspnoea. [source]
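The reported area under the ROC curve (0·71) has a direct rank interpretation: the probability that a randomly chosen patient who died had a higher BNP than a randomly chosen survivor. A minimal sketch via the Mann–Whitney statistic, with invented BNP values:

```python
import numpy as np

rng = np.random.default_rng(2)
# Invented BNP levels (pg/mL); decedents tend to have higher values
bnp_died = rng.lognormal(mean=6.0, sigma=1.0, size=120)
bnp_alive = rng.lognormal(mean=5.0, sigma=1.0, size=330)

x = np.concatenate([bnp_died, bnp_alive])
ranks = x.argsort().argsort() + 1.0          # 1-based ranks (continuous data, no ties)
u = ranks[:len(bnp_died)].sum() - len(bnp_died)*(len(bnp_died) + 1)/2
auc = u / (len(bnp_died)*len(bnp_alive))     # AUC = P(BNP_died > BNP_alive)
print(f"AUC = {auc:.2f}")
```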


An alternative approach to estimate the wage returns to private-sector training

JOURNAL OF APPLIED ECONOMETRICS, Issue 4 2008
Edwin Leuven
This paper follows an alternative approach to identify the wage effects of private-sector training. The idea is to narrow down the comparison group by only taking into consideration the workers who wanted to participate in training but did not do so because of some random event. This makes the comparison group increasingly similar to the group of participants in terms of observed individual characteristics and the characteristics of (planned) training events. At the same time, the point estimate of the average return to training consistently drops from a large and significant return to a point estimate close to zero. Copyright © 2008 John Wiley & Sons, Ltd. [source]


Illusion of confirmation from exposure to another's hypothesis

JOURNAL OF BEHAVIORAL DECISION MAKING, Issue 1 2006
Derek J. Koehler
Abstract We examine the influence of exposure to an advisor's hypothesis, in the form of a point estimate of an uncertain quantity, on subsequent point estimates and confidence judgments made by advisees. In three experiments, a group of unexposed advisees produced their own estimates before being presented with that of the advisor, while a group of exposed advisees were presented with the advisor's estimate before making their own. Not surprisingly, exposed advisees deliberately incorporated the information conveyed by the advisor's estimate in producing their own estimates. But the exposure manipulation also had a contaminating influence that shifted what the advisees viewed as their own, independent estimates toward those of the advisor. Seemingly unaware of this influence, exposed advisees were subject to an illusion of confirmation in which they expressed greater confidence in the accuracy of the advisor's estimate than did unexposed advisees. Copyright © 2006 John Wiley & Sons, Ltd. [source]


Association Between Vertebral Fracture and Increased Mortality in Osteoporotic Patients

JOURNAL OF BONE AND MINERAL RESEARCH, Issue 7 2003
Tarja Jalava
Abstract Determinants of mortality were studied in a prospective study of 677 women and men with primary or secondary osteoporosis. Prevalent vertebral fractures were associated with increased mortality, but other known predictors of mortality explain a significant proportion of the excess risk. Introduction: In population studies, prevalent vertebral fractures are associated with increased mortality. It is unknown whether this excess mortality is related to low bone mineral density or its determinants or whether there is an additional component associated with fracture itself. Methods: We studied 677 women and men with osteoporosis, 28–88 years old, of whom 352 had morphometrically determined vertebral fracture, to examine the risk and causes of mortality in patients with osteoporosis (defined densitometrically as a spine bone mineral density T-score < −2.5 and < −3.0 for women and men, respectively, and/or one or more prevalent vertebral fractures without a history of significant trauma). The participants had enrolled in a double-blind placebo-controlled study in osteoporosis and comprised 483 women with postmenopausal osteoporosis, 110 women with secondary osteoporosis, and 84 men with osteoporosis of any cause. Demographics, medical history, and other measures of skeletal and nonskeletal health status were assessed at entry. Results: During a median follow-up of 3.2 years, 37 (5.5%) participants died, with 31 of these deaths occurring in those with prevalent vertebral fractures. Compared with participants who did not have a prevalent vertebral fracture, those with one or more fractures had a 4.4-fold higher (95% CI, 1.85–10.6) mortality rate. After adjustment for predictors of poor health (including number of medications, number of diseases, use of oral corticosteroids, alcohol intake, serum albumin and erythrocyte sedimentation rate (ESR), renal function, height, weight, gender, and age), the point estimate of risk remained elevated but was no longer statistically significant (hazard ratio, 2.4; 95% CI, 0.93–6.23). Conclusions: Prevalent vertebral fractures in osteoporotic patients are associated with increased mortality. Other known predictors of mortality can explain a significant proportion of the excess risk. [source]


Association between TNFA-308 G/A polymorphism and sensitization to para-phenylenediamine: a case–control study

ALLERGY, Issue 2 2009
B. Blömeke
Background: Para-phenylenediamine (PPD) and related chemicals are common contact sensitizers, frequently causing allergic contact dermatitis (ACD). The cytokine tumor necrosis factor-alpha (TNF-α) plays a key role in contact sensitization. Methods: In this case–control study, we evaluated the distribution of variations in the regulatory region of the gene for TNF-α (TNFA-308 G/A) in 181 Caucasian individuals with a history of ACD and sensitization to PPD and 161 individuals with no history of sensitization to PPD. Results: The frequency of GA or AA TNFA genotypes was significantly higher in individuals sensitized to PPD than in age- and gender-matched controls, giving an odds ratio (OR) of 2.16 (95% confidence interval, CI: 1.35–3.47; P = 0.0016). This relation was even more pronounced when restricting cases to females over 45 years (OR = 3.71; 95% CI: 1.65–8.31; P = 0.0017) vs younger females (≤ 45 years; OR = 2.41; 95% CI: 1.03–5.65; P = 0.044) or males (OR = 1.05; 95% CI: 0.449–2.47; P = 1.0). In addition, a logistic regression model revealed a significant effect for the TNFA-308 AA and AG vs GG genotypes (point estimate = 2.152; 95% Wald CI: 1.332–3.477). Conclusions: These findings suggest a possible role for the TNFA-308 genetic polymorphism as a susceptibility factor for chemically induced ACD. [source]
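For reference, the odds ratio and its log-scale (Woolf) 95% CI come straight from the 2×2 genotype-by-status table. A hedged sketch with invented counts (chosen only to match the reported group sizes, not the study's actual table):

```python
import numpy as np

# Invented 2x2 table: rows = genotype GA/AA vs GG, columns = PPD-sensitized vs control
a, b = 80, 101   # sensitized: GA/AA, GG   (sums to 181 cases)
c, d = 47, 114   # controls:   GA/AA, GG   (sums to 161 controls)

or_hat = (a*d)/(b*c)
se_log = np.sqrt(1/a + 1/b + 1/c + 1/d)   # Woolf's SE of log(OR)
lo, hi = np.exp(np.log(or_hat) + np.array([-1, 1])*1.96*se_log)
print(f"OR = {or_hat:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```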


Systemic lupus erythematosus prevalence in the U.K.: methodological issues when using the General Practice Research Database to estimate frequency of chronic relapsing-remitting disease

PHARMACOEPIDEMIOLOGY AND DRUG SAFETY, Issue 2 2007
A. L. Nightingale BSc (Hons)
Abstract Purpose: The purpose of this study was to calculate the prevalence of systemic lupus erythematosus (SLE) between 1992 and 1998 using the General Practice Research Database (GPRD). Methods: We identified all individuals who had contributed at least 3 years of data to the GPRD and who had a diagnosis of SLE with supporting evidence of their diagnosis. We calculated the annual age- and sex-specific prevalence of SLE. Additionally, we stratified the prevalence by years of data contributed to the GPRD. Results: In males, the point estimate of the prevalence of SLE increased from 7.5/100,000 (CI95 6.3, 8.8) to 10.1/100,000 (CI95 7.8, 12.2), but this rise was not statistically significant. However, prevalence appeared to increase significantly amongst females, from 42.6/100,000 (CI95 39.6, 45.6) in 1992 to 70.8/100,000 (CI95 65.1, 76.6) in 1998. This increase was mainly amongst women aged 50–79 and in those contributing more than 5 years of data, and could not be explained by increasing incidence of SLE or decreasing mortality during the study period. Conclusions: We found an increasing prevalence of SLE that could not be explained by increasing incidence or decreasing mortality. This is almost certainly an artefact caused by the increased likelihood of detecting or confirming cases of chronic relapsing-remitting diseases with increasing time contributed to the GPRD. Copyright © 2006 John Wiley & Sons, Ltd. [source]


Small proportions: what to report for confidence intervals?

PHARMACOEPIDEMIOLOGY AND DRUG SAFETY, Issue 4 2005
Hilde Tobi
Abstract Purpose: It is generally agreed that a confidence interval (CI) is usually more informative than a point estimate or p-value, but we rarely encounter small proportions with CIs in the pharmacoepidemiological literature. When a CI is given, how it was calculated is only sporadically reported, which incorrectly suggests that there is one single method to calculate CIs. To identify the method best suited for small proportions, seven approximate methods and the Clopper–Pearson Exact method for calculating CIs were compared. Methods: In a simulation study, 90%, 95% and 99% CIs were evaluated systematically for sample size 1000 and proportions ranging from 0.001 to 0.01. The main quality criteria were coverage and interval width. The methods are illustrated using data from pharmacoepidemiology studies. Results: Simulations showed that the standard Wald methods have insufficient coverage probability regardless of how the desired coverage is perceived. Overall, the Exact method and the Score method with continuity correction (CC) performed best. Real-life examples showed the methods to yield different results too. Conclusions: For CIs for small proportions (≤ 0.01), the use of the Exact method and the Score method with CC are advocated on the basis of this study. Copyright © 2005 John Wiley & Sons, Ltd. [source]
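A hedged sketch of this comparison using statsmodels' proportion_confint, which implements the Wald ('normal'), Wilson score ('wilson', here without the paper's continuity correction) and Clopper–Pearson exact ('beta') intervals, plus a small coverage simulation:

```python
import numpy as np
from statsmodels.stats.proportion import proportion_confint

rng = np.random.default_rng(3)
n, p, alpha, reps = 1000, 0.005, 0.05, 20000
x = rng.binomial(n, p, size=reps)            # simulated observed counts

for method in ("normal", "wilson", "beta"):  # Wald, score, Clopper–Pearson exact
    lo, hi = proportion_confint(x, n, alpha=alpha, method=method)
    print(f"{method:7s} coverage={np.mean((lo <= p) & (p <= hi)):.3f} "
          f"mean width={np.mean(hi - lo):.4f}")
```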


Treating Words as Data with Error: Uncertainty in Text Statements of Policy Positions

AMERICAN JOURNAL OF POLITICAL SCIENCE, Issue 2 2009
Kenneth Benoit
Political text offers extraordinary potential as a source of information about the policy positions of political actors. Despite recent advances in computational text analysis, human interpretative coding of text remains an important source of text-based data, ultimately required to validate more automatic techniques. The profession's main source of cross-national, time-series data on party policy positions comes from the human interpretative coding of party manifestos by the Comparative Manifesto Project (CMP). Despite widespread use of these data, the uncertainty associated with each point estimate has never been available, undermining the value of the dataset as a scientific resource. We propose a remedy. First, we characterize processes by which CMP data are generated. These include inherently stochastic processes of text authorship, as well as of the parsing and coding of observed text by humans. Second, we simulate these error-generating processes by bootstrapping analyses of coded quasi-sentences. This allows us to estimate precise levels of nonsystematic error for every category and scale reported by the CMP for its entire set of 3,000-plus manifestos. Using our estimates of these errors, we show how to correct biased inferences, in recent prominently published work, derived from statistical analyses of error-contaminated CMP data. [source]
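A hedged sketch of the bootstrap idea: treat a coded manifesto as a bag of quasi-sentences, resample with replacement, and recompute the scale each time. The three-category scheme and counts below are invented stand-ins for the CMP's coding categories:

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented coding of one manifesto: each quasi-sentence has one category label
counts = {"left": 180, "right": 240, "other": 580}   # N = 1000 quasi-sentences
codes = np.repeat(list(counts), list(counts.values()))

def scale(sample):
    """Right-minus-left score as a percentage of all quasi-sentences."""
    return 100*((sample == "right").sum() - (sample == "left").sum())/len(sample)

boot = np.array([scale(rng.choice(codes, size=len(codes), replace=True))
                 for _ in range(2000)])
print(f"point estimate = {scale(codes):.1f}, bootstrap SE = {boot.std(ddof=1):.2f}")
```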


Estimation and Confidence Intervals after Adjusting the Maximum Information

BIOMETRICAL JOURNAL, Issue 2 2003
John Lawrence
Abstract In a comparative clinical trial, if the maximum information is adjusted on the basis of unblinded data, the usual test statistic should be avoided due to possible type I error inflation. An adaptive test can be used as an alternative. The usual point estimate of the treatment effect and the usual confidence interval should also be avoided. In this article, we construct a point estimate and a confidence interval that are motivated by an adaptive test statistic. The estimator is consistent for the treatment effect and the confidence interval asymptotically has correct coverage probability. [source]


Exact Confidence Bounds Following Adaptive Group Sequential Tests

BIOMETRICS, Issue 2 2009
Werner Brannath
Summary We provide a method for obtaining confidence intervals, point estimates, and p-values for the primary effect size parameter at the end of a two-arm group sequential clinical trial in which adaptive changes have been implemented along the way. The method is based on applying the adaptive hypothesis testing procedure of Müller and Schäfer (2001, Biometrics 57, 886–891) to a sequence of dual tests derived from the stage-wise adjusted confidence interval of Tsiatis, Rosner, and Mehta (1984, Biometrics 40, 797–803). In the nonadaptive setting this confidence interval is known to provide exact coverage. In the adaptive setting exact coverage is guaranteed provided the adaptation takes place at the penultimate stage. In general, however, all that can be claimed theoretically is that the coverage is guaranteed to be conservative. Nevertheless, extensive simulation experiments, supported by an empirical characterization of the conditional error function, demonstrate convincingly that for all practical purposes the coverage is exact and the point estimate is median unbiased. No procedure has previously been available for producing confidence intervals and point estimates with these desirable properties in an adaptive group sequential setting. The methodology is illustrated by an application to a clinical trial of deep brain stimulation for Parkinson's disease. [source]


Semiparametric Regression Modeling with Mixtures of Berkson and Classical Error, with Application to Fallout from the Nevada Test Site

BIOMETRICS, Issue 1 2002
Bani Mallick
Summary. We construct Bayesian methods for semiparametric modeling of a monotonic regression function when the predictors are measured with classical error, Berkson error, or a mixture of the two. Such methods require a distribution for the unobserved (latent) predictor, a distribution we also model semiparametrically. Such combinations of semiparametric methods for the dose-response as well as the latent variable distribution have not been considered in the measurement error literature for any form of measurement error. In addition, our methods represent a new approach to those problems where the measurement error combines Berkson and classical components. While the methods are general, we develop them around a specific application, namely, the study of thyroid disease in relation to radiation fallout from the Nevada Test Site. We use these data to illustrate our methods, which suggest a point estimate (posterior mean) of relative risk at high doses nearly double that of previous analyses, but that also suggest much greater uncertainty in the relative risk. [source]


Absence of clinically relevant drug interactions following simultaneous administration of didanosine-encapsulated, enteric-coated bead formulation with either itraconazole or fluconazole

BIOPHARMACEUTICS AND DRUG DISPOSITION, Issue 2 2002
B. Damle
Abstract This open-label, two-way crossover study was undertaken to determine whether the enteric formulation of didanosine influences the pharmacokinetics of itraconazole or fluconazole, two agents frequently used to treat fungal infections that occur with HIV infection, and whose bioavailability may be influenced by changes in gastric pH. Healthy subjects were randomized to Treatment A (200-mg itraconazole or 200-mg fluconazole) or Treatment B (same dose of itraconazole or fluconazole with 400 mg of didanosine as an encapsulated, enteric-coated bead formulation). In the itraconazole study, a lack of interaction was concluded if the 90% confidence intervals (CIs) of the ratios of the geometric means of log-transformed Cmax and AUC0–T values of itraconazole and hydroxyitraconazole, the active metabolite of itraconazole, were contained entirely between 0.75 and 1.33. In the fluconazole study, the equivalence interval for Cmax and AUC0–T was 0.80–1.25. The data showed that for itraconazole the point estimate and 90% CI of the ratios of Cmax and AUC0–T values were 0.98 (0.79, 1.20) and 0.88 (0.71, 1.09), respectively; for hydroxyitraconazole the respective values were 0.91 (0.76, 1.08) and 0.85 (0.68, 1.06). In the fluconazole study, the point estimate and 90% CI of the ratios of Cmax and AUC0–T values were 0.98 (0.93, 1.03) and 1.01 (0.99, 1.03), respectively. Tmax values for itraconazole, hydroxyitraconazole, and fluconazole were similar between treatments. Both studies indicated a lack of clinically significant interactions of the didanosine formulation with itraconazole or fluconazole. These results showed that the encapsulated, enteric-coated bead formulation of didanosine can be concomitantly administered with drugs, such as the azole antifungal agents, whose bioavailability may be influenced by interaction with antacids. Copyright © 2002 John Wiley & Sons, Ltd. [source]


Pharmacokinetics of tibolone in early and late postmenopausal women

BRITISH JOURNAL OF CLINICAL PHARMACOLOGY, Issue 2 2002
C. J. Timmer
Aims: Tibolone is a tissue-specific compound with favourable effects on bone, vagina, climacteric symptoms, mood and sexual well-being in postmenopausal women, without stimulating the endometrium or breast. Since tibolone is used for the treatment of both young and elderly postmenopausal women, its pharmacokinetics were studied to investigate potential differences with age. In addition, the bioequivalence of the 1.25 and 2.5 mg tablets was evaluated. Methods: Single doses of 1.25 or 2.5 mg of tibolone were given in a double-blind, randomized, two-way cross-over study to women aged between 45 and 55 years or between 65 and 75 years of age. Results: Age did not have a significant effect on Cmax, tmax and t½ of tibolone and its metabolites, or on the body weight-standardized oral clearance (CL/F kg−1) of the 3α- and 3β-hydroxy tibolones. In early postmenopausal women, significantly lower values were found for the AUC(0–16 h) and AUC(0–∞) of 3α-hydroxy tibolone: 24.6±6.6 vs 29.2±4.9 and 27.1±6.9 vs 32.3±6.5 ng ml−1 h for the 1.25 mg tablet, respectively, and 45.4±13.9 vs 55.7±14.1 and 49.6±14.6 vs 62.6±17.3 ng ml−1 h for the 2.5 mg tablet, respectively. When these values were adjusted for the significantly higher body weight of the early postmenopausal women, the differences disappeared. No significant differences between early and late postmenopausal women were found for the AUC(0–8 h) and AUC(0–∞) of 3β-hydroxy tibolone. The rate of absorption of tibolone and the rates of absorption or formation of the 3α- and 3β-hydroxy tibolones were significantly higher after the 1.25 mg dose than after the 2.5 mg tablet, resulting in increases of 32%, 27% and 17% in the dose-normalized Cmax of tibolone and the 3α- and 3β-hydroxy tibolones, respectively. tmax for tibolone and its metabolites was 12–27% shorter after 1.25 mg compared with 2.5 mg, which was statistically significant. The two formulations were bioequivalent with respect to the dose-normalized AUC(0–∞) and AUC(0–tfix) values for 3α-hydroxy tibolone (ratio point estimate [90% confidence limits]: 1.08 [1.04, 1.14] and 1.08 [1.03, 1.13], respectively) and for 3β-hydroxy tibolone (1.07 [1.01, 1.14] and 1.04 [0.96, 1.12], respectively). Both formulations were also bioequivalent with respect to CL/F kg−1 and t½. Conclusions: The pharmacokinetics of tibolone are similar in early (age 45–55 years) and late (65–75 years) postmenopausal women. The 2.5 and 1.25 mg tablets are bioequivalent with respect to the extent of absorption. The rates of absorption or formation of the metabolites of tibolone were not bioequivalent, but these differences are considered to have no clinical relevance in view of the chronic administration of tibolone. [source]


Ownership Concentration and Corporate Performance on the Budapest Stock Exchange: do too many cooks spoil the goulash?

CORPORATE GOVERNANCE, Issue 2 2005
John S. Earle
We examine the impact of ownership concentration on firm performance using panel data for firms listed on the Budapest Stock Exchange, where ownership tends to be highly concentrated and frequently involves multiple blocks. Fixed-effects estimates imply that the size of the largest block increases profitability and efficiency strongly and monotonically, but the effects of total blockholdings are much smaller and statistically insignificant. Controlling for the size of the largest block, point estimates of the marginal effects of additional blocks are negative. The results suggest that the marginal costs of concentration may outweigh the benefits when the increased concentration involves "too many cooks". [source]
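A hedged sketch of the fixed-effects (within) estimator used here: demean every variable within firm, which sweeps out time-invariant firm effects, then run OLS on the demeaned data. Variable names, data and the coefficient are invented:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)
firms, years = 50, 8
df = pd.DataFrame({
    "firm": np.repeat(np.arange(firms), years),
    "largest_block": rng.uniform(0, 0.6, firms*years),  # largest blockholder's share
})
firm_effect = rng.normal(0, 1, firms)[df["firm"]]       # unobserved heterogeneity
df["roe"] = 0.8*df["largest_block"] + firm_effect + rng.normal(0, 0.5, len(df))

# Within transformation: subtract firm means from outcome and regressor
dm = df.groupby("firm")[["roe", "largest_block"]].transform(lambda g: g - g.mean())
print(sm.OLS(dm["roe"], dm[["largest_block"]]).fit().params)  # ~0.8 by construction
```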


Characterization and uncertainty analysis of VOCs emissions from industrial wastewater treatment plants

ENVIRONMENTAL PROGRESS & SUSTAINABLE ENERGY, Issue 3 2010
Kaishan Zhang
Abstract Air toxics from industrial wastewater treatment plants (IWTPs) raise serious health concerns in surrounding residential neighborhoods. To address such concerns, one of the key challenges is to quantify the air emissions from the IWTPs. The objective here is to characterize the air emissions from the IWTPs and quantify their associated uncertainty. An IWTP receiving the wastewaters from an airplane maintenance facility is used for illustration, with focus on the quantification of air emissions for benzyl alcohol, phenol, methylene chloride, 2-butanone, and acetone. Two general fate models, i.e., WATER9 and TOXCHEM+V3.0, were used to model the IWTP and quantify the air emissions. Monte Carlo and bootstrap simulations were used for uncertainty analysis. On average, air emissions from the IWTP were estimated to range from 0.003 lb/d to approximately 16 lb/d, with phenol being the highest and benzyl alcohol the lowest. However, the emissions are associated with large uncertainty: the ratio of the 97.5th percentile to the 2.5th percentile of air emissions ranged from 5 to 50 depending on the pollutant. This indicates that point estimates of air emissions might fail to capture the worst scenarios, leading to inaccurate conclusions when used for health risk assessment. © 2009 American Institute of Chemical Engineers Environ Prog, 2010 [source]
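A hedged sketch of that kind of uncertainty quantification: propagate assumed (invented) distributions for the model inputs through a trivial stand-in emission model by Monte Carlo, then report the 97.5th/2.5th percentile ratio:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000

# Invented stand-in model: emission = flow * concentration * fraction volatilized
flow = rng.lognormal(np.log(1.0), 0.3, n)   # relative wastewater flow
conc = rng.lognormal(np.log(5.0), 0.8, n)   # pollutant concentration (invented units)
frac = rng.uniform(0.2, 0.9, n)             # fraction stripped to air
emission = flow*conc*frac

q025, q50, q975 = np.percentile(emission, [2.5, 50, 97.5])
print(f"median={q50:.2f}, 95% interval=({q025:.2f}, {q975:.2f}), "
      f"97.5th/2.5th ratio = {q975/q025:.1f}")
```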


Changes in Quality of Life in Epilepsy: How Large Must They Be to Be Real?

EPILEPSIA, Issue 1 2001
Samuel Wiebe
Summary: Purpose: The study goal was to assess the magnitude of change in generic and epilepsy-specific health-related quality-of-life (HRQOL) instruments needed to exclude chance or error at various levels of certainty in patients with medically refractory epilepsy. Methods: Forty patients with temporal lobe epilepsy and clearly defined criteria of clinical stability received HRQOL measurements twice, 3 months apart, using the Quality of Life in Epilepsy Inventory-89 and -31 (QOLIE-89 and QOLIE-31), Liverpool Impact of Epilepsy, adverse drug events, seizure severity scales, and the Generic Health Utilities Index (HUI-III). Standard error of measurement and test-retest reliability were obtained for all scales and for QOLIE-89 subscales. Using the Reliable Change Index described by Jacobson and Truax, we assessed the magnitude of change required by HRQOL instruments to be 90 and 95% certain that real change has occurred, as opposed to change due to chance or measurement error. Results: Clinical features, point estimates and distribution of HRQOL measures, and test-retest reliability (all > 0.70) were similar to those previously reported. Score changes of ±13 points in QOLIE-89, ±15 in QOLIE-31, ±6.3 in Liverpool seizure severity (ictal), ±11 in Liverpool adverse drug events, ±0.25 in HUI-III, and ±9.5 in impact of epilepsy exclude chance or measurement error with 90% certainty. These correspond, respectively, to 13, 15, 17, 18, 25, and 32% of the potential range of change of each instrument. Conclusions: Threshold values for real change varied considerably among HRQOL tools but were relatively small for QOLIE-89, QOLIE-31, Liverpool Seizure Severity, and adverse drug events. In some instruments, even relatively large changes cannot rule out chance or measurement error. The relation between the Reliable Change Index and other measures of change and its distinction from measures of minimum clinically important change are discussed. [source]
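The Jacobson–Truax Reliable Change Index used above has a closed form: SEM = SD·sqrt(1 − r), Sdiff = sqrt(2)·SEM, and the 90% threshold is 1.645·Sdiff (1.96 for 95%). A minimal sketch; the SD and reliability below are invented, not the study's values:

```python
import numpy as np

def reliable_change_threshold(sd, r, z=1.645):
    """Jacobson–Truax: smallest change exceeding chance/measurement error."""
    sem = sd*np.sqrt(1 - r)     # standard error of measurement
    s_diff = np.sqrt(2)*sem     # SE of the difference between two measurements
    return z*s_diff

# Invented baseline SD and test-retest reliability for a QOLIE-89-like scale
print(reliable_change_threshold(sd=16.0, r=0.88))  # threshold at 90% certainty
```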


EFFECTIVE POPULATION SIZES AND TEMPORAL STABILITY OF GENETIC STRUCTURE IN RANA PIPIENS, THE NORTHERN LEOPARD FROG

EVOLUTION, Issue 11 2004
Eric A. Hoffman
Abstract Although studies of population genetic structure are very common, whether genetic structure is stable over time has been assessed for very few taxa. The question of stability over time is particularly interesting for frogs because it is not clear to what extent frogs exist in dynamic metapopulations with frequent extinction and recolonization, or in stable patches at equilibrium between drift and gene flow. In this study we collected tissue samples from the same five populations of leopard frogs, Rana pipiens, over a 22–30 year time interval (11–15 generations). Genetic structure among the populations was very stable, suggesting that these populations were not undergoing frequent extinction and colonization. We also estimated the effective size of each population from the change in allele frequencies over time. There exist few estimates of effective size for frog populations, but the data available suggest that ranid frogs may have much larger ratios of effective size (Ne) to census size (Nc) than toads (Bufonidae). Our results indicate that R. pipiens populations have effective sizes on the order of hundreds to at most a few thousand frogs, and Ne/Nc ratios in the range of 0.1–1.0. These estimates of Ne/Nc are consistent with those estimated for other Rana species. Finally, we compared the results of three temporal methods for estimating Ne. Moment and pseudolikelihood methods that assume a closed population gave the most similar point estimates, although the moment estimates were consistently two to four times larger. Wang and Whitlock's new method that jointly estimates Ne and the rate of immigration into a population (m) gave much smaller estimates of Ne and implausibly large estimates of m. This method requires knowing allele frequencies in the source of immigrants, but was thought to be insensitive to inexact estimates. In our case the method may have failed because we did not know the true source of immigrants for each population. The method may be more sensitive to choice of source frequencies than was previously appreciated, and so should be used with caution if the most likely source of immigrants cannot be identified clearly. [source]
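For intuition, a hedged sketch of a classic temporal moment estimator (in the spirit of the Nei–Tajima/Waples approach for a closed population, sampling plan II): compute the standardized variance Fc of allele-frequency change over t generations, correct for sampling noise, and invert. Frequencies and sample sizes are invented, and the paper's pseudolikelihood and Wang–Whitlock methods are more involved:

```python
import numpy as np

def ne_temporal(x0, xt, t, s0, st):
    """Moment-based temporal Ne estimate (closed population, plan II).

    x0, xt : allele frequencies at generation 0 and t (array over alleles)
    s0, st : numbers of individuals sampled at the two time points
    """
    fc = np.mean((x0 - xt)**2 / ((x0 + xt)/2 - x0*xt))  # standardized variance
    # Estimate is undefined/negative when the drift signal < sampling noise
    return t / (2*(fc - 1/(2*s0) - 1/(2*st)))

# Invented frequencies for four alleles, 13 generations apart
x0 = np.array([0.62, 0.21, 0.45, 0.55])
xt = np.array([0.50, 0.33, 0.57, 0.43])
print(ne_temporal(x0, xt, t=13, s0=40, st=40))  # on the order of a few hundred
```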


Upward bias in odds ratio estimates from genome-wide association studies

GENETIC EPIDEMIOLOGY, Issue 4 2007
Chad Garner
Abstract Genome-wide association studies are carried out to identify unknown genes for a complex trait. Polymorphisms showing the most statistically significant associations are reported and followed up in subsequent confirmatory studies. In addition to the test of association, the statistical analysis provides point estimates of the relationship between the genotype and phenotype at each polymorphism, typically an odds ratio in case-control association studies. The statistical significance of the test and the estimator of the odds ratio are completely correlated. Selecting the most extreme statistics is equivalent to selecting the most extreme odds ratios. The value of the estimator, given the value of the statistical significance, depends on the standard error of the estimator and the power of the study. This report shows that when power is low, estimates of the odds ratio from a genome-wide association study, or any large-scale association study, will be upwardly biased. Genome-wide association studies are often underpowered given the low α levels required to declare statistical significance and the small individual genetic effects known to characterize complex traits. Factors such as low allele frequency, inadequate sample size and weak genetic effects contribute to large standard errors in the odds ratio estimates, low power and upwardly biased odds ratios. Studies that have high power to detect an association with the true odds ratio will have little or no bias, regardless of the statistical significance threshold. The results have implications for the interpretation of genome-wide association analysis and the planning of subsequent confirmatory stages. Genet Epidemiol. 2007. © 2007 Wiley-Liss, Inc. [source]
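A hedged simulation of the selection effect described above (the "winner's curse"): draw log-OR estimates around a modest true effect, keep only those passing a genome-wide threshold, and compare. All parameter values are invented:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
true_logor, se, alpha, reps = np.log(1.2), 0.08, 5e-8, 200_000

est = rng.normal(true_logor, se, reps)        # sampling distribution of log-OR
pvals = 2*stats.norm.sf(np.abs(est/se))
selected = est[pvals < alpha]                 # 'significant' studies only

print(f"power = {len(selected)/reps:.4f}")
print(f"true OR = {np.exp(true_logor):.2f}, "
      f"mean OR among significant results = {np.exp(selected.mean()):.2f}")
```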


How much confidence should we place in efficiency estimates?

HEALTH ECONOMICS, Issue 11 2003
Andrew Street
Abstract Ordinary least squares (OLS) and stochastic frontier (SF) analyses are commonly used to estimate industry-level and firm-specific efficiency. Using cross-sectional data for English public hospitals, a total cost function based on a specification developed by the English Department of Health is estimated. Confidence intervals are calculated around the OLS residuals and around the inefficiency component of the SF residuals. Sensitivity analysis is conducted to assess whether conclusions about relative performance are robust to choices of error distribution, functional form and model specification. It is concluded that estimates of relative hospital efficiency are sensitive to estimation decisions and that little confidence can be placed in the point estimates for individual hospitals. The use of these techniques to set annual performance targets should be avoided. Copyright © 2002 John Wiley & Sons, Ltd. [source]


A view from the bridge: agreement between the SF-6D utility algorithm and the Health Utilities Index

HEALTH ECONOMICS, Issue 11 2003
Bernie J. O'Brien
Abstract Background: The SF-6D is a new health state classification and utility scoring system based on 6 dimensions ('6D') of the Short Form 36, and permits a "bridging" transformation between SF-36 responses and utilities. The Health Utilities Index, Mark 3 (HUI3) is a valid and reliable multi-attribute health utility scale that is widely used. We assessed within-subject agreement between SF-6D utilities and those from HUI3. Methods: Patients at increased risk of sudden cardiac death and participating in a randomized trial of implantable defibrillator therapy completed both instruments at baseline. Score distributions were inspected by scatterplot and histogram, and mean score differences were compared by paired t-test. Pearson correlations were computed between instrument scores and also between dimension scores within instruments. Between-instrument agreement was assessed by intra-class correlation coefficient (ICC). Results: SF-6D and HUI3 forms were available from 246 patients. Mean scores for HUI3 and SF-6D were 0.61 (95% CI 0.60–0.63) and 0.58 (95% CI 0.54–0.62), respectively; a difference of 0.03 (p < 0.03). Score intervals for HUI3 and SF-6D were (−0.21 to 1.0) and (0.30 to 0.95). Correlation between the instrument scores was 0.58 (95% CI 0.48–0.68) and agreement by ICC was 0.42 (95% CI 0.31–0.52). Correlations between dimensions of SF-6D were higher than for HUI3. Conclusions: Our study casts doubt on whether utilities and QALYs estimated via SF-6D are comparable with those from HUI3. Utility differences may be due to differences in underlying concepts of health being measured, or different measurement approaches, or both. No gold standard exists for utility measurement, and the SF-6D is a valuable addition that permits SF-36 data to be transformed into utilities to estimate QALYs. The challenge is developing a better understanding as to why these classification-based utility instruments differ so markedly in their distributions and point estimates of derived utilities. Copyright © 2003 John Wiley & Sons, Ltd. [source]
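The agreement statistic above is typically the two-way random-effects ICC for absolute agreement, ICC(2,1) in Shrout–Fleiss terms, computed from ANOVA mean squares. A hedged sketch with invented paired utilities:

```python
import numpy as np

rng = np.random.default_rng(8)
n, k = 246, 2                                    # subjects, instruments

truth = rng.normal(0.6, 0.15, n)
scores = np.column_stack([truth + rng.normal(0, 0.08, n),          # HUI3-like
                          truth - 0.03 + rng.normal(0, 0.08, n)])  # SF-6D-like

grand = scores.mean()
ms_rows = k*((scores.mean(axis=1) - grand)**2).sum()/(n - 1)   # between subjects
ms_cols = n*((scores.mean(axis=0) - grand)**2).sum()/(k - 1)   # between instruments
sse = ((scores - scores.mean(axis=1, keepdims=True)
               - scores.mean(axis=0) + grand)**2).sum()
ms_err = sse/((n - 1)*(k - 1))

icc21 = (ms_rows - ms_err)/(ms_rows + (k - 1)*ms_err + k*(ms_cols - ms_err)/n)
print(f"ICC(2,1) = {icc21:.2f}")
```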


The performance of sample selection estimators to control for attrition bias

HEALTH ECONOMICS, Issue 5 2001
Astrid Grasdal
Abstract Sample attrition is a potential source of selection bias in experimental, as well as non-experimental programme evaluation. For labour market outcomes, such as employment status and earnings, missing data problems caused by attrition can be circumvented by the collection of follow-up data from administrative registers. For most non-labour market outcomes, however, investigators must rely on participants' willingness to co-operate in keeping detailed follow-up records and statistical correction procedures to identify and adjust for attrition bias. This paper combines survey and register data from a Norwegian randomized field trial to evaluate the performance of parametric and semi-parametric sample selection estimators commonly used to correct for attrition bias. The considered estimators work well in terms of producing point estimates of treatment effects close to the experimental benchmark estimates. Results are sensitive to exclusion restrictions. The analysis also demonstrates an inherent paradox in the 'common support' approach, which prescribes exclusion from the analysis of observations outside of common support for the selection probability. The more important treatment status is as a determinant of attrition, the larger is the proportion of treated with support for the selection probability outside the range, for which comparison with untreated counterparts is possible. Copyright © 2001 John Wiley & Sons, Ltd. [source]
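A hedged sketch of the parametric two-step (Heckman-type) correction evaluated in the paper: fit a probit for being observed at follow-up, form the inverse Mills ratio, and add it to the outcome regression. Data, the exclusion restriction and all coefficients are invented, and the steps are coded by hand rather than via any packaged Heckman routine:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(9)
n = 2000
x = rng.normal(size=n)                       # outcome regressor
z = rng.normal(size=n)                       # exclusion restriction (response only)
u, e = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n).T

observed = (0.3 + 0.8*z - 0.6*x + u > 0)     # selection (attrition) equation
y = 1.0 + 0.5*x + e                          # outcome equation; true slope 0.5

# Step 1: probit for the probability of being observed
W = sm.add_constant(np.column_stack([x, z]))
probit = sm.Probit(observed.astype(float), W).fit(disp=0)
xb = W @ probit.params
imr = norm.pdf(xb)/norm.cdf(xb)              # inverse Mills ratio

# Step 2: outcome regression on responders, with and without the correction
X_naive = sm.add_constant(x[observed])
X_heck = sm.add_constant(np.column_stack([x, imr])[observed])
print("naive OLS slope :", sm.OLS(y[observed], X_naive).fit().params[1])
print("corrected slope :", sm.OLS(y[observed], X_heck).fit().params[1])
```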


Bayesian estimation of financial models

ACCOUNTING & FINANCE, Issue 2 2002
Philip Gray
This paper outlines a general methodology for estimating the parameters of financial models commonly employed in the literature. A numerical Bayesian technique is utilised to obtain the posterior density of model parameters and functions thereof. Unlike maximum likelihood estimation, where inference is only justified in large samples, the Bayesian densities are exact for any sample size. A series of simulation studies is conducted to compare the properties of point estimates, the distribution of option and bond prices, and the power of specification tests under maximum likelihood and Bayesian methods. Results suggest that maximum-likelihood-based asymptotic distributions have poor finite-sample properties. [source]
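A minimal sketch of a numerical Bayesian technique in this spirit: random-walk Metropolis sampling of the posterior for the mean and volatility of a Gaussian returns model, from which posterior densities of functions of the parameters follow directly. The data, priors and tuning constants are all invented:

```python
import numpy as np

rng = np.random.default_rng(10)
r = rng.normal(0.0005, 0.01, 250)            # invented daily returns

def log_post(mu, log_sig):
    sig = np.exp(log_sig)
    loglik = -len(r)*np.log(sig) - ((r - mu)**2).sum()/(2*sig**2)
    logprior = -mu**2/(2*0.1**2) - log_sig**2/2   # loose Gaussian priors (invented)
    return loglik + logprior

theta, chain = np.array([0.0, np.log(0.02)]), []
for _ in range(20000):
    prop = theta + rng.normal(0, [0.001, 0.05])   # random-walk proposal
    if np.log(rng.uniform()) < log_post(*prop) - log_post(*theta):
        theta = prop                              # accept
    chain.append(theta)
chain = np.array(chain[5000:])                    # discard burn-in
print("posterior means:", chain[:, 0].mean(), np.exp(chain[:, 1]).mean())
```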


Estimating the snow water equivalent on the Gatineau catchment using hierarchical Bayesian modelling

HYDROLOGICAL PROCESSES, Issue 4 2006
Ousmane Seidou
Abstract One of the most important parameters for spring runoff forecasting is the snow water equivalent on the watershed, often estimated by kriging using in situ measurements, and in some cases by remote sensing. It is known that kriging techniques provide little information on uncertainty, aside from the kriging variance. In this paper, two approaches using Bayesian hierarchical modelling are compared with ordinary kriging; Bayesian hierarchical modelling is a flexible and general statistical approach that uses observations and prior knowledge to make inferences on both unobserved data (snow water equivalent on the watershed where there are no measurements) and on the parameters (influence of the covariables, spatial interactions between the values of the process at various sites). The first approach models snow water equivalent as a Gaussian spatial process, for which the mean varies in space, and the other uses the theory of Markov random fields. Although kriging and the Bayesian models give similar point estimates, the latter provide more information on the distribution of the snow water equivalent. Furthermore, kriging may considerably underestimate interpolation error. Copyright © 2006 Environment Canada. Published by John Wiley & Sons, Ltd. [source]


Bayesian counterfactual analysis of the sources of the great moderation

JOURNAL OF APPLIED ECONOMETRICS, Issue 2 2008
Chang-Jin Kim
We use counterfactual experiments to investigate the sources of the large volatility reduction in US real GDP growth in the 1980s. Contrary to an existing literature that conducts counterfactual experiments based on classical estimation and point estimates, we consider Bayesian analysis that provides a straightforward measure of estimation uncertainty for the counterfactual quantity of interest. Using Blanchard and Quah's (1989) structural VAR model of output growth and the unemployment rate, we find strong statistical support for the idea that a counterfactual change in the size of structural shocks alone, with no corresponding change in the propagation of these shocks, would have produced the same overall volatility reduction as what actually occurred. Looking deeper, we find evidence that a counterfactual change in the size of aggregate supply shocks alone would have generated a larger volatility reduction than a counterfactual change in the size of aggregate demand shocks alone. We show that these results are consistent with a standard monetary VAR, for which counterfactual analysis also suggests the importance of shocks in generating the volatility reduction, but with the counterfactual change in monetary shocks alone generating a small reduction in volatility. Copyright © 2007 John Wiley & Sons, Ltd. [source]