Relative Efficiency



Selected Abstracts


THE RELATIVE EFFICIENCY OF CHARTER SCHOOLS

ANNALS OF PUBLIC AND COOPERATIVE ECONOMICS, Issue 1 2009
Shawna Grosskopf
Abstract: This analysis compares the technical efficiency of charter school primary and secondary campuses with that of comparable campuses in traditional Texas school districts. Charter schools are hybrids: publicly funded, but not required to meet all the state regulations relevant for traditional schools. Student performance is measured using value added on standardized tests in reading and mathematics, and efficiency is measured using the input distance function. The analysis suggests that, at least in Texas, charter schools are substantially more efficient than traditional public schools. [source]


Estimating the Relative Efficiency of Brazilian Publicly and Privately Owned Water Utilities: A Stochastic Cost Frontier Approach

JOURNAL OF THE AMERICAN WATER RESOURCES ASSOCIATION, Issue 5 2007
Geraldo Da Silva e Souza
R15; R38. Abstract: This paper assesses the cost efficiency of Brazilian public and private water supply companies. To measure efficiency, we used a stochastic frontier model derived from the translog family: a specification similar to a Cobb-Douglas, including a quadratic term in log output. The model parameters are estimated by maximum likelihood using Brazilian data for the year 2002. Statistical inference leads to the conclusion that there is no evidence that private firms and public firms are significantly different in terms of efficiency measurements. [source]
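The cost specification described above can be sketched in a few lines. The parameter values and the `cost_efficiency` helper below are illustrative assumptions for exposition, not the paper's estimates.

```python
import math

def log_cost(y, wages, beta0, beta_y, beta_yy, gammas):
    """Log cost frontier of the kind described above: Cobb-Douglas in
    input prices plus a quadratic term in log output.
    ln C = b0 + b_y*ln y + b_yy*(ln y)^2 + sum_i g_i*ln w_i
    All parameter values passed in are illustrative."""
    ln_y = math.log(y)
    return (beta0 + beta_y * ln_y + beta_yy * ln_y ** 2
            + sum(g * math.log(w) for g, w in zip(gammas, wages)))

def cost_efficiency(observed_cost, y, wages, *params):
    """Cost efficiency as frontier cost over observed cost (1.0 = on
    the frontier; below 1.0 = costlier than the frontier predicts)."""
    frontier = math.exp(log_cost(y, wages, *params))
    return frontier / observed_cost
```

A firm whose observed cost equals the frontier prediction scores 1.0; a firm spending twice the frontier cost scores 0.5.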


Relative Efficiency of Price Discovery on an Established New Market and the Main Board: Evidence from Korea

ASIA-PACIFIC JOURNAL OF FINANCIAL STUDIES, Issue 4 2010
Kyong Shik Eom
G10; G14; G15. Abstract: We examine the relative efficiency of price discovery between the new market (KOSDAQ) and the main board (KOSPI) in the Korean stock markets, which have the same trading mechanism (i.e. an electronic limit-order book), focusing on comparisons of each market's efficiency of price discovery in three aspects: speed, degree, and accuracy. We find that, for our entire firm sample, price discovery on KOSDAQ is less efficient than on KOSPI. However, the price discovery of the most liquid group (top 40 stocks) on KOSDAQ turns out to be as efficient as the lowest group (top 160th-200th stocks) among the top 200 liquid stocks on KOSPI. These two quintiles are comparable in terms of their firm characteristics, so it appears that the greater overall efficiency of price discovery on KOSPI is due to the characteristics of its listed firms, rather than any inherent difference between a main board and a new market. We also find evidence that the speed of price discovery is mainly determined by turnover, whereas the accuracy of price discovery is mainly determined by turnover and intraday volatility. Altogether, our results provide some policy implications for developing or even developed countries eager to establish a viable new market. First, price discovery in a successful or viable new market in an emerging economy behaves as predicted in the market microstructure literature, even though that literature is based primarily on main boards in advanced stock markets. Second, price discovery in the most liquid group in a new market is more accurate, although slower, than in the lowest group among the liquid stocks on a main board; on balance, the main board and new market are comparable. Finally, the accuracy of price discovery is more (less) impacted by turnover (intraday volatility) on the new market than on the main board. [source]


Ecophysiological controls over the net ecosystem exchange of mountain spruce stand.

GLOBAL CHANGE BIOLOGY, Issue 1 2007
Comparison of the response in direct vs. diffuse solar radiation
Abstract Cloud cover increases the proportion of diffuse radiation reaching the Earth's surface and affects many microclimatic factors such as temperature, vapour pressure deficit and precipitation. We compared the relative efficiencies of canopy photosynthesis to diffuse and direct photosynthetic photon flux density (PPFD) for a Norway spruce forest (25 years old, leaf area index 11 m2 m-2) during two successive 7-day periods in August. The comparison was based on the response of net ecosystem exchange (NEE) of CO2 to PPFD. NEE and stomatal conductance at the canopy level (Gcanopy) were estimated from half-hourly eddy-covariance measurements of CO2 and H2O fluxes. In addition, daily courses of CO2 assimilation rate (AN) and stomatal conductance (Gs) at shoot level were measured using a gas-exchange technique applied to branches of trees. The extent of spectral changes in incident solar radiation was assessed using a spectroradiometer. We found significantly higher NEE (up to 150%) during the cloudy periods compared with the sunny periods at corresponding PPFDs. Prevailing diffuse radiation during the cloudy days resulted in a significantly lower compensation irradiance (by ca. 50% and 70%), while apparent quantum yield was slightly higher (by ca. 7%) at canopy level and significantly higher (by ca. 530%) in sun-acclimated shoots. The main reasons for these differences appear to be (1) more favourable microclimatic conditions during cloudy periods, (2) stimulation of photochemical reactions and stomatal opening via an increase in the blue/red light ratio, and (3) increased penetration of light into the canopy and thus a more equitable distribution of light between leaves. Our analyses identified the most important reason for enhanced NEE under cloudy-sky conditions to be the effective penetration of diffuse radiation to lower depths of the canopy. This subsequently led to a significantly higher solar equivalent leaf area than under direct radiation.
Most of the leaves in such a dense canopy are in deep shade, with marginal or negative carbon balances during sunny days. These findings show that the energy of diffuse, compared with direct, solar radiation is used more efficiently in assimilation processes at both leaf and canopy levels. [source]
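Canopy light responses of the kind summarized above are commonly modelled as a rectangular hyperbola of PPFD; the sketch below shows how a higher apparent quantum yield (as reported under diffuse light) lowers the compensation irradiance. All parameter values are illustrative, not the study's estimates.

```python
def nee(q, alpha, a_max, r_d):
    """Net ecosystem exchange as a rectangular hyperbola of PPFD q:
    alpha = apparent quantum yield, a_max = light-saturated uptake,
    r_d = ecosystem respiration. Illustrative model, not the paper's fit."""
    return (alpha * q * a_max) / (alpha * q + a_max) - r_d

def compensation_irradiance(alpha, a_max, r_d):
    """PPFD at which NEE = 0, obtained by solving the hyperbola for q."""
    return r_d * a_max / (alpha * (a_max - r_d))
```

With the same capacity and respiration, raising alpha by a few percent shifts the zero-crossing of the light response to a lower PPFD, consistent with the lower compensation irradiance reported for cloudy periods.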


Applying benchmarking and data envelopment analysis (DEA) techniques to irrigation districts in Spain

IRRIGATION AND DRAINAGE, Issue 2 2004
J. A. Rodríguez Díaz
performance indicators; benchmarking; DEA. Abstract: In this research, the application of data envelopment analysis (DEA) is proposed as a methodology to overcome the lack of an objective way to assign weightings when calculating composite indexes, and the subjectivity involved in interpreting the results. DEA is a linear programming technique for determining the relative efficiencies of production units within a company when their inputs and outputs are known but the productive process itself is not. In this way, quantitative efficiencies and the weighting of any performance indicator can be assessed and compared, permitting managers to obtain a well-defined performance ranking. This is especially important when managers have a limited budget. The results of applying this methodology to Andalusian irrigation districts (Spain) are presented and discussed here. The study was used to select the most representative irrigation districts in Andalusia, which were then studied in greater depth by applying the performance indicators selected by IPTRID for its international benchmarking programme. Copyright © 2004 John Wiley & Sons, Ltd. [source]
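As a rough illustration of the DEA idea: in the single-input, single-output, constant-returns case, each unit's efficiency score reduces to its productivity ratio relative to the best observed ratio (the general multi-input, multi-output case requires solving one linear programme per unit). The irrigation district data below are hypothetical.

```python
def dea_ccr_efficiency(inputs, outputs):
    """Constant-returns (CCR-style) DEA scores for the special case of
    one input and one output: each unit's output/input ratio divided by
    the best observed ratio. Scores lie in (0, 1], with 1 = efficient.
    General DEA solves an LP per unit instead of this shortcut."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical districts: water applied (input) vs crop value (output)
water = [100.0, 120.0, 90.0]
value = [200.0, 180.0, 180.0]
scores = dea_ccr_efficiency(water, value)  # districts 1 and 3 define the frontier
```

The dominated district (more water for less crop value) scores below 1 and its score says how far its input use could shrink while staying on the observed frontier.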


Quantitative colonoscopic evaluation of relative efficiencies of an immunochemical faecal occult blood test and a sensitive guaiac test for detecting significant colorectal neoplasms

ALIMENTARY PHARMACOLOGY & THERAPEUTICS, Issue 4 2009
P. ROZEN
Summary. Background: The guaiac faecal occult blood test (G-FOBT), HemoccultSENSA, is sensitive for significant neoplasms [colorectal cancer (CRC), advanced adenomatous polyps (AAP)], but faulted by non-specificity for human haemoglobin (Hb). Quantified, Hb-specific, immunochemical faecal occult blood tests (I-FOBT) are now used. Aims: To (i) compare I-FOBT and G-FOBT efficacy in identifying significant neoplasms and colonoscopy needs for positive tests and (ii) examine the number of I-FOBTs needed and the test threshold to use for equivalent or better sensitivity than G-FOBT and fewest colonoscopies for positive tests. Methods: Three daily G-FOBTs and I-FOBTs were collected and analysed in 330 patients scheduled for colonoscopy. Results: Colonoscopy found significant neoplasms in 32 patients: 6 CRC, 26 AAP. G-FOBT sensitivity and specificity were 53.1% (17 neoplasms) and 59.4%, resulting in 8.1 colonoscopies/neoplasm. One I-FOBT having ≥50 ng Hb/mL of buffer provided equivalent sensitivity but 94.0% specificity, resulting in 2.1 colonoscopies/neoplasm. By analysing the higher of two I-FOBTs at the 50 ng Hb/mL threshold, sensitivity increased to 68.8% (22 neoplasms, P = 0.063) and specificity fell to 91.9% (P < 0.001), but still required 2.1 colonoscopies/neoplasm. Conclusions: In this population, quantified I-FOBT had significantly better specificity than G-FOBT for significant neoplasms, reducing the number of colonoscopies needed per neoplasm detected. Results depend on the number of I-FOBTs performed and the chosen development threshold. [source]
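The colonoscopies-per-neoplasm figures quoted above follow directly from the reported counts, sensitivity and specificity; a minimal sketch:

```python
def colonoscopies_per_neoplasm(n_patients, n_with_neoplasm,
                               sensitivity, specificity):
    """Expected colonoscopies performed per neoplasm detected:
    every positive test triggers a colonoscopy, so the ratio is
    (true positives + false positives) / true positives."""
    tp = sensitivity * n_with_neoplasm
    fp = (1 - specificity) * (n_patients - n_with_neoplasm)
    return (tp + fp) / tp

# Values from the abstract: 330 patients, 32 with significant neoplasms
g_fobt = colonoscopies_per_neoplasm(330, 32, 0.531, 0.594)  # about 8.1
i_fobt = colonoscopies_per_neoplasm(330, 32, 0.531, 0.940)  # about 2.1
```

The 94.0% specificity of the single I-FOBT is what cuts the workload roughly four-fold: false positives fall from about 121 to about 18 while true positives stay at 17.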


Use of Binomial Group Testing in Tests of Hypotheses for Classification or Quantitative Covariables

BIOMETRICS, Issue 1 2000
Ming-Chin Hung
Summary. In group testing, the test unit consists of a group of individuals. If the group test is positive, then one or more individuals in the group are assumed to be positive. A group observation in binomial group testing can be, say, the test result (positive or negative) for a pool of blood samples that come from several different individuals. It has been shown that, when the proportion (p) of infected individuals is low, group testing is often preferable to individual testing for identifying infected individuals and for estimating proportions of those infected. We extend the potential applications of group testing to hypothesis-testing problems wherein one wants to test for a relationship between p and a classification or quantitative covariable. Asymptotic relative efficiencies (AREs) of tests based on group testing versus the usual individual testing are obtained. The Pitman ARE strongly favors group testing in many cases. Small-sample results from simulation studies are given and are consistent with the large-sample (asymptotic) findings. We illustrate the potential advantages of group testing in hypothesis testing using HIV-1 seroprevalence data. [source]
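The paper's Pitman AREs concern hypothesis tests about covariable effects; as a simpler companion, the sketch below illustrates why pooling is efficient per test at low prevalence, using the standard delta-method asymptotic variance for the pooled-testing MLE of p. The pool size and prevalence are illustrative choices.

```python
def var_phat_individual(p, n_tests):
    """Binomial variance of p-hat with one individual per test."""
    return p * (1 - p) / n_tests

def var_phat_group(p, k, n_tests):
    """Delta-method asymptotic variance of the MLE of p when each of
    n_tests pools contains k individuals: p-hat = 1 - (1 - T/n)**(1/k),
    where T is the number of positive pools."""
    q = 1 - p
    pi = 1 - q ** k  # probability a pool tests positive
    return pi * (1 - pi) * q ** (2 - 2 * k) / (n_tests * k ** 2)

# Per-test relative efficiency of pooled vs individual testing
p, k, n = 0.01, 10, 100
re = var_phat_individual(p, n) / var_phat_group(p, k, n)  # well above 1
```

At 1% prevalence with pools of 10, each pooled test carries information about 10 individuals, so the variance per test is nearly an order of magnitude smaller, consistent with the abstract's observation that group testing is favoured when p is low.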


Biogeographic Crossroads as Priority Areas for Biodiversity Conservation

CONSERVATION BIOLOGY, Issue 6 2002
Sacha Spector
Abstract: Threats to biodiversity outstrip the resources of the conservation community and require careful prioritization of conservation actions. I suggest that targeting the regions where biogeographic assemblages intersect ("biogeographic crossroads") is a strategy that may achieve significant conservation economy by focusing on areas that satisfy many conservation criteria. I used a combination of data on Scarabaeine beetles in Bolivia and on other taxa and locations from the literature to consider the short- and long-term benefits of conserving these biogeographic crossroads. Biogeographic crossroads are areas of high species richness and beta diversity, often across many taxonomic groups. They are also regions where representativeness can be achieved with relative efficiency. Recent evidence that ecotones may be loci of evolution suggests that evolutionary processes such as speciation and coevolution may be conserved at biogeographic crossroads. Biogeographic crossroads appear to be areas of high conservation priority and opportunity in both the short and long term and require increased attention in the process of setting conservation priorities. [source]


Production Efficiency and the Pricing of Audit Services

CONTEMPORARY ACCOUNTING RESEARCH, Issue 1 2003
Nicholas Dopuch
Abstract In this paper, we examine the relative efficiency of audit production by one of the then Big 6 public accounting firms for a sample of 247 geographically dispersed audits of U.S. companies performed in 1989. To test the relative efficiency of audit production, we use both stochastic frontier estimation (SFE) and data envelopment analysis (DEA). A feature of our research is that we also test whether any apparent inefficiencies in production, identified using SFE and DEA, are correlated with audit pricing. That is, do apparent inefficiencies cause the public accounting firm to reduce its unit price (billing rate) per hour of labor utilized on an engagement? With respect to results, we do not find any evidence of relative (within-sample) inefficiencies in the use of partner, manager, senior, or staff labor hours using SFE. This suggests that the SFE model may not be sufficiently powerful to detect inefficiencies, even with our reasonably large sample size. However, we do find apparent inefficiencies using the DEA model. Audits range from about 74 percent to 100 percent relative efficiency in production, while the average audit is produced at about an 88 percent efficiency level, relative to the most efficient audits in the sample. Moreover, the inefficiencies identified using DEA are correlated with the firm's realization rate. That is, average billing rates per hour fall as the amount of inefficiency increases. Our results suggest that there are moderate inefficiencies in the production of many of the subject public accounting firm's audits, and that such inefficiencies are economically costly to the firm. [source]


Dual Economies and International Total Factor Productivity Differences: Channelling the Impact from Institutions, Trade, and Geography

ECONOMICA, Issue 300 2008
AREENDAM CHANDA
This paper provides a framework that decomposes aggregate total factor productivity (TFP) into a component reflecting relative efficiency across sectors, and another component that reflects the absolute level of efficiency. A development accounting analysis suggests that as much as 85% of the international variation in aggregate TFP can be attributed to variation in relative efficiency across sectors. Estimation results show that recent findings highlighting the importance of strong protection of property rights, financial development and geographical advantage for the level of TFP, can be explained by their impact on relative efficiency. [source]


Improving the precision of longitudinal ecological surveys using precisely defined observational units

ENVIRONMETRICS, Issue 3 2003
Daniel A. J. Ryan
Abstract In ecological longitudinal studies, it is difficult to precisely define an observational unit. In many cases the investigator must decide between permanent observational units or observational units with no permanent markers (re-established each year). The latter choice is appealing as there are no maintenance costs, but the loss of precision in a longitudinal study can be severe. In this article, we analytically demonstrate that by not permanently and precisely marking observational units in a longitudinal study, the efficiency can be reduced, or the cost increased, or both. The magnitude of loss is illustrated for a coral reef benthic survey where precision was reduced by as much as a factor of 12. General formulae to estimate the relative efficiency of ecological surveys based on different levels of observational unit markings are also presented. Copyright © 2003 John Wiley & Sons, Ltd. [source]
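The precision loss described above can be illustrated with the variance of a year-to-year change estimate: permanent units retain the between-year correlation rho, while re-established units lose it. The correlation value 11/12 below is an assumed figure chosen only to reproduce the reported 12-fold factor.

```python
def change_variance(sigma2, rho, permanent):
    """Variance of a year-to-year change estimate for one unit.
    Permanent units: Var(X2 - X1) = 2*sigma2*(1 - rho), since the same
    location is remeasured. Re-established units: the between-year
    correlation is effectively zero, giving 2*sigma2."""
    return 2 * sigma2 * (1 - rho) if permanent else 2 * sigma2

def relative_efficiency(rho):
    """How many re-established units are needed per permanent unit to
    match the precision of the change estimate: 1 / (1 - rho)."""
    return 1 / (1 - rho)
```

A between-year correlation of 11/12 implies a 12-fold precision penalty for unmarked units, the magnitude reported for the coral reef benthic survey.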


Systematic sample design for the estimation of spatial means

ENVIRONMETRICS, Issue 1 2003
Luis Ambrosio Flores
Abstract This article develops a practical approach to undertaking systematic sampling for the estimation of the spatial mean of an attribute in a selected area. A design-based approach is used to estimate population parameters, but it is combined with elements of a model-based approach in order to identify the spatial correlation structure, to evaluate the relative efficiency of the sample mean under simple random and systematic sampling, to estimate sampling error and to assess the sample size needed in order to achieve a desired level of precision. Using two case studies (land use estimation and weed seedbank in soil) it is demonstrated how the practical basis for the design of systematic samples provided in this work should be applied and it is shown that if the spatial correlation is ignored the sampling error of the sample mean and the sample size needed in order to achieve a desired level of precision with systematic sampling are overestimated. Copyright © 2003 John Wiley & Sons, Ltd. [source]


An Evaluation of the Cod Fishing Policies of Denmark, Iceland and Norway

EUROCHOICES, Issue 3 2004
R. Arnason
Summary: Many ocean fisheries are subject to a fundamental economic problem generally referred to as the common property problem. This problem manifests itself as excessive fishing fleets and fishing effort, depressed fish stocks and little or no profitability of the fishing activity, irrespective of the richness of the underlying marine resources. European fisheries represent some of the most dramatic examples of the common property problem. This article employs simple empirical models and recently developed mathematical techniques to examine the economic efficiency of three European fisheries, namely the Danish, Icelandic and Norwegian cod fisheries. The optimal harvesting policies for each of these fisheries are calculated. Comparing these optimal policies with actual harvests provides a measure of the relative efficiency of these three cod fisheries. The comparison confirms the widely held impression that the cod harvesting policies of all three countries have been hugely inefficient in the past. Moreover, it appears that the inefficiency has been increasing over time. Only during the last few years of our data are there indications that this negative trend may have been halted. Somewhat more surprisingly, in spite of radically different fisheries management systems, we find relatively little difference in the level of stock over-exploitation between these three countries. [source]


One-year follow up of patients with chronic tinnitus treated with left temporoparietal rTMS

EUROPEAN JOURNAL OF NEUROLOGY, Issue 3 2009
E. M. Khedr
Background and purpose: Although there are a number of positive reports on the therapeutic effects of repetitive transcranial magnetic stimulation (rTMS) for treatment of tinnitus, there are few details about the duration of treatment effects or the relative efficiency of different rTMS protocols. Methods: Sixty-six patients with chronic tinnitus were divided into four groups, receiving sham rTMS or 1, 10 or 25 Hz rTMS applied each day for 10 days over the left temporoparietal cortex. They were followed up at 4 months and 1 year using the tinnitus questionnaire [Tinnitus Handicap Inventory (THI)] and self-ratings of annoyance, as well as measures of residual inhibition. Results: A two-factor ANOVA revealed a significant 'rTMS' × 'time' interaction, indicating that real and sham rTMS had different effects on the THI scale and annoyance of tinnitus (P = 0.026 and 0.046, respectively). After 1 year, tinnitus was absent in one or both ears of 10 patients who had received real rTMS: one of these was in the 1 Hz group, four patients were in the 10 Hz group and five patients were in the 25 Hz group. Conclusion: Some patients show a lasting benefit at 1 year after 10 days of rTMS treatment. It appears that treatment at 10 or 25 Hz may be more beneficial than at 1 Hz, although more work is necessary to validate this conclusion. [source]


Kinetics of inhibition of acetylcholinesterase in the presence of acetonitrile

FEBS JOURNAL, Issue 8 2009
Markus Pietsch
The hydrolysis of acetylthiocholine by acetylcholinesterase from Electrophorus electricus was investigated in the presence of the inhibitors tacrine, gallamine and compound 1. The interaction of the enzyme with the substrate and the inhibitors was characterized by the parameters KI, α, b, Km and Vmax, which were determined directly and simultaneously from nonlinear Michaelis-Menten plots. Tacrine was shown to act as a mixed-type inhibitor with a strong noncompetitive component (α ≈ 1) and to completely block deacylation of the acyl-enzyme. In contrast, acetylcholinesterase inhibition by gallamine followed the 'steric blockade hypothesis', i.e. only substrate association to, as well as substrate/product dissociation from, the active site were reduced in the presence of the inhibitor. The relative efficiency of the acetylcholinesterase-gallamine complex for the catalysis of substrate conversion was determined to be 1.7-25% of that of the free enzyme. Substrate hydrolysis and the inhibition of acetylcholinesterase were also investigated in the presence of 6% acetonitrile, and a competitive pseudo-inhibition was observed for acetonitrile (KI = 0.25 M). The interaction of acetylcholinesterase with acetonitrile and tacrine or gallamine resulted in a seven- to 10-fold increase in the KI values, whereas the principal mode of inhibition was not affected by the organic solvent. The determination of the inhibitory parameters of compound 1 in the presence of acetonitrile revealed that the substance acts as a hyperbolic mixed-type inhibitor of acetylcholinesterase. The complex formed by the enzyme and the inhibitor still catalysed product formation with 8.7-9.6% relative efficiency. [source]
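Mixed-type inhibition of the kind discussed above can be sketched with the textbook rate equation (shown here in its linear form for simplicity; the paper also considers hyperbolic variants). All numeric values below are illustrative, not the study's fitted constants.

```python
def rate_mixed_inhibition(s, i, vmax, km, ki, alpha):
    """Michaelis-Menten rate with a linear mixed-type inhibitor:
    I binds free enzyme with dissociation constant ki and the
    enzyme-substrate complex with alpha*ki. alpha = 1 gives pure
    noncompetitive inhibition; alpha -> infinity gives competitive.
    Illustrative textbook equation, not the paper's exact model."""
    return vmax * s / (km * (1 + i / ki) + s * (1 + i / (alpha * ki)))
```

With alpha = 1 (noncompetitive), the fractional loss of rate is the same at every substrate concentration, which is the signature the abstract describes for tacrine's strong noncompetitive component.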


Policy analysis for tropical marine reserves: challenges and directions

FISH AND FISHERIES, Issue 1 2003
Murray A Rudd
Abstract Marine reserves are considered to be a central tool for marine ecosystem-based management in tropical inshore fisheries. The arguments supporting marine reserves are often based on both the nonmarket values of ecological amenities marine reserves provide and the pragmatic cost-saving advantages relating to reserve monitoring and enforcement. Marine reserves are, however, only one of a suite of possible policy options that might be used to achieve conservation and fisheries management objectives, and have rarely been the focus of rigorous policy analyses that consider a full range of economic costs and benefits, including the transaction costs of management. If credible analyses are not undertaken, there is a danger that current enthusiasm for marine reserves may wane as economic performance fails to meet presumed potential. Fully accounting for the value of ecological services flowing from marine reserves requires consideration of increased size and abundance of focal species within reserve boundaries, emigration of target species from reserves to adjacent fishing grounds, changes in ecological resilience, and behavioural responses of fishers to spatially explicit closures. Expanding policy assessments beyond standard cost-benefit analysis (CBA) also requires considering the impact of social capital on the costs of managing fisheries. In the short term, the amount of social capital that communities possess and the capacity of the state to support the rights of individuals and communities will affect the relative efficiency of marine reserves. Reserves may be the most efficient policy option when both community and state capacity is high, but may not be when one and/or the other is weak. In the longer term, the level of social capital that a society possesses and the level of uncertainty in ecological and social systems will also impact the appropriate level of devolution or decentralization of fisheries governance.
Determining the proper balance of the state and the community in tropical fisheries governance will require broad comparative studies of marine reserves and alternative policy tools. [source]


Quantifying bias due to allele misclassification in case-control studies of haplotypes

GENETIC EPIDEMIOLOGY, Issue 7 2006
Usha S. Govindarajulu
Abstract Objectives: Genotyping errors can induce biases in frequency estimates for haplotypes of single nucleotide polymorphisms (SNPs). Here, we considered the impact of SNP allele misclassification on haplotype odds ratio estimates from case-control studies of unrelated individuals. Methods: We calculated bias analytically, using the haplotype counts expected in cases and controls under genotype misclassification. We evaluated the bias due to allele misclassification across a range of haplotype distributions using empirical haplotype frequencies within blocks of limited haplotype diversity. We also considered simple two- and three-locus haplotype distributions to understand the impact of haplotype frequency and number of SNPs on misclassification bias. Results: We found that for common haplotypes (>5% frequency), realistic genotyping error rates (0.1-1% chance of miscalling an allele), and moderate relative risks (2-4), the bias was always towards the null and increased in magnitude with increasing error rate and increasing odds ratio. For common haplotypes, bias generally increased with increasing haplotype frequency, while for rare haplotypes, bias generally increased with decreasing frequency. When the chance of miscalling an allele is 0.5%, the median bias in haplotype-specific odds ratios for common haplotypes was generally small (<4% on the log odds ratio scale), but the bias for some individual haplotypes was larger (10-20%). Bias towards the null leads to a loss in power; the relative efficiency of a test statistic based upon misclassified haplotype data compared to a test based on the unobserved true haplotypes ranged from roughly 60% to 80%, and worsened with increasing haplotype frequency. Conclusions: The cumulative effect of small allele-calling errors across multiple loci can induce noticeable bias and reduce power in realistic scenarios. This has implications for the design of candidate gene association studies that utilize multi-marker haplotypes.
Genet. Epidemiol. 2006. © 2006 Wiley-Liss, Inc. [source]
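The attenuation towards the null described above can be illustrated for a single binary marker with symmetric, nondifferential misclassification (each true state flipped with the same small probability in cases and controls). The exposure frequencies and error rate below are illustrative, not the paper's haplotype distributions.

```python
def observed_or(p_case, p_ctrl, err):
    """Odds ratio computed after symmetric, nondifferential
    misclassification: each true carrier/non-carrier state is flipped
    with probability err in both cases and controls."""
    def flip(p):
        return p * (1 - err) + (1 - p) * err

    def odds(p):
        return p / (1 - p)

    return odds(flip(p_case)) / odds(flip(p_ctrl))

true_or = observed_or(0.20, 0.10, 0.0)   # no error: OR = 2.25
biased = observed_or(0.20, 0.10, 0.01)   # 1% miscall rate: attenuated
```

Even a 1% flip probability pulls the observed odds ratio noticeably towards 1, in line with the abstract's finding that small per-allele error rates accumulate into measurable bias across multi-SNP haplotypes.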


Quantitative trait linkage analysis by generalized estimating equations: Unification of variance components and Haseman-Elston regression

GENETIC EPIDEMIOLOGY, Issue 4 2004
Wei-Min Chen
Two of the major approaches for linkage analysis with quantitative traits in humans include variance components and Haseman-Elston regression. Previously, these were viewed as quite separate methods. We describe a general model, fit by use of generalized estimating equations (GEE), for which the variance components and Haseman-Elston methods (including many of the extensions to the original Haseman-Elston method) are special cases, corresponding to different choices for a working covariance matrix. We also show that the regression-based test of Sham et al. ([2002] Am. J. Hum. Genet. 71:238-253) is equivalent to a robust score statistic derived from our GEE approach. These results have several important implications. First, this work provides new insight regarding the connection between these methods. Second, asymptotic approximations for power and sample size allow clear comparisons regarding the relative efficiency of the different methods. Third, our general framework suggests important extensions to the Haseman-Elston approach which make more complete use of the data in extended pedigrees and allow a natural incorporation of environmental and other covariates. © 2004 Wiley-Liss, Inc. [source]
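The original Haseman-Elston regression that the GEE framework generalizes can be sketched in a few lines: regress the squared sib-pair trait difference on the proportion of alleles shared identical-by-descent (IBD) at a marker, and look for a negative slope. The sib-pair data below are hypothetical toy values; a real analysis would also test the slope and use the extensions discussed above.

```python
def haseman_elston_slope(ibd, sq_diff):
    """OLS slope from regressing squared sib-pair trait differences on
    IBD sharing proportions. Under linkage, pairs sharing more alleles
    IBD are more alike, so the slope is negative."""
    n = len(ibd)
    mx = sum(ibd) / n
    my = sum(sq_diff) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(ibd, sq_diff))
    sxx = sum((x - mx) ** 2 for x in ibd)
    return sxy / sxx

# Hypothetical sib pairs: higher IBD sharing, smaller squared difference
ibd = [0.0, 0.5, 0.5, 1.0, 1.0, 0.0]
sq_diff = [4.0, 2.5, 3.0, 1.0, 0.5, 5.0]
slope = haseman_elston_slope(ibd, sq_diff)  # negative, as under linkage
```

In the GEE view, this estimator corresponds to one particular choice of working covariance matrix; variance components correspond to another.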


Stratified case sampling and the use of family controls

GENETIC EPIDEMIOLOGY, Issue 3 2001
Kimberly D. Siegmund
Abstract We compare the asymptotic relative efficiency (ARE) of different study designs for estimating gene and gene-environment interaction effects using matched case-control data. In the sampling schemes considered, cases are selected differentially based on their family history of disease. Controls are selected either from unrelated subjects or from among the case's unaffected siblings and cousins. Parameters are estimated using weighted conditional logistic regression, where the likelihood contributions for each subject are weighted by the fraction of cases sampled sharing the same family history. Results showed that, compared to random sampling, over-sampling cases with a positive family history increased the efficiency for estimating the main effect of a gene for sib-control designs (103–254% ARE) and decreased efficiency for cousin-control and population-control designs (68–94% ARE and 67–84% ARE, respectively). Population controls and random sampling of cases were most efficient for a recessive gene or a dominant gene with a relative risk less than 9. For estimating gene-environment interactions, over-sampling positive-family-history cases again led to increased efficiency using sib controls (111–180% ARE) and decreased efficiency using population controls (68–87% ARE). Using case-cousin pairs, the results differed based on the genetic model and the size of the interaction effect; biased sampling was only slightly more efficient than random sampling for large interaction effects under a dominant gene model (relative risk ratio = 8, 106% ARE). Overall, the most efficient study design for studying gene-environment interaction was the case-sib-control design with over-sampling of positive-family-history cases. Genet. Epidemiol. 20:316–327, 2001. © 2001 Wiley-Liss, Inc. [source]
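Asymptotic relative efficiency, the yardstick used throughout this abstract, is the ratio of sampling variances of two competing estimators of the same quantity. A textbook Monte Carlo illustration, unrelated to the paper's specific designs: the sample median is only about 64% as efficient as the sample mean for normal data.

```python
import numpy as np

# Relative efficiency as a variance ratio (classic illustration):
# for normal data, ARE(median vs. mean) = 2/pi ~ 0.64.
rng = np.random.default_rng(1)
reps, n = 5000, 200
means = np.array([rng.normal(0, 1, n).mean() for _ in range(reps)])
medians = np.array([np.median(rng.normal(0, 1, n)) for _ in range(reps)])

are = means.var() / medians.var()   # efficiency of median relative to mean
print(round(are, 2))                 # close to 2/pi ~ 0.64
```

The percentages quoted above (e.g. "103–254% ARE") are exactly this kind of variance ratio, with random case sampling as the reference design.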


The analysis of efficiency among a small number of organisations: How inferences can be improved by exploiting patient-level data

HEALTH ECONOMICS, Issue 6 2008
Kim Rose Olsen
Abstract Those responsible for monitoring and managing the performance of health-care organisations face the common problem that the relationship between observed performance and effort is difficult to establish. A solution is to compare the performance of multiple organisations, but this requires a sufficient number of comparators. Faced with a small sample, it may be possible to exploit other information sources. Multilevel regression models are applied to analyse the performance of six Danish vascular departments in 2004 using a patient-level data set. We find that treatment costs are higher for smokers, older patients, patients with cerebrovascular and pulmonary diseases, and for those subject to acute hospitalisation and with longer lengths of stay. Costs are lower for patients who are having follow-up surgery and for patients who receive some form of home care, suggesting that there may be some substitution of care input between vascular departments and other care providers. We estimate the relative efficiency of each department. The construction of confidence intervals allows the six departments to be sorted into two groups containing the least and most efficient departments. Conclusions about relative efficiency are robust to model specification and choice of estimator, and hold at the 95% confidence level. Copyright © 2007 John Wiley & Sons, Ltd. [source]
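A crude analogue of the department comparison, with simulated data standing in for the Danish patient-level records (the unit names, cost levels, and grouping rule below are all invented): attach 95% confidence intervals to unit-level mean costs and separate the units into an apparently efficient and an apparently inefficient group.

```python
import numpy as np

# Toy version of the efficiency grouping: six units, patient-level
# costs simulated around two latent cost levels, units with an upper
# confidence limit below the grand mean flagged as relatively efficient.
rng = np.random.default_rng(2)
true_cost = [10.0, 10.2, 10.1, 12.0, 12.3, 11.9]   # two latent groups
depts = {f"dept{i}": rng.normal(mu, 1.0, 150)
         for i, mu in enumerate(true_cost)}

summary = {}
for name, costs in depts.items():
    m = costs.mean()
    se = costs.std(ddof=1) / np.sqrt(len(costs))
    summary[name] = (m, m - 1.96 * se, m + 1.96 * se)

grand = np.mean([v[0] for v in summary.values()])
efficient = sorted(n for n, (m, lo, hi) in summary.items() if hi < grand)
print(efficient)
```

The paper's multilevel approach additionally adjusts for patient case mix (smoking, age, comorbidity), which this sketch omits entirely.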


Network density control in epoxy–silica hybrids by selective silane functionalization of precursors

ADVANCES IN POLYMER TECHNOLOGY, Issue 2 2005
Luca Prezzi
Abstract Following previous work on the compatibilization of organic–inorganic hybrids through coupling reactions with the precursor components, the present study evaluates the relative efficiency of different types of coupling agents with respect to the morphology and properties of epoxy–silica hybrids. In particular, this investigation compares the effects of introducing trialkoxysilane functional groups at the chain end (using amine- and mercapto-silanes) with similar types grafted in the middle of the chain of the constituent resin (using an isocyanate silane). The use of coupling agents with a basic character (amine silane type) brings about the formation of denser networks in both constituent phases of the resulting epoxy–silica hybrid, which is manifest through a large increase in the Tg and a more extensive suppression of the molecular relaxations within the glass transition regions. Increasing the number of alkoxysilane functional groups at the chain end, with the use of a bis-aminosilane, has a relatively minor effect on the morphology and dynamic mechanical spectra of the resulting epoxy–silica hybrids. It was also found that while the incorporation of small amounts of a high molecular weight epoxy resin causes considerable plasticization of the organic phase, much larger amounts of organic (aliphatic) co-agent within the siloxane phase are required to deteriorate those properties that are related to the inorganic character of the hybrid material. © 2005 Wiley Periodicals, Inc. Adv Polym Techn 24:91–102, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/adv.20033 [source]


Virus-vectored immunocontraception to control feral cats on islands: a mathematical model

JOURNAL OF APPLIED ECOLOGY, Issue 6 2000
Franck Courchamp
Summary 1. Feral cats Felis catus introduced onto oceanic islands pose a major ecological threat to endemic vertebrates, but their control is difficult. Immunocontraception has not been considered previously as a method for their control or eradication, and therefore we used a modelling approach to assess whether virus-vectored immunocontraception (VVIC) might be effective. 2. We compared the relative efficiency of cat control/eradication using immunocontraception and three different disseminating techniques, i.e. baits, genetically modified viral vectors, or both. We accounted for several forms of dynamic compensation likely to arise in a population with artificially reduced fertility. 3. We conclude that, under the assumptions of our model, immunocontraception can control or eradicate feral cats on oceanic islands. VVIC was found to be a more efficient dissemination technique than baits, but an integrated method involving viral-infected baits was the most likely to lead to eradication. 4. We advocate field trials of this VVIC technique, when available, under island conditions where any risks to non-target fauna would be minimal. [source]
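The coverage logic behind fertility control can be caricatured in a few lines. The toy discrete-time model below uses invented birth and mortality rates and is far simpler than the authors' model (it has no dynamic compensation at all); it only shows that population growth flips to decline once contraceptive coverage is high enough.

```python
# Toy sterilization model (illustrative rates, not the paper's model):
# a fraction `sterile` of breeding females are immunocontracepted.

def project(n0, birth_rate, mortality, sterile, years):
    n = n0
    for _ in range(years):
        births = birth_rate * n * (1 - sterile)
        n = n + births - mortality * n
    return n

n_low = project(1000, 0.5, 0.3, 0.2, 20)    # 20% coverage: still grows
n_high = project(1000, 0.5, 0.3, 0.8, 20)   # 80% coverage: declines
print(round(n_low), round(n_high))
```

The modelling question the paper addresses is precisely which dissemination technique (baits, viral vectors, or both) reaches the high-coverage regime fastest once compensatory responses are accounted for.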


A randomized comparison of plateletpheresis with the same donors using four blood separators at a single blood center

JOURNAL OF CLINICAL APHERESIS, Issue 4 2002
Grace C. Tenorio
Abstract At one blood center, each of 20 donors underwent plateletpheresis on four blood cell separators in random order. We compared the CS3000+, Amicus V 2.41, MCS Plus, and Spectra LRS V 7 Turbo with respect to platelet (PLT) yield, pre- and post-procedure PLT counts, percent fall in donor PLT count, process time, efficiency, and PLT product and donor mean PLT volume (MPV). Using ≥150 × 10⁹ PLTs/L pre-donation counts, a goal was set of 4.5 × 10¹¹ PLTs/unit in up to 100 minutes of processing time. Results were (mean values): PLT yields of Amicus, Spectra, CS3000+, and MCS Plus: 4.3, 4.6, 4.3, and 4.0 × 10¹¹ PLTs, respectively; percent donor PLT fall: 24, 32, 30, and 29%, respectively; processing times: 50, 74, 87, and 101 minutes, respectively; relative efficiency (RE): 2.2, 1.6, 1.2, and 1.0, respectively (based on the MCS Plus performance, with RE of 1 = 4 × 10⁹ PLTs/min); PLT product MPV: 6.7, 7.4, 6.8, and 7.1 fL, respectively; pre-procedure donor MPV: 7.7, 7.3, 7.6, and 7.6 fL, respectively; and percent donor MPV change: −5.2, 0, −6.6, and −10%, respectively. Significant changes in the donor MPV were noted (P < 0.05) but could not be related to product MPV. Spectra seemed to collect larger PLTs (higher MPV); the significance remains unknown for both donors and recipients. Importantly, all four separators gave acceptable and comparable PLT yields (P < 0.05), with Spectra trending higher. The short process time and high RE together indicate highly efficient collections, particularly by Amicus and Spectra. J. Clin. Apheresis 17:170–176, 2002. © 2002 Wiley-Liss, Inc. [source]
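The relative-efficiency figures in this abstract can be reproduced directly from the reported means: RE is each separator's platelet collection rate (yield divided by processing time) normalized to the MCS Plus rate of about 4 × 10⁹ PLTs/min.

```python
# Reproducing the abstract's RE arithmetic from its reported means.
yields = {"Amicus": 4.3e11, "Spectra": 4.6e11,
          "CS3000+": 4.3e11, "MCS Plus": 4.0e11}   # PLTs per procedure
minutes = {"Amicus": 50, "Spectra": 74, "CS3000+": 87, "MCS Plus": 101}

rate = {k: yields[k] / minutes[k] for k in yields}           # PLTs/min
re = {k: round(rate[k] / rate["MCS Plus"], 1) for k in rate}
print(re)   # {'Amicus': 2.2, 'Spectra': 1.6, 'CS3000+': 1.2, 'MCS Plus': 1.0}
```

The MCS Plus rate works out to roughly 4.0 × 10¹¹ / 101 ≈ 4 × 10⁹ PLTs/min, matching the RE = 1 reference stated in the abstract.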


Economic analysis of Campylobacter control in the Dutch broiler meat chain

AGRIBUSINESS : AN INTERNATIONAL JOURNAL, Issue 2 2007
Marie-Josée J. Mangen
The goal of the CARMA (Campylobacter risk management and assessment) project was to advise the Dutch government on the effectiveness and efficiency of interventions aimed at reducing campylobacteriosis cases in the Netherlands. The burden of disease, expressed in Disability-Adjusted Life Years (DALYs), and the corresponding cost-of-illness were estimated using data from epidemiological studies. With the help of a risk assessment model, the reduction in the incidence of Campylobacter infections due to a set of possible interventions in the broiler meat (chicken) chain was modeled. Separately, costs related to the implementation of these interventions in the broiler meat chain were estimated. For each intervention to be modeled, the net costs of an intervention (additional costs in the broiler meat chain minus the reduced cost-of-illness) were related to the reduced burden of disease. This resulted in a cost-utility ratio, expressing the relative efficiency of several policy options to reduce Campylobacter infections. [EconLit Citations: Q180, I180] © 2007 Wiley Periodicals, Inc. Agribusiness 23:173–192, 2007. [source]
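The cost-utility ratio described above is a simple quotient: net cost (intervention cost minus avoided cost-of-illness) divided by DALYs averted. A sketch with hypothetical numbers, not CARMA's estimates:

```python
# Cost-utility ratio sketch (hypothetical figures, not CARMA's):
# net cost per DALY averted.

def cost_utility(intervention_cost, avoided_illness_cost, dalys_averted):
    return (intervention_cost - avoided_illness_cost) / dalys_averted

# e.g. an intervention costing 2.0 M euro that avoids 0.5 M euro of
# illness costs and averts 50 DALYs:
ratio = cost_utility(2.0e6, 0.5e6, 50)
print(ratio)   # 30000.0 euro per DALY averted
```

Policy options with a lower ratio deliver the same health gain more cheaply, which is the sense in which the project ranks the interventions' relative efficiency.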


Assessing accuracy of a continuous screening test in the presence of verification bias

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 1 2005
Todd A. Alonzo
Summary. In studies to assess the accuracy of a screening test, often definitive disease assessment is too invasive or expensive to be ascertained on all the study subjects. Although it may be more ethical or cost effective to ascertain the true disease status with a higher rate in study subjects where the screening test or additional information is suggestive of disease, estimates of accuracy can be biased in a study with such a design. This bias is known as verification bias. Verification bias correction methods that accommodate screening tests with binary or ordinal responses have been developed; however, no verification bias correction methods exist for tests with continuous results. We propose and compare imputation and reweighting bias-corrected estimators of true and false positive rates, receiver operating characteristic curves and area under the receiver operating characteristic curve for continuous tests. Distribution theory and simulation studies are used to compare the proposed estimators with respect to bias, relative efficiency and robustness to model misspecification. The bias correction estimators proposed are applied to data from a study of screening tests for neonatal hearing loss. [source]
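The reweighting idea can be sketched as inverse-probability weighting: each verified subject is weighted by the reciprocal of their verification probability, undoing the test-dependent selection. The simulation below uses an invented disease prevalence, test distribution, and verification rule, and assumes the verification probabilities are known (the paper's estimators would model them); it shows the naive estimate of the true positive rate overshooting while the weighted one recovers the truth.

```python
import numpy as np

# Verification-bias sketch: subjects with high test scores are
# verified more often, so a naive TPR among verified subjects is
# biased upward; inverse-probability weighting corrects it.
rng = np.random.default_rng(3)
n = 20000
disease = rng.random(n) < 0.2
score = rng.normal(0, 1, n) + disease           # higher if diseased

p_verify = np.where(score > 0.5, 0.9, 0.2)      # test-dependent verification
verified = rng.random(n) < p_verify
w = 1.0 / p_verify                              # IPW weights

thr = 0.5
sel = verified & disease
naive_tpr = (score[sel] > thr).mean()
ipw_tpr = np.sum(w[sel] * (score[sel] > thr)) / np.sum(w[sel])
true_tpr = (score[disease] > thr).mean()        # full-verification truth
print(round(naive_tpr, 2), round(ipw_tpr, 2), round(true_tpr, 2))
```

The same weighting extends pointwise across thresholds to give bias-corrected ROC curves for a continuous test.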


Robust Estimation For Periodic Autoregressive Time Series

JOURNAL OF TIME SERIES ANALYSIS, Issue 2 2008
Q. Shao
Abstract. A robust estimation procedure for periodic autoregressive (PAR) time series is introduced. The asymptotic properties and the asymptotic relative efficiency are discussed by the estimating equation approach. The performance of the robust estimators for PAR time-series models with order one is illustrated by a simulation study. The technique is applied to a real data analysis. [source]


Protein Sequences as Literature Text

MACROMOLECULAR THEORY AND SIMULATIONS, Issue 5 2006
Valentina V. Vasilevskaya
Abstract We have performed an analysis of protein sequences, treating them as texts written in a "protein" language. We have shown that repeating patterns (words) of various lengths can be identified in these sequences. It was found that the maximum word lengths are different for proteins belonging to different classes; therefore, the corresponding values can be used to characterize the protein type. The suggested technique was first applied to analyze (decompose into words) normal (literature) texts written as gapless symbolic sequences without spaces and punctuation marks. Tests using fiction, scientific, and popular scientific English texts proved the relative efficiency of the technique. Maximum word lengths were determined separately for fibrillar, globular, and membrane proteins. [source]
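The word-identification step can be caricatured as a repeated-substring search over a gapless symbolic sequence. The quadratic scan and the toy sequence below are illustrative only, not the authors' algorithm:

```python
# Find the longest substring occurring more than once in a gapless
# symbolic sequence (naive O(n^2) scan; fine for short examples).

def longest_repeat(seq):
    best = ""
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            k = 0
            while j + k < len(seq) and seq[i + k] == seq[j + k]:
                k += 1
            if k > len(best):
                best = seq[i:i + k]
    return best

protein = "MKVLAAGMKVLTTPGGHMKVLA"   # invented toy sequence
print(longest_repeat(protein))       # 'MKVLA'
```

The length of the longest such repeat is the kind of "maximum word length" statistic the abstract uses to distinguish protein classes.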


Improving the precision of cotton performance trials conducted on highly variable soils of the southeastern USA coastal plain

PLANT BREEDING, Issue 6 2007
B. T. Campbell
Abstract Reliable agronomic and fibre quality data generated in Upland cotton (Gossypium hirsutum L.) cultivar performance trials are highly valuable. The most common strategy used to generate reliable performance trial data uses experimental design to minimize the experimental error resulting from spatial variability. However, an alternative strategy uses a posteriori statistical procedures to account for spatial variability. In this study, the efficiencies of the randomized complete block (RCB) design and nearest neighbour adjustment (NNA) were compared in a series of cotton performance trials conducted in the southeastern USA, to identify the efficiency of each in minimizing experimental error for yield, yield components and fibre quality. Relative to the RCB, the efficiency of the NNA procedure varied amongst traits and trials. Results show that the choice of analysis, depending on the trait and selection intensity employed, can affect cultivar or experimental line selections. Based on this study, we recommend that researchers conducting cotton performance trials on variable soils consider using NNA or other spatial methods to improve trial precision. [source]
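A one-dimensional caricature of nearest neighbour adjustment (simulated field with an invented linear fertility gradient, not the paper's trials): correct each plot by the mean residual of its immediate neighbours, which absorbs a smooth spatial trend that blocking alone would miss.

```python
import numpy as np

# NNA sketch: plot values = overall mean + smooth spatial trend +
# genotype effect.  Subtracting each plot's neighbour-mean residual
# strips most of the trend, leaving cleaner genotype estimates.
rng = np.random.default_rng(4)
n_plots = 30
trend = np.linspace(0, 3, n_plots)            # smooth fertility gradient
effects = rng.normal(0, 0.2, n_plots)         # true genotype effects
y = 10 + trend + effects

resid = y - y.mean()
neighbour_mean = np.convolve(resid, [0.5, 0.0, 0.5], mode="same")
neighbour_mean[0] = resid[1]                  # edge plots: one neighbour
neighbour_mean[-1] = resid[-2]
y_adj = y - neighbour_mean

def center(v):
    return v - v.mean()

err_raw = np.abs(center(y) - center(effects)).mean()
err_adj = np.abs(center(y_adj) - center(effects)).mean()
print(round(err_raw, 2), round(err_adj, 2))   # adjustment shrinks error
```

Real NNA implementations work in two dimensions and within a full linear-model analysis, but the mechanism (borrowing information from adjacent plots) is the same.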


Designing experiments for nonlinear models: an introduction

QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 5 2010
Rachel T. Johnson
Abstract We illustrate the construction of Bayesian D-optimal designs for nonlinear models and compare the relative efficiency of standard designs with these designs for several models and prior distributions on the parameters. Through a relative efficiency analysis, we show that standard designs can perform well in situations where the nonlinear model is intrinsically linear. However, if the model is nonlinear and its expectation function cannot be linearized by simple transformations, the nonlinear optimal design is considerably more efficient than the standard design. Published in 2009 by John Wiley & Sons, Ltd. [source]
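Relative D-efficiency, the comparison metric used here, has a standard closed form: for two designs with model matrices X1 and X2 and p parameters, it is (|X1'X1| / |X2'X2|)^(1/p). A sketch for a straight-line model, where the D-optimal design on [−1, 1] is known to put all runs at the endpoints (the four-run designs below are illustrative):

```python
import numpy as np

# Relative D-efficiency of design 1 vs. design 2 for a p-parameter
# linear model, using the information matrices M = X'X.

def d_efficiency(X1, X2):
    p = X1.shape[1]
    return (np.linalg.det(X1.T @ X1) /
            np.linalg.det(X2.T @ X2)) ** (1.0 / p)

# y = b0 + b1*x on [-1, 1]: endpoints design vs. a uniform 4-point grid.
ends = np.column_stack([np.ones(4), np.array([-1.0, -1.0, 1.0, 1.0])])
grid = np.column_stack([np.ones(4), np.linspace(-1, 1, 4)])
eff = d_efficiency(grid, ends)
print(round(eff, 2))   # < 1: the grid is less efficient
```

For nonlinear models the information matrix also depends on the unknown parameters, which is exactly why the Bayesian formulation (averaging the criterion over a prior) is used above.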


Genetic variation of chloroplast DNA in Zingiberaceae taxa from Myanmar assessed by PCR–restriction fragment length polymorphism analysis

ANNALS OF APPLIED BIOLOGY, Issue 1 2009
D. Ahmad
Abstract We examined genetic variation in 22 accessions belonging to 11 species in four genera of the Zingiberaceae, mainly from Myanmar, by PCR–restriction fragment length polymorphism analysis to investigate their relationships within this family. Two of 10 chloroplast gene regions (trnS–trnfM and trnK2–trnQr) showed differential PCR amplification across the taxa. Restriction enzyme digestion of the PCR products revealed interspecific variability. The restriction patterns were used to classify the regions as either highly conserved or variable across the taxa. None of the regions was highly conserved across the four genera, and the level of conservation varied. The gene region trnS–trnfM appeared to display interspecific variability among most of the species. However, the relative efficiency of different restriction enzymes depended on the gene regions and genera investigated. Cluster analysis revealed interspecific discrimination among the taxa. The two Curcuma species (Curcuma zedoaria and Curcuma xanthorrhiza) appeared to be identical, thus supporting their recent classification as synonyms. The results provide a basis for selecting specific combinations of restriction enzymes and chloroplast DNA (cpDNA) gene regions to identify interspecific variation in the Zingiberaceae and to identify both highly conserved and variable regions. Overall, cpDNA depicted a comparatively diverse genetic profile of the studied germplasm. The genetic information revealed here can be applied to the conservation and future breeding of Zingiber and Curcuma species. [source]
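The fragment patterns compared in an RFLP analysis can be mimicked with a toy in-silico digest. The recognition site below is the well-known EcoRI sequence, but the 30-bp amplicon is invented and the cut is made at the start of the site for simplicity (real enzymes cut at a defined offset within it):

```python
# Toy in-silico restriction digest: locate every occurrence of an
# enzyme recognition site and report the resulting fragment lengths,
# the kind of banding pattern a PCR-RFLP analysis compares across taxa.

def digest(seq, site):
    cuts = []
    i = seq.find(site)
    while i != -1:
        cuts.append(i)                 # simplification: cut at site start
        i = seq.find(site, i + 1)
    bounds = [0] + cuts + [len(seq)]
    return [bounds[k + 1] - bounds[k] for k in range(len(bounds) - 1)]

# EcoRI site GAATTC in a made-up 30-bp amplicon:
amplicon = "ATGCGAATTCGGGTACCCGAATTCAATGGC"
print(digest(amplicon, "GAATTC"))      # fragment lengths [4, 14, 12]
```

Taxa whose amplicons gain or lose a site produce different fragment-length lists, which is the interspecific variability the abstract scores.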