Weighting
Selected Abstracts

EFFECT OF TAXON SAMPLING, CHARACTER WEIGHTING, AND COMBINED DATA ON THE INTERPRETATION OF RELATIONSHIPS AMONG THE HETEROKONT ALGAE
JOURNAL OF PHYCOLOGY, Issue 2 2003
Leslie R. Goertzen
Nuclear ribosomal small subunit and chloroplast rbcL sequence data for heterokont algae and potential outgroup taxa were analyzed separately and together using maximum parsimony. A series of taxon sampling and character weighting experiments was performed. Traditional classes (e.g. diatoms, Phaeophyceae, etc.) were monophyletic in most analyses of either data set and in analyses of combined data. Relationships among classes and of heterokont algae to outgroup taxa were sensitive to taxon sampling. Bootstrap (BS) values were not always predictive of the stability of nodes in taxon sampling experiments or between analyses of different data sets. Reweighting sites by the rescaled consistency index artificially inflates BS values in the analysis of rbcL data. Inclusion of the third codon position from rbcL enhanced signal despite the superficial appearance of mutational saturation. Incongruence between data sets was largely due to the placement of a few problematic taxa, and so the data were combined. BS values for the combined analysis were much higher than for analyses of each data set alone, although combining data did not improve support for heterokont monophyly. [source]

WATERSHED WEIGHTING OF EXPORT COEFFICIENTS TO MAP CRITICAL PHOSPHOROUS LOADING AREAS
JOURNAL OF THE AMERICAN WATER RESOURCES ASSOCIATION, Issue 1 2003
Theodore A. Endreny
The Export Coefficient Model (ECM) is capable of generating reasonable estimates of annual phosphorus loading simply from a watershed's land cover data and export coefficient values (ECVs). In its current form, the ECM assumes that ECVs are homogeneous within each land cover type, yet basic nutrient runoff and hydrological theory suggests that runoff rates have spatial patterns controlled by loading and filtering along the flow paths from the upslope contributing area and downslope dispersal area. Using a geographic information system (GIS) raster, or pixel, modeling format, these contributing area and dispersal area (CADA) controls were derived from the perspective of each individual watershed pixel to weight the otherwise homogeneous ECVs for phosphorus. Although the CADA-ECM predicts export coefficient spatial variation for a single land use type, the lumped basin load is unaffected by weighting. After CADA weighting, a map of the new ECVs addressed the three fundamental criteria for targeting critical pollutant loading areas: (1) the presence of the pollutant, (2) the likelihood for runoff to carry the pollutant offsite, and (3) the likelihood that buffers will trap nutrients prior to their runoff into the receiving water body. These spatially distributed maps of the most important pollutant management areas were used within New York's West Branch Delaware River watershed to demonstrate how the CADA-ECM could be applied in targeting phosphorus critical loading areas. [source]
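The load accounting described above is easy to make concrete. The sketch below computes a homogeneous ECM load from a land-cover raster, then applies a per-pixel weight surface (a stand-in for the CADA terms) normalised within each cover class, so the lumped basin load is unchanged by weighting, as the abstract states. The cover codes, ECVs and weight surface are invented for illustration.

```python
# Minimal ECM sketch with load-preserving pixel weighting.
import numpy as np

ecv = {1: 0.1, 2: 1.5, 3: 0.05}        # kg P / ha / yr: forest, crop, ...
pixel_ha = 0.09                         # 30 m pixels
cover = np.random.default_rng(1).choice([1, 2, 3], size=(100, 100))

base = np.vectorize(ecv.get)(cover) * pixel_ha            # homogeneous ECM load
w = np.random.default_rng(2).lognormal(size=cover.shape)  # CADA-style weight proxy

# Normalise weights within each land-cover class so the lumped basin
# load is unaffected by weighting, as in the paper.
weighted = np.empty_like(base)
for c in ecv:
    m = cover == c
    weighted[m] = base[m] * w[m] * m.sum() / w[m].sum()

print(base.sum(), weighted.sum())       # identical basin totals
```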
Weighting and adjusting for design effects in secondary data analyses
NEW DIRECTIONS FOR INSTITUTIONAL RESEARCH, Issue 127 2005
Scott L. Thomas
Institutional researchers frequently use national datasets such as those provided by the National Center for Education Statistics (NCES). The authors of this chapter explore the adjustments required when analyzing NCES data collected using complex sample designs. [source]
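A minimal illustration of the kind of adjustment the chapter discusses: weight the point estimate by the sampling weights, and inflate the variance with Kish's approximate design effect, deff = 1 + cv(w)^2. The data and weights below are simulated; the formula is the generic Kish approximation, not anything specific to the NCES datasets.

```python
# Weighted mean plus a design-effect-adjusted standard error.
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(50, 10, 1000)                 # outcome, e.g. a test score
w = rng.uniform(0.2, 3.0, 1000)              # sampling weights

mean_w = np.average(y, weights=w)
deff = 1 + (w.std() / w.mean()) ** 2         # Kish design effect
se_srs = y.std(ddof=1) / np.sqrt(len(y))     # naive simple-random-sample SE
print(mean_w, deff, se_srs * np.sqrt(deff))  # design-adjusted SE
```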
CIE 151:2003 Spectral Weighting of Solar Ultraviolet Radiation
COLOR RESEARCH & APPLICATION, Issue 6 2003
No abstract is available for this article. [source]

A simple method to calculate the signal-to-noise ratio of a circular-shaped coil for MRI
CONCEPTS IN MAGNETIC RESONANCE, Issue 6 2006
K. Ocegueda
The introduction of ultrafast imaging sequences has renewed interest in the development of RF coils. The theoretical treatment of the SNR of MRI coils is challenging because it requires a deep mathematical background to master the associated concepts. Here, a simpler method is proposed based on Legendre polynomials. This approximation method, together with a quasi-static approach, was used to derive a signal-to-noise ratio expression for a circular-shaped coil. Legendre polynomials were used instead of a weighting function to simplify the vector potential of the power loss, and an SNR formula was then derived. The simplified version of the SNR formula of a circular coil was compared with the weighting-function-derived SNR expression using the quasi-static approach. SNR-vs.-depth plots were computed to theoretically compare both SNR formulas. Results showed strong agreement between SNR values for the circular-shaped coil. This approach can be used as a tool to derive SNR expressions for more complex geometries. © 2006 Wiley Periodicals, Inc. Concepts Magn Reson Part A 28A: 422–429, 2006 [source]

Seasonal allergies and suicidality: results from the National Comorbidity Survey Replication
ACTA PSYCHIATRICA SCANDINAVICA, Issue 2 2010
E. Messias
Messias E, Clarke DE, Goodwin RD. Seasonal allergies and suicidality: results from the National Comorbidity Survey Replication.
Objective: Studies have shown an association between allergies and suicidality, and a seasonality of suicide has also been described. We hypothesize an association between a history of seasonal allergies and suicide ideation and attempt.
Method: Data came from the National Comorbidity Survey Replication, a nationally representative sample (n = 5692) of adults living in the US. Logistic regression models were used to calculate adjusted odds ratios (OR) controlling for the following: age, sex, race, smoking, asthma and depression.
Results: After weighting and adjustment, a positive and statistically significant association was found between history of seasonal allergies and history of suicidal ideation [adjusted OR = 1.27 (1.01–1.58)]. We found no association between history of seasonal allergies and history of suicide attempts [adjusted OR = 1.17 (0.89–1.52)].
Conclusion: Findings from a population-based sample support the hypothesized relationship between allergies and suicidal ideation. [source]
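The analysis style above reads directly into code. The sketch below fits a weighted logistic regression and exponentiates the allergy coefficient into an adjusted OR with its 95% CI. The data, weights and covariate set are simulated stand-ins for the NCS-R variables, and passing survey weights as GLM frequency weights is a simplification: it reproduces weighted point estimates but not full design-based standard errors.

```python
# Survey-weighted logistic regression with an adjusted odds ratio.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5692
allergy = rng.binomial(1, 0.3, n)
age = rng.integers(18, 80, n)
sex = rng.binomial(1, 0.5, n)
logit = -2.5 + 0.24 * allergy - 0.01 * age + 0.2 * sex
ideation = rng.binomial(1, 1 / (1 + np.exp(-logit)))
w = rng.uniform(0.5, 2.0, n)                  # post-stratification weights

X = sm.add_constant(np.column_stack([allergy, age, sex]))
fit = sm.GLM(ideation, X, family=sm.families.Binomial(),
             freq_weights=w).fit()
or_ci = np.exp(np.column_stack([fit.params, fit.conf_int()]))
print(or_ci[1])                               # adjusted OR and 95% CI for allergy
```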
The Art of Repair in Surgical Hair Restoration, Part II: The Tactics of Repair
DERMATOLOGIC SURGERY, Issue 10 2002
Robert M. Bernstein MD
Background. As patient awareness of new hair transplantation techniques grows, the repair of improperly planned or poorly executed procedures becomes an increasingly important part of surgical hair restoration.
Objective. Part II of this series is written to serve as a practical guide for surgeons who perform repairs in their daily practices. It focuses on specific repair techniques.
Methods. The repairs are performed by excision with reimplantation and/or by camouflage. Follicular unit transplantation is used for the restorative aspects of the procedure.
Results. Using punch or linear excision techniques allows the surgeon to relocate poorly planted grafts to areas that are more appropriate. The key elements of camouflage include creating a deep zone of follicular units, angling grafts in their natural direction, and using forward and side weighting of grafts to increase the appearance of fullness. In special situations, removal of grafts without reimplantation can be accomplished using lasers or electrolysis.
Conclusion. Meticulous surgical techniques and optimal utilization of a limited hair supply will enable the surgeon to achieve the best possible cosmetic results for patients requiring repairs. [source]

The Art of Repair in Surgical Hair Restoration, Part I: Basic Repair Strategies
DERMATOLOGIC SURGERY, Issue 9 2002
Robert M. Bernstein MD
Background. An increasingly important part of many hair restoration practices is the correction of hair transplants that were performed using older, outdated methods, or the correction of hair transplants that have left disfiguring results. The skill and judgment involved in these repair procedures often exceed those needed to operate on patients who have had no prior surgery. The use of small grafts alone does not protect the patient from poor work. Errors in surgical and aesthetic judgment, performing procedures on noncandidate patients, and the failure to communicate successfully with patients about realistic expectations remain major problems.
Objective. This two-part series presents new insights into repair strategies and expands upon several techniques previously described in the hair restoration literature. The focus is on creative aesthetic solutions to the supply/demand limitations inherent in most repairs. This article is written to serve as a guide for surgeons who perform repairs in their daily practices.
Methods. The repairs are performed by excision with reimplantation and/or by camouflage. Follicular unit transplantation is used for the restorative aspects of the procedure.
Results. Using punch or linear excision techniques allows the surgeon to relocate poorly planted grafts to areas that are more appropriate. In special situations, removal of grafts without reimplantation can be accomplished using lasers or electrolysis. The key elements of camouflage include creating a deep zone of follicular units, angling grafts in their natural direction, and using forward and side weighting of grafts to increase the appearance of fullness. The available donor supply is limited by hair density, scalp laxity, and scar placement.
Conclusion. Presented with significant cosmetic problems and severely limited donor reserves, the surgeon performing restorative hair transplantation work faces distinct challenges. Meticulous surgical techniques and optimal utilization of a limited hair supply will enable the surgeon to achieve the best possible cosmetic results for patients requiring repairs. [source]

How does the knowledge about the spatial distribution of Iberian dung beetle species accumulate over time?
DIVERSITY AND DISTRIBUTIONS, Issue 6 2007
Jorge M. Lobo
Different distribution maps can be obtained for the same species if localities where species are present are mapped at different times. We analysed the accumulation of information over time for a group of dung beetle species in the Iberian Peninsula. To do this, we used all available information about the distribution of the group, as well as data on selected species, to examine whether the process of discovery of species distributions has occurred in a climatically or spatially structured fashion. Our results show the existence of a well-defined pattern of temporal growth in distributional information; because of this, the date of capture of each specimen can be explained by the environmental and spatial variables associated with the collection sites. We hypothesize that such temporal biases could be the rule rather than the exception in most distributional data. These biases could affect the weighting of environmental factors that influence species distributions, as well as the accuracy of predictive distribution models. Systematic surveys should be a priority for the description of species' geographical ranges in order to make robust predictions about the consequences of habitat and climate change for their persistence and conservation. [source]

Efficient sampling and data reduction techniques for probabilistic seismic lifeline risk assessment
EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 10 2010
Nirmal Jayaram
Probabilistic seismic risk assessment for spatially distributed lifelines is less straightforward than for individual structures. While procedures such as the 'PEER framework' have been developed for risk assessment of individual structures, these are not easily applicable to distributed lifeline systems, due to difficulties in describing ground-motion intensity (e.g. spectral acceleration) over a region (in contrast to ground-motion intensity at a single site, which is easily quantified using Probabilistic Seismic Hazard Analysis), and because the link between the ground-motion intensities and lifeline performance is usually not available in closed form. As a result, Monte Carlo simulation (MCS) and its variants are well suited for characterizing ground motions and computing the resulting losses to lifelines. This paper proposes a simulation-based framework for developing a small but stochastically representative catalog of earthquake ground-motion intensity maps that can be used for lifeline risk assessment. In this framework, Importance Sampling is used to preferentially sample 'important' ground-motion intensity maps, and K-Means Clustering is used to identify and combine redundant maps in order to obtain a small catalog. The effects of sampling and clustering are accounted for through a weighting on each remaining map, so that the resulting catalog is still a probabilistically correct representation. The feasibility of the proposed simulation framework is illustrated by using it to assess the seismic risk of a simplified model of the San Francisco Bay Area transportation network. A catalog of just 150 intensity maps is generated to represent hazard at 1038 sites from 10 regional fault segments causing earthquakes with magnitudes between five and eight. The risk estimates obtained using these maps are consistent with those obtained using conventional MCS utilizing many orders of magnitude more ground-motion intensity maps. Therefore, the proposed technique can be used to drastically reduce the computational expense of a simulation-based risk assessment, without compromising the accuracy of the risk estimates. This will facilitate computationally intensive risk analysis of systems such as transportation networks. Finally, the study shows that the uncertainties in the ground-motion intensities and the spatial correlations between ground-motion intensities at various sites must be modeled in order to obtain unbiased estimates of lifeline risk. Copyright © 2010 John Wiley & Sons, Ltd. [source]
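The two-step catalog reduction is easy to sketch. Below, toy correlated lognormal "intensity maps" are importance-sampled toward severe events, clustered with K-means, and each retained map is weighted by the total importance-sampling weight of its cluster, so weighted catalog statistics still track plain Monte Carlo. The map generator, the severity proxy and all sizes are illustrative assumptions, not the paper's ground-motion model.

```python
# Importance sampling + K-means reduction to a small weighted catalog.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_sites, n_candidate, n_keep = 50, 2000, 150

# Toy candidate maps: spatially correlated lognormal intensities.
z = rng.multivariate_normal(np.zeros(n_sites),
                            0.5 * np.eye(n_sites) + 0.5, n_candidate)
maps = np.exp(z - 1.0)

# Importance sampling: oversample high-severity maps, then correct by
# w = target density / sampling density. Severity here is the map mean.
severity = maps.mean(axis=1)
q = np.exp(severity) / np.exp(severity).sum()      # biased sampling dist.
idx = rng.choice(n_candidate, size=n_candidate, p=q)
w = (1.0 / n_candidate) / q[idx]                   # IS weights

# One representative map per cluster, weighted by the cluster's total
# IS weight, so the small catalog stays probabilistically correct.
km = KMeans(n_clusters=n_keep, n_init=5, random_state=0).fit(maps[idx])
catalog, weights = [], []
for k in range(n_keep):
    members = np.where(km.labels_ == k)[0]
    d = np.linalg.norm(maps[idx][members] - km.cluster_centers_[k], axis=1)
    catalog.append(maps[idx][members[d.argmin()]])
    weights.append(w[members].sum())

# Weighted catalog mean should track the plain Monte Carlo mean.
print(np.average([m.mean() for m in catalog], weights=weights),
      maps.mean())
```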
Should symptom frequency be factored into scalar measures of alcohol use disorder severity?
ADDICTION, Issue 9 2010
Deborah A. Dawson
Aims: To evaluate whether weighting counts of alcohol use disorder (AUD) criteria or symptoms by their frequency of occurrence improves their association with correlates of AUD.
Design and participants: Data were collected in personal interviews with a representative sample of US adults interviewed in 1991–92. Analyses were conducted among past-year drinkers (12+ drinks, n = 18 352) and individuals with past-year DSM-IV AUD (n = 2770).
Measurements: Thirty-one symptom item indicators, whose frequency of occurrence was measured in eight categories, were used to create unweighted and frequency-weighted counts of DSM-IV past-year AUD symptoms and criteria. Correlates included density of familial alcoholism and past-year volume of ethanol intake, frequency of intoxication and utilization of alcohol treatment.
Findings: Although the AUD correlates were strongly and positively associated with the frequency of AUD symptom occurrence, weighting for symptom frequency did not consistently strengthen their association with AUD severity scores. Improved performance of the weighted scores was observed primarily among AUD correlates linked closely with the frequency of heavy drinking and among individuals with AUD. Criterion counts were correlated nearly as strongly as symptom counts with the AUD correlates.
Conclusions: Frequency weighting may add somewhat to the validity of AUD severity measures, especially those intended for use among individuals with AUD, e.g. in clinical settings. For studying the etiology and course of AUD in the general population, an equally effective and less time-consuming alternative to obtaining symptom frequency may be the use of unweighted criterion counts accompanied by independent measures of frequency of heavy drinking. [source]

Simultaneous Quantitative Determination of Cadmium, Lead, and Copper on Carbon-Ink Screen-Printed Electrodes by Differential Pulse Anodic Stripping Voltammetry and Partial Least Squares Regression
ELECTROANALYSIS, Issue 23 2008
Michael Cauchi
Water is a vital commodity for every living entity on the planet. However, water resources are threatened by various sources of contamination, from pesticides, hydrocarbons and heavy metals. This has resulted in the development of concepts and technologies to create a basis for the provision of safe and high-quality drinking water. This paper focuses on the simultaneous quantitative determination of three common contaminants, the heavy metals cadmium, lead and copper. Multivariate calibration was applied to voltammograms acquired on in-house printed carbon-ink screen-printed electrodes by the highly sensitive electrochemical method of differential pulse anodic stripping voltammetry (DPASV). The statistically inspired modification of partial least squares (SIMPLS) algorithm was employed to effect the multivariate calibration. The application of data pretreatment techniques involving range-scaling, mean-centering and weighting of variables, and the effects of peak realignment, are also investigated. It was found that peak realignment in conjunction with weighting and SIMPLS led to the best overall root mean square error of prediction (RMSEP) value. This work represents significant progress in the development of multivariate calibration tools in conjunction with analytical techniques for water quality determination. It is the first time that multivariate calibration has been performed on DPASV voltammograms acquired on carbon-ink screen-printed electrodes. [source]
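The calibration pipeline can be sketched with scikit-learn, whose PLSRegression (NIPALS) stands in here for the SIMPLS algorithm used in the paper; the two coincide for a single response and differ only marginally for multi-response data. The simulated "voltammograms" below are sums of overlapping Gaussian stripping peaks, and RMSEP is evaluated on a held-out split; all numbers are invented.

```python
# PLS calibration of simulated DPASV voltammograms with RMSEP evaluation.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, p = 120, 300                              # samples x potential steps
conc = rng.uniform(0, 100, size=(n, 3))      # Cd, Pb, Cu concentrations
peaks = np.exp(-0.5 * ((np.arange(p) - np.array([[80], [150], [220]])) / 8) ** 2)
X = conc @ peaks + rng.normal(0, 0.5, (n, p))  # overlapping stripping peaks

Xtr, Xte, ytr, yte = train_test_split(X, conc, random_state=0)
pls = PLSRegression(n_components=6, scale=True)  # mean-centres and scales
pls.fit(Xtr, ytr)
rmsep = np.sqrt(((pls.predict(Xte) - yte) ** 2).mean(axis=0))
print(np.round(rmsep, 2))                    # one RMSEP per metal
```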
PERSPECTIVE: HERE'S TO FISHER, ADDITIVE GENETIC VARIANCE, AND THE FUNDAMENTAL THEOREM OF NATURAL SELECTION
EVOLUTION, Issue 7 2002
James F. Crow
Fisher's fundamental theorem of natural selection, that the rate of change of fitness is given by the additive genetic variance of fitness, has generated much discussion since its appearance in 1930. Fisher tried to capture in the formula the change in population fitness attributable to changes of allele frequencies, when all else is not included. Lessard's formulation comes closest to Fisher's intention, as far as this can be judged. Additional terms can be added to account for other changes. The "theorem" as stated by Fisher is not exact, and therefore not a theorem, but it does encapsulate a great deal of evolutionary meaning in a simple statement. I also discuss the effectiveness of reproductive-value weighting and the theorem in integrated form. Finally, an optimum principle, analogous to least action and Hamilton's principle in physics, is discussed. [source]
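For reference, the partial-change reading of the theorem that the abstract summarizes in words is conventionally written as follows; this is the standard modern notation, not a formula quoted from Crow's article.

```latex
% Conventional modern statement of the fundamental theorem: the part of
% the change in mean fitness attributable to natural selection acting on
% allele frequencies equals the additive genetic variance in fitness
% divided by the mean fitness.
\Delta_{\mathrm{NS}}\,\bar{w} \;=\; \frac{\sigma_{A}^{2}(w)}{\bar{w}}
```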
Evolutionary coincidence-based ontology mapping extraction
EXPERT SYSTEMS, Issue 3 2008
Vahed Qazvinian
Ontology matching is a process for selecting a good alignment across the entities of two (or more) ontologies. This can be viewed as a two-phase process of (1) applying a similarity measure to find the correspondence of each pair of entities from two ontologies, and (2) extracting an optimal or near-optimal mapping. This paper focuses on the second phase and introduces our evolutionary approach to it. To be able to do so, we need a mechanism to score different possible mappings. Our solution is a weighting mechanism named coincidence-based weighting. A genetic algorithm is then introduced to create better mappings in successive iterations. We explain how we code a mapping as well as our crossover and mutation functions. An evaluation of the algorithm is shown and discussed. [source]

Reducing seabird bycatch in longline, trawl and gillnet fisheries
FISH AND FISHERIES, Issue 1 2007
Leigh S. Bull
With an increasing number of seabird species, particularly albatrosses and petrels, becoming threatened, a reduction of fishery impacts on these species is essential for their future survival. Here, mitigation methods to reduce and avoid seabird bycatch are assessed in terms of their ability to reduce bycatch rates and their economic viability for longline, trawl and gillnet fisheries worldwide. Factors influencing the appropriateness and effectiveness of a mitigation device include the fishery, vessel, location, seabird assemblage present and season of year. As yet, there is no single magic solution to reduce or eliminate seabird bycatch across all fisheries: a combination of measures is required, and even within a fishery there is likely to be refinement of techniques by individual vessels in order to maximize their effectiveness at reducing seabird bycatch. In longline demersal and pelagic fisheries, a minimum requirement of line weighting that achieves hook sink rates minimizing seabird bycatch rates should be combined with strategic offal and discard management, bird-scaring lines (BSLs) and night-setting, particularly in Southern Hemisphere fisheries. Urgent investigation is needed into more effective measures for reducing seabird interactions with trawl nets and gillnets. In trawl fisheries, a combination of offal and discard management, the banning of net monitoring cables, paired BSLs, and a reduction in the time the net is on or near the surface are likely to be the most effective in reducing seabird interactions with the warp cables and net. Few seabird bycatch reduction methods have been developed for gillnet fisheries, although increasing the visibility of the net has been shown to reduce seabird bycatch. Further studies are required to determine the efficacy of this technique and its influence on target species catch rates. [source]

Are stock assessment methods too complicated?
FISH AND FISHERIES, Issue 3 2004
A J R Cotter
This critical review argues that several methods for the estimation and prediction of numbers-at-age, fishing mortality coefficients F, and recruitment for a stock of fish are too hard to explain to customers (the fishing industry, managers, etc.) and do not pay enough attention to weaknesses in the supporting data, assumptions and theory. The review is linked to North Sea demersal stocks. First, weaknesses in the various types of data used in North Sea assessments are summarized, i.e. total landings, discards, commercial and research vessel abundance indices, age-length keys and natural mortality (M). A list of features that an ideal assessment should have is put forward as a basis for comparing different methods. The importance of independence and weighting when combining different types of data in an assessment is stressed. Assessment methods considered are Virtual Population Analysis, ad hoc tuning, extended survivors analysis (XSA), year-class curves, catch-at-age modelling, and state-space models fitted by Kalman filter or Bayesian methods. Year-class curves (not to be confused with 'catch-curves') are the favoured method because of their applicability to data sets separately, their visual appeal, simple statistical basis, minimal assumptions, the availability of confidence limits, and the ease with which estimates can be combined from different data sets after separate analyses. They do not estimate absolute stock numbers or F, but neither do other methods unless M is accurately known, as is seldom true. [source]

Transgenic fish: an evaluation of benefits and risks
FISH AND FISHERIES, Issue 2 2000
N. Maclean
Transgenic fish have many potential applications in aquaculture, but also raise concerns regarding the possible deleterious effects of escaped or released transgenic fish on natural ecosystems. In this review the potential applications of transgenic fish are considered, the probable benefits reviewed, the possible risks to the environment identified, and the measures which might be taken to minimize these risks evaluated. Growth trials of transgenic fish have already been carried out in outdoor facilities and some of these are discussed in the light of possible risks and benefits. Regarding the hazards associated with release or escape, whilst there is some evidence to suggest that transgenic fish may be less fit than their wild counterparts, there is insufficient evidence to say that this will be true in all cases. Using mathematical models, we have attempted to predict the magnitude of the genetic effects in a range of different scenarios. A number of possible containment techniques are considered, among which containment by sterility is probably the most promising. Sterility can be engineered either by triploidy or by transgenic methods. The conclusions include a tabulated balance sheet of likely benefits and risks, with appropriate weighting. [source]

Species co-occurrence, nestedness and guild–environment relationships in stream macroinvertebrates
FRESHWATER BIOLOGY, Issue 9 2009
Jani Heino
1. Describing species distribution patterns and the underlying mechanisms is at the heart of ecological research. A number of recent studies have used null model approaches to explore mechanisms behind spatial variation in community structure.
2. Largely unexplored, however, is the degree to which single guilds of potentially competing stream macroinvertebrate species: (i) show interspecific segregation among stream sites (i.e. occur together less often than expected by chance), suggesting competitive interactions; (ii) show interspecific aggregation (i.e. occur together more often than expected by chance), suggesting similar responses to the environment; (iii) comply with nestedness, suggesting the existence of selective extinctions or colonisations; and (iv) show similar environmental relationships.
3. The present analyses showed that guilds of stream macroinvertebrates exhibit non-random co-occurrence patterns that were generally contingent on the weighting of sites by stream size. Despite significant segregation of species, each guild also showed significantly nested patterns. Species richness was correlated with different environmental factors between the guilds, although these correlations were relatively low. By contrast, correlations between the major ordination axes and key environmental variables were slightly stronger in canonical correspondence analysis, and generally the same factors were most strongly correlated with variation in the species composition of each guild.
4. The present findings are the first to show that species within each stream macroinvertebrate guild show significant negative co-occurrence at the among-stream-riffle scale. These findings present challenges for future studies that aim to disentangle whether these patterns comply with the habitat checkerboard or the competitive checkerboard explanation. [source]

Undergraduate teaching in gerodontology in Austria, Switzerland and Germany
GERODONTOLOGY, Issue 3 2004
Ina Nitschke
Objective: To survey the present state of undergraduate teaching in the domain of gerodontology in Germany, Switzerland and Austria.
Study participants: All universities of Austria (A), Germany (D) and Switzerland (CH).
Protocol: A questionnaire on undergraduate teaching in gerodontology was mailed to all Deans (A: n = 3; CH: n = 4; D: n = 31) and all independent departments except paediatric dentistry and orthodontics (A: n = 11; CH: n = 15; D: n = 111).
Results: The questionnaires were completed and returned by 29 Deans (A: n = 2; CH: n = 4; D: n = 23) and 102 departments (A: n = 7; CH: n = 8; D: n = 87). In Austria, gerodontology is a very small component of the dental curriculum and the Deans did not want this to be increased. Most German universities claimed to teach some aspects of gerodontology to undergraduate students, and 87.4% of the Deans voted for separate lectures in gerodontology. In Switzerland, gerodontology seems well established. The questionnaires from the independent departments revealed that in all three countries lectures were more prevalent (A: n = 0; CH: n = 4; D: n = 6) than practical training in nursing homes (A: n = 0; CH: n = 3; D: n = 6).
Conclusion: Considering the demographic shift which is leading to an increasing proportion of elderly people in the population, the weighting of gerodontology in the undergraduate dental curriculum should be considered for revision in Austria and Germany. [source]

Estimating lifetime or episode-of-illness costs under censoring
HEALTH ECONOMICS, Issue 9 2010
Anirban Basu
Many analyses of healthcare costs involve the use of data with varying periods of observation and right censoring of cases before death or before the end of the episode of illness. The prominence of observations with no expenditure for some short periods of observation, and the extreme skewness typical of these data, raise concerns about the robustness of estimators based on inverse probability weighting (IPW) with the survival-from-censoring probabilities. These estimators also cannot distinguish between the effects of covariates on survival and on the intensity of utilization, which jointly determine costs. In this paper, we propose a new estimator that extends the class of two-part models to deal with random right censoring and with continuous death and censoring times. Our model also addresses issues about the time to death in these analyses and separates the survival effects from the intensity effects. Using simulations, we compare our proposed estimator to the inverse probability estimator, which shows bias when censoring is heavy and covariates affect survival. We find our estimator to be unbiased and also more efficient for these designs. We apply our method and compare it with the IPW method using data from the Medicare–SEER files on prostate cancer. Copyright © 2010 John Wiley & Sons, Ltd. [source]
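The benchmark IPW estimator criticised in that abstract takes only a few lines: observed (uncensored) costs are weighted by the inverse of the Kaplan-Meier survival function of the *censoring* time. Everything below is simulated, and the toy cost model deliberately ties cost to survival so the estimator has something to recover; it is a sketch of the generic Bang-Tsiatis-style weighted estimator, not the paper's proposed two-part model.

```python
# Inverse-probability-weighted mean lifetime cost under right censoring.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
death = rng.exponential(5.0, n)              # time to death (years)
cens = rng.uniform(0.0, 12.0, n)             # administrative censoring
T = np.minimum(death, cens)
dead = death <= cens
cost = 10.0 * death + rng.normal(0, 5, n)    # lifetime cost, known if dead

# Kaplan-Meier for the censoring distribution K(t) = P(C > t):
# the "events" here are the censored observations.
order = np.argsort(T)
t_sorted, c_event = T[order], ~dead[order]
at_risk = n - np.arange(n)
km = np.cumprod(1.0 - c_event / at_risk)
K_at_T = km[np.searchsorted(t_sorted, T, side='right') - 1]

ipw_mean = np.sum(cost[dead] / K_at_T[dead]) / n
print(ipw_mean, 10.0 * death.mean())         # IPW vs. oracle mean cost
```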
Conformational Analysis and CD Calculations of Methyl-Substituted 13-Tridecano-13-lactones
HELVETICA CHIMICA ACTA, Issue 2 2005
Elena Voloshina
Conformational models covering an energy range of 3 kcal/mol were calculated for (13S)-tetradecano-13-lactone (3), (12S)-12-methyltridecano-13-lactone (4), and (12S,13R)-12-methyltetradecano-13-lactone (8), starting from a semiempirical Monte Carlo search with AM1 parametrization and subsequent optimization of the 100 best conformers, first at the 6-31G*/B3LYP and then at the TZVP/B3LYP level of density-functional theory. CD spectra for these models were calculated by the time-dependent DFT method, with the same functional and basis sets as for the ground-state calculations and Boltzmann weighting of the individual conformers. The good correlation of the calculated and experimental spectra substantiates the interpretation of these conformational models for the structure–odor correlation of musks. Furthermore, the application of the quadrant rule in the estimation of the Cotton effect for macrolide conformers is critically discussed. [source]
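The Boltzmann weighting step is the easiest part of that pipeline to make concrete: populations proportional to exp(-E_i/RT) average the per-conformer spectra into one predicted curve. The relative energies and Gaussian "CD bands" below are invented placeholders, not the computed values from the paper.

```python
# Boltzmann-weighted average of per-conformer CD spectra.
import numpy as np

RT = 0.593                                   # kcal/mol at 298 K
E = np.array([0.0, 0.4, 1.1, 2.3])           # relative conformer energies
wl = np.linspace(185, 260, 200)              # wavelength grid (nm)
spectra = np.array([np.exp(-0.5 * ((wl - c) / 10) ** 2) * s
                    for c, s in [(210, 4), (215, 3), (222, -2), (230, 1)]])

w = np.exp(-E / RT)
w /= w.sum()                                 # Boltzmann populations
cd = w @ spectra                             # population-weighted CD curve
print(np.round(w, 3), cd.max())
```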
Effects of nucleoside reverse transcriptase inhibitor backbone on the efficacy of first-line boosted highly active antiretroviral therapy based on protease inhibitors: meta-regression analysis of 12 clinical trials in 5168 patients
HIV MEDICINE, Issue 9 2009
A Hill
Objectives: Tenofovir/emtricitabine (TDF/FTC) and abacavir/lamivudine (ABC/3TC) are widely used with ritonavir (RTV)-boosted protease inhibitors (PIs) as first-line highly active antiretroviral therapy (HAART), but there is conflicting evidence on their relative efficacy. The ACTG 5202 and BICOMBO trials suggested higher efficacy for TDF/FTC, whereas the HEAT trial showed no efficacy difference between the nucleoside reverse transcriptase inhibitor (NRTI) backbones.
Methods: A systematic MEDLINE search identified 21 treatment arms in 12 clinical trials of 5168 antiretroviral-naïve patients, in which TDF/FTC (n = 3399) or ABC/3TC (n = 1769) was used with an RTV-boosted PI. For each NRTI backbone and RTV-boosted PI, the percentages of patients with viral load <50 HIV-1 RNA copies/mL at week 48 by standardized Intent to Treat, Time to Loss of Virological Response (ITT TLOVR) analysis were combined using inverse-variance weighting. The effects of baseline HIV RNA, CD4 cell count and choice of NRTI backbone were examined using a weighted analysis of covariance.
Results: Across all the trials, HIV RNA suppression rates were significantly higher for those with baseline viral load below 100 000 copies/mL (77.2%) vs. above 100 000 copies/mL (70.9%) (P=0.0005). For the trials of lopinavir/ritonavir (LPV/r), atazanavir/ritonavir (ATV/r) and fosamprenavir/ritonavir (FAPV/r) using either TDF/FTC or ABC/3TC, the HIV RNA responses were significantly lower when ABC/3TC was used, relative to TDF/FTC, for all patients (P=0.0015) and for patients with baseline viral load <100 000 copies/mL (70.1% vs. 80.6%, P=0.0161), and the difference was borderline for those with viral load >100 000 copies/mL (67.5% vs. 71.5%, P=0.0523).
Conclusions: This systematic meta-regression analysis suggests higher efficacy for first-line use of a TDF/FTC NRTI backbone with boosted PIs, relative to use of ABC/3TC. However, this effect may be confounded by differences between the trials in terms of baseline characteristics, patient management or adherence. [source]
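The pooling step is a one-liner once per-arm proportions and variances are in hand: inverse-variance weights w_i = 1/var_i with var_i = p(1-p)/n for a proportion. The trial counts below are invented, not the 12 trials actually pooled in the paper.

```python
# Inverse-variance pooling of per-trial suppression proportions.
import numpy as np

def pool(events, n):
    p = events / n
    var = p * (1 - p) / n
    w = 1.0 / var                     # inverse-variance weights
    p_hat = (w * p).sum() / w.sum()
    se = np.sqrt(1.0 / w.sum())
    return p_hat, 1.96 * se           # pooled rate, 95% CI half-width

tdf = pool(np.array([230, 310, 150]), np.array([300, 400, 200]))
abc = pool(np.array([140, 200]), np.array([200, 300]))
print(tdf, abc)
```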
What Is the Active Ingredients Equation for Success in Executive Coaching?
INDUSTRIAL AND ORGANIZATIONAL PSYCHOLOGY, Issue 3 2009
D. Douglas McKenna
In this response, we address commentator concerns about the generalizability of the active ingredients of psychotherapy to the science and practice of executive coaching. We discuss four ingredients that may make a difference: (a) client characteristics, (b) goals or success criteria, (c) the role of the organization, and (d) the contextual knowledge of the executive coach. We explore how each of these differences is likely to affect the weighting of the four active ingredients in the equation for predicting executive coaching outcomes. From this analysis, we reaffirm our hypotheses that the active ingredients are generalizable to coaching and hold promise for strengthening research and practice. We conclude by highlighting the efforts of several commentators to extend and deepen our hypotheses to other areas of leadership development. [source]

An element-wise, locally conservative Galerkin (LCG) method for solving diffusion and convection–diffusion problems
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 5 2008
C. G. Thomas
An element-wise locally conservative Galerkin (LCG) method is employed to solve the conservation equations of diffusion and convection–diffusion. This approach allows the system of simultaneous equations to be solved over each element, so the traditional assembly of elemental contributions into a global matrix system is avoided. This simplifies the calculation procedure relative to the standard global (continuous) Galerkin method, in addition to explicitly establishing element-wise flux conservation. In the LCG method, elements are treated as sub-domains with weakly imposed Neumann boundary conditions. The LCG method obtains a continuous and unique nodal solution from the surrounding element contributions via averaging. It is also shown in this paper that the proposed LCG method is identical to the standard global Galerkin (GG) method, at both steady and unsteady states, for an interior node. Thus, the method has all the advantages of the standard GG method while explicitly conserving fluxes over each element. Several problems of diffusion and convection–diffusion are solved on both structured and unstructured grids to demonstrate the accuracy and robustness of the LCG method. Both linear and quadratic elements are used in the calculations. For convection-dominated problems, Petrov–Galerkin weighting and high-order characteristic-based temporal schemes have been implemented in the LCG formulation. Copyright © 2007 John Wiley & Sons, Ltd. [source]
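A minimal 1-D sketch conveys the element-wise idea: each element is advanced independently with weakly imposed Neumann data taken from the previous continuous solution, and shared nodes are then averaged into a unique nodal value. This toy explicit, lumped-mass version for pure diffusion is a reduction of the idea under stated assumptions (linear elements, central-difference interface fluxes), not the paper's formulation, though at interior nodes it reproduces the standard lumped Galerkin update.

```python
# Element-wise LCG-style update for u_t = D u_xx, linear elements,
# lumped mass, explicit Euler, Dirichlet ends.
import numpy as np

D, L, ne, dt, nsteps = 1.0, 1.0, 20, 1e-4, 500
h = L / ne
x = np.linspace(0.0, L, ne + 1)
u = np.sin(np.pi * x)            # initial condition, u = 0 at both ends

for _ in range(nsteps):
    # Neumann data at every node from the current continuous solution.
    dudx = np.gradient(u, h)     # central differences in the interior
    contrib = np.zeros_like(u)   # accumulated element contributions
    count = np.zeros_like(u)
    for e in range(ne):          # each element solved independently
        i, j = e, e + 1
        # local stiffness action plus boundary-flux terms (outward normals)
        r_i = -(D / h) * (u[i] - u[j]) - D * dudx[i]
        r_j = -(D / h) * (u[j] - u[i]) + D * dudx[j]
        m = h / 2.0              # lumped local mass
        contrib[i] += u[i] + dt * r_i / m
        contrib[j] += u[j] + dt * r_j / m
        count[i] += 1
        count[j] += 1
    u = contrib / count          # averaging gives the unique nodal solution
    u[0] = u[-1] = 0.0           # strong Dirichlet ends

# Exact solution for comparison: exp(-pi^2 D t) sin(pi x).
print(np.max(np.abs(u - np.exp(-np.pi**2 * D * dt * nsteps) * np.sin(np.pi * x))))
```

At a shared interior node the two interface-flux terms cancel when the element contributions are averaged, which is the mechanism behind the paper's observation that LCG coincides with the global Galerkin method at interior nodes.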
The stabilization properties of fixed and floating exchange rate regimes
INTERNATIONAL JOURNAL OF FINANCE & ECONOMICS, Issue 2 2004
Keith Pilbeam
This paper investigates the price and output stabilization properties of fixed and floating exchange rates using a small open economy model. The performance of the two regimes is compared in the face of money demand, aggregate demand and aggregate supply shocks. It is shown that the ranking of the two regimes is extremely sensitive to the weighting of the objective function as between price and output stability, the type of shock impinging upon the economy, the values of structural parameters of the economy, and institutional features such as the degree of wage indexation within the economy. The results obtained suggest that estimates of the income elasticity of money demand, the elasticity of aggregate demand to changes in both the real exchange rate and the real interest rate, and the degree of openness of the economy are likely to be important to policymakers when making the choice of exchange rate regime. Neither regime can be said to be dominant in all circumstances. [source]

The Richmond Agitation-Sedation Scale: translation and reliability testing in a Swedish intensive care unit
ACTA ANAESTHESIOLOGICA SCANDINAVICA, Issue 6 2010
M. Almgren
Background: Awareness about adequate sedation in mechanically ventilated patients has increased in recent years. The use of a sedation scale to continually evaluate the patient's response to sedation may promote earlier extubation and may subsequently have a positive effect on the length of stay in the intensive care unit (ICU). The Richmond Agitation-Sedation Scale (RASS) provides 10 well-defined levels divided into two segments, including criteria for levels of sedation and agitation. Previous studies of the RASS have shown it to have strong reliability and validity. The aim of this study was to translate the RASS into Swedish and to test the inter-rater reliability of the scale in a Swedish ICU.
Methods: A translation of the RASS from English into Swedish was carried out, including back-translation, critical review and pilot testing. The inter-rater reliability testing was conducted in a general ICU at a university hospital in the south of Sweden, including 15 mechanically ventilated and sedated patients. Forty paired assessments using the Swedish version of the RASS were performed, and the inter-rater reliability was tested using weighted κ statistics (linear weighting).
Results: The translation of the RASS was successful and the Swedish version was found to be satisfactory and applicable in the ICU. When tested for inter-rater reliability, the weighted κ value was 0.86.
Conclusion: This study indicates that the Swedish version of the RASS is applicable, with good inter-rater reliability, suggesting that the RASS can be useful for sedation assessment of mechanically ventilated patients in Swedish general ICUs. [source]
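The reliability statistic used above is available off the shelf: scikit-learn's cohen_kappa_score with weights='linear' computes a linearly weighted κ for paired ordinal ratings. The 40 paired RASS scores below are simulated, not the study's data.

```python
# Linearly weighted Cohen's kappa for paired ordinal RASS ratings.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
rater1 = rng.integers(-5, 5, 40)                        # RASS runs -5 to +4
rater2 = np.clip(rater1 + rng.integers(-1, 2, 40), -5, 4)
print(round(cohen_kappa_score(rater1, rater2, weights='linear'), 2))
```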
Spectral weighting for distributed backward propagation image reconstruction in diffraction tomography
INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 5-6 2008
Hua Lee
The objective of this work is to provide the formulation of the spatial-frequency weighting of distributed filtered backward propagation in multiple-projection diffraction tomography. This formulation provides the precise frequency-weighting characteristics for generalized tomographic data-acquisition configurations. © 2008 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 18, 307–309, 2008 [source]

Approximate knowledge modeling and classification in a frame-based language: The system CAIN
INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 6 2001
Colette Faucher
In this article, we present an extension of the frame-based language Objlog+, called CAIN, which allows the homogeneous representation of approximate knowledge (fuzzy, uncertain, and default knowledge) by means of new facets. We developed elements to manage approximate knowledge: fuzzy operators, extension of the inheritance mechanisms, and weighting of structural links. Contrary to other works in the domain, our system is strongly based on a theoretical approach inspired by Zadeh's and Dubois' work. We also defined an original instance classification mechanism, which takes into account the notions of typicality and similarity as they are presented in the psychological literature. Our model proposes a particular semantics of default values to estimate the typicality between a class and the instance to classify (ITC). In that way, the possibilities of typicality representation offered by frame-based languages are exploited. To find the most appropriate solution, we do not systematically choose the most specific class that matches the ITC; instead, we retain the most typical solution. Approximate knowledge is used to make the matching performed during classification more flexible. Taking into account additional knowledge concerning heuristics and elements of cognitive psychology further enriches the classification mechanism. © 2001 John Wiley & Sons, Inc. [source]

The US National Comorbidity Survey Replication (NCS-R): design and field procedures
INTERNATIONAL JOURNAL OF METHODS IN PSYCHIATRIC RESEARCH, Issue 2 2004
Ronald C. Kessler
The National Comorbidity Survey Replication (NCS-R) is a survey of the prevalence and correlates of mental disorders in the US that was carried out between February 2001 and April 2003. Interviews were administered face-to-face in the homes of respondents, who were selected from a nationally representative multi-stage clustered area probability sample of households. A total of 9,282 interviews were completed in the main survey, and an additional 554 short non-response interviews were completed with initial non-respondents. This paper describes the main features of the NCS-R design and field procedures, including information on fieldwork organization and procedures, sample design, weighting, and considerations in the use of design-based versus model-based estimation. Empirical information is presented on non-response bias, design effect, and the trade-off between bias and efficiency in minimizing the total mean-squared error of estimates by trimming weights. Copyright © 2004 Whurr Publishers Ltd. [source]
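The bias-efficiency trade-off mentioned at the end can be demonstrated in a few lines: capping extreme weights shrinks Kish's design effect (so variance falls) while shifting the weighted estimate (so bias appears) whenever the outcome is related to the weights. Everything below is simulated, not NCS-R data.

```python
# Weight trimming: smaller design effect at the cost of a shifted estimate.
import numpy as np

rng = np.random.default_rng(0)
w = rng.lognormal(0.0, 1.0, 9282)             # post-stratified weights
y = rng.normal(0, 1, 9282) + 0.3 * np.log(w)  # outcome related to weights

def deff(w):                                  # Kish approximation
    return 1 + (w.std() / w.mean()) ** 2

for cap in (np.inf, np.percentile(w, 99), np.percentile(w, 95)):
    wt = np.minimum(w, cap)                   # trim at the cap
    print(round(np.average(y, weights=wt), 3), round(deff(wt), 2))
```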
Evaluations of sex assessment using weighted traits on incomplete skeletal remains
INTERNATIONAL JOURNAL OF OSTEOARCHAEOLOGY, Issue 5 2004
A. Kjellström
Several studies have presented a variety of sexually dimorphic traits on the skeleton, offering possibilities to score these traits for sex determination. However, few have discussed how the fragmentation of skeletons affects the reliability of the results, or how to assess sex attribution based on a variety of methods. In the present study, sex was determined for 354 skeletons from the medieval Swedish town of Sigtuna, using well-recognized sexing techniques on the pelvis, skull and femur. The preservation of the skeletons varied markedly, thus affecting the possibilities for sex assessment. An attempt was made to evaluate the result of the sex assessment when weighting of different traits with different scales was used. The resulting estimate for each individual was called total mean value A. In addition, a total mean value B, which takes unobservable missing traits into account, was estimated. It can be concluded that both weighting and fragmentation affect sex assessments of incomplete skeletons. Copyright © 2004 John Wiley & Sons, Ltd. [source]
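The scoring scheme can be sketched as a weighted mean over ordinal trait scores. How "total mean value B" handles unobservable traits is not fully specified in the abstract, so the variant below, which keeps missing traits in the denominator and thereby pulls the score toward the indeterminate zero point, is a labeled guess; the trait weights and scores are invented.

```python
# Weighted trait scoring on an incomplete skeleton.
import numpy as np

weights = np.array([3, 3, 2, 1, 1])          # e.g. pelvic traits weighted high
scores = np.array([-2, -1, np.nan, 0, 1])    # -2 feminine ... +2 masculine;
obs = ~np.isnan(scores)                      # NaN = trait not preserved

mean_a = np.average(scores[obs], weights=weights[obs])   # "total mean value A"
mean_b = np.nansum(scores * weights) / weights.sum()     # value-B-style variant
print(round(mean_a, 2), round(mean_b, 2))
```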