Used Approach
Selected Abstracts

Bridging Patients to Cardiac Transplantation
CONGESTIVE HEART FAILURE, Issue 5 2000
Michael B. Higginbotham MD
Potential recipients of heart transplants have the most advanced form of congestive heart failure, in which standard therapy fails to maintain clinical stability. In the absence of guidelines derived from evidence obtained in clinical trials, caring for these patients becomes a challenge. A successful approach requires the proper coordination of surgical and nonsurgical strategies, including revascularization and valvular surgery as well as mechanical ventricular support and medical strategies. Intensive medical therapy is the most commonly used approach for prolonged bridging to transplantation. Although carefully individualized regimens are necessary to achieve desired goals, most centers adopt a fairly standardized approach involving vasodilators, diuretics, and inotropic support. Bridging patients with cardiac decompensation to transplantation presents a major therapeutic challenge. Appropriate strategies will maximize patients' chances that the bridge from decompensation to transplantation remains intact.
[source]

Short-term electric power load forecasting using feedforward neural networks
EXPERT SYSTEMS, Issue 3 2004
Heidar A. Malki
Abstract: This paper presents the results of a study on short-term electric power load forecasting based on feedforward neural networks. The study investigates the design components that are critical in power load forecasting, which include the selection of the inputs and outputs from the data, the formation of the training and testing sets, and the performance of the neural network models trained to forecast power load for the next hour and the next day. The experiments are used to identify the combination of the most significant parameters that can be used to form the inputs of the neural networks in order to reduce the prediction error. The prediction error is also reduced by predicting the difference between the power load of the next hour (day) and that of the present hour (day). This is a promising alternative to the commonly used approach of predicting the actual power load. The potential of the proposed method is revealed by its comparison with two existing approaches that utilize neural networks for electric power load forecasting.
[source]
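The abstract above hinges on one design choice: train the network to predict the change in load rather than the absolute load. A minimal sketch of that idea on synthetic hourly data, using scikit-learn's MLPRegressor (the synthetic load series, window length and network size are illustrative assumptions, not the paper's actual setup):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic hourly load: a daily cycle plus noise (stand-in for utility data).
hours = np.arange(24 * 200)
load = 100 + 30 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, hours.size)

# Inputs: the previous 24 hourly loads. Target: the *change* from the
# present hour to the next hour, rather than the next hour's absolute load.
window = 24
X = np.array([load[t - window:t] for t in range(window, load.size)])
y_abs = load[window:]
y_diff = y_abs - load[window - 1:-1]

split = int(0.8 * len(X))
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:split], y_diff[:split])

# Reconstruct the absolute forecast: present load + predicted change.
pred = X[split:, -1] + model.predict(X[split:])
mae = np.abs(pred - y_abs[split:]).mean()
print(f"MAE of difference-based forecast: {mae:.2f}")
```

Because consecutive loads are strongly correlated, the difference target has a much smaller dynamic range than the raw load, which is the intuition behind the reported error reduction.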
Temperature–stress–strain trajectory modelling during thermo-mechanical fatigue
FATIGUE & FRACTURE OF ENGINEERING MATERIALS AND STRUCTURES, Issue 3 2006
M. Nagode
Abstract: The isothermal strain-life approach is the most commonly used approach for determining fatigue damage, particularly when yielding occurs. Computationally it is extremely fast and generally requires only elastic finite element analyses. It has therefore been adapted for variable temperatures. Local temperature–stress–strain behaviour is modelled with an operator of the Prandtl type. The hysteresis loops are assumed to be stabilized, and no creep is considered. The consequences of reversal-point filtering are analysed. The approach is finally compared with several thermo-mechanical fatigue tests and with the Skelton model.
[source]
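An "operator of the Prandtl type", in the Prandtl–Ishlinskii tradition, is a weighted sum of elementary stop (elastic–perfectly-plastic) operators, which reproduces closed, stabilized hysteresis loops from a strain history. A minimal rate-independent sketch under that reading (the yield ranges and weights below are illustrative assumptions, not material data from the paper):

```python
import numpy as np

def prandtl_stress(strain_history, yield_ranges, weights):
    """Stress response of a Prandtl-Ishlinskii model: a weighted sum of
    stop operators (elastic-perfectly-plastic elements), each clipped to
    its own yield range r. Reproduces stabilized hysteresis loops."""
    states = np.zeros(len(yield_ranges))
    stresses = []
    prev = 0.0
    for eps in strain_history:
        d_eps = eps - prev
        # Each element deforms elastically, then saturates at +/- r.
        states = np.clip(states + d_eps, -yield_ranges, yield_ranges)
        stresses.append(float(np.dot(weights, states)))
        prev = eps
    return np.array(stresses)

# Illustrative cyclic strain path traced through three elements.
t = np.linspace(0, 4 * np.pi, 400)
strain = 0.01 * np.sin(t)
sigma = prandtl_stress(strain, np.array([0.002, 0.005, 0.01]),
                       np.array([30e3, 20e3, 10e3]))
print(sigma[:5])
```

Because each stop operator is memoryless beyond its saturation state, the evaluation is cheap, which is consistent with the abstract's emphasis on computational speed.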
Direct removal in the mouse of a floxed neo gene from a three-loxP conditional knockout allele by two novel approaches
GENESIS: THE JOURNAL OF GENETICS AND DEVELOPMENT, Issue 1 2001
Xiaoling Xu
Summary: The presence of the ploxP-neo-loxP cassette in an intron often results in severe interference with gene expression. Consequently, many investigators selectively remove the ploxP-neo-loxP cassette by transient expression of Cre in ES cells. Although effective, the added manipulation of the ES cells may reduce the likelihood that a clone will transmit via the germline. We therefore developed two novel approaches that remove the ploxP-neo-loxP cassette by Cre-mediated recombination in the mouse. First, ploxP-neo-loxP-containing mice were crossed with EIIa-Cre transgenic mice. Second, a Cre-expression plasmid was injected into the pronuclei of fertilized eggs bearing the ploxP-neo-loxP allele. Both approaches produced mosaic mice with partial and complete excision. These mosaic mice were then mated, and the neo-less conditional knockout allele was found in the offspring after screening only a few litters. These procedures provide options for removing neo directly in the mouse, in addition to the commonly used approach that deletes neo in ES cells. genesis 30:1-6, 2001. © 2001 Wiley-Liss, Inc.
[source]

Finite element modelling of free-surface flows with non-hydrostatic pressure and k–ε turbulence model
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 2 2005
C. Leupi
Abstract: Validation of a 3D finite element model for free-surface flow is conducted using a high-quality, high-spatial-resolution data set. Numerical models based on the conventional hydrostatic pressure assumption remain the most widely used approach for the solution of practical engineering problems. However, when a 3D description of the velocity field is required, it is useful to resort to a more accurate model in which the hydrostatic assumption is removed. The present research finds its motivation in the increasing need for efficient management of geophysical flows such as estuaries (multiphase fluid flow) or natural rivers in the presence of short waves, strong bathymetry gradients and/or strong channel curvature. The numerical solution is based on the unsteady Reynolds-averaged Navier–Stokes equations on an unstructured grid. The eddy viscosity is calculated from the efficient k–ε turbulence model. The model uses implicit fractional-step time stepping, and the characteristics method is used to compute the convection terms in the multi-layer system (suitable for vertically stratified fluid flow), in which the vertical grid is located at predefined heights and the number of elements in the water column depends on the water depth. The bottommost and topmost elements of variable height allow a faithful representation of the bed and the time-varying free surface, respectively. The model is applied to 3D open-channel flows of varying complexity, for which experimental data are available for comparison. Computations with and without non-hydrostatic pressure are compared for the same trench to test the validity of the conventional hydrostatic pressure assumption. Good agreement is found between numerical computations and experiments. Copyright © 2005 John Wiley & Sons, Ltd.
[source]

Genetic association analysis: a primer on how it works, its strengths and its weaknesses
INTERNATIONAL JOURNAL OF ANDROLOGY, Issue 6 2008
Laura Rodriguez-Murillo
Summary: Currently, the most widely used approach to mapping disease genes is the genome-wide association study, using large samples of cases and controls and hundreds of thousands of markers spread throughout the genome. This review focuses on explaining how an association study works, its strengths and its weaknesses, and the methods available to analyse the data. Issues related to sample size, genetic effect sizes, epistasis, replication and population stratification are specifically addressed, as these are issues that an investigator must take into account when planning an association study of any complex disease. Finally, we include some special features concerning association studies on the Y chromosome, and we contrast the analysis characteristics of linkage and association.
[source]
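The basic unit of such an association study is a case-control comparison of allele counts at a single marker. A minimal sketch of that test using a chi-square statistic on a 2x2 allele-count table (the counts below are invented for illustration):

```python
from scipy.stats import chi2_contingency

# Hypothetical allele counts at one SNP: rows = cases/controls,
# columns = risk allele / other allele (2N alleles per group).
table = [[620, 380],   # cases
         [540, 460]]   # controls

chi2, p, dof, expected = chi2_contingency(table)
odds_ratio = (table[0][0] * table[1][1]) / (table[0][1] * table[1][0])
print(f"chi2 = {chi2:.2f}, p = {p:.2e}, allelic odds ratio = {odds_ratio:.2f}")
# In a genome-wide scan this p-value must survive correction for hundreds
# of thousands of such tests (a commonly cited threshold is p < 5e-8),
# which is why the abstract stresses sample size and small effect sizes.
```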
Contrasting approaches to statistical regression in ecology and economics
JOURNAL OF APPLIED ECOLOGY, Issue 2 2009
P. R. Armsworth
Summary:
1. Conservation and natural resource management challenges are as much social problems as biological ones. In recognition of this fact, ecologists and economists work increasingly closely together. We discuss one barrier to effective integration of the two disciplines: put simply, many ecologists and economists approach statistical regression differently.
2. Regression techniques provide the most commonly used approach for empirical analyses of land management decisions. Researchers from each discipline attribute differing importance to a range of possibly conflicting design criteria when formulating regression analyses.
3. Ecologists commonly attribute greater importance to spatial autocorrelation and parsimony than do economists when designing regressions. Economists often attribute greater importance than ecologists to concerns about endogeneity and conformance with a priori theoretical expectations.
4. Synthesis and applications. The differing importance attributed to different design characteristics may reflect a process of cultural drift within each discipline. Greater interdisciplinary collaboration can counteract this process by stimulating the flow of ideas and techniques across disciplinary boundaries.
[source]
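Spatial autocorrelation, the design concern the abstract attributes to ecologists, is typically checked by computing Moran's I on the residuals of a fitted regression. A minimal sketch with an inverse-distance weight matrix (the data and the weighting scheme are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical plot locations, one covariate, and a response.
n = 50
coords = rng.uniform(0, 10, (n, 2))
x = rng.normal(size=n)
y = 2.0 + 0.5 * x + rng.normal(size=n)

# Ordinary least squares fit and its residuals.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ beta

# Inverse-distance spatial weights (zero on the diagonal).
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
W = np.where(d > 0, 1.0 / d, 0.0)

# Moran's I of the residuals: values near zero suggest no residual
# spatial autocorrelation; clearly positive values suggest clustering
# that the regression has not accounted for.
I = (n / W.sum()) * (e @ W @ e) / (e @ e)
print(f"Moran's I of OLS residuals: {I:.3f}")
```

An economist auditing the same regression would more likely probe whether x is endogenous (correlated with the error term), which is the contrast the abstract draws.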
Compatibilization and development of layered silicate nanocomposites based on unsaturated polyester resin and customized intercalation agent
JOURNAL OF APPLIED POLYMER SCIENCE, Issue 6 2010
Luigi Torre
Abstract: In this study, a procedure for the preparation of compatibilized nanoclays was used to produce effective nanocomposites based on unsaturated polyester (UP) resin. A procedure for compatibilizing the filler with a selected surfactant was developed and optimized, and the effect of organic modifiers on the properties of the synthesized nanocomposites was studied. Moreover, polyester/clay nanocomposites were prepared. In particular, samples were prepared using two different mixing methods, and the properties and formation processes of the nanocomposites obtained using the two methods were compared. X-ray diffraction studies revealed the formation of intercalated/exfoliated nanocomposite structures. The effect of the processing parameters used for both the compatibilization procedure and the preparation of the nanocomposites was studied. Dynamic mechanical, thermal, and rheological tests were performed to investigate the formation mechanism of the UP/montmorillonite nanocomposite. In particular, the mechanical properties of the nanocomposites were studied using dynamic mechanical analysis and tensile tests. Mechanical, rheological, and thermal characterization confirmed the validity of the approach used to compatibilize the nanoclay and to produce nanocomposites. Tensile strength and Young's modulus were modified by the loading of the organoclays. Furthermore, the rheology of the nanocomposite formulation provided processing information, while mechanical and dynamic mechanical characterization was performed on the nanocomposites produced with the newly compatibilized formulation. The results show that nanocomposites with better mechanical properties can be obtained through the selection of an appropriate compatibilization process. © 2009 Wiley Periodicals, Inc. J Appl Polym Sci, 2010
[source]

Glycoengineering: The effect of glycosylation on the properties of therapeutic proteins
JOURNAL OF PHARMACEUTICAL SCIENCES, Issue 8 2005
Angus M. Sinclair
Abstract: Therapeutic proteins have revolutionized the treatment of many diseases, but low activity or rapid clearance limits their utility. New approaches have been taken to design drugs with enhanced in vivo activity and half-life, to reduce injection frequency, increase convenience, and improve patient compliance. One recently used approach is glycoengineering: changing protein-associated carbohydrate to alter the pharmacokinetic properties of proteins. This technology has been applied to erythropoietin and resulted in the discovery of darbepoetin alfa (DA), a hyperglycosylated analogue of erythropoietin that contains two additional N-linked carbohydrates, a threefold increase in serum half-life and increased in vivo activity compared with recombinant human erythropoietin (rHuEPO). The increased serum half-life allows for less frequent dosing to maintain target hemoglobin levels in anemic patients. Carbohydrates on DA and other molecules can also increase molecular stability and solubility, increase in vivo biological activity, and reduce immunogenicity. These properties are discussed. © 2005 Wiley-Liss, Inc. and the American Pharmacists Association J Pharm Sci 94:1626-1635, 2005
[source]

Epoxycarotenoid esters analysis in intact orange juices using two-dimensional comprehensive liquid chromatography
JOURNAL OF SEPARATION SCIENCE, Issue 7 2009
Paola Dugo
Abstract: In this work, the native carotenoid pattern of different orange juices was studied by LC×LC-DAD/APCI-IT-TOF-MS for the first time. Special attention was given to the epoxycarotenoid components. It has already been proposed that the relative proportions and composition of these epoxycarotenoids can be used to estimate the age and freshness of an orange juice. Rearrangements from 5,6- to 5,8-epoxides can occur with time, partially due to the natural acidity of the juices. Thus, the study of these carotenoids in their intact form, that is, esterified with fatty acids, is of great interest. Besides these, other free carotenoids and carotenoid esters were identified in seven different monovarietal orange juices and a commercial orange juice. Moreover, the higher separation power of the present LC×LC approach allowed a clearer identification of the compounds contained in the sample compared with the more commonly used approach based on C30 stationary phases in conventional LC, thanks to the clearer MS spectra obtained with the higher resolution and degree of separation of LC×LC. This method could also be used to establish authenticity markers among orange varieties, which could potentially be used to prevent or detect adulterations or to establish ripeness indexes.
[source]

Combining etanercept with traditional agents in the treatment of psoriasis: a review of the clinical evidence
JOURNAL OF THE EUROPEAN ACADEMY OF DERMATOLOGY & VENEREOLOGY, Issue 10 2010
PA Foley
Abstract: Psoriasis is a chronic, systemic inflammatory disorder manifesting primarily in the skin and potentially in the joints, frequently necessitating treatment with conventional systemic therapies, phototherapy or biological agents. Patients with moderate to severe disease suffer a diminished quality of life, experience significant comorbidities and have a higher mortality. Although traditional treatments are effective in the short term, their use is often limited by concerns over long-term toxicity, including end-organ damage and risk of malignancy. Combination therapy is a commonly used approach and is often more effective than any single agent. Lower doses of two treatments in combination can also minimize the potential side effects of a single agent at higher doses. Etanercept is a recombinant human tumour necrosis factor (TNF) receptor (p75) protein fused with the Fc portion of IgG1 that binds TNF-α. This article reviews the evidence on the efficacy and safety of etanercept in combination with methotrexate, acitretin, narrowband UVB and cyclosporin. The largest body of evidence assesses the combination with methotrexate, although evidence is available for the other combinations. Data suggest that although etanercept is highly effective as monotherapy, combining it with a conventional systemic agent can enhance efficacy and allow drug sparing. Potentially, the combination may also result in faster treatment responses and permit safe transitioning from one systemic agent to another. Evidence to date suggests that these benefits can be achieved without significant additional toxicity, although long-term data on the efficacy and safety of the combination in psoriatic populations are limited and further evaluation is warranted.
[source]

Comparison of two plant functional approaches to evaluate natural restoration along an old-field–deciduous forest chronosequence
JOURNAL OF VEGETATION SCIENCE, Issue 2 2009
Isabelle Aubin
Abstract:
Question: Are direct and indirect trait-based approaches similarly useful for synthesizing species responses to successional stages?
Location: Northern hardwood forests, Québec, Canada (45°01′-45°08′N; 73°58′-74°21′W).
Methods: Two different trait-based approaches were used to relate plant functional traits to succession on an old-field–deciduous forest chronosequence: (i) a frequently used approach based on the co-occurrence of traits (emergent groups), and (ii) a new version of a direct functional approach at the trait level (the fourth-corner method). Additionally, we selected two different cut-off levels for the herb subset of the emergent group classification in order to test its robustness and ecological relevance.
Results: Clear patterns of trait associations with stand developmental stages emerged from both the emergent group approach and the direct approach at the trait level. However, the emergent group classification was found to hide some trait-level differences, such as a shift in seed size, light requirement and plant form along the chronosequence. Contrasting results were obtained for the seven- and nine-group classifications of the herbaceous subset, illustrating how critical the number of groups is for an emergent group classification.
Conclusion: The simultaneous use of two different trait-based approaches provided a robust and comprehensive characterization of vegetation responses along the old-field–deciduous forest chronosequence. It also underlines the different goals as well as the limitations and benefits of the two approaches. Both approaches indicated that abandoned pastures of the northern hardwood biome have good potential for natural recovery. Conversion of these lands to other functions may lead to irremediable loss of biodiversity.
[source]

Measuring outcomes of United Way-funded programs: Expectations and reality
NEW DIRECTIONS FOR EVALUATION, Issue 119 2008
Michael Hendricks
In 1996, United Way of America (UWA) developed and began disseminating the most widely used approach to program outcome measurement in the nonprofit sector. Today an estimated 450 local United Ways encourage approximately 19,000 local agencies they fund to measure outcomes. The authors first describe and then assess the strengths and limitations of the distinguishing features of the UWA approach, efforts to disseminate the approach, implementation by local United Ways, and actual outcome measurement by local agencies. The chapter ends with a description of United Way's relatively new emphasis on community impact and how that initiative relates to program outcome measurement. © Wiley Periodicals, Inc.
[source]

Truth and validity in grounded theory – a reconsidered realist interpretation of the criteria: fit, work, relevance and modifiability
NURSING PHILOSOPHY, Issue 3 2003
Kirsten Lomborg RN BA MSN
Abstract: Grounded theory is a frequently used approach in nursing research. Over the years the methodology has developed in different directions, with ambiguous answers to questions of truth and validity. This ambiguity influences the interpretation of the criteria for judging the quality of grounded theories: fit, work, relevance and modifiability. In particular, the criterion of fit seems to be caught in a vacuum between different epistemological and ontological positions. Fit can be interpreted either from a realist or from a nonrealist position, but both present problems. A realist position is problematic if it insists on an immutable empirical world and ignores the social and psychological aspects of human life. A nonrealist position can either be argued to rely on hidden realist assumptions and therefore to be inconsistent, or it can be relativistic, opening up the possibility of 'anything goes' attitudes in research and solipsistic confirmations of the world view of researchers with little or misleading practical impact. A reconsideration of the realist position is suggested, in which validity is regulated by socially constructed reality 'as it really is'. From this position, fit is a matter of correspondence to facts in social reality. The criteria work, relevance and modifiability are argued to support the fitness of a theory and to be useful in the broader evaluation of the quality of grounded theories.
[source]

Defining the Bobath concept using the Delphi technique
PHYSIOTHERAPY RESEARCH INTERNATIONAL, Issue 1 2006
Sue Raine
Abstract:
Background and Purpose: The Bobath concept, based on the work of Berta and Karel Bobath, offers therapists working in the field of neurological rehabilitation a framework for their clinical interventions. It is the most commonly used approach in the UK. Although they recognize that over the last half-century the concept has undergone considerable development, proponents of the Bobath concept have been criticized for not publishing these changes. The aim of the present study was to use the Delphi technique to enable experts in the field to define the current Bobath concept.
Method: A four-round Delphi study design was used. The sample included all members of the British Bobath Tutors' Association, who are considered experts in the field. Initial statements were identified from the literature, with respondents generating additional statements during the study. The level of agreement was determined using a five-point Likert scale. The respondents were then provided with feedback on group opinions and given an opportunity to re-rate each statement. The level of group consensus was set at 80%.
Results: Fifteen experts took part. The response rate was 85% in the first round and 93% in each subsequent round. Ten statements from the literature were rated, with a further 12 generated by the experts. Thirteen statements achieved consensus for agreement and seven for disagreement.
Conclusions: The Delphi study was an effective research tool, maintaining the anonymity of responses and exploring expert opinions on the Bobath concept. The experts stated that Bobath's work has been misunderstood if it is considered to be the inhibition of spasticity and the facilitation of normal movement, as described in some literature. They agreed that the Bobath concept was developed by the Bobaths as a living concept, understanding that as therapists' knowledge base grows, their view of treatment broadens. Copyright © 2006 John Wiley & Sons, Ltd.
[source]
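The consensus rule in a Delphi round like the one above reduces to a simple computation: for each statement, the percentage of experts whose Likert ratings fall in the agreement (or disagreement) band, compared against the preset threshold. A minimal sketch (the ratings are invented for illustration; the 80% threshold is the one the study reports):

```python
import numpy as np

def delphi_consensus(ratings, threshold=0.80):
    """Classify each statement from one round of 5-point Likert ratings
    (rows = statements, columns = experts). Consensus for agreement is
    taken here as >= threshold of experts rating 4 or 5; consensus for
    disagreement as >= threshold rating 1 or 2."""
    ratings = np.asarray(ratings)
    agree = (ratings >= 4).mean(axis=1)
    disagree = (ratings <= 2).mean(axis=1)
    verdicts = []
    for a, d in zip(agree, disagree):
        if a >= threshold:
            verdicts.append("consensus: agree")
        elif d >= threshold:
            verdicts.append("consensus: disagree")
        else:
            verdicts.append("no consensus - feed back and re-rate")
    return verdicts

# Three hypothetical statements rated by five experts.
print(delphi_consensus([[5, 4, 4, 5, 4],
                        [1, 2, 1, 2, 2],
                        [3, 4, 2, 5, 1]]))
```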
Conclusion: The simultaneous use of two different trait-based approaches provided a robust and comprehensive characterization of vegetation responses in the old-field , deciduous forest chronosequence. It also underlines the different goals as well as the limitations and benefits of these two approaches. Both approaches indicated that abandoned pastures of the northern hardwood biome have good potential for natural recovery. Conversion of these lands to other functions may lead to irremediable loss of biodiversity. [source] Measuring outcomes of United Way,funded programs: Expectations and realityNEW DIRECTIONS FOR EVALUATION, Issue 119 2008Michael Hendricks In 1996, United Way of America (UWA) developed and began disseminating the most widely used approach to program outcome measurement in the nonprofit sector. Today an estimated 450 local United Ways encourage approximately 19,000 local agencies they fund to measure outcomes. The authors first describe and then assess the strengths and limitations of the distinguishing features of the UWA approach, efforts to disseminate the approach, implementation by local United Ways, and actual outcome measurement by local agencies. The chapter ends with a description of United Way's relatively new emphasis on community impact and how that initiative relates to program outcome measurement. © Wiley Periodicals, Inc. [source] Truth and validity in grounded theory , a reconsidered realist interpretation of the criteria: fit, work, relevance and modifiabilityNURSING PHILOSOPHY, Issue 3 2003Kirsten Lomborg RN BA MSN Abstract Grounded theory is a frequently used approach in nursing research. Over the years the methodology has developed in different directions with ambiguous answers to questions of truth and validity. This ambiguity influences the interpretation of the criteria for quality judgement of grounded theories: fit, work, relevance and modifiability. In particular, the criterion fit seems to be caught in a vacuum between different epistemological and ontological positions. Fit can be interpreted either from a realist or from a nonrealist position but both present problems. A realist position is problematic if it insists on an immutable empirical world and ignores the social and psychological aspects of human life. A nonrealist position can either be argued to rely on hidden realist assumptions and therefore to be inconsistent, or it can be relativistic, opening up the possibility of ,anything goes' attitudes in research and solipsistic confirmations of the world view of researchers with little or misleading practical impact. A reconsideration of the realist position is suggested, in which validity is regulated by the social constructed reality ,as it really is'. From this position fit is a matter of correspondence to facts in social reality. The criteria work, relevance and modifiability are argued to support the fitness of a theory, and to be useful in the broader evaluation of the quality of grounded theories. [source] Defining the Bobath concept using the Delphi techniquePHYSIOTHERAPY RESEARCH INTERNATIONAL, Issue 1 2006Sue Raine Abstract Background and Purpose. The Bobath concept, based on the work of Berta and Karel Bobath, offers therapists working in the field of neurological rehabilitation a framework for their clinical interventions. It is the most commonly used approach in the UK. 
Although they recognize that over the last half-century the concept has undergone considerable developments, proponents of the Bobath concept have been criticized for not publishing these changes. The aim of the present study was to use the Delphi technique to enable experts in the field to define the current Bobath concept. Method. A four-round Delphi study design was used. The sample included all members of the British Bobath Tutor's Association, who are considered experts in the field. Initial statements were identified from the literature, with respondents generating additional statements during the study. The level of agreement was determined using a five-point Likert scale. The respondents were then provided with feedback on group opinions and given an opportunity to re-rate each statement. The level of group consensus was set at 80%. Results. Fifteen experts took part. The response rate was 85% in the first round, and 93% in each subsequent round. Ten statements from the literature were rated with a further 12 generated by the experts. Thirteen statements achieved consensus for agreement and seven for disagreement. Conclusions. The Delphi study was an effective research tool, maintaining anonymity of responses and exploring expert opinions on the Bobath concept. The experts stated that Bobath's work has been misunderstood if it is considered as the inhibition of spasticity and the facilitation of normal movement, as described in some literature. They agreed that the Bobath concept was developed by the Bobaths as a living concept, understanding that as therapists' knowledge base grows their view of treatment broadens. Copyright © 2006 John Wiley & Sons, Ltd. [source] On multiple-class prediction of issuer credit ratingsAPPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 5 2009Ruey-Ching Hwang Abstract For multiple-class prediction, a frequently used approach is based on ordered probit model. We show that this approach is not optimal in the sense that it is not designed to minimize the error rate of the prediction. Based upon the works by Altman (J. Finance 1968; 23:589,609), Ohlson (J. Accounting Res. 1980; 18:109,131), and Begley et al. (Rev. Accounting Stud. 1996; 1:267,284) on two-class prediction, we propose a modified ordered probit model. The modified approach depends on an optimal cutoff value and can be easily applied in applications. An empirical study is used to demonstrate that the prediction accuracy rate of the modified classifier is better than that obtained from usual ordered probit model. In addition, we also show that not only the usual accounting variables are useful for predicting issuer credit ratings, market-driven variables and industry effects are also important determinants. Copyright © 2008 John Wiley & Sons, Ltd. [source] ON SOCIAL LEARNING AND ROBUST EVOLUTIONARY ALGORITHM DESIGN IN THE COURNOT OLIGOPOLY GAMECOMPUTATIONAL INTELLIGENCE, Issue 2 2007Floortje Alkemade Agent-based computational economics (ACE) combines elements from economics and computer science. In this article, the focus is on the relation between the evolutionary technique that is used and the economic problem that is modeled. In the field of ACE, economic simulations often derive parameter settings for the genetic algorithm directly from the values of the economic model parameters. This article compares two important approaches that are dominating in ACE and shows that the above practice may hinder the performance of the genetic algorithm and thereby hinder agent learning. 
More specifically, it is shown that economic model parameters and evolutionary algorithm parameters should be treated separately by comparing the two widely used approaches to social learning with respect to their convergence properties and robustness. This leads to new considerations for the methodological aspects of evolutionary algorithm design within the field of ACE. [source] Spreadsheet for cyclone and hydrocyclone design considering nonspherical particle geometryCOMPUTER APPLICATIONS IN ENGINEERING EDUCATION, Issue 2 2007Marcelo K. Lenzi Abstract Cyclones and hydrocyclones are widely used for gas,solid and liquid,solid particle separations, respectively. The key feature is the presence of a centrifugal field. Equilibrium zone concept is one of the most used approaches for equipment design. This work revisits the design equations for nonspherical particles and compares design results considering nonspherical geometry to results considering spherical geometry and correcting the values to the non-spherical particle geometry. A didactic spreadsheet was prepared for analysis and instruction, and depending on the particle shape and on the particle orientation, errors up to 38% may be obtained in the cut diameter and 10% in the global collection efficiency. © 2007 Wiley Periodicals, Inc. Comput Appl Eng Educ. 15: 134,142, 2007; Published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/cae.20102 [source] Characterization of glyco isoforms in plasmaderived human antithrombin by on-line capillary zone electrophoresis-electrospray ionization-quadrupole ion trap-mass spectrometry of the intact glycoproteinsELECTROPHORESIS, Issue 13 2004Uwe M. Demelbauer Abstract The carbohydrate structures of five isoforms of ,-AT and two isoforms of ,-AT were determined by applying capillary zone electrophoresis (CZE) on-line coupled to electrospray ionization-mass spectrometry (ESI-MS) using an ion-trap analyzer. For the AT preparations gained from a plasma pool at least semiquantitative information on the isoform-distributions could be gained. Unlike to the commonly used approaches starting from enzymatically treated glycoproteins, this approach deals with intact proteins. The high accuracy of the molecular mass determination obtained by the ion-trap analyzer allows one to calculate and ascertain the carbohydrate composition assuming no variations in the protein moiety of AT and to exclude or confirm the presence of the potential post-translational or other modifications. Therefore, the direct coupling of CZE with ESI-MS does not only represent a fast alternative technique to two-dimensional electrophoresis (2-DE) but serves as a method which provides structural information complementary to that gained from peptide mapping methods. [source] Performance analysis of optically preamplified DC-coupled burst mode receiversEUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 3 2009T. J. Zuo Bit error rate and threshold acquisition penalty evaluation is performed for an optically preamplified DC-coupled burst mode receiver using a moment generating function (MGF) description of the signal plus noise. The threshold itself is a random variable and is also described using an appropriate MGF. Chernoff bound (CB), modified Chernoff bound (MCB) and the saddle-point approximation (SPA) techniques make use of the MGF to provide the performance analyses. 
Characterization of glyco isoforms in plasma-derived human antithrombin by on-line capillary zone electrophoresis-electrospray ionization-quadrupole ion trap-mass spectrometry of the intact glycoproteins
ELECTROPHORESIS, Issue 13 2004
Uwe M. Demelbauer
Abstract: The carbohydrate structures of five isoforms of α-AT and two isoforms of β-AT were determined by applying capillary zone electrophoresis (CZE) coupled on-line to electrospray ionization-mass spectrometry (ESI-MS) using an ion-trap analyzer. For AT preparations obtained from a plasma pool, at least semiquantitative information on the isoform distributions could be gained. Unlike the commonly used approaches that start from enzymatically treated glycoproteins, this approach deals with intact proteins. The high accuracy of the molecular mass determination obtained with the ion-trap analyzer allows one to calculate and ascertain the carbohydrate composition, assuming no variations in the protein moiety of AT, and to exclude or confirm the presence of potential post-translational or other modifications. The direct coupling of CZE with ESI-MS therefore not only represents a fast alternative to two-dimensional electrophoresis (2-DE) but also serves as a method that provides structural information complementary to that gained from peptide mapping methods.
[source]

Performance analysis of optically preamplified DC-coupled burst mode receivers
EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 3 2009
T. J. Zuo
Abstract: Bit error rate and threshold acquisition penalty evaluation is performed for an optically preamplified DC-coupled burst mode receiver, using a moment generating function (MGF) description of the signal plus noise. The threshold itself is a random variable and is also described using an appropriate MGF. Chernoff bound (CB), modified Chernoff bound (MCB) and saddle-point approximation (SPA) techniques make use of the MGF to provide the performance analyses. This represents the first time that these widely used approaches to receiver performance evaluation have been applied to an optically preamplified burst mode receiver, and it is shown that they give threshold acquisition penalty results in good agreement with a prior existing approach, whilst having the facility to incorporate arbitrary receiver filtering, receiver thermal noise and non-ideal extinction ratio. A traditional Gaussian approximation (GA) is also calculated, and comparison shows that it is clearly less accurate (it exceeds the upper bounds provided by the CB and MCB) in the realistic cases examined. It is deduced, in common with the equivalent continuous mode analysis, that the MCB is the most sensible approach. Copyright © 2009 John Wiley & Sons, Ltd.
[source]
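The bounds named above all start from the same object: if M(s) = E[e^{sY}] is the MGF of the decision variable Y, the Chernoff bound gives P(Y ≥ D) ≤ min over s > 0 of e^{−sD} M(s), and the MCB tightens this by a factor 1/(s σ √(2π)), with σ the Gaussian (thermal) noise standard deviation. A minimal numerical sketch for a purely Gaussian decision variable, where the exact tail is available for comparison (the distribution and threshold are illustrative; the paper's MGFs describe optically preamplified signal plus noise instead):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

mu, sigma, D = 0.0, 1.0, 5.0           # Gaussian decision variable, threshold

def mgf(s):                             # MGF of N(mu, sigma^2)
    return np.exp(mu * s + 0.5 * (sigma * s) ** 2)

# Chernoff bound: P(Y >= D) <= min_{s>0} exp(-s*D) * M(s)
cb = minimize_scalar(lambda s: np.exp(-s * D) * mgf(s),
                     bounds=(1e-6, 50), method="bounded")
# Modified Chernoff bound: same expression divided by s*sigma*sqrt(2*pi)
mcb = minimize_scalar(
    lambda s: np.exp(-s * D) * mgf(s) / (s * sigma * np.sqrt(2 * np.pi)),
    bounds=(1e-6, 50), method="bounded")

exact = norm.sf((D - mu) / sigma)       # exact tail for the Gaussian case
print(f"exact tail:     {exact:.3e}")
print(f"Chernoff bound: {cb.fun:.3e}  (at s = {cb.x:.2f})")
print(f"modified bound: {mcb.fun:.3e}  (at s = {mcb.x:.2f})")
```

Running this shows the pattern the abstract reports in its more elaborate setting: the MCB sits far closer to the exact tail than the plain CB while remaining a true upper bound.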
Multiaxial fatigue of rubber: Part I: equivalence criteria and theoretical aspects
FATIGUE & FRACTURE OF ENGINEERING MATERIALS AND STRUCTURES, Issue 6 2005
W. V. Mars
Abstract: This paper investigates commonly used approaches to fatigue crack nucleation analysis in rubber, including the maximum principal strain (or stretch), strain energy density and octahedral shear strain criteria. The ability of these traditional equivalence criteria, as well as of a more recent criterion (the cracking energy density), to predict multiaxial fatigue behaviour is explored. Theoretical considerations relating to the applicability of the various fatigue life analysis approaches are also introduced. These include the scalar nature of traditional equivalence criteria, the robustness of the criteria investigated over a wide range of multiaxial loadings, the effects of crack closure, and application to non-proportional multiaxial loadings. It is shown that the notion of a stress or strain amplitude tensor, as used in the analysis of multiaxial loading in metals, is not appropriate in the analysis of rubber, owing to the nonlinearity associated with finite strains and near incompressibility. Taken together, these considerations illustrate that traditional criteria are not sufficiently consistent or complete to permit confident analysis of arbitrary multiaxial loading histories, and that an analysis approach specific to the failure plane is needed. Of the three traditional criteria, maximum principal strain is shown to match the cracking energy density criterion most closely, in terms of the failure locus in principal stretch space.
[source]

Distinguishing between heterogeneity and inefficiency: stochastic frontier analysis of the World Health Organization's panel data on national health care systems
HEALTH ECONOMICS, Issue 10 2004
William Greene
Abstract: The most commonly used approaches to parametric (stochastic frontier) analysis of efficiency in panel data, notably the fixed and random effects models, fail to distinguish between cross-individual heterogeneity and inefficiency. This blending of effects is particularly problematic in the World Health Organization's (WHO) panel data set on health care delivery, which is a 191-country, 5-year panel. The wide variation in the cultural and economic characteristics of the worldwide sample produces a large amount of unmeasured heterogeneity in the data. This study examines several alternative approaches to stochastic frontier analysis with panel data and applies some of them to the WHO data. A more general, flexible model and several measured indicators of cross-country heterogeneity are added to the analysis done by previous researchers. Results suggest that there is considerable heterogeneity that has masqueraded as inefficiency in other studies using the same data. Copyright © 2004 John Wiley & Sons, Ltd.
[source]
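The building block behind all the panel variants discussed above is the cross-sectional stochastic frontier: output is y = Xβ + v − u with two-sided noise v ~ N(0, σv²) and one-sided inefficiency u ~ |N(0, σu²)|, so the composed error ε = v − u has the Aigner–Lovell–Schmidt density f(ε) = (2/σ) φ(ε/σ) Φ(−ελ/σ), where σ² = σu² + σv² and λ = σu/σv. A minimal maximum-likelihood sketch on synthetic data (data, parameterization and starting values are illustrative, not the WHO analysis):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, halfnorm

rng = np.random.default_rng(4)

# Synthetic production data: y = 1 + 0.6*x + v - u
n = 500
x = rng.normal(size=n)
v = rng.normal(0, 0.2, n)
u = halfnorm.rvs(scale=0.4, size=n, random_state=5)
y = 1.0 + 0.6 * x + v - u
X = np.column_stack([np.ones(n), x])

def neg_loglik(theta):
    b0, b1, log_su, log_sv = theta
    su, sv = np.exp(log_su), np.exp(log_sv)
    sigma = np.hypot(su, sv)
    lam = su / sv
    eps = y - X @ np.array([b0, b1])
    # Aigner-Lovell-Schmidt half-normal frontier log-density.
    ll = (np.log(2.0 / sigma) + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

res = minimize(neg_loglik, x0=[0.0, 0.0, np.log(0.3), np.log(0.3)],
               method="Nelder-Mead")
b0, b1, log_su, log_sv = res.x
print(f"beta: ({b0:.2f}, {b1:.2f}), sigma_u: {np.exp(log_su):.2f}, "
      f"sigma_v: {np.exp(log_sv):.2f}")
```

The panel problem the abstract addresses arises when a country-specific constant is added to this model: a fixed or random effect absorbs everything persistent, so genuine cultural or institutional heterogeneity is read as inefficiency (or vice versa) unless the model separates the two channels.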
Nonlinear approaches to the design of microwave power amplifiers
INTERNATIONAL JOURNAL OF RF AND MICROWAVE COMPUTER-AIDED ENGINEERING, Issue 6 2004
Paolo Colantonio
Abstract: Commonly used approaches to the design of power amplifiers (PAs), specifically in the microwave and millimeter-wave frequency range, are reviewed and discussed. Measurement-based techniques are compared with CAD-based approaches, stressing their relative strengths and weaknesses. Simplified techniques are also discussed, particularly those addressing the preliminary evaluation of the power capabilities of a given device and used to gather physical insight into the power-generating mechanisms. Finally, harmonic tuning for high-efficiency power amplifier design is outlined, together with its basic application rules. © 2004 Wiley Periodicals, Inc. Int J RF and Microwave CAE 14: 493-506, 2004.
[source]

Biological Indicator Systems in Floodplains – a Review
INTERNATIONAL REVIEW OF HYDROBIOLOGY, Issue 4 2006
Frank Dziock
Abstract: Based on a literature review, the different approaches to biological indicator systems in floodplains are summarised. Four general categories of bioindication are defined and proposed here: 1. classification indicators; 2.1 environmental indicators; 2.2 biodiversity indicators; 3. valuation indicators. Existing approaches in floodplains are classified according to these four categories, and the relevant and widely used approaches in floodplains are explained in more detail. The results of the RIVA project are put into the context of these indication approaches. It is concluded that functional assessment approaches using biological traits of the species in particular can be seen as very promising and deserve more attention from conservation biologists and floodplain ecologists. (© 2006 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
[source]

A 15N-aided artificial atmosphere gas flow technique for online determination of soil N2 release using the zeolite Köstrolith SX6®
RAPID COMMUNICATIONS IN MASS SPECTROMETRY, Issue 22 2006
Oliver Spott
Abstract: N2 is one of the major gaseous nitrogen compounds released by soils as a result of N-transformation processes. Since it is also the major constituent of the earth's atmosphere (78.08 vol.%), the determination of soil N2 release remains one of the main methodological challenges for a complete evaluation of the gaseous N-loss of soils. Commonly used approaches are based either on a C2H2 inhibition technique, an artificial atmosphere or a 15N-tracer technique, and are designed either as closed systems (non-steady state) or as gas flow systems (steady state). The intention of this work has been to upgrade the current gas flow technique using an artificial atmosphere for a 15N-aided determination of soil N2 release simultaneously with N2O. A 15N-aided artificial atmosphere gas flow approach has been developed which allows simultaneous online determination of N2 and N2O fluxes from an open soil system (steady state). Fluxes of both gases can be determined continuously over long incubation periods and with high sampling frequency. The N2-selective molecular sieve Köstrolith SX6® was tested successfully for the first time for dinitrogen collection. The present paper focuses mainly on N2 flux determination. For validation purposes, soil aggregates of a Haplic Phaeozem were incubated under aerobic (21 and 6 vol.% O2) and anaerobic conditions. Significant amounts of N2 were released only during anaerobic incubation (0.4 and 640.2 pmol N2 h-1 g-1 dry soil). However, some N2 formation also occurred during aerobic incubation. It was also found that, during ongoing denitrification, introduced NO3- is delivered to microorganisms more readily than the original soil NO3-. Copyright © 2006 John Wiley & Sons, Ltd.
[source]

The quality index for radar precipitation data: a tower of Babel?
ATMOSPHERIC SCIENCE LETTERS, Issue 2 2010
Thomas Einfalt
Abstract: One quantitative metric of the quality of radar measurements of precipitation is the quality index (QI): a field of numbers whose values depend on the quality. This approach is used operationally in some national meteorological services. Difficulties in using the approach arise from hardware and software differences and from the continuous improvement of quality control algorithms. An overall review of commonly used approaches and the associated difficulties is given. The challenges in hydrological applications using the QI are listed, as the technique is used to generate precipitation field ensembles. Recommendations for future common considerations are suggested. Copyright © 2010 Royal Meteorological Society
[source]

Out of the Loop: Why Research Rarely Reaches Policy Makers and the Public and What Can be Done
BIOTROPICA, Issue 5 2009
Patricia Shanley
Abstract: Most of the world's population that derives its livelihood, or part of its livelihood, from forests is out of the information loop. The exclusion of public users of natural resources from access to scientific research results is not an oversight; it is a systemic problem that has costly ramifications for conservation and development. Results of a survey of 268 researchers from 29 countries indicate that institutional incentives support the linear, top-down communication of results through peer-reviewed journal articles, which often guarantees positive performance measurement. While the largest percentage of respondents (34%) ranked scientists as the most important audience for their work, only 15 percent of respondents considered peer-reviewed journals effective in promoting conservation and/or development. Respondents perceived that local initiatives (27%) and training (16%) were likely to lead to success in conservation and development, but few scientists invest in these activities. Engagement with the media (5%), production of training and educational materials (4%) and popular publications (5%) as outlets for scientific findings were perceived as inconsequential (<14%) in measuring scientific performance. Less than 3 percent of respondents ranked corporate actors as an important audience for their work. To ensure that science is shared with those who need it, a shift in incentive structures is needed that rewards actual impact rather than only 'high-impact' journals. Widely used approaches and theoretical underpinnings from the social sciences, which underlie popular education and communication for social change, could enhance communication by linking knowledge and action in conservation biology.
[source]