Distribution by Scientific Domains

Kinds of Reference

  • additional reference
  • bibliographic reference
  • desk reference
  • direct reference
  • explicit reference
  • first reference
  • growth reference
  • literature reference
  • little reference
  • national reference
  • particular reference
  • physician desk reference
  • primary reference
  • relevant reference
  • secondary reference
  • special reference
  • specific reference
  • standard reference
  • useful reference

Terms modified by Reference

  • reference adaptive control
  • reference area
  • reference arm
  • reference case
  • reference category
  • reference cell
  • reference centre
  • reference collection
  • reference compound
  • reference condition
  • reference configuration
  • reference curve
  • reference data
  • reference data set
  • reference database
  • reference dataset
  • reference diameter
  • reference diet
  • reference distribution
  • reference dose
  • reference drug
  • reference electrode
  • reference equation
  • reference family
  • reference formulation
  • reference frame
  • reference gene
  • reference genome
  • reference group
  • reference groups
  • reference guide
  • reference image
  • reference input
  • reference intake
  • reference interval
  • reference laboratory
  • reference level
  • reference limit
  • reference line
  • reference list
  • reference map
  • reference material
  • reference measurement
  • reference method
  • reference methods
  • reference model
  • reference models
  • reference pattern
  • reference peak
  • reference period
  • reference point
  • reference population
  • reference price
  • reference prior
  • reference product
  • reference protein
  • reference range
  • reference resource
  • reference sample
  • reference section
  • reference sequence
  • reference services
  • reference set
  • reference signal
  • reference site
  • reference soil
  • reference solution
  • reference species
  • reference specimen
  • reference spectrum
  • reference standard
  • reference standards
  • reference state
  • reference strain
  • reference stream
  • reference structure
  • reference substance
  • reference system
  • reference technique
  • reference temperature
  • reference test
  • reference tracking
  • reference trajectory
  • reference treatment
  • reference unit
  • reference value
  • reference vessel diameter

Selected Abstracts

    IV. Three Moments in the Theory of Definition or Analysis: Its Possibility, Its Aim or Aims, and Its Limit or Terminus

    David Wiggins
    The reflections recorded in this paper arise from three moments in the theory of definition and of conceptual analysis. The moments are: (I) Frege's (1894) review of Husserl's Philosophy of Arithmetic (Vol. I), the discussion there of the paradox of analysis, and the division that Frege marks, ensuing upon his distinction of Sinn/sense from Bedeutung/reference, between two different conceptions of definition; (II) Leibniz's still serviceable account (1684, 1704) of a distinction between the clarity and the distinctness of ideas, a distinction that prompts the suggestion that the guiding purpose of lexical definition is Leibnizian clarity whereas that of real definition (as Aristotle has us conceive it) is inseparable from the pursuit of Leibnizian distinctness; (III) Leibniz's speculations (1679) concerning the limit or terminus of analysis. The apparent failure of these speculations, casting doubt as it does upon the aspirations that give rise to them (aspirations not necessarily or entirely alien to the Zeitgeist of our own epoch), points to the long-standing need to reconfigure the philosophical business of enquiry into concepts. [source]


    ECONOMIC INQUIRY, Issue 2 2008
    This study adds to the limited literature on the demand for casino gaming. The major focus is on the effect of a statewide smoking ban. A system of slot machine demand equations, one each for the three Delaware racinos (racetrack casinos), was developed. The number of slot machines at a racino, at competing in-state racinos, and income were significant demand determinants. Competing out-of-state gaming venues had insignificant effects on gaming demand over the study period. The smoking ban had a significant negative impact on demand, which was not significantly different across the three racinos. The smoking ban reduced gaming demand by 15.9%. (JEL L83) [source]
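The log-linear demand framework behind results like these can be sketched in miniature: regress log slot-machine demand on its determinants plus a smoking-ban dummy, then convert the dummy coefficient b into a percentage demand change via (e^b − 1) × 100. Every number, variable name and elasticity below is invented for illustration; the study's actual equation system and Delaware data are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly observations for one hypothetical racino.
n = 120
log_machines = rng.normal(7.0, 0.1, n)    # log number of slot machines
log_income = rng.normal(10.0, 0.05, n)    # log regional income
ban = (np.arange(n) >= 80).astype(float)  # smoking-ban dummy (last 40 months)

# Generate log demand with assumed elasticities and a -0.16 ban coefficient.
log_demand = (2.0 + 1.2 * log_machines + 0.8 * log_income
              - 0.16 * ban + rng.normal(0, 0.02, n))

# OLS estimation of the single demand equation.
X = np.column_stack([np.ones(n), log_machines, log_income, ban])
beta, *_ = np.linalg.lstsq(X, log_demand, rcond=None)

# In a log-linear model, a dummy coefficient b implies a
# (exp(b) - 1) * 100 percent change in demand when the dummy is on.
ban_effect_pct = (np.exp(beta[3]) - 1) * 100
print(f"estimated ban effect: {ban_effect_pct:.1f}%")
```

With the assumed coefficient of −0.16, the implied demand reduction is close to the 15.9% order of magnitude the abstract reports, though the match is a construction of the synthetic data, not a replication.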


    Theodor Ahrens
    First page of article [source]


    Mitoshi YAMAGUCHI
    JEL codes: O11; O13; O41; Q18. The agricultural sector of Sri Lanka reacted sharply to the highly contentious policy reforms called Structural Adjustment Programs. We used a four-sector general equilibrium model under a growth accounting approach to find out the effect of the policy (exogenous) variables on the target (endogenous) variables. Here, we considered only the most important variables, and the overall results indicate that policy changes are favorable to overall agricultural development, although their effect on the domestic food sector is negative. The most serious negative determinant under the policy changes relates to fertilizer, and our study indicates that fertilizer prices considerably affect agricultural production; in particular, they have a negative effect on domestic food production. Second, this paper analyzes the impact of nonagricultural prices, finding that they positively helped the development of overall agriculture. Third, agricultural exports increased under the new policy reforms and made large contributions to agricultural production. [source]
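The growth-accounting approach mentioned above can be illustrated with a toy decomposition: output growth is split into factor-share-weighted input growth rates, and the unexplained remainder is attributed to total factor productivity. The shares and growth rates below are invented for illustration, not taken from the Sri Lankan study.

```python
# Toy growth-accounting decomposition for a single hypothetical
# agricultural sector. All shares and growth rates are assumptions.
factor_shares = {"land": 0.2, "labour": 0.4, "capital": 0.25, "fertilizer": 0.15}
factor_growth = {"land": 0.01, "labour": 0.015, "capital": 0.03, "fertilizer": -0.04}

output_growth = 0.02  # observed output growth rate (assumed)

# Each factor contributes (share * own growth rate) to output growth.
contributions = {f: factor_shares[f] * factor_growth[f] for f in factor_shares}

# Whatever output growth the factors cannot explain is the TFP residual.
tfp_residual = output_growth - sum(contributions.values())

for f, c in contributions.items():
    print(f"{f}: {c:+.4f}")
print(f"TFP residual: {tfp_residual:+.4f}")
```

Note how a negative fertilizer growth rate drags down the explained part of output growth, mirroring the abstract's finding that fertilizer was the most serious negative determinant.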


    Frank Jackson
    First page of article [source]


    Robin Jeshion
    First page of article [source]

    HCV-RNA in Sural Nerve from HCV-Infected Patients with Peripheral Neuropathy

    L De Martino
    OBJECTIVE: Evaluation of hepatitis C virus (HCV) by reverse transcription-polymerase chain reaction (RT-PCR) in peripheral nerve tissues from HCV-infected patients with peripheral neuropathy. METHODS: RT-PCR was performed on homogenates of nerve biopsies from 17 consecutive HCV-positive patients with peripheral neuropathy, with or without mixed cryoglobulinemia, hospitalised from 1996 to 2000. Sural nerve specimens were frozen in isopentane pre-cooled in liquid nitrogen and stored at −80°C until use. RNA was extracted from ten 7-µm-thick cryostat sections or from a nerve trunk specimen about 3 mm in length, collected from each biopsy. Three different protocols of RNA extraction were tested (1–3). Complementary DNAs (cDNAs) were obtained with or without addition of RNasin (Promega, Madison, WI) to the reaction mixture to inhibit residual RNase activity. Two sets of commercially available PCR primers for the outer and the nested reaction were used. PCR products were analysed by agarose gel electrophoresis and ethidium bromide staining. Serum samples and liver specimens from proven HCV-positive patients served as positive controls, whereas sera from healthy subjects were negative controls. RESULTS: A sufficient amount of RNA could be obtained from either cryostat sections or whole nerve specimens. Extraction by Trizol (Gibco-BRL) gave the best concentration and purity of RNA as assessed by biophotometry. The presence of RNasin did not improve cDNA synthesis. The resulting amplification product of the nested PCR was 187 bp long. We always observed this product in our positive controls and never in the negative controls. Six samples from patients either with or without cryoglobulinemia were positive; 7 were negative. Four samples gave variable results. 
CONCLUSIONS: While 40% of the nerves in our series were undoubtedly HCV-positive, the cause(s) of the negative and variable results in the remaining samples are likely more complex than variations in the detection protocols and deserve further investigation. REFERENCES: 1) Chomczynski P, Sacchi N (1987). Anal Biochem 162:156. 2) Marquardt O et al. (1996). Med Microbiol Lett 5:55. 3) Chomczynski P (1993). BioTechniques 15:532. [source]

    Quantification of metabolites in breast cancer patients with different clinical prognosis using HR MAS MR spectroscopy

    NMR IN BIOMEDICINE, Issue 4 2010
    Beathe Sitter
    Abstract Absolute quantitative measures of breast cancer tissue metabolites can increase our understanding of biological processes. Electronic REference To access In vivo Concentrations (ERETIC) was applied to high resolution magic angle spinning MR spectroscopy (HR MAS MRS) to quantify metabolites in intact breast cancer samples. The ERETIC signal was calibrated using solutions of creatine and TSP. The largest relative error of the ERETIC method was 8.4%, compared to 4.4% for the HR MAS MRS method using TSP as a standard. The same MR experimental procedure was applied to intact tissue samples from breast cancer patients with clinically defined good (n = 13) and poor (n = 16) prognosis. All samples were examined by histopathology for relative content of different tissue types and proliferation index (MIB-1) after MR analysis. The resulting spectra were analyzed by quantification of tissue metabolites (β-glucose, lactate, glycine, myo-inositol, taurine, glycerophosphocholine, phosphocholine, choline and creatine), by peak area ratios and by principal component analysis. We found a trend toward lower concentrations of glycine in patients with good prognosis (1.1 µmol/g) compared to patients with poor prognosis (1.9 µmol/g, p = 0.067). Tissue metabolite concentrations (except for β-glucose) were also found to correlate with the fraction of tumor, connective, fat or glandular tissue by Pearson correlation analysis. Tissue concentrations of β-glucose correlated with proliferation index (MIB-1) with a negative correlation factor (−0.45, p = 0.015), consistent with increased energy demand in proliferating tumor cells. By analyzing several metabolites simultaneously, either in ratios or by metabolic profiles analyzed by PCA, we found that tissue metabolites correlate with patients' prognoses and health status five years after surgery. 
This study shows that the diagnostic and prognostic potential in MR metabolite analysis of breast cancer tissue is greater when combining multiple metabolites (MR Metabolomics). Copyright © 2010 John Wiley & Sons, Ltd. [source]
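The core of ERETIC-style quantification can be sketched simply: once the synthetic reference peak has been assigned an equivalent concentration by calibration, a metabolite's tissue concentration follows from its peak area relative to the reference peak area. All numbers below are hypothetical, chosen only so the outputs echo the glycine values quoted in the abstract; they are not the study's data.

```python
# Hypothetical calibration: the ERETIC peak is assigned an equivalent
# concentration (per gram of tissue) from standard solutions.
eretic_equiv_conc = 5.0  # µmol/g (assumed)
eretic_area = 1000.0     # integrated ERETIC peak area (assumed)

def quantify(peak_area):
    """Metabolite concentration in µmol/g from its integrated peak area."""
    return peak_area / eretic_area * eretic_equiv_conc

# Hypothetical glycine peak areas for two samples.
glycine_poor = quantify(380.0)  # poor-prognosis sample
glycine_good = quantify(220.0)  # good-prognosis sample
print(glycine_poor, glycine_good)
```

The point of the external electronic reference is that, unlike an added chemical standard such as TSP, it does not interact with the tissue and its area is stable across samples once calibrated.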

    Electrical Alternans: An Echocardiographic Visual Reference

    Craig A. Sisson MD
    No abstract is available for this article. [source]

    God, the Mind's Desire: Reference, Reason and Christian Thinking

    Article first published online: 11 OCT 200
    Books reviewed: Paul D. Janz, God, the Mind's Desire: Reference, Reason and Christian Thinking Reviewed by Louise Hickman [source]


    CYTOPATHOLOGY, Issue 2006
    D.R. Bolick
    Liquid-based Pap (LBP) specimen adequacy is a highly documented, yet poorly understood cornerstone of our GYN cytology practice. Each day, as cytology professionals, we make adequacy assessments and seldom wonder how the criteria we use were established. Are the criteria appropriate? Are they safe? What are the scientific data that support them? Were they clinically and statistically tested or refined to achieve optimal patient care? In this presentation, we will take a fresh look at what we know about Pap specimen adequacy and challenge some of the core assumptions of our daily practice. LBP tests have a consistent, well-defined surface area for screening, facilitating quantitative estimates of slide cellularity. This provides an unprecedented opportunity to establish reproducible adequacy standards that can be subjected to scientific scrutiny and rigorous statistical analysis. Capitalizing on this opportunity, the TBS2001 took the landmark step to define specimen adequacy quantitatively, and set the threshold for a satisfactory LBP at greater than 5,000 well-visualized squamous epithelial cells. To date, few published studies have attempted to evaluate the validity or receiver operating characteristics for this threshold, define an optimal threshold for clinical utility or assess risks of detection failure in 'satisfactory' but relatively hypocellular Pap specimens. Five years of cumulative adequacy and cellularity data of prospectively collected Pap samples from the author's laboratory will be presented, which will serve as a foundation for a discussion on 'Pap failure'. A relationship between cellularity and detection of HSIL will be presented. Risk levels for Pap failure will be presented for Pap samples of different cellularities. The effect of different cellularity criteria on unsatisfactory Pap rates and Pap failure rates will be demonstrated. 
Results from this data set raise serious questions as to the safety of current TBS2001 adequacy guidelines and suggest that the risk of Pap failure in specimens with 5,000 to 20,000 squamous cells on the slide is significantly higher than assumed by the current criteria. TBS2001 designated all LBP to have the same adequacy criterion. Up to this point, it has been assumed that ThinPrep, SurePath, or any other LBP would be sufficiently similar that they should have the same adequacy criteria. Data for squamous cellularity and other performance characteristics of ThinPrep and SurePath from the author's laboratory will be compared. Intriguing data involving the recently approved MonoPrep Pap Test will be reviewed. MonoPrep clinical trial data show the unexpected finding of a strong correlation between abundance of endocervical component and the detection of high-grade lesions, prompting inquiry into a potential new role for a quantitative assessment of the transformation zone component. The current science of LBP adequacy criteria is underdeveloped and does not appear to be founded on statistically valid methods. This condition calls us forward as a body of practitioners and scientists to rigorously explore, clarify and define the fundamental nature of cytology adequacy. As we forge this emerging science, we will improve diagnostic performance, guide the development of future technologies, and better serve the patients who give us their trust. Reference: Birdsong GG. Pap smear adequacy: is our understanding satisfactory? Diagn Cytopathol 2001 Feb;24(2):79–81. [source]


    CYTOPATHOLOGY, Issue 2006
    M. Salto-Tellez
    Molecular diagnosis is the application of molecular biology techniques and knowledge of the molecular mechanisms of disease to the diagnosis, prognostication and treatment of diseases. Molecular diagnosis is, arguably, the fastest growing area of diagnostic medicine. The US market for molecular testing generated $1.3 billion in 2000, which was predicted to increase to about $4.2 billion by 2007 [1]. We proposed the term Diagnostic Molecular Cytopathology to define the application of molecular diagnosis to cytopathology [2]. Diagnostic Molecular Cytopathology is essential for the following reasons: (i) Molecular testing is sometimes indispensable to establish an unequivocal diagnosis on cell preparations; (ii) Molecular testing provides extra information on the prognosis or therapy of diseases diagnosed by conventional cytology; (iii) Molecular testing provides genetic information on the inherited nature of diseases that can be directly investigated in cytology samples, obtained by either exfoliation or fine needle aspiration; (iv) Sometimes the cytopathology sample is the most convenient (or the only available) source of material for molecular testing; (v) Direct molecular interrogation of cells allows for a diagnostic correlation that would otherwise not be possible. In parallel with this direct diagnostic implication, cytopathology is increasingly important in the validation of biomarkers for specific diseases, and therefore of significant importance in overall translational research strategies. We illustrate its application in some of the main areas of oncology molecular testing, such as molecular fingerprinting of neoplasms [3], lymphoreticular diseases [2], sarcomas [4] and lung cancer [5], as well as translational research using diagnostic cytopathology techniques. The next years will see the consolidation of Diagnostic Molecular Cytopathology, a process that will lead to a change of many paradigms. 
In general, diagnostic pathology departments will have to reorganize molecular testing to pursue a cost-efficient operation. Sample preparation will have to take into account optimal preservation of nucleic acids. The training of technical staff and the level of laboratory quality control and quality assurance would have to follow strict clinical (not research) laboratory parameters. And, most importantly, those pathologists undertaking molecular diagnosis as a discipline would have to develop their professional expertise within the same framework of fellowships and professional credentials that is offered in other sub-specialties. The price to pay if this effort is not undertaken is too high for the future of diagnostic pathology in general. The increasing characterization of molecular biomarkers with diagnostic, prognostic or therapeutic value is making the analysis of tissue and cell samples prior to treatment a more complex exercise. If cytopathologists and histopathologists allow others to take charge of molecular diagnosis, our overall contribution to the diagnostic process will be diminished. We may not become less important, but we may become less relevant. However, those within the discipline of diagnostic pathology who can combine the clinical background of diseases with morphological, immunocytochemical and molecular diagnostic interpretation will represent bona fide diagnostic specialists. Such 'molecular cytopathologists' would place themselves at the centre of clinical decision-making. References: 1. Fletcher L. Roche leads molecular diagnostics charge. Nature Biotechnol 2002;20:6–7. 2. Salto-Tellez M, Koay ESC. Molecular diagnostic cytopathology: definitions, scope and clinical utility. Cytopathology 2004;15:252–255. 3. Salto-Tellez M, Zhang D, Chiu LL, Wang SC, Nilsson B, Koay ESC. Immunocytochemistry versus molecular fingerprinting of metastases. Cytopathology 2003 Aug;14(4):186–90. 4. Chiu LL, Koay SCE, Chan NL, Salto-Tellez M. Molecular cytopathology: sequencing of the EWS-WT1 gene fusion transcript in the peritoneal effusion of a patient with desmoplastic small round cell tumour. Diagnostic Cytopathology 2003 Dec;29(6):341–3. 5. Chin TM, Anuar D, Soo R, Salto-Tellez M, Li WQ, Ahmad B, Lee SC, Goh BC, Kawakami K, Segal A, Iacopetta B, Soong R. Sensitive and cost-effective detection of epidermal growth factor receptor mutations in small biopsies by denaturing high-performance liquid chromatography (in press). [source]

    Indirect Perceptual Realism and Multiple Reference

    DIALECTICA, Issue 3 2008
    Derek Brown
    Indirect realists maintain that our perceptions of the external world are mediated by our 'perceptions' of subjective intermediaries such as sensations. Multiple reference occurs when a word or an instance of it has more than one reference. I argue that, because indirect realists hold that speakers typically and unknowingly directly perceive something subjective and indirectly perceive something objective, the phenomenon of multiple reference is an important resource for their view. In particular, a challenge that A. D. Smith has recently put forward for indirect realists can be overcome by appreciating how multiple reference is likely to arise when a projectivist variety of indirect realism is interpreted by speakers adhering to a naïve direct realism. [source]

    Direct Reference and Definite Descriptions

    DIALECTICA, Issue 1 2008
    Genoveva Marti
    According to Donnellan the characteristic mark of a referential use of a definite description is the fact that it can be used to pick out an individual that does not satisfy the attributes in the description. Friends and foes of the referential/attributive distinction have equally dismissed that point as obviously wrong or as a sign that Donnellan's distinction lacks semantic import. I will argue that, on a strict semantic conception of what it is for an expression to be a genuine referential device, Donnellan is right: if a use of a definite description is referential, it must be possible for it to refer to an object independently of any attributes associated with the description, including those that constitute its conventional meaning. [source]

    Postgraduate education for doctors in smoking cessation

    Abstract Introduction and Aims. Smoking cessation advice from doctors helps improve quit rates but the opportunity to provide this advice is often missed. Postgraduate education is one strategy to improve the amount and quality of cessation support provided. This paper describes a sample of postgraduate education programs for doctors in smoking cessation and suggests future directions to improve reach and quality. Design and Methods. Survey of key informants identified through tobacco control listserves supplemented by a review of the published literature on education programs since 2000. Programs and publications from Europe were not included as these are covered in another paper in this Special Issue. Results. Responses were received from only 21 key informants from eight countries. Two further training programs were identified from the literature review. The following components were present in the majority of programs: 5 As (Ask, Advise, Assess, Assist and Arrange) approach (72%), stage of change (64%), motivational interviewing (72%), pharmacotherapies (84%). Reference to clinical practice guidelines was very common (84%). The most common model of delivery of training was face to face. Lack of interest from doctors and lack of funding were identified as the main barriers to uptake and sustainability of training programs. Discussion and Conclusions. Identifying programs proved difficult and only a limited number were identified by the methods used. There was a high level of consistency in program content and a strong link to clinical practice guidelines. Key informants identified limited reach into the medical profession as an important issue. New approaches are needed to expand the availability and uptake of postgraduate education in smoking cessation. [Zwar NA, Richmond RL, Davidson D, Hasan I. Postgraduate education for doctors in smoking cessation. Drug Alcohol Rev 2009;28:466–473] [source]

    Quantification of Annular Dilatation and Papillary Muscle Separation in Functional Mitral Regurgitation: Role of Anterior Mitral Leaflet Length as Reference

    ECHOCARDIOGRAPHY, Issue 6 2005
    Vinod Jorapur M.D.
    Background: We hypothesized that anterior mitral leaflet length (ALL) does not differ significantly between normal subjects and patients with functional mitral regurgitation (FMR) and hence may be used as a reference measurement to quantify annular dilatation and papillary muscle separation. Methods and Results: We prospectively studied 50 controls, 15 patients with systolic left ventricular dysfunction (LVD) with significant FMR, and 15 patients with LVD without significant FMR. Significant MR was defined as an effective regurgitant orifice area ≥ 0.2 cm² as measured by the flow convergence method. Annular diameter, interpapillary distance, and ALL were measured, and the following ratios were derived: annular diameter indexed to ALL (ADI) and interpapillary distance indexed to ALL (IPDI). There was no significant difference in ALL among the three groups. The mean ADI was 1.26 times controls in patients with LVD without significant FMR compared to 1.33 times controls in patients with LVD with significant FMR (P = 0.06, no significant difference between groups). The mean IPDI was 1.42 times controls in patients with LVD without significant FMR compared to 2.1 times controls in patients with LVD with significant FMR (P < 0.0001, significant difference between groups). Conclusion: There was no significant difference in ALL between controls and patients with LVD. ALL can be used as a reference measurement to quantify annular dilatation and papillary muscle separation in patients with FMR. Interpapillary distance but not annular diameter indexed to ALL correlates with severity of FMR. [source]

    Disposable Amperometric Sensors for Thiols with Special Reference to Glutathione

    ELECTROANALYSIS, Issue 18 2008
    Dipankar Bhattacharyay
    Abstract The antioxidant 'reduced glutathione' tripeptide is conventionally called glutathione (GSH). The oxidized form is a sulfur-sulfur linked compound, known as glutathione disulfide (GSSG). Glutathione is an essential cofactor for antioxidant enzymes; it also protects the mitochondria against endogenous oxygen radicals. The ratio of these two forms can act as a marker for oxidative stress. The majority of the methods available for estimation of both forms of glutathione are based on colorimetric and electrochemical assays. In this study, electrochemical sensors were developed for the estimation of both GSH and GSSG. Two different types of transducers were used: i) a screen-printed three-electrode disposable sensor (SPE) containing a carbon working electrode, carbon counter electrode and silver/silver chloride reference electrode; ii) a three-electrode disposable system (CDE) consisting of three copper electrodes. 5,5′-Dithiobis(2-nitrobenzoic acid) (DTNB) was used as the detector element for estimation of total reduced thiol content. The enzyme glutathione reductase, along with the co-enzyme reduced nicotinamide adenine dinucleotide phosphate, was used to estimate GSSG. By combining the two methods GSH can also be estimated. The detector elements were immobilized on the working electrodes of the sensors by bulk polymerization of acrylamide. The responses were observed amperometrically. The detection limit for thiol (GSH) was less than 0.6 ppm when DTNB was used, whereas for GSSG it was less than 0.1 ppm. [source]
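Detection limits like those quoted above are commonly estimated with the "3-sigma" convention: limit of detection = 3 × (standard deviation of repeated blank readings) / (calibration slope). The blank currents and slope below are invented for illustration; the paper's reported limits came from its own calibration data.

```python
import statistics

# Hypothetical repeated blank current readings from an amperometric sensor (nA).
blank_currents_nA = [2.1, 2.3, 1.9, 2.2, 2.0, 2.1]

# Assumed calibration sensitivity: current response per ppm of analyte.
slope_nA_per_ppm = 0.75

# 3-sigma detection limit: the smallest concentration whose signal
# stands three blank standard deviations above the baseline.
sigma_blank = statistics.stdev(blank_currents_nA)
lod_ppm = 3 * sigma_blank / slope_nA_per_ppm
print(f"LOD ≈ {lod_ppm:.2f} ppm")
```

A steeper calibration slope (a more sensitive electrode) or a quieter blank both push the detection limit down, which is why the enzyme-amplified GSSG channel can reach a lower limit than the direct DTNB thiol measurement.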

    Assessment of the sensitivity of the computational programs DEREK, TOPKAT, and MCASE in the prediction of the genotoxicity of pharmaceutical molecules

    Ronald D. Snyder
    Abstract Computational models are currently being used by regulatory agencies and within the pharmaceutical industry to predict the mutagenic potential of new chemical entities. These models rely heavily, although not exclusively, on bacterial mutagenicity data of nonpharmaceutical-type molecules as the primary knowledge base. To what extent, if any, this has limited the ability of these programs to predict genotoxicity of pharmaceuticals is not clear. In order to address this question, a panel of 394 marketed pharmaceuticals with Ames Salmonella reversion assay and other genetic toxicology findings was extracted from the 2000–2002 Physicians' Desk Reference and evaluated using MCASE, TOPKAT, and DEREK, the three most commonly used computational databases. These evaluations indicate a generally poor sensitivity of all systems for predicting Ames positivity (43.4–51.9% sensitivity) and even poorer sensitivity in prediction of other genotoxicities (e.g., in vitro cytogenetics positive; 21.3–31.9%). As might be expected, all three programs were more highly predictive for molecules containing carcinogenicity structural alerts (i.e., the so-called Ashby alerts; 61% ± 14% sensitivity) than for those without such alerts (12% ± 6% sensitivity). Taking all genotoxicity assay findings into consideration, there were 84 instances in which positive genotoxicity results could not be explained in terms of structural alerts, suggesting the possibility of alternative mechanisms of genotoxicity not relating to covalent drug-DNA interaction. These observations suggest that the current computational systems when applied in a traditional global sense do not provide sufficient predictivity of bacterial mutagenicity (and are even less accurate at predicting genotoxicity in tests other than the Salmonella reversion assay) to be of significant value in routine drug safety applications. 
This relative inability of all three programs to predict the genotoxicity of drugs not carrying obvious DNA-reactive moieties is discussed with respect to the nature of the drugs whose positive responses were not predicted and to expectations of improving the predictivity of these programs. Limitations are primarily a consequence of incomplete understanding of the fundamental genotoxic mechanisms of nonstructurally alerting drugs rather than inherent deficiencies in the computational programs. Irrespective of their predictive power, however, these programs are valuable repositories of structure-activity relationship mutagenicity data that can be useful in directing chemical synthesis in early drug discovery. Environ. Mol. Mutagen. 43:143–158, 2004. © 2004 Wiley-Liss, Inc. [source]
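The sensitivity figures quoted in the abstract follow the standard screening definitions: sensitivity = TP / (TP + FN) and specificity = TN / (TN + FP), where a "positive" is an Ames-positive drug. A small sketch with invented counts (not the study's actual tallies for MCASE, TOPKAT or DEREK):

```python
def sensitivity(tp, fn):
    """Fraction of true positives the program correctly flags."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of true negatives the program correctly clears."""
    return tn / (tn + fp)

# Hypothetical: of 100 Ames-positive drugs, a program flags 46.
sens = sensitivity(tp=46, fn=54)
print(f"sensitivity: {sens:.1%}")
```

With these invented counts the sensitivity comes out at 46%, inside the 43.4–51.9% band reported for the three programs, which illustrates how modest that performance is: roughly half of the genuinely mutagenic drugs go unflagged.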

    A European Legal Method?

    EUROPEAN LAW JOURNAL, Issue 1 2009
    On European Private Law, Scientific Method
    This article examines the relationship between European private law and scientific method. It argues that a European legal method is a good idea. Not primarily because it will make European private law scholarship look more scientific, but because a debate on the method of a normative science necessarily has to be a debate on its normative assumptions. In other words, a debate on a European legal method will have much in common with the much desired debate on social justice in European law. Moreover, it submits that, at least after the adoption of the Common Frame of Reference by the European institutions, European contract law can be regarded as a developing multi-level system that can be studied from the inside. Finally, it concludes that the Europeanisation of private law is gradually blurring the dividing line between the internal and external perspectives, with their respective appropriate methods, in two mutually reinforcing ways. First, in the developing multi-level system it is unclear where the external borders of the system lie, in particular the borders between Community law and national law. Second, because of the less formal legal culture the (formerly) external perspectives, such as the economic perspective, have easier access and play an increasing role as policy considerations. [source]

    Performance analysis of TH-UWB radio systems using proper waveform design in the presence of narrow-band interference

    Hassan Khani
    Ultra-wide band (UWB) radio systems, because of their huge bandwidth, must coexist with many narrow-band systems in their frequency band. This coexistence may cause significant degradation in the performance of both kinds of systems. Currently, several methods exist for narrow-band interference (NBI) suppression in UWB radio systems. One of them is based on mitigating the effects of NBI through proper waveform design. In Reference 1, it has been shown that using a properly designed doublet waveform can significantly reduce the effects of NBI on an important class of UWB radio systems, i.e. BPSK time-hopping UWB (TH-UWB) systems. In this paper, the proper waveform design technique is extended to BPPM TH-UWB systems. It is shown that this method can properly suppress the effects of NBI on the performance of BPPM TH-UWB systems. Copyright © 2005 AEIT. [source]

    The New Economy and the Work–Life Balance: Conceptual Explorations and a Case Study of New Media

    Diane Perrons
    Given the varied claims made about the new economy and its implications for the organization of work and life, this article critically evaluates some conceptualizations of the new economy and then explores how the new media sector has materialized and been experienced by people working in Brighton and Hove, a new media hub. New technologies and patterns of working allow the temporal and spatial boundaries of paid work to be extended, potentially allowing more people, especially those with caring responsibilities, to become involved, possibly leading to a reduction in gender inequality. This article, based on 55 in-depth interviews with new media owners, managers and some employees in small and micro enterprises, evaluates this claim. Reference is made to gender-differentiated patterns of ownership and earnings, and to flexible working patterns, long hours and homeworking, and the article considers whether these working patterns are compatible with a work–life balance. The results indicate that while new media creates new opportunities for people to combine interesting paid work with caring responsibilities, a marked gender imbalance remains. [source]

    Land Cover Characteristics in NE Iceland with Special Reference to Jökulhlaup Geomorphology

    Petteri Alho
    ABSTRACT Subglacial eruptions in Vatnajökull have accounted for several jökulhlaups (glacial outburst floods) in the Northern Volcanic Zone (NVZ). These events and aeolian processes have had a considerable impact on the landscape evolution of the area. Most of this area is occupied by barren land cover; the northern margin of the barren land cover is advancing northwards, burying vegetation under wind-blown sediment. This paper presents a land-cover classification based on a supervised Landsat TM image classification with pre-processing and extensive field observations. Four land cover categories were identified: (a) lava cover (34.8%); (b) barren sediment cover (39.0%); (c) vegetation (25.1%); and (d) water and snow (1.1%). The mapping of sand transport routes demonstrates that a major aeolian sand transportation pathway is situated in the western part of the study area. The sedimentary formation elongated towards the northeast is evidence of active and continuous aeolian sand transportation towards the north. Interpretation of the satellite image suggests that four main areas are affected by jökulhlaups along the Jökulsá á Fjöllum: Ásbyrgi, Grímsstaðir, Herðubreið–Möðrudalur, and the Dyngjujökull sandur. In addition, jökulhlaup-related sediment cover (8%) in the study area, together with erosional features, is evidence of a severe and extensive jökulhlaup-induced process of land degradation. [source]
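For readers unfamiliar with how such class shares are tallied from a supervised classification, a hedged sketch follows (the class labels and the synthetic raster are hypothetical; the sampling probabilities simply reuse the percentages quoted above):

```python
import numpy as np

# Hypothetical classified raster: one integer class label per pixel.
# 0 = lava, 1 = barren sediment, 2 = vegetation, 3 = water/snow
rng = np.random.default_rng(seed=42)
classified = rng.choice(4, size=(100, 100), p=[0.348, 0.390, 0.251, 0.011])

# Tally per-class pixel counts and convert to percent of the study area.
counts = np.bincount(classified.ravel(), minlength=4)
shares = 100.0 * counts / counts.sum()
```

With a real Landsat TM scene, `classified` would come from the supervised classifier and the shares would be weighted by pixel area rather than raw counts if the projection distorts area.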

    Recognition of Indigenous Interests in Australian Water Resource Management, with Particular Reference to Environmental Flow Assessment

    Sue Jackson
    Australia's new national water policy represents a substantial change from the previous approach, because it recognises a potential need for allocations to meet particular indigenous requirements, which will have to be quantitatively defined in water allocation plans. However, indigenous values associated with rivers and water are presently poorly understood by decision-makers, and some are difficult to quantify or otherwise articulate in allocation decisions. This article describes the range of Australian indigenous values associated with water, and the way they have been defined in contemporary water resource policy and discourse. It argues that the heavy reliance of indigenous values on healthy river systems indicates that, theoretically at least, they are logically suited for consideration in environmental flow assessments. However, where indigenous interests have been considered for assessment planning purposes, indigenous values have tended to be overlooked in a scientific process that leaves little room for different world views relating to nature, intangible environmental qualities and human relationships with river systems that are not readily amenable to quantification. There is often an implicit but untested assumption that indigenous interests will be protected through the provision of environmental flows to meet aquatic ecosystem requirements, but the South African and New Zealand approaches to environmental flow assessment, for example, demonstrate that different riverine uses can potentially be accommodated. Debate with indigenous land-holders, together with experimentation, will show how well suited different environmental flow assessment techniques are to addressing indigenous environmental philosophies and values. [source]

    Recent Developments in Trace Element Analysis by ICP-AES and ICP-MS with Particular Reference to Geological and Environmental Samples

    Kathryn L. Linge
    This review describes recent developments in trace element analysis using inductively coupled plasma-atomic emission spectrometry (ICP-AES) and inductively coupled plasma-mass spectrometry (ICP-MS). It focuses on the application of ICP techniques to geological and environmental samples; fundamental studies of ICP-MS and ICP-AES instrumentation have therefore largely been ignored. While the majority of the literature reviewed relates to ICP-MS, indicating that ICP-MS is now the preferred technique for all geological analysis, there is still steady development of ICP-AES for environmental applications. It is clear that true flexibility in elemental analysis can only be achieved by combining the advantages of both ICP-AES and ICP-MS. Two particular groups of elements (the long-lived radionuclides and the platinum-group elements) stood out as warranting dedicated sections describing analytical developments in these areas. [source]

    Delivery of health informatics education and training

    J. Michael Brittain
    An overview is provided of education and training in health information management in the context of national information strategies. Although the article focuses upon British programmes, there are examples from North America, Australasia and other countries. Reference is made to international activities in the development of generic courses for education and training, the need for education and training, the content of courses, and methods of delivery, including Internet-based training and education. Governments and health authorities in many countries have recognized the urgent need for a highly educated and trained workforce in information management, but until the last few years universities were slow to respond. Now, however, there is a plethora of education and training programmes in North America, most European countries, and Australasia. [source]

    Design, analysis, and synthesis of generalized single step single solve and optimal algorithms for structural dynamics

    X. Zhou
    Abstract The primary objectives of the present exposition are to: (i) provide a generalized unified mathematical framework and setting leading to the unique design of computational algorithms for structural dynamic problems encompassing the broad scope of linear multi-step (LMS) methods within the limitation of the Dahlquist barrier theorem (Reference [3], G. Dahlquist, BIT 1963; 3: 27), and also leading to new designs of numerically dissipative methods with optimal algorithmic attributes that cannot be obtained employing existing frameworks in the literature; (ii) provide a meaningful characterization of various numerically dissipative/non-dissipative time integration algorithms, both new and existing in the literature, based on the overshoot behavior of algorithms, leading to the notion of algorithms by design; and (iii) provide design guidelines on the selection of algorithms for structural dynamic analysis within the scope of LMS methods. For structural dynamics problems, the LMS methods are first proven to be spectrally identical to a newly developed family of generalized single step single solve (GSSSS) algorithms. The design, synthesis and analysis of the unified framework of computational algorithms, based on the overshooting behavior and additional algorithmic properties such as second-order accuracy and unconditional stability with numerically dissipative features, yield three sub-classes of practical computational algorithms: (i) zero-order displacement and velocity overshoot (U0-V0) algorithms; (ii) zero-order displacement and first-order velocity overshoot (U0-V1) algorithms; and (iii) first-order displacement and zero-order velocity overshoot (U1-V0) algorithms (the remainder, involving higher orders of overshooting behavior, are not considered competitive from practical considerations). 
Within each sub-class of algorithms, a further distinction is made between the designs leading to optimal numerically dissipative and dispersive algorithms and the continuous-acceleration and discontinuous-acceleration algorithms, which are subsets corresponding to the designed placement of the spurious root at the low-frequency limit or the high-frequency limit, respectively. Conclusions and design guidelines are finally drawn, demonstrating that the U0-V1 algorithms are suitable only for given initial velocity problems, the U1-V0 algorithms only for given initial displacement problems, and the U0-V0 algorithms for either or both cases of given initial displacement and initial velocity problems. For the first time, the design leading to optimal algorithms in the context of a generalized single step single solve framework, within the limitation of the Dahlquist barrier, that maintains second-order accuracy and unconditional stability with/without numerically dissipative features is described for structural dynamics computations, thereby providing closure to the class of LMS methods. Copyright © 2003 John Wiley & Sons, Ltd. [source]
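The overshoot and dissipation properties invoked above are conventionally read off the amplification matrix of the single-step recurrence; the following is the standard generic setting for such analyses, not the paper's specific GSSSS parametrisation:

```latex
\mathbf{X}_{n+1} = \mathbf{A}(\Omega)\,\mathbf{X}_n,
\qquad
\mathbf{X}_n = \bigl(u_n,\ \Delta t\,\dot{u}_n,\ \Delta t^{2}\ddot{u}_n\bigr)^{\mathsf{T}},
\qquad
\Omega = \omega\,\Delta t,
```

```latex
\rho(\mathbf{A}) = \max_i \lvert\lambda_i(\mathbf{A})\rvert \le 1
\quad \forall\,\Omega \in [0,\infty),
\qquad
\rho_\infty = \lim_{\Omega \to \infty} \rho(\mathbf{A}).
```

The two principal roots of \(\mathbf{A}\) govern second-order accuracy and the high-frequency dissipation measure \(\rho_\infty\); placing the third (spurious) root at the low- or high-frequency limit distinguishes the continuous- and discontinuous-acceleration subsets, and overshoot is assessed from the growth of the undamped response over the first few steps.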

    A-scalability and an integrated computational technology and framework for non-linear structural dynamics.

    Part 1: Theoretical developments, parallel formulations
    Abstract For large-scale problems and large processor counts, achieving accuracy and efficiency with reduced solution times, and attaining optimal parallel scalability over the entire transient duration of the simulation, pose many computational challenges for general non-linear structural dynamics problems. For transient analysis, explicit time operators readily inherit algorithmic scalability and consequently enable parallel scalability. However, the key issues concerning parallel simulations via implicit time operators, within a framework encompassing the class of linear multistep methods, include the totality of the following considerations to foster the proposed notion of A-scalability: (a) selection of robust, scalable, optimal time-discretized operators that foster stabilized non-linear dynamic implicit computations, both in terms of convergence and the number of non-linear iterations, for completion of large-scale analysis of highly non-linear dynamic responses; (b) selection of an appropriate scalable spatial domain decomposition method for solving the resulting linearized system of equations during the implicit phase of the non-linear computations; (c) scalable implementation models and solver technology for the interface and coarse problems, for attaining parallel scalability of the computations; and (d) scalable parallel graph partitioning techniques. The latter issues, related to parallel implicit formulations, are the focus of this paper. The former, involving parallel explicit formulations, are also a natural subset of the present framework and have been addressed previously in Reference 1 (Advances in Engineering Software 2000; 31: 639–647). 
In the present context, although a particular aspect or a solver related to the spatial domain decomposition may be designed to be numerically scalable, the totality of the aforementioned issues simultaneously plays an important and integral role in attaining A-scalability of the parallel formulations over the entire transient duration of the simulation, which is desirable for transient problems. As such, the theoretical developments of the parallel formulations are first detailed in Part 1 of this paper, and the subsequent practical applications and performance results for general non-linear structural dynamics problems are described in Part 2, to foster the proposed notion of A-scalability. Copyright © 2003 John Wiley & Sons, Ltd. [source]

    Coffee consumption and the risk of primary liver cancer: Pooled analysis of two prospective studies in Japan

    Taichi Shimazu
    Abstract Although case-control studies have suggested that coffee consumption is associated with a decreased risk of liver cancer, no prospective cohort study has been carried out. To examine the association between coffee consumption and the risk of liver cancer, we conducted a pooled analysis of data available from 2 cohort studies in Japan. A self-administered questionnaire about the frequency of coffee consumption and other health habits was distributed to 22,404 subjects (10,588 men and 11,816 women) in Cohort 1 and 38,703 subjects (18,869 men and 19,834 women) in Cohort 2, aged 40 years or more, with no previous history of cancer. We identified 70 and 47 cases of liver cancer among the subjects in Cohort 1 (9 years of follow-up with 170,640 person-years) and Cohort 2 (7 years of follow-up with 284,948 person-years), respectively. We used Cox proportional hazards regression analysis to estimate the relative risk (RR) and 95% confidence interval (CI) of liver cancer incidence. After adjustment for potential confounders, the pooled RRs (95% CIs) for drinking coffee never, occasionally and 1 or more cups/day were 1.00 (reference), 0.71 (0.46–1.09) and 0.58 (0.36–0.96), respectively (p for trend = 0.024). In the subgroup of subjects with a history of liver disease, we found a significant inverse association between coffee consumption and the risk of liver cancer. Our findings support the hypothesis that coffee consumption decreases the risk of liver cancer. Further studies to investigate the role of coffee in the prevention of liver cancer among high-risk populations are needed. © 2005 Wiley-Liss, Inc. [source]
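As a quick arithmetic check on the numbers quoted above, the crude (unadjusted) incidence rates implied by the case counts and person-years can be computed directly; the published RRs are Cox-adjusted, so this sketch (variable names my own) is illustrative only:

```python
# Cases and person-years as reported in the abstract.
cohorts = {
    "Cohort 1": (70, 170_640),   # 9 years of follow-up
    "Cohort 2": (47, 284_948),   # 7 years of follow-up
}

# Crude incidence rate per 100,000 person-years.
rates = {name: 100_000 * cases / py for name, (cases, py) in cohorts.items()}
```

This gives roughly 41 and 16.5 cases per 100,000 person-years for Cohorts 1 and 2 respectively; the difference between the crude rates reflects cohort composition and follow-up, which is exactly what the Cox adjustment accounts for.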

    The Spatial Segregation of Zooplankton Communities with Reference to Land Use and Macrophytes in Shallow Lake Wielkowiejskie (Poland)

    Natalia Kuczyńska-Kippen
    Abstract The spatial distribution of zooplankton in relation to two types of land use (forested and pastoral-arable) in a lake's surroundings, and to various habitats (helophytes, elodeids, nymphaeids and open water), was examined along 16 parallel transects on a macrophyte-dominated lake (area 13.3 ha; mean depth 1.4 m). The type of habitat was the main determinant of zooplankton community structure. Dissected-leaved elodeids harboured the richest and most abundant community, with typically littoral (e.g., Colurella uncinata) and pelagic species (e.g., Keratella cochlearis). Two species (Polyarthra major and P. vulgaris) selectively chose the open water and one (Lecane quadridentata) the Typha stand. No spatial differentiation in zooplankton abundance was recorded between the two types of catchment area. One possible explanation may be the shallowness and small area of this lake, which may support full mixing and no difference in physical-chemical gradients. (© 2009 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]

    Seven Decades of Change in the Zooplankton (s.l.) of the Nile Delta Lakes (Egypt), with Particular Reference to Lake Borullus

    Henri J. Dumont
    Abstract Around the 1930s, the zooplankton (and benthos) of the Nile delta lakes, and Lake Borullus in particular, had a mixed, eutrophic facies, with marine and mesohaline elements dominant for about eight months per year, and freshwater species taking over during the four months of the Nile flood. After the Aswan dam became operational, this regime changed: a steady supply of agricultural drainage water of Nilotic origin consistently freshened the delta. Thus, except in the immediate vicinity of their outlets to the sea, the lakes became almost fresh. Only during the rare and short-lived (one to three weeks) occasions when Aswan closes in winter is marine water sucked in, and along with it a saline fauna temporarily becomes re-established in the east and centre of Lake Borullus, and presumably in the other delta lakes as well. This marine fauna remained the same over 70+ years of observations. The freshwater component, in contrast, partly Nilotic, partly Mediterranean, changed profoundly over time. First, the fraction of species from temporary waters disappeared, as did (among copepods and cladocerans) all large-bodied species. Several cladocerans and copepods with a Euro-Mediterranean range appeared and diluted the pre-existing Afrotropical fauna. The abundance of small cladocerans and, especially, rotifers increased by a factor of ten or more. This latter change is believed to reflect two pressures. In a first phase, a re-arrangement of the lake's fish fauna (a top-down force) occurred: freshwater fish replaced marine diadromous species, and their predation pressure on the zooplankton preferentially removed large-bodied prey. In a second phase, increased agricultural drainage caused eutrophication (a bottom-up force) and larger filtrators (cladocerans, some copepods) began to be replaced by small filtrators (rotifers). (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]