Several Sources


Selected Abstracts


Detection efficiency of multiplexed Passive Integrated Transponder antennas is influenced by environmental conditions and fish swimming behaviour

ECOLOGY OF FRESHWATER FISH, Issue 4 2009
J. C. Aymes
Abstract. The efficiency of a Passive Integrated Transponder (PIT)-tag detection system was tested during a 23-day experiment in which permanent digital video recorded the passage of fish through multiplexed antennas. Coupling video to the PIT system allowed the detection of error sources and the correction of erroneous data. The efficiency of the detection system and its variation were investigated according to fish swimming speed, direction of movement and individual fish behaviour. The influence of time and environmental conditions on detection results was also checked. The PIT-tag system was 96.7% efficient in detecting fish. Upstream movements were better detected (99.8%) than downstream movements (93.7%). Moreover, results showed that the efficiency rate was not stable over the experiment; it was reduced on stormy days. Several sources of error were identified, such as sub-optimal orientation of the PIT tag relative to the antenna plane, fish swimming speed, individual fish behaviour and environmental conditions. [source]


Model-free evaluation of directional predictability in foreign exchange markets

JOURNAL OF APPLIED ECONOMETRICS, Issue 5 2007
Jaehun Chung
We examine directional predictability in foreign exchange markets using a model-free statistical evaluation procedure. Based on a sample of foreign exchange spot rates and futures prices in six major currencies, we document strong evidence that the directions of foreign exchange returns are predictable not only by the past history of foreign exchange returns, but also by the past history of interest rate differentials, suggesting that the latter can be a useful predictor of the directions of future foreign exchange rates. This evidence becomes stronger when the direction of larger changes is considered. We further document that despite the weak conditional mean dynamics of foreign exchange returns, directional predictability can be explained by strong dependence derived from higher-order conditional moments such as the volatility, skewness and kurtosis of past foreign exchange returns. Moreover, the conditional mean dynamics of interest rate differentials contribute significantly to directional predictability. We also examine the co-movements between two foreign exchange rates, particularly the co-movements of joint large changes. There is strong evidence that the directions of joint changes are predictable using past foreign exchange returns and interest rate differentials. Furthermore, both individual currency returns and interest rate differentials are also useful in predicting the directions of joint changes. Several sources can explain this directional predictability of joint changes, including the level and volatility of underlying currency returns. Copyright © 2007 John Wiley & Sons, Ltd. [source]
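The paper's model-free evaluation relies on generalized spectral tests; as a much cruder stand-in that conveys what "directional predictability" means, one can count the sign "hits" of a candidate predictor and compare the hit rate with a 50% no-predictability benchmark via a binomial z-score. The data and signal below are invented for illustration.

```python
import math

def directional_hit_rate(returns, signals):
    """Fraction of periods where the signal's sign matches the realized return's sign."""
    hits = sum(1 for r, s in zip(returns, signals) if (r > 0) == (s > 0))
    return hits / len(returns)

def binomial_z(hit_rate, n, p=0.5):
    """z-score of the hit rate against a no-predictability benchmark p."""
    return (hit_rate - p) / math.sqrt(p * (1 - p) / n)

# Toy series: a hypothetical signal built from interest rate differentials.
returns = [0.4, -0.2, 0.1, -0.3, 0.5, 0.2, -0.1, 0.3]
signals = [1, -1, 1, -1, 1, 1, 1, 1]

rate = directional_hit_rate(returns, signals)  # 7 of 8 signs correct
z = binomial_z(rate, len(returns))
```

A real test would also account for serial dependence in the returns, which the plain binomial benchmark ignores.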


THE CORRELATION OF YOUTH PHYSICAL ACTIVITY WITH STATE POLICIES

CONTEMPORARY ECONOMIC POLICY, Issue 4 2007
JOHN CAWLEY
Childhood overweight has risen dramatically in the United States during the past three decades. The search for policy solutions is limited by a lack of evidence regarding the effectiveness of state policies for increasing physical activity among youths. This paper estimates the correlation of student physical activity with a variety of state policies. We study nationwide data on high school students from the Youth Risk Behavior Surveillance System for 1999, 2001, and 2003 merged with data on state policies from several sources. We control for a variety of characteristics of states and students to mitigate bias due to the endogenous selection of policies, but we conservatively interpret our results as correlations, not causal impacts. Two policies are positively correlated with participation in physical education (PE) class for both boys and girls: a binding PE unit requirement and a state PE curriculum. We also find that state spending on parks and recreation is positively correlated with two measures of girls' overall physical activity. (JEL I18, I28) [source]


THE IMPACT OF BRITISH COUNTERTERRORIST STRATEGIES ON POLITICAL VIOLENCE IN NORTHERN IRELAND: COMPARING DETERRENCE AND BACKLASH MODELS

CRIMINOLOGY, Issue 1 2009
GARY LAFREE
Since the philosophers Beccaria and Bentham, criminologists have been concerned with predicting how governmental attempts to maintain lawful behavior affect subsequent rates of criminal violence. In this article, we build on prior research to argue that governmental responses to a specific form of criminal violence, terrorism, may produce both a positive deterrence effect (i.e., reducing future incidence of prohibited behavior) and a negative backlash effect (i.e., increasing future incidence of prohibited behavior). Deterrence-based models have long dominated both criminal justice and counterterrorist policies on responding to violence. These models maintain that an individual's prohibited behavior can be altered by the threat and imposition of punishment. Backlash models are more theoretically scattered but receive mixed support from several sources, which include research on counterterrorism; the criminology literature on labeling, legitimacy, and defiance; and the psychological literature on social power and decision making. In this article, we identify six major British strategies aimed at reducing political violence in Northern Ireland from 1969 to 1992 and then use a Cox proportional hazard model to estimate the impact of these interventions on the risk of new attacks. In general, we find the strongest support for backlash models. The only support for deterrence models came from a military surge called Operation Motorman, which was followed by significant declines in the risk of new attacks. The results underscore the importance of considering the possibility that antiterrorist interventions might both increase and decrease subsequent violence. [source]
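The article estimates intervention effects with a Cox proportional hazard model on attack series. The sketch below uses a deliberately simpler device, constant exponential hazards before and after a hypothetical intervention with simulated inter-attack waiting times, only to make the deterrence-versus-backlash distinction concrete; it is not the authors' model or data.

```python
import random

random.seed(42)  # deterministic simulation

def mle_rate(waiting_times):
    """MLE of a constant exponential hazard rate: events / total exposure time."""
    return len(waiting_times) / sum(waiting_times)

# Simulated inter-attack waiting times (arbitrary time units).
pre = [random.expovariate(0.5) for _ in range(400)]   # baseline hazard
post = [random.expovariate(1.0) for _ in range(400)]  # hazard doubles after intervention

rate_ratio = mle_rate(post) / mle_rate(pre)
# rate_ratio > 1: the hazard of a new attack rose after the intervention (backlash);
# rate_ratio < 1 would instead indicate deterrence.
```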


Neural stem cells for the treatment of disorders of the enteric nervous system: Strategies and challenges

DEVELOPMENTAL DYNAMICS, Issue 1 2007
Maria-Adelaide Micci
Abstract The main goal of this review is to summarize the status of research in the field of stem cell transplantation as it applies to the treatment of gastrointestinal motility disorders. This field of research has advanced tremendously in the past 10 years, and recent data produced in our laboratories as well as in others are contributing to the excitement about the use of neural stem cells (NSC) as a valuable therapeutic approach for disorders of the enteric nervous system characterized by a loss of critical neuronal subpopulations. There are several sources of NSC, and here we describe therapeutic strategies for NSC transplantation in the gut. These include using NSC as a relatively nonspecific cellular replacement strategy in conditions where large populations of neurons or their subsets are missing or destroyed. As with many other recent "breakthroughs," stem cell therapy may eventually prove to be overrated. However, at the present time, it does appear to provide hope for a true cure for many currently intractable diseases of both the central and the peripheral nervous system. Certainly more extensive research is needed in this field. We hope that our review will encourage new investigators to enter this field of research and contribute to our knowledge of the potential of NSC and other cells for the treatment of gastrointestinal dysmotility. Developmental Dynamics 236:33–43, 2007. © 2006 Wiley-Liss, Inc. [source]


Psychiatric epidemiology of old age: the H70 study – the NAPE Lecture 2003

ACTA PSYCHIATRICA SCANDINAVICA, Issue 1 2004
I. Skoog
Objective: To describe methodological issues and possibilities in the epidemiology of old age psychiatry using data from the H70 study in Göteborg, Sweden. Method: A representative sample born during 1901–02 was examined at 70, 75, 79, 81, 83, 85, 87, 90, 92, 95, 97, 99 and 100 years of age, another born during 1906–07 was examined at 70 and 79 years of age, and samples born between 1922 and 1930 were examined at 70 years of age. The study includes psychiatric examinations and key informant interviews performed by psychiatrists, physical examinations performed by geriatricians, psychometric testing, blood sampling, computerized tomography of the brain, cerebrospinal fluid analyses, anthropometric measurements, and psychosocial background factors. Results: Mental disorders are found in approximately 30% of the elderly but are seldom detected or properly treated. The incidence of depression and dementia increases with age. The relationship between blood pressure and Alzheimer's disease is an example of how cross-sectional and longitudinal studies yield completely different results. Brain imaging is an important tool in epidemiologic studies of the elderly to detect silent cerebrovascular disease and other structural brain changes. The high prevalence of psychotic symptoms is an example of the importance of using several sources of information to detect these symptoms. Dementia should be diagnosed in all types of studies in the elderly, as it influences several outcomes such as mortality, blood pressure, and rates of depression. Suicidal feelings are rare in the elderly and are strongly related to mental disorders.
Conclusion: Modern epidemiologic studies in population samples should be longitudinal and should include assessments of psychosocial risk factors as well as comprehensive sets of biologic markers, such as brain imaging, neurochemical analyses, and genetic information, to maximize the contribution that epidemiology can make to our knowledge of the etiology of mental disorders. [source]


Role of meta-analysis of clinical trials for Alzheimer's disease

DRUG DEVELOPMENT RESEARCH, Issue 3 2002
Jesús M. López Arrieta
Abstract Alzheimer's disease (AD) is a growing worldwide medical, social, and economic problem. In all countries, both the prevalence and incidence of this disorder increase with age. The task of translating scientific clinical research into effective interventions for dementia has proved to be a difficult challenge. Data about the effects of therapeutic interventions come from several sources of evidence, ranging from studies with little potential for systematic bias and minimal random error, such as well-designed randomized controlled trials, through controlled but nonrandomized cohort and case-control studies, all the way to opinions based on laboratory evidence or theory. Although clinical trials are widespread in AD, there is increasing recognition that the results of studies do not necessarily apply to the type of patients seen by clinicians, because differences in patient characteristics, comorbidities, cotherapies, severity of disease, compliance, local circumstances, and patient preferences may be large enough to attenuate or change the benefit-to-risk ratio. There are several methods to address those issues, such as pragmatic trials and n-of-1 trials. When data from randomized clinical trials do not provide clear answers, because sufficiently similar studies differ in the magnitude of their effect sizes, lack statistical significance, or require the identification of subgroups, systematic reviews and meta-analysis may help to provide a better summary of the data. A major difference between a traditional review and a systematic review is the systematic manner in which studies are chosen and appraised. Traditional reviews are written by experts in the field who use differing and often subjective criteria to decide which studies to include and what weight to give them, and hence the conclusions are often very diverse, depending on the reviewer. Publication and selection bias are a major concern of traditional reviews.
Systematic reviews and meta-analysis are being increasingly used in dementia, propelled by the Cochrane Dementia and Cognitive Improvement Group, to make decisions about treatment, management, and care and to guide future research. This narrative review describes the rationale for randomized clinical trials and systematic reviews in dementia, particularly AD. Drug Dev. Res. 56:401–411, 2002. © 2002 Wiley-Liss, Inc. [source]
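As a minimal illustration of the pooling step at the core of a meta-analysis, the fixed-effect inverse-variance method combines per-trial effect estimates, weighting each by the reciprocal of its squared standard error; the pooled estimate is more precise than any single trial. The trial numbers below are invented.

```python
import math

def fixed_effect_pool(effects, std_errors):
    """Inverse-variance weighted pooled effect and its standard error."""
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Hypothetical trial results: mean cognitive-score differences (drug vs placebo).
effects = [1.2, 0.8, 1.5]
ses = [0.5, 0.4, 0.6]

pooled, pooled_se = fixed_effect_pool(effects, ses)
```

A random-effects model would additionally estimate between-trial heterogeneity before pooling.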


Commercial aviation in-flight emergencies and the physician

EMERGENCY MEDICINE AUSTRALASIA, Issue 1 2007
Robert Cocks
Abstract Commercial aviation in-flight emergencies are relatively common, so it is likely that a doctor travelling frequently by air will receive a call for help at some stage in their career. These events are stressful, even for experienced physicians. The present paper reviews what is known about the incidence and types of in-flight emergencies that are likely to be encountered, the international regulations governing medical kits and drugs, and the liability, fitness and indemnity issues facing 'Good Samaritan' medical volunteers. The medical and aviation literature was searched, and information was collated from airlines and other sources regarding medical equipment available on board commercial aircraft. Figures for the incidence of significant in-flight emergencies are approximately 1 per 10,000–40,000 passengers, with one death occurring per 3–5 million passengers. Medically related diversion of an aircraft following an in-flight emergency may occur in up to 7–13% of cases, but passenger prescreening, online medical advice and on-board medical assistance from volunteers reduce this rate. Medical volunteers may find assisting with an in-flight emergency stressful, but should acknowledge that they play a vital role in successful outcomes. The medico-legal liability risk is extremely small, and various laws and industry indemnity practices offer additional protection to the volunteer. In addition, cabin crew receive training in a number of emergency skills, including automated defibrillation, and are one of several sources of help available to the medical volunteer, who is not expected to work alone. [source]


PERSPECTIVE: EMBEDDED MOLECULAR SWITCHES, ANTICANCER SELECTION, AND EFFECTS ON ONTOGENETIC RATES: A HYPOTHESIS OF DEVELOPMENTAL CONSTRAINT ON MORPHOGENESIS AND EVOLUTION

EVOLUTION, Issue 5 2003
Kathryn D. Kavanagh
Abstract The switch between the cell cycle and the progress of differentiation in developmental pathways is prevalent throughout the eukaryotes in all major cell lineages. Disruptions to the molecular signals regulating the switch between proliferative and differentiating states are severe, often resulting in cancer formation (uncontrolled proliferation) or major developmental disorders. Uncontrolled proliferation and developmental disorders are potentially lethal defects in the developing animal. Therefore, natural selection would likely favor a tightly controlled regulatory mechanism to help prevent these fundamental defects. Although selection is usually thought of as a consequence of environmental or ecological influences, in this case the selective force maintaining this molecular switch is internal, manifested as a potentially lethal developmental defect. The morphogenetic consequences of this prevalent, deeply embedded, and tightly controlled mechanistic switch are currently unexplored; however, experimental and correlative evidence from several sources suggests that there are important consequences for the control of growth rates and developmental rates in organs and in the whole animal. These observations lead one to consider the possibility of a developmental constraint on ontogenetic rates and morphological evolution maintained by natural selection against cancer and other embryonic lethal defects. [source]


VARIATION OF SHELL SHAPE IN THE CLONAL SNAIL MELANOIDES TUBERCULATA AND ITS CONSEQUENCES FOR THE INTERPRETATION OF FOSSIL SERIES

EVOLUTION, Issue 2 2000
Sarah Samadi
Abstract. Interpreting paleontological data is difficult because the genetic nature of observed morphological variation is generally unknown. Indeed, it is hardly possible to distinguish among several sources of morphological variation, including phenotypic plasticity, sexual dimorphism, within-species genetic variation and differences among species. This can be addressed using fossil organisms with recent representatives. The freshwater snail Melanoides tuberculata ranks in this category. A fossil series of this and other species has been studied in the Turkana Basin (Kenya) and is presented as one of the best examples illustrating the punctuated pattern of evolution by proponents of that theory. Melanoides tuberculata today occupies most of the tropics. We studied variation of shell shape in natural populations of this parthenogenetic snail using Raup's model of shell coiling. We considered different sources of variation in estimates of three relevant parameters of Raup's model: (1) variation in shell shape was detected among clones, and had both genetic and environmental bases; (2) sexual dimorphism, in those clones in which males occur, appeared as an additional source of shell variation; and (3) ecophenotypic variation was detected by comparing samples from different sites and years within two clones. We then tested the performance of discriminant function analyses, a classical tool in paleontological studies, using several datasets. Although the three sources of variation cited above contributed significantly to the observed morphological variance, they could not be detected without a priori knowledge of the biological entities studied. However, it was possible to distinguish between M. tuberculata and a related thiarid species using these analyses. Overall, this suggests that the tools classically used in paleontological studies are of limited use for distinguishing among important sources of within-species variation.
Our study also lends some empirical support to the doubts cast on the interpretation of the molluscan series of the Turkana Basin. [source]
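Discriminant-style analyses assign specimens to predefined groups from shape variables such as Raup's coiling parameters (whorl expansion rate W, generating-curve distance D, translation T). The toy classifier below substitutes a nearest-centroid rule for a full discriminant function analysis; the parameter values and "species" are fabricated, and, as the abstract stresses, such a classifier only works once the groups are known a priori.

```python
from statistics import mean

def centroid(samples):
    """Component-wise mean of a list of parameter tuples."""
    return [mean(col) for col in zip(*samples)]

def classify(x, centroids):
    """Assign x to the class whose centroid is nearest (squared Euclidean distance)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

# Invented (W, D, T) values for two hypothetical species, each with
# within-species (clonal / ecophenotypic) scatter.
species_a = [(2.0, 0.10, 1.5), (2.1, 0.12, 1.6), (1.9, 0.11, 1.4)]
species_b = [(3.0, 0.30, 2.5), (3.1, 0.28, 2.6), (2.9, 0.31, 2.4)]
centroids = {"A": centroid(species_a), "B": centroid(species_b)}

predictions = [classify(x, centroids) for x in species_a + species_b]
```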


Navigating Interdependence: How Adolescents Raised Solely by Grandparents Experience Their Family Relationships

FAMILY RELATIONS, Issue 2 2009
Megan L. Dolbin-MacNab
This study examined how adolescents raised solely by grandparents navigated their relationships with their parents and grandparents and how these relationships were influenced by the caregiving context. Forty-one adolescents participated in qualitative, semistructured interviews. Findings suggest that relationships with parents were primarily companionate or marked by distance and distrust. Grandchildren had strong emotional bonds to their grandparents, although they also negotiated several sources of stress. Participants also reported feelings of gratitude because of the positive influence their grandparents had on their lives. Caregiving context shaped grandchildren's interdependence with their parents and grandparents in numerous ways. Findings highlight the complexity of grandchildren's family relationships and underscore the value of a systemic approach to understanding youth who are being raised by grandparents. [source]


Regional Climate Models for Hydrological Impact Studies at the Catchment Scale: A Review of Recent Modeling Strategies

GEOGRAPHY COMPASS (ELECTRONIC), Issue 7 2010
Claudia Teutschbein
This article reviews recent applications of regional climate model (RCM) output for hydrological impact studies. Traditionally, simulations of global climate models (GCMs) have been the basis of impact studies in hydrology. Progress in regional climate modeling has recently made the use of RCM data more attractive, although the application of RCM simulations is challenging due to often considerable biases. The main modeling strategies used in recent studies can be classified into (i) simple modeling chains built around a single RCM (S-RCM approach) and (ii) highly complex and computing-intensive model systems based on RCM ensembles (E-RCM approach). Many examples of S-RCM can be found in the literature, while comprehensive E-RCM studies that consider several sources of uncertainty, such as different greenhouse gas emission scenarios, GCMs, RCMs and hydrological models, are less common. Based on a case study using control-run simulations of fourteen different RCMs for five Swedish catchments, the biases of and the variability between different RCMs are demonstrated. We provide a short overview of possible bias-correction methods and show that inter-RCM variability has substantial consequences for hydrological impact studies, in addition to other sources of uncertainty in the modeling chain. We propose that, because of model bias and inter-model variability, the S-RCM approach is not advisable and ensembles of RCM simulations (E-RCM) should be used. The application of bias-correction methods is recommended, although one should also be aware that the need for bias correction adds significantly to the uncertainties in modeling climate change impacts. [source]
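One of the simplest bias-correction methods in this family is linear scaling: the RCM series is rescaled so that its control-period mean matches observations, and the same factor is then applied to the scenario run. A sketch with invented monthly precipitation values (multiplicative scaling, as commonly used for precipitation):

```python
from statistics import mean

def linear_scaling(obs_control, rcm_control, rcm_series):
    """Multiplicative linear-scaling bias correction: scale the RCM series so
    that the control-period RCM mean matches the observed mean."""
    factor = mean(obs_control) / mean(rcm_control)
    return [x * factor for x in rcm_series]

# Invented monthly precipitation (mm): the RCM control run is biased wet.
obs_control = [50, 60, 55, 45]
rcm_control = [75, 90, 80, 70]
rcm_future  = [80, 95, 85, 72]

corrected_control = linear_scaling(obs_control, rcm_control, rcm_control)
corrected_future  = linear_scaling(obs_control, rcm_control, rcm_future)
```

By construction the corrected control mean reproduces the observed mean; more elaborate methods (e.g. quantile mapping) also correct the distribution's shape.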


A double structure generalized plasticity model for expansive materials

INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 8 2005
Marcelo Sánchez
Abstract The constitutive model presented in this work is built on a conceptual approach for unsaturated expansive soils in which the fundamental characteristic is the explicit consideration of two pore levels. The distinction between the macro- and microstructure provides the opportunity to take into account the dominant phenomena that affect the behaviour of each structural level and the main interactions between them. The microstructure is associated with the active clay minerals, while the macrostructure accounts for the larger-scale structure of the material. The model has been formulated using concepts of classical and generalized plasticity theories. The generalized stress–strain rate equations are derived within a framework of multidissipative materials, which provides a consistent and formal approach when there are several sources of energy dissipation. The model is formulated in the space of stresses, suction and temperature, and has been implemented in a finite element code. The approach has been applied to explaining and reproducing the behaviour of expansive soils in a variety of problems for which experimental data are available. Three application cases are presented in this paper. Of particular interest is the modelling of an accidental overheating that took place in a large-scale heating test. This test allows the capabilities of the model to be checked when a complex thermo-hydro-mechanical (THM) path is followed. Copyright © 2005 John Wiley & Sons, Ltd. [source]


Analysis of resequencing in downloads

INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 8 2003
Yoav Nebat
Abstract Recent studies indicate that out-of-order arrival of data packets during downloads of resources is not pathological network behaviour (ACM/IEEE Trans. Networking 1999; 7(6):789). Though this situation is most intuitive when packets of the same resource arrive in parallel from several sources, it turns out that this phenomenon may also occur in the single-source scenario. Knowledge of the expected reordering is important both for deciding on the size of the resequencing buffer and for estimating the burstiness of data arriving at the application. In this study we present a method to calculate the resequencing buffer occupancy probabilities for the single-source scenario, and a study of the resequencing buffer occupancy for the two-source scenario, where arrival from each of the sources is in order. Copyright © 2003 John Wiley & Sons, Ltd. [source]
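To make the quantity being analyzed concrete, the sketch below simulates a resequencing buffer that holds out-of-order packets until in-order delivery becomes possible, recording the buffer occupancy after each arrival. The arrival pattern is invented; the paper derives occupancy probabilities analytically rather than by simulation.

```python
def resequencing_occupancy(arrival_order):
    """Track resequencing-buffer occupancy for in-order delivery.

    Packets carry sequence numbers 0..n-1 but may arrive out of order; a
    packet is buffered until all of its predecessors have been delivered.
    Returns the buffer occupancy observed after each arrival.
    """
    buffered = set()
    next_expected = 0
    occupancy = []
    for seq in arrival_order:
        if seq == next_expected:
            next_expected += 1
            while next_expected in buffered:  # flush now-deliverable packets
                buffered.remove(next_expected)
                next_expected += 1
        else:
            buffered.add(seq)
        occupancy.append(len(buffered))
    return occupancy

# Mild reordering: each packet is displaced by at most one position.
occ = resequencing_occupancy([1, 0, 3, 2, 4])
```

Here the buffer never holds more than one packet; heavier reordering drives both the peak occupancy and the burstiness of deliveries up.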


The Moderating Role of Social Support Between Role Stressors and Job Attitudes Among Roman Catholic Priests

JOURNAL OF APPLIED SOCIAL PSYCHOLOGY, Issue 12 2008
Michael J. Zickar
This study examined the relations between role stressors and job attitudinal variables, as well as the potential moderating effects of social support, in a sample of 190 Roman Catholic priests. The priesthood is an important occupation to study because the work priests do can be considered a vocation instead of a job. Role stressors were negatively correlated with job attitudes (e.g., job satisfaction, turnover intention). Consistent with a buffering hypothesis, several sources of social support (parishioners, staff, fellow priests) consistently moderated this relationship, in that the relationship attenuated as social support increased. The implications of these results are discussed with respect to the role of the priest, as well as to other types of work-based vocations. [source]
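A buffering (moderation) effect of the kind reported here means the stressor-attitude slope flattens as support rises. The sketch below simply compares OLS slopes in invented low-support and high-support subgroups; an actual analysis would fit a moderated regression with a stressor × support interaction term.

```python
from statistics import mean

def slope(x, y):
    """OLS slope of y on x: sum of cross-deviations / sum of squared x-deviations."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

# Invented data: role stressors (1-5 scale) vs. job satisfaction, by support level.
stressors = [1, 2, 3, 4, 5]
satisfaction_low_support  = [9, 7, 6, 4, 2]   # steep negative relationship
satisfaction_high_support = [8, 8, 7, 7, 7]   # attenuated relationship

slope_low = slope(stressors, satisfaction_low_support)
slope_high = slope(stressors, satisfaction_high_support)
# |slope_high| < |slope_low|: support buffers the impact of stressors.
```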


Molecular mechanics (MM4) calculations on carbonyl compounds part I: aldehydes

JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 13 2001
Charles H. Langley
Abstract Aliphatic aldehydes have been studied with the aid of the MM4 force field. The structures, moments of inertia, vibrational spectra, conformational energies, barriers to internal rotation, and dipole moments have been examined for six compounds (nine conformations). MM4 parameters have been developed to fit the indicated quantities to a wide variety of experimental data. Ab initio (MP2) and density functional theory (B3LYP) calculations have been used to augment and/or replace experimental data, as appropriate. Because more, and to some extent better, data have become available since MM3 was developed, it was anticipated that the overall accuracy of the information calculated with MM4 would be better than with MM3. The best single measure of the overall accuracy of a force field is the accuracy to which the moments of inertia of a set of compounds (from microwave spectroscopy) can be reproduced. For all of the 20 moments (seven conformations) experimentally known for the aldehyde compounds, the MM4 rms error is 0.30%, while with MM3, the most accurate force field previously available, the rms error over the same set is 1.01%. The calculation of the vibrational spectra was also improved overall. For the four aldehydes that were fully analyzed (over a total of 78 frequencies), the rms errors with MM4 and MM3 are 18 and 38 cm−1, respectively. These improvements came from several sources, but the major ones were separate parameters involving the carbonyl carbon for formaldehyde, the alkyl aldehydes and the ketones, and new crossterms featured in the MM4 force field that are not present in the MM3 version. © 2001 John Wiley & Sons, Inc. J Comput Chem 22:1396–1425, 2001 [source]


EFFECT OF VARIOUS ANTIOXIDANTS ON THE OXIDATIVE STABILITY OF ACID AND ALKALI SOLUBILIZED MUSCLE PROTEIN ISOLATES

JOURNAL OF FOOD BIOCHEMISTRY, Issue 2 2009
SIVAKUMAR RAGHAVAN
ABSTRACT Protein isolates prepared from cod (Gadus morhua) myofibrillar proteins using acid or alkali solubilization are susceptible to oxidative rancidity. Oxidation can be delayed by the exogenous addition of antioxidants. The objective of this research was to compare the efficacy of antioxidants such as α-tocopherol, butylated hydroxyanisole (BHA) and propyl gallate in inhibiting oxidation in acid- and alkali-solubilized cod protein isolates. Oxidation was catalyzed using cod hemolysate. Oxidation of lipids was monitored by the measurement of thiobarbituric acid reactive substances and painty odor. Results showed that protein isolates prepared using the acid process were significantly (P < 0.05) more susceptible to lipid oxidation than alkali-solubilized protein isolates. Regardless of pH treatment, the efficacy of the antioxidants decreased in the order propyl gallate > BHA > α-tocopherol. PRACTICAL APPLICATIONS Research has shown that seafood available for human consumption is rapidly being depleted and that many fish species may become extinct in the next half-century or so. Acid and alkali solubilization are recent but well-known techniques used for preparing protein isolates from under-utilized aquatic species and the by-products of the seafood industry. Although numerous researchers have studied the use of acid and alkali processes on several sources of seafood, almost no research has been done on the use of antioxidants to protect protein isolates from lipid oxidation. In our research, we have studied the effect of various antioxidants on the oxidative stability of acid- and alkali-solubilized fish myofibrillar proteins. The results from this work will enable the seafood industry to properly identify the process and the type of antioxidants required for making muscle food products with increased oxidative stability. [source]


The trivialization of diagnosis

JOURNAL OF HOSPITAL MEDICINE, Issue 2 2010
Irving Kushner MD
Abstract Although it is widely recognized that diagnosis plays a central role in clinical medicine, in recent years the primacy of diagnosis has come under attack from several sources. 1. "Billable terms" are replacing traditional medical diagnoses; the former are based on International Classification of Diseases lists, which include many non-diagnoses such as symptoms and signs. 2. Diagnosis often gets short shrift because of the perceived urgency of discharge. 3. The problem-oriented record, in practice, has frequently led to a shift in emphasis from synthesis of findings to fragmentation of problems. 4. Presumptive diagnoses frequently metamorphose into established diagnoses in medical records, even if incorrect. 5. A number of authors have apparently disparaged the importance of diagnosis. Nonetheless, it is clear that diagnosis must continue to play a central role in clinical medicine. We propose several ways by which we can resist these forces and ensure that diagnosis retains its appropriate position of primacy. Journal of Hospital Medicine 2010;5:116–119. © 2010 Society of Hospital Medicine. [source]


Half a Century of Public Software Institutions: Open Source as a Solution to Hold-Up Problem

JOURNAL OF PUBLIC ECONOMIC THEORY, Issue 4 2010
MICHAEL SCHWARZ
We argue that the intrinsic inefficiency of proprietary software has historically created a space for alternative institutions that provide software as a public good. We discuss several sources of such inefficiency, focusing on one that has not been described in the literature: underinvestment due to fear of hold-up. An inefficient hold-up occurs when a user of software must make complementary investments, when the return on such investments depends on future cooperation of the software vendor, and when contracting about a future relationship with the software vendor is not feasible. We also consider how the nature of the production function of software makes software cheaper to develop when the code is open to the end users. Our framework explains why open source dominates certain sectors of the software industry (e.g., programming languages) while being almost nonexistent in some others (e.g., computer games). We then use our discussion of efficiency to examine the history of institutions for the provision of public software, from the early collaborative projects of the 1950s to the modern "open source" software institutions. We look at how such institutions have created a sustainable coalition for the provision of software as a public good by organizing diverse individual incentives, both altruistic and profit-seeking, providing open source products of tremendous commercial importance that have come to dominate certain segments of the software industry. [source]


Automatic quality assessment in structural brain magnetic resonance imaging

MAGNETIC RESONANCE IN MEDICINE, Issue 2 2009
Bénédicte Mortamet
Abstract MRI has evolved into an important diagnostic technique in medical imaging. However, the reliability of the derived diagnosis can be degraded by artifacts, which challenge both radiologists and automatic computer-aided diagnosis. This work proposes a fully automatic method for measuring the image quality of three-dimensional (3D) structural MRI. Quality measures are derived by analyzing the air background of magnitude images and are capable of detecting image degradation from several sources, including bulk motion, residual magnetization from incomplete spoiling, blurring, and ghosting. The method has been validated on 749 3D T1-weighted 1.5T and 3T head scans acquired at 36 Alzheimer's Disease Neuroimaging Initiative (ADNI) study sites operating with various software and hardware combinations. Results are compared against qualitative grades assigned by the ADNI quality control center (taken as the reference standard). The derived quality indices are independent of the MRI system used and agree with the reference standard quality ratings with high sensitivity and specificity (>85%). The proposed procedures for quality assessment could be of great value for both research and routine clinical imaging. They could greatly improve workflow through the ability to rule out the need for a repeat scan while the patient is still in the magnet bore. Magn Reson Med, 2009. © 2009 Wiley-Liss, Inc. [source]
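The air background of a magnitude image should contain only noise; motion or ghosting leaks structured signal into it. As a loose sketch of that idea (not the authors' actual quality indices), one can flag scans whose background has an elevated fraction of voxels above a noise-based threshold. All intensity values below are invented.

```python
def background_artifact_fraction(background, sigma, k=4.0):
    """Fraction of air-background voxels exceeding k * sigma.

    For a background containing only noise this fraction is tiny; structured
    residue from motion or ghosting pushes it up, flagging a degraded scan.
    """
    threshold = k * sigma
    return sum(1 for v in background if v > threshold) / len(background)

# Toy background intensities (arbitrary units), assumed noise sigma = 10.
clean_bg   = [8, 12, 9, 14, 11, 7, 10, 13, 9, 12]
ghosted_bg = [8, 55, 9, 60, 11, 7, 52, 13, 9, 58]  # periodic ghost bleeding in

clean_frac = background_artifact_fraction(clean_bg, sigma=10)
ghost_frac = background_artifact_fraction(ghosted_bg, sigma=10)
```

A full implementation would estimate sigma from the background itself (its distribution is approximately Rayleigh in magnitude images) rather than assume it.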


Optimized protocol of a frequency domain fluorescence lifetime imaging microscope for FRET measurements

MICROSCOPY RESEARCH AND TECHNIQUE, Issue 5 2009
Aymeric Leray
Abstract Frequency-domain fluorescence lifetime imaging microscopy (FLIM) has become a commonly used technique to measure lifetimes in biological systems. However, lifetime measurements are strongly dependent on numerous experimental parameters. Here, we describe a complete calibration and characterization of a FLIM system and suggest parameter optimization for minimizing measurement errors during acquisition. We used standard fluorescent molecules and reference biological samples, exhibiting both single and multiple lifetime components, to calibrate and evaluate our frequency domain FLIM system. We identify several sources of lifetime precision degradation that may occur in FLIM measurements. Following a rigorous calibration of the system and a careful optimization of the acquisition parameters, we demonstrate the accuracy and reliability of fluorescence lifetime measurements. In addition, we show the system's potential on living cells by visualizing FRET in CHO cells. The proposed calibration and optimization protocol is suitable for the measurement of samples with multiple lifetime components and is applicable to any frequency domain FLIM system. Using this method on our FLIM microscope enabled us to obtain the best fluorescence lifetime precision accessible with such a system. Microsc. Res. Tech., 2009. © 2008 Wiley-Liss, Inc. [source]
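In the frequency domain, a lifetime can be recovered from either the phase shift or the demodulation of the emission relative to the excitation; for a single-exponential decay the two estimates coincide, and a mismatch between them signals multiple lifetime components. The sketch below applies the standard single-frequency relations τ_φ = tan(φ)/ω and τ_m = √(1/m² − 1)/ω; the 80 MHz modulation frequency and 4 ns lifetime are illustrative values, not taken from the study.

```python
import math

def phase_lifetime(phi_rad, f_mod_hz):
    """Lifetime from the measured phase shift: tau_phi = tan(phi) / omega."""
    omega = 2.0 * math.pi * f_mod_hz
    return math.tan(phi_rad) / omega

def modulation_lifetime(m, f_mod_hz):
    """Lifetime from the demodulation ratio: tau_m = sqrt(1/m^2 - 1) / omega."""
    omega = 2.0 * math.pi * f_mod_hz
    return math.sqrt(1.0 / (m * m) - 1.0) / omega

# Single-exponential consistency check: a 4 ns fluorophore at 80 MHz.
f = 80e6
tau = 4e-9
omega = 2.0 * math.pi * f
phi = math.atan(omega * tau)                    # expected phase shift
m = 1.0 / math.sqrt(1.0 + (omega * tau) ** 2)   # expected demodulation

assert abs(phase_lifetime(phi, f) - tau) < 1e-12
assert abs(modulation_lifetime(m, f) - tau) < 1e-12
```

For a heterogeneous sample, τ_φ comes out shorter than τ_m, which is one simple diagnostic for the multiple-component case the protocol addresses.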


Surprising evolution of the parsec-scale Faraday Rotation gradients in the jet of the BL Lac object B1803+784

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 1 2009
M. Mahmud
ABSTRACT Several multifrequency polarization studies have shown the presence of systematic Faraday Rotation gradients across the parsec-scale jets of active galactic nuclei, taken to be due to the systematic variation of the line-of-sight component of a helical magnetic (B) field across the jet. Other studies have confirmed the presence and sense of these gradients in several sources, thus providing evidence that these gradients persist over time and over large distances from the core. However, we find surprising new evidence for a reversal in the direction of the Faraday Rotation gradient across the jet of B1803+784, for which multifrequency polarization observations are available at four epochs. At our three epochs and the epoch of Zavala & Taylor, we observe transverse rotation measure (RM) gradients across the jet, consistent with the presence of a helical magnetic field wrapped around the jet. However, we also observe a 'flip' in the direction of the gradient between 2000 June and 2002 August. Although the origins of this phenomenon are not entirely clear, possible explanations include (i) the sense of rotation of the central supermassive black hole and accretion disc has remained the same, but the dominant magnetic pole facing the Earth has changed from north to south, (ii) a change in the direction of the azimuthal B field component as a result of torsional oscillations of the jet and (iii) a change in the relative contributions to the observed RMs of the 'inner' and 'outer' helical fields in a magnetic-tower model. Although we cannot entirely rule out the possibility that the observed changes in the RM distribution are associated instead with changes in the thermal-electron distribution in the vicinity of the jet, we argue that this explanation is unlikely. [source]
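The rotation measure itself comes from the linear dependence of the polarization angle on wavelength squared, χ(λ²) = χ₀ + RM·λ². A minimal sketch of the fit is below; the polarization angles are synthetic, the frequencies are invented round numbers, and the n·π angle ambiguity is assumed to be already resolved.

```python
import numpy as np

C = 299792458.0  # speed of light, m/s

def rotation_measure(freqs_hz, chi_rad):
    """Fit chi = chi0 + RM * lambda^2 by least squares; return RM (rad/m^2).

    Assumes the n*pi polarization-angle ambiguity has been resolved so the
    angles lie on a single linear branch.
    """
    lam2 = (C / np.asarray(freqs_hz)) ** 2
    rm, chi0 = np.polyfit(lam2, np.asarray(chi_rad), 1)
    return rm

# Synthetic four-frequency example with RM = -50 rad/m^2.
freqs = np.array([4.6e9, 5.1e9, 7.9e9, 15.4e9])
true_rm, chi0 = -50.0, 0.3
chi = chi0 + true_rm * (C / freqs) ** 2

assert abs(rotation_measure(freqs, chi) - true_rm) < 1e-6
```

A transverse RM gradient is then just this fitted RM evaluated at pixels on opposite sides of the jet axis, and the reported 'flip' corresponds to the sign of that cross-jet difference reversing between epochs.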


Craniofacial skeletal deviations following in utero exposure to the anticonvulsant phenytoin: monotherapy and polytherapy

ORTHODONTICS & CRANIOFACIAL RESEARCH, Issue 1 2003
HI Orup Jr
Structured Abstract. Authors: Orup Jr HI, Holmes LB, Keith DA, Coull BA. Objective: To identify and quantify the craniofacial effects from prenatal exposure to phenytoin monotherapy and polytherapy using cephalometric, hand-wrist, and panoramic radiographs and to determine if such deviations persist with age. Design: Craniofacial structures of 28 anticonvulsant-exposed individuals were evaluated using 20 landmarks in lateral cephalometric radiographs and 19 landmarks in frontal cephalometric radiographs. Skeletal maturity was assessed using hand-wrist radiographs. Dental maturity and the presence of dental anomalies were evaluated using panoramic radiographs. Eleven individuals were re-evaluated 7 years later, on average, to determine the persistence of any measured deviations. Setting and Sample Population: Department of Growth and Development, Harvard School of Dental Medicine and Massachusetts General Hospital. Patients were recruited from several sources. Outcome Measure: The evaluated dimensions included linear, angular, and proportional measures. Results: The most common deviations were decreases in the height and length of the maxilla, the length of the posterior cranial base, the length of the mandible, the cranial width, the level of the cribriform plate, and the Wits Appraisal assessment. The deviations were more significant in the polytherapy-exposed individuals than in the monotherapy-exposed individuals. These deviations, especially in the maxilla, persisted with age as revealed in a re-evaluation of 11 individuals. Conclusion: The craniofacial skeletal findings among individuals exposed in utero to phenytoin monotherapy or phenytoin polytherapy, when considered in aggregate, suggest a mild pattern of maxillary hypoplasia that becomes more pronounced with age. [source]


Reproducibility of tricuspid regurgitant jet velocity measurements in children and young adults with sickle cell disease undergoing screening for pulmonary hypertension

AMERICAN JOURNAL OF HEMATOLOGY, Issue 10 2010
Robert I. Liem
The reproducibility of tricuspid regurgitant jet velocity (TRJV) measurements by Doppler echocardiography has not been subjected to systematic evaluation among individuals with sickle cell disease (SCD) undergoing screening for pulmonary hypertension. We examined sources of disagreement associated with peak TRJV in children and young adults with SCD. Peak TRJV was independently measured and interpreted a week apart by separate sonographers and readers, respectively, in 30 subjects (mean age, 15.8 ± 3.3 years) who provided 120 observations. We assessed intra-/inter-reader, intra-/inter-sonographer, sonographer-reader, and within-subject agreement using the Intraclass Correlation Coefficient (ICC) and Cohen's kappa (κ). Agreement was examined graphically using Bland-Altman plots. Although sonographers could estimate and measure peak TRJV in all subjects, readers designated tricuspid regurgitation nonquantifiable in 10–17% of their final interpretations. Intra-reader agreement was highest (ICC = 0.93 [95% CI 0.86, 0.97], P = 0.0001) and within-subject agreement lowest (ICC = 0.36 [95% CI 0.02, 0.64], P = 0.021) for single TRJV measurements. Similarly, intra-reader agreement was highest (κ = 0.74 [95% CI 0.53, 0.95], P = 0.0001) and within-subject agreement lowest (κ = 0.14 [95% CI −0.17, 0.46], P = 0.38) when sonographers and readers categorized TRJV measurements. On Bland-Altman plots, absolute differences in observations increased with higher mean TRJV readings for intra-/inter-reader agreement. Peak TRJV measurements in individual children and young adults with SCD are affected by several sources of disagreement, underscoring the need for methodological improvements that ensure reproducibility of this screening modality for making clinical decisions in this population. Am. J. Hematol., 2010. © 2010 Wiley-Liss, Inc. [source]


Technical note: Applicability of tooth cementum annulation to an archaeological population

AMERICAN JOURNAL OF PHYSICAL ANTHROPOLOGY, Issue 3 2009
Mirjana Roksandic
Abstract The use of tooth cementum annulations for age determination has been deemed promising, exhibiting high correlations with chronological age. Despite its apparent potential, to date, the tooth cementum annulations method has been used rarely for estimating ages in archaeological populations. Here we examine the readability of cementum annulations and the consistency of age estimates using a sample of 116 adults from the Iron Gates Gorge Mesolithic/Neolithic series. Our examination of the method pointed to several sources of error that call into question the use of this method for estimating the chronological ages of archaeologically derived dental samples. The poor performance of the method in our analysis might be explained by taphonomic influences, including the effect of chemical and biological agents on dental microstructures. Am J Phys Anthropol 2009. © 2009 Wiley-Liss, Inc. [source]


Detection of bone glue treatment as a major source of contamination in ancient DNA analyses

AMERICAN JOURNAL OF PHYSICAL ANTHROPOLOGY, Issue 2 2002
Graeme J. Nicholson
Abstract Paleogenetic investigations of ancient DNA extracted from fossil material are for many reasons susceptible to falsification by the presence of more recent contamination from several sources. Gelatine-based bone glue, which has been used extensively for nearly two centuries by curators to preserve hard tissues, contributes nonauthentic DNA to paleontological material. This fact has been frequently neglected and is barely mentioned in the literature. Now paleogeneticists, curators, and conservators are faced with the problem that treatment of samples with adhesives and consolidants for conservatory purposes has seldom been recorded. Here, we show that racemization of amino acids, and in particular serine, is an excellent indicator for the treatment of paleontological samples with glue. Am J Phys Anthropol 118:117–120, 2002. © 2002 Wiley-Liss, Inc. [source]


Social interaction distance and stratification

THE BRITISH JOURNAL OF SOCIOLOGY, Issue 2 2003
Wendy Bottero
ABSTRACT There have been calls from several sources recently for a renewal of class analysis that would encompass social and cultural, as well as economic, elements. This paper explores a tradition in stratification that is founded on this idea: relational or social distance approaches to mapping hierarchy and inequality, which theorize stratification as a social space. The idea of 'social space' is not treated as a metaphor of hierarchy, nor is the nature of the structure determined a priori. Rather, the space is identified by mapping social interactions. Exploring the nature of social space involves mapping the network of social interaction (patterns of friendship, partnership and cultural similarity) which gives rise to relations of social closeness and distance. Differential association has long been seen as the basis of hierarchy, but the usual approach is first to define a structure composed of a set of groups and then to investigate social interaction between them. Social distance approaches reverse this, using patterns of interaction to determine the nature of the structure. Differential association can be seen as a way of defining proximity within a social space, from the distances between social groups, or between social groups and social objects (such as lifestyle items). The paper demonstrates how the very different starting point of social distance approaches also leads to strikingly different theoretical conclusions about the nature of stratification and inequality. [source]


Commercial Innovations from Consulting Engineering Firms: An Empirical Exploration of a Novel Source of New Product Ideas

THE JOURNAL OF PRODUCT INNOVATION MANAGEMENT, Issue 4 2003
Ian Alam
Industrial firms interact with many outside organizations such as customers, suppliers, competitors, and universities to obtain input for their new product development (NPD) programs. The importance of interfirm interactions is reflected in a large number of interdisciplinary studies reported in a wide variety of literature bases. As a result, several sources of new product ideas have been investigated in the extant literature. Yet given the growing complexity and risks in new product development, there seems to be a need for managers to obtain input from new and unutilized sources. Apparently, one source that industry has not tapped adequately for its NPD efforts is the consulting engineering firms (CEFs). To fill the aforementioned gap in the literature, this article explores the roles and suitability of CEFs in new product development by conducting a rigorous in-depth case research of new product idea generation in a large Australian firm manufacturing a variety of industrial products. To generate ideas for the sponsoring firm, longitudinal field interviews with 64 managers and engineers from 32 large CEFs were conducted over a one-and-a-half-year period. The findings of the field interviews were combined with the documentary evidence and the archival data. This longitudinal data collection enabled the author to generate new product ideas over real time and to gain access to information that otherwise might have been difficult to obtain. The results suggest that CEFs are a rich source of new product ideas of potential commercial value. However, industry is making little use of CEFs, which underscores the need for industrial firms to collaborate and to establish an effective idea transfer relationship with them. Moreover, the services of CEFs are not restricted to idea generation but can stretch across the entire NPD process.
These findings of the study encourage product managers to conceptualize NPD as a highly synergistic, mutually interdependent process between CEFs and industrial firms rather than simply arm's-length consulting transactions. Given the dearth of research on idea generation with CEFs, this study highlights findings that are novel and that go beyond the techniques of new product idea generation established in the extant literature. [source]


Biases associated with population estimation using molecular tagging

ANIMAL CONSERVATION, Issue 3 2000
Juliann L. Waits
Although capture–recapture techniques are often used to estimate population size, these approaches are difficult to implement for a wide variety of species. Highly polymorphic microsatellite markers are useful in individual identification, and these 'molecular tags' can be collected without having to capture or trap the individual. However, several sources of error associated with molecular identification techniques, including failure to identify individuals with the same genotype for these markers as being different, and incorrect assignment of individual genotypes, could bias population estimates. Simulations of populations sampled for the purpose of estimating population size were used to assess the extent of these potential biases. Population estimates tended to be biased downward as the likelihood of individuals sharing the same genotype increased (as measured by the probability of identity (PI) of the multi-locus genotype); this bias increased with population size. Populations of 1000 individuals were underestimated by ~5% when the PI was as small as 1.4 × 10⁻⁷. A similar-sized bias did not occur for populations of 50 individuals until the PI had increased to approximately 2.5 × 10⁻⁵. Errors in genotype assignment resulted in overestimates of population size; this problem increased with the number of samples and loci that were genotyped. Population estimates were often >200% the size of the simulated populations when the probability of making a genotyping error was 0.05/locus and 7–10 loci were used to identify individuals. This bias was substantially reduced by decreasing genotyping error rate to 0.005. If possible, only highly polymorphic loci that are critical for the identification of the individual should be used in molecular tagging, and considerable efforts should be made to minimize errors in genotype determination. [source]
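Both biases are easy to reproduce in a toy simulation: genotype sharing collapses distinct individuals into one 'tag' (undercounting), while per-locus typing errors mint spurious new tags from repeated samples of the same animal (overcounting). The sketch below counts distinct observed multilocus genotypes as a naive population estimate; all parameter values are our own illustrative choices, not those of the original simulations.

```python
import random

def count_distinct_genotypes(n_animals, samples_per_animal,
                             n_loci, n_alleles, error_rate, seed=0):
    """Naive 'molecular tag' population estimate: the number of distinct
    multilocus genotypes observed across all collected samples."""
    rng = random.Random(seed)
    animals = [tuple(rng.randrange(n_alleles) for _ in range(n_loci))
               for _ in range(n_animals)]
    observed = set()
    for genotype in animals:
        for _ in range(samples_per_animal):
            # each locus is independently mis-typed with prob error_rate
            observed.add(tuple(rng.randrange(n_alleles)
                               if rng.random() < error_rate else allele
                               for allele in genotype))
    return len(observed)

# Shadow effect: with 4 loci of 4 alleles only 256 genotypes exist,
# so a population of 1000 is necessarily undercounted.
shadow = count_distinct_genotypes(1000, 1, 4, 4, 0.0)
assert shadow < 1000

# Genotyping error: with 10 polymorphic loci, errors in repeated samples
# of the same 100 animals create spurious "new" individuals, inflating
# the count above the error-free estimate.
clean = count_distinct_genotypes(100, 3, 10, 10, 0.0)
noisy = count_distinct_genotypes(100, 3, 10, 10, 0.05)
assert noisy > clean
```

The two failure modes pull in opposite directions, which is why the abstract's advice pairs highly polymorphic loci (to suppress sharing) with strict error control (to suppress spurious tags).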


Trends in the management of severe acute pancreatitis: interventions and outcome

ANZ JOURNAL OF SURGERY, Issue 5 2004
Richard Flint
Background: Severe acute pancreatitis (SAP) in the intensive care unit (ICU) is a complex and challenging problem. The aim of the present study was to identify trends in management of SAP patients admitted to a tertiary level ICU, and to relate these to changes in interventions and outcome. Methods: Patients admitted to the Department of Critical Care Medicine (DCCM), Auckland Public Hospital with SAP from 1988 to 2001 (inclusive) were identified from the DCCM prospective database, and data were extracted from several sources. Results: One hundred and twelve patients (men 69, women 43, mean age (±SD) 57.3 ± 14.3 years) were admitted with SAP to DCCM in the 13-year period. Aetiology was gallstones (42%), alcohol (29%), or idiopathic (29%). At admission to DCCM the median duration of symptoms was 7 days (range 1–100) and the mean (±SD) Acute Physiology and Chronic Health Evaluation II score was 19.9 ± 8.2. Ninety-nine patients (88%) had respiratory failure and 79 (71%) had circulatory failure. The number of necrosectomies peaked between 1991 and 1995 (17/35 patients (49%) compared to 4/22 (18%) prior to 1991; χ² = 6.90, P = 0.032). Abdominal decompression, enteral nutrition, percutaneous tracheostomy, and the use of stents in endoscopic retrograde cholangiopancreatography were introduced over the study period. The length of stay in DCCM did not alter (median 4 days, range 1–60) but there was a reduction in the length of hospital stay (median 36 days to 15 days; ANOVA = 6.16, P = 0.046). The overall mortality was 31% (35/112) and did not alter over the study period. Conclusions: SAP remains a formidable disease with a high mortality despite a number of changes in intensive care and surgical management. [source]