Starting Point (starting + point)
Kinds of Starting Point

Selected Abstracts

Starting Points for BIG Ideas
DESIGN MANAGEMENT REVIEW, Issue 3 2000. Gary Waymire
First page of article [source]

Maintaining Case-Based Reasoners: Dimensions and Directions
COMPUTATIONAL INTELLIGENCE, Issue 2 2001. David C. Wilson
Experience with the growing number of large-scale and long-term case-based reasoning (CBR) applications has led to increasing recognition of the importance of maintaining existing CBR systems. Recent research has focused on case-base maintenance (CBM), addressing such issues as maintaining consistency, preserving competence, and controlling case-base growth. A set of dimensions for case-base maintenance, proposed by Leake and Wilson, provides a framework for understanding and expanding CBM research. However, it also has been recognized that other knowledge containers can be equally important maintenance targets. Multiple researchers have addressed pieces of this more general maintenance problem, considering such issues as how to refine similarity criteria and adaptation knowledge. As with case-base maintenance, a framework of dimensions for characterizing more general maintenance activity, within and across knowledge containers, is desirable to unify and understand the state of the art, as well as to suggest new avenues of exploration by identifying points along the dimensions that have not yet been studied. This article presents such a framework by (1) refining and updating the earlier framework of dimensions for case-base maintenance, (2) applying the refined dimensions to the entire range of knowledge containers, and (3) extending the theory to include coordinated cross-container maintenance. The result is a framework for understanding the general problem of case-based reasoner maintenance (CBRM). Taking the new framework as a starting point, the article explores key issues for future CBRM research. [source]
A method for verifying concurrent Java components based on an analysis of concurrency failures
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 3 2007. Brad Long
Abstract: The Java programming language supports concurrency. Concurrent programs are harder to verify than their sequential counterparts due to their inherent non-determinism and a number of specific concurrency problems, such as interference and deadlock. In previous work, we have developed the ConAn testing tool for the testing of concurrent Java components. ConAn has been found to be effective at testing a large number of components, but there are certain classes of failures that are hard to detect using ConAn. Although a variety of other verification tools and techniques have been proposed for the verification of concurrent software, they each have their strengths and weaknesses. In this paper, we propose a method for verifying concurrent Java components that includes ConAn and complements it with other static and dynamic verification tools and techniques. The proposal is based on an analysis of common concurrency problems and concurrency failures in Java components. As a starting point for determining the concurrency failures in Java components, a Petri-net model of Java concurrency is used. By systematically analysing the model, we come up with a complete classification of concurrency failures. The classification and analysis are then used to determine suitable tools and techniques for detecting each of the failures. Finally, we propose to combine these tools and techniques into a method for verifying concurrent Java components. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Reconstructing ripeness I: A study of constructive engagement in protracted social conflicts
CONFLICT RESOLUTION QUARTERLY, Issue 1 2008. Peter T. Coleman
What moves people to work with each other rather than against each other when locked into destructive, long-term conflicts? Ripeness theory has been a useful starting point for understanding such motives, but has limited explanatory power under conditions of intractable conflict. This article is the first of a two-part series presenting the findings from a study that explored various methods of eliciting constructive engagement from stakeholders through interviews with expert scholar-practitioners working with protracted conflicts. A grounded theory analysis was applied to the interviews to allow new insights into constructive conflict engagement to emerge from the data. Our objective was to develop more robust theories and practices. A dynamical systems synthesis of the findings is presented, and its implications for reconceptualizing ripeness are discussed. [source]

Reconstructing ripeness II: Models and methods for fostering constructive stakeholder engagement across protracted divides
CONFLICT RESOLUTION QUARTERLY, Issue 1 2008. Peter T. Coleman
A priority objective for diplomats, mediators, negotiators, and other individuals working to rectify seemingly intractable conflicts is to help foster stakeholder "ripeness," or a willingness and commitment to engage constructively in the conflict. This is often extremely difficult to achieve due to long histories between the parties of animosity, suspicion, hostility, and fear. Ripeness theory has been a useful starting point for understanding such motives, but has limited explanatory power under conditions of intractable conflict. The second in a two-part series, this article outlines the implications for practice resulting from an analysis of interviews with expert scholar-practitioners working in the field with intractable conflicts. [source]

Balancing Self-interest and Altruism: corporate governance alone is not enough
CORPORATE GOVERNANCE, Issue 2 2004. Sandra Dawson
Governance has become a topic of unprecedented emotional significance and fundamental importance in the boardrooms of companies, partly as a result of a confluence of early 21st century corporate scandals, stock market falls and public rage about senior executive remuneration. A simple adherence to formal systems of corporate governance, in terms of structures, rules, procedures and codes of practice, whilst a starting point, will not alone win back confidence in markets and corporations. Consideration needs to be given to how to release entrepreneurial self-interest within a moral context. This focuses attention on the role of other major social institutions which may more naturally be able to nurture a moral framework, as well as the role of individual citizens and the responsibility of all of us to enact a moral framework for business activities. There is no escape from individual moral responsibility, and our part in creating and sustaining social institutions beyond corporations. [source]
Innovation and HRM: Towards an Integrated Framework
CREATIVITY AND INNOVATION MANAGEMENT, Issue 2 2005. Jan De Leede
This paper explores the connection between innovation (management) and human resource management. Much has been written about both concepts separately, but there is no integrated conceptual framework available for the combination of the two. Our goal here is to develop such a framework. We do this in a number of steps, starting with a presentation of the existing approaches and models with respect to innovation (management) and HRM. This is followed by a search for the linkage between the two traditions, as a starting point for an integrated model and an in-depth case study regarding the link between innovation and HRM, in order to further develop our model. We conclude with the presentation of our model and with suggestions for further research. [source]

Strengthening Public Safety Nets from the Bottom Up
DEVELOPMENT POLICY REVIEW, Issue 5 2002. Jonathan Morduch
Helping to reduce vulnerability poses a new set of challenges for public policy. A starting point is understanding the ways in which communities and extended families try to cope with difficulties in the absence of public interventions. Coping mechanisms range from the informal exchange of transfers and loans to more structured institutions that enable an entire community to provide protection to its neediest members. This article describes ways of building public safety nets to complement and extend informal and private institutions. The most effective policies will combine transfer systems that are sensitive to existing mechanisms with new institutions for providing insurance and credit and for generating savings. [source]

Theoretical influences on research on language development and intervention in individuals with mental retardation
DEVELOPMENTAL DISABILITIES RESEARCH REVIEW, Issue 3 2004. Leonard Abbeduto
Abstract: In this article, we consider the theoretical debates and frameworks that have shaped research on language development and intervention in persons with mental retardation over the past four decades. Our starting point is the nativist theory, which has been espoused most forcefully by Chomsky. We also consider more recent alternatives to the nativist approach, including the social-interactionist and emergentist approaches, which have been developed largely within the field of child language research. We also consider the implications for language development and intervention of the genetic syndrome-based approach to behavioral research advocated by Dykens and others. We briefly review the impact and status of the debates spurred by the nativist approach in research on the course of language development in individuals with mental retardation. In addition, we characterize some of the achievements in language intervention that have been made possible by the debates spurred by nativism and the various alternatives to it. The evidence we consider provides support for all three alternatives to the nativist approach. Moreover, successful interventions appear to embody elements of several of these approaches as well as other theoretical approaches (e.g., behaviorism). We conclude that language intervention must be theoretically eclectic in its approach, with different strategies appropriate for teaching different features of language, at different points in development, and for children displaying different characteristics or learning histories. © 2004 Wiley-Liss, Inc. MRDD Research Reviews 2004;10:184-192. [source]
Identification of genes expressed preferentially in the developing peripheral margin of the optic cup
DEVELOPMENTAL DYNAMICS, Issue 9 2009. Jeffrey M. Trimarchi
Abstract: Specification of the peripheral optic cup by Wnt signaling is critical for formation of the ciliary body/iris. Identification of marker genes for this region during development provides a starting point for functional analyses. During transcriptional profiling of single cells from the developing eye, two cells were identified that expressed genes not found in most other single cell profiles. In situ hybridizations demonstrated that many of these genes were expressed in the peripheral optic cup in both early mouse and chicken development, and in the ciliary body/iris at subsequent developmental stages. These analyses indicate that the two cells probably originated from the developing ciliary body/iris. Changes in expression of these genes were assayed in embryonic chicken retinas when canonical Wnt signaling was ectopically activated by CA-β-catenin. Twelve ciliary body/iris genes were identified as upregulated following induction, suggesting they are excellent candidates for downstream effectors of Wnt signaling in the optic cup. Developmental Dynamics 238:2327-2339, 2009. © 2009 Wiley-Liss, Inc. [source]

The effect of prenatal hypoxia on brain development: short- and long-term consequences demonstrated in rodent models
DEVELOPMENTAL SCIENCE, Issue 4 2006. Hava Golan
Hypoxia (H) and hypoxia-ischemia (HI) are major causes of foetal brain damage with long-lasting behavioral implications. The effect of hypoxia has been widely studied in human and a variety of animal models. In the present review, we summarize the latest studies testing the behavioral outcomes following prenatal hypoxia/hypoxia-ischemia in rodent models. Delayed development of sensory and motor reflexes during the first postnatal month of rodent life was observed by various groups. Impairment of motor function, learning and memory was evident in the adult animals. Activation of the signaling leading to cell death was detected as early as three hours following H/HI. An increase in the counts of apoptotic cells appeared approximately three days after the insult and peaked about seven days later. Around 14-20 days following the H/HI, the amount of cell death observed in the tissue returned to its basal levels and cell loss was apparent in the brain tissue. The study of the molecular mechanism leading to brain damage in animal models following prenatal hypoxia adds valuable insight to our knowledge of the central events that account for the morphological and functional outcomes. This understanding provides the starting point for the development and improvement of efficient treatment and intervention strategies. [source]
Insulin therapy in type 2 diabetes: what is the evidence?
DIABETES OBESITY & METABOLISM, Issue 5 2009. Mariëlle J. P. Van Avendonk
Aim: To systematically review the literature regarding insulin use in patients with type 2 diabetes mellitus. Methods: A Medline and Embase search was performed to identify randomized controlled trials (RCT) published in English between 1 January 2000 and 1 April 2008, involving insulin therapy in adults with type 2 diabetes mellitus. The RCTs must comprise at least glycaemic control (glycosylated haemoglobin (HbA1c), postprandial plasma glucose and/or fasting blood glucose (FBG)) and hypoglycaemic events as outcome measurements. Results: The Pubmed search resulted in 943 hits; the Embase search gave 692 hits. A total of 116 RCTs were selected by title or abstract. Eventually 78 trials met the inclusion criteria. The studies were very diverse and of different quality. They comprised all possible insulin regimens with and without combination with oral medication. Continuing metformin and/or sulphonylurea after the start of therapy with basal long-acting insulin results in better glycaemic control with lower insulin requirements, less weight gain and fewer hypoglycaemic events. Long-acting insulin analogues in combination with oral medication are associated with similar glycaemic control but fewer hypoglycaemic episodes compared with NPH insulin. Most of the trials demonstrated better glycaemic control with premix insulin therapy than with a long-acting insulin once daily, but premix insulin causes more hypoglycaemic episodes. Analogue premix provides similar HbA1c, but lower postprandial glucose levels compared with human premix, without an increase in hypoglycaemic events or weight gain. Drawing conclusions from the limited number of studies concerning basal-bolus regimens does not seem possible. Some studies showed that rapid-acting insulin analogues frequently result in a better HbA1c or postprandial glucose than regular human insulin, without an increase of hypoglycaemia. Conclusion: A once-daily basal insulin regimen added to oral medication is an ideal starting point. All next steps, from one to two or even more injections per day, should be taken very carefully and in thorough deliberation with the patient, who has to comply with such a regimen for many years. [source]

What does postprandial hyperglycaemia mean?
DIABETIC MEDICINE, Issue 3 2004. R. J. Heine
Abstract: Aims: The potential importance of postprandial glucose (PPG) control in the development of complications in Type 2 diabetes is much debated. The recent American Diabetes Association (ADA) consensus statement discussed the role of postprandial hyperglycaemia in the pathogenesis of diabetic complications and concluded that the relationship between PPG excursions and the well-established risk factors for cardiovascular disease (CVD) should be further examined. Using the ADA statement as a starting point and including the more recent American College of Endocrinology guidelines on glycaemic control, a panel of experts in diabetes met to review the role of PPG, within the context of the overall metabolic syndrome, in the development of complications in Type 2 diabetes. Results: Postprandial hyperglycaemia is a risk indicator for micro- and macrovascular complications, not only in patients with Type 2 diabetes but also in those with impaired glucose tolerance. In addition, the metabolic syndrome confers an increased risk of CVD morbidity and mortality. The debate focused on the relative contributions of postprandial hyperglycaemia, the metabolic syndrome and, in particular, raised triglyceride levels in the postprandial state, to the development of cardiovascular complications of diabetes. Conclusions: The panel recommended that in the prevention and management of microvascular complications of Type 2 diabetes, targeting both chronic and acute glucose fluctuations is necessary. Lowering the macrovascular risk also requires control of (postprandial) triglyceride levels and other components of the metabolic syndrome. [source]

Empirical prediction of debris-flow mobility and deposition on fans
EARTH SURFACE PROCESSES AND LANDFORMS, Issue 2 2010. Christian Scheidl
Abstract: A new method to predict the runout of debris flows is presented. A database of documented sediment-transporting events in torrent catchments of Austria, Switzerland and northern Italy has been compiled, using common classification techniques. With these data we test an empirical relationship between planimetric deposition area and event volume, and compare it with results from other studies. We introduce a new empirical relation to determine the mobility coefficient as a function of geomorphologic catchment parameters. The mobility coefficient is thought to reflect some of the flow properties during the depositional part of the debris-flow event. The empirical equations are implemented in a geographical information system (GIS) based simulation program and combined with a simple flow routing algorithm, to determine the potential runout area covered by debris-flow deposits. For a given volume and starting point of the deposits, a Monte-Carlo technique is used to produce flow paths that simulate the spreading effect of a debris flow. The runout zone is delineated by confining the simulated potential spreading area in the downslope direction with the empirically determined planimetric deposition area. The debris-flow volume is then distributed over the predicted area according to the calculated outflow probability of each cell. The simulation uses the ARC-Objects environment of ESRI© and is adapted to run with high-resolution (2.5 m × 2.5 m) digital elevation models, generated for example from LiDAR data. The simulation program, called TopRunDF, is tested with debris-flow events of 1987 and 2005 in Switzerland. Copyright © 2009 John Wiley & Sons, Ltd. [source]
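The area scaling and the Monte Carlo spreading step described in this abstract can be sketched in a few lines of code. The sketch below is not TopRunDF: it assumes the commonly used two-thirds scaling B = k * V^(2/3) between planimetric deposition area B and event volume V, an invented mobility coefficient k, a synthetic 2.5 m DEM, a hypothetical starting cell and a purely slope-weighted random walk, so every number and name in it is a placeholder rather than a value from the paper.

```python
import numpy as np

def predict_deposition_area(volume_m3, k_mobility):
    """Planimetric deposition area from event volume, assuming B = k * V**(2/3)."""
    return k_mobility * volume_m3 ** (2.0 / 3.0)

def monte_carlo_runout(dem, start, n_walks, max_cells, rng=None):
    """Visit counts of slope-weighted random walks released from a starting cell.

    dem       -- 2D array of cell elevations
    start     -- (row, col) index of the starting point of the deposits
    n_walks   -- number of random flow paths to generate
    max_cells -- stop a walk once it has visited this many distinct cells
    Dividing the returned counts by their total gives a rough per-cell outflow probability.
    """
    rng = np.random.default_rng(rng)
    counts = np.zeros_like(dem, dtype=float)
    nrow, ncol = dem.shape
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, -1), (-1, 1), (1, -1), (1, 1)]
    for _ in range(n_walks):
        r, c = start
        visited = {(r, c)}
        counts[r, c] += 1
        while len(visited) < max_cells:
            lower, drops = [], []
            for dr, dc in steps:                       # candidate lower neighbours
                rr, cc = r + dr, c + dc
                if 0 <= rr < nrow and 0 <= cc < ncol and dem[rr, cc] < dem[r, c]:
                    lower.append((rr, cc))
                    drops.append(dem[r, c] - dem[rr, cc])
            if not lower:                              # local sink: the walk stops
                break
            p = np.array(drops) / sum(drops)           # weight moves by elevation drop
            r, c = lower[rng.choice(len(lower), p=p)]
            visited.add((r, c))
            counts[r, c] += 1
    return counts

# toy example: a noisy inclined plane sampled on 2.5 m cells, 10 000 m3 event
cell = 2.5
rng = np.random.default_rng(1)
dem = np.add.outer(np.linspace(50.0, 0.0, 120), np.zeros(120)) + rng.normal(0, 0.1, (120, 120))
area_m2 = predict_deposition_area(10_000, k_mobility=6.0)        # k is an invented value
counts = monte_carlo_runout(dem, start=(5, 60), n_walks=500, max_cells=int(area_m2 / cell**2))
deposit_depth = 10_000 * (counts / counts.sum()) / cell**2       # spread the volume by visit frequency
```

Normalising the accumulated visit counts and multiplying by the event volume mirrors the idea stated in the abstract of distributing the deposit over the runout area according to a per-cell outflow probability.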
Complementary representation and zones of ecological transition
ECOLOGY LETTERS, Issue 1 2001. K.J. Gaston
Minimum complementary sets of sites that represent each species at least once have been argued to provide a nominal core reserve network and the starting point for regional conservation programs. However, this approach may be inadequate if there is a tendency to represent several species at marginal areas within their ranges, which may occur if high efficiency results from preferential selection of sites in areas of ecological transition. Here we use data on the distributions of birds in South Africa and Lesotho to explore this idea. We found that for five measures that are expected to reflect the location of areas of ecological transition, complementary sets tend to select higher values of these measures than expected by chance. We recommend that methods for the identification of priority areas for conservation that incorporate viability concerns be preferred to minimum representation sets, even if this results in more costly reserve networks. [source]
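Minimum representation ("complementary") site selection of the kind discussed above is usually approximated with a greedy heuristic: keep adding the site that contributes the most species not yet represented until every species is covered. The sketch below illustrates that generic heuristic on invented site-by-species records; it is not the selection algorithm or the South African bird atlas data used in the study.

```python
def greedy_complementary_set(site_species):
    """Greedy approximation of a minimum complementary set: a small collection of
    sites that together represent every species at least once.

    site_species -- dict mapping a site identifier to the set of species recorded there
    """
    unrepresented = set().union(*site_species.values())
    selected = []
    while unrepresented:
        # choose the site that adds the most species not yet represented (complementarity)
        best = max(site_species, key=lambda s: len(site_species[s] & unrepresented))
        selected.append(best)
        unrepresented -= site_species[best]
    return selected

# hypothetical atlas-style records: species sets per grid cell
cells = {
    "cell_A": {"sp1", "sp2", "sp3"},
    "cell_B": {"sp3", "sp4"},
    "cell_C": {"sp4", "sp5", "sp6"},
    "cell_D": {"sp1", "sp6"},
}
print(greedy_complementary_set(cells))   # ['cell_A', 'cell_C'] covers all six species
```

Ties between equally efficient sites are frequent in such selections, and sites where many ranges overlap only marginally often win them, which is exactly the tendency towards zones of ecological transition that the study examines.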
The Public Role of Teaching: To keep the door closed
EDUCATIONAL PHILOSOPHY AND THEORY, Issue 5-6 2010. Goele Cornelissen
Abstract: In this article, I turn my attention to the figure of the ignorant master, Joseph Jacotot, that is depicted in The Ignorant Schoolmaster: Five Lessons in Intellectual Emancipation (1991). I will show that the voice of Jacotot can actually be read as a reaction against the progressive figure of the teacher which, following Rancière's view, can be seen as effecting a stultification. In some respects, however, Rancière's analysis of the pedagogical order no longer seems to be valid in today's partly reconfigured pedagogical order that depicts the teacher in terms of facilitation. Yet the figure of the facilitator can be seen as effecting a stultification as well. Therefore, I will stress that Jacotot's voice is highly relevant today. The most important difference between the (old and current) figure of the stultifier and that of the ignorant master is identified in their starting point. The stultifying master starts from the assumption of inequality. S/he transforms taught material (words, text, images, etc.) into objects of knowledge or resources for competence development that open the door to another world. The ignorant master (Jacotot) assumes equal intelligence and draws attention to a thing in common. According to Rancière, the ignorant master keeps the door closed and puts his/her students in the presence of a thing in common. [source]

Learner, Student, Speaker: Why it matters how we call those we teach
EDUCATIONAL PHILOSOPHY AND THEORY, Issue 5-6 2010. Gert Biesta
Abstract: In this paper I discuss three different ways in which we can refer to those we teach: as learner, as student or as speaker. My interest is not in any aspect of teaching but in the question whether there can be such a thing as emancipatory education. Working with ideas from Jacques Rancière I offer the suggestion that emancipatory education can be characterised as education which starts from the assumption that all students can speak. It starts from the assumption, in other words, that students neither lack a capacity for speech nor are merely producing noise. The idea of the student as a speaker is not offered as an empirical fact but as a different starting point for emancipatory education, one that positions equality at the beginning of education, not at its end. [source]

What it Means to be a Stranger to Oneself
EDUCATIONAL PHILOSOPHY AND THEORY, Issue 5 2009. Olli-Pekka Moisio
Abstract: In adult education there is always a problem of prefabricated and in many respects fixed opinions and views of the world. In this sense, I will argue that the starting point of radical education should be in the destruction of these walls of belief that people build around themselves in order to feel safe. In this connection I will talk about 'gentle shattering of identities' as a problem and a method of radical education. When we as adult educators are trying to gently shatter these solidified identities and pre-packed ways of being and acting in the world, we are moving in the field of questions that Sigmund Freud tackled with the concepts of 'de-personalization' and 'de-realization'. These concepts raise the question about the possibility of at the same time believing that something is and at the same time having a fundamentally sceptical attitude towards this given. In my article I will ask: can we integrate the idea of learning in general with the idea of strangeness to oneself as a legitimate and sensible experiential point of departure for radical learning? [source]

THE CONCEPT OF FUNDAMENTAL EDUCATIONAL CHANGE
EDUCATIONAL THEORY, Issue 3 2007. Leonard J. Waks
By distinguishing sharply between educational change at the organizational and the institutional levels, Waks shows that the mechanisms of change at these two levels are entirely different. He then establishes, by means of a conceptual argument, that fundamental educational change takes place not at the organizational, but rather at the institutional level. Along the way Waks takes Larry Cuban's influential conceptual framework regarding educational change as both a starting point and target of appraisal. [source]
Analysis of the Voltammetric Response of Electroactive Guests in the Presence of Non-Electroactive Hosts at Moderate Concentrations
ELECTROANALYSIS, Issue 18 2004. Sandra Mendoza
Abstract: In this work, we present a method to analyze the voltammetric response of reversible redox systems involving molecules that, bearing m non-interactive electroactive sites, can undergo fast complexation equilibria with host molecules present at concentrations of the same order of magnitude as those of the electroactive guest. The approach focuses on systems for which the relative values of the binding constants for the oxidized and reduced forms of the guest result in the displacement of the voltammetric response of the electroactive molecule as the concentration of the host is increased in the electrolytic solution. This behavior is commonly known as "one wave shift behavior". Based on a series of assumptions, the method allows calculation of all the thermodynamic parameters that describe the electrochemical and complexation equilibria of a given host-guest system. The main strength of the suggested method, however, relies on the fact that it only requires cyclic voltammetry data and that it can be used for systems in which large concentrations of the host can not be employed either due to important changes of the ionic strength or to solubility problems. Although the accuracy of the obtained information is limited by the quality of the data provided by the technique, and by the assumptions employed, it certainly represents an excellent starting point for subsequent refinement either using digital simulations or an independent experimental technique. [source]
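For orientation, the "one wave shift" can be tied to the textbook square-scheme limiting case: a reversible couple whose oxidized and reduced forms both undergo fast 1:1 binding with a host present in large excess shifts its apparent formal potential by (RT/nF) * ln[(1 + K_red[H]) / (1 + K_ox[H])]. The sketch below evaluates only that limiting relation with arbitrary binding constants; it is not the moderate-concentration, m-site treatment developed in the paper.

```python
import math

R, F, T = 8.314, 96485.0, 298.15   # J/(mol K), C/mol, K

def apparent_shift(k_ox, k_red, host_conc, n=1):
    """Shift of the apparent formal potential (V) for a reversible couple with fast
    1:1 host binding of both redox states and host in large excess:
        dE = (RT/nF) * ln((1 + K_red*[H]) / (1 + K_ox*[H]))
    A positive shift means the reduced form is bound more strongly."""
    return (R * T) / (n * F) * math.log((1 + k_red * host_conc) /
                                        (1 + k_ox * host_conc))

# arbitrary example: K_red = 1e5 M^-1, K_ox = 1e3 M^-1, titrating the host
for h in (0.0, 1e-4, 1e-3, 1e-2):
    print(f"[H] = {h:7.4f} M  ->  shift = {1000 * apparent_shift(1e3, 1e5, h):6.1f} mV")
```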
Development of a CE-MS method to analyze components of the potential biomarker vascular endothelial growth factor 165
ELECTROPHORESIS, Issue 13 2009. Angel Puerta
Abstract: The vascular endothelial growth factor 165 (VEGF165) is the predominant form of the complex VEGF-A family. Its angiogenic effect is involved in many physiological and pathological events. For this reason, its roles as a potential biomarker and as a therapeutic drug have been considered. Nevertheless, very little is known about the existence of different forms of VEGF165 arising from glycosylation and potentially from other PTMs. This aspect is important because different forms may differ in biological activity (therapeutic drug application) and the pattern of the different forms can vary with pathological changes (biomarker application). In this work a CE-MS method to separate up to seven peaks containing, at least, 19 isoforms of intact VEGF165 is described. Comparison between human VEGF165 expressed in a glycosylating system, i.e. insect cells, and in a non-glycosylating system, i.e. E. coli cells, has been carried out. The method developed provides structural information (mass fingerprint) about the different forms of VEGF165, and after the deconvolution and analysis of the MS spectra, a PTM pattern of VEGF165 including glycosylation and loss of amino acids at the N- and C-terminus was identified. Glycans involved in PTMs promoting different glycoforms observed in the CE-MS fingerprint were confirmed by MALDI-MS after deglycosylation with peptide N-glycosidase F. This approach is a starting point to study the role of VEGF165 as a potential biomarker and to perform quality control of the drug during manufacturing. To our knowledge this is the first time that a CE-MS method for the analysis of VEGF165 has been developed. [source]

Intra- and interlaboratory calibration of the DR CALUX® bioassay for the analysis of dioxins and dioxin-like chemicals in sediments
ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 12 2004. Harrie T. Besselink
Abstract: In the Fourth National Policy Document on Water Management in the Netherlands [1], it is defined that in 2003, in addition to the assessment of chemical substances, special guidelines for the assessment of dredged material should be recorded. The assessment of dredged material is based on integrated chemical and biological effect measurements. Among others, the DR CALUX® (dioxin responsive-chemically activated luciferase expression) bioassay has tentatively been recommended for inclusion in the dredged material assessment. To ensure the reliability of this bioassay, an intra- and interlaboratory validation study, or ring test, was performed, organized by the Dutch National Institute for Coastal and Marine Management (RIKZ) in cooperation with BioDetection Systems BV (BDS). The intralaboratory repeatability and reproducibility and the limit of detection (LOD) and quantification (LOQ) of the DR CALUX bioassay were determined by analyzing sediment extracts and dimethyl sulfoxide (DMSO) blanks. The highest observed repeatability was found to be 24.1%, whereas the highest observed reproducibility was calculated to be 19.9%. Based on the obtained results, the LOD and LOQ to be applied for the bioassay are 0.3 and 1.0 pM, respectively. The interlaboratory calibration study was divided into three phases, starting with analyzing pure chemicals. During the second phase, sediment extracts were analyzed, whereas in the third phase, whole sediments had to be extracted, cleaned, and analyzed. The average interlaboratory repeatability increased from 14.6% for the analysis of pure compound to 26.1% for the analysis of whole matrix. A similar increase in reproducibility with increasing complexity of handlings was observed, with interlaboratory reproducibility of 6.5% for pure compound and 27.9% for whole matrix. The results of this study are intended as a starting point for implementing the integrated chemical-biological assessment strategy and for systematic monitoring of dredged materials and related materials in the coming years. [source]
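The precision and detection figures quoted above can be reproduced generically once the conventions are fixed. The sketch below assumes common ones: repeatability and reproducibility expressed as coefficients of variation derived from one-way ANOVA variance components across laboratories (ISO 5725-style), and LOD/LOQ taken as the blank mean plus 3 and 10 blank standard deviations. The replicate values and blank responses are invented, and the ring test's own definitions may differ in detail.

```python
import numpy as np

def precision_from_ring_test(results):
    """Repeatability and reproducibility CVs (%) from a balanced lab x replicate table.

    results -- 2D array, rows = laboratories, columns = replicate measurements.
    Uses one-way ANOVA variance components: s_r^2 = within-lab mean square,
    s_R^2 = s_r^2 + between-lab component (assumed ISO 5725 convention).
    """
    results = np.asarray(results, dtype=float)
    p, n = results.shape
    lab_means = results.mean(axis=1)
    grand_mean = results.mean()
    ms_within = ((results - lab_means[:, None]) ** 2).sum() / (p * (n - 1))
    ms_between = n * ((lab_means - grand_mean) ** 2).sum() / (p - 1)
    s_r2 = ms_within
    s_L2 = max((ms_between - ms_within) / n, 0.0)
    cv_repeat = 100 * np.sqrt(s_r2) / grand_mean
    cv_reprod = 100 * np.sqrt(s_r2 + s_L2) / grand_mean
    return cv_repeat, cv_reprod

def blank_based_lod_loq(blank_responses):
    """LOD = mean + 3*SD, LOQ = mean + 10*SD of solvent (DMSO) blanks (assumed rule)."""
    b = np.asarray(blank_responses, dtype=float)
    return b.mean() + 3 * b.std(ddof=1), b.mean() + 10 * b.std(ddof=1)

# invented example: 4 labs x 3 replicates of the same sediment extract (pM TCDD eq.)
data = [[21.0, 19.5, 20.3],
        [23.1, 22.4, 24.0],
        [18.9, 19.8, 19.2],
        [21.7, 20.9, 22.3]]
print("repeatability %CV, reproducibility %CV:", precision_from_ring_test(data))
print("LOD, LOQ:", blank_based_lod_loq([0.05, 0.08, 0.06, 0.07, 0.05]))
```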
A logical starting point for developing priorities for lizard and snake ecotoxicology: A review of available data
ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 5 2002. Kym Rouse Campbell
Abstract: Reptiles, specifically lizards and snakes, usually are excluded from environmental contamination studies and ecological risk assessments. This brief summary of available lizard and snake environmental contaminant data is presented to assist in the development of priorities for lizard and snake ecotoxicology. Most contaminant studies were not conducted recently, list animals found dead or dying after pesticide application, report residue concentrations after pesticide exposure, compare contaminant concentrations in animals from different areas, compare residue concentrations found in different tissues and organs, or compare changes in concentrations over time. The biological significance of the contaminant concentrations is rarely studied. A few recent studies, especially those conducted on modern pesticides, link the contaminant effects with exposure concentrations. Nondestructive sampling techniques for determining organic and inorganic contaminant concentrations in lizards and snakes recently have been developed. Studies that relate exposure, concentration, and effects of all types of environmental contaminants on lizards and snakes are needed. Because most lizards eat insects, studies on the exposure, effects, and accumulation of insecticides in lizards, and their predators, should be a top priority. Because all snakes are upper-trophic-level carnivores, studies on the accumulation and effects of contaminants that are known to bioaccumulate or biomagnify up the food chain should be the top priority. [source]

Evaluating and expressing the propagation of uncertainty in chemical fate and bioaccumulation models
ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 4 2002. Matthew MacLeod
Abstract: First-order analytical sensitivity and uncertainty analysis for environmental chemical fate models is described and applied to a regional contaminant fate model and a food web bioaccumulation model. By assuming linear relationships between inputs and outputs, independence, and log-normal distributions of input variables, a relationship between uncertainty in input parameters and uncertainty in output parameters can be derived, yielding results that are consistent with a Monte Carlo analysis with similar input assumptions. A graphical technique is devised for interpreting and communicating uncertainty propagation as a function of variance in input parameters and model sensitivity. The suggested approach is less calculationally intensive than Monte Carlo analysis and is appropriate for preliminary assessment of uncertainty when models are applied to generic environments or to large geographic areas or when detailed parameterization of input uncertainties is unwarranted or impossible. This approach is particularly useful as a starting point for identification of sensitive model inputs at the early stages of applying a generic contaminant fate model to a specific environmental scenario, as a tool to support refinements of the model and the uncertainty analysis for site-specific scenarios, or for examining defined end points. The analysis identifies those input parameters that contribute significantly to uncertainty in outputs, enabling attention to be focused on defining median values and more appropriate distributions to describe these variables. [source]
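The first-order, log-normal propagation described above reduces to a short calculation: estimate a relative sensitivity S_i for each input by perturbing it slightly, then combine the inputs as Var(ln Y) ≈ Σ S_i^2 * Var(ln X_i), which assumes independent, log-normally distributed inputs and an approximately log-linear model response. In the sketch below the "fate model", the input medians and the geometric standard deviations are placeholders, and the paper's exact formulation (for example its use of confidence factors) may differ in detail.

```python
import math

def propagate_lognormal(model, medians, gsd, rel_step=0.01):
    """First-order uncertainty propagation for a model with log-normal inputs.

    model   -- function taking a dict of input values and returning a positive scalar
    medians -- dict of median input values
    gsd     -- dict of geometric standard deviations of the inputs
    Returns (output at the medians, relative sensitivities, geometric SD of the output).
    """
    y0 = model(medians)
    sens, var_lny = {}, 0.0
    for name, x0 in medians.items():
        perturbed = dict(medians)
        perturbed[name] = x0 * (1 + rel_step)
        s = (model(perturbed) - y0) / y0 / rel_step     # relative sensitivity S_i
        sens[name] = s
        var_lny += s ** 2 * math.log(gsd[name]) ** 2    # S_i^2 * Var(ln X_i)
    return y0, sens, math.exp(math.sqrt(var_lny))

# placeholder "fate model": steady-state concentration ~ emission / (k_reaction * volume)
def toy_model(p):
    return p["emission"] / (p["k_reaction"] * p["volume"])

medians = {"emission": 100.0, "k_reaction": 0.05, "volume": 1e6}
gsd = {"emission": 2.0, "k_reaction": 1.5, "volume": 1.1}
y, s, gsd_out = propagate_lognormal(toy_model, medians, gsd)
print(y, s, gsd_out)   # a 95% confidence factor is roughly gsd_out**1.96
```

Because the sensitivities and the input variances enter the sum separately, the same numbers can be plotted against each other, which is essentially the graphical interpretation the abstract mentions.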
Optimization of ordered distance sampling
ENVIRONMETRICS, Issue 2 2004. Ryan M. Nielson
Abstract: Ordered distance sampling is a point-to-object sampling method that can be labor-efficient for demanding field situations. An extensive simulation study was conducted to find the optimum number, g, of population members to be encountered from each random starting point in ordered distance sampling. Monte Carlo simulations covered 64 combinations of four spatial patterns, four densities and four sample sizes. Values of g from 1 to 10 were considered for each case. Relative root mean squared error (RRMSE) and relative bias were calculated for each level of g, with RRMSE used as the primary assessment criterion for finding the optimum level of g. A non-parametric confidence interval was derived for the density estimate, and this was included in the simulations to gauge its performance. Superior estimation properties were found for g > 3, but diminishing returns, relative to the potential for increased effort in the field, were found for g > 5. The simulations showed noticeable diminishing returns for more than 20 sampled points. The non-parametric confidence interval performed well for populations with random, aggregate or double-clumped spatial patterns, but rarely came close to target coverage for populations that were regularly distributed. The non-parametric confidence interval presented here is recommended for general use. Copyright © 2004 John Wiley & Sons, Ltd. [source]
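For a population whose spatial pattern really is random (Poisson), the standard ordered-distance density estimator from n random starting points, each measured to its g-th nearest object, is D = (n*g - 1) / (pi * sum of R^2). The sketch below assumes that estimator and imitates the flavour of the simulation study by reporting relative bias and RRMSE for a few values of g on a simulated random population; the settings are arbitrary, edge effects are ignored, and the aggregated and regular patterns examined in the paper are not covered.

```python
import numpy as np

def ordered_distance_estimate(points, objects, g):
    """Ordered distance density estimator: D = (n*g - 1) / (pi * sum(R_g^2))."""
    d = np.sqrt(((points[:, None, :] - objects[None, :, :]) ** 2).sum(axis=2))
    r_g = np.sort(d, axis=1)[:, g - 1]          # distance to the g-th nearest object
    return (len(points) * g - 1) / (np.pi * (r_g ** 2).sum())

def simulate(true_density=100.0, side=1.0, n_points=20, g_values=(1, 3, 5), reps=2000, seed=0):
    """Relative bias and RRMSE of the estimator on a random (Poisson) population."""
    rng = np.random.default_rng(seed)
    out = {}
    for g in g_values:
        est = np.empty(reps)
        for i in range(reps):
            n_obj = rng.poisson(true_density * side ** 2)
            objects = rng.uniform(0, side, size=(n_obj, 2))
            points = rng.uniform(0, side, size=(n_points, 2))
            est[i] = ordered_distance_estimate(points, objects, g)
        rel_bias = (est.mean() - true_density) / true_density
        rrmse = np.sqrt(((est - true_density) ** 2).mean()) / true_density
        out[g] = (rel_bias, rrmse)
    return out

for g, (bias, rrmse) in simulate().items():
    print(f"g = {g}: relative bias = {bias:+.3f}, RRMSE = {rrmse:.3f}")
```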
Racial Differences in Division of Labor in Colonies of the Honey Bee (Apis mellifera)
ETHOLOGY, Issue 2 2002. Charles Brillet
We measured the age at onset of foraging in colonies derived from three races of European honey bees, Apis mellifera mellifera, Apis mellifera caucasica and Apis mellifera ligustica, using a cross-fostering design that involved six unrelated colonies of each race. There was a significant effect of the race of the introduced bees on the age at onset of foraging: cohorts of A. m. ligustica bees showed the earliest onset, regardless of the race of the colony they were introduced to. There also was a significant effect of the race of the host colony: cohorts of bees introduced into mellifera colonies showed the earliest onset of foraging, regardless of the race of the bees introduced. Significant inter-trial differences also were detected, primarily because of a later onset of foraging in trials conducted during the autumn (September-October). These results demonstrate differences among European races of honey bees in one important component of colony division of labor. They also provide a starting point for analyses of the evolution of division of labor under different ecological conditions. [source]

Our Common European Model of Agriculture
EUROCHOICES, Issue 3 2006. Juha Korkeaoja
Future internal and external forces on European agriculture mean that the CAP may look very different after 2013. However large these changes, the CAP will need to retain its common principles based on the European Model of Agriculture (EMA). This became clear at an informal September meeting of EU agriculture ministers in Oulu, arranged by the Finnish Presidency. A strong CAP will be needed in the future but it will have to evolve to meet upcoming challenges. Work on the future CAP will need to start soon and the Oulu meeting may become known as the starting point for those discussions. The CAP will have to provide a reasonable environment for practicing agriculture for very different farmers in very diverse conditions, and facilitate the supply of a wide variety of goods and services to consumers and taxpayers as only truly multifunctional agriculture can. If the CAP can maintain these characteristics it has an important role to play in a future Europe. The meeting in Oulu was also an important milestone for a very special reason: for the first time, all ten New Member States took an active part in the EMA debate with full rights and responsibilities as part of the Union. Once again this underlines the central role of our common European Model of Agriculture. [source]
Iron enhances endothelial cell activation in response to Cytomegalovirus or Chlamydia pneumoniae infection
EUROPEAN JOURNAL OF CLINICAL INVESTIGATION, Issue 10 2006. A. E. R. Kartikasari
Abstract: Background: Chronic inflammation has been implicated in the pathogenesis of inflammatory diseases like atherosclerosis. Several pathogens like Chlamydia pneumoniae (Cp) and cytomegalovirus (CMV) result in inflammation and thereby are potentially atherogenic. Those infections could trigger endothelial activation, the starting point of the atherogenic inflammatory cascade. Considering the role of iron in a wide range of infection processes, the presence of iron may complicate infection-mediated endothelial activation. Materials and methods: Endothelial intercellular adhesion molecule-1 (ICAM-1), vascular cell adhesion molecule-1 (VCAM-1) and endothelial selectin (E-selectin) expression were measured using flow cytometry, as an indication of endothelial activation. Cytotoxicity was monitored using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay. Immunostaining was applied to measure Cp and CMV infectivity to endothelial cells. Results: An increased number of infected endothelial cells in a monolayer population leads to a raised expression of adhesion molecules by the whole cell population, suggesting paracrine interactions. Iron additively up-regulated Cp-induced VCAM-1 expression, whereas it synergistically potentiated Cp-induced ICAM-1 expression. Together with CMV, iron also enhanced ICAM-1 and VCAM-1 expression. These iron effects were observed without modulation of the initial infectivity of both microorganisms. Moreover, the effects of iron could be reversed by intracellular iron chelation or radical scavenging, confirming modulating effects of iron on endothelial activation after infections. Conclusions: Endothelial response towards chronic infections depends on intracellular iron levels. Iron status in populations positive for Cp or CMV infections should be considered as a potential determinant for the development of atherosclerosis. [source]

Creutzfeldt-Jakob disease risk and PRNP codon 129 polymorphism: necessity to revalue current data
EUROPEAN JOURNAL OF NEUROLOGY, Issue 12 2005. E. Mitrová
The polymorphism at codon 129 (M129V) of the prion protein gene (PRNP) is a recognized genetic marker for susceptibility to Creutzfeldt-Jakob disease (CJD) in Caucasians. The distribution of this polymorphism in healthy individuals provides an important starting point for the evaluation of CJD risk in the general population. Early studies of reference population cohorts demonstrated that methionine/valine heterozygosity was the most frequent genotype. These studies were performed in relatively small numbers of control subjects and do not correspond with the findings of more recent investigations. In this study, we present an analysis of the codon M129V distribution in 613 corneal donors, representing one of the largest control groups examined to date. Methionine homozygotes represented 48.1%, valine homozygotes 8.7% and methionine/valine heterozygotes 43.2%. While the age-related difference was not significant, differentiation according to gender showed a significant difference. The observation that methionine homozygotes were the most frequent genotype, the statistically significant difference between genders, and the comparison with results obtained in other countries underline the need to re-evaluate the generally used reference data on M129V, including consideration of gender, age and geographical distribution. [source]
Identification of a Chr 11 quantitative trait locus that modulates proliferation in the rostral migratory stream of the adult mouse brain
EUROPEAN JOURNAL OF NEUROSCIENCE, Issue 4 2010. Anna Poon
Abstract: Neuron production takes place continuously in the rostral migratory stream (RMS) of the adult mammalian brain. The molecular mechanisms that regulate progenitor cell division and differentiation in the RMS remain largely unknown. Here, we surveyed the mouse genome in an unbiased manner to identify candidate gene loci that regulate proliferation in the adult RMS. We quantified neurogenesis in adult C57BL/6J and A/J mice, and 27 recombinant inbred lines derived from those parental strains. We showed that the A/J RMS had greater numbers of bromodeoxyuridine-labeled cells than that of C57BL/6J mice, with similar cell cycle parameters, indicating that the differences in the number of bromodeoxyuridine-positive cells reflected the number of proliferating cells between the strains. AXB and BXA recombinant inbred strains demonstrated even greater variation in the numbers of proliferating cells. Genome-wide mapping of this trait revealed that chromosome 11 harbors a significant quantitative trait locus at 116.75 ± 0.75 Mb that affects cell proliferation in the adult RMS. The genomic regions that influence RMS proliferation did not overlap with genomic regions regulating proliferation in the adult subgranular zone of the hippocampal dentate gyrus. On the contrary, a different, suggestive locus that modulates cell proliferation in the subgranular zone was mapped to chromosome 3 at 102 ± 7 Mb. A subset of genes in the chromosome 11 quantitative trait locus region is associated with neurogenesis and cell proliferation. Our findings provide new insights into the genetic control of neural proliferation and an excellent starting point to identify genes critical to this process. [source]
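In its simplest form, genome-wide mapping of a strain-level trait across recombinant inbred lines is a single-marker scan: regress the strain phenotype means on the genotype at each marker (A/J versus C57BL/6J allele), convert the fit to a LOD score, and set a genome-wide significance threshold by permuting the phenotypes. The sketch below shows that generic scan on simulated genotypes and phenotypes; it is not the mapping pipeline, marker set or threshold criterion used in the study.

```python
import numpy as np

def single_marker_scan(genotypes, phenotype):
    """LOD score of a simple regression of phenotype on each marker.

    genotypes -- (n_strains, n_markers) array coded 0/1 for the two parental alleles
    phenotype -- (n_strains,) array of strain trait means (e.g. BrdU-positive cell counts)
    LOD = (n/2) * log10(RSS_null / RSS_marker) for a one-marker means model.
    """
    n = len(phenotype)
    rss0 = ((phenotype - phenotype.mean()) ** 2).sum()
    lod = np.empty(genotypes.shape[1])
    for j in range(genotypes.shape[1]):
        g = genotypes[:, j]
        pred = np.where(g == 1, phenotype[g == 1].mean(), phenotype[g == 0].mean())
        rss1 = ((phenotype - pred) ** 2).sum()
        lod[j] = (n / 2) * np.log10(rss0 / rss1)
    return lod

def permutation_threshold(genotypes, phenotype, n_perm=1000, alpha=0.05, seed=0):
    """Genome-wide threshold: (1 - alpha) quantile of the max LOD under shuffled phenotypes."""
    rng = np.random.default_rng(seed)
    maxima = [single_marker_scan(genotypes, rng.permutation(phenotype)).max()
              for _ in range(n_perm)]
    return np.quantile(maxima, 1 - alpha)

# simulated example: 27 RI strains, 200 markers, one marker carrying a real effect
rng = np.random.default_rng(42)
geno = rng.integers(0, 2, size=(27, 200))
pheno = 100 + 15 * geno[:, 120] + rng.normal(0, 10, 27)   # marker 120 is the simulated QTL
lod = single_marker_scan(geno, pheno)
thr = permutation_threshold(geno, pheno)
print("peak marker:", lod.argmax(), "LOD:", round(lod.max(), 2), "threshold:", round(thr, 2))
```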