Many Thousands (many + thousand)



Selected Abstracts


The Scalasca performance toolset architecture

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 6 2010
Markus Geimer
Abstract Scalasca is a performance toolset that has been specifically designed to analyze parallel application execution behavior on large-scale systems with many thousands of processors. It offers an incremental performance-analysis procedure that integrates runtime summaries with in-depth studies of concurrent behavior via event tracing, adopting a strategy of successively refined measurement configurations. Distinctive features are its ability to identify wait states in applications with very large numbers of processes and to combine these with efficiently summarized local measurements. In this article, we review the current toolset architecture, emphasizing its scalable design and the role of the different components in transforming raw measurement data into knowledge of application execution behavior. The scalability and effectiveness of Scalasca are then surveyed from experience measuring and analyzing real-world applications on a range of computer systems. Copyright © 2010 John Wiley & Sons, Ltd. [source]


Political abuse of psychiatry

ACTA PSYCHIATRICA SCANDINAVICA, Issue 399 2000
J. L. T. Birley
The abuse of psychiatry in Nazi Germany 60 years ago was the abuse of the 'duty to care'. Its scale was enormous: 300,000 people were sterilized and 100,000 killed in Germany alone, and many thousands further afield, mainly in eastern Europe. This episode occurred in a country with a high reputation for its medicine, including psychiatry, and for its interest in the ethics of medical research. The economic conditions which preceded the violent political upheaval had led to increasing concern about 'the burden on the State' of the mentally ill and disabled. These preoccupations are still with us today. There may still be lessons to be learnt from the Nazi episode. [source]


Landslide inventories and their statistical properties

EARTH SURFACE PROCESSES AND LANDFORMS, Issue 6 2004
Bruce D. Malamud
Abstract Landslides are generally associated with a trigger, such as an earthquake, a rapid snowmelt or a large storm. The landslide event can include a single landslide or many thousands. The frequency-area (or volume) distribution of a landslide event quantifies the number of landslides that occur at different sizes. We examine three well-documented landslide events, from Italy, Guatemala and the USA, each with a different triggering mechanism, and find that the landslide areas for all three are well approximated by the same three-parameter inverse-gamma distribution. For small landslide areas this distribution has an exponential 'roll-over' and for medium and large landslide areas decays as a power-law with exponent -2.40. One implication of this landslide distribution is that the mean area of landslides in the distribution is independent of the size of the event. We also introduce a landslide-event magnitude scale mL = log(NLT), with NLT the total number of landslides associated with a trigger. If a landslide-event inventory is incomplete (i.e. smaller landslides are not included), the partial inventory can be compared with our landslide probability distribution, and the corresponding landslide-event magnitude inferred. This technique can be applied to inventories of historical landslides, inferring the total number of landslides that occurred over geologic time, and how many of these have been erased by erosion, vegetation, and human activity. We have also considered three rockfall-dominated inventories, and find that the frequency-size distributions differ substantially from those associated with other landslide types. We suggest that our proposed frequency-size distribution for landslides (excluding rockfalls) will be useful in quantifying the severity of landslide events and the contribution of landslides to erosion. Copyright © 2004 John Wiley & Sons, Ltd. [source]
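The magnitude scale above lends itself to a one-line computation. The sketch below assumes a base-10 logarithm (conventional for event magnitudes, though the abstract writes only "log"), and adds a hypothetical helper showing how the total count might be back-inferred from a partial inventory once the fraction of landslides above the inventory's size threshold has been read off the assumed frequency-area distribution.

```python
import math

def landslide_event_magnitude(n_landslides_total: int) -> float:
    """Landslide-event magnitude mL = log10(NLT), where NLT is the total
    number of landslides associated with a single trigger.
    (Base-10 logarithm assumed here.)"""
    if n_landslides_total < 1:
        raise ValueError("an event must contain at least one landslide")
    return math.log10(n_landslides_total)

def infer_total_landslides(n_observed: int, frac_above_threshold: float) -> float:
    """If only landslides above some area threshold are inventoried, and the
    assumed frequency-area distribution places a fraction
    `frac_above_threshold` of all landslides above that threshold, the total
    count is estimated as n_observed / frac_above_threshold."""
    return n_observed / frac_above_threshold

# A trigger producing 10,000 landslides has magnitude 4.0
print(landslide_event_magnitude(10_000))  # → 4.0
```

Inferring a magnitude for a partial inventory then reduces to chaining the two helpers: estimate the total from the observed count, then take its logarithm.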


Estimating the minimum distance of large-block turbo codes using iterative multiple-impulse methods

EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 5 2007
Stewart Crozier
A difficult problem for turbo codes is the efficient and accurate determination of the distance spectrum, or even just the minimum distance, for specific interleavers. This is especially true for large blocks, with many thousands of data bits, if the distance is high. This paper compares a number of recent distance estimation techniques and introduces a new approach, based on using specific event impulse patterns and iterative processing, that is specifically tailored to handle long interleavers with high spread. The new method is as reliable as two previous iterative multiple-impulse methods, but with much lower complexity. A minimum distance of 60 has been estimated for a rate 1/3, 8-state turbo code with a dithered relative prime (DRP) interleaver of length K = 65,536. Copyright © 2007 John Wiley & Sons, Ltd. [source]


Monte Carlo probabilistic sensitivity analysis for patient level simulation models: efficient estimation of mean and variance using ANOVA

HEALTH ECONOMICS, Issue 10 2007
Anthony O'Hagan
Abstract Probabilistic sensitivity analysis (PSA) is required to account for uncertainty in cost-effectiveness calculations arising from health economic models. The simplest way to perform PSA in practice is by Monte Carlo methods, which involves running the model many times using randomly sampled values of the model inputs. However, this can be impractical when the economic model takes appreciable amounts of time to run. This situation arises, in particular, for patient-level simulation models (also known as micro-simulation or individual-level simulation models), where a single run of the model simulates the health care of many thousands of individual patients. The large number of patients required in each run to achieve accurate estimation of cost-effectiveness means that only a relatively small number of runs is possible. For this reason, it is often said that PSA is not practical for patient-level models. We develop a way to reduce the computational burden of Monte Carlo PSA for patient-level models, based on the algebra of analysis of variance. Methods are presented to estimate the mean and variance of the model output, with formulae for determining optimal sample sizes. The methods are simple to apply and will typically reduce the computational demand very substantially. Copyright © 2006 John Wiley & Sons, Ltd. [source]
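The ANOVA idea in this abstract can be illustrated with a minimal nested Monte Carlo sketch: the naive variance of per-run means overstates parameter uncertainty, because each run mean also carries patient-level sampling noise, which can be estimated from within-run variances and subtracted. This is only a schematic of the general idea, not the authors' formulae; the function names and parameter values are hypothetical.

```python
import random
import statistics

def run_psa(n_param_draws, n_patients, simulate_patient, sample_params):
    """Nested Monte Carlo PSA sketch: for each sampled parameter set,
    simulate n_patients and record the run mean and within-run variance."""
    run_means, run_vars = [], []
    for _ in range(n_param_draws):
        theta = sample_params()
        outcomes = [simulate_patient(theta) for _ in range(n_patients)]
        run_means.append(statistics.mean(outcomes))
        run_vars.append(statistics.variance(outcomes))
    mean_est = statistics.mean(run_means)
    # ANOVA-style correction: the raw between-run variance includes
    # Monte Carlo noise of size (within-run variance)/n_patients per
    # run mean; subtract its average to isolate parameter uncertainty.
    between = statistics.variance(run_means)
    within = statistics.mean(run_vars)
    param_var_est = max(between - within / n_patients, 0.0)
    return mean_est, param_var_est

# Hypothetical demo: patient outcome = parameter value plus patient noise
random.seed(42)
mean_nb, var_nb = run_psa(
    n_param_draws=50,
    n_patients=200,
    simulate_patient=lambda theta: theta + random.gauss(0.0, 1.0),
    sample_params=lambda: random.gauss(10.0, 2.0),
)
```

The correction is what makes a small number of runs usable: without it, the patient-level noise term would be misread as parameter uncertainty.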


The relative roles of domestication, rearing environment, prior residence and body size in deciding territorial contests between hatchery and wild juvenile salmon

JOURNAL OF APPLIED ECOLOGY, Issue 3 2003
Neil B. Metcalfe
Summary
1. Interactions between captive-reared and wild salmonids are frequent because hatcheries annually rear millions of fish for release in conservation programmes, while many thousands of domesticated fish escape from fish farms. However, the outcome of competition between captive-reared and wild fish is not clear: wild fish may be smaller and less aggressive than hatchery fish, but they have more local experience and a prior residence advantage. Moreover, it is important to know whether any competitive differences are genetic (due to the process of domestication) or due to the rearing environment.
2. We therefore examined the factors influencing competition for feeding territories in juvenile Atlantic salmon. We studied the effect of domestication by using three independent stocks of both domesticated and wild-origin fish, all of which were reared in a common hatchery environment. We also used fish from the same wild stocks that had been living in the wild. Territorial contests were staged in stream tank compartments between pairs of fish differing in origin or rearing environment; the relative importance of body size and prior residence was also assessed.
3. All three stocks of domesticated fish were generally dominant over wild-origin fish when both had been raised in a common hatchery environment. If the wild-origin fish were given a 2-day period of prior residence on the territory, this asymmetry in dominance was reversed. However, domesticated fish did not gain any additional advantage from being prior residents. The relative body size of the two contestants had a negligible effect on contest outcomes.
4. Truly wild fish (i.e. those of wild origin that had also grown up in the wild) were generally dominant over domesticated or wild-origin fish that had been hatchery-reared. Differences in body size between contestants had no effect on the outcome.
5. Synthesis and applications. These results show that, while juvenile farmed Atlantic salmon are inherently more aggressive than wild-origin fish, the hatchery environment reduces their ability to compete for territories with wild resident fish. Rearing salmon in conventional hatcheries for later release into the wild where natural populations already exist may not be a prudent conservation measure; it is preferable to plant eggs or first-feeding fry rather than attempt to 'help' the fish by rearing them through the early life stages. [source]


Moving to suburbia: ontogenetic and evolutionary consequences of life on predator-free islands

JOURNAL OF BIOGEOGRAPHY, Issue 5-6 2002
Daniel T. Blumstein
Aim Many species find themselves isolated from the predators with which they evolved. This situation commonly occurs with island biota, and is similar to moving from the dangerous inner-city to the suburbs. Economic thinking tells us that we should expect costly antipredator behaviour to be lost if it is no longer beneficial. The loss of antipredator behaviour has important consequences for those seeking to translocate or reintroduce individuals from predator-free islands back to the predator-rich mainland, but we have neither a detailed understanding of the mechanisms of loss nor information on the time course of relaxed selection. Some antipredator behaviours are experience-dependent: experience with predators is required for their proper performance. In these cases, antipredator behaviour is lost after only a single generation of isolation, but it should be able to be regained following exposure to predators. Other behaviours may be more 'hard-wired'. The evolutionary loss of antipredator behaviour may occur over as few as several generations, but behaviours may also persist for many thousands of years of predator-free living.
Location Australia and New Zealand.
Methods I discuss the results of a series of studies designed to document the mechanisms and time course of relaxed selection for antipredator behaviour in macropodid marsupials. Controlled studies of visual, acoustic and olfactory predator recognition, as well as field studies of antipredator vigilance, focused on several species of kangaroos and wallabies.
Results Visual predator recognition may be retained following 9500 years of relaxed selection, but olfactory and acoustic predator recognition may have to be learned. Insular populations allow humans to approach closer before fleeing than mainland animals do. Insular species may retain 'group size effects' (the ability to seek safety in numbers) when they are exposed to any predators.
Main conclusions I suggest that the presence of any predators may be an important factor in maintaining functional antipredator behaviour. Managers should pay particular attention to the source and evolutionary history of their population when planning translocations or reintroductions. [source]


Industrial tools for the feature location problem: an exploratory study

JOURNAL OF SOFTWARE MAINTENANCE AND EVOLUTION: RESEARCH AND PRACTICE, Issue 6 2006
Sharon Simmons
Abstract Software engineers who maintain and enhance large systems often encounter the feature location problem: where in the many thousands of lines of code is a particular user feature implemented? Several methods of addressing the problem have been proposed, most of which involve tracing the execution of the system and analyzing the traces. Some supporting academic tools are available. However, companies that depend on the successful evolution of large systems are more likely to use new methods if they are supported by industrial-strength tools of known reliability. This article describes a study performed with Motorola, Inc. to see whether there were any pitfalls in using Metrowerks CodeTEST and Klocwork inSight for feature location on message-passing software similar to systems that Motorola maintains. These two tools were combined with TraceGraph, an academic trace comparison tool. The study identified two main problems. First, some 'glue' code and workarounds were needed to get CodeTEST to generate a trace for an interval of time in which the feature was operating. Second, getting information out of TraceGraph and into inSight was needlessly complicated for a user. However, with a moderate amount of work, the tool combination was effective in locating, understanding and documenting features. Study participants completed these steps in typically 3-4 hours per feature, studying only a few hundred lines out of a 200,000-line system. An ongoing project with Motorola is focused on improving tool integration with the hope of making feature location common practice at Motorola. Copyright © 2006 John Wiley & Sons, Ltd. [source]
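The trace-comparison idea these tools support can be reduced, at its simplest, to a set difference between a feature-exercising run and a baseline run: code executed only when the feature is active is a candidate implementation point. The sketch below is illustrative only; the function and trace names are invented and do not reflect the actual interfaces of CodeTEST, inSight or TraceGraph.

```python
def locate_feature(feature_trace, baseline_trace):
    """Trace-differencing sketch of feature location: functions executed
    while the feature was exercised but absent from a baseline run are
    candidate implementation points for the feature."""
    return sorted(set(feature_trace) - set(baseline_trace))

# Hypothetical traces from a message-passing system
candidates = locate_feature(
    feature_trace=["main", "parse_msg", "route_msg", "encrypt_payload"],
    baseline_trace=["main", "parse_msg", "route_msg"],
)
print(candidates)  # → ['encrypt_payload']
```

Real traces contain timing intervals and repeated calls, which is why the study needed an interval-scoped trace from CodeTEST, but the underlying comparison is this set-difference idea.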


Computational methods in authorship attribution

JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 1 2009
Moshe Koppel
Statistical authorship attribution has a long history, culminating in the use of modern machine learning classification methods. Nevertheless, most of this work suffers from the limitation of assuming a small closed set of candidate authors and essentially unlimited training text for each. Real-life authorship attribution problems, however, typically fall short of this ideal. Thus, following detailed discussion of previous work, three scenarios are considered here for which solutions to the basic attribution problem are inadequate. In the first variant, the profiling problem, there is no candidate set at all; in this case, the challenge is to provide as much demographic or psychological information as possible about the author. In the second variant, the needle-in-a-haystack problem, there are many thousands of candidates for each of whom we might have a very limited writing sample. In the third variant, the verification problem, there is no closed candidate set but there is one suspect; in this case, the challenge is to determine if the suspect is or is not the author. For each variant, it is shown how machine learning methods can be adapted to handle the special challenges of that variant. [source]
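A toy illustration of the verification variant: compare a stylometric profile of the suspect's known writing with the questioned text, and accept authorship only above a similarity threshold. The features (character frequencies), the similarity measure (cosine) and the threshold value here are illustrative stand-ins for the machine learning methods discussed in the article, not the authors' actual approach.

```python
import math
from collections import Counter

def char_profile(text):
    """Normalized character-frequency vector, a simple stylometric feature."""
    counts = Counter(text.lower())
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

def cosine(p, q):
    """Cosine similarity between two sparse frequency vectors."""
    dot = sum(p[c] * q.get(c, 0.0) for c in p)
    norm = (math.sqrt(sum(v * v for v in p.values()))
            * math.sqrt(sum(v * v for v in q.values())))
    return dot / norm

def verify_author(known_text, questioned_text, threshold=0.9):
    """Toy verification: accept the suspect as author if stylistic
    similarity exceeds a threshold (threshold value is illustrative)."""
    return cosine(char_profile(known_text), char_profile(questioned_text)) >= threshold
```

Unlike the closed-set attribution problem, this formulation can answer "none of the above": a questioned text below the threshold is simply rejected rather than forced onto the nearest candidate.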


Rising African cassava production, diseases due to high cyanide intake and control measures

JOURNAL OF THE SCIENCE OF FOOD AND AGRICULTURE, Issue 12 2008
Dulce Nhassico
Abstract Cassava is the staple food of tropical Africa; its production, averaged over 24 countries, increased more than threefold from 1980 to 2005, while the population more than doubled over the same period, compared with a 1.5-fold increase worldwide. Agriculturally, cassava performs very well, but the roots and leaves contain cyanogenic glucosides that are dangerous to human health. These cyanogens sometimes produce acute intoxication leading to death, they exacerbate goitre and cretinism in iodine-deficient regions, cause konzo and are implicated in the occurrence of tropical ataxic neuropathy and stunting of children. Konzo is an irreversible paralysis of the legs with many thousands of cases, mainly amongst children, in Mozambique, Tanzania, Democratic Republic of Congo, Cameroon, Central African Republic and probably other tropical African countries. Attempts to alleviate cassava cyanide toxicity have included the development of an information network and distribution in developing countries of picrate kits, which measure total cyanide in cassava and urinary thiocyanate. A simple wetting method that reduces total cyanide in cassava flour three- to sixfold has been successfully field tested and is being introduced in Mozambique. Transgenic technology shows promise in increasing the rate of loss of cyanide from roots during processing. World health and agricultural bodies should pay more attention to emerging health problems associated with toxicity of cyanogens in cassava. Copyright © 2008 Society of Chemical Industry [source]


Luteal Deficiency and Embryo Mortality in the Mare

REPRODUCTION IN DOMESTIC ANIMALS, Issue 3-4 2001
WR Allen
Four separate components combine to produce the progesterone and biologically active 5α-reduced pregnanes needed to maintain pregnancy in the mare. The primary corpus luteum (CL) is prolonged beyond its cyclical lifespan by the down-regulation of endometrial oxytocin receptors to prevent activation of the luteolytic pathway, and its waning progesterone production is supplemented from day 40 of gestation by the formation of a series of accessory CL which develop in the maternal ovaries as a result of the gonadotrophic actions of pituitary FSH and the equine chorionic gonadotrophin (eCG). From around day 100 the allantochorion secretes progesterone and progestagens directly to the endometrium and underlying myometrium and, in the last month of gestation, the enlarging foetal adrenal gland secretes appreciable quantities of pregnenolone, which is also utilized by the placenta to synthesize progestagens. Between 10 and 15% of mares undergo foetal death and abortion at some time in gestation, and the majority of these losses occur during the first 40 days of gestation when the primary CL is the sole source of progesterone. Yet, all the available evidence suggests that untoward luteolysis is not common in this period and the losses that do occur have other underlying causes. Beyond day 40 the secondary CL receive powerful luteotrophic support from eCG and from day 80-100 until term the supply organ (placenta) and target tissues (endometrium and myometrium) are in direct contact with each other over their entire surface.
In the face of this interlocking and failsafe system for progestagen production throughout pregnancy, and despite a paucity of evidence that a deficiency of progesterone production is a cause of pregnancy loss in the mare, it is surprising, and worrying, that annually many thousands of pregnant mares throughout the world are given exogenous progestagen therapy during part or all of their gestation as a form of preventative insurance against the possibility of pregnancy failure. Basic investigative research is required urgently to validate or debunk the practice. [source]


Subaru tackles a knotty problem

ASTRONOMY & GEOPHYSICS, Issue 4 2009
Article first published online: 20 JUL 200
Near-infrared observations of the Helix Nebula show that the comet-shaped knots found there are more abundant than previously thought, with many thousands picked out by the Subaru telescope. [source]


Spatial Modeling of Wetland Condition in the U.S. Prairie Pothole Region

BIOMETRICS, Issue 2 2002
J. Andrew Royle
Summary. We propose a spatial modeling framework for wetland data produced from a remote-sensing-based waterfowl habitat survey conducted in the U.S. Prairie Pothole Region (PPR). The data produced from this survey consist of the area containing water on many thousands of wetland basins (i.e., prairie potholes). We propose a two-state model containing wet and dry states. This model provides a concise description of wet probability, i.e., the probability that a basin contains water, and the amount of water contained in wet basins. The two model components are spatially linked through a common latent effect, which is assumed to be spatially correlated. Model fitting and prediction are carried out using Markov chain Monte Carlo methods. The model primarily facilitates mapping of habitat conditions, which is useful in varied monitoring and assessment capacities. More importantly, the predictive capability of the model provides a rigorous statistical framework for directing management and conservation activities by enabling characterization of habitat structure at any point on the landscape. [source]
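The two-state structure described above can be sketched as a generative simulation in which a single latent effect drives both components: the probability that a basin is wet, and the water area if it is. The logistic and log-normal forms and all parameter values below are assumptions for illustration, not the paper's fitted model.

```python
import math
import random

def simulate_basin(z, alpha=0.0, beta=1.0, mu=2.0, sigma=0.5, rng=random):
    """Two-state sketch: a shared latent effect z raises both the
    probability that a basin is wet (logistic link) and, if wet, the
    expected water area (log-normal). Returns 0.0 for a dry basin."""
    p_wet = 1.0 / (1.0 + math.exp(-(alpha + beta * z)))
    if rng.random() < p_wet:
        return math.exp(rng.gauss(mu + z, sigma))  # wet: positive area
    return 0.0  # dry state

# Hypothetical landscape: basins with spatially varying latent effect
random.seed(7)
areas = [simulate_basin(z) for z in (-1.0, 0.0, 1.0, 2.0)]
```

In the paper the latent effect is itself spatially correlated across basins and is inferred by MCMC; the sketch treats z as given to show only how one latent quantity couples the wet/dry state to the water amount.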