Data Used



Selected Abstracts


Borrower- and Mortgage-Related Factors Associated With FHA Foreclosures

FAMILY & CONSUMER SCIENCES RESEARCH JOURNAL, Issue 3 2006
Lucy Delgadillo
The study identifies which household factors contribute to the likelihood of foreclosure by responding to the question: What borrower-related and mortgage-related factors are correlated with home foreclosure? Data used are from an inventory of active and foreclosed FHA homes in Utah from the years 2000 to 2001. The sample consisted of 179 cases. The borrower-related factors included age of borrower, job tenure, self-employment status, race of borrower, first-time homebuyer status, number of dependents, homeownership counseling, and borrower's income. The mortgage-related factors included loan-to-value ratio, payment-to-income (front-end) ratio, back-end ratio, gift amount, size of down payment, and interest rate. Results revealed that race, front-end ratio, and interest rate were statistically significant factors associated with the probability of foreclosure. The multiple interaction regression model indicated that the interaction between race and front-end ratio was statistically significant, which suggests that the effect of front-end ratio differs between Whites and non-Whites. [source]


Latitudinal height couplings between single tropopause and 500 and 100 hPa within the Southern Hemisphere

INTERNATIONAL JOURNAL OF CLIMATOLOGY, Issue 4 2010
Adrián E. Yuchechen
Abstract In order to provide further insights into the relationships between the tropopause and different mandatory levels, this paper discusses the coupling between standardized tropopause height anomalies (STHAs) and standardized 500-hPa and 100-hPa height anomalies (S5HAs and S1HAs, respectively) within the 'climatic year' for three sets of upper-air stations located approximately along 20°S, 30°S and 45°S. Data used in this research consist of a radiosonde database spanning the period 1973–2007. The mandatory levels are assumed to be included in each radiosonde profile. The tropopause, on the other hand, is calculated from the significant levels available for each sounding using the lapse rate definition. After applying a selection procedure, a basic statistical analysis combined with Fourier analysis is carried out in order to build up the standardized variables. Empirical orthogonal functions (EOFs) in S-mode are used to obtain the normal modes of oscillation as well as their time evolution, for the STHA/S5HA and STHA/S1HA couplings separately, at the aforementioned latitudes. Overall, there are definite cycles in the time evolution associated with each EOF structure at all three latitudes, with the semi-annual wave playing the most important role in most cases. Nevertheless, 20°S seems to be the only latitude driven by diabatic heating cycles in the middle atmosphere. Indeed, EOF1 at this latitude has a semi-annual behaviour and seems to be strongly influenced by the seasonality of tropical convection; the convectively driven release of latent heat in the middle troposphere apparently affects the time evolution of the EOF1 structure. By contrast, the vertical propagation of planetary waves is proposed as a possible explanation for the EOF1 and EOF2 behaviour at latitudes beyond 20°S, given the close connection between the semi-annual oscillation (SAO) and the reversal of the zonal wind direction.
Copyright © 2009 Royal Meteorological Society [source]
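An S-mode EOF analysis of this kind reduces to an eigendecomposition of the inter-station covariance matrix of the standardized anomaly time series. As an illustration only (the study's actual processing is far more involved), here is a minimal pure-Python sketch that extracts the leading EOF by power iteration; real analyses would use a full eigendecomposition or SVD:

```python
def covariance_matrix(X):
    """Covariance of the columns of X (rows = observation times,
    columns = stations); assumes each column is already standardized
    to zero mean, as the anomalies above are."""
    n, p = len(X), len(X[0])
    return [[sum(X[t][i] * X[t][j] for t in range(n)) / (n - 1)
             for j in range(p)] for i in range(p)]

def leading_eof(C, iters=200):
    """Leading EOF (dominant eigenvector of covariance matrix C),
    found by power iteration."""
    p = len(C)
    v = [1.0] + [0.0] * (p - 1)  # arbitrary non-degenerate start vector
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(p)) for i in range(p)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Two perfectly anti-correlated toy stations: the leading EOF loads
# on them with equal magnitude and opposite signs.
C = covariance_matrix([[1.0, -1.0], [-1.0, 1.0], [2.0, -2.0], [-2.0, 2.0]])
print(leading_eof(C))
```

The EOF time evolution (principal component) is then the projection of each observation onto this vector, which is the series the abstract subjects to Fourier analysis.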


Relationship Between Frailty and Cognitive Decline in Older Mexican Americans

JOURNAL OF AMERICAN GERIATRICS SOCIETY, Issue 10 2008
Rafael Samper-Ternent MD
OBJECTIVES: To examine the association between frailty status and change in cognitive function over time in older Mexican Americans. DESIGN: Data used were from the Hispanic Established Population for the Epidemiological Study of the Elderly. SETTING: Five southwestern states: Texas, New Mexico, Colorado, Arizona, and California. PARTICIPANTS: One thousand three hundred seventy noninstitutionalized Mexican-American men and women aged 65 and older with a Mini-Mental State Examination (MMSE) score of 21 or higher at baseline (1995/96). MEASUREMENTS: Frailty, defined as three or more of the following components: unintentional weight loss of more than 10 pounds, weakness (lowest 20% in grip strength), self-reported exhaustion, slow walking speed (lowest 20% in 16-foot walk time in seconds), and low physical activity level (lowest 20% on Physical Activity Scale for the Elderly score). Information about sociodemographic factors, MMSE score, medical conditions (stroke, heart attack, diabetes mellitus, arthritis, cancer, and hypertension), depressive symptoms, and visual impairment was obtained. RESULTS: Of the 1,370 subjects, 684 (49.9%) were not frail, 626 (45.7%) were prefrail (1–2 components), and 60 (4.4%) were frail (≥3 components) in 1995/96. Using general linear mixed models, it was found that frail subjects had greater cognitive decline over 10 years than not frail subjects (estimate=−0.67, standard error=0.13; P<.001). This association remained statistically significant after controlling for potential confounding factors. CONCLUSION: Frail status in older Mexican Americans with MMSE scores of 21 or higher at baseline is an independent predictor of MMSE score decline over a 10-year period. Future research is needed to establish pathophysiological components that can clarify the relationship between frailty and cognitive decline. [source]
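The three-level frailty classification used above (not frail = 0 components, prefrail = 1–2, frail = 3 or more) is easy to express as a small scoring function. A sketch assuming the five components have already been dichotomized against the study's cut-offs; the variable names are illustrative:

```python
def classify_frailty(components):
    """Classify frailty from five binary indicators following the
    abstract's counts: 0 = not frail, 1-2 = prefrail, >=3 = frail."""
    count = sum(bool(v) for v in components.values())
    if count == 0:
        return "not frail"
    if count <= 2:
        return "prefrail"
    return "frail"

# Illustrative subject with three components present
subject = {
    "weight_loss": False,   # unintentional loss of >10 pounds
    "weakness": True,       # lowest 20% in grip strength
    "exhaustion": True,     # self-reported
    "slow_walking": True,   # lowest 20% in 16-foot walk time
    "low_activity": False,  # lowest 20% on PASE score
}
print(classify_frailty(subject))  # frail
```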


Early Markers of Prolonged Hospital Stays in Older People: A Prospective, Multicenter Study of 908 Inpatients in French Acute Hospitals

JOURNAL OF AMERICAN GERIATRICS SOCIETY, Issue 7 2006
Pierre-Olivier Lang MD
OBJECTIVES: To identify early markers of prolonged hospital stays in older people in acute hospitals. DESIGN: A prospective, multicenter study. SETTING: Nine hospitals in France. PARTICIPANTS: One thousand three hundred six patients aged 75 and older were hospitalized through an emergency department (Sujet Âgé Fragile: Évaluation et suivi (SAFEs); 'Frail Elderly Subjects: Evaluation and Follow-up'). MEASUREMENTS: Data used in a logistic regression were obtained through a gerontological evaluation of inpatients, conducted in the first week of hospitalization. The center effect was considered in two models, as a random and as a fixed effect. Two limits were used to define a prolonged hospital stay. The first was fixed at 30 days. The second was adjusted for Diagnosis Related Groups according to the French classification (f-DRG). RESULTS: Nine hundred eight of the 1,306 hospital stays that made up the cohort were analyzed. Two centers (n=298) were excluded because of a large volume of missing f-DRGs. Two-thirds of subjects in the cohort analyzed were women (64%), with a mean age of 84. One hundred thirty-eight stays (15%) lasted more than 30 days; 46 (5%) were prolonged beyond the f-DRG-adjusted limit. No sociodemographic variables seemed to influence the length of stay, regardless of the limit used. For the 30-day limit, only cognitive impairment (odds ratio (OR)=2.2, 95% confidence interval (CI)=1.2–4.0) was identified as a marker for prolongation. f-DRG adjustment revealed other clinical markers. Walking difficulties (OR=2.6, 95% CI=1.2–16.7), fall risk (OR=2.5, 95% CI=1.7–5.3), cognitive impairment (OR=7.1, 95% CI=2.3–49.9), and malnutrition risk (OR=2.5, 95% CI=1.7–19.6) were found to be early markers for prolonged stays, although dependence level and its evolution, estimated using the Katz activity of daily living (ADL) index, were not identified as risk factors.
CONCLUSION: When the generally recognized parameters of frailty are taken into account, a set of simple items (walking difficulties, risk of fall, risk of malnutrition, and cognitive impairment) enables a predictive approach to the length of stay of elderly patients hospitalized under emergency circumstances. Katz ADLs were not among the early markers identified. [source]
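Odds ratios like those reported above come from exponentiating logistic-regression coefficients, with confidence limits derived from each coefficient's standard error. A sketch of that conversion; the coefficient and standard error below are illustrative values chosen to reproduce the cognitive-impairment OR of 2.2 (95% CI 1.2–4.0), not the study's published estimates:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Exponentiate a logistic-regression coefficient and its Wald
    confidence limits to get an odds ratio with a 95% CI."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Illustrative coefficient and standard error (not the study's output)
or_est, ci_lo, ci_hi = odds_ratio_ci(beta=0.79, se=0.31)
print(f"OR={or_est:.1f}, 95% CI={ci_lo:.1f}-{ci_hi:.1f}")  # OR=2.2, 95% CI=1.2-4.0
```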


Deinstitutionalization in Ontario, Canada: Understanding Who Moved When

JOURNAL OF POLICY AND PRACTICE IN INTELLECTUAL DISABILITIES, Issue 3 2010
Lynn Martin
Abstract The results of deinstitutionalization are well known, but less information is available on the process of deinstitutionalization itself. This study sought to understand the process of deinstitutionalization in Ontario by examining the timing of individuals' transitions to the community and the characteristics of individuals who experienced a change in the timing of their move. Data used were based on census information collected between 2005 and 2008 using the interRAI Intellectual Disability assessment instrument on all persons residing in Ontario's specialized institutions. Analyses of characteristics at baseline by the anticipated transition year revealed the existence of significant differences between the groups. Comparisons of anticipated and actual transition years revealed that about 40% of individuals experienced a change in their transition year. Age, bladder incontinence, and number of medical diagnoses were associated with increased likelihood of moving earlier than anticipated, whereas family contact, presence of a strong and supportive relationship with family, psychiatric diagnoses, destructive behavior, and aggression were associated with higher likelihood of moving later. Careful attention to characteristics and level of need was paid at the onset of the deinstitutionalization planning process; however, the timing of transitions to the community was not "set in stone." In the future, studies should not only examine the individual's outcomes and quality of life in the community, but also should seek to qualitatively describe the individual's and family's experiences of the transition process. This type of information is invaluable for other jurisdictions in which deinstitutionalization is planned or under way. [source]


Adaptation of the Mayo primary biliary cirrhosis natural history model for application in liver transplant candidates

LIVER TRANSPLANTATION, Issue 4 2000
W. Ray Kim MD
The Mayo natural history model has been used widely as a tool to estimate prognosis in patients with primary biliary cirrhosis (PBC), particularly liver transplant candidates. We present an abbreviated model in which a tabular method is used to approximate the risk score, which may be incorporated in the minimal listing criteria for liver transplant candidates. Data used in the development and validation of the original Mayo model were derived from 418 patients with well-characterized PBC. To construct an abbreviated risk score in a format similar to that of the Child-Turcotte-Pugh score, 1 to 3 cut-off criteria were determined for each variable, namely age (0 points for <38, 1 for 38 to 62, and 2 for ≥63 years), bilirubin (0 points for <1, 1 for 1 to 1.7, 2 for 1.7 to 6.4, and 3 for >6.4 mg/dL), albumin (0 points for >4.1, 1 for 2.8 to 4.1, and 2 for <2.8 g/dL), prothrombin time (1 point for normal and 2 for prolonged), and edema (0 points for absent and 1 for present). The intervals between these criteria were chosen in a way to enable a meaningful classification of patients according to their risk for death. This score is highly correlated with the original risk score (r = 0.93; P < .01). The Kaplan-Meier estimate at 1 year was 90.6% in patients with a score of 6. The abbreviated risk score is a convenient method to quickly estimate the risk score in patients with PBC. An abbreviated score of 6 may be consistent with the current minimal listing criteria in liver transplant candidates. [source]
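The tabular scoring rules quoted above translate directly into code. A sketch of the abbreviated score, assuming points are assigned at the stated cut-offs; the abstract's intervals overlap at their boundaries (e.g. 1 to 1.7 versus 1.7 to 6.4 mg/dL), so boundary handling here is an assumption:

```python
def mayo_abbreviated_score(age, bilirubin, albumin, pt_prolonged, edema):
    """Abbreviated Mayo PBC risk score using the cut-offs listed above.
    Assignment at the exact boundary values is an assumption, since the
    published intervals overlap at their endpoints."""
    score = 0
    # Age (years): 0 for <38, 1 for 38-62, 2 for >=63
    score += 0 if age < 38 else (1 if age <= 62 else 2)
    # Bilirubin (mg/dL): 0 for <1, 1 for 1-1.7, 2 for 1.7-6.4, 3 for >6.4
    if bilirubin < 1:
        score += 0
    elif bilirubin <= 1.7:
        score += 1
    elif bilirubin <= 6.4:
        score += 2
    else:
        score += 3
    # Albumin (g/dL): 0 for >4.1, 1 for 2.8-4.1, 2 for <2.8
    score += 0 if albumin > 4.1 else (1 if albumin >= 2.8 else 2)
    # Prothrombin time: 1 for normal, 2 for prolonged
    score += 2 if pt_prolonged else 1
    # Edema: 0 for absent, 1 for present
    score += 1 if edema else 0
    return score

print(mayo_abbreviated_score(age=55, bilirubin=2.0, albumin=3.5,
                             pt_prolonged=False, edema=False))  # 5
```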


Comparison of conventional FASTA identity searches with the 80 amino acid sliding window FASTA search for the elucidation of potential identities to known allergens

MOLECULAR NUTRITION & FOOD RESEARCH (FORMERLY NAHRUNG/FOOD), Issue 8 2007
Gregory S. Ladics
Abstract Food and Agriculture Organization/World Health Organization (FAO/WHO) recommended that IgE cross-reactivity between a transgenic protein and an allergen be considered when there is ≥35% identity over a sliding "window" of 80 amino acids. Our objective was to evaluate the false positive and negative rates observed using the FAO/WHO versus conventional FASTA analyses. Data used as queries against allergen databases and analyzed to assess false positive rates included: 1102 hypothetical corn ORFs; 907 randomly selected proteins; 89 randomly selected corn proteins; and 97 corn seed proteins. To evaluate false negative rates of both methods, Bet v 1a along with several cross-reacting fruit/vegetable allergens and a bean α-amylase inhibitor were used as queries. Both methods were also evaluated for their ability to detect a putative nonallergenic test protein containing a sequence derived from Ara h 1. FASTA versions 3.3t0 and 3.4t25 were utilized. Data indicate a conventional FASTA analysis produced fewer false positives and equivalent false negative rates. E scores derived from conventional FASTA were generally more significant than those from the sliding-window approach. Results suggest a conventional FASTA search provides more relevant identity to the query protein and better reflects the functional similarities between proteins. It is recommended that the conventional FASTA analysis be conducted to compare identities of proteins to allergens. [source]
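The FAO/WHO criterion slides an 80-amino-acid window along the query and flags any window with at least 35% identity to an allergen. The actual procedure uses FASTA alignments within each window; the following is a simplified sketch using ungapped window-to-window identity, meant only to illustrate the windowing logic, not to substitute for FASTA:

```python
def windows(seq, size=80):
    """All contiguous subsequences of the given length."""
    return [seq[i:i + size] for i in range(len(seq) - size + 1)]

def percent_identity(a, b):
    """Ungapped percent identity between two equal-length sequences."""
    matches = sum(x == y for x, y in zip(a, b))
    return 100.0 * matches / len(a)

def flags_fao_who(query, allergen, size=80, threshold=35.0):
    """True if any query window reaches the identity threshold against
    any same-length allergen window (ungapped comparison)."""
    return any(percent_identity(q, w) >= threshold
               for q in windows(query, size)
               for w in windows(allergen, size))
```

With toy sequences and `size=4`, `flags_fao_who("ABCDEF", "ABXDEF", size=4)` is True, since the first window pair shares 75% identity; the default `size=80, threshold=35.0` mirrors the FAO/WHO rule.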


Using Affective Attitudes to Identify Christian Fundamentalism: The Ten Commandments Judge and Alabama Politics

POLITICS & POLICY, Issue 5 2010
THOMAS SHAW
This article develops a new and useful indicator to aid in identifying Christian fundamentalism. "Affect" measures individuals' affective attitudes toward the role of Christian fundamentalists in Alabama politics. We demonstrate the analytic utility of this indicator by quantitatively comparing it to other more traditional and direct measures of fundamentalism, such as belief in the Bible as the literal word of God, self-identification as a fundamentalist, and whether one considers oneself to be "born again." We then compare the utility of these different measures of Christian fundamentalism in explaining electoral support for the archetype Christian fundamentalist political candidate, the "Ten Commandments Judge" Roy Moore, former chief justice of the Alabama Supreme Court. We find that our affect indicator compares well to other measures of fundamentalism and actually outperforms all of the more traditional measures in explaining support for Moore. Data used in the analysis come from a public opinion poll conducted by the USA Polling Group in April 2006. [source]


Evaluation of gender differences in caregiver burden in home care: Nagoya Longitudinal Study of the Frail Elderly (NLS-FE)

PSYCHOGERIATRICS, Issue 3 2006
Yoshihisa HIRAKAWA
Abstract Background: Japan is presently experiencing a growth in the number of male caregivers, and this situation has given rise to some concerns over gender differences. Previous studies have suggested that there are gender differences in caregiver burden in home care; however, it is still unclear whether or not such differences exist. We therefore conducted this study to attain a better understanding of the Japanese male caregiver burden in home care, using data from the Nagoya Longitudinal Study of Frail Elderly (NLS-FE). Methods: NLS-FE is a large prospective study of community-dwelling elderly persons eligible for public long-term care insurance who live in Nagoya city and use the services of the Nagoya City Health Care Service Foundation for Older People, which comprises 17 visiting nursing stations and corresponding care-managing centers; data were collected from November to December 2003. Data used in this study included the Japanese version of the Zarit Caregiver Burden Interview, caregivers' and dependents' characteristics, and the caregiving situation. The differences in dependent and caregiver characteristics between male and female caregiver groups were assessed using the χ²-test for categorical variables or the unpaired t-test for continuous variables. Multiple logistic regression was used to examine the association between dependent and caregiver characteristics and caregiver burden. Results: A total of 399 male caregivers and 1193 female caregivers were included in our analysis. Before and after controlling for baseline variables, we did not detect a difference between male and female caregivers with respect to caregiver burden. Conclusion: Our study suggests that differences in caregiver burden may not necessarily exist between male and female caregivers in Japan. [source]


Effects of shear sheltering in a stable atmospheric boundary layer with strong shear

THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 596 2004
Ann-Sofi Smedman
Abstract Data from two marine field experiments in the Baltic Sea with stable stratification have been analysed. The purpose was to test the concept of 'detached' or 'top-down' eddies and the 'shear-sheltering' mechanism in the presence of a low-level wind speed maximum in the atmosphere. Data used include turbulence and profile measurements on two 30 m towers and concurrent wind profiles throughout the boundary layer obtained from pilot-balloon soundings. Measurements show that large eddies are suppressed when there is a low-level wind speed maximum present somewhere in the layer 40–300 m above the water surface and when the stratification is slightly stable. The effect is seen both in normalized standard deviations of the velocity components and in corresponding component spectra. In previous work it was argued that the relatively large eddies, which dominate the low-wavenumber spectra in measurements in the surface layer, are detached or top-down eddies generated higher up in the boundary layer that interact with the surface layer. The low-level wind maximum introduces a distinct layer with strong vorticity which, according to the shear-sheltering hypothesis, prevents these eddies from penetrating downwards. In the limit of the wind maximum occurring at a very low height (less than about 100 m), the usual turbulence statistics characteristic of the 'canonical' boundary layer are found. Combining all the statistics, it is demonstrated that the wavelength of maximum spectral energy is locally related to a turbulence length-scale, which shows that for values of the Richardson number of unity or less the effect of the local wind gradient is greater than that of static stability. The reduction of length-scale with the strength of a low-level wind maximum explains the observed reduction (by a factor of two) of the turbulent flux of sensible heat at the surface.
This result indicates that the shear-sheltering mechanism is likely to play an important role in the turbulent exchange process at the surface in sea areas where low-level wind maxima are a frequently occurring phenomenon, such as the Baltic and other large water bodies surrounded by landmasses. Copyright © 2004 Royal Meteorological Society [source]


Relationship between the lactation curve and udder disease incidence in different lactation stages in first-lactation Holstein cows

ANIMAL SCIENCE JOURNAL, Issue 6 2009
Takeshi YAMAZAKI
ABSTRACT We examined the relationships between the shape of the first-parity lactation curve and udder disease incidence at different stages of lactation in 538 Holstein cows. Data used were first-parity daily milk yields and treatment records. Each cow was classified according to whether or not it had had udder disease at least once over the whole lactation period or in one of three stages within the lactation period. We then examined the differences in the shapes of the lactation curves between the disease incidence and non-incidence groups in each stage. Cows that had high rates of increase in milk yield and high milk yields in early lactation were predisposed to udder disease afterwards. Cows with high milk production over a long period but with low lactation persistency were predisposed to udder disease after the peak of lactation. There was no difference in total milk yield between incidence and non-incidence groups in all stages, suggesting that, for a comparable level of lactation, cows without udder diseases have flatter lactation curves. [source]


Complementary use of natural and artificial wetlands by waterbirds wintering in Doñana, south-west Spain

AQUATIC CONSERVATION: MARINE AND FRESHWATER ECOSYSTEMS, Issue 7 2009
Janusz Kloskowski
Abstract 1. The Doñana wetland complex (SW Spain) holds more wintering waterfowl than any other wetland in Europe. 2. This study focused on the use made by 12 common waterbirds (eight ducks and four waders) of the natural seasonal marshes in Doñana National Park (DNP) and the adjacent Veta la Palma (VLP) fish ponds created in the early 1990s. Data used were from aerial and terrestrial surveys collected between October and February during six consecutive winters from 1998/99 to 2003/04. Changes in the distribution of each bird taxon were related to changes in the extent of flooded marshes within DNP. Up to 295,000 ducks were counted in VLP during dry periods, and up to 770,000 in DNP when it was flooded. 3. The timing and extent of flooding in DNP was highly variable, but there was a consistent pattern in which ducks concentrated in VLP during dry months and winters but redistributed to DNP as more of it was flooded. This refuge effect was also strong for black-tailed godwits Limosa limosa, but much less so for other waders. Waders feed mainly on invertebrates, and invertebrate biomass in VLP was found to be higher than in DNP. Ducks feed mainly on seeds and plant material, which are more abundant in DNP when flooded. 4. When water levels in DNP were stable over the course of a winter, or controlled for in multivariate models, the numbers of ducks at VLP declined over time, probably due to reduced availability of plant foods. In contrast, numbers of waders at VLP were more stable, and their invertebrate prey became more abundant over time, at least in the winter of 2003/04. 5. In this extremely important wetland complex, the values of natural and artificial wetlands for wintering waterbirds are complementary, providing suitable habitat for different species and for different conditions in a highly variable Mediterranean environment. Copyright © 2009 John Wiley & Sons, Ltd. [source]


At-home meat consumption in China: an empirical study

AUSTRALIAN JOURNAL OF AGRICULTURAL & RESOURCE ECONOMICS, Issue 4 2009
Hongbo Liu
The remarkable economic changes occurring within China since 1978 have resulted in a striking alteration in food consumption patterns, and one marked change is the increasing consumption of meat. Given China's large population, a small percentage change in per capita meat consumption could lead to a dramatic impact on the production and trade of agricultural products. Such changes have major implications for policy makers and food marketers. This paper concentrates on meat consumption patterns in the home in China. A censored linear approximate almost ideal demand system model was employed in the study, and major economic parameters were estimated for different meat items. Data used in this study were collected from two separate consumer surveys, one urban and one rural, conducted in 2005. [source]


Genetic Data and the Listing of Species Under the U.S. Endangered Species Act

CONSERVATION BIOLOGY, Issue 5 2007
SYLVIA M. FALLON
Abstract: Genetic information is becoming an influential factor in determining whether species, subspecies, and distinct population segments qualify for protection under the U.S. Endangered Species Act. Nevertheless, there are currently no standards or guidelines that define how genetic information should be used by the federal agencies that administer the act. I examined listing decisions made over a 10-year period (February 1996–February 2006) that relied on genetic information. There was wide variation in the genetic data used to inform listing decisions in terms of which genomes (mitochondrial vs. nuclear) were sampled and the number of markers (or genetic techniques) and loci evaluated. In general, whether the federal agencies identified genetic distinctions between putative taxonomic units or populations depended on the type and amount of genetic data. Studies that relied on multiple genetic markers were more likely to detect distinctions, and those organisms were more likely to receive protection than studies that relied on a single genetic marker. Although the results may, in part, reflect the corresponding availability of genetic techniques over the given time frame, the variable use of genetic information for listing decisions has the potential to misguide conservation actions. Future management policy would benefit from guidelines for the critical evaluation of genetic information to list or delist organisms under the Endangered Species Act. [source]


Earnings Management and Corporate Governance in Asia's Emerging Markets

CORPORATE GOVERNANCE, Issue 5 2007
Chung-Hua Shen
This paper studies the impacts of corporate governance on earnings management. We use firm-level governance data, taken from Credit Lyonnais Securities Asia (CLSA), for nine Asian countries, in addition to the country-level governance data used in past studies. Our conclusions are as follows. First, firms with good corporate governance tend to conduct less earnings management. Second, there is a size effect for earnings smoothing: large firms are prone to conduct earnings smoothing, but good corporate governance can mitigate the effect on average. Third, there is a turning point for the leverage effect: when the governance index is large, a leverage effect exists; otherwise, a reverse leverage effect exists. This suggests that a highly leveraged firm with poor governance is prone to be scrutinised closely and thus finds it harder to fool the market by manipulating earnings. Fourth, firms with higher growth (lower earnings yield) are prone to engage in earnings smoothing and earnings aggressiveness, but good corporate governance can mitigate the effect. Finally, firms in countries with stronger anti-director rights tend to exhibit stronger earnings smoothing. This counter-intuitive result differs from Leuz et al. (2003). [source]


A question for DSM-V: which better predicts persistent conduct disorder , delinquent acts or conduct symptoms?

CRIMINAL BEHAVIOUR AND MENTAL HEALTH, Issue 1 2002
Jeffrey D. Burke PhD
Background Conduct disorder (CD), a psychiatric index of antisocial behaviour, shares similarities with delinquency, a criminological index. This study sought to examine which factors in childhood predict a repeated diagnosis of CD in adolescence, and whether self-reported delinquent acts enhance the utility of symptoms of CD in predicting later persistent CD. Method Longitudinal data used in this paper come from a clinic-referred sample of 177 boys, along with their parents and teachers, who were assessed using a structured clinical interview. The boys also reported on their delinquent behaviours, as well as a broad range of other family and life events. Results Before age 13, 77 boys met criteria for CD according to their parent, 69 according to their own report, and 36 reported three or more delinquent acts. Forty-eight boys (29%) met criteria for CD three or more times between 13 and 17. In childhood, delinquency overlapped, but was distinct from CD. Both were present in 28 cases, while 41 cases had CD without delinquency, and eight had delinquency without CD. When tested as predictors of later persistent CD, child-reported CD was the strongest predictor of later persistent CD, but self-reported delinquency was stronger than parent-reported CD. A final model of significant predictors included child-reported CD, delinquency, poor child communication with parents, and maternal prenatal smoking. Conclusions It appears that delinquency does add uniquely to the prediction of persistent CD. It may be useful to expand the diagnostic criteria for CD accordingly. Copyright © 2002 Whurr Publishers Ltd. [source]


Predicting species distributions from museum and herbarium records using multiresponse models fitted with multivariate adaptive regression splines

DIVERSITY AND DISTRIBUTIONS, Issue 3 2007
Jane Elith
ABSTRACT Current circumstances (the majority of species distribution records exist as presence-only data, e.g. from museums and herbaria, and there is an established need for predictions of species distributions) mean that scientists and conservation managers seek to develop robust methods for using these data. Such methods must, in particular, accommodate the difficulties caused by lack of reliable information about sites where species are absent. Here we test two approaches for overcoming these difficulties, analysing a range of data sets using the technique of multivariate adaptive regression splines (MARS). MARS is closely related to regression techniques such as generalized additive models (GAMs) that are commonly and successfully used in modelling species distributions, but has particular advantages in its analytical speed and the ease of transfer of analysis results to other computational environments such as a Geographic Information System. MARS also has the advantage that it can model multiple responses, meaning that it can combine information from a set of species to determine the dominant environmental drivers of variation in species composition. We use data from 226 species from six regions of the world, and demonstrate the use of MARS for distribution modelling using presence-only data. We test whether (1) the type of data used to represent absence or background and (2) the signal from multiple species affect predictive performance, by evaluating predictions at completely independent sites where genuine presence–absence data were recorded. Models developed with absences inferred from the total set of presence-only sites for a biological group, and using simultaneous analysis of multiple species to inform the choice of predictor variables, performed better than models in which species were analysed singly, or in which pseudo-absences were drawn randomly from the study area.
The methods are fast, relatively simple to understand, and useful for situations where data are limited. A tutorial is included. [source]
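MARS builds its regression surface from "hinge" basis functions of the form max(0, x − t). A minimal sketch of that idea, fitting a mirrored pair of hinges at a known knot by ordinary least squares (the knot search and pruning that full MARS performs are omitted, and all names and values are illustrative):

```python
import numpy as np

def hinge_design(x, knot):
    """Design matrix: intercept plus the two mirrored hinge functions at `knot`."""
    x = np.asarray(x, dtype=float)
    return np.column_stack([
        np.ones_like(x),
        np.maximum(0.0, x - knot),   # right hinge, active for x > knot
        np.maximum(0.0, knot - x),   # left hinge, active for x < knot
    ])

# Synthetic response with a kink at x = 2: y = |x - 2|
x = np.linspace(0.0, 4.0, 41)
y = np.abs(x - 2.0)

X = hinge_design(x, knot=2.0)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict(x_new, knot=2.0, coef=coef):
    return hinge_design(x_new, knot) @ coef
```

With the knot placed at the true break point, the fit recovers the kink exactly (both hinge coefficients near 1, intercept near 0); full MARS automates the choice of knots and variables, which is what makes it fast to transfer into other environments.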


The implications of data selection for regional erosion and sediment yield modelling

EARTH SURFACE PROCESSES AND LANDFORMS, Issue 15 2009
Joris de Vente
Abstract Regional environmental models often require detailed data on topography, land cover, soil, and climate. Remote sensing derived data form an increasingly important source of information for these models. Yet, it is often not easy to decide what the most feasible source of information is and how different input data affect model outcomes. This paper compares the quality and performance of remote sensing derived data for regional soil erosion and sediment yield modelling with the WATEM-SEDEM model in south-east Spain. An ASTER-derived digital elevation model (DEM) was compared with the DEM obtained from the Shuttle Radar Topography Mission (SRTM), and land cover information from the CORINE database (CLC2000) was compared with classified ASTER satellite images. The SRTM DEM provided more accurate estimates of slope gradient and upslope drainage area than the ASTER DEM. The classified ASTER images provided a high-accuracy (90%) land cover map and, owing to their higher resolution, showed a more fragmented landscape than the CORINE land cover data. Notwithstanding the differences in quality and level of detail, CORINE and ASTER land cover data in combination with the SRTM DEM or ASTER DEM allowed accurate predictions of sediment yield at the catchment scale. Although the absolute values of erosion and sediment deposition were different, the qualitative spatial pattern of the major sources and sinks of sediments was comparable, irrespective of the DEM and land cover data used. However, due to its lower accuracy, the quantitative spatial pattern of predictions with the ASTER DEM will be worse than with the SRTM DEM. Therefore, the SRTM DEM in combination with ASTER-derived land cover data presumably provides the most accurate spatially distributed estimates of soil erosion and sediment yield. Nevertheless, model calibration is required for each data set and resolution, and validation of the spatial pattern of predictions is urgently needed.
Copyright © 2009 John Wiley & Sons, Ltd. [source]
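The slope-gradient comparison above can be illustrated with a toy DEM. A minimal sketch, assuming a simple central-difference slope rather than the exact algorithm used in the study (the 30 m cell size and the 0.1 gradient are illustrative):

```python
import numpy as np

def slope_gradient(dem, cellsize):
    """Slope (rise/run) from a gridded DEM via finite differences.
    A minimal sketch; GIS packages typically use Horn's 3x3 method instead."""
    dz_dy, dz_dx = np.gradient(dem, cellsize)  # spacing applied to both axes
    return np.hypot(dz_dx, dz_dy)

# Tilted plane: z = 0.1 * x on a 30 m grid (SRTM-like resolution)
cell = 30.0
xs = np.arange(0, 10) * cell
dem = np.tile(0.1 * xs, (10, 1))   # every row identical, sloping in x only
slope = slope_gradient(dem, cell)
```

On this noise-free plane the recovered slope is 0.1 everywhere; in practice DEM noise (higher in the ASTER DEM than the SRTM DEM) propagates directly into slope and drainage-area estimates.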


Population fluctuations, power laws and mixtures of lognormal distributions

ECOLOGY LETTERS, Issue 1 2001
A.P. Allen
A number of investigators have invoked a cascading local interaction model to account for power-law-distributed fluctuations in ecological variables. Invoking such a model requires that species be tightly coupled, and that local interactions among species influence ecosystem dynamics over a broad range of scales. Here we reanalyse bird population data used by Keitt & Stanley (1998, Dynamics of North American breeding bird populations. Nature, 393, 257-260) to support a cascading local interaction model. We find that the power law they report can be attributed to mixing of lognormal distributions. More tentatively, we propose that mixing of distributions accounts for other empirical power laws reported in the ecological literature. [source]
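The mixing mechanism the authors invoke is easy to reproduce numerically. A minimal sketch (all parameter values are illustrative): pooling lognormal components with widely different variances yields a pooled distribution whose complementary CDF is roughly straight on log-log axes over an intermediate range, which can be mistaken for a power law.

```python
import numpy as np

rng = np.random.default_rng(42)

# Pool several lognormal "species" whose log-scale parameters differ widely;
# each component alone is lognormal, but mixing them fattens the tail.
sigmas = np.linspace(0.5, 3.0, 6)
sample = np.concatenate(
    [rng.lognormal(mean=0.0, sigma=s, size=5000) for s in sigmas]
)

# Empirical complementary CDF (CCDF) of the pooled sample
x = np.sort(sample)
ccdf = 1.0 - np.arange(1, x.size + 1) / (x.size + 1)

# A straight-line fit on log-log axes over an intermediate range
# mimics an apparent power-law exponent
mask = (x > 1.0) & (x < 100.0)
slope, intercept = np.polyfit(np.log10(x[mask]), np.log10(ccdf[mask]), 1)
```

The fitted log-log slope is the apparent power-law exponent; no local interaction dynamics are needed to produce it.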


A strategy to reduce the numbers of fish used in acute ecotoxicity testing of pharmaceuticals

ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 12 2003
Thomas H. Hutchinson
Abstract The pharmaceutical industry gives high priority to animal welfare in the process of drug discovery and safety assessment. In the context of environmental assessments of active pharmaceutical ingredients (APIs), existing U.S. Food and Drug Administration and draft European regulations may require testing of APIs for acute ecotoxicity to algae, daphnids, and fish (base-set ecotoxicity data used to derive the predicted no-effect concentration [PNECwater] from the most sensitive of three species). Subject to regulatory approval, it is proposed that testing can be moved from fish median lethal concentration (LC50) testing (typically using ~42 fish/API) to acute threshold tests using fewer fish (typically 10 fish/API). To support this strategy, we have collated base-set ecotoxicity data from regulatory studies of 91 APIs (names coded for commercial reasons). For 73 of the 91 APIs, the algal median effect concentration (EC50) and daphnid EC50 values were lower than or equal to the fish LC50 data. Thus, for approximately 80% of these APIs, algal and daphnid acute EC50 data could have been used in the absence of fish LC50 data to derive PNECwater values. For the other 18 APIs, use of an acute threshold test with a step-down factor of 3.2 is predicted to give comparable PNECwater outcomes. Based on this preliminary scenario of 91 APIs, this approach is predicted to reduce the total number of fish used from 3,822 to 1,025 (~73%). The present study, although preliminary, suggests that the current regulatory requirement for fish LC50 data regarding APIs should be succeeded by fish acute threshold (step-down) test data, thereby achieving significant animal welfare benefits with no loss of data for PNECwater estimates. [source]
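The PNECwater derivation described above is a simple "most sensitive endpoint divided by an assessment factor" rule. A minimal sketch (the factor of 1000 is the conventional one for three acute endpoints; the concentrations, in mg/L, are invented):

```python
def pnec_water(algal_ec50, daphnid_ec50, fish_lc50, assessment_factor=1000.0):
    """PNECwater from base-set acute data: the most sensitive of the three
    endpoints divided by an assessment factor."""
    return min(algal_ec50, daphnid_ec50, fish_lc50) / assessment_factor

# For ~80% of the APIs surveyed, the algal or daphnid EC50 was already the
# minimum, so the fish LC50 does not change the outcome:
pnec = pnec_water(algal_ec50=0.5, daphnid_ec50=2.0, fish_lc50=10.0)
```

Because the minimum drives the result, a fish threshold test only needs to establish whether the fish endpoint falls below the invertebrate/algal endpoints, not its exact value, which is what permits the reduction in fish numbers.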


Unto Every One That Hath Shall Be Given: The Subject Areas Under The HEFCE Formula

FINANCIAL ACCOUNTABILITY & MANAGEMENT, Issue 3 2000
Geoffrey Whittington
The Higher Education Funding Council for England and Wales (HEFCE) has recently revised its formulae for the distribution of teaching and research funds between universities. The new formulae are intended to increase the transparency of the allocation process and reduce the reliance on historical patterns of allocation. Analysis shows that the coefficients (costs and prices) on which the formulae depend are estimated from historical data, so that reliance on historical patterns has not been eliminated. Moreover, the process by which the coefficients were derived is not transparent and the data used are not necessarily the most appropriate. Thus, the new formulae, which lead to significant shifts in the allocation of funds between subject areas, cannot be shown to have the transparency and sound empirical basis to which HEFCE aspires. [source]
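The formula-based allocation the abstract discusses has the general shape "student volume × subject cost weight × unit price". A toy sketch under that assumption (the subject bands, weights and rate below are invented for illustration, not HEFCE's actual coefficients):

```python
def teaching_allocation(student_load, band_weights, unit_rate):
    """Formula funding: weighted student load times a common unit rate.
    The cost weights play the role of the historically estimated
    coefficients discussed in the abstract."""
    return {subject: load * band_weights[subject] * unit_rate
            for subject, load in student_load.items()}

alloc = teaching_allocation(
    student_load={"clinical": 100, "lab": 200, "classroom": 700},
    band_weights={"clinical": 4.0, "lab": 2.0, "classroom": 1.0},
    unit_rate=3000.0,
)
```

The abstract's point is that the weights themselves are estimated from historical cost data, so the formula inherits, rather than removes, the historical pattern of allocation.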


Are stock assessment methods too complicated?

FISH AND FISHERIES, Issue 3 2004
A J R Cotter
Abstract This critical review argues that several methods for the estimation and prediction of numbers-at-age, fishing mortality coefficients F, and recruitment for a stock of fish are too hard to explain to customers (the fishing industry, managers, etc.) and do not pay enough attention to weaknesses in the supporting data, assumptions and theory. The review is linked to North Sea demersal stocks. First, weaknesses in the various types of data used in North Sea assessments are summarized, i.e. total landings, discards, commercial and research vessel abundance indices, age-length keys and natural mortality (M). A list of features that an ideal assessment should have is put forward as a basis for comparing different methods. The importance of independence and weighting when combining different types of data in an assessment is stressed. Assessment methods considered are Virtual Population Analysis, ad hoc tuning, extended survivors analysis (XSA), year-class curves, catch-at-age modelling, and state-space models fitted by Kalman filter or Bayesian methods. Year-class curves (not to be confused with 'catch-curves') are the favoured method because of their applicability to data sets separately, their visual appeal, simple statistical basis, minimal assumptions, the availability of confidence limits, and the ease with which estimates can be combined from different data sets after separate analyses. They do not estimate absolute stock numbers or F but neither do other methods unless M is accurately known, as is seldom true. [source]
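In its simplest form, a year-class curve follows the log abundance of a single cohort across ages, and its slope estimates total mortality Z. A minimal sketch with noise-free synthetic data (the cohort size and Z = 0.7 are invented; real applications fit noisy survey indices and report confidence limits):

```python
import numpy as np

def total_mortality(ages, catch_at_age):
    """Slope of log catch against age for one cohort (a year-class curve
    in its simplest form); the negated slope estimates total mortality Z."""
    slope, _intercept = np.polyfit(ages, np.log(catch_at_age), 1)
    return -slope

# Synthetic cohort declining exponentially: N_a = 1e6 * exp(-0.7 * a)
ages = np.arange(1, 9)
catch = 1e6 * np.exp(-0.7 * ages)
z_hat = total_mortality(ages, catch)
```

Note the limitation stated in the abstract: this recovers Z, not absolute numbers or F; splitting Z into F and M requires M to be known, which is seldom true.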


Night sampling improves indices used for management of yellow perch in Lake Erie

FISHERIES MANAGEMENT & ECOLOGY, Issue 1 2010
P. M. KOCOVSKY
Abstract Catch rate (catch per hour; CPH) was examined for age-0 and age-1 yellow perch, Perca flavescens (Mitchill), captured in bottom trawls from 1991 to 2005 in western Lake Erie: (1) to examine variation of catch rate among years, seasons, diel periods and their interactions; and (2) to determine whether sampling during particular diel periods improved the management value of CPH data used in models to project abundance of age-2 yellow perch. Catch rate varied with year, season and the diel period during which sampling was conducted as well as by the interaction between year and season. Indices of abundance of age-0 and age-1 yellow perch estimated from night samples typically produced better fitting models and lower estimates of age-2 abundance than those using morning or afternoon samples, whereas indices using afternoon samples typically produced less precise and higher estimates of abundance. The diel period during which sampling is conducted will not affect observed population trends but may affect estimates of abundance of age-0 and age-1 yellow perch, which in turn affect recommended allowable harvest. A field experiment throughout western Lake Erie is recommended to examine potential benefits of night sampling to management of yellow perch. [source]
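The projection step described above amounts to regressing a survey index on subsequent age-2 abundance and then applying the fitted line to a new index value. A minimal sketch with invented, noise-free numbers (real data would add scatter, and the model fit would differ by diel period as the abstract reports):

```python
import numpy as np

def fit_projection(index, later_abundance):
    """Least-squares line relating a survey index (e.g. night-trawl CPH of
    age-1 fish) to subsequent age-2 abundance."""
    slope, intercept = np.polyfit(index, later_abundance, 1)
    return slope, intercept

# Invented noise-free relationship: age-2 abundance = 2.5 * index + 10
index = np.array([5.0, 8.0, 12.0, 20.0, 30.0])
age2 = 2.5 * index + 10.0

slope, intercept = fit_projection(index, age2)
projected = slope * 15.0 + intercept   # projection for a new index value
```

With noisy data, the residual scatter of this fit is what determines the precision of the projected abundance, which is why the better-fitting night-sample indices yield more reliable allowable-harvest advice.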


Plasticity in vertical behaviour of migrating juvenile southern bluefin tuna (Thunnus maccoyii) in relation to oceanography of the south Indian Ocean

FISHERIES OCEANOGRAPHY, Issue 4 2009
SOPHIE BESTLEY
Abstract Electronic tagging provides unprecedented information on the habitat use and behaviour of highly migratory marine predators, but few analyses have developed quantitative links between animal behaviour and their oceanographic context. In this paper we use archival tag data from juvenile southern bluefin tuna (Thunnus maccoyii, SBT) to (i) develop a novel approach characterising the oceanographic habitats used throughout an annual migration cycle on the basis of water column structure (i.e., temperature-at-depth data from tags), and (ii) model how the vertical behaviour of SBT altered in relation to habitat type and other factors. Using this approach, we identified eight habitat types occupied by juvenile SBT between the southern margin of the subtropical gyre and the northern edge of the Subantarctic Front in the south Indian Ocean. Although a high degree of variability was evident both within and between fish, mixed-effect models identified consistent behavioural responses to habitat, lunar phase, migration status and diel period. Our results indicate SBT do not act to maintain preferred depth or temperature ranges, but rather show highly plastic behaviours in response to changes in their environment. This plasticity is discussed in terms of the potential proximate causes (physiological, ecological) and with reference to the challenges posed for habitat-based standardisation of fishery data used in stock assessments. [source]


Inference of mantle viscosity from GRACE and relative sea level data

GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 2 2007
Archie Paulson
SUMMARY Gravity Recovery And Climate Experiment (GRACE) satellite observations of secular changes in gravity near Hudson Bay, and geological measurements of relative sea level (RSL) changes over the last 10 000 yr in the same region, are used in a Monte Carlo inversion to infer mantle viscosity structure. The GRACE secular change in gravity shows a significant positive anomaly over a broad region (>3000 km) near Hudson Bay with a maximum of ~2.5 μGal yr^-1 slightly west of Hudson Bay. The pattern of this anomaly is remarkably consistent with that predicted for postglacial rebound using the ICE-5G deglaciation history, strongly suggesting a postglacial rebound origin for the gravity change. We find that the GRACE and RSL data are insensitive to mantle viscosity below 1800 km depth, a conclusion similar to that from previous studies that used only RSL data. For a mantle with homogeneous viscosity, the GRACE and RSL data require a viscosity between 1.4 × 10^21 and 2.3 × 10^21 Pa s. An inversion for two mantle viscosity layers separated at a depth of 670 km shows an ensemble of viscosity structures compatible with the data. While the lowest misfit occurs for upper- and lower-mantle viscosities of 5.3 × 10^20 and 2.3 × 10^21 Pa s, respectively, a weaker upper mantle may be compensated by a stronger lower mantle, such that there exist other models that also provide a reasonable fit to the data. We find that the GRACE and RSL data used in this study cannot resolve more than two layers in the upper 1800 km of the mantle. [source]
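The Monte Carlo inversion idea is: draw many candidate two-layer viscosity structures, score each against the observations with an uncertainty-weighted misfit, and examine the ensemble of acceptable models. A toy sketch with an invented linear forward model standing in for the rebound physics (all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(log_eta_um, log_eta_lm):
    """Toy stand-in for a rebound model: maps two layer viscosities (log10
    Pa s) to two synthetic 'observables'. Purely illustrative, not physics."""
    return np.array([0.6 * log_eta_um + 0.4 * log_eta_lm,
                     0.2 * log_eta_um + 0.8 * log_eta_lm])

true = np.array([20.7, 21.4])      # illustrative upper/lower-mantle values
obs = forward(*true)               # synthetic "data"
sigma = 0.05                       # assumed data uncertainty

# Random search over candidate structures; score each by chi-square misfit
cands = rng.uniform([19.0, 20.0], [22.0, 23.0], size=(20000, 2))
misfit = np.array([np.sum(((forward(*c) - obs) / sigma) ** 2) for c in cands])
best = cands[np.argmin(misfit)]
```

In the real inversion the interesting output is not just `best` but the whole subset of candidates below a misfit threshold, which is what reveals the trade-off the abstract describes (a weaker upper mantle compensated by a stronger lower mantle).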


Getting the biodiversity intactness index right: the importance of habitat degradation data

GLOBAL CHANGE BIOLOGY, Issue 11 2006
MATHIEU ROUGET
Abstract Given high-level commitments to reducing the rate of biodiversity loss by 2010, there is a pressing need to develop simple and practical indicators to monitor progress. In this context, a biodiversity intactness index (BII) was recently proposed, which provides an overall indicator suitable for policy makers. The index links data on land use with expert assessments of how this impacts the population densities of well-understood taxonomic groups to estimate current population sizes relative to premodern times. However, when calculated for southern Africa, the resulting BII of 84% suggests a far more positive picture of the state of wild nature than do other large-scale estimates. Here, we argue that this discrepancy is in part an artefact of the coarseness of the land degradation data used to calculate the BII, and that the overall BII for southern Africa is probably much lower than 84%. In particular, based on two relatively inexpensive, ground-truthed studies of areas not generally regarded as exceptional in terms of their degradation status, we demonstrate that Scholes and Biggs might have seriously underestimated the extent of land degradation. These differences have substantial bearing on BII scores. Urgent attention should be given to the further development of cost-effective ground-truthing methods for quantifying the extent of land degradation in order to provide reliable estimates of biodiversity loss, both in southern Africa and more widely. [source]
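The BII has the structure of a richness-and-area-weighted mean of remaining population fraction across land-use classes, so underestimating the area of degraded land directly inflates the index. A toy sketch of that sensitivity (the class areas, richness values and intactness fractions are invented, not the southern African data):

```python
import numpy as np

def bii(richness, area, intactness):
    """Richness-and-area-weighted mean of the remaining population fraction,
    following the general structure of the biodiversity intactness index."""
    w = np.asarray(richness, float) * np.asarray(area, float)
    return float(np.sum(w * np.asarray(intactness, float)) / np.sum(w))

# Two land-use classes: lightly used (intactness 0.95) vs degraded (0.55).
# Underestimating the degraded area inflates the index:
optimistic = bii(richness=[100, 100], area=[0.8, 0.2], intactness=[0.95, 0.55])
revised    = bii(richness=[100, 100], area=[0.5, 0.5], intactness=[0.95, 0.55])
```

Here reclassifying 30% of the landscape from lightly used to degraded lowers the index from 0.87 to 0.75, illustrating why better ground-truthed degradation maps matter more than refinements elsewhere in the calculation.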


A semi-parametric gap-filling model for eddy covariance CO2 flux time series data

GLOBAL CHANGE BIOLOGY, Issue 9 2006
VANESSA J. STAUCH
Abstract This paper introduces a method for modelling the deterministic component of eddy covariance CO2 flux time series in order to supplement missing data in these important data sets. The method is based on combining multidimensional semi-parametric spline interpolation with an assumed but unstated dependence of net CO2 flux on light, temperature and time. We test the model using a range of synthetic canopy data sets generated using several canopy simulation models realized for different micrometeorological and vegetation conditions. The method appears promising for filling large systematic gaps, provided that the associated missing data do not over-erode critical information content in the conditioning data used for the model optimization. [source]
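The gap-filling logic can be illustrated in one dimension: predict a missing flux from fluxes observed under similar driver conditions. A minimal sketch using a binned lookup table on light level as a crude stand-in for the paper's multidimensional spline surface (bin count and the linear light response are invented):

```python
import numpy as np

def fill_gaps(light, flux, n_bins=5):
    """Fill NaN flux values with the mean flux observed at similar light
    levels; a crude stand-in for a fitted semi-parametric surface."""
    light = np.asarray(light, float)
    flux = np.asarray(flux, float).copy()
    edges = np.linspace(light.min(), light.max(), n_bins + 1)
    bins = np.clip(np.digitize(light, edges) - 1, 0, n_bins - 1)
    for b in range(n_bins):
        sel = bins == b
        good = sel & ~np.isnan(flux)
        if good.any():
            flux[sel & np.isnan(flux)] = flux[good].mean()
    return flux

light = np.linspace(0.0, 1000.0, 50)
flux = -0.01 * light               # invented deterministic light response
flux_gappy = flux.copy()
flux_gappy[10:15] = np.nan         # a systematic gap
filled = fill_gaps(light, flux_gappy)
```

The failure mode the abstract warns about is visible here too: if a gap removed all observations in a bin (i.e. the gap "erodes" the conditioning data), that bin has nothing to borrow from and the fill degrades.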


Model-data synthesis in terrestrial carbon observation: methods, data requirements and data uncertainty specifications

GLOBAL CHANGE BIOLOGY, Issue 3 2005
M. R. Raupach
Systematic, operational, long-term observations of the terrestrial carbon cycle (including its interactions with water, energy and nutrient cycles and ecosystem dynamics) are important for the prediction and management of climate, water resources, food resources, biodiversity and desertification. To contribute to these goals, a terrestrial carbon observing system requires the synthesis of several kinds of observation into terrestrial biosphere models encompassing the coupled cycles of carbon, water, energy and nutrients. Relevant observations include atmospheric composition (concentrations of CO2 and other gases); remote sensing; flux and process measurements from intensive study sites; in situ vegetation and soil monitoring; weather, climate and hydrological data; and contemporary and historical data on land use, land use change and disturbance (grazing, harvest, clearing, fire). A review of model-data synthesis tools for terrestrial carbon observation identifies 'nonsequential' and 'sequential' approaches as major categories, differing according to whether data are treated all at once or sequentially. The structure underlying both approaches is reviewed, highlighting several basic commonalities in formalism and data requirements. An essential commonality is that for all model-data synthesis problems, both nonsequential and sequential, data uncertainties are as important as data values themselves and have a comparable role in determining the outcome. Given the importance of data uncertainties, there is an urgent need for soundly based uncertainty characterizations for the main kinds of data used in terrestrial carbon observation. The first requirement is a specification of the main properties of the error covariance matrix.
As a step towards this goal, semi-quantitative estimates are made of the main properties of the error covariance matrix for four kinds of data essential for terrestrial carbon observation: remote sensing of land surface properties, atmospheric composition measurements, direct flux measurements, and measurements of carbon stores. [source]
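The nonsequential/sequential distinction, and the central role of data uncertainties, can both be shown in the simplest linear-Gaussian case: estimating one scalar quantity from several noisy observations. A batch inverse-variance weighted mean ("all at once") and a sequential, Kalman-style update ("one observation at a time") give identical answers, and both depend on the error variances as much as on the data values (all numbers are illustrative):

```python
import numpy as np

obs = np.array([10.0, 12.0, 11.0])   # observations of one quantity
var = np.array([1.0, 4.0, 2.0])      # their assumed error variances

# Nonsequential: all data at once (inverse-variance weighted mean)
w = 1.0 / var
batch_est = np.sum(w * obs) / np.sum(w)

# Sequential: assimilate one observation at a time (scalar Kalman update)
est, p = obs[0], var[0]              # initialize from the first observation
for z, r in zip(obs[1:], var[1:]):
    k = p / (p + r)                  # gain: trusts whichever variance is smaller
    est = est + k * (z - est)
    p = (1.0 - k) * p
seq_est = est
```

Doubling any one variance shifts both estimates identically, which is the concrete sense in which "data uncertainties are as important as data values themselves".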


An empirical model of carbon fluxes in Russian tundra

GLOBAL CHANGE BIOLOGY, Issue 2 2001
Dmitri G. Zamolodchikov
Summary This study presents an empirical model based on a GIS approach, which was constructed to estimate the large-scale carbon fluxes over the entire Russian tundra zone. The model has four main blocks: (i) the computer map of tundra landscapes; (ii) data base of long-term weather records; (iii) the submodel of phytomass seasonal dynamics; and (iv) the submodel of carbon fluxes. The model uses exclusively original in situ diurnal CO2 flux chamber measurements (423 sample plots) conducted during six field seasons (1993-98). The research sites represent the main tundra biome landscapes (arctic, typical, south shrub and mountain tundras) in the latitudinal range of 65-74°N and longitudinal profile of 63°E-172°W. The greatest possible diversity of major ecosystem types within the different landscapes was investigated. The majority of the phytomass data used was obtained from the same sample plots. The submodel of carbon fluxes has two dependent [GPP, Gross Respiration (GR)] and several input variables (air temperature, PAR, aboveground phytomass components). The model demonstrates a good correspondence with other independent regional and biome estimates and carbon flux seasonal patterns. The annual GPP of the Russian tundra zone for the area of 235 × 10^6 ha was estimated as -485.8 ± 34.6 × 10^6 tC, GR as +474.2 ± 35.0 × 10^6 tC, and NF as -11.6 ± 40.8 × 10^6 tC, which possibly corresponds to an equilibrium state of carbon balance during the climatic period studied (the first half of the 20th century). The results suggest that simple regression-based models are useful for extrapolating carbon fluxes from small to large spatial scales. [source]
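A flux submodel of this kind typically combines a light response for GPP with a temperature response for respiration, and reports net flux with uptake negative (the sign convention of the abstract). A minimal sketch with invented functional forms and parameters (the paper's actual regressions also use phytomass terms):

```python
def gpp(par, alpha=0.04, gpp_max=12.0):
    """Rectangular-hyperbola light response; parameters are illustrative."""
    return alpha * par * gpp_max / (alpha * par + gpp_max)

def gross_respiration(temp_c, r10=2.5, q10=2.0):
    """Q10 temperature response; parameters are illustrative."""
    return r10 * q10 ** ((temp_c - 10.0) / 10.0)

def net_flux(par, temp_c):
    """Net CO2 flux, negative = uptake (GPP exceeds gross respiration)."""
    return gross_respiration(temp_c) - gpp(par)

midday = net_flux(par=800.0, temp_c=12.0)   # bright and mild: uptake
night = net_flux(par=0.0, temp_c=8.0)       # dark: respiration only
```

Summing such chamber-calibrated responses over a landscape map and a long-term weather record is the essence of the GIS upscaling the study performs.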


What caused the mid-Holocene forest decline on the eastern Tibet-Qinghai Plateau?

GLOBAL ECOLOGY, Issue 2 2010
Ulrike Herzschuh
ABSTRACT Aim: Atmospheric CO2 concentrations depend, in part, on the amount of biomass locked up in terrestrial vegetation. Information on the causes of a broad-scale vegetation transition and associated loss of biomass is thus of critical interest for understanding global palaeoclimatic changes. Pollen records from the north-eastern Tibet-Qinghai Plateau reveal a dramatic and extensive forest decline beginning c. 6000 cal. yr BP. The aim of this study is to elucidate the causes of this regional-scale change from high-biomass forest to low-biomass steppe on the Tibet-Qinghai Plateau during the second half of the Holocene. Location: Our study focuses on the north-eastern Tibet-Qinghai Plateau. Stratigraphical data used are from Qinghai Lake (3200 m a.s.l., 36°32'-37°15' N, 99°36'-100°47' E). Methods: We apply a modern pollen-precipitation transfer function from the eastern and north-eastern Tibet-Qinghai Plateau to fossil pollen spectra from Qinghai Lake to reconstruct annual precipitation changes during the Holocene. The reconstructions are compared to a stable oxygen-isotope record from the same sediment core and to results from two transient climate model simulations. Results: The pollen-based precipitation reconstruction covering the Holocene parallels moisture changes inferred from the stable oxygen-isotope record. Furthermore, these results are in close agreement with simulated model-based past annual precipitation changes. Main conclusions: In the light of these data and the model results, we conclude that it is not necessary to attribute the broad-scale forest decline to human activity. Climate change as a result of changes in the intensity of the East Asian Summer Monsoon in the mid-Holocene is the most parsimonious explanation for the widespread forest decline on the Tibet-Qinghai Plateau.
Moreover, climate feedback from a reduced forest cover accentuates increasingly drier conditions in the area, indicating complex vegetation-climate interactions during this major ecological change. [source]
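The reconstruction step of a pollen-precipitation transfer function can be reduced to its core: each taxon carries a precipitation optimum estimated from the modern calibration set, and a fossil sample's reconstructed precipitation is the pollen-abundance-weighted average of those optima. A minimal sketch with invented optima and abundances (weighted averaging is one common transfer-function family; the paper's exact method may differ):

```python
import numpy as np

def reconstruct_precip(pollen_frac, taxon_optima):
    """Abundance-weighted average of taxon precipitation optima: the core
    of a weighted-averaging transfer function."""
    p = np.asarray(pollen_frac, float)
    return float(np.sum(p * np.asarray(taxon_optima, float)) / np.sum(p))

# Invented optima (mm/yr): a dry steppe taxon vs a moist tree taxon
optima = [150.0, 550.0]

forested = reconstruct_precip([0.2, 0.8], optima)  # tree-rich spectrum
steppe   = reconstruct_precip([0.9, 0.1], optima)  # steppe-rich spectrum
```

Applied down-core, the shift from tree-dominated to steppe-dominated spectra translates directly into the declining-precipitation signal that the study compares against the isotope record and model simulations.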