Important Ones (important + ones)

Selected Abstracts


Contribution of N2O to the greenhouse gas balance of first-generation biofuels

GLOBAL CHANGE BIOLOGY, Issue 1 2009
EDWARD M. W. SMEETS
Abstract In this study, we analyze the impact of fertilizer- and manure-induced N2O emissions due to energy crop production on the reduction of greenhouse gas (GHG) emissions when conventional transportation fuels are replaced by first-generation biofuels (also taking account of other GHG emissions during the entire life cycle). We calculate the nitrous oxide (N2O) emissions by applying a statistical model that uses spatial data on climate and soil. For the land use that is assumed to be replaced by energy crop production (the 'reference land-use system'), we explore a variety of options, the most important of which are cropland for food production, grassland, and natural vegetation. Calculations are also done in the case that emissions due to energy crop production are fully additional and thus no reference is considered. The results are combined with data on other emissions due to biofuels production that are derived from existing studies, resulting in total GHG emission reduction potentials for major biofuels compared with conventional fuels. The results show that N2O emissions can have an important impact on the overall GHG balance of biofuels, though there are large uncertainties. The most important ones are those in the statistical model and the GHG emissions not related to land use. Ethanol produced from sugar cane and sugar beet are relatively robust GHG savers: these biofuels change the GHG emissions by −103% to −60% (sugar cane) and −58% to −17% (sugar beet), compared with conventional transportation fuels and depending on the reference land-use system that is considered. The use of diesel from palm fruit also results in a relatively constant and substantial change of the GHG emissions by −75% to −39%. For corn and wheat ethanol, the figures are −38% to 11% and −107% to 53%, respectively. Rapeseed diesel changes the GHG emissions by −81% to 72% and soybean diesel by −111% to 44%. Optimized crop management, which involves the use of state-of-the-art agricultural technologies combined with an optimized fertilization regime and the use of nitrification inhibitors, can reduce N2O emissions substantially and change the GHG emissions by up to −135 percent points (pp) compared with conventional management. However, the uncertainties in the statistical N2O emission model and in the data on non-land-use GHG emissions due to biofuels production are large; they can change the GHG emission reduction by between −152 and 87 pp. [source]
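
To make the headline quantity concrete, here is a minimal sketch of how a life-cycle GHG change relative to a conventional fossil fuel could be computed. All emission factors and names below are hypothetical placeholders, not values from the study, and the study's statistical N2O model is not reproduced.

```python
# Illustrative sketch (hypothetical numbers): relative change in life-cycle GHG
# emissions when a biofuel displaces a conventional transportation fuel.
# A negative result means a net GHG saving relative to the fossil reference.

def ghg_change_percent(biofuel_emissions: float, fossil_emissions: float) -> float:
    """Percentage change in emissions of the biofuel relative to the fossil fuel.

    Both inputs are life-cycle emissions in g CO2-eq per MJ of fuel, with
    field N2O converted to CO2-eq via its global warming potential.
    """
    return 100.0 * (biofuel_emissions - fossil_emissions) / fossil_emissions

# Hypothetical inventory for one biofuel / reference-land-use combination.
fossil_ref = 85.0             # g CO2-eq/MJ, conventional fuel (assumed)
non_land_use = 30.0           # g CO2-eq/MJ, cultivation inputs, processing, transport (assumed)
n2o_field = 12.0              # g CO2-eq/MJ, fertilizer- and manure-induced N2O (assumed)
reference_land_credit = -5.0  # g CO2-eq/MJ, net credit for the replaced land use (assumed)

biofuel_total = non_land_use + n2o_field + reference_land_credit
print(f"GHG change: {ghg_change_percent(biofuel_total, fossil_ref):+.0f}%")
```

Varying the N2O term and the reference land-use term over their uncertainty ranges is what produces the wide intervals reported above (for example, −103% to −60% for sugar cane ethanol).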


Modelling small-business credit scoring by using logistic regression, neural networks and decision trees

INTELLIGENT SYSTEMS IN ACCOUNTING, FINANCE & MANAGEMENT, Issue 3 2005
Mirta Bensic
Previous research on credit scoring that used statistical and intelligent methods was mostly focused on commercial and consumer lending. The main purpose of this paper is to extract important features for credit scoring in small-business lending, using a relatively small dataset collected under specific transitional economic conditions. To do this, we compare the accuracy of the best models extracted by different methodologies, such as logistic regression, neural networks (NNs), and CART decision trees. Four different NN algorithms are tested (backpropagation, radial basis function, probabilistic, and learning vector quantization networks), using the forward nonlinear variable selection strategy. Although the test of differences in proportion and McNemar's test do not show a statistically significant difference between the models tested, the probabilistic NN model produces the highest hit rate and the lowest type I error. According to the measures of association, the best NN model also shows the highest degree of association with the data, and it yields the lowest total relative cost of misclassification for all scenarios examined. The best model extracts a set of important features for small-business credit scoring for the observed sample, emphasizing credit programme characteristics, as well as the entrepreneur's personal and business characteristics, as the most important ones. Copyright © 2005 John Wiley & Sons, Ltd. [source]
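
As a rough analogue of the model comparison described above, the sketch below trains a logistic regression, a small multilayer-perceptron NN, and a CART-style decision tree on synthetic data with scikit-learn and reports a hit rate and type I error for each. The dataset, network settings, and error definitions are assumptions for illustration; the paper's probabilistic and LVQ networks are not available in scikit-learn and are not reproduced here.

```python
# Minimal sketch: compare three classifier families on a synthetic "credit scoring"
# task. Conventions assumed for illustration: class 1 = creditworthy, class 0 = default.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix

# Small synthetic dataset standing in for a small-business lending sample.
X, y = make_classification(n_samples=400, n_features=12, n_informative=6,
                           weights=[0.3, 0.7], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "neural network (MLP)": MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                                          random_state=0),
    "decision tree (CART)": DecisionTreeClassifier(max_depth=4, random_state=0),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
    hit_rate = (tp + tn) / (tp + tn + fp + fn)  # overall classification accuracy
    type_i_error = fp / (fp + tn)               # defaulters wrongly classified as creditworthy
    print(f"{name}: hit rate = {hit_rate:.3f}, type I error = {type_i_error:.3f}")
```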


An efficient triplet-based algorithm for evidential reasoning

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 4 2008
Yaxin Bi
Linear-time computational techniques based on the structure of an evidence space have been developed for combining multiple pieces of evidence using Dempster's rule (orthogonal sum), which is available on a number of contending hypotheses. They offer a means of making the computation-intensive calculations involved more efficient in certain circumstances. Unfortunately, they restrict the orthogonal sum of evidential functions to the dichotomous structure that applies only to elements and their complements. In this paper, we present a novel evidence structure in terms of a triplet and a set of algorithms for evidential reasoning. The merit of this structure is that it divides a set of evidence into three subsets, distinguishing the trivial evidential elements from the important ones, focusing particularly on some elements of an evidence space. It avoids the deficits of the dichotomous structure in representing the preference of evidence and estimating the basic probability assignment of evidence. We have established a formalism for this structure and the general formulae for combining pieces of evidence in the form of the triplet, which have been theoretically and empirically justified. © 2008 Wiley Periodicals, Inc. [source]
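
To make the orthogonal sum concrete, the following is a small generic implementation of Dempster's rule over mass functions stored as dictionaries keyed by frozensets of hypotheses. It illustrates the combination operation the triplet structure is designed to accelerate, not the paper's triplet algorithm itself; the frame and mass values are hypothetical.

```python
# Minimal sketch of Dempster's rule of combination (orthogonal sum).
# A mass function maps subsets (frozensets) of the frame of discernment
# to basic probability masses that sum to 1.

def dempster_combine(m1, m2):
    """Combine two basic probability assignments with Dempster's rule."""
    combined = {}
    conflict = 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2           # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("Total conflict: evidence cannot be combined")
    k = 1.0 - conflict                        # normalization constant
    return {focal: mass / k for focal, mass in combined.items()}

# Hypothetical example on a frame of three contending hypotheses.
frame = frozenset({"h1", "h2", "h3"})
m1 = {frozenset({"h1"}): 0.6, frame: 0.4}     # one preferred singleton plus ignorance
m2 = {frozenset({"h1"}): 0.3, frozenset({"h2"}): 0.5, frame: 0.2}
print(dempster_combine(m1, m2))
```

Keeping each mass function to a handful of focal elements, as a triplet-style representation does, keeps the double loop above small; the paper's own algorithms and formulae are more specific than this generic version.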


Soluble protein oligomers as emerging toxins in Alzheimer's and other amyloid diseases

IUBMB LIFE, Issue 4-5 2007
Sergio T. Ferreira
Abstract Amyloid diseases are a group of degenerative disorders characterized by cell/tissue damage caused by toxic protein aggregates. Abnormal production, processing and/or clearance of misfolded proteins or peptides may lead to their accumulation and to the formation of amyloid aggregates. Early histopathological investigation of affected organs in different amyloid diseases revealed the ubiquitous presence of fibrillar protein aggregates forming large deposits known as amyloid plaques. Further in vitro biochemical and cell biology studies, as well as studies using transgenic animal models, provided strong support to what initially seemed to be a solid concept, namely that amyloid fibrils played crucial roles in amyloid pathogenesis. However, recent studies describing tissue-specific accumulation of soluble protein oligomers and their strong impact on cell function have challenged the fibril hypothesis and led to the emergence of a new view: Fibrils are not the only toxins derived from amyloidogenic proteins and, quite possibly, not the most important ones with respect to disease etiology. Here, we review some of the recent findings and concepts in this rapidly developing field, with emphasis on the involvement of soluble oligomers of the amyloid-β peptide in the pathogenesis of Alzheimer's disease. Recent studies suggesting that soluble oligomers from different proteins may share common mechanisms of cytotoxicity are also discussed. Increased understanding of the cellular toxic mechanisms triggered by protein oligomers may lead to the development of rational, effective treatments for amyloid disorders. IUBMB Life, 59: 332-345, 2007 [source]


Bayesian mixture models for complex high dimensional count data in phage display experiments

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 2 2007
Yuan Ji
Summary. Phage display is a biological process that is used to screen random peptide libraries for ligands that bind to a target of interest with high affinity. On the basis of a count data set from an innovative multistage phage display experiment, we propose a class of Bayesian mixture models to cluster peptide counts into three groups that exhibit different display patterns across stages. Among the three groups, the investigators are particularly interested in that with an ascending display pattern in the counts, which implies that the peptides are likely to bind to the target with strong affinity. We apply a Bayesian false discovery rate approach to identify the peptides with the strongest affinity within the group. A list of peptides is obtained, among which important ones with meaningful functions are further validated by biologists. To examine the performance of the Bayesian model, we conduct a simulation study and obtain desirable results. [source]
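
The selection step described above, ranking peptides by their posterior probability of belonging to the ascending group and thresholding at a target Bayesian false discovery rate, can be sketched as follows; the posterior probabilities are simulated stand-ins rather than output of the paper's mixture model.

```python
# Minimal sketch: Bayesian FDR selection from posterior group-membership probabilities.
# The probabilities below are simulated stand-ins for the posterior probability that a
# peptide belongs to the "ascending display pattern" group.
import numpy as np

rng = np.random.default_rng(1)
post_ascending = rng.beta(0.5, 0.5, size=1000)   # hypothetical posterior probabilities

def bayesian_fdr_select(posterior, alpha=0.05):
    """Return indices of the largest set whose estimated Bayesian FDR is <= alpha.

    The Bayesian FDR of a selected set is the average posterior probability of
    *not* belonging to the group of interest among the selected items.
    """
    order = np.argsort(-posterior)               # most confident peptides first
    local_error = 1.0 - posterior[order]
    running_fdr = np.cumsum(local_error) / np.arange(1, len(order) + 1)
    n_selected = int(np.sum(running_fdr <= alpha))
    return order[:n_selected]

selected = bayesian_fdr_select(post_ascending, alpha=0.05)
print(f"{selected.size} peptides selected at an estimated Bayesian FDR of 5%")
```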


History for a practice profession

NURSING INQUIRY, Issue 4 2006
Patricia D'Antonio
This essay explores the meaning of history for a practice profession. It argues that our clinical backgrounds suggest particular kinds of historical questions that colleagues with different backgrounds would not think to ask. This essay poses three possible questions as examples. What if we place the day-after-day work of caring for the sick, that which is nursing, at the center of an institution's history? What if we were to embrace a sense of nurses and nursing work as truly diverse and different? What if we were to analytically engage the reluctance of the large number of nurses to formally embrace feminism? This essay acknowledges that these are not the only possible questions and, in the end, they most likely will not even be the important ones. But this essay does argue that we, who are both clinicians and historians, need to more seriously consider the implications of our questions for disciplinary practice and research. It argues, in the end, that the meaning of history to a practice profession lies in our questions: questions that are different from those raised by other methods, and questions that may escape notice in the press of daily practice and research. [source]


Implementation Studies: Time for a Revival?

PUBLIC ADMINISTRATION, Issue 2 2004
Personal Reflections on 20 Years of Implementation Studies
This paper presents a review of three decades of implementation studies and is constructed in the form of a personal reflection. The paper begins with a reflection upon the context within which the book Policy and Action was written, a time when both governments and policy analysts were endeavouring to systematize and improve the public decision-making process and to place such decision-making within a more strategic framework. The review ends with a discussion about how public policy planning has changed in the light of public services reform strategies. It is suggested that as a result of such reforms, interest in the processes of implementation has perhaps been superseded by a focus upon change management and performance targets. It is further argued that this has resulted in the reassertion of normative, top-down processes of policy implementation. The points the paper raises are important ones, and indeed they are reflected throughout all four papers in the symposium issue. These are: (1) the very real analytical difficulties of understanding the role of bureaucratic discretion and motivation; (2) the problem of evaluating policy outcomes; and (3) the need to also focus upon micro-political processes that occur in public services organizations. In conclusion, the paper emphasizes the continued importance of implementation studies and the need for policy analysts to understand what actually happens at policy recipient level. [source]


High-Dimensional Cox Models: The Choice of Penalty as Part of the Model Building Process

BIOMETRICAL JOURNAL, Issue 1 2010
Axel Benner
Abstract The Cox proportional hazards regression model is the most popular approach to model covariate information for survival times. In this context, the development of high-dimensional models where the number of covariates is much larger than the number of observations (p ≫ n) is an ongoing challenge. A practicable approach is to use ridge penalized Cox regression in such situations. Besides focusing on finding the best prediction rule, one is often interested in determining a subset of covariates that are the most important ones for prognosis. This could be a gene set in the biostatistical analysis of microarray data. Covariate selection can then, for example, be done by L1-penalized Cox regression using the lasso (Tibshirani (1997). Statistics in Medicine 16, 385-395). Several approaches beyond the lasso, that incorporate covariate selection, have been developed in recent years. This includes modifications of the lasso as well as nonconvex variants such as smoothly clipped absolute deviation (SCAD) (Fan and Li (2001). Journal of the American Statistical Association 96, 1348-1360; Fan and Li (2002). The Annals of Statistics 30, 74-99). The purpose of this article is to implement them practically into the model building process when analyzing high-dimensional data with the Cox proportional hazards model. To evaluate penalized regression models beyond the lasso, we included SCAD variants and the adaptive lasso (Zou (2006). Journal of the American Statistical Association 101, 1418-1429). We compare them with "standard" applications such as ridge regression, the lasso, and the elastic net. Predictive accuracy, features of variable selection, and estimation bias will be studied to assess the practical use of these methods. We observed that the performance of SCAD and the adaptive lasso is highly dependent on nontrivial preselection procedures. A practical solution to this problem does not yet exist. Since there is a high risk of missing relevant covariates when SCAD or the adaptive lasso is applied after an inappropriate initial selection step, we recommend staying with the lasso or the elastic net in actual data applications. But with respect to the promising results for truly sparse models, we see some advantage of SCAD and the adaptive lasso, if better preselection procedures were available. This requires further methodological research. [source]
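
As a rough illustration of the penalized Cox fits compared in the article, the sketch below fits lasso-type and elastic-net-type Cox models to simulated survival data with more covariates than observations. It assumes the lifelines package and uses made-up penalty values; it does not reproduce the article's analyses, and SCAD and the adaptive lasso are omitted because, as the authors note, they depend on nontrivial preselection.

```python
# Minimal sketch: lasso-type and elastic-net-type penalized Cox regression on
# simulated high-dimensional survival data. Assumes the `lifelines` package;
# sample sizes, penalty values, and the simulated signal are illustrative only.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n, p = 100, 120                                  # more covariates than observations
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = 0.8                                   # truly sparse signal: 5 relevant covariates
event_time = rng.exponential(1.0 / np.exp(X @ beta))
censor_time = rng.exponential(2.0, size=n)

df = pd.DataFrame(X, columns=[f"x{j}" for j in range(p)])
df["T"] = np.minimum(event_time, censor_time)       # observed time
df["E"] = (event_time <= censor_time).astype(int)   # event indicator

# l1_ratio=1.0 gives a lasso-type penalty, l1_ratio=0.5 an elastic-net-type mixture.
lasso_cox = CoxPHFitter(penalizer=1.0, l1_ratio=1.0).fit(df, duration_col="T", event_col="E")
enet_cox = CoxPHFitter(penalizer=1.0, l1_ratio=0.5).fit(df, duration_col="T", event_col="E")

# lifelines handles the L1 term with a smooth approximation, so coefficients are
# shrunk strongly toward zero rather than set exactly to zero; inspect the largest ones.
for name, model in [("lasso-type", lasso_cox), ("elastic-net-type", enet_cox)]:
    top = model.params_.abs().sort_values(ascending=False).head(5)
    print(f"{name} penalty, largest |coefficients|:")
    print(top.round(3))
```

Dedicated path-following solvers in the glmnet style (for example, scikit-survival's CoxnetSurvivalAnalysis) are a common alternative when exact zeros along a full penalty path are needed.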