Learning Methods


Kinds of Learning Methods

  • machine learning methods


  • Selected Abstracts


    Some Learning Methods in Functional Networks

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 6 2000
    Enrique Castillo
    This article is devoted to learning functional networks. After a short introduction and motivation of functional networks using a CAD problem, four steps used in learning functional networks are described: (1) selection of the initial topology of the network, derived from the physical properties of the problem being modeled; (2) simplification of this topology using functional equations; (3) estimation of the parameters or weights using least squares and minimax methods; and (4) selection of the subset of basic functions leading to the best fit to the available data, using the minimum-description-length principle. Several examples illustrate the learning procedure, including the use of a separable functional network to recover missing significant wave height data at two locations, based on a complete record from a third location. [source]
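Step (3) above, estimating the weights by least squares, can be sketched in a few lines. This is a minimal stand-alone illustration, not the paper's CAD example: the basis {1, x, x²}, the data, and the hand-rolled Gaussian elimination are all assumptions for the sketch.

```python
# Hedged sketch: least-squares estimation of basis-function weights
# (step 3 of the learning procedure described above). Basis and data
# are invented illustrations.

def design_matrix(xs, basis):
    return [[f(x) for f in basis] for x in xs]

def solve(A, b):
    # Gaussian elimination with partial pivoting for a small dense system.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def least_squares_weights(xs, ys, basis):
    X = design_matrix(xs, basis)
    n = len(basis)
    # Normal equations: (X^T X) w = X^T y
    XtX = [[sum(X[i][a] * X[i][b] for i in range(len(xs))) for b in range(n)]
           for a in range(n)]
    Xty = [sum(X[i][a] * ys[i] for i in range(len(xs))) for a in range(n)]
    return solve(XtX, Xty)

basis = [lambda x: 1.0, lambda x: x, lambda x: x * x]
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2.0 + 0.5 * x - 0.25 * x * x for x in xs]  # noiseless target
w = least_squares_weights(xs, ys, basis)
```

With a noiseless quadratic target and a quadratic basis, the recovered weights match the generating coefficients to floating-point accuracy.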


    Perinatal nursing education for single-room maternity care: an evaluation of a competency-based model

    JOURNAL OF CLINICAL NURSING, Issue 1 2005
    Patricia A Janssen PhD
    Aims and objectives: To evaluate the success of a competency-based nursing orientation programme for a single-room maternity care unit by measuring improvement in self-reported competency after six months. Background: Single-room maternity care has challenged obstetrical nurses to provide comprehensive nursing care during all phases of the in-hospital birth experience. In this model, nurses provide intrapartum, postpartum and newborn care in one room. To date, an evaluation of nursing education for single-room maternity care has not been published. Design: A prospective cohort design comparing self-reported competencies before starting work in the single-room maternity care unit and six months after. Methods: Nurses completed a competency-based education programme in which they could select from a menu of learning methods and content areas according to their individual needs. Learning methods included classroom lectures, self-paced learning packages, and preceptorships in the clinical area. Competencies were measured by a standardized perinatal self-efficacy tool and a tool developed by the authors for this study, the Single-Room Maternity Care Competency Tool. A paired analysis was undertaken to take into account the paired (before and after) nature of the design. Results: Scores on the perinatal self-efficacy scale and the Single-Room Maternity Care Competency Tool improved, and these differences were statistically significant. Conclusions: Improvements in perinatal and single-room maternity care-specific competencies suggest that our education programme was successful in preparing nurses for their new role in the single-room maternity care setting. This conclusion is supported by reported increases in nursing and patient satisfaction in single-room maternity care compared with the traditional labour/delivery and postpartum settings.
    Relevance to clinical practice: An education programme tailored to the learning needs of experienced clinical nurses contributes to improvements in nursing competencies and patient care. [source]


    SORTAL ANAPHORA RESOLUTION IN MEDLINE ABSTRACTS

    COMPUTATIONAL INTELLIGENCE, Issue 1 2007
    Manabu Torii
    This paper reports our investigation of machine learning methods applied to anaphora resolution for biology texts, particularly paper abstracts. Our primary concern is the investigation of features and their combinations for effective anaphora resolution. In this paper, we focus on the resolution of demonstrative phrases and definite determiner phrases, the two most prevalent forms of anaphoric expressions that we find in biology research articles. Different resolution models are developed for demonstrative and definite determiner phrases. Our work shows that models may be optimized differently for each of the phrase types. Also, because a significant number of definite determiner phrases are not anaphoric, we induce a model to detect anaphoricity, i.e., a model that classifies phrases as either anaphoric or nonanaphoric. We propose several novel features that we call highlighting features, and consider their utility particularly for processing paper abstracts. The system using the highlighting features achieved accuracies of 78% and 71% for demonstrative phrases and definite determiner phrases, respectively. The use of the highlighting features reduced the error rate by about 10%. [source]


    Implicit Surface Modelling with a Globally Regularised Basis of Compact Support

    COMPUTER GRAPHICS FORUM, Issue 3 2006
    C. Walder
    We consider the problem of constructing a globally smooth analytic function that represents a surface implicitly by way of its zero set, given sample points with surface normal vectors. The contributions of the paper include a novel means of regularising multi-scale compactly supported basis functions that leads to the desirable interpolation properties previously only associated with fully supported bases. We also provide a regularisation framework for simpler and more direct treatment of surface normals, along with a corresponding generalisation of the representer theorem lying at the core of kernel-based machine learning methods. We demonstrate the techniques on 3D problems of up to 14 million data points, as well as 4D time series data and four-dimensional interpolation between three-dimensional shapes. Categories and Subject Descriptors (according to ACM CCS): I.3.5 [Computer Graphics]: Curve, surface, solid, and object representations [source]


    Australasian emergency physicians: A learning and educational needs analysis.

    EMERGENCY MEDICINE AUSTRALASIA, Issue 2 2008
    Part Three: Participation by FACEMs in available CPD: What do they do, and do they like it?
    Abstract Objective: To determine the participation of emergency physicians (EPs) in currently available continuing professional development (CPD) opportunities, their perception of the usefulness of available CPD, and their preferred format or method of CPD desired in the future. Method: A mailed survey of Fellows of the Australasian College for Emergency Medicine with 17 Likert-type options on educational methods, and qualitative analysis grouping volunteered free-text responses. Results: The most frequent learning methods reported by EPs are on-the-job contact with other clinicians, formal ED-based teaching and reading journals, which were also perceived as useful or very useful learning methods by more than 90% of EPs. Fewer than 15% often or always participate in hospital grand rounds, high-fidelity simulation, computer programmes or commercially sponsored events. Increased exposure to high-fidelity simulation center skills training was desired by 58% of respondents, with nearly 49% of fellows also wanting more participation in international conferences and around 44% desiring more formal teaching in the ED, more formal feedback on performance, and more meetings with other hospital departments. Over 50% of EPs want less or no exposure to commercially sponsored dinners or events. Conclusion: Whilst emergency physicians currently participate in a wide variety of learning methods, the results of this survey suggest EPs most appreciate ED-based teaching and would like more contact with other departments, along with increased opportunities for simulation-based learning and attendance at international conferences. [source]


    Acquiring knowledge with limited experience

    EXPERT SYSTEMS, Issue 3 2007
    Der-Chiang Li
    Abstract: From computational learning theory, sample size in machine learning problems indeed affects learning performance. Since only a few samples can be obtained in the early stages of a system, and fewer exemplars usually lead to low learning accuracy, this research compares the classification accuracies of different machine learning methods to improve small-data-set learning. Techniques used in this paper include the mega-trend diffusion technique, a backpropagation neural network, a support vector machine, and decision trees, applied to two real medical data sets concerning cancer. The experiment shows that the mega-trend diffusion technique and backpropagation approaches are effective methods for small-data-set learning. [source]
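The small-data-set idea above can be illustrated with a deliberately simplified stand-in for mega-trend diffusion: widen the estimated domain beyond the observed min/max before generating virtual training samples. The widening rule (half a standard deviation on each side) and the data are assumptions for illustration, not the paper's formula.

```python
import random

# Hedged sketch: with few samples, the observed min/max understate the
# true domain, so widen the bounds before drawing virtual samples.
# This is a simplification, not the mega-trend diffusion formula.

def widened_bounds(samples):
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    spread = var ** 0.5
    return min(samples) - 0.5 * spread, max(samples) + 0.5 * spread

def virtual_samples(samples, k, rng):
    lo, hi = widened_bounds(samples)
    return [rng.uniform(lo, hi) for _ in range(k)]

rng = random.Random(0)
observed = [4.1, 4.7, 5.0, 5.6]   # invented small sample
lo, hi = widened_bounds(observed)
extra = virtual_samples(observed, 20, rng)
```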


    Different signaling pathways in the livers of patients with chronic hepatitis B or chronic hepatitis C

    HEPATOLOGY, Issue 5 2006
    Masao Honda
    The clinical manifestations of chronic hepatitis B (CH-B) and chronic hepatitis C (CH-C) are different. We previously reported differences in the gene expression profiles of liver tissue infected with CH-B or CH-C; however, the signaling pathways underlying each condition have yet to be clarified. Using a newly constructed cDNA microarray consisting of 9614 clones selected from 256,550 tags of hepatic serial analysis of gene expression (SAGE) libraries, we compared the gene expression profiles of liver tissue from 24 CH-B patients with those of 23 CH-C patients. Laser capture microdissection was used to isolate hepatocytes from liver lobules and infiltrating lymphoid cells from the portal area, from 16 patients, for gene expression analysis. Furthermore, the comprehensive gene network was analyzed using SAGE libraries of CH-B and CH-C. Supervised and nonsupervised learning methods revealed that gene expression correlated more strongly with the infecting virus than with other clinical parameters such as histological stage and disease activity. Pro-apoptotic and DNA repair responses were predominant in CH-B, with p53 and 14-3-3 interacting genes having an important role. In contrast, inflammatory and anti-apoptotic phenotypes were predominant in CH-C. These differences would evoke different oncogenic factors in CH-B and CH-C. In conclusion, we describe the different signaling pathways induced in the livers of patients with CH-B or CH-C. The results might be useful in guiding therapeutic strategies to prevent the development of hepatocellular carcinoma in cases of CH-B and CH-C. (HEPATOLOGY 2006;44:1122–1138.) [source]
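The supervised side of the analysis above can be sketched as a nearest-centroid classifier: assign a new expression profile to CH-B or CH-C by its Pearson correlation with each class centroid. The four-gene toy profiles are invented; the study itself used 9614 clones and several learning methods.

```python
# Hedged sketch: nearest-centroid assignment of an expression profile
# by Pearson correlation. All numbers are invented illustrations.

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def centroid(profiles):
    return [sum(col) / len(col) for col in zip(*profiles)]

ch_b = [[2.1, 0.3, 1.8, 0.2], [1.9, 0.4, 2.0, 0.1]]   # toy "pro-apoptotic-high" profiles
ch_c = [[0.4, 2.2, 0.3, 1.9], [0.5, 1.8, 0.4, 2.1]]   # toy "inflammatory-high" profiles
cb, cc = centroid(ch_b), centroid(ch_c)

def classify(profile):
    return "CH-B" if pearson(profile, cb) > pearson(profile, cc) else "CH-C"

label = classify([2.0, 0.2, 1.7, 0.3])
```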


    A missing link in the transfer problem?

    HUMAN RESOURCE MANAGEMENT, Issue 4 2010
    Examining how trainers learn about training transfer
    Abstract This study describes and reports the methods training professionals use to learn about training transfer. Specifically, this study focused on trainers' use and perceived utility of the literature (research and practitioner-based) to develop their knowledge of how to support training transfer in their organization. Consistent with extant research conducted on human resource professionals, our survey results suggest that training professionals seek knowledge mostly through informal learning (e.g., job experiences, discussions with internal and external training professionals, books, searching the Web), but they prefer to learn about training transfer in discussions with external trainers and academics. As a follow-up to the survey, our interview results indicate that trainers select learning methods based on source quality, motivation, and accessibility, but these differed based on which learning methods were chosen. Ideas to guide future human resource researchers are presented within the framework of information-seeking theory. This paper concludes by discussing practical implications for increasing trainer competencies that support training transfer in organizations. © 2010 Wiley Periodicals, Inc. [source]


    Dynamic pricing based on asymmetric multiagent reinforcement learning

    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 1 2006
    Ville Könönen
    In this article, a dynamic pricing problem is solved using asymmetric multiagent reinforcement learning. In the problem, there are two competing brokers that sell identical products to customers and compete on the basis of price. We model this dynamic pricing problem as a Markov game and solve it using two different learning methods. The first method utilizes modified gradient descent in the parameter space of the value function approximator, and the second method uses a direct gradient of the parameterized policy function. We present a brief literature survey of pricing models based on multiagent reinforcement learning, introduce the basic concepts of Markov games, and solve the problem using the proposed methods. © 2006 Wiley Periodicals, Inc. Int J Int Syst 21: 73–98, 2006. [source]
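One side of the pricing game above can be sketched with a much simpler learner: tabular, stateless Q-learning over a discrete price grid against a fixed opponent price. The demand model and all parameters are illustrative assumptions; the article itself uses gradient-based value-function and policy methods on a Markov game, not this bandit-style variant.

```python
import random

# Hedged sketch: one broker learning a price by tabular Q-learning
# against a fixed opponent. Demand model and parameters are invented.

PRICES = [1.0, 1.5, 2.0, 2.5]
OPPONENT = 2.0

def profit(price, opponent):
    # Toy demand: customers mostly prefer the cheaper broker.
    share = 1.0 if price < opponent else (0.5 if price == opponent else 0.1)
    return price * share

def learn(episodes=2000, alpha=0.1, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {p: 0.0 for p in PRICES}
    for _ in range(episodes):
        p = rng.choice(PRICES) if rng.random() < eps else max(q, key=q.get)
        r = profit(p, OPPONENT)
        q[p] += alpha * (r - q[p])   # stateless (bandit-style) update;
        # a full Markov-game treatment would also track the opponent's state
    return q

q = learn()
best = max(q, key=q.get)
```

Under this toy demand curve, undercutting to 1.5 yields the highest profit (1.5 versus 1.0 for matching the opponent), and the learned Q-values reflect that.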


    Learning cooperative linguistic fuzzy rules using the best–worst ant system algorithm

    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 4 2005
    Jorge Casillas
    Within the field of linguistic fuzzy modeling with fuzzy rule-based systems, the automatic derivation of linguistic fuzzy rules from numerical data is an important task. In the last few years, a large number of contributions based on techniques such as neural networks and genetic algorithms have been proposed to face this problem. In this article, we introduce a novel approach to the fuzzy rule learning problem with ant colony optimization (ACO) algorithms. To do so, this learning task is formulated as a combinatorial optimization problem. Our learning process is based on the COR methodology proposed in previous works, which provides a search space that allows us to obtain fuzzy models with a good interpretability–accuracy trade-off. A specific ACO-based algorithm, the Best–Worst Ant System, is used for this purpose due to the good performance shown when solving other optimization problems. We analyze the behavior of the proposed method and compare it to other learning methods and search techniques when solving two real-world applications. The obtained results confirm the good performance of our proposal in terms of interpretability, accuracy, and efficiency. © 2005 Wiley Periodicals, Inc. Int J Int Syst 20: 433–452, 2005. [source]
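The combinatorial formulation above can be sketched on a toy problem: each rule antecedent must pick one consequent label, and ants build complete assignments guided by pheromone. The two-rule problem and the plain evaporate-and-reinforce update are simplifications; the article's Best–Worst Ant System additionally penalizes the worst ant, which is omitted here.

```python
import random

# Hedged sketch: ants assemble rule-consequent assignments guided by
# pheromone; the best assignment found so far is reinforced. The toy
# problem and update rule are simplifications of the article's method.

ANTECEDENTS = ["x is LOW", "x is HIGH"]
LABELS = ["y is LOW", "y is HIGH"]
TRAIN = {"x is LOW": "y is LOW", "x is HIGH": "y is HIGH"}  # toy target

def accuracy(assignment):
    hits = sum(1 for a, lab in assignment.items() if TRAIN[a] == lab)
    return hits / len(assignment)

def run_aco(iterations=30, ants=10, rho=0.1, seed=1):
    rng = random.Random(seed)
    tau = {a: {l: 1.0 for l in LABELS} for a in ANTECEDENTS}
    best, best_fit = None, -1.0
    for _ in range(iterations):
        for _ in range(ants):
            assignment = {}
            for a in ANTECEDENTS:
                weights = [tau[a][l] for l in LABELS]
                assignment[a] = rng.choices(LABELS, weights=weights)[0]
            fit = accuracy(assignment)
            if fit > best_fit:
                best, best_fit = assignment, fit
        for a in ANTECEDENTS:           # evaporate, then reinforce the best
            for l in LABELS:
                tau[a][l] *= (1.0 - rho)
            tau[a][best[a]] += rho * best_fit
    return best, best_fit

best, best_fit = run_aco()
```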


    Flexible constraints for regularization in learning from data

    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 6 2004
    Eyke Hüllermeier
    By its very nature, inductive inference performed by machine learning methods is mainly data driven. Still, the incorporation of background knowledge, if available, can help make inductive inference more efficient and improve the quality of induced models. Fuzzy set-based modeling techniques provide a convenient tool for making expert knowledge accessible to computational methods. In this article, we exploit such techniques within the context of the regularization (penalization) framework of inductive learning. The basic idea is to express knowledge about an underlying data-generating process in terms of flexible constraints and to penalize those models violating these constraints. An optimal model is one that achieves an optimal trade-off between fitting the data and satisfying the constraints. © 2004 Wiley Periodicals, Inc. [source]
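The penalization idea above can be sketched with a crisp (non-fuzzy) constraint: fit a line by gradient descent on squared error plus a penalty that activates only when the model violates a piece of background knowledge, here that the slope should be non-negative. The data, penalty weight, and crisp constraint are illustrative assumptions; the article works with flexible, fuzzy-set-valued constraints.

```python
# Hedged sketch: squared-error loss plus a constraint penalty that is
# zero on feasible models. Data and penalty weight are invented.

DATA = [(0.0, 0.1), (1.0, 0.9), (2.0, 2.1), (3.0, 2.9)]
LAMBDA = 10.0  # strength of the constraint penalty

def loss_grad(a, b):
    # Gradient of sum (a*x + b - y)^2, plus LAMBDA * max(0, -a)^2
    ga = gb = 0.0
    for x, y in DATA:
        err = a * x + b - y
        ga += 2 * err * x
        gb += 2 * err
    if a < 0:
        ga += 2 * LAMBDA * a  # derivative of LAMBDA * a^2 on a < 0
    return ga, gb

a, b = -1.0, 0.0  # start in the infeasible region on purpose
for _ in range(2000):
    ga, gb = loss_grad(a, b)
    a -= 0.01 * ga
    b -= 0.01 * gb
```

Because the unconstrained least-squares fit already has a positive slope, the penalty is inactive at the optimum and the fit converges to the ordinary solution while never settling on an infeasible model.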


    A comparison of active set method and genetic algorithm approaches for learning weighting vectors in some aggregation operators

    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 9 2001
    David Nettleton
    In this article we compare two contrasting methods, active set method (ASM) and genetic algorithms, for learning the weights in aggregation operators, such as weighted mean (WM), ordered weighted average (OWA), and weighted ordered weighted average (WOWA). We give the formal definitions for each of the aggregation operators, explain the two learning methods, give results of processing for each of the methods and operators with simple test datasets, and contrast the approaches and results. © 2001 John Wiley & Sons, Inc. [source]
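Two of the operators compared above differ only in where the weights attach: the weighted mean weights sources, while OWA weights rank positions. A minimal sketch with arbitrary weights and inputs, leaving aside the weight-learning step that is the article's subject:

```python
# Hedged sketch: weighted mean (WM) vs ordered weighted average (OWA).
# Weights and inputs are arbitrary illustrations.

def weighted_mean(weights, values):
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * v for w, v in zip(weights, values))

def owa(weights, values):
    assert abs(sum(weights) - 1.0) < 1e-9
    ranked = sorted(values, reverse=True)  # weights apply to sorted positions
    return sum(w * v for w, v in zip(weights, ranked))

w = [0.5, 0.3, 0.2]
vals = [1.0, 7.0, 4.0]
wm = weighted_mean(w, vals)   # source order matters
o = owa(w, vals)              # only the values' ranks matter
```

Note that OWA is invariant to permutations of its inputs, while the weighted mean is not; that is exactly the difference in what the learned weights mean.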


    Modeling and predicting binding affinity of phencyclidine-like compounds using machine learning methods

    JOURNAL OF CHEMOMETRICS, Issue 1 2010
    Ozlem Erdas
    Abstract Machine learning methods have always been promising in the science and engineering fields, and the use of these methods in chemistry and drug design has advanced especially since the 1990s. In this study, molecular electrostatic potential (MEP) surfaces of phencyclidine-like (PCP-like) compounds are modeled and visualized in order to extract features that are useful in predicting binding affinities. In modeling, the Cartesian coordinates of MEP surface points are mapped onto a spherical self-organizing map (SSOM). The resulting maps are visualized using electrostatic potential (ESP) values. These values also provide features for a prediction system. Support vector machines and partial least-squares method are used for predicting binding affinities of compounds. Copyright © 2009 John Wiley & Sons, Ltd. [source]


    In silico prediction and screening of γ-secretase inhibitors by molecular descriptors and machine learning methods

    JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 6 2010
    Xue-Gang Yang
    Abstract γ-Secretase inhibitors have been explored for the prevention and treatment of Alzheimer's disease (AD). Methods for prediction and screening of γ-secretase inhibitors are highly desired for facilitating the design of novel therapeutic agents against AD, especially given the incomplete knowledge of the mechanism and three-dimensional structure of γ-secretase. We explored two machine learning methods, support vector machine (SVM) and random forest (RF), to develop models for predicting γ-secretase inhibitors of diverse structures. Quantitative analysis of the receiver operating characteristic (ROC) curve was performed to further examine and optimize the models. In particular, the Youden index (YI) was introduced into the ROC curve of RF to obtain an optimal probability threshold for prediction. The developed models were validated by an external testing set, with prediction accuracies for SVM and RF of 96.48 and 98.83% for γ-secretase inhibitors and 98.18 and 99.27% for noninhibitors, respectively. Different feature selection methods were used to extract the physicochemical features most relevant to γ-secretase inhibition. To the best of our knowledge, the RF model developed in this work is the first model with a broad applicability domain, based on which the virtual screening of γ-secretase inhibitors against the ZINC database was performed, resulting in 368 potential hit candidates. © 2009 Wiley Periodicals, Inc. J Comput Chem, 2010 [source]
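The threshold-selection step described above can be sketched directly: sweep a probability threshold over scored examples, trace the ROC operating points, and keep the threshold maximizing the Youden index J = sensitivity + specificity − 1 (equivalently TPR − FPR). The scores and labels below are invented.

```python
# Hedged sketch: ROC sweep with Youden-index threshold selection.
# Scores and labels are invented illustrations.

def roc_point(scores, labels, thr):
    tp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < thr and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 0)
    tn = sum(1 for s, y in zip(scores, labels) if s < thr and y == 0)
    tpr = tp / (tp + fn)   # sensitivity
    fpr = fp / (fp + tn)   # 1 - specificity
    return tpr, fpr

def best_threshold(scores, labels):
    best_thr, best_j = None, -1.0
    for thr in sorted(set(scores)):
        tpr, fpr = roc_point(scores, labels, thr)
        j = tpr - fpr  # Youden index: sensitivity + specificity - 1
        if j > best_j:
            best_thr, best_j = thr, j
    return best_thr, best_j

scores = [0.10, 0.35, 0.40, 0.60, 0.80, 0.90]
labels = [0,    0,    1,    0,    1,    1   ]
thr, j = best_threshold(scores, labels)
```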


    Identification of small molecule aggregators from large compound libraries by support vector machines

    JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 4 2010
    Hanbing Rao
    Abstract Small molecule aggregators non-specifically inhibit multiple unrelated proteins, rendering them therapeutically useless. They frequently appear as false hits and thus need to be eliminated in high-throughput screening campaigns. Computational methods have been explored for identifying aggregators but have not been tested in screening large compound libraries. We used 1319 aggregators and 128,325 non-aggregators to develop a support vector machine (SVM) aggregator identification model, which was tested by four methods. The first is fivefold cross-validation, which showed comparable aggregator and significantly improved non-aggregator identification rates against earlier studies. The second is the independent test of 17 aggregators discovered independently from the training aggregators, 71% of which were correctly identified. The third is retrospective screening of 13M PUBCHEM and 168K MDDR compounds, which predicted 97.9% and 98.7% of the PUBCHEM and MDDR compounds as non-aggregators. The fourth is retrospective screening of 5527 MDDR compounds similar to the known aggregators, 1.14% of which were predicted as aggregators. SVM showed slightly better overall performance against two other machine learning methods based on fivefold cross-validation studies of the same settings. Molecular features of aggregation, extracted by a feature selection method, are consistent with published profiles. SVM showed substantial capability in identifying aggregators from large libraries at low false-hit rates. © 2009 Wiley Periodicals, Inc. J Comput Chem, 2010 [source]
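The first of the four validation methods above, fivefold cross-validation, can be sketched independently of the classifier. The sketch below splits indices round-robin and plugs in a trivial majority-vote model, so the protocol rather than SVM itself is the point; the imbalanced toy labels loosely mimic the aggregator/non-aggregator ratio.

```python
# Hedged sketch: fivefold cross-validation protocol with a trivial
# majority-vote stand-in for the classifier. Labels are invented.

def k_folds(n, k=5):
    # Deal indices round-robin into k disjoint folds covering 0..n-1.
    folds = [[] for _ in range(k)]
    for i in range(n):
        folds[i % k].append(i)
    return folds

def cross_validate(labels, k=5):
    folds = k_folds(len(labels), k)
    accs = []
    for held_out in folds:
        held = set(held_out)
        train = [labels[i] for i in range(len(labels)) if i not in held]
        majority = max(set(train), key=train.count)
        correct = sum(1 for i in held_out if labels[i] == majority)
        accs.append(correct / len(held_out))
    return sum(accs) / len(accs)

labels = [0] * 80 + [1] * 20   # imbalanced, like non-aggregators vs aggregators
acc = cross_validate(labels)
```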


    Machine learning approaches for predicting compounds that interact with therapeutic and ADMET related proteins

    JOURNAL OF PHARMACEUTICAL SCIENCES, Issue 11 2007
    H. Li
    Abstract Computational methods for predicting compounds of specific pharmacodynamic and ADMET (absorption, distribution, metabolism, excretion and toxicity) property are useful for facilitating drug discovery and evaluation. Recently, machine learning methods such as neural networks and support vector machines have been explored for predicting inhibitors, antagonists, blockers, agonists, activators and substrates of proteins related to specific therapeutic and ADMET property. These methods are particularly useful for compounds of diverse structures to complement QSAR methods, and for cases of unavailable receptor 3D structure to complement structure-based methods. A number of studies have demonstrated the potential of these methods for predicting such compounds as substrates of P-glycoprotein and cytochrome P450 CYP isoenzymes, inhibitors of protein kinases and CYP isoenzymes, and agonists of serotonin receptor and estrogen receptor. This article is intended to review the strategies, current progress and underlying difficulties in using machine learning methods for predicting these protein binders and as potential virtual screening tools. Algorithms for proper representation of the structural and physicochemical properties of compounds are also evaluated. © 2007 Wiley-Liss, Inc. and the American Pharmacists Association J Pharm Sci 96: 2838–2860, 2007 [source]


    Predicting project delivery rates using the Naive Bayes classifier

    JOURNAL OF SOFTWARE MAINTENANCE AND EVOLUTION: RESEARCH AND PRACTICE, Issue 3 2002
    B. Stewart
    Abstract The importance of accurate estimation of software development effort is well recognized in software engineering. In recent years, machine learning approaches have been studied as possible alternatives to more traditional software cost estimation methods. The objective of this paper is to investigate the utility of the machine learning algorithm known as the Naive Bayes classifier for estimating software project effort. We present empirical experiments with the Benchmark 6 data set from the International Software Benchmarking Standards Group to estimate project delivery rates and compare the performance of the Naive Bayes approach to two other machine learning methods: model trees and neural networks. A project delivery rate is defined as the number of effort hours per function point. The approach described is general and can be used to analyse not only software development data but also data on software maintenance and other types of software engineering. The paper demonstrates that the Naive Bayes classifier has a potential to be used as an alternative machine learning tool for software development effort estimation. Copyright © 2002 John Wiley & Sons, Ltd. [source]
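The classifier named above can be sketched as a Gaussian naive Bayes deciding between "low" and "high" delivery-rate classes from two numeric project features. The features and numbers below are invented; the paper itself works with the ISBSG Benchmark 6 data set.

```python
import math

# Hedged sketch: Gaussian naive Bayes over two invented project
# features (e.g., size and team count), two delivery-rate classes.

def fit(rows, labels):
    model = {}
    for c in set(labels):
        cols = list(zip(*[r for r, y in zip(rows, labels) if y == c]))
        stats = []
        for col in cols:
            m = sum(col) / len(col)
            v = sum((x - m) ** 2 for x in col) / len(col) + 1e-6  # variance floor
            stats.append((m, v))
        prior = labels.count(c) / len(labels)
        model[c] = (math.log(prior), stats)
    return model

def log_gauss(x, m, v):
    return -0.5 * math.log(2 * math.pi * v) - (x - m) ** 2 / (2 * v)

def predict(model, row):
    def score(c):
        log_prior, stats = model[c]
        return log_prior + sum(log_gauss(x, m, v) for x, (m, v) in zip(row, stats))
    return max(model, key=score)

rows = [[100, 5], [120, 6], [110, 5.5], [300, 20], [320, 22], [310, 21]]
labels = ["low", "low", "low", "high", "high", "high"]
model = fit(rows, labels)
pred = predict(model, [115, 5.2])
```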


    Positional effects on citation and readership in arXiv

    JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 11 2009
    Asif-ul Haque
    arXiv.org mediates contact with the literature for entire scholarly communities, providing both archival access and daily email and web announcements of new materials. We confirm and extend a surprising correlation between article position in these initial announcements and later citation impact, due primarily to intentional "self-promotion" by authors. There is, however, also a pure "visibility" effect: the subset of articles accidentally in early positions fared measurably better in the long-term citation record. Articles in astrophysics (astro-ph) and two large subcommunities of theoretical high energy physics (hep-th and hep-ph) announced in position 1, for example, received median numbers of citations 83%, 50%, and 100% higher, respectively, than those lower down, while the subsets appearing there accidentally had 44%, 38%, and 71% visibility boosts. We also consider the positional effects on early readership. The median numbers of early full text downloads for astro-ph, hep-th, and hep-ph articles announced in position 1 were 82%, 61%, and 58% higher than for lower positions, respectively, and those appearing there accidentally had medians visibility-boosted by 53%, 44%, and 46%. Finally, we correlate a variety of readership features with long-term citations using machine learning methods, and conclude with some observations on impact metrics and the dangers of recommender mechanisms. [source]


    Computational methods in authorship attribution

    JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 1 2009
    Moshe Koppel
    Statistical authorship attribution has a long history, culminating in the use of modern machine learning classification methods. Nevertheless, most of this work suffers from the limitation of assuming a small closed set of candidate authors and essentially unlimited training text for each. Real-life authorship attribution problems, however, typically fall short of this ideal. Thus, following detailed discussion of previous work, three scenarios are considered here for which solutions to the basic attribution problem are inadequate. In the first variant, the profiling problem, there is no candidate set at all; in this case, the challenge is to provide as much demographic or psychological information as possible about the author. In the second variant, the needle-in-a-haystack problem, there are many thousands of candidates for each of whom we might have a very limited writing sample. In the third variant, the verification problem, there is no closed candidate set but there is one suspect; in this case, the challenge is to determine if the suspect is or is not the author. For each variant, it is shown how machine learning methods can be adapted to handle the special challenges of that variant. [source]
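The closed-set baseline that the three variants above depart from can be sketched with function-word profiles: represent each candidate author by relative frequencies of a few function words and attribute a disputed text to the nearest profile by cosine similarity. The word list and the tiny "corpora" below are invented.

```python
# Hedged sketch: closed-set attribution by function-word frequency
# profiles and cosine similarity. Word list and texts are invented.

FUNCTION_WORDS = ["the", "of", "and", "to", "upon", "whilst"]

def profile(text):
    tokens = text.lower().split()
    total = len(tokens)
    return [tokens.count(w) / total for w in FUNCTION_WORDS]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def attribute(candidates, disputed):
    p = profile(disputed)
    return max(candidates, key=lambda name: cosine(profile(candidates[name]), p))

candidates = {
    "A": "upon the hill and upon the moor the riders came upon the town",
    "B": "whilst we spoke of the plan and whilst we walked to the gate",
}
author = attribute(candidates, "they came upon the bridge and upon the road")
```

The needle-in-a-haystack and verification variants discussed in the article are precisely the settings where this nearest-profile logic stops being adequate on its own.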


    Advancing the diagnosis and treatment of hepatocellular carcinoma

    LIVER TRANSPLANTATION, Issue 4 2005
    J. Wallis Marsh MD
    We analyzed global gene expression patterns of 91 human hepatocellular carcinomas (HCCs) to define the molecular characteristics of the tumors and to test the prognostic value of the expression profiles. Unsupervised classification methods revealed two distinctive subclasses of HCC that are highly associated with patient survival. This association was validated via 5 independent supervised learning methods. We also identified the genes most strongly associated with survival by using the Cox proportional hazards survival analysis. This approach identified a limited number of genes that accurately predicted the length of survival and provides new molecular insight into the pathogenesis of HCC. Tumors from the low survival subclass have strong cell proliferation and antiapoptosis gene expression signatures. In addition, the low survival subclass displayed higher expression of genes involved in ubiquitination and histone modification, suggesting an etiological involvement of these processes in accelerating the progression of HCC. In conclusion, the biological differences identified in the HCC subclasses should provide an attractive source for the development of therapeutic targets (e.g., HIF1a) for selective treatment of HCC patients. Supplementary material for this article can be found on the HEPATOLOGY Web site (http://interscience.wiley.com/jpages/0270-9139/suppmat/index.html) Copyright 2004 American Association for the Study of Liver Diseases. Hepatology. 2004 Sep;40(3):667–76. [source]


    Setting up a clinical skills learning facility

    MEDICAL EDUCATION, Issue 2003
    P Bradley
    Objective: This paper outlines the considerations to be made when establishing a clinical skills learning facility. Considerations: Establishing a clinical skills learning facility is a complex project with many possible options to be considered. A number of professional groups, undergraduate or postgraduate, may be users. Their collaboration can have benefits for funding, uses and promotion of interprofessional education. Best evidence and educational theory should underpin teaching and learning. The physical environment should be flexible to allow a range of clinical settings to be simulated and to facilitate a range of teaching and learning methods, supported by computing and audio-visual resources. Facilities should be available to encourage self-directed learning. The skills programme should be designed to support the intended learning outcomes and be integrated within the overall curriculum, including within the assessment strategy. Teaching staff may be configured in a number of ways and may be drawn from a variety of backgrounds. Appropriate staff development will be required to ensure consistency and quality of teaching, with monitoring and evaluation to assure appropriate standards. Patients can also play a role, not only as passive teaching material, but also as teachers and assessors. Clinical, diagnostic and therapeutic equipment will be required, as will models and manikins. The latter will vary from simple part-task trainers to highly sophisticated human patient simulators. Care must be taken when choosing equipment to ensure it matches specified requirements for teaching and learning. Conclusion: Detailed planning is required across a number of domains when setting up a clinical skills learning facility. [source]


    Selection criteria for drug-like compounds

    MEDICINAL RESEARCH REVIEWS, Issue 3 2003
    Ingo Muegge
    Abstract The fast identification of quality lead compounds in the pharmaceutical industry through a combination of high throughput synthesis and screening has become more challenging in recent years. Although the number of available compounds for high throughput screening (HTS) has dramatically increased, large-scale random combinatorial libraries have contributed proportionally less to identify novel leads for drug discovery projects. Therefore, the concept of 'drug-likeness' of compound selections has become a focus in recent years. In parallel, the low success rate of converting lead compounds into drugs, often due to unfavorable pharmacokinetic parameters, has sparked a renewed interest in understanding more clearly what makes a compound drug-like. Various approaches have been devised to address the drug-likeness of molecules, employing retrospective analyses of known drug collections as well as attempting to capture 'chemical wisdom' in algorithms. For example, simple property counting schemes, machine learning methods, regression models, and clustering methods have been employed to distinguish between drugs and non-drugs. Here we review computational techniques to address the drug-likeness of compound selections and offer an outlook for the further development of the field. © 2003 Wiley Periodicals, Inc. Med Res Rev, 23, No. 3, 302–321, 2003 [source]
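The "simple property counting schemes" mentioned above can be sketched in the spirit of Lipinski's rule of five: flag a compound as less drug-like when it breaks more than one of four property limits. The compound property values below are invented illustrations.

```python
# Hedged sketch: rule-of-five-style property counting.
# The two example compounds are invented.

LIMITS = {
    "mol_weight": 500.0,       # daltons
    "logp": 5.0,               # octanol-water partition coefficient
    "h_bond_donors": 5,
    "h_bond_acceptors": 10,
}

def violations(compound):
    return sum(1 for prop, cap in LIMITS.items() if compound[prop] > cap)

def passes_rule_of_five(compound):
    # Lipinski's criterion tolerates at most one violation.
    return violations(compound) <= 1

lead = {"mol_weight": 342.4, "logp": 2.1, "h_bond_donors": 2, "h_bond_acceptors": 6}
greasy = {"mol_weight": 612.8, "logp": 6.3, "h_bond_donors": 1, "h_bond_acceptors": 12}
```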


    Status of HTS Data Mining Approaches

    MOLECULAR INFORMATICS, Issue 4 2004
    Alexander Böcker
    Abstract High-throughput screening of large compound collections results in large sets of data. This review gives an overview of the most frequently employed computational techniques for the analysis of such data and the establishment of first QSAR models. Various methods for descriptor selection, classification and data mining are discussed. Recent trends include the application of kernel-based machine learning methods for the design of focused libraries and compilation of target-family biased compound collections. [source]


    Evaluating the Ability of Tree-Based Methods and Logistic Regression for the Detection of SNP-SNP Interaction

    ANNALS OF HUMAN GENETICS, Issue 3 2009
    M. García-Magariños
    Summary Most common human diseases are likely to have complex etiologies. Methods of analysis that allow for the phenomenon of epistasis are of growing interest in the genetic dissection of complex diseases. By allowing for epistatic interactions between potential disease loci, we may succeed in identifying genetic variants that might otherwise have remained undetected. Here we aimed to analyze the ability of logistic regression (LR) and two tree-based supervised learning methods, classification and regression trees (CART) and random forest (RF), to detect epistasis. Multifactor-dimensionality reduction (MDR) was also used for comparison. Our approach involves first the simulation of datasets of autosomal biallelic unphased and unlinked single nucleotide polymorphisms (SNPs), each containing a two-locus interaction (causal SNPs) and 98 'noise' SNPs. We modelled interactions under different scenarios of sample size, missing data, minor allele frequency (MAF) and several penetrance models: three involving both (indistinguishable) marginal effects and interaction, and two simulating pure interaction effects. In total, we simulated 99 different scenarios. Although CART, RF, and LR yield similar results in terms of detection of true association, CART and RF perform better than LR with respect to classification error. MAF, penetrance model, and sample size are greater determining factors than the percentage of missing data in the ability of the different techniques to detect true association. In pure interaction models, only RF detects association. In conclusion, tree-based methods and LR are important statistical tools for the detection of unknown interactions among true risk-associated SNPs with marginal effects and in the presence of a significant number of noise SNPs. In pure interaction models, RF performs reasonably well in the presence of large sample sizes and low percentages of missing data. However, when the study design is suboptimal (unfavourable for detecting interaction in terms of, e.g., sample size and MAF), there is a high chance of detecting false, spurious associations. [source]
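The simulation step this abstract describes — unlinked biallelic SNPs coded as minor-allele counts, two causal loci, and a block of noise SNPs — can be sketched as below. The XOR-style pure-interaction penetrance model and all numeric parameters here are illustrative assumptions, not the models or values used in the paper.

```python
import random

# Simplified sketch of a simulated case-control SNP dataset: genotypes are
# minor-allele counts (0/1/2) under Hardy-Weinberg proportions, with two
# causal loci carrying a pure interaction effect plus "noise" SNPs.

def simulate_genotype(maf, rng):
    """Genotype as the number of minor alleles drawn from two chromosomes."""
    return sum(rng.random() < maf for _ in range(2))

def simulate_dataset(n_samples, n_noise_snps, maf=0.3, seed=0):
    rng = random.Random(seed)
    data, labels = [], []
    for _ in range(n_samples):
        causal = [simulate_genotype(maf, rng) for _ in range(2)]
        noise = [simulate_genotype(maf, rng) for _ in range(n_noise_snps)]
        # Pure interaction: elevated risk only when exactly one causal locus
        # carries a minor allele, so neither locus has a marginal effect alone.
        at_risk = (causal[0] > 0) != (causal[1] > 0)
        p_disease = 0.8 if at_risk else 0.1
        labels.append(int(rng.random() < p_disease))
        data.append(causal + noise)
    return data, labels
```

Datasets generated this way can then be fed to LR, CART, or RF implementations to compare how often each method ranks the two causal SNPs above the noise.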


    A NEW SURGICAL EDUCATION AND TRAINING PROGRAMME

    ANZ JOURNAL OF SURGERY, Issue 7 2007
    John P. Collins
    Educating and training tomorrow's surgeons has evolved to become a sophisticated and expensive exercise involving a wide range of learning methods, opportunities and stakeholders. Several factors influence this process, prompting those who provide such programmes to identify these important considerations and to develop and implement appropriate responses. The Royal Australasian College of Surgeons embarked on this course of action in 2005, the outcome of which is the new Surgical Education and Training programme, with the first intake to be selected in 2007 and to commence training in 2008. The new programme is competency based and shorter than any designed previously. Its curriculum and assessment processes implicitly recognize the nine roles, and their underpinning competencies, identified as essential for a surgeon. It is an evolution of the previous programme, retaining what has been found satisfactory. There will be one episode of selection directly into the candidate's specialty of choice, and those accepted will progress in an integrated and seamless fashion, provided they meet the clinical and educational requirements of each year. The curriculum and assessment in the basic sciences include both generic and specialty-aligned components from the commencement of training in each of the nine surgical specialties. Born of necessity and developed through extensive research, discussion and consensus, the implementation of this programme will involve many challenges, particularly during the transition period. Through cooperation, commitment and partnerships, a more efficient and better outcome will be achieved for trainees, their trainers and their patients. [source]


    Does adaptive training work?

    APPLIED COGNITIVE PSYCHOLOGY, Issue 2 2009
    Claudia Metzler-Baddeley
    People intuitively alter the allocation of study time between items of varying difficulty, and such adaptive learning methods are widely used in education and in commercially available memory training programs. We investigated the effectiveness of a computer-based adaptive learning system that utilises spacing and repetition effects by presenting difficult items with short gaps to establish fast learning, and easy items with long intervals to optimise long-term retention. The immediate and delayed effects of adaptive training on cued recall were investigated relative to a control condition of non-adaptive, random spacing. Adaptive training produced significantly higher immediate and delayed recall rates than random spacing. These results encourage the use of adaptive training in education and rehabilitation. Copyright © 2008 John Wiley & Sons, Ltd. [source]
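The scheduling principle this study tests — short gaps for difficult items to establish fast learning, expanding intervals for easy items to favour long-term retention — can be sketched as a simple expanding-interval scheduler. The expansion factor and reset rule below are illustrative assumptions, not the algorithm of the system evaluated here.

```python
# Minimal sketch of an adaptive spacing scheduler: items answered incorrectly
# ("difficult") come back after a short gap, while items answered correctly
# ("easy") have their presentation interval expanded.

def next_interval(current_interval, correct, expansion=2.0, reset=1):
    """Return the number of trials until the item is shown again."""
    if correct:
        return max(reset, int(current_interval * expansion))
    return reset  # difficult item: re-present almost immediately

def schedule(history, base_interval=1):
    """Fold a sequence of correct/incorrect outcomes into a current interval."""
    interval = base_interval
    for correct in history:
        interval = next_interval(interval, correct)
    return interval
```

Under this rule, an item recalled correctly three times in a row moves from a one-trial gap to an eight-trial gap, whereas a single error drops it straight back to the shortest interval.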


    Competency Testing Using a Novel Eye Tracking Device

    ACADEMIC EMERGENCY MEDICINE, Issue 2009
    Paul Wetzel
    Assessment and evaluation metrics currently rely upon interpretation of observed performance or end points by an 'expert' observer. Such metrics are subject to bias since they rely upon the traditional medical education model of 'see one, do one, teach one'. The Institute of Medicine's Report and the Flexner Report have demanded improvements in education metrics as a means to improve patient safety. Additionally, advancements in adult learning methods are challenging traditional medical education measures. Educators are faced with the daunting task of developing rubrics for competency testing that are currently limited by judgment and interpretation bias. Medical education is demanding learner-centered metrics to reflect quantitative and qualitative measures to document competency. Using a novel eye tracking system, educators now have the ability to know how their learners think. The system can track the focus of the learner during task performance. The eye tracking system demonstrates a learner-centered measuring tool capable of identifying deficiencies in task performance. The device achieves the goal of timely and direct feedback of performance metrics based on the learner's perspective. Employment of the eye tracking system in simulation education may identify mastery and retention deficits before compliance and quality improvement issues develop into patient safety concerns. [source]