Inductive Inference
Selected Abstracts

Inductive Inference by Using Information Compression
COMPUTATIONAL INTELLIGENCE, Issue 2 2003
Ben Choi

Inductive inference is of central importance to all scientific inquiries. Automating the process of inductive inference is the major concern of machine learning researchers. This article proposes inductive inference techniques to address three inductive problems: (1) how to automatically construct a general description, a model, or a theory to describe a sequence of observations or experimental data; (2) how to modify an existing model to account for new observations; and (3) how to handle the situation where the new observations are not consistent with the existing models. The techniques proposed in this article implement the inductive principle known as the minimum description length principle and relate to Kolmogorov complexity and Occam's razor. They employ finite state machines as models to describe sequences of observations and measure descriptive complexity by counting the number of states. They can be used to draw inferences from sequences of observations in which one observation may depend on previous observations. Thus, they can be applied to time series prediction problems and to one-to-one mapping problems. They are implemented to form an automated inductive machine. [source]

Inductive Inference: An Axiomatic Approach
ECONOMETRICA, Issue 1 2003
Itzhak Gilboa

A predictor is asked to rank eventualities according to their plausibility, based on past cases. We assume that she can form a ranking given any memory that consists of finitely many past cases. Mild consistency requirements on these rankings imply that they have a numerical representation via a matrix assigning numbers to eventuality-case pairs, as follows. Given a memory, each eventuality is ranked according to the sum of the numbers in its row, over the cases in memory.
The number attached to an eventuality-case pair can be interpreted as the degree of support that the past case lends to the plausibility of the eventuality. Special instances of this result may be viewed as axiomatizing kernel methods for estimation of densities and for classification problems. Interpreting the same result for rankings of theories or hypotheses, rather than of specific eventualities, it is shown that one may ascribe to the predictor subjective conditional probabilities of cases given theories, such that her rankings of theories agree with rankings by the likelihood functions. [source]
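The ranking rule in the Gilboa abstract above (score each eventuality by summing its row of support numbers over the cases currently in memory, repetitions counted) can be sketched in a few lines of Python. The eventualities, cases, and support values below are hypothetical illustrations, not data or notation from the paper.

```python
# Sketch of the ranking rule: plausibility of an eventuality equals the
# sum, over cases in memory, of the support number attached to that
# eventuality-case pair. All names and values here are made up.

from collections import Counter

# support[eventuality][case]: degree of support the case lends to the eventuality
support = {
    "rain":    {"dark_clouds": 2.0, "clear_sky": -1.0, "wet_ground": 1.5},
    "no_rain": {"dark_clouds": -0.5, "clear_sky": 2.0, "wet_ground": 0.0},
}

def rank_eventualities(memory, support):
    """Rank eventualities by summed support over (possibly repeated) past cases."""
    counts = Counter(memory)  # memory is a multiset of past cases
    scores = {
        e: sum(row.get(c, 0.0) * n for c, n in counts.items())
        for e, row in support.items()
    }
    return sorted(scores, key=scores.get, reverse=True)  # most plausible first

memory = ["dark_clouds", "dark_clouds", "wet_ground"]
print(rank_eventualities(memory, support))
```

Because the score is additive over cases, enlarging the memory never requires recomputing past contributions, which is one way to read the abstract's "sum of the numbers in its row" representation.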
Flexible constraints for regularization in learning from data
INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 6 2004
Eyke Hüllermeier

By its very nature, inductive inference performed by machine learning methods is mainly data driven. Still, the incorporation of background knowledge, if available, can help to make inductive inference more efficient and to improve the quality of induced models. Fuzzy set-based modeling techniques provide a convenient tool for making expert knowledge accessible to computational methods. In this article, we exploit such techniques within the context of the regularization (penalization) framework of inductive learning. The basic idea is to express knowledge about an underlying data-generating process in terms of flexible constraints and to penalize those models violating these constraints. An optimal model is one that achieves an optimal trade-off between fitting the data and satisfying the constraints. © 2004 Wiley Periodicals, Inc. [source]

Rejecting "the given" in systematics
CLADISTICS, Issue 4 2006
Maureen Kearney

How morphology and systematics come together through morphological analysis, homology hypotheses, and phylogenetic analysis is a topic of continuing debate. Some contemporary approaches reject biological evaluation of morphological characters and fall back on an atheoretical and putatively objective (but, in fact, phenetic) approach that defers to the test of congruence for homology assessment. We note persistent trends toward an uncritical empiricism (where evidence is believed to be immediately "given" in putatively theory-free observation) and instrumentalism (where hypotheses of primary homology become mere instruments with little or no empirical foundation for choosing among competing phylogenetic hypotheses).
We suggest that this situation is partly a consequence of the fact that the test of congruence and the related concept of total evidence have been inappropriately tied to a Popperian philosophy in modern systematics. Total evidence is a classical principle of inductive inference and does not imply a deductive test of homology. The test of congruence by itself is based philosophically on a coherence theory of truth (coherentism in epistemology), which is unconcerned with empirical foundation. We therefore argue that coherence of character statements (congruence of characters) is a necessary, but not a sufficient, condition to support or refute hypotheses of homology or phylogenetic relationship. There should be at least some causal grounding for homology hypotheses beyond mere congruence. Such causal grounding may be achieved, for example, through empirical investigations of comparative anatomy, developmental biology, functional morphology, and secondary structure. © The Willi Hennig Society 2006. [source]

Is Induction Epistemologically Prior to Deduction?
RATIO, Issue 1 2004
George Couvalis

Most philosophers hold that the use of our deductive powers confers an especially strong warrant on some of our mathematical and logical beliefs. By contrast, many of the same philosophers hold that it is a matter of serious debate whether any inductive inferences are cogent. That is, they hold that we might well have no warrant for inductively licensed beliefs, such as generalizations. I argue that we cannot know that we know logical and mathematical truths unless we use induction. Our confidence in our logical and mathematical powers is not justified if we are inductive sceptics. This means that inductive scepticism leads to deductive scepticism. I conclude that we should either be philosophical sceptics about our knowledge of deduction and induction, or accept that some of our inductive inferences are cogent. [source]
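The regularization scheme in the Hüllermeier abstract earlier on this page, where models violating soft background-knowledge constraints are penalized and the selected model optimally trades off data fit against constraint satisfaction, can be sketched as follows. The data, the model class of lines y = a*x + b, the "slope should be nonnegative" constraint, and the penalty weight are all hypothetical choices for illustration, not the paper's fuzzy-set formulation.

```python
# Sketch: pick the model minimizing  data_error + lam * constraint_penalty.
# A soft constraint contributes zero penalty when satisfied and grows
# smoothly with the degree of violation. All specifics here are invented.

def data_error(model, data):
    """Mean squared error of the line y = a*x + b on the data."""
    a, b = model
    return sum((a * x + b - y) ** 2 for x, y in data) / len(data)

def constraint_penalty(model):
    """Soft background knowledge: the slope should be nonnegative."""
    a, _ = model
    return max(0.0, -a) ** 2  # zero if satisfied, quadratic in the violation

def best_model(candidates, data, lam):
    """Optimal trade-off between fitting the data and satisfying the constraint."""
    return min(candidates,
               key=lambda m: data_error(m, data) + lam * constraint_penalty(m))

data = [(0, 0.1), (1, 0.9), (2, 2.1), (3, 2.9)]
candidates = [(a / 10, b / 10) for a in range(-20, 21) for b in range(-10, 11)]
print(best_model(candidates, data, lam=5.0))
```

With data that already respects the constraint, the penalty term is inactive and the choice reduces to ordinary least-squares fit over the candidate grid; the penalty only steers the selection when fit and background knowledge conflict.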
An Apple is More Than Just a Fruit: Cross-Classification in Children's Concepts
CHILD DEVELOPMENT, Issue 6 2003
Simone P. Nguyen

This research explored children's use of multiple forms of conceptual organization. Experiments 1 and 2 examined script (e.g., breakfast foods), taxonomic (e.g., fruits), and evaluative (e.g., junk foods) categories. The results showed that 4- and 7-year-olds categorized foods into all three categories, and 3-year-olds used both taxonomic and script categories. Experiment 3 found that 4- and 7-year-olds can cross-classify items, that is, classify a single food into both taxonomic and script categories. Experiments 4 and 5 showed that 7-year-olds, and to some degree 4-year-olds, can selectively use categories to make inductive inferences about foods. The results reveal that children do not rely solely on one form of categorization but are flexible in the types of categories they form and use. [source]