Language Generation (language + generation)

Selected Abstracts


Choosing Rhetorical Structures To Plan Instructional Texts

COMPUTATIONAL INTELLIGENCE, Issue 3 2000
Leila Kosseim
This paper discusses a fundamental problem in natural language generation: how to organize the content of a text in a coherent and natural way. In this research, we set out to determine the semantic content and the rhetorical structure of texts and to develop heuristics to perform this process automatically within a text generation framework. The study was performed on a specific language and textual genre: French instructional texts. From a corpus analysis of these texts, we determined nine senses typically communicated in instructional texts and seven rhetorical relations used to present these senses. From this analysis, we then developed a set of presentation heuristics that determine how the senses to be communicated should be organized rhetorically in order to create a coherent and natural text. The heuristics are based on five types of constraints: conceptual, semantic, rhetorical, pragmatic, and intentional. To verify the heuristics, we developed the SPIN natural language generation system, which performs all steps of text generation but focuses on the determination of the content and the rhetorical structure of the text. [source]
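
To make the idea of presentation heuristics concrete, the sketch below (Python) encodes a few toy rules that link pairs of senses with rhetorical relations. It is a minimal illustration only, not the SPIN system; the sense and relation labels are invented and do not reproduce the nine senses and seven relations identified in the paper.

    # Hypothetical sketch of heuristic rhetorical planning; not the SPIN system.
    from dataclasses import dataclass

    @dataclass
    class Sense:
        kind: str      # e.g. "goal", "operation", "condition", "warning"
        content: str

    # Each heuristic pairs a constraint on two adjacent senses with the
    # rhetorical relation used to present them. Labels are illustrative.
    HEURISTICS = [
        (lambda a, b: a.kind == "goal" and b.kind == "operation", "purpose"),
        (lambda a, b: a.kind == "condition" and b.kind == "operation", "condition"),
        (lambda a, b: a.kind == "operation" and b.kind == "warning", "concession"),
        (lambda a, b: a.kind == "operation" and b.kind == "operation", "sequence"),
    ]

    def plan(senses):
        """Link adjacent senses with the first applicable rhetorical relation."""
        steps = []
        for a, b in zip(senses, senses[1:]):
            relation = next((rel for test, rel in HEURISTICS if test(a, b)), "joint")
            steps.append((relation, a.content, b.content))
        return steps

    for relation, left, right in plan([Sense("goal", "to install the filter"),
                                       Sense("operation", "unscrew the cover"),
                                       Sense("warning", "do not force the thread")]):
        print(f"{relation}: [{left}] -> [{right}]")

A real planner of the kind described would build a full rhetorical tree and check conceptual, semantic, rhetorical, pragmatic, and intentional constraints, rather than only linking adjacent pairs.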


When a graph is poorer than 100 words: A comparison of computerised natural language generation, human generated descriptions and graphical displays in neonatal intensive care

APPLIED COGNITIVE PSYCHOLOGY, Issue 1 2010
Marian van der Meulen
Volunteer staff from a Neonatal Intensive Care Unit (NICU) were presented with sets of anonymised physiological data recorded over approximately 45-minute periods from former patients. Staff were asked to select medical/nursing actions appropriate for each of the patients whose data were displayed. Data were shown in one of three conditions: (a) as multiple line graphs similar to those commonly shown on the ward, or as textual descriptions generated by (b) expert medical/nursing staff or (c) computerised natural language generation (NLG). An overall advantage was found for the human-generated text, but NLG resulted in decisions that were at least as good as those for the graphical displays with which staff were familiar. It is suggested that NLG might offer a viable automated approach to removing noise and artefacts in real, complex and dynamic data sets, thereby reducing visual complexity and mental workload, and enhancing decision-making, particularly for inexperienced staff. Copyright © 2008 John Wiley & Sons, Ltd. [source]
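
The final point suggests a data-to-text pipeline; the sketch below illustrates it in miniature: filter the raw signal to suppress transient artefacts, then verbalise the salient trend. Everything here (the median filter, the thresholds, the wording) is an invented assumption, not the system evaluated in the study.

    # Illustrative data-to-text sketch; not the system evaluated in the study.
    from statistics import median

    def smooth(values, window=5):
        """Median filter: suppresses transient probe artefacts in the signal."""
        half = window // 2
        return [median(values[max(0, i - half):i + half + 1])
                for i in range(len(values))]

    def describe_heart_rate(samples_bpm):
        clean = smooth(samples_bpm)
        start, end = clean[0], clean[-1]
        if end - start > 15:
            trend = "rose markedly"
        elif start - end > 15:
            trend = "fell markedly"
        else:
            trend = "remained stable"
        return (f"Over the period, heart rate {trend} from about "
                f"{start:.0f} to {end:.0f} bpm (peak {max(clean):.0f} bpm).")

    # The 250 reading is a one-sample probe artefact removed by the filter.
    print(describe_heart_rate([152, 154, 155, 250, 158, 163, 170, 176, 181, 185]))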


Starting with complex primitives pays off: complicate locally, simplify globally

COGNITIVE SCIENCE - A MULTIDISCIPLINARY JOURNAL, Issue 5 2004
Aravind K. Joshi
In setting up a formal system to specify a grammar formalism, the conventional (mathematical) wisdom is to start with primitives (basic primitive structures) as simple as possible, and then introduce various operations for constructing more complex structures. An alternate approach is to start with complex (more complicated) primitives, which directly capture some crucial linguistic properties, and then introduce some general operations for composing these complex structures. These two approaches provide different domains of locality, i.e., domains over which various types of linguistic dependencies can be specified. The latter approach, characterized as complicate locally, simplify globally (CLSG), makes non-local dependencies local, i.e., they arise in the basic primitive structures to start with. The CLSG approach has led to some new insights into syntactic description, semantic composition, language generation, statistical processing, and psycholinguistic phenomena, all with possible relevance to the cognitive architecture of language. In this paper, we describe these results in an introductory manner, making use of the framework of lexicalized tree-adjoining grammar (LTAG), a key example of the CLSG approach, and thereby describe the interplay between formal analysis on the one hand and linguistic and processing issues on the other. [source]
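
A minimal sketch of the CLSG idea follows, under an assumed toy encoding of trees as nested lists: each lexical anchor brings an elementary tree that already contains substitution sites for all of its arguments, so the verb's dependencies on both its subject and its object are stated locally within a single primitive. This is an informal illustration, not a faithful LTAG implementation; adjoining, feature structures, and the full derivation machinery are omitted.

    # Toy illustration of LTAG's extended domain of locality; informal sketch only.
    # A tree is a nested list [label, child, ...]; a string ending in "!" marks
    # a substitution site.

    LEXICON = {
        # "likes" states BOTH its subject and object dependencies locally.
        "likes": ["S", "NP!", ["VP", ["V", "likes"], "NP!"]],
        "Harry": ["NP", "Harry"],
        "peanuts": ["NP", "peanuts"],
    }

    def substitute(tree, subtree):
        """Replace the leftmost matching substitution site with an initial tree."""
        for i, child in enumerate(tree):
            if child == subtree[0] + "!":            # site "NP!" matches root "NP"
                return tree[:i] + [subtree] + tree[i + 1:]
            if isinstance(child, list):
                new = substitute(child, subtree)
                if new is not child:                 # a site was filled below
                    return tree[:i] + [new] + tree[i + 1:]
        return tree                                  # no site found; unchanged

    derived = substitute(substitute(LEXICON["likes"], LEXICON["Harry"]),
                         LEXICON["peanuts"])
    print(derived)
    # ['S', ['NP', 'Harry'], ['VP', ['V', 'likes'], ['NP', 'peanuts']]]

Because the subject slot lives in the same elementary tree as the verb, a dependency between them (agreement, selection) can be stated within one primitive rather than assembled across the derivation, which is the sense in which local complication buys global simplicity.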