Semantic Analysis (semantic + analysis)
Selected Abstracts

Visualizing polysemy using LSA and the predication algorithm
JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 8 2010
Guillermo Jorge-Botana

Context is a determining factor in language and plays a decisive role in polysemic words. Several psycholinguistically motivated algorithms have been proposed to emulate human management of context, under the assumption that the value of a word is evanescent and takes on meaning only in interaction with other structures. The predication algorithm (Kintsch, 2001), for example, uses a vector representation of the words produced by LSA (Latent Semantic Analysis) to dynamically simulate the comprehension of predications and even of predicative metaphors. The objective of this study was to predict some unwanted effects that can be present in vector-space models when extracting the different meanings of a polysemic word (predominant meaning inundation, lack of precision, and low-level definition), and to propose ideas based on the predication algorithm for avoiding them. Our first step was to visualize these unwanted phenomena and also the effect of the proposed solutions. We used different methods to extract the meanings of a polysemic word (without context, vector sum, and the predication algorithm). Our second step was to conduct an analysis of variance to compare these methods and measure the impact of the potential solutions. The results support the idea that a human-based computational algorithm like the predication algorithm can take into account features that ensure more accurate representations of the structures we seek to extract. Theoretical assumptions and their repercussions are discussed.
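The three extraction strategies compared in the abstract above (no context, vector sum, and the predication algorithm) can be illustrated with a short sketch. The snippet below is a minimal, simplified illustration, not the authors' implementation: it assumes a precomputed dictionary `vectors` mapping words to LSA vectors, and the neighbor counts `n`, `m`, and `k` are arbitrary placeholders.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two LSA vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def neighbors(vec, vectors, k=10):
    # k nearest terms to `vec` in the LSA space, by cosine similarity.
    scored = [(w, cosine(vec, v)) for w, v in vectors.items()]
    return sorted(scored, key=lambda x: -x[1])[:k]

def no_context(word, vectors, k=10):
    # Neighbors of the isolated polysemic word (tends to show the predominant sense).
    return neighbors(vectors[word], vectors, k)

def vector_sum(word, context, vectors, k=10):
    # Add the context word's vector to the target's vector, then look up neighbors.
    return neighbors(vectors[word] + vectors[context], vectors, k)

def predication(predicate, argument, vectors, n=20, m=5, k=10):
    # Simplified Kintsch-style predication: take the n nearest neighbors of the
    # predicate, keep the m of them most related to the argument, and add their
    # vectors to predicate + argument before looking up neighbors.
    cands = [w for w, _ in neighbors(vectors[predicate], vectors, n) if w != predicate]
    cands = sorted(cands, key=lambda w: -cosine(vectors[w], vectors[argument]))[:m]
    combined = vectors[predicate] + vectors[argument] + sum(vectors[w] for w in cands)
    return neighbors(combined, vectors, k)
```

The key difference from a plain vector sum is the intermediate filtering step: only those neighbors of the predicate that are also related to the argument contribute to the combined vector, which is what lets the context select the intended sense.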
Basis of metamemory judgments for text with multiple-choice, essay and recall tests
APPLIED COGNITIVE PSYCHOLOGY, Issue 2 2009
Ruth H. Maki

Accuracy of metamemory for text was compared for multiple-choice, essay and recall tests. Essay and recall tests were scored with Latent Semantic Analysis (LSA), number of correct idea units, and number of word matches. Each measure was correlated with college students' predictions and posttest confidence judgments across texts to determine metamemory accuracy. Metamemory accuracy varied for different types of tests, with multiple-choice tests generally producing greater accuracy than essay tests. However, metamemory accuracy for essay and recall tests depended on the measure used to score them. Number of correct idea units produced the highest metamemory accuracy, word matches produced an intermediate level, and LSA produced the lowest accuracy. Students used the quantity of output in their judgments, so performance measures that related most strongly to quantity matched judgments better than measures based on answer quality. The results are compatible with an accessibility account of judgments about performance on text. Copyright © 2008 John Wiley & Sons, Ltd.

Semantic Retrieval in DNA-Based Memories with Gibbs Energy Models
BIOTECHNOLOGY PROGRESS, Issue 1 2006
Andrew Neel

At least three types of associative memories based on DNA affinity have been proposed. Previously, we quantified the quality of retrieval of genomic and abiotic information in simulation by comparison to state-of-the-art symbolic methods such as LSA (Latent Semantic Analysis). Their performance is poor when the evaluation criterion for DNA affinity is a simple approximation of the Gibbs energy that governs duplex formation during retrieval. Here, we use a more realistic approximation of the Gibbs energy to improve semantic retrievals in DNA memories. Their performance is much closer to that of LSA, according to human expert ratings. With more realistic approximations of DNA affinity, performance is expected to improve for other, more adaptive associative memories with compaction in silico, and even more so with actual DNA molecules in vitro.

On web communities mining and recommendation
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 5 2009
Yanchun Zhang

Because of the lack of a uniform schema for web documents and the sheer amount and dynamics of web data, both the effectiveness and the efficiency of information management and retrieval of web data are often unsatisfactory when using conventional data management and searching techniques. To address this issue, we have adopted web mining and web community analysis approaches. On the basis of the analysis of web document contents, hyperlink analysis, user access logs and semantic analysis, we have developed various approaches and algorithms to construct and analyze web communities, and to make recommendations. This paper will introduce and discuss several approaches to web community mining and recommendation. Copyright © 2009 John Wiley & Sons, Ltd.

Unified linear subspace approach to semantic analysis
JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 1 2010
Dandan Li

The Basic Vector Space Model (BVSM) is well known in information retrieval. Unfortunately, its retrieval effectiveness is limited because it is based on literal term matching. The Generalized Vector Space Model (GVSM) and Latent Semantic Indexing (LSI) are two prominent semantic retrieval methods, both of which assume there is some underlying latent semantic structure in a dataset that can be used to improve retrieval performance. However, while this structure may be derived from both the term space and the document space, GVSM exploits only the former and LSI only the latter. In this article, the latent semantic structure of a dataset is examined from a dual perspective; namely, we consider the term space and the document space simultaneously. This new viewpoint has a natural connection to the notion of kernels. Specifically, a unified kernel function can be derived for a class of vector space models. The dual perspective provides a deeper understanding of the semantic space and makes transparent the geometrical meaning of the unified kernel function. New semantic analysis methods based on the unified kernel function are developed, which combine the advantages of LSI and GVSM. We also prove that the new methods are stable: even when the selected rank of the truncated Singular Value Decomposition (SVD) is far from the optimum, retrieval performance is not degraded significantly. Experiments performed on standard test collections show that our methods are promising.
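For reference, the two baseline kernels the abstract above builds on can be written down directly from a term-document matrix. The sketch below shows the standard GVSM (term co-occurrence) and LSI (truncated SVD) document kernels; it does not reproduce the article's unified kernel, and the `alpha` blend is only a hypothetical stand-in for combining the two views.

```python
import numpy as np

def gvsm_kernel(A):
    # GVSM: documents compared through the term-term co-occurrence matrix A A^T.
    # A is the term-document matrix (terms x docs); the result is docs x docs.
    G = A @ A.T
    return A.T @ G @ A

def lsi_kernel(A, k):
    # LSI: documents compared after projection onto the top-k left singular vectors.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Uk = U[:, :k]
    return A.T @ (Uk @ Uk.T) @ A

def blended_kernel(A, k, alpha=0.5):
    # Hypothetical convex combination of the two kernels (illustration only,
    # not the unified kernel derived in the article).
    return alpha * gvsm_kernel(A) + (1 - alpha) * lsi_kernel(A, k)
```

GVSM replaces literal term matching with term-correlation structure (the term space), while LSI works in a low-rank document subspace; the article's point is that both views can be captured by a single kernel over the same data.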
Ontology-based speech act identification in a bilingual dialog system using partial pattern trees
JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 5 2008
Jui-Feng Yeh

This article presents a bilingual ontology-based dialog system with multiple services. An ontology-alignment algorithm is proposed to integrate ontologies of different languages for cross-language applications. A domain-specific ontology is further extracted from the bilingual ontology using an island-driven algorithm and a domain corpus. This study extracts the semantic words/concepts using latent semantic analysis (LSA). Based on the extracted semantic words and the domain ontology, a partial pattern tree is constructed to model the speech act of a spoken utterance. The partial pattern tree is used to deal with the ill-formed-sentence problem in a spoken dialog system. Concept expansion based on the domain ontology is also adopted to improve system performance. For performance evaluation, a medical dialog system with multiple services, including registration information, clinic information, and FAQ information, was implemented. Four performance measures were used separately for evaluation. The speech act identification rate was 86.2%, and a task success rate of 77% was obtained. The contextual appropriateness of the system response was 78.5%. Finally, the rate of correct FAQ retrieval was 82%, an improvement of 15% over the keyword-based vector space model. The results show that the proposed ontology-based speech act identification is effective for dialog management.

Semantic networking: Flow-based, traffic-aware, and self-managed networking
BELL LABS TECHNICAL JOURNAL, Issue 2 2009
Ludovic Noirie

In order to overcome current Internet limitations on overall network scalability and complexity, we introduce a new paradigm of semantic networking for the networks of the future, which brings together flow-based networking, traffic awareness, and self-management concepts to deliver plug-and-play networks. The natural traffic granularity is the flow, which sits between packet and circuit and between connectionless and connection-oriented modes. Using flow aggregation capabilities, we simplify traffic processing in the nodes through elastic fluid switching, and simplify traffic control through flow admission control, policing, and implicit quality of service (QoS) routing. By leveraging deep packet inspection and behavioral traffic analysis, network elements can autonomously and efficiently process the traffic flows they transport through real-time awareness gained via semantic analysis. The global consistency of node decisions within the whole network is ensured by self-management, applying the concepts of "knowledge plane" and "network mining." © 2009 Alcatel-Lucent.

A sensemaking approach to trade-offs and synergies between human and ecological elements of corporate sustainability
BUSINESS STRATEGY AND THE ENVIRONMENT, Issue 4 2010
Tamsin Angus-Leppan

This paper considers the complex relationships between the human and ecological elements of sustainability that exist in the minds of stakeholders and argues that a sensemaking approach allows these to be better understood and compared. This is supported by the results of a study, set in a financial institution, exploring the relationships between these non-financial elements of corporate sustainability. The viewpoints of middle management, branch and contact centre employees, executives, a community consultative council, suppliers, and a community partner of a large Australian bank, obtained in in-depth interviews, are analysed and compared utilizing an innovative methodology of semantic analysis. We find that these stakeholders' perceptions of the human-ecological relationship differ by group, containing different mixes of trade-offs and synergies between the non-financial elements of corporate sustainability. Copyright © 2010 John Wiley & Sons, Ltd and ERP Environment.