Content Management


Selected Abstracts


Fulfilling the Promise of Content Management

BULLETIN OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE & TECHNOLOGY (ELECTRONIC), Issue 1 2001
Chris Kartchner, Adjunct Professor
First page of article [source]


A flexible content repository to enable a peer-to-peer-based wiki

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 7 2010
Udo Bartlang
Abstract Wikis, being major applications of the Web 2.0, are used for a large number of purposes, such as encyclopedias, project documentation, and coordination, both in open communities and in enterprises. At the application level, users are targeted as both consumers and producers of dynamic content. Yet, this kind of peer-to-peer (P2P) principle is not used at the technical level, which is still dominated by traditional client–server architectures. What is lacking is a generic platform that combines the scalability of the P2P approach with, for example, a wiki's requirements for consistent content management in a highly concurrent environment. This paper presents a flexible content repository system that is intended to close this gap by using a hybrid P2P overlay to support scalable, fault-tolerant, consistent, and efficient data operations for the dynamic content of wikis. On the one hand, this paper introduces the generic, overall architecture of the content repository. On the other hand, it describes the major building blocks that enable P2P data management at the system's persistent storage layer, and how these may be used to implement a P2P-based wiki application: (i) a P2P back-end administrates a wiki's actual content resources; (ii) on top, P2P service groups act as indexing groups to implement a wiki's search index. Copyright © 2009 John Wiley & Sons, Ltd. [source]
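
The abstract names two building blocks: a P2P back-end that administrates content resources and service groups that maintain the search index. As a rough illustration of that division of labor only, here is a minimal sketch; all class and method names are assumptions for this example, not the paper's actual interfaces, and a plain in-process dictionary stands in for the distributed overlay.

```python
# Illustrative sketch only: a toy model of the two building blocks named in the
# abstract (a back-end for content resources and an indexing service group).
# Names and structure are assumptions, not the paper's implementation.

from dataclasses import dataclass


@dataclass
class ContentResource:
    """A versioned wiki page stored as a content resource."""
    resource_id: str
    title: str
    body: str
    version: int = 1


class PeerBackend:
    """Stands in for the P2P back-end that administrates content resources.

    A real system would distribute this store across a P2P overlay; a plain
    dict keeps the example self-contained.
    """

    def __init__(self) -> None:
        self._store: dict[str, ContentResource] = {}

    def put(self, resource: ContentResource) -> None:
        current = self._store.get(resource.resource_id)
        if current is not None:
            resource.version = current.version + 1  # naive versioning
        self._store[resource.resource_id] = resource

    def get(self, resource_id: str) -> ContentResource | None:
        return self._store.get(resource_id)


class IndexingGroup:
    """Stands in for a service group that maintains part of the search index."""

    def __init__(self) -> None:
        self._index: dict[str, set[str]] = {}  # term -> resource ids

    def index(self, resource: ContentResource) -> None:
        for term in resource.body.lower().split():
            self._index.setdefault(term, set()).add(resource.resource_id)

    def search(self, term: str) -> set[str]:
        return self._index.get(term.lower(), set())


if __name__ == "__main__":
    backend, index_group = PeerBackend(), IndexingGroup()
    page = ContentResource("wiki/Main", "Main Page", "A peer to peer wiki example")
    backend.put(page)
    index_group.index(page)
    print(index_group.search("wiki"))  # {'wiki/Main'}
```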


Applying content management to automated provenance capture

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 5 2008
Karen L. Schuchardt
Abstract Workflows and data pipelines are becoming increasingly valuable to computational and experimental sciences. These automated systems are capable of generating significantly more data within the same amount of time compared to their manual counterparts. Automatically capturing and recording data provenance and annotation as part of these workflows is critical for data management, verification, and dissemination. We have been prototyping a workflow provenance system, targeted at biological workflows, that extends our content management technologies and other open source tools. We applied this prototype to the provenance challenge to demonstrate an end-to-end system that supports dynamic provenance capture, persistent content management, and dynamic searches of both provenance and metadata. We describe our prototype, which extends the Kepler system for the execution environment, the Scientific Annotation Middleware (SAM) content management software for data services, and an existing HTTP-based query protocol. Our implementation offers several unique capabilities and, through the use of standards, is able to provide access to the provenance record with a variety of commonly available client tools. Copyright © 2007 John Wiley & Sons, Ltd. [source]
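
To make the general idea of capturing provenance alongside workflow execution concrete, the sketch below records inputs, outputs, timestamps, and annotations for each step and supports a simple metadata search afterwards. It is only an assumed illustration of the pattern; the record layout and the decorator do not reproduce the Kepler or SAM interfaces described in the paper.

```python
# Illustrative sketch only: recording provenance for workflow steps and
# querying it afterwards. The record fields and the decorator are assumptions,
# not the Kepler/SAM interfaces used by the prototype.

import functools
import time
from dataclasses import dataclass, field


@dataclass
class ProvenanceRecord:
    step: str
    inputs: dict
    output: object
    started: float
    finished: float
    annotations: dict = field(default_factory=dict)


class ProvenanceStore:
    def __init__(self) -> None:
        self.records: list[ProvenanceRecord] = []

    def capture(self, step_name: str, **annotations):
        """Wrap a workflow step so every invocation is recorded automatically."""
        def decorator(func):
            @functools.wraps(func)
            def wrapper(**kwargs):
                started = time.time()
                result = func(**kwargs)
                self.records.append(ProvenanceRecord(
                    step=step_name, inputs=kwargs, output=result,
                    started=started, finished=time.time(),
                    annotations=dict(annotations)))
                return result
            return wrapper
        return decorator

    def find(self, **criteria) -> list[ProvenanceRecord]:
        """Naive metadata search over the captured records."""
        return [r for r in self.records
                if all(r.annotations.get(k) == v for k, v in criteria.items())]


store = ProvenanceStore()


@store.capture("align_sequences", domain="biology")
def align_sequences(sequences):
    return sorted(sequences)  # stand-in for real computational work


align_sequences(sequences=["TTG", "AAC"])
print(store.find(domain="biology")[0].step)  # align_sequences
```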


A blueprint for the implementation of process-oriented knowledge management

KNOWLEDGE AND PROCESS MANAGEMENT: THE JOURNAL OF CORPORATE TRANSFORMATION, Issue 4 2003
Ulrich Remus
Process-oriented Knowledge Management aims at the integration of business processes and knowledge management. In order to provide knowledge for value-adding activities within the business processes, KM instruments and KM systems have to be adapted to business and knowledge processes. In detail, KM instruments such as content management, skill management, lessons learned, and communities have to be assigned to KM activities and processes. Models and patterns that describe generic pKM processes can form a blueprint for the implementation and support the stepwise integration of business processes into the knowledge life cycle. The introduction of a pKM becomes more efficient, as flexibility is increased and complexity is reduced. In this paper, the authors show the essential elements of a blueprint developed during the implementation of a pKM in a large transaction bank. The blueprint describes the essential knowledge structures, activities, processes, and instruments on different layers of abstraction in the context of a continuous knowledge life cycle. Copyright © 2003 John Wiley & Sons, Ltd. [source]


Managing electronic documents and work flows: Enterprise content management at work in nonprofit organizations

NONPROFIT MANAGEMENT & LEADERSHIP, Issue 4 2007
Joel Iverson
Web management and knowledge management systems have made significant technological advances, culminating in large information management systems such as enterprise content management (ECM). ECM is a Web-based publishing system that manages large numbers of electronic documents and other Web assets intended for publication to Web portals and other complex Web sites. Work in nonprofit organizations can benefit from adopting new communication technologies that promote collaboration and enterprise-wide knowledge management. The unique characteristics of ECM are enumerated and analyzed from a knowledge management perspective. We identify three stages of document life cycles in ECM implementations (content, reification, and commodification/process) as the content management model. We present the model as a mechanism for decision makers and scholars to use in evaluating the organizational impacts of systems such as ECM. We also argue that decision makers in nonprofit organizations should take care to avoid overly commodifying business processes in the final stage, where participation may be more beneficial than efficiency. [source]
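
The three-stage life cycle named in the abstract (content, then reification, then commodification/process) can be pictured as a simple one-way progression. The sketch below is an assumed illustration of that progression only; the stage names follow the abstract, while the class and transition logic are invented for this example.

```python
# Illustrative sketch only: the three-stage document life cycle from the
# abstract (content -> reification -> commodification/process) modelled as a
# one-way state progression. The class and transitions are assumptions.

from enum import Enum


class LifecycleStage(Enum):
    CONTENT = "content"                          # document exists as raw content
    REIFICATION = "reification"                  # formalized as a managed ECM object
    COMMODIFICATION = "commodification/process"  # drives a routinized business process


class ManagedDocument:
    ORDER = [LifecycleStage.CONTENT, LifecycleStage.REIFICATION,
             LifecycleStage.COMMODIFICATION]

    def __init__(self, name: str) -> None:
        self.name = name
        self.stage = LifecycleStage.CONTENT

    def advance(self) -> LifecycleStage:
        index = self.ORDER.index(self.stage)
        if index < len(self.ORDER) - 1:
            self.stage = self.ORDER[index + 1]
        return self.stage


doc = ManagedDocument("volunteer-intake-form")
doc.advance()
print(doc.stage)  # LifecycleStage.REIFICATION
```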


The architecture of content reuse

PROCEEDINGS OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE & TECHNOLOGY (ELECTRONIC), Issue 1 2002
Ann Rockley, President
As organizations have moved towards content management and dynamic delivery of content, they have begun to create reusable content. Reusable content is object-oriented content that can be used across documents, across media, and across information types. Reusable content can consist of entire documents, sections, paragraphs, sentences, or even individual words. Effective content reuse requires robust information models, metadata, and strategies for utilizing content management and personalization to support reuse. This paper provides the concepts, strategies, guidelines, processes, and technological options that will empower enterprise content managers and information architects to meet the increasing demands of creating, managing, and distributing reusable content. [source]
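
The abstract's core idea is that content becomes reusable when it is stored as discrete objects carrying metadata, so the same object can be selected for different documents, media, and audiences. The sketch below illustrates that idea under stated assumptions; the field names and the assembly function are invented for this example and do not reflect Rockley's actual information model.

```python
# Illustrative sketch only: reusable content objects carrying metadata and
# assembled into different outputs. Field names and the assembly function are
# assumptions for illustration, not the paper's information model.

from dataclasses import dataclass, field


@dataclass
class ContentObject:
    object_id: str
    text: str
    metadata: dict = field(default_factory=dict)  # e.g. audience, product, media


def assemble(objects: list[ContentObject], **criteria) -> str:
    """Select objects whose metadata matches the criteria and join them."""
    selected = [o for o in objects
                if all(o.metadata.get(k) == v for k, v in criteria.items())]
    return "\n\n".join(o.text for o in selected)


library = [
    ContentObject("warn-01", "Disconnect power before servicing.",
                  {"audience": "technician", "media": "print"}),
    ContentObject("warn-02", "Unplug the device before opening the case.",
                  {"audience": "consumer", "media": "web"}),
]

print(assemble(library, audience="consumer", media="web"))
```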