Client/server Architecture

Selected Abstracts


Estimating and eliminating redundant data transfers over the web: a fragment based approach

INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 2 2005
Christos Bouras
Abstract Redundant data transfers over the Web can be mainly attributed to the repeated transfers of unchanged data. Web caches and Web proxies are some of the solutions that have been proposed to deal with the issue of redundant data transfers. In this paper we focus on the efficient estimation and reduction of redundant data transfers over the Web. We first prove that a vast amount of redundant data is transferred in Web pages that are considered to carry fresh data. We show this by following an approach based on Web page fragmentation and manipulation. Web pages are broken down into fragments, based on specific criteria. We then deal with these fragments as independent constituents of the Web page and study their change patterns independently and in the context of the whole Web page. After the fragmentation process, we propose solutions for dealing with redundant data transfers. This paper is based on our previous work on 'Web Components' as well as on related work by other researchers. It utilises a proxy-based client/server architecture and imposes changes to the algorithms executed on the proxy server and on clients. We show that our proposed solution can considerably reduce the amount of redundant data transferred on the Web. Copyright © 2004 John Wiley & Sons, Ltd. [source]
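
To make the fragment-based idea concrete, here is a minimal sketch in Python (invented helper names, not the authors' actual algorithm): a page is split into fragments, each fragment is hashed, and only fragments whose digests are unknown to the cache would need to be transferred. The regex-based fragmentation is a deliberate simplification of the paper's structural criteria.

    import hashlib
    import re

    def fragment(html):
        # Naive fragmentation: split at top-level <div>/<table> boundaries.
        # The paper derives fragments from richer criteria; this regex is
        # purely illustrative.
        parts = re.split(r'(?=<(?:div|table)\b)', html)
        return [p for p in parts if p.strip()]

    def digests(fragments):
        # One digest per fragment acts as a cheap change detector.
        return [hashlib.md5(f.encode()).hexdigest() for f in fragments]

    def changed_fragments(old_html, new_html):
        old, new = fragment(old_html), fragment(new_html)
        known = set(digests(old))
        # Fragments whose digests already appeared in the cached page are
        # redundant; only genuinely new fragments need to travel.
        return [f for f, h in zip(new, digests(new)) if h not in known]

    v1 = '<div>masthead</div><div>story of the day</div>'
    v2 = '<div>masthead</div><div>updated story</div>'
    print(changed_fragments(v1, v2))   # ['<div>updated story</div>']

In this toy setup the unchanged masthead fragment is recognised by its digest and skipped, which is exactly the kind of saving the paper quantifies at the proxy.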


Graph-based tools for re-engineering

JOURNAL OF SOFTWARE MAINTENANCE AND EVOLUTION: RESEARCH AND PRACTICE, Issue 4 2002
Katja Cremer
Abstract Maintenance of legacy systems is a challenging task. Often, only the source code is still available, while design or requirements documents have been lost or have not been kept up-to-date with the actual implementation. In particular, this applies to many business applications which are run on a mainframe computer and are written in COBOL. Many companies are confronted with the difficult task of migrating these systems to a client/server architecture with clients running on PCs and servers running on the mainframe. REforDI (REengineering for DIstribution) is a graph-based environment supporting this task. REforDI provides integrated code analysis, re-design, and code transformation for COBOL applications. To prepare the application for distribution, REforDI assists in the transition to an object-based architecture, according to which the source code is subsequently transformed into Object COBOL. Internally, REforDI makes heavy use of generators to reduce the implementation effort and thus to enhance adaptability. In particular, graph-based tools for re-engineering are generated from a formal specification which is based on programmed graph transformations. Copyright © 2002 John Wiley & Sons, Ltd. [source]
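
The abstract's programmed graph transformations can be pictured with a toy analogue, sketched below in Python; ProgramGraph, the node kinds, and the single rewrite rule are all invented for illustration and are far simpler than REforDI's formal specification. The rule wraps each COBOL record into a class candidate and re-attaches the paragraphs that use it as methods, mirroring the transition to an object-based architecture.

    from collections import defaultdict

    class ProgramGraph:
        # Toy program graph: nodes carry a kind, edges carry a label.
        def __init__(self):
            self.kind = {}                 # node id -> 'paragraph' | 'record' | 'class'
            self.edges = defaultdict(set)  # label -> set of (src, dst) pairs

        def add_node(self, nid, kind):
            self.kind[nid] = kind

        def add_edge(self, label, src, dst):
            self.edges[label].add((src, dst))

    def wrap_records_into_classes(g):
        # Illustrative rewrite rule: for every record, create a class node
        # and re-attach each paragraph that uses the record as a method.
        for rec in [n for n, k in g.kind.items() if k == 'record']:
            cls = 'class_' + rec
            g.add_node(cls, 'class')
            g.add_edge('encapsulates', cls, rec)
            for src, dst in list(g.edges['uses']):
                if dst == rec:
                    g.add_edge('method_of', src, cls)

    g = ProgramGraph()
    g.add_node('CUSTOMER-REC', 'record')
    g.add_node('PRINT-CUSTOMER', 'paragraph')
    g.add_edge('uses', 'PRINT-CUSTOMER', 'CUSTOMER-REC')
    wrap_records_into_classes(g)
    print(sorted(g.edges['method_of']))  # [('PRINT-CUSTOMER', 'class_CUSTOMER-REC')]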


SGXPro: a parallel workflow engine enabling optimization of program performance and automation of structure determination

ACTA CRYSTALLOGRAPHICA SECTION D, Issue 7 2005
Zheng-Qing Fu
SGXPro consists of four components. (i) A parallel workflow engine that was designed to automatically manage communication between the different processes and build systematic searches of algorithm/program/parameter space to generate the best possible result for a given data set. This is performed by offering the user a palette of programs and techniques commonly used in X-ray structure determination in an environment that lets the user choose programs in a mix-and-match manner, without worrying about inter-program communication and file formats during the structure-determination process. The current SGXPro program palette includes 3DSCALE, SHELXD, ISAS, SOLVE/RESOLVE, DM, SOLOMON, DMMULTI, BLAST, AMoRe, EPMR, XTALVIEW, ARP/wARP and MAID. (ii) A client/server architecture that allows the user to utilize the best computing facility available. (iii) A plug-in-and-play design, which allows easy integration of new programs into the system. (iv) A user-friendly interface. [source]
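
As a hedged sketch of the mix-and-match workflow idea (the program names come from the palette above, but the scoring and process handling are invented stand-ins for SGXPro's real machinery), the following Python fragment launches several algorithm/parameter combinations in parallel and keeps whichever scores best:

    from concurrent.futures import ProcessPoolExecutor
    import random

    def run_candidate(candidate):
        # Stand-in for launching one external program (e.g. SHELXD or SOLVE)
        # with one parameter set and parsing a figure of merit from its output.
        program, resolution_cutoff = candidate
        rng = random.Random(hash(candidate))   # deterministic placeholder score
        return candidate, rng.random()

    def main():
        # Systematic search of program/parameter space, run in parallel;
        # the engine keeps whichever combination scores best.
        candidates = [(prog, cut)
                      for prog in ('SHELXD', 'SOLVE')
                      for cut in (2.0, 2.5, 3.0)]
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(run_candidate, candidates))
        best, score = max(results, key=lambda r: r[1])
        print(f'best combination: {best} (score {score:.2f})')

    if __name__ == '__main__':
        main()

In the real system the workers would be distributed across the client/server setup described in (ii), and the figure of merit would come from each program's own output rather than a placeholder.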


Design and analysis of a scalable algorithm to monitor chord-based p2p systems at runtime

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 6 2008
Andreas Binzenhöfer
Abstract Peer-to-peer (p2p) systems are a highly decentralized, fault-tolerant, and cost-effective alternative to the classic client/server architecture. Yet companies hesitate to use p2p algorithms to build new applications. Due to the decentralized nature of such a p2p system, the carrier does not know anything about the current size, performance, and stability of its application. In this paper, we present an entirely distributed and scalable algorithm to monitor a running p2p network. The snapshot of the system enables a telecommunication carrier to gather information about the current performance parameters of the running system as well as to react to discovered errors. Copyright © 2007 John Wiley & Sons, Ltd. [source]
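
The abstract does not spell the algorithm out, so the following simulated Python sketch only shows the general flavour of a decentralized snapshot over a Chord-like ring, under assumptions of my own: the identifier space is recursively halved, each sub-arc is delegated to the peer responsible for its start, and every peer reports its local statistics exactly once. The names and the exact recursion are illustrative, not the paper's algorithm.

    import bisect
    import random

    RING = 2 ** 16                                   # toy identifier space
    random.seed(1)
    peer_ids = sorted(random.sample(range(RING), 50))

    def successor(point):
        # First peer clockwise from `point`, wrapping around the ring.
        i = bisect.bisect_left(peer_ids, point % RING)
        return peer_ids[i % len(peer_ids)]

    def collect(lo, hi, depth=0):
        # Enumerate every peer with an id in [lo, hi) by divide and conquer:
        # the first peer on the arc reports its local statistics, then the
        # remaining arc is halved and each half is delegated. In a running
        # network the halves would be handled by different peers in
        # parallel, keeping the per-peer load low.
        if hi <= lo:
            return []
        owner = successor(lo)
        if not (lo <= owner < hi):                   # the arc holds no peers
            return []
        mid = (owner + 1 + hi) // 2
        return ([(owner, depth)]
                + collect(owner + 1, mid, depth + 1)
                + collect(mid, hi, depth + 1))

    snapshot = collect(0, RING)
    assert sorted(p for p, _ in snapshot) == peer_ids   # each peer counted once
    print(f'{len(snapshot)} peers reported, max delegation depth '
          f'{max(d for _, d in snapshot)}')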


A flexible content repository to enable a peer-to-peer-based wiki

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 7 2010
Udo Bartlang
Abstract Wikis, being major applications of Web 2.0, are used for a large number of purposes, such as encyclopedias, project documentation, and coordination, both in open communities and in enterprises. At the application level, users are targeted as both consumers and producers of dynamic content. Yet this kind of peer-to-peer (P2P) principle is not used at the technical level, which is still dominated by traditional client/server architectures. What is lacking is a generic platform that combines the scalability of the P2P approach with, for example, a wiki's requirements for consistent content management in a highly concurrent environment. This paper presents a flexible content repository system that is intended to close this gap by using a hybrid P2P overlay to support scalable, fault-tolerant, consistent, and efficient data operations for the dynamic content of wikis. On the one hand, this paper introduces the generic, overall architecture of the content repository. On the other hand, it describes the major building blocks that enable P2P data management at the system's persistent storage layer, and how these may be used to implement a P2P-based wiki application: (i) a P2P back-end administers a wiki's actual content resources; (ii) on top of it, P2P service groups act as indexing groups to implement a wiki's search index. Copyright © 2009 John Wiley & Sons, Ltd. [source]
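
As a loose illustration of the described layering (all class and method names here are invented, and a real deployment would route keys to the peers owning their hashes rather than use one local dictionary), the Python sketch below stores wiki page revisions under content keys in a toy DHT back-end and keeps a tiny inverted index in the role of the indexing service groups:

    import hashlib
    from collections import defaultdict

    class ToyDHT:
        # Stand-in for a structured P2P overlay: in reality each key would
        # be routed to the responsible peer, with replication for fault
        # tolerance.
        def __init__(self):
            self.store = {}

        def put(self, key, value):
            self.store[hashlib.sha1(key.encode()).hexdigest()] = value

        def get(self, key):
            return self.store.get(hashlib.sha1(key.encode()).hexdigest())

    class WikiRepository:
        def __init__(self, dht):
            self.dht = dht
            self.index = defaultdict(set)   # word -> page titles ("service group")

        def save(self, title, revision, text):
            # Content resources live in the back-end under title+revision
            # keys, so writes to distinct revisions never collide.
            self.dht.put(f'{title}@{revision}', text)
            self.dht.put(f'{title}@head', str(revision))
            for word in text.lower().split():
                self.index[word].add(title)

        def load(self, title):
            head = self.dht.get(f'{title}@head')
            return self.dht.get(f'{title}@{head}')

        def search(self, word):
            return sorted(self.index[word.lower()])

    repo = WikiRepository(ToyDHT())
    repo.save('P2P', 1, 'peer to peer overlays scale well')
    repo.save('P2P', 2, 'hybrid overlays combine structure and groups')
    print(repo.load('P2P'))          # latest revision of the page
    print(repo.search('overlays'))   # ['P2P']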