Legacy Systems


Selected Abstracts


Concepts for computer center power management

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 2 2010
A. DiRienzo
Abstract Electrical power usage contributes significantly to the operational costs of large computer systems. At the Hypersonic Missile Technology Research and Operations Center (HMT-ROC), our system usage patterns provide a significant opportunity to reduce operating costs, since there are a small number of dedicated users. The relatively predictable nature of our usage patterns allows computational resource availability to be scheduled, and we take advantage of this predictability by shutting down systems during periods of low usage to reduce power consumption. With interconnected computer cluster systems, reducing the number of online nodes is more than a simple matter of throwing the power switch on a portion of the cluster. The paper discusses these issues and presents power-reduction strategies for a computational system with a heterogeneous system mix that includes a large (1560-node) Apple Xserve PowerPC supercluster. In practice, the average load on computer systems may be much less than the peak load, although the infrastructure supporting the operation of large computer systems in a computer or data center must still be designed with peak loads in mind. Given that system loads can be below full peak for a significant portion of the time, an opportunity exists for cost savings if idle systems can be dynamically throttled back, slept, or shut off entirely. The paper describes two separate strategies that meet the requirements for both power conservation and system availability at HMT-ROC. The first approach, for legacy systems, is essentially a brute-force approach to power management, which we call Time-Driven System Management (TDSM). The second approach, which we call Dynamic-Loading System Management (DLSM), applies to more current systems with 'Wake-on-LAN' capability and takes a more granular approach to the management of system resources. The paper details the rule sets that we have developed and implemented in the two approaches to system power management and discusses results obtained with them. Copyright © 2009 John Wiley & Sons, Ltd.
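The abstract does not reproduce the rule sets, but the mechanics behind a DLSM-style policy are easy to picture. The sketch below is a minimal illustration, not the authors' implementation: it builds a standard Wake-on-LAN magic packet (six 0xFF bytes followed by the target MAC address repeated 16 times) and wakes idle nodes when queue pressure rises. The `wake_node` and `balance` names, the broadcast address, and the jobs-per-node threshold are all assumptions.

```python
import socket

def make_magic_packet(mac: str) -> bytes:
    """Wake-on-LAN magic packet: 6 x 0xFF, then the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def wake_node(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the cluster's management LAN."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(make_magic_packet(mac), (broadcast, port))

def balance(pending_jobs: int, online_nodes: int, idle_macs: list[str],
            jobs_per_node: int = 4) -> None:
    """Hypothetical DLSM-style rule: wake just enough idle nodes to
    cover the current job queue (ceiling division)."""
    needed = -(-pending_jobs // jobs_per_node)
    for mac in idle_macs[: max(0, needed - online_nodes)]:
        wake_node(mac)
```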


Suppression of sidelobes in OFDM systems by multiple-choice sequences

EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 6 2006
Ivan Cosovic
In this paper, we consider the problem of out-of-band radiation in orthogonal frequency-division multiplexing (OFDM) systems caused by the high sidelobes of the OFDM transmission signal. Suppressing these sidelobes enables higher spectral efficiency and/or co-existence with legacy systems in OFDM spectrum-sharing scenarios. To reduce the sidelobes, we propose a method termed multiple-choice sequences (MCS). It is based on the idea that transforming the original transmit sequence into a set of candidate sequences and choosing the candidate with the lowest power in the sidelobes reduces the out-of-band radiation. We describe the general principle of MCS and from it derive and compare several practical MCS algorithms. In addition, we briefly consider combining MCS with existing sidelobe-suppression methods. Numerical results show that the MCS approach can reduce OFDM sidelobes significantly while requiring only a small amount of signalling information to be sent from transmitter to receiver. For example, in an OFDM overlay scenario the sidelobe power is reduced by around 10 dB with a signalling overhead of only 14%. Copyright © 2006 AEIT.
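The core MCS idea can be sketched in a few lines. The toy below is a sketch rather than any of the paper's specific algorithms: it generates candidates by random per-subcarrier phase rotations (one of several possible MCS transforms), measures each candidate's out-of-band power on an oversampled FFT grid, and keeps the candidate with the lowest sidelobe power, so only the chosen candidate's index would need to be signalled. All sizes (64-point FFT, 16 active subcarriers, 16 candidates) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

N_FFT, N_USED, OVS = 64, 16, 8   # FFT size, active subcarriers, oversampling
USED = np.arange(N_FFT // 2 - N_USED // 2, N_FFT // 2 + N_USED // 2)

def fine_spectrum(data: np.ndarray) -> np.ndarray:
    """Power spectrum of one OFDM symbol on a fine grid: map data onto the
    active subcarriers, IFFT to time, zero-pad, FFT back. Zero-padding in
    time exposes the sinc sidelobes of the rectangular pulse."""
    grid = np.zeros(N_FFT, dtype=complex)
    grid[USED] = data
    time = np.fft.ifft(np.fft.ifftshift(grid))
    padded = np.concatenate([time, np.zeros((OVS - 1) * N_FFT)])
    return np.abs(np.fft.fftshift(np.fft.fft(padded))) ** 2

def oob_power(data: np.ndarray) -> float:
    """Total power outside the allocated band (the sidelobe region)."""
    p = fine_spectrum(data)
    lo, hi = USED[0] * OVS, (USED[-1] + 1) * OVS
    return p[:lo].sum() + p[hi:].sum()

# MCS selection: transmit the candidate with the lowest sidelobe power.
qpsk = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], N_USED)
candidates = [qpsk * np.exp(2j * np.pi * rng.random(N_USED)) for _ in range(16)]
best = min(candidates, key=oob_power)
print(f"original: {oob_power(qpsk):.2f}  best of 16: {oob_power(best):.2f}")
```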


Encapsulating targeted component abstractions using software Reflexion Modelling

JOURNAL OF SOFTWARE MAINTENANCE AND EVOLUTION: RESEARCH AND PRACTICE, Issue 2 2008
Jim Buckley
Abstract Design abstractions such as components, modules, subsystems or packages are often not made explicit in the implementation of legacy systems. Indeed, often the abstractions that are made explicit turn out to be inappropriate for future evolution agendas. This can make the maintenance, evolution and refactoring of these systems difficult. In this publication, we carry out a fine-grained evaluation of Reflexion Modelling as a technique for encapsulating user-targeted components. This process is a prelude to component recovery, reuse and refactoring. The evaluation takes the form of two in vivo case studies, where two professional software developers encapsulate components in a large, commercial software system. The studies demonstrate the validity of this approach and offer several best-use guidelines. Specifically, they argue that users benefit from having a strong mental model of the system in advance of Reflexion Modelling, even if that model is flawed, and that users should expend effort exploring the expected relationships present in Reflexion Models. Copyright © 2008 John Wiley & Sons, Ltd.
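For readers unfamiliar with the technique: a Reflexion Model compares a hypothesized high-level model against dependencies extracted from the source. Each extracted dependency is lifted through a user-supplied node map and classified as a convergence (predicted and present), a divergence (present but not predicted), or an absence (predicted but missing). The sketch below illustrates the general technique with made-up component and class names, not the case-study systems.

```python
# Hypothesized high-level model: expected edges between components.
hypothesis = {("UI", "Core"), ("Core", "Storage")}

# Node map from source entities (files/classes) to components.
node_map = {"MainWindow": "UI", "Scheduler": "Core",
            "JobQueue": "Core", "DbAdapter": "Storage"}

# Dependencies extracted from the implementation (e.g., a call graph).
extracted = {("MainWindow", "Scheduler"), ("Scheduler", "DbAdapter"),
             ("MainWindow", "DbAdapter")}

def reflexion(hypothesis, node_map, extracted):
    """Lift extracted dependencies to the component level and classify
    each component-level edge as convergence, divergence, or absence."""
    lifted = {(node_map[a], node_map[b]) for a, b in extracted
              if node_map[a] != node_map[b]}
    return {"convergence": hypothesis & lifted,
            "divergence": lifted - hypothesis,
            "absence": hypothesis - lifted}

for kind, edges in reflexion(hypothesis, node_map, extracted).items():
    print(kind, sorted(edges))   # divergence here: UI talks to Storage directly
```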


An assessment strategy for identifying legacy system evolution requirements in eBusiness context

JOURNAL OF SOFTWARE MAINTENANCE AND EVOLUTION: RESEARCH AND PRACTICE, Issue 4-5 2004
Lerina Aversano
Abstract The enactment of eBusiness processes requires the effective use of existing legacy applications in eBusiness initiatives. Technical issues alone are not enough to drive the evolution of these applications; problems concerning the perspectives, strategies, and business of the enterprise must also be considered. In particular, there is a strict relationship between the evolution of legacy systems and the evolution of eBusiness processes. This paper proposes a strategy to extract the requirements for a legacy system's evolution from the requirements of the eBusiness evolution. The proposed strategy aims at characterizing the software system within the whole environment in which its evolution will be performed. It provides a useful set of attributes addressing technical, process, and organizational issues. Moreover, a set of assessment activities is proposed that affects the order in which the attributes are assessed. Copyright © 2004 John Wiley & Sons, Ltd.
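As a rough picture of what such an attribute-driven assessment might look like in code, the sketch below scores a legacy system on illustrative technical, process, and organizational attributes and maps the aggregates to an evolution decision. The attribute names, scores, and thresholds are invented for illustration and are not the paper's actual attribute set.

```python
# Illustrative assessment attributes grouped by concern; scores in [0, 10].
attributes = {
    "technical":      {"decomposability": 3, "data_quality": 6},
    "process":        {"ebusiness_alignment": 2, "change_frequency": 8},
    "organizational": {"staff_skills": 5, "strategic_value": 9},
}

def assess(attributes: dict) -> str:
    """Aggregate per-concern averages and suggest an evolution action.
    The decision thresholds are made up for illustration."""
    avg = {k: sum(v.values()) / len(v) for k, v in attributes.items()}
    business_value, technical_health = avg["organizational"], avg["technical"]
    if business_value >= 6 and technical_health < 5:
        return "reengineer/wrap for eBusiness"   # valuable but hard to evolve
    if business_value >= 6:
        return "evolve incrementally"
    return "replace or retire"

print(assess(attributes))
```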


Modelling the evolution of legacy systems to Web-based systems

JOURNAL OF SOFTWARE MAINTENANCE AND EVOLUTION: RESEARCH AND PRACTICE, Issue 1-2 2004
Janet Lavery
Abstract Advancing operational legacy systems, with their out-of-date software, distributed data, and entrenched business processes, to systems that can take advantage of current Web technologies to give consistent, customized, and secure access to existing information bases and legacy systems is a complex and daunting task. The Institutionally Secure Integrated Data Environment (INSIDE) is a collaborative project between the Universities of St Andrews and Durham that addresses the issues surrounding the development and delivery of integrated systems for large institutions, constrained by the requirement of working with existing information bases and legacy systems. The work has included an exploration of the incremental evolution of existing systems by building Web-based value-added services upon foundations derived from analysing and modelling the existing legacy systems. Progressing from initial informal models to more formal domain and requirements models in a systematic way, following a meta-process that incorporates good practice from domain analysis and requirements engineering, has allowed the project to lay the foundation for its development of Web-based services. Copyright © 2004 John Wiley & Sons, Ltd.
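A common shape for the kind of Web-based value-added service the project describes is a thin facade that leaves the legacy information base untouched and exposes it over HTTP. The sketch below uses only Python's standard library; `legacy_lookup` is a stand-in for a real call into an existing information base (a database query, batch interface, or screen-scrape) and is not part of INSIDE.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def legacy_lookup(student_id: str) -> dict:
    """Stand-in for a call into the existing information base."""
    return {"id": student_id, "status": "registered"}

class FacadeHandler(BaseHTTPRequestHandler):
    """Web-based value-added service layered over the legacy system."""
    def do_GET(self):
        student_id = self.path.rstrip("/").rsplit("/", 1)[-1]
        body = json.dumps(legacy_lookup(student_id)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # e.g. GET http://localhost:8080/students/12345
    HTTPServer(("localhost", 8080), FacadeHandler).serve_forever()
```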


Graph-based tools for re-engineering

JOURNAL OF SOFTWARE MAINTENANCE AND EVOLUTION: RESEARCH AND PRACTICE, Issue 4 2002
Katja Cremer
Abstract Maintenance of legacy systems is a challenging task. Often, only the source code is still available, while design or requirements documents have been lost or have not been kept up-to-date with the actual implementation. In particular, this applies to many business applications which are run on a mainframe computer and are written in COBOL. Many companies are confronted with the difficult task of migrating these systems to a client/server architecture with clients running on PCs and servers running on the mainframe. REforDI (REengineering for DIstribution) is a graph-based environment supporting this task. REforDI provides integrated code analysis, re-design, and code transformation for COBOL applications. To prepare the application for distribution, REforDI assists in the transition to an object-based architecture, according to which the source code is subsequently transformed into Object COBOL. Internally, REforDI makes heavy use of generators to reduce the implementation effort and thus to enhance adaptability. In particular, graph-based tools for re-engineering are generated from a formal specification which is based on programmed graph transformations. Copyright © 2002 John Wiley & Sons, Ltd.
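Programmed graph transformations of the kind REforDI's tools are generated from can be pictured as rewrite rules over a typed graph: match a pattern, then add or redirect nodes and edges. The toy rule below, written in plain Python rather than REforDI's actual specification language, wraps each COBOL paragraph node in a new class node, as a stand-in for one re-design step toward an object-based architecture.

```python
# A typed graph as a node table plus a set of labelled edges.
nodes = {"P1": "Paragraph", "P2": "Paragraph", "D1": "DataDivision"}
edges = {("P1", "uses", "D1"), ("P2", "uses", "D1")}

def wrap_paragraphs_in_classes(nodes: dict, edges: set):
    """Rewrite rule: for every Paragraph node p, create a Class node c,
    add a 'contains' edge c -> p, and lift p's 'uses' edges to c."""
    new_nodes, new_edges = dict(nodes), set(edges)
    for p, kind in nodes.items():
        if kind != "Paragraph":
            continue
        c = f"C_{p}"
        new_nodes[c] = "Class"
        new_edges.add((c, "contains", p))
        for src, label, dst in edges:
            if src == p and label == "uses":
                new_edges.add((c, "uses", dst))
    return new_nodes, new_edges

nodes2, edges2 = wrap_paragraphs_in_classes(nodes, edges)
print(sorted(nodes2.items()))
print(sorted(edges2))
```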


8 Tb/s long haul transmission over low dispersion fibers using 100 Gb/s PDM-QPSK channels paired with coherent detection

BELL LABS TECHNICAL JOURNAL, Issue 4 2010
Jérémie Renaudier
100 Gb/s end-to-end broadband optical solutions are attractive for coping with the increasing demand for capacity. Polarization-division-multiplexed (PDM) quaternary-phase-shift-keying (QPSK) paired with coherent detection has been found to be promising for upgrading optical legacy systems based on 50 GHz wavelength slots, thanks to its high spectral efficiency (2 bit/s/Hz) and its tolerance to linear effects. One of the major concerns for the deployment of such a solution is the transmission reach, mainly limited by nonlinear effects. This limitation can be exacerbated over non-zero dispersion-shifted fiber (NZDSF) due to the low local chromatic dispersion of the transmission fiber. The aim of this paper is first to report on the benefits brought by combining coherent detection techniques with advanced modulation formats, as compared to conventional direct-detection schemes for optical fiber communications. Digital signal processing paired with coherent detection is described to point out the efficiency of a coherent receiver in combating noise and mitigating linear impairments. We then report on the nonlinear tolerance of 100 Gb/s coherent PDM-QPSK through an 8 Tb/s transmission over a dispersion-managed link based on low-dispersion fibers. © 2010 Alcatel-Lucent.
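The spectral-efficiency arithmetic is straightforward: QPSK carries 2 bits per symbol and polarization multiplexing doubles that, so a symbol rate of roughly 28 Gbaud (including overhead) yields about 112 Gb/s in a 50 GHz slot, i.e. close to 2 bit/s/Hz. The sketch below shows only the bare bones of PDM-QPSK mapping and a coherent hard decision over an idealized linear channel; the equalization and carrier-phase recovery a real coherent receiver performs are assumed already done and are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def qpsk_mod(bits: np.ndarray) -> np.ndarray:
    """Gray-mapped QPSK: two bits per symbol on one polarization."""
    b = bits.reshape(-1, 2)
    return ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)

def qpsk_demod(rx: np.ndarray) -> np.ndarray:
    """Coherent hard decision: sign of I and Q (phase assumed recovered)."""
    bits = np.empty((rx.size, 2), dtype=int)
    bits[:, 0] = rx.real < 0
    bits[:, 1] = rx.imag < 0
    return bits.ravel()

n_sym = 10_000                       # symbols per polarization
bits_x = rng.integers(0, 2, 2 * n_sym)
bits_y = rng.integers(0, 2, 2 * n_sym)
# PDM: two independent QPSK streams on orthogonal polarizations.
tx = np.stack([qpsk_mod(bits_x), qpsk_mod(bits_y)])
noise = (rng.normal(size=tx.shape) + 1j * rng.normal(size=tx.shape)) * 0.2
rx = tx + noise                      # idealized linear channel
ber = np.mean([np.mean(qpsk_demod(rx[0]) != bits_x),
               np.mean(qpsk_demod(rx[1]) != bits_y)])
print(f"BER ≈ {ber:.4f} at 4 bits/symbol (2 bits x 2 polarizations)")
```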