Software Components (software + component)


Selected Abstracts


Support of Daily ECG Procedures in a Cardiology Department via the Integration of an Existing Clinical Database and a Commercial ECG Management System

ANNALS OF NONINVASIVE ELECTROCARDIOLOGY, Issue 3 2002
Franco Chiarugi Dott.
Background: In the context of HYGEIAnet, the regional health telematics network of Crete, a clinical cardiology database (CARDIS) has been installed in several hospitals. The large number of resting ECGs recorded daily made it a priority to have computerized support for the entire ECG procedure. Methods: Starting in late 2000, ICS-FORTH and Mortara Instrument, Inc., collaborated to integrate the Mortara E-Scribe/NT ECG management system with CARDIS in order to support daily ECG procedures. CARDIS was extended to allow automatic ordering of daily ECGs via E-Scribe/NT. The ECG order list is downloaded to the electrocardiographs and executed; the recorded ECGs are transmitted to E-Scribe/NT, where confirmed ECG records are linked back to CARDIS. A thorough testing period was used to identify and correct problems. An ECG viewer/printer was extended to read ECG files in E-Scribe/NT format. Results: The integration of E-Scribe/NT and CARDIS, enabling automatic scheduling of ECG orders and immediate availability of confirmed ECG records for viewing and printing in the clinical database, took approximately 4 man-months. The performance of the system is highly satisfactory, and it is now ready for deployment in the hospital. Conclusions: Integration of a commercially available ECG management system with an existing clinical database can provide a rapid, practical solution that requires no major modifications to either software component. The success of this project makes us optimistic about extending CARDIS to support additional examination procedures such as digital coronary angiography and ultrasound examinations. A.N.E. 2002;7(3):263-270 [source]
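The order/confirm/link-back workflow described above can be summarized in a short sketch. The class and method names below (EcgOrder, download_order_list, confirm_recording) are illustrative assumptions, not the actual CARDIS or E-Scribe/NT interfaces.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class EcgOrder:
    """Hypothetical daily ECG order created in the clinical database (CARDIS-like role)."""
    patient_id: str
    ordered_at: datetime
    ecg_file: str | None = None   # filled in once the ECG is confirmed
    confirmed: bool = False

class EcgManagementSystem:
    """Illustrative stand-in for the ECG management system (E-Scribe/NT-like role)."""
    def __init__(self):
        self.order_list: list[EcgOrder] = []

    def download_order_list(self, orders: list[EcgOrder]) -> None:
        # Orders placed in the clinical database are pushed to the electrocardiographs.
        self.order_list.extend(orders)

    def confirm_recording(self, order: EcgOrder, ecg_file: str) -> None:
        # After the recording is reviewed and confirmed, the ECG file is
        # linked back to the originating order in the clinical database.
        order.ecg_file = ecg_file
        order.confirmed = True

# Usage: order an ECG, execute it, and link the confirmed record back.
order = EcgOrder(patient_id="P-0042", ordered_at=datetime.now())
ems = EcgManagementSystem()
ems.download_order_list([order])
ems.confirm_recording(order, ecg_file="ecg_P-0042_20020601.xml")
assert order.confirmed and order.ecg_file is not None
```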


A test framework for CORBA* component model-based software systems

BELL LABS TECHNICAL JOURNAL, Issue 3 2003
Harold J. Batteram
In this paper we present a framework for testing software systems that is based on the Common Object Request Broker Architecture (CORBA*) component model (CCM) standard. An important aspect of CCM-based systems is that they must be verifiable and testable at the abstract level of their design, regardless of the language chosen to implement the component. Component-based systems allow the development and testing of components to be divided among development groups working in parallel. However, dependencies between separately developed components may cause delays in testing. The test framework we present allows for the automatic generation, based on their external specifications, of reactor components that testers can use as substitutes for components that their own components depend on but that have not yet been developed. The generated test components can respond to an invocation interactively or automatically by means of a test script. The framework can also visualize interactions between components as they flow through a distributed system, and can compare runtime interactions with design specifications. The approach to testing that we describe was first explored in the distributed software component (DSC) framework developed as part of the FRIENDS project, and has been used successfully in the WINMAN European research project, which deals with network management applications. The test framework has now been extended and adapted for the CCM architecture. It is currently implemented as part of the COACH research project, which is sponsored by the European Commission. © 2003 Lucent Technologies Inc. [source]
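As a rough illustration of the reactor-component idea, the sketch below substitutes a scripted stand-in for a not-yet-implemented dependency; the interface and class names are hypothetical and do not reflect the actual DSC or CCM APIs.

```python
from typing import Protocol

class BillingService(Protocol):
    """Hypothetical external specification of a dependency that is not yet implemented."""
    def charge(self, account: str, amount: float) -> str: ...

class BillingReactor:
    """Scripted reactor component: answers invocations from a canned test script so the
    component under test can be exercised before the real billing component exists."""
    def __init__(self, script: dict[str, str]):
        self.script = script          # maps account id -> scripted reply
        self.invocations: list[tuple[str, float]] = []

    def charge(self, account: str, amount: float) -> str:
        self.invocations.append((account, amount))   # record for later comparison
        return self.script.get(account, "DECLINED")

class OrderProcessor:
    """Component under test; depends only on the BillingService interface."""
    def __init__(self, billing: BillingService):
        self.billing = billing

    def process(self, account: str, amount: float) -> bool:
        return self.billing.charge(account, amount) == "OK"

# Drive the component under test against the scripted reactor.
reactor = BillingReactor(script={"acct-1": "OK"})
assert OrderProcessor(reactor).process("acct-1", 9.99)
assert reactor.invocations == [("acct-1", 9.99)]   # runtime interactions can be compared with the design
```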


Scene-Graph-As-Bus: Collaboration between Heterogeneous Stand-alone 3-D Graphical Applications

COMPUTER GRAPHICS FORUM, Issue 3 2000
Bob Zeleznik
We describe the Scene-Graph-As-Bus technique (SGAB), the first step in a staircase of solutions for sharing software components for virtual environments. The goals of SGAB are to allow, with minimal effort, independently-designed applications to share component functionality; and for multiple users to share applications designed for single users. This paper reports on the SGAB design for transparently conjoining different applications by unifying the state information contained in their scene graphs. SGAB monitors and maps changes in the local scene graph of one application to a neutral scene graph representation (NSG), distributes the NSG changes over the network to remote peer applications, and then maps the NSG changes to the local scene graph of the remote application. The fundamental contribution of SGAB is that both the local and remote applications can be completely unaware of each other; that is, both applications can interoperate without code or binary modification despite each having no knowledge of networking or interoperability. [source]
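A minimal sketch of the change-propagation idea follows: local scene-graph edits are translated into neutral scene graph (NSG) updates, shipped to a peer, and applied to the peer's local graph. All class names and the wire format here are assumptions for illustration, not the SGAB implementation.

```python
import json

class LocalSceneNode:
    """Stand-in for a node in one application's native scene graph."""
    def __init__(self, name: str):
        self.name = name
        self.transform = [0.0, 0.0, 0.0]   # toy state: a translation

def to_nsg_update(node: LocalSceneNode) -> str:
    # Map a local change onto a neutral, application-independent representation
    # that can be serialized and sent to remote peers.
    return json.dumps({"node": node.name, "transform": node.transform})

def apply_nsg_update(update: str, remote_graph: dict[str, LocalSceneNode]) -> None:
    # On the remote side, map the neutral update back into that peer's own
    # scene graph; neither application needs to know about the other.
    msg = json.loads(update)
    remote_graph.setdefault(msg["node"], LocalSceneNode(msg["node"])).transform = msg["transform"]

# One application moves a node; the change is mirrored in the peer's graph.
local = LocalSceneNode("lamp")
local.transform = [1.0, 2.0, 0.5]
peer_graph: dict[str, LocalSceneNode] = {}
apply_nsg_update(to_nsg_update(local), peer_graph)
assert peer_graph["lamp"].transform == [1.0, 2.0, 0.5]
```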


A quality-of-service-based framework for creating distributed heterogeneous software components

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 12 2002
Rajeev R. Raje
Abstract Component-based software development offers a promising solution for taming the complexity found in today's distributed applications. Today's and future distributed software systems will certainly require combining heterogeneous software components that are geographically dispersed. For the successful deployment of such a software system, it is necessary that its realization, based on assembling heterogeneous components, not only meets the functional requirements, but also satisfies the non-functional criteria such as the desired quality of service (QoS). In this paper, a framework based on the notions of a meta-component model, a generative domain model and QoS parameters is described. A formal specification based on two-level grammar is used to represent these notions in a tightly integrated way so that QoS becomes a part of the generative domain model. A simple case study is described in the context of this framework. Copyright © 2002 John Wiley & Sons, Ltd. [source]
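To make the QoS idea concrete, here is a small sketch of selecting among functionally equivalent components by non-functional criteria; the descriptor fields and threshold values are illustrative assumptions, not the paper's two-level-grammar specification.

```python
from dataclasses import dataclass

@dataclass
class ComponentDescriptor:
    """Hypothetical meta-level description of a component: what it does plus its QoS."""
    name: str
    provides: str            # functional capability
    latency_ms: float        # non-functional: expected response time
    availability: float      # non-functional: fraction of time the component is up

def select_component(candidates: list[ComponentDescriptor],
                     capability: str,
                     max_latency_ms: float,
                     min_availability: float) -> ComponentDescriptor | None:
    """Return the best candidate meeting both the functional requirement
    and the QoS constraints, preferring lower latency."""
    feasible = [c for c in candidates
                if c.provides == capability
                and c.latency_ms <= max_latency_ms
                and c.availability >= min_availability]
    return min(feasible, key=lambda c: c.latency_ms, default=None)

catalog = [
    ComponentDescriptor("FastStore", "storage", latency_ms=12.0, availability=0.995),
    ComponentDescriptor("CheapStore", "storage", latency_ms=80.0, availability=0.999),
]
chosen = select_component(catalog, "storage", max_latency_ms=50.0, min_availability=0.99)
assert chosen is not None and chosen.name == "FastStore"
```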


Investigating the performance of a middleware protocol architecture for tele-measurement

INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 5 2008
Luca Berruti
Abstract The rapid growth of network infrastructures and the wide availability of instrumentation supporting remote control have encouraged the deployment of complex and sophisticated laboratories and the design of software platforms for accessing the resources present there. Although the market offers several solutions for remotely managing equipment, little attention has been paid to the hardware and software architectures devoted to controlling distance-learning experimental environments and to managing laboratories consisting of heterogeneous devices. The paper illustrates the architectural approach adopted within the LABNET project and describes in detail the main software components of the devised platform, which allows users to exploit the instrumentation via a common Web user interface, thus making the system available independently of any specific (commercial) environment or application. Specifically, attention is focused on the LABNET server (LNS), which represents the supervising central unit and, therefore, a very critical element of the system. The paper mainly describes the architecture and protocols at the basis of the LNS and discusses a set of performance tests aimed at demonstrating the effectiveness of the system and comparing it with a well-known commercial solution. Copyright © 2007 John Wiley & Sons, Ltd. [source]
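A minimal sketch of the kind of mediation a supervising server performs is shown below: a web client issues a measurement request, and the server forwards it to a device-specific driver. The driver interface, request shape, and measurement names are illustrative assumptions, not the LNS protocol.

```python
from typing import Protocol

class InstrumentDriver(Protocol):
    """Hypothetical device-specific driver hiding the instrument's native protocol."""
    def measure(self, quantity: str) -> float: ...

class FakeOscilloscope:
    def measure(self, quantity: str) -> float:
        # A real driver would talk GPIB/LAN/USB to the instrument;
        # here we simply return a canned value.
        return {"vpp": 3.3, "freq_hz": 50.0}.get(quantity, float("nan"))

class LabServer:
    """Supervising unit: routes web requests to the right heterogeneous device."""
    def __init__(self):
        self.devices: dict[str, InstrumentDriver] = {}

    def register(self, device_id: str, driver: InstrumentDriver) -> None:
        self.devices[device_id] = driver

    def handle_request(self, device_id: str, quantity: str) -> dict:
        # This is the step a common Web user interface would invoke,
        # independent of the specific instrument behind it.
        value = self.devices[device_id].measure(quantity)
        return {"device": device_id, "quantity": quantity, "value": value}

server = LabServer()
server.register("scope-1", FakeOscilloscope())
print(server.handle_request("scope-1", "vpp"))   # {'device': 'scope-1', 'quantity': 'vpp', 'value': 3.3}
```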


mmLib Python toolkit for manipulating annotated structural models of biological macromolecules

JOURNAL OF APPLIED CRYSTALLOGRAPHY, Issue 1 2004
Jay Painter
The Python Macromolecular Library (mmLib) is a software toolkit and library of routines for the analysis and manipulation of macromolecular structural models, implemented in the Python programming language. It is accessed via a layered, object-oriented application programming interface and provides a range of useful software components, including parsers for mmCIF, PDB and MTZ files, a library of atomic elements and monomers, an object-oriented data structure describing biological macromolecules, and an OpenGL molecular viewer. The mmLib data model is designed to provide easy access to the various levels of detail needed to implement high-level application programs for macromolecular crystallography, NMR, modeling and visualization. We describe here the establishment of mmLib as a collaborative open-source code base, and the use of mmLib to implement several simple illustrative application programs. [source]
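A short usage sketch follows. It assumes the mmLib entry point for loading a structure and the iterator methods for walking chains, fragments and atoms; module paths and keyword names have varied between mmLib releases, so treat the calls below as illustrative rather than definitive.

```python
# Illustrative mmLib usage; import paths and keyword names are assumptions
# and may need adjusting for the installed mmLib release.
try:
    from mmLib.FileIO import LoadStructure      # assumed path in later releases
except ImportError:
    from mmLib.FileLoader import LoadStructure  # assumed path in early releases

try:
    struct = LoadStructure(fil="example.pdb")   # keyword used in early examples
except TypeError:
    struct = LoadStructure(file="example.pdb")  # keyword used in later releases

# Walk the object-oriented data model: structure -> chains -> fragments -> atoms.
for chain in struct.iter_chains():
    for fragment in chain.iter_fragments():
        for atom in fragment.iter_atoms():
            print(chain.chain_id, fragment.res_name, atom.name)
```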


Decisional autonomy of planetary rovers

JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 7 2007
Félix Ingrand
To meet the ever-increasing demand for science return, planetary exploration rovers require more autonomy to successfully perform their missions. Indeed, the communication delays are such that teleoperation is unrealistic. Although the current rovers (such as MER) demonstrate a limited navigation autonomy, and mostly rely on ground mission planning, the next generation (e.g., NASA Mars Science Laboratory and ESA ExoMars) will have to regularly achieve long range autonomous navigation tasks. However, fully autonomous long range navigation in partially known planetary-like terrains is still an open challenge for robotics. Navigating hundreds of meters without any human intervention requires the robot to be able to build adequate representations of its environment, to plan and execute trajectories according to the kind of terrain traversed, to control its motions, and to localize itself as it moves. All these activities have to be planned, scheduled, and performed according to the rover context, and controlled so that the mission is correctly fulfilled. To achieve these objectives, we have developed a temporal planner and an execution controller, which exhibit plan repair and replanning capabilities. The planner is in charge of producing plans composed of actions for navigation, science activities (moving and operating instruments), and communication with Earth and with an orbiter or a lander, while managing resources (power, memory, etc.) and respecting temporal constraints (communication visibility windows, rendezvous, etc.). High level actions also need to be refined and their execution temporally and logically controlled. Finally, in such critical applications, we believe it is important to deploy a component that protects the system against dangerous or even fatal situations resulting from unexpected interactions between subsystems (e.g., moving the robot while its arm is unstowed) and/or software components (e.g., taking and storing a picture in a buffer while the previous one is still being processed). In this article we review the aforementioned capabilities, which have been developed, tested, and evaluated on board our rovers (Lama and Dala). After an overview of the architecture design principle adopted, we summarize the perception, localization, and motion generation functions required by autonomous navigation, and their integration and concurrent operation in a global architecture. We then detail the decisional components: a high level temporal planner that produces the robot activity plan on board, and temporal and procedural execution controllers. We show how some failures or execution delays are handled with online local repair or replanning. © 2007 Wiley Periodicals, Inc. [source]
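The sketch below illustrates, at toy scale, the kind of temporally constrained plan execution and local repair the decisional layer performs; the action names, durations, and repair rule are invented for illustration and do not correspond to the actual planner on Lama or Dala.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    duration: float          # expected duration in seconds
    deadline: float | None   # latest allowed completion time (e.g., a communication window)

def execute_with_repair(plan: list[Action], execute) -> list[str]:
    """Run the plan; if an action overruns so that a later deadline can no longer
    be met, locally repair by dropping that action instead of replanning from scratch."""
    log, clock = [], 0.0
    pending = list(plan)
    while pending:
        action = pending.pop(0)
        clock += execute(action)                     # actual duration reported by the controller
        log.append(f"{action.name} finished at t={clock:.0f}s")
        for nxt in list(pending):
            if nxt.deadline is not None and clock + nxt.duration > nxt.deadline:
                pending.remove(nxt)                  # toy repair rule: skip what no longer fits
                log.append(f"repair: dropped {nxt.name} (deadline {nxt.deadline:.0f}s unreachable)")
    return log

plan = [
    Action("drive_to_waypoint", 600, None),
    Action("take_panorama", 120, None),
    Action("downlink_to_orbiter", 300, deadline=900),   # visibility window closes at t=900s
]
# Simulate the drive taking 50% longer than planned.
for line in execute_with_repair(plan, lambda a: a.duration * (1.5 if a.name == "drive_to_waypoint" else 1.0)):
    print(line)
```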


An affordable modular mobile robotic platform with fuzzy logic control and evolutionary artificial neural networks

JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 8 2004
Maurice Tedder
Autonomous robotics projects encompass the rich nature of integrated systems, including mechanical, electrical, and computational software components. The availability of smaller and cheaper hardware components has helped make possible a new dimension in operational autonomy. This paper describes a mobile robotic platform consisting of several integrated modules, including a laptop computer that serves as the main control module, a microcontroller-based motion control module, a vision processing module, a sensor interface module, and a navigation module. The laptop computer module contains the main software development environment with a user interface to access and control all other modules. Programming language independence is achieved by using standard input/output computer interfaces, including the RS-232 serial port, USB, networking, audio input and output, and parallel port devices. However, with the same hardware technology available to all, the distinguishing factor for intelligent systems in most cases becomes the software design. The software for autonomous robots must intelligently control the hardware so that it functions in unstructured, dynamic, and uncertain environments while maintaining an autonomous adaptability. This paper describes how we introduced fuzzy logic control to one robot platform in order to solve the 2003 Intelligent Ground Vehicle Competition (IGVC) Autonomous Challenge problem. This paper also describes the introduction of a hybrid software design that utilizes Fuzzy Evolutionary Artificial Neural Network techniques. In this design, rather than using a control program that is directly coded, the robot's artificial neural net is first trained with a training data set, using evolutionary optimization techniques to adjust the weight values between neurons. The trained neural network with a weighted-average defuzzification method was able to make correct decisions for unseen vision patterns in the IGVC Autonomous Challenge. A comparison of the Lawrence Technological University robot designs with those of the other competing schools shows that our platforms were the most affordable robot systems to use as tools for computer science and engineering education. © 2004 Wiley Periodicals, Inc. [source]
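As a small illustration of the techniques mentioned above, the sketch below combines weighted-average defuzzification with a toy evolutionary search over weights; the rule set, fitness function, and values are invented for illustration and are not the competition code.

```python
import random

def weighted_average_defuzzify(rule_outputs: list[tuple[float, float]]) -> float:
    """Weighted-average defuzzification: each fired rule contributes a crisp
    output value weighted by its firing strength."""
    num = sum(strength * value for strength, value in rule_outputs)
    den = sum(strength for strength, _ in rule_outputs)
    return num / den if den else 0.0

def steering_command(obstacle_left: float, obstacle_right: float) -> float:
    """Tiny fuzzy controller: firing strengths are taken directly from normalized
    obstacle proximities (0 = clear, 1 = blocked); outputs are steering angles in degrees."""
    rules = [
        (obstacle_left, +30.0),                           # obstacle on the left -> steer right
        (obstacle_right, -30.0),                          # obstacle on the right -> steer left
        (1.0 - max(obstacle_left, obstacle_right), 0.0),  # path clear -> go straight
    ]
    return weighted_average_defuzzify(rules)

def evolve_weights(fitness, dim: int = 4, generations: int = 200) -> list[float]:
    """(1+1) evolution strategy: mutate a weight vector and keep the mutant
    whenever it does not decrease fitness."""
    best = [random.uniform(-1, 1) for _ in range(dim)]
    for _ in range(generations):
        mutant = [w + random.gauss(0, 0.1) for w in best]
        if fitness(mutant) >= fitness(best):
            best = mutant
    return best

print(steering_command(obstacle_left=0.8, obstacle_right=0.1))   # positive: steer away from the left obstacle
# Toy fitness: prefer weights summing to 1 (stands in for performance on a training set).
print(evolve_weights(lambda w: -abs(sum(w) - 1.0))[:2])
```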


Guaranteed inconsistency avoidance during software evolution

JOURNAL OF SOFTWARE MAINTENANCE AND EVOLUTION: RESEARCH AND PRACTICE, Issue 6 2003
Keith Gallagher
Abstract The attempt to design and integrate consistent changes into an existing system is the essence of software maintenance. Software developers confront similar problems: changes are made during testing and the release of new system builds. Whether in development or maintenance, changes to evolving systems must be made consistently; that is, without damaging correct computations. It is difficult for the programmer to ascertain the complete effect of a code change: a change may be syntactically and semantically legal, yet ripple into parts of the program that were intended to remain unchanged. Using the standard denotational semantics for procedural programming languages, this paper formalizes decomposition slicing, which identifies interference between software components and isolates the components to be changed. We enumerate the conditions under which one component can be changed with the guarantee that the changes will not interact inconsistently, and we prove that changes made under these conditions are sound. The programmer can then execute changes secure in the knowledge that the semantics of the new system are guaranteed to be consistent with the projection of the semantics of the original for which it behaved correctly. Validating that the changes do not interfere not only guarantees consistency with respect to previous unchanging behaviors, but can also be achieved with a complexity proportional to the size of the change to be made. Copyright © 2003 John Wiley & Sons, Ltd. [source]
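To give a flavor of the interference problem, the toy example below shows two computations in one function: a change confined to the tax computation leaves the shipping computation's slice untouched, whereas editing the shared variable would ripple into both. The code and variable names are invented for illustration and are not taken from the paper.

```python
def invoice(prices: list[float]) -> tuple[float, float]:
    """Two largely independent computations share only 'subtotal'.
    A decomposition slice for 'tax' involves {subtotal, tax}; one for 'shipping'
    involves {subtotal, shipping}. Changing code that affects only 'tax'
    (e.g., the rate below) cannot alter the behavior of 'shipping',
    whereas changing how 'subtotal' is computed would ripple into both slices."""
    subtotal = sum(prices)                      # shared by both slices: change with care
    tax = subtotal * 0.25                       # tax slice only: safe to change the rate
    shipping = 5.0 if subtotal < 50 else 0.0    # shipping slice only
    return tax, shipping

assert invoice([10.0, 20.0]) == (7.5, 5.0)
```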


In-process Control of Design Inspection Effectiveness

QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 1 2004
Tzvi Raz
Abstract We present a methodology for the in-process control of design inspection, focusing on escaped defects. The methodology estimates the defect escape probability at each phase in the process using the information available at the beginning of that phase. The development of the models is illustrated by a case involving data collected from the design inspections of software components. The data include the size of the product component, as well as the time invested in preparing for the inspection and in actually carrying it out. After smoothing the original data with a clustering algorithm to compensate for its excessive variability, we obtained a series of regression models that fit the data increasingly well as more information becomes available. We discuss how management can use such models to reduce escape risk as the inspection process evolves. Copyright © 2003 John Wiley & Sons, Ltd. [source]
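A minimal sketch of the modeling step follows: a logistic-style regression of escape probability on component size and inspection effort. The feature names, synthetic data, and model form are assumptions for illustration and are not the paper's fitted models.

```python
import math, random

def sigmoid(z: float) -> float:
    z = max(-60.0, min(60.0, z))     # clamp to avoid overflow
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X: list[list[float]], y: list[int], lr: float = 0.1, epochs: int = 2000) -> list[float]:
    """Plain gradient-descent logistic regression: returns [bias, w_size, w_effort]."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            err = p - yi
            w[0] -= lr * err
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * err * xj
    return w

# Synthetic inspection records: [component size (KLOC), preparation + inspection effort (hours)],
# label 1 if a defect escaped the inspection, 0 otherwise.
random.seed(0)
X = [[random.uniform(0.5, 5.0), random.uniform(1.0, 10.0)] for _ in range(200)]
y = [1 if size * 2.0 - effort + random.gauss(0, 1) > 0 else 0 for size, effort in X]

w = fit_logistic(X, y)
size, effort = 3.0, 4.0
p_escape = sigmoid(w[0] + w[1] * size + w[2] * effort)
print(f"estimated escape probability for a {size} KLOC component with {effort} h of inspection: {p_escape:.2f}")
```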


The FRIENDS platform: A software platform for advanced services and applications

BELL LABS TECHNICAL JOURNAL, Issue 3 2000
Hendrik B. Meeuwissen
New high-speed networks provide new opportunities for service providers to offer advanced voice, data, and multimedia services. This paper describes an extendible framework for the efficient creation and deployment of services. The framework integrates the needs of service providers, service developers, and end users within a single coherent architecture. In this architecture, services are composed of distributed software components. The framework provides the infrastructure for component interaction and encourages reuse of service logic from a rich set of basic components. This paper describes details of both the infrastructure and the components that implement the reusable service logic. The application of the service framework is illustrated by a case study of a multi-party service for collaborative work in project teams. The integration of the service framework and the Lucent Softswitch is a promising direction for future research. [source]
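The composition idea can be sketched roughly as below: a higher-level service is assembled by wiring together reusable basic components behind a common interface. All names are invented for illustration and are not the FRIENDS component set.

```python
from typing import Protocol

class ServiceComponent(Protocol):
    """Hypothetical common interface every reusable component exposes to the framework."""
    def handle(self, event: dict) -> dict: ...

class AuthComponent:
    def handle(self, event: dict) -> dict:
        event["authenticated"] = event.get("user") == "alice"   # toy check
        return event

class ConferenceComponent:
    def handle(self, event: dict) -> dict:
        if event.get("authenticated"):
            event["joined_room"] = "project-team-1"
        return event

class ComposedService:
    """A service is an ordered wiring of reusable components; the framework
    passes each event through the chain and returns the combined result."""
    def __init__(self, components: list[ServiceComponent]):
        self.components = components

    def invoke(self, event: dict) -> dict:
        for component in self.components:
            event = component.handle(event)
        return event

collab_service = ComposedService([AuthComponent(), ConferenceComponent()])
print(collab_service.invoke({"user": "alice"}))   # {'user': 'alice', 'authenticated': True, 'joined_room': 'project-team-1'}
```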