Source Code (source + code)


Selected Abstracts


Novel software architecture for rapid development of magnetic resonance applications

CONCEPTS IN MAGNETIC RESONANCE, Issue 3 2002
Josef Debbins
Abstract As the pace of clinical magnetic resonance (MR) procedures grows, the need for an MR scanner software platform on which developers can rapidly prototype, validate, and produce product applications becomes paramount. A software architecture has been developed for a commercial MR scanner that employs state-of-the-art software technologies including Java, C++, DICOM, XML, and so forth. This system permits graphical (drag and drop) assembly of applications built on simple processing building blocks, including pulse sequences, a user interface, reconstruction and postprocessing, and database control. The application developer (researcher or commercial) can assemble these building blocks to create custom applications. The developer can also write source code directly to create new building blocks and add these to the collection of components, which can be distributed worldwide over the internet. The application software and its components are developed in Java, which assures platform portability across any host computer that supports a Java Virtual Machine. The downloaded executable portion of the application is executed in compiled C++ code, which assures mission-critical real-time execution during fast MR acquisition and data processing on dedicated embedded hardware that supports C or C++. This combination permits flexible and rapid MR application development across virtually any combination of computer configurations and operating systems, and yet it allows for very high performance execution on actual scanner hardware. Applications, including prescan, are inherently real-time enabled and can be aggregated and customized to form "superapplications," wherein one or more applications work with another to accomplish the clinical objective with a very high transition speed between applications. © 2002 Wiley Periodicals, Inc. Concepts in Magnetic Resonance (Magn Reson Engineering) 15: 216–237, 2002 [source]
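
The abstract above describes an architecture in which applications are assembled from simple processing building blocks. A minimal Java sketch of what such a composable block interface might look like is given below; the interface and class names are hypothetical illustrations, not the scanner vendor's actual API.

```java
// Hypothetical sketch of a composable processing "building block";
// names are illustrative, not the vendor's API.
import java.util.ArrayList;
import java.util.List;

interface ProcessingBlock {
    // Consumes one unit of acquired data and returns the processed result.
    double[] process(double[] input);
}

class Reconstruction implements ProcessingBlock {
    public double[] process(double[] input) {
        double[] out = input.clone();
        // ... transform acquired samples into image data (omitted) ...
        return out;
    }
}

class PostProcessing implements ProcessingBlock {
    public double[] process(double[] input) {
        double[] out = input.clone();
        // ... filtering, scaling, etc. (omitted) ...
        return out;
    }
}

// An "application" is an ordered chain of blocks, so a graphical editor
// can assemble it by drag and drop and serialize the chain.
class Application {
    private final List<ProcessingBlock> chain = new ArrayList<>();
    Application add(ProcessingBlock b) { chain.add(b); return this; }
    double[] run(double[] acquired) {
        double[] data = acquired;
        for (ProcessingBlock b : chain) data = b.process(data);
        return data;
    }
}
```

A graphical editor could then assemble and serialize such a chain (for example to XML), while the performance-critical blocks run as compiled C++ on the embedded scanner hardware, as the abstract describes.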


HPCTOOLKIT: tools for performance analysis of optimized parallel programs

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 6 2010
L. Adhianto
Abstract HPCTOOLKIT is an integrated suite of tools that supports measurement, analysis, attribution, and presentation of application performance for both sequential and parallel programs. HPCTOOLKIT can pinpoint and quantify scalability bottlenecks in fully optimized parallel programs with a measurement overhead of only a few percent. Recently, new capabilities were added to HPCTOOLKIT for collecting call path profiles for fully optimized codes without any compiler support, pinpointing and quantifying bottlenecks in multithreaded programs, exploring performance information and source code using a new user interface, and displaying hierarchical space–time diagrams based on traces of asynchronous call path samples. This paper provides an overview of HPCTOOLKIT and illustrates its utility for performance analysis of parallel applications. Copyright © 2009 John Wiley & Sons, Ltd. [source]
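
The call path profiling described above relies on asynchronous sampling rather than compiler instrumentation. Below is a minimal Java sketch of that sampling idea only: a daemon timer periodically captures thread stack traces and aggregates counts per call path. It is not HPCTOOLKIT code (HPCTOOLKIT measures fully optimized native binaries), and the class and method names are illustrative.

```java
// Minimal sketch of asynchronous call-path sampling: a background timer
// periodically captures stack traces and counts samples per call path.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class CallPathSampler {
    private final Map<String, Long> samples = new ConcurrentHashMap<>();
    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "sampler");
                t.setDaemon(true);
                return t;
            });

    void start(long periodMillis) {
        timer.scheduleAtFixedRate(this::sampleOnce, periodMillis, periodMillis,
                TimeUnit.MILLISECONDS);
    }

    private void sampleOnce() {
        for (Map.Entry<Thread, StackTraceElement[]> e : Thread.getAllStackTraces().entrySet()) {
            if (e.getKey().getName().equals("sampler")) continue;   // skip ourselves
            StringBuilder path = new StringBuilder();
            for (StackTraceElement frame : e.getValue()) {
                path.append(frame.getClassName()).append('.')
                    .append(frame.getMethodName()).append(';');
            }
            samples.merge(path.toString(), 1L, Long::sum);
        }
    }

    void report() {
        samples.entrySet().stream()
               .sorted((a, b) -> Long.compare(b.getValue(), a.getValue()))
               .limit(10)
               .forEach(e -> System.out.println(e.getValue() + "  " + e.getKey()));
    }
}
```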


Checkpointing BSP parallel applications on the InteGrade Grid middleware

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 6 2006
Raphael Y. de Camargo
Abstract InteGrade is a Grid middleware infrastructure that enables the use of idle computing power from user workstations. One of its goals is to support the execution of long-running parallel applications that present a considerable amount of communication among application nodes. However, in an environment composed of shared user workstations spread across many different LANs, machines may fail, become inaccessible, or may switch from idle to busy very rapidly, compromising the execution of the parallel application in some of its nodes. Thus, providing a fault-tolerance mechanism becomes a major requirement for such a system. In this paper, we describe the support for checkpoint-based rollback recovery of Bulk Synchronous Parallel applications running over the InteGrade middleware. This mechanism consists of periodically saving application state to permit the application to restart its execution from an intermediate execution point in case of failure. A precompiler automatically instruments the source code of a C/C++ application, adding code for saving and recovering application state. A failure detector monitors the application execution. In case of failure, the application is restarted from the last saved global checkpoint. Copyright © 2005 John Wiley & Sons, Ltd. [source]
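
The mechanism above periodically saves application state so execution can restart from the last saved checkpoint. A minimal Java sketch of that save/restore idea follows; the real InteGrade mechanism instead uses a precompiler that instruments C/C++ source, and the class and file names here are illustrative.

```java
// Minimal sketch of checkpoint-based rollback recovery: state is periodically
// serialized, and on restart the last checkpoint is loaded. Illustrative only;
// InteGrade itself instruments C/C++ code via a precompiler.
import java.io.*;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

class Checkpointer {
    private final Path file;

    Checkpointer(String path) { this.file = Path.of(path); }

    void save(Serializable state) throws IOException {
        Path tmp = Path.of(file + ".tmp");
        try (ObjectOutputStream out = new ObjectOutputStream(Files.newOutputStream(tmp))) {
            out.writeObject(state);
        }
        // Replace the previous checkpoint only after the new one is fully written.
        Files.move(tmp, file, StandardCopyOption.REPLACE_EXISTING);
    }

    @SuppressWarnings("unchecked")
    <T extends Serializable> T restoreOrDefault(T initial) throws IOException, ClassNotFoundException {
        if (!Files.exists(file)) return initial;   // first run: nothing to recover
        try (ObjectInputStream in = new ObjectInputStream(Files.newInputStream(file))) {
            return (T) in.readObject();
        }
    }
}
```

In a Bulk Synchronous Parallel setting, save would typically be invoked after a superstep and restoreOrDefault once at start-up, so a restarted node resumes from the last global checkpoint.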


SCALEA: a performance analysis tool for parallel programs

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 11-12 2003
Hong-Linh Truong
Abstract Many existing performance analysis tools lack the flexibility to control instrumentation and performance measurement for code regions and performance metrics of interest. Performance analysis is commonly restricted to single experiments. In this paper we present SCALEA, which is a performance instrumentation, measurement, analysis, and visualization tool for parallel programs that supports post-mortem performance analysis. SCALEA currently focuses on performance analysis for OpenMP, MPI, HPF, and mixed parallel programs. It computes a variety of performance metrics based on a novel classification of overhead. SCALEA also supports multi-experiment performance analysis that allows one to compare and to evaluate the performance outcome of several experiments. A highly flexible instrumentation and measurement system is provided, which can be controlled by command-line options and program directives. SCALEA can be interfaced by external tools through the provision of a full Fortran90 OpenMP/MPI/HPF frontend that allows one to instrument an abstract syntax tree at a very high level with C-function calls and to generate source code. A graphical user interface is provided to view a large variety of performance metrics at the level of arbitrary code regions, threads, processes, and computational nodes for single- and multi-experiment performance analysis. Copyright © 2003 John Wiley & Sons, Ltd. [source]


Simulating multiple inheritance in Java

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 12 2002
Douglas Lyon
Abstract The CentiJ system automatically generates code that simulates multiple inheritance in Java. The generated code inputs a series of instances and outputs specifications that can be combined using multiple inheritance. The multiple inheritance of implementation is obtained by simple message forwarding. The reflection API of Java is used to reverse engineer the instances, and so the program can generate source code, but does not require source code on its input. Advantages of CentiJ include compile-time type checking, speed of execution, automatic disambiguation (name space collision resolution) and ease of maintenance. Simulation of multiple inheritance was previously available only to Java programmers who performed manual delegation or who made use of dynamic proxies. The technique has been applied at a major aerospace corporation. Copyright © 2002 John Wiley & Sons, Ltd. [source]
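
The "simple message forwarding" mentioned above is ordinary delegation. The hand-written Java sketch below shows the shape of the code CentiJ generates automatically by reflecting over instances; the interfaces and classes here are illustrative examples, not CentiJ output.

```java
// Hand-written sketch of delegation-based "multiple inheritance" in Java:
// a combined class implements several interfaces and forwards each message
// to an embedded implementation instance.
interface Swimmer { String swim(); }
interface Flyer   { String fly(); }

class SwimmerImpl implements Swimmer {
    public String swim() { return "swimming"; }
}

class FlyerImpl implements Flyer {
    public String fly() { return "flying"; }
}

// "Inherits" both implementations by forwarding, with compile-time type checking.
class Duck implements Swimmer, Flyer {
    private final Swimmer swimmer = new SwimmerImpl();
    private final Flyer flyer = new FlyerImpl();

    public String swim() { return swimmer.swim(); }   // forwarded message
    public String fly()  { return flyer.fly(); }      // forwarded message

    public static void main(String[] args) {
        Duck d = new Duck();
        System.out.println(d.swim() + " and " + d.fly());
    }
}
```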


CODE IS SPEECH: Legal Tinkering, Expertise, and Protest among Free and Open Source Software Developers

CULTURAL ANTHROPOLOGY, Issue 3 2009
GABRIELLA COLEMAN
ABSTRACT In this essay, I examine the channels through which Free and Open Source Software (F/OSS) developers reconfigure central tenets of the liberal tradition, and the meanings of both freedom and speech, to defend against efforts to constrain their productive autonomy. I demonstrate how F/OSS developers contest and specify the meaning of liberal freedom, especially free speech, through the development of legal tools and discourses within the context of the F/OSS project. I highlight how developers concurrently tinker with technology and the law using similar skills, which transform and consolidate ethical precepts among developers. I contrast this legal pedagogy with more extraordinary legal battles over intellectual property, speech, and software. I concentrate on the arrests of two programmers, Jon Johansen and Dmitry Sklyarov, and on the protests they provoked, which unfolded between 1999 and 2003. These events are analytically significant because they dramatized and thus made visible tacit social processes. They publicized the challenge that F/OSS represents to the dominant regime of intellectual property (and clarified the democratic stakes involved) and also stabilized a rival liberal legal regime intimately connecting source code to speech. [source]


On-line hybrid test combining with general-purpose finite element software

EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 12 2006
Tao Wang
Abstract A new on-line hybrid test system incorporating the substructuring technique is developed. In this system, general-purpose finite element software is employed to obtain the restoring forces of the numerical substructure accurately. The restart option is used repeatedly to accommodate the software to the alternating loading and analysis phases characteristic of the on-line test, without touching the source code. An eight-storey base-isolated structure is tested to evaluate the feasibility and effectiveness of the proposed test system. The overall structure is divided into two substructures, i.e. a superstructure to be analysed by the software and a base-isolation layer to be tested physically. Collisions between the base-isolation layer and the surrounding walls are considered in the test. The responses of the overall structure are reasonable, and smooth operation is achieved without any malfunction. Copyright © 2006 John Wiley & Sons, Ltd. [source]


A preconditioned semi-staggered dilation-free finite volume method for the incompressible Navier–Stokes equations on all-hexahedral elements

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 9 2005
Mehmet Sahin
Abstract A new semi-staggered finite volume method is presented for the solution of the incompressible Navier–Stokes equations on all-quadrilateral (2D)/hexahedral (3D) meshes. The velocity components are defined at element node points while the pressure term is defined at element centroids. The continuity equation is satisfied exactly within each element. The checkerboard pressure oscillations are prevented using a special filtering matrix as a preconditioner for the saddle-point problem resulting from second-order discretization of the incompressible Navier–Stokes equations. The preconditioned saddle-point problem is solved using block preconditioners with a GMRES solver. In order to achieve higher performance, the FORTRAN source code builds on the highly efficient PETSc and HYPRE libraries. As test cases, the 2D/3D lid-driven cavity flow problem and the 3D flow past an array of circular cylinders are solved in order to verify the accuracy of the proposed method. Copyright © 2005 John Wiley & Sons, Ltd. [source]
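
For readers unfamiliar with the structure being preconditioned, the discretization described above leads to a saddle-point system of the generic form below (the notation is assumed here, not taken from the paper), where A acts on the velocity unknowns, B is the discrete divergence operator, and the zero block reflects the absence of pressure in the continuity equation:

```latex
% Generic saddle-point form of the discretized incompressible equations
% (notation assumed): u velocity unknowns, p pressure unknowns.
\begin{pmatrix} A & B^{T} \\ B & 0 \end{pmatrix}
\begin{pmatrix} u \\ p \end{pmatrix}
=
\begin{pmatrix} f \\ g \end{pmatrix}
```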


University timetabling through conceptual modeling

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 11 2005
Jonathan Lee
A number of approaches have been proposed for tackling the timetabling problem, such as operational research, human-machine interaction, constraint programming, expert systems, and neural networks. However, several key challenges remain: making timetables easy to reformulate when requirements change, providing a generalized framework that can handle various timetabling problems, and incorporating domain knowledge into the timetabling system. In this article, we propose an automatic software engineering approach, called task-based conceptual graphs, to addressing the challenges in the timetabling problem. Task-based conceptual graphs provide the automation of software development processes including specification, verification, and automatic programming. Maintenance can be performed directly on the specifications rather than on the source code; moreover, hard and soft constraints can be easily inserted or removed. A university timetabling system in the Department of Computer Science and Information Engineering at National Central University is used as an illustrative example for the proposed approach. © 2005 Wiley Periodicals, Inc. Int J Int Syst 20: 1137–1160, 2005. [source]


Enhanced docking with the mining minima optimizer: Acceleration and side-chain flexibility

JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 16 2002
Visvaldas Kairys
Abstract The ligand–protein docking algorithm based on the Mining Minima method has been substantially enhanced. First, the basic algorithm is accelerated by: (1) adaptively determining the extent of each energy well to help avoid previously discovered energy minima; (2) biasing the search away from ligand positions at the surface of the receptor to prevent the ligand from staying at the surface when large sampling regions are used; (3) quickly testing multiple different ligand positions and orientations for each ligand conformation; and (4) tuning the source code to increase computational efficiency. These changes markedly shorten the time needed to discover an accurate result, especially when large sampling regions are used. The algorithm now also allows user-selected receptor side chains to be treated as mobile during the docking procedure. The energies associated with the mobile side chains are computed as if they belonged to the ligand, except that atoms at the boundary between side chains and the rigid backbone are treated specially. This new capability is tested for several well-known ligand/protein systems, and preliminary application to an enzyme whose substrate is unknown, the recently solved hypothetical protein YecO (HI0319) from Haemophilus influenzae, indicates that side-chain relaxation allows candidate substrates of various sizes to be accommodated. © 2002 Wiley Periodicals, Inc. J Comput Chem 23: 1656–1670, 2002 [source]


Consulting the source code: prospects for gene-based medical diagnostics

JOURNAL OF INTERNAL MEDICINE, Issue S741 2001
U. Landegren
Abstract. Landegren U (Rudbeck Laboratory, Uppsala, Sweden) Gene-based diagnostics (Internal Medicine in the 21st Century). J Intern Med 2000; 248: 271–276. Gene-based diagnostics has been slow to enter routine medical practice in a grand way, but it is now spurred on by three important developments: the total genetic informational content of humans and most of our pathogens is rapidly becoming available; a very large number of genetic factors of diagnostic value in disease are being identified; and such factors include the identity of genes frequently targeted by mutations in specific diseases, common DNA sequence variants associated with disease or responses to therapy, and copy number alterations at the level of DNA or RNA that are characteristic of specific diseases. Finally, improved methodology for genetic analysis now brings all of these genetic factors within reach in clinical practice. The increasing opportunities for genetic diagnostics may gradually influence views on health and normality, and on the genetic plasticity of human beings, provoking discussions about some of the central attributes of genetics. [source]


Recommending change clusters to support software investigation: an empirical study

JOURNAL OF SOFTWARE MAINTENANCE AND EVOLUTION: RESEARCH AND PRACTICE, Issue 3 2010
Martin P. Robillard
Abstract During software maintenance tasks, developers often spend considerable effort investigating source code. This effort can be reduced if tools are available to help developers navigate the source code effectively. We studied to what extent developers can benefit from information contained in clusters of change sets to guide their investigation of a software system. We defined change clusters as groups of change sets that have a certain number of elements in common. Our analysis of 4200 change sets for seven different systems and covering a cumulative time span of over 17 years of development showed that less than one in five tasks overlapped with change clusters. Furthermore, a detailed qualitative analysis of the results revealed that only 13% of the clusters associated with applicable change tasks were likely to be useful. We conclude that change clusters can only support a minority of change tasks, and should only be recommended if it is possible to do so at minimal cost to the developers. Copyright © 2009 John Wiley & Sons, Ltd. [source]
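
As a concrete reading of the definition above, a change cluster groups change sets that share at least a threshold number of program elements. The Java sketch below forms such groups by single-link clustering over pairwise overlap; the threshold and the grouping strategy are illustrative assumptions, not the authors' algorithm.

```java
// Minimal sketch of forming "change clusters": change sets (sets of touched
// program elements) are linked whenever they share at least `minOverlap`
// elements, and connected groups become clusters.
import java.util.*;

class ChangeClusters {
    static List<List<Set<String>>> cluster(List<Set<String>> changeSets, int minOverlap) {
        int n = changeSets.size();
        int[] parent = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i;

        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++)
                if (overlap(changeSets.get(i), changeSets.get(j)) >= minOverlap)
                    union(parent, i, j);

        Map<Integer, List<Set<String>>> groups = new LinkedHashMap<>();
        for (int i = 0; i < n; i++)
            groups.computeIfAbsent(find(parent, i), k -> new ArrayList<>()).add(changeSets.get(i));
        return new ArrayList<>(groups.values());
    }

    static int overlap(Set<String> a, Set<String> b) {
        Set<String> common = new HashSet<>(a);
        common.retainAll(b);
        return common.size();
    }

    static int find(int[] p, int x) { return p[x] == x ? x : (p[x] = find(p, p[x])); }
    static void union(int[] p, int a, int b) { p[find(p, a)] = find(p, b); }
}
```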


A metric-based approach to identifying refactoring opportunities for merging code clones in a Java software system

JOURNAL OF SOFTWARE MAINTENANCE AND EVOLUTION: RESEARCH AND PRACTICE, Issue 6 2008
Yoshiki Higo
Abstract A code clone is a code fragment that has identical or similar fragments elsewhere in the source code. The presence of code clones is generally regarded as one factor that makes software maintenance more difficult. For example, if a code fragment with code clones is modified, it is necessary to consider whether each of the other code clones has to be modified as well. Removing code clones is one way of avoiding problems that arise due to the presence of code clones. This makes the source code more maintainable and more comprehensible. This paper proposes a set of metrics that suggest how code clones can be refactored. The tool Aries, which automatically computes these metrics, is also presented. The tool gives metrics that are indicators for certain refactoring methods rather than suggesting the refactoring methods themselves. The tool performs only lightweight source code analysis; hence, it can be applied to a large number of code lines. This paper also describes a case study that illustrates how this tool can be used. Based on the results of this case study, it can be concluded that this method can efficiently merge code clones. Copyright © 2008 John Wiley & Sons, Ltd. [source]
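
To make the notion of a code clone concrete, the sketch below finds exact clones by normalizing whitespace and grouping fixed-size windows of lines; windows whose normalized text occurs more than once are reported. This is only an illustration of clone detection in general, not the Aries metrics described above.

```java
// Lightweight sketch of exact clone detection: normalize whitespace, then
// group every window of `windowSize` consecutive lines and keep windows whose
// normalized text occurs more than once.
import java.util.*;

class SimpleCloneFinder {
    static Map<String, List<Integer>> findClones(List<String> lines, int windowSize) {
        Map<String, List<Integer>> occurrences = new HashMap<>();
        for (int start = 0; start + windowSize <= lines.size(); start++) {
            StringBuilder window = new StringBuilder();
            for (int i = start; i < start + windowSize; i++) {
                window.append(lines.get(i).trim().replaceAll("\\s+", " ")).append('\n');
            }
            occurrences.computeIfAbsent(window.toString(), k -> new ArrayList<>()).add(start);
        }
        occurrences.values().removeIf(positions -> positions.size() < 2);   // keep clones only
        return occurrences;
    }
}
```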


An automated approach for abstracting execution logs to execution events

JOURNAL OF SOFTWARE MAINTENANCE AND EVOLUTION: RESEARCH AND PRACTICE, Issue 4 2008
Zhen Ming Jiang
Abstract Execution logs are generated by output statements that developers insert into the source code. Execution logs are widely available and are helpful in monitoring, remote issue resolution, and system understanding of complex enterprise applications. There are many proposals for standardized log formats such as the W3C and SNMP formats. However, most applications use ad hoc non-standardized logging formats. Automated analysis of such logs is complex due to the loosely defined structure and a large non-fixed vocabulary of words. The large volume of logs produced by enterprise applications limits the usefulness of manual analysis techniques. Automated techniques are needed to uncover the structure of execution logs. Using the uncovered structure, sophisticated analysis of logs can be performed. In this paper, we propose a log abstraction technique that recognizes the internal structure of each log line. Using the recovered structure, log lines can be easily summarized and categorized to help comprehend and investigate the complex behavior of large software applications. Our proposed approach handles free-form log lines with minimal requirements on the format of a log line. Through a case study using log files from four enterprise applications, we demonstrate that our approach abstracts log files of different complexities with high precision and recall. Copyright © 2008 John Wiley & Sons, Ltd. [source]
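
The abstraction step described above recognizes the static structure of a log line by separating it from its dynamic parameters. A minimal Java sketch of that idea follows: dynamic tokens are masked with regular expressions so that each line collapses to an execution-event template, and lines are grouped per template. The masking rules are illustrative simplifications, not the paper's exact technique.

```java
// Minimal sketch of log abstraction: dynamic tokens (numbers, hex ids, IPs)
// are masked so each line collapses to a static "execution event" template,
// and lines are counted per template.
import java.util.*;

class LogAbstractor {
    static String abstractLine(String line) {
        return line
                .replaceAll("\\b\\d{1,3}(\\.\\d{1,3}){3}\\b", "<ip>")   // IPv4 addresses
                .replaceAll("\\b0x[0-9a-fA-F]+\\b", "<hex>")            // hex identifiers
                .replaceAll("\\b\\d+\\b", "<num>");                     // plain numbers
    }

    static Map<String, Integer> groupByEvent(List<String> logLines) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (String line : logLines) counts.merge(abstractLine(line), 1, Integer::sum);
        return counts;
    }

    public static void main(String[] args) {
        List<String> lines = List.of(
                "user 42 logged in from 10.0.0.7",
                "user 97 logged in from 10.0.0.9",
                "cache miss at 0xdeadbeef");
        groupByEvent(lines).forEach((event, n) -> System.out.println(n + "x  " + event));
    }
}
```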


Empirical-based recovery and maintenance of input error-correction features

JOURNAL OF SOFTWARE MAINTENANCE AND EVOLUTION: RESEARCH AND PRACTICE, Issue 6 2007
Minh Ngoc Ngo
Abstract Most information systems deal with inputs submitted from their external environments. In such systems, input validation is often incorporated to reject erroneous inputs. Unfortunately, many input errors cannot be detected automatically and therefore result in errors in the effects raised by the system. Therefore, the provision of input error-correction features (IECFs) to correct these erroneous effects is critical. However, recovery and maintenance of these features are complicated, tedious and error prone because there are many possible input errors during user interaction with the system; each input error, in turn, might result in several erroneous effects. Through empirical study, we have discovered some interesting control flow graph patterns with regard to the implementation of IECFs in information systems. Motivated by these initial findings, in this paper, we propose an approach to the automated recovery of IECFs by realizing these patterns from the source code. On the basis of the recovered information, we further propose a decomposition-slicing technique to aid the maintenance of these features without interfering with other parts of the system. A case study has been conducted to show the usefulness of the proposed approach. Copyright © 2007 John Wiley & Sons, Ltd. [source]


An empirical study of rules for well-formed identifiers

JOURNAL OF SOFTWARE MAINTENANCE AND EVOLUTION: RESEARCH AND PRACTICE, Issue 4 2007
Dawn Lawrie
Abstract Readers of programs have two main sources of domain information: identifier names and comments. In order to efficiently maintain source code, it is important that the identifier names (as well as comments) communicate clearly the concepts they represent. Deißenböck and Pizka recently introduced two rules for creating well-formed identifiers: one considers the consistency of identifiers and the other their conciseness. These rules require a mapping from identifiers to the concepts they represent, which may be costly to develop after the initial release of a system. An approach for verifying whether identifiers are well formed without any additional information (e.g., a concept mapping) is developed. Using a pool of 48 million lines of code, experiments with the resulting syntactic rules for well-formed identifiers illustrate that violations of the syntactic pattern exist. Two case studies show that three-quarters of these violations are 'real'. That is, they could be identified using a concept mapping. Three related studies show that programmers tend to use a rather limited vocabulary, that, contrary to many other aspects of system evolution, maintenance does not introduce additional rule violations, and that open and proprietary sources differ in their percentage of violations. Copyright © 2007 John Wiley & Sons, Ltd. [source]
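
As a flavor of what a purely syntactic well-formedness check can look like, the Java sketch below splits identifiers on camelCase and underscore boundaries and flags fragments missing from a supplied vocabulary. This is an illustration only; the rules studied in the paper also involve consistency and conciseness with respect to a concept mapping, which this sketch does not capture.

```java
// Minimal sketch of a syntactic "well-formedness" check: split an identifier
// on camelCase and underscores and flag any fragment not in the vocabulary.
import java.util.*;

class IdentifierCheck {
    static List<String> split(String identifier) {
        String spaced = identifier
                .replaceAll("([a-z0-9])([A-Z])", "$1 $2")   // camelCase boundary
                .replace('_', ' ');
        List<String> parts = new ArrayList<>();
        for (String p : spaced.toLowerCase().split("\\s+"))
            if (!p.isEmpty()) parts.add(p);
        return parts;
    }

    static boolean isWellFormed(String identifier, Set<String> vocabulary) {
        for (String part : split(identifier))
            if (!vocabulary.contains(part)) return false;
        return true;
    }

    public static void main(String[] args) {
        Set<String> vocab = Set.of("customer", "count", "get", "tmp");
        System.out.println(isWellFormed("customerCount", vocab));  // true
        System.out.println(isWellFormed("custCnt", vocab));        // false
    }
}
```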


Measuring the complexity of class diagrams in reverse engineering

JOURNAL OF SOFTWARE MAINTENANCE AND EVOLUTION: RESEARCH AND PRACTICE, Issue 5 2006
Frederick T. Sheldon
Abstract Complexity metrics for object-oriented systems are plentiful. Numerous studies have been undertaken to establish valid and meaningful measures of maintainability as they relate to the static structural characteristics of software. In general, these studies have lacked the empirical validation of their meaning and/or have only succeeded in evaluating partial aspects of the system. In this study we have developed, through limited empirical means, a practical and holistic view by analyzing and comparing the structural characteristics of UML class diagrams as those characteristics relate to or impact maintainability. Class diagrams are composed of three kinds of relation: association, generalization, and aggregation, which together make their overall structure difficult to understand. We propose combining these three relations in such a way that enables a comprehensive measure of complexity. Theoretically, this measure is applicable among different class diagrams (including different domains, platforms or systems) to the extent that the measure is widely comparative and context free. Further, this property does not preclude comparison within a specific class diagram (or family) and is therefore very useful in evaluating a given class diagram's strengths and weaknesses. We are reporting empirical results that provide a small measure of validity to enable an objective appraisal of both complexity and maintainability without equating the two. Therefore, to evaluate our structural complexity metric, we determined the level of understandability of the system by measuring the time needed to reverse engineer source code into class diagrams, including the number of errors produced while creating the diagram. The number of errors produced offers one indicator of maintainability. The results, as compared with other complexity metrics, indicate that our metric shows promise, especially if proven to be scalable. Copyright © 2006 John Wiley & Sons, Ltd. [source]


KERIS: evolving software with extensible modules

JOURNAL OF SOFTWARE MAINTENANCE AND EVOLUTION: RESEARCH AND PRACTICE, Issue 5 2005
Matthias Zenger
Abstract We present the programming language KERIS, an extension of Java with explicit support for software evolution. KERIS introduces extensible modules as the basic building blocks for software. Modules are composed hierarchically, explicitly revealing the architecture of systems. A distinct feature of the module design is that modules do not get linked manually. Instead, the wiring of modules gets inferred. The module assembly and refinement mechanism of KERIS is not restricted to the unanticipated extensibility of atomic modules. It also allows extensions of already linked systems by replacing selected submodules with compatible versions without needing to re-link the full system. Extensibility is type-safe and non-invasive, i.e., the extension of a module preserves the original version and does not require access to source code. Copyright © 2005 John Wiley & Sons, Ltd. [source]


Using software trails to reconstruct the evolution of software

JOURNAL OF SOFTWARE MAINTENANCE AND EVOLUTION: RESEARCH AND PRACTICE, Issue 6 2004
Daniel M. German
Abstract This paper describes a method to recover the evolution of a software system using its software trails: information left behind by the contributors to the development process of the product, such as mailing lists, Web sites, version control logs, software releases, documentation, and the source code. This paper demonstrates the use of this method by recovering the evolution of Ximian Evolution, a mail client for Unix. By extracting useful facts stored in these software trails and correlating them, it was possible to provide a detailed view of the history of this project. This view provides interesting insight into how an open source software project evolves and some of the practices used by its software developers. Copyright © 2004 John Wiley & Sons, Ltd. [source]


Software visualization in software maintenance, reverse engineering, and re-engineering: a research survey

JOURNAL OF SOFTWARE MAINTENANCE AND EVOLUTION: RESEARCH AND PRACTICE, Issue 2 2003
Rainer Koschke
Abstract Software visualization is concerned with the static visualization as well as the animation of software artifacts, such as source code, executable programs, and the data they manipulate, and their attributes, such as size, complexity, or dependencies. Software visualization techniques are widely used in the areas of software maintenance, reverse engineering, and re-engineering, where typically large amounts of complex data need to be understood and a high degree of interaction between software engineers and automatic analyses is required. This paper reports the results of a survey on the perspectives of 82 researchers in software maintenance, reverse engineering, and re-engineering on software visualization. It describes to what degree the researchers are involved in software visualization themselves, what is visualized and how, whether animation is frequently used, whether the researchers believe animation is useful at all, which automatic graph layouts are used if at all, whether the layout algorithms have deficiencies, and, last but not least, where the medium-term and long-term research in software visualization should be directed. The results of this survey help to ascertain the current role of software visualization in software engineering from the perspective of researchers in these domains and give hints on future research avenues. Copyright © 2003 John Wiley & Sons, Ltd. [source]


Identifying high maintenance legacy software

JOURNAL OF SOFTWARE MAINTENANCE AND EVOLUTION: RESEARCH AND PRACTICE, Issue 6 2002
Matthew S. Harrison
Abstract Legacy software maintenance is a significant cost item for many engineering organizations. This study is a preliminary report on work to investigate maintenance data, usage, and source code for legacy software used by an engineering design company to support a variety of functions, including electromagnetic, thermal, mechanical loading, vibration, and aerodynamic analysis. The results verify that the conclusion of previous research, namely that size and structural metrics alone are not good indicators of high maintenance costs, also applies to legacy engineering software. Unlike previous research, the study also evaluates the effect of program usage on maintenance cost. Over the three-year period of this study of 71 legacy engineering programs, 11 of the programs (15%) accounted for 80% of all maintenance and 67% of all program runs. The highest-maintenance programs were not always the largest programs or the worst structured programs. 49% of the programs accounted for only 1% of total maintenance but 42% of the total lines of code (LOC), thus invalidating LOC as an indicator of maintenance cost. While additional work is needed to validate these findings across other organizations and other code sets, these preliminary results provide strong evidence that expected program usage can be a useful indicator of long-term maintenance cost. Copyright © 2002 John Wiley & Sons, Ltd. [source]


Graph-based tools for re-engineering

JOURNAL OF SOFTWARE MAINTENANCE AND EVOLUTION: RESEARCH AND PRACTICE, Issue 4 2002
Katja Cremer
Abstract Maintenance of legacy systems is a challenging task. Often, only the source code is still available, while design or requirements documents have been lost or have not been kept up-to-date with the actual implementation. In particular, this applies to many business applications which are run on a mainframe computer and are written in COBOL. Many companies are confronted with the difficult task of migrating these systems to a client/server architecture with clients running on PCs and servers running on the mainframe. REforDI (REengineering for DIstribution) is a graph-based environment supporting this task. REforDI provides integrated code analysis, re-design, and code transformation for COBOL applications. To prepare the application for distribution, REforDI assists in the transition to an object-based architecture, according to which the source code is subsequently transformed into Object COBOL. Internally, REforDI makes heavy use of generators to reduce the implementation effort and thus to enhance adaptability. In particular, graph-based tools for re-engineering are generated from a formal specification which is based on programmed graph transformations. Copyright © 2002 John Wiley & Sons, Ltd. [source]


A concept-oriented belief revision approach to domain knowledge recovery from source code

JOURNAL OF SOFTWARE MAINTENANCE AND EVOLUTION: RESEARCH AND PRACTICE, Issue 1 2001
Yang Li
Abstract Domain knowledge is the soul of software systems. After decades of software development, domain knowledge has reached a certain degree of saturation. The recovery of domain knowledge from source code is beneficial to many software engineering activities, in particular, software evolution. In the real world, the ambiguous appearance of domain knowledge embedded in source code constitutes the biggest barrier to recovering reliable domain knowledge. In this paper, we introduce an innovative approach to recovering domain knowledge with enhanced reliability from source code. In particular, we divide domain knowledge into interconnected knowledge slices and match these knowledge slices against the source code. Each knowledge slice has its own authenticity evaluation function which takes the belief of the evidence it needs as input and the authenticity of the knowledge slice as output. Moreover, the knowledge slices are arranged to exchange beliefs with each other through interconnections, i.e. concepts, so that a better evaluation of the authenticity of these knowledge slices can be obtained. The decision on acknowledging recovered knowledge slices can therefore be made more easily. Our approach, rooted as it is in cognitive science and social psychology, is also widely applicable to other knowledge recovery tasks. Copyright © 2001 John Wiley & Sons, Ltd. [source]


An approach for extracting code fragments that implement functionality from source programs

JOURNAL OF SOFTWARE MAINTENANCE AND EVOLUTION: RESEARCH AND PRACTICE, Issue 1 2001
Hee Beng Kuan Tan
Abstract A data-intensive program interacts with its environment by accepting information from it and delivering information to it. As such, the functionality in a program is achieved through its input/output statements. Based on this hypothesis, this paper proposes a novel approach for extracting the code fragments that implement functionality from program source code. This helps the software maintainer identify affected code fragments for making changes to program functionality. The code fragments extracted may also be suitable for reuse in other software projects. Copyright © 2001 John Wiley & Sons, Ltd. [source]


msatcommander: detection of microsatellite repeat arrays and automated, locus-specific primer design

MOLECULAR ECOLOGY RESOURCES, Issue 1 2008
BRANT C. FAIRCLOTH
Abstract msatcommander is a platform-independent program designed to search for microsatellite arrays, design primers, and tag primers using an automated routine. msatcommander accepts as input DNA sequence data in single-sequence or concatenated, fasta-formatted files. Search data and locus-specific primers are written to comma-separated value files for subsequent use in spreadsheet or database programs. Binary versions of the graphical interface for msatcommander are available for Apple OS X and Windows XP. Users of other operating systems may run the graphical interface version using the available source code, provided their environment supports at least Python 2.4, Biopython 1.43, and wxPython 2.8. msatcommander is available from http://code.google.com/p/msatcommander/. [source]


burial (version 1.0): a method for testing genetic similarity within small groups of individuals using fragmentary data sets

MOLECULAR ECOLOGY RESOURCES, Issue 3 2001
Birgitt Schönfisch
Abstract Biologists frequently face the problem of dealing with data sets that contain little data and a high proportion of missing information. We were particularly interested in analysing fragmentary data sets generated by the application of molecular methods in palaeoanthropology in order to determine whether individuals are genetically related. In this note, we announce the release of the software burial (version 1.0) to test the null hypothesis that the observed grouping of individuals at a particular burial site reflects random placement of genotypes. The proposed test, however, can also be applied to data sets whose objects can be grouped according to nongenetic criteria such as the style of clothing, the kind of burial gifts or cultural artefacts. The C++ source code and binary executables for Windows and Linux are available for download at: http://www.uni-tuebingen.de/uni/bcm/BURIAL/index.html. [source]
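
The null hypothesis above is naturally evaluated with a randomization test: genotypes are repeatedly re-assigned to burial groups at random and a within-group similarity statistic is recomputed. A generic Monte Carlo permutation p-value has the form below; the specific statistic T implemented by burial is not stated here, so this is an assumption about the general shape of the test rather than its exact definition.

```latex
% Generic Monte Carlo permutation p-value for a within-group similarity
% statistic T, with B random re-assignments of genotypes to burial groups.
\hat{p} \;=\; \frac{1 + \#\{\, b : T_{\mathrm{perm}}^{(b)} \ge T_{\mathrm{obs}} \,\}}{B + 1}
```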


An automated quantitation of short echo time MRS spectra in an open source software environment: AQSES

NMR IN BIOMEDICINE, Issue 5 2007
Jean-Baptiste Poullet
Abstract This paper describes a new quantitation method called AQSES for short echo time magnetic resonance spectra. This method is embedded in a software package available online from www.esat.kuleuven.be/sista/members/biomed/new/ with a graphical user interface, under an open source license, which means that the source code is freely available and easy to adapt to specific needs of the user. The quantitation problem is mathematically formulated as a separable nonlinear least-squares fitting problem, which is numerically solved using a modified variable-projection procedure. A macromolecular baseline is incorporated into the fit via nonparametric modelling, efficiently implemented using penalized splines. Unwanted components such as residual water are removed with a maximum-phase FIR filter. Constraints on the phases, dampings and frequencies of the metabolites can be imposed. AQSES has been tested on simulated MR spectra with several types of disturbance and on short echo time in vivo proton MR spectra. Results show that AQSES is robust, easy to use and very flexible. Copyright © 2006 John Wiley & Sons, Ltd. [source]
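
As a sketch of what a separable nonlinear least-squares formulation for MRS quantitation typically looks like (notation assumed here, not copied from the paper), the measured signal is modelled as a sum of metabolite basis signals with linear amplitudes and nonlinear corrections, plus a nonparametric baseline:

```latex
% Generic separable model (notation assumed): a_k are linear amplitudes;
% d_k, f_k, \phi_k are nonlinear dampings, frequency shifts and phases applied
% to metabolite basis signals v_k(t); b(t) is the nonparametric baseline.
y(t) \;\approx\; \sum_{k=1}^{K} a_k \, e^{(-d_k + 2\pi i f_k)\,t + i\phi_k}\, v_k(t) \;+\; b(t)
```

In such a formulation the linear amplitudes can be eliminated analytically (the variable-projection step), leaving only the nonlinear parameters to be iterated, with penalized splines supplying the baseline term, as described in the abstract.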


Tool command language automation of the modular ion cyclotron data acquisition system (MIDAS) for data-dependent tandem Fourier transform ion cyclotron resonance mass spectrometry

RAPID COMMUNICATIONS IN MASS SPECTROMETRY, Issue 4 2003
Michael A. Freitas
This manuscript describes the addition of data-dependent automation to the modular ion cyclotron resonance data acquisition system (MIDAS). The automation is made possible by developments and incorporation of a tool command language (Tcl) interpreter for automated acquisition. To accomplish the automation, real-time generation of excitation waveforms and scriptable data post-processing has been implemented into the MIDAS source code. In addition a new excitation event has also been added to allow for run-time generation of a single notch stored waveform inverse Fourier transform (SWIFT) excitation event. Examples of these new features and discussion of their enhancement to the existing data station are presented. Copyright © 2003 John Wiley & Sons, Ltd. [source]


A non-parametric approach to software reliability

APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 1 2004
Axel Gandy
Abstract In this paper we present a new, non-parametric approach to software reliability. It is based on a multivariate counting process with additive intensity, incorporating covariates and including several projects in one model. Furthermore, we present ways to obtain failure data from the development of open source software. We analyse a data set from this source and consider several choices of covariates. We are able to observe a different impact of recently added and older source code onto the failure intensity. Copyright © 2004 John Wiley & Sons, Ltd. [source]
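
A standard way to write a counting-process model with additive intensity and covariates, in the spirit of the description above, is the Aalen-type form below; the notation is assumed here rather than taken from the paper, with Y_i(t) the at-risk indicator, X_ij(t) the covariates (for example, the amount of recently added versus older source code), and beta_j(t) nonparametric regression functions.

```latex
% Additive (Aalen-type) intensity for counting process i (notation assumed).
\lambda_i(t) \;=\; Y_i(t)\Bigl(\beta_0(t) + \sum_{j=1}^{p} \beta_j(t)\, X_{ij}(t)\Bigr)
```

Estimation then typically targets the cumulative regression functions, so no parametric form of the failure intensity has to be assumed, matching the non-parametric emphasis of the abstract.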


DYNAMIC SEARCH SPACE TRANSFORMATIONS FOR SOFTWARE TEST DATA GENERATION

COMPUTATIONAL INTELLIGENCE, Issue 1 2008
Ramón Sagarna
Among the tasks in software testing, test data generation is particularly difficult and costly. In recent years, several approaches that use metaheuristic search techniques to automatically obtain the test inputs have been proposed. Although work in this field is very active, little attention has been paid to the selection of an appropriate search space. The present work describes an alternative that addresses this issue. More precisely, two approaches that employ an Estimation of Distribution Algorithm as the metaheuristic technique are explained. In both cases, different regions are considered in the search for the test inputs. Moreover, so that the search departs from a region near the one containing the optimum, the definition of the initial search space incorporates static information extracted from the source code of the software under test. If this information is not enough to complete the definition, then a grid search method is used. According to the results of the experiments conducted, it is concluded that this is a promising option that can be used to enhance the test data generation process. [source]
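
To make the role of the Estimation of Distribution Algorithm concrete, the Java sketch below runs a UMDA-style loop over integer test inputs drawn from a bounded region: a population is sampled from the current distribution, the fitter half is used to re-estimate the distribution, and the process repeats. The bounds, population parameters, Gaussian model, and placeholder fitness function are all illustrative assumptions standing in for a coverage-oriented objective and for the region definitions discussed above.

```java
// Minimal sketch of an Estimation of Distribution Algorithm over integer test
// inputs from a bounded search space. The fitness is a placeholder for a
// coverage-oriented objective (e.g. branch distance); smaller is better.
import java.util.*;

class SimpleEda {
    static final Random RNG = new Random(42);

    // Placeholder objective standing in for branch distance.
    static double fitness(int[] input) {
        return Math.abs(input[0] - 1000) + Math.abs(input[1] + 250);
    }

    static int[] search(int dims, int lower, int upper, int popSize, int generations) {
        double[] mean = new double[dims], stdDev = new double[dims];
        for (int d = 0; d < dims; d++) { mean[d] = (lower + upper) / 2.0; stdDev[d] = (upper - lower) / 4.0; }
        int[] best = null; double bestFit = Double.MAX_VALUE;

        for (int g = 0; g < generations; g++) {
            int[][] pop = new int[popSize][dims];
            double[] fit = new double[popSize];
            for (int i = 0; i < popSize; i++) {
                for (int d = 0; d < dims; d++) {
                    long v = Math.round(mean[d] + stdDev[d] * RNG.nextGaussian());
                    pop[i][d] = (int) Math.max(lower, Math.min(upper, v));
                }
                fit[i] = fitness(pop[i]);
                if (fit[i] < bestFit) { bestFit = fit[i]; best = pop[i].clone(); }
            }
            // Re-estimate the distribution from the better half of the population.
            Integer[] idx = new Integer[popSize];
            for (int i = 0; i < popSize; i++) idx[i] = i;
            Arrays.sort(idx, Comparator.comparingDouble(i -> fit[i]));
            int elite = popSize / 2;
            for (int d = 0; d < dims; d++) {
                double m = 0, s = 0;
                for (int i = 0; i < elite; i++) m += pop[idx[i]][d];
                m /= elite;
                for (int i = 0; i < elite; i++) s += Math.pow(pop[idx[i]][d] - m, 2);
                mean[d] = m;
                stdDev[d] = Math.max(1.0, Math.sqrt(s / elite));   // keep some exploration
            }
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(search(2, -10000, 10000, 50, 30)));
    }
}
```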