Memory Management


Selected Abstracts


A new task scheduling method for distributed programs that require memory management

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 9 2006
Hiroshi Koide
Abstract In parallel and distributed applications, it is very likely that object-oriented languages, such as Java and Ruby, and large-scale semistructured data written in XML will be employed. However, because of their inherent dynamic memory management, parallel and distributed applications must sometimes suspend the execution of all tasks running on the processors. This adversely affects their execution on the parallel and distributed platform. In this paper, we propose a new task scheduling method called CP/MM (Critical Path/Memory Management) which can efficiently schedule tasks for applications requiring memory management. The underlying concept is to consider the cost due to memory management when the task scheduling system allocates ready (executable) coarse-grain tasks, or macro-tasks, to processors. We have developed three task scheduling modules, including CP/MM, for a task scheduling system which is implemented on a Java RMI (Remote Method Invocation) communication infrastructure. Our experimental results show that CP/MM can successfully prevent high-priority macro-tasks from being affected by the garbage collection arising from memory management, so that CP/MM can efficiently schedule distributed programs whose critical paths are relatively long. Copyright © 2005 John Wiley & Sons, Ltd. [source]
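The abstract's central idea — charging an estimated memory-management cost when allocating ready macro-tasks to processors — can be illustrated with a toy list scheduler. This is a minimal sketch, not the paper's CP/MM algorithm: the task tuples, the `schedule` function, and the greedy earliest-available-processor policy are all illustrative assumptions.

```python
import heapq

def schedule(tasks, num_procs):
    """Greedy list scheduler in critical-path order.  Each task is a
    (name, priority, compute_cost, gc_cost) tuple; the finish time charged
    to a processor includes the estimated garbage-collection (memory
    management) cost, which is the point the abstract makes."""
    procs = [(0.0, p) for p in range(num_procs)]   # (available_time, id)
    heapq.heapify(procs)
    finish = {}
    # Higher priority = closer to the critical path, so schedule it first.
    for name, prio, compute, gc in sorted(tasks, key=lambda t: -t[1]):
        avail, p = heapq.heappop(procs)
        done = avail + compute + gc                # count the GC penalty
        finish[name] = (p, done)
        heapq.heappush(procs, (done, p))
    return finish
```

Running this on three macro-tasks over two processors shows the high-priority task keeping one processor to itself while the lower-priority tasks, with their GC penalties, queue on the other.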


A Hierarchical Topology-Based Model for Handling Complex Indoor Scenes

COMPUTER GRAPHICS FORUM, Issue 2 2006
D. Fradin
Abstract This paper presents a topology-based representation dedicated to complex indoor scenes. It addresses memory management and performance during modelling, visualization and lighting simulation. We propose to enlarge a topological model (called generalized maps) with multipartition and hierarchy. Multipartition allows the user to group objects together according to semantics. Hierarchy provides a coarse-to-fine description of the environment. The topological model we propose has been used for devising a modeller prototype and generating efficient data structures in the context of visualization, global illumination and 1 GHz wave propagation simulation. We presently handle buildings composed of up to one billion triangles. [source]
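The pairing of hierarchy (a coarse-to-fine description) with multipartition (objects belonging to several semantic groups at once) can be sketched with a simple tree whose nodes carry group tags. This is an illustrative toy, not the generalized-maps model of the paper; `SceneNode` and its methods are invented for the example.

```python
class SceneNode:
    """One node of a hierarchical indoor scene.  `groups` is the
    multipartition: a node may sit in several semantic groups at once
    (e.g. a wall that is both geometry of a room and load-bearing)."""
    def __init__(self, name, groups=()):
        self.name = name
        self.groups = set(groups)
        self.children = []          # hierarchy: coarse-to-fine

    def add(self, child):
        self.children.append(child)
        return child

    def count(self, group):
        """Count nodes in a semantic group in this node's subtree."""
        n = 1 if group in self.groups else 0
        return n + sum(c.count(group) for c in self.children)
```

A query such as "how many walls are below this floor" then becomes a subtree traversal filtered by group membership, which is the kind of semantic access the multipartition is meant to enable.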


Dynamic scratch-pad memory management with data pipelining for embedded systems

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 13 2010
Yanqin Yang
Abstract In this paper, we propose an effective data pipelining technique, SPDP (Scratch-Pad Data Pipelining), for dynamic scratch-pad memory (SPM) management with DMA (Direct Memory Access). Our basic idea is to overlap the execution of CPU instructions and DMA operations. In SPDP, based on the iteration access patterns of arrays, we group multiple iterations into a block to improve the data locality of regular array accesses. We allocate the data of multiple iterations to different portions of the SPM. In this way, while the CPU executes instructions and accesses data in one portion of the SPM, DMA operations can simultaneously transfer data between the off-chip memory and another portion of the SPM. We perform code transformation to insert DMA instructions to achieve the data pipelining. We have implemented our SPDP technique in the IMPACT compiler and conducted experiments using a set of loop kernels from DSPstone, MiBench, and MediaBench on the cycle-accurate VLIW simulator of Trimaran. The experimental results show that our technique achieves a performance improvement over previous work. Copyright © 2010 John Wiley & Sons, Ltd. [source]
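The double-buffering idea behind SPDP — compute on one SPM portion while a DMA transfer fills the other — can be mimicked in ordinary threaded code. A sketch under stated assumptions: `fetch` stands in for a DMA transfer and `compute` for CPU work on a block; neither function nor the two-slot buffer layout is from the paper.

```python
import threading

def pipeline(blocks, fetch, compute):
    """Double-buffered processing: while the 'CPU' computes on one buffer,
    a background thread plays the role of the DMA engine and fetches the
    next block into the other buffer, overlapping the two costs."""
    results = []
    buffers = [None, None]              # the two 'SPM portions'
    buffers[0] = fetch(blocks[0])       # prime the first buffer
    for i in range(len(blocks)):
        dma = None
        if i + 1 < len(blocks):
            # 'DMA': fetch the next block concurrently with compute.
            def transfer(j=i + 1, slot=(i + 1) % 2):
                buffers[slot] = fetch(blocks[j])
            dma = threading.Thread(target=transfer)
            dma.start()
        results.append(compute(buffers[i % 2]))
        if dma:
            dma.join()                  # wait for the transfer to land
    return results
```

Because compute reads one slot while the transfer writes the other, the two never touch the same buffer in the same iteration — the software analogue of the SPM partitioning the abstract describes.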


High-level distribution for the rapid production of robust telecoms software: comparing C++ and ERLANG

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 8 2008
J. H. Nyström
Abstract Currently most distributed telecoms software is engineered using low- and mid-level distributed technologies, but there is a drive to use high-level distribution. This paper reports the first systematic comparison of a high-level distributed programming language in the context of substantial commercial products. Our research strategy is to reengineer some C++/CORBA telecoms applications in ERLANG, a high-level distributed language, and make comparative measurements. Investigating the potential advantages of the high-level ERLANG technology shows that two significant benefits are realized. Firstly, robust configurable systems are easily developed using the high-level constructs for fault tolerance and distribution. The ERLANG code exhibits resilience: sustaining throughput at extreme loads and automatically recovering when load drops; availability: remaining available despite repeated and multiple failures; and dynamic reconfigurability: throughput scales near-linearly when resources are added or removed. Secondly, ERLANG delivers significant productivity and maintainability benefits: the ERLANG components are less than one-third of the size of their C++ counterparts. The productivity gains are attributed to specific language features: for example, high-level communication saves 22%, and automatic memory management saves 11%, compared with the C++ implementation. Investigating the feasibility of the high-level ERLANG technology demonstrates that it fulfils several essential requirements. The requisite distributed functionality is readily specified, even though control of low-level distributed coordination aspects is ceded to the ERLANG implementation. At the expense of additional memory residency, excellent time performance is achieved, e.g. three times faster than the C++ implementation, due to ERLANG's lightweight processes.
ERLANG interoperates at low cost with conventional technologies, allowing incremental reengineering of large distributed systems. The technology is available on the required hardware/operating system platforms, and is well supported. Copyright © 2007 John Wiley & Sons, Ltd. [source]
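The fault-tolerance construct the abstract credits for robustness — restart a failed component, as an Erlang/OTP supervisor does — can be sketched in a few lines. This is a deliberately simplified, sequential analogue, not Erlang's actual supervision tree; `supervise` and its restart policy are assumptions for illustration.

```python
def supervise(task, max_restarts=3):
    """Run `task(attempt)` and, if it raises, restart it up to
    `max_restarts` times before giving up and re-raising.  This mirrors,
    in miniature, the 'let it crash and restart' idiom: the caller never
    sees transient failures that a restart can absorb."""
    attempts = 0
    while True:
        try:
            return task(attempts)
        except Exception:
            attempts += 1
            if attempts > max_restarts:
                raise               # permanent failure: escalate
```

In Erlang the restarted unit is a lightweight process and the supervisor is itself supervised, which is what lets the systems in the study remain available through repeated failures.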


Object combining: a new aggressive optimization for object intensive programs

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 5-6 2005
Ronald Veldema
Abstract Object combining tries to put together objects that have roughly the same lifetimes, in order to reduce strain on the memory manager and to reduce the number of pointer indirections during a program's execution. Object combining works by appending the fields of one object to another, allowing allocation and freeing of multiple objects with a single heap (de)allocation. Unlike object inlining, which will only optimize objects where one has a (unique) pointer to another, our optimization also works if there is no such relation. Object inlining also directly replaces the pointer by the inlined object's fields. Object combining leaves the pointer in place to allow more combining. Elimination of the pointer accesses is implemented in a separate compiler optimization pass. Unlike previous object inlining systems, reference field overwrites are allowed and handled, resulting in much more aggressive optimization. Our object combining heuristics also allow unrelated objects to be combined, for example, those allocated inside a loop; recursive data structures (linked lists, trees) can be allocated several at a time, and objects that are always used together can be combined. As Java explicitly permits code to be loaded at runtime and allows the new code to contribute to a running computation, we do not require a closed-world assumption to enable these optimizations (although assuming one does improve performance). The main focus of object combining in this paper is on reducing object (de)allocation overhead, by reducing both garbage collection work and the number of object allocations. Reduction of memory management overhead causes execution time to be reduced by up to 35%. Indirection removal further reduces execution time by up to 6%. Copyright © 2005 John Wiley & Sons, Ltd. [source]
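The core trick — one heap allocation serving many objects — can be sketched as a chunked node pool for a linked list, one of the recursive data structures the abstract says can be "allocated several at a time". `NodePool` and its fixed chunk size are illustrative assumptions, not the compiler transformation described in the paper.

```python
class NodePool:
    """Hand out list nodes carved from shared chunks, so one allocation
    (a chunk) serves many nodes.  A node is a [value, next] pair; freeing
    a whole chunk releases all of its nodes at once, which is the
    (de)allocation saving object combining is after."""
    def __init__(self, chunk_size=8):
        self.chunk_size = chunk_size
        self.chunks = []            # each chunk = one 'heap allocation'
        self.free = []              # node slots not yet handed out

    def new_node(self, value):
        if not self.free:
            chunk = [[None, None] for _ in range(self.chunk_size)]
            self.chunks.append(chunk)        # one allocation, many nodes
            self.free.extend(chunk)
        node = self.free.pop()
        node[0], node[1] = value, None       # slot 0: value, slot 1: next
        return node
```

Ten nodes from a pool with chunk size 4 cost only three underlying allocations instead of ten, the kind of reduction in allocator and garbage-collector work the paper measures.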


Resource management in open Linda systems

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 13 2003
Ronaldo Menezes
Abstract Coordination systems, in particular Linda, have established themselves as important tools for the development of applications for open systems such as the Internet. This paper shows how to tackle a neglected but crucial problem in open coordination systems: memory management. Because memory is a finite resource, any coordination system intended for wide use must address the problem of memory exhaustion. This paper first explores the orthogonality between coordination and computation in order to make it clear that the problem of memory exhaustion in coordination systems cannot be solved by garbage collection schemes implemented in the computation language alone; a garbage collection scheme must exist in the coordination environment as well. Following the explanation of orthogonality, the paper describes a garbage collection scheme for the Linda family of coordination systems. It is expected that the solution for Linda can be adapted to other coordination systems as long as they are based on tuple space communication. Copyright © 2003 John Wiley & Sons, Ltd. [source]
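The claim that tuple-space garbage must be collected in the coordination layer, not the computation language, can be illustrated with a minimal tuple space that discards tuples no live process can still consume. The ownership tagging (`owners`) is an invented stand-in for the paper's reachability analysis, not its actual scheme.

```python
class TupleSpace:
    """Minimal Linda-style tuple space with a coordination-level GC
    sketch.  Each tuple records which processes might still match it;
    gc() drops tuples whose interested processes have all terminated —
    garbage no computation-language collector could ever see."""
    def __init__(self):
        self.tuples = []            # list of (tuple, owners) pairs
        self.live = set()           # currently registered processes

    def register(self, proc): self.live.add(proc)
    def terminate(self, proc): self.live.discard(proc)

    def out(self, tup, owners):
        """Deposit a tuple, tagged with the processes that may use it."""
        self.tuples.append((tup, set(owners)))

    def rd(self, pattern):
        """Non-blocking read: None fields in the pattern are wildcards."""
        for tup, _ in self.tuples:
            if len(tup) == len(pattern) and all(
                    p is None or p == v for p, v in zip(pattern, tup)):
                return tup
        return None

    def gc(self):
        """Drop tuples that no live process can ever consume."""
        self.tuples = [(t, o) for t, o in self.tuples if o & self.live]
```

The point of the sketch: after a process terminates, its orphaned tuples still sit in the shared space, invisible to the process-local heap collector, so the space itself must reclaim them.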


Managing memories in post-war Sarajevo: individuals, bad memories, and new wars

THE JOURNAL OF THE ROYAL ANTHROPOLOGICAL INSTITUTE, Issue 1 2006
Cornelia Sorabji
In the wake of the 1992-5 war in Bosnia a number of anthropologists have written about the role of memory in creating and sustaining hostility in the region. One trend focuses on the authenticity and power of personal memories of Second World War violence and on the possibility of transmitting such memories down the generations to the 1990s. Another focuses less on memory as a phenomenon which determines human action than on the 'politics of memory': the political dynamics which play on and channel individuals' memories. In this article I use the example of three Sarajevo Bosniacs whom I have known since the pre-war 1980s in order to propose the merit of a third, additional, focus on the individual as an active manager of his or her own memories. I briefly consider whether work by Maurice Bloch on the nature of semantic and of autobiographic memory supports a strong version of the first interpretative trend, or whether, as I suggest, the conclusions of this work instead leave room for individual memory management and for change down the generations. [source]