Better Scalability


Selected Abstracts


Power aware scalable multicast routing protocol for MANETs

INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 10 2006
R. Manoharan
Abstract Multicasting is an effective way to provide group communication. In mobile ad hoc networks (MANETs), multicasting can support a wide variety of applications characterized by a close degree of collaboration. Since MANETs exhibit severe resource constraints such as limited battery power, limited bandwidth, dynamic network topology and lack of centralized administration, multicasting in MANETs becomes complex. Existing multicast routing protocols concentrate on quality-of-service parameters such as end-to-end delay, jitter, bandwidth and power, but do not stress the scalability of the multicast. In this paper, we address the problem of multicast scalability and propose an efficient scalable multicast routing protocol called 'Power Aware Scalable Multicast Routing Protocol (PASMRP)' for MANETs. PASMRP uses the concept of class of service with three priority levels and local re-routing to provide scalability. The protocol also ensures fair utilization of resources among the nodes through re-routing, and hence the lifetime of the network is increased. The protocol has been simulated and the results show that PASMRP has better scalability and enhanced lifetime compared with existing multicast routing protocols. Copyright © 2005 John Wiley & Sons, Ltd. [source]
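The abstract's core idea, combining class-of-service priority levels with power-aware local re-routing, can be sketched as follows. This is an illustrative model only: the function names, the three priority labels, and the power thresholds are assumptions, not details taken from the paper.

```python
# Illustrative sketch of power-aware forwarding with three class-of-service
# priority levels: a node may forward traffic of a given class only if its
# residual battery power clears that class's threshold, and a relay whose
# power has dropped too low triggers local re-routing to a better neighbor.
# All names and threshold values are hypothetical.

PRIORITY_THRESHOLDS = {  # minimum residual power (as a fraction) per class
    "high": 0.50,
    "medium": 0.25,
    "low": 0.10,
}

def eligible_forwarders(neighbors, priority):
    """Return the neighbors whose residual power qualifies them to forward
    traffic of the given class of service (neighbors: node -> power)."""
    threshold = PRIORITY_THRESHOLDS[priority]
    return [node for node, power in neighbors.items() if power >= threshold]

def reroute_if_needed(current_relay, neighbors, priority):
    """Local re-routing: keep the current relay while it still meets the
    class threshold; otherwise hand off to the best-powered eligible
    neighbor, or None if no neighbor qualifies."""
    if neighbors.get(current_relay, 0.0) >= PRIORITY_THRESHOLDS[priority]:
        return current_relay
    candidates = eligible_forwarders(neighbors, priority)
    return max(candidates, key=neighbors.get) if candidates else None
```

Handing traffic off to the best-powered neighbor is one simple way to spread load fairly, which is the mechanism the abstract credits for the extended network lifetime.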


Scalable and lightweight key distribution for secure group communications

INTERNATIONAL JOURNAL OF NETWORK MANAGEMENT, Issue 3 2004
Fu-Yuan Lee
Securing group communications in dynamic and large-scale groups is more complex than securing one-to-one communications because of the inherent scalability issue of group key management. In particular, the cost of key establishment and key renewal usually grows with the group size and consequently becomes a performance bottleneck. To address this problem, this paper proposes a new approach that decouples group size from the computation cost of group key management. By using a hierarchical key distribution architecture and load sharing, the load of key management can be shared by a cluster of third parties without revealing group messages to them. The proposed scheme provides better scalability because the cost of key management for each component is independent of the group size; specifically, our scheme incurs constant computation and communication overheads for key renewal. In this paper, we present the detailed design of the proposed scheme and performance comparisons with other schemes. Briefly, our scheme provides better scalability than existing group key distribution approaches. Copyright © 2004 John Wiley & Sons, Ltd. [source]
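The decoupling idea can be illustrated with a minimal two-level key hierarchy: if the group key is distributed through a fixed set of clusters, renewing it costs one message per cluster regardless of how many members the group has. This is a toy model of the general principle, not the paper's protocol; the class, the placeholder `encrypt` tag, and the key sizes are all invented for illustration.

```python
# Toy two-level key hierarchy: renewing the group key requires work
# proportional to the number of clusters, independent of the number of
# group members. The "encryption" below is a keyed hash tag used purely
# as a stand-in so the example stays self-contained.

import hashlib
import os

def encrypt(key, payload):
    """Placeholder for real encryption: a keyed SHA-256 tag (illustration only)."""
    return hashlib.sha256(key + payload).hexdigest()

class TwoLevelGroup:
    def __init__(self, n_clusters):
        # One long-lived key per cluster of third parties, plus the group key.
        self.cluster_keys = [os.urandom(16) for _ in range(n_clusters)]
        self.group_key = os.urandom(16)

    def renew_group_key(self):
        """Pick a fresh group key and wrap it once per cluster: the renewal
        cost is O(#clusters), not O(#members)."""
        self.group_key = os.urandom(16)
        return [encrypt(ck, self.group_key) for ck in self.cluster_keys]
```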


Fast fragments: The development of a parallel effective fragment potential method

JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 15 2004
Heather M. Netzloff
Abstract The Effective Fragment Potential (EFP) method for solvation decreases the cost of a fully quantum mechanical calculation by dividing a chemical system into an ab initio region that contains the solute plus some number of solvent molecules, if desired, and an "effective fragment" region that contains the remaining solvent molecules. Interactions introduced with this fragment region (for example, Coulomb and polarization interactions) are added as one-electron terms to the total system Hamiltonian. As larger systems and dynamics are just starting to be studied with the EFP method, more needs to be done to decrease the calculation time of the method. This article considers parallelization of both the EFP fragment-fragment and mixed quantum mechanics (QM)-EFP interaction energy and gradient computation within the GAMESS suite of programs. The iteratively self-consistent polarization term is treated with a new algorithm that makes use of nonblocking communication to obtain better scalability. Results show that reasonable speedup is achieved with a variety of water cluster sizes and numbers of processors. © 2004 Wiley Periodicals, Inc. J Comput Chem 25: 1926–1935, 2004 [source]
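The "iteratively self-consistent polarization term" mentioned above is the step the paper parallelizes: each fragment's induced dipole responds to the static field plus the field of every other induced dipole, so the dipoles must be iterated to self-consistency. A serial, pure-Python sketch of that fixed-point iteration (with scalar polarizabilities and an illustrative coupling matrix, not GAMESS's actual tensors) looks like this:

```python
# Fixed-point iteration for mutually induced dipoles:
#   mu_i = alpha_i * (field0_i + sum_j coupling[i][j] * mu_j)
# Scalars are used instead of 3-vectors/tensors to keep the sketch minimal;
# the parallel version in the paper overlaps the sum over j with
# nonblocking communication between processors.

def induced_dipoles(alpha, field0, coupling, tol=1e-10, max_iter=200):
    """Iterate the induced dipoles to self-consistency within `tol`."""
    n = len(alpha)
    mu = [a * f for a, f in zip(alpha, field0)]  # zeroth order: no mutual polarization
    for _ in range(max_iter):
        mu_new = [
            alpha[i] * (field0[i] + sum(coupling[i][j] * mu[j] for j in range(n)))
            for i in range(n)
        ]
        if max(abs(a - b) for a, b in zip(mu_new, mu)) < tol:
            return mu_new
        mu = mu_new
    raise RuntimeError("polarization iteration did not converge")
```

Because each sweep needs every other fragment's current dipole, the iteration is communication-heavy in parallel, which is why overlapping communication with computation pays off.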


Flow modeling and simulation for vacuum assisted resin transfer molding process with the equivalent permeability method

POLYMER COMPOSITES, Issue 2 2004
Renliang Chen
Vacuum assisted resin transfer molding (VARTM) offers numerous advantages over traditional resin transfer molding, such as lower tooling costs, shorter mold filling time and better scalability for large structures. In the VARTM process, complete filling of the mold with adequate wet-out of the fibrous preform has a critical impact on the process efficiency and product quality. Simulation is a powerful tool for understanding the resin flow in the VARTM process. However, conventional three-dimensional Control Volume/Finite Element Method (CV/FEM) based simulation models often require extensive computations, and their application to process modeling of large part fabrication is limited. This paper introduces a new approach to model the flow in the VARTM process based on the concept of equivalent permeability to significantly reduce computation time for VARTM flow simulation of large parts. The equivalent permeability model of high permeable medium (HPM) proposed in the study can significantly increase convergence efficiency of simulation by properly adjusting the aspect ratio of HPM elements. The equivalent permeability model of flow channel can simplify the computational model of the CV/FEM simulation for VARTM processes. This new modeling technique was validated by the results from conventional 3D computational methods and experiments. The model was further validated with a case study of an automobile hood component fabrication. The flow simulation results of the equivalent permeability models were in agreement with those from experiments. The results indicate that the computational time required by this new approach was greatly reduced compared to that by the conventional 3D CV/FEM simulation model, while maintaining the accuracy of filling time and flow pattern. This approach makes the flow simulation of large VARTM parts with the 3D CV/FEM method computationally feasible and may help broaden the application base of the process simulation. Polym. Compos. 25:146–164, 2004. © 2004 Society of Plastics Engineers. [source]
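The equivalent-permeability idea rests on a standard result for Darcy flow through stacked layers (such as the high permeable medium over the fiber preform): the stack can be replaced by a single layer whose permeability reproduces the same flow. The sketch below shows the classical thickness-weighted averages for the two flow directions; it illustrates the general concept, not the paper's specific HPM or flow-channel formulation.

```python
# Classical equivalent permeabilities for layered porous media under Darcy flow:
# - in-plane flow (layers act in parallel): thickness-weighted arithmetic mean
# - through-thickness flow (layers act in series): thickness-weighted harmonic mean

def parallel_equivalent(thicknesses, permeabilities):
    """Equivalent permeability for flow along the layers (parallel paths)."""
    total = sum(thicknesses)
    return sum(h * k for h, k in zip(thicknesses, permeabilities)) / total

def series_equivalent(thicknesses, permeabilities):
    """Equivalent permeability for flow across the layers (series resistance)."""
    total = sum(thicknesses)
    return total / sum(h / k for h, k in zip(thicknesses, permeabilities))
```

Replacing the thin, highly permeable distribution layer with such an equivalent medium is what lets the simulation use far fewer, better-conditioned elements.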


Implementation, performance, and science results from a 30.7 TFLOPS IBM BladeCenter cluster

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 2 2010
Craig A. Stewart
Abstract This paper describes Indiana University's implementation, performance testing, and use of a large high performance computing system. IU's Big Red, a 20.48 TFLOPS IBM e1350 BladeCenter cluster, appeared in the 27th Top500 list as the 23rd fastest supercomputer in the world in June 2006. In spring 2007, this computer was upgraded to 30.72 TFLOPS. The e1350 BladeCenter architecture, including two internal networks accessible to users and user applications and two networks used exclusively for system management, has enabled the system to provide good scalability on many important applications while remaining straightforward to manage. Implementing a system based on the JS21 Blade and PowerPC 970MP processor within the US TeraGrid presented certain challenges, given that Intel-compatible processors dominate the TeraGrid. However, the particular characteristics of the PowerPC have made it highly popular among certain application communities, particularly users of molecular dynamics and weather forecasting codes. A critical aspect of Big Red's implementation has been a focus on Science Gateways, which provide graphical interfaces to systems supporting end-to-end scientific workflows. Several Science Gateways have been implemented that access Big Red as a computational resource; some via the TeraGrid, some not affiliated with the TeraGrid. In summary, Big Red has been successfully integrated with the TeraGrid, and is used by many researchers locally at IU via grids and Science Gateways. It has been a success in terms of enabling scientific discoveries at IU and, via the TeraGrid, across the US. Copyright © 2009 John Wiley & Sons, Ltd. [source]


Solving the block-Toeplitz least-squares problem in parallel

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 1 2005
P. Alonso
Abstract In this paper we present two versions of a parallel algorithm to solve the block-Toeplitz least-squares problem on distributed-memory architectures. We derive a parallel algorithm based on the seminormal equations arising from the triangular decomposition of the product TᵀT. Our parallel algorithm exploits the displacement structure of the Toeplitz-like matrices using the Generalized Schur Algorithm to obtain the solution in O(mn) flops instead of the O(mn²) flops required by algorithms for non-structured matrices. The strong regularity of this product of matrices and an appropriate computation of the hyperbolic rotations improve the stability of the algorithms. We have reduced the communication cost of previous versions, and have also reduced the memory access cost by appropriately arranging the elements of the matrices. Furthermore, the second version of the algorithm has a very low spatial cost, because it does not store the triangular factor of the decomposition. The experimental results show good scalability of the parallel algorithm on two different clusters of personal computers. Copyright © 2005 John Wiley & Sons, Ltd. [source]
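For readers unfamiliar with the seminormal-equations route, the dense serial baseline looks like this: form A = TᵀT, factor A = RᵀR with R triangular, then solve RᵀY = Tᵀb followed by Rx = y. The sketch below uses a plain Cholesky factorization in pure Python, so it is the O(mn²) baseline; the paper's contribution is obtaining R in O(mn) flops via the Generalized Schur Algorithm on the displacement structure, which this sketch does not attempt.

```python
# Dense seminormal-equations least squares: solve T x ~= b by factoring
# A = T^T T = R^T R (Cholesky) and doing two triangular solves. This is the
# unstructured O(mn^2) baseline that structured Toeplitz algorithms beat.

import math

def cholesky(a):
    """Return upper-triangular R with R^T R = a (a symmetric positive definite)."""
    n = len(a)
    r = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            s = a[i][j] - sum(r[k][i] * r[k][j] for k in range(i))
            r[i][j] = math.sqrt(s) if i == j else s / r[i][i]
    return r

def seminormal_lstsq(t, b):
    """Least-squares solution of T x ~= b via the seminormal equations."""
    m, n = len(t), len(t[0])
    a = [[sum(t[k][i] * t[k][j] for k in range(m)) for j in range(n)] for i in range(n)]
    tb = [sum(t[k][i] * b[k] for k in range(m)) for i in range(n)]
    r = cholesky(a)
    # Forward substitution: R^T y = T^T b
    y = [0.0] * n
    for i in range(n):
        y[i] = (tb[i] - sum(r[k][i] * y[k] for k in range(i))) / r[i][i]
    # Back substitution: R x = y
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(r[i][k] * x[k] for k in range(i + 1, n))) / r[i][i]
    return x
```

Note that only R and a few vectors are needed after the factorization, which hints at why the paper's second version can avoid storing the triangular factor at all.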