
Kinds of Scalability

  • good scalability
  • parallel scalability

Terms modified by Scalability

  • scalability problem

Selected Abstracts

    Cyclic Enones as Substrates in the Morita-Baylis-Hillman Reaction: Surfactant Interactions, Scope and Scalability with an Emphasis on Formaldehyde

    Abstract Traditionally, cyclic enones and formalin are reactants notorious for displaying problematic behaviour (i.e., poor solubility and low yields) under Morita-Baylis-Hillman (MBH) reaction conditions. The body of research presented herein focuses on the use of surfactants in water as a solvent medium that offers a resolution to many of the issues associated with the MBH reaction. Reaction scope, scalability and small angle X-ray scattering have been studied to assist with the understanding of the reaction mechanism and industrial application. A comparison against known literature methods for reaction scale-up is also discussed. [source]

    ChemInform Abstract: Copper-Mediated N- and O-Arylations with Arylboronic Acids in a Continuous Flow Microreactor: A New Avenue for Efficient Scalability.

    CHEMINFORM, Issue 14 2009
    Brajendra K. Singh
    Abstract ChemInform is a weekly Abstracting Service, delivering concise information at a glance that was extracted from about 200 leading journals. To access a ChemInform Abstract of an article which was published elsewhere, please select a "Full Text" option. The original article is trackable via the "References" option. [source]

    Exact and Robust (Self-)Intersections for Polygonal Meshes

    Marcel Campen
    Abstract We present a new technique to implement operators that modify the topology of polygonal meshes at intersections and self-intersections. Depending on the modification strategy, this effectively results in operators for Boolean combinations or for the construction of outer hulls that are suited for mesh repair tasks and accurate mesh-based front tracking of deformable materials that split and merge. By combining an adaptive octree with nested binary space partitions (BSP), we can guarantee exactness (= correctness) and robustness (= completeness) of the algorithm while still achieving higher performance and lower memory consumption than previous approaches. The efficiency and scalability in terms of runtime and memory are obtained by an operation localization scheme. We restrict the essential computations to those cells in the adaptive octree where intersections actually occur. Within those critical cells, we convert the input geometry into a plane-based BSP representation which allows us to perform all computations exactly even with fixed-precision arithmetic. We carefully analyze the precision requirements of the involved geometric data and predicates in order to guarantee correctness and show how minimal input mesh quantization can be used to safely rely on computations with standard floating point numbers. We evaluate our method with respect to precision, robustness, and efficiency. [source]
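
    The plane-based evaluation described above can be illustrated with a minimal sketch: once vertex coordinates are quantized to integers, a point-versus-plane sidedness test reduces to integer arithmetic and is therefore exact. The function names below are ours, not the paper's; this is a toy predicate, not the full BSP machinery.

```python
def plane_from_points(a, b, c):
    """Plane through three integer-coordinate points, as (normal, offset).
    With quantized (integer) input, all arithmetic below is exact."""
    ux, uy, uz = b[0] - a[0], b[1] - a[1], b[2] - a[2]
    vx, vy, vz = c[0] - a[0], c[1] - a[1], c[2] - a[2]
    # Normal is the cross product of the two edge vectors.
    n = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
    d = -(n[0] * a[0] + n[1] * a[1] + n[2] * a[2])
    return n, d

def classify(point, plane):
    """Exact sidedness test: +1 above, -1 below, 0 on the plane."""
    n, d = plane
    s = n[0] * point[0] + n[1] * point[1] + n[2] * point[2] + d
    return (s > 0) - (s < 0)
```

    Because every intermediate value is an integer, no rounding ever flips the sign of the test, which is the property the paper's predicates rely on.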

    Fast BVH Construction on GPUs

    C. Lauterbach
    We present two novel parallel algorithms for rapidly constructing bounding volume hierarchies on manycore GPUs. The first uses a linear ordering derived from spatial Morton codes to build hierarchies extremely quickly and with high parallel scalability. The second is a top-down approach that uses the surface area heuristic (SAH) to build hierarchies optimized for fast ray tracing. Both algorithms are combined into a hybrid algorithm that removes existing bottlenecks in GPU construction performance and scalability, leading to significantly decreased build times. The resulting hierarchies are close in quality to optimized SAH hierarchies, but the construction process is substantially faster, leading to a significant net benefit when both construction and traversal cost are accounted for. Our preliminary results show that current GPU architectures can compete with CPU implementations of hierarchy construction running on multicore systems. In practice, we can construct hierarchies of models with up to several million triangles and use them for fast ray tracing or other applications. [source]
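
    The Morton-code ordering used by the first algorithm can be sketched as follows. This is the standard bit-interleaving construction (here with an illustrative 10-bit quantization per axis), not the authors' GPU kernels; sorting primitives by these keys yields the linear order from which the hierarchy is built.

```python
def expand_bits(v):
    """Spread the low 10 bits of v so there are two zero bits between each."""
    v &= 0x3FF
    v = (v | (v << 16)) & 0x030000FF
    v = (v | (v << 8)) & 0x0300F00F
    v = (v | (v << 4)) & 0x030C30C3
    v = (v | (v << 2)) & 0x09249249
    return v

def morton3d(x, y, z):
    """30-bit 3D Morton code for a point with coordinates in [0, 1)."""
    xi = min(max(int(x * 1024), 0), 1023)
    yi = min(max(int(y * 1024), 0), 1023)
    zi = min(max(int(z * 1024), 0), 1023)
    return (expand_bits(xi) << 2) | (expand_bits(yi) << 1) | expand_bits(zi)

# Sorting primitive centroids by Morton key produces the spatial linear order.
centroids = [(0.9, 0.9, 0.9), (0.1, 0.1, 0.1), (0.5, 0.1, 0.1)]
order = sorted(range(len(centroids)), key=lambda i: morton3d(*centroids[i]))
```

    On a GPU the sort itself would be a parallel radix sort over these integer keys, which is where the high parallel scalability comes from.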

    On-Line Control Architecture for Enabling Real-Time Traffic System Operations

    Srinivas Peeta
    Critical to their effectiveness are the control architectures that provide a blueprint for the efficient transmission and processing of large amounts of real-time data, and consistency-checking and fault tolerance mechanisms to ensure seamless automated functioning. However, the lack of low-cost, high-performance, and easy-to-build computing environments is a key impediment to the widespread deployment of such architectures in the real-time traffic operations domain. This article proposes an Internet-based on-line control architecture that uses a Beowulf cluster as its computational backbone and provides an automated mechanism for real-time route guidance to drivers. To investigate this concept, the computationally intensive optimization modules are implemented on a low-cost 16-processor Beowulf cluster and a commercially available supercomputer, and the performance of these systems on representative computations is measured. The results highlight the effectiveness of the cluster in generating substantial computational performance scalability, and suggest that its performance is comparable to that of the more expensive supercomputer. [source]

    A flexible content repository to enable a peer-to-peer-based wiki

    Udo Bartlang
    Abstract Wikis, being major applications of the Web 2.0, are used for a large number of purposes, such as encyclopedias, project documentation, and coordination, both in open communities and in enterprises. At the application level, users are targeted as both consumers and producers of dynamic content. Yet, this kind of peer-to-peer (P2P) principle is not used at the technical level, which is still dominated by traditional client-server architectures. What is missing is a generic platform that combines the scalability of the P2P approach with, for example, a wiki's requirements for consistent content management in a highly concurrent environment. This paper presents a flexible content repository system that is intended to close this gap by using a hybrid P2P overlay to support scalable, fault-tolerant, consistent, and efficient data operations for the dynamic content of wikis. On the one hand, this paper introduces the generic, overall architecture of the content repository. On the other hand, it describes the major building blocks that enable P2P data management at the system's persistent storage layer, and how these may be used to implement a P2P-based wiki application: (i) a P2P back-end administers a wiki's actual content resources; (ii) on top, P2P service groups act as indexing groups to implement a wiki's search index. Copyright © 2009 John Wiley & Sons, Ltd. [source]

    Implementation, performance, and science results from a 30.7 TFLOPS IBM BladeCenter cluster

    Craig A. Stewart
    Abstract This paper describes Indiana University's implementation, performance testing, and use of a large high performance computing system. IU's Big Red, a 20.48 TFLOPS IBM e1350 BladeCenter cluster, appeared in the 27th Top500 list as the 23rd fastest supercomputer in the world in June 2006. In spring 2007, this computer was upgraded to 30.72 TFLOPS. The e1350 BladeCenter architecture, including two internal networks accessible to users and user applications and two networks used exclusively for system management, has enabled the system to provide good scalability on many important applications while remaining easy to manage. Implementing a system based on the JS21 Blade and PowerPC 970MP processor within the US TeraGrid presented certain challenges, given that Intel-compatible processors dominate the TeraGrid. However, the particular characteristics of the PowerPC have enabled it to be highly popular among certain application communities, particularly users of molecular dynamics and weather forecasting codes. A critical aspect of Big Red's implementation has been a focus on Science Gateways, which provide graphical interfaces to systems supporting end-to-end scientific workflows. Several Science Gateways have been implemented that access Big Red as a computational resource, some via the TeraGrid and some not affiliated with the TeraGrid. In summary, Big Red has been successfully integrated with the TeraGrid, and is used by many researchers locally at IU via grids and Science Gateways. It has been a success in terms of enabling scientific discoveries at IU and, via the TeraGrid, across the US. Copyright © 2009 John Wiley & Sons, Ltd. [source]

    Increasing data reuse of sparse algebra codes on simultaneous multithreading architectures

    J. C. Pichel
    Abstract In this paper the problem of the locality of sparse algebra codes on simultaneous multithreading (SMT) architectures is studied. In this kind of architecture many hardware structures are dynamically shared among the running threads. This puts a lot of stress on the memory hierarchy, and poor locality, both inter-thread and intra-thread, may become a major performance bottleneck. This behavior is even more pronounced when the code is irregular, as is the case for sparse matrix codes. Therefore, techniques that increase the locality of irregular codes on SMT architectures are important for achieving high performance. This paper proposes a data reordering technique specially tuned for these kinds of architectures and codes. It is based on a locality model developed by the authors in previous works. The technique has been tested, first, using a simulator of an SMT architecture, and subsequently on a real architecture, Intel's Hyper-Threading. Important reductions in the number of cache misses have been achieved, even when the number of running threads grows. When applying the locality improvement technique, we also decrease the total execution time and improve the scalability of the code. Copyright © 2009 John Wiley & Sons, Ltd. [source]
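
    As a loose illustration of locality-oriented data reordering (not the authors' model-based technique), one can permute the rows of a CSR-format sparse matrix so that consecutively processed rows touch nearby columns, improving reuse of cache lines of the dense vector in a sparse matrix-vector product. All names here are illustrative assumptions.

```python
def reorder_rows_by_locality(indptr, indices):
    """Toy reordering: sort rows of a CSR matrix by their mean column index,
    so that consecutive rows reference nearby columns (better cache reuse).
    indptr/indices follow the usual CSR convention."""
    nrows = len(indptr) - 1

    def mean_col(r):
        cols = indices[indptr[r]:indptr[r + 1]]
        return sum(cols) / len(cols) if cols else float('inf')

    # Returns a permutation of row indices; empty rows sort last.
    return sorted(range(nrows), key=mean_col)
```

    Real locality models are far more sophisticated (they account for inter-thread sharing on SMT), but the permutation-of-rows framing is the same.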

    Security in distributed metadata catalogues

    Nuno Santos
    Abstract Catalogue services provide the discovery and location mechanisms that allow users and applications to locate data on Grids. Replication is a highly desirable feature in these services, since it provides the scalability and reliability required on large data Grids and is the basis for federating catalogues from different organizations. Grid catalogues are often used to store sensitive data and must have access control mechanisms to protect their data. Replication has to take this security policy into account, making sure that replicated information cannot be abused but allowing some flexibility such as selective replication for the sites depending on the level of trust in them. In this paper we discuss the security requirements and implications of several replication scenarios for Grid catalogues based on experiences gained within the EGEE project. Using the security infrastructure of the EGEE Grid as a basis, we then propose a security architecture for replicated Grid catalogues, which, among other features, supports partial and total replication of the security mechanisms on the master. The implementation of this architecture in the AMGA metadata catalogue of the EGEE project is then described, including its application to a complex scenario in a biomedical application. Copyright © 2008 John Wiley & Sons, Ltd. [source]

    Performance analysis of a semantics-enabled service registry

    W. Fang
    Abstract Service discovery is a critical task in service-oriented architectures. In this paper, we study GRIMOIRES, the semantics-enabled service registry of the OMII software distribution, from a performance perspective. We study the scalability of GRIMOIRES against the amount of information that has been published into it. The methodology we use and the data we present are helpful for researchers to understand the performance characteristics of the registry and, more generally, of semantics-enabled service discovery. Based on this experimentation, we claim that GRIMOIRES is an efficient semantics-aware service discovery engine. Copyright © 2007 John Wiley & Sons, Ltd. [source]

    Distributed end-host multicast algorithms for the Knowledge Grid

    Wanqing Tu
    Abstract The Knowledge Grid built on top of the peer-to-peer (P2P) network has been studied to implement scalable, available and semantic-based querying. In order to improve the efficiency and scalability of querying, this paper studies the problem of multicasting queries in the Knowledge Grid. An m-dimensional irregular mesh is a popular overlay topology of P2P networks. We present a set of novel distributed algorithms on top of an m-dimensional irregular mesh overlay that provide end-host multicast services with short delay and low network resource consumption. Our end-host multicast fully utilizes the advantages of an m-dimensional mesh to construct a two-layer architecture. Compared to previous approaches, the novelty and contribution here are: (1) cluster formation that partitions the group members into clusters in the lower layer, where each cluster consists of a small number of members; (2) cluster core selection that searches, for each cluster, for a core with the minimum sum of overlay hops to all other cluster members; (3) weighted shortest path tree construction that guarantees the minimum number of shortest paths to be occupied by the multicast traffic; (4) distributed multicast routing that directs the multicast messages to be efficiently distributed along the two-layer multicast architecture in parallel, without global control; the routing scheme enables the packets to be transmitted to remote end hosts within short delays through some common shortest paths; and (5) multicast path maintenance that restores normal communication once a membership alteration appears. Simulation results show that our end-host multicast can distributively achieve multicast services with shorter delays and lower network resource consumption than some well-known end-host multicast systems. Copyright © 2006 John Wiley & Sons, Ltd. [source]
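
    Step (2), cluster core selection, can be sketched with plain breadth-first search over the overlay graph: the core is the member minimizing the sum of overlay hops to all other members. The adjacency-dict representation and function names are our illustrative assumptions, and the real algorithm runs distributively rather than over a global graph.

```python
from collections import deque

def hop_counts(adj, src):
    """BFS hop distances from src in an unweighted overlay graph
    given as {node: [neighbours]}."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def select_core(adj, members):
    """Pick the member with minimum total overlay hops to all other members."""
    best, best_cost = None, float('inf')
    for m in members:
        dist = hop_counts(adj, m)
        cost = sum(dist[x] for x in members)
        if cost < best_cost:
            best, best_cost = m, cost
    return best
```

    For example, on a five-node path overlay with members at both ends and the middle, the middle member is selected as the core.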

    Parallel space-filling curve generation through sorting

    J. Luitjens
    Abstract In this paper we consider the scalability of parallel space-filling curve generation as implemented through parallel sorting algorithms. Multiple sorting algorithms are studied and results show that space-filling curves can be generated quickly in parallel on thousands of processors. In addition, performance models are presented that are consistent with measured performance and offer insight into performance on still larger numbers of processors. At large numbers of processors, the scalability of adaptive mesh refinement codes depends on the individual components of the adaptive solver. One such component is the dynamic load balancer. In adaptive mesh refinement codes, the mesh is constantly changing, resulting in load imbalance among the processors and requiring a load-balancing phase. The load balancing may occur often, requiring the load balancer to perform quickly. One common method for dynamic load balancing is to use space-filling curves. Space-filling curves, in particular the Hilbert curve, generate good partitions quickly in serial. However, at tens and hundreds of thousands of processors serial generation of space-filling curves will hinder scalability. In order to avoid this issue we have developed a method that generates space-filling curves quickly in parallel by reducing the generation to integer sorting. Copyright © 2007 John Wiley & Sons, Ltd. [source]
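
    The reduction of curve generation to integer sorting can be illustrated in two dimensions: assign each mesh cell a key along a space-filling curve, sort the keys, and cut the sorted order into contiguous chunks, one per processor. This sequential toy uses a Morton (Z-order) key rather than the Hilbert curve, and the parallel sort itself is elided; the names are ours.

```python
def morton2d(x, y, bits=16):
    """Interleave the bits of integer coordinates into a 2D Morton key."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i + 1)  # x bits go to odd positions
        key |= ((y >> i) & 1) << (2 * i)      # y bits go to even positions
    return key

def partition_by_curve(cells, nparts):
    """Order mesh cells along the curve by sorting their keys, then cut the
    sorted list into nparts contiguous, (nearly) equally sized chunks."""
    ranked = sorted(cells, key=lambda c: morton2d(*c))
    size = -(-len(ranked) // nparts)  # ceiling division
    return [ranked[i:i + size] for i in range(0, len(ranked), size)]
```

    Replacing the call to `sorted` with a distributed integer sort is precisely what makes the generation scale, since each processor then holds one contiguous piece of the curve.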

    Parallelization and scalability of a spectral element channel flow solver for incompressible Navier-Stokes equations

    C. W. Hamman
    Abstract Direct numerical simulation (DNS) of turbulent flows is widely recognized to demand fine spatial meshes, small timesteps, and very long runtimes to properly resolve the flow field. To overcome these limitations, most DNS is performed on supercomputing machines. With the rapid development of terascale (and, eventually, petascale) computing on thousands of processors, it has become imperative to consider the development of DNS algorithms and parallelization methods that are capable of fully exploiting these massively parallel machines. A highly parallelizable algorithm for the simulation of turbulent channel flow that allows for efficient scaling on several thousand processors is presented. A model that accurately predicts the performance of the algorithm is developed and compared with experimental data. The results demonstrate that the proposed numerical algorithm is capable of scaling well on petascale computing machines and thus will allow for the development and analysis of high Reynolds number channel flows. Copyright © 2007 John Wiley & Sons, Ltd. [source]

    Seine: a dynamic geometry-based shared-space interaction framework for parallel scientific applications

    L. Zhang
    Abstract While large-scale parallel/distributed simulations are rapidly becoming critical research modalities in academia and industry, their efficient and scalable implementations continue to present many challenges. A key challenge is that the dynamic and complex communication/coordination required by these applications (dependent on the state of the phenomenon being modeled) are determined by the specific numerical formulation, the domain decomposition and/or sub-domain refinement algorithms used, etc. and are known only at runtime. This paper presents Seine, a dynamic geometry-based shared-space interaction framework for scientific applications. The framework provides the flexibility of shared-space-based models and supports extremely dynamic communication/coordination patterns, while still enabling scalable implementations. The design and prototype implementation of Seine are presented. Seine complements and can be used in conjunction with existing parallel programming systems such as MPI and OpenMP. An experimental evaluation using an adaptive multi-block oil-reservoir simulation is used to demonstrate the performance and scalability of applications using Seine. Copyright © 2006 John Wiley & Sons, Ltd. [source]

    Solving the block-Toeplitz least-squares problem in parallel

    P. Alonso
    Abstract In this paper we present two versions of a parallel algorithm to solve the block-Toeplitz least-squares problem on distributed-memory architectures. We derive a parallel algorithm based on the seminormal equations arising from the triangular decomposition of the product T^T T. Our parallel algorithm exploits the displacement structure of the Toeplitz-like matrices using the Generalized Schur Algorithm to obtain the solution in O(mn) flops instead of the O(mn^2) flops of the algorithms for non-structured matrices. The strong regularity of the previous product of matrices and an appropriate computation of the hyperbolic rotations improve the stability of the algorithms. We have reduced the communication cost of previous versions, and have also reduced the memory access cost by appropriately arranging the elements of the matrices. Furthermore, the second version of the algorithm has a very low spatial cost, because it does not store the triangular factor of the decomposition. The experimental results show a good scalability of the parallel algorithm on two different clusters of personal computers. Copyright © 2005 John Wiley & Sons, Ltd. [source]
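
    A dense, unstructured sketch of the seminormal-equations approach may clarify what is being solved: form T^T T and T^T b explicitly and solve the square system, costing O(mn^2) flops, which is exactly the cost the structured Generalized Schur approach avoids. All function names are ours; this is a reference formulation, not the paper's parallel algorithm.

```python
def toeplitz(c, r):
    """Dense Toeplitz matrix from first column c and first row r (c[0] == r[0])."""
    n, m = len(c), len(r)
    return [[c[i - j] if i >= j else r[j - i] for j in range(m)] for i in range(n)]

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small square system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def toeplitz_lstsq(c, r, b):
    """Least squares via the (semi)normal equations T^T T x = T^T b.
    The structured O(mn) algorithms in the paper replace this dense step."""
    T = toeplitz(c, r)
    m, n = len(T), len(T[0])
    TtT = [[sum(T[k][i] * T[k][j] for k in range(m)) for j in range(n)] for i in range(n)]
    Ttb = [sum(T[k][i] * b[k] for k in range(m)) for i in range(n)]
    return solve(TtT, Ttb)
```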

    A cache-efficient implementation of the lattice Boltzmann method for the two-dimensional diffusion equation

    A. C. Velivelli
    Abstract The lattice Boltzmann method is an important technique for the numerical solution of partial differential equations because it has nearly ideal scalability on parallel computers for many applications. However, to achieve the scalability and speed potential of the lattice Boltzmann technique, the issues of data reusability in cache-based computer architectures must be addressed. Utilizing the two-dimensional diffusion equation, ∂u/∂t = α(∂²u/∂x² + ∂²u/∂y²), this paper examines cache optimization for the lattice Boltzmann method in both serial and parallel implementations. In this study, speedups due to cache optimization were found to be 1.9-2.5 for the serial implementation and 3.6-3.8 for the parallel case in which the domain decomposition was optimized for stride-one access. In the parallel non-cached implementation, the method of domain decomposition (horizontal or vertical) used for parallelization did not significantly affect the compute time. In contrast, the cache-based implementation of the lattice Boltzmann method was significantly faster when the domain decomposition was optimized for stride-one access. Additionally, the cache-optimized lattice Boltzmann method in which the domain decomposition was optimized for stride-one access displayed superlinear scalability on all problem sizes as the number of processors was increased. Copyright © 2004 John Wiley & Sons, Ltd. [source]
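
    A minimal one-dimensional lattice Boltzmann relaxation for pure diffusion (a D1Q2 toy model, far simpler than the paper's two-dimensional cached implementation) illustrates the stream-and-collide structure whose memory traversal the cache optimizations target. Parameters and names are illustrative assumptions.

```python
def lbm_diffusion_1d(rho0, steps, tau=1.0):
    """Minimal D1Q2 lattice Boltzmann scheme for 1D diffusion with periodic
    boundaries; the diffusivity is set by the relaxation time tau."""
    n = len(rho0)
    # Two distributions per site, initialized at equilibrium (weight 1/2 each).
    f_pos = [0.5 * r for r in rho0]   # population moving right
    f_neg = [0.5 * r for r in rho0]   # population moving left
    for _ in range(steps):
        rho = [f_pos[i] + f_neg[i] for i in range(n)]
        # Collide: relax each population toward its equilibrium 0.5 * rho.
        f_pos = [f - (f - 0.5 * r) / tau for f, r in zip(f_pos, rho)]
        f_neg = [f - (f - 0.5 * r) / tau for f, r in zip(f_neg, rho)]
        # Stream: shift populations one site (a stride-one array traversal).
        f_pos = [f_pos[-1]] + f_pos[:-1]
        f_neg = f_neg[1:] + [f_neg[0]]
    return [f_pos[i] + f_neg[i] for i in range(n)]
```

    Mass is conserved exactly while an initial peak spreads out; in 2D, whether the streaming loops run along the stride-one axis of the arrays is exactly the layout question the paper studies.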

    The performance and scalability of SHMEM and MPI-2 one-sided routines on an SGI Origin 2000 and a Cray T3E-600

    Glenn R. Luecke
    Abstract This paper compares the performance and scalability of SHMEM and MPI-2 one-sided routines on different communication patterns for an SGI Origin 2000 and a Cray T3E-600. The communication tests were chosen to represent commonly used communication patterns, ranging from low contention (accessing distant messages, a circular right shift, a binary tree broadcast) to high contention (a 'naive' broadcast and an all-to-all). For all the tests and for small message sizes, the SHMEM implementation significantly outperformed the MPI-2 implementation on both the SGI Origin 2000 and the Cray T3E-600. Copyright © 2004 John Wiley & Sons, Ltd. [source]

    Performance and scalability of MPI on PC clusters

    Glenn R. Luecke
    Abstract The purpose of this paper is to compare the communication performance and scalability of MPI communication routines on a Windows Cluster, a Linux Cluster, a Cray T3E-600, and an SGI Origin 2000. All tests in this paper were run using various numbers of processors and two message sizes. In spite of the fact that the Cray T3E-600 is about 7 years old, it performed best of all machines for most of the tests. The Linux Cluster with the Myrinet interconnect and Myricom's MPI performed and scaled quite well and, in most cases, performed better than the Origin 2000, and in some cases better than the T3E. The Windows Cluster using the Giganet Full Interconnect and MPI/Pro's MPI performed and scaled poorly for small messages compared with all of the other machines. Copyright © 2004 John Wiley & Sons, Ltd. [source]

    Deep Start: a hybrid strategy for automated performance problem searches

    Philip C. Roth
    Abstract To attack the problem of scalability of performance diagnosis tools with respect to application code size, we have developed the Deep Start search strategy, a new technique that uses stack sampling to augment an automated search for application performance problems. Our hybrid approach locates performance problems more quickly and finds performance problems hidden from a more straightforward search strategy. The Deep Start strategy uses stack samples collected as a by-product of normal search instrumentation to select deep starters, functions that are likely to be application bottlenecks. With priorities and careful control of the search refinement, our strategy gives preference to experiments on the deep starters and their callees. This approach enables the Deep Start strategy to find application bottlenecks more efficiently and more effectively than a more straightforward search strategy. We implemented the Deep Start search strategy in the Performance Consultant, Paradyn's automated bottleneck detection component. In our tests, Deep Start found half of our test applications' known bottlenecks between 32% and 59% faster than the Performance Consultant's current search strategy, and finished finding bottlenecks between 10% and 61% faster. In addition to improving the search time, Deep Start often found more bottlenecks than the call graph search strategy. Copyright © 2003 John Wiley & Sons, Ltd. [source]
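
    Deep-starter selection can be sketched as a frequency count over call stack samples: functions that appear on a large fraction of sampled stacks are likely bottlenecks and are examined first. The 50% threshold and function names below are our illustrative assumptions, not Paradyn's actual heuristics.

```python
from collections import Counter

def deep_starters(stack_samples, threshold=0.5):
    """Rank functions by how often they appear anywhere on the sampled call
    stacks, and return those seen in more than `threshold` of the samples;
    these are the candidates to instrument first."""
    seen = Counter()
    for stack in stack_samples:
        for fn in set(stack):        # count each function once per sample
            seen[fn] += 1
    cutoff = threshold * len(stack_samples)
    return sorted((fn for fn, c in seen.items() if c > cutoff),
                  key=lambda fn: -seen[fn])
```

    The point of using samples already gathered by the search instrumentation is that this ranking comes essentially for free.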

    An Independent Evaluation of Four Quantitative Emergency Department Crowding Scales

    Spencer S. Jones MStat
    Background Emergency department (ED) overcrowding has become a frequent topic of investigation. Despite a significant body of research, there is no standard definition or measurement of ED crowding. Four quantitative scales for ED crowding have been proposed in the literature: the Real-time Emergency Analysis of Demand Indicators (READI), the Emergency Department Work Index (EDWIN), the National Emergency Department Overcrowding Study (NEDOCS) scale, and the Emergency Department Crowding Scale (EDCS). These four scales have yet to be independently evaluated and compared. Objectives The goals of this study were to formally compare four existing quantitative ED crowding scales by measuring their ability to detect instances of perceived ED crowding and to determine whether any of these scales provide a generalizable solution for measuring ED crowding. Methods Data were collected at two-hour intervals over 135 consecutive sampling instances. Physician and nurse agreement was assessed using weighted kappa statistics. The crowding scales were compared via correlation statistics and their ability to predict perceived instances of ED crowding. Sensitivity, specificity, and positive predictive values were calculated at site-specific cut points and at the recommended thresholds. Results All four of the crowding scales were significantly correlated, but their predictive abilities varied widely. NEDOCS had the highest area under the receiver operating characteristic curve (AROC) (0.92), while EDCS had the lowest (0.64). The recommended thresholds for the crowding scales were rarely exceeded; therefore, the scales were adjusted to site-specific cut points. At a site-specific cut point of 37.19, NEDOCS had the highest sensitivity (0.81), specificity (0.87), and positive predictive value (0.62). Conclusions At the study site, the suggested thresholds of the published crowding scales did not agree with providers' perceptions of ED crowding. Even after adjusting the scales to site-specific thresholds, a relatively low prevalence of ED crowding resulted in unacceptably low positive predictive values for each scale. These results indicate that these crowding scales lack scalability and do not perform as designed in EDs where crowding is not the norm. However, two of the crowding scales, EDWIN and NEDOCS, and one of the READI subscales, bed ratio, yielded good predictive power (AROC >0.80) of perceived ED crowding, suggesting that they could be used effectively after a period of site-specific calibration at EDs where crowding is a frequent occurrence. [source]
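
    The site-specific evaluation above amounts to computing sensitivity, specificity, and positive predictive value of a scale against the binary "perceived crowded" reference at a chosen cut point. The sketch below shows the standard calculation; the data in the test are invented, and only the 37.19 cut point echoes the NEDOCS value reported in the abstract.

```python
def confusion_stats(scores, crowded, cut):
    """Sensitivity, specificity, and positive predictive value of a crowding
    scale at one cut point, against a binary perceived-crowding reference."""
    tp = sum(s >= cut and y for s, y in zip(scores, crowded))
    fp = sum(s >= cut and not y for s, y in zip(scores, crowded))
    fn = sum(s < cut and y for s, y in zip(scores, crowded))
    tn = sum(s < cut and not y for s, y in zip(scores, crowded))
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    ppv = tp / (tp + fp) if tp + fp else 0.0
    return sens, spec, ppv
```

    Note how a low prevalence of crowding drags PPV down even when sensitivity and specificity look respectable, which is the paper's central caution.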

    Novel and Efficient Chemoenzymatic Synthesis of D-Glucose 6-Phosphate and Molecular Modeling Studies on the Selective Biocatalysis

    Tatiana Rodríguez-Pérez
    Abstract A concise chemoenzymatic synthesis of glucose 6-phosphate is described. Candida rugosa lipase was found to be an efficient catalyst for both regio- and stereoselective deacetylation of the primary hydroxy group in peracetylated D-glucose. In addition, we report an improved synthesis of 1,2,3,4,6-penta-O-acetyl-β-D-glucopyranose, providing a large-scale procedure for the acetylation of D-glucose without isomerization at the anomeric center. The high overall yield and the easy scalability make this chemoenzymatic strategy attractive for industrial application. Furthermore, molecular modeling of a phosphonate transition-state analog for the enzymatic hydrolysis step supports the substrate selectivity observed with Candida rugosa lipase. (© Wiley-VCH Verlag GmbH & Co. KGaA, 69451 Weinheim, Germany, 2007) [source]

    Dynamic zone topology routing protocol for MANETs

    Mehran Abolhasan
    The limited scalability of proactive and reactive routing protocols has resulted in the introduction of a new generation of routing in mobile ad hoc networks, called hybrid routing. These protocols aim to extend the scalability of such networks beyond several hundred to thousands of nodes by defining a virtual infrastructure in the network. However, many of the hybrid routing protocols proposed to date are designed to function using a common pre-programmed static zone map. Other hybrid protocols reduce flooding by grouping nodes into clusters, each governed by a cluster-head, which may create performance bottlenecks or a single point of failure at each cluster-head node. We propose a new routing strategy in which zones are created dynamically, using a dynamic zone creation algorithm. Therefore, nodes are not restricted to a specific region. Additionally, nodes perform routing and data forwarding in a cooperative manner, which means that in the case of failure, route recalculation is minimised. Routing overheads are further reduced by introducing a number of GPS-based location tracking mechanisms, which reduce the route discovery area and the number of nodes queried to find the required destination. Copyright © 2006 AEIT [source]

    Augmentation of a nearest neighbour clustering algorithm with a partial supervision strategy for biomedical data classification

    EXPERT SYSTEMS, Issue 1 2009
    Sameh A. Salem
    Abstract: In this paper, a partial supervision strategy for a recently developed clustering algorithm, the nearest neighbour clustering algorithm (NNCA), is proposed. The proposed method (NNCA-PS) offers classification capability with a smaller amount of a priori knowledge, where a small number of data objects from the entire data set are used as labelled objects to guide the clustering process towards a better search space. Experimental results show that NNCA-PS gives promising results of 89% sensitivity at 95% specificity when used to segment retinal blood vessels, and a maximum classification accuracy of 99.5% with 97.2% average accuracy when applied to a breast cancer data set. Comparisons with other methods indicate the robustness of the proposed method in classification. Additionally, experiments on parallel environments indicate the suitability and scalability of NNCA-PS in handling larger data sets. [source]
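
    The flavour of partial supervision can be conveyed by a toy nearest-neighbour labelling step: a handful of labelled objects guides the assignment of the remaining data. This illustrates the seeding idea only, not the NNCA-PS algorithm itself; names and the choice of k are our assumptions.

```python
def knn_partial_supervision(labelled, unlabelled, k=3):
    """Each unlabelled point takes the majority label of its k nearest
    labelled neighbours (squared Euclidean distance).
    `labelled` is a list of (point, label) pairs."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    out = []
    for p in unlabelled:
        nearest = sorted(labelled, key=lambda lq: dist2(p, lq[0]))[:k]
        labels = [lab for _, lab in nearest]
        out.append(max(set(labels), key=labels.count))
    return out
```

    In NNCA-PS the labelled objects steer the clustering search space rather than directly classifying points, but the benefit is the same: a small amount of a priori knowledge yields classification capability.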

    On the Application of Inductive Machine Learning Tools to Geographical Analysis

    Mark Gahegan
    Inductive machine learning tools, such as neural networks and decision trees, offer alternative methods for classification, clustering, and pattern recognition that can, in theory, extend to the complex or "deep" data sets that pervade geography. By contrast, traditional statistical approaches may fail, due to issues of scalability and flexibility. This paper discusses the role of inductive machine learning as it relates to geographical analysis. The discussion presented is not based on comparative results or on mathematical description, but instead focuses on the often subtle ways in which the various inductive learning approaches differ operationally, describing (1) the manner in which the feature space is partitioned or clustered, (2) the search mechanisms employed to identify good solutions, and (3) the different biases that each technique imposes. The consequences arising from these issues, when considering complex geographic feature spaces, are then described in detail. The overall aim is to provide a foundation upon which reliable inductive analysis methods can be constructed, instead of depending on piecemeal or haphazard experimentation with the various operational criteria that inductive learning tools call for. Often, it would appear that these criteria are not well understood by practitioners in the geographic sphere, which can lead to difficulties in configuration and operation, and ultimately to poor performance. [source]

    Large-Scale Synthesis of Long Crystalline Cu2-xSe Nanowire Bundles by Water-Evaporation-Induced Self-Assembly and Their Application in Gas Sensing

    Jun Xu
    Abstract By a facile water evaporation process without adding any directing agent, Cu2-xSe nanowire bundles with diameters of 100–300 nm and lengths up to hundreds of micrometers, which comprise crystalline nanowires with diameters of 5–8 nm, are obtained. Experiments reveal the initial formation/stacking of CuSe nanoplates and the subsequent transformation to the Cu2-xSe nanowire bundles. A water-evaporation-induced self-assembly (WEISA) mechanism is proposed, which highlights the driving force of evaporation in promoting the nanoplate stacking, the CuSe-to-Cu2-xSe transformation, and the growth/bundling of the Cu2-xSe nanowires. The simplicity, benignity, scalability, and high yield of the synthesis make this important nanowire material promising for numerous applications. As one example, the use of the Cu2-xSe nanowire bundles as a photoluminescence-type humidity sensor is demonstrated, which shows good sensitivity, ideal linearity, quick response/recovery and a long lifetime over a very wide humidity range at room temperature. [source]

    Development of terabit-class super-networking technologies

    Junichi Murayama
    Abstract We propose terabit-class super-networking technologies, designed to improve the scalability, reliability and performance of optical Internet protocol networks. They comprise both intra- and interlayer traffic engineering techniques. The intralayer techniques include an optical path protection scheme, an electrical load-balancing scheme and a distributed content-caching scheme; these provide an effective and economical way of improving performance and reliability. The interlayer techniques include both traffic-driven and application-driven optical cut-through control schemes and a policy control scheme; these provide an effective and economical way of improving scalability and performance. The feasibility of our technologies has been verified by means of experiments using prototype systems. The results showed that the different techniques can be combined to form a single network architecture for dynamic optical path control. Copyright © 2007 Institute of Electrical Engineers of Japan. Published by John Wiley & Sons, Inc. [source]
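One property any electrical load-balancing scheme must preserve is flow affinity: packets of one flow should follow one path so they arrive in order. The snippet below is a sketch under our own assumptions, not the authors' design; it hashes a flow identifier onto one of several parallel paths, whose names are hypothetical.

```python
# Hedged sketch of flow-hash load balancing (not the paper's scheme):
# a stable hash pins each flow to one of several parallel paths.
import zlib

PATHS = ["path-A", "path-B", "path-C"]  # hypothetical parallel optical paths

def pick_path(src, dst, proto="tcp"):
    """Hash the flow identifier to a stable path choice, so that
    every packet of the same flow takes the same path."""
    key = f"{src}|{dst}|{proto}".encode()
    return PATHS[zlib.crc32(key) % len(PATHS)]

# The same flow always maps to the same path; different flows
# spread (statistically) across the available paths.
assert pick_path("10.0.0.1", "10.0.0.2") == pick_path("10.0.0.1", "10.0.0.2")
print(pick_path("10.0.0.1", "10.0.0.2"))
```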

    Materials Fabricated by Micro- and Nanoparticle Assembly: The Challenging Path from Science to Engineering

    ADVANCED MATERIALS, Issue 19 2009
    Orlin D. Velev
    Abstract We classify the strategies for colloidal assembly and review the diverse potential applications of micro- and nanoparticle structures in materials and device prototypes. The useful properties of the particle assemblies, such as high surface-to-volume ratio, periodicity at the mesoscale, large packing density, and long-range ordering, can be harnessed in optical, electronic, and biosensing devices. We discuss the present and future trends in the colloidal-assembly field, focusing on the challenges of developing fabrication procedures that are rapid and efficiently controlled. We speculate on how the issues of scalability, control, and precision could be addressed, and how the functionality of the assemblies can be increased to better match the needs of technology. [source]

    Agile requirements engineering practices and challenges: an empirical study

    Balasubramaniam Ramesh
    Abstract This paper describes empirical research into agile requirements engineering (RE) practices. Based on an analysis of data collected in 16 US software development organizations, we identify six agile practices. We also identify seven challenges that are created by the use of these practices. We further analyse how this collection of practices helps mitigate some RE risks while exacerbating others. We provide a framework for evaluating the impact and appropriateness of agile RE practices by relating them to RE risks. Two risks that are intractable by agile RE practices emerge from the analysis. First, problems with customer inability and a lack of concurrence among customers significantly impact agile development. Second, risks associated with neglecting non-functional requirements such as security and scalability are a serious concern. Developers should carefully evaluate the risk factors in their project environment to understand whether the benefits of agile RE practices outweigh the costs imposed by the challenges. [source]

    ParCYCLIC: finite element modelling of earthquake liquefaction response on parallel computers

    Jun Peng
    Abstract This paper presents the computational procedures and solution strategy employed in ParCYCLIC, a parallel non-linear finite element program developed from an existing serial code, CYCLIC, for the analysis of cyclic seismically induced liquefaction problems. In ParCYCLIC, finite elements are employed within an incremental plasticity, coupled solid–fluid formulation. A constitutive model developed for simulating liquefaction-induced deformations is a main component of this analysis framework. The elements of the computational strategy, designed for distributed-memory message-passing parallel computer systems, include: (a) an automatic domain decomposer to partition the finite element mesh; (b) nodal ordering strategies to minimize storage space for the matrix coefficients; (c) an efficient scheme for the allocation of sparse matrix coefficients among the processors; and (d) a parallel sparse direct solver. Application of ParCYCLIC to simulate 3-D geotechnical experimental models is demonstrated. The computational results show excellent parallel performance and scalability of ParCYCLIC on parallel computers with a large number of processors. Copyright © 2004 John Wiley & Sons, Ltd. [source]
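Element (a), the domain decomposer, can be illustrated in miniature (our own sketch, not ParCYCLIC's actual partitioner): split a 1-D chain of finite elements into contiguous blocks, one per processor, and identify the interface nodes whose values must be exchanged by message passing.

```python
# Hedged toy sketch of domain decomposition, not ParCYCLIC's code:
# contiguous element blocks per processor, plus the shared nodes
# that would require interprocessor communication.

def partition_1d(n_elems, n_procs):
    """Assign elements 0..n_elems-1 to n_procs contiguous blocks,
    spreading any remainder over the first processors."""
    base, extra = divmod(n_elems, n_procs)
    parts, start = [], 0
    for p in range(n_procs):
        size = base + (1 if p < extra else 0)
        parts.append(list(range(start, start + size)))
        start += size
    return parts

def interface_nodes(parts):
    """Node i+1 sits between elements i and i+1; it is shared when
    consecutive elements land on different processors."""
    return [block[-1] + 1 for block in parts[:-1]]

parts = partition_1d(10, 3)
print(parts)                    # [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(interface_nodes(parts))   # [4, 7]
```

A real decomposer works on an unstructured 3-D mesh and balances both element count and interface size (which drives communication volume), but the goal is the same: few, small interfaces between well-balanced subdomains.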

    A distributed memory parallel implementation of the multigrid method for solving three-dimensional implicit solid mechanics problems

    A. Namazifard
    Abstract We describe the parallel implementation of a multigrid method for unstructured finite element discretizations of solid mechanics problems. We focus on a distributed memory programming model and use the MPI library to perform the required interprocessor communications. We present an algebraic framework for our parallel computations, and describe an object-based programming methodology using Fortran90. The performance of the implementation is measured by solving both fixed- and scaled-size problems on three different parallel computers (an SGI Origin2000, an IBM SP2 and a Cray T3E). The code performs well in terms of speedup, parallel efficiency and scalability. However, the floating point performance is considerably below the peak values attributed to these machines; we also document "lazy" processors on the Origin that degrade the measured performance statistics. The solution of two problems on an SGI Origin2000, an IBM PowerPC SMP and a Linux cluster demonstrates that the algorithm performs well when applied to the unstructured meshes required for practical engineering analysis. Copyright © 2004 John Wiley & Sons, Ltd. [source]
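For readers unfamiliar with multigrid, a serial two-grid V-cycle for the 1-D Poisson problem -u'' = f sketches the ingredients (smoothing, restriction, coarse solve, interpolation) that the paper parallelizes. This is our own minimal illustration, not the authors' Fortran90 code; grid size and cycle counts are arbitrary choices.

```python
# Hedged sketch of a two-grid V-cycle for -u'' = f on [0, 1] with
# homogeneous boundary conditions: weighted-Jacobi smoothing,
# full-weighting restriction, and linear interpolation.

def residual(u, f, h):
    """r = f - A u for the standard 3-point Laplacian stencil."""
    r = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        r[i] = f[i] - (2 * u[i] - u[i - 1] - u[i + 1]) / h ** 2
    return r

def jacobi(u, f, h, sweeps=3, w=2 / 3):
    """Weighted Jacobi: damps high-frequency error components."""
    for _ in range(sweeps):
        r = residual(u, f, h)
        for i in range(1, len(u) - 1):
            u[i] += w * r[i] * h ** 2 / 2  # diagonal of A is 2/h^2
    return u

def two_grid(u, f, h):
    u = jacobi(u, f, h)                       # pre-smooth
    r = residual(u, f, h)
    nc = (len(u) - 1) // 2 + 1                # coarse grid size
    rc = [0.0] * nc
    for i in range(1, nc - 1):                # full-weighting restriction
        rc[i] = 0.25 * (r[2 * i - 1] + 2 * r[2 * i] + r[2 * i + 1])
    ec = jacobi([0.0] * nc, rc, 2 * h, sweeps=50)  # near-exact coarse solve
    for j in range(1, len(u) - 1):            # interpolate and correct
        u[j] += ec[j // 2] if j % 2 == 0 else 0.5 * (ec[j // 2] + ec[j // 2 + 1])
    return jacobi(u, f, h)                    # post-smooth

n, h = 17, 1.0 / 16
f = [1.0] * n
u = [0.0] * n
for _ in range(10):
    u = two_grid(u, f, h)
norm = max(abs(x) for x in residual(u, f, h))
print(f"max residual after 10 V-cycles: {norm:.2e}")
```

In the paper's setting, each of these steps is distributed: the fine and coarse grids are partitioned across processors and the smoothing and transfer operators require MPI exchanges at subdomain boundaries.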