Application Development
Selected Abstracts

GridBLAST: a Globus-based high-throughput implementation of BLAST in a Grid computing framework
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 13 2005
Arun Krishnan. Article first published online: 24 JUN 2005
Abstract: Improvements in the performance of processors and networks have made it feasible to treat collections of workstations, servers, clusters and supercomputers as integrated computing resources, or Grids. However, the very heterogeneity that is the strength of computational and data Grids can also make application development for such an environment extremely difficult. Application development in a Grid computing environment faces significant challenges in the form of problem granularity, latency and bandwidth issues, as well as job scheduling. Existing Grid technologies limit the development of Grid applications to certain classes, namely embarrassingly parallel, hierarchically parallel, workflow and database applications. Of these classes, embarrassingly parallel applications are the easiest to develop in a Grid computing framework. The work presented here deals with creating a Grid-enabled, high-throughput, standalone version of a bioinformatics application, BLAST, using Globus as the Grid middleware. BLAST is a sequence alignment and search technique that is embarrassingly parallel in nature and thus amenable to adaptation to a Grid environment. A detailed methodology for creating the Grid-enabled application is presented, which can be used as a template for the development of similar applications. The application has been tested on a 'mini-Grid' testbed, and the results show that for large problem sizes a distributed, Grid-enabled version can significantly reduce execution times. Copyright © 2005 John Wiley & Sons, Ltd. [source]
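The property that makes BLAST embarrassingly parallel is that each query sequence can be searched against the database independently, so a run can be decomposed by splitting the query set into chunks and submitting one job per chunk. Below is a minimal sketch of that decomposition, assuming the classic blastall command line; the file names and chunk count are illustrative, not the paper's actual scripts.

```python
# Split a multi-sequence FASTA query file into N chunks and emit one BLAST
# invocation per chunk; each line printed is an independent job that the Grid
# middleware can schedule. File names and blastall flags are assumptions.

def split_fasta(path, n_chunks):
    """Read a FASTA file and partition its records round-robin into n_chunks lists."""
    with open(path) as f:
        records, current = [], []
        for line in f:
            if line.startswith(">") and current:
                records.append("".join(current))
                current = []
            current.append(line)
        if current:
            records.append("".join(current))
    return [records[i::n_chunks] for i in range(n_chunks)]

def write_chunks_and_jobs(chunks, db="nr"):
    """Write each chunk to disk and return one BLAST command line per chunk."""
    jobs = []
    for i, chunk in enumerate(chunks):
        chunk_file = f"query_chunk_{i}.fasta"
        with open(chunk_file, "w") as f:
            f.writelines(chunk)
        jobs.append(f"blastall -p blastp -d {db} -i {chunk_file} -o result_{i}.txt")
    return jobs

if __name__ == "__main__":
    for job in write_chunks_and_jobs(split_fasta("queries.fasta", 8)):
        print(job)  # hand each line to the scheduler; concatenate results afterwards
```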
Novel software architecture for rapid development of magnetic resonance applications
CONCEPTS IN MAGNETIC RESONANCE, Issue 3 2002
Josef Debbins
Abstract: As the pace of clinical magnetic resonance (MR) procedures grows, the need for an MR scanner software platform on which developers can rapidly prototype, validate, and produce product applications becomes paramount. A software architecture has been developed for a commercial MR scanner that employs state-of-the-art software technologies, including Java, C++, DICOM, and XML. This system permits graphical (drag-and-drop) assembly of applications built on simple processing building blocks, including pulse sequences, a user interface, reconstruction and postprocessing, and database control. The application developer (researcher or commercial) can assemble these building blocks to create custom applications. The developer can also write source code directly to create new building blocks and add these to the collection of components, which can be distributed worldwide over the internet. The application software and its components are developed in Java, which assures platform portability across any host computer that supports a Java Virtual Machine. The downloaded executable portion of the application is executed as compiled C++ code, which assures mission-critical real-time execution during fast MR acquisition and data processing on dedicated embedded hardware that supports C or C++. This combination permits flexible and rapid MR application development across virtually any combination of computer configurations and operating systems, yet allows very high-performance execution on actual scanner hardware. Applications, including prescan, are inherently real-time enabled and can be aggregated and customized to form "superapplications," wherein one or more applications work with another to accomplish the clinical objective with a very fast transition between applications. © 2002 Wiley Periodicals, Inc. Concepts in Magnetic Resonance (Magn Reson Engineering) 15: 216–237, 2002 [source]

Cross-domain authorization for federated virtual organizations using the myVocs collaboration environment
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 4 2009
Jill Gemmill
Abstract: This paper describes our experiences building and working with the reference implementation of myVocs (my Virtual Organization Collaboration System). myVocs provides a flexible environment for exploring new approaches to security, application development, and access control built from Internet services without a central identity repository. The myVocs framework enables virtual organization (VO) self-management across unrelated security domains for multiple, unrelated VOs. By leveraging the emerging distributed identity management infrastructure, myVocs provides an accessible, secure collaborative environment using standards for federated identity management and open-source software developed through the National Science Foundation Middleware Initiative. The Shibboleth software, an early implementation of the OASIS (Organization for the Advancement of Structured Information Standards) Security Assertion Markup Language (SAML) standard for browser single sign-on, provides the middleware needed to assert identity and attributes across domains so that access-control decisions can be made at each resource based on local policy. The eduPerson object class for the Lightweight Directory Access Protocol (LDAP) provides standardized naming, format, and semantics for a global identifier. We have found that a Shibboleth deployment supporting VOs requires the addition of a new VO service component allowing VOs to manage their own membership and control access to their distributed resources. The myVocs system can be integrated with Grid authentication and authorization using GridShib. Copyright © 2008 John Wiley & Sons, Ltd. [source]
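The authorization pattern the myVocs abstract describes, with attributes asserted by the user's home domain and access decided at each resource from local policy, can be sketched in a few lines. The eduPersonPrincipalName identifier follows the eduPerson schema mentioned above; the isMemberOf attribute values, VO group names, and policy table are invented for illustration.

```python
# A minimal sketch of attribute-based access control at a federated resource:
# the identity provider asserts attributes (e.g. via Shibboleth/SAML), and the
# resource maps those attributes to rights using only its own local policy.

EDUPERSON_PRINCIPAL_NAME = "eduPersonPrincipalName"  # standardized global identifier

# Local policy: which asserted attribute values grant which rights here.
# VO group names are hypothetical.
LOCAL_POLICY = {
    "isMemberOf": {
        "vo:bio-grid:members": {"read"},
        "vo:bio-grid:admins": {"read", "write"},
    },
}

def authorize(asserted_attributes, requested_right):
    """Grant requested_right if any asserted attribute value maps to it locally."""
    for attr, values in asserted_attributes.items():
        granted = LOCAL_POLICY.get(attr, {})
        for value in values:
            if requested_right in granted.get(value, set()):
                return True
    return False

# Attributes as they might arrive from the user's home identity provider.
user = {
    EDUPERSON_PRINCIPAL_NAME: ["alice@university-a.edu"],
    "isMemberOf": ["vo:bio-grid:members"],
}
print(authorize(user, "read"))   # True: VO membership maps to read access
print(authorize(user, "write"))  # False: not in the admins group
```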
Parallel programming on a high-performance application-runtime
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 18 2008
Wojtek James Goscinski
Abstract: High-performance application development remains challenging, particularly for scientists making the transition to a heterogeneous grid environment. In general areas of computing, virtual environments such as Java and .NET have proved successful in fostering application development, allowing users to target and compile to a single environment rather than a range of platforms, instruction sets and libraries. However, existing runtime environments are focused on business and desktop computing, and they do not support the high-performance computing (HPC) abstractions required by e-Scientists. Our work is focused on developing an application-runtime that supports these services natively. The result is a new approach to the development of an application-runtime for HPC: the Motor system has been developed by integrating a high-performance communication library directly within a virtual machine. The Motor message-passing library is integrated alongside, and in cooperation with, other runtime libraries and services while retaining strong message-passing performance. As a result, the application developer is provided with a common environment for HPC application development. This environment supports both procedural languages, such as C, and modern object-oriented languages, such as C#. This paper describes the unique Motor architecture, presents its implementation and demonstrates its performance and use. Copyright © 2008 John Wiley & Sons, Ltd. [source]
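Motor's central idea is message passing provided inside the managed runtime rather than bolted on as an external library. As a rough analogue in Python, mpi4py exposes MPI to a managed language; the sketch below illustrates the programming model only and is not Motor's API. Run it under an MPI launcher, e.g. mpiexec -n 4 python motor_sketch.py.

```python
# Root rank distributes work units and gathers results; worker ranks receive,
# compute, and reply. Python objects are pickled transparently by mpi4py.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

if rank == 0:
    # Distribute one work unit to each worker rank...
    for dest in range(1, size):
        comm.send({"task_id": dest, "payload": list(range(dest))}, dest=dest, tag=0)
    # ...then gather one result from each.
    results = [comm.recv(source=src, tag=1) for src in range(1, size)]
    print("gathered:", results)
else:
    task = comm.recv(source=0, tag=0)
    comm.send(sum(task["payload"]), dest=0, tag=1)
```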
MyCoG.NET: a multi-language CoG toolkit
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 14 2007
A. Paventhan
Abstract: Grid application developers utilize Commodity Grid (CoG) toolkits to access Globus Grid services. Existing CoG toolkits are language-specific and have, for example, been developed for Java, Python and the Matlab scripting environment. In this paper we describe MyCoG.NET, a CoG toolkit supporting multi-language programmability under the Microsoft .NET Framework. MyCoG.NET provides a set of classes and APIs to access Globus Grid services from languages supported by the .NET Common Language Runtime. We demonstrate its programmability using FORTRAN, C++, C# and Java, and discuss its performance over LAN and WAN infrastructures. We present a Grid application, in the field of experimental aerodynamics, as a case study to show how MyCoG.NET can be exploited. We demonstrate how scientists and engineers can create and use domain-specific workflow activity sets for rapid application development using Windows Workflow Foundation, and show how users can easily extend and customize these activities. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Compiling data-parallel programs for clusters of SMPs
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 2-3 2004
Siegfried Benkner
Abstract: Clusters of shared-memory multiprocessors (SMPs) have become the most promising parallel computing platforms for scientific computing. However, SMP clusters significantly increase the complexity of user application development when using the low-level application programming interfaces MPI and OpenMP, forcing users to deal with both distributed-memory and shared-memory parallelization details. In this paper we present extensions of High Performance Fortran (HPF) for SMP clusters which enable the compiler to adopt a hybrid parallelization strategy, efficiently combining distributed-memory with shared-memory parallelism. By means of a small set of new language features, the hierarchical structure of SMP clusters may be specified. This information is utilized by the compiler to derive inter-node data mappings for controlling distributed-memory parallelization across the nodes of a cluster, and intra-node data mappings for extracting shared-memory parallelism within nodes. Additional mechanisms are proposed for specifying inter- and intra-node data mappings explicitly, for controlling specific shared-memory parallelization issues, and for integrating OpenMP routines in HPF applications. The proposed features have been realized within the ADAPTOR and VFC compilers. The parallelization strategy for clusters of SMPs adopted by these compilers is discussed, as well as a hybrid-parallel execution model based on a combination of MPI and OpenMP. Experimental results indicate the effectiveness of the proposed features. Copyright © 2004 John Wiley & Sons, Ltd. [source]
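The hybrid strategy in this abstract maps data twice: across the nodes of the cluster (distributed memory) and across the processors within each node (shared memory). Below is a conceptual Python analogue, with mpi4py standing in for the inter-node MPI layer and a process pool for the intra-node layer; the data set and work function are illustrative assumptions, not output of the ADAPTOR or VFC compilers.

```python
# Two-level parallel sum of squares: MPI ranks own strided slices of the data
# (inter-node mapping); each rank processes its slice with a shared-memory
# worker pool (intra-node mapping); partial sums are combined at rank 0.
from multiprocessing import Pool
from mpi4py import MPI

def work(x):
    return x * x  # stand-in for per-element computation

if __name__ == "__main__":
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Inter-node mapping: each MPI rank takes a strided slice of the data.
    data = list(range(1_000))
    block = data[rank::size]

    # Intra-node mapping: the slice is processed by a local process pool.
    with Pool(processes=4) as pool:
        partial = pool.map(work, block)

    # Combine per-rank partial results back at rank 0.
    totals = comm.gather(sum(partial), root=0)
    if rank == 0:
        print("global sum of squares:", sum(totals))
```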
pyGlobus: a Python interface to the Globus Toolkit™
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 13-15 2002
Keith R. Jackson
Abstract: Developing high-performance, problem-solving environments/applications that allow scientists to easily harness the power of the emerging national-scale 'Grid' infrastructure is currently a difficult task. Although many of the necessary low-level services, e.g. security, resource discovery, and remote access to computation/data resources, are available, it can be a challenge to rapidly integrate them into a new application. To address this difficulty we have begun the development of a Python-based high-level interface to the Grid services provided by the Globus Toolkit. In this paper we explain why rapid application development using Grid services is important, look briefly at a motivating example, and finally examine the design and implementation of the pyGlobus package. Copyright © 2002 John Wiley & Sons, Ltd. [source]
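A sense of what such a high-level interface buys the application developer can be given with a hypothetical wrapper. This is not the pyGlobus API: the class below simply shells out to the Globus 2-era globusrun GRAM client, and the flags shown (-o to stream output, -r to name the resource), the contact string, and the RSL job description are assumptions about a period installation.

```python
# A hypothetical high-level job handle illustrating what a CoG-style Python
# interface abstracts away; a real toolkit would call the GRAM libraries
# directly rather than spawning a subprocess.
import subprocess

class GridJob:
    """Hypothetical handle for a remote Grid job (illustrative only)."""

    def __init__(self, resource, executable, arguments=()):
        # GRAM resource contact string, e.g. "host.example.org/jobmanager-pbs" (assumed).
        self.resource = resource
        clauses = ["(executable=%s)" % executable]
        if arguments:
            clauses.append("(arguments=%s)" % " ".join(arguments))
        self.rsl = "&" + "".join(clauses)  # Globus RSL job description

    def run(self):
        # globusrun flags are assumptions about a Globus 2-era client.
        result = subprocess.run(
            ["globusrun", "-o", "-r", self.resource, self.rsl],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

job = GridJob("host.example.org/jobmanager-pbs", "/bin/hostname")
print(job.run())  # stdout of the remote job, streamed back to the client
```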
The utility of rapid application development in large-scale, complex projects
INFORMATION SYSTEMS JOURNAL, Issue 6 2009
Hilary Berger
Abstract: This paper describes an investigation into the utility of an agile development approach known as rapid application development (RAD) within a large-scale development project conducted within the public sector in the UK. Features of RAD as an 'agile' information systems development method (ISDM) are discussed, and previous research identifying the properties of development environments most conducive to its application is described. A case study is then presented, based upon a long-term qualitative investigation of the 'unbundling' of a commercial RAD-like ISDM. The evidence from this case leads us to question established wisdom about the appropriate adoption and application of agile development within large-scale and complex development environments. [source]

'It's lots of bits of paper and ticks and post-it notes and things . . .': a case study of a rapid application development project
INFORMATION SYSTEMS JOURNAL, Issue 3 2000
Paul Beynon-Davies
Abstract: This paper reports an in-depth case study of a rapid application development (RAD) project. RAD is a recent information systems development method noted for its high levels of user involvement and its use of iterative prototyping. The paper develops a model of the core features of RAD gleaned from literature such as that published on the dynamic systems development method (DSDM). We report an ethnographic study of a RAD project and use this case material to contrast the theory with the practice of RAD. We conclude with a consideration of a number of possible revisions to the prescriptions of RAD, and also discuss the role of RAD within the broader context of IS development. 'It's lots of bits of paper and ticks and post-it notes and things . . . you'll get to understand it, it's not that bad really.' A RAD team member [source]

Business process semi-automation based on business model management
INTELLIGENT SYSTEMS IN ACCOUNTING, FINANCE & MANAGEMENT, Issue 4 2002
Koichi Terai
Abstract: It is important to respond to customers' requirements more rapidly than ever before, due to the recent trend in e-business and its technologies. In order to achieve an agile response, we have to manage business models, reflect changes in those models, and develop or modify IT systems for further changes. This paper proposes a management framework of layered enterprise models. The proposed framework consists of a business model repository and a software repository, and defines three modeling layers of different granularity, namely business modeling, business process modeling and business application modeling, in order to support business modeling and application development. This framework helps us to develop business applications through incremental deployment of analysis, design, and implementation to execute business processes. We have implemented a prototype environment using Java. Each repository's contents are described using XML so that the repositories are interoperable. Copyright © 2003 John Wiley & Sons, Ltd. [source]

A survey of the year 2002 literature on applications of isothermal titration calorimetry
JOURNAL OF MOLECULAR RECOGNITION, Issue 6 2003
Matthew J. Cliff
Abstract: Isothermal titration calorimetry (ITC) is becoming widely accepted as a key instrument in any laboratory in which quantification of biomolecular interactions is a requisite. The method has matured with respect to general acceptance and application development over recent years. The number of publications on ITC has grown exponentially over the last 10 years, reflecting the general utility of the method. Here, all the published works of the year 2002 in this area are surveyed. We review the broad range of systems to which ITC is being directed and classify these into general areas, highlighting key publications of interest. This provides an overview of what can be achieved using this method and what developments are likely to occur in the near future. Copyright © 2003 John Wiley & Sons, Ltd. [source]

Using a modified Shepard's method for optimization of a nanoparticulate Cyclosporine A formulation prepared by a static mixer technique
JOURNAL OF PHARMACEUTICAL SCIENCES, Issue 2 2008
Dionysios Douroumis
Abstract: An innovative methodology has been used for the formulation development of Cyclosporine A (CyA) nanoparticles. In the present study the static mixer technique, a novel method for producing nanoparticles, was employed. The formulation optimum was calculated by the modified Shepard's method (MSM), an advanced data analysis technique not previously adopted in pharmaceutical applications. Controlled precipitation was achieved by injecting the organic CyA solution rapidly into an aqueous protective solution by means of a static mixer. The computer-based MSM was implemented for data analysis, visualization, and application development. For the optimization studies, the gelatin/Lipoid S75 amounts and the organic/aqueous phase ratio were selected as independent variables, with the obtained particle size as the dependent variable. The optimum predicted formulation was characterized by cryo-TEM microscopy, particle size measurements, stability, and in vitro release. The produced nanoparticles contain the drug in an amorphous state with decreased amounts of stabilizing agents. The dissolution rate of the lyophilized powder was significantly enhanced in the first 2 h. MSM proved capable of interpreting in detail, and predicting with high accuracy, the optimum formulation, and the static mixer technique proved capable of producing CyA nanoparticulate formulations. © 2007 Wiley-Liss, Inc. and the American Pharmacists Association J Pharm Sci 97:919–930, 2008 [source]
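The modified Shepard's method at the core of this study is an inverse-distance-weighted interpolator over scattered samples, shown here in the Franke–Little form in which each sample influences only points within a radius R. A minimal sketch follows; the design points and particle sizes are invented stand-ins for the paper's gelatin/Lipoid amounts and phase-ratio data.

```python
# Modified Shepard (Franke-Little) interpolation: the value at x is a weighted
# average of nearby samples, with weights that vanish smoothly at distance R
# and grow without bound as x approaches a sample (exact interpolation).
import math

def msm_interpolate(x, points, values, radius):
    """Interpolate at x from scattered (point, value) samples within `radius`."""
    num = den = 0.0
    for p, f in zip(points, values):
        d = math.dist(x, p)
        if d == 0.0:
            return f                              # exactly at a sample point
        if d >= radius:
            continue                              # outside this sample's influence
        w = ((radius - d) / (radius * d)) ** 2    # Franke-Little weight
        num += w * f
        den += w
    return num / den if den else None             # None: no sample within radius

# Hypothetical design points: (stabilizer amount, organic/aqueous ratio) -> size (nm)
points = [(1.0, 0.1), (2.0, 0.1), (1.0, 0.3), (2.0, 0.3)]
sizes = [420.0, 310.0, 380.0, 250.0]
print(msm_interpolate((1.5, 0.2), points, sizes, radius=1.5))
```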
Twenty-four well plate miniature bioreactor system as a scale-down model for cell culture process development
BIOTECHNOLOGY & BIOENGINEERING, Issue 1 2009
Aaron Chen
Abstract: Increasing the throughput and efficiency of cell culture process development has become increasingly important in order to rapidly screen and optimize cell culture media and process parameters. This study describes the application of a miniaturized bioreactor system as a scaled-down model for cell culture process development, using a CHO cell line expressing a recombinant protein. The microbioreactor system (M24) provides non-invasive online monitoring and control of process parameters such as pH, dissolved oxygen (DO), and temperature at the individual-well level. A systematic evaluation of the M24 for cell culture process applications was successfully completed. Several challenges were initially identified, including uneven gas distribution in the wells due to system design and lot-to-lot variability, foaming caused by the sparging required for active DO control, and a pH control limitation under conditions of minimal dissolved CO2. A high degree of variability was found, which was addressed by changes in the system design. The foaming issue was resolved by the addition of anti-foam, a reduced sparge rate, and the elimination of DO control. The pH control limitation was overcome by a single manual liquid base addition. Intra-well reproducibility, as indicated by measurements of process parameters, cell growth, metabolite profiles, protein titer, and protein quality, and scale-equivalency between the M24 and 2 L bioreactor cultures were very good. This evaluation has shown the feasibility of utilizing the M24 as a scale-down tool for cell culture application development under industrially relevant process conditions. Biotechnol. Bioeng. 2009;102: 148–160. © 2008 Wiley Periodicals, Inc. [source]

Canadian Approaches to E-Business Implementation
CANADIAN JOURNAL OF ADMINISTRATIVE SCIENCES, Issue 1 2003
Rebecca Grant
Abstract: As Web-based business nears the decade mark, it is appropriate to take stock of its progress and the degree to which it has met or fallen short of predictions. This paper examines the extent to which companies have followed the advice of experts when it comes to designing an organizational structure for their e-business initiatives. It compares the prevalence of centralized versus business-unit-level decision-making in Canadian companies with e-business experience. It also explores who is given responsibility for application development, back-end integration, and infrastructure maintenance. The data demonstrate that the use of independent contractors has increased. However, outsourcing in general is less prevalent than predicted, and implementation driven by business units is rare. Furthermore, the practices of companies with well-established initiatives differ significantly from those of the less experienced, offering important lessons for newcomers to e-business. [source]