Software Development

Terms modified by Software Development

  • software development process

Selected Abstracts


    Active Learning through Modeling: Introduction to Software Development in the Business Curriculum,

    DECISION SCIENCES JOURNAL OF INNOVATIVE EDUCATION, Issue 2 2004
    Boris Roussev
ABSTRACT Modern software practices call for the active involvement of business people in the software process. Therefore, programming has become an indispensable part of the information systems component of the core curriculum at business schools. In this paper, we present a model-based approach to teaching introduction to programming to general business students. The theoretical underpinnings of the new approach are metaphor, abstraction, modeling, Bloom's classification of cognitive skills, and active learning. We employ models to introduce the basic programming constructs and their semantics. To this end, we use statecharts to model an object's state, and the environment model of evaluation as a virtual machine interpreting programs written in JavaScript. The adoption of this approach helps learners build a sound mental model of the notion of a computational process. Scholastic performance, student evaluations, our experiential observations, and a multiple regression statistical test confirm that the proposed ideas significantly improve the course. [source]


    Digital artifacts as quasi-objects: Qualification, mediation, and materiality

    JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 12 2009
    Hamid R. Ekbia
Digital artifacts have novel properties that largely derive from the processes that mediate their creation, and that can be best understood by a close examination of such processes. This paper introduces the concept of "quasi-object" to characterize these objects and elucidate the activities that comprise their mediations. A case study of "bugs" is analyzed to illustrate exemplary activities of justification, qualification, and binding in the process of bug fixing in Free/Open Source Software development. The findings of the case study lead to broader reflections on the character of digital artifacts in general. The relationship of "quasi-object" to other similar concepts is explored. [source]


    A quality-of-service-based framework for creating distributed heterogeneous software components

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 12 2002
    Rajeev R. Raje
    Abstract Component-based software development offers a promising solution for taming the complexity found in today's distributed applications. Today's and future distributed software systems will certainly require combining heterogeneous software components that are geographically dispersed. For the successful deployment of such a software system, it is necessary that its realization, based on assembling heterogeneous components, not only meets the functional requirements, but also satisfies the non-functional criteria such as the desired quality of service (QoS). In this paper, a framework based on the notions of a meta-component model, a generative domain model and QoS parameters is described. A formal specification based on two-level grammar is used to represent these notions in a tightly integrated way so that QoS becomes a part of the generative domain model. A simple case study is described in the context of this framework. Copyright © 2002 John Wiley & Sons, Ltd. [source]


    The Role of Effective Modeling in the Development of Self-Efficacy: The Case of the Transparent Engine,

    DECISION SCIENCES JOURNAL OF INNOVATIVE EDUCATION, Issue 1 2007
    Kevin P. Scheibe
    ABSTRACT Computing technology augments learning in education in a number of ways. One particular method uses interactive programs to demonstrate complex concepts. The purpose of this article is to examine one type of interactive learning technology, the transparent engine. The transparent engine allows instructors and students to view and directly interact with educational concepts such as Web-enabled software development. The article first presents a framework describing transparent engines. The framework details four types of transparent engines: (1) enactive mastery/manipulatable, (2) enactive mastery/nonmanipulatable, (3) vicarious experience/manipulatable, and (4) vicarious experience/nonmanipulatable. Following this, we present the results of an experiment designed to examine this framework by testing its predictions for one quadrant, vicarious experience/nonmanipulatable. The results support the framework in that students taught concepts with the aid of the vicarious experience/nonmanipulatable transparent engine had significantly higher domain-specific self-efficacy compared to those taught the same concepts without this tool. [source]


    That site looks 88.46% familiar: quantifying similarity of Web page design

    EXPERT SYSTEMS, Issue 3 2005
    Giselle Martine
    Abstract: Web page design guidelines produce a pressure towards uniformity; excessive uniformity lays a Web page designer open to accusations of plagiarism. In the past, assessment of similarity between visual products such as Web pages has involved an uncomfortably high degree of subjectivity. This paper describes a method for measuring perceived similarity of visual products which avoids previous problems with subjectivity, and which makes it possible to pool results from respondents without the need for intermediate coding. This method is based on co-occurrence matrices derived from card sorts. It can also be applied to other areas of software development, such as systems analysis and market research. [source]
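The card-sort measure described above can be made concrete with a short sketch. The function and variable names below are invented for illustration, not taken from the paper: each respondent sorts Web pages into groups, and a co-occurrence matrix counts how often two pages land in the same group, which yields percentages like the "88.46% familiar" of the title.

```python
from itertools import combinations

def cooccurrence_matrix(sorts, items):
    """Build an item-by-item co-occurrence matrix from card sorts.

    Each sort is one respondent's grouping: a list of groups, each a
    set of item names. Cell [i][j] counts how many respondents placed
    items i and j in the same group.
    """
    index = {item: k for k, item in enumerate(items)}
    n = len(items)
    matrix = [[0] * n for _ in range(n)]
    for groups in sorts:
        for group in groups:
            for a, b in combinations(sorted(group), 2):
                i, j = index[a], index[b]
                matrix[i][j] += 1
                matrix[j][i] += 1
    return matrix

def percent_similarity(matrix, i, j, n_respondents):
    """Perceived similarity of items i and j, as a percentage of respondents."""
    return 100.0 * matrix[i][j] / n_respondents
```

With two respondents sorting pages A, B, C, pooling needs no intermediate coding: the matrix cells are directly comparable counts across respondents.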


    Defining Expertise in Software Development While Doing Gender

    GENDER, WORK & ORGANISATION, Issue 4 2007
    Esther Ruiz Ben
The optimism regarding opportunities for women to enter the professionalization process in software development in recent years has not been fully realized, and the gender gap in Germany's information technology (IT) sector still persists. Women are almost completely unrepresented in the technical fields of the German software industry, particularly in small enterprises. In this article, I first offer an overview of the German IT sector's development and current status. Second, I discuss the construction of expertise and gendered meanings in the practice of software development and related implications for the enrolment of women in this field. Gender-stereotypical assumptions about expertise in the practice of software development and structural factors related to the lack of life–work balance programmes, as well as the lack of internal training in most IT companies, contribute to organizational segregation. [source]


    Toward a hybrid model for usability resource allocation in industrial software product development

    HUMAN FACTORS AND ERGONOMICS IN MANUFACTURING & SERVICE INDUSTRIES, Issue 3 2007
    Colleen M. Duffy
The organizational aspects of user-centered software development in a financial services company are presented. The financial services industry sector is one of the industrial sectors to embark on the development of computer software as a consumer product. The nature of business in the service sector predisposes it to encounter difficulties in developing software aimed at meeting customer demands. Lack of familiarity and experience with the product design and implementation processes, as well as reliance on usability for acceptance, are major obstacles encountered. Difficulties, insights, and lessons learned regarding organizational ergonomics issues faced by a user-centered design group are provided, and a hybrid resource distribution model is proposed to guide other service sector companies in their future software development efforts. © 2007 Wiley Periodicals, Inc. Hum Factors Man 17: 245–262, 2007. [source]


    A temporal perspective of the computer game development process

    INFORMATION SYSTEMS JOURNAL, Issue 5 2009
    Patrick Stacey
Abstract. This paper offers an insight into the games software development process from a time perspective by drawing on an in-depth study in a games development organization. The wider market for computer games now exceeds the annual global revenues of cinema. We have, however, only a limited scholarly understanding of how games studios produce games. Games projects require particular attention because their context is unique. Drawing on a case study, the paper offers a theoretical conceptualization of the development process of creative software, such as games software. We found that the process, as constituted by the interactions of developers, oscillates between two modes of practice: routinized and improvised, which sediment and flux the working rhythms in the context. This paper argues that while we may predeterminately lay down the broad stages of creative software development, the activities that constitute each stage, and the transition criteria from one to the next, may be left to the actors in the moment, to the temporality of the situation as it emerges. If all development activities are predefined, as advocated in various process models, this may leave little room for opportunity and the creative fruits that flow from opportunity, such as enhanced features, aesthetics and learning. [source]


    Information and Communications Technology and Auditing: Current Implications and Future Directions

    INTERNATIONAL JOURNAL OF AUDITING, Issue 2 2010
    Kamil Omoteso
This exploratory study assesses, from a structuration theory perspective, the impact information and communications technology (ICT) tools and techniques are currently having on audit tasks, auditors (internal and external) and the organisations they work for from the point of view of coordination, control, authority and structure. Based on a triangulation of interview and questionnaire techniques, the findings indicate that ICT is re-shaping auditors' roles and outputs as well as audit organisations' structures. The findings also project the view that continuous auditing, artificial intelligence and CobiT are expected to gain more prominence, while a need was also seen for new software development to help auditors match the complexity of their clients' information systems. The study's results reveal the current state of affairs of the relationship between ICT and auditing against the backdrop of continuous global ICT sophistication, thereby updating the ICT audit literature, and indicate the likely future direction of this relationship. [source]


    Aspect-enhanced goal-driven sequence diagram

    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 8 2010
    Jonathan Lee
    Recently, aspect-oriented approaches have resulted in a tremendous impact on the processing of broadly scoped properties during the development of software systems. However, the weaving mechanism of these crosscutting concerns cannot be easily represented with the extant unified modeling language (UML) notation at the early stage of software development life cycle. As an attempt toward the investigation of how the crosscutting behavior takes place, we proposed, in this work, an aspect-enhanced goal-driven approach to modeling the aspectual behavior in UML state transition diagrams and sequence diagrams with the proposed interaction operators based on the aspectual weaving semantics. By introducing the proposed interaction operations in the UML combined fragment, UML sequence diagrams can be further enhanced to support the modeling of the interactions between aspectual and base behavior in the analysis and design stage of software development. To further exemplify our points, the meeting scheduler system is chosen as a vehicle to illustrate the proposed approach. © 2010 Wiley Periodicals, Inc. [source]


Dynamic update of Java applications: balancing change flexibility vs programming transparency

    JOURNAL OF SOFTWARE MAINTENANCE AND EVOLUTION: RESEARCH AND PRACTICE, Issue 2 2009
    Allan Raundahl Gregersen
Abstract The ability to dynamically change the behavior of an application is becoming an important issue in contemporary rich client software development. Programmers benefit from dynamic updates not only during the development of concurrent applications, where recreating complex application states during test and debugging can be avoided, but also at post-deployment time, where applications can be updated transparently without going through the well-known halt, redeploy and restart scheme. In this paper, we explain how our dynamic update framework achieves transparent dynamic updates of running Java applications while guaranteeing both type and thread safety. A novel feature of our approach is that it supports full redefinition of classes by allowing changes to the type hierarchy. Our approach is based on a lightweight runtime system, which is injected into an application via bytecode transformations at class loading. We show how our approach can add dynamic update capabilities to rich client development by integrating it with the NetBeans rich client platform. Performance experiments on our NetBeans implementation show that the overhead of our approach is low when applied to component application programming interface classes. To the best of our knowledge no other existing approach achieves the same level of low overhead and programming transparency. Copyright © 2009 John Wiley & Sons, Ltd. [source]
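For readers unfamiliar with dynamic update, the core idea can be illustrated in a few lines of Python. This is only a loose analogy with invented class names: the paper's framework works on the JVM via bytecode transformation at class loading and, unlike this sketch, guarantees type and thread safety.

```python
class CounterV1:
    """Original version of a running component."""
    def __init__(self):
        self.count = 0

    def bump(self):
        self.count += 1

class CounterV2:
    """Redefined version: changed behavior, same state layout."""
    def bump(self):
        self.count += 2

def hot_swap(instances, new_class):
    """Migrate live objects to a redefined class without restarting.

    Instance state (the __dict__) is preserved; only behavior changes,
    so the application avoids the halt, redeploy and restart scheme.
    """
    for obj in instances:
        obj.__class__ = new_class

counter = CounterV1()
counter.bump()                 # old behavior: +1
hot_swap([counter], CounterV2)
counter.bump()                 # new behavior: +2, state preserved
```

The hard parts the paper addresses (changes to the type hierarchy, updating code mid-execution safely) are exactly what this toy version sidesteps.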


    Continuous evolution through software architecture evaluation: a case study

    JOURNAL OF SOFTWARE MAINTENANCE AND EVOLUTION: RESEARCH AND PRACTICE, Issue 5 2006
    Christian Del Rosso
    Abstract The need for software architecture evaluation is based on the realization that software development, like all engineering disciplines, is a process of continuous modeling and refinement. Detecting architectural problems before the bulk of development work is done allows re-architecting activities to take place in due time, without having to rework what has already been done. At the same time, tuning activities allow software performance to be enhanced and maintained during the software lifetime. When dealing with product families, architectural evaluations have an even more crucial role: the evaluations are targeted to a set of common products. We have tried different approaches to software assessments with our mobile phone software, an embedded real-time software platform, which must support an increasingly large number of different product variants. In this paper, we present a case study and discuss the experiences gained with three different assessment techniques that we have worked on during the past five years. The assessment techniques presented include scenario-based software architecture assessment, software performance assessment and experience-based assessment. The various evaluation techniques are complementary and, when used together, constitute a tool which a software architect must be aware of in order to maintain and evolve a large software intensive system. Copyright © 2006 John Wiley & Sons, Ltd. [source]


    Organizational evolution of digital signal processing software development

    JOURNAL OF SOFTWARE MAINTENANCE AND EVOLUTION: RESEARCH AND PRACTICE, Issue 4 2006
    Susanna Pantsar-Syväniemi
    Abstract A base station, as a network element, has become an increasingly software-intensive system. Digital signal processing (DSP) software is hard real-time software that is a part of the software system needed in a base station. This article reports practical experiences related to organizing the development of embedded software in the telecommunication industry, at Nokia Networks. The article introduces the main factors influencing the development of DSP software and also compares the evolutionary process under study with both selected organizational models for a software product line and a multistage model for the software life cycle. We believe it is vitally important to formulate the organization according to the software architecture, and it is essential to have a dedicated development organization with long-term responsibility for the software. History shows that without long-term responsibility, there is no software reuse. In this paper we introduce a new organizational model for product line development. This new hybrid model clarifies long-term responsibilities in large software organizations with hundreds of staff members and formulates the organization according to the software architecture. Our case needs a couple more constraints to keep it in the evolution stage of the software life cycle. Thus, we extend the evolution phase in the multistage model to make it relevant for embedded, hard real-time software. Copyright © 2006 John Wiley & Sons, Ltd. [source]


    Architectural support in industry: a reflection using C-POSH

    JOURNAL OF SOFTWARE MAINTENANCE AND EVOLUTION: RESEARCH AND PRACTICE, Issue 1 2005
    R. J. Bril
Abstract Software architecture plays a vital role in the development (and hence maintenance) of large complex systems (containing millions of lines of code) with a long lifetime. It is therefore required that the software architecture also be maintained, i.e., sufficiently documented, clearly communicated, and explicitly controlled during its life-cycle. In our experience, these requirements cannot be met without appropriate support. Commercial off-the-shelf support for architectural maintenance is still scarcely available, if at all, implying the need to develop appropriate proprietary means. In this paper, we reflect upon software architecture maintenance as undertaken within three organizations at Philips that develop professional systems. We extensively describe the experience gained with introducing and embedding architectural support in these three organizations. We focus on architectural support in the area of software architecture recovery, visualization, analysis, and verification. In our experience, the support must be carried by a number of pillars of software development, and all of these pillars have to go through a change process to ensure sustainable embedding. Managing these changes requires several key roles to be fulfilled in the organization: a champion, a company angel, a change agent, and a target. We call our reflection model C-POSH, which is an acronym for Change management of the four identified pillars of software development: Process, Organization, Software development environment, and Humans. Our experiences will be presented in terms of the C-POSH model. Copyright © 2005 John Wiley & Sons, Ltd. [source]


    Consistent database sampling as a database prototyping approach

    JOURNAL OF SOFTWARE MAINTENANCE AND EVOLUTION: RESEARCH AND PRACTICE, Issue 6 2002
    Jesús Bisbal
    Abstract Requirements elicitation has been reported to be the stage of software development when errors have the most expensive consequences. Users usually find it difficult to articulate a consistent and complete set of requirements at the beginning of a development project. Prototyping is considered a powerful technique to ease this problem by exposing a partial implementation of the software system to the user, who can then identify required modifications. When prototyping data-intensive applications a so-called prototype database is needed. This paper investigates how a prototype database can be built. Two different approaches are analysed, namely test databases and sample databases; the former populates the resulting database with synthetic values, while the latter uses data values from an existing database. The application areas that require prototype databases, in addition to requirements analysis, are also identified. The paper reports on existing research into the construction of both types of prototype databases, and indicates to which type of application area each is best suited. This paper advocates for the use of sample databases when an operational database is available, as is commonly the case in software maintenance and evolution. Domain-relevant data values and integrity constraints will produce a prototype database which will support the information system development process better than synthetic data. The process of extracting a sample database is also investigated. Copyright © 2002 John Wiley & Sons, Ltd. [source]
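The sample-database idea can be sketched in a few lines. This is a hypothetical illustration of consistent sampling (the table shapes and names are invented, not the authors' algorithm): parent rows are sampled first, and child rows are kept only when their foreign keys reference a sampled parent, so the prototype database preserves the operational database's referential integrity.

```python
import random

def consistent_sample(parent, children, key, fk, fraction, seed=0):
    """Draw a referentially consistent sample database.

    parent and children are lists of row dicts. A fraction of parent
    rows is sampled; child rows survive only if their foreign key (fk)
    references a sampled parent key, keeping the sample consistent.
    """
    rng = random.Random(seed)  # seeded so the prototype is reproducible
    k = max(1, round(len(parent) * fraction))
    parent_sample = rng.sample(parent, k)
    sampled_keys = {row[key] for row in parent_sample}
    child_sample = [row for row in children if row[fk] in sampled_keys]
    return parent_sample, child_sample
```

Unlike a synthetic test database, every value in the result is a genuine domain value from the operational data, which is what makes such samples useful for requirements prototyping.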


    A semantic entropy metric

    JOURNAL OF SOFTWARE MAINTENANCE AND EVOLUTION: RESEARCH AND PRACTICE, Issue 4 2002
    Letha H. Etzkorn
    Abstract This paper presents a new semantically-based metric for object-oriented systems, called the Semantic Class Definition Entropy (SCDE) metric, which examines the implementation domain content of a class to measure class complexity. The domain content is determined using a knowledge-based program understanding system. The metric's examination of the domain content of a class provides a more direct mapping between the metric and common human complexity analysis than is possible with traditional complexity measures based on syntactic aspects (software aspects related to the format of the code). Additionally, this metric represents a true design metric that can measure complexity early in the life cycles of software maintenance and software development. The SCDE metric is correlated with analyses from a human expert team, and is also compared to syntactic complexity measures. Copyright © 2002 John Wiley & Sons, Ltd. [source]
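The entropy calculation at the heart of such a metric can be illustrated once a class's domain concepts have been extracted. The sketch below assumes the concept list is already available (the SCDE metric itself obtains it from a knowledge-based program understanding system, which this example does not model):

```python
from collections import Counter
from math import log2

def semantic_entropy(domain_concepts):
    """Shannon entropy of the domain concepts matched in a class.

    A class whose members map evenly onto many distinct domain
    concepts scores higher (more complex) than a cohesive class
    focused on a single concept.
    """
    counts = Counter(domain_concepts)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())
```

A class mapped entirely to one concept scores 0 bits; a class split evenly between two concepts scores 1 bit, matching the intuition that the latter is harder to understand.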


    An empirical study of the influence of departmentalization and organizational position on software maintenance

    JOURNAL OF SOFTWARE MAINTENANCE AND EVOLUTION: RESEARCH AND PRACTICE, Issue 1 2002
    Dowming Yeh
    Abstract Differences between software development and maintenance imply that software maintenance work should be measured and managed somewhat differently from software development. On the other hand, maintenance programmers frequently perform the same tasks as development programmers do. How to departmentalize maintenance and development is thus becoming an issue. Departmentalization in software development and maintenance can be classified into two categories, maintenance separated from development and maintenance jointly with development. Departmentalization has its strengths and weaknesses. In this work, quantitative empirical methods are applied to investigate the influence of departmentalization on fulfillment opportunity, time allocations of activities, problem occurrences, and management process in software maintenance. Seven hypotheses are formed and tested by statistical methods. The result shows that separate organizations demonstrate specialization in software maintenance, but managerial attitudes also aggravate the potential status difference for such organizations. Other major pitfalls for departmentalization are also identified. Copyright © 2002 John Wiley & Sons, Ltd. [source]


    A concept-oriented belief revision approach to domain knowledge recovery from source code

    JOURNAL OF SOFTWARE MAINTENANCE AND EVOLUTION: RESEARCH AND PRACTICE, Issue 1 2001
    Yang Li
    Abstract Domain knowledge is the soul of software systems. After decades of software development, domain knowledge has reached a certain degree of saturation. The recovery of domain knowledge from source code is beneficial to many software engineering activities, in particular, software evolution. In the real world, the ambiguous appearance of domain knowledge embedded in source code constitutes the biggest barrier to recovering reliable domain knowledge. In this paper, we introduce an innovative approach to recovering domain knowledge with enhanced reliability from source code. In particular, we divide domain knowledge into interconnected knowledge slices and match these knowledge slices against the source code. Each knowledge slice has its own authenticity evaluation function which takes the belief of the evidence it needs as input and the authenticity of the knowledge slice as output. Moreover, the knowledge slices are arranged to exchange beliefs with each other through interconnections, i.e. concepts, so that a better evaluation of the authenticity of these knowledge slices can be obtained. The decision on acknowledging recovered knowledge slices can therefore be made more easily. Our approach, rooted as it is in cognitive science and social psychology, is also widely applicable to other knowledge recovery tasks. Copyright © 2001 John Wiley & Sons, Ltd. [source]


    Lessons learned by participants of distributed software development

    KNOWLEDGE AND PROCESS MANAGEMENT: THE JOURNAL OF CORPORATE TRANSFORMATION, Issue 2 2005
    Seija Komi-Sirviö
    The maturation of the technical infrastructure has enabled the emergence and growth of distributed software development. This has created tempting opportunities for companies to distribute their software development, for example, to economically favourable countries so as to gain needed expertise or to get closer to customers. Nonetheless, such distribution potentially creates problems that need to be understood and addressed in order to make possible the gains offered. To clarify and understand the most difficult problems and their nature, a survey of individuals engaged in distributed software development was conducted. The purpose of this survey was to gather and share lessons learned in order to better understand the nature of the software development process when operating in a distributed software development environment and the problems that may be associated with such distributed processes. Through a clear appreciation of the risks associated with distributed development it becomes possible to develop approaches for the mitigation of these risks. This paper presents the results of the survey, focusing on the most serious problems raised by the respondents. Some practical guidelines that have been developed by industry to overcome these problems are also briefly summarized. Copyright © 2005 John Wiley & Sons, Ltd. [source]


    SOFTWARE ENGINEERING CONSIDERATIONS FOR INDIVIDUAL-BASED MODELS

    NATURAL RESOURCE MODELING, Issue 1 2002
    GLEN E. ROPELLA
ABSTRACT. Software design is much more important for individual-based models (IBMs) than it is for conventional models, for three reasons. First, the results of an IBM are the emergent properties of a system of interacting agents that exist only in the software; unlike analytical model results, an IBM's outcomes can be reproduced only by exactly reproducing its software implementation. Second, outcomes of an IBM are expected to be complex and novel, making software errors difficult to identify. Third, an IBM needs 'systems software' that manages populations of multiple kinds of agents, often has nonlinear and multi-threaded process control and simulates a wide range of physical and biological processes. General software guidelines for complex models are especially important for IBMs. (1) Have code critically reviewed by several people. (2) Follow prudent release management practices, keeping careful control over the software as changes are implemented. (3) Develop multiple representations of the model and its software; diagrams and written descriptions of code aid design and understanding. (4) Use appropriate and widespread software tools, which provide numerous major benefits; coding 'from scratch' is rarely appropriate. (5) Test the software continually, following a planned, multi-level, experimental strategy. (6) Provide tools for thorough, pervasive validation and verification. (7) Pay attention to how pseudorandom numbers are generated and used. Additional guidelines for IBMs include: (a) design the model's organization before starting to write code, (b) provide the ability to observe all parts of the model from the beginning, (c) make an extensive effort to understand how the model executes: how often different pieces of code are called and by which objects, and (d) design the software to resemble the system being modeled, which helps maintain an understanding of the software.
Strategies for meeting these guidelines include planning adequate resources for software development, using software professionals to implement models and using tools like Swarm that are designed specifically for IBMs. [source]
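Guideline (7) on pseudorandom numbers connects directly to the first point about reproducibility: an IBM's emergent outcome can be reproduced only if the pseudorandom stream is seeded explicitly. A toy sketch (the random-walk model itself is invented for illustration):

```python
import random

def run_ibm(steps, n_agents, seed):
    """Toy individual-based model: agents take independent random walks.

    Reproducing the emergent outcome requires reproducing the
    pseudorandom stream, hence the explicit seed and a private
    generator rather than the shared module-level one.
    """
    rng = random.Random(seed)
    positions = [0.0] * n_agents
    for _ in range(steps):
        positions = [p + rng.uniform(-1.0, 1.0) for p in positions]
    return positions

# The same seed reproduces the run exactly; a different seed does not.
```

Frameworks designed for IBMs, such as Swarm, make this discipline part of the toolkit rather than leaving it to each modeler.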


Patent Protection of Computer-Implemented Inventions Vis-à-Vis Open Source Software

    THE JOURNAL OF WORLD INTELLECTUAL PROPERTY, Issue 3 2006
    Asunción Esteve
    This article describes the different positions of the open source proponents versus the "traditional intellectual property approach" towards the so-called computer-implemented inventions. It analyses the position of both sides regarding the European Commission proposal on the patentability of computer-implemented inventions, and tries to clarify the confusion and misunderstanding that followed this proposal. The article first focuses on the reasons that could justify providing patent protection on computer-implemented inventions, in particular how the gradual growth of computer technology is having an increasing effect on the performance of inventions. Second, it examines the three main risks that the EU proposal was said, by some open source lobbies, to introduce: the overprotection of computer programs; the blocking effect in interoperability; and its negative impact on innovation and software development. The article evaluates these risks and provides reasons and arguments that show that they were overestimated. It also shows that no hard empirical data have been provided to support conclusions over the negative impact of computer-implemented inventions on innovation. Third, the article analyses if the legal provisions of the EU proposal on the patentability of computer-implemented inventions could have brought positive effects in terms of legal certainty. In this respect, the article considers that both the definition of computer-implemented invention and the criteria to evaluate the patentability of such inventions were a bit disappointing. Finally, the article considers any new legal initiative to endorse patent protection on computer-implemented inventions to be positive. [source]


Rotational order–disorder structure of fluorescent protein FP480

    ACTA CRYSTALLOGRAPHICA SECTION D, Issue 9 2009
    Sergei Pletnev
In the last decade, advances in instrumentation and software development have made crystallography a powerful tool in structural biology. Using this method, structural information can now be acquired from pathological crystals that would have been abandoned in earlier times. In this paper, the order–disorder (OD) structure of fluorescent protein FP480 is discussed. The structure is composed of tetramers with 222 symmetry incorporated into the lattice in two different ways, namely rotated 90° with respect to each other around the crystal c axis, with tetramer axes coincident with crystallographic twofold axes. The random distribution of alternatively oriented tetramers in the crystal creates a rotational OD structure with statistically averaged I422 symmetry, although the presence of very weak and diffuse additional reflections suggests that the randomness is only approximate. [source]


    RTS2: Lessons learned from a widely distributed telescope network

    ASTRONOMISCHE NACHRICHTEN, Issue 3 2008
    P. Kubánek
Abstract RTS2 (Remote Telescope System 2) is a highly modular open source telescope and observatory management software package. It evolved from RTS, which was developed in Python to control a telescope aimed at observing optical transients of γ-ray bursts. The development of a network system capable of operating robotic telescopes is both difficult and complicated. Along with continued software development one must be concerned with maintaining operations and obtaining results. This is a review of experiences gained building a network of robotic telescopes. It focuses on describing which issues are important during development of the robotic observatory software and requirements for future development of the RTS2 package. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


    Automated software development with XML and the Java* language

    BELL LABS TECHNICAL JOURNAL, Issue 2 2000
    Glenn R. Bruns
In software development with domain-specific languages (DSLs), one defines a requirements language for an application domain and then develops a compiler to generate an implementation from a requirements document. Because DSLs and DSL compilers are expensive to develop, DSLs are seen as cost effective only when many products of the same domain will be developed. In this paper, we show how the cost of DSL design and DSL compiler development can be reduced by defining DSLs as Extensible Markup Language (XML) dialects and by developing DSL compilers using commercial XML tools and the Java* language. This approach is illustrated through the Call View Data Language (CDL), a new DSL that generates provisioning support code and database table definitions for Lucent Technologies' 7R/E™ Network Feature Server. [source]
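The XML-dialect DSL approach can be sketched with standard tools. The element names and generated DDL below are invented for illustration and are not CDL's actual schema; the point is only that an off-the-shelf XML parser replaces the hand-written front end of a DSL compiler, leaving just the generation step to write.

```python
import xml.etree.ElementTree as ET

def generate_table_ddl(xml_text):
    """Compile a toy XML requirements dialect into SQL table definitions.

    The XML parser provides the DSL's lexer and parser for free; the
    'compiler' reduces to walking the element tree and emitting code.
    """
    root = ET.fromstring(xml_text)
    ddl = []
    for table in root.findall("table"):
        cols = ", ".join(
            f'{col.get("name")} {col.get("type")}'
            for col in table.findall("column")
        )
        ddl.append(f'CREATE TABLE {table.get("name")} ({cols});')
    return "\n".join(ddl)
```

A requirements document like `<schema><table name="subscriber">…</table></schema>` then compiles directly to DDL, with no grammar or parser generator needed.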


    Understanding and addressing the essential costs of evolving systems

    BELL LABS TECHNICAL JOURNAL, Issue 2 2000
    Joseph W. Davison
    A major attribute of telecommunications software systems is change. For evolving telecom systems, significant expertise is needed to effectively handle and capitalize on these changes. This paper discusses some of the key dimensions of change that occur during telecom systems software development, the areas of expertise that software developers apply in managing these changes, and some of the means by which high-performing project members have overcome the learning curves associated with these systems. We base our results on data gathered from several Bell Labs multiyear development projects and interviews with experienced staff. [source]


    The evolution and redefining of ,CAL': a reflection on the interplay of theory and practice

    JOURNAL OF COMPUTER ASSISTED LEARNING, Issue 1 2010
    R. Hartley
Abstract This article comments on how the core idea of the computer as an assistant to teaching and learning became reconfigured through changing technologies, pedagogies and educational cultures. Early influential researchers in computer assisted learning (CAL) made strong but differing links to theories and representations of learning, showing a relevance to pedagogy through innovative projects. Amid controversy, the educational potential of CAL became recognized, and hardware and software developments stimulated the involvement of teachers in shaping applications and practices within contexts that favoured a constructivist student focus. Further advances in technology gave students greater autonomy in the style and management of learning, and enabled CAL to be redefined as a participative and collaborative enterprise. Institutions responded through supports and structures in ways that suited their wider educational policies. Technological developments (and controversies) continue to extend and reshape the applications of CAL, and this reflection points to the significance of the interplay between theory and practice in this evolving and redefining process. [source]