Servers

Selected Abstracts


    Tunable scheduling in a GridRPC framework

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 9 2008
    A. Amar
    Abstract Among existing grid middleware approaches, one simple, powerful, and flexible approach consists of using servers available in different administrative domains through the classic client–server or remote procedure call paradigm. Network Enabled Servers (NES) implement this model, also called GridRPC. Clients submit computation requests to a scheduler, whose goal is to find a server available on the grid using some performance metric. The aim of this paper is to give an overview of an NES middleware developed in the GRAAL team called the Distributed Interactive Engineering Toolbox (DIET) and to describe recent developments around plug-in schedulers, workflow management, and tools. DIET is a hierarchical set of components used for the development of applications based on computational servers on the grid. Copyright © 2007 John Wiley & Sons, Ltd. [source]
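
    The GridRPC pattern above is easy to picture in code: clients hand work to a scheduler, which picks a server by some performance metric. Below is a minimal, hypothetical sketch of that flow (none of these names are DIET's actual API, and the completion-time metric is an assumption):

```python
# Toy sketch of the GridRPC/NES pattern: a scheduler picks the server that
# minimizes an estimated completion time. All names are illustrative.
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    speed: float              # relative compute speed
    queued_work: float = 0.0  # outstanding work units

    def eta(self, work: float) -> float:
        # estimated completion time for `work` units on this server
        return (self.queued_work + work) / self.speed

def pick_server(servers, work):
    # plug-in scheduler: here the metric is minimal estimated completion time
    best = min(servers, key=lambda s: s.eta(work))
    best.queued_work += work
    return best

servers = [Server("A", speed=2.0), Server("B", speed=1.0)]
for job in [4.0, 1.0, 3.0]:
    print(job, "->", pick_server(servers, job).name)
```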


    Social Optimal Location of Facilities with Fixed Servers, Stochastic Demand, and Congestion

    PRODUCTION AND OPERATIONS MANAGEMENT, Issue 6 2009
    Ignacio Castillo
    We consider two capacity choice scenarios for the optimal location of facilities with fixed servers, stochastic demand, and congestion. Motivating applications include virtual call centers, consisting of geographically dispersed centers, walk-in health clinics, motor vehicle inspection stations, automobile emissions testing stations, and internal service systems. The choice of locations for such facilities influences both the travel cost and waiting times of users. In contrast to most previous research, we explicitly embed both customer travel/connection and delay costs in the objective function and solve the location–allocation problem and choose facility capacities simultaneously. The choice of capacity for a facility that is viewed as a queueing system with Poisson arrivals and exponential service times could mean choosing a service rate for the servers (Scenario 1) or choosing the number of servers (Scenario 2). We express the optimal service rate in closed form in Scenario 1 and the (asymptotically) optimal number of servers in closed form in Scenario 2. This allows us to eliminate both the number of servers and the service rates from the optimization problems, leading to tractable mixed-integer nonlinear programs. Our computational results show that both problems can be solved efficiently using a Lagrangian relaxation optimization procedure. [source]
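
    The closed-form flavor of Scenario 1 can be illustrated with the textbook single-facility analogue: for an M/M/1 queue with a linear capacity cost and a linear waiting cost, the cost-minimizing service rate has a square-root form. This is a stand-in under assumed costs c_s and c_w, not the paper's multi-facility formula:

```python
# For an M/M/1 facility with capacity cost c_s per unit of service rate and
# waiting cost c_w per customer per unit time, the total cost rate
#   f(mu) = c_s*mu + c_w*lam/(mu - lam)
# is minimized at mu* = lam + sqrt(c_w*lam/c_s) (classic square-root rule).
import math

def optimal_service_rate(lam: float, c_s: float, c_w: float) -> float:
    return lam + math.sqrt(c_w * lam / c_s)

lam, c_s, c_w = 10.0, 1.0, 4.0
mu = optimal_service_rate(lam, c_s, c_w)
print(f"mu* = {mu:.3f}, mean number in system = {lam / (mu - lam):.3f}")
```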


    Computer-based management environment for an assembly language programming laboratory

    COMPUTER APPLICATIONS IN ENGINEERING EDUCATION, Issue 1 2007
    Santiago Rodríguez
    Abstract This article describes the environment used in the Computer Architecture Department of the Technical University of Madrid (UPM) for managing small laboratory work projects and a specific application for an Assembly Language Programming Laboratory. The approach is based on a chain of tools that a small team of teachers can use to efficiently manage a course with a large number of students (400 per year). Students use this tool chain to complete their assignments using an MC88110 CPU simulator also developed by the Department. Students use a Delivery Agent tool to send files containing their implementations. These files are stored in one of the Department servers. Every student laboratory assignment is tested by an Automatic Project Evaluator that executes a set of previously designed and configured tests. These tools are used by teachers to manage mass courses, thereby avoiding restrictions on students working on the same assignment. This procedure may encourage students to copy others' laboratory work and we have therefore developed a complementary tool to help teachers find "replicated" laboratory assignment implementations. This tool is a plagiarism detection assistant that completes the tool-chain functionality. Jointly, these tools have demonstrated over the last decade that important benefits can be gained from the exploitation of a global laboratory work management system. Some of the benefits may be transferable to an area of growing importance that we have not directly explored, i.e. distance learning environments for technical subjects. © 2007 Wiley Periodicals, Inc. Comput Appl Eng Educ 15: 41–54, 2007; Published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/cae.20094 [source]
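
    A toy version of the plagiarism-detection assistant mentioned above might score pairs of submissions by token-set overlap. Real detectors normalize labels, registers and structure; this sketch (all names and the sample inputs hypothetical) does not:

```python
# Flag pairs of assembly submissions whose token sets are suspiciously similar,
# using Jaccard similarity on the comment-stripped token sets.
import itertools, re

def tokens(src: str) -> set[str]:
    # strip comments (';' to end of line), lowercase, split into identifiers
    code = re.sub(r";.*", "", src)
    return set(re.findall(r"[A-Za-z0-9_.]+", code.lower()))

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

submissions = {
    "s1": "loop: add r1, r2, r3 ; sum",
    "s2": "loop: add r1, r2, r3",
    "s3": "sub r7, r7, r7",
}
pairs = itertools.combinations(((n, tokens(s)) for n, s in submissions.items()), 2)
for (n1, t1), (n2, t2) in pairs:
    print(n1, n2, f"{jaccard(t1, t2):.2f}")   # s1/s2 score 1.00 -> flag for review
```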


    DRIVE: Dispatching Requests Indirectly through Virtual Environment

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 4 2010
    Hyung Won Choi
    Abstract Dispatching a large number of dynamically changing requests directly to a small number of servers exposes the disparity between the requests and the machines. In this paper, we present a novel approach that dispatches requests to servers through virtual machines, called Dispatching Requests Indirectly through Virtual Environment (DRIVE). Client requests are first dispatched to virtual machines that are subsequently dispatched to actual physical machines. This buffering of requests helps to reduce the complexity involved in dispatching a large number of requests to a small number of machines. To demonstrate the effectiveness of the DRIVE framework, we set up an experimental environment consisting of a PC cluster and four benchmark suites. With the experimental results, we demonstrate that the use of virtual machines indeed abstracts away the client requests and hence helps to improve the overall performance of a dynamically changing computing environment. Copyright © 2009 John Wiley & Sons, Ltd. [source]
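
    The two-level indirection at the heart of DRIVE can be pictured as two small assignment maps, requests to VMs and VMs to hosts. The round-robin choice below is purely illustrative; the paper's dispatch policies are more sophisticated:

```python
# Sketch of two-level dispatch: many requests bind to a few VMs (level 1),
# and the VMs are placed on physical hosts (level 2), decoupling the two
# assignment problems. Names and the round-robin rule are hypothetical.
from collections import defaultdict

def dispatch(requests, vms, hosts):
    vm_of = {r: vms[i % len(vms)] for i, r in enumerate(requests)}     # level 1
    host_of = {vm: hosts[i % len(hosts)] for i, vm in enumerate(vms)}  # level 2
    placement = defaultdict(list)
    for r in requests:
        placement[host_of[vm_of[r]]].append(r)
    return dict(placement)

print(dispatch([f"req{i}" for i in range(10)],
               ["vm0", "vm1", "vm2"], ["hostA", "hostB"]))
```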


    Towards an autonomic approach for edge computing

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 14 2007
    Mikael Desertot
    Abstract Nowadays, one of the biggest challenges for companies is to cope with the high cost of their information technologies infrastructure. Edge computing is a new computing paradigm designed to allocate on-demand computing and storage resources. Those resources are Web cache servers scattered over the ISP backbones. We argue that this paradigm could be applied for on-demand full application hosting, helping to reduce costs. In this paper, we present a J2EE (Java Enterprise Edition) dynamic server able to deploy/host J2EE applications on demand and its autonomic manager. For this, we reengineer and experiment with JOnAS, an open-source J2EE static server. Two management policies of the autonomic manager were stressed by a simulation of a worldwide ISP network. Copyright © 2006 John Wiley & Sons, Ltd. [source]


    GridBLAST: a Globus-based high-throughput implementation of BLAST in a Grid computing framework

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 13 2005
    Arun Krishnan
    Abstract Improvements in the performance of processors and networks have made it feasible to treat collections of workstations, servers, clusters and supercomputers as integrated computing resources or Grids. However, the very heterogeneity that is the strength of computational and data Grids can also make application development for such an environment extremely difficult. Application development in a Grid computing environment faces significant challenges in the form of problem granularity, latency and bandwidth issues as well as job scheduling. Currently existing Grid technologies limit the development of Grid applications to certain classes, namely, embarrassingly parallel, hierarchical parallelism, work flow and database applications. Of all these classes, embarrassingly parallel applications are the easiest to develop in a Grid computing framework. The work presented here deals with creating a Grid-enabled, high-throughput, standalone version of a bioinformatics application, BLAST, using Globus as the Grid middleware. BLAST is a sequence alignment and search technique that is embarrassingly parallel in nature and thus amenable to adaptation to a Grid environment. A detailed methodology for creating the Grid-enabled application is presented, which can be used as a template for the development of similar applications. The application has been tested on a 'mini-Grid' testbed and the results presented here show that for large problem sizes, a distributed, Grid-enabled version can help in significantly reducing execution times. Copyright © 2005 John Wiley & Sons, Ltd. [source]
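
    The embarrassingly parallel pattern GridBLAST exploits amounts to: split the query set, run one alignment job per chunk, merge the hits. A minimal local sketch of that shape (the run_blast stub is hypothetical; the real system stages chunks to Grid nodes via Globus rather than local processes):

```python
# Split-queries / map / merge skeleton for an embarrassingly parallel search.
from concurrent.futures import ProcessPoolExecutor

def chunks(seqs, n):
    k = max(1, len(seqs) // n)
    return [seqs[i:i + k] for i in range(0, len(seqs), k)]

def run_blast(chunk):
    # placeholder: a real setup would stage the chunk to a Grid node and
    # invoke BLAST there via the middleware, returning the hit list
    return [f"hit({s})" for s in chunk]

if __name__ == "__main__":
    queries = [f"seq{i}" for i in range(8)]
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = [h for part in pool.map(run_blast, chunks(queries, 4)) for h in part]
    print(results)
```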


    An approach for quality of service adaptation in service-oriented Grids

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 5 2004
    Rashid Al-Ali
    Abstract Some applications utilizing Grid computing infrastructure require the simultaneous allocation of resources, such as compute servers, networks, memory, disk storage and other specialized resources. Collaborative working and visualization is one example of such applications. In this context, quality of service (QoS) is related to Grid services, and not just to the network connecting these services. With the emerging interest in service-oriented Grids, resources may be advertised and traded as services based on a service level agreement (SLA). Such an SLA must include both general and technical specifications, including pricing policy and properties of the resources required to execute the service, to ensure QoS requirements are satisfied. An approach for QoS adaptation is presented to enable the dynamic adjustment of application behavior based on changes in the pre-defined SLA. The approach is particularly useful if workload or network traffic changes in unpredictable ways during an active session. Copyright © 2004 John Wiley & Sons, Ltd. [source]


    Experimental analysis of the impact of peer-to-peer applications on traffic in commercial IP networks

    EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 6 2004
    Nadia Ben Azzouna
    To evaluate the impact of peer-to-peer (P2P) applications on traffic in wide area networks, we analyze measurements from a high speed IP backbone link carrying TCP traffic towards several ADSL areas. The first observations are that the prevalent part of traffic is due to P2P applications (almost 80% of total traffic) and that the usage of the network becomes symmetric in the sense that customers are not only clients but also servers. This latter point is evidenced by the significant proportion of long flows mainly composed of ACK segments. When analyzing the bit rate created by long flows, it turns out that the TCP connections due to P2P applications have a rather small bit rate and that there is no evidence for long range dependence. These facts are intimately related to the way P2P protocols are running. We separately analyze signaling traffic and data traffic. It turns out that by adopting a suitable level of aggregation, global traffic can be described by means of usual tele-traffic models based on M/G/∞ queues with Weibullian service times. Copyright © 2004 AEI [source]
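
    The aggregate model the authors arrive at, an M/G/∞ queue with Weibull service times, is easy to check by simulation: in steady state the number of flows in progress is Poisson with mean λE[S], regardless of the service-time shape. A small sketch with made-up parameters:

```python
# Simulate an M/G/infinity queue (Poisson arrivals, Weibull holding times,
# infinitely many servers) and compare the occupancy at a late time point
# with the theoretical Poisson mean lambda * E[S].
import random, math

lam, shape, scale, horizon = 5.0, 0.6, 1.0, 20000.0
mean_service = scale * math.gamma(1 + 1 / shape)   # E[S] for Weibull(scale, shape)

random.seed(1)
t, flows = 0.0, []
while t < horizon:
    t += random.expovariate(lam)                   # Poisson arrival process
    flows.append((t, t + random.weibullvariate(scale, shape)))

probe = 0.9 * horizon                              # a time point past the transient
occupancy = sum(1 for a, d in flows if a <= probe < d)
print(f"flows in progress at t={probe:.0f}: {occupancy}")
print(f"Poisson mean lambda*E[S]: {lam * mean_service:.2f}")
```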


    Novel immunodeficiency data servers

    IMMUNOLOGICAL REVIEWS, Issue 1 2000
    Jouni Väliaho
    First page of article [source]


    Class-based weighted fair queueing: validation and comparison by trace-driven simulation

    INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 10 2005
    Rachid El Abdouni Khayari
    Abstract World-Wide Web servers as well as proxy servers rely for their scheduling on services provided by the underlying operating system. In practice, this means that some form of first-come-first-served (FCFS) scheduling is utilized. Although FCFS is a reasonable scheduling strategy for job sequences that do not show much variance, for the world-wide web it has been shown that the requested-object sizes do exhibit heavy tails. Under these circumstances, job scheduling on the basis of shortest-job-first (SJF) or shortest remaining processing time (SRPT) has been shown to minimize the total average waiting time. However, these methods have the disadvantage of potential job starvation. In order to avoid the problems of both FCFS and SJF we present in this paper a new scheduling approach called class-based interleaving weighted fair queueing (CI-WFQ). This scheduling approach exploits the specific characteristics of the job stream being served, that is, the distribution of the sizes of the objects being requested, to set its parameters such that good mean response times are obtained and starvation does not occur. In that sense, the new scheduling strategy can be made adaptive to the characteristics of the job stream being served. In this paper we compare the new scheduling approach (using trace-driven simulations) to FCFS, SJF and the recently introduced α-scheduling, and show that CI-WFQ combines very good performance (as far as mean and variance of response time and blocking probability are concerned) with a scheduling complexity almost as low as for FCFS (and hence, lower than for SJF and α-scheduling). The use of trace-driven simulation is essential, since the special properties of the arrival process make analytical solutions very difficult to achieve. Copyright © 2005 John Wiley & Sons, Ltd. [source]
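
    The class-based interleaving idea can be illustrated with a toy scheduler that bins jobs by size class and serves the classes in weighted round-robin, so small jobs are favored on average but large ones cannot starve. This shows the principle only; the class boundaries and weights are assumptions, not the paper's exact CI-WFQ:

```python
# Bin jobs into size classes and interleave service across classes with
# per-class weights, avoiding both FCFS's insensitivity and SJF's starvation.
from collections import deque

def classify(size):                        # size classes: 0 small .. 2 large
    return 0 if size < 10 else (1 if size < 100 else 2)

def ci_wfq(jobs, weights=(4, 2, 1)):
    queues = [deque(), deque(), deque()]
    for job in jobs:
        queues[classify(job)].append(job)
    order = []
    while any(queues):
        for cls, w in enumerate(weights):  # serve up to w jobs per class per round
            for _ in range(w):
                if queues[cls]:
                    order.append(queues[cls].popleft())
    return order

print(ci_wfq([500, 3, 40, 7, 120, 55, 1]))   # small jobs first, big ones interleaved
```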


    Design and implementation of Anycast communication model in IPv6

    INTERNATIONAL JOURNAL OF NETWORK MANAGEMENT, Issue 3 2009
    Xiaonan Wang
    The existing designs for providing Anycast services either confine each Anycast group to a preconfigured topological region or distribute the members of Anycast groups over global regions. The former creates an Anycast scalability problem and the latter causes routing tables to grow in proportion to the number of global Anycast groups in the entire Internet. Both designs therefore restrict and hinder the application and development of Anycast services. A new kind of Anycast communication model is proposed in this paper which solves some of these existing problems, such as scalability and communication errors between clients and servers. The Anycast communication model is analyzed and discussed in depth, and the experimental data for this model demonstrate its feasibility and validity. [source]


    Automated application component placement in data centers using mathematical programming

    INTERNATIONAL JOURNAL OF NETWORK MANAGEMENT, Issue 6 2008
    Xiaoyun Zhu
    In this article we address the application component placement (ACP) problem for a data center. The problem is defined as follows: for a given topology of a network consisting of switches, servers and storage devices with varying capabilities, and for a given specification of a component-based distributed application, decide which physical server should be assigned to each application component, such that the application's processing, communication and storage requirements are satisfied without creating bottlenecks in the infrastructure, and that scarce resources are used most efficiently. We explain how the ACP problem differs from traditional task assignment in distributed systems, or existing grid scheduling problems. We describe our approach of formalizing this problem using a mathematical optimization framework and further formulating it as a mixed integer program (MIP). We then present our ACP solver using GAMS and CPLEX to automate the decision-making process. The solver was numerically tested on a number of examples, ranging from a 125-server real data center to a set of hypothetical data centers with increasing size. In all cases the ACP solver found an optimal solution within a reasonably short time. In a numerical simulation comparing our solver to a random selection algorithm, our solver resulted in much more efficient use of scarce network resources and allowed more applications to be placed in the same infrastructure. Copyright © 2008 John Wiley & Sons, Ltd. [source]
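
    A heavily down-scaled sketch of such a placement MIP, using PuLP rather than GAMS/CPLEX, with made-up demands, capacities and costs (and only a CPU dimension; the paper's model also covers communication and storage):

```python
# Tiny application-component-placement MIP: binary x[c][s] = 1 iff component c
# is placed on server s; each component placed exactly once; capacities hold.
import pulp

components = {"web": 2, "app": 3, "db": 4}   # CPU demand per component (made up)
servers = {"s1": 5, "s2": 6}                 # CPU capacity per server (made up)
cost = {(c, s): 1 for c in components for s in servers}

prob = pulp.LpProblem("acp", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (components, servers), cat="Binary")
prob += pulp.lpSum(cost[c, s] * x[c][s] for c in components for s in servers)
for c in components:                          # each component placed exactly once
    prob += pulp.lpSum(x[c][s] for s in servers) == 1
for s, cap in servers.items():                # server capacity constraint
    prob += pulp.lpSum(components[c] * x[c][s] for c in components) <= cap
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({c: next(s for s in servers if x[c][s].value() > 0.5) for c in components})
```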


    A scheme for solving Anycast scalability in IPv6

    INTERNATIONAL JOURNAL OF NETWORK MANAGEMENT, Issue 4 2008
    Wang Xiaonan
    The existing designs for providing Anycast services either confine Anycast groups to a preconfigured topological region or distribute Anycast groups globally across the whole Internet. The latter causes routing tables to grow in proportion to the number of global Anycast groups in the entire Internet, and both designs restrict and hinder the application and development of Anycast services. A new kind of Anycast communication scheme is proposed in this paper. This scheme adopts a novel Anycast address structure that supports dynamic Anycast groups, allowing Anycast members to leave and join freely without geographical restriction, and it effectively curbs the explosive growth of the Anycast routing table. In addition, this scheme can evenly disperse Anycast request messages from clients across the Anycast servers of one Anycast group, thus achieving load balancing. This paper analyzes the communication scheme in depth and discusses its feasibility and validity. The experimental data from an IPv6 simulation demonstrate that the TRT (Total Response Time) of an Anycast service (e.g., file downloading) acquired through this communication scheme is 15% shorter than that through the existing Anycast communication scheme. Copyright © 2007 John Wiley & Sons, Ltd. [source]


    Prioritized e-mail servicing to reduce non-spam delay and loss: A performance analysis

    INTERNATIONAL JOURNAL OF NETWORK MANAGEMENT, Issue 4 2008
    Muhammad N. Marsono
    This paper proposes prioritized e-mail servicing on e-mail servers to reduce the delay and loss of non-spam e-mails due to queuing. Using a prioritized two-queue scheme, non-spam e-mails are queued in a fast queue and given higher service priority than spam e-mails, which are queued in a slow queue. Four prioritized e-mail service strategies for the two-queue scheme are proposed and analyzed. We modeled these four strategies using discrete-time Markov chain analysis under different e-mail traffic loads and service capacities. Non-spam e-mails can be delivered within a small delay, even under heavy e-mail loadings and a high spam-to-non-spam a priori ratio. Results from our analysis of the two-queue scheme show that it gives non-spam delay and loss probability two orders of magnitude smaller than the typical single-queue approach during heavy spam traffic. Moreover, prioritized e-mail servicing protects e-mail servers from spam attacks. Copyright © 2007 John Wiley & Sons, Ltd. [source]
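
    The strict-priority variant of the two-queue scheme is simple to state in code; the paper analyzes four service strategies, of which this sketch shows one, and the spam classifier here is a stub:

```python
# Two-queue prioritized servicing: non-spam waits in a fast queue that is
# always drained before the slow spam queue is touched.
from collections import deque

fast, slow = deque(), deque()

def enqueue(msg, is_spam):
    (slow if is_spam else fast).append(msg)

def serve_one():
    # strict priority: spam is served only when no non-spam is waiting
    if fast:
        return fast.popleft()
    if slow:
        return slow.popleft()
    return None

for m, s in [("m1", False), ("m2", True), ("m3", False)]:
    enqueue(m, s)
print([serve_one() for _ in range(4)])   # m1, m3, m2, None
```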


    Resource allocation in the new fixed and mobile Internet generation

    INTERNATIONAL JOURNAL OF NETWORK MANAGEMENT, Issue 3 2003
    Guy Pujolle
    In this paper we study the scalability issue in the design of a centralized policy server controlling resources in the future generation of IP-based telecom networks. The policy servers are in charge of controlling and managing QoS, security and mobility in a centralized way in future IP-based telecom networks. Our study demonstrates that the policy servers can be designed in such a manner that they scale with increases in network capacity. Copyright © 2003 John Wiley & Sons, Ltd. [source]


    Networking lessons in delivering 'Software as a Service', Part II

    INTERNATIONAL JOURNAL OF NETWORK MANAGEMENT, Issue 6 2002
    David Greschler
    In Part I of this paper, we described the origins and evolution of Software as a Service (SaaS) and its value proposition to Corporate IT, Service Providers, Independent Software Vendors and End Users. SaaS is a model in which software applications are deployed, managed, updated and supported on demand, like a utility, and are served to users centrally using servers that are internal or external to the enterprise. Applications are no longer installed locally on a user's desktop PC; instead, upgrades, licensing and version control, metering, support and provisioning are all managed at the server level. In Part II we examine the lessons learned in researching, building and running an SaaS service. Copyright © 2002 John Wiley & Sons, Ltd. [source]


    An adaptive load balancing scheme for web servers

    INTERNATIONAL JOURNAL OF NETWORK MANAGEMENT, Issue 1 2002
    Dr. James Aweya
    This paper describes an overload control scheme for web servers which integrates admission control and load balancing. The admission control mechanism adaptively determines the client request acceptance rate to meet the web servers' performance requirements while the load balancing or client request distribution mechanism determines the fraction of requests to be assigned to each web server. The scheme requires no prior knowledge of the relative speeds of the web servers, nor the work required to process each incoming request. Copyright © 2002 John Wiley & Sons, Ltd. [source]
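
    A sketch of how the two mechanisms might cooperate, with an assumed multiplicative accept-rate update and inverse-latency server weights; the paper's actual control laws differ, so treat this purely as an illustration of the feedback structure:

```python
# Admission control throttles the accept rate when observed latency exceeds a
# target; load balancing shifts per-server request fractions toward servers
# that currently respond faster. All constants are illustrative.
def update(accept_rate, latencies, target=0.2, step=0.1):
    avg = sum(latencies.values()) / len(latencies)
    # admission control: accept less when the farm is slower than the target
    factor = (1 - step) if avg > target else (1 + step)
    accept_rate = min(1.0, max(0.1, accept_rate * factor))
    # request distribution: weight each server by inverse observed latency
    inv = {s: 1.0 / l for s, l in latencies.items()}
    total = sum(inv.values())
    weights = {s: v / total for s, v in inv.items()}
    return accept_rate, weights

rate, w = update(1.0, {"w1": 0.4, "w2": 0.1})
print(rate, w)   # rate drops to 0.9; w2 gets 80% of the traffic
```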


    Emergency service systems: The use of the hypercube queueing model in the solution of probabilistic location problems

    INTERNATIONAL TRANSACTIONS IN OPERATIONAL RESEARCH, Issue 5 2008
    Roberto D. Galvão
    Abstract Probabilistic location problems are surveyed from the perspective of their use in the design of emergency service systems, with special emphasis on emergency medical systems. Pioneering probabilistic models were defined in the 1980s, as a natural extension of deterministic covering models (first generation models) and backup models (second generation). These probabilistic models, however, adopted simplifying assumptions that in many cases do not correspond to real-world situations, where servers usually cooperate and have specific individual workloads. The idea of embedding the hypercube queueing model into these formulations is thus to make them more adherent to the real world. The hypercube model and its extensions are initially presented in some detail, followed by a brief review of exact and approximate methods for its solution. Probabilistic models for the design of emergency service systems are then reviewed. The pioneering models of Daskin and of ReVelle and Hogan are extended by embedding the hypercube model into them. Solution methods for these models are surveyed next, with comments on specialized models for the design of emergency medical systems for urban areas and highways. [source]
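
    The hypercube model itself is compact enough to write down for a tiny system: states are binary busy/idle vectors, an arriving call goes to the first idle server in a fixed preference order (or is lost if all are busy), and the stationary distribution follows from the balance equations. A 3-server sketch with assumed rates:

```python
# Larson-style hypercube queueing model, zero-line-capacity variant:
# state = set of busy servers, fixed-preference dispatch, loss if all busy.
import itertools
import numpy as np

N, lam, mu = 3, 2.0, 1.0
pref = [0, 1, 2]                                  # dispatch preference order
states = [frozenset(b) for r in range(N + 1)
          for b in itertools.combinations(range(N), r)]
idx = {s: i for i, s in enumerate(states)}

Q = np.zeros((len(states), len(states)))
for s in states:
    if len(s) < N:                                # arrival -> first idle preferred server
        target = s | {next(i for i in pref if i not in s)}
        Q[idx[s], idx[target]] += lam
    for i in s:                                   # service completion at server i
        Q[idx[s], idx[s - {i}]] += mu
np.fill_diagonal(Q, -Q.sum(axis=1))

A = np.vstack([Q.T, np.ones(len(states))])        # solve pi Q = 0, sum(pi) = 1
pi = np.linalg.lstsq(A, np.r_[np.zeros(len(states)), 1.0], rcond=None)[0]
print("workload of server 0:",
      round(sum(pi[idx[s]] for s in states if 0 in s), 4))
print("loss probability:", round(pi[idx[frozenset(range(N))]], 4))
```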


    On Estimation in M/G/c/c Queues

    INTERNATIONAL TRANSACTIONS IN OPERATIONAL RESEARCH, Issue 6 2001
    Mei Ling Huang
    We derive the minimum variance unbiased estimator (MVUE) and the maximum likelihood estimator (MLE) of the stationary probability function (pf) of the number of customers in a collection of independent M/G/c/c subsystems. It is assumed that the offered load and number of servers in each subsystem are unknown. We assume that observations of the total number of customers in the system are utilized because it may be impractical or impossible to observe individual server occupancies. Both estimators depend on the R distribution (the distribution of the sum of independent right truncated Poisson random variables) and R numbers. [source]
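
    The stationary pf these estimators target is the Erlang loss distribution: for M/G/c/c it depends on the service-time law only through the offered load a = λE[S] (the insensitivity property). A short sketch:

```python
# Stationary distribution of the number of busy servers in an M/G/c/c
# (Erlang loss) system: P(n) proportional to a^n / n!, truncated at c.
import math

def mgcc_pf(a: float, c: int) -> list[float]:
    weights = [a ** n / math.factorial(n) for n in range(c + 1)]
    z = sum(weights)
    return [w / z for w in weights]

pf = mgcc_pf(a=3.0, c=5)
print([round(p, 4) for p in pf])
print("blocking probability (Erlang B):", round(pf[-1], 4))
```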


    Sexual reproduction of scleractinian corals in public aquariums: current status and future perspectives

    INTERNATIONAL ZOO YEARBOOK, Issue 1 2007
    D. PETERSEN
    A multiple-choice questionnaire was distributed, mainly via the list servers of the EUAC (European Union of Aquarium Curators) Coral ASP (Animal Sustainability Program) and AquaticInfo, to evaluate the potential of today's aquariums for the captive breeding of scleractinian corals. Sixteen of the 24 species recorded in total (nine families), including the temperate coral Astroides calycularis, showed reproductive behaviour that could establish an F1 generation. Broadcast spawners (13 species) reproduced mainly in open systems under natural light conditions (in all cases natural moonlight exposure), whereas brooders (11 species) showed less sensitivity towards certain environmental factors known to trigger reproduction in field populations (here moonlight and temperature fluctuations). Except for a few recruits of Galaxea fascicularis and Echinopora lamellosa maintained in a 750 000 litre system, recruits of broadcast spawners could be obtained exclusively by manipulating fertilization and settlement. Brooding corals generally established fewer than 100 recruits if settlement was not enhanced experimentally. When reproduction was manipulated, reproductive success rose, in most cases to above 100 recruits. We assume that more species, especially brooders, might reproduce in public aquariums without being noticed by the staff, owing to the lack of recruitment and of experimental design (larval collection). This study illustrates the great potential for public aquariums to reproduce corals sexually. However, more investigation is necessary to optimize reproductive success and possibly to broaden the spectrum of species reproduced in public aquariums. [source]


    TOP: a new method for protein structure comparisons and similarity searches

    JOURNAL OF APPLIED CRYSTALLOGRAPHY, Issue 1 2000
    Guoguang Lu
    In order to facilitate the three-dimensional structure comparison of proteins, software for making comparisons and searching for similarities to protein structures in databases has been developed. The program identifies the residues that share similar positions of both main-chain and side-chain atoms between two proteins. The unique functions of the software also include database processing via Internet- and Web-based servers for different types of users. The developed method and its friendly user interface cope with many of the problems that frequently occur in protein structure comparisons, such as detecting structurally equivalent residues, misalignment caused by coincident matches of Cα atoms, circular sequence permutations, tedious repetition of access, maintenance of the most recent database, and inconvenience of user interface. The program is also designed to cooperate with other tools in structural bioinformatics, such as the 3DB Browser software [Prilusky (1998). Protein Data Bank Q. Newslett. 84, 3–4] and the SCOP database [Murzin, Brenner, Hubbard & Chothia (1995). J. Mol. Biol. 247, 536–540], for convenient molecular modelling and protein structure analysis. A similarity ranking score of 'structure diversity' is proposed in order to estimate the evolutionary distance between proteins based on the comparisons of their three-dimensional structures. The function of the program has been utilized as a part of an automated program for multiple protein structure alignment. In this paper, the algorithm of the program and results of systematic tests are presented and discussed. [source]
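
    The core geometric step in such comparisons, optimal rigid-body superposition of matched atoms, is the classic Kabsch algorithm; below is a numpy sketch with made-up coordinates (TOP's actual residue matching and scoring are more involved):

```python
# Kabsch superposition via SVD: find the rotation minimizing the RMSD between
# two matched 3D coordinate sets, then report that RMSD.
import numpy as np

def kabsch_rmsd(P: np.ndarray, Q: np.ndarray) -> float:
    P = P - P.mean(axis=0)                 # center both coordinate sets
    Q = Q - Q.mean(axis=0)
    V, S, Wt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(V @ Wt))     # guard against improper rotation
    R = V @ np.diag([1.0, 1.0, d]) @ Wt    # optimal rotation
    diff = P @ R - Q
    return float(np.sqrt((diff ** 2).sum() / len(P)))

P = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
print(kabsch_rmsd(P, P @ Rz.T))            # ~0: a pure rotation is recovered
```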


    Effect on Restaurant Tipping of Presenting Customers With an Interesting Task and of Reciprocity

    JOURNAL OF APPLIED SOCIAL PSYCHOLOGY, Issue 7 2001
    Bruce Rind
    Research has shown that servers can increase their tip percentages by positively influencing customers' mood and using the compliance technique of reciprocity. These factors were examined in the current study. An experiment was conducted in which a female server either did or did not present customers with a novel, interesting task that has been shown in previous research to stimulate interest and enhance mood. Additionally, sometimes she allowed customers to keep the task, in an attempt to elicit reciprocity. It was predicted that both of these manipulations would increase tip percentages. Presenting customers with the interesting task did increase tips, from about 18.5% to 22%, although the reciprocity manipulation had no effect. [source]


    Development and evolution of a heterogeneous continuous media server: a case study

    JOURNAL OF SOFTWARE MAINTENANCE AND EVOLUTION: RESEARCH AND PRACTICE, Issue 2 2005
    Dwight J. Makaroff
    Abstract Media server software is significantly complicated to develop and maintain, due to the nature of the many interface aspects which must be considered. This paper provides a case study of the design, implementation, and evolution of a continuous media file server. We place emphasis on the evolution of the software and our approach to maintainability. The user interface is a major consideration, even though the server software would appear isolated from that factor. Since continuous media servers must send the raw data to a client application over a network, the protocol considerations, hardware interface, and data storage/retrieval methods are of paramount importance. In addition, the application programmer's interface to the server facilities has an impact on both the internal design and the performance of such a server. We discuss our experiences and insight into the development of such software products within a small research-based university environment. We experienced two main types of evolutionary change: requirements changes from the limited user community and performance enhancements/corrections. While the former were anticipated via a generic interface and modular design structure, the latter were surprising and substantially more difficult to solve. Copyright © 2005 John Wiley & Sons, Ltd. [source]


    Graph-based tools for re-engineering

    JOURNAL OF SOFTWARE MAINTENANCE AND EVOLUTION: RESEARCH AND PRACTICE, Issue 4 2002
    Katja Cremer
    Abstract Maintenance of legacy systems is a challenging task. Often, only the source code is still available, while design or requirements documents have been lost or have not been kept up-to-date with the actual implementation. In particular, this applies to many business applications which are run on a mainframe computer and are written in COBOL. Many companies are confronted with the difficult task of migrating these systems to a client/server architecture with clients running on PCs and servers running on the mainframe. REforDI (REengineering for DIstribution) is a graph-based environment supporting this task. REforDI provides integrated code analysis, re-design, and code transformation for COBOL applications. To prepare the application for distribution, REforDI assists in the transition to an object-based architecture, according to which the source code is subsequently transformed into Object COBOL. Internally, REforDI makes heavy use of generators to reduce the implementation effort and thus to enhance adaptability. In particular, graph-based tools for re-engineering are generated from a formal specification which is based on programmed graph transformations. Copyright © 2002 John Wiley & Sons, Ltd. [source]


    Usage impact factor: The effects of sample characteristics on usage-based impact metrics

    JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 1 2008
    Johan Bollen
    There exist ample demonstrations that indicators of scholarly impact analogous to the citation-based ISI Impact Factor can be derived from usage data; however, so far, usage can practically be recorded only at the level of distinct information services. This leads to community-specific assessments of scholarly impact that are difficult to generalize to the global scholarly community. In contrast, the ISI Impact Factor is based on citation data and thereby represents the global community of scholarly authors. The objective of this study is to examine the effects of community characteristics on assessments of scholarly impact from usage. We define a journal Usage Impact Factor that mimics the definition of the Thomson Scientific ISI Impact Factor. Usage Impact Factor rankings are calculated on the basis of a large-scale usage dataset recorded by the linking servers of the California State University system from 2003 to 2005. The resulting journal rankings are then compared to the Thomson Scientific ISI Impact Factor that is used as a reference indicator of general impact. Our results indicate that the particular scientific and demographic characteristics of a discipline have a strong effect on resulting usage-based assessments of scholarly impact. In particular, we observed that as the number of graduate students and faculty increases in a particular discipline, Usage Impact Factor rankings will converge more strongly with the ISI Impact Factor. [source]
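
    The Usage Impact Factor definition mimics the 2-year ISI Impact Factor with downloads in place of citations: uses in year y of items published in years y-1 and y-2, divided by the number of items published in those two years. A sketch with hypothetical record layouts:

```python
# Compute a 2-year usage impact factor from per-event usage records and
# per-journal publication counts. Field names are made up for illustration.
def usage_impact_factor(events, pubs, journal, year):
    uses = sum(1 for e in events
               if e["journal"] == journal and e["use_year"] == year
               and e["pub_year"] in (year - 1, year - 2))
    items = sum(n for (j, y), n in pubs.items()
                if j == journal and y in (year - 1, year - 2))
    return uses / items if items else 0.0

events = [{"journal": "J1", "use_year": 2005, "pub_year": 2004}] * 30
pubs = {("J1", 2003): 10, ("J1", 2004): 10}
print(usage_impact_factor(events, pubs, "J1", 2005))   # 30 uses / 20 items = 1.5
```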


    A public-key based authentication and key establishment protocol coupled with a client puzzle

    JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 9 2003
    M.C. Lee
    Network Denial-of-Service (DoS) attacks, which exhaust server resources and network bandwidth, can cause the target servers to be unable to provide proper services to the legitimate users and in some cases render the target systems inoperable and/or the target networks inaccessible. DoS attacks have now become a serious and common security threat to the Internet community. Public Key Infrastructure (PKI) has long been incorporated in various authentication protocols to facilitate verifying the identities of the communicating parties. The use of PKI has, however, an inherent problem as it involves expensive computational operations such as modular exponentiation. An improper deployment of the public-key operations in a protocol could create an opportunity for DoS attackers to exhaust the server's resources. This paper presents a public-key based authentication and key establishment protocol coupled with a sophisticated client puzzle, which together provide a versatile solution for possible DoS attacks and various other common attacks during an authentication process. Besides authentication, the protocol also supports a joint establishment of a session key by both the client and the server, which protects the session communications after the mutual authentication. The proposed protocol has been validated using a formal logic theory and has been shown, through security analysis, to be able to resist, besides DoS attacks, various other common attacks. [source]
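
    The generic hash-based client puzzle (find x such that SHA-256(nonce||x) has k leading zero bits) captures the asymmetry such protocols rely on: verification costs one hash while solving costs about 2^k hashes, which throttles request floods before any expensive public-key operation. This is the standard construction, not necessarily this paper's exact puzzle:

```python
# Hash-based client puzzle: server issues a nonce and difficulty k; the client
# searches for a solution; the server verifies with a single hash.
import hashlib, os, itertools

def has_leading_zero_bits(digest: bytes, k: int) -> bool:
    bits = bin(int.from_bytes(digest, "big"))[2:].zfill(len(digest) * 8)
    return bits.startswith("0" * k)

def solve(nonce: bytes, k: int) -> int:
    for x in itertools.count():            # ~2^k hash evaluations on average
        if has_leading_zero_bits(hashlib.sha256(nonce + x.to_bytes(8, "big")).digest(), k):
            return x

def verify(nonce: bytes, x: int, k: int) -> bool:
    return has_leading_zero_bits(hashlib.sha256(nonce + x.to_bytes(8, "big")).digest(), k)

nonce, k = os.urandom(16), 12
x = solve(nonce, k)
print("solution:", x, "verified:", verify(nonce, x, k))
```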


    Web server suite for complex mixture analysis by covariance NMR

    MAGNETIC RESONANCE IN CHEMISTRY, Issue S1 2009
    Fengli Zhang
    Abstract Elucidation of the chemical composition of biological samples is a main focus of systems biology and metabolomics. Their comprehensive study requires reliable, efficient, and automatable methods to identify and quantify the underlying metabolites. Because nuclear magnetic resonance (NMR) spectroscopy is a rich source of molecular information, it has a unique potential for this task. Here we present a suite of public web servers (http://spinportal.magnet.fsu.edu), termed COLMAR, which facilitates complex mixture analysis by NMR. The COLMAR web portal presently consists of three servers: COLMAR covariance calculates the covariance NMR spectrum from an NMR input dataset, such as a TOCSY spectrum; the COLMAR DemixC method decomposes the 2D covariance TOCSY spectrum into a reduced set of nonredundant 1D cross sections or traces, which belong to individual mixture components; and COLMAR query screens the traces against an NMR spectral database to identify individual compounds. Examples are presented that illustrate the utility of this web server suite for complex mixture analysis. Copyright © 2009 John Wiley & Sons, Ltd. [source]
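
    The covariance step itself is short: given a 2D dataset F (t1 increments by F2 points, already Fourier-transformed along the direct dimension), the covariance spectrum is the matrix square root of FᵀF, which puts the direct dimension's resolution on both axes. A numpy sketch on random stand-in data (not COLMAR's actual implementation):

```python
# Covariance NMR: C^(1/2) where C = F^T F, computed via eigendecomposition
# since C is symmetric positive semidefinite.
import numpy as np

def covariance_spectrum(F: np.ndarray) -> np.ndarray:
    C = F.T @ F                               # covariance along the direct dimension
    w, V = np.linalg.eigh(C)
    return V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.T

F = np.random.default_rng(0).normal(size=(64, 256))   # toy stand-in for TOCSY data
S = covariance_spectrum(F)
print(S.shape, np.allclose(S, S.T))                    # (256, 256), symmetric
```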


    Robustness of efficient server assignment policies to service time distributions in finite-buffered lines

    NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 6 2010
    Eser Kırkızlar
    Abstract We study the assignment of flexible servers to stations in tandem lines with service times that are not necessarily exponentially distributed. Our goal is to achieve optimal or near-optimal throughput. For systems with infinite buffers, it is already known that the effective assignment of flexible servers is robust to the service time distributions. We provide analytical results for small systems and numerical results for larger systems that support the same conclusion for tandem lines with finite buffers. In the process, we propose server assignment heuristics that perform well for systems with different service time distributions. Our research suggests that policies known to be optimal or near-optimal for Markovian systems are also likely to be effective when used to assign servers to tasks in non-Markovian systems. © 2010 Wiley Periodicals, Inc. Naval Research Logistics, 2010 [source]


    Scheduling parallel machines with inclusive processing set restrictions

    NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 4 2008
    Jinwen Ou
    Abstract We consider the problem of assigning a set of jobs to different parallel machines of the same processing speed, where each job is compatible with only a subset of those machines. The machines can be linearly ordered such that a higher-indexed machine can process all those jobs that a lower-indexed machine can process. The objective is to minimize the makespan of the schedule. This problem is motivated by industrial applications such as cargo handling by cranes with nonidentical weight capacities, computer processor scheduling with memory constraints, and grades of service provision by parallel servers. We develop an efficient algorithm for this problem with a worst-case performance ratio of 4/3 + ε, where ε is a positive constant which may be set arbitrarily close to zero. We also present a polynomial time approximation scheme for this problem, which answers an open question in the literature. © 2008 Wiley Periodicals, Inc. Naval Research Logistics, 2008 [source]
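
    A baseline eligibility-respecting heuristic for this setting is easy to sketch: with machines ordered by capability, a job eligible from machine m_j upward goes to the least-loaded eligible machine, longest jobs first. This is a simple LPT-flavored list scheduler for illustration, not the paper's approximation algorithm or its PTAS:

```python
# List scheduling under inclusive processing sets: job j may run on machines
# lo_j .. M-1; assign longest jobs first to the least-loaded eligible machine.
def schedule(jobs, num_machines):
    # jobs: list of (processing_time, lowest_eligible_machine_index)
    loads = [0.0] * num_machines
    for p, lo in sorted(jobs, reverse=True):      # LPT order
        m = min(range(lo, num_machines), key=loads.__getitem__)
        loads[m] += p
    return loads

print(schedule([(5, 0), (4, 1), (3, 1), (2, 2), (2, 0)], 3))   # per-machine loads
```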