Distribution by Scientific Domains
Distribution within Business, Economics, Finance and Accounting

Kinds of Benchmark

  • earnings benchmark
  • performance benchmark
  • useful benchmark

Terms modified by Benchmark

  • benchmark calculation
  • benchmark case
  • benchmark data
  • benchmark data set
  • benchmark dose
  • benchmark example
  • benchmark instance
  • benchmark level
  • benchmark model
  • benchmark models
  • benchmark problem
  • benchmark solution
  • benchmark test

Selected Abstracts


    Magnus Söderberg
    ABSTRACT: Benchmarks have been recommended for assessing the relative performance of local government services. However, these are often narrowly defined and therefore ignore important welfare dimensions. This paper proposes a framework for benchmarking based on a combination of production and cost characteristics and citizens' subjective perceptions. An evaluation consisting of Data Envelopment Analysis (DEA) and different regression models is applied on all 21 Swedish regional public transport authorities, covering the period 2002–2006 (n = 103). The results indicate that the industry as a whole is about 70% efficient and that efficiency can be improved by increasing the sizes of the urban and the bus vehicle-km shares. The optimal ownership structure is to have one large owner combined with about 25 small owners. [source]
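The paper's full DEA evaluation is not reproduced here, but in the degenerate single-input, single-output case a CCR-style DEA efficiency score reduces to each unit's output/input ratio normalized by the best observed ratio. A minimal sketch with hypothetical cost and vehicle-km figures (the study combines several production and cost characteristics):

```python
def dea_efficiency(units):
    """Single-input/single-output CCR-style efficiency: each unit's
    output/input ratio divided by the best observed ratio (the frontier)."""
    ratios = {name: out / inp for name, (inp, out) in units.items()}
    frontier = max(ratios.values())
    return {name: r / frontier for name, r in ratios.items()}

# Hypothetical transport authorities: (operating cost, vehicle-km supplied).
authorities = {"A": (100.0, 80.0), "B": (120.0, 120.0), "C": (90.0, 63.0)}
scores = dea_efficiency(authorities)   # B defines the efficient frontier
```

A general multi-input, multi-output DEA requires solving one linear program per unit; the ratio form above is only the simplest special case.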

    Frequency of Seborrheic Keratosis Biopsies in the United States: A Benchmark of Skin Lesion Care Quality and Cost Effectiveness

    Maria I. Duque MD
    Background. Most seborrheic keratoses may be readily clinically differentiated from skin cancer, but occasional lesions resemble atypical melanocytic neoplasms. Objective. To evaluate the frequency, cost, and intensity of procedures performed that result in the removal and histopathologic evaluation of seborrheic keratoses. Methods. Episodes of surgical removal of lesions that were identified as seborrheic keratoses by histologic identification were determined using Medicare Current Beneficiary Survey data from 1998 to 1999. These episodes were defined by a histopathology procedure code that is associated with a diagnosis code for seborrheic keratosis. We then identified what procedure(s) generated the histopathology specimen. Biopsy and shave procedures were considered "low intensity," whereas excision and repair procedures were considered "high intensity." Results. Dermatologists managed 85% of all episodes of seborrheic keratoses. Dermatologists managed 89% of seborrheic keratosis episodes using low-intensity procedures compared with 51% by other specialties. For nondermatologists, 46% of the treatment cost ($9 million) to Medicare was generated from high-intensity management compared with 15% by dermatologists ($6 million). Conclusion. There is a significant difference in the management of suspicious pigmented lesions between dermatologists and other specialists. This affects both the cost and quality of care. [source]

    Predicting project delivery rates using the Naive–Bayes classifier

    B. Stewart
    Abstract The importance of accurate estimation of software development effort is well recognized in software engineering. In recent years, machine learning approaches have been studied as possible alternatives to more traditional software cost estimation methods. The objective of this paper is to investigate the utility of the machine learning algorithm known as the Naive–Bayes classifier for estimating software project effort. We present empirical experiments with the Benchmark 6 data set from the International Software Benchmarking Standards Group to estimate project delivery rates and compare the performance of the Naive–Bayes approach to two other machine learning methods: model trees and neural networks. A project delivery rate is defined as the number of effort hours per function point. The approach described is general and can be used to analyse not only software development data but also data on software maintenance and other types of software engineering. The paper demonstrates that the Naive–Bayes classifier has a potential to be used as an alternative machine learning tool for software development effort estimation. Copyright © 2002 John Wiley & Sons, Ltd. [source]
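The classifier itself is standard. As a hedged illustration only — the features, categories, and training cases below are made up, not the ISBSG Benchmark 6 attributes — a categorical Naive Bayes with Laplace smoothing that assigns projects to a delivery-rate class:

```python
from collections import Counter, defaultdict
import math

class NaiveBayes:
    """Categorical Naive Bayes with Laplace (+1) smoothing."""
    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.priors = {c: math.log(sum(1 for t in y if t == c) / len(y))
                       for c in self.classes}
        self.counts = defaultdict(Counter)   # (class, feature index) -> value counts
        for features, label in zip(X, y):
            for i, v in enumerate(features):
                self.counts[(label, i)][v] += 1
        return self

    def predict(self, features):
        def log_posterior(c):
            s = self.priors[c]
            for i, v in enumerate(features):
                cnt = self.counts[(c, i)]
                # +1 smoothing over seen values plus one unseen bucket
                s += math.log((cnt[v] + 1) / (sum(cnt.values()) + len(cnt) + 1))
            return s
        return max(self.classes, key=log_posterior)

# Hypothetical projects: (team-size class, platform) -> delivery-rate class.
X = [("small", "mf"), ("small", "mf"), ("large", "pc"), ("large", "pc")]
y = ["low", "low", "high", "high"]
model = NaiveBayes().fit(X, y)
pred = model.predict(("small", "mf"))   # → "low" on this toy data
```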

    New Benchmark for Water Photooxidation by Nanostructured α-Fe2O3 Films.

    CHEMINFORM, Issue 11 2007
    Andreas Kay
    Abstract ChemInform is a weekly Abstracting Service, delivering concise information at a glance that was extracted from about 200 leading journals. [source]

    Reparallelization techniques for migrating OpenMP codes in computational grids

    Michael Klemm
    Typical computational grid users target only a single cluster and have to estimate the runtime of their jobs. Job schedulers prefer short-running jobs to maintain a high system utilization. If the user underestimates the runtime, premature termination causes computation loss; overestimation is penalized by long queue times. As a solution, we present an automatic reparallelization and migration of OpenMP applications. A reparallelization is dynamically computed for an OpenMP work distribution when the number of CPUs changes. The application can be migrated between clusters when an allocated time slice is exceeded. Migration is based on a coordinated, heterogeneous checkpointing algorithm. Both reparallelization and migration enable the user to freely use computing time at more than a single point of the grid. Our demo applications successfully adapt to the changed CPU setting and smoothly migrate between, for example, clusters in Erlangen, Germany, and Amsterdam, the Netherlands, that use different kinds and numbers of processors. Benchmarks show that reparallelization and migration impose average overheads of about 4 and 2%, respectively. Copyright © 2008 John Wiley & Sons, Ltd. [source]
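The core of the reparallelization step is recomputing a work distribution when the CPU count changes. A minimal sketch (block partitioning in the style of an OpenMP static schedule; the actual runtime also migrates checkpointed state):

```python
def block_distribution(n_iters, n_cpus):
    """Contiguous per-CPU iteration blocks, as in an OpenMP static schedule."""
    base, extra = divmod(n_iters, n_cpus)
    bounds, start = [], 0
    for cpu in range(n_cpus):
        size = base + (1 if cpu < extra else 0)   # spread the remainder
        bounds.append((start, start + size))
        start += size
    return bounds

# Before migration: 1000 iterations spread over 8 CPUs ...
old = block_distribution(1000, 8)
# ... after migrating to a 6-CPU cluster, the work distribution is recomputed.
new = block_distribution(1000, 6)
```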

    Performance evaluation of the SX-6 vector architecture for scientific computations

    Leonid Oliker
    Abstract The growing gap between sustained and peak performance for scientific applications is a well-known problem in high-performance computing. The recent development of parallel vector systems offers the potential to reduce this gap for many computational science codes and deliver a substantial increase in computing capabilities. This paper examines the intranode performance of the NEC SX-6 vector processor, and compares it against the cache-based IBM Power3 and Power4 superscalar architectures, across a number of key scientific computing areas. First, we present the performance of a microbenchmark suite that examines many low-level machine characteristics. Next, we study the behavior of the NAS Parallel Benchmarks. Finally, we evaluate the performance of several scientific computing codes. Overall results demonstrate that the SX-6 achieves high performance on a large fraction of our application suite and often significantly outperforms the cache-based architectures. However, certain classes of applications are not easily amenable to vectorization and would require extensive algorithm and implementation reengineering to utilize the SX-6 effectively. Copyright © 2005 John Wiley & Sons, Ltd. [source]

    OpenMP-oriented applications for distributed shared memory architectures

    Ami Marowka
    Abstract The rapid rise of OpenMP as the preferred parallel programming paradigm for small-to-medium scale parallelism could slow unless OpenMP can show capabilities for becoming the model-of-choice for large scale high-performance parallel computing in the coming decade. The main stumbling block for the adaptation of OpenMP to distributed shared memory (DSM) machines, which are based on architectures like cc-NUMA, stems from the lack of capabilities for data placement among processors and threads for achieving data locality. The absence of such a mechanism causes remote memory accesses and inefficient cache memory use, both of which lead to poor performance. This paper presents a simple software programming approach called copy-inside–copy-back (CC) that exploits the data privatization mechanism of OpenMP for data placement and replacement. This technique enables one to distribute data manually without taking away control and flexibility from the programmer and is thus an alternative to the automatic and implicit approaches. Moreover, the CC approach improves on the OpenMP-SPMD style of programming, making the development process of an OpenMP application more structured and simpler. The CC technique was tested and analyzed using the NAS Parallel Benchmarks on SGI Origin 2000 multiprocessor machines. This study shows that OpenMP improves performance of coarse-grained parallelism, although a fast copy mechanism is essential. Copyright © 2004 John Wiley & Sons, Ltd. [source]
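The CC technique itself is an OpenMP/C mechanism; purely as an illustration of the idea — copy shared data into a private buffer, compute only on the private copy, then write back once — here is a hedged Python sketch using threads:

```python
import threading

def cc_worker(shared, lo, hi, out, idx):
    private = list(shared[lo:hi])      # copy-inside: thread-private copy
    for i in range(len(private)):      # all computation touches local data only
        private[i] *= 2
    out[idx] = (lo, private)           # result handed over for the copy-back phase

def parallel_double(shared, n_threads):
    chunk = len(shared) // n_threads
    out = [None] * n_threads
    threads = []
    for t in range(n_threads):
        lo = t * chunk
        hi = len(shared) if t == n_threads - 1 else lo + chunk
        th = threading.Thread(target=cc_worker, args=(shared, lo, hi, out, t))
        th.start()
        threads.append(th)
    for th in threads:
        th.join()
    for lo, private in out:            # copy-back: one writer updates shared data
        shared[lo:lo + len(private)] = private
    return shared

doubled = parallel_double(list(range(6)), 2)
```

In OpenMP the private copy would come from `private`/`firstprivate` clauses and the payoff is data locality on cc-NUMA; the Python version only demonstrates the access pattern.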

    Assessing the habitat quality of oil mallees and other planted farmland vegetation with reference to natural woodland

    F. Patrick Smith
    Summary: Much of the tree and shrub planting that has been conducted on farms in Western Australia over the past three decades has not been done with the specific intention of creating habitat or conserving biodiversity, particularly commercially oriented monocultures like oil mallee plantings. However, such plantings may nonetheless provide some habitat resources for native plants and animals. This study assessed the habitat quality of farm plantings (most of which were not planted with the primary intention of biodiversity conservation) at 72 sites across a study region in the central wheatbelt of Western Australia. Widely accepted habitat metrics were used to compare the habitat resources provided by planted farmland vegetation with those provided by remnant woodland on the same farms. The impact of adjacency of plantings to woodland and, in the case of oil mallees, the planting configuration on predicted habitat quality is assessed. Condition Benchmarks for five local native vegetation communities are proposed. Farmland plantings achieved an average Vegetation Condition Score (VCS) of 46 out of a possible 100, while remnant woodland on the same farms scored an average 72. The average scores for farm plantings ranged from 38 to 59 depending on which of five natural vegetation communities was used as its benchmark, but farm plantings always scored significantly less than remnant woodland (P < 0.001). Mixed species plantings on average were rated more highly than oil mallees (e.g. scores of 42 and 36 respectively using the Wandoo benchmark) and adjacency to remnant woodland improved the score for mixed plantings, but not for oil mallees. Configuration of oil mallees as blocks or belts (i.e. as an alley farming system) had no impact on the VCS. 
Planted farmland vegetation fell short of remnant woodland in both floristic richness (51 planted native species in total compared with a total of more than 166 naturally occurring plant species in woodland) and structural diversity (with height, multiple vegetation strata, tree hollows and woody debris all absent in the relatively young 7–15-year-old farm plantings). Nonetheless, farmland plantings do have measurable habitat values, and recruitment and apparent recolonization of plantings with native plant species were observed. Habitat values might be expected to increase as the plantings age. The VCS approach, including the application of locally relevant Benchmarks, is considered to be valuable for assessing potential habitat quality in farmland vegetation, particularly as a tool for engaging landholders and natural resource management practitioners. [source]
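A benchmark-relative condition score of this general shape can be sketched as follows. The attributes, weights, and benchmark values below are entirely hypothetical — the study's actual metrics differ — but the mechanism (score each attribute against its benchmark value, cap at full marks, sum to 100) is the common pattern:

```python
def vegetation_condition_score(site, benchmark, weights):
    """Score each attribute as site/benchmark (capped at 1) times its weight;
    weights sum to 100, so a site matching the benchmark scores 100."""
    score = 0.0
    for attr, weight in weights.items():
        ratio = min(site.get(attr, 0.0) / benchmark[attr], 1.0)
        score += weight * ratio
    return round(score, 1)

# Hypothetical attributes, weights, and benchmark values.
weights = {"native_species": 30, "strata": 20, "tree_hollows": 20,
           "woody_debris": 15, "recruitment": 15}
wandoo_benchmark = {"native_species": 40, "strata": 4, "tree_hollows": 10,
                    "woody_debris": 20, "recruitment": 5}
planting = {"native_species": 20, "strata": 1, "tree_hollows": 0,
            "woody_debris": 0, "recruitment": 3}
score = vegetation_condition_score(planting, wandoo_benchmark, weights)
```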

    Accruals Management to Achieve Earnings Benchmarks: A Comparison of Pre-managed Profit and Loss Firms

    Abhijit Barua
    Abstract: This study examines whether firms with profits before accruals management are more likely than firms with losses before accruals management to meet or exceed earnings benchmarks when pre-managed earnings are below those benchmarks. We extend Brown (2001) by documenting that the differential propensity to achieve earnings benchmarks by profitable and nonprofitable firms results from differential accruals management behavior. We find that firms with profits before accruals management are more likely than firms with losses before accruals management to have pre-managed earnings below both analysts' forecasts and prior period earnings and reported earnings above these benchmarks. [source]

    Building a learning progression for celestial motion: Elementary levels from an earth-based perspective

    Julia D. Plummer
    Abstract Prior research has demonstrated that neither children nor adults hold a scientific understanding of the big ideas of astronomy, as described in standards documents for science education (National Research Council, 1996, National Science Education Standards, Washington, DC: National Academy Press; American Association for the Advancement of Science, 1993, Benchmarks for Science Literacy, New York: Oxford University Press). This manuscript focuses on ideas in astronomy that are at the foundation of elementary students' understanding of the discipline: the apparent motion of the sun, moon, and stars as seen from an earth-based perspective. Lack of understanding of these concepts may hinder students' progress towards more advanced understanding in the domain. We have analyzed the logic of the domain and synthesized prior research assessing children's knowledge to develop a set of learning trajectories that describe how students' initial ideas about apparent celestial motion as they enter school can be built upon, through successively more sophisticated levels of understanding, to reach a level that aligns with the scientific view. Analysis of an instructional intervention with elementary students in the planetarium was used to test our initial construction of the learning trajectories. This manuscript presents a first look at the use of a learning progression framework in analyzing the structure of astronomy education. We discuss how this work may eventually lead towards the development and empirical testing of a full learning progression on the big idea: how children learn to describe and explain apparent patterns of celestial motion. © 2009 Wiley Periodicals, Inc. J Res Sci Teach 47:768–787, 2010 [source]


    Benchmarks and control charts for surgical site infections

    T. L. Gustafson
    Background Although benchmarks and control charts are basic quality improvement tools, few surgeons use them to monitor surgical site infection (SSI). Obstacles to widespread acceptance include: (1) small denominators, (2) complexities of adjusting for patient risk and (3) scepticism about their true purpose (cost cutting, surgical privilege determination or improving outcomes). Methods The application of benchmark charts (using US national SSI rates as limits) and control charts (using facility rates as limits) was studied in 51 hospitals submitting data to the AICE National Database Initiative. SSI rates were risk adjusted by calculating a new statistic, the standardized infection ratio (SIR), based on the risk index suggested by the Centers for Disease Control National Nosocomial Infection Surveillance Study. Fourteen different types of control chart were examined and 115 suspiciously high or low monthly rates were flagged. Participating hospital epidemiologists investigated and classified each flag as 'a real problem' (potentially preventable) or 'not a problem' (beyond the control of personnel at this facility). Results None of the standard, widely recommended, control charts studied showed practical value for identifying either preventable rate increases or outbreaks (clusters due to a single organism). On the other hand, several types of risk-adjusted control chart based on the SIR correctly identified most true opportunities for improvement. Sensitivity, specificity and receiver–operator characteristic (ROC) analysis revealed that the XmR chart of monthly SIRs would be useful in hospitals with smaller surgical volumes (ROC area = 0·732, P = 0·001). For larger hospitals, the most sensitive and robust SIR chart for real-time monitoring of surgical infections was the mXmR chart (ROC area = 0·753, P = 0·0005). © 2000 British Journal of Surgery Society Ltd [source]
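The SIR and XmR-chart mechanics can be sketched directly. The constant 2.66 is the standard XmR factor relating the mean moving range to three-sigma limits; the monthly figures below are hypothetical:

```python
def sir(observed, expected):
    """Standardized infection ratio: observed SSIs over risk-adjusted expected SSIs."""
    return observed / expected

def xmr_limits(values):
    """Individuals (X) chart limits: mean +/- 2.66 * mean moving range."""
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return mean - 2.66 * mr_bar, mean, mean + 2.66 * mr_bar

# Hypothetical monthly SIRs; limits come from the five baseline months.
monthly_sirs = [0.9, 1.1, 0.8, 1.2, 1.0, 2.4]
lcl, center, ucl = xmr_limits(monthly_sirs[:-1])
flagged = monthly_sirs[-1] > ucl       # the last month is suspiciously high
```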

    Motif Reconstruction in Clusters and Layers: Benchmarks for the Kawska–Zahn Approach to Model Crystal Formation

    CHEMPHYSCHEM, Issue 4 2010
    Theodor Milek
    Abstract A recently developed atomistic simulation scheme for investigating ion aggregation from solution is transferred to the morphogenesis of metal clusters grown from the vapor and layers deposited on a substrate surface. Both systems are chosen as benchmark models for intense motif reorganization during aggregate/layer growth. The applied simulation method does not necessarily involve global energy minimization after each growth event, but instead describes crystal growth as a series of structurally related configurations which may also include local energy minima. Apart from the particularly favorable high-symmetry configurations known from experiments and global energy minimization, we also demonstrate the investigation of transient structures. In the spirit of Ostwald's step rule, a continuous evolution of the aggregate/layer structure during crystal growth is observed. [source]

    CFD Sinflow Library: A framework to develop engineering educational codes in CFD and thermal sciences

    Romeu André Pieritz
    Abstract This work introduces the educational code development library "CFD Sinflow Library", specialized in 2D numerical methods in computational fluid dynamics (CFD) and thermal science. The library has an open, platform-independent architecture intended for research, educational, and engineering purposes. It was developed in the C++ standard programming language using an object-oriented approach, allowing educators and graduate/undergraduate students to access the numerical methods in a simplified way. The numerical capabilities and results quality are evaluated through comparisons with benchmark and analytical solutions. © 2004 Wiley Periodicals, Inc. Comput Appl Eng Educ 12: 31–43, 2004; Published online in Wiley InterScience; DOI 10.1002/cae.10056 [source]

    Tabu Search Strategies for the Public Transportation Network Optimizations with Variable Transit Demand

    Wei Fan
    A multi-objective nonlinear mixed integer model is formulated. Solution methodologies are proposed, which consist of three main components: an initial candidate route set generation procedure (ICRSGP) that generates all feasible routes incorporating practical bus transit industry guidelines; a network analysis procedure (NAP) that determines the transit demand matrix, assigns transit trips, sets service frequencies, and computes performance measures; and a Tabu search method (TSM) that combines these two parts, guides the candidate solution generation process, and selects an optimal set of routes from the huge solution space. Comprehensive tests are conducted and sensitivity analyses are performed. Characteristics analyses are undertaken and solution qualities from different algorithms are compared. Numerical results clearly indicate that the preferred TSM outperforms the genetic algorithm used as a benchmark for the optimal bus transit route network design problem without zone demand aggregation. [source]
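The TSM component follows the generic tabu search pattern: move to the best non-tabu neighbour, keep recently visited solutions tabu, and track the best solution seen. A minimal skeleton with a toy stand-in objective (the bitmask route subset and its cost function are illustrative inventions, not the paper's model):

```python
import random

def tabu_search(initial, neighbors, cost, tenure=5, iters=100, seed=0):
    """Generic tabu search: best non-tabu neighbour each step,
    short-term memory of recent solutions, best-so-far tracking."""
    rng = random.Random(seed)          # for neighbourhood sampling if needed
    current = best = initial
    tabu = [initial]
    for _ in range(iters):
        candidates = [n for n in neighbors(current, rng) if n not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)
        tabu.append(current)
        if len(tabu) > tenure:
            tabu.pop(0)                # expire the oldest tabu entry
        if cost(current) < cost(best):
            best = current
    return best

# Toy stand-in for route-set selection: a 4-bit mask of candidate routes,
# rewarding coverage but penalizing more than two routes (a "budget").
def neighbors(mask, rng):
    return [mask ^ (1 << i) for i in range(4)]   # flip one route in/out

def cost(mask):
    n_routes = bin(mask).count("1")
    return -(3 * n_routes - 5 * max(0, n_routes - 2))

best = tabu_search(0b0000, neighbors, cost)
```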

    Complex version of high performance computing LINPACK benchmark (HPL)

    R. F. Barrett
    Abstract This paper describes our effort to enhance the performance of the AORSA fusion energy simulation program through the use of the high-performance LINPACK (HPL) benchmark, commonly used in ranking the top 500 supercomputers. The algorithm used by HPL, enhanced by a set of tuning options, is more effective than that found in the ScaLAPACK library. Retrofitting these algorithms, such as look-ahead processing of pivot elements, into ScaLAPACK is considered a major undertaking. Moreover, HPL is configured as a benchmark, but only for real-valued coefficients. We therefore developed software to convert HPL for use within an application program that generates complex coefficient linear systems. Although HPL is not normally perceived as a part of an application, our results show that the modified HPL software brings a significant increase in the performance of the solver when simulating the highest resolution experiments thus far configured, achieving 87.5 TFLOPS on over 20 000 processors on the Cray XT4. Copyright © 2009 John Wiley & Sons, Ltd. [source]
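The authors modified HPL internally to handle complex coefficients; a different, well-known alternative is to embed an n×n complex system Az = b into an equivalent 2n×2n real system [[Re A, −Im A], [Im A, Re A]]·[Re z; Im z] = [Re b; Im b] and hand that to a real solver. A sketch verifying the embedding against a direct complex solve (plain Gaussian elimination stands in for HPL):

```python
def gauss_solve(A, b):
    """Gaussian elimination with partial pivoting; works for real or complex."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]   # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]                        # partial pivot
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0] * n
    for k in range(n - 1, -1, -1):                     # back substitution
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def real_embedding(A, b):
    """Map the n x n complex system Az = b to its 2n x 2n real equivalent."""
    n = len(A)
    top = [[A[i][j].real for j in range(n)] + [-A[i][j].imag for j in range(n)]
           for i in range(n)]
    bot = [[A[i][j].imag for j in range(n)] + [A[i][j].real for j in range(n)]
           for i in range(n)]
    return top + bot, [v.real for v in b] + [v.imag for v in b]

A = [[2 + 1j, 1 - 1j], [0 + 1j, 3 + 0j]]
b = [1 + 0j, 2 + 2j]
M, rhs = real_embedding(A, b)
y = gauss_solve(M, rhs)
z = [complex(y[i], y[i + len(A)]) for i in range(len(A))]   # reassemble Re/Im
direct = gauss_solve(A, b)   # same answer, solved in complex arithmetic
```

The embedding doubles the dimension (roughly 4× the flops of a native complex solve), which is one reason the paper converted HPL itself instead.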

    369 Tflop/s molecular dynamics simulations on the petaflop hybrid supercomputer 'Roadrunner'

    Timothy C. Germann
    Abstract We describe the implementation of a short-range parallel molecular dynamics (MD) code, SPaSM, on the heterogeneous general-purpose Roadrunner supercomputer. Each Roadrunner 'TriBlade' compute node consists of two AMD Opteron dual-core microprocessors and four IBM PowerXCell 8i enhanced Cell microprocessors (each consisting of one PPU and eight SPU cores), so that there are four MPI ranks per node, each with one Opteron and one Cell. We will briefly describe the Roadrunner architecture and some of the initial hybrid programming approaches that have been taken, focusing on the SPaSM application as a case study. An initial 'evolutionary' port, in which the existing legacy code runs with minor modifications on the Opterons and the Cells are only used to compute interatomic forces, achieves roughly a 2× speedup over the unaccelerated code. On the other hand, our 'revolutionary' implementation adopts a Cell-centric view, with data structures optimized for, and living on, the Cells. The Opterons are mainly used to direct inter-rank communication and perform I/O-heavy periodic analysis, visualization, and checkpointing tasks. The performance measured for our initial implementation of a standard Lennard–Jones pair potential benchmark reached a peak of 369 Tflop/s double-precision floating-point performance on the full Roadrunner system (27.7% of peak), nearly 10× faster than the unaccelerated (Opteron-only) version. Copyright © 2009 John Wiley & Sons, Ltd. [source]
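The Lennard–Jones pair potential used in the benchmark is the standard U(r) = 4ε((σ/r)¹² − (σ/r)⁶). A minimal sketch of the per-pair energy and radial force, the quantity the Cell SPUs evaluate for each neighbour pair:

```python
def lj_energy_force(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair energy U(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)
    and the radial force F = -dU/dr = (24*eps/r)*(2*(sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    energy = 4 * epsilon * (sr6 * sr6 - sr6)
    force = 24 * epsilon * (2 * sr6 * sr6 - sr6) / r
    return energy, force

# The potential minimum sits at r = 2**(1/6) * sigma, where the force vanishes
# and the energy equals -epsilon.
r_min = 2 ** (1 / 6)
u_min, f_min = lj_energy_force(r_min)
```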

    Optimizing process allocation of parallel programs for heterogeneous clusters

    Shuichi Ichikawa
    Abstract The performance of a conventional parallel application is often degraded by load-imbalance on heterogeneous clusters. Although it is simple to invoke multiple processes on fast processing elements to alleviate load-imbalance, the optimal process allocation is not obvious. Kishimoto and Ichikawa presented performance models for high-performance Linpack (HPL), with which the sub-optimal configurations of heterogeneous clusters were actually estimated. Their results on HPL are encouraging, whereas their approach is not yet verified with other applications. This study presents some enhancements of Kishimoto's scheme, which are evaluated with four typical scientific applications: computational fluid dynamics (CFD), finite-element method (FEM), HPL (linear algebraic system), and fast Fourier transform (FFT). According to our experiments, our new models (NP-T models) are superior to Kishimoto's models, particularly when the non-negative least squares method is used for parameter extraction. The average errors of the derived models were 0.2% for the CFD benchmark, 2% for the FEM benchmark, 1% for HPL, and 28% for the FFT benchmark. This study also emphasizes the importance of predictability in clusters, listing practical examples derived from our study. Copyright © 2008 John Wiley & Sons, Ltd. [source]

    OpenUH: an optimizing, portable OpenMP compiler

    Chunhua Liao
    Abstract OpenMP has gained wide popularity as an API for parallel programming on shared memory and distributed shared memory platforms. Despite its broad availability, there remains a need for a portable, robust, open source, optimizing OpenMP compiler for C/C++/Fortran 90, especially for teaching and research, for example into its use on new target architectures, such as SMPs with chip multi-threading, as well as learning how to translate for clusters of SMPs. In this paper, we present our efforts to design and implement such an OpenMP compiler on top of Open64, an open source compiler framework, by extending its existing analysis and optimization and adopting a source-to-source translator approach where a native back end is not available. The compilation strategy we have adopted and the corresponding runtime support are described. The OpenMP validation suite is used to determine the correctness of the translation. The compiler's behavior is evaluated using benchmark tests from the EPCC microbenchmarks and the NAS parallel benchmark. Copyright © 2007 John Wiley & Sons, Ltd. [source]

    APEX-Map: a parameterized scalable memory access probe for high-performance computing systems

    Erich Strohmaier
    Abstract The memory wall between the peak performance of microprocessors and their memory performance has become the prominent performance bottleneck for many scientific application codes. New benchmarks measuring data access speeds locally and globally in a variety of different ways are needed to explore the ever increasing diversity of architectures for high-performance computing. In this paper, we introduce a novel benchmark, APEX-Map, which focuses on global data movement and measures how fast global data can be fed into computational units. APEX-Map is a parameterized, synthetic performance probe and integrates concepts for temporal and spatial locality into its design. Our first parallel implementation in MPI and various results obtained with it are discussed in detail. By measuring the APEX-Map performance with parameter sweeps for a whole range of temporal and spatial localities performance surfaces can be generated. These surfaces are ideally suited to study the characteristics of the computational platforms and are useful for performance comparison. Results on a global-memory vector platform and distributed-memory superscalar platforms clearly reflect the design differences between these different architectures. Published in 2007 by John Wiley & Sons, Ltd. [source]

    A framework for performance analysis of Co-Array Fortran

    Bernd Mohr
    Abstract Co-Array Fortran (CAF) is a parallel programming extension to Fortran that provides a straightforward mechanism for representing distributed memory communication and, in particular, one-sided communication. Although this integration of communication primitives with the language improves programmer productivity, this new level of abstraction makes the analysis of CAF performance more difficult. This situation is due, in part, to a lack of tools for the analysis of CAF applications. In this paper, we present an extension to the KOJAK toolkit based on a source-to-source translator that supports performance instrumentation, data collection, trace generation, and performance visualization of CAF applications. We illustrate this approach with a performance visualization of a CAF version of the Halo kernel benchmark using the VAMPIR event trace visualization tool. Copyright © 2007 John Wiley & Sons, Ltd. [source]

    Towards a framework and a benchmark for testing tools for multi-threaded programs

    Yaniv Eytani
    Abstract Multi-threaded code is becoming very common, both on the server side, and very recently for personal computers as well. Consequently, looking for intermittent bugs is a problem that is receiving more and more attention. As there is no silver bullet, research focuses on a variety of partial solutions. We outline a road map for combining the research within the different disciplines of testing multi-threaded programs and for evaluating the quality of this research. We have three main goals. First, to create a benchmark that can be used to evaluate different solutions. Second, to create a framework with open application programming interfaces that enables the combination of techniques in the multi-threading domain. Third, to create a focus for the research in this area around which a community of people who try to solve similar problems with different techniques can congregate. We have started creating such a benchmark and describe the lessons learned in the process. The framework will enable technology developers, for example, developers of race detection algorithms, to concentrate on their components and use other ready-made components (e.g. an instrumentor) to create a testing solution. Copyright © 2006 John Wiley & Sons, Ltd. [source]

    An efficient concurrent implementation of a neural network algorithm

    R. Andonie
    Abstract The focus of this study is how we can efficiently implement the neural network backpropagation algorithm on a network of computers (NOC) for concurrent execution. We assume a distributed system with heterogeneous computers and that the neural network is replicated on each computer. We propose an architecture model with efficient pattern allocation that takes into account the speed of processors and overlaps the communication with computation. The training pattern set is distributed among the heterogeneous processors with the mapping being fixed during the learning process. We provide a heuristic pattern allocation algorithm minimizing the execution time of backpropagation learning. The computations are overlapped with communications. Under the condition that each processor has to perform a task directly proportional to its speed, this allocation algorithm has polynomial-time complexity. We have implemented our model on a dedicated network of heterogeneous computers using Sejnowski's NetTalk benchmark for testing. Copyright © 2005 John Wiley & Sons, Ltd. [source]
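The baseline of such a scheme — each processor receives training patterns in proportion to its speed — can be sketched as follows. The largest-remainder rounding is my choice for the sketch; the paper's heuristic additionally accounts for overlapping communication with computation:

```python
def allocate_patterns(n_patterns, speeds):
    """Allocate training patterns proportionally to processor speed,
    handing leftover patterns to the largest fractional shares."""
    total = sum(speeds)
    shares = [n_patterns * s / total for s in speeds]
    alloc = [int(s) for s in shares]                 # floor of each share
    by_remainder = sorted(range(len(speeds)),
                          key=lambda i: shares[i] - alloc[i], reverse=True)
    for i in by_remainder[:n_patterns - sum(alloc)]: # distribute the remainder
        alloc[i] += 1
    return alloc

# Hypothetical heterogeneous NOC with relative speeds 4 : 2 : 1.
alloc = allocate_patterns(700, [4, 2, 1])
```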

    Performance comparison of MPI and OpenMP on shared memory multiprocessors

    Géraud Krawezik
    Abstract When using a shared memory multiprocessor, the programmer faces the issue of selecting the portable programming model which will provide the best performance. Even if they restrict their choice to the standard programming environments (MPI and OpenMP), they have to select a programming approach among MPI and the variety of OpenMP programming styles. To help the programmer in their decision, we compare MPI with three OpenMP programming styles (loop level, loop level with large parallel sections, SPMD) using a subset of the NAS benchmark (CG, MG, FT, LU), two dataset sizes (A and B), and two shared memory multiprocessors (IBM SP3 NightHawk II, SGI Origin 3800). We have developed the first SPMD OpenMP version of the NAS benchmark and gathered other OpenMP versions from independent sources (PBN, SDSC and RWCP). Experimental results demonstrate that OpenMP provides competitive performance compared with MPI for a large set of experimental conditions. Not surprisingly, the two best OpenMP versions are those requiring the strongest programming effort. MPI still provides the best performance under some conditions. We present breakdowns of the execution times and measurements of hardware performance counters to explain the performance differences. Copyright © 2005 John Wiley & Sons, Ltd. [source]

    The J2EE ECperf benchmark results: transient trophies or technology treasures?

    Paul Brebner
    Abstract ECperf, the widely recognized industry standard J2EE benchmark, has attracted a large number of results submissions and their subsequent publications. However, ECperf places little restriction on the hardware platforms, operating systems and databases utilized in the benchmarking process. This, combined with the existence of only two primary metrics, makes it difficult to accurately compare the performance of the Application Server products themselves. By mining the full-disclosure archives for trends and correlations, we have discovered that J2EE technology is very scalable both in a scale-up and scale-out manner. Other observed trends include a linear correlation between middle-tier total processing power and throughput, as well as between J2EE Application Server license costs and throughput. However, the results clearly indicate that there is an increasing cost per user with increasing capacity systems and scale-up is proportionately more expensive than scale-out. Finally, the correlation between middle-tier processing power and throughput, combined with results obtained from a different 'lighter-weight' benchmark, facilitates an estimate of throughput for different types of J2EE applications. Copyright © 2004 John Wiley & Sons, Ltd. [source]
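The throughput-estimation step the abstract alludes to amounts to fitting a line to (processing power, throughput) pairs mined from the disclosure archives and reading off predictions. The data points below are invented for demonstration and are not ECperf results.

```python
# Illustrative ordinary least squares fit of the kind of linear relation
# the study reports between middle-tier processing power and throughput.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical (processing-power units, throughput) pairs.
power = [4.0, 8.0, 16.0, 32.0]
tput = [100.0, 200.0, 400.0, 800.0]
a, b = fit_line(power, tput)
print(a * 24 + b)  # estimated throughput for a 24-unit middle tier: 600.0
```

Once the slope is known for one application class, throughput for a proposed middle-tier configuration can be extrapolated, which is how the abstract's correlation becomes a capacity-planning tool.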

    On functional motor adaptations: from the quantification of motor strategies to the prevention of musculoskeletal disorders in the neck–shoulder region

    P. Madeleine
    Abstract Background: Occupations characterized by a static low load and by repetitive actions show a high prevalence of work-related musculoskeletal disorders (WMSD) in the neck–shoulder region. Moreover, muscle fatigue and discomfort are reported to play a relevant initiating role in WMSD. Aims: To investigate relationships between altered sensory information, i.e. localized muscle fatigue, discomfort and pain, and their associations with changes in motor control patterns. Materials & Methods: In total 101 subjects participated. Questionnaires, subjective assessments of perceived exertion and pain intensity, as well as surface electromyography (SEMG), mechanomyography (MMG), force and kinematics recordings were performed. Results: Multi-channel SEMG and MMG revealed that the degree of heterogeneity of trapezius muscle activation increased with fatigue. Further, the spatial organization of trapezius muscle activity changed in a dynamic manner during sustained contraction with acute experimental pain. A graduation of the motor changes in relation to pain stage (acute, subchronic and chronic) and work experience was also found. The duration of the work task was shorter in the presence of acute and chronic pain. Acute pain resulted in decreased activity of the painful muscle, while in subchronic and chronic pain a more static muscle activation was found. Posture and movement changed in the presence of neck–shoulder pain. Arm and trunk movement variability was larger in acute pain and smaller in subchronic/chronic pain. The size and structure of kinematics variability also decreased in the region of discomfort. Motor variability was higher in workers with high experience. Moreover, the pattern of activation of the upper trapezius muscle changed when receiving SEMG/MMG biofeedback during computer work.
Discussion: SEMG and MMG changes underlie functional mechanisms for the maintenance of force during fatiguing contraction and acute pain that may lead to the widespread pain seen in WMSD. A lack of harmonious muscle recruitment/derecruitment may play a role in the transition to pain. Motor behavior changed in shoulder pain conditions, underlining that motor variability may play a role in WMSD development, as corroborated by the changes in kinematics variability seen with discomfort. This prognostic hypothesis was further supported by the increased motor variability among workers with high experience. Conclusion: Quantitative assessments of the functional motor adaptations can be a way to benchmark pain status and help to identify signs indicating WMSD development. Motor variability is an important characteristic in ergonomic situations. Future studies will investigate the potential benefit of inducing motor variability in occupational settings. [source]

    The Effectiveness of Alternative Risk Assessment and Program Planning Tools in a Fraud Setting

    Abstract This study examines the impact of alternative risk assessment (standard risk checklist versus no checklist) and program development (standard program versus no program) tools on two facets of fraud planning effectiveness: (1) the quality of audit procedures relative to a benchmark validated by a panel of experts, and (2) the propensity to consult fraud experts. A between-subjects experiment, using an SEC enforcement fraud case, was conducted to examine these relationships. Sixty-nine auditors made risk assessments and designed an audit program. We found that auditors who used a standard risk checklist, structured by SAS No. 82 risk categories, made lower risk assessments than those without a checklist. This suggests that the use of the checklist was associated with a less effective diagnosis of the fraud. We also found that auditors with a standard audit program designed a relatively less effective fraud program than those without this tool but were not more willing to seek consultation with fraud experts. This suggests that standard programs may impair auditors' ability to respond to fraud risk. Finally, our results show that fraud risk assessment (FRASK) was not associated with the planning of more effective fraud procedures but was directly associated with the desire to consult with fraud specialists. This suggests that one benefit of improved FRASK is its relation with consultation. Overall, the findings call into question the effectiveness of standard audit tools in a fraud setting and highlight the need for a more strategic reasoning approach in an elevated risk situation. [source]

    Verification of the 2D Tokamak Edge Modelling Codes for Conditions of Detached Divertor Plasma

    V. Kotov
    Abstract The paper discusses verification of the ITER edge modelling code SOLPS 4.3 (B2-EIRENE). Results of the benchmark against SOLPS 5.0 are shown for standard JET test cases. Special two-point formulas are employed in SOLPS 4.3 to analyze the results of numerical simulations. The applied relations are exact within the framework of the equations solved by the B2 code. This enables a simultaneous check of the parallel momentum and energy balances and boundary conditions. Transition to divertor detachment is analyzed quantitatively as it appears in the simulations in terms of the coupled momentum and energy balance (© 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
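For readers unfamiliar with two-point relations in edge-plasma analysis, the textbook conduction-limited two-point model gives the flavor of how upstream and target conditions are coupled through momentum and energy balance. The paper's "special two-point formulas" are exact for the B2 equations and differ in detail, so the sketch below is the generic illustration only; all numerical values are made up.

```python
# Generic conduction-limited two-point model relating upstream and target
# scrape-off-layer plasma conditions (illustrative, not the SOLPS formulas).

KAPPA0 = 2000.0  # Spitzer parallel conductivity coefficient, W m^-1 eV^-3.5

def upstream_temperature(T_t, q_par, L):
    """T_u (eV) from T_u^{7/2} = T_t^{7/2} + (7/2) * q_par * L / kappa0."""
    return (T_t ** 3.5 + 3.5 * q_par * L / KAPPA0) ** (1.0 / 3.5)

def target_density(n_u, T_u, T_t):
    """n_t from pressure balance n_u * T_u = 2 * n_t * T_t
    (the factor 2 accounts for the dynamic pressure at the sheath)."""
    return n_u * T_u / (2.0 * T_t)

# Illustrative values: 5 eV target, 100 MW/m^2 parallel heat flux,
# 50 m connection length, 3e19 m^-3 upstream density.
T_u = upstream_temperature(T_t=5.0, q_par=1e8, L=50.0)
n_t = target_density(n_u=3e19, T_u=T_u, T_t=5.0)
print(T_u, n_t)
```

Checking such relations against the full simulation output, rather than predicting with them, is what allows a simultaneous consistency test of the momentum balance, energy balance and boundary conditions as the abstract describes.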

    Pollutant release and transfer registers: examining the value of government-led reporting on corporate environmental performance

    Rory Sullivan
    Abstract Within this paper, we argue that the data released by companies through corporate environmental reports are of very limited value, particularly for analysts seeking to benchmark the environmental performance of different companies or sites. We also argue that the data published by governments through pollutant release and transfer registers (PRTRs) such as the US Toxic Releases Inventory (TRI) or the EU Polluting Emissions Register (EPER) are of much greater value in comparative analyses. However, we recognize that PRTRs are limited in their scope and there are differences between the PRTRs that are in place in different countries. We find then that while PRTRs can inform comparative analyses within countries, their potential to provide a basis for benchmarking across different countries has not yet been fulfilled. Nonetheless, we conclude that PRTRs often provide the best available data for benchmarking corporate environmental performance and that increases in their scope and a harmonization of their design could play a significant role in illuminating variations in corporate environmental performance over time and across space. Copyright © 2007 John Wiley & Sons, Ltd and ERP Environment. [source]

    Stakeholder accountability or stakeholder management: a review of UK firms' social and ethical accounting, auditing and reporting (SEAAR) practices

    Ataur Rahman Belal
    The main aim of this study is to undertake an evaluation of the initial wave of stand-alone social reports issued by the major market players in the UK, using AA1000 as an evaluative tool, or benchmark, in order to ascertain the extent to which they conform to the provisions of AA1000, in particular the core principles of accountability and inclusivity. Applying the lens of the stakeholder model, the paper examines to what extent contemporary SEAAR practices in the UK are likely to promote stakeholder accountability, or whether they are simply exercises in stakeholder management. Copyright © 2002 John Wiley & Sons, Ltd and ERP Environment. [source]