Empirical Comparison

Selected Abstracts


EMPIRICAL COMPARISON OF G MATRIX TEST STATISTICS: FINDING BIOLOGICALLY RELEVANT CHANGE

EVOLUTION, Issue 10 2009
Brittny Calsbeek
A central assumption of quantitative genetic theory is that the breeder's equation (R = GP⁻¹S) accurately predicts the evolutionary response to selection. Recent studies highlight the fact that the additive genetic variance-covariance matrix (G) may change over time, rendering the breeder's equation incapable of predicting evolutionary change over more than a few generations. Although some consensus on whether G changes over time has been reached, multiple, often-incompatible methods for comparing G matrices are currently used. A major challenge of G matrix comparison is determining the biological relevance of observed change. Here, we develop a "selection skewers" G matrix comparison statistic that uses the breeder's equation to compare the response to selection given two G matrices while holding selection intensity constant. We present a bootstrap algorithm that determines the significance of G matrix differences using the selection skewers method, random skewers, Mantel's and Bartlett's tests, and eigenanalysis. We then compare these methods by applying the bootstrap to a dataset of laboratory populations of Tribolium castaneum. We find that the results of matrix comparison statistics are inconsistent based on differing a priori goals of each test, and that the selection skewers method is useful for identifying biologically relevant G matrix differences. [source]
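To make the compared quantity concrete, here is a minimal numpy sketch of the breeder's equation and a skewers-style comparison of the responses predicted by two G matrices under one selection regime. All matrix and vector values are invented for illustration, and the published statistic additionally uses a bootstrap to assess significance, which is omitted here.

```python
import numpy as np

# Hypothetical 2-trait matrices, not estimates from the Tribolium dataset.
G1 = np.array([[0.40, 0.15],
               [0.15, 0.30]])   # additive genetic (co)variances, population 1
G2 = np.array([[0.35, -0.10],
               [-0.10, 0.25]])  # additive genetic (co)variances, population 2
P  = np.array([[0.80, 0.20],
               [0.20, 0.70]])   # phenotypic (co)variance matrix
S  = np.array([0.50, 0.20])     # selection differential, held constant across G matrices

beta = np.linalg.solve(P, S)    # selection gradient beta = P^{-1} S
R1, R2 = G1 @ beta, G2 @ beta   # breeder's equation: R = G P^{-1} S = G beta

# A skewers-style summary of how differently the two matrices respond to the
# same selection: the cosine of the angle between the predicted response vectors.
cosine = R1 @ R2 / (np.linalg.norm(R1) * np.linalg.norm(R2))
print(R1.round(3), R2.round(3), round(float(cosine), 3))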


An Empirical Comparison of Price-Limit Models

INTERNATIONAL REVIEW OF FINANCE, Issue 3-4 2006
TAMIR LEVY
ABSTRACT Using futures traded on the Chicago Board of Trade, Chicago Mercantile Exchange and New York Board of Trade, we test six alternative models of the return-generating process (RGP) in futures exchanges that adopt a price-limit regime. We rank the six models according to their return-prediction ability, based on the mean square error criterion, and we find that the near-limit model performed best for both the estimation period and the prediction period. A reliable prediction of the expected return can have important implications for both traders and policy makers, concerning related issues such as the employment of long or short strategy, margin requirements and the effectiveness of the price limit mechanism. [source]
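The ranking criterion itself is straightforward; below is a small sketch of ranking candidate return-prediction models by mean square error, with hypothetical returns and predictions standing in for the six RGP specifications the paper actually tests.

```python
import numpy as np

def mse(actual, predicted):
    """Mean square error of return predictions."""
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return float(np.mean((actual - predicted) ** 2))

# Hypothetical out-of-sample futures returns and two candidate models' predictions.
actual  = np.array([0.004, -0.012, 0.007, 0.000, -0.003])
model_a = np.array([0.002, -0.010, 0.004, 0.001, -0.001])
model_b = np.array([0.006, -0.001, 0.010, -0.004, 0.003])

ranking = sorted({"model_a": mse(actual, model_a),
                  "model_b": mse(actual, model_b)}.items(), key=lambda kv: kv[1])
print(ranking)  # lower MSE ranks first
```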


Private Enforcement of Corporate Law: An Empirical Comparison of the United Kingdom and the United States

JOURNAL OF EMPIRICAL LEGAL STUDIES, Issue 4 2009
John Armour
It is often assumed that strong securities markets require good legal protection of minority shareholders. This implies both "good" law (principally, corporate and securities law) and enforcement, yet there has been little empirical analysis of enforcement. We study private enforcement of corporate law in two common-law jurisdictions with highly developed stock markets, the United Kingdom and the United States, examining how often directors of publicly traded companies are sued, and the nature and outcomes of those suits. We find, based on a comprehensive search for filings over 2004-2006, that lawsuits against directors of public companies alleging breach of duty are nearly nonexistent in the United Kingdom. The United States is more litigious, but we still find, based on a nationwide search of court decisions between 2000 and 2007, that only a small percentage of public companies face a lawsuit against directors alleging a breach of duty that is sufficiently contentious to result in a reported judicial opinion, and a substantial fraction of these cases are dismissed. We examine possible substitutes in the United Kingdom for formal private enforcement of corporate law and find some evidence of substitutes, especially for takeover litigation. Nonetheless, our results suggest that formal private enforcement of corporate law is less central to strong securities markets than might be anticipated. [source]


Launch Decisions and New Product Success: An Empirical Comparison of Consumer and Industrial Products

THE JOURNAL OF PRODUCT INNOVATION MANAGEMENT, Issue 1 2000
Erik Jan Hultink
Many articles have investigated new product development success and failure. However, most have approached the question from the vantage point of characteristics of the product and its development process. In this article we extend this extensive stream of research on factors affecting success by looking at the product in the context of its launch support program. We empirically answer the question of whether successful launch decisions differ for consumer and industrial products and identify how they differ. From data collected on over 1,000 product introductions, we first contrast consumer product launches with industrial product launches to identify key differences and similarities in launch decisions between market types. For consumer products, strategic launch decisions appear more defensive in nature, as they focus on defending current market positions. Industrial product strategic launch decisions seem more offensive, using technology and innovation to push the firm to operate outside its current realm of operations and move into new markets. The tactical marketing mix launch decisions (product, place, promotion and price) also differ markedly across the products launched for the two market types. Successful products were then contrasted with failed products to identify those launch decisions that discriminate between the two outcomes. Here the differences are more of degree than of principle. Some launch decisions were associated with success for consumer and industrial products alike. Launch successes are more likely to be broader assortments of more innovative product improvements that are advertised with print advertising, independent of market. Other launch decisions were uniquely related to success for each product type, especially at the marketing mix level (pricing, distribution, and promotion in particular). The launch decisions most frequently made by firms are not well aligned with the factors associated with higher success. Additionally, comparing the decisions associated with success to the recommendations for launches in the normative literature suggests that a number of conventional heuristics about how to launch products of each type will actually lead to failure rather than success. [source]


Empirical comparison of density estimators for large carnivores

JOURNAL OF APPLIED ECOLOGY, Issue 1 2010
Martyn E. Obbard
Summary 1. Population density is a critical ecological parameter informing effective wildlife management and conservation decisions. Density is often estimated by dividing capture-recapture (C-R) estimates of abundance by the size of the study area, but this relies on the assumption of geographic closure, a situation rarely achieved in studies of large carnivores. For geographically open populations, abundance is overestimated relative to the size of the study area because animals with only part of their home range on the study area are available for capture. This bias ('edge effect') is more severe when animals such as large carnivores range widely. To compensate for edge effect, a boundary strip around the trap array is commonly included when estimating the effective trap area. Various methods for estimating the width of the boundary strip have been proposed, but estimates of large carnivore density obtained by dividing abundance by effective trap area are generally mistrusted unless concurrent telemetry data are available to define the strip width. Remote sampling by cameras or hair snags may reduce study costs and duration, yet without telemetry data inflated density estimates remain problematic. 2. We evaluated recently developed spatially explicit capture-recapture (SECR) models using data from a common large carnivore, the American black bear Ursus americanus, obtained by remote sampling of 11 geographically open populations. These models permit direct estimation of population density from C-R data without assuming geographic closure. We compared estimates derived using this approach to those derived using conventional approaches that divide estimated abundance by effective trap area. 3. Spatially explicit C-R estimates were 20-200% lower than densities estimated conventionally. AICc supported individual heterogeneity in capture probabilities and home range sizes. Variable home range size could not be accounted for when dividing abundance by effective trap area. 4. Synthesis and applications. We conclude that the higher conventional density estimates, compared to estimates from SECR models, are consistent with positive bias due to edge effects in the former. Inflated density estimates could lead to management decisions placing threatened or endangered large carnivores at greater risk. Such decisions could be avoided by estimating density by SECR when bias due to geographic closure violation cannot be minimized by study design. [source]
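For context, here is a minimal sketch of the conventional boundary-strip calculation that the paper compares against; all numbers are hypothetical, not values from the study. SECR models instead estimate density directly as a model parameter, so no effective-area correction is needed.

```python
import math

# Hypothetical hair-snag grid and capture-recapture result (not from the paper).
grid_w, grid_h = 20.0, 15.0   # trap-array dimensions [km]
W = 6.0                       # boundary-strip width [km], e.g. from home-range data
N_hat = 48                    # C-R abundance estimate (animals)

A_grid = grid_w * grid_h
# Effective trap area: the grid buffered by a strip of width W
# (rectangle + edge strips + rounded corners).
A_eff = A_grid + 2.0 * (grid_w + grid_h) * W + math.pi * W**2

D_naive = N_hat / A_grid      # ignores edge effect -> biased high
D_strip = N_hat / A_eff       # conventional boundary-strip correction
print(f"{D_naive:.3f} vs {D_strip:.3f} bears per km^2")
```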


Risk Modeling of Dependence among Project Task Durations

COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 6 2007
I-Tung Yang
The assessments, however, can be strongly influenced by the dependence between task durations. In light of the need to address the dependence, the present study proposes a computer simulation model to incorporate and augment NORTA, a method for multivariate random number generation. The proposed model allows arbitrarily specified marginal distributions for task durations (which need not be members of the same distribution family) and any desired correlation structure. This level of flexibility is of great practical value when systematic data are not available and planners have to rely on experts' subjective estimation. The application of the proposed model is demonstrated through scheduling a road pavement project. The proposed model is validated by showing that the sample correlation coefficients between task durations closely match the originally specified ones. Empirical comparisons between the proposed model and two conventional approaches, PERT and conventional simulation (without correlations), are used to illustrate the usefulness of the proposed model. [source]
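A minimal numpy/scipy sketch of the NORTA idea the model builds on: draw correlated normals, transform them to uniforms, then push the uniforms through the inverse CDFs of arbitrary task-duration marginals. The marginals and correlation matrix below are invented, and the sketch skips the correlation-matching step that full NORTA performs, so the induced correlations are only approximately equal to the specified ones.

```python
import numpy as np
from scipy import stats

# Target marginals for three task durations (days); deliberately different families.
marginals = [
    stats.triang(c=0.3, loc=8, scale=10),   # task A: triangular on [8, 18], mode 11
    stats.lognorm(s=0.4, scale=12),         # task B: lognormal, median 12
    stats.gamma(a=4, scale=2.5),            # task C: gamma, mean 10
]

# Correlation matrix used for the underlying normals. Strictly, NORTA solves for
# the normal correlations that induce the *target* correlations after transformation;
# reusing the target values directly is a common shortcut and only approximate.
corr = np.array([
    [1.0, 0.6, 0.3],
    [0.6, 1.0, 0.4],
    [0.3, 0.4, 1.0],
])

rng = np.random.default_rng(42)
z = rng.multivariate_normal(np.zeros(3), corr, size=10_000)  # correlated normals
u = stats.norm.cdf(z)                                        # correlated uniforms
durations = np.column_stack([m.ppf(u[:, j]) for j, m in enumerate(marginals)])

# Sample correlations of the generated durations should be close to `corr`,
# mirroring the validation step described in the abstract.
print(np.corrcoef(durations, rowvar=False).round(2))
```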


Energy efficiency improvement strategies for a diesel engine in low-temperature combustion

INTERNATIONAL JOURNAL OF ENERGY RESEARCH, Issue 1 2009
Ming Zheng
Abstract Lowered combustion temperatures in diesel engines can reduce nitrogen oxides and soot simultaneously, and can be achieved through heavy use of exhaust gas recirculation (EGR) or homogeneous charge compression ignition (HCCI) type combustion. However, the fuel efficiency of low-temperature combustion (LTC) cycles is commonly compromised by high levels of hydrocarbon and carbon monoxide emissions. More seriously, the scheduling of fuel delivery in HCCI engines has less leverage over the exact timing of auto-ignition, which may even occur before the compression stroke is completed, causing excessive efficiency loss and combustion roughness. New LTC control strategies have been explored experimentally to achieve ultralow emissions under independently controlled EGR, intake boost, exhaust backpressure, and multi-event fuel injection. Empirical comparisons have been made between the fuel efficiencies of LTC and conventional diesel cycles. Preliminary adaptive control strategies based on cylinder pressure characteristics have been implemented to enable and stabilize LTC when heavy EGR is applied. The impact of heat-release phasing, duration, shaping, and splitting on thermal efficiency has also been analyzed with engine cycle simulations. This research intends to identify the major parameters that affect diesel LTC engine thermal efficiency. Copyright © 2008 John Wiley & Sons, Ltd. [source]
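As a rough illustration of the efficiency metric under discussion, the sketch below computes indicated work from a toy pressure-volume trace and divides it by the fuel energy. The cycle shape, fuel mass, and heating value are all invented placeholders, not measurements from the study.

```python
import numpy as np

# Toy polytropic compression/expansion around a constant-volume heat addition,
# just to exercise the indicated-work and thermal-efficiency formulas.
n_poly = 1.35                    # polytropic exponent (assumed)
V_tdc, V_bdc = 0.05e-3, 0.55e-3  # clearance and BDC cylinder volumes [m^3]
p_bdc = 1.0e5                    # pressure at start of compression [Pa]

V_comp = np.linspace(V_bdc, V_tdc, 200)
p_comp = p_bdc * (V_bdc / V_comp) ** n_poly
p_peak = 3.0 * p_comp[-1]        # pressure after constant-volume heat addition
V_exp = np.linspace(V_tdc, V_bdc, 200)
p_exp = p_peak * (V_tdc / V_exp) ** n_poly

V = np.concatenate([V_comp, V_exp])
p = np.concatenate([p_comp, p_exp])

# Indicated work: trapezoid-rule line integral of p dV around the cycle [J].
W_i = np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(V))

m_fuel = 2.0e-5                  # injected fuel mass per cycle [kg] (hypothetical)
LHV = 42.5e6                     # diesel lower heating value [J/kg] (approximate)
eta_indicated = W_i / (m_fuel * LHV)
imep = W_i / (V_bdc - V_tdc)     # indicated mean effective pressure [Pa]
print(f"W_i = {W_i:.0f} J, eta_i = {eta_indicated:.2f}, IMEP = {imep/1e5:.1f} bar")
```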


Reflections on and alternatives to WHO's fairness of financial contribution index

HEALTH ECONOMICS, Issue 2 2002
Adam Wagstaff
Abstract In its 2000 World Health Report (WHR), the World Health Organization argues that a key dimension of a health system's performance is the fairness of its financing system. This paper provides a critical assessment of the index of fairness of financial contribution (FFC) proposed in the WHR. It shows that the index cannot discriminate between health financing systems that are regressive and those that are progressive, and cannot discriminate between horizontal inequity on the one hand, and progressivity and regressivity on the other. The paper compares the WHO index to an alternative and more illuminating approach developed in the income redistribution literature in the early 1990s and used in the late 1990s to study the fairness of various OECD countries' health financing systems. It ends with an illustrative empirical comparison of the two approaches using data on out-of-pocket payments for health services in Vietnam for two years, 1993 and 1998. This analysis is of some interest in its own right, given the large share of health spending from out-of-pocket payments in Vietnam, and the changes in fees and drug prices over the 1990s. Copyright © 2002 John Wiley & Sons, Ltd. [source]
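The abstract does not spell out the alternative index, but the income-redistribution literature it refers to typically measures progressivity with a Kakwani-style index: the concentration index of payments (ranked by income) minus the Gini coefficient of income. Here is a minimal numpy sketch under that assumption, with simulated income and out-of-pocket payment data rather than the Vietnamese survey data.

```python
import numpy as np

def fractional_rank(x):
    # Fractional rank of each observation in the income distribution, in (0, 1).
    order = np.argsort(x)
    r = np.empty_like(x, dtype=float)
    r[order] = (np.arange(len(x)) + 0.5) / len(x)
    return r

def concentration_index(payments, income):
    # Convenient covariance formula: CI = 2 * cov(payments, income rank) / mean(payments).
    r = fractional_rank(income)
    return 2.0 * np.cov(payments, r, bias=True)[0, 1] / payments.mean()

def gini(income):
    return concentration_index(income, income)

def kakwani(payments, income):
    # Positive -> progressive financing; negative -> regressive.
    return concentration_index(payments, income) - gini(income)

rng = np.random.default_rng(0)
income = rng.lognormal(mean=9.0, sigma=0.7, size=5000)
# Toy out-of-pocket payments rising less than proportionally with income (regressive).
oop = 0.05 * income**0.8 * rng.lognormal(0.0, 0.3, size=5000)
print(round(kakwani(oop, income), 3))  # negative value signals regressivity
```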


Comparison of Mitotyper Rules and Phylogenetic-based mtDNA Nomenclature Systems

JOURNAL OF FORENSIC SCIENCES, Issue 5 2010
Deborah Polanskey B.S.
Abstract: A consistent nomenclature scheme is necessary to characterize a forensic mitochondrial DNA (mtDNA) haplotype. A standard nomenclature, called the Mitotyper Rules™, has been developed that applies typing rules in a hierarchical manner reflecting the forensic practitioner's nomenclature preferences. In this work, an empirical comparison between the revised hierarchical nomenclature rules and the phylogenetic approach to mtDNA type description has been conducted on 5173 samples from the phylogenetically typed European Mitochondrial DNA Population database (EMPOP) to identify the degree and significance of any differences. The comparison of the original EMPOP types and the results of retyping these sequences using the Mitotyper Rules demonstrates a high degree of concordance between the two alignment schemes. Differences in types resulted mainly because the Mitotyper Rules selected an alignment with the fewest differences from the revised Cambridge Reference Sequence (rCRS). In addition, several identical regions were described in more than one way in the EMPOP dataset, demonstrating a limitation of a solely phylogenetic approach in that it may not consistently type non-haplogroup-specific sites. Using a rule-based approach, commonly occurring as well as private variants are subjected to the same rules for naming, which is particularly advantageous when typing partial sequence data. [source]


Search-based refactoring: an empirical study

JOURNAL OF SOFTWARE MAINTENANCE AND EVOLUTION: RESEARCH AND PRACTICE, Issue 5 2008
Mark O'Keeffe
Abstract Object-oriented systems that undergo repeated addition of functionality commonly suffer a loss of quality in their underlying design. This problem must often be remedied in a costly refactoring phase before further maintenance programming can take place. Recently, search-based approaches to automating the task of software refactoring have been proposed, based on the concept of treating object-oriented design as a combinatorial optimization problem. However, because search-based refactoring is a novel approach, it has yet to be established which search techniques are most suitable for the task. In this paper we report the results of an empirical comparison of simulated annealing (SA), genetic algorithms (GAs) and multiple ascent hill-climbing (HCM) in search-based refactoring. A prototype automated refactoring tool is employed, capable of making radical changes to the design of an existing program so that it conforms more closely to a contemporary quality model. Results show HCM to outperform both SA and GA over a set of five input programs. Copyright © 2008 John Wiley & Sons, Ltd. [source]
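As a schematic of the best-performing technique, here is a minimal multiple-ascent (random-restart) hill climb over a toy quality function. The real tool searches over object-oriented designs scored by a software quality model rather than over integer vectors; the neighbourhood move here merely stands in for applying a single refactoring.

```python
import random

# Toy stand-in for a design and its quality model: the "design" is an integer
# vector and quality is higher the closer it lies to a target vector.
TARGET = [1, 3, -2, 0, 4, 2, -1, 5]

def quality(design):
    return -sum((d - t) ** 2 for d, t in zip(design, TARGET))

def neighbours(design):
    # Neighbouring designs differ by one small edit (+/-1 in one position).
    for i in range(len(design)):
        for delta in (-1, 1):
            cand = list(design)
            cand[i] += delta
            yield cand

def hill_climb(start):
    current = start
    while True:
        best = max(neighbours(current), key=quality)
        if quality(best) <= quality(current):
            return current          # local optimum reached
        current = best

def multiple_ascent(restarts=20, size=8, seed=0):
    # Multiple-ascent hill climbing: restart from random designs, keep the best ascent.
    rng = random.Random(seed)
    best = None
    for _ in range(restarts):
        result = hill_climb([rng.randint(-5, 5) for _ in range(size)])
        if best is None or quality(result) > quality(best):
            best = result
    return best

best_design = multiple_ascent()
print(best_design, quality(best_design))
```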


Temporary Help Agencies and Occupational Mobility

OXFORD BULLETIN OF ECONOMICS & STATISTICS, Issue 2 2005
J. Ignacio García-Pérez
Abstract This paper analyses the effects of Temporary Help Agencies (THAs) on occupational mobility by performing an empirical comparison of the job-to-job upgrading chances of agency and regular (non-agency) workers in Spain. We estimate a switching regression model to allow for self-selection into agency work arising, for instance, from more motivated workers being more likely to search for jobs through a THA. We find evidence in favour of the existence of self-selection in all qualification groups considered. Concerning mobility, we find that agency workers at intermediate qualification levels are less likely to experience demotions than regular workers, and that THAs increase the probability of high-skilled workers achieving a permanent contract in Spain. [source]
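A minimal sketch of the kind of selection correction involved, not the authors' exact specification: simulate self-selection on unobserved motivation, then compare naive OLS with a two-step probit-plus-correction-term regression (statsmodels/scipy). All variable names and parameter values are invented for illustration.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 5000

# Simulated data: workers self-select into agency work (d = 1) partly on
# unobserved motivation, which also raises later upgrading outcomes y.
motivation = rng.normal(size=n)            # unobserved
x = rng.normal(size=n)                     # observed qualification measure
z = rng.normal(size=n)                     # instrument shifting agency use only
d = (0.5 * z + 0.8 * motivation + rng.normal(size=n) > 0).astype(int)
y = 1.0 + 0.5 * x + 0.3 * d + 0.6 * motivation + rng.normal(size=n)

# Naive OLS overstates the agency effect because motivation is omitted.
naive = sm.OLS(y, sm.add_constant(np.column_stack([x, d]))).fit()

# Two-step correction with regime-specific inverse Mills ratio terms.
Z = sm.add_constant(np.column_stack([x, z]))
probit = sm.Probit(d, Z).fit(disp=0)
xb = Z @ probit.params                         # linear index from the probit
lam1 = norm.pdf(xb) / norm.cdf(xb)             # agency regime
lam0 = -norm.pdf(xb) / (1.0 - norm.cdf(xb))    # regular regime
lam = np.where(d == 1, lam1, lam0)
corrected = sm.OLS(y, sm.add_constant(np.column_stack([x, d, lam]))).fit()

print(naive.params.round(2), corrected.params.round(2))
```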


Coherence versus fragmentation in the development of the concept of force

COGNITIVE SCIENCE - A MULTIDISCIPLINARY JOURNAL, Issue 6 2004
Andrea A. DiSessa
Abstract This article aims to contribute to the literature on conceptual change by engaging in direct theoretical and empirical comparison of contrasting views. We take up the question of whether naïve physical ideas are coherent or fragmented, building specifically on recent work supporting claims of coherence with respect to the concept of force by Ioannides and Vosniadou [Ioannides, C., & Vosniadou, C. (2002). The changing meanings of force. Cognitive Science Quarterly 2, 5-61]. We first engage in a theoretical inquiry on the nature of coherence and fragmentation, concluding that these terms are not well-defined, and proposing a set of issues that may be better specified. The issues have to do with contextuality, which concerns the range of contexts in which a concept (meaning, model, theory) applies, and relational structure, which is how elements of a concept (meaning, model, or theory) relate to one another. We further propose an enhanced theoretical and empirical accountability for what and how much one needs to say in order to have specified a concept. Vague specification of the meaning of a concept can lead to many kinds of difficulties. Empirically, we conducted two studies. A study patterned closely on Ioannides and Vosniadou's work (which we call a quasi-replication) failed to confirm their operationalizations of "coherent." An extension study, based on a more encompassing specification of the concept of force, showed three kinds of results: (1) Subjects attend to more features than mentioned by Ioannides and Vosniadou, and they changed answers systematically based on these features; (2) We found substantial differences in the way subjects thought about the new contexts we asked about, which undermined claims for homogeneity within even the category of subjects (having one particular meaning associated with "force") that best survived our quasi-replication; (3) We found much reasoning of subjects about forces that cannot be accounted for by the meanings specified by Ioannides and Vosniadou. All in all, we argue that, with a greater attention to contextuality and with an appropriately broad specification of the meaning of a concept like force, Ioannides and Vosniadou's claims to have demonstrated coherence seem strongly undermined. Students' ideas are not random and chaotic; but neither are they simply described and strongly systematic. [source]


Potential utility of actuarial methods for identifying specific learning disabilities

PSYCHOLOGY IN THE SCHOOLS, Issue 6 2010
Nicholas Benson
This article describes how actuarial methods can supplant discrepancy models and augment problem solving and Response to Intervention (RTI) efforts by guiding the process of identifying specific learning disabilities (SLD). Actuarial methods use routinized selection and execution of formulas derived from empirically established relationships to make predictions that fall within a plausible range of possible future outcomes. In the case of SLD identification, the extent to which predictions are reasonable can be evaluated by their ability to categorize large segments of the population into subgroups that vary considerably along a spectrum of risk for academic failure. Although empirical comparisons of actuarial methods to clinical judgment reveal that actuarial methods consistently outperform clinical judgment, multidisciplinary teams charged with identifying SLD currently rely on clinical judgment. Actuarial methods provide educators with an empirically verifiable indicator of student need for special education and related services that could be used to estimate the relative effects of exclusionary criteria. This indicator would provide a defensible endpoint in the process of identifying SLD as well as a means of informing and improving the SLD identification process. © 2010 Wiley Periodicals, Inc. [source]
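As a toy illustration of what "routinized selection and execution of formulas" means in practice, the sketch below converts early screening scores into a failure-risk probability and a risk band. Every coefficient, predictor name, and cut point is hypothetical; in an actual actuarial method they would be derived from a large norming sample and validated against outcomes.

```python
import math

# Hypothetical actuarial formula: weights would come from empirically established
# relationships in prior data, not from clinical judgment. Invented for illustration.
INTERCEPT = -1.2
WEIGHTS = {
    "phonemic_awareness_z": -0.9,
    "rapid_naming_z": -0.6,
    "vocabulary_z": -0.4,
}

def risk_of_reading_failure(scores):
    """Return a probability of later reading failure from early screening z-scores."""
    logit = INTERCEPT + sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-logit))

def risk_band(p):
    # Cut points are illustrative; in practice they would be chosen to balance
    # sensitivity and specificity for service-allocation decisions.
    return "high" if p >= 0.40 else "moderate" if p >= 0.15 else "low"

student = {"phonemic_awareness_z": -1.5, "rapid_naming_z": -1.0, "vocabulary_z": -0.5}
p = risk_of_reading_failure(student)
print(round(p, 2), risk_band(p))
```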


Outsourcing Oversight: A Comparison of Monitoring for In-House and Contracted Services

PUBLIC ADMINISTRATION REVIEW, Issue 3 2007
Mary K. Marvel
The public sector contracting literature has long argued that outsourced services need to be and, in fact, are subject to a more elevated level of scrutiny compared to internally delivered services. Recently, the performance measurement and management literature has suggested that the twin themes of accountability and results have altered the management landscape at all levels of government. By focusing on performance monitoring, the implication is that monitoring levels for internally provided services should more closely approximate those for contracted services. The analysis provided here yields empirical comparisons of how governments monitor the same service provided in-house and contracted out. We find evidence that services provided internally by a government's own employees are indeed monitored intensively by the contracting government, with levels of monitoring nearly as high as those for services contracted out to for-profit providers. In contrast, however, we find strong evidence that performance monitoring by the contracting government does not extend to nonprofit and other governmental service providers, each of which is monitored much less intensively than when comparable services are provided internally. For such service providers, it appears that monitoring is either outsourced along with services, or simply reduced. [source]