Empirical Researchers (empirical + researcher)
Selected Abstracts

Predicting Business Failures in Canada
ACCOUNTING PERSPECTIVES, Issue 2 2007. J. Efrim Boritz

ABSTRACT: Empirical researchers and practitioners frequently use the bankruptcy prediction models developed by Altman (1968) and Ohlson (1980). This poses a potential problem for practitioners in Canada and for researchers working with Canadian data, because the Altman and Ohlson models were developed using U.S. data. We compare Canadian bankruptcy prediction models developed by Springate (1978), Altman and Levallee (1980), and Legault and Véronneau (1986) against the Altman and Ohlson models, using recent data, to determine the robustness of all models over time and the applicability of the Altman and Ohlson models to the Canadian environment. Our results indicate that the models developed by Springate (1978) and Legault and Véronneau (1986) yield results similar to those of the Ohlson (1980) model while being simpler and requiring less data. The Altman (1968) and Altman and Levallee (1980) models generally perform worse than the other models. All models perform better with their original coefficients than with re-estimated coefficients. Our results regarding the Altman and Ohlson models are consistent with Begley, Ming, and Watts (1996), who found that the original version of the Ohlson model is superior to the Altman model and is robust over time. [source]
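The models compared above are linear discriminant or logit scores built from financial-statement ratios. As a concrete illustration, the sketch below computes the Altman (1968) Z-score; the coefficients and cut-offs are those of the original U.S. public-company model (not a Canadian recalibration), and the input figures are hypothetical.

```python
# Illustrative sketch of the Altman (1968) Z-score, one of the models
# compared in the abstract above. Input values are hypothetical; the
# coefficients and cut-offs are those of the original U.S. model.

def altman_z(working_capital, retained_earnings, ebit,
             market_value_equity, total_liabilities, sales, total_assets):
    x1 = working_capital / total_assets           # liquidity
    x2 = retained_earnings / total_assets         # cumulative profitability
    x3 = ebit / total_assets                      # operating profitability
    x4 = market_value_equity / total_liabilities  # market leverage
    x5 = sales / total_assets                     # asset turnover
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 0.999 * x5

z = altman_z(working_capital=1.5e6, retained_earnings=2.0e6, ebit=0.8e6,
             market_value_equity=5.0e6, total_liabilities=4.0e6,
             sales=9.0e6, total_assets=10.0e6)
# Conventional zones for the original model:
#   Z > 2.99 "safe", Z < 1.81 "distress", otherwise a grey zone.
print(f"Z-score: {z:.2f}")
```

The Canadian models favoured in the abstract take broadly the same weighted-ratio form with fewer inputs, which is what the abstract means by "simpler and requiring less data".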
Modelling price paths in on-line auctions: smoothing sparse and unevenly sampled curves by using semiparametric mixed models
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 2 2008. Florian Reithinger

Summary: On-line auctions pose many challenges for the empirical researcher, one of which is the effective and reliable modelling of price paths. We propose a novel way of modelling price paths in eBay's on-line auctions by using functional data analysis. One of the practical challenges is that the functional objects are sampled only very sparsely and unevenly. Most approaches rely on smoothing to recover the underlying functional object from the data, which can be difficult if the data are irregularly distributed. We present a new approach that can overcome this challenge. The approach is based on the ideas of mixed models. Specifically, we propose a semiparametric mixed model with boosting to recover the functional object. As well as being able to handle sparse and unevenly distributed data, the model also results in conceptually more meaningful functional objects. In particular, we motivate our method within the framework of eBay's on-line auctions. On-line auctions produce monotonically increasing price curves that are often correlated across auctions. The semiparametric mixed model accounts for this correlation in a parsimonious way. It also manages to capture the underlying monotonic trend in the data without imposing model constraints. Our application shows that the resulting functional objects are conceptually more appealing. Moreover, when used to forecast the outcome of an on-line auction, our approach also results in more accurate price predictions compared with standard approaches. We illustrate our model on a set of 183 closed auctions for Palm M515 personal digital assistants. [source]
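The mixed-model view of semiparametric smoothing that underlies approaches like this one is easy to sketch: represent the curve with a spline basis and shrink the knot coefficients as if they were random effects, which reduces to ridge-penalized least squares and stays stable even when a curve is observed at only a handful of irregular time points. The sketch below is a generic penalized-spline smoother on simulated bid data, not the authors' boosting algorithm; all names and tuning values are illustrative.

```python
# Minimal sketch of a penalized-spline smoother for one sparsely and
# unevenly sampled price path. Shrinking the knot coefficients like
# random effects is the classic mixed-model formulation of
# semiparametric smoothing; this is not the paper's boosting algorithm.
import numpy as np

rng = np.random.default_rng(0)

# Sparse, uneven bid times over a 7-day auction, with noisy prices.
t = np.sort(rng.uniform(0.0, 7.0, size=15))
y = 30.0 * (t / 7.0) ** 3 + 5.0 + rng.normal(scale=1.5, size=t.size)

knots = np.linspace(0.5, 6.5, 10)

def design(ts):
    # Truncated-line basis: intercept, slope, and (t - knot)_+ terms.
    return np.column_stack([np.ones_like(ts), ts,
                            np.maximum(ts[:, None] - knots, 0.0)])

X = design(t)
lam = 1.0                                     # variance-ratio analogue
P = np.diag([0.0, 0.0] + [lam] * knots.size)  # penalize knot terms only
beta = np.linalg.solve(X.T @ X + P, X.T @ y)

grid = np.linspace(0.0, 7.0, 200)
path = design(grid) @ beta                    # smoothed price path
print(f"estimated closing price: {path[-1]:.2f}")
```

In a full mixed-model treatment, lam would be estimated (for example by REML) rather than fixed by hand.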
A Reality Check for Data Snooping
ECONOMETRICA, Issue 5 2000. Halbert White

Data snooping occurs when a given set of data is used more than once for purposes of inference or model selection. When such data reuse occurs, there is always the possibility that any satisfactory results obtained may simply be due to chance rather than to any merit inherent in the method yielding the results. This problem is practically unavoidable in the analysis of time-series data, as typically only a single history measuring a given phenomenon of interest is available for analysis. It is widely acknowledged by empirical researchers that data snooping is a dangerous practice to be avoided, but in fact it is endemic. The main problem has been a lack of sufficiently simple practical methods capable of assessing the potential dangers of data snooping in a given situation. Our purpose here is to provide such methods by specifying a straightforward procedure for testing the null hypothesis that the best model encountered in a specification search has no predictive superiority over a given benchmark model. This permits data snooping to be undertaken with some degree of confidence that one will not mistake results that could have been generated by chance for genuinely good results. [source]
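The logic of the test is simple to demonstrate: bootstrap the performance differentials of all candidate models against the benchmark, take the maximum over models in each resample, and compare the observed maximum with that null distribution. The sketch below uses simulated loss differentials and a plain i.i.d. bootstrap for brevity; White's actual procedure uses the stationary bootstrap to respect time-series dependence.

```python
# Stripped-down sketch of White's (2000) Reality Check. The paper uses
# the stationary bootstrap for dependent data; a plain i.i.d. bootstrap
# is substituted here to keep the illustration short. Data simulated.
import numpy as np

rng = np.random.default_rng(1)

T, M = 500, 20          # time periods, candidate models
# d[t, m]: performance of model m relative to the benchmark at time t
# (e.g., benchmark loss minus model loss); simulated under the null.
d = rng.normal(size=(T, M))

v_obs = np.sqrt(T) * d.mean(axis=0).max()   # observed max statistic

B = 2000
v_boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, T, size=T)        # resample time indices
    # Center at the sample means, as the Reality Check null requires.
    v_boot[b] = np.sqrt(T) * (d[idx].mean(axis=0) - d.mean(axis=0)).max()

p_value = (v_boot >= v_obs).mean()
print(f"Reality Check p-value: {p_value:.3f}")
# A small p-value says the best model's edge over the benchmark is
# unlikely to be an artefact of searching across all M models.
```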
Heterogeneity and cross section dependence in panel data models: theory and applications introduction
JOURNAL OF APPLIED ECONOMETRICS, Issue 2 2007. Badi H. Baltagi

The papers included in this special issue are primarily concerned with the problem of cross section dependence and heterogeneity in the analysis of panel data models and their relevance in applied econometric research. Cross section dependence can arise due to spatial or spillover effects, or could be due to unobserved (or unobservable) common factors. Much of the recent research on non-stationary panel data has focused on this problem. It was clear that the first generation panel unit root and cointegration tests developed in the 1990s, which assumed cross-sectional independence, are inadequate and could lead to significant size distortions in the presence of neglected cross-section dependence. Second generation panel unit root and cointegration tests that take account of possible cross-section dependence in the data have been developed; see the recent surveys by Choi (2006) and Breitung and Pesaran (2007). The papers by Baltagi, Bresson and Pirotte, Choi and Chue, Kapetanios, and Pesaran in this special issue are further contributions to this literature. The papers by Fachin, and Moon and Perron are empirical studies in this area. Controlling for heterogeneity has also been an important concern for empirical researchers, with panel data methods promising a better handle on heterogeneity than cross-section data methods. The papers by Hsiao, Shen, Wang and Weeks, Pedroni, and Serlenga and Shin are empirical contributions to this area. Copyright © 2007 John Wiley & Sons, Ltd. [source]
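A representative second-generation test is the cross-sectionally augmented Dickey-Fuller (CADF) regression of Pesaran (2007), which soaks up a common factor by adding cross-section averages to each unit's Dickey-Fuller regression; the panel CIPS statistic is then the average of the unit t-ratios. The sketch below runs this on simulated data, omitting lag augmentation, trend terms, and the nonstandard critical values, so it is illustrative only.

```python
# Sketch of Pesaran's (2007) CADF/CIPS approach: augment each unit's
# Dickey-Fuller regression with cross-section averages as a proxy for
# an unobserved common factor. Simulated data; lags, trends and the
# nonstandard critical values are omitted for brevity.
import numpy as np

rng = np.random.default_rng(2)
N, T = 10, 100
f = np.cumsum(rng.normal(size=T))              # common stochastic factor
loadings = rng.uniform(0.5, 1.5, size=(N, 1))
y = np.cumsum(rng.normal(size=(N, T)), axis=1) + loadings * f

ybar = y.mean(axis=0)                          # cross-section average

def cadf_tstat(yi):
    dy, ylag = np.diff(yi), yi[:-1]
    dybar, ybarlag = np.diff(ybar), ybar[:-1]
    X = np.column_stack([np.ones_like(ylag), ylag, ybarlag, dybar])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - X.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se                        # t-ratio on y_{t-1}

cips = np.mean([cadf_tstat(y[i]) for i in range(N)])
print(f"CIPS statistic: {cips:.3f}")
# Compare with Pesaran's (2007) tabulated critical values; large
# negative values reject the panel unit-root null.
```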
A CONCEPTUAL UNDERSTANDING OF REQUIREMENTS FOR THEORY-BUILDING RESEARCH: GUIDELINES FOR SCIENTIFIC THEORY BUILDING
JOURNAL OF SUPPLY CHAIN MANAGEMENT, Issue 3 2008. JOHN G. WACKER

Business academics have focused their attention on empirical investigation of programs' effects on organizational competitive performance. These studies primarily emphasize theory building. Given the many definitions of theory, academics cannot be certain whether their research papers meet the specific requirements for theory development laid down by the philosophy of science. Certainly, supply chain academics generally believe that their articles fulfill the requirements of theory building. Although many of these articles do have elements of theory, more focus on the specific requirements of theory is needed to ensure that academic research constitutes "good" theory building. The primary purpose of this paper is to develop, logically, a set of guidelines that help empirical researchers ensure their studies fulfill the requirements of good theory, grounded in traditional scientific theory building. By fulfilling these requirements, researchers will produce studies with a lasting impact on their academic field. Achieving such an impact requires a logical plan, and this article provides one: guidelines for understanding how and why "good" theory building is achieved. The article logically develops a formal conceptual definition of theory, along with its related properties, as the basis for these guidelines. It then analyzes the requirements of theory, of "good" theory, and of their properties. These guidelines appear in the existing philosophy of science literature; this article consolidates those sources and explains why the guidelines are needed. The conclusion summarizes the guidelines as a checklist that supply chain researchers can use to ensure their articles are recognized as contributions to the academic field. In that sense, the article does not develop a revolutionary new insight into theory-building empirical articles, but rather integrates diverse traditional philosophy of science requirements into a much simpler set of guidelines. Through the logical development of these guidelines, researchers will understand the structure of theory and how their studies can be modified to have a lasting impact on the field of supply chain management.
[source]

Principal-agent relationships on the stewardship-agency axis
NONPROFIT MANAGEMENT & LEADERSHIP, Issue 1 2006. Ralf Caers

This article provides an overview of the literature on nonprofit principal-agent relationships. It depicts the nature of agency theory and stewardship theory, analyzes the origin of their struggle within the nonprofit structure, and marks directions for a conciliatory approach. We open with an introduction to agency theory and discuss the two main components of its mathematical branch. We thereby contrast it with stewardship theory and elaborate on the arguments that can affect the position of nonprofit principal-agent relationships on the stewardship-agency axis. Analysis of the existing literature points to a lack of consensus as to which theory should be applied. We argue that the division of nonprofit principal-agent relationships into board-manager and manager-employee interactions may help to clarify the balance between agency theory and stewardship theory and may lead to the establishment of a strongly founded theory on nonprofit principal-agent relationships. We close with a discussion of how this article may prove valuable to nonprofit policymakers and other empirical researchers. [source]