General Models (general + models)

Selected Abstracts


Tax Sensitivity in Electronic Commerce

FISCAL STUDIES, Issue 4 2007
Mark A. Scanlan
Empirical research into the impact of taxation on e-commerce has concluded that there is a significant positive relationship between local sales tax rates and the likelihood that a person will shop online. This paper finds that the tax sensitivity for online purchases at the local level is much lower than previously estimated and is not significant under previous general models. However, by using a splined tax-rate function, this paper finds that consumers living in counties with high sales tax rates are still sensitive to tax rates when deciding whether to shop online, while those in counties with low tax rates exhibit no significant sensitivity. [source]
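
The splined specification can be sketched as a piecewise-linear tax term in a logit model of online shopping. The snippet below is a minimal illustration in Python: the simulated data, the knot at 7%, and the variable names are assumptions for demonstration, not the paper's actual data or specification.

```python
# Hedged sketch: a two-segment (splined) tax-rate term in a logit model of
# online shopping, so the tax coefficient may differ above and below a knot.
# Data, knot location and variable names are illustrative assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000
tax = rng.uniform(0.04, 0.10, n)            # county sales tax rate
income = rng.normal(50, 15, n)              # household income (thousands)

knot = 0.07                                  # assumed spline knot
tax_low = np.minimum(tax, knot)              # slope below the knot
tax_high = np.maximum(tax - knot, 0.0)       # extra slope above the knot

# Simulate purchases with sensitivity only above the knot.
logit_p = -1.0 + 0.0 * tax_low + 25.0 * tax_high + 0.01 * income
buy_online = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([tax_low, tax_high, income]))
fit = sm.Logit(buy_online, X).fit(disp=False)
print(fit.summary(xname=["const", "tax_low", "tax_high", "income"]))
```

Under this kind of specification, a significant coefficient on the high-tax segment alongside an insignificant one on the low-tax segment corresponds to the pattern the abstract describes.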


A comparison of nearest neighbours, discriminant and logit models for auditing decisions

INTELLIGENT SYSTEMS IN ACCOUNTING, FINANCE & MANAGEMENT, Issue 1-2 2007
Chrysovalantis Gaganis
This study investigates the efficiency of k-nearest neighbours (k-NN) in developing models for estimating auditors' opinions, as opposed to models developed with discriminant and logit analyses. The sample consists of 5276 financial statements, of which 980 received a qualified audit opinion, obtained from 1455 private and public UK companies operating in the manufacturing and trade sectors. We develop two industry-specific models and a general one using data from the period 1998–2001, which are then tested over the period 2002–2003. In each case, two versions of the models are developed: the first includes only financial variables; the second includes both financial and non-financial variables. The results indicate that the inclusion of credit rating in the models yields a considerable increase in both goodness of fit and classification accuracy. The comparison of the methods reveals that the k-NN models can be more efficient, in terms of average classification accuracy, than the discriminant and logit models. Finally, the results are mixed concerning the development of industry-specific models, as opposed to general models. Copyright © 2007 John Wiley & Sons, Ltd. [source]
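
A minimal sketch of this kind of comparison, using scikit-learn on synthetic data; the features (a few financial ratios plus a credit rating), the labels, and any accuracy figures it prints are purely illustrative and do not reproduce the study's sample or variables.

```python
# Hedged sketch: comparing k-NN, linear discriminant analysis and logit on a
# qualified/unqualified audit-opinion task with synthetic data.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n = 5_000
ratios = rng.normal(size=(n, 4))                   # stand-ins for financial ratios
credit_rating = rng.integers(1, 10, size=(n, 1))   # stand-in non-financial variable
X = np.hstack([ratios, credit_rating])
y = (ratios[:, 1] + 0.3 * credit_rating[:, 0] + rng.normal(size=n) > 3.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
models = {
    "k-NN (k=5)": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "LDA": LinearDiscriminantAnalysis(),
    "Logit": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name:12s} accuracy: {accuracy_score(y_te, model.predict(X_te)):.3f}")
```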


Reliable computing in estimation of variance components

JOURNAL OF ANIMAL BREEDING AND GENETICS, Issue 6 2008
I. Misztal
Summary The purpose of this study is to present guidelines for the selection of statistical and computing algorithms for variance components estimation when the computations are carried out with software packages. Two major methods are considered: residual maximum likelihood (REML) and Bayesian estimation via Gibbs sampling. Expectation-Maximization (EM) REML is regarded as a very stable algorithm that is able to converge when covariance matrices are close to singular; however, it is slow. Convergence problems can also occur with random regression models, especially if the starting values are much lower than those at convergence. Average Information (AI) REML is much faster for common problems, but it relies on heuristics for convergence and may be very slow or even diverge for complex models. REML algorithms for general models become unstable with a larger number of traits. REML by canonical transformation is stable in such cases but supports only a limited class of models. In general, REML algorithms are difficult to program. Bayesian methods via Gibbs sampling are much easier to program than REML, especially for complex models, and they can support much larger data sets; however, the termination criterion can be hard to determine, and the quality of estimates depends on a number of details. Computing speed varies with the optimizations applied; with such optimizations some large data sets and complex models can be handled in reasonable time, but they increase the complexity of programming and restrict the types of models that can be fitted. Several examples from past research are discussed to illustrate that different problems required different methods. [source]
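
As a toy illustration of the Gibbs-sampling route, the sketch below estimates the two variance components of a one-way random-effects model with flat priors; the model, the data, and the sampler details are invented for illustration and do not correspond to any package or analysis discussed in the paper.

```python
# Hedged sketch: a minimal Gibbs sampler for variance components in a
# one-way random-effects model  y_ij = mu + u_i + e_ij.
import numpy as np

rng = np.random.default_rng(2)
n_groups, n_per = 50, 20
u_true = rng.normal(0, np.sqrt(2.0), n_groups)            # true sigma_u^2 = 2
group = np.repeat(np.arange(n_groups), n_per)
y = 10.0 + u_true[group] + rng.normal(0, np.sqrt(5.0), n_groups * n_per)  # sigma_e^2 = 5

mu, u = y.mean(), np.zeros(n_groups)
su2, se2 = 1.0, 1.0
draws = []
for it in range(2_000):
    # Sample group effects u_i | rest (conjugate normal update).
    resid_mean = np.array([(y[group == i] - mu).mean() for i in range(n_groups)])
    prec = n_per / se2 + 1.0 / su2
    u = rng.normal(resid_mean * (n_per / se2) / prec, np.sqrt(1.0 / prec))
    # Sample the overall mean mu | rest (flat prior).
    mu = rng.normal((y - u[group]).mean(), np.sqrt(se2 / y.size))
    # Sample variances from their scaled inverse chi-square full conditionals.
    su2 = (u @ u) / rng.chisquare(n_groups - 2)
    e = y - mu - u[group]
    se2 = (e @ e) / rng.chisquare(y.size - 2)
    if it >= 500:                                          # discard burn-in
        draws.append((su2, se2))

post = np.array(draws)
print("posterior means (sigma_u^2, sigma_e^2):", post.mean(axis=0))
```

In a real analysis the chain length, burn-in, and convergence diagnostics are exactly the "termination criterion" issues the abstract warns about.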


Functional anatomy of the olecranon process in hominoids and Plio-Pleistocene hominins

AMERICAN JOURNAL OF PHYSICAL ANTHROPOLOGY, Issue 4 2004
Michelle S.M. Drapeau
Abstract This study examines the functional morphology of the olecranon process in hominoids and fossil hominins. The length of the bony lever of the triceps brachii muscle (TBM) is measured as the distance between the trochlear articular center and the most distant insertion site of the TBM, and olecranon orientation is measured as the angle that this bony lever makes with the long axis of the ulna. Results show that Homo, Pan, Gorilla, most monkeys, and the Australopithecus fossils studied have similar relative olecranon lengths. Suspensory hominoids and Ateles have shorter olecranons, suggesting, in some instances, selection for greater speed in extension. The orientation that the lever arm of the TBM makes with the long axis of the ulna varies with preferred locomotor mode. Terrestrial primates have olecranons that are more posteriorly oriented as body size increases, fitting general models of terrestrial mammalian posture. Arboreal quadrupeds have more proximally oriented lever arms than any terrestrial quadrupeds, which suggests use of the TBM with the elbow in a more flexed position. Olecranon orientation is not consistent among suspensory hominoids, although they are all characterized by orientations that are either similar to or more posterior than those observed in quadrupeds. Homo and the fossils have olecranons that are clearly more proximally oriented than expected for a quadruped of their size. This suggests that Homo and Australopithecus used their TBM in a flexed position, a position most consistent with manipulatory activities. Am J Phys Anthropol, 2003. © 2003 Wiley-Liss, Inc. [source]
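
The two measurements defined in the abstract reduce to simple vector geometry. The sketch below computes them from hypothetical 2-D landmark coordinates; the coordinates, units, and resulting values are invented and only the geometric definitions follow the text.

```python
# Hedged sketch: lever-arm length and olecranon orientation from landmarks.
import numpy as np

trochlear_center = np.array([0.0, 0.0])     # centre of the trochlear articular surface
tbm_insertion = np.array([-1.5, 2.4])       # most distant triceps (TBM) insertion site
shaft_point = np.array([0.0, -12.0])        # distal point defining the ulnar long axis

lever = tbm_insertion - trochlear_center    # bony lever of the TBM
long_axis = shaft_point - trochlear_center  # long axis of the ulna (pointing distally)

lever_length = np.linalg.norm(lever)
cosang = lever @ long_axis / (np.linalg.norm(lever) * np.linalg.norm(long_axis))
orientation_deg = np.degrees(np.arccos(cosang))   # angle between lever and ulnar axis

print(f"olecranon (lever-arm) length: {lever_length:.2f}")
print(f"olecranon orientation: {orientation_deg:.1f} degrees")
```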


Valuing credit derivatives using Gaussian quadrature: A stochastic volatility framework

THE JOURNAL OF FUTURES MARKETS, Issue 1 2004
Nabil Tahani
This article proposes semi-closed-form solutions to value derivatives on mean-reverting assets. A very general mean-reverting process for the state variable and two stochastic volatility processes, the square-root process and the Ornstein-Uhlenbeck process, are considered. For both models, semi-closed-form solutions for the characteristic functions are derived and then inverted using the Gauss-Laguerre quadrature rule to recover the cumulative probabilities. As benchmarks, European call options are valued within the following frameworks: Black and Scholes (1973) (constant volatility, no mean reversion), Longstaff and Schwartz (1995) (constant volatility, mean reversion), and Heston (1993) and Zhu (2000) (stochastic volatility, no mean reversion). These comparisons show that the numerical prices converge rapidly to the exact price. When applied to the general models proposed (stochastic volatility and mean reversion), the Gauss-Laguerre rule proves very efficient and very accurate. As applications, pricing formulas for credit spread options, caps, floors, and swaps are derived. It is also shown that even weak mean reversion can have a major impact on option prices. © 2004 Wiley Periodicals, Inc. Jrl Fut Mark 24:3–35, 2004 [source]
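
The inversion step can be sketched with a characteristic function whose answer is known in closed form, so the quadrature can be checked. The snippet below recovers a normal cumulative probability via the Gil-Pelaez inversion formula and Gauss-Laguerre nodes; it is an illustration of the technique only and does not reproduce the paper's mean-reverting or stochastic-volatility models.

```python
# Hedged sketch: Gil-Pelaez inversion  F(x) = 1/2 - (1/pi) * int_0^inf Im[e^{-iux} phi(u)]/u du,
# evaluated with Gauss-Laguerre quadrature. A normal characteristic function is
# used so the result can be compared with the exact CDF.
import numpy as np
from numpy.polynomial.laguerre import laggauss
from math import erf, sqrt

mu, sigma = 0.05, 0.2
phi = lambda u: np.exp(1j * u * mu - 0.5 * sigma**2 * u**2)   # normal char. function

def cdf_via_laguerre(x, n_nodes=64):
    u, w = laggauss(n_nodes)                     # nodes/weights for int_0^inf e^{-u} g(u) du
    integrand = np.imag(np.exp(-1j * u * x) * phi(u)) / u
    integral = np.sum(w * np.exp(u) * integrand) # undo the e^{-u} weight
    return 0.5 - integral / np.pi

x = 0.1
exact = 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))
print("quadrature:", cdf_via_laguerre(x))
print("exact     :", exact)
```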


Probabilistic models for medical insurance claims

APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 4 2007
Abebe Tessera
Abstract The paper develops two probabilistic models for claim size in health insurance based on the claims of the families and individuals covered by the policy. First, general models for the numbers of families and persons covered by a medical insurance policy are developed. These are then used to construct models for claim size. Applications of these general models are then analysed and discussed. Copyright © 2007 John Wiley & Sons, Ltd. [source]
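
As a rough illustration of this two-stage structure (coverage counts first, claim amounts second), the sketch below simulates total claim size with assumed Poisson family counts, Poisson family sizes, and gamma claim amounts; none of these distributional choices or parameters come from the paper.

```python
# Hedged sketch: a two-stage compound model for total claim size, with the
# number of covered families, family sizes and per-person claims all random.
import numpy as np

rng = np.random.default_rng(3)

def simulate_total_claims(n_sim=20_000):
    totals = np.empty(n_sim)
    for k in range(n_sim):
        n_families = rng.poisson(40)                      # families covered by the policy
        family_sizes = 1 + rng.poisson(1.8, n_families)   # persons per family (>= 1)
        n_persons = family_sizes.sum()
        # Each covered person files a claim with probability 0.3;
        # claim amounts are gamma-distributed.
        claims = rng.gamma(2.0, 500.0, n_persons) * rng.binomial(1, 0.3, n_persons)
        totals[k] = claims.sum()
    return totals

totals = simulate_total_claims()
print("mean total claim :", totals.mean())
print("95th percentile  :", np.quantile(totals, 0.95))
```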


Generalized dynamic linear models for financial time series

APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 1 2001
Patrizia Campagnoli
Abstract In this paper we consider a class of conditionally Gaussian state-space models and discuss how they can provide a flexible and fairly simple tool for modelling financial time series, even in the presence of different components in the series, or of stochastic volatility. Estimates can be computed by recursive equations, which provide the optimal solution under rather mild assumptions. In more general models, the filter equations can still provide approximate solutions. We also discuss how some models traditionally employed for analysing financial time series can be regarded in the state-space framework. Finally, we illustrate the models in two applications to real data sets. Copyright © 2001 John Wiley & Sons, Ltd. [source]
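
The recursive filter equations can be illustrated with the simplest fully Gaussian case, a local-level model; the Kalman recursion below is a generic sketch with made-up parameters and simulated data, not the paper's conditionally Gaussian specification.

```python
# Hedged sketch: Kalman filtering for a Gaussian local-level state-space model.
import numpy as np

rng = np.random.default_rng(4)
T, q, r = 300, 0.01, 0.1                            # length, state and observation noise variances
level = np.cumsum(rng.normal(0, np.sqrt(q), T))     # latent level (random walk)
y = level + rng.normal(0, np.sqrt(r), T)            # observed series

m, P = 0.0, 1.0                                     # prior mean and variance of the state
filtered = np.empty(T)
for t in range(T):
    # Prediction step: random-walk state, so the mean carries over.
    m_pred, P_pred = m, P + q
    # Update step with observation y[t].
    K = P_pred / (P_pred + r)                       # Kalman gain
    m = m_pred + K * (y[t] - m_pred)
    P = (1 - K) * P_pred
    filtered[t] = m

print("last filtered level vs. true level:", filtered[-1], level[-1])
```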