Hoc Methods (hoc + methods)

Distribution by Scientific Domains

Kinds of Hoc Methods

  • ad hoc methods


Selected Abstracts


    Modelling current trends in Northern Hemisphere temperatures

    INTERNATIONAL JOURNAL OF CLIMATOLOGY, Issue 7 2006
    Terence C. Mills
    Abstract Fitting a trend is of interest in many disciplines, but it is of particular importance in climatology, where estimating the current and recent trend in temperature is thought to provide a major indication of the presence of global warming. A range of ad hoc methods of trend fitting has been proposed, with little consensus as to the most appropriate techniques to use. The aim of this paper is to consider a range of trend extraction techniques, none of which require 'padding' out the series beyond the end of the available observations, and to use these to estimate the trend of annual mean Northern Hemisphere (NH) temperatures. A comparison of the trends estimated by these methods thus provides a robust indication of the likely range of current trend temperature increases and hence informs, in a timely quantitative fashion, arguments based on global temperature data concerning the nature and extent of global warming and climate change. For the complete sample 1856–2003, the trend is characterised as having long waves about an underlying increasing level. Since around 1970, all techniques display a pronounced warming trend. However, they also provide a range of trend functions, so that extrapolation far into the future would be a hazardous exercise. Copyright © 2006 Royal Meteorological Society. [source]
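
    As a rough illustration of trend extraction that uses only the observed sample (no padding beyond the final year), the sketch below fits a locally weighted regression trend to a synthetic annual temperature series; the synthetic data, the lowess smoother and the smoothing fraction are assumptions for illustration, not the techniques compared in the paper.

    ```python
    # Sketch: locally weighted (lowess) trend for an annual temperature series.
    # Illustrative only -- the synthetic series and smoothing fraction are
    # assumptions, not the data or methods used by Mills (2006).
    import numpy as np
    from statsmodels.nonparametric.smoothers_lowess import lowess

    rng = np.random.default_rng(0)
    years = np.arange(1856, 2004)                      # 1856-2003, the paper's sample span
    temps = 0.004 * (years - 1856) + 0.1 * np.sin((years - 1856) / 15) \
            + rng.normal(scale=0.15, size=years.size)  # synthetic anomalies (deg C)

    # lowess uses only observations inside the sample window, so nothing is
    # padded beyond the last available year.
    trend = lowess(temps, years, frac=0.3, return_sorted=False)

    recent = trend[years >= 1970]
    print("estimated warming 1970-2003: %.2f deg C" % (recent[-1] - recent[0]))
    ```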


    Non-smooth structured control design with application to PID loop-shaping of a process

    INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 14 2007
    Pierre Apkarian
    Abstract Feedback controllers with specific structure arise frequently in applications because they are easily apprehended by design engineers and facilitate on-board implementations and re-tuning. This work is dedicated to H∞ synthesis with structured controllers. In this context, straightforward application of traditional synthesis techniques fails, which explains why only a few ad hoc methods have been developed over the years. In response, we propose a more systematic way to design H∞ optimal controllers with fixed structure using local optimization techniques. Our approach addresses in principle all those controller structures which can be built into mathematical programming constraints. We apply non-smooth optimization techniques to compute locally optimal solutions, and provide practical tests for descent and optimality. In the experimental part we apply our technique to H∞ loop-shaping proportional integral derivative (PID) controllers for MIMO systems and demonstrate its use for PID control of a chemical process. Copyright © 2007 John Wiley & Sons, Ltd. [source]
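
    The sketch below illustrates the general idea of fixed-structure controller tuning by local optimization: PID gains are adjusted to minimize a mixed-sensitivity peak evaluated on a frequency grid, using a derivative-free local optimizer because the objective is non-smooth. The plant model, the weights and the grid approximation are assumptions; this is not the authors' non-smooth H∞ algorithm.

    ```python
    # Sketch: fixed-structure PID tuning by local optimization of a loop-shaping
    # objective. NOT the authors' non-smooth H-infinity method: the H-infinity
    # norm is approximated by the peak of the weighted responses on a frequency
    # grid, and closed-loop stability is not verified here. Plant, weights and
    # starting gains are illustrative assumptions.
    import numpy as np
    from scipy.optimize import minimize

    w = np.logspace(-3, 2, 400)                 # frequency grid (rad/s)
    s = 1j * w
    G = 1.0 / ((10 * s + 1) * (2 * s + 1))      # assumed second-order process model
    W1 = 0.5 * (s + 1.0) / (s + 1e-3)           # assumed performance weight on S
    W2 = 0.01 * (10 * s + 1)                    # assumed control-effort weight on C*S

    def mixed_sensitivity_peak(gains):
        kp, ki, kd = gains
        C = kp + ki / s + kd * s / (0.01 * s + 1)   # PID with filtered derivative
        S = 1.0 / (1.0 + G * C)                     # sensitivity function
        return max(np.max(np.abs(W1 * S)), np.max(np.abs(W2 * C * S)))

    # Derivative-free local search, mirroring the need for non-smooth techniques.
    res = minimize(mixed_sensitivity_peak, x0=[1.0, 0.1, 0.1], method="Nelder-Mead",
                   options={"xatol": 1e-4, "fatol": 1e-4})
    kp, ki, kd = res.x
    print("locally optimal PID gains: kp=%.3f ki=%.3f kd=%.3f  peak=%.3f"
          % (kp, ki, kd, res.fun))
    ```

    As in the paper, the result is only a locally optimal controller; different starting gains may yield different local solutions.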


    Residual analysis for spatial point processes (with discussion)

    JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 5 2005
    A. Baddeley
    Summary. We define residuals for point process models fitted to spatial point pattern data, and we propose diagnostic plots based on them. The residuals apply to any point process model that has a conditional intensity; the model may exhibit spatial heterogeneity, interpoint interaction and dependence on spatial covariates. Some existing ad hoc methods for model checking (quadrat counts, scan statistic, kernel smoothed intensity and Berman's diagnostic) are recovered as special cases. Diagnostic tools are developed systematically, by using an analogy between our spatial residuals and the usual residuals for (non-spatial) generalized linear models. The conditional intensity λ plays the role of the mean response. This makes it possible to adapt existing knowledge about model validation for generalized linear models to the spatial point process context, giving recommendations for diagnostic plots. A plot of smoothed residuals against spatial location, or against a spatial covariate, is effective in diagnosing spatial trend or covariate effects. Q–Q plots of the residuals are effective in diagnosing interpoint interaction. [source]
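
    The raw residual underlying these diagnostics compares the observed count in a region B with the integral of the fitted conditional intensity, R(B) = n(x ∩ B) − ∫_B λ̂(u) du. The sketch below computes quadrat residuals on the unit square for an assumed fitted intensity and a simulated point pattern; the intensity, the pattern and the grid size are illustrative assumptions, not the paper's examples.

    ```python
    # Sketch: raw residuals R(B) = observed count minus expected count per
    # quadrat, the quantity underlying the paper's diagnostic plots. The fitted
    # intensity, simulated pattern and grid size are assumptions.
    import numpy as np

    rng = np.random.default_rng(1)

    # Assumed "fitted" intensity on the unit square (trend in x and y).
    lam_hat = lambda x, y: 100.0 * np.exp(1.5 * x - 0.5 * y) / np.exp(1.5)

    # Simulate an inhomogeneous Poisson pattern by thinning a homogeneous one.
    lam_max = 100.0
    n = rng.poisson(lam_max)
    pts = rng.uniform(size=(n, 2))
    keep = rng.uniform(size=n) < lam_hat(pts[:, 0], pts[:, 1]) / lam_max
    pts = pts[keep]

    # Quadrat residuals: observed count minus expected count in each cell.
    k = 4
    edges = np.linspace(0.0, 1.0, k + 1)
    counts, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=[edges, edges])
    xc = 0.5 * (edges[:-1] + edges[1:])
    expected = lam_hat(xc[:, None], xc[None, :]) / k**2   # midpoint rule for the integral
    residuals = counts - expected
    print(np.round(residuals, 1))   # large residuals flag lack of fit of the intensity
    ```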


    Allocation of quality improvement targets based on investments in learning

    NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 8 2001
    Herbert Moskowitz
    Abstract Purchased materials often account for more than 50% of a manufacturer's product nonconformance cost. A common strategy for reducing such costs is to allocate periodic quality improvement targets to suppliers of such materials. Improvement target allocations are often accomplished via ad hoc methods such as prescribing a fixed, across-the-board percentage improvement for all suppliers, which, however, may not be the most effective or efficient approach for allocating improvement targets. We propose a formal modeling and optimization approach for assessing quality improvement targets for suppliers, based on process variance reduction. In our models, a manufacturer has multiple product performance measures that are linear functions of a common set of design variables (factors), each of which is an output from an independent supplier's process. We assume that a manufacturer's quality improvement is a result of reductions in supplier process variances, obtained through learning and experience, which require appropriate investments by both the manufacturer and suppliers. Three learning investment (cost) models for achieving a given learning rate are used to determine the allocations that minimize expected costs for both the supplier and manufacturer and to assess the sensitivity of investment in learning on the allocation of quality improvement targets. Solutions for determining optimal learning rates and concomitant quality improvement targets are derived for each learning investment function. We also account for the risk that a supplier may not achieve a targeted learning rate for quality improvements. An extensive computational study is conducted to investigate the differences between optimal variance allocations and a fixed percentage allocation. These differences are examined with respect to (i) variance improvement targets and (ii) total expected cost. For certain types of learning investment models, the results suggest that orders of magnitude differences in variance allocations and expected total costs occur between optimal allocations and those arrived at via the commonly used rule of fixed percentage allocations. However, for learning investments characterized by a quadratic function, there is surprisingly close agreement with an "across-the-board" allocation of 20% quality improvement targets. © John Wiley & Sons, Inc. Naval Research Logistics 48: 684–709, 2001 [source]
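
    As a hedged illustration of the allocation idea (not the paper's learning-investment models), the sketch below chooses per-supplier variance-reduction targets that meet a product-variance goal at minimum assumed investment cost and compares the result with a fixed across-the-board cut; the coefficients, variances and cost function are made-up assumptions.

    ```python
    # Sketch: allocate supplier variance-reduction targets to meet a product
    # variance goal at minimum investment cost, versus a flat across-the-board
    # cut. Coefficients, variances and the cost function are illustrative
    # assumptions, not the paper's learning models.
    import numpy as np
    from scipy.optimize import minimize

    a      = np.array([1.0, 2.0, 0.5])   # assumed linear design coefficients
    sigma2 = np.array([4.0, 1.0, 9.0])   # current supplier process variances
    c      = np.array([1.0, 3.0, 0.5])   # assumed per-supplier investment cost scales

    def product_variance(r):             # r[i] = fractional variance reduction at supplier i
        return np.sum(a**2 * sigma2 * (1.0 - r))

    target = 0.8 * product_variance(np.zeros(3))    # demand a 20% product-variance reduction
    cost = lambda r: np.sum(c * r / (1.0 - r))      # assumed convex investment-in-learning cost

    res = minimize(cost, x0=np.full(3, 0.2), method="SLSQP",
                   bounds=[(0.0, 0.95)] * 3,
                   constraints=[{"type": "ineq",
                                 "fun": lambda r: target - product_variance(r)}])

    print("optimal reductions:", np.round(res.x, 3), " cost: %.2f" % res.fun)
    print("flat 20%% across-the-board cut cost: %.2f" % cost(np.full(3, 0.2)))
    ```

    Even in this toy setup, the optimal allocation shifts effort toward the suppliers whose variance contributes most to the product and whose learning investment is cheapest, which is the contrast with fixed-percentage rules that the paper's computational study examines.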


    What to Do about Missing Values in Time-Series Cross-Section Data

    AMERICAN JOURNAL OF POLITICAL SCIENCE, Issue 2 2010
    James Honaker
    Applications of modern methods for analyzing data with missing values, based primarily on multiple imputation, have in the last half-decade become common in American politics and political behavior. Scholars in this subset of political science have thus increasingly avoided the biases and inefficiencies caused by ad hoc methods like listwise deletion and best guess imputation. However, researchers in much of comparative politics and international relations, and others with similar data, have been unable to do the same because the best available imputation methods work poorly with the time-series cross-section data structures common in these fields. We attempt to rectify this situation with three related developments. First, we build a multiple imputation model that allows smooth time trends, shifts across cross-sectional units, and correlations over time and space, resulting in far more accurate imputations. Second, we enable analysts to incorporate knowledge from area studies experts via priors on individual missing cell values, rather than on difficult-to-interpret model parameters. Third, because these tasks could not be accomplished within existing imputation algorithms, in that they cannot handle as many variables as needed even in the simpler cross-sectional data for which they were designed, we also develop a new algorithm that substantially expands the range of computationally feasible data types and sizes for which multiple imputation can be used. These developments also make it possible to implement the methods introduced here in freely available open source software that is considerably more reliable than existing algorithms. [source]
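
    The authors' method is implemented in the freely available R package Amelia. As a rough, non-equivalent analogue, the sketch below performs multiple imputation on a small synthetic time-series cross-section panel by adding polynomial time trends and unit indicators to scikit-learn's IterativeImputer, so that imputations can track smooth trends and shifts across units; the data, variable names and number of imputations are assumptions for illustration.

    ```python
    # Rough analogue (NOT the authors' algorithm): multiple imputation for a
    # time-series cross-section panel using scikit-learn's IterativeImputer,
    # with a cubic time polynomial and unit dummies as extra covariates.
    # Data, variable names and the number of imputations are assumed.
    import numpy as np
    import pandas as pd
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer

    rng = np.random.default_rng(0)
    units, years = ["A", "B", "C"], np.arange(1990, 2010)
    df = pd.DataFrame([(u, y) for u in units for y in years], columns=["unit", "year"])
    df["gdp"] = 1.0 + 0.05 * (df["year"] - 1990) + rng.normal(0, 0.2, len(df))
    df["polity"] = 0.3 * df["gdp"] + rng.normal(0, 0.3, len(df))
    df.loc[rng.uniform(size=len(df)) < 0.2, "gdp"] = np.nan   # 20% missing at random

    # Covariates: observed variables + cubic time polynomial + unit indicators.
    t = (df["year"] - df["year"].min()).to_numpy(float)
    X = pd.concat([df[["gdp", "polity"]],
                   pd.DataFrame({"t": t, "t2": t**2, "t3": t**3}),
                   pd.get_dummies(df["unit"], prefix="unit", dtype=float)], axis=1)

    estimates = []
    for m in range(5):                                # 5 imputed datasets
        imp = IterativeImputer(sample_posterior=True, random_state=m)
        gdp_complete = imp.fit_transform(X)[:, 0]     # column 0 is gdp
        estimates.append(np.polyfit(df["polity"], gdp_complete, 1)[0])  # toy analysis

    print("combined slope estimate:", np.mean(estimates))  # average over imputations
    ```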