Area Data (area + data)

Selected Abstracts


Space varying coefficient models for small area data

ENVIRONMETRICS, Issue 5 2003
Renato M. Assunção
Abstract Many spatial regression problems using area data require more flexible forms than the usual linear predictor for modelling the dependence of responses on covariates. One direction for doing this is to allow the coefficients to vary as smooth functions of the area's geographical location. After presenting examples from the scientific literature where these spatially varying coefficients are justified, we briefly review some of the available alternatives for this kind of modelling. We concentrate on a Bayesian approach for generalized linear models proposed by the author which uses a Markov random field to model the coefficients' spatial dependency. We show that, for normally distributed data, Gibbs sampling can be used to sample from the posterior, and we prove a result showing the equivalence between our model and other usual spatial regression models. We illustrate our approach with a number of rather complex applied problems, showing that the method is computationally feasible and provides useful insights into substantive problems. Copyright © 2003 John Wiley & Sons, Ltd. [source]
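
A minimal sketch of the Gibbs step for normally distributed data is given below. It is not the paper's implementation: it assumes a simplified model with a spatially varying intercept only, a toy 5 x 5 lattice of areas, and fixed variance hyperparameters, and it updates each coefficient from its intrinsic CAR (Markov random field) full conditional.

```r
## Sketch: Gibbs sampling for y_i ~ N(beta_i, sigma2) with an intrinsic CAR
## prior on the spatially varying intercept beta_i. Toy lattice, simulated
## data and fixed hyperparameters are illustrative assumptions only.

set.seed(1)
n  <- 25
xy <- expand.grid(row = 1:5, col = 1:5)

## Rook adjacency on the 5 x 5 lattice
A <- as.matrix(dist(xy, method = "manhattan")) == 1
m <- rowSums(A)                           # number of neighbours per area

## Simulated observations with a smooth spatial signal
beta_true <- 0.5 * (xy$row + xy$col)
y <- rnorm(n, beta_true, sd = 0.5)

sigma2 <- 0.25                            # observation variance (fixed here)
tau    <- 2                               # CAR precision (fixed here)

beta  <- rep(mean(y), n)
iters <- 2000
draws <- matrix(NA_real_, iters, n)

for (it in seq_len(iters)) {
  for (i in seq_len(n)) {
    nb_mean <- mean(beta[A[i, ]])         # average of neighbouring coefficients
    prec    <- tau * m[i] + 1 / sigma2    # full-conditional precision
    mu      <- (tau * m[i] * nb_mean + y[i] / sigma2) / prec
    beta[i] <- rnorm(1, mu, sqrt(1 / prec))
  }
  draws[it, ] <- beta
}

post_mean <- colMeans(draws[-(1:500), ])  # posterior means after burn-in
```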


Using SimBritain to Model the Geographical Impact of National Government Policies

GEOGRAPHICAL ANALYSIS, Issue 1 2007
Dimitris Ballas
In this article, we use a dynamic spatial microsimulation model of Britain to analyse the geographical impact of policies implemented in Britain over the last 10 years. In particular, we show how spatial microsimulation can be used to estimate the geographical and socio-economic impact of the following policy developments: the introduction of the minimum wage, winter fuel payments, working families tax credits, and the new child and working credits. The analysis is carried out using the SimBritain model, the product of a 3-year research project aimed at dynamically simulating urban and regional populations in Britain. SimBritain projections are based on a method that uses small-area data from past Censuses of the British population to estimate small-area data for 2001, 2011, and 2021. [source]
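
As a loose illustration of the projection idea (past census observations for a small area carried forward to 2001, 2011 and 2021), the sketch below fits a simple linear trend per ward; the wards, variable and values are invented, and SimBritain's actual projections come from a dynamic microsimulation model rather than a trend line.

```r
## Illustrative only: extrapolate a ward-level census proportion observed in
## past census years to 2001, 2011 and 2021 with a simple linear trend.
## Wards and values are invented for the example.

past <- data.frame(
  year = rep(c(1971, 1981, 1991), each = 2),
  ward = rep(c("Ward A", "Ward B"), times = 3),
  prop_unemployed = c(0.04, 0.06, 0.07, 0.09, 0.09, 0.12)
)

target_years <- c(2001, 2011, 2021)

projections <- do.call(rbind, lapply(split(past, past$ward), function(d) {
  fit <- lm(prop_unemployed ~ year, data = d)   # linear trend per ward
  data.frame(ward = d$ward[1],
             year = target_years,
             prop_unemployed = predict(fit, data.frame(year = target_years)))
}))

projections
```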


The biometry of gills of 0-group European flounder

JOURNAL OF FISH BIOLOGY, Issue 4 2000
M. G. J. Hartl
The gill surface area of 0-group, post-metamorphic Pleuronectes flesus L. was examined using digital image analysis software and expressed in relation to body mass according to the equation log Y = log a + c log W (a = 239.02; c = 0.723). The components that constitute gill area (total filament length, interlamellar space and unilateral lamellar area) were measured. Measuring the length of every filament on all eight arches showed that commonly used methods of calculation can lead to an under-estimation of total filament length by up to 24%. Direct measurements of unilateral lamellar area with digital image analysis showed that previously reported gill area data for the same species were over-estimated by as much as 58%. In addition, in this species, neglecting gill pouch asymmetry after metamorphosis can bring about a 14% over-estimation of total gill area. [source]
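
Read as Y = aW^c, the fitted relation gives a quick way to predict gill area from body mass; the snippet below assumes, as a labelling convention only since the abstract does not restate the units, that Y is total gill surface area and W is body mass.

```r
## Gill area from body mass via the fitted allometric relation
## log Y = log a + c * log W, i.e. Y = a * W^c.
## Units for Y and W are assumed for illustration.

gill_area <- function(W, a = 239.02, c = 0.723) a * W^c

gill_area(c(0.5, 1, 2, 5))   # predicted gill areas for a range of body masses
```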


Evaluation of Short-to-Medium Range Streamflow Forecasts Obtained Using an Enhanced Version of SRM

JOURNAL OF THE AMERICAN WATER RESOURCES ASSOCIATION, Issue 3 2010
Brian J. Harshburger
Harshburger, Brian J., Karen S. Humes, Von P. Walden, Brandon C. Moore, Troy R. Blandford, and Albert Rango, 2010. Evaluation of Short-to-Medium Range Streamflow Forecasts Obtained Using an Enhanced Version of SRM. Journal of the American Water Resources Association (JAWRA) 46(3):603-617. DOI: 10.1111/j.1752-1688.2010.00437.x Abstract: As demand for water continues to escalate in the western United States, so does the need for accurate streamflow forecasts. Here, we describe a methodology for generating short-to-medium range (1 to 15 days) streamflow forecasts using an enhanced version of the Snowmelt Runoff Model (SRM), snow-covered area data derived from MODIS products, data from Snow Telemetry stations, and meteorological forecasts. The methodology was tested on three mid-elevation, snowmelt-dominated basins ranging in size from 1,600 to 3,500 km2. To optimize model performance and aid operational implementation, two enhancements have been made to SRM: (1) the use of an antecedent temperature index method to track snowpack cold content, and (2) the use of both maximum and minimum critical temperatures to partition precipitation into rain, snow, or a mixture of the two. Comparison of retrospective model simulations with observed streamflow shows that the enhancements significantly improve model performance. Streamflow forecasts generated with the enhanced version of the model compare well with observed streamflow at the earlier lead times; forecast performance diminishes with lead time because of errors in the meteorological forecasts. The three basins modeled in this research are typical of many mid-elevation basins throughout the American West, so there is potential for this methodology to be applied successfully to other mountainous basins. [source]
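
The two enhancements can be sketched in a few lines of code; the threshold temperatures and the antecedent-temperature-index weighting below are placeholder values, not the calibrated parameters used in the study.

```r
## Sketch of the two SRM enhancements described above; thresholds and the
## ATI weighting are placeholders, not the study's calibrated parameters.

## (1) Antecedent temperature index: exponentially weighted running mean of
## daily air temperature, used as a proxy for snowpack cold content.
ati <- function(temps, k = 0.5, ati0 = 0) {
  out  <- numeric(length(temps))
  prev <- ati0
  for (i in seq_along(temps)) {
    prev   <- k * prev + (1 - k) * temps[i]
    out[i] <- prev
  }
  out
}

## (2) Partition daily precipitation into rain and snow using maximum and
## minimum critical temperatures, with a linear mixture in between.
partition_precip <- function(precip, temp, t_min = 0, t_max = 3) {
  frac_rain <- pmin(1, pmax(0, (temp - t_min) / (t_max - t_min)))
  data.frame(rain = precip * frac_rain,
             snow = precip * (1 - frac_rain))
}

partition_precip(precip = c(5, 5, 5), temp = c(-2, 1.5, 4))
```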


Capitation funding in the public sector

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 2 2001
Peter C. Smith
A fundamental requirement of government at all levels, national and local, is to distribute the limited funds that it wishes to spend on particular public services between geographical areas or institutions, which are effectively competitors for such funds. Increasing use is now being made of capitation methods for such purposes, in which a standard estimate of expected expenditure is attached to a citizen with given characteristics. Statistical methods are playing an important role in determining such capitations, but they give rise to profound methodological problems. This paper examines the rationale for capitation and discusses the associated methodological issues. It illustrates the issues raised with two examples taken from the UK public sector: personal social services and hospital care. Severe limitations of the data mean that small area data are used as the unit of observation, giving rise to considerable complexity in the model to be estimated. As a result, a range of methodologies, including two-stage least squares and multilevel modelling, is deployed. The paper concludes with a suggestion for an approach that would improve on current capitation methods but would require data on individuals rather than on small areas. [source]
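
A multilevel regression of the kind referred to above can be illustrated as follows; the simulated data, the variable names (per-capita spend, needs indicators, an authority grouping) and the use of lme4 are assumptions for illustration, and the paper's actual models also employ two-stage least squares to deal with endogeneity.

```r
## Illustrative small-area capitation regression: area-level needs variables
## plus a random intercept for the authority each small area belongs to.
## All data and variable names are hypothetical.

library(lme4)

set.seed(42)
areas <- data.frame(
  authority   = factor(rep(1:20, each = 10)),   # 200 areas in 20 authorities
  elderly     = runif(200, 0.05, 0.20),         # share of population aged 75+
  deprivation = rnorm(200)                      # standardised deprivation index
)
areas$spend_pc <- 400 + 2000 * areas$elderly + 60 * areas$deprivation +
  rnorm(20, sd = 30)[areas$authority] + rnorm(200, sd = 25)

fit <- lmer(spend_pc ~ elderly + deprivation + (1 | authority), data = areas)

## Capitation for a hypothetical citizen profile: fixed-effects prediction.
b <- fixef(fit)
unname(b["(Intercept)"] + b["elderly"] * 0.10 + b["deprivation"] * 0.5)
```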


The New Keynesian Model and the Euro Area Business Cycle

OXFORD BULLETIN OF ECONOMICS & STATISTICS, Issue 2 2007
Miguel Casares
Abstract This paper describes a New Keynesian model incorporating transactions-facilitating money and a time-to-build constraint into endogenous capital accumulation. The calibrated New Keynesian model performs almost as well as the estimated vector autoregressive model in replicating Euro area cyclical correlations between key variables such as output and inflation, although it fares less well in predicting the procyclical dynamics of nominal interest rates. The presence of a time-to-build requirement in the model helps to improve its fit to Euro area data, whereas the role of transactions-facilitating money is much less important. Impulse-response functions and a decomposition of variance complete the analysis. [source]
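
The moment-matching exercise behind such a comparison, computing lead/lag correlations between cyclical components of key series, can be sketched as follows; the series are simulated placeholders rather than Euro area data, and a fitted quadratic trend stands in for the business-cycle filter typically used.

```r
## Sketch: cross-correlations between cyclical components of quarterly series,
## the kind of statistic used to compare a calibrated model with a VAR.
## Simulated placeholder series; a quadratic trend is a simplification of the
## usual detrending step.

set.seed(7)
t <- 1:120                                      # 30 years of quarterly data
output    <- 0.005 * t + arima.sim(list(ar = 0.8), 120, sd = 0.01)
inflation <- 0.5 * c(0, diff(output)) + rnorm(120, sd = 0.005)

detrend <- function(x, t) residuals(lm(x ~ t + I(t^2)))

y_cyc  <- detrend(as.numeric(output), t)
pi_cyc <- detrend(inflation, t)

## Contemporaneous and lead/lag correlations between the cyclical components
ccf(y_cyc, pi_cyc, lag.max = 4, plot = FALSE)
```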


Implementing Spatial Data Analysis Software Tools in R

GEOGRAPHICAL ANALYSIS, Issue 1 2006
Roger Bivand
This article reports on work in progress on the implementation of functions for spatial statistical analysis, in particular of lattice/area data, in the R language environment. The underlying spatial weights matrix classes, as well as methods for deriving them from data held in commonly used geographical information systems (handled using other contributed R packages), are presented. Since the initial release of some functions in 2001, and the release of the spdep package in 2002, experience has been gained in the use of various functions. The topics covered are the ingestion of positional data; exploratory data analysis of positional, attribute, and neighborhood data; and hypothesis testing of autocorrelation for univariate data. The article also provides information about community building around the use of R for analyzing spatial data. [source]
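
A minimal workflow of the kind the article describes might look as follows; a regular grid stands in for real polygon data, so cell2nb() replaces the poly2nb() call one would use with a GIS layer, and the attribute tested is simulated.

```r
## Minimal spdep workflow: build a neighbour list, convert it to
## row-standardised weights, and test an attribute for spatial autocorrelation.
## A regular grid is used here in place of real polygon data.

library(spdep)

nb <- cell2nb(7, 7, type = "rook")   # rook neighbours on a 7 x 7 lattice
lw <- nb2listw(nb, style = "W")      # row-standardised spatial weights

## Simulated attribute with built-in positive spatial dependence
set.seed(123)
e <- rnorm(49)
x <- e + lag.listw(lw, e)            # noise plus its spatial lag

moran.test(x, lw)                    # Moran's I under the randomisation assumption
```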