Computational Burden (computational + burden)


Selected Abstracts


Utility Functions for Ceteris Paribus Preferences

COMPUTATIONAL INTELLIGENCE, Issue 2 2004
Michael McGeachie
Ceteris paribus (all-else equal) preference statements concisely represent preferences over outcomes or goals in a way natural to human thinking. Although deduction in a logic of such statements can compare the desirability of specific conditions or goals, many decision-making methods require numerical measures of degrees of desirability. To permit ceteris paribus specifications of preferences while providing quantitative comparisons, we present an algorithm that compiles a set of qualitative ceteris paribus preferences into an ordinal utility function. Our algorithm is complete for a finite universe of binary features. Constructing the utility function can, in the worst case, take time exponential in the number of features, but common independence conditions reduce the computational burden. We present heuristics using utility independence and constraint-based search to obtain efficient utility functions. [source]
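
A minimal sketch of the compilation idea on a toy case, not McGeachie's algorithm: with a handful of binary features, each ceteris paribus statement links every pair of outcomes that differ only in that feature, and ranking outcomes by their depth in the resulting preference graph yields an ordinal utility. The feature names and preference statements below are invented for illustration.

```python
# Illustrative sketch: compile simple ceteris paribus preferences over binary
# features into an ordinal utility by ranking outcomes in the induced graph.
from itertools import product
from functools import lru_cache

features = ["heated", "cheap", "quiet"]          # hypothetical binary features
# Each statement (feature, preferred_value) reads "feature = value is preferred,
# all else being equal".
statements = [("heated", 1), ("cheap", 1), ("quiet", 1)]

outcomes = list(product([0, 1], repeat=len(features)))   # 2^n outcomes
index = {f: i for i, f in enumerate(features)}

# Edges point from the less-preferred outcome to the more-preferred one.
succ = {o: [] for o in outcomes}
for f, pref in statements:
    i = index[f]
    for o in outcomes:
        if o[i] != pref:
            better = o[:i] + (pref,) + o[i + 1:]
            succ[o].append(better)

@lru_cache(maxsize=None)
def utility(o):
    """Ordinal utility = length of the longest chain of worse outcomes below o."""
    preds = [p for p in outcomes if o in succ[p]]
    return 0 if not preds else 1 + max(utility(p) for p in preds)

for o in sorted(outcomes, key=utility):
    print(o, utility(o))
```

The full enumeration of outcomes mirrors the worst-case exponential cost noted in the abstract; the independence conditions and heuristics the paper describes are what avoid that blow-up in practice.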


Are Points the Better Graphics Primitives?

COMPUTER GRAPHICS FORUM, Issue 3 2001
Markus Gross
Since the early days of graphics, the computer-based representation of three-dimensional geometry has been one of the core research fields. Today, various sophisticated geometric modelling techniques, including NURBS and implicit surfaces, allow the creation of 3D graphics models with increasingly complex shape. In spite of these methods, the triangle has survived over decades as the king of graphics primitives, striking the right balance between descriptive power and computational burden. As a consequence, today's consumer graphics hardware is heavily tailored for high-performance triangle processing. In addition, a new generation of geometry processing methods, including hierarchical representations, geometric filtering, and feature detection, fosters the concept of triangle meshes for graphics modelling. Unlike triangles, points have, surprisingly, been neglected as a graphics primitive. Although they have been included in APIs for many years, it is only recently that point samples have experienced a renaissance in computer graphics. Conceptually, points provide a mere discretization of geometry without explicit storage of topology. Thus, point samples reduce the representation to the essentials needed for rendering and enable us to generate highly optimized object representations. Although the loss of topology poses great challenges for graphics processing, the latest generation of algorithms features high-performance rendering, point/pixel shading, anisotropic texture mapping, and advanced signal processing of point-sampled geometry. This talk will give an overview of how recent research results in the processing of triangles and points are changing our traditional way of thinking about surface representations in computer graphics - and will discuss the question: Are Points the Better Graphics Primitives? [source]


A novel selectivity technique for high impedance arcing fault detection in compensated MV networks

EUROPEAN TRANSACTIONS ON ELECTRICAL POWER, Issue 4 2008
Nagy I. Elkalashy
Abstract In this paper, the initial transients due to arc reignitions associated with high impedance faults caused by leaning trees are extracted using the discrete wavelet transform (DWT). In this way, the fault occurrence is localized. The feature extraction is carried out for the phase quantities corresponding to the 12.5–6.25 kHz frequency band. Detection security is enhanced because the DWT captures the periodicity of these transients. Selection of the faulty feeder is based on a novel technique in which the polarity of a power quantity is examined. This power is mathematically processed by multiplying the DWT detail coefficients of the phase voltage and current for each feeder; its polarity identifies the faulty feeder. In order to reduce the computational burden of the technique, the extraction of the fault features from the residual components is examined. The same methodology of computing the power is applied to the residual voltage and current detail coefficients, where the proposed algorithm performs best. Test cases provide evidence of the efficacy of the proposed technique. Copyright © 2007 John Wiley & Sons, Ltd. [source]
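
A minimal sketch of the power-polarity idea, assuming the PyWavelets package as a stand-in DWT implementation: multiply same-scale detail coefficients of a feeder's voltage and current and inspect the sign of the summed product. The signals, wavelet, decomposition level, and the sign convention for "faulty" are illustrative assumptions, not the paper's settings.

```python
import numpy as np
import pywt

fs = 100_000                                   # assumed sampling rate [Hz]
t = np.arange(0, 0.2, 1 / fs)
v = np.sin(2 * np.pi * 50 * t)                 # stand-in phase voltage
i = 0.1 * np.sin(2 * np.pi * 50 * t - 0.3)     # stand-in feeder current

def detail_power(voltage, current, wavelet="db4", level=3):
    """Sum of products of the level-`level` detail coefficients."""
    dv = pywt.wavedec(voltage, wavelet, level=level)[1]   # coarsest detail band
    di = pywt.wavedec(current, wavelet, level=level)[1]
    n = min(len(dv), len(di))
    return float(np.sum(dv[:n] * di[:n]))

p = detail_power(v, i)
# The polarity convention below is illustrative; the paper defines which sign
# marks the faulty feeder.
print("detail power:", p, "-> candidate faulty feeder" if p > 0 else "-> healthy feeder")
```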


Colon segmentation and colonic polyp detection using cellular neural networks and three-dimensional template matching

EXPERT SYSTEMS, Issue 5 2009
Niyazi Kilic
Abstract: In this study, an automatic three-dimensional computer-aided detection system for colonic polyps was developed. Computer-aided detection for computed tomography colonography aims at facilitating the detection of colonic polyps. First, the colon regions of the whole computed tomography images were carefully segmented to reduce the computational burden and prevent false positive detection. In this process, the colon regions were extracted by using a cellular neural network and then the regions of interest were determined. In order to improve the segmentation performance of the study, the weights in the cellular neural network were calculated by three heuristic optimization techniques, namely genetic algorithm, differential evolution and artificial immune system. Afterwards, a three-dimensional polyp template model was constructed to detect polyps on the segmented regions of interest. At the end of the template matching process, the volumes geometrically similar to the template were enhanced. [source]


Resampling-based multiple hypothesis testing procedures for genetic case-control association studies

GENETIC EPIDEMIOLOGY, Issue 6 2006
Bingshu E. Chen
Abstract In case-control studies of unrelated subjects, gene-based hypothesis tests consider whether any tested feature in a candidate gene (single nucleotide polymorphisms (SNPs), haplotypes, or both) is associated with disease. Standard statistical tests are available that control the false-positive rate at the nominal level over all polymorphisms considered. However, more powerful tests can be constructed that use permutation resampling to account for correlations between polymorphisms and test statistics. A key question is whether the gain in power is large enough to justify the computational burden. We compared the computationally simple Simes Global Test to the min-P test, which considers the permutation distribution of the minimum p-value from marginal tests of each SNP. In simulation studies incorporating empirical haplotype structures in 15 genes, the min-P test controlled the type I error, and was modestly more powerful than the Simes test, by 2.1 percentage points on average. When disease susceptibility was conferred by a haplotype, the min-P test sometimes, but not always, under-performed haplotype analysis. A resampling-based omnibus test combining the min-P and haplotype frequency test controlled the type I error, and closely tracked the more powerful of the two component tests. This test achieved consistent gains in power (5.7 percentage points on average), compared to a simple Bonferroni test of Simes and haplotype analysis. Using data from the Shanghai Biliary Tract Cancer Study, the advantages of the newly proposed omnibus test were apparent in a population-based study of bile duct cancer and polymorphisms in the prostaglandin-endoperoxide synthase 2 (PTGS2) gene. Genet. Epidemiol. 2006. Published 2006 Wiley-Liss, Inc. [source]
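
A hedged NumPy/SciPy sketch of a min-P style permutation test on simulated data: case/control labels are permuted, the marginal tests are recomputed for every SNP, and the observed minimum p-value is referred to its permutation distribution. The simple two-sample z-test used per SNP is a placeholder for whatever marginal test is preferred.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, m = 400, 10                                  # subjects, SNPs
geno = rng.integers(0, 3, size=(n, m))          # 0/1/2 minor-allele counts (simulated)
status = rng.integers(0, 2, size=n)             # 0 = control, 1 = case (simulated)

def marginal_pvalues(y, X):
    """Two-sample z-test on allele counts for each SNP (illustrative only)."""
    cases, controls = X[y == 1], X[y == 0]
    se = np.sqrt(cases.var(axis=0, ddof=1) / len(cases)
                 + controls.var(axis=0, ddof=1) / len(controls))
    z = (cases.mean(axis=0) - controls.mean(axis=0)) / se
    return 2 * stats.norm.sf(np.abs(z))

obs_min_p = marginal_pvalues(status, geno).min()

B = 2000
perm_min_p = np.empty(B)
for b in range(B):
    perm_min_p[b] = marginal_pvalues(rng.permutation(status), geno).min()

# Gene-based permutation p-value: how often a permuted minimum p-value is as
# small as the observed one.
print("min-P p-value:", (1 + np.sum(perm_min_p <= obs_min_p)) / (B + 1))
```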


Monte Carlo probabilistic sensitivity analysis for patient level simulation models: efficient estimation of mean and variance using ANOVA

HEALTH ECONOMICS, Issue 10 2007
Anthony O'Hagan
Abstract Probabilistic sensitivity analysis (PSA) is required to account for uncertainty in cost-effectiveness calculations arising from health economic models. The simplest way to perform PSA in practice is by Monte Carlo methods, which involves running the model many times using randomly sampled values of the model inputs. However, this can be impractical when the economic model takes appreciable amounts of time to run. This situation arises, in particular, for patient-level simulation models (also known as micro-simulation or individual-level simulation models), where a single run of the model simulates the health care of many thousands of individual patients. The large number of patients required in each run to achieve accurate estimation of cost-effectiveness means that only a relatively small number of runs is possible. For this reason, it is often said that PSA is not practical for patient-level models. We develop a way to reduce the computational burden of Monte Carlo PSA for patient-level models, based on the algebra of analysis of variance. Methods are presented to estimate the mean and variance of the model output, with formulae for determining optimal sample sizes. The methods are simple to apply and will typically reduce the computational demand very substantially. Copyright © 2006 John Wiley & Sons, Ltd. [source]
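
A toy sketch of the two-level (ANOVA) idea, not the paper's formulae: each outer Monte Carlo draw of the model inputs triggers one patient-level run, and the variance of the run means is corrected for within-run patient sampling noise to isolate the variance due to parameter uncertainty. The "model" below is a placeholder function.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_net_benefit(theta, n_patients):
    """Placeholder patient-level model: one net benefit per simulated patient."""
    return rng.normal(loc=theta, scale=5.0, size=n_patients)

N, n = 200, 500                                  # parameter draws, patients per run
thetas = rng.normal(loc=10.0, scale=2.0, size=N) # sampled model inputs (assumed prior)

run_means = np.empty(N)
run_vars = np.empty(N)
for i, th in enumerate(thetas):
    y = simulate_net_benefit(th, n)
    run_means[i] = y.mean()
    run_vars[i] = y.var(ddof=1)

overall_mean = run_means.mean()
# ANOVA identity: Var(run mean) = between-parameter variance + within-run variance / n,
# so the parameter-uncertainty (PSA) variance is estimated by subtraction.
psa_variance = run_means.var(ddof=1) - run_vars.mean() / n

print("mean net benefit:", overall_mean)
print("parameter-uncertainty variance:", max(psa_variance, 0.0))
```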


Appropriate vertical discretization of Richards' equation for two-dimensional watershed-scale modelling

HYDROLOGICAL PROCESSES, Issue 1 2004
Charles W. Downer
Abstract A number of watershed-scale hydrological models include Richards' equation (RE) solutions, but the literature is sparse on information as to the appropriate application of RE at the watershed scale. In most published applications of RE in distributed watershed-scale hydrological modelling, coarse vertical resolutions are used to decrease the computational burden. Compared to point- or field-scale studies, application at the watershed scale is complicated by diverse runoff production mechanisms, groundwater effects on runoff production, runon phenomena and heterogeneous watershed characteristics. An essential element of the numerical solution of RE is that the solution converges as the spatial resolution increases. Spatial convergence studies can be used to identify the proper resolution that accurately describes the solution with maximum computational efficiency, when using physically realistic parameter values. In this study, spatial convergence studies are conducted using the two-dimensional, distributed-parameter, gridded surface subsurface hydrological analysis (GSSHA) model, which solves RE to simulate vadose zone fluxes. Tests to determine if the required discretization is strongly a function of dominant runoff production mechanism are conducted using data from two very different watersheds, the Hortonian Goodwin Creek Experimental Watershed and the non-Hortonian Muddy Brook watershed. Total infiltration, stream flow and evapotranspiration for the entire simulation period are used to compute comparison statistics. The influences of upper and lower boundary conditions on the solution accuracy are also explored. Results indicate that to simulate hydrological fluxes accurately at both watersheds small vertical cell sizes, of the order of 1 cm, are required near the soil surface, but not throughout the soil column. The appropriate choice of approximations for calculating the near soil-surface unsaturated hydraulic conductivity can yield modest increases in the required cell size. Results for both watersheds are quite similar, even though the soils and runoff production mechanisms differ greatly between the two catchments. Copyright © 2003 John Wiley & Sons, Ltd. [source]


Efficient sampling for spatial uncertainty quantification in multibody system dynamics applications

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 5 2009
Kyle P. Schmitt
Abstract We present two methods for efficiently sampling the response (trajectory space) of multibody systems operating under spatial uncertainty, when the latter is assumed to be representable with Gaussian processes. In this case, the dynamics (time evolution) of the multibody systems depends on spatially indexed uncertain parameters that span infinite-dimensional spaces. This places a heavy computational burden on existing methodologies, an issue addressed herein with two new conditional sampling approaches. When a single instance of the uncertainty is needed in the entire domain, we use a fast Fourier transform technique. When the initial conditions are fixed and the path distribution of the dynamical system is relatively narrow, we use an incremental sampling approach that is fast and has a small memory footprint. Both methods produce the same distributions as the widely used Cholesky-based approaches. We illustrate this convergence at a smaller computational effort and memory cost for a simple non-linear vehicle model. Copyright © 2009 John Wiley & Sons, Ltd. [source]
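
A hedged sketch of the fast-Fourier-transform route to sampling a stationary Gaussian process on a regular 1-D grid (circulant embedding), the kind of single-instance sampling over the whole domain described above. The squared-exponential covariance, grid, and parameters are illustrative choices, and tiny negative eigenvalues from the embedding are simply clipped.

```python
import numpy as np

rng = np.random.default_rng(2)

m, dx, ell, sigma2 = 512, 0.02, 0.5, 1.0         # grid size, spacing, GP parameters
lags = dx * np.arange(m)
r = sigma2 * np.exp(-0.5 * (lags / ell) ** 2)    # covariance at the grid lags

# Embed the covariance vector in a circulant of size M = 2(m - 1).
c = np.concatenate([r, r[-2:0:-1]])
M = len(c)
lam = np.fft.fft(c).real                         # circulant eigenvalues
lam = np.clip(lam, 0.0, None)                    # guard against round-off negatives

xi = rng.standard_normal(M) + 1j * rng.standard_normal(M)
y = np.fft.fft(np.sqrt(lam) * xi) / np.sqrt(M)

# Real and imaginary parts are two independent GP samples; keep the first m points.
sample_a, sample_b = y.real[:m], y.imag[:m]
print(sample_a[:5])
```

The cost is one FFT of length roughly 2m, which is what makes this route attractive when a single realisation of the spatial uncertainty is needed over the entire domain.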


A reduced-order simulated annealing approach for four-dimensional variational data assimilation in meteorology and oceanography

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 11 2008
I. Hoteit
Abstract Four-dimensional variational data assimilation in meteorology and oceanography suffers from the presence of local minima in the cost function. These local minima arise when the system under study is strongly nonlinear. The number of local minima further increases dramatically with the length of the assimilation period and often renders the solution to the problem intractable. Global optimization methods are therefore needed to resolve this problem. However, the huge computational burden makes the application of these sophisticated techniques infeasible for large variational data assimilation systems. In this study, a Simulated Annealing (SA) algorithm, complemented with an order reduction of the control vector, is used to tackle this problem. SA is a very powerful tool for combinatorial minimization in the presence of several local minima, at the cost of increased execution time. Order reduction is then used to reduce the dimension of the search space in order to speed up the convergence rate of the SA algorithm. This is achieved through a proper orthogonal decomposition. The new approach was implemented with a realistic eddy-permitting configuration of the Massachusetts Institute of Technology general circulation model (MITgcm) of the tropical Pacific Ocean. Numerical results indicate that the reduced-order SA approach was able to efficiently reduce the cost function with a reasonable number of function evaluations. Copyright © 2008 John Wiley & Sons, Ltd. [source]
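
An illustrative sketch of the reduced-order search, not the MITgcm experiment: simulated annealing explores a few coefficients of leading basis vectors rather than the full control vector. The quadratic cost function, the random stand-in "POD" basis, and the cooling schedule are all placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)

n_full, n_red = 200, 5                          # full and reduced dimensions
basis, _ = np.linalg.qr(rng.standard_normal((n_full, n_red)))   # stand-in POD basis
target = rng.standard_normal(n_full)

def cost(alpha):
    """Misfit of the reconstructed control vector (placeholder cost function)."""
    return float(np.sum((basis @ alpha - target) ** 2))

alpha = np.zeros(n_red)
cur = cost(alpha)
best, best_cost = alpha.copy(), cur
T = 1.0
for k in range(5000):
    cand = alpha + 0.1 * rng.standard_normal(n_red)      # random neighbour in reduced space
    c = cost(cand)
    if c < cur or rng.random() < np.exp(-(c - cur) / T):  # Metropolis acceptance
        alpha, cur = cand, c
        if cur < best_cost:
            best, best_cost = alpha.copy(), cur
    T *= 0.999                                            # geometric cooling

print("reduced-space minimum cost:", best_cost)
```

Searching over five coefficients instead of two hundred control variables is exactly the trade the abstract describes: fewer function evaluations per unit of progress, at the price of restricting the solution to the span of the retained basis.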


Defining and optimizing algorithms for neighbouring particle identification in SPH fluid simulations

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 6 2008
G. Viccione
Abstract Lagrangian particle methods such as smoothed particle hydrodynamics (SPH) are very demanding in terms of computing time for large domains. Since the numerical integration of the governing equations is only carried out for each particle on a restricted number of neighbouring particles located inside a cut-off radius rc, a substantial part of the computational burden depends on the actual search procedure; it is therefore vital that efficient methods are adopted for such a search. The cut-off radius is indeed much smaller than the typical domain size; hence, the number of neighbouring particles is only a small fraction of the total number. Straightforward determination of which particles are inside the interaction range requires the computation of all pair-wise distances, a procedure whose computational time would be impractical or totally prohibitive for large problems. Two main strategies have been developed in the past in order to reduce the unnecessary computation of distances: the first based on dynamically storing each particle's neighbourhood list (Verlet list) and the second based on a framework of fixed cells. The paper presents the results of a numerical sensitivity study on the efficiency of the two procedures as a function of parameters such as the Verlet size and the cell dimensions. An insight is given into the relative computational burden; a discussion of the relative merits of the different approaches is also given, and some suggestions are provided on the computational and data structures of the neighbourhood-search part of SPH codes. Copyright © 2008 John Wiley & Sons, Ltd. [source]
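
A minimal sketch of the fixed-cell strategy discussed above: particles are binned into cells of side rc, and each particle is tested only against particles in its own and adjacent cells, so the quadratic all-pairs distance computation is avoided. The 2-D domain, particle count, and cut-off value are illustrative.

```python
import numpy as np
from collections import defaultdict
from itertools import product

rng = np.random.default_rng(4)
rc = 0.05                                      # cut-off radius
pos = rng.random((5000, 2))                    # particle positions in [0, 1)^2

# Assign each particle to a cell of side rc.
cells = defaultdict(list)
for idx, p in enumerate(pos):
    cells[tuple((p // rc).astype(int))].append(idx)

def neighbours(i):
    """Indices of particles within rc of particle i (excluding i itself)."""
    ci = tuple((pos[i] // rc).astype(int))
    out = []
    for off in product((-1, 0, 1), repeat=2):          # own cell plus adjacent cells
        for j in cells.get((ci[0] + off[0], ci[1] + off[1]), []):
            if j != i and np.sum((pos[i] - pos[j]) ** 2) < rc * rc:
                out.append(j)
    return out

print("neighbours of particle 0:", neighbours(0))
```

A Verlet-list variant would instead cache each particle's neighbourhood over a slightly enlarged radius and refresh it only every few time steps; the paper's sensitivity study is about tuning exactly these cell and list sizes.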


Discrete time adaptive control for a MEMS gyroscope

INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 6 2005
Sungsu Park
Abstract This paper presents a discrete time version of the observer-based adaptive control system for micro-electro-mechanical systems gyroscopes, which can be implemented using digital processors. A stochastic analysis of this control algorithm is developed and it shows that the estimates of the angular rate and the fabrication imperfections are biased due to the signal discretization errors in the feedforward control path introduced by the sampler and holder. Thus, a two-rate discrete time control is proposed as a compromise between the measurement biases and the computational burden imposed on the controller. The convergence analysis of this algorithm is also conducted and an analysis method is developed for determining the trade-off between the controller sampling frequency and the magnitude of the angular rate estimate biased errors. All convergence and stochastic properties of a continuous time adaptive control are preserved, and this analysis is verified with computer simulations. Copyright © 2005 John Wiley & Sons, Ltd. [source]


Dynamic reduction of a CH4/air chemical mechanism appropriate for investigating vortex–flame interactions

INTERNATIONAL JOURNAL OF CHEMICAL KINETICS, Issue 4 2007
Shaheen R. Tonse
This paper describes two methods, piecewise reusable implementation of solution mapping (PRISM) and dynamic steady-state approximation (DYSSA), in which chemistry is reduced dynamically to lower the computational burden in combustion simulations. Each method utilizes the large range in species timescales to reduce the dimensionality to the number of species with slow timescales. The methods are applied within a framework that uses hypercubes to partition multidimensional chemical composition space, where each chemical species concentration, plus temperature, is represented by an axis in space. The dimensionality of the problem is reduced uniquely in each hypercube, but the dimensionality of chemical composition space is not reduced. The dimensionality reduction is dynamic and is different for different hypercubes, thereby escaping the restrictions of global methods in which reductions must be valid for all chemical mixtures. PRISM constructs polynomial equations in each hypercube, replacing the chemical kinetic ordinary differential equation (ODE) system with a set of quadratic polynomials with terms related to the number of species with slow timescales. Earlier versions of PRISM were applied to smaller chemical mechanisms and used all chemical species concentrations as terms. DYSSA is a dynamic treatment of the steady-state approximation and uses the fast–slow timescale separation to determine the set of steady-state species in each hypercube. A reduced number of chemical kinetic ODEs are integrated rather than the original full set. PRISM and DYSSA are evaluated for simulations of a pair of counterrotating vortices interacting with a premixed CH4/air laminar flame. DYSSA is sufficiently accurate for use in combustion simulations, and when relative errors are less than 1.0%, speedups on the order of 3 are observed. PRISM does not perform as well as DYSSA with respect to accuracy and efficiency. Although the polynomial evaluation that replaces the ODE solver is sufficiently fast, polynomials are not reused sufficiently to enable their construction cost to be recovered. © 2007 Wiley Periodicals, Inc. Int J Chem Kinet 39: 204–220, 2007 [source]


Surrogate-based infill optimization applied to electromagnetic problems

INTERNATIONAL JOURNAL OF RF AND MICROWAVE COMPUTER-AIDED ENGINEERING, Issue 5 2010
I. Couckuyt
Abstract The increasing use of expensive computer simulations in engineering places a serious computational burden on associated optimization problems. Surrogate-based optimization becomes standard practice in analyzing such expensive black-box problems. This article discusses several approaches that use surrogate models for optimization and highlights one sequential design approach in particular, namely, expected improvement. The expected improvement approach is demonstrated on two electromagnetic problems, namely, a microwave filter and a textile antenna. © 2010 Wiley Periodicals, Inc. Int J RF and Microwave CAE, 2010. [source]
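
A hedged sketch of the expected-improvement criterion for minimisation: given a surrogate's predictive mean and standard deviation at candidate designs, EI trades off a low predicted objective against high predictive uncertainty. The surrogate model itself (e.g. a kriging/Gaussian-process fit) is not shown, and the numbers are invented.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """EI of candidates with predictive mean mu and std sigma, for minimisation."""
    sigma = np.maximum(sigma, 1e-12)
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Example: pick the next expensive simulation from three candidate designs.
mu = np.array([0.8, 1.1, 0.95])        # surrogate means (assumed values)
sigma = np.array([0.05, 0.30, 0.15])   # surrogate standard deviations (assumed)
f_best = 0.9                           # best objective observed so far
ei = expected_improvement(mu, sigma, f_best)
print("EI per candidate:", ei, "-> evaluate candidate", int(np.argmax(ei)))
```

In a sequential design loop, the candidate with the largest EI is simulated, the surrogate is refitted, and the process repeats, which is how the expensive electromagnetic solver is kept out of the inner optimisation loop.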


The SCoD Model: Analyzing Durations with a Semiparametric Copula Approach

INTERNATIONAL REVIEW OF FINANCE, Issue 1-2 2005
CORNELIA SAVU
ABSTRACT This paper applies a new methodology for modeling order durations of ultra-high-frequency data using copulas. While the class of common Autoregressive Conditional Duration models is characterized by strict parameterizations and a high computational burden, the semiparametric copula approach proposed here offers more flexibility in modeling the dynamic duration process by separating the marginal distributions of waiting times from their temporal dependence structure. Comparing both frameworks in terms of their density forecasting abilities, the Semiparametric Copula Duration model clearly shows a better performance. [source]


Fast algorithm for the solution of large-scale non-negativity-constrained least squares problems

JOURNAL OF CHEMOMETRICS, Issue 10 2004
Mark H. Van Benthem
Abstract Algorithms for multivariate image analysis and other large-scale applications of multivariate curve resolution (MCR) typically employ constrained alternating least squares (ALS) procedures in their solution. The solution to a least squares problem under general linear equality and inequality constraints can be reduced to the solution of a non-negativity-constrained least squares (NNLS) problem. Thus the efficiency of the solution to any constrained least squares problem rests heavily on the underlying NNLS algorithm. We present a new NNLS solution algorithm that is appropriate to large-scale MCR and other ALS applications. Our new algorithm rearranges the calculations in the standard active set NNLS method on the basis of combinatorial reasoning. This rearrangement serves to reduce substantially the computational burden required for NNLS problems having large numbers of observation vectors. Copyright © 2005 John Wiley & Sons, Ltd. [source]
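
A minimal MCR-ALS sketch that uses SciPy's standard active-set NNLS solver as a stand-in for the fast combinatorial algorithm described above: concentrations and spectra are alternately estimated under non-negativity so that D ≈ C S^T. The simulated data dimensions and random initialisation are illustrative.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(5)

n_pixels, n_channels, n_comp = 300, 50, 3
C_true = rng.random((n_pixels, n_comp))
S_true = rng.random((n_channels, n_comp))
D = C_true @ S_true.T + 0.01 * rng.standard_normal((n_pixels, n_channels))

S = rng.random((n_channels, n_comp))            # random initial spectra
for _ in range(30):                             # ALS iterations
    # For each pixel solve S @ c ~ D[j, :]; for each channel solve C @ s ~ D[:, k].
    C = np.array([nnls(S, d)[0] for d in D])
    S = np.array([nnls(C, d)[0] for d in D.T])

print("residual norm:", np.linalg.norm(D - C @ S.T))
```

Each outer iteration here calls the NNLS solver once per observation vector, which is exactly why the per-call efficiency gain targeted by the paper matters at image scale.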


A systematic evaluation of the benefits and hazards of variable selection in latent variable regression. Part I. Search algorithm, simulations, theory

JOURNAL OF CHEMOMETRICS, Issue 7 2002
Abstract Variable selection is an extensively studied problem in chemometrics and in the area of quantitative structure–activity relationships (QSARs). Many search algorithms have been compared so far. Less well studied is the influence of different objective functions on the prediction quality of the selected models. This paper investigates the performance of different cross-validation techniques as objective function for variable selection in latent variable regression. The results are compared in terms of predictive ability, model size (number of variables) and model complexity (number of latent variables). It will be shown that leave-multiple-out cross-validation with a large percentage of data left out performs best. Since leave-multiple-out cross-validation is computationally expensive, a very efficient tabu search algorithm is introduced to lower the computational burden. The tabu search algorithm needs no user-defined operational parameters and optimizes the variable subset and the number of latent variables simultaneously. Copyright © 2002 John Wiley & Sons, Ltd. [source]


Feedback control of dissipative PDE systems using adaptive model reduction

AICHE JOURNAL, Issue 4 2009
Amit Varshney
Abstract The problem of feedback control of spatially distributed processes described by highly dissipative partial differential equations (PDEs) is considered. Typically, this problem is addressed through model reduction, where finite dimensional approximations to the original infinite dimensional PDE system are derived and used for controller design. The key step in this approach is the computation of basis functions that are subsequently utilized to obtain finite dimensional ordinary differential equation (ODE) models using the method of weighted residuals. A common approach to this task is the Karhunen-Loève expansion combined with the method of snapshots. To circumvent the issue of a priori availability of a sufficiently large ensemble of PDE solution data, the focus is on the recursive computation of eigenfunctions as additional data from the process becomes available. Initially, an ensemble of eigenfunctions is constructed based on a relatively small number of snapshots, and the covariance matrix is computed. The dominant eigenspace of this matrix is then utilized to compute the empirical eigenfunctions required for model reduction. This dominant eigenspace is recomputed with the addition of each snapshot, with a possible increase or decrease in its dimensionality; because this dimensionality is small, the computational burden is relatively small. The proposed approach is applied to representative examples of dissipative PDEs, with both linear and nonlinear spatial differential operators, to demonstrate the effectiveness of the proposed methodology. © 2009 American Institute of Chemical Engineers AIChE J, 2009 [source]
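
A hedged NumPy sketch of the recursive eigenspace idea: keep a low-rank factor of the snapshot covariance, append each new snapshot, and re-truncate with a thin SVD of the small augmented factor, so the cost per update stays modest. The state dimension, retained rank, and synthetic snapshots are illustrative, and this is a generic incremental low-rank update rather than the authors' exact scheme.

```python
import numpy as np

rng = np.random.default_rng(6)
d, r = 1000, 5                                   # state dimension, retained rank

# Initial small ensemble of snapshots (synthetic, with a few dominant directions).
snapshots = rng.standard_normal((d, 8)) @ np.diag([5, 4, 3, 2, 1, 0.1, 0.1, 0.1])
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
B = U[:, :r] * s[:r]                             # low-rank factor: B @ B.T ~ covariance

def add_snapshot(B, x, r):
    """Fold a new snapshot x into the rank-r factor B (thin SVD of a d x (r+1) matrix)."""
    augmented = np.column_stack([B, x])
    U, s, _ = np.linalg.svd(augmented, full_matrices=False)
    return U[:, :r] * s[:r]

for _ in range(20):                              # new data arriving from the process
    x = rng.standard_normal(d)
    B = add_snapshot(B, x, r)

eigenfunctions = B / np.linalg.norm(B, axis=0)   # empirical basis for model reduction
print("basis shape:", eigenfunctions.shape)
```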


Design of flexible reduced kinetic mechanisms

AICHE JOURNAL, Issue 11 2001
Avinash R. Sirdeshpande
Reduced mechanisms are often used in place of detailed chemistry because the computational burden of including all the species continuity equations in the reactor model is unreasonably high. Contemporary reduction techniques produce mechanisms that depend strongly on the nominal set of problem parameters for which the reduction is carried out. Effects of variability in these parameters on the reduced mechanism are the focus of this work. The range of validity of a reduced mechanism is determined for variations in initial conditions. Both sampling approaches and quantitative measures of feasibility, such as the flexibility index and the convex hull formulation, are employed. The inverse problem of designing a reduced mechanism that covers the desired range of initial conditions is addressed using a multiperiod approach. The effect of the value of a user-defined tolerance parameter, which determines whether the predictions made by the reduced mechanism are acceptable, is also assessed. The analytical techniques are illustrated with examples from the literature. [source]


Single-warehouse multi-retailer inventory systems with full truckload shipments

NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 5 2009
Yue Jin
Abstract We consider a multi-stage inventory system composed of a single warehouse that receives a single product from a single supplier and replenishes the inventory of n retailers through direct shipments. Fixed costs are incurred for each truck dispatched and all trucks have the same capacity limit. Costs are stationary, or more generally monotone as in Lippman (Management Sci 16, 1969, 118–138). Demands for the n retailers over a planning horizon of T periods are given. The objective is to find the shipment quantities over the planning horizon to satisfy all demands at minimum system-wide inventory and transportation costs without backlogging. Using the structural properties of optimal solutions, we develop (1) an O(T²) algorithm for the single-stage dynamic lot sizing problem; (2) an O(T³) algorithm for the case of a single-warehouse single-retailer system; and (3) a nested shortest-path algorithm for the single-warehouse multi-retailer problem that runs in polynomial time for a given number of retailers. To overcome the computational burden when the number of retailers is large, we propose aggregated and disaggregated Lagrangian decomposition methods that make use of the structural properties and the efficient single-stage algorithm. Computational experiments show the effectiveness of these algorithms and the gains associated with coordinated versus decentralized systems. Finally, we show that the decentralized solution is asymptotically optimal. © 2009 Wiley Periodicals, Inc. Naval Research Logistics 2009 [source]
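
An illustrative O(T²) dynamic-programming sketch in the spirit of the single-stage result: assuming each order covers a consecutive block of periods (a zero-inventory-ordering style property, taken here as given), the recursion chooses the last order period and pays a fixed cost per truck of capacity cap plus linear holding costs. The data and cost parameters are invented; this is not the paper's exact algorithm.

```python
import math

demand = [30, 10, 45, 0, 25, 60, 15]    # demand per period (hypothetical)
truck_cost, cap, hold = 100.0, 40, 1.0  # cost per truck, truck capacity, holding cost per unit-period

T = len(demand)
INF = float("inf")
f = [0.0] + [INF] * T                   # f[t] = min cost to satisfy periods 1..t

for t in range(1, T + 1):
    q, holding = 0, 0.0
    for s in range(t, 0, -1):           # last order placed in period s, covering s..t
        holding += q                    # demand already accumulated is held one more period
        q += demand[s - 1]
        trucks = math.ceil(q / cap) if q > 0 else 0
        f[t] = min(f[t], f[s - 1] + truck_cost * trucks + hold * holding)

print("minimum total cost:", f[T])
```

The two nested loops with running sums give the O(T²) count of candidate order blocks; the stepwise ceil(q/cap) term is what models full-truckload dispatch costs.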


SCALE MIXTURES DISTRIBUTIONS IN STATISTICAL MODELLING

AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 2 2008
S.T. Boris Choy
Summary This paper presents two types of symmetric scale mixture probability distributions, which include the normal, Student t, Pearson Type VII, variance gamma, exponential power, uniform power and generalized t (GT) distributions. Expressing a symmetric distribution in scale mixture form enables efficient Bayesian Markov chain Monte Carlo (MCMC) algorithms in the implementation of complicated statistical models. Moreover, the mixing parameters, a by-product of the scale mixture representation, can be used to identify possible outliers. This paper also proposes a uniform scale mixture representation for the GT density, and demonstrates how this density representation alleviates the computational burden of the Gibbs sampler. [source]
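
A minimal sketch of why a scale-mixture representation helps the Gibbs sampler: writing a Student-t likelihood as a normal with latent gamma mixing weights makes both full conditionals standard distributions, and unusually small sampled weights flag potential outliers. The synthetic data, flat prior on the location, and fixed scale and degrees of freedom are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

nu, sigma = 4.0, 1.0
y = rng.standard_t(df=nu, size=200) + 3.0        # synthetic data, true location 3
mu, n_iter = 0.0, 2000
draws = np.empty(n_iter)

for it in range(n_iter):
    # lambda_i | y, mu ~ Gamma((nu + 1)/2, rate = (nu + ((y_i - mu)/sigma)^2) / 2)
    rate = 0.5 * (nu + ((y - mu) / sigma) ** 2)
    lam = rng.gamma(shape=0.5 * (nu + 1.0), scale=1.0 / rate)
    # mu | y, lambda ~ Normal(weighted mean, sigma^2 / sum(lambda))  (flat prior on mu)
    w = lam / sigma ** 2
    mu = rng.normal(loc=np.sum(w * y) / np.sum(w), scale=1.0 / np.sqrt(np.sum(w)))
    draws[it] = mu

print("posterior mean of mu:", draws[500:].mean())   # discard burn-in
```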


Hierarchical Spatial Modeling of Additive and Dominance Genetic Variance for Large Spatial Trial Datasets

BIOMETRICS, Issue 2 2009
Andrew O. Finley
Summary This article expands upon recent interest in Bayesian hierarchical models in quantitative genetics by developing spatial process models for inference on additive and dominance genetic variance within the context of large spatially referenced trial datasets. Direct application of such models to large spatial datasets is, however, computationally infeasible because of the cubic-order matrix algorithms involved in estimation. The situation is even worse in Markov chain Monte Carlo (MCMC) contexts, where such computations are performed for several iterations. Here, we discuss approaches that help obviate these hurdles without sacrificing the richness in modeling. For genetic effects, we demonstrate how an initial spectral decomposition of the relationship matrices negates the expensive matrix inversions required in previously proposed MCMC methods. For spatial effects, we outline two approaches for circumventing the prohibitively expensive matrix decompositions: the first leverages analytical results from Ornstein–Uhlenbeck processes that yield computationally efficient tridiagonal structures, whereas the second derives a modified predictive process model from the original model by projecting its realizations to a lower-dimensional subspace, thereby reducing the computational burden. We illustrate the proposed methods using a synthetic dataset with additive and dominance genetic effects and anisotropic spatial residuals, and a large dataset from a Scots pine (Pinus sylvestris L.) progeny study conducted in northern Sweden. Our approaches enable us to provide a comprehensive analysis of this large trial, which amply demonstrates that, in addition to violating basic assumptions of the linear model, ignoring spatial effects can result in downwardly biased measures of heritability. [source]
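
A hedged sketch of the spectral-decomposition device for the genetic effects: the known relationship matrix is factored once up front, so drawing additive effects (and evaluating the quadratic forms they enter) inside the MCMC loop needs only matrix-vector products rather than repeated cubic-order decompositions. The toy relationship matrix and variance value are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy additive relationship matrix A (symmetric positive definite stand-in).
n = 6
L = np.tril(rng.random((n, n))) + n * np.eye(n)
A = L @ L.T / n

# One-off spectral decomposition: A = V diag(d) V^T.
d, V = np.linalg.eigh(A)
half = V * np.sqrt(d)                # "square root" factor, half @ half.T == A

def draw_additive_effects(sigma2_a):
    """Draw a ~ N(0, sigma2_a * A) via the precomputed factor (no inversion)."""
    return np.sqrt(sigma2_a) * (half @ rng.standard_normal(n))

# Inside an MCMC loop this draw, and quadratic forms such as a' A^{-1} a evaluated
# through V and d, replace repeated expensive matrix algebra.
a = draw_additive_effects(sigma2_a=0.4)
print(a)
```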


Generalized Hierarchical Multivariate CAR Models for Areal Data

BIOMETRICS, Issue 4 2005
Xiaoping Jin
Summary In the fields of medicine and public health, a common application of areal data models is the study of geographical patterns of disease. When we have several measurements recorded at each spatial location (for example, information on p ≥ 2 diseases from the same population groups or regions), we need to consider multivariate areal data models in order to handle the dependence among the multivariate components as well as the spatial dependence between sites. In this article, we propose a flexible new class of generalized multivariate conditionally autoregressive (GMCAR) models for areal data, and show how it enriches the MCAR class. Our approach differs from earlier ones in that it directly specifies the joint distribution for a multivariate Markov random field (MRF) through the specification of simpler conditional and marginal models. This in turn leads to a significant reduction in the computational burden in hierarchical spatial random effect modeling, where posterior summaries are computed using Markov chain Monte Carlo (MCMC). We compare our approach with existing MCAR models in the literature via simulation, using average mean square error (AMSE) and a convenient hierarchical model selection criterion, the deviance information criterion (DIC; Spiegelhalter et al., 2002, Journal of the Royal Statistical Society, Series B 64, 583–639). Finally, we offer a real-data application of our proposed GMCAR approach that models lung and esophagus cancer death rates during 1991–1998 in Minnesota counties. [source]