Model Representation

Selected Abstracts


An Investigation of Localization as an Element of Cognitive Fit in Accounting Model Representations

DECISION SCIENCES, Issue 1 2001
Cheryl Dunn
Abstract Cognitive fit, a correspondence between task and data representation format, has been demonstrated to lead to superior task performance by individual users and has been posited as an explanation for performance differences among users of various problem representations such as tables, graphs, maps, and schematic faces. The current study extends cognitive fit to accounting models and integrates cognitive fit theory with the concept of localization to provide additional evidence for how cognitive fit works. Two accounting model representations are compared in this study, the traditional DCA (Debit-Credit-Account) accounting model and the REA (Resources-Events-Agents) accounting model. Results indicate that the localization of relevant objects or linkages is important in establishing cognitive fit. [source]


Combining wavelet-based feature extractions with relevance vector machines for stock index forecasting

EXPERT SYSTEMS, Issue 2 2008
Shian-Chang Huang
Abstract: The relevance vector machine (RVM) is a Bayesian version of the support vector machine which, owing to its sparse model representation, has proved to be a powerful tool for time-series forecasting. The RVM has demonstrated better performance than other methods such as neural networks or autoregressive integrated moving average based models. This study proposes a hybrid model that combines wavelet-based feature extractions with RVM models to forecast stock indices. The time series of explanatory variables are decomposed using some wavelet bases, and the extracted time-scale features serve as inputs to an RVM that performs the non-parametric regression and forecasting. Compared with traditional forecasting models, the proposed method performs best; the root-mean-squared forecasting errors are significantly reduced. [source]
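
To make the pipeline concrete, here is a minimal sketch of the wavelet-feature-plus-sparse-Bayesian-regression idea, not the authors' implementation: PyWavelets supplies the decomposition and scikit-learn's ARDRegression stands in for the relevance vector machine (scikit-learn ships no RVM); the window length, wavelet choice and one-step-ahead target are illustrative assumptions.

```python
# Sketch only: ARDRegression is used as a sparse Bayesian stand-in for the RVM.
import numpy as np
import pywt
from sklearn.linear_model import ARDRegression

def wavelet_features(series, wavelet="db4", level=3):
    """Decompose a 1-D series and concatenate the coefficient arrays."""
    coeffs = pywt.wavedec(np.asarray(series, dtype=float), wavelet, level=level)
    return np.concatenate(coeffs)

def make_dataset(prices, window=64):
    """Sliding windows of past prices -> wavelet features; target = next price."""
    X, y = [], []
    for t in range(window, len(prices) - 1):
        X.append(wavelet_features(prices[t - window:t]))
        y.append(prices[t + 1])
    return np.vstack(X), np.array(y)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prices = np.cumsum(rng.normal(size=500)) + 100.0   # synthetic index level
    X, y = make_dataset(prices)
    split = int(0.8 * len(y))
    model = ARDRegression().fit(X[:split], y[:split])
    pred = model.predict(X[split:])
    rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
    print(f"out-of-sample RMSE: {rmse:.3f}")
```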


High-dimensional model representation for structural reliability analysis

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 4 2009
Rajib Chowdhury
Abstract This paper presents a new computational tool for predicting failure probability of structural/mechanical systems subject to random loads, material properties, and geometry. The method involves high-dimensional model representation (HDMR) that facilitates lower-dimensional approximation of the original high-dimensional implicit limit state/performance function, response surface generation of HDMR component functions, and Monte Carlo simulation. HDMR is a general set of quantitative model assessment and analysis tools for capturing the high-dimensional relationships between sets of input and output model variables. It is a very efficient formulation of the system response, if higher-order variable correlations are weak, allowing the physical model to be captured by the first few lower-order terms. Once the approximate form of the original implicit limit state/performance function is defined, the failure probability can be obtained by statistical simulation. Results of nine numerical examples involving mathematical functions and structural mechanics problems indicate that the proposed method provides accurate and computationally efficient estimates of the probability of failure. Copyright © 2008 John Wiley & Sons, Ltd. [source]
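
As a rough illustration of the workflow (not the paper's formulation), the sketch below builds a first-order cut-HDMR surrogate of a limit-state function by varying one variable at a time about a reference point, interpolates the component functions, and then estimates the failure probability by Monte Carlo on the surrogate; the limit state, the input distributions and all grid/sample sizes are assumptions chosen for illustration.

```python
# Sketch only: first-order cut-HDMR surrogate + Monte Carlo failure probability.
import numpy as np
from scipy.interpolate import interp1d

def build_first_order_hdmr(g, cut, grids):
    """Return a surrogate g(x) ~ g0 + sum_i [g(cut with x_i varied) - g0]."""
    g0 = g(cut)
    comps = []
    for i, grid in enumerate(grids):
        vals = []
        for xi in grid:
            x = cut.copy()
            x[i] = xi
            vals.append(g(x) - g0)          # sampled component function g_i(x_i)
        comps.append(interp1d(grid, vals, kind="cubic", fill_value="extrapolate"))
    return lambda X: g0 + sum(comps[i](X[:, i]) for i in range(len(grids)))

if __name__ == "__main__":
    # Illustrative limit state: failure when g(X) < 0, with X ~ N(mean, sd).
    def g(x):
        return 3.0 - 0.1 * (x[0] + x[1]) ** 2 - x[2] / np.sqrt(2.0)

    mean, sd = np.zeros(3), np.ones(3)
    cut = mean.copy()                        # reference (cut) point
    grids = [np.linspace(m - 4 * s, m + 4 * s, 9) for m, s in zip(mean, sd)]
    g_hat = build_first_order_hdmr(g, cut, grids)

    rng = np.random.default_rng(1)
    X = rng.normal(mean, sd, size=(200_000, 3))
    pf = np.mean(g_hat(X) < 0.0)             # Monte Carlo on the cheap surrogate
    print(f"estimated failure probability: {pf:.4f}")
```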


High dimensional model representation for piece-wise continuous function approximation

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 12 2008
Rajib Chowdhury
Abstract High dimensional model representation (HDMR) approximates multivariate functions in such a way that the component functions of the approximation are ordered starting from a constant and gradually approaching multivariance as we proceed through the first-order, second-order and higher terms. To date, HDMR applications include construction of a computational model directly from laboratory/field data, creation of an efficient fully equivalent operational model to replace an existing time-consuming mathematical model, identification of key model variables, global uncertainty assessments, efficient quantitative risk assessments, etc. In this paper, the potential of HDMR for tackling univariate and multivariate piece-wise continuous functions is explored. Eight numerical examples are presented to illustrate the performance of HDMR for approximating a univariate or a multivariate piece-wise continuous function with an equivalent continuous function. Copyright © 2007 John Wiley & Sons, Ltd. [source]
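
For reference, the hierarchical expansion this abstract describes has the standard general form below (generic notation, not reproduced from the paper); truncating after the first- or second-order sums yields the lower-dimensional approximations used in both HDMR entries above.

```latex
% General HDMR expansion (generic notation): terms are ordered from a constant
% toward full multivariance; truncation after the low-order sums gives the
% usual lower-dimensional approximation.
f(x_1,\dots,x_n) = f_0
  + \sum_{i=1}^{n} f_i(x_i)
  + \sum_{1 \le i < j \le n} f_{ij}(x_i, x_j)
  + \cdots
  + f_{12\cdots n}(x_1,\dots,x_n)
```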


Generalized forgetting functions for on-line least-squares identification of time-varying systems

INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 4 2001
R. E. Mahony
The problem of on-line identification of a parametric model for continuous-time, time-varying systems is considered via the minimization of a least-squares criterion with a forgetting function. The proposed forgetting function depends on two time-varying parameters which play crucial roles in the stability analysis of the method. The analysis leads to the consideration of a Lyapunov function for the identification algorithm that incorporates both prediction error and parameter convergence measures. A theorem is proved showing finite time convergence of the Lyapunov function to a neighbourhood of zero, the size of which depends on the evolution of the time-varying error terms in the parametric model representation. Copyright © 2001 John Wiley & Sons, Ltd. [source]
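
For orientation, the sketch below implements only the textbook discrete-time special case of this idea: recursive least squares with a constant scalar forgetting factor. The paper's continuous-time formulation with a two-parameter, time-varying forgetting function is more general than this; the model order, forgetting factor and test signal here are illustrative assumptions.

```python
# Sketch only: classical RLS with exponential forgetting (discrete time).
import numpy as np

class ForgettingRLS:
    def __init__(self, n_params, lam=0.98, p0=1e3):
        self.theta = np.zeros(n_params)        # parameter estimate
        self.P = p0 * np.eye(n_params)         # "covariance" (inverse information)
        self.lam = lam                         # forgetting factor, 0 < lam <= 1

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        e = y - phi @ self.theta               # prediction error
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)     # gain vector
        self.theta = self.theta + k * e
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return e

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    rls = ForgettingRLS(n_params=2)
    theta_true = np.array([1.0, -0.5])
    for t in range(500):
        if t == 250:
            theta_true = np.array([0.2, 0.8])  # abrupt parameter change
        phi = rng.normal(size=2)
        y = phi @ theta_true + 0.05 * rng.normal()
        rls.update(phi, y)
    print("final estimate:", np.round(rls.theta, 3))
```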


An algorithm for the use of surrogate models in modular flowsheet optimization

AICHE JOURNAL, Issue 10 2008
José A. Caballero
Abstract In this work a methodology is presented for the rigorous optimization of nonlinear programming problems in which the objective function and/or some constraints are represented by noisy implicit black box functions. The particular application considered is the optimization of modular process simulators, in which derivatives are not available and some unit operations introduce noise that prevents the calculation of accurate derivatives. The black box modules are substituted by metamodels based on a kriging interpolation that assumes that the errors are not independent but a function of the independent variables. A kriging metamodel uses a non-Euclidean measure of distance to avoid sensitivity to the units of measure. It includes adjustable parameters that weigh the importance of each variable in obtaining a good model representation, and it allows errors to be calculated that can be used to establish stopping criteria and provide a solid basis for dealing with "possible infeasibility" due to inaccuracies in the metamodel representation of the objective function and constraints. The algorithm continues with a refining stage and successive bound contraction in the domain of the independent variables, with or without kriging recalibration, until an acceptable accuracy in the metamodel is obtained. The procedure is illustrated with several examples. © 2008 American Institute of Chemical Engineers AIChE J, 2008 [source]
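
A minimal sketch of the surrogate idea, assuming scikit-learn's GaussianProcessRegressor as the kriging engine (not the authors' code): the anisotropic RBF length-scales play the role of the per-variable weights mentioned above, and the predictive standard deviation supplies the error estimate used for stopping and feasibility checks. The "black box" module below is a made-up stand-in for an expensive simulator unit.

```python
# Sketch only: Gaussian-process (kriging-type) surrogate for a noisy black box.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def noisy_black_box(x):
    """Stand-in for an expensive, noisy flowsheet module."""
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2 + 0.02 * np.random.randn(len(x))

rng = np.random.default_rng(0)
X_train = rng.uniform(-1.0, 1.0, size=(40, 2))            # sampled module inputs
y_train = noisy_black_box(X_train)

# Anisotropic RBF: one length-scale per input variable, plus a noise term.
kernel = RBF(length_scale=[1.0, 1.0]) + WhiteKernel(noise_level=1e-2)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

X_new = rng.uniform(-1.0, 1.0, size=(5, 2))
mean, std = gp.predict(X_new, return_std=True)            # surrogate value + error bar
print("fitted length-scales:", gp.kernel_.k1.length_scale)
print("predictions +/- std:\n", np.c_[mean, std])
```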


Binary models for marginal independence

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 2 2008
Mathias Drton
Summary Log-linear models are a classical tool for the analysis of contingency tables. In particular, the subclass of graphical log-linear models provides a general framework for modelling conditional independences. However, with the exception of special structures, marginal independence hypotheses cannot be accommodated by these traditional models. Focusing on binary variables, we present a model class that provides a framework for modelling marginal independences in contingency tables. The approach that is taken is graphical and draws on analogies with multivariate Gaussian models for marginal independence. For the graphical model representation we use bidirected graphs, which are in the tradition of path diagrams. We show how the models can be parameterized in a simple fashion, and how maximum likelihood estimation can be performed by using a version of the iterated conditional fitting algorithm. Finally we consider combining these models with symmetry restrictions. [source]
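
To fix ideas, the sketch below illustrates the kind of constraint such a model encodes: in a bidirected graph, variables that are not joined by an edge must be marginally independent, so collapsing a binary table onto that margin and checking independence there is a natural diagnostic. The 2x2x2 counts and the assumed graph 1 <-> 2 <-> 3 (no 1-3 edge) are illustrative only, and the chi-square test is a generic device, not the paper's iterated conditional fitting algorithm.

```python
# Sketch only: check the marginal independence implied by a missing edge.
import numpy as np
from scipy.stats import chi2_contingency

# counts[x1, x2, x3] for binary variables X1, X2, X3 (made-up data)
counts = np.array([[[40, 12], [18, 10]],
                   [[22, 11], [ 9, 14]]], dtype=float)

margin_13 = counts.sum(axis=1)     # collapse over X2 -> 2x2 table for (X1, X3)
chi2, pval, dof, expected = chi2_contingency(margin_13, correction=False)
print("marginal X1-X3 table:\n", margin_13)
print(f"chi-square test of X1 independent of X3: chi2={chi2:.2f}, p={pval:.3f}")
```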


A methodology for forming components of the linear model in 4D-Var with application to the marine boundary layer

ATMOSPHERIC SCIENCE LETTERS, Issue 4 2009
Tim Payne
Abstract We show how large numbers of parameters used in the linear model in 4D-Var may efficiently be optimised according to suitable criteria. We apply this to the linear model representation of boundary layer processes over sea, to obtain significant reductions in linearisation and forecast error. Crown Copyright © 2009 Royal Meteorological Society [source]


Assessment of Agreement under Nonstandard Conditions Using Regression Models for Mean and Variance

BIOMETRICS, Issue 1 2006
Pankaj K. Choudhary
Summary The total deviation index of Lin (2000, Statistics in Medicine 19, 255–270) and Lin et al. (2002, Journal of the American Statistical Association 97, 257–270) is an intuitive approach for the assessment of agreement between two methods of measurement. It assumes that the differences of the paired measurements are a random sample from a normal distribution and works essentially by constructing a probability content tolerance interval for this distribution. We generalize this approach to the case when differences may not have identical distributions, a common scenario in applications. In particular, we use the regression approach to model the mean and the variance of differences as functions of observed values of the average of the paired measurements, and describe two methods based on asymptotic theory of maximum likelihood estimators for constructing a simultaneous probability content tolerance band. The first method uses bootstrap to approximate the critical point and the second method is an analytical approximation. Simulation shows that the first method works well for sample sizes as small as 30 and the second method is preferable for large sample sizes. We also extend the methodology to the case when the mean function is modeled using penalized splines via a mixed model representation. Two real data applications are presented. [source]
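
A minimal sketch of the modelling idea (not the paper's estimators or tolerance band): the differences are taken as normal with mean and log-variance linear in the average of the paired measurements, and an upper quantile of |D| is computed pointwise from the noncentral chi-square relation. The simultaneous band in the paper additionally widens this quantile via bootstrap or an analytical approximation; the synthetic data and the simple regression forms below are assumptions.

```python
# Sketch only: pointwise TDI-type bound from mean/variance regressions.
import numpy as np
from scipy.stats import ncx2

rng = np.random.default_rng(0)
n = 200
avg = rng.uniform(10.0, 50.0, n)                          # average of the paired measurements
diff = (0.5 + 0.02 * avg) + np.exp(0.1 + 0.02 * avg) * rng.normal(size=n)

# Mean of D modelled as linear in the average.
Xm = np.c_[np.ones(n), avg]
beta = np.linalg.lstsq(Xm, diff, rcond=None)[0]
resid = diff - Xm @ beta

# Log-variance of D modelled as linear in the average; correct the regression
# of log(resid^2) for E[log chi^2_1] ~ -1.27 so exp(.) estimates the variance.
gamma = np.linalg.lstsq(Xm, np.log(resid ** 2 + 1e-12), rcond=None)[0]
gamma[0] += 1.2704

def tdi(avg_value, p=0.90):
    """p-th quantile of |D| at a given average: (D/sigma)^2 ~ noncentral chi^2(1)."""
    mu = beta[0] + beta[1] * avg_value
    sigma = np.exp(0.5 * (gamma[0] + gamma[1] * avg_value))
    return sigma * np.sqrt(ncx2.ppf(p, df=1, nc=(mu / sigma) ** 2))

for a in (15.0, 30.0, 45.0):
    print(f"average={a:4.1f}  estimated 90% bound on |difference|: {tdi(a):.2f}")
```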


Parameterizing redistribution and sublimation of blowing snow for hydrological models: tests in a mountainous subarctic catchment

HYDROLOGICAL PROCESSES, Issue 18 2009
Matthew K. MacDonald
Abstract Model tests of blowing snow redistribution and sublimation by wind were performed for three winters over a small mountainous sub-Arctic catchment located in the Yukon Territory, Canada, using a physically based blowing snow model. Snow transport fluxes were distributed over multiple hydrological response units (HRUs) using inter-HRU snow redistribution allocation factors (SR). Three SR schemes of varying complexity were evaluated. Model results show that end-of-winter snow accumulation can be most accurately simulated with a physically based blowing snow model when SR values are established taking into account wind direction and speed, HRU aerodynamic characteristics, and the spatial arrangement of the HRUs in the catchment. With the knowledge that snow transport scales approximately with the fourth power of wind speed (u4), SR values can be (1) established according to the predominant u4 direction and magnitude over a simulation period or (2) allowed to change at each time step according to a measured wind direction. Unfortunately, wind direction data were available for only one of the three winters, so the latter scheme was tested only once. Although the aforementioned SR schemes produced different results, their model efficiencies were of similar merit. The independent effects of topography and vegetation were examined to assess their importance for snow redistribution modelling over mountainous terrain. Snow accumulation was best simulated when including explicit representations of both landscape vegetation (i.e. vegetation height and density) and topography (i.e. wind exposure). There may be inter-basin differences in the relative importance of model representations of topography and vegetation. Copyright © 2009 John Wiley & Sons, Ltd. [source]
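
As an illustration of scheme (1) above, the sketch below turns a wind record into seasonal inter-HRU allocation factors by weighting direction sectors with u^4 (snow transport scaling roughly with the fourth power of wind speed). The wind record, the sector width and the sector-to-HRU mapping are hypothetical, not the study's configuration.

```python
# Sketch only: u^4-weighted wind sectors -> seasonal SR allocation factors.
import numpy as np

# Hourly wind speed (m/s) and direction (degrees); made-up winter record.
rng = np.random.default_rng(0)
speed = np.abs(rng.normal(6.0, 3.0, size=24 * 90))
direction = rng.normal(270.0, 40.0, size=speed.size) % 360   # mostly westerly

# u^4-weighted frequency of eight 45-degree direction sectors.
sector = np.round(direction / 45.0).astype(int) % 8
u4 = speed ** 4
sector_weight = np.array([u4[sector == s].sum() for s in range(8)])
sector_weight /= sector_weight.sum()

# Hypothetical mapping: fraction of transport leaving a windswept ridge HRU
# that each downwind HRU receives, per wind sector (each row sums to 1).
hru_names = ["lee_slope", "valley_bottom", "shrub_flat"]
sector_to_hru = np.array([[0.6, 0.3, 0.1]] * 4 + [[0.2, 0.5, 0.3]] * 4)

SR = sector_weight @ sector_to_hru            # seasonal allocation factors
for name, f in zip(hru_names, SR):
    print(f"SR[ridge -> {name}] = {f:.2f}")
```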


SEMIPARAMETRIC REGRESSION AND GRAPHICAL MODELS

AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 1 2009
M. P. Wand
Summary Semiparametric regression models that use spline basis functions with penalization have graphical model representations. This link is more powerful than previously established mixed model representations of semiparametric regression, as a larger class of models can be accommodated. Complications such as missingness and measurement error are more naturally handled within the graphical model architecture. Directed acyclic graphs, also known as Bayesian networks, play a prominent role. Graphical model-based Bayesian 'inference engines', such as BUGS and VIBES, facilitate fitting and inference. Underlying these are Markov chain Monte Carlo schemes and recent developments in variational approximation theory and methodology. [source]
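
For context, the mixed model representation referred to above (which the graphical-model view generalizes) writes a penalized spline fit in the standard form below; the notation is generic rather than the paper's.

```latex
% Penalized spline fit as a linear mixed model (generic notation):
% X holds the low-order polynomial terms, Z the spline basis functions, and
% the ratio \sigma_\varepsilon^2 / \sigma_u^2 acts as the smoothing parameter.
y = X\beta + Zu + \varepsilon, \qquad
u \sim N(0, \sigma_u^2 I), \qquad
\varepsilon \sim N(0, \sigma_\varepsilon^2 I)
```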