Test Functions (test + function)


Selected Abstracts


Analysis of thick functionally graded plates by local integral equation method

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 8 2007
J. Sladek
Abstract Analysis of functionally graded plates under static and dynamic loads is presented by the meshless local Petrov-Galerkin (MLPG) method. The plate bending problem is described by the Reissner-Mindlin theory. Both isotropic and orthotropic material properties are considered in the analysis. A weak formulation for the set of governing equations in the Reissner-Mindlin theory with a unit test function is transformed into local integral equations considered on local subdomains in the mean surface of the plate. Nodal points are randomly spread on this surface and each node is surrounded by a circular subdomain, yielding integrals that are simple to evaluate. The meshless approximation based on the moving least-squares (MLS) method is employed in the numerical implementation. Numerical results for simply supported and clamped plates are presented. Copyright © 2006 John Wiley & Sons, Ltd. [source]
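
As background on the moving least-squares approximation mentioned above, the following is a minimal generic sketch of MLS shape functions with a linear basis and a Gaussian-type weight; the node layout, weight function and support radius are illustrative choices, not those used in the paper.

    # Generic 2-D moving least-squares (MLS) sketch with a linear basis and a
    # truncated Gaussian weight; illustrative only, not the paper's settings.
    import numpy as np

    def mls_shape_functions(x, nodes, radius):
        """Return MLS shape function values phi_i(x) for all nodes."""
        p = lambda pt: np.array([1.0, pt[0], pt[1]])      # linear basis
        d = np.linalg.norm(nodes - x, axis=1)
        w = np.exp(-(d / radius) ** 2) * (d <= radius)    # compactly supported weight
        A = np.zeros((3, 3))
        B = np.zeros((3, len(nodes)))
        for i, xi in enumerate(nodes):
            pi = p(xi)
            A += w[i] * np.outer(pi, pi)
            B[:, i] = w[i] * pi
        return p(x) @ np.linalg.solve(A, B)               # phi(x) = p^T A^{-1} B

    # Usage: approximate u(x, y) = 1 + 2x + 3y from scattered nodal values.
    rng = np.random.default_rng(0)
    nodes = rng.random((30, 2))
    u_nodal = 1 + 2 * nodes[:, 0] + 3 * nodes[:, 1]
    phi = mls_shape_functions(np.array([0.4, 0.6]), nodes, radius=0.5)
    print(phi @ u_nodal)   # close to 3.6: MLS reproduces linear fields exactly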


Bayesian Hypothesis Testing: a Reference Approach

INTERNATIONAL STATISTICAL REVIEW, Issue 3 2002
José M. Bernardo
Summary For any probability model M = {p(x|θ, ω), θ ∈ Θ, ω ∈ Ω} assumed to describe the probabilistic behaviour of data x ∈ X, it is argued that testing whether or not the available data are compatible with the hypothesis H0 = {θ = θ0} is best considered as a formal decision problem on whether to use (a0), or not to use (a1), the simpler probability model (or null model) M0 = {p(x|θ0, ω), ω ∈ Ω}, where the loss difference L(a0, θ, ω) − L(a1, θ, ω) is proportional to the amount of information δ(θ0, θ, ω) which would be lost if the simplified model M0 were used as a proxy for the assumed model M. For any prior distribution π(θ, ω), the appropriate normative solution is obtained by rejecting the null model M0 whenever the corresponding posterior expectation ∫∫ δ(θ0, θ, ω) π(θ, ω|x) dθ dω is sufficiently large. Specification of a subjective prior is always difficult, and often polemical, in scientific communication. Information theory may be used to specify a prior, the reference prior, which only depends on the assumed model M and mathematically describes a situation where no prior information is available about the quantity of interest. The reference posterior expectation, d(θ0, x) = ∫∫ δ(θ0, θ, ω) π(θ, ω|x) dθ dω, of the amount of information which could be lost if the null model were used, provides an attractive nonnegative test function, the intrinsic statistic, which is invariant under reparametrization. The intrinsic statistic d(θ0, x) is measured in units of information, and it is easily calibrated (for any sample size and any dimensionality) in terms of some average log-likelihood ratios. The corresponding Bayes decision rule, the Bayesian reference criterion (BRC), indicates that the null model M0 should only be rejected if the posterior expected loss of information from using the simplified model M0 is too large or, equivalently, if the associated expected average log-likelihood ratio is large enough. The BRC criterion provides a general reference Bayesian solution to hypothesis testing which does not assume a probability mass concentrated on M0 and, hence, it is immune to Lindley's paradox. The theory is illustrated within the context of multivariate normal data, where it is shown to avoid Rao's paradox on the inconsistency between univariate and multivariate frequentist hypothesis testing. [source]
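
To make the intrinsic statistic concrete, here is a small illustrative sketch for the simplest case of this kind: testing a normal mean μ = μ0 with known standard deviation σ, where the information loss is δ(μ0, μ) = n(μ − μ0)²/(2σ²), the reference posterior is μ | x ~ N(x̄, σ²/n), and its expectation gives d(μ0, x) = (1 + z²)/2 with z = √n (x̄ − μ0)/σ. The data below are simulated and purely hypothetical, not taken from the paper.

    # Intrinsic statistic for a normal mean with known sigma: illustrative sketch.
    import numpy as np

    rng = np.random.default_rng(1)
    mu_true, sigma, n, mu0 = 0.3, 1.0, 50, 0.0
    x = rng.normal(mu_true, sigma, n)
    xbar = x.mean()
    z = np.sqrt(n) * (xbar - mu0) / sigma

    # Closed form: d(mu0, x) = (1 + z**2) / 2.
    d_exact = 0.5 * (1 + z**2)

    # Monte Carlo check: average information loss over reference-posterior draws of mu.
    mu_draws = rng.normal(xbar, sigma / np.sqrt(n), 100_000)
    d_mc = np.mean(n * (mu_draws - mu0) ** 2 / (2 * sigma**2))

    print(d_exact, d_mc)  # BRC rejects mu = mu0 only when d(mu0, x) is sufficiently large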


A test suite for parallel performance analysis tools

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 11 2007
Michael Gerndt
Abstract Parallel performance analysis tools must be tested as to whether they perform their task correctly, which comprises at least three aspects. First, it must be ensured that the tools neither alter the semantics nor distort the run-time behavior of the application under investigation. Next, it must be verified that the tools collect the correct performance data as required by their specification. Finally, it must be checked that the tools perform their intended tasks and detect relevant performance problems. Focusing on the last (correctness) aspect, testing can be done using synthetic test functions with controllable performance properties, possibly complemented by real-world applications with known performance behavior. A systematic test suite can be built from synthetic test functions and other components, possibly with the help of tools to assist the user in putting the pieces together into executable test programs. Clearly, such a test suite can be highly useful to builders of performance analysis tools. It is surprising that, up until now, no systematic effort has been undertaken to provide such a suite. In this paper we describe the APART Test Suite (ATS) for checking the correctness (in the above sense) of parallel performance analysis tools. In particular, we describe a collection of synthetic test functions which allows one to easily construct both simple and more complex test programs with desired performance properties. We briefly report on experience with MPI and OpenMP performance tools when applied to the test cases generated by ATS. Copyright © 2006 John Wiley & Sons, Ltd. [source]
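
For flavour, here is a small hypothetical example of a synthetic test function of the general kind described above (not taken from ATS itself): its controllable property is load imbalance in front of an MPI barrier, which a performance tool should report as waiting time. It assumes mpi4py and an MPI launcher are available.

    # Hypothetical synthetic test function: controllable "imbalance at barrier".
    import time
    from mpi4py import MPI

    def imbalance_at_barrier(base_work=0.1, imbalance=0.5):
        """Each rank computes for a different amount of time, then synchronizes."""
        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()
        # Controllable property: higher ranks work up to (1 + imbalance) times longer.
        work = base_work * (1.0 + imbalance * rank / max(size - 1, 1))
        t0 = time.perf_counter()
        while time.perf_counter() - t0 < work:
            pass                      # busy wait to simulate computation
        comm.Barrier()                # fast ranks wait here -> measurable idle time

    if __name__ == "__main__":
        imbalance_at_barrier()
        # e.g. run with: mpiexec -n 4 python <this_file>.py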


MRMOGA: a new parallel multi-objective evolutionary algorithm based on the use of multiple resolutions

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 4 2007
Antonio López Jaimes
Abstract In this paper, we introduce MRMOGA (Multiple Resolution Multi-Objective Genetic Algorithm), a new parallel multi-objective evolutionary algorithm based on an injection-island approach, in which each island adopts a solution encoding with a different resolution. This allows us to divide the decision variable space into well-defined overlapping regions so that multiple processors are used efficiently, and it guarantees that each processor only generates solutions within its assigned region. In order to assess the performance of our proposed approach, we compare it to a parallel version of an algorithm that is representative of the state-of-the-art in the area, using standard test functions and performance measures reported in the specialized literature. Our results indicate that our proposed approach is a viable alternative for solving multi-objective optimization problems in parallel, particularly when dealing with large search spaces. Copyright © 2006 John Wiley & Sons, Ltd. [source]
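
A minimal sketch of the multiple-resolution idea: each island encodes the same real decision variable with a different number of bits, so coarse islands explore while fine islands refine. The bounds, bit counts and re-encoding rule below are illustrative assumptions, not MRMOGA's actual parameters or migration scheme.

    # Island-specific resolution: the same variable encoded with different bit lengths.
    import random

    def decode(bits, lo, hi):
        """Map a binary string to a real value in [lo, hi] at that string's resolution."""
        value = int("".join(map(str, bits)), 2)
        return lo + (hi - lo) * value / (2 ** len(bits) - 1)

    def random_individual(n_bits):
        return [random.randint(0, 1) for _ in range(n_bits)]

    def migrate_up(bits, target_bits):
        """Re-encode a coarse individual at a finer island's resolution (pad low bits)."""
        return bits + [0] * (target_bits - len(bits))

    island_resolutions = [8, 12, 16]          # bits per island (coarse -> fine)
    lo, hi = -5.0, 5.0
    coarse = random_individual(island_resolutions[0])
    fine = migrate_up(coarse, island_resolutions[-1])
    print(decode(coarse, lo, hi), decode(fine, lo, hi))   # nearby values, finer grid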


Accelerating adaptive trade-off model using shrinking space technique for constrained evolutionary optimization

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 11 2009
Yong Wang
Abstract The adaptive trade-off model (ATM) is a recently proposed constraint-handling mechanism. The main advantages of this model are its simplicity and adaptability; moreover, it can easily be embedded into evolutionary algorithms for solving constrained optimization problems. This paper proposes a novel method for constrained optimization that aims at accelerating the ATM using a shrinking space technique. Eighteen benchmark test functions and five engineering design problems are used to test the performance of the proposed method. Experimental results suggest that combining the ATM with the shrinking space technique is very beneficial: the proposed method converges promptly to competitive results without sacrificing the quality or precision of the final results. Performance comparisons with other state-of-the-art approaches from the literature are also presented. Copyright © 2008 John Wiley & Sons, Ltd. [source]
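
A rough sketch of a space-shrinking step of the general kind referred to above: after some generations, the variable bounds are contracted around the bounding box of the best individuals found so far, so subsequent search concentrates on a promising region. The contraction rule and margin here are hypothetical, not the specific operator of the cited paper.

    # Generic space-shrinking step around the best individuals (illustrative only).
    import numpy as np

    def shrink_bounds(population, lower, upper, keep_fraction=0.3, margin=0.1):
        """Contract [lower, upper] around the bounding box of the best individuals."""
        k = max(1, int(keep_fraction * len(population)))
        best = population[:k]                     # population assumed sorted, best first
        box_lo, box_hi = best.min(axis=0), best.max(axis=0)
        pad = margin * (box_hi - box_lo + 1e-12)
        new_lo = np.maximum(lower, box_lo - pad)
        new_hi = np.minimum(upper, box_hi + pad)
        return new_lo, new_hi

    # Usage with a hypothetical, already-sorted population of 2-D candidate solutions.
    pop = np.array([[0.9, 1.1], [1.0, 1.2], [1.3, 0.8], [3.0, 4.0], [-2.0, 5.0]])
    lo, hi = np.array([-5.0, -5.0]), np.array([5.0, 5.0])
    print(shrink_bounds(pop, lo, hi))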


Shape functions for polygonal domains with interior nodes

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 8 2004
Elisabeth Anna Malsch
Abstract The presented formulation is the latest in a series of publications outlining a method for constructing test functions that satisfy essential edge conditions exactly. The method promises a complete solution, satisfying all of the requirements of a Ritz coordinate function. The influence of interior points on the domain solution is included in this construction. Similar to conformal bubble functions, the test functions are zero along the boundary and single-valued only at the points they describe. Unlike in the bubble-function construction, however, the interior points can be located at any desired point in the domain. The resulting set of trial functions can satisfy the required global conditions, including the exact reproduction of constant and linear fields. Copyright © 2004 John Wiley & Sons, Ltd. [source]
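
For contrast with the construction described above, a textbook polygon bubble can be written as follows: for a convex polygon whose k-th edge lies on the line l_k(x, y) = a_k x + b_k y + c_k = 0, normalized so that l_k > 0 in the interior,

    \[
    b(x,y) = \prod_{k} \ell_k(x,y), \qquad
    \varphi_j(x,y) = \frac{b(x,y)}{b(x_j,y_j)},
    \]

so that φ_j vanishes on the whole boundary and equals one at a chosen interior point (x_j, y_j). This is only the standard bubble construction mentioned in the abstract as a point of comparison; unlike the trial functions described above, it is not zero at the other interior nodes and does not by itself reproduce linear fields.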


Meshless Galerkin analysis of Stokes slip flow with boundary integral equations

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 11 2009
Xiaolin Li
Abstract This paper presents a novel meshless Galerkin scheme for modeling incompressible slip Stokes flows in 2D. The boundary value problem is reformulated as boundary integral equations of the first kind, which are then converted into an equivalent variational problem with a constraint. We introduce a Lagrange multiplier to incorporate the constraint and apply moving least-squares approximations to generate the trial and test functions. In this boundary-type meshless method, boundary conditions can be implemented exactly and the system matrices are symmetric. Unlike domain-type methods, this Galerkin scheme requires only a nodal structure on the bounding surface of a body for the approximation of boundary unknowns. The convergence and abstract error estimates of this new approach are given. Numerical examples are also presented to show the efficiency of the method. Copyright © 2009 John Wiley & Sons, Ltd. [source]
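
After discretization, a Lagrange-multiplier treatment of the constraint leads to a symmetric saddle-point system. The following generic sketch (with small made-up matrices, not the boundary-integral operators of the paper) shows that structure: minimize 0.5 x^T A x − b^T x subject to C x = d by solving the symmetric KKT system.

    # Generic symmetric saddle-point (KKT) system for an equality-constrained minimum.
    import numpy as np

    A = np.array([[4.0, 1.0], [1.0, 3.0]])     # symmetric positive definite "stiffness"
    b = np.array([1.0, 2.0])
    C = np.array([[1.0, 1.0]])                 # one linear constraint
    d = np.array([1.0])

    n, m = A.shape[0], C.shape[0]
    K = np.block([[A, C.T], [C, np.zeros((m, m))]])   # symmetric, indefinite
    rhs = np.concatenate([b, d])
    sol = np.linalg.solve(K, rhs)
    x, lam = sol[:n], sol[n:]
    print(x, lam)            # x satisfies C @ x = d; lam is the Lagrange multiplier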


Diff(Rn) as a Milnor–Lie group

MATHEMATISCHE NACHRICHTEN, Issue 9 2005
Helge Glöckner
Abstract We describe a construction of the Lie group structure on the diffeomorphism group Diff(Rn), modelled on the space D(Rn, Rn) of Rn-valued test functions on Rn, in John Milnor's setting of infinite-dimensional Lie groups. New tools are introduced to simplify this task. (© 2005 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
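
As background (a generic fact about charts of this kind, not a summary of the paper's construction): a diffeomorphism near the identity can be written as φ = id + f with f a test function, and in such a chart composition becomes

    \[
    (\mathrm{id} + f)\circ(\mathrm{id} + g) \;=\; \mathrm{id} + g + f\circ(\mathrm{id} + g),
    \]

so the group operations are expressed through addition and composition of test functions, which are the kinds of maps whose smoothness a Milnor-style construction must establish.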


General theory of domain decomposition: Indirect methods

NUMERICAL METHODS FOR PARTIAL DIFFERENTIAL EQUATIONS, Issue 3 2002
Ismael Herrera
Abstract According to a general theory of domain decomposition methods (DDM), recently proposed by Herrera, DDM may be classified into two broad categories: direct and indirect (or Trefftz-Herrera) methods. This article is devoted to formulating indirect methods systematically and applying them to differential equations in several dimensions. They are of interest because they subsume some of the best-known formulations of domain decomposition methods, such as those based on the application of Steklov-Poincaré operators. The Trefftz-Herrera approach is based on a special kind of Green's formula applicable to discontinuous functions, and one of its essential features is the use of weighting functions that yield information about the sought solution exclusively at the internal boundary of the domain decomposition. A special class of Sobolev spaces is introduced in which boundary value problems with prescribed jumps at the internal boundary are formulated. Green's formulas applicable in such Sobolev spaces, which contain discontinuous functions, are established, and from them the general framework for indirect methods is derived. Guidelines for the construction of the special kind of test functions are then supplied and, as an illustration, the method is applied to elliptic problems in several dimensions. A nonstandard method of collocation is derived in this manner, which possesses significant advantages over more standard procedures. © 2002 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq 18: 296-322, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/num.10008 [source]
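
As a concrete special case of such a Green's formula for discontinuous functions (written here for the Laplacian only, with one common sign convention; the paper treats general operators), let Σ be the internal boundary, let [v] = v⁺ − v⁻ denote the jump across Σ, and let n on Σ point from the "−" side to the "+" side. Summing Green's second identity over the subdomains gives

    \[
    \int_{\Omega\setminus\Sigma} (w\,\Delta u - u\,\Delta w)\,dx
      = \int_{\partial\Omega}\left(w\,\frac{\partial u}{\partial n} - u\,\frac{\partial w}{\partial n}\right)ds
      - \int_{\Sigma}\left(\left[w\,\frac{\partial u}{\partial n}\right] - \left[u\,\frac{\partial w}{\partial n}\right]\right)ds,
    \]

so choosing weighting functions w that satisfy Δw = 0 on each subdomain turns this identity into a relation involving the unknown u only through its values and normal derivatives on ∂Ω and Σ, which is the sense in which the weighting functions yield information about the sought solution exclusively at the internal boundary.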