Kernel Function (kernel + function)

Selected Abstracts


A reproducing kernel method with nodal interpolation property

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 7 2003
Jiun-Shyan Chen
Abstract A general formulation for developing reproducing kernel (RK) interpolation is presented. This is based on the coupling of a primitive function and an enrichment function. The primitive function introduces discrete Kronecker delta properties, while the enrichment function constitutes reproducing conditions. A necessary condition for obtaining a RK interpolation function is an orthogonality condition between the vector of enrichment functions and the vector of shifted monomial functions at the discrete points. A normalized kernel function with relatively small support is employed as the primitive function. This approach does not employ a finite element shape function and therefore the interpolation function can be arbitrarily smooth. To maintain the convergence properties of the original RK approximation, a mixed interpolation is introduced. A rigorous error analysis is provided for the proposed method. Optimal order error estimates are shown for the meshfree interpolation in any Sobolev norms. Optimal order convergence is maintained when the proposed method is employed to solve one-dimensional boundary value problems. Numerical experiments demonstrate the theoretical error estimates. The performance of the method is illustrated in several sample problems. Copyright © 2003 John Wiley & Sons, Ltd. [source]


Steady/unsteady aerodynamic analysis of wings at subsonic, sonic and supersonic Mach numbers using a 3D panel method

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 10 2003
Jeonghyun Cho
Abstract This paper treats the kernel function of an integral equation that relates a known or prescribed upwash distribution to an unknown lift distribution for a finite wing. The pressure kernel functions of the singular integral equation are summarized for all speed ranges in the Laplace transform domain. The sonic kernel function has been reduced to a form that can be conveniently evaluated as a finite limit from both the subsonic and supersonic sides when the Mach number tends to one. Several examples are solved including rectangular wings, swept wings, a supersonic transport wing and a harmonically oscillating wing. Present results are given with other numerical data, showing continuous results through the unit Mach number. Computed results are in good agreement with other numerical results. Copyright © 2003 John Wiley & Sons, Ltd. [source]


Hybrid kernel learning via genetic optimization for TS fuzzy system identification

INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 1 2010
Wei Li
Abstract This paper presents a new TS fuzzy system identification approach based on hybrid kernel learning and an improved genetic algorithm (GA). Structure identification is achieved by using support vector regression (SVR), in which a hybrid kernel function is adopted to improve regression performance. For multiple-parameter selection of SVR, the proposed GA is adopted to speed up the search process and guarantee the least number of support vectors. As a result, a concise model structure can be determined by these obtained support vectors. Then, the premise parameters of fuzzy rules can be extracted from results of SVR, and the consequent parameters can be optimized by the least-square method. Simulation results show that the resulting fuzzy model not only achieves satisfactory accuracy, but also exhibits good generalization capability. Copyright © 2008 John Wiley & Sons, Ltd. [source]
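The hybrid kernel in approaches of this kind is typically a convex combination of a local (Gaussian) kernel and a global (polynomial) kernel. A minimal NumPy sketch of that construction — the mixing weight lam and the kernel parameters gamma, degree and c are illustrative placeholders for quantities the GA would tune, not values from the paper:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (local) kernel: exp(-gamma * ||x - y||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def poly_kernel(X, Y, degree=2, c=1.0):
    # Polynomial (global) kernel: (x . y + c)^degree
    return (X @ Y.T + c) ** degree

def hybrid_kernel(X, Y, lam=0.5, gamma=1.0, degree=2, c=1.0):
    # Convex combination of the two kernels; lam is the quantity
    # a GA-style search would tune alongside the SVR hyper-parameters.
    return lam * rbf_kernel(X, Y, gamma) + (1 - lam) * poly_kernel(X, Y, degree, c)

X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
K = hybrid_kernel(X, X, lam=0.5)
```

Since a convex combination of positive semi-definite kernels is itself positive semi-definite, the hybrid remains a valid kernel for SVR.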


Kernel approach to possibilistic C-means clustering

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 3 2009
Frank Chung-Hoon Rhee
Kernel approaches can improve the performance of conventional clustering or classification algorithms for complex distributed data. This is achieved by using a kernel function, which is defined as the inner product of two values obtained by a transformation function. In doing so, this allows algorithms to operate in a higher dimensional space (i.e., more degrees of freedom for data to be meaningfully partitioned) without having to compute the transformation. As a result, the fuzzy kernel C-means (FKCM) algorithm, which uses a distance measure between patterns and cluster prototypes based on a kernel function, can obtain more desirable clustering results than fuzzy C-means (FCM) for not only spherical data but also nonspherical data. However, it can still be sensitive to noise as in the FCM algorithm. In this paper, to remedy this drawback of FKCM, we propose a kernel possibilistic C-means (KPCM) algorithm that applies the kernel approach to the possibilistic C-means (PCM) algorithm. The method includes a variance updating method for Gaussian kernels for each clustering iteration. Several experimental results show that the proposed algorithm can outperform other algorithms for general data with additive noise. © 2009 Wiley Periodicals, Inc. [source]
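The kernel trick behind FKCM and KPCM replaces the Euclidean distance between a pattern and a prototype with a feature-space distance that never computes the transformation explicitly. A small sketch for the Gaussian kernel — the bandwidth sigma stands in for the variance that KPCM would update each iteration:

```python
import numpy as np

def gaussian_kernel(x, v, sigma=1.0):
    return np.exp(-np.sum((x - v) ** 2) / (2.0 * sigma ** 2))

def kernel_distance_sq(x, v, sigma=1.0):
    # Squared distance in feature space without computing the map:
    # ||phi(x) - phi(v)||^2 = K(x,x) + K(v,v) - 2 K(x,v) = 2 (1 - K(x,v))
    # for any kernel with K(x,x) = 1, such as the Gaussian.
    return 2.0 * (1.0 - gaussian_kernel(x, v, sigma))

x = np.array([0.0, 0.0])
v = np.array([3.0, 4.0])
d2 = kernel_distance_sq(x, v, sigma=1.0)
```

Note that this distance is bounded by 2 no matter how far a pattern lies from a prototype, which is one reason kernelized clustering is less influenced by outliers than its Euclidean counterpart.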


Unified linear subspace approach to semantic analysis

JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 1 2010
Dandan Li
The Basic Vector Space Model (BVSM) is well known in information retrieval. Unfortunately, its retrieval effectiveness is limited because it is based on literal term matching. The Generalized Vector Space Model (GVSM) and Latent Semantic Indexing (LSI) are two prominent semantic retrieval methods, both of which assume there is some underlying latent semantic structure in a dataset that can be used to improve retrieval performance. However, while this structure may be derived from both the term space and the document space, GVSM exploits only the former and LSI the latter. In this article, the latent semantic structure of a dataset is examined from a dual perspective; namely, we consider the term space and the document space simultaneously. This new viewpoint has a natural connection to the notion of kernels. Specifically, a unified kernel function can be derived for a class of vector space models. The dual perspective provides a deeper understanding of the semantic space and makes transparent the geometrical meaning of the unified kernel function. New semantic analysis methods based on the unified kernel function are developed, which combine the advantages of LSI and GVSM. We also prove that the new methods are stable in the sense that even when the selected rank of the truncated Singular Value Decomposition (SVD) is far from optimal, retrieval performance is not significantly degraded. Experiments performed on standard test collections show that our methods are promising. [source]
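The LSI half of this picture can be sketched with a truncated SVD of a toy term-document matrix; the matrix, the rank k and the cosine similarity below are illustrative choices, not the paper's unified kernel:

```python
import numpy as np

# Toy term-document matrix (rows = terms, columns = documents).
A = np.array([[1.0, 1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0, 1.0]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                 # truncation rank
Dk = (np.diag(s[:k]) @ Vt[:k]).T      # documents in the k-dim latent space

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Documents 0 and 1 share a term, so their latent-space similarity
# should exceed that of the unrelated pair (0, 3).
sim_01 = cosine(Dk[0], Dk[1])
sim_03 = cosine(Dk[0], Dk[3])
```

The stability result quoted above says, roughly, that the ordering produced by such similarities is not overly sensitive to the choice of k.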


Nonparametric maximum likelihood estimation for shifted curves

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 2 2001
Birgitte B. Rønn
The analysis of a sample of curves can be done by self-modelling regression methods. Within this framework we follow the ideas of nonparametric maximum likelihood estimation known from event history analysis and the counting process set-up. We derive an infinite dimensional score equation and from there we suggest an algorithm to estimate the shape function for a simple shape invariant model. The nonparametric maximum likelihood estimator that we find turns out to be a Nadaraya–Watson-like estimator, but unlike in the usual kernel smoothing situation we do not need to select a bandwidth or even a kernel function, since the score equation automatically selects the shape and the smoothing parameter for the estimation. We apply the method to a sample of electrophoretic spectra to illustrate how it works. [source]
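For contrast, the classical Nadaraya–Watson smoother — which does require choosing a kernel and a bandwidth — can be sketched as follows; the Gaussian kernel and the bandwidth h are illustrative choices:

```python
import numpy as np

def nadaraya_watson(x0, x, y, h=0.5):
    # Kernel-weighted local average: m(x0) = sum_i w_i(x0) y_i,
    # with Gaussian weights w_i proportional to K((x0 - x_i) / h).
    w = np.exp(-0.5 * ((x0 - x) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sin(x) + 0.1 * rng.standard_normal(200)
m_hat = nadaraya_watson(np.pi / 2.0, x, y, h=0.3)
```

The estimator above recovers the underlying sine curve near its peak; the point of the paper is that in the shape-invariant setting the score equation removes the need to pick K and h by hand.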


Using Difference-Based Methods for Inference in Regression with Fractionally Integrated Processes

JOURNAL OF TIME SERIES ANALYSIS, Issue 6 2007
Wen-Jen Tsay
Abstract. This paper suggests a difference-based method for inference in the regression model involving fractionally integrated processes. Under suitable regularity conditions, our method can effectively deal with the inference problems associated with the regression model consisting of nonstationary, stationary and intermediate memory regressors, simultaneously. Although the difference-based method provides a very flexible modelling framework for empirical studies, the implementation of this method is extremely easy, because it completely avoids the difficult problems of choosing a kernel function, a bandwidth parameter, or an autoregressive lag length for the long-run variance estimation. The asymptotic local power of our method is investigated with a sequence of local data-generating processes (DGP) in what Davidson and MacKinnon [Canadian Journal of Economics (1985) Vol. 18, pp. 38–57] call the 'regression direction'. The simulation results indicate that the size control of our method is excellent even when the sample size is only 100, and the pattern of power performance is highly consistent with the theoretical finding from the asymptotic local power analysis conducted in this paper. [source]


A Generalized Portmanteau Test For Independence Of Two Infinite-Order Vector Autoregressive Series

JOURNAL OF TIME SERIES ANALYSIS, Issue 4 2006
Chafik Bouhaddioui
Primary 62M10; secondary 62M15. Abstract. In many situations, we want to verify the existence of a relationship between multivariate time series. Here, we propose a semiparametric approach for testing the independence between two infinite-order vector autoregressive (VAR(∞)) series, which is an extension of Hong's [Biometrika (1996c) Vol. 83, pp. 615–625] univariate results. We first filter each series by a finite-order autoregression and the test statistic is a standardized version of a weighted sum of quadratic forms in the residual cross-correlation matrices at all possible lags. The weights depend on a kernel function and on a truncation parameter. Using a result of Lewis and Reinsel [Journal of Multivariate Analysis (1985) Vol. 16, pp. 393–411], the asymptotic distribution of the test statistic is derived under the null hypothesis and its consistency is also established for a fixed alternative of serial cross-correlation of unknown form. Apart from standardization factors, the multivariate portmanteau statistic proposed by Bouhaddioui and Roy [Statistics and Probability Letters (2006) Vol. 76, pp. 58–68] that takes into account a fixed number of lags can be viewed as a special case by using the truncated uniform kernel. However, many kernels lead to a greater power, as shown in an asymptotic power analysis and by a small simulation study in finite samples. A numerical example with real data is also presented. [source]
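A scalar sketch of the building block — a kernel-weighted sum of squared residual cross-correlations over all lags, here with the Bartlett kernel and without the standardization factors that yield the asymptotic normal null distribution (the paper works with cross-correlation matrices of VAR residuals, not scalar series):

```python
import numpy as np

def bartlett(z):
    # Bartlett kernel: 1 - |z| on [-1, 1], zero outside.
    return np.maximum(0.0, 1.0 - np.abs(z))

def weighted_cross_corr_stat(e1, e2, p_n):
    # Unstandardized weighted sum of squared cross-correlations between
    # two residual series at lags -(n-1)..(n-1); p_n is the truncation
    # parameter that the kernel weights depend on.
    n = len(e1)
    e1 = (e1 - e1.mean()) / e1.std()
    e2 = (e2 - e2.mean()) / e2.std()
    stat = 0.0
    for j in range(-(n - 1), n):
        k = bartlett(j / p_n)
        if k == 0.0:
            continue
        if j >= 0:
            r = np.dot(e1[j:], e2[:n - j]) / n
        else:
            r = np.dot(e1[:n + j], e2[-j:]) / n
        stat += n * k ** 2 * r ** 2
    return stat

rng = np.random.default_rng(1)
s_indep = weighted_cross_corr_stat(rng.standard_normal(500),
                                   rng.standard_normal(500), p_n=10)
```

Under independence each cross-correlation is of order n^(-1/2), so the statistic stays small; dependence at any kernel-weighted lag inflates it.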


On a class of PDEs with nonlinear distributed in space and time state-dependent delay terms

MATHEMATICAL METHODS IN THE APPLIED SCIENCES, Issue 13 2008
Alexander V. Rezounenko
Abstract A new class of nonlinear partial differential equations with distributed in space and time state-dependent delay is investigated. We find appropriate assumptions on the kernel function which represents the state-dependent delay and discuss advantages of this class. Local and long-time asymptotic properties, including the existence of global attractor and a principle of linearized stability, are studied. Copyright © 2008 John Wiley & Sons, Ltd. [source]


On a quadrature algorithm for the piecewise linear wavelet collocation applied to boundary integral equations

MATHEMATICAL METHODS IN THE APPLIED SCIENCES, Issue 11 2003
Andreas Rathsfeld
Abstract In this paper, we consider a piecewise linear collocation method for the solution of a pseudo-differential equation of order r = 0, −1 over a closed and smooth boundary manifold. The trial space is the space of all continuous and piecewise linear functions defined over a uniform triangular grid and the collocation points are the grid points. For the wavelet basis in the trial space we choose the three-point hierarchical basis together with a slight modification near the boundary points of the global patches of parametrization. We choose linear combinations of Dirac delta functionals as wavelet basis in the space of test functionals. For the corresponding wavelet algorithm, we show that the parametrization can be approximated by low-order piecewise polynomial interpolation and that the integrals in the stiffness matrix can be computed by quadrature, where the quadrature rules are composite rules of simple low-order quadratures. The whole algorithm for the assembling of the matrix requires no more than O(N(log N)³) arithmetic operations, and the error of the collocation approximation, including the compression, the approximative parametrization, and the quadratures, is less than O(N^(−(2−r)/2)). Note that, in contrast to well-known algorithms by Petersdorff, Schwab, and Schneider, only a finite degree of smoothness is required. In contrast to an algorithm of Ehrich and Rathsfeld, no multiplicative splitting of the kernel function is required. Beside the usual mapping properties of the integral operator in low order Sobolev spaces, estimates of Calderón–Zygmund type are the only assumptions on the kernel function. Copyright © 2003 John Wiley & Sons, Ltd. [source]


Estimation of the seed dispersal kernel from exact identification of source plants

MOLECULAR ECOLOGY, Issue 23 2007
JUAN J. ROBLEDO-ARNUNCIO
Abstract The exact identification of individual seed sources through genetic analysis of seed tissue of maternal origin has recently brought the full analytical potential of parentage analysis to the study of seed dispersal. No specific statistical methodology has been described so far, however, for estimation of the dispersal kernel function from categorical maternity assignment. In this study, we introduce a maximum-likelihood procedure to estimate the seed dispersal kernel from exact identification of seed sources. Using numerical simulations, we show that the proposed method, unlike other approaches, is independent of seed fecundity variation, yielding accurate estimates of the shape and range of the seed dispersal kernel under varied sampling and dispersal conditions. We also demonstrate how an obvious estimator of the dispersal kernel, the maximum-likelihood fit of the observed distribution of dispersal distances to seed traps, can be strongly biased due to the spatial arrangement of seed traps relative to source plants. Finally, we illustrate the use of the proposed method with a previously published empirical example for the animal-dispersed tree species Prunus mahaleb. [source]
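For the simplest special case — an exponential dispersal kernel and exactly identified source plants, ignoring the spatial-arrangement correction the authors emphasize — the maximum-likelihood fit reduces to a closed form, because dispersal distances from a 2D exponential kernel follow a gamma density with shape 2:

```python
import numpy as np

def fit_exponential_kernel(distances):
    # 2D exponential dispersal kernel k(r) proportional to exp(-r / a):
    # the distance density is f(r) = r exp(-r / a) / a^2 (gamma, shape 2),
    # whose maximum-likelihood scale estimate is mean(r) / 2.
    r = np.asarray(distances, dtype=float)
    return r.mean() / 2.0

rng = np.random.default_rng(2)
a_true = 30.0                       # hypothetical scale in metres
r = rng.gamma(shape=2.0, scale=a_true, size=5000)
a_hat = fit_exponential_kernel(r)
```

For heavier-tailed kernel families, or when the observed distances are biased by where traps or sampled sources sit — the paper's central point — the likelihood must be maximized numerically over the actual spatial configuration.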


Blended kernel approximation in the ℋ-matrix techniques

NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, Issue 4 2002
W. Hackbusch
Abstract Several types of ℋ-matrices were shown to provide a data-sparse approximation of non-local (integral) operators in FEM and BEM applications. The general construction is applied to the operators with asymptotically smooth kernel function provided that the Galerkin ansatz space has a hierarchical structure. The new class of ℋ-matrices is based on the so-called blended FE and polynomial approximations of the kernel function and leads to matrix blocks with a tensor-product of block-Toeplitz (block-circulant) and rank-k matrices. This requires the translation (rotation) invariance of the kernel combined with the corresponding tensor-product grids. The approach allows for the fast evaluation of volume/boundary integral operators with possibly non-smooth kernels defined on canonical domains/manifolds in the FEM/BEM applications. (Here and in the following, we call domains canonical if they are obtained by translation or rotation of one of their parts, e.g. parallelepiped, cylinder, sphere, etc.) In particular, we provide the error and complexity analysis for blended expansions to the Helmholtz kernel. Copyright © 2002 John Wiley & Sons, Ltd. [source]


Financial decision support using neural networks and support vector machines

EXPERT SYSTEMS, Issue 4 2008
Chih-Fong Tsai
Abstract: Bankruptcy prediction and credit scoring are the two important problems facing financial decision support. The multilayer perceptron (MLP) network has shown its applicability to these problems and its performance is usually superior to those of other traditional statistical models. Support vector machines (SVMs) are core machine learning techniques that have been compared with the MLP as the benchmark. However, the performance of SVMs is not fully understood in the literature because an insufficient number of data sets is considered and different kernel functions are used to train the SVMs. In this paper, four public data sets are used. In particular, three different sizes of training and testing data in each of the four data sets are considered (i.e. 3:7, 1:1 and 7:3) in order to examine and fully understand the performance of SVMs. For SVM model construction, the linear, radial basis function and polynomial kernel functions are used to construct the SVMs. Using MLP as the benchmark, the SVM classifier only performs better in one of the four data sets. On the other hand, the prediction results of the MLP and SVM classifiers are not significantly different for the three different sizes of training and testing data. [source]


Association tests using kernel-based measures of multi-locus genotype similarity between individuals

GENETIC EPIDEMIOLOGY, Issue 3 2010
Indranil Mukhopadhyay
Abstract In a genetic association study, it is often desirable to perform an overall test of whether any or all single-nucleotide polymorphisms (SNPs) in a gene are associated with a phenotype. Several such tests exist, but most of them are powerful only under very specific assumptions about the genetic effects of the individual SNPs. In addition, some of the existing tests assume that the direction of the effect of each SNP is known, which is a highly unlikely scenario. Here, we propose a new kernel-based association test of joint association of several SNPs. Our test is non-parametric and robust, and does not make any assumption about the directions of individual SNP effects. It can be used to test multiple correlated SNPs within a gene and can also be used to test independent SNPs or genes in a biological pathway. Our test uses an analysis of variance paradigm to compare variation between cases and controls to the variation within the groups. The variation is measured using kernel functions for each marker, and then a composite statistic is constructed to combine the markers into a single test. We present simulation results comparing our statistic to the U-statistic-based method by Schaid et al. ([2005] Am. J. Hum. Genet. 76:780–793) and another statistic by Wessel and Schork ([2006] Am. J. Hum. Genet. 79:792–806). We consider a variety of different disease models and assumptions about how many SNPs within the gene are actually associated with disease. Our results indicate that our statistic has higher power than other statistics under most realistic conditions. Genet. Epidemiol. 34:213–221, 2010. © 2009 Wiley-Liss, Inc. [source]
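The between- versus within-group contrast of kernel similarities can be sketched as follows; the IBS-style similarity for 0/1/2 genotype codes and the simple mean contrast are illustrative choices rather than the authors' exact composite statistic, and the permutation step that would give the null distribution is only indicated in a comment:

```python
import numpy as np

def ibs_kernel(g1, g2):
    # Identity-by-state similarity for genotype vectors coded 0/1/2:
    # average allele sharing across markers, scaled to [0, 1].
    return 1.0 - np.abs(np.asarray(g1) - np.asarray(g2)).mean() / 2.0

def between_within_stat(cases, controls):
    # ANOVA-flavoured contrast: mean within-group similarity minus
    # mean between-group similarity; a p-value would be obtained by
    # recomputing this after permuting case/control labels.
    within = [ibs_kernel(a, b) for grp in (cases, controls)
              for i, a in enumerate(grp) for b in grp[i + 1:]]
    between = [ibs_kernel(a, b) for a in cases for b in controls]
    return np.mean(within) - np.mean(between)

cases = [[2, 2, 1], [2, 1, 2], [2, 2, 2]]
controls = [[0, 0, 1], [0, 1, 0], [0, 0, 0]]
t = between_within_stat(cases, controls)
```

A positive value of the contrast indicates that individuals are genetically more similar within their phenotype group than across groups, which is the signature of association the test looks for.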


A random field model for generating synthetic microstructures of functionally graded materials

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 7 2008
Sharif Rahman
Abstract This article presents a new level-cut, inhomogeneous, filtered Poisson random field model for representing two-phase microstructures of statistically inhomogeneous, functionally graded materials with fully penetrable embedded particles. The model involves an inhomogeneous, filtered Poisson random field comprising a sum of deterministic kernel functions that are scaled by random variables and a cut of the filtered Poisson field above a specified level. The resulting level-cut field depends on the Poisson intensity, level, kernel functions, random scaling variables, and random rotation matrices. A reconstruction algorithm including model calibration and Monte Carlo simulation is presented for generating samples of two-phase microstructures of statistically inhomogeneous media. Numerical examples demonstrate that the model developed is capable of producing a wide variety of two- and three-dimensional microstructures of functionally graded composites containing particles of various sizes, shapes, densities, gradations, and orientations. An example involving finite element analyses of random microstructures, leading to statistics of effective properties of functionally graded composites, illustrates the usefulness of the proposed model. Copyright © 2008 John Wiley & Sons, Ltd. [source]


A Hermite reproducing kernel approximation for thin-plate analysis with sub-domain stabilized conforming integration

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 3 2008
Dongdong Wang
Abstract A Hermite reproducing kernel (RK) approximation and a sub-domain stabilized conforming integration (SSCI) are proposed for solving thin-plate problems in which second-order differentiation is involved in the weak form. Although the standard RK approximation can be constructed with an arbitrary order of continuity, the proposed approximation based on both deflection and rotation variables is shown to be more effective in solving plate problems. By imposing the Kirchhoff mode reproducing conditions on deflectional and rotational degrees of freedom simultaneously, it is demonstrated that the minimum normalized support size (coverage) of kernel functions can be significantly reduced. With this proposed approximation, the Galerkin meshfree framework for thin plates is then formulated and the integration constraint for bending exactness is also derived. Subsequently, an SSCI method is developed to achieve the exact pure bending solution as well as to maintain spatial stability. Numerical examples demonstrate that the proposed formulation offers superior convergence rates, accuracy and efficiency, compared with those based on higher-order Gauss quadrature rule. Copyright © 2007 John Wiley & Sons, Ltd. [source]


Multilevel fast multipole algorithm enhanced by GPU parallel technique for electromagnetic scattering problems

MICROWAVE AND OPTICAL TECHNOLOGY LETTERS, Issue 3 2010
Kan Xu
Abstract Along with the development of graphics processing units (GPUs) in floating point operations and programmability, the GPU has increasingly become an attractive alternative to the central processing unit (CPU) for some compute-intensive and parallel tasks. In this article, the multilevel fast multipole algorithm (MLFMA) combined with a graphics hardware acceleration technique is applied to analyze electromagnetic scattering from complex targets. Although it is possible to perform scattering simulation of electrically large targets on a personal computer (PC) through the MLFMA, a large CPU time is required for the execution of aggregation, translation, and disaggregation operations. Thus the GPU computing technique is used for the parallel processing of the MLFMA and a significant speedup of the matrix vector product (MVP) can be observed. Following the programming model of the compute unified device architecture (CUDA), several kernel functions characterized by the single instruction multiple data (SIMD) mode are abstracted from components of the MLFMA and executed by multiple processors of the GPU. Numerical results demonstrate the efficiency of the GPU accelerating technique for the MLFMA. © 2010 Wiley Periodicals, Inc. Microwave Opt Technol Lett 52:502–507, 2010; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/mop.24963 [source]


Appropriate SCF basis sets for orbital studies of galaxies and a 'quantum-mechanical' method to compute them

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 3 2008
Constantinos Kalapotharakos
ABSTRACT We address the question of an appropriate choice of basis functions for the self-consistent field (SCF) method of simulation of the N-body problem. Our criterion is based on a comparison of the orbits found in N-body realizations of analytical potential–density models of triaxial galaxies, in which the potential is fitted by the SCF method using a variety of basis sets, with those of the original models. Our tests refer to maximally triaxial Dehnen γ-models for values of γ in the range 0 ≤ γ ≤ 1, i.e. from the harmonic core up to the weak cusp limit. When an N-body realization of a model is fitted by the SCF method, the choice of radial basis functions affects significantly the way the potential, forces or derivatives of the forces are reproduced, especially in the central regions of the system. We find that this results in serious discrepancies in the relative amounts of chaotic versus regular orbits, or in the distributions of the Lyapunov characteristic exponents, as found by different basis sets. Numerical tests include the Clutton-Brock and the Hernquist–Ostriker basis sets, as well as a family of numerical basis sets which are 'close' to the Hernquist–Ostriker basis set (according to a given definition of distance in the space of basis functions). The family of numerical basis sets is parametrized in terms of a quantity ε which appears in the kernel functions of the Sturm–Liouville equation defining each basis set. The Hernquist–Ostriker basis set is the ε = 0 member of the family. We demonstrate that grid solutions of the Sturm–Liouville equation yielding numerical basis sets introduce large errors in the variational equations of motion. We propose a quantum-mechanical method of solution of the Sturm–Liouville equation which overcomes these errors. We finally give criteria for a choice of optimal value of ε and calculate the latter as a function of γ, i.e. of the power-law exponent of the radial density profile at the central regions of the galaxy. [source]


An algebraic multigrid method for finite element discretizations with edge elements

NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, Issue 3 2002
S. Reitzinger
Abstract This paper presents an algebraic multigrid method for the efficient solution of the linear system arising from a finite element discretization of variational problems in H0(curl; Ω). The finite element spaces are generated by Nédélec's edge elements. A coarsening technique is presented, which allows the construction of suitable coarse finite element spaces, corresponding transfer operators and appropriate smoothers. The prolongation operator is designed such that coarse grid kernel functions of the curl-operator are mapped to fine grid kernel functions. Furthermore, coarse grid kernel functions are 'discrete' gradients. The smoothers proposed by Hiptmair and by Arnold, Falk and Winther are directly used in the algebraic framework. Numerical studies are presented for 3D problems to show the high efficiency of the proposed technique. Copyright © 2002 John Wiley & Sons, Ltd. [source]


The likelihood ratio test for homogeneity in finite mixture models

THE CANADIAN JOURNAL OF STATISTICS, Issue 2 2001
Hanfeng Chen
Abstract The authors study the asymptotic behaviour of the likelihood ratio statistic for testing homogeneity in the finite mixture models of a general parametric distribution family. They prove that the limiting distribution of this statistic is the squared supremum of a truncated standard Gaussian process. The autocorrelation function of the Gaussian process is explicitly presented. A re-sampling procedure is recommended to obtain the asymptotic p-value. Three kernel functions, normal, binomial and Poisson, are used in a simulation study which illustrates the procedure. [source]


Improving robust model selection tests for dynamic models

THE ECONOMETRICS JOURNAL, Issue 2 2010
Hwan-sik Choi
Summary, We propose an improved model selection test for dynamic models using a new asymptotic approximation to the sampling distribution of a new test statistic. The model selection test is applicable to dynamic models with very general selection criteria and estimation methods. Since our test statistic does not assume the exact form of a true model, the test is essentially non-parametric once competing models are estimated. For the unknown serial correlation in data, we use a Heteroscedasticity/Autocorrelation-Consistent (HAC) variance estimator, and the sampling distribution of the test statistic is approximated by the fixed- b,asymptotic approximation. The asymptotic approximation depends on kernel functions and bandwidth parameters used in HAC estimators. We compare the finite sample performance of the new test with the bootstrap methods as well as with the standard normal approximations, and show that the fixed- b,asymptotics and the bootstrap methods are markedly superior to the standard normal approximation for a moderate sample size for time series data. An empirical application for foreign exchange rate forecasting models is presented, and the result shows the normal approximation to the distribution of the test statistic considered appears to overstate the data's ability to distinguish between two competing models. [source]