Support Vectors (support + vector)
Terms modified by Support Vectors: Selected Abstracts

Hybrid kernel learning via genetic optimization for TS fuzzy system identification
INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 1 2010
Wei Li

Abstract: This paper presents a new TS fuzzy system identification approach based on hybrid kernel learning and an improved genetic algorithm (GA). Structure identification is performed with support vector regression (SVR), in which a hybrid kernel function is adopted to improve regression performance. For the multiple-parameter selection of SVR, the proposed GA is used to speed up the search process and to guarantee the smallest number of support vectors. As a result, a concise model structure can be determined from the support vectors obtained. The premise parameters of the fuzzy rules can then be extracted from the SVR results, and the consequent parameters can be optimized by the least-squares method. Simulation results show that the resulting fuzzy model not only achieves satisfactory accuracy but also exhibits good generalization capability. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Support vector design of the microstrip lines
INTERNATIONAL JOURNAL OF RF AND MICROWAVE COMPUTER-AIDED ENGINEERING, Issue 4 2008
Filiz Güne

Abstract: In this article, support vector regression is adapted to the analysis and synthesis of microstrip lines on isotropic/anisotropic dielectric materials. The technique is novel, rests on rigorous mathematical foundations, and is the strongest competitor to the popular artificial neural network (ANN). In this design process, accuracy, computational efficiency, and the number of support vectors are investigated in detail, and the support vector regression performance is compared with that of an ANN.
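The hybrid-kernel SVR described in the first abstract above can be sketched in a few lines. This is a minimal illustration only: it assumes the hybrid kernel is a weighted mix of an RBF and a polynomial kernel (the abstract does not specify the composition), and the kernel parameters stand in for the values a GA would tune.

```python
import numpy as np
from sklearn.svm import SVR

def hybrid_kernel(X, Y, gamma=0.5, degree=2, lam=0.7):
    # weighted mix of a local (RBF) and a global (polynomial) kernel;
    # gamma, degree and the mixing weight lam are the parameters a GA
    # would tune in the paper's scheme (values here are placeholders)
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return lam * np.exp(-gamma * sq) + (1 - lam) * (X @ Y.T + 1.0) ** degree

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (80, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(80)

model = SVR(kernel=hybrid_kernel, C=10.0, epsilon=0.05).fit(X, y)
# the support vectors found here would determine the fuzzy model structure
print(len(model.support_), "support vectors out of", len(X), "samples")
```

A sparser set of support vectors here translates directly into fewer fuzzy rules, which is why the GA is asked to minimize their number.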
It can be concluded that the ANN may be replaced by support vector machines in regression applications, owing to their higher approximation capability and the much faster convergence obtained with the sparse-solution technique. Synthesis is achieved by using the analysis black box bidirectionally through reverse training, and an adaptive step size gives a much faster convergence rate in the reverse training. Designs of microstrip lines on the most commonly used isotropic/anisotropic dielectric materials are given as worked examples. © 2008 Wiley Periodicals, Inc. Int J RF and Microwave CAE, 2008. [source]

Active learning support vector machines for optimal sample selection in classification
JOURNAL OF CHEMOMETRICS, Issue 6 2004
Simeone Zomer

Abstract: Labelling samples is a procedure that may cause significant delays, particularly when dealing with larger datasets and/or when labelling implies prolonged analysis. In such cases, a strategy that builds a reliable classifier from a minimal training set by labelling only a small fraction of the samples can be advantageous. Support vector machines (SVMs) are ideal for such an approach because the classifier relies on only a small subset of samples, namely the support vectors, while being independent of the remaining ones, which typically form the majority of the dataset. This paper describes a procedure in which an SVM classifier is constructed with support vectors systematically retrieved from the pool of unlabelled samples. The procedure is termed 'active' because the algorithm interacts with the samples prior to their labelling rather than waiting passively for the input. The learning behaviour on simulated datasets is analysed, and a practical application to the detection of hydrocarbons in soils using mass spectrometry is described.
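The reverse-training synthesis idea in the microstrip abstract above can be illustrated with a small sketch. Here the standard closed-form Hammerstad analysis formula (valid for w/h <= 1) stands in for the trained black-box analysis model, and halving the step on each sign change of the error is an assumed form of the adaptive step size, not the article's exact algorithm.

```python
import math

def z0_microstrip(w_h, er):
    # closed-form Hammerstad analysis model for w/h <= 1,
    # used here as a stand-in for the trained analysis black box
    eps_eff = (er + 1) / 2 + (er - 1) / 2 * (
        1 / math.sqrt(1 + 12 / w_h) + 0.04 * (1 - w_h) ** 2)
    return 60 / math.sqrt(eps_eff) * math.log(8 / w_h + w_h / 4)

def synthesize(z_target, er, lo=0.05, hi=1.0):
    # run the analysis model "in reverse": Z0 falls monotonically as
    # w/h grows, so step toward the target and halve the step size
    # whenever the error changes sign (the adaptive-step idea)
    w_h, step, prev = 0.5, 0.2, 0.0
    for _ in range(200):
        err = z0_microstrip(w_h, er) - z_target
        if abs(err) < 1e-6:
            break
        d = 1.0 if err > 0 else -1.0   # Z0 too high -> widen the line
        if prev and d != prev:
            step *= 0.5
        prev = d
        w_h = min(max(w_h + d * step, lo), hi)
    return w_h

w = synthesize(75.0, er=4.4)   # 75-ohm line on an er = 4.4 substrate
print(round(w, 4), round(z0_microstrip(w, 4.4), 4))
```

In the article the forward model is the trained SVR rather than a closed-form expression, but the bidirectional use of the same analysis block is the same.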
Results on simulations show that the active learning SVM performs best on datasets where the classes display an intermediate level of separation. On the real case study, the classifier correctly assesses the membership of all samples in the original dataset while requiring labelling of only around 14% of the data. Its subsequent application to a second dataset of analogous nature also provides perfect classification without further labelling, giving the same outcome as most classical techniques based on the fully labelled original dataset. Copyright © 2004 John Wiley & Sons, Ltd. [source]
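The query loop behind the active-learning abstract above might look like the following sketch. The margin-based query rule, the synthetic two-class data, and all parameter values are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

labelled = [0, 1, 100, 101]          # tiny seed set covering both classes
pool = [i for i in range(200) if i not in labelled]

for _ in range(10):                  # query budget
    clf = SVC(kernel="linear", C=1.0).fit(X[labelled], y[labelled])
    # query the pool sample closest to the current decision boundary,
    # i.e. the one most likely to become a support vector once labelled
    margins = np.abs(clf.decision_function(X[pool]))
    labelled.append(pool.pop(int(np.argmin(margins))))

print("accuracy on full set:", clf.score(X, y))
```

Because only the points near the margin can become support vectors, labelling effort concentrates where it changes the classifier, which is what lets the procedure stop after a small fraction of the pool.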