Real Applications
Selected Abstracts

User profiling on the Web based on deep knowledge and sequential questioning
EXPERT SYSTEMS, Issue 1 2006. Silvano Mussi
Abstract: User profiling on the Web is a topic that has attracted a great number of technological approaches and applications. In most user-profiling approaches the website learns profiles from data acquired implicitly from user behaviour, i.e. by observing the behaviour of users with a statistically significant number of accesses. This paper presents an alternative approach in which the website explicitly acquires data from users, user interests are represented in a Bayesian network, and user profiles are enriched and refined over time. The profile enrichment is achieved through a sequential asking algorithm based on value-of-information theory, using the Shannon entropy concept. What most characterizes the approach, however, is that the user is involved in a collaborative process of profile building. The approach has been tried out for over a year in a real application. On the basis of the experimental results, the approach turns out to be particularly suitable for applications where the website is strongly based on deep domain knowledge (as is the case, for example, for scientific websites) and has a community of users who share the domain knowledge of the website and produce a 'low' number of accesses ('low' compared to the high number of accesses of a typical commercial website). After presenting the technical aspects of the approach, we discuss the underlying ideas in the light of the experimental results and the literature on human–computer interaction and user profiling. [source]

Improved GMM with parameter initialization for unsupervised adaptation of Brain–Computer interface
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 6 2010. Guangquan Liu
Abstract: An important property of brain signals is their nonstationarity.
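The sequential asking algorithm in the user-profiling abstract above picks the next question by value of information, i.e. the expected reduction in Shannon entropy of the user-interest variable. A minimal sketch with a single binary interest variable (the questions and likelihood values are hypothetical illustrations; the paper uses a full Bayesian network):

```python
from math import log2

def entropy(p):
    """Shannon entropy of a Bernoulli distribution with parameter p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def expected_posterior_entropy(prior, likelihoods):
    """Expected entropy of the interest variable after asking one question.

    likelihoods: (P(answer=yes | interested), P(answer=yes | not interested)).
    """
    p_yes_i, p_yes_not = likelihoods
    p_yes = prior * p_yes_i + (1 - prior) * p_yes_not
    p_no = 1.0 - p_yes
    post_yes = prior * p_yes_i / p_yes if p_yes > 0 else 0.0
    post_no = prior * (1 - p_yes_i) / p_no if p_no > 0 else 0.0
    return p_yes * entropy(post_yes) + p_no * entropy(post_no)

def next_question(prior, questions):
    """Pick the question with the largest expected entropy reduction."""
    return max(questions,
               key=lambda q: entropy(prior) - expected_posterior_entropy(prior, questions[q]))

questions = {
    "reads_journal": (0.9, 0.2),   # answer strongly depends on interest
    "owns_computer": (0.95, 0.9),  # answer is almost uninformative
}
print(next_question(0.5, questions))  # selects the informative question
```

With a uniform prior, the informative question yields an expected entropy drop of about 0.4 bit, while the near-useless one yields almost none, so the former is asked first.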
How to adapt a brain–computer interface (BCI) to changing brain states is one of the challenges faced by BCI researchers, especially in real applications where the subject's true intent is unknown to the system. The Gaussian mixture model (GMM) has been used for unsupervised adaptation of the classifier in BCI. In this paper, a method of initializing the model parameters is proposed for expectation maximization-based GMM parameter estimation. This improved GMM method and two other existing unsupervised adaptation methods are applied to groups of constructed artificial data with different data properties, and the performance of these methods in different situations is analysed. Compared with the two other unsupervised adaptation methods, this method shows a better ability to adapt to changes and to discover class information from unlabelled data. The methods are also applied to real EEG data recorded in 19 experiments. For the real data, the proposed method achieves an error rate significantly lower than the other two unsupervised methods. The results on the real data agree with the analysis based on the artificial data, which confirms not only the effectiveness of our method but also the validity of the constructed data. Copyright © 2009 John Wiley & Sons, Ltd. [source]

Fault diagnosis of a simulated industrial gas turbine via identification approach
INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 4 2007. S. Simani
Abstract: In this paper, a model-based procedure exploiting the analytical redundancy principle for the detection and isolation of faults in a simulated process is presented. The main point of the work is the use of an identification scheme in connection with dynamic observer and Kalman filter designs for diagnostic purposes. The errors-in-variables identification technique and the output estimation approach for residual generation are particularly advantageous in terms of solution complexity and performance.
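The improved-GMM abstract above rests on initializing expectation maximization from informative parameter values rather than random ones before adapting on unlabelled data. A minimal 1D sketch of that idea (the synthetic data and all starting values are hypothetical, standing in for labelled calibration trials; this is not the paper's algorithm):

```python
import numpy as np

def em_gmm_1d(x, mu, sigma, pi, n_iter=50):
    """EM for a two-component 1D Gaussian mixture, started from given parameters."""
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample
        dens = np.stack([pi[k] * np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2) / sigma[k]
                         for k in range(2)])
        resp = dens / dens.sum(axis=0)
        # M-step: re-estimate means, spreads, and mixing weights
        nk = resp.sum(axis=1)
        mu = (resp * x).sum(axis=1) / nk
        sigma = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk)
        pi = nk / len(x)
    return mu, sigma, pi

rng = np.random.default_rng(0)
# unlabelled "new session" data whose class centres have drifted
x = np.concatenate([rng.normal(-1.2, 0.5, 200), rng.normal(1.4, 0.5, 200)])
# initialize from (hypothetical) labelled calibration data instead of randomly
mu0 = np.array([-1.0, 1.0])
sigma0 = np.array([0.6, 0.6])
pi0 = np.array([0.5, 0.5])
mu, sigma, pi = em_gmm_1d(x, mu0, sigma0, pi0)
print(np.round(mu, 1))  # means track the drifted class centres
```

Because each component starts near its own class, the updated means follow the drift without label information, which is the behaviour the abstract attributes to the informed initialization.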
The proposed tools are analysed and tested on a single-shaft industrial gas turbine MATLAB/SIMULINK® simulator in the presence of disturbances, i.e. measurement errors and modelling mismatch. Selected performance criteria are used together with Monte Carlo simulations for robustness and performance evaluation. The suggested technique constitutes a design methodology offering a reliable approach for real applications of industrial process FDI. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Fuzzy quantification in two real scenarios: Information retrieval and mobile robotics
INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 6 2009. Félix Díaz-Hermida
Abstract: Fuzzy quantification supplies powerful tools for handling linguistic expressions. Nevertheless, its advantages are usually shown at the theoretical level without proper empirical validation. In this work, we review the application of fuzzy quantification in two application domains, providing empirical evidence on its adequacy to support different tasks in the context of mobile robotics and information retrieval. This practical perspective aims at exemplifying the actual benefits that real applications can get from fuzzy quantifiers. © 2009 Wiley Periodicals, Inc. [source]

Time adaptive denoising of single trial event-related potentials in the wavelet domain
PSYCHOPHYSIOLOGY, Issue 6 2000. Arndt Effern
Abstract: We present a new wavelet-based method for single-trial analysis of transient and time-variant event-related potentials (ERPs). Expecting more accurate filter settings than those achieved by other techniques (low-pass filter, a posteriori Wiener filter, time-invariant wavelet filter), ERPs were initially balanced in time. By simulation, better filter performance could be established for test signals contaminated with either white noise or isospectral noise. To provide an example of a real application, the method was applied to limbic P300 potentials (MTL-P300).
As a result, the variance of single-trial MTL-P300s decreased without restricting the corresponding mean. The proposed method can be regarded as an alternative for single-trial ERP analysis. [source]

A static mapping heuristics to map parallel applications to heterogeneous computing systems
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 13 2005. Ranieri Baraglia
Abstract: In order to minimize the execution time of a parallel application running on a heterogeneous distributed computing system, an appropriate mapping scheme is needed to allocate the application tasks to the processors. The general problem of mapping tasks to machines is a well-known NP-hard problem, and several heuristics have been proposed to approximate its optimal solution. In this paper we propose a static graph-based mapping algorithm, called Heterogeneous Multi-phase Mapping (HMM), which permits suboptimal mapping of a parallel application onto a heterogeneous distributed computing system by using a local search technique together with a tabu search meta-heuristic. HMM allocates parallel tasks by exploiting the information embedded in the parallelism forms used to implement an application, and by considering an affinity parameter that identifies which machine in the heterogeneous computing system is most suitable to execute a task. We compare HMM with some leading techniques and with an exhaustive mapping algorithm, and we also give an example of the mapping of two real applications using HMM. Experimental results show that HMM performs well, demonstrating the applicability of our approach. Copyright © 2005 John Wiley & Sons, Ltd. [source]

Evaluating high-performance computers
CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 10 2005. Jeffrey S. Vetter
Abstract: Comparisons of high-performance computers based on their peak floating-point performance are common but seldom useful when comparing performance on real workloads.
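The HMM abstract above combines a constructive heuristic with local search guided by a machine-affinity parameter. A toy sketch of that flavour of mapping heuristic, where per-machine `speeds` stand in for the affinity parameter (an illustration of greedy mapping plus local search, not HMM itself):

```python
import itertools

def greedy_map(task_costs, speeds):
    """Assign each task to the machine that would finish it earliest."""
    loads = [0.0] * len(speeds)
    assign = []
    for cost in task_costs:
        m = min(range(len(speeds)), key=lambda i: loads[i] + cost / speeds[i])
        loads[m] += cost / speeds[m]
        assign.append(m)
    return assign, max(loads)

def local_search(task_costs, speeds, assign):
    """Repeatedly move one task to another machine while the makespan drops."""
    def makespan(a):
        loads = [0.0] * len(speeds)
        for t, m in enumerate(a):
            loads[m] += task_costs[t] / speeds[m]
        return max(loads)
    best = list(assign)
    improved = True
    while improved:
        improved = False
        for t, m in itertools.product(range(len(task_costs)), range(len(speeds))):
            cand = list(best)
            cand[t] = m
            if makespan(cand) < makespan(best) - 1e-12:
                best, improved = cand, True
    return best, makespan(best)

costs = [4.0, 3.0, 2.0, 2.0, 1.0]
speeds = [1.0, 2.0]                    # machine 1 is twice as fast
a, _ = greedy_map(costs, speeds)
a, ms = local_search(costs, speeds, a)
print(ms)
```

For this instance the greedy construction already balances both machines at a makespan of 4.0, so the local-search pass confirms rather than improves the mapping; on harder instances the improvement phase is where such heuristics (and the tabu memory HMM adds) earn their keep.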
Factors that influence sustained performance extend beyond a system's floating-point units, and real applications exercise machines in complex and diverse ways. Even when it is possible to compare systems based on their performance, other considerations affect which machine is best for a given organization. These include the cost, the facilities requirements (power, floorspace, etc.), the programming model, the existing code base, and so on. This paper describes some of the important measures for evaluating high-performance computers. We present data for many of these metrics based on our experience at Lawrence Livermore National Laboratory (LLNL), and we compare them with published information on the Earth Simulator. We argue that evaluating systems involves far more than comparing benchmarks and acquisition costs. We show that evaluating systems often involves complex choices among a variety of factors that influence the value of a supercomputer to an organization, and that the high-end computing community should view cost/performance comparisons of different architectures with skepticism. Published in 2005 by John Wiley & Sons, Ltd. [source]

Smart Microcapsules Encapsulating Reconfigurable Carbon Nanotube Cores
ADVANCED FUNCTIONAL MATERIALS, Issue 5 2010. Won San Choi
Abstract: The encapsulation of carbon nanotubes (CNTs) to form a reconfigurable conglomerate within iron oxide microcapsules is demonstrated. The individual CNTs conglomerate and form a core inside the capsule upon exposure to high temperature, while they scatter when subjected to mild sonication at low pH. The assembly/disassembly of CNTs within the capsule was reversible and could be repeated by alternate heating and sonication. Also, the fabrication protocol could be used for the generation of various multifunctional hollow structures. To test the feasibility of using the capsules in real applications, the capacity of the capsules as a heavy metal ion remover was explored.
The resulting capsules showed an excellent ability to remove lead and chromium ions. In addition, desorption of the metal ions adsorbed on the CNTs could be induced by exposure to low pH. Thus, encapsulated CNTs might be a recyclable, environmentally friendly agent for the removal of heavy metal ions. [source]

Multilevel hybrid spectral element ordering algorithms
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 5 2005. Jennifer A. Scott
Abstract: For frontal solvers to perform well on finite-element problems it is essential that the elements are ordered for a small wavefront. Multilevel element ordering algorithms have their origins in the profile reduction algorithm of Sloan but for large problems often give significantly smaller wavefronts. We examine a number of multilevel variants with the aim of finding the best methods to include within a new state-of-the-art frontal solver for finite-element applications that we are currently developing. Numerical experiments are performed using a range of problems arising from real applications, and comparisons are made with existing element ordering algorithms. Copyright © 2005 John Wiley & Sons, Ltd. [source]

The natural volume method (NVM): Presentation and application to shallow water inviscid flows
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 1 2009. R. Ata
Abstract: In this paper a fully Lagrangian formulation is used to simulate 2D shallow water inviscid flows. The natural element method (NEM), which has been used successfully with several solid and fluid mechanics applications, is used to approximate the fluxes over Voronoi cells. This particle-based method has shown huge potential in terms of handling problems involving large deformations. Its main advantage lies in the interpolant character of its shape function and consequently the ease it allows with respect to the imposition of Dirichlet boundary conditions.
In this paper, we use the NEM collocationally, in a Lagrangian kinematic description, to simulate shallow-water flows, which are moving-boundary problems. This formulation is ultimately shown to constitute a finite-volume methodology requiring a flux computation on Voronoi cells rather than on the standard elements of a triangular or quadrilateral mesh. The St Venant equations are used as the mathematical model. These equations have discontinuous solutions that physically represent the existence of shock waves, so stabilization issues have been considered: an artificial viscosity deduced from an analogy with Riemann solvers is introduced to upwind the scheme and thereby stabilize the method. Some inviscid two-dimensional flows were used as preliminary benchmark tests and produced good results, giving well-founded hopes for the future of this method in real applications. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Comparison of the performances of decision aimed algorithms with Bayesian and beliefs basis
INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 8 2001. François Delmotte
Abstract: There are strong disagreements in the literature between the advocates of the Bayesian approach on the one hand and the advocates of belief functions on the other. This paper refers to a few previously published studies claiming that, for decision-aimed problems, algorithms using belief functions were much slower than those using a Bayesian approach, and that beliefs, however intellectually appealing, are therefore useless for real applications. This article shows that most of these studies are simply wrong, because they are based on an erroneous use of the belief functions. © 2001 John Wiley & Sons, Inc.
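For context on the Bayesian-versus-belief-functions debate above: evidence in the belief-function framework is pooled with Dempster's rule of combination. A minimal sketch on a two-hypothesis frame (an illustrative example, not one of the benchmarks the paper criticizes):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for mass functions keyed by frozenset focal elements."""
    combined = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y          # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    # normalize away the conflicting mass
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

A, B = frozenset("a"), frozenset("b")
AB = A | B                             # ignorance: "a or b"
m1 = {A: 0.6, AB: 0.4}                 # evidence mostly supporting 'a'
m2 = {B: 0.3, AB: 0.7}                 # weaker evidence for 'b'
m = dempster_combine(m1, m2)
print({sorted(s)[0] if len(s) == 1 else "a|b": round(v, 3) for s, v in m.items()})
```

After combination the singleton 'a' retains the largest mass, with part of the evidence still committed to ignorance, which is exactly the behaviour a Bayesian posterior cannot express and the reason the comparison in the paper is delicate.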
[source]

SmartMetals: a new method for metal identification based on fuzzy logic
JOURNAL OF CHEMOMETRICS, Issue 11 2009. Viktor Pocajt
Abstract: This paper presents a method of searching, identifying and cross-referencing metal alloys based on their chemical composition and/or mechanical properties, typically obtained by analysis and tests. The method uses a general pattern similar to the approach of a human expert, and relies on a classification of metals based on metallurgical expertise and on fuzzy logic for identifying metals and comparing their chemical and mechanical properties. The algorithm has been tested and deployed in real applications for fast metal identification and the finding of unknown equivalents, by leading companies in the field. The same principles can also be used in other domains for similar problems, such as organic and inorganic materials identification and generic drug comparison. Copyright © 2009 John Wiley & Sons, Ltd. [source]

Shared environment representation for a human-robot team performing information fusion
JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 11-12 2007. Tobias Kaupp
Abstract: This paper addresses the problem of building a shared environment representation by a human-robot team. Rich environment models are required in real applications both for autonomous operation of robots and to support human decision-making. Two probabilistic models are used to describe outdoor environment features such as trees: geometric (position in the world) and visual. The visual representation is used to improve data association and to classify features. Both models are able to incorporate observations from robotic platforms and from human operators. Physically, humans and robots form a heterogeneous sensor network. In our experiments, the human-robot team consists of an unmanned air vehicle, a ground vehicle, and two human operators. They are deployed for an information-gathering task and perform information fusion cooperatively.
All aspects of the system, including the fusion algorithms, are fully decentralized. Experimental results are presented in the form of the acquired multi-attribute feature map, information-exchange patterns demonstrating human-robot information fusion, and a quantitative model evaluation. Lessons learned from deploying the system in the field are also presented. © 2007 Wiley Periodicals, Inc. [source]

Combination of forecasts using self-organizing algorithms
JOURNAL OF FORECASTING, Issue 4 2005. Changzheng He
Abstract: Based on the theories and methods of self-organizing data mining, a new forecasting method, called the self-organizing combining forecasting method, is proposed. Compared with optimal linear combining forecasting methods and neural-network combining forecasting methods, the new method improves the forecasting capability of the model. The superiority of the new method is justified and demonstrated by real applications. Copyright © 2005 John Wiley & Sons, Ltd. [source]

Experimental model for creep groan analysis
LUBRICATION SCIENCE, Issue 1 2009. Z. Fuadi
Abstract: A simple experimental model for a fundamental investigation of the creep-groan-generating mechanism is introduced. It is a calliper-slider model developed on the operating principle of a real brake system, able to generate creep groan quantitatively comparable to that recorded on the real brake system. The advantage of the model is that many parameters, such as the surface roughness of the mating materials, the properties of the mating materials and the structure's stiffness, can be taken into account, so that their effects on the creep-groan phenomenon can be analysed. The usefulness and potential of the model are demonstrated by its ability to generate creep groan using a real brake-lining material that is well known in the brake industry as one that easily produces creep groan in real applications.
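The forecasting abstract above benchmarks against "optimal linear combining" methods. One classical instance is Bates-Granger variance-minimizing weighting, in which weights are proportional to the inverse error covariance and sum to one (an assumption about which linear method is meant; this is not the paper's self-organizing algorithm):

```python
import numpy as np

def combine_weights(errors):
    """Optimal linear combining weights from past forecast errors.

    errors: (n_obs, n_models) matrix of historical forecast errors.
    Minimizes the combined error variance subject to the weights summing to one.
    """
    cov = np.cov(errors, rowvar=False)     # sample error covariance
    inv = np.linalg.inv(cov)
    ones = np.ones(errors.shape[1])
    w = inv @ ones
    return w / (ones @ w)                  # normalize: weights sum to 1

rng = np.random.default_rng(1)
e = np.column_stack([rng.normal(0, 1.0, 500),    # noisier model's errors
                     rng.normal(0, 0.5, 500)])   # better model's errors
w = combine_weights(e)
print(np.round(w, 2))  # more weight goes to the lower-variance model
```

With roughly independent errors the weights reduce to inverse-variance weighting, so the model with one quarter of the error variance receives about four times the weight.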
Parametric analysis is conducted, and the effects of several sensitive parameters on the stick-slip frequency characteristic of creep groan are highlighted. Copyright © 2008 John Wiley & Sons, Ltd. [source]

A preconditioning proposal for ill-conditioned Hermitian two-level Toeplitz systems
NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, Issue 2-3 2005. D. Noutsos
Abstract: Large 2-level Toeplitz systems arise in many applications, and thus an efficient strategy for their solution is often needed. The already known methods require explicit knowledge of the generating function f of the considered system Tnm(f)x = b, an assumption that usually is not fulfilled in real applications. In this paper, we extend to the 2-level case a technique proposed in the literature in such a way that, from the knowledge of the coefficients of Tnm(f), we determine optimal preconditioning strategies for the solution of our systems. More precisely, we propose and analyse an algorithm for the economical computation of minimal features of f that allow us to select optimal preconditioners. Finally, we perform various numerical experiments which fully confirm the effectiveness of the proposed idea. Copyright © 2004 John Wiley & Sons, Ltd. [source]

Block factorized preconditioners for high-order accurate in time approximation of the Navier-Stokes equations
NUMERICAL METHODS FOR PARTIAL DIFFERENTIAL EQUATIONS, Issue 4 2003. Alessandro Veneziani
Abstract: Computationally efficient solution methods for the unsteady incompressible Navier-Stokes equations are mandatory in real applications of fluid dynamics. A typical strategy to reduce the computational cost is to split the original problem into subproblems involving the separate computation of velocity and pressure. The splitting can be carried out either at the differential level, as in the Chorin-Temam scheme, or in an algebraic fashion, as in the algebraic reinterpretation of the Chorin-Temam method or in the Yosida scheme (see 1 and 19).
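For context on the Toeplitz-preconditioning abstract above, the simpler 1-level analogue builds a circulant preconditioner (here Strang's) directly from the Toeplitz coefficients and applies it inside conjugate gradients via the FFT. A sketch with a hypothetical symmetric positive definite symbol, not the paper's 2-level construction:

```python
import numpy as np

def toeplitz_matvec(col, x):
    """Multiply a symmetric Toeplitz matrix (given by its first column) by x via FFT."""
    n = len(col)
    emb = np.concatenate([col, [0.0], col[-1:0:-1]])   # circulant embedding of size 2n
    y = np.fft.ifft(np.fft.fft(emb) * np.fft.fft(x, 2 * n))
    return y[:n].real

def strang_eigs(col):
    """Eigenvalues of Strang's circulant approximation (FFT of its first column)."""
    n = len(col)
    c = col.copy()
    for k in range(n // 2 + 1, n):
        c[k] = col[n - k]                              # wrap the tail around
    return np.fft.fft(c)

def pcg(col, b, tol=1e-10, maxit=200):
    """Conjugate gradients preconditioned by the Strang circulant, applied via FFT."""
    eigs = strang_eigs(col)
    msolve = lambda r: np.fft.ifft(np.fft.fft(r) / eigs).real
    x = np.zeros_like(b)
    r = b - toeplitz_matvec(col, x)
    z = msolve(r)
    p = z.copy()
    iters = 0
    while np.linalg.norm(r) > tol * np.linalg.norm(b) and iters < maxit:
        Ap = toeplitz_matvec(col, p)
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        z_new = msolve(r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
        iters += 1
    return x, iters

n = 64
col = np.array([2.0] + [1.0 / (k * k) for k in range(1, n)])  # decaying, SPD symbol
b = np.ones(n)
x, iters = pcg(col, b)
```

Both the matrix-vector product and the preconditioner solve cost O(n log n), which is the whole point of circulant-style preconditioning; the 2-level case in the abstract replaces the 1D FFT with 2D transforms.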
These fractional-step schemes provide effective methods of solution when dealing with first-order accurate time discretizations; their extension to high-order time discretization schemes is not trivial. To this end, in the present work we focus our attention on the adoption of inexact algebraic factorizations as preconditioners of the original problem. We investigate their properties and show that some particular choices of the approximate factorization lead to very effective schemes. In particular, we prove that performing a small number of preconditioned iterations is enough to obtain a time-accurate solution, irrespective of the dimension of the system at hand. © 2003 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq 19: 487-510, 2003 [source]

Three-dimensional ultrasound image-guided robotic system for accurate microwave coagulation of malignant liver tumours
THE INTERNATIONAL JOURNAL OF MEDICAL ROBOTICS AND COMPUTER ASSISTED SURGERY, Issue 3 2010. Jing Xu
Abstract: Background: The further application of conventional ultrasound (US) image-guided microwave (MW) ablation of liver cancer is often limited by two-dimensional (2D) imaging, inaccurate needle placement and the resulting skill requirement. A three-dimensional (3D) image-guided robot-assisted system provides an appealing alternative, enabling the physician to perform consistent, accurate therapy with improved treatment effectiveness. Methods: Our robotic system is constructed by integrating an imaging module, a needle-driven robot, a MW thermal-field simulation module and surgical navigation software in a practical and user-friendly manner. The robot executes precise needle placement based on the 3D model reconstructed from freehand-tracked 2D B-scans. A qualitative slice-guidance method for fine registration is introduced to reduce the placement error caused by target motion.
By incorporating the 3D MW specific absorption rate (SAR) model into the heat-transfer equation, the MW thermal-field simulation module determines the MW power level and the coagulation time for improved ablation therapy. Two types of wrist are developed for the robot: a 'remote centre of motion' (RCM) wrist and a non-RCM wrist, the latter being preferred in real applications. Results: The needle-placement accuracies were < 3 mm for both wrists in the mechanical phantom experiment. The target accuracy for the robot with the RCM wrist improved to 1.6 ± 1.0 mm when real-time 2D US feedback was used in the artificial-tissue phantom experiment. Using the slice-guidance method, the robot with the non-RCM wrist achieved an accuracy of 1.8 ± 0.9 mm in the ex vivo experiment, even when target motion was introduced. In the thermal-field experiment, a 5.6% relative mean error was observed between the experimentally coagulated necrosis volume and the simulation result. Conclusion: The proposed robotic system holds promise to enhance the clinical performance of percutaneous MW ablation of malignant liver tumours. Copyright © 2010 John Wiley & Sons, Ltd. [source]

Nanorobots: The Ultimate Wireless Self-Propelled Sensing and Actuating Devices
CHEMISTRY - AN ASIAN JOURNAL, Issue 9 2009. Samuel Sánchez
Abstract: Natural motor proteins, "bionanorobots", have inspired researchers to develop artificial nanomachines (nanorobots) able to move autonomously by the conversion of chemical to mechanical energy. Such artificial nanorobots are self-propelled by the electrochemical decomposition of a fuel (up to now, hydrogen peroxide). Several approaches have been developed to provide nanorobots with functionality, such as controlling their movement, increasing their power output, or transporting different cargo.
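The MW thermal-field module in the liver-ablation abstract above couples an SAR source term into a heat-transfer equation. A minimal 1D explicit finite-difference sketch of that coupling (every tissue parameter and the Gaussian SAR profile below are hypothetical placeholders, not the paper's 3D model):

```python
import numpy as np

n, dx, dt = 101, 1e-3, 0.01          # grid points, spacing (m), time step (s)
alpha = 1.4e-7                       # thermal diffusivity of tissue (m^2/s), assumed
rho_c = 3.6e6                        # density * specific heat (J/(m^3 K)), assumed
x = np.linspace(0.0, (n - 1) * dx, n)
# hypothetical Gaussian absorbed-power profile centred on the antenna (W/m^3)
sar = 8e5 * np.exp(-((x - 0.05) / 0.01) ** 2)
T = np.full(n, 37.0)                 # initial body temperature (deg C)
assert alpha * dt / dx**2 < 0.5      # explicit-scheme stability condition
for _ in range(3000):                # simulate 30 s of heating
    lap = (T[:-2] - 2 * T[1:-1] + T[2:]) / dx**2      # discrete Laplacian
    T[1:-1] += dt * (alpha * lap + sar[1:-1] / rho_c)  # diffusion + SAR source
    T[0], T[-1] = 37.0, 37.0         # fixed-temperature boundaries
print(round(T[n // 2], 1))           # temperature at the antenna focus
```

Running such a model forward for candidate power levels and durations, and checking which tissue exceeds a coagulation threshold, is the basic planning loop the simulation module automates in 3D.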
In this Focus Review we discuss recent advances in nanorobots based on metallic nanowires, which can sense, deliver and actuate in complex environments, looking towards real applications in the not-too-distant future. [source]