Real-world Applications


Selected Abstracts


Conceptual clustering and case generalization of two-dimensional forms

COMPUTATIONAL INTELLIGENCE, Issue 3-4 2006
Silke Jänichen
Case-based object recognition requires a general case of the object that should be detected. Real-world applications such as the recognition of biological objects in images cannot be solved by one general case; a case base is necessary to handle the great natural variation in the appearance of these objects. In this paper, we present how to learn a hierarchical case base of general cases. We present our conceptual clustering algorithm, which learns groups of similar cases from a set of acquired structural cases of fungal spores. Owing to its concept description, it explicitly supplies, for each cluster, a generalized case and a measure of its degree of generalization. The resulting hierarchical case base is used for applications in the field of case-based object recognition. We present results based on our application for health monitoring of biologically hazardous material. [source]
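
The core of such a learner can be illustrated in a few lines. The sketch below is a minimal, hypothetical analogue of the clustering step, assuming each case is reduced to a fixed-length feature vector: off-the-shelf agglomerative clustering groups similar cases, and each cluster stores a mean prototype as its generalized case plus the maximum member-to-prototype distance as the degree of generalization. The features, thresholds, and data are invented; the paper's own algorithm and similarity measures will differ.

```python
# Minimal sketch of hierarchical conceptual clustering over shape cases,
# assuming each case is a fixed-length feature vector; the clustering
# criterion, features, and thresholds are illustrative, not the paper's.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def build_case_base(cases: np.ndarray, max_dist: float):
    """Group similar cases and attach a generalized case per cluster."""
    Z = linkage(cases, method="average")           # agglomerative clustering
    labels = fcluster(Z, t=max_dist, criterion="distance")
    case_base = []
    for c in np.unique(labels):
        members = cases[labels == c]
        prototype = members.mean(axis=0)           # generalized case
        # degree of generalization: how far members spread from the prototype
        degree = np.linalg.norm(members - prototype, axis=1).max()
        case_base.append({"prototype": prototype, "degree": degree,
                          "n_members": len(members)})
    return case_base

# toy usage: 2-D shape descriptors for six acquired cases
cases = np.array([[1.0, 2.0], [1.1, 2.1], [0.9, 1.9],
                  [5.0, 5.0], [5.2, 4.9], [4.8, 5.1]])
for cluster in build_case_base(cases, max_dist=1.0):
    print(cluster)
```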


Robustness analysis of flexible structures: practical algorithms

INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 8 2003
Gilles Ferreres
Abstract When analysing the robustness properties of a flexible system, the classical solution, which consists of computing lower and upper bounds of the structured singular value (s.s.v.) at each point of a frequency grid, proves unreliable. This paper describes two algorithms based on the same technical result: the first directly computes an upper bound of the maximal s.s.v. over a frequency interval, while the second eliminates frequency intervals inside which the s.s.v. is guaranteed to be below a given value. Various strategies are then proposed that combine these two techniques and also integrate methods for computing a lower bound of the s.s.v. The computational efficiency of the scheme is illustrated on a real-world application, namely a telescope mock-up representative of a high-order flexible system. Copyright © 2003 John Wiley & Sons, Ltd. [source]
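
To make the second algorithm's idea concrete, the following is a minimal sketch of frequency-interval elimination by bisection, where `ssv_upper_bound_on_interval` is a hypothetical oracle standing in for the paper's technical result (a bound valid over a whole interval rather than at a single frequency).

```python
# Sketch of the frequency-interval elimination idea: bisect the band and
# discard any interval on which a guaranteed upper bound of the s.s.v. is
# already below the target level. `ssv_upper_bound_on_interval` is a
# hypothetical oracle standing in for the paper's technical result.
def eliminate_intervals(w_min, w_max, level, ssv_upper_bound_on_interval,
                        tol=1e-3):
    """Return sub-intervals where the s.s.v. may still exceed `level`."""
    pending, suspect = [(w_min, w_max)], []
    while pending:
        a, b = pending.pop()
        if ssv_upper_bound_on_interval(a, b) < level:
            continue                      # s.s.v. guaranteed below the level
        if b - a < tol:
            suspect.append((a, b))        # cannot refine further
        else:
            mid = 0.5 * (a + b)           # bisect and re-examine both halves
            pending += [(a, mid), (mid, b)]
    return suspect
```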


A new method for scoring additive multi-attribute value models using pairwise rankings of alternatives

JOURNAL OF MULTI CRITERIA DECISION ANALYSIS, Issue 3-4 2008
Paul Hansen
Abstract We present a new method for determining the point values for additive multi-attribute value models with performance categories. The method, which we refer to as PAPRIKA (Potentially All Pairwise RanKings of all possible Alternatives), involves the decision-maker pairwise ranking potentially all undominated pairs of all possible alternatives represented by the value model. The method minimizes the number of pairs to be explicitly ranked by identifying all pairs implicitly ranked as corollaries of the explicitly ranked pairs. We report on simulations of the method's use and show that if the decision-maker explicitly ranks pairs defined on just two criteria at a time, the overall ranking of alternatives produced by the value model is very highly correlated with the true ranking. Therefore, for most practical purposes, decision-makers are unlikely to need to rank pairs defined on more than two criteria, thereby reducing the elicitation burden. We also describe a successful real-world application involving the scoring of a value model for prioritizing patients for cardiac surgery in New Zealand. We conclude that although the new method entails more judgments than traditional scoring methods, the type of judgment (pairwise rankings of undominated pairs) is arguably simpler and might reasonably be expected to reflect decision-makers' preferences more accurately. Copyright © 2009 John Wiley & Sons, Ltd. [source]
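
The key saving comes from the corollary step: explicit judgments are edges of a directed graph over alternatives, and transitivity supplies further rankings for free. Below is a minimal, hypothetical sketch of that step; the alternative encoding and the toy judgments are invented and do not reproduce the PAPRIKA software.

```python
# Minimal sketch of the corollary step in PAPRIKA-style elicitation:
# explicit pairwise rankings are edges of a directed graph, and any pair
# already connected by the transitive closure need not be asked again.
from itertools import combinations

def implied(explicit_rankings):
    """Transitive closure of 'a is ranked above b' judgments."""
    closure = set(explicit_rankings)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

alts = ["A", "B", "C", "D"]
explicit = {("A", "B"), ("B", "C"), ("C", "D")}   # three explicit judgments
known = implied(explicit)
still_to_ask = [p for p in combinations(alts, 2)
                if p not in known and p[::-1] not in known]
print(sorted(known - explicit))  # ('A','C'), ('A','D'), ('B','D') for free
print(still_to_ask)              # empty: the chain implies every other pair
```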


Planning models for parallel batch reactors with sequence-dependent changeovers

AICHE JOURNAL, Issue 9 2007
Muge Erdirik-Dogan
Abstract In this article we address the production planning of parallel multiproduct batch reactors with sequence-dependent changeovers, a challenging problem motivated by a real-world application in a specialty chemicals business. We propose two production planning models that anticipate the impact of the changeovers in this batch processing problem. The first model underestimates the effects of the changeovers, leading to an MILP problem of moderate size. The second model incorporates sequencing constraints that yield very accurate predictions, but at the expense of a larger MILP problem. To solve problems that are large in terms of the number of products and reactors or the length of the time horizon, we propose a decomposition technique based on a rolling-horizon scheme, as well as a relaxation of the detailed planning model. Several examples are presented to illustrate the performance of the proposed models. © 2007 American Institute of Chemical Engineers AIChE J, 2007 [source]
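
As a flavor of the sequencing constraints involved, here is a toy single-reactor, two-product planning model with sequence-dependent changeover binaries, written with the open-source PuLP modeler. All data, costs, and the exact constraint set are invented for illustration and are far simpler than the authors' formulations.

```python
# Toy single-reactor planning model with sequence-dependent changeovers,
# written with PuLP to show the flavor of the binary sequencing constraints;
# the data and cost structure are invented, not the authors' formulation.
import pulp

products, periods = ["A", "B"], range(4)
change_cost = {("A", "B"): 5.0, ("B", "A"): 8.0}   # asymmetric changeovers
profit = {"A": 3.0, "B": 4.0}

m = pulp.LpProblem("batch_planning", pulp.LpMaximize)
y = pulp.LpVariable.dicts("run", (products, periods), cat="Binary")
z = pulp.LpVariable.dicts("chg", (products, products, periods), cat="Binary")

for t in periods:                       # exactly one product per period
    m += pulp.lpSum(y[i][t] for i in products) == 1
for t in list(periods)[1:]:             # changeover i->j links two periods
    for (i, j) in change_cost:
        m += z[i][j][t] >= y[i][t - 1] + y[j][t] - 1

m += (pulp.lpSum(profit[i] * y[i][t] for i in products for t in periods)
      - pulp.lpSum(change_cost[i, j] * z[i][j][t]
                   for (i, j) in change_cost for t in list(periods)[1:]))
m.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[m.status],
      [(i, t) for i in products for t in periods if y[i][t].value() == 1])
```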


The backyard human performance technologist: Applying the development research methodology to develop and validate a new instructional design framework

PERFORMANCE IMPROVEMENT QUARTERLY, Issue 3 2009
Timothy R. Brock PhD
Development research methodology (DRM) has been recommended as a viable research approach for expanding the practice-to-theory/theory-to-practice literature, one that human performance technology (HPT) practitioners can integrate into the day-to-day workflow they already use to develop instructional products. However, little has been written about how it can be applied in a workplace setting, so that HPT practitioners might consider adopting this research approach in their own activities. This article provides a real-world application of the DRM to help close this literature gap. After providing background information to establish the case context, the article presents an overview of how this research approach was applied in an effort to develop and validate a new instructional design framework for potentially training National Aeronautics and Space Administration (NASA) astronauts for deep space exploration missions. The result of this case indicates that this research methodology provides a viable approach that HPT practitioners can integrate into their current practices to provide a practice-based research baseline and contribute to the practice-to-theory/theory-to-practice literature. [source]


The Scalasca performance toolset architecture

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 6 2010
Markus Geimer
Abstract Scalasca is a performance toolset that has been specifically designed to analyze parallel application execution behavior on large-scale systems with many thousands of processors. It offers an incremental performance-analysis procedure that integrates runtime summaries with in-depth studies of concurrent behavior via event tracing, adopting a strategy of successively refined measurement configurations. Distinctive features are its ability to identify wait states in applications with very large numbers of processes and to combine these with efficiently summarized local measurements. In this article, we review the current toolset architecture, emphasizing its scalable design and the role of the different components in transforming raw measurement data into knowledge of application execution behavior. The scalability and effectiveness of Scalasca are then surveyed from experience measuring and analyzing real-world applications on a range of computer systems. Copyright © 2010 John Wiley & Sons, Ltd. [source]


A test suite for parallel performance analysis tools

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 11 2007
Michael Gerndt
Abstract Parallel performance analysis tools must be tested as to whether they perform their task correctly, which comprises at least three aspects. First, it must be ensured that the tools neither alter the semantics nor distort the run-time behavior of the application under investigation. Next, it must be verified that the tools collect the correct performance data as required by their specification. Finally, it must be checked that the tools perform their intended tasks and detect relevant performance problems. Focusing on the last of these (correctness) aspects, testing can be done using synthetic test functions with controllable performance properties, possibly complemented by real-world applications with known performance behavior. A systematic test suite can be built from synthetic test functions and other components, possibly with the help of tools that assist the user in putting the pieces together into executable test programs. Clearly, such a test suite can be highly useful to builders of performance analysis tools. It is surprising that, until now, no systematic effort has been undertaken to provide such a suite. In this paper we describe the APART Test Suite (ATS) for checking the correctness (in the above sense) of parallel performance analysis tools. In particular, we describe a collection of synthetic test functions that allows one to easily construct both simple and more complex test programs with desired performance properties. We briefly report on experience with MPI and OpenMP performance tools when applied to the test cases generated by ATS. Copyright © 2006 John Wiley & Sons, Ltd. [source]
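
As an illustration of a synthetic test function with a controllable performance property, the sketch below (using mpi4py, with an invented function name and delay parameter) produces a classic "late sender" wait state of known duration, giving a ground truth against which an MPI analysis tool could be checked.

```python
# Sketch of a synthetic test function in the spirit of ATS: a "late sender"
# pattern whose wait-state duration is directly controllable, so a trace
# analyzer can be checked against a known ground truth. Uses mpi4py; the
# function name and delay parameter are illustrative.
import time
from mpi4py import MPI

def late_sender(delay_s: float):
    """Rank 0 delays its send; rank 1 provably waits ~delay_s in recv."""
    comm = MPI.COMM_WORLD
    if comm.Get_rank() == 0:
        time.sleep(delay_s)          # controllable severity of the pattern
        comm.send(b"payload", dest=1, tag=7)
    elif comm.Get_rank() == 1:
        t0 = MPI.Wtime()
        comm.recv(source=0, tag=7)
        print(f"waited {MPI.Wtime() - t0:.3f}s (expected ~{delay_s}s)")

if __name__ == "__main__":           # run: mpiexec -n 2 python late_sender.py
    late_sender(delay_s=0.5)
```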


A framework for linguistic logic programming

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 6 2010
Tru H. Cao
Lawry's label semantics for modeling and computing with linguistic information in natural language provides a clear interpretation of linguistic expressions and thus a transparent model for real-world applications. Meanwhile, annotated logic programs (ALPs) and their fuzzy extension, AFLPs, have been developed as an extension of classical logic programs, offering a powerful computational framework for handling uncertain and imprecise data within logic programs. This paper proposes annotated linguistic logic programs (ALLPs), which embed Lawry's label semantics into the ALP/AFLP syntax, providing a linguistic logic programming formalism for the development of automated reasoning systems involving soft data, such as the vague and imprecise concepts that occur frequently in natural language. The syntax of ALLPs is introduced, and their declarative semantics is studied. The ALLP SLD-style proof procedure is then defined and proved to be sound and complete with respect to the declarative semantics of ALLPs. © 2010 Wiley Periodicals, Inc. [source]


An evolutionary learning approach for adaptive negotiation agents

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 1 2006
Raymond Y.K. Lau
Developing effective and efficient negotiation mechanisms for real-world applications such as e-business is challenging because negotiations in such contexts are characterized by combinatorially complex negotiation spaces, tough deadlines, very limited information about the opponents, and volatile negotiator preferences. Accordingly, practical negotiation systems should be empowered by effective learning mechanisms to acquire dynamic domain knowledge from possibly changing negotiation contexts. This article illustrates our adaptive negotiation agents, which are underpinned by robust evolutionary learning mechanisms to deal with complex and dynamic negotiation contexts. Our experimental results show that GA-based adaptive negotiation agents outperform a theoretically optimal negotiation mechanism that guarantees Pareto optimality. Our research work opens the door to the development of practical negotiation systems for real-world applications. © 2006 Wiley Periodicals, Inc. Int J Int Syst 21: 41–72, 2006. [source]
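
A toy version of such an evolutionary learner fits in a few lines. In this hypothetical sketch, a chromosome is a single concession-rate exponent of a time-dependent tactic, fitness is the utility achieved against a fixed scripted opponent, and evolution is plain truncation selection with Gaussian mutation; real adaptive agents evolve far richer strategy representations.

```python
# Minimal sketch of evolving a negotiation strategy: each chromosome is a
# concession-rate parameter, fitness is the utility obtained against a fixed
# scripted opponent. The encoding and opponent are invented for illustration.
import random

def negotiate(beta, deadline=20, opponent_beta=0.4):
    """Time-dependent concession tactic; returns agent utility (or 0)."""
    for t in range(1, deadline + 1):
        offer = 1.0 - (t / deadline) ** beta           # agent's demanded share
        opp_offer = (t / deadline) ** opponent_beta    # share opponent concedes
        if opp_offer >= offer:
            return offer                               # agreement reached
    return 0.0                                         # deadline: no deal

def evolve(pop_size=30, generations=40):
    pop = [random.uniform(0.1, 5.0) for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=negotiate, reverse=True)[: pop_size // 2]
        pop = elite + [max(0.1, b + random.gauss(0, 0.2)) for b in elite]
    return max(pop, key=negotiate)

best = evolve()
print(f"best concession exponent: {best:.2f}, utility: {negotiate(best):.3f}")
```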


Learning cooperative linguistic fuzzy rules using the best–worst ant system algorithm

INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 4 2005
Jorge Casillas
Within the field of linguistic fuzzy modeling with fuzzy rule-based systems, the automatic derivation of linguistic fuzzy rules from numerical data is an important task. In the last few years, a large number of contributions based on techniques such as neural networks and genetic algorithms have been proposed to tackle this problem. In this article, we introduce a novel approach to the fuzzy rule learning problem using ant colony optimization (ACO) algorithms. To do so, this learning task is formulated as a combinatorial optimization problem. Our learning process is based on the COR methodology proposed in previous works, which provides a search space that allows us to obtain fuzzy models with a good interpretability–accuracy trade-off. A specific ACO-based algorithm, the Best–Worst Ant System, is used for this purpose owing to the good performance it has shown when solving other optimization problems. We analyze the behavior of the proposed method and compare it to other learning methods and search techniques on two real-world applications. The results obtained highlight the good performance of our proposal in terms of interpretability, accuracy, and efficiency. © 2005 Wiley Periodicals, Inc. Int J Int Syst 20: 433–452, 2005. [source]
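
In COR-style learning, each fuzzy antecedent subspace must be assigned one consequent label, which is exactly the kind of combinatorial assignment ACO handles well. The sketch below is a hypothetical miniature: pheromones weight each (antecedent, consequent) choice, the iteration-best ant deposits pheromone, and the iteration-worst ant is penalized, echoing the Best–Worst update. The error function and problem data are invented stand-ins for a real training set.

```python
# Hypothetical miniature of COR-style consequent selection with a
# best-worst pheromone update; problem data invented for illustration.
import random

N_ANT, N_CONSQ = 4, 3                       # 4 antecedent subspaces, 3 labels
TRUE = [0, 2, 1, 1]                         # hidden "good" assignment (toy)

def error(assign):                          # stand-in for MSE on training data
    return sum(a != t for a, t in zip(assign, TRUE))

tau = [[1.0] * N_CONSQ for _ in range(N_ANT)]   # pheromone per (rule, label)
for _ in range(50):
    colony = []
    for _ant in range(10):                      # ants sample assignments
        a = [random.choices(range(N_CONSQ), weights=tau[i])[0]
             for i in range(N_ANT)]
        colony.append((error(a), a))
    colony.sort()
    (_, best), (_, worst) = colony[0], colony[-1]
    for i in range(N_ANT):
        tau[i][best[i]] += 1.0                               # best deposits
        tau[i][worst[i]] = max(0.1, tau[i][worst[i]] - 0.5)  # worst penalized
        tau[i] = [0.95 * p for p in tau[i]]                  # evaporation

best_assign = [max(range(N_CONSQ), key=lambda c, i=i: tau[i][c])
               for i in range(N_ANT)]
print(best_assign, "error:", error(best_assign))
```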


A personal perspective on problem solving by general purpose solvers

INTERNATIONAL TRANSACTIONS IN OPERATIONAL RESEARCH, Issue 3 2010
Toshihide Ibaraki
Abstract To solve the problems that abound in real-world applications, we propose using general-purpose solvers, as we cannot afford to develop special-purpose algorithms for every individual problem. Existing general-purpose solvers such as linear programming and integer programming are very useful but not sufficient. To improve the situation, we have developed solvers for other standard problems, such as the constraint satisfaction problem and the resource-constrained project scheduling problem, among others. In this article, we describe why general-purpose solvers are needed, what kinds of solvers we considered, how they were developed, and where they have been applied. [source]


Path finding under uncertainty

JOURNAL OF ADVANCED TRANSPORTATION, Issue 1 2005
Anthony Chen
Path finding problems have many real-world applications in fields such as operations research, computer science, telecommunications, and transportation. In this paper, we examine three definitions of optimality for finding the optimal path under an uncertain environment. These three stochastic path finding models are formulated as the expected value model, the dependent-chance model, and the chance-constrained model, using different criteria to hedge against travel time uncertainty. A simulation-based genetic algorithm procedure is developed to solve these path finding models under uncertainty. Numerical results are also presented to demonstrate the features of these stochastic path finding models. [source]
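
The difference between the optimality criteria is easy to see by simulation. The toy sketch below evaluates two hypothetical candidate paths under random link travel times and reports both an expected-value figure and a chance-constrained (on-time probability) figure; the network, distributions, and budget are invented.

```python
# Sketch contrasting two of the optimality criteria on a toy network with
# random link travel times, evaluated by Monte Carlo simulation.
import random

# two candidate paths origin -> destination, (mean, std) per link
paths = {
    "short-but-risky": [(5.0, 4.0)],
    "long-but-steady": [(4.0, 0.5), (3.0, 0.5)],
}

def sample_time(path):
    return sum(max(0.0, random.gauss(mu, sd)) for mu, sd in path)

N, budget = 20000, 9.0
for name, path in paths.items():
    times = [sample_time(path) for _ in range(N)]
    expected = sum(times) / N                      # expected value model
    on_time = sum(t <= budget for t in times) / N  # chance-constrained view
    print(f"{name:16s} E[T]={expected:5.2f}  P(T<={budget})={on_time:.3f}")
# The expected-value model prefers the risky path (mean 5.0 vs 7.0), while a
# chance-constrained model with a tight budget prefers the steady one.
```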


Generation and visualization of large-scale three-dimensional reconstructions from underwater robotic surveys

JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 1 2010
Matthew Johnson-Roberson
Robust, scalable simultaneous localization and mapping (SLAM) algorithms support the successful deployment of robots in real-world applications. In many cases these platforms deliver vast amounts of sensor data from large-scale, unstructured environments. These data may be difficult to interpret by end users without further processing and suitable visualization tools. We present a robust, automated system for large-scale three-dimensional (3D) reconstruction and visualization that takes stereo imagery from an autonomous underwater vehicle (AUV) and SLAM-based vehicle poses to deliver detailed 3D models of the seafloor in the form of textured polygonal meshes. Our system must cope with thousands of images, lighting conditions that create visual seams when texturing, and possible inconsistencies between stereo meshes arising from errors in calibration, triangulation, and navigation. Our approach breaks down the problem into manageable stages by first estimating local structure and then combining these estimates to recover a composite georeferenced structure using SLAM-based vehicle pose estimates. A texture-mapped surface at multiple scales is then generated that is interactively presented to the user through a visualization engine. We adapt established solutions when possible, with an emphasis on quickly delivering approximate yet visually consistent reconstructions on standard computing hardware. This allows scientists on a research cruise to use our system to design follow-up deployments of the AUV and complementary instruments. To date, this system has been tested on several research cruises in Australian waters and has been used to reliably generate and visualize reconstructions for more than 60 dives covering diverse habitats and representing hundreds of linear kilometers of survey. © 2009 Wiley Periodicals, Inc. [source]
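
The compositing step at the heart of such a pipeline can be sketched compactly: local stereo points in the vehicle frame are mapped into a common georeferenced frame through the SE(3) pose estimated by SLAM at capture time. The transform and points below are invented toy values; the real system additionally handles meshing, seams, and texture blending.

```python
# Minimal sketch of the compositing step: local stereo points expressed in
# the vehicle frame are mapped into a common georeferenced frame using the
# SLAM pose (a 4x4 homogeneous transform) of the AUV at capture time.
import numpy as np

def to_global(points_local: np.ndarray, T_world_vehicle: np.ndarray):
    """Apply a homogeneous SE(3) transform to an (N, 3) point array."""
    homog = np.hstack([points_local, np.ones((len(points_local), 1))])
    return (T_world_vehicle @ homog.T).T[:, :3]

# pose: vehicle 10 m east, 2 m down, yawed 90 degrees (toy values)
c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
T = np.array([[c, -s, 0, 10.0],
              [s,  c, 0,  0.0],
              [0,  0, 1, -2.0],
              [0,  0, 0,  1.0]])
local_patch = np.array([[1.0, 0.0, 3.0], [1.5, 0.2, 3.1]])
print(to_global(local_patch, T))   # seafloor points in the global frame
```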


Evaluation of semi-autonomous convoy driving

JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 11-12 2008
James Davis
Autonomous mobility technologies may have applications to manned vehicle convoy operations: they have the ability to enhance both system performance and operator capability. This effort examines the potential impact of introducing semi-autonomous mobility [Convoy Active Safety Technologies (CAST)] into manned vehicles. Twelve civilians with experience driving military vehicles in convoy-type operations participated in this experiment. They were tasked with following a lead vehicle while completing a concurrent security task (scanning the local environment for targets). Control of the manned vehicle was varied between CAST and manual control at several different speed levels. Several objective speed and accuracy variables, along with subjective operator assessment variables, were examined for each task. The results support the potential benefits of incorporating semi-autonomous mobility technologies into manned vehicle convoy operations. The semi-autonomous mobility system was associated with significantly better performance in several aspects of operator situational awareness and convoy integrity, including enhanced target identification, improved maintenance of following distance, and improved performance for unanticipated stops. This experiment also highlighted a critical human factors issue associated with the incorporation of autonomy in real-world applications: participants felt that, overall, they outperformed the semi-autonomous system on the simulated convoy operation. The operator's perception of the system's performance could potentially affect his or her willingness to use the system in real-world applications. This experiment demonstrated that enhancements to overall system performance in real-world applications are achieved by considering both technological and human factors solutions. Published 2008 Wiley Periodicals, Inc. [source]


Status of Microbial Modeling in Food Process Models

COMPREHENSIVE REVIEWS IN FOOD SCIENCE AND FOOD SAFETY, Issue 1 2008
Bradley P. Marks
ABSTRACT: Food process models are typically aimed at improving process design or operation by optimizing some physical or chemical outcome, such as maximizing processing yield, minimizing energy usage, or maximizing nutrient retention. However, in seeking to achieve these objectives, one of the critical constraints is usually microbiological. For example, growth of pathogens or spoilage organisms must be held below a certain level, or pathogen reduction for a kill step must achieve a certain target. Therefore, mathematical models for microbial populations subjected to food processing operations are essential elements of the broader field of food process modeling. However, the complexity of the underlying biological phenomena presents special challenges in formulating, validating, and applying microbial models to real-world applications. In that context, the narrow purpose of this article is to (1) outline the general terminology and constructs of microbial models, (2) evaluate the state of knowledge/state of the art in application of these models, and (3) offer observations about current limitations and future opportunities in the area of predictive microbiology for food process modeling. [source]
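
As a concrete example of the kill-step constraint mentioned above, the sketch below implements the classical log-linear (D- and z-value) inactivation model, integrating the lethal rate over a time-temperature profile to obtain the achieved log10 reduction. The parameter values and profile are illustrative only, not tied to any specific pathogen or process.

```python
# Minimal sketch of the classical log-linear inactivation model used as a
# kill-step constraint: D- and z-values give the equivalent lethality of a
# time-temperature profile. Parameter values below are illustrative only.
import numpy as np

def log10_reduction(t_min, temp_C, D_ref=0.2, T_ref=70.0, z=7.0):
    """Trapezoid-integrate the lethal rate 10**((T - T_ref)/z) over time."""
    t = np.asarray(t_min, dtype=float)
    rate = 10.0 ** ((np.asarray(temp_C, dtype=float) - T_ref) / z)
    F = np.sum(0.5 * (rate[:-1] + rate[1:]) * np.diff(t))  # equiv. min @ T_ref
    return F / D_ref                                       # log10 reductions

t = np.linspace(0.0, 10.0, 201)             # a 10-minute process
T = 40.0 + 32.0 * np.sin(np.pi * t / 20.0)  # heats to ~72 C at the end (toy)
print(f"achieved reduction: {log10_reduction(t, T):.1f} log10 cycles")
# compare against, e.g., a 5-log kill-step target
```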