Processing Applications

Selected Abstracts


Regularized semiparametric model identification with application to nuclear magnetic resonance signal quantification with unknown macromolecular base-line

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 3 2006
Diana M. Sima
Summary. We formulate and solve a semiparametric fitting problem with regularization constraints. The model that we focus on is composed of a parametric non-linear part and a nonparametric part that can be reconstructed via splines. Regularization is employed to impose a certain degree of smoothness on the nonparametric part. Semiparametric regression is presented as a generalization of non-linear regression, and all important differences that arise from the statistical and computational points of view are highlighted. We motivate the problem formulation with a biomedical signal processing application. [source]
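
The model class described above can be illustrated with a toy alternating fit: a nonlinear parametric component plus a spline-smoothed nonparametric baseline. This is a minimal sketch under assumed settings (the decaying-sinusoid signal standing in for an NMR peak, the smoothing factor `s`), not the authors' algorithm, which treats the regularization and identifiability questions far more carefully.

```python
# Illustrative sketch only: a simple alternating scheme for a semiparametric
# model  y = f(t; theta) + g(t) + noise,  where f is parametric (a decaying
# sinusoid, standing in for an NMR peak) and g is a smooth nonparametric
# baseline fitted by a regularized smoothing spline.
import numpy as np
from scipy.optimize import least_squares
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 400)

def f(theta, t):
    a, d, w = theta                      # amplitude, damping, frequency
    return a * np.exp(-d * t) * np.cos(2 * np.pi * w * t)

# Synthetic data: parametric signal + smooth baseline + noise.
y = (f((1.0, 3.0, 12.0), t) + 0.5 * np.sin(2 * np.pi * t)
     + 0.05 * rng.standard_normal(t.size))

theta = np.array([0.8, 2.0, 11.0])       # rough initial guess
g_hat = np.zeros_like(t)
for _ in range(10):                       # alternate the two sub-fits
    theta = least_squares(lambda th: f(th, t) + g_hat - y, theta).x
    # Regularized nonparametric part: smoothing spline on the residual.
    g_hat = UnivariateSpline(t, y - f(theta, t), s=2.0)(t)

print("estimated parametric part:", theta)
```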


Differential Representations for Mesh Processing

COMPUTER GRAPHICS FORUM, Issue 4 2006
Olga Sorkine
Abstract Surface representation and processing is one of the key topics in computer graphics and geometric modeling, since it greatly affects the range of possible applications. In this paper we will present recent advances in geometry processing that are related to the Laplacian processing framework and differential representations. This framework is based on linear operators defined on polygonal meshes, and furnishes a variety of processing applications, such as shape approximation and compact representation, mesh editing, watermarking and morphing. The core of the framework is the definition of differential coordinates and new bases for efficient mesh geometry representation, based on the mesh Laplacian operator. [source]
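
As a rough illustration of the differential-coordinate idea (a sketch, not the paper's framework), the uniform "umbrella" Laplacian below maps vertex positions to differential coordinates, and the geometry is recovered by a least-squares solve with one anchored vertex; the cotangent weights used in practice are omitted for brevity.

```python
# Minimal sketch of differential coordinates with the uniform ("umbrella")
# mesh Laplacian: delta_i = v_i - mean(neighbors of v_i).  Editing frameworks
# then solve L v = delta (plus anchor constraints) in a least-squares sense.
import numpy as np

def differential_coords(verts, faces):
    n = len(verts)
    L = np.eye(n)
    neighbors = [set() for _ in range(n)]
    for a, b, c in faces:                 # collect 1-ring neighborhoods
        neighbors[a] |= {b, c}; neighbors[b] |= {a, c}; neighbors[c] |= {a, b}
    for i, nbrs in enumerate(neighbors):
        for j in nbrs:
            L[i, j] = -1.0 / len(nbrs)
    return L, L @ verts                   # Laplacian and delta coordinates

verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 1.]])
faces = [(0, 1, 2), (1, 3, 2)]
L, delta = differential_coords(verts, faces)

# Reconstruct with vertex 0 anchored: least-squares solve of [L; e0] v = [delta; v0].
A = np.vstack([L, np.eye(4)[0]])
b = np.vstack([delta, verts[0]])
v_rec, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(v_rec, verts, atol=1e-8))   # geometry recovered exactly
```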


Freeform Shape Representations for Efficient Geometry Processing

COMPUTER GRAPHICS FORUM, Issue 3 2003
Leif Kobbelt
The most important concepts for the handling and storage of freeform shapes in geometry processing applications are parametric representations and volumetric representations. Both have their specific advantages and drawbacks. While the algebraic complexity of volumetric representations is independent of the shape complexity, the domain of a parametric representation usually has to have the same structure as the surface itself (which sometimes makes it necessary to update the domain when the surface is modified). On the other hand, the topology of a parametrically defined surface can be controlled explicitly, while in a volumetric representation the surface topology can change accidentally during deformation. A volumetric representation reduces distance queries or inside/outside tests to mere function evaluations, but the geodesic neighborhood relation between surface points is difficult to resolve. As a consequence, it seems promising to combine parametric and volumetric representations to exploit the advantages of both. In this talk, a number of projects are presented and discussed in which such a combination leads to efficient and numerically stable algorithms for the solution of various geometry processing tasks. Applications include global error control for mesh decimation and smoothing, topology control for level-set surfaces, and shape modeling with unstructured point clouds. [source]
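
The trade-off described above can be made concrete with a toy comparison (illustrative only): an implicit, volumetric signed distance function answers inside/outside and distance queries with a single function evaluation, while a parametric form produces explicit surface points.

```python
# Volumetric (implicit) vs. parametric representation of the same sphere.
import numpy as np

def sdf_sphere(p, center=np.zeros(3), radius=1.0):
    """Signed distance: negative inside, zero on the surface, positive outside."""
    return np.linalg.norm(p - center) - radius

def param_sphere(u, v, radius=1.0):
    """Parametric form: explicit surface points from (u, v) in [0, pi] x [0, 2pi)."""
    return radius * np.array([np.sin(u) * np.cos(v),
                              np.sin(u) * np.sin(v),
                              np.cos(u)])

q = np.array([0.3, 0.4, 0.0])
print("inside" if sdf_sphere(q) < 0 else "outside")   # query = function evaluation
print(param_sphere(np.pi / 2, 0.0))                   # explicit surface point
```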


User transparency: a fully sequential programming model for efficient data parallel image processing

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 6 2004
F. J. Seinstra
Abstract Although many image processing applications are ideally suited for parallel implementation, most researchers in imaging do not benefit from high-performance computing on a daily basis. Essentially, this is due to the fact that no parallelization tools exist that truly match the image processing researcher's frame of reference. As it is unrealistic to expect imaging researchers to become experts in parallel computing, tools must be provided to allow them to develop high-performance applications in a highly familiar manner. In an attempt to provide such a tool, we have designed a software architecture that allows transparent (i.e. sequential) implementation of data parallel imaging applications for execution on homogeneous distributed memory MIMD-style multicomputers. This paper presents an extensive overview of the design rationale behind the software architecture, and gives an assessment of the architecture's effectiveness in providing significant performance gains. In particular, we describe the implementation and automatic parallelization of three well-known example applications that contain many fundamental imaging operations: (1) template matching; (2) multi-baseline stereo vision; and (3) line detection. Based on experimental results we conclude that our software architecture constitutes a powerful and user-friendly tool for obtaining high performance in many important image processing research areas. Copyright © 2004 John Wiley & Sons, Ltd. [source]
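
As a sketch of the data-parallel pattern such an architecture automates (not the authors' software, whose API and scheduling are far richer), the snippet below splits an image into horizontal strips with one-row halos, filters each strip in a separate process, and reassembles the result, so the caller sees an ordinary sequential function call.

```python
# Data-parallel 3x3 mean filter: strip partitioning with one-row halos.
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from scipy.ndimage import uniform_filter

def _filter_strip(strip):
    return uniform_filter(strip, size=3, mode="nearest")

def parallel_mean_filter(img, n_workers=4):
    bounds = np.linspace(0, img.shape[0], n_workers + 1, dtype=int)
    strips = [img[max(a - 1, 0):min(b + 1, img.shape[0])]   # add 1-row halo
              for a, b in zip(bounds, bounds[1:])]
    with ProcessPoolExecutor(n_workers) as pool:
        out = list(pool.map(_filter_strip, strips))
    trimmed = [s[(1 if a > 0 else 0):s.shape[0] - (1 if b < img.shape[0] else 0)]
               for s, a, b in zip(out, bounds, bounds[1:])]
    return np.vstack(trimmed)                                # drop the halos

if __name__ == "__main__":
    img = np.random.default_rng(0).random((512, 512))
    # Matches the sequential result exactly -- transparency for the caller.
    assert np.allclose(parallel_mean_filter(img),
                       uniform_filter(img, 3, mode="nearest"))
```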


Synthesis of general impedance with simple dc/dc converters for power processing applications

INTERNATIONAL JOURNAL OF CIRCUIT THEORY AND APPLICATIONS, Issue 3 2008
J. C. P. Liu
Abstract A general impedance synthesizer using a minimum number of switching converters is studied in this paper. We begin by showing that any impedance can be synthesized by a circuit consisting of only two simple power converters, one storage element (e.g. capacitor) and one dissipative element (e.g. resistor) or power source. The implementation of such a circuit for synthesizing any desired impedance can be performed by (i) programming the input current given the input voltage such that the desired impedance function is achieved, and (ii) controlling the amount of power dissipation (generation) in the dissipative element (source) so as to match the required active power of the impedance to be synthesized. The instantaneous power is then automatically balanced by the storage element. Such impedance synthesizers find many applications in power electronics. For instance, a resistance synthesizer can be used for power factor correction (PFC), and a programmable capacitor or inductor synthesizer (comprising small high-frequency converters) can be used for control applications. Copyright © 2007 John Wiley & Sons, Ltd. [source]
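
The control idea in step (i) can be sketched for the PFC example (illustrative values and names; converter dynamics are idealized away): the input current is commanded to track v_in/R so the port presents the programmed resistance, while the storage element buffers the instantaneous power imbalance.

```python
# Hedged sketch of a resistance synthesizer's current programming: command
# i_ref = v_in / R_emulated so the port "looks like" a resistor; the storage
# capacitor absorbs the difference between instantaneous and average power.
import numpy as np

R_EMULATED = 50.0                 # ohms the port should present (assumed value)
V_PEAK, F_LINE = 325.0, 50.0      # European mains amplitude and frequency

t = np.linspace(0.0, 0.04, 2000)            # two line cycles
v_in = V_PEAK * np.sin(2 * np.pi * F_LINE * t)
i_ref = v_in / R_EMULATED                    # current command -> resistive port

p_in = v_in * i_ref                          # instantaneous input power
p_avg = p_in.mean()                          # active power the load must match
# The storage element buffers p_in(t) - p_avg at every instant.
print(f"average power {p_avg:.1f} W, "
      f"peak buffered power {abs(p_in - p_avg).max():.1f} W")
```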


ACE4k: An analog I/O 64×64 visual microprocessor chip with 7-bit analog accuracy

INTERNATIONAL JOURNAL OF CIRCUIT THEORY AND APPLICATIONS, Issue 2-3 2002
G. Liñán
Abstract This paper describes a full-custom mixed-signal chip which embeds distributed optical signal acquisition, digitally-programmable analog parallel processing, and a distributed image memory cache on a common silicon substrate. The chip, designed in a 0.5 µm standard CMOS technology, contains around 1,000,000 transistors, most of which operate in analog mode; it is hence one of the most complex mixed-signal chips reported to date. Chip functional features are: a spatially invariant array architecture with local interactions; programmable local interactions among cells; a randomly selectable memory of instructions (elementary instructions are defined by specific values of the cell local interactions); random storage/retrieval of intermediate images; and the capability to complete algorithmic image processing tasks controlled by the user-selected stored instructions and interacting with the cache memory. Thus, as illustrated in this paper, the chip is capable of completing complex spatio-temporal image processing tasks within short computation times (<300 ns for linear convolutions) and on a low power budget (<1.2 W for the complete chip). The internal circuitry of the chip has been designed to operate robustly with >7-bit equivalent accuracy in the internal analog operations, which has been confirmed by experimental measurements. Such 7-bit accuracy is enough for most image processing applications. ACE4k has been demonstrated capable of implementing up to 30 templates, either directly or through template decomposition; this covers 100% of the 3×3 linear templates reported in Roska et al. 1998 [1]. Copyright © 2002 John Wiley & Sons, Ltd. [source]
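
In software terms, the chip's elementary operation, a 3×3 linear template applied across the cell array, is a discrete convolution. The sketch below is purely illustrative: the EDGE-style template is a standard CNN textbook example, not one taken from the paper, and it runs digitally rather than in the chip's analog array.

```python
# Software analogue of applying a 3x3 linear template over a 64x64 cell array.
import numpy as np
from scipy.ndimage import convolve

template = np.array([[-1, -1, -1],
                     [-1,  8, -1],
                     [-1, -1, -1]], dtype=float)   # EDGE-style example template

img = np.zeros((64, 64))
img[20:40, 20:40] = 1.0                      # a bright square on the cell array
response = convolve(img, template, mode="constant")
print(response.min(), response.max())        # strong response along the edges
```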


Texture-based parametric active contour for target detection and tracking

INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 3 2009
Ali Reza Vard
Abstract In recent years, active contour models (ACM) have been considered powerful tools for image segmentation and object tracking in computer vision and image processing applications. This article presents a new tracking method based on parametric active contour models. In the proposed method, a new pressure energy called "texture pressure energy" is added to the energy function of the parametric active contour model to detect and track a textured target object against a textured background. In this scheme, the texture features of the contour are calculated by a moment-based method. Then, by comparing these features with the texture features of the target object, the contour curve is expanded or contracted to adapt to the object boundaries. Experimental results show that the proposed method is more efficient and accurate in tracking objects than traditional ones when both object and background are textured. © 2009 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 19, 187–198, 2009 [source]
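
A minimal sketch of the pressure mechanism follows (not the authors' implementation; the variance-based feature and the threshold are stand-ins for their moment-based texture features): each contour point moves along its outward normal, expanding while its local texture still matches the target and contracting otherwise.

```python
# Toy texture-pressure step for a parametric contour (snake).
import numpy as np

def texture_feature(img, y, x, w=5):
    patch = img[max(y - w, 0):y + w + 1, max(x - w, 0):x + w + 1]
    return patch.var()                       # crude stand-in texture descriptor

def pressure_step(img, contour, target_feat, step=1.0, tol=0.2):
    center = contour.mean(axis=0)
    out = []
    for p in contour:
        n = (p - center) / (np.linalg.norm(p - center) + 1e-9)  # outward normal
        feat = texture_feature(img, int(p[0]), int(p[1]))
        sign = 1.0 if abs(feat - target_feat) < tol * target_feat else -1.0
        out.append(p + sign * step * n)      # expand inside target, else contract
    return np.array(out)

rng = np.random.default_rng(1)
img = rng.random((128, 128)) * 0.1
img[40:90, 40:90] += rng.random((50, 50))    # "textured" target region
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
contour = np.stack([64 + 10 * np.sin(theta), 64 + 10 * np.cos(theta)], axis=1)
target_feat = texture_feature(img, 64, 64)
for _ in range(20):
    contour = pressure_step(img, contour, target_feat)
print("mean radius:", np.linalg.norm(contour - contour.mean(axis=0), axis=1).mean())
```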


A projection-based image quality measure

INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 2-3 2008
Jianxin Pang
Abstract An objective image quality measure, which automatically evaluates image quality consistently with human perception, can be employed in image and video retrieval, and a measure with high efficiency and low computational complexity plays an important role in numerous image and video processing applications. On the assumption that any image distortion can be modeled as the difference between the projection-based values (PV) of the reference image and those of the distorted image, we propose a new objective quality assessment method based on signal projection for the full-reference model. The proposed metric is built from simple parameters to achieve high efficiency and low computational complexity. Experimental results show that the proposed method is well consistent with subjective quality scores. © 2008 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 18, 94–100, 2008 [source]
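
The stated assumption can be illustrated with a toy version of the metric (an illustration, not the paper's exact PV definition): compute row and column projections of the reference and distorted images and score their RMS difference.

```python
# Toy projection-based full-reference quality score.
import numpy as np

def projection_values(img):
    return np.concatenate([img.sum(axis=0), img.sum(axis=1)])  # column + row PVs

def projection_distance(ref, dist):
    pv_r, pv_d = projection_values(ref), projection_values(dist)
    return np.sqrt(np.mean((pv_r - pv_d) ** 2))

rng = np.random.default_rng(0)
ref = rng.random((256, 256))
noisy = ref + 0.1 * rng.standard_normal(ref.shape)
print(projection_distance(ref, ref), projection_distance(ref, noisy))
```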


Learning invariants to illumination changes typical of indoor environments: Application to image color correction

INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 3 2007
B. Bascle
Abstract This paper presents a new approach to automatic image color correction, based on statistical learning. The method both parameterizes color independently of illumination and corrects color for changes of illumination. This is useful in many image processing applications, such as image segmentation or background subtraction. The motivation for using a learning approach is to deal with the changes of lighting typical of indoor environments such as homes and offices. The method is based on learning color invariants using a modified multi-layer perceptron (MLP). The MLP is odd-layered. The middle layer includes two neurons which estimate two color invariants and one input neuron which takes the luminance desired at the output of the MLP. The advantages of the modified MLP over a classical MLP are better performance and the estimation of invariants to illumination. The trained modified MLP can be applied using look-up tables, yielding very fast processing. Results illustrate the approach and compare it with other color correction approaches from the literature. © 2007 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 17, 132–142, 2007 [source]
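
An architectural sketch of the modified MLP follows (assumed layer sizes, untrained random weights; the real network is trained on indoor illumination data): an encoder maps RGB to two invariant units, the desired output luminance is injected alongside them at the middle layer, and a decoder maps the three values back to a corrected color.

```python
# Shape-only sketch of the odd-layered MLP with a luminance input at the middle.
import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.standard_normal((2, 3)) * 0.1    # RGB -> 2 invariant neurons
W_dec = rng.standard_normal((3, 3)) * 0.1    # (2 invariants + luminance) -> RGB

def correct_color(rgb, target_luminance):
    invariants = np.tanh(W_enc @ rgb)               # middle layer: 2 neurons
    middle = np.concatenate([invariants, [target_luminance]])
    return np.tanh(W_dec @ middle)                  # corrected RGB

print(correct_color(np.array([0.8, 0.4, 0.2]), target_luminance=0.5))
```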


Survey of sparse and non-sparse methods in source separation

INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 1 2005
Paul D. O'Grady
Abstract Source separation arises in a variety of signal processing applications, ranging from speech processing to medical image analysis. The separation of a superposition of multiple signals is accomplished by taking into account the structure of the mixing process and by making assumptions about the sources. When the information about the mixing process and sources is limited, the problem is called 'blind'. By assuming that the sources can be represented sparsely in a given basis, recent research has demonstrated that solutions to previously problematic blind source separation problems can be obtained. In some cases, solutions are possible to problems intractable by previous non-sparse methods. Indeed, sparse methods provide a powerful approach to the separation of linear mixtures of independent data. This paper surveys the recent arrival of sparse blind source separation methods and the previously existing non-sparse methods, providing insights and appropriate hooks into the literature along the way. © 2005 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 15, 18–33, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.20035 [source]
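
The sparsity intuition behind these methods can be seen in a toy two-channel example (a sketch, not any specific surveyed algorithm): when the sources are so sparse that at most one is active per sample, mixture samples line up along the columns of the mixing matrix, which can then be estimated by clustering sample angles.

```python
# Toy sparse blind source separation: estimate mixing directions from angles.
import numpy as np

rng = np.random.default_rng(0)
n = 4000
S = rng.laplace(size=(2, n)) * (rng.random((2, n)) < 0.05)  # very sparse sources
A = np.array([[np.cos(0.3), np.cos(1.2)],
              [np.sin(0.3), np.sin(1.2)]])                  # mixing directions
X = A @ S

active = np.linalg.norm(X, axis=0) > 1e-6
angles = np.arctan2(X[1, active], X[0, active]) % np.pi     # fold sign ambiguity
# Two angle clusters ~ two mixing columns; a coarse 1-D split suffices here.
centers = np.sort([np.median(angles[angles < 0.75]),
                   np.median(angles[angles >= 0.75])])
A_hat = np.stack([np.cos(centers), np.sin(centers)])
S_hat = np.linalg.solve(A_hat, X)                           # recover sources
print("estimated mixing angles:", centers)                  # ~ [0.3, 1.2]
```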


Some IT and data processing applications for 2H-, 3H-, 11C-, 13C- and 14C-labelling

JOURNAL OF LABELLED COMPOUNDS AND RADIOPHARMACEUTICALS, Issue 5-6 2007
William J. S. Lockley
[source]


Simulation of polymer melt processing

AICHE JOURNAL, Issue 7 2009
Morton M. Denn
Abstract Polymer melt processing requires an integration of fluid mechanics and heat transfer, with unique issues regarding boundary conditions, phase change, stability and sensitivity, and melt rheology. Simulation has been useful in industrial melt processing applications. This brief overview is a personal perspective on some of the issues that arise and how they have been addressed. © 2009 American Institute of Chemical Engineers AIChE J, 2009 [source]


Novel Thermoplastic Composites from Commodity Polymers and Man-Made Cellulose Fibers

MACROMOLECULAR SYMPOSIA, Issue 1 2006
Hans-Peter Fink
Abstract A new class of fibre-reinforced commodity thermoplastics suited for injection moulding and direct processing applications has been developed using man-made cellulosic fibres (Rayon tire yarn, Tencel, Viscose, Carbacell) and thermoplastic commodity polymers, such as polypropylene (PP), polyethylene (PE), high impact polystyrene (HIPS), poly(lactic acid) (PLA), and a thermoplastic elastomer (TPE), as the matrix polymer. For compounding, a specially adapted double pultrusion technique has been employed which provides composites with homogeneously distributed fibres. Extensive investigations were performed with Rayon-reinforced PP in view of applications in the automotive industry. The Rayon-PP composite is characterized by high strength and excellent impact behaviour compared with glass-fibre-reinforced PP, thus permitting applications in the field of engineering thermoplastics such as polycarbonate/acrylonitrile butadiene styrene blends (PC/ABS). With the PP-based composites, the influence of material parameters (e.g. fibre type and load, coupling agent) was studied, and it has been demonstrated how to tailor desired composite properties such as modulus and heat distortion temperature (HDT) by varying the fibre type or adding inorganic fillers. Man-made cellulose fibres are also suitable for the reinforcement of other thermoplastic commodity polymers with appropriate processing temperatures. In the case of PE, modulus and strength are tripled compared with the neat resin, while Charpy impact strength is increased five-fold. For HIPS, mainly strength and stiffness are increased, while for TPE the property profile is changed completely. With Rayon-reinforced PLA, a fully biogenic and biodegradable composite with excellent mechanical properties, including highly improved impact strength, is presented. [source]


Inactivation of Bacteria by the Plasma Pencil

PLASMA PROCESSES AND POLYMERS, Issue 6-7 2006
Mounir Laroussi
Abstract A device capable of generating a relatively long cold plasma plume has recently been developed. The advantages of this device are: plasma controllability and stability, room-temperature and atmospheric-pressure operation, and low power consumption. These features are what is required of a plasma source to be used reliably in material processing applications, including biomedical applications. In this communication we describe the device and present evidence that it can be used successfully to inactivate Escherichia coli in a targeted fashion. More recent experiments have shown that this device also inactivates other bacteria, but these results will be reported in the future. [Figure: photograph of a He plasma plume launched out of the plasma pencil.] [source]