Image Processing (image + processing)

Kinds of Image Processing

  • digital image processing

Terms modified by Image Processing

  • image processing algorithms
  • image processing software
  • image processing techniques

Selected Abstracts


    QUANTIFYING ADULTERATION IN ROAST COFFEE POWDERS BY DIGITAL IMAGE PROCESSING

    JOURNAL OF FOOD QUALITY, Issue 2 2003
    EDSON E. SANO
    Pure arabica coffee and mixtures of coffee husks and straw, maize, brown sugar and soybean were produced in our laboratory as investigation materials. Red/Green/Blue (RGB) color composites, magnified twelve times, were generated using a Charge Coupled Device (CCD) camera connected to a stereo microscope and a personal computer with an image processing software package. The percent areas of the contaminants in each image were calculated by the Maximum Likelihood supervised classification technique. Best-fit equations relating weight percentage (g kg-1) and the percent areas were obtained for each coffee contaminant. To test the method, 247 coffee samples with different amounts and types of adulterants were analyzed in the laboratory. The results showed that the new method can analyze a large number of ground coffee powders precisely and quickly. [source]
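
    A minimal sketch of the calibration step described above: relate the classified percent area of a contaminant to its known content, then invert the fit for new samples. The data values, the linear form of the fit and the function names are illustrative assumptions, not the paper's published equations.

    ```python
    import numpy as np

    # Hypothetical calibration data: classified percent area (%) vs. known
    # adulterant content (g/kg) for one contaminant (values are illustrative).
    percent_area = np.array([2.1, 4.8, 9.5, 19.0, 37.0])
    content_g_per_kg = np.array([10.0, 25.0, 50.0, 100.0, 200.0])

    # Fit a simple best-fit line (the paper derives one equation per contaminant;
    # the degree and form of the fit here are assumptions).
    coeffs = np.polyfit(percent_area, content_g_per_kg, deg=1)
    calibration = np.poly1d(coeffs)

    # Predict the adulterant content of a new sample from its classified area.
    new_area = 12.0   # percent area of contaminant pixels in the new image
    print(f"estimated content: {calibration(new_area):.1f} g/kg")
    ```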


    Exposure Fusion: A Simple and Practical Alternative to High Dynamic Range Photography

    COMPUTER GRAPHICS FORUM, Issue 1 2009
    T. Mertens
    I.4.8 [Image Processing]: Scene Analysis - Photometry, Sensor Fusion. Abstract: We propose a technique for fusing a bracketed exposure sequence into a high quality image, without converting to high dynamic range (HDR) first. Skipping the physically based HDR assembly step simplifies the acquisition pipeline. This avoids camera response curve calibration and is computationally efficient. It also allows for including flash images in the sequence. Our technique blends multiple exposures, guided by simple quality measures like saturation and contrast. This is done in a multiresolution fashion to account for the brightness variation in the sequence. The resulting image quality is comparable to existing tone mapping operators. [source]
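
    A single-scale sketch of the fusion idea, assuming per-pixel weights built from contrast (magnitude of the Laplacian of the grey image) and saturation (standard deviation across colour channels), followed by a normalised weighted average. The paper performs the blend in a multiresolution (pyramid) fashion and also uses a well-exposedness measure; those parts are omitted here, and all names are illustrative.

    ```python
    import numpy as np
    from scipy.ndimage import laplace

    def fuse_exposures(images, eps=1e-12):
        """Naive single-scale exposure fusion.

        images: list of float RGB arrays in [0, 1], all the same shape.
        Weights combine contrast (|Laplacian| of the grey image) and
        saturation (std. dev. across colour channels), then are normalised
        per pixel before a weighted average of the inputs.
        """
        weights = []
        for img in images:
            grey = img.mean(axis=2)
            contrast = np.abs(laplace(grey))
            saturation = img.std(axis=2)
            weights.append(contrast * saturation + eps)
        weights = np.stack(weights)
        weights /= weights.sum(axis=0, keepdims=True)      # normalise per pixel
        stack = np.stack(images)
        return (weights[..., None] * stack).sum(axis=0)

    # Example with three synthetic exposures of a gradient scene.
    base = np.linspace(0, 1, 64).reshape(1, 64, 1) * np.ones((64, 64, 3))
    exposures = [np.clip(base * g, 0, 1) for g in (0.5, 1.0, 2.0)]
    fused = fuse_exposures(exposures)
    print(fused.shape, fused.min(), fused.max())
    ```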


    Guest Editorial: Selected Papers from the 18th Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI' 2005)

    COMPUTER GRAPHICS FORUM, Issue 4 2006
    Maria Andréia
    No abstract is available for this article. [source]


    Adaptive Logarithmic Mapping For Displaying High Contrast Scenes

    COMPUTER GRAPHICS FORUM, Issue 3 2003
    F. Drago
    We propose a fast, high quality tone mapping technique to display high contrast images on devices with a limited dynamic range of luminance values. The method is based on logarithmic compression of luminance values, imitating the human response to light. A bias power function is introduced to adaptively vary logarithmic bases, resulting in good preservation of details and contrast. To improve contrast in dark areas, changes to the gamma correction procedure are proposed. Our adaptive logarithmic mapping technique is capable of producing perceptually tuned images with high dynamic content and works at interactive speed. We demonstrate a successful application of our tone mapping technique with a high dynamic range video player that enables adjustment of optimal viewing conditions for any kind of display while taking into account user preference concerning brightness, contrast compression, and detail reproduction. Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Image Processing and Computer Vision]: Image Representation [source]
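
    A sketch of logarithmic luminance compression with a bias power function, in the form this technique is usually quoted; the display-luminance constant, the default bias value and the omission of the proposed gamma-correction changes are assumptions rather than a faithful reproduction of the paper.

    ```python
    import numpy as np

    def log_tonemap(luminance, bias=0.85, ld_max=100.0):
        """Logarithmic tone mapping with a bias power function.

        luminance: array of world luminance values (cd/m^2), > 0.
        bias:      bias parameter b in (0, 1]; smaller values brighten shadows.
        ld_max:    maximum display luminance (assumed 100 cd/m^2 here).
        Returns display values roughly in [0, 1].
        """
        lw = np.asarray(luminance, dtype=float)
        lw_max = lw.max()
        # The bias power function adaptively changes the logarithm base per pixel.
        exponent = np.log(bias) / np.log(0.5)
        base = 2.0 + 8.0 * (lw / lw_max) ** exponent
        return (ld_max * 0.01 / np.log10(lw_max + 1.0)) * np.log(lw + 1.0) / np.log(base)

    hdr = np.geomspace(0.01, 10_000.0, num=8)   # synthetic high-contrast luminances
    print(np.round(log_tonemap(hdr), 3))
    ```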


    Detecting Cycle Failures at Signalized Intersections Using Video Image Processing

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 6 2006
    Jianyang Zheng
    Cycle failure detection is essential for identifying signal control problems at intersections. However, typical traffic sensors do not have the capability of capturing cycle failures. In this article, we introduce an algorithm for traffic signal cycle failure detection using video image processing. A cycle failure for a particular movement occurs when at least one vehicle must wait through more than one red light to complete the intended movement. The proposed cycle failure algorithm was implemented using Microsoft Visual C#. The system was tested with field data at different locations and time periods. The test results show that the algorithm works favorably: the system captured all the cycle failures and generated only three false alarms, which is approximately 0.9% of the total cycles tested. [source]
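
    The detection rule reduces to bookkeeping over per-cycle queue observations: if the same vehicle is observed queued during two consecutive red intervals, it has waited through more than one red and the cycle is flagged. The sketch below assumes vehicle IDs from a hypothetical video tracker; the authors' system was written in C#, and Python is used here only for illustration.

    ```python
    def detect_cycle_failures(queues_per_cycle):
        """Flag cycles in which at least one vehicle waited through more than
        one red interval for the same movement.

        queues_per_cycle: list of sets, one per signal cycle, holding the IDs
        of vehicles observed in the queue during that cycle's red interval
        (IDs come from a hypothetical video tracker).
        Returns the indices of cycles that end in a cycle failure.
        """
        failures = []
        for i in range(1, len(queues_per_cycle)):
            carried_over = queues_per_cycle[i] & queues_per_cycle[i - 1]
            if carried_over:          # same vehicle queued for two reds in a row
                failures.append(i)
        return failures

    # Vehicle 7 fails to clear during cycle 0 and is still queued in cycle 1.
    print(detect_cycle_failures([{3, 7}, {7, 9}, set()]))   # -> [1]
    ```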


    Navigation Aided Image Processing in UAV Surveillance: Preliminary Results and Design of an Airborne Experimental System

    JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 2 2004
    Jonas Nygårds
    This paper describes an airborne reconfigurable measurement system being developed at the Swedish Defence Research Agency (FOI), Sensor Technology, Sweden. An image processing oriented sensor management architecture for UAV (unmanned aerial vehicle) IR/EO surveillance is presented. Some preliminary results of navigation aided image processing in UAV applications are demonstrated, such as SLAM (simultaneous localization and mapping), structure from motion and geolocation, target tracking, and detection of moving objects. The design goal of the measurement system is to emulate a UAV-mounted sensor gimbal using a stand-alone system. The minimal configuration of the system consists of a gyro-stabilized gimbal with IR and CCD sensors and an integrated high-performance navigation system. The navigation system combines dGPS real-time kinematics (RTK) data with data from an inertial measurement unit (IMU) mounted with reference to the optical sensors. The gimbal is to be used as an experimental georeferenced sensor platform, using a choice of carriers, to produce militarily relevant image sequences for studies of image processing and sensor control on moving surveillance and reconnaissance platforms. Furthermore, a high resolution synthetic environment, developed for sensor simulations in the visual and infrared wavelengths, is presented. © 2004 Wiley Periodicals, Inc. [source]


    Sex Assessment from the Sacral Base by Means of Image Processing

    JOURNAL OF FORENSIC SCIENCES, Issue 2 2009
    Stefano Benazzi Ph.D.
    Abstract: To help improve sex assessment from skeletal remains, the present study considers the diagnostic value of the sacral base (basis ossis sacri) based on its planar image and related metric data. For this purpose, 114 adult sacra of known sex and age from two early 20th century Italian populations were examined, the first from Bologna, northern Italy (n = 76), and the second from Sassari, Sardinia (n = 38). Digital photos of the sacral base were taken with each bone in a standardized orientation. Technical drawing software was used to trace its profile and to measure related dimensions (area, perimeter, and breadth of S1 and total breadth of the sacrum). The measurements were subjected to discriminant and classification function analyses. The sex prediction success of 93.2% for the Bolognese sample, 81.6% for the Sassarese sample, and 88.3% for the pooled sample indicates that the first sacral vertebra is a good character for sex determination. [source]


    Color Separation in Forensic Image Processing

    JOURNAL OF FORENSIC SCIENCES, Issue 1 2006
    Charles E. H. Berger Ph.D.
    ABSTRACT: In forensic image processing, it is often important to be able to separate a feature from an interfering background or foreground, or to demonstrate colors within an image to be different from each other. In this study, a color deconvolution algorithm that can accomplish this task is described, and it is applied to color separation problems in document and fingerprint examination. Subtle color differences (sometimes invisible to the naked eye) are found to be sufficient for separation, which is demonstrated successfully for several cases where color differences were shown to exist, or where colors were removed from the foreground or background. The software is available for free in the form of an Adobe® Photoshop®-compatible plug-in. [source]
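
    A simplified stand-in for the colour-separation idea: treat each pixel as a linear mixture of a feature colour and a background colour and solve for the per-pixel abundances by least squares. The reference colours, the linear mixing model and the function names are assumptions; the published plug-in's actual deconvolution algorithm is not reproduced here.

    ```python
    import numpy as np

    def separate_colors(image, feature_rgb, background_rgb):
        """Least-squares unmixing of each pixel into a feature and a background
        component (a simplified stand-in for colour deconvolution).

        image:          float RGB array in [0, 1], shape (H, W, 3).
        feature_rgb:    reference colour of the feature of interest.
        background_rgb: reference colour of the interfering back/foreground.
        Returns the per-pixel abundance of the feature colour.
        """
        basis = np.stack([feature_rgb, background_rgb], axis=1)    # 3 x 2 matrix
        pixels = image.reshape(-1, 3).T                            # 3 x N
        abundances, *_ = np.linalg.lstsq(basis, pixels, rcond=None)
        feature_map = abundances[0].reshape(image.shape[:2])
        return np.clip(feature_map, 0.0, 1.0)

    # Toy example: blue ink (feature) written over a yellowish background.
    img = np.zeros((2, 2, 3))
    img[...] = [0.9, 0.85, 0.4]            # background
    img[0, 0] = [0.15, 0.2, 0.8]           # a stroke of blue ink
    print(separate_colors(img, feature_rgb=[0.15, 0.2, 0.8],
                          background_rgb=[0.9, 0.85, 0.4]))
    ```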


    Ground monitoring the light-shadow windows of a tree canopy to yield canopy light interception and morphological traits

    PLANT CELL & ENVIRONMENT, Issue 8 2000
    Rita Giuliani
    ABSTRACT Monitoring the light-shadow windows of a tree via a grid system on the ground was performed on sunny summer days at high spatial resolution using a custom-built, inexpensive scanner. The measurements were taken with two goals: (1) to quickly and remotely quantify the overall, short-wave solar radiation (300-1100 nm) intercepted by the tree canopy, and (2) to yield such crown geometric traits as shape, size and the number of theoretical canopy leaf layers (leaf layer index, LLI) in relation to the section orthogonal to sunbeam direction (sun window). The ground readings at each measurement over the day were used to project a digitized shadow image. Image processing was applied and the intercepted radiation was calculated as the difference from the corresponding incoming radiation above the canopy. Tree-crown size and shape were profiled via computer imaging by analysing the different shadow images acquired at the various solar positions during the day. It is notable that these combined images yielded the crown features without having to parameterize such canopy characteristics as foliage extension and spatial distribution. [source]


    Time-lapsed imaging for in-process evaluation of supercritical fluid processing of tissue engineering scaffolds

    BIOTECHNOLOGY PROGRESS, Issue 4 2009
    Melissa L. Mather
    Abstract This article demonstrates the application of time-lapsed imaging and image processing to inform the supercritical processing of tissue scaffolds that are integral to many regenerative therapies. The methodology presented provides online quantitative evaluation of the complex process of scaffold formation in supercritical environments. The capabilities of the developed system are demonstrated through comparison of scaffolds formed from polymers with different molecular weight and with different venting times. Visual monitoring of scaffold fabrication enabled key events in the supercritical processing of the scaffolds to be identified, including the onset of polymer plasticization, supercritical points and foam formation. Image processing of images acquired during the foaming process enabled quantitative tracking of the growing scaffold boundary that provided new insight into the nature of scaffold foaming. Further, this quantitative approach assisted in the comparison of different scaffold fabrication protocols. Observed differences in scaffold formation were found to persist post-fabrication, as evidenced by micro x-ray computed tomography (µ-x-ray CT) images. It is concluded that time-lapsed imaging in combination with image processing is a convenient and powerful tool to provide insight into the scaffold fabrication process. © 2009 American Institute of Chemical Engineers Biotechnol. Prog., 2009 [source]


    A programming environment for behavioural animation

    COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 5 2002
    Frédéric Devillers
    Abstract Behavioural models offer the ability to simulate autonomous agents like organisms and living beings. Psychological studies have shown that human behaviour can be described by a perception-decision-action loop, in which the decisional process should integrate several programming paradigms such as real time, concurrency and hierarchy. Building such systems for interactive simulation requires the design of a reactive system treating flows of data to and from the environment, and involving task control and preemption. Since a complete mental model based on vision and image processing cannot be constructed in real time using purely geometrical information, higher levels of information are needed in a model of the virtual environment. For example, the autonomous actors of a virtual world would exploit the knowledge of the environment topology to navigate through it. Accordingly, in this paper we present our programming environment for real-time behavioural animation, which comprises a general animation and simulation platform, a behavioural modelling language and a scenario-authoring tool. These tools have been used for different applications such as pedestrian and car driver interaction in urban environments, or a virtual museum populated by a group of visitors. Copyright © 2002 John Wiley & Sons, Ltd. [source]


    A Windows-based interface for teaching image processing

    COMPUTER APPLICATIONS IN ENGINEERING EDUCATION, Issue 2 2010
    Melvin Ayala
    Abstract The use of image processing in research represents a challenge to the scientific community interested in its various applications but not familiar with this area of expertise. In academia as well as in industry, fundamental concepts such as image transformations, filtering, noise removal, morphology, and convolution/deconvolution, among others, require extra effort to be understood. Additionally, algorithms for image reading and visualization in computers are not always easy to develop by inexperienced researchers. This type of environment has led to an adverse situation where most students and researchers develop their own image processing code for operations which are already standards in image processing, a redundant process which only exacerbates the situation. To resolve this dilemma, this article proposes a user-friendly computer interface with a dual objective: to free students and researchers from the learning time needed to understand and apply diverse imaging techniques, and to provide them with the option to enhance or reprogram such algorithms through direct access to the software code. The interface was thus developed with the intention to assist in understanding and performing common image processing operations through simple commands that can be performed mostly by mouse clicks. The visualization of pseudo code after each command execution makes the interface attractive, while saving time and making it easier for users to learn such practical concepts. © 2009 Wiley Periodicals, Inc. Comput Appl Eng Educ 18: 213-224, 2010; Published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/cae.20171 [source]


    Teaching image processing: A two-step process

    COMPUTER APPLICATIONS IN ENGINEERING EDUCATION, Issue 3 2008
    Clarence Han-Wei Yapp
    Abstract An interactive program for teaching digital image processing techniques is presented in this article. Instead of heavy programming tasks and mathematical functions, students are led step by step through the exercises and then allowed to experiment. This article evaluates the proposed program and compares it with existing techniques. © 2008 Wiley Periodicals, Inc. Comput Appl Eng Educ 16: 211-222, 2008; Published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/cae.20149 [source]


    Horizontal Roadway Curvature Computation Algorithm Using Vision Technology

    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 2 2010
    Yichang (James) Tsai
    However, collecting such data is time-consuming, costly, and dangerous using traditional, manual surveying methods. It is especially difficult to perform such manual measurement when roadways have high traffic volumes. Thus, it would be valuable for transportation agencies if roadway curvature data could be computed from photographic images taken using low-cost digital cameras. This is the first article that develops an algorithm using emerging vision technology to acquire horizontal roadway curvature data from roadway images to perform roadway safety assessment. The proposed algorithm consists of four steps: (1) curve edges image processing, (2) mapping edge positions from an image domain to the real-world domain, (3) calibrating camera parameters, and (4) calculating the curve radius and center from curve points. The proposed algorithm was tested on roadways having various levels of curves and using different image sources to demonstrate its capability. The ground truth curvatures for two cases were also collected to evaluate the error of the proposed algorithm. The test results are very promising, and the computed curvatures are especially accurate for curves of small radii (less than 66 m/200 ft) with less than 1.0% relative errors with respect to the ground truth data. The proposed algorithm can be used as an alternative method that complements the traditional measurement methods used by state DOTs to collect roadway curvature data. [source]
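
    Step (4), calculating the curve radius and center from curve points, can be posed as an algebraic least-squares circle fit once the edge positions are in real-world coordinates. The sketch below uses the classical Kasa fit with synthetic data; it is one reasonable estimator, not necessarily the one used in the article.

    ```python
    import numpy as np

    def fit_circle(x, y):
        """Algebraic least-squares (Kasa) circle fit.

        x, y: real-world coordinates of points along the lane edge (metres).
        Returns (xc, yc, radius).
        """
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        # Solve  x^2 + y^2 + D*x + E*y + F = 0  for D, E, F in a least-squares sense.
        A = np.column_stack([x, y, np.ones_like(x)])
        b = -(x**2 + y**2)
        (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
        xc, yc = -D / 2.0, -E / 2.0
        radius = np.sqrt(xc**2 + yc**2 - F)
        return xc, yc, radius

    # Points sampled from a 66 m radius curve with a little measurement noise.
    theta = np.linspace(0.1, 0.6, 12)
    rng = np.random.default_rng(0)
    xs = 66.0 * np.cos(theta) + rng.normal(0, 0.05, theta.size)
    ys = 66.0 * np.sin(theta) + rng.normal(0, 0.05, theta.size)
    print(fit_circle(xs, ys))   # radius close to 66 m
    ```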


    User transparency: a fully sequential programming model for efficient data parallel image processing

    CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 6 2004
    F. J. Seinstra
    Abstract Although many image processing applications are ideally suited for parallel implementation, most researchers in imaging do not benefit from high-performance computing on a daily basis. Essentially, this is due to the fact that no parallelization tools exist that truly match the image processing researcher's frame of reference. As it is unrealistic to expect imaging researchers to become experts in parallel computing, tools must be provided to allow them to develop high-performance applications in a highly familiar manner. In an attempt to provide such a tool, we have designed a software architecture that allows transparent (i.e. sequential) implementation of data parallel imaging applications for execution on homogeneous distributed memory MIMD-style multicomputers. This paper presents an extensive overview of the design rationale behind the software architecture, and gives an assessment of the architecture's effectiveness in providing significant performance gains. In particular, we describe the implementation and automatic parallelization of three well-known example applications that contain many fundamental imaging operations: (1) template matching; (2) multi-baseline stereo vision; and (3) line detection. Based on experimental results we conclude that our software architecture constitutes a powerful and user-friendly tool for obtaining high performance in many important image processing research areas. Copyright © 2004 John Wiley & Sons, Ltd. [source]


    Experimental determination of saltating glass particle dispersion in a turbulent boundary layer

    EARTH SURFACE PROCESSES AND LANDFORMS, Issue 14 2006
    H. T. Wang
    Abstract A horizontal saltation layer of glass particles in air is investigated experimentally over a flat bed and also over a triangular ridge in a wind tunnel. Particle concentrations are measured by light scattering diffusion (LSD) and digital image processing, and velocities using particle image velocimetry (PIV). All the statistical moments of the particle concentration are determined, such as the mean concentration, root mean square concentration fluctuations, and skewness and flatness coefficients. Over the flat bed, it is confirmed that the mean concentration decreases exponentially with height, the mean dispersion height being a significant length scale. It is shown that the concentration distribution follows quite well a lognormal distribution. Over the ridge, measurements were made at the top of the ridge and in the cavity region and are compared with measurements without the ridge. On the hill crest, particles are retarded, the saltation layer decreases in thickness and concentration is increased. Downwind of the ridge, particle flow behaves like a jet; in particular, no particle return flow is observed. Copyright © 2006 John Wiley & Sons, Ltd. [source]


    Morphometric analysis and tectonic interpretation of digital terrain data: a case study

    EARTH SURFACE PROCESSES AND LANDFORMS, Issue 8 2003
    Gyozo Jordan
    Abstract Tectonic movement along faults is often reflected by characteristic geomorphological features such as linear valleys, ridgelines and slope-breaks, steep slopes of uniform aspect, regional anisotropy and tilt of terrain. Analysis of digital elevation models, by means of numerical geomorphology, provides a means of recognizing fractures and characterizing the tectonics of an area in a quantitative way. The objective of this study is to investigate the use of numerical geomorphometric methods for tectonic geomorphology through a case study. The methodology is based on general geomorphometry. In this study, the basic geometric attributes (elevation, slope, aspect and curvatures) are complemented with the automatic extraction of ridge and valley lines and surface-specific points. Evans' univariate and bivariate methodology of general geomorphometry is extended with texture (spatial) analysis methods, such as trend, autocorrelation, spectral, and network analysis. Terrain modelling is implemented with the integrated use of: (1) numerical differential geometry; (2) digital drainage network analysis; (3) digital image processing; and (4) statistical and geostatistical analysis. Application of digital drainage network analysis is emphasized. A simple shear model with a principal displacement zone of NE-SW orientation can account for most of the morphotectonic features found in the basin by geological and digital tectonic geomorphology analyses. Copyright © 2003 John Wiley & Sons, Ltd. [source]
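
    As an illustration of the basic geometric attributes mentioned above, slope and aspect follow directly from the partial derivatives of the elevation grid. The finite-difference scheme, grid spacing and sign conventions below are assumptions for illustration, not those of the study.

    ```python
    import numpy as np

    def slope_aspect(dem, cell_size=30.0):
        """Slope (degrees) and downslope aspect (degrees clockwise from north).

        dem:       2D elevation array with rows running north -> south.
        cell_size: grid spacing in the same horizontal units as the elevations
                   (a 30 m grid is assumed here).
        """
        d_row, d_col = np.gradient(dem, cell_size)   # derivatives along rows, columns
        dz_dn = -d_row                               # northward derivative (rows go south)
        dz_de = d_col                                # eastward derivative
        slope = np.degrees(np.arctan(np.hypot(dz_de, dz_dn)))
        aspect = np.degrees(np.arctan2(-dz_de, -dz_dn)) % 360.0
        return slope, aspect

    # A plane dipping towards the east at roughly 5.7 degrees.
    cols = np.arange(50) * 30.0
    dem = 500.0 - 0.1 * np.tile(cols, (50, 1))
    slope, aspect = slope_aspect(dem)
    print(round(float(slope.mean()), 1), round(float(aspect.mean()), 1))   # ~5.7 ~90.0
    ```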


    Estimation of erosion and deposition volumes in a large, gravel-bed, braided river using synoptic remote sensing

    EARTH SURFACE PROCESSES AND LANDFORMS, Issue 3 2003
    Stuart N. Lane
    Abstract System-scale detection of erosion and deposition is crucial in order to assess the transferability of findings from scaled laboratory and small field studies to larger spatial scales. Increasingly, synoptic remote sensing has the potential to provide the necessary data. In this paper, we develop a methodology for channel change detection, coupled to the use of synoptic remote sensing, for erosion and deposition estimation, and apply it to a wide, braided, gravel-bed river. This is based upon construction of digital elevation models (DEMs) using digital photogrammetry, laser altimetry and image processing. DEMs of difference were constructed by subtracting DEM pairs, and a method for propagating error into the DEMs of difference was used under the assumption that each elevation in each surface contains error that is random, independent and Gaussian. Data were acquired for the braided Waimakariri River, South Island, New Zealand. The DEMs had a 1·0 m pixel resolution and covered an area of riverbed that is more than 1 km wide and 3·3 km long. Application of the method showed the need to use survey-specific estimates of point precision, as project design and manufacturer estimates of precision overestimate a priori point quality. This finding aside, the analysis showed that even after propagation of error it was possible to obtain high quality DEMs of difference for process estimation, over a spatial scale that has not previously been achieved. In particular, there was no difference in the ability to detect erosion and deposition. The estimates of volumes of change, despite being downgraded as compared with traditional cross-section survey in terms of point precision, produced more reliable erosion and deposition estimates as a result of the large improvement in spatial density that synoptic methods provide. Copyright © 2003 John Wiley & Sons, Ltd. [source]
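
    Under the stated error model (random, independent, Gaussian errors in each surface), the propagated uncertainty of a DEM of difference is sigma_dod = sqrt(sigma_1^2 + sigma_2^2), and changes smaller than a chosen multiple of it are treated as noise. The sketch below applies this at roughly 95% confidence; the precisions, threshold and data are illustrative, not the survey-specific values discussed in the paper.

    ```python
    import numpy as np

    def dod_with_error(dem_new, dem_old, sigma_new, sigma_old, t=1.96):
        """DEM of difference with error propagated from two independent surveys.

        sigma_new, sigma_old: point precision of each survey (same units as the DEMs).
        t: critical value; 1.96 retains changes significant at roughly 95% confidence.
        Returns the raw difference and a version with sub-threshold change set to zero.
        """
        dod = dem_new - dem_old
        sigma_dod = np.sqrt(sigma_new**2 + sigma_old**2)   # propagated error
        lod = t * sigma_dod                                # limit of detection
        significant = np.where(np.abs(dod) > lod, dod, 0.0)
        return dod, significant

    rng = np.random.default_rng(1)
    bed = rng.normal(100.0, 0.02, (50, 50))                # illustrative riverbed surface
    survey_old = bed + rng.normal(0.0, 0.05, bed.shape)    # two noisy surveys of it
    survey_new = bed + rng.normal(0.0, 0.05, bed.shape)
    survey_new[10:20, 10:20] += 0.5                        # half a metre of deposition
    dod, sig = dod_with_error(survey_new, survey_old, sigma_new=0.05, sigma_old=0.05)
    print(round(float(sig[12, 12]), 2))                    # the deposit survives the test
    ```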


    Active Vegetations Can Be Differentiated from Chronic Vegetations by Visual Inspection of Standardized Two-Dimensional Echocardiograms

    ECHOCARDIOGRAPHY, Issue 2 2000
    TAHIR TAK, M.D., PH.D.
    The ability to differentiate active from chronic valvular vegetations (VEGs) by digital image processing and by visual observation was evaluated in 18 patients with a clinical diagnosis of infective endocarditis (IE). Two-dimensional echocardiographic (2-DE) examinations were performed on all patients at diagnosis and after a mean period of 52 days. Two comparable images (active and chronic) from the same patient and in the same phase of the cardiac cycle were digitized, magnified, and displayed on a high resolution monitor. The mean pixel intensity (MPI) was 72 ± 14 in the active stage and 143 ± 23 in the chronic stage (P < 0.0001). The VEG size was 0.64 ± 0.15 cm2 in the active stage and decreased to 0.46 ± 0.17 cm2 in the chronic stage (P < 0.001). Two experienced echocardiographers, who were blinded to the age of the VEGs, identified each echocardiographic image as active or chronic based on visual observation of density of the VEGs. The VEGs were correctly identified as active or chronic in 17 out of the 18 patients. In summary, although digital image processing of 2-DE may be useful, the density of VEGs assessed by visual inspection will help differentiate between active and chronic VEGs of IE. The standardization procedure at the time of the initial study and use of identical gain settings in subsequent studies are key factors in making this distinction. [source]


    Quantifying dye tracers in soil profiles by image processing

    EUROPEAN JOURNAL OF SOIL SCIENCE, Issue 2 2000
    I. Forrer
    Summary Developing and testing models for solute transport in the field requires experimental data on the spreading of solutes in the soil. Obtaining such data is costly, and a substantial part of the total costs is in the preparation and chemical analysis of the tracing compounds in the gathered samples. We developed a cheap method to quantify the concentration of the mobile dye tracer Brilliant Blue FCF from digitized photographs of stained soil profiles, and we have tested it in the field. Soil sampling and chemical analyses were necessary only to establish a calibration relation between the dye content and the colour of the soil. The digital images were corrected for geometrical distortions, varying background brightness, and colour tinges, and then they were analysed to determine the soil colour at sampling points in the profiles. The resident concentration of the dye was modelled by polynomial regression with the primary colours red, green, blue and the soil depth as explanatory variables. Concentration maps of Brilliant Blue were then computed from the digitized images with a spatial resolution of 1 mm. Validation of the technique with independent data showed that the method predicted the concentration of the dye well, provided the corrected images contained only the colours included in the calibration. [source]
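
    A sketch of the calibration described above: ordinary least-squares polynomial regression of dye concentration on the primary colours and depth. The polynomial degree, the absence of cross terms and all data values are illustrative assumptions, not the published calibration.

    ```python
    import numpy as np

    def design_matrix(rgb, depth, degree=2):
        """Polynomial design matrix in R, G, B and depth (no cross terms)."""
        X = np.column_stack([np.atleast_2d(rgb), np.atleast_1d(depth)])
        cols = [np.ones(len(X))]
        for p in range(1, degree + 1):
            for j in range(X.shape[1]):
                cols.append(X[:, j] ** p)
        return np.column_stack(cols)

    def fit_dye_calibration(rgb, depth, concentration, degree=2):
        """Least-squares fit of dye concentration on colour and depth."""
        A = design_matrix(rgb, depth, degree)
        beta, *_ = np.linalg.lstsq(A, concentration, rcond=None)
        return lambda rgb_new, depth_new: design_matrix(rgb_new, depth_new, degree) @ beta

    # Illustrative calibration set: bluer, shallower pixels carry more dye.
    rng = np.random.default_rng(2)
    rgb = rng.uniform(0, 255, (40, 3))
    depth = rng.uniform(0.0, 1.0, 40)
    conc = 0.8 * rgb[:, 2] - 0.3 * rgb[:, 0] - 20.0 * depth + rng.normal(0, 2, 40)
    predict = fit_dye_calibration(rgb, depth, conc)
    print(round(float(predict([120, 130, 200], 0.2)[0]), 1))
    ```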


    Expression of psoriasis-associated fatty acid-binding protein in senescent human dermal microvascular endothelial cells

    EXPERIMENTAL DERMATOLOGY, Issue 9 2004
    Moon Kyung Ha
    Abstract: Aging is associated with the progressive pathophysiologic modification of endothelial cells. In vitro endothelial cell senescence is accompanied by proliferative activity failure and by perturbations in gene and protein expressions. Moreover, this cellular senescence in culture has been proposed to reflect processes that occur in aging organisms. In order to observe the changing patterns of protein expression in senescent human dermal microvascular endothelial cells (HDMECs), proteins obtained from both early- and late-passaged HDMECs were separated by two-dimensional electrophoresis, visualized by silver staining, and quantified by image processing. Proteins of interest were extracted by in-gel digestion with trypsin and quantified by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS), by searching the National Center for Biotechnology Information protein-sequence database. More than 2000 spots were detected by 2D electrophoresis within a linear pH range of 3-10. Twenty-two major differentially expressed spots were observed in serially passaged HDMECs and identified with high confidence by MALDI-TOF-MS. One of these spots was found to be a 14-15 kDa psoriasis-associated fatty acid-binding protein (PA-FABP) with high affinity for long-chain fatty acids. The expression of PA-FABP was confirmed to be elevated in senescent HDMECs (passage 20) by fluorescence-activated cell sorting (FACS), confocal laser microscopy, and by immunohistochemistry in aged human skin tissue. Our results suggest that the overexpression of FABP in cultured senescent HDMECs is closely related to skin aging. [source]


    Integrating remote sensing in fisheries control

    FISHERIES MANAGEMENT & ECOLOGY, Issue 5 2005
    N. KOURTI
    Abstract: To complement existing fishery control measures, in particular the Vessel Monitoring System (VMS), a pilot operational system to find fishing vessels in satellite images was set up. Radar is the mainstay of the system, which furthermore includes fully automated image processing and communication protocols with the authorities. Different image types are used to match different fisheries: oceanic, shelf and coastal. Vessel detection rates were 75-100% depending on image type and vessel size. Output of the system, in the form of an overview of vessel positions in the area highlighting any discrepancies with otherwise reported positions, can reach the authorities within 30 min of the satellite image being taken, fast enough to task airborne inspection for follow-up. [source]


    Influence of pore size and geometry on peat unsaturated hydraulic conductivity computed from 3D computed tomography image analysis

    HYDROLOGICAL PROCESSES, Issue 21 2010
    F. Rezanezhad
    Abstract In organic soils, hydraulic conductivity is related to the degree of decomposition and soil compression, which reduce the effective pore diameter and consequently restrict water flow. This study investigates how the size distribution and geometry of air-filled pores control the unsaturated hydraulic conductivity of peat soils using high-resolution (45 µm) three-dimensional (3D) X-ray computed tomography (CT) and digital image processing of four peat sub-samples from varying depths under a constant soil water pressure head. Pore structure and configuration in peat were found to be irregular, with volume and cross-sectional area showing fractal behaviour; this suggests that pores in deeper, more decomposed peat, which have smaller fractal dimensions, also have higher tortuosity and lower connectivity, which influences hydraulic conductivity. The image analysis showed that the large reduction of unsaturated hydraulic conductivity with depth is essentially controlled by air-filled pore hydraulic radius, tortuosity, air-filled pore density and the fractal dimension, reflecting the degree of decomposition and compression of the organic matter. The comparisons between unsaturated hydraulic conductivity computed from the air-filled pore size and geometric distribution showed satisfactory agreement with direct measurements using the permeameter method. This understanding is important in characterizing peat properties and their heterogeneity for monitoring the progress of complex flow processes at the field scale in peatlands. Copyright © 2010 John Wiley & Sons, Ltd. [source]


    Dentine demineralization when subjected to EDTA with or without various wetting agents: a co-site digital optical microscopy study

    INTERNATIONAL ENDODONTIC JOURNAL, Issue 4 2008
    G. De-Deus
    Abstract Aim: To analyse quantitatively the chelating ability of ethylenediaminetetraacetic acid (EDTA) and three common EDTA-based associations with wetting agents. Methodology: Twelve maxillary human molars were selected, from which 3 mm thick discs were obtained from the cervical third of the root. Following the creation of a standardized smear layer, co-site microscopy image sequences of the dentine surface submitted to EDTA, EDTA plus 0.1% cetavlon® (Sigma Chemical Co., St Louis, MO, USA), EDTA plus 1.25% sodium lauryl ether sulphate and SmearClear™ (Sybron Endo, Orange, CA, USA) were obtained after several cumulative demineralization times. Sixteen images were obtained of each dentine sample for each experimental time, at 1000× magnification. An image processing and analysis sequence was used to measure the area of open tubules for each experimental time. Thus, it was possible to follow the demineralization process and quantitatively analyse the effect of the various substances. The Student's t-test was used to assess differences between experimental groups. Results: EDTA solution had the strongest effect at all experimental times whilst the association of EDTA with wetting agents showed a weaker chelating effect and this difference was statistically significant (P < 0.05). Conclusions: (i) The EDTA solution had the strongest effect at all experimental times (P < 0.05); (ii) the association of EDTA with wetting agents did not improve the chelating power of the solution; (iii) co-site optical microscopy represents a powerful approach to compare directly, longitudinally and quantitatively the ability of the chelating solutions. [source]


    Voxel-based meshing and unit-cell analysis of textile composites

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 7 2003
    Hyung Joo Kim
    Abstract Unit-cell homogenization techniques are frequently used together with the finite element method to compute effective mechanical properties for a wide range of different composites and heterogeneous materials systems. For systems with very complicated material arrangements, mesh generation can be a considerable obstacle to usage of these techniques. In this work, pixel-based (2D) and voxel-based (3D) meshing concepts borrowed from image processing are thus developed and employed to construct the finite element models used in computing the micro-scale stress and strain fields in the composite. The potential advantage of these techniques is that generation of unit-cell models can be automated, thus requiring far less human time than traditional finite element models. Essential ideas and algorithms for implementation of proposed techniques are presented. In addition, a new error estimator based on sensitivity of virtual strain energy to mesh refinement is presented and applied. The computational costs and rate of convergence for the proposed methods are presented for three different mesh-refinement algorithms: uniform refinement; selective refinement based on material boundary resolution; and adaptive refinement based on error estimation. Copyright © 2003 John Wiley & Sons, Ltd. [source]
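
    A minimal sketch of the voxel-based meshing idea: every solid voxel of a segmented 3D image becomes one eight-node hexahedral element, with corner nodes shared between neighbouring voxels. The node-numbering convention and the tiny example mask are assumptions; error estimation and mesh refinement are not shown.

    ```python
    import numpy as np

    def voxel_hex_mesh(mask, spacing=1.0):
        """Build a hexahedral mesh from a 3D boolean voxel mask.

        Every True voxel becomes one 8-node brick element whose nodes sit on the
        voxel corners; corner nodes shared by neighbouring voxels are merged.
        Returns (nodes, elements): nodes is (n_nodes, 3) coordinates, elements
        is (n_elems, 8) indices into nodes.
        """
        nx, ny, nz = mask.shape

        def nid(i, j, k):
            # Global index of the corner node at lattice position (i, j, k).
            return (i * (ny + 1) + j) * (nz + 1) + k

        elements = []
        for i, j, k in zip(*np.nonzero(mask)):
            elements.append([nid(i, j, k),         nid(i + 1, j, k),
                             nid(i + 1, j + 1, k), nid(i, j + 1, k),
                             nid(i, j, k + 1),     nid(i + 1, j, k + 1),
                             nid(i + 1, j + 1, k + 1), nid(i, j + 1, k + 1)])
        elements = np.array(elements, dtype=int)

        # Keep only the corner nodes actually used and renumber them compactly.
        used, elements = np.unique(elements, return_inverse=True)
        elements = elements.reshape(-1, 8)
        ii, rem = np.divmod(used, (ny + 1) * (nz + 1))
        jj, kk = np.divmod(rem, nz + 1)
        nodes = np.column_stack([ii, jj, kk]) * spacing
        return nodes, elements

    # A 2 x 1 x 1 voxel "composite" unit cell.
    mask = np.zeros((2, 1, 1), dtype=bool)
    mask[:, 0, 0] = True
    nodes, elems = voxel_hex_mesh(mask, spacing=0.5)
    print(nodes.shape, elems.shape)   # (12, 3) nodes, (2, 8) elements
    ```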


    Robot vision with cellular neural networks: a practical implementation of new algorithms

    INTERNATIONAL JOURNAL OF CIRCUIT THEORY AND APPLICATIONS, Issue 4 2007
    Giovanni Egidio Pazienza
    Abstract Cellular neural networks (CNNs) are well suited for image processing due to the possibility of parallel computation. In this paper, we present two algorithms for tracking and obstacle avoidance using CNNs. Furthermore, we show the implementation of an autonomous robot guided using only real-time visual feedback; the image processing is performed entirely by a CNN system embedded in a digital signal processor (DSP). We successfully tested the two algorithms on this robot. Copyright © 2006 John Wiley & Sons, Ltd. [source]


    Computer-based morphometry of brain

    INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 2 2010
    Bang-Bon Koo
    Abstract Over the past decade, the importance of probing the anatomy of the brain has reemerged as an important field of neuroscience. In combination with functional imaging techniques, the rapid advancement of neuroimaging techniques, such as magnetic resonance imaging, and their growing applicability in studying brain morphometry has led to great advances in neuroscience research. Considering the requirements of the diverse technologies, from image processing to statistics, in performing morphometry of the brain, it is critical to have an overall understanding of this subject. The major objective of this review is to provide a practical introduction to this field. The review starts by covering basic concepts and techniques that are commonly used in morphometry of structural magnetic resonance imaging and then extends to further technical perspectives. © 2010 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 20, 117-125, 2010 [source]


    Satellite image segmentation using hybrid variable genetic algorithm

    INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 3 2009
    Mohamad M. Awad
    Abstract Image segmentation is an important task in image processing and analysis. Many segmentation methods have been used to segment satellite images. The success of each method depends on the characteristics of the acquired image, such as resolution limitations, and on the percentage of imperfections in the process of image acquisition due to noise. Many of these methods require a priori knowledge, which is difficult to obtain. Some of them are parametric statistical methods that use many parameters which are dependent on image properties. In this article, a new unsupervised nonparametric method is developed to segment satellite images into homogeneous regions without any a priori knowledge. The new method is called the hybrid variable genetic algorithm (HVGA). The variability is found in the variable number of cluster centers and in the changeable mutation rate. In addition, this new method uses different heuristic processes to increase the efficiency of the genetic algorithm in avoiding local optimal solutions. Experiments performed on two different satellite images (Landsat and Spot) proved the high accuracy and efficiency of HVGA compared with two other unsupervised, nonparametric segmentation methods: the genetic algorithm (GA) and the self-organizing map (SOM). The verification of the results included stability and accuracy measurements using an evaluation method implemented from the functional model (FM) and field surveys. © 2009 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 19, 199-207, 2009 [source]


    Wavelet algorithms for deblurring models

    INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 3 2004
    Michael K. Ng
    Abstract Blur removal is an important problem in signal and image processing. In this article, we formulate the deblurring problem within a wavelet framework and design a methodology to find deblurring filters. Using these deblurring filters, we derive an iterative deblurring algorithm and prove its convergence. Simulation results are reported to illustrate the proposed framework and methodology. © 2004 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 14, 113-121, 2004; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.20014 [source]
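
    One standard way to realise an iterative, wavelet-regularised deblurring scheme is a Landweber/ISTA-type loop that alternates a gradient step on the data-fit term with soft thresholding of wavelet coefficients. The sketch below (using PyWavelets and SciPy, with a Gaussian blur model and illustrative parameters) shows that generic scheme, not the specific deblurring filters or convergence analysis of the article.

    ```python
    import numpy as np
    import pywt
    from scipy.ndimage import gaussian_filter

    def wavelet_deblur(blurred, sigma_blur, n_iter=50, step=1.0,
                       thresh=0.002, wavelet="db4", level=3):
        """ISTA-style deblurring: gradient step on ||H x - y||^2, then soft
        thresholding of the wavelet coefficients of the iterate.

        The blur H is modelled here as a Gaussian of width sigma_blur, and the
        same (symmetric) operator is used for the adjoint step.
        """
        x = blurred.copy()
        for _ in range(n_iter):
            residual = gaussian_filter(x, sigma_blur) - blurred
            x = x - step * gaussian_filter(residual, sigma_blur)   # adjoint applied
            coeffs = pywt.wavedec2(x, wavelet, level=level)
            coeffs = [coeffs[0]] + [
                tuple(pywt.threshold(c, thresh, mode="soft") for c in band)
                for band in coeffs[1:]
            ]
            x = pywt.waverec2(coeffs, wavelet)
        return x

    # Synthetic test: blur a simple piecewise-constant image and restore it.
    truth = np.zeros((64, 64))
    truth[16:48, 16:48] = 1.0
    noise = np.random.default_rng(3).normal(0, 0.01, truth.shape)
    blurred = gaussian_filter(truth, 2.0) + noise
    restored = wavelet_deblur(blurred, sigma_blur=2.0)
    print(round(float(np.abs(restored - truth).mean()), 3),
          round(float(np.abs(blurred - truth).mean()), 3))
    ```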