Video Sequences (video + sequence)

Selected Abstracts


Image clustering for the exploration of video sequences

JOURNAL OF THE AMERICAN SOCIETY FOR INFORMATION SCIENCE AND TECHNOLOGY, Issue 4 2006
Vicenç Torra
In this article we present a system for the exploration of video sequences. The system, GAMBAL for the Exploration of Video Sequences (GAMBAL-EVS), segments video sequences, extracting an image for each shot, and then clusters such images and presents them in a visualization system. The system allows the user to find similarities between images and to proceed through the video sequences to find the relevant ones. [source]
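The abstract does not give GAMBAL-EVS's internals, but the pipeline it describes (segment into shots, pick one image per shot, cluster the images) can be sketched. The difference threshold and plain k-means below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def detect_shots(frames, threshold=30.0):
    """Split a video into shots at large frame-to-frame intensity jumps."""
    boundaries = [0]
    for i in range(1, len(frames)):
        if np.abs(frames[i].astype(float) - frames[i - 1].astype(float)).mean() > threshold:
            boundaries.append(i)
    boundaries.append(len(frames))
    return [(boundaries[k], boundaries[k + 1]) for k in range(len(boundaries) - 1)]

def keyframes(frames, shots):
    """One representative image per shot: here simply the middle frame."""
    return [frames[(a + b) // 2] for a, b in shots]

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on flattened keyframe vectors."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((points[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels
```

The cluster labels would then drive the visualization layout, grouping visually similar shots together.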


Mixing virtual and real scenes in the site of ancient Pompeii

COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 1 2005
George Papagiannakis
Abstract This paper presents an innovative 3D reconstruction of ancient fresco paintings through the real-time revival of their fauna and flora, featuring groups of virtual animated characters with artificial-life dramaturgical behaviours in an immersive, fully mobile augmented reality (AR) environment. The main goal is to push the limits of current AR and virtual storytelling technologies and to explore the processes of mixed narrative design of fictional spaces (e.g. fresco paintings) where visitors can experience a high degree of realistic immersion. Based on a captured/real-time video sequence of the real scene in a video-see-through HMD set-up, these scenes are enhanced by the seamless accurate real-time registration and 3D rendering of realistic complete simulations of virtual flora and fauna (virtual humans and plants) in a real-time storytelling scenario-based environment. Thus the visitor of the ancient site is presented with an immersive and innovative multi-sensory interactive trip to the past. Copyright © 2005 John Wiley & Sons, Ltd. [source]


SecondSkin: An interactive method for appearance transfer

COMPUTER GRAPHICS FORUM, Issue 7 2009
A. van den Hengel
Abstract SecondSkin estimates an appearance model for an object visible in a video sequence, without the need for complex interaction or any calibration apparatus. This model can then be transferred to other objects, allowing a non-expert user to insert a synthetic object into a real video sequence so that its appearance matches that of an existing object, and changes appropriately throughout the sequence. As the method does not require any prior knowledge about the scene, the lighting conditions, or the camera, it is applicable to video which was not captured with this purpose in mind. However, this lack of prior knowledge precludes the recovery of separate lighting and surface reflectance information. The SecondSkin appearance model therefore combines these factors. The appearance model does require a dominant light-source direction, which we estimate via a novel process involving a small amount of user interaction. The resulting model estimate provides exactly the information required to transfer the appearance of the original object to new geometry composited into the same video sequence. [source]


Analog-VLSI, array-processor-based, Bayesian, multi-scale optical flow estimation

INTERNATIONAL JOURNAL OF CIRCUIT THEORY AND APPLICATIONS, Issue 1 2006
L. Török
Abstract Optical flow (OF) estimation aims at deriving a motion-vector field that characterizes motion in a video sequence of images. In this paper, we propose a new multi-scale (or scale-space) algorithm that generates OF on the cellular neural/non-linear network universal machine, a general-purpose analog-VLSI hardware, at a resolution of 128 × 128 with fair accuracy and at speeds above 100 frames/s. The performance of the hardware implementation of the proposed algorithm is measured on a standard image sequence. To the best of our knowledge, this is the first time that an OF estimator in hardware has been tested on a practical-size standard image sequence. Copyright © 2006 John Wiley & Sons, Ltd. [source]
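The CNN universal machine implementation is hardware-specific, but the multi-scale principle behind it (estimate motion coarsely on a downsampled pair, then refine at full resolution) can be illustrated for simple horizontal translation. This sketch is a generic coarse-to-fine demonstration, not the paper's algorithm:

```python
import numpy as np

def shift_estimate(a, b, max_shift):
    """Integer horizontal shift s minimizing SSD between a and b rolled by s."""
    best, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        err = ((np.roll(b, s, axis=1) - a) ** 2).mean()
        if err < best_err:
            best_err, best = err, s
    return best

def coarse_to_fine_shift(a, b):
    """Estimate the shift on a 2x-subsampled pair, then refine at full resolution."""
    coarse = 2 * shift_estimate(a[:, ::2], b[:, ::2], max_shift=3)
    candidates = (coarse - 1, coarse, coarse + 1)
    errs = [((np.roll(b, s, axis=1) - a) ** 2).mean() for s in candidates]
    return candidates[int(np.argmin(errs))]
```

The coarse pass halves the search range per level, which is what makes multi-scale estimation cheap enough for high frame rates.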


CNN-based architecture for real-time object-oriented video coding applications

INTERNATIONAL JOURNAL OF CIRCUIT THEORY AND APPLICATIONS, Issue 1 2005
Giuseppe Grassi
Abstract This paper presents a new CNN-based architecture for real-time video coding applications. The proposed approach, by exploiting object-oriented CNN algorithms and MPEG encoding capabilities, enables low bit-rate encoder/decoder to be designed. Simulation results using Claire video sequence show the effectiveness of the proposed scheme. Copyright © 2005 John Wiley & Sons, Ltd. [source]


MAP fusion method for superresolution of images with locally varying pixel quality

INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 4 2008
Kio Kim
Abstract Superresolution is a procedure that produces a high-resolution image from a set of low-resolution images. Many superresolution techniques are designed for optical cameras, which produce pixel values of well-defined uncertainty, while there are still various imaging modalities for which the uncertainty of the images is difficult to control. To construct a superresolution image from low-resolution images with varying uncertainty, one needs to keep track of the uncertainty values in addition to the pixel values. In this paper, we develop a probabilistic approach to superresolution to address the problem of varying uncertainty. As direct computation of the analytic solution for the superresolution problem is difficult, we suggest a novel algorithm for computing the approximate solution. As this algorithm is a noniterative method based on Kalman filter-like recursion relations, there is potential for a real-time implementation of the algorithm. To show the efficiency of our method, we apply this algorithm to a video sequence acquired by a forward-looking sonar system. © 2008 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 18, 242–250, 2008; Published online in Wiley InterScience (www.interscience.wiley.com). [source]
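The core idea of a Kalman filter-like recursion over measurements of varying uncertainty can be shown per pixel: each new observation is folded into the running estimate with a gain proportional to its reliability, in a single non-iterative pass. This is the generic scalar update, not the paper's exact MAP fusion:

```python
def fuse(mean, var, z, z_var):
    """Fold one new measurement z (variance z_var) into the running
    estimate (mean, var) -- the scalar Kalman-filter update."""
    gain = var / (var + z_var)
    return mean + gain * (z - mean), (1.0 - gain) * var

def fuse_sequence(measurements):
    """Single non-iterative pass over (value, variance) pairs for one pixel."""
    mean, var = measurements[0]
    for z, z_var in measurements[1:]:
        mean, var = fuse(mean, var, z, z_var)
    return mean, var
```

A high-variance (unreliable) observation barely moves the estimate, which is exactly the behaviour needed when pixel quality varies across the sequence.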


High-resolution images from compressed low-resolution video: Motion estimation and observable pixels

INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 2 2004
L. D. Alvarez
Abstract In this article, we address the problem of obtaining a high-resolution (HR) image from a compressed low-resolution (LR) video sequence. Motion information plays a critical role in solving this problem, and we determine which pixels in the sequence provide useful information for calculating the high-resolution image. The bit stream of hybrid motion compensated video compression methods includes low-resolution motion-compensated images; we therefore also study which pixels in these images should be used to increase the quality of the reconstructed image. Once the useful (observable) pixels in the low-resolution and motion-compensated sequences have been detected, we modify the acquisition model to only account for these observations. The proposed approach is tested on real compressed video sequences and the improved performance is reported. © 2004 Wiley Periodicals, Inc. Int J Imaging Syst Technol 14, 58–66, 2004; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.20008 [source]


Video compression for multicast environments using spatial scalability and simulcast coding

INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 6 2003
Wade K. Wan
Abstract A common problem with many video transmission applications is the wide range of available bandwidths between the server and different clients. These environments require efficient multicast video service, the capability to transmit and receive the same video sequence at different resolutions. Two approaches to achieve multicast service are scalable coding (dependent bitstream coding) and simulcast coding (independent bitstream coding). One would expect scalable coding to have higher coding efficiency because a scalable coded bitstream can exploit similar information in another bitstream. This reasoning would suggest that multicast implementations should only use scalable coding for maximum coding efficiency. However, this article shows results where simulcast coding has been found to outperform spatial scalability (one type of scalable coding). In this article, methods are described to select between simulcast coding and spatial scalability for multicast video transmission. These techniques can be used to determine the proper multicast coding approach for providing service to clients with different communication links. The methodology described can also be used to construct decision regions to guide more general scenarios or adaptively switch between the two coding approaches. A number of important results were obtained that may be directly applicable to commercial multicast systems. © 2004 Wiley Periodicals, Inc. Int J Imaging Syst Technol 13, 331–340, 2003; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.10065 [source]
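The selection the article describes boils down to comparing total transmitted bits for the two approaches at comparable quality. A minimal sketch of that decision (the rate numbers in the usage are hypothetical, and real selection would also weigh rate-distortion measurements as in the article):

```python
def choose_multicast_mode(simulcast_rates, base_rate, enhancement_rates):
    """Pick whichever approach transmits fewer total bits to serve all clients.

    simulcast_rates      -- one independent stream per resolution
    base_rate + enhancement_rates -- one dependent (spatially scalable) stack
    """
    simulcast_total = sum(simulcast_rates)
    scalable_total = base_rate + sum(enhancement_rates)
    if scalable_total < simulcast_total:
        return "scalable", scalable_total
    return "simulcast", simulcast_total
```

When the enhancement layer costs nearly as much as an independent high-resolution stream, simulcast wins, which matches the counter-intuitive finding reported above.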


Testing (non-)linearity of distributed-parameter systems from a video sequence

ASIAN JOURNAL OF CONTROL, Issue 2 2010
Ewaryst Rafajłowicz
Abstract The aim of the paper is to propose a statistical test of the (non-)linearity of nD systems described by partial differential equations. The test is based on a sequence of 2D observations provided by a camera, which are converted into observed modes of the system and compared to the modes expected when the system is linear. The theoretical correctness of the test is proved for a certain class of nonlinear systems. An application of the theory is illustrated by testing the linearity of the process of cooling a copper plate. Copyright © 2010 John Wiley and Sons Asia Pte Ltd and Chinese Automatic Control Society [source]


Using computer vision to simulate the motion of virtual agents

COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 2 2007
Soraia R. Musse
Abstract In this paper, we propose a new model to simulate the movement of virtual humans based on trajectories captured automatically from filmed video sequences. These trajectories are grouped into similar classes using an unsupervised clustering algorithm, and an extrapolated velocity field is generated for each class. A physically-based simulator is then used to animate virtual humans, aiming to reproduce the trajectories fed to the algorithm and at the same time avoiding collisions with other agents. The proposed approach provides an automatic way to reproduce the motion of real people in a virtual environment, allowing the user to change the number of simulated agents while keeping the same goals observed in the filmed video. Copyright © 2007 John Wiley & Sons, Ltd. [source]
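The extrapolated velocity field the abstract mentions can be illustrated with a nearest-sample scheme: differentiate each captured trajectory to get sample velocities, then query the field at any point. The nearest-neighbour rule here is an illustrative stand-in for the paper's extrapolation method:

```python
import numpy as np

def trajectory_velocities(traj):
    """Finite-difference velocity at each sample of an (N, 2) trajectory."""
    return np.diff(traj, axis=0)

def velocity_at(point, trajectories):
    """Extrapolated field: velocity of the nearest recorded trajectory sample."""
    best_v, best_d = None, np.inf
    for traj in trajectories:
        vels = trajectory_velocities(traj)
        for pos, vel in zip(traj[:-1], vels):
            d = np.linalg.norm(pos - point)
            if d < best_d:
                best_d, best_v = d, vel
    return best_v
```

A simulated agent placed anywhere in the scene can then be steered by the field, which is how captured pedestrian motion generalizes to an arbitrary number of virtual humans.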


As-consistent-As-possible compositing of virtual objects and video sequences

COMPUTER ANIMATION AND VIRTUAL WORLDS (PREV: JNL OF VISUALISATION & COMPUTER ANIMATION), Issue 3-4 2006
Guofeng Zhang
Abstract We present an efficient approach that merges virtual objects into video sequences taken by a freely moving camera in a realistic manner. The composition is made visually and geometrically consistent through three main steps. First, a robust camera tracking algorithm based on key frames is proposed, which precisely recovers the focal length with a novel multi-frame strategy. Next, the relevant 3D models of the real scenes are reconstructed by means of an extended multi-baseline algorithm. Finally, the virtual objects in the form of 3D models are integrated into the real scenes, with special care taken over interaction consistency, including shadow casting, occlusions, and object animation. A variety of experiments have been carried out, which demonstrate the robustness and efficiency of our approach. Copyright © 2006 John Wiley & Sons, Ltd. [source]


Behavioral Syndromes in Stable Social Groups: An Artifact of External Constraints?

ETHOLOGY, Issue 12 2008
Ximena J. Nelson
Individuals of many species differ consistently in their behavioral reactions toward different stimuli, such as predators, rivals, and potential mates. These typical reactions, described as 'behavioral syndromes' or 'personalities', appear to be heritable and therefore subject to selection. We studied behavioral syndromes in 36 male fowl living in 12 social groups and found that individuals behaved consistently over time. Furthermore, responses to different contexts (anti-predator, foraging, and territorial) were inter-correlated, suggesting that males exhibited comparable behavioral traits in these functionally distinct situations. We subsequently isolated the same roosters and conducted tests in a 'virtual environment', using high-resolution digital video sequences to simulate the anti-predator, foraging, and territorial contexts that they had experienced outdoors. Under these controlled conditions, repeatability persisted but individual responses to the three classes of stimuli failed to predict one another. These were instead context-specific. In particular, production of each type of vocal signal was independent, implying that calls in the repertoire are controlled by distinct mechanisms. Our results show that extrinsic factors, such as social position, can be responsible for the appearance of traits that could readily be mistaken for the product of endogenous characters. [source]


Extraction of media and plaque boundaries in intravascular ultrasound images by level sets and min/max flow

EXPERT SYSTEMS, Issue 2 2010
Ali Iskurt
Abstract: Estimation of the plaque area in intravascular ultrasound images after extraction of the media and plaque–lumen interfaces is an important application of computer-aided diagnosis in medical imaging. This paper presents a novel system for fully automatic and fast calculation of plaque quantity by capturing the surrounding ring called the media. The system utilizes an algorithm that consists of an enhanced technique for noise removal and a method of detecting different iso-levels by sinking the image gradually under the zero level. Moreover, an important novelty of this technique is the simultaneous extraction of the media and lumen–plaque interfaces at satisfactory levels. There are no higher-dimensional surfaces and no evolution of contours stopping at high image gradients. Thus, the system runs very fast, using a curvature velocity term only, and has low computational complexity. Experiments also show that this shape-recovering curvature term not only removes the noisy behaviour of ultrasound images but also strengthens very weak boundaries and even completes the missing walls of the media. In addition, the lumen–plaque interface can be detected simultaneously. For validation, a new and very useful algorithm was developed for labelling intravascular ultrasound images, taken from video sequences of 15 patients, and a comparison-based verification was done between manual contours drawn by experts and the contours extracted by our system. [source]


Quantitative Image Analysis in Darmstadt

IMAGING & MICROSCOPY (ELECTRONIC), Issue 3 2007
Konrad Sandau Prof. Dr.
The 14th workshop "Quantitative Image Analysis" was held at the University of Applied Sciences in Darmstadt on 15 June 2007. Quantitative image analysis deals with complex images such as 3D images, large mosaics, and video sequences. [source]


Synthetic-aperture technique for high-resolution composite imaging of the inside walls of tubular specimens

INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 4 2004
Hua Lee
Abstract High-resolution survey of the inside walls of tubular specimens is a unique application of synthetic-aperture composite imaging. This article describes the data acquisition process, 3D motion estimation and compensation, image registration, and superposition for the formation of high-resolution composite images from conventional video sequences. Experimental results from the survey of an oil well are used to demonstrate the capability of the technique. © 2004 Wiley Periodicals, Inc. Int J Imaging Syst Technol 14, 167–169, 2004; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.20020 [source]


MPEG-4 facial animation in video analysis and synthesis

INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 5 2003
Peter Eisert
Abstract MPEG-4 supports the definition, encoding, transmission, and animation of 3-D head and body models. These features can be used for a variety of different applications ranging from low bit-rate video coding to character and avatar animation. In this article, an entire system for the analysis of facial expressions from image sequences and their synthesis is presented. New methods for the estimation of MPEG-4 facial animation parameters as well as scene illumination are proposed. Experiments for different applications demonstrate the potential of using facial animation techniques in video analysis and synthesis. A model-based codec is presented that is able to encode head-and-shoulder video sequences at bit-rates of about 1 kbit/s. Besides the low bit-rate, many enhancements and scene modifications can easily be applied, such as scene lighting changes or cloning of expressions for character animation. Even for the encoding of arbitrary sequences, 3-D knowledge can help to increase coding efficiency. With our model-aided codec, bit-rate reductions of up to 45% at the same quality can be achieved in comparison to standard hybrid video codecs. © 2004 Wiley Periodicals, Inc. Int J Imaging Syst Technol 13, 245–256, 2003; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.10072 [source]


Short-term MPEG-4 video traffic prediction using ANFIS

INTERNATIONAL JOURNAL OF NETWORK MANAGEMENT, Issue 6 2005
Adel Abdennour
Multimedia traffic and particularly MPEG-coded video streams are growing to be a major traffic component in high-speed networks. Accurate prediction of such traffic enhances the reliable operation and the quality of service of these networks through a more effective bandwidth allocation and better control strategies. However, MPEG video traffic is characterized by a periodic correlation structure, a highly complex bit rate distribution and very noisy streams. Therefore, it is considered an intractable problem. This paper presents a neuro-fuzzy short-term predictor for MPEG-4-coded videos. The predictor is based on the Adaptive Network Fuzzy Inference System (ANFIS) to perform single-step predictions for the I, P and B frames. Short-term predictions are also examined using smoothed signals of the video sequences. The ANFIS prediction results are evaluated using long entertainment and broadcast video sequences and compared to those obtained using a linear predictor. ANFIS is capable of providing accurate prediction and has the added advantage of being simple to design and to implement. Copyright © 2005 John Wiley & Sons, Ltd. [source]
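A full ANFIS predictor is beyond a short sketch, but the linear single-step predictor the paper compares against is easy to show: fit autoregressive coefficients to a frame-size series by least squares and predict the next value. The order and data here are illustrative assumptions:

```python
import numpy as np

def fit_ar(series, p):
    """Least-squares AR(p) coefficients: predict x[t] from x[t-p..t-1]."""
    X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
    y = series[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict_next(series, coef):
    """Single-step prediction of the next frame size."""
    return float(np.dot(series[-len(coef):], coef))
```

In practice one such predictor would be trained per frame type (I, P, B), since each type has its own bit-rate statistics within the MPEG group of pictures.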


Sonographic examination of the oral phase of swallowing: Bolus image enhancement

JOURNAL OF CLINICAL ULTRASOUND, Issue 2 2002
Michael J. Casas DDS
Abstract Purpose The purpose of this study was to evaluate the ability of 4 liquid boluses to enhance pixel brightness and the ease with which the boluses could be identified during the sonographic evaluation of oral swallowing in healthy young adults. Methods Ten healthy adult volunteers (5 men and 5 women), ranging in age from 21 to 31 years, underwent sonographic evaluation of the oral phase of swallowing while sitting in their usual feeding position. We compared the ability of the 4 following liquids to improve sonographic visualization of swallowing with that of water: a carbonated cola beverage, 5.0 ml of Thick-It in 120 ml of water, 2.5 ml of Thick-It in 120 ml of water, and 7.5 ml of confectioners' sugar in 120 ml of water. Water was used as a control. In each case, 5 ml of the liquid was introduced into the subject's oral cavity using a syringe, and the subject was instructed to swallow. Digitized still images and recorded video sequences of sonographic examinations of the swallowing were analyzed. The brightness of the bolus image on selected digitized video frames was measured digitally using Image Analyst software. Pixel brightness within selected regions of interest for each of the test liquids was statistically compared with that for water. Seven clinicians rated the visualization of each test liquid and water on paired sonographic videotape sequences. These ratings and the level of agreement between them were statistically tested. Results Only the carbonated cola beverage demonstrated statistically greater pixel brightness than that of water on digitized video frames (p = 0.01), whereas both cola (with a moderate inter-rater agreement, κ = 0.50) and 5.0 ml Thick-It mixed with 120 ml of water (with a fair inter-rater agreement, κ = 0.24) were significantly better visualized on sonographic video sequences.
Conclusions The digital still-frame analysis confirmed the clinicians' ratings of bolus visualization on real-time sonography, but dynamic sonography is more important than still frames in assessing sonographic swallow media because the dynamic images more closely parallel what is seen in clinical practice. Future investigations of sonographic contrast agents for use in the examination of the oral phase of swallowing should use both static digital (still-frame) and dynamic (real-time) assessment methods, as well as expert reviewers. © 2002 John Wiley & Sons, Inc. J Clin Ultrasound 30:83–87, 2002; DOI 10.1002/jcu.10034 [source]
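The still-frame measurement in this study reduces to computing mean pixel brightness inside a region of interest on a digitized frame. A minimal sketch (the ROI coordinates and values are hypothetical, and the original analysis used Image Analyst software rather than code like this):

```python
import numpy as np

def roi_mean_brightness(frame, top, left, height, width):
    """Mean pixel brightness inside a rectangular region of interest."""
    return float(frame[top:top + height, left:left + width].mean())
```

Comparing this statistic across frames for each test liquid against the water control is the per-bolus quantity that was then tested statistically.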


Efficient three-dimensional scene modeling and mosaicing

JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 10 2009
Tudor Nicosevici
Scene modeling has a key role in applications ranging from visual mapping to augmented reality. This paper presents an end-to-end solution for creating accurate three-dimensional (3D) textured models using monocular video sequences. The methods are developed within the framework of sequential structure from motion, in which a 3D model of the environment is maintained and updated as new visual information becomes available. The proposed approach contains contributions at different levels. The camera pose is recovered by directly associating the 3D scene model with local image observations, using a dual-registration approach. Compared to the standard structure from motion techniques, this approach decreases the error accumulation while increasing the robustness to scene occlusions and feature association failures, while allowing 3D reconstructions for any type of scene. Motivated by the need to map large areas, a novel 3D vertex selection mechanism is proposed, which takes into account the geometry of the scene. Vertices are selected not only to have high reconstruction accuracy but also to be representative of the local shape of the scene. This results in a reduction in the complexity of the final 3D model, with minimal loss of precision. As a final step, a composite visual map of the scene (mosaic) is generated. We present a method for blending image textures using 3D geometric information and photometric differences between registered textures. The method allows high-quality mosaicing over 3D surfaces by reducing the effects of the distortions induced by camera viewpoint and illumination changes. The results are presented for four scene modeling scenarios, including a comparison with ground truth under a realistic scenario and a challenging underwater data set. Although developed primarily for underwater mapping applications, the methods are general and applicable to other domains, such as aerial and land-based mapping. © 2009 Wiley Periodicals, Inc. [source]


Functional morphology of prey capture in the sturgeon, Scaphirhynchus albus

JOURNAL OF MORPHOLOGY, Issue 3 2003
Andrew M. Carroll
Abstract Acipenseriformes (sturgeon and paddlefish) are basal actinopterygians with a highly derived cranial morphology that is characterized by an anatomical independence of the jaws from the neurocranium. We examined the morphological and kinematic basis of prey capture in the Acipenseriform fish Scaphirhynchus albus, the pallid sturgeon. Feeding pallid sturgeon were filmed in lateral and ventral views and movement of cranial elements was measured from video sequences. Sturgeon feed by creating an anterior to posterior wave of cranial expansion resulting in prey movement through the mouth. The kinematics of S. albus resemble those of other aquatic vertebrates: maximum hyoid depression follows maximum gape by an average of 15 ms and maximum opercular abduction follows maximum hyoid depression by an average of 57 ms. Neurocranial rotation was not a part of prey capture kinematics in S. albus, but was observed in another sturgeon species, Acipenser medirostris. Acipenseriformes have a novel jaw protrusion mechanism, which converts rostral rotation of the hyomandibula into ventral protrusion of the jaw joint. The relationship between jaw protrusion and jaw opening in sturgeon typically resembles that of elasmobranchs, with peak upper jaw protrusion occurring after peak gape. J. Morphol. 256:270–284, 2003. © 2003 Wiley-Liss, Inc. [source]


Development of an anatomically based whole-body musculoskeletal model of the Japanese macaque (Macaca fuscata)

AMERICAN JOURNAL OF PHYSICAL ANTHROPOLOGY, Issue 3 2009
Naomichi Ogihara
Abstract We constructed a three-dimensional whole-body musculoskeletal model of the Japanese macaque (Macaca fuscata) based on computed tomography and dissection of a cadaver. The skeleton was modeled as a chain of 20 bone segments connected by joints. Joint centers and rotational axes were estimated from joint morphology based on joint surface approximation using a quadric function. The path of each muscle was defined by a line segment connecting origin to insertion, through an intermediary point if necessary. Mass and fascicle length of each muscle were systematically recorded to calculate the physiological cross-sectional area, estimating the capacity of each muscle to generate force. Using this anatomically accurate model, muscle moment arms and force vectors generated by individual limb muscles at the foot and hand were calculated to computationally predict muscle functions. Furthermore, three-dimensional whole-body musculoskeletal kinematics of the Japanese macaque was reconstructed from ordinary video sequences based on this model and a model-based matching technique. The results showed that the proposed model can successfully reconstruct and visualize anatomically reasonable, natural musculoskeletal motion of the Japanese macaque during quadrupedal/bipedal locomotion, demonstrating the validity and efficacy of the constructed musculoskeletal model. The present biologically relevant model may serve as a useful tool for comprehensive understanding of the design principles of the musculoskeletal system and the control mechanisms for locomotion in the Japanese macaque and other primates. Am J Phys Anthropol, 2009. © 2008 Wiley-Liss, Inc. [source]
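For a muscle path modeled as a line segment from origin to insertion, the moment arm about a joint axis has a standard geometric form: the axis component of r × f, where f is the unit line of action and r runs from the joint center to a point on the path. This is the textbook definition, shown as an illustration rather than the paper's exact procedure:

```python
import numpy as np

def moment_arm(origin, insertion, joint_center, joint_axis):
    """Moment arm of a straight-line muscle about a joint rotation axis:
    project r x f onto the (unit) axis, with f the unit line of action
    and r a vector from the joint center to a point on the muscle line."""
    f = insertion - origin
    f = f / np.linalg.norm(f)
    r = origin - joint_center
    axis = joint_axis / np.linalg.norm(joint_axis)
    return float(np.dot(axis, np.cross(r, f)))
```

Multiplying the moment arm by a muscle's maximum force (from its physiological cross-sectional area) gives its maximum joint torque, which is how such models predict muscle function.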


Relative afferent pupillary defect in glaucoma: a pupillometric study

ACTA OPHTHALMOLOGICA, Issue 5 2007
Lada Kalaboukhova
Abstract Purpose: To study the presence of relative afferent pupillary defect (RAPD) in patients with glaucoma with the help of a custom-built pupillometer. Methods: Sixty-five participants were recruited (32 with open-angle glaucoma and 33 healthy subjects). All underwent standard clinical examination including perimetry and optic disc photography. Pupillary light reflexes were examined with a custom-built pupillometer. Three video sequences were recorded for each subject. Alternating light stimulation with a duration of 0.5 seconds was used, followed by a 1-second pause. Mean values of pupil area ratio (PAR), pupil contraction velocity ratio (PCVR), and pupil dilation velocity ratio (PDVR) were calculated. Receiver operating characteristic (ROC) curves were constructed for each of the three parameters. Intra-individual variability was estimated. Results: PAR and PDVR differed significantly between the glaucoma group and the control group (P < 0.0001). PAR was more sensitive for glaucoma detection than the other pupillometric parameters (PCVR and PDVR). The area under the ROC curve was largest for PAR. At a fixed specificity of 90%, sensitivity for PAR was 86.7%. Conclusion: Measuring RAPD with infrared computerized pupillometry can detect optic neuropathy in glaucoma with high sensitivity and specificity. The method is fast and objective. Pupil area amplitude measurements were superior to pupil velocity measurements for the detection of RAPD in glaucoma. [source]
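The abstract does not spell out how PAR is computed; one plausible reading, offered purely as an illustration, is a ratio of constricted pupil areas between the two eyes under alternating stimulation, with values well below 1 flagging an asymmetric (relative afferent) response:

```python
def pupil_area_ratio(constricted_area_a, constricted_area_b):
    """Hypothetical PAR: ratio of the smaller to the larger constricted
    pupil area across the two alternately stimulated eyes. A value well
    below 1 suggests an asymmetric pupillary light response."""
    smaller, larger = sorted([constricted_area_a, constricted_area_b])
    return smaller / larger
```

A detection threshold on this ratio would then be chosen from the ROC curve, trading sensitivity against specificity as the study does at its fixed 90% specificity point.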