Reconstruction Error

Selected Abstracts


Face recognition based on face-specific subspace

INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 1 2003
Shiguang Shan
Abstract In this article, we present an individual appearance model based method, named face-specific subspace (FSS), for recognizing human faces under variation in lighting, expression, and viewpoint. This method derives from the traditional Eigenface but differs from it in essence. In Eigenface, each face image is represented as a point in a low-dimensional face subspace shared by all faces; however, our experiments expose a demerit of this strategy: it fails to accurately represent the most discriminating features of a specific face. Therefore, we propose to model each face with one individual face subspace, named the face-specific subspace. The distance from the face-specific subspace, that is, the reconstruction error, is then exploited as the similarity measure for identification. Furthermore, to enable the proposed approach to solve the single-example problem, a technique for deriving multiple samples from a single example is developed. Extensive experiments on several academic databases show that our method significantly outperforms Eigenface and template matching, which strongly indicates its robustness under variation in illumination, expression, and viewpoint. © 2003 Wiley Periodicals, Inc. Int J Imaging Syst Technol 13: 23–32, 2003; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.10047
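The identification rule described above — score a probe image by its reconstruction error against each person's own subspace and pick the smallest — can be sketched with a per-person PCA. This is a minimal illustration, assuming vectorized grayscale images; the function names and the SVD-based fit are ours, not the authors' implementation.

```python
import numpy as np

def fit_subspace(samples, n_components):
    """Fit one person's face-specific subspace by PCA.

    samples: (n_samples, n_pixels) array of that person's vectorized faces.
    Returns the per-person mean and the top orthonormal principal directions.
    """
    mean = samples.mean(axis=0)
    _, _, vt = np.linalg.svd(samples - mean, full_matrices=False)
    return mean, vt[:n_components]

def reconstruction_error(face, mean, basis):
    """Squared distance from the face-specific subspace: the residual
    left after projecting the centered probe onto the basis."""
    centered = face - mean
    residual = centered - basis.T @ (basis @ centered)
    return float(np.sum(residual ** 2))

def identify(face, models):
    """Assign the probe to whichever person's subspace reconstructs it best.

    models: dict mapping person id -> (mean, basis)."""
    return min(models, key=lambda pid: reconstruction_error(face, *models[pid]))
```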


Signal reconstruction in the presence of finite-rate measurements: finite-horizon control applications

INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 1 2010
Sridevi V. Sarma
Abstract In this paper, we study finite-length signal reconstruction over a finite-rate noiseless channel. We allow the class of signals to belong to a bounded ellipsoid and derive a universal lower bound on the worst-case reconstruction error. We then compute upper bounds on the error that arise from different coding schemes and under different causality assumptions. When the encoder and decoder are noncausal, we derive an upper bound that either achieves the universal lower bound or is comparable to it. When the decoder and encoder are both causal operators, we show that within a very broad class of causal coding schemes, memoryless coding prevails as optimal, imposing a hard limitation on reconstruction. Finally, we map our general reconstruction problem into two important control problems in which the plant and controller are local to each other, but are together driven by a remote reference signal that is transmitted through a finite-rate noiseless channel. The first problem is to minimize a finite-horizon weighted tracking error between the remote system output and a reference command. The second problem is to navigate the state of the remote system from a nonzero initial condition to as close to the origin as possible in finite time. Our analysis enables us to quantify the tradeoff between time horizon and performance accuracy, which is not well studied in the area of control with limited information, as most works address infinite-horizon control objectives (e.g. stability, disturbance rejection). Copyright © 2009 John Wiley & Sons, Ltd.
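As a toy illustration of the causal, memoryless coding the paper singles out, the sketch below quantizes each sample of a bounded signal independently at a fixed rate and reports the resulting worst-case per-sample error. The uniform quantizer and amplitude bound are our simplifications; the paper's setting (signals in an ellipsoid, worst-case analysis over broad classes of coders) is considerably richer.

```python
import numpy as np

def memoryless_uniform_code(signal, rate_bits, amplitude):
    """Causal, memoryless coding: each sample is quantized on its own
    with 2**rate_bits levels over [-amplitude, amplitude], so the channel
    carries rate_bits per sample and the decoder needs no memory."""
    levels = 2 ** rate_bits
    step = 2 * amplitude / levels
    # Encoder: the bin index is the channel symbol.
    indices = np.clip(((signal + amplitude) / step).astype(int), 0, levels - 1)
    # Decoder: reconstruct at bin centers, bounding the error by step/2.
    return indices, -amplitude + (indices + 0.5) * step

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=32)            # finite-length, bounded signal
_, x_hat = memoryless_uniform_code(x, rate_bits=4, amplitude=1.0)
print("per-sample error bound (step/2):", 1.0 / 2 ** 4)
print("observed max error:             ", np.abs(x - x_hat).max())
```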


Haplotype Misclassification Resulting from Statistical Reconstruction and Genotype Error, and Its Impact on Association Estimates

ANNALS OF HUMAN GENETICS, Issue 5 2010
Claudia Lamina
Summary Haplotypes are an important concept for genetic association studies, but involve uncertainty due to statistical reconstruction from single nucleotide polymorphism (SNP) genotypes and due to genotype error. We developed a re-sampling approach to quantify haplotype misclassification probabilities and implemented the MC-SIMEX approach to tackle this as a 3 × 3 misclassification problem. Using a previously published approach as a benchmark, we evaluated the performance of our approach by simulations and exemplified it on real data from 15 SNPs of the APM1 gene. Misclassification due to reconstruction error was small for most haplotypes, but notable for some, especially the rarer ones. Genotype error added misclassification to all haplotypes, resulting in a non-negligible drop in sensitivity. In our real data example, the bias of the association estimates reached −48.2% for a 1% genotype error, indicating that haplotype misclassification should not be ignored if high genotype error can be expected. Our 3 × 3 misclassification view of haplotype error adds a novel perspective to currently used methods based on genotype intensities and the expected number of haplotype copies. Our findings give a sense of the impact of haplotype error under realistic scenarios and underscore the importance of high-quality genotyping, in which case the bias in haplotype association estimates is negligible.
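The 3 × 3 view treats the number of copies of a given haplotype (0, 1, or 2) carried by a subject as the quantity that gets misclassified. A minimal simulation of that idea, with an entirely illustrative misclassification matrix (not values from the paper), looks like this:

```python
import numpy as np

# Hypothetical 3x3 misclassification matrix for copies (0, 1, or 2) of one
# haplotype: PI[i, j] = P(assigned j copies | truly i copies). Illustrative only.
PI = np.array([[0.97, 0.03, 0.00],
               [0.04, 0.93, 0.03],
               [0.00, 0.05, 0.95]])

def misclassify(true_copies, pi, rng):
    """Draw observed copy numbers given true ones, subject by subject."""
    return np.array([rng.choice(3, p=pi[c]) for c in true_copies])

def estimate_pi(true_copies, observed_copies):
    """Tabulate how often each true dosage is called as each observed dosage,
    mimicking the re-sampling idea of estimating misclassification rates
    from repeated reconstructions."""
    counts = np.zeros((3, 3))
    for t, o in zip(true_copies, observed_copies):
        counts[t, o] += 1
    return counts / counts.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
# Copy-number frequencies for a haplotype at frequency 0.2 (Hardy-Weinberg).
true = rng.choice(3, p=[0.64, 0.32, 0.04], size=5000)
observed = misclassify(true, PI, rng)
print(np.round(estimate_pi(true, observed), 3))
```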


Numerical errors of the volume-of-fluid interface tracking algorithm

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 4 2002
Gregor Černe
Abstract One of the important limitations of interface tracking algorithms is that they can be used only as long as the local computational grid density allows surface tracking. In a dispersed flow, where the dimensions of the individual fluid structures are comparable to or smaller than the grid spacing, several numerical and reconstruction errors become considerable. In this paper an analysis of the interface tracking errors is performed for the volume-of-fluid method with the least-squares volume-of-fluid interface reconstruction algorithm. A few simple two-fluid benchmarks are proposed for investigating the grid dependence of interface tracking. An expression based on the gradient of the volume-fraction variable is introduced for estimating the correctness of the reconstruction, and it can be used to activate an adaptive mesh refinement algorithm. Copyright © 2002 John Wiley & Sons, Ltd.
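A gradient-based indicator of this kind is easy to prototype: where the volume fraction jumps from 0 to 1 over roughly one cell, its discrete gradient approaches 1/dx, signalling that the interface is under-resolved there. A minimal sketch follows; the threshold is a placeholder, not the paper's calibrated criterion.

```python
import numpy as np

def refinement_flags(alpha, dx, threshold=0.5):
    """Flag cells whose volume-fraction gradient is too steep for the grid.

    alpha: 2D array of volume fractions in [0, 1].
    A poorly resolved interface changes alpha from 0 to 1 across ~1 cell,
    giving |grad(alpha)| near 1/dx; well-resolved interfaces stay lower.
    """
    gy, gx = np.gradient(alpha, dx)
    return np.hypot(gx, gy) > threshold / dx

# Toy field: a circular "bubble" whose volume fraction jumps over one cell.
n = 64
dx = 1.0 / n
y, x = np.mgrid[0:n, 0:n] * dx
alpha = ((x - 0.5) ** 2 + (y - 0.5) ** 2 < 0.2 ** 2).astype(float)
flags = refinement_flags(alpha, dx)
print("cells flagged for refinement:", int(flags.sum()))
```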


x-f choice: Reconstruction of undersampled dynamic MRI by data-driven alias rejection applied to contrast-enhanced angiography

MAGNETIC RESONANCE IN MEDICINE, Issue 4 2006
Shaihan J. Malik
Abstract A technique for reconstructing dynamic undersampled MRI data, termed "x-f choice," was developed and applied to dynamic contrast-enhanced MR angiography (DCE-MRA). Regular undersampling in k-t space (a hybrid of k-space and time) creates aliasing in the conjugate x-f space that must be resolved. When the regions of the object containing fast dynamic change are sparse, as in DCE-MRA, the signal overlap caused by aliasing is often much less than the undersampling factor would imply. The x-f choice reconstruction identifies overlapping signals using a model of the full non-aliased x-f space that is automatically generated from the undersampled data, and applies parallel imaging (PI) to separate them. No extra reference scans are required to generate either the model or the coil sensitivity maps. At each location in the reconstructed images, g-factor noise amplification is compared with predicted reconstruction errors to obtain an optimized solution. Acceleration factors greater than the number of receiver coils are possible, but are limited by the sparseness of the dynamic content and the signal-to-noise ratio (SNR); in DCE-MRA the latter is dominant. Temporal fidelity was validated for up to a factor-10 speed-up using retrospectively undersampled data from a six-coil array. The method was tested on volunteers using fivefold prospective undersampling. Magn Reson Med, 2006. © 2006 Wiley-Liss, Inc.
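The core premise — that regular k-t undersampling folds the object's temporal spectrum onto itself in x-f space — can be reproduced numerically in a few lines. The sheared R = 2 sampling lattice and the toy 1D dynamic object below are our illustrative choices; the method described in the abstract then resolves such overlaps with a data-driven model and parallel imaging.

```python
import numpy as np

nx, nt, R = 64, 32, 2          # spatial points, time frames, acceleration
x = np.arange(nx)
t = np.arange(nt)

# Dynamic 1D object: static background plus a small oscillating region.
obj = np.ones((nx, nt))
obj[28:36, :] += np.sin(2 * np.pi * 4 * t / nt)

kt_full = np.fft.fft(obj, axis=0)                 # k-t space (FFT along x)

# Regular sheared lattice: keep every R-th k line, shifted frame to frame.
mask = ((x[:, None] + t[None, :]) % R == 0).astype(float)
kt_us = kt_full * mask * R                        # undersampled, rescaled

# x-f space: back to x along axis 0, then FFT along time.
xf_full = np.fft.fft(np.fft.ifft(kt_full, axis=0), axis=1)
xf_us = np.fft.fft(np.fft.ifft(kt_us, axis=0), axis=1)

# Energy that appears away from the true x-f support is the aliasing
# the reconstruction must identify and remove.
print("aliasing energy:", np.abs(xf_us - xf_full).sum())
```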


Observational biases in Lagrangian reconstructions of cosmic velocity fields

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 4 2008
G. Lavaux
ABSTRACT Lagrangian reconstruction of large-scale peculiar velocity fields can be strongly affected by observational biases. We develop a thorough analysis of these systematic effects by relying on specially selected mock catalogues. For the purpose of this paper, we use the Monge–Ampère–Kantorovitch (MAK) reconstruction method, although any other Lagrangian reconstruction method should be sensitive to the same problems. We extensively study the uncertainty in the mass-to-light assignment due to incompleteness (missing luminous mass tracers) and the poorly determined relation between mass and luminosity. The impact of redshift-distortion corrections is analysed in the context of MAK, and we check the importance of edge and finite-volume effects on the reconstructed velocities. Using three mock catalogues with different average densities, we also study the effect of cosmic variance. In particular, one of them presents the same global features as found in observational catalogues that extend to 80 h⁻¹ Mpc scales. We give recipes, checked against the aforementioned mock catalogues, for handling these particular observational effects, after having introduced them into the mock catalogues so as to quantitatively mimic the most densely sampled galaxy catalogue of the nearby Universe currently available. Once the biases have been taken care of, the error in the reconstructed velocities is typically about a quarter of the overall velocity dispersion, with no significant bias. We finally model our reconstruction errors to propose an improved Bayesian approach to measuring Ωm in an unbiased way by comparing the reconstructed velocities to the measured ones in distance space, even though they may be plagued by large errors. We show that, in the context of observational data, it is possible to build a nearly unbiased estimator of Ωm using MAK reconstruction.
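In linear theory the peculiar velocity and the Lagrangian displacement a MAK-like method recovers are tied by v = H f(Ωm) ψ, with f(Ωm) ≈ Ωm^0.55, so comparing reconstructed displacements against measured velocities constrains Ωm. The least-squares slope fit below is a deliberately simplified stand-in for the Bayesian estimator the abstract proposes, and all names are ours.

```python
import numpy as np

H0 = 100.0                      # Hubble constant in h km/s/Mpc

def estimate_omega_m(v_measured, psi):
    """Toy Omega_m estimator: with v = H0 * f(Omega_m) * psi along one axis,
    the least-squares slope between measured velocities and reconstructed
    displacements gives f, and Omega_m = f**(1/0.55)."""
    slope = np.sum(v_measured * psi) / np.sum(psi ** 2)
    return (slope / H0) ** (1.0 / 0.55)

# Synthetic check: displacements in Mpc/h; velocity noise stands in for the
# large per-galaxy errors the abstract mentions (uncorrelated with psi, so
# the slope, and hence Omega_m, stays nearly unbiased).
rng = np.random.default_rng(2)
omega_m = 0.3
psi = rng.normal(0.0, 5.0, size=2000)
v = H0 * omega_m ** 0.55 * psi + rng.normal(0.0, 100.0, size=2000)
print("estimated Omega_m:", round(estimate_omega_m(v, psi), 3))
```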