Computational Complexity
Selected Abstracts

Fast and Efficient Skinning of Animated Meshes (COMPUTER GRAPHICS FORUM, Issue 2 2010; L. Kavan)
Abstract: Skinning is a simple yet popular deformation technique combining compact storage with efficient hardware accelerated rendering. While skinned meshes (such as virtual characters) are traditionally created by artists, previous work proposes algorithms to construct skinning automatically from a given vertex animation. However, these methods typically perform well only for a certain class of input sequences and often require long pre-processing times. We present an algorithm based on iterative coordinate descent optimization which handles arbitrary animations and produces more accurate approximations than previous techniques, while using only standard linear skinning without any modifications or extensions. To overcome the computational complexity associated with the iterative optimization, we work in a suitable linear subspace (obtained by quick approximate dimensionality reduction) and take advantage of the typically very sparse vertex weights. As a result, our method requires about one or two orders of magnitude less pre-processing time than previous methods. [source]

Space-Time Hierarchical Radiosity with Clustering and Higher-Order Wavelets (COMPUTER GRAPHICS FORUM, Issue 2 2004; Cyrille Damez)
Abstract: We address in this paper the issue of computing diffuse global illumination solutions for animation sequences. The principal difficulties lie in the computational complexity of global illumination, emphasized by the movement of objects and the large number of frames to compute, as well as the potential for creating temporal discontinuities in the illumination, a particularly noticeable artifact. We demonstrate how space-time hierarchical radiosity, i.e.
the application to the time dimension of a hierarchical decomposition algorithm, can be effectively used to obtain smooth animations: first by proposing the integration of spatial clustering in a space-time hierarchy; second, by using a higher-order wavelet basis adapted for the temporal dimension. The resulting algorithm is capable of creating time-dependent radiosity solutions efficiently. [source]

A Framework for Facilitating Sourcing and Allocation Decisions for Make-to-Order Items (DECISION SCIENCES, Issue 4 2004; Nagesh N. Murthy)
Abstract: This paper provides a fundamental building block to facilitate sourcing and allocation decisions for make-to-order items. We specifically address the buyer's vendor selection problem for make-to-order items where the goal is to minimize sourcing and purchasing costs in the presence of fixed costs, shared capacity constraints, and volume-based discounts for bundles of items. The potential suppliers for make-to-order items provide quotes in the form of single sealed bids or participate in a dynamic auction involving open bids. A solution to our problem can be used to determine winning bids amongst the single sealed bids or winners at each stage of a dynamic auction. Due to the computational complexity of this problem, we develop a heuristic procedure based on a Lagrangian relaxation technique to solve the problem. The computational results show that the procedure is effective under a variety of scenarios. The average gap across 2,250 problem instances is 4.65%. [source]

Automated comparative protein structure modeling with SWISS-MODEL and Swiss-PdbViewer: A historical perspective (ELECTROPHORESIS, Issue S1 2009; Nicolas Guex)
Abstract: SWISS-MODEL pioneered the field of automated modeling as the first protein modeling service on the Internet.
In combination with the visualization tool Swiss-PdbViewer, the Internet-based Workspace and the SWISS-MODEL Repository, it provides a fully integrated sequence-to-structure analysis and modeling platform. This computational environment is made freely available to the scientific community with the aim of hiding the computational complexity of structural bioinformatics and encouraging bench scientists to make use of the ever-increasing structural information available. Indeed, over the last decade, the availability of structural information has significantly increased for many organisms as a direct consequence of the complementary nature of comparative protein modeling and experimental structure determination. This has a very positive and enabling impact on many different applications in biomedical research as described in this paper. [source]

Adaptive resource allocation in OFDMA systems with fairness and QoS constraints (EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 6 2007; Liang Chen)
Abstract: This paper describes several practical and efficient adaptive subchannel, power and bit allocation algorithms for orthogonal frequency-division multiple-access (OFDMA) systems. Assuming perfect knowledge of channel state information (CSI) at the transmitter, we look at the problem of minimising the total power consumption while maintaining individual rate requirements and QoS constraints. An average signal-to-noise ratio (SNR) approximation is used to determine the allocation while substantially reducing the computational complexity. The proposed algorithms guarantee improvement through each iteration and converge quickly to stable suboptimal solutions. Numerical results and complexity analysis show that the proposed algorithms offer beneficial cost versus performance trade-offs compared to existing approaches. Copyright © 2007 John Wiley & Sons, Ltd.
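The abstract above does not spell out the allocation procedure itself, but the general idea of meeting a rate target with minimum transmit power can be illustrated by a classic greedy bit-loading sketch (not the authors' algorithm). It assumes a normalized per-subcarrier power model P(b) = (2^b - 1)/g and, at each step, adds one bit where the incremental power 2^b/g is smallest; the function name and power model are illustrative assumptions.

```python
import numpy as np

def greedy_bit_loading(gains, target_bits, max_bits=8):
    """Margin-adaptive bit loading: meet a total rate target with
    minimum transmit power under the normalized model
    P(b) = (2^b - 1) / g.  Each step adds one bit to the subcarrier
    whose incremental power 2^b / g is currently smallest."""
    n = len(gains)
    bits = np.zeros(n, dtype=int)
    for _ in range(target_bits):
        # incremental power of adding one more bit on each subcarrier
        inc = np.where(bits < max_bits, (2.0 ** bits) / gains, np.inf)
        k = int(np.argmin(inc))
        bits[k] += 1
    power = ((2.0 ** bits) - 1.0) / gains
    return bits, float(power.sum())
```

With gains [4, 1] and a target of 2 bits, both bits go to the stronger subcarrier, which is the expected water-filling-like behaviour.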
[source]

Resource allocation with minimum rates for OFDM broadcast channels (EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 6 2007; Carolin Huppert)
Abstract: Downlink transmissions with minimum rate requirements over orthogonal frequency division multiplexing (OFDM) channels are commonly done by means of scheduling algorithms. However, regarding it from an information-theoretical point of view, this is not optimal since broadcast techniques can achieve higher rates. The drawbacks of the optimum broadcast algorithm are that the signalling overhead is larger than for scheduling and also the computational complexity is much higher. In this paper we propose an algorithm which overcomes these points. This algorithm is a hybrid algorithm combining scheduling and broadcast approaches. Thus, it combines advantages of both methods. Furthermore, we present modifications to this algorithm to avoid irresolvable decoding dependencies. We show by means of simulation results that the proposed algorithm operates close to the optimum performance and that it outperforms a pure scheduling approach. Copyright © 2007 John Wiley & Sons, Ltd. [source]

Adaptive group detection for DS/CDMA systems over frequency-selective fading channels (EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 3 2003; Stefano Buzzi)
Abstract: In this paper we consider the problem of group detection for asynchronous Direct-Sequence Code Division Multiple Access (DS/CDMA) systems operating over frequency-selective fading channels. A two-stage near-far resistant detection structure is proposed. The first stage is a linear filter, aimed at suppressing the effect of the unwanted user signals, while the second stage is a non-linear block, implementing a maximum likelihood detection rule on the set of desired user signals.
As to the linear stage, we consider both the Zero-Forcing (ZF) and the Minimum Mean Square Error (MMSE) approaches; in particular, based on the amount of prior knowledge of the interference parameters which is available to the receiver and on the affordable computational complexity, we come up with several receiving structures, which trade system performance for complexity and needed channel state information. We also present adaptive implementations of these receivers, wherein only the parameters of the users to be decoded are assumed to be known. The case that the channel fading coefficients of the users to be decoded are not known a priori is also considered. In particular, based on the transmission of pilot signals, we adopt a least-squares criterion in order to obtain estimates of these coefficients. The result is thus a fully adaptive structure, which can be implemented with no prior information on the interfering signals and on the channel state. As to the performance assessment, the new receivers are shown to be near-far resistant, and simulation results confirm their superiority with respect to previously derived detection structures. Copyright © 2003 AEI. [source]

Exploiting the short-term and long-term channel properties in space and time: Eigenbeamforming concepts for the BS in WCDMA (EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 5 2001; Christopher Brunner)
Abstract: The deployment of adaptive antennas at base stations considerably increases the spectral efficiency of wireless communication systems. To reduce the computational complexity and increase the performance of space-time (ST) processing, processing may take place in reduced dimension, i.e., pre-filtering takes place, which is related to linear estimation theory based on second-order statistics. To this end, long-term and short-term channel estimates are integrated into specific Tx/Rx systems.
In this article, we present a new ST rake structure for uplink reception in WCDMA which operates in reduced dimension. Accordingly, our approach combines short-term and long-term spatial and temporal channel properties using an eigenanalysis. By choosing dominant eigenbeams in time and space, the algorithm enhances interference suppression as well as spatial and temporal receive diversity. In contrast to previously introduced well-known receiver structures, the ST eigenrake inherently adapts to different propagation environments and achieves higher spectral efficiency than other receivers. This is illustrated by Monte-Carlo simulations. Then we extend the proposed concept to the downlink. The downlink eigenbeamformer improves closed-loop downlink diversity compared to other proposals in standardization (3GPP) which only exploit short-term channel properties. Even though the short-term feedback rate remains unchanged, additional antenna elements can be included to increase antenna and diversity gain. We also present a tracking solution to downlink eigenbeamforming in WCDMA. To this end, we propose a distributed implementation of the eigenspace/beam tracking at the mobile terminal and base station (BS), respectively. Moreover, the specific nature of the deployed tracking scheme offers advantageous feedback signalling. [source]

Use of longitudinal data in genetic studies in the genome-wide association studies era: summary of Group 14 (GENETIC EPIDEMIOLOGY, Issue S1 2009; Berit Kerner)
Abstract: Participants analyzed actual and simulated longitudinal data from the Framingham Heart Study for various metabolic and cardiovascular traits. The genetic information incorporated into these investigations ranged from selected single-nucleotide polymorphisms to genome-wide association arrays.
Genotypes were incorporated using a broad range of methodological approaches including conditional logistic regression, linear mixed models, generalized estimating equations, linear growth curve estimation, growth modeling, growth mixture modeling, population attributable risk fraction based on survival functions under the proportional hazards models, and multivariate adaptive splines for the analysis of longitudinal data. The specific scientific questions addressed by these different approaches also varied, ranging from a more precise definition of the phenotype, bias reduction in control selection, estimation of effect sizes and genotype-associated risk, to direct incorporation of genetic data into longitudinal modeling approaches and the exploration of population heterogeneity with regard to longitudinal trajectories. The group reached several overall conclusions: (1) The additional information provided by longitudinal data may be useful in genetic analyses. (2) The precision of the phenotype definition as well as control selection in nested designs may be improved, especially if traits demonstrate a trend over time or have strong age-of-onset effects. (3) Analyzing genetic data stratified for high-risk subgroups defined by a unique development over time could be useful for the detection of rare mutations in common multifactorial diseases. (4) Estimation of the population impact of genomic risk variants could be more precise. The challenges and computational complexity demanded by genome-wide single-nucleotide polymorphism data were also discussed. Genet. Epidemiol. 33 (Suppl. 1):S93-S98, 2009. © 2009 Wiley-Liss, Inc. [source]

Design of transmission line filters and matching circuits using genetic algorithms (IEEJ TRANSACTIONS ON ELECTRICAL AND ELECTRONIC ENGINEERING, Issue 6 2007; Hirofumi Sanada, Member)
Abstract: A method for designing microwave filters and impedance matching circuits using transmission lines is presented.
Transmission line filters with shunt-connected open-circuit stubs and continuously varying transmission line matching circuits are described in detail. The proposed method is based on genetic algorithms and can effectively be applied to various filter and matching circuit design problems without increasing theoretical and computational complexity. Design examples are provided, and the proposed method is demonstrated to be effective in designing transmission line filters and matching circuits. Copyright © 2007 Institute of Electrical Engineers of Japan. Published by John Wiley & Sons, Inc. [source]

Index tracking with constrained portfolios (INTELLIGENT SYSTEMS IN ACCOUNTING, FINANCE & MANAGEMENT, Issue 1-2 2007; Dietmar Maringer)
Abstract: Passive portfolio management strategies, such as index tracking, are popular in the industry, but so far little research has been done on the cardinality of such a portfolio, i.e. on how many different assets ought to be included in it. One reason for this is the computational complexity of the associated optimization problems. Traditional optimization techniques cannot deal appropriately with the discontinuities and the many local optima emerging from the introduction of explicit cardinality constraints. More recent approaches, such as heuristic methods, on the other hand, can overcome these hurdles. This paper demonstrates how one of these methods, differential evolution, can be used to solve the constrained index-tracking problem. We analyse the financial implications of cardinality constraints for a tracking portfolio using an empirical study of the Dow Jones Industrial Average. We find that the index can be tracked satisfactorily with a subset of its components and, more importantly, that the deviation between computed actual tracking error and the theoretically achievable tracking error out of sample is negligibly affected by the portfolio's cardinality. Copyright © 2007 John Wiley & Sons, Ltd.
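Differential evolution, the heuristic named in the index-tracking abstract above, is a population-based optimizer; a minimal DE/rand/1/bin sketch applied to an unconstrained-cardinality tracking objective might look as follows. All names, parameter values and the simplex projection (clip and renormalize) are illustrative assumptions; the paper's variant additionally enforces cardinality constraints.

```python
import numpy as np

def track_error(w, R, idx):
    # root-mean-square deviation between portfolio and index returns
    return float(np.sqrt(np.mean((R @ w - idx) ** 2)))

def differential_evolution(R, idx, pop=30, gens=200, F=0.7, CR=0.9, seed=0):
    """Minimal DE/rand/1/bin for index tracking: mutate with a scaled
    difference of two population members, crossover, then keep weights
    non-negative and renormalized to sum to one (long-only portfolio)."""
    rng = np.random.default_rng(seed)
    n = R.shape[1]
    P = rng.random((pop, n))
    P /= P.sum(axis=1, keepdims=True)
    cost = np.array([track_error(w, R, idx) for w in P])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = P[rng.choice(pop, size=3, replace=False)]
            v = a + F * (b - c)                       # mutation
            u = np.where(rng.random(n) < CR, v, P[i]) # crossover
            u = np.clip(u, 0.0, None)                 # long-only
            s = u.sum()
            if s == 0.0:
                continue
            u /= s                                    # budget constraint
            cu = track_error(u, R, idx)
            if cu <= cost[i]:                         # greedy selection
                P[i], cost[i] = u, cu
    k = int(np.argmin(cost))
    return P[k], float(cost[k])
```

When the index is an exact convex combination of the candidate assets, the search should drive the tracking error close to zero.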
[source]

Comparison of two wave element methods for the Helmholtz problem (INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 1 2009; T. Huttunen)
Abstract: In comparison with low-order finite element methods (FEMs), the use of oscillatory basis functions has been shown to reduce the computational complexity associated with the numerical approximation of Helmholtz problems at high wave numbers. We compare two different wave element methods for 2D Helmholtz problems. The methods chosen for this study are the partition of unity FEM (PUFEM) and the ultra-weak variational formulation (UWVF). In both methods, the local approximation of the wave field is computed using a set of plane waves for constructing the basis functions. However, the methods are based on different variational formulations; the PUFEM basis also includes a polynomial component, whereas the UWVF basis consists purely of plane waves. As model problems we investigate propagating and evanescent wave modes in a duct with rigid walls and singular eigenmodes in an L-shaped domain. Results show a good performance of both methods for the modes in the duct, but only a satisfactory accuracy was obtained in the case of the singular field. On the other hand, both methods can suffer from the ill-conditioning of the resulting matrix system. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Fast multipole boundary element analysis of two-dimensional elastoplastic problems (INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 10 2007; P. B. Wang)
Abstract: This paper presents a fast multipole boundary element method (BEM) for the analysis of two-dimensional elastoplastic problems. An incremental iterative technique based on the initial strain approach is employed to solve the nonlinear equations, and the fast multipole method (FMM) is introduced to achieve higher run-time and memory storage efficiency.
Both the boundary integrals and the domain integrals are calculated by recursive operations on a quad-tree structure without explicitly forming the coefficient matrix. Combining multipole expansions with local expansions, the computational complexity and the memory requirement of the matrix-vector multiplication are both reduced to O(N), where N is the number of degrees of freedom (DOFs). The accuracy and efficiency of the proposed scheme are demonstrated by several numerical examples. Copyright © 2006 John Wiley & Sons, Ltd. [source]

A fast multi-level convolution boundary element method for transient diffusion problems (INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 14 2005; C.-H. Wang)
Abstract: A new algorithm is developed to evaluate the time convolution integrals that are associated with boundary element methods (BEM) for transient diffusion. This approach, which is based upon the multi-level multi-integration concepts of Brandt and Lubrecht, provides a fast, accurate and memory-efficient time domain method for this entire class of problems. Conventional BEM approaches result in operation counts of order O(N^2) for the discrete time convolution over N time steps. Here we focus on the formulation for linear problems of transient heat diffusion and demonstrate reduced computational complexity to order O(N^(3/2)) for three two-dimensional model problems using the multi-level convolution BEM. Memory requirements are also significantly reduced, while maintaining the same level of accuracy as the conventional time domain BEM approach. Copyright © 2005 John Wiley & Sons, Ltd. [source]

A rapidly converging filtered-error algorithm for multichannel active noise control (INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 7 2007; A. P. Berkhoff)
Abstract: In this paper, a multichannel adaptive control algorithm is described which has good convergence properties while having relatively small computational complexity.
This complexity is similar to that of the filtered-error algorithm. In order to obtain these properties, the algorithm is based on a preprocessing step for the actuator signals using a stable and causal inverse of the minimum-phase part of the transfer path between actuators and error sensors, the secondary path. The latter algorithm is known from the literature as the postconditioned filtered-error algorithm, which improves the convergence rate for the case that the minimum-phase part of the secondary path increases the eigenvalue spread. However, the convergence rate of this algorithm suffers from delays in the adaptation path, because adaptation rates have to be reduced for larger delays. The contribution of this paper is to modify the postconditioned filtered-error scheme in such a way that the adaptation rate can be set to a higher value. Consequently, the scheme also provides good convergence if the system contains significant delays. Furthermore, a regularized extension of the scheme is given which can be used to limit the actuator signals. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Fractionally spaced blind equalization with low-complexity concurrent constant modulus algorithm and soft decision-directed scheme (INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 6 2005; S. Chen)
Abstract: The paper proposes a low-complexity concurrent constant modulus algorithm (CMA) and soft decision-directed (SDD) scheme for fractionally spaced blind equalization of high-order quadrature amplitude modulation channels. We compare our proposed blind equalizer with the recently introduced state-of-the-art concurrent CMA and decision-directed (DD) scheme. The proposed CMA+SDD blind equalizer is shown to have simpler computational complexity per weight update, faster convergence speed, and slightly improved steady-state equalization performance, compared with the existing CMA+DD blind equalizer. Copyright © 2004 John Wiley & Sons, Ltd.
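For readers unfamiliar with the constant modulus algorithm that both schemes above build on, a baud-spaced, single-channel CMA sketch is shown below; the paper's fractionally spaced concurrent CMA+SDD equalizer is considerably more elaborate. Tap count, step size and the test channel are illustrative assumptions.

```python
import numpy as np

def cma_equalize(x, n_taps=7, mu=0.01, R2=1.0):
    """Constant modulus algorithm: adapt equalizer taps w so that the
    output modulus |y|^2 approaches the constant R2, using no training
    symbols.  Stochastic-gradient update:
        w <- w - mu * y * (|y|^2 - R2) * conj(x_window)."""
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                      # center-spike initialization
    y = np.zeros(len(x) - n_taps, dtype=complex)
    for k in range(len(y)):
        xk = x[k:k + n_taps][::-1]            # most recent sample first
        yk = w @ xk
        w = w - mu * yk * (abs(yk) ** 2 - R2) * np.conj(xk)
        y[k] = yk
    return w, y
```

Run on unit-modulus QPSK symbols passed through a mild ISI channel, the dispersion of the output modulus should shrink as the taps adapt.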
[source]

The Gauss-Seidel fast affine projection algorithm for multichannel active noise control and sound reproduction systems (INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 2-3 2005; Martin Bouchard)
Abstract: In the field of adaptive filtering, the fast implementations of affine projection algorithms are known to provide a good tradeoff between convergence speed and computational complexity. Such algorithms have recently been published for multichannel active noise control systems. Previous work reported that these algorithms can outperform more complex recursive least-squares algorithms when noisy plant models are used in active noise control systems. This paper proposes a new fast affine projection algorithm for multichannel active noise control or sound reproduction systems, based on the Gauss-Seidel solving scheme. The proposed algorithm has a lower complexity than the previously published algorithms, with the same convergence speed and the same good performance with noisy plant models, and a potential for better numerical stability. It provides the best performance/cost ratio. Details of the algorithm and its complexity are presented in the paper, with simulation results to validate its performance. Copyright © 2004 John Wiley & Sons, Ltd. [source]

Blind MIMO equalization with optimum delay using independent component analysis (INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 3 2004; Vicente Zarzoso)
Abstract: Blind space-time equalization of multiuser time-dispersive digital communication channels consists of recovering the users' simultaneously transmitted data free from the interference caused by each other and the propagation effects, without using training sequences. In scenarios composed of mutually independent non-Gaussian i.i.d.
users' signals, independent component analysis (ICA) techniques based on higher-order statistics can be employed to refine the performance of conventional linear detectors, as recently shown in a code division multiple access environment (Signal Process 2002; 82:417-431). This paper extends these results to the more general multi-input multi-output (MIMO) channel model, with the minimum mean square error (MMSE) as the conventional equalization criterion. The time diversity introduced by the wideband multipath channel enables a reduction of the computational complexity of the ICA post-processing stage while further improving performance. In addition, the ICA-based detector can be tuned to extract each user's signal at the delay which provides the best MMSE. Experiments in a variety of simulation conditions demonstrate the benefits of ICA-assisted MIMO equalization. Copyright © 2004 John Wiley & Sons, Ltd. [source]

A forward-only recursion algorithm for MAP decoding of linear block codes (INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 8 2002; Hans-Jürgen Zepernick)
Abstract: The evolution of digital mobile communications along with the increase of integrated circuit complexity has resulted in frequent use of error control coding to protect information against transmission errors. Soft decision decoding offers better error performance compared to hard decision decoding, but at the expense of decoding complexity. The maximum a posteriori (MAP) decoder is a decoding algorithm which processes soft information and aims at minimizing the bit error probability. In this paper, a matrix approach is presented which analytically describes MAP decoding of linear block codes in an original domain and a corresponding spectral domain. The trellis-based decoding approach belongs to the class of forward-only recursion algorithms.
It is applicable to high-rate block codes with a moderate number of parity bits and allows a simple implementation in the spectral domain in terms of storage requirements and computational complexity. In particular, the required storage space can be significantly reduced compared to conventional BCJR-based decoding algorithms. Copyright © 2002 John Wiley & Sons, Ltd. [source]

Performance of Markov models for frame-level errors in IEEE 802.11 wireless LANs (INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 6 2009; Gennaro Boggia)
Abstract: Interference among different wireless hosts is becoming a serious issue due to the growing number of wireless LANs based on the popular IEEE 802.11 standard. Thus, an accurate modeling of error paths at the data link layer is indispensable for evaluating system performance and for tuning and optimizing protocols at higher layers. Error paths are usually described looking at sequences of consecutive correct or erroneous frames and at the distributions of their sizes. In recent years, a number of Markov-based stochastic models have been proposed in order to statistically characterize these distributions. Nevertheless, when applied to analyze the data traces we collected, they exhibit several flaws. In this paper, to overcome these model limitations, we propose a new algorithm based on a semi-Markov process, where each state characterizes a different error pattern. The model has been validated by using measures from a real environment. Moreover, we have compared our method with other promising models already available in the literature. Numerical results show that our proposal performs better than the other models in capturing the long-term temporal correlation of real measured traces. At the same time, it is able to estimate first-order statistics with the same accuracy as the other models, but with lower computational complexity. Copyright © 2009 John Wiley & Sons, Ltd.
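The semi-Markov model itself is not specified in the abstract above, but the classic two-state Markov (Gilbert) frame-error baseline that such proposals are usually measured against is easy to sketch: a GOOD/BAD state chain whose sojourn in the BAD state produces bursts of erroneous frames. Parameter names and values below are illustrative.

```python
import random

def gilbert_trace(n, p_gb, p_bg, e_good=0.0, e_bad=1.0, seed=0):
    """Two-state Markov (Gilbert) frame-error model: transition probs
    p_gb (good -> bad) and p_bg (bad -> good); a frame is erroneous
    with probability e_good or e_bad depending on the current state.
    Mean error-burst length is roughly 1 / p_bg when e_bad = 1."""
    rng = random.Random(seed)
    state_bad = False
    trace = []
    for _ in range(n):
        if state_bad:
            if rng.random() < p_bg:
                state_bad = False
        elif rng.random() < p_gb:
            state_bad = True
        err = rng.random() < (e_bad if state_bad else e_good)
        trace.append(1 if err else 0)
    return trace

def mean_burst_length(trace):
    """Average length of runs of consecutive erroneous frames."""
    bursts, run = [], 0
    for f in trace:
        if f:
            run += 1
        elif run:
            bursts.append(run)
            run = 0
    if run:
        bursts.append(run)
    return sum(bursts) / len(bursts) if bursts else 0.0
```

With p_gb = 0.01 and p_bg = 0.25 the simulated mean burst length should be close to 4 frames, and the frame error rate close to the stationary BAD-state probability 0.01 / 0.26.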
[source]

Performance of robust symbol-timing and carrier-frequency estimation for OFDM systems (INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 5 2009; Nan-Yang Yen; article first published online: 7 NOV 200)
Abstract: In recent years, many maximum likelihood (ML) blind estimators have been proposed to estimate timing and frequency offsets for orthogonal frequency division multiplexing (OFDM) systems. However, the previously proposed ML blind estimators utilizing the cyclic prefix do not fully characterize the random observation vector over the entire range of the timing offset and will significantly degrade the estimation performance. In this paper, we present a global ML blind estimator to compensate for the estimation error. Moreover, we extend the global ML blind estimator by accumulating the ML function of the estimation parameters to achieve better accuracy without increasing the hardware or computational complexity. The simulation results show that the proposed algorithm can significantly improve the estimation performance in both additive white Gaussian noise and ITU-R M.1225 multipath channels. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Low complexity bit allocation algorithm for OFDM systems (INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 11 2008; Changwook Lee)
Abstract: A bit allocation algorithm is presented for orthogonal frequency division multiplexing (OFDM) systems. The proposed algorithm is derived from the geometric progression of the additional transmission power required by the subcarriers and the arithmetic-geometric means inequality. Consequently, this algorithm has a simple procedure and low computational complexity. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Clustering-based scheduling: A new class of scheduling algorithms for single-hop lightwave networks (INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 8 2008; Sophia G. Petridou)
Abstract: In wavelength division multiplexing (WDM) star networks, the construction of the transmission schedule is a key issue, which essentially affects the network performance. Up to now, classic scheduling techniques consider the nodes' requests in a sequential service order. However, these approaches are static and do not take into account the individual traffic pattern of each node. Owing to this major drawback, they suffer from low performance, especially when operating under asymmetric traffic. In this paper, a new class of scheduling algorithms for WDM star networks, which is based on the use of clustering techniques, is introduced. According to the proposed Clustering-Based Scheduling Algorithm (CBSA), the network's nodes are organized into clusters, based on the number of their requests per channel. Then, their transmission priority is defined beginning from the nodes belonging to clusters with higher demands and ending with the nodes of clusters with fewer requests. The main objective of the proposed scheme is to minimize the length of the schedule by rearranging the nodes' service order. Furthermore, the proposed CBSA scheme adopts a prediction mechanism to minimize the computational complexity of the scheduling algorithm. Extensive simulation results are presented, which clearly indicate that the proposed approach leads to significantly higher throughput-delay performance when compared with conventional scheduling algorithms. We believe that the proposed clustering-based approach can be the base of a new generation of high-performance scheduling algorithms for WDM star networks. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Simplified group interference cancelling for asynchronous DS-CDMA (INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 10 2006; David W. Matolak)
Abstract: A simplified group interference cancelling (IC) approach is investigated for asynchronous direct-sequence code-division multiple access on flat fading channels.
The technique employs grouping by estimated signal-to-noise-plus-interference ratio (SNIR), and interference cancellation is performed blockwise, for a subset of the total number of users. We consider long random spreading codes, and include the effects of imperfect amplitude, carrier phase, and delay estimation. Performance of the technique shows SNIR gains of several dB, and concomitant improvements in error probability, with lower computational complexity than that of parallel or serial interference cancelling techniques. We also show that our SNIR expressions are applicable to both the AWGN and flat fading channels, and for moderate near-far conditions. In addition, we determine optimal group sizes for our technique, where optimality is in terms of average error probability over all users. Copyright © 2006 John Wiley & Sons, Ltd. [source]

A novel approach to enable decorrelating multiuser detection without matrix inversion operations (INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 9 2004; Hsiao-Hwa Chen)
Abstract: This paper proposes a non-matrix-inversion based algorithm to implement decorrelating detection (DD), namely the quasi-decorrelating detector (QDD), which uses a truncated matrix series expansion to overcome the problems associated with the matrix inversion in DD, such as noise enhancement, computational complexity and matrix singularity. Two alternative QDD implementation schemes are presented in this paper; one uses multi-stage feedforward filters and the other uses an nth-order single matrix filter (neither of which involves matrix inversion). In addition to significantly reduced computational complexity compared with DD, the QDD algorithm offers a unique flexibility to trade among MAI suppression, near-far resistance and noise enhancement depending on varying system set-ups. The obtained results show that the QDD outperforms DD in either AWGN or multipath channels if a proper number of feed-forward stages is used.
We also study the impact of the correlation statistics of spreading codes on the QDD's performance with the help of a performance-determining factor derived in the paper, which offers a code-selection guideline for the optimal performance of the QDD algorithm. Copyright © 2004 John Wiley & Sons, Ltd. [source] Adaptive directional-aware location update strategy INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 2 2004 Tracy Tung Abstract In this paper, a new tracking strategy, the directivity-aware location updating scheme, is developed to better utilize the distinct travelling-direction characteristics of individual users. In this new adaptive scheme, an optimal distance-based update threshold is selected according to the call-to-mobility ratio and a transitional directivity index, introduced to give indications of the mobile's travelling patterns. It is found that, as far as mobility characteristics are concerned, the actual transitional direction of roaming mobiles plays a significant role in selecting the optimal threshold, in addition to the usual perception about mobility rate. Its advantage becomes even more visible when an optimal threshold is not theoretically obtainable due to certain restrictions imposed by the network during times of high system loading. Simulation results show that the additional information made available about a roaming mobile's transitional directivity is critical to ensure that the best available sub-optimal threshold is realizable. Other advances in this paper include the simplification of the existing Markovian movement model. With the improved model presentation, the number of states necessary to simulate such memoryless movements is reduced. Consequently, the computational complexity involved is also lessened. Copyright © 2004 John Wiley & Sons, Ltd.
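The truncated matrix-series expansion behind the quasi-decorrelating detector (QDD) abstract above can be sketched in a few lines. Assuming the cross-correlation matrix R is normalized to a unit diagonal, the decorrelator's inverse R^-1 is approximated by the first n terms of the Neumann series sum_{k=0}^{n} (I - R)^k, each extra term corresponding to one feed-forward stage; the series converges when the spectral radius of (I - R) is below one, i.e. for moderate cross-correlation. The matrix values below are illustrative, not taken from the paper.

```python
import numpy as np

def qdd_filter(R, n_stages):
    """Approximate inv(R) by the truncated Neumann series
    sum_{k=0}^{n_stages} (I - R)^k, avoiding explicit matrix inversion.
    Converges when the spectral radius of (I - R) is < 1."""
    I = np.eye(R.shape[0])
    term = I.copy()       # holds (I - R)^k
    approx = I.copy()     # running partial sum
    for _ in range(n_stages):
        term = term @ (I - R)
        approx += term
    return approx

# Toy 3-user normalized correlation matrix (illustrative values only).
R = np.array([[1.0, 0.2, 0.1],
              [0.2, 1.0, 0.3],
              [0.1, 0.3, 1.0]])

approx = qdd_filter(R, n_stages=6)
print(np.max(np.abs(approx - np.linalg.inv(R))))  # error shrinks as stages grow
```

Each added stage multiplies the residual error by roughly the spectral radius of (I - R), which is the trade-off between complexity and accuracy the abstract alludes to.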
[source] Selective partial PIC for wireless CDMA communications INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 6 2003 Filippo Belloni Abstract This paper deals with a cancellation multiuser detector for CDMA communication systems. The proposed receiver, termed selective partial parallel interference cancellation (SP-PIC), is intended for use at the receiving end of an up-link channel characterized by multipath fading. The main feature of SP-PIC is that it performs a weighted selective cancellation of the co-channel interfering signals according to the received power level. With respect to other approaches, the proposed detector exhibits an improved bit error rate (BER) and a low computational complexity, linear in the number of users. Copyright © 2003 John Wiley & Sons, Ltd. [source] An enhanced explicit rate algorithm for ABR traffic control in ATM networks INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 10 2001 Y. H. Long Abstract A high-performance, low-computational-complexity rate-based flow control algorithm that can avoid congestion and achieve fairness is important to the ATM available bit rate service. The explicit rate allocation algorithm proposed by Kalampoukas et al. is designed to achieve max-min fairness in ATM networks. It has several attractive features, such as a fixed computational complexity of O(1) and guaranteed convergence to max-min fairness. In this paper, certain drawbacks of the algorithm, such as the severe overload of an outgoing link during the transient period and the non-conforming use of the current cell rate field in a resource management cell, have been identified and analysed; a new algorithm which overcomes these drawbacks is proposed. The proposed algorithm simplifies the rate computation as well. Compared with the algorithm of Kalampoukas et al., it has better performance in terms of congestion avoidance and smoothness of rate allocation. Copyright © 2001 John Wiley & Sons, Ltd.
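The max-min fairness objective pursued by the explicit-rate algorithm above can be illustrated with a single-link progressive-filling sketch: sessions demanding less than an equal share keep their demand, and the leftover capacity is split evenly among the rest. This is the generic textbook procedure, not the O(1) per-RM-cell switch algorithm of the paper; the session names and numbers are illustrative.

```python
def max_min_rates(capacity, demands):
    """Max-min fair allocation on a single link: satisfy every session
    whose demand fits under the current equal share, then split the
    remaining capacity evenly among the still-unsatisfied sessions."""
    rates = {}
    remaining = capacity
    pending = dict(demands)
    while pending:
        share = remaining / len(pending)
        # sessions whose demand fits under the current fair share are granted it
        small = {s: d for s, d in pending.items() if d <= share}
        if not small:
            # no one is below the share: everyone left gets an equal split
            for s in pending:
                rates[s] = share
            return rates
        for s, d in small.items():
            rates[s] = d
            remaining -= d
            del pending[s]
    return rates

# Session A's small demand is met in full; B and C split the remainder equally.
print(max_min_rates(10.0, {"A": 2, "B": 5, "C": 8}))
```

Explicit-rate schemes like the one in the abstract aim to converge to exactly this allocation, but compute it incrementally from resource management cells rather than from a global demand vector.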
[source] A projection-based image quality measure INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 2-3 2008 Jianxin Pang Abstract An objective image quality measure, which evaluates image quality automatically and consistently with human perception, could be employed in image and video retrieval, and a measure with high efficiency and low computational complexity plays an important role in numerous image and video processing applications. On the assumption that any image's distortion can be modeled as the difference between the projection-based values (PV) of the reference image and those of the distorted image, we propose a new objective quality assessment method based on signal projection for the full-reference model. The proposed metric uses simple parameters to achieve high efficiency and low computational complexity. Experimental results show that the proposed method is well consistent with subjective quality scores. © 2008 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 18, 94-100, 2008 [source] Watermarking in halftone images with mixed halftone techniques INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 5 2007 Jing-Ming Guo Abstract Ordered dithering and error diffusion are the two most popular processes for producing halftone results in the printing industry. Ordered dithering inherently has the benefit of efficiency; error diffusion, on the other hand, offers high quality at reasonable complexity. In this article, we propose a watermarking method that adopts ordered dithering to produce the reference halftone image and then applies noise-balanced error diffusion to embed the watermark. Low computational complexity, low memory demand, and good embedded image quality are achieved with the proposed technique. The experimental results show that this technique can guard against cropping and print-and-scan, two major degradation processes for halftone images. © 2008 Wiley Periodicals, Inc.
Int J Imaging Syst Technol, 17, 303-314, 2007 [source]
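Error diffusion, one of the two halftoning processes the watermarking abstract above builds on, binarizes each pixel and pushes the quantization error onto not-yet-processed neighbours. The sketch below uses the classic Floyd-Steinberg weights rather than the paper's noise-balanced variant; it is a minimal illustration of plain error diffusion, not the proposed embedding scheme.

```python
import numpy as np

def floyd_steinberg(gray):
    """Binary halftone by error diffusion: threshold each pixel at 0.5
    and spread the quantization error to the four classic Floyd-Steinberg
    neighbours with weights 7/16, 3/16, 5/16, 1/16."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16      # right
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16  # below-left
                img[y + 1, x] += err * 5 / 16          # below
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16  # below-right
    return out

# A flat mid-gray patch halftones to a dot pattern whose mean stays near 0.5.
patch = np.full((32, 32), 0.5)
halftone = floyd_steinberg(patch)
print(halftone.mean())
```

Because the error is carried forward rather than discarded, the local mean of the binary output tracks the input gray level, which is what makes error diffusion higher quality than ordered dithering at comparable cost.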