Conference Publications
Abstract
In this paper, we study the benefits of rate-splitting (RS) in multiple-input multiple-output (MIMO) cloud radio access networks (C-RAN). For this setting, we propose a stream-based transmission scheme in which users' messages are divided into a private and a common part, each of which is encoded into multiple streams. Under this stream-based strategy, we formulate a weighted sum-rate maximization problem subject to backhaul capacity and transmit power constraints. We determine the beamforming vectors of private and common streams of this non-convex optimization problem via a proposed iterative algorithm based on the fractional programming (FP) framework. Numerical results show that RS-MIMO achieves gains of up to 27% over the baseline scheme of treating interference as noise (TIN). Particularly, at large backhaul capacities and specific antenna settings at which interference levels are maximal, rate splitting along with common message decoding is a viable option for effective interference management in MIMO C-RAN.
Abstract
Attitude determination is an important application of Global Navigation Satellite Systems (GNSS). However, before GNSS attitude determination can be achieved, the carrier-phase integer ambiguity must be resolved. We handle the attitude determination problem by arranging the GNSS receiving antennas on two non-collinear baselines, allowing us to obtain the 3-D attitude of the moving platform on which they are stationed. Initially, we tackle the ambiguity resolution problem independently over each baseline based on single phase-difference measurements. To this end, we discuss and test two different approaches to carrier-phase ambiguity resolution: we either exploit the receiver antenna configuration or employ multiple carrier frequencies. Namely, we show that for special configurations of collinear antenna triplets, the ambiguity resolution problem can be handled using a simple algebraic formula. A similar approach is developed for a baseline with only two antennas, in which we utilize a pair of GNSS carrier frequencies that satisfy a specific condition. The initial solution to the algebraic formula yields only coarse vectors indicating the pointing direction of the baselines and coarse unwrapped phase-difference measurements. Therefore, we develop and apply refining procedures to improve the initial results. Using the obtained coarse phase differences, we formulate the attitude determination problem as a least-squares problem with dual baseline length constraints and an inter-baseline angle constraint. This is a non-convex optimization problem to which we propose an efficient solution. This solution, combined with each of the two ambiguity resolution approaches, results in two different methods for attitude determination. The proposed methods are extensively tested using simulations covering a broad range of scenarios. The results demonstrate high success rates of ambiguity resolution and high attitude accuracy. Moreover, the proposed methods perform reasonably well even in scenarios with a small number of visible satellites.
Abstract
In this paper, we derive an analytical expression for the bit error rate (BER) of binary phase shift keying (BPSK) symbols transmitted over a multiple-input multiple-output (MIMO) system. In this wireless communications system, the receiver uses the linear minimum mean squared error (LMMSE) estimator to estimate the channel matrix. The error in this channel estimate propagates into the subsequent symbol estimation used to recover the transmitted symbols.
We derive the BER of the estimated symbols as a function of the energy allocation. Exploiting the large dimensionality of the problem, we leverage tools from random matrix theory (RMT) to express the BER only in terms of the deterministic parameters of the system.
We further utilize the deterministic expression to find the optimal energy allocation.
The theoretical results are matched with simulations, showing a high level of congruence.
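As a rough illustration of the quantities involved, the sketch below evaluates the textbook BPSK bit error rate Q(sqrt(2·SNR)) with an effective SNR penalized by a channel-estimation error variance. This degradation model is an illustrative assumption for this sketch, not the paper's exact RMT-based expression.

```python
import math

def q_function(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def bpsk_ber(snr_linear, est_error_var=0.0):
    """BPSK BER with an effective SNR degraded by channel-estimation
    error variance (illustrative model, not the paper's expression)."""
    snr_eff = snr_linear / (1.0 + est_error_var * snr_linear)
    return q_function(math.sqrt(2.0 * snr_eff))

print(bpsk_ber(0.0))        # Q(0) = 0.5
print(bpsk_ber(10.0, 0.1))  # worse than the perfect-CSI BER at SNR 10
```

As expected under this model, the BER increases monotonically with the estimation error variance, which is the effect the paper quantifies exactly via random matrix theory.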
Abstract
As an alternative to low-rate and high-latency acoustic systems, underwater optical wireless sensor networks (UOWSNs) are a promising technology to enable high-speed and low-latency underwater communications. However, the aquatic medium poses significant challenges for underwater optical wireless communications (UOWC), such as the high absorption, scattering, ambient noise, and turbulence impairments of seawater. These severe impairments yield very limited transmission ranges and necessitate multihop transmissions to expand communication ranges and enhance network connectivity. Therefore, one needs to take some crucial design parameters into account in order to achieve a fully connected multihop UOWSN (MH-UOWSN). Unlike omnidirectional wireless networks, one of the most distinctive features of UOWSNs is that transmission occurs only within a directed beam sector. Therefore, we model an MH-UOWSN as a randomly scaled sector graph where connections among the nodes are established by point-to-point directed links. Thereafter, the probability of network connectivity is analytically derived as a function of communication range, network density, and beam-width. Through extensive simulations, we demonstrate that the probability of an obscured/isolated node strongly depends on these three parameters and that the upper bound for network connectivity is achieved at larger beam-widths and dense deployments. The proposed work provides a practical method for the effective selection of the physical layer design parameters of MH-UOWSNs.
Abstract
This paper presents a new adaptation of a Gaussian echo model (GEM) to estimate the distances to multiple targets using acoustic signals. The proposed algorithm utilizes m-sequences and opens the door for applying other modulations and signal designs for acoustic estimation in a similar way. The proposed algorithm estimates the system impulse response and uses the GEM to limit the effect of noise before applying deconvolution to estimate the time of arrival (TOA) to multiple targets with high accuracy. The algorithm was experimentally evaluated for different scenarios with active (transmitters) and passive (reflectors) targets at proximity. In the case of closely spaced static passive targets, results show that 90% of the ranging errors are below 7 mm. When tracking two moving active targets approaching very close proximity, results show that 90% of the ranging errors are less than 10 mm.
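The core ranging step behind such m-sequence systems can be sketched as follows: generate an m-sequence from a linear feedback shift register and locate the peak of its circular cross-correlation with the received signal, which gives the TOA in samples. The LFSR taps below are illustrative; the GEM denoising and multi-target deconvolution of the paper are beyond this single-echo sketch.

```python
def m_sequence(taps=(3, 2), nbits=3):
    """Generate a +/-1 m-sequence of length 2**nbits - 1 from an LFSR
    with the given feedback taps (polynomial x^3 + x^2 + 1 here)."""
    state = [1] * nbits
    seq = []
    for _ in range(2 ** nbits - 1):
        seq.append(1 if state[-1] else -1)
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return seq

def estimate_delay(received, reference):
    """Return the circular lag maximizing the cross-correlation."""
    n = len(reference)
    best_lag, best_val = 0, float("-inf")
    for lag in range(n):
        val = sum(received[i] * reference[(i - lag) % n] for i in range(n))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

seq = m_sequence()
rx = [seq[(i - 4) % len(seq)] for i in range(len(seq))]  # echo delayed by 4
print(estimate_delay(rx, seq))  # -> 4
```

The m-sequence's near-ideal circular autocorrelation (peak N at zero lag, -1 elsewhere) is what makes the correlation peak unambiguous.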
Abstract
In this paper, a received signal strength (RSS) based localization technique is investigated for underwater optical wireless sensor networks (UOWSNs), where optical noise sources (e.g., sunlight, background, thermal, and dark current) and channel impairments of seawater (e.g., absorption, scattering, and turbulence) pose significant challenges. Hence, we propose a localization technique that works on the noisy ranging measurements embedded in a higher dimensional space and localizes the sensor network in a low dimensional space. Once the neighborhood information is measured, a weighted network graph is constructed, which contains the one-hop neighbor distance estimations. A novel approach is developed to complete the missing distances in the kernel matrix. The output of the proposed technique is fused with the Helmert transformation to refine the final location estimation with the help of anchors. The simulation results show that the proposed technique is more robust and accurate, in terms of root mean square positioning error (RMSPE), compared to the baseline and manifold regularization methods.
Abstract
Underwater optical wireless networks (UOWNs) have recently gained attention as an emerging solution to the growing demand for broadband connectivity. Even though it is an alternative to low-bandwidth and high-latency acoustic systems, underwater optical wireless communications (UOWC) suffers from limited range and requires effective multi-hop solutions. Therefore, this paper analyzes and compares the performance of multihop underwater optical wireless networks under two relaying schemes: Decode & Forward (DF) and Amplify & Forward (AF). Noting that nodes close to the surface sink (SS) are required to relay more information, these nodes are enabled for retro-reflective communication, where SS illuminates these nodes with a continuous-wave beam which is then modulated and reflected back to the SS receivers. Accordingly, we analytically evaluate important performance metrics including end-to-end bit error rate, achievable multihop data rates, and communication ranges between node pairs. Thereafter, we develop routing algorithms for DF and AF schemes in order to maximize the end-to-end performance metrics. Numerical results demonstrate that multi-hop transmission can significantly enhance the network performance and expand the communication range.
Abstract
This paper presents a new approach for studying the steady-state performance of the Recursive Least Squares (RLS) adaptive filter for a circularly correlated Gaussian input. Earlier methods have two major drawbacks: (1) the energy relation developed for the RLS is approximate (as we show later), and (2) the moment of the random variable ||u_i||^2_{P_i}, where u_i is the input to the RLS filter and P_i is the estimate of the inverse of the input covariance matrix, is evaluated by assuming that u_i and P_i are independent (which is not true). These assumptions can result in a negative value of the steady-state excess mean square error (EMSE). To overcome these issues, we modify the energy relation without imposing any approximation.
Based on the modified energy relation, we derive the steady-state EMSE and two upper bounds on the EMSE. For that, we derive a closed-form expression for the aforementioned moment, which is based on finding the cumulative distribution function (CDF) of a random variable of the form (gamma + ||u||^2_D)^{-1}, where u is a correlated circular Gaussian input and D is a diagonal matrix. Simulation results corroborate our analytical findings.
Abstract
This paper addresses the problem of 3-D location estimation from perturbed range information and uncertain anchor positions. The 3-D location estimation problem is formulated as a min-max convex optimization problem with a set of second-order cone constraints. Robust optimization tools are applied to convert these cone constraints to semi-definite programming constraints and achieve robust location estimation without prior knowledge of the statistical distributions of the errors. Simulation results demonstrate the superiority of the proposed approach over other benchmark algorithms in a wide range of measurement error scenarios.
Abstract This paper carries out a large dimensional analysis of the standard regularized quadratic discriminant analysis (QDA) classifier designed on the assumption that data arise from a Gaussian mixture model. The analysis relies on fundamental results from random matrix theory (RMT) when both the number of features and the cardinality of the training data within each class grow large at the same pace. Under some mild assumptions, we show that the asymptotic classification error converges to a deterministic quantity that depends only on the covariances and means associated with each class as well as the problem dimensions. Such a result permits a better understanding of the performance of regularized QDA and can be used to determine the optimal regularization parameter that minimizes the misclassification error probability. Despite being valid only for Gaussian data, our theoretical findings are shown to yield high accuracy in predicting the performance achieved with real data sets drawn from popular databases, thereby making an interesting connection between theory and practice.
Abstract In this paper, the focus is on optimal sensor placement and power rating selection for parameter estimation in wireless sensor networks (WSNs). We take into account the amount of energy harvested by the sensing nodes, communication link quality, and the observation accuracy at the sensor level. In particular, the aim is to reconstruct the estimation parameter with minimum error at a fusion center under a system budget constraint. To achieve this goal, a subset of sensing locations is selected from a large pool of candidate sensing locations. Furthermore, the type of sensor to be placed at those locations is selected from a given set of sensor types (e.g., sensors with different power ratings). We further investigate whether it is better to install a large number of cheap sensors, a few expensive sensors or a combination of different sensor types at the optimal locations.
Abstract The image restoration problem deals with images in which information has been degraded by blur or noise. In this work, we present a new method for image deblurring by solving a regularized linear least-squares problem. In the proposed method, a synthetic perturbation matrix with a bounded norm is forced into the discrete ill-conditioned model matrix. This perturbation is added to enhance the singular-value structure of the matrix and hence to provide an improved solution. A method is proposed to find a near-optimal value of the regularization parameter for the proposed approach. To reduce the computational complexity, we present a technique based on the bootstrapping method to estimate the regularization parameter for both low and high-resolution images. Experimental results on the image deblurring problem are presented. Comparisons are made with three benchmark methods and the results demonstrate that the proposed method clearly outperforms the other methods in terms of both the output PSNR and SSIM values.
Abstract
Hand gestures are tools for conveying information, expressing emotion, interacting with electronic devices or even serving disabled people as a second language. A gesture can be recognized by capturing the movement of the hand, in real time, and classifying the collected data. Several commercial products such as Microsoft Kinect, Leap Motion Sensor, Synertial Gloves and HTC Vive have been released and new solutions have been proposed by researchers to handle this task. These systems are mainly based on optical measurements, inertial measurements, ultrasound signals and radio signals. This paper proposes an ultrasonic-based gesture recognition system using AOA (Angle of Arrival) information of ultrasonic signals emitted from a wearable ultrasound transducer. The 2-D angles of the moving hand are estimated using multi-frequency signals captured by a fixed receiver array. A simple redundant dictionary matching classifier is designed to recognize gestures representing the numbers from '0' to '9' and compared with a neural network classifier. Average classification accuracies of 95.5% and 94.4% are obtained, respectively, using the two classification methods.
Abstract
This paper presents a new adaptation of Zadoff-Chu sequences for the purpose of range estimation and movement tracking. The proposed method uses Zadoff-Chu sequences utilizing a wideband ultrasonic signal to estimate the range between two devices with very high accuracy and high update rate. This range estimation method is based on time of flight (TOF) estimation using cyclic cross correlation. The system was experimentally evaluated under different noise levels and multi-user interference scenarios. For a single user, the results show less than 7 mm error for 90% of range estimates in a typical indoor environment. Under the interference from three other users, the 90% error was less than 25 mm. The system provides a high estimation update rate allowing accurate tracking of objects moving with high speed.
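The cyclic cross-correlation step at the heart of such a scheme can be sketched in a few lines: a Zadoff-Chu sequence has perfect periodic autocorrelation (when its root is coprime with its length), so the magnitude of the cyclic correlation peaks exactly at the delay. The length and root below are illustrative toy values; the wideband ultrasonic modulation and sampling details are omitted.

```python
import cmath

def zadoff_chu(n_len=7, root=1):
    """Zadoff-Chu sequence of odd length n_len with the given root."""
    return [cmath.exp(-1j * cmath.pi * root * n * (n + 1) / n_len)
            for n in range(n_len)]

def cyclic_delay(received, reference):
    """TOF (in samples) via the peak of the cyclic cross-correlation."""
    n = len(reference)
    mags = []
    for lag in range(n):
        c = sum(received[i] * reference[(i - lag) % n].conjugate()
                for i in range(n))
        mags.append(abs(c))
    return max(range(n), key=mags.__getitem__)

zc = zadoff_chu()
rx = [zc[(i - 3) % len(zc)] for i in range(len(zc))]  # delayed by 3 samples
print(cyclic_delay(rx, zc))  # -> 3
```

In a real ranging system the estimated sample delay is scaled by the sampling period and the speed of sound to obtain the range.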
Abstract
This paper focuses on the problem of selecting the regularization parameter for linear least-squares estimation. Usually, the problem is formulated as a minimization problem with a cost function consisting of the squared l2-norm of the residual error plus a penalty term of the squared norm of the solution multiplied by a constant. The penalty term has the effect of shrinking the solution towards the origin with a magnitude that depends on the value of the penalty constant. By considering both squared and non-squared norms of the residual error and the solution, four different cost functions can be formed to achieve the same goal. In this paper, we show that all four cost functions lead to the same closed-form solution involving a regularization parameter, which is related to the penalty constant through a different constraint equation for each cost function. We show that for three of the cost functions, a specific procedure can be applied to combine the constraint equation with the mean squared error (MSE) criterion to develop approximately optimal regularization parameter selection algorithms. Performance of the developed algorithms is compared to existing methods to show that the proposed algorithms stay closest to the optimal MSE.
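The shared closed form referred to above is the familiar ridge-type solution x = (A^T A + lam*I)^{-1} A^T y. The sketch below specializes it to a single regressor, where the shrinkage effect of the penalty constant is easy to see; the multi-dimensional case and the constraint equations tying lam to each cost function are the paper's contribution and are not reproduced here.

```python
def ridge_1d(a, y, lam):
    """Closed-form regularized LS for a single regressor: minimizes
    sum_i (y_i - w*a_i)^2 + lam * w^2, giving
    w = (a . y) / (a . a + lam), shrunk toward zero as lam grows."""
    num = sum(ai * yi for ai, yi in zip(a, y))
    den = sum(ai * ai for ai in a) + lam
    return num / den

a = [1.0, 2.0, 3.0]
y = [2.0, 4.0, 6.0]          # exact fit is w = 2
print(ridge_1d(a, y, 0.0))   # -> 2.0 (ordinary least squares)
print(ridge_1d(a, y, 14.0))  # -> 1.0 (shrunk toward the origin)
```

Choosing lam well is exactly the selection problem the paper addresses: too small and the estimate is noise-sensitive, too large and it is over-shrunk.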
Abstract In this paper, we discuss the principal pivot transforms (PPT) on a family of matrices, called the radix-2 DFT-type matrices. Given a transformation matrix, the PPT of the matrix is another transformation matrix obtained by exchanging some entries between the input array and the output array. The radix-2 DFT-type matrices form a class of matrices such that the transformations by these matrices can be calculated via radix-2 butterflies. A number of well-known matrices, such as radix-2 DFT matrices and Hadamard matrices, belong to this class. In this paper, sufficient conditions for the PPTs on radix-2 DFT-type matrices are given, such that their transformations can also be computed in O(n log n). Then, based on these results, an encoding algorithm for systematic Reed-Solomon (RS) codes in O(n log n) field operations is presented.
Abstract The BOX-LASSO is a variant of the popular LASSO that includes an additional box-constraint. We propose its use as a decoder in modern Multiple Input Multiple Output (MIMO) communication systems with modulation methods such as the Generalized Space Shift Keying (GSSK) modulation, which produces constellation vectors that are inherently sparse and with bounded elements. In that direction, we prove novel explicit asymptotic characterizations of the squared-error and of the per-element error rate of the BOX-LASSO, under i.i.d. Gaussian measurements. In particular, the theoretical predictions can be used to quantify the improved performance of the BOX-LASSO, when compared to the previously used standard LASSO. We include simulation results that validate both these premises and our theoretical predictions.
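For concreteness, the BOX-LASSO decoder itself, min 0.5*||y - Ax||^2 + lam*||x||_1 subject to a box constraint, can be solved with projected ISTA (a standard proximal-gradient solver, used here only as an illustration; it is not necessarily the solver the authors use, and the toy problem below is an assumption of this sketch).

```python
def soft(v, t):
    """Soft-thresholding: proximal operator of t * |.|."""
    return max(v - t, 0.0) if v > 0 else min(v + t, 0.0)

def box_lasso(A, y, lam, lo=0.0, hi=1.0, step=0.4, iters=500):
    """Projected ISTA for min 0.5*||y - A x||^2 + lam*||x||_1
    subject to lo <= x_i <= hi (the box constraint)."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step, then soft-threshold, then project onto the box
        x = [min(max(soft(x[j] - step * g[j], step * lam), lo), hi)
             for j in range(n)]
    return x

A = [[1.0, 0.5], [0.5, 1.0]]
y = [1.0, 0.5]               # generated by the sparse vector x0 = [1, 0]
x_hat = box_lasso(A, y, lam=0.05)
print(x_hat)                 # close to [0.96, 0.0]
```

The box projection encodes the bounded constellation elements (e.g., GSSK entries in [0, 1]), which is precisely the prior information the BOX-LASSO exploits over the standard LASSO.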
Abstract Cache-enabled base station (BS) densification, denoted as a fog radio access network (F-RAN), is foreseen as a key component of 5G cellular networks. F-RAN enables storing popular files at the network edge (i.e., BS caches), which empowers local communication and alleviates traffic congestion at the core/backhaul network. The hitting probability, which is the probability of successfully serving requests for popular files from the network edge, is a fundamental key performance indicator (KPI) for F-RAN. This paper develops a scheduling-aware mathematical framework, based on stochastic geometry, to characterize the hitting probability of F-RAN in a multi-channel environment. To this end, we assess and compare the performance of two caching distribution schemes, namely, uniform caching and Zipf caching. The numerical results show that the commonly used single-channel environment leads to a pessimistic assessment of the hitting probability of F-RAN. Furthermore, the numerical results manifest the superiority of the Zipf caching scheme and quantify the hitting probability gains in terms of the number of channels and cache size.
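To build intuition for why Zipf-aware caching helps, the sketch below computes the cache hit probability when a BS simply stores the most popular files under a Zipf request distribution. This ignores the paper's stochastic-geometry and scheduling aspects entirely; it only illustrates the popularity skew that the Zipf caching scheme exploits.

```python
def zipf_popularity(n_files, s=1.0):
    """Zipf file-request probabilities: p_k proportional to k**(-s)."""
    w = [k ** (-s) for k in range(1, n_files + 1)]
    tot = sum(w)
    return [x / tot for x in w]

def hit_probability(pop, cache_size):
    """Cache the most popular files; a request hits if its file is cached."""
    return sum(sorted(pop, reverse=True)[:cache_size])

pop = zipf_popularity(10, s=1.0)
# Caching just 3 of 10 files already serves ~63% of requests:
print(round(hit_probability(pop, 3), 4))  # -> 0.6259
```

A uniform (popularity-blind) placement of 3 files would serve only 30% of requests, which is the gap the Zipf caching scheme capitalizes on.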
Abstract In this paper, we propose a novel patch-based image denoising algorithm using collaborative support-agnostic sparse reconstruction. In the proposed collaborative scheme, similar patches are assumed to share the same support taps. For sparse reconstruction, the likelihood of a tap being active in a patch is computed and refined through a collaboration process with other similar patches in the similarity group. This provides a very good patch support estimation, hence enhancing the quality of image restoration. Performance comparisons with state-of-the-art algorithms, in terms of PSNR and SSIM, demonstrate the superiority of the proposed algorithm.
Abstract This paper investigates the problem of recovering an n-dimensional BPSK signal x0 ∈ {-1, 1}^n.
Abstract Network densification has always been an important factor to cope with the ever increasing capacity demand. Deploying more base stations (BSs) improves the spatial frequency utilization, which increases the network capacity. However, such improvement comes at the expense of shrinking the BSs' footprints, which increases the handover (HO) rate and may diminish the foreseen capacity gains. In this paper, we propose a cooperative HO management scheme to mitigate the HO effect on throughput gains achieved via cellular network densification. The proposed HO scheme relies on skipping HO to the nearest BS at some instances along the user's trajectory while enabling cooperative BS service during HO execution at other instances. To this end, we develop a mathematical model, via stochastic geometry, to quantify the performance of the proposed HO scheme in terms of coverage probability and user throughput. The results show that the proposed cooperative HO scheme outperforms the always best connected based association at high mobility. Also, the value of BS cooperation along with handover skipping is quantified with respect to the HO skipping only scheme that has recently appeared in the literature. Particularly, the proposed cooperative HO scheme shows throughput gains of 12% to 27% and 17% on average, when compared to the always best connected and HO skipping only schemes at user velocities ranging from 80 km/h to 160 km/h, respectively.
Abstract
The radiation pattern of an antenna array depends on the excitation weights and the geometry of the array. Due to mobility, some vehicular antenna elements might be subjected to full or partial blockages from a plethora of particles like dirt, salt, ice, and water droplets. These particles cause absorption and scattering to the signal incident on the array, and as a result, change the array geometry. This distorts the radiation pattern of the array, mostly with an increase in the sidelobe level and a decrease in gain. In this paper, we propose a blockage detection technique for millimeter wave vehicular antenna arrays that jointly estimates the locations of the blocked antennas and the attenuation and phase-shifts that result from the suspended particles. The proposed technique does not require the antenna array to be physically removed from the vehicle and permits real-time array diagnosis. Numerical results show that the proposed technique provides satisfactory results in terms of block detection with low detection time provided that the number of blockages is small compared to the array size.
Abstract
Conventional cloud radio access networks assume single cloud processing and treat inter-cloud interference as background noise. This paper considers the downlink of a multi-cloud radio access network (CRAN) where each cloud is connected to several base-stations (BS) through limited-capacity wireline backhaul links. The set of BSs connected to each cloud, called a cluster, serves a set of pre-known mobile users (MUs). The performance of the system therefore becomes a function of both inter-cloud and intra-cloud interference, as well as the compression schemes of the limited-capacity backhaul links. The paper assumes an independent compression scheme and imperfect channel state information (CSI), where the CSI errors belong to an ellipsoidal bounded region. The problem of interest becomes the one of minimizing the network total transmit power subject to BS power and quality of service constraints, as well as backhaul capacity and CSI error constraints. The paper suggests solving the problem using the alternating direction method of multipliers (ADMM). One of the highlights of the paper is that the proposed ADMM-based algorithm can be implemented in a distributed fashion across the multi-cloud network by allowing a limited amount of information exchange between the coupled clouds. Simulation results show that the proposed distributed algorithm provides a similar performance to the centralized algorithm in a reasonable number of iterations.
Abstract This paper considers the uplink of a single-cell large-scale multiple-input multiple-output (MIMO) system in which m mono-antenna users communicate with a base station (BS) outfitted with n antennas. We assume that the number of antennas at the BS and the number of users take large values, as envisioned by large-scale MIMO systems. This allows for high spectral efficiency gains but obviously comes at the cost of higher complexity, a fact that becomes all the more critical as the number of antennas grows large. One way to address this issue is to choose a subset of the available n antennas. The subset must be carefully chosen to achieve the best performance. However, finding the optimal subset of antennas is usually a difficult task, requiring one to solve a high-dimensional combinatorial optimization problem. In this paper, we approach this problem in two ways. The first one consists in solving a convex relaxation of the problem using standard convex optimization tools. The second technique solves the problem using a greedy approach. The main advantage of the greedy approach lies in its wider scope, in that, unlike the first approach, it can be applied irrespective of the considered performance criterion. As an outcome of this feature, we show that the greedy approach can be applied even when only the channel statistics are available at the BS, which provides a blind way to perform antenna selection.
Abstract In this paper, we address the problem of robust adaptive beamforming of signals received by a linear array. The challenge associated with the beamforming problem is twofold. Firstly, the process requires the inversion of the usually ill-conditioned covariance matrix of the received signals. Secondly, the steering vector pertaining to the direction of arrival of the signal of interest is not known precisely. To tackle these two challenges, the standard Capon beamformer is manipulated into a form where the beamformer output is obtained as a scaled version of the inner product of two vectors. The two vectors are linearly related to the steering vector and the received signal snapshot, respectively. The linear operator, in both cases, is the square root of the covariance matrix. A regularized least-squares (RLS) approach is proposed to estimate these two vectors and to provide robustness without exploiting prior information. Simulation results show that the RLS beamformer using the proposed regularization algorithm outperforms state-of-the-art beamforming algorithms, as well as other RLS beamformers using standard regularization approaches.
Abstract The advent of smartphones and tablets over the past several years has resulted in a drastic increase of the global carbon footprint, due to the explosive growth of data traffic. Improving energy efficiency (EE) becomes, therefore, a crucial design metric in next generation wireless systems (5G). Cloud radio access network (C-RAN), a promising 5G network architecture, provides an efficient framework for improving the EE performance by means of coordinating the transmission across the network. This paper considers a C-RAN system formed by several clusters of remote radio heads (RRHs), each serving a predetermined set of mobile users (MUs), and assumes imperfect channel state information (CSI). The network performance therefore becomes a function of the intra-cluster and inter-cluster interference, as well as the channel estimation error. The paper optimizes the transmit power of each RRH in order to maximize the network global EE subject to MU service rate requirements and RRH maximum power constraints. The paper proposes solving the optimization problem using a heuristic algorithm based on techniques from optimization theory via a two-stage iterative solution. Simulation results show that the proposed power allocation algorithm provides an appreciable performance improvement as compared to conventional systems with a maximum power transmission strategy. They further highlight the convergence of the proposed algorithm for different network scenarios.
Abstract In computer storage, RAID 6 is a level of RAID that can tolerate two failed drives. When RAID-6 is implemented by Reed-Solomon (RS) codes, the penalty of the writing performance is on the field multiplications in the second parity. In this paper, we present a configuration of the factors of the second-parity formula, such that the arithmetic complexity can reach the optimal complexity bound when the code length approaches infinity. In the proposed approach, the intermediate data used for the first parity is also utilized to calculate the second parity. To the best of our knowledge, this is the first approach supporting the RAID-6 RS codes to approach the optimal arithmetic complexity.
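For readers unfamiliar with the setting, the two RAID-6 parities mentioned above are conventionally P = d_0 xor d_1 xor ... (plain XOR) and Q = sum_i g^i * d_i over GF(2^8). The sketch below uses the common generator g = 2 and primitive polynomial 0x11D; these are the usual Linux-kernel-style conventions, assumed here for illustration, and the paper's optimized factorization of the Q computation is not reproduced.

```python
def gf_mul(a, b):
    """Multiply in GF(2^8) with primitive polynomial x^8+x^4+x^3+x^2+1."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1D  # reduce modulo the primitive polynomial
        b >>= 1
    return p

def raid6_parities(data):
    """P is the XOR parity; Q weights drive i by g^i with g = 2 in GF(256)."""
    p, q, g = 0, 0, 1
    for d in data:
        p ^= d
        q ^= gf_mul(g, d)
        g = gf_mul(g, 2)
    return p, q

data = [0x12, 0x34, 0x56]          # one byte per data drive
p, q = raid6_parities(data)
# A single lost drive (say drive 1) is recovered from P alone:
print(hex(p ^ data[0] ^ data[2]))  # -> 0x34
```

Recovering from two lost data drives requires solving a small linear system over GF(256) using both P and Q, which is where the cost of the field multiplications, and hence the paper's optimization, comes in.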
Abstract Millimeter wave (mmWave) vehicular communication systems have the potential to improve traffic efficiency and safety. Lack of secure communication links, however, may lead to a formidable set of abuses and attacks. To secure communication links, a physical layer precoding technique for mmWave vehicular communication systems is proposed in this paper. The proposed technique exploits the large dimensional antenna arrays available at mmWave systems to produce direction-dependent transmission. This results in coherent transmission to the legitimate receiver and artificial noise that jams eavesdroppers with sensitive receivers. Theoretical and numerical results demonstrate the validity and effectiveness of the proposed technique and show that the proposed technique provides high secrecy throughput when compared to conventional array and switched array transmission techniques.
Abstract This paper proposes a new approach to find the regularization parameter for linear least-squares discrete ill-posed problems. In the proposed approach, an artificial perturbation matrix with a bounded norm is forced into the discrete ill-posed model matrix. This perturbation is introduced to enhance the singular-value (SV) structure of the matrix and hence to provide a better solution. The proposed approach is derived to select the regularization parameter in a way that minimizes the mean-squared error (MSE) of the estimator. Numerical results demonstrate that the proposed approach outperforms a set of benchmark methods in most cases when applied to different scenarios of discrete ill-posed problems. In addition, the proposed approach enjoys the lowest run-time and offers the highest level of robustness amongst all the tested methods.
Abstract In this paper, we derive a closed-form expression for the inverse moments of one-sided correlated random Gram matrices. Such a question is mainly motivated by applications in signal processing and wireless communications for which evaluating this quantity is a question of major interest. This is for instance the case of the best linear unbiased estimator, in which the average estimation error corresponds to the first inverse moment of a random Gram matrix.
Abstract Opportunistic user selection is a simple technique that exploits the spatial diversity in multiuser relay-aided networks. Nonetheless, channel state information (CSI) from all users (and cooperating relays) is generally required at a central node in order to make selection decisions. Practically, CSI acquisition generates a great deal of feedback overhead that could result in significant transmission delays. In addition to this, the presence of a full-duplex cooperating relay corrupts the fed back CSI by additive noise and the relay's loop (or self) interference. This could lead to transmission outages if user selection is based on inaccurate feedback information. In this paper, we propose an opportunistic full-duplex feedback algorithm that tackles the above challenges. We cast the problem of joint user signal-to-noise ratio (SNR) and relay loop interference estimation at the base-station as a block sparse signal recovery problem in compressive sensing (CS). Using existing CS block recovery algorithms, the identity of the strong users is obtained and their corresponding SNRs are estimated. Numerical results show that the proposed technique drastically reduces the feedback overhead and achieves a rate close to that obtained by techniques that require dedicated error-free feedback from all users. Numerical results also show that there is a trade-off between the feedback interference and load, and for short coherence intervals, full-duplex feedback achieves higher throughput when compared to interference-free (half-duplex) feedback.
Abstract This paper addresses the design of the Adaptive Subspace Matched Filter (ASMF) detector in the presence of compound-Gaussian clutter and a mismatch in the steering vector. In particular, we consider the case wherein the ASMF uses the regularized Tyler estimator (RTE) to estimate the clutter covariance matrix. Under this setting, a major question that needs to be addressed concerns the setting of the threshold and the regularization parameter. To answer this question, we consider the regime in which the number of observations used to estimate the RTE and their dimensions grow large together. Recent results from random matrix theory are then used to approximate the false alarm and detection probabilities by deterministic quantities. The latter are optimized so as to maximize an upper bound on the asymptotic detection probability while keeping the asymptotic false alarm probability at a fixed rate.
Abstract
One-to-many communications are expected to be among the killer applications for the currently discussed 5G standard. The coding mechanisms used have a direct impact on broadcasting standard quality, as coding is involved at several levels of the stack, and more specifically at the application layer, where rateless, LDPC, and Reed-Solomon codes as well as network coding schemes have been extensively studied, optimized, and standardized in the past. Beyond reusing, extending, or adapting existing application-layer packet coding mechanisms based on previous schemes and designed for the foregoing LTE or other broadcasting standards, our purpose is to investigate the use of Generalized Reed-Muller codes and the value of their locality property in their progressive decoding for broadcast/multicast communication schemes with real-time video delivery. Our results are meant to bring insight into the use of locally decodable codes in broadcasting.
Abstract Radio-frequency (RF) technology is a scalable solution for backhaul planning. However, its performance is limited in terms of data rate and latency. Free-space optical (FSO) backhaul, on the other hand, offers a higher data rate but is sensitive to weather conditions. To combine the advantages of RF and FSO backhauls, this paper proposes a cost-efficient backhaul network using the hybrid RF/FSO technology. To ensure a resilient backhaul, the paper imposes a given degree of redundancy by connecting each node through K link-disjoint paths so as to cope with potential link failures. Hence, the network planning problem considered in this paper is that of minimizing the total deployment cost by choosing the appropriate link type, i.e., either hybrid RF/FSO or optical fiber (OF), between each pair of base-stations while guaranteeing K link-disjoint connections, a data rate target, and a reliability threshold. The paper solves the problem using graph theory techniques. It reformulates the problem as a maximum weight clique problem in the planning graph, under a specified realistic assumption about the costs of OF and hybrid RF/FSO links. Simulation results show the cost of the different planning solutions and suggest that the proposed heuristic solution has a close-to-optimal performance for a significant gain in computation complexity.
Abstract Optical wireless communications (OWC) is a potential solution for coping with the mismatch between users' growing demand for higher data rates and the capabilities of wireless networks. In this paper, a multi-user OWC scenario is studied from an information-theoretic perspective. The studied network consists of two users communicating simultaneously with one access point using OWC, thus establishing an optical uplink channel. The capacity of this network is an important metric, reflecting the highest communication rates that can be achieved over this channel. Capacity outer and inner bounds are derived and are shown to be fairly tight in the high signal-to-noise ratio regime.
Abstract Cellular operators are continuously densifying their networks to cope with the ever-increasing capacity demand. Furthermore, an extreme densification phase for cellular networks is foreseen to fulfill the ambitious fifth generation (5G) performance requirements. Network densification improves spectrum utilization and network capacity by shrinking base stations' (BSs) footprints and reusing the same spectrum more frequently over the spatial domain. However, network densification also increases the handover (HO) rate, which may diminish the capacity gains for mobile users due to HO delays. In highly dense 5G cellular networks, HO delays may neutralize or even negate the gains offered by network densification. In this paper, we present an analytical paradigm, based on stochastic geometry, to quantify the effect of HO delay on the average user rate in cellular networks. To this end, we propose a flexible handover scheme to reduce HO delay in highly dense cellular networks. This scheme allows skipping the HO procedure with some BSs along users' trajectories. The performance evaluation and testing of this scheme for single HO skipping shows considerable gains in many practical scenarios.
Abstract Several research efforts have been invested to develop stochastic geometry models for cellular networks with multiple antenna transmission and reception (MIMO). On one hand, there are models that target abstract outage probability and ergodic rate for simplicity. On the other hand, there are models that sacrifice simplicity to target more tangible performance metrics such as the error probability. Both types of models are completely disjoint in terms of the analytic steps used to obtain the performance measures, which makes it challenging to conduct studies that account for different performance metrics. This paper unifies both techniques and proposes a unified stochastic-geometry-based mathematical paradigm to account for error probability, outage probability, and ergodic rate in MIMO cellular networks. The proposed model is also unified in terms of the antenna configurations and leads to simpler error probability analysis compared to existing state-of-the-art models. The core part of the analysis is based on abstracting unnecessary information conveyed within the interfering signals by assuming Gaussian signaling. The accuracy of the proposed framework is verified against state-of-the-art models as well as system-level simulations. Via this unified study, we provide insights on network design by reflecting the effect of system parameters on different performance metrics.
Abstract In this work, we demonstrate how the theory of majorization and Schur-convexity can be used to assess the impact of input spread on the mean square error (MSE) performance of adaptive filters. First, we show that the concept of majorization can be utilized to measure the spread in input regressors and subsequently order the input regressors according to their spread. Second, we prove that the MSE of the least mean squares (LMS) and normalized LMS (NLMS) algorithms is Schur-convex; that is, the MSE of the LMS and NLMS algorithms preserves the majorization order of the inputs, which provides an analytical justification of why, and by how much, the MSE performance of the LMS and NLMS algorithms deteriorates as the spread in the input increases.
Abstract In this paper, we consider a bistatic multiple-input multiple-output (MIMO) radar. We propose a reduced-complexity algorithm to estimate the direction-of-arrival (DOA) and direction-of-departure (DOD) of a moving target. We show that the parameter estimation can be expressed in terms of one-dimensional fast Fourier transforms, which drastically reduces the complexity of the optimization algorithm. The performance of the proposed algorithm is compared with the two-dimensional multiple signal classification (2D-MUSIC) and reduced-dimension MUSIC (RD-MUSIC) algorithms. Simulations show that our proposed algorithm has better estimation performance and lower computational complexity than the 2D-MUSIC and RD-MUSIC algorithms. Moreover, simulation results also show that the proposed algorithm achieves the Cramer-Rao lower bound.
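The FFT idea behind such estimators can be illustrated in one dimension: for a half-wavelength uniform linear array, a plane wave from angle θ produces a spatial frequency f = 0.5·sin(θ), so the peak of a zero-padded 1-D FFT yields the angle directly. The sketch below (single source, noiseless snapshot, hypothetical array size) shows only this underlying principle, not the paper's joint DOA/DOD algorithm:

```python
import numpy as np

def fft_doa(snapshot, nfft=4096):
    """Estimate the DOA (degrees) of a single plane wave on a half-wavelength ULA."""
    spectrum = np.abs(np.fft.fft(snapshot, nfft))
    f_hat = np.argmax(spectrum) / nfft          # spatial frequency in [0, 1)
    if f_hat > 0.5:                             # map to [-0.5, 0.5)
        f_hat -= 1.0
    return np.degrees(np.arcsin(2.0 * f_hat))   # invert f = 0.5 * sin(theta)

n_ant = 64                                      # assumed number of array elements
theta_true = 20.0                               # true angle in degrees
f = 0.5 * np.sin(np.radians(theta_true))
snapshot = np.exp(1j * 2 * np.pi * f * np.arange(n_ant))
theta_hat = fft_doa(snapshot)
```

Zero-padding the FFT refines the grid; the resulting complexity is O(N log N) per dimension, which is the source of the savings over a 2-D MUSIC search.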
Abstract
Orthogonal frequency division multiple access (OFDMA) systems suffer from several impairments, and communication system engineers use powerful signal processing tools to combat these impairments and to keep up with the capacity/rate demands. One of these impairments is the high peak-to-average power ratio (PAPR), and clipping is the simplest peak reduction scheme. In general, however, when multiple users are subjected to clipping, frequency-domain clipping distortions spread over the spectrum of all users. This results in compromised performance, and hence clipping distortions need to be mitigated at the receiver. Mitigating these distortions in the multiuser case is not simple and requires complex clipping mitigation procedures at the receiver. However, it was observed that interleaved OFDMA presents a special structure that results in only self-inflicted clipping distortions (i.e., the distortions of a particular user do not interfere with other users). In this work, we prove analytically that distortions do not spread over multiple users (while utilizing interleaved carrier assignment in OFDMA) and construct a compressed sensing system that utilizes the sparsity of the clipping distortions to recover them for each user. We provide numerical results that validate our analysis and show promising performance for the proposed clipping recovery scheme.
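The sparse-recovery step can be sketched generically. Below, a toy orthogonal matching pursuit (a standard CS solver, standing in for whichever recovery algorithm the scheme uses) reconstructs a k-sparse distortion vector from a small number of projections; the Gaussian matrix A is a hypothetical stand-in for the measurement operator induced by each user's reserved tones:

```python
import numpy as np

rng = np.random.default_rng(0)

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = A @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = x_s
    return x_hat

n, m, k = 256, 64, 5                  # signal length, measurements, sparsity (toy values)
A = rng.standard_normal((m, n)) / np.sqrt(m)
d = np.zeros(n)                        # sparse clipping-distortion vector
d[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ d
d_hat = omp(A, y, k)
```

The point of the paper's structural result is that each user can run such a recovery on its own distortions alone, since interleaved carrier assignment keeps the distortions self-inflicted.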
Abstract This paper addresses the problem of reducing the delivery time of data messages to cellular users using instantly decodable network coding (IDNC) with physical-layer rate awareness. While most of the existing literature on IDNC does not consider any physical-layer complications, this paper proposes a cross-layer scheme that incorporates the different channel rates of the various users in the decision process of both the transmitted message combinations and the rates with which they are transmitted. The completion time minimization problem in such a scenario is first shown to be intractable. The problem is, thus, approximated by reducing, at each transmission, the increase of an anticipated version of the completion time. The paper solves the problem by formulating it as a maximum weight clique problem over a newly designed rate-aware IDNC (RA-IDNC) graph. Further, the paper provides a multi-layer solution to improve the completion time approximation. Simulation results suggest that the cross-layer design largely outperforms the uncoded transmission strategies and the classical IDNC scheme.
Abstract Next-generation cellular networks are expected to be assisted by femtocaches (FCs), which collectively store the most popular files for the clients. Given any arbitrary non-fragmented placement of such files, a strict no-latency constraint, and clients' prior knowledge, new file download requests could be efficiently handled by both the FCs and the macrocell base station (MBS) using opportunistic network coding (ONC). In this paper, we aim to find the best allocation of coded file downloads to the FCs so as to minimize the MBS involvement in this download process. We first formulate this optimization problem over an ONC graph, and show that it is NP-hard. We then propose a greedy approach that maximizes the number of files downloaded by the FCs, with the goal to reduce the download share of the MBS. This allocation is performed using a dual conflict ONC graph to avoid conflicts among the FC downloads. Simulations show that our proposed scheme almost achieves the optimal performance and significantly saves on the MBS bandwidth.
Abstract We present two efficient algorithms for distributed estimation of channels in massive MIMO systems. The two cases of 1) generic and 2) sparse channels are considered. The algorithms estimate the impulse response of each channel observed by the antennas at the receiver (base station) in a coordinated manner by sharing minimal information among neighboring antennas. Simulations demonstrate the superior performance of the proposed methods as compared to other methods.
Abstract
In the context of resource allocation in cloud-radio access networks, recent studies assume either signal-level or scheduling-level coordination. This paper, instead, considers a hybrid level of coordination for the scheduling problem in the downlink of a multi-cloud radio-access network, so as to benefit from both scheduling policies. Consider a multi-cloud radio access network, where each cloud is connected to several base-stations (BSs) via high-capacity links, which therefore allow joint signal processing between them. Across the multiple clouds, however, only scheduling-level coordination is permitted, as it requires a lower level of backhaul communication. The frame structure of every BS is composed of various time/frequency blocks, called power-zones (PZs), each kept at a fixed power level. The paper addresses the problem of maximizing a network-wide utility by associating users to clouds and scheduling them to the PZs, under the practical constraints that each user is scheduled to at most a single cloud, but possibly to many BSs within the cloud, and can be served by one or more distinct PZs within the BSs' frames. The paper solves the problem using graph theory techniques by constructing the conflict graph. The scheduling problem is then shown to be equivalent to a maximum-weight independent set problem in the constructed graph, in which each vertex symbolizes an association of cloud, user, BS, and PZ, with a weight representing the utility of that association. Simulation results suggest that the proposed hybrid scheduling strategy provides appreciable gain as compared to scheduling-level coordinated networks, with a negligible degradation relative to signal-level coordination.
Abstract Millimeter wave (mmWave) communication is one solution to provide more spectrum than is available at lower carrier frequencies. To provide a sufficient link budget, mmWave systems will use beamforming with large antenna arrays at both the transmitter and receiver. Training these large arrays using conventional approaches taken at lower carrier frequencies, however, results in high overhead. In this paper, we propose a beam training algorithm that efficiently designs the beamforming vectors with low training overhead. Exploiting mmWave channel reciprocity, the proposed algorithm relaxes the need for an explicit feedback channel and opportunistically terminates the training process when a desired quality of service is achieved. To construct the training beamforming vectors, a new multi-resolution codebook is developed for hybrid analog/digital architectures. Simulation results show that the proposed algorithm achieves a rate comparable to that obtained by exhaustive search solutions while requiring lower training overhead than prior work.
Abstract Recent studies on cloud-radio access networks (CRANs) assume the availability of a single processor (cloud) capable of managing the entire network performance; inter-cloud interference is treated as background noise. This paper considers the more practical scenario of the downlink of a CRAN formed by multiple clouds, where each cloud is connected to a cluster of multiple-antenna base stations (BSs) via high-capacity wireline backhaul links. The network is composed of several disjoint BS clusters, each serving a pre-known set of single-antenna users. To account for both inter-cloud and intra-cloud interference, the paper considers the problem of minimizing the total network power consumption subject to quality-of-service constraints, by jointly determining the set of active BSs connected to each cloud and the beamforming vectors of every user across the network. The paper solves the problem using Lagrangian duality theory through a dual decomposition approach, which decouples the problem into multiple independent subproblems, the solutions of which depend on the dual optimization problem. The solution then proceeds by updating the dual variables and the active set of BSs at each cloud iteratively. The proposed approach leads to a distributed implementation across the multiple clouds through a reasonable exchange of information between adjacent clouds. The paper further proposes a centralized solution to the problem. Simulation results suggest that the proposed algorithms significantly outperform the conventional per-cloud update solution, especially at high signal-to-interference-plus-noise ratio (SINR) targets.
Abstract Since its introduction, compressed sampling (CS) has found use in various applications ranging from image restoration, radar, and sensing to channel and system identification. Recently, in the radio frequency (RF) power amplifier (PA) design and linearization communities, there have been many attempts to utilize the CS technique to enable the development of efficient wireless transmitters. This paper provides a brief review of the use of CS in PA and transmitter linearization. Mainly two approaches are discussed: the use of CS to recover amplitude-distorted signals, and the use of CS to reduce the complexity of digital predistorters. Experimental results obtained using an envelope tracking (ET) PA prototype show the potential and value of the CS technique in developing efficient predistorters at a low computational cost.
Abstract The cloud-radio access network (CRAN) is expected to be the core network architecture for next-generation mobile radio systems. In this paper, we consider the downlink of a CRAN formed of one central processor (the cloud) and several base stations (BSs), where each BS is connected to the cloud via either a wireless or a capacity-limited wireline backhaul link. The paper addresses the joint design of the hybrid backhaul links (i.e., designing the wireline and wireless backhaul connections from the cloud to the BSs) and the access links (i.e., determining the sparse beamforming solution from the BSs to the users). The paper formulates the hybrid backhaul and access link design problem by minimizing the total network power consumption, and solves it using a two-stage heuristic algorithm. In the first stage, the sparse beamforming solution is found using a weighted mixed ℓ1/ℓ2-norm minimization approach, and the correlation matrix of the quantization noise of the wireline backhaul links is computed using classical rate-distortion theory. In the second stage, the transmit powers of the wireless backhaul links are found by solving a power minimization problem subject to quality-of-service constraints, based on the principle of conservation of rate, utilizing the rates found in the first stage. Simulation results suggest that the performance of the proposed algorithm approaches the global optimum solution, especially at high signal-to-interference-plus-noise ratio (SINR).
Abstract Relay selection is a simple technique that achieves spatial diversity in cooperative relay networks. Nonetheless, relay selection algorithms generally require error-free channel state information (CSI) from all cooperating relays. Practically, CSI acquisition generates a great deal of feedback overhead that could result in significant transmission delays. In addition to this, the fed-back channel information is usually corrupted by additive noise. This could lead to transmission outages if the central node selects the set of cooperating relays based on inaccurate feedback information. In this paper, we propose a relay selection algorithm that tackles the above challenges. Instead of allocating each relay a dedicated channel for feedback, all relays share a pool of feedback channels. Following that, each relay feeds back its identity only if its effective channel (source-relay-destination) exceeds a threshold. After deriving closed-form expressions for the feedback load and the achievable rate, we show that the proposed algorithm drastically reduces the feedback overhead and achieves a rate close to that obtained by selection algorithms with dedicated error-free feedback from all relays.
Abstract We propose a distributed user selection strategy in a network MIMO setting with M base stations serving K users. Each base station is equipped with L antennas, where LM ≪ K. The conventional selection strategy is based on a well-known technique called semi-orthogonal user selection when zero-forcing beamforming (ZFBF) is adopted. Such a technique, however, requires perfect channel state information at the transmitter (CSIT), which might not be available or may require large feedback overhead. This paper proposes an alternative distributed user selection technique in which each user sets a timer that is inversely proportional to its channel quality indicator (CQI), as a means to reduce the feedback overhead. The proposed strategy allows only the user with the highest CQI to respond with feedback. Such a technique, however, remains collision-free only if the transmission time is shorter than the difference between the strongest user's timer and the second strongest user's timer. To deal with longer transmission times, the paper proposes another feedback strategy based on the theory of compressive sensing, where collision is allowed and all users encode their feedback information and send it back to the base-stations simultaneously. The paper shows that the problem can be formulated as a block-sparse recovery problem that is agnostic to the transmission time, which makes it a good alternative to the timer approach when collisions are dominant.
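The timer mechanism itself is easy to simulate. In this sketch (hypothetical CQI distribution, unit-free timer scale, and transmission time), each user starts a timer equal to 1/CQI, so the best user's timer expires first; a collision occurs only if the runner-up's timer expires within one feedback transmission time:

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, t_tx = 50, 0.02           # number of users, feedback transmission time (assumed)

cqi = rng.rayleigh(size=n_users)   # each user's channel quality indicator (toy model)
timers = 1.0 / cqi                 # timer inversely proportional to CQI

order = np.argsort(timers)
winner = int(order[0])             # first timer to expire feeds back
# Collision: the second timer expires before the winner finishes transmitting.
collision = bool(timers[order[1]] - timers[order[0]] < t_tx)

best_user = int(np.argmax(cqi))    # winner coincides with the strongest user
```

Because the timer is a strictly decreasing function of CQI, the first feedback always comes from the strongest user; the collision flag captures exactly the failure mode that motivates the CS-based alternative.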
Abstract
This paper considers the joint maximum likelihood (ML) channel estimation and data detection problem for massive SIMO (single input multiple output) wireless systems. We propose efficient algorithms achieving the exact ML non-coherent data detection for both constant-modulus and nonconstant-modulus constellations. Despite the large number of unknown channel coefficients in massive SIMO systems, we show that the expected computational complexity is linear in the number of receive antennas and polynomial in the channel coherence time. To the best of our knowledge, our algorithms are the first efficient algorithms to achieve the exact joint ML channel estimation and data detection performance for massive SIMO systems with general constellations. Simulation results show that our algorithms achieve considerable performance gains at a low computational complexity.
Abstract
The desirable characteristics of ultra-wideband (UWB) technology are challenged by a formidable sampling frequency, performance degradation in the presence of multi-user interference, and receiver complexity due to the channel estimation process. In this paper, a low-rate sampling technique is used to implement M-ary multiple access UWB communications in both the detection and channel estimation stages. A novel approach is used for multiple-access-interference (MAI) cancellation for the purpose of channel estimation. Results show reasonable performance of the proposed receiver for different numbers of users operating many times below the Nyquist rate.
Abstract We consider frequency-selective channel estimation in the uplink of massive MIMO-OFDM systems, where our major concern is complexity. A low-complexity distributed LMMSE algorithm is proposed that attains near-optimal channel impulse response (CIR) estimates from noisy observations at the receive antenna array. In the proposed method, every antenna estimates the CIRs of its neighborhood, followed by recursive sharing of estimates with immediate neighbors. At each step, every antenna calculates the weighted average of the shared estimates, which converges to a near-optimal LMMSE solution. The simulation results validate the near-optimal performance of the proposed algorithm in terms of mean square error (MSE).
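The neighbor-sharing step can be illustrated with plain consensus averaging, a simplified stand-in for the paper's weighted LMMSE recursion (toy array size, a common CIR across antennas, and a ring topology are all assumptions of this sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
n_ant, cir_len = 16, 8                  # toy array size and CIR length
h = rng.standard_normal(cir_len)        # common CIR seen by all antennas (idealized)
est = h + 0.3 * rng.standard_normal((n_ant, cir_len))  # per-antenna noisy estimates
init_mean = est.mean(axis=0)            # centralized equal-weight average

# Ring topology: each antenna repeatedly averages with its two immediate neighbors.
for _ in range(500):
    est = (np.roll(est, 1, axis=0) + est + np.roll(est, -1, axis=0)) / 3.0
```

The averaging matrix is doubly stochastic, so every antenna converges to the centralized average of the initial estimates while only ever exchanging data with its immediate neighbors.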
Abstract Recovering information on subsurface geological features, such as flow channels, holds significant importance for optimizing the productivity of oil reservoirs. The flow channels exhibit high permeability in contrast to the low-permeability rock formations in their surroundings, enabling the formulation of a sparse field recovery problem. The Ensemble Kalman filter (EnKF) is a widely used technique for the estimation of subsurface parameters, such as permeability. However, the EnKF often fails to recover and preserve the channel structures during the estimation process. Compressed Sensing (CS) has been shown to significantly improve the reconstruction quality when dealing with such problems. We propose a new scheme based on CS principles to enhance the reconstruction of subsurface geological features by transforming the EnKF estimation process to a sparse domain representing diverse geological structures. Numerical experiments suggest that the proposed scheme provides an efficient mechanism to incorporate and preserve structural information in the estimation process and results in significant enhancement in the recovery of flow channel structures.
Abstract This paper addresses the coordinated scheduling problem in cloud-enabled networks. Consider the downlink of a cloud-radio access network (CRAN), where the cloud is only responsible for the scheduling policy and the synchronization of the transmit frames across the connected base-stations (BSs). The transmitted frame of every BS consists of several time/frequency blocks, called power-zones (PZs), maintained at fixed transmit power. The paper considers the problem of scheduling users to PZs and BSs in a coordinated fashion across the network, by maximizing a network-wide utility under the practical constraint that each user cannot be served by more than one base-station, but can be served by one or more power-zones within each base-station frame. The paper solves the problem using a graph-theoretical approach by introducing the scheduling graph, in which each vertex represents an association of users, PZs, and BSs. The problem is formulated as a maximum weight clique problem, in which the weight of each vertex is the benefit of the association represented by that vertex. The paper further presents heuristic algorithms with low computational complexity. Simulation results show the performance of the proposed algorithms and suggest that the heuristics perform near-optimally in low shadowing environments.
Abstract In this work, we develop an analytical paradigm to analyze the average symbol error probability (ASEP) performance of uplink traffic in a multi-tier cellular network. The analysis is based on the recently developed Equivalent-in-Distribution approach, which utilizes stochastic geometric tools to account for the network geometry in the performance characterization. Different from other stochastic geometry models adopted in the literature, the developed analysis accounts for important communication system parameters and goes beyond signal-to-interference-plus-noise ratio characterization. That is, the presented model accounts for the modulation scheme, constellation type, and signal recovery techniques to model the ASEP. To this end, we derive single-integral expressions for the ASEP for different modulation schemes due to aggregate network interference. Finally, all theoretical findings of the paper are verified via Monte Carlo simulations.
Abstract This paper addresses the problem of narrowband interference (NBI) in SC-FDMA systems by using tools from compressed sensing and stochastic geometry. The proposed NBI cancellation scheme exploits the frequency-domain sparsity of the unknown signal and adopts a Bayesian sparse recovery procedure. This is done by keeping a few randomly chosen sub-carriers data-free to sense the NBI signal at the receiver. As Bayesian recovery requires knowledge of some NBI parameters (i.e., mean, variance, and sparsity rate), we use tools from stochastic geometry to obtain analytical expressions for the required parameters. Our simulation results validate the analysis and depict the suitability of the proposed recovery method for NBI mitigation.
Abstract Recent advancements in graph-based analysis and solutions of instantly decodable network coding (IDNC) trigger the interest to extend them to more complicated opportunistic network coding (ONC) scenarios, with limited increase in complexity. In this paper, we design a simple IDNC-like graph model for a specific subclass of ONC, by introducing a more generalized definition of its vertices and the notion of vertex aggregation in order to represent the storage of non-instantly-decodable packets in ONC. Based on this representation, we determine the set of pairwise vertex adjacency conditions that can populate this graph with edges so as to guarantee decodability or aggregation for the vertices of each clique in this graph. We then develop the algorithmic procedures that can be applied on the designed graph model to optimize any performance metric for this ONC subclass. A case study on reducing the completion time shows that the proposed framework improves on the performance of IDNC and gets very close to the optimal performance.
Abstract
This paper considers the problem of reducing the broadcast delay of wireless networks using instantly decodable network coding (IDNC) based device-to-device (D2D) communications. In D2D-enabled networks, devices help hasten the recovery of the lost packets of devices in their transmission range by sending network-coded packets. To solve the problem, the different events occurring at each device are identified so as to derive an expression for the probability distribution of the decoding delay. The joint optimization problem over the set of transmitting devices and the packet combinations of each is then formulated. Due to the high complexity of finding the optimal solution, this paper focuses on cooperation without interference between the transmitting users. The optimal solution, in such an interference-less scenario, is expressed using a graph theory approach by introducing the cooperation graph. Extensive simulations compare the decoding delay experienced in the point-to-multipoint (PMP), the fully connected D2D (FC-D2D), and the more practical partially connected D2D (PC-D2D) configurations, and suggest that the PC-D2D outperforms the FC-D2D in all situations and provides an enormous gain for poorly connected networks.
Abstract
The deluge of data traffic in today's networks poses a cost burden on backhaul network design. Developing cost-efficient backhaul solutions thus becomes an interesting, yet challenging, problem. Traditional technologies for backhaul networks include either radio-frequency (RF) backhauls or optical fibers (OF). While RF is a cost-effective solution compared to OF, it supports lower data rate requirements. Another promising backhaul solution that may combine both a high data rate and a relatively low cost is free-space optics (FSO). FSO, however, is sensitive to weather conditions (e.g., rain, fog, line-of-sight, etc.). A more reliable alternative is, therefore, to combine RF and FSO solutions through a hybrid structure called hybrid RF/FSO. Consider a backhaul network where the base-stations (BSs) can be connected to each other either via OF or hybrid RF/FSO backhaul links. The paper addresses the problem of minimizing the cost of backhaul planning under connectivity and data rate constraints, so as to choose the appropriate cost-effective backhaul type between BSs (i.e., either OF or hybrid RF/FSO). The paper solves the problem using graph theory techniques by introducing the corresponding planning graph. It shows that, under a specified realistic assumption about the costs of OF and hybrid RF/FSO links, the problem is equivalent to a maximum weight clique problem, which can be solved with moderate complexity. Simulation results show that our proposed solution achieves close-to-optimal performance, especially for practical prices of the hybrid RF/FSO.
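The maximum weight clique reformulation can be made concrete on a toy instance. Here each vertex is a hypothetical candidate link assignment with an invented integer weight, edges join mutually compatible assignments, and a brute-force search (fine for illustration; real planning graphs need a dedicated clique solver) returns the heaviest consistent plan:

```python
from itertools import combinations

# Toy planning graph (hypothetical): vertices are candidate link assignments,
# weights reflect their benefit, and edges join assignments that can coexist.
weights = {"a": 5, "b": 4, "c": 3, "d": 6}
edges = {frozenset(e) for e in [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")]}

def max_weight_clique(weights, edges):
    """Brute-force maximum weight clique: exponential, for illustration only."""
    best, best_w = (), 0
    nodes = list(weights)
    for r in range(1, len(nodes) + 1):
        for cand in combinations(nodes, r):
            # A clique requires every pair of chosen vertices to be adjacent.
            if all(frozenset(p) in edges for p in combinations(cand, 2)):
                w = sum(weights[v] for v in cand)
                if w > best_w:
                    best, best_w = cand, w
    return best, best_w

clique, total = max_weight_clique(weights, edges)
```

In this instance {a, b, c} is fully connected with total weight 12, beating the heavier but isolated choice involving d; the paper's result is that backhaul planning reduces to exactly this kind of search on its planning graph.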
Abstract In this paper, as an extension to [1], we propose a prioritized multi-layer network coding scheme for collaborative packet recovery in hybrid (interweave and underlay) cellular cognitive radio networks. This scheme allows uncoordinated collaboration between the collocated primary and cognitive radio base-stations in order to minimize their own as well as each other's packet recovery overheads, thereby improving their throughput. The proposed scheme ensures that each network's performance is not degraded by its help to the other network. Moreover, it guarantees that the primary network's interference threshold is not violated in the same and adjacent cells. Yet, the scheme allows the reduction of the recovery overhead in the collocated primary and cognitive radio networks. The reduction in the cognitive radio network is further amplified due to the perfect detection of spectrum holes, which allows the cognitive radio base station to transmit at higher power without fear of violating the interference threshold of the primary network. For the secondary network, simulation results show reductions of 20% and 34% in the packet recovery overhead, compared to the non-collaborative scheme, for low and high probabilities of primary packet arrivals, respectively. For the primary network, this reduction was found to be 12%.
Abstract A robust digital modulation scheme, called differential on-off keying (DOOK), is presented in this paper which outperforms conventional on-off keying (OOK). In this scheme, a sinusoidal signal is transmitted during the first half of the bit duration, while a replica or an inverted version of that signal is transmitted during the second half for logic one or logic zero, respectively. A non-coherent receiver correlates the two halves of the received signal over half the bit duration to construct a decision variable. Bit error performance is analyzed over AWGN and Rayleigh fading channels and compared to conventional OOK.
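The correlation detector described in this abstract is simple enough to sketch. Below is a toy baseband simulation; all parameter values (half-bit length, carrier frequency, noise level) are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def dook_modulate(bits, n_half=64, f=0.1):
    """Each bit: a sinusoid in the first half; the same copy (bit=1)
    or an inverted copy (bit=0) in the second half."""
    t = np.arange(n_half)
    carrier = np.sin(2 * np.pi * f * t)
    out = []
    for b in bits:
        second = carrier if b == 1 else -carrier
        out.append(np.concatenate([carrier, second]))
    return np.concatenate(out)

def dook_detect(rx, n_half=64):
    """Non-coherent detection: correlate the two halves of each bit interval
    and decide on the sign of the correlation."""
    n_bits = len(rx) // (2 * n_half)
    bits = []
    for k in range(n_bits):
        seg = rx[k * 2 * n_half:(k + 1) * 2 * n_half]
        corr = np.dot(seg[:n_half], seg[n_half:])
        bits.append(1 if corr > 0 else 0)
    return np.array(bits)

bits = rng.integers(0, 2, 200)
tx = dook_modulate(bits)
rx = tx + 0.5 * rng.standard_normal(tx.size)  # AWGN channel
ber = np.mean(dook_detect(rx) != bits)
```

Because the decision variable is the product of the two halves, its sign is unaffected by an unknown common carrier phase, which is why no phase reference is needed at the receiver.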
Abstract Modeling aggregate network interference in cellular networks has recently gained immense attention both in academia and industry. While stochastic geometry based models have succeeded in accounting for the cellular network geometry, they mostly abstract away many important wireless communication system aspects (e.g., modulation techniques, signal recovery techniques). Recently, a novel stochastic geometry model based on the Equivalent-in-Distribution (EiD) approach succeeded in capturing the aforementioned communication system aspects and extending the analysis to averaged error performance, however at the expense of increased modeling complexity. Inspired by the EiD approach, the analysis developed in [1] takes into consideration the key system parameters while remaining simple and tractable. In this paper, we extend this framework to study the effect of different interference management techniques in downlink cellular networks. The accuracy of the proposed analysis is verified via Monte Carlo simulations.
Abstract This paper addresses the problem of linear least squares (LS) estimation of a vector x from linearly related observations. In spite of being unbiased, the original LS estimator suffers from high mean squared error, especially at low signal-to-noise ratios. The mean squared error (MSE) of the LS estimator can be improved by introducing some form of regularization based on certain constraints. We propose an improved LS (ILS) estimator that approximately minimizes the MSE without imposing any constraints. To achieve this, we allow for perturbation in the measurement matrix and utilize a bounded data uncertainty (BDU) framework to derive a simple iterative procedure for estimating the regularization parameter. Numerical results demonstrate that the proposed BDU-ILS estimator is superior to the original LS estimator, and that it converges to the best linear estimator, the linear minimum mean squared error (LMMSE) estimator, when the elements of x are statistically white.
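As a generic illustration of the underlying idea, trading a little bias for a large variance reduction at low SNR, the sketch below compares plain LS with a ridge-style regularized solution. The oracle parameter sweep is merely a stand-in for the paper's BDU-based iteration, and all problem sizes and noise levels are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 20, 50                              # unknowns, measurements (assumed)
A = rng.standard_normal((m, n))
x = rng.standard_normal(n)
y = A @ x + 2.0 * rng.standard_normal(m)   # low-SNR observations

# plain least squares
x_ls = np.linalg.lstsq(A, y, rcond=None)[0]

def regularized_ls(A, y, lam):
    """Ridge-style solution (A^T A + lam I)^{-1} A^T y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# oracle sweep over the regularization parameter (stands in for BDU iteration)
lams = np.logspace(-2, 2, 50)
errs = [np.sum((regularized_ls(A, y, l) - x) ** 2) for l in lams]
mse_ls = np.sum((x_ls - x) ** 2)
mse_reg = min(errs)
```

In this toy setting the regularized estimate has a lower squared error than plain LS; the paper's contribution is precisely a data-driven way to pick the regularization parameter without the oracle used here.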
Abstract In multiple-input multiple-output radar systems, it is usually desirable to steer the transmitted power toward a region of interest. To do this, conventional methods optimize the waveform covariance matrix, R, for the desired beampattern, which is then used to generate the actual transmitted waveforms. In this paper, we provide a low-complexity closed-form solution to design the covariance matrix for a given planar beampattern using a planar array, which is then used to derive a novel closed-form algorithm that directly designs finite-alphabet constant-envelope waveforms. The proposed algorithm exploits the two-dimensional fast Fourier transform. The performance of the proposed algorithm is compared with existing methods based on semi-definite quadratic programming, with the advantage of considerably reduced complexity.
Abstract
This paper presents a novel narrowband interference (NBI) mitigation scheme for SC-FDMA systems. The proposed scheme exploits the frequency-domain sparsity of the unknown NBI signal and adopts a low-complexity Bayesian sparse recovery procedure. In practice, however, the sparsity of the NBI is destroyed by a grid mismatch between the NBI sources and the SC-FDMA system. To address this, an accurate grid mismatch model is presented and a sparsifying transform is utilized to restore the sparsity of the unknown signal. Numerical results are presented that demonstrate the suitability of the proposed scheme for NBI mitigation.
Abstract We propose a method for the estimation of sparse frequency-selective channels in MIMO-OFDM systems. These channels are independently sparse and share a common support. The method estimates the impulse response of each channel observed by the antennas at the receiver. Estimation is performed in a coordinated manner by sharing minimal information among neighboring antennas, achieving results better than many contemporary methods. Simulations demonstrate the superior performance of the proposed method.
Abstract The issue of blind Multiple-Input Multiple-Output (MIMO) deconvolution of communication systems is addressed. Two new iterative Blind Source Separation (BSS) algorithms are presented, based on the minimization of the Multi-Modulus (MM) criterion. A pre-whitening filter is utilized to transform the problem into that of finding a unitary beamformer matrix. Applying iterative Givens and hyperbolic rotations then yields the Givens Multi-Modulus Algorithm (G-MMA) and the Hyperbolic G-MMA (HG-MMA), respectively. The proposed algorithms are compared with several BSS algorithms in terms of Signal to Interference and Noise Ratio (SINR) and Symbol Error Rate (SER), and are shown to outperform them.
Abstract
This paper considers a multi-cloud radio access network (M-CRAN), wherein each cloud serves a cluster of base-stations (BSs) which are connected to the clouds through high-capacity digital links. The network comprises several remote users, where each user can be connected to one (and only one) cloud. This paper studies the user-to-cloud assignment problem of maximizing a network-wide utility subject to practical cloud connectivity constraints. The paper solves the problem using an auction-based iterative algorithm, which can be implemented in a distributed fashion through a reasonable exchange of information between the clouds. The paper further proposes a centralized heuristic algorithm with low computational complexity. Simulation results show that the proposed algorithms provide appreciable performance improvements compared to conventional cloud-less assignment solutions.
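To convey the flavor of auction-based assignment, here is a textbook Bertsekas-style auction for the simplified case of one user per cloud slot. This is a generic sketch, not the paper's algorithm, and the utility matrix is invented:

```python
import numpy as np

def auction_assignment(U, eps=0.01):
    """Bertsekas-style auction: maximize sum of utilities U[user, obj],
    assigning each user to a distinct object. Returns the object per user.
    With integer utilities and eps < 1/n_users, the result is optimal."""
    n_users, n_obj = U.shape
    prices = np.zeros(n_obj)
    owner = -np.ones(n_obj, dtype=int)
    assigned = -np.ones(n_users, dtype=int)
    unassigned = list(range(n_users))
    while unassigned:
        i = unassigned.pop()
        values = U[i] - prices          # net value of each object to user i
        j = int(np.argmax(values))
        v1 = values[j]
        values[j] = -np.inf
        v2 = np.max(values)             # second-best value
        prices[j] += v1 - v2 + eps      # bid: raise price of best object
        if owner[j] != -1:              # evict the previous owner, if any
            assigned[owner[j]] = -1
            unassigned.append(owner[j])
        owner[j] = i
        assigned[i] = j
    return assigned

# invented utilities: 3 users x 3 cloud slots
U = np.array([[4., 1., 2.],
              [2., 0., 3.],
              [3., 2., 1.]])
a = auction_assignment(U)
```

Each user repeatedly bids the difference between its best and second-best net value, so prices rise until no user envies another's slot; the paper's distributed implementation relies on the same price-exchange idea between clouds.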
Abstract In this paper, we introduce a game-theoretic framework for studying the problem of minimizing the completion time of instantly decodable network coding (IDNC) for cooperative data exchange (CDE) in decentralized wireless networks. In this configuration, clients cooperate with each other to recover the erased packets without a central controller. Game theory is employed herein as a tool for improving the distributed solution by overcoming the need for a central controller or additional signaling in the system. We model the session as a non-cooperative potential game among self-interested players. The utility function is designed such that increasing an individual's payoff results in a collective behavior achieving both a desirable system performance in a shared network environment and the Pareto optimal solution. We further show that our distributed solution matches the centralized solution. Through extensive simulations, our approach is compared to the best performance achievable with the conventional point-to-multipoint (PMP) recovery process. Numerical results show that our formulation largely outperforms the conventional PMP scheme in most practical situations and achieves a lower delay.
Abstract Relay selection is a simple technique that achieves spatial diversity in cooperative relay networks. However, for relay selection algorithms to make a selection decision, channel state information (CSI) from all cooperating relays is usually required at a central node. This requirement poses two important challenges. Firstly, CSI acquisition generates a great deal of feedback overhead (air-time) that could result in significant transmission delays. Secondly, the fed-back channel information is usually corrupted by additive noise, which could lead to transmission outages if the central node selects the set of cooperating relays based on inaccurate feedback information. In this paper, we introduce a limited-feedback relay selection algorithm for a multicast relay network. The proposed algorithm exploits the theory of compressive sensing to first obtain the identity of the "strong" relays with limited feedback. Following that, the CSI of the selected relays is estimated using linear minimum mean square error estimation. To minimize the effect of noise on the fed-back CSI, we introduce a back-off strategy that optimally backs off on the noisy estimated CSI. For a fixed group size, we provide closed-form expressions for the scaling law of the maximum equivalent SNR for both the decode-and-forward (DF) and amplify-and-forward (AF) cases. Numerical results show that the proposed algorithm drastically reduces the feedback air-time and achieves a rate close to that obtained by selection algorithms with dedicated error-free feedback channels.
Abstract For several years, the completion time and decoding delay problems in Instantly Decodable Network Coding (IDNC) were considered separately and were thought to act completely against each other. Recently, some works have aimed to balance the effects of these two important IDNC metrics, but none of them studied further optimization of one by controlling the other. In this paper, we study the effect of controlling the decoding delay so as to reduce the completion time below its currently best known solution. We first derive the decoding-delay-dependent expressions of the users' and the overall completion times. Although using such expressions to find the optimal overall completion time is NP-hard, we use a heuristic that minimizes the probability of increasing the maximum of these decoding-delay-dependent completion time expressions after each transmission, through a layered control of the decoding delays. Simulation results show that this new algorithm achieves both a lower mean completion time and a lower mean decoding delay compared to the best known heuristic for completion time reduction. The gap in performance becomes significant in harsh erasure scenarios.
Abstract Multiple-input multiple-output (MIMO) radar operates by transmitting independent waveforms from each element of its antenna array and is widely used for surveillance purposes. In this work, we investigate the MIMO radar target localization problem using compressive sensing. Specifically, we estimate target locations in MIMO radar via group- and block-sparsity algorithms, which reduces the number of required snapshots and achieves better radar resolution. We use group orthogonal matching pursuit (GOMP) and block orthogonal matching pursuit (BOMP) to solve the problem.
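A generic block orthogonal matching pursuit sketch illustrates the block-sparse recovery step; this is not the paper's radar-specific formulation, and the dictionary size, block size, and sparsity level are all assumptions:

```python
import numpy as np

def block_omp(A, y, block_size, n_active):
    """Greedy block-sparse recovery: at each step pick the block of columns
    most correlated with the residual, then least-squares re-fit on the
    accumulated support."""
    m, n = A.shape
    n_blocks = n // block_size
    support = []
    residual = y.copy()
    for _ in range(n_active):
        scores = [np.linalg.norm(A[:, b*block_size:(b+1)*block_size].T @ residual)
                  for b in range(n_blocks)]
        b = int(np.argmax(scores))
        if b not in support:
            support.append(b)
        cols = np.concatenate([np.arange(s*block_size, (s+1)*block_size)
                               for s in support])
        coef, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
        residual = y - A[:, cols] @ coef
    x_hat = np.zeros(n)
    x_hat[cols] = coef
    return x_hat

rng = np.random.default_rng(3)
m, n, bs = 48, 64, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
x = np.zeros(n)
x[8:12] = rng.standard_normal(4)    # two active blocks out of 16
x[40:44] = rng.standard_normal(4)
y = A @ x                            # noiseless measurements
x_hat = block_omp(A, y, bs, 2)
```

Selecting whole blocks rather than individual atoms is what lets block/group-sparse methods get away with fewer snapshots than plain OMP when the targets' responses occupy contiguous dictionary columns.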
Abstract In this paper, we consider the problem of minimizing the maximum broadcast decoding delay experienced by all the receivers of generalized instantly decodable network coding (IDNC). Unlike the sum decoding delay, the maximum decoding delay, as a definition of delay for IDNC, allows a more equitable distribution of delays among the different receivers and thus a better quality of service (QoS). In order to solve this problem, we first derive expressions for the probability distributions of the maximum decoding delay increments. Given these expressions, we formulate the problem as a maximum weight clique problem in the IDNC graph. Although this problem is known to be NP-hard, we design a greedy algorithm to perform effective packet selection. Through extensive simulations, we compare the sum decoding delay and the maximum decoding delay experienced when applying the policies that minimize the sum decoding delay and our policy that reduces the maximum decoding delay. Simulation results show that our policy strikes a good balance among all the delay aspects in all situations, and even outperforms the sum decoding delay policy in minimizing the sum decoding delay when the channel conditions become harsher. They also show that our definition of delay significantly improves the number of served receivers when they are subject to strict delay constraints.
Abstract A matching pursuit method using a Bayesian approach is introduced for recovering a set of sparse signals with common support from a set of their measurements. This method performs Bayesian estimation of joint-sparse signals even when the distribution of the active elements is not known. It utilizes only the a priori statistics of the noise and the sparsity rate of the signal, which are estimated without user intervention. The method follows a greedy approach to determine the approximate MMSE estimate of the joint-sparse signals. Simulation results demonstrate the superiority of the proposed estimator.
Abstract In this paper, we propose an all-digital scheme for ultra-wideband symbol detection in which the received symbols are sampled many times below the Nyquist rate. It is shown that when the number of symbol repetitions, P, is co-prime with the symbol duration given in Nyquist samples, the receiver can sample the received data P times below the Nyquist rate without loss of fidelity. The proposed scheme is applied to channel estimation and binary pulse position modulation (BPPM) detection. Results are presented for two receivers operating at sampling rates 10 and 20 times below the Nyquist rate. The feasibility of the proposed scheme is demonstrated in different scenarios, with reasonable bit error rates obtained in most cases.
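The co-primality condition has a simple number-theoretic core: sampling every P-th Nyquist-rate instant across repeated symbols of length N visits every intra-symbol offset exactly when gcd(P, N) = 1. A minimal sketch, with illustrative values of N and P:

```python
from math import gcd

def covered_offsets(N, P):
    """Offsets (in Nyquist samples) within a symbol of length N that are hit
    when sampling every P-th Nyquist sample over N repetitions."""
    return {(k * P) % N for k in range(N)}

N = 63                            # symbol duration in Nyquist samples (assumed)
full = covered_offsets(N, 10)     # gcd(63, 10) = 1 -> every offset is covered
partial = covered_offsets(N, 21)  # gcd(63, 21) = 21 -> aliasing: offsets repeat
```

When gcd(P, N) = 1, the residues k·P mod N form a complete residue system, so after N repetitions the low-rate receiver has, in effect, observed every Nyquist-rate sample position once; when gcd(P, N) > 1, some positions are never observed.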
Abstract This paper proposes a low-sampling-rate scheme for ultra-wideband channel estimation. In the proposed scheme, P pulses are transmitted to produce P observations. These observations are exploited to produce channel impulse response estimates at a desired sampling rate, while the ADC operates at a rate that is P times lower. To avoid loss of fidelity, the inter-pulse interval, given in units of sampling periods of the desired rate, is restricted to be co-prime with P. This condition may be violated when clock drift is present and the transmitted pulse locations change. To handle this situation and to achieve good performance without using prior information, we derive an improved estimator based on the bounded data uncertainty (BDU) model, which is shown to be related to the Bayesian linear minimum mean squared error (LMMSE) estimator. The performance of the proposed sub-sampling scheme was tested in conjunction with the new estimator. It is shown that a high reduction in sampling rate can be achieved. The proposed estimator outperforms the least squares estimator in most cases, while in the high SNR regime it also outperforms the LMMSE estimator.
Abstract A high peak-to-average power ratio is one of the major drawbacks of orthogonal frequency division multiplexing (OFDM). Clipping is the simplest peak reduction scheme; however, it requires clipping mitigation at the receiver. Recently, compressed sensing has been used for clipping mitigation by exploiting the sparse nature of the clipping signal. However, clipping estimation in the multi-user scenario (i.e., OFDMA) is not straightforward, as the clipping distortions overlap in the frequency domain and one cannot distinguish between distortions from different users. In this work, a collaborative clipping removal strategy is proposed based on joint estimation of the clipping distortions from all users. Further, an effective data-aided channel estimation strategy for clipped OFDM is also outlined. Simulation results are presented to demonstrate the effectiveness of the proposed schemes.
Abstract Opportunistic schedulers rely on the feedback of the channel state information of users in order to perform user selection and downlink scheduling. This feedback increases with the number of users and can lead to inefficient use of network resources and scheduling delays. We tackle the problem of feedback design and propose a novel class of non-orthogonal codes for feeding back channel state information. Users with favorable channel conditions simultaneously transmit their channel state information via non-orthogonal beams to the base station. The proposed formulation allows the base station to identify the strong users via a simple correlation process. After deriving the minimum required code length and closed-form expressions for the feedback load and downlink capacity, we show that the proposed algorithm reduces the feedback load while matching the achievable rate of full feedback algorithms operating over a noiseless feedback channel, and that the proposed codes are superior to Gaussian codes.
Abstract The main limitation in deploying/updating Received Signal Strength (RSS) based indoor localization systems is the construction of the fingerprinted radio map, which is a tedious and time-consuming process, especially when the indoor area is large and/or dynamic. Different approaches have been proposed to reduce such deployment/update efforts, but their performance degrades when the fingerprinting load is reduced below a certain level. In this paper, we propose an indoor localization scheme that requires as little as 1% fingerprinting load. The scheme employs unsupervised manifold alignment, taking crowd-sourced RSS readings and localization requests as the source data set and the environment's plan coordinates as the destination data set. The 1% fingerprinting load is only used to perturb the local geometries in the destination data set. The proposed algorithm is shown to achieve less than 5 m mean localization error with 1% fingerprinting load and a limited number of crowd-sourced readings, whereas other learning-based localization schemes exceed a 10 m mean error with the same information.
Abstract In this paper, we study the effect of intermittent feedback loss events on the multicast decoding delay performance of generalized instantly decodable network coding. These feedback loss events create uncertainty at the sender about the reception status of different receivers and thus make it difficult to accurately determine subsequent instantly decodable coded packets. To solve this problem, we first identify the different possibilities of uncertain packets at the sender and their probabilities, and then derive the expression for the mean decoding delay. We formulate the Generalized Instantly Decodable Network Coding (G-IDNC) minimum decoding delay problem as a maximum weight clique problem. Since finding the optimal solution is NP-hard, we design a variant of the algorithm employed in [1]. Our algorithm is compared to the two blind graph update approaches proposed in [2] through extensive simulations. Results show that our algorithm outperforms the blind approaches in all situations and suffers only a tolerable degradation, relative to perfect feedback, for large feedback loss periods.
Abstract
One of the main drawbacks of OFDM systems is the high peak-to-average power ratio (PAPR). Most PAPR reduction techniques require transmitter-based processing; here, instead, we propose a receiver-based low-complexity clipping signal recovery method. This method i) reduces PAPR via a simple clipping scheme, ii) uses a Bayesian recovery algorithm to reconstruct the distortion signal with high accuracy, and iii) is energy efficient due to its low complexity. The proposed method is robust against variations in noise and signal statistics, and is further enhanced by making use of all available prior information, such as the locations and phases of the non-zero elements of the clipping signal. Simulation results demonstrate the superiority of the proposed algorithm over other recovery algorithms.
Abstract This work presents an exact tracking analysis of the Normalized Least Mean Square (NLMS) algorithm for circular complex correlated Gaussian inputs. Unlike existing works, the presented analysis uses neither the separation principle nor a small step-size assumption. The approach is based on deriving a closed-form expression for the cumulative distribution function (CDF) of random variables of the form ‖u‖²_{D1} (‖u‖²_{D2})⁻¹, where u is a white Gaussian vector and D1 and D2 are diagonal matrices, and using it to obtain the first and second moments of such variables. These moments are then used to evaluate the tracking behavior of the NLMS algorithm in closed form. Thus, both the steady-state mean-square error (MSE) and mean-square deviation (MSD) tracking behaviors of the NLMS algorithm are evaluated. The analysis is also used to derive the optimum step-size that minimizes the excess MSE (EMSE). Simulations of the steady-state tracking behavior support the theoretical findings for a wide range of step-sizes and input correlations.
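For context, the NLMS recursion whose tracking behavior is analyzed above can be stated in a few lines. The toy system-identification setup below (tap count, step-size, and noise level are all assumed for illustration) simply demonstrates the update:

```python
import numpy as np

def nlms(d, u, order=8, mu=0.5, eps=1e-8):
    """Normalized LMS: w <- w + mu * e[k] * u_k / (eps + ||u_k||^2),
    where u_k is the regressor of the last `order` input samples."""
    w = np.zeros(order)
    e = np.zeros(len(d))
    for k in range(order, len(d)):
        uk = u[k - order + 1:k + 1][::-1]      # [u[k], u[k-1], ..., u[k-order+1]]
        e[k] = d[k] - w @ uk                   # a priori error
        w += mu * e[k] * uk / (eps + uk @ uk)  # normalized update
    return w, e

rng = np.random.default_rng(2)
w_true = rng.standard_normal(8)                      # unknown system (assumed)
u = rng.standard_normal(5000)                        # white input
d = np.convolve(u, w_true)[:len(u)] + 0.01 * rng.standard_normal(len(u))
w_hat, e = nlms(d, u)
msd = np.linalg.norm(w_hat - w_true)
```

The normalization by ‖u_k‖² is exactly what gives rise to the weighted-norm ratios whose CDF the paper derives: each update scales the error by a quantity of the form ‖u‖²_{D1}/‖u‖²_{D2}.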
Abstract This paper presents a compressed sensing based method to suppress impulse noise in digital subscriber lines (DSL). The proposed algorithm exploits the sparse nature of the impulse noise and utilizes the null carriers, already available in all practical DSL systems, for its estimation and cancellation. Specifically, compressed sensing is used for a coarse estimate of the impulse positions, an a priori information based maximum a posteriori probability (MAP) metric for their refinement, followed by least squares (LS) or minimum mean square error (MMSE) estimation of the impulse amplitudes. Simulation results show that the proposed scheme achieves a higher rate compared to other known sparse estimation algorithms in the literature. The paper also demonstrates the superior performance of the proposed scheme compared to the ITU-T G.992.3 standard, which utilizes RS coding for impulse noise mitigation in DSL signals.
Abstract Opportunistic schedulers rely on the feedback of all users in order to schedule a set of users with favorable channel conditions. While the downlink channels can be easily estimated at all user terminals via a single broadcast, several key challenges are faced during uplink transmission. First, the statistics of the noisy and fading feedback channels are unknown at the base station (BS), and channel training is usually required from all users. Second, the amount of network resources (air-time) required for feedback transmission grows linearly with the number of users. In this paper, we tackle the above challenges and propose a Bayesian scheduling algorithm that 1) reduces the air-time required to identify the strong users, and 2) is agnostic to the statistics of the feedback channels, utilizing only the a priori statistics of the additive noise to identify the strong users. Numerical results show that the proposed algorithm reduces the feedback air-time while improving detection in the presence of fading and noisy channels, when compared to recent compressed sensing based algorithms. Furthermore, the proposed algorithm achieves a sum-rate throughput close to that obtained by noiseless dedicated feedback systems.
Abstract We address the distributed estimation of an unknown scalar parameter in Wireless Sensor Networks (WSNs). Sensor nodes transmit their noisy observations over a multiple access channel to a Fusion Center (FC) that reconstructs the source parameter. The received signal is corrupted by noise and channel fading, so the FC objective is to minimize the Mean-Square Error (MSE) of the estimate. In this paper, we assume the sensor node observations to be correlated with the source signal and with each other, with a correlation coefficient between two observations that decays exponentially with their separation distance. The effect of this distance-based correlation on the estimation quality is demonstrated and compared with the case of fully correlated observations. Moreover, a closed-form expression for the outage probability is derived and its dependence on the correlation coefficients is investigated. Numerical simulations are provided to verify our analytic results.
Abstract In this paper, we propose a prioritized multi-layer network coding scheme for collaborative packet recovery in underlay cellular cognitive radio networks. This scheme allows the collocated primary and cognitive radio base-stations to collaborate with each other in order to minimize their own and each other's packet recovery overheads, and thus improve their throughput, without any coordination between them. This non-coordinated collaboration is achieved using a novel multi-layer instantly decodable network coding scheme, which guarantees that each network's help to the other does not degrade its own performance and does not violate the primary network's interference thresholds in the same and adjacent cells. At the same time, our proposed scheme both reduces the recovery overhead in the collocated primary and cognitive radio networks and allows earlier recovery of their packets compared to non-collaborative schemes. Simulation results show that recovery overhead reductions of 15% and 40% can be achieved by our proposed scheme in the primary and cognitive radio networks, respectively, compared to the corresponding non-collaborative scheme.
Abstract In this paper, we consider the problem of minimizing the decoding delay of generalized instantly decodable network coding (G-IDNC) in persistent erasure channels (PECs). By persistent erasure channels, we mean erasure channels with memory, modeled as a Gilbert-Elliott two-state Markov chain with good and bad channel states. In this scenario, the channel erasure dependence, captured by the transition probabilities of this channel model, is an important factor that can be exploited to reduce the decoding delay. We first formulate the G-IDNC minimum decoding delay problem in PECs as a maximum weight clique problem over the G-IDNC graph. Since finding the optimal solution of this formulation is NP-hard, we propose two heuristic algorithms to solve it and compare them using extensive simulations. Simulation results show that each of these heuristics outperforms the other in certain ranges of channel memory levels. They also show that the proposed heuristics significantly outperform both the optimal strict IDNC in the literature and channel-unaware G-IDNC algorithms.
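The Gilbert-Elliott channel mentioned above is easy to simulate. The sketch below (transition and per-state erasure probabilities are illustrative assumptions) generates a bursty erasure pattern of the kind such channel-aware heuristics must cope with:

```python
import numpy as np

def gilbert_elliott(n, p_gb=0.05, p_bg=0.3, e_good=0.01, e_bad=0.6, seed=0):
    """Simulate erasures over a two-state (good/bad) Markov channel.
    p_gb: P(good -> bad), p_bg: P(bad -> good);
    e_good/e_bad: erasure probability in each state."""
    rng = np.random.default_rng(seed)
    state = 0  # start in the good state
    erased = np.empty(n, dtype=bool)
    for k in range(n):
        e = e_good if state == 0 else e_bad
        erased[k] = rng.random() < e
        flip = rng.random() < (p_gb if state == 0 else p_bg)
        if flip:
            state = 1 - state
    return erased

erased = gilbert_elliott(100_000)
rate = erased.mean()   # close to the stationary average erasure probability
```

With these parameters the stationary probability of the bad state is p_gb/(p_gb + p_bg) = 1/7, giving an average erasure rate near (6/7)·0.01 + (1/7)·0.6 ≈ 0.094; the memory (bursts of bad-state erasures) is what the proposed heuristics exploit.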
Abstract We present an algorithm, and its variants, for sparse signal recovery from a small number of measurements in a distribution-agnostic manner. The proposed algorithm finds a Bayesian estimate of the sparse signal to be recovered while remaining indifferent to the actual distribution of its non-zero elements. Termed Support Agnostic Bayesian Matching Pursuit (SABMP), the algorithm is also capable of refining the estimates of the signal and the required parameters in the absence of exact parameter values. This distribution-agnostic feature grants the algorithm the flexibility to adapt to several related problems. Specifically, we present two important extensions: one handles the recovery of sparse signals having block structures, while the other handles multiple measurement vectors to jointly estimate the related unknown signals. We conduct extensive experiments to show that SABMP and its variants outperform most state-of-the-art algorithms, and do so at low computational expense.
Abstract In this paper we introduce a self-interference (SI) estimation and minimisation technique for amplify-and-forward relays. Relays are used to help forward signals between a transmitter and a receiver, which increases signal coverage and reduces the required transmit power. One problem that faces relay communications is the signal leaked from the relay's output back to its input. This causes an SI problem, where the newly received signal at the relay's input is combined with the unwanted leaked signal from the relay's output. A solution is proposed in this paper to estimate and minimise this SI based on a tapped filter at the destination. To obtain the optimum weights for this tapped filter, some channel parameters must first be estimated. This is performed blindly at the destination, without the need for any training, using a method named blind self-interference channel estimation (BSICE). The next step in the proposed solution is to estimate the tapped filter's weights by minimising the mean squared error (MSE) at the destination; this is named the MSE-Optimum Weight (MSE-OW) method. Simulation results are provided to verify the performance of the BSICE and MSE-OW methods.
Abstract A key task of smart meters is to securely report the power consumption of households and provide dynamic pricing to consumers. While transmission to all meters can be performed via a simple broadcast, several challenges are faced during the reporting process. Firstly, the communication network should be able to handle the large number of load reports, and secondly, the privacy of the load reports should be ensured. In this paper, we propose a novel compressive sensing based network design that 1) reduces the communication network transmission overhead, and 2) ensures the privacy of the load reports. Based on recent findings from [1] and [2], numerical results show that the proposed design significantly reduces the network transmission overhead and utilizes the fading channel to encrypt the load reports, thus making it almost impossible for an eavesdropper to decipher them.
Abstract
A fast matching pursuit method using a Bayesian approach is introduced for block-sparse signal recovery. This method performs Bayesian estimation of block-sparse signals even when the distribution of the active blocks is non-Gaussian or unknown. It is agnostic to the distribution of the active blocks and utilizes the a priori statistics of the additive noise and the sparsity rate of the signal, which are shown to be easily estimated from data without user intervention. The method requires a priori knowledge of the block partition, and uses a greedy approach with order-recursive updates of its metrics to find the most dominant sparse supports and determine the approximate minimum mean square error (MMSE) estimate of the block-sparse signal. Simulation results demonstrate the power and robustness of the proposed estimator.
Abstract
This paper presents an L-shaped microphone array configuration for robust 2-D localization of an impulsive acoustic source in an indoor environment. The localization technique relies on a recently proposed time delay estimation technique based on the orthogonal clustering algorithm (TDE-OC), which is designed to work under room reverberation and at low sampling rates. The TDE-OC method finds the time delay estimates (TDEs) from the sparse room impulse response (RIR) signal. Obtaining the TDEs from the RIR makes the method robust against room reverberation, while the low sampling rate requirement reduces the hardware and computational complexity and relaxes the communication link between the microphones and the centralized location. Experimental results show the robustness of this method in a reverberant environment at low sampling rates, compared with the generalized cross-correlation method.
Abstract Localization systems are most often based on time delay estimation (TDE) techniques. TDE techniques based on the channel impulse response (CIR) are effective in reverberant environments such as indoors. The recently developed Orthogonal Clustering (OC) algorithm estimates the CIR using a sparse signal reconstruction approach; it is a low-complexity Bayesian method that exploits the sparsity constraint, the sensing matrix structure, and a priori statistical information. In practical systems, several parameters affect the performance of a localization system based on OC TDE, so it is necessary to analyze the performance of the algorithm as these parameters vary. In this paper we investigate the effect of variations of different parameters on the performance of the OC algorithm used in an impulsive acoustic source localization (IASL) system.
Abstract Applying a wavelet edge-detection technique to an observed wideband spectrum yields a signal that carries information about frequency band boundaries: it contains peaks at locations corresponding to the start and end of each frequency band. In the presence of noise, this signal contains a mixture of true peaks and noisy peaks, and a threshold is required to extract the true peaks efficiently. In this paper, the threshold is computed using a blind source separation technique. Probability-of-detection and success-ratio plots are used to evaluate the proposed technique; the success-ratio plot shows an improvement of 4 dB and the probability-of-detection plot shows an improvement of 8 dB. Moreover, the proposed algorithm operates on the received signal alone and does not require any a priori information.
Abstract This paper presents a new method of time delay estimation (TDE) at low sampling rates for an impulsive acoustic source in a room environment. The proposed method finds the time delay from the room impulse response (RIR), which makes it robust against room reverberation. The RIR is treated as a sparse phenomenon, and a recently proposed sparse signal reconstruction technique called orthogonal clustering (OC) is utilized to estimate it from the received signal sampled at a low rate. The arrival time of the direct-path signal at a pair of microphones is identified from the estimated RIRs, and their difference yields the desired time delay. Low sampling rates reduce the hardware and computational complexity and decrease the communication between the microphones and the centralized location. The performance of the proposed technique is demonstrated by numerical simulations and experimental results.
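The final step of the method above, differencing the direct-path arrival times read off two estimated RIRs, can be sketched as follows. This is a minimal illustration assuming the RIRs have already been estimated (the paper obtains them via OC); the threshold rule and all names are illustrative.

```python
import numpy as np

def tde_from_rirs(h1, h2, fs, threshold_ratio=0.5):
    """Estimate the time delay (in seconds) between two microphones from
    their sparse room impulse responses: take the first tap whose magnitude
    exceeds a fraction of the peak as the direct path, then difference the
    two arrival times."""
    def direct_path_index(h):
        mag = np.abs(h)
        # argmax on a boolean array returns the index of the first True
        return int(np.argmax(mag >= threshold_ratio * mag.max()))
    return (direct_path_index(h1) - direct_path_index(h2)) / fs
```

Because the delay is read from the impulse response rather than the raw waveforms, later reverberant taps do not affect the estimate as long as the direct path remains the first significant tap.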
Abstract In this paper, we propose a novel method for clipping mitigation in OFDM using compressive sensing that completely avoids reserved tones and channel-estimation pilots. The method selects the most reliable perturbations from the constellation lattice upon decoding at the receiver (in the frequency domain) and performs compressive sensing over these observations to fully recover the sparse nonlinear distortion in the time domain. As such, the method provides a practical solution to the problem of initial erroneous decoding decisions in iterative ML methods, and the ability to recover the distorted signal in one shot.
Abstract This paper presents a maximum likelihood (ML) approach to mitigate the effect of narrowband interference (NBI) in a zero-padded orthogonal frequency division multiplexing (ZP-OFDM) system. The NBI is assumed to be time variant and asynchronous with the frequency grid of the ZP-OFDM system. The proposed structure-based technique uses the fact that the NBI signal is sparse in the frequency domain compared with the ZP-OFDM signal. The structure is also useful in reducing the computational complexity of the proposed method. The paper also presents a data-aided approach for improved NBI estimation. The suitability of the proposed method is demonstrated through simulations.
Abstract
In this work, we present a detailed discussion of the various sources of errors in commercial off-the-shelf (COTS) wireless sensor node (WSN) platforms through a series of experiments. These COTS WSNs are programmed using the standard TinyOS 2.x components and interfaces. The experimental setup records an impulsive acoustic signal from several sensor nodes' microphones and sends it to the base station for processing. The Flooding Time Synchronization Protocol (FTSP) was used, and a special MATLAB interfacing code was written to analyze and present the errors within this acquisition setup. It was found that there are at least four error sources that can dramatically degrade the signal acquisition, because the standard TinyOS components are not suitable for medium/high sampling-frequency applications.
Abstract This paper addresses the problem of channel estimation in Impulse-Radio Ultra-Wideband (IR-UWB) communication systems. The IEEE 802.15.4a channel model is used, where the channel is assumed to be Linear Time Invariant (LTI); the problem of channel estimation thus becomes that of estimating the sparse channel taps and their delays. Since the bandwidth of the signal is very large, Nyquist-rate sampling is impractical, so we propose to estimate the channel taps from sub-sampled versions of the received signal profile. We adopt a Bayesian framework to estimate the channel support by incorporating the a priori multipath arrival-time statistics. In the first approach, we use a two-step method that employs compressive sensing to obtain coarse estimates and then refines them using the Maximum A Posteriori (MAP) criterion. In the second approach, we develop a Low-Complexity MAP (LC-MAP) estimator whose computational cost is reduced by identifying nearly orthogonal clusters in the received profile and by leveraging the structure of the sensing matrix.
Abstract In this paper, we present a fast Bayesian method for sparse signal recovery that makes collective use of the sparsity information, the a priori statistical properties, and the structure present in the problem to obtain near-optimal estimates at very low complexity. Specifically, we utilize the rich structure of the sensing matrix encountered in many signal processing applications to develop a fast reconstruction algorithm for cases where the statistics of the sparse signal are non-Gaussian or unknown. The proposed method outperforms the widely used convex relaxation approaches as well as greedy matching pursuit techniques, all while operating at a much lower complexity.
Abstract Impulsive noise is the bottleneck that limits the distance over which DSL communication can take place. By modeling impulsive noise as a sparse vector, recently developed sparse reconstruction algorithms can be utilized to combat it. We propose an algorithm that utilizes the guard-band null carriers for impulsive noise estimation and cancellation. Instead of relying on ℓ1 minimization, as done in some popular general-purpose compressive sensing (CS) schemes, the proposed method jointly exploits the structure present in the problem and the available a priori information for sparse signal recovery. Its computational complexity is very low compared with sparse reconstruction algorithms based on ℓ1 minimization. A performance comparison of the proposed method with other techniques, including ℓ1 minimization and another recently developed scheme for sparse signal recovery, is provided in terms of achievable rates for a DSL line with impulse noise estimation and cancellation.
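The measurement model underlying this line of work can be illustrated with a toy sketch: a time-domain impulse vector is observed only through its DFT samples on the null carriers, and a sparse solver recovers it. The sketch below uses plain OMP as the solver, which is NOT the paper's low-complexity structured algorithm; the carrier indices and sizes are illustrative.

```python
import numpy as np

def estimate_impulses(y_null, null_idx, n, num_impulses):
    """Recover a time-domain sparse impulse vector from its DFT samples
    observed on null carriers, via ordinary OMP over a partial DFT matrix."""
    F = np.fft.fft(np.eye(n))[null_idx, :]   # partial DFT sensing matrix
    support, residual = [], y_null.copy()
    for _ in range(num_impulses):
        # pick the time index whose DFT column best matches the residual
        k = int(np.argmax(np.abs(F.conj().T @ residual)))
        if k not in support:
            support.append(k)
        amp, *_ = np.linalg.lstsq(F[:, support], y_null, rcond=None)
        residual = y_null - F[:, support] @ amp
    e = np.zeros(n, dtype=complex)
    e[support] = amp
    return e
```

Once the impulse vector is estimated, it can simply be subtracted from the received signal before equalization; the paper replaces the generic solver here with a structure-aware Bayesian recovery of much lower complexity.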
Abstract Under high mobility, the orthogonality between sub-carriers in an OFDM symbol is destroyed, resulting in severe inter-carrier interference (ICI). We present a novel algorithm to estimate the channel and ICI coefficients by exploiting the channel's time and frequency correlations and the (approximately) banded structure of the frequency-domain channel matrix. In addition, we invoke the asymptotic equivalence of Toeplitz and circulant matrices to reduce the dimensionality of the channel estimation problem by retaining only the dominant terms in an offline eigen-decomposition. Furthermore, we show that the asymptotically MMSE-optimum pilot design consists of identical, equally spaced frequency-domain clusters whose size is determined by the channel Doppler spread. Comparisons of our proposed algorithm with a widely cited recent algorithm demonstrate a significant performance advantage at comparable real-time complexity.
Abstract Impulsive noise is the bottleneck that determines the maximum length of a DSL line. Impulsive noise occurs rarely in DSL, but when it does it is very destructive: the affected DSL symbols cannot be recovered and are dropped at the receiver. By modeling impulsive noise as a sparse vector, recently developed sparse reconstruction algorithms can be utilized to combat it. We propose an algorithm that utilizes the null carriers for impulsive noise estimation and cancellation. Specifically, we use compressive sampling for a coarse estimate of the impulse positions, a MAP metric based on a priori information for their refinement, followed by MMSE estimation of the impulse amplitudes. We also present a comparison of the achievable rate in DSL using our algorithm and recently developed algorithms for sparse signal reconstruction.
Abstract
This work presents an exact tracking analysis of the ε-normalized least mean square (ε-NLMS) algorithm for circular complex correlated Gaussian input. The analysis is based on the derivation of a closed-form expression for the cumulative distribution function (CDF) of random variables of the form ‖u_i‖²_{D1} [ε + ‖u_i‖²_{D2}]⁻¹. The CDF is then used to derive the first and second moments of these variables, which in turn completely characterize the tracking performance of the ε-NLMS algorithm in explicit closed-form expressions. Consequently, new explicit closed-form expressions for the steady-state tracking excess mean square error and the optimum step size are derived. Simulation results of the tracking behavior of the filter match the theoretical expressions for various degrees of input correlation and for various values of ε.
Abstract
This paper focuses on data-aided (DA) direction of arrival (DOA) estimation of a single narrowband source in time-varying Rayleigh fading. The time-variant fading amplitude is modeled using the Jakes' and first-order autoregressive (AR1) correlation models. Closed-form expressions of the CRB for DOA alone are derived for fast and slow Rayleigh fading; as a special case, the CRB under uncorrelated Rayleigh fading is derived. Analytical approximate expressions of the CRB are derived for low and high SNR, enabling the derivation of a number of properties that describe the bound's dependence on key parameters such as the SNR and the channel correlation. A high-SNR maximum likelihood (ML) estimator based on the AR1 correlation model is derived; the main objective is to reduce the algorithm's complexity to a one-dimensional search over the DOA parameter alone, as in the static-channel DOA estimator. Finally, simulation results illustrate the performance of the estimator and confirm the validity of the theoretical analysis.
Abstract Channel estimation is vital in OFDM systems for efficient data recovery. In this paper, we propose a blind channel estimation algorithm based on the assumption that the transmitted data in an OFDM system is Gaussian (by central-limit arguments). The channel estimate can then be obtained by maximizing the output likelihood function. Unfortunately, the likelihood function turns out to be multi-modal, so finding the global maximum is challenging. We rely on spectral factorization and the cyclostationarity of the output to obtain the correct channel zeros, and a genetic algorithm is then used to fine-tune the obtained solution.
Abstract
This paper addresses data-aided signal-to-noise ratio (SNR) estimation in time-variant flat Rayleigh fading channels. The time-variant fading channel is modeled using the Jakes' model and the first-order autoregressive (AR1) model. Closed-form expressions of the Cramér-Rao bound (CRB) for data-aided SNR estimation are derived for fast and slow Rayleigh fading; as a special case, the CRB under uncorrelated Rayleigh fading is derived. Analytical approximate expressions of the CRB are derived for low and high SNR, enabling the derivation of a number of properties that describe the bound's dependence on key parameters such as the SNR, the channel correlation, and the number of samples. Since the exact maximum likelihood (ML) estimator is computationally intensive in the fast-fading case, two approximate solutions are proposed for the high- and low-SNR regimes. Numerical results illustrate the performance of the estimators and confirm the validity of the theoretical analysis.
Abstract We propose a generic feedback channel model and a compressive-sensing-based opportunistic feedback protocol for reducing the feedback resources (channels) in MIMO broadcast channels, under the assumption that both the feedback and downlink channels are noisy and undergo block Rayleigh fading. The feedback resources are shared and are opportunistically accessed by strong users (users above a certain fixed threshold), who send the same feedback information on all shared channels and are identified by the base station via compressive sensing. The proposed protocol is shown to achieve the same sum-rate throughput as dedicated feedback schemes, but with the number of feedback channels growing only logarithmically with the number of users.
Abstract In an OFDM system, the receiver requires an estimate of the channel to recover the transmitted data. Most channel estimation methods rely on some form of training, which reduces the useful data rate. In this paper, we introduce an algorithm that blindly estimates the channel by maximizing the log-likelihood of the channel given the output data. Finding the likelihood function of a linear system can be very difficult; in the OFDM case, however, central-limit arguments can be used to argue that the time-domain input is Gaussian. This, together with the Gaussian assumption on the noise, makes the output data Gaussian as well, and the output likelihood function can then be maximized to obtain the maximum likelihood (ML) estimate of the channel. Unfortunately, this optimization problem is not convex, so finding the global maximum is challenging. We propose two methods to find the global maximum of the ML objective function: a blind genetic algorithm and a semi-blind steepest-descent method. The performance of the proposed algorithms is demonstrated by computer simulations.
Abstract In this paper, we describe a novel design of a peak-to-average power ratio (PAPR) reduction system that exploits the relative temporal sparsity of orthogonal frequency division multiplexed (OFDM) signals to detect the positions and amplitudes of clipped peaks, by partial observation of their frequency content at the receiver. The approach uses recent advances in the reconstruction of sparse signals from rank-deficient projections via convex programming, collectively known as compressive sensing. Since previous work in the literature has focused on using the reserved tones as spectral support for optimum peak-reducing signals in the time domain, complexity at the transmitter has always been a problem. In this work, we instead use extremely simple peak-reducing signals at the transmitter and then use the reserved tones to detect the peak-reducing signal at the receiver by a convex relaxation of an otherwise combinatorially prohibitive optimization problem. This completely shifts the complexity to the receiver and drastically reduces it from a function of N (the number of subcarriers in the OFDM signal) to a function of m (the number of reserved tones), which is a small subset of N.
Abstract A key feature in the design of any MAC protocol is the throughput it can provide. In wireless networks, the channel of a user is not fixed but varies randomly; thus, to maximize the throughput of the MAC protocol at any given time, only users with large channel gains should be allowed to transmit. In this paper, a compressive-sensing-based opportunistic protocol for exploiting multiuser diversity in wireless networks is proposed. The protocol builds on the traditional R-ALOHA protocol, which lets users compete for channel access before the channel is reserved for the best user. We use compressive sensing to find the best user and show that the proposed protocol requires less time for reservation, thereby outperforming other schemes proposed in the literature. Because it requires less reservation time, the proposed scheme can also be seen as an enhancement of R-ALOHA in fast-fading environments.
Abstract OFDM modulation combines the advantages of high achievable data rates and relatively easy implementation. However, for proper recovery of the input, the OFDM receiver needs accurate channel information. Most algorithms proposed in the literature perform channel estimation in the time domain, which increases the computational complexity in multi-access situations where a user is interested in only part of the spectrum. In this paper, we propose a frequency-domain algorithm for channel estimation in OFDMA systems. The algorithm performs an eigenvalue decomposition of the channel autocorrelation matrix and approximates the channel frequency response seen by each user using the first few dominant eigenvectors. In a time-variant environment, we derive a state-space model for the evolution of the eigenmodes that allows us to track them using a forward-backward Kalman filter. The performance of the algorithm is further improved by employing a data-aided approach based on expectation maximization.
Abstract The distribution of randomly deployed wireless sensors plays an important role in the quality of the methods used for data acquisition and signal reconstruction. Mathematically speaking, estimating the distribution of randomly deployed sensors can be related to computing the spectrum of Vandermonde matrices with non-uniform entries. In this paper, we use the recent free deconvolution framework to recover, in noisy environments, the asymptotic moments of the structured random Vandermonde matrices, and we relate these moments to the distribution of the randomly deployed sensors. Remarkably, the results are valid in the finite case, using only a limited number of sensors and samples.
Abstract In this work, we propose a transparent approach to evaluating the CDF of indefinite quadratic forms in Gaussian random variables, and of ratios of such forms. This quantity appears in the analysis of different receivers in communication systems and in various signal processing applications. Instead of attempting to find the pdf of this quantity, as is the case in many papers in the literature, we focus on finding the CDF. The basic trick is to replace the inequalities that appear in the CDF calculation with the unit step function, and then replace the latter with its Fourier transform. This produces a multi-dimensional integral that can be evaluated using complex integration. We show how our approach extends to nonzero-mean real/complex Gaussian vectors and to the joint distribution of indefinite quadratic forms.
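The unit-step trick described above can be sketched in one line of math. The following is a schematic reconstruction under the standard zero-mean circular Gaussian setup (symbols Q, R, A, and β are illustrative, not taken from the paper):

```latex
% Write the CDF of Q = x^{H} A x as an expectation of a unit step,
% then replace the step with its Fourier-type representation (any \beta > 0):
F_Q(t) = \Pr\{Q \le t\} = \mathbb{E}\big[u(t - Q)\big],
\qquad
u(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty}
       \frac{e^{(j\omega+\beta)x}}{j\omega+\beta}\,d\omega .

% Exchanging expectation and integration, for x \sim \mathcal{CN}(0, R)
% the inner expectation is a Gaussian moment generating function:
\mathbb{E}\big[e^{-(j\omega+\beta)\,x^{H} A x}\big]
   = \frac{1}{\det\!\big(I + (j\omega+\beta)\,R A\big)} ,
```

so the CDF reduces to a one-dimensional integral of a ratio of exponentials and determinants, which can then be evaluated by complex integration (residues), even when A is indefinite.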
Abstract Random beamforming (RBF) exploits multiuser diversity to increase the sum-rate capacity of MIMO broadcast channels. However, in the presence of spatial correlation between the downlink channels, multiuser diversity cannot be exploited and the sum rate suffers a signal-to-noise ratio (SNR) hit. In this paper, we explore precoding techniques that minimize this hit; specifically, we derive an optimum and an approximate precoding matrix that minimize the sum-rate hit of RBF. As a by-product, we introduce a technique for evaluating the cumulative distribution function (CDF) of weighted norms of Gaussian random variables.
Abstract In this paper, we consider blind data detection for OFDM transmission over block-fading channels. Specifically, we show how constant-modulus data of an OFDM symbol can be blindly detected using the output symbol and its associated cyclic prefix. Our approach relies on decomposing the OFDM channel into two subchannels (cyclic and linear) that share the same input and are characterized by the same channel parameters. This enables us to estimate the channel parameters from one subchannel and substitute the estimate into the other, yielding a nonlinear relationship involving only the input and output data that can be searched for the maximum likelihood estimate of the input. This shows that OFDM systems are completely identifiable from output data alone, irrespective of the channel zeros, as long as the channel delay spread is less than the length of the cyclic prefix. We also propose iterative methods to reduce the computational complexity of the ML search for the input.