Journal Papers
Abstract
This paper addresses the problem of selecting the regularization parameter for linear least-squares estimation. We propose a new technique called bounded perturbation regularization (BPR). In the proposed BPR method, a perturbation with a bounded norm is allowed into the linear transformation matrix to improve the singular-value structure. Following this, the problem is formulated as a min-max optimization problem. Next, the min-max problem is converted to an equivalent minimization problem to estimate the unknown vector quantity. The solution of the minimization problem is shown to converge to that of the $\ell_2$-regularized least-squares problem.
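The final minimization step above reduces to an $\ell_2$-regularized (Tikhonov) least-squares solve. A minimal numpy sketch of that step on an ill-conditioned random model matrix, with an illustrative hand-picked regularizer `gamma` rather than the BPR-derived one, is:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 40
# Ill-conditioned model matrix: rapidly decaying singular values
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((m, m)))
s = 10.0 ** -np.linspace(0, 6, m)          # condition number ~1e6
A = U[:, :m] @ np.diag(s) @ V.T
x = rng.standard_normal(m)
y = A @ x + 1e-3 * rng.standard_normal(n)

# Ordinary LS vs l2-regularized LS with a fixed, hand-picked regularizer
x_ls = np.linalg.lstsq(A, y, rcond=None)[0]
gamma = 1e-4
x_reg = np.linalg.solve(A.T @ A + gamma * np.eye(m), A.T @ y)

mse_ls = np.mean((x_ls - x) ** 2)
mse_reg = np.mean((x_reg - x) ** 2)
```

With the smallest singular values near 1e-6, the unregularized solution amplifies the noise drastically, while the regularized solve suppresses the unstable directions.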
Abstract
This paper addresses the problem of blind demixing of instantaneous mixtures in a multiple-input multiple-output communication system. The main objective is to present efficient blind source separation (BSS) algorithms dedicated to moderate- or high-order QAM constellations. Four new iterative batch BSS algorithms are presented, dealing with the multimodulus (MM) and alphabet-matched (AM) criteria. For the optimization of these cost functions, iterative methods based on Givens and hyperbolic rotations are used. A pre-whitening operation is also utilized to reduce the complexity of the design problem. It is observed that the algorithms based on Givens rotations alone give satisfactory performance only for large numbers of samples. For small numbers of samples, however, the algorithms designed by combining both Givens and hyperbolic rotations compensate for the ill-whitening that occurs in this case and thus improve the performance. Two algorithms dealing with the MM criterion are presented for moderate-order QAM signals such as 16-QAM; the other two, dealing with the AM criterion, are presented for high-order QAM signals. These methods are finally compared with state-of-the-art batch BSS algorithms in terms of signal-to-interference-plus-noise ratio, symbol error rate, and convergence rate. Simulation results show that the proposed methods outperform the contemporary batch BSS algorithms.
Abstract
The radiation pattern of an antenna array depends on the excitation weights and the geometry of the array. Due to wind and atmospheric conditions, outdoor millimeter wave antenna elements are subject to full or partial blockages from a plethora of particles like dirt, salt, ice, and water droplets. Handheld devices are also subject to blockages from random finger placement and/or fingerprints. These blockages cause absorption and scattering of the signal incident on the array, modify the array geometry, and distort the far-field radiation pattern of the array. This paper studies the effects of blockages on the far-field radiation pattern of linear arrays and proposes several array diagnosis techniques for millimeter wave antenna arrays. The proposed techniques jointly estimate the locations of the blocked antennas and the induced attenuation and phase shifts, given knowledge of the angles of arrival/departure. Numerical results show that the proposed techniques provide satisfactory fault detection with a reduced number of measurements (diagnosis time), provided that the number of blockages is small compared to the array size.
Abstract
This paper analyzes the statistical properties of the signal-to-noise ratio (SNR) at the output of Capon's minimum variance distortionless response (MVDR) beamformer when operating over impulsive noise. In particular, we consider the supervised case in which the receiver employs the regularized Tyler estimator (RTE) to estimate the covariance matrix of the interference-plus-noise process using n observations of size N×1. The choice of the RTE is motivated by its resilience to the presence of outliers and by its regularization parameter, which guarantees good conditioning of the covariance estimate. Of particular interest in this paper is the derivation of the second-order statistics of the SINR. To achieve this goal, we consider two different approaches. The first is based on the classical regime, referred to as the n-large regime, in which N is assumed to be fixed while n grows to infinity. The second approach builds upon recent results developed within the framework of random matrix theory and assumes that N and n grow large together. Numerical results are provided to compare the accuracies of the two regimes under different settings.
Abstract
The least-mean-fourth (LMF) algorithm is known for its fast convergence and low steady-state error, especially in sub-Gaussian noise environments. Recent work on normalised versions of the LMF algorithm has further enhanced its stability and performance in both Gaussian and sub-Gaussian noise environments. For example, the recently developed normalised LMF (XE-NLMF) algorithm is normalised by the mixed signal and error powers and weighted by a fixed mixed-power parameter. Unfortunately, this algorithm depends on the selection of this mixing parameter. In this work, a time-varying mixed-power parameter technique is introduced to overcome this dependency. The convergence, transient, and steady-state behaviour of the proposed algorithm are derived and verified through simulations. An enhancement in performance is obtained through the use of this technique in two different scenarios. Moreover, the tracking analysis of the proposed algorithm is carried out in the presence of two sources of nonstationarity: (1) carrier frequency offset between transmitter and receiver and (2) random variations in the environment. Close agreement between analysis and simulation results is obtained. The results show that, unlike in the stationary case, the steady-state excess mean-square error is not a monotonically increasing function of the step size.
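As a rough illustration of the idea (not the paper's exact recursion), a normalised LMF update whose step is scaled by a fixed convex mix `alpha` of input power and error power can be sketched as follows; the constants here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 5000, 8
w_true = rng.standard_normal(M)
u = rng.standard_normal((N, M))            # white Gaussian regressors
# Sub-Gaussian (uniform) measurement noise, where LMF-type updates shine
noise = 0.05 * (rng.random(N) - 0.5)
d = u @ w_true + noise

w = np.zeros(M)
mu, alpha, eps = 0.02, 0.5, 1e-6
for i in range(N):
    e = d[i] - u[i] @ w
    # Mixed-power normalization: convex mix of input power and error power
    norm = alpha * (u[i] @ u[i]) + (1 - alpha) * e**2 + eps
    w = w + mu * u[i] * e**3 / norm        # fourth-order (cubic-error) update
msd = np.mean((w - w_true) ** 2)           # mean-square weight deviation
```

The error-power term in the denominator tames the cubic error during the initial large-error phase, which is exactly the stability concern that motivates the mixed normalization.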
Abstract
Several areas in signal processing and communications rely on various tools from order statistics. Studying the scaling of the extreme values of i.i.d. random variables is of particular interest, as it is sometimes only possible to make meaningful statements when the number of variables is large. This paper develops a new approach to finding the scaling of the minimum of i.i.d. variables by studying the behavior of the CDF and its derivatives at a single point, or equivalently by studying the behavior of the characteristic function. The theory developed is used to study the scaling of several types of random variables and is confirmed by simulations.
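A worked instance of the CDF-at-a-point idea: for i.i.d. Exp(1) variables the CDF satisfies F(x) ≈ x near zero, so the minimum of n of them scales as 1/n (in fact the minimum is exactly Exp(n) distributed). A quick Monte Carlo check:

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 1000, 2000
# Minimum of n i.i.d. Exp(1) variables is exactly Exp(n): E[min] = 1/n
mins = rng.exponential(1.0, size=(trials, n)).min(axis=1)
scaled_mean = n * mins.mean()              # should concentrate near 1
```

The same reasoning applies to any distribution whose CDF behaves like c·x^k near its lower endpoint, with the minimum then scaling as n^(-1/k).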
Abstract
Electrocardiogram (ECG) signals are vital tools in assessing the health of the mother and the fetus during pregnancy. Extraction of the fetal ECG (FECG) signal from the mother's abdominal recordings requires challenging signal processing tasks to eliminate the effects of the mother's ECG (MECG) signal, noise, and other distortion sources. The availability of ECG data from multiple electrodes provides an opportunity to leverage the collective information in a collaborative manner. We propose a new scheme for extracting the fetal ECG signals from the abdominal ECG recordings of the mother using the multiple measurement vectors approach. The scheme employs a dual-dictionary framework: a learned dictionary for eliminating the MECG signals through sparse-domain representation and a wavelet dictionary for noise-reduced sparse estimation of the fetal ECG signals. We also propose a novel methodology for inferring a single estimate of the fetal ECG source signal from the individual sensor estimates. Simulation results with real ECG recordings demonstrate that the proposed scheme provides a comprehensive framework for eliminating the mother's ECG component in the abdominal recordings, effectively filters out noise and distortions, and leads to more accurate recovery of the fetal ECG source signal compared to other state-of-the-art algorithms.
Abstract
Orthogonal frequency division multiplexing (OFDM) has emerged as a modulation scheme that can achieve high data rates over frequency-selective fading channels by efficiently handling multipath effects. This paper proposes a receiver design for space-time block-coded OFDM transmission over frequency-selective time-variant channels. Joint channel and data recovery are performed at the receiver by utilizing the expectation-maximization (EM) algorithm. The receiver makes collective use of the data constraints (pilots, cyclic prefix, the finite alphabet constraint, and space-time block coding) and channel constraints (finite delay spread, frequency and time correlation, and transmit and receive correlation). The channel estimation part of the receiver boils down to an EM-based forward-backward Kalman filter. A forward-only Kalman filter is also proposed to avoid the latency involved in estimation. Simulation results show that the proposed receiver outperforms other least-squares-based iterative receivers.
Abstract
Underwater wireless technologies demand higher data rates for ocean exploration. Currently, large coverage is achieved by acoustic sensor networks with low data rates, high cost, high latency, high power consumption, and a negative impact on marine mammals. Optical communication for underwater networks, meanwhile, offers higher data rates, albeit over limited communication distances. Moreover, energy consumption is another major problem for underwater sensor networks, due to limited battery power and the difficulty of replacing or recharging the battery of a sensor node. A natural solution to this problem is to add energy harvesting capability to the acoustic-optical sensor nodes. Localization of underwater sensor networks is of utmost importance, because the data collected from underwater sensor nodes are useful only if the locations of the nodes are known. Therefore, a novel localization technique for energy-harvesting hybrid acoustic-optical underwater wireless sensor networks (AO-UWSNs) is proposed. An AO-UWSN employs optical communication for a higher data rate over short transmission distances and acoustic communication for a lower data rate over long transmission distances. A hybrid received signal strength (RSS) based localization technique is proposed to localize the nodes in AO-UWSNs. The proposed technique combines the noisy RSS-based measurements from acoustic and optical communications and estimates the final locations of the acoustic-optical sensor nodes. A weighted multiple-observations paradigm is proposed for the hybrid estimated distances, to suppress noisy observations and give more weight to accurate ones. Furthermore, a closed-form expression for the Cramér-Rao lower bound (CRLB) on the localization accuracy of the proposed technique is derived.
Abstract
Channel estimation is an important prerequisite for receiver design. In this paper, we present a semi-blind, low-complexity, frequency-domain channel estimation algorithm for multi-access orthogonal frequency division multiplexing (OFDM) systems. Our algorithm is based on eigenvalue interpolation and makes collective use of data and channel constraints. We exploit these constraints to derive a frequency-domain maximum a posteriori (MAP) channel estimator. Furthermore, we develop a data-aided (expectation-maximization based) estimator incorporating frequency-correlation information. The estimator is further enhanced by utilizing time-correlation information through a forward-backward (FB) Kalman filter. We also explore various implementations of the FB Kalman filter. Simulation results are provided that validate the applicability of the proposed algorithm.
Abstract
In this paper, we present a comprehensive scheme for wireless monitoring of respiratory movements in humans. Our scheme overcomes the challenges of low signal-to-noise ratio, background clutter, and high sampling rates. It is based on the estimation of the ultra-wideband channel impulse response. We suggest techniques for dealing with background clutter in situations where it may be time variant. We also present a novel methodology for significantly reducing the required sampling rate of the system while achieving the accuracy offered by the Nyquist rate. Performance results from simulations conducted with pre-recorded respiratory signals demonstrate the robustness of our scheme in tackling the above challenges and providing a low-complexity solution for the monitoring of respiratory movements.
Abstract
In this paper, compressed sensing techniques are proposed to linearize commercial power amplifiers driven by orthogonal frequency division multiplexing signals. The nonlinear distortion is treated as a sparse phenomenon in the time domain, and three compressed sensing based algorithms are presented to estimate and compensate for these distortions at the receiver using a few and, at times, even no frequency-domain free carriers (i.e., pilot carriers). The first technique is a conventional compressed sensing approach, while the second incorporates a priori information about the distortions to enhance the estimation. Finally, the third technique is an iterative data-aided algorithm that does not require any pilot carriers and hence allows the system to work at maximum bandwidth efficiency. The performance of all the proposed techniques is evaluated on a commercial power amplifier and compared. The error vector magnitude and symbol error rate results show the ability of compressed sensing to compensate for the amplifier's nonlinear distortions.
Abstract
This paper investigates the joint maximum likelihood (ML) data detection and channel estimation problem for Alamouti space-time block-coded (STBC) orthogonal frequency-division multiplexing (OFDM) wireless systems. Joint ML estimation and data detection is generally considered a hard combinatorial optimization problem. We propose an efficient low-complexity algorithm, based on a branch-estimate-and-bound strategy, that yields the exact joint ML solution. However, the computational complexity of the blind algorithm becomes critical at low signal-to-noise ratio (SNR) as the number of OFDM carriers and the constellation size increase, especially in multiple-antenna systems. To overcome this problem, a semi-blind algorithm based on a new complexity-reduction framework is proposed, relying on subcarrier reordering and on decoding the carriers with different levels of confidence using a suitable reliability criterion. In addition, it is shown that by utilizing the inherent structure of Alamouti coding, either estimation performance improvement or complexity reduction can be achieved. The proposed algorithms can reliably track the wireless Rayleigh fading channel without requiring any channel statistics. Simulation results, presented against perfect coherent detection, demonstrate the effectiveness of the blind and semi-blind algorithms over frequency-selective channels with different fading characteristics.
Abstract
This paper presents a robust method for two-dimensional (2D) impulsive acoustic source localization in a room environment using low sampling rates. The proposed method finds the time delay from the room impulse response (RIR), which makes it robust against room reverberation. We consider the RIR as a sparse phenomenon and apply a recently proposed sparse signal reconstruction technique called orthogonal clustering (OC) to estimate it from the sub-sampled received signal. The arrival time of the direct-path signal at a pair of microphones is identified from the estimated RIR, and the difference yields the desired time delay estimate (TDE). Low sampling rates reduce the hardware and computational complexity and decrease the communication between the microphones and the centralized location. Simulation results, and experimental results from an actual hardware setup, are presented to demonstrate the performance of the proposed technique.
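The OC-based sparse RIR estimation itself is beyond a short snippet, but the final TDE step, locating the direct-path arrival at two microphones and differencing, can be illustrated with a simple cross-correlation stand-in for the sparse RIR estimate (all signal parameters here are synthetic assumptions):

```python
import numpy as np

fs = 16000                                 # assumed sampling rate (Hz)
rng = np.random.default_rng(3)
src = rng.standard_normal(256)             # impulsive source snippet
# Simulated direct-path arrivals: mic 2 hears the source 25 samples later
d_true = 25
x1 = np.zeros(2048); x1[100:100 + 256] += src
x2 = np.zeros(2048); x2[100 + d_true:100 + d_true + 256] += 0.8 * src
x1 += 0.01 * rng.standard_normal(2048)
x2 += 0.01 * rng.standard_normal(2048)

# Stand-in for the sparse RIR estimate: peak of the cross-correlation
xc = np.correlate(x2, x1, mode="full")
lag = xc.argmax() - (len(x1) - 1)          # estimated delay in samples
tde = lag / fs                             # time delay estimate in seconds
```

In the paper's setting the peak would instead be read off the OC-estimated sparse RIR, which is what confers robustness to reverberation.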
Abstract
Summary form only given. Strong light-matter coupling has recently been successfully explored in the GHz and THz [1] range with on-chip platforms. New and intriguing quantum optical phenomena have been predicted in the ultrastrong coupling regime [2], when the coupling strength $\Omega$ becomes comparable to the unperturbed frequency of the system $\omega$. We recently proposed a new experimental platform in which we couple the inter-Landau-level transition of a high-mobility 2DEG to the highly subwavelength photonic mode of an LC meta-atom [3], showing a very large $\Omega/\omega_c = 0.87$. Our system benefits from the collective enhancement of the light-matter coupling, which comes from the scaling of the coupling $\Omega \propto \sqrt{n}$, where n is the number of optically active electrons. In our previous experiments [3] and in the literature [4], this number varies from $10^4$ to $10^3$ electrons per meta-atom. We now engineer a new cavity, resonant at 290 GHz, with an extremely reduced effective mode surface $S_{\mathrm{eff}} = 4 \times 10^{-14}$ m$^2$ (FE simulations, CST), yielding large field enhancements above 1500 and allowing us to enter the few-electron regime.
Abstract
This paper studies the delay reduction problem for instantly decodable network coding (IDNC) based device-to-device (D2D) communication-enabled networks. Unlike conventional point-to-multipoint (PMP) systems, in which the wireless base station has sufficient computational capability, D2D networks rely on battery-powered operation of the devices. Therefore, particular emphasis on computational complexity is needed in the design of delay reduction algorithms for D2D networks. While most of the existing literature on IDNC directly extends the PMP delay reduction schemes, known to be NP-hard, to the D2D setting, this paper proposes to investigate and minimize the complexity of such algorithms for battery-powered devices. Since delay minimization problems in IDNC-based systems are equivalent to maximum weight clique problems in the IDNC graph, the algorithms presented in this paper can be applied to different delay aspects. This paper introduces and focuses on reducing the maximum value of the decoding delay, as it represents the most general solution. The complexity of the solution is reduced by first proposing efficient methods for the construction, update, and dimension reduction of the IDNC graph. The paper further shows that, under particular scenarios, the problem boils down to a maximum clique problem. Due to the complexity of discovering such a maximum clique, the paper presents a fast selection algorithm. Simulation results illustrate the performance of the proposed schemes and suggest that the proposed fast selection algorithm provides an appreciable complexity gain compared to the optimal selection algorithm, with negligible degradation in performance. In addition, they indicate that the running time of the proposed solution is close to that of the random selection algorithm.
Abstract
By virtue of large antenna arrays, massive MIMO systems have the potential to yield higher spectral and energy efficiency in comparison with conventional MIMO systems. This paper addresses uplink channel estimation in massive MIMO-OFDM systems with frequency-selective channels. We propose an efficient distributed minimum mean square error (MMSE) algorithm that can achieve near-optimal channel estimates at low complexity by exploiting the strong spatial correlation among antenna array elements. The proposed method involves solving a reduced-dimension MMSE problem at each antenna, followed by a repetitive sharing of information through collaboration among neighboring array elements. To further enhance the channel estimates and/or reduce the number of reserved pilot tones, we propose a data-aided estimation technique that relies on finding a set of most reliable data carriers. Furthermore, we use stochastic geometry to quantify the pilot contamination, and in turn use this information to analyze its effect on the channel MSE. The simulation results validate our analysis and show near-optimal performance of the proposed estimation algorithms.
Abstract
In this paper, we present a fast Fourier transform algorithm over extension binary fields, where the polynomial is represented in a non-standard basis. The proposed Fourier-like transform requires O(h lg(h)) field operations, where h is the number of evaluation points. Based on the proposed Fourier-like algorithm, we then develop encoding/decoding algorithms for (n = 2^m, k) Reed-Solomon erasure codes. The proposed encoding/erasure-decoding algorithms require O(n lg(n)) operations, in both additive and multiplicative complexities. As the leading factor of the complexity is small, the proposed algorithms are advantageous in practical applications. Finally, approaches to convert between the monomial basis and the new basis are proposed.
Abstract
This paper considers the multicast decoding delay reduction problem for generalized instantly decodable network coding (G-IDNC) over persistent erasure channels with feedback imperfections. The feedback scenario discussed is the most general situation, in which the sender does not always receive acknowledgments from the receivers after each transmission and the feedback communications are subject to loss. The decoding delay increment expressions are derived and employed to express the decoding delay reduction problem as a maximum weight clique problem in the G-IDNC graph. This paper provides a theoretical analysis of the expected decoding delay increase at each time instant. Problem formulations in simpler channel and feedback models are shown to be special cases of the proposed generalized formulation. Since finding the optimal solution to the problem is known to be NP-hard, a suboptimal greedy algorithm is designed and compared with blind approaches proposed in the literature. Through extensive simulations, the proposed algorithm is shown to outperform the blind methods in all situations and to achieve significant improvement, particularly for highly time-correlated channels.
Abstract
While network densification is considered an important solution to cater to the ever-increasing capacity demand, its effect on the handover (HO) rate is often overlooked. In dense 5G networks, HO delays may neutralize or even negate the gains offered by network densification. Hence, user mobility imposes a nontrivial challenge to harvesting capacity gains via network densification. In this paper, we propose a velocity-aware HO management scheme for a two-tier downlink cellular network to mitigate the HO effect on the foreseen densification throughput gains. The proposed HO scheme sacrifices best base station (BS) connectivity, by skipping HOs to some BSs along the user trajectory, to maintain longer connection durations and reduce HO rates. Furthermore, the proposed scheme enables cooperative BS service and strongest-interference cancellation to compensate for skipping the best connectivity. To this end, we consider different HO skipping scenarios and develop a velocity-aware mathematical model, via stochastic geometry, to quantify the performance of the proposed HO schemes in terms of coverage probability and user throughput. The results highlight the HO rate problem in dense cellular environments and show the importance of the proposed HO schemes. Finally, the value of BS cooperation along with handover skipping is quantified for different user mobility profiles.
Abstract
Base station densification is increasingly used by network operators to provide better throughput and coverage to mobile subscribers in dense data-traffic areas. Such densification is progressively driving the move from traditional macrocell base stations toward heterogeneous networks with diverse cell sizes (e.g., microcell, picocell, femtocell) and diverse radio access technologies (e.g., GSM, CDMA, and LTE). The coexistence of the different network entities brings an additional set of challenges, particularly in terms of provisioning high-speed communications and managing wireless interference. Resource sharing between different entities, largely incompatible in conventional systems due to the lack of interconnections, becomes a necessity. By connecting all the base stations from different tiers to a central processor (referred to as the cloud) through wireless/wireline backhaul links, the heterogeneous cloud radio access network (H-CRAN) provides an open, simple, controllable, and flexible paradigm for resource allocation. This article discusses challenges and recent developments in H-CRAN design. It proposes promising resource allocation schemes for H-CRANs: coordinated scheduling, hybrid backhauling, and multicloud association. Simulation results show how the proposed strategies provide appreciable performance improvement compared to methods from the recent literature.
Abstract
This letter presents a novel approach for evaluating the mean behavior of the well-known normalized least mean squares (NLMS) adaptive algorithm for a circularly correlated Gaussian input. The mean analysis of the NLMS algorithm requires the calculation of certain normalized moments of the input. This is done by first expressing these moments as ratios of quadratic forms in spherically symmetric random variables and finding the cumulative distribution function (CDF) of these variables. The CDF is then used to calculate the required moments. As a result, we obtain explicit expressions for the mean behavior of the NLMS algorithm.
Abstract
This work establishes the design, analysis, and fine-tuning of a peak-to-average-power-ratio (PAPR) reducing system, based on compressed sensing (CS) at the receiver of a peak-reducing sparse clipper applied to an orthogonal frequency-division multiplexing (OFDM) signal at the transmitter. By exploiting the sparsity of clipping events in the time domain relative to a predefined clipping threshold, the method depends on partially observing the frequency content of the clipping distortion over reserved tones to estimate the remaining distortion.
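The premise that clipping events are sparse in time is easy to verify: for a Gaussian-like OFDM time-domain signal, samples exceeding a threshold a few dB above the mean power are rare. A small sketch (QPSK subcarriers; the carrier count and threshold are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(6)
Nc = 256                                       # number of OFDM subcarriers
# QPSK frequency-domain symbols
X = (rng.integers(0, 2, Nc) * 2 - 1) + 1j * (rng.integers(0, 2, Nc) * 2 - 1)
x = np.fft.ifft(X) * np.sqrt(Nc)               # time-domain OFDM symbol
thr = 2.0 * np.sqrt(np.mean(np.abs(x) ** 2))   # clip ~6 dB above mean power
mag = np.abs(x)
x_clip = np.where(mag > thr, thr * x / mag, x) # magnitude clipper
c = x_clip - x                                 # clipping distortion signal
sparsity = np.count_nonzero(c) / Nc            # fraction of clipped samples
```

The distortion `c` is nonzero only at the few clipped samples, which is what makes it recoverable by CS from a handful of reserved frequency-domain tones.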
Abstract
It has been shown recently that dirty paper coding (DPC) achieves the optimum sum-rate capacity in a multi-antenna broadcast channel with full channel state information (CSI) at the transmitter. With only partial feedback, random beamforming (RBF) is able to match the sum-rate of DPC for a large number of users. However, in the presence of spatial correlation, RBF incurs an SNR hit compared to DPC. In this letter, we explore precoding techniques to reduce the effect of correlation on RBF. We derive the optimum precoding matrix that minimizes the rate gap between DPC and RBF. Given the numerical complexity involved in calculating the optimum precoder, we also derive approximate precoding matrices that are simple to calculate and close in performance to the optimum precoder.
Abstract
This paper presents an exact mean-square analysis of the $\varepsilon$-NLMS algorithm for circular complex correlated Gaussian input. The analysis is based on the derivation of a closed-form expression for the cumulative distribution function (CDF) of random variables of the form $\|u_i\|_{D_1}^2/(\varepsilon + \|u_i\|_{D_1}^2)$, and on using it to derive the first and second moments of such variables. These moments in turn completely characterize the mean-square (MS) behavior of the $\varepsilon$-NLMS algorithm in explicit closed-form expressions. Both transient and steady-state behavior are analyzed. Consequently, new explicit closed-form expressions for the mean-square-error (MSE) behavior are derived. Our simulations of the transient and steady-state behavior of the filter match the expressions obtained theoretically for various degrees of input correlation and for various values of $\varepsilon$.
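For reference, the $\varepsilon$-NLMS recursion whose moments the analysis characterizes is simply the step-normalized update below (a real-valued sketch with an AR(1)-correlated input; the paper treats the circular complex case):

```python
import numpy as np

rng = np.random.default_rng(4)
N, M = 3000, 10
w_true = rng.standard_normal(M)
# Correlated Gaussian input via a first-order AR process
x = np.zeros(N + M)
for i in range(1, N + M):
    x[i] = 0.8 * x[i - 1] + rng.standard_normal()

w = np.zeros(M)
mu, eps = 0.5, 1e-3
for i in range(M, N + M):
    u = x[i - M:i][::-1]                   # regressor (tapped delay line)
    d = u @ w_true + 0.01 * rng.standard_normal()
    e = d - u @ w
    # epsilon-NLMS update: step normalized by regularized input energy
    w = w + mu * e * u / (eps + u @ u)
msd = np.mean((w - w_true) ** 2)           # mean-square weight deviation
```

The normalized quantity $\|u_i\|^2/(\varepsilon + \|u_i\|^2)$ appearing in the denominator is exactly the random variable whose CDF the paper derives in closed form.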
Abstract
Cellular networks have preserved an application-agnostic and base station (BS) centric architecture for decades. Network functionalities (e.g., user association) are decided and performed regardless of the underlying application (e.g., automation, tactile Internet, online gaming, multimedia). Such an ossified architecture imposes several hurdles against achieving the ambitious metrics of next-generation cellular systems. This article first highlights the features and drawbacks of such architectural ossification. It then proposes a virtualized and cognitive network architecture, wherein network functionalities are implemented via software instances in the cloud and the underlying architecture can adapt to the application of interest as well as to changes in channel and traffic conditions. The adaptation is done in terms of the network topology, by manipulating connectivities and steering traffic via different paths, so as to attain the applications' requirements and the network design objectives. The article presents cognitive strategies to implement some of the classical network functionalities, along with their related implementation challenges. The article further presents a case study illustrating the performance improvement of the proposed architecture compared to conventional cellular networks, in terms of both outage probability and handover rate.
Abstract
In this work, we propose a new regularization approach for linear least-squares problems with random matrices. In the proposed constrained perturbation regularization approach, an artificial perturbation matrix with a bounded norm is forced into the system model matrix. This perturbation is introduced to improve the singular-value structure of the model matrix and, hence, the solution of the estimation problem. Relying on the randomness of the model matrix, a number of deterministic equivalents from random matrix theory are applied to derive the near-optimum regularizer that minimizes the mean-squared error of the estimator. Simulation results demonstrate that the proposed approach outperforms a set of benchmark regularization methods for various estimated-signal characteristics. In addition, simulations show that our approach is robust in the presence of model uncertainty.
Abstract
Network densification is foreseen as a potential solution to fulfill the 5G spectral efficiency requirements. The spectral efficiency is improved by shrinking the base stations' (BSs') footprints, thus improving spatial frequency reuse and reducing the number of users sharing the resources of each BS. However, the foreseen densification gains are achieved at the expense of increased handover (HO) rates. Hence, the HO rate is a key performance-limiting factor that should be carefully considered in densification planning. This paper sheds light on the HO problem that arises in dense 5G networks and proposes an effective solution via topology-aware HO skipping. Different skipping techniques are considered and compared with the conventional best-connected scheme. To this end, the proposed schemes are validated via the average user rate in downlink single-tier and two-tier cellular networks, which are modeled using the Poisson point process and the Poisson cluster process, respectively. The proposed skipping schemes show gains of up to 47% in the average throughput, which would maximize the benefit of network densification.
Abstract
Relay selection is a simple technique that achieves spatial diversity in cooperative relay networks. Generally, relay selection algorithms require channel state information (CSI) feedback from all cooperating relays in order to make a selection decision. This requirement poses two important challenges, which are often neglected in the literature. First, the fed-back channel information is usually corrupted by additive noise. Second, CSI feedback generates a great deal of feedback overhead (air-time) that can result in significant performance hits. In this paper, we propose a compressive sensing (CS) based relay selection algorithm that reduces the feedback overhead of relay networks under the assumption of noisy feedback channels. The proposed algorithm exploits CS to first obtain the identity of a set of relays with favorable channel conditions. Following that, the CSI of the identified relays is estimated using least-squares estimation without any additional feedback. Both single and multiple relay selection cases are considered. After deriving closed-form expressions for the asymptotic end-to-end SNR at the destination and for the feedback load under different relaying protocols, we show that CS-based selection drastically reduces the feedback load and achieves a rate close to that obtained by selection algorithms with dedicated error-free feedback.
Abstract
In practical wireless systems, the successful implementation of resource allocation techniques strongly depends on the algorithmic complexity. Consider a cloud-radio access network (CRAN), where the central cloud is responsible for scheduling devices to the frames' radio resource blocks (RRBs) of the single-antenna base stations (BSs), adjusting the transmit power levels, and synchronizing the transmit frames across the connected BSs. Previous studies show that the jointly coordinated scheduling and power control problem in the considered CRAN can be solved using an approach that scales exponentially with the number of BSs, devices, and RRBs, which makes the practical implementation infeasible for reasonably sized networks. This letter instead proposes a low-complexity solution to the problem, under the constraints that each device cannot be served by more than one BS but can be served by multiple RRBs within each BS frame, and under the practical assumption that the channel is constant during the duration of each frame. The paper utilizes graph-theoretical techniques and shows that constructing a single power control graph is sufficient to obtain the optimal solution with a complexity that is independent of the number of RRBs. Simulation results reveal the optimality of the proposed solution for slow-varying channels, and show that the solution performs near-optimally for highly correlated channels.
Abstract
In this work, we propose a unified approach to evaluating the CDF and PDF of indefinite quadratic forms in Gaussian random variables. Such a quantity appears in many applications in communications, signal processing, information theory, and adaptive filtering. For example, this quantity appears in the mean-square-error (MSE) analysis of the normalized least-mean-square (NLMS) adaptive algorithm, and in the SINR associated with each beam in beamforming applications. The trick of the proposed approach is to replace the inequalities that appear in the CDF calculation with unit step functions and to use the complex integral representation of the unit step function. Complex integration then allows us to evaluate the CDF in closed form for the zero-mean case and as a one-dimensional integral for the non-zero-mean case. Utilizing the saddle point technique allows us to closely approximate such integrals in the non-zero-mean case. We demonstrate how our approach can be extended to other scenarios, such as the joint distribution of quadratic forms and ratios of such forms, and to characterize quadratic forms in isotropically distributed random variables. We also evaluate the outage probability in multiuser beamforming using our approach, to provide an application of indefinite forms in communications.
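As a sanity check on the quantity being characterized, the CDF of an indefinite quadratic form in Gaussian variables can be estimated by plain Monte Carlo. This sketch illustrates only the target quantity P(x^T A x ≤ t); the paper's contribution is evaluating it in closed form via contour integration, which is not reproduced here. The matrix `A` and the thresholds are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def quad_form_cdf_mc(A, t, n_samples=200_000):
    """Monte Carlo estimate of P(x^T A x <= t) for x ~ N(0, I).

    A may be indefinite; this is the CDF that the closed-form /
    saddle-point machinery in the paper evaluates analytically.
    """
    n = A.shape[0]
    x = rng.standard_normal((n_samples, n))
    q = np.einsum('ij,jk,ik->i', x, A, x)  # x^T A x, one value per sample
    return float(np.mean(q <= t))

# Illustrative indefinite matrix (eigenvalues of mixed sign).
A = np.diag([2.0, 1.0, -1.0])
p = quad_form_cdf_mc(A, 0.0)
print(0.0 < p < 1.0)
```

Because `A` is indefinite, the form takes both signs, so the CDF at zero is strictly between 0 and 1 and increases with the threshold, as the closed-form expressions also predict.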
Abstract
In this paper, we study the multicast completion and decoding delay minimization problems for instantly decodable network coding (IDNC) in the case of lossy feedback. When feedback loss events occur, the sender falls into uncertainties about packet reception at the different receivers, which forces it to perform partially blind selections of packet combinations in subsequent transmissions. To determine efficient selection policies that reduce the completion and decoding delays of IDNC in such an environment, we first extend the perfect-feedback formulation in our previous works to the lossy feedback environment, by incorporating the uncertainties resulting from unheard feedback events in these formulations. For the completion delay problem, we use this formulation to identify the maximum likelihood state of the network in events of unheard feedback and employ it to design a partially blind graph update extension to the multicast IDNC algorithm in our earlier work. For the decoding delay problem, we derive an expression for the expected decoding delay increment for any arbitrary transmission. This expression is then used to find the optimal policy that reduces the decoding delay in such a lossy feedback environment. Results show that our proposed solutions both outperform previously proposed approaches and achieve tolerable degradation even at relatively high feedback loss rates.
Abstract
Millimeter wave (mmWave) vehicular communication systems will provide an abundance of bandwidth for the exchange of raw sensor data and support driver-assisted and safety-related functionalities. Lack of secure communication links, however, may lead to abuses and attacks that jeopardize the efficiency of transportation systems and the physical safety of drivers. In this paper, we propose two physical layer (PHY) security techniques for vehicular mmWave communication systems. The first technique uses multiple antennas with a single radio-frequency (RF) chain to transmit information symbols to a target receiver and noise-like signals in non-receiver directions. The second technique uses multiple antennas with a few RF chains to transmit information symbols to a target receiver and opportunistically inject artificial noise in controlled directions, thereby reducing interference in vehicular environments. Theoretical and numerical results show that the proposed techniques provide a higher secrecy rate when compared to traditional PHY security techniques that require digital or more complex antenna architectures.
Abstract
This paper considers the downlink of a cognitive radio (CR) network formed by multiple primary and secondary transmitters, where each multi-antenna transmitter serves a pre-known set of single-antenna users. This paper assumes that the secondary and primary transmitters can simultaneously transmit their data over the same frequency bands to achieve high system spectrum efficiency. This paper considers the downlink balancing problem of maximizing the minimum signal-to-interference-plus-noise ratio (SINR) of the secondary transmitters subject to both the total power constraint of the secondary transmitters and the maximum interference constraint at each primary user due to secondary transmissions. This paper proposes solving the problem using the alternating direction method of multipliers, which leads to a distributed implementation through limited information exchange across the coupled secondary transmitters. This paper additionally proposes a solution that guarantees feasibility at each iteration. Simulation results demonstrate that the proposed solution converges to the centralized solution in a reasonable number of iterations.
Abstract
This paper addresses the design of adaptive subspace matched filter (ASMF) detectors in the presence of a mismatch in the steering vector. These detectors are coined as adaptive in reference to the step of utilizing an estimate of the clutter covariance matrix using training data of signal-free observations. To estimate the clutter covariance matrix, we employ regularized covariance estimators that, by construction, force the eigenvalues of the covariance estimates to be greater than a positive scalar ρ. While this feature is likely to increase the bias of the covariance estimate, it presents the advantage of improving its conditioning, thus making the regularization suitable for handling high-dimensional regimes. In this paper, we consider the setting of the regularization parameter and the threshold for ASMF detectors in both Gaussian and compound-Gaussian clutter. In order to allow for a proper selection of these parameters, it is essential to analyze the false alarm and detection probabilities. For tractability, such a task is carried out under the asymptotic regime in which the number of observations and their dimensions grow simultaneously large, thereby allowing us to leverage existing results from random matrix theory. Simulation results are provided in order to illustrate the relevance of the proposed design strategy and to compare the performance of the proposed ASMF detectors versus adaptive normalized matched filter (ANMF) detectors under mismatch scenarios.
Abstract
The problem of minimizing the decoding delay in generalized instantly decodable network coding (G-IDNC) for both perfect and lossy feedback scenarios was formulated in prior work as a maximum weight clique problem over the G-IDNC graph. In this letter, we introduce a new lossy G-IDNC graph (LG-IDNC) model to further minimize the decoding delay in lossy feedback scenarios. Whereas the G-IDNC graph represents only doubtless combinable packets, the LG-IDNC graph also represents uncertain packet combinations, arising from lossy feedback events, when the expected decoding delay of XORing them among themselves or with other certain packets is lower than that expected when sending these packets separately. We compare the decoding delay performance of LG-IDNC and G-IDNC graphs through extensive simulations. Numerical results show that our new LG-IDNC graph formulation outperforms the G-IDNC graph formulation in all lossy feedback situations and achieves significant improvement in the decoding delay, especially when the feedback erasure probability is higher than the packet erasure probability.
Abstract
We consider wireless sensor networks (WSNs) used for distributed estimation of unknown parameters. Due to the limited bandwidth, sensor nodes quantize their noisy observations before transmission to a fusion center (FC) for the estimation process. In this letter, the correlation between observations is exploited to reduce the mean-square error (MSE) of the distributed estimation. Specifically, sensor nodes generate local predictions of their observations and then transmit the quantized prediction errors (innovations) to the FC rather than the quantized observations. The analytic and numerical results show that transmitting the innovations rather than the observations mitigates the effect of quantization noise and hence reduces the MSE.
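The intuition, that the prediction error has a much smaller dynamic range than the observation itself, so a fixed bit budget quantizes it more finely, can be illustrated with a toy sketch. The AR(1) observation model, the 4-bit uniform quantizer, and the quantizer ranges below are illustrative assumptions, not the letter's exact setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def quantize(x, bits, rng_max):
    """Uniform mid-tread quantizer with `bits` bits covering [-rng_max, rng_max]."""
    step = 2.0 * rng_max / (2 ** bits)
    return np.clip(step * np.round(x / step), -rng_max, rng_max)

# Toy setup: a unit-variance AR(1) observation process; each node predicts
# the current sample from the previous reconstruction and quantizes only
# the innovation (prediction error).
n, rho, bits = 100_000, 0.95, 4
w = rng.standard_normal(n)
x = np.empty(n); x[0] = w[0]
for k in range(1, n):
    x[k] = rho * x[k - 1] + np.sqrt(1.0 - rho**2) * w[k]

# Scheme 1: quantize raw observations (range ~ 4 standard deviations of x).
x_obs = quantize(x, bits, 4.0)

# Scheme 2: quantize innovations; the fusion center reconstructs recursively.
# The innovation is much smaller than x, so the same bit budget covers a
# tighter range, i.e. a finer quantization step.
innov_rng = 4.0 * np.sqrt(1.0 - rho**2)
x_rec = np.empty(n); x_rec[0] = quantize(x[0], bits, 4.0)
for k in range(1, n):
    innov = x[k] - rho * x_rec[k - 1]
    x_rec[k] = rho * x_rec[k - 1] + quantize(innov, bits, innov_rng)

mse_obs = np.mean((x - x_obs) ** 2)
mse_innov = np.mean((x - x_rec) ** 2)
print(mse_innov < mse_obs)
```

With strongly correlated observations (rho close to 1) the innovation variance is roughly (1 − rho²) times the observation variance, so the innovation scheme's quantization noise is an order of magnitude smaller at the same bit rate.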
Abstract
Estimating the locations and the structures of subsurface channels holds significant importance for forecasting the subsurface flow and reservoir productivity. These channels exhibit high permeability and are easily contrasted with the low-permeability rock formations in their surroundings. This enables formulating the flow channel estimation problem as a sparse field recovery problem. The ensemble Kalman filter (EnKF) is a widely used technique for the estimation and calibration of subsurface reservoir model parameters, such as permeability. However, the conventional EnKF framework does not provide an efficient mechanism to incorporate prior information on the wide varieties of subsurface geological structures, and often fails to recover and preserve flow channel structures. Recent works in the area of compressed sensing (CS) have shown that estimating in a sparse domain, using algorithms such as orthogonal matching pursuit (OMP), may significantly improve the estimation quality when dealing with such problems. We propose two new, computationally efficient algorithms combining OMP with the EnKF to improve the estimation and recovery of the subsurface geological channels. Numerical experiments suggest that the proposed algorithms provide efficient mechanisms to incorporate and preserve structural information in the EnKF and result in significant improvements in recovering flow channel structures.
Abstract
Recently, a new polynomial basis over binary extension fields was proposed, such that the fast Fourier transform (FFT) over such fields can be computed with complexity of order O(n lg(n)), where n is the number of points evaluated in the FFT. In this paper, we reformulate this FFT algorithm so that it can be more easily understood and extended to develop frequency-domain decoding algorithms for (n = 2^m, k) systematic Reed-Solomon (RS) codes over F_{2^m}, m ∈ Z+, with n − k a power of two. First, the basis of syndrome polynomials is reformulated in the decoding procedure so that the new transforms can be applied to the decoding procedure. A fast extended Euclidean algorithm is developed to determine the error locator polynomial. The computational complexity of the proposed decoding algorithm is O(n lg(n − k) + (n − k) lg²(n − k)), improving upon the best currently available decoding complexity O(n lg²(n) lg lg(n)), and reaching the best known complexity bound established by Justesen in 1976. However, Justesen's approach applies only to codes over certain specific fields that admit Cooley-Tukey FFTs. As revealed by the computer simulations, the proposed decoding algorithm is 50 times faster than the conventional one for the (2^16, 2^15) RS code over F_{2^16}.
Abstract
This paper addresses the problem of reducing the delivery time of data messages to cellular users using instantly decodable network coding (IDNC) with physical-layer rate awareness. While most of the existing literature on IDNC does not consider any physical layer complications and abstracts the model as equally slotted time for all users, this paper proposes a cross-layer scheme that incorporates the different channel rates of the various users in the decision process of both the transmitted message combinations and the rates with which they are transmitted. The consideration of asymmetric rates for receivers reflects more practical application scenarios and introduces a new tradeoff between the choice of coding combinations for various receivers and the broadcasting rates. The completion time minimization problem in such a scenario is first shown to be intractable. The problem is thus approximated by reducing, at each transmission, the increase of an anticipated version of the completion time. This paper solves the problem by formulating it as a maximum weight clique problem over a newly designed rate-aware IDNC graph. Since the highest-weight clique in the created graph is potentially not unique, this paper further suggests a multi-layer version of the proposed solution to improve the results obtained from the employed completion time approximation. Simulation results indicate that the cross-layer design largely outperforms the uncoded transmission strategies and the classical IDNC scheme.
Abstract
Subsurface reservoir flow channels are characterized by high permeability values and serve as preferred pathways for fluid propagation. Accurate estimation of their geophysical structures is thus of great importance for the oil industry. The ensemble Kalman filter (EnKF) is a widely used statistical technique for estimating subsurface reservoir model parameters. However, accurate reconstruction of the subsurface geological features with the EnKF is challenging because of the limited measurements available from the wells and the smoothing effects imposed by the ℓ2-norm nature of its update step. A new EnKF scheme based on sparse domain representation was introduced by Sana et al. (2015) to incorporate useful prior structural information in the estimation process for efficient recovery of subsurface channels. In this paper, we extend this work in two ways: 1) we investigate the effects of incorporating time-lapse seismic data on the channel reconstruction; and 2) we explore a Bayesian sparse reconstruction algorithm with the potential ability to reduce the computational requirements. Numerical results suggest that the performance of the new sparse Bayesian EnKF scheme is enhanced with the availability of seismic measurements, leading to further improvement in the recovery of flow channel structures. The sparse Bayesian approach further provides a computationally efficient framework for enforcing a sparse solution, especially with the possibility of using high sparsity rates through the inclusion of seismic data.
Abstract
This paper addresses the problem of channel impulse response estimation for cluster-sparse channels under the Bayesian estimation framework. We develop a novel low-complexity minimum mean squared error (MMSE) estimator by exploiting the sparsity of the received signal profile and the structure of the measurement matrix. It is shown that, due to the banded Toeplitz/circulant structure of the measurement matrix, a channel impulse response, such as an underwater acoustic channel impulse response, can be partitioned into a number of orthogonal or approximately orthogonal clusters. The orthogonal clusters, the sparsity of the channel impulse response, and the structure of the measurement matrix, all combined, result in a computationally superior realization of the MMSE channel estimator. The MMSE estimator calculations boil down to simpler in-cluster calculations that can be reused in different clusters. The reduction in computational complexity allows for a more accurate implementation of the MMSE estimator. The proposed approach is tested using synthetic Gaussian channels, as well as simulated underwater acoustic channels. Symbol error rate performance and computation time confirm the superiority of the proposed method compared to selected benchmark methods in systems with preamble-based training signals transmitted over cluster-sparse channels.
Abstract
This paper considers the effect of spatial correlation between transmit antennas on the sum-rate capacity of the MIMO Gaussian broadcast channel (i.e., the downlink of a cellular system). Specifically, for a system with a large number of users n, we analyze the scaling laws of the sum-rate for dirty paper coding and for different types of beamforming transmission schemes. When the channel is i.i.d., it has been shown that for large n, the sum rate is equal to M log log n + M log(P/M) + o(1), where M is the number of transmit antennas, P is the average signal-to-noise ratio, and o(1) refers to terms that go to zero as n → ∞. When the channel exhibits some spatial correlation with a covariance matrix R (non-singular with tr(R) = M), we prove that the sum rate of dirty paper coding is M log log n + M log(P/M) + log det(R) + o(1). We further show that the sum rate of various beamforming schemes achieves M log log n + M log(P/M) + M log c + o(1), where c ≤ 1 depends on the type of beamforming. We can in fact compute c for the random beamforming proposed in prior work and, more generally, for random beamforming with precoding in which beams are pre-multiplied by a fixed matrix. Simulation results are presented at the end of the paper.
Abstract
OFDM systems typically use coding and interleaving across subchannels to exploit frequency diversity on frequency-selective channels. This letter presents a low-complexity iterative algorithm for blind and semi-blind joint channel estimation and soft decoding in coded OFDM systems. The proposed algorithm takes advantage of the channel's finite delay-spread constraint and the extra observation offered by the cyclic prefix. It converges within a single OFDM symbol and, therefore, has minimum latency.
Abstract
In this paper, a low-sampling-rate scheme for ultra-wideband channel estimation is proposed. The scheme exploits multiple observations generated by transmitting multiple pulses. In the proposed scheme, $P$ pulses are transmitted to produce channel impulse response estimates at a desired sampling rate, while the ADC samples at a rate that is $P$ times slower. To avoid loss of fidelity, the number of sampling periods (based on the desired rate) in the inter-pulse interval is restricted to be co-prime with $P$. This condition is affected when clock drift is present and the transmitted pulse locations change. To handle this case, and to achieve overall good channel estimation performance without using prior information, we derive an improved estimator based on the bounded data uncertainty (BDU) model. It is shown that this estimator is related to the Bayesian linear minimum mean squared error (LMMSE) estimator. The channel estimation performance of the proposed sub-sampling scheme combined with the new estimator is assessed in simulation. The results show that a high reduction in sampling rate can be achieved. The proposed estimator outperforms the least squares estimator in almost all cases, while in the high SNR regime it also outperforms the LMMSE estimator. In addition to channel estimation, a synchronization method is also proposed that utilizes the same pulse sequence used for channel estimation.
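The co-prime condition can be checked in a few lines: if the inter-pulse interval spans d desired-rate sampling periods, the slow-rate samples taken across P pulses land on phases k·d mod P, which cover all P sub-sample offsets exactly when gcd(d, P) = 1. A small sketch (the values of P and d are hypothetical):

```python
from math import gcd

def sample_phases(P, d):
    """Phases (mod P) visited by the slow-rate samples across P pulses,
    when the inter-pulse interval spans d desired-rate sampling periods.
    All P phases are visited iff gcd(d, P) == 1 (co-prime condition)."""
    return sorted({(k * d) % P for k in range(P)})

P = 5
print(sample_phases(P, 3), gcd(3, P))   # co-prime spacing: every phase is hit
print(sample_phases(P, 10), gcd(10, P)) # non-co-prime: phases collapse
```

When the phases collapse, the interleaved observations no longer resolve the channel at the desired rate, which is why clock drift violating the condition motivates the BDU-based estimator described in the abstract.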
Abstract
This paper proposes a low-complexity algorithm for blind equalization of data in orthogonal frequency division multiplexing (OFDM)-based wireless systems with general constellations. The proposed algorithm is able to recover the transmitted data even when the channel changes on a symbol-by-symbol basis, making it suitable for fast fading channels. The proposed algorithm does not require any statistical information about the channel and thus does not suffer from the latency normally associated with blind methods. The paper demonstrates how to reduce the complexity of the algorithm, which becomes especially low at high signal-to-noise ratio (SNR). Specifically, it is shown that in the high SNR regime, the number of operations is of the order O(LN), where L is the cyclic prefix length and N is the total number of subcarriers. Simulation results confirm the favorable performance of the proposed algorithm.
Abstract
This paper addresses the joint coordinated scheduling and power control problem in cloud-enabled networks. Consider the downlink of a cloud-radio access network (CRAN), where the cloud is only responsible for the scheduling policy, power control, and synchronization of the transmit frames across the single-antenna base stations (BSs). The transmit frame consists of several time/frequency blocks, called power-zones (PZs). The paper considers the problem of scheduling users to PZs and determining their power levels (PLs), by maximizing the weighted sum-rate under the practical constraints that each user cannot be served by more than one base station, but can be served by one or more power-zones within each base-station frame. The paper solves the problem using a graph-theoretical approach by introducing the joint scheduling and power control graph formed by several clusters, where each cluster is formed by a set of vertices representing the possible associations of users, BSs, and PLs for one specific PZ. The problem is then formulated as a maximum-weight clique problem, in which the weight of each vertex is the sum of the benefits of the individual associations belonging to that vertex. Simulation results suggest that the proposed cross-layer scheme provides appreciable performance improvement as compared to schemes from recent literature.
Abstract
This letter focuses on the computation of the positive moments of one-sided correlated random Gram matrices. Closed-form expressions for the moments can be obtained easily, but their numerical evaluation is prone to numerical instability, especially in high-dimensional settings. This letter provides a numerically stable method that efficiently computes the positive moments in closed form. The developed expressions are more accurate and can lead to higher accuracy levels when fed to moment-based approaches. As an application, we show how the obtained moments can be used to approximate the marginal distribution of the eigenvalues of random Gram matrices.
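For intuition, the positive moments in question, of the form (1/p) E[tr((XXᴴ/n)ᵏ)] with X = R^{1/2}W and W i.i.d. Gaussian, can always be cross-checked empirically; in the white case R = I they approach the Marchenko-Pastur moments (m₁ = 1, m₂ = 1 + p/n). A small Monte Carlo sketch, with illustrative dimensions and trial count:

```python
import numpy as np

rng = np.random.default_rng(2)

def empirical_moments(R, n, n_mom=3, trials=50):
    """Average positive moments (1/p) E[tr((X X^H / n)^k)], k = 1..n_mom,
    of a one-sided correlated Gram matrix, X = R^{1/2} W, W i.i.d. CN(0,1)."""
    p = R.shape[0]
    Rh = np.linalg.cholesky(R)          # R^{1/2} via Cholesky factor
    mom = np.zeros(n_mom)
    for _ in range(trials):
        W = (rng.standard_normal((p, n))
             + 1j * rng.standard_normal((p, n))) / np.sqrt(2.0)
        X = Rh @ W
        ev = np.linalg.eigvalsh(X @ X.conj().T / n)   # real eigenvalues
        mom += [np.mean(ev ** (k + 1)) for k in range(n_mom)]
    return mom / trials

# White case (R = I): limiting moments are Marchenko-Pastur with c = p/n,
# i.e. m1 = 1 and m2 = 1 + c.
p, n = 64, 256
m = empirical_moments(np.eye(p), n)
print(np.round(m[:2], 2))
```

Such empirical estimates are what the letter's numerically stable closed-form expressions replace; the Monte Carlo route becomes both slow and noisy precisely in the high-dimensional settings the letter targets.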
Abstract
Sparse signal reconstruction algorithms have attracted research attention due to their wide applications in various fields. In this paper, we present a simple Bayesian approach that utilizes the sparsity constraint and a priori statistical information (Gaussian or otherwise) to obtain near-optimal estimates. In addition, we make use of the rich structure of the sensing matrix encountered in many signal processing applications to develop a fast sparse recovery algorithm. The computational complexity of the proposed algorithm is very low compared with the widely used convex relaxation methods as well as greedy matching pursuit techniques, especially at high sparsity.
Abstract
This paper addresses the problem of estimating sparse channels in massive MIMO-OFDM systems. Most wireless channels are sparse in nature with large delay spread. In addition, these channels as observed by multiple antennas in a neighborhood have approximately common support. The sparsity and common support properties are attractive when it comes to the efficient estimation of a large number of channels in massive MIMO systems. Moreover, to avoid pilot contamination and to achieve better spectral efficiency, it is important to use a small number of pilots. We present a novel channel estimation approach which utilizes the sparsity and common support properties to estimate sparse channels and requires a small number of pilots. Two algorithms based on this approach have been developed that perform Bayesian estimates of sparse channels even when the prior is non-Gaussian or unknown. Neighboring antennas share among each other their beliefs about the locations of active channel taps to perform estimation. The coordinated approach improves channel estimates and also reduces the required number of pilots. Further improvement is achieved by the data-aided version of the algorithm. Extensive simulation results are provided to demonstrate the performance of the proposed algorithms.
Abstract
Stochastic geometry analysis for cellular networks is mostly limited to outage probability and ergodic rate, which abstract away many important wireless communication aspects. Recently, a novel technique based on the equivalent-in-distribution (EiD) approach was proposed to extend the analysis to capture these metrics and analyze bit error probability (BEP) and symbol error probability (SEP). However, the EiD approach considerably increases the complexity of the analysis. In this letter, we propose an approximate yet accurate framework that is also able to capture fine wireless communication details similar to the EiD approach, but with simpler analysis. The proposed methodology is verified against the exact EiD analysis in both downlink and uplink cellular network scenarios.
Abstract
This paper investigates the delay minimization problem for instantly decodable network coding (IDNC) based device-to-device (D2D) communications. In D2D-enabled systems, users cooperate to recover all their missing packets. The paper proposes a game-theoretic framework as a tool for improving the distributed solution by overcoming the need for a central controller or additional signaling in the system. The session is modeled by self-interested players in a non-cooperative potential game. The utility functions are designed so that increasing individual payoffs results in a collective behavior which achieves both a desirable system performance in a shared network environment and the Nash equilibrium. Three games are developed: the first reduces the completion time, the second the maximum decoding delay, and the third the sum decoding delay. The paper further improves the formulations by including a punishment policy upon collision occurrence so as to achieve the Nash bargaining solution. Learning algorithms are proposed for systems with complete and incomplete information, and for the imperfect feedback scenario. Numerical results suggest that the proposed game-theoretical formulation provides appreciable performance gain over the conventional point-to-multipoint (PMP) scheme, especially for reliable user-to-user channels.
Abstract
In multi-antenna broadcast networks, the base stations (BSs) rely on the channel state information (CSI) of the users to perform user scheduling and downlink transmission. However, in networks with a large number of users, obtaining CSI from all users is arduous, if not impossible, in practice. This paper proposes channel feedback reduction techniques based on the theory of compressive sensing (CS), which permits the BS to obtain CSI with acceptable recovery guarantees under substantially reduced feedback overhead. Additionally, assuming noisy CS measurements at the BS, inexpensive ways for improving post-CS detection are explored. The proposed techniques are shown to reduce the feedback overhead, improve CS detection at the BS, and achieve a sum-rate close to that obtained by noiseless dedicated feedback channels.
Abstract
A fast matching pursuit method using a Bayesian approach is introduced for sparse signal recovery. This method performs Bayesian estimates of sparse signals even when the signal prior is non-Gaussian or unknown. It is agnostic to signal statistics and utilizes a priori statistics of the additive noise and the sparsity rate of the signal, which are shown to be easily estimated from data if not available. The method utilizes a greedy approach and order-recursive updates of its metrics to find the most dominant sparse supports to determine the approximate minimum mean-square error (MMSE) estimate of the sparse signal. Simulation results demonstrate the power and robustness of our proposed estimator.
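The greedy, order-recursive skeleton underlying such matching pursuit methods can be sketched with plain orthogonal matching pursuit. Note this sketch uses a least-squares selection metric, whereas the paper's method ranks supports by a Bayesian/approximate-MMSE metric; only the greedy structure is shared:

```python
import numpy as np

rng = np.random.default_rng(3)

def omp(A, y, k):
    """Plain orthogonal matching pursuit: greedily pick the column most
    correlated with the residual, then re-fit the selected support by
    least squares and update the residual."""
    support, residual = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # most correlated atom
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Noiseless toy problem: recover a 3-sparse vector from 40 measurements.
m, n, k = 40, 100, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[[5, 40, 77]] = [1.0, -2.0, 1.5]
x_hat = omp(A, A @ x_true, k)
print(np.allclose(x_hat, x_true, atol=1e-8))
```

In the Bayesian variant described in the abstract, the correlation metric is replaced by a posterior-based score that folds in the noise statistics and sparsity rate, and the order-recursive updates avoid re-solving the least-squares problem from scratch at each step.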
Abstract
This paper develops an approach to the transient analysis of adaptive filters with data normalization. Among other results, the derivation characterizes the transient behavior of such filters in terms of a linear time-invariant state-space model. The stability of the model then translates into the mean-square stability of the adaptive filters. Likewise, the steady-state operation of the model provides information about the mean-square deviation and mean-square error performance of the filters. In addition to deriving earlier results in a unified manner, the approach leads to stability and performance results without restricting the regression data to being Gaussian or white. The framework is based on energy-conservation arguments and does not require an explicit recursion for the covariance matrix of the weight-error vector.
Abstract
The rapid pace of demand for mobile data services and the limited supply of capacity in the current wireless access network infrastructure are leading network operators to increase the density of base station deployments to improve network performance. This densification, made possible by small-cell deployment, also brings a novel set of challenges, specifically related to the cost of ownership, in which backhaul is of primary concern. This article proposes a cost-effective hybrid RF/free-space optical (FSO) solution to combine the advantages of RF backhauls (low cost, NLOS applications) and FSO backhauls (high rate, low latency). To first illustrate the cost advantages of the RF backhaul solution, the first part of this article presents a business case for NLOS wireless RF backhaul, which has a low cost of ownership as compared to other backhaul candidates. RF backhaul, however, is limited by latency problems. On the other hand, an FSO solution, which offers better latency and a higher data rate than RF backhauls, remains sensitive to weather conditions (e.g., rain, fog). To combine the RF and FSO advantages, the second part of this article proposes a low-cost hybrid RF/FSO solution, wherein base stations are connected to each other using either optical fiber or hybrid RF/FSO links. This part addresses the problem of minimizing the cost of backhaul planning under reliability, connectivity, and data rate constraints, and proposes choosing the appropriate cost-effective backhaul connection between BSs (i.e., either OF or hybrid RF/FSO) using graph theory techniques.
Abstract
This paper characterizes the performance metrics of MU-MIMO systems under Rayleigh fading channels in the presence of both co-channel interference and additive noise, with unknown channel state information and known correlation matrices. As a first task, we derive analytical expressions for the cumulative distribution function of the instantaneous signal-to-interference-plus-noise ratio (SINR) for any deterministic beam vectors. As a second task, exact closed-form expressions are derived for the instantaneous capacity, the upper bound on the ergodic capacity, and the Gram-Schmidt orthogonalization-based ergodic capacity for similar intra-cell correlation coefficients. Finally, we present the utility of several structured-diagonalization techniques, which yield tractable approximate solutions for the ergodic capacity for both similar and different intra-cell correlation matrices. The novelty of this paper is to formulate the received SINR in terms of indefinite quadratic forms, which allows us to use complex residue theory to characterize the system behavior. The analytical expressions obtained closely match simulation results.
Abstract
Orthogonal frequency-division multiplexing (OFDM) combines the advantages of high performance and relatively low implementation complexity. However, for reliable coherent detection of the input signal, the OFDM receiver needs accurate channel information. When the channel exhibits fast time variation, as is the case with several recent OFDM-based mobile broadband wireless standards (e.g., WiMAX, LTE, DVB-H), channel estimation at the receiver becomes quite challenging for two main reasons: 1) the receiver needs to perform this estimation more frequently, and 2) channel time variations introduce intercarrier interference among the OFDM subcarriers, which can significantly degrade the performance of conventional channel estimation algorithms. In this paper, we propose a new pilot-aided algorithm for the estimation of fast time-varying channels in OFDM transmission. Unlike many existing OFDM channel estimation algorithms in the literature, we propose to perform channel estimation in the frequency domain, exploit the structure of the channel response (such as frequency and time correlations and bandedness), optimize the pilot group size, and perform most of the computations offline, resulting in high performance at substantial complexity reductions.
Abstract
This letter proposes a highly accurate algorithm to estimate the signal-to-noise ratio (SNR) for a linear system from a single realization of the received signal. We assume that the linear system has a Gaussian matrix with one-sided left correlation. The unknown entries of the signal and the noise are assumed to be independent and identically distributed with zero mean and can be drawn from any distribution. We use the ridge regression function of this linear model in company with tools and techniques adapted from random matrix theory to achieve, in closed form, accurate estimation of the SNR without prior statistical knowledge of the signal or the noise. Simulation results show that the proposed method is very accurate.
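The ridge regression function that serves as the starting point of this estimator can be sketched as follows. This is only the regularized least-squares step; the closed-form random-matrix-theory SNR expression itself is not reproduced here, and the regularizer value and problem sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 50
A = rng.standard_normal((n, p))        # Gaussian system matrix (uncorrelated case)
x = rng.standard_normal(p)             # unknown signal, iid zero-mean entries
snr_true = 10.0                        # linear-scale SNR
noise_var = np.mean((A @ x) ** 2) / snr_true
y = A @ x + np.sqrt(noise_var) * rng.standard_normal(n)

# ridge regression (regularized least squares) for an illustrative gamma;
# the paper derives how statistics of this estimate reveal the SNR
gamma = 1.0
x_ridge = np.linalg.solve(A.T @ A + gamma * np.eye(p), A.T @ y)
residual = y - A @ x_ridge
```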
Abstract
This paper presents a comprehensive performance analysis of full-duplex multiuser relay networks employing opportunistic scheduling with noisy and compressive feedback. Specifically, two feedback techniques based on compressive sensing (CS) theory are introduced and their effect on the system performance is analyzed. The problem of joint user identity and signal-to-noise ratio (SNR) estimation at the base station is cast as a block sparse signal recovery problem in CS. Using existing CS block recovery algorithms, the identity of the strong users is obtained and their corresponding SNRs are estimated using the best linear unbiased estimator (BLUE). To minimize the effect of feedback noise on the estimated SNRs, a backoff strategy that optimally backs off on the noisy estimated SNRs is introduced, and the error covariance matrix of the noise after CS recovery is derived. Finally, closed-form expressions for the end-to-end SNRs of the system are derived. Numerical results show that the proposed techniques drastically reduce the feedback air-time and achieve a rate close to that obtained by scheduling techniques that require dedicated error-free feedback from all network users. Key findings of this paper suggest that the choice of half-duplex or full-duplex SNR feedback depends on the channel coherence interval; at low coherence intervals, full-duplex feedback is superior to interference-free half-duplex feedback.
Abstract
This paper addresses the development of analytical tools for the computation of the inverse moments of random Gram matrices with one-sided correlation. Such a question is mainly driven by applications in signal processing and wireless communications wherein such matrices naturally arise. In particular, we derive closed-form expressions for the inverse moments and show that the obtained results can help approximate several performance metrics, such as the average estimation error corresponding to the best linear unbiased estimator (BLUE) and the linear minimum mean square error (LMMSE) estimator, or other loss functions used to measure the accuracy of covariance matrix estimates.
Abstract
The paper develops a unified approach to the transient analysis of adaptive filters with error nonlinearities. In addition to deriving earlier results in a unified manner, the approach also leads to new performance results without restricting the regression data to being Gaussian or white. The framework is based on energy-conservation arguments and avoids the need for explicit recursions for the covariance matrix of the weight-error vector.
Abstract
The deluge of data rate in today's networks imposes a cost burden on backhaul network design. Developing cost-efficient backhaul solutions thus becomes an exciting, yet challenging, problem. Traditional technologies for backhaul networks include either radio-frequency (RF) backhauls or optical fibers (OF). While RF is a cost-effective solution as compared with OF, it supports only lower data rate requirements. Another promising backhaul solution is free-space optics (FSO), as it offers both a high data rate and a relatively low cost. FSO, however, is sensitive to nature conditions, e.g., rain, fog, and line-of-sight obstruction. This paper combines the advantages of both RF and FSO and proposes a hybrid RF/FSO backhaul solution. It considers the problem of minimizing the cost of the backhaul network by choosing either OF or hybrid RF/FSO backhaul links between the base stations, so as to satisfy data rate, connectivity, and reliability constraints. It shows that, under a specified realistic assumption about the cost of OF and hybrid RF/FSO links, the problem is equivalent to a maximum weight clique problem, which can be solved with moderate complexity. Simulation results show that the proposed solution achieves close-to-optimal performance, especially for reasonable prices of the hybrid RF/FSO links. They further reveal that hybrid RF/FSO is a cost-efficient solution and a good candidate for upgrading existing backhaul networks.
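The maximum weight clique reduction can be illustrated with a brute-force search over node subsets, which is feasible for small planning graphs. The graph, weights, and function name below are hypothetical toy inputs, not the paper's planning model:

```python
from itertools import combinations

def max_weight_clique(nodes, edges, weight):
    """Exhaustive max-weight clique search (fine for small graphs only)."""
    edge_set = {frozenset(e) for e in edges}
    best, best_w = [], 0.0
    for r in range(1, len(nodes) + 1):
        for cand in combinations(nodes, r):
            # a clique requires every pair of candidate nodes to be adjacent
            if all(frozenset(p) in edge_set for p in combinations(cand, 2)):
                w = sum(weight[v] for v in cand)
                if w > best_w:
                    best, best_w = list(cand), w
    return best, best_w

# toy graph: triangle a-b-c plus a heavy pendant edge c-d
nodes = ["a", "b", "c", "d"]
edges = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]
weight = {"a": 2, "b": 3, "c": 4, "d": 10}
best, best_w = max_weight_clique(nodes, edges, weight)
```

In practice, the paper notes the clique problem can be solved with moderate complexity; dedicated solvers replace the exhaustive loop above for realistic network sizes.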
Abstract
This letter studies mobility-aware user-to-base station (BS) association policies, within a stochastic geometry framework, in two-tier uplink cellular networks with fractional channel inversion power control. Particularly, we model the BSs' locations using the widely accepted Poisson point process and obtain the coverage probability and handover cost expressions for the coupled and decoupled uplink and downlink associations. To this end, we compute the average throughput for the mobile users and study the merits and demerits of each association strategy.
Abstract
Estimating an unknown signal in Wireless Sensor Networks (WSNs) requires sensor nodes to transmit their observations of the signal over a multiple access channel to a Fusion Center (FC). The FC uses the received observations, which are corrupted by observation noise and both channel fading and noise, to find the minimum Mean Square Error (MSE) estimate of the signal. In this paper, we investigate the effect of the source-node correlation (the correlation between sensor node observations and the source signal) and the inter-node correlation (the correlation between sensor node observations) on the performance of the Linear Minimum Mean Square Error (LMMSE) estimator for three correlation models in the presence of channel fading. First, we investigate the asymptotic behavior of the achieved distortion (i.e., MSE) resulting from both the observation and channel noise in a non-fading channel. Then, the effect of channel fading is considered and the corresponding distortion outage probability, the probability that the distortion exceeds a certain value, is found. By representing the distortion as a ratio of indefinite quadratic forms, a closed-form expression is derived for the outage probability that shows its dependency on the correlation. Finally, the new representation of the outage probability allows us to propose an iterative solution for the power allocation problem to minimize the outage probability under total and individual power constraints. Numerical simulations are provided to verify our analytic results.
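For a scalar source observed by correlated sensors, the LMMSE weights and the resulting distortion take the standard form sketched below. The correlation values and sizes are illustrative assumptions (chosen so the joint covariance is valid), and the fading/outage analysis is not reproduced:

```python
import numpy as np

k = 4                                   # number of sensor nodes
rho = 0.6                               # inter-node correlation (assumed value)
c = 0.7                                 # source-node correlation (assumed value)
sigma_n2 = 0.1                          # additive noise variance

# unit-variance observations with pairwise correlation rho
C_obs = rho * np.ones((k, k)) + (1 - rho) * np.eye(k)
c_sx = c * np.ones(k)                   # correlation of observations with the source

# LMMSE estimate of the unit-variance scalar source s from noisy y:
#   s_hat = w @ y,  w = (C_obs + sigma_n2 I)^{-1} c_sx
w = np.linalg.solve(C_obs + sigma_n2 * np.eye(k), c_sx)
mse = 1.0 - c_sx @ w                    # resulting distortion (MSE)
```

Higher source-node correlation lowers the MSE, while higher inter-node correlation makes the observations redundant and raises it, which is the trade-off the abstract analyzes.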
Abstract
Clipping is one of the simplest peak-to-average power ratio reduction schemes for orthogonal frequency division multiplexing (OFDM). Deliberately clipping the transmission signal degrades system performance, and clipping mitigation is required at the receiver for information restoration. In this paper, we acknowledge the sparse nature of the clipping signal and propose a low-complexity Bayesian clipping estimation scheme. The proposed scheme utilizes a priori information about the sparsity rate and noise variance for enhanced recovery. At the same time, the proposed scheme is robust against inaccurate estimates of the clipping signal statistics. The undistorted phase property of the clipped signal, as well as the clipping likelihood, is utilized for enhanced reconstruction. Furthermore, motivated by the nature of modern OFDM-based communication systems, we extend our clipping reconstruction approach to multiple antenna receivers and multi-user OFDM. We also address the problem of channel estimation from pilots contaminated by the clipping distortion. Numerical findings are presented that depict favorable results for the proposed scheme compared to the established sparse reconstruction schemes.
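The sparsity that the Bayesian recovery exploits can be seen in a minimal simulation (parameter values are illustrative): amplitude clipping only touches the few samples whose magnitude exceeds the threshold, and the distortion at those samples is phase-aligned with the signal, which is the undistorted phase property the abstract refers to:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 256
# frequency-domain QPSK symbols -> unit-variance time-domain OFDM signal
X = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
x = np.fft.ifft(X) * np.sqrt(N)

# amplitude clipping at a threshold relative to the RMS level
thr = 1.6 * np.sqrt(np.mean(np.abs(x) ** 2))
x_clip = np.where(np.abs(x) > thr, thr * x / np.abs(x), x)

c = x_clip - x                             # clipping distortion signal
support = np.flatnonzero(np.abs(c) > 1e-12)
# c is sparse: only the clipped samples are nonzero, and each clipped
# sample keeps the phase of the original sample (only its magnitude drops)
```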
Abstract
Regenerating codes represent a class of block codes applicable to distributed storage systems. An [n, k, d] regenerating code allows data recovery from any k out of n code fragments, and supports regeneration of a code fragment from any other d fragments, for k ≤ d ≤ n - 1. Minimum storage regenerating (MSR) codes are a subset of regenerating codes with the minimal size of each code fragment. The first explicit construction of MSR codes that can perform exact regeneration (named exact-MSR codes) for d ≥ 2k - 2 has been presented via a product-matrix framework. This paper addresses some of the practical issues in the construction of exact-MSR codes. The major contributions of this paper include the following. A new product-matrix framework is proposed to directly include all feasible exact-MSR codes for d ≥ 2k - 2. A mechanism for a systematic version of the exact-MSR code is proposed to minimize the computational complexity of the message-symbol remapping process. Two practical forms of encoding matrices are presented to reduce the size of the finite field.
Abstract
Broadcast (or point-to-multipoint) communication has attracted a lot of research recently. In this paper, we consider the group broadcast channel where the users' pool is divided into groups, each of which is interested in common information. Such a situation occurs, for example, in digital audio and video broadcast where the users are divided into various groups according to the shows they are interested in. The paper obtains upper and lower bounds for the sum rate capacity in the large number of users regime and quantifies the effect of spatial correlation on the system capacity. The paper also studies the scaling of the system capacity when the number of users and antennas grow simultaneously. It is shown that, in order to achieve a constant rate per user, the number of transmit antennas should scale at least logarithmically in the number of users.
Abstract
In multiple-input multiple-output (MIMO) radar, appropriate correlated waveforms are designed to achieve desired transmit beampatterns. To design such waveforms, conventional MIMO radar methods use two steps. In the first step, the waveform covariance matrix R is synthesized to achieve the desired beampattern. In the second step, actual waveforms are designed to realize the synthesized covariance matrix. Most of the existing methods use iterative algorithms to solve these constrained optimization problems. The computational complexity of these algorithms is very high, which makes them difficult to use in practice. In this paper, to achieve the desired beampattern, a low-complexity discrete-Fourier-transform-based closed-form covariance matrix design technique is introduced for a MIMO radar. The designed covariance matrix is then exploited to derive a novel closed-form algorithm that directly designs finite-alphabet constant-envelope waveforms for the desired beampattern. The proposed technique can be used to design waveforms for large antenna arrays to change the beampattern in real time. It is also shown that the number of transmitted symbols from each antenna depends on the beampattern and is less than the total number of transmit antenna elements.
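The link between the waveform covariance matrix R and the transmit beampattern, P(θ) = a(θ)ᴴ R a(θ) for steering vector a(θ), can be sketched numerically. The DFT-based closed-form design itself is not reproduced; the array size and steering angle are illustrative:

```python
import numpy as np

M = 10                                   # transmit antennas, half-wavelength spacing
theta = np.linspace(-90.0, 90.0, 181)    # angle grid in degrees

def steering(deg):
    return np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(deg)))

def beampattern(R):
    # P(theta) = a(theta)^H R a(theta), evaluated over the grid
    return np.array([np.real(steering(t).conj() @ R @ steering(t)) for t in theta])

# R = I (uncorrelated waveforms) gives an omnidirectional pattern of level M
P_omni = beampattern(np.eye(M))

# fully correlated waveforms (rank-one R) steer a phased-array beam at 20 deg
a0 = steering(20.0)
P_beam = beampattern(np.outer(a0, a0.conj()) / M)
```

Intermediate-rank choices of R trade beam width against waveform diversity, which is what the covariance synthesis step controls.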
Abstract
Optical wireless communications (OWC) is a promising technology for closing the mismatch between the growing number of connected devices and the limited wireless network capabilities. Similar to the downlink, the uplink can also benefit from OWC for establishing connectivity between such devices and an optical access point. In this context, the incoherent intensity-modulation and direct-detection (IM-DD) scheme is desirable in practice. Hence, it is important to understand the fundamental limits of communication rates over an OWC uplink employing IM-DD, i.e., the channel capacity. This uplink, modeled as a Gaussian multiple-access channel (MAC) for indoor OWC, is studied in this paper under the IM-DD constraints, which form the main difference from the standard Gaussian MAC commonly studied in the radio-frequency context. Capacity region outer and inner bounds for this channel are derived. The bounds are fairly close at high signal-to-noise ratio (SNR), where a truncated-Gaussian input distribution achieves the capacity region within a constant gap. Furthermore, the bounds coincide at low SNR, showing the optimality of ON-OFF keying combined with successive cancellation decoding in this regime. At moderate SNR, an optimized uniformly spaced discrete input distribution achieves fairly good performance.
Abstract
Orthogonal Frequency Division Multiplexing (OFDM) is a modulation scheme that is widely used in wired and wireless communication systems. While OFDM is ideally suited to deal with frequency-selective channels and AWGN, its performance may be dramatically impacted by the presence of impulse noise. In fact, very strong noise impulses in the time domain might result in the erasure of whole OFDM blocks of symbols at the receiver. Impulse noise can be mitigated by considering it as a sparse signal in time and using recently developed algorithms for sparse signal reconstruction. We propose an algorithm that utilizes the guard band null subcarriers for impulse noise estimation and cancellation. Instead of relying on ℓ1 minimization as done in some popular general-purpose compressive sensing schemes, the proposed method jointly exploits the specific structure of this problem and the available a priori information for sparse signal recovery. The computational complexity of the proposed algorithm is very competitive with respect to sparse signal reconstruction schemes based on ℓ1 minimization. The proposed method is compared with other state-of-the-art methods in terms of achievable rates for an OFDM system with impulse noise and AWGN.
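A minimal sketch of the measurement setup: on null subcarriers the data contributes nothing, so the received values there observe only the sparse time-domain impulse vector through a partial DFT matrix. Plain orthogonal matching pursuit is used below as a stand-in for the paper's structure-aware algorithm, and all sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
N, n_null, s = 128, 32, 3               # FFT size, null carriers, impulse count
F = np.fft.fft(np.eye(N)) / np.sqrt(N)  # unitary DFT matrix
null_idx = rng.choice(N, n_null, replace=False)
Phi = F[null_idx, :]                    # measurement matrix seen on null carriers

e = np.zeros(N, dtype=complex)          # sparse time-domain impulse-noise vector
supp_true = rng.choice(N, s, replace=False)
e[supp_true] = rng.standard_normal(s) + 1j * rng.standard_normal(s)

y = Phi @ e                             # received null-carrier samples (noiseless)

def omp(Phi, y, s):
    """Plain OMP: greedy atom selection plus least squares on the support."""
    r, supp = y.copy(), []
    for _ in range(s):
        supp.append(int(np.argmax(np.abs(Phi.conj().T @ r))))
        x_ls, *_ = np.linalg.lstsq(Phi[:, supp], y, rcond=None)
        r = y - Phi[:, supp] @ x_ls
    x = np.zeros(Phi.shape[1], dtype=complex)
    x[supp] = x_ls
    return x

e_hat = omp(Phi, y, s)                  # typically recovers the impulse positions
```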
Abstract
A high-speed railway system equipped with moving relay stations placed on the middle of the ceiling of each train wagon is investigated. The users inside the train are served in two hops via orthogonal frequency-division multiple-access (OFDMA) technology. In this paper, we first focus on minimizing the total downlink power consumption of the base station (BS) and the moving relays while respecting specific quality-of-service (QoS) constraints. We first derive the optimal resource-allocation solution, in terms of OFDMA subcarriers and power allocation, using the dual decomposition method. Then, we propose an efficient algorithm based on the Hungarian method to find a suboptimal but low-complexity solution. Moreover, we propose an OFDMA planning solution for high-speed trains by finding the maximal inter-BS distance, given the required user data rates, to perform seamless handover. Our simulation results illustrate the performance of the proposed resource-allocation schemes in the case of Third-Generation Partnership Project (3GPP) Long-Term Evolution Advanced (LTE-A) and compare them with previously developed algorithms, as well as with the direct transmission scenario. Our results also highlight the significant planning gain obtained, owing to the use of multiple relays instead of the conventional single-relay scenario.
Abstract
Linearization of user equipment power amplifiers driven by orthogonal frequency division multiplexing signals is addressed in this paper. Particular attention is paid to the power-efficient operation of an orthogonal frequency division multiple access cognitive radio system and the realization of such a system using compressed sensing. Specifically, precompensated overdriven amplifiers are employed at the mobile terminal. Overdriven amplifiers result in in-band distortions and out-of-band interference. The out-of-band interference mostly occupies the spectrum of inactive users, whereas the in-band distortions are mitigated using compressed sensing at the receiver. It is also shown that the performance of the proposed scheme can be further enhanced using multiple measurements of the distortion signal in single-input multi-output systems. Numerical results verify the ability of the proposed setup to improve error vector magnitude, bit error rate, outage capacity, and mean squared error.
Abstract
This paper presents a novel narrowband interference (NBI) mitigation scheme for single carrier-frequency division multiple access systems. The proposed NBI cancellation scheme exploits the frequency-domain sparsity of the unknown signal and adopts a low-complexity Bayesian sparse recovery procedure. At the transmitter, a few randomly chosen data locations are kept data-free to sense the NBI signal at the receiver. Furthermore, it is noted that in practice, the sparsity of the NBI signal is destroyed by a grid mismatch between the NBI sources and the system under consideration. Toward this end, first, an accurate grid mismatch model is presented that is capable of assuming independent offsets for multiple NBI sources, and second, the sparsity of the unknown signal is restored prior to reconstruction using a sparsifying transform. To improve the spectral efficiency of the proposed scheme, a data-aided NBI recovery procedure is outlined that relies on adaptively selecting a subset of data points and using them as additional measurements. Numerical results demonstrate the effectiveness of the proposed scheme for NBI mitigation.
Abstract
For several years, the completion time and the decoding delay problems in instantly decodable network coding (IDNC) were considered separately and were thought to act completely against each other. Recently, some works have aimed to balance the effects of these two important IDNC metrics, but none of them studied a further optimization of one by controlling the other. This paper investigates the effect of controlling the decoding delay to reduce the completion time below its currently best known solution in both perfect and imperfect feedback with persistent erasure channels. To solve the problem, the decoding-delay-dependent expressions of the users' and overall completion times are derived in the complete feedback scenario. Although using such expressions to find the optimal overall completion time is NP-hard, this paper proposes two novel heuristics that minimize the probability of increasing the maximum of these decoding-delay-dependent completion time expressions after each transmission through a layered control of their decoding delays. Afterward, this paper extends the study to the imperfect feedback scenario, in which uncertainties at the sender affect its ability to accurately anticipate the decoding delay increase at each user. This paper formulates the problem in such an environment and derives the expression of the minimum increase in the completion time. Simulation results show the performance of the proposed solutions and suggest that both heuristics achieve a lower mean completion time, as compared with the best known heuristics for completion time reduction in perfect and imperfect feedback. The gap in performance becomes more significant as the channel erasure probability increases.
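The instant decodability condition underlying IDNC can be stated compactly: an XOR-coded packet is useful to a user only if it combines exactly one packet the user is still missing with packets the user already holds. A minimal sketch (function and variable names are my own):

```python
def is_instantly_decodable(coded_set, has, wants):
    # The XOR combination `coded_set` is instantly decodable for a user iff
    # it contains exactly one packet the user still wants, with every other
    # packet already held, so the wanted packet pops out in a single XOR.
    missing = coded_set - has
    return len(missing) == 1 and missing <= wants

# toy example: the user holds packets {1, 2} and wants {3, 4}
ok = is_instantly_decodable({1, 3}, has={1, 2}, wants={3, 4})    # decodable
bad = is_instantly_decodable({3, 4}, has={1, 2}, wants={3, 4})   # two missing
```

The completion-time/decoding-delay trade-off studied in the paper comes from which such combinations the sender chooses each transmission; that selection logic is not reproduced here.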
Abstract
The Received Signal Strength (RSS) based fingerprinting approaches for indoor localization require updating the fingerprint databases due to the dynamic nature of the indoor environment. This process is laborious and time-consuming when the indoor area is large. Semi-supervised approaches reduce this workload and achieve good accuracy at around 15 percent of the fingerprinting load, but their performance degrades severely if the load is reduced below this level. We propose an indoor localization framework that uses unsupervised manifold alignment. It requires only 1 percent of the fingerprinting load, some crowd-sourced readings, and plan coordinates of the indoor area. The 1 percent fingerprinting load is used only in perturbing the local geometries of the plan coordinates. The proposed framework achieves less than 5 m mean localization error, which is considerably better than semi-supervised approaches at such a small amount of fingerprinting load. In addition, the few location estimations together with few fingerprints help to estimate the complete radio map of the indoor environment. The estimation of the radio map does not demand extra workload; rather, it employs the information already available from the proposed indoor localization framework. The testing results for radio map estimation show almost 50 percent performance improvement by using this information as compared to using only fingerprints.
Abstract
In this correspondence, we show how the cyclic prefix (CP) can be used to enhance the performance of an orthogonal-frequency-division multiplexing (OFDM) receiver. Specifically, we show how an OFDM symbol transmitted over a block fading channel can be blindly detected using the output symbol and associated CP. The algorithm boils down to a nonlinear relationship involving the input and output data only that can be used to search for the maximum-likelihood (ML) estimate of the input. This relationship becomes much simpler for constant modulus (CM) data. We also propose iterative methods to reduce the computational complexity involved in the ML search of the input for CM data.
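The CP property such detectors build on is that a sufficiently long prefix turns the linear channel into a circular one, so Y[k] = H[k]X[k] holds on every subcarrier. This can be checked numerically (sizes are illustrative; the blind ML search itself is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(4)
N, L = 64, 8                             # FFT size, channel memory (taps)
h = rng.standard_normal(L) + 1j * rng.standard_normal(L)

# constant-modulus (QPSK) frequency-domain symbols
X = np.exp(1j * 2 * np.pi * rng.integers(0, 4, N) / 4)
x = np.fft.ifft(X)
x_cp = np.concatenate([x[-(L - 1):], x])  # prepend cyclic prefix of L-1 samples

y_lin = np.convolve(x_cp, h)              # linear convolution with the channel
y = y_lin[L - 1:L - 1 + N]                # receiver discards the CP samples

# with the CP, linear convolution becomes circular, so per subcarrier:
Y = np.fft.fft(y)
H = np.fft.fft(h, N)                      # Y should equal H * X exactly
```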
Abstract
We consider comb-type OFDM transmission over doubly selective channels. Given a fixed number and total power of the pilot subcarriers, we show that the MMSE-optimal pilot design consists of identical, equally spaced clusters, where each cluster is a zero-correlation-zone sequence.
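The cluster placement can be sketched directly; the zero-correlation-zone values within each cluster are not generated here, and the sizes are illustrative:

```python
import numpy as np

def pilot_clusters(N, n_clusters, cluster_size):
    """Indices of identical, equally spaced pilot clusters over N subcarriers."""
    assert N % n_clusters == 0, "clusters must divide the subcarrier grid evenly"
    starts = np.arange(n_clusters) * (N // n_clusters)
    return np.concatenate([s + np.arange(cluster_size) for s in starts])

# 4 clusters of 3 pilots over 64 subcarriers: starts at 0, 16, 32, 48
idx = pilot_clusters(N=64, n_clusters=4, cluster_size=3)
```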
Abstract
From its introduction to its quindecennial, network coding has built a strong reputation for enhancing packet recovery and achieving maximum information flow in both wired and wireless networks. Traditional studies focused on optimizing the throughput of the system by proposing elaborate schemes able to reach the network capacity. With the shift toward distributed computing on mobile devices, performance and complexity both become critical factors that affect the efficiency of a coding strategy. Instantly decodable network coding presents itself as a new paradigm in network coding that trades off these two aspects. This paper reviews instantly decodable network coding schemes by identifying, categorizing, and evaluating various algorithms proposed in the literature. The first part of the manuscript investigates conventional centralized systems, in which all decisions are carried out by a central unit, e.g., a base-station. In particular, two successful approaches, known as strict and generalized instantly decodable network coding, are compared in terms of reliability, performance, complexity, and packet selection methodology. The second part considers the use of instantly decodable codes in a device-to-device communication network, in which devices speed up the recovery of the missing packets by exchanging network-coded packets. Although the performance improvements are directly proportional to the increase in computational complexity, numerous schemes that are successful from both the performance and complexity viewpoints are identified.
Abstract
Orthogonal frequency division multiplexing (OFDM) combines the advantages of high achievable rates and relatively easy implementation. However, for proper recovery of the input, the OFDM receiver needs accurate channel information. In this paper, we propose an expectation-maximization algorithm for joint channel and data recovery in fast fading environments. The algorithm makes a collective use of the data and channel constraints inherent in the communication problem. This comes in contrast to other works, which have employed these constraints selectively. The data constraints include pilots, the cyclic prefix, and the finite alphabet restriction, while the channel constraints include sparsity, finite delay spread, and the statistical properties of the channel (frequency and time correlation). The algorithm boils down to a forward-backward Kalman filter. We also suggest a suboptimal modification that is able to track the channel and recover the data with no latency. Simulations show the favorable behavior of both algorithms compared to other channel estimation techniques.
Abstract
This paper presents a unified mathematical paradigm, based on stochastic geometry, for downlink cellular networks with multiple-input-multiple-output (MIMO) base stations. The developed paradigm accounts for signal retransmission upon decoding errors, in which the temporal correlation among the signal-to-interference-plus-noise ratios (SINRs) of the original and retransmitted signals is captured. In addition to modeling the effect of retransmission on the network performance, the developed mathematical model presents a twofold analysis unification for the MIMO cellular networks literature. First, it integrates the tangible decoding error probability and the abstracted (i.e., modulation scheme and receiver type agnostic) outage probability analysis, which are largely disjoint in the literature. Second, it unifies the analysis for different MIMO configurations. The unified MIMO analysis is achieved by abstracting unnecessary information conveyed within the interfering signals via a Gaussian signaling approximation, along with an equivalent SISO representation for the per-data-stream SINR in MIMO cellular networks. We show that the proposed unification simplifies the analysis without sacrificing the model accuracy. To this end, we discuss the diversity-multiplexing tradeoff imposed by different MIMO schemes and shed light on the diversity loss due to the temporal correlation among the SINRs of the original and retransmitted signals. Finally, several design insights are highlighted.
Abstract
Estimating the values of unknown parameters from corrupted measured data presents formidable challenges in ill-posed problems, where many of the fundamental estimation methods fail to provide meaningful stabilized solutions. In this work, we propose a new regularization approach combined with a new regularization-parameter selection method for linear least-squares discrete ill-posed problems, called the constrained perturbation regularization approach (COPRA). The proposed COPRA is based on perturbing the singular-value structure of the linear model matrix to enhance the stability of the problem solution. Unlike many regularization methods that seek to minimize the estimated data error, the proposed approach is developed to minimize the mean-squared error of the estimator, which is the objective in many estimation scenarios. The performance of the proposed approach is demonstrated by applying it to a large set of real-world discrete ill-posed problems. Simulation results show that the proposed approach outperforms a set of benchmark regularization methods in most cases. In addition, the approach enjoys the shortest runtime and offers the highest level of robustness of all the tested benchmark regularization methods.
Abstract
Conventional algorithms used for parameter estimation in colocated multiple-input multiple-output (MIMO) radars require the inversion of the covariance matrix of the received spatial samples. In these algorithms, the number of received snapshots should be at least equal to the size of the covariance matrix. For large MIMO antenna arrays, the inversion of the covariance matrix becomes computationally very expensive. Compressive sensing (CS) algorithms, which do not require the inversion of the complete covariance matrix, can be used for parameter estimation with a smaller number of received snapshots. In this work, it is shown that the spatial formulation is best suited for large MIMO arrays when CS algorithms are used. A temporal formulation is proposed that fits the CS algorithm framework, especially for small MIMO arrays. A recently proposed low-complexity CS algorithm named support agnostic Bayesian matching pursuit (SABMP) is used to estimate target parameters for both spatial and temporal formulations for an unknown number of targets. The simulation results show the advantage of the SABMP algorithm in utilizing a low number of snapshots and achieving better parameter estimation for both small and large numbers of antenna elements. Moreover, it is shown by simulations that SABMP is more effective than other existing algorithms at high signal-to-noise ratio.
Abstract
Recent studies on cloud-radio access networks assume either signal-level or scheduling-level coordination. This paper considers a hybrid coordinated scheme as a means to benefit from both policies. Consider the downlink of a multi-cloud radio access network, where each cloud is connected to several base-stations (BSs) via high capacity links and, therefore, allows for joint signal processing within the cloud transmission. Across the multiple clouds, however, only scheduling-level coordination is permitted, as low levels of backhaul communication are feasible. The frame structure of every BS is composed of various time/frequency blocks, called power-zones (PZs), which are maintained at a fixed power level. This paper addresses the problem of maximizing a network-wide utility by associating users to clouds and scheduling them to the PZs, under the practical constraints that each user is scheduled to a single cloud at most, but possibly to many BSs within the cloud, and can be served by one or more distinct PZs within the BSs' frame. This paper solves the problem using graph theory techniques by constructing the conflict graph. The considered scheduling problem is, then, shown to be equivalent to a maximum-weight independent set problem in the constructed graph, which can be solved using efficient techniques. This paper then proposes solving the problem using both optimal and heuristic algorithms that can be implemented in a distributed fashion across the network. The proposed distributed algorithms rely on the well-chosen structure of the constructed conflict graph utilized to solve the maximum-weight independent set problem. Simulation results suggest that the proposed optimal and heuristic hybrid scheduling strategies provide appreciable gain as compared with the scheduling-level coordinated networks, with a negligible degradation to signal-level coordination.
Abstract
Using stochastic geometry, this article studies the retransmission performance in uplink cellular networks with fractional path-loss inversion power control (FPC). We first show that the signal-to-interference-ratio (SIR) is correlated across time, which imposes temporal diversity loss in the retransmission performance. In particular, FPC with lower path-loss compensation factor decreases inter-cell interference but suffers from degraded retransmission diversity. On the other hand, full path-loss inversion achieves almost full temporal diversity (i.e., temporal SIR independence) at the expense of increased inter-cell interference. To this end, the results show that ramping-down the power upon transmission failure improves the overall coverage probability in interference-limited uplink networks.
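The fractional path-loss inversion rule the analysis builds on can be written down directly: in the dB domain, the transmit power is the open-loop target plus a fraction ε of the path loss, with ε = 1 full inversion and ε = 0 no power control. The numeric values below are illustrative:

```python
def fpc_transmit_power(p0_dbm, pathloss_db, epsilon):
    """Fractional path-loss inversion power control (dB domain):
    P_dBm = P0_dBm + epsilon * PL_dB."""
    return p0_dbm + epsilon * pathloss_db

# a user seeing 100 dB path loss with open-loop target P0 = -90 dBm
p_full = fpc_transmit_power(-90, 100, 1.0)   # full inversion
p_frac = fpc_transmit_power(-90, 100, 0.7)   # partial compensation
```

Full inversion equalizes received powers (hence near-independent SIRs across retransmissions), while smaller ε lowers interference at the cost of the retransmission diversity loss the abstract describes.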
Abstract
Location is one of the basic pieces of information required in underwater optical wireless sensor networks (UOWSNs) for purposes such as relating sensing measurements to precise sensor positions, enabling efficient geographic routing techniques, and sustaining link connectivity between nodes. Even though various two-dimensional UOWSN localization methods have been proposed in the past, the directive nature of optical wireless communications and the three-dimensional (3D) deployment of sensors require the development of 3D underwater localization methods. Additionally, the localization accuracy of the network strongly depends on the placement of the anchors. Therefore, we propose a robust 3D localization method for partially connected UOWSNs which can accommodate outliers and optimize the placement of the anchors to improve localization accuracy. The proposed method formulates the problem of missing pairwise distances and outliers as an optimization problem, which is solved through half-quadratic minimization. Furthermore, an analysis is provided to optimally place the anchors in the network, which improves the localization accuracy. The problem of optimal anchor placement is formulated as a combination of Fisher information matrices for the sensor nodes such that the condition of D-optimality is satisfied. The numerical results indicate that the proposed method substantially outperforms existing methods in the presence of outliers.
Abstract
In this letter, we consider the problem of recovering an unknown sparse signal from noisy linear measurements using an enhanced version of the popular Elastic Net (EN) method. We modify the EN by adding a box constraint, and we call the result the Box-Elastic Net (Box-EN). We assume an independent and identically distributed (i.i.d.) real Gaussian measurement matrix with additive Gaussian noise. In many practical situations the measurement matrix is not perfectly known, so we only have a noisy estimate of it. In this letter, we precisely characterize the mean squared error and the probability of support recovery of the Box-EN in the high-dimensional asymptotic regime. Numerical simulations validate the theoretical predictions derived in the letter and also show that the boxed variant outperforms the standard EN.
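A minimal sketch of a box-constrained Elastic Net solver via projected proximal gradient descent, assuming the box [-1, 1] and illustrative regularization weights; the letter characterizes the estimator asymptotically and does not prescribe this particular solver.

```python
import numpy as np

def box_elastic_net(A, y, lam1=0.01, lam2=0.01, lo=-1.0, hi=1.0, iters=1000):
    """Projected proximal gradient for the box-constrained Elastic Net:
    min 0.5||y - Ax||^2 + lam1*||x||_1 + 0.5*lam2*||x||^2, lo <= x <= hi.
    When the box contains 0, the prox of l1 + box is soft-thresholding
    followed by clipping."""
    x = np.zeros(A.shape[1])
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam2)    # 1 / Lipschitz constant
    for _ in range(iters):
        grad = A.T @ (A @ x - y) + lam2 * x            # smooth-part gradient
        v = x - step * grad
        x = np.clip(np.sign(v) * np.maximum(np.abs(v) - step * lam1, 0.0),
                    lo, hi)                             # soft-threshold, then box
    return x
```

On a toy sparse recovery instance with a Gaussian matrix, the iterate stays inside the box and lands near the true sparse vector.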
Abstract
The next era of information revolution will rely on aggregating big data from massive numbers of devices that are widely scattered in our environment. Most of these devices are expected to be of low complexity, low cost, and limited power supply, which imposes stringent constraints on network operation. In this regard, this paper investigates aerial data aggregation and field estimation from a finite spatial field via an unmanned aerial vehicle (UAV). Instead of fusing, relaying, and routing the data across the wireless nodes to fixed-location access points, a UAV flies over the field and collects the required data for two prominent missions: data aggregation and field estimation. To accomplish these tasks, the field of interest is divided into several subregions over which the UAV hovers to collect samples from the underlying nodes. To this end, we formulate and solve an optimization problem to minimize the total hovering and traveling time of each mission. While the former mission requires the collection of a prescribed average number of samples from the field, the latter ensures, for a given spatial correlation model of the field, that the average mean-squared estimation error of the field value is no more than a predetermined threshold at any point. These goals are fulfilled by optimizing the number of subregions, the area of each subregion, the hovering locations, the hovering time at each location, and the trajectory traversed between hovering locations. The proposed formulation is shown to be an NP-hard mixed-integer problem, and hence a decoupled heuristic solution is proposed. The results show that there exists an optimal number of subregions that balances the tradeoff between hovering and traveling times such that the total time for collecting the required samples is minimized.
Abstract
Caching and cloud control are new technologies that were suggested to improve the performance of future wireless networks. Fog radio access networks (F-RANs) have been recently proposed to further improve the throughput of future cellular networks by exploiting these two technologies. In this paper, we study the cloud offloading gains achieved by utilizing F-RANs that admit enhanced remote radio heads (eRRHs) with heterogeneous wireless technologies, namely, LTE and WiFi. This F-RAN architecture thus allows widely proliferating smart phone devices to receive two packets simultaneously from their in-built LTE and WiFi interfaces. We first formulate the general cloud base station (CBS) offloading problem as an optimization problem over a dual conflict graph, which is proven to be intractable. Thus, we formulate an online version of the CBS offloading problem in heterogeneous F-RANs as a weighted graph coloring problem and show it is NP-hard. We then devise a novel opportunistic network coding (ONC)-assisted heuristic solution to this problem, which divides it into two subproblems and solves each subproblem independently. We derive lower bounds on the online and aggregate CBS offloading performances of our proposed scheme and analyze its complexity. The simulations quantify the gains achieved by our proposed heterogeneous F-RAN solution compared with the traditional homogeneous F-RAN scheme and the derived lower bounds in terms of both CBS offloading and throughput.
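To illustrate the weighted-graph-coloring view of the offloading problem, here is a generic largest-weight-first greedy coloring sketch (the vertex weights and instance are invented for illustration; the paper's ONC-assisted heuristic is more involved).

```python
def greedy_color(adj, weights):
    """Largest-weight-first greedy coloring: visit vertices in decreasing
    weight order and give each the smallest color index not already used
    by one of its neighbors."""
    color = {}
    for v in sorted(adj, key=weights.get, reverse=True):
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# Toy instance: a triangle of mutually conflicting transmissions needs
# three colors (transmission slots); weights set the visiting order.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(greedy_color(adj, {0: 3.0, 1: 2.0, 2: 1.0}))  # → {0: 0, 1: 1, 2: 2}
```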
Abstract
This paper proposes a received signal strength (RSS)-based localization framework for energy harvesting underwater optical wireless sensor networks (EH-UOWSNs), where the optical noise sources and channel impairments of seawater pose significant challenges to range estimation. In UOWSNs, energy limitation is another major problem due to limited battery power and the difficulty of replacing or recharging the battery of an underwater sensor node. In the proposed framework, sensor nodes with insufficient battery harvest ambient energy and start communicating once they have stored sufficient energy. Network localization is carried out by measuring the RSSs of active nodes, which are modeled based on the underwater optical communication channel characteristics. Thereafter, block kernel matrices are computed from the RSS-based range measurements. Unlike the traditional shortest-path approach, the proposed technique reduces the shortest-path estimation error for each block kernel matrix. Once the complete block kernel matrices are available, a closed-form localization technique is developed to find the location of every optical sensor node in the network. An analytical expression for the Cramer-Rao lower bound is also derived as a benchmark to evaluate the localization performance of the developed technique. Extensive simulations show that the proposed framework outperforms well-known network localization techniques.
Abstract
Since its first use by Euler on the problem of the seven bridges of Königsberg, graph theory has shown excellent abilities in solving and unveiling the properties of multiple discrete optimization problems. The study of the structure of some integer programs reveals equivalence with graph theory problems, making a large body of the literature readily available for solving and characterizing the complexity of these problems. This tutorial presents a framework for utilizing a particular graph theory problem, known as the clique problem, for solving communications and signal processing problems. In particular, this article aims to illustrate the structural properties of integer programs that can be formulated as clique problems through multiple examples in communications and signal processing. To that end, the first part of the tutorial provides various optimal and heuristic solutions for the maximum clique, maximum weight clique, and k-clique problems. The tutorial further illustrates the use of the clique formulation through numerous contemporary examples in communications and signal processing, mainly in maximum access for non-orthogonal multiple access networks, throughput maximization using index and instantly decodable network coding, collision-free radio-frequency identification networks, and resource allocation in cloud-radio access networks. Finally, the tutorial sheds light on recent advances in such applications and provides technical insights on ways of dealing with mixed discrete-continuous optimization problems.
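As one concrete example of the clique machinery surveyed in the tutorial, the following is a standard Bron-Kerbosch maximum-clique search with pivoting (a textbook exact method on a toy graph, not code from the tutorial itself).

```python
def max_clique(adj):
    """Bron-Kerbosch enumeration of maximal cliques with pivoting,
    returning a maximum clique of the graph."""
    best = set()

    def expand(R, P, X):
        nonlocal best
        if not P and not X:                 # R is a maximal clique
            if len(R) > len(best):
                best = set(R)
            return
        pivot = max(P | X, key=lambda v: len(adj[v] & P))
        for v in list(P - adj[pivot]):      # skip the pivot's neighbors
            expand(R | {v}, P & adj[v], X & adj[v])
            P.remove(v)
            X.add(v)

    expand(set(), set(adj), set())
    return best

# Toy graph: a triangle {0, 1, 2} with a pendant vertex 3 attached to 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(sorted(max_clique(adj)))  # → [0, 1, 2]
```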
Abstract
This paper carries out a large dimensional analysis of a variation of kernel ridge regression that we call centered kernel ridge regression (CKRR), also known in the literature as kernel ridge regression with offset. This modified technique is obtained by accounting for the bias in the regression problem, resulting in ordinary kernel ridge regression but with centered kernels. The analysis is carried out under the assumption that the data are drawn from a Gaussian distribution and relies heavily on tools from random matrix theory (RMT). In the regime in which the data dimension and the training size grow infinitely large with a fixed ratio, and under some mild assumptions controlling the data statistics, we show that both the empirical and the prediction risks converge to deterministic quantities that describe in closed form the performance of CKRR in terms of the data statistics and dimensions. Inspired by this theoretical result, we subsequently build a consistent estimator of the prediction risk based on the training data, which allows the design parameters to be optimally tuned. A key insight of the proposed analysis is that, asymptotically, a large class of kernels achieve the same minimum prediction risk. This insight is validated with both synthetic and real data.
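A small numerical sketch of the centered kernel ridge regression estimator itself, here with an RBF kernel and illustrative hyperparameters (the paper's contribution is the asymptotic analysis, not this implementation): center the Gram matrix, solve the ridge system on the centered targets, and predict with the correspondingly centered cross-kernel plus the target mean.

```python
import numpy as np

def ckrr_fit_predict(Xtr, ytr, Xte, lam=1e-4, gamma=1.0):
    """Centered kernel ridge regression (kernel ridge with offset)
    with an RBF kernel; the intercept absorbs the mean of y."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    n = len(Xtr)
    H = np.eye(n) - 1.0 / n                     # centering matrix I - 11^T/n
    K = rbf(Xtr, Xtr)
    Kc = H @ K @ H                              # doubly centered Gram matrix
    alpha = np.linalg.solve(Kc + n * lam * np.eye(n), ytr - ytr.mean())
    kc = (rbf(Xte, Xtr) - K.mean(axis=0)) @ H   # centered cross-kernel
    return kc @ alpha + ytr.mean()
```

On a smooth 1D target the fitted predictor tracks the training data closely for small regularization.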
Abstract
In the past few years, Global Navigation Satellite System (GNSS) based attitude determination has been widely used thanks to its high accuracy, low cost, and real-time performance. This paper presents a novel 3D GNSS attitude determination method based on Riemannian optimization techniques. The paper first exploits the antenna geometry and baseline lengths to reformulate the 3D GNSS attitude determination problem as an optimization over a non-convex set. Since the solution set is a manifold, the problem is then cast as an optimization over a Riemannian manifold. The study of the geometry of the manifold allows the design of efficient first- and second-order Riemannian algorithms to solve the 3D GNSS attitude determination problem. Despite the non-convexity of the problem, the proposed algorithms are guaranteed to converge globally to a critical point of the optimization problem. To assess the performance of the proposed framework, numerical simulations are provided for the most challenging attitude determination cases: the unaided, single-epoch, and single-frequency scenarios. Numerical results reveal that the proposed algorithms largely outperform state-of-the-art methods for various system configurations, with lower complexity than generic non-convex solvers, e.g., interior point methods.
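A first-order Riemannian descent sketch on SO(3) for a Wahba-style surrogate of the attitude problem, min over rotations R of 0.5*||RA - B||_F^2 (the cost, step size, and retraction here are illustrative simplifications, not the paper's algorithms): project the Euclidean gradient onto the tangent space via its skew part, then retract with the exponential map.

```python
import numpy as np

def expm_skew(S):
    """Rodrigues' formula: exponential of a 3x3 skew-symmetric matrix."""
    w = np.array([S[2, 1], S[0, 2], S[1, 0]])
    t = np.linalg.norm(w)
    if t < 1e-12:
        return np.eye(3) + S
    K = S / t
    return np.eye(3) + np.sin(t) * K + (1.0 - np.cos(t)) * (K @ K)

def attitude_descent(A, B, step=0.5, iters=100):
    """Riemannian gradient descent on SO(3) for 0.5*||R A - B||_F^2."""
    R = np.eye(3)
    for _ in range(iters):
        G = (R @ A - B) @ A.T            # Euclidean gradient
        S = 0.5 * (R.T @ G - G.T @ R)    # skew(R^T G): tangent direction
        R = R @ expm_skew(-step * S)     # exponential-map retraction
    return R
```

With three orthonormal baselines (A equal to the identity) and B generated by a known rotation, the iterates recover that rotation; for general A the step size would need to scale with the baseline Gram matrix.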
Abstract
By exploiting large antenna arrays, massive MIMO (multiple input multiple output) systems can greatly increase spectral and energy efficiency over traditional MIMO systems. However, increasing the number of antennas at the base station (BS) makes the uplink joint channel estimation and data detection (JED) challenging in massive MIMO systems. In this paper, we consider the JED problem for massive SIMO (single input multiple output) wireless systems, which is a special case of wireless systems with large antenna arrays. We propose exact Generalized Likelihood Ratio Test (GLRT) optimal JED algorithms with low expected complexity, for both constant-modulus and nonconstant-modulus constellations. We show that, despite the large number of unknown channel coefficients, the expected computational complexity of these algorithms is polynomial in the channel coherence time (T) and the number of receive antennas (N), even when the number of receive antennas grows polynomially in the channel coherence time (N = O(T^11) suffices to guarantee an expected computational complexity cubic in T and linear in N). Simulation results show that the GLRT-optimal JED algorithms achieve significant performance gains (up to 5 dB improvement in energy efficiency) with low computational complexity.
Abstract
Abstract
Magnetic induction (MI) is an efficient wireless communication method for deploying an operational internet of underground things (IoUT) for oil and gas reservoirs. The IoUT consists of underground things capable of sensing the underground environment and communicating with the surface. The MI-based IoUT enables many applications, such as monitoring of oil rigs, optimized fracturing, and optimized extraction. Most of these applications depend on the locations of the underground things and therefore require accurate localization techniques. The existing localization techniques for MI-based underground sensing networks are two-dimensional and do not characterize the achievable accuracy of the developed methods, both of which are crucial and challenging tasks. Therefore, this paper proposes a novel three-dimensional (3D) localization technique based on isometric scaling (Isomap) for the future IoUT. Moreover, the paper also presents a closed-form expression for the Cramer-Rao lower bound (CRLB) of the proposed technique, which takes into account the channel parameters of the underground magnetic induction channel. The derived CRLB provides design guidelines for an MI-based underground localization system by relating the system parameters to the error trend. Numerical results demonstrate that localization accuracy is affected by various channel and network parameters, such as the number of underground things, the ranging error variance, the size of the coils, and the transmit power. The root-mean-square error performance of the proposed technique shows that increasing the number of turns of the coils, the transmit power, and the number of anchors improves performance. Results also show that the proposed technique is robust to ranging error variance in the range of 10% to 30%; however, a further increase in the ranging error variance no longer yields acceptable accuracy. The results also show that the proposed technique achieves, on average, 30% better localization accuracy compared with traditional methods.
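An Isomap-style sketch of the 3D localization idea (a generic textbook pipeline, not the paper's MI-specific algorithm): complete missing pairwise ranges with Floyd-Warshall shortest paths, then embed in 3D via classical multidimensional scaling.

```python
import numpy as np

def isomap_3d(D_partial, known):
    """Isomap-style 3D embedding from a partially known distance matrix:
    missing entries are filled by shortest paths, then classical MDS
    recovers coordinates up to a rigid transform."""
    n = D_partial.shape[0]
    D = np.where(known, D_partial, np.inf)
    np.fill_diagonal(D, 0.0)
    for k in range(n):                         # Floyd-Warshall completion
        D = np.minimum(D, D[:, [k]] + D[[k], :])
    H = np.eye(n) - 1.0 / n                    # double-centering matrix
    B = -0.5 * H @ (D ** 2) @ H
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:3]           # three largest eigenpairs
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))
```

When all pairwise ranges are known and noise-free, the embedding reproduces the inter-node geometry exactly (up to rotation and translation); with missing long-range pairs the shortest-path step only approximates them.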