We address the problem of removing specular ground-surface reflections and leakage/cross-talk from downward-looking stepped-frequency ground-penetrating radar (GPR) data. A new model for the ground bounce and the leakage/cross-talk is introduced, along with an algorithm that jointly estimates these effects from the collected data. The algorithm has the sound foundation of a nonlinear least squares (LS) fit to the presented model. The minimization is performed in a cyclic manner, where one step is a linear LS minimization and the other is a nonlinear LS minimization whose optimum can be found efficiently using, e.g., the chirp-transform algorithm. Results from applying the algorithm to measured GPR data, collected at a US Army test range, are also shown.
A method for the computationally efficient sliding-window time-updating of the Capon and APES spectral estimators, based on the time-variant displacement structure of the data covariance matrix, is presented. The proposed algorithm forms a natural extension of the computationally most efficient algorithm to date, and offers a significant computational gain as compared to the computational complexity associated with the batch re-evaluation of the spectral estimates for each time-update.
In this paper, we present a computationally efficient sliding window time updating of the Capon and amplitude and phase estimation (APES) matched filterbank spectral estimators based on the time-variant displacement structure of the data covariance matrix. The presented algorithm forms a natural extension of the most computationally efficient algorithm to date, and offers a significant computational gain as compared to the computational complexity associated with the batch re-evaluation of the spectral estimates for each time-update. Furthermore, through simulations, the algorithm is found to be numerically superior to the time-updated spectral estimate formed from directly updating the data covariance matrix.
Simultaneous wireless information and power transfer techniques for multiway massive multiple-input multiple-output (MIMO) relay networks are investigated. By using two practically viable relay receiver designs, namely 1) the power splitting receiver and 2) the time switching receiver, asymptotic signal-to-interference-plus-noise ratio (SINR) expressions are derived for an unlimited number of antennas at the relay. These asymptotic SINRs are then used to derive asymptotic symmetric sum rate expressions in closed form. Notably, these asymptotic SINRs and sum rates become independent of the radio frequency-to-direct current (RF-to-DC) conversion efficiency in the limit of infinitely many relay antennas. Moreover, tight average sum rate approximations are derived in closed form for finitely many relay antennas. The fundamental tradeoff between the harvested energy and the sum rate is quantified for both relay receiver structures. Furthermore, the detrimental impact of imperfect channel state information (CSI) on the MIMO detector/precoder is investigated, and thereby, the performance degradation caused by pilot contamination, which is the residual interference due to nonorthogonal pilot sequence usage in adjacent/cochannel systems, is quantified. The presence of cochannel interference (CCI) can be exploited for energy harvesting at the relay, and consequently, the asymptotic harvested energy is an increasing function of the number of cochannel interferers. Notably, in the genie-aided perfect CSI case, the detrimental impact of CCI on signal decoding can be cancelled completely as the number of relay antennas grows without bound. Nevertheless, pilot contamination severely degrades the sum rate performance even with infinitely many relay antennas.
Large-scale massive MIMO network deployments can provide higher spectral efficiency and better coverage for future communication systems like 5G. Due to the large number of antennas at the base station, the system achieves stable channel quality and spatially separable channels to the different users. In this paper, linear, planar, circular and cylindrical arrays are used in the evaluation of a large-scale multi-cell massive MIMO network. The system-level performance is predicted using two different kinds of channel models. First, a ray-based deterministic tool is utilized in a real North American city environment. Second, an independent and identically distributed (i.i.d.) Rayleigh fading channel model is considered, as often used in previously published massive MIMO studies. The analysis is conducted in a 16-macro-cell network with outdoor and randomly distributed users. It is shown that the array configuration has a large impact on the throughput statistics. Although the system level performance with i.i.d. Rayleigh fading can be close to the deterministic prediction in some situations (e.g., with large linear arrays), significant differences are noticed when considering other types of arrays.
Massive MIMO network deployments are expected to be a key feature of the upcoming 5G communication systems. Such networks are able to achieve a high level of channel quality and can simultaneously serve multiple users with the same resources. In this paper, realistic massive MIMO channels are evaluated both in single and multi-cell environments. The favorable propagation property is evaluated in the single-cell scenario and provides perspectives on the minimal criteria required to achieve such conditions. The dense multi-cell urban scenario provides a comparison between linear, planar, circular, and cylindrical arrays to evaluate a large-scale multi-cell massive MIMO network. The system-level performance is predicted using two different kinds of channel models. First, a ray-based deterministic tool is utilized in a real North American city environment. Second, an independent and identically distributed (i.i.d.) Rayleigh fading channel model is considered, as often used in previously published massive MIMO studies. The analysis is conducted in a 16-macro-cell network with both randomly distributed outdoor and indoor users. It is shown that the physical array properties like the shape and configuration have a large impact on the throughput statistics. Although the system-level performance with i.i.d. Rayleigh fading can be close to the deterministic prediction in some situations (e.g., with large linear arrays), significant differences are noticed when considering other types of arrays. The differences in the performance of the various arrays utilizing the exact same network parameters and the same number of total antenna elements provide insights into the selection of these physical parameters for upcoming 5G networks.
In this paper, we deal with the problem of the joint optimization of the precoders, equalizers, and relay beamformer of a multiple-input multiple-output interfering relay channel. This network can be regarded as a generalized model for both one-way and two-way relay channels with/without direct interfering links. Unlike conventional design procedures, we assume that the Channel State Information (CSI) is not known perfectly. The imperfect CSI is described using the norm-bounded error framework. We use a system-wide Sum Mean Square Error (SMSE) based problem formulation which is constrained by the transmit power of the terminals and the relay node. The problem at hand, from a worst-case design perspective, is a multilinear, and hence nonconvex, problem which is also semi-infinite in its constraints. We use a generalized version of Petersen's lemma to handle the semi-infiniteness and reduce the original problem to a single Linear Matrix Inequality (LMI). However, this LMI is not convex, and to resolve this issue we propose an iterative algorithm based on the alternating convex search methodology. Finally, simulation results, i.e., the convergence behavior of the proposed algorithm and its SMSE properties, are included to assess its performance.
In this paper, we deal with the problem of joint optimization of the source precoder, the relay beamformer and the destination equalizer in a nonregenerative relay network with only a partial knowledge of the Channel State Information (CSI).
We model the partial CSI using a deterministic norm bounded error model, and we use a system-wide mean square error performance measure which is constrained based on the transmit power regulations for both source and relay nodes.
Most conventional designs employ average performance optimization; in contrast, we solve this problem from a worst-case design perspective.
The original problem formulation is a semi-infinite trilinear optimization problem which is not convex.
To solve this problem we extend the existing theories to deal with the constraints which are semi-infinite in different independent complex matrix variables.
We show that the equivalent approximate problem is a set of linear matrix inequalities, that can be solved iteratively.
Finally, simulation results assess the performance of the proposed scheme.
We formally generalize the sign-definiteness lemma to the case of complex-valued matrices and multiple norm-bounded uncertainties. This lemma has found many applications in the study of the stability of control systems, and in the design and optimization of robust transceivers in communications. We then present three different novel applications of this lemma in the area of multi-user multiple-input multiple-output (MIMO) robust transceiver optimization. Specifically, the scenarios of interest are: (i) robust linear beamforming in an interfering ad hoc network, (ii) robust design of a general relay network, including the two-way relay channel as a special case, and (iii) a half-duplex one-way relay system with multiple relays. For these networks, we formulate the design problems of minimizing the (sum) MSE of the symbol detection subject to different average power budget constraints. We show that these design problems are non-convex (with bilinear or trilinear constraints) and semi-infinite in multiple independent matrix-valued uncertainty variables. We propose a two-stage solution where in the first step the semi-infinite constraints are converted to linear matrix inequalities using the generalized sign-definiteness lemma, and in the second step we use an iterative algorithm based on alternating convex search (ACS). Via simulations, we evaluate the performance of the proposed scheme.
This paper deals with the problem of discriminating samples that contain only noise from samples that contain a signal embedded in noise. The focus is on the case when the variance of the noise is unknown. We derive the optimal soft decision detector using a Bayesian approach. The complexity of this optimal detector grows exponentially with the number of observations and as a remedy, we propose a number of approximations to it. The problem under study is a fundamental one and it has applications in signal denoising, anomaly detection, and spectrum sensing for cognitive radio. We illustrate the results in the context of the latter.
In this paper, we create a unified framework for spectrum sensing of signals which have covariance matrices with known eigenvalue multiplicities. We derive the generalized likelihood-ratio test (GLRT) for this problem, with arbitrary eigenvalue multiplicities under both hypotheses. We also show a number of applications to spectrum sensing for cognitive radio, and show that the GLRTs for these applications, some of which are already known, are special cases of the general result.
We point out an error in a derivation in the recent paper [1], and provide a correct and much shorter calculation of the result in question. In passing, we also connect the results in [1] to the literature on array signal processing and on principal component analysis, and show that the main findings of [1] follow as special cases of standard results in these fields.
We consider spectrum sensing of signals encoded with an orthogonal space-time block code (OSTBC). We propose a CFAR detector based on knowledge of the eigenvalue multiplicities of the covariance matrix which are inherent owing to the OSTBC and derive theoretical performance bounds. In addition, we show that the proposed detector is robust to a carrier frequency offset, and propose a detector that deals with timing synchronization using the detector for the synchronized case as a building block. The proposed detectors are shown numerically to perform well.
We consider spectrum sensing of a second-order cyclostationary signal received at multiple antennas. The proposed detector exploits both the spatial and the temporal correlation of the received signal, from knowledge of the fundamental period of the cyclostationary signal and the eigenvalue multiplicities of the temporal covariance matrix. All other parameters, such as the channel gains or the noise power, are assumed to be unknown. The proposed detector is shown numerically to outperform state-of-the-art detectors for spectrum sensing of an OFDM signal, both when using a single antenna and with multiple antennas.
We consider spectrum sensing of OFDM signals in an AWGN channel. For the case of completely unknown noise and signal powers, we derive a GLRT detector based on empirical second-order statistics of the received data. The proposed GLRT detector exploits the non-stationary correlation structure of the OFDM signal and does not require any knowledge of the noise power or the signal power. The GLRT detector is compared to state-of-the-art OFDM signal detectors, and shown to improve the detection performance by 5 dB SNR in relevant cases.
For the case of completely known noise power and signal power, we present a brief derivation of the optimal Neyman-Pearson detector from first principles. We compare the optimal detector to the energy detector numerically, and show that the energy detector is near-optimal (within 0.2 dB SNR) when the noise variance is known. Thus, when the noise power is known, no substantial gain can be achieved by using any other detector than the energy detector.
We consider spectrum sensing of OFDM signals in an AWGN channel. For the case of completely known noise and signal powers, we set up a vector-matrix model for an OFDM signal with a cyclic prefix and derive the optimal Neyman-Pearson detector from first principles. The optimal detector exploits the inherent correlation of the OFDM signal incurred by the repetition of data in the cyclic prefix, using knowledge of the length of the cyclic prefix and the length of the OFDM symbol. We compare the optimal detector to the energy detector numerically. We show that the energy detector is near-optimal (within 1 dB SNR) when the noise variance is known. Thus, when the noise power is known, no substantial gain can be achieved by using any other detector than the energy detector.
For the case of completely unknown noise and signal powers, we derive a generalized likelihood ratio test (GLRT) based on empirical second-order statistics of the received data. The proposed GLRT detector exploits the non-stationary correlation structure of the OFDM signal and does not require any knowledge of the noise power or the signal power. The GLRT detector is compared to state-of-the-art OFDM signal detectors, and shown to improve the detection performance by 5 dB SNR in relevant cases.
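The energy detector that serves as the baseline in these comparisons is simple to sketch. The following toy simulation (the Gaussian signal model, sample counts, and Monte Carlo threshold are illustrative assumptions, not the papers' setup) estimates a false-alarm threshold under noise only and then measures the detection rate at 0 dB SNR:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256            # samples per sensing window
noise_var = 1.0
snr = 1.0          # 0 dB, illustrative

def cn(size):
    # Circularly symmetric complex Gaussian samples with unit variance.
    return (rng.standard_normal(size) + 1j * rng.standard_normal(size)) / np.sqrt(2)

def energy_stat(y):
    # Energy test statistic, normalized by the (known) noise variance.
    return np.sum(np.abs(y) ** 2) / noise_var

# Threshold for a ~5% false-alarm rate, set by Monte Carlo under noise only.
null_stats = np.array([energy_stat(np.sqrt(noise_var) * cn(N)) for _ in range(2000)])
threshold = np.quantile(null_stats, 0.95)

# Detection rate with a Gaussian signal present at the chosen SNR.
trials = 2000
detections = sum(
    energy_stat(np.sqrt(noise_var) * cn(N) + np.sqrt(snr * noise_var) * cn(N)) > threshold
    for _ in range(trials)
)
print(f"threshold {threshold:.1f}, detection rate {detections / trials:.2f}")
```

Note that this sketch assumes a known noise variance; the GLRT detectors above are designed precisely for the case where that assumption fails.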
We consider detection of signals encoded with orthogonal space-time block codes (OSTBC), using multiple receive antennas. Such signals contain redundancy and they have a specific structure, that can be exploited for detection. We derive the optimal detector, in the Neyman-Pearson sense, when all parameters are known. We also consider unknown noise variance, signal variance and channel coefficients. We propose a number of GLRT based detectors for the different cases, that exploit the redundancy structure of the OSTBC signal. We also propose an eigenvalue-based detector for the case when all parameters are unknown. The proposed detectors are compared to the energy detector. We show that when only the noise variance is known, there is no gain in exploiting the structure of the OSTBC. However, when the noise variance is unknown there can be a significant gain.
In this work, we consider spectrum sensing of Gaussian signals with structured covariance matrices. We show that the optimal detector based on the probability distribution of the sample covariance matrix is equivalent to the optimal detector based on the raw data, if the covariance matrices are known. However, the covariance matrices are unknown in general. Therefore, we propose to estimate the unknown parameters using covariance matching estimation techniques (COMET). We also derive the optimal detector based on a Gaussian approximation of the sample covariance matrix, and show that this is closely connected to COMET.
Cognitive radio is a new concept of reusing a licensed spectrum in an unlicensed manner. The motivation for cognitive radio is various measurements of spectrum utilization, that generally show unused resources in frequency, time and space. These "spectrum holes" could be exploited by cognitive radios. Some studies suggest that the spectrum is extremely underutilized, and that these spectrum holes could provide ten times the capacity of all existing wireless devices together. The spectrum could be reused either during time periods where the primary system is not active, or in geographical positions where the primary system is not operating. In this paper, we deal primarily with the concept of geographical reuse, in a frequency-planned primary network. We perform an analysis of the potential for communication in a geographical spectrum hole, and in particular the achievable sum-rate for a secondary network, to some order of magnitude. Simulation results show that a substantial sum-rate could be achieved if the secondary users communicate over small distances. For a small number of secondary links, the sum-rate increases linearly with the number of links. However, the spectrum hole gets saturated quite fast, due to interference caused by the secondary users. A spectrum hole may look large, but it disappears as soon as someone starts using it.
This paper considers approximations of marginalization sums that arise in Bayesian inference problems. Optimal approximations of such marginalization sums, using a fixed number of terms, are analyzed for a simple model. The model under study is motivated by recent studies of linear regression problems with sparse parameter vectors, and of the problem of discriminating signal-plus-noise samples from noise-only samples. It is shown that for the model under study, if only one term is retained in the marginalization sum, then this term should be the one with the largest a posteriori probability. By contrast, if more than one (but not all) terms are to be retained, then these should generally not be the ones corresponding to the components with the largest a posteriori probabilities.
In this paper we deal with spoofing detection in GNSS receivers. We derive the optimal genie detector when the true positions are perfectly known, and the observation errors are Gaussian, as a benchmark for other detectors. The system model considers three dimensional positions, and includes correlated errors. In addition, we propose several detectors that do not need any position knowledge, that outperform recently proposed detectors in many interesting cases.
We present a survey of state-of-the-art algorithms for spectrum sensing in cognitive radio. The algorithms discussed range from energy detection to sophisticated feature detectors. The feature detectors that we present all have in common that they exploit some known structure of the transmitted signal. In particular, we treat detectors that exploit cyclostationarity properties of the signal, and detectors that exploit a known eigenvalue structure of the signal covariance matrix. We also consider cooperative detection. Specifically, we present data fusion rules for soft and hard combining, and discuss the energy efficiency of several different sensing, sleeping, and censoring schemes in detail.
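The contrast between soft and hard combining can be illustrated with a toy example. In the sketch below, the Gaussian per-sensor statistics, the local threshold, and the majority-vote rule are illustrative assumptions rather than the survey's models:

```python
import numpy as np

rng = np.random.default_rng(4)
n_sensors, trials = 5, 2000

# Per-sensor test statistics: unit-variance Gaussian, mean shifted by 1 under H1.
h0 = rng.standard_normal((trials, n_sensors))
h1 = rng.standard_normal((trials, n_sensors)) + 1.0

# Hard combining: each sensor sends a one-bit decision; fuse by majority vote.
tau_local = 1.64                               # ~5% local false-alarm threshold
hard_h1 = np.sum(h1 > tau_local, axis=1) >= 3

# Soft combining: sum the raw statistics and threshold the sum
# (threshold set for a ~5% global false-alarm rate by Monte Carlo).
tau_soft = np.quantile(h0.sum(axis=1), 0.95)
soft_h1 = h1.sum(axis=1) > tau_soft

print(f"hard-combining detection rate: {hard_h1.mean():.2f}")
print(f"soft-combining detection rate: {soft_h1.mean():.2f}")
```

Soft combining retains more information and detects more reliably here; the tradeoff, as the survey discusses, is the higher reporting cost and energy consumption of sending raw statistics instead of single bits.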
The ever-increasing demand for higher data rates in wireless communications in the face of limited or underutilized spectral resources has motivated the introduction of cognitive radio. Traditionally, licensed spectrum is allocated over relatively long time periods and is intended to be used only by licensees. Various measurements of spectrum utilization have shown substantial unused resources in frequency, time, and space [1], [2]. The concept behind cognitive radio is to exploit these underutilized spectral resources by reusing unused spectrum in an opportunistic manner [3], [4]. The phrase cognitive radio is usually attributed to Mitola [4], but the idea of using learning and sensing machines to probe the radio spectrum was envisioned several decades earlier (cf., [5]).
The computational complexity of optimum decoding for an orthogonal space-time block code is quantified. Four equivalent techniques of optimum decoding which have the same computational complexity are specified. Modifications to the basic formulation in special cases are calculated and illustrated by means of examples.
The computational complexity of optimum decoding for an orthogonal space-time block code $\mathcal{G}_N$ satisfying $\mathcal{G}_N^H \mathcal{G}_N = c\left(\sum_{k=1}^{K} |s_k|^2\right) I_N$, where $c$ is a positive integer, is quantified. Four equivalent techniques of optimum decoding which have the same computational complexity are specified. Modifications to the basic formulation in special cases are calculated and illustrated by means of examples. This paper corrects, extends, and unifies the results from the literature. In addition, a number of results from the literature are extended to the case $c>1$.
Massive MIMO is considered to be one of the key technologies in the emerging 5G systems, but also a concept applicable to other wireless systems. Exploiting the large number of degrees of freedom (DoFs) of massive MIMO is essential for achieving high spectral efficiency, high data rates and extreme spatial multiplexing of densely distributed users. On the one hand, the benefits of applying massive MIMO for broadband communication are well known and there has been a large body of research on designing communication schemes to support high rates. On the other hand, using massive MIMO for Internet-of-Things (IoT) is still a developing topic, as IoT connectivity has requirements and constraints that are significantly different from the broadband connections. In this paper we investigate the applicability of massive MIMO to IoT connectivity. Specifically, we treat the two generic types of IoT connections envisioned in 5G: massive machine-type communication (mMTC) and ultra-reliable low-latency communication (URLLC). This paper fills this important gap by identifying the opportunities and challenges in exploiting massive MIMO for IoT connectivity. We provide insights into the trade-offs that emerge when massive MIMO is applied to mMTC or URLLC and present a number of suitable communication schemes. The discussion continues to the questions of network slicing of the wireless resources and the use of massive MIMO to simultaneously support IoT connections with very heterogeneous requirements. The main conclusion is that massive MIMO can bring benefits to the scenarios with IoT connectivity, but it requires tight integration of the physical-layer techniques with the protocol design.
We investigate the energy efficiency performance of cell-free Massive multiple-input multiple-output (MIMO), where the access points (APs) are connected to a central processing unit (CPU) via limited-capacity links. Thanks to the distributed maximum ratio combining (MRC) weighting at the APs, we propose that only quantized versions of the weighted signals are sent back to the CPU. Considering the effects of channel estimation errors and using the Bussgang theorem to model the quantization errors, an energy efficiency maximization problem is formulated with per-user power and backhaul capacity constraints as well as throughput requirement constraints. To handle this non-convex optimization problem, we decompose the original problem into two sub-problems and exploit a successive convex approximation (SCA) to solve the original energy efficiency maximization problem. Numerical results confirm the superiority of the proposed optimization scheme.
Limited-backhaul cell-free Massive multiple-input multiple-output (MIMO), in which a fog radio access network (F-RAN) is implemented to exchange information between the access points (APs) and the central processing unit (CPU), is investigated. We introduce a novel approach where the APs estimate the channel and send back quantized versions of the estimated channel and of the received signal to the central processing unit. The max algorithm and the Bussgang theorem are exploited to model the optimum uniform quantization. The ergodic achievable rates are derived. We show that, by exploiting microwave wireless backhaul links and using a small number of bits to quantize the estimated channel and the received signal, the performance of limited-backhaul cell-free Massive MIMO closely approaches the performance of cell-free Massive MIMO with perfect backhaul links.
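The Bussgang decomposition invoked above splits a quantizer output into a scaled copy of the input plus distortion that is uncorrelated with the input. A minimal numerical sketch (the 3-bit midrise quantizer and clipping level are illustrative assumptions, not the papers' design):

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.standard_normal(200_000)   # Gaussian input to the quantizer

def uniform_quantize(x, n_bits, clip=3.0):
    # Midrise uniform quantizer with 2**n_bits levels on [-clip, clip].
    step = 2 * clip / 2 ** n_bits
    q = np.floor(x / step) * step + step / 2
    return np.clip(q, -clip + step / 2, clip - step / 2)

y = uniform_quantize(x, 3)

# Bussgang decomposition: y = a*x + d, with the distortion d uncorrelated with x.
a = np.mean(x * y) / np.mean(x * x)
d = y - a * x
print(f"Bussgang gain a = {a:.3f}, E[x*d] = {np.mean(x * d):.2e}")
```

The empirical cross-correlation between the input and the distortion vanishes by construction of the gain `a`, which is what makes the linearized model tractable for rate analysis.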
In this paper, we study an active user detection problem for massive machine-type communications (mMTC). The users transmit pilot-hopping sequences, and detection of active users is performed based on the received energy. We utilize the channel hardening and favorable propagation properties of massive multiple-input multiple-output (MIMO) to simplify the user detection. We propose and compare a number of different user detection methods and find that non-negative least squares (NNLS) is well suited for the task at hand, as it achieves good results and has the benefit of not requiring any further parameters to be specified.
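The NNLS approach can be sketched as follows, with a made-up linear energy model: the dimensions, the nonnegative "signature" matrix, and the detection threshold are illustrative assumptions, not the paper's system parameters.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_users, n_obs = 50, 30            # candidate users vs. energy observations
A = rng.random((n_obs, n_users))   # hypothetical nonnegative energy signatures
x_true = np.zeros(n_users)
active = rng.choice(n_users, size=5, replace=False)
x_true[active] = rng.uniform(0.5, 1.5, size=5)
y = A @ x_true + 0.01 * rng.standard_normal(n_obs)  # noisy received energies

# NNLS enforces nonnegativity of the recovered energies and tends to produce
# sparse solutions without any tuning parameters beyond a detection threshold.
x_hat, resid_norm = nnls(A, y)
detected = np.flatnonzero(x_hat > 0.1)
print("truly active:", np.sort(active))
print("detected:    ", detected)
```

The appeal noted in the abstract shows up here: unlike sparsity-promoting methods with regularization weights, `nnls` takes no tuning parameter beyond the final threshold on the recovered energies.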
In this paper we study the benefits that Internet-of-Things (IoT) devices will have from connecting to a massive multiple-input multiple-output (MIMO) base station. In particular, we study how many users can be simultaneously spatially multiplexed and how much the range can be increased by deploying massive base station arrays. We also investigate how the devices can scale down their uplink power as the number of antennas grows while retaining their rates. We consider the uplink and utilize upper and lower bounds on known achievable rate expressions to study the effects of the massive arrays. We conduct a case study where we use simulations in the settings of existing IoT systems to draw realistic conclusions. We find that the gains which ultra-narrowband systems get from utilizing massive MIMO are limited by the bandwidth, and therefore those systems will not be able to spatially multiplex any significant number of users. We also conclude that the power scaling is highly dependent on the nominal signal-to-noise ratio (SNR) in the single-antenna case.
We present a survey of some recent developments for decompositions of multi-way arrays, or tensors, with special emphasis on results relevant for applications and modeling in signal processing. A central problem is how to find low-rank approximations of tensors, and we describe some new results, including numerical methods, algorithms, and theory, for the higher-order singular value decomposition (HOSVD) and the parallel factors expansion or canonical decomposition (CP expansion).
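The HOSVD mentioned above has a compact numerical construction: the mode-n factor matrices are the left singular vectors of the mode-n unfoldings, and the core tensor follows by multiplying the tensor with the conjugate-transposed factors along each mode. A standard sketch (not code from the survey):

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: mode-n fibers become the columns of a matrix.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, mode):
    # Multiply tensor T by matrix M along the given mode.
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T):
    # Factor matrices: left singular vectors of each unfolding.
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0] for n in range(T.ndim)]
    # Core tensor: T multiplied by U_n^H along each mode n.
    S = T
    for n, Un in enumerate(U):
        S = mode_product(S, Un.conj().T, n)
    return S, U

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 5, 6))
S, U = hosvd(T)

# Exact reconstruction: T = S x_1 U_1 x_2 U_2 x_3 U_3.
R = S
for n, Un in enumerate(U):
    R = mode_product(R, Un, n)
print(np.allclose(R, T))
```

Truncating the columns of each factor matrix turns this exact decomposition into the low-rank (Tucker) approximation discussed in the survey.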
Tensor modeling and algorithms for computing various tensor decompositions (most notably the Tucker/HOSVD and CP decompositions, as discussed here) constitute a very active research area in mathematics. Most of this research has been driven by applications. There is also much software available, including MATLAB toolboxes [4]. The objective of this lecture has been to provide an accessible introduction to the state of the art in the field, written for a signal processing audience. We believe that there is good potential to find further applications of tensor modeling techniques in the signal processing field.
Massive MIMO is a key technology for the upcoming fifth-generation cellular networks (5G), promising high spectral efficiency, low power consumption, and the use of cheap hardware to reduce costs. Previous work has shown how to create a distributed processing architecture, where each node in a network performs the computations related to one or more antennas. The required total number of antennas, M, at the base station depends on the number of simultaneously operating terminals, K. In this work, a flexible node architecture is presented, where the number of terminals can be traded for additional antennas at the same node. This means that the same node can be used with a wide range of system configurations. The computational complexity, along with the order in which to compute incoming and outgoing symbols, is explored.
Massive MIMO systems have received considerable attention in recent years as an enabler in future wireless communication systems. As the idea is based on having a large number of antennas at the base station, it is important to have both a scalable and distributed realization of such a system to ease deployment. Most work so far has focused on the theoretical aspects, although a few demonstrators have been reported. In this work, we propose a base station architecture based on connecting the processing nodes in a K-ary tree, allowing simple scalability. Furthermore, it is shown that most of the processing can be performed locally in each node. Further analysis of the node processing shows that it should be enough for each node to contain one or two complex multipliers and a few complex adders/subtractors operating at some hundred MHz. It is also shown that a communication link of some Gbps is required between the nodes, and, hence, it is fully feasible to have one or a few links between the nodes to cope with the communication requirements.
Neumann series expansion is a method for performing matrix inversion that has received a lot of interest in the context of massive MIMO systems. However, the computational complexity of the Neumann method is higher than that of the lowest-complexity exact matrix inversion algorithms, such as LDL decomposition, when the number of terms in the series is three or more. In this paper, the Neumann series expansion is analyzed from a computational perspective for cases when the complexity of performing exact matrix inversion is too high. By only partially computing the third term of the Neumann series, the computational complexity can be reduced. Three different preconditioning matrices are considered. Simulation results show that, when limiting the total number of operations performed, the BER performance of the three different preconditioning matrices is the same.
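A small numerical sketch of a diagonally preconditioned Neumann series for a massive MIMO Gram matrix (the matrix sizes, the choice of the diagonal preconditioner, and the three-term truncation are illustrative assumptions; this is not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(2)
K, M = 8, 128                      # users and base-station antennas
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
A = H.conj().T @ H                 # Gram matrix; diagonally dominant when M >> K

# Neumann series with a diagonal preconditioner D = diag(A):
#   A^{-1} = sum_{n>=0} (I - D^{-1} A)^n D^{-1}
D_inv = np.diag(1.0 / np.diag(A).real)
X = np.eye(K) - D_inv @ A

approx = D_inv.astype(complex)
term = D_inv.astype(complex)
for _ in range(2):                 # keep three terms in total (n = 0, 1, 2)
    term = X @ term
    approx = approx + term

exact = np.linalg.inv(A)
err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
print(f"relative error of the 3-term Neumann approximation: {err:.3f}")
```

The series converges quickly here because M is much larger than K, which makes the Gram matrix strongly diagonally dominant; the paper's point is that each additional term costs a full matrix product, so partially computing the third term trades accuracy for operations.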
In this paper we consider Millimeter wave (mmWave) Massive MIMO systems where a large antenna array at the base station (BS) serves a few scheduled terminals. The high dimensional null space of the channel matrix to the scheduled terminals is utilized to broadcast system information to the non-scheduled terminals on the same time-frequency resource. Our analysis reveals the interesting result that with a sufficiently large antenna array this non-orthogonal broadcast strategy requires significantly less total transmit power when compared to the traditional orthogonal strategy where a fraction of the total resource is reserved for broadcast of system information.
Wireless networks with many antennas at the base stations and multiplexing of many users, known as Massive MIMO systems, are key to handling the rapid growth of data traffic. As the number of users increases, the random access in contemporary networks will be flooded by user collisions. In this paper, we propose a reengineered random access protocol, coined strongest-user collision resolution (SUCR). It exploits the channel hardening feature of Massive MIMO channels to enable each user to detect collisions, determine how strong the contenders' channels are, and only keep transmitting if it has the strongest channel gain. The proposed SUCR protocol can quickly and distributively resolve the vast majority of all pilot collisions.
The massive multiple-input multiple-output (MIMO) technology has great potential to manage the rapid growth of wireless data traffic. Massive MIMO achieves tremendous spectral efficiency by spatially multiplexing many tens of user equipments (UEs). These gains are only achieved in practice if many more UEs can connect efficiently to the network than today. As the number of UEs increases, while each UE intermittently accesses the network, the random access functionality becomes essential to share the limited number of pilots among the UEs. In this paper, we revisit the random access problem in the Massive MIMO context and develop a reengineered protocol, termed strongest-user collision resolution (SUCRe). An accessing UE asks for a dedicated pilot by sending an uncoordinated random access pilot, with a risk that other UEs send the same pilot. The favorable propagation of massive MIMO channels is utilized to enable distributed collision detection at each UE, thereby determining the strength of the contenders' signals and deciding to repeat the pilot if the UE judges that its signal at the receiver is the strongest. The SUCRe protocol resolves the vast majority of all pilot collisions in crowded urban scenarios and continues to admit UEs efficiently in overloaded networks.
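The core decision rule of this kind of strongest-user resolution can be illustrated with a minimal, idealized sketch (noise-free gains assumed; in the actual protocol each UE estimates these quantities from the base station's precoded downlink response rather than knowing them exactly):

```python
def sucre_decisions(gains):
    """SUCRe-style distributed collision decision (idealized sketch).

    Each colliding UE k knows its own channel gain gains[k] and, thanks
    to channel hardening, can estimate the total gain of all contenders.
    It repeats the pilot only if its own gain exceeds the sum of the
    other contenders' gains, i.e. gains[k] > total - gains[k].
    """
    total = sum(gains)
    return [g > total - g for g in gains]
```

Since at most one gain can exceed the sum of all the others, at most one UE repeats the pilot and the collision is resolved; in the fully symmetric case no UE repeats, which is the residual failure mode the full protocol must handle.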
The data traffic in wireless networks is steadily growing. The long-term trend follows Cooper's law, where the traffic doubles every two-and-a-half years, and it will likely continue for decades to come. The data transmission is tightly connected with the energy consumption in the power amplifiers, transceiver hardware, and baseband processing. The relation is captured by the energy efficiency metric, measured in bit/Joule, which describes how much energy is consumed per correctly received information bit. While the data rate is fundamentally limited by the channel capacity, there is currently no clear understanding of how energy-efficient a communication system can become. Current research papers typically present values on the order of 10 Mbit/Joule, while previous network generations seem to operate at energy efficiencies on the order of 10 kbit/Joule. Is this roughly as energy-efficient as future systems (5G and beyond) can become, or are we still far from the physical limits? These questions are answered in this paper. We analyze different cases representing potential future deployments and hardware characteristics.
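The metric itself is straightforward to compute; a minimal sketch of the bit/Joule calculation, with illustrative numbers (not taken from the paper's analysis):

```python
def energy_efficiency(rate_bps, total_power_watts):
    """Energy efficiency in bit/Joule: correctly received bits per
    second divided by total consumed power in Watts (Joules/second)."""
    return rate_bps / total_power_watts
```

For example, a system delivering 100 Mbit/s while consuming 10 W in total operates at 10 Mbit/Joule, the order of magnitude quoted above for current research results.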
This paper considers three aspects of Massive MIMO (multiple-input multiple-output) communication networks that have received little attention in previous works, but are important to understand when designing and implementing this promising wireless technology. First, we analyze how bursty data traffic behaviors affect the system. Using a probabilistic model for intermittent user activity, we show that the spectral efficiency (SE) scales gracefully with reduced user activity. Then, we make an analytic comparison between synchronous and asynchronous pilot signaling, and prove that the choice between these has no impact on the SE. Finally, we provide an analytical and numerical study of the SE achieved with random network deployment.
Massive MIMO is a promising technique for increasing the spectral efficiency (SE) of cellular networks, by deploying antenna arrays with hundreds or thousands of active elements at the base stations and performing coherent transceiver processing. A common rule-of-thumb is that these systems should have an order of magnitude more antennas M than scheduled users K because the users' channels are likely to be near-orthogonal when M/K > 10. However, it has not been proved that this rule-of-thumb actually maximizes the SE. In this paper, we analyze how the optimal number of scheduled users K* depends on M and other system parameters. To this end, new SE expressions are derived to enable efficient system-level analysis with power control, arbitrary pilot reuse, and random user locations. The value of K* in the large-M regime is derived in closed form, while simulations are used to show what happens at finite M, in different interference scenarios, with different pilot reuse factors, and for different processing schemes. Up to half the coherence block should be dedicated to pilots and the optimal M/K is less than 10 in many cases of practical relevance. Interestingly, K* depends strongly on the processing scheme and hence it is unfair to compare different schemes using the same K.
Massive MIMO is a promising technique to increase the spectral efficiency of cellular networks, by deploying antenna arrays with hundreds or thousands of active elements at the base stations and performing coherent beamforming. A common rule-of-thumb is that these systems should have an order of magnitude more antennas, N, than scheduled users, K, because the users' channels are then likely to be quasi-orthogonal. However, it has not been proved that this rule-of-thumb actually maximizes the spectral efficiency. In this paper, we analyze how the optimal number of scheduled users, K*, depends on N and other system parameters. The value of K* in the large-N regime is derived in closed form, while simulations are used to show what happens at finite N, in different interference scenarios, and for different beamforming schemes.
Wireless communications is one of the most successful technologies in modern times, given that an exponential growth rate in wireless traffic has been sustained for over a century (known as Cooper's law). This trend will certainly continue, driven by new innovative applications; for example, augmented reality and the Internet of Things. Massive MIMO has been identified as a key technology to handle orders of magnitude more data traffic. Despite the attention it is receiving from the communication community, we have personally witnessed that Massive MIMO is subject to several widespread misunderstandings, as epitomized by the following (fictional) abstract: "The Massive MIMO technology uses a nearly infinite number of high-quality antennas at the base stations. By having at least an order of magnitude more antennas than active terminals, one can exploit asymptotic behaviors that some special kinds of wireless channels have. This technology looks great at first sight, but unfortunately the signal processing complexity is off the charts and the antenna arrays would be so huge that it can only be implemented in millimeter-wave bands." These statements are, in fact, completely false. In this overview article, we identify 10 myths and explain why they are not true. We also ask a question that is critical for the practical adoption of the technology and which will require intense future research activities to answer properly. We provide references to key technical papers that support our claims, while a further list of related overview and technical papers can be found at the Massive MIMO Info Point: http://massivemimo.eu
Distributed massive multiple-input multiple-output (MIMO) combines the array gain of coherent MIMO processing with the proximity gains of distributed antenna setups. In this paper, we analyze how transceiver hardware impairments affect the downlink with maximum ratio transmission. We derive closed-form spectral efficiency expressions and study their asymptotic behavior as the number of antennas increases. We prove a scaling law on the hardware quality, which reveals that massive MIMO is resilient to additive distortions, while multiplicative phase noise is a limiting factor. It is also better to have a separate oscillator at each antenna than one per BS.
The use of base stations (BSs) and access points (APs) with a large number of antennas, called Massive MIMO (multiple-input multiple-output), is a key technology for increasing the capacity of 5G networks and beyond. While originally conceived for conventional sub-6 GHz frequencies, Massive MIMO (mMIMO) is also ideal for frequency bands in the range 30-300 GHz, known as millimeter wave (mmWave). Despite conceptual similarities, the way in which mMIMO can be exploited in these bands is radically different, due to their specific propagation behaviors and hardware characteristics. This article reviews these differences and their implications, while dispelling common misunderstandings. Building on this foundation, we suggest appropriate signal processing schemes and use cases to efficiently exploit mMIMO in both frequency bands.
The rate and energy efficiency of wireless channels can be improved by deploying software-controlled metasurfaces to reflect signals from the source to the destination, especially when the direct path is weak. While previous works mainly optimized the reflections, this letter compares the new technology with classic decode-and-forward (DF) relaying. The main observation is that very high rates and/or large metasurfaces are needed to outperform DF relaying, both in terms of minimizing the total transmit power and maximizing the energy efficiency, which also includes the dissipation in the transceiver hardware.
In this work, we consider spectrum sensing of OFDM signals. We deal with the inevitable problem of a carrier frequency offset, and propose modifications to some state-of-the-art detectors to cope with it. Moreover, the (modified) detectors are implemented using GNU Radio and USRP, and evaluated over a physical radio channel. Measurements show that all of the evaluated detectors perform quite well, and the preferred choice of detector depends on the detection requirements and the radio environment.
EDGE (enhanced data rates for global evolution) is one of the future wireless communication systems, offering high bit rates and packet data services. For data services, an increased link quality will directly translate into improved throughput. A well-known technique that improves link performance is antenna diversity. Antenna diversity also enables interference-cancellation methods, which are evaluated in this paper. The conclusion is that introducing interference rejection in a GSM/EDGE classic scenario could increase the average user bit rate by 26%. In a TDMA/EDGE compact scenario, the average bit rate increase could be as high as 46% due to the time-synchronized structure.
This paper studies the potential performance improvement from cooperative diversity transmission in a cellular network. We consider a simplified cooperative relaying system which allows the mobile terminals to relay data packets from their partners. We obtain numerical results for the outage probability, taking into account log-distance path loss, spatially correlated shadowing, and small-scale fading. A comparison with conventional macrodiversity is performed for various scenarios of interest. The simulation results demonstrate the superiority of cooperative diversity transmission.
We consider the problem of minimizing the packet drop probability (PDP) under an average transmit power constraint for Chase combining (CC)-based hybrid-automatic repeat request (HARQ) schemes in correlated Rayleigh fading channels. We propose a method to find a solution to the non-convex optimization problem using an exact expression of the outage probability. However, the complexity of this method is high. Therefore, we propose an alternative approach in which we use an asymptotically equivalent expression for the outage probability and reformulate it as a geometric programming problem (GPP), which can be efficiently solved using convex optimization algorithms.