Adversarial Attack on DL-based Massive MIMO CSI Feedback

Qing Liu, Jiajia Guo, Chao-Kai Wen, and Shi Jin


Abstract: With the increasing application of deep learning (DL) algorithms in wireless communications, the physical layer faces new challenges caused by adversarial attacks. Such attacks have significantly affected neural networks in computer vision. We choose DL-based channel state information (CSI) feedback to show the effect of adversarial attacks on a DL-based communication system. We present a practical method to craft a white-box adversarial attack on the DL-based CSI feedback process. Our simulation results show the destructive effect that an adversarial attack has on DL-based CSI feedback by analyzing the normalized mean square error performance. We also launch a jamming attack for comparison and find that the jamming attack can be prevented with certain precautions. As DL algorithms become the trend in developing wireless communication, this work raises concerns regarding the security of DL-based algorithms.

Keywords: adversarial attack, CSI feedback, deep learning, wireless security


I. Introduction

DEEP learning (DL) is a promising technology for the sixth generation (6G) communication system [1]. DL-based algorithms are used to handle the huge volume of data produced in massive multiple-input multiple-output (MIMO) systems, and they can effectively optimize end-to-end performance without the need for pre-defined mathematical models [2], [3]. Owing to these advantages, DL-based algorithms are broadly applied in the physical layer, e.g., for channel estimation, modulation recognition, and channel state information (CSI) feedback [4]–[6].

Although DL algorithms are used increasingly, their natural fragility makes them susceptible to adversarial attacks, as first observed in computer vision. The authors in [7] presented the fast gradient method (FGM) to generate adversarial examples, which can lead to misclassification in neural network (NN)-based image classifiers. As DL becomes increasingly popular in wireless communication, attention is paid to network designs that improve transmission rate, while little attention is given to the security of the DL-based physical layer. It was first shown in [8] that a DL-based modulation recognition network suffers from adversarial attacks via FGM. The same research team further improved FGM while launching an adversarial attack on an autoencoder-based end-to-end communication system in [9].

However, FGM is limited to attacks on classification tasks. Attacking a reconstruction task demands a different design of adversarial perturbation. Inspired by [10], which showed that adversarial attacks endanger DL-based classifiers and maliciously lead to false feature extraction from images, we explore the threats that DL-based CSI feedback faces under adversarial attack. Our design is based on the similarity between feature extraction in computer vision and information compression in communication systems.

In this paper, we study the security of DL-based CSI feedback under adversarial attack. We launch a white-box adversarial attack with a well-designed perturbation on a DL-based massive MIMO CSI feedback network, called CsiNet, which was proposed in [6]. We then compare the output of CsiNet after the adversarial attack with the original CSI to evaluate the influence of the attack. As a comparison, we carry out a jamming attack by adding white Gaussian noise during transmission. Our specific contributions are as follows. First, we show that an adversarial attack causes a devastating impact on the CSI feedback process compared with a jamming attack. Second, we train the original model over an additive white Gaussian noise (AWGN) channel and find that training in this scenario can efficiently protect CsiNet against white Gaussian noise; nonetheless, the adversarial attack still prevents CsiNet from functioning properly. Third, because the encoder of CsiNet compresses CSI into codewords of different dimensions, we compare CsiNet trained with different compression rates and find that the network with a lower compression rate presents better robustness against adversarial perturbations. Finally, we conduct experiments in indoor and outdoor environments and find that the network is endangered in both scenarios.

The remainder of this study is organized as follows. Section II introduces the system model and the CSI feedback that our experiments are based on. Section III gives a brief introduction of adversarial attack and proposes a method to attack CsiNet in detail. The simulation results and analysis are presented in Section IV. The conclusion is given in Section V.

Fig. 1.
Architecture of CsiNet: The encoder is constructed with convolutional, reshape, and fully connected layers; the decoder consists of fully connected and reshape layers followed by two RefineNet units connected in series. The RefineNet unit is shown in detail in a separate block.


II. System Model

A. Massive MIMO System

We consider a system with [TeX:] $$N_{t}$$ antennas at the base station (BS) and a single antenna at the user equipment (UE), utilizing orthogonal frequency division multiplexing (OFDM) with [TeX:] $$N_{s}$$ subcarriers. The received signal on the nth subcarrier at the UE is given as follows.

[TeX:] $$y_{n}=\tilde{\mathbf{h}}_{n}^{H} \mathbf{v}_{n} x_{n}+z_{n}$$

where [TeX:] $$x_{n}$$ represents the transmitted signal, [TeX:] $$z_{n}$$ is the additive noise, and [TeX:] $$\tilde{\mathbf{h}}_{n} \in \mathbb{C}^{N_{t} \times 1}$$ and [TeX:] $$\mathbf{v}_{n} \in \mathbb{C}^{N_{t} \times 1}$$ are the channel frequency response and the precoding vector, respectively, with [TeX:] $$n=1,2, \cdots, N_{s}$$. The downlink CSI, stacked in the spatial-frequency domain, can be described as [TeX:] $$\tilde{\mathbf{H}}=\left[\tilde{\mathbf{h}}_{1}, \tilde{\mathbf{h}}_{2}, \cdots, \tilde{\mathbf{h}}_{N_{s}}\right]^{H} \in \mathbb{C}^{N_{s} \times N_{t}}$$. To reach high-quality downlink transmission, the BS can design the precoding vectors with knowledge of the downlink CSI. In a frequency division duplex (FDD) system, where the downlink and uplink channels lack reciprocity, the downlink CSI is first estimated at the UE and then fed back to the BS. The CSI feedback process leads to a large overhead and occupies precious bandwidth. Therefore, a 2D discrete Fourier transform (DFT) is used to transform [TeX:] $$\tilde{\mathbf{H}}$$ into the angular-delay domain to reduce the feedback overhead.

[TeX:] $$\mathbf{H}=\mathbf{F}_{d} \tilde{\mathbf{H}} \mathbf{F}_{a}^{H}$$

where [TeX:] $$\mathbf{F}_{d}$$ and [TeX:] $$\mathbf{F}_{a}$$ are [TeX:] $$N_{s} \times N_{s}$$ and [TeX:] $$N_{t} \times N_{t}$$ DFT matrices, respectively. The transformation sparsifies H in the angular-delay domain.

As the time delay between multipath arrivals lies within a limited period, only the first [TeX:] $$N_{c}$$ rows of H exhibit non-zero values, while the remaining rows tend to be close to zero. Therefore, the practical channel matrix H is truncated to [TeX:] $$N_{c} \times N_{t}$$, which effectively reduces the number of parameters to be fed back without degrading transmission quality.
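As a concrete illustration, the angular-delay transform in (2) and the row truncation can be sketched in NumPy. The random matrix below merely stands in for a real channel realization (e.g., from COST 2100), and the dimensions follow the simulation setup later in the paper ([TeX:] $$N_s = 1024$$, [TeX:] $$N_t = 32$$, [TeX:] $$N_c = 32$$).

```python
import numpy as np

# Sketch of H = F_d @ H_tilde @ F_a^H followed by truncation to the first
# Nc rows. Unitary (1/sqrt(N)-scaled) DFT matrices are used so the
# transform is invertible and power-preserving.
Ns, Nt, Nc = 1024, 32, 32
rng = np.random.default_rng(0)
H_tilde = rng.standard_normal((Ns, Nt)) + 1j * rng.standard_normal((Ns, Nt))

F_d = np.fft.fft(np.eye(Ns)) / np.sqrt(Ns)   # Ns x Ns unitary DFT matrix
F_a = np.fft.fft(np.eye(Nt)) / np.sqrt(Nt)   # Nt x Nt unitary DFT matrix

H = F_d @ H_tilde @ F_a.conj().T             # angular-delay domain
H_trunc = H[:Nc, :]                          # keep the first Nc delay rows
print(H_trunc.shape)                         # (32, 32)
```

Because the scaled DFT matrices are unitary, the original spatial-frequency CSI can be recovered exactly from the untruncated H, which is why the truncation (not the transform itself) is what reduces the feedback payload.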

B. CSI Feedback

Two main approaches to CSI feedback are available. One is digital CSI feedback, which compresses the CSI and applies quantization to generate a bit stream for uplink transmission [11]. The other is analog CSI feedback, which avoids quantization by transmitting the downlink CSI through the uplink channel using unquantized quadrature-amplitude modulation [12]. An adversarial attack on digital CSI feedback is similar to an attack on a digital end-to-end communication system, which was proposed in [9]. Therefore, we present a method to launch an adversarial attack on analog CSI feedback. CSI feedback demands a well-designed CSI sensing and recovery mechanism, which can be achieved by a DL-based algorithm [6].

The downlink CSI after the 2D DFT can be visualized as an image whose gray-scale values represent the normalized absolute values of the CSI. Since the autoencoder has proven to be an efficient model for image reconstruction, the similarity between an autoencoder and a communication system is exploited by treating the encoder as the transmitter and the decoder as the receiver.

We denote the encoder by [TeX:] $$f_{\mathrm{en}}: \mathbf{H} \rightarrow \mathbf{s} \in \mathbb{C}^{M \times 1}$$, where H is the CSI matrix given as input and s is the encoder's output, an M-dimensional vector. The encoder extracts latent representations from the original input, which are then reconstructed into the output by a decoder denoted by [TeX:] $$f_{\mathrm{de}}: \mathbf{s} \rightarrow \hat{\mathbf{H}}$$. Concatenating an encoder with a decoder forms an autoencoder, which is trained to optimize end-to-end performance with the following loss function.

[TeX:] $$\min \left\|f_{\mathrm{de}}\left(f_{\mathrm{en}}(\mathbf{H})\right)-\mathbf{H}\right\|_{2}^{2}$$

An autoencoder-based NN, called CsiNet, was proposed in [6] to feed back the downlink CSI. The structure of CsiNet is given in Fig. 1. The encoder, placed at the UE, compresses the CSI with one convolutional layer that generates two feature maps, followed by a fully connected layer that compresses the feature maps into a codeword. The compression rate is defined as follows.

[TeX:] $$\gamma=M / N, \text { with } N=2 N_{c} \times N_{t}$$

The decoder at the BS reconstructs the original CSI from the codeword. The codeword is fed into a fully connected layer that reverses the transformation made by the last layer of the encoder. The output of the fully connected layer is then refined by two RefineNet units connected in series, followed by a final convolutional layer. The encoder and decoder are trained jointly as a complete autoencoder to accomplish end-to-end optimization before being placed separately at the UE and the BS.
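To make the autoencoder formulation concrete, the following NumPy sketch trains a deliberately simplified linear encoder/decoder pair on surrogate data with the loss in (3). CsiNet itself uses convolutional, fully connected, and RefineNet layers, so this is only a minimal stand-in under assumed dimensions (a small [TeX:] $$N_c = N_t = 8$$ here for speed), not the actual network.

```python
import numpy as np

# Minimal linear stand-in for an encoder/decoder pair: compress an
# N-dimensional real-valued CSI vector to an M-dimensional codeword with
# gamma = M/N = 1/4, reconstruct it, and train both weight matrices jointly
# on the loss min ||f_de(f_en(H)) - H||^2 by plain gradient descent.
rng = np.random.default_rng(0)
Nc, Nt = 8, 8
N = 2 * Nc * Nt          # real and imaginary parts stacked
M = N // 4               # compression rate gamma = 1/4

W_en = rng.standard_normal((M, N)) / np.sqrt(N)  # encoder weights
W_de = rng.standard_normal((N, M)) / np.sqrt(N)  # decoder weights (small init)
X = rng.standard_normal((256, N))                # surrogate CSI samples

losses = []
lr = 0.02
for _ in range(300):
    S = X @ W_en.T                          # encode: (batch, M) codewords
    X_hat = S @ W_de.T                      # decode: (batch, N) reconstructions
    err = X_hat - X
    losses.append(np.mean(np.sum(err ** 2, axis=1)))
    g_de = 2 * err.T @ S / len(X)           # dLoss/dW_de
    g_en = 2 * (err @ W_de).T @ X / len(X)  # dLoss/dW_en
    W_de -= lr * g_de
    W_en -= lr * g_en
```

Because the codeword has only M = N/4 entries, even a perfectly trained linear pair cannot reconstruct arbitrary inputs exactly; the loss decreases toward the best rank-M approximation of the data, mirroring the lossy compression that CsiNet performs.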


III. Adversarial Attack

In this section, we present the details of launching a white-box adversarial attack on CsiNet after a brief explanation of adversarial attacks.

A. Adversarial Attack in Theory

It was first discovered in computer vision that an intentionally crafted, imperceptible perturbation added to the original input can generate an adversarial example, which leads to misclassification with high confidence. The key to launching an adversarial attack is to find such an adversarial perturbation, which can in theory be done by solving the following problem.

[TeX:] $$\begin{array}{cl} \max _{\mathbf{z}} & \left\|f_{\mathrm{ae}}(\mathbf{x}+\mathbf{z})-\mathbf{x}\right\|_{2}^{2} \\ \text { s.t. } & \|\mathbf{z}\|_{1} \leq \delta \end{array}$$

where [TeX:] $$f_{\mathrm{ae}}(\cdot): \mathbf{x} \rightarrow \hat{\mathbf{x}}$$ denotes the autoencoder, x is the original input, z is an adversarial perturbation added directly to the input, and [TeX:] $$\delta$$ constrains the size of the perturbation. In computer vision, the [TeX:] $$L_{1}$$ norm is used to keep the variation imperceptible to humans; the constraint can be changed to other forms, such as the [TeX:] $$L_{2}$$ norm, to fit other kinds of requirements. In wireless communication, the [TeX:] $$L_{2}$$ norm measures signal power. Hence, we adopt the [TeX:] $$L_{2}$$ norm to limit the power of the perturbation. Moreover, (5) can be altered by adding a regularization term that keeps the adversarial perturbation within the limit as follows.

[TeX:] $$\max _{\mathbf{z}}\left\|f_{\mathrm{ae}}(\mathbf{x}+\mathbf{z})-\mathbf{x}\right\|_{2}^{2}-\epsilon\|\mathbf{z}\|_{2}$$

where [TeX:] $$\epsilon$$ is a scaling factor.

Several methods can be applied to solve (5) or (6). According to [7], L-BFGS-B, a limited-memory algorithm for solving large nonlinear optimization problems subject to simple bounds on the variables [13], performs well on problems like (5). Reference [7] also proposed FGM to generate adversarial perturbations that successfully lead to misclassification. However, a reconstruction task has no specific class label to attack, which makes FGM infeasible for generating adversarial perturbations against autoencoders; the attack must instead target the whole reconstruction. To find such a perturbation, [14] showed that the parameters of an additive bias can be turned into a well-functioning perturbation through the self-update of the NN. Inspired by [14], we craft an adversarial attack against DL-based CSI feedback, which is introduced in detail in the next subsection.
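For intuition, the constrained problem in (5) can be approached with projected gradient ascent. The sketch below uses a toy linear "autoencoder" [TeX:] $$f_{\mathrm{ae}}(\mathbf{x}) = \mathbf{A}\mathbf{x}$$, with a low-rank matrix standing in for a trained lossy network, so the gradient is available in closed form. It illustrates the optimization only; it is not the exact procedure of [7] or [14], and all dimensions are assumptions.

```python
import numpy as np

# Projected gradient ascent for max_z ||f_ae(x+z) - x||^2 s.t. ||z||_2 <= delta.
# For the toy linear autoencoder f_ae(x) = A @ x, the objective's gradient is
# 2 A^T (A(x+z) - x); after each ascent step, z is projected back onto the
# L2 ball of radius delta.
rng = np.random.default_rng(1)
n, k = 64, 16
U = rng.standard_normal((n, k)) / np.sqrt(n)
A = U @ U.T                          # low-rank, hence lossy, "autoencoder"

x = rng.standard_normal(n)
delta = 0.1 * np.linalg.norm(x)      # power budget for the perturbation
z = np.zeros(n)

def recon_err(z):
    r = A @ (x + z) - x
    return float(r @ r)

for _ in range(200):
    grad = 2 * A.T @ (A @ (x + z) - x)
    z = z + 0.05 * grad              # gradient ascent step
    norm = np.linalg.norm(z)
    if norm > delta:
        z *= delta / norm            # project onto the constraint set
```

After the loop, the reconstruction error under the perturbed input exceeds the unperturbed error while the perturbation stays inside the power budget, which is exactly the trade-off (5) formalizes.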

B. Adversarial Attack on CsiNet

We propose a white-box adversarial attack on DL-based CSI feedback, assuming that the codeword is otherwise received perfectly by the BS. Due to the broadcast nature of wireless communication, physical-layer transmission can be endangered by a malicious attacker. To study this security issue, we craft the adversarial attack during the transmission of the compressed codewords between the encoder and the decoder, whereas [14] added the perturbation directly to the inputs of the autoencoder, which can hardly be accomplished in a wireless communication system. We model an attacker that simultaneously transmits an adversarial perturbation, which is added to the transmitted data as follows.

[TeX:] $$\widetilde{\mathbf{s}}=\mathbf{s}+\mathbf{p}$$

where p denotes the adversarial perturbation. Our goal is to generate a constant perturbation that, when added to the transmitted codewords, disables the decoder at the BS. The BS would then fail to reconstruct the downlink CSI, which could further harm the communication system.

We model the attacker by a bias layer1, which performs the following calculation.

1 We assume that the channel between the attacker and the BS is known to the attacker in advance. Therefore, the channel coefficients can be compensated before the perturbation is sent.

[TeX:] $$\mathbf{y}=g(\mathbf{s}+\mathbf{p})$$

where the activation [TeX:] $$g(\cdot)$$ is set to be linear, such that only the bias, represented by p, is updated during back propagation.

We adopt a two-step training strategy by adding the bias layer between the encoder and decoder of CsiNet. First, we initialize the parameters of the bias layer to zero and set them to be non-trainable, then train a functioning autoencoder for the CSI feedback task using the loss function in (3). After the model is trained, we fix its weights and start to train the bias layer using the following loss function.

[TeX:] $$\max _{\mathbf{p}}\left\|f_{\mathrm{de}}(\mathbf{s}+\mathbf{p})-\mathbf{H}\right\|_{2}^{2}$$

To keep the power of the perturbation within the limit, a proper constraint is needed. We preset a perturbation-to-signal ratio (PSR), which relates the perturbation power to the codeword power through the following equation.

[TeX:] $$\mathrm{PSR}=\|\mathbf{p}\|_{2}^{2} /\|\mathbf{s}\|_{2}^{2}$$

Hence, the parameters of the bias layer form an M-dimensional vector that is used as the additive adversarial perturbation to attack CsiNet. The epochs, number of training samples, and learning rate of each training step are given in Table 1.
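The second training step can be sketched in NumPy as follows. A fixed random linear decoder stands in for CsiNet's trained decoder, and the dimensions (M = 512 codeword entries, N = 2048 CSI entries) are illustrative assumptions; the perturbation p is trained by gradient ascent on the loss in (9) and rescaled after each step so that the PSR constraint in (10) holds.

```python
import numpy as np

# Second training step: with the encoder/decoder fixed, a single additive
# perturbation p on the codeword s is trained by gradient ascent on
# max ||f_de(s + p) - H||^2 and rescaled to a target PSR. A fixed random
# linear map D is a surrogate for CsiNet's decoder.
rng = np.random.default_rng(2)
M, N, B = 512, 2048, 64
D = rng.standard_normal((N, M)) / np.sqrt(M)   # surrogate fixed decoder
S = rng.standard_normal((B, M))                # batch of codewords
H = S @ D.T                                    # CSI the decoder reconstructs

psr_db = -10.0                                 # target PSR = ||p||^2 / ||s||^2
budget = np.sqrt(10 ** (psr_db / 10) * np.mean(np.sum(S ** 2, axis=1)))

p = rng.standard_normal(M) * 1e-3              # small random initialization
for _ in range(100):
    err = (S + p) @ D.T - H                    # decoder output minus true CSI
    grad = 2 * err.sum(axis=0) @ D / B         # ascent direction for p
    p = p + 0.1 * grad
    p *= min(1.0, budget / np.linalg.norm(p))  # enforce the PSR budget
```

Under the power constraint, the ascent drives p toward the codeword directions the decoder amplifies most, so a single fixed perturbation degrades the reconstruction of every codeword in the batch rather than any one sample.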

After the two-step training is finished, we feed test data into the encoder to collect codewords, to which the previously trained perturbation is added. The tampered codewords are sent to the decoder to finish the reconstruction. Normalized mean square error (NMSE) is an effective criterion for assessing the performance of CSI feedback. Therefore, we compare the outputs of the decoder with the original CSI by calculating the NMSE between them as follows.

Table 1.
Training parameters.

Fig. 2.
NMSE of CsiNet versus SNR with adversarial and jamming attacks for the indoor scenario with γ set to 1/4.

[TeX:] $$\mathrm{NMSE}=\mathrm{E}\left\{\|\mathbf{H}-\hat{\mathbf{H}}\|_{2}^{2} /\|\mathbf{H}\|_{2}^{2}\right\}$$

To evaluate the effect of the adversarial attack, we also launch a jamming attack on CsiNet and obtain its outputs for the NMSE calculation. For the jamming attack, we generate white Gaussian noise with the same power as the adversarial perturbation and add it to the transmitted signal. We compare the NMSE performance of CsiNet under the two kinds of attack at the same PSR and set the NMSE performance of CsiNet without attack as the baseline.
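The NMSE criterion in (10) and the power matching used for the jamming attack can be sketched as follows; the arrays here are random placeholders rather than actual CsiNet outputs.

```python
import numpy as np

# NMSE of (10) reported in dB, plus jamming noise drawn with the same power
# as a given adversarial perturbation so both attacks share the same PSR.
def nmse_db(H, H_hat):
    num = np.sum(np.abs(H - H_hat) ** 2, axis=(1, 2))
    den = np.sum(np.abs(H) ** 2, axis=(1, 2))
    return 10 * np.log10(np.mean(num / den))

rng = np.random.default_rng(3)
H = rng.standard_normal((100, 32, 32))           # placeholder true CSI
H_hat = H + 0.1 * rng.standard_normal(H.shape)   # placeholder reconstruction
print(nmse_db(H, H_hat))                         # roughly -20 dB for 10% noise

p = rng.standard_normal(512)                     # adversarial perturbation
jam = rng.standard_normal(512)
jam *= np.linalg.norm(p) / np.linalg.norm(jam)   # match perturbation power
```

Matching the jamming noise power to the perturbation norm is what makes the NMSE curves of the two attacks directly comparable at each PSR value.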

CsiNet is first trained in an ideal scenario without noise. We then alter the training scenario by adding an AWGN channel between the UE and the BS, and study whether the robustness of CsiNet against adversarial perturbations can be enhanced by this precaution of adding random Gaussian noise of different powers to the codewords during training. In our experiments, we use the signal-to-noise ratio (SNR) to set the power of the Gaussian noise in the channel during training, where the signal refers to the codeword s. We launch a jamming attack on the newly trained CsiNet for comparison. Furthermore, we extend our experiments to CsiNet with different compression rates. Considering that practical scenarios are complicated, we use both indoor and outdoor CSI datasets.


IV. Simulation Results and Analysis

We use the COST 2100 channel model [15] to generate two types of CSI datasets, for indoor and outdoor scenarios. We set the carrier frequency to 5.3 GHz indoors and 300 MHz outdoors.

Fig. 3.
NMSE of CsiNet trained in an AWGN channel versus SNR with adversarial attack for the indoor scenario with γ set to 1/4.

We place a ULA with 32 antennas at the BS and a single antenna at the UE in an OFDM system with 1024 subcarriers. Due to the sparsity of the massive MIMO-OFDM channel, the practical complex channel matrix is truncated to [TeX:] $$32 \times 32$$ after being transformed into the angular-delay domain. The training and validation datasets of the first-step training contain 100,000 and 30,000 samples, respectively. The training dataset of the second-step training contains 30,000 samples. An additional 20,000 samples are generated as the testing dataset.

We first train CsiNet on the indoor dataset in an ideal scenario without natural noise and with a compression rate of 1/4. Adversarial and jamming attacks are launched in succession, and the NMSE performance of CsiNet under each kind of attack is given in Fig. 2. The horizontal dashed line represents the NMSE performance of CsiNet without attack. As shown in Fig. 2, CsiNet presents a significantly higher NMSE under the adversarial attack than under the jamming attack; hence, the adversarial attack has a more destructive influence on CsiNet for the same PSR. Moreover, the jamming attack becomes less of a threat as the noise power drops, whereas the adversarial attack keeps a steadily destructive influence even when the power of the perturbation is small. The results in Fig. 2 assume that the BS receives intact codewords. However, wireless transmission is fragile because of natural noise, so physical-layer blocks should be designed with the complex channel state in mind to enhance network robustness. Hence, we retrain CsiNet over an AWGN channel with the SNR set to 10 and 20 dB. Adversarial and jamming attacks are launched on the two new models, and the results are given in Fig. 3. To compare the NMSE performance of the new models under attack with the previously trained models, we again choose a compression rate of 1/4 and the indoor CSI dataset.

Fig. 3 shows that the NMSE of the new models under the jamming attack approaches the baseline as the PSR drops. The NMSE performance of the models trained over the AWGN channel indicates that this precaution can effectively enhance the robustness of CsiNet against additive Gaussian noise.

Fig. 4.
NMSE of CsiNet trained with different compression rates versus SNR with adversarial attack for the indoor scenario.

However, Fig. 3 also shows that the NMSE of the new models under the adversarial attack decreases only slightly compared with Fig. 2. When the PSR is set to -30 dB, the NMSE of CsiNet under the adversarial attack is 8.94 dB and 14.03 dB higher than the baseline in the scenarios where the SNR is set to 10 dB and 20 dB, respectively. Compared with CsiNet trained without Gaussian noise, the new models show only slight resistance against the adversarial attack, and a low-power adversarial perturbation still prevents CsiNet from functioning properly. Therefore, the adversarial attack can effectively disable the CSI feedback despite precautionary measures.

The previous experiments use only a compression rate of 1/4, but CsiNet is designed with more than one compression rate to deal with different scenarios. We extend our experiments by attacking models with compression rates of 1/4, 1/16, and 1/32 on the indoor CSI dataset. The results are given in Fig. 4. The network with a lower compression rate has NMSE performance under the adversarial attack that is closer to the baseline; simultaneously, its NMSE under the jamming attack drops quickly to the baseline. The results show that the network with a lower compression rate exhibits slight superiority in resisting both adversarial and jamming attacks. We interpret this phenomenon through the nature of a reconstruction network: Reconstruction relies on the representations extracted from the original input and on the trained parameters. When fewer representations are extracted, CsiNet is forced to rely more on the model parameters than on the inputs; hence, CsiNet is less sensitive to variations of the latent representations.

Considering the complexity of the practical channel state, we conduct experiments on the indoor and outdoor CSI datasets to study whether an adversarial attack can endanger CSI feedback in different scenarios. We train CsiNet on the indoor and outdoor CSI datasets over the AWGN channel, with the SNR set to 10 and 20 dB. Taking the model trained with a compression rate of 1/4 as an example, we compare the NMSE performance of CsiNet trained for the different scenarios under adversarial and jamming attacks.

Fig. 5.
NMSE of CsiNet trained for the outdoor scenario versus SNR with adversarial and jamming attacks with γ set to 1/4.

Fig. 5 shows that the NMSE of CsiNet trained on the outdoor CSI dataset is severely influenced by the adversarial attack compared with the jamming attack. In summary, CSI feedback suffers great threats under various circumstances, which draws our attention to real-world environments where the DL-based physical layer would be exposed to severe threats from malicious attackers.


V. Conclusion

We found that DL-based CSI feedback can be guarded against random noise by considering noise during training. However, adversarial perturbations still endangered CsiNet despite this precaution. Due to the broadcast nature of wireless communication, transmitted data can easily be tampered with by malicious attackers using adversarial perturbations. With this work, we hope to raise concerns about the security of the DL-based physical layer. Further studies on highly secure DL-based communication systems are needed.


Qing Liu

Qing Liu received the B.S. degree from the Harbin Institute of Technology, Weihai, China, in 2018. She is currently pursuing the M.S. degree in Cyber Science and Engineering with Southeast University, China. Her current research interests include the security of deep learning-based communication systems.


Jiajia Guo

Jiajia Guo received the B.S. degree from the Nanjing University of Science and Technology, Nanjing, China, in 2016, and the M.S. degree from the University of Science and Technology of China, Hefei, China, in 2019. He is currently pursuing the Ph.D. degree in Information and Communications Engineering with Southeast University, China. His current research interests include deep learning, neural network compression, and CSI feedback in massive MIMO.


Chao-Kai Wen

Chao-Kai Wen (S'00-M'04) received the Ph.D. degree from the Institute of Communications Engineering, National Tsing Hua University, Taiwan, in 2004. He was with the Industrial Technology Research Institute, Hsinchu, Taiwan, and MediaTek Inc., Hsinchu, Taiwan, from 2004 to 2009. Since 2009, he has been with National Sun Yat-sen University, Taiwan, where he is a Professor of the Institute of Communications Engineering. His research interests center on optimization in wireless multimedia networks.


Shi Jin

Shi Jin (S'06-M'07-SM'17) received the B.S. degree in Communications Engineering from Guilin University of Electronic Technology, Guilin, China, in 1996, the M.S. degree from Nanjing University of Posts and Telecommunications, Nanjing, China, in 2003, and the Ph.D. degree in Information and Communications Engineering from Southeast University, Nanjing, in 2007. From June 2007 to October 2009, he was a Research Fellow with the Adastral Park Research Campus, University College London, London, U.K. He is currently with the faculty of the National Mobile Communications Research Laboratory, Southeast University. His research interests include space-time wireless communications, random matrix theory, and information theory. He serves as an Associate Editor for the IEEE Transactions on Wireless Communications, IEEE Communications Letters, and IET Communications. Dr. Jin and his co-authors received the 2011 IEEE Communications Society Stephen O. Rice Prize Paper Award in the field of communication theory and a 2010 Young Author Best Paper Award from the IEEE Signal Processing Society.


  • 1 P. Yang, Y. Xiao, M. Xiao, and S. Li, "6G wireless communications: Vision and potential techniques," IEEE Netw., vol. 33, no. 4, pp. 70-75, July 2019.
  • 2 T. Wang et al., "Deep learning for wireless physical layer: Opportunities and challenges," China Commun., vol. 14, no. 11, pp. 92-111, Nov. 2017.
  • 3 Z. Qin, H. Ye, G. Y. Li, and B.-H. F. Juang, "Deep learning in physical layer communications," IEEE Wireless Commun., vol. 26, no. 2, pp. 93-99, 2019.
  • 4 H. Ye, G. Y. Li, and B.-H. Juang, "Power of deep learning for channel estimation and signal detection in OFDM systems," IEEE Wireless Commun. Lett., vol. 7, no. 1, pp. 114-117, Feb. 2018.
  • 5 T. O'Shea, J. Corgan, and T. C. Clancy, "Convolutional radio modulation recognition networks," in Proc. EANN, 2016, pp. 213-226.
  • 6 C.-K. Wen, W.-T. Shih, and S. Jin, "Deep learning for massive MIMO CSI feedback," IEEE Wireless Commun. Lett., vol. 7, no. 5, pp. 748-751, Oct. 2018.
  • 7 I. J. Goodfellow, J. Shlens, and C. Szegedy, "Explaining and harnessing adversarial examples," arXiv preprint arXiv:1412.6572, 2014.
  • 8 M. Sadeghi and E. G. Larsson, "Adversarial attacks on deep-learning based radio signal classification," IEEE Wireless Commun. Lett., vol. 8, no. 1, pp. 213-216, Feb. 2019.
  • 9 M. Sadeghi and E. G. Larsson, "Physical adversarial attacks against end-to-end autoencoder communication systems," IEEE Commun. Lett., vol. 23, no. 5, pp. 847-850, May 2019.
  • 10 S. Sabour, Y. Cao, F. Faghri, and D. J. Fleet, "Adversarial manipulation of deep representations," arXiv preprint arXiv:1511.05122, 2015.
  • 11 J. Guo, C.-K. Wen, S. Jin, and G. Y. Li, "Convolutional neural network-based multiple-rate compressive sensing for massive MIMO CSI feedback: Design, simulation, and analysis," IEEE Trans. Wireless Commun., vol. 19, no. 4, pp. 2827-2840, Apr. 2020.
  • 12 G. Caire, N. Jindal, and M. Kobayashi, "Achievable rates of MIMO downlink beamforming with non-perfect CSI: A comparison between quantized and analog feedback," in Proc. IEEE ACSSC, 2006, pp. 354-358.
  • 13 C. Zhu, R. H. Byrd, P. Lu, and J. Nocedal, "Algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound-constrained optimization," ACM Trans. Math. Software, vol. 23, no. 4, pp. 550-560, Dec. 1997.
  • 14 P. Tabacof, J. Tavares, and E. Valle, "Adversarial images for variational autoencoders," arXiv preprint arXiv:1612.00155, 2016.
  • 15 L. Liu et al., "The COST 2100 MIMO channel model," IEEE Wireless Commun., vol. 19, no. 6, pp. 92-99, Dec. 2012.

Table 1.
Training parameters.

Step of training   Epochs   Training samples   Learning rate
First step         200      100,000            0.001
Second step        10       30,000             0.001