Haogang Feng, Haiyu Xiao, Shida Zhong, Zhuqing Gao, Tao Yuan, and Zhi Quan

Deep-Learning-Aided Fast Successive Cancellation Decoding of Polar Codes

Abstract: As 5G communication technology evolves toward B5G and next-generation systems, deep learning will help drive the automation and intelligent transformation of communication systems. Existing research has shown that combining deep learning with communication technology can break through some performance bottlenecks of traditional communication algorithms and solutions. This paper explores the application of deep learning (DL) to polar decoding algorithms and proposes a DL-aided fast successive cancellation (DL-FSC) polar decoding algorithm. In DL-FSC decoding, the conventional successive cancellation (SC) decoder is partitioned into multiple sub-blocks, which are replaced by R0 nodes, R1 nodes, and a sub-DL decoder. The log-likelihood ratios (LLRs) and the frozen bit pattern are input to the sub-DL decoder, which predicts the decoded codewords at any code rate. Simulations show that, on the PBCH channel of 5G, the DL-FSC decoder achieves block error rate (BLER) performance similar to that of the SC decoder, with an improvement of about 1%. To verify the performance of the proposed algorithm at the hardware level, a circuit design of the DL-FSC decoder was completed. Through FPGA synthesis, the proposed decoder achieves a throughput of about 4571 Mbps, a 1.71× improvement in decoding throughput at the expense of increased logic resources.

Keywords: 5G, deep learning, fast successive-cancellation decoding, list decoding, polar codes

I. INTRODUCTION

Polar codes, proposed by Erdal Arikan [1], are a channel coding scheme that can be rigorously proven to achieve channel capacity. In recent years, owing to their deterministic construction and their status as the only known channel coding method strictly proven to achieve channel capacity, they have received widespread attention. During the 3GPP RAN #87 meeting, polar codes were adopted for the control channel of the enhanced mobile broadband (eMBB) service category in 5th-generation (5G) wireless communication systems [2]. The proposed polar-coded NOMA (PC-NOMA) scheme can significantly improve the capacity of access users with low-complexity multi-user detection algorithms [3], and polar codes can also meet the large-capacity access requirements of 6G [4]. Successive cancellation (SC) and belief propagation (BP) are two traditional polar decoding methods. When the code length in a binary memoryless channel is long enough, the Shannon capacity can be achieved with SC decoding [5]. However, the serial nature of SC decoding imposes data dependencies, resulting in high decoding latency and low throughput [6]. Compared with their SC counterparts, polar BP decoders are more attractive for low-latency applications; however, due to their iterative nature, their latency and energy dissipation increase linearly with the number of iterations [7]. In recent years, deep learning (DL) [8], also known as deep neural networks, has received widespread attention for its ability to solve complex tasks, and researchers have attempted to apply deep learning techniques to channel coding problems [9]–[11].
This is because a deep learning network can learn an arbitrary mapping from one vector space to another and offers one-shot decoding. For polar codes with short code lengths, feed-forward neural networks were first used for polar decoding, with log-likelihood ratios (LLRs) as inputs and the estimated positions as outputs of the neural network [12]. Building on this, a joint learning architecture consisting of a residual learning denoiser (RLD) and a neural network decoder (NND) was proposed; it uses a multitask learning (MTL) strategy to jointly optimize the denoising and decoding loss functions of the residual neural network decoder (RNND), achieving better denoising and decoding performance [13]. However, neural network decoders for long polar codes face significant training challenges due to the high-dimensional space involved, with complexity growing exponentially in the number of information bits. To address this issue, the integration of neural networks with traditional decoding algorithms, particularly through the substitution of certain decoding components, has been extensively explored. Within the conventional BP decoding framework, specific sub-blocks of the BP decoder have been replaced with BP neural network decoding (BP-NND) sub-blocks [14], thereby enhancing decoding performance. Similarly, a ResNet-like belief propagation structure has been employed to improve traditional polar BP decoding: the proposed BP decoder with a ResNet-like architecture achieves block error rate (BLER) performance similar to the standard BP decoder with fewer iterations [15]. The neural successive cancellation (NSC) decoder is another solution that connects multiple neural network decoders through SC decoding [16]. Its N = 2 SC decoding sub-blocks effectively reduce the number of decoding time steps, but the small input-output dimension limits the benefit of the neural networks. A sub-NN decoder with tanh-based modified LLRs has been used to replace N = 4 SC sub-blocks to reduce the decoding delay of polar codes on FSO turbulence channels [17]. However, these DL-based decoding algorithms mainly target polar codes with a fixed code length and fixed code rate. By exploring different NN recognition strategies, [18] introduces the last-subcode NN-assisted decoding (LSNNAD) and key-bit-based subcode NN-assisted decoding (KSNNAD) schemes, which can effectively handle polar codes with long code lengths, although that work reports no simulation tests under 5G channels, and the BLER performance of the proposed algorithms is not compared with that of successive cancellation list (SCL) decoding. In this paper, we propose a practical deep-learning-aided fast successive cancellation (DL-FSC) decoding algorithm. The DL-FSC decoding algorithm uses R0 nodes, R1 nodes, and general N = 8 sub-DL decoders to replace the N = 8 sub-blocks of the traditional SC decoder. The sub-DL decoder predicts the probability of each decoded codeword through a deep learning network, while the R0 and R1 nodes rely on the traditional FSC decoding algorithm for fast decoding. More specifically, the input to the sub-DL decoder is two-dimensional, comprising 8 LLRs and the corresponding frozen bit pattern, which allows the decoding of sub-blocks with varied frozen bit information and code rates.
Integrating deep learning techniques with traditional decoding methods not only enhances the performance of polar codes but also aligns with the trend of incorporating deep learning into future communication systems. Simulations under 5G channel conditions demonstrate that the DL-FSC decoder achieves BLER performance comparable to the traditional SC decoder. The results indicate that a well-trained sub-DL decoder enables the DL-FSC decoding algorithm to meet the performance requirements of 5G. Additionally, the recursive nature of our scheme allows the sub-DL decoder to be reused across different parts of the decoding process. To assess the performance improvement at the hardware level, a hardware circuit design of the DL-FSC decoder was completed. Although it consumes more logic resources, comparison with the literature shows that the DL-FSC decoder's throughput is significantly increased. Moreover, the DL-FSC algorithm is adaptable to various channels within 5G and prospective 6G technologies, though the integration of deep learning introduces additional computational complexity.

The rest of this paper is organized as follows. Section II briefly introduces polar codes, the SC and FSC decoding algorithms, and deep learning decoding. Section III describes the proposed DL-FSC decoder and its hardware implementation in detail. Section IV presents the simulation process and results of our DL-FSC decoder on 5G channels. Finally, Section V draws the main conclusions of this paper.

II. PRELIMINARIES

A. Polar Codes

A polar code (N, K) of length $N=2^n$ with K information bits is encoded as $x=u F^{\otimes n}$, where $x=\{x_0, x_1, \cdots, x_{N-1}\}$ is the codeword. Once the code length N is fixed, the generator matrix of the polar code is uniquely determined and can be generated from the Arikan kernel matrix F [19]; $F^{\otimes n}$ denotes the nth Kronecker power of F, which can be obtained recursively from F [20]. Polar codes are the channel coding scheme of the control channel in the 5G eMBB scenario. The coding schemes for the uplink and downlink control channels differ, and the specific scheme is determined by the information sequence length.

B. SC Decoding

SC decoding is one of the classic decoding algorithms for polar codes. It uses the LLR as the decision criterion, makes a hard decision for each bit, and decodes the bits in increasing index order. Fig. 1 shows a binary tree representation of a polar code P(8,5) and its corresponding SC decoding. For a node of length N, $L_i$ ($0 \leq i < N/2$) denotes the ith LLR value and $B_N=\{b_0, \cdots, b_{N-1}\}$ the frozen bit pattern. The LLR $L_{i+1}$ of the left-child node can be computed as
(2) $L_{i+1}=\operatorname{sign}(L_i)\operatorname{sign}(L_{i+N/2})\min\{|L_i|,|L_{i+N/2}|\}.$

The LLR $L_{i+1}$ of the right-child node can be computed as
(3) $L_{i+1}=L_{i+N/2}+(1-2\hat{x}_i)L_i,$

where the partial sums $\hat{x}_i$ are combined from the child decisions as

$\hat{x}_i=\begin{cases}x_i, & i<\frac{N}{2},\\ x_i \oplus x_{i+N/2}, & \text{otherwise},\end{cases}$

with $x_i=\hat{u}_i$ at the leaf nodes. At a leaf node, $\hat{u}_i$ is estimated as
(4) $\hat{u}_i=\begin{cases}0, & b_i=0 \ \text{or} \ L_i \geq 0,\\ 1, & \text{otherwise}.\end{cases}$

The latency of the SC decoding algorithm, in number of time steps, is

(5) $T_{SC}=2N-2.$
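As a concrete illustration of (2)-(4), the following is a minimal Python sketch of the SC recursion; the function and variable names are ours, not the paper's implementation, and the frozen bit pattern follows the convention of (4), where $b_i = 0$ marks a frozen position.

```python
import numpy as np

def sc_decode(llr, b):
    """Minimal recursive SC decoding sketch following (2)-(4).
    llr: LLRs of a length-N node; b: frozen bit pattern (b_i = 0 -> frozen).
    Returns the estimated bits u and the node's partial sums x."""
    N = len(llr)
    if N == 1:
        # (4): frozen positions always decode to 0
        u = 0 if (b[0] == 0 or llr[0] >= 0) else 1
        return np.array([u]), np.array([u])
    a, c = llr[:N // 2], llr[N // 2:]
    # (2): f operation (min-sum) for the left child
    l_left = np.sign(a) * np.sign(c) * np.minimum(np.abs(a), np.abs(c))
    u_left, x_left = sc_decode(l_left, b[:N // 2])
    # (3): g operation for the right child, using the left partial sums
    u_right, x_right = sc_decode(c + (1 - 2 * x_left) * a, b[N // 2:])
    # partial-sum combination propagated back to the parent
    return np.concatenate([u_left, u_right]), np.concatenate([x_left ^ x_right, x_right])
```

The recursion visits every node of the decoding tree, which is precisely the serial behavior the FSC pruning below avoids.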
C. FSC Decoding

The fast SC list (FSCL) algorithm in [21] provides efficient decoders for Rate-0, Rep, SPC, and Rate-1 nodes in SCL decoding without traversing the decoding tree, while preserving error-correction performance; the same special-node decoders apply to FSC decoding of SC. Fig. 2 shows the division of special nodes in P(16,8). The pruned decoding tree of the same polar code is shown in Fig. 2(b), which consists of R0 nodes, Rep nodes, SPC nodes, and Rate-1 nodes. The definitions and decoding operations of each special node under FSCL decoding are given as follows (a classification sketch follows the list).

1) Rate 0: A polar code node of length N where all bits $u_1, u_2, \cdots, u_N$ are frozen bits, with no information bits, is referred to as an R0 node.

2) Repetition: A polar code node of length N where only $u_N$ is an information bit, and the rest $u_1, u_2, \cdots, u_{N-1}$ are frozen bits, is referred to as a Rep node.

3) Single parity check: A polar code node of length N where only $u_1$ is a frozen bit, and the rest $u_2, \cdots, u_N$ are information bits, is referred to as an SPC node.

4) Rate 1: A polar code node of length N where all bits $u_1, u_2, \cdots, u_N$ are information bits, with no frozen bits, is referred to as an R1 node.
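These four definitions depend only on the frozen bit pattern of a sub-block, so node identification can be done offline. The following sketch (our own naming, using the $b_i = 0$ frozen convention of (4)) makes the classification explicit:

```python
def classify_node(b):
    """Classify a sub-block from its frozen bit pattern B_N, using the
    convention of (4): b_i = 0 marks a frozen position."""
    N = len(b)
    k = sum(b)                      # number of information bits
    if k == 0:
        return "R0"                 # all positions frozen
    if k == N:
        return "R1"                 # no frozen positions
    if k == 1 and b[-1] == 1:
        return "Rep"                # only u_N carries information
    if k == N - 1 and b[0] == 0:
        return "SPC"                # only u_1 is frozen
    return "general"                # decoded by the sub-DL decoder in DL-FSC
```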
D. Deep-Learning Decoding

DL [22] is a new research direction in the field of machine learning (ML). Generally speaking, by integrating more processing layers into a neural network, much more complicated algorithms can be described, with improved performance, via deep learning. The fully connected neural network (FCNN) [23] is a deep neural network model based on multi-layer nonlinear transformations. The input layer has N inputs and the output layer has K outputs. Each hidden layer i, with $n_i$ inputs and $m_i$ outputs, performs a mapping $f^{(i)}: \mathbb{R}^{n_i} \rightarrow \mathbb{R}^{m_i}$ and is composed of multiple neurons. In each neuron, the weighted inputs are summed, a bias is optionally added, and the result is propagated through a nonlinear activation function, e.g., a sigmoid function or a rectified linear unit (ReLU), which are respectively defined as

(6) $\operatorname{sigmoid}(z)=\frac{1}{1+e^{-z}}, \quad \operatorname{relu}(z)=\max\{0, z\}.$

Therefore, the input-output mapping of the whole DL decoder can be represented as a chain of functions, which is given by
(7) $w=f(v ; \theta)=\operatorname{out}\left(f^{(L-1)}\left(\cdots f^{(0)}(v) \cdots\right)\right),$

where L is the number of layers, also called the depth. It was shown in [23] that such a DL decoder with nonlinear activation functions can theoretically approximate any continuous function on a bounded region arbitrarily closely, provided the number of neurons is large enough.

III. DL-FSC DECODING

A. DL-FSC Decoding Algorithm

In this paper, leveraging the design concepts of deep learning, we propose a deep-learning-aided fast successive cancellation decoder. In the DL-FSC decoding scheme, R0 nodes, R1 nodes, and the sub-DL decoder are used to replace the sub-blocks of traditional SC decoding. The R0 node consists only of frozen bits, so its decoding requires no computation: an all-zero vector of node length $N_v$ is output as the decoded result. For R1 node decoding, since there are no frozen bits, a hard decision (using (4)) can be made directly on the LLRs at the top layer of the node to obtain $X_{N_v}$; multiplying by the corresponding polar transformation matrix $F^{\otimes s}$ then yields the decoded data $U_{N_v}$ of the node. The sub-DL decoder receives 8 internal LLRs and the corresponding frozen bit pattern and predicts 8 output bits with a DL network. The difference between the DL-FSC decoder and SC decoding is that DL-FSC does not need to traverse the whole decoding tree while achieving similar error-correction performance. Fig. 4 shows a system overview of the sub-DL decoder architecture. The LLRs and frozen bit pattern are input to the sub-DL decoder to predict the decoded codewords. Before inputting the LLRs into the sub-DL decoder, a sigmoid-like function is used in the sigmoid-modified layer to normalize them:

(8) $v=\operatorname{sigmoid}(s \cdot llr)=\frac{1}{1+e^{-s \cdot llr}},$

where s is the scale parameter and v is the modified LLR.
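As a small numeric sketch of (8) (the exact functional form is our reading of the description; s = 0.25 is the trained value reported in the next subsection):

```python
import numpy as np

def sigmoid_modified(llr, s=0.25):
    """Scaled sigmoid of (8); s is the trainable scale parameter."""
    return 1.0 / (1.0 + np.exp(-s * llr))

llrs = np.array([-20.0, -2.0, -0.5, 0.0, 0.5, 2.0, 20.0])
# The scale s trades resolution near llr = 0 (the least reliable
# observations) against saturation of large-magnitude LLRs.
print(np.round(sigmoid_modified(llrs, s=0.25), 3))
print(np.round(sigmoid_modified(llrs, s=1.0), 3))
```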
The range of v is limited to (0,1) by the sigmoid-modified layer. When the LLR is close to zero, the channel observation is less reliable; the scale parameter s is therefore adjusted so that the sigmoid-like function has higher resolution around zero and the modified LLR v retains more of the information metric. The modified LLR is input into the neural network decoder. A fully connected neural network is adopted as the neural network decoder, composed of an input layer, a sigmoid-modified layer, a fully connected layer (weights: 256 × 16, bias: 256 × 1), and a classification layer. Therefore, the DL-FSC decoder can be seen as a mapper $f\{R(llr, \text{frozen bit pattern}) \rightarrow R(\hat{u})\}$. Algorithm 1 shows the decoding process of a polar code using DL-FSC. The channel LLRs, the frozen bit pattern, and the estimated codewords are denoted as $L_N=\{l_0, \cdots, l_{N-1}\}$, $B_N=\{b_0, \cdots, b_{N-1}\}$, and $\hat{U}_N=\{\hat{u}_0, \cdots, \hat{u}_{N-1}\}$. Unlike other FSC decoders that must be manually designed to decode special constituent codes [24], the DL-FSC decoder in this paper is trained to decode any node without assuming a specific frozen bit pattern.

B. Training of the DL-FSC

As described above, the DL-FSC decoder is universal for the polar codes of a given channel; therefore, only one DL-FSC decoder needs to be trained. In this paper, we use the polar code scheme of the PBCH channel under the 5G standard to collect training data and verify the scheme. The sub-DL decoder is trained using gradient descent optimization and the backpropagation algorithm [25]. So that the DL-FSC decoder can learn the LLR characteristics of the PBCH channel, the scale parameter s in the sigmoid-modified layer $\{v=\operatorname{sigmoid}(s \cdot llr)\}$ also participates in the training; through training, s typically settles around 0.25. The process of collecting training data for the DL-FSC decoder is summarized in Algorithm 2 (a sketch of the collection loop appears at the end of this subsection). The training data are collected by assuming that the SC decoder has perfect knowledge of the transmitted bits. Under the condition that all decoding decisions are correct, the LLRs of the partitioned sub-blocks in the SC decoder are computed. Finally, the LLRs $L_S$, frozen bit pattern $B_S$, and correct codeword $U_S$ are collected for each sub-block with a non-zero code rate. For instance, in the PBCH channel, the core block of the polar code is (512, 56), which can be decomposed into 64 polar code sub-blocks, among which only 16 sub-blocks have a non-zero code rate. The numbers of information bits in these 16 sub-blocks take values in {0, 1, 3, 4, 6, 7, 8}. Sub-blocks without information bits (R0) do not need to be decoded, and all other information sub-blocks can serve as the training set for a single DL-FSC decoder. The deepNetworkDesigner tool in MATLAB was used to quickly build and train the neural networks, and Bayesian optimization was used to find the optimal network hyperparameters and training options.
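The following is a hedged sketch of this genie-aided collection loop. The helpers `encode` and `genie_sc_subblocks` are assumptions, not the paper's code: the (512, 56) polar encoder, and an SC pass that forces all hard decisions to the known transmitted bits and yields each length-8 sub-block's internal LLRs, frozen bit pattern, and correct bits.

```python
import numpy as np

def collect_training_data(num_frames, ebn0_list, encode, genie_sc_subblocks):
    """Sketch of the Algorithm 2 collection loop (helpers assumed)."""
    rate, dataset = 56 / 512, []
    for _ in range(num_frames):
        u = np.random.randint(0, 2, 56)                    # PBCH-like info bits
        x = encode(u)
        ebn0 = np.random.choice(ebn0_list)                 # mix training SNRs
        sigma = np.sqrt(1.0 / (2 * rate * 10 ** (ebn0 / 10)))
        y = (1 - 2 * x) + sigma * np.random.randn(len(x))  # BPSK over AWGN
        llr = 2 * y / sigma ** 2
        # genie-aided SC: internal LLRs computed with correct partial sums
        for sub_llr, sub_b, sub_u in genie_sc_subblocks(llr, u):
            if np.any(sub_b):                              # skip R0 sub-blocks
                dataset.append((sub_llr, sub_b, sub_u))
    return dataset
```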
C. The DL-FSC Decoder Architecture

Although deep learning decoders theoretically exhibit one-shot decoding characteristics, the substantial matrix computations within deep learning networks require a certain number of clock cycles for decoding predictions in a hardware implementation. In this section, for the proposed DL-FSC decoding algorithm, we complete the hardware design of a polar code decoder with N = 16 to verify the performance of the proposed algorithm in hardware. Fig. 5 shows the hardware structure of the DL-FSC decoder. The decoder inputs are a 16-bit frozen bit pattern and the corresponding LLR values (with 8-bit quantization), from which a 16-bit codeword is decoded. The decoding process is divided into two parts, the left node and the right node. The decoding accelerator receives the LLR data and computes the 8 left-node LLRs $LLR_{\text{left}}$ through the f operation of (2). $LLR_{\text{left}}$ is then input into the R0 sub-block, the R1 sub-block, and the sub-DL decoder for decoding, and the appropriate result is selected as the decoded codeword $U_{\text{left}}$[7:0] according to the frozen bit pattern. The right-node LLRs $LLR_{\text{right}}$ are computed with the g operation of (3), after which decoding is completed through the same sub-blocks. The core of the DL-FSC decoder is the sub-DL decoder, which performs polar code decoding with the trained deep learning network. Through training, we determined its core to be a fully connected network module, with int8 used for data storage. The module design of the sub-DL decoder is therefore divided into two parts:

1) Fully connected calculation part: Completes the calculation of each channel (corresponding to a codeword label) based on the weight and bias data obtained from training. The calculation formula for each channel $Y_i$ is:
(9) $Y_i=\sum_{j=1}^{8} W_{i, j} \times B_j+\sum_{j=1}^{8} W_{i, 8+j} \times L_j+\text{bias}_i,$

where B is the frozen bit pattern, L are the LLRs, and $W_{i,j}$ ($0 \leq i < 256$) is the weight in the ith row and jth column of the weight matrix.

2) Classification layer calculation part: Since the sub-DL decoder only needs to output the label with the maximum prediction probability as the decoded codeword, this part only needs to find the channel with the maximum value; the traditional softmax layer is not required.

Fig. 6 shows the decoding circuit module of the DL-FSC core sub-block, which is divided into the fully connected calculation part and the classification layer calculation part. The channel accumulation is split across two modules: in the $B_{PE}$ module, since each frozen bit is a single-bit input, the multiplier can be optimized into a MUX that selects either the weight data or 0 for accumulation; the $L_{PE}$ module accumulates the products of the LLRs and the weight data through a constant multiplier. The input to the classification layer calculation part is the concatenation of each channel value and the current channel index, from which the comparison yields the index of the winning channel. To speed up the comparison, the three comparators used here are cascaded into a four-input comparator that compares multiple channels at once.
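To make this datapath concrete, here is a minimal integer sketch of (9) plus the classification layer; the weights and biases below are randomly generated placeholders standing in for the trained int8 parameters, and the input layout (8 frozen-pattern bits followed by 8 quantized LLRs) follows the description above.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical trained int8 parameters: 256 output channels, 16 inputs, as in (9)
W = rng.integers(-128, 128, size=(256, 16), dtype=np.int8)
bias = rng.integers(-128, 128, size=256).astype(np.int64)

def sub_dl_decode(B, L_q):
    """Integer datapath of (9) plus the classification layer."""
    acc = bias.copy()
    for j in range(8):
        # B_PE: the frozen bit is a single-bit input, so the multiplier
        # reduces to a MUX selecting the weight column or 0
        acc += np.where(B[j], W[:, j].astype(np.int64), 0)
        # L_PE: constant-multiplier accumulation of LLR x weight
        acc += W[:, 8 + j].astype(np.int64) * int(L_q[j])
    # Classification layer: only the argmax channel index is needed, so no
    # softmax is computed; the index bits are the decoded 8-bit codeword
    ch = int(np.argmax(acc))
    return [(ch >> k) & 1 for k in range(7, -1, -1)]
```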
IV. SIMULATION RESULTS

A. Data Pre-Processing and Model Training

In order to obtain suitable training and validation datasets, we first generate 1 million random codewords and encode them using the polar code encoding scheme of the corresponding 5G channel. Different noise realizations are superimposed on the encoded data, and the correctly decoded codewords are then collected. The LLR values, frozen bit patterns, and correct codewords of the sub-blocks that are neither R0 nor R1 are collected in the SC decoder. These random codewords cover sub-block decoding results under different frozen bit patterns and different $E_b/N_0$ values. Then 95% of the random codewords are used as the training set, and the remaining 5% as the validation set. As mentioned in Section II-A, the specific coding scheme used in the uplink or downlink channel depends on the information length. The specific parameters of the polar code coding schemes for the uplink and downlink channels are shown in Table I.

TABLE I

Due to the different message lengths (A) and rate matching lengths (E) in the 5G channels, the corresponding polar code construction also differs. Therefore, we selected several polar code encoding schemes of 5G channels for simulation testing. Table II shows the detailed parameters of the polar code encoding schemes used in this paper.

TABLE II
After completing the collection of the training dataset, we use the Bayesian optimization algorithm in MATLAB to select the type of neural network and the related hyperparameters. Bayesian optimization is a hyperparameter search algorithm based on Bayesian theory: it finds the optimal hyperparameter combination by building a probabilistic model of the objective function. This experiment selects the hyperparameters listed in Table III for optimization. The squared prediction error of the candidate model on the training and validation datasets is used as the objective function of the Bayesian optimization algorithm.

TABLE III
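For readers reproducing a comparable search outside MATLAB, the following is a hedged sketch of the same idea using the Optuna library (a swapped-in tool, not what the authors used); `build_and_train` is an assumed user-supplied routine that trains a candidate network and returns its validation squared error, and the search ranges are illustrative.

```python
import optuna

def make_objective(build_and_train):
    """Wrap an assumed train/validate routine as an Optuna objective."""
    def objective(trial):
        params = {
            "hidden_units": trial.suggest_categorical("hidden_units", [64, 128, 256]),
            "learning_rate": trial.suggest_float("learning_rate", 1e-4, 1e-1, log=True),
            "batch_size": trial.suggest_categorical("batch_size", [256, 512, 1024]),
        }
        return build_and_train(params)  # validation squared error to minimize
    return objective

# study = optuna.create_study(direction="minimize")
# study.optimize(make_objective(my_build_and_train), n_trials=50)
```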
Table IV shows the training results of different deep learning networks as the core network of the DL-FSC decoder. Although LSTM, GRU, and CNN [18] can effectively address issues such as long-term memory and gradient problems in backpropagation, fully connected networks have a clear advantage in convergence speed, model size, and validation accuracy in this application scenario. Most importantly, the simpler structure makes fully connected networks more hardware-friendly and easier to implement in hardware. Therefore, a fully connected network is used as the core component of the deep-learning-aided SC decoder, which achieves good decoding performance and fast response even under limited computing resources. The specific network structure comprises an input layer, a fully connected layer with weight dimensions of 256 × 16 and bias dimensions of 256 × 1, a softmax activation layer, and an output layer; the data type of the network is INT8. This compact and efficient neural network architecture allows the deep-learning-aided SC decoder to balance decoding performance and computational complexity, making it suitable for practical implementation in resource-constrained 5G and beyond communication systems.

TABLE IV
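Under our reading of the reported dimensions (16 inputs: 8 frozen-pattern bits plus 8 modified LLRs; one 256 × 16 fully connected layer; a 256-way classification over all 8-bit codewords), a PyTorch sketch of the chosen architecture might look as follows; this is an illustrative reconstruction, not the authors' model file.

```python
import torch
import torch.nn as nn

class SubDLDecoder(nn.Module):
    """Sketch of the described FCNN: Linear(16 -> 256) + classification."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(16, 256)  # weights 256 x 16, bias 256 x 1

    def forward(self, x):
        # softmax (here log_softmax) is needed only for training; at
        # inference the argmax channel index directly gives the 8 bits
        return torch.log_softmax(self.fc(x), dim=-1)

model = SubDLDecoder()
x = torch.rand(1, 16)  # [frozen bit pattern | sigmoid-modified LLRs]
codeword_index = model(x).argmax(dim=-1).item()
bits = [(codeword_index >> k) & 1 for k in range(7, -1, -1)]
```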
B. 5G Channel Simulation Results

To evaluate the error-correction performance of the proposed DL-FSC decoder in a 5G communication context, we conduct simulations using BLER as the performance metric. The simulations are implemented in MATLAB, with an AWGN channel model and QPSK modulation. To ensure a fair comparison, the testing process for the proposed DL-FSC decoder is identical to that of the other considered decoders, capturing a minimum of 50 error events. The BLER versus $E_b/N_0$ comparison is shown in Fig. 7 for QPSK transmission over an AWGN channel. As shown in Table V, the experimental results under 5G communication channels show that, compared with the traditional SC decoder, the decoding performance of the DL-FSC decoder improves by about 1% (0.046 dB) on average in the PBCH channel and 0.7% (0.028 dB) on average in the PUCCH channel. In polar code decoding, the DL-FSC decoder exhibits a higher degree of parallelism than the SC decoder; however, the DL component also introduces additional computational complexity.

TABLE V
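A minimal sketch of this Monte-Carlo evaluation loop is given below; `encode` and `decode` are assumed helpers for the polar encoder and the decoder under test (DL-FSC or SC), and the per-dimension BPSK treatment of QPSK is our simplification.

```python
import numpy as np

def estimate_bler(ebn0_db, encode, decode, K=56, N=512, min_errors=50):
    """Monte-Carlo BLER over AWGN, stopping after min_errors block errors."""
    rate = K / N
    sigma = np.sqrt(1.0 / (2 * rate * 10 ** (ebn0_db / 10)))
    errors, blocks = 0, 0
    while errors < min_errors:
        u = np.random.randint(0, 2, K)
        x = encode(u)
        # QPSK maps bit pairs to I/Q; per dimension it behaves like BPSK,
        # so the LLRs take the usual 2y/sigma^2 form
        y = (1 - 2 * x) + sigma * np.random.randn(N)
        llr = 2 * y / sigma ** 2
        errors += int(np.any(decode(llr) != u))
        blocks += 1
    return errors / blocks
```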
C. Hardware Design and Comparison of the DL-FSC Decoder

In terms of hardware, the proposed DL-FSC decoder achieves a decoding time of 14 clock cycles for N = 16 polar codes, excluding input data preparation time. Table VI compares the deep-learning-based polar code decoding accelerator proposed in this work with other polar code decoders. For ease of comparison, the results are based on FPGA synthesis. The DL-FSC decoder in this study consumes more logic resources (look-up tables and flip-flops) on the FPGA than the decoders of [26] and [27]. However, this design avoids using RAM and spends the additional logic resources on higher processing efficiency. At the same time, the proposed decoder significantly improves throughput at the cost of some hardware resources, achieving a 1.71× throughput improvement over the most recent decoder [26]. In future communication systems and other high-speed transmission scenarios with strict real-time and data-processing requirements, the proposed decoding accelerator can provide enhanced data processing capability.

V. CONCLUSION

This paper proposes a DL-FSC polar code decoder. The proposed DL-FSC decoder connects the sub-DL decoders through the SC decoding schedule. The sub-DL decoder can be regarded as a general decoding mapper $f\{R(llr, \text{frozen bit pattern}) \rightarrow R(\hat{u})\}$, and the frozen bit pattern input allows the DL-FSC decoder to support polar codes of any code rate for a given code length. We show that the proposed DL-FSC decoder has slightly better BLER performance than the SC decoder. At the same time, the proposed deep learning accelerator improves the decoding rate at the hardware level: compared with the most recent literature, the proposed decoding accelerator improves throughput by 1.71×. The proposed deep-learning-based polar code encoding and decoding system is suitable for future intelligent communication systems, where DL-based decoding can improve the reliability of communication links and data transmission efficiency. Our future work will refine the hardware architecture of the proposed DL-FSC decoder and optimize its fully connected network through quantization, pruning, and other techniques to further reduce decoding delay.

Biography

Haogang Feng was born in Henan, China, in 1997. He received the B.Sc. degree in Electronics Engineering from Shenzhen University, Shenzhen, China, in 2019, where he has been pursuing the Ph.D. degree in Electronics and Electrical Engineering since 2019. His current research interests include field-programmable gate arrays (FPGA), joint algorithm and hardware co-design, and application-specific integrated circuit (ASIC) implementations for next-generation channel coding systems.

Haiyu Xiao was born in Fujian, China, in 1996. He received the bachelor's degree in Electronic Information Engineering from Chengdu University of Technology in 2019 and has been pursuing the master's degree in Integrated Circuit Engineering at Shenzhen University since 2021. His current research interests include 5G polar code design, neural network accelerator design, and application-specific integrated circuit (ASIC) implementation of next-generation channel coding systems.

Shida Zhong received the B.Sc.
degree in electronics engineering from Shenzhen University, Shenzhen, China, in 2008, and the M.Sc. and Ph.D. degrees in Electronics and Electrical Engineering from the University of Southampton, Southampton, U.K., in 2009 and 2013, respectively. He is currently an Assistant Professor with the College of Electronics and Information Engineering, Shenzhen University. His current research interests include low-power IC design and FPGA and ASIC implementations for next-generation channel coding systems.

Zhuqing Gao received the B.E. degree from Nanjing Institute of Technology in 2004 and joined Potevio in the same year for the R&D of PCMs and optical terminals. Since 2005, he has been engaged in the development and design of handset products at Inventec, ZTE, and Xiaomi. He is dedicated to technology innovation and research and has obtained more than 20 patents in semiconductors and product applications.

Tao Yuan (M'19) received the B.E. degree in Electronic Engineering and the M.E. degree in Signal and Information Processing from Xidian University, Xi'an, Shaanxi, China, in 1999 and 2003, respectively, and the Ph.D. degree in Electrical and Computer Engineering from the National University of Singapore, Singapore, in 2009. He is now a Distinguished Professor (2016–) with the College of Electronics and Information Engineering at Shenzhen University, Shenzhen, Guangdong, China. He is the Director of the Guangdong Provincial Mobile Terminal Microwave and Millimeter-Wave Antenna Engineering Research Center and the Deputy Director of the Guangdong-Hong Kong Joint Laboratory for Big Data Imaging and Communication. His current research interests include the design and implementation of novel RF front-end chips/modules and integrated devices/antennas/circuits for 5G/6G applications.

Zhi Quan is a Distinguished Professor with the College of Electronic and Information Engineering, Shenzhen University, China. He received his Ph.D. in Electrical Engineering from the University of California, Los Angeles (UCLA) with highest honors in 2009, and his B.E. in Communications Engineering from Beijing University of Posts and Telecommunications (BUPT), China, in 1999. He worked as a Sr. System Engineer at the Qualcomm Research Center (QRC) of Qualcomm Inc. (San Diego, CA) during 2008-2012, and as an RF System Architect with Apple Inc. (Cupertino, CA) during 2012-2015. Dr. Quan has been granted over 40 patents and has published over 70 papers in wireless communications and signal processing, with more than 5000 citations on Google Scholar. He received the UCLA Outstanding Ph.D. Award in 2009, the IEEE Signal Processing Society Best Paper Award in 2012, the China National Excellent Young Scientist Foundation award in 2016, and the First Prize Technology Innovation Award from the China Institute of Communications in 2020. His current research interests include wireless communication systems, RF system calibration and measurement, data-driven signal processing, and machine learning.

References