Machine Learning for 5G Technology: A Case Study

Thalles Silva | November 26, 2021

Identifying signal modulation types using deep convolutional neural networks.

A Quick Intro to 5G

As expected, the 5th generation of the mobile network standard (5G, defined by the 3GPP) provides much faster data rates than previous generations. However, 5G brings much more than that. Developed as a Service-Based Architecture (SBA) and defining a Next Generation Radio Access Network (NG-RAN), the new 5G standard is indeed faster (up to 100x compared to 4G), addresses extremely low-latency scenarios (a few milliseconds), and supports thousands of user connections in the same cell range.

But how is it possible? With lots of research, effort, smart people, and investment from great minds and companies.

Technologies have been created or enhanced to make the three primary use cases possible: eMBB (enhanced mobile broadband), mMTC (massive machine-type communications), and URLLC (ultra-reliable low-latency communications). Moreover, recent advances in CUPS (Control and User Plane Separation), edge computing, and the use of millimeter waves (which seemed unfeasible a few years ago), combined with massive MIMO (spatial diversity, spatial multiplexing, and beamforming) and network slicing, were fundamental to achieving the results we can already witness in live deployments today.

Another important aspect of 5G systems is the possibility of running AI/ML (Artificial Intelligence and Machine Learning) at pre-defined points in the network. UEs (User Equipment, or smartphones if you prefer) and 5G core network functions within the SBA (especially the NWDAF, the Network Data Analytics Function) already provide computational resources to run ML algorithms. The big news is that 5G standardization also leverages MEC (Multi-access Edge Computing) to push machine learning to the edge of the telecom network (via the O-RAN-specified RIC, the RAN Intelligent Controller, in its non-real-time and near-real-time variants), creating a multitude of possibilities.

In this article, we will look at how the NG-RAN can use an end-to-end deep learning system to recognize and classify modulated RF signals and adjust channel parameters appropriately, so both ends of the communication achieve optimal, effective resource usage.

Modulation Recognition

In telecommunications systems, a common approach to transmitting information is to vary some properties of a periodic waveform (the carrier signal) with a separate signal called the modulation signal. The modulation signal is what matters here, as it contains information to be transmitted. In this context, the modulation process can be thought of as embedding the signal we care about (the modulation signal) onto a higher frequency waveform (the carrier signal) that will carry the information to a desired destination. For example, the modulation signal might consist of sounds from a microphone, a video signal representing moving images from a camera, or a digital signal representing a sequence of binary digits.
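To make the idea concrete, here is a minimal NumPy sketch of the simplest digital case, BPSK: a binary modulation signal is embedded onto a sinusoidal carrier by flipping the carrier's phase for each bit. The carrier frequency and samples-per-bit values are arbitrary illustrative choices, not parameters from the article.

```python
import numpy as np

def bpsk_modulate(bits, carrier_freq=4.0, samples_per_bit=16):
    """Embed a binary modulation signal onto a sinusoidal carrier (BPSK).

    Each bit shifts the carrier's phase by 180 degrees:
    bit 1 -> +carrier, bit 0 -> -carrier.
    """
    symbols = 2 * np.asarray(bits, dtype=float) - 1.0  # {0,1} -> {-1,+1}
    baseband = np.repeat(symbols, samples_per_bit)     # hold each symbol
    t = np.arange(baseband.size) / samples_per_bit     # time axis in bit periods
    carrier = np.cos(2 * np.pi * carrier_freq * t)     # the carrier signal
    return baseband * carrier                          # modulated waveform

waveform = bpsk_modulate([1, 0, 1, 1])  # 4 bits -> 64 samples
```

A receiver that knows the modulation is BPSK can recover the bits by correlating against the carrier; the classifier discussed next tackles the harder case where the modulation type itself is unknown.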

In non-cooperative communication systems, a transmitter may choose any modulation type for a particular signal. Modulation classification is an intermediate step between signal detection and demodulation at the receiver: to demodulate the received signal, intelligent radio receivers need to know its modulation type. The problem is that, with no knowledge of the transmitted data, such as the signal's amplitude, phase offsets, or carrier frequency, recognizing the modulation becomes much more challenging. Automatic modulation classification (AMC) is an approach to solve this problem. The idea is to make intelligent receivers able to figure out the signal's modulation by only observing it, without any side information. This "blind" approach to modulation recognition is very efficient because it reduces signaling overhead and improves transmission performance.

With the advances in 5G technology, signal modulation recognition/classification has become a key topic.

In fact, the ability to automatically identify a modulation type of a received signal has many civil and military applications, such as cognitive radio and adaptive communication.

Many algorithms for solving the automatic modulation classification (AMC) task have been proposed in the last two decades. Among such solutions, there has been a substantial effort in developing feature-based (FB) methods for modulation classification. Feature-based solutions for AMC have two steps: (1) feature extraction and (2) classifier training. The process of extracting features from the signals is fundamental. For instance, previous work has proposed extracting features from the signal's frequency and phase in the time domain.

After devising some features, we need to train a classifier to perform the task of modulation classification. Here, any classifier can be used, from a simple linear model to non-linear methods such as decision trees and support vector machines. Nevertheless, for FB methods, the performance of the modulation classifier is limited to how good these features can be.
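The two-step FB pipeline can be sketched as follows. The features below (normalized amplitude spread and instantaneous-frequency spread) are illustrative stand-ins for the hand-crafted statistics used in the literature, and a simple nearest-centroid rule stands in for the classifier, where an SVM or decision tree could equally be plugged in; the toy "modulations" are synthetic signals, not dataset records.

```python
import numpy as np

# Step 1: hand-crafted feature extraction from a complex I/Q record.
def extract_features(iq):
    amp = np.abs(iq)
    inst_freq = np.diff(np.unwrap(np.angle(iq)))  # phase derivative
    return np.array([amp.std() / (amp.mean() + 1e-12), inst_freq.std()])

# Step 2: train any classifier on the features; nearest-centroid here
# stands in for an SVM or decision tree.
def fit_centroids(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, x):
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# Toy data: two synthetic "modulations" that differ in envelope variability.
rng = np.random.default_rng(0)
X, y = [], []
for label, env_noise in [(0, 0.0), (1, 0.6)]:
    for _ in range(50):
        envelope = 1.0 + env_noise * rng.standard_normal(128)
        iq = envelope * np.exp(1j * rng.uniform(0, 2 * np.pi, 128))
        X.append(extract_features(iq))
        y.append(label)
X, y = np.array(X), np.array(y)

centroids = fit_centroids(X, y)
acc = np.mean([predict(centroids, x) == t for x, t in zip(X, y)])
```

The key limitation is visible in the sketch: the classifier only sees whatever `extract_features` chooses to keep, which motivates the deep learning approach discussed next.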

However, with the recent developments of deep neural networks, current approaches employ deep learning models to create modulation classifiers that improve classification performance over classic solutions.

In addition, one of the benefits of training deep learning models for AMC is that we can deploy models that classify the received signal from the raw data, without a feature extraction module. In other words, deep learning models can learn the most relevant features for the task by themselves, which has been shown to produce much better classifiers. Moreover, operating on the raw signal (instead of using manually crafted features) means that the performance of deep learning models is limited only by how well these models can learn patterns from the raw signal.

How to Approach Building a Modulation Classifier

Here, we present our findings from creating a modulation classifier using PyTorch Lightning to build an end-to-end deep learning system capable of recognizing modulation signals. For this task, we used the GNU Radio ML RML2016.10a dataset [2]. This synthetic dataset contains 220,000 input examples from a total of 11 (8 digital and 3 analog) modulation schemes at varying signal-to-noise ratios (SNRs). The digital modulations are BPSK, QPSK, 8PSK, 16QAM, 64QAM, BFSK, CPFSK, and PAM4, and the analog modulations consist of WB-FM, AM-SSB, and AM-DSB. These 11 modulations are widely used in wireless communications systems.

Since we are most interested in 5G use cases, we are going to discard the 3 analog modulations and work with the 8 following digital modulations: BPSK, QPSK, 8PSK, 16QAM, 64QAM, BFSK, CPFSK, and PAM4. Thus, our task is posed as an 8-way classification problem where we need to learn the probability distribution over the 8 modulations (classes) from a complex time-series representation of the received signal. For reference, according to the recommendations in the 3GPP Release 15 specifications, five commonly used 5G modulated signal models are: π/2-BPSK, QPSK, 16QAM, 64QAM, and 256QAM.
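The filtering step can be sketched as below. RML2016.10a is commonly distributed as a pickled dictionary mapping (modulation, SNR) pairs to arrays of shape (N, 2, 128); the exact key names ("QAM16" vs. "16QAM", "GFSK" vs. "BFSK") vary between dataset copies, so the list below is an assumption to adjust against your file. A tiny synthetic stand-in dictionary is used here in place of the real pickle.

```python
import numpy as np

# Assumed RML2016.10a key names for the 8 digital modulations; check your copy.
DIGITAL_MODS = ["BPSK", "QPSK", "8PSK", "QAM16", "QAM64", "GFSK", "CPFSK", "PAM4"]

def build_dataset(raw):
    """Filter a {(mod, snr): array(N, 2, 128)} dict down to the 8 digital
    modulations; return stacked inputs, integer labels, and per-record SNRs."""
    xs, ys, snrs = [], [], []
    for (mod, snr), records in raw.items():
        if mod not in DIGITAL_MODS:
            continue  # drop the 3 analog modulations
        xs.append(records)
        ys.append(np.full(len(records), DIGITAL_MODS.index(mod)))
        snrs.append(np.full(len(records), snr))
    return np.concatenate(xs), np.concatenate(ys), np.concatenate(snrs)

# Tiny synthetic stand-in with the dataset's layout.
raw = {("BPSK", -10): np.zeros((4, 2, 128)),
       ("AM-SSB", -10): np.zeros((4, 2, 128)),  # analog: filtered out
       ("QPSK", 0): np.zeros((6, 2, 128))}
X, y, snr = build_dataset(raw)
```

Keeping the per-record SNR alongside the label makes the per-SNR evaluation later in the article straightforward.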

Model Architecture

Our ConvNet model has the following architecture. The input signal has a shape of (Batch Size, 1, 2, 128) and is processed by a sequence of convolutional and dense layers. The model has 3 convolutional blocks and one dense block. Each convolutional block contains a 2D convolutional layer followed by a ReLU non-linearity, batch normalization, and dropout. The three convolutional blocks transform the input data into feature volumes with 64, 128, and 256 channels, respectively. We then flatten the output representation, resulting in a feature vector of shape (Batch Size, 10240), and pass it to a dense block containing 256 neurons with ReLU, batch normalization, and dropout regularization. Finally, a linear layer maps the output representation to a probability distribution over the 8 modulation classes. The deep modulation classifier is trained with stochastic gradient descent. For more details, you can check the following Jupyter notebook.
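A minimal PyTorch sketch of this architecture is shown below. The kernel sizes, pooling, and dropout probability are assumptions, not the notebook's exact choices; with the choices made here the flattened feature vector comes out at 8,192 rather than the 10,240 obtained by the actual model.

```python
import torch
from torch import nn

class ModulationConvNet(nn.Module):
    """Sketch of the classifier: 3 conv blocks (64/128/256 channels), each
    Conv2d -> ReLU -> BatchNorm -> Dropout, then one dense block and a final
    linear layer producing logits over the 8 digital modulation classes."""

    def __init__(self, n_classes=8, p_drop=0.3):
        super().__init__()

        def block(c_in, c_out):
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, kernel_size=(1, 3), padding=(0, 1)),
                nn.ReLU(),
                nn.BatchNorm2d(c_out),
                nn.Dropout(p_drop),
                nn.MaxPool2d((1, 2)),  # halve the time axis: 128 -> 64 -> 32 -> 16
            )

        self.features = nn.Sequential(block(1, 64), block(64, 128), block(128, 256))
        self.classifier = nn.Sequential(
            nn.Flatten(),                  # (B, 256, 2, 16) -> (B, 8192)
            nn.Linear(256 * 2 * 16, 256),  # dense block with 256 neurons
            nn.ReLU(),
            nn.BatchNorm1d(256),
            nn.Dropout(p_drop),
            nn.Linear(256, n_classes),     # logits over the 8 modulations
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = ModulationConvNet()
logits = model(torch.zeros(4, 1, 2, 128))  # one batch of 4 I/Q records
```

Applying a softmax to `logits` yields the probability distribution over the 8 classes; during training, `nn.CrossEntropyLoss` consumes the raw logits directly.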


Since the data is well balanced across signal-to-noise ratios (SNRs) and across the classes, classification accuracy is an acceptable metric to assess our classifier’s performance. To train our deep modulation classifier, we created:

  1. A training dataset containing 129,200 observations.
  2. A validation set of 6,800 observations.
  3. A test set with 24,000 records.

Each record in the datasets is 128 samples in length. The datasets were stratified so that each subset contains an equal number of observations across the different signal-to-noise ratios (SNRs) from -10dB to +20dB. Note that the validation dataset is only used for tuning the model hyperparameters (learning rate and dropout probabilities). After finding suitable values for these hyperparameters, we incorporate the validation set into the training data, which gives a training set of 136,000 records.
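An SNR-stratified split can be sketched as below. The split fractions are assumptions chosen to roughly match the 129,200 / 6,800 / 24,000 sizes above (the digital subset totals 160,000 records), not values taken from the notebook.

```python
import numpy as np

def stratified_split(snrs, val_frac=0.0425, test_frac=0.15, seed=0):
    """Split record indices so that train/val/test each contain the same
    proportion of every SNR level."""
    rng = np.random.default_rng(seed)
    train, val, test = [], [], []
    for snr in np.unique(snrs):
        idx = np.flatnonzero(snrs == snr)  # all records at this SNR
        rng.shuffle(idx)
        n_val = int(len(idx) * val_frac)
        n_test = int(len(idx) * test_frac)
        val.extend(idx[:n_val])
        test.extend(idx[n_val:n_val + n_test])
        train.extend(idx[n_val + n_test:])
    return np.array(train), np.array(val), np.array(test)

# Toy example: 100 records at each of the 16 SNR levels, -10dB to +20dB.
snrs = np.repeat(np.arange(-10, 22, 2), 100)
train, val, test = stratified_split(snrs)
```

Because every SNR bucket is split with the same fractions, test accuracy can later be broken down by SNR without any subset being over- or under-represented.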

After training a relatively light model, we achieved a test accuracy of approximately 62.8% for the 8 modulations. We can see accuracy scores for different modulations in the confusion matrix below. It is clear that, even with a relatively small convolutional neural network (approximately 8.9 million trainable parameters), the ConvNet could learn useful features to discriminate between the 8 types of modulations. For comparison, a classic ConvNet architecture like AlexNet, used by previous work for modulation classification [3], contains 62.3 million trainable parameters.


If we break down the evaluation across different SNR values, we can see that for low SNRs the performance of our classifier is significantly lower. This matches the findings of research publications such as “Convolutional Radio Modulation Recognition Networks” [1].
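The per-SNR breakdown is simple to compute once each test record carries its SNR label; a small sketch with toy predictions:

```python
import numpy as np

def accuracy_per_snr(y_true, y_pred, snrs):
    """Break overall accuracy down by SNR level; low-SNR buckets are
    expected to score much lower than high-SNR ones."""
    return {int(snr): float(np.mean(y_pred[snrs == snr] == y_true[snrs == snr]))
            for snr in np.unique(snrs)}

# Toy example: one mistake in the -10dB bucket, none at +18dB.
y_true = np.array([0, 1, 2, 0, 1, 2])
y_pred = np.array([0, 1, 1, 0, 1, 2])
snrs = np.array([-10, -10, -10, 18, 18, 18])
scores = accuracy_per_snr(y_true, y_pred, snrs)
```

The same grouping logic produces the per-SNR confusion matrices shown below by tallying (true, predicted) pairs within each bucket instead of a single accuracy number.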


You can see a more detailed assessment of our classifier over different SNR values below. Note how the confusion matrices for low SNRs are messy, while those for SNR >= -4.0dB are much cleaner, indicating higher performance.



Deep learning-based models have a strong case for 5G applications. As discussed in this article, we can build machine learning models using an established architecture such as convolutional neural networks to implement a modulation classifier that can identify many types of modulation schemes at varying signal-to-noise (SNR) ratios by only looking at the raw signal. In this use case, we tackled the problem of classifying 8 different modulations commonly used in 5G technology. We showed that we could learn representations that can tell the modulations apart with reasonable accuracy, even with a relatively light model. In cases of high SNR, our classifier achieves very high accuracy scores.


This piece was written by Thalles Silva, Innovation Expert at Encora’s Data Science & Engineering Technology Practices group. Thanks to João Caleffi, Mario Zimmer, Henrique Oliveira and Kathleen McCabe for reviews and insights.


[1] O’Shea, Timothy J., Johnathan Corgan, and T. Charles Clancy. “Convolutional radio modulation recognition networks.” International conference on engineering applications of neural networks. Springer, Cham, 2016.

[2] O’Shea, Timothy J., and Nathan West. “Radio machine learning dataset generation with GNU Radio.” Proceedings of the GNU Radio Conference. Vol. 1. No. 1. 2016.

[3] Zhang, Qing, et al. “Modulation recognition of 5G signals based on AlexNet convolutional neural network.” Journal of Physics: Conference Series. Vol. 1453. No. 1. IOP Publishing, 2020.

[4] Zhou, Siyang, et al. “A robust modulation classification method using convolutional neural networks.” EURASIP Journal on Advances in Signal Processing 2019.1 (2019): 1–15.

[5] Flowers, Bryse, R. Michael Buehrer, and William C. Headley. “Evaluating adversarial evasion attacks in the context of wireless communications.” IEEE Transactions on Information Forensics and Security 15 (2019): 1102–1113.

[6] Usama, Muhammad, et al. “Examining machine learning for 5g and beyond through an adversarial lens.” IEEE Internet Computing 25.2 (2021): 26–34.


About Encora

Fast-growing tech companies partner with Encora to outsource product development and drive growth. Contact us to learn more about machine learning for 5G technology and our software engineering capabilities.
