A Reinforcement Learning Based QAM/PSK Symbol Synchronizer

Marco Matta*, Gian Carlo Cardarilli, Luca Di Nunzio, Rocco Fazzolari, Daniele Giardino, Alberto Nannarelli, Marco Re, Sergio Spanò

*Corresponding author for this work

Research output: Contribution to journal › Journal article › Research › peer-review


Abstract

Machine Learning (ML) techniques based on supervised and unsupervised learning models have recently been applied in the telecommunication field. However, such techniques rely on large, application-specific datasets, and their performance deteriorates if the statistics of the inference data change over time. Reinforcement Learning (RL) addresses these issues because it adapts its behavior to the changing statistics of the input data. In this work, we propose the design of an RL Agent that learns the behavior of a Timing Recovery Loop (TRL) through the Q-Learning algorithm. The Agent is compatible with popular PSK and QAM formats. We validated the RL synchronizer by comparing it to the Mueller and Müller TRL in terms of Modulation Error Ratio (MER) in a noisy channel scenario. The results show a good trade-off between performance and flexibility: the RL-based synchronizer loses less than 1 dB of MER with respect to the conventional one, but it adapts to different modulation formats without any tuning of the system parameters.
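
The Q-Learning formulation described in the abstract can be made concrete with a small example. The following Python sketch is purely illustrative and is not the paper's implementation: it assumes the Agent's state is a quantized timing-error level (as produced by a timing error detector), that the actions retard, hold, or advance the sampling phase, and that the reward penalizes the residual timing error (a simple stand-in for an MER-based reward). The state discretization, action step size, and hyperparameters below are all hypothetical choices.

```python
import numpy as np

# Illustrative tabular Q-Learning agent for symbol timing recovery.
# State/action/reward definitions are assumptions for this sketch,
# not the formulation used in the paper.

N_STATES = 16          # quantized timing-error levels from a timing error detector
ACTIONS = (-1, 0, +1)  # retard / hold / advance the sampling phase
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, len(ACTIONS)))

def quantize_error(tau_err):
    """Map a normalized timing error in [-0.5, 0.5) to a discrete state index."""
    return int(np.clip((tau_err + 0.5) * N_STATES, 0, N_STATES - 1))

def step(tau, action):
    """Toy environment: the action nudges the sampling phase; the reward is
    the negative timing-error magnitude (a proxy for an MER-based reward)."""
    tau = (tau + 0.05 * ACTIONS[action] + 0.5) % 1.0 - 0.5  # wrap to [-0.5, 0.5)
    noisy_err = tau + rng.normal(0.0, 0.01)                 # detector noise
    return tau, quantize_error(noisy_err), -abs(noisy_err)

tau = rng.uniform(-0.5, 0.5)  # unknown initial timing offset
s = quantize_error(tau)
for _ in range(20000):
    # epsilon-greedy action selection
    a = int(rng.integers(len(ACTIONS))) if rng.random() < EPS else int(np.argmax(Q[s]))
    tau, s_next, r = step(tau, a)
    # standard Q-Learning update: Q(s,a) += alpha * (r + gamma * max Q(s',.) - Q(s,a))
    Q[s, a] += ALPHA * (r + GAMMA * Q[s_next].max() - Q[s, a])
    s = s_next

print(f"residual timing offset: {tau:+.3f}")
```

In this toy loop, the greedy policy learns to hold the sampling phase once the quantized error settles near zero; the same update rule is what lets a Q-Learning Agent re-learn its policy when the statistics of the input data change, without retuning system parameters.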
Original language: English
Journal: IEEE Access
Volume: 7
Pages (from-to): 124147-124157
Number of pages: 11
ISSN: 2169-3536
DOIs
Publication status: Published - 2019

Keywords

  • Artificial Intelligence
  • Machine Learning
  • Reinforcement Learning
  • Q-Learning
  • Synchronization
  • Timing Recovery Loop
