Graph Neural Network Reinforcement Learning for Autonomous Mobility-on-Demand Systems

Daniele Gammelli, Kaidi Yang, James Harrison, Filipe Rodrigues, Francisco C. Pereira, Marco Pavone

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Abstract

Autonomous mobility-on-demand (AMoD) systems represent a rapidly developing mode of transportation wherein travel requests are dynamically handled by a coordinated fleet of robotic, self-driving vehicles. Given a graph representation of the transportation network (one where, for example, nodes represent areas of the city and edges the connectivity between them), we argue that the AMoD control problem is naturally cast as a node-wise decision-making problem. In this paper, we propose a deep reinforcement learning framework to control the rebalancing of AMoD systems through graph neural networks. Crucially, we demonstrate that graph neural networks enable reinforcement learning agents to recover behavior policies that are significantly more transferable, generalizable, and scalable than policies learned through other approaches. Empirically, we show how the learned policies exhibit promising zero-shot transfer capabilities when faced with critical portability tasks such as inter-city generalization, service area expansion, and adaptation to potentially complex urban topologies.
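The abstract casts AMoD rebalancing as a node-wise decision problem on a city graph. The following is a minimal illustrative sketch (not taken from the paper or this record) of how a graph-convolutional policy might map per-node state features to a rebalancing distribution over city areas; the layer structure, feature choice, and the 4-node toy graph are all hypothetical assumptions, written in plain PyTorch.

```python
# Illustrative sketch only: a simple graph-convolutional policy for AMoD
# rebalancing. Per-node features (e.g., idle vehicles, open requests) are
# aggregated over the city graph and mapped to a desired vehicle distribution.
import torch
import torch.nn as nn

class GraphConvPolicy(nn.Module):
    def __init__(self, in_feats, hidden):
        super().__init__()
        self.lin1 = nn.Linear(in_feats, hidden)
        self.lin2 = nn.Linear(hidden, hidden)
        self.head = nn.Linear(hidden, 1)  # one rebalancing score per node

    def forward(self, x, adj):
        # adj: row-normalized adjacency with self-loops, shape (N, N)
        h = torch.relu(self.lin1(adj @ x))   # first neighborhood aggregation
        h = torch.relu(self.lin2(adj @ h))   # second message-passing step
        scores = self.head(h).squeeze(-1)    # per-node score, shape (N,)
        return torch.softmax(scores, dim=-1) # desired distribution over areas

# Hypothetical toy example: 4 city areas on a line graph,
# node features = [idle vehicles, open requests].
adj = torch.tensor([[1., 1., 0., 0.],
                    [1., 1., 1., 0.],
                    [0., 1., 1., 1.],
                    [0., 0., 1., 1.]])
adj = adj / adj.sum(dim=1, keepdim=True)      # row-normalize
x = torch.tensor([[5., 1.], [0., 4.], [2., 2.], [3., 0.]])
policy = GraphConvPolicy(in_feats=2, hidden=16)
print(policy(x, adj))  # rebalancing distribution over the 4 areas
```

Because the policy operates node-wise with shared weights, the same parameters can in principle be applied to graphs of different size, which is the property the abstract highlights for inter-city transfer and service area expansion.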

Original language: English
Title of host publication: Proceedings of the 60th IEEE Conference on Decision and Control, CDC 2021
Number of pages: 8
Publisher: Institute of Electrical and Electronics Engineers Inc.
Publication date: 2021
Pages: 2996-3003
ISBN (Electronic): 9781665436595
DOIs
Publication status: Published - 2021
Event: 60th IEEE Conference on Decision and Control - Virtual Conference, Austin, United States
Duration: 14 Dec 2021 - 17 Dec 2021
Conference number: 60
https://ieeexplore.ieee.org/xpl/conhome/9682670/proceeding
https://2021.ieeecdc.org/

Conference

Conference: 60th IEEE Conference on Decision and Control
Number: 60
Location: Virtual Conference
Country/Territory: United States
City: Austin
Period: 14/12/2021 - 17/12/2021
Internet address

Bibliographical note

Funding Information:
The authors would like to thank M. Zallio for help with the graphics. This research was partially supported by the Toyota Research Institute (TRI). K. Yang would like to acknowledge the support of the Swiss National Science Foundation (SNSF) Postdoc.Mobility Fellowship (P400P2 199332). This article solely reflects the opinions and conclusions of its authors and not TRI, SNSF, or any other entity.

Publisher Copyright:
© 2021 IEEE.
