Deep Reinforcement Learning for Energy-Efficient Workflow Scheduling in Edge Computing

Mengyao Wen, Xiufeng Liu, Xin Ning, Cong Liu, Xiaomin Chen, Jiawei Nian, Long Cheng*

*Corresponding author for this work

Research output: Contribution to journal › Journal article › Research › peer-review

Abstract

Workflow scheduling in dynamic edge computing environments faces challenges in minimizing completion time and energy consumption due to unpredictable workloads and limited resources. We propose DQN-Edge, an efficient scheduling method using an attention-based Deep Q-Network (DQN) to learn optimal task prioritization and resource allocation policies. DQN-Edge’s two-phase approach first prioritizes tasks using a modified upward ranking algorithm considering critical path dependencies, then employs a DQN with a context-aware attention mechanism to balance time and energy rewards adaptively. Comprehensive evaluations using real-world scientific workflows show that DQN-Edge consistently outperforms state-of-the-art methods across various scenarios, maintaining high success rates while reducing completion time and energy consumption, even under high-load conditions. The experimental results demonstrate that DQN-Edge can significantly surpass existing methods, reducing makespan by an average of 51.6% and energy consumption by 54.3% compared to basic techniques, and achieving 20.2% and 24.7% improvements over the latest advanced methods, respectively.
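The first phase's task prioritization builds on upward ranking, a standard technique (from HEFT-style list scheduling) that scores each task by the length of its longest downstream path of computation and communication costs. The abstract's modification for critical-path dependencies is not detailed here, so the sketch below shows only the classic recursion, with all names and the toy workflow chosen for illustration:

```python
# Sketch of classic upward-rank (rank_u) task prioritization, the basis of
# DQN-Edge's first phase; the paper's specific modification is not shown.
#   rank_u(t) = w(t) + max over successors s of (c(t, s) + rank_u(s))
from functools import lru_cache

def upward_ranks(succ, w, c):
    """succ: task -> list of successor tasks (a DAG);
    w: task -> average computation cost;
    c: (task, successor) -> average communication cost."""
    @lru_cache(maxsize=None)
    def rank(t):
        kids = succ.get(t, [])
        if not kids:
            return w[t]  # exit task: rank is its own cost
        return w[t] + max(c[(t, s)] + rank(s) for s in kids)
    return {t: rank(t) for t in w}

# Tiny hypothetical workflow: A -> B, A -> C
succ = {"A": ["B", "C"]}
w = {"A": 2.0, "B": 3.0, "C": 1.0}
c = {("A", "B"): 1.0, ("A", "C"): 4.0}
ranks = upward_ranks(succ, w, c)
order = sorted(ranks, key=ranks.get, reverse=True)  # highest rank scheduled first
```

Tasks are then dispatched in decreasing rank order, which guarantees every task is considered before its successors and biases the schedule toward the critical path.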
Original language: English
Article number: 111790
Journal: Computer Networks
Volume: 274
ISSN: 1389-1286
DOIs
Publication status: Published - 2026

UN SDGs

This output contributes to the following UN Sustainable Development Goals (SDGs)

  1. SDG 7 - Affordable and Clean Energy
