Deep Reinforcement Learning for Energy-Efficient Workflow Scheduling in Edge Computing

Abstract
Workflow scheduling in dynamic edge computing environments faces challenges in minimizing completion time and energy consumption due to unpredictable workloads and limited resources. We propose DQN-Edge, an efficient scheduling method using an attention-based Deep Q-Network (DQN) to learn optimal task prioritization and resource allocation policies. DQN-Edge’s two-phase approach first prioritizes tasks using a modified upward ranking algorithm considering critical path dependencies, then employs a DQN with a context-aware attention mechanism to balance time and energy rewards adaptively. Comprehensive evaluations using real-world scientific workflows show that DQN-Edge consistently outperforms state-of-the-art methods across various scenarios, maintaining high success rates while reducing completion time and energy consumption, even under high-load conditions. The experimental results demonstrate that DQN-Edge can significantly surpass existing methods, reducing makespan by an average of 51.6% and energy consumption by 54.3% compared to basic techniques, and achieving 20.2% and 24.7% improvements over the latest advanced methods, respectively.
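The first phase ranks tasks by upward rank over the workflow DAG before the DQN assigns them to resources. The abstract does not detail the paper's modified ranking, so the sketch below uses the classic HEFT-style formula, rank_u(t) = w(t) + max over successors s of (c(t, s) + rank_u(s)); the task names and the `succ`/`w` cost tables are purely illustrative assumptions:

```python
# Sketch of upward-rank task prioritization (classic HEFT-style formula);
# the paper's modified ranking that considers critical-path dependencies
# is not specified in the abstract, so this shows only the baseline idea.
from functools import lru_cache

# Hypothetical workflow DAG: task -> list of (successor, avg. communication cost)
succ = {"A": [("B", 2), ("C", 4)], "B": [("D", 3)], "C": [("D", 1)], "D": []}
# Hypothetical average computation cost of each task across edge nodes
w = {"A": 5, "B": 6, "C": 3, "D": 4}

@lru_cache(maxsize=None)
def rank_u(task: str) -> float:
    """Upward rank: own cost plus the costliest path to the exit task."""
    return w[task] + max((c + rank_u(s) for s, c in succ[task]), default=0.0)

# Tasks are scheduled in decreasing upward rank, so entry tasks on the
# critical path come first and the exit task comes last.
priority = sorted(succ, key=rank_u, reverse=True)
```

With these example costs, `rank_u("A")` evaluates to 20 and the resulting priority order is A, B, C, D.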
| Original language | English |
|---|---|
| Article number | 111790 |
| Journal | Computer Networks |
| Volume | 274 |
| ISSN | 1389-1286 |
| DOIs | |
| Publication status | Published - 2026 |
UN SDGs
This output contributes to the following UN Sustainable Development Goals (SDGs)
- SDG 7: Affordable and Clean Energy