This paper introduces, for the first time, a framework for obtaining provable worst-case guarantees on neural network performance, using learning for optimal power flow (OPF) problems as a guiding example. Neural networks can substantially reduce the computing time of OPF solutions; however, the lack of guarantees on their worst-case performance remains a major barrier to their adoption in practice. This work aims to remove this barrier. We formulate mixed-integer linear programs (MILPs) to obtain worst-case guarantees for neural network predictions related to (i) maximum constraint violations, (ii) maximum distances between predicted and optimal decision variables, and (iii) maximum sub-optimality. We demonstrate our methods on a range of PGLib-OPF networks with up to 300 buses and show that the worst-case guarantees can be up to one order of magnitude larger than the empirical lower bounds calculated with conventional methods. More importantly, we find that the worst-case predictions occur at the boundaries of the training input domain, and we show how to systematically reduce the worst-case guarantees by training on an input domain larger than the one on which the guarantees are evaluated.
|Title of host publication||Proceedings of 2020 IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids|
|Number of pages||7|
|Publication status||Published - 2020|
|Event||2020 IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids - Tempe, United States|
|Duration||11 Nov 2020 → 13 Nov 2020|
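The core idea behind the MILP formulation above is that a trained ReLU network is piecewise linear, so its exact worst-case constraint violation over a bounded input domain can be certified by reasoning over all ReLU activation patterns. The following minimal sketch illustrates this on a toy scalar-input network with made-up weights (not from the paper): for a one-dimensional input, the only kinks of the piecewise-linear predictor lie where a hidden pre-activation crosses zero, so enumerating interval endpoints and breakpoints recovers the exact worst case that the paper's MILP would encode with binary variables. It also shows the paper's key observation that random sampling only yields an empirical lower bound on the true worst case.

```python
import random

# Toy one-hidden-layer ReLU network (weights are illustrative, not from
# the paper): maps a scalar load x to a predicted generator setpoint.
W1 = [1.5, -2.0, 0.8]   # hidden-layer weights
B1 = [-0.3, 1.0, 0.1]   # hidden-layer biases
W2 = [1.0, 0.7, -1.2]   # output-layer weights
B2 = 0.2                # output bias

def predict(x):
    """Forward pass of the toy ReLU network."""
    hidden = [max(0.0, w * x + b) for w, b in zip(W1, B1)]
    return sum(wo * h for wo, h in zip(W2, hidden)) + B2

def worst_case_violation(x_lo, x_hi, p_max):
    """Exact worst-case violation of the limit p_max over [x_lo, x_hi].

    A scalar-input ReLU net is piecewise linear; between kinks the
    prediction is linear, so the maximum is attained at an interval
    endpoint or at a hidden-unit breakpoint (where w*x + b = 0).
    Enumerating these candidates is, for this toy case, the same
    exhaustive reasoning the MILP performs via binary variables.
    """
    candidates = [x_lo, x_hi]
    for w, b in zip(W1, B1):
        if w != 0.0:
            kink = -b / w
            if x_lo < kink < x_hi:
                candidates.append(kink)
    return max(predict(x) - p_max for x in candidates)

def sampled_violation(x_lo, x_hi, p_max, n=1000, seed=0):
    """Empirical lower bound on the worst case via random sampling."""
    rng = random.Random(seed)
    return max(predict(x_lo + (x_hi - x_lo) * rng.random()) - p_max
               for _ in range(n))

if __name__ == "__main__":
    exact = worst_case_violation(0.0, 2.0, p_max=0.5)
    empirical = sampled_violation(0.0, 2.0, p_max=0.5)
    print(f"exact worst-case violation:   {exact:.4f}")
    print(f"sampled lower bound (n=1000): {empirical:.4f}")
```

For networks with multi-dimensional inputs, as in the paper, this enumeration becomes a MILP: each ReLU is encoded with a binary variable and big-M constraints, and an off-the-shelf solver maximizes the violation over the input domain.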