Abstract
We propose a learning-based method for lossless light field compression. The approach consists of two steps: first, the view to be compressed is synthesized from previously decoded views; then, the synthesized view is used as context to predict the probabilities of the residual signal for adaptive arithmetic coding. We leverage recent advances in deep-learning-based view synthesis and generative modeling. Specifically, we evaluate two strategies for entropy modeling: fully parallel probability estimation, where all pixel probabilities are estimated simultaneously, and partially auto-regressive estimation, in which groups of pixels are predicted sequentially. Our results show that the latter approach provides the best coding gains compared with the state of the art, while keeping the computational complexity competitive.
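To make the two entropy-modeling strategies concrete, the sketch below illustrates how a context model could turn the synthesized view into per-pixel symbol probabilities for an adaptive arithmetic coder. This is a minimal sketch assuming PyTorch; the `ContextModel` architecture, channel counts, 8-bit residual alphabet, and the 2×2 checkerboard grouping order are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch (assumes PyTorch). Illustrative only: the network, its sizes,
# and the 2x2 pixel grouping are assumptions, not the paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_SYMBOLS = 256  # residual values assumed 8-bit for illustration


class ContextModel(nn.Module):
    """Maps a context image to per-pixel probabilities over residual symbols."""

    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, NUM_SYMBOLS, 1),  # one logit per residual symbol
        )

    def forward(self, context):
        return F.softmax(self.net(context), dim=1)  # (B, NUM_SYMBOLS, H, W)


def parallel_probs(model, synthesized):
    """Strategy (a): fully parallel, all pixel probabilities in one pass."""
    return model(synthesized)


def autoregressive_probs(model, synthesized, residual):
    """Strategy (b): partially auto-regressive, pixels split into four
    2x2 sub-lattice groups; each group is predicted from the synthesized
    view plus the residual pixels of groups already decoded."""
    b, _, h, w = synthesized.shape
    decoded = torch.zeros(b, 1, h, w)   # residuals revealed so far
    mask = torch.zeros(b, 1, h, w)      # 1 where a residual is already known
    probs = torch.zeros(b, NUM_SYMBOLS, h, w)
    groups = [(0, 0), (1, 1), (0, 1), (1, 0)]  # assumed decoding order
    for dy, dx in groups:
        ctx = torch.cat([synthesized, decoded, mask], dim=1)
        p = model(ctx)
        probs[:, :, dy::2, dx::2] = p[:, :, dy::2, dx::2]
        # Reveal this group's residuals for the next step; at decode time
        # these would come from the arithmetic decoder instead.
        decoded[:, :, dy::2, dx::2] = residual[:, :, dy::2, dx::2]
        mask[:, :, dy::2, dx::2] = 1.0
    return probs


# Usage sketch: both outputs hold per-pixel symbol probabilities that would
# drive an adaptive arithmetic coder on the residual signal.
synth = torch.rand(1, 1, 64, 64)                                # synthesized view
res = torch.randint(0, NUM_SYMBOLS, (1, 1, 64, 64)).float()     # residual to code
p_parallel = parallel_probs(ContextModel(in_ch=1), synth)
p_autoreg = autoregressive_probs(ContextModel(in_ch=3), synth, res)
```

The trade-off suggested by the abstract is visible in the sketch: the parallel variant needs a single network evaluation, while the group-wise variant runs one evaluation per group but can condition later pixels on residuals already decoded.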
Original language | English |
---|---|
Title of host publication | Proceedings of the 23rd IEEE International Workshop on Multimedia Signal Processing |
Number of pages | 6 |
Publisher | IEEE |
Publication date | 2021 |
ISBN (Print) | 9781665432887 |
DOIs | |
Publication status | Published - 2021 |
Event | 2021 IEEE 23rd International Workshop on Multimedia Signal Processing, Hybrid event, Tampere, Finland. Duration: 6 Oct 2021 → 8 Oct 2021. Conference number: 23. https://attend.ieee.org/mmsp-2021/ |
Conference
Conference | 2021 IEEE 23rd International Workshop on Multimedia Signal Processing |
---|---|
Number | 23 |
Location | Hybrid event |
Country/Territory | Finland |
City | Tampere |
Period | 06/10/2021 → 08/10/2021 |
Internet address | https://attend.ieee.org/mmsp-2021/ |
Keywords
- Light field
- Lossless coding
- Deep learning