Drone HDR Infrared Video Coding via Aerial Map Prediction

Evgeny Belyaev, Søren Forchhammer

    Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

    Abstract

    In this paper, we present a novel drone high dynamic range (HDR) infrared video coding algorithm based on aerial map prediction. First, at the encoder side, we accumulate input frames in a buffer and use them to build an aerial map. Then we apply global motion estimation to extract the most similar frame from the aerial map and use it as an extra reference frame in the reference picture list of the H.265/HEVC video coding standard. The map is compressed with H.265/HEVC Intra and included in the overall bit stream. The global motion estimation can capture camera rotation, which is typical of UAV video sequences; as a result, the overall coding performance is improved. Experimental results show that, for a test video with camera rotation, the proposed algorithm provides 3–35% bit rate savings compared to H.265/HEVC.
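
    Below is a minimal sketch of the encoder-side idea described in the abstract, assuming a translation-only global motion model (the paper also accounts for camera rotation) and grayscale infrared frames given as NumPy arrays. The class name AerialMapPredictor and the phase-correlation helper are illustrative, not the authors' implementation; the H.265/HEVC Intra coding of the map and the insertion of the extracted patch into the reference picture list are omitted.

```python
import numpy as np

def estimate_translation(prev: np.ndarray, cur: np.ndarray):
    """Estimate an integer global shift (dy, dx) such that
    cur[y, x] ~= prev[y - dy, x - dx], using phase correlation."""
    cross = np.fft.fft2(cur) * np.conj(np.fft.fft2(prev))
    cross /= np.maximum(np.abs(cross), 1e-12)
    corr = np.abs(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = prev.shape
    # Map wrapped peak coordinates to signed shifts.
    dy = dy - h if dy > h // 2 else dy
    dx = dx - w if dx > w // 2 else dx
    return int(dy), int(dx)

class AerialMapPredictor:
    """Accumulates frames into a translation-only aerial map and returns the
    co-located map patch as a candidate extra reference frame."""

    def __init__(self, frame_h: int, frame_w: int, margin: int = 512):
        self.h, self.w = frame_h, frame_w
        self.map = np.zeros((frame_h + 2 * margin, frame_w + 2 * margin), np.float32)
        self.pos = (margin, margin)   # top-left corner of the previous frame in the map
        self.prev = None

    def update(self, frame: np.ndarray) -> np.ndarray:
        """Insert `frame` into the map and return the predicted reference patch."""
        y, x = self.pos
        if self.prev is not None:
            dy, dx = estimate_translation(self.prev, frame)
            # Scene content shifting by (dy, dx) within the frame means the
            # camera footprint moved by (-dy, -dx) in map coordinates.
            y = int(np.clip(y - dy, 0, self.map.shape[0] - self.h))
            x = int(np.clip(x - dx, 0, self.map.shape[1] - self.w))
        # The patch read *before* writing the new frame plays the role of the
        # extra reference frame; in the paper it would come from the
        # HEVC-Intra-coded map so that encoder and decoder stay in sync.
        reference = self.map[y:y + self.h, x:x + self.w].copy()
        self.map[y:y + self.h, x:x + self.w] = frame
        self.pos, self.prev = (y, x), frame.astype(np.float32)
        return reference
```

    In the full scheme, the returned patch would be added to the H.265/HEVC reference picture list for inter prediction, and the aerial map itself would be compressed with H.265/HEVC Intra and transmitted as part of the bit stream.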
    Original language: English
    Title of host publication: Proceedings of 2018 25th IEEE International Conference on Image Processing
    Publisher: IEEE
    Publication date: 2018
    Pages: 1733-1736
    ISBN (Print): 9781479970612
    DOIs
    Publication status: Published - 2018
    Event: 2018 IEEE International Conference on Image Processing - Athens, Greece
    Duration: 7 Oct 2018 – 10 Oct 2018
    Conference number: 25
    https://2018.ieeeicip.org/

    Conference

    Conference: 2018 IEEE International Conference on Image Processing
    Number: 25
    Country/Territory: Greece
    City: Athens
    Period: 07/10/2018 – 10/10/2018
    Internet address: https://2018.ieeeicip.org/

    Keywords

    • Bit rate
    • Encoding
    • Drones
    • Streaming media
    • Video sequences
    • Image coding
    • Quantization (signal)
    • Drone video coding
    • HDR
    • Infrared video
