Full-Body Motion from a Single Head-Mounted Device: Generating SMPL Poses from Partial Observations

Andrea Dittadi, Sebastian Dziadzio, Darren Cosker, Ben Lundell, Tom Cashman, Jamie Shotton

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review


Abstract

The increased availability and maturity of head-mounted and wearable devices opens up opportunities for remote communication and collaboration. However, the signal streams provided by these devices (e.g., head pose, hand pose, and gaze direction) do not represent a whole person. One of the main open problems is therefore how to leverage these signals to build faithful representations of the user. In this paper, we propose a method based on variational autoencoders that generates articulated poses of a human skeleton from noisy streams of head and hand pose. Our approach relies on a novel and theoretically well-grounded model of pose likelihood. We demonstrate on publicly available datasets that our method is effective even from very impoverished signals, and we investigate how pose prediction can be made more accurate and realistic.
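The core idea described in the abstract (a variational autoencoder that generates full-body SMPL poses conditioned on sparse head and hand signals) can be illustrated with a minimal conditional-VAE sketch. This is not the authors' model: all dimensions, layer sizes, and function names below are illustrative assumptions (untrained random weights, a single hidden layer, Gaussian likelihood), shown only to make the generate-from-partial-observations setup concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not the paper's): head + two hands as
# 6-DoF observations -> 18 inputs; SMPL body pose as 21 joints x 3 axis-angle
# parameters = 63 outputs; an 8-dimensional latent space.
OBS_DIM, POSE_DIM, LATENT_DIM, HIDDEN = 18, 63, 8, 64

def linear(in_dim, out_dim):
    # Small random weights; a real model would learn these from motion data.
    return rng.normal(0.0, 0.1, (in_dim, out_dim)), np.zeros(out_dim)

# Encoder q(z | pose, obs): produces mean and log-variance of the latent.
W_enc, b_enc = linear(POSE_DIM + OBS_DIM, HIDDEN)
W_mu, b_mu = linear(HIDDEN, LATENT_DIM)
W_lv, b_lv = linear(HIDDEN, LATENT_DIM)

# Decoder p(pose | z, obs): predicts the full-body pose from latent + signals.
W_dec, b_dec = linear(LATENT_DIM + OBS_DIM, HIDDEN)
W_out, b_out = linear(HIDDEN, POSE_DIM)

def encode(pose, obs):
    h = np.tanh(np.concatenate([pose, obs]) @ W_enc + b_enc)
    return h @ W_mu + b_mu, h @ W_lv + b_lv

def decode(z, obs):
    h = np.tanh(np.concatenate([z, obs]) @ W_dec + b_dec)
    return h @ W_out + b_out

def elbo_sample(pose, obs):
    # Single-sample ELBO estimate, the quantity maximized during training.
    mu, logvar = encode(pose, obs)
    z = mu + np.exp(0.5 * logvar) * rng.normal(size=LATENT_DIM)  # reparameterization
    recon = decode(z, obs)
    rec_term = -0.5 * np.sum((pose - recon) ** 2)                # Gaussian log-likelihood
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)     # KL to N(0, I)
    return rec_term - kl

# At test time, full-body pose is unobserved: sample the latent from the prior
# and decode it conditioned only on the head/hand device signals.
obs = rng.normal(size=OBS_DIM)
sampled_pose = decode(rng.normal(size=LATENT_DIM), obs)
print(sampled_pose.shape)  # (63,)
```

Because the latent is sampled rather than fixed, decoding the same device signals with different latent draws yields a distribution of plausible full-body poses, which is what makes a generative model attractive when the observations are this impoverished.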
Original language: English
Title of host publication: Proceedings of 2021 International Conference on Computer Vision
Publisher: IEEE
Publication date: 2021
Pages: 11687-11697
Publication status: Published - 2021
Event: 2021 International Conference on Computer Vision - Virtual event
Duration: 11 Oct 2021 - 17 Oct 2021
https://iccv2021.thecvf.com/
