Multimedia Mapping using Continuous State Space Models

Tue Lehn-Schiøler

    Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review


    Abstract

    In this paper a system that transforms speech waveforms into animated faces is proposed. The system relies on continuous state space models to perform the mapping; this makes it possible to produce video with no sudden jumps and allows continuous control of the parameters in 'face space'. Simulations are performed on recordings of 3-5 sec. video sequences with sentences from the TIMIT database. The model is able to construct an image sequence from an unknown noisy speech sequence fairly well, even though the number of training examples is limited.
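
    The abstract does not give the model equations, but a linear-Gaussian state space model with Kalman filtering is one standard instance of the approach it describes: a smoothly evolving hidden state is inferred from speech features and then read out as face parameters, which is what rules out sudden jumps between frames. The sketch below illustrates this idea; all matrix names, dimensions, and noise levels are illustrative assumptions, not the paper's actual parameterization.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy dimensions (assumed, not from the paper): hidden 'face space' state,
    # speech features (e.g. MFCC-like), and face/appearance parameters.
    n_state, n_speech, n_face, T = 4, 12, 6, 100

    A = 0.95 * np.eye(n_state)                       # smooth state dynamics
    C_speech = rng.normal(size=(n_speech, n_state))  # state -> speech features
    C_face = rng.normal(size=(n_face, n_state))      # state -> face parameters
    Q = 0.01 * np.eye(n_state)                       # process noise
    R = 0.1 * np.eye(n_speech)                       # speech observation noise

    def kalman_filter(y, A, C, Q, R):
        """Standard Kalman filter: infer the hidden state from speech features y."""
        T, n = y.shape[0], A.shape[0]
        x, P = np.zeros(n), np.eye(n)
        xs = np.zeros((T, n))
        for t in range(T):
            # Predict the state forward one frame
            x = A @ x
            P = A @ P @ A.T + Q
            # Update with the speech observation at time t
            S = C @ P @ C.T + R
            K = P @ C.T @ np.linalg.inv(S)
            x = x + K @ (y[t] - C @ x)
            P = (np.eye(n) - K @ C) @ P
            xs[t] = x
        return xs

    # Simulate a noisy speech-feature sequence from the model as stand-in data
    x_true = np.zeros((T, n_state))
    for t in range(1, T):
        x_true[t] = A @ x_true[t-1] + rng.multivariate_normal(np.zeros(n_state), Q)
    speech = x_true @ C_speech.T + rng.normal(scale=0.3, size=(T, n_speech))

    # Infer the continuous state from speech, then read out face parameters.
    # Because the inferred state evolves smoothly, so does the face trajectory.
    x_est = kalman_filter(speech, A, C_speech, Q, R)
    face = x_est @ C_face.T   # one face-parameter vector per video frame
    print(face.shape)         # (100, 6)
    ```

    In an offline setting, a Rauch-Tung-Striebel smoothing pass over the filtered estimates would use future speech frames as well, further smoothing the face trajectory.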
    Original language: English
    Title of host publication: IEEE 6th Workshop on Multimedia Signal Processing Proceedings
    Publication date: 2004
    Pages: 51-54
    Publication status: Published - 2004
