I have always been fascinated by animated characters. We know there is more to speech than simply words: facial expression adds significantly to our understanding. As a deaf person, I also know only too well how much the precise movement of the lips and face helps my understanding of the spoken word.
Forming speech is complex. About a hundred different muscles in the chest, neck, jaw, tongue, and lips must work together to produce it. Every word or short phrase that is physically spoken is accompanied by its own unique arrangement of muscle movements. No wonder, then, that animations can often appear flat and characterless.
New research from the University of East Anglia (UK) could revolutionise the way that animated characters deliver their lines.
Animating the speech of characters such as Elsa and Mowgli has been both time-consuming and costly. But now computer programmers have identified a way of creating natural-looking animated speech that can be generated in real-time as voice actors deliver their lines.
The discovery was unveiled in Los Angeles at the world’s largest computer graphics conference, SIGGRAPH 2017. The work is a collaboration between UEA, Caltech and Carnegie Mellon University.
Researchers show how a ‘deep learning’ approach – using artificial neural networks – can generate natural-looking real-time animated speech.
As well as automatically generating lip sync for English-speaking actors, the new software also animates singing and can be adapted for foreign languages. The online video games industry could also benefit from the research, with characters delivering their lines on-the-fly with much more realism than is currently possible, and it could also be used to animate avatars in virtual reality.
A central focus for the work has been to develop software which can be seamlessly integrated into existing production pipelines, and which is easy to edit.
Lead researcher Dr Sarah Taylor, from UEA’s School of Computing Sciences, said: “Realistic speech animation is essential for effective character animation. Done badly, it can be distracting and lead to a box office flop.
“Doing it well however is both time consuming and costly as it has to be manually produced by a skilled animator. Our goal is to automatically generate production-quality animated speech for any style of character, given only audio speech as an input.”
The team’s approach involves ‘training’ a computer to take spoken words from a voice actor, predict the mouth shape needed, and animate a character to lip sync the speech.
This is done by first recording audio and video of a reference speaker reciting a collection of more than 2,500 phonetically diverse sentences. The speaker’s face is tracked to create a ‘reference face’ animation model.
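The article does not spell out how that reference model is built, but a common way to get a compact, editable face representation from tracked video is to run principal component analysis over the per-frame landmark positions. The sketch below illustrates that general idea only; the input file name, the landmark format and the 95% variance cut-off are assumptions made for the example, not details of the UEA system.

```python
import numpy as np

# Hypothetical input: one row per video frame, each row the flattened
# (x, y) coordinates of the tracked facial landmarks for that frame.
# Shape: (num_frames, 2 * num_landmarks)
tracked_frames = np.load("reference_face_landmarks.npy")  # assumed file

# Centre the data around the mean face.
mean_face = tracked_frames.mean(axis=0)
centred = tracked_frames - mean_face

# PCA via SVD: keep the components that explain most of the variation,
# giving a compact "reference face" parameterisation.
_, singular_values, components = np.linalg.svd(centred, full_matrices=False)
explained = (singular_values ** 2) / np.sum(singular_values ** 2)
num_modes = int(np.searchsorted(np.cumsum(explained), 0.95)) + 1
modes = components[:num_modes]          # basis of facial deformation modes

# Any frame can now be described by a short parameter vector ...
params = centred @ modes.T              # (num_frames, num_modes)
# ... and reconstructed from it when animating.
reconstructed = params @ modes + mean_face
```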
The audio is then transcribed into speech sounds using off-the-shelf speech recognition software.
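To feed the animation model described next, those phoneme labels, together with the timings the recogniser reports, have to be expanded into one label per video frame. A minimal sketch of that expansion, assuming a simple list of (phoneme, start, end) timings in seconds and a 30 fps video rate (both invented for the example), could look like this:

```python
# Hypothetical transcription output: (phoneme, start_time, end_time) in seconds.
transcript = [
    ("HH", 0.00, 0.08),
    ("EH", 0.08, 0.21),
    ("L",  0.21, 0.30),
    ("OW", 0.30, 0.52),
]

FPS = 30  # assumed video frame rate


def phonemes_per_frame(transcript, fps=FPS):
    """Expand timed phoneme labels into a frame-by-frame label sequence."""
    end_time = transcript[-1][2]
    num_frames = int(round(end_time * fps))
    frames = []
    for i in range(num_frames):
        t = (i + 0.5) / fps  # sample at the middle of each frame
        label = next((p for p, s, e in transcript if s <= t < e), transcript[-1][0])
        frames.append(label)
    return frames


print(phonemes_per_frame(transcript))
```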
This collected information is then used to train a model that can animate the reference face from a frame-by-frame sequence of phonemes. The animation can then be transferred to a CG character in real time.
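The article does not give the network’s architecture, but one straightforward way to realise such a phoneme-to-animation model is a small feed-forward network that looks at a sliding window of frame-level phoneme labels and predicts the reference-face parameters for the centre frame. The PyTorch sketch below shows that idea only; the phoneme inventory size, window length, layer widths and parameter dimensionality are placeholder values, not those of the published system.

```python
import torch
import torch.nn as nn

NUM_PHONEMES = 40      # assumed size of the phoneme inventory
WINDOW = 11            # assumed window: current frame plus 5 frames either side
NUM_FACE_PARAMS = 30   # assumed dimensionality of the reference-face parameters


class PhonemeToFaceNet(nn.Module):
    """Map a window of one-hot phoneme labels to face parameters for the centre frame."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(WINDOW * NUM_PHONEMES, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, NUM_FACE_PARAMS),
        )

    def forward(self, x):
        # x: (batch, WINDOW, NUM_PHONEMES) one-hot labels -> flatten the window
        return self.net(x.flatten(start_dim=1))


model = PhonemeToFaceNet()
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy batch standing in for aligned training data: phoneme windows and the
# face parameters recovered from the tracked reference video.
phoneme_windows = torch.zeros(8, WINDOW, NUM_PHONEMES)
phoneme_windows[:, :, 0] = 1.0            # pretend every frame is phoneme 0
target_params = torch.randn(8, NUM_FACE_PARAMS)

optimizer.zero_grad()
pred = model(phoneme_windows)
loss = loss_fn(pred, target_params)
loss.backward()
optimizer.step()
```

At run time, the same window would slide along the frame-by-frame phoneme sequence produced from the recogniser output, yielding one set of face parameters per frame for the reference model to replay and retarget.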
‘Training’ the model takes just a couple of hours. Dr Taylor said: “What we are doing is translating audio speech into a phonetic representation, and then into realistic animated speech.”
The method has so far been tested against sentences from a range of different speakers. The research team also undertook a subjective evaluation in which viewers rated how natural the animated speech looked.
Dr Taylor said: “Our approach only requires off-the-shelf speech recognition software, which automatically converts any spoken audio into the corresponding phonetic description. Our automatic speech animation therefore works for any input speaker, for any style of speech and can even work in other languages.
“Our results so far show that our approach achieves state-of-the-art performance in visual speech animation. The real beauty is that it is very straightforward to use, and easy to edit and stylise the animation using standard production editing software.”