
A revolution in speech animation?

I have always been fascinated by animated characters. We know there is more to speech than simply words. Facial expression adds significantly to our understanding. As a deaf person I also know only too well how precise movement of the lips and face helps in my understanding of the spoken word.

Forming speech is complex. About a hundred different muscles in the chest, neck, jaw, tongue, and lips must work together to produce it. Every word or short phrase that is physically spoken requires its own unique arrangement of muscle movements. No wonder, then, that animations can often appear flat and characterless.

New research from the University of East Anglia (UK) could revolutionise the way that animated characters deliver their lines.

Animating the speech of characters such as Elsa and Mowgli has been both time-consuming and costly. But now computer programmers have identified a way of creating natural-looking animated speech that can be generated in real-time as voice actors deliver their lines.

The discovery was unveiled in Los Angeles at Siggraph 2017, the world’s largest computer graphics conference. The work is a collaboration that includes UEA, Caltech and Carnegie Mellon University.

Researchers show how a ‘deep learning’ approach – using artificial neural networks – can generate natural-looking real-time animated speech.

As well as automatically generating lip sync for English-speaking actors, the new software also animates singing and can be adapted for foreign languages. The online video games industry could also benefit from the research – with characters delivering their lines on-the-fly with much more realism than is currently possible – and it could also be used to animate avatars in virtual reality.

A central focus for the work has been to develop software which can be seamlessly integrated into existing production pipelines, and which is easy to edit.

Lead researcher Dr Sarah Taylor, from UEA’s School of Computing Sciences, said: “Realistic speech animation is essential for effective character animation. Done badly, it can be distracting and lead to a box office flop.

“Doing it well, however, is both time-consuming and costly as it has to be manually produced by a skilled animator. Our goal is to automatically generate production-quality animated speech for any style of character, given only audio speech as an input.”

The team’s approach involves ‘training’ a computer to take spoken words from a voice actor, predict the mouth shape needed, and animate a character to lip sync the speech.

This is done by first recording audio and video of a reference speaker reciting a collection of more than 2500 phonetically diverse sentences. Their face is tracked to create a ‘reference face’ animation model.
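The article doesn’t describe the face model itself, but to make the idea concrete, here is a minimal sketch of one common way to build such a “reference face”: represent each tracked video frame as a vector of 2D facial landmark coordinates and compress them with PCA into a small set of animation parameters. The landmark representation, the 30-parameter size and the function names are my own illustrative assumptions, not the team’s published design.

```python
import numpy as np

def build_face_model(landmarks, n_params=30):
    """Fit a low-dimensional 'reference face' model from tracked landmarks.

    landmarks: array of shape (n_frames, n_points * 2), the tracked 2D
    facial landmark coordinates for every video frame (an assumed
    representation; the team's exact face model is not specified here).
    """
    mean_face = landmarks.mean(axis=0)
    centered = landmarks - mean_face
    # PCA via SVD: keep the first n_params modes of facial variation.
    _, _, components = np.linalg.svd(centered, full_matrices=False)
    basis = components[:n_params]      # (n_params, n_points * 2)
    params = centered @ basis.T        # per-frame animation parameters
    return mean_face, basis, params

def params_to_face(params, mean_face, basis):
    """Reconstruct landmark positions from animation parameters."""
    return mean_face + params @ basis

# Example with synthetic data: 1000 frames of 68 tracked points.
frames = np.random.rand(1000, 68 * 2)
mean_face, basis, params = build_face_model(frames)
print(params_to_face(params, mean_face, basis).shape)  # (1000, 136)
```

Whatever the exact model, the point is the same: every video frame of the reference speaker is reduced to a short vector of face parameters that a learning system can predict.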

The audio is then transcribed into speech sounds using off-the-shelf speech recognition software.
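The article doesn’t say which recogniser is used, but the output it describes is essentially a time-aligned phoneme sequence. A minimal sketch of what that frame-by-frame phonetic description might look like, assuming ARPAbet-style labels and a 30 fps animation frame rate (both purely illustrative choices):

```python
from dataclasses import dataclass

FRAME_RATE = 30.0  # animation frames per second (an assumed value)

@dataclass
class PhoneSegment:
    phoneme: str   # e.g. "HH", "EH", "L", "OW" (ARPAbet-style labels)
    start: float   # segment start time in seconds
    end: float     # segment end time in seconds

def to_frame_labels(segments, n_frames):
    """Expand timed phoneme segments into one label per animation frame."""
    labels = ["sil"] * n_frames  # default every frame to silence
    for seg in segments:
        first = int(round(seg.start * FRAME_RATE))
        last = int(round(seg.end * FRAME_RATE))
        for f in range(first, min(last, n_frames)):
            labels[f] = seg.phoneme
    return labels

# Example: the word "hello" as it might come back from a recogniser
# (the timings are made up for illustration).
segments = [
    PhoneSegment("HH", 0.00, 0.08),
    PhoneSegment("EH", 0.08, 0.18),
    PhoneSegment("L",  0.18, 0.26),
    PhoneSegment("OW", 0.26, 0.42),
]
print(to_frame_labels(segments, n_frames=15))
```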

This collected information can then be used to generate a model that is able to animate the reference face from a frame-by-frame sequence of phonemes. This animation can then be transferred to a CG character in real-time.
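The published network architecture isn’t given in the article, so the following is only a plausible sketch of the mapping it describes: a small feedforward network that takes a sliding window of one-hot phoneme labels and predicts the reference-face parameters for the centre frame. The window length, layer sizes and phoneme inventory below are assumptions made for illustration.

```python
import torch
import torch.nn as nn

N_PHONEMES = 41      # size of the phoneme inventory (assumed)
WINDOW = 11          # length of the phoneme-label window centred on the current frame (assumed)
N_FACE_PARAMS = 30   # dimensionality of the reference-face parameters (assumed)

class PhonemeToFace(nn.Module):
    """Map a sliding window of one-hot phoneme labels to face parameters.

    Illustrative only: the exact architecture and sizes of the published
    system are not described in the article.
    """
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(WINDOW * N_PHONEMES, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, N_FACE_PARAMS),
        )

    def forward(self, window_onehot):
        # window_onehot: (batch, WINDOW, N_PHONEMES) -> flatten and predict
        return self.net(window_onehot.flatten(start_dim=1))

# At run time the window slides forward one frame at a time over the
# phoneme sequence, so face parameters can be produced frame by frame
# as the actor speaks, then retargeted to any CG character's rig.
model = PhonemeToFace()
dummy_window = torch.zeros(1, WINDOW, N_PHONEMES)
face_params = model(dummy_window)   # shape (1, N_FACE_PARAMS)
print(face_params.shape)
```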

‘Training’ the model takes just a couple of hours. Dr Taylor said: “What we are doing is translating audio speech into a phonetic representation, and then into realistic animated speech.”

The method has so far been tested against sentences from a range of different speakers. The research team also undertook a subjective evaluation in which viewers rated how natural the animated speech looked.

Dr Taylor said: “Our approach only requires off-the-shelf speech recognition software, which automatically converts any spoken audio into the corresponding phonetic description. Our automatic speech animation therefore works for any input speaker, for any style of speech and can even work in other languages.

“Our results so far show that our approach achieves state-of-the-art performance in visual speech animation. The real beauty is that it is very straightforward to use, and easy to edit and stylise the animation using standard production editing software.”
