Wednesday, 30 August 2017

PFIs bleeding NHS - time to buy them out

The independent Centre for Health and the Public Interest (CHPI) has today released a devastating analysis of accounts showing how PFIs are bleeding money from the NHS, and calls for public sector loans to be used to buy out PFI contracts.

Over the past 6 years, companies which run PFI contracts to build and run NHS hospitals and other facilities have made staggering pre-tax profits of £831m – money which has thereby not been available for patient care over this period.

The findings show that if the NHS had not been paying profits on PFI schemes, deficits in NHS hospitals would have been reduced by a quarter over this 6-year period.

Over the next 5 years, almost £1bn of taxpayer funds (£973m) will go to PFI companies in the form of pre-tax profits – equivalent to more than a fifth (22%) of the additional amount of money (£4.5bn) that the government has promised the NHS over this period.

A number of PFI schemes are generating particularly high pre-tax profits for their operators. The company which holds the contract for the hospital at University College London has made pre-tax profits of £190m over the past 11 years. This is out of £527m paid to the company by the NHS. The total value of the hospital is £292m.

A small number of companies are bleeding the NHS of much needed funds.

The report finds that only 8 companies own or have equity stakes in 92% of all the companies holding PFI contracts with the NHS – meaning that there is very little competition between the companies bidding to build and run NHS PFI hospitals.

The report calls for public sector funding to be used to buy out PFI contracts. It also recommends taxing PFI companies to recoup some of the profits which have been made, and capping the amount of profit which can be made by a private company which has an exclusive public-sector contract with the NHS.

The report also calls for greater transparency of equity sales, to prevent the unnoticed consolidation of market power by a small number of investors, and for the renegotiation of contracts with the private companies to reduce the amounts the NHS has to pay.

Commenting today on the report, Dr Chaand Nagpaul, BMA council chair, said:

“NHS providers and commissioners are being pushed to breaking point because of unprecedented financial pressures so it is outrageous to see over £800m of much needed money being leaked out to private companies for profit alone.

“Private Finance Initiatives are an extortionate drain on the public purse, with private companies scandalously gaining at the expense of taxpayers and patients. The government should instead be properly funding new NHS capital projects to ensure money remains in the NHS in the long term. Ideally the government would either renegotiate lucrative PFI contracts or enable existing PFI schemes to be bought out by the NHS so that vital resources are available for frontline patient care.

“Looking more broadly at the role of private and independent providers in the NHS, a trend is emerging - independent sector provision of NHS healthcare has increased every year for the past five years. More attention needs to be paid to whether it provides."

See also Devastating NHS cuts 'shrouded in secrecy'

Monday, 7 August 2017

A revolution in speech animation?

I have always been fascinated by animated characters. We know there is more to speech than simply words. Facial expression adds significantly to our understanding. As a deaf person I also know only too well how precise movements of the lips and face help my understanding of the spoken word.

Forming speech is complex. About a hundred different muscles in the chest, neck, jaw, tongue, and lips must work together to produce it. Every word or short phrase that is physically spoken has its own unique arrangement of muscle movements. No wonder, then, that animations can often appear flat and characterless.

New research from the University of East Anglia (UK) could revolutionise the way that animated characters deliver their lines.

Animating the speech of characters such as Elsa and Mowgli has been both time-consuming and costly. But now computer programmers have identified a way of creating natural-looking animated speech that can be generated in real time as voice actors deliver their lines.

The discovery was unveiled in Los Angeles at the world’s largest computer graphics conference - Siggraph 2017. The work is a collaboration which includes UEA, Caltech and Carnegie Mellon University.

Researchers show how a ‘deep learning’ approach – using artificial neural networks – can generate natural-looking real-time animated speech.

As well as automatically generating lip sync for English-speaking actors, the new software also animates singing and can be adapted for foreign languages. The online video games industry could also benefit from the research – with characters delivering their lines on the fly with much more realism than is currently possible – and it could also be used to animate avatars in virtual reality.

A central focus for the work has been to develop software which can be seamlessly integrated into existing production pipelines, and which is easy to edit.

Lead researcher Dr Sarah Taylor, from UEA’s School of Computing Sciences, said: “Realistic speech animation is essential for effective character animation. Done badly, it can be distracting and lead to a box office flop.

“Doing it well however is both time consuming and costly as it has to be manually produced by a skilled animator. Our goal is to automatically generate production-quality animated speech for any style of character, given only audio speech as an input.”

The team’s approach involves ‘training’ a computer to take spoken words from a voice actor, predict the mouth shape needed, and animate a character to lip sync the speech.

This is done by first recording audio and video of a reference speaker reciting a collection of more than 2500 phonetically diverse sentences. Their face is tracked to create a ‘reference face’ animation model.

The audio is then transcribed into speech sounds using off-the-shelf speech recognition software.
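The article does not name the recognition tool, but its output can be pictured as a list of phonemes with start and end times. The short Python sketch below uses an assumed data format and an assumed frame rate of 30 frames per second (it is not the researchers' code) to turn such a timed transcription into the frame-by-frame phoneme sequence the animation model would consume.

```python
# Hypothetical sketch: turn a timed phoneme transcription into per-frame labels.
# The (phoneme, start, end) tuple format and the 30 fps frame rate are assumptions;
# the article only says the audio is transcribed by off-the-shelf software.

FPS = 30  # assumed animation frame rate

def phonemes_to_frames(transcription, duration_s):
    """transcription: list of (phoneme, start_s, end_s) tuples covering the audio clip."""
    n_frames = int(round(duration_s * FPS))
    frames = ["sil"] * n_frames              # default every frame to silence
    for phoneme, start, end in transcription:
        first = int(start * FPS)
        last = min(int(end * FPS), n_frames - 1)
        for f in range(first, last + 1):
            frames[f] = phoneme
    return frames

# Example: the word "hello" spoken over roughly half a second.
frames = phonemes_to_frames(
    [("HH", 0.00, 0.08), ("AH", 0.08, 0.16), ("L", 0.16, 0.28), ("OW", 0.28, 0.50)],
    duration_s=0.6,
)
print(frames)
```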

The collected data can then be used to train a model that animates the reference face from a frame-by-frame sequence of phonemes. This animation can then be transferred to a CG character in real time.
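As a rough picture of how such a model might look, here is a minimal sketch in Python (PyTorch): a small network that maps a sliding window of phoneme labels to a vector of face-shape parameters for the reference face. The window length, network size, phoneme inventory and number of face parameters are all assumptions made for illustration; the published system may differ in every detail.

```python
# Minimal sketch, not the UEA/Caltech/CMU implementation: a sliding-window
# network that predicts face-model parameters from nearby phoneme labels.
import torch
import torch.nn as nn

N_PHONEMES = 41      # assumed phoneme inventory (40 phones plus silence)
WINDOW = 11          # assumed context window: 5 frames either side of the current frame
N_FACE_PARAMS = 30   # assumed size of the tracked reference-face parameter vector

class PhonemeToFace(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_PHONEMES * WINDOW, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, N_FACE_PARAMS),
        )

    def forward(self, phoneme_window):
        # phoneme_window: (batch, WINDOW) integer phoneme ids for one frame's context
        one_hot = nn.functional.one_hot(phoneme_window, N_PHONEMES).float()
        return self.net(one_hot.flatten(start_dim=1))

# Inference, frame by frame: slide the window along the per-frame phoneme ids and
# predict one face-parameter vector per frame, which is then retargeted onto
# whatever CG character is being animated.
model = PhonemeToFace()
phoneme_ids = torch.randint(0, N_PHONEMES, (1, WINDOW))   # stand-in for real data
face_params = model(phoneme_ids)
print(face_params.shape)   # torch.Size([1, 30])
```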

‘Training’ the model takes just a couple of hours. Dr Taylor said: “What we are doing is translating audio speech into a phonetic representation, and then into realistic animated speech.”
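On the description given, that training step would amount to regressing the tracked face parameters of the reference speaker against the aligned phoneme windows from the 2,500-odd recorded sentences. A sketch of such a loop, reusing the assumed PhonemeToFace model above (batch size, learning rate and number of epochs are guesses, not figures from the research):

```python
# Sketch of the training step implied by the article: pairs of
# (phoneme window, tracked face parameters) from the recorded sentences
# are fit with a simple regression loss.
import torch

def train(model, dataloader, epochs=20):
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = torch.nn.MSELoss()
    for epoch in range(epochs):
        for phoneme_windows, face_params in dataloader:
            optimiser.zero_grad()
            prediction = model(phoneme_windows)   # predicted face parameters
            loss = loss_fn(prediction, face_params)
            loss.backward()
            optimiser.step()
    return model
```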

The method has so far been tested against sentences from a range of different speakers. The research team also undertook a subjective evaluation in which viewers rated how natural the animated speech looked.

Dr Taylor said: “Our approach only requires off-the-shelf speech recognition software, which automatically converts any spoken audio into the corresponding phonetic description. Our automatic speech animation therefore works for any input speaker, for any style of speech and can even work in other languages.

“Our results so far show that our approach achieves state-of-the-art performance in visual speech animation. The real beauty is that it is very straightforward to use, and easy to edit and stylise the animation using standard production editing software.”