Wednesday 8 November 2017

Artificial “photos” that look real to humans.

NVIDIA recently published a paper titled “Progressive Growing of GANs for Improved Quality, Stability, and Variation.” GAN stands for “generative adversarial network,” a system that uses two neural networks: one generates candidate images while the other tries to tell them apart from real photos. Such systems are capable of generating artificial “photos” that look real to humans.

To demonstrate the system, NVIDIA trained the networks on CelebA-HQ, a high-quality version of the CelebA celebrity-face dataset.

This kind of generative adversarial network has become an increasingly popular AI training approach. The program is fed a massive data set (in this case celebrity photos) and gets better at creating the desired result (here, realistic computer-generated faces) over a period of days or weeks. The "adversarial" component comes from pitting the two networks against one another, with the discriminator "challenging" the generator's face creations.
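To make the two-network idea concrete, here is a minimal sketch of a GAN training loop in PyTorch. It is not NVIDIA's implementation: the paper uses convolutional networks and a different loss, whereas this sketch uses tiny fully connected networks, a standard binary cross-entropy loss, and random tensors in place of the CelebA-HQ photos. All sizes and hyperparameters are illustrative only.

```python
# Minimal GAN sketch: a generator and a discriminator trained against each other.
# Random tensors stand in for real photos; architectures are illustrative only.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 32 * 32 * 3  # tiny 32x32 RGB "faces" for the sketch

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),           # fake image in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),                            # real-vs-fake logit
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.rand(16, img_dim) * 2 - 1        # stand-in for a batch of real photos
    noise = torch.randn(16, latent_dim)
    fake = generator(noise)

    # Discriminator step: label real images 1 and generated images 0.
    d_loss = bce(discriminator(real), torch.ones(16, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(16, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator label fakes as real.
    g_loss = bce(discriminator(fake), torch.ones(16, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

As the two losses push against each other over many steps, the generator is forced to produce images the discriminator can no longer reliably reject, which is what eventually yields faces that look real to humans.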

The future applications of rendering realistic-looking people who aren't actually people seem both potentially useful and unsettling. This is certainly a boon to graphics companies that always need new images of people, perhaps for use in advertising. But it's also important to recognize that AI programs are getting closer and closer to achieving realism in artificial faces; how this might be employed in fake news and other means of deception is unknown, and potentially boundless. For now, though, it's at least given us some really interesting faces to look at.

For its project, NVIDIA found that training the neural network using low-resolution photos of real celebrities and then ramping up to high-res photos helped to both speed up and stabilize the “learning” process, allowing the AI to create “images of unprecedented quality.”
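The resolution ramp-up can be sketched roughly as follows. This is only an illustration of the low-to-high-resolution schedule and the gradual fade-in idea, not NVIDIA's implementation, which progressively grows the networks themselves layer by layer; the resolutions, step counts, and the helper name batch_at_resolution are made up for the example.

```python
# Sketch of a progressive-resolution training schedule: start on heavily
# downsampled images and double the resolution in stages, blending old and
# new resolutions during a fade-in phase.
import torch
import torch.nn.functional as F

def batch_at_resolution(images, res, alpha):
    """Downsample full-resolution images to the current training resolution.

    While 0 < alpha < 1, the batch is a blend of the previous, coarser
    resolution and the new one, so training adjusts gradually instead of
    seeing a sudden jump in detail.
    """
    new = F.interpolate(images, size=(res, res), mode="bilinear", align_corners=False)
    if alpha >= 1.0 or res <= 4:
        return new
    coarse = F.interpolate(images, size=(res // 2, res // 2), mode="bilinear", align_corners=False)
    coarse = F.interpolate(coarse, size=(res, res), mode="nearest")
    return alpha * new + (1 - alpha) * coarse

full_res_batch = torch.rand(8, 3, 1024, 1024)    # stand-in for CelebA-HQ photos

schedule = [4, 8, 16, 32]                        # resolution grows as training proceeds
steps_per_stage = 100
for res in schedule:
    for step in range(steps_per_stage):
        alpha = min(1.0, step / (steps_per_stage / 2))   # fade the new resolution in
        real = batch_at_resolution(full_res_batch, res, alpha)
        # ... run the usual generator/discriminator updates on `real` here ...
```

Training on coarse images first lets the networks settle the overall structure of a face before they ever have to deal with fine detail, which is why the staged schedule both speeds up and stabilizes learning.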

