Can you spot the real person? AI can now create ‘100 per cent lifelike’ human faces from scratch
Can you tell who is real and who is not? Artificial Intelligence is now able to create lifelike human faces from scratch.
Researchers at NVIDIA have spent years working on creating realistic-looking human faces from only a few source photos.
For many people it's difficult to tell the difference between one of the faces generated below and an actual human face. Can you spot which is which?
The source images - the top row - are the only legitimate photographs of real people; the rest have been computer generated. The programme uses various traits from real people to create new fake people
Artificial Intelligence is now able to create faces that are almost 100 per cent lifelike. The woman pictured right is not real
The team at NVIDIA released a paper on the subject, explaining that they used a Generative Adversarial Network (GAN) to create and customise the realistic-looking faces.
The fake faces can be easily customised using a method known as 'style transfer', which blends the characteristics of one image with another.
The generator treats the image as a collection of three style levels: coarse styles (pose, hair, face shape), middle styles (facial features and eyes) and fine styles (colour scheme).
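In rough terms, the style-mixing idea can be sketched in a few lines of Python. This is a toy illustration rather than NVIDIA's published code: the network sizes, the layer split and all the names below are invented purely to show how coarse, middle and fine layers could each take their style from a different source face.

```python
# Toy sketch of "style mixing" (hypothetical names and sizes, not NVIDIA's code):
# two random latent codes are mapped to style vectors, and the generator's early
# ("coarse"), middle and late ("fine") layers each borrow their style from
# either person A or person B.
import torch
import torch.nn as nn

LATENT_DIM = 64
NUM_LAYERS = 6  # layers 0-1: coarse, 2-3: middle, 4-5: fine (toy split)


class MappingNetwork(nn.Module):
    """Maps a random latent code z to an intermediate style vector w."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, LATENT_DIM), nn.ReLU(),
            nn.Linear(LATENT_DIM, LATENT_DIM),
        )

    def forward(self, z):
        return self.net(z)


class ToyStyleGenerator(nn.Module):
    """Each layer modulates its activations with a per-layer style vector."""
    def __init__(self):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Linear(LATENT_DIM, LATENT_DIM) for _ in range(NUM_LAYERS)
        )
        self.to_image = nn.Linear(LATENT_DIM, 3 * 8 * 8)  # tiny 8x8 RGB "image"

    def forward(self, styles):
        # `styles` holds one style vector per layer.
        x = torch.ones(styles[0].shape[0], LATENT_DIM)
        for layer, w in zip(self.layers, styles):
            x = torch.relu(layer(x)) * w  # the style vector scales the features
        return self.to_image(x).view(-1, 3, 8, 8)


mapping, generator = MappingNetwork(), ToyStyleGenerator()

# Two "people": latent codes for source A and source B.
w_a = mapping(torch.randn(1, LATENT_DIM))
w_b = mapping(torch.randn(1, LATENT_DIM))

# Coarse layers (pose, hair, face shape) from A; middle and fine layers from B.
mixed_styles = [w_a, w_a, w_b, w_b, w_b, w_b]
blended_face = generator(mixed_styles)
print(blended_face.shape)  # torch.Size([1, 3, 8, 8])
```

Swapping which layers take their style from A and which from B is what lets the system transfer, say, one person's pose and hair onto another person's facial features and colouring.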
Can you tell which is the real person and which is the fake person? The child pictured on the right is AI generated
The fake faces can be easily customised: using the 'style transfer' method, characteristics from different images can be blended together. The fake person is pictured right
Animals such as cats, and objects such as bedrooms, can also be generated using the same method.
The researchers created a grid to show the extent to which they could alter people's facial characteristics using only one source image.
One of the most fascinating aspects of this is that GANs have only been around for four years.
But it is not yet perfect; there are giveaways that can indicate you are looking at an AI-generated image.
For example, hair is very difficult to replicate and can often look painted on, or slightly peculiar.
The advances in this technology also pose interesting ethical questions.
Can people really trust pictorial evidence?
The result of combining the facial characteristics of Source A and Source B, creating a completely new person
What are the implications if governments or repressive regimes are able to use this technology for propaganda or to spread misinformation?
Earlier this year we revealed how Nvidia software uses AI and deep-learning algorithms to predict what a missing portion of a picture should look like and recreate it with incredible accuracy.
All users need to do is click and drag over the area to be filled in and the image is instantly updated.
As well as restoring old physical photos that have been damaged, the technique could also be used to fix corrupted pixels or bad edits made to digital files.
Graphics specialist Nvidia, based in Santa Clara, California, trained its neural network using a variety of irregularly shaped holes in images.
The system then determined what was missing from each and filled in the gaps.
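In outline, that training idea looks something like the Python sketch below. This is an assumed, simplified illustration rather than Nvidia's actual system: the tiny network, the crude rectangular 'irregular' masks and the random stand-in images are all invented for clarity.

```python
# Toy sketch of hole-filling training (assumed details, not Nvidia's system):
# random masks knock pixels out of each image, and the model learns to
# reconstruct the missing region from the surrounding picture.
import torch
import torch.nn as nn


def random_irregular_mask(h=64, w=64, n_strokes=8):
    """1 = pixel kept, 0 = hole. Crude irregular holes built from random patches."""
    mask = torch.ones(1, h, w)
    for _ in range(n_strokes):
        y = torch.randint(0, h - 8, (1,)).item()
        x = torch.randint(0, w - 8, (1,)).item()
        mask[:, y:y + 8, x:x + 8] = 0
    return mask


inpainter = nn.Sequential(           # toy fully-convolutional "filler"
    nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
optimiser = torch.optim.Adam(inpainter.parameters(), lr=1e-3)

for step in range(100):                      # stand-in for real photo batches
    image = torch.rand(8, 3, 64, 64)
    mask = torch.stack([random_irregular_mask() for _ in range(8)])
    holed = image * mask                     # pixels inside the holes are lost
    prediction = inpainter(torch.cat([holed, mask], dim=1))
    loss = ((prediction - image) ** 2 * (1 - mask)).mean()  # penalise the holes
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
```

Because the network only ever sees the damaged version plus the mask, it has to learn what "should" be in the hole from the surrounding context, which is what lets it restore scratched prints or bad edits.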
Photoshop could become a thing of the past, as new technology developed by Nvidia can touch up damaged photos in seconds
HOW DOES ARTIFICIAL INTELLIGENCE LEARN?
AI systems rely on artificial neural networks (ANNs), which try to simulate the way the brain works in order to learn.
ANNs can be trained to recognise patterns in information - including speech, text data, or visual images - and are the basis for a large number of the developments in AI over recent years.
Conventional AI uses input to 'teach' an algorithm about a particular subject by feeding it massive amounts of information.
Practical applications include Google's language translation services, Facebook's facial recognition software and Snapchat's image altering live filters.
The process of inputting this data can be extremely time-consuming, and is limited to one type of knowledge.
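A conventional, supervised set-up might be sketched like this in Python - a toy example with invented data and a made-up labelling rule, not any particular company's system:

```python
# Toy sketch of conventional supervised learning: an algorithm is "taught" one
# task by repeatedly feeding it labelled examples (all data here is invented).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(1000):                       # stands in for a huge dataset
    inputs = torch.randn(32, 4)                # 32 examples, 4 features each
    labels = (inputs.sum(dim=1) > 0).long()    # toy labelling rule
    loss = loss_fn(model(inputs), labels)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
```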
A new breed of ANNs, called adversarial neural networks, pits two AI bots against each other, allowing them to learn from one another.
This approach is designed to speed up the process of learning, as well as refining the output created by AI systems.
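In outline, the adversarial set-up can be sketched as two small networks trained in turn, as in the toy Python example below; the data, network sizes and training schedule here are assumptions chosen purely for illustration.

```python
# Toy sketch of the adversarial idea: a generator invents samples from random
# noise, a discriminator judges real vs fake, and each improves by trying to
# outdo the other (invented data and sizes, not any production system).
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2
generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, data_dim) + 3.0          # stand-in for real data
    fake = generator(torch.randn(64, latent_dim))

    # Discriminator: label real samples 1, generated samples 0.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: try to make the discriminator call its fakes real (label 1).
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

The generator never sees the real data directly; it improves only by fooling the discriminator, which is what makes the approach so effective at producing convincing fakes.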