A disturbing new study shows fake faces created by artificial intelligence (AI) look more believable than those of real people.
Researchers conducted several experiments to find out if fake faces created with machine learning systems can fool people.
They found that artificially generated faces are not only highly photorealistic, but almost indistinguishable from real faces, and are even considered more authentic.
Due to the results, the researchers are calling for precautionary measures to prevent the spread of “deepfakes” on the Internet.
Deepfakes have already been used for so-called “revenge porn”, fraud and propaganda, leading to misidentification and the spread of fake news.
Real or synthesized? This composite shows the real (R) and synthetic (S) faces that were most accurately classified in the study (upper eight) and those that were least accurately classified (lower eight).
HOW DO GENERATIVE ADVERSARIAL NETWORKS WORK?
Generative adversarial networks (GANs) work by pitting two algorithms against each other in an attempt to create convincing representations of the real world.
These synthetic digital creations, which can take the form of images, videos, sounds and other content, are based on data fed into the system.
One AI model creates new content based on what it has been taught, while the other critiques these creations, pointing out imperfections and inaccuracies.
This process may one day allow machines to learn new information without human involvement.
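The adversarial loop described above can be sketched in miniature. The toy example below is illustrative only, not the study's method: it swaps images for a one-dimensional stand-in, where a "generator" with a single parameter tries to mimic numbers drawn around 5.0 and a tiny logistic "discriminator" learns to tell real samples from fakes, each update nudging the other.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# "Real" data: numbers drawn around 5.0 (a stand-in for real photos).
def real_sample():
    return 5.0 + random.gauss(0, 0.5)

# Generator: a single parameter mu; it emits mu plus noise.
mu = 0.0
def fake_sample():
    return mu + random.gauss(0, 0.5)

# Discriminator: logistic classifier d(x) = sigmoid(w*x + b),
# trained to output 1 for real samples and 0 for fakes.
w, b = 0.0, 0.0
lr_d, lr_g = 0.05, 0.05

for step in range(2000):
    # Discriminator update: ascend log d(real) + log(1 - d(fake)).
    xr, xf = real_sample(), fake_sample()
    pr, pf = sigmoid(w * xr + b), sigmoid(w * xf + b)
    w += lr_d * ((1 - pr) * xr - pf * xf)
    b += lr_d * ((1 - pr) - pf)

    # Generator update: ascend log d(fake), i.e. move mu so the
    # discriminator is fooled into scoring fakes as real.
    xf = fake_sample()
    pf = sigmoid(w * xf + b)
    mu += lr_g * (1 - pf) * w

print(round(mu, 2))  # mu drifts toward the real mean of 5.0
```

Real systems such as StyleGAN2 follow the same two-player structure, but with deep convolutional networks in both roles and millions of parameters rather than one.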
The new study was conducted by Sophie J. Nightingale of Lancaster University and Hany Farid of the University of California, Berkeley.
“Our evaluation of the photorealism of AI-synthesized faces indicates that synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable from, and more trustworthy than, real faces,” they say.
“Perhaps most perniciously, in a digital world where any image or video can be tampered with, the authenticity of any inconvenient or unwanted recording can be called into question.”
For the study, the experts used fake faces created using StyleGAN2, a “generative adversarial network” from American technology company Nvidia.
In the first experiment, 315 participants classified 128 faces, drawn from a set of 800, as real or synthetic.
They were only 48 percent accurate, close to the 50 percent that would be expected from random guessing.
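A quick way to see why 48 percent counts as "close to chance" is a simple z-test against a 50 percent guessing baseline. The sketch below is hypothetical, assuming for illustration a single observer making 128 independent classifications; it is not the statistical analysis the authors actually ran.

```python
import math

def z_vs_chance(accuracy, n):
    """Z-score of an observed accuracy against 50% guessing over n trials."""
    p0 = 0.5
    se = math.sqrt(p0 * (1 - p0) / n)  # standard error under the null
    return (accuracy - p0) / se

# Hypothetical single observer classifying 128 faces at 48% accuracy:
z = z_vs_chance(0.48, 128)
print(round(z, 2))  # -0.45, well inside the ±1.96 threshold for significance
```

A z-score this small means such an observer's performance would be statistically indistinguishable from coin-flipping.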
In the second experiment, 219 new participants were trained and given feedback on how to classify faces.
They classified 128 faces drawn from the same set of 800 as in the first experiment, but despite the training, accuracy improved only to 59 percent.
The researchers then used a third experiment to explore whether perceptions of trustworthiness could help people identify artificial images.
“Faces provide a rich source of information, with exposure of just milliseconds sufficient to make implicit inferences about individual traits such as trustworthiness,” the authors say.
A representative set of matching real and synthetic faces (in terms of gender, age, race, and overall appearance)
SCIENTISTS DESIGN AI THAT CAN LEARN WHICH FACES YOU FIND ATTRACTIVE
An artificial intelligence system has been developed that can effectively read your mind and work out which faces and types of appearance you find most attractive.
The Finnish researchers wanted to find out if a computer can identify facial features that we find attractive without any verbal or written guidance.
The team fitted 30 volunteers with electroencephalography (EEG) monitors, which track brain waves, and then showed them images of “fake” faces generated from 200,000 real celebrity photos stitched together in various ways.
They then fed that data into an AI that learned brainwave preferences and created completely new images tailored to each volunteer.
In the third experiment, 223 participants were asked to rate the trustworthiness of 128 faces, taken from the same set of 800, on a scale from 1 (very untrustworthy) to 7 (very trustworthy).
The average rating for synthetic faces was 7.7 percent higher than the average rating for real faces, a difference the researchers describe as “statistically significant”.
Black faces were rated as more trustworthy than South Asian faces, but otherwise ratings did not vary by race.
Women, however, were rated as significantly more trustworthy than men.
The researchers say the results cannot be explained by whether or not the faces were smiling, even though a smile can increase perceived trustworthiness.
“A smiling face is more likely to be rated as trustworthy, but 65.5 percent of the real faces and 58.8 percent of the synthetic faces are smiling, so facial expression alone cannot explain why synthetic faces are rated as more trustworthy,” they note.
Instead, they suggest that synthesized faces may be judged more trustworthy because they resemble average faces, which themselves tend to be perceived as more trustworthy.
To protect the public from “deepfakes”, the researchers also offered guidelines for creating and distributing synthesized images.
“Safeguards could include, for example, incorporating robust watermarks into image- and video-synthesis networks, which would provide a downstream mechanism for reliable identification.
The four most trustworthy faces (top) and the four least trustworthy (bottom), with their trustworthiness ratings on a scale from 1 (very untrustworthy) to 7 (very trustworthy). Fake faces (S) were, on average, rated more trustworthy than real faces (R)
“Because it is the democratization of access to this powerful technology that poses the greatest threat, we also call for a reconsideration of the often hands-off approach to the open and unrestricted release of code for anyone to incorporate into any application.
“At this turning point, as in other fields of science and technology, we are calling on the graphics and machine vision community to develop guidelines for the creation and dissemination of synthetic media technologies that include ethical guidelines for media researchers, publishers, and distributors.”
The study was published in Proceedings of the National Academy of Sciences.
SCIENTISTS TRAIN AI TO CREATE ARTWORKS ‘INDISTINGUISHABLE’ FROM HUMAN WORK – BUT CAN YOU TELL THE DIFFERENCE?
A 2021 study found that artificial intelligence (AI) can create works of art ranging from abstract expressionist masterpieces to perfect images of the real world that are indistinguishable from human-made works.
In online surveys, around 200 people were unable to distinguish the human-made works of art from the artificial ones.
AI art is created by machine-learning algorithms trained on many thousands of images of real paintings.
The more images of a certain style or aesthetic the algorithm analyzes, the more human-like the results can be, down to small details like brush strokes.
While AI paintings are already selling for hundreds of thousands of pounds, reproducing artistic human emotions appears to be the last frontier for technology.
However, the study’s author believes that computers will soon be able to create spontaneous, unpredictable pieces that touch people emotionally.
The study presents seven paintings – two created by people, and the rest by artificial intelligence. But can you tell which is which?