A stylish Pope Francis became the topic of conversation over the weekend after images of the Catholic leader wearing a white puffer jacket circulated online, drawing fashion-related compliments.
But the Pope actually never wore the jacket-and-cross-necklace combination; the photos that circulated weren’t real.
Those images, created with the artificial intelligence (AI) image generator Midjourney, were uploaded to Facebook and Reddit groups dedicated to AI-generated images and art. They were then reposted on other social media sites without that context.
While some took the images as a light-hearted joke, cybersecurity expert Chester Wisniewski warns that eerily convincing AI-generated photos could further exacerbate misinformation.
“We’ve kind of crossed an uncanny valley now,” Wisniewski told CTVNews.ca in a phone interview on Monday. “I don’t know that there is a way for people to tell the difference between a real photo and a fake one and this is going to have deeply troubling societal ramifications.”
In the earlier days of AI-generated images, Wisniewski explained, there were many tell-tale signs that distinguished a real photo from a computer-generated one. For example, limbs and fingers were often distorted in AI-generated images, hair appeared unrealistic and heavily airbrushed, or teeth in portraits looked exaggerated.
Today, however, the technology has improved so quickly that these red flags are no longer easy for an everyday online user to spot.
WILD WEST OF AI
Since the code behind computer-generated images, voices and video is publicly available and doesn’t belong to a single company or person, anyone who is technologically savvy can create such content themselves or pay to use one of the various AI-model apps or websites.
While this open access to AI can lead to improvements in the digital world — for example, by expanding cybersecurity to protect users’ personal data — it also places the responsibility for being transparent about the content on the creator, which doesn’t always happen, Wisniewski says.
“The problem is, all of this relies on some sort of honour system, which we already know people are not fairly honest,” he said. “Even influencers have a hard time following the U.S. FTC guidelines where they’re supposed to put #ad or #sponsor when they’re promoting something.”
Because AI image generators learn from human-created material, bias in the images they produce poses further ethical issues. Previous reports have shown that some AI image generators can accurately portray only white and male subjects, revealing disparities for Black people and other people of colour. Additionally, apps like Lensa AI, which generates portraits, have been accused of stealing artwork from artists.
Though the Pope did not address the photos directly, he spoke about the use of artificial intelligence on Monday, saying its power can only help humans if it’s used ethically.
“I am convinced that the development of artificial intelligence and machine learning has the potential to contribute in a positive way to the future of humanity,” the Pope is quoted as saying in a Vatican news release. “I am certain that this potential will be realized only if there is a constant and consistent commitment on the part of those developing these technologies to act ethically and responsibly.”