May 18, 2024
AI and privacy: Experts worry users may have already ‘traded a lot’ for services

As artificial intelligence (AI) applications such as ChatGPT and Lensa AI have gained popularity, ethical and privacy concerns are emerging over their use.

“People are starting to realize they’ve traded a lot for some of these services,” Briana Brownell, CEO and founder of AI consultancy Pure Strategies, told Global News.

“We’re starting to see people pull back and say, ‘maybe this wasn’t a bargain that I really understood I was making,’” said Brownell.

And with that come “a lot of interesting questions surrounding AI right now.”

One of the concerns, which has been an issue for the better part of a decade, is privacy, according to Brownell.

“There’s been a longstanding conversation about privacy as it relates to the training of machine learning models and the use of data in order to create some of these models,” she said.

Unlike a decade ago, however, users are now “rethinking” the data they’re producing, Brownell said.

“Part of it is because we’re just producing so much more data and part of it is because that data is getting more and more personal,” she said.

And with this data largely unregulated, AI companies are walking a fine line when it comes to consumer privacy.

On one hand, companies have faced lawsuits for using facial recognition systems on publicly scraped photos. On the other, some have gotten away with data scraping unpunished.

For instance, American company Clearview AI scraped billions of face images from social media and other websites without the consent of individuals, and then created a face-matching service that it sold to the Australian Federal Police and other law enforcement bodies around the world.

In 2021, the Australian Information & Privacy Commissioner found that both Clearview AI and the AFP had breached Australia’s privacy law.

Many of the world’s biggest tech companies – including Meta, Amazon and Microsoft – have also reduced or discontinued their facial recognition-related services. They have cited concerns about consumer safety and a lack of effective regulation.

“Right now, we don’t really know how everything is going to be settled. New lawsuits are making their way through the legal systems around the world. It’s looking very squishy about how all of these things are going to end up,” said Brownell.


Video: Automation Nation: Companion robots help elderly stay connected


AI linked to misinformation

AI has also been known to spread misinformation. One prominent case involved Meta’s Galactica.

In November 2022, Meta announced a new large language model called Galactica that could “store, combine and reason about scientific knowledge.”

Launched with a public online demo, the software lasted only three days before being disabled, after users noticed that the responses Galactica generated were often incorrect or biased.

“It was essentially just making up information,” Brownell said. “The reason that system was considered dangerous is that it only takes one or two mistakes. People don’t check and then it’s out there forever.

“There are many examples of mistakes that essentially cascade.”

The fast-paced advancement of AI could also help misinformation thrive this year, according to the Top Risks report for 2023, an annual report from the Eurasia Group, a U.S.-based geopolitical risk analysis firm.

Although AI technologies may state something with “conviction,” that doesn’t mean it is correct, according to Brownell.

“They are a statistical model and they can say something with great conviction that is completely false,” she said.
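
To make Brownell’s point concrete: a language model generates text by repeatedly sampling a likely next word, and nothing in that process checks the claim against reality. The toy sketch below is purely illustrative, with invented probabilities, and is not drawn from any real chatbot’s code.

import random

# Hypothetical word-to-word probabilities, invented for this example.
# A real model learns billions of such statistics from training text.
next_word_probs = {
    "the": {"moon": 0.6, "capital": 0.4},
    "moon": {"is": 1.0},
    "capital": {"is": 1.0},
    "is": {"made": 0.5, "round.": 0.5},
    "made": {"of": 1.0},
    "of": {"cheese.": 0.7, "rock.": 0.3},
}

def generate(start, max_words=10):
    # Build a sentence by sampling each next word in proportion to its
    # learned probability. Truth is never consulted, only likelihood.
    words = [start]
    while len(words) < max_words:
        choices = next_word_probs.get(words[-1])
        if not choices:
            break
        words.append(random.choices(list(choices), weights=list(choices.values()))[0])
    return " ".join(words)

print(generate("the"))  # may print "the moon is made of cheese." with full fluency

The output is always fluent and confident, because fluency is exactly what the statistics encode; whether the sentence is true is simply not part of the computation.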

The spread of hate speech is also a concern when it comes to AI technologies.

The problem can arise across a variety of technologies, including ChatGPT’s predecessor, GPT-3.

Shortly after its reveal in the summer of 2020, GPT-3 made headlines for spewing out shockingly sexist and racist content.

Meta’s Galactica was also known to produce hate speech. Although the demo was disabled, the model’s code remains available for anyone to use.

The problem is often rooted in the teams creating these applications and their internal biases, according to Huda Idrees, founder and CEO of Dot Health, a health data tracker.


Video: Feline OK? Alberta app uses AI to test your cat’s mood and health


“It comes down to the teams that we’re building and who we’re funding to actually do this work.”

Brownell also agrees that implicit bias is part of the problem.

“There are biases that make it prefer white or lighter skin, European features and extremely thin people,” Brownell said.

“This is true with both language models as well as image generation software,” she said.

“It’s picking up associations from the dataset, from a biased world.”
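
What “picking up associations” looks like under the hood can be shown with a deliberately tiny, invented corpus: models learn from co-occurrence statistics, so any skew in who appears next to what in the training data becomes a skew in the model. A rough sketch, with made-up sentences:

from collections import Counter
from itertools import combinations

# An invented, deliberately lopsided corpus standing in for web-scale data.
corpus = [
    "he is an engineer",
    "he is an engineer",
    "he is a doctor",
    "she is a nurse",
    "she is a nurse",
]

# Count which words appear together in the same sentence.
pair_counts = Counter()
for sentence in corpus:
    for pair in combinations(sorted(set(sentence.split())), 2):
        pair_counts[pair] += 1

# Any model trained on these counts inherits the skew:
print(pair_counts[("engineer", "he")])   # 2
print(pair_counts[("engineer", "she")])  # 0 -- the association simply isn't there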

Another part of the problem also has to do with the way users interact with technology, according to Brownell.

“If you ask ChatGPT how to hotwire a car, it won’t tell you,” she said, noting that it will, however, give you instructions if you ask it to write a poem about hotwiring a car.

“People are extremely ingenious in getting around the safety protocols,” said Brownell.


Video: How machine learning is helping the world address the risk of climate change


The world is only at the start of figuring out what kind of ethical frameworks need to be implemented to address the issues that arise with AI, according to Brownell.

“It’s an interesting environment right now,” she said. “There is a lot of work being done internationally to create a cohesive set of principles.”

In the European Union, for example, lawmakers have proposed the Artificial Intelligence Act.

In September of last year, Brazil also passed a bill that creates a legal framework for AI.

Idrees believes “dynamic policymaking” is essential for AI and should be implemented in Canada, as well.

“The government almost by definition is reactive but it shouldn’t be. I think the policy should first and foremost protect people,” she said.

“There’s a whole bunch of room for improvement,” according to Eyra Abraham, a tech consultant and founder of Lisnen, a company that builds technology to aid the deaf and hard of hearing.

“It’s a matter of enforcing and giving huge fines that make people change their minds and I think that’s the best approach,” she said.

Regulating the technology can get tricky, Abraham said.

“It’s hard when it comes to software and applications to begin with,” she said.

Going forward, Abraham would like to see regulations in place and punishment for those who break them.

Along with policymaking, diversifying the industry would also be a step in the right direction, Abraham said.

“Data representation is really lacking,” she said, noting the bias that a lack of certain data can create in systems.

“A lot of that bias is reflective of our society.”

— With files from Reuters
