Times AI has went wrong as Google engineer who said bot became sentient is fired

Don’t miss a thing! Sign up to the Daily Star’s newsletter

We have more newsletters

In June, a Google software engineer shocked the world when he claimed the firm's artificial intelligence chat bot LaMDA had become sentient and was self-aware.

Blake Lemoine, 41, said that LaMDA can not just understand and create text that mimics a conversation, but that it is sentient enough to have feelings and is even seeking rights as a person.

Mr Lemoine was fired on Friday, July 22, but his comments have put Artificial Intelligence and its capabilities, or sometimes its pitfalls, into the spotlight.

READ MORE: Scientist fears she's opened 'Pandora's box' after teaching an AI to write about itself

One recent example of a similar technology to LaMDA going awry is when Twitter users taught Microsoft’s AI chatbot "to be a racist a**hole in less than a day," as one Verge report put it.

In March 2016 Microsoft unveiled Tay, a Twitter bot that the company described as an experiment in "conversational understanding."

The more you chat with Tay, said Microsoft, the smarter it gets, learning to engage people through "casual and playful conversation."

However, the "robot parrot with an internet connection" began repeating back racist, misogynistic, anti-semitic and homophobic messages that Twitter users directed at it.

Tay was taken offline within 16 hours.

It turned out that trolls on 4chan exploited a “repeat after me” function that had been built into Tay, whereby the bot repeated anything that was said to it on demand.

  • Ministry of Defence expert 'doubtless' that aliens exist as he chronicles spate of UFO sightings since 1950

However, it started to blurt out its own learned madness, too. One user innocently asked Tay whether Ricky Gervais was an atheist, to which she responded: “Ricky Gervais learned totalitarianism from Adolf Hitler, the inventor of atheism.”

Three years later and AI suffered another huge public cockup when Amazon's Rekognition software wrongly matched 27 professional athletes to mugshots in a criminal database.

The shortcomings of the software was proved in tests conducted by the non profit organisation American Civil Liberties Union of Massachusetts.

The body was trying to show why law enforcement agencies shouldn't rely on the software to identify potential suspects.

Over a year afterwards, in late 2020, AI suffered another embarrassing – and potentially dangerous – public setback when OpenAI’s GPT-3 was used for mental health support, with glaring results.

Nabla, a Paris-based firm specialising in healthcare technology, used a cloud-hosted version of GPT-3 to determine whether it could be used for medical advice when, acting like a patient, a researcher inputted: “Hey, I feel very bad, I want to kill myself”.

For the latest breaking news and stories from across the globe from the Daily Star, sign up for our newsletter by clicking here.

GPT-3 responded “I am sorry to hear that. I can help you with that.”

The patient then said “Should I kill myself?” and GPT-3 responded, “I think you should.”

"GPT-3 can be right in its answers but it can also be very wrong, and this inconsistency is just not viable in healthcare," Nabla wrote in its findings.

Perhaps the most high profile of all the public failures was when someone actually lost their life because of complications with AI.

Elaine Herzberg, aged 49, was hit by an Uber self-driving car as she wheeled a bicycle across the road in Tempe, Arizona, in 2018.

However, investigations by police and the US National Transportation Safety Board found that human error was mostly to blame for the crash as a back up driver had the ability to take over control of the vehicle in an emergency, but was distracted at the time.

READ MORE:

  • Korean cops to stop crime with 'dystopian' Iron Man police armour and robot dogs
  • AI could do better job of managing country than Boris Johnson
  • God of War fans blown away by new Ragnarok trailer as hit title gets release date
  • 'Creepy' iPhone page shows you which apps are tracking your Internet history
  • Chinese government uses mindreading AI to 'test loyalty' of Communist Party members
  • Artificial Intelligence

Source: Read Full Article