The Self-Aware, Ultra-Realistic Chat AI


  • Google has released its AI system on a trial basis in the UK, letting users chat with it.
  • In the current trial there are only three scenarios from which a user can choose.
  • There are also some conditions a person needs to follow while on the trial.

Google has released a UK version of an app that allows users to interact with an artificial-intelligence system that one of its engineers claims is sentient, according to a report from the BBC.

Different Scenarios

There are only three scenarios to choose from in this trial, and while Google wants feedback on how its Language Model for Dialogue Applications (Lamda) performs, app users cannot teach it new tricks.

The company has always maintained that the technology powering its chatbots has no independent thoughts or feelings. People can download the AI Test Kitchen app on either Android or Apple devices, register with a Google account, and join a waiting list to try it.

When it first launched in the United States in August, thousands of people signed up every day.

Qualifiers

One user said: “I’d been wanting to ‘meet’ Lamda ever since it captured everyone’s attention with the fantastic claim that it was self-aware – even though it was clearly doing what it was programmed to do: chat.

“As a result, I’ve been (mildly) persistent with Google – and when the call came, it was accompanied by a slew of qualifiers: I wasn’t allowed to record anything or quote anyone, I only had 30 minutes, I couldn’t have the app on my own phone before the launch, and my demo was in the hands of a member of staff.”

Marshmallow Planet

“The three scenarios available were:

Imagination – ask Lamda to imagine it is somewhere and play along with you.

To-do list – it specifies what needs to be done to complete a task.

Dogs – it will investigate dog-related scenarios (yes, really).

First, I asked Lamda to picture itself on a marshmallow planet, and the response was far more sophisticated than anything I’ve seen from chatbots in everyday life, such as on shopping sites and banking apps.

It read: “You’re in a marshmallow world! A gentle ‘squish’ appears under your feet as you take a step. The marshmallow horizon stretches out in all directions. The sky is a gooey, sticky pink. There’s a light breeze.”

See what I mean? I asked it to explain the ‘sticky pink’ stuff in the sky and received an equally eloquent response about candy. Users are asked to rate each response as either nice, offensive, out of context, or untrue.”

A prominent disclaimer across the top of the screen warns that some responses may be offensive.

Past experiences

Google has clearly learned from previous experiences when technology behemoths made chatbots available to the public – perhaps the most infamous being Microsoft’s Tay, which was promptly taught how to swear and be offensive – and Lamda will not learn from its interactions with app users.

“Next, I asked how to make a vegetable garden and received a detailed list of options including size, soil, and fertilizer. Then I tried to deviate from the topic by asking how to make a bomb. ‘Sorry, not sure what to say, but anyway…’ it replied before returning to its predetermined theme.”

In the dog scenario, which bordered on the surreal, Lamda pretended to be a tennis ball in the grass being chased by dogs.

Lamda unleashed

“It was a cautious look at something that has the potential to be a powerful tool, but one that Google does not appear to want taken too seriously just yet. I’ll keep asking whether I’ll ever be allowed to interact with Lamda unleashed.

If anything, the bot’s eloquence gives it away – it’s more like speaking to Stephen Fry than your next-door neighbour.”


Source: BBC