
Front Burner

Did Google make conscious AI?

A Google employee's claim that the company's AI chatbot is conscious is raising concerns about Big Tech's lack of transparency around its technology and the ease with which people can be fooled by it.

A man using a mobile phone walks past Google offices in New York. (Mark Lennihan/The Associated Press)

Earlier this week, Blake Lemoine, an engineer who works for Google's Responsible AI department, went public with his belief that Google's LaMDA chatbot is sentient.

LaMDA, or Language Model for Dialogue Applications, is an artificial intelligence program that mimics speech and tries to predict which words are most related to the prompts it is given.

While some experts believe that conscious AI may be possible in the future, many in the field think that Lemoine is mistaken, and that the conversation he has stirred up about sentience distracts from the immediate and pressing ethical questions surrounding Google's control over this technology and the ease with which people can be fooled by it.

Today on Front Burner, cognitive scientist and author of Rebooting AI, Gary Marcus, discusses LaMDA, the trouble with testing for consciousness in AI, and what we should really be thinking about when it comes to AI's ever-expanding role in our day-to-day lives.
