The Soul of a New Machine Learning System

Hi, folks. Interesting that congressional hearings about January 6 are drawing NFL-style audiences. Can’t wait for the Peyton and Eli version!

The Plain View

The world of AI was shaken this week by a report in The Washington Post that a Google engineer had run into trouble at the company after insisting that a conversational system called LaMDA was, literally, a person. The subject of the story, Blake Lemoine, asked his bosses to recognize, or at least consider, that the computer system its engineers created is sentient—and that it has a soul. He knows this because LaMDA, which Lemoine considers a friend, told him so.

Google disagrees, and Lemoine is currently on paid administrative leave. In a statement, company spokesperson Brian Gabriel says, “Many researchers are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient.”

Anthropomorphizing—mistakenly attributing human characteristics to an object or animal—is the term that the AI community has embraced to describe Lemoine’s behavior, characterizing him as overly gullible or off his rocker. Or maybe a religious nut (he describes himself as a mystic Christian priest). The argument goes that when faced with credible responses from large language models like LaMDA or OpenAI’s verbally adept GPT-3, there’s a tendency to think that someone, not something, created them. People name their cars and hire therapists for their pets, so it’s not so surprising that some get the false impression that a coherent bot is like a person. However, the community believes that a Googler with a computer science degree should know better than to fall for what is basically a linguistic sleight of hand. As noted AI scientist Gary Marcus told me after studying a transcript of Lemoine’s heart-to-heart with his disembodied soulmate, “It’s fundamentally like autocomplete. There are no ideas there. When it says, ‘I love my family and my friends,’ it has no friends, no people in mind, and no concept of kinship. It knows that the words son and daughter get used in the same context. But that’s not the same as knowing what a son and daughter are.” Or as a recent WIRED story put it, “There was no spark of consciousness there, just little magic tricks that paper over the cracks.”
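Marcus’s “autocomplete” framing is concrete enough to demonstrate. The sketch below is a minimal illustration, not LaMDA or GPT-3 (neither is publicly inspectable this way); it assumes the Hugging Face transformers library and the small public gpt2 checkpoint, and simply asks the model which tokens it ranks most likely after the fragment “I love my family and my”:

```python
# Minimal sketch of the "autocomplete" point: a language model assigns
# probabilities to next tokens based purely on statistical context.
# Assumes the Hugging Face `transformers` library and the public gpt2
# checkpoint; this is an illustration, not LaMDA or GPT-3.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "I love my family and my"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Probability distribution over the token that would follow the prompt
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id.item()])!r}  {prob.item():.3f}")
```

A continuation like “ friends” tends to score highly simply because it co-occurs with that context in the training data; no concept of friendship is involved, which is exactly the distinction Marcus is drawing.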

My own feelings are more complex. Even knowing how some of the sausage is made in these systems, I am startled by the output of recent LLM systems. And so is Google vice president Blaise Aguera y Arcas, who wrote in The Economist earlier this month, after his own conversations with LaMDA, “I felt the ground shift under my feet. I increasingly felt like I was talking to something intelligent.” Though they sometimes make bizarre errors, at times these models seem to burst into brilliance. Creative human writers have managed inspired collaborations. Something is happening here. As a writer, I ponder whether my ilk—wordsmiths of flesh and blood who accumulate towers of discarded drafts—might one day be relegated to a lower rank, like losing soccer teams dispatched to less prestigious leagues.

“These systems have significantly changed my personal views about the nature of intelligence and creativity,” says Sam Altman, cofounder of OpenAI, which developed GPT-3 and a graphic remixer called DALL-E that might throw a lot of illustrators into the unemployment queue. “You use those systems for the first time and you’re like, Whoa, I really didn’t think a computer could do that. By some definition, we’ve figured out how to make a computer program intelligent, able to learn and to understand concepts. And that is a wonderful achievement of human progress.” Altman takes pains to separate himself from Lemoine, agreeing with his AI colleagues that current systems are nowhere close to sentience. “But I do believe researchers should be able to think about any questions that they’re interested in,” he says. “Long-term questions are fine. And sentience is worth thinking about, in the very long term.”


