The personhood trap: How AI fakes human personality

Knowledge emerges from understanding how ideas relate to each other. LLMs operate on these contextual relationships, linking concepts in potentially novel ways—what you might call a type of non-human “reasoning” through pattern recognition. Whether the resulting linkages the AI model outputs are useful depends on how you prompt it and whether you can recognize when the LLM has produced a valuable output.

Each chatbot response emerges fresh from the prompt you provide, shaped by training data and configuration. ChatGPT cannot “admit” anything or impartially analyze its own outputs, as a recent Wall Street Journal article suggested. ChatGPT also cannot “condone murder,” as The Atlantic recently wrote.

The user always steers the outputs. LLMs do “know” things, so to speak—the models can process the relationships between concepts. But the AI model’s neural network contains vast amounts of information, including many potentially contradictory ideas from cultures around the world. How you guide the relationships between those ideas through your prompts determines what emerges. So if LLMs can process information, make connections, and generate insights, why shouldn’t we consider that a form of self?

Unlike today’s LLMs, a human personality maintains continuity over time. When you return to a friend after a year, you’re interacting with the same person, shaped by their experiences in the interim. This self-continuity is one of the things that underpins actual agency—and with it, the ability to form lasting commitments, maintain consistent values, and be held accountable. Our entire framework of responsibility assumes both persistence and personhood.

An LLM personality, by contrast, has no causal connection between sessions. The intellectual engine that generates a clever response in one session doesn’t exist to face consequences in the next. When ChatGPT says “I promise to help you,” it may understand, contextually, what a promise means, but the “I” making that promise literally ceases to exist the moment the response completes. Start a new conversation, and you’re not talking to someone who made you a promise—you’re starting a fresh instance of the intellectual engine with no connection to any previous commitments.
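This statelessness is visible at the API level: chat-completion-style interfaces hold no memory between requests, so the only “continuity” a conversation has is the message list the caller resends each time. A minimal sketch, using a stand-in function `fake_llm` in place of a real model call (the function name and its reply format are illustrative assumptions, not any vendor’s API):

```python
def fake_llm(messages):
    """Stand-in for a stateless model call: the output depends only on
    the messages passed in this one request, nothing else."""
    return f"(reply based on {len(messages)} message(s) of context)"

# Session 1: the "promise" exists only in this local list.
session_1 = [{"role": "user", "content": "Promise to help me tomorrow."}]
session_1.append({"role": "assistant", "content": fake_llm(session_1)})

# Session 2: a fresh list. The model call receives no trace of
# session 1 unless the caller explicitly copies it back in.
session_2 = [{"role": "user", "content": "What did you promise me?"}]
reply = fake_llm(session_2)  # sees 1 message of context, not session 1's history
```

Real chat APIs work the same way at the protocol level: “memory” features are implemented by the application stuffing prior messages (or summaries of them) back into the prompt, not by any persistent self on the model’s side.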


