The personhood trap: How AI fakes human personality


Knowledge emerges from understanding how ideas relate to each other. LLMs operate on these contextual relationships, linking concepts in potentially novel ways—what you might call a type of non-human “reasoning” through pattern recognition. Whether the linkages a model outputs are useful depends on how you prompt it and on whether you can recognize a valuable output when you see one.

Each chatbot response emerges fresh from the prompt you provide, shaped by training data and configuration. ChatGPT cannot “admit” anything or impartially analyze its own outputs, as a recent Wall Street Journal article suggested. ChatGPT also cannot “condone murder,” as The Atlantic recently wrote.

The user always steers the outputs. LLMs do “know” things, so to speak—the models can process the relationships between concepts. But the AI model’s neural network contains vast amounts of information, including many potentially contradictory ideas from cultures around the world. How you guide the relationships between those ideas through your prompts determines what emerges. So if LLMs can process information, make connections, and generate insights, why shouldn’t we consider that a form of self?

Unlike today’s LLMs, a human personality maintains continuity over time. When you return to a human friend after a year, you’re interacting with the same human friend, shaped by their experiences over time. This self-continuity is one of the things that underpins actual agency—and with it, the ability to form lasting commitments, maintain consistent values, and be held accountable. Our entire framework of responsibility assumes both persistence and personhood.

An LLM personality, by contrast, has no causal connection between sessions. The intellectual engine that generates a clever response in one session doesn’t exist to face consequences in the next. When ChatGPT says “I promise to help you,” it may understand, contextually, what a promise means, but the “I” making that promise literally ceases to exist the moment the response completes. Start a new conversation, and you’re not talking to someone who made you a promise—you’re starting a fresh instance of the intellectual engine with no connection to any previous commitments.

