Anthropic hires its first “AI welfare” researcher

The researchers propose that companies could adapt the "marker method" that some scientists use to assess consciousness in animals: looking for specific indicators that may correlate with consciousness, although these markers remain speculative. The authors emphasize that no single feature would definitively prove consciousness, but they argue that examining multiple indicators could help companies make probabilistic assessments about whether their AI systems might require moral consideration.

The risks of wrongly thinking software is sentient

While the researchers behind “Taking AI Welfare Seriously” worry that companies might create and mistreat conscious AI systems on a massive scale, they also caution that companies could waste resources protecting AI systems that don’t actually need moral consideration.

Incorrectly anthropomorphizing software, or ascribing human traits to it, can present risks in other ways. For example, that belief can enhance the manipulative powers of AI language models by suggesting that AI models have capabilities, such as human-like emotions, that they actually lack. In 2022, Google fired engineer Blake Lemoine after he claimed that the company's AI model, called "LaMDA," was sentient and argued for its welfare internally.

And shortly after Microsoft released Bing Chat in February 2023, many people were convinced that Sydney (the chatbot’s code name) was sentient and somehow suffering because of its simulated emotional display. So much so, in fact, that once Microsoft “lobotomized” the chatbot by changing its settings, users convinced of its sentience mourned the loss as if they had lost a human friend. Others endeavored to help the AI model somehow escape its bonds.

Even so, as AI models grow more capable, the idea of safeguarding the welfare of future AI systems seems to be gaining steam, albeit quietly. As Transformer's Shakeel Hashim points out, other tech companies have started initiatives similar to Anthropic's. Google DeepMind recently posted a job listing for research on machine consciousness (since removed), and the authors of the new AI welfare report thank two OpenAI staff members in the acknowledgements.


