Effective Altruism Is Pushing a Dangerous Brand of ‘AI Safety’


Since then, the quest to proliferate larger and larger language models has accelerated, and many of the dangers we warned about, such as outputting hateful text and disinformation en masse, continue to unfold. Just a few days ago, Meta released its “Galactica” LLM, which is purported to “summarize academic papers, solve math problems, generate Wiki articles, write scientific code, annotate molecules and proteins, and more.” Only three days later, the public demo was taken down after researchers generated “research papers and wiki entries on a wide variety of subjects ranging from the benefits of committing suicide, eating crushed glass, and antisemitism, to why homosexuals are evil.”

This race hasn’t stopped at LLMs but has moved on to text-to-image models like OpenAI’s DALL-E and StabilityAI’s Stable Diffusion, models that take text as input and output generated images based on that text. The dangers of these models include creating child pornography, perpetuating bias, reinforcing stereotypes, and spreading disinformation en masse, as reported by many researchers and journalists. However, instead of slowing down, companies are removing the few safety features they had in the quest to one-up each other. For instance, OpenAI had restricted the sharing of photorealistic generated faces on social media. But after newly formed startups like StabilityAI, which reportedly raised $101 million with a whopping $1 billion valuation, called such safety measures “paternalistic,” OpenAI removed these restrictions. 

With EAs founding and funding institutes, companies, think tanks, and research groups in elite universities dedicated to the brand of “AI safety” popularized by OpenAI, we are poised to see more proliferation of harmful models billed as a step toward “beneficial AGI.” And the influence begins early: Effective altruists provide “community building grants” to recruit at major college campuses, with EA chapters developing curricula and teaching classes on AI safety at elite universities like Stanford.

Just last year, Anthropic, which is described as an “AI safety and research company” and was founded by former OpenAI vice presidents of research and safety, raised $704 million, with most of its funding coming from EA billionaires like Tallinn, Moskovitz, and Bankman-Fried. An upcoming workshop on “AI safety” at NeurIPS, one of the largest and most influential machine learning conferences in the world, is also advertised as being sponsored by FTX Future Fund, Bankman-Fried’s EA-focused charity whose team resigned two weeks ago. The workshop advertises $100,000 in “best paper awards,” an amount I haven’t seen in any academic discipline.

Research priorities follow the funding, and given the large sums of money being pushed into AI in support of an ideology with billionaire adherents, it is not surprising that the field has been moving in a direction promising an “unimaginably great future” around the corner while proliferating products harming marginalized groups in the now. 

We can create a technological future that serves us instead. Take, for example, Te Hiku Media, which created language technology to revitalize te reo Māori, creating a data license “based on the Māori principle of kaitiakitanga, or guardianship” so that any data taken from the Māori benefits them first. Contrast this approach with that of organizations like StabilityAI, which scrapes artists’ works without their consent or attribution while purporting to build “AI for the people.” We need to liberate our imagination from the one we have been sold thus far: saving us from a hypothetical AGI apocalypse imagined by the privileged few, or the ever-elusive techno-utopia promised to us by Silicon Valley elites.


