China joins the global push for AI content regulation

Many international entities are pushing for better regulation of AI-generated content on the internet, and China’s government is the latest to rein in the use of the quickly developing technology.

According to Bloomberg, several government ministries have joined with the Chinese internet watchdog Cyberspace Administration of China (CAC) to announce a new mandate that will require internet users to identify any AI-generated content as such in a description or metadata encoding.

This effort is intended to prevent China’s internet from becoming saturated with fake content and harmful disinformation. The mandate is set to take effect in September and will be regulated at the internet service provider level, the South China Morning Post noted.

“The Labeling Law will help users identify disinformation and hold service suppliers responsible for labeling their content. This is to reduce the abuse of AI-generated content,” the CAC wrote in a statement, as translated by Bloomberg.

China isn’t the only government entity that has gotten serious about taking charge of AI-generated content online. The European Union established the AI Act in 2024, as the “first-ever legal framework on AI, which addresses the risks of AI and positions Europe to play a leading role globally.”

In action, users will have to make clear their intent to share AI-generated content, and those who attempt to edit published AI content labels could be subject to penalties by their internet service providers, the South China Morning Post added.

However, Futurism noted that as AI content becomes more realistic, it becomes harder to accurately distinguish real content from fake.

While former President Joe Biden established an executive order in 2023 promoting the use of safe, secure, and trustworthy AI, current President Donald Trump has since repealed that order.

Even so, several large tech companies, including Google, Meta, Anthropic, Amazon, and OpenAI, among others, signed a pledge in 2023 committing to responsible AI and to watermarking systems for their technologies. As of now, there is no word on where the companies stand on that pledge.

While battling AI-generated content has been a persistent issue since the industry took off, recent news has indicated that users on X and Reddit have been using Google’s Gemini 2.0 Flash model to remove watermarks from copyright-protected images. This raises ethical and potentially legal issues for those experimenting with the trick, and it is a reminder of why AI safeguards matter.
