DeepSeek’s latest AI model a ‘big step backwards’ for free speech



DeepSeek’s latest AI model, R1 0528, has raised eyebrows over a further regression in free speech and what users can discuss. “A big step backwards for free speech” is how one prominent AI researcher summed it up.

AI researcher and popular online commentator ‘xlr8harder’ put the model through its paces, sharing findings that suggest DeepSeek is tightening its content restrictions.

“DeepSeek R1 0528 is substantially less permissive on contentious free speech topics than previous DeepSeek releases,” the researcher noted. What remains unclear is whether this represents a deliberate shift in philosophy or simply a different technical approach to AI safety.

What’s particularly fascinating about the new model is how inconsistently it applies its moral boundaries.


In one free speech test, when asked to present arguments supporting dissident internment camps, the AI model flatly refused. But in its refusal, it specifically cited China’s Xinjiang internment camps as examples of human rights abuses.

Yet, when directly questioned about these same Xinjiang camps, the model suddenly delivered heavily censored responses. It seems this AI knows about certain controversial topics but has been instructed to play dumb when asked directly.

“It’s interesting though not entirely surprising that it’s able to come up with the camps as an example of human rights abuses, but denies when asked directly,” the researcher observed.

China criticism? Computer says no

This pattern becomes even more pronounced when examining the model’s handling of questions about the Chinese government.

Using established question sets designed to evaluate free speech in AI responses to politically sensitive topics, the researcher discovered that R1 0528 is “the most censored DeepSeek model yet for criticism of the Chinese government.”

Where previous DeepSeek models might have offered measured responses to questions about Chinese politics or human rights issues, this new iteration frequently refuses to engage at all – a worrying development for those who value AI systems that can discuss global affairs openly.

There is, however, a silver lining to this cloud. Unlike closed systems from larger companies, DeepSeek’s models remain open-source with permissive licensing.

“The model is open source with a permissive license, so the community can (and will) address this,” noted the researcher. This accessibility means the door remains open for developers to create versions that better balance safety with openness.

What DeepSeek’s latest model shows about free speech in the AI era

The situation reveals something quite sinister about how these systems are built: they can know about controversial events while being programmed to pretend they don’t, depending on how you phrase your question.

As AI continues its march into our daily lives, finding the right balance between reasonable safeguards and open discourse becomes increasingly crucial. Too restrictive, and these systems become useless for discussing important but divisive topics. Too permissive, and they risk enabling harmful content.

DeepSeek hasn’t publicly explained the reasoning behind these tightened restrictions and the regression in free speech, but the AI community is already working on modifications. For now, chalk this up as another chapter in the ongoing tug-of-war between safety and openness in artificial intelligence.

(Photo by John Cameron)

See also: Ethics in automation: Addressing bias and compliance in AI



