
The Censorship Dilemma Behind DeepSeek’s AGI Mission


Much has been written about the AI model that jolted the stock market. But DeepSeek CEO Liang Wenfeng rarely speaks in public, making each of his interviews and statements highly anticipated and closely scrutinised. As per reports, he gave just two interviews across 2023 and 2024, in which he revealed his modus operandi for achieving artificial general intelligence (AGI).

Wenfeng, born in the 1980s in the Chinese province of Guangdong, graduated from Zhejiang University with a degree in electronic information engineering. In 2015, he co-founded High-Flyer, a hedge fund which managed $10 billion by 2019. 

The interviews highlight that, unlike many Chinese AI firms prioritising commercialisation, DeepSeek is dedicated to fundamental AGI research. “It could be two, five, or 10 years away, but it will definitely happen in our lifetime,” he said, focusing on three main directions: mathematics and code, multimodality, and natural language itself.

Elaborating on DeepSeek’s approach to talent, Wenfeng clarified that there are no “wizards”. According to him, the company operates with a bottom-up structure, recruiting young talent from local Chinese universities.

While well-funded, DeepSeek’s main hurdle lies in securing high-end chips restricted by US export controls. “We don’t have short-term fundraising plans. Our problem has never been funding; it’s the embargo on high-end chips,” he said. 

According to The Wall Street Journal, Wenfeng recently met with Chinese premier Li Qiang to discuss the difficulties Chinese companies face as a result of US restrictions on advanced chip exports. 

On the open-source front, Wenfeng said, “In the face of disruptive technology, a closed-source moat is temporary.” He also noted that while people often speak of a one or two-year gap between Chinese and American AI, the true divide is between originality and imitation.

As DeepSeek rattles global markets, it also raises serious concerns about AI safety, driven by its open-source design and strong links to the Chinese Communist Party. This is especially true considering that top AI leaders now predict artificial superintelligence (ASI) on a faster timeline than previously expected.

Is All Really Well with DeepSeek?

Censorship of sensitive topics is a major concern with DeepSeek. The model avoids answering questions related to issues such as Uyghur human rights abuses, Taiwan’s political status, the 1989 Tiananmen Square incident, criticism of Chinese supreme leader Xi Jinping, censorship in China, and questions about Arunachal Pradesh and Kashmir’s sovereignty, among others. 

Instead, it deflects these inquiries with responses like: “Sorry, I’m not sure how to approach this type of question yet.” People on X have compared DeepSeek’s censorship with that seen on Anthropic’s Claude and OpenAI’s ChatGPT.

According to some reports, China’s regulatory body, the Cyberspace Administration of China (CAC), imposes strict testing requirements for AI models, including testing up to 70,000 questions to ensure politically safe answers. This slows AI development and limits the randomness and creativity typical of generative AI. 

The Chinese government rigorously reviews large language models (LLMs) to ensure they adhere to “core socialist values”. Companies such as ByteDance, Alibaba, Moonshot, and 01.AI are obligated to undergo these compulsory audits conducted by the CAC. 

Moonshot’s chatbot, Kimi, rejects most questions about Xi. Similarly, ByteDance’s LLM ranks highest in safety compliance tests, showcasing its alignment with Beijing’s messaging.

Many argue that because the model is open-source, it can be fine-tuned to suit specific needs or values. But this also hints at a deeper issue: censorship that extends beyond simple fine-tuning. Former OpenAI researcher Miles Brundage pointed out that while open releases offer an immediate benefit, they could lead to stricter rules in the future. DeepSeek and governments might focus more on improving AI safeguards, making it harder to release new models. Governments could also push for AI tracking on devices, where smaller AIs monitor the use of larger ones and send reports to central systems. This could also affect how non-Chinese AI models are used in China.


