DeepSeek-R1 is an open-source AI large language model developed by the Chinese research and development company DeepSeek, backed by the Chinese hedge fund High-Flyer. Initially a side project by a small team of developers, DeepSeek-R1 sent shockwaves through the AI community in recent weeks when it was revealed that it was trained on NVIDIA's less powerful, export-compliant H800 chips and reportedly developed for under $6M, while OpenAI burnt through billions developing ChatGPT. The kicker? R1 performs just as well. This week's 52INSIGHTS gives a download on what played out and where we're headed next.
Most Downloaded AI App - Ever
Launched to the public on January 20th 2025, DeepSeek blew past ChatGPT within days to become the most downloaded free app on Apple's and Google's app stores in the first weeks of the new year, with 18-24-year-olds making up 22.3% of its users.
"To many young Americans, Chinese technology is now cool." Economist Tyler Cowen
'AI for Everyone' - The Open-source Power Move
In comparison to ChatGPT, DeepSeek's R1 is open-source: its model weights are freely available for anyone to download, modify, and improve, making it a powerful tool for developers and creators. ChatGPT, in contrast, is closed-source: the model is locked away on OpenAI's servers, so no one outside the company can take a peek at how it was trained and built. Any improvements are shared without insight into how they're made.
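For the technically curious, here's what 'open weights' means in practice - a minimal sketch, assuming one of the smaller distilled R1 checkpoints on Hugging Face and the transformers library installed; the checkpoint name below is illustrative, not a recommendation.

```python
# Minimal sketch: pulling an openly released checkpoint and running it locally.
# Assumes a smaller distilled R1 variant on Hugging Face (name is illustrative)
# and the transformers + torch libraries installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")

prompt = "In one sentence, why do open-source AI models matter?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights themselves are downloadable, nothing stops a developer from fine-tuning, auditing, or self-hosting the model - exactly what a closed model like ChatGPT doesn't allow.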
DeepSeek-R1 has been described as a 'Sputnik moment' for AI, drawing parallels to the 1957 Soviet satellite launch that spurred the original space race. This comparison highlights the model's unexpected advancement and its potential to challenge US dominance in AI.
In late January 2025, the US launched the Stargate Project - a massive $500 billion initiative to boost AI infrastructure, with big players like OpenAI, SoftBank, and Oracle on board. Texas is set to be the main hub for new data centres.
To curb China's AI progress, the US has also tightened export controls, limiting China's access to advanced computing tech. Despite these moves, China's AI scene is still thriving, with models like DeepSeek showing they're not slowing down.
The emergence of DeepSeek-R1 has intensified global discussions on the geopolitical implications of artificial intelligence, highlighting the complex interplay between technological innovation and international politics.
Tech Stocks See Red
On January 27th, the US tech stock market saw red all over. DeepSeek gave AI developers a reason to celebrate the open-source approach, but it also hit the pockets of tech and AI investors. DeepSeek's ability to develop a high-performing AI model at a fraction of the cost raised concerns about the sustainability of current AI investments. This led to a high-volume sell-off in tech stocks, with NVIDIA shedding nearly $600B in market cap - a historic single-day loss. Microsoft, Meta, and Alphabet also faced declines. Investors are now re-evaluating whether backing established AI developers is worth it compared with their cheaper, more efficient counterparts.
The $6M Question Mark
DeepSeek claimed to have developed the R1 model with just $6M, showcasing that cutting-edge AI doesn't need billion-dollar budgets. But there are question marks around the validity of that $6M figure: tech investor Armina Rosenberg of Minotaur Capital Management claims that DeepSeek may in fact have spent nearly $2B developing the R1 model.
R1 isn't gatekeeping, though. It's available on Microsoft Azure, GitHub, and HuggingChat, letting businesses and developers integrate it into their AI workflows with ease - see the sketch below. This availability has sped up AI experimentation and deployment, making top-tier tech more accessible.
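As a rough illustration of how simple that integration can be, here's a sketch of calling a hosted R1 endpoint through an OpenAI-compatible API; the base URL, model name, and key are placeholders, not the details of any specific provider.

```python
# Rough sketch: calling a hosted R1 deployment through an OpenAI-compatible
# chat completions API. The base URL, API key, and model name are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://your-ai-provider.example.com/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",                               # placeholder credential
)

response = client.chat.completions.create(
    model="deepseek-r1",  # the exact model identifier varies by host
    messages=[{"role": "user", "content": "Summarise this week's AI news in three bullet points."}],
)
print(response.choices[0].message.content)
```

Because many hosts mirror the familiar OpenAI API shape, swapping R1 into an existing workflow is often little more than a change of endpoint and model name.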
While some investors are pulling back from tech stocks, Japan's SoftBank is making power moves, announcing plans to spend $3B annually on OpenAI's technology across its group companies, alongside its role in Stargate. That $500B project, unveiled at the White House, is aimed at keeping the US in the top spot in the AI game, with OpenAI, SoftBank, Oracle, and MGX as its lead backers.
Cyberattack Chaos
In late January 2025, DeepSeek faced a significant cyberattack that compelled the company to halt new user registrations. The attack disrupted server operations, highlighting potential vulnerabilities in DeepSeek's infrastructure. The company described the incident as a 'large-scale malicious attack' and took measures to mitigate the impact.
Further investigations revealed that a publicly accessible database belonging to DeepSeek was exposed, containing over a million lines of log streams with highly sensitive information, including chat histories, secret keys, and backend details. This exposure raised serious concerns about the company's data security practices.
Censorship and Privacy Drama
DeepSeek-R1 has been reported to avoid addressing topics considered sensitive by the Chinese government. For instance, when users enquire about events like the Tiananmen Square massacre or the status of Taiwan, the model often responds with statements such as, "Sorry, that's beyond my current scope. Let's talk about something else." This behaviour indicates a level of censorship embedded within the AI's responses.
Additionally, security researchers discovered that DeepSeek's chatbot contains hidden code that potentially transmits user data to China Mobile, a state-owned telecommunications company. These incidents have raised significant concerns about the security, privacy, and ethical implications of using DeepSeek-R1, leading to increased scrutiny from governments and organisations worldwide.
The Theft Accusation
Questions have been raised about how DeepSeek achieved R1's performance relative to OpenAI's o1 model. Recent reports indicate that Microsoft and OpenAI are investigating DeepSeek for allegedly stealing OpenAI's tech. According to The Times, OpenAI suspects that DeepSeek accessed its data through ChatGPT's API, violating terms of service that prohibit using its outputs to create competing models. Microsoft's security researchers observed accounts linked to DeepSeek extracting large amounts of data through OpenAI's API, prompting further accusations. David Sacks, the White House AI and crypto czar, claimed that DeepSeek 'distilled' OpenAI's o1 model to train R1. Distillation means using an existing, pre-trained AI model's outputs to train a new, cheaper model - and doing that with OpenAI's outputs would violate its terms and conditions. There's an irony here: former OpenAI researcher Suchir Balaji claimed that OpenAI itself broke copyright law to train its models, scraping material from across the internet. According to Wikipedia, the Common Crawl web archive supplied around sixty percent of the data used to train GPT-3.
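For context on what distillation actually looks like, here's a deliberately simplified sketch - not DeepSeek's or OpenAI's real pipeline - in which a large 'teacher' model's answers become the training data for a smaller 'student' model. The checkpoint names are stand-ins, and the loop assumes both models share a tokenizer.

```python
# Illustrative sketch of knowledge distillation (not any company's real method):
# a large "teacher" model answers prompts, and a smaller "student" model is
# fine-tuned to imitate those answers. Checkpoint names below are stand-ins,
# and both models are assumed to share the same tokenizer for simplicity.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

TEACHER_ID = "example-lab/big-teacher-model"    # hypothetical teacher checkpoint
STUDENT_ID = "example-lab/small-student-model"  # hypothetical student checkpoint

tokenizer = AutoTokenizer.from_pretrained(TEACHER_ID)
teacher = AutoModelForCausalLM.from_pretrained(TEACHER_ID).eval()
student = AutoModelForCausalLM.from_pretrained(STUDENT_ID)
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-5)

prompts = ["Explain photosynthesis in one paragraph.", "What is 17 x 24?"]

for prompt in prompts:
    # 1. Ask the teacher for an answer (no gradients needed here).
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        teacher_output = teacher.generate(**inputs, max_new_tokens=64)

    # 2. Train the student to reproduce the teacher's prompt + answer sequence.
    loss = student(input_ids=teacher_output, labels=teacher_output).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

The heart of the accusation is step 1: harvesting a stronger model's outputs at scale to build a competitor is exactly what OpenAI's terms prohibit.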
Legal Challenges
Security concerns around DeepSeek's sudden rise have led US Republican Senator Josh Hawley to propose a bill criminalising the use of DeepSeek and other Chinese AI models. With DeepSeek storing its fast-growing US user data in China, many feel it poses the same national security risks that led Congress to crack down on TikTok. All China-based companies are subject to the Chinese Communist Party's cybersecurity laws, which mandate that they share data with the government upon request. If Senator Hawley's bill becomes law, it would impose fines of up to $1M on individuals and up to $100M on businesses, along with potential prison time.
A New Challenger Appears
A mere couple of days after DeepSeek-R1's release, Chinese tech giant Alibaba launched its own LLM, Qwen2.5-Max, claiming it already outperforms DeepSeek and adding in-chat text-to-video generation as a major feature. Unlike DeepSeek-R1, Qwen2.5-Max is not an open-source model. A few days later, the little-known Moonshot AI released Kimi K1.5 (an open-source model), which it claims outperforms Qwen2.5-Max. Seemingly every day a new and improved AI model is released, highlighting how quickly the AI space is progressing.
International Regulatory Responses
In response to rapid AI advancements, international bodies and nations are formulating regulatory frameworks to manage AI's impact. The European Union is pressing ahead with enforcement of its comprehensive AI Act, which bans certain uses of AI, such as the untargeted scraping of images to build facial recognition databases. Despite opposition from the US, the EU is committed to implementing these regulations to ensure ethical AI development.
The advent of DeepSeek-R1 underscores the critical intersection of AI and global politics, highlighting the need for balanced approaches that promote innovation while safeguarding security and ethical standards.
With low-cost, high-performance AI now a reality, DeepSeek and its rivals aren't just competing with each other - they're a wake-up call for what's to come in the next decade. China's largely unregulated development and use of open-source AI has shown that the country is leading the way in AI development, for better or worse.
Dead Internet, No Longer a Theory
The Dead Internet Theory suggests that most online content is now generated by bots and AI rather than real people. As automation takes over, it’s becoming increasingly difficult to distinguish between human interactions and AI-generated material.
“I just hope that as AI gets more powerful, we don’t lose touch with what makes us human. I don’t want to live in a world where everything is just automated and we’re not needed anymore.” - Jord, Age 17 (Centre for Youth and AI)
This is driven by the rise of advanced large language models (LLMs) like DeepSeek-R1 and OpenAI's o3, along with sophisticated image generators like DeepSeek's Janus-Pro. Meta (formerly Facebook) has already introduced several AI-generated Instagram accounts that appear to be run by real people - some even featuring selfies with celebrities - despite being entirely AI-driven. More recently, ByteDance, the parent company of TikTok, published research on OmniHuman-1, an advanced AI model capable of creating hyper-realistic digital humans, further blurring the line between reality and artificiality.
As AI-generated interactions and content come to dominate online spaces, and AI influencers become more common, genuine human engagement is increasingly sidelined. The question isn't whether the internet is 'dead' - it's how much of it remains real.
Security Concerns: DeepSeek's cybersecurity challenges serve as a cautionary tale. Marketers must prioritise data security in their AI implementations and be transparent about how consumer data is used to maintain trust.
The Human Touch: As AI-generated content floods the internet, human creativity and genuine storytelling will stand out more than ever. Brands that find the right balance between AI efficiency and human-driven creativity will have a competitive edge.
Transparency First: Meta and TikTok are investing heavily in AI influencers. Brands should tread carefully when exploring potential partnerships. Transparency will be key to maintaining audience trust.