Disclaimer: The opinions expressed in this article are solely those of the author and do not necessarily reflect the views of the editorial team at crypto.news.
In January 2025, DeepSeek’s R1 became the most popular free app on the US Apple App Store, surpassing ChatGPT. Unlike proprietary models such as ChatGPT, DeepSeek is open-source, allowing anyone to access, study, share, and build on the code for their own models.
This shift has renewed the AI industry’s push toward transparency and openness. In February 2025, Anthropic introduced Claude 3.7 Sonnet, a hybrid reasoning model partially opened for research previews, further fueling the debate around accessible AI.
However, while these advancements drive innovation, they also highlight a misconception: that open-source AI is inherently more secure than closed models.
The Potential and Challenges
Open-source models like DeepSeek’s R1, alongside accessible tools like Replit’s latest coding agents, demonstrate the power of openly available technology. DeepSeek claims to have developed its system for just $5.6 million, a fraction of the cost of Meta’s Llama models. Meanwhile, Replit’s Agent, powered by Claude 3.5 Sonnet, lets even non-coders create software from natural-language prompts.
The implications are significant: virtually anyone, including small companies, startups, and independent developers, can build specialized AI applications on top of these models at lower cost, with greater speed and less friction. This could seed a new AI economy in which model accessibility is the key differentiator.
However, what makes open-source excel in accessibility also draws increased scrutiny. Free access, as evidenced by DeepSeek’s $5.6 million model, democratizes innovation but also widens the attack surface. Malicious actors can repurpose these models to create malware or weaponize newly discovered weaknesses faster than fixes can be implemented.
Open-source AI does not inherently lack security measures. It builds on a tradition of transparency that has strengthened technology for years. In the past, engineers relied on “security through obscurity,” concealing system details behind closed doors. This approach proved ineffective, as vulnerabilities often surfaced anyway, sometimes discovered first by malicious parties. Open source flipped that model, exposing code such as DeepSeek’s R1 to public scrutiny and fostering resilience through collaboration. Yet neither open nor closed AI models guarantee foolproof verification.
The ethical implications are equally crucial. Open-source AI, like its closed counterparts, can perpetuate biases or generate harmful outcomes based on training data. This is not a flaw unique to open-source; it is a matter of accountability. Transparency alone does not eliminate these risks, nor does it entirely prevent misuse. The distinction lies in how open-source encourages collective oversight, a strength that proprietary models often lack, though it still requires mechanisms to ensure integrity.
The Importance of Verifiable AI
For open-source AI to gain trust, it requires verification. Without it, both open and closed models can be altered or misused, amplifying misinformation or distorting automated decisions that increasingly shape our world. Accessibility alone is insufficient; models must also be auditable, tamper-resistant, and accountable.
Running on distributed networks, blockchains can validate that AI models remain intact, that their training data remains transparent, and that their outputs can be checked against established benchmarks. Unlike centralized verification, which relies on trust in a single entity, blockchain’s decentralized, cryptographic approach prevents tampering by bad actors. It also shifts control away from third parties, distributing oversight across a network and incentivizing broader participation. This contrasts with today’s status quo, in which people’s data feeds trillion-token training sets without consent or compensation, and they then pay to use the results.
A verification framework powered by blockchain adds layers of security and transparency to open-source AI. Storing models on-chain, or anchoring their cryptographic fingerprints there, ensures that modifications are openly tracked, allowing developers and users to confirm they are running the intended version.
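To make the fingerprint idea concrete, here is a minimal Python sketch, under the assumption that a model’s SHA-256 hash has already been published to some on-chain registry; the registry itself is out of scope and is represented here by a plain string.

```python
import hashlib
from pathlib import Path

def model_fingerprint(weights_path: str) -> str:
    """Compute a SHA-256 fingerprint of a model weights file, streamed in chunks."""
    digest = hashlib.sha256()
    with Path(weights_path).open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(weights_path: str, onchain_fingerprint: str) -> bool:
    """Check a local copy of a model against the fingerprint recorded on-chain.

    `onchain_fingerprint` stands in for a value fetched from a hypothetical
    blockchain registry; any mismatch means the weights were altered.
    """
    return model_fingerprint(weights_path) == onchain_fingerprint
```

Any single-byte change to the weights file yields a different fingerprint, so a user downloading an open model could detect tampering before ever loading it.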
Recording the origins of training data on a blockchain demonstrates that models draw from unbiased, high-quality sources, reducing the risks of hidden biases or manipulated inputs. Additionally, cryptographic techniques can validate outputs without compromising the personal data shared by users, striking a balance between privacy and trust as models evolve.
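Training-data provenance can be committed to in a similarly compact way. The sketch below is a simplification rather than any particular chain’s format: it folds hashed training records into a Merkle root, the kind of single value that could be published on-chain and later recomputed by anyone holding the records to confirm nothing was swapped or injected.

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(records: list[bytes]) -> str:
    """Fold hashed dataset records into a single Merkle root (hex string)."""
    if not records:
        return _h(b"").hex()
    level = [_h(r) for r in records]
    while len(level) > 1:
        if len(level) % 2:               # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0].hex()

# Example: commit to a toy training set, then detect tampering.
dataset = [b"example record 1", b"example record 2", b"example record 3"]
committed = merkle_root(dataset)          # value that would be published on-chain
dataset[1] = b"poisoned record"           # any change to the data...
assert merkle_root(dataset) != committed  # ...breaks the recomputed root
```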
Blockchain’s transparent, tamper-resistant nature provides the accountability that open-source AI desperately needs. While current AI systems rely heavily on user data with minimal protection, blockchain can reward contributors and safeguard their inputs. By incorporating cryptographic proofs and decentralized governance, we can establish an AI ecosystem that is open, secure, and less dependent on centralized entities.
The Future of AI Relies on Trust… On-Chain
Open-source AI plays a vital role in the AI landscape, and the industry must strive for even greater transparency—but being open-source is not the ultimate goal.
The future of AI and its impact will be shaped by trust, not just accessibility. Trust cannot be open-sourced; it must be built, verified, and reinforced at every level of the AI infrastructure. The industry must focus on the verification layer and the integration of safe AI. Currently, bringing AI on-chain and leveraging blockchain technology is the most secure path towards building a more trustworthy future.
David Pinger
David Pinger is the co-founder and CEO of Warden Protocol, a company focused on promoting secure AI in web3. Prior to co-founding Warden, he led research and development at Qredo Labs, spearheading web3 innovations like stateless chains, WebAssembly, and zero-knowledge proofs. Before Qredo, he held various roles in product, data analytics, and operations at Uber and Binance. David began his career as a financial analyst in venture capital and private equity, funding high-growth internet startups. He holds an MBA from Pantheon-Sorbonne University.