Artificial Intelligence: Should AI Be Shared For Free?

The AI arena is grappling with a critical conflict, one that was long barely discussed outside the tech community but is now gaining mainstream attention thanks to a lawsuit. The issue has huge implications for the future of mankind. Yang Chengxi explains the debate over open source in AI development.

YANG CHENGXI Beijing "In February 2024, the American tech entrepreneur Elon Musk sued OpenAI, a non-profit that he co-founded. The bone of contention in the complaint: in 2019, OpenAI, headed by CEO Sam Altman, established a for-profit arm and partnered with Microsoft. Musk alleges that the internal design of OpenAI's powerful GPT-4 model has since been kept secret from everyone except those two stakeholders."

According to the complaint, OpenAI's alleged secrecy contravenes its Founding Agreement, which stipulated that its technology would be open-source, meaning the code should be freely released. Musk takes issue with the company's decision not only to keep GPT-4's inner workings under wraps, but also to make the model accessible only through a paid subscription.

ELON MUSK CEO, Tesla "I mean it would be like, let's say you founded an organization to save the Amazon rainforest, and instead they became a lumber company and chopped down the forest and sold it for money."

YANG CHENGXI Beijing "And to think that he was the one who named it OpenAI."

ELON MUSK CEO, Tesla "OpenAI refers to open-source. Man, fate loves irony, next level."

Why has the company pivoted away from open source? One reason might be safety. In May 2023, the CEOs of OpenAI, Google DeepMind, and Anthropic, three of the most prominent AI labs, signed a statement warning that AI could be as risky to humanity as pandemics and nuclear war.

YANG CHENGXI Beijing "If AI does one day become as powerful as these companies claim, then open-sourcing carries a fair amount of risk, because once the code of an AI tool is made public, it stays on the internet forever."

LIN JUNYANG Head of Open Source, Qwen Lab, Alibaba "It is a very conflicting issue. People may use an open-source large language model to do something bad. You cannot stop it."

YANG CHENGXI Beijing "So if OpenAI wants to withhold GPT-4 in the name of security, surely Elon Musk would agree, no? Doesn't Musk himself talk about the existential risk of AI all the time? Well, as it turns out, what Musk deems even more dangerous for humanity is for such a powerful AI to be tightly controlled by a single for-profit company."

ELON MUSK CEO, Tesla "Let's say they do create some digital super intelligence, almost god-like intelligence, well, who's in control?"

Elon Musk's fear of a monopolized super AI is evident. The complaint mentions his belief that OpenAI is developing a secretive algorithm called Q-Star, and that security concerns over this powerful project were somehow behind the surprise firing of CEO Sam Altman in 2023. That episode, one of the most high-profile and enigmatic corporate dramas in recent years, raised questions over OpenAI's management of the company and its technologies.

YANG CHENGXI Beijing "So, Mr. Musk believes OpenAI's current secretiveness is not the responsible way to address AI risks. Alright, enough with the dramatic editing here. Let's change the tone. There are other AI labs that open-source their models but do not share Mr. Musk's worries. A prime example is Meta."

Its CEO Mark Zuckerberg said he doesn't understand the AI doomsday scenarios, and that those who hyped them up are "pretty irresponsible." The company's chief AI scientist, Yann LeCun, a Turing Award winner, even said that fears over extreme AI risks are "preposterously stupid."

YANG CHENGXI Beijing "U.S. officials warned Zuckerberg last year about the ethical implications of publicly releasing Meta's AI models, and a month after that warning, Meta straight up released its next-generation Llama 2 model for free. Many saw it as an emphatic stance for open source."

In a subsequent blog post, Meta explained why it believes open source actually makes an AI system safer, not less safe: developers and researchers can stress-test the models, identifying and solving problems quickly as a community. The industry aphorism here, known as Linus's Law, is "given enough eyeballs, all bugs are shallow."

YANG CHENGXI Beijing "In short, these people believe AI fears are overblown and should not infringe upon the open-source spirit."

YANN LECUN Chief AI Scientist, Meta "What works against this is for people to think that, for reasons of security, we should keep AI systems under lock and key, because it's too dangerous to put it in the hands of everybody. That would lead to a very bad future."

YANG CHENGXI Beijing "The question of whether to open-source future AI development remains a highly divisive topic among policymakers and AI researchers around the world. But it's a topic that will only get harder to avoid as AI models become more powerful."
