Editor's note: All opinions, columns and letters reflect the views of the individual writer and not necessarily those of the IDS or its staffers.
Just a few days ago, Elon Musk, the majority owner of X and more recently a senior adviser to President Donald Trump, reposted a video of OpenAI co-founder Sam Altman on X and captioned it, “Scam Altman.”
Artificial intelligence is rapidly reshaping industries, but its unchecked growth poses serious risks to business practices and public trust. Companies leveraging AI may prioritize profit over ethics, leading to biased decision-making among other long-term harms. These dangers only amplify when innovation is driven by market competition, making ethical governance imperative to ensure AI is used for the public good.
Musk’s comment is just the latest development in his feud with Altman over OpenAI’s business model: should OpenAI remain a nonprofit or become a for-profit institution? Musk wants it to stay a nonprofit. Altman has different priorities.
According to a Wall Street Journal article published Feb. 14, Musk and Altman co-founded OpenAI in 2015 but parted ways in 2018 amid a power struggle over the company’s move toward a for-profit business model.
In February, Musk made a $97.4 billion bid to gain control of OpenAI’s nonprofit assets because he thinks the company is taking a “dangerous direction” and that it’s important to “return to the open-source safety-focused force for good it once was.”
Normally, I despise Musk and consider him callous and power-hungry. His messages on X read as bitterness rather than a good-faith debate or offer. I don’t condone how he conducts business no matter how successful his investments are, but I can sort of see his qualms with OpenAI going for-profit.
If OpenAI follows a for-profit strategy, it will likely focus on commercialization, risking restrictions on what was once broadly accessible AI. Similarly, OpenAI might deprioritize research that benefits the public in favor of profit maximization. And because the company would be competing in an increasingly saturated market, it could cut corners on user data privacy, for starters, and who’s to say how far that escalates? In addition, OpenAI might collaborate with other tech companies to embed its technology in existing platforms (e.g., cloud services like Microsoft Azure) for another revenue stream. In theory, that’s not a red flag, but it could become one if investors gain too much influence over product development.
Depending on how far this feud goes and how long it lasts, other assets are also at risk: stakeholders in Tesla and OpenAI, for example, might lose trust and divest. The public dispute might also invite greater scrutiny of AI development practices and tech ethics, which is important but likely to prove ineffective or to create more bottlenecks.
Scott Shackelford is a business law and ethics professor at the Kelley School of Business. He’s also the executive director of the Ostrom Workshop and the Center for Applied Cybersecurity Research. He has similar concerns.
He said it boils down to whether better, quicker innovation happens in open or closed systems, pointing to Apple and Android as an example: Apple runs a vertically integrated, closed ecosystem where development stays internal to the company, whereas Android is famously open and free to use.
“In cybersecurity, open-source software is better in some ways from a security perspective, because there’s a lot of eyes on it; you know when there’s an issue, there’s a whole community that can help build resilience,” Shackelford said.
He also said going open source is better for innovation across the whole ecosystem because practitioners all over the world can replicate results, apply the scientific method and improve the process.
On the other hand, he said it means the bad guys also have access and can find vulnerabilities when they exist.
Altman, he said, is pushing the for-profit model, which takes a more laissez-faire approach to governance because profit maximization becomes the primary end goal, even if Altman and OpenAI are also attentive to sustainability and social concerns.
This makes it difficult to say which approach is better. In this case, Musk helped form OpenAI with openness as the goal; it was supposed to be a nonprofit that kept its models open to the community, Shackelford said.
“I’m not a big defender of Elon by any stretch of the imagination, but this is one area in which he at least has a leg to stand on because he did contribute some money to this nonprofit with his goal in mind,” he said.
In a case like this, he said, you need core tenets of communication to build trust, but that takes a lot of time.
“What we’re seeing internationally when it comes to AI governance right now is a total breakdown of that coordination outside a few core regions,” he said.
For Shackelford, how businesses and governments progress in a fractured global landscape will foreshadow how this feud resolves itself.
The feud between Musk and Altman is more than a media spectacle; it’s a warning sign for the tech industry. OpenAI’s shift to a for-profit model could set a precedent in which innovation prioritizes profit over public accountability. Without a clear ethical framework, we risk developing technologies that serve a few at the expense of the public’s trust and safety.
It is increasingly obvious the tech world is ill-equipped to govern itself, especially given AI’s permeation of industries like healthcare and manufacturing. Left unchecked, quarrels like this could deepen fractures in governance and concentrate power in fewer hands. Stakeholders and regulators need to demand transparency and accountability before it’s too late, because AI risks serving those who control it more than humanity.
Meghana Rachamadugu (she/her) is a senior studying marketing and business analytics and pursuing a minor in French.