For those not familiar, the OpenAI saga goes like this. In 2015, a number of Silicon Valley luminaries, worried that Artificial General Intelligence (AGI) could become a threat to humanity, decided to start a nonprofit that would focus on its safe development. By forming OpenAI as a nonprofit, the founders–who included Elon Musk and Reid Hoffman, co-founder of LinkedIn–hoped to avoid what happens to nearly all venture-backed companies: the insatiable profit motive soon overwhelms any stated concerns for social good.
After a few years, however, two key things happened. First, Musk, who wanted to take control of the organization and was rebuffed, walked away from a massive donation pledge. Second, OpenAI realized that it would require billions of dollars to develop AGI faster than competitors like Google and Meta–far more than it could raise through grants and donations. And so, trying to balance its mission “to ensure that artificial general intelligence benefits all of humanity” with the imperative to pay for hundreds of staff and massive computing power, OpenAI started a for-profit subsidiary that was controlled by the nonprofit but which raised billions from Microsoft, venture capital firms, and other investors.
The launch of DALL-E and, especially, ChatGPT propelled OpenAI from a relatively obscure nonprofit to a world-renowned company whose for-profit subsidiary is now valued at $80 billion. Unsurprisingly–inevitably, I would argue–tension began to develop between the nonprofit’s mission and the investors’ desire for rapid growth to maximize their return on investment. This tension came to a head when OpenAI’s board unceremoniously fired Sam Altman, its superstar CEO–with little explanation and even less transparency.
As a nonprofit, OpenAI was not accountable to investors the way a for-profit is, but rather to its mission–in theory. But in practice, investors like Microsoft–the second largest company in the world–soon asserted themselves: they threatened to pull their money and hire Altman and much of his staff. Within days, Altman had been re-hired and the directors who fired him were replaced with people more amenable to the goal of growth, even if at the expense of safety. (It’s been reported that just before Altman was fired, “several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity.”)
This story matters for several reasons.
First, it speaks to a fundamental issue with how we fund the development of technologies and social innovations that can improve the world. If you go the route of a nonprofit, you can protect the social mission, but at the expense of the ability to quickly raise the capital needed to grow. If you form a for-profit, you can raise capital from investors, but at the expense of the mission: the moment you have to decide between what can make money and what is best for people and planet, the investors–who usually get an ownership stake in the business and/or a seat on its board–can force your hand. OpenAI tried to have it both ways, only for the money to win out anyway. The result is that we often only solve those problems that happen to be profitable to solve–unless governments step in with funding and good public policy.
Second, it reinforces that work on artificial intelligence will not be constrained by concern for the public interest. And it shows that the issue isn’t so much that we might develop AIs whose goals do not align with the goals of humanity–the so-called “alignment problem”–but rather that the goals of those building ever more powerful AI may not align with the goals of humanity. Put another way, all this crucial work is being funded and owned by a small, ultra-wealthy cadre of people who are unaccountable to voters or the general public. What drives them to develop AI, even if they promise to do so for the benefit of all, may not be the same as what would drive other stakeholders to do so. The same undemocratic dynamic exists in other spheres of life, such as public policy–where billionaires fund think tanks, train and select judges, and support politicians who will protect their narrow interests.
Finally, it raises the question of how we fund, govern, regulate, and manage a global economy in a way that can sustain all life on Earth, protect human rights, and allow for broadly shared prosperity. In an era of rising greenhouse gas emissions, a resurgent far-right political movement (both Argentina and the Netherlands just elected extremists), and persistent war, this question must be tackled head on. It has always been too easy to manipulate public opinion: to blame minorities or immigrants for a nation’s problems, to gain power by promising a firm hand. The age of social media has only made this even easier, as nuance is lost and the platforms’ algorithms reward knee-jerk reactions over thoughtful debate.
I believe Musk, Hoffman, Altman and others created OpenAI with good intentions. That those intentions have been subverted by ego, greed, and the realities of capital-intensive ventures isn’t principally an indictment of them but rather of an entire system. Dr. Muhammad Yunus, winner of the 2006 Nobel Peace Prize and so-called “Father of Microfinance”, speaks of the idea of a social business, which he defines as an entity that meets three criteria:
- It exists to solve a social problem
- It is financially self-sufficient (i.e., doesn’t depend on philanthropy or operate at a loss)
- It does not pay dividends to its owners
The last point is the most radical. Yunus’ argument is that no one should get rich bettering the world; in fact, he believes that an investor in social good should only get back the principal of her investment–nothing more, nothing less. Another way to interpret his concept is to say that we should not act like heartless rational actors when it comes to the economy and soulful human beings in the rest of our lives; only by integrating our humanity into all aspects of life can we build an authentic and just world. This idea is so radical, in fact, that almost no one believes it possible to implement. Not Musk and company; not “social investors”; not even nonprofit leaders like myself, who see how hard it is to generate a return while solving a social problem, and who recognize that even finding people willing to accept a below-market return is exceedingly difficult and time-consuming.
I remember, as an idealistic teenager, being outraged by all the rational adults who would calmly assure me that I would grow out of that phase–that refusal to accept the inevitability of war, environmental damage, greed–and become a realist. My response then, as now, was that perhaps the issue isn’t with the ideals but rather with the people too cowardly to hold onto them in the face of life’s difficulties. The saga of OpenAI is about many things, but principal among them is the imperative to marry idealism to pragmatism, absent which we end up with toothless mission statements on the one hand and powerful, unchecked greed on the other. Doing good on paper is easy: I can point to countless business models and policies that could build a heaven on Earth. Doing good in the messy world of politics, money, power, ignorance, and hate is hard. Nor is there an end point, a moment at which we have “achieved” a final outcome, positive or negative (we aren’t any more doomed to climate catastrophe, for instance, than we have forever secured the rights of LGBT couples in America through the Obergefell Supreme Court decision).
As the legendary civil rights icon and politician John Lewis wrote just before his death, “Democracy is not a state. It is an act, and each generation must do its part to help build what we called the Beloved Community, a nation and world society at peace with itself.” One could say the same for love, for peace, for environmental justice: we will never have a single corporate structure, or piece of legislation, or group of founders or investors that can solve our hardest problems. As much as I wish the moral of the story of OpenAI fit on a bumper sticker, the only approach we have is to wrestle with the hard problems every day, falling victim neither to cynicism nor to Pollyannaism.
For more on OpenAI, check out:
- The Godfather of A.I. Fears What He’s Built
- OpenAI’s Boardroom Drama Could Mess Up Your Future
- Open AI is a Strange Nonprofit