A Crisis of Mission, and Failed Crisis Management, at OpenAI

November 27, 2023

I thought I needed Apple TV+ to bask in a good board v. CEO drama (thank you, The Morning Show), but I was wrong (thank you, OpenAI):

Sam’s unexpectedly out. Nadella’s reportedly furious. Sam’s hurt. Chaos, threats and four days of high-drama negotiations ensue. Sam’s back. Employees chill. Investors breathe. With the deal, half of humanity (women, that is) and AI ethicists are now unrepresented on the new board, a sting to the core mission. To satisfy the old board, an investigation is promised. Is the public display of dysfunction over? Or will the coming days and weeks bring a new twist? Pass the cranberry sauce.

Before we get into the thermonuclear meltdown in internal and external communications, crisis management and public relations at OpenAI, the mission at stake and the personalities involved, let’s spin the clock back a few weeks.

On Nov. 6, OpenAI held its first developer conference, DevDay, in San Francisco, and Sam Altman gave a keynote chock-full of impressive new GPT-related products and capabilities—all of them “turbo”-charging AI’s advancement and accessibility to the masses.

It seems that everyone there and everyone watching could feel the importance of each announcement, after announcement … after announcement. The technology’s potential is staggering, the array of product development mighty. To boot, Microsoft CEO Satya Nadella casually took the stage, chatted with Altman and made Microsoft’s commitment to OpenAI and to AI’s future everywhere exceptionally clear.

Nadella was easy, friendly and smart in his brief conversational remarks, as you’d expect from a senior technology statesman helming a massive ship. Altman was personable and practiced, but also noticeably staccato in his pauses. Maybe a little too self-conscious. Since fanboys are quick to compare him, demons and all, to Steve Jobs, I couldn’t help noticing the missing flow and fire that Jobs oozed on stage. Something else seemed missing too—not just for Altman, but in this whole debacle, missing for the millennial tech wizards on the old board who ousted him: Adam D’Angelo, Ilya Sutskever, Tasha McCauley and Helen Toner. Missing for Greg Brockman, Brad Lightcap and Mira Murati as well.

Where is their Joanna Hoffman, Jobs’ right-hand woman and Apple’s former top marketing and communications executive?

In all of this, where was their wise, experienced and media-savvy C-suite-level executive unafraid to vigorously explain to all these Harry Potters—in advance—the investor wrath, social media frenzy, top-tier media pile-on, employee fallout, government scrutiny and general tech-world upheaval that their failures in communication and transparency were about to create? To stay the madness?

Some crises can’t be predicted. This one could.

Lawyers will advise on legal risks and ramifications, not always deeply on optics. One wonders: Was Altman clear on the potential consequences of not being “consistently candid” with the board? Of trying to push out the do-gooder academic? Of aggressive product development betraying philosophical differences about “artificial general intelligence [that] benefits all humanity”? Was he truly keeping his ego in check, or buying into his own hero worship and a blurry vision of AI’s future?

Was the board clear on the nature of the shitshow it was about to unleash? That the vast majority of the company’s employees—almost 800—would sign this searing letter in allegiance to the CEO? That, one way or another, Nadella would keep Altman and OpenAI in Microsoft’s fold? Indeed, it seems unreal that Nadella and other key investors were kept in the dark, but Kara Swisher called it right out of the gate.

Mission, mistakes and the missing pieces

It’s probably the case that all the OpenAI players want humanitarian-minded, even altruistic, artificial general intelligence (AGI). But humanity, ever just and wicked at the same time, never creates something pure. Sutskever’s doctoral advisor was none other than Geoffrey Hinton, the “godfather of AI” who sounded the alarm this year on AI’s dangers and left Google so he could speak freely about them. Remember the moratorium championed by Musk, Woz and so many others, the one that didn’t happen? Instead, development sped up. Remember this dire warning to shut it all down now?

In the great tales that raised our millennials, Harry snapped the Elder Wand in two and Frodo let the ring go, but those in our world with awesome power and technology in their hands don’t give it up. Case in point: nuclear weapons. While we haven’t nuked ourselves yet, Joshua, the war-gaming AI of WarGames, also hasn’t existed in an unfettered and widespread way until now (or soon): a system that could actually decide for us, if not with atomic bombs, then with chemicals, energy infrastructure, propaganda and more.

In our OpenAI tale, no one guided the narrative in a constructive way. Altman and the old board clearly weren’t hashing out the real problems honestly together. That board accused him of a lack of transparency. Sutskever gave limited examples. The employees turned on the board. And Altman? He pouted on X: “go tell your friends how great you think they are.”

The verbiage—in the statements, including the most recent, the blog, the letters, the tweets, the posts and the other tweets, as well as the top-tier articles—points heavily to disagreement about the interpretation of the company’s mission: what constitutes the best interests of all humanity along the timeline to AGI. Should such a timeline and goal even exist? At DevDay, Altman repeated his commitment to “gradual iterative deployment” as a quality and safety measure. He then proceeded to showcase wildly powerful deployments that felt anything but gradual. Meanwhile, each director’s fiduciary duty has been, in writing, to “humanity, not OpenAI investors.”

The same tension, nuanced but present, runs through this MIT Club conversation between Brockman and D’Angelo from a full four years ago.

What a Joanna Hoffman, and the best and bravest communications leaders, bring to the table is the will to call out bullshit, bitterness, ego, stalemate and unwise strategies that can damage an organization, its bold endeavor, investors, employees, users, customers and the public. They offer an effective plan to resolve communications problems and differences at the highest level, and at every level. Anyone who works in tech long enough has seen the developer v. marketer fallacy play out: engineers are smart, so engineers with leadership qualities will handle serious scenarios—in business, product, comms—better than non-engineers. That non sequitur is quite real in startups, as is “either/or” thinking. But often the conclusion does not follow from the premise. And often the choice is not absolutely A or absolutely B. Was a version of that happening at OpenAI? Jobs himself faced developer criticism on many occasions. He had Hoffman, at his level, to temper him. To advise him. For years. It’s worth listening to how he grew into responding constructively and transparently.

Transparency must be a north star for even the most brilliant CEOs and board members. Highly intelligent people know they have blind spots and go beyond just admitting it. That doesn’t necessarily mean shifting strategy or reacting to everyone’s point of view, but it does mean de-escalating and constructively working through what can feel intractable—like criticism of your approach to AI safety by a board member in an academic paper. In crisis management, you always want to deal with issues head-on and thoroughly before they morph into bitterness and crisis.

Doing hard things does not equal solving hard problems

It may be that OpenAI’s non-profit-minded former board, genuinely and wisely, was trying to slow humanity’s demise and give us a chance to catch up to the ethical questions of what’s coming. It may be that Altman and Nadella are right in knowing that Pandora’s box is open and that more AI action, not less, will help. At least OpenAI’s $80+ billion valuation is safe.

What keeps coming back to me is the insularity of all of this. Are these people doing hard things? In a way, yes. Highly technical people are building hard-to-build things. But are they solving the world’s hard problems by building those things? You could make a powerful argument that no, they’re really not. This technology, like all big technology before it, will be used for massive good and massive ill. It will serve as a new avenue for human creation and for human corruption.

An entrepreneur who solves for human corruption would be an actual hero. Corruption dismantles whole nations, societies and peoples. Right now. All over. Let’s see a concrete, real-world generative AI use case deployed in an area that matters—like ending war and the ravages of hatred, poverty, disease and slavery. Yep, slavery is still around, more virulent than ever.

Bill and Melinda Gates have understood humanity’s truly hard problems. They have understood that we live in a world that needs toilets, water pumps and hygiene infrastructure deployed at scale. It’s not either/or—but that’s probably where heroes emerge. These devices aren’t hard to build, but widespread deployment of aid and physical infrastructure, in the face of corruption and egotism, is a much harder problem than any coding will ever be. Harder than any rocket to Mars will ever be.

In crisis management, if a disaster impacting the mission does unfold in all its gore, one of the best things we can do is learn from it in actionable ways, implement better preventative strategies, and use it as a pivot to concrete good. Not platitudes and vague ideas about the future. Where’s the AI-driven solution scaled to the world’s actual hardest problems? Are LLM builders just kicking the can down the road (look, GPTs!) toward an undefined vision that will never come? Where’s OpenAI’s Joanna Hoffman—a chief communicator who can also meaningfully challenge the CEO’s approach? Helen Toner, who apparently was trying to solve part of that equation, is gone. What’s the pivot, Sam?

About the author

Racquel Yerbury loves inquiry, technology and poetry—as well as feisty debates about the rise and fall of crypto. As senior content director at Bospar, she brings 20+ years of writing and editing experience to the team, with technical chops in cloud services, data management, DevOps, APIs, AI/ML, cybersecurity, and blockchain. Her work includes research reports, press releases, articles, strategic digital communications, and marketing campaigns. She is also a former educator, licensed private pilot and Fulbright scholar.
