
Inside the story that enraged OpenAI.

The idea for a story about a then little-known company, OpenAI, was pitched to me by Karen Hao, a senior reporter at MIT Technology Review, in 2019. It was her biggest assignment to date. Hao’s reporting over the following months took a number of twists and turns, ultimately revealing how OpenAI’s ambition had taken it far beyond its original mission. The finished story was a prescient look at a company at a tipping point, or already past it. And OpenAI was never happy with the result. Hao’s new book, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, is an in-depth account of the company that spearheaded the AI arms race and of what that race means for everyone. This excerpt is the origin story of that reporting.
Niall Firth, executive editor, MIT Technology Review

I arrived at OpenAI’s offices on August 7, 2019. Greg Brockman, then 31, OpenAI’s chief technology officer and soon-to-be company president, came down the staircase to greet me. He gave me a tentative smile as he shook my hand. “We’ve never given someone so much access before,” he said.

Few people outside the insular world of AI research were aware of OpenAI at the time. But as a reporter at MIT Technology Review covering the ever-expanding frontiers of artificial intelligence, I had been following its moves closely.

Until that year, OpenAI had been something of an oddball in AI research. At a time when most non-OpenAI experts doubted that AGI could be achieved for decades, if ever, the idea seemed absurd. To much of the field, OpenAI had an outrageous amount of funding despite little direction, and it spent too much of that money on publicizing what other researchers often dismissed as unoriginal research. For some, it was also an object of envy. As a nonprofit, it had said it had no intention of chasing commercialization. It was a rare intellectual playground without strings attached, a haven for fringe ideas.

But in the six months leading up to my visit, a rapid slew of changes at OpenAI signaled a major shift in its trajectory. First was its confusing decision to withhold GPT-2 and then boast about it. Next came the announcement that Sam Altman, who had abruptly left his influential post at YC, would step in as OpenAI’s CEO alongside the creation of its new “capped-profit” structure. I had already made my arrangements to visit the office when it subsequently revealed its deal with Microsoft, which gave the tech giant priority for commercializing OpenAI’s technologies and locked it into exclusively using Azure, Microsoft’s cloud-computing platform.

Each new announcement garnered fresh controversy, intense speculation, and growing attention, which was beginning to reach beyond the confines of the tech industry. As my colleagues and I covered the company’s progression, it was hard to fully grasp what was happening. What was clear was that OpenAI was beginning to exert meaningful sway over AI research and over the way policymakers were learning to understand the technology. The lab’s decision to turn itself into a partially for-profit business would have ripple effects across both its research and policy spheres.

So late one night, at the urging of my editor, I dashed off an email to Jack Clark, OpenAI’s policy director, whom I had spoken with before: I would be in town for two weeks, and it felt like the right moment in OpenAI’s history. Could I interest them in a profile? Clark passed me on to the communications head, who came back with an answer. OpenAI was indeed ready to reintroduce itself to the public. I would have three days to interview its leadership and embed inside the company.


Brockman and I settled into a glass meeting room with the company’s chief scientist, Ilya Sutskever. They each played their part as they sat side by side at a long conference table. Brockman, the coder and doer, leaned forward, a little on edge, ready to make a good impression; Sutskever, the researcher and philosopher, settled back into his chair, relaxed and aloof.

I opened my laptop and scrolled through my questions. OpenAI’s mission is to ensure beneficial AGI, I began. Why spend billions of dollars on this problem and not something else?

Brockman nodded vigorously. He was used to defending OpenAI’s position. “We think AGI is important to build because we believe it can help solve complex problems that are beyond the reach of humans,” he said.

He offered two examples that had become dogma among AGI believers. Climate change: “It’s a super-complex problem. How are you even supposed to solve it?” And medicine: “Look at how important health care is in the US as a political issue these days. How do we actually get better treatment for people at lower cost?”

He then began to share the story of a friend who had a rare disorder and had recently had to bounce between different specialists to figure out his problem. AGI would bring together all of these specialties. People like his friend would no longer spend so much effort and endure so much frustration just to get an answer.

Why did we need AGI to do that instead of AI? I asked.

The distinction was crucial. The term AGI, once relegated to the fringes of the tech vocabulary, had only recently begun to gain more mainstream usage, in large part because of OpenAI.

AGI, as OpenAI defined it, was the theoretical pinnacle of AI research: a piece of software with as much sophistication, agility, and creativity as the human mind, able to match or exceed its performance on most (economically valuable) tasks. The operative word was theoretical. Since the beginning of earnest research into AI several decades earlier, debates had raged about whether silicon chips encoding everything in their binary ones and zeros could ever simulate brains and the other biological processes that give rise to what we consider intelligence. Without even getting into the normative discussion of whether people should develop it, there was still scant evidence that it was possible at all.

AI, on the other hand, was the term du jour for both the version of the technology currently available and the version that researchers could reasonably attain in the near future by refining existing capabilities. Those capabilities, rooted in powerful machine-learning techniques, had already shown promising applications for mitigating climate change and improving health care.

Sutskever chimed in. When it comes to solving complex global challenges, “fundamentally the bottleneck is that you have a large number of humans and they don’t communicate as fast, they don’t work as fast, they have a lot of incentive problems.” AGI would be different, he said. “Imagine it’s a large computer network of intelligent computers—they’re all doing their medical diagnostics; they all communicate results between them extremely fast.”

This sounded to me like another way of saying that AGI’s goal was to make humans obsolete. Is that what Sutskever meant? I asked Brockman a few hours later, once it was just the two of us.

“No,” Brockman said quickly. “This is one thing that’s really important. What is the purpose of technology? Why is it here? Why do we build it? We’ve been building technologies for a long time, right? We do it because they serve people. AGI is not going to be different, not the way that we want to build it, not the way that we want it to go.”

That said, he acknowledged a few minutes later, technology had always destroyed some jobs and created others. OpenAI’s challenge would be to build AGI that gave everyone “economic freedom” while allowing them to continue to “live meaningful lives” in that new reality. If it succeeded, it would decouple survival from the necessity of working.

“I actually think that’s a very beautiful thing,” he said.

Brockman zoomed out to the bigger picture when we met with Sutskever. “What we view our role as is not actually being a determiner of whether AGI gets built,” he said. This was a favorite argument in Silicon Valley: the inevitability card. If we don’t do it, someone else will. “The trajectory is already there,” he emphasized, “but the thing we can influence is the initial conditions under which it’s born.”

“What is OpenAI?” he continued. “What is our purpose? What are we actually trying to do? Our mission is to ensure that AGI benefits all of humanity. And the way we want to do that is: build AGI and distribute its economic benefits.”

His tone was matter-of-fact and final, as if he’d put my questions to rest. And yet somehow we had arrived right back where we’d started.


Our conversation went on in circles until we ran out of time after 45 minutes. I tried, with little success, to get more concrete details on what exactly they were trying to build—which, they explained, they couldn’t know by its nature—and why, then, if they couldn’t know, they were so confident it would be beneficial. At one point, I tried a different tack, asking them instead to give examples of the technology’s downsides. This was a pillar of OpenAI’s founding mythology: the lab had to build good AGI before someone else built a bad one.

Brockman attempted an answer: deepfakes. “It’s not clear the world is better off with its applications,” he said.

I offered an example of my own: Speaking of climate change, what about the environmental impact of AI itself? A recent study from the University of Massachusetts Amherst had flagged the enormous and growing carbon emissions of training larger and larger AI models.

That was “undeniable,” Sutskever said, but the payoff was worth it because AGI would, “among other things, counteract the environmental cost specifically.” He stopped short of offering examples.

“It is unquestionably very highly desirable that data centers be as green as possible,” he continued.

“No question,” Brockman added.

“Data centers are the biggest consumer of energy, of electricity,” Sutskever continued, seeming determined to demonstrate that he was aware of and concerned about the issue.

“It’s 2 percent globally,” I offered.

“Isn’t Bitcoin like 1 percent?” Brockman said.

“Wow!” Sutskever said, in a sudden burst of emotion that, forty minutes into the conversation, felt somewhat performative.

Sutskever would later sit down with New York Times reporter Cade Metz for his book Genius Makers, which recounts a narrative history of AI development, and say without a hint of irony, “I think that it’s fairly likely that it will not take too long of a time for the entire surface of the Earth to become covered with data centers and power stations.” There would be “a tsunami of computing … almost like a natural phenomenon.” AGI, and the data centers required to support it, would be “too useful to not exist,” he said.

I tried again to press for more details. “You’re saying OpenAI is making a big bet that you will successfully reach beneficial AGI to counteract global warming before the act of doing so could exacerbate it.”

“I wouldn’t go too far down that rabbit hole,” Brockman hastily cut in. “The way we think about it is the following: We’re on a ramp of AI progress. This is bigger than OpenAI, right? It’s the field. And I think society is actually getting benefit from it.”

“The day we announced the deal,” he said, referring to Microsoft’s new $1 billion investment, “Microsoft’s market cap went up by $10 billion. People believe there is a positive ROI even just on short-term technology.”

OpenAI’s strategy, he explained, was quite simple: to keep up with that progress. “That’s the standard we should really hold ourselves to. We should continue to make that progress. That’s how we know we’re on track.”

Later that day, Brockman reiterated that the central challenge of working at OpenAI was that no one really knew what AGI would look like. But as engineers and researchers, he said, the task was to keep pushing ahead anyway, discovering the technology’s shape through its incremental evolution.

He spoke like Michelangelo, as though AGI already existed within the marble he was carving. All he had to do was chip away until it revealed itself.


There had been a change of plans. I had been scheduled to eat lunch with employees in the cafeteria, but something now required me to be out of the office instead. Brockman would be my chaperone. We headed two dozen steps across the street to an open-air café that had become a favorite haunt for employees.

This would become a constant theme throughout my visit: floors I couldn’t see, meetings I couldn’t attend, researchers stealing furtive glances at the communications head every few sentences to make sure they hadn’t broken any disclosure rules. I would later learn that after my visit, Jack Clark would issue an unusually stern warning to employees on Slack not to speak with me beyond sanctioned conversations. The security guard would receive a photo of me with instructions to be on the lookout if I appeared on the premises unapproved. It was all strange behavior for a company that professed a commitment to transparency. What, I began to wonder, were they hiding, if everything was supposed to be beneficial research eventually made available to the public?

Over lunch and through the course of the day, I dug deeper into Brockman’s motivations as a cofounder. He was a teenager when he first grew obsessed with the idea that it could be possible to re-create human intelligence. A famous paper by the British mathematician Alan Turing had sparked his fascination. Its first section, “The Imitation Game,” which would lend its name to the 2014 Hollywood dramatization of Turing’s life, opens with the provocation “Can machines think?” The paper goes on to define what would become known as the Turing test: a measure of machine intelligence based on whether a machine can converse with a human without giving away that it is a machine. It was a well-worn origin story among AI researchers. Enchanted, Brockman coded up a Turing test game and put it online, garnering some 1,500 hits. It made him feel amazing. “I just realized that was the kind of thing I wanted to pursue,” he said.

In 2015, as AI saw great leaps of advancement, Brockman says, he realized it was time to return to his original ambition, and he joined OpenAI as a cofounder. He wrote in his notes that he would do anything to bring AGI to life, even if it meant being a janitor. When he got married four years later, he held a civil ceremony at OpenAI’s office in front of a custom flower wall emblazoned with the shape of the lab’s hexagonal logo. Sutskever officiated. The robotic hand the lab used for research stood in the aisle bearing the rings, like a sentinel from a post-apocalyptic future.

“Fundamentally, I want to work on AGI for the rest of my life,” Brockman told me.

What drove that motivation? I asked him.

What are the chances that a transformative technology could arrive in your lifetime? he countered.

He was confident that he, and the team he had assembled, was uniquely positioned to usher in that transformation. “I’m really drawn to problems that won’t work out the same way if I don’t take part,” he said.

Brockman didn’t actually want to be just a janitor, of course. He wanted to lead AGI. And he thrummed with the anxious energy of someone who wanted to be recognized by history. He wanted people to one day tell his story with the same mixture of awe and admiration that he used when recounting those of the great innovators who had come before him.

A year before our conversation, he had told a group of young tech entrepreneurs at an exclusive retreat in Lake Tahoe, with a twinge of self-pity, that chief technology officers never become famous. Name a famous CTO, he challenged the crowd. They struggled to do so. He had proved his point.

In 2022, he became OpenAI’s president.


During our conversations, Brockman insisted that none of the structural changes signaled a shift in OpenAI’s core mission. In fact, the capped profit and the new crop of funders enhanced it. “We managed to get these mission-aligned investors who are willing to prioritize mission over returns. That’s a bizarre thing,” he said.

OpenAI now had the long-term resources it needed to scale its models and stay ahead of the competition. This was imperative, Brockman stressed. Failing to do so was the real threat to OpenAI’s mission. If the lab fell behind, it had no hope of bending the arc of history toward its vision of beneficial AGI. Only later would I realize the full implications of this assertion. It was this fundamental assumption—the need to be first or perish—that set in motion all of OpenAI’s actions and their far-reaching consequences. It put a ticking clock on each of OpenAI’s research advances, based not on how long careful work would take but on the need to get there before anyone else. It justified OpenAI’s consumption of an unfathomable amount of resources: both compute, regardless of its impact on the environment, and data, the amassing of which couldn’t be slowed by getting consent or abiding by regulations.

Brockman pointed once again to the $10 billion jump in Microsoft’s market cap. “What that really reflects is that AI is actually bringing real value to the world today,” he said. That value was currently being concentrated in an already wealthy corporation, he acknowledged, which was why OpenAI had the second half of its mission: to redistribute the benefits of AGI to everyone.

Was there a historical example of a technology’s benefits being successfully redistributed? I asked.

“Well, I actually think it’s actually interesting to look even at the internet as an example,” he said, fumbling a bit before settling on his answer. “There are also issues, right?” he caveated. “When you have something super transformative, it’s not going to be easy to figure out how to maximize positive and minimize negative.”

“Fire is another example,” he added. “It’s also got some real drawbacks to it. So we have to figure out how to keep it under control and establish common standards.”

“Cars are a good example,” he followed. “A lot of people benefit from having cars. They have some drawbacks to them as well. They have some externalities that are not necessarily good for the world,” he finished hesitantly.

The things we want from AGI, he supposed, were like the positives of cars and of fire. “The implementation is very different, though, because it’s a very different type of technology.”

His eyes lit up with a new idea: “Just look at utilities. Power companies, electric companies are very centralized entities that provide low-cost, high-quality things that meaningfully improve people’s lives.”

It was a fine example. But Brockman seemed once again unclear about how OpenAI would turn itself into a utility. Perhaps, he mused aloud, it could distribute a universal basic income; perhaps it would take some other means.

He returned to the one thing he knew for certain: OpenAI was committed to redistributing AGI’s benefits and giving everyone economic freedom. “We actually mean that,” he said.

“The way that we think about it is: Technology so far has been something that does raise all the boats, but it has this real concentrating effect,” he said. “AGI could be more extreme. What if all value gets locked up in one place? That is the trajectory we’re on as a society. And we’ve never seen anything quite that extreme. I don’t think that’s a good world. I don’t want that world. That’s not a world that I want to help build.”


In February 2020, I published my profile in MIT Technology Review, drawing on my observations from my time in the office, nearly three dozen interviews, and a handful of internal documents. “There is a misalignment between what the company publicly espouses and how it operates behind closed doors,” I wrote. “Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration.”

Hours later, Elon Musk responded to the story with three tweets in rapid succession:

“OpenAI should be more open imo”

“I have no control & only very limited insight into OpenAI. Confidence in Dario for safety is not high,” he wrote, referring to Dario Amodei, the company’s director of research.

“All orgs developing advanced AI should be regulated, including Tesla”

Soon after, Altman emailed OpenAI employees.

“I wanted to share some thoughts about the Tech Review article,” he wrote. “While definitely not catastrophic, it was clearly bad.”

The article, he conceded, had identified a real gap between the perception of OpenAI and its reality; that was “a fair criticism.” But it could be smoothed over not with changes to internal practices, only with some tuning of OpenAI’s public messaging. “It’s good, not bad, that we have learned to be adaptable,” he said, “including changing the workplace and ensuring confidentiality, in order to fulfill our mission as we learn more.” OpenAI should ignore my article for now and, in a few weeks’ time, start emphasizing its continued commitment to its original principles under the new transformation. “This may also be a good opportunity to talk about the API as a strategy for openness and benefit sharing,” he added, referring to an application programming interface for delivering OpenAI’s models.

“The most important problem, in my opinion, is the leak of our internal documents,” he continued. The company had already opened an investigation and would keep employees updated. He would also suggest that Amodei and Musk meet to sort out Musk’s criticism, which was “mild in comparison to what he’s said” but still “a bad thing to do.” For the avoidance of any doubt, he wrote, Amodei’s work and AI safety were critical to the mission. “I think we should at some point in the future find a way to publicly defend our team (but not give the press the public fight they’d love right now).”

OpenAI did not speak with me again for three years.

From the book Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, by Karen Hao, to be published on May 20, 2025, by Penguin Press, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright © 2025 by Karen Hao.
