Building a chatbot like ChatGPT requires billions upon billions of dollars. That’s the driving force behind OpenAI’s plans to change how it’s managed.
Early last year, OpenAI raised $10 billion. Just 18 months later, the company had burned through most of that money. So it raised $6.6 billion more and arranged to borrow an additional $4 billion.
But in roughly another 18 months, OpenAI will need another cash infusion, because the San Francisco start-up is spending more than $5.4 billion a year. And by 2029, OpenAI expects to spend $37.5 billion a year.
OpenAI’s accelerating expenses are the main reason the corporate structure of the company, which began as a nonprofit research lab, could soon change. OpenAI must raise billions of additional dollars in the years to come, and its executives believe it will be more attractive to investors as a for-profit company.
In many ways, artificial intelligence has inverted how computer technology is created. For decades, Silicon Valley engineers designed new technologies one small step at a time. As they built social media apps like Facebook or shopping sites like Amazon, they wrote line after line of computer code. With each new line, they carefully defined what the app would do.
But when companies build A.I. systems, they go big first: They feed these systems enormous amounts of data. The more data companies feed into these systems, the more powerful they become. Just as a student learns more by reading more books, an A.I. system can improve its skills by ingesting larger pools of data. Chatbots like ChatGPT learn their skills by ingesting practically all the English-language text on the internet.
That requires larger and larger amounts of computing power from giant data centers. Inside those data centers are computers packed with thousands of specialized chips called graphics processing units, or GPUs, which can cost more than $30,000 apiece.