What are the biggest constraints restricting the development of AI? Just a few years ago, the answer might have varied. But now that large models are prevalent, there is only one answer to this question—not enough computing power!
Or, in other words, Nvidia’s dedicated AI computing chips are not enough.
Whoever controls Nvidia’s AI chips controls the future of AI.
Now there is a company with tens of thousands of Nvidia's AI compute cards in hand, and its customers include AI giants such as OpenAI and Microsoft.
As an "AI computing power scalper," this company, CoreWeave, has reached a valuation of US$8 billion in four years. Beyond taking direct investment from Nvidia, CoreWeave also secured US$2.3 billion in debt financing from top institutions such as Blackstone and Coatue, using its Nvidia chips as collateral.
Nothing, it seems, can stop CoreWeave's breakneck expansion. How did it win Nvidia over and transform from a cryptocurrency "mining" company into an AI "computing infrastructure" giant?
01 From "Mining Cards" to "Compute Cards"
CoreWeave’s founding team consists of three people, namely Michael Intrator, Brian Venturo and Brannin McBee. The three initially worked in the financial field and ran hedge funds and family offices.
While they were still managing funds in New York, the cryptocurrency mining craze was in full swing. Initially just to earn some extra income, they bought their first GPU, then more and more, until their Wall Street desks were covered in GPUs.
"In 2016, we bought our first GPU, plugged it in, placed it on the pool table in our lower Manhattan office overlooking the East River, and mined the first block on the Ethereum network," CoreWeave CEO Michael Intrator recalled in a 2021 blog post.
Soon after, in 2017, they officially turned the side business into a company. Its name was initially cryptocurrency-related and was later changed to CoreWeave. When they bid farewell to Wall Street, they moved the GPU hardware into a garage, much as Silicon Valley founders like to start companies in garages. This garage, however, was not on the West Coast but in suburban New Jersey, belonging to one co-founder's grandfather.
CoreWeave three co-founders Michael Intrator (left), Brian Venturo (middle) and Brannin McBee (right) | CoreWeave
In the past decade, GPUs have been an important engine of the cryptocurrency and artificial intelligence technology boom. At the end of 2018, CoreWeave became one of the largest Ethereum miners in North America, with more than 50,000 GPUs in hand, accounting for more than 1% of the Ethereum network.
During this period, the founders also came to understand other companies' thirst for GPU resources. They recognized, too, that there was no durable competitive advantage in cryptocurrency mining: the market was highly competitive and heavily dependent on electricity prices.
When cryptocurrency prices plummeted in 2018 and 2019, they decided to diversify into areas that were more stable but still required heavy GPU computing. They focused on three sectors: artificial intelligence, media and entertainment, and life sciences. Starting in 2019, they concentrated on purchasing enterprise-grade GPUs, building specialized cloud infrastructure, and reorganizing the business around Nvidia's chips.
As the new business got on track, the Ethereum mining business was gradually marginalized. The decision to transform proved both correct and lucky: none of the founders expected the coming wave of AI, which let CoreWeave expand from a small office to data centers across the country to meet ever-growing AI market demand.
According to one of the founders, CoreWeave's revenue was about US$30 million in 2022 and is expected to exceed US$500 million in 2023, more than a tenfold increase, with nearly US$2 billion in signed contracts. This year it announced a US$1.6 billion investment in a Texas data center, with plans to expand to 14 data centers by the end of the year.
02 AI “Power Grid”
Just a few years after CoreWeave was founded, GPUs for AI have become some of the most valuable assets in the world. As Elon Musk and others have quipped, GPUs are now harder to get than drugs. With generative AI igniting the market, demand for GPUs has surged, and CoreWeave is well positioned to provide AI companies with the resources they need.
As a cloud service provider, CoreWeave rents out high-performance computing resources, mainly to customers who need large amounts of computing power. Its first model is infrastructure as a service: GPUs rented by the hour, with customers paying only for the time and compute they actually use. Major customers can also get customized facilities, under the banner of "35 times faster than traditional cloud providers, 80% lower cost, and 50% lower latency." Unlike general cloud providers that also sell storage, networking, and other services, the company focuses on high-performance computing.
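The pay-per-use model above is simple enough to sketch in a few lines. This is an illustrative calculation only: the instance names and hourly rates below are invented for the example, not CoreWeave's actual price list.

```python
# Sketch of usage-based GPU billing: cost = GPUs x hours x hourly rate.
# All rates are assumed values for illustration, not real prices.

HOURLY_RATES = {  # USD per GPU-hour (hypothetical)
    "A40": 1.28,
    "A100-80GB": 2.21,
    "H100": 4.25,
}

def rental_cost(gpu_model: str, gpu_count: int, hours: float) -> float:
    """Total cost when billing is purely usage-based."""
    return HOURLY_RATES[gpu_model] * gpu_count * hours

# Example: an 8-GPU training job running for 72 hours.
cost = rental_cost("H100", gpu_count=8, hours=72)
print(f"${cost:,.2f}")  # → $2,448.00
```

The point of the model is that customers carry no fixed capacity cost; the provider, by contrast, must keep scarce hardware utilized, which is why long-term committed contracts (like the nearly US$2 billion mentioned above) matter so much to CoreWeave.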
Last year, CoreWeave executives bought large quantities of Nvidia's latest chips just as Stable Diffusion and Midjourney were released. Later, when ChatGPT arrived, they realized even that investment was far from enough: customers needed not just thousands of GPUs, but millions.
They describe what CoreWeave wants to do as “building a power network for the AI market” and believe that “if these things are not built, then AI will not be able to scale.”
CoreWeave builds new data center in Texas|CoreWeave
Brannin McBee, CoreWeave's chief strategy officer, said on a podcast that at the end of last year, all hyperscale computing companies combined, including Amazon, Google, Microsoft, Oracle, and CoreWeave itself, provided a total of about 500,000 GPUs, and that by the end of this year the figure may approach 1 million.
On industry growth and margins, he believes AI market demand breaks down into two stages: training models and executing inference. Chip supply is currently short in the training stage, while the inference stage will be the main growth point of future demand; that is where the real demand lies.
For an AI company's model, once training ends, inference in the commercialization stage can require at least a million GPUs within the first two years of a product's launch, and global AI infrastructure is nowhere near able to meet that demand. It will be a long-term challenge; the GPU supply shortage is unlikely to begin easing for at least another two years.
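A back-of-envelope estimate shows why serving a model can dwarf training it. Every number below is an assumption chosen for illustration (query volume, response length, per-GPU throughput, utilization), not a measured figure from CoreWeave or anyone else.

```python
# Rough sizing of an inference fleet: convert a daily query load into a
# steady token stream, then divide by what one GPU can sustain.
# All inputs are illustrative assumptions.

SECONDS_PER_DAY = 86_400

def gpus_for_inference(daily_queries: float,
                       tokens_per_query: float,
                       tokens_per_sec_per_gpu: float,
                       utilization: float = 0.5) -> float:
    """GPUs needed to serve the load at the given per-GPU throughput."""
    tokens_per_sec = daily_queries * tokens_per_query / SECONDS_PER_DAY
    return tokens_per_sec / (tokens_per_sec_per_gpu * utilization)

# Assumed: 1 billion queries/day, 500 tokens per response,
# 100 tokens/s sustained per GPU, 50% average utilization.
needed = gpus_for_inference(1e9, 500, 100, 0.5)
print(f"{needed:,.0f} GPUs")
```

Under these assumptions a single popular product already needs on the order of a hundred thousand GPUs just for serving, before any retraining; scale the traffic or response length up a few times and the "million GPUs" figure cited above stops sounding hyperbolic.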
Today, most of the hot money invested in the field of AI has to be used in cloud computing. In June of this year, CNBC reported that Microsoft “has agreed to spend potentially billions of dollars on startup CoreWeave’s cloud computing infrastructure over the next many years.” Star AI startups like Inflection AI recently raised $1.3 billion in funding to build large-scale GPU clusters, and the company’s choice was CoreWeave.
03 Riding Nvidia's Coattails
In April this year, CoreWeave completed a $221 million Series B financing, with investors including chip manufacturer Nvidia, as well as former GitHub CEO Nat Friedman and former Apple executive Daniel Gross. A month later, the company announced it had received additional investment of US$200 million, bringing the total funding round to US$421 million.
In August, CoreWeave secured another US$2.3 billion in debt financing by pledging its highly sought-after Nvidia H100 GPUs as collateral. The funds will be used to acquire more chips and build more data centers.
According to Bloomberg, CoreWeave is currently preparing to sell a 10% stake at a valuation of up to US$8 billion.
Nvidia founder Jensen Huang said in the company’s earnings conference call this year: “You will see a large number of new GPU specialized cloud service providers.” “One of the famous ones is CoreWeave, and they are doing a very good job.”
CoreWeave's relationship with NVIDIA dates back to 2020, when the company announced it would join the cloud service provider program of the NVIDIA Partner Network, chiefly to bring GPU acceleration to the cloud. At the recent SIGGRAPH 2023 computer graphics conference, where Jensen Huang appeared, the CoreWeave booth was pointedly marked "Powered by NVIDIA" in small letters.
Jensen Huang appeared at the CoreWeave booth| CoreWeave
NVIDIA executives, including Jensen Huang himself, are not shy about endorsing CoreWeave.
NVIDIA's global director of business development for cloud and strategic partners called CoreWeave "the first elite compute cloud solution provider in the NVIDIA Partner Network. They give customers a broad range of compute options, from A100 to A40, at unprecedented scale, and deliver world-class results in artificial intelligence, machine learning, visual effects, and more. NVIDIA is proud of CoreWeave." Another Nvidia executive positioned it as "the highest-performing, most energy-efficient computing platform."
Such praise also serves Nvidia's own interests. Nvidia needs end users to be able to access its compute at the highest possible performance and at scale, just as customers want each new generation of chips as soon as it is released. That makes Nvidia generous in promoting its cooperation with CoreWeave; it never hurts to cultivate one more loyal distributor.
CoreWeave builds to Nvidia's standards and requirements, operates at scale, and brings each new generation of chips online within months of release, rather than the quarters traditional hyperscalers may take. This gives CoreWeave privileged access within Nvidia.
“As a business, this gives us trust in NVIDIA’s eyes because they know our infrastructure will be delivered to customers faster than anyone else on the market, and in the highest performance configurations,” said Brannin McBee.
04 Taking On the Silicon Valley Giants
However, how does CoreWeave fare in the face of competition from Silicon Valley giants?
From an industry-wide perspective, CoreWeave’s competitors in AI infrastructure operations include technology giants such as Microsoft, Google and Amazon.
At the end of August, Google Cloud CEO Thomas Kurian said at the annual Next conference that more than 50% of AI startups in the industry and more than 70% of generative AI unicorns are customers of Google Cloud.
How can a startup valued at US$8 billion avoid being crushed by giants worth trillions? For now, the answer lies in a small company's flexibility and business focus, and in the delicate strategic landscape among the technology giants.
CoreWeave executives like to draw an analogy: "General Motors can build an electric car, but that doesn't make it a Tesla." They believe AI poses challenges traditional cloud platforms cannot handle, giving emerging companies an edge over established players forced to adapt.
Silicon Valley giants such as Amazon, Google, and Microsoft are like aircraft carriers, needing more time and room every time they change course. In CoreWeave's view, they need time to adapt to the new way of building AI infrastructure, and it usually takes them a while after a chip's release to offer large-scale access. The focus now is on building supercomputers, which demand tightly coordinated work across machines and much higher data throughput, but that is not where the giants' main resources are deployed.
"When these three giants build cloud services, they build them to serve hundreds of thousands or even millions of so-called general-purpose use cases across their user bases, and within those, only a small portion of capacity may be dedicated to GPU computing," said Brian Venturo, CoreWeave's chief technology officer.
CoreWeave believes its flexibility and specialization let it stand out in AI infrastructure: competitive on performance and cost-effectiveness, and better suited to AI applications. CoreWeave has only about two hundred employees, and more customers than employees, yet it has struck deals with Inflection AI and even with OpenAI's backer Microsoft to provide custom systems whose chip configurations are denser and more efficient than general-purpose computing servers.
Currently, in terms of scale, CoreWeave claims more than 45,000 high-end Nvidia GPUs available on demand. What matters is not just the quantity but the access it provides. On selection, CoreWeave claims to maintain the industry's broadest range of Nvidia GPUs for a variety of computing needs, designing "right-sized" systems for each workload: "neither more nor less, just right."
As for price, CoreWeave’s banner is “80% cheaper than competitors”.
On the other hand, Nvidia's decisions behind the scenes are also critical. By controlling scarce GPU resources and choosing who gets supply, it shapes the entire market. Despite the tight supply, Nvidia allocated a large batch of its latest AI chips to CoreWeave, diverting supply from top cloud providers including AWS, reportedly because those companies are developing their own AI chips to reduce their dependence on Nvidia.
CoreWeave executives argue that "not making your own chips is definitely not a disadvantage," because it helps them fight for more GPUs from Nvidia. After all, they have no conflict of interest with Nvidia, which may not be true of Silicon Valley giants with huge appetites.
Still, the technology giants remain big Nvidia customers. At the end of August, Jensen Huang appeared at Google Cloud's annual Next conference to announce a new partnership with Google: Google's A3 VM GPU supercomputer, equipped with Nvidia H100 GPUs, will come to market in September.
Jensen Huang announcing the partnership with Google Cloud at Google Cloud Next 2023|Google Cloud
And if a new chip suddenly emerged that outperformed Nvidia's, or at least matched it, what would that mean for CoreWeave's business?
Brannin McBee believes a given chip's lifespan spans roughly two to three years of model training followed by four to five years of inference, so the short-term risk is small. Nvidia is also building an open ecosystem around its hardware to deepen the industry's reliance on its chip technology. Other manufacturers are clearly eager to enter the space, but their lack of such an ecosystem is a gap that cannot be ignored.
Without hard-core chip manufacturing technology of its own, CoreWeave's advantages and success are firmly tied to its partner's supply chain and stability. As long as GPUs remain in industry-wide short supply, that dependence is still an advantage.
From cryptocurrency "mines" to AI "computing-power mines," CoreWeave's rise is staggering: when a grain of gold from the times lands on a startup, it can fuel a meteoric ascent. In this era of rapid AI growth, the industry's hunger for computing power has created the trillion-dollar company Nvidia, and it has clearly also created companies like CoreWeave that seize the moment and go all in.