Welcome to Runtime! Today: How the upcoming launch of Nvidia's Blackwell chips and the growth of upstart AI clouds could change how companies buy infrastructure, the discovery of a widespread telecom hack serves as a reminder that legal backdoors are problematic, and the latest funding rounds in enterprise tech.
(Was this email forwarded to you? Sign up here to get Runtime each week.)
Training day
During the last two years of the AI investment frenzy, Nvidia's GPUs have been the hottest commodity in Silicon Valley, replacing the sleeveless fleece vests and Allbirds that ruled before the pandemic. But as a new generation of Nvidia GPUs trickles out and AI usage patterns start to change, tech buyers will have more options for finding AI hardware, and that could shuffle the pecking order of infrastructure tech.
Nvidia's new Blackwell B200 chip is expected to deliver a substantial performance increase over the H100, the chip that launched the generative AI boom. After the company ironed out some packaging problems that delayed mass production of the chip earlier this year, the first B200s are starting to roll out this week.
Microsoft declared itself the "1st cloud running Nvidia's Blackwell system with GB200-powered AI servers" in a post on X Tuesday morning, referring to the combo Blackwell/Grace GPU/CPU server design Nvidia introduced earlier this year.
Not to be outdone, OpenAI — which is starting to look more like a frenemy than a close partner of Microsoft — announced hours later that it had received "one of the first engineering builds of the DGX B200," referring to a different server design that combines eight Blackwell chips with Intel CPUs.
Meanwhile, The Information reported Tuesday that armed with $6.6 billion in new funding, OpenAI plans to rely less on Microsoft Azure and more on its own data-center and hardware strategy following complaints that "Microsoft hadn’t moved fast enough to supply OpenAI with enough computing power."
All the major cloud providers will eventually get their hands on Blackwell, although they'll likely grumble about their allocation. But the smaller, AI-focused cloud companies such as CoreWeave that were just getting started two years ago now have far more investment behind them and a much stronger track record of service delivery, and if Blackwell remains in tight supply well into 2025, other AI chips could gain traction.
Cisco plans to put more money into Nvidia-backed CoreWeave in a deal that could value the fast-growing GPU cloud provider at $23 billion, Bloomberg reported over the weekend.
Intel, desperate for anything that could revive its flagging enterprise business, launched its own AI cloud Monday in hopes of getting customers interested in its Gaudi 3 AI chips.
And Business Insider reported two weeks ago that AWS could finally start to see a return on its investment in its own AI chips because it has distributed those chips more broadly around the world than Microsoft or Google.
But as investment in AI infrastructure shifts from training to inference, which most providers expect to make up the bulk of AI workloads over the rest of the decade, it should also become much easier to find processing power.
Inference is what happens behind the scenes when the AI model takes your input and spits out a response, which is less resource-intensive than training the model in the first place and could be handled by a wider variety of servers and silicon.
Much of that work could be handled with decently powerful but much cheaper CPUs compared to today's training-oriented GPUs.
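To make that training-versus-inference distinction concrete, here's a minimal sketch (assuming PyTorch and a toy model invented for illustration, not anything tied to the providers above) contrasting a training step, which needs gradients and optimizer state, with an inference pass, which is just a forward run and can execute comfortably on a CPU:

```python
import torch
import torch.nn as nn

# A toy model standing in for a much larger AI model (illustrative only).
model = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 512))

x = torch.randn(8, 512)       # a batch of inputs
target = torch.randn(8, 512)  # dummy training targets

# Training step: forward pass, loss, backward pass, weight update.
# Gradients and optimizer state are what make this phase GPU-hungry.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss = nn.functional.mse_loss(model(x), target)
loss.backward()
optimizer.step()

# Inference: forward pass only, no gradients kept, so far less memory and
# compute -- the kind of work a decently powerful CPU can absorb.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device).eval()
with torch.inference_mode():
    output = model(x.to(device))
print(output.shape)  # torch.Size([8, 512])
```

The same code path runs whether the device is a GPU or a plain CPU, which is the point: once the expensive training phase is done, serving responses is a much more flexible workload.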
That trend could be a boon for the Big Three cloud infrastructure providers, but the upstart AI cloud providers could certainly add more CPU capacity to their networks.
And as this second wave of the AI boom plays out, the end result could be more competition in cloud infrastructure computing than we've seen in a long time.
In through the back door
Details are still emerging about what appears to have been a massive hack of U.S. telecom infrastructure, which according to the Wall Street Journal was conducted by "hackers linked to the Chinese government." The group is known as Salt Typhoon under Microsoft's threat-actor naming convention, and "for months or longer, the hackers might have held access to network infrastructure used to cooperate with lawful U.S. requests for communications data," according to the report.
According to TechCrunch, U.S. telecom and internet providers are required to maintain a system that allows federal law enforcement officials to obtain communications data in the course of an investigation. But as security professionals have argued for years, it's basically impossible to build a "secure" backdoor into a network or operating system that won't be abused by outside groups bent on intelligence gathering or financial gain.
The most likely scenario, advanced by a Washington Post article, is that China wanted to figure out how and where U.S. authorities were surveilling Chinese targets. But the group could also have had access to unencrypted business traffic.
Enterprise funding
Voyage AI landed $20 million in Series A funding from rivals Snowflake and Databricks, among others, for its work on turning files into "embeddings" that can be understood by AI models.
Lightdash secured $11 million in Series A funding as it builds out a series of open-source developer tools that allow companies to build easy-to-use business-intelligence dashboards.
The Runtime roundup
Cloudflare acquired Kivera, a startup working on cloud deployment security, for an undisclosed sum.
AWS CEO Matt Garman confirmed the company is pruning services in an interview with TechCrunch, arguing "you can’t build everything" but also saying "we take it seriously if companies are going to bet their business on us supporting things for the long term."
Tom Krazit has covered the technology industry for over 20 years, focused on enterprise technology during the rise of cloud computing over the last ten years at Gigaom, Structure and Protocol.