AWS's Matt Garman lays a foundation for his first year as CEO
LAS VEGAS — Customers, partners, and even employees have complained about AWS's overwhelming emphasis on generative AI technology during its public events over the last several years, and Matt Garman seemed to get that message during his first keynote address as CEO. AWS certainly didn't ignore AI during Garman's presentation Tuesday, but it spent a significant amount of time on the services that turned it into a $100 billion-a-year enterprise computing powerhouse: compute, storage, and databases.
Perhaps befitting someone who was around during its earliest days, Garman emphasized an old message from AWS: we provide the raw materials for the modern digital economy. There were two acts in today's presentation, and Garman opened the morning by rattling off new enhancements to the services that drive most of its revenue.
Garman, who started his AWS career running the flagship EC2 compute service, introduced new instances running Nvidia's Blackwell chips as well as AWS's own Trainium2 chips, which are now generally available.
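For teams that want to kick the tires on the new silicon, a minimal sketch of launching a Trainium2-backed instance with boto3 might look like the following; the instance type and AMI ID are illustrative assumptions, so check the EC2 documentation for the exact names available in your region.

```python
# Minimal sketch: launch a single Trainium2-backed EC2 instance with boto3.
# The instance type and AMI ID below are assumptions for illustration only.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: substitute a real AMI (e.g., a Deep Learning AMI)
    InstanceType="trn2.48xlarge",     # assumed Trainium2 instance type; verify availability in your region
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "purpose", "Value": "trainium2-eval"}],
    }],
)

print("Launched:", response["Instances"][0]["InstanceId"])
```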
AWS now offers fully managed support for Apache Iceberg tables in S3, which will help more companies move their data into the open formats that are taking the data analysis world by storm. And the company introduced a new database: Amazon Aurora DSQL, which Garman described as "the fastest distributed SQL database anywhere" and compared it favorably to Google Cloud's Spanner.
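Because Aurora DSQL is PostgreSQL-compatible, connecting to a cluster should look much like connecting to any other Postgres database. Here's a minimal sketch assuming a placeholder cluster endpoint and an IAM auth token already exported to the environment; the exact endpoint format and token-generation helper are things to confirm in the DSQL documentation.

```python
# Minimal sketch: connect to an Aurora DSQL cluster over the PostgreSQL protocol.
# The endpoint is a placeholder, and the auth token is assumed to have been
# generated separately (DSQL uses short-lived IAM tokens rather than static passwords).
import os
import psycopg2

conn = psycopg2.connect(
    host="your-cluster.dsql.us-east-1.on.aws",  # placeholder endpoint
    user="admin",
    password=os.environ["DSQL_AUTH_TOKEN"],     # assumed pre-generated IAM auth token
    dbname="postgres",
    sslmode="require",
)

with conn.cursor() as cur:
    cur.execute("SELECT now()")
    print(cur.fetchone())

conn.close()
```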
The second act underscored how AWS is trying to carve out a place for itself in the exploding market for generative AI services, where its previous accomplishments don't necessarily matter. Amazon Q (the big, if underwhelming, announcement from last year's re:Invent) is AWS's take on the AI assistant, and Garman spent a significant amount of time talking about new features for Q Developer.
Q Developer can now generate unit tests, produce documentation, and help operations teams troubleshoot performance issues. It can also help companies migrate from older technologies to newer ones, including the tricky process of finally modernizing mainframe applications.
"We think that Q can actually turn what was going to be a multiyear effort into like a multiquarter effort, cutting by more than 50% the time to migrate [off] mainframes," Garman said.
While this year's re:Invent keynote didn't really break new ground or change the trajectory of the enterprise tech market, Garman's solid, no-nonsense approach (not a single competitor was awkwardly trash-talked) came off well as businesses gear up for a year that will determine whether their generative AI investments pay off. It did, however, signal that an ongoing internal debate over the best way to build services for those customers has entered a new chapter.
As it grew into an enterprise powerhouse, AWS sought to replicate everything enterprise tech builders could do in their own data centers, and those early customers loved the freedom to mix and match basic services to meet their unique needs.
But later arrivals to cloud computing found themselves overwhelmed by the sheer number of what Garman called "building blocks" — like wannabe-DIY shoppers at Home Depot wandering through the massive aisles with a glazed look — and AWS started to offer managed services that made it easier to get up and running.
However, while managed services have plenty of benefits, AWS has often struggled to deliver them with the same quality and reliability that its building blocks promise.
Over the years, cloud computing Kremlinologists have examined re:Invent keynotes for signals about which way the pendulum between primitives and managed services is swinging in the company's product strategy. This year's edition felt like a return to the early days, but Redmonk's Stephen O'Grady wondered if that might be a mistake.
"On the one hand, Amazon … [has] historically done primitives well and abstractions not so well. So this return to its roots, as it were, likely bodes well for its current customers. On the other, the growing sea of primitives is, in and of itself, growing less manageable," O'Grady wrote on Bluesky.
The reality is that if AWS wants to keep growing, it needs to continue making the best building blocks on the planet for companies that value flexibility and customization, and it needs to offer companies that need more help something other than a contact at Accenture or Deloitte. As the generative AI hype settles down, solving that challenge will be one of Garman's top priorities for the years to come.
(This post originally appeared in the Runtime newsletter on Dec. 3; sign up here to get more enterprise tech news three times a week.)
Tom Krazit has covered the technology industry for over 20 years, focusing on enterprise technology during the rise of cloud computing over the last ten years at Gigaom, Structure, and Protocol.