AWS's Matt Garman lays a foundation for his first year as CEO
AWS didn't ignore AI during Garman's presentation Tuesday, but it spent a significant amount of time on the services that turned it into a $100-billion a year enterprise computing powerhouse: compute, storage, and databases.
LAS VEGAS — Customers, partners, and even employees have complained about AWS's overwhelming emphasis on generative AI technology during its public events over the last several years, and Matt Garman seemed to get that message during his first keynote address as CEO. AWS certainly didn't ignore AI during Garman's presentation Tuesday, but it spent a significant amount of time on the services that turned it into a $100-billion a year enterprise computing powerhouse: compute, storage, and databases.
Perhaps befitting someone who was around during its earliest days, Garman emphasized an old message from AWS: we provide the raw materials for the modern digital economy. There were two acts in today's presentation, and Garman opened the morning by rattling off new enhancements to the services that drive most of its revenue.
Garman started off his AWS career running the flagship EC2 compute service, and introduced new instances running Nvidia's Blackwell chips as well as AWS's own Trainium2 chips, which are now generally available.
AWS now offers fully managed support for Apache Iceberg tables in S3, which will help more companies move their data into the open formats that are taking the data analysis world by storm. And the company introduced a new database: Amazon Aurora DSQL, which Garman described as "the fastest distributed SQL database anywhere" and compared it favorably to Google Cloud's Spanner.
The second act underscored how AWS is trying to carve out a place for itself in the exploding market for generative AI services, where its previous accomplishments don't necessarily matter. Amazon Q — the big, if underwhelming, announcement from last year's re:Invent — is AWS's take on the AI assistant, and Garman spent a significant amount of time talking about new features for Q Developer.
Q Developer can now launch unit tests, generate documentation, and help operations teams solve performance issues. It can also help companies migrate from older technologies to newer ones, including the tricky process of finally modernizing mainframe applications.
"We think that Q can actually turn what was going to be a multiyear effort into like a multiquarter effort, cutting by more than 50% the time to migrate [off] mainframes," Garman said.
While this year's re:Invent keynote didn't really break new ground or change the trajectory of the enterprise tech market, Garman's solid, no-nonsense approach (not a single competitor was awkwardly trash-talked) came off well as businesses gear up for a year that will determine whether their generative AI investments will pay off. It did, however, signal that an ongoing internal debate over the best way to build services for those customers has entered a new chapter.
As it grew into an enterprise powerhouse, AWS sought to replicate everything enterprise tech builders could do in their own data centers, and those early customers loved the freedom to mix and match basic services to meet their unique needs.
But later arrivals to cloud computing found themselves overwhelmed by the sheer number of what Garman called "building blocks" — like wannabe-DIY shoppers at Home Depot wandering through the massive aisles with a glazed look — and AWS started to offer managed services that made it easier to get up and running.
However, while managed services offer a lot of benefits, AWS has often struggled to deliver them with the same quality and reliability that its building blocks promise.
Over the years, cloud computing Kremlinologists have examined re:Invent keynotes for signals about which way that pendulum is swinging in the company's product strategy. This year's edition felt like a return to the early days, but Redmonk's Stephen O'Grady wondered if that might be a mistake.
"On the one hand, Amazon … [has] historically done primitives well and abstractions not so well. So this return to its roots, as it were, likely bodes well for its current customers. On the other, the growing sea of primitives is, in and of itself, growing less manageable," O'Grady wrote on Bluesky.
The reality is that if AWS wants to keep growing, it needs to continue making the best building blocks on the planet for companies that value flexibility and customization, and it needs to offer companies that need more help something other than a contact at Accenture or Deloitte. As the generative AI hype settles down, solving that challenge will be one of Garman's top priorities for the years to come.
(This post originally appeared in the Runtime newsletter on Dec. 3rd; sign up here to get more enterprise tech news three times a week.)
Tom Krazit has covered the technology industry for over 20 years, focusing on enterprise technology during the rise of cloud computing over the last ten years at Gigaom, Structure, and Protocol.