No matter what happens in the end, this era of enterprise software will be studied forever as either the dawn of a new age of corporate productivity or the most hyped money pit since the metaverse. Enterprise software veteran Amit Zavery has seen many of these cycles, and he believes the key to success this time around is making sure SaaS customers don't need to confront the complexity of AI models.
Zavery was recently named president, chief product officer, and chief operating officer at ServiceNow, which at many $215 billion companies would be three separate jobs. But Zavery, who has spent the last 30 years building enterprise software products and teams at Oracle and most recently Google Cloud, acknowledged in a recent interview that while the role is "quite broad," he's planning to focus on three key goals.
"The core thing always comes down to, are we building the right thing? Are we delivering value? And are we innovating fast and disrupting ourselves before anybody else disrupts us?" he said. Agentic AI and coding assistants threaten to upend the market for enterprise software applications to an extent that we haven't seen since the dawn of SaaS itself, and Zavery's progress toward those goals will determine the future of ServiceNow, which has grown steadily for several years on the strength of its IT and business management software.
Zavery discussed the impact of generative AI, how SaaS companies should think about AI models, and his decision to leave Google Cloud and longtime colleague Thomas Kurian right as the third-place cloud infrastructure provider is starting to hit its stride.
This interview has been edited and condensed for clarity.
So I guess the first question is, why now? Why did you think it was time to do something new?
I was at Google for almost six years, and Google Cloud has been a great place to learn as well as grow a large enterprise business. I've been working on AI for some time there as well, and one company I've been following for a long time has been ServiceNow.
The real value for a lot of those large language models, as well as the conversational pieces, is completely aligned with what ServiceNow does. ServiceNow can really turbocharge its market as well as its presence in this space, and capture a probably much bigger proportion of the market share because of what you can do with the combination of the ServiceNow platform and Gen AI.
I think the company culture itself, the idea of being humble and hungry, really resonated with me. When I met with Fred Luddy, the founder … his vision, his humility in this area, and what he's been able to build in this space to really change how people work today and how they will work in the future, I think it's very, very exciting.
I am an enterprise software person. I learned a lot at Google. I learned a lot about what we can do with Gen AI, and now if I could take that experience and turbocharge what you can do with ServiceNow's platform, I couldn't think of a reason to not join them.
What are some of the things that ServiceNow will need to invest in to stay competitive over the rest of the decade?
I think the good thing ServiceNow has going for it is this idea of having one unified AI platform with one big common data model, one unified management [platform], and the whole interface associated with that. If you look at the number of workflows they're able to create — the technology workflow, the creator workflow, the finance and supply-chain workflow, the employee workflow, the customer workflow, and the recently announced workflow data fabric — the platform is so robust and capable that I can build more and more capabilities, fast. They've done a very good job of making AI pervasive across the platform. It gets absorbed and delivered to all the different workflows and applications they built without having to do it individually.
But the goal now, and if you see some of the announcements they've done, and what I've been now looking into, is agentic AI. How do you create the system of agents?
The good thing is that ServiceNow's differentiation is a little more end-to-end, and it's action-oriented. It's not like siloed agents that each do a piece of it but aren't connected together. The idea of having orchestration across all these agents is something we can keep on enhancing to keep on delivering value to customers.
Customers don't really care what happens underneath the covers, right? If I want to go on maternity leave as an employee, can I have the agent do all the tasks underneath? That means changing things in one particular application, giving you all of the benefits associated with your leave, and making sure you have an existing employee who will take over your role. All the parts of the workflow connect so many different pieces together, so the agentic AI can do all these tasks for you from just one prompt.
It's much more powerful than saying "I will go to the benefits agent, then I will go to the agent for my employee management. I will go to agents for my out-of-office email system." That is not really what agentic AI will be.
My view, as we think about the future of all these things, is that agents are definitely how people are going to work. Can we now build a very good predictive as well as integrated, end-to-end system for that?
I think one thing we've realized over the last year is that there is a fair amount of choice in AI models, and people want to work with different ones. But there can't be a zillion models either, right? If we look at the way things have gone in the past, this would condense down into a smaller number of players as the models themselves get more capable. How far along are we in that progression, and how do you see that kind of playing out over the next year or so?
I don't think there's going to be a huge number of model companies. I think the reality is setting in that, one, it is expensive to build models, and second, it is harder to even monetize them, right? If you look at most of the startups out there who are building models, they might be getting into much more specific use cases, versus trying to be very generic. There are going to be three or four very deep as well as generic models. You talk about OpenAI and Gemini and Anthropic, and I feel those will be the big, big providers.
We do want to make sure we look at it from the outcome perspective versus from the model perspective. What do you want to deliver as a use case? What do you want to be able to do for the user? If there is a model out there in the market that can deliver that, of course, we should use it. There might be some specific use cases where it's better to have a small, domain-specific model, where you might be able to tailor things to a particular use case or data set.
I think it's about having an open architecture where we can choose the appropriate thing — there are not going to be hundreds of them, for sure — and picking the right ones for the right use cases, making sure that it's not a customer's job to worry about. It is our job to really make sure we are finding the right technology — be it built by us or a third party — and providing them, as an end user, with the right outcome. And then eventually, if the underlying technology changes, we should be able to adopt it without, again, the customer, the user, having to deal with it.
Do you feel like as a pretty large company with a lot of resources that you need to be competitive in model development on your own? Does ServiceNow have to have its own model that competes with the other models?
It will evolve as we go. I don't think there's an answer there [where] what we did last year is what we're going to do today and what we'll do next year.
As we look at the different models and build out some of the research team, we have to know the technology inside as well. It's not about just picking an API and building on it, because there might be some things we can do much better: from a tuning perspective, a performance perspective, and the things we need to learn about what works in a particular enterprise environment, from a data-residency perspective.
Having people in the organization who have built this, or understand the technology or the models, is definitely important. We have a lot of good, smart technologists in the model and LLM space. They do build models that are very specific to areas we think are important, but I don't think it will be for everything, right?
We are not there to be out there selling a model. We are not in the business of building and growing and competing with Gemini or OpenAI, because it's not really our core business, right?
There are good engineers who know a lot of this stuff, and a lot of research scientists in our organization, which really keeps us on our toes as well as keeps us ahead. When you compare things, we don't want to take a black box and try to use it. You have to understand the inside of it, and that's really what our research and engineering teams will do.
We'll keep our eyes open to what makes sense as we go forward. As I said, what we do today is not what we'll do next year, for sure.