Tom Siebel has made a lot of money in enterprise technology over the last 25 years by being in the right place at the right time. This time around, it's not clear how much he's enjoying the AI boom.
Certainly, the generative AI craze has been good for C3 AI, which started life as an enterprise internet-of-things company but pivoted to AI right around the time of its 2020 IPO. The company wisely chose "AI" as its stock ticker, and its share price soared last year at the height of the AI hype cycle despite tepid revenue and heavy losses, though it's down more than 20% year to date.
C3 AI builds AI applications for enterprises and governments, and also sells a development platform for companies that want to do it themselves. Siebel has been selling software to powerful people for decades, having made a fortune from the rise of Siebel Systems and its subsequent acquisition by Oracle in 2006, but in a recent interview he expressed a little trepidation about how the government is thinking about deploying AI.
In recent years C3 AI was asked to build an AI decision-making application for the HR department of a branch of the U.S. military as well as an AI system that could detect "extremists" in the country, and Siebel said the company turned down both requests. "I was in a very important building with a very important person and candidly was really disturbed," he said of the second conversation. "I was like, how fast can I get out of this meeting?"
In the interview, Siebel also expressed support for a law similar to Europe's "right to be forgotten" that could see Mark Zuckerberg hauled off to jail if Meta doesn't promptly delete your data, and outlined some ways AI could actually make government services easier to access.
This interview has been edited and condensed for clarity.
You've been around this industry longer than I have, but it seems like there's such a herd mentality when something new comes along.
AI is legitimately pretty big, it's not ephemeral. At some point in time, will the market get overvalued and correct? Absolutely. It always happens, right? Does it happen this month? I don't know. Could it go another three years? I don't know.
But you know, it's a legitimately big deal. We're able to solve classes of problems that could never be solved before. It has enormous social and economic benefit. And this technology is advancing really dramatically, particularly as it relates to generative AI.
If you want to read a really good book, it's called What Is ChatGPT Doing ... and Why Does It Work? by Stephen Wolfram. I've been looking for a good book on this subject for a year and it's all been drivel. He really does explain how these learning models work, what they do and, importantly, what they won't do.
This is the first time we've done things in computer science that weren't a mathematical certainty, other than random number generators. Everything else we've done in computer science, up until this day, has been deterministic: every time you run it, you get the same answer. Now, with generative AI, you never quite know what the answer is going to be. And that creates some very interesting issues.
Everybody is scrambling. Whether we're dealing with government leaders, military leaders or business leaders, I think they're scared that if they don't take advantage of these technologies they'll be at a competitive disadvantage, whether that's against China in the military context or against the other automotive manufacturers in the commercial one. And it's true, they will be at a disadvantage if they don't figure out how to take advantage of these technologies.
I was looking at the earnings report from a couple of months ago, and it seems like over half of your customers are the federal government, state and local governments, or defense-adjacent industries. Why do you think your customer base is concentrated in those sectors?
We've seen a huge interest [from] the federal government, particularly in the defense and intelligence communities, and their concern is China. And China is a legitimate concern: they're investing billions in AI, and an AI war is something that we do not want to lose.
The largest application of AI in the U.S. Department of Defense is predictive maintenance for aircraft, where they can look at all the telemetry and identify failures before they happen. With one of the systems we've built, on any given day we can get 25% more aircraft in the air, which at the scale of the United States Air Force is kind of a big deal.
It's also used for contested logistics; these people move a lot of stuff around the world, and they're doing it in very hostile environments where your shipping lanes and ports are disappearing in real time. That's a classic, perfect example of AI and a place where we do a lot of work.
The regulations associated with the Affordable Care Act are, like, biblical. States like California basically have call-center agents answering our questions about what physicians are covered, how much am I going to pay for my kids, how much am I going to pay for my grandmother, what's the difference if I'm in Fresno versus Sacramento. The average call is 41 minutes, which means you're 39 minutes on hold, and probably 80% of the answers are wrong.
This is a perfect use of generative AI, where we can load the corpus of all of the rules, regulations and statutes associated with the Affordable Care Act into … when we do a learning model at the scale of a state or a company, we call it an enterprise learning model. And so it enables the call-center agent to answer the question right the first time, in like two minutes, rather than wrong in 41 minutes. So Medicare, Medicaid, social services, disaster recovery: these are natural applications where we can apply these information technologies to provide higher quality constituent services at lower cost.
You mentioned that your services weren't being used for facial recognition with law enforcement and how the defense industry was using these technologies for logistics and maintenance. Does the company prohibit its technology from being used for facial recognition or weapons targeting?
First of all, when you get into the United States government, they're very sensitive about AI ethics, they really are. That being said, even with that, we are asked to do applications that we think are ethically troubling, and we just won't do them.
I will give you an example with one of the [armed] services with which we do business which will remain unnamed — there's three large services, just pick one of them — [they] asked us to build an AI-enabled HR system for that service. When you get into the reserves and retired veterans, you're dealing with like a million and a half people. And the purpose of this HR system — by the way, we've been asked to do this in the private sector also, and we won't do it — was to decide who to assign and who to promote.
We were being visited by the undersecretary of that service with whom we're quite close, and he wanted us to build this application. And I said, "yes, we could build this, but we're not going to do it, and we recommend that you not do it."
The problem is that, yes, we can do this, and yes, the system will work. But we have something called cultural bias that [is] embedded in these data, and this system is just going to perpetuate that bias. So no matter what the question is, the answer is gonna be "white, male, went to West Point," and this is just not going to fly in the 21st century.
I think there are a lot of issues associated with AI that are really troubling. Any time we get into perpetuating social and cultural bias, we need to be concerned about that. Issues associated with privacy are very troubling. Privacy looks to me like a fundamental human right, so why aren't there laws to enforce it? Why isn't there a law that provides us the right to be forgotten?
At the core of a lot of these things is a data gathering problem, or a data gathering gap, that we just haven't really addressed at any level. But that data is also the engine of this AI boom, and these companies are trying to gather as much of it as they can.
I think it's unstoppable. We are going to aggregate this data, that is unstoppable. I think what we can stop is how we use [that] data. We can make it unlawful to use the data to perpetuate social bias, we can make it unlawful to use the data to propagate a public health hazard, and we can make it unlawful to use the data to interfere in democratic processes.
If somebody sends you a request that says "I want to be forgotten" at, name the social media company, then within 24 hours all the records should be expunged. And if the social media company doesn't expunge those records, then put the CEO in jail; you only have to do that one time and everybody will comply.
These guys who stand up in front of Congress, Sam Altman and the rest, and say "please regulate us" are just lying through their teeth. How many millions of algorithms are we generating every year? Are we going to set up something like the FDA to inspect them all? It's impossible, and these people know it when they're saying it. Nobody is going to look at these algorithms and figure out what they do, because they don't even know how they work themselves.
I guess my job is to be singing the praises of AI, that life's just all goodness and light. Is there goodness and light in AI? Absolutely. We will deliver higher quality products at lower cost [to] more satisfied consumers with lower environmental impact. We will deliver higher quality government services at lower cost into the hands of more satisfied consumers. The streets will be safer, people will be healthier. This is all good.
The downside is really dark. Let's look at the intersection of AI and criminal justice. Do you want to be afraid? Go there. You know, where we start [predicting] who's likely to be incarcerated or who's likely to be a criminal? You know what the answer is?
I have been asked — and I won't say what administration, and I won't say in what department — but let's say it was as senior a person as there is, almost, in the United States government. And the question was, "Tom, can we use your system to identify extremists in the US population?"
Is this conversation really taking place in the United States of America? Like, what's an extremist? Maybe white, male, Christian? I mean, what's an extremist? I was in a very important building with a very important person and candidly was really disturbed. I was like, how fast can I get out of this meeting?
What did you say to them?
I said, "You know, I don't feel comfortable with a conversation, and I'm the wrong person to be talking to."
Do you know if that was pursued through another vendor or method?
I can think of vendors who would say yes to that in a heartbeat. I think the next guy in the door said yes, and you've just got to live with that.