Why GitHub Copilot needed a few new flight engineers from Anthropic and Google
GitHub Copilot users will be able to swap in AI models from Anthropic and Google in place of the default models from OpenAI. When the flagship product of the generative AI era takes such a step, it's a sign OpenAI's leadership position is waning.
GitHub Copilot is the shining star of the generative AI boom, perhaps the most widely used enterprise AI tool among businesses around the world. Microsoft and its GitHub subsidiary spent the last several years promoting their special relationship with OpenAI and its large language models as the secret sauce behind the coding assistant, but times have changed.
GitHub Copilot users will be able to swap in AI models from Anthropic and Google to generate answers when using Copilot Chat to ask questions, GitHub CEO Thomas Dohmke announced Tuesday at GitHub Universe. "It is clear the next phase of AI code generation will not only be defined by multimodel functionality, but by multimodel choice," he said in a blog post.
Anthropic's Claude 3.5 Sonnet — which has gained a lot of traction as a coding assistant this year — is available today, while Copilot users will be able to select Google's Gemini 1.5 Pro in "the coming weeks," according to GitHub. Developers will be able to continue using several models from OpenAI, including GPT-4o, o1-preview, and o1-mini.
“We truly believe that the era of a single model is over,” Dohmke told TechCrunch, citing the tradeoffs that Copilot customers need to make between latency and accuracy.
It's not clear from Tuesday's presentation whether other models, such as Meta's Llama, will eventually make their way into Copilot. But now that GitHub has built model-switching into the tool, it's not hard to imagine it offering several other choices at a later date.
During the early days of their partnership, Microsoft and GitHub sang the praises of OpenAI's technology and actively encouraged the rest of enterprise tech to think their exclusive access to OpenAI put them way ahead of competitors. But as the pace of OpenAI's model breakthroughs has slowed, rivals like Anthropic, Google, Meta, and others have quickly managed to erase much of that advantage.
Simply providing exclusive access to OpenAI's models was enough to jumpstart Microsoft's cloud AI business, but the real enterprise tech competition has shifted to building the best tools and platforms customers need to create their own AI apps atop those models. Over the last year, Microsoft has also made several other models available through its Azure AI service, a strategy that more closely resembles AWS's approach with Bedrock.
GitHub has always had an independent streak within the Microsoft universe, as COO Kyle Daigle told Runtime last year at AWS re:Invent. But Tuesday's announcement makes it clear that if the company behind one of the most popular generative AI tools on the planet thinks it can no longer afford to rely entirely on OpenAI, nobody can.
"We, at GitHub, believe in developer choice and that developers — for reasons of company policy, benchmarks that they have seen, different programming languages and of course, personal preference, or because they’re using that model for other scenarios already — prefer one of the competing models, and so we’re officially partnering with both Anthropic and Google," Dohmke told TechCrunch.
(This post originally appeared in the Runtime newsletter on Oct. 29; sign up here to get more enterprise tech news three times a week.)
Tom Krazit has covered the technology industry for over 20 years, focusing on enterprise technology during the rise of cloud computing over the last ten years at Gigaom, Structure, and Protocol.