Welcome to Runtime! Today: Meta releases its Llama 3 foundation model with a big emphasis on developers, Cisco takes Splunk's observability tech into security, and the latest enterprise moves.
(Was this email forwarded to you? Sign up here to get Runtime each week.)
Wooly bully
Ever since OpenAI took the world by storm in late 2022 with the release of ChatGPT, big tech platform companies and scrappy startups have scrambled to match the performance of its GPT large-language model. Companies that want to build generative AI applications now have several high-performance models to choose from, but at some point the market won't be able to support them all.
Meta is in an interesting position ahead of the inevitable consolidation in AI models: it is a cloud-neutral AI research powerhouse printing money from its other businesses, unlike the AI startups living off venture capital. Llama 3, released on Thursday, is an intriguing option for enterprises as one of the most powerful open models currently available.
Two versions of Llama 3 were released Thursday: a 70-billion-parameter model and an 8-billion-parameter model.
Meta compared both models favorably to similarly sized models from OpenAI, Google, Mistral, and Anthropic, with the usual caveat that, like most benchmarks, AI benchmarks are probably flawed.
Meta also said that it had improved the trust and safety tools that accompany the new models, and added a new feature called Code Shield, which "is a robust inference time filtering tool engineered to prevent the introduction of insecure code generated by LLMs into production systems."
Llama 3 was trained on a much larger dataset than Llama 2, and although Meta flirts with the "open" label by releasing the model weights used in Llama 3, it falls short of the standard set by others like the Allen Institute by declining to release that training data.
Meta's enterprise AI reputation predates the generative AI boom.
PyTorch, an open-source library for computer vision and natural language processing, was developed during the Facebook era of Meta and quickly became one of the most widely used tools in the AI community.
Meta later collaborated with Microsoft to produce ONNX, which made it easier to mix and match AI tools when building internal models.
With the release of Llama 3, Meta said it made it easier for developers to use Llama responsibly through a mixture of automated and human interventions against "problematic responses," the risk of which is just one reason why many enterprises have only tiptoed into external-facing generative AI applications.
And one of the primary selling points for Llama is that you can bring your own operating environment: The Big Three cloud providers all announced support for Llama 3 Thursday, and server huggers can also be confident that their hardware of choice will support it.
The downside of embracing leading foundation models from OpenAI or Google is that, like it or not, you have to run those workloads on Microsoft Azure or Google Cloud, respectively.
Portability could be an interesting differentiator for companies trying to decide which model to embrace, especially if Meta follows through on promises to keep Llama competitive with other models.
If Meta added managed AI services around Llama for enterprises that like its approach but need help deploying their apps, it could give cloud providers and OpenAI something to think about.
Cisco launched Hypershield Thursday, which Cisco's Jeetu Patel described as "not a product, but a new architecture – the first version of something new," according to CNBC. In reality, it's a product that Cisco intends to sell to current customers to secure their hybrid cloud deployments by automatically detecting security threats in both public cloud deployments and on-premises data centers running Cisco's networking gear.
What once seemed like an inevitable march to the cloud has stalled, as more companies realize they can get away with a mix of existing data-center deployments and cloud services where they make sense. That's a welcome development for Cisco's hardware business, and if it can sell software to the data-center operators that have stuck with it, it might manage to erase the decline in its revenue it saw last quarter.
Microsoft wants to secure 1.8 million GPUs by the end of the year, which would triple the number it currently runs, according to Business Insider.
At the same time, chip stocks are down 10% from their peaks earlier this year, which technically represents a correction and could be a signal that AI demand is starting to wane.
Wiz is in talks to acquire fellow security startup Lacework for $200 million, a far cry from the $8.3 billion valuation Lacework enjoyed just a few years ago, according to The Information.
Slack rolled out generative AI features to all paid users this week, such as a summary of all the updates to that channel you never read.
Tom Krazit has covered the technology industry for over 20 years, focusing on enterprise technology during the rise of cloud computing over the last ten years at Gigaom, Structure, and Protocol.