Can Meta be the indie AI model company?

Meta headquarters in Menlo Park, Calif. (Credit: Meta)

Welcome to Runtime! Today: Meta releases its Llama 3 foundation model with a big emphasis on developers, Cisco takes Splunk's observability tech into security, and the latest enterprise moves.

(Was this email forwarded to you? Sign up here to get Runtime each week.)


Wooly bully

Ever since OpenAI took the world by storm in late 2022 with the release of ChatGPT, big tech platform companies and scrappy startups have scrambled to match the performance of its GPT large-language model. Companies that want to build generative AI applications now have several high-performance models to choose from, but at some point the market won't be able to support them all.

Meta is in an interesting position ahead of the inevitable consolidation in AI models: it's a cloud-neutral AI research powerhouse printing money from its other businesses, unlike the AI startups living off venture capital. Llama 3, released Thursday, is one of the most powerful language models yet released, and as one of the most capable open models currently available it's an intriguing option for enterprises.

  • Two versions of Llama 3 were released Thursday: a 70-billion-parameter model and an 8-billion-parameter model.
  • Meta compared both models favorably to similar-sized models from OpenAI, Google, Mistral, and Anthropic, although like most benchmarks, AI benchmarks are probably flawed and a broader reckoning on that front still awaits.
  • Meta also said that it had improved the trust and safety tools that accompany the new models, and added a new feature called Code Shield, which "is a robust inference time filtering tool engineered to prevent the introduction of insecure code generated by LLMs into production systems."
  • Llama 3 was trained on a much larger dataset than Llama 2, and although Meta flirts with the "open" label by releasing the model weights (a quick loading sketch follows this list), it falls short of the standard set by others like the Allen Institute by declining to release that training data.
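
For a sense of what shipping the weights actually buys developers, here's a minimal sketch of loading the 8-billion-parameter Instruct variant through Hugging Face's transformers library; the repo name, dtype, and single-GPU assumptions are ours, and the gated repo requires accepting Meta's license first.

```python
# Minimal sketch (not Meta's official example): pulling the Llama 3 8B Instruct
# weights through Hugging Face transformers. Assumes transformers and torch are
# installed and the gated "meta-llama" repo license has been accepted.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed Hugging Face repo name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half-precision weights fit on one large GPU
    device_map="auto",
)

# The Instruct variant ships a chat template in its tokenizer config.
messages = [{"role": "user", "content": "Explain what an open-weights model is."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```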

Meta's enterprise AI reputation predates the generative AI boom.

  • PyTorch, the open-source machine-learning framework used for applications like computer vision and natural language processing, was developed during the Facebook era of Meta and quickly became one of the most widely used tools in the AI community.
  • Meta later collaborated with Microsoft to produce ONNX, an open interchange format that makes it easier to move models between frameworks and runtimes when building internal models (see the toy export sketch after this list).
  • And the company's biggest priority — its advertisers — have been using AI tools to optimize their Facebook campaigns for years.
  • With the release of Llama 3, Meta said it made it easier for developers to use Llama responsibly through a mixture of automated and human interventions against "problematic responses," the kind of risk that is just one reason why many enterprises have only tiptoed into external-facing generative AI applications.
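
ONNX's whole pitch is that portability, and here's a toy sketch of what the hand-off looks like using PyTorch's built-in exporter; the two-layer model, tensor shapes, and file name are purely illustrative.

```python
# Toy sketch: exporting a PyTorch model to the ONNX format so it can run under
# any ONNX-compatible runtime (onnxruntime, TensorRT, etc.). The tiny model and
# file name here are illustrative, not anything Meta or Microsoft ships.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
model.eval()

dummy_input = torch.randn(1, 16)  # example input that pins down the graph's shapes

torch.onnx.export(
    model,
    dummy_input,
    "tiny_classifier.onnx",
    input_names=["features"],
    output_names=["logits"],
    dynamic_axes={"features": {0: "batch"}, "logits": {0: "batch"}},  # variable batch size
)
```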

And one of the primary selling points for Llama is that you can bring your own operating environment: The Big Three cloud providers all announced support for Llama 3 Thursday, and server huggers can also be confident that their hardware of choice will support it.

  • The downside of embracing leading foundation models like OpenAI's GPT-4 or Google's Gemini is that, like it or not, you have to run those workloads on Microsoft Azure or Google Cloud.
  • Portability could be an interesting teaser for companies trying to decide which model to embrace, especially if Meta follows through on promises to keep Llama competitive with other models.
  • The Facebook-era Meta always seemed primed for an entrance into the enterprise tech market, given its world-class data-center infrastructure and intimate knowledge of what it takes to comply with government regulations.
  • If Meta added managed AI services around Llama for enterprises that like its approach but need help deploying their apps, it could give cloud providers and OpenAI something to think about.

Raise the Hypershields

Earlier this year Runtime took a closer look at how the observability market was on a collision course with the security market, as companies from both sides of the equation realized how real-time data could improve their products. Turns out that was one of the main reasons Cisco spent more than $28 billion for Splunk and Isovalent.

Cisco launched Hypershield Thursday, which Cisco's Jeetu Patel described as "not a product, but a new architecture – the first version of something new," according to CNBC. In reality, it's a product that Cisco intends to sell to current customers to secure their hybrid cloud deployments by automatically detecting security threats in both public cloud deployments and on-premises data centers running Cisco's networking gear.

What once seemed like an inevitable march to the cloud has stalled, as more companies realize they can get away with a mix of existing data-center deployments and cloud services where they make sense. That's a welcome development for Cisco's hardware business, and if it can sell software to the data-center operators that have stuck with it, it might manage to erase the revenue decline it saw last quarter.


Enterprise moves

Tom Evans is the new chief partner officer at Cloudflare, a newly created role that he'll build out after several years in a similar position at Palo Alto Networks.


The Runtime roundup

Microsoft wants to secure 1.8 million GPUs by the end of the year, which would triple the number it currently runs, according to Business Insider.

At the same time, chip stocks are down 10% from their peaks earlier this year, which technically represents a correction and could be a signal that AI demand is starting to wane.

Wiz is in talks to acquire fellow security startup Lacework for $200 million, which is a far cry from the $8.3 billion valuation Lacework enjoyed just a few years ago, according to The Information.

Slack rolled out generative AI features to all paid users this week, such as a summary of all the updates to that channel you never read.

Valkey published the first release candidate of its Redis fork, and picked up new backers including Alibaba, Huawei, and Verizon.

Cloudflare CEO Matthew Prince is trying to get an 11,000-square-foot mansion built in Park City, Utah, over the objections of neighbors, and the local paper suddenly began running positive articles about his plans after he purchased the paper.


Thanks for reading — see you Saturday!
