The sagging shared-responsibility model

Welcome to Runtime! Today: why the decades-old bargain that governed cloud security is showing its age, OpenAI thinks smaller is better, and the latest moves in enterprise tech.

(Was this email forwarded to you? Sign up here to get Runtime each week.)


We're all in this together

Cloud computing's fundamental approach to security seemed like a great deal when it was first proposed to companies struggling to protect their self-managed infrastructure. The bargain was simple: we take care of the hard stuff, and all you have to do is control access to your account.

But the shared-responsibility model is groaning under the weight of the modern security environment, with its sophisticated threat actors, scarily good phishing scams, and automated attacks. Snowflake's ongoing nightmare should be a wake-up call for any infrastructure or SaaS provider that they need to do more to protect their customers, because the old model is no longer working.

Here's how Microsoft defines the shared-responsibility model, which is one of the better summaries of how cloud computing security has traditionally worked across its three major pillars: infrastructure services, platform services, and software services.

  • A diagram on that page outlines a sliding scale of responsibilities, from the on-premises world where the customer must manage everything to the SaaS world, where the customer manages very little.
  • For example, if you're a Microsoft Azure customer, you're not responsible for the physical security of the servers you're renting, but you are responsible for the security of any operating systems or homegrown applications you run on that cloud instance.
  • A classic example of this model in action was the 2018 response to the design flaws in Intel chips that could have allowed attackers to access secure areas of those processors; cloud providers patched those instances with little or no disruption to their customers.
  • But no matter what level of cloud service you're buying, under the shared responsibility model, "you're responsible for protecting the security of your data and identities," according to Microsoft, and all cloud providers use similar language to describe the partnership.
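The sliding scale described above can be sketched in a few lines of code. This is an illustrative simplification of the responsibility matrix in Microsoft's diagram, not an official mapping; the task names and assignments are assumptions chosen to match the examples in this section.

```python
# Rough sketch of the shared-responsibility sliding scale: who secures
# each layer under each service model. A simplification for illustration.

RESPONSIBILITY = {
    # task: responsibility under (on-prem, iaas, paas, saas)
    "physical security":   ("customer", "provider", "provider", "provider"),
    "operating system":    ("customer", "customer", "provider", "provider"),
    "applications":        ("customer", "customer", "customer", "provider"),
    "data and identities": ("customer", "customer", "customer", "customer"),
}

def who_secures(task: str, model: str) -> str:
    """Return 'customer' or 'provider' for a task under a service model."""
    models = ("on-prem", "iaas", "paas", "saas")
    return RESPONSIBILITY[task][models.index(model)]

print(who_secures("physical security", "iaas"))    # the Azure example above
print(who_secures("data and identities", "saas"))  # always the customer
```

Note that the last row never changes: under every service model, data and identities stay with the customer, which is exactly the clause security experts are worried about.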

Security experts have been sounding the alarm about that last statement for some time. While Snowflake did nothing wrong under the shared responsibility model, which holds that customers are responsible for properly securing access to their accounts, a growing number of people believe that cloud providers need to do more to protect their customers.

  • Leading that charge is CISA and its Secure by Design initiative, which all three major cloud providers have pledged to support but which Snowflake and Databricks, the data-cloud engines of the generative AI boom, have not adopted.
  • "Products designed with Secure by Design principles prioritize the security of customers as a core business requirement, rather than merely treating it as a technical feature," according to CISA.
  • For example, Snowflake customers who used multifactor authentication were protected against the attacks using stolen credentials, but Snowflake still doesn't require customers to use MFA and didn't even provide a way for customers to force their own users to adopt it until last week.
  • "If we give you the choice to do the right thing, and you can’t seem to choose to do the right thing, then maybe it just shouldn’t be a choice anymore,” Chester Wisniewski, director and global field CTO at Sophos, told CyberSecurity Dive.
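A first step short of removing the choice entirely is simply auditing who has made the wrong one. The sketch below shows the kind of MFA-enrollment audit an administrator could run over a user export; the record format and field names are invented for illustration, since each vendor's admin API differs.

```python
# Hypothetical sketch: find users in an account export who have not
# enrolled in MFA. The field names here are assumptions, standing in
# for whatever a vendor's user-listing API actually returns.

def users_without_mfa(users: list[dict]) -> list[str]:
    """Return login names of users who have not enrolled in MFA.
    A missing field is treated as unenrolled, the safer default."""
    return [u["login_name"] for u in users if not u.get("has_mfa", False)]

sample = [
    {"login_name": "alice", "has_mfa": True},
    {"login_name": "bob",   "has_mfa": False},
    {"login_name": "carol"},  # field absent -> treated as unenrolled
]
print(users_without_mfa(sample))  # ['bob', 'carol']
```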

But taking on more responsibility for account security will force enterprise tech vendors to accept more friction in the user experience of their products. That could be a tough sell for vendors that have made onboarding and ease-of-use a big part of their product strategy.

  • One reason a lot of enterprise software companies haven't imposed stricter security policies on their users is that those policies can frustrate customers or break existing workflows.
  • And while every enterprise vendor promises that they take security very seriously, product teams tend to win arguments with security teams at companies that are desperate for revenue.
  • At the very least, enterprise vendors need to provide easier ways for customers to detect anomalous login attempts or unusual activity, which is one reason why observability companies are thinking very hard about getting into the security market.
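The simplest version of the anomaly detection mentioned above is flagging a login from a network location a user has never used before. The sketch below shows that idea on an invented log format; real detection pipelines weigh many more signals (device, time of day, impossible travel), so this is a minimal illustration, not a product recipe.

```python
# Minimal sketch of anomalous-login detection: flag a login the first
# time a user appears from a location absent from their history.
# The log format is invented for illustration.

from collections import defaultdict

def flag_new_locations(logins):
    """Given (user, location) pairs in order, return the pairs where a
    user with an established baseline logs in from a new location."""
    seen = defaultdict(set)
    flags = []
    for user, location in logins:
        if seen[user] and location not in seen[user]:
            flags.append((user, location))
        seen[user].add(location)
    return flags

logins = [
    ("alice", "office-vpn"),
    ("alice", "office-vpn"),
    ("alice", "unknown-asn"),  # new location for alice -> flagged
    ("bob",   "home-isp"),     # bob's first login just sets a baseline
]
print(flag_new_locations(logins))  # [('alice', 'unknown-asn')]
```

The design choice worth noting is that a user's very first login only establishes a baseline rather than raising an alert, which keeps the signal-to-noise ratio tolerable for new accounts.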

Security experts are hopeful that one vendor will turn security into a competitive advantage, forcing everyone else to follow suit. Google Cloud is trying to rebrand the shared responsibility model as "shared fate," and is reportedly willing to spend $23 billion on Wiz to double down on that strategy.

  • But it took legislation and a massive PR campaign to get car companies to provide seat belts, and even more effort to get people to use them.
  • The path to a more secure cloud will likely be just as difficult.

Mini me

One clear trend emerging from the maturing generative AI boom is that while the massive models might be powerful, they are extremely expensive to run. OpenAI released a new, smaller version of its GPT-4o model Thursday that it said would be "an order of magnitude more affordable than previous frontier models."

GPT-4o mini will also replace GPT-3.5 Turbo as OpenAI's cheapest option, although it doesn't yet support video and audio use cases. “For every corner of the world to be empowered by AI, we need to make the models much more affordable,” OpenAI’s Olivier Godement told TechCrunch.

Enterprise customers won't be able to get their hands on the new model until next week, but individual customers can start playing with it today. Unsurprisingly, OpenAI said the new model compared favorably to other small models currently available based on benchmark results, as we await the reckoning over whether AI benchmarks are as problematic as processor benchmarks were decades ago.


Enterprise moves

Portland's own Lisa Spelman is the new CEO of Cornelis Networks, the networking chip startup spun off from Intel, after 18 years in the chip maker's data-center group.

Kim Baslie is the new CIO of Kyndryl, after previously serving as vice president of IT transformation and strategy at the IT consulting company spun off from IBM in 2021.

Anirban Sengupta is the new CTO at Aviatrix, joining the cloud networking company after running Google Cloud's Anthos and GKE Enterprise products.

Dan Priest is the new chief AI officer at PwC, after serving in various technology leadership roles at the consulting company over the last 11 years.


The Runtime roundup

GitLab is courting buyers, according to Reuters, and seeing interest from Datadog as it tries to ride GitHub's coattails.

Reuters also reported that Smartsheet is fielding takeover offers from private equity firms as the SaaS consolidation spree continues.

Microsoft convinced France's OVHCloud to drop an antitrust complaint, tying off another loose end in its battle with European regulators.

OpenAI is thinking about making its own AI server chip, according to The Information, hiring designers from Google Cloud's TPU team and consulting with Broadcom about working together.


Thanks for reading — see you Tuesday!
