Generative AI is making CISOs nervous

Why CISOs are worried about security risks from the headlong rush to adopt generative AI apps, Microsoft gives Jay Parikh a broad mandate, and the latest funding rounds in enterprise tech.

a man sits at a table holding his head in his hand looking at a laptop screen
Photo by Tim Gouw / Unsplash

Welcome to Runtime! Today: Why CISOs are worried about security risks from the headlong rush to adopt generative AI apps, Microsoft gives Jay Parikh a broad mandate, and the latest funding rounds in enterprise tech.

(Was this email forwarded to you? Sign up here to get Runtime each week.)


Once more unto the breach

While there's evidence generative AI technology can improve cybersecurity software and techniques by improving the speed and accuracy of threat detection, CISOs are still worried about the downside. The concern is that the rush to build and deploy generative AI apps has opened new vulnerabilities that existing security safeguards can't detect, which has happened time and time again at companies when speed is the software-development team's north star.

That's the conclusion of a recent survey of CISOs conducted by NTT Data featured in CSO Tuesday. Senior business leaders currently believe "'the promise and ROI of gen AI outweigh the risks' — a situation that can leave the CISO as the lone voice of risk management reason," according to the article.

  • There are several longstanding security concerns with respect to generative AI models: Companies worry that employees will share sensitive corporate data with unvetted models, and fret about "prompt injections," where outsiders could fool an outward-facing application built around an AI model into sharing that data.
  • "Shadow IT" was a huge concern a decade ago as developers started using cloud services without telling the bosses, and "shadow AI" is quickly becoming the next generation of that problem.
  • CISOs have several options to prevent their employees from sharing data, but the rapid proliferation of large-language models in both their own applications and the SaaS applications they rely upon has made it much harder to know how they are exposed.
  • “The attack surface for gen AI has changed. It used to be enterprise users using foundation models provided by the biggest providers. Today, hundreds of SaaS applications have embedded LLMs that are in use across the enterprise,” Jim Routh, chief trust officer at Saviynt, told CSO.

One of the most common ways to find security holes is to create a "red team," a friendly group tasked with probing those systems for weaknesses in hopes of fixing those issues before the bad folks find them. On Tuesday Microsoft released some tips for companies actively investigating their own AI applications based on its own experience protecting its software and services.

  • Security teams need to think about new concepts like prompt injections when evaluating their AI apps, but "many discussions around AI security overlook existing vulnerabilities," it said in a blog post.
  • Humans need to be a central part of the red-teaming process, according to Microsoft, dashing hopes that LLMs could automate those steps: "as AI models are deployed around the world, it is crucial to design red teaming probes that not only account for linguistic differences but also redefine harms in different political and cultural contexts.
  • And "AI red teaming is a continuous process that should adapt to the rapidly evolving risk landscape and aim to raise the cost of successfully attacking a system as much as possible," Microsoft said.

But the rise of agentic AI could introduce further risks, as applications are empowered to make decisions and execute tasks based on their conclusions. Once again, security teams are under a lot of pressure to deploy agentic AI applications from business leaders worried about getting left behind by competitors that moved faster.

  • At some point some company is going to be the victim of the first major breach directly related to flaws in their generative AI or agentic AI applications.
  • Even if you fire the CISO after that incident occurs, the financial and reputational repercussions of any such breach will be felt across the company.
  • Business leaders tend to dislike subordinates who say "no," or "slow down," but right now a lot of software is getting thrown out into the world without a full understanding of the risks involved.
  • And as readers who have been through a few technology cycles might think, "same as it ever was."

Find your Core

Microsoft has always been a developer-centric company, to the point where even casual students of tech history have probably seen this video before. But thanks in part to Microsoft, generative AI is changing some fundamental aspects of how software developers do their jobs, and the tools and frameworks they need will have to change too.

Former Facebook data center and engineering executive Jay Parikh will lead the next generation of Microsoft's developer tools business, CEO Satya Nadella announced Monday. Parikh, who joined the company in an unspecified role last year, is now executive vice president in charge of the newly created CoreAI — Platform and Tools group.

That group will include Microsoft's "[Developer Division], AI Platform, and some key teams from the Office of the CTO (AI Supercomputer, AI Agentic Runtimes, and Engineering Thrive), with the mission to build the end-to-end Copilot & AI stack for both our first-party and third-party customers to build and run AI apps and agents," Nadella wrote in a company memo. New technologies like generative AI always tend to produce a few new companies that inject fresh thinking into the market, and Parikh's new job will be to make sure Microsoft keeps developers in its fold.


Enterprise funding

DDN raised $300 million in new funding from Blackstone to expand its AI storage business for both on-premises and cloud infrastructure customers.

Anysphere landed $105 million in new funding as it continues to build out Cursor, one of several AI-powered coding editors drawing the interest of developers.

Overhaul scored $55 million in new funding for its supply-chain management software, which helps companies identify and minimize the theft and spoilage risks that come along with shipping cargo.

Orchid Security raised $36 million in seed funding to build out identity-management security software using LLMs.

Thoras landed $5 million in seed funding for its take on observability software, which uses AI to identify ways to cut cloud costs.


The Runtime roundup

President Biden signed an executive order making federal land available to developers that want to build data centers, which could dramatically increase the number of viable sites and speed construction.

Arm could raise the price of licensing its chip core designs by up to 300% over the next several years, according to Reuters, which could have profound effects on the burgeoning Arm server chip market.


Thanks for reading — see you Thursday!

Great! You’ve successfully signed up.

Welcome back! You've successfully signed in.

You've successfully subscribed to Runtime.

Success! Check your email for magic link to sign-in.

Success! Your billing info has been updated.

Your billing was not updated.