Google's AI scores a bug bounty


Welcome to Runtime! Today: how Google used an AI agent to find a memory vulnerability in a widely used database, AWS's nuclear plans run into a setback, and the latest funding in enterprise tech.

(Was this email forwarded to you? Sign up here to get Runtime each week.)


Security agents

Security professionals have been excited and worried about the possibilities of generative AI technology ever since ChatGPT arrived on the scene in late 2022. The technology theoretically allows attackers to automatically probe for weaknesses and launch attacks, but it could also help defenders find flaws in places people didn't think to look.

That's exactly what Google's Project Zero did last month, according to a blog post published Friday. The security team said it used an AI agent to discover a buffer underflow vulnerability in SQLite, a popular open-source database that was the third most widely used database in Stack Overflow's 2024 developer survey.

  • In June Google detailed how it was using large language models to explore "how these models can reproduce the systematic approach of a human security researcher when identifying and demonstrating security vulnerabilities" in an effort called Project Naptime.
  • Now that advances in those LLMs have ushered in the agentic AI era, Project Naptime has evolved into Big Sleep, an agent based on Gemini 1.5 Pro that was developed in partnership with Google DeepMind.
  • It didn't take long for Big Sleep to find a vulnerability in SQLite that Google believes "is the first public example of an AI agent finding a previously unknown exploitable memory-safety issue in widely used real-world software."
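For context, a buffer underflow is the mirror image of the better-known overflow: code reads or writes memory just before the start of a buffer instead of past its end. Here is a contrived C sketch of that bug class (this is illustrative only, not the actual SQLite code):

```c
#include <stdio.h>
#include <string.h>

/* Contrived illustration of a buffer underflow (not the SQLite bug).
 * When the input is empty, len is 0 and buf[len - 1] touches the byte
 * just before the array, i.e. buf[-1] -- undefined behavior in C. */
static void strip_newline(char *buf) {
    int len = (int)strlen(buf);
    if (buf[len - 1] == '\n') {  /* underflow read when len == 0 */
        buf[len - 1] = '\0';     /* underflow write if that stray byte is '\n' */
    }
}

int main(void) {
    char ok[] = "hello\n";
    char empty[] = "";
    strip_newline(ok);     /* fine: removes the trailing newline */
    strip_newline(empty);  /* bug: reads (and maybe writes) before the buffer */
    printf("[%s]\n", ok);
    return 0;
}
```

Bugs like this tend to survive testing because they only trigger on edge-case inputs, such as an empty string, which is exactly the kind of gap that fuzzers and tools like Big Sleep aim to close.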

Google reported the vulnerability to SQLite's maintainers in October, and they fixed it immediately. The SQLite project has extensive testing procedures in place, but in this case a common testing technique failed to discover a vulnerability that could have had serious ramifications for SQLite users.

  • Fuzzing is used by software testers to find flaws by flooding software with corrupt or invalid data that falls outside the bounds of its usual input, which can trigger crashes or memory-safety errors (see the harness sketch after this list).
  • For example, CrowdStrike acknowledged that it wasn't using fuzzing to test the configuration updates it sent to its users; that kind of testing might have prevented July's huge Windows outage, which was triggered when one of those updates contained an extra field its kernel-level sensor wasn't expecting.
  • "Fuzzing has helped significantly, but we need an approach that can help defenders to find the bugs that are difficult (or impossible) to find by fuzzing, and we're hopeful that AI can narrow this gap," Google wrote in its post.
  • Big Sleep was directed to focus on finding possible variants of previous SQLite vulnerabilities that had already been patched under the theory that "this was a previous bug; there is probably another similar one somewhere," Google said.
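For the curious, here's a minimal sketch of what a fuzz harness looks like using the libFuzzer interface. The `parse_record` function is a made-up toy parser with a planted out-of-bounds read, standing in for real library code rather than anything from SQLite or CrowdStrike:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Toy parser with a planted bug: it trusts a length byte taken from the
 * input. The destination array is bounds-checked, but the source buffer
 * is not, so a short input with a large claimed length makes memcpy read
 * past the end of the input -- the kind of flaw fuzzing plus
 * AddressSanitizer will catch. */
static void parse_record(const uint8_t *data, size_t size) {
    if (size < 1) return;
    uint8_t claimed_len = data[0];
    uint8_t field[64];
    if (claimed_len > sizeof(field)) return;  /* checks the destination... */
    memcpy(field, data + 1, claimed_len);     /* ...but not the source length */
}

/* libFuzzer entry point: the engine calls this over and over with mutated
 * inputs, watching for crashes or sanitizer reports.
 * Build with: clang -g -O1 -fsanitize=fuzzer,address fuzz_parse.c */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    parse_record(data, size);
    return 0;
}
```

The fuzzing engine generates millions of mutated inputs, and AddressSanitizer flags the memory error the moment a malformed record reaches the buggy memcpy. The catch, as Google notes, is that fuzzers struggle to reach bugs hidden behind logic they can't randomly stumble into.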

The bug discovered by Big Sleep would have been very difficult for an attacker to exploit, but given the rise in professional and nation-state hacking groups around the world and the popularity of SQLite, it wouldn't have been out of the question. Still, there's a lot of work that remains to turn a project like Big Sleep into an actual tool that enterprises can use for defense.

  • "We hope that in the future this effort will lead to a significant advantage to defenders — with the potential not only to find crashing testcases, but also to provide high-quality root-cause analysis, [which means] triaging and fixing issues could be much cheaper and more effective in the future," Google wrote.
  • Assuming they can be made easier to use, tools like Big Sleep could be essential to securing the incredible amount of software that has been written over the last two decades.
  • They could also be an answer to the open-source supply-chain security problem by giving understaffed and overworked maintainers a much faster way to find and repair the types of vulnerabilities that can have a cascading effect on critical-infrastructure software.

Pennsylvania power

When AWS announced plans earlier this year to acquire a data-center site next to the Susquehanna nuclear power plant in Salem Township, Pa., it hoped to plug directly into that plant to power a new data-center complex. But federal energy regulators nixed that plan Friday over objections from other utilities that tap into the plant.

The Federal Energy Regulatory Commission voted 2-1 Friday to block the arrangement, which would have allowed AWS to avoid paying for broader upgrades to the local electrical grid, in what regulators called a "first of its kind" deal that could set a precedent. Nevertheless, Bloomberg reported Monday that AWS plans to keep building a data center on the site, with access to 300 megawatts of power rather than the 960 megawatts it sought under the original deal.

As hyperscalers race to build new data centers to accommodate the unique needs of AI workloads, they're running into roadblocks left and right. Both AWS and Microsoft acknowledged during their earnings conference calls last week that they're having trouble bringing capacity online as quickly as they had hoped amid concerns about electrical supply and local opposition to the enormous buildings.


Enterprise funding

Bugcrowd raised $50 million in "a growth capital facility" from Silicon Valley Bank, allowing the company to add AI to its bug bounty platform and pursue "strategic M&A" deals.

Noma exited stealth mode with $32 million in seed and Series A funding to build an AI security platform that can protect customers against supply-chain risks.

Dash0 landed $9.5 million in seed funding for its observability tool built around OpenTelemetry, an open-source project that has upended the observability market over the last year.

Symbiotic Security raised $3.5 million in seed funding as it perfects a copilot-of-sorts for software developers that alerts them if they are writing insecure code.


The Runtime roundup

Supermicro's stock fell nearly 17% in after-hours trading after the server company said Tuesday that it didn't know when it would file its annual report and issued weaker guidance for the upcoming quarter, a week after its auditors walked away from the company over accounting concerns.

Okta embraced the good old-fashioned Friday news dump in revealing that for months, under certain conditions, anyone could have logged into an Okta account with a username of 52 characters or more by typing literally anything into the password field.

Canadian law enforcement arrested the person believed to be behind the hacks of dozens of Snowflake customers earlier this year, and he will likely face charges in the U.S.

Google Cloud will require all customers to use multifactor authentication starting next year, which could prevent the type of attack used to breach Snowflake customers without that security protection.


Thanks for reading — take a deep breath, everybody — see you Thursday!
