Vibe Coding | Data Breach | AppSec | AI Development

From Vibe Coded to Hacked: The Tea App Breach and the Hidden Cost of AI-Generated Code

MindFort Security Research Team | July 30, 2025 | 8 min read

The premise of Tea was straightforward and, in its way, admirable: a platform where women could anonymously share their dating experiences, warn each other about problematic individuals, and vet potential partners before agreeing to meet them. In the complicated landscape of modern dating, Tea positioned itself as a tool for safety.

In July 2025, that positioning collapsed spectacularly. Over the course of just a few days, hackers exposed 72,000 user images—including 13,000 selfies and government-issued photo IDs that users had submitted for account verification—followed by more than 1.1 million private messages. Those messages contained exactly the kind of deeply personal content you might expect from a platform built on trust: discussions about reproductive health, accounts of relationship trauma and infidelity, and safety warnings about specific individuals identified by name.

An app built to protect women had become the instrument of their exposure. And the technical failures that enabled these breaches point to a troubling pattern in modern software development, one that security researchers have been watching with growing concern.

The Rise of Vibe Coding

"Vibe coding" is a term that's emerged to describe a new approach to software development, one that relies heavily on AI tools like Cursor, GitHub Copilot, or dedicated platforms like Lovable and Base44. Instead of writing code line by line, developers describe what they want in natural language and let AI generate the implementation.

The appeal is easy to understand. Vibe coding dramatically lowers the barrier to entry for software development and compresses timelines that once stretched into weeks or months down to hours or days. Entire applications can be prototyped, iterated, and launched at speeds that would have seemed impossible just a few years ago.

The problem, of course, is that speed comes with tradeoffs. AI-generated code frequently contains security vulnerabilities that the developers using these tools don't fully understand and often can't identify. The code works—it does what it's supposed to do—but it may be riddled with misconfigurations, exposed endpoints, and authentication flaws that a more experienced developer would catch.
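To make that concrete, here is a minimal, entirely hypothetical sketch of the pattern (the framework is Flask and every name is invented; none of this is from Tea's codebase): an endpoint that does exactly what it was asked to do, with no authentication in sight.

```python
# Hypothetical sketch of a "working but wide open" endpoint.
from flask import Flask, abort, jsonify

app = Flask(__name__)

# Toy in-memory table standing in for a real user database.
USERS = {1: {"name": "alice", "email": "alice@example.com"}}

@app.route("/api/users/<int:user_id>")
def get_user(user_id):
    user = USERS.get(user_id)
    if user is None:
        abort(404)
    # The endpoint "works": it fetches the record and returns it. What's
    # missing is any authentication check at all, so anyone on the internet
    # can walk user_id values and harvest every profile. A session or token
    # check belongs here, before a single byte of user data goes out.
    return jsonify(user)
```

Nothing about that code fails a functional test. That is precisely the trap: the flaw is in what the code doesn't do.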

A Breach in Two Parts

What's striking about the Tea breach is how unsophisticated it was. There was no zero-day exploit, no advanced persistent threat, no evidence of nation-state involvement. The attackers simply found doors that had been left unlocked.

The first breach targeted a legacy storage system that should have been secured or decommissioned long ago. This forgotten infrastructure still held 13,000 selfies and government IDs, plus another 59,000 images from posts, comments, and direct messages. Every user who had signed up before February 2024 was affected.
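The exact shape of Tea's legacy store hasn't been detailed publicly, but if it was a publicly listable cloud storage bucket, a common culprit in leaks like this one, exploitation needs no credentials whatsoever. A sketch using Google Cloud Storage's Python client (the bucket name is invented):

```python
# Hypothetical sketch: enumerating a publicly listable storage bucket.
from google.cloud import storage

# An anonymous client: no API key, no login, no authentication of any kind.
client = storage.Client.create_anonymous_client()

# If the bucket's access controls allow public listing, this walks every
# object it holds: selfies, ID photos, message attachments, all of it.
for blob in client.list_blobs("example-legacy-uploads"):
    print(blob.name, blob.size)
```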

Days later, security researchers discovered that the platform's private messaging system was similarly exposed. More than 1.1 million messages were accessible to unauthorized parties—conversations that users had believed were confidential, shared only with other members of a trusted community. In response, Tea suspended its direct messaging feature entirely and confirmed that the FBI had opened an investigation.
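This second exposure lives one layer up from the first: an authorization failure rather than an authentication one. Users are logged in, but nothing verifies that they belong to the conversation they're requesting. A hypothetical sketch (again with invented names, not Tea's actual API) showing the single check such endpoints ship without:

```python
# Hypothetical sketch of the authorization check a private-messaging
# endpoint needs, and that hastily generated APIs routinely omit.
from flask import Flask, abort, g, jsonify

app = Flask(__name__)

CONVERSATIONS = {
    42: {"members": {"alice", "bob"}, "messages": ["hi", "are you free?"]},
}

def current_user():
    # Stand-in for real session handling; assume it returns the login name.
    return getattr(g, "user", "mallory")

@app.route("/api/conversations/<int:conv_id>/messages")
def get_messages(conv_id):
    conv = CONVERSATIONS.get(conv_id)
    if conv is None:
        abort(404)
    # The crucial line: without it, any logged-in user can iterate conv_id
    # values and read every "private" conversation on the platform.
    if current_user() not in conv["members"]:
        abort(403)
    return jsonify(conv["messages"])
```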

A Pattern Emerges

Tea's failure wasn't isolated. In the same month, Lovable—one of the more popular vibe coding platforms—was found to have misconfigured database access controls that exposed sensitive user data. Base44, a similar platform owned by Wix, had an access bypass flaw that allowed unauthorized users to reach private applications.

The pattern is consistent enough to constitute a trend: applications built rapidly with AI-generated code are shipping with fundamental security flaws. The reasons aren't mysterious. AI code generators produce output that works functionally but lacks awareness of the specific security requirements of a given deployment environment. Developers who rely on these tools often don't have the expertise to audit the code they're shipping. And the entire value proposition of vibe coding—the speed, the rapid iteration—militates against the kind of careful security review that might catch these issues before launch.

There's another factor at play, too. AI makes it easy to pivot and rebuild, which means old systems get abandoned rather than properly decommissioned. That legacy storage system that exposed Tea's user images? It was probably an artifact of an earlier iteration of the product, forgotten when the team moved on to a new architecture but never actually secured or taken offline.

Matching the Pace of Development

The fundamental challenge here is one of tempo. When an application can be built in hours, testing it with methods that take weeks—if they work at all—creates an enormous gap. Traditional security scanning was designed for human-written code with predictable patterns. AI-generated code requires a different approach.

At MindFort, we've built autonomous AI agents that approach applications the way an attacker would. They discover forgotten assets like Tea's legacy storage system. They test authentication and authorization controls to find the access flaws that exposed those private messages. They understand business logic well enough to identify when sensitive data is flowing to unintended places. And critically, they run continuously, because vibe-coded applications change constantly and point-in-time assessments can't keep pace.

The lesson of the Tea breach is that speed without security is ultimately a liability. The faster you build, the faster you accumulate security debt—and that debt compounds quickly when you're handling the kind of sensitive personal information that users shared on Tea's platform. Those users trusted the company with their most vulnerable experiences, and that trust was violated not by sophisticated adversaries but by basic failures that continuous testing would have caught.

Start continuous security testing with MindFort →

Ready to see what's exploitable in your app?

Get Started