Vibe Check: AI May Replace SaaS (But Not for a While)
February 2026 saw a billion-dollar wobble in the value of US tech companies over investors’ fears that AI will destroy the Software-as-a-Service (SaaS) business model. This quickly became known as the SaaSpocalypse. Recent events have indicated that the cost/effort curve for ‘bespoke enough’ software is shifting.

Using AI to write code, often without human review, has become known as vibe coding. Experienced developers are discovering that vibe coding can massively increase their productivity, allowing them to write entire stacks in an afternoon rather than weeks. Currently, the code produced is far from perfect and there’s a lot for the developer to improve, but the promise is there. One startup, for example, received a SaaS renewal quote that was twice the current price; instead of renewing, one of their engineering leads ‘vibe coded’ a replacement with the core functionality they needed within a couple of hours.
Whilst SaaS was generally an improvement on the state of on-prem software, security experts have expressed concerns, including:
- Trust in the provider: You must rely on the provider’s security practices, often with limited visibility into how well they are implemented.
- Data compromise: A breach at the provider can expose every client’s data at once, making SaaS platforms high-value targets.
- Sovereignty issues: Your data may be stored or processed in jurisdictions with different legal and regulatory regimes than your own.
Despite all the benefits over the classic on-prem approach, SaaS isn’t perfect. It can end up being expensive, as all those subscriptions add up, particularly when you factor in the additional features you might need beyond the basic functionality. Increasingly, security has become a means for vendors to create a price differential, with expensive ‘enterprise’ tiers frequently required for organizations wishing to implement even relatively basic security features such as logging, monitoring, single sign-on, and multi-factor authentication.
There are many issues with vibe coding today, including poor code quality, security vulnerabilities, ‘slopsquatting’ (where attackers publish malicious packages under names AI models are likely to hallucinate), and other new types of attack. Over the next five years, it will become increasingly common to see AI-written code in production systems that a human has never reviewed or even looked at.
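One concrete mitigation for slopsquatting is to refuse any dependency an AI has introduced that a human hasn’t explicitly reviewed. Below is a minimal sketch in Python: the `APPROVED` allowlist and the sample package names are assumptions for illustration, not a real project’s dependencies.

```python
# Hypothetical audit step for AI-generated requirements files.
# 'Slopsquatting' relies on hallucinated package names, so any name
# outside a human-curated allowlist should block the build for review.

APPROVED = {"requests", "flask", "sqlalchemy"}  # assumption: maintained by the team


def audit_requirements(lines):
    """Return requirement names that are not on the reviewed allowlist."""
    suspect = []
    for line in lines:
        line = line.split("#", 1)[0].strip()  # drop comments and blank lines
        if not line:
            continue
        # Reduce the requirement to a bare package name:
        # strip extras ("pkg[extra]") and version pins ("pkg==1.0").
        name = line.split("[")[0]
        for op in ("==", ">=", "<=", "~=", ">", "<"):
            name = name.split(op)[0]
        name = name.strip().lower()
        if name not in APPROVED:
            suspect.append(name)
    return suspect
```

Wired into CI, a non-empty result would fail the pipeline, forcing a human to vet the new dependency before it can be installed.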
Some security professionals may be horrified by all this, but there is an opportunity here for us to shape the future. A challenge the security community will face is that no one yet knows exactly what we need to introduce to ensure the ‘vibe coded future’ is a safer one. There is a call to action here for the security community on research, and broad opportunities for new companies to emerge around this. If we face this challenge head-on from the start, we have a chance to introduce some strong security fundamentals. More worryingly, if security professionals don’t lean in from the start, the landscape will evolve without this crucial input, as was arguably the case in the early years of cloud adoption.
Some safeguards are obvious:

- We need models that write code that is secure by default.
- We need confidence in model provenance: the ability to trust and verify that a model has not been developed to maliciously introduce issues into the code it produces.
- We need to work out how to use AI to review code, both existing human-written code and the code AI will write.

More nuanced considerations include:

- How do we use a deterministic architecture to limit what code can do, even if it is malicious, compromised, or unsafe?
- What platforms for hosting AI-generated services can we design to implement the controls above and protect the organization and its data, even if the running code is of poor quality?
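The idea of deterministically limiting what code can do, regardless of its quality, can be sketched with ordinary OS-level controls. The example below is one illustrative layer of such a sandbox, not a complete design: it runs untrusted Python in a child process with CPU and memory caps, an empty environment, and a wall-clock timeout (POSIX only; the limit values are arbitrary assumptions).

```python
import resource
import subprocess
import sys


def run_untrusted(code: str, timeout_s: int = 2,
                  mem_bytes: int = 256 * 1024 * 1024) -> subprocess.CompletedProcess:
    """Run AI-generated code in a separate process with hard resource limits.

    This is one layer of defense in depth: even low-quality or malicious
    code cannot exhaust the host's CPU or memory, and sees no secrets.
    """
    def limit_resources():
        # Cap CPU seconds and address space inside the child (POSIX only).
        resource.setrlimit(resource.RLIMIT_CPU, (timeout_s, timeout_s))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))

    return subprocess.run(
        [sys.executable, "-I", "-c", code],  # -I: isolated mode, no site packages
        preexec_fn=limit_resources,
        capture_output=True,
        text=True,
        timeout=timeout_s + 1,               # wall-clock backstop
        env={},                              # empty environment: no secrets leak in
    )
```

A production platform would add more layers (filesystem and network isolation, syscall filtering, container or VM boundaries), but the principle is the same: the architecture, not the generated code, decides what is possible.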