When Everyone Becomes a Creator - The Opportunities and Risks of AI-Builders
By Rick Doten, Veteran CISO and AI Researcher, and Shahar Bahat, CEO of Pluto Security
The Shift No One Prepared For
AI has quickly and quietly changed who can build.
With tools like Lovable, V0, n8n, Cursor, and Claude Code, anyone in the organization (from developers to marketing and sales) can now create full applications, workflows, and automations in minutes. The separation between the development team and the rest of the company is dissolving fast.
According to Lovable, 30% of Fortune 1000 companies are using vibe-coding platforms.
This movement, “vibe coding,” is transforming how organizations innovate. But it’s also introducing a massive, largely invisible security problem - one that most companies don’t yet know they have.
Three Flavors of AI-Assisted Development
To understand the risk, it helps to look at the spectrum of how AI is used to build software:
AI in the IDE (Cursor + GPT-5, Claude Code): Professional developers use AI copilots directly in their integrated development environments (IDEs) to refactor, troubleshoot, and write code. These tools accelerate productivity but still work within controlled repositories and software development lifecycle (SDLC) review processes that include security checks.
Embedded AI development (Claude Code, GitHub Copilot): Here, AI assists inline, generating code snippets, logic, or bug fixes while the developer maintains human-in-the-loop oversight. The process follows the SDLC, and the code is still visible to traditional application security (AppSec) tooling - though not always fully explainable or auditable.
No-code, prompt-based AI builders (Lovable, Bolt, V0): Business users describe what they want - “Build me an internal dashboard connected to Salesforce” - and the system automatically scaffolds, configures, and deploys a working app. There’s no SDLC, no code review, and often the IT and security teams have no access to, or even knowledge of, these apps.
Across this spectrum, the tradeoff is clear: the more abstract and accessible development becomes, the less transparency and control over the platform organizations retain - and the more unknown, unmeasured risk they take on.
The Blind Spot: When Innovation Outruns Security
Fundamental security models start with asset management: we can’t secure what we don’t see. Servers, data repositories, network pipelines, and the application programming interfaces (APIs) that let applications talk to each other are all cataloged and kept under IT control. AI code builders changed that by creating applications outside the organization’s inventory and opaque to its tooling.
Today, a marketing manager can spin up an app in Lovable, connect it to a production database on Supabase, expose it through a public endpoint, and share it with customers - all without a single provisioning ticket to IT or security.
While this can speed up development and democratize application creation to accelerate innovation, it also brings with it an entire shadow layer of infrastructure, attack surface, identity access, and data exposure.
Let’s break down what’s really happening behind the scenes:
Third-party infrastructure: Tools like Lovable or V0 automatically deploy apps on external clouds or shared multi-tenant environments, often storing organizational data outside known asset inventories, and outside the scope of IT controls.
Unmanaged access and credentials: Users authenticate with personal accounts or share API keys, bypassing internal identity services and audit trails. Service-to-service communication often uses long-lived, hard-coded secrets instead of managed identities (a minimal sketch of this anti-pattern follows this list).
Unmonitored integrations and sensitive data exposure: Citizen-built apps frequently connect to core systems - Salesforce, Slack, internal APIs - without any request tracking, permission boundaries, or logging. Even when data such as personally identifiable information (PII) is collected only within the system, it can still introduce significant privacy and compliance risks if not properly governed.
Internet exposure: These apps are instantly live, discoverable, and sometimes indexed, creating open attack surfaces without security testing.
No version control or security scanning: There are no quality assurance (QA) checks, and generated or configured software logic never passes through SDLC security gates such as static application security testing (SAST), software composition analysis (SCA) and its automated variants (ASCA), or infrastructure-as-code (IaC) checks, leaving vulnerabilities and misconfigurations undetected in production applications.
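To make the credentials item above concrete, here is a minimal sketch of the hard-coded-secret anti-pattern these platforms tend to generate, next to the managed alternative a reviewed SDLC would require. The project URL, table name, and environment variable names are illustrative assumptions, not taken from any specific platform’s output:

```typescript
// Anti-pattern: what AI builders often scaffold - a long-lived service key
// hard-coded in source, shipped to every client that loads the app.
const SUPABASE_URL = "https://example-project.supabase.co"; // illustrative project URL
const SERVICE_KEY = "service-role-key-redacted";            // long-lived secret in source

async function fetchCustomersUnsafe(): Promise<unknown> {
  // Anyone who extracts SERVICE_KEY from the bundle gets the same unrestricted access.
  const res = await fetch(`${SUPABASE_URL}/rest/v1/customers`, {
    headers: { apikey: SERVICE_KEY, Authorization: `Bearer ${SERVICE_KEY}` },
  });
  return res.json();
}

// Safer pattern: the secret is injected into the server-side runtime by a
// secrets manager or the deployment platform - never committed, never bundled.
async function fetchCustomersSafer(): Promise<unknown> {
  const url = process.env.SUPABASE_URL;
  const key = process.env.SUPABASE_SERVICE_KEY; // injected at deploy time, rotatable
  if (!url || !key) throw new Error("Supabase credentials are not configured");
  const res = await fetch(`${url}/rest/v1/customers`, {
    headers: { apikey: key, Authorization: `Bearer ${key}` },
  });
  return res.json();
}
```

Even the safer version isn’t a full fix - the key should still be short-lived and narrowly scoped - but at least the secret is rotatable and never ships in source or a client bundle.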
This combination makes AI-built apps the perfect storm of innovation and risk - fast-moving, hard to detect, and integrated deep into business operations. And because traditional tools were never designed to see into these environments, security teams are often unaware these apps exist at all - they are flying completely blind.
Security Teams Don’t Want to Be the Ones Blocking Innovation
When security leaders can’t see what’s being built, their default response is to block.
It’s not because they want to slow the business down - it’s because they can’t quantify the risk and protect what they can’t observe. We’ve seen this tension before: cloud adoption, open source, and even DevOps all went through similar phases. But this time, the speed and democratization of AI-driven development make the challenge exponentially harder.
Blocking AI tools outright doesn’t work. Developers and business users will just find workarounds. The real goal is to enable safe innovation - giving teams freedom to build while maintaining basic visibility, control, and data protection.
AI builders are a positive shift. They unlock creativity, speed, and empowerment across the business. But right now, that innovation runs without shared visibility or security standards. There’s no way to measure or enforce basic hygiene - what data is used, who has access, or how apps connect. You can put policies in place (asset registries, deployment gates, data restrictions, allowlists) and they help, but only up to a point. Citizen developers don’t intend to bring risk to the organization, but they don’t know the SDLC, don’t know the application security risks and standards, and don’t know what to ask.
Enabling Innovation, Safely
The goal isn’t to slow this movement down - it’s to educate, gain visibility, and enable it safely. Organizations need a way to apply security metrics and enforcement to this new layer of creation, just like we already do for code, cloud, and data.
At a high level, those guardrails should act as invisible scaffolding around innovation: automatically identifying new apps and workflows as they’re created, classifying what data and systems they touch, verifying access and permissions, and enforcing lightweight policies in real time. It’s about making security convenient, continuous, and contextual - built into the creative process, not bolted on afterward.
That shift is already starting. Security and innovation no longer have to compete - they can finally move at the same speed. Even simple policies like “you can create internal tools or workflows in Lovable or Bolt, but they can only access non-sensitive, low-tier systems until the app goes through the security SDLC and approval” go a long way in balancing empowerment and safety.
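As a hedged illustration of how such a policy could be encoded, here is a minimal sketch of a tiering check. The manifest shape, tier labels, and system names are assumptions made for the example, not a reference to any particular product:

```typescript
// Hypothetical manifest describing a citizen-built app and what it touches.
interface AppManifest {
  name: string;
  builder: string;                       // who created it
  platform: "lovable" | "bolt" | "v0" | "other";
  connectedSystems: string[];            // e.g. ["salesforce", "slack"]
  approvedBySecurity: boolean;           // has it passed the security SDLC?
}

// Illustrative data tiers: only "low" systems are reachable pre-approval.
const SYSTEM_TIERS: Record<string, "low" | "medium" | "high"> = {
  "internal-wiki": "low",
  "slack": "medium",
  "salesforce": "high",
  "prod-database": "high",
};

// Returns the connections the policy would flag; empty means compliant.
function evaluatePolicy(app: AppManifest): string[] {
  if (app.approvedBySecurity) return []; // approved apps pass the gate
  return app.connectedSystems.filter(
    (sys) => (SYSTEM_TIERS[sys] ?? "high") !== "low" // unknown systems default to high
  );
}

const violations = evaluatePolicy({
  name: "campaign-dashboard",
  builder: "marketing@example.com",
  platform: "lovable",
  connectedSystems: ["internal-wiki", "salesforce"],
  approvedBySecurity: false,
});
console.log(violations); // ["salesforce"] - flag for review, don't silently block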
There is a common saying in security that Formula 1 cars can go so fast because they have good brakes. But just having a policy for vibe coding doesn’t make it magically happen. We need a way to automatically identify new creations and route vibe-coding projects into the right processes, while letting everyone build frictionlessly; behind the scenes, IT and security can review the app or workflow and its supporting infrastructure and remediate risks. A sketch of that intake flow follows.
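One hypothetical shape for that automatic routing is a small intake flow that records every newly detected creation in an inventory and opens a review task without interrupting the builder. The event fields and the ticketing stub are assumptions for illustration:

```typescript
// Hypothetical event emitted when a new app or workflow is detected
// (via platform audit logs, SSO events, or network discovery).
interface CreationEvent {
  appName: string;
  builder: string;
  platform: string;
  detectedAt: Date;
}

// In-memory inventory standing in for a real asset-management system.
const inventory: CreationEvent[] = [];

// Stub for the organization's ticketing integration (e.g. Jira or ServiceNow).
async function openReviewTask(event: CreationEvent): Promise<void> {
  console.log(`Review task opened for ${event.appName} (built by ${event.builder})`);
}

// The core flow: record it, route it for review, never block the builder.
async function onNewCreation(event: CreationEvent): Promise<void> {
  inventory.push(event);       // visibility first: the app now exists on record
  await openReviewTask(event); // security reviews asynchronously, in the background
}

onNewCreation({
  appName: "sales-quote-generator",
  builder: "sales@example.com",
  platform: "bolt",
  detectedAt: new Date(),
});
```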
This new security flow requires the use of current and new platforms. We will discuss and illustrate the details of how this would be applied in the next paper.
Security for the New Era of Digital Creation
AI builders have introduced a new reality inside organizations - one where applications, workflows, and integrations are created dynamically by anyone, often outside IT and security oversight. Each of these creations lives on its own infrastructure, connected to corporate systems, storing data, and operating with real permissions.
This isn’t a problem of code quality or vulnerability management - it’s a problem of visibility, trust, and governance across a completely new layer of creation. Security teams now need to understand who is building what, where it runs, what data it touches, and how it connects back into the organization.
But visibility alone isn’t enough. To effectively secure this new layer, security teams must also establish new communication channels with these emerging builders - channels built on trust, guidance, and collaboration rather than control. Only by doing so can they align innovation with protection and ensure that security becomes an enabler, not an obstacle.
The goal isn’t to control or block innovation - it’s to give organizations a way to observe, guide, and protect this new form of digital development without slowing it down. Those who adopt this security posture first won’t just prevent risk - they’ll be the ones who can safely move at the speed of AI.