The gap between how fast companies deploy AI-driven products and how slowly their security infrastructure catches up is becoming one of the more consequential blind spots in enterprise tech.
Twelve months ago, the conversation about AI in enterprise was still largely theoretical: pilot programs, proof-of-concept deployments, executive memos about transformation. That phase is over. What has replaced it is something messier and considerably harder to manage: a sprawl of AI-powered products, portals, APIs, and customer-facing tools being pushed live at a pace that engineering teams are struggling to track, let alone secure.
How AI Products Are Being Launched Faster Than Security Can Follow
The pattern tends to follow the same arc. A product team gets budget approved for an AI feature. They spin up a new environment, sometimes a fresh domain registered that week, sometimes a cluster of subdomains branching off the main property. The tool goes live. The team moves on to the next thing. And somewhere in the gap between deployment and documentation, the security function either wasn't consulted or arrived too late to shape the decision.
This is not a niche problem. According to HiddenLayer's 2026 AI Threat Landscape Report, organisations across sectors are embedding AI deeper into critical operations while simultaneously expanding their exposure to new attack vectors, with one in eight companies now reporting security incidents tied directly to agentic AI systems. The report does not frame this as a future risk. It frames it as a present one, already materialising in production environments.
Managing a Growing Digital Estate in AI-Driven Companies
The scale of what's being deployed makes the management question genuinely hard. A mid-sized SaaS company operating in 2026 might be running a customer portal, a developer API, a staging environment, a documentation site, a partner login, and three or four AI-powered tools that each acquired their own domain or subdomain during development. Each of those properties carries its own certificate requirement, its own DNS configuration, its own exposure surface. Treated individually, the overhead is significant. Treated inconsistently, which is what tends to happen when speed is the priority, the exposure accumulates quietly until something forces the issue.
DNS Risks and the Hidden Exposure Layer
The DNS layer is where the most overlooked risks tend to live. Teams launching AI pilots routinely spin up cloud services and integrate third-party tools, many of which are short-lived. But their DNS records often outlast the projects themselves, leaving what researchers call dangling references: pointers to decommissioned services that attackers can step into without any breach at all. CSC's 2026 Domain Security Report found that 67% of Global 2000 companies have implemented fewer than half of eight key domain security controls, a number that is striking given how long the domain security conversation has been running in enterprise circles. The implication is not that companies are indifferent to the problem. It is that the operational reality of managing a fast-moving digital estate consistently defeats good intentions.
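The dangling-reference check described above can be sketched as a simple audit: compare a zone export against the current inventory of active services and flag any CNAME whose target no longer exists. The record format and service inventory below are hypothetical placeholders; in practice they would come from a DNS provider export and an asset database.

```python
# Sketch: flag CNAME records whose targets are no longer in the active
# service inventory. Zone data and inventory here are illustrative only.

def flag_dangling(records, active_services):
    """Return CNAME records pointing at targets not in the active set."""
    return [rec for rec in records
            if rec["type"] == "CNAME" and rec["target"] not in active_services]

zone = [
    {"name": "ai-pilot.example.com", "type": "CNAME", "target": "app-1234.cloudhost.example"},
    {"name": "docs.example.com",     "type": "CNAME", "target": "docs-team.pages.example"},
]
# The AI pilot's cloud app was torn down, but its DNS record was not.
active = {"docs-team.pages.example"}

for rec in flag_dangling(zone, active):
    print(f"dangling: {rec['name']} -> {rec['target']}")
```

The point of the sketch is the workflow, not the code: the record outlives the service, and only a periodic cross-check of DNS against live infrastructure surfaces the gap.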
Certificate Management Challenges for Expanding AI Products
The certificate management dimension compounds this. When a team registers a new domain for an AI product (say, a standalone tool being tested with a subset of enterprise customers), that domain needs its own certificate. If a second product follows, and then a third, the overhead of managing individual certificates across a growing portfolio of domains starts to create gaps. Teams with mature infrastructure typically consolidate this under a multi-domain SSL certificate, which lets them cover multiple distinct domains under a single issuance rather than maintaining a separate certificate lifecycle for each property. In practice, most fast-moving teams don't think about this until they're already managing five domains and discovering that renewal dates are scattered across the calendar.
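The scattered-renewal-dates problem is easy to illustrate. A minimal sketch, assuming a hypothetical inventory of domains and expiry dates (in practice these would be pulled from the CA's API or by probing each endpoint's certificate), shows how a simple windowed check surfaces the renewals hiding in an ad hoc portfolio:

```python
from datetime import date

# Hypothetical portfolio: each AI product acquired its own domain, each
# with its own certificate and its own renewal date.
inventory = {
    "ai-tool.example.com":  date(2026, 3, 14),
    "ai-tool2.example.com": date(2026, 7, 2),
    "portal.example.com":   date(2026, 5, 21),
}

def renewals_due(inventory, today, window_days=30):
    """Domains whose certificates expire within the next `window_days`."""
    return sorted(domain for domain, expiry in inventory.items()
                  if 0 <= (expiry - today).days <= window_days)

print(renewals_due(inventory, today=date(2026, 3, 1)))
# -> ['ai-tool.example.com']
```

Consolidating under a multi-domain certificate collapses this inventory to a single expiry date, which is the operational argument the paragraph above is making.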
Why Subdomains Create Security Complexity
The subdomain problem is structurally different but equally common. A single primary domain, a company's core product, for instance, can generate dozens of active subdomains as features ship: api.product.com for developer integrations, app.product.com for the main interface, staging.product.com for pre-release testing, docs.product.com for documentation, admin.product.com for internal tooling. Each of these is a live web property. Each needs to be secured. And each, if overlooked, is a potential entry point. The standard answer to this is a wildcard SSL certificate, which covers every subdomain under a given domain with a single certificate rather than requiring teams to issue and track individual certificates as subdomains proliferate. The value proposition is straightforward in theory. The difficulty is that security decisions of this kind rarely get made at the moment subdomains are being created; they get made retroactively, after the exposure has already accumulated.
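One detail worth keeping in mind when planning around wildcards: a wildcard matches exactly one DNS label, so *.product.com covers api.product.com but not a.staging.product.com. A minimal sketch of that matching rule (hostnames are illustrative; real validation is done by the TLS library, per the rules in RFC 6125):

```python
# Sketch of the single-label wildcard matching rule used for certificates:
# "*" stands in for exactly one DNS label, never more.

def wildcard_covers(pattern, hostname):
    """True if a certificate issued for `pattern` would cover `hostname`."""
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    if len(p_labels) != len(h_labels):
        return False  # "*" cannot absorb multiple labels
    return all(p == "*" or p == h for p, h in zip(p_labels, h_labels))

print(wildcard_covers("*.product.com", "api.product.com"))       # True
print(wildcard_covers("*.product.com", "a.staging.product.com")) # False
```

The practical consequence is that nested environments (per-feature staging subdomains, for example) need either their own wildcard one level down or explicit coverage, which is exactly the kind of decision that tends to be made retroactively.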
DNS Attacks Are Becoming More Common
The research from Indusface adds a harder edge to this. According to their recent analysis, 87% of organisations report experiencing DNS-related attacks, a figure that has risen as attackers have recognised that the domain layer (fast-changing, often inconsistently managed, and sitting below the application logic that security teams tend to focus on) is a reliable point of entry. Subdomain hijacking, where an attacker claims a subdomain pointing to a decommissioned third-party service, and domain shadowing, where malicious subdomains are created under a compromised but trusted domain, are both increasing in frequency. Neither requires a sophisticated exploit. Both benefit directly from the operational sloppiness that speed-driven AI deployment tends to produce.
Legacy Security Frameworks Are Struggling With AI Agents
Foundation Capital’s 2026 outlook put the structural issue plainly: most legacy security frameworks were not designed for software that can act autonomously. AI agents that call external APIs, spin up temporary environments, and authenticate across services create a category of web activity that existing certificate management and domain governance processes were not built to handle. The expectation among the analysts who track this closely is that at least one high-profile AI agent security incident in 2026 will force the issue into boardroom conversations that have so far treated it as a technical detail.
Domain Strategy Is Becoming a Time-Sensitive Decision
There is also a timing dimension that is rarely discussed in coverage of this topic. ICANN opened applications in April for the first new round of brand-owned top-level domains since 2012, a window that closes in August. For companies that have been building AI-driven product suites, the question of how to organise their domain estate is not abstract: it has a deadline attached to it, and decisions made now about domain structure will shape the certificate and security architecture that teams have to manage for years afterward.
Security Must Move From Retrofit to Foundation
The deeper problem is not really technical. The tools exist. The certificate products exist. The DNS management platforms exist. What doesn't reliably exist is the organisational moment at which security teams are brought into AI deployment decisions early enough to shape them. The pattern that is repeating across the industry, security as a retrofit rather than a foundation, is not new. What is new is the velocity at which AI is creating the conditions for it to matter.
Data references in this piece draw on HiddenLayer’s 2026 AI Threat Landscape Report, CSC’s 2026 Domain Security Report, Indusface threat research, and Foundation Capital’s 2026 security outlook.