Kubernetes Gateway API vs Ingress: Why You Should Migrate

The Gateway API is the official successor to Kubernetes Ingress. Compare routing features, controller support from NGINX to Istio, and follow a practical migration guide from Ingress to HTTPRoute.

Abhishek Patel · 11 min read

Ingress Had a Good Run -- But It's Over

The Kubernetes Ingress API shipped in 2015 as a minimal abstraction for HTTP routing. It worked for basic use cases, but the community quickly outgrew it. No TCP/UDP support. No gRPC routing. No standard way to split traffic or attach policies. Every controller invented its own annotation dialect, turning Ingress manifests into vendor-locked configuration blobs that broke the moment you switched controllers.

The Gateway API is the official successor to Ingress, designed from the ground up to fix these problems. Its core resources -- GatewayClass, Gateway, and HTTPRoute -- reached GA with the v1.0 release in late 2023 (GRPCRoute followed in v1.1), and it has rapidly become the recommended path forward. If you're still writing Ingress resources in 2026, you're accumulating migration debt with every manifest you create.

What Is the Kubernetes Gateway API?

Definition: The Kubernetes Gateway API is a collection of API resources -- GatewayClass, Gateway, HTTPRoute, TCPRoute, GRPCRoute, and others -- that model infrastructure and routing for service networking. It replaces the Ingress resource with a role-oriented, typed, and extensible design. Unlike Ingress, routing behavior is defined in the spec itself rather than in vendor-specific annotations, making configurations portable across controllers.

Why Ingress Falls Short

Ingress was intentionally minimal. The designers expected controllers to extend it through annotations. That decision created a mess:

  • No protocol diversity -- Ingress only handles HTTP and HTTPS. Need TCP routing for a database? UDP for DNS or gaming servers? You're on your own with CRDs or NodePort hacks.
  • Annotation sprawl -- Rate limiting, CORS, timeouts, rewrites, authentication -- all shoved into annotations with no validation, no documentation in the schema, and no portability between controllers.
  • No role separation -- A single Ingress resource mixes infrastructure concerns (which load balancer, which IP) with application concerns (which path goes where). Platform teams and app developers edit the same resource.
  • No traffic management -- Canary deployments, traffic splitting, request mirroring, and header-based routing all require controller-specific CRDs or annotations.
  • Limited TLS options -- TLS passthrough requires annotations. Client certificate validation is inconsistent. There's no standard for mTLS configuration.
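To make the annotation-dialect problem concrete, here is roughly how the same rate-limiting intent looks on two different controllers. The field names come from the ingress-nginx and Traefik docs, but the resource names and values are illustrative -- the point is that neither form carries over to the other controller:

```yaml
# ingress-nginx: rate limiting lives in an opaque annotation string
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rate-limited-app
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "20"
---
# Traefik: the same intent requires a separate Middleware CRD
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: rate-limit
spec:
  rateLimit:
    average: 20
```

Switch controllers and both configurations become dead weight -- nothing validates them, and nothing warns you they're being ignored.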

Gateway API Architecture: Three Resources, Three Roles

The Gateway API's biggest design win is separating concerns into distinct resources owned by different personas:

| Resource | Owner | Responsibility |
|---|---|---|
| GatewayClass | Infrastructure provider | Defines the controller implementation (like StorageClass for storage) |
| Gateway | Cluster operator / platform team | Configures listeners, ports, TLS settings, and allowed routes |
| HTTPRoute / TCPRoute / GRPCRoute | Application developer | Defines routing rules, backends, filters, and traffic policies |

This separation means a platform team can provision a Gateway with TLS termination, IP addresses, and security policies -- and app developers can attach routes to it without touching infrastructure config. In large organizations, this maps cleanly to existing RBAC boundaries.

# 1. Infrastructure provider installs the GatewayClass
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: nginx
spec:
  controllerName: gateway.nginx.org/nginx-gateway-controller
---
# 2. Platform team creates the Gateway
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: production-gateway
  namespace: infra
spec:
  gatewayClassName: nginx
  listeners:
  - name: http
    protocol: HTTP
    port: 80
  - name: https
    protocol: HTTPS
    port: 443
    tls:
      mode: Terminate
      certificateRefs:
      - name: wildcard-tls
    allowedRoutes:
      namespaces:
        from: Selector
        selector:
          matchLabels:
            gateway-access: "true"
---
# 3. App developer attaches an HTTPRoute
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: store-api
  namespace: store
spec:
  parentRefs:
  - name: production-gateway
    namespace: infra
  hostnames:
  - "store.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /api/v2
    backendRefs:
    - name: store-api-v2
      port: 8080

Ingress vs Gateway API: Head-to-Head

| Feature | Ingress | Gateway API |
|---|---|---|
| HTTP routing | Path and host based | Path, host, header, query param, method |
| TCP/UDP routing | Not supported | TCPRoute, UDPRoute resources |
| gRPC routing | Annotations (controller-specific) | Native GRPCRoute resource |
| Traffic splitting | Annotations or CRDs | Built-in weight-based backendRefs |
| Request mirroring | Not standardized | Native RequestMirror filter |
| Header modification | Annotations | RequestHeaderModifier filter |
| TLS passthrough | Annotations | TLSRoute with mode: Passthrough |
| Role separation | None -- one resource | GatewayClass / Gateway / Route split |
| Cross-namespace routing | Not supported | ReferenceGrant resource |
| Timeouts | Annotations | Spec-level timeouts on HTTPRoute rules |
| Configuration portability | Low (annotation-dependent) | High (behavior in spec) |
| API status | Frozen (no new features) | Active development, GA core |
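The cross-namespace row deserves a concrete example. When a route in one namespace references a Service in another, the target namespace must explicitly opt in with a ReferenceGrant -- a sketch, with namespace and route names assumed for illustration:

```yaml
# Lives in the *target* namespace; grants HTTPRoutes in "store"
# permission to reference Services here
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-store-routes
  namespace: backend
spec:
  from:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    namespace: store
  to:
  - group: ""
    kind: Service
```

Without the grant, the route's status reports the reference as not permitted -- a deliberate security default that Ingress never had.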

Routing Scenarios Compared

Canary Deployment with Traffic Splitting

With Ingress, you'd need controller-specific annotations or a separate tool like Flagger. With Gateway API, it's built into the spec:

# Gateway API: 90/10 traffic split
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-canary
spec:
  parentRefs:
  - name: production-gateway
  hostnames:
  - "app.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: app-stable
      port: 8080
      weight: 90
    - name: app-canary
      port: 8080
      weight: 10
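Because the split lives in the spec, promoting the canary is just a field update -- no annotation juggling. A sketch of shifting to a 50/50 split with a JSON patch, using the resource and backend ordering from the example above:

```shell
# Bump the canary from 10% to 50% in place
kubectl patch httproute app-canary --type=json -p='[
  {"op": "replace", "path": "/spec/rules/0/backendRefs/0/weight", "value": 50},
  {"op": "replace", "path": "/spec/rules/0/backendRefs/1/weight", "value": 50}
]'
```

Tools like Flagger can automate this progression, but the underlying mechanism is now portable across controllers.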

Header-Based Routing

Route requests to a specific backend based on headers -- useful for A/B testing or internal previews:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: header-routing
spec:
  parentRefs:
  - name: production-gateway
  hostnames:
  - "app.example.com"
  rules:
  - matches:
    - headers:
      - name: X-Preview
        value: "true"
    backendRefs:
    - name: app-preview
      port: 8080
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: app-stable
      port: 8080

Controller Support in 2026

Gateway API adoption has reached critical mass. Every major proxy and service mesh supports it:

| Controller | Gateway API Support | Maturity | Notes |
|---|---|---|---|
| NGINX Gateway Fabric | HTTPRoute, GRPCRoute | GA | Official NGINX implementation, replacing ingress-nginx for Gateway API |
| Istio | HTTPRoute, TCPRoute, GRPCRoute, TLSRoute | GA | Deepest feature coverage, auto mTLS |
| Cilium | HTTPRoute, TLSRoute, GRPCRoute | GA | eBPF-based, excellent performance, no sidecar |
| Envoy Gateway | HTTPRoute, TCPRoute, UDPRoute, GRPCRoute | GA | Reference implementation, broadest route type support |
| Traefik | HTTPRoute, TCPRoute, TLSRoute | GA | Also supports its own IngressRoute CRD |
| Kong | HTTPRoute, TCPRoute, GRPCRoute | GA | API gateway features via policy attachment |

My recommendation: For new clusters, start with Envoy Gateway if you want the broadest Gateway API coverage, or NGINX Gateway Fabric if your team already knows NGINX. If you're running a service mesh, Istio and Cilium both treat Gateway API as their primary ingress path now.
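If you go the Envoy Gateway route, installation mirrors the NGINX Gateway Fabric example in the migration guide below. The chart location is from the Envoy Gateway docs, but pin the version your cluster's Gateway API CRDs support:

```shell
# Install Envoy Gateway with Helm
helm install eg oci://docker.io/envoyproxy/gateway-helm \
  --version v1.2.1 \
  --namespace envoy-gateway-system --create-namespace
```

Unlike some charts, this does not create a GatewayClass for you -- you then create one with `controllerName: gateway.envoyproxy.io/gatewayclass-controller`.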

Migration Guide: Ingress to HTTPRoute

You don't need a big-bang migration. Run both APIs side by side -- every controller listed above supports Ingress and Gateway API simultaneously. Here's a step-by-step approach:

Step 1: Install a Gateway API Controller

# Example: Install NGINX Gateway Fabric with Helm
helm install ngf oci://ghcr.io/nginx/charts/nginx-gateway-fabric \
  --create-namespace --namespace nginx-gateway \
  --set service.type=LoadBalancer
# Verify the GatewayClass exists
kubectl get gatewayclass
# NAME    CONTROLLER                                  ACCEPTED
# nginx   gateway.nginx.org/nginx-gateway-controller  True

Step 2: Create a Gateway

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: main-gateway
  namespace: infra
spec:
  gatewayClassName: nginx
  listeners:
  - name: http
    protocol: HTTP
    port: 80
  - name: https
    protocol: HTTPS
    port: 443
    tls:
      mode: Terminate
      certificateRefs:
      - name: wildcard-cert

Step 3: Convert Ingress to HTTPRoute

Here's a typical Ingress and its equivalent HTTPRoute:

# Before: Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/proxy-read-timeout: "60"
    nginx.ingress.kubernetes.io/limit-rps: "20"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - app.example.com
    secretName: app-tls
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-svc
            port:
              number: 8080
---
# After: HTTPRoute
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-app
spec:
  parentRefs:
  - name: main-gateway
    namespace: infra
  hostnames:
  - "app.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /api
    filters:
    - type: URLRewrite
      urlRewrite:
        path:
          type: ReplacePrefixMatch
          replacePrefixMatch: /
    backendRefs:
    - name: api-svc
      port: 8080
    timeouts:
      request: 60s

Notice that annotations like proxy-read-timeout become spec-level timeouts, and rewrite-target becomes a URLRewrite filter. Rate limiting moves to a policy attachment -- controller-specific but structured rather than annotation-based.

Step 4: Test and Switch DNS

  1. Apply the HTTPRoute and verify the route is accepted: kubectl get httproute my-app -o yaml -- check status.parents for Accepted: True.
  2. Test the new Gateway's external IP directly with curl --resolve.
  3. Update DNS to point to the Gateway's IP.
  4. Remove old Ingress resources once traffic has migrated.
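Step 2 of that checklist can be sketched like this, using the Gateway and hostname from the examples above (the `/api/healthz` path is a stand-in for whatever health endpoint your app exposes):

```shell
# Grab the Gateway's external address from its status
GATEWAY_IP=$(kubectl get gateway main-gateway -n infra \
  -o jsonpath='{.status.addresses[0].value}')

# Hit the new Gateway directly, without touching DNS
curl --resolve "app.example.com:443:${GATEWAY_IP}" \
  https://app.example.com/api/healthz
```

`--resolve` pins the hostname to the Gateway's IP for this one request, so TLS SNI and Host-based routing behave exactly as they will after the DNS cutover.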

Advanced: TLS Passthrough

For services that handle their own TLS (like databases or internal PKI-dependent services), use TLSRoute with passthrough mode:

apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TLSRoute
metadata:
  name: db-passthrough
spec:
  parentRefs:
  - name: production-gateway
    sectionName: tls-passthrough
  hostnames:
  - "db.internal.example.com"
  rules:
  - backendRefs:
    - name: database-service
      port: 5432

The Gateway listener must be configured with tls.mode: Passthrough on the corresponding section. The proxy forwards the raw TLS connection without decrypting it.
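For reference, the matching listener on that Gateway might look like the sketch below -- the port and hostname follow the TLSRoute above, and note that TLSRoute support varies by controller (it's still v1alpha2):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: production-gateway
spec:
  gatewayClassName: nginx
  listeners:
  - name: tls-passthrough        # referenced by the TLSRoute's sectionName
    protocol: TLS
    port: 5432
    hostname: "db.internal.example.com"
    tls:
      mode: Passthrough          # forward raw TLS; no certificateRefs needed
    allowedRoutes:
      kinds:
      - kind: TLSRoute
```

Since the proxy never decrypts the stream, routing decisions rely on the SNI hostname in the client's TLS handshake.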

Advanced: gRPC Routing

GRPCRoute lets you match on gRPC service and method names -- something Ingress could never do natively:

apiVersion: gateway.networking.k8s.io/v1
kind: GRPCRoute
metadata:
  name: grpc-routing
spec:
  parentRefs:
  - name: production-gateway
  hostnames:
  - "grpc.example.com"
  rules:
  - matches:
    - method:
        service: shop.ProductService
        method: GetProduct
    backendRefs:
    - name: product-svc
      port: 50051
  - matches:
    - method:
        service: shop.OrderService
    backendRefs:
    - name: order-svc
      port: 50051

Advanced: Request Mirroring

Mirror production traffic to a shadow service for testing -- without affecting responses to the client:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: mirror-traffic
spec:
  parentRefs:
  - name: production-gateway
  hostnames:
  - "app.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /api
    filters:
    - type: RequestMirror
      requestMirror:
        backendRef:
          name: shadow-api
          port: 8080
    backendRefs:
    - name: production-api
      port: 8080

Performance Benchmarks

Gateway API itself doesn't add overhead -- it's a configuration API, not a data plane. Performance depends on the underlying proxy. Here's what to expect with common controllers routing 1,000 RPS through basic HTTP routing rules:

| Controller | p50 Latency | p99 Latency | Memory (idle) | Memory (1K RPS) |
|---|---|---|---|---|
| NGINX Gateway Fabric | 0.8ms | 2.1ms | ~45MB | ~80MB |
| Envoy Gateway | 0.9ms | 2.4ms | ~60MB | ~110MB |
| Cilium | 0.5ms | 1.6ms | ~120MB | ~150MB |
| Istio (ambient) | 0.7ms | 2.0ms | ~80MB | ~130MB |
| Traefik | 1.0ms | 2.8ms | ~50MB | ~95MB |

Cilium's eBPF-based data plane gives it the lowest latency since it bypasses parts of the kernel networking stack. NGINX Gateway Fabric is the most memory-efficient. For most workloads, the differences are negligible -- pick your controller based on features and ecosystem fit, not benchmarks.

Frequently Asked Questions

Is the Ingress API being deprecated?

Not formally deprecated, but it is frozen. The Kubernetes project has stated that no new features will be added to the Ingress resource. It will remain in the API for backward compatibility, but all new networking features land exclusively in Gateway API. Starting new projects on Ingress means you'll miss out on traffic splitting, gRPC routing, request mirroring, and everything else that's been built since 2023.

Can I run Gateway API and Ingress side by side?

Yes, and this is the recommended migration path. Every major controller supports both APIs simultaneously. You can migrate routes one at a time, test each HTTPRoute against the Gateway's external IP, and switch DNS when you're confident. There's no reason to do a big-bang cutover.

Which Gateway API controller should I choose?

It depends on your stack. If you're already running Istio, use its built-in Gateway API support. If you want an eBPF-based solution with no sidecars, go with Cilium. For the broadest route type coverage and the closest alignment to the spec, Envoy Gateway is the reference implementation. For teams comfortable with NGINX, NGINX Gateway Fabric is the natural choice. All of them are production-ready.

Do I need to install Gateway API CRDs separately?

Usually yes. The Gateway API CRDs aren't bundled with Kubernetes itself. Most controller Helm charts offer an option to install them automatically, but you can also install them manually with kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.1/standard-install.yaml. Check your controller's docs for the supported version.

How do timeouts work in Gateway API vs Ingress?

In Ingress, timeouts are set via controller-specific annotations like nginx.ingress.kubernetes.io/proxy-read-timeout. In Gateway API, timeouts are part of the HTTPRoute spec: you set timeouts.request (total request duration) and timeouts.backendRequest (time waiting for the backend) directly on a rule. They're portable and validated by the API server.
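A minimal rule showing both fields together -- durations and names here are illustrative:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: timeout-example
spec:
  parentRefs:
  - name: production-gateway
  rules:
  - backendRefs:
    - name: slow-svc
      port: 8080
    timeouts:
      request: 30s         # cap on the entire request, end to end
      backendRequest: 10s  # cap on each attempt against the backend
```

`backendRequest` must not exceed `request`; the API server rejects the route otherwise, which is exactly the validation annotations never gave you.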

What about rate limiting and authentication?

Gateway API uses a "policy attachment" model for cross-cutting concerns. Rate limiting, authentication, and circuit breaking aren't in the core spec -- instead, controllers define policy resources that attach to Gateways or Routes. This is intentional: these features are inherently implementation-specific. The benefit over Ingress annotations is that policies are structured, validated, and versionable rather than opaque strings.
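As a sketch of what policy attachment looks like in practice, here is roughly how Envoy Gateway expresses a local rate limit as a typed resource targeting an HTTPRoute. The shape follows Envoy Gateway's BackendTrafficPolicy, but field names differ between controllers -- treat this as illustrative and check your controller's policy docs:

```yaml
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: api-rate-limit
spec:
  targetRefs:              # attach the policy to an existing route
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: store-api
  rateLimit:
    type: Local
    local:
      rules:
      - limit:
          requests: 20
          unit: Second
```

The policy is a first-class object with a schema and a status field, so a typo fails validation instead of being silently ignored the way a misspelled annotation is.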

Is Gateway API only for north-south traffic?

No. While its primary use case is north-south (external to cluster) traffic, service meshes like Istio and Cilium also use Gateway API resources for east-west (service-to-service) routing. Istio's ambient mode, for example, uses HTTPRoute to configure L7 traffic policies between services without sidecars. The GAMMA (Gateway API for Mesh Management and Administration) initiative is standardizing this.

Conclusion

The Gateway API isn't just "Ingress but better" -- it's a fundamentally different model for service networking. The role separation alone justifies migration in any team larger than one person. Add native traffic splitting, gRPC routing, request mirroring, and spec-level timeouts, and there's no technical reason to stick with Ingress for new work. Install your preferred controller, create a Gateway, and start converting Ingress resources to HTTPRoutes one at a time. Your future self will thank you when you need to do a canary deployment and it's three lines of YAML instead of a prayer and an annotation.


Written by

Abhishek Patel

Infrastructure engineer with 10+ years building production systems on AWS, GCP, and bare metal. Writes practical guides on cloud architecture, containers, networking, and Linux for developers who want to understand how things actually work under the hood.
