Calico Enterprise 3.21 (early preview) documentation

Configure an ingress gateway

Big picture

Enable support for the Kubernetes Gateway API.

note

This feature is tech preview. Tech preview features may be subject to significant changes before they become GA.

Value

Calico Enterprise includes support for the Kubernetes Gateway API, which allows advanced routing to services in a cluster, including weighted or blue-green load balancing.

Concepts

Gateway API

The Gateway API is an official Kubernetes API for advanced routing to services in a cluster. To read about its use cases, structure and design, please see the official docs. Calico Enterprise provides the following resources and versions of the Gateway API.

Resource           Versions
BackendLBPolicy    v1alpha2
BackendTLSPolicy   v1alpha3
GatewayClass       v1, v1beta1
Gateway            v1, v1beta1
GRPCRoute          v1, v1alpha2
HTTPRoute          v1, v1beta1
ReferenceGrant     v1beta1, v1alpha2
TCPRoute           v1alpha2
TLSRoute           v1alpha2
UDPRoute           v1alpha2

Envoy Gateway

Several implementations of the Gateway API are available, one of which is Envoy Gateway. Calico Enterprise integrates the Envoy Gateway implementation to provide its support for the Gateway API.

Access into a cluster from outside

The Gateway API only provides access into a cluster from outside when the cluster is also provisioned to support Kubernetes Services with type: LoadBalancer. When a Gateway is configured, Calico Enterprise creates a Deployment that does the actual routing and load balancing, and a Service with type: LoadBalancer that fronts that Deployment. If the cluster has a type: LoadBalancer provider, that provider then allocates an IP outside the cluster and arranges for requests to that IP to be forwarded to the Gateway Service.

Managed Kubernetes services like AKS, EKS and GKE include a type: LoadBalancer provider that automatically integrates with Azure, AWS and GCP respectively. On-prem clusters and non-managed clusters in the cloud need to set up their own type: LoadBalancer support.
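
As one illustration only (MetalLB is a third-party project, not part of Calico Enterprise, and the names and address range below are placeholders), an on-prem cluster that already runs MetalLB might advertise a pool of external IPs for its LoadBalancer Services like this:

kubectl apply -f - <<EOF
# Illustrative sketch: assumes MetalLB is already installed in the metallb-system namespace.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: gateway-pool              # placeholder name
  namespace: metallb-system
spec:
  addresses:
  - 192.0.2.240-192.0.2.250       # placeholder range; use addresses routable to your network
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: gateway-l2                # placeholder name
  namespace: metallb-system
spec:
  ipAddressPools:
  - gateway-pool
EOF

Any other type: LoadBalancer provider works equally well; the only requirement is that the provider assigns an external IP to the Gateway Service.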

Before you begin

Unsupported

  • Windows

How to

Enable Gateway API support

To enable Gateway API support, create a GatewayAPI resource with name tigera-secure:

kubectl apply -f - <<EOF
apiVersion: operator.tigera.io/v1
kind: GatewayAPI
metadata:
  name: tigera-secure
EOF

Shortly after this you will see that Gateway API resources are now available:

kubectl api-resources | grep gateway.networking.k8s.io
backendlbpolicies   blbpolicy    gateway.networking.k8s.io/v1alpha2   true    BackendLBPolicy
backendtlspolicies  btlspolicy   gateway.networking.k8s.io/v1alpha3   true    BackendTLSPolicy
gatewayclasses      gc           gateway.networking.k8s.io/v1         false   GatewayClass
gateways            gtw          gateway.networking.k8s.io/v1         true    Gateway
grpcroutes                       gateway.networking.k8s.io/v1         true    GRPCRoute
httproutes                       gateway.networking.k8s.io/v1         true    HTTPRoute
referencegrants     refgrant     gateway.networking.k8s.io/v1beta1    true    ReferenceGrant
tcproutes                        gateway.networking.k8s.io/v1alpha2   true    TCPRoute
tlsroutes                        gateway.networking.k8s.io/v1alpha2   true    TLSRoute
udproutes                        gateway.networking.k8s.io/v1alpha2   true    UDPRoute

You will also see that there is a GatewayClass resource corresponding to the Envoy Gateway implementation that is included in Calico Enterprise:

kubectl get gatewayclass -o yaml | yq r - 'items[0].spec'
controllerName: gateway.envoyproxy.io/gatewayclass-controller
parametersRef:
  group: gateway.envoyproxy.io
  kind: EnvoyProxy
  name: envoy-proxy-config
  namespace: tigera-gateway
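
You can also check that the gateway controller components in the tigera-gateway namespace are running (the exact pod names vary by release, so this is just a general health check):

kubectl get pods -n tigera-gateway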

Use the Gateway API

Now you can use the Gateway API in any way that the API supports. When configuring a Gateway resource, be sure to specify gatewayClassName: tigera-gateway-class.

By way of a simple example:

  • Create echo servers in namespaces ns1 and ns2:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Namespace
    metadata:
      name: ns1
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: echoserver-ns1
      namespace: ns1
      labels:
        app: echoserver
    spec:
      containers:
      - image: ealen/echo-server:latest
        imagePullPolicy: IfNotPresent
        name: echoserver
        ports:
        - containerPort: 80
        env:
        - name: PORT
          value: "80"
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: echoserver
      namespace: ns1
    spec:
      ports:
      - port: 80
        targetPort: 80
        protocol: TCP
      type: ClusterIP
      selector:
        app: echoserver
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: ns2
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: echoserver-ns2
      namespace: ns2
      labels:
        app: echoserver
    spec:
      containers:
      - image: ealen/echo-server:latest
        imagePullPolicy: IfNotPresent
        name: echoserver
        ports:
        - containerPort: 80
        env:
        - name: PORT
          value: "80"
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: echoserver
      namespace: ns2
    spec:
      ports:
      - port: 80
        targetPort: 80
        protocol: TCP
      type: ClusterIP
      selector:
        app: echoserver
    EOF
  • Create a client pod that we can test from:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: test-client
    spec:
      containers:
      - args:
        - "3600"
        command:
        - /bin/sleep
        image: ubuntu
        imagePullPolicy: Always
        name: c1
      terminationGracePeriodSeconds: 0
    EOF
  • Once that pod is running, install curl in it:

    kubectl exec -it test-client -- apt-get -y update
    kubectl exec -it test-client -- apt-get -y install curl
  • Configure a Gateway:

    kubectl apply -f - <<EOF
    apiVersion: gateway.networking.k8s.io/v1
    kind: Gateway
    metadata:
      name: eg
    spec:
      gatewayClassName: tigera-gateway-class
      listeners:
      - name: http
        protocol: HTTP
        port: 80
    EOF
  • Configure routes to the two echo servers:

    kubectl apply -f - <<EOF
    apiVersion: gateway.networking.k8s.io/v1
    kind: HTTPRoute
    metadata:
      name: ns1-2-echo
    spec:
      parentRefs:
      - name: eg
      rules:
      - backendRefs:
        - group: ""
          kind: Service
          name: echoserver
          namespace: ns1
          port: 80
          weight: 1
        matches:
        - path:
            type: PathPrefix
            value: /ns1
      - backendRefs:
        - group: ""
          kind: Service
          name: echoserver
          namespace: ns2
          port: 80
          weight: 1
        matches:
        - path:
            type: PathPrefix
            value: /ns2
    EOF
  • Configure the ReferenceGrants that are needed to allow the HTTPRoute (which is in the default namespace) to reference the echoserver Services in namespaces ns1 and ns2:

    kubectl apply -f - <<EOF
    apiVersion: gateway.networking.k8s.io/v1beta1
    kind: ReferenceGrant
    metadata:
      name: echo
      namespace: ns1
    spec:
      from:
      - group: gateway.networking.k8s.io
        kind: HTTPRoute
        namespace: default
      to:
      - group: ""
        kind: Service
    ---
    apiVersion: gateway.networking.k8s.io/v1beta1
    kind: ReferenceGrant
    metadata:
      name: echo
      namespace: ns2
    spec:
      from:
      - group: gateway.networking.k8s.io
        kind: HTTPRoute
        namespace: default
      to:
      - group: ""
        kind: Service
    EOF
  • Find the cluster IP of the Gateway Service:

    kubectl get services -n tigera-gateway
    NAME                        TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)                                   AGE
    envoy-default-eg-e41e7b31   LoadBalancer   10.0.20.19   135.232.51.10   80:32636/TCP                              9m3s
    envoy-gateway               ClusterIP      10.0.24.12   <none>          18000/TCP,18001/TCP,18002/TCP,19001/TCP   36m

    The Service for the Gateway is the one beginning with envoy-default-, followed by the name of the Gateway resource. So the correct cluster IP in this case is 10.0.20.19. (The envoy-gateway Service represents the gateway controller, which is the component that monitors for Gateway API resources and creates corresponding components to implement those.)

  • Curl from the test client pod to a URL that should be handled by the echo server in namespace ns1, via the Gateway:

    kubectl exec -it test-client -- curl http://10.0.20.19/ns1/subpath?query=demo | jq
  • The output confirms - see the "HOSTNAME" line - that the request was handled by the echo server in namespace ns1:

    {
      "host": {
        "hostname": "10.0.20.19",
        "ip": "::ffff:10.224.0.10",
        "ips": []
      },
      "http": {
        "method": "GET",
        "baseUrl": "",
        "originalUrl": "/ns1/subpath?query=demo",
        "protocol": "http"
      },
      "request": {
        "params": {
          "0": "/ns1/subpath"
        },
        "query": {
          "query": "demo"
        },
        "cookies": {},
        "body": {},
        "headers": {
          "host": "10.0.20.19",
          "user-agent": "curl/8.5.0",
          "accept": "*/*",
          "x-forwarded-for": "10.224.0.18",
          "x-forwarded-proto": "http",
          "x-envoy-internal": "true",
          "x-request-id": "375a5b78-60fc-4a87-89b0-b4c6501115ca"
        }
      },
      "environment": {
        "PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
        "HOSTNAME": "echoserver-ns1",
        "NODE_VERSION": "20.11.0",
        "YARN_VERSION": "1.22.19",
        "PORT": "80",
        "ECHOSERVER_SERVICE_HOST": "10.0.111.210",
        "ECHOSERVER_PORT": "tcp://10.0.111.210:80",
        "ECHOSERVER_PORT_80_TCP": "tcp://10.0.111.210:80",
        "KUBERNETES_SERVICE_HOST": "10.0.0.1",
        "KUBERNETES_SERVICE_PORT": "443",
        "KUBERNETES_SERVICE_PORT_HTTPS": "443",
        "ECHOSERVER_PORT_80_TCP_ADDR": "10.0.111.210",
        "KUBERNETES_PORT": "tcp://10.0.0.1:443",
        "ECHOSERVER_SERVICE_PORT": "80",
        "ECHOSERVER_PORT_80_TCP_PORT": "80",
        "KUBERNETES_PORT_443_TCP_PORT": "443",
        "ECHOSERVER_PORT_80_TCP_PROTO": "tcp",
        "KUBERNETES_PORT_443_TCP": "tcp://10.0.0.1:443",
        "KUBERNETES_PORT_443_TCP_PROTO": "tcp",
        "KUBERNETES_PORT_443_TCP_ADDR": "10.0.0.1",
        "HOME": "/root"
      }
    }
  • Curl from the test client pod to a URL that should be handled by the echo server in namespace ns2, via the Gateway:

    kubectl exec -it test-client -- curl http://10.0.20.19/ns2/subpath?query=demo | jq
  • The output confirms that the request was handled by the echo server in namespace ns2:

    ...
    "HOSTNAME": "echoserver-ns2",
    ...
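
The example above routes each path prefix to exactly one backend, but the same HTTPRoute mechanism also supports the weighted and blue-green load balancing mentioned earlier: a single match can list several backendRefs with different weights. The following is a minimal sketch only; the route name echo-split, the /echo prefix, and the 90/10 split are illustrative, and it relies on the ReferenceGrants created above:

kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: echo-split
spec:
  parentRefs:
  - name: eg
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /echo
    backendRefs:
    # Traffic is split in proportion to the weights: roughly 90% to ns1, 10% to ns2.
    - group: ""
      kind: Service
      name: echoserver
      namespace: ns1
      port: 80
      weight: 90
    - group: ""
      kind: Service
      name: echoserver
      namespace: ns2
      port: 80
      weight: 10
EOF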

Disable Gateway API support

To disable Gateway API support, first delete all the Gateway API resources that you have configured yourself. In the example above, that would be the ReferenceGrants, the HTTPRoutes, and the Gateway.
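
For the example above, that cleanup might look like this (the resource names match the example; adjust them to whatever you have actually configured):

kubectl delete httproute ns1-2-echo
kubectl delete gateway eg
kubectl delete referencegrant echo -n ns1
kubectl delete referencegrant echo -n ns2

Then delete the GatewayAPI resource with name tigera-secure: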

kubectl delete -f - <<EOF
apiVersion: operator.tigera.io/v1
kind: GatewayAPI
metadata:
  name: tigera-secure
EOF

Please note that the Gateway API CRDs will be left in place. This is to allow for the possibility of using other Gateway API implementations in addition to the one provided by Calico Enterprise.
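
If you want to confirm which Gateway API CRDs are still present after disabling the integration, a quick check like this should list them:

kubectl get crd | grep gateway.networking.k8s.io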