What I want to do

The scenario I want to check is having the egress gateway perform the TLS origination, so that applications don’t need to take care of it themselves.

The example is based on https://istio.io/docs/tasks/traffic-management/egress/egress-gateway-tls-origination/#perform-mutual-tls-origination-with-an-egress-gateway

What do we need for testing?

External Service

In my case that’s an nginx running on OpenShift, but outside of the Service Mesh. The service calling this nginx will use the route.

Certificates

For mTLS we of course need certificates. We’re going to create self-signed ones.

Service Mesh

Read how to install it, here: https://docs.openshift.com/container-platform/3.11/servicemesh-install/servicemesh-install.html.

Client application

The test application is a pod which can simply run a curl command. It simulates a service which - as part of its business logic - calls another, external service.

Istio Configurations

The following few things will be involved. Detailed explanations will follow.

  1. Service Entry (my mesh is configured to only allow outbound traffic to known services)
  2. Destination Rules
  3. Gateway
  4. VirtualServices

Let’s do it

First let’s get our certificates. The Istio docs provide neat instructions for that:

 git clone https://github.com/nicholasjackson/mtls-go-example

 cd mtls-go-example

 ./generate.sh external-service.apps.e5dd.opentlc.com supersecret

 mkdir ../external-service.apps.e5dd.opentlc.com && mv 1_root 2_intermediate 3_application 4_client ../external-service.apps.e5dd.opentlc.com

 cd ..
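If you’re curious what the script does, here’s a condensed, illustrative sketch (file names here are made up; generate.sh itself uses a root/intermediate layout with four directories): create a CA, sign a server certificate with it, and verify the chain.

```shell
# Illustrative sketch only: a root CA, a server cert signed by it,
# and a chain verification (generate.sh adds an intermediate CA on top).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=test-root-ca" \
  -keyout root.key.pem -out root.cert.pem

openssl req -newkey rsa:2048 -nodes \
  -subj "/CN=external-service.apps.e5dd.opentlc.com" \
  -keyout server.key.pem -out server.csr.pem

openssl x509 -req -days 1 -CAcreateserial \
  -CA root.cert.pem -CAkey root.key.pem \
  -in server.csr.pem -out server.cert.pem

openssl verify -CAfile root.cert.pem server.cert.pem
# prints: server.cert.pem: OK
```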

Now let’s quickly spin up an nginx in a separate namespace. Since I am lazy, I’m just following the Istio docs. This also means that the service account for the nginx image used in the Istio docs needs the anyuid SCC:

oc new-project mesh-external → this is the project where nginx will run.

oc adm policy add-scc-to-user anyuid system:serviceaccount:mesh-external:default → the anyuid SCC. Or you could use a curated image from the Red Hat container registry.

Create secrets for the CA cert and the server certificate for nginx:

oc create -n mesh-external secret tls nginx-server-certs \
        --key external-service.apps.e5dd.opentlc.com/3_application/private/external-service.apps.e5dd.opentlc.com.key.pem \
        --cert external-service.apps.e5dd.opentlc.com/3_application/certs/external-service.apps.e5dd.opentlc.com.cert.pem

oc create -n mesh-external secret generic nginx-ca-certs --from-file=external-service.apps.e5dd.opentlc.com/2_intermediate/certs/ca-chain.cert.pem

Now create a configmap to store the nginx.conf. This is where we configure that clients also need to present a valid certificate. The content of the configmap would be something like:

events {
}

http {
  log_format main '$remote_addr - $remote_user [$time_local]  $status '
  '"$request" $body_bytes_sent "$http_referer" '
  '"$http_user_agent" "$http_x_forwarded_for"';
  access_log /var/log/nginx/access.log main;
  error_log  /var/log/nginx/error.log;

  server {
    listen 443 ssl;

    root /usr/share/nginx/html;
    index index.html;

    server_name external-service.apps.e5dd.opentlc.com;
    ssl_certificate /etc/nginx-server-certs/tls.crt;
    ssl_certificate_key /etc/nginx-server-certs/tls.key;
    ssl_client_certificate /etc/nginx-ca-certs/ca-chain.cert.pem;
    ssl_verify_client on;
  }
}

In that config we’re pointing to certificates which will later be mounted from the secrets we created. Additionally, for mTLS, we say ssl_verify_client on;.

Save that as nginx.conf and then run:

oc create configmap nginx-configmap -n mesh-external --from-file=nginx.conf=nginx.conf
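The deployment we’ll create in a moment also mounts an index page from a configmap called nginx-index, so create that one too (any placeholder page will do; the content below is just an example):

```shell
# Create a placeholder index page and store it in the nginx-index
# configmap, which the nginx deployment mounts as index.html.
cat > index.html <<'EOF'
<h1>Hello from outside the mesh!</h1>
EOF

oc create configmap nginx-index -n mesh-external --from-file=index.html=index.html
```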

So, now all we need is the actual deployment, a service and the route.

  • Deployment and Service
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  ports:
  - port: 443
    protocol: TCP
  selector:
    run: my-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 443
        volumeMounts:
        - name: nginx-config
          mountPath: /etc/nginx
          readOnly: true
        - name: nginx-server-certs
          mountPath: /etc/nginx-server-certs
          readOnly: true
        - name: nginx-ca-certs
          mountPath: /etc/nginx-ca-certs
          readOnly: true
        - name: index-page
          mountPath: /usr/share/nginx/html/index.html
          readOnly: true
          subPath: index.html
      volumes:
      - name: nginx-config
        configMap:
          name: nginx-configmap
      - name: index-page
        configMap:
          name: nginx-index
      - name: nginx-server-certs
        secret:
          secretName: nginx-server-certs
      - name: nginx-ca-certs
        secret:
          secretName: nginx-ca-certs
  • Route
kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: nginx-test
  labels:
    run: my-nginx
spec:
  host: external-service.apps.e5dd.opentlc.com
  subdomain: ''
  to:
    kind: Service
    name: my-nginx
    weight: 100
  port:
    targetPort: 443
  tls:
    termination: passthrough
  wildcardPolicy: None

The important bit here is the passthrough route: the OpenShift router must not terminate TLS, because we want nginx to present the self-signed certs we just created.

Before we continue, let’s verify that we can call the route: we should get status code 400 (Bad Request) back when not presenting a valid client certificate, and 200 with a valid one.

From a terminal:

curl -v --cacert external-service.apps.e5dd.opentlc.com/2_intermediate/certs/ca-chain.cert.pem https://external-service.apps.e5dd.opentlc.com

Now also present your client cert:

curl -v --cert external-service.apps.e5dd.opentlc.com/4_client/certs/external-service.apps.e5dd.opentlc.com.cert.pem --cacert external-service.apps.e5dd.opentlc.com/2_intermediate/certs/ca-chain.cert.pem https://external-service.apps.e5dd.opentlc.com

With that test being successful, let’s get our simplistic client app deployed:

We just need the following (assuming you have the webhook configured for automatic sidecar injection):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: sleep
---
apiVersion: v1
kind: Service
metadata:
  name: sleep
  labels:
    app: sleep
spec:
  ports:
  - port: 80
    name: http
  selector:
    app: sleep
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sleep
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sleep
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "true"
      labels:
        app: sleep
    spec:
      serviceAccountName: sleep
      containers:
      - name: sleep
        image: pstauffer/curl
        command: ["/bin/sleep", "3650d"]
        imagePullPolicy: IfNotPresent
---

Right now we could open a remote shell in the pod and try to curl the external route. This will fail: without a ServiceEntry the sidecar blocks the outbound call, and even once the service is reachable, nginx answers with 400 unless we present a valid client certificate - so -k or --insecure (just trusting nginx’s server cert) doesn’t help.

Now the fun part begins. With our test scenario in mind, where the egress gateway shall perform the TLS origination, it’s clear that the very first thing we need there is the client certificates.

The procedure is the same: create secrets from the certs (this time the client and the CA cert), and mount them into the egress gateway:

Secrets:

oc create -n istio-system secret tls nginx-client-certs --key external-service.apps.e5dd.opentlc.com/4_client/private/external-service.apps.e5dd.opentlc.com.key.pem --cert external-service.apps.e5dd.opentlc.com/4_client/certs/external-service.apps.e5dd.opentlc.com.cert.pem

oc create -n istio-system secret generic nginx-ca-certs --from-file=external-service.apps.e5dd.opentlc.com/2_intermediate/certs/ca-chain.cert.pem

Volume mounts:

oc set volumes --add deployment/istio-egressgateway --name nginx-client-certs --type secret --secret-name nginx-client-certs --mount-path /etc/nginx-client-certs -n istio-system

oc set volumes --add deployment/istio-egressgateway --name nginx-ca-certs --type secret --secret-name nginx-ca-certs --mount-path /etc/nginx-ca-certs -n istio-system

Wait for the egress-gateway re-deployment to finish.
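To double-check the mounts before continuing, list the certificate files inside the egress gateway pod (the pod is looked up via the istio=egressgateway label; adjust if your installation labels differ):

```shell
# Verify the client cert and CA chain are mounted in the egress gateway pod.
EGRESS_POD=$(oc get pod -n istio-system -l istio=egressgateway \
  -o jsonpath='{.items[0].metadata.name}')
oc exec -n istio-system "$EGRESS_POD" -- ls /etc/nginx-client-certs /etc/nginx-ca-certs
```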

So, now let’s start with the Istio configuration. The very first thing we need is a ServiceEntry: my service mesh is configured to not allow any outbound traffic unless the service we’re calling is known to Istio’s registry.

The ServiceEntry is pretty simple:

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: nginx
spec:
  hosts:
  - external-service.apps.e5dd.opentlc.com
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS

We just say “this is the URL, we use port 443 and HTTPS, and you can resolve this address using DNS” - done.

Now the tricky bit begins - the routing and everything we need for successful routing to this external service. The routing shall be the following:

sleep application → egress gateway → nginx.

That’s pretty simple. BUT we also want mTLS between the sleep app and the egress gateway. And we need to make sure to use the proper certificates when the egress gateway is trying to call the external service.

For the latter, the respective Istio object is a DestinationRule - so let’s start with that one:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: originate-mtls-for-nginx
spec:
  host: external-service.apps.e5dd.opentlc.com
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
    portLevelSettings:
    - port:
        number: 443
      tls:
        mode: MUTUAL
        clientCertificate: /etc/nginx-client-certs/tls.crt
        privateKey: /etc/nginx-client-certs/tls.key
        caCertificates: /etc/nginx-ca-certs/ca-chain.cert.pem
        sni: external-service.apps.e5dd.opentlc.com

As you can see, in that DestinationRule we just point to the certificates we’ve just mounted.

Of course we need a Gateway, so that the Istio proxy listens for traffic to our external service and load-balances it:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istio-egressgateway
spec:
  selector:
    istio: egressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - external-service.apps.e5dd.opentlc.com
    tls:
      mode: MUTUAL
      serverCertificate: /etc/certs/cert-chain.pem
      privateKey: /etc/certs/key.pem
      caCertificates: /etc/certs/root-cert.pem

Now, when routing the traffic from the sleep app to the egress gateway, we also need to make sure that we use mTLS. For the mTLS bit, we use another DestinationRule:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: egressgateway-for-external-service
spec:
  host: istio-egressgateway.istio-system.svc.cluster.local
  subsets:
  - name: nginx
    trafficPolicy:
      loadBalancer:
        simple: ROUND_ROBIN
      portLevelSettings:
      - port:
          number: 443
        tls:
          mode: ISTIO_MUTUAL
          sni: external-service.apps.e5dd.opentlc.com

So, when traffic is routed to the egress gateway and the subset “nginx” shall be used, we apply the above traffic policy, namely we use ISTIO_MUTUAL (sleep app to egress-gateway is still inside the service mesh).

Now let’s do the actual routing. The logic is:

  • When there’s traffic for our external service and it’s coming from inside the mesh, then route it to the egress gateway and make sure it’s mTLS.
  • When there’s traffic for our external service and it’s coming from the egress-gateway itself, then route it to the actual external service

This is translated to the following config:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: route-nginx-through-egress-gateway
spec:
  hosts:
  - external-service.apps.e5dd.opentlc.com
  gateways:
  - istio-egressgateway
  - mesh
  http:
  - match:
    - gateways:
      - mesh
      port: 80
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        subset: nginx
        port:
          number: 443
      weight: 100
  - match:
    - gateways:
      - istio-egressgateway
      port: 443
    route:
    - destination:
        host: external-service.apps.e5dd.opentlc.com
        port:
          number: 443
      weight: 100

Let’s check the gateways first. It says mesh and istio-egressgateway. The first is a sort of reserved gateway name - it just means the traffic is coming from somewhere inside of the mesh.

So, this VirtualService takes care of routing for traffic to our external service when it is either coming from somewhere inside of the mesh (which would be our sleep app) or our egress-gateway (the gateway definition selects the egress-gateway via label).

The first match condition says: when traffic is coming from the mesh AND arrives on port 80 (remember, we don’t want our application to deal with any TLS, so inside the app we just use plain HTTP), route it to istio-egressgateway.istio-system.svc.cluster.local on port 443 - the FQDN of the egress gateway service - and apply the subset nginx. That subset is where we said “use mTLS”. Note that we haven’t created a ServiceEntry for this service, and we don’t need to: Kubernetes Services are added to Istio’s registry automatically.

The second match is for traffic the egress gateway routes onward: traffic on the gateway named istio-egressgateway, destined for our external service, on port 443 (we switched from 80 to 443 when routing to the egress gateway). If that matches, route to the actual external service. This is when the DestinationRule with our client certificates gets applied.

Now, when opening a remote shell on the sleep app pod (oc rsh) and trying to call our external service with

curl -v http://external-service.apps.e5dd.opentlc.com

you should see the index page of nginx.
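If you prefer a one-liner over an interactive shell, the same check can be run with oc exec, and you can confirm from the egress gateway’s access log that the request really went through it:

```shell
# Run the curl directly in the sleep pod...
SLEEP_POD=$(oc get pod -l app=sleep -o jsonpath='{.items[0].metadata.name}')
oc exec "$SLEEP_POD" -c sleep -- curl -sS http://external-service.apps.e5dd.opentlc.com

# ...and check the egress gateway's log for the proxied request.
EGRESS_POD=$(oc get pod -n istio-system -l istio=egressgateway \
  -o jsonpath='{.items[0].metadata.name}')
oc logs -n istio-system "$EGRESS_POD" | tail
```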