
Kubernetes/Ingress

From Wikitech-static
Revision as of 12:10, 16 February 2022 by JMeybohm

Introduction

The Kubernetes Ingress uses the Istio Ingresscontroller (ultimately the Ingressgateway), running as a DaemonSet on each worker node, to route traffic to workload services and, in the last instance, Pods. The Istio Ingressgateway is implemented as an Envoy instance that is configured via the xDS APIs by the Istiod control plane. All configuration is derived from Kubernetes API objects such as Service objects, as well as Istio-specific custom resources: Gateway, VirtualService and DestinationRule. The process of configuring the Ingressgateway is abstracted away from service owners/deployers via the common_templates helper _ingress_helpers.tpl, so that setting .Values.ingress.enabled: true is enough for a very basic setup.

  • ingressgateway terminates TLS connections from clients
  • ingressgateway establishes TLS connections to upstream (pods) which will be terminated on pod level by service-proxy

Setup and configuration

Istio (the control plane as well as components like the Ingressgateway) is installed and initially configured using istioctl (from a deployment host) together with an environment-specific config in custom_deploy.d/istio. For initial installation or deploying updates, run:

istioctl apply -f <environment>/config.yaml

For details on how to configure Istio, please see:

  • Istio docs (1.9 version)

Some parts of the configuration have to be (or can be) done via helm chart values. istioctl comes with embedded helm charts that get rendered and applied by istioctl directly. You may find those embedded charts (to look up possible configuration options etc.) at:

  • The embedded Istio helm charts and how to derive config.yaml options from them

TLS certificates are generated and maintained by cert-manager and deployed into the istio-system namespace for the ingressgateway to pick them up. This is configured and deployed by SRE via helmfile.d/admin_ng/helmfile_namespace_certs.yaml. If a service needs to be made available under additional hostnames, they need to be configured as tlsHostname in the namespace's config (helmfile.d/admin_ng/values/*.yaml).
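As an illustration, an additional hostname entry in the namespace config might look roughly like the following. This is a hypothetical sketch: the exact key name (tlsHostname vs. a list form) and surrounding structure are assumptions, so check the admin_ng helmfiles for the real schema.

```yaml
# helmfile.d/admin_ng/values/<cluster>.yaml (hypothetical sketch)
# SERVICE_NAMESPACE and the hostname are placeholders.
SERVICE_NAMESPACE:
  tlsHostnames:
  - additional-name.discovery.wmnet  # extra SAN for the namespace certificate
```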

Debugging

  • istioctl commands to list proxy config etc.
  • how/where to enable envoy debug logging (would be nice to figure out how to do that at runtime)
  • Create a kibana dashboard for ingressgateway logs
  • Create a grafana dashboard for ingressgateway
  • Maybe we should disable generic access logging in ingressgateway?
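For the first two points above, istioctl already offers some inspection commands. The following is a sketch of commands that are generally available in istioctl 1.9 (pod names are placeholders; these need to be run against a live cluster from a host with istioctl access):

```shell
# Show sync status of all Envoy proxies known to Istiod
istioctl proxy-status

# Dump the ingressgateway's Envoy configuration
istioctl proxy-config listeners <ingressgateway-pod>.istio-system
istioctl proxy-config routes <ingressgateway-pod>.istio-system
istioctl proxy-config clusters <ingressgateway-pod>.istio-system

# Change Envoy log levels at runtime (uses the Envoy admin API under the hood)
istioctl proxy-config log <ingressgateway-pod>.istio-system --level debug
```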


Configuration (for service owners)

To enable Ingress for your chart you need to undertake the following steps:

  • Make sure your chart uses the latest (at least 0.4) version of common_templates
  • Make sure the common_templates file _ingress_helpers.tpl is linked into the templates directory of your chart

For the absolute basic setup, all you need to do now is enable ingress via your values.yaml:

ingress:
  enabled: true
  staging: true # If you are doing this for a staging service

This will make your service available as https://SERVICE_NAME.discovery.wmnet (https://SERVICE_NAME.staging.discovery.wmnet for staging). Traffic will be routed as-is to all pods of your service in a round-robin fashion.
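Behind the scenes, the helper renders Istio custom resources from these values. The following is a hedged sketch of roughly what gets created for the basic setup; the actual object names, labels, credential names and TLS settings are determined by the common_templates helpers and may differ:

```yaml
# Hypothetical sketch of the rendered objects, not the exact chart output
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: SERVICE_NAME
spec:
  selector:
    istio: ingressgateway  # bind to the ingressgateway DaemonSet pods
  servers:
  - hosts:
    - SERVICE_NAME.discovery.wmnet
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: SERVICE_NAME-tls-certificate  # assumed name; provided by cert-manager
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: SERVICE_NAME
spec:
  hosts:
  - SERVICE_NAME.discovery.wmnet
  gateways:
  - SERVICE_NAME
  http:
  - route:
    - destination:
        host: SERVICE_NAME-tls-service.SERVICE_NAMESPACE.svc.cluster.local
        port:
          number: SERVICE_TLS_PUBLIC_PORT  # .Values.tls.public_port
```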

You may configure more complex routing logic, listen to different or more than one hostname etc. via the ingress configuration stanza. Please keep in mind that for different or additional hostnames you may need SRE assistance to set up certificates etc.

The routing behaviour may be modified via the ingress.httproutes stanza which supports all options described in https://istio.io/v1.9/docs/reference/config/networking/virtual-service/#HTTPRoute.
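For example, since ingress.httproutes accepts any HTTPRoute field from the Istio 1.9 API, path matching can be combined with options such as a URI rewrite or a per-route timeout. A hedged example (field names follow the upstream HTTPRoute reference; the path and timeout values are illustrative):

```yaml
ingress:
  enabled: true
  httproutes:
  - match:
    - uri:
        prefix: /api      # only route requests under /api
    rewrite:
      uri: /              # strip the /api prefix before forwarding
    timeout: 5s           # fail requests taking longer than 5 seconds
```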

If you want to make several services available as subpaths of a hostname (https://SERVICE_NAME.discovery.wmnet/foo, https://SERVICE_NAME.discovery.wmnet/bar, ...) you need to make sure to configure only one Istio Gateway (in one of your helm chart releases) for this hostname and attach multiple HTTPRoute objects to it. Multiple Istio Gateway objects claiming the same hostname will simply be ignored.

Assuming you have two releases of your chart (one and two), you may configure ingress like in the following example to achieve that:

---
# release "one" values.yaml:
# made available as https://SERVICE_NAME.discovery.wmnet/one
# via default options + httproute
ingress:
  enabled: true
  httproutes:
  - match:
    - uri:
        prefix: /one
---
# release "two" values.yaml:
# made available as https://SERVICE_NAME.discovery.wmnet/two
# via the Gateway deployed by release "one"
ingress:
  enabled: true
  existingGateway: "SERVICE_NAMESPACE/one" # referencing the Gateway deployed by the release "one"
  routeHosts:
  - SERVICE_NAME.discovery.wmnet # Attach the following routes to this hostname in the referenced Gateway
  httproutes:
  - match:
    - uri:
        prefix: /two
    route:
    - destination:
        host: two-tls-service.SERVICE_NAMESPACE.svc.cluster.local # The cluster-internal DNS name of this release's service
        port:
          number: SERVICE_TLS_PUBLIC_PORT # Port you defined in .Values.tls.public_port
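After deploying both releases, you can sanity-check the result with standard kubectl/istioctl commands (a sketch; pod and namespace names are placeholders and these require cluster access):

```shell
# List the Istio objects both releases created
kubectl -n SERVICE_NAMESPACE get gateway,virtualservice,destinationrule

# Verify the routes the ingressgateway actually received; remember that only
# one Gateway may claim the hostname, additional claims are silently ignored
istioctl proxy-config routes <ingressgateway-pod>.istio-system | grep SERVICE_NAME
```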