Kubernetes/Add a new service

Revision as of 17:19, 2 May 2022 by imported>JMeybohm (Own section for deployment of admin_ng to link to)

All steps below assume you want to deploy a new service named service-foo to the clusters of the main (wikikube) group.

Accessibility of service-foo from outside of Kubernetes can be achieved via Kubernetes Ingress or LVS. The method of choice will have some impact on the following steps.

Prepare the clusters for the new service

  1. Create deployment user/tokens in the puppet private and public repos (the public counterpart is the 'labs/private' repo). You can use a randomly generated 22-character [A-Za-z0-9] password for each of the two required tokens. You need to edit the hieradata/common/profile/kubernetes.yaml file in the private repository - specifically the profile::kubernetes::infrastructure_users key, as in the example below:
    profile::kubernetes::infrastructure_users:
      main:
        client-infrastructure:
          token: <REDACTED>
          groups: [system:masters]
    ...
    +    service-foo:
    +      token: <YOUR_TOKEN>
    +      groups:
    +        - deploy
    +    service-foo-deploy:
    +      token: <ANOTHER_TOKEN>
    
    The additional user with the -deploy suffix is required due to the access control policies configured. Please see this comment for a more detailed explanation of how this pattern arose.
  2. Tell the deployment server how to set up the kubeconfig files. This is done by modifying the profile::kubernetes::deployment_server::services hiera key (hieradata/common/profile/kubernetes/deployment_server.yaml) as in the example below:
    profile::kubernetes::deployment_server::services:
      main:
        mathoid:
          usernames:
            - name: mathoid
            - name: mathoid-deploy
    ...
    +    service-foo:
    +      usernames:
    +        - name: service-foo
    +        - name: service-foo-deploy
    
    Please note that the file permissions of your kubeconfig file (/etc/kubernetes/service-foo-<cluster_name>.config) are inherited from the defaults at profile::kubernetes::deployment_server::user_defaults. Typically you won't need to override them. If you do need to, you can specify the keys owner, group and mode for each element in the usernames array.
  3. Add a Kubernetes namespace. Example commit:
  4. At this point, you can safely merge the changes (after somebody from Service Ops validates). After merging, it is important to deploy your changes to avoid impacting other people rolling out changes later on.
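If you do need to override the kubeconfig file permissions mentioned in step 2, the result could look like the fragment below. The owner/group/mode values are purely illustrative; the defaults come from profile::kubernetes::deployment_server::user_defaults.

```yaml
profile::kubernetes::deployment_server::services:
  main:
    service-foo:
      usernames:
        - name: service-foo
          owner: root    # hypothetical override values
          group: ops
          mode: '0440'
        - name: service-foo-deploy
```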
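The random tokens required in step 1 can be generated on any Linux host; as one possible sketch (any generator producing 22 [A-Za-z0-9] characters works just as well):

```shell
# Generate one 22-character alphanumeric token (run once per required token).
token="$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 22)"
echo "$token"
```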

Deploy changes to helmfile.d/admin_ng

If you have just created new Kubernetes infrastructure users (profile::kubernetes::infrastructure_users), make sure they are available on the Kubernetes masters by running Puppet from one of the cumin hosts (cumin1001.eqiad.wmnet, cumin2002.codfw.wmnet):

sudo cumin -b 4 -s 2 'P{O:kubernetes::master or O:kubernetes::staging::master}' 'run-puppet-agent'

On deploy1002:

sudo run-puppet-agent
sudo -i
cd /srv/deployment-charts/helmfile.d/admin_ng/
helmfile -e staging-codfw -i apply
# if that went fine
helmfile -e staging-eqiad -i apply
helmfile -e codfw -i apply
helmfile -e eqiad -i apply

Each of the commands above should show you a diff in namespaces, quotas, etc. related to your new service. If you don't see a diff, ping somebody from the Service Ops team! Check that everything is ok:

sudo -i
kube_env admin staging-codfw
kubectl describe ns service-foo

You should be able to see info about your namespace.

Create certificates (for the services proxy)

If service-foo has an HTTPS endpoint, certificates need to be created as described in Create_and_place_certificates. This needs to be done regardless of whether Ingress or LVS is used. Note that certificates are only created for production clusters; staging deployments will automatically use a default certificate.

Add private data/secrets (optional)

Ask Service Ops to add the private data for your service.

This is done by adding an entry for service-foo under profile::kubernetes::deployment_server_secrets::services in the private repository (hieradata/role/common/deployment_server/kubernetes.yaml). Secrets will most likely be needed for all clusters, including staging.
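As a rough illustration only: the exact layout of this key in the private repository may differ, but by analogy with the profile::kubernetes::deployment_server::services structure shown earlier, an entry could look something like the following (the cluster nesting and the secret name here are assumptions, not the real schema):

```yaml
profile::kubernetes::deployment_server_secrets::services:
  main:
    service-foo:
      staging:
        some_api_key: <REDACTED>   # hypothetical secret name
      codfw:
        some_api_key: <REDACTED>
      eqiad:
        some_api_key: <REDACTED>
```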

Setting up Ingress

This is only needed if service-foo should be accessed via Ingress.

Follow Ingress#Add a new service to create the Ingress-related config, DNS records, etc.

Deploy the service

At this point you should have a Chart for your service (Creating a Helm Chart), and will need to set up a helmfile.d/services directory in the operations/deployment-charts repository for the deployment. You can copy the structure (helmfile.yaml, values.yaml, values-staging.yaml, etc.) from helmfile.d/services/_example_ and customize as needed.

If this service will be accessed via LVS: Ensure the service has its ports registered at Service ports ($SERVICE-FOO-PORT)

You can proceed to deploy the new service to staging for real.

On deploy1002:

cd /srv/deployment-charts/helmfile.d/services/service-foo
helmfile -e staging -i apply

The command above will show a diff related to the new service; make sure that everything looks fine, then confirm with Yes to proceed.

Testing a service

  1. Now we can test the service in staging. Use the very handy endpoint https://staging.svc.eqiad.wmnet:$SERVICE-FOO-PORT to quickly check that everything works as expected.
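A staging smoke test could look like the sketch below. The port 8888 and the /_info path are placeholders: substitute the port registered for service-foo and a path your service actually serves.

```shell
# Build the staging test URL; replace the placeholder port with the real one.
SERVICE_FOO_PORT=8888
URL="https://staging.svc.eqiad.wmnet:${SERVICE_FOO_PORT}/_info"
echo "$URL"
# From a host with access to staging:
#   curl -sk "$URL"
```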

Deploy a service to production

  1. Ensure you have enabled TLS support via tls.enabled in your values.yaml
  2. Then the final step, namely deploying the new service. On deploy1002:
    cd /srv/deployment-charts/helmfile.d/services/service-foo
    helmfile -e codfw -i apply
    # if that went fine
    helmfile -e eqiad -i apply
    
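The TLS toggle from step 1 looks roughly like this in values.yaml; the exact layout depends on your chart's scaffold, and only the tls.enabled key is taken from the text above:

```yaml
# values.yaml (fragment)
tls:
  enabled: true   # enable TLS termination in front of the service, per step 1
```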

The service can now be accessed via the registered port on any of the Kubernetes nodes (for manual testing).

Setting up LVS

This is only needed if service-foo should be accessed via LVS.

Follow LVS#Add_a_new_load_balanced_service to create a new LVS service on $SERVICE-FOO-PORT.