Revision as of 00:27, 25 January 2022 by imported>Dzahn (→‎How this service was made)

miscweb is a new service running on Kubernetes.

Since 2022-01-20 it serves production traffic for static-bugzilla.

It was requested in task T281538 as a replacement for the legacy service "miscweb" running on Ganeti VMs in production.

Also see miscweb1002 and miscweb2002, the legacy machines still serving other microsites.

Sites running on miscweb k8s

The first of the sites hosted on miscweb-k8s is static-bugzilla. Since 2022-01-20 it has been served from k8s.

The actual switch to the new backend was made here.

Other microsites are going to follow this quarter.

Where does the code live?

The docker image is built by the Deployment Pipeline/CI from the repo operations/container/miscweb. This is also where the actual content and the webserver config can be found.

The helm charts for kubernetes are together with the other services in operations/deployment-charts.

Note that all the HTML content files are gzipped to reduce the image size. To edit HTML inside the repo you need to gunzip the file, edit it, gzip it again, and then upload the change to Gerrit.
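The gunzip/edit/gzip round trip can be sketched like this. `index.html` is a stand-in name for illustration; in practice you would edit one of the files in your operations/container/miscweb checkout:

```shell
# Stand-in content; in the real repo you would start from an existing *.html.gz file.
printf '<html><body>bug 1234</body></html>' > index.html
gzip -9 index.html            # the repo stores the files gzipped (index.html.gz)

gzip -d index.html.gz         # 1. gunzip before editing
sed -e 's/bug 1234/bug 1234 (edited)/' index.html > index.html.new \
  && mv index.html.new index.html    # 2. edit the HTML
gzip -9 index.html            # 3. gzip again, then commit and upload to Gerrit

gzip -dc index.html.gz        # show the edited, recompressed content
```

Using `gzip -9` keeps the image-size benefit the files were compressed for in the first place.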

How to deploy changes

First, deploy to staging:
  • ssh deploy1002.eqiad.wmnet
  • [deploy1002:~] $ kube_env miscweb staging
  • [deploy1002:~] $ helmfile -e staging diff
  • [deploy1002:~] $ helmfile -e staging -i apply

And wait: either it works after a little while, or it automatically reverts after 5 minutes.

Then deploy to production, starting with codfw:


  • ssh deploy1002.eqiad.wmnet
  • [deploy1002:~] $ kube_env miscweb codfw
  • [deploy1002:~] $ helmfile -e codfw diff
  • [deploy1002:~] $ helmfile -e codfw -i apply

And wait: either it works after a little while, or it automatically reverts after 5 minutes.

Finally, deploy to eqiad (you are already on the deployment server, so no new ssh is needed):

  • [deploy1002:~] $ kube_env miscweb eqiad
  • [deploy1002:~] $ helmfile -e eqiad diff
  • [deploy1002:~] $ helmfile -e eqiad -i apply

And wait: either it works after a little while, or it automatically reverts after 5 minutes.
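The three per-environment deployments above all follow the same pattern, so as a sketch they could be wrapped in a small shell function. This is a hypothetical helper, not an existing tool; `kube_env` and `helmfile` are only available on the WMF deployment servers:

```shell
# Hypothetical wrapper around the steps above; run on deploy1002.
# kube_env and helmfile exist only on WMF deployment servers.
deploy_miscweb() {
  local env="$1"                  # staging, codfw, or eqiad
  kube_env miscweb "$env"         # select credentials/namespace for that cluster
  helmfile -e "$env" diff         # review the pending changes first
  helmfile -e "$env" -i apply     # -i asks for confirmation before applying
}

# Usage, in the same order as documented above:
#   deploy_miscweb staging
#   deploy_miscweb codfw
#   deploy_miscweb eqiad
```

Deploying to staging first, then codfw, then eqiad keeps any bad change away from the primary datacenter until it has been seen working elsewhere.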

Service names

miscweb.svc.eqiad.wmnet has address  (eqiad)
miscweb.svc.codfw.wmnet has address  (codfw)
miscweb.discovery.wmnet has address  (DNS/Discovery)
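A quick way to spot-check the service is to curl the service names above from a host inside production. This is only a sketch: the page does not list the reserved service port, so the port below is a placeholder, and the commands are printed rather than executed:

```shell
# Placeholder port: this page does not list the reserved service port,
# so 30443 here is only an example value.
SVC_PORT="30443"

# Print a header check for each of the service names listed above.
for host in miscweb.svc.eqiad.wmnet miscweb.svc.codfw.wmnet miscweb.discovery.wmnet; do
  printf 'curl -skI https://%s:%s/\n' "$host" "$SVC_PORT"
done

# On a production host, run the printed commands and confirm the responses
# carry "Content-Type: text/html" and "Content-Encoding: gzip".
```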

LVS / discovery

How this service was made

Here is a compiled table/list of all the changes made to get this service from scratch into WMF production, in the chronological order in which they were merged.

steps for miscweb
# action
1 created a new service request ticket
2 read docs
3 reserved a service port
4 added tokens in private repo to CI::master and deployment_server in private repo cd /srv/private/.. on the puppetmaster (ask an SRE with root access if needed)
5 added dummy tokens in the labs/private repo
6 created a new namespace in kubernetes, use helmfile apply on deployment servers
7 added new namespace to CI and deployment_server
8 requested a new Gerrit repo to host your (Blubber) code
9 read about deployment pipeline
10 added initial config stub for pipeline lib
11 read about Blubber
12 added initial Blubber file
13 added pipelines and config in integration/config (asked releng)
14 added bespoke pipeline in integration/config if needed (asked releng)
15 added LVS service IPs
16 added in Blubber
17 tried staging/test variants
18 simplified apache config
19 installed vim, curl in container for testing
20 dropped/merged unused pipeline
21 switched service to not run 'insecurely' (as a separate user)
22 added virtual site inside webserver
23 tested cloning from repo, letting Blubber generate a Dockerfile and got shell inside container
24 stopped loading modules not used
25 reserved a public port for LVS
26 opened firewall on deployment server to dump data from pre-k8s service
27 rsynced data over to deployment server
28 added config to serve data gzipped to reduce image size, installed browser in container to test
29 loaded mod_rewrite and mod_headers, added headers/encoding settings for gzipped content
30 read about helm and deployments on kubernetes
31 cloned the repo 'operations/deployment-charts' where the helm files live
32 read README in the repo about how to create charts
33 read and ran the helper script for creating a new chart
34 adjusted values in the new files generated by the script and uploaded to the repo
35 created a new app type for a httpd without php-fpm, added a prometheus (metrics) exporter
36 added helmfile.yaml and values under services.d, copying from another service
37 set the docker registry name specifically to use the discovery name
38 added uncompressed content of the first 1000 Bugzilla bugs
39 cleaned up and added comments for others to delete files they don't use
40 set a main_app version and added some CPU/RAM limits
41 added reserved port as nodePort
42 added version tags for staging and production
43 linked staging httpd config to prod httpd config
44 added httpd rewrite rules from pre-k8s config
45 set service deployment to production, not minikube
46 bumped staging version to the latest build created by CI, etc. (skipping these in the future; needed after every change)
47 loaded missing mod_alias for the Redirect directive
48 added HTML content for the first 10000 bugs, checked image size
49 compressed content with gzip and added more bug HTML
50 various changes to add all the content in batches of 10k bugs, then the same for activity HTML files and various others
51 added and gzipped index and "all" pages
52 added old Bugzilla Wikimedia skin directory
53 read about adding a new service to LVS
54 added service IPs in DNS (ask infra foundations)
55 added LVS config and had it merged (coordinate with serviceops/traffic for this step)
56 switched service_state from service_setup to lvs_setup
57 enabled LVS in helm chart
58 removed nodePort, added public_port, enabled TLS; multiple attempts to get the order right, then TLS worked
59 switched service_state from lvs_setup to monitoring_setup, checked new Icinga monitoring being added, further testing to confirm it works at all
60 debugged gzip encoding issue in cloud VPS, confirmed the image can be pulled and run directly from the prod docker registry
61 fixed content type for HTML, which was set to CSS; service now working in cloud
62 further version bumping / deploying / testing
63 confirmed working with curl directly against production service names with the right content-type and content-encoding
64 switched service_state from monitoring_setup to production (makes it page), but only very carefully after checking confd templates on DNS servers and downtiming services in Icinga
65 read about discovery DNS
66 added discovery DNS as an active-active service, confirmed the service could now be curled via the discovery name
67 switched ATS (traffic servers/caching layer) from the old backend to the new backend, the discovery name on our reserved service port
68 added service to
69 ATS servers got 502, did not work, reverted; the reason turned out to be a missing SAN on the TLS cert
70 added SAN to cert, created new cert, checked it