Global traffic routing

This page covers our mechanisms for routing user requests through our Traffic infrastructure layers. The routing can be modified through administrative actions to improve performance and/or reliability, and/or respond to site/network outage conditions.

== Sites ==


There are currently [[Data centers|six total data centers]] involved.  All locations can receive direct user traffic, however <code>eqiad</code> and <code>codfw</code> also host ''Core application services'', whereas <code>ulsfo</code>, <code>esams</code>, <code>drmrs</code>, and <code>eqsin</code> are limited to ''Edge caching''.


{{ClusterMap}}


== Global Routing Overview ==


User traffic can enter through the front edge of any of the sites, and is then routed on to eventually reach an application service in a primary site (either eqiad or codfw).


Ideally all of our application-layer services operate in an active/active configuration, meaning they can directly accept user traffic in both primary sites simultaneously.  Some application services are active/passive, meaning they accept user traffic in only one of the primary sites at any given time.  Active/active services might also be temporarily configured to use only a single primary site for various operational maintenance or outage reasons.


In the active/active application's case, global traffic is effectively split.  Users whose traffic enters at either <code>ulsfo</code> or <code>codfw</code> reach the application service in <code>codfw</code>, and users whose traffic enters at <code>esams</code> or <code>eqiad</code> reach the application service in <code>eqiad</code>.


== GeoDNS (User-to-Edge Routing) ==

The first point of entry is when the client performs a DNS request on one of our public hostnames. Our authoritative DNS servers perform GeoIP resolution and hand out one of several distinct IP addresses, sending users approximately to their nearest site. We can disable sending users directly to a particular site through DNS configuration updates. Our DNS TTLs are commonly 10 minutes long, and some rare user caches will violate specs and cache them longer. The bulk of the traffic should switch inside of 10 minutes, though, with a fairly linear progression over that window.
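
As a quick illustration of this behaviour (a minimal sketch; the hostname queried is just an example):

<syntaxhighlight lang="bash">
# Resolve a wiki hostname through your local resolver: the address handed back
# (possibly via a CNAME) is the edge site GeoDNS picked for your location.
# Clients in different regions get different answers for the same name.
dig +short en.wikipedia.org
</syntaxhighlight>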


=== Disabling a Site ===

To disable a site as an edge destination for user traffic in GeoDNS:

Downtime the matching site alert in https://icinga.wikimedia.org/cgi-bin/icinga/status.cgi?search_string=traffic+drop


In the <code>operations/dns</code> repo, edit the file <code>admin_state</code>

There are instructions inside for complex changes, but for the basic operation of completely disabling a site, the line you need to add at the bottom for e.g. disabling esams is:
   geoip/generic-map/esams => DOWN


... and then deploy the DNS change in the usual way: merge through gerrit, ssh to any '''one''' of our authdns servers (<code>authdns[12]001.wikimedia.org</code>), and execute <code>authdns-update</code> as root.
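
Putting the steps together, the whole operation looks roughly like the sketch below (the exact review workflow and the choice of authdns host may differ; only the <code>admin_state</code> line and <code>authdns-update</code> are taken from this page):

<syntaxhighlight lang="bash">
# In a local checkout of the operations/dns repo: append the line that
# disables esams, then send the change through gerrit review and merge it.
echo 'geoip/generic-map/esams => DOWN' >> admin_state
git commit -a -m 'Disable esams edge in GeoDNS' && git review

# Then, on any ONE of the authdns servers, deploy the change as root:
ssh authdns1001.wikimedia.org
sudo authdns-update
</syntaxhighlight>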


=== Hard enforcement of GeoDNS-disabled sites ===


In the case that we need to '''guarantee''' that zero requests are flowing into the user-facing edge of a disabled site for a given cache cluster (or all clusters), we can forcibly block all traffic at the front edge.  This should only be done when strictly necessary, and only long after (e.g. 24H after) making the DNS switch above, to avoid impacting those with minor trailing DNS cache update issues.  To lock traffic out of the cache frontends for a given cluster in a given site, you'll need to merge and deploy a puppet hieradata update which sets the key <code>cache::traffic_shutdown</code> to <code>true</code> for the applicable cluster/site combinations.


For example, to lock all traffic out of the text cluster in eqiad, add the following line to <code>hieradata/role/eqiad/cache/text.yaml</code>:

<syntaxhighlight lang="yaml">
cache::traffic_shutdown: true
</syntaxhighlight>

Once the change is merged and applied to the nodes with puppet, all requests sent to eqiad will get an HTTP 403 response from the cache frontends instead of being served from cache or routed to the appropriate origin server.
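
To spot-check that the lock-out is actually in effect, you can point a request directly at the disabled site's frontends, bypassing GeoDNS. The sketch below is illustrative only: it assumes the eqiad text cluster is reachable via the <code>text-lb.eqiad.wikimedia.org</code> service address, which is not taken from this page.

<syntaxhighlight lang="bash">
# Resolve the eqiad text frontends explicitly, then send a request there while
# pinning the Host/SNI to a wiki hostname; a locked-out cluster should answer
# with the HTTP 403 described above.
ADDR=$(dig +short text-lb.eqiad.wikimedia.org | head -1)
curl -sI --resolve en.wikipedia.org:443:"$ADDR" https://en.wikipedia.org/ | head -1
</syntaxhighlight>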


== Cache-to-application routing ==

Upon entering a given data center, HTTP requests reach a cache frontend host running Varnish. At this layer, caching is controlled by either the <code>cache::req_handling</code> or the <code>cache::alternate_domains</code> hiera setting. The former is used by the main sites such as the wikis and upload.wikimedia.org, while the latter is used by miscellaneous sites such as [[Phabricator|phabricator.wikimedia.org]] and [[grafana.wikimedia.org]]. Which data structure to use depends on whether the site is handled by the regular or the misc VCL; new services are almost always misc, so they almost always belong in <code>cache::alternate_domains</code>. If in doubt, contact the traffic team. The format of both data structures is:


<syntaxhighlight lang="yaml">
cache::alternate_domains:
  hostname1:
    caching: 'normal'
  hostname2:
    caching: 'pass'
</syntaxhighlight>


In Puppet terms, these structures are described by the data type <code>[https://gerrit.wikimedia.org/r/plugins/gitiles/operations/puppet/+/refs/heads/production/modules/profile/types/cache/sites.pp Profile::Cache::Sites]</code>. The <code>caching</code> attribute is particularly interesting; see its [https://gerrit.wikimedia.org/r/plugins/gitiles/operations/puppet/+/refs/heads/production/modules/profile/types/cache/caching.pp type definition]. A value of '''normal''' means that Varnish will cache the responses for this site unless '''Cache-Control''' says otherwise. Conversely, '''pass''' means that objects for this site are never cached. Prefer '''normal''' and ensure that the origin returns '''Cache-Control''' with appropriate values for responses that should not be cached; where that is not possible, '''pass''' can be used. For sites that need to support websockets, such as Phabricator and Etherpad, use '''websockets'''. A sample of the production values for <code>cache::alternate_domains</code> as of July 2020 follows.


<syntaxhighlight lang="yaml">
cache::alternate_domains:
  15.wikipedia.org:
    caching: 'normal'
  analytics.wikimedia.org:
    caching: 'normal'
  annual.wikimedia.org:
    caching: 'normal'
  blubberoid.wikimedia.org:
    caching: 'pass'
  bienvenida.wikimedia.org:
    caching: 'normal'
  etherpad.wikimedia.org:
    caching: 'websockets'
</syntaxhighlight>


If there is no cache hit at the frontend layer, requests are sent to a cache backend running [[ATS]] in the same DC. Backend selection is done by applying consistent hashing on the request URL. If there is no cache hit at the backend layer either, the final step is routing the request out the back edge of the Traffic caching infrastructure into the application layer.  The application layer services can exist at one or both of the two primary sites (<code>eqiad</code> and <code>codfw</code>) on a case-by-case basis.  This is controlled by ATS remap rules mapping the '''Host''' header to a given origin server hostname. The hiera setting controlling the rules is <code>profile::trafficserver::backend::mapping_rules</code>, and for production it is specified in <code>hieradata/common/profile/trafficserver/backend.yaml</code>. For most services, whether the service is active/active or active/passive is configured via [[DNS/Discovery]]. The exception is services available in only one primary DC, such as pivot (eqiad-only) in the example below:


<syntaxhighlight lang="yaml">
profile::trafficserver::backend::mapping_rules:
    - type: map
      target: http://15.wikipedia.org
      replacement: https://webserver-misc-apps.discovery.wmnet
    - type: map
      target: http://phabricator.wikimedia.org
      replacement: https://phabricator.discovery.wmnet
    - type: map
      target: http://pivot.wikimedia.org
      replacement: https://an-tool1007.eqiad.wmnet
</syntaxhighlight>


Any administrative action such as depooling a primary site for active/active services, or moving an active/passive service from one primary DC to the other, can be performed via [[DNS/Discovery#How_to_manage_a_DNS_Discovery_service|DNS discovery updates]].
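
For illustration, such an administrative action typically boils down to a single discovery update with confctl. The snippet below is a hedged sketch, not a runbook: the service name <code>foo</code> is hypothetical, and the linked [[DNS/Discovery#How_to_manage_a_DNS_Discovery_service|DNS/Discovery]] page remains the authoritative procedure.

<syntaxhighlight lang="bash">
# Depool the codfw side of the hypothetical discovery service "foo", so that
# the caches reach it via eqiad only...
confctl --object-type discovery select 'dnsdisc=foo,name=codfw' set/pooled=false
# ...and repool it once codfw is ready to take traffic again.
confctl --object-type discovery select 'dnsdisc=foo,name=codfw' set/pooled=true
</syntaxhighlight>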


When adding a new service to <code>profile::trafficserver::backend::mapping_rules</code>, ensure that the public hostname (i.e. the hostname component of <code>target</code>) is included in the Subject Alternative Name (SAN) list of the certificate served by <code>replacement</code>. This is needed for ATS to successfully establish a TLS connection to the origin server.

The following command provides an example of how to verify that the hostname '''phabricator.wikimedia.org''' is included in the SAN of the certificate offered by '''phabricator.discovery.wmnet''':

<syntaxhighlight lang="bash">
$ echo | openssl s_client -connect phabricator.discovery.wmnet:443 2>&1 | openssl x509 -noout -text | grep -q DNS:phabricator.wikimedia.org && echo OK || echo KO
OK
</syntaxhighlight>


If the above command fails, you might have to update the origin server certificate to include the public hostname. See [[Cergen]].


To further verify that HTTPS requests are served properly by the configured origin, including the TLS handshake:


<syntaxhighlight lang="bash">
# get the IP address of phabricator.discovery.wmnet
$ host phabricator.discovery.wmnet
phabricator.discovery.wmnet is an alias for phab1001.eqiad.wmnet.
phab1001.eqiad.wmnet has address 10.64.16.8
# test an HTTPS request
$ curl -I https://phabricator.wikimedia.org --resolve phabricator.wikimedia.org:443:10.64.16.8
HTTP/1.1 200 OK
[...]
</syntaxhighlight>


[[Category:Caching]]
