
Global traffic routing

This page covers our mechanisms for routing user requests through our Traffic infrastructure layers. The routing can be modified through administrative actions to improve performance and/or reliability, and/or respond to site/network outage conditions.

== Sites ==
There are currently five total sites involved. All five sites can receive direct user traffic; however, eqiad and codfw are ''primary'' sites which also host application layer services, while ulsfo, esams and eqsin are ''edge'' sites which do not.

Map of Wikimedia Foundation data centers.


== Global Routing Overview ==
[[File:WMF Global Traffic Routing.svg|frameless|768x768px]]


User traffic can enter through the front edge of any of the sites, and is then routed on to eventually reach an application service in a primary site (either eqiad or codfw).


Ideally all of our application-layer services operate in an active/active configuration, meaning they can directly accept user traffic in both primary sites simultaneously.  Some application services are active/passive, meaning that they're only accepting user traffic in one of the primary sites but not the other at any given time.  Active/active services might also be temporarily configured to use only a single one of the primary sites for various operational maintenance or outage reasons.


In the active/active application's case, global traffic is effectively split.  Users whose traffic enters at either of <code>ulsfo</code> or <code>codfw</code> would reach the application service in <code>codfw</code>, and users whose traffic enters at <code>esams</code> or <code>eqiad</code> would reach the application service in <code>eqiad</code>.
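As an illustration only (this summarizes the paragraph above and is not a configuration file that exists anywhere in puppet), the active/active split can be pictured as a mapping from the site where traffic enters to the primary site whose application service handles it:

<syntaxhighlight lang="yaml">
# Illustrative only: entry site -> primary site reached, for an active/active service.
eqiad: eqiad
codfw: codfw
ulsfo: codfw
esams: eqiad
</syntaxhighlight>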
 


== GeoDNS (User-to-Edge Routing) ==
The first point of entry is when the client performs a DNS request on one of our public hostnames. Our authoritative DNS servers perform GeoIP resolution and hand out one of several distinct IP addresses, sending users approximately to their nearest site. We can disable sending users directly to a particular site through DNS configuration updates. Our DNS TTLs are commonly 10 minutes long, and some rare user caches will violate specs and cache them longer. The bulk of the traffic should switch inside of 10 minutes, though, with a fairly linear progression over that window.

=== Disabling a Site ===
To disable a site as an edge destination for user traffic in GeoDNS:

Downtime the matching site alert in https://icinga.wikimedia.org/cgi-bin/icinga/status.cgi?search_string=traffic+drop

In the <code>operations/dns</code> repo, edit the file <code>admin_state</code>.  There are instructions inside for complex changes, but for the basic operation of completely disabling a site, the line you need to add at the bottom for e.g. disabling esams is:

 geoip/generic-map/esams => DOWN

... and then deploy the DNS change in the usual way: merge through gerrit, ssh to any one of our 3x authdns servers (baham, radon, and eeden), and execute <code>authdns-update</code> as root.
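The same one-line format should work for any other edge site, assuming it appears in the same <code>geoip/generic-map</code>; for example, a sketch (illustration of the format only, not a change to deploy) of taking eqsin out of rotation instead:

 geoip/generic-map/eqsin => DOWN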
=== Hard enforcement of GeoDNS-disabled sites ===


In the case that we need to '''guarantee''' that zero requests are flowing into the user-facing edge of a disabled site for a given cache cluster (or all clusters), we can forcibly block all traffic at the front edge.  This should only be done when strictly necessary, and only long after (e.g. 24H after) making the DNS switch above, to avoid impacting those with minor trailing DNS cache update issues.  To lock traffic out of the cache frontends for a given cluster in a given site, you'll need to merge and deploy a puppet hieradata update which sets the key <code>cache::traffic_shutdown</code> to <code>true</code> for the applicable cluster/site combinations.


For example, to lock all traffic out of the text cluster in eqiad, add the following line to <code>hieradata/role/eqiad/cache/text.yaml</code>:


<syntaxhighlight lang="yaml">
cache::traffic_shutdown: true
</syntaxhighlight>


Once the change is merged and applied to the nodes with puppet, all requests sent to eqiad will get an HTTP 403 response from the cache frontends instead of being served from cache or routed to the appropriate origin server.
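The same key should apply to other cluster/site combinations, assuming the per-site hieradata files follow the same naming pattern as the text example above; for instance, a sketch (not a change to deploy as-is) that would also lock traffic out of the upload cluster in eqiad:

<syntaxhighlight lang="yaml">
# hieradata/role/eqiad/cache/upload.yaml -- path assumed by analogy with text.yaml above
cache::traffic_shutdown: true
</syntaxhighlight>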
 


== Cache-to-application routing ==
 
Upon entering a given data center, HTTP requests reach a cache frontend host running Varnish. At this layer, caching is controlled by either the <code>cache::req_handling</code> or the <code>cache::alternate_domains</code> hiera setting. The former is used by the main sites such as the wikis and upload.wikimedia.org, while the latter is used by miscellaneous sites such as [[Phabricator|phabricator.wikimedia.org]] and [[grafana.wikimedia.org]]. Which data structure to use depends on whether the site is served by the regular or the misc VCL; for a new service this is almost always misc, so new services will almost certainly need to be added to <code>cache::alternate_domains</code>. If in doubt, contact the Traffic team. The format of both data structures is:


<syntaxhighlight lang="yaml">
cache::alternate_domains:
  hostname1:
    caching: 'normal'
  hostname2:
    caching: 'pass'
</syntaxhighlight>


A value of '''normal''' in the caching attribute means that Varnish will cache the responses for this site unless '''Cache-Control''' says otherwise. Conversely, '''pass''' means that objects for this site are never to be cached. It would be preferable to specify '''normal''' and ensure that the origin returns '''Cache-Control''' with appropriate values for responses that should not be cached, but where this is not possible '''pass''' can be used. A sample of the production values for <code>cache::alternate_domains</code> as of July 2020 follows.


<syntaxhighlight lang="yaml">
cache::alternate_domains:
  15.wikipedia.org:
    caching: 'normal'
  analytics.wikimedia.org:
    caching: 'normal'
  annual.wikimedia.org:
    caching: 'normal'
  blubberoid.wikimedia.org:
    caching: 'pass'
  bienvenida.wikimedia.org:
    caching: 'normal'
</syntaxhighlight>
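Adding a new miscellaneous service means adding one more entry of the same shape. The hostname below is made up purely for illustration; per the guidance above, '''normal''' is preferred as long as the origin sends correct '''Cache-Control''' headers:

<syntaxhighlight lang="yaml">
cache::alternate_domains:
  # Hypothetical new misc service, shown only to illustrate the format.
  newtool.wikimedia.org:
    caching: 'normal'   # use 'pass' only if the origin cannot send correct Cache-Control
</syntaxhighlight>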


If there is no cache hit at the frontend layer, requests are sent to a cache backend running [[ATS]] in the same DC. Backend selection is done by applying consistent hashing to the request URL. If there is no cache hit at the backend layer either, the final step is routing the request out the back edge of the Traffic caching infrastructure into the application layer. The application layer services can exist at one or both of the two primary sites (<code>eqiad</code> and <code>codfw</code>) on a case-by-case basis. This is controlled by ATS remap rules mapping the '''Host''' header to a given origin server hostname. The hiera setting controlling the rules is <code>profile::trafficserver::backend::mapping_rules</code>, and for production it is specified in <code>hieradata/common/profile/trafficserver/backend.yaml</code>. For most services, whether the service is active/active or active/passive is configured via [[DNS/Discovery]]. The exception to this rule is services available in one primary DC only, such as pivot (eqiad-only) in the example below:


<syntaxhighlight lang="yaml">
profile::trafficserver::backend::mapping_rules:
    - type: map
      target: http://15.wikipedia.org
      replacement: https://webserver-misc-apps.discovery.wmnet
    - type: map
      target: http://phabricator.wikimedia.org
      replacement: https://phabricator.discovery.wmnet
    - type: map
      target: http://pivot.wikimedia.org
      replacement: https://an-tool1007.eqiad.wmnet
</syntaxhighlight>
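A new service fronted by the cache layer would also need a remap rule of the same shape. In this sketch both the public hostname and the discovery record are made-up placeholders, not existing entries:

<syntaxhighlight lang="yaml">
profile::trafficserver::backend::mapping_rules:
    - type: map
      target: http://newtool.wikimedia.org          # hypothetical public hostname
      replacement: https://newtool.discovery.wmnet  # hypothetical DNS/Discovery record
</syntaxhighlight>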


Any administrative action such as depooling a primary site for active/active services, or moving an active/passive service from one primary DC to the other, can be performed via [[DNS/Discovery#How_to_manage_a_DNS_Discovery_service|DNS discovery updates]].


[[Category:Caching]]
