Revision as of 20:02, 25 July 2019 by Ayounsi (→ Future evolution)



Anycast recursive DNS


How does it work?

  • The VIP (virtual IP) is configured on the server's loopback interface
  • Bird (routing daemon) advertises the VIP to the routers using BGP
  • (optional) A BFD session is established between Bird and the routers to ensure fast failover in case of server or link failure
  • anycast_healthchecker monitors the local (anycasted) service by querying it every second
  • If a service failure is detected, the VIP stops being advertised to the routers
  • When the service is restored, anycast_healthchecker waits 10s before re-advertising the VIP to avoid flaps
  • The bird service is linked (systemd bind) to the anycast_healthchecker service, so bird is stopped if anycast_healthchecker is not running or has crashed
  • The time between a local service failure and clients being redirected to a different server (advertising the same VIP) is 1s max
  • All servers advertise the same VIP worldwide; clients are routed to the closest server in the BGP sense (same DC, then shorter AS path, etc.), which is not necessarily the lowest-latency one
  • Routers do per-flow load balancing (ECMP) between all local (same-site) servers; hashing is done on IP and port (L4)
  • As a last-resort backup, in case all servers stop advertising the VIP (eg. global misconfiguration), eqiad and codfw routers have less specific (/30) backup static routes pointing to their local servers
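
The moving parts above fit together in a Bird configuration roughly like the following sketch. This is illustrative only, not the production config: the protocol name, neighbor address/ASN and prefix range are placeholders, and the VIP is assumed to already be configured on the loopback by Puppet:

```
# bird.conf sketch (placeholder addresses, ASNs and names)
protocol direct {
    interface "lo";               # picks up the VIP configured on loopback
}
protocol bgp to_router {
    local as 64605;               # server's ASN, matching peer-as on the router
    neighbor 10.0.0.1 as 65001;   # router IP and ASN (placeholders)
    export where net ~ [ 10.3.0.0/24{32,32} ];  # only export anycast /32s
    bfd on;                       # optional fast failure detection
}
```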

How to deploy a new service?

  1. Assign an IP in DNS, from the range - (eg. Gerrit CR 524045)
  2. Configure the server side (eg. Gerrit CR 524037)
    1. Add include ::profile::bird::anycast where you see fit (usually to the service's role)
    2. Configure the VIP and its attributes (usually hieradata/role/common/
        <vip_fqdn>:  # used as identifier
          address: 10.3.x.x # VIP to advertise
          check_cmd: '/bin/true' # Any command to check the health of the service
      • check_cmd is run once per second as user "bird"
      • anycast-healthchecker uses the return code of the health-check script: 0 = healthy, everything else is considered a failure
  3. Configure the router side:
    1. set protocols bgp group Anycast4 neighbor <server_IP>
  4. Add monitoring to the VIP, similar to any Icinga checks, but in modules/profile/manifests/bird/anycast_monitoring.pp
  5. (Optional) if deploying a new type of service, ask Netops to add a backup static route
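
The check_cmd contract from step 2 can be sketched in shell. The messages below are illustrative; only the exit-code logic matters:

```shell
#!/bin/sh
# anycast-healthchecker runs check_cmd every second and only looks at
# its exit code: 0 keeps the VIP advertised, anything else withdraws it.
check_cmd='/bin/true'   # placeholder health check, as in the example above
if "$check_cmd"; then
    status="healthy: VIP stays advertised"
else
    status="unhealthy: VIP withdrawn"
fi
echo "$status"
```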

What other configuration bits are relevant?

Hiera keys:

# service to bind bird to. Usually the anycast-healthchecker
# this means that if anycast-healthchecker crashes, Bird will stop as well
# Usually set globally for Bird
profile::bird::bind_service: 'anycast-healthchecker.service'

# Router IPs with which Bird establishes BGP sessions
# Usually set per site
  - routerIP
  - other_router_IP

# Usually set per service (role)
# But can be set for a specific host as well, for example to specifically remove the VIP from a host to be decommissioned.
profile::bird::advertise_vips:
  <vip_fqdn>: # Used as identifier
    address: 10.3.x.x # VIP to advertise (required)
    check_cmd: '/bin/true' # Any command to check the health of the service, run as user "bird" once per second (required)
    ensure: present # Set to absent to cleanly remove the check (optional, present by default)
    bfd: true # Fast failure detection between router and server (Optional, true by default)

How are the routers configured?

# show protocols bgp group Anycast4 
type external;
/* T209989 */
multihop {
    ttl 193;
}
local-address; # Router's loopback
import anycast_import;  # See below
family inet {
    unicast {
        prefix-limit {
            maximum 50; # Take the session down if more than 50 prefixes
            teardown;   # learned from the servers (eg. misconfiguration)
        }
    }
}
export NONE;
peer-as 64605;  # Server's ASN
bfd-liveness-detection {
    minimum-interval 300; # Take the session down after 3*300ms failures
}
multipath;  # Enable load balancing (remove for active/passive)
neighbor;  # Servers' IPs

# show policy-options policy-statement anycast_import 
term anycast4 {
    from {
        prefix-list-filter anycast-internal4 longer; # Only accept prefixes in the defined range
    }
    then accept;
}
then reject;

# show policy-options prefix-list anycast-internal4;

# show routing-options static route 

How to monitor anycast_healthchecker logs?
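
A sketch, assuming the daemon runs under a systemd unit named anycast-healthchecker and writes to its upstream default log file (both names are assumptions, not confirmed for production):

```shell
# Follow the service's journal
journalctl -fu anycast-healthchecker
# Or tail the daemon's own log file (upstream default path, may differ)
tail -f /var/log/anycast-healthchecker/anycast-healthchecker.log
```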


How to know which routes a router takes to a specific VIP?

Here both next hops (servers) are load balanced, as they are under the same *[BGP] block.

> show route        *[BGP/170] 1w4d 08:54:21, localpref 100, from
                      AS path: 64605 I, validation-state: unverified
                      to via ae3.2003
                    > to via ae4.2004

mtr can also be used for a coarser, site-level view. Eg:

bast5001:~$ mtr --report
Start: Fri Apr  5 16:48:21 2019
HOST: bast5001                    Loss%   Snt   Last   Avg  Best  Wrst StDev
  1.|-- ae1-510.cr2-eqsin.wikimed  0.0%    10    0.3   0.7   0.2   4.4   1.1
  2.|-- ae0.cr1-eqsin.wikimedia.o  0.0%    10    0.2   1.0   0.2   7.8   2.3
  3.|-- xe-5-1-2.cr1-codfw.wikime  0.0%    10  195.1 195.3 195.1 196.5   0.3
  4.|-- recdns.anycast.wmnet       0.0%    10  195.1 195.1 195.1 195.1   0.0

How to temporarily depool a server

Disable Puppet, stop the bird service.
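
As a sketch (the disable-puppet/enable-puppet wrapper names are assumed from standard WMF host tooling; the bird unit name comes from this page):

```shell
sudo disable-puppet "temporarily depooling anycast recdns"  # wrapper name assumed
sudo systemctl stop bird   # the VIP is withdrawn from the routers
# To repool: re-enable Puppet and start bird again
sudo enable-puppet "temporarily depooling anycast recdns"
sudo systemctl start bird
```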

How to depool a server long term

Several options:

  • Deactivate the neighbor IP on the router side
  • (Cleaner) Add a specific profile::bird::advertise_vips with the same identifier to the server, and check_cmd: /bin/false or ensure: absent
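
The cleaner option can be sketched as a host-level Hiera override (the file path and FQDN are placeholders):

```yaml
# hieradata/hosts/<host_to_decommission>.yaml (illustrative path)
profile::bird::advertise_vips:
  <vip_fqdn>:            # same identifier as the role-level definition
    address: 10.3.x.x
    ensure: absent       # cleanly withdraws the VIP from this host
```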


Limitations

  • The server "self-monitors" itself: if it fails in a way where BGP stays up but DNS on the VIP is unreachable from the outside (eg. iptables), this will cause an outage
  • By the nature of anycast, Icinga will only check the health of the VIP instance closest to it
    • This could be worked around by checking the anycasted service health from various vantage points (eg. bastion hosts)
    • Health checks to the servers' real IPs still work
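
The vantage-point workaround could look like the following sketch (the bastion hostnames are placeholders; recdns.anycast.wmnet is the VIP name from the mtr example above):

```shell
# Query the anycasted resolver from several bastions; each should reach
# its closest healthy instance. Hostnames below are placeholders.
for bast in bast1002.wikimedia.org bast2002.wikimedia.org; do
    echo "== $bast =="
    ssh "$bast" dig +short +time=1 @recdns.anycast.wmnet wikipedia.org A
done
```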

Future evolution

  • IPv6 is supported by both Bird and anycast_healthchecker, but not implemented in Puppet (no current need)
  • Upgrade anycast_healthchecker to 0.9.0 or more recent (and rollback
  • Implement BGP graceful shutdown on the server side to drain traffic before depooling
  • Send anycast_healthchecker logs to central syslog server
  • Use BGP metrics to influence anycast routing (eg. don't send eqiad to esams but to codfw in case of eqiad's resolvers failure)
  • Investigate BGP routing policies between sites (eg. eqiad only send public prefixes to esams via BGP) - T227808