Portal:Cloud VPS/Admin/Keepalived

Revision as of 15:06, 17 June 2020 by Arturo Borrero Gonzalez (create page with basic content)

This page describes how to introduce L3 network high availability for arbitrary services running on virtual machines in Cloud VPS by using keepalived.

The documentation here is based on original notes by Jason Hedden.

Puppet manifest

In the puppet profile you want to be HA-enabled, declare the keepalived class. The profile should expose at least 3 hiera keys for configuring the service:

class profile::myprofile (
    Array[Stdlib::Fqdn] $keepalived_vips     = lookup('profile::myprofile::keepalived::vips',     {default_value => ['localhost']}),
    Array[Stdlib::Fqdn] $keepalived_peers    = lookup('profile::myprofile::keepalived::peers',    {default_value => ['localhost']}),
    String              $keepalived_password = lookup('profile::myprofile::keepalived::password', {default_value => 'notarealpassword'}),
) {
    class { 'someawesomeservice': }
    class { 'keepalived':
        auth_pass         => $keepalived_password,
        peers             => delete($keepalived_peers, $::fqdn),
        vips              => $keepalived_vips.map |$host| { ipresolve($host, 4) },
        priority          => fqdn_rand(100),
        default_state     => 'BACKUP',
        interface         => 'eth0',
        virtual_router_id => 51,
    }
}

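For reference, a class like the one above ends up rendering a keepalived.conf roughly along these lines (a sketch only; the priority, peer address and VIP address shown here are hypothetical placeholders):

    vrrp_instance default {
        state BACKUP
        interface eth0
        virtual_router_id 51
        priority 42
        authentication {
            auth_type PASS
            auth_pass notarealpassword
        }
        unicast_peer {
            172.16.0.11    # the other cluster members, from the peers list
        }
        virtual_ipaddress {
            172.16.0.42    # the VIPs, resolved from their FQDNs
        }
    }

All nodes start as BACKUP; VRRP then elects the node with the highest priority to hold the virtual addresses.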
The profile::myprofile::keepalived::vips key holds an array of FQDNs which are going to be made HA by keepalived. In profile::myprofile::keepalived::peers you include all the servers in the keepalived cluster, i.e., all the VMs that will share the VIPs. Finally, profile::myprofile::keepalived::password should contain a random string that enables auth in the cluster, and should probably live in the labs/private.git repo.
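The priority => fqdn_rand(100) line in the manifest gives each node a stable, host-specific VRRP priority, so elections are deterministic without hardcoding a priority per node. A rough Python analogue of the idea (not Puppet's exact fqdn_rand algorithm; for illustration only):

```python
import hashlib

def fqdn_rand(maximum: int, fqdn: str) -> int:
    """Stable pseudo-random integer in [0, maximum) derived from the
    FQDN, similar in spirit to Puppet's fqdn_rand() (this is not the
    exact algorithm Puppet uses)."""
    digest = hashlib.md5(fqdn.encode("utf-8")).hexdigest()
    return int(digest, 16) % maximum

# Each host gets the same value on every run, so the VRRP priority
# does not flap between Puppet agent runs.
print(fqdn_rand(100, "paws-k8s-haproxy-1.paws.eqiad.wmflabs"))
print(fqdn_rand(100, "paws-k8s-haproxy-2.paws.eqiad.wmflabs"))
```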

Example hiera config:

profile::myprofile::keepalived::peers:
- paws-k8s-haproxy-1.paws.eqiad.wmflabs
- paws-k8s-haproxy-2.paws.eqiad.wmflabs
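The password key itself should not live in the public hiera; in labs/private.git it would look something like this (the value here is a hypothetical placeholder, not a real secret):

profile::myprofile::keepalived::password: some-long-random-string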

Neutron configuration

This documentation assumes the VIP is a VM instance address (i.e., 172.16.x.x). To future-proof the setup, it is best practice to pre-allocate the VIP address in Neutron, so the address is reserved and never allocated to anything else by mistake:

user@cloudcontrol1004:~$ sudo wmcs-openstack --os-project-id=myproject port create --network 7425e328-560c-4f00-8e99-706f3fb90bb4 my-vip-port
| Field                 | Value                                                                       |
| fixed_ips             | ip_address='', subnet_id='a69bdfad-d7d2-4cfa-8231-3d6d3e0074c9' |
| name                  | my-vip-port                                                                 |
| network_id            | 7425e328-560c-4f00-8e99-706f3fb90bb4                                        |
| port_security_enabled | True                                                                        |
| project_id            | myproject                                                                   |
| status                | DOWN                                                                        |

Note that an IP address was allocated to the port; in this example it is shown in the fixed_ips field of the output above.

In the actual neutron ports for the VM instances, allow traffic using that allocated address:

user@cloudcontrol1004:~$ sudo wmcs-openstack port set --allowed-address ip-address= $PORT_UUID_VM1
user@cloudcontrol1004:~$ sudo wmcs-openstack port set --allowed-address ip-address= $PORT_UUID_VM2

The port UUID for each VM can be queried using sudo wmcs-openstack --os-project-id myproject port list and searching for the VM IP address as reported in sudo wmcs-openstack --os-project-id myproject server list.


Make sure the VIP FQDN resolves to the address allocated by neutron for the port.
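A quick way to sanity-check that agreement is to compare a DNS lookup of the VIP FQDN against the address from the port's fixed_ips field. A minimal sketch (the FQDN and address in the commented call are hypothetical; substitute your own):

```python
import socket

def vip_resolves_to(fqdn: str, expected_ip: str) -> bool:
    """Return True if the VIP FQDN resolves to the address that
    Neutron allocated for the pre-created port."""
    return socket.gethostbyname(fqdn) == expected_ip

# Hypothetical example values:
# vip_resolves_to("my-vip.myproject.eqiad1.wikimedia.cloud", "172.16.0.42")
```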

See also

  • nothing yet.