
Portal:Cloud VPS/Admin/Neutron LVS

Revision as of 13:03, 16 November 2018 by Arturo Borrero Gonzalez

This page contains information on running LVS in our CloudVPS environment using Neutron.

All the tests were conducted on the Debian Jessie (hardware servers) / OpenStack Mitaka with Neutron / Debian Stretch (VMs) combination. The Trusty (hardware servers) / OpenStack Mitaka / Nova-network combination wasn't tested or investigated.

Experimental setup

All of this was built in our labtestn deployment. To test LVS-DR, ldirectord was used to manage the ipvsadm configuration. To keep the setup simple, no actual ops/puppet.git LVS roles were consulted, and floating IPs (public addresses) weren't tested as VIPs.

  • Create a Cloud VPS project lvs-test in the codfw1dev-r region
  • Create 3 VM instances in the lvs-test project: lvs-server (172.16.128.214), lvs-backend-01 (172.16.128.212) and lvs-backend-02 (172.16.128.213).
  • Create and assign an additional IP address to lvs-server: 172.16.128.211 (this will be our VIP): nova add-fixed-ip <server> <network-id>
  • SSH to the VMs and install basic packages: tcpdump, netcat, ipvsadm, ldirectord. Our current labtestn Glance image requires some manual steps to log in and to be able to install packages.
  • Add the VIP to the lo interface on the backend servers (ip addr add 172.16.128.211/32 dev lo) and to eth0 on the LVS server.
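A minimal sketch of the per-host VIP plumbing. Classic LVS-DR setups on Linux usually also need the arp_ignore/arp_announce sysctls on the backends so they don't answer ARP for the VIP; those sysctls are not part of the steps above, so treat them as an assumption to verify. The commands are printed rather than executed, since applying them needs root:

```shell
# Dry-run sketch: print the commands that would plumb the VIP on each role.
# arp_ignore/arp_announce are standard LVS-DR practice (an assumption here,
# not part of the original test procedure).
vip_setup_cmds() {
  role="$1"; vip="$2"
  case "$role" in
    backend)
      echo "ip addr add $vip/32 dev lo"
      echo "sysctl -w net.ipv4.conf.all.arp_ignore=1"    # reply only for addresses on the ingress interface
      echo "sysctl -w net.ipv4.conf.all.arp_announce=2"  # use best local source address in ARP requests
      ;;
    server)
      echo "ip addr add $vip/32 dev eth0"
      ;;
  esac
}

vip_setup_cmds server 172.16.128.211
vip_setup_cmds backend 172.16.128.211
```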
  • Configure ldirectord with a very basic config (in the lvs-server VM) /etc/ha.d/ldirectord.cf:
checktimeout=3
checkinterval=1
autoreload=yes
quiescent=no

virtual=172.16.128.211:80
        servicename=test
        comment=test
        real=172.16.128.212:80 gate
        real=172.16.128.213:80 gate
        service=http
        scheduler=rr
        protocol=tcp
        checktype=ping
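For reference, ldirectord ends up programming roughly the following ipvsadm rules for the config above ("gate" maps to -g, i.e. gatewaying/direct routing). This sketch prints the commands instead of running them, since ipvsadm needs root and the ip_vs kernel module:

```shell
# Dry-run sketch: emit the ipvsadm commands equivalent to the ldirectord
# config above ($1 = VIP:port, remaining args = real servers).
ipvs_cmds() {
  vip="$1"; shift
  echo "ipvsadm -A -t $vip -s rr"          # add virtual TCP service, round-robin scheduler
  for real in "$@"; do
    echo "ipvsadm -a -t $vip -r $real -g"  # add real server, -g = direct routing
  done
}

ipvs_cmds 172.16.128.211:80 172.16.128.212:80 172.16.128.213:80
```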
  • Check resulting ipvsadm configuration:
root@lvs-server:~# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.128.211:80 rr
  -> 172.16.128.212:80            Route   1      0          0         
  -> 172.16.128.213:80            Route   1      0          0  
  • Start testing from the Neutron server, which acts as the external client here. Nothing works with LVS yet:
aborrero@labtestneutron2001:~$ nc 172.16.128.211 80
^C
  • Now (WARNING) begin disabling security features for the involved ports:
root@labtestcontrol2003:~# neutron port-list | egrep 211\|212\|214\|213
| 64d95341-9f1c-469b-a69a-da732b423c0d |                      | fa:16:3e:5e:9e:f0 | {"subnet_id": "7adfcebe-b3d0-4315-92fe-e8365cc80668", "ip_address": "172.16.128.213"} |
| 7306ee5f-42ca-43aa-a25a-9993c899c428 |                      | fa:16:3e:19:48:05 | {"subnet_id": "7adfcebe-b3d0-4315-92fe-e8365cc80668", "ip_address": "172.16.128.211"} |
|                                      |                      |                   | {"subnet_id": "7adfcebe-b3d0-4315-92fe-e8365cc80668", "ip_address": "172.16.128.214"} |
| b166d7aa-3b8f-49f0-a220-b341bda10f4a |                      | fa:16:3e:1f:35:9a | {"subnet_id": "7adfcebe-b3d0-4315-92fe-e8365cc80668", "ip_address": "172.16.128.212"} |
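When scripting this, the port UUIDs can be pulled out of the table programmatically. A small hypothetical helper (port_id_for_ip is our name, not an existing tool) that parses `neutron port-list` table output on stdin:

```shell
# Hypothetical helper: extract the port UUID for a given fixed IP from
# `neutron port-list` table output piped in on stdin.
port_id_for_ip() {
  grep "\"ip_address\": \"$1\"" | awk -F'|' '{gsub(/ /, "", $2); print $2}'
}

# Example with one row of the table above:
row='| b166d7aa-3b8f-49f0-a220-b341bda10f4a | | fa:16:3e:1f:35:9a | {"subnet_id": "7adfcebe-b3d0-4315-92fe-e8365cc80668", "ip_address": "172.16.128.212"} |'
printf '%s\n' "$row" | port_id_for_ip 172.16.128.212
```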
  • Disable the security group (iptables rules) and port security (ebtables rules) for all instances (shown here for one port):
root@labtestcontrol2003:~# neutron port-update 64d95341-9f1c-469b-a69a-da732b423c0d --no-security-group
Updated port: 64d95341-9f1c-469b-a69a-da732b423c0d
root@labtestcontrol2003:~# neutron port-update 64d95341-9f1c-469b-a69a-da732b423c0d --port_security_enabled=False
Updated port: 64d95341-9f1c-469b-a69a-da732b423c0d
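The same two calls have to be repeated for every involved port. A dry-run sketch that only prints the commands (running them for real needs the neutron CLI and admin credentials); note that --no-security-group must come first, since port security can only be disabled on a port with no security groups assigned:

```shell
# Dry-run sketch: print the neutron commands needed to strip security from
# each port. Order matters: remove security groups before disabling port
# security.
disable_port_security_cmds() {
  for port in "$@"; do
    echo "neutron port-update $port --no-security-group"
    echo "neutron port-update $port --port_security_enabled=False"
  done
}

disable_port_security_cmds \
  64d95341-9f1c-469b-a69a-da732b423c0d \
  7306ee5f-42ca-43aa-a25a-9993c899c428 \
  b166d7aa-3b8f-49f0-a220-b341bda10f4a
```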
  • Check whether LVS works now:
aborrero@labtestneutron2001:~ 1 $ nc 172.16.128.211 80
(UNKNOWN) [172.16.128.211] 80 (http) : Connection refused
aborrero@labtestneutron2001:~ 1 $ nc 172.16.128.211 80
(UNKNOWN) [172.16.128.211] 80 (http) : Connection refused

root@lvs-server:~# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.128.211:80 rr
  -> 172.16.128.212:80            Route   1      1          0         
  -> 172.16.128.213:80            Route   1      1          0

root@lvs-backend-01:~# tcpdump -n -i any tcp port 80
12:28:42.570302 IP 10.192.20.4.41844 > 172.16.128.211.80: Flags [S], seq 2177770907, win 29200, options [mss 1460,sackOK,TS val 2027567397 ecr 0,nop,wscale 9], length 0
12:28:42.570333 IP 172.16.128.211.80 > 10.192.20.4.41844: Flags [R.], seq 0, ack 2177770908, win 0, length 0

root@lvs-backend-02:~# tcpdump -n -i any tcp port 80
12:28:40.271707 IP 10.192.20.4.41842 > 172.16.128.211.80: Flags [S], seq 3771176807, win 29200, options [mss 1460,sackOK,TS val 2027567281 ecr 0,nop,wscale 9], length 0
12:28:40.271740 IP 172.16.128.211.80 > 10.192.20.4.41842: Flags [R.], seq 0, ack 3771176808, win 0, length 0
  • It works! The connections are refused only because no web server is actually listening on port 80 on the backends; the tcpdump output shows the SYN packets reaching both real servers through the director, which is what matters.
  • Re-enable port security (ebtables rules) and security groups (iptables rules) and the setup stops working again.

Conclusions

  • With the OpenStack version we currently run (Mitaka), there is no way this setup can be self-service for users: Horizon can't perform all the required operations.
  • We (Cloud VPS admins, WMCS) need to configure quite a few things inside OpenStack (additional addresses, ports, etc.) to get this setup working. This shouldn't be a problem if only a couple of projects need this special setup.
  • The only way this setup can work is by disabling port security and security groups. Note that port security can only be disabled on a port that has no security groups assigned.
  • Disabling port security and security groups may have severe network security implications: Neutron will no longer check or enforce anything network-related for the affected VMs, so address spoofing becomes possible. A local firewall inside each VM is highly desirable.
    • We could try whitelisting IP/MAC pairs beforehand (for example via Neutron's allowed-address-pairs extension) and keep port security and security groups enabled. Worth trying, but that would be an even more manual procedure.
  • The above point is especially important since all projects and all VMs share the same broadcast domain.
  • For additional security, we could consider having an additional network defined in OpenStack (a 172.16.x.x/24 or the like) just for projects that require LVS setups.
  • Future OpenStack versions may or may not add additional flexibility/security for this use case. That wasn't checked.