{{outdated}}
'''Cloud VPS''' is a virtualization cloud that uses [http://www.openstack.org/software/openstack-compute OpenStack Compute].  Base images are managed with [http://docs.openstack.org/developer/glance/ Glance] and authentication uses LDAP-backed [http://docs.openstack.org/developer/keystone/ Keystone].


Cloud VPS currently runs in a single datacenter in Ashburn, Virginia.  In the future it will span two or more datacenters, with a slightly different configuration in each.


For troubleshooting immediate issues, visit [[Portal:Cloud_VPS/Admin/Troubleshooting]].


[[File:OpenStack_at_WMCS.pdf|thumb|Slides from a brief presentation about WMCS OpenStack architecture]]


== Cloud VPS Eqiad (Ashburn, VA) ==


=== Horizon ===


Most users will manage their virtual servers using [[Horizon]].  Horizon is an upstream OpenStack web interface for the OpenStack APIs.  Our Horizon site also includes several custom dashboards to access special WMCS features not available in stock Horizon.


Horizon is hosted on labweb1001.wikimedia.org and labweb1002.wikimedia.org and can be accessed at https://horizon.wikimedia.org.


Individual user accounts on WMCS can also be created via Striker, which is at https://toolsadmin.wikimedia.org.  Currently any account created there is automatically added to the Tools project.


=== Controller ===


The OpenStack controller box (currently named 'labcontrol1001') runs the Glance and Keystone services, as well as nova-conductor and nova-scheduler.  It is also the preferred place to access the OpenStack command-line client.
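
For example, a root session on the controller might use the stock client like this.  This is a minimal sketch; the credentials file path is an assumption, not a documented location:

<syntaxhighlight lang="bash">
# Hypothetical session on labcontrol1001; the credentials file path is an assumption.
source /root/novaenv.sh                  # load OpenStack admin credentials (assumed path)
openstack project list                   # enumerate Cloud VPS projects (Keystone)
openstack image list                     # base images managed by Glance
openstack server list --project tools    # VMs in one project (admin-only flag)
</syntaxhighlight>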


A second server, labcontrol1002, serves as a hot spare for labcontrol1001.


=== Network ===


The network node ('labnet1001') hosts the nova-network service.  We currently run a single cloud-wide network that supports all lab nodes and projects.  In the near future we will move to [http://docs.openstack.org/developer/neutron/ OpenStack Neutron] for our network setup.


Labnet1001 also hosts the nova-api service.


Labnet1002 serves as a hot spare for labnet1001.  Switching network service between the two hosts causes cloud-wide network downtime and requires several [[Portal:Cloud_VPS/Admin/Troubleshooting#Fail-over | delicate steps]].


=== Virtualization ===


There are currently 22 virtualization nodes, named labvirt1001-1022, all running in eqiad:
* 1001-1009 are high-powered multi-CPU [[HP_DL380p | HP servers]]; each of them hosts dozens of virtual machines.
* 1010-1022 are similar to 1001-1009 but with large SSD raids.
* 1019 and 1020 are used to host VMs that are themselves part of the Cloud infrastructure (e.g. database servers).
* Some labvirts (as of 2015-05-10, labvirt1018, 1021, and 1022) are always reserved as emergency spares; we try to keep around 10% excess capacity at all times.
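
The current set of virtualization nodes can be enumerated with the stock OpenStack client; a sketch, assuming admin credentials are already loaded as in the Controller example above:

<syntaxhighlight lang="bash">
# List the labvirt hypervisors known to nova, then inspect one of them.
openstack hypervisor list                 # one row per labvirt node
openstack hypervisor show labvirt1001     # capacity and usage for a single host
</syntaxhighlight>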


Labvirt hosts can be pooled or depooled in Puppet Hiera using the profile::openstack::main::nova::scheduler_pool setting.
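
A sketch of how that looks in Hiera; the file path and value format here are assumptions for illustration, not the live configuration:

<syntaxhighlight lang="bash">
# Hypothetical hieradata excerpt; file path and value format are assumptions.
$ cat hieradata/eqiad/profile/openstack/main/nova.yaml
profile::openstack::main::nova::scheduler_pool:
  - labvirt1001
  - labvirt1002
  # ...other pooled hosts...
  # a host omitted from this list (e.g. an emergency spare) receives no new VMs
</syntaxhighlight>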


=== Storage ===


Some VPS projects use shared NFS storage; most do not.  Options for each project are:


* Each member of a project has a project-wide shared home directory.
* The project has a public shared volume, generally mounted to /data/project


All of the above are hosted on various NFS servers, labstore1xxx.
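
From inside a VM in an NFS-enabled project, the shares appear as ordinary NFS mounts and can be inspected with standard tools (a sketch; which shares exist depends on the project's settings):

<syntaxhighlight lang="bash">
# Inspect the shared storage from inside an instance.
mount | grep nfs           # show the NFS mounts and which labstore server backs them
df -h /data/project        # the project-wide shared volume
df -h /home                # shared home directories
</syntaxhighlight>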


=== Monitoring ===
Most OpenStack services and related things are monitored in Icinga just like other production services.


VMs in the 'Tools' and 'Deployment-Prep' projects are monitored with [http://shinken.wmflabs.org Shinken].


=== LDAP ===


LDAP is used for services throughout the WMF; the same LDAP database keeps track of project membership and SSH keys for logins on VPS servers.  LDAP is hosted on seaborgium and neptunium.  The LDAP server software is OpenLDAP.


Each VPS instance has an /etc/ldap.conf file (managed by Puppet) that maintains info about the LDAP servers.
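
For illustration, a query against one of these servers might look like the following; the base DN and attribute name are assumptions rather than confirmed values:

<syntaxhighlight lang="bash">
# Hypothetical anonymous query; base DN and attribute name are assumptions.
ldapsearch -x -H ldap://seaborgium.wikimedia.org \
    -b 'ou=people,dc=wikimedia,dc=org' \
    '(uid=someuser)' sshPublicKey
</syntaxhighlight>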


=== DNS ===
[[Portal:Cloud_VPS/DNS|DNS]] is handled by PowerDNS.  Private DNS entries (e.g. foo.eqiad.wmflabs) are created via Designate Sink and stored in a PDNS server using a MySQL backend.  Public DNS entries are created via Horizon and the Designate API.
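
Both kinds of entry can be checked with ordinary DNS tools; a sketch (the private instance name is the hypothetical example from above, and it only resolves from inside the cloud network):

<syntaxhighlight lang="bash">
# Private entry: only resolvable from inside the cloud network.
dig +short foo.eqiad.wmflabs
# Public entry: resolvable from anywhere, served by PowerDNS/Designate.
dig +short tools.wmflabs.org
</syntaxhighlight>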


[[File:Wmcs dns.pdf|thumb|DNS in WMCS]]
