Portal:Cloud VPS/Admin/notes/keystone

Revision as of 10:27, 20 January 2020 by imported>Arturo Borrero Gonzalez (add deprecation warning)

This page contains some notes on keystone, specifically how we deployed keystone for Cloud VPS.

previous and current status

Originally we had one keystone service per deployment.

Then, Chase merged/collapsed the keystone services in labtest/labtestn, so a single keystone database serves both deployments. This means that the keystone database stores endpoints for both deployments, making use of the OpenStack concept of regions.
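As a rough illustration of the region concept (the endpoint entries below are hypothetical examples, not the actual contents of the shared database), the merged catalog can be thought of as an endpoint table keyed by region, from which each client only selects its own deployment's entries:

```python
# Sketch of a shared keystone catalog serving two deployments via
# regions. Endpoint data is made up for illustration; the real
# entries live in the keystone database.
endpoints = [
    {"service": "nova", "region": "labtest", "url": "https://nova.labtest.example:8774/v2.1"},
    {"service": "nova", "region": "labtestn", "url": "https://nova.labtestn.example:8774/v2.1"},
    {"service": "neutron", "region": "labtestn", "url": "https://neutron.labtestn.example:9696"},
]

def endpoints_for_region(region):
    """Return only the endpoints belonging to one deployment (region)."""
    return [e for e in endpoints if e["region"] == region]

# A client pinned to the labtestn region only sees labtestn endpoints.
print([e["service"] for e in endpoints_for_region("labtestn")])
# → ['nova', 'neutron']
```

This is why a single database can back several deployments: the region column, not a separate database, is what keeps the deployments apart.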

The reason for this merge was to allow a smooth migration from the nova-network model to the neutron model.

On 2018-08-13 we also merged the main/eqiad1 keystone services (see phab:T201504).

Please note where the databases live:

  • main/eqiad1: m5-master.eqiad.wmnet
  • labtest/labtestn:

basic deployment

Please follow the bootstrap instructions in modules/openstack/templates/bootstrap/keystone/.

creating a shared keystone

Things to consider when collapsing keystone to serve more than one deployment (example tracking Phabricator task: phab:T201504):

  • network connectivity (ferm ACLs) between the clients, the endpoints, and the databases
  • regions in the database are correctly set
  • multi-region awareness: lots of code using the keystone API isn't region-aware
  • nova_controller vs keystone_host: we have two different hiera keys for setting where the keystone daemon lives; the latter is preferred
  • endpoints are updated in the database to reflect the final environment
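The multi-region pitfall in the checklist above can be sketched as follows (hypothetical endpoint data and helper names, for illustration only): code that assumes one endpoint per service silently picks whichever entry happens to come first once a second region shares the catalog, while region-aware code filters on the region as well as the service.

```python
# Sketch of the multi-region pitfall from the checklist above.
# Endpoint data is hypothetical, not taken from the real catalog.
endpoints = [
    {"service": "glance", "region": "eqiad", "url": "https://glance.eqiad.example:9292"},
    {"service": "glance", "region": "eqiad1", "url": "https://glance.eqiad1.example:9292"},
]

def naive_lookup(service):
    # Region-unaware: returns whichever matching entry comes first,
    # here the eqiad one purely because of list order.
    return next(e["url"] for e in endpoints if e["service"] == service)

def region_aware_lookup(service, region):
    # Region-aware: filters on the region as well as the service.
    return next(
        e["url"] for e in endpoints
        if e["service"] == service and e["region"] == region
    )

print(naive_lookup("glance"))                   # → https://glance.eqiad.example:9292
print(region_aware_lookup("glance", "eqiad1"))  # → https://glance.eqiad1.example:9292
```

Auditing callers of the keystone API for the naive pattern is part of what makes the merge work safe.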