Netbox

Revision as of 20:46, 14 June 2022

Netbox is an "IP address management (IPAM) and data center infrastructure management (DCIM) tool".

At Wikimedia it is used as the DCIM and IPAM system, as well as being used as an integration point for switch and port management, DNS management, and similar operations.

History

  • At Wikimedia it was evaluated in Phab:T170144 as a replacement for Racktables.
  • In Phab:T199083 the actual migration between the systems took place.
  • Phab:T296452 documents the large upgrade from 2.10 to 3.2 and the subsequent improvements it brought.

Web UI

  • https://netbox.wikimedia.org/
  • Login using your LDAP/Wikitech credentials
  • Currently you need to be in either the "ops" or "wmf" LDAP group to be able to log in (Hiera key profile::netbox::cas_group_attribute_mapping)

API

  • Endpoint
    • From an internal host (e.g. everything BUT your laptop), you MUST use netbox.discovery.wmnet
  • REST API
    • Create a token on https://netbox.wikimedia.org/user/api-tokens/, ideally read-only and with an expiration date
    • Doc: https://netbox.wikimedia.org/api/docs/
    • Python library: https://github.com/netbox-community/pynetbox/
    • Note that the REST API is quite slow; make sure to optimize your queries
  • GraphQL
    • Soon
  • Spicerack
    • See https://doc.wikimedia.org/spicerack/master/api/spicerack.netbox.html for the Netbox support
    • It's preferred to use the built-in wrapper functions rather than the pynetbox interface directly, as your cookbook might break if direct calls are not updated when Netbox introduces breaking changes
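The REST API uses token authentication (tokens are created in the web UI under your user's API tokens). A minimal sketch of building an authenticated request with Python's standard library; the token value and the query parameters are placeholders:

```python
import urllib.request

NETBOX_API = "https://netbox.wikimedia.org/api"  # use netbox.discovery.wmnet from internal hosts
TOKEN = "xxxxxxxxxxxxxxxx"  # placeholder: create a (read-only) token in the web UI

def get(path: str) -> urllib.request.Request:
    """Build an authenticated GET request for a NetBox REST endpoint."""
    return urllib.request.Request(
        f"{NETBOX_API}{path}",
        headers={
            "Authorization": f"Token {TOKEN}",  # NetBox token auth scheme
            "Accept": "application/json",
        },
    )

# Filter server-side where possible: the REST API is slow, so narrowing the
# query is cheaper than fetching everything and filtering client-side.
req = get("/dcim/devices/?status=active&limit=50")
# urllib.request.urlopen(req) would perform the call; responses are paginated
# JSON objects with "count", "next", "previous" and "results" keys.
```

For anything beyond a one-off query, pynetbox (linked above) or the Spicerack wrappers are the better interfaces.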

Staging

  • It consists of a single bullseye VM combining frontend and database
  • Reachable on netbox-next.wikimedia.org and netbox-next.discovery.wmnet
    • Behind caches, similarly to the prod infrastructure
  • Its data comes from a manual dump of the production database
    • Reach out to Infrastructure Foundations if you need a fresher database
    • Be careful not to leak any of its data
  • It is used to test Netbox upgrades, scripts, reports, etc.
  • This host is active in monitoring (with notifications disabled)
    • As such, make sure that all alerts have cleared after your tests

Production infrastructure

The production Netbox infrastructure consists of 4 bullseye VMs (see all Netbox VMs):

  • 2 active/passive frontends (netboxXXXX)
  • 2 primary/replica PostgreSQL databases (netboxdbXXXX)

By default the active/primary servers are the eqiad ones.


TODO: caches/TLS

Monitoring

TODO

Failover

Frontends

Using confctl, pool the passive server and depool the previous active one.

THIS NEEDS TO BE CONFIRMED TO BE THE PROPER COMMANDS DO NOT RUN UNTIL THIS WARNING IS REMOVED

confctl --object-type discovery select 'dnsdisc=netbox,name=codfw' set/pooled=true
confctl --object-type discovery select 'dnsdisc=netbox,name=eqiad' set/pooled=false

If the failover is going to last (e.g. longer than a server reboot), change the profile::netbox::active_server Hiera key to the backup server. This will ensure the cron/systemd timers as well as the Icinga checks are running.

Note that having the active frontend in a different datacenter than the primary database will result in Netbox being slower.

Databases

If the primary database server needs a short downtime, it is recommended not to attempt a failover and instead accept a brief Netbox outage.

There is currently no documented procedure for failing the database over, let alone for failing back to the former primary.

See also Postgres

Database

Restore

First, analyze the Netbox changelog to choose the best way to perform a restore.

The general options are:

  • Manually (or via the API) replay the actions listed in the changelog in reverse order. The changelog entries don't have full raw data; some of them might show names instead of the IDs required by the API.
  • Use the CSV dumps to recover data. Restoring them is not trivial either, as some of the Netbox exports are not immediately re-importable due to reference resolution.
  • Restore a database dump. This ensures consistency at a given point in time, and could even be used to perform a partial restore using pg_restore.

To restore files from Bacula back to the client, use bconsole on helium and refer to Bacula#Restore_(aka_Panic_mode) for detailed steps.

PostgreSQL

Dump backups

On the database servers, a puppetized cron job (class postgresql::backup) automatically creates a daily dump file of all local Postgres databases (pg_dumpall) and stores it in /srv/postgres-backup.

This path is then backed up by Bacula (see Bacula#Adding a new client).

For more details, the related subtask to setup backups was Phab:T190184.

Restore the DB dump

  • Check the dump list on both DB hosts in /srv/postgres-backup; a more recent one might be on the other host.
  • If needed, copy the dump to the master host (as of June 2022, netboxdb1002)
  • Unzip the chosen dump file
  • Take a one-off backup right before starting the restore (the .bak suffix is important so that it is not auto-evicted):
sudo -u postgres  /usr/bin/pg_dumpall | /bin/gzip > /srv/postgres-backup/${USER}-DESCRIPTION.psql-all-dbs-$(date +\%Y\%m\%d).sql.gz
  • Connect to the DB, list and drop the Netbox database:
psql
postgres=# \l
...
postgres=# DROP DATABASE netbox;
DROP DATABASE
postgres=#
  • Restore the DB with:
$ gunzip  < ${DUMP_FILE} | sudo -u postgres /usr/bin/psql
Flush caches after a restore

After a restore, Netbox caches must be flushed to ensure consistency and make the changes visible.

To perform the flush SSH into the Netbox active host (as of June 2022 netbox1002) and execute:

cd /srv/deployment/netbox
. venv/bin/activate  # Activate the Netbox Python virtualenv
cd deploy/src/netbox
python manage.py invalidate all  # Perform the flush

Sanitizing a database dump

The Netbox database contains a few bits of sensitive information, and if it is going to be used for testing purposes in WMCS it should be sanitized first.

  1. Create a copy of the main database: createdb netbox-sanitize && pg_dump netbox | psql netbox-sanitize
  2. Run the below SQL code on the netbox-sanitize database.
  3. Dump and drop the database: pg_dump netbox-sanitize > netbox-sanitized.sql; dropdb netbox-sanitize

THE BELOW COMMANDS ARE OUTDATED AND MIGHT NOT COVER EVERYTHING THAT NEEDS TO BE SANITIZED

-- truncate secrets
TRUNCATE secrets_secret CASCADE;
TRUNCATE secrets_sessionkey CASCADE;
TRUNCATE secrets_userkey CASCADE;

-- sanitize dcim_serial
UPDATE dcim_device SET serial = concat('SERIAL', id::TEXT);

-- truncate user table
TRUNCATE auth_user CASCADE;

-- sanitize dcim_interface.mac_address
UPDATE dcim_interface SET mac_address = CONCAT(
                   LPAD(TO_HEX(FLOOR(random() * 255 + 1) :: INT)::TEXT, 2, '0'), ':',
                   LPAD(TO_HEX(FLOOR(random() * 255 + 1) :: INT)::TEXT, 2, '0'), ':',
                   LPAD(TO_HEX(FLOOR(random() * 255 + 1) :: INT)::TEXT, 2, '0'), ':',
                   LPAD(TO_HEX(FLOOR(random() * 255 + 1) :: INT)::TEXT, 2, '0'), ':',
                   LPAD(TO_HEX(FLOOR(random() * 255 + 1) :: INT)::TEXT, 2, '0'), ':',
                   LPAD(TO_HEX(FLOOR(random() * 255 + 1) :: INT)::TEXT, 2, '0')) :: macaddr;

-- sanitize circuits_circuit.cid
UPDATE circuits_circuit SET cid = concat('CIRCUIT', id::TEXT);
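For local testing of sanitized dumps, the MAC randomization above can be mirrored in Python (illustrative sketch only; the SQL above is the authoritative version):

```python
import random

def random_mac() -> str:
    """Generate a random MAC address, mirroring the SQL above:
    six hex octets, each drawn from 1..255 (FLOOR(random() * 255 + 1))."""
    return ":".join(f"{random.randint(1, 255):02x}" for _ in range(6))

mac = random_mac()  # e.g. "3f:a2:0b:7c:91:5e"
```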

CSV

CSV backups

NOTE: This might be decommissioned in a short to medium term, see https://phabricator.wikimedia.org/T310615

TODO: OUTDATED (e.g. script paths)

On the active frontend, each hour at :37, a script dumps most pertinent tables to a target directory in /srv/netbox-dumps with a timestamp.

Sixteen of these dumps are retained for backup purposes; the rotation is performed by the script in /srv/deployment/netbox/deploy/scripts/rotatedump.

This script only rotates directories in the pattern 20*, so if a manual, retained dump is desired, one can simply run the script (su netbox -c /srv/deployment/netbox/deploy/scripts/rotatedump) and rename the resulting dump outside of this pattern, perhaps with a descriptive prefix.

/srv/netbox-dumps is backed up by Bacula (see Bacula#Adding a new client) for historical copies.
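The retention behaviour described above can be sketched as follows (a hypothetical re-implementation for illustration, not the deployed rotatedump script; the path pattern and retention count follow the text):

```python
import shutil
from pathlib import Path

RETAIN = 16  # number of timestamped dump directories to keep

def rotate_dumps(dump_root: Path, retain: int = RETAIN) -> list:
    """Remove all but the newest `retain` dump directories matching 20*.

    Directories renamed outside the 20* pattern (e.g. given a descriptive
    prefix) are never touched, matching the behaviour described above.
    """
    candidates = sorted(p for p in dump_root.glob("20*") if p.is_dir())
    doomed = candidates[:-retain] if len(candidates) > retain else []
    for path in doomed:
        shutil.rmtree(path)  # timestamped names sort oldest-first
    return doomed
```

This is why renaming a dump outside the `20*` pattern is enough to preserve it indefinitely.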

Netbox Extras

TODO check if still accurate

CustomScripts, Reports and other associated tools for Netbox are collected in the netbox-extras repository at https://gerrit.wikimedia.org/r/plugins/gitiles/operations/software/netbox-extras/. This repository is deployed to the Netbox frontends under /srv/deployment/netbox-extras. It is not automatically deployed on merge; after merging, it must be manually updated with `git pull` on both frontends. This can be most comfortably accomplished with Cumin on a Cumin host (cumin1001.eqiad.wmnet, cumin2002.codfw.wmnet):

 sudo cumin 'A:netbox-all' 'cd /srv/deployment/netbox-extras; git pull --ff-only'

This will have the dual purpose of resetting any local changes and updating the deployment to the latest version.

Netbox features

Custom Links

https://netbox.wikimedia.org/extras/custom-links/

Netbox allows setting up custom links to other websites, using Jinja2 templating for both the displayed name and the actual link, which allows for considerable flexibility. The current setup (as of June 2022) has the following links:

  • Grafana (for all physical devices and VMs)
  • Icinga (for all physical devices and VMs)
  • Debmonitor (for all physical devices and VMs)
  • Procurement Ticket (only for physical devices that have a ticket that matches either Phabricator or RT)
  • Hardware config (for Dell and HP physical devices, pointing to the manufacturer page for warranty information based on their serial number)
  • LibreNMS (for Juniper, opengear and sentry devices)
  • Puppetboard (for all physical devices and VMs)

Reports

TODO update

Netbox reports are a way of validating data within Netbox. They are available at https://netbox.wikimedia.org/extras/reports/ and are defined in the repository https://gerrit.wikimedia.org/r/plugins/gitiles/operations/software/netbox-extras/ under reports/.

In summary, reports produce a series of log lines that indicate some status connected to a machine, and may be either error, warning, or success. Informational log lines with no particular disposition may also be emitted.

Report Conventions

Because of limitations to the UI for Netbox reports, certain conventions have emerged:

  1. Reports should emit one log_error line for each failed item. If the item doesn't exist as a Netbox object, None may be passed in place of the first argument.
  2. If any log_warning lines are produced, they should be grouped after the loop which produces log_error lines.
  3. Reports should emit one log_success which contains a summary of successes, as the last log in the report.
  4. Log messages referring to a single object should be formatted like <verb/condition> <noun/subobject>[: <explanatory extra information>]. Examples:
    1. malformed asset tag: WNF1212
    2. missing purchase date
  5. Summary log messages should be formatted like <count> <verb/condition> <noun/subobject>
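As an illustration of conventions 4 and 5, a hypothetical helper producing conforming messages (not part of the actual reports code):

```python
def item_message(condition: str, subobject: str, extra: str = "") -> str:
    """Per-object message: <verb/condition> <noun/subobject>[: <extra>]."""
    msg = f"{condition} {subobject}"
    return f"{msg}: {extra}" if extra else msg

def summary_message(count: int, condition: str, subobject: str) -> str:
    """Summary message: <count> <verb/condition> <noun/subobject>."""
    return f"{count} {condition} {subobject}"

print(item_message("malformed", "asset tag", "WNF1212"))  # malformed asset tag: WNF1212
print(item_message("missing", "purchase date"))           # missing purchase date
print(summary_message(3, "missing", "purchase date"))     # 3 missing purchase date
```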

Report Alert

The report results are at https://netbox.wikimedia.org/extras/reports/

Most reports that alert flag non-critical data-integrity mismatches caused by infrastructure changes; they act as a secondary check and are the responsibility of DC-ops.

Reports and their Errors

  • Accounting (typical responsibility: Faidon or DC-ops)
  • Cables (typical responsibility: DC-ops)
  • Coherence (does not alert)
  • LibreNMS (typical responsibility: DC-ops or Netops). Note: you can ignore a LibreNMS device by setting its "ignore alert" flag in LibreNMS.
  • Management (typical responsibility: DC-ops)
  • PuppetDB (typical responsibility: whoever changed / reimaged the host). Typical errors: <device> missing from PuppetDB or <device> missing from Netbox. These occur because the data in PuppetDB does not match the data in Netbox, typically related to missing devices or unexpected devices. Generally these errors fix themselves once the reimage is complete, but the Netbox record for the host may need to be updated for decommissioning and similar operations.
  • Juniper (does not alert; typical responsibility: DC-ops or Netops)

Netbox scripts

TODO

Extra Errors, Notes and Procedures

Would like to remove interface

This error is produced in the Interface Automation script when cleaning up old interfaces during an import.

Interfaces are considered for removal if they don't appear in the list provided by the data source (generally speaking, PuppetDB); they are then checked if there is an IP address or a cable associated with the interface. If there is one of these the interface is left in place so as to not lose data. It is considered a bug if this happens, so if you see this error in an output feel free to open a ticket against #netbox in Phabricator.

Error removing interface after speed change

This error is produced by the Interface Automation script when cleaning up old interfaces while provisioning a server's network attributes.

Specifically, for modular interfaces on Juniper devices, the interface name is determined by the interface's speed and port number. If an old interface exists on a modular port, say xe-1/0/8, and we replace the 10G SFP+ with a 25G SFP28, the name of the interface changes to et-1/0/8. JunOS cannot have both defined, so the import script removes the old (xe-1/0/8) interface in Netbox before adding the new one.

This error is thrown if the old interface still has a cable connected or an IP address assigned. This shouldn't normally happen, but if it does, the old interface should be removed manually and cables/IPs cleaned up as necessary. Feel free to ping Netops members on IRC if there is any confusion, or open a Phabricator task.
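JunOS interface name prefixes encode the port speed (ge- for 1G, xe- for 10G, et- for faster optics such as 25G SFP28), which is why swapping an optic renames the interface. A simplified illustration of that naming rule, not the automation's actual code:

```python
# Mapping of port speed (Mbps) to JunOS interface name prefix.
# ge-: 1G, xe-: 10G, et-: 25G and above (SFP28, QSFP28, ...).
SPEED_PREFIX = {1000: "ge", 10000: "xe", 25000: "et", 40000: "et", 100000: "et"}

def junos_interface_name(speed_mbps, fpc, pic, port):
    """Build a JunOS-style interface name like 'xe-1/0/8'."""
    prefix = SPEED_PREFIX[speed_mbps]
    return f"{prefix}-{fpc}/{pic}/{port}"

# Swapping a 10G SFP+ for a 25G SFP28 in the same port changes the name:
assert junos_interface_name(10000, 1, 0, 8) == "xe-1/0/8"
assert junos_interface_name(25000, 1, 0, 8) == "et-1/0/8"
```

Because the name is derived, not stored, the old Netbox object must be deleted before the new one can be created, which is where the cable/IP check above comes in.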

Exports

A set of resources that export Netbox data in various formats.

DNS

A git repository of DNS zonefile snippets generated from Netbox data and exported read-only via HTTPS, to be consumed by the authoritative nameservers (see DNS#Authoritative_nameservers) and by the Continuous Integration tests run for the operations/dns Gerrit repository. The repository is available via:

 $ git clone https://netbox-exports.wikimedia.org/dns.git

To update the repository, see DNS/Netbox#Update_generated_records.

The repository is also mirrored in Phabricator: https://phabricator.wikimedia.org/source/netbox-exported-dns/ though it may not be immediately up-to-date.

Puppet

TODO

Imports

Ganeti sync

TODO

External scripts

Scripts and tools not previously listed that interact with Netbox, and thus need to be checked for compatibility after significant Netbox changes (e.g. upgrades).

Homer

Spicerack

TODO

Upgrading Netbox

Upgrading Netbox is usually a simple procedure, since patch-level releases maintain a reasonable level of compatibility in the APIs that we use. When upgrading across minor versions, breaking changes may have occurred; careful reading of the changelogs is needed, as well as testing of the scripts that consume and manipulate data in Netbox.

Overview

  1. Update the WMF version of Netbox
  2. Review changelog and note any changes that may interact with our integrations or deployment
  3. Update deploy repository
  4. Deploy to netbox-dev2001
  5. Simple tests
  6. Review UI and note any differences to call out during announcement
  7. (if API changes or minor version bump) Complex tests
  8. (if breaking changes) Port scripts
  9. (if breaking changes) Test scripts
  10. Deploy to production

Update the WMF Version of Netbox

We maintain a minimal fork of Netbox to change a few small things that are required for our configuration. This takes the form of three commits in the WMF version of the Netbox repository:

WMF Netbox production patches:

  • 03e3b07f5dcf5538fb0b90641a4e3d043684bb37 — switches Swagger into non-public mode
  • 824fe21597c251ce6e0667b97b258a23ff210949 — adds a way to pass settings directly into configuration.py, which we use for configuring the Swift storage backend
  • 98f32d988d7e1bc146298025900ba54969e8d3c1 — adds CAS authentication support

Generally these are cherry-picked into a copy of the upstream branch and then pushed to master on https://gerrit.wikimedia.org/g/operations/software/netbox/, with the following procedure:

  1. In a working copy of operations/software/netbox, add an upstream remote
  2. Pull the upstream remote's commits into the working copy
  3. Cherry-pick the above hashes into the working copy's HEAD
$ git cherry-pick -x 03e3b07f5dcf5538fb0b90641a4e3d043684bb37
Auto-merging netbox/netbox/urls.py
[wmf-dev 6467d1651] Switch swagger to non-public mode
 Author: Cas Rusnov <crusnov@wikimedia.org>
 Date: Thu Aug 8 13:48:34 2019 -0700
 1 file changed, 1 insertion(+), 1 deletion(-)
$ git cherry-pick -x 824fe21597c251ce6e0667b97b258a23ff210949
Auto-merging netbox/netbox/settings.py
[wmf-dev a653243bc] Add a passthrough configuration system
 Author: Cas Rusnov <crusnov@wikimedia.org>
 Date: Tue Jul 30 15:22:28 2019 -0700
 1 file changed, 8 insertions(+)
$ git cherry-pick -x 98f32d988d7e1bc146298025900ba54969e8d3c1
Auto-merging requirements.txt
CONFLICT (content): Merge conflict in requirements.txt
Auto-merging netbox/users/views.py
Auto-merging netbox/netbox/settings.py
CONFLICT (content): Merge conflict in netbox/netbox/settings.py
Auto-merging .gitignore
error: could not apply 98f32d988... Add CAS authentication support
hint: after resolving the conflicts, mark the corrected paths
hint: with 'git add <paths>' or 'git rm <paths>'
hint: and commit the result with 'git commit'
  4. Remove the master branch: git branch -D master
  5. Create a new master branch: git branch master
  6. Create a tag for the version number. It is our standard to append -wmf to the version number, so for Netbox version 2.10.4 we would run git tag v2.10.4-wmf
  7. Force-push master to Gerrit:
    1. Go to the administrative panel of the netbox repository (https://gerrit.wikimedia.org/r/admin/repos/operations/software/netbox), click Access, then Edit, set Reference: refs/heads/master to Allow pushing with or without force, and Save.
    2. git push --set-upstream origin master --force --tags
    3. Return to the administrative panel above, switch access back to Allow pushing (but not force pushing), and save.

Build deploy repository

Netbox is deployed using Scap, and thus has a deployment repository containing the collected artifacts (the virtual environment and associated libraries) used to deploy it. This repository is updated separately from our branch of Netbox, with the following procedure, using the operations/software/netbox-deploy repository (https://gerrit.wikimedia.org/g/operations/software/netbox-deploy/):

  1. In a working copy of operations/software/netbox-deploy, update the src/ subdirectory, which is a submodule of this repository pointing at the operations/software/netbox repository above. To do this, run git pull in that directory, then check out the tag of the version being updated to, for example git checkout v2.10.4-wmf.
  2. Update the .gitmodules file with the correct version as checked out above.
  3. Build the artifacts by running make clean and then make. This uses Docker to collect all of the required libraries as specified in the various requirements.txt files, and creates the artifacts as artifacts/artifacts.buster.tar.gz and frozen-requirements.txt.
  4. Commit the changes to the repository and submit them for review. Be sure the following files have changes: frozen-requirements.txt, artifacts/artifacts.buster.tar.gz, .gitmodules, and src.

Once the change is reviewed and merged via Gerrit, it is ready to deploy!

Deploy to Testing Server

The next phase, even for simple upgrades, is to deploy to netbox-dev2001.wikimedia.org for basic testing prior to deploying to production. This is done via Scap on a deploy server.

  1. Log in to a deploy server such as deploy1001.eqiad.wmnet
  2. Go to /srv/deployment/netbox/deploy; this is a checkout of the -deploy repository from above.
  3. Pull to the latest version, and update the submodule in src by pulling and checking out the tag that is going to be deployed.
  4. Deploy with Scap to netbox-dev2001, with the bug reference in hand: scap deploy -l netbox-dev2001.wikimedia.org 'Deploying Netbox v2.10.4-wmf to netbox-dev Tbug'
  5. This process should go smoothly and leave the target machine ready to test.

It may be necessary to deploy a new production database dump to netbox-dev2001's database to ensure parity with production.

It may be necessary to make changes on the Puppet side of the deployment due to changes in Netbox's requirements, which will generally be called out in the upstream changelog. In addition, Puppet is used to ship the configuration files, so any new or different requirements in those may need to be accounted for. Occasionally, new or different tools are invoked in upstream's upgrade.sh; we use netbox-deploy/scap/checks/netbox_setup.sh to perform these tasks, and it may need to be updated.

Simple Testing

On https://netbox-next.wikimedia.org we can perform some basic tests:

  • Test login
  • Test each Report (Other/Reports menu) and compare to production
  • Test each CustomScript (Other/Scripts menu) and compare to production
  • Look at some samples of Devices (Devices/Devices menu) and compare to production
  • Look at some samples of IPAM (IPAM/IP Addresses menu) and compare to production
  • Look at a cable trace and compare to production (go to an active Device, Interfaces tab, click the 'Trace' button next to a connected interface)
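One low-effort way to compare netbox-next against production for the checks above is to diff object counts over the REST API. A sketch using the pynetbox client follows; the hostnames are real, but the token, the chosen endpoints, and the function names are illustrative:

```python
def diff_counts(prod_counts, next_counts):
    """Return endpoints whose object counts differ between environments."""
    return {
        name: (prod_counts[name], next_counts.get(name))
        for name in prod_counts
        if prod_counts[name] != next_counts.get(name)
    }

def collect_counts(url, token):
    """Fetch object counts for a few representative endpoints."""
    import pynetbox  # third-party Netbox API client
    nb = pynetbox.api(url, token=token)
    return {
        "devices": nb.dcim.devices.count(),
        "interfaces": nb.dcim.interfaces.count(),
        "ip_addresses": nb.ipam.ip_addresses.count(),
    }

if __name__ == "__main__":
    token = "..."  # a read-only API token (placeholder)
    prod = collect_counts("https://netbox.wikimedia.org", token)
    nxt = collect_counts("https://netbox-next.wikimedia.org", token)
    print(diff_counts(prod, nxt) or "counts match")
```

Counts will only match exactly if netbox-dev2001's database has been freshly loaded from a production dump; small drifts right after a reload are expected as production keeps changing.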

Complex Testing

In the event that a breaking update is being made, more extensive testing, and potentially porting to account for API changes, might be needed. During the Simple Testing above, note any reports or custom scripts that produce errors due to API changes (an errored state indicates a Python error occurred, most often the result of API changes). If the output of the reports or scripts varies substantially from the production versions, for example with unexpected failures or warnings, this may also indicate that porting is required.

In addition to the above, the following things need to be tested:

  • DNS generation. This should produce no diff if the database and DNS repository on netbox-dev2001 are updated to production contents.
  • CSV dumps. Should produce results similar to production. Note that there may be changes to the dumped tables required if models have changed.
  • Script proxy on getstats.GetDeviceStatistics. This should produce results similar to production.
  • The Ganeti sync script. This should be a no-op on recent production data. If additional tests are desired, removing virtual devices and rerunning should recreate them. Note that an existing bug in the sync script produces ignorable errors when trying to remove Ganeti nodes that are no longer in the network.
  • Homer. This should produce no diff if the database is updated to production contents.

Porting and Testing

This process doesn't generalize, but over time the internal and external APIs used by our integrations drift, and some porting work may be required to keep operating against them as they change. Generally these changes are minor, such as changing method names or adding arguments; other times they are more complicated, such as splitting virtual device interfaces from non-virtual device interfaces. Once porting is thought to be complete, the changes should be deployed to netbox-dev2001 and a full testing run should be done to verify that the changes fix the problems found in initial testing, including exercising code paths that may not normally be hit.

Any or all of the items tested in Simple and Complex testing may need porting depending on which internal or external APIs have changed.

Deploy to Production

After a final run-through of any problem areas exposed in the above testing, and once fixes are deployed to netbox-dev2001, it is finally time to deploy the new version to production, with the following procedure:

  1. Announce that the release will be occurring on #wikimedia-dcops and #mediawiki_security and, if necessary, coordinate a time when integration tools or DC-ops work will not be interrupted.
  2. Merge any outstanding changes to netbox-extras or homer repositories (if necessary).
  3. On netbox-db2001, perform a manual dump of the database.
  4. Deploy netbox-extras to production using cumin, as in #Netbox_Extras
  5. Announce on IRC that a deploy is happening, on #wikimedia-operations !log Deploying Netbox v2.10.4-wmf to production Tbug
  6. Deploy to production:
    1. Login to a deploy server such as deploy1001.eqiad.wmnet
    2. Go to /srv/deployment/netbox/deploy; this is a check out of the -deploy repository from above.
    3. Pull to the latest version, and update the submodule in src by pulling and checking out the tag that is going to be deployed.
    4. Deploy with scap with bug reference in hand: scap deploy 'Deploying Netbox v2.10.4-wmf to production Tbug'
    5. This process should go smoothly for netbox1001, and prompt for continuation. Allow it to continue to deploy to 2001 and -dev2001.
  7. Announce on IRC that the deploy is complete, on #wikimedia-operations !log Finished deploying Netbox v2.10.4-wmf to production Tbug
  8. Perform Simple and Complex testing as above, and in general make sure everything is as expected.

Checklists

Here are cut-and-pasteable checklists for tickets covering this upgrade process. They should be used in a ticket titled "Update Netbox to vN.M.X-wmf", tagged with sre-tools and netbox.

Simple upgrade

Use when patch level updates or a review of the changelog shows nothing that should break things.

[] Update netbox repository + deploy repository
[] Upgrade -dev2001 
[] Rerun reports
[] Try a PuppetDB import for an existing host
[] Check diffs in DNS generation
[] Coordinate time with DCops and SRE for release
[] Dump a pre-upgrade copy of database
[] Release to production
[] Perform simple tests

Complex upgrade

Use with any update that may cause breaking changes to the API, or that the simple testing indicates may have extra work involved.

[] Review upgrade.sh
[] Examine change log for any major changes
[] Update netbox repository and deploy repository
[] Look around the UI for any changes, rearrangements or process changes
[] Upgrade -dev2001 
[] Check and make any necessary changes to reports:
  [] accounting.py
  [] cables.py
  [] coherence.py
  [] librenms.py
  [] management.py
  [] puppetdb.py
[] Check and make any necessary changes to scripts:
  [] getstats.py
  [] interface_automation.py
  [] offline_device.py
[] Check DNS generation, and review diffs
  [] Make any necessary changes to generate_dns_snippets.py
[] Check custom_script_proxy.py
[] Execute CSV dumps and examine the dumps for any anomalies.
  [] Update dumpbackup.py for any model changes, and any issues
[] Execute Ganeti sync against all sites
  [] Make any necessary changes to ganeti-netbox-sync.py
[] Check and make any necessary changes to Homer
[] Coordinate time with DCops and SRE for release
[] Dump a pre-upgrade copy of database
[] Release to production
[] Perform simple tests
[] Recheck DNS generation and examine diffs