Redis
For the Redis service at Toolforge, see Help:Toolforge/Redis for Toolforge.
There are a number of Redis clusters and instances in Wikimedia production.
- redis_sessions (mc* hosts) used by MediaWiki.
- redis_maps (maps* hosts) used by Maps.
- redis_misc (rdb* hosts) used by multiple services detailed below.
- webperf (mwlog1001 host) used by Arc Lamp for collecting PHP profiling samples.
Outside production, we have MediaWiki-Vagrant and MediaWiki-Vagrant in Cloud VPS, which are configured by default to use a Redis instance for local object caching and as a session store.
Cluster redis_sessions
These instances are currently co-located on a subset of the Memcached hosts.
Current consumers in MediaWiki:
- ChronologyProtector offsets (short-lived).
- CentralAuth session data and authentication tokens (short-lived).
- GettingStarted extension, stores lists of articles for new editors to edit.
- MainStash backend, generic interface used by various features and extensions to store secondary data that should persist for multiple weeks without LRU eviction.
Future:
- The MainStash backend will move out to the x2 database. (T212129)
Past:
- Prior to 2020, MediaWiki core session data was stored in Redis, via $wgSessionCacheType, and has since moved to Kask/Cassandra (T206016).
Cluster redis_maps
See Maps.
Cluster redis_misc
The role redis::misc is for our general-purpose master-replica cluster in the eqiad and codfw data centers. Each rdb* node runs 5 Redis instances (ports 6378, 6379, 6380, 6381, 6382) because Redis is single-threaded. A mapping of usages is below.
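A minimal sketch for checking that all five instances on a node are answering (run on the rdb* host itself; if an instance requires a password, an unauthenticated PING returns a NOAUTH error instead of PONG, which still confirms the instance is up):

for port in 6378 6379 6380 6381 6382; do
  echo -n "$port: "          # label each line with the port being probed
  redis-cli -p "$port" ping  # PONG, or a NOAUTH error if AUTH is required
done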
The servers are set up as 2 independent pairs. This is for HA purposes, and it is up to the application to use them that way; not all applications are able to do so.
Consumers:
- Changeprop: Uses Redis for rate limiting (actively uses both instances).
- changeprop-jobqueue: Uses Redis for job deduplication (actively uses both instances).
- ORES: Uses Redis for caching and queueing (one active instance).
- docker-registry: (one active instance).
Pair 1
Port | Redis DB | Usage
---- | -------- | -----
6378 | 0 | ORES cache
6379 | 0 | changeprop/cpjobqueue/api-gateway
6380 | 0 | ORES queue
6381 | 0 | unallocated
6382 | 0 | Reserved for docker-registry
Pair 2
Port | Redis DB | Usage
---- | -------- | -----
6378 | 0 | Reserved for ORES cache
6379 | 0 | changeprop/cpjobqueue/api-gateway
6380 | 0 | Reserved for ORES queue
6381 | 0 | unallocated
6382 | 0 | docker-registry
Servers
Each master has a replica. Masters use odd numbers (e.g. rdb1005) and replicas the subsequent even number (e.g. rdb1006). Master-replica instances use the same ports; for example, rdb1005:6379 replicates to rdb1006:6379.
eqiad:
- Pair 1: rdb1005 and rdb1006 (April 2021: being replaced by rdb1011 and rdb1012, T281217)
- Pair 2: rdb1009 and rdb1010
codfw:
- Pair 1: rdb2003 and rdb2004
- Pair 2: rdb2005 and rdb2006
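To verify which member of a pair is currently the master, you can ask an instance for its replication status. A hedged example (the hostname, port, and REDIS_PASS variable are placeholders, not a real allocation):

redis-cli -h rdb1006.eqiad.wmnet -p 6379 -a "$REDIS_PASS" INFO replication
# expect role:slave with master_host/master_link_status on the replica,
# and role:master with connected_slaves on its partner (here rdb1005)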
Services
Change propagation (or changeprop) is a service that runs on Kubernetes nodes, listening to Kafka topics for events and translating them into HTTP requests to various systems. It is also responsible for triggering cache evictions in services such as RESTBase. Changeprop talks to Redis via Nutcracker.
- Kibana changeprop Dashboard
- Helmfile service definition
Related puppet code
- hieradata/role/common/redis/misc/master.yaml
- hieradata/role/common/redis/misc/slave.yaml
- modules/role/manifests/redis/misc/master.pp
- modules/role/manifests/redis/misc/slave.pp
Other Info
- Instance passwords can be found under /etc/redis/<instance>.conf
- Grafana dashboard: Redis
Using Redis
Connecting
redis-cli is installed on all servers where redis-server is installed. Running it will leave you at a Redis prompt where you can enter commands interactively.
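For example, to open an interactive prompt against one specific instance (the hostname and port below are illustrative; pick the instance you actually need):

redis-cli -h rdb1009.eqiad.wmnet -p 6379
# at the prompt, AUTH with the instance password before other commands will be accepted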
Some useful commands
- AUTH <somepass>: authenticate.
- INFO: status information, including:
  # Replication
  role:slave
  master_host:10.64.0.24
  master_port:6379
  master_link_status:up
  master_last_io_seconds_ago:0
  <snip>
  # Keyspace
  db0:keys=9351936,expires=9291239,avg_ttl=0
- KEYS <pattern-here>: list of all keys matching the given pattern. Use this sparingly! This query could take seconds to complete (see the SCAN alternative below).
- QUIT: closes the connection.
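If you do need to enumerate keys on a busy instance, a gentler alternative is redis-cli's --scan mode, which iterates with SCAN instead of blocking the single-threaded server the way KEYS does. A sketch (the key pattern and REDIS_PASS are made-up placeholders):

redis-cli -p 6379 -a "$REDIS_PASS" --scan --pattern 'somePrefix:*'
# prints matching keys incrementally rather than materialising them all at once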
Using Redis from other Services
Some services may require, or be able to use, Redis; the redis_misc cluster described above is appropriate for that.
As noted above, each pair of Redis servers in each data center has five separate instances on different ports, most of which are not in use. The first step to using the Redis cluster for a production service is to choose an unused instance/port pair, which can be located by examining Hiera data for what is currently in use. A relatively straightforward way to do this is to run git grep '\Wrdb[12]' within a Puppet tree, which shows every use of an rdb address. A similar procedure may be used to find a port that is unallocated.
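For instance (illustrative invocations; adjust the patterns to how hosts and ports are actually written in Hiera):

# every reference to an rdb host anywhere in the puppet tree
git grep -n '\Wrdb[12]' -- hieradata modules
# check whether a candidate port already appears next to an rdb host
git grep -nE 'rdb[0-9]+[^ ]*:6381'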
Once a port/host combination for each datacenter is chosen, it is as simple as referring to those from the Puppet state which will use them.
Using Redis from a service requires a password. The password may be obtained from the Hiera key ::passwords::redis::main_password in hieradata/role/common/redis/misc/master.yaml in the private repository. It is currently the convention to introduce a new private Hiera key to store the password for your service's use; however, this is obviously inefficient and subject to change.
Other references
Commands are easy; they all depend on the data type (hash, set, list, etc.). Here's a quick reference.
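A few illustrative one-liners showing how the command set varies by data type (the demo:* keys are made up; try them against a local test instance rather than production):

redis-cli HSET demo:hash field1 value1   # hashes: field/value pairs under one key
redis-cli LPUSH demo:list item1          # lists: push/pop from either end
redis-cli SADD demo:set member1          # sets: unordered unique members
redis-cli EXPIRE demo:hash 300           # any key can carry a TTL in seconds
redis-cli TYPE demo:list                 # reports the data type stored at a key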
Configuration is likewise pretty straightforward, with perhaps the exception of the snapshotting, AOF, and memory settings; here's the sample config file.
See also
- memcached
- nutcracker (AKA twemproxy), the proxy used by all application servers to contact memcached (it was no longer used for Redis as of 2015, but is used for it again as of 2016)