There are a number of Redis clusters and instances in Wikimedia production.
- redis_maps (maps* hosts) used by Maps.
- redis_misc (rdb* hosts) used by multiple services, detailed below.
- webperf (mwlog1002 host) used by Arc Lamp for collecting PHP profiling samples.
Cluster redis_maps
See Maps.
redis::misc is our general-purpose master-replica cluster in the eqiad and codfw DCs. Each
rdb* node runs 5 instances (ports 6378, 6379, 6380, 6381, 6382) because Redis is single-threaded. A mapping of usages is below.
The servers are set up as 2 independent pairs. This is for HA purposes, and it is up to the application to use them that way; not all applications are able to do so.
- Changeprop: Uses Redis for rate limiting (actively uses both instances).
- changeprop-jobqueue: Uses Redis for job deduplication (actively uses both instances).
- API Gateway: Uses Redis for rate limiting.
- ORES: Uses Redis for caching and queueing (one active instance).
- docker-registry: (one active instance).
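Several of the consumers above use Redis for rate limiting. A common pattern for this is a fixed-window counter built on Redis INCR plus EXPIRE; the sketch below illustrates the logic only, with a plain dict standing in for Redis, so the key format, window handling, and limit are illustrative assumptions, not the services' actual implementation.

```python
# Fixed-window rate limiter as commonly built on Redis INCR + EXPIRE.
# A dict stands in for Redis here so the example is self-contained;
# in Redis, EXPIRE on the key would discard counters for old windows.
def allow(counters, client, window_id, limit=10):
    key = f"ratelimit:{client}:{window_id}"   # hypothetical key scheme
    counters[key] = counters.get(key, 0) + 1  # INCR key
    return counters[key] <= limit

counters = {}
results = [allow(counters, "10.0.0.1", window_id=0, limit=3) for _ in range(5)]
print(results)  # [True, True, True, False, False]
```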
Each master has a replica. Masters use odd numbers (e.g. rdb1005) and replicas the subsequent even number (e.g. rdb1006). Master and replica instances use the same ports, e.g.
rdb1005:6379 replicates to rdb1006:6379.
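The odd/even naming convention above can be expressed as a small helper; this is an illustration of the convention only (the function name and the even-host check are my own, not anything in Puppet):

```python
def replica_for(master: str) -> str:
    """Given a master host like 'rdb1005', return its replica 'rdb1006'.

    Masters use odd host numbers; the replica is the next even number.
    """
    prefix = master.rstrip("0123456789")   # e.g. 'rdb'
    digits = master[len(prefix):]          # e.g. '1005'
    num = int(digits)
    if num % 2 == 0:
        raise ValueError(f"{master} is even-numbered, so it is a replica")
    return f"{prefix}{num + 1:0{len(digits)}d}"

print(replica_for("rdb1005"))  # rdb1006
```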
Change propagation (or changeprop) is a service that runs on Kubernetes nodes, listening to Kafka topics for events and translating them into HTTP requests to various systems. It is also responsible for triggering cache evictions on services such as RESTBase. Changeprop talks to Redis via Nutcracker.
Related puppet code
- Instance passwords can be found under
- Grafana dashboard: Redis
redis-cli is installed on every server where redis-server is installed. Running it will leave you at a Redis prompt where you can enter commands interactively.
Some useful commands
INFO: status information, including:
# Replication
role:slave
master_host:10.64.0.24
master_port:6379
master_link_status:up
master_last_io_seconds_ago:0
<snip>
# Keyspace
db0:keys=9351936,expires=9291239,avg_ttl=0
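INFO output is line-oriented `key:value` text with `#` section headers, so it is easy to consume from a script; a minimal parsing sketch (the function name is mine, and real INFO output has many more fields):

```python
def parse_info(text):
    """Parse redis-cli INFO output into a dict, skipping '#' section headers."""
    info = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(":")
        info[key] = value
    return info

sample = """# Keyspace
db0:keys=9351936,expires=9291239,avg_ttl=0"""
info = parse_info(sample)
print(info["db0"])  # keys=9351936,expires=9291239,avg_ttl=0
```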
KEYS <pattern>: list all keys matching the given pattern. Use this sparingly! On a large keyspace this can take seconds to complete, and it blocks the single-threaded server while it runs.
QUIT: closes the connection.
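Because KEYS walks the entire keyspace in one blocking call, Redis's SCAN command is the usual alternative: the client repeatedly asks for a bounded slice and a cursor, and a returned cursor of 0 means iteration is complete. The sketch below imitates that cursor protocol over a plain dict to show the shape of the loop; it is not a Redis client, and real SCAN iterates hash-table buckets rather than a sorted list.

```python
import fnmatch

def scan(keyspace, cursor=0, match="*", count=100):
    """One SCAN step over a dict: return (next_cursor, matching keys).

    A next_cursor of 0 signals that iteration is complete.
    """
    keys = sorted(keyspace)                  # stand-in for Redis's buckets
    chunk = keys[cursor:cursor + count]
    next_cursor = cursor + count if cursor + count < len(keys) else 0
    return next_cursor, [k for k in chunk if fnmatch.fnmatch(k, match)]

# Iterate the whole keyspace in bounded steps, as a client would:
keyspace = {f"enwiki:session:{i}": "..." for i in range(250)}
cursor, found = 0, []
while True:
    cursor, keys = scan(keyspace, cursor, match="enwiki:*")
    found.extend(keys)
    if cursor == 0:
        break
print(len(found))  # 250
```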
Using Redis from other Services
Some services require, or can optionally use, Redis, and this Redis cluster is appropriate for that.
As noted above, each pair of Redis servers in each data center has five separate instances on different ports, most of which are not in use. The first step in using Redis for a production service is to choose an unused instance/port pair, which can be located by examining Hiera data for what is currently in use: a relatively straightforward way to do this is to run
git grep '\Wrdb' within a Puppet tree, which shows every use of an rdb address. A similar search can be used to find an unallocated port.
Once a host/port combination has been chosen for each data center, it is simply a matter of referring to them from the Puppet code that will use them.
Using Redis from a service requires a password; the password may be obtained from the Hiera key in
hieradata/role/common/redis/misc/master.yaml in the private repository. The current convention is to introduce a new private Hiera key to store the password for each service, although this is obviously inefficient and subject to change.
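Most Redis clients accept the chosen host, port, and password as a single `redis://` connection URL, with the password percent-encoded. A small sketch of assembling one (the hostname, port, and password below are placeholders, not real production values):

```python
from urllib.parse import quote

def redis_url(host: str, port: int, password: str) -> str:
    """Build a redis:// URL with the password percent-encoded."""
    return f"redis://:{quote(password, safe='')}@{host}:{port}"

# Placeholder values for illustration only:
print(redis_url("rdb1011.eqiad.wmnet", 6380, "s3cr3t/pass"))
# redis://:s3cr3t%2Fpass@rdb1011.eqiad.wmnet:6380
```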
Commands are straightforward; they depend on the data type in use (hash, set, list, etc.). Here's a quick reference.
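As a rough analogy, the core Redis data types map closely onto Python built-ins; the sketch below pairs the most common commands per type with their Python equivalents. This is an illustration only, not a Redis client, and the key names are invented.

```python
# Redis commands per data type, mirrored with plain Python built-ins:
store = {}

# String: SET / INCR
store["hits"] = 0                         # SET hits 0
store["hits"] += 1                        # INCR hits

# Hash: HSET / HGET
store["user:1"] = {"name": "Ada"}         # HSET user:1 name Ada
name = store["user:1"]["name"]            # HGET user:1 name

# List: RPUSH / LPOP (a simple queue)
store["jobs"] = []
store["jobs"].append("refreshLinks")      # RPUSH jobs refreshLinks
job = store["jobs"].pop(0)                # LPOP jobs

# Set: SADD / SISMEMBER (e.g. deduplication)
store["seen"] = set()
store["seen"].add("event-123")            # SADD seen event-123
dup = "event-123" in store["seen"]        # SISMEMBER seen event-123

print(store["hits"], name, job, dup)      # 1 Ada refreshLinks True
```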
Configuration is likewise fairly straightforward, with the possible exception of the snapshotting, AOF, and memory settings; here's the sample config file.
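For orientation, these are the kinds of redis.conf directives that the snapshotting, AOF, and memory settings refer to; the values below are illustrative examples, not the production configuration:

```
save 900 1                     # RDB snapshot if >=1 key changed in 900s
appendonly no                  # AOF persistence disabled
maxmemory 500mb                # cap the instance's memory use
maxmemory-policy allkeys-lru   # evict least-recently-used keys when full
```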
The "redis_sessions" cluster was co-located on the main
mc* hosts that also serve memcached, and was used by MediaWiki.
The cluster had a capacity of 8GB in total (16 shards with 520MB each, downsized to 8 shards as of April 2021, T280582).
The cluster was stable in its utilization at a fairly constant 3GB of live data at any given time (as of July 2021, T212129#6283230).
Past consumers in MediaWiki:
- MainStash backend, a generic interface used by various features and extensions to store secondary data that should persist for multiple weeks without LRU eviction. The MainStash backend was moved out to the x2 database as part of T212129.
- Prior to 2020, MediaWiki core session data was stored in Redis, via $wgSessionCacheType, and has since moved to Cassandra (T206016).
- Prior to Oct 2021, GettingStarted extension, which stored lists of articles for new editors to edit.
- Prior to Jul 2022, CentralAuth authentication tokens (short-lived). Moved to memcached via mcrouter-primary-dc (T278392).
- Prior to Jul 2022, CentralAuth session data. Moved to Cassandra (T267270).
- Prior to Aug 2022, Rdbms-ChronologyProtector offsets (short-lived). Moved to dc-local memcached (T314453).
The decommission task is T267581: Phasing out "redis_sessions" cluster.