Event Platform
The Wikimedia Event Platform exists to enable software and systems engineers to build loosely coupled software systems with event-driven architectures. Systems built this way can be easily distributed and decoupled, allowing for inherent horizontal scalability and versatility with respect to yet-unknown use cases.
Philosophy and data model
In typical web stacks, when data changes occur, those changes are reflected as an update to a state store. For example, if a user renames a Wikipedia page title, that page's record in a database table would have its page title field updated with the new page title value. Modeling data in this way works if all you care about is the current state. But by keeping only the latest state, we discard a huge piece of useful information: the history of changes.
Instead of updating a database when a state change happens, we can choose to model the state change as what it is: an 'event'. An event is something happening at a specific time, e.g. 'page id 123 was renamed from title_A to title_B by user X at 2020-06-25T12:34:00Z'. If we keep the history of all events, we can always use them to recreate the current state, as well as the state at any point in the past. An event-based data model decouples the occurrence of an event from any downstream state changes. Multiple consumers can be notified of the event's occurrence, which enables engineers to build new systems based on the events without interfering with the producers or other consumers of those events.
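The following Python sketch illustrates this idea with a hypothetical log of page rename events (the field names are illustrative, not a real WMF event schema): replaying the log reconstructs a page's title as of any point in time.

# A hypothetical, time-ordered log of page rename events.
# Field names are illustrative, not a real WMF event schema.
events = [
    {"page_id": 123, "new_title": "title_A", "dt": "2020-06-24T09:00:00Z"},
    {"page_id": 123, "new_title": "title_B", "dt": "2020-06-25T12:34:00Z"},
]

def title_as_of(page_id, as_of):
    """Replay rename events up to a point in time to reconstruct state."""
    title = None
    for event in events:
        # ISO 8601 UTC timestamps compare correctly as plain strings.
        if event["page_id"] == page_id and event["dt"] <= as_of:
            title = event["new_title"]
    return title

print(title_as_of(123, "2020-06-24T23:59:59Z"))  # -> title_A
print(title_as_of(123, "2020-06-26T00:00:00Z"))  # -> title_B

A database row holding only the latest title is just the final result of this replay; the event log itself preserves every intermediate state.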
Background reading
- Martin Fowler - What do you mean by "Event-Driven"
- Confluent - Turning the Database Inside Out
- Confluent - Event Driven 2.0
- Confluent - Designing Event Driven Systems
- Confluent - The Data Dichotomy: Rethinking the Way We Treat Data and Services
- Confluent - Stream data platform 1
- Confluent - Messaging as the Single Source of Truth
- Confluent - Build Services on a Backbone of Events
- What they don't tell you about event sourcing
- Confluent - Kafka Streams: Stream Processing Made Simple (This is about Kafka streams, but the discussion of streams vs. table and stateful streaming is good.)
Event Platform generic concepts
event | A strongly typed piece of data that conforms to a schema, usually representing something happening at a definite time, e.g. 'revision create', 'user button click', 'page view'.
event schema (AKA 'schema') | The datatype of an event. The event schema describes the data model of any given event and is used to validate events on receipt, as well as for data storage integration (an event schema can be mapped to an RDBMS schema, etc.). An event schema is analogous to a data type in a programming language.
event stream (AKA 'stream') | A continuous collection of events (loosely) ordered by time.
event bus | A publish & subscribe (PubSub) message queue to which events are initially produced and from which they are consumed. WMF uses Apache Kafka. (NOTE: here 'event bus' is meant as a generic term and does not refer to the MediaWiki EventBus extension.)
event producer | Producers create events and produce them to specific event streams in the event bus.
event consumer | Consumers consume events from specific event streams in the event bus.
event intake service | An HTTP service that accepts events, validates them against their schemas, and produces them to a specific event stream in the event bus. WMF uses EventGate.
event stream processing | Refers to any software that consumes, transforms, and produces event streams. This includes simple event processing, as well as complex event processing and stateful stream processing. It is usually done with a distributed framework of some kind, e.g. Apache Flink, Apache Spark, or Kafka Streams, but also includes simpler homegrown technologies like Change Propagation.
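To make the relationship between an event, its schema, and an intake service concrete, here is a minimal Python sketch. The schema, stream name, and endpoint URL below are illustrative assumptions, not WMF's actual schemas or service addresses; only the general pattern reflects the concepts above: validate an event against its JSONSchema, then POST it to an EventGate-style intake service, which would in turn re-validate it and produce it to Kafka.

import json
import jsonschema  # pip install jsonschema
import requests    # pip install requests

# An illustrative event schema (not a real WMF schema).
schema = {
    "type": "object",
    "properties": {
        "meta": {
            "type": "object",
            "properties": {
                "stream": {"type": "string"},
                "dt": {"type": "string"},
            },
            "required": ["stream", "dt"],
        },
        "page_id": {"type": "integer"},
        "new_title": {"type": "string"},
    },
    "required": ["meta", "page_id", "new_title"],
}

event = {
    "meta": {"stream": "example.page_rename", "dt": "2020-06-25T12:34:00Z"},
    "page_id": 123,
    "new_title": "title_B",
}

# Validate the event against its schema; raises ValidationError on failure.
jsonschema.validate(event, schema)

# POST the event to a hypothetical EventGate-style intake endpoint, which
# would re-validate it and produce it to an event stream in Kafka.
response = requests.post(
    "https://eventgate.example.org/v1/events",  # hypothetical URL
    data=json.dumps([event]),
    headers={"Content-Type": "application/json"},
)
response.raise_for_status()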
Platform Architecture Diagram
History
The first 'event platform' at WMF was EventLogging. This system originally used ZeroMQ to transport messages between its various 'services', but was later improved to use Kafka. It was built so that WMF product teams could instrument and measure WMF features and their usage on websites and apps. It used a (hardcoded) on-wiki event 'schema repository' to validate incoming events.
EventLogging was not designed to scale, nor to be used in the ways we wanted to use it. It was initially intended for instrumenting features for telemetry – tracking interactions and recording measurements to give us insight into how features were used by actual users. You couldn't, for example, build a system that responded to events by taking action based on the information it received.
In 2015, an effort was made to extend the analytics focus of EventLogging to production events. This effort was dubbed 'EventBus' and culminated in three new components: the MediaWiki EventBus extension, the mediawiki/event-schemas git repository, and eventlogging-service-eventbus, the first internal HTTP POST event intake endpoint. It validated events against more tightly controlled production schemas and produced them to Kafka. EventBus was used to build the Change Propagation service. We originally intended to merge the analytics and production uses of EventLogging.
In 2018, we started the Modern Event Platform program, which included EventBus's original analytics+production unification goal, as well as rebuilding other parts of WMF's event processing stack using open source (non-homegrown) components where possible. The EventLogging Python codebase was too WMF- and MediaWiki-specific to easily accomplish the unification, so it was decided to build a new, more generic and extensible JSONSchema event service, eventually named EventGate.
In 2019, EventGate, along with other Modern Event Platform components, replaced eventlogging-service-eventbus; it is intended to eventually replace the 'analytics' deployment of EventLogging services (e.g. eventlogging-processor) as well.
Today, events logged via EventLogging don't go to a MySQL database, for instance, but instead to Hadoop, which enables a greater volume of events to be stored. Similarly, the pipeline for processing received events has also been replaced by the Kafka-based EventGate, one of Event Platform's core components, which can ingest a much greater volume of events much faster.
References
T185233 - Modern Event Platform (parent task): https://phabricator.wikimedia.org/T185233