This page is the entry point for the Analytics Data Lake (ADL) documentation. For the technical aspects of the Data Lake pipelines, see [[Analytics/Systems/Data Lake]].
The Analytics Data Lake (ADL), or the Data Lake for short, is a large, analytics-oriented repository of data, both raw and aggregated, about Wikimedia projects (in industry terms, a data lake). All of the data in the lake can be accessed through systems that allow joining the datasets together. It includes:
* [[Analytics/Data Lake/Traffic|Traffic data]]: webrequest, pageviews, and unique devices
* [[Analytics/Data Lake/Edits|Edits data]]: historical data about revisions, pages, and users (e.g. MediaWiki History)
* Content data: wikitext (latest and historical) and Wikidata entities
* Events data: EventLogging, EventBus, and event stream data (raw, refined, and sanitized)
* ORES scores: machine learning predictions (available as events as of 2020-02-27)

As the Data Lake matures, we will add more data and try to make it public wherever we can safely do so.
Some of these datasets (such as webrequest) are only available in Hive, while others (such as pageviews) are also available as data cubes (usually in a more aggregated form).
The main way to access the data in the Data Lake is to run queries using one of the three available SQL engines: Presto, Hive, and Spark.
You can access these engines through several different routes. All three engines also have command-line programs which you can use on one of the analytics clients; this is probably the least convenient route, but if you want to use it, consult the engine's documentation page.
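For illustration, here is a simple aggregation that should run unchanged on all three engines. This is a sketch: the wmf.pageview_hourly table and its project and view_count fields are assumed as an example of the pageviews dataset.

```sql
-- Top ten projects by pageviews on a single day
-- (table and field names are illustrative).
SELECT
  project,
  SUM(view_count) AS views
FROM wmf.pageview_hourly
WHERE year = 2021 AND month = 11 AND day = 1
GROUP BY project
ORDER BY views DESC
LIMIT 10;
```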
Differences between the SQL engines
For the most part, Presto, Hive, and Spark work the same way, but they have some differences in SQL syntax and processing power.
- Spark and Hive use STRING as the keyword for string data, while Presto uses VARCHAR.
  - One consequence is a different method for transforming the integer year, month, and day fields to a date string (see the combined example after this list):
    - Spark and Hive: CONCAT(year, '-', LPAD(month, 2, '0'), '-', LPAD(day, 2, '0')) (casting to STRING is not actually required)
    - Presto: CONCAT(CAST(year AS VARCHAR), '-', LPAD(CAST(month AS VARCHAR), 2, '0'), '-', LPAD(CAST(day AS VARCHAR), 2, '0')) (casting to VARCHAR is required)
- In Spark and Hive, you use the SIZE function to get the length of an array, while in Presto you use CARDINALITY.
- In Spark and Hive, double quoted text (like "foo") is interpreted as a string, while in Presto it is interpreted as a column name. It's easiest to use single quoted text (like 'foo') for strings, since all three engines interpret it the same way.
- Spark and Hive have a CONCAT_WS ("concatenate with separator") function, but Presto does not.
- Spark supports both FLOAT and REAL as keywords for the 32-bit floating-point number data type, while Presto supports only REAL.
- Presto has no FIRST and LAST functions.
- If you need to use a keyword like DATE as a column name, you use backticks (`date`) in Spark and Hive, but double quotes ("date") in Presto.
- To convert an ISO 8601 timestamp string (e.g. "2021-11-01T01:23:02Z") to an SQL timestamp, use TO_TIMESTAMP(dt) in Spark and FROM_ISO8601_TIMESTAMP(dt) in Presto.
- If you divide integers, Hive and Spark will return a floating-point number if necessary (e.g. 1 / 3 returns 0.333333). However, Presto will return only an integer (e.g. 1 / 3 returns 0); use CAST(x AS REAL) to work around this.
- See also: Presto's guide to migrating from Hive.
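To make these differences concrete, here is the same query written for both dialects. This is a sketch: the wmf.pageview_hourly table and its view_count and page_title fields are assumed for illustration.

```sql
-- Spark / Hive: implicit casts, backticks around reserved words,
-- and division that returns a floating-point number.
SELECT
  CONCAT(year, '-', LPAD(month, 2, '0'), '-', LPAD(day, 2, '0')) AS `date`,
  SUM(view_count) / COUNT(DISTINCT page_title) AS views_per_page
FROM wmf.pageview_hourly
WHERE year = 2021 AND month = 11 AND day = 1
GROUP BY year, month, day;

-- Presto: explicit VARCHAR casts, double quotes around reserved words,
-- and a CAST to avoid integer division.
SELECT
  CONCAT(CAST(year AS VARCHAR), '-', LPAD(CAST(month AS VARCHAR), 2, '0'),
         '-', LPAD(CAST(day AS VARCHAR), 2, '0')) AS "date",
  CAST(SUM(view_count) AS REAL) / COUNT(DISTINCT page_title) AS views_per_page
FROM wmf.pageview_hourly
WHERE year = 2021 AND month = 11 AND day = 1
GROUP BY year, month, day;
```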
Data Lake datasets which are available in Hive are stored in the Hadoop Distributed File System (HDFS), usually in the Parquet file format. The Hive metastore is a centralized repository for metadata about these data files, and all three SQL query engines we use (Presto, Spark SQL, and Hive) rely on it.
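For example, you can inspect the metadata that the metastore holds for a dataset. This is a sketch using Hive/Spark SQL statements; the wmf.webrequest table name is assumed for illustration.

```sql
SHOW TABLES IN wmf;               -- tables registered in the wmf database
DESCRIBE wmf.webrequest;          -- column names and types from the metastore
SHOW PARTITIONS wmf.webrequest;   -- partitions, each backed by an HDFS directory
```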
Some Data Lake datasets are available in Druid, which is separate from Hive and HDFS, and allows quick exploration and dashboarding of those datasets in Turnilo and Superset.
The Analytics cluster, which consists of Hadoop servers and related components, provides the infrastructure for the Data Lake.