Analytics/Systems/Cluster/Hive

From Wikitech-static
Jump to navigation Jump to search
imported>Milimetric
m (Milimetric moved page Analytics/Cluster/Hive to Analytics/Systems/Cluster/Hive: Reorganizing documentation)
 
imported>Urbanecm
(→‎TSV file: apparently, STORED AS TEXTFILE is needed for this to work)
 
(8 intermediate revisions by 4 users not shown)
Line 2: Line 2:
[[File:Apache Hive logo.svg|thumb|Apache Hive logo]]
[http://hive.apache.org Apache Hive] is an abstraction built on top of MapReduce that allows SQL to be used on various file formats stored in [[:en:Apache_Hadoop#HDFS|HDFS]].  WMF's first use case was to enable querying of unsampled [[Analytics/Data/Webrequest|webrequest]] logs.
As of February 2021, we are running Hive 2.3.6.


__TOC__
{{anchor|Cluster access}}
== Access ==
In order to access Hive, you need an account with [[production shell access]] in the <code>analytics-privatedata-users</code> user group. For more instructions, see [[Analytics/Data access]].


Some of the data in Hive, like the [[Analytics/Data Lake/Traffic/Webrequest|webrequest]] logs, are private data so only <code>analytics-privatedata-users</code> can access it.  If you are requesting access to Hive, you probably want to be in this group.
 
Once you have the credentials, see [[Analytics/Cluster/Access|Analytics/Systems/Cluster/Access]] for instructions on using the web UI and SSH tunneling.
 
== Create your own database ==
 
Hive uses databases to organize tables. You can create databases for your own use; by convention, we use our shell username as the database name. Here is an example command to create a database:
<syntaxhighlight lang="sql">
CREATE DATABASE my_user_name;
</syntaxhighlight>
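Once the database exists, you can create tables inside it. A minimal sketch, assuming a database named after your shell username (<code>my_user_name</code>, the table, and the columns are all placeholders):
<syntaxhighlight lang="sql">
-- Hypothetical example: my_user_name, the table, and its columns are placeholders
USE my_user_name;
CREATE TABLE my_first_table (
  page_title  STRING,
  view_count  BIGINT
);
SHOW TABLES;
</syntaxhighlight>
Fully qualified names like <code>my_user_name.my_first_table</code> also work, which avoids depending on the current <code>USE</code> setting.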


== Querying ==
* [[Analytics/Cluster/Hive/QueryUsingUDF]]
While Hive supports SQL, there are some differences: see the [https://cwiki.apache.org/confluence/display/Hive/LanguageManual Hive Language Manual] for more info.
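One practical difference is partitioning: large maintained tables are partitioned, and Hive only skips reading data when the query filters on the partition columns. A sketch against the webrequest table (the date values are illustrative; this assumes the usual <code>webrequest_source</code>/<code>year</code>/<code>month</code>/<code>day</code>/<code>hour</code> partition layout):
<syntaxhighlight lang="sql">
-- Illustrative query; the date values are placeholders.
-- Filtering on the partition columns means Hive scans one hour of data,
-- not the whole table.
SELECT uri_host, COUNT(*) AS requests
FROM wmf.webrequest
WHERE webrequest_source = 'text'
  AND year = 2021 AND month = 2 AND day = 1 AND hour = 0
GROUP BY uri_host
ORDER BY requests DESC
LIMIT 10;
</syntaxhighlight>
Omitting the partition filters forces a full-table scan, which on webrequest-sized data can take hours and burden the cluster.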
== Loading data ==
=== TSV file ===
If you have a data file you'd like to load into Hive (perhaps to join with an existing Hive table), start by copying it onto one of the stats or notebook machines. Then, create a table in Hive with a "delimited" row format:
<syntaxhighlight lang="sql">
CREATE TABLE tablename (tablespec)
ROW FORMAT DELIMITED FIELDS TERMINATED BY "\t" STORED AS TEXTFILE;
</syntaxhighlight>
You can easily change the terminator string from "\t" to "," if you have a CSV file.
Finally, use the <code>hive</code> command line client on that machine to run the following query:
<syntaxhighlight lang="sql">
LOAD DATA LOCAL INPATH '{{local path to file}}'
OVERWRITE INTO TABLE {{name}};
</syntaxhighlight>
Note that you cannot use <code>beeline</code>, since it will look for your data file on the Hive server instead, even when you use the <code>LOCAL</code> keyword.
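Putting the two steps together, a complete session in the <code>hive</code> CLI might look like this (a hypothetical sketch; the database, table, columns, and file path are all placeholders):
<syntaxhighlight lang="sql">
-- Hypothetical worked example; all names and the path are placeholders.
CREATE TABLE my_user_name.page_counts (
  page_title  STRING,
  view_count  INT
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE;

LOAD DATA LOCAL INPATH '/home/my_user_name/page_counts.tsv'
OVERWRITE INTO TABLE my_user_name.page_counts;
</syntaxhighlight>
After the load, the file's rows are queryable like any other Hive table, and you can join them against the maintained tables.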


== Maintained tables ==
(see also [[Analytics/Data]])
* [[Analytics/Data/Mediacounts|mediacounts]]
* [[Analytics/Data/mobile_apps_session_metrics|mobile_apps_session_metrics]]
* [[Analytics/Systems/EventLogging|EventLogging data]], in the <code>event</code> database ([[Analytics/Systems/EventLogging#Hadoop_&_Hive|details]])


=== Notes ===
* The wmf_raw and wmf databases contain Hive tables maintained by Analytics.  You can create your own tables in Hive, but please be sure to create them in a different database, preferably one named after your shell username.
* Hive has the ability to map tables on top of almost any data structure.  Since webrequest logs are JSON, the Hive tables must be told to use a JSON [https://cwiki.apache.org/confluence/display/Hive/SerDe SerDe] to be able to serialize/deserialize to/from JSON.  We use the JsonSerDe included with [https://cwiki.apache.org/confluence/display/Hive/HCatalog Hive-HCatalog].
* The HCatalog .jar will be automatically added to a Hive client's auxpath.  You shouldn't need to think about it.
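As a sketch of what the SerDe note means in practice, a table mapped onto JSON files might be declared like this (the table name, fields, and location are hypothetical; the SerDe class name is the one shipped with Hive-HCatalog):
<syntaxhighlight lang="sql">
-- Hypothetical external table over JSON files; fields and LOCATION are placeholders.
CREATE EXTERNAL TABLE my_user_name.json_example (
  dt           STRING,
  uri_host     STRING,
  http_status  INT
)
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
STORED AS TEXTFILE
LOCATION '/user/my_user_name/json_example';
</syntaxhighlight>
Because the table is <code>EXTERNAL</code>, dropping it removes only the table definition, not the underlying files in HDFS.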


== Troubleshooting ==
See the FAQ.
== Subpages of {{PAGENAME}} ==
{{Special:PrefixIndex/{{PAGENAME}}/|stripprefix=1}}


== References ==

Latest revision as of 19:57, 6 September 2021
