Analytics/Cluster/Hive

Revision as of 00:01, 22 April 2016

File:Pageview @ Wikimedia (WMF Analytics lightning talk, June 2015).pdf, page 6: Hive/Hadoop (rounded box at the bottom) within the Wikimedia Foundation's pageview data pipeline

Apache Hive logo

Apache Hive is an abstraction built on top of MapReduce that allows SQL to be used on various file formats stored in HDFS. WMF's first use case was to enable querying of unsampled webrequest logs.
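
As a hedged illustration of what such SQL looks like, the sketch below counts one day of requests per host from the webrequest data; the database, table, partition columns and field names are assumptions for illustration, not a statement of the canonical schema.

  -- Illustrative sketch only: table, partition and column names are assumed.
  -- Hive compiles this SQL into MapReduce jobs that scan files stored in HDFS.
  USE wmf;                                        -- database of maintained tables (see below)
  SELECT uri_host,
         COUNT(*) AS requests
  FROM   webrequest                               -- assumed unsampled webrequest table
  WHERE  year = 2016 AND month = 4 AND day = 1    -- partition predicates limit the scan
  GROUP BY uri_host
  ORDER BY requests DESC
  LIMIT  10;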


Access

Cluster access

To get shell access to the analytics cluster through Hive, you need access to stat1002 and membership in the analytics-privatedata-users shell user group. Per Requesting shell access, create a Phabricator ticket to request this.

For how to access the servers once you have the credentials, see Analytics/Cluster/Access.

Querying

File:Introduction to Hive.pdf

  • Analytics/Cluster/Hive/Queries (includes a FAQ about common tasks and problems)
  • Analytics/Cluster/Hive/QueryUsingUDF

While Hive supports SQL, there are some differences: see the Hive Language Manual for more info.
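
One concrete example of a difference (hedged; see the Language Manual for the authoritative syntax) is Hive's LATERAL VIEW with explode(), which flattens an array column into one row per element; the table and column names below are made up for illustration.

  -- Hive-specific construct: LATERAL VIEW explode() emits one row per array element.
  -- Table and column names are illustrative only.
  SELECT request_id,
         header
  FROM   my_requests                 -- assumed table with an array<string> column "headers"
  LATERAL VIEW explode(headers) h AS header;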

Maintained tables

(see also Analytics/Data)

Notes

  • The wmf_raw and wmf databases contain Hive tables maintained by Ops. You can create your own tables in Hive, but please be sure to create them in a different database, preferably one named after your shell username (see the sketch after this list).
  • Hive can map tables on top of almost any data structure. Since webrequest logs are JSON, the Hive tables must be told to use a JSON SerDe to serialize/deserialize to/from JSON. We use the JsonSerDe included with Hive-HCatalog.
  • The HCatalog .jar will be automatically added to a Hive client's auxpath. You shouldn't need to think about it.
  • It is also possible to import EventLogging data into Hive, although (as of April 2016) this is not widely tested yet.
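
Putting the first three notes together, here is a minimal sketch of creating a personal database and a JSON-backed table via the HCatalog SerDe; the database name, table name, columns, HDFS path, and even the exact SerDe class name are assumptions that may vary by Hive version.

  -- Minimal sketch; names and paths are assumed, not real objects on the cluster.
  CREATE DATABASE IF NOT EXISTS jdoe;             -- use your own shell username, never wmf or wmf_raw

  CREATE EXTERNAL TABLE jdoe.sample_events (
    dt         STRING,
    uri_host   STRING,
    user_agent STRING
  )
  ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'  -- JsonSerDe shipped with Hive-HCatalog
  LOCATION '/user/jdoe/sample_events';            -- directory of JSON files already in HDFS
  -- The HCatalog jar is added to the client's auxpath automatically, so no ADD JAR should be needed.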

Troubleshooting

See the FAQ in Analytics/Cluster/Hive/Queries for common tasks and problems.
