Analytics/Cluster/Hive

Apache Hive is an abstraction built on top of MapReduce that allows SQL to be used on various file formats stored in HDFS. WMF's first use case was to enable querying of unsampled webrequest logs.

Maintained Tables

Querying

To access the data, start the Hive command-line interface:

 hive

Inside the Hive CLI, you can query wmf.webrequest and other tables:

-- Calculate per-country mobile page views for 2015-04-10 00:00--01:00 (1st hour)
SELECT
    geocoded_data['country_code'] country, COUNT(*) cc_count
FROM wmf.webrequest
WHERE
    webrequest_source = 'mobile'  -- could be 'text', 'upload', 'bits', 'misc', ...
    AND year = 2015              
    AND month = 4
    AND day = 10
    AND hour = 0
    AND is_pageview
GROUP BY
    geocoded_data['country_code'];

Notice the webrequest_source, year, month, day, and hour fields. These are Hive partitions: explicit mappings to the hourly imports in HDFS. You must include at least one partition predicate in the WHERE clause of your queries (even if it is just year > 0). Partitions reduce the amount of data that Hive must parse and process before returning results. For example, if you are only interested in data from a particular day, you could add where year = 2014 and month = 1 and day = 12; Hive will then only process data for partitions matching that predicate. You may use partition fields as you would any normal field, even though their values are not actually stored in the data files.
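
For example, here is a minimal sketch of the day-level filter described above, run against the same wmf.webrequest table (the choice of 'text' as the webrequest_source is illustrative):

-- Count text webrequests for 2014-01-12, relying on partition pruning
SELECT COUNT(*)
FROM wmf.webrequest
WHERE
    webrequest_source = 'text'
    AND year = 2014
    AND month = 1
    AND day = 12;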

Most SQL is supported by Hive, with a few differences. See the Hive Language Manual for more info.
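
One Hive-specific extension worth knowing is LATERAL VIEW with explode(), which flattens an array column into one row per element; the database, table, and column names below are hypothetical:

-- Count occurrences of each tag in a hypothetical array-typed column
SELECT tag, COUNT(*) AS tag_count
FROM my_db.tagged_events
LATERAL VIEW explode(tags) t AS tag
GROUP BY tag;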

Notes

  • The wmf_raw and wmf databases contain Hive tables maintained by Ops. You can create your own tables in Hive, but please be sure to create them in a different database, preferably one named after your shell username (see the sketch after this list).
  • Hive has the ability to map tables on top of almost any data structure. Since webrequest logs are JSON, the Hive tables must be told to use a JSON SerDe (serializer/deserializer). We use the JsonSerDe included with Hive-HCatalog.
  • The HCatalog .jar will be automatically added to a Hive client's auxpath. You shouldn't need to think about it.
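
Here is a minimal sketch of both points above, assuming a hypothetical shell username jdoe and a hypothetical table my_events; the SerDe class is the Hive-HCatalog JsonSerDe mentioned above:

-- Create a personal database named after your shell username
CREATE DATABASE IF NOT EXISTS jdoe;

-- Create a table backed by JSON text files, using the HCatalog JSON SerDe
CREATE TABLE jdoe.my_events (
    event_name  STRING,
    event_count INT
)
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
STORED AS TEXTFILE;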

Sample Queries

Killing a running query

Once you submit a query, it is handed off to Hadoop. Hadoop runs the query as a YARN application. The Hive CLI is then detached from the actual application. If you Ctrl-C your Hive CLI, you will quit the interface you used to submit the query, but will not actually kill the application. To kill the application, you have to tell YARN you want it dead.

Note the application ID from when your query started. You should see something like:

 Starting Job = job_1387838787660_12241, Tracking URL = http://analytics1010.eqiad.wmnet:8088/proxy/application_1387838787660_12241/

The application ID in this case is application_1387838787660_12241. To kill this application, run:

 yarn application -kill application_1387838787660_12241
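
If you did not note the application ID, you can list the running YARN applications and look for yours:

 yarn application -list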

Troubleshooting

See Analytics/Cluster/Hive/Troubleshooting.
