Oozie
Apache Oozie is a workflow scheduler system for managing Apache Hadoop jobs. Most relevantly, jobs may be scheduled based on the existence of data in HDFS: a job can be triggered not just at a given timestamp, but when the data it needs becomes available.
Some terms
- action: An action generally represents a single step in a job workflow. Examples include pig scripts, failure notifications, map-reduce jobs, etc.
- workflow: Workflows are used to chain actions together. A workflow is synonymous with a job. Workflows describe how actions should run and how they should flow; actions can be chained based on success and failure conditions.
- coordinator: Coordinators are used to schedule recurring runs of workflows. They abstractly describe input and output datasets based on periodicity, and they submit workflow jobs based on the existence of data (see the dataset sketch after this list).
- bundle: A bundle is a logical grouping of coordinators that share commonalities. You can use bundles to start and stop whole groups of coordinators at once.
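To make the data-based scheduling concrete, here is a minimal sketch of the dataset machinery a coordinator can use. It is not taken from our refinery; the dataset name, path and frequency are made up for illustration:

<!-- Hypothetical dataset declaration inside a coordinator.xml.
     Name, frequency and URI template are illustrative only. -->
<datasets>
    <dataset name="webrequest_hourly" frequency="${coord:hours(1)}"
             initial-instance="2014-11-20T00:00Z" timezone="Universal">
        <uri-template>${name_node}/some/path/year=${YEAR}/month=${MONTH}/day=${DAY}/hour=${HOUR}</uri-template>
        <!-- A workflow run is only materialized once this flag file
             exists for the requested hour. -->
        <done-flag>_SUCCESS</done-flag>
    </dataset>
</datasets>
<input-events>
    <data-in name="input" dataset="webrequest_hourly">
        <instance>${coord:current(0)}</instance>
    </data-in>
</input-events>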
Oozie 101: An example
Let's run through a simple example of how to set up an Oozie job on the cluster. Keep in mind that this is just an example to get you started, and that more steps than the ones outlined below are needed to get a job running according to our production standards.
In our example we will use Oozie to run a job that executes a parameterized Hive query. We assume you have access to stat1004.eqiad.wmnet, from which we normally access the Analytics Cluster.
Something to know: Oozie overrides Hive's default and forces it to use only 1 reducer for its map-reduce jobs. If you want to let Hive decide on the number of reducers it should use (the default behavior), explicitly set mapred.reduce.tasks to -1 (see the workflow.xml file below).
CLI
The job we are going to set up uses just a workflow; this means that, in the absence of a coordinator, we will run it by hand using the Oozie command line interface (CLI). There is a lot you can do through Oozie's CLI; please take a look at the docs at https://oozie.apache.org/docs/3.1.3-incubating/DG_CommandLineTool.html
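For orientation, some commonly used invocations, all covered in the linked docs (<job-id> is a placeholder):

oozie validate workflow.xml                  # check a workflow definition against Oozie's schema
oozie job -config workflow.properties -run   # submit and start a job
oozie job -info <job-id>                     # show the status of a job
oozie job -kill <job-id>                     # kill a job
oozie jobs -len 10                           # list the 10 most recent jobs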
There are three files needed:
- A file with the Hive query.
- A workflow.xml file that Oozie uses to determine what job to run.
- A workflow.properties file that sets concrete values for the properties defined in the workflow.
Both workflow.xml and the Hive query need to be available inside HDFS, so we will put them into the HDFS /tmp directory and the Oozie job will run from there.
Hive query
Please note the placeholders for parameters, and replace <user> with your username.
DROP VIEW IF EXISTS <user>_oozie_test;

CREATE VIEW <user>_oozie_test AS
SELECT
    CASE WHEN user_agent LIKE('%iPhone%') THEN 'iOS'
         ELSE 'Android' END AS platform,
    parse_url(concat('http://bla.org/woo/', uri_query), 'QUERY', 'appInstallID') AS uuid
FROM ${source_table}
WHERE year=${year}
    AND month=${month}
    AND day=${day}
    AND hour=${hour};

-- Now get a count of totals that will be inserted in some file
INSERT OVERWRITE DIRECTORY "${destination_directory}"
SELECT platform, COUNT(DISTINCT(uuid))
FROM <user>_oozie_test
GROUP BY platform;
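Before wiring the query into Oozie you can dry-run it from the Hive CLI. Oozie's <param> values are passed to Hive as variables, so a test invocation along these lines should exercise the same substitutions (the parameter values are the test values from workflow.properties below; the local destination_directory is made up):

hive -f generate_daily_uniques.hql \
    --hivevar source_table=wmf_raw.webrequest \
    --hivevar destination_directory=/tmp/test-mobile-apps/local-test \
    --hivevar year=2014 --hivevar month=11 --hivevar day=20 --hivevar hour=10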
Workflow.xml and workflow.properties
<workflow-app name="cmd-param-demo" xmlns="uri:oozie:workflow:0.4">
    <parameters>
        <property>
            <name>queue_name</name>
            <value>default</value>
        </property>

        <!-- Required properties -->
        <property><name>name_node</name></property>
        <property><name>job_tracker</name></property>
        <property>
            <name>hive_site_xml</name>
            <description>hive-site.xml file path in HDFS</description>
        </property>

        <!-- Specifying parameter values in this file to test running -->
        <property>
            <name>source_table</name>
            <description>Hive table to read data from.</description>
        </property>
        <property>
            <name>year</name>
            <description>The partition's year</description>
        </property>
        <property>
            <name>month</name>
            <description>The partition's month</description>
        </property>
        <property>
            <name>day</name>
            <description>The partition's day</description>
        </property>
        <property>
            <name>hour</name>
            <description>The partition's hour</description>
        </property>
    </parameters>

    <start to="hive-demo"/>

    <action name="hive-demo">
        <hive xmlns="uri:oozie:hive-action:0.2">
            <job-tracker>${job_tracker}</job-tracker>
            <name-node>${name_node}</name-node>
            <job-xml>${hive_site_xml}</job-xml>
            <configuration>
                <property>
                    <name>mapreduce.job.queuename</name>
                    <value>${queue_name}</value>
                </property>
                <property>
                    <name>hive.exec.scratchdir</name>
                    <value>/tmp/hive-${user}</value>
                </property>
                <!-- Let Hive decide on the number of reducers -->
                <property>
                    <name>mapred.reduce.tasks</name>
                    <value>-1</value>
                </property>
            </configuration>
            <script>generate_daily_uniques.hql</script>
            <param>source_table=${source_table}</param>
            <param>destination_directory=/tmp/test-mobile-apps/${wf:id()}</param>
            <param>year=${year}</param>
            <param>month=${month}</param>
            <param>day=${day}</param>
            <param>hour=${hour}</param>
        </hive>
        <ok to="end"/>
        <error to="kill"/>
    </action>

    <kill name="kill">
        <message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>

    <end name="end"/>
</workflow-app>
workflow.properties:
@stat1004:~/workplace/refinery/oozie/mobile-apps/generate_daily_uniques$ more workflow.properties
name_node   = hdfs://analytics-hadoop
job_tracker = resourcemanager.analytics.eqiad.wmnet:8032
queue_name  = default
oozie_directory = ${name_node}/wmf/refinery/current/oozie
# for testing locally, this won't work:
# hive_site_xml = ${oozie_directory}/util/hive/hive-site.xml
hive_site_xml = ${refinery_directory}/oozie/util/hive/hive-site.xml
# Workflow app to run.
oozie.wf.application.path = hdfs://analytics-hadoop/tmp/tests-<some>/workflow.xml
oozie.use.system.libpath = true
oozie.action.external.stats.write = true
# parameters
source_table = wmf_raw.webrequest
year  = 2014
month = 11
day   = 20
hour  = 10
user  = <your-user-in1002>
Validating workflow.xml
After creating the files you should make sure they are valid according to Oozie's schema:
oozie validate workflow.xml
Moving files to HDFS
The easiest place to put things is the /tmp directory. Move the workflow.xml and the Hive query file there:
hdfs dfs -mkdir /tmp/tests-$USER
hdfs dfs -put workflow.xml /tmp/tests-$USER/workflow.xml
hdfs dfs -cat /tmp/tests-$USER/workflow.xml
Running the Oozie job
From your local directory (the one containing workflow.properties) run:
oozie job -config workflow.properties -run
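If the submission succeeds, the CLI prints the new job's ID, which you can then use to follow progress. A sketch of what this looks like (the ID is a made-up placeholder):

$ oozie job -config workflow.properties -run
job: 0000123-150903123456789-oozie-oozi-W
$ oozie job -info 0000123-150903123456789-oozie-oozi-W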
Running your job, say, once a day
In order to use Oozie's cron-like scheduling you will need a coordinator file.
Good docs about coordinators can be found in the Oozie coordinator specification: https://oozie.apache.org/docs/3.1.3-incubating/CoordinatorFunctionalSpec.html
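As a minimal sketch (not one of our production coordinators), a coordinator that runs the workflow above once a day could look like the following; the name, dates and schema version are illustrative:

<!-- Hypothetical coordinator.xml: runs the demo workflow once a day.
     Name, dates and paths are illustrative only. -->
<coordinator-app name="cmd-param-demo-coord"
                 frequency="${coord:days(1)}"
                 start="2014-11-20T00:00Z" end="2015-11-20T00:00Z"
                 timezone="Universal"
                 xmlns="uri:oozie:coordinator:0.4">
    <action>
        <workflow>
            <!-- Points at the workflow.xml we put in HDFS earlier -->
            <app-path>${name_node}/tmp/tests-${user}/workflow.xml</app-path>
        </workflow>
    </action>
</coordinator-app>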
Running a real oozie example
The main difference from the 101 example is that in this case we will likely need to override the oozie directory. This testing needs to be done from stat1002/stat1004. Let's assume that the refinery code you want to test is deployed to ~/workplace/refinery/oozie
Rsync code to stat1002/stat1004
rsync -rva --delete ./oozie/ stat1002.eqiad.wmnet:~/oozie
Create tables if needed
If your oozie job accesses new tables you will need to create them in your own database; remember you are working in your user space:
hive -f blah.hql --database nuria
Put your oozie directory on HDFS
nuria@stat1004:~/some$ ls oozie
hdfs dfs -rmr /tmp/oozie-nuria; hdfs dfs -mkdir /tmp/oozie-nuria; hdfs dfs -put oozie/ /tmp/oozie-nuria
Run the oozie job, overriding properties as needed
Note that, among others, we are overriding the refinery_directory variable:
oozie job -run -Duser=nuria \
    -Darchive_directory=hdfs://analytics-hadoop/tmp/nuria \
    -Doozie_directory=/tmp/oozie-nuria/oozie \
    -config ./oozie/pageview/hourly/coordinator.properties \
    -Dstart_time=2015-09-03T00:00Z -Dstop_time=2015-09-03T04:00Z \
    -Drefinery_directory=hdfs://analytics-hadoop$(hdfs dfs -ls -d /wmf/refinery/2015* | tail -n 1 | awk '{print $NF}')
Troubleshooting
Checking logs:
oozie job -log <job_id>
Seeing the last couple of jobs that ran:
oozie jobs -localtime -len 2
Get more info on a job ID that failed:
oozie job -info <job-id>
This should display your Hadoop job ID (see below):
Job ID : 0005783-141210154539499-oozie-oozi-W
------------------------------------------------------------------------------------------------
Workflow Name : cmd-param-demo
App Path      : hdfs://analytics-hadoop/tmp/tests-mobile-apps/workflow.xml
Status        : KILLED
Run           : 0
.....
CoordAction ID: -

Actions
------------------------------------------------------------------------------------------------
ID                                              Status  Ext ID                   Ext Status     Err Code
------------------------------------------------------------------------------------------------
0005783-141210154539499-oozie-oozi-W@:start:    OK      -                        OK             -
------------------------------------------------------------------------------------------------
0005783-141210154539499-oozie-oozi-W@hive-demo  ERROR   job_1415917009743_45854  FAILED/KILLED  40000
------------------------------------------------------------------------------------------------
0005783-141210154539499-oozie-oozi-W@kill       OK      -                        OK             E0729
------------------------------------------------------------------------------------------------
Logs for the Hadoop job ID 1415917009743_45854 should be available via:
yarn logs --applicationId application_1415917009743_45854
Also, you can try: https://yarn.wikimedia.org/cluster/scheduler
Kill a job
Get the job ID from the following command:
oozie jobs -jobtype coord
oozie job -kill <id>
Check how the cluster is doing
You can see how the queues are being utilized here:
http://localhost:8088/cluster/scheduler
You'll have to set up the SSH tunnel as specified in Analytics/Cluster/Access.
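As a sketch only (see Analytics/Cluster/Access for the authoritative setup), such a tunnel might look like the following; the hostnames are assumptions based on the job_tracker host used earlier on this page:

# Forward local port 8088 to the resource manager's web UI via a stat host.
# Hostnames and ports here are assumptions, not verified configuration.
ssh -N -L 8088:resourcemanager.analytics.eqiad.wmnet:8088 stat1004.eqiad.wmnet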
More info
Naming Convention
Refinery oozie jobs follow a naming convention that allows automating job restarts (see below). The convention is based on the folders in which job configuration files are stored; the base folder for the convention is the oozie folder of the refinery git repo.
- Each refinery top-level job (one with no parent job) is named after its folder hierarchy, with - instead of /, and with either -bundle or -coord as a postfix. For instance, the job defined by webrequest/load/bundle.properties is named webrequest-load-bundle.
- Child jobs follow the same pattern, except they are postfixed with parameter information. For instance, the coordinator jobs launched by webrequest-load-bundle are webrequest-load-coord-upload, webrequest-load-coord-text, webrequest-load-coord-maps and webrequest-load-coord-misc.
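This convention makes production jobs easy to look up by name from the CLI; for example (the name filter is standard oozie CLI syntax, the job name comes from the convention above):

oozie jobs -jobtype bundle -filter 'name=webrequest-load-bundle;status=RUNNING'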
How to see jobs scheduled to run
On stat1004, run
oozie jobs -jobtype bundle -filter status=RUNNING
to see the 100 most recent RUNNING bundles.
On stat1004, run
oozie jobs -jobtype coordinator -filter status=RUNNING
to see the 100 most recent RUNNING coordinators.
On stat1004, run
oozie jobs -jobtype wf -filter status=RUNNING
to see the 100 most recent RUNNING workflows.
To consider more than the 100 most recent jobs, add a -len option at the end, like -len 2000 to get the 2000 most recent ones.
To not limit to RUNNING jobs, drop the -filter status=RUNNING from the command.
The source for the refinery's oozie production jobs can be found at https://phabricator.wikimedia.org/diffusion/ANRE/browse/master/oozie .
In the cluster, Oozie's job definitions can be found at /wmf/refinery/... .
Never deploy jobs from /wmf/refinery/current/...; always use one of the refinery variants that have a concrete time and commit in the directory name, like /wmf/refinery/2015-01-09T12.39.20Z--2007cb8 (see the listing sketch below).
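To pick the most recently deployed concrete refinery directory, the same listing trick used in the run command above works (adjust the year glob as needed):

hdfs dfs -ls -d /wmf/refinery/2015* | tail -n 1 | awk '{print $NF}'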
Administration
Documentation intended for the Analytics team on how to restart jobs in the cluster and such: Analytics/Cluster/Oozie/Administration