How to use Pentaho MapReduce to transform and summarize detailed data into an aggregate dataset. This is a common use case when preparing data for extraction to an RDBMS-based data warehouse or mart. You will use parsed weblog data as the details and build an aggregate file containing a count of page views by IP address and month.
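To make the goal concrete, the rollup collapses many detail rows into one count per IP address and month. The rows below are purely illustrative; the actual column layout of the sample file may differ:
{code}
# Detail rows (tab-delimited, hypothetical values):
#   10.10.1.3  ...  Oct  2011  ...
#   10.10.1.3  ...  Oct  2011  ...
#   10.10.1.4  ...  Nov  2011  ...
#
# Aggregated output (one row per IP address and month, with a page-view count):
#   10.10.1.3  Oct  2011  2
#   10.10.1.4  Nov  2011  1
{code}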
The steps in this guide include:
- Loading the sample data file into HDFS
- Developing a PDI transformation which will serve as a Mapper
- Developing a PDI transformation which will serve as a Reducer
- Developing a PDI job which will invoke a Pentaho MapReduce step that runs MapReduce using the developed mapper and reducer transformations
- Executing and reviewing output
h1. Prerequisites
In order to follow along with this how-to guide you will need the following:
- Hadoop
- Pentaho Data Integration
- Pentaho Hadoop Node Distribution
h1. Sample Files
The sample data file needed for this guide is:
File Name | Content
---|---
weblogs_parse.txt | Tab-delimited, parsed weblog data
NOTE: If you have completed the Using Pentaho MapReduce to Parse Weblog Data guide, then the necessary files will already be in the proper location.
This file should be placed into HDFS at /user/pdi/weblogs/parse using the following commands:
{code}
hadoop fs -mkdir /user/pdi/weblogs
hadoop fs -mkdir /user/pdi/weblogs/parse
hadoop fs -put weblogs_parse.txt /user/pdi/weblogs/parse/
{code}
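To confirm the upload before moving on, list the directory (a standard HDFS command; file size and timestamp will vary):
{code}
hadoop fs -ls /user/pdi/weblogs/parse
{code}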
h1. Step-By-Step Instructions
h2. Setup
Start Hadoop if it is not already running.
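How you start Hadoop depends on your distribution. As a sketch, on a plain Apache Hadoop 1.x install (the JobTracker-era layout this guide assumes) the stock start scripts are:
{code}
# Start the HDFS daemons (NameNode, DataNodes)
start-dfs.sh
# Start the JobTracker and TaskTrackers
start-mapred.sh
{code}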
h2. Create a Job to Execute a MapReduce Process
In this task you will create a job that will execute a MapReduce process that runs the newly created mapper and reducer transformations.
- Start PDI on your desktop. Once it is running choose 'File' -> 'New' -> 'Job' from the menu system or click on the 'New file' icon on the toolbar and choose the 'Job' option.
Speed Tip: You can download the Kettle Job aggregate_mr.kjb already completed.
- Add a Start Job Entry: You need to tell PDI where to start the job, so expand the 'General' section of the Design palette and drag a 'Start' node onto the job canvas. Your canvas should look like:
- Add a Pentaho Map Reduce Job Entry: You are creating the job to execute a Pentaho MapReduce transformation, so expand the 'Big Data' section of the Design palette and drag a 'Pentaho MapReduce' node onto the job canvas. Your canvas should look like:
- Connect the Start and MapReduce Job Entries: Hover the mouse over the 'Start' node and a tooltip will appear. Click on the output connector (the green arrow pointing to the right) and drag a connector arrow to the 'Pentaho MapReduce' node.
Your canvas should look like this:
- Edit the MapReduce Job Entry: Double-click on the 'Pentaho MapReduce' node to edit its properties. Enter this information:
- Hadoop Job Name: Enter 'Aggregate Map Reduce'
- Mapper Transformation: Enter <PATH>/aggregate_mapper.ktr
<PATH> is the folder path you saved the mapper in.
- Mapper Input Step Name: Enter 'Map/Reduce Input'
- Mapper Output Step Name: Enter 'Map/Reduce Output'
When you are done the window should look like:
- Configure the Reducer: Switch to the 'Reducer' tab and enter the following:
- Reducer Transformation: Enter <PATH>/aggregate_reducer.ktr
- Reducer Input Step Name: Enter 'Map/Reduce Input'
- Reducer Output Step Name: Enter 'Map/Reduce Output'
When you are done the window should look like:
- Configure the MapReduce Job: Switch to the 'Job Setup' tab. Enter this information:
- Input Path: Enter '/user/pdi/weblogs/parse'
- Output Path: Enter '/user/pdi/weblogs/aggregate_mr'
- Input Format: Enter 'org.apache.hadoop.mapred.TextInputFormat'
- Output Format: Enter 'org.apache.hadoop.mapred.TextOutputFormat'
- Check 'Clean output path before execution'
When you are done your window should look like:
- Configure the Cluster Properties: Switch to the 'Cluster' tab. Enter this information:
- Hadoop distribution: Select your Hadoop distribution
- Working Directory: Enter '/tmp'
- HDFS Hostname, HDFS Port, Job Tracker Hostname, Job Tracker Port: Your connection information.
- Number of Mapper Tasks: Enter '3'. You can play around with this to get the best performance based on the size of your data and the number of nodes in your cluster.
- Number of Reducer Tasks: Enter '1'. You can play around with this to get the best performance based on the size of your data and the number of nodes in your cluster.
- Check 'Enable Blocking'
- Logging Interval: Enter '10'. This is the number of seconds between pinging Hadoop for completion status messages.
When you are done your window should look like:
Click 'OK' to close the window.
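As a sanity check on what the job will compute: the mapper is expected to emit a key built from the IP address and month with a value of 1, and the reducer to sum those values per key. A rough local equivalent, outside MapReduce, is the pipeline below; the field positions are assumptions about the parsed weblog layout, not something this guide specifies, so adjust them to your file:
{code}
# Local approximation of the rollup -- illustration only, not how PDI executes it.
# Hypothetically assumes field 1 = client IP, field 4 = month, field 6 = year.
hadoop fs -cat /user/pdi/weblogs/parse/weblogs_parse.txt | \
  awk -F'\t' '{ count[$1 "\t" $4 "\t" $6]++ }
              END { for (k in count) print k "\t" count[k] }' | head
{code}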
- Save the Job: Choose 'File' -> 'Save as...' from the menu system. Save the job as 'aggregate_mr.kjb' into a folder of your choice.
- Run the Job: Choose 'Action' -> 'Run' from the menu system or click on the green run button on the job toolbar. An 'Execute a job' window will open. Click on the 'Launch' button. An 'Execution Results' panel will open at the bottom of the PDI window and it will show you the progress of the job as it runs. After a few seconds the job should finish successfully:
If any errors occurred the job step that failed will be highlighted in red and you can use the 'Logging' tab to view error messages.
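You can also watch the job outside of PDI. On Hadoop 1.x the JobTracker web UI (commonly on port 50030) shows per-task progress, and the CLI can list running jobs:
{code}
# List MapReduce jobs known to the JobTracker (Hadoop 1.x syntax)
hadoop job -list
{code}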
h2. Check Hadoop for Aggregated Web Log
- Run the following command to view the aggregated results:
{code}
hadoop fs -cat /user/pdi/weblogs/aggregate_mr/part-00000 | head
{code}
This should return the first few rows of the aggregated file.
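Because the job was configured with a single reducer, all output lands in part-00000. If you later raise the reducer count there will be one part file per reducer; listing the output directory and merging the parts locally are both standard HDFS operations:
{code}
# See every part file the job produced
hadoop fs -ls /user/pdi/weblogs/aggregate_mr
# Merge all part files into one local file
hadoop fs -getmerge /user/pdi/weblogs/aggregate_mr ./aggregate_mr.txt
{code}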
h1. Summary
During this guide you learned how to create and execute a Pentaho MapReduce job on a Hadoop cluster. You consumed detailed weblog data and generated an aggregate datafile which is suitable for load into an RDBMS-based data warehouse or mart.