
How to use Pentaho MapReduce to transform and summarize detailed data into an aggregate dataset. It is a common use case when preparing data for extraction to an RDBMS-based data warehouse or mart. You will use parsed weblog data as the details and build an aggregate file containing a count of page views by IP address and month.

The steps in this guide include:

  1. Loading the sample data file into HDFS
  2. Developing a PDI transformation which will serve as a Mapper
  3. Developing a PDI transformation which will serve as a Reducer
  4. Developing a PDI job which will invoke a Pentaho MapReduce step that runs MapReduce using the developed mapper and reducer transformations
  5. Executing and reviewing output

Prerequisites

In order to follow along with this how-to guide you will need the following:

  • Hadoop
  • Pentaho Data Integration
  • Pentaho Hadoop Node Distribution
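Before starting, you can confirm that the Hadoop client tools are installed and can reach your cluster. A minimal sanity check, assuming the hadoop command is on your PATH:

Code Block
# Print the installed Hadoop version
hadoop version
# List the HDFS root to confirm the client can reach the cluster
hadoop fs -ls /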

Sample Files

The sample data file needed for this guide is:

File Name              Content
weblogs_parse.txt.zip  Tab-delimited, parsed weblog data

NOTE: If you have completed the Using Pentaho MapReduce to Parse Weblog Data guide, then the necessary files will already be in the proper location.

This file should be placed into HDFS at /user/pdi/weblogs/parse using the following commands:

Code Block
# Create the HDFS directory structure for the parsed weblog data
hadoop fs -mkdir /user/pdi/weblogs
hadoop fs -mkdir /user/pdi/weblogs/parse
# Upload the extracted weblogs_parse.txt (unzipped from weblogs_parse.txt.zip)
hadoop fs -put weblogs_parse.txt /user/pdi/weblogs/parse/
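To confirm the upload worked, you can list the directory and peek at the first line of the file; these commands assume only the paths used above:

Code Block
hadoop fs -ls /user/pdi/weblogs/parse
hadoop fs -cat /user/pdi/weblogs/parse/weblogs_parse.txt | head -1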

Step-By-Step Instructions

Setup

Start Hadoop if it is not already running.
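How you start Hadoop depends on your distribution and install layout. On a plain Apache Hadoop (MR1-era) install, a sketch of the usual startup and a quick check that the daemons are up might look like this; the script names are assumptions, so adapt them to your environment:

Code Block
# Start the HDFS daemons (NameNode, DataNode) and the MapReduce daemons (JobTracker, TaskTracker)
start-dfs.sh
start-mapred.sh
# Confirm the daemons are running
jps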

Create Mapper and Reducer for Aggregate Dataset

Follow the steps in the Create Mapper and Reducer for Aggregate Dataset guide to build the aggregate_mapper.ktr and aggregate_reducer.ktr transformations that the job below will run.
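For orientation, the aggregation these two transformations perform is simply a count of page views per IP address per month. A rough local command-line analogue, for illustration only (the assumption that the client IP is the first column and that columns 4 and 6 hold the month and year may not match your copy of weblogs_parse.txt, so adjust the field numbers as needed):

Code Block
# "Map": emit a client_ip / month / year key for every request,
# "Reduce": count how many times each key occurs
cut -f1,4,6 weblogs_parse.txt | sort | uniq -c | head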

Create a Job to Execute a MapReduce Process

In this task you will create a job that will execute a MapReduce process that runs the newly created mapper and reducer transformations.

  1. Start PDI on your desktop. Once it is running choose 'File' -> 'New' -> 'Job' from the menu system or click on the 'New file' icon on the toolbar and choose the 'Job' option.
    Speed Tip: You can download the Kettle job aggregate_mr.kjb already completed.



  2. Add a Start Job Entry: You need to tell PDI where to start the job, so expand the 'General' section of the Design palette and drag a 'Start' node onto the job canvas. Your canvas should look like:
    Image Added

  3. Add a Pentaho Map Reduce Job Entry: You are creating the job to execute a Pentaho MapReduce transformation, so expand the 'Big Data' section of the Design palette and drag a 'Pentaho MapReduce' node onto the job canvas. Your canvas should look like:
    Image Added

  4. Connect the Start and MapReduce Job Entries: Hover the mouse over the 'Start' node and a tooltip will appear. Click on the output connector (the green arrow pointing to the right) and drag a connector arrow to the 'Pentaho MapReduce' node.

    Your canvas should look like this:
    Image Added

  5. Edit the MapReduce Job Entry: Double-click on the 'Pentaho MapReduce' node to edit its properties. Enter this information:
    1. Hadoop Job Name: Enter 'Aggregate Map Reduce'
    2. Mapper Transformation: Enter <PATH>/aggregate_mapper.ktr
      <PATH> is the folder path you saved the mapper in.
    3. Mapper Input Step Name: Enter 'Map/Reduce Input'
    4. Mapper Output Step Name: Enter 'Map/Reduce Output'
      When you are done the window should look like:

      Image Added
  6. Configure the Reducer: Switch to the 'Reducer' tab and enter the following:
    1. Reducer Transformation: Enter <PATH>/aggregate_reducer.ktr
    2. Reducer Input Step Name: Enter 'Map/Reduce Input'
    3. Reducer Output Step Name: Enter 'Map/Reduce Output'
      When you are done the window should look like:
      Image Added

  7. Configure the MapReduce Job: Switch to the 'Job Setup' tab. Enter this information:
    1. Input Path: Enter '/user/pdi/weblogs/parse'

    2. Output Path: Enter '/user/pdi/weblogs/aggregate_mr'
    3. Input Format: Enter 'org.apache.hadoop.mapred.TextInputFormat'
    4. Output Format: Enter 'org.apache.hadoop.mapred.TextOutputFormat'
    5. Check 'Clean output path before execution'
      When you are done your window should look like:
      Image Added

  8. Configure the Cluster Properties: Switch to the 'Cluster' tab. Enter this information:
    1. Hadoop distribution: Select your Hadoop distribution
    2. Working Directory: Enter '/tmp'
    3. HDFS Hostname, HDFS Port, Job Tracker Hostname, Job Tracker Port: Your connection information.
    4. Number of Mapper Tasks: Enter '3'. You can play around with this to get the best performance based on the size of your data and the number of nodes in your cluster.
    5. Number of Reducer Tasks: Enter '1'. You can play around with this to get the best performance based on the size of your data and the number of nodes in your cluster.
    6. Check 'Enable Blocking'
    7. Logging Interval: Enter '10'. The number of seconds between pinging Hadoop for completion status messages
      When you are done your window should look like:
      Image Added
      Click 'OK' to close the window.

  9. Save the Job: Choose 'File' -> 'Save as...' from the menu system. Save the job as 'aggregate_mr.kjb' into a folder of your choice.

  10. Run the Job: Choose 'Action' -> 'Run' from the menu system or click on the green run button on the job toolbar. An 'Execute a job' window will open. Click on the 'Launch' button. An 'Execution Results' panel will open at the bottom of the PDI window and it will show you the progress of the job as it runs. After a few seconds the job should finish successfully:
    Image Added
    If any errors occurred the job step that failed will be highlighted in red and you can use the 'Logging' tab to view error messages.
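While the job runs you can also watch it from the Hadoop side. On an MR1 cluster the JobTracker web UI (typically http://<jobtracker-host>:50030, with your own hostname substituted) lists running jobs, and the command line offers a quick check:

Code Block
# List MapReduce jobs currently running on the cluster
hadoop job -list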

Check Hadoop for Aggregated Web Log

  1. Run the following command to view the aggregated results:
    Code Block
    hadoop fs -cat /user/pdi/weblogs/aggregate_mr/part-00000 | head

  2. This should return the first few rows of the aggregated file.
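If you want to inspect the result further or pull it out of HDFS for loading into your warehouse, the standard filesystem commands are enough; the local file name below is only an example:

Code Block
# List the output directory (one part file per reducer; this job used a single reducer)
hadoop fs -ls /user/pdi/weblogs/aggregate_mr
# Count the aggregated records
hadoop fs -cat /user/pdi/weblogs/aggregate_mr/part-00000 | wc -l
# Copy the aggregate file to the local filesystem
hadoop fs -get /user/pdi/weblogs/aggregate_mr/part-00000 ./aggregate_mr.txt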

Summary

During this guide you learned how to create and execute a Pentaho MapReduce job on a Hadoop cluster. You consumed detailed weblog data and generated an aggregate data file suitable for loading into an RDBMS-based data warehouse or mart.
