Using Pentaho MapReduce to Parse Weblog Data in MapR

How to use Pentaho MapReduce to convert raw weblog data into parsed, delimited records.


The steps in this guide include:

  1. Loading the sample data file into CLDB
  2. Developing a PDI transformation which will serve as a Mapper
  3. Developing a PDI job which will invoke a Pentaho MapReduce step that runs a map-only job, using the developed mapper transformation
  4. Executing and reviewing output

Prerequisites

In order to follow along with this how-to guide you will need the following:

  • MapR
  • Pentaho Data Integration
  • Pentaho Hadoop Node Distribution

Sample Files

The sample data file needed for this guide is:

File Name                  Content
weblogs_rebuild.txt.zip    Unparsed, raw weblog data

NOTE: If you have completed the Loading Data into the MapR filesystem guide, then the necessary file will already be in the proper location.

This file should be placed in CLDB at /weblogs/raw using the following commands.
Code Block
hadoop fs -mkdir /weblogs
hadoop fs -mkdir /weblogs/raw
hadoop fs -put weblogs_rebuild.txt /weblogs/raw/
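
The sample download is a zip archive, so extract weblogs_rebuild.txt before running the put command above. To confirm the upload and preview a few raw lines (a quick sanity check, since these are the exact lines the mapper will later receive), you can run:

Code Block
# Verify the raw weblog file landed in CLDB, then preview a few lines.
hadoop fs -ls /weblogs/raw
hadoop fs -cat /weblogs/raw/weblogs_rebuild.txt | head -3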

Step-By-Step Instructions

Setup

Start MapR if it is not already running.
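
How you start MapR depends on your installation; on a typical Linux node the cluster processes are controlled by MapR's warden service. A minimal sketch, assuming the stock mapr-warden init service name:

Code Block
# Assumption: MapR's warden init service controls the cluster processes;
# adjust the service name to match your install.
sudo service mapr-warden status
sudo service mapr-warden start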

Create Mapper Transformation to Parse Weblog File

Follow the steps in the Create Mapper Transformation to Parse Weblog File guide to build the mapper transformation used by the job below.
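
The details of the mapper live in that guide's PDI transformation, but as a rough illustration of the kind of parsing it performs — assuming the raw records are in Apache-style access log format, which is an assumption about the sample data — a shell equivalent might look like:

Code Block
# Hypothetical sketch only: the real parsing is done by the PDI mapper
# transformation. This awk command splits an Apache-style log line into
# tab-delimited fields: client IP, timestamp, request, status, bytes.
awk '{
  ip = $1
  ts = $4 " " $5; gsub(/[\[\]]/, "", ts)
  req = $6 " " $7 " " $8; gsub(/"/, "", req)
  print ip "\t" ts "\t" req "\t" $9 "\t" $10
}' weblogs_rebuild.txt | head -5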

Create a PDI Job to Execute a Map Only MapReduce Process

In this task you will create a job that will execute a "map-only" MapReduce process using the mapper transformation you created in the previous section.
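
For orientation: a "map-only" job runs the mapper over every input split and writes the mapper's output directly, because the reduce phase is disabled (you will set Number of Reducer Tasks to '0' below). Purely as an illustration — not a step in this guide, and with a streaming jar path that is an assumption about your install — the same shape of job expressed with Hadoop streaming looks like:

Code Block
# Illustration only: a map-only job, with the reduce phase disabled by
# setting mapred.reduce.tasks=0. The streaming jar location varies by
# install; this path is an assumption. Pentaho MapReduce submits an
# equivalent job for you.
hadoop jar /opt/mapr/hadoop/hadoop-0.20.2/contrib/streaming/hadoop-*-streaming.jar \
  -D mapred.reduce.tasks=0 \
  -input /weblogs/raw \
  -output /tmp/map_only_demo \
  -mapper /bin/cat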
Speed Tip: You can download the Kettle job weblog_parse_mr.kjb already completed.

  1. Within PDI, choose 'File' -> 'New' -> 'Job' from the menu system or click on the 'New file' icon on the toolbar and choose the 'Job' option.

  2. Add a Start Job Entry: You need to tell PDI where to start the job, so expand the 'General' section of the Design palette and drag a 'Start' node onto the job canvas. Your canvas should look like:
    [image]

  3. Add a Pentaho MapReduce Job Entry: Expand the 'Big Data' section of the Design palette and drag a 'Pentaho MapReduce' job entry onto the job canvas. Your canvas should look like:
    [image]

  4. Connect the Start and MapReduce Job Entries: Hover the mouse over the 'Start' job entry and a tooltip will appear. Click on the output connector (the green arrow pointing to the right) and drag a connector arrow to the 'Pentaho MapReduce' job entry. Your canvas should look like this:
    [image]

  5. Edit the MapReduce Job Entry: Double-click on the 'Pentaho MapReduce' job entry to edit its properties. Enter this information:
    1. Hadoop Job Name: Enter 'Web Log Parser'
    2. Mapper Transformation: Enter <PATH>/weblog_parse_mapper.ktr, where <PATH> is the folder path you saved the mapper in.
    3. Mapper Input Step Name: Enter 'Map/Reduce Input'
    4. Mapper Output Step Name: Enter 'Map/Reduce Output'
      When you are done the window should look like:
      [image]

  6. Configure the MapReduce Job: Switch to the 'Job Setup' tab. Enter this information:
    1. Check 'Suppress Output of Map Key'. With TextInputFormat the map key is just each line's byte offset, so it is not useful in the parsed output.
    2. Input Path: Enter '/weblogs/raw'
    3. Output Path: Enter '/weblogs/parse'
    4. Input Format: Enter 'org.apache.hadoop.mapred.TextInputFormat'
    5. Output Format: Enter 'org.apache.hadoop.mapred.TextOutputFormat'
    6. Check 'Clean output path before execution'
      When you are done your window should look like:
      [image]

  7. Configure the Cluster Properties: Switch to the 'Cluster' tab. Enter this information:
    1. Hadoop distribution: Select 'MapR'
    2. Working Directory: Enter '/tmp'
    3. HDFS Hostname, HDFS Port, Job Tracker Hostname, Job Tracker Port: Your connection information. For a local single-node cluster leave these blank.
    4. Number of Mapper Tasks: Enter '3'. You can experiment with this value to get the best performance based on the size of your data and the number of nodes in your cluster.
    5. Number of Reducer Tasks: Enter '0'
    6. Check 'Enable Blocking'
    7. Logging Interval: Enter '10', the number of seconds between pings to MapR for completion status messages.
      When you are done your window should look like:
      [image]
      Click 'OK' to close the window.

  8. Save the Job: Choose 'File' -> 'Save as...' from the menu system. Save the job as 'weblogs_parse_mr.kjb' into a folder of your choice.

  9. Run the Job: Choose 'Action' -> 'Run' from the menu system or click on the green run button on the job toolbar. An 'Execute a job' window will open; click on the 'Launch' button. An 'Execution Results' panel will open at the bottom of the PDI window and show you the progress of the job as it runs. After a few seconds the job should finish successfully:
    [image]
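
You can also watch the job from outside PDI; assuming the Hadoop command-line tools are available on a cluster node, the job-listing command shows the MapReduce jobs currently running:

Code Block
# Optional: list running MapReduce jobs while 'Web Log Parser' executes.
hadoop job -list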

If any errors occurred, the job entry that failed will be highlighted in red, and you can use the 'Logging' tab to view error messages.

Check MapR for Parsed Weblog Data

  1. If you have mounted your MapR CLDB onto your local machine, you may verify the files loaded by navigating to the MapR directory:
    Code Block
    ls /mapr/my.cluster.com/weblogs/parse

    This should return:

    _logs  part-00000  part-00001  part-00002  _SUCCESS

  2. If you have not mounted your MapR CLDB onto your local machine, you may alternatively check MapR by:
    Code Block
    hadoop fs -ls /weblogs/parse

    This should return:

    -rwxrwxrwx 3 demo demo 27132365 2012-01-04 16:52 /weblogs/parse/part-00001
    -rwxrwxrwx 3 demo demo        0 2012-01-04 16:52 /weblogs/parse/_SUCCESS
    -rwxrwxrwx 3 demo demo 27188268 2012-01-04 16:52 /weblogs/parse/part-00002
    drwxrwxrwx - demo demo        1 2012-01-04 16:52 /weblogs/parse/_logs
    -rwxrwxrwx 3 demo demo 27147417 2012-01-04 16:52 /weblogs/parse/part-00000
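To spot-check the parsed records themselves, you can print the first few lines of one part file; per the TextOutputFormat configured earlier, these are plain delimited text:

Code Block
# Print the first 10 parsed, delimited records for a quick sanity check.
hadoop fs -cat /weblogs/parse/part-00000 | head -10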

Summary

In this guide you learned how to create and execute a Pentaho MapReduce job to parse raw weblog data.
