
How to invoke a Pig script from a PDI job.

Prerequisites

In order to follow along with this how-to guide you will need the following:

  • Hadoop
  • Pig
  • Pentaho Data Integration
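
If you want to confirm that Hadoop and Pig are installed and on your path before starting, here is a quick sketch (the exact output, and on some Pig releases the version flag, may vary):

Code Block
# print the installed Hadoop version
hadoop version
# print the installed Pig version (flag spelling can vary by release)
pig -version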

Sample Files

The sample data file needed for this guide is:

File Name              Content
weblogs_parse.txt.zip  Tab-delimited, parsed weblog data
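
The sample ships as a zip archive, so it must be extracted before it can be loaded into HDFS. A quick sketch, assuming the archive unpacks to weblogs_parse.txt in your working directory:

Code Block
# extract weblogs_parse.txt from the downloaded archive
unzip weblogs_parse.txt.zip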


NOTE: If you have completed the Using Pentaho MapReduce to Parse Weblog Data guide, then the necessary files will already be in the proper location.


This file should be placed in the /user/pdi/weblogs/parse directory of HDFS using the following commands.
Code Block
hadoop fs -mkdir /user/pdi/weblogs
hadoop fs -mkdir /user/pdi/weblogs/parse
hadoop fs -put weblogs_parse.txt /user/pdi/weblogs/parse/part-00000
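
If you want to verify the upload, listing the directory should show the file stored as part-00000:

Code Block
# confirm the input file is in place
hadoop fs -ls /user/pdi/weblogs/parse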


Step-by-Step

Create a Pig Script

In this task you are going to create a Pig Script that you will call from within a PDI job.

Speed Tip: You can download the already-completed script aggregate_pig.pig

  1. Create Pig Script: Using a text editor create a new file containing the following Pig Latin script:
    Code Block
    
    -- load the parsed, tab-delimited weblog records and declare their schema
    weblogs = LOAD '/user/pdi/weblogs/parse/part*' USING PigStorage('\t')
            AS (
    client_ip:chararray,
    full_request_date:chararray,
    day:int,
    month:chararray,
    month_num:int,
    year:int,
    hour:int,
    minute:int,
    second:int,
    timezone:chararray,
    http_verb:chararray,
    uri:chararray,
    http_status_code:chararray,
    bytes_returned:chararray,
    referrer:chararray,
    user_agent:chararray
    );
    
    -- group the records by client IP, year, and month
    weblog_group = GROUP weblogs BY (client_ip, year, month_num);
    -- count the rows in each group to get pageviews per IP per month
    weblog_count = FOREACH weblog_group GENERATE group.client_ip, group.year, group.month_num, COUNT_STAR(weblogs) AS pageviews;
    
    -- write the aggregate back to HDFS
    STORE weblog_count INTO '/user/pdi/weblogs/aggregate_pig';
    

  2. Save Script: Save the script as aggregate_pig.pig in a folder of your choice.
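
Optionally, you can sanity-check the script from the command line before calling it from PDI. This is only a sketch: it assumes the pig launcher is on your path and the cluster is reachable, and it removes the output directory afterwards because the STORE will fail later if /user/pdi/weblogs/aggregate_pig already exists:

Code Block
# run the script in the default mapreduce mode
pig aggregate_pig.pig
# inspect a few result rows
hadoop fs -cat /user/pdi/weblogs/aggregate_pig/part* | head
# clean up so the PDI job can recreate the output directory
# (older Hadoop releases use 'hadoop fs -rmr' instead)
hadoop fs -rm -r /user/pdi/weblogs/aggregate_pig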

Create a Job to Aggregate Web Log Data Using a Pig Script

In this task you will create a job that runs the created Pig script to build an aggregate file of weblog data. The file will contain a count of pageviews for each IP address by month and year.



  1. Start PDI on your desktop. Once it is running, choose 'File' -> 'New' -> 'Job' from the menu system or click on the 'New file' icon on the toolbar and choose the 'Job' option.
    Speed Tip: You can download the already-completed Kettle job aggregate_pig.kjb


  2. Add a Start Job Entry: You need to tell PDI where to start the job, so expand the 'General' section of the Design palette and drag a 'Start' node onto the job canvas.

  3. Add a Pig Script Executor Job Entry: You are going to execute a Pig script in this job, so expand the 'Big Data' section of the Design palette and drag a 'Pig Script Executor' node onto the job canvas.

  4. Connect the Start and Pig Script steps: Hover the mouse over the 'Start' node until a tooltip appears, then click on the output connector (the green arrow pointing to the right) and drag a connector arrow to the 'Pig Script Executor' node.

  5. Edit the Pig Script Job Entry: Double-click on the 'Pig Script Executor' node to edit its properties. Enter this information:
    1. Hadoop distribution: Select your Hadoop distribution.
    2. HDFS hostname, HDFS port, Job tracker hostname, Job tracker port: your Hadoop connection information.
    3. Pig script: Browse to the Pig script you just created and select it.
    4. Check 'Enable blocking' so the job entry waits for the Pig script to finish before continuing.
      When you are done, click 'OK' to close the window.

  6. Save the Job: Choose 'File' -> 'Save as...' from the menu system. Save the job as 'aggregate_pig.kjb' into a folder of your choice.

  7. Run the Job: Choose 'Action' -> 'Run' from the menu system or click on the green run button on the job toolbar. An 'Execute a job' window will open. Click on the 'Launch' button. An 'Execution Results' panel will open at the bottom of the PDI window and it will show you the progress of the job as it runs. After a few seconds the job should finish successfully.
    If any errors occurred, the job entry that failed will be highlighted in red and you can use the 'Logging' tab to view error messages.
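
Once the job is saved, it can also be run outside the PDI user interface with Kitchen, PDI's command-line job runner. A minimal sketch; the .kjb path is a placeholder to adjust, and the script is kitchen.bat on Windows:

Code Block
# run the saved job from the PDI data-integration directory, logging at Basic level
./kitchen.sh -file=/path/to/aggregate_pig.kjb -level=Basic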

Check Hadoop for the Pig-Generated File

  1. Run the following command to view the file generated by Pig:
    Code Block
    
    hadoop fs -cat /user/pdi/weblogs/aggregate_pig/part-r-00000 | head
    

  2. This should return the first few rows of the aggregated file.
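
To inspect more than the first few rows, you can count the aggregated records or copy the result out of HDFS using standard HDFS shell commands:

Code Block
# count the aggregated rows
hadoop fs -cat /user/pdi/weblogs/aggregate_pig/part-r-00000 | wc -l
# copy the result to the local filesystem for inspection
hadoop fs -copyToLocal /user/pdi/weblogs/aggregate_pig/part-r-00000 aggregate_pig.txt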

Summary

In this guide you learned how to invoke a Pig script from a PDI job.
