
Loading Data into MapR Hive

How to use a PDI job to load a data file into a Hive table.

Note

For those of you familiar with Hive, you will note that a Hive table can be defined over "external" data. Using the external option, you can define a Hive table that simply points at the CLDB directory containing the parsed file. For this how-to we chose not to use the external option, so that you can see the ease with which files can be added to non-external Hive tables.
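
For illustration only, an external table over the same directory might look like the following sketch. The table name weblogs_ext is hypothetical, and the column list is abbreviated to the first two columns of the managed table created later in this guide:

Code Block
-- hypothetical external table; remaining columns omitted for brevity
create external table weblogs_ext (
  client_ip         string,
  full_request_date string)
row format delimited
fields terminated by '\t'
location '/weblogs/parse';

With a definition like this, dropping the table leaves the files in /weblogs/parse untouched, which is the main practical difference from the managed table used below.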

Prerequisites

In order to follow along with this how-to guide you will need the following:

  • MapR
  • Pentaho Data Integration
  • Hive

Sample Files

The sample data file needed for this guide is:

File Name               Content
weblogs_parse.txt.zip   Parsed, tab-delimited weblog data

NOTE: If you have previously completed the "Using Pentaho MapReduce to Parse Weblog Data" guide, the necessary files will already be in the proper directory.

Unzip this file, then place it in the /weblogs/parse directory of the CLDB using the following commands.

Code Block
hadoop fs -mkdir /weblogs
hadoop fs -mkdir /weblogs/parse
hadoop fs -put weblogs_parse.txt /weblogs/parse/part-00000
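
If you want to confirm the upload before moving on, a standard listing works (the expected entry is shown as a comment):

Code Block
hadoop fs -ls /weblogs/parse
# expect to see /weblogs/parse/part-00000 in the listing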

Step-By-Step Instructions

Setup

Start MapR if it is not already running.
Start Hive Server if it is not already running.
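
Exact startup commands vary by installation and version; on a typical MapR node of this era the two services might be started like this (a sketch, assuming a warden-managed MapR install and the original HiveServer1):

Code Block
# start the MapR services managed by the warden (assumes a service-style install)
service mapr-warden start
# start HiveServer1 (original Hive server; later versions use hiveserver2)
hive --service hiveserver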

Create a Hive Table

  1. Open the Hive Shell: Open the Hive shell so you can manually create a Hive table by entering 'hive' at the command line.
  2. Create the Table in Hive: You need a Hive table to load the data into, so enter the following in the Hive shell (an optional check of the new table is sketched after this list).
    Code Block
    
    create table weblogs (
      client_ip         string,
      full_request_date string,
      day               string,
      month             string,
      month_num         int,
      year              string,
      hour              string,
      minute            string,
      second            string,
      timezone          string,
      http_verb         string,
      uri               string,
      http_status_code  string,
      bytes_returned    string,
      referrer          string,
      user_agent        string)
    row format delimited
    fields terminated by '\t';
    

  3. Close the Hive Shell: You are done with the Hive Shell for now, so close it by entering 'quit;' in the Hive Shell.
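
As an optional check before closing the shell, 'describe' confirms the schema Hive recorded (standard HiveQL; not part of the original guide):

Code Block
describe weblogs;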

Create a Job to Load Hive

In this task you will be creating a job to load parsed and delimited weblog data into a Hive table. Once the data is loaded into the table, you will be able to run HiveQL statements to query this data.

Speed Tip

You can download the Kettle job load_hive.kjb already completed.

  1. Start PDI on your desktop. Once it is running choose 'File' -> 'New' -> 'Job' from the menu system or click on the 'New file' icon on the toolbar and choose the 'Job' option.
  2. Add a Start Job Entry: You need to tell PDI where to start the job, so expand the 'General' section of the Design palette and drag a 'Start' job entry onto the job canvas. Your canvas should look like:
    [Image: Add a Start Job Entry.png]
  3. Add a Copy File Job Entry: You will need to copy the parsed file into the Hive table, so expand the 'Big Data' section of the Design palette and drag a 'Hadoop Copy Files' job entry onto the job canvas. Your canvas should look like:
    [Image: Add Hadoop Copy Files Job Entry.PNG]
  4. Connect the Start and Copy Files job entries: Hover the mouse over the 'Start' job entry and a tooltip will appear. Click on the output connector (the green arrow pointing to the right) and drag a connector arrow to the 'Hadoop Copy Files' node. Your canvas should look like:
    [Image: Connect Start and Copy Files.PNG]
  5. Edit the Copy Files Job Entry: Double-click on the 'Copy Files' job entry to edit its properties. Enter this information:
    1. File/Folder source: maprfs://<CLDB>:<PORT>/weblogs/parse

      • When running PDI on the same machine as the MapR cluster use maprfs:///weblogs/parse; the CLDB and port are not required.
      • <CLDB> is the server name of the machine running the MapR CLDB.
      • <PORT> is the port the MapR CLDB is running on.
    2. File/Folder destination: maprfs://<CLDB>:<PORT>/user/hive/warehouse/weblogs
      • When running PDI on the same machine as the MapR cluster use maprfs:///user/hive/warehouse/weblogs; the CLDB and port are not required.
      • <CLDB> is the server name of the machine running the MapR CLDB.
      • <PORT> is the port the MapR CLDB is running on.
    3. Wildcard (RegExp): Enter 'part-.*'
    4. Click the 'Add' button to add the files to the list of files to copy. When you are done your window should look like (your folder path may be different):
      [Image: Hadoop Copy Files entry window]
    5. Click 'OK' to close the window.
      Notice that you could also load a local file into Hive using this step; the file does not already have to be in MapR. Conceptually, this job entry performs the same copy as the manual command sketched after these steps.

  6. Save the Job: Choose 'File' -> 'Save as...' from the menu system. Save the job as 'load_hive.kjb' into a folder of your choice.

  7. Run the Job: Choose 'Action' -> 'Run' from the menu system or click on the green run button on the job toolbar. An 'Execute a job' window will open. Click on the 'Launch' button. An 'Execution Results' panel will open at the bottom of the PDI window and it will show you the progress of the job as it runs. After a few seconds the job should finish successfully:
    [Image: Execution Results]

If any errors occurred, the job entry that failed will be highlighted in red, and you can use the 'Logging' tab to view error messages.
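
For reference, the Hadoop Copy Files entry configured above is conceptually equivalent to this manual copy (a sketch, assuming the default Hive warehouse path used in this guide):

Code Block
hadoop fs -cp /weblogs/parse/part-00000 /user/hive/warehouse/weblogs/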

Check Hive

  1. Open the Hive Shell: Open the Hive shell so you can manually query the weblogs table by entering 'hive' at the command line.
  2. Query Hive for Data: Verify the data has been loaded to Hive by querying the weblogs table (a further sanity check is sketched after this list).
    Code Block
    select * from weblogs limit 10;

  3. Close the Hive Shell: You are done with the Hive Shell for now, so close it by entering 'quit;' in the Hive Shell.
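
For a further sanity check before quitting, a row count is a one-liner (standard HiveQL; the count you see depends on the sample file):

Code Block
select count(*) from weblogs;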

Summary

During this guide you learned how to load data into a Hive table using a PDI job. PDI jobs can be used to put files into Hive from many different sources.
Other guides in this series cover how to transform data in Hive, get data out of Hive, and report on data in Hive.
