
Create a Hive Table
NOTE: This task may be skipped if you have completed the Loading Data into Hive guide.

  1. Open the Hive Shell: Open the Hive shell so you can manually create a Hive table by entering 'hive' at the command line.

  2. Create the Table in Hive: You need a Hive table to load the data into, so enter the following in the Hive shell.
    Code Block
    
    create table weblogs (
        client_ip           string,
        full_request_date   string,
        day                 string,
        month               string,
        month_num           int,
        year                string,
        hour                string,
        minute              string,
        second              string,
        timezone            string,
        http_verb           string,
        uri                 string,
        http_status_code    string,
        bytes_returned      string,
        referrer            string,
        user_agent          string)
    row format delimited
    fields terminated by '\t';
    

  3. Close the Hive Shell: You are done with the Hive Shell for now, so close it by entering 'quit;' in the Hive Shell.


  4. Load the Table: Load the Hive table by running the following command (a quick verification check follows this list):
    Code Block
    
    hadoop fs -cp /weblogs/parse/part-00000 /user/hive/warehouse/weblogs/
    
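As an optional sanity check (not part of the original guide), you can confirm from the Hive shell both that the table exists and that Hive can see the file you copied. These queries assume the create and load steps above completed without errors:

Code Block

-- confirm the table definition exists
describe weblogs;

-- a non-zero row count confirms Hive can read the file copied into its warehouse directory
select count(*) from weblogs;
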

Create Hive Database Connection

Follow the shared Create Hive Database Connection guide to set up the PDI connection to Hive before continuing.

Create a Job to Aggregate Web Log Data into a Hive Table

In this task you will create a job that runs a Hive script to build an aggregate table, weblogs_agg, using the detailed data found in the Hive weblogs table. The new Hive weblogs_agg table will contain a count of page views for each IP address by month and year.

Speed Tip: You can download the already-completed Kettle job aggregate_hive.kjb.

  1. Start PDI: Start PDI on your desktop. Once it is running, choose 'File' -> 'New' -> 'Job' from the menu system or click on the 'New file' icon on the toolbar and choose the 'Job' option.

  2. Add a Start Job Entry: You need to tell PDI where to start the job, so expand the 'General' section of the Design palette and drag a 'Start' node onto the job canvas.

  3. Add a SQL Job Entry: You are going to run a HiveQL script to create the aggregate table, so expand the 'Scripting' section of the Design palette and drag a 'SQL' node onto the job canvas.

  4. Connect the Start and SQL Job Entries: Hover the mouse over the 'Start' node until a tooltip appears, then click on the output connector (the green arrow pointing to the right) and drag a connector arrow to the 'SQL' node.

  5. Edit the SQL Job Entry: Double-click on the 'SQL' node to edit its properties. Enter this information:
    1. Connection: Select 'Hive'
    2. SQL Script: Enter the following (an optional way to test this script by hand is shown after this list):
      Code Block
      
      create table weblogs_agg
      as
      select
        client_ip
      , year
      , month
      , month_num
      , count(*) as pageviews
      from weblogs
      group by client_ip, year, month, month_num
      

    3. When you are done, click 'OK' to close the window.



  6. Save the Job: Choose 'File' -> 'Save as...' from the menu system. Save the job as 'aggregate_hive.kjb' into a folder of your choice.



  7. Run the Job: Choose 'Action' -> 'Run' from the menu system or click on the green run button on the job toolbar. An 'Execute a job' window will open. Click on the 'Launch' button. An 'Execution Results' panel will open at the bottom of the PDI window and show you the progress of the job as it runs. After a few seconds the job should finish successfully. If any errors occurred, the job step that failed will be highlighted in red and you can use the 'Logging' tab to view error messages.
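If you would like to test the HiveQL by hand before wiring it into the job (an optional step, not part of the original guide), you can paste it into the Hive shell. Note that the shell requires a trailing semicolon, and you should drop the table afterwards so the job's 'create table' does not fail because the table already exists:

Code Block

-- same script as in the SQL job entry, plus the semicolon the shell requires
create table weblogs_agg as
select client_ip, year, month, month_num, count(*) as pageviews
from weblogs
group by client_ip, year, month, month_num;

-- drop it again so the PDI job can recreate it
drop table weblogs_agg;
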

Check Hive

  1. Open the Hive Shell: Open the Hive shell by entering 'hive' at the command line.
  2. Query Hive for Data: Verify the data has been loaded by querying the new weblogs_agg table (an optional follow-up query is shown after this list).
    Code Block
    
    select * from weblogs_agg limit 10;
    
    

  3. Close the Hive Shell: You are done with the Hive Shell, so close it by entering 'quit;' in the Hive Shell.
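If you want to dig further (an optional query, not part of the original guide), you can sort the aggregate from the Hive shell to see which client IPs generated the most page views:

Code Block

select client_ip, year, month, pageviews
from weblogs_agg
order by pageviews desc
limit 10;
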

Summary

During this guide you learned how to transform data within Hive as part of a PDI job flow.