
NOTE: If you have previously completed the "Using Pentaho MapReduce to Parse Weblog Data" guide, the necessary files will already be in the proper directory.

This file should be placed in the /weblogs/parse directory of the CLDB using the following commands.

Code Block

hadoop fs -mkdir /weblogs
hadoop fs -mkdir /weblogs/parse
hadoop fs -put weblogs_parse.txt /weblogs/parse/part-00000
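
To confirm the upload before moving on, an optional quick check (not part of the original steps) is to list the target directory:

Code Block

hadoop fs -ls /weblogs/parse

You should see part-00000 listed with a non-zero size.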

Step-By-Step Instructions

...

  1. Open the Hive Shell: Open the Hive shell so you can manually create a Hive table by entering 'hive' at the command line.
  2. Create the Table in Hive: You need a Hive table to load the data into, so enter the following in the Hive shell (an optional schema check appears after this list).
    Code Block
    
    create table weblogs (
        client_ip    string,
        full_request_date    string,
        day    string,
        month    string,
        month_num    int,
        year    string,
        hour    string,
        minute    string,
        second    string,
        timezone    string,
        http_verb    string,
        uri    string,
        http_status_code    string,
        bytes_returned    string,
        referrer    string,
        user_agent    string)
    row format delimited
    fields terminated by '\t';
    
  3. Close the Hive Shell: You are done with the Hive shell for now, so close it by entering 'quit;' in the Hive shell.
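
Before quitting, you can optionally confirm that the table was created with the expected schema. This check is an addition to the original steps:

Code Block

describe weblogs;

The output should list the sixteen columns defined above.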

...

In this task, you will create a job to load parsed, delimited weblog data into a Hive table. Once the data is loaded into the table, you will be able to run HiveQL statements to query it.
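
As an aside, if you only wanted a quick manual test from the Hive shell rather than a reusable job, HiveQL can load the file directly. This statement is shown for illustration only and is not one of this guide's steps; note that 'load data inpath' moves the file out of /weblogs/parse, so you would need to re-run the earlier hadoop fs -put command afterwards:

Code Block

load data inpath '/weblogs/parse/part-00000' into table weblogs;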

Tip: Speed Tip

You can download the already-completed Kettle job, load_hive.kjb.
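
If you use the downloaded job instead of building it by hand, you can also run it headless with Kitchen, PDI's command-line job runner. The file path below is an assumption; adjust it to wherever you saved the job:

Code Block

sh kitchen.sh -file=load_hive.kjb -level=Basic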

  1. Start PDI on your desktop. Once it is running, choose 'File' -> 'New' -> 'Job' from the menu system, or click the 'New file' icon on the toolbar and choose the 'Job' option.
  2. Add a Start Job Entry: You need to tell PDI where to start the job, so expand the 'General' section of the Design palette and drag a 'Start' job entry onto the job canvas. Your canvas should now show a single 'Start' entry.

...

  1. Open the Hive Shell: Open the Hive shell so you can manually create a Hive table by entering 'hive' at the command line.
  2. Query Hive for Data: Verify that the data has been loaded into Hive by querying the weblogs table (a further example query follows this list).
Code Block
select * from weblogs limit 10;


  1. Close the Hive Shell: You are done with the Hive Shell for now, so close it by entering 'quit;' in the Hive Shell.
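
With the table populated, you can run more interesting HiveQL than a simple select. For example, this follow-on query (an illustration using the columns defined earlier, not part of the original guide) counts requests per month:

Code Block

select year, month, count(*) as requests
from weblogs
group by year, month;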

...