...

  1. Open the Hive Shell: Open the Hive shell by entering 'hive' at the command line so you can manually create a Hive table.

  2. Create the Table in Hive: You need a Hive table to load the data into, so enter the following in the Hive shell.
    Code Block
    create table weblogs (
        client_ip    string,
        full_request_date string,
        day    string,
        month    string,
        month_num int,
        year    string,
        hour    string,
        minute    string,
        second    string,
        timezone    string,
        http_verb    string,
        uri    string,
        http_status_code    string,
        bytes_returned        string,
        referrer        string,
        user_agent    string)
    row format delimited
    fields terminated by '\t';
    
  3. Close the Hive Shell: You are done with the Hive shell for now, so close it by entering 'quit;'.

  4. Load the Table: Load the Hive table by running the following command:
    Code Block
    hadoop fs -put part-00000.txt /user/hive/warehouse/weblogs/
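Because the 'weblogs' table is declared with tab-delimited fields, the data file must split into exactly 16 columns per line or the extra/missing columns will load as NULLs. The following is a minimal sanity check you could run before the 'hadoop fs -put' above; the sample log line is invented for illustration, not taken from the tutorial's data file.

```python
# Check that each line of the weblog file has the 16 tab-separated fields
# that the 'weblogs' CREATE TABLE statement above expects.
EXPECTED_FIELDS = 16  # client_ip through user_agent

# Hypothetical sample line in the expected layout (not real tutorial data).
sample = "\t".join([
    "10.0.0.1", "01/Jan/2012:00:00:01 -0500", "01", "Jan", "1", "2012",
    "00", "00", "01", "-0500", "GET", "/index.html", "200", "512",
    "-", "Mozilla/5.0",
])

def check_line(line: str) -> bool:
    """Return True if the line splits into the expected column count."""
    return len(line.rstrip("\n").split("\t")) == EXPECTED_FIELDS

print(check_line(sample))
```

In practice you would loop over `part-00000.txt` with this check and report any malformed line numbers before loading.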
Create Hive Database Connection

Create RDBMS Connection

Create a Transformation to Extract Data from Hive

...

  1. Start PDI on your desktop. Once it is running choose 'File' -> 'New' -> 'Transformation' from the menu system or click on the 'New file' icon on the toolbar and choose the 'Transformation' option.
  2. Add a Table Input Step: You are going to extract data from Hive using HiveQL, so expand the 'Input' section of the Design palette and drag a 'Table Input' node onto the transformation canvas. Your transformation should look like:


  3. Edit the Table Input Step: Double-click on the 'Table Input' node to edit its properties. Enter this information:
    1. Connection: Select 'Hive'
    2. SQL: Enter the following
      Code Block
      select client_ip, year, month, month_num, count(*) as pageviews
      from weblogs
      group by client_ip, year, month, month_num

      When you are done your 'Table Input' window should look like this:

      Click 'OK' to close the window.

  4. Add a Table Output Step: You are going to write the data to a database, so expand the 'Output' section of the Design palette and drag a 'Table Output' node onto the transformation canvas. Your transformation should look like:

    Notice that there are many other output formats you can use, such as Text file, XML, and Microsoft Excel.

  5. Connect the Input and Output steps: Hover the mouse over the 'Table Input' node and a tooltip will appear. Click on the output connector (the green arrow pointing to the right) and drag a connector arrow to the 'Table Output' node. Your canvas should look like this:


  6. Edit the Table Output Step: Double-click on the 'Table Output' node to edit its properties. Do the following:
    1. Connection: Select 'RDBMS'
    2. Target Table: Enter 'aggregate_hive'
    3. Check 'Truncate table' so you can re-run this transformation.
    4. Click the 'SQL' button to create the table in your target database.
    5. Click the 'Execute' button to run the SQL.

      The 'Results of the SQL statements' window will appear, telling you whether the SQL executed successfully or giving you troubleshooting information if it failed.
    6. Click 'OK' to close the window.
    7. Click 'Close' to close the 'Simple SQL editor' window.
      When you are done your window should look like:

      Click 'OK' to close the window.

  7. Save the Transformation: Choose 'File' -> 'Save as...' from the menu system. Save the transformation as 'hive_to_rdbms.ktr' into a folder of your choice.

  8. Run the Transformation: Choose 'Action' -> 'Run' from the menu system or click on the green run button on the transformation toolbar. An 'Execute a transformation' window will open. Click on the 'Launch' button. An 'Execution Results' panel will open at the bottom of the PDI window and show you the progress of the transformation as it runs. After several seconds the transformation should finish successfully:
    If any errors occurred, the step that failed will be highlighted in red and you can use the 'Logging' tab to view error messages.

    Check RDBMS for Hive Aggregated Table

  9. Explore the Database: Choose 'Tools' -> 'Database' -> 'Explore' from the menu system.

  10. Select the Connection: In the 'Make your selection' window select 'RDBMS' and click 'OK'.


  11. Preview the Table: Expand RDBMS -> Tables. Right-click on 'aggregate_hive' and select 'Preview first 100'.
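To see the shape of data the 'aggregate_hive' table should contain, the GROUP BY from the Table Input step can be sketched locally with SQLite. The rows below are invented for illustration; the query itself mirrors the HiveQL in step 3 (SQLite and HiveQL share this basic aggregate syntax).

```python
import sqlite3

# Local sketch of the HiveQL aggregate from the Table Input step,
# run against a few invented rows to show the output shape.
conn = sqlite3.connect(":memory:")
conn.execute("""create table weblogs (
    client_ip text, year text, month text, month_num int, uri text)""")
rows = [
    ("10.0.0.1", "2012", "Jan", 1, "/a"),
    ("10.0.0.1", "2012", "Jan", 1, "/b"),
    ("10.0.0.2", "2012", "Feb", 2, "/a"),
]
conn.executemany("insert into weblogs values (?, ?, ?, ?, ?)", rows)

# Same select/group-by as the tutorial, with an ORDER BY for stable output.
result = conn.execute("""
    select client_ip, year, month, month_num, count(*) as pageviews
    from weblogs
    group by client_ip, year, month, month_num
    order by client_ip
""").fetchall()

for row in result:
    print(row)
```

Each output row is one (client_ip, year, month, month_num) group with its pageview count, which is exactly what 'Preview first 100' should show for 'aggregate_hive'.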

...