{scrollbar}
{excerpt}How to use a PDI transformation to extract data from Hive and load it into an RDBMS table.{excerpt} The new RDBMS table will contain the count of page views by IP address and month.
{info:title=Note}For brevity's sake, this transformation will only contain two steps: a Table Input and a Table Output. In practice, the full expressiveness of the PDI transformation semantics is available. Further, PDI supports bulk loading for many RDBMSs, which would be a viable, and common, alternative to the Table Output approach.{info}
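For context, the aggregation this transformation performs could be expressed directly in HiveQL. Below is a minimal sketch run through the Hive CLI; the column names {{client_ip}} and {{month}} are assumptions about the weblogs schema rather than something taken from this guide:
{code}# Sketch of the query a Table Input step could run against Hive.
# Column names client_ip and month are hypothetical placeholders.
hive -e "SELECT client_ip, month, COUNT(*) AS pageviews
         FROM weblogs
         GROUP BY client_ip, month;"{code}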

h1. Prerequisites

In order to follow along with this how-to guide, you will need the following:
* Hadoop
* Pentaho Data Integration
* Hive

h1. Sample Files

The source data for this guide will reside in a Hive table called weblogs. If you have previously completed the "Loading Data into Hive" guide, you can skip ahead to "Create a Database Connection to Hive"; you do not need to load the following sample data.
The sample data file needed for the "Create a Hive Table" instructions is:
|| File Name || Content ||
| [How To's^weblogs_parse.txt.zip] | Tab-delimited, parsed weblog data |

\\
NOTE: If you have previously completed the "Using Pentaho MapReduce to Parse Weblog Data" guide, the necessary files will already be in the proper location.
This file should be placed in the /user/pdi/weblogs/parse directory of HDFS using the following commands.
{code}hadoop fs -mkdir /user/pdi/weblogs
hadoop fs -mkdir /user/pdi/weblogs/parse
hadoop fs -put weblogs_parse.txt /user/pdi/weblogs/parse/part-00000{code}
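After loading the file, you can confirm it landed where expected with {{hadoop fs -ls}}. If you have not already created the weblogs table, the sketch below shows one way an external Hive table could be defined over the loaded file; the column list is hypothetical, so use the actual schema from the "Create a Hive Table" instructions in the "Loading Data into Hive" guide.
{code}# Verify the upload
hadoop fs -ls /user/pdi/weblogs/parse

# Hypothetical sketch of an external table over the parsed weblogs;
# the real column list comes from the "Loading Data into Hive" guide
hive -e "CREATE EXTERNAL TABLE IF NOT EXISTS weblogs (
           client_ip STRING,
           request_date STRING,
           month STRING,
           uri STRING
         )
         ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
         LOCATION '/user/pdi/weblogs/parse';"{code}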

h1. Step-By-Step Instructions


h2. Setup

Start Hadoop if it is not already running.
Start Hive Server if it is not already running.
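The exact start commands depend on your Hadoop distribution and Hive version; the following is one common sketch for an Apache Hadoop 1.x-era setup, and the paths and service names are assumptions:
{code}# Start the HDFS and MapReduce daemons (Apache Hadoop 1.x layout; varies by distribution)
start-dfs.sh
start-mapred.sh

# Start the Hive server in the background (HiveServer2 installs use
# "hiveserver2" or "hive --service hiveserver2" instead)
hive --service hiveserver &{code}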

{include:Include Extracting Data from Hive to Load an RDBMS}
{scrollbar}