How to use a PDI transformation to extract data from Hive and load it into an RDBMS table. The new RDBMS table will contain the count of page views by IP address and month.
Note: For brevity's sake, this transformation will only contain two steps: a Table Input and a Table Output. In practice, the full expressiveness of PDI transformations is available. Further, PDI supports bulk loading into many RDBMSs, and that would be a viable, and common, alternative to the Table Output approach.
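The heart of this transformation is the query that the Table Input step issues against Hive. As a rough sketch, and assuming the weblogs table uses client_ip, year, and month_num columns (these names are assumptions; match them to your actual Hive schema), the aggregation can be tested from the Hive command line before wiring it into the step:

Code Block
# Hypothetical aggregation for the Table Input step; the table and column
# names are assumptions and should match your weblogs schema in Hive.
hive -e "
SELECT client_ip, year, month_num, COUNT(*) AS pageviews
FROM weblogs
GROUP BY client_ip, year, month_num
"

The Table Output step then simply maps these four fields onto the columns of the target RDBMS table.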

Prerequisites

In order to follow along with this how-to guide you will need the following:

  • Hadoop
  • Pentaho Data Integration
  • Hive

Sample Files

The source data for this guide will reside in a Hive table called weblogs. If you have previously completed the "Loading Data into Hive" guide, then you can skip to "Create a Database Connection to Hive". You do not have to load the following sample data.
The sample data file needed for the "Create a Hive Table" instructions is:

File Name: weblogs_parse.txt.zip
Content: Tab-delimited, parsed weblog data


NOTE: If you have previously completed the "Using Pentaho MapReduce to Parse Weblog Data" guide, then the necessary files will already be in the proper location.
Unzip weblogs_parse.txt.zip, then place the extracted weblogs_parse.txt file in the /user/pdi/weblogs/parse directory of HDFS using the following commands.

Code Block
hadoop fs -mkdir /user/pdi/weblogs
hadoop fs -mkdir /user/pdi/weblogs/parse
hadoop fs -put weblogs_parse.txt /user/pdi/weblogs/parse/part-00000
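
To confirm that the file landed where Hive and PDI expect it, a quick check along these lines can help (the paths and the weblogs table name follow the sample setup above):

Code Block
# List the parsed weblog file in HDFS
hadoop fs -ls /user/pdi/weblogs/parse

# Confirm the weblogs table exists and is queryable from Hive
hive -e "SHOW TABLES"
hive -e "SELECT COUNT(*) FROM weblogs"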

Step-By-Step Instructions

Setup

Start Hadoop if it is not already running.
Start Hive Server if it is not already running.
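
The exact startup commands depend on your Hadoop and Hive versions and distribution; as a rough sketch for a plain Apache install with the Hadoop and Hive bin directories on the PATH:

Code Block
# Start the HDFS and YARN daemons (script names vary by Hadoop version;
# older releases use start-all.sh instead)
start-dfs.sh
start-yarn.sh

# Start the Hive server so PDI can connect to it
# (older Hive releases use: hive --service hiveserver)
hive --service hiveserver2 &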
