This guide explains how to read data from a Hive table, transform it, and write the results back to a Hive table within the workflow of a PDI job.
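As an orientation, the transformation the job performs is ordinary HiveQL. The statement below is only a sketch of that kind of aggregation, run through the Hive command line; the weblogs_agg table name and the pageviews column are illustrative assumptions, not names this guide prescribes.

# Illustrative sketch: aggregate monthly pageviews per client from the
# weblogs source table into a hypothetical weblogs_agg table.
hive -e "CREATE TABLE weblogs_agg AS
         SELECT client_ip, year, month_num, COUNT(*) AS pageviews
         FROM weblogs
         GROUP BY client_ip, year, month_num;"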

Prerequisites

In order to follow along with this how-to guide, you will need the following:

  • MapR
  • Pentaho Data Integration
  • Hive

Sample Files

The source data for this guide will reside in a Hive table called weblogs. If you have previously completed the Loading Data into MapR Hive guide, then you can skip to Create a Database Connection to Hive. If not, you will need the following data file and must complete the Create a Hive Table instructions before proceeding.
The sample data file needed for the Create a Hive Table instructions is:

File Name              Content
weblogs_parse.txt.zip  Tab-delimited, parsed weblog data


NOTE: If you have previously completed the Using Pentaho MapReduce to Parse Weblog Data in MapR guide, then the necessary files will already be in the proper location.
This file should be placed in the /weblogs/parse directory of the MapR file system using the following commands.

hadoop fs -mkdir /weblogs
hadoop fs -mkdir /weblogs/parse
hadoop fs -put weblogs_parse.txt /weblogs/parse/part-00000
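Note that the download is a zip archive, so extract it locally before running the put command above. Once the upload finishes, a quick listing confirms the file is where Hive expects it; both commands below are standard and shown only as a convenience.

unzip weblogs_parse.txt.zip        # run before the hadoop fs -put above
hadoop fs -ls /weblogs/parse       # should list part-00000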

Step-By-Step Instructions

Setup

  • Start MapR if it is not already running.
  • Start the Hive Server if it is not already running.
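The exact start-up commands depend on how MapR and Hive were installed. The forms below are common defaults and are offered as a sketch, not the only correct invocation.

# Start MapR services via the warden, then start the Hive server.
sudo service mapr-warden start
hive --service hiveserver &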

The detailed steps for this section are included from the Transforming Data within Hive in MapR guide.