!http://www.mapr.com/images/spotlight/mapr.png|align=right! This section contains how-tos that will get you started with Pentaho if you are using the MapR distribution of Hadoop.


h1. Overview

This wiki contains a series of how-tos that demonstrate the integration between Pentaho and MapR using a sample weblog dataset. The how-tos are organized by function, with each set explaining various techniques for loading, transforming, extracting, and reporting on data within a MapR cluster. You are encouraged to perform the how-tos in order, since the output of one is often used as the input of another. However, if you would like to jump to a how-to in the middle of the flow, instructions for preparing its input data are provided.

h1. Pre-Requisites

To perform all of the how-tos in this guide, you will need the following components. Since not every how-to uses every component (e.g. HBase, Hive, Report Designer), specific component requirements are identified within each how-to. This section enumerates all of the components along with some configuration and installation tips.

h2. MapR

A single-node local cluster is sufficient for these exercises but a larger and/or remote configuration will also work. You will need to know the addresses and ports for MapR.

These guides were developed using the MapR M3 distribution version 1.2. You can find MapR downloads here: [http://mapr.com/download|http://mapr.com/download]

h2. Pentaho Data Integration

PDI will be the primary development environment for the how-tos. You will need version \[TODO\]. You can download the software here: \[TODO\]

h2. Pentaho Hadoop Distribution

A Hadoop node distribution of the Pentaho Data Integration (PDI) tool.  Pentaho Hadoop Distribution (referred to as PHD from this point on) allows you to execute Pentaho MapReduce jobs on the MapR cluster.

You can find instructions to download and install the software here: \[TODO\]

h2. Pentaho Report Designer

A desktop installation of the Pentaho Report Designer tool with the PDI jars in its lib directory.

You must copy all jars from PDI's libext directory and its subfolders, with the exception of the JDBC folder, into Report Designer's lib directory.
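
The copy step above can be scripted. The following is a minimal sketch, assuming the default PDI and Report Designer directory layout; the paths passed in are placeholders that you would replace with your actual install locations:

```python
import os
import shutil

def copy_pdi_jars(libext_dir, report_designer_lib, exclude_dir="JDBC"):
    """Copy every .jar under PDI's libext directory and its subfolders
    into Report Designer's lib directory, skipping the JDBC folder."""
    copied = []
    for root, dirs, files in os.walk(libext_dir):
        # Prune the excluded JDBC subtree so it is never visited.
        dirs[:] = [d for d in dirs if d != exclude_dir]
        for name in files:
            if name.endswith(".jar"):
                shutil.copy2(os.path.join(root, name), report_designer_lib)
                copied.append(name)
    return copied
```

For example, `copy_pdi_jars("/opt/pdi/libext", "/opt/report-designer/lib")` would copy the jars while leaving the JDBC drivers behind.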

h2. Hive

A MapR-supported version of Hive. Hive is a MapReduce abstraction layer that provides SQL-like access to data stored in MapR.

You can find instructions to install Hive for MapR here: [http://mapr.com/doc/display/MapR/Hive|http://mapr.com/doc/display/MapR/Hive]

h2. HBase

A MapR-supported version of HBase. HBase is a NoSQL database that leverages MapR's CLDB storage.

You can find instructions to install HBase for MapR here: [http://mapr.com/doc/display/MapR/HBase|http://mapr.com/doc/display/MapR/HBase]

h2. Sample Data

The how-tos in this guide were built with sample weblog data. The following files are used and/or generated by the how-tos in this guide. Each how-to explains which file(s) it requires.
| *File Name* | *Content* |
| *[weblogs_rebuild.txt|https://pentaho.box.com/shared/static/nm20m6jpvddk9d0gvxl8.zip]* | Unparsed, raw weblog data |
| *weblogs_parse.txt* | Tab-delimited, parsed weblog data |
| *weblogs_hive.txt* | Tab-delimited, aggregated weblog data for a Hive weblogs_agg table |
| *weblogs_aggregate.txt* | Tab-delimited, aggregated weblog data |
| *weblogs_hbase.txt* | Prepared data for HBase load |
<<< Probably need to add links to download these files >>>
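
Several of the files above are tab-delimited. As an illustration of that format, here is a minimal sketch that parses such rows; the column layout shown (client IP, date, page, status) is an assumption for demonstration, not the actual layout of the files above:

```python
import csv
import io

# Hypothetical tab-delimited weblog rows; the real files' columns may differ.
sample = (
    "192.168.0.1\t2012-01-01\t/index.html\t200\n"
    "10.0.0.2\t2012-01-02\t/about.html\t404\n"
)

def read_weblog_rows(text):
    """Parse tab-delimited weblog text into a list of field tuples."""
    reader = csv.reader(io.StringIO(text), delimiter="\t")
    return [tuple(row) for row in reader]

rows = read_weblog_rows(sample)
```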

h1. Loading Data into a MapR Cluster

The how-tos in this section will demonstrate how to load data into CLDB (MapR's distributed file system), Hive and HBase.

Loading Data into CLDB - <<<link to content>>>

Loading Data into Hive - <<<link to content>>>

Loading Data into HBase - <<<link to content>>>

h1. Transforming Data within a MapR Cluster

The how-tos in this section will demonstrate how to leverage the massively parallel, fault tolerant MapR processing engine to transform resident cluster data.

Using Pentaho MapReduce to Parse Weblog Data - <<<link to content>>>

Using Pentaho MapReduce to Generate an Aggregate Dataset - <<<link to content>>>

Transforming Data with Pig - <<<link to content>>>

Transforming Data within Hive - <<<link to content>>>
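
As a rough illustration of the map/reduce pattern these how-tos apply, the sketch below aggregates page hits from parsed weblog rows. It is plain Python, not the Pentaho MapReduce implementation, and the row layout is assumed for demonstration:

```python
from collections import defaultdict

# Hypothetical parsed weblog rows: (client_ip, date, page, status).
rows = [
    ("192.168.0.1", "2012-01-01", "/index.html", "200"),
    ("10.0.0.2", "2012-01-01", "/index.html", "200"),
    ("10.0.0.3", "2012-01-02", "/about.html", "404"),
]

def map_phase(records):
    # Mapper: emit a (page, 1) pair for each request.
    for client_ip, date, page, status in records:
        yield page, 1

def reduce_phase(pairs):
    # Reducer: sum the counts for each page key.
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

hits_per_page = reduce_phase(map_phase(rows))
```

In the actual how-tos, the mapper and reducer are PDI transformations executed on the cluster rather than local Python functions.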

h1. Extracting Data from the MapR Cluster

The how-tos in this section will demonstrate how to extract data from the MapR cluster and load it into an RDBMS table.

Extracting data from CLDB to load an RDBMS - <<<link to content>>>

Extracting data from Hive to load an RDBMS - <<<link to content>>>

Extracting data from HBase to load an RDBMS - <<<link to content>>>
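
Conceptually, each extract how-to ends by inserting rows into an RDBMS table. The sketch below shows that final load step using Python's built-in sqlite3 as a stand-in for your actual RDBMS; the table name and columns are assumptions, and in the how-tos this step is performed by a PDI table-output step rather than hand-written code:

```python
import sqlite3

# Hypothetical aggregated rows extracted from the cluster.
rows = [
    ("192.168.0.1", "/index.html", 3),
    ("10.0.0.2", "/about.html", 1),
]

# sqlite3 stands in for the target RDBMS here.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE weblogs_agg (client_ip TEXT, page TEXT, hits INTEGER)"
)
conn.executemany("INSERT INTO weblogs_agg VALUES (?, ?, ?)", rows)
total_hits = conn.execute("SELECT SUM(hits) FROM weblogs_agg").fetchone()[0]
```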

h1. Reporting on Data in the MapR Cluster

The how-tos in this section will demonstrate how to report on data that is resident within the MapR cluster.

Reporting on CLDB file data - <<<link to content>>>

Reporting on Hive data - <<<link to content>>>

Reporting on HBase data - <<<link to content>>>
{scrollbar}