PLEASE NOTE: This documentation applies to Pentaho 6.1 and earlier. For Pentaho 7.0 and later, see Spark Submit on the Pentaho Enterprise Edition documentation site.
...
Before you use this entry, you will need to install and configure a Spark client on any node from which you will run Spark jobs.
Installation Prerequisites
...
Option | Description |
---|---|
Entry Name | Name of the entry. You can customize this, or leave it as the default. |
Spark-Submit Utility | Script that launches the Spark job. |
Spark Master URL | The master URL for the cluster. Two options are supported: yarn-cluster, which runs the driver program as a thread of the YARN application master, and yarn-client, which runs the driver program on the client machine while the tasks still execute in the node managers of the YARN cluster. |
Jar | Path to a bundled jar including your application and all dependencies. The URL must be globally visible inside your cluster, for instance, an hdfs:// path or a file:// path that is present on all nodes. |
Class Name | The fully qualified name of the class containing your application's entry point (its main method). See the example following this table. |
Arguments | Arguments passed to the main method of your main class, if any. |
Executor Memory | Amount of memory to use per executor process. Use the JVM format (e.g., 512m or 2g). |
Driver Memory | Amount of memory to use for the driver process. Use the JVM format (e.g., 512m or 2g). |
Block Execution | This option is enabled by default. When it is selected, the job entry waits until the Spark job finishes running. When it is cleared, the job continues with the next entry as soon as the Spark job has been submitted for execution. |
Help | Displays documentation on this entry. |
OK | Saves the information and closes the window. |
Cancel | Closes the window without saving changes. |
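To illustrate how the Jar, Class Name, and Arguments options fit together, here is a minimal sketch of a Spark application, written against the Spark 1.x RDD API that was current for Pentaho 6.1. The object name WordCount and the use of input and output paths as arguments are illustrative assumptions, not part of this entry's documentation.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical application: counts the words in a text file.
// The master URL is not set here; the Spark Submit entry supplies it.
object WordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("WordCount")
    val sc = new SparkContext(conf)

    // args(0): input path, args(1): output path. Both would be passed via
    // the Arguments option and must be visible to all nodes (e.g., hdfs:// paths).
    sc.textFile(args(0))
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)
      .saveAsTextFile(args(1))

    sc.stop()
  }
}
```

Once a class like this is bundled into a jar that every node can reach, you would set Class Name to WordCount (or its fully qualified name if it is declared inside a package), Jar to the bundle's location, and Arguments to the input and output paths.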
...