
PLEASE NOTE: This documentation applies to Pentaho 7.0 and earlier. For Pentaho 7.1 and later, see Job (Job Entry) on the Pentaho Enterprise Edition documentation site.

Description

Use the Job job entry to execute a previously defined job. For convenience, you can also create a new job from within the dialog by pressing the New Job button.

Executing one job from another lets you perform "functional decomposition": breaking large jobs into smaller, more manageable units. For example, rather than writing a data warehouse load as a single job containing 500 entries, it is better to create smaller jobs and aggregate them.

Warning: Although it is possible to create a recursive, never-ending job that points to itself, be aware that such a job will eventually fail with an out-of-memory or stack error.
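
What follows is a minimal sketch of what happens when a parent job is launched, expressed with the Kettle (PDI) Java API. It assumes the PDI libraries are on the classpath; the path /etl/parent_load.kjb is a hypothetical example.

    import org.pentaho.di.core.KettleEnvironment;
    import org.pentaho.di.core.Result;
    import org.pentaho.di.job.Job;
    import org.pentaho.di.job.JobMeta;

    public class RunParentJob {
        public static void main(String[] args) throws Exception {
            KettleEnvironment.init();                         // initialize the Kettle environment
            // Load the parent job definition from its XML (.kjb) file; no repository here.
            JobMeta jobMeta = new JobMeta("/etl/parent_load.kjb", null);
            Job job = new Job(null, jobMeta);                 // null = not attached to a repository
            job.start();                                      // Job extends Thread
            job.waitUntilFinished();
            Result result = job.getResult();
            System.out.println("Errors: " + result.getNrErrors());
        }
    }

Each Job entry inside the parent job launches its sub-job in the same manner, which is why a job that recursively points to itself eventually exhausts memory or the stack.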

Options

Job Specification tab

Name of the Job Entry: The unique name of the job entry on the canvas. A job entry can be placed on the canvas several times; however, every copy refers to the same job entry.

Job Filename: If you are not working in a repository, specify the XML file name (.kjb) of the job to start. Click to browse through your local files.

Specify by Name and Directory: If you are working in the DI Repository or a database repository, specify the name of the job to start. Click the button to browse through the DI Repository.

Specify by Reference: If you specify a job by reference, you can rename or move it around in the DI Repository. The reference (identifier) is stored, not the name and directory.
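
The two specification styles map to different load paths in the Kettle Java API. The sketch below assumes an already-connected Repository instance; the job name load_dims and the directory /etl are hypothetical.

    import org.pentaho.di.job.JobMeta;
    import org.pentaho.di.repository.Repository;
    import org.pentaho.di.repository.RepositoryDirectoryInterface;

    public class JobSpecificationSketch {
        // By filename: read the job definition straight from a .kjb XML file.
        static JobMeta byFilename() throws Exception {
            return new JobMeta("/etl/load_dims.kjb", null);
        }

        // By name and directory: look the job up in a connected repository.
        static JobMeta byNameAndDirectory(Repository repository) throws Exception {
            RepositoryDirectoryInterface dir =
                repository.loadRepositoryDirectoryTree().findDirectory("/etl");
            return repository.loadJob("load_dims", dir, null, null); // monitor and revision omitted
        }
    }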

Advanced tab

Copy previous results to args?: The results from a previous transformation can be copied as arguments of the job using the "Copy rows to result" step. If Execute for every input row is enabled, each row is a set of command line arguments to be passed into the job; otherwise only the first row is used to generate the command line arguments.

Copy previous results to parameters?: The results from a previous transformation can be copied as parameters of the job using the "Copy rows to result" step. If Execute for every input row is enabled, each row is a set of parameter values to be passed into the job; otherwise only the first row is used to set the parameters.

Execute for every input row?: Implements looping; if the previous job entry returns a set of result rows, the job executes once for every row found. One row is passed to the job at each execution. For example, you can execute a job for each file found in a directory.

Remote slave server: The slave server on which to execute the job.

Pass job export to slave: Pass the complete job (including referenced sub-jobs and sub-transformations) to the remote server.

Wait for the remote job to finish?: Enable to block until the job on the slave server has finished executing.

Follow local abort to remote job: Enable to send the abort signal to the remote job if the job is aborted locally.

Expand child jobs and transformations on the server: When the remote job starts child jobs and transformations, they are exposed on the slave server and can be monitored.
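
The looping behavior of Execute for every input row can be pictured as follows. This is a conceptual sketch only, not the actual Kettle implementation; runJobOnce is a hypothetical helper standing in for what the Job entry does when it launches the sub-job.

    import java.util.List;
    import org.pentaho.di.core.Result;
    import org.pentaho.di.core.RowMetaAndData;

    public class ExecutePerRowSketch {
        static void executeForEveryInputRow(Result previousResult) throws Exception {
            // Rows collected upstream by the "Copy rows to result" step.
            List<RowMetaAndData> rows = previousResult.getRows();
            for (RowMetaAndData row : rows) {
                // One execution of the sub-job per result row; with "Copy previous
                // results to args?" enabled, the row's fields become its arguments.
                runJobOnce(row);
            }
        }

        static void runJobOnce(RowMetaAndData row) {
            /* hypothetical: launch the sub-job for this single row */
        }
    }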

Logging Settings tab

By default, if you do not set up logging, Pentaho Data Integration takes the log entries being generated and creates a log record inside the job. For example, suppose a job has three transformations to run and you have not set up logging. The transformations will not output logging information to other files or locations, or use any special configuration; the job simply executes and puts its logging information into the master job log.
In most instances it is acceptable for logging information to be available in the job log. For example, if a job loads dimensions, you will want the logs for those dimension-load runs to appear in the job log, and any errors in the transformations will be displayed there as well. If, however, you want all your log information kept in one place, you must set up logging.

Specify logfile?: Enable to specify a separate logging file for the execution of this job.

Append logfile?: Enable to append to the logfile as opposed to creating a new one.

Name of logfile: The directory and base name of the log file; for example, C:\logs.

Create parent folder: Create the parent folder for the log file if it does not exist.

Extension of logfile: The file name extension; for example, log or txt.

Include date in logfile?: Adds the system date to the filename with format YYYYMMDD (_20051231).

Include time in logfile?: Adds the system time to the filename with format HHMMSS (_235959).

Loglevel: Specifies the logging level for the execution of the job. See also the logging window in Logging.
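
The date and time options simply compose the log file name from the base name and extension. A small self-contained sketch, with a hypothetical base name and extension:

    import java.text.SimpleDateFormat;
    import java.util.Date;

    public class LogfileNameSketch {
        public static void main(String[] args) {
            String base = "C:\\logs\\load";   // "Name of logfile"
            String ext  = "log";              // "Extension of logfile"
            Date now = new Date();
            // "Include date in logfile?" appends _YYYYMMDD;
            // "Include time in logfile?" appends _HHMMSS.
            String name = base
                + "_" + new SimpleDateFormat("yyyyMMdd").format(now)
                + "_" + new SimpleDateFormat("HHmmss").format(now)
                + "." + ext;
            System.out.println(name);         // e.g. C:\logs\load_20051231_235959.log
        }
    }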

Argument tab

Arguments: Specify which command-line arguments will be passed to the job.

Parameters tab

Specify which parameters will be passed to the job:

Pass all parameter values down to the sub-job: Enable this option to pass all parameters of the job down to the sub-job.

Parameters: Specify the parameter name that will be passed to the job.

Stream column name: Allows you to capture fields of incoming records of a result set as a parameter.

Value: Allows you to specify the values for the job's parameters (see the sketch after this list). You can do this by:

  • Manually typing a value (Ex: ETL Job)
  • Using a parameter to set the value (Ex: ${Internal.Job.Name})
  • Using a combination of manually specified values and parameter values (Ex: ${FILE_PREFIX}_${FILE_DATE}.txt)
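
A minimal sketch of supplying parameter values through the Kettle Java API before launching a job. It assumes the job already defines a parameter named FILE_PREFIX; the parameter name and the .kjb path are hypothetical.

    import org.pentaho.di.core.KettleEnvironment;
    import org.pentaho.di.job.Job;
    import org.pentaho.di.job.JobMeta;

    public class SetJobParameters {
        public static void main(String[] args) throws Exception {
            KettleEnvironment.init();
            JobMeta jobMeta = new JobMeta("/etl/load_files.kjb", null);
            Job job = new Job(null, jobMeta);
            // NamedParams API: assign a value to a parameter the job defines.
            job.setParameterValue("FILE_PREFIX", "sales");
            job.activateParameters();   // make the assigned values take effect
            job.start();
            job.waitUntilFinished();
        }
    }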