Cloudera CDH6 HDFS Connector for Data Prep

User Persona: Data Prep Admin, Data Source Admin, or IT/DevOps

Availability information

This Connector is not available to Data Prep SaaS customers.

Note

This document covers all configuration fields available during connector setup. Some fields may have already been filled out by your Administrator at an earlier step of configuration and may not be visible to you. For more information on Data Prep's connector framework, see Data Prep Connector setup. Also, your Admin may have named this connector something else in the list of Data Sources.

Configuring Data Prep

This connector allows you to connect to an HDFS cluster for imports and exports. The fields you are required to set up here depend on the authentication method you select: Simple or Kerberos. The type of authentication you select applies to all data sources that you create based on a connector configuration.

Note

Configuring this Connector requires file system access on the Data Prep Server and a core-site.xml with the Hadoop cluster configuration. Please reach out to your Customer Success representative for assistance with this step.

General

  • Name: Name of the data source as it will appear to users in the UI.
  • Description: Description of the data source as it will appear to users in the UI.

Tip

You can connect Data Prep to multiple HDFS clusters. A descriptive name helps users identify the appropriate data source.

Hadoop Cluster

  • Authentication Method: Choose Simple or Kerberos. The type of authentication you select applies to all Data Sources that you create based on this connector configuration. See the Simple Configuration or Kerberos Configuration sections below for the fields specific to your selection.
  • Cluster Core Site XML Path: Fully qualified path of core-site.xml on the web server (see the sketch after this list). Example: /path/to/core-site.xml
  • Cluster HDFS Site XML Path: Fully qualified path of hdfs-site.xml on the web server. Example: /path/to/hdfs-site.xml
  • Native Hadoop Library Path: Fully qualified path of the native Hadoop libraries on the web server. Example: /path/to/libraries
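
For reference, core-site.xml is the standard Hadoop client configuration file. A minimal sketch, assuming a hypothetical NameNode address of namenode.example.com:8020 (substitute the values from your CDH6 cluster):

    <?xml version="1.0"?>
    <!-- Minimal Hadoop client configuration (sketch only).
         namenode.example.com:8020 is a hypothetical NameNode address. -->
    <configuration>
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://namenode.example.com:8020</value>
      </property>
    </configuration>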

Simple Configuration (only for Simple authentication)

  • Username: The application web server will connect to your HDFS cluster as the username you provide here.

Kerberos Configuration

The following parameters are required for Kerberos and Hybrid authentication.

  • Principal: Kerberos Principal.
  • Realm: Kerberos Realm.
  • KDC Hostname: Kerberos Key Distribution Center Hostname.
  • Kerberos Configuration File: Fully qualified path of the Kerberos configuration file on the web server; see the sketch after this list.
  • Keytab File: Fully qualified path of the Kerberos keytab file on the web server.
  • Use Application User: Check this box to read/write as the logged-in application user, or uncheck it to use a proxy user.
  • Proxy User: The proxy user used to authenticate with the cluster. ${user.name} can be entered as the proxy user; it works like selecting Use Application User but allows for more flexibility. For example:
    • To add a domain to the user's credentials, enter \domain_name\${user.name} in the Proxy User field. Data Prep will pass both the domain and the username.
      • Example: \Accounts\${user.name} results in Accounts\Joe (assuming Joe is the username).
    • To apply a text modifier to the username, add .modifier to the key ${user.name}. The acceptable modifiers are: toLower, toUpper, toLowerCase, toUpperCase, and trim.
      • For example, ${user.name.toLowerCase} converts Joe into joe (assuming Joe is the username).
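
The Kerberos Configuration File above is a standard krb5.conf. A minimal sketch, assuming a hypothetical realm EXAMPLE.COM with a KDC at kdc.example.com (these should match the Realm and KDC Hostname fields):

    # Sketch of a minimal krb5.conf; realm and KDC are hypothetical.
    [libdefaults]
        default_realm = EXAMPLE.COM

    [realms]
        EXAMPLE.COM = {
            kdc = kdc.example.com
        }

To sanity-check the keytab on the Data Prep Server, you can authenticate with standard MIT Kerberos tooling: kinit -kt /path/to/file.keytab principal@EXAMPLE.COM.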

Configuration

  • Data Store Root Directory: The 'parent directory' on your cluster where the Data Library will read from and write to for import and export operations. Import and export are also supported for sub-directories of the root.
  • Map INT96 to Datetime: Check this box to convert INT96 fields (Parquet's legacy timestamp type) to Datetime values on import.

Credentials

  • Hive User: The username used to access Hive for Simple and Hybrid authentication.
  • Hive Password: The password used to access Hive for Simple and Hybrid authentication.

Hive Options

  • Pre-Import SQL: SQL to be executed before the import process. This SQL may execute multiple times (for preview and import) and may contain multiple newline-delimited SQL statements; see the sketch at the end of this section.
  • Post-Import SQL: SQL to be executed after the import process. This SQL may execute multiple times (for preview and import) and may contain multiple newline-delimited SQL statements.

Note

Because the Pre- and Post-Import SQL may be executed multiple times throughout the import process, take care when specifying these values in the connector/data source configuration: they will run for every import performed with this configuration.

  • Pre-Export SQL: SQL to be executed before the export process. This SQL executes once and may contain multiple newline-delimited SQL statements.
  • Post-Export SQL: SQL to be executed after the export process. This SQL executes once and may contain multiple newline-delimited SQL statements.
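
As an illustration, a Pre-Import SQL value might set session properties and refresh partition metadata before each import. A sketch in HiveQL, assuming a hypothetical partitioned table named sales_staging:

    -- Hypothetical Pre-Import SQL: newline-delimited Hive statements.
    SET hive.exec.compress.output=true;
    MSCK REPAIR TABLE sales_staging;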

Data Import Information

Via Browsing

  • Browse to a file and select it for import.
  • Supported data formats:
    • Delimited datasets (comma, tab, and other delimiters)
    • XML
    • JSON
    • Excel (XLS and XLSX)
    • Avro
    • Parquet
    • Fixed format
  • Wildcard: globbing is supported.
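
For example, a wildcard path can use standard glob characters such as * and ? to import many files at once; the paths below are hypothetical:

    /data/sales/2021-*.csv
    /data/sales/region_??.avro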

Via SQL Query

Imports are supported using SQL SELECT queries.
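
For example, an import query against Hive might look like the following (database, table, and columns are hypothetical):

    SELECT customer_id, order_date, total
    FROM sales.orders
    WHERE order_date >= '2021-01-01';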

Export

Supported using one of the stream-based formats listed under Via Browsing.


Updated October 28, 2021