

Amazon Redshift Connector for Data Prep

User Persona: Data Prep User, Data Prep Admin, IT/DevOps

Note

This document covers all configuration fields available during connector setup. Some fields may have already been filled out by your Administrator at an earlier step of configuration and may not be visible to you. For more information on Data Prep's connector framework, see Data Prep Connector setup. Also, your Admin may have named this connector something else in the list of Data Sources.

Configure Data Prep

This connector allows you to connect to Amazon Redshift as an import and export source. The following fields are used to define the connection parameters.

General

  • Name: Name of the data source as it will appear to users in the UI.
  • Description: Description of the data source as it will appear to users in the UI.

Tip

You can connect Data Prep to multiple Redshift data warehouses. Using a descriptive name can be a big help to users in identifying the appropriate data source.

Database URL

  • JDBC URL: the JDBC connection string; the database name can be included in the URL.
  • Example JDBC URL: jdbc:redshift://examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com:5439/dev?ssl=true
  • For more information on connection string options, see the AWS documentation. A minimal connection sketch follows below.
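
As an illustration only (not part of the connector setup), the following minimal Java sketch shows how a JDBC client could use a connection string of this form. It assumes the Amazon Redshift JDBC driver is on the classpath; the user name and password are placeholders.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class RedshiftConnectionCheck {
    public static void main(String[] args) throws Exception {
        // Same form as the example URL above; ssl=true keeps the connection encrypted.
        String url = "jdbc:redshift://examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com:5439/dev?ssl=true";

        Properties props = new Properties();
        props.setProperty("user", "exampleUser");        // placeholder credentials
        props.setProperty("password", "examplePassword");

        try (Connection conn = DriverManager.getConnection(url, props);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT current_database()")) {
            if (rs.next()) {
                System.out.println("Connected to database: " + rs.getString(1));
            }
        }
    }
}
```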

Visibility Settings

You can control the database, schemas, and tables that are shown to users when they browse a data source during import. For databases, schemas, and tables, you can choose to:

  • "Show only" which returns only the databases, schemas or tables that you specify here.
  • "Hide" which hides the databases, schemas, and tables that you specify here.
  • "Show all" which is the default setting to display everything in the data source.

When you select the "Show only" or "Hide" options, a field is provided for specifying the databases, schemas, or tables on which you want the option enforced.

Note

These settings are not enforced when users query against the data source; query results still return a complete list of matches. For example, if you choose to "hide" a specific database, users can still execute queries that pull data from tables within that database. However, that database will not be displayed to users when they browse the data source.

Import Configuration

  • Pre-import SQL: a SQL statement to execute before beginning import, after the table’s schema is determined.

  • Post-import SQL: a SQL statement to execute after import completes. (A sketch of the execution order of these statements follows below.)
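
For illustration, the sketch below shows the ordering these fields imply; it is not the connector's actual implementation. A hypothetical pre-import statement runs once the schema is determined and before rows are read, and the post-import statement runs after the read completes. The table name and SQL statements are placeholders.

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class ImportHookOrder {
    // conn is an open java.sql.Connection, such as the one in the earlier sketch.
    static void importWithHooks(Connection conn) throws Exception {
        try (Statement stmt = conn.createStatement()) {
            // Hypothetical pre-import SQL, e.g. tagging the session for workload management.
            stmt.execute("SET query_group TO 'data_prep_import'");

            // The import itself: read rows from a placeholder table.
            try (ResultSet rs = stmt.executeQuery("SELECT * FROM public.example_table")) {
                while (rs.next()) {
                    // ... hand each row to the importing process ...
                }
            }

            // Hypothetical post-import SQL, run after the import completes.
            stmt.execute("RESET query_group");
        }
    }
}
```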

Export Configuration

  • Pre-export SQL: a SQL statement to execute before beginning export, after the table is created if auto-create is enabled.
  • Post-export SQL: a SQL statement to execute after export completes.

Redshift Credentials

User authentication can be done with either an individual account or a shared account. Depending on your choice, the following fields are required:

  • Individual account:
    • User: the username for an individual account used to authenticate to the database.
    • Password: the password for an individual account used to authenticate to the database.
  • Shared Account:
    • User: the username for a shared account used to authenticate to the database.
    • Password: the password for a shared account used to authenticate to the database.
    • Role: if roles have been implemented for this database, users assigned this role can perform queries after they authenticate.

Amazon S3 Client Configuration

  • Export using S3?: Specifies whether the connector exports to Redshift by uploading data into Amazon S3 and then copying it into Redshift, or by inserting data directly into Redshift.
    • Export using S3: The connector uploads data into Amazon S3 and copies it into Redshift (see the COPY sketch below). This is the recommended approach for larger datasets because it allows for a more performant export. It uses the following fields:
      • Bucket name: The S3 bucket used for the export. An S3 bucket represents a collection of objects stored in Amazon S3.
      • Prefix: Limits results to only those keys that begin with the specified prefix.
      • Socket Timeout Seconds: The number of seconds to wait for a response from an established S3 connection. The default value is 5 minutes (300 seconds) and may need to be increased to handle the export of large files.
    • Export using SQL insert statements: The connector inserts data directly into Redshift. This option results in slower exports. If you plan to only perform imports from Redshift, you can select this option to avoid entering your S3 account details.
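
To illustrate the staging path that the "Export using S3" option describes (an assumed sketch, not Data Prep's actual code), the snippet below issues a Redshift COPY that loads a file already uploaded under the configured bucket and prefix. The bucket, prefix, file name, IAM role ARN, and target table are all hypothetical.

```java
import java.sql.Connection;
import java.sql.Statement;

public class S3ExportSketch {
    // conn is an open JDBC connection to Redshift. The exported file is assumed
    // to have already been uploaded to s3://example-bucket/dataprep/export.csv.gz.
    static void copyFromS3(Connection conn) throws Exception {
        String copySql =
            "COPY public.example_table "
          + "FROM 's3://example-bucket/dataprep/export.csv.gz' "                  // bucket name + prefix
          + "IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-role' "    // placeholder authorization
          + "FORMAT AS CSV GZIP";
        try (Statement stmt = conn.createStatement()) {
            stmt.execute(copySql);   // Redshift loads the staged file in bulk
        }
    }
}
```

This staging flow is also why export requires s3:PutObject on the bucket contents, as described in the note below.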

Note

The connector requires the s3:ListBucket permission on the bucket. Bucket contents require permissions s3:ListBucket, s3:GetObject, and (for export only) s3:PutObject. In addition, if there is a SourceIP condition block specified in your bucket policy, then you must include the IP addresses for your Data Prep server and any servers that are used to run automation jobs.

See Amazon S3 Connector Setup for more details.

Amazon S3 Authentication Settings

  • AWS Credentials: Provide the Access Key ID and Secret Key associated with the user's AWS access key.
  • Instance Profile (IAM Role): Credentials are obtained from the IAM role attached to the instance, so no additional fields are required (both modes are sketched below).

See AWS Security Credentials for more details.
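
As a rough illustration of the two modes, the sketch below uses the AWS SDK for Java v1 as an assumed example (not necessarily how Data Prep builds its client) to construct an S3 client either from an explicit access key pair or from the instance profile. The key values and region are placeholders.

```java
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.auth.InstanceProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class S3ClientAuthSketch {
    // "AWS Credentials" mode: explicit Access Key ID and Secret Key (placeholders).
    static AmazonS3 withAccessKeys() {
        BasicAWSCredentials keys = new BasicAWSCredentials("EXAMPLE_ACCESS_KEY_ID", "EXAMPLE_SECRET_KEY");
        return AmazonS3ClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(keys))
                .withRegion("us-west-2")
                .build();
    }

    // "Instance Profile (IAM Role)" mode: credentials come from the role attached
    // to the instance, so no key fields are required.
    static AmazonS3 withInstanceProfile() {
        return AmazonS3ClientBuilder.standard()
                .withCredentials(InstanceProfileCredentialsProvider.getInstance())
                .withRegion("us-west-2")
                .build();
    }
}
```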

Web Proxy

If you connect to Amazon Redshift through a proxy server, these fields define the proxy details.

  • Web Proxy: Select 'None' if no proxy is required, or 'Proxied' if the connection to the Amazon Redshift endpoint should be made through a proxy server. If a proxy is required, the following fields enable the proxied connection (see the sketch below):
    • Proxy host: The host name or IP address of the web proxy server.
    • Proxy port: The port used to connect to the proxy server.
    • Proxy username: The username for the proxy server.
    • Proxy password: The password for the proxy server. Leave the username and password blank for an unauthenticated proxy connection.
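
As an illustration of the kind of settings these fields capture, the sketch below again uses the AWS SDK for Java v1 as an assumed example (not Data Prep's internals) to configure an HTTP proxy on a client. The host, port, and credentials are placeholders, and the username and password setters are simply omitted for an unauthenticated proxy.

```java
import com.amazonaws.ClientConfiguration;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class ProxyConfigSketch {
    static AmazonS3 buildProxiedClient() {
        ClientConfiguration clientConfig = new ClientConfiguration();
        clientConfig.setProxyHost("proxy.example.com");  // Proxy host (placeholder)
        clientConfig.setProxyPort(8080);                 // Proxy port (placeholder)
        clientConfig.setProxyUsername("proxyUser");      // omit for an unauthenticated proxy
        clientConfig.setProxyPassword("proxyPassword");  // omit for an unauthenticated proxy

        return AmazonS3ClientBuilder.standard()
                .withClientConfiguration(clientConfig)
                .withRegion("us-west-2")
                .build();
    }
}
```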

Data Import & Export Information

Via Browsing

The browsing experience varies based on the database, schema, and table visibility settings selected above, along with the user credentials supplied.

Via SQL Query

As noted in the visibility section, what a user can import via query is limited only by their authorization, as determined by the credentials supplied for the connection.


Updated April 12, 2022