Push Data to Redshift

   Note: Data Warehouse Push is currently only available in beta for Postscript brands on the enterprise plan. To begin using Data Warehouse Push, please be sure to first contact your Customer Success Manager.

Data Warehouse Push allows Postscript to automatically send both historical and new data stored in-app, including clicks, order attribution, and more, to your designated data warehouse. 

In this article, we'll walk you through the four steps for configuring your Redshift destination.

Before Getting Started

Please consider the following prerequisites before configuring your Redshift destination:

  • If your Redshift security posture requires IP whitelisting, have our data syncing service's static IP available during the following steps. It will be required in Step 2.
  • By default, Redshift authentication uses role-based access. You will need the trust policy prepopulated with the data-syncing service's identifier to grant access. It should look similar to the following JSON object with a proper service account identifier:
  "Version": "2012-10-17",
  "Statement": [
      "Effect": "Allow",
      "Action": [
      "Principal": {
        "Federated": "accounts.google.com"
      "Condition": {
        "StringEquals": {
          "accounts.google.com:sub": "107218628929600359339"

Step 1: Create a Limited User in Redshift

1. Connect to Redshift using a SQL client of your choice.
  2. Execute the following query to create a user to write the data (replace <password> with a password of your choice).
CREATE USER <username> PASSWORD '<password>';

Creating a user without a password

Role-based auth does not require a password. You may instead create the user with CREATE USER <username> PASSWORD DISABLE;

3. Grant the user create and temporary privileges on the database (replace <database> and <username> with your values). create allows the service to create new schemas, and temporary allows the service to create temporary tables.

GRANT CREATE, TEMPORARY ON DATABASE <database> TO <username>;


The schema will be created during the first sync

The schema name supplied as part of Step 4 will be created during the first connection. It does not need to be created manually in the destination ahead of time.

If the schema already exists

By default, the service creates a new schema based on the destination configuration. If you prefer to create the schema yourself before connecting the destination, you must ensure that the writer user has the proper permissions on the schema, using GRANT ALL ON SCHEMA <schema> TO <username>;.

Once you've provided the GRANT ALL permission on the schema, you can safely remove the CREATE permission on the database (but you must retain the TEMPORARY permission on the database).

Step 2: Whitelist connection

1. In the Redshift console, click Clusters, and make a note of the cluster name.

2. Select the cluster you would like to connect.

3. In the General information pane, make note of the Endpoint details. You may need to use the copy icon to see the full endpoint and port number.

4. Select the Properties tab.

5. Scroll down to the Network and security settings section.

6. In the VPC security group field, select a security group to open it.

7. In the Security Groups window, click Inbound rules.

8. Click Edit inbound rules.

9. In the Edit the Inbound rules window, follow the steps below to create custom TCP rules for the static IP:

a. Select Custom TCP in the drop-down menu.

b. Enter your Redshift port number (likely 5439).

c. Enter the static IP from the prerequisites.

d. Click Add rule.
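The console clicks above can also be scripted. As an illustrative sketch (not part of the official setup), the snippet below builds the inbound-rule payload in the shape boto3's EC2 authorize_security_group_ingress call expects; the security group ID and static IP shown are placeholders:

```python
# Build the inbound rule the console steps above create, in the shape
# boto3's EC2 authorize_security_group_ingress call expects.
REDSHIFT_PORT = 5439         # default Redshift port; confirm yours in Step 2
STATIC_IP = "203.0.113.10"   # placeholder; use the static IP from the prerequisites

def inbound_rule(port: int, ip: str) -> dict:
    """Return a custom TCP ingress rule restricted to a single /32 address."""
    return {
        "IpProtocol": "tcp",
        "FromPort": port,
        "ToPort": port,
        "IpRanges": [{"CidrIp": f"{ip}/32", "Description": "Data warehouse push"}],
    }

rule = inbound_rule(REDSHIFT_PORT, STATIC_IP)
# With boto3, this would be applied roughly as:
#   boto3.client("ec2").authorize_security_group_ingress(
#       GroupId="sg-0123456789abcdef0", IpPermissions=[rule])
print(rule["IpRanges"][0]["CidrIp"])  # 203.0.113.10/32
```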

Step 3: Create a staging bucket

Create a staging bucket

1. Navigate to the S3 service page.

2. Click Create bucket.

3. Enter a Bucket name and modify any of the default settings as desired. Note: Object Ownership can be set to "ACLs disabled" and Block Public Access settings for this bucket can be set to "Block all public access" as recommended by AWS. Make note of the Bucket name and AWS Region.

4. Click Create bucket.

Create policy

1. Navigate to the IAM service page, click on the Policies navigation tab, and click Create policy.

2. Click the JSON tab and paste the following policy, replacing BUCKET_NAME with the name of the bucket created above, and REGION_NAME, ACCOUNT_ID, CLUSTER_NAME, USERNAME, and DATABASE_NAME with the proper Redshift values. Note: the first bucket permission in the list applies to the bucket itself (BUCKET_NAME), whereas the second applies only to the bucket's contents (BUCKET_NAME/*), an important distinction.

    "Version": "2012-10-17",
    "Statement": [
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::BUCKET_NAME"
            "Effect": "Allow",
            "Action": [
            "Resource": "arn:aws:s3:::BUCKET_NAME/*"
            "Effect": "Allow",
            "Action": "redshift:GetClusterCredentials",
            "Resource": [

3. Click through to the Review step, choose a name for the policy, for example, transfer-service-policy (this will be referenced in the next step), add a description, and click Create policy.

Create role

1. Navigate to the IAM service page.

2. Navigate to the Roles navigation tab, and click Create role.

3. Select Custom trust policy and paste the provided trust policy (from the prerequisite) to allow AssumeRole access to this role. Click Next.

4. Add the permissions policy created above, and click Next.

5. Enter a Role name, for example, transfer-role, and click Create role.

6. Once successfully created, search for the created role in the Roles list, click the role name, and make a note of the ARN value.
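The role ARN you record should follow the standard IAM format. A small illustrative check, using a made-up account ID and the example role name from this article:

```python
import re

# Example role ARN: 123456789012 is a made-up account ID, and transfer-role
# is the example role name used earlier in this article.
role_arn = "arn:aws:iam::123456789012:role/transfer-role"

def looks_like_role_arn(arn: str) -> bool:
    """Loosely validate the IAM role ARN format: arn:aws:iam::<account-id>:role/<name>."""
    return re.fullmatch(r"arn:aws:iam::\d{12}:role/[\w+=,.@-]+", arn) is not None

print(looks_like_role_arn(role_arn))  # True
```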

Alternative authentication method: AWS User with HMAC Access Key ID & Secret Access Key

Role-based authentication is the preferred authentication mode for Redshift, per AWS recommendations. However, an HMAC Access Key ID and Secret Access Key can be used as an alternative method if preferred.

  1. Navigate to the IAM service page.
  2. Navigate to the Users navigation tab, and click Add users.
  3. Enter a User name for the service, for example, transfer-service, and click Next. Under Select AWS access type, select the Access key - Programmatic access option. Click Next: Permissions.
  4. Click the Attach existing policies directly option, and search for the name of the policy created in the previous step. Select the policy, and click Next: Tags.
  5. Click Next: Review and click Create user.
  6. In the Success screen, record the Access key ID and the Secret access key.
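If you use the HMAC alternative, both halves of the key pair must be supplied together to the service, typically via environment variables. The sketch below uses AWS's documented example key values, not real credentials:

```python
import os

# Hypothetical values recorded from the IAM "Create user" Success screen.
# These are AWS's documented example values, not real credentials.
os.environ["AWS_ACCESS_KEY_ID"] = "AKIAIOSFODNN7EXAMPLE"
os.environ["AWS_SECRET_ACCESS_KEY"] = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"

def credentials_present() -> bool:
    """Both the Access key ID and the Secret access key must be available."""
    return bool(os.environ.get("AWS_ACCESS_KEY_ID")) and bool(
        os.environ.get("AWS_SECRET_ACCESS_KEY")
    )

print(credentials_present())  # True
```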

Step 4: Add your destination

1. Securely share your host, database, cluster, your chosen schema, IAM role ARN, and staging bucket details with us to complete the connection.

Get Support

Have questions? Please feel free to reach out to our wonderful Support team at support@postscript.io or via live chat. You can also submit a support request here!

Need ongoing channel strategy guidance? Please fill out this form and we'll connect you to one of our certified partners.
