How to Deploy Amazon Redshift RG Instances with Graviton for Faster and Cheaper Analytics

Introduction

Amazon Redshift has consistently evolved to make data warehousing faster and more cost-effective. The latest innovation, Redshift RG instances, leverages AWS Graviton processors to deliver up to 2.2x faster performance than previous RA3 instances at 30% lower price per vCPU. These instances also include an integrated data lake query engine that lets you seamlessly run SQL analytics across your data warehouse and Amazon S3 data lake from a single engine—achieving up to 2.4x faster queries for Apache Iceberg and 1.5x faster for Apache Parquet. This guide walks you through everything you need to get started with RG instances, whether you're launching a new cluster or migrating an existing one.

[Image. Source: aws.amazon.com]

What You Need

  - An AWS account with permissions to create and manage Amazon Redshift clusters
  - An IAM role that grants Redshift access to AWS Glue and Amazon S3, for data lake queries
  - (Optional) An existing RA3 cluster you plan to migrate, plus permissions to take and restore snapshots

Step-by-Step Guide

Step 1: Evaluate Your Workload and Compare Instance Types

Before deploying, assess your current workload requirements. Use the table below to match your needs with the appropriate RG instance:

Current RA3 Instance | Recommended RG Instance | vCPU               | Memory (GB)          | Primary Use Case
ra3.xlplus           | rg.xlarge               | 4                  | 32                   | Small-cluster departmental analytics
ra3.4xlarge          | rg.4xlarge              | 12 → 16 (1.33:1)   | 96 → 128 (1.33:1)    | Standard production workloads, medium data volumes

Note the 30% lower price per vCPU for RG instances. For other sizes, consult the Redshift documentation.

Step 2: Launch a New RG Cluster or Migrate an Existing One

You can create a new cluster using the AWS Management Console, AWS CLI, or AWS API. Here’s how via the console:

  1. Navigate to the Amazon Redshift console.
  2. Click Create cluster.
  3. Under Node type, select an RG instance (e.g., rg.xlarge).
  4. Configure other settings (cluster identifier, database name, master user password, etc.).
  5. Click Create cluster to launch.
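
If you prefer scripting, the same cluster can be created with the AWS CLI. The identifier, username, and password below are placeholder values, not defaults:

```shell
# Create a 2-node rg.xlarge cluster via the AWS CLI.
# Cluster identifier, username, and password are placeholders -- substitute your own.
aws redshift create-cluster \
  --cluster-identifier my-rg-cluster \
  --node-type rg.xlarge \
  --number-of-nodes 2 \
  --master-username awsadmin \
  --master-user-password 'ChangeMe123!' \
  --db-name dev
```

You can then poll `aws redshift describe-clusters --cluster-identifier my-rg-cluster` until the cluster status reports `available`.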

To migrate an existing RA3 cluster, take a snapshot of the cluster, restore it as a new cluster with RG instances, then redirect applications to the new endpoint. This minimizes downtime.
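
The snapshot-and-restore migration can also be scripted. This is a sketch with placeholder names, assuming `restore-from-cluster-snapshot` accepts the RG node type the same way it does for RA3 restores:

```shell
# 1. Snapshot the existing RA3 cluster (names are placeholders).
aws redshift create-cluster-snapshot \
  --cluster-identifier my-ra3-cluster \
  --snapshot-identifier ra3-to-rg-migration

# 2. Restore the snapshot into a new cluster running on RG nodes.
aws redshift restore-from-cluster-snapshot \
  --cluster-identifier my-rg-cluster \
  --snapshot-identifier ra3-to-rg-migration \
  --node-type rg.xlarge
```

Keep the RA3 cluster running until you have validated the new endpoint, then repoint your applications.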

Step 3: Enable the Integrated Data Lake Query Engine

This engine is enabled by default on all RG instances. You don’t need to perform any extra configuration. It allows you to run SQL queries that span both warehouse tables and external data in Amazon S3 using a single engine—no separate query editor required.

[Image. Source: aws.amazon.com]

After running a data lake query, you can confirm engine activity with SELECT * FROM svl_s3query_summary; in the Redshift query editor; this system view records details of queries executed against S3, so it is only populated once such a query has run.

Step 4: Configure Access to Your Amazon S3 Data Lake

To query data in S3, you need to create an external schema that maps to your S3 buckets using AWS Glue or Athena metadata. Example:

CREATE EXTERNAL SCHEMA my_s3_schema
FROM DATA CATALOG
DATABASE 'my_database'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftSpectrumRole';

Then you can query Iceberg or Parquet tables directly: SELECT * FROM my_s3_schema.my_table;
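
Because warehouse tables and data lake tables share one engine, a single query can join across both. The table and column names below are hypothetical illustrations:

```sql
-- Join a local warehouse table (sales) with an external Iceberg/Parquet
-- table in S3 (my_s3_schema.clickstream). Both names are hypothetical.
SELECT s.order_id,
       s.amount,
       c.page_path
FROM sales AS s
JOIN my_s3_schema.clickstream AS c
  ON s.session_id = c.session_id
WHERE c.event_date >= '2025-01-01';
```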

Step 5: Run Queries and Monitor Performance

Execute your typical BI, ETL, or ad-hoc queries. Use the Redshift console’s Query Monitoring and Performance dashboards to compare execution times against your previous instance type. You should see improvements of up to 2.2x for warehouse workloads and up to 2.4x for Iceberg queries.
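
Beyond the console dashboards, you can pull recent query runtimes directly from Redshift's system tables; this sketch uses the standard stl_query view:

```sql
-- Recent queries with elapsed time in seconds, slowest first.
SELECT query,
       TRIM(querytxt) AS sql_text,
       DATEDIFF(seconds, starttime, endtime) AS elapsed_s
FROM stl_query
WHERE starttime > DATEADD(hour, -24, GETDATE())
ORDER BY elapsed_s DESC
LIMIT 20;
```

Running the same report before and after migration gives you a like-for-like comparison of the two instance types.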

Step 6: Optimize Costs with the AWS Pricing Calculator

Use the AWS Pricing Calculator to estimate savings based on your specific usage. Factor in the lower vCPU pricing and any reductions in data transfer or storage costs from the integrated lake engine.
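
As a back-of-the-envelope check before opening the calculator, the per-vCPU discount and speedup figures quoted above can be combined. The price of 1.0 per vCPU-hour below is a normalized placeholder, not a real AWS rate:

```python
# Hypothetical comparison using normalized prices -- check the AWS Pricing
# Calculator for actual rates. vCPU counts come from the table in Step 1.
ra3_vcpus, rg_vcpus = 12, 16                     # ra3.4xlarge vs rg.4xlarge
price_per_vcpu_ra3 = 1.0                         # normalized units per vCPU-hour
price_per_vcpu_rg = 0.7 * price_per_vcpu_ra3     # 30% lower price per vCPU

node_cost_ra3 = ra3_vcpus * price_per_vcpu_ra3
node_cost_rg = rg_vcpus * price_per_vcpu_rg

speedup = 2.2                                    # up to 2.2x faster warehouse queries
cost_per_unit_work_rg = (node_cost_rg / node_cost_ra3) / speedup

print(f"RG node cost vs RA3: {node_cost_rg / node_cost_ra3:.0%}")
print(f"RG cost per unit of work at 2.2x speedup: {cost_per_unit_work_rg:.0%}")
```

Under these placeholder numbers the bigger RG node costs slightly less per hour than the RA3 node it replaces, and the claimed speedup is what drives the cost-per-query savings.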

Tips for Success

  - Migrate via snapshot-and-restore rather than rebuilding from scratch, and keep the RA3 cluster until you have validated the new endpoint.
  - Benchmark the same representative queries on both instance types before decommissioning the old cluster.
  - Keep your AWS Glue Data Catalog and the IAM role's S3 permissions current so data lake queries do not fail at runtime.
