Sometimes we need to retain database backups for a longer period in RDS to meet security and DR compliance requirements. However, a longer backup retention period in RDS also puts more burden on the pocket, because RDS charges more for snapshot storage than standard S3 storage costs. Instead, we can export our backup snapshots from RDS to an S3 bucket in Apache Parquet format, which saves on storage charges, and we can still use that snapshot data for future reports or compliance checks using Athena and Glue.
- We can export all types of RDS backups (automated and manual snapshots) to S3
- Steps to export to S3:
- Create an S3 bucket with the proper IAM permissions
- Create a KMS key for server-side encryption (SSE)
- The export can be performed from the CLI or the console
- There is no impact on database performance, as the export runs in the background
- Data is exported to S3 in Apache Parquet format, which is up to 2x faster to export and consumes as little as 1/6 of the storage compared to plain text formats
- We can use the exported data to create reports using Athena and EMR
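If we script the export instead of using the console, the request the CLI/SDK sends can be sketched as below. This is a minimal sketch: the ARNs shown are placeholders, and the actual call would be made through boto3's RDS client.

```python
# Sketch of the RDS snapshot-export request; the real call would be
# boto3.client("rds").start_export_task(**params). All ARNs below are
# hypothetical placeholders, not values from this walkthrough.
def build_export_task(snapshot_arn, bucket, iam_role_arn, kms_key_arn,
                      task_id="beta-export"):
    """Assemble the parameters for an RDS snapshot export to S3 (Parquet)."""
    return {
        "ExportTaskIdentifier": task_id,   # unique name for this export
        "SourceArn": snapshot_arn,         # ARN of the snapshot to export
        "S3BucketName": bucket,            # destination S3 bucket
        "IamRoleArn": iam_role_arn,        # role RDS assumes to write to S3
        "KmsKeyId": kms_key_arn,           # KMS key for server-side encryption
    }

params = build_export_task(
    "arn:aws:rds:us-east-1:123456789012:snapshot:beta-snap",
    "test-beta",
    "arn:aws:iam::123456789012:role/rds-s3-export-role",
    "arn:aws:kms:us-east-1:123456789012:key/EXAMPLE",
)
```

The KMS key is mandatory for snapshot exports, which is why creating (or reusing) a key appears as a prerequisite step below.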
Architecture of the Migration Process
Tools and AWS Components Used –
- AWS RDS
- AWS S3
- AWS KMS
- AWS IAM
- AWS Glue
- AWS Glue Crawler
- AWS Athena
Steps Performed for this Activity
- Create an S3 bucket in AWS
- Create a KMS key, or use an existing key, for encryption
- Take an RDS database snapshot
- Export the snapshot to the S3 bucket in Parquet format
- Validate the snapshot export (check size and cost)
- Create a Glue crawler and run it to catalog the data
- Use Athena to execute queries and generate reports; the query output is stored in CSV format in the configured S3 bucket location
- You can also use those output files to import data into another database
Execution Steps –
- This process was performed on a snapshot of the Beta PostgreSQL database.
Step 1: Export the RDS DB Snapshot to an S3 Bucket
- Take a snapshot of the Beta database from RDS
- Export the database snapshot to the S3 bucket from the Snapshot menu
- Create a KMS key named “glue-kms” in Key Management Service with the default settings, then copy the ARN of the new key into the snapshot export configuration; the Glue crawler will later use the same key to decrypt the data
- Select the S3 bucket to upload to, and provide a proper IAM role along with the newly created KMS key to encrypt the database snapshot
- Once the snapshot has been exported, it can be checked directly from the S3 console
- After the export task completes, the databases and tables are visible in the bucket named test-beta
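The export can take a while for a large snapshot. A small helper like the one below can tell whether the task has finished; it is a sketch that interprets the response shape returned by `describe-export-tasks` (the stubbed response here is illustrative, no AWS call is made).

```python
# Sketch: interpret the response of
# boto3.client("rds").describe_export_tasks(ExportTaskIdentifier="beta-export").
# Each export task carries a Status such as STARTING, IN_PROGRESS,
# COMPLETE, FAILED or CANCELED.
def export_finished(response):
    """Return (done, status) for the first export task in the response."""
    task = response["ExportTasks"][0]
    status = task["Status"]
    return status in ("COMPLETE", "FAILED", "CANCELED"), status

# Example with a stubbed response (illustrative only):
done, status = export_finished(
    {"ExportTasks": [{"ExportTaskIdentifier": "beta-export",
                      "Status": "COMPLETE"}]}
)
```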
Step 2: Create and Run a Crawler in AWS Glue to Load the S3 Data into the Glue Data Catalog
- In the AWS Glue console, go to the Crawlers option and click the "Add crawler" button. Give the crawler the name beta-demo and click Next
- Click Next and select a data source
- Select the S3 folder and provide the path you want to crawl, whether a particular table or the full database
- Here, we selected the specific table customer_master to check
- Select the IAM role that AWS Glue and the crawler will use to access the snapshot data in the S3 bucket
1. Review the created crawler
2. Select the data source for the crawler
3. Create a schedule for the crawler
4. Run the crawler and watch the process
5. Check the database in AWS Glue; you will find the table that the crawler extracted from the S3 bucket
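The same crawler setup can be scripted. Below is a sketch of the create-crawler request using the names from this walkthrough; the role ARN and the exact S3 path layout are assumptions, and the actual call would go through boto3's Glue client.

```python
# Sketch of the AWS Glue create-crawler request; the real call would be
# boto3.client("glue").create_crawler(**crawler). The role ARN and S3
# path below are hypothetical.
def build_crawler(name, role_arn, database, s3_path):
    return {
        "Name": name,                 # crawler name shown in the console
        "Role": role_arn,             # IAM role Glue assumes to read S3
        "DatabaseName": database,     # Data Catalog database to populate
        "Targets": {"S3Targets": [{"Path": s3_path}]},  # folder to crawl
    }

crawler = build_crawler(
    "beta-demo",
    "arn:aws:iam::123456789012:role/glue-crawler-role",
    "demodb",
    "s3://test-beta/beta-export/",  # illustrative export path
)
```

After `create_crawler`, a `start_crawler(Name="beta-demo")` call would correspond to the "Run the crawler" step above.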
Step 3: Run Queries in Athena and Store the Query Output in the S3 Bucket
1. Open the query editor
2. In the query editor, you will find AWS Data Catalog as the data source, and demodb listed as the database, because it is the only database available right now
3. Go to Settings and specify the location where Athena will store the query output; we provided the same S3 bucket with a new folder in it
4. Execute the query and see the result
5. Check the output in the query editor and in the S3 bucket, where it is saved in CSV format
You can now download the CSV file, open it, and check the result to verify.
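Steps 3 and 4 can also be driven programmatically. The sketch below assembles the start-query-execution request; the output folder name is a hypothetical example, and the actual call would go through boto3's Athena client.

```python
# Sketch of the Athena start-query-execution request; the real call would be
# boto3.client("athena").start_query_execution(**query). The output folder
# name is hypothetical.
def build_query(sql, database, output_s3):
    return {
        "QueryString": sql,                               # SQL to run
        "QueryExecutionContext": {"Database": database},  # catalog database
        "ResultConfiguration": {"OutputLocation": output_s3},  # CSV lands here
    }

query = build_query(
    "SELECT * FROM customer_master LIMIT 10",
    "demodb",
    "s3://test-beta/athena-results/",
)
```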
- A crawler will be needed whenever a report or data is required from a historical snapshot
- Crawler cost is based on the minutes it runs
- Athena cost is based on the amount of data scanned from the snapshot
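As a rough illustration of the Athena side of the bill, the arithmetic is simply data scanned times the per-TB rate. The 5 USD/TB figure below is the commonly published Athena rate and is an assumption here; check current pricing for your region.

```python
# Rough Athena cost estimate. The 5 USD per TB scanned rate is an
# assumption based on commonly published Athena pricing; actual
# regional pricing may differ.
ATHENA_USD_PER_TB = 5.0

def athena_scan_cost(tb_scanned):
    """Monthly Athena cost for a given volume of data scanned (TB)."""
    return tb_scanned * ATHENA_USD_PER_TB

monthly = athena_scan_cost(2.0)  # e.g. 2 TB scanned per month -> 10.0 USD
```

Because Parquet is columnar and compressed, Athena scans far less data per query than it would against plain text, which keeps this term small.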
Cost Estimation for the Complete Operation
The estimate was built from the following line items:

- Total size of snapshots (TB)
- Rate (per quantity's dimension)
- Trial performed on a snapshot
- Size of snapshot (TB)
- S3 storage (GB)
- Size after compression (TB)
- S3 API (number of PUT objects)
- Objects created from the snapshot migration
- S3 API cost
- RDS export (per GB)
- Crawler running minutes
- Athena data scanned (TB)
- Number of objects per GB after compression
- API cost per GB after compression
- API cost per object
- Crawler running hours per month
- Data scanned by Athena per month (TB)
- Number of KMS keys

NOTE: Additional KMS API charges apply when the crawler and Athena decrypt the objects.
Comparison Of Costing
| Existing Cost | New Estimated Cost |
| --- | --- |
| 1500 USD per Month | 190 USD per Month (snapshot migrated to S3; size reduced up to 8 times) |
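The comparison above follows directly from the compression ratio. A minimal sketch of the arithmetic, using the 8x size reduction and the 1500 USD figure from the table (everything else is derived):

```python
# Arithmetic behind the cost comparison: Parquet export reduces snapshot
# size by up to 8x, so storage cost scales down by roughly the same factor.
def estimated_s3_cost(current_monthly_usd, compression_ratio):
    """Estimated monthly S3 storage cost after compression."""
    return current_monthly_usd / compression_ratio

new_cost = estimated_s3_cost(1500, 8)  # 187.5, close to the 190 USD estimate
```

The small gap between 187.5 and the 190 USD estimate is covered by the per-request items listed above (S3 API calls, crawler minutes, Athena scans, KMS usage).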