Unlock Seamless Multi-Account Logging: Stream CloudWatch Logs to a Central S3 Bucket with Kinesis Data Firehose
Introduction
Look, if you’re running stuff across multiple AWS accounts – dev, staging, prod, maybe even separate accounts because your security team said so – you already know this pain. Something breaks, alarms start screaming, and suddenly you’re bouncing between six different accounts trying to figure out what the hell happened.
What I’m going to show you:
- The Real Problem: Why scattered logs cost us money and sleep
- What We Built: A streaming setup that cut our log storage costs by 70%
- The Nitty-Gritty Details: Every resource, every permission, every gotcha
- Step-by-Step Setup: Everything’s in the GitHub repo
What We Actually Built
Here’s what we did: we built a pipeline that sucks up CloudWatch logs from every account and dumps them into one massive S3 bucket. No more account hopping, no more missing pieces of the puzzle.
The secret sauce? Kinesis Data Firehose. This thing batches up logs, compresses the hell out of them, and moves everything across account boundaries while we sleep. It’s like having a dedicated intern whose only job is moving logs around, except this intern never calls in sick.
What this thing actually does:
- Grabs logs from any CloudWatch group automatically
- Squashes them down (we’re seeing 70% size reduction)
- Moves everything to our main account without breaking security
- Sorts everything by time so you can find stuff later
- Works with your existing apps (zero code changes required)
The Technical Bits
Your apps keep dumping logs into CloudWatch like they always have. A subscription filter sits there quietly watching every log that comes in, then forwards it to a Firehose stream. Firehose is pretty clever – it waits until it collects 5MB of logs OR 5 minutes pass (whichever happens first), then squashes everything with GZIP and ships it off to S3.

Architecture Diagram: Complete flow from CloudWatch to centralized S3 storage
Yeah, there’s a 5-15 minute delay, but honestly? That’s perfect for log analysis. You get near real-time data without paying through the nose for constant small transfers.
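If you want to keep an eye on that delay yourself, Firehose publishes a DeliveryToS3.DataFreshness metric (basically, how old the oldest record that hasn't landed in S3 yet is). Here's a quick way to pull the last hour of it; the stream name is a placeholder, and the date flags assume GNU date:

```bash
# Check how far behind delivery is (in seconds). Higher = logs sitting in the buffer longer.
aws cloudwatch get-metric-statistics \
  --namespace AWS/Firehose \
  --metric-name DeliveryToS3.DataFreshness \
  --dimensions Name=DeliveryStreamName,Value=central-logs-stream \
  --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
  --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --period 300 \
  --statistics Maximum
```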
Setting Up The Infrastructure
This is where it gets interesting. You need stuff in two places: the accounts where logs come from (we call them member accounts) and the account where everything gets stored (the central account).
What Goes in Each Member Account
🔐 IAM Roles (The Security Layer)
Two roles handle everything:
- CloudWatch Logs Role: Can only send data to Firehose, that’s it
- Firehose Role: Can write to the central S3 bucket plus some basic monitoring stuff
We’re obsessive about permissions. Each role gets exactly what it needs to do its job, nothing extra.
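To make that concrete, here's roughly what those two roles look like from the CLI. Treat this as a sketch, not the repo's exact setup: the role names, region, account ID, stream name, and bucket name are all placeholders you'd swap for your own (the repo has the real policies).

```bash
# Member-account roles (all names/IDs below are placeholders).

# 1) Role that CloudWatch Logs assumes -- it can only put records into this one Firehose stream.
aws iam create-role \
  --role-name cwl-to-firehose-role \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"Service": "logs.us-east-1.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }]
  }'

aws iam put-role-policy \
  --role-name cwl-to-firehose-role \
  --policy-name firehose-put-only \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["firehose:PutRecord", "firehose:PutRecordBatch"],
      "Resource": "arn:aws:firehose:us-east-1:111111111111:deliverystream/central-logs-stream"
    }]
  }'

# 2) Role that Firehose assumes -- write access to the central bucket only
#    (basic CloudWatch monitoring permissions omitted here for brevity).
aws iam create-role \
  --role-name firehose-to-central-s3-role \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"Service": "firehose.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }]
  }'

aws iam put-role-policy \
  --role-name firehose-to-central-s3-role \
  --policy-name write-central-bucket \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:AbortMultipartUpload", "s3:GetBucketLocation", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::our-central-logs",
        "arn:aws:s3:::our-central-logs/*"
      ]
    }]
  }'
```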
🚀 Kinesis Data Firehose Stream
This is where the magic happens. Here's how we set it up (with a CLI sketch right after the list):
- 5MB or 300-second buffering (whichever hits first)
- GZIP compression turned on
- Error handling (failed stuff goes to a separate folder)
- CloudWatch monitoring so we know when things break
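Concretely, creating the stream looks something like the call below. It's a sketch rather than the repo's exact command: the names, region, and account ID are placeholders, the prefix expressions are what produce the time-based folder layout described further down, and the log group for Firehose's own error logs is assumed to exist already.

```bash
# Delivery stream sketch -- placeholder names/ARNs; the repo has the real command.
aws firehose create-delivery-stream \
  --delivery-stream-name central-logs-stream \
  --delivery-stream-type DirectPut \
  --extended-s3-destination-configuration '{
    "RoleARN": "arn:aws:iam::111111111111:role/firehose-to-central-s3-role",
    "BucketARN": "arn:aws:s3:::our-central-logs",
    "Prefix": "cloudwatch-logs/year=!{timestamp:yyyy}/month=!{timestamp:MM}/day=!{timestamp:dd}/hour=!{timestamp:HH}/",
    "ErrorOutputPrefix": "cloudwatch-logs-errors/delivery-failures/!{firehose:error-output-type}/",
    "BufferingHints": {"SizeInMBs": 5, "IntervalInSeconds": 300},
    "CompressionFormat": "GZIP",
    "CloudWatchLoggingOptions": {
      "Enabled": true,
      "LogGroupName": "/aws/kinesisfirehose/central-logs-stream",
      "LogStreamName": "S3Delivery"
    }
  }'
```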
📊 CloudWatch Components
The pieces that make it all work together (sketched from the CLI after this list):
- Subscription filters: Get attached to your log groups
- Logs destination: Routes the filtered data to Firehose
- Destination policy: Controls which AWS accounts/roles can create subscription filters to this destination (crucial for cross-account access)
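Here's roughly how that wiring looks from the CLI. Again, names, region, and account IDs are placeholders; the destination policy's principal is whichever account(s) you want to allow to attach subscription filters.

```bash
# Logs destination that fronts the Firehose stream (placeholder names/IDs throughout).
aws logs put-destination \
  --destination-name central-logs-destination \
  --target-arn "arn:aws:firehose:us-east-1:111111111111:deliverystream/central-logs-stream" \
  --role-arn "arn:aws:iam::111111111111:role/cwl-to-firehose-role"

# Destination policy: only the listed account(s) can create subscription filters against it.
aws logs put-destination-policy \
  --destination-name central-logs-destination \
  --access-policy '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"AWS": "111111111111"},
      "Action": "logs:PutSubscriptionFilter",
      "Resource": "arn:aws:logs:us-east-1:111111111111:destination:central-logs-destination"
    }]
  }'

# Subscription filter on an existing log group; an empty pattern forwards everything.
aws logs put-subscription-filter \
  --log-group-name "/aws/lambda/checkout-service" \
  --filter-name "to-central-logs" \
  --filter-pattern "" \
  --destination-arn "arn:aws:logs:us-east-1:111111111111:destination:central-logs-destination"
```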
What Goes in the Central Account
The central account keeps things simple – just one S3 bucket with a really strict policy.
🪣 The Central S3 Bucket
Our bucket policy is locked down tight:
- Only specific Firehose roles from member accounts can write
- All connections must use HTTPS (no exceptions)
- Versioning is turned on for data protection
- Logs get organized automatically by year/month/day/hour
No wildcards, no broad permissions – every single ARN is listed explicitly.
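For a feel of what that looks like in practice, here's a sketch of the bucket policy plus the versioning toggle. The bucket name, account IDs, and role names are placeholders; the repo's policy is the source of truth.

```bash
# Central-account bucket policy sketch: explicit writer ARNs, plus a blanket deny on non-TLS traffic.
aws s3api put-bucket-policy \
  --bucket our-central-logs \
  --policy '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "AllowMemberAccountFirehoseRoles",
        "Effect": "Allow",
        "Principal": {"AWS": [
          "arn:aws:iam::111111111111:role/firehose-to-central-s3-role",
          "arn:aws:iam::222222222222:role/firehose-to-central-s3-role"
        ]},
        "Action": ["s3:PutObject", "s3:AbortMultipartUpload", "s3:GetBucketLocation", "s3:ListBucket"],
        "Resource": [
          "arn:aws:s3:::our-central-logs",
          "arn:aws:s3:::our-central-logs/*"
        ]
      },
      {
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
          "arn:aws:s3:::our-central-logs",
          "arn:aws:s3:::our-central-logs/*"
        ],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}}
      }
    ]
  }'

# Versioning for data protection, as noted above.
aws s3api put-bucket-versioning \
  --bucket our-central-logs \
  --versioning-configuration Status=Enabled
```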
How We Organize Everything
The time-based folder structure isn’t just pretty – it makes queries fast and cheap:
```
s3://our-central-logs/
├── cloudwatch-logs/
│   ├── year=2025/month=01/day=15/hour=14/
│   │   ├── batch-001.gz
│   │   ├── batch-002.gz
│   │   └── batch-003.gz
│   └── year=2025/month=01/day=15/hour=15/
│       └── ...
└── cloudwatch-logs-errors/
    └── delivery-failures/
```
When you need to search logs from a specific time period, tools like Amazon Athena or Splunk only scan the relevant folders. This saves both time and money – a lot of money if you’re dealing with terabytes of logs.
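It also makes ad-hoc digging from the CLI cheap, because you can grab a single hour's partition instead of crawling the whole bucket. For example (paths are illustrative):

```bash
# List just one hour's worth of batches:
aws s3 ls "s3://our-central-logs/cloudwatch-logs/year=2025/month=01/day=15/hour=14/"

# Or pull that hour locally for a closer look:
aws s3 cp "s3://our-central-logs/cloudwatch-logs/year=2025/month=01/day=15/hour=14/" \
  ./logs-2025-01-15-14/ --recursive
```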
Security Stuff (The Important Part)
Cross-account anything makes security teams break out in cold sweats, so we had to nail this part.
Our Security Approach
Every role has the bare minimum permissions needed. The CloudWatch role can’t touch S3 directly, and the Firehose role can’t peek at other log groups. The central bucket policy explicitly lists which member account roles are allowed – new accounts don’t get automatic access.
All data moves over HTTPS. We actually have a bucket policy condition that flat-out rejects any request that isn’t encrypted.
The Trade-offs
Nothing’s perfect. Here’s what you should know going in:
- There’s a 5-15 minute delay from log creation to S3 availability
- Each Firehose stream handles about 5,000 records per second (you can run multiple streams if needed)
- This works within a single AWS region
- During really high volume periods, you might see some brief delays
Getting Started
We’ve documented everything in the GitHub repository. Seriously, everything:
- All the AWS CLI commands you need to run to complete the setup
- All the IAM policies (just update your account IDs)
- Test scripts to verify everything’s working
Before you start, make sure you have:
- AWS CLI access to both member and central accounts
- Permission to create IAM roles and policies
- Basic understanding of CloudWatch and S3
If you can create an S3 bucket and attach an IAM policy, you can build this.
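To give you a feel for the verification step, here's a rough smoke test along the lines of the repo's test scripts. The log group, stream, and bucket names are placeholders, and the log group is assumed to already have its subscription filter attached.

```bash
# Write a test event into a subscribed log group, then check the central bucket later.
aws logs create-log-stream \
  --log-group-name "/aws/lambda/checkout-service" \
  --log-stream-name "central-logging-smoke-test"

aws logs put-log-events \
  --log-group-name "/aws/lambda/checkout-service" \
  --log-stream-name "central-logging-smoke-test" \
  --log-events timestamp=$(date +%s000),message="central-logging-smoke-test"

# Wait out the buffering window (up to ~15 minutes), then look for fresh objects:
aws s3 ls "s3://our-central-logs/cloudwatch-logs/" --recursive | tail -n 5
```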
🔗 Implementation Repository
GitHub Link: cloudwatch-logs-exporter-cross-account-s3
The whole setup takes about 15-20 minutes if you follow our guide step-by-step.