{"id":77744,"date":"2026-02-15T18:03:06","date_gmt":"2026-02-15T12:33:06","guid":{"rendered":"https:\/\/www.tothenew.com\/blog\/?p=77744"},"modified":"2026-03-16T15:43:28","modified_gmt":"2026-03-16T10:13:28","slug":"dns-migration-done-right-lessons-from-moving-to-route-53","status":"publish","type":"post","link":"https:\/\/www.tothenew.com\/blog\/dns-migration-done-right-lessons-from-moving-to-route-53\/","title":{"rendered":"DNS Migration Done Right: Lessons from Moving to Route 53"},"content":{"rendered":"<h2><span style=\"text-decoration: underline;\"><strong>Introduction<\/strong><\/span><\/h2>\n<p>DNS migrations don\u2019t usually get much attention. They\u2019re invisible when done right and very loud when done wrong.<\/p>\n<p>At TO THE NEW, we recently migrated DNS for an ad tech client from <a href=\"https:\/\/www.ibm.com\/products\/ns1-connect\">NS1 (an IBM Product)<\/a> to <a href=\"https:\/\/aws.amazon.com\/route53\/\">AWS Route 53<\/a> as part of their large move to AWS and cost savings. On paper, this looked like a simple vendor switch. In reality, it was a careful exercise in cost optimization, planning, validation, and controlled risk. 
This post isn\u2019t just about what we did \u2014 it\u2019s about how a DNS migration really works behind the scenes, the tools we used, and the checks that helped us sleep through the cutover.<\/p>\n<h2><span style=\"text-decoration: underline;\"><strong>Why We Even Looked at DNS Costs<\/strong><\/span><\/h2>\n<p>When the client initially moved from <a href=\"https:\/\/www.tothenew.com\/blog\/data-center-to-aws-cloud-migration\/\">Data Center to AWS<\/a> in 2022, they chose NS1 with a 2-billion queries\/month plan to stay future-ready.<\/p>\n<p>A few months later, we looked at actual usage:<\/p>\n<ul>\n<li>Average daily queries: <strong>~4 million<\/strong><\/li>\n<li>Traffic pattern: stable, predictable<\/li>\n<li>Peak spikes: minimal<\/li>\n<\/ul>\n<p>In short, we were paying for headroom we didn\u2019t need: roughly 120 million queries a month against a 2-billion-query ceiling.<\/p>\n<p>Route 53, with its pay-as-you-go pricing and tight AWS integration, became the obvious choice, but only after validating that it could reliably handle current and future traffic.<\/p>\n<h2><span style=\"text-decoration: underline;\"><strong>Step 1: Understanding DNS Traffic (Before Touching Anything)<\/strong><\/span><\/h2>\n<p>Before creating even a single Route 53 hosted zone, we focused on query behavior.<\/p>\n<p>Two key observations stood out:<\/p>\n<ul>\n<li>Most DNS traffic was steady-state<\/li>\n<li>TTL values were lower than necessary<\/li>\n<\/ul>\n<p>Based on these observations, we increased <strong>TTLs to 3600 seconds<\/strong> (1 hour) well before the migration.<\/p>\n<p>This change helped in two ways:<\/p>\n<ul>\n<li>First, it reduced the overall number of DNS queries almost immediately.<\/li>\n<li>Second, it made the final nameserver switch much safer and more predictable.<\/li>\n<\/ul>\n<p>TTL tuning is an example of a minor configuration change that can have a significant effect. 
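As a rough illustration of why this matters: the post doesn\u2019t state the pre-migration TTL, so 300 seconds is an assumed starting value here, but the effect on refresh-driven query volume per caching resolver is easy to estimate:

```shell
# Back-of-envelope effect of raising TTLs before a migration.
# ASSUMPTION: records previously used a 300-second TTL (not stated above).
old_ttl=300
new_ttl=3600
# A resolver that keeps a record warm re-queries roughly once per TTL expiry:
echo "refreshes/day before: $((86400 / old_ttl))"  # 288
echo "refreshes/day after:  $((86400 / new_ttl))"  # 24
```

The TTL actually being served can be confirmed with `dig +noall +answer example.com A`, where the second field of each answer line is the remaining TTL in seconds. 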
It&#8217;s easy to do, yet frequently overlooked.<\/p>\n<h2><span style=\"text-decoration: underline;\"><strong>Step 2: Rebuilding DNS: Treat It Like Real Infrastructure<\/strong><\/span><\/h2>\n<p>Rather than copying records by hand between consoles, we rebuilt everything from scratch: we created the hosted zones in Route 53, then recreated <strong>A, AAAA, CNAME, TXT, MX,<\/strong> and other records exactly as they existed in NS1. We treated DNS the same way we treat the rest of our infrastructure \u2014 structured, reviewed, and controlled \u2014 not something managed casually in a browser tab.<\/p>\n<h3><span style=\"text-decoration: underline;\"><strong>Terraform as the Source of Truth<\/strong><\/span><\/h3>\n<p>Everything was managed using Terraform, which gave us:<\/p>\n<ul>\n<li>Version control for DNS (yes, that matters)<\/li>\n<li>Easy review and approvals<\/li>\n<li>Instant rollback if something went wrong<\/li>\n<\/ul>\n<p>This also ensured environment parity \u2014 no \u201c<strong>prod-only surprises<\/strong>\u201d.<\/p>\n<h2><span style=\"text-decoration: underline;\"><strong>Step 3: Validating Before the Cutover (This Is Where Most Time Goes)<\/strong><\/span><\/h2>\n<p>Before touching nameservers, we spent time validating Route 53 responses in isolation for one of the test domains. 
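For context, a record in the Terraform setup described in Step 2 might be sketched like this. This is a hypothetical config fragment: the zone name, record name, TTL placement, and IP are placeholders, not the client\u2019s actual values.

```hcl
# Hypothetical sketch of Terraform-managed Route 53 DNS (placeholder values).
resource "aws_route53_zone" "main" {
  name = "example.com"
}

resource "aws_route53_record" "www" {
  zone_id = aws_route53_zone.main.zone_id
  name    = "www.example.com"
  type    = "A"
  ttl     = 3600            # matches the 1-hour TTL set before migration
  records = ["192.0.2.10"]  # TEST-NET address as a stand-in
}
```

Reviewing record changes as pull requests against files like this is what makes rollback a re-apply of the previous commit rather than a console scramble.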
Useful Commands During Validation:<\/p>\n<p>Query the assigned Route 53 nameservers directly, before any public change (the hostname below is a placeholder; use the nameservers listed in your hosted zone):<\/p>\n<pre>dig @ns-XXXX.awsdns-XX.org example.com A<\/pre>\n<p>Check the published NS set for the test domain:<\/p>\n<pre>dig example.com NS<\/pre>\n<p>Trace DNS resolution end-to-end:<\/p>\n<pre>dig +trace test.example.com<\/pre>\n<p>These checks helped us confirm:<\/p>\n<ul>\n<li>Records resolve correctly<\/li>\n<li>No missing entries<\/li>\n<li>No unexpected TTL behavior<\/li>\n<\/ul>\n<p><strong>Behind-the-scenes reality:<\/strong> most migration time is spent proving nothing will break, not actually switching DNS.<\/p>\n<h2><span style=\"text-decoration: underline;\"><strong>Step 4: The Actual Cutover (Quiet, Boring, Perfect)<\/strong><\/span><\/h2>\n<p>By the time we updated nameservers:<\/p>\n<ul>\n<li>TTLs were already high<\/li>\n<li>Route 53 was fully validated<\/li>\n<li>Monitoring was in place<\/li>\n<\/ul>\n<p><strong>Nameserver Update:<\/strong> the domain registrar (GoDaddy in our case) was updated to point to the Route 53 nameservers.<\/p>\n<p>Because of earlier TTL tuning, propagation was smooth and predictable.<\/p>\n<p><span style=\"text-decoration: underline;\"><strong>Real-Time Checks During Cutover<\/strong><\/span><\/p>\n<pre>dig example.com NS<\/pre>\n<pre>dig example.com +trace<\/pre>\n<p>We watched:<\/p>\n<ul>\n<li>Which nameservers were responding?<\/li>\n<li>How quickly did traffic shift?<\/li>\n<li>Any unexpected resolution failures?<\/li>\n<li>No drama. 
Exactly how DNS migrations should be.<\/li>\n<\/ul>\n<h2><span style=\"text-decoration: underline;\"><strong>Monitoring After Migration (Do Not Skip This)<\/strong><\/span><\/h2>\n<p>Once traffic fully moved to Route 53, we kept a close eye on CloudWatch metrics:<\/p>\n<ul>\n<li>DNS query count<\/li>\n<li>Health check status<\/li>\n<li>Latency anomalies<\/li>\n<\/ul>\n<h3><span style=\"text-decoration: underline;\"><strong>Route 53 Health Checks &amp; Failover<\/strong><\/span><\/h3>\n<p><strong>Critical endpoints were backed by:<\/strong><\/p>\n<ul>\n<li>ALBs with health checks<\/li>\n<li>Route 53 failover routing<\/li>\n<li>Automated recovery paths<\/li>\n<\/ul>\n<p>Alarms were configured early, before issues could become incidents.<\/p>\n<h2><span style=\"text-decoration: underline;\"><strong>Why Were There Still Queries Hitting NS1 Weeks Later?<\/strong><\/span><\/h2>\n<p>One interesting thing we noticed after the migration was that NS1 was still receiving a small number of queries, roughly <strong>1,000 in a 24-hour window,<\/strong> even weeks after we had updated the domain nameservers at the registrar. At first glance, this can feel concerning. If the nameservers were changed correctly, why would any traffic still reach the old provider?<\/p>\n<p>In practice, this is completely normal.<\/p>\n<p>DNS behavior across the Internet isn\u2019t perfectly uniform. Some recursive resolvers cache nameserver (NS) records longer than the published TTL. Some corporate networks run internal DNS forwarders that don\u2019t refresh instantly. Third-party vendors, security scanners, and monitoring tools may also cache authoritative nameserver information for long periods. Rarely, resolver paths may even be hardcoded into systems. All of this can leave a small amount of residual traffic hitting the old authoritative servers.<\/p>\n<p>The important part was context. 
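Using the post\u2019s own figures (~1,000 residual queries per day against ~4 million total), the residual volume is easy to put in perspective:

```shell
# Residual NS1 queries as a share of total daily queries (figures from above).
residual=1000
total=4000000
# Integer math: share expressed in thousandths of a percent.
echo "$((residual * 100000 / total))"   # prints 25, i.e. 0.025% of daily traffic
```
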
The query volume was very small compared to overall traffic, it was steadily declining, and Route 53 was correctly serving authoritative responses globally. For this reason, we intentionally did not shut down NS1 immediately. We kept it running during a safe decommission window to ensure there was zero risk to production traffic.<\/p>\n<p>DNS propagation is predictable. Resolver behavior across the internet is not.<\/p>\n<h2><span style=\"text-decoration: underline;\"><strong>What We Gained (Beyond Just Cost Savings)<\/strong><\/span><\/h2>\n<ul>\n<li><strong>Cost optimization:<\/strong> eliminated unused query capacity; we pay only for what we actually use<\/li>\n<li><strong>Operational clarity:<\/strong> DNS is fully versioned and auditable via Terraform, with no manual console drift<\/li>\n<li><strong>Better reliability:<\/strong> native AWS integrations (ALB, ACM, health checks) and automated failover for critical paths<\/li>\n<li><strong>Future-proofing:<\/strong> Route 53 scales automatically, so there is no need to pre-purchase massive plans<\/li>\n<\/ul>\n<h2><span style=\"text-decoration: underline;\"><strong>Key Learnings from the Migration<\/strong><\/span><\/h2>\n<ul>\n<li><strong>Data beats assumptions:<\/strong> always analyze real traffic before choosing enterprise plans.<\/li>\n<li><strong>TTL tuning is powerful:<\/strong> one config change reduced risk across the entire migration.<\/li>\n<li><strong>DNS deserves IaC:<\/strong> infrastructure as code isn\u2019t just for servers; DNS is infrastructure too.<\/li>\n<li><strong>Validation &gt; execution:<\/strong> the safest migrations are the most boring ones.<\/li>\n<\/ul>\n<h2><span style=\"text-decoration: underline;\"><strong>Conclusion<\/strong><\/span><\/h2>\n<p>DNS migrations may not seem glamorous, but for this ad tech client the move from NS1 to Route 53 delivered real business impact: lower costs, better reliability, and simpler operations.<\/p>\n<p>At <a href=\"https:\/\/www.tothenew.com\/\">To The New<\/a>, we look for these 
opportunities where small infrastructure decisions have big effects. This migration was a reminder that even the quietest layers of the stack deserve careful engineering. If you\u2019re planning a similar move for your DNS workloads, get in touch!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Introduction DNS migrations don\u2019t usually get much attention. They\u2019re invisible when done right and very loud when done wrong. At TO THE NEW, we recently migrated DNS for an ad tech client from NS1 (an IBM Product) to AWS Route 53 as part of their large move to AWS and cost savings. On paper, this [&hellip;]<\/p>\n","protected":false},"author":1601,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"iawp_total_views":56},"categories":[2348],"tags":[8370,8372,8369,8365,7502,6620,7570,6961,8364,8368,8371,6835,8366,1274,8367],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/posts\/77744"}],"collection":[{"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/users\/1601"}],"replies":[{"embeddable":true,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/comments?post=77744"}],"version-history":[{"count":5,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/posts\/77744\/revisions"}],"predecessor-version":[{"id":78537,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/posts\/77744\/revisions\/78537"}],"wp:attachment":[{"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/media?parent=77744"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/categories?post=77744"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/tags?post=77744"}],"curies":[{"name":"w
p","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}