Introduction Building AI solutions without any framework can be difficult to maintain. LangChain is a framework that helps with the development of LLM-powered applications. It provides a set of building blocks for almost every stage of the LLM application lifecycle. To add generative AI functionality to applications, LangChain offers components and features that make pipelining, […]
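The excerpt is cut off before it lists those components, but to make the "building blocks" idea concrete, here is a minimal sketch of how LangChain pieces are typically pipelined together with its expression language. The model name, prompt, and input are illustrative assumptions, not taken from the post.

```python
# Minimal sketch: composing LangChain building blocks with LCEL.
# The model name and prompt are illustrative, not from the article.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_template(
    "Summarise the following support ticket in one sentence:\n{ticket}"
)
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# The | operator pipes components together: prompt -> model -> output parser.
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"ticket": "The checkout page times out when I apply a coupon."}))
```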
Introduction Travel planning is no longer just about deciding where to go; it's about how, when, what to experience, and being prepared at every step. Traditional itinerary tools focus only on dates and places, leaving travelers to juggle multiple apps for hotels, packing lists, activities, maps, and research. To solve this, we built a Smart […]
Introduction Large Language Models (LLMs) like ChatGPT, Gemini, and Claude are very powerful. They can answer questions, write text, generate code, and help with many tasks. However, these models are trained on very general data from the internet. Because of this, they may not always understand your specific business, domain, or writing style. This is […]
Introduction In DevOps, upgrades are rarely exciting. They don’t ship new features (most of the time). They don’t impress clients. They don’t always get leadership applause. And yet, over the years at To The New, one thing has become very clear to us: DevOps teams that do upgrades regularly move faster, stay safer, and break […]
Introduction When teams start on their DevOps journey, the excitement is real. CI/CD pipelines, faster deployments, cloud-native tools, automation everywhere – it feels like everything is finally going to be smooth. But in reality, the first year of DevOps is rarely smooth. It’s messy, experimental, and full of learning. At To The New, while working […]
Introduction Reducing cloud costs is always a top priority and a constant headache for DevOps engineers, especially when using managed AWS services like ECS Fargate. For one of our Ad-Tech clients at TO THE NEW, we were already utilising Fargate Spot to reduce the ECS bill significantly. But we found that we could save even more […]
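The post goes on to describe the extra savings, but as context for the Fargate Spot setup it mentions, here is a hedged sketch of an ECS service that favours FARGATE_SPOT through a capacity provider strategy. All cluster, service, subnet, and count values are placeholders, not the client's actual configuration.

```python
# Hedged sketch: run most ECS tasks on Fargate Spot with a small on-demand
# base, via a capacity provider strategy. All names are placeholders.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

ecs.create_service(
    cluster="adtech-cluster",
    serviceName="bidder-service",
    taskDefinition="bidder-task:1",
    desiredCount=6,
    capacityProviderStrategy=[
        # Keep at least one task on regular Fargate for resilience...
        {"capacityProvider": "FARGATE", "weight": 1, "base": 1},
        # ...and schedule the rest on cheaper Fargate Spot capacity.
        {"capacityProvider": "FARGATE_SPOT", "weight": 4},
    ],
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0abc1234"],        # placeholder subnet
            "securityGroups": ["sg-0abc1234"],     # placeholder security group
            "assignPublicIp": "DISABLED",
        }
    },
)
```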
Introduction If you’ve ever worked with Kafka, you know the problem: data grows fast. Every click, impression, or event adds up, and before you know it, your Kafka broker’s disks are full. Disk is not cheap on AWS: storing everything on broker-attached storage gets expensive, and scaling up to handle growth feels […]
The other day, my client asked for something that sounded simple at first – he wanted an alert in his inbox whenever the date changed on a specific website. And, of course, the first thought that came to my mind was: “Yes, that’s doable with Python!” Because that’s what I have always done in my […]
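The excerpt stops before the implementation, but the "doable with Python" idea usually boils down to polling the page, extracting the date, and emailing when it changes. Below is a minimal hedged sketch of that pattern; the URL, regex, SMTP host, and addresses are hypothetical placeholders, not the client's actual setup.

```python
# Hedged sketch: poll a page, extract a date, email when it changes.
# The URL, regex, and SMTP details are placeholders for illustration.
import re
import smtplib
from email.message import EmailMessage
from pathlib import Path

import requests

URL = "https://example.com/schedule"             # placeholder target page
STATE_FILE = Path("last_seen_date.txt")          # remembers the last value seen
DATE_PATTERN = re.compile(r"\d{2}/\d{2}/\d{4}")  # e.g. 25/12/2025

def current_date_on_page() -> str | None:
    """Fetch the page and return the first date-like string, if any."""
    html = requests.get(URL, timeout=30).text
    match = DATE_PATTERN.search(html)
    return match.group(0) if match else None

def send_alert(new_date: str) -> None:
    """Email a simple notification about the new date."""
    msg = EmailMessage()
    msg["Subject"] = f"Date changed on {URL}: {new_date}"
    msg["From"] = "alerts@example.com"
    msg["To"] = "client@example.com"
    msg.set_content(f"The date on the page is now {new_date}.")
    with smtplib.SMTP("smtp.example.com", 587) as smtp:
        smtp.starttls()
        smtp.login("alerts@example.com", "app-password")  # placeholder credentials
        smtp.send_message(msg)

if __name__ == "__main__":
    new = current_date_on_page()
    old = STATE_FILE.read_text().strip() if STATE_FILE.exists() else ""
    if new and new != old:
        send_alert(new)
        STATE_FILE.write_text(new)
```

Run it on a schedule (cron, or a scheduled Lambda) and it only sends mail when the stored value differs from what the page currently shows.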
One Database, Infinite Context: Why Your Next RAG App Should Start in SQL: The biggest challenge in Generative AI is “hallucination.” Retrieval-Augmented Generation (RAG) solves this by giving an LLM access to your private data. While most RAG stacks require complex Python “glue code,” Google Cloud’s AlloyDB AI allows you to handle the entire retrieval […]
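The excerpt is cut off before showing how retrieval happens inside the database, but the core of that pattern is a vector similarity query over a pgvector column, which AlloyDB supports. A minimal hedged sketch from Python follows; the connection details, table, and column names are assumptions, and AlloyDB AI's in-database embedding generation (which the post presumably covers) is left out for brevity.

```python
# Hedged sketch: the retrieval half of a RAG flow as a single SQL query
# against an AlloyDB (PostgreSQL + pgvector) table. Table and column
# names are assumptions for illustration only.
import psycopg2

def retrieve_context(query_embedding: list[float], top_k: int = 5) -> list[str]:
    """Return the top_k document chunks closest to the query embedding."""
    vector_literal = "[" + ",".join(str(x) for x in query_embedding) + "]"
    with psycopg2.connect(host="10.0.0.5", dbname="ragdb",
                          user="rag_app", password="...") as conn:
        with conn.cursor() as cur:
            cur.execute(
                """
                SELECT content
                FROM document_chunks
                ORDER BY embedding <=> %s::vector   -- pgvector cosine distance
                LIMIT %s
                """,
                (vector_literal, top_k),
            )
            return [row[0] for row in cur.fetchall()]
```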