Data Science

LangChain – Components Overview

Introduction: AI solutions built without any framework can be difficult to maintain. LangChain is a framework that helps in the development of LLM-powered applications. It provides a set of building blocks for almost every stage of the LLM application lifecycle. To add generative AI functionality to applications, LangChain offers components and features that make pipelining, […]

by Rupa
January 16, 2026
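The overview above mentions LangChain's building blocks without showing one in action, so here is a minimal sketch of how its core components (a prompt template, a chat model, and an output parser) compose into a pipeline. It assumes the langchain-core and langchain-openai packages and an OpenAI API key; the model name is illustrative, not something the post prescribes.

```python
# Minimal LangChain pipeline: prompt template -> chat model -> output parser.
# Assumes langchain-core and langchain-openai are installed and OPENAI_API_KEY is set;
# the model name below is illustrative.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template(
    "Summarize the following text in two sentences:\n\n{text}"
)
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
parser = StrOutputParser()

# The pipe operator composes the components into a single runnable chain.
chain = prompt | llm | parser

if __name__ == "__main__":
    print(chain.invoke({"text": "LangChain provides building blocks for LLM apps."}))
```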

Data Science

Beyond itineraries: Building an AI-powered smart travel planner with agents, maps, voice, and more

Introduction: Travel planning is no longer just about deciding where to go; it’s about how and when to travel, what to experience, and being prepared at every step. Traditional itinerary tools focus only on dates and places, leaving travelers to juggle multiple apps for hotels, packing lists, activities, maps, and research. To solve this, we built a Smart […]

January 16, 2026

Data Science

LLM Fine-Tuning Explained: What It Is, Why It Matters, and How It Works

Introduction: Large Language Models (LLMs) like ChatGPT, Gemini, and Claude are very powerful. They can answer questions, write text, generate code, and help with many tasks. However, these models are trained on very general data from the internet. Because of this, they may not always understand your specific business, domain, or writing style. This is […]

January 16, 2026
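The excerpt above stops before the “how,” so the sketch below illustrates one common fine-tuning route: parameter-efficient tuning with LoRA via Hugging Face Transformers and PEFT. The base model, training file, and hyperparameters are placeholders, and the post itself may cover a different method; treat this as orientation, not the article’s recipe.

```python
# A minimal LoRA fine-tuning sketch with Hugging Face Transformers + PEFT.
# Base model, dataset file, and hyperparameters are illustrative placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

base = "distilgpt2"  # small model so the sketch runs on modest hardware
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Wrap the base model with low-rank adapters; only these small matrices are trained.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# "domain_corpus.txt" is a hypothetical plain-text file of domain examples.
ds = load_dataset("text", data_files={"train": "domain_corpus.txt"})["train"]
ds = ds.map(lambda b: tokenizer(b["text"], truncation=True, max_length=256), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out")  # saves only the adapter weights
```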

Data Science

GCP: Building a RAG Pipeline with AlloyDB AI and Vertex AI

One Database, Infinite Context: Why Your Next RAG App Should Start in SQL: The biggest challenge in Generative AI is “hallucination.” Retrieval-Augmented Generation (RAG) solves this by giving an LLM access to your private data. While most RAG stacks require complex Python “glue code,” Google Cloud’s AlloyDB AI allows you to handle the entire retrieval […]

January 2, 2026
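To make the “retrieval in SQL” idea above concrete, here is a minimal sketch of the retrieval step pushed down into the database: AlloyDB AI’s embedding() function (from the google_ml_integration extension) vectorizes the question, and pgvector’s distance operator ranks nearby document chunks. The DSN, table and column names, and the embedding model id are illustrative assumptions, not details from the post.

```python
# Retrieval step of a RAG pipeline done inside AlloyDB: embedding() vectorizes the
# question in-database and pgvector's <=> operator ranks the closest chunks.
# The DSN, table/column names, and embedding model id are illustrative.
import os
import psycopg2

QUERY = """
SELECT chunk_text
FROM doc_chunks
ORDER BY chunk_embedding <=> embedding('text-embedding-005', %s)::vector
LIMIT 5;
"""

def retrieve_context(question: str) -> list[str]:
    with psycopg2.connect(os.environ["ALLOYDB_DSN"]) as conn:
        with conn.cursor() as cur:
            cur.execute(QUERY, (question,))
            return [row[0] for row in cur.fetchall()]

if __name__ == "__main__":
    for chunk in retrieve_context("How do I enable AlloyDB AI?"):
        print(chunk)
```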

Data Science

MCP: Model Context Protocol

Introduction: If you’ve built anything around LLMs (chatbots, internal assistants, developer tools), you’ve probably hit the same wall: the model is smart, but it’s “trapped.” It can’t directly fetch the latest numbers from your database, read files from your system, or trigger real workflows unless you wire everything up manually. That’s where MCP (Model Context […]

January 1, 2026
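As a taste of what “un-trapping” the model looks like in practice, below is a minimal MCP server sketch using the official Python SDK’s FastMCP helper, exposing a single tool a client model can call. The order-lookup tool and its fake data are illustrative stand-ins for whatever internal systems you would actually connect.

```python
# Minimal MCP server sketch using the official Python SDK's FastMCP helper.
# It exposes one tool the model can call; the tool body is an illustrative fake.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("order-tools")

@mcp.tool()
def get_order_status(order_id: str) -> str:
    """Return the current status of an order by id."""
    # A real server would query a database or internal API here.
    fake_db = {"A-1001": "shipped", "A-1002": "processing"}
    return fake_db.get(order_id, "unknown order")

if __name__ == "__main__":
    # Serve over stdio so an MCP-aware client can connect to the tool.
    mcp.run()
```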

Data Science

Finding the Right GenAI Model for the Right Task

Where It All Began: The inspiration for exploring this topic arose while developing a POC to generate accurate graphical reports and charts from quantitative data. Naturally, the first thought was GPT; it’s everywhere, the “default” AI for almost any task. We gave ChatGPT a try. It worked to an extent, displaying text-based charts or even generating […]

September 17, 2025

Data Science

How to Create a CI/CD Pipeline for a GenAI Application Using Jenkins

Introduction: Generative AI (GenAI) applications are becoming increasingly popular in enterprises, powering use cases like chatbots, text summarization, code generation, and more. However, developing these applications is only half the battle; ensuring smooth deployment, scalability, and continuous improvement is where CI/CD (Continuous Integration/Continuous Deployment) pipelines play a critical role. In this blog, we’ll walk […]

September 17, 2025

Data Science

Empowering a Food Delivery App with AI: From Smart Tags to Meal Planning

Introduction: The food delivery space is growing increasingly competitive, and personalisation is now more critical than ever. Our team recently had the opportunity to design and implement a suite of AI-driven features for a food delivery client, each powered by OpenAI’s latest models, intelligent agents, and Python-based apps. These features were crafted to elevate the user […]

August 11, 2025

Data Science

Unlocking the Power of Open LLMs Locally: A Guide to Using LM Studio for Cost-Efficiency, Privacy, and Customization

Introduction: The domain of large language models (LLMs) is evolving quickly, and it’s revolutionizing the way we tackle everything from AI-powered chatbots to machine learning-based code generation. Traditionally, operating these models came at the cost of sky-high cloud bills, latency, or committing sensitive data to third-party servers. But what if you could host robust LLMs […]

August 5, 2025
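As a small illustration of the post’s premise, the sketch below queries a model served by LM Studio’s local server, which exposes an OpenAI-compatible API (http://localhost:1234/v1 by default), so the standard openai client works unchanged. The model name is a placeholder for whatever model you have loaded locally.

```python
# Query a model served by LM Studio's local, OpenAI-compatible server.
# Default endpoint is http://localhost:1234/v1; the API key is unused locally.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="local-model",  # placeholder; LM Studio routes to the loaded model
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Why run an LLM locally instead of in the cloud?"},
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)
```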