LangChain – Components Overview

16 / Jan / 2026 by Rupa

Introduction
Building AI solutions without a framework can be difficult to maintain. LangChain is a framework that helps in the development of LLM-powered applications. It provides building blocks for almost every stage of the LLM application lifecycle. To add generative AI functionality to applications, LangChain offers components and features that make pipelining, model invocation, prompting, tool calling, and other tasks significantly simpler.

What Is LangChain?
LangChain is an open-source framework designed to simplify the creation of GenAI applications, i.e., applications that use LLMs. It provides a standard interface for integrating external tools and for building end-to-end chains/pipelines.
The framework is available for both Python and JavaScript.
It is intended for any application that needs to integrate with an LLM or add generative AI capabilities.

Core Components of LangChain
LangChain provides core components that simplify model calls, tool orchestration, memory, streaming, and structured outputs. These components enable clean model reasoning, external tool interactions, message context management, and agent decision-making within an application, while also improving code readability.
They make API calls, model switching, response streaming, and response format definition much easier. Together, they also make it more convenient to introduce agentic workflows into an application.

The core components of LangChain are:

  • Agents
  • Models
  • Messages
  • Tools
  • Short-Term Memory
  • Streaming
  • Structured Output

 

[Image: Core components of LangChain]

Agents
Agents combine language models with tools to create systems that can reason about tasks, decide which tools to use, and work iteratively. An agent brings together all the other core components of LangChain.
An agent runs in a loop:

  • The model decides the next step (based on the prompt/instructions provided)
  • Calls a tool if needed
  • Observes the result
  • Repeats until a final answer is produced

Basic code:
from langchain.agents import create_agent

# `tools` is assumed to be a list of @tool-decorated functions (see the Tools section)
agent = create_agent(
    model="gpt-4o-mini",
    tools=tools,
)

result = agent.invoke({
    "messages": [{"role": "user", "content": "What is LangChain?"}]
})
# The final answer is the last message: result["messages"][-1].content

Models
Models are the reasoning engine of agents. They drive decision-making, tool calling, interruptions in a process, and the final answers of agents. LangChain provides a wide range of model integrations that make it easier for developers to switch between different models.
Models can be called dynamically inside agents or in a standalone manner for simpler tasks.

Standalone model call
import os
from langchain.chat_models import init_chat_model

# Set your provider API key (assumed here to be OpenAI)
os.environ["OPENAI_API_KEY"] = "sk-..."
model = init_chat_model("gpt-4.1")

response = model.invoke("What is LangChain?")
print(response.content)

Messages
Messages define how information is exchanged with chat models. They carry the input, output, and metadata in a manner that maintains the context of a conversation while interacting with LLMs.
LangChain provides a standard message format that works across all of its supported models, which makes it easier for developers to switch models, as the message format requires no change.
A message contains:

  • Role: identifies the message type (e.g., System, Human/User, AI, Tool)
  • Content: the actual content of the message (text, file, image, etc.)
  • Metadata: optional fields such as response information, message IDs, and token usage

Basic usage
from langchain.messages import SystemMessage, HumanMessage

messages = [
    SystemMessage(content="You are a helpful assistant"),
    HumanMessage(content="What is LangChain?"),
]

# `model` is assumed to be a chat model initialized as in the Models section
model.invoke(messages)
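
The model's reply is itself a message, so its metadata can be inspected directly. A minimal sketch, assuming the `model` initialized above and a provider that reports token usage:

response = model.invoke(messages)
print(response.content)         # the text of the AI message
print(response.id)              # message ID (metadata)
print(response.usage_metadata)  # token usage, if the provider reports it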

Tools
Models are not connected to the outside world or to a user's personal systems on their own; tools allow models and agents to interact with the outside world, i.e., APIs, databases, or custom logic.
Tools are callable functions with defined input schemas that the model can invoke when needed. The agent decides when to call a tool based on the context and instructions provided.

Code Example:
from langchain.tools import tool

@tool
def search(query: str) -> str:
    """Search for results matching the query."""
    return f"Results for: {query}"

Tools are passed to agents or bound directly to models for tool calling, as sketched below.
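
A minimal sketch of binding the tool above to a chat model directly (assuming the `model` from the Models section):

# Bind the tool so the model can emit tool calls for it
model_with_tools = model.bind_tools([search])

response = model_with_tools.invoke("Search for LangChain docs")
print(response.tool_calls)  # the tool calls the model decided to make, if any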

Short-Term Memory (State)
To make a conversation persistent, or to maintain conversational flow, it is important to keep the previous turns in memory. Long conversations or full histories may not fit inside an LLM’s context window, resulting in context loss or errors.
Short-term memory lets your application remember previous interactions within a single thread or conversation.
To add short-term memory to an agent, specify a checkpointer when creating the agent.

Basic code usage:

from langchain.agents import create_agent
from langgraph.checkpoint.memory import InMemorySaver

# `get_user_info` is assumed to be a @tool-decorated function defined elsewhere
agent = create_agent(
    model="gpt-4o-mini",
    tools=[get_user_info],
    checkpointer=InMemorySaver(),
)

agent.invoke(
    {"messages": [{"role": "user", "content": "Hi! My name is Bob."}]},
    {"configurable": {"thread_id": "1"}},
)
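
Because the checkpointer keys state by thread ID, a follow-up call on the same thread recalls earlier turns. A minimal sketch continuing the conversation above:

# Same thread_id, so the agent sees the earlier "My name is Bob" message
result = agent.invoke(
    {"messages": [{"role": "user", "content": "What is my name?"}]},
    {"configurable": {"thread_id": "1"}},
)
print(result["messages"][-1].content)  # should mention "Bob"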

Streaming
Some model calls, especially ones that generate long descriptive text, take more time to return a complete response. Instead of waiting for the complete response, it can be streamed.
Streaming improves perceived latency and user experience (it looks as if the model is typing in real time).

Basic code usage:

for chunk in agent.stream(
    {"messages": [{"role": "user", "content": "Explain LangChain"}]},
    stream_mode="values",  # each chunk is the full state after a step
):
    print(chunk["messages"][-1].content)
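
Chat models also expose a `stream` method for token-by-token output. A minimal sketch for a standalone model (assuming the `model` from the Models section):

# Each chunk carries a fragment of the response text
for chunk in model.stream("Explain LangChain in one paragraph"):
    print(chunk.content, end="", flush=True)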

Structured Output
Structured output allows agents to return data in a specific, predictable format. Developers set their desired output schema, and the model-generated response is returned under the ‘structured_response’ key of the result.

Signature excerpt:

def create_agent(
    ...,
    response_format: Union[
        ToolStrategy[StructuredResponseT],
        ProviderStrategy[StructuredResponseT],
        type[StructuredResponseT],
    ],
    ...,
)
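
A minimal usage sketch, passing a schema type directly as the `response_format` (the `Answer` class and its fields here are hypothetical):

from pydantic import BaseModel
from langchain.agents import create_agent

# Hypothetical schema for the structured answer
class Answer(BaseModel):
    summary: str
    confidence: float

agent = create_agent(
    model="gpt-4o-mini",
    tools=[],
    response_format=Answer,
)

result = agent.invoke({
    "messages": [{"role": "user", "content": "What is LangChain?"}]
})
print(result["structured_response"])  # an Answer instance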

Conclusion
To conclude, LangChain is more than a library: it is an architectural framework for building production-ready GenAI applications. As LLM systems move beyond simple prompt-response patterns, challenges like tool orchestration, memory, streaming, and structured outputs become central, and LangChain addresses them directly.
It enforces a clean separation of concerns: models handle reasoning, tools manage external interactions, messages maintain context, and agents orchestrate decisions. This makes applications easier to extend, debug, and adapt as requirements or model providers evolve.
While simple use cases may not require it, LangChain excels when applications demand composition, state, and consistency, making it a strong foundation for building reliable and scalable LLM-powered systems.
