{"id":77227,"date":"2026-01-16T11:30:01","date_gmt":"2026-01-16T06:00:01","guid":{"rendered":"https:\/\/www.tothenew.com\/blog\/?p=77227"},"modified":"2026-01-27T13:03:18","modified_gmt":"2026-01-27T07:33:18","slug":"langchain-components-overview","status":"publish","type":"post","link":"https:\/\/www.tothenew.com\/blog\/langchain-components-overview\/","title":{"rendered":"LangChain \u2013 Components Overview"},"content":{"rendered":"<p style=\"text-align: justify;\"><span style=\"text-decoration: underline;\"><strong>Introduction<\/strong><\/span><br \/>\nBuilding AI solutions without a framework can be difficult to maintain. LangChain is a framework that helps in the development of LLM-powered applications. It provides a set of building blocks for almost every stage of the LLM application lifecycle. To add generative AI functionality to applications, LangChain offers components and features that make pipelining, model invocation, prompting, tool calling, and other tasks significantly simpler.<\/p>\n<p style=\"text-align: justify;\"><span style=\"text-decoration: underline;\"><strong>What Is LangChain?<\/strong><\/span><br \/>\nLangChain is an open-source framework designed to simplify the creation of GenAI applications that use LLMs. It provides a standard interface for integrating external tools and building end-to-end chains\/pipelines.<br \/>\nThe framework is available for both Python and JavaScript.<br \/>\nIt is intended for applications that need to integrate with LLMs or add GenAI capabilities.<\/p>\n<p style=\"text-align: justify;\"><span style=\"text-decoration: underline;\"><strong>Core Components of LangChain<\/strong><\/span><br \/>\nLangChain provides core components that simplify model calls, tool orchestration, memory, streaming, and structured outputs. 
These components enable clean model reasoning, external tool interactions, message context management, and agent decision-making within an application, while also improving code readability.<br \/>\nUsing these components makes API calls, model switching, response streaming, and response format definition much easier. Together, they also make it more convenient to introduce agentic workflows into an application.<\/p>\n<p>The core components of LangChain are:<\/p>\n<ul>\n<li>Agents<\/li>\n<li>Models<\/li>\n<li>Messages<\/li>\n<li>Tools<\/li>\n<li>Short-Term Memory<\/li>\n<li>Streaming<\/li>\n<li>Structured Output<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<div id=\"attachment_77226\" style=\"width: 750px\" class=\"wp-caption aligncenter\"><img aria-describedby=\"caption-attachment-77226\" decoding=\"async\" loading=\"lazy\" class=\" wp-image-77226\" src=\"https:\/\/www.tothenew.com\/blog\/wp-ttn-blog\/uploads\/2025\/12\/Screenshot-2025-12-18-at-8.41.05\u202fPM-300x229.png\" alt=\"langchain\" width=\"740\" height=\"565\" srcset=\"\/blog\/wp-ttn-blog\/uploads\/2025\/12\/Screenshot-2025-12-18-at-8.41.05\u202fPM-300x229.png 300w, \/blog\/wp-ttn-blog\/uploads\/2025\/12\/Screenshot-2025-12-18-at-8.41.05\u202fPM-1024x782.png 1024w, \/blog\/wp-ttn-blog\/uploads\/2025\/12\/Screenshot-2025-12-18-at-8.41.05\u202fPM-768x586.png 768w, \/blog\/wp-ttn-blog\/uploads\/2025\/12\/Screenshot-2025-12-18-at-8.41.05\u202fPM-1536x1173.png 1536w, \/blog\/wp-ttn-blog\/uploads\/2025\/12\/Screenshot-2025-12-18-at-8.41.05\u202fPM-624x477.png 624w, \/blog\/wp-ttn-blog\/uploads\/2025\/12\/Screenshot-2025-12-18-at-8.41.05\u202fPM.png 1540w\" sizes=\"(max-width: 740px) 100vw, 740px\" \/><p id=\"caption-attachment-77226\" class=\"wp-caption-text\">Core components of LangChain<\/p><\/div>\n<p style=\"text-align: justify;\"><strong>Agents<\/strong><br \/>\nAgents combine language models with tools to create systems that can reason about tasks, decide which tools to use, and work iteratively. 
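This reason-and-act loop can be sketched in plain Python. Everything below (`fake_model`, the `TOOLS` registry) is a hypothetical stand-in to illustrate the control flow, not the LangChain API:

```python
def fake_model(history):
    # Hypothetical stand-in for an LLM call: it asks for a tool once,
    # then produces a final answer after seeing the tool's observation.
    if not any(m["role"] == "tool" for m in history):
        return {"type": "tool_call", "tool": "search", "args": {"query": "LangChain"}}
    return {"type": "final", "content": "LangChain is an LLM application framework."}

def search(query: str) -> str:
    return f"Results for: {query}"

TOOLS = {"search": search}

def run_agent(user_input: str) -> str:
    history = [{"role": "user", "content": user_input}]
    while True:
        step = fake_model(history)                    # model decides the next step
        if step["type"] == "final":
            return step["content"]                    # stop when a final answer is produced
        result = TOOLS[step["tool"]](**step["args"])  # call the chosen tool
        history.append({"role": "tool", "content": result})  # observe the result, repeat

print(run_agent("What is LangChain?"))
```

A real agent replaces `fake_model` with an actual LLM call, but the decide / call / observe / repeat structure stays the same.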
An agent ties together all of the other core components of LangChain.<br \/>\nAn agent runs in a loop:<\/p>\n<ul style=\"text-align: justify;\">\n<li>The model decides the next step (based on the prompt and instructions provided)<\/li>\n<li>Calls a tool if needed<\/li>\n<li>Observes the result<\/li>\n<li>Repeats until a final answer is produced<\/li>\n<\/ul>\n<p style=\"text-align: justify;\"><em>Basic code:<\/em><br \/>\n<code>from langchain.agents import create_agent<\/code><\/p>\n<p style=\"text-align: justify;\"><code>agent = create_agent(<br \/>\nmodel=\"gpt-4o-mini\",<br \/>\ntools=tools<br \/>\n)<\/code><\/p>\n<p style=\"text-align: justify;\"><code>result = agent.invoke({<br \/>\n\"messages\": [{\"role\": \"user\", \"content\": \"What is LangChain?\"}]<br \/>\n})<\/code><\/p>\n<p><strong>Models<\/strong><br \/>\nModels are the reasoning engine of agents. They drive an agent's decision-making, tool calling, process interrupts, and final answers. LangChain provides a wide range of model integrations that make it easier for developers to switch between different models.<br \/>\nModels can be called dynamically inside agents or standalone for simpler tasks.<\/p>\n<p style=\"text-align: justify;\"><em>Standalone model call<\/em><br \/>\n<code>import os<br \/>\nfrom langchain.chat_models import init_chat_model<\/code><\/p>\n<p style=\"text-align: justify;\"><code>os.environ[\"OPENAI_API_KEY\"] = \"sk-...\"<br \/>\nmodel = init_chat_model(\"gpt-4.1\")<\/code><\/p>\n<p><strong>Messages<\/strong><br \/>\nMessages define how information is exchanged with chat models. 
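Conceptually, a message pairs a role with content and optional metadata. A stdlib-only sketch of that idea (not LangChain's actual message classes, which are shown below):

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    role: str                                      # e.g. "system", "user", "assistant"
    content: str                                   # the actual payload, here plain text
    metadata: dict = field(default_factory=dict)   # optional ids, token usage, ...

# A conversation is just an ordered list of such messages.
conversation = [
    Message("system", "You are a helpful assistant"),
    Message("user", "What is LangChain?"),
]
print([m.role for m in conversation])  # ['system', 'user']
```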
They carry the input, output, and metadata in a way that maintains the conversational context while interacting with LLMs.<br \/>\nLangChain provides a standard message format that works across all supported models, which makes it easier for developers to switch models because the message format requires no change.<br \/>\nA message contains:<\/p>\n<ul style=\"text-align: justify;\">\n<li>Role: identifies the message type (e.g., system, user)<\/li>\n<li>Content: the actual content of the message (text, file, image, etc.)<\/li>\n<li>Metadata: optional fields such as response information, message IDs, and token usage<\/li>\n<\/ul>\n<p style=\"text-align: justify;\"><em>Basic usage<\/em><br \/>\n<code>from langchain.messages import SystemMessage, HumanMessage<\/code><\/p>\n<p style=\"text-align: justify;\"><code>messages = [<br \/>\nSystemMessage(content=\"You are a helpful assistant\"),<br \/>\nHumanMessage(content=\"What is LangChain?\")<br \/>\n]<\/code><\/p>\n<p style=\"text-align: justify;\"><code>model.invoke(messages)<\/code><\/p>\n<p><strong>Tools<\/strong><br \/>\nModels are not connected to the outside world or to a user's personal tools on their own; tools allow models and agents to interact with external systems such as APIs, databases, or custom logic.<br \/>\nTools are callable functions with defined input schemas that the model can invoke when needed. 
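Where such an input schema comes from can be sketched with the standard library: the snippet below derives a simple schema from a function's type hints. This is an illustration of the idea only, not LangChain's implementation (`input_schema` is a hypothetical helper):

```python
import inspect

def search(query: str, limit: int = 5) -> str:
    """Return search results for a query."""
    return f"Top {limit} results for: {query}"

def input_schema(fn):
    # Build a simple {parameter: type-name} mapping from the signature;
    # this is the kind of structure a model needs in order to fill in
    # arguments when it decides to call the tool.
    sig = inspect.signature(fn)
    return {
        name: param.annotation.__name__
        for name, param in sig.parameters.items()
        if param.annotation is not inspect.Parameter.empty
    }

print(input_schema(search))  # {'query': 'str', 'limit': 'int'}
```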
The agent decides when to call a tool based on the context and instructions provided.<\/p>\n<p style=\"text-align: justify;\"><em>Code example:<\/em><br \/>\n<code>from langchain.tools import tool<\/code><\/p>\n<p style=\"text-align: justify;\"><code>@tool<br \/>\ndef search(query: str) -&gt; str:<br \/>\n&nbsp;&nbsp;&nbsp;&nbsp;\"\"\"Search for a query.\"\"\"<br \/>\n&nbsp;&nbsp;&nbsp;&nbsp;return f\"Results for: {query}\"<\/code><br \/>\nTools are passed to agents or bound to models for tool calling.<\/p>\n<p><strong>Short-Term Memory (State)<\/strong><br \/>\nTo keep a conversation persistent and maintain its flow, previous turns must be kept in memory. Long conversations or the full history may not fit inside an LLM\u2019s context window, resulting in context loss or errors.<br \/>\nShort-term memory lets your application remember previous interactions within a single thread or conversation.<br \/>\nTo add short-term memory to an agent, pass a checkpointer when creating the agent.<\/p>\n<p style=\"text-align: justify;\"><em>Basic usage<\/em><\/p>\n<p style=\"text-align: justify;\"><code>from langchain.agents import create_agent<br \/>\nfrom langgraph.checkpoint.memory import InMemorySaver<\/code><\/p>\n<p style=\"text-align: justify;\"><code>agent = create_agent(<br \/>\nmodel=\"gpt-4o-mini\",<br \/>\ntools=[get_user_info],<br \/>\ncheckpointer=InMemorySaver(),<br \/>\n)<\/code><\/p>\n<p style=\"text-align: justify;\"><code>agent.invoke(<br \/>\n{\"messages\": [{\"role\": \"user\", \"content\": \"Hi! 
My name is Bob.\"}]},<br \/>\n{\"configurable\": {\"thread_id\": \"1\"}},<br \/>\n)<\/code><\/p>\n<p><strong>Streaming<\/strong><br \/>\nSome model calls, especially those that generate long descriptive text, take more time to return the complete response. Instead of waiting for the complete response, it can be streamed.<br \/>\nStreaming improves perceived latency and user experience (it looks as if the model is typing in real time).<\/p>\n<p style=\"text-align: justify;\"><em>Basic usage:<\/em><\/p>\n<p style=\"text-align: justify;\"><code>for chunk in agent.stream({<br \/>\n\"messages\": [{\"role\": \"user\", \"content\": \"Explain LangChain\"}]<br \/>\n}):<br \/>\nprint(chunk[\"messages\"][-1].content)<\/code><\/p>\n<p><strong>Structured Output<\/strong><br \/>\nStructured output allows agents to return data in a specific, predictable format. Developers can define the desired output schema, and the model-generated response is returned in the &#8216;structured_response&#8217; key.<\/p>\n<p style=\"text-align: justify;\"><em>Signature excerpt<\/em><\/p>\n<p style=\"text-align: justify;\"><code>def create_agent(<br \/>\n...<br \/>\nresponse_format: Union[<br \/>\nToolStrategy[StructuredResponseT],<br \/>\nProviderStrategy[StructuredResponseT],<br \/>\ntype[StructuredResponseT],<br \/>\n]<\/code><\/p>\n<p><strong>Conclusion<\/strong><br \/>\nLangChain is more than a library: it is an architectural framework for building production-ready GenAI applications. As LLM systems move beyond simple prompt-response patterns, challenges like tool orchestration, memory, streaming, and structured outputs become central concerns.<br \/>\nIt enforces a clean separation of concerns: models handle reasoning, tools manage external interactions, messages maintain context, and agents orchestrate decisions. 
This makes applications easier to extend, debug, and adapt as requirements or model providers evolve.<br \/>\nWhile simple use cases may not require it, LangChain excels when applications demand composition, state, and consistency\u2014making it a strong foundation for building reliable and scalable LLM-powered systems.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Introduction Building AI solutions without any framework can be difficult to maintain. LangChain is a framework that helps in the development of LLM-powered applications. It provides a set of building blocks for almost every stage of the LLM application lifecycle. To add generative AI functionality to applications, LangChain offers components and features that makes pipelining, [&hellip;]<\/p>\n","protected":false},"author":2207,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"iawp_total_views":74},"categories":[5871],"tags":[6263],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/posts\/77227"}],"collection":[{"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/users\/2207"}],"replies":[{"embeddable":true,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/comments?post=77227"}],"version-history":[{"count":3,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/posts\/77227\/revisions"}],"predecessor-version":[{"id":77546,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/posts\/77227\/revisions\/77546"}],"wp:attachment":[{"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/media?parent=77227"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/categories?post=77227"},{"taxonomy":"post_tag","embeddable":tr
ue,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/tags?post=77227"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}