{"id":74009,"date":"2025-08-17T18:30:58","date_gmt":"2025-08-17T13:00:58","guid":{"rendered":"https:\/\/www.tothenew.com\/blog\/?p=74009"},"modified":"2025-09-17T11:48:40","modified_gmt":"2025-09-17T06:18:40","slug":"building-your-first-langgraph-agent-a-beginners-guide-to-ai-powered-candidate-shortlisting","status":"publish","type":"post","link":"https:\/\/www.tothenew.com\/blog\/building-your-first-langgraph-agent-a-beginners-guide-to-ai-powered-candidate-shortlisting\/","title":{"rendered":"Building Your First LangGraph Agent: A Beginner\u2019s Guide to AI-Powered Candidate Shortlisting"},"content":{"rendered":"<h1>Using LangGraph to Create an AI Agent: A Beginner&#8217;s Guide<\/h1>\n<p>Ready to build your first AI agent? In this post, we&#8217;ll use LangGraph, a robust framework for developing stateful, multi-step AI applications, to build a simple candidate shortlisting agent that demonstrates several of LangGraph&#8217;s core features.<\/p>\n<h2>What is LangGraph?<\/h2>\n<p>LangGraph, built on LangChain, is a Python library for creating and managing complex, stateful workflows for large language model (LLM) agents. It excels at building AI applications with multiple agents that need to interact and share state, enabling features like cyclic data flows for improved agent performance and more personalized responses. With LangGraph, you can build agents that:<\/p>\n<ul>\n<li>Remember previous conversations (state management)<\/li>\n<li>Use tools to perform specific tasks<\/li>\n<li>Make decisions about what to do next<\/li>\n<li>Handle complex, multi-step workflows<\/li>\n<\/ul>\n<p>Think of it as a way to build AI agents that don&#8217;t just chat, but actually <em>do things<\/em>.<\/p>\n<h2>Quick Setup<\/h2>\n<p>We\u2019ll use <a href=\"https:\/\/github.com\/astral-sh\/uv#comparison\">uv<\/a> as our package manager instead of pip for this project. uv is an extremely fast Python package manager, written in Rust.<\/p>\n<h3>1. 
Install uv<\/h3>\n<pre><code># On macOS\/Linux\r\nbrew install uv<\/code><\/pre>\n<h3>2. Create Project<\/h3>\n<pre><code>mkdir candidate_agent_lg\r\ncd candidate_agent_lg\r\nuv init\r\nuv add langgraph langchain-core langchain-groq python-dotenv pydantic<\/code><\/pre>\n<h3>3. Get API Key<\/h3>\n<p>Sign up at <a href=\"https:\/\/console.groq.com\/\" target=\"_blank\" rel=\"noopener\">Groq Console<\/a> for a free API key, then create <code>.env<\/code>:<\/p>\n<pre><code>GROQ_API_KEY=your_api_key_here<\/code><\/pre>\n<h1>The Agent Architecture<\/h1>\n<p>This LangGraph-powered AI agent is designed to assist HR teams in mobile development hiring. It showcases how LangGraph can be used to build versatile agents for various tasks. The demo covers key LangGraph concepts, including nodes, edges, state management, memory, and tool integration, offering a comprehensive understanding of the framework.<\/p>\n<h2><strong>Key Components Explained:<\/strong><\/h2>\n<p>a) <span style=\"text-decoration: underline;\">State<\/span>: Manages conversation flow and candidate data<br \/>\nb) <span style=\"text-decoration: underline;\">Tools<\/span>: Functions the agent can call to perform specific tasks<br \/>\nc) <span style=\"text-decoration: underline;\">Nodes<\/span>: Processing units that handle different aspects of the workflow<br \/>\nd) <span style=\"text-decoration: underline;\">Edges<\/span>: Define the flow between nodes<br \/>\ne) <span style=\"text-decoration: underline;\">Memory<\/span>: Stores conversation history and candidate insights<\/p>\n<h2><strong>Memory System:<\/strong><\/h2>\n<p>a) <span style=\"text-decoration: underline;\">Short-term Memory<\/span>: Maintains the current conversation history and session state.<br \/>\nb) <span style=\"text-decoration: underline;\">Long-term Memory<\/span>: Stores HR preferences, agent insights, and records of past candidate evaluations.<\/p>\n<h2><strong>Workflow Steps<\/strong><\/h2>\n<div id=\"attachment_76350\" 
style=\"width: 635px\" class=\"wp-caption alignnone\"><img aria-describedby=\"caption-attachment-76350\" decoding=\"async\" loading=\"lazy\" class=\"size-large wp-image-76350\" src=\"https:\/\/www.tothenew.com\/blog\/wp-ttn-blog\/uploads\/2025\/09\/agent_flowchart-1024x961.png\" alt=\"agent_flowchart\" width=\"625\" height=\"587\" srcset=\"\/blog\/wp-ttn-blog\/uploads\/2025\/09\/agent_flowchart-1024x961.png 1024w, \/blog\/wp-ttn-blog\/uploads\/2025\/09\/agent_flowchart-300x282.png 300w, \/blog\/wp-ttn-blog\/uploads\/2025\/09\/agent_flowchart-768x721.png 768w, \/blog\/wp-ttn-blog\/uploads\/2025\/09\/agent_flowchart-1536x1441.png 1536w, \/blog\/wp-ttn-blog\/uploads\/2025\/09\/agent_flowchart-624x586.png 624w, \/blog\/wp-ttn-blog\/uploads\/2025\/09\/agent_flowchart-24x24.png 24w, \/blog\/wp-ttn-blog\/uploads\/2025\/09\/agent_flowchart.png 1688w\" sizes=\"(max-width: 625px) 100vw, 625px\" \/><p id=\"caption-attachment-76350\" class=\"wp-caption-text\">langgraph_agent<\/p><\/div>\n<p>a) <span style=\"text-decoration: underline;\">Start<\/span>: The candidate selection process begins.<br \/>\nb) <span style=\"text-decoration: underline;\">LLM Call Node<\/span>: The agent processes user requests, retrieves relevant memories, and determines the next action.<br \/>\nc) <span style=\"text-decoration: underline;\">Decision Point<\/span>: The agent decides whether tool calls are needed.<br \/>\n&#8211; If yes, it uses the available tools (evaluate, assess, schedule, verify).<br \/>\n&#8211; If no, it updates memory with new insights for future use.<br \/>\nd) <span style=\"text-decoration: underline;\">Tool Node<\/span>: Executes the selected tools and updates candidate evaluation data.<br \/>\ne) <span style=\"text-decoration: underline;\">Memory Updates:<\/span> In parallel, the agent updates:<br \/>\n&#8211; Agent Memory: Stores recommendations and evaluation patterns.<br \/>\n&#8211; User Memory: Stores HR preferences and hiring criteria.<br \/>\nf) <span 
style=\"text-decoration: underline;\">End<\/span>: The workflow completes, leaving the system with updated memories.<\/p>\n<hr \/>\n<h1><strong>Detailed Summary:<\/strong><\/h1>\n<ul>\n<li><strong>Creating a Simple LLM Bot with LangChain<\/strong>: A tiny bot that takes a prompt and replies. The prompt includes a messages placeholder so you can later add history.<\/li>\n<\/ul>\n<pre>from langchain_groq import ChatGroq\r\nfrom langchain_core.prompts import ChatPromptTemplate\r\nfrom langchain_core.prompts.chat import MessagesPlaceholder\r\n\r\nllm = ChatGroq(model=\"gemma2-9b-it\", temperature=0)\r\nprompt = ChatPromptTemplate.from_messages([\r\n    (\"system\", \"You are an HR assistant for mobile dev hiring. Respond in {language}.\"),\r\n    MessagesPlaceholder(\"messages\"),\r\n])\r\nbot = prompt | llm<\/pre>\n<ul>\n<li><strong>Converting to LangGraph: Nodes and Edges<\/strong>: Turn the simple bot into a graph with explicit steps.<\/li>\n<\/ul>\n<pre>from pydantic import BaseModel\r\nfrom typing import Annotated\r\nfrom langgraph.graph import StateGraph, START, END\r\nfrom langgraph.graph.message import add_messages\r\n\r\nclass State(BaseModel):\r\n    messages: Annotated[list, add_messages]\r\n    language: str | None = \"English\"\r\n\r\ndef llm_call(state: State):\r\n    msg = (prompt | llm).invoke({\"messages\": state.messages, \"language\": state.language})\r\n    return {\"messages\": [msg]}\r\n\r\ng = StateGraph(State)\r\ng.add_node(\"llm_call\", llm_call)\r\ng.add_edge(START, \"llm_call\")\r\ng.add_edge(\"llm_call\", END)\r\ngraph = g.compile()<\/pre>\n<ul>\n<li><strong>Adding Short-term Memory<\/strong>: Enable checkpoints so each thread keeps its own conversation history.<\/li>\n<\/ul>\n<pre>from langgraph.checkpoint.memory import InMemorySaver\r\ngraph = g.compile(checkpointer=InMemorySaver())\r\nconfig = {\"configurable\": {\"thread_id\": \"1\"}}<\/pre>\n<ul>\n<li><strong>Adding Long-term Memory<\/strong>: Persist summaries (e.g., HR preferences, agent insights) 
across sessions with a store.<\/li>\n<\/ul>\n<pre>from langgraph.store.memory import InMemoryStore\r\nstore = InMemoryStore()\r\n\r\ndef concat(items):\r\n    return \"\\n\".join(\r\n        i.dict()[\"value\"][\"memory\"]\r\n        for i in items\r\n        if \"memory\" in i.dict().get(\"value\", {})\r\n    )\r\n\r\ndef llm_call_with_mem(state: State, *, store=store):\r\n    hr = concat(store.search((\"user_1\", \"HR Preferences\")))\r\n    ai = concat(store.search((\"user_1\", \"Agent Insights\")))\r\n    msg = (prompt | llm).invoke({\r\n        \"messages\": state.messages, \"language\": state.language,\r\n        \"hr_summary\": hr, \"agent_summary\": ai\r\n    })\r\n    return {\"messages\": [msg]}<\/pre>\n<ul>\n<li><strong>Implementing Parallelism<\/strong>: Route conditionally. If no tools are needed, run both memory updates in parallel and finish.\n<pre>def go_to(state: State):\r\n    last = state.messages[-1]\r\n    return \"tools\" if getattr(last, \"tool_calls\", []) else [\"update_user_memory\", \"update_agent_memory\"]\r\n\r\ng = StateGraph(State)\r\ng.add_node(\"llm_call\", llm_call_with_mem)\r\ng.add_node(\"tools\", lambda s: s)  # placeholder\r\ng.add_node(\"update_user_memory\", lambda s, *, store=None: {})\r\ng.add_node(\"update_agent_memory\", lambda s, *, store=None: {})\r\ng.add_edge(START, \"llm_call\")\r\ng.add_conditional_edges(\"llm_call\", go_to, [\"tools\", \"update_user_memory\", \"update_agent_memory\"])\r\ng.add_edge(\"tools\", \"llm_call\")\r\ng.add_edge(\"update_user_memory\", END)\r\ng.add_edge(\"update_agent_memory\", END)\r\ngraph = g.compile(checkpointer=InMemorySaver(), store=store)<\/pre>\n<\/li>\n<li><strong>Adding Tool Calling for Autonomous Agents<\/strong>: Expose just two tools; the LLM decides when to call them.\n<pre>from langchain_core.tools import tool, InjectedToolCallId\r\nfrom langchain_core.messages import ToolMessage\r\nfrom langgraph.types import Command\r\nfrom langgraph.prebuilt import ToolNode\r\nfrom typing import Annotated\r\n\r\n@tool\r\ndef 
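verify_documents(name: str, tool_call_id: Annotated[str, InjectedToolCallId]):\r\n    \"\"\"Record that a candidate's documents were verified.\"\"\"\r\n    # Hypothetical third tool, not part of the original two: it only\r\n    # illustrates the same Command-based state-update pattern used by\r\n    # the tools below. Append it to the `tools` list if you want the\r\n    # LLM to be able to call it.\r\n    note = f\"Document verification recorded for {name}\"\r\n    return Command(update={\"messages\": [ToolMessage(note, tool_call_id=tool_call_id)]})\r\n\r\n@tool\r\ndef 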
evaluate_candidate(name: str, role: str, tool_call_id: Annotated[str, InjectedToolCallId]):\r\n    \"\"\"Score a candidate for a given mobile development role.\"\"\"\r\n    result = f\"{name} for {role}: 75\/100\"\r\n    return Command(update={\"messages\": [ToolMessage(result, tool_call_id=tool_call_id)]})\r\n\r\n@tool\r\ndef schedule_interview(name: str, date: str, tool_call_id: Annotated[str, InjectedToolCallId]):\r\n    \"\"\"Schedule an interview with a candidate on a given date.\"\"\"\r\n    info = f\"Interview scheduled for {name} on {date}\"\r\n    return Command(update={\"messages\": [ToolMessage(info, tool_call_id=tool_call_id)]})\r\n\r\ntools = [evaluate_candidate, schedule_interview]\r\ntool_node = ToolNode(tools)\r\nllm_with_tools = llm.bind_tools(tools)\r\n\r\ndef llm_call(state: State):\r\n    msg = (prompt | llm_with_tools).invoke({\"messages\": state.messages, \"language\": state.language})\r\n    return {\"messages\": [msg]}<\/pre>\n<\/li>\n<\/ul>\n<hr \/>\n<h1><strong>Key Concepts Summary<\/strong><\/h1>\n<p><strong>1. Runnables<\/strong><\/p>\n<p>a) Building blocks that can be chained together<br \/>\nb) Examples: Prompts, LLMs, Tools<br \/>\nc) Chain with `|` operator: `prompt | llm`<\/p>\n<p><strong>2. State Management<\/strong><\/p>\n<p>a) Central data structure flowing through workflow<br \/>\nb) Defined using Pydantic models<br \/>\nc) Updated by nodes and tools<\/p>\n<p><strong>3. Nodes<\/strong><\/p>\n<p>a) Processing units in the workflow<br \/>\nb) Python functions that take state and return updates<br \/>\nc) Handle specific aspects (conversation, tools, memory)<\/p>\n<p><strong>4. Edges<\/strong><\/p>\n<p>a) Define flow between nodes<br \/>\nb) Can be conditional (based on state)<br \/>\nc) Enable complex decision-making workflows<\/p>\n<p><strong>5. Memory Systems<\/strong><\/p>\n<p>a) Short-term: Conversation history within session<br \/>\nb) Long-term: Persistent insights across sessions<br \/>\nc) Namespaced: Organised by user and memory type<\/p>\n<p><strong>6. 
Tools<\/strong><\/p>\n<p>a) Functions the AI can call autonomously<br \/>\nb) Decorated with `@tool`<br \/>\nc) Enable specialised capabilities<\/p>\n<p><strong>7. Parallelism<\/strong><\/p>\n<p>a) Multiple operations run simultaneously<br \/>\nb) Conditional edges can return multiple destinations<br \/>\nc) Improves performance and efficiency<\/p>\n<hr \/>\n<h1><strong>Advanced Features Demonstrated<\/strong><\/h1>\n<p><strong>1. Memory Summarisation<\/strong><\/p>\n<p>a) LLM creates concise summaries of long conversations<br \/>\nb) Maintains context while reducing token usage<br \/>\nc) Separate prompts for user and agent memories<\/p>\n<p><strong>2. Tool State Updates<\/strong><\/p>\n<p>a) Tools can update workflow state using `Command` objects<br \/>\nb) Enables complex state modifications from within tools<br \/>\nc) Maintains traceability with `tool_call_id`<\/p>\n<p><strong>3. Conditional Workflow<\/strong><\/p>\n<p>a) Dynamic routing based on conversation state<br \/>\nb) Agent decides whether to use tools or update memory<br \/>\nc) Flexible workflow adaptation<\/p>\n<p><strong>4. Multi-LLM Support<\/strong><\/p>\n<p>a) Easy switching between LLM providers (Groq, OpenAI)<br \/>\nb) Consistent interface across different models<br \/>\nc) Fallback options for reliability<\/p>\n<hr \/>\n<h1><strong>Best Practices for Beginners<\/strong><\/h1>\n<p><strong>1. Start Simple<\/strong><\/p>\n<p>a) Begin with basic prompt + LLM chain<br \/>\nb) Add complexity gradually (nodes, then tools, then memory)<br \/>\nc) Test each component before adding the next<\/p>\n<p><strong>2. State Design<\/strong><\/p>\n<p>a) Keep state model focused and minimal<br \/>\nb) Use Optional fields for flexibility<br \/>\nc) Document what each field represents<\/p>\n<p><strong>3. Tool Development<\/strong><\/p>\n<p>a) Write clear docstrings for each tool<br \/>\nb) Start with simple tools, add complexity later<br \/>\nc) Test tools independently before integration<\/p>\n<p><strong>4. 
Memory Strategy<\/strong><\/p>\n<p>a) Design memory structure early<br \/>\nb) Use namespaces to organise different memory types<br \/>\nc) Implement memory summarisation to control size<\/p>\n<p><strong>5. Debugging<\/strong><\/p>\n<p>a) Add print statements to understand flow<br \/>\nb) Enable LangChain tracing for detailed debugging<br \/>\nc) Test with simple inputs first<\/p>\n<hr \/>\n<h1>Useful Resources<\/h1>\n<ul>\n<li><strong>LangGraph Documentation<\/strong>: <a href=\"https:\/\/langchain-ai.github.io\/langgraph\/\" target=\"_blank\" rel=\"noopener\">https:\/\/langchain-ai.github.io\/langgraph\/<\/a><\/li>\n<li><strong>LangChain Academy<\/strong>: <a href=\"https:\/\/academy.langchain.com\/courses\/intro-to-langgraph\" target=\"_blank\" rel=\"noopener\">https:\/\/academy.langchain.com\/courses\/intro-to-langgraph<\/a><\/li>\n<li><strong>uv Documentation<\/strong>: <a href=\"https:\/\/docs.astral.sh\/uv\/\" target=\"_blank\" rel=\"noopener\">https:\/\/docs.astral.sh\/uv\/<\/a><\/li>\n<li><strong>LangGraph Tutorials<\/strong>: <a href=\"https:\/\/python.langchain.com\/docs\/tutorials\/\" target=\"_blank\" rel=\"noopener\">https:\/\/python.langchain.com\/docs\/tutorials\/<\/a><\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Using LangGraph to Create an AI Agent- Beginner Guide Are you ready to create an AI agent effectively? In this post, we&#8217;ll use LangGraph, a robust framework for developing stateful, multi-step AI applications, to develop a basic candidate shortlisting agent that will demonstrate multiple LangGraph features. What is LangGraph? 
LangGraph, built on Langchain, is a [&hellip;]<\/p>\n","protected":false},"author":1786,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"iawp_total_views":99},"categories":[5879],"tags":[7765,5733,5918,7766,6263,7764],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/posts\/74009"}],"collection":[{"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/users\/1786"}],"replies":[{"embeddable":true,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/comments?post=74009"}],"version-history":[{"count":9,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/posts\/74009\/revisions"}],"predecessor-version":[{"id":76516,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/posts\/74009\/revisions\/76516"}],"wp:attachment":[{"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/media?parent=74009"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/categories?post=74009"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/tags?post=74009"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}