Building Your First LangGraph Agent: A Beginner’s Guide to AI-Powered Candidate Shortlisting

17 Aug 2025 · Ajmal Hasan


Ready to build your first AI agent? In this post, we’ll use LangGraph, a robust framework for developing stateful, multi-step AI applications, to build a basic candidate shortlisting agent that demonstrates several of LangGraph’s core features.

What is LangGraph?

LangGraph, built on LangChain, is a Python library for creating and managing complex, stateful workflows for large language model (LLM) agents. It excels at building AI applications with multiple agents that need to interact and share state, and it supports cyclic data flows for improved agent performance and more personalized responses. With LangGraph, your agents can:

  • Remember previous conversations (state management)
  • Use tools to perform specific tasks
  • Make decisions about what to do next
  • Handle complex, multi-step workflows

Think of it as a way to build AI agents that don’t just chat, but actually do things.

Quick Setup

We’ll use uv instead of pip as the package manager for this project. uv is a very fast Python package manager written in Rust.

1. Install uv

# On macOS/Linux (via Homebrew)
brew install uv

# Or use the official standalone installer
curl -LsSf https://astral.sh/uv/install.sh | sh

2. Create Project

mkdir candidate_agent_lg
cd candidate_agent_lg
uv init
uv add langgraph langchain-core langchain-groq python-dotenv pydantic

3. Get API Key

Sign up at Groq Console for a free API key, then create .env:

GROQ_API_KEY=your_api_key_here
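
In code, python-dotenv loads this file at startup; a quick sanity check might look like:

import os
from dotenv import load_dotenv

load_dotenv()  # reads GROQ_API_KEY from .env into os.environ
assert os.getenv("GROQ_API_KEY"), "GROQ_API_KEY not found - check your .env"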

The Agent Architecture

This LangGraph-powered agent is designed to assist HR teams with mobile development hiring, and the same patterns generalise to other tasks. The demo covers the key LangGraph concepts: nodes, edges, state management, memory, and tool integration.

Key Components Explained:

a) State: Manages conversation flow and candidate data
b) Tools: Functions the agent can call to perform specific tasks
c) Nodes: Processing units that handle different aspects of the workflow
d) Edges: Define the flow between nodes
e) Memory: Stores conversation history and candidate insights

Memory System:

a) Short-term Memory: Maintains the current conversation history and session state.
b) Long-term Memory: Stores HR preferences, agent insights, and records of past candidate evaluations.

Workflow Steps

[Figures: agent flowchart and LangGraph agent graph]

a) Start: The candidate selection process begins.
b) LLM Call Node: The agent processes user requests, retrieves relevant memories, and determines the next action.
c) Decision Point: The agent decides whether tool calls are needed.
– If yes, it uses the available tools (evaluate, assess, schedule, verify).
– If no, it updates memory with new insights for future use.
d) Tool Node: Executes the selected tools and updates candidate evaluation data.
e) Memory Updates: In parallel, the agent updates:
– Agent Memory: Stores recommendations and evaluation patterns.
– User Memory: Stores HR preferences and hiring criteria.
f) End: The workflow completes, leaving the system with updated memories for future sessions.


Detailed Summary:

  • Creating a Simple LLM Bot with LangChain: A tiny bot that takes a prompt and replies. The prompt includes a messages placeholder so you can later add history.
from langchain_groq import ChatGroq
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.prompts.chat import MessagesPlaceholder

llm = ChatGroq(model="gemma2-9b-it", temperature=0)
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an HR assistant for mobile dev hiring. Respond in {language}."),
    MessagesPlaceholder("messages"),
])
bot = prompt | llm
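
A quick smoke test of the bot (the question is just an example):

reply = bot.invoke({
    "messages": [("user", "What skills matter most for a senior Android developer?")],
    "language": "English",
})
print(reply.content)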
  • Converting to LangGraph: Nodes and Edges: Turn the simple bot into a graph with explicit steps.
from pydantic import BaseModel
from typing import Annotated
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

class State(BaseModel):
    messages: Annotated[list, add_messages]
    language: str | None = "English"

def llm_call(state: State):
    msg = (prompt | llm).invoke({"messages": state.messages, "language": state.language})
    return {"messages": [msg]}

g = StateGraph(State)
g.add_node("llm_call", llm_call)
g.add_edge(START, "llm_call")
g.add_edge("llm_call", END)
graph = g.compile()
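
Invoking the compiled graph looks much the same as invoking the chain, except you pass in and get back the full state:

out = graph.invoke({"messages": [("user", "Hello!")], "language": "English"})
print(out["messages"][-1].content)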
  • Adding Short-term Memory: Enable checkpoints so each thread keeps its own conversation history.
from langgraph.checkpoint.memory import InMemorySaver
graph = g.compile(checkpointer=InMemorySaver())
config = {"configurable": {"thread_id": "1"}}
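
Every invocation that shares a thread_id now continues the same conversation:

graph.invoke({"messages": [("user", "My name is Asha.")]}, config)
out = graph.invoke({"messages": [("user", "What's my name?")]}, config)
print(out["messages"][-1].content)  # the bot recalls "Asha" from thread "1"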
  • Adding Long-term Memory: Persist summaries (e.g., HR preferences, agent insights) across sessions with a store.
from langgraph.store.memory import InMemoryStore
store = InMemoryStore()

def concat(items):
    # Join the "memory" field of each stored item into one summary string
    return "\n".join(
        i.dict()["value"]["memory"]
        for i in items
        if "memory" in i.dict().get("value", {})
    )

# Note: this assumes the prompt template also defines {hr_summary} and {agent_summary} placeholders
def llm_call_with_mem(state: State, *, store=store):
    hr = concat(store.search(("user_1", "HR Preferences")))
    ai = concat(store.search(("user_1", "Agent Insights")))
    msg = (prompt | llm).invoke({
        "messages": state.messages, "language": state.language,
        "hr_summary": hr, "agent_summary": ai
    })
    return {"messages": [msg]}
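
For search to return anything, memories must first be written with put; the namespace layout matches the searches above, while the key and text here are illustrative:

store.put(
    ("user_1", "HR Preferences"),  # namespace: (user id, memory type)
    "pref_1",                      # key (illustrative)
    {"memory": "Prefers Kotlin-first candidates with 5+ years of Android experience."},
)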
  • Implementing Parallelism: Route conditionally. If no tools are needed, run both memory updates in parallel and finish.
    def go_to(state: State):
        last = state.messages[-1]
        return "tools" if getattr(last, "tool_calls", []) else ["update_user_memory", "update_agent_memory"]
    
    g = StateGraph(State)
    g.add_node("llm_call", llm_call_with_mem)
    g.add_node("tools", lambda s: s) # placeholder
    g.add_node("update_user_memory", lambda s, *, store=None: {})
    g.add_node("update_agent_memory", lambda s, *, store=None: {})
    g.add_edge(START, "llm_call")
    g.add_conditional_edges("llm_call", go_to, ["tools", "update_user_memory", "update_agent_memory"])
    g.add_edge("tools", "llm_call")
    g.add_edge("update_user_memory", END)
    g.add_edge("update_agent_memory", END)
    graph = g.compile(checkpointer=InMemorySaver(), store=store)
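
    In a fuller version, the placeholder memory nodes would summarise the exchange and persist it; a minimal sketch for the user side (the prompt wording and key scheme are assumptions):

    import uuid

    def update_user_memory(state: State, *, store=store):
        # Distil the conversation into one HR-preference note and persist it
        note = llm.invoke(
            state.messages
            + [("user", "Summarise any HR preferences expressed above in one sentence.")]
        )
        store.put(("user_1", "HR Preferences"), str(uuid.uuid4()), {"memory": note.content})
        return {}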
  • Adding Tool Calling for Autonomous Agents: Expose just two tools; the LLM decides when to call them.
    from langchain_core.tools import tool, InjectedToolCallId
    from langchain_core.messages import ToolMessage
    from langgraph.types import Command
    from langgraph.prebuilt import ToolNode
    from typing import Annotated
    
    @tool
    def evaluate_candidate(name: str, role: str, tool_call_id: Annotated[str, InjectedToolCallId]):
        """Evaluate a candidate for a given role and return a score."""
        result = f"{name} for {role}: 75/100"
        return Command(update={"messages": [ToolMessage(result, tool_call_id=tool_call_id)]})

    @tool
    def schedule_interview(name: str, date: str, tool_call_id: Annotated[str, InjectedToolCallId]):
        """Schedule an interview with a candidate on the given date."""
        info = f"Interview scheduled for {name} on {date}"
        return Command(update={"messages": [ToolMessage(info, tool_call_id=tool_call_id)]})

    tools = [evaluate_candidate, schedule_interview]
    tool_node = ToolNode(tools)
    llm_with_tools = llm.bind_tools(tools)

    def llm_call(state: State):
        msg = (prompt | llm_with_tools).invoke({"messages": state.messages, "language": state.language})
        return {"messages": [msg]}
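
    With the tools defined, the last step is to swap the placeholder node for the real ToolNode and recompile; the candidate name in the test invocation is made up:

    g = StateGraph(State)
    g.add_node("llm_call", llm_call)
    g.add_node("tools", tool_node)  # real ToolNode replaces the earlier placeholder
    g.add_node("update_user_memory", update_user_memory)
    g.add_node("update_agent_memory", lambda s: {})
    g.add_edge(START, "llm_call")
    g.add_conditional_edges("llm_call", go_to, ["tools", "update_user_memory", "update_agent_memory"])
    g.add_edge("tools", "llm_call")
    g.add_edge("update_user_memory", END)
    g.add_edge("update_agent_memory", END)
    graph = g.compile(checkpointer=InMemorySaver(), store=store)

    out = graph.invoke(
        {"messages": [("user", "Evaluate Priya for Senior Android Developer")]},
        {"configurable": {"thread_id": "2"}},
    )
    print(out["messages"][-1].content)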

Key Concepts Summary

1. Runnables

a) Building blocks that can be chained together
b) Examples: Prompts, LLMs, Tools
c) Chain with `|` operator: `prompt | llm`

2. State Management

a) Central data structure flowing through workflow
b) Defined using Pydantic models
c) Updated by nodes and tools

3. Nodes

a) Processing units in the workflow
b) Python functions that take state and return updates
c) Handle specific aspects (conversation, tools, memory)

4. Edges

a) Define flow between nodes
b) Can be conditional (based on state)
c) Enable complex decision-making workflows

5. Memory Systems

a) Short-term: Conversation history within session
b) Long-term: Persistent insights across sessions
c) Namespaced: Organised by user and memory type

6. Tools

a) Functions the AI can call autonomously
b) Decorated with `@tool`
c) Enable specialised capabilities

7. Parallelism

a) Multiple operations run simultaneously
b) Conditional edges can return multiple destinations
c) Improves performance and efficiency


Advanced Features Demonstrated

1. Memory Summarisation

a) LLM creates concise summaries of long conversations
b) Maintains context while reducing token usage
c) Separate prompts for user and agent memories
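
The snippets above don’t show the summarisation step itself; one way to sketch the helper the memory nodes could call (the prompt wording is an assumption):

from langchain_core.messages import HumanMessage

def summarise(messages, focus: str) -> str:
    # Compress the full thread into a short note about one topic
    ask = HumanMessage(f"Summarise the {focus} from this conversation in two sentences or fewer.")
    return llm.invoke(messages + [ask]).content

# Inside update_user_memory / update_agent_memory, respectively:
# summarise(state.messages, "HR preferences and hiring criteria")
# summarise(state.messages, "evaluation patterns and recommendations")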

2. Tool State Updates

a) Tools can update workflow state using `Command` objects
b) Enables complex state modifications from within tools
c) Maintains traceability with `tool_call_id`

3. Conditional Workflow

a) Dynamic routing based on conversation state
b) Agent decides whether to use tools or update memory
c) Flexible workflow adaptation

4. Multi-LLM Support

a) Easy switching between LLM providers (Groq, OpenAI)
b) Consistent interface across different models
c) Fallback options for reliability
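
Because every LangChain chat model implements the same Runnable interface, switching providers is a one-line change; the OpenAI variant assumes langchain-openai is installed and OPENAI_API_KEY is set:

# Groq (used throughout this post)
from langchain_groq import ChatGroq
llm = ChatGroq(model="gemma2-9b-it", temperature=0)

# OpenAI drop-in (assumes: uv add langchain-openai, OPENAI_API_KEY in .env)
# from langchain_openai import ChatOpenAI
# llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)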


Best Practices for Beginners

1. Start Simple

a) Begin with basic prompt + LLM chain
b) Add complexity gradually (nodes, then tools, then memory)
c) Test each component before adding the next

2. State Design

a) Keep state model focused and minimal
b) Use Optional fields for flexibility
c) Document what each field represents
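
For instance, a focused and documented state model might look like this (current_candidate is a hypothetical field for illustration):

from typing import Annotated, Optional
from pydantic import BaseModel
from langgraph.graph.message import add_messages

class State(BaseModel):
    # Full chat history; add_messages appends new messages instead of overwriting
    messages: Annotated[list, add_messages]
    # Output language for responses
    language: Optional[str] = "English"
    # Candidate currently under review, if any (hypothetical field)
    current_candidate: Optional[str] = None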

3. Tool Development

a) Write clear docstrings for each tool
b) Start with simple tools, add complexity later
c) Test tools independently before integration

4. Memory Strategy

a) Design memory structure early
b) Use namespaces to organise different memory types
c) Implement memory summarisation to control size

5. Debugging

a) Add print statements to understand flow
b) Enable LangChain tracing for detailed debugging
c) Test with simple inputs first
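
Tracing can be switched on via environment variables (the API key value is a placeholder):

import os

# Enable LangSmith tracing for step-by-step visibility into runs
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "your_langsmith_key_here"  # placeholder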


