Day 2: LangChain & LangGraph Foundations
Welcome Back! 👋
Yesterday you learned about Messages. Today we complete the LangChain foundation and level up to LangGraph for production-grade agents.
Today's Journey
What We'll Cover (3 Hours)
How to use this guide:
- Keep this open alongside your VS Code
- Use the sidebar to navigate between sections
- Click "Copy Code" buttons for instant code snippets
- Expand "Show Hints" and "Show Solution" during assignments
- Check off items in the progress tracker as you go
Learning Outcomes
By the end of today, you'll be able to:
- ✅ Create tools using the @tool decorator
- ✅ Initialize and configure LLMs with LangChain
- ✅ Build working agents with AgentExecutor
- ✅ Understand the agent loop architecture
- ✅ Identify when to use LangGraph vs simple agents
- ✅ Build stateful graphs with conditional routing
- ✅ Create production-ready agent workflows
1️⃣ Tools - Giving Agents Capabilities
What Are Tools?
Tools are Python functions that agents can call to interact with the outside world. Think of them as the agent's hands and eyes.
The @tool Decorator
LangChain uses the @tool decorator to convert regular Python functions into agent tools.
from langchain.tools import tool
@tool
def get_weather(city: str) -> str:
"""Get the current weather for a city.
Args:
city: The name of the city
"""
# Mock implementation
weather_data = {
"bangalore": "Sunny, 28°C",
"mumbai": "Rainy, 26°C",
"delhi": "Cloudy, 22°C"
}
return weather_data.get(city.lower(), "Weather data not available")
Anatomy of a Tool
Three Critical Parts:
- Function Name → Becomes the tool name the LLM sees
- Docstring → Becomes the description (LLM reads this to decide when to use the tool)
- Type Hints → Provides parameter validation and helps the LLM understand inputs
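You can check exactly what the LLM will see: the @tool decorator wraps the function in a tool object that exposes these three parts as attributes.
# Inspect the tool the LLM sees (using get_weather from above)
print(get_weather.name)         # "get_weather"
print(get_weather.description)  # the docstring text
print(get_weather.args)         # parameter schema built from the type hints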
Example: Multiple Tools
@tool
def book_flight(origin: str, destination: str, date: str) -> dict:
"""Book a flight between two cities.
Args:
origin: Departure city
destination: Arrival city
date: Travel date in YYYY-MM-DD format
"""
return {
"booking_id": "FL12345",
"route": f"{origin} → {destination}",
"date": date,
"status": "confirmed"
}
@tool
def cancel_booking(booking_id: str) -> dict:
"""Cancel a flight booking.
Args:
booking_id: The flight booking ID
"""
return {
"booking_id": booking_id,
"status": "cancelled",
"refund_amount": 5000
}
The LLM decides which tool to use based on the docstring. A vague description = confused agent.
Bad: "Does stuff with cities"
Good: "Get current weather for a city. Use when user asks about weather, temperature, or climate."
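Written as an actual tool, the good version looks like this:
@tool
def get_weather(city: str) -> str:
    """Get current weather for a city.
    Use when the user asks about weather, temperature, or climate.
    """
    return "Sunny, 28°C"  # mock implementation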
Best Practices
- ✅ Use descriptive function names (get_weather, not weather)
- ✅ Write clear docstrings with examples
- ✅ Add type hints for all parameters
- ✅ Return structured data (dict/list) not just strings
- ✅ Handle errors gracefully (return error messages, don't crash)
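To illustrate the last two points, here's a sketch of a tool that returns structured data and surfaces errors instead of raising (the currency rates are mocked):
from langchain.tools import tool

@tool
def get_exchange_rate(currency_code: str) -> dict:
    """Get the exchange rate from USD to another currency.
    Use when the user asks about exchange rates or currency conversion.
    """
    rates = {"INR": 83.0, "EUR": 0.92, "GBP": 0.79}  # mock data
    rate = rates.get(currency_code.upper())
    if rate is None:
        # Return a readable error the agent can recover from; don't crash
        return {"error": f"No rate available for '{currency_code}'"}
    return {"currency": currency_code.upper(), "usd_rate": rate}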
2️⃣ LLMs - The Brain of Your Agent
What is an LLM?
The Large Language Model (LLM) is the reasoning engine that decides which tools to call and synthesizes responses.
Initializing ChatOpenAI
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(
model="gpt-4", # Model selection
temperature=0, # Creativity level (0 = deterministic)
api_key=os.getenv("OPENAI_API_KEY") # Your API key
)
Key Parameters
Model Selection
- gpt-4 or gpt-4-turbo: most capable, best for complex reasoning
- gpt-3.5-turbo: faster and cheaper, good for simple tasks
- Recommendation for agents: start with gpt-4, optimize later
Temperature
- 0: deterministic, consistent answers (production default)
- 0.7-1.0: creative, varied responses (good for creative writing)
- For agents: use 0 for predictable behavior
Testing Your LLM
from langchain_core.messages import HumanMessage
# Quick test
response = llm.invoke([
HumanMessage(content="Say hello in 3 languages")
])
print(response.content)
3️⃣ Agent Architecture - How It All Fits Together
The 4-Step Pattern
Every LangChain agent follows the same pattern. Master this, and you understand all agents.
Step-by-Step Agent Creation
Step 1: Define Tools
from langchain.tools import tool
@tool
def get_weather(city: str) -> str:
"""Get weather for a city"""
return f"Sunny, 25°C in {city}"
@tool
def book_flight(city: str) -> dict:
"""Book a flight to a city"""
return {"booking_id": "FL123", "destination": city}
# Create tools list
tools = [get_weather, book_flight]
Step 2: Initialize LLM
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4", temperature=0)
Step 3: Create Prompt Template
from langchain_core.prompts import ChatPromptTemplate
prompt = ChatPromptTemplate.from_messages([
("system", "You are a helpful travel assistant. Be friendly and concise."),
("human", "{input}"),
("placeholder", "{agent_scratchpad}") # Tool results go here
])
{agent_scratchpad} is where tool results get inserted. LangChain manages this automatically.
Step 4: Create Agent & Executor
from langchain.agents import AgentExecutor, create_tool_calling_agent
# Create the agent
agent = create_tool_calling_agent(llm, tools, prompt)
# Create the executor (runs the loop)
agent_executor = AgentExecutor(
agent=agent,
tools=tools,
verbose=True, # Show what's happening
max_iterations=5, # Safety limit
handle_parsing_errors=True
)
Step 5: Run It!
result = agent_executor.invoke({
"input": "What's the weather in Bangalore? If it's good, book me a flight."
})
print(result["output"])
What Happens Under The Hood?
THE AGENT LOOP

1. Call LLM with prompt + tools
            ↓
2. LLM decides: Tool call or answer?
        ↙           ↘
   Tool Call      Final Answer
       ↓               ↓
3. Execute the     5. Return
   function           to user
       ↓
4. Add result to context
       ↓
   Loop back to step 1
Complete Working Example
# Full working agent - copy and run!
import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain.tools import tool
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate
load_dotenv()
# Tools
@tool
def get_weather(city: str) -> str:
"""Get current weather for a city"""
return f"Sunny, 25°C in {city}"
@tool
def book_flight(destination: str) -> dict:
"""Book a flight to a destination"""
return {"booking_id": "FL123", "destination": destination, "status": "confirmed"}
# Setup
tools = [get_weather, book_flight]
llm = ChatOpenAI(model="gpt-4", temperature=0)
prompt = ChatPromptTemplate.from_messages([
("system", "You are a travel assistant. Help users with weather and flights."),
("human", "{input}"),
("placeholder", "{agent_scratchpad}")
])
# Create agent
agent = create_tool_calling_agent(llm, tools, prompt)
agent_executor = AgentExecutor(
agent=agent,
tools=tools,
verbose=True,
max_iterations=5
)
# Run
result = agent_executor.invoke({
"input": "Check Bangalore weather and book a flight if it's good"
})
print("\n" + "="*50)
print("RESULT:", result["output"])
print("="*50)
🎯 Task
Create a new tool and add it to the travel agent. Choose ONE of these options:
@tool
def check_hotel_availability(city: str, dates: str) -> dict:
"""Check hotel availability in a city.
Args:
city: City name
dates: Date range (e.g., 'Jan 15-20')
"""
# YOUR CODE HERE
# Return a dict with hotel info
pass
@tool
def currency_converter(amount: float, from_currency: str, to_currency: str) -> dict:
"""Convert currency from one to another.
Args:
amount: Amount to convert
from_currency: Source currency (USD, EUR, INR)
to_currency: Target currency
"""
# YOUR CODE HERE
# Mock exchange rates are fine
pass
@tool
def get_time_zone(city: str) -> str:
"""Get the current timezone for a city.
Args:
city: City name
"""
# YOUR CODE HERE
# Return timezone info
pass
✅ Success Criteria
- Function has the @tool decorator
- Has a proper docstring
- Returns something (mock data is fine)
- Can be added to tools list
- Agent can successfully use it
💡 Hints:
- Use a dictionary for mock data (like we did with weather)
- The docstring should explain WHEN to use this tool
- Return a dict or string, not complex objects
- Test your tool directly before adding it to the agent, e.g. your_tool.invoke({"city": "Bangalore"})
✅ Example Solution (Hotel Availability):
@tool
def check_hotel_availability(city: str, dates: str) -> dict:
"""Check hotel availability in a city.
Use this when user asks about hotels, accommodation, or places to stay.
Args:
city: City name
dates: Date range (e.g., 'Jan 15-20')
"""
# Mock data
hotels = {
"bangalore": [
{"name": "Taj Hotel", "price": 8000, "available": True},
{"name": "ITC Gardenia", "price": 12000, "available": True}
],
"mumbai": [
{"name": "Taj Mahal Palace", "price": 15000, "available": False},
{"name": "The Oberoi", "price": 18000, "available": True}
]
}
city_hotels = hotels.get(city.lower(), [])
return {
"city": city,
"dates": dates,
"hotels": city_hotels,
"count": len(city_hotels)
}
# Add to tools list:
tools = [get_weather, book_flight, check_hotel_availability]
# Test it:
result = check_hotel_availability.invoke({"city": "Bangalore", "dates": "Jan 15-20"})
print(result)
4️⃣ Why LangGraph? - Understanding the Limitations
The Problem
AgentExecutor is great for simple, linear tasks. But production systems need more control.
AgentExecutor Limitations
❌ Limitation 1: Linear Flow Only
What you can't do: "If user is premium, auto-approve. If standard, ask for approval."
Why it matters: Real workflows have conditional logic, not just Tool → Tool → Done.
AgentExecutor:
Start → Tool → Tool → Tool → End
Can't do:
Start
↓
Check User Tier
↙ ↘
Premium Standard
↓ ↓
Auto-OK Need Approval
❌ Limitation 2: No Persistent State
What you can't do: User closes app, comes back → remembers context.
Why it matters: Multi-session conversations, human-in-the-loop workflows need state.
# Session 1
result = agent.invoke({"input": "Start booking to Paris"})
# Agent: "Great! When do you want to travel?"
# User closes app, comes back later
# Session 2
result = agent.invoke({"input": "January 15th"})
# Agent: "What are you referring to?"
# ❌ Lost context!
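For contrast, here's a runnable mini-example of the LangGraph fix: compile the graph with a checkpointer and reuse a thread_id across sessions. (StateGraph is introduced in section 5; the node below is a stand-in for a real LLM call, and MemorySaver is LangGraph's in-memory checkpointer.)
import operator
from typing import TypedDict, Annotated
from langchain_core.messages import AIMessage, BaseMessage, HumanMessage
from langgraph.graph import StateGraph, END
from langgraph.checkpoint.memory import MemorySaver

class ChatState(TypedDict):
    messages: Annotated[list[BaseMessage], operator.add]

def agent_node(state: ChatState):
    # Stand-in for a real LLM call: just reports how much history it can see
    return {"messages": [AIMessage(content=f"I can see {len(state['messages'])} message(s)")]}

workflow = StateGraph(ChatState)
workflow.add_node("agent", agent_node)
workflow.set_entry_point("agent")
workflow.add_edge("agent", END)

app = workflow.compile(checkpointer=MemorySaver())
config = {"configurable": {"thread_id": "user-42"}}  # session id you choose

# Session 1
app.invoke({"messages": [HumanMessage(content="Start booking to Paris")]}, config)

# Session 2: same thread_id, so the earlier messages are restored
result = app.invoke({"messages": [HumanMessage(content="January 15th")]}, config)
print(result["messages"][-1].content)  # "I can see 3 message(s)": context kept ✅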
❌ Limitation 3: No Parallel Execution
What you can't do: Check weather AND check prices simultaneously.
Why it matters: Wasting time doing things sequentially when they could run in parallel.
AgentExecutor (Sequential):
Check Weather (2s) → Check Prices (2s) → Check Hotels (2s) = 6s total

LangGraph (Parallel):
Check Weather (2s) ┐
Check Prices (2s)  ├→ All done in 2s!
Check Hotels (2s)  ┘
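A sketch of that fan-out in LangGraph (StateGraph is introduced in the next section; node names and mock values are illustrative): adding multiple edges from START makes both nodes run in the same step.
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class TripState(TypedDict):
    weather: str
    prices: str

def check_weather(state: TripState):
    return {"weather": "Sunny, 25°C"}

def check_prices(state: TripState):
    return {"prices": "₹8,500 round trip"}

workflow = StateGraph(TripState)
workflow.add_node("check_weather", check_weather)
workflow.add_node("check_prices", check_prices)
workflow.add_edge(START, "check_weather")  # both edges fire together,
workflow.add_edge(START, "check_prices")   # so the nodes run in parallel
workflow.add_edge("check_weather", END)
workflow.add_edge("check_prices", END)

print(workflow.compile().invoke({}))  # both keys filled after one parallel step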
When to Use What?
✅ Use AgentExecutor When:
- Simple Q&A with tools
- Single-session interactions
- Linear workflows (no branching logic)
- Prototyping / POCs
Example: "What's the weather and book me a flight" ✅
🚀 Use LangGraph When:
- Complex workflows with conditions
- Multi-step processes requiring state
- Human-in-the-loop approvals
- Parallel tool execution
- Production systems
Example: "Start booking, let me approve before payment, resume later" ✅
The Transition
We built with AgentExecutor to understand the fundamentals. Now we level up to LangGraph for production power.
5️⃣ LangGraph Core Concepts
Three Building Blocks
LangGraph = State + Nodes + Edges. Master these three, and you can build any workflow.
Concept 1: State
A dictionary that flows through your graph. Every node can read and update it.
from typing import TypedDict
class TravelState(TypedDict):
    city: str            # Where the user wants to go
    weather: str         # Weather info
    budget: float        # User's budget
    budget_ok: bool      # Did the budget check pass?
    booking: dict        # Booking details
    user_approved: bool  # Did user approve?
    error: str           # Error message, if any
Concept 2: Nodes
Each node receives state, does something, and returns state updates.
def check_weather_node(state: TravelState):
"""Node that checks weather"""
city = state["city"]
weather = get_weather(city) # Call your tool
return {"weather": weather} # Update state
def check_budget_node(state: TravelState):
"""Node that validates budget"""
if state["budget"] < 10000:
return {"error": "Budget too low"}
return {"budget_ok": True}
def book_flight_node(state: TravelState):
"""Node that books the flight"""
booking = book_flight(state["city"])
return {"booking": booking}
Concept 3: Edges
Edges connect nodes. They can be normal (always go to B) or conditional (go to B or C based on state).
Normal Edges
workflow.add_edge("check_weather", "book_flight")
# Always: check_weather → book_flight
Conditional Edges
def should_book(state: TravelState) -> str:
"""Decide where to go next"""
if state["weather"] == "sunny":
return "book_flight"
else:
return "suggest_alternative"
workflow.add_conditional_edges(
"check_weather", # From this node
should_book, # Use this function to decide
{
"book_flight": "book_flight", # If returns "book_flight"
"suggest_alternative": "suggest_alternative" # If returns "suggest_alternative"
}
)
Graph Visualization:
START
↓
check_weather
↙ ↘
sunny rainy
↓ ↓
book_flight suggest
↓ ↓
END END
Putting It Together: Simple Graph
from langgraph.graph import StateGraph, END
from typing import TypedDict
# 1. Define State
class SimpleState(TypedDict):
    city: str
    weather: str
    booking: str
# 2. Define Nodes
def check_weather_node(state):
return {"weather": "sunny"}
def book_node(state):
return {"booking": "confirmed"}
# 3. Build Graph
workflow = StateGraph(SimpleState)
workflow.add_node("check_weather", check_weather_node)
workflow.add_node("book", book_node)
# 4. Add Edges
workflow.set_entry_point("check_weather")
workflow.add_edge("check_weather", "book")
workflow.add_edge("book", END)
# 5. Compile & Run
app = workflow.compile()
result = app.invoke({"city": "Bangalore"})
print(result)  # {'city': 'Bangalore', 'weather': 'sunny', 'booking': 'confirmed'}
🎯 Task
This agent has 3 bugs. Find and fix them!
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain.tools import tool
from langchain_core.prompts import ChatPromptTemplate
@tool
def calculate(expression: str) -> float:
"""Evaluate a mathematical expression"""
return eval(expression)
# BUG 1: Wrong model name
llm = ChatOpenAI(model="gpt-5", temperature=0)
tools = [calculate]
# BUG 2: Missing placeholder
prompt = ChatPromptTemplate.from_messages([
("system", "You are a math assistant"),
("human", "{input}")
])
agent = create_tool_calling_agent(llm, tools, prompt)
# BUG 3: Typo in parameter
executor = AgentExecutor(
agent=agent,
tools=tools,
verbos=True # <- typo
)
result = executor.invoke({"input": "What is 25 * 4?"})
print(result["output"])
✅ Success Criteria
- Code runs without errors
- Agent successfully calculates 25 * 4 = 100
- Verbose output shows the tool being called
💡 Hints:
- Bug 1: GPT-5 doesn't exist yet! What's the correct model name?
- Bug 2: The prompt is missing the placeholder for tool results. What goes in the third message?
- Bug 3: There's a typo in one of the parameters. Read carefully!
✅ Solution with Fixes:
# FIX 1: Use correct model name
llm = ChatOpenAI(model="gpt-4", temperature=0) # or "gpt-3.5-turbo"
# FIX 2: Add the agent_scratchpad placeholder
prompt = ChatPromptTemplate.from_messages([
("system", "You are a math assistant"),
("human", "{input}"),
("placeholder", "{agent_scratchpad}") # Add this!
])
# FIX 3: Fix typo
executor = AgentExecutor(
agent=agent,
tools=tools,
verbose=True # Was "verbos"
)
# Now it works!
result = executor.invoke({"input": "What is 25 * 4?"})
print(result["output"]) # Should output: 100
What Each Bug Teaches:
- Bug 1: Always check model names against OpenAI's available models
- Bug 2: The placeholder is required for tool results to flow back to the LLM
- Bug 3: Python parameters are case-sensitive and exact
6️⃣ Build Support Agent Graph
Real-World Example
We're building a customer support agent that routes based on user type and issue complexity.
The Workflow
START
↓
Handle Request
↙ ↘
Simple Complex
↓ ↓
Respond Escalate
↓ ↓
END END
Step 1: Define State
from typing import TypedDict, Annotated
from langchain_core.messages import BaseMessage
import operator
class SupportState(TypedDict):
messages: Annotated[list[BaseMessage], operator.add]
should_escalate: bool
issue_type: str
Step 2: Define Tools
from langchain.tools import tool
@tool
def check_order_status(order_id: str) -> dict:
"""Check the status of an order."""
# Mock implementation
return {
"order_id": order_id,
"status": "shipped",
"eta": "2024-01-20"
}
@tool
def create_ticket(issue: str, priority: str) -> dict:
"""Create a support ticket for human review."""
return {
"ticket_id": "TKT12345",
"issue": issue,
"priority": priority
}
Step 3: Define Nodes
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage
tools = [check_order_status, create_ticket]
llm = ChatOpenAI(model="gpt-4", temperature=0)
llm_with_tools = llm.bind_tools(tools)
def agent_node(state: SupportState):
"""Agent decides what to do"""
messages = state["messages"]
response = llm_with_tools.invoke(messages)
return {"messages": [response]}
def should_continue(state: SupportState) -> str:
"""Decide if we should continue or end"""
messages = state["messages"]
last_message = messages[-1]
# If no tool calls, we're done
if not last_message.tool_calls:
return "end"
return "continue"
Step 4: Build the Graph
from langgraph.graph import StateGraph, END
from langgraph.prebuilt import ToolNode
# Create graph
workflow = StateGraph(SupportState)
# Add nodes
workflow.add_node("agent", agent_node)
workflow.add_node("tools", ToolNode(tools))
# Set entry point
workflow.set_entry_point("agent")
# Add conditional edges
workflow.add_conditional_edges(
"agent",
should_continue,
{
"continue": "tools",
"end": END
}
)
# Always return to agent after tools
workflow.add_edge("tools", "agent")
# Compile
app = workflow.compile()
Step 5: Run It!
from langchain_core.messages import HumanMessage
# Test the graph
result = app.invoke({
"messages": [HumanMessage(content="Check order ORD123 status")],
"should_escalate": False,
"issue_type": ""
})
# Print conversation
print("\n" + "="*50)
print("CONVERSATION:")
print("="*50)
for msg in result["messages"]:
if hasattr(msg, 'content'):
print(f"{msg.type}: {msg.content}")
print("="*50)
🎉 What You Just Built:
- ✅ Stateful graph with persistent context
- ✅ Conditional routing based on tool calls
- ✅ Tool execution managed by LangGraph
- ✅ Automatic loop until completion
Complete Code (Copy & Run)
from langgraph.graph import StateGraph, END
from langgraph.prebuilt import ToolNode
from typing import TypedDict, Annotated, Literal
from langchain_core.messages import BaseMessage, HumanMessage
from langchain_openai import ChatOpenAI
from langchain.tools import tool
import operator
# Tools
@tool
def check_order_status(order_id: str) -> dict:
"""Check the status of an order."""
return {"order_id": order_id, "status": "shipped", "eta": "2024-01-20"}
@tool
def create_ticket(issue: str, priority: str) -> dict:
"""Create a support ticket."""
return {"ticket_id": "TKT12345", "issue": issue, "priority": priority}
# State
class SupportState(TypedDict):
messages: Annotated[list[BaseMessage], operator.add]
should_escalate: bool
issue_type: str
# Setup
tools = [check_order_status, create_ticket]
llm = ChatOpenAI(model="gpt-4", temperature=0)
llm_with_tools = llm.bind_tools(tools)
# Nodes
def agent_node(state: SupportState):
messages = state["messages"]
response = llm_with_tools.invoke(messages)
return {"messages": [response]}
def should_continue(state: SupportState) -> Literal["continue", "end"]:
messages = state["messages"]
last_message = messages[-1]
if not last_message.tool_calls:
return "end"
return "continue"
# Build graph
workflow = StateGraph(SupportState)
workflow.add_node("agent", agent_node)
workflow.add_node("tools", ToolNode(tools))
workflow.set_entry_point("agent")
workflow.add_conditional_edges("agent", should_continue, {"continue": "tools", "end": END})
workflow.add_edge("tools", "agent")
# Compile
app = workflow.compile()
# Run
result = app.invoke({
"messages": [HumanMessage(content="Check order ORD123 status")],
"should_escalate": False,
"issue_type": ""
})
print("\n" + "="*50)
for msg in result["messages"]:
if hasattr(msg, 'content'):
print(f"{msg.type}: {msg.content}")
🎯 Task
Extend the support agent with user tier-based routing:
Current:
Start → Handle Request → [Escalate OR Respond]
Your Task:
Start → Check User Tier → [VIP Path OR Standard Path]
VIP Path: Auto-resolve (no escalation)
Standard Path: May escalate if complex
What You Need to Add:
- A new node: check_user_tier_node
- A new state field to track: user_tier
- A routing function: route_by_tier
- A conditional edge from the check_tier node
Starter Code:
# 1. Update State
class SupportState(TypedDict):
messages: Annotated[list[BaseMessage], operator.add]
should_escalate: bool
issue_type: str
user_tier: str # ADD THIS
# 2. Create check_tier node
def check_user_tier_node(state: SupportState):
# YOUR CODE HERE
# Mock: return {"user_tier": "vip"} or {"user_tier": "standard"}
pass
# 3. Create routing function
def route_by_tier(state: SupportState) -> str:
# YOUR CODE HERE
# Return "vip_path" or "standard_path" based on state["user_tier"]
pass
# 4. Add to graph
workflow.add_node("check_tier", check_user_tier_node)
workflow.set_entry_point("check_tier") # Start here now
workflow.add_conditional_edges(
"check_tier",
route_by_tier,
{
# YOUR ROUTING DICT HERE
# "vip_path": "???",
# "standard_path": "???"
}
)
✅ Success Criteria
- Graph has the new check_tier node
- Conditional routing works (VIP vs Standard)
- Can trace different paths in output
- VIP users skip escalation, Standard users may escalate
💡 Hints:
- The check_tier node should look at user info (for now, just return mock data)
- The routing function just checks state["user_tier"] and returns a string
- VIP path should go directly to the agent (skip escalation logic)
- Standard path should go through normal flow (may escalate)
- Test both paths: Try with tier="vip" and tier="standard"
✅ Example Solution:
# 1. Updated State
class SupportState(TypedDict):
messages: Annotated[list[BaseMessage], operator.add]
should_escalate: bool
issue_type: str
user_tier: str
# 2. Check tier node
def check_user_tier_node(state: SupportState):
"""Check user tier (mock implementation)"""
# In production, look up user in database
# For now, mock based on message content
messages = state["messages"]
first_message = messages[0].content.lower()
if "vip" in first_message or "premium" in first_message:
return {"user_tier": "vip"}
else:
return {"user_tier": "standard"}
# 3. Routing function
def route_by_tier(state: SupportState) -> str:
"""Route based on user tier"""
if state.get("user_tier") == "vip":
return "vip_path"
return "standard_path"
# 4. VIP-specific node (auto-resolves)
def vip_agent_node(state: SupportState):
"""VIP agent - no escalation"""
messages = state["messages"]
# Could use different prompt for VIP
response = llm_with_tools.invoke(messages)
return {"messages": [response], "should_escalate": False}
# 5. Build updated graph
workflow = StateGraph(SupportState)
# Add all nodes
workflow.add_node("check_tier", check_user_tier_node)
workflow.add_node("vip_agent", vip_agent_node)
workflow.add_node("standard_agent", agent_node)
workflow.add_node("tools", ToolNode(tools))
# Set entry point
workflow.set_entry_point("check_tier")
# Route by tier
workflow.add_conditional_edges(
"check_tier",
route_by_tier,
{
"vip_path": "vip_agent",
"standard_path": "standard_agent"
}
)
# Both agents call tools only when they request them, otherwise finish
workflow.add_conditional_edges("vip_agent", should_continue, {"continue": "tools", "end": END})
workflow.add_conditional_edges("standard_agent", should_continue, {"continue": "tools", "end": END})
# Tools return to the agent matching the user's tier
workflow.add_conditional_edges("tools", route_by_tier, {"vip_path": "vip_agent", "standard_path": "standard_agent"})
# Compile
app = workflow.compile()
# Test VIP
print("Testing VIP user:")
result = app.invoke({
"messages": [HumanMessage(content="I'm a VIP customer, check my order")],
"should_escalate": False,
"issue_type": "",
"user_tier": ""
})
print(result["messages"][-1].content)
# Test Standard
print("\nTesting Standard user:")
result = app.invoke({
"messages": [HumanMessage(content="Check my order please")],
"should_escalate": False,
"issue_type": "",
"user_tier": ""
})
print(result["messages"][-1].content)
📚 Resources & Next Steps
Congratulations! 🎉
You've completed Day 2 and now understand both LangChain and LangGraph fundamentals.
What You Learned Today
- ✅ Creating tools with the @tool decorator
- ✅ Initializing LLMs with LangChain
- ✅ Building agents with AgentExecutor
- ✅ Understanding agent architecture and loops
- ✅ Identifying AgentExecutor limitations
- ✅ Building stateful graphs with LangGraph
- ✅ Implementing conditional routing
- ✅ Creating production-ready workflows
Official Documentation
- LangChain Docs: python.langchain.com/docs
- LangGraph Docs: langchain-ai.github.io/langgraph
- OpenAI API: platform.openai.com/docs
Practice Exercises (Homework)
Exercise 1: Extend Your Tools
Add 2-3 more tools to your travel agent:
- Check visa requirements
- Get restaurant recommendations
- Book car rental
Exercise 2: Build a New Graph
Create an e-commerce assistant with this flow:
Start → Check Inventory
↙ ↘
In Stock Out of Stock
↓ ↓
Add to Cart Suggest Alternative
↓ ↓
Checkout END
↓
END
Exercise 3: Add Human-in-the-Loop
Modify the support graph to pause before escalation and wait for human approval.
Hint: Look up LangGraph's interrupt_before feature
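A rough sketch of the shape (assuming you add a node named "escalate" to the section 6 graph; interrupts require a checkpointer):
from langgraph.checkpoint.memory import MemorySaver

app = workflow.compile(
    checkpointer=MemorySaver(),
    interrupt_before=["escalate"]  # pause before this node runs
)
# Resume after human approval by invoking again with the same thread_id:
# app.invoke(None, config={"configurable": {"thread_id": "..."}})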
Week 2 Preview
What's Next?
- 🤖 Multi-agent systems (agents working together)
- 📚 RAG (Retrieval Augmented Generation)
- 💾 Persistent memory across sessions
- 🔄 Human-in-the-loop workflows
- ⚡ Parallel execution patterns
- 🚀 Production deployment strategies
- 📊 Monitoring and debugging
Questions?
Stuck on something? Here's how to get help:
- 💬 Ask in the course Slack/Discord
- 📧 Email your instructor
- 🔍 Search LangChain docs
- 💡 Review this artifact anytime
🎯 You're Ready for Production Agent Development!
Keep this resource bookmarked and see you in Week 2! 🚀