Overview
Before you build sophisticated agents and retrieval-augmented generation (RAG) systems, you need a solid Python foundation, a working development environment, and at least one LLM provider configured.
This phase ensures you can:
- Write and run basic Python scripts (sync and async).
- Create isolated project environments with `venv` and `pip`.
- Install and configure LangChain, LangGraph, and dependencies.
- Call an LLM (e.g. OpenAI) from Python.
In this chapter
1. Core Python Skills
LangChain and LangGraph are Python libraries. You don’t need to be a “Python expert”, but you should be comfortable with:
- Functions and classes
- Type hints
- Basic async/await
- Project layout and virtual environments
1.1 Functions and Type Hints
Functions let you encapsulate logic. Type hints make your code easier to understand
and catch mistakes early with tools like mypy or your IDE.
```python
from typing import List


def summarize_points(points: List[str]) -> str:
    """Return a short summary of a list of bullet points."""
    joined = "; ".join(points)
    return f"In summary, we covered: {joined}."


if __name__ == "__main__":
    topics = ["LangChain basics", "LangGraph workflows", "RAG"]
    print(summarize_points(topics))
```
1.2 Classes
Many LangChain components are classes (models, prompts, chains). Understanding how classes work helps you customize and wrap them.
```python
class SimpleCounter:
    def __init__(self) -> None:
        self.value = 0

    def increment(self, step: int = 1) -> None:
        self.value += step

    def reset(self) -> None:
        self.value = 0


if __name__ == "__main__":
    counter = SimpleCounter()
    counter.increment()
    counter.increment(5)
    print("Counter value:", counter.value)
```
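Because customizing a LangChain component usually means subclassing or wrapping it, inheritance is the one class feature worth practicing now. A minimal sketch (the `LoggingCounter` name is just for illustration) extends the counter above to record every increment:

```python
class SimpleCounter:
    def __init__(self) -> None:
        self.value = 0

    def increment(self, step: int = 1) -> None:
        self.value += step


class LoggingCounter(SimpleCounter):
    """Subclass that records every increment step, illustrating the
    override-and-call-super pattern used to customize components."""

    def __init__(self) -> None:
        super().__init__()
        self.history: list[int] = []

    def increment(self, step: int = 1) -> None:
        # Reuse the parent behavior, then add our own bookkeeping.
        super().increment(step)
        self.history.append(step)


counter = LoggingCounter()
counter.increment()
counter.increment(5)
print(counter.value, counter.history)  # 6 [1, 5]
```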
1.3 Virtual Environments & Project Structure
A virtual environment isolates your project’s dependencies so they don’t conflict with other Python projects.
```bash
# 1. Create a folder for this course
mkdir langchain-langgraph-study
cd langchain-langgraph-study

# 2. Create a virtual environment (Linux/macOS)
python -m venv .venv
source .venv/bin/activate

# On Windows PowerShell:
# python -m venv .venv
# .venv\Scripts\Activate.ps1

# 3. Upgrade pip
python -m pip install --upgrade pip
```
A simple layout for this ebook could be:
```text
langchain-langgraph-study/
├─ .venv/
├─ src/
│  ├─ phase0/
│  ├─ phase1/
│  └─ ...
├─ Readme.md
└─ requirements.txt
```
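A starting point for `requirements.txt` is simply the package names you install in this phase (the unpinned list below; you can pin exact versions later, e.g. with `pip freeze`, for reproducible installs):

```text
langchain
langchain-core
langchain-community
langchain-openai
langgraph
python-dotenv
faiss-cpu
chromadb
```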
1.4 Basic AsyncIO
LangChain and LangGraph both support async execution. Knowing how to use `async def` and `await` lets you:
- Call models in parallel.
- Build responsive APIs.
```python
import asyncio


async def fetch_answer(question: str) -> str:
    # In real code, you'll call an LLM here.
    await asyncio.sleep(0.5)
    return f"Mock answer to: {question}"


async def main() -> None:
    questions = ["What is LangChain?", "What is LangGraph?"]
    tasks = [fetch_answer(q) for q in questions]
    answers = await asyncio.gather(*tasks)
    for q, a in zip(questions, answers):
        print(q, "->", a)


if __name__ == "__main__":
    asyncio.run(main())
```
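To see that `asyncio.gather` really overlaps the waits, a small timing check (pure standard library, `mock_call` is a stand-in for an LLM request) shows two half-second "calls" finishing in roughly half a second rather than a full second:

```python
import asyncio
import time


async def mock_call(delay: float) -> str:
    # Stand-in for an LLM request that takes `delay` seconds.
    await asyncio.sleep(delay)
    return "done"


async def timed_gather() -> float:
    start = time.perf_counter()
    # Both sleeps run concurrently, so total wall time is ~0.5s, not ~1.0s.
    await asyncio.gather(mock_call(0.5), mock_call(0.5))
    return time.perf_counter() - start


elapsed = asyncio.run(timed_gather())
print(f"Two 0.5s calls took {elapsed:.2f}s when run concurrently")
```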
2. LLM Basics
Large Language Models (LLMs) are the core engine behind LangChain applications. LangChain provides a unified interface to different providers (OpenAI, Anthropic, local models such as Ollama, and more).
2.1 Key Concepts
- Prompt – the text you send to the model.
- Context window – maximum tokens (input + output) the model can process.
- Tokens – chunks of text; pricing and limits are per token.
- Temperature – randomness; low = deterministic, high = creative.
- Chat vs. completion models – structured messages vs. raw text.
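Since pricing and context limits are counted in tokens, it helps to have a feel for how text maps to them. A hypothetical `estimate_tokens` helper below uses the common rule of thumb of roughly four characters per token for English; real providers use an exact tokenizer (e.g. tiktoken for OpenAI), so treat this only as a ballpark:

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: English text averages ~4 characters per token.
    # Exact counts require the provider's tokenizer.
    return max(1, len(text) // 4)


prompt = "Explain LangChain in one paragraph."
print(estimate_tokens(prompt), "tokens (approx.)")
```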
2.2 Chat vs. Text Completion
Modern APIs (like OpenAI’s `gpt-4o`) use a chat-style interface, where you send a list of messages with roles such as `system`, `user`, and `assistant`.
```python
# Example using OpenAI's chat completions API directly (without LangChain)
import os

from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain LangChain in one paragraph."},
    ],
    temperature=0.2,
)
print(response.choices[0].message.content)
```
LangChain wraps this in a higher-level interface that integrates with prompts, tools, and document workflows. You will see that starting in Phase 1.
3. Environment Setup
Next, you’ll set up a project that can run both LangChain and LangGraph.
3.1 Install Dependencies
```bash
# Inside your activated virtual environment
pip install \
  langchain langchain-core langchain-community \
  langchain-openai langgraph python-dotenv \
  faiss-cpu chromadb
```
This installs:
- `langchain` and `langchain-core` – core abstractions and LCEL.
- `langchain-openai` – OpenAI chat and embedding wrappers.
- `langchain-community` – community integrations (vector stores, loaders).
- `langgraph` – orchestration framework for stateful workflows.
- `python-dotenv` – load API keys from `.env`.
- `faiss-cpu` / `chromadb` – common vector stores for RAG.
3.2 Store Secrets in .env
Never hard-code API keys directly in your source code. Instead, store them in a `.env` file and load them at runtime.
```bash
# .env (do not commit to Git)
OPENAI_API_KEY=sk-your-openai-key-here
ANTHROPIC_API_KEY=your-anthropic-key-optional
```
```python
# src/phase0/load_env_example.py
import os

from dotenv import load_dotenv


def load_env() -> None:
    load_dotenv()
    api_key = os.getenv("OPENAI_API_KEY")
    if not api_key:
        raise RuntimeError("OPENAI_API_KEY is not set in your .env file")
    print("API key loaded successfully (hidden).")


if __name__ == "__main__":
    load_env()
```
3.3 Verifying the Installation
Create a quick script to ensure LangChain and LangGraph import correctly.
```python
# src/phase0/verify_install.py
from typing import TypedDict

from langchain_core.runnables import RunnableLambda
from langgraph.graph import StateGraph


class State(TypedDict, total=False):
    # LangGraph reads these annotations to define the state's channels,
    # so a plain dict subclass is not enough here.
    name: str
    greeting: str


def say_hello(name: str) -> str:
    return f"Hello, {name}!"


def greet(state: State) -> State:
    name = state.get("name", "world")
    return {"greeting": f"Hello from LangGraph, {name}!"}


def main() -> None:
    # Test LangChain LCEL
    chain = RunnableLambda(say_hello)
    print(chain.invoke("LangChain"))

    # Test a basic LangGraph graph
    graph = StateGraph(State)
    graph.add_node("greet", greet)
    graph.set_entry_point("greet")
    graph.set_finish_point("greet")
    app = graph.compile()
    result = app.invoke({"name": "LangGraph"})
    print(result["greeting"])


if __name__ == "__main__":
    main()
```
4. Reading Docs Strategically
LangChain and LangGraph have extensive documentation. At this point, you don’t need to read everything in depth. Instead:
- Skim the main sections (models, prompts, chains, agents, document loaders).
- Skim LangGraph’s concepts (graphs, nodes, state, edges).
- Note where examples live so you can refer back later.
- Briefly look at observability tools like LangSmith (or similar tracing systems) so you know that detailed traces and evaluations are available once you reach the production and evaluation phases.
5. Mini Task – First LangChain Script
To finish Phase 0, you’ll write a small script that:
- Loads your API key from `.env`.
- Asks the user for a question.
- Calls a chat model using LangChain.
- Prints the answer.
5.1 Basic Chat Script with LangChain
```python
# src/phase0/first_langchain_chat.py
import os

from dotenv import load_dotenv
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI


def main() -> None:
    load_dotenv()
    api_key = os.getenv("OPENAI_API_KEY")
    if not api_key:
        raise RuntimeError("Set OPENAI_API_KEY in your .env file first.")

    # Configure a chat model (you can change the model name)
    model = ChatOpenAI(
        model="gpt-4o-mini",
        temperature=0.2,
    )

    print("Ask a question about LangChain or LangGraph:")
    question = input("> ").strip()
    if not question:
        print("No question provided, exiting.")
        return

    messages = [
        SystemMessage(content="You are a helpful assistant for LangChain and LangGraph."),
        HumanMessage(content=question),
    ]
    response = model.invoke(messages)

    print("\n--- Answer ---")
    print(response.content)


if __name__ == "__main__":
    main()
```
5.2 Running the Script
```bash
# From your project root
export OPENAI_API_KEY="sk-..."  # or configure it in .env and just run
python src/phase0/first_langchain_chat.py
```
If everything is wired correctly, you’ll see a response from the model. You’ve now completed the minimum prerequisites to start building with LangChain and LangGraph.