
LangChain Quick Start

A quick-start guide to LangChain, the LLM application development framework: from installation to building your first AI agent.

Tags: llm, ai-agents, rag, openai, anthropic, python

What is LangChain?

LangChain is an open-source Python framework for building LLM-powered applications. It provides:

  • Multi-Provider Abstraction — Unified interface across OpenAI, Anthropic, Google and more, no vendor lock-in
  • Composable Chains — Pipe operator (|) to combine prompts, models, parsers into pipelines
  • Agent Architecture — Pre-built agent system with tool calling, under 10 lines of code
  • Built on LangGraph — Durable execution, persistence, human-in-the-loop support
  • Developer Ecosystem — LangSmith for tracing, debugging and observability

Installation

# Basic installation
pip install langchain

# With OpenAI support
pip install langchain langchain-openai

# With Anthropic Claude support
pip install langchain langchain-anthropic

Requirements: Python 3.10+, an API key from your chosen LLM provider.

Set up your API key as an environment variable:

# OpenAI
export OPENAI_API_KEY="sk-..."

# or Anthropic
export ANTHROPIC_API_KEY="sk-ant-..."
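It can help to fail fast when the key is missing rather than hit a confusing auth error mid-run. A minimal stdlib-only check (the helper name is ours; pass whichever variable matches your provider):

```python
import os
import sys

def require_api_key(name: str) -> str:
    """Exit with a clear message if the given API key variable is not set."""
    key = os.environ.get(name)
    if not key:
        sys.exit(f"Missing environment variable {name}; export it before running.")
    return key

# e.g. require_api_key("ANTHROPIC_API_KEY")
```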

Your First Chain

The simplest LangChain pattern — a prompt template piped into a model:

from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate

# Create components
model = ChatAnthropic(model="claude-sonnet-4-20250514")

prompt = ChatPromptTemplate.from_template(
    "Translate this text to {language}: {text}"
)

# Combine into a chain using pipe operator
chain = prompt | model

# Run the chain
result = chain.invoke({
    "language": "Spanish",
    "text": "Hello, how are you?"
})

print(result.content)
# → "Hola, ¿cómo estás?"

Key takeaway: The | operator is LangChain’s core pattern for composing components.
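The pipe pattern itself is just operator overloading. This toy sketch (not LangChain's actual classes) shows the idea behind `prompt | model`: each component transforms its input and passes the result along.

```python
class Step:
    """Toy runnable: wraps a function and supports | composition."""
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # (a | b) produces a Step that runs a, then feeds the result to b
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

prompt = Step(lambda d: f"Translate to {d['language']}: {d['text']}")
shout = Step(str.upper)  # stands in for a model call

chain = prompt | shout
print(chain.invoke({"language": "Spanish", "text": "hello"}))
# → TRANSLATE TO SPANISH: HELLO
```

In real LangChain, every component implements a shared Runnable interface, which is why prompts, models, and parsers can all be piped interchangeably.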

Building a Simple Chatbot

from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage, SystemMessage

model = ChatAnthropic(model="claude-sonnet-4-20250514")

messages = [
    SystemMessage(content="You are a helpful coding assistant."),
    HumanMessage(content="How do I read a JSON file in Python?")
]

response = model.invoke(messages)
print(response.content)

To maintain conversation history, append messages to the list:

from langchain_core.messages import AIMessage

# Append the AI response
messages.append(AIMessage(content=response.content))

# Add a follow-up question
messages.append(HumanMessage(content="Can you show me error handling?"))

# The model now has the full conversation context
response = model.invoke(messages)
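The append-and-reinvoke pattern generalizes to a loop. A plain-Python sketch of the bookkeeping, with the model call stubbed out (swap the stub for `model.invoke(messages).content` in real use):

```python
def chat_turn(messages, user_text, call_model):
    """Append the user message, call the model, and record its reply."""
    messages.append({"role": "user", "content": user_text})
    reply = call_model(messages)  # e.g. lambda msgs: model.invoke(msgs).content
    messages.append({"role": "assistant", "content": reply})
    return reply

history = [{"role": "system", "content": "You are a helpful coding assistant."}]
fake_model = lambda msgs: f"(reply to {len(msgs)} messages)"

chat_turn(history, "How do I read a JSON file?", fake_model)
chat_turn(history, "Can you show me error handling?", fake_model)
# history now holds the system message plus two user/assistant pairs
```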

Creating an Agent with Tools

Agents can call functions (tools) to interact with external systems:

from langchain.agents import create_agent
from langchain_anthropic import ChatAnthropic
from langchain_core.tools import tool

@tool
def get_weather(location: str) -> str:
    """Get the current weather for a location."""
    # In production, call a real weather API
    return f"The weather in {location} is 22°C and sunny."

@tool
def calculate(expression: str) -> str:
    """Evaluate a math expression."""
    # Warning: eval() runs arbitrary code — restrict or sandbox this in production
    return str(eval(expression))

model = ChatAnthropic(model="claude-sonnet-4-20250514")

agent = create_agent(
    tools=[get_weather, calculate],
    model=model,
    system_prompt="You are a helpful assistant with access to weather data and a calculator."
)

response = agent.invoke({
    "messages": [
        {"role": "user", "content": "What's the weather in Tokyo? Also what's 42 * 17?"}
    ]
})

print(response["messages"][-1].content)

Key takeaway: The @tool decorator turns any function into an agent-callable tool. Write clear docstrings — the agent uses them to decide when to call each tool.
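A note on the `calculate` tool: `eval` will execute any Python the model sends it, not just arithmetic. A restricted evaluator, sketched here with the stdlib `ast` module, accepts only numeric operators and rejects everything else:

```python
import ast
import operator

# Whitelist of permitted operators
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_calculate(expression: str) -> str:
    """Evaluate +, -, *, /, ** on numbers; reject anything else."""
    def ev(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](ev(node.operand))
        raise ValueError(f"Unsupported expression: {expression!r}")
    return str(ev(ast.parse(expression, mode="eval").body))

print(safe_calculate("42 * 17"))  # → 714
```

Decorate `safe_calculate` with `@tool` the same way as above; function calls, attribute access, and names all raise `ValueError` instead of executing.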

Common Patterns

Streaming Responses

for chunk in chain.stream({"language": "French", "text": "Good morning"}):
    print(chunk.content, end="", flush=True)

Batch Processing

inputs = [
    {"language": "Spanish", "text": "Hello"},
    {"language": "French", "text": "Hello"},
    {"language": "Japanese", "text": "Hello"},
]

results = chain.batch(inputs)

Output Parsing

from langchain_core.output_parsers import JsonOutputParser

# The prompt should instruct the model to respond with JSON
chain = prompt | model | JsonOutputParser()
result = chain.invoke({"language": "Spanish", "text": "Hello"})  # Returns a parsed dict

Best Practices

  1. Start simple — Begin with prompt + model chains, add agents only when you need dynamic tool selection
  2. Store API keys in env vars — Never hardcode secrets in source code
  3. Use clear system prompts — Specific instructions produce more consistent results
  4. Keep tools focused — Each tool should do one thing well, with a clear docstring
  5. Test with cheaper models — Use Haiku/GPT-4o-mini during development, switch to production models later
  6. Enable LangSmith — Connect early for tracing and debugging
  7. Monitor token usage — Track costs, especially with long conversations

FAQ

Chains vs Agents — when to use which?

Chains for fixed, predictable workflows (translate, summarize, format). Agents when the model needs to decide dynamically what to do next.

How to handle long conversations?

Implement conversation summarization, or use a rolling window (keep only the last N messages). LangGraph Memory Store provides built-in persistence.
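A rolling window is easy to implement by hand. This sketch (plain dicts, but the same idea applies to LangChain message objects) keeps the leading system message plus the last N messages:

```python
def trim_history(messages, keep_last=6):
    """Keep the leading system message (if any) plus the last `keep_last` messages."""
    system = [m for m in messages[:1] if m.get("role") == "system"]
    rest = messages[len(system):]
    return system + rest[-keep_last:]

history = [{"role": "system", "content": "Be concise."}]
for i in range(10):
    history.append({"role": "user", "content": f"question {i}"})
    history.append({"role": "assistant", "content": f"answer {i}"})

trimmed = trim_history(history, keep_last=4)
# trimmed: the system message plus the 4 most recent messages
```

Trim before each `invoke` call; the model never sees the dropped turns, so pair this with summarization if old context still matters.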

LangChain vs LangGraph vs Deep Agents?

  • LangChain — Quick start, pre-built patterns (start here)
  • LangGraph — Fine-grained control, deterministic workflows
  • Deep Agents — Complex multi-step tasks with planning and subagents

My agent is slow, what can I do?

Reduce the number of tools, simplify tool descriptions, and consider a faster model. Streaming with stream() does not reduce total latency, but it lets users see output sooner.

Next Steps