
LangChain v1 vs LangGraph v1: Production Patterns for Agentic AI

Production patterns for create_agent, middleware, structured output, and migration

Written by Sithumini Abeysekara · Feb 24, 2026 · ~11 min read

Introduction

Part 1 focused on when to use LangChain and when to use LangGraph. In Part 2, we focus on the v1 generation and what changed for engineering teams shipping real systems. The short version: create_agent is now the default, middleware is first-class, and structured outputs are much more reliable.


What's New in LangChain v1 and LangGraph v1

1. Standardized Agent Development with create_agent

LangChain v1 introduces a unified, production-ready agent interface: create_agent. Legacy patterns such as AgentExecutor and initialize_agent are deprecated and have moved to the langchain-classic package.

from langchain.agents import create_agent
from langchain.tools import tool

@tool
def calculate(expression: str) -> str:
    """Perform a mathematical calculation."""
    import sympy  # third-party dependency: pip install sympy
    return str(sympy.sympify(expression))

agent = create_agent(
    model="groq:meta-llama/llama-4-scout-17b-16e-instruct",
    tools=[calculate],
)

result = agent.invoke({
    "messages": [{"role": "user", "content": "Calculate 15 * 23 + 42"}]
})
print(result["messages"][-1].content)

Why this matters: streaming works out of the box, and persistence/durable execution are available once you configure a checkpointer.

2. Middleware over custom hook hacks

LCEL runnables still matter for non-agent pipelines, but for agent-level pre/post behavior, middleware is the v1 recommendation.

from langchain.agents import create_agent
from langchain.agents.middleware import SummarizationMiddleware

agent = create_agent(
    model="groq:llama-3.3-70b-versatile",
    tools=[],
    middleware=[
        SummarizationMiddleware(
            model="groq:llama-3.3-70b-versatile",
            trigger=("tokens", 4000),
            keep=("messages", 20),
        )
    ],
)

3. Structured Output that holds up in production

For direct model calls, use .with_structured_output(). For agents, use response_format with a strategy (ProviderStrategy or ToolStrategy).

from pydantic import BaseModel
from langchain.agents import create_agent
from langchain.agents.structured_output import ProviderStrategy

class ContactInfo(BaseModel):
    name: str
    email: str
    phone: str

agent = create_agent(
    model="groq:meta-llama/llama-4-scout-17b-16e-instruct",
    tools=[],
    response_format=ProviderStrategy(ContactInfo),
)

result = agent.invoke({
    "messages": [{"role": "user", "content": "Return Jane's contact info"}]
})
print(result["structured_response"])

4. Message content model updates

In v1, message content can be multi-part: a single message may carry text, tool calls, and reasoning blocks together. Prefer content_blocks for a typed, provider-agnostic view, and use the .text property instead of the deprecated .text() method.
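As a plain-Python illustration of the multi-part model (a sketch of what .text does for you, not the library's implementation), content may be a bare string or a list of typed blocks, and collecting the text parts looks roughly like this:

```python
def message_text(content) -> str:
    """Collect the text parts of a v1-style message content value.

    Content may be a plain string or a list of typed blocks such as
    {"type": "text", "text": "..."} alongside tool-call blocks.
    """
    if isinstance(content, str):
        return content
    return "".join(
        block.get("text", "")
        for block in content
        if isinstance(block, dict) and block.get("type") == "text"
    )

print(message_text([
    {"type": "text", "text": "Hello, "},
    {"type": "tool_call", "name": "calculate", "args": {}, "id": "1"},
    {"type": "text", "text": "world"},
]))  # → Hello, world
```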

5. Simplified namespace and migration path

The top-level langchain package is leaner. Legacy patterns moved to langchain-classic. Teams should explicitly migrate older abstractions now instead of carrying compatibility debt.

6. LangGraph v1 for explicit orchestration

When you need custom graph topology, deterministic routing, interrupts, or graph-level checkpoint behavior, use raw StateGraph.


Decision Framework: LangChain vs LangGraph

Scenario                          LangChain (create_agent)    LangGraph (StateGraph)
Simple tool calling               Recommended                 Not needed
Custom graph topology             Limited                     Recommended
Complex state machines            Limited                     Recommended
Streaming                         Built in                    Built in
Persistence / durable execution   With checkpointer           With checkpointer

Migration Considerations

  1. Migrate from AgentExecutor/initialize_agent to create_agent.
  2. Use TypedDict-style AgentState extensions for create_agent state schemas.
  3. Move from .text() to .text and adopt content_blocks.
  4. Use schema-based structured output; prompted JSON output via response_format is removed in v1.

Production Best Practices

Observability: trace agent runs end to end (prompts, tool calls, token usage) so failures are diagnosable. LangSmith picks up LangChain traces once its environment variables are set.

Error Handling: treat tool failures as recoverable. Return error text to the model so it can retry or rephrase, and put timeouts and retry limits on external calls.

Performance: stream tokens to users, cap agent iterations, and route summarization or classification steps to smaller, cheaper models.

Security: validate tool inputs, scope tool permissions narrowly, and never pass raw model output into shells or databases without sanitization.


Code Repository

All examples for this series are available at github.com/linfieldlabs/Agentic_AI.

Conclusion

LangChain v1 and LangGraph v1 give teams a cleaner path to production: start with create_agent, then drop to raw StateGraph when workflow complexity demands explicit graph-level control. You can scale architecture without rewriting the core agent logic.
