
Agent Framework

95K+ Stars · Most Adopted · MIT License

LangChain Agents

by LangChain Inc · langchain.com

LangChain is the foundational framework for building AI agents and LLM-powered applications. With 95,000+ GitHub stars, it is the most widely adopted library in the AI agent ecosystem. Its Agents module lets you create AI systems that use tools, maintain memory, reason through complex problems, and interact with external APIs and databases.

While newer frameworks like CrewAI and Hermes Agent provide higher-level abstractions, LangChain remains the infrastructure layer that many of them are built on. Understanding LangChain is essential for anyone building AI agents — it provides the building blocks that everything else composes.

  • 95K+ GitHub Stars — most popular agent framework
  • Python + JS — dual SDK
  • 700+ Integrations — tools & models
  • MIT License — fully open-source

Quick Install

pip install langchain langchain-openai langchain-community

Key Features

ReAct Agents

Agents that reason about which tool to use, execute it, observe the result, and decide the next step. The core pattern for building intelligent, autonomous AI systems.

Tool Integration

700+ pre-built tool integrations for search, databases, APIs, file systems, and more. Create custom tools as simple Python functions with type annotations.
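LangChain's actual decorator lives in `langchain_core.tools`; the framework-agnostic sketch below shows the underlying pattern it relies on — a decorator that registers a plain, type-annotated Python function in a registry the agent can call by name. The registry and function names here are illustrative, not LangChain's API.

```python
from typing import Callable, Dict

# Hypothetical tool registry; LangChain keeps an equivalent mapping internally.
TOOLS: Dict[str, Callable] = {}

def tool(fn: Callable) -> Callable:
    """Register a plain typed function as an agent tool, keyed by its name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def word_count(text: str) -> int:
    """Count whitespace-separated words in a string."""
    return len(text.split())

# The agent loop can now look tools up by name and call them generically.
result = TOOLS["word_count"]("LangChain agents use tools")
print(result)  # → 4
```

The type annotations and docstring matter: frameworks use them to describe the tool to the model so it knows when and how to call it.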

Memory Systems

Short-term (conversation buffer), long-term (vector store), and entity memory. Agents remember context across interactions and build knowledge over time.
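The two short-term strategies can be sketched in plain Python, assuming nothing beyond the standard library. The class names echo LangChain's `ConversationBufferMemory` and `ConversationSummaryMemory` but are simplified stand-ins: the summary step here just counts older turns, where a real implementation would ask the model to summarize them.

```python
class ConversationBuffer:
    """Short-term memory: keep the full turn-by-turn transcript."""
    def __init__(self):
        self.turns = []

    def add(self, role: str, text: str) -> None:
        self.turns.append((role, text))

    def context(self) -> str:
        return "\n".join(f"{r}: {t}" for r, t in self.turns)

class SummaryMemory(ConversationBuffer):
    """Long sessions: keep the last k turns verbatim, compress the rest."""
    def __init__(self, keep_last: int = 2):
        super().__init__()
        self.keep_last = keep_last

    def context(self) -> str:
        older = len(self.turns) - self.keep_last
        recent = self.turns[-self.keep_last:]
        # A real summary memory would LLM-summarize the older turns here.
        head = f"[{older} earlier turns summarized]\n" if older > 0 else ""
        return head + "\n".join(f"{r}: {t}" for r, t in recent)

mem = SummaryMemory(keep_last=2)
for i in range(4):
    mem.add("user", f"message {i}")
print(mem.context())
```

The buffer keeps everything; the summary variant trades fidelity for a bounded context size, which is what keeps long sessions under the model's token limit.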

LangGraph Orchestration

Graph-based workflow engine for complex multi-agent systems. Define state machines with conditional branching, loops, and human-in-the-loop approval.

RAG (Retrieval-Augmented Generation)

Built-in support for vector stores, document loaders, text splitters, and retrievers. Build agents that answer questions from your own documents and databases.
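The retrieval half of RAG can be sketched with a toy scorer. Here word-overlap stands in for the embedding similarity a real vector store computes; the function name and documents are illustrative only.

```python
def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query — a crude stand-in
    for the cosine similarity over embeddings a vector store would use."""
    q = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

docs = [
    "LangGraph adds graph-based orchestration for multi-agent workflows",
    "ConversationBufferMemory stores the chat history in full",
    "Vector stores index document embeddings for similarity search",
]
top = retrieve("how do vector stores index embeddings", docs)
print(top[0])
```

In a real pipeline, the retrieved passages are stuffed into the prompt so the model answers from your documents rather than from its training data.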

Multi-Model Support

Unified interface for OpenAI, Anthropic, Google, Mistral, Llama, and 50+ model providers. Switch models without changing agent code.
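The "switch models without changing agent code" claim rests on a shared interface. This minimal sketch (class and method names are illustrative, not LangChain's) shows the adapter pattern: agent logic depends only on the abstract interface, so swapping providers means changing one constructor.

```python
class ChatModel:
    """Provider-agnostic interface: every backend exposes invoke()."""
    def invoke(self, prompt: str) -> str:
        raise NotImplementedError

class FakeOpenAI(ChatModel):
    def invoke(self, prompt: str) -> str:
        return f"[openai] {prompt}"

class FakeAnthropic(ChatModel):
    def invoke(self, prompt: str) -> str:
        return f"[anthropic] {prompt}"

def run_agent(model: ChatModel, task: str) -> str:
    # Agent code touches only the interface, never a provider SDK.
    return model.invoke(task)

print(run_agent(FakeOpenAI(), "hello"))     # → [openai] hello
print(run_agent(FakeAnthropic(), "hello"))  # → [anthropic] hello
```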

Execution Brief

Use this page as a rollout checklist, not just reference text.


Tool Mapping Lens

Organize Tools by Workflow Phase

Catalog-oriented pages work best when users can map discovery, evaluation, and rollout in a clear path instead of reading an undifferentiated list.

  • Define the job-to-be-done first
  • Group tools by stage
  • Prioritize by adoption friction

Actionable Utility Module

Skill Implementation Board

Use this board for LangChain Agents before rollout. Capture inputs, apply one decision rule, execute the checklist, and log outcome.

Input: Objective

Deliver one measurable improvement to a LangChain agent workflow (Python tool use and memory) in this review.

Input: Baseline Window

20-30 minutes

Input: Fallback Window

8-12 minutes

Decision rules (trigger → action → expected output):

  • Trigger: one workflow objective and release owner are defined → Action: run a preview execution with fixed acceptance criteria → Expected output: a go or hold decision backed by repeatable evidence.
  • Trigger: output quality drops below baseline or retries increase → Action: limit scope, isolate the root issue, and rerun a controlled test → Expected output: one confirmed correction path before wider rollout.
  • Trigger: checks pass for two consecutive replay windows → Action: promote to broader traffic with the fallback path active → Expected output: a stable rollout with low operational surprise.

Execution Steps

  1. Record objective, owner, and stop condition.
  2. Execute one controlled preview run.
  3. Measure quality, latency, and correction burden.
  4. Promote only when pass criteria are stable.

Output Template

tool=langchain-agents
objective=
preview_result=pass|fail
primary_metric=
next_step=rollout|patch|hold

What Is LangChain Agents?

LangChain is an open-source framework created by Harrison Chase in late 2022 that quickly became the standard library for building LLM-powered applications. Its Agents module provides the primitives for creating AI systems that can reason about tasks, use tools, maintain memory, and interact with external systems — the foundational building blocks of what we now call "AI agents."

The core agent pattern in LangChain follows the ReAct (Reasoning + Acting) framework. Given a task, the agent reasons about what information it needs, selects and executes a tool, observes the result, and decides whether to take another action or produce a final answer. This loop is the basis for most AI agent architectures, and LangChain provides one of the most mature and widely used implementations of it.
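The loop just described can be sketched without any framework. Everything below is a hypothetical stand-in: `decide` plays the role of the model's reasoning step, and `calc` is a toy tool.

```python
def react_loop(task, tools, decide, max_steps=5):
    """ReAct: reason → act → observe, until the decision step answers."""
    observations = []
    for _ in range(max_steps):
        step = decide(task, observations)          # reason: pick tool or finish
        if step["action"] == "finish":
            return step["answer"]
        result = tools[step["action"]](step["input"])  # act: run the tool
        observations.append((step["action"], result))  # observe the result
    return "step limit reached"

# Toy calculator tool. (eval is fine for a fixed demo string,
# but never call eval on untrusted input.)
tools = {"calc": lambda expr: eval(expr)}

def decide(task, obs):
    """Scripted stand-in for the LLM's reasoning."""
    if not obs:
        return {"action": "calc", "input": task}
    return {"action": "finish", "answer": f"The result is {obs[-1][1]}"}

print(react_loop("2 + 3 * 4", tools, decide))  # → The result is 14
```

In LangChain the `decide` step is the LLM itself: the agent prompt includes the tool descriptions and prior observations, and the model emits either a tool call or a final answer.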

LangChain's ecosystem has evolved into three layers: LangChain Core (model wrappers, tool definitions, output parsers), LangChain Community (700+ third-party integrations), and LangGraph (graph-based orchestration for complex workflows). Most developers start with Core, add Community integrations as needed, and graduate to LangGraph when they need multi-agent coordination or complex state machines.

The competitive landscape positions LangChain as the infrastructure layer. CrewAI builds on LangChain for multi-agent orchestration. Hermes Agent builds on its own runtime. AutoGen (Microsoft) takes a conversation-based approach. LangChain's advantage is breadth: 700+ integrations, dual Python/JS SDKs, and the largest community of AI agent developers.

How to Get Better Results with LangChain Agents (Python, Tool Use, Memory)

Install LangChain: pip install langchain langchain-openai. Set your model API key as an environment variable (OPENAI_API_KEY or ANTHROPIC_API_KEY).

Create a simple agent: import the model, define tools as Python functions with @tool decorator, and initialize an AgentExecutor with the model and tools.

Add memory for context retention: use ConversationBufferMemory for short conversations or ConversationSummaryMemory for long sessions. Attach memory to the agent executor.

For complex workflows, upgrade to LangGraph: define a StateGraph with nodes (agent steps) and edges (transitions). This gives you conditional branching, loops, and human-in-the-loop patterns.

Treat this page as a decision map. Build a shortlist fast, then run a focused second pass for security, ownership, and operational fit.

When a team keeps one shared selection rubric, tool adoption speeds up because evaluators stop debating criteria every time a new option appears.

Worked Examples

Building a research agent with tools

  1. Define tools: web_search (Brave API), read_url (fetch page content), write_file (save results)
  2. Create a ChatOpenAI model instance with GPT-4o
  3. Initialize AgentExecutor with model, tools, and ConversationBufferMemory
  4. Ask: "Research the top 5 AI coding tools in 2026 and save a comparison report"
  5. Agent reasons: needs to search the web first → calls web_search
  6. Agent reads top results with read_url for detailed comparison data
  7. Agent synthesizes findings and calls write_file to save the report
  8. Final output: structured markdown report saved to disk

Outcome: A complete research workflow executed autonomously through ReAct reasoning. The agent decided which tools to use, in what order, and when it had enough information to produce the final report.
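The workflow above can be sketched as a straight-line pipeline with stub tools. The tool names mirror the example (`web_search`, `read_url`, `write_file`), but their bodies here are fakes: a real agent would call an actual search API and would decide the call order itself via ReAct reasoning rather than following hard-coded steps.

```python
# Hypothetical stub tools standing in for the real integrations.
def web_search(query: str) -> list[str]:
    return ["https://example.com/tool-a", "https://example.com/tool-b"]

def read_url(url: str) -> str:
    return f"notes from {url}"

REPORTS = {}  # in-memory stand-in for the file system
def write_file(path: str, content: str) -> str:
    REPORTS[path] = content
    return path

def research_agent(topic: str) -> str:
    urls = web_search(topic)                  # search the web first
    notes = [read_url(u) for u in urls]       # read top results in detail
    report = f"# {topic}\n" + "\n".join(f"- {n}" for n in notes)
    return write_file("report.md", report)    # synthesize and save

path = research_agent("AI coding tools")
print(REPORTS[path])
```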

Multi-agent system with LangGraph

  1. Define a StateGraph with three nodes: Researcher, Analyst, Writer
  2. Researcher node: searches web and collects raw data
  3. Analyst node: processes data into structured insights
  4. Writer node: produces final report from insights
  5. Add conditional edge: if Analyst finds data gaps, route back to Researcher
  6. Add human-in-the-loop: Writer output goes to human for approval before final save
  7. Run the graph with: graph.invoke({"task": "Quarterly market analysis"})
  8. The system routes between agents until the report passes human review

Outcome: A self-correcting multi-agent pipeline that handles data gaps automatically and includes human oversight. LangGraph manages the state transitions that would be complex to implement manually.
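The routing logic above can be sketched as a plain state machine: nodes are functions that mutate shared state and return the name of the next node. This is a hypothetical simplification — LangGraph's real `StateGraph` adds typed state, checkpointing, and streaming on top — and the `human` node auto-approves here where a real deployment would pause for review.

```python
def researcher(state):
    state["data"].append("datapoint")        # collect raw data
    return "analyst"

def analyst(state):
    if len(state["data"]) < 2:               # conditional edge: data gap
        return "researcher"                  # route back for more data
    state["insights"] = [d.upper() for d in state["data"]]
    return "writer"

def writer(state):
    state["report"] = " / ".join(state["insights"])
    return "end" if state.get("approved") else "human"

def human(state):
    state["approved"] = True                 # stub: auto-approve the draft
    return "writer"

NODES = {"researcher": researcher, "analyst": analyst,
         "writer": writer, "human": human}

def invoke(state):
    """Run nodes until one returns the terminal 'end' marker."""
    node = "researcher"
    while node != "end":
        node = NODES[node](state)
    return state

final = invoke({"data": [], "insights": []})
print(final["report"])  # → DATAPOINT / DATAPOINT
```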

Frequently Asked Questions

What is LangChain?

LangChain is an open-source framework for building applications powered by large language models. Its Agents module lets you create AI agents that can use tools, maintain memory, reason through multi-step problems, and interact with external systems. Available in Python (langchain) and JavaScript (LangChain.js), it is the most widely adopted framework for LLM application development.

How do LangChain Agents work?

LangChain Agents use the ReAct (Reasoning + Acting) pattern. The agent receives a task, reasons about what tool to use, executes the tool, observes the result, and decides the next action. This loop continues until the agent has enough information to produce a final answer. You define the tools available and the agent decides when and how to use them.

What is the difference between LangChain and LangGraph?

LangChain provides the building blocks: model wrappers, tool definitions, memory, and basic agent loops. LangGraph (built on top of LangChain) adds graph-based orchestration for complex multi-agent workflows with state machines, conditional branching, and human-in-the-loop patterns. Use LangChain for simple agents, LangGraph for complex multi-step or multi-agent systems.

How does LangChain compare to CrewAI?

LangChain is a lower-level framework — you build agents from primitives (models, tools, memory, chains). CrewAI is a higher-level framework built partly on LangChain — you define agents with roles and goals, and the framework handles orchestration. LangChain gives more control and flexibility. CrewAI gives faster time-to-multi-agent-system with less code.

Is LangChain still relevant with native tool use?

Yes. While model providers now offer native tool use (Claude tool use, OpenAI function calling), LangChain adds value through: unified interface across providers, memory management, output parsing, chain composition, retrieval-augmented generation (RAG), and the LangGraph orchestration layer. It is infrastructure, not just a tool use wrapper.

How much does LangChain cost?

LangChain is free and open-source (MIT license). You pay only for model API calls. LangSmith (their observability platform) offers a free tier for development and paid plans starting at $39/month for teams. Most developers use LangChain without LangSmith initially.

Missing a better tool match?

Send the exact workflow you are solving and we will prioritize a new comparison or rollout guide.