
Semantic search tutorial

tool decorator

  • does: configures the tool to attach the raw documents as artifacts to each ToolMessage.

  • purpose: lets us access document metadata downstream.
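In LangChain this is the `@tool(response_format="content_and_artifact")` convention, where the tool returns a `(content, artifact)` tuple. A minimal stdlib sketch of the pattern (the `ToolMessage` class and `retrieve` tool here are illustrative stand-ins, not the real LangChain classes):

```python
from dataclasses import dataclass
from typing import Any

# Stand-in for LangChain's ToolMessage (illustrative, not the real class).
@dataclass
class ToolMessage:
    content: str          # the string the model actually sees
    artifact: Any = None  # raw payload kept out of the prompt (e.g. full docs)

def retrieve(query: str) -> tuple[str, list[dict]]:
    """A toy retrieval tool returning (content, artifact), mirroring the
    response_format="content_and_artifact" convention."""
    docs = [
        {"page_content": "LangChain ships a @tool decorator.",
         "metadata": {"source": "docs"}},
        {"page_content": "Artifacts carry the raw documents.",
         "metadata": {"source": "notes"}},
    ]
    content = "\n\n".join(d["page_content"] for d in docs)
    return content, docs

content, artifact = retrieve("tool decorator")
msg = ToolMessage(content=content, artifact=artifact)
# The artifact preserves metadata that the flat content string would lose.
print(msg.artifact[0]["metadata"]["source"])  # -> docs
```

The point of the split: `content` goes into the conversation, while `artifact` keeps the structured documents (with their metadata) available to downstream code.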



LangChain | Core components


  1. Agents: a framework to orchestrate tool use via an LLM's reasoning; the agent runs tools in a loop to achieve a goal.

  2. Models: reasoning engines that take input and generate output

  3. Messages and tools

    1. Messages: units of context that carry conversation content & state

    2. Tools: callable functions/APIs that retrieve external data

  4. Short-term memory, Streaming, Structured output

    1. Short-term memory: in-session context stores that retain recent interaction history

    2. Streaming: for displaying output progressively (even before a complete response is ready)

    3. Structured output: a schema (eg JSON) that the model's output must conform to, for predictable parsing
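The components above can be wired into a toy agent loop. Everything below is a stdlib sketch under stated assumptions: `stub_model` stands in for a real LLM, and the message/tool shapes are illustrative, not LangChain's actual classes:

```python
from typing import Callable

# Tools: callable functions the agent may invoke (component 3.2).
TOOLS: dict[str, Callable[[str], str]] = {
    "get_weather": lambda city: f"Sunny in {city}",
}

def stub_model(messages: list[dict]) -> dict:
    """Stand-in reasoning engine (component 2). A real LLM decides whether
    to call a tool; this stub calls get_weather once, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"role": "assistant", "tool_call": ("get_weather", "Paris")}
    return {"role": "assistant", "content": "It is sunny in Paris."}

def run_agent(user_input: str) -> list[dict]:
    # Messages carry the conversation state (3.1); keeping the growing list
    # around for the session is the short-term memory (4.1).
    messages = [{"role": "user", "content": user_input}]
    while True:  # the agent loop (component 1): model -> tool -> model ...
        reply = stub_model(messages)
        messages.append(reply)
        if "tool_call" not in reply:
            return messages
        name, arg = reply["tool_call"]
        messages.append({"role": "tool", "content": TOOLS[name](arg)})

history = run_agent("What's the weather in Paris?")
print(history[-1]["content"])  # -> It is sunny in Paris.
```

The loop terminates when the model replies without requesting a tool, which is exactly the "run tools in a loop to achieve a goal" behavior from item 1.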

LangChain | Quickstart | Build a real-world agent

Quickstart: create these to build an agent:

  1. write prompts

  2. tools: integrate with external data

    1. can depend on runtime context

    2. can interact with agent memory

  3. model config: for consistent responses

  4. response format for predictable results

  5. memory for context across chats

  6. invoke agent
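The six steps can be sketched end to end without LangChain itself. This is a hedged stdlib stand-in: `fake_model`, the JSON response format, and the thread-keyed `MEMORY` dict are all hypothetical illustrations of the build steps, not the real agent API:

```python
import json

SYSTEM_PROMPT = "You are a helpful weather assistant."  # step 1: prompt

def get_weather(city: str) -> str:                      # step 2: a tool
    return f"72F and clear in {city}"

MODEL_CONFIG = {"temperature": 0.0}                     # step 3: low temperature for consistent responses

def fake_model(prompt: str, history: list[str]) -> str:
    """Stand-in model that always emits JSON matching our response format."""
    return json.dumps({"city": "Paris", "conditions": get_weather("Paris")})

MEMORY: dict[str, list[str]] = {}                       # step 5: per-thread history

def invoke(thread_id: str, user_input: str) -> dict:    # step 6: invoke agent
    history = MEMORY.setdefault(thread_id, [])
    raw = fake_model(SYSTEM_PROMPT + "\n" + user_input, history)
    history.append(user_input)
    reply = json.loads(raw)                             # step 4: predictable, parseable result
    assert {"city", "conditions"} <= reply.keys()
    return reply

out = invoke("thread-1", "Weather in Paris?")
print(out["conditions"])  # -> 72F and clear in Paris
```

Keying memory by a thread id is what lets context persist across chats (step 5): a second `invoke("thread-1", ...)` would see the earlier turn in `history`.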