Tuesday, April 28, 2026

LangChain LCEL building blocks and interfaces

1. Building Blocks (Modular Components)

These are the functional units used to build chains and agents.

  • Prompts: Templates that translate user input into a format the model understands.
  • Models: LLMs or Chat Models that process input and generate language responses.
  • Output Parsers: Tools that convert raw model output into structured formats (JSON, lists, strings).
  • Retrievers: Logic for fetching relevant documents from data sources to provide context.
  • Tools: External functions (like Google Search or calculators) that an agent can call.
  • Vector Stores: Databases designed to store and query text embeddings for semantic search.
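To make the composition idea concrete, here is a minimal pure-Python sketch of how building blocks chain together with the `|` pipe, in the spirit of LCEL. The `ToyPrompt`, `ToyModel`, and `ToyParser` classes are hypothetical stand-ins, not real LangChain classes; a real chain would use `langchain_core` prompts, chat models, and output parsers.

```python
# Toy sketch of LCEL-style composition: prompt | model | parser.
# All classes here are illustrative, not langchain_core's implementation.

class ToyRunnable:
    def invoke(self, value):
        raise NotImplementedError

    def __or__(self, other):
        # The | operator chains components, mimicking LCEL's pipe syntax.
        return ToySequence(self, other)

class ToySequence(ToyRunnable):
    def __init__(self, first, second):
        self.first, self.second = first, second

    def invoke(self, value):
        # Feed the first component's output into the second.
        return self.second.invoke(self.first.invoke(value))

class ToyPrompt(ToyRunnable):
    def __init__(self, template):
        self.template = template

    def invoke(self, value):
        return self.template.format(**value)

class ToyModel(ToyRunnable):
    def invoke(self, value):
        # A real model would call an LLM; here we echo deterministically.
        return f"ANSWER({value})"

class ToyParser(ToyRunnable):
    def invoke(self, value):
        return value.strip().lower()

chain = ToyPrompt("Q: {question}") | ToyModel() | ToyParser()
print(chain.invoke({"question": "what is LCEL?"}))
# -> answer(q: what is lcel?)
```

The point of the sketch is that each block exposes the same `invoke` contract, which is what lets arbitrary components snap together into a chain.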


2. Core Interfaces
These define the standards for how data, history, and feedback are handled across the framework.
  • Runnable: The primary interface for LCEL, providing standard methods such as invoke, ainvoke (asynchronous invoke), and stream.
  • BaseChatMessageHistory: Standardizes how conversation messages are saved and retrieved from memory.
  • BaseStore: A universal interface for simple key-value storage used in caching and persistent state.
  • BaseCallbackHandler: An interface for building custom listeners to log, monitor, or stream internal events.
  • BaseRetriever: A specialized search interface that defines how documents are queried.
  • BaseLanguageModel: The root interface defining how all LLMs and Chat Models must behave.
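As a small illustration of what an interface like BaseChatMessageHistory standardizes (append messages, read them back in order, clear), here is a hypothetical in-memory sketch. `InMemoryHistory` and its dict-based message format are simplifications of my own, not the real `langchain_core.chat_history` classes.

```python
# Hypothetical in-memory history illustrating the contract that
# BaseChatMessageHistory-style interfaces standardize.

class InMemoryHistory:
    def __init__(self):
        self.messages = []

    def add_message(self, role, content):
        # Append one turn of the conversation, preserving order.
        self.messages.append({"role": role, "content": content})

    def clear(self):
        # Reset the stored conversation.
        self.messages = []

history = InMemoryHistory()
history.add_message("human", "Hi")
history.add_message("ai", "Hello! How can I help?")
```

Because every backing store (in-memory, Redis, a SQL table) implements the same small surface, memory-aware chains can swap storage without changing chain code.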


Runnable methods: invoke, ainvoke, and stream
  • invoke: Executes the component synchronously with a single input.
    • Model: Returns the complete generated message.
    • Agent: Executes the entire workflow (including tool calls) and returns the final response.
  • ainvoke: The asynchronous version of invoke, recommended for production web backends to prevent blocking the event loop.
  • stream: Incremental output generation.
    • Model: Streams the response token-by-token as they are generated.
    • Agent: Streams the agent's "steps" or progress. By setting stream_mode="updates", you can receive real-time updates after each tool call or reasoning step. 
Key Differences in Streaming
While both support these methods, the output they produce during streaming differs:
  • Models primarily stream text tokens.
  • Agents (especially those built with LangGraph) can stream a wider variety of data, including model reasoning, tool call arguments, tool results, and the final answer. 
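The contrast can be sketched with two toy generators: a model stream yields plain text tokens, while an agent stream yields structured step updates. The dict shapes below are illustrative only, not LangGraph's actual update schema.

```python
# Illustrative contrast between model streaming and agent streaming.
# The step dicts are hypothetical, not LangGraph's exact payloads.

def model_stream():
    # A model streams raw text tokens.
    for token in ["The", " answer", " is", " 42."]:
        yield token

def agent_stream():
    # An agent streams structured updates: reasoning, tool calls,
    # tool results, and finally the answer.
    yield {"step": "reasoning", "content": "Need a calculator."}
    yield {"step": "tool_call", "tool": "calculator", "args": "6 * 7"}
    yield {"step": "tool_result", "output": "42"}
    yield {"step": "final", "content": "The answer is 42."}

tokens = "".join(model_stream())
steps = [update["step"] for update in agent_stream()]
print(tokens)   # The answer is 42.
print(steps)    # ['reasoning', 'tool_call', 'tool_result', 'final']
```

A UI consuming a model stream concatenates chunks into text, while a UI consuming agent updates can render progress indicators for each tool call before the final answer arrives.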




