output_parser.py — langchain Source File

Architecture documentation for output_parser.py, a Python source file in the langchain codebase with 11 imports and 0 dependents.

Dependency Diagram

graph LR
  2857bb68_61ce_bbf0_3d7f_30a049157b64["output_parser.py"]
  7025b240_fdc3_cf68_b72f_f41dac94566b["json"]
  2857bb68_61ce_bbf0_3d7f_30a049157b64 --> 7025b240_fdc3_cf68_b72f_f41dac94566b
  2a7f66a7_8738_3d47_375b_70fcaa6ac169["logging"]
  2857bb68_61ce_bbf0_3d7f_30a049157b64 --> 2a7f66a7_8738_3d47_375b_70fcaa6ac169
  67ec3255_645e_8b6e_1eff_1eb3c648ed95["re"]
  2857bb68_61ce_bbf0_3d7f_30a049157b64 --> 67ec3255_645e_8b6e_1eff_1eb3c648ed95
  80d582c5_7cc3_ac96_2742_3dbe1cbd4e2b["langchain_core.agents"]
  2857bb68_61ce_bbf0_3d7f_30a049157b64 --> 80d582c5_7cc3_ac96_2742_3dbe1cbd4e2b
  75137834_4ba7_dc43_7ec5_182c05eceedf["langchain_core.exceptions"]
  2857bb68_61ce_bbf0_3d7f_30a049157b64 --> 75137834_4ba7_dc43_7ec5_182c05eceedf
  ba43b74d_3099_7e1c_aac3_cf594720469e["langchain_core.language_models"]
  2857bb68_61ce_bbf0_3d7f_30a049157b64 --> ba43b74d_3099_7e1c_aac3_cf594720469e
  6e58aaea_f08e_c099_3cc7_f9567bfb1ae7["pydantic"]
  2857bb68_61ce_bbf0_3d7f_30a049157b64 --> 6e58aaea_f08e_c099_3cc7_f9567bfb1ae7
  91721f45_4909_e489_8c1f_084f8bd87145["typing_extensions"]
  2857bb68_61ce_bbf0_3d7f_30a049157b64 --> 91721f45_4909_e489_8c1f_084f8bd87145
  e160f068_75de_4342_6673_9969b919de85["langchain_classic.agents.agent"]
  2857bb68_61ce_bbf0_3d7f_30a049157b64 --> e160f068_75de_4342_6673_9969b919de85
  603d4a41_89da_d8bb_46f0_ff263f3c2fb3["langchain_classic.agents.structured_chat.prompt"]
  2857bb68_61ce_bbf0_3d7f_30a049157b64 --> 603d4a41_89da_d8bb_46f0_ff263f3c2fb3
  48ed1a89_a8c1_890c_8db6_cafc60317e2f["langchain_classic.output_parsers"]
  2857bb68_61ce_bbf0_3d7f_30a049157b64 --> 48ed1a89_a8c1_890c_8db6_cafc60317e2f
  style 2857bb68_61ce_bbf0_3d7f_30a049157b64 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

from __future__ import annotations

import json
import logging
import re
from re import Pattern

from langchain_core.agents import AgentAction, AgentFinish
from langchain_core.exceptions import OutputParserException
from langchain_core.language_models import BaseLanguageModel
from pydantic import Field
from typing_extensions import override

from langchain_classic.agents.agent import AgentOutputParser
from langchain_classic.agents.structured_chat.prompt import FORMAT_INSTRUCTIONS
from langchain_classic.output_parsers import OutputFixingParser

logger = logging.getLogger(__name__)


class StructuredChatOutputParser(AgentOutputParser):
    """Output parser for the structured chat agent."""

    format_instructions: str = FORMAT_INSTRUCTIONS
    """Default formatting instructions"""

    pattern: Pattern = re.compile(r"```(?:json\s+)?(\W.*?)```", re.DOTALL)
    """Regex pattern to parse the output."""

    @override
    def get_format_instructions(self) -> str:
        """Returns formatting instructions for the given output parser."""
        return self.format_instructions

    @override
    def parse(self, text: str) -> AgentAction | AgentFinish:
        try:
            action_match = self.pattern.search(text)
            if action_match is not None:
                response = json.loads(action_match.group(1).strip(), strict=False)
                if isinstance(response, list):
                    # gpt turbo frequently ignores the directive to emit a single action
                    logger.warning("Got multiple action responses: %s", response)
                    response = response[0]
                if response["action"] == "Final Answer":
                    return AgentFinish({"output": response["action_input"]}, text)
                return AgentAction(
                    response["action"],
                    response.get("action_input", {}),
                    text,
                )
            return AgentFinish({"output": text}, text)
        except Exception as e:
            msg = f"Could not parse LLM output: {text}"
            raise OutputParserException(msg) from e

    @property
    def _type(self) -> str:
        return "structured_chat"


class StructuredChatOutputParserWithRetries(AgentOutputParser):
    """Output parser with retries for the structured chat agent."""

    base_parser: AgentOutputParser = Field(default_factory=StructuredChatOutputParser)
    """The base parser to use."""
    output_fixing_parser: OutputFixingParser | None = None
    """The output fixing parser to use."""

    @override
    def get_format_instructions(self) -> str:
        return FORMAT_INSTRUCTIONS

    @override
    def parse(self, text: str) -> AgentAction | AgentFinish:
        try:
            if self.output_fixing_parser is not None:
                return self.output_fixing_parser.parse(text)
            return self.base_parser.parse(text)
        except Exception as e:
            msg = f"Could not parse LLM output: {text}"
            raise OutputParserException(msg) from e

    @classmethod
    def from_llm(
        cls,
        llm: BaseLanguageModel | None = None,
        base_parser: StructuredChatOutputParser | None = None,
    ) -> StructuredChatOutputParserWithRetries:
        """Create a StructuredChatOutputParserWithRetries from a language model.

        Args:
            llm: The language model to use.
            base_parser: An optional StructuredChatOutputParser to use.

        Returns:
            An instance of StructuredChatOutputParserWithRetries.
        """
        if llm is not None:
            base_parser = base_parser or StructuredChatOutputParser()
            output_fixing_parser: OutputFixingParser = OutputFixingParser.from_llm(
                llm=llm,
                parser=base_parser,
            )
            return cls(output_fixing_parser=output_fixing_parser)
        if base_parser is not None:
            return cls(base_parser=base_parser)
        return cls()

    @property
    def _type(self) -> str:
        return "structured_chat_with_retries"

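StructuredChatOutputParserWithRetries layers optional self-repair on top of the base parser: from_llm builds an OutputFixingParser when it is given a language model, and otherwise falls back to wrapping the base parser directly, with parse failures surfaced as OutputParserException. A minimal sketch of that wiring; build_parser and my_chat_model are hypothetical names, not part of this file.

from langchain_classic.agents.structured_chat.output_parser import (
    StructuredChatOutputParser,
    StructuredChatOutputParserWithRetries,
)


def build_parser(llm=None):
    # Hypothetical helper: `llm` may be any BaseLanguageModel instance
    # constructed elsewhere, or None.
    return StructuredChatOutputParserWithRetries.from_llm(
        llm=llm,
        base_parser=StructuredChatOutputParser(),
    )


# Without an llm, from_llm simply wraps the base parser.
parser = build_parser()

# With an llm, from_llm builds an OutputFixingParser, so malformed output is
# sent back to the model for repair before being re-parsed.
# parser = build_parser(llm=my_chat_model)
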
Dependencies

  • json
  • langchain_classic.agents.agent
  • langchain_classic.agents.structured_chat.prompt
  • langchain_classic.output_parsers
  • langchain_core.agents
  • langchain_core.exceptions
  • langchain_core.language_models
  • logging
  • pydantic
  • re
  • typing_extensions

Frequently Asked Questions

What does output_parser.py do?
output_parser.py defines the output parsers for the structured chat agent: StructuredChatOutputParser, which extracts the agent's fenced JSON action blocks, and StructuredChatOutputParserWithRetries, which can route malformed output through an OutputFixingParser for repair. It is a Python source file in the langchain codebase, in the AgentOrchestration domain, ToolInterface subdomain.
What does output_parser.py depend on?
output_parser.py imports 11 modules: json, langchain_classic.agents.agent, langchain_classic.agents.structured_chat.prompt, langchain_classic.output_parsers, langchain_core.agents, langchain_core.exceptions, langchain_core.language_models, logging, pydantic, re, and typing_extensions.
Where is output_parser.py in the architecture?
output_parser.py is located at libs/langchain/langchain_classic/agents/structured_chat/output_parser.py (domain: AgentOrchestration, subdomain: ToolInterface, directory: libs/langchain/langchain_classic/agents/structured_chat).
