LLMSummarizationCheckerChain Class — langchain Architecture

Architecture documentation for the LLMSummarizationCheckerChain class in base.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  94675137_e239_b0ed_4295_d86f72dadd0c["LLMSummarizationCheckerChain"]
  f3cef70e_11b0_61c9_7ec0_7308f4b45056["Chain"]
  94675137_e239_b0ed_4295_d86f72dadd0c -->|extends| f3cef70e_11b0_61c9_7ec0_7308f4b45056
  4bd52d1b_a9be_e6bd_75e2_1b015e6954f4["base.py"]
  94675137_e239_b0ed_4295_d86f72dadd0c -->|defined in| 4bd52d1b_a9be_e6bd_75e2_1b015e6954f4
  7ae7d589_e20c_0c98_1b01_12802ff422f8["_raise_deprecation()"]
  94675137_e239_b0ed_4295_d86f72dadd0c -->|method| 7ae7d589_e20c_0c98_1b01_12802ff422f8
  c358f034_baae_a9db_575b_76b128193e5e["input_keys()"]
  94675137_e239_b0ed_4295_d86f72dadd0c -->|method| c358f034_baae_a9db_575b_76b128193e5e
  bf000f00_494e_bb94_1cd6_c3bd5e5cef06["output_keys()"]
  94675137_e239_b0ed_4295_d86f72dadd0c -->|method| bf000f00_494e_bb94_1cd6_c3bd5e5cef06
  90aa30a1_894c_0c7f_2f26_64e1e1157b85["_call()"]
  94675137_e239_b0ed_4295_d86f72dadd0c -->|method| 90aa30a1_894c_0c7f_2f26_64e1e1157b85
  69f3be36_1db0_88c9_d382_7b16b1f4e8a3["_chain_type()"]
  94675137_e239_b0ed_4295_d86f72dadd0c -->|method| 69f3be36_1db0_88c9_d382_7b16b1f4e8a3
  1e0bb2d4_7826_f84f_e437_901c40aeb242["from_llm()"]
  94675137_e239_b0ed_4295_d86f72dadd0c -->|method| 1e0bb2d4_7826_f84f_e437_901c40aeb242

Source Code

libs/langchain/langchain_classic/chains/llm_summarization_checker/base.py lines 80–213

class LLMSummarizationCheckerChain(Chain):
    """Chain for question-answering with self-verification.

    Example:
        ```python
        from langchain_openai import OpenAI
        from langchain_classic.chains import LLMSummarizationCheckerChain

        model = OpenAI(temperature=0.0)
        checker_chain = LLMSummarizationCheckerChain.from_llm(model)
        ```
    """

    sequential_chain: SequentialChain
    llm: BaseLanguageModel | None = None
    """[Deprecated] LLM wrapper to use."""

    create_assertions_prompt: PromptTemplate = CREATE_ASSERTIONS_PROMPT
    """[Deprecated]"""
    check_assertions_prompt: PromptTemplate = CHECK_ASSERTIONS_PROMPT
    """[Deprecated]"""
    revised_summary_prompt: PromptTemplate = REVISED_SUMMARY_PROMPT
    """[Deprecated]"""
    are_all_true_prompt: PromptTemplate = ARE_ALL_TRUE_PROMPT
    """[Deprecated]"""

    input_key: str = "query"
    output_key: str = "result"
    max_checks: int = 2
    """Maximum number of times to check the assertions. Default to double-checking."""

    model_config = ConfigDict(
        arbitrary_types_allowed=True,
        extra="forbid",
    )

    @model_validator(mode="before")
    @classmethod
    def _raise_deprecation(cls, values: dict) -> Any:
        if "llm" in values:
            warnings.warn(
                "Directly instantiating an LLMSummarizationCheckerChain with an llm is "
                "deprecated. Please instantiate with"
                " sequential_chain argument or using the from_llm class method.",
                stacklevel=5,
            )
            if "sequential_chain" not in values and values["llm"] is not None:
                values["sequential_chain"] = _load_sequential_chain(
                    values["llm"],
                    values.get("create_assertions_prompt", CREATE_ASSERTIONS_PROMPT),
                    values.get("check_assertions_prompt", CHECK_ASSERTIONS_PROMPT),
                    values.get("revised_summary_prompt", REVISED_SUMMARY_PROMPT),
                    values.get("are_all_true_prompt", ARE_ALL_TRUE_PROMPT),
                    verbose=values.get("verbose", False),
                )
        return values

    @property
    def input_keys(self) -> list[str]:
        """Return the singular input key."""
        return [self.input_key]

    @property
    def output_keys(self) -> list[str]:
        """Return the singular output key."""
        return [self.output_key]

    def _call(
        self,
        inputs: dict[str, Any],
        run_manager: CallbackManagerForChainRun | None = None,
    ) -> dict[str, str]:
        _run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()
        all_true = False
        count = 0
        output = None
        original_input = inputs[self.input_key]
        chain_input = original_input
        while not all_true and count < self.max_checks:
            output = self.sequential_chain(
                {"summary": chain_input},
                callbacks=_run_manager.get_child(),
            )
            # … (excerpt truncated here; the rest of _call, plus _chain_type
            # and from_llm, continues through line 213 of base.py)

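The `_call` body excerpted above implements a check-and-revise loop: the summary is run through the sequential chain, and if not every extracted assertion holds, the revised summary is fed back in as the next input, up to `max_checks` times. The control flow can be sketched stand-alone, with a stubbed `run_checker` callable in place of the real `SequentialChain` (both `check_and_revise` and `fake_checker` are hypothetical names for illustration, not part of LangChain):

```python
def check_and_revise(summary: str, run_checker, max_checks: int = 2) -> str:
    """Sketch of the check-and-revise loop: re-check a summary until
    every assertion holds or max_checks attempts are exhausted."""
    output = None
    chain_input = summary
    for _ in range(max_checks):
        # run_checker stands in for the SequentialChain call; it returns
        # a dict with "all_true" and "revised_summary" keys.
        output = run_checker({"summary": chain_input})
        if output["all_true"]:
            break
        # Feed the corrected summary back in for another round of checking.
        chain_input = output["revised_summary"]
    if output is None:
        raise ValueError("No output from chain")
    return output["revised_summary"].strip()


# Stub checker: corrects a known error, then reports all assertions true.
def fake_checker(inputs):
    fixed = inputs["summary"].replace("nine planets", "eight planets")
    return {"all_true": "nine" not in fixed, "revised_summary": fixed}


print(check_and_revise("The solar system has nine planets.", fake_checker))
# → The solar system has eight planets.
```

Defaulting `max_checks` to 2 mirrors the class's "double-checking" default: one pass to revise, one to confirm the revision.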
Extends

Chain

Frequently Asked Questions

What is the LLMSummarizationCheckerChain class?
LLMSummarizationCheckerChain is a class in the langchain codebase, defined in libs/langchain/langchain_classic/chains/llm_summarization_checker/base.py.
Where is LLMSummarizationCheckerChain defined?
LLMSummarizationCheckerChain is defined in libs/langchain/langchain_classic/chains/llm_summarization_checker/base.py at line 80.
What does LLMSummarizationCheckerChain extend?
LLMSummarizationCheckerChain extends Chain.
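The `Chain` base class it extends defines, in essence, a mapping from named inputs to named outputs via the `input_keys` and `output_keys` properties and a `_call` method. A simplified, framework-free sketch of that contract (the names mirror the real API, but this is an illustration, not LangChain's implementation; `EchoChain` is a hypothetical example subclass):

```python
from abc import ABC, abstractmethod
from typing import Any


class Chain(ABC):
    """Simplified sketch of the interface LLMSummarizationCheckerChain fills in."""

    @property
    @abstractmethod
    def input_keys(self) -> list[str]:
        """Names of the inputs this chain expects."""

    @property
    @abstractmethod
    def output_keys(self) -> list[str]:
        """Names of the outputs this chain produces."""

    @abstractmethod
    def _call(self, inputs: dict[str, Any]) -> dict[str, str]:
        """Run the chain on already-validated inputs."""

    def invoke(self, inputs: dict[str, Any]) -> dict[str, str]:
        # Validate that every declared input key is present before running.
        missing = set(self.input_keys) - inputs.keys()
        if missing:
            raise KeyError(f"Missing inputs: {missing}")
        return self._call(inputs)


class EchoChain(Chain):
    """Trivial concrete chain using the same single-key pattern as the
    class documented above (input_key='query', output_key='result')."""

    input_key: str = "query"
    output_key: str = "result"

    @property
    def input_keys(self) -> list[str]:
        return [self.input_key]

    @property
    def output_keys(self) -> list[str]:
        return [self.output_key]

    def _call(self, inputs: dict[str, Any]) -> dict[str, str]:
        return {self.output_key: inputs[self.input_key]}


print(EchoChain().invoke({"query": "hi"}))  # → {'result': 'hi'}
```

This is why `LLMSummarizationCheckerChain` only needs to supply `input_keys`, `output_keys`, and `_call`: the base class handles invocation and validation around them.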
