LLMChain Class — langchain Architecture

Architecture documentation for the LLMChain class in llm.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  8d3a235d_a08f_2979_f52a_1772067dd1d3["LLMChain"]
  097a4781_5519_0b5d_6244_98c64eadc0d6["Chain"]
  8d3a235d_a08f_2979_f52a_1772067dd1d3 -->|extends| 097a4781_5519_0b5d_6244_98c64eadc0d6
  cd3c2a54_68ad_0d2e_ac51_e9be79fd1958["BaseLanguageModel"]
  8d3a235d_a08f_2979_f52a_1772067dd1d3 -->|uses| cd3c2a54_68ad_0d2e_ac51_e9be79fd1958
  b9553aad_b797_0a7b_73ed_8d05b0819c0f["BaseMessage"]
  8d3a235d_a08f_2979_f52a_1772067dd1d3 -->|uses| b9553aad_b797_0a7b_73ed_8d05b0819c0f
  28fd4d93_e217_c2bb_26b3_3d5146afb677["llm.py"]
  8d3a235d_a08f_2979_f52a_1772067dd1d3 -->|defined in| 28fd4d93_e217_c2bb_26b3_3d5146afb677
  e158aa14_fcfd_73c0_95ec_97077d3b577b["is_lc_serializable()"]
  8d3a235d_a08f_2979_f52a_1772067dd1d3 -->|method| e158aa14_fcfd_73c0_95ec_97077d3b577b
  4c7f38a3_1916_ce41_143e_3b3019d6e12b["input_keys()"]
  8d3a235d_a08f_2979_f52a_1772067dd1d3 -->|method| 4c7f38a3_1916_ce41_143e_3b3019d6e12b
  2c55f3d7_e968_b962_8186_4993789e0ac3["output_keys()"]
  8d3a235d_a08f_2979_f52a_1772067dd1d3 -->|method| 2c55f3d7_e968_b962_8186_4993789e0ac3
  54f256c5_11dc_ddae_4a4a_a7448ab8f917["_call()"]
  8d3a235d_a08f_2979_f52a_1772067dd1d3 -->|method| 54f256c5_11dc_ddae_4a4a_a7448ab8f917
  abc57c52_5586_0bf6_b569_9833e61ea1a8["generate()"]
  8d3a235d_a08f_2979_f52a_1772067dd1d3 -->|method| abc57c52_5586_0bf6_b569_9833e61ea1a8
  e6555622_5d54_cc72_5e6b_209377e21fe9["agenerate()"]
  8d3a235d_a08f_2979_f52a_1772067dd1d3 -->|method| e6555622_5d54_cc72_5e6b_209377e21fe9
  d17a0d18_ec61_9a33_aa21_fdf6708d0a03["prep_prompts()"]
  8d3a235d_a08f_2979_f52a_1772067dd1d3 -->|method| d17a0d18_ec61_9a33_aa21_fdf6708d0a03
  4bf39a44_f610_93a3_c354_3985893c6fcf["aprep_prompts()"]
  8d3a235d_a08f_2979_f52a_1772067dd1d3 -->|method| 4bf39a44_f610_93a3_c354_3985893c6fcf
  600d1b9d_0474_b79b_33c3_ba7eed8b9c20["apply()"]
  8d3a235d_a08f_2979_f52a_1772067dd1d3 -->|method| 600d1b9d_0474_b79b_33c3_ba7eed8b9c20
  bf4339ba_8d5f_8c83_6e62_791dfee3eea3["aapply()"]
  8d3a235d_a08f_2979_f52a_1772067dd1d3 -->|method| bf4339ba_8d5f_8c83_6e62_791dfee3eea3

Source Code

libs/langchain/langchain_classic/chains/llm.py lines 45–416

class LLMChain(Chain):
    """Chain to run queries against LLMs.

    This class is deprecated. See below for an example implementation using
    LangChain runnables:

        ```python
        from langchain_core.output_parsers import StrOutputParser
        from langchain_core.prompts import PromptTemplate
        from langchain_openai import OpenAI

        prompt_template = "Tell me a {adjective} joke"
        prompt = PromptTemplate(input_variables=["adjective"], template=prompt_template)
        model = OpenAI()
        chain = prompt | model | StrOutputParser()

        chain.invoke("your adjective here")
        ```

    Example:
        ```python
        from langchain_classic.chains import LLMChain
        from langchain_openai import OpenAI
        from langchain_core.prompts import PromptTemplate

        prompt_template = "Tell me a {adjective} joke"
        prompt = PromptTemplate(input_variables=["adjective"], template=prompt_template)
        model = LLMChain(llm=OpenAI(), prompt=prompt)
        ```
    """

    @classmethod
    @override
    def is_lc_serializable(cls) -> bool:
        return True

    prompt: BasePromptTemplate
    """Prompt object to use."""
    llm: Runnable[LanguageModelInput, str] | Runnable[LanguageModelInput, BaseMessage]
    """Language model to call."""
    output_key: str = "text"
    output_parser: BaseLLMOutputParser = Field(default_factory=StrOutputParser)
    """Output parser to use.
    Defaults to one that takes the most likely string but does not change it
    otherwise."""
    return_final_only: bool = True
    """Whether to return only the final parsed result.
    If `False`, will return a bunch of extra information about the generation."""
    llm_kwargs: dict = Field(default_factory=dict)

    model_config = ConfigDict(
        arbitrary_types_allowed=True,
        extra="forbid",
    )

    @property
    def input_keys(self) -> list[str]:
        """Will be whatever keys the prompt expects."""
        return self.prompt.input_variables

    @property
    def output_keys(self) -> list[str]:
        """Will always return text key."""
        if self.return_final_only:
            return [self.output_key]
        return [self.output_key, "full_generation"]

    def _call(
        self,
        inputs: dict[str, Any],
        run_manager: CallbackManagerForChainRun | None = None,
    ) -> dict[str, str]:
        response = self.generate([inputs], run_manager=run_manager)
        return self.create_outputs(response)[0]

    def generate(
        self,
        input_list: list[dict[str, Any]],
        run_manager: CallbackManagerForChainRun | None = None,
    ) -> LLMResult:
        """Generate LLM result from inputs."""
        ...  # excerpt truncated; full definition runs through line 416
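The excerpt above shows the core call flow: `_call` formats the inputs through the prompt, delegates to `generate`, and packages the result under `output_key` (plus `full_generation` when `return_final_only` is `False`). A plain-Python sketch of that flow, with the LangChain types replaced by hypothetical stand-ins (`MiniLLMChain` and the lambda model are illustrative, not part of the real library):

```python
from typing import Any, Callable


class MiniLLMChain:
    """Illustrative mimic of LLMChain's call flow; not the real class."""

    def __init__(
        self,
        llm: Callable[[str], str],
        template: str,
        input_variables: list[str],
        output_key: str = "text",
        return_final_only: bool = True,
    ):
        self.llm = llm
        self.template = template
        self.input_variables = input_variables
        self.output_key = output_key
        self.return_final_only = return_final_only

    @property
    def input_keys(self) -> list[str]:
        # Mirrors LLMChain.input_keys: whatever variables the prompt expects.
        return self.input_variables

    @property
    def output_keys(self) -> list[str]:
        # Mirrors LLMChain.output_keys: an extra key when return_final_only=False.
        if self.return_final_only:
            return [self.output_key]
        return [self.output_key, "full_generation"]

    def generate(self, input_list: list[dict[str, Any]]) -> list[str]:
        # Format each input dict into a prompt string and call the model once per prompt.
        return [self.llm(self.template.format(**inputs)) for inputs in input_list]

    def _call(self, inputs: dict[str, Any]) -> dict[str, Any]:
        # Single-input path: generate for one input, keep the first result.
        response = self.generate([inputs])
        out: dict[str, Any] = {self.output_key: response[0]}
        if not self.return_final_only:
            out["full_generation"] = response
        return out


# Stand-in "LLM" that just echoes its prompt.
fake_llm = lambda prompt: f"echo: {prompt}"
chain = MiniLLMChain(fake_llm, "Tell me a {adjective} joke", ["adjective"])
result = chain._call({"adjective": "funny"})
# result == {"text": "echo: Tell me a funny joke"}
```

The real class differs in the details (callbacks, `LLMResult`, async variants), but the shape of the data flow is the same: input dict → formatted prompt → model call → keyed output dict.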

Frequently Asked Questions

What is the LLMChain class?
LLMChain is a class in the langchain codebase, defined in libs/langchain/langchain_classic/chains/llm.py.
Where is LLMChain defined?
LLMChain is defined in libs/langchain/langchain_classic/chains/llm.py at line 45.
What does LLMChain extend?
LLMChain extends Chain. It also references BaseLanguageModel and BaseMessage, but as types used by its `llm` field rather than as base classes.
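The deprecation note in the docstring recommends replacing LLMChain with a runnable pipeline (`prompt | model | StrOutputParser()`). A minimal sketch of the pipe-composition pattern behind that syntax, using hypothetical stand-ins rather than LangChain's real Runnable classes:

```python
from typing import Any, Callable


class Step:
    """Illustrative stand-in for a runnable: a callable step chainable with `|`."""

    def __init__(self, fn: Callable[[Any], Any]):
        self.fn = fn

    def invoke(self, value: Any) -> Any:
        return self.fn(value)

    def __or__(self, other: "Step") -> "Step":
        # (a | b).invoke(x) == b.invoke(a.invoke(x)): left step runs first.
        return Step(lambda value: other.invoke(self.invoke(value)))


# Stand-ins for prompt template, model, and output parser.
prompt = Step(lambda adjective: f"Tell me a {adjective} joke")
model = Step(lambda text: f"model output for: {text!r}")
parser = Step(lambda out: out.strip())

chain = prompt | model | parser
result = chain.invoke("funny")
# result == "model output for: 'Tell me a funny joke'"
```

This is why the replacement needs no LLMChain wrapper: composition with `|` already threads the prompt's output into the model and the model's output into the parser.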
