from_llm() — langchain Function Reference

Architecture documentation for the from_llm() classmethod in libs/langchain/langchain_classic/chains/hyde/base.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  beef7466_c02e_ed04_8603_81363105b155["from_llm()"]
  4ca44101_1e0d_43e9_f420_35c5afe4173a["HypotheticalDocumentEmbedder"]
  beef7466_c02e_ed04_8603_81363105b155 -->|defined in| 4ca44101_1e0d_43e9_f420_35c5afe4173a
  style beef7466_c02e_ed04_8603_81363105b155 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/langchain/langchain_classic/chains/hyde/base.py lines 102–123

    @classmethod
    def from_llm(
        cls,
        llm: BaseLanguageModel,
        base_embeddings: Embeddings,
        prompt_key: str | None = None,
        custom_prompt: BasePromptTemplate | None = None,
        **kwargs: Any,
    ) -> HypotheticalDocumentEmbedder:
        """Load and use LLMChain with either a specific prompt key or custom prompt."""
        if custom_prompt is not None:
            prompt = custom_prompt
        elif prompt_key is not None and prompt_key in PROMPT_MAP:
            prompt = PROMPT_MAP[prompt_key]
        else:
            msg = (
                f"Must specify prompt_key if custom_prompt not provided. Should be one "
                f"of {list(PROMPT_MAP.keys())}."
            )
            raise ValueError(msg)

        llm_chain = prompt | llm | StrOutputParser()
        return cls(base_embeddings=base_embeddings, llm_chain=llm_chain, **kwargs)
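The prompt-resolution precedence above (an explicit custom_prompt wins, then a known prompt_key, otherwise a ValueError) can be sketched in isolation. This is a minimal stand-alone sketch, not langchain's code: plain strings stand in for BasePromptTemplate objects, and the PROMPT_MAP contents shown here are assumed for illustration (the real map lives elsewhere in the hyde package).

```python
# Hypothetical stand-in for the real PROMPT_MAP; keys and templates
# are illustrative only.
PROMPT_MAP = {
    "web_search": "Please write a passage to answer the question: {QUESTION}",
}


def resolve_prompt(prompt_key=None, custom_prompt=None):
    """Mirror from_llm()'s precedence: custom_prompt first, then a
    recognized prompt_key, otherwise raise ValueError."""
    if custom_prompt is not None:
        return custom_prompt
    if prompt_key is not None and prompt_key in PROMPT_MAP:
        return PROMPT_MAP[prompt_key]
    raise ValueError(
        f"Must specify prompt_key if custom_prompt not provided. "
        f"Should be one of {list(PROMPT_MAP.keys())}."
    )
```

Note that an unrecognized prompt_key falls through to the error branch rather than raising a KeyError, so the caller always sees the same message listing the valid keys.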

Subdomains

Frequently Asked Questions

What does from_llm() do?
from_llm() is a classmethod on HypotheticalDocumentEmbedder that constructs an instance from a language model and base embeddings. It resolves a prompt (a custom_prompt if given, otherwise a known prompt_key from PROMPT_MAP), composes the chain prompt | llm | StrOutputParser(), and returns the embedder with that chain attached.
Where is from_llm() defined?
from_llm() is defined in libs/langchain/langchain_classic/chains/hyde/base.py at line 102.
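
The chain that from_llm() returns is built with the pipe operator (`prompt | llm | StrOutputParser()`). The toy classes below illustrate how such pipe composition can work via `__or__` overloading; they are a simplified sketch with assumed names (Step, Chain, Prompt, FakeLLM, StrParser), not langchain's actual Runnable implementation.

```python
class Step:
    """Base class: `a | b` produces a two-stage Chain."""
    def __or__(self, other):
        return Chain(self, other)


class Chain(Step):
    def __init__(self, left, right):
        self.left, self.right = left, right

    def invoke(self, x):
        # Run the left stage, feed its output to the right stage.
        return self.right.invoke(self.left.invoke(x))


class Prompt(Step):
    def __init__(self, template):
        self.template = template

    def invoke(self, x):
        return self.template.format(**x)


class FakeLLM(Step):
    """Stand-in for a language model: echoes its input."""
    def invoke(self, x):
        return f"LLM answer to: {x}"


class StrParser(Step):
    def invoke(self, x):
        return str(x).strip()


chain = Prompt("Answer: {question}") | FakeLLM() | StrParser()
```

Invoking the composed chain threads the input left to right through each stage, which is the shape of the llm_chain that from_llm() stores on the embedder.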

Analyze Your Own Codebase

Get architecture documentation, dependency graphs, and domain analysis for your codebase in minutes.

Try Supermodel Free