
from_llm() — langchain Function Reference

Architecture documentation for the from_llm() function in listwise_rerank.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  37782356_811e_e15b_6a04_79e18ff6f0fa["from_llm()"]
  f5dac976_7ac3_cb43_2244_72d962ef6afb["LLMListwiseRerank"]
  37782356_811e_e15b_6a04_79e18ff6f0fa -->|defined in| f5dac976_7ac3_cb43_2244_72d962ef6afb
  style 37782356_811e_e15b_6a04_79e18ff6f0fa fill:#6366f1,stroke:#818cf8,color:#fff


Source Code

libs/langchain/langchain_classic/retrievers/document_compressors/listwise_rerank.py lines 102–146

    @classmethod
    def from_llm(
        cls,
        llm: BaseLanguageModel,
        *,
        prompt: BasePromptTemplate | None = None,
        **kwargs: Any,
    ) -> "LLMListwiseRerank":
        """Create a LLMListwiseRerank document compressor from a language model.

        Args:
            llm: The language model to use for filtering. **Must implement
                BaseLanguageModel.with_structured_output().**
            prompt: The prompt to use for the filter.
            kwargs: Additional arguments to pass to the constructor.

        Returns:
            A LLMListwiseRerank document compressor that uses the given language model.
        """
        if type(llm).with_structured_output == BaseLanguageModel.with_structured_output:
            msg = (
                f"llm of type {type(llm)} does not implement `with_structured_output`."
            )
            raise ValueError(msg)

        class RankDocuments(BaseModel):
            """Rank the documents by their relevance to the user question.

            Rank from most to least relevant.
            """

            ranked_document_ids: list[int] = Field(
                ...,
                description=(
                    "The integer IDs of the documents, sorted from most to least "
                    "relevant to the user question."
                ),
            )

        _prompt = prompt if prompt is not None else _DEFAULT_PROMPT
        reranker = RunnablePassthrough.assign(
            ranking=RunnableLambda(_get_prompt_input)
            | _prompt
            | llm.with_structured_output(RankDocuments),
        ) | RunnableLambda(_parse_ranking)
        return cls(reranker=reranker, **kwargs)
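The `if` guard at the top of `from_llm()` uses a common Python idiom: looking up the method on the concrete class and comparing it against the base class's function object reveals whether the subclass actually overrides it. The sketch below illustrates the idiom with simplified stand-in classes, not the real langchain types:

```python
class Base:
    """Stand-in for BaseLanguageModel's default (unimplemented) method."""

    def with_structured_output(self, schema):
        raise NotImplementedError


class NoOverride(Base):
    """Subclass that inherits the base implementation unchanged."""


class HasOverride(Base):
    """Subclass that provides its own implementation."""

    def with_structured_output(self, schema):
        return f"structured:{schema}"


def implements_structured_output(obj) -> bool:
    # type(obj).with_structured_output resolves through the MRO; it is the
    # base function object only when no subclass has overridden the method.
    return type(obj).with_structured_output is not Base.with_structured_output


print(implements_structured_output(NoOverride()))   # False
print(implements_structured_output(HasOverride()))  # True
```

This is why `from_llm()` raises `ValueError` for models that merely inherit `BaseLanguageModel.with_structured_output`: inheriting the default is indistinguishable from not supporting structured output at all.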


Frequently Asked Questions

What does from_llm() do?
from_llm() is a classmethod constructor on LLMListwiseRerank, defined in libs/langchain/langchain_classic/retrievers/document_compressors/listwise_rerank.py. It verifies that the given language model overrides with_structured_output(), defines a RankDocuments schema for the model's output, and wires a runnable pipeline that prompts the model to rank documents by relevance and parses the resulting ranking into an LLMListwiseRerank document compressor.
Where is from_llm() defined?
from_llm() is defined in libs/langchain/langchain_classic/retrievers/document_compressors/listwise_rerank.py at line 102.
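The final pipeline stage, `_parse_ranking`, is not shown on this page, but its job follows from the `RankDocuments` schema: the model returns a list of integer document IDs sorted by relevance, and the compressor reorders the original documents accordingly. A minimal sketch of that reordering step, with a hypothetical `reorder_by_ranking` helper (not the real langchain function):

```python
def reorder_by_ranking(documents: list[str], ranked_ids: list[int]) -> list[str]:
    """Return the documents reordered by a model-provided list of indices,
    most relevant first."""
    return [documents[i] for i in ranked_ids]


docs = ["doc about cats", "doc about rerankers", "doc about weather"]
ranking = [1, 0, 2]  # the model judged document 1 most relevant
print(reorder_by_ranking(docs, ranking))
```

In the real pipeline, the ranking arrives as the `ranked_document_ids` field of the structured `RankDocuments` output, and the documents are langchain `Document` objects rather than strings.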
