
LLMChainFilter Class — langchain Architecture

Architecture documentation for the LLMChainFilter class in chain_filter.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  f8e12f25_9d7f_59ff_d901_84354945ca2e["LLMChainFilter"]
  1c219081_6061_3fb9_0ccd_08e0b97c9474["BaseDocumentCompressor"]
  f8e12f25_9d7f_59ff_d901_84354945ca2e -->|extends| 1c219081_6061_3fb9_0ccd_08e0b97c9474
  8d3a235d_a08f_2979_f52a_1772067dd1d3["LLMChain"]
  f8e12f25_9d7f_59ff_d901_84354945ca2e -->|uses| 8d3a235d_a08f_2979_f52a_1772067dd1d3
  f44a9460_d594_ffbb_d8b3_f434774f862a["chain_filter.py"]
  f8e12f25_9d7f_59ff_d901_84354945ca2e -->|defined in| f44a9460_d594_ffbb_d8b3_f434774f862a
  3087209f_a32b_4b8f_523f_562735200951["compress_documents()"]
  f8e12f25_9d7f_59ff_d901_84354945ca2e -->|method| 3087209f_a32b_4b8f_523f_562735200951
  512ef76c_fdb8_cf01_6ef4_6cba9cec7e49["acompress_documents()"]
  f8e12f25_9d7f_59ff_d901_84354945ca2e -->|method| 512ef76c_fdb8_cf01_6ef4_6cba9cec7e49
  4ce14a4b_4827_b293_2706_fdbd73dd37bb["from_llm()"]
  f8e12f25_9d7f_59ff_d901_84354945ca2e -->|method| 4ce14a4b_4827_b293_2706_fdbd73dd37bb
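
The diagram shows LLMChainFilter subclassing BaseDocumentCompressor and exposing three methods. As a rough orientation sketch (simplified signatures; the real abstract base lives in langchain_core and is a pydantic model), the contract the class fulfils looks like this:

# Simplified sketch of the BaseDocumentCompressor contract that LLMChainFilter
# implements. Names mirror the methods in the diagram; the exact signatures in
# langchain_core may differ slightly.
from abc import ABC, abstractmethod
from collections.abc import Sequence

from langchain_core.callbacks import Callbacks
from langchain_core.documents import Document


class DocumentCompressorContract(ABC):
    @abstractmethod
    def compress_documents(
        self,
        documents: Sequence[Document],
        query: str,
        callbacks: Callbacks | None = None,
    ) -> Sequence[Document]:
        """Return only the documents relevant to the query."""

    async def acompress_documents(
        self,
        documents: Sequence[Document],
        query: str,
        callbacks: Callbacks | None = None,
    ) -> Sequence[Document]:
        """Async variant; LLMChainFilter overrides both methods."""
        return self.compress_documents(documents, query, callbacks=callbacks)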


Source Code

libs/langchain/langchain_classic/retrievers/document_compressors/chain_filter.py lines 35–135

class LLMChainFilter(BaseDocumentCompressor):
    """Filter that drops documents that aren't relevant to the query."""

    llm_chain: Runnable
    """LLM wrapper to use for filtering documents.
    The chain prompt is expected to have a BooleanOutputParser."""

    get_input: Callable[[str, Document], dict] = default_get_input
    """Callable for constructing the chain input from the query and a Document."""

    model_config = ConfigDict(
        arbitrary_types_allowed=True,
    )

    def compress_documents(
        self,
        documents: Sequence[Document],
        query: str,
        callbacks: Callbacks | None = None,
    ) -> Sequence[Document]:
        """Filter down documents based on their relevance to the query."""
        filtered_docs = []

        config = RunnableConfig(callbacks=callbacks)
        outputs = zip(
            self.llm_chain.batch(
                [self.get_input(query, doc) for doc in documents],
                config=config,
            ),
            documents,
            strict=False,
        )

        for output_, doc in outputs:
            include_doc = None
            if isinstance(self.llm_chain, LLMChain):
                output = output_[self.llm_chain.output_key]
                if self.llm_chain.prompt.output_parser is not None:
                    include_doc = self.llm_chain.prompt.output_parser.parse(output)
            elif isinstance(output_, bool):
                include_doc = output_
            if include_doc:
                filtered_docs.append(doc)

        return filtered_docs

    async def acompress_documents(
        self,
        documents: Sequence[Document],
        query: str,
        callbacks: Callbacks | None = None,
    ) -> Sequence[Document]:
        """Filter down documents based on their relevance to the query."""
        filtered_docs = []

        config = RunnableConfig(callbacks=callbacks)
        outputs = zip(
            await self.llm_chain.abatch(
                [self.get_input(query, doc) for doc in documents],
                config=config,
            ),
            documents,
            strict=False,
        )
        for output_, doc in outputs:
            include_doc = None
            if isinstance(self.llm_chain, LLMChain):
                output = output_[self.llm_chain.output_key]
                if self.llm_chain.prompt.output_parser is not None:
                    include_doc = self.llm_chain.prompt.output_parser.parse(output)
            elif isinstance(output_, bool):
                include_doc = output_
            if include_doc:
                filtered_docs.append(doc)

        return filtered_docs

    @classmethod
    def from_llm(
        cls,
        llm: BaseLanguageModel,
        # ... (excerpt truncated; the full from_llm() definition continues in chain_filter.py)
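
For orientation, here is a hedged usage sketch based only on the methods shown above; ChatOpenAI is an assumption, and any BaseLanguageModel should work in its place.

from langchain_core.documents import Document
from langchain_openai import ChatOpenAI  # assumption: any BaseLanguageModel works here

# Build the filter from a language model via the from_llm() classmethod
# (shown truncated in the excerpt above).
llm = ChatOpenAI(temperature=0)
doc_filter = LLMChainFilter.from_llm(llm)

docs = [
    Document(page_content="The Eiffel Tower is located in Paris, France."),
    Document(page_content="Bananas are a good source of potassium."),
]

# Each document is run through llm_chain together with the query; only the
# documents the chain judges relevant are returned.
relevant = doc_filter.compress_documents(docs, query="Where is the Eiffel Tower?")
for doc in relevant:
    print(doc.page_content)

In practice the filter is typically passed as the base_compressor of a ContextualCompressionRetriever, so irrelevant hits are dropped before they reach the rest of the chain.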

Frequently Asked Questions

What is the LLMChainFilter class?
LLMChainFilter is a document compressor in the langchain codebase that uses an LLM chain to drop documents that are not relevant to a given query. It is defined in libs/langchain/langchain_classic/retrievers/document_compressors/chain_filter.py.
Where is LLMChainFilter defined?
LLMChainFilter is defined in libs/langchain/langchain_classic/retrievers/document_compressors/chain_filter.py at line 35.
What does LLMChainFilter extend?
LLMChainFilter extends BaseDocumentCompressor only. It does not subclass LLMChain; instead, its llm_chain attribute wraps a Runnable (historically an LLMChain), which is why compress_documents() branches on isinstance(self.llm_chain, LLMChain). A construction sketch follows below.
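
The sketch below is a hedged illustration of that relationship: instead of from_llm(), the filter is built with an explicit runnable chain whose output is already a bool, matching the isinstance(output_, bool) branch in compress_documents(). Import paths and the question/context input keys are assumptions.

from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI  # assumption: any chat or LLM model works
from langchain.output_parsers.boolean import BooleanOutputParser  # import path assumed

# The default get_input is assumed to supply "question" and "context" keys.
prompt = PromptTemplate.from_template(
    "Is the following context relevant to the question? Answer YES or NO.\n"
    "Question: {question}\nContext: {context}"
)

# prompt | llm | BooleanOutputParser yields a Runnable whose batch() output is
# a bool per document, which compress_documents() uses directly.
chain = prompt | ChatOpenAI(temperature=0) | BooleanOutputParser()
doc_filter = LLMChainFilter(llm_chain=chain)

The resulting filter is used exactly like one built with from_llm().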
