batch() — langchain Function Reference

Architecture documentation for the batch() function in llms.py from the langchain codebase.

Dependency Diagram

graph TD
  batch["batch()"]
  BaseLLM["BaseLLM"]
  batch -->|defined in| BaseLLM
  generate_prompt["generate_prompt()"]
  batch -->|calls| generate_prompt
  convert_input["_convert_input()"]
  batch -->|calls| convert_input
  style batch fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/core/langchain_core/language_models/llms.py lines 415–459

    def batch(
        self,
        inputs: list[LanguageModelInput],
        config: RunnableConfig | list[RunnableConfig] | None = None,
        *,
        return_exceptions: bool = False,
        **kwargs: Any,
    ) -> list[str]:
        if not inputs:
            return []

        config = get_config_list(config, len(inputs))
        max_concurrency = config[0].get("max_concurrency")

        if max_concurrency is None:
            try:
                llm_result = self.generate_prompt(
                    [self._convert_input(input_) for input_ in inputs],
                    callbacks=[c.get("callbacks") for c in config],
                    tags=[c.get("tags") for c in config],
                    metadata=[c.get("metadata") for c in config],
                    run_name=[c.get("run_name") for c in config],
                    **kwargs,
                )
                return [g[0].text for g in llm_result.generations]
            except Exception as e:
                if return_exceptions:
                    return cast("list[str]", [e for _ in inputs])
                raise
        else:
            batches = [
                inputs[i : i + max_concurrency]
                for i in range(0, len(inputs), max_concurrency)
            ]
            config = [{**c, "max_concurrency": None} for c in config]
            return [
                output
                for i, batch in enumerate(batches)
                for output in self.batch(
                    batch,
                    config=config[i * max_concurrency : (i + 1) * max_concurrency],
                    return_exceptions=return_exceptions,
                    **kwargs,
                )
            ]
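
The two branches above can be exercised with a short usage sketch. This is a minimal example, assuming a current langchain_core install; it uses FakeListLLM, the in-memory test stub that ships with langchain_core, in place of a real provider, and the prompt strings are placeholders.

    from langchain_core.language_models import FakeListLLM

    # Four canned responses, handed back one per prompt in call order.
    llm = FakeListLLM(responses=["alpha", "beta", "gamma", "delta"])

    # No max_concurrency in the config: all four prompts are forwarded to
    # generate_prompt() in a single call, one output string per input.
    print(llm.batch(["p1", "p2", "p3", "p4"]))
    # -> ['alpha', 'beta', 'gamma', 'delta']

    # max_concurrency=2: the inputs are chunked into [p1, p2] and [p3, p4],
    # and batch() recurses on each chunk with max_concurrency cleared, so
    # the recursive calls take the single-call branch above.
    llm = FakeListLLM(responses=["alpha", "beta", "gamma", "delta"])
    print(llm.batch(["p1", "p2", "p3", "p4"], config={"max_concurrency": 2}))
    # -> ['alpha', 'beta', 'gamma', 'delta']

Note the config slice in the recursive branch: chunk i receives the per-input configs config[i * max_concurrency : (i + 1) * max_concurrency], so callbacks, tags, and metadata stay aligned with their inputs.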

Frequently Asked Questions

What does batch() do?
batch() runs the LLM over a list of inputs and returns one generated string per input. When the config carries no max_concurrency, it converts every input with _convert_input() and forwards the whole list to generate_prompt() in a single call; otherwise it splits the inputs into chunks of at most max_concurrency items and recurses on each chunk. With return_exceptions=True, an exception raised during generation is returned once per input instead of being raised (see the sketch after this list).
Where is batch() defined?
batch() is defined in libs/core/langchain_core/language_models/llms.py at line 415.
What does batch() call?
batch() calls two functions: _convert_input(), which normalizes each LanguageModelInput into a PromptValue, and generate_prompt(), which performs the underlying generation.
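
The return_exceptions flag can be illustrated with a sketch as well. FailingLLM below is hypothetical, defined only to force a failure; it is not part of langchain.

    from typing import Any, Optional

    from langchain_core.language_models.llms import LLM

    class FailingLLM(LLM):
        """Hypothetical stub whose calls always fail (illustration only)."""

        @property
        def _llm_type(self) -> str:
            return "failing-stub"

        def _call(
            self,
            prompt: str,
            stop: Optional[list[str]] = None,
            run_manager: Any = None,
            **kwargs: Any,
        ) -> str:
            raise RuntimeError("provider unavailable")

    llm = FailingLLM()

    # return_exceptions=True: batch() catches the error raised through
    # generate_prompt() and returns the same exception once per input.
    results = llm.batch(["p1", "p2", "p3"], return_exceptions=True)
    assert all(isinstance(r, RuntimeError) for r in results)

    # With the default return_exceptions=False the error propagates:
    # llm.batch(["p1"])  # raises RuntimeError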
