abatch() — langchain Function Reference

Architecture documentation for the abatch() function, defined on the BaseLLM class in libs/core/langchain_core/language_models/llms.py in the langchain codebase.

Dependency Diagram

graph TD
  abatch["abatch()"]
  BaseLLM["BaseLLM"]
  abatch -->|defined in| BaseLLM
  agenerate_prompt["agenerate_prompt()"]
  abatch -->|calls| agenerate_prompt
  convert_input["_convert_input()"]
  abatch -->|calls| convert_input
  style abatch fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/core/langchain_core/language_models/llms.py lines 462–505

    async def abatch(
        self,
        inputs: list[LanguageModelInput],
        config: RunnableConfig | list[RunnableConfig] | None = None,
        *,
        return_exceptions: bool = False,
        **kwargs: Any,
    ) -> list[str]:
        if not inputs:
            return []
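        # Broadcast the config so there is exactly one RunnableConfig per input.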
        config = get_config_list(config, len(inputs))
        max_concurrency = config[0].get("max_concurrency")

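        # No concurrency cap: dispatch the entire batch in one agenerate_prompt call.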
        if max_concurrency is None:
            try:
                llm_result = await self.agenerate_prompt(
                    [self._convert_input(input_) for input_ in inputs],
                    callbacks=[c.get("callbacks") for c in config],
                    tags=[c.get("tags") for c in config],
                    metadata=[c.get("metadata") for c in config],
                    run_name=[c.get("run_name") for c in config],
                    **kwargs,
                )
                return [g[0].text for g in llm_result.generations]
            except Exception as e:
                if return_exceptions:
                    return cast("list[str]", [e for _ in inputs])
                raise
        else:
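            # A cap is set: split the inputs into chunks of max_concurrency and
            # recurse, clearing the cap so each chunk is dispatched whole.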
            batches = [
                inputs[i : i + max_concurrency]
                for i in range(0, len(inputs), max_concurrency)
            ]
            config = [{**c, "max_concurrency": None} for c in config]
            return [
                output
                for i, batch in enumerate(batches)
                for output in await self.abatch(
                    batch,
                    config=config[i * max_concurrency : (i + 1) * max_concurrency],
                    return_exceptions=return_exceptions,
                    **kwargs,
                )
            ]
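
Usage Example

A minimal sketch of calling abatch(), assuming the FakeListLLM test stub exported by langchain_core (any concrete BaseLLM subclass behaves the same way). Passing max_concurrency in the config exercises the chunked branch shown above.

    import asyncio

    from langchain_core.language_models import FakeListLLM

    async def main() -> None:
        # FakeListLLM is a test helper that returns canned responses in order.
        llm = FakeListLLM(responses=["alpha", "beta", "gamma"])
        results = await llm.abatch(
            ["prompt 1", "prompt 2", "prompt 3"],
            # A cap of 2 splits the three inputs into chunks of sizes [2, 1].
            config={"max_concurrency": 2},
        )
        print(results)  # ['alpha', 'beta', 'gamma']

    asyncio.run(main())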

Frequently Asked Questions

What does abatch() do?
abatch() asynchronously runs a batch of prompt inputs through the LLM and returns one generated string per input. When max_concurrency is set in the config, the inputs are split into chunks of that size and processed sequentially (see the chunking sketch below).
Where is abatch() defined?
abatch() is defined on the BaseLLM class in libs/core/langchain_core/language_models/llms.py at line 462.
What does abatch() call?
abatch() calls two functions: _convert_input(), which normalizes each raw input into a prompt value, and agenerate_prompt(), which performs the actual generation.
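
As a concrete illustration of the chunking mentioned above, the batching expression from the else branch of abatch() can be run standalone. This is a plain-Python sketch of that one expression, not a library API:

    inputs = ["p1", "p2", "p3", "p4", "p5"]
    max_concurrency = 2

    # The same slicing abatch() uses: chunks of at most max_concurrency inputs.
    batches = [
        inputs[i : i + max_concurrency]
        for i in range(0, len(inputs), max_concurrency)
    ]
    print(batches)  # [['p1', 'p2'], ['p3', 'p4'], ['p5']]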

Analyze Your Own Codebase

Get architecture documentation, dependency graphs, and domain analysis for your codebase in minutes.

Try Supermodel Free