_astream_with_aggregation() — langchain Function Reference

Architecture documentation for the _astream_with_aggregation() function in libs/partners/ollama/langchain_ollama/llms.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  24323d78_8a9f_dfa5_957d_6cdf837960b0["_astream_with_aggregation()"]
  6220540a_4afa_4d39_cdb6_bb2f4e26fe5f["OllamaLLM"]
  24323d78_8a9f_dfa5_957d_6cdf837960b0 -->|defined in| 6220540a_4afa_4d39_cdb6_bb2f4e26fe5f
  4d13dd97_2efb_7876_bd9a_d6f306eb9901["_agenerate()"]
  4d13dd97_2efb_7876_bd9a_d6f306eb9901 -->|calls| 24323d78_8a9f_dfa5_957d_6cdf837960b0
  ace15fad_c8f4_0757_7ede_5b26d488adae["_acreate_generate_stream()"]
  24323d78_8a9f_dfa5_957d_6cdf837960b0 -->|calls| ace15fad_c8f4_0757_7ede_5b26d488adae
  style 24323d78_8a9f_dfa5_957d_6cdf837960b0 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/partners/ollama/langchain_ollama/llms.py lines 367–407

    async def _astream_with_aggregation(
        self,
        prompt: str,
        stop: list[str] | None = None,
        run_manager: AsyncCallbackManagerForLLMRun | None = None,
        verbose: bool = False,  # noqa: FBT002
        **kwargs: Any,
    ) -> GenerationChunk:
        final_chunk = None
        thinking_content = ""
        async for stream_resp in self._acreate_generate_stream(prompt, stop, **kwargs):
            if not isinstance(stream_resp, str):
                if stream_resp.get("thinking"):
                    thinking_content += stream_resp["thinking"]
                chunk = GenerationChunk(
                    text=stream_resp.get("response", ""),
                    generation_info=(
                        dict(stream_resp) if stream_resp.get("done") is True else None
                    ),
                )
                if final_chunk is None:
                    final_chunk = chunk
                else:
                    final_chunk += chunk
                if run_manager:
                    await run_manager.on_llm_new_token(
                        chunk.text,
                        chunk=chunk,
                        verbose=verbose,
                    )
        if final_chunk is None:
            msg = "No data received from Ollama stream."
            raise ValueError(msg)

        if thinking_content:
            if final_chunk.generation_info:
                final_chunk.generation_info["thinking"] = thinking_content
            else:
                final_chunk.generation_info = {"thinking": thinking_content}

        return final_chunk

Frequently Asked Questions

What does _astream_with_aggregation() do?
_astream_with_aggregation() consumes the async response stream from _acreate_generate_stream(), concatenates the streamed chunks into a single GenerationChunk, reports each token to the run manager via on_llm_new_token(), and accumulates any "thinking" content into the final chunk's generation_info. It raises a ValueError if the stream yields no data.
Where is _astream_with_aggregation() defined?
_astream_with_aggregation() is defined in libs/partners/ollama/langchain_ollama/llms.py at line 367.
What does _astream_with_aggregation() call?
_astream_with_aggregation() calls 1 function(s): _acreate_generate_stream.
What calls _astream_with_aggregation()?
_astream_with_aggregation() is called by 1 function(s): _agenerate.
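
As the source listing shows, a stream that yields nothing leaves `final_chunk` as None and raises `ValueError("No data received from Ollama stream.")`. The sketch below reproduces only that error path; `empty_stream()` is a hypothetical stand-in for a dropped Ollama connection, not langchain code.

```python
import asyncio


async def empty_stream():
    # An async generator that yields no items, standing in for an
    # Ollama stream that produced no data (e.g. connection dropped).
    return
    yield  # unreachable; its presence makes this an async generator


async def aggregate_empty():
    # Mirrors the guard at the end of _astream_with_aggregation().
    final = None
    async for chunk in empty_stream():
        final = chunk
    if final is None:
        raise ValueError("No data received from Ollama stream.")
    return final


try:
    asyncio.run(aggregate_empty())
except ValueError as e:
    print(e)  # No data received from Ollama stream.
```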
