_astream() — langchain Function Reference
Architecture documentation for the _astream() function in llms.py from the langchain codebase.
Dependency Diagram
graph TD
    astream["_astream()"]
    ollamallm["OllamaLLM"]
    acreate["_acreate_generate_stream()"]
    astream -->|defined in| ollamallm
    astream -->|calls| acreate
    style astream fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/partners/ollama/langchain_ollama/llms.py lines 520–549
async def _astream(
    self,
    prompt: str,
    stop: list[str] | None = None,
    run_manager: AsyncCallbackManagerForLLMRun | None = None,
    **kwargs: Any,
) -> AsyncIterator[GenerationChunk]:
    reasoning = kwargs.get("reasoning", self.reasoning)
    async for stream_resp in self._acreate_generate_stream(prompt, stop, **kwargs):
        if not isinstance(stream_resp, str):
            additional_kwargs = {}
            if reasoning and (thinking_content := stream_resp.get("thinking")):
                additional_kwargs["reasoning_content"] = thinking_content
            chunk = GenerationChunk(
                text=(stream_resp.get("response", "")),
                generation_info={
                    "finish_reason": self.stop,
                    **additional_kwargs,
                    **(
                        dict(stream_resp) if stream_resp.get("done") is True else {}
                    ),
                },
            )
            if run_manager:
                await run_manager.on_llm_new_token(
                    chunk.text,
                    verbose=self.verbose,
                )
            yield chunk
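The chunk-assembly logic in the loop above can be illustrated in isolation. The following is a minimal sketch using plain dicts instead of GenerationChunk, with a hypothetical fake_generate_stream() standing in for _acreate_generate_stream()'s output; it is not the library's API, only the shape of the per-token stream Ollama produces (one JSON object per token, with the final object carrying "done": True and full metadata):

```python
import asyncio
from typing import Any, AsyncIterator

# Hypothetical stand-in for _acreate_generate_stream(): one dict per
# token; the final dict has "done": True plus response metadata.
async def fake_generate_stream() -> AsyncIterator[dict[str, Any]]:
    for part in (
        {"response": "Hello", "done": False},
        {"response": " world", "thinking": "a greeting", "done": False},
        {"response": "", "done": True, "total_duration": 123},
    ):
        yield part

async def collect(reasoning: bool = True) -> list[dict[str, Any]]:
    chunks = []
    async for stream_resp in fake_generate_stream():
        additional_kwargs = {}
        # Surface intermediate "thinking" tokens when reasoning is on.
        if reasoning and (thinking := stream_resp.get("thinking")):
            additional_kwargs["reasoning_content"] = thinking
        chunks.append({
            "text": stream_resp.get("response", ""),
            "info": {
                **additional_kwargs,
                # Only the final chunk merges in the full metadata.
                **(dict(stream_resp) if stream_resp.get("done") is True else {}),
            },
        })
    return chunks

chunks = asyncio.run(collect())
print("".join(c["text"] for c in chunks))  # Hello world
```

Concatenating the text of all chunks reconstructs the full completion, while only the final chunk's info dict carries stream-level metadata such as total_duration.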
Frequently Asked Questions
What does _astream() do?
_astream() is the private async streaming implementation for OllamaLLM, defined in libs/partners/ollama/langchain_ollama/llms.py. It iterates the response stream from _acreate_generate_stream(), wraps each parsed response in a GenerationChunk (attaching reasoning content when enabled, and the full response metadata on the final chunk), notifies the run manager of each new token, and yields the chunks.
Where is _astream() defined?
_astream() is defined in libs/partners/ollama/langchain_ollama/llms.py at line 520.
What does _astream() call?
_astream() calls one function: _acreate_generate_stream(), which issues the underlying streaming generate request and yields the raw response stream.
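The delegation itself is a standard async-generator pattern: _astream() iterates the lower-level stream and skips raw string frames (the isinstance(stream_resp, str) guard in the source). A generic sketch with hypothetical names, not the library's API:

```python
import asyncio
from typing import Any, AsyncIterator

# Hypothetical low-level stream: may interleave raw string frames
# (which the wrapper ignores) with parsed JSON dicts.
async def low_level_stream() -> AsyncIterator[Any]:
    yield ""                                # non-dict frame, skipped
    yield {"response": "ok", "done": True}  # parsed response, kept

async def wrapped_stream() -> AsyncIterator[dict[str, Any]]:
    # Same shape as _astream(): consume, filter, re-yield.
    async for frame in low_level_stream():
        if not isinstance(frame, str):
            yield frame

async def main() -> list[dict[str, Any]]:
    return [frame async for frame in wrapped_stream()]

frames = asyncio.run(main())
print(frames)  # [{'response': 'ok', 'done': True}]
```

Because the wrapper is itself an async generator, callers can consume it with `async for` exactly as they would the low-level stream, with the filtering applied transparently.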