_iterate_over_stream() — langchain Function Reference
Architecture documentation for the _iterate_over_stream() function in chat_models.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    3b4a8d88_e271_d530_a2cc_d47df386a668["_iterate_over_stream()"]
    19e4be00_71fb_5390_6768_f6e6158f49b4["ChatOllama"]
    3b4a8d88_e271_d530_a2cc_d47df386a668 -->|defined in| 19e4be00_71fb_5390_6768_f6e6158f49b4
    a9bec111_57fc_1b0b_ba6f_432ebeb29556["_chat_stream_with_aggregation()"]
    a9bec111_57fc_1b0b_ba6f_432ebeb29556 -->|calls| 3b4a8d88_e271_d530_a2cc_d47df386a668
    3b227c44_d18c_1d76_4ec4_040ff227b361["_stream()"]
    3b227c44_d18c_1d76_4ec4_040ff227b361 -->|calls| 3b4a8d88_e271_d530_a2cc_d47df386a668
    54502394_47d8_fe09_4e7a_ec02c5d30d0c["_create_chat_stream()"]
    3b4a8d88_e271_d530_a2cc_d47df386a668 -->|calls| 54502394_47d8_fe09_4e7a_ec02c5d30d0c
    b0d70326_371d_ec7e_1902_93fb811b392d["_get_usage_metadata_from_generation_info()"]
    3b4a8d88_e271_d530_a2cc_d47df386a668 -->|calls| b0d70326_371d_ec7e_1902_93fb811b392d
    22b4b72b_4c0e_fde2_2401_52040de102e0["_get_tool_calls_from_response()"]
    3b4a8d88_e271_d530_a2cc_d47df386a668 -->|calls| 22b4b72b_4c0e_fde2_2401_52040de102e0
    style 3b4a8d88_e271_d530_a2cc_d47df386a668 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/partners/ollama/langchain_ollama/chat_models.py lines 1050–1110
def _iterate_over_stream(
self,
messages: list[BaseMessage],
stop: list[str] | None = None,
**kwargs: Any,
) -> Iterator[ChatGenerationChunk]:
reasoning = kwargs.get("reasoning", self.reasoning)
for stream_resp in self._create_chat_stream(messages, stop, **kwargs):
if not isinstance(stream_resp, str):
content = (
stream_resp["message"]["content"]
if "message" in stream_resp and "content" in stream_resp["message"]
else ""
)
# Warn and skip responses with done_reason: 'load' and empty content
# These indicate the model was loaded but no actual generation occurred
is_load_response_with_empty_content = (
stream_resp.get("done") is True
and stream_resp.get("done_reason") == "load"
and not content.strip()
)
if is_load_response_with_empty_content:
log.warning(
"Ollama returned empty response with done_reason='load'. "
"This typically indicates the model was loaded but no content "
"was generated. Skipping this response."
)
continue
if stream_resp.get("done") is True:
generation_info = dict(stream_resp)
if "model" in generation_info:
generation_info["model_name"] = generation_info["model"]
generation_info["model_provider"] = "ollama"
_ = generation_info.pop("message", None)
else:
generation_info = None
additional_kwargs = {}
if (
reasoning
and "message" in stream_resp
and (thinking_content := stream_resp["message"].get("thinking"))
):
additional_kwargs["reasoning_content"] = thinking_content
chunk = ChatGenerationChunk(
message=AIMessageChunk(
content=content,
additional_kwargs=additional_kwargs,
usage_metadata=_get_usage_metadata_from_generation_info(
stream_resp
),
tool_calls=_get_tool_calls_from_response(stream_resp),
),
generation_info=generation_info,
)
yield chunk
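The control flow above can be illustrated with a minimal, self-contained sketch. This uses plain dicts and yields (content, generation_info) tuples instead of the real ChatGenerationChunk objects, and the function name is a stand-in, not the actual ChatOllama method:

```python
# Sketch of _iterate_over_stream()'s filtering and generation_info logic,
# using plain dicts in place of LangChain's chunk types.

def iterate_over_stream(stream):
    for resp in stream:
        if isinstance(resp, str):
            continue
        content = resp.get("message", {}).get("content", "")
        # Skip done_reason='load' responses with empty content: the model
        # was loaded but no generation occurred.
        if (
            resp.get("done") is True
            and resp.get("done_reason") == "load"
            and not content.strip()
        ):
            continue
        if resp.get("done") is True:
            # The final response carries generation metadata.
            generation_info = dict(resp)
            if "model" in generation_info:
                generation_info["model_name"] = generation_info["model"]
            generation_info["model_provider"] = "ollama"
            generation_info.pop("message", None)
        else:
            generation_info = None
        yield content, generation_info

# Example stream: a content chunk, a skipped 'load' response, a final chunk.
stream = [
    {"message": {"content": "Hello"}, "done": False},
    {"done": True, "done_reason": "load", "message": {"content": ""}},
    {"message": {"content": "!"}, "done": True, "done_reason": "stop", "model": "llama3"},
]
chunks = list(iterate_over_stream(stream))
```

Only the final `done: True` response produces a non-None generation_info; intermediate chunks carry content alone.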
Frequently Asked Questions
What does _iterate_over_stream() do?
_iterate_over_stream() is a private method of ChatOllama, defined in libs/partners/ollama/langchain_ollama/chat_models.py. It consumes the raw response stream from _create_chat_stream(), skips empty responses with done_reason='load', and yields ChatGenerationChunk objects carrying the message content along with optional reasoning content, usage metadata, and tool calls.
Where is _iterate_over_stream() defined?
_iterate_over_stream() is defined in libs/partners/ollama/langchain_ollama/chat_models.py at line 1050.
What does _iterate_over_stream() call?
_iterate_over_stream() calls three functions: _create_chat_stream() to obtain the raw response stream, _get_usage_metadata_from_generation_info() to extract token usage, and _get_tool_calls_from_response() to extract tool calls.
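In the Ollama API, the final streamed response reports token counts as prompt_eval_count and eval_count. A hedged sketch of what _get_usage_metadata_from_generation_info() plausibly computes from those fields (the exact return type and field mapping in LangChain are assumptions here):

```python
# Hypothetical stand-in for _get_usage_metadata_from_generation_info():
# derives token usage from an Ollama final response dict.

def get_usage_metadata_from_generation_info(resp):
    # prompt_eval_count / eval_count are the Ollama API's token counters.
    input_tokens = resp.get("prompt_eval_count")
    output_tokens = resp.get("eval_count")
    if input_tokens is None or output_tokens is None:
        return None
    return {
        "input_tokens": input_tokens,
        "output_tokens": output_tokens,
        "total_tokens": input_tokens + output_tokens,
    }

usage = get_usage_metadata_from_generation_info(
    {"done": True, "prompt_eval_count": 12, "eval_count": 34}
)
```

Intermediate chunks lack these counters, so the helper returns None for them and the usage metadata lands only on the final chunk.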
What calls _iterate_over_stream()?
_iterate_over_stream() is called by two functions: _chat_stream_with_aggregation() and _stream().
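A caller like _chat_stream_with_aggregation() presumably folds the yielded chunks into one final result. A rough illustration with plain tuples (the real code adds ChatGenerationChunk objects, which merge content and metadata; this aggregation strategy is an assumption):

```python
# Hypothetical aggregation over (content, generation_info) tuples,
# mimicking how a caller might combine streamed chunks.

def aggregate(chunks):
    final_content = ""
    final_info = None
    for content, generation_info in chunks:
        # Concatenate streamed content pieces in order.
        final_content += content
        # Keep the generation_info from the final done=True response.
        if generation_info is not None:
            final_info = generation_info
    return final_content, final_info

text, info = aggregate([("Hel", None), ("lo", None), ("", {"done_reason": "stop"})])
```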