_astream() — langchain Function Reference
Architecture documentation for the _astream() function in huggingface.py from the langchain codebase.
Dependency Diagram
graph TD
    5b8bd0f7_fb8d_1d6b_224f_9245b077cec2["_astream()"]
    8cf0d6c0_abf8_3ee2_fd00_8bfc8c02058a["ChatHuggingFace"]
    5b8bd0f7_fb8d_1d6b_224f_9245b077cec2 -->|defined in| 8cf0d6c0_abf8_3ee2_fd00_8bfc8c02058a
    f159f0cd_7dad_a4c8_5648_66f72caa1ece["_agenerate()"]
    f159f0cd_7dad_a4c8_5648_66f72caa1ece -->|calls| 5b8bd0f7_fb8d_1d6b_224f_9245b077cec2
    9722f3bb_f150_918c_aa80_b6c7608bdffa["_should_stream_usage()"]
    5b8bd0f7_fb8d_1d6b_224f_9245b077cec2 -->|calls| 9722f3bb_f150_918c_aa80_b6c7608bdffa
    4c7378b7_670a_b9c9_e573_d0052d7e8308["_create_message_dicts()"]
    5b8bd0f7_fb8d_1d6b_224f_9245b077cec2 -->|calls| 4c7378b7_670a_b9c9_e573_d0052d7e8308
    5fa8863f_86a4_f68d_871c_bd4a2d0d8448["_convert_chunk_to_message_chunk()"]
    5b8bd0f7_fb8d_1d6b_224f_9245b077cec2 -->|calls| 5fa8863f_86a4_f68d_871c_bd4a2d0d8448
    style 5b8bd0f7_fb8d_1d6b_224f_9245b077cec2 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/partners/huggingface/langchain_huggingface/chat_models/huggingface.py lines 891–945
async def _astream(
    self,
    messages: list[BaseMessage],
    stop: list[str] | None = None,
    run_manager: AsyncCallbackManagerForLLMRun | None = None,
    *,
    stream_usage: bool | None = None,
    **kwargs: Any,
) -> AsyncIterator[ChatGenerationChunk]:
    stream_usage = self._should_stream_usage(stream_usage=stream_usage, **kwargs)
    if stream_usage:
        kwargs["stream_options"] = {"include_usage": stream_usage}
    message_dicts, params = self._create_message_dicts(messages, stop)
    params = {**params, **kwargs, "stream": True}
    default_chunk_class: type[BaseMessageChunk] = AIMessageChunk
    async for chunk in await self.llm.async_client.chat_completion(
        messages=message_dicts, **params
    ):
        if len(chunk["choices"]) == 0:
            if usage := chunk.get("usage"):
                usage_msg = AIMessageChunk(
                    content="",
                    additional_kwargs={},
                    response_metadata={},
                    usage_metadata={
                        "input_tokens": usage.get("prompt_tokens", 0),
                        "output_tokens": usage.get("completion_tokens", 0),
                        "total_tokens": usage.get("total_tokens", 0),
                    },
                )
                yield ChatGenerationChunk(message=usage_msg)
            continue
        choice = chunk["choices"][0]
        message_chunk = _convert_chunk_to_message_chunk(chunk, default_chunk_class)
        generation_info = {}
        if finish_reason := choice.get("finish_reason"):
            generation_info["finish_reason"] = finish_reason
            generation_info["model_name"] = self.model_id
        logprobs = choice.get("logprobs")
        if logprobs:
            generation_info["logprobs"] = logprobs
        default_chunk_class = message_chunk.__class__
        generation_chunk = ChatGenerationChunk(
            message=message_chunk, generation_info=generation_info or None
        )
        if run_manager:
            await run_manager.on_llm_new_token(
                token=generation_chunk.text,
                chunk=generation_chunk,
                logprobs=logprobs,
            )
        yield generation_chunk
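The streaming loop above interleaves two kinds of chunks: content chunks with a non-empty choices list, and a trailing usage-only chunk (empty choices) when usage streaming is enabled. The pattern can be sketched with a toy async generator; fake_chat_completion and its payloads are hypothetical stand-ins that only mimic the dict shapes the function consumes.

```python
import asyncio

# Hypothetical stand-in for self.llm.async_client.chat_completion():
# the dict shapes mimic what _astream() consumes (a "choices" list,
# plus an optional trailing usage-only chunk).
async def fake_chat_completion():
    yield {"choices": [{"delta": {"content": "Hello"}, "finish_reason": None}]}
    yield {"choices": [{"delta": {"content": " world"}, "finish_reason": "stop"}]}
    # Usage-only chunk, sent last when stream_options.include_usage is set.
    yield {
        "choices": [],
        "usage": {"prompt_tokens": 5, "completion_tokens": 2, "total_tokens": 7},
    }

async def consume():
    text_parts, usage = [], None
    async for chunk in fake_chat_completion():
        if len(chunk["choices"]) == 0:
            # Mirrors _astream(): empty choices means a usage-only chunk.
            if u := chunk.get("usage"):
                usage = {
                    "input_tokens": u.get("prompt_tokens", 0),
                    "output_tokens": u.get("completion_tokens", 0),
                    "total_tokens": u.get("total_tokens", 0),
                }
            continue
        text_parts.append(chunk["choices"][0]["delta"].get("content", ""))
    return "".join(text_parts), usage

text, usage = asyncio.run(consume())
```

Handling the empty-choices case before indexing `chunk["choices"][0]` is what keeps the usage-only chunk from raising an IndexError mid-stream.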
Frequently Asked Questions
What does _astream() do?
_astream() is the asynchronous streaming method of ChatHuggingFace in the langchain codebase, defined in libs/partners/huggingface/langchain_huggingface/chat_models/huggingface.py. It sends the conversation to the HuggingFace chat-completion endpoint with stream=True, yields a ChatGenerationChunk per streamed token, and, when usage streaming is enabled, yields a final chunk carrying usage_metadata.
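Before streaming begins, _astream() merges request parameters (lines 900–904 of the listing): caller kwargs override the defaults from _create_message_dicts(), and "stream" is always forced to True. A small illustrative re-creation, where base_params and the temperature values are made up for the example:

```python
# Re-creation of the param assembly in _astream(); base_params stands in
# for what _create_message_dicts() would return, and the values are invented.
kwargs = {"temperature": 0.7}
stream_usage = True
if stream_usage:
    # Same shape as the source: asks the endpoint for a usage-only final chunk.
    kwargs["stream_options"] = {"include_usage": stream_usage}
base_params = {"model": "some-model", "temperature": 1.0}
# Merge order matters: caller kwargs win over defaults; "stream" wins over both.
params = {**base_params, **kwargs, "stream": True}
```

Because later entries in a dict-unpacking merge take precedence, a caller cannot accidentally disable streaming by passing stream=False in kwargs.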
Where is _astream() defined?
_astream() is defined in libs/partners/huggingface/langchain_huggingface/chat_models/huggingface.py at line 891.
What does _astream() call?
_astream() calls three functions: _should_stream_usage, _create_message_dicts, and _convert_chunk_to_message_chunk.
What calls _astream()?
_astream() is called by one function: _agenerate.