_convert_chunk_to_message_chunk() — langchain Function Reference
Architecture documentation for the _convert_chunk_to_message_chunk() function in huggingface.py from the langchain codebase.
Dependency Diagram
graph TD
    5fa8863f_86a4_f68d_871c_bd4a2d0d8448["_convert_chunk_to_message_chunk()"]
    d84d1503_7d4c_e393_0d48_409c5faa5e2d["huggingface.py"]
    5fa8863f_86a4_f68d_871c_bd4a2d0d8448 -->|defined in| d84d1503_7d4c_e393_0d48_409c5faa5e2d
    ced9b52f_7bf4_4dc3_bc59_a48a9563d9bc["_stream()"]
    ced9b52f_7bf4_4dc3_bc59_a48a9563d9bc -->|calls| 5fa8863f_86a4_f68d_871c_bd4a2d0d8448
    5b8bd0f7_fb8d_1d6b_224f_9245b077cec2["_astream()"]
    5b8bd0f7_fb8d_1d6b_224f_9245b077cec2 -->|calls| 5fa8863f_86a4_f68d_871c_bd4a2d0d8448
    style 5fa8863f_86a4_f68d_871c_bd4a2d0d8448 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/partners/huggingface/langchain_huggingface/chat_models/huggingface.py lines 248–301
def _convert_chunk_to_message_chunk(
    chunk: Mapping[str, Any], default_class: type[BaseMessageChunk]
) -> BaseMessageChunk:
    choice = chunk["choices"][0]
    _dict = choice["delta"]
    role = cast(str, _dict.get("role"))
    content = cast(str, _dict.get("content") or "")
    additional_kwargs: dict = {}
    tool_call_chunks: list[ToolCallChunk] = []
    if _dict.get("function_call"):
        function_call = dict(_dict["function_call"])
        if "name" in function_call and function_call["name"] is None:
            function_call["name"] = ""
        additional_kwargs["function_call"] = function_call
    if raw_tool_calls := _dict.get("tool_calls"):
        additional_kwargs["tool_calls"] = raw_tool_calls
        for rtc in raw_tool_calls:
            with contextlib.suppress(KeyError):
                tool_call_chunks.append(
                    create_tool_call_chunk(
                        name=rtc["function"].get("name"),
                        args=rtc["function"].get("arguments"),
                        id=rtc.get("id"),
                        index=rtc.get("index"),
                    )
                )
    if role == "user" or default_class == HumanMessageChunk:
        return HumanMessageChunk(content=content)
    if role == "assistant" or default_class == AIMessageChunk:
        if usage := chunk.get("usage"):
            input_tokens = usage.get("prompt_tokens", 0)
            output_tokens = usage.get("completion_tokens", 0)
            usage_metadata = {
                "input_tokens": input_tokens,
                "output_tokens": output_tokens,
                "total_tokens": usage.get("total_tokens", input_tokens + output_tokens),
            }
        else:
            usage_metadata = None
        return AIMessageChunk(
            content=content,
            additional_kwargs=additional_kwargs,
            tool_call_chunks=tool_call_chunks,
            usage_metadata=usage_metadata,  # type: ignore[arg-type]
        )
    if role == "system" or default_class == SystemMessageChunk:
        return SystemMessageChunk(content=content)
    if role == "function" or default_class == FunctionMessageChunk:
        return FunctionMessageChunk(content=content, name=_dict["name"])
    if role == "tool" or default_class == ToolMessageChunk:
        return ToolMessageChunk(content=content, tool_call_id=_dict["tool_call_id"])
    if role or default_class == ChatMessageChunk:
        return ChatMessageChunk(content=content, role=role)
    return default_class(content=content)  # type: ignore[call-arg]
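Usage Example
The following is a minimal, hypothetical sketch of calling the function directly; the chunk dict mimics an OpenAI-style streaming payload, and its values are illustrative rather than captured output. The import of the private helper is shown only for demonstration.

from langchain_core.messages import AIMessageChunk
from langchain_huggingface.chat_models.huggingface import (
    _convert_chunk_to_message_chunk,  # private helper; imported here only for illustration
)

# Hypothetical OpenAI-style streaming chunk with a delta payload.
sample_chunk = {
    "choices": [
        {"delta": {"role": "assistant", "content": "Hello"}},
    ],
    "usage": {"prompt_tokens": 12, "completion_tokens": 1, "total_tokens": 13},
}

message = _convert_chunk_to_message_chunk(sample_chunk, AIMessageChunk)
assert isinstance(message, AIMessageChunk)
assert message.content == "Hello"
# Usage metadata is attached because the chunk carries a "usage" mapping.
assert message.usage_metadata == {
    "input_tokens": 12,
    "output_tokens": 1,
    "total_tokens": 13,
}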
Frequently Asked Questions
What does _convert_chunk_to_message_chunk() do?
_convert_chunk_to_message_chunk() converts a single streaming response chunk (an OpenAI-style mapping with a choices[0].delta payload) into the matching LangChain BaseMessageChunk subclass. It extracts the delta's role and content, collects any function_call and tool_calls data into additional_kwargs and ToolCallChunk objects, builds usage metadata when the chunk carries token counts, and dispatches on the role (falling back to default_class) to return a HumanMessageChunk, AIMessageChunk, SystemMessageChunk, FunctionMessageChunk, ToolMessageChunk, or ChatMessageChunk.
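As a hedged illustration of the tool-call branch (hypothetical values): a delta carrying tool_calls yields an AIMessageChunk whose tool_call_chunks mirror each raw entry.

from langchain_core.messages import AIMessageChunk

# Hypothetical chunk in which the model is streaming a tool call;
# "arguments" is a partial JSON fragment, as is typical mid-stream.
chunk = {
    "choices": [
        {
            "delta": {
                "role": "assistant",
                "content": "",
                "tool_calls": [
                    {
                        "id": "call_0",
                        "index": 0,
                        "function": {"name": "get_weather", "arguments": '{"city": "Par'},
                    }
                ],
            }
        }
    ]
}

msg = _convert_chunk_to_message_chunk(chunk, AIMessageChunk)
assert msg.tool_call_chunks[0]["name"] == "get_weather"
assert msg.tool_call_chunks[0]["args"] == '{"city": "Par'
# The raw entries are also preserved verbatim in additional_kwargs.
assert msg.additional_kwargs["tool_calls"] == chunk["choices"][0]["delta"]["tool_calls"]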
Where is _convert_chunk_to_message_chunk() defined?
_convert_chunk_to_message_chunk() is defined in libs/partners/huggingface/langchain_huggingface/chat_models/huggingface.py at line 248.
What calls _convert_chunk_to_message_chunk()?
_convert_chunk_to_message_chunk() is called by two functions: _stream and _astream, the synchronous and asynchronous streaming paths.
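Schematically, both callers loop over decoded streaming chunks and thread the previous chunk's class back in as default_class, so continuation deltas that omit "role" keep the same message type. The sketch below is a simplified, hypothetical rendering of that pattern, not the verbatim langchain source.

from collections.abc import Iterator
from langchain_core.messages import AIMessageChunk, BaseMessageChunk
from langchain_core.outputs import ChatGenerationChunk

def _stream_sketch(raw_chunks: Iterator[dict]) -> Iterator[ChatGenerationChunk]:
    # Assume assistant output first; after each chunk, remember its concrete
    # class so role-less deltas later in the stream stay the same type.
    default_chunk_class: type[BaseMessageChunk] = AIMessageChunk
    for raw in raw_chunks:
        if not raw.get("choices"):
            continue
        message_chunk = _convert_chunk_to_message_chunk(raw, default_chunk_class)
        default_chunk_class = message_chunk.__class__
        yield ChatGenerationChunk(message=message_chunk)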