_convert_chunk_to_message_chunk() — langchain Function Reference
Architecture documentation for the _convert_chunk_to_message_chunk() function in chat_models.py from the langchain codebase.
Dependency Diagram
graph TD
    def2bd88_9f61_77d1_2fcb_e9dad1281a22["_convert_chunk_to_message_chunk()"]
    21b8cbde_a9dc_13d6_83f9_010248e2bfc8["chat_models.py"]
    def2bd88_9f61_77d1_2fcb_e9dad1281a22 -->|defined in| 21b8cbde_a9dc_13d6_83f9_010248e2bfc8
    2c99d0d8_7ba8_b5ed_0b85_d9060b83b85d["_stream()"]
    2c99d0d8_7ba8_b5ed_0b85_d9060b83b85d -->|calls| def2bd88_9f61_77d1_2fcb_e9dad1281a22
    52d9a76c_826c_e2cd_136c_f0f928571807["_astream()"]
    52d9a76c_826c_e2cd_136c_f0f928571807 -->|calls| def2bd88_9f61_77d1_2fcb_e9dad1281a22
    style def2bd88_9f61_77d1_2fcb_e9dad1281a22 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/partners/fireworks/langchain_fireworks/chat_models.py lines 219–273
def _convert_chunk_to_message_chunk(
    chunk: Mapping[str, Any], default_class: type[BaseMessageChunk]
) -> BaseMessageChunk:
    choice = chunk["choices"][0]
    _dict = choice["delta"]
    role = cast(str, _dict.get("role"))
    content = cast(str, _dict.get("content") or "")
    additional_kwargs: dict = {}
    tool_call_chunks: list[ToolCallChunk] = []
    if _dict.get("function_call"):
        function_call = dict(_dict["function_call"])
        if "name" in function_call and function_call["name"] is None:
            function_call["name"] = ""
        additional_kwargs["function_call"] = function_call
    if raw_tool_calls := _dict.get("tool_calls"):
        additional_kwargs["tool_calls"] = raw_tool_calls
        for rtc in raw_tool_calls:
            with contextlib.suppress(KeyError):
                tool_call_chunks.append(
                    create_tool_call_chunk(
                        name=rtc["function"].get("name"),
                        args=rtc["function"].get("arguments"),
                        id=rtc.get("id"),
                        index=rtc.get("index"),
                    )
                )
    if role == "user" or default_class == HumanMessageChunk:
        return HumanMessageChunk(content=content)
    if role == "assistant" or default_class == AIMessageChunk:
        if usage := chunk.get("usage"):
            input_tokens = usage.get("prompt_tokens", 0)
            output_tokens = usage.get("completion_tokens", 0)
            usage_metadata = {
                "input_tokens": input_tokens,
                "output_tokens": output_tokens,
                "total_tokens": usage.get("total_tokens", input_tokens + output_tokens),
            }
        else:
            usage_metadata = None
        return AIMessageChunk(
            content=content,
            additional_kwargs=additional_kwargs,
            tool_call_chunks=tool_call_chunks,
            usage_metadata=usage_metadata,  # type: ignore[arg-type]
            response_metadata={"model_provider": "fireworks"},
        )
    if role == "system" or default_class == SystemMessageChunk:
        return SystemMessageChunk(content=content)
    if role == "function" or default_class == FunctionMessageChunk:
        return FunctionMessageChunk(content=content, name=_dict["name"])
    if role == "tool" or default_class == ToolMessageChunk:
        return ToolMessageChunk(content=content, tool_call_id=_dict["tool_call_id"])
    if role or default_class == ChatMessageChunk:
        return ChatMessageChunk(content=content, role=role)
    return default_class(content=content)  # type: ignore[call-arg]
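Usage Example
The sketch below feeds the function a hand-built chunk shaped like one event from the Fireworks streaming chat completions API. The payload values are illustrative assumptions, not captured API output, and the function is a private module-level helper imported here only for demonstration.

from langchain_core.messages import AIMessageChunk
from langchain_fireworks.chat_models import _convert_chunk_to_message_chunk

# Hypothetical streaming event; field values are illustrative only.
chunk = {
    "choices": [
        {
            "delta": {
                "role": "assistant",
                "content": "Hel",
                "tool_calls": [
                    {
                        "id": "call_0",
                        "index": 0,
                        "function": {"name": "get_weather", "arguments": '{"city": "'},
                    }
                ],
            }
        }
    ],
    "usage": {"prompt_tokens": 12, "completion_tokens": 3, "total_tokens": 15},
}

message = _convert_chunk_to_message_chunk(chunk, AIMessageChunk)
assert isinstance(message, AIMessageChunk)
print(message.content)           # "Hel"
print(message.tool_call_chunks)  # one partial tool call
print(message.usage_metadata)    # {'input_tokens': 12, 'output_tokens': 3, 'total_tokens': 15}

Because the delta's role is "assistant", the function takes the AIMessageChunk branch, building usage metadata from the chunk's usage block and collecting the partial tool call into tool_call_chunks.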
Frequently Asked Questions
What does _convert_chunk_to_message_chunk() do?
_convert_chunk_to_message_chunk() converts one raw streaming chunk from the Fireworks chat completions API into the matching LangChain BaseMessageChunk subclass. It reads the first choice's delta, dispatches on its role (falling back to default_class when the role is absent), and carries along the delta's content, any function_call or tool_calls data, and, for assistant chunks, usage metadata plus a "fireworks" model-provider tag in response_metadata.
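A rough sketch of that dispatch, assuming langchain_fireworks is installed; the chunk payloads are hand-built for illustration:

from langchain_core.messages import (
    AIMessageChunk,
    SystemMessageChunk,
    ToolMessageChunk,
)
from langchain_fireworks.chat_models import _convert_chunk_to_message_chunk

def _wrap(delta: dict) -> dict:
    # Minimal chunk envelope the function expects: one choice with a delta.
    return {"choices": [{"delta": delta}]}

# The delta's "role" selects the chunk class.
msg = _convert_chunk_to_message_chunk(
    _wrap({"role": "system", "content": "hi"}), AIMessageChunk
)
assert isinstance(msg, SystemMessageChunk)

# A "tool" delta must carry tool_call_id (the function indexes it directly).
msg = _convert_chunk_to_message_chunk(
    _wrap({"role": "tool", "content": "42", "tool_call_id": "call_0"}),
    AIMessageChunk,
)
assert isinstance(msg, ToolMessageChunk)

# Continuation deltas omit "role"; default_class decides.
msg = _convert_chunk_to_message_chunk(_wrap({"content": "more"}), AIMessageChunk)
assert isinstance(msg, AIMessageChunk)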
Where is _convert_chunk_to_message_chunk() defined?
_convert_chunk_to_message_chunk() is defined in libs/partners/fireworks/langchain_fireworks/chat_models.py at line 219.
What calls _convert_chunk_to_message_chunk()?
_convert_chunk_to_message_chunk() is called by two functions: _stream() and _astream(), the synchronous and asynchronous streaming implementations in the same module. Each converts every streamed chunk as it arrives, as sketched below.
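The following sketch mirrors the typical shape of such a streaming loop in LangChain chat integrations; it is not the verbatim _stream() body, and raw_chunks stands in for the provider's decoded server-sent events:

from collections.abc import Iterator
from typing import Any

from langchain_core.messages import AIMessageChunk, BaseMessageChunk
from langchain_core.outputs import ChatGenerationChunk
from langchain_fireworks.chat_models import _convert_chunk_to_message_chunk

def stream_sketch(
    raw_chunks: Iterator[dict[str, Any]],
) -> Iterator[ChatGenerationChunk]:
    # Start with AIMessageChunk; once a delta names a role, reuse that
    # chunk class for later deltas that omit "role".
    default_chunk_class: type[BaseMessageChunk] = AIMessageChunk
    for chunk in raw_chunks:
        if not chunk.get("choices"):
            continue  # skip keep-alive / empty events
        message_chunk = _convert_chunk_to_message_chunk(chunk, default_chunk_class)
        default_chunk_class = message_chunk.__class__
        yield ChatGenerationChunk(message=message_chunk)

Threading the previous chunk's class back in as default_class is what keeps a stream type-consistent when the provider only sends "role" on the first delta.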