_combine_llm_outputs() — langchain Function Reference
Architecture documentation for the _combine_llm_outputs() method of ChatFireworks, defined in chat_models.py in the langchain codebase (langchain-fireworks partner package).
Entity Profile
Dependency Diagram
graph TD
    b0935afc_11dc_d98f_c5bb_d116c6ed3f61["_combine_llm_outputs()"]
    1a5cd25a_9420_c6b2_ec8d_2b53c6427514["ChatFireworks"]
    b0935afc_11dc_d98f_c5bb_d116c6ed3f61 -->|defined in| 1a5cd25a_9420_c6b2_ec8d_2b53c6427514
    style b0935afc_11dc_d98f_c5bb_d116c6ed3f61 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/partners/fireworks/langchain_fireworks/chat_models.py lines 462–481
def _combine_llm_outputs(self, llm_outputs: list[dict | None]) -> dict:
    overall_token_usage: dict = {}
    system_fingerprint = None
    for output in llm_outputs:
        if output is None:
            # Happens in streaming
            continue
        token_usage = output["token_usage"]
        if token_usage is not None:
            for k, v in token_usage.items():
                if k in overall_token_usage:
                    overall_token_usage[k] += v
                else:
                    overall_token_usage[k] = v
        if system_fingerprint is None:
            system_fingerprint = output.get("system_fingerprint")
    combined = {"token_usage": overall_token_usage, "model_name": self.model_name}
    if system_fingerprint:
        combined["system_fingerprint"] = system_fingerprint
    return combined
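The snippet below is a minimal sketch of the merge semantics. It assumes ChatFireworks accepts a placeholder API key at construction time (no request is issued here), and the model id and token counts are fabricated for illustration; in normal use the framework calls this method for you.

from langchain_fireworks import ChatFireworks

# Assumption: a placeholder key is accepted at construction time,
# since _combine_llm_outputs() makes no network calls.
llm = ChatFireworks(
    model="accounts/fireworks/models/llama-v3p1-8b-instruct",  # hypothetical model id
    api_key="fw-placeholder",
)

# Hand-made llm_output dicts standing in for two completed calls
# and one streamed call (which contributes None).
outputs = [
    {
        "token_usage": {"prompt_tokens": 12, "completion_tokens": 30, "total_tokens": 42},
        "system_fingerprint": "fp_abc123",
    },
    None,  # streaming produces no llm_output
    {
        "token_usage": {"prompt_tokens": 8, "completion_tokens": 10, "total_tokens": 18},
        "system_fingerprint": "fp_def456",
    },
]

combined = llm._combine_llm_outputs(outputs)
print(combined)
# {'token_usage': {'prompt_tokens': 20, 'completion_tokens': 40, 'total_tokens': 60},
#  'model_name': 'accounts/fireworks/models/llama-v3p1-8b-instruct',
#  'system_fingerprint': 'fp_abc123'}

Note that only the first non-null system_fingerprint is kept (fp_abc123 above), while token counts are summed across every non-None output.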
Frequently Asked Questions
What does _combine_llm_outputs() do?
_combine_llm_outputs() merges the per-call llm_output dictionaries produced by a batch of chat generations into a single dictionary. It sums each call's token_usage counters key by key, skips None entries (which occur during streaming), records the model_name, and keeps the first non-null system_fingerprint it encounters.
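Callers do not invoke this method directly: BaseChatModel.generate() runs one completion per message list and uses _combine_llm_outputs() to build the llm_output field of the returned LLMResult. A hedged sketch of that caller side (hypothetical model id; assumes FIREWORKS_API_KEY is set in the environment):

from langchain_core.messages import HumanMessage
from langchain_fireworks import ChatFireworks

llm = ChatFireworks(model="accounts/fireworks/models/llama-v3p1-8b-instruct")  # hypothetical model id

# Two message lists -> two completions -> one combined llm_output.
result = llm.generate(
    [
        [HumanMessage(content="What is 2 + 2?")],
        [HumanMessage(content="Name a prime number.")],
    ]
)
print(result.llm_output["token_usage"])  # usage summed across both completions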
Where is _combine_llm_outputs() defined?
_combine_llm_outputs() is defined in libs/partners/fireworks/langchain_fireworks/chat_models.py at line 462.