_combine_llm_outputs() — langchain Function Reference
Architecture documentation for the _combine_llm_outputs() method of the ChatMistralAI class in chat_models.py from the langchain codebase.
Dependency Diagram
graph TD
    fn["_combine_llm_outputs()"]
    cls["ChatMistralAI"]
    fn -->|defined in| cls
Source Code
libs/partners/mistralai/langchain_mistralai/chat_models.py lines 585–598
def _combine_llm_outputs(self, llm_outputs: list[dict | None]) -> dict:
    overall_token_usage: dict = {}
    for output in llm_outputs:
        if output is None:
            # Happens in streaming
            continue
        token_usage = output["token_usage"]
        if token_usage is not None:
            for k, v in token_usage.items():
                if k in overall_token_usage:
                    overall_token_usage[k] += v
                else:
                    overall_token_usage[k] = v
    return {"token_usage": overall_token_usage, "model_name": self.model}
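The aggregation is a plain dictionary merge: a key already seen has its value summed, a new key is copied in, and None entries (produced while streaming) are skipped. Below is a minimal standalone sketch of the same behavior; the llm_outputs values are illustrative, not captured Mistral API responses.

# Standalone sketch of the merge performed by _combine_llm_outputs().
# The token counts below are made up for demonstration.
llm_outputs = [
    {"token_usage": {"prompt_tokens": 12, "completion_tokens": 40, "total_tokens": 52}},
    None,  # streaming chunks carry no llm_output and are skipped
    {"token_usage": {"prompt_tokens": 9, "completion_tokens": 31, "total_tokens": 40}},
]

overall_token_usage: dict = {}
for output in llm_outputs:
    if output is None:
        continue
    token_usage = output["token_usage"]
    if token_usage is not None:
        for k, v in token_usage.items():
            # Equivalent to the if/else in the source: sum existing keys, copy new ones.
            overall_token_usage[k] = overall_token_usage.get(k, 0) + v

print(overall_token_usage)
# {'prompt_tokens': 21, 'completion_tokens': 71, 'total_tokens': 92}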
Frequently Asked Questions
What does _combine_llm_outputs() do?
_combine_llm_outputs() is a method of ChatMistralAI in the langchain codebase, defined in libs/partners/mistralai/langchain_mistralai/chat_models.py. It merges the per-generation llm_output dicts into a single dict: token usage counts are summed key by key, None entries (which occur during streaming) are skipped, and the model name is attached to the result.
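In practice this method is rarely called directly: BaseChatModel.generate() in langchain-core collects the llm_output dict of each generation, passes the list to _combine_llm_outputs(), and stores the merged result on the returned LLMResult. A hedged usage sketch follows; the model name is an example and a valid MISTRAL_API_KEY is assumed to be set.

from langchain_core.messages import HumanMessage
from langchain_mistralai import ChatMistralAI

llm = ChatMistralAI(model="mistral-small-latest")  # model name is illustrative
result = llm.generate([[HumanMessage(content="Hello")]])

# llm_output is the dict built by _combine_llm_outputs(), e.g.
# {"token_usage": {"prompt_tokens": ..., "completion_tokens": ..., "total_tokens": ...},
#  "model_name": "mistral-small-latest"}
print(result.llm_output)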
Where is _combine_llm_outputs() defined?
_combine_llm_outputs() is defined in libs/partners/mistralai/langchain_mistralai/chat_models.py at lines 585–598.