
_combine_llm_outputs() — langchain Function Reference

Architecture documentation for the _combine_llm_outputs() method of the ChatGroq class, defined in libs/partners/groq/langchain_groq/chat_models.py in the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  combine_llm_outputs["_combine_llm_outputs()"]
  ChatGroq["ChatGroq"]
  combine_llm_outputs -->|defined in| ChatGroq
  style combine_llm_outputs fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/partners/groq/langchain_groq/chat_models.py lines 813–847

    def _combine_llm_outputs(self, llm_outputs: list[dict | None]) -> dict:
        overall_token_usage: dict = {}
        system_fingerprint = None
        for output in llm_outputs:
            if output is None:
                # Happens in streaming
                continue
            token_usage = output["token_usage"]
            if token_usage is not None:
                for k, v in token_usage.items():
                    if k in overall_token_usage and v is not None:
                        # Handle nested dictionaries
                        if isinstance(v, dict):
                            if k not in overall_token_usage:
                                overall_token_usage[k] = {}
                            for nested_k, nested_v in v.items():
                                if (
                                    nested_k in overall_token_usage[k]
                                    and nested_v is not None
                                ):
                                    overall_token_usage[k][nested_k] += nested_v
                                else:
                                    overall_token_usage[k][nested_k] = nested_v
                        else:
                            overall_token_usage[k] += v
                    else:
                        overall_token_usage[k] = v
            if system_fingerprint is None:
                system_fingerprint = output.get("system_fingerprint")
        combined = {"token_usage": overall_token_usage, "model_name": self.model_name}
        if system_fingerprint:
            combined["system_fingerprint"] = system_fingerprint
        if self.service_tier:
            combined["service_tier"] = self.service_tier
        return combined
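
The merge behavior can be illustrated without calling the Groq API. The following sketch re-implements the same aggregation as a standalone function and applies it to sample per-generation llm_output dicts; the helper name combine_llm_outputs, the sample model name, and the token counts are illustrative and not part of the library.

def combine_llm_outputs(llm_outputs, model_name, service_tier=None):
    """Standalone sketch of the merge logic shown above (illustrative only)."""
    overall_token_usage: dict = {}
    system_fingerprint = None
    for output in llm_outputs:
        if output is None:
            continue  # streamed generations carry no llm_output
        token_usage = output["token_usage"]
        if token_usage is not None:
            for k, v in token_usage.items():
                if k in overall_token_usage and v is not None:
                    if isinstance(v, dict):
                        # Sum nested counters key by key.
                        for nested_k, nested_v in v.items():
                            if nested_k in overall_token_usage[k] and nested_v is not None:
                                overall_token_usage[k][nested_k] += nested_v
                            else:
                                overall_token_usage[k][nested_k] = nested_v
                    else:
                        overall_token_usage[k] += v
                else:
                    overall_token_usage[k] = v
        if system_fingerprint is None:
            system_fingerprint = output.get("system_fingerprint")
    combined = {"token_usage": overall_token_usage, "model_name": model_name}
    if system_fingerprint:
        combined["system_fingerprint"] = system_fingerprint
    if service_tier:
        combined["service_tier"] = service_tier
    return combined

outputs = [
    {"token_usage": {"prompt_tokens": 12, "completion_tokens": 20, "total_tokens": 32},
     "system_fingerprint": "fp_abc"},
    {"token_usage": {"prompt_tokens": 9, "completion_tokens": 15, "total_tokens": 24}},
    None,  # e.g. a generation that was streamed
]
print(combine_llm_outputs(outputs, model_name="llama-3.1-8b-instant"))
# {'token_usage': {'prompt_tokens': 21, 'completion_tokens': 35, 'total_tokens': 56},
#  'model_name': 'llama-3.1-8b-instant', 'system_fingerprint': 'fp_abc'}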

Frequently Asked Questions

What does _combine_llm_outputs() do?
_combine_llm_outputs() merges the per-generation llm_output dicts from a batch call into a single dict: it sums the token_usage counters (including nested counter dictionaries), skips outputs that are None (as happens during streaming), keeps the first non-None system_fingerprint it sees, and adds the model_name and, when set, the service_tier. See the usage sketch after this FAQ.
Where is _combine_llm_outputs() defined?
_combine_llm_outputs() is defined on the ChatGroq class in libs/partners/groq/langchain_groq/chat_models.py, starting at line 813.
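
In normal use _combine_llm_outputs() is not called directly; the base chat model's generate() builds LLMResult.llm_output by passing the per-generation llm_output dicts to this method. A minimal usage sketch, assuming a valid GROQ_API_KEY in the environment (the model name is illustrative):

from langchain_core.messages import HumanMessage
from langchain_groq import ChatGroq

chat = ChatGroq(model="llama-3.1-8b-instant")
result = chat.generate([[HumanMessage("Hi")], [HumanMessage("Hello")]])
# result.llm_output is the dict assembled by _combine_llm_outputs():
# token_usage summed across both generations, plus model_name,
# system_fingerprint, and service_tier when it is set.
print(result.llm_output)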
