create_llm_result() — langchain Function Reference

Architecture documentation for the create_llm_result() function in base.py from the langchain codebase.

Dependency Diagram

graph TD
  01611288_3478_a4f4_894c_f5db6dd12168["create_llm_result()"]
  6bee45b2_b649_e251_1fdc_dcf49f8bb331["BaseOpenAI"]
  01611288_3478_a4f4_894c_f5db6dd12168 -->|defined in| 6bee45b2_b649_e251_1fdc_dcf49f8bb331
  f5892a19_915a_df0a_fbd3_f7a5a0993b64["_generate()"]
  f5892a19_915a_df0a_fbd3_f7a5a0993b64 -->|calls| 01611288_3478_a4f4_894c_f5db6dd12168
  2d321de2_cda4_2af7_7550_f3f179b1dddb["_agenerate()"]
  2d321de2_cda4_2af7_7550_f3f179b1dddb -->|calls| 01611288_3478_a4f4_894c_f5db6dd12168
  style 01611288_3478_a4f4_894c_f5db6dd12168 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/partners/openai/langchain_openai/llms/base.py lines 591–620

    def create_llm_result(
        self,
        choices: Any,
        prompts: list[str],
        params: dict[str, Any],
        token_usage: dict[str, int],
        *,
        system_fingerprint: str | None = None,
    ) -> LLMResult:
        """Create the LLMResult from the choices and prompts."""
        generations = []
        n = params.get("n", self.n)
        for i, _ in enumerate(prompts):
            sub_choices = choices[i * n : (i + 1) * n]
            generations.append(
                [
                    Generation(
                        text=choice["text"],
                        generation_info={
                            "finish_reason": choice.get("finish_reason"),
                            "logprobs": choice.get("logprobs"),
                        },
                    )
                    for choice in sub_choices
                ]
            )
        llm_output = {"token_usage": token_usage, "model_name": self.model_name}
        if system_fingerprint:
            llm_output["system_fingerprint"] = system_fingerprint
        return LLMResult(generations=generations, llm_output=llm_output)
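
The key step is the slice `choices[i * n : (i + 1) * n]`: the completions API returns one flat list containing `n` completions per prompt, in prompt order, and the slice regroups that flat list into one sublist per prompt. A minimal standalone sketch of that regrouping, using illustrative plain-dict data rather than a real API response:

```python
# Sketch of the per-prompt slicing create_llm_result() performs.
# The data below is made up for illustration; a real response would
# come from the OpenAI completions endpoint.
prompts = ["tell me a joke", "write a haiku"]
n = 2  # completions requested per prompt (params.get("n", self.n))
choices = [  # flat list: n entries per prompt, in prompt order
    {"text": "joke A", "finish_reason": "stop", "logprobs": None},
    {"text": "joke B", "finish_reason": "stop", "logprobs": None},
    {"text": "haiku A", "finish_reason": "stop", "logprobs": None},
    {"text": "haiku B", "finish_reason": "length", "logprobs": None},
]

generations = []
for i, _ in enumerate(prompts):
    # Take the n choices that belong to prompt i.
    sub_choices = choices[i * n : (i + 1) * n]
    generations.append([c["text"] for c in sub_choices])

print(generations)
# [['joke A', 'joke B'], ['haiku A', 'haiku B']]
```

In the real method each entry becomes a `Generation` carrying `finish_reason` and `logprobs` in its `generation_info`, but the grouping logic is exactly this slice.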

Frequently Asked Questions

What does create_llm_result() do?
create_llm_result() builds an LLMResult from the raw completion choices returned by the OpenAI API. It slices the flat choices list into groups of n per prompt, wraps each choice in a Generation carrying its finish_reason and logprobs, and attaches token usage, the model name, and (when present) the system fingerprint as llm_output.
Where is create_llm_result() defined?
create_llm_result() is defined in libs/partners/openai/langchain_openai/llms/base.py at line 591.
What calls create_llm_result()?
create_llm_result() is called by two functions: _generate() and _agenerate(), the synchronous and asynchronous generation paths of BaseOpenAI.
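
A hedged sketch of how a caller such as _generate() might assemble the arguments before invoking this method: the raw completion response is split into its choices and usage fields, which map onto the `choices` and `token_usage` parameters. The response dict below is illustrative data shaped like a completions response, not output from a real request, and the actual call is shown commented out since it requires a configured BaseOpenAI instance.

```python
# Hypothetical caller sketch. mock_response is made-up data shaped like
# an OpenAI completions response; it is not the actual _generate() code.
mock_response = {
    "choices": [
        {"text": "Paris", "finish_reason": "stop", "logprobs": None},
    ],
    "usage": {"prompt_tokens": 5, "completion_tokens": 1, "total_tokens": 6},
    "model": "gpt-3.5-turbo-instruct",
}

prompts = ["What is the capital of France?"]
params = {"n": 1}

# Split the response into the pieces create_llm_result() expects.
choices = mock_response["choices"]
token_usage = mock_response["usage"]

# With a configured llm (an instance of BaseOpenAI), the call would be:
# result = llm.create_llm_result(choices, prompts, params, token_usage)

print(len(choices), token_usage["total_tokens"])
```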
