
create_openai_fn_runnable() — langchain Function Reference

Architecture documentation for the create_openai_fn_runnable() function in base.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  e8355b27_2273_5b96_4b84_f25ab554e0b3["create_openai_fn_runnable()"]
  22e1446e_2db6_2965_eef8_bf9239c6dbfc["base.py"]
  e8355b27_2273_5b96_4b84_f25ab554e0b3 -->|defined in| 22e1446e_2db6_2965_eef8_bf9239c6dbfc
  d352db91_94a6_ec14_88eb_6644997dcde0["_create_openai_functions_structured_output_runnable()"]
  d352db91_94a6_ec14_88eb_6644997dcde0 -->|calls| e8355b27_2273_5b96_4b84_f25ab554e0b3
  9d952f43_b82e_ed01_a6db_5be428bc4ed2["get_openai_output_parser()"]
  e8355b27_2273_5b96_4b84_f25ab554e0b3 -->|calls| 9d952f43_b82e_ed01_a6db_5be428bc4ed2
  style e8355b27_2273_5b96_4b84_f25ab554e0b3 fill:#6366f1,stroke:#818cf8,color:#fff


Source Code

libs/langchain/langchain_classic/chains/structured_output/base.py lines 66–147

def create_openai_fn_runnable(
    functions: Sequence[dict[str, Any] | type[BaseModel] | Callable],
    llm: Runnable,
    prompt: BasePromptTemplate | None = None,
    *,
    enforce_single_function_usage: bool = True,
    output_parser: BaseOutputParser | BaseGenerationOutputParser | None = None,
    **llm_kwargs: Any,
) -> Runnable:
    """Create a runnable sequence that uses OpenAI functions.

    Args:
        functions: A sequence of either dictionaries, pydantic.BaseModel classes, or
            Python functions. If dictionaries are passed in, they are assumed to
            already be valid OpenAI functions. If only a single
            function is passed in, then the model will be forced to use that
            function. pydantic.BaseModels and Python functions should have docstrings
            describing what the function does. For best results, pydantic.BaseModels
            should have descriptions of the parameters and Python functions should have
            Google Python style args descriptions in the docstring. Additionally,
            Python functions should only use primitive types (str, int, float, bool) or
            pydantic.BaseModels for arguments.
        llm: Language model to use, assumed to support the OpenAI function-calling API.
        prompt: BasePromptTemplate to pass to the model.
        enforce_single_function_usage: only used if a single function is passed in. If
            True, then the model will be forced to use the given function. If `False`,
            then the model will be given the option to use the given function or not.
        output_parser: BaseLLMOutputParser to use for parsing model outputs. By default
            will be inferred from the function types. If pydantic.BaseModels are passed
            in, then the OutputParser will try to parse outputs using those. Otherwise
            model outputs will simply be parsed as JSON. If multiple functions are
            passed in and they are not pydantic.BaseModels, the chain output will
            include both the name of the function that was returned and the arguments
            to pass to the function.
        **llm_kwargs: Additional named arguments to pass to the language model.

    Returns:
        A runnable sequence that will pass in the given functions to the model when run.

    Example:
        ```python
        from typing import Optional

        from langchain_classic.chains.structured_output import create_openai_fn_runnable
        from langchain_openai import ChatOpenAI
        from pydantic import BaseModel, Field


        class RecordPerson(BaseModel):
            '''Record some identifying information about a person.'''

            name: str = Field(..., description="The person's name")
            age: int = Field(..., description="The person's age")
            fav_food: str | None = Field(None, description="The person's favorite food")


        class RecordDog(BaseModel):
            '''Record some identifying information about a dog.'''

            name: str = Field(..., description="The dog's name")
            color: str = Field(..., description="The dog's color")
            fav_food: str | None = Field(None, description="The dog's favorite food")


        model = ChatOpenAI(model="gpt-4", temperature=0)
        structured_model = create_openai_fn_runnable([RecordPerson, RecordDog], model)
        structured_model.invoke("Harry was a chubby brown beagle who loved chicken")
        # -> RecordDog(name="Harry", color="brown", fav_food="chicken")

        ```
    """
    if not functions:
        msg = "Need to pass in at least one function. Received zero."
        raise ValueError(msg)
    openai_functions = [convert_to_openai_function(f) for f in functions]
    llm_kwargs_: dict[str, Any] = {"functions": openai_functions, **llm_kwargs}
    if len(openai_functions) == 1 and enforce_single_function_usage:
        llm_kwargs_["function_call"] = {"name": openai_functions[0]["name"]}
    output_parser = output_parser or get_openai_output_parser(functions)
    if prompt:
        return prompt | llm.bind(**llm_kwargs_) | output_parser
    return llm.bind(**llm_kwargs_) | output_parser
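
To complement the docstring example, the following illustrative sketch shows the prompt path with enforce_single_function_usage left at its default. It is not part of base.py: it assumes the real ChatPromptTemplate and ChatOpenAI classes and reuses a RecordDog model like the one in the docstring example.

```python
# Hedged usage sketch (not part of the source file above).
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field

from langchain_classic.chains.structured_output import create_openai_fn_runnable


class RecordDog(BaseModel):
    """Record some identifying information about a dog."""

    name: str = Field(..., description="The dog's name")
    color: str = Field(..., description="The dog's color")


prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "Record details about any dog the user mentions."),
        ("human", "{input}"),
    ]
)
model = ChatOpenAI(model="gpt-4", temperature=0)

# With a single function and enforce_single_function_usage left at its default,
# the runnable binds function_call={"name": "RecordDog"}, forcing the model to
# call RecordDog; the inferred parser then returns a RecordDog instance.
chain = create_openai_fn_runnable([RecordDog], model, prompt)
chain.invoke({"input": "Harry was a chubby brown beagle who loved chicken"})
# -> RecordDog(name="Harry", color="brown")
```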


Frequently Asked Questions

What does create_openai_fn_runnable() do?
create_openai_fn_runnable() builds a runnable sequence that binds the given OpenAI function definitions to a language model and parses the resulting function call into structured output. It is defined in libs/langchain/langchain_classic/chains/structured_output/base.py.
Where is create_openai_fn_runnable() defined?
create_openai_fn_runnable() is defined in libs/langchain/langchain_classic/chains/structured_output/base.py at line 66.
What does create_openai_fn_runnable() call?
create_openai_fn_runnable() calls one function, get_openai_output_parser(), which supplies a default output parser when none is provided (see the sketch below this FAQ).
What calls create_openai_fn_runnable()?
create_openai_fn_runnable() is called by one function, _create_openai_functions_structured_output_runnable().
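
The FAQ above references get_openai_output_parser(), which create_openai_fn_runnable() uses to infer a parser only when output_parser is not supplied. Below is a minimal sketch of overriding that default, assuming JsonOutputFunctionsParser from langchain_core is available and reusing a RecordDog-style model; it is not part of the documented source.

```python
# Hedged sketch: pass an explicit output_parser so the chain returns the raw
# function-call arguments as a dict instead of a pydantic object.
from langchain_core.output_parsers.openai_functions import JsonOutputFunctionsParser
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field

from langchain_classic.chains.structured_output import create_openai_fn_runnable


class RecordDog(BaseModel):
    """Record some identifying information about a dog."""

    name: str = Field(..., description="The dog's name")
    color: str = Field(..., description="The dog's color")


model = ChatOpenAI(model="gpt-4", temperature=0)
chain = create_openai_fn_runnable(
    [RecordDog],
    model,
    output_parser=JsonOutputFunctionsParser(),
)
# No prompt is passed, so the chain is llm.bind(...) | output_parser and can be
# invoked with a plain string.
chain.invoke("Harry was a chubby brown beagle who loved chicken")
# -> {"name": "Harry", "color": "brown"}
```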
