
_get_llm_for_structured_output_when_thinking_is_enabled() — langchain Function Reference

Architecture documentation for the _get_llm_for_structured_output_when_thinking_is_enabled() function in chat_models.py from the langchain codebase.

Entity Profile

Dependency Diagram

    graph TD
      57f22736_087b_6915_6548_5529978001fa["_get_llm_for_structured_output_when_thinking_is_enabled()"]
      977b57b2_5d0e_bcf4_a43e_b52857105005["ChatAnthropic"]
      57f22736_087b_6915_6548_5529978001fa -->|defined in| 977b57b2_5d0e_bcf4_a43e_b52857105005
      a484b53c_8c1c_5314_de44_ea0330c8aed2["with_structured_output()"]
      a484b53c_8c1c_5314_de44_ea0330c8aed2 -->|calls| 57f22736_087b_6915_6548_5529978001fa
      b4ebc2e5_c582_39ab_6403_117a559ab366["bind_tools()"]
      57f22736_087b_6915_6548_5529978001fa -->|calls| b4ebc2e5_c582_39ab_6403_117a559ab366
      style 57f22736_087b_6915_6548_5529978001fa fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/partners/anthropic/langchain_anthropic/chat_models.py lines 1415–1442

    def _get_llm_for_structured_output_when_thinking_is_enabled(
        self,
        schema: dict | type,
        formatted_tool: AnthropicTool,
    ) -> Runnable[LanguageModelInput, BaseMessage]:
        thinking_admonition = (
            "You are attempting to use structured output via forced tool calling, "
            "which is not guaranteed when `thinking` is enabled. This method will "
            "raise an OutputParserException if tool calls are not generated. Consider "
            "disabling `thinking` or adjust your prompt to ensure the tool is called."
        )
        warnings.warn(thinking_admonition, stacklevel=2)
        llm = self.bind_tools(
            [schema],
            # We don't specify tool_choice here since the API will reject attempts to
            # force tool calls when thinking=true
            ls_structured_output_format={
                "kwargs": {"method": "function_calling"},
                "schema": formatted_tool,
            },
        )

        def _raise_if_no_tool_calls(message: AIMessage) -> AIMessage:
            if not message.tool_calls:
                raise OutputParserException(thinking_admonition)
            return message

        return llm | _raise_if_no_tool_calls
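
This helper is not typically called directly; it is selected by with_structured_output() when extended thinking is enabled on the model. Below is a minimal usage sketch, assuming a ChatAnthropic instance configured with thinking enabled; the model name, token budgets, and schema are illustrative assumptions, not taken from this codebase.

    from pydantic import BaseModel

    from langchain_anthropic import ChatAnthropic


    class Answer(BaseModel):
        """Illustrative output schema (an assumption for this sketch)."""

        summary: str
        confidence: float


    # Illustrative configuration: the model name and token budgets are assumptions.
    llm = ChatAnthropic(
        model="claude-3-7-sonnet-latest",
        max_tokens=5000,
        thinking={"type": "enabled", "budget_tokens": 2000},
    )

    # Because thinking is enabled, with_structured_output() routes through
    # _get_llm_for_structured_output_when_thinking_is_enabled(): a warning is
    # emitted at bind time, and the resulting runnable raises
    # OutputParserException if the model responds without calling the tool.
    structured_llm = llm.with_structured_output(Answer)
    result = structured_llm.invoke("Summarize the plot of Hamlet in one sentence.")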

Frequently Asked Questions

What does _get_llm_for_structured_output_when_thinking_is_enabled() do?
_get_llm_for_structured_output_when_thinking_is_enabled() is a ChatAnthropic helper, defined in libs/partners/anthropic/langchain_anthropic/chat_models.py, that prepares the model for structured output when extended thinking is enabled. Because the Anthropic API rejects forced tool calls while `thinking` is enabled, it warns that structured output is not guaranteed, binds the schema as a tool without setting tool_choice, and pipes the model into a check that raises OutputParserException if the response contains no tool calls.
Where is _get_llm_for_structured_output_when_thinking_is_enabled() defined?
_get_llm_for_structured_output_when_thinking_is_enabled() is defined in libs/partners/anthropic/langchain_anthropic/chat_models.py at line 1415.
What does _get_llm_for_structured_output_when_thinking_is_enabled() call?
_get_llm_for_structured_output_when_thinking_is_enabled() calls one function: bind_tools().
What calls _get_llm_for_structured_output_when_thinking_is_enabled()?
_get_llm_for_structured_output_when_thinking_is_enabled() is called by one function: with_structured_output().
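
Because the runnable returned by this method raises OutputParserException when no tool calls are generated, callers may want to handle that case explicitly. A hypothetical continuation of the sketch above (structured_llm is assumed to be the runnable built there):

    from langchain_core.exceptions import OutputParserException

    # structured_llm is assumed to be built as in the earlier sketch. Note that
    # the warning about forced tool calling is emitted when with_structured_output()
    # is called; the OutputParserException can only surface at invoke time.
    try:
        result = structured_llm.invoke("Summarize the plot of Hamlet in one sentence.")
    except OutputParserException:
        # The model answered without calling the structured-output tool; retry
        # with an adjusted prompt or disable thinking.
        result = None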
