_format_output() — langchain Function Reference
Architecture documentation for the _format_output() function in chat_models.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
  e2be9f1d_bbea_f0b0_96d6_30115dc6ec54["_format_output()"]
  977b57b2_5d0e_bcf4_a43e_b52857105005["ChatAnthropic"]
  e2be9f1d_bbea_f0b0_96d6_30115dc6ec54 -->|defined in| 977b57b2_5d0e_bcf4_a43e_b52857105005
  cfc69e00_d0bd_d481_d810_57dcf2ca7d10["_generate()"]
  cfc69e00_d0bd_d481_d810_57dcf2ca7d10 -->|calls| e2be9f1d_bbea_f0b0_96d6_30115dc6ec54
  417dddb4_df70_f3d8_3ca3_1a8f1b4ef59e["_agenerate()"]
  417dddb4_df70_f3d8_3ca3_1a8f1b4ef59e -->|calls| e2be9f1d_bbea_f0b0_96d6_30115dc6ec54
  e8fbda3b_7aa9_1575_74a6_35146039904f["_create_usage_metadata()"]
  e2be9f1d_bbea_f0b0_96d6_30115dc6ec54 -->|calls| e8fbda3b_7aa9_1575_74a6_35146039904f
  style e2be9f1d_bbea_f0b0_96d6_30115dc6ec54 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/partners/anthropic/langchain_anthropic/chat_models.py lines 1331–1385
def _format_output(self, data: Any, **kwargs: Any) -> ChatResult:
    """Format the output from the Anthropic API to LC."""
    data_dict = data.model_dump()
    content = data_dict["content"]
    # Remove citations if they are None - introduced in anthropic sdk 0.45
    for block in content:
        if isinstance(block, dict):
            if "citations" in block and block["citations"] is None:
                block.pop("citations")
            if "caller" in block and block["caller"] is None:
                block.pop("caller")
            if (
                block.get("type") == "thinking"
                and "text" in block
                and block["text"] is None
            ):
                block.pop("text")
    llm_output = {
        k: v for k, v in data_dict.items() if k not in ("content", "role", "type")
    }
    if (
        (container := llm_output.get("container"))
        and isinstance(container, dict)
        and (expires_at := container.get("expires_at"))
        and isinstance(expires_at, datetime.datetime)
    ):
        # TODO: dump all `data` with `mode="json"`
        llm_output["container"]["expires_at"] = expires_at.isoformat()
    response_metadata = {"model_provider": "anthropic"}
    if "model" in llm_output and "model_name" not in llm_output:
        llm_output["model_name"] = llm_output["model"]
    if (
        len(content) == 1
        and content[0]["type"] == "text"
        and not content[0].get("citations")
    ):
        msg = AIMessage(
            content=content[0]["text"], response_metadata=response_metadata
        )
    elif any(block["type"] == "tool_use" for block in content):
        tool_calls = extract_tool_calls(content)
        msg = AIMessage(
            content=content,
            tool_calls=tool_calls,
            response_metadata=response_metadata,
        )
    else:
        msg = AIMessage(content=content, response_metadata=response_metadata)
    msg.usage_metadata = _create_usage_metadata(data.usage)
    return ChatResult(
        generations=[ChatGeneration(message=msg)],
        llm_output=llm_output,
    )
Frequently Asked Questions
What does _format_output() do?
_format_output() converts a raw Anthropic API response into a LangChain ChatResult. It dumps the response model to a dict, strips None-valued citations, caller, and thinking-text fields from content blocks, and collects the remaining response fields into llm_output (serializing a container expires_at datetime and mirroring model into model_name). It then builds an AIMessage: plain string content for a single citation-free text block, content plus extracted tool_calls when tool_use blocks are present, and the raw block list otherwise. Finally, it attaches usage metadata via _create_usage_metadata() and wraps the message in a ChatGeneration.
Where is _format_output() defined?
_format_output() is defined in libs/partners/anthropic/langchain_anthropic/chat_models.py at line 1331.
What does _format_output() call?
_format_output() calls 1 function(s): _create_usage_metadata.
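The body of _create_usage_metadata is not shown on this page, so the following is only a hypothetical stand-in illustrating the kind of mapping such a helper performs: Anthropic's usage object reports input and output tokens separately, while LangChain's usage metadata also carries a total. Names and structure here are assumptions, not the real implementation:

```python
from typing import Any

def create_usage_metadata_sketch(usage: dict[str, Any]) -> dict[str, int]:
    """Hypothetical sketch of a usage-metadata helper: fold Anthropic-style
    input_tokens/output_tokens counts into a dict that includes a total."""
    input_tokens = int(usage.get("input_tokens") or 0)
    output_tokens = int(usage.get("output_tokens") or 0)
    return {
        "input_tokens": input_tokens,
        "output_tokens": output_tokens,
        "total_tokens": input_tokens + output_tokens,
    }

meta = create_usage_metadata_sketch({"input_tokens": 10, "output_tokens": 5})
assert meta["total_tokens"] == 15
```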
What calls _format_output()?
_format_output() is called by 2 function(s): _agenerate, _generate.