_compat.py — langchain Source File
Architecture documentation for _compat.py, a Python file in the langchain codebase. 3 imports, 0 dependents.
Entity Profile
Dependency Diagram
graph LR
    e163e324_6898_5de8_208f_842c6da17dfb["_compat.py"]
    7025b240_fdc3_cf68_b72f_f41dac94566b["json"]
    e163e324_6898_5de8_208f_842c6da17dfb --> 7025b240_fdc3_cf68_b72f_f41dac94566b
    8e2034b7_ceb8_963f_29fc_2ea6b50ef9b3["typing"]
    e163e324_6898_5de8_208f_842c6da17dfb --> 8e2034b7_ceb8_963f_29fc_2ea6b50ef9b3
    d758344f_537f_649e_f467_b9d7442e86df["langchain_core.messages"]
    e163e324_6898_5de8_208f_842c6da17dfb --> d758344f_537f_649e_f467_b9d7442e86df
    style e163e324_6898_5de8_208f_842c6da17dfb fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
from __future__ import annotations

import json
from typing import Any, cast

from langchain_core.messages import content as types


def _convert_from_v1_to_groq(
    content: list[types.ContentBlock],
    model_provider: str | None,
) -> tuple[list[dict[str, Any] | str], dict]:
    new_content: list = []
    new_additional_kwargs: dict = {}
    for i, block in enumerate(content):
        if block["type"] == "text":
            new_content.append({"text": block.get("text", ""), "type": "text"})
        elif (
            block["type"] == "reasoning"
            and (reasoning := block.get("reasoning"))
            and model_provider == "groq"
        ):
            new_additional_kwargs["reasoning_content"] = reasoning
        elif block["type"] == "server_tool_call" and model_provider == "groq":
            new_block = {}
            if "args" in block:
                new_block["arguments"] = json.dumps(block["args"])
            if idx := block.get("extras", {}).get("index"):
                new_block["index"] = idx
            if block.get("name") == "web_search":
                new_block["type"] = "search"
            elif block.get("name") == "code_interpreter":
                new_block["type"] = "python"
            else:
                new_block["type"] = ""
            if i < len(content) - 1 and content[i + 1]["type"] == "server_tool_result":
                result = cast("types.ServerToolResult", content[i + 1])
                for k, v in result.get("extras", {}).items():
                    new_block[k] = v  # noqa: PERF403
                if "output" in result:
                    new_block["output"] = result["output"]
            if "executed_tools" not in new_additional_kwargs:
                new_additional_kwargs["executed_tools"] = []
            new_additional_kwargs["executed_tools"].append(new_block)
        elif block["type"] == "server_tool_result":
            continue
        elif (
            block["type"] == "non_standard"
            and "value" in block
            and model_provider == "groq"
        ):
            new_content.append(block["value"])
        else:
            new_content.append(block)
    # For consistency with v0 payloads, we cast single text blocks to str
    if (
        len(new_content) == 1
        and isinstance(new_content[0], dict)
        and new_content[0].get("type") == "text"
        and (text_content := new_content[0].get("text"))
        and isinstance(text_content, str)
    ):
        return text_content, new_additional_kwargs
    return new_content, new_additional_kwargs
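To illustrate the return shape, here is a minimal usage sketch, not part of the source file. The input blocks are hypothetical and assume plain dicts are accepted at runtime in place of typed ContentBlock values; the module is assumed importable as langchain_groq._compat, per the file path listed below.

from langchain_groq._compat import _convert_from_v1_to_groq

# Hypothetical v1 content: one reasoning block followed by one text block.
content = [
    {"type": "reasoning", "reasoning": "Considering the question..."},
    {"type": "text", "text": "Hello from Groq"},
]

new_content, additional_kwargs = _convert_from_v1_to_groq(content, model_provider="groq")

# The reasoning block is moved into additional_kwargs, and the single
# remaining text block is collapsed to a plain string for v0 compatibility:
# new_content == "Hello from Groq"
# additional_kwargs == {"reasoning_content": "Considering the question..."}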
Domain
- CoreAbstractions
Subdomains
- MessageSchema
Functions
- _convert_from_v1_to_groq
Dependencies
- json
- langchain_core.messages
- typing
Source
- libs/partners/groq/langchain_groq/_compat.py
Frequently Asked Questions
What does _compat.py do?
_compat.py is a Python source file in the langchain codebase. It provides a compatibility helper, _convert_from_v1_to_groq, that converts v1 message content blocks into Groq-style payloads: text blocks stay in the message content, while reasoning and server tool activity are moved into additional kwargs (reasoning_content and executed_tools). It belongs to the CoreAbstractions domain, MessageSchema subdomain.
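As a sketch of the Groq-specific tool branch, the example below traces a hypothetical server_tool_call / server_tool_result pair through the function. The block contents are invented for illustration and use only the keys the function actually reads ("name", "args", "extras", "output").

from langchain_groq._compat import _convert_from_v1_to_groq

content = [
    {
        "type": "server_tool_call",
        "name": "web_search",
        "args": {"query": "langchain groq"},
        "extras": {"index": 1},
    },
    {"type": "server_tool_result", "output": "search results...", "extras": {}},
]

new_content, additional_kwargs = _convert_from_v1_to_groq(content, "groq")

# The call/result pair is folded into additional_kwargs["executed_tools"]:
# [{"arguments": '{"query": "langchain groq"}',  # json.dumps of the args
#   "index": 1, "type": "search", "output": "search results..."}]
# new_content is an empty list, since both blocks were consumed.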
What functions are defined in _compat.py?
_compat.py defines one function: _convert_from_v1_to_groq.
What does _compat.py depend on?
_compat.py imports three modules: json, langchain_core.messages, and typing.
Where is _compat.py in the architecture?
_compat.py is located at libs/partners/groq/langchain_groq/_compat.py (domain: CoreAbstractions, subdomain: MessageSchema, directory: libs/partners/groq/langchain_groq).