FakeToolCallingModel Class — langchain Architecture
Architecture documentation for the FakeToolCallingModel class in model.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    67d9a14e_be69_a367_fd3c_cdb7f78dfa76["FakeToolCallingModel"]
    48aa29b8_65e7_522f_a445_a441eeb6baff["BaseChatModel"]
    67d9a14e_be69_a367_fd3c_cdb7f78dfa76 -->|extends| 48aa29b8_65e7_522f_a445_a441eeb6baff
    18e85ff8_9a5d_f800_f722_027398dc89e7["BaseTool"]
    67d9a14e_be69_a367_fd3c_cdb7f78dfa76 -->|uses| 18e85ff8_9a5d_f800_f722_027398dc89e7
    f8999da2_b59a_c582_574c_9ac03dfa8539["model.py"]
    67d9a14e_be69_a367_fd3c_cdb7f78dfa76 -->|defined in| f8999da2_b59a_c582_574c_9ac03dfa8539
    b3112488_5244_cc1c_84da_f065ee64f731["_generate()"]
    67d9a14e_be69_a367_fd3c_cdb7f78dfa76 -->|method| b3112488_5244_cc1c_84da_f065ee64f731
    5a2fefed_1f98_378e_35c6_8ea2fa60b25e["_llm_type()"]
    67d9a14e_be69_a367_fd3c_cdb7f78dfa76 -->|method| 5a2fefed_1f98_378e_35c6_8ea2fa60b25e
    1c6867c8_fc68_dc42_8f10_0e4a1e9630f1["bind_tools()"]
    67d9a14e_be69_a367_fd3c_cdb7f78dfa76 -->|method| 1c6867c8_fc68_dc42_8f10_0e4a1e9630f1
Source Code
libs/langchain_v1/tests/unit_tests/agents/model.py lines 23–111
class FakeToolCallingModel(BaseChatModel):
    tool_calls: list[list[ToolCall]] | list[list[dict[str, Any]]] | None = None
    structured_response: Any | None = None
    index: int = 0
    tool_style: Literal["openai", "anthropic"] = "openai"

    def _generate(
        self,
        messages: list[BaseMessage],
        stop: list[str] | None = None,
        run_manager: CallbackManagerForLLMRun | None = None,
        **kwargs: Any,
    ) -> ChatResult:
        """Top Level call."""
        is_native = kwargs.get("response_format")
        if self.tool_calls:
            if is_native:
                tool_calls = (
                    self.tool_calls[self.index] if self.index < len(self.tool_calls) else []
                )
            else:
                # Cycle through the scripted tool-call batches on successive calls.
                tool_calls = self.tool_calls[self.index % len(self.tool_calls)]
        else:
            tool_calls = []

        if is_native and not tool_calls:
            # Native structured-output mode: serialize structured_response as JSON content.
            if isinstance(self.structured_response, BaseModel):
                content_obj = self.structured_response.model_dump()
            elif is_dataclass(self.structured_response) and not isinstance(
                self.structured_response, type
            ):
                content_obj = asdict(self.structured_response)
            elif isinstance(self.structured_response, dict):
                content_obj = self.structured_response
            message = AIMessage(content=json.dumps(content_obj), id=str(self.index))
        else:
            # Echo the dash-joined text of the incoming messages as the reply content.
            messages_string = "-".join([m.text for m in messages])
            message = AIMessage(
                content=messages_string,
                id=str(self.index),
                tool_calls=tool_calls.copy(),
            )
        self.index += 1
        return ChatResult(generations=[ChatGeneration(message=message)])

    @property
    def _llm_type(self) -> str:
        return "fake-tool-call-model"

    @override
    def bind_tools(
        self,
        tools: Sequence[dict[str, Any] | type | Callable[..., Any] | BaseTool],
        *,
        tool_choice: str | None = None,
        **kwargs: Any,
    ) -> Runnable[LanguageModelInput, AIMessage]:
        if len(tools) == 0:
            msg = "Must provide at least one tool"
            raise ValueError(msg)

        tool_dicts = []
        for tool in tools:
            if isinstance(tool, dict):
                tool_dicts.append(tool)
                continue
            if not isinstance(tool, BaseTool):
                msg = "Only BaseTool and dict is supported by FakeToolCallingModel.bind_tools"
                raise TypeError(msg)

            # NOTE: this is a simplified tool spec for testing purposes only
            if self.tool_style == "openai":
                tool_dicts.append(
                    {
                        "type": "function",
                        "function": {
                            "name": tool.name,
                        },
                    }
                )
            # ... (excerpt ends here; the remaining tool_style handling and the return are not shown)
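For context, a minimal usage sketch of the model above, assuming FakeToolCallingModel is imported from the test module shown; the tool name "add", the arguments, and the ids are illustrative only:

from langchain_core.messages import HumanMessage, ToolCall

# Script two turns: a tool call first, then a plain reply with no tool calls.
model = FakeToolCallingModel(
    tool_calls=[
        [ToolCall(name="add", args={"a": 1, "b": 2}, id="call_1")],
        [],
    ]
)

first = model.invoke([HumanMessage("what is 1 + 2?")])
assert first.tool_calls[0]["name"] == "add"  # scripted call for the first turn

second = model.invoke([HumanMessage("thanks")])
assert second.tool_calls == []  # the index moves on to the empty second batch

Because _generate echoes the dash-joined text of the incoming messages as content, agent tests can also assert on what the model was asked.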
Extends
BaseChatModel
Source
libs/langchain_v1/tests/unit_tests/agents/model.py, lines 23–111
Frequently Asked Questions
What is the FakeToolCallingModel class?
FakeToolCallingModel is a fake chat model used by langchain's agent unit tests. It subclasses BaseChatModel and returns scripted output: it cycles through a predefined list of tool-call batches on successive invocations, and when a response_format is requested and no scripted tool calls apply, it returns structured_response serialized as JSON. It is defined in libs/langchain_v1/tests/unit_tests/agents/model.py.
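A hedged sketch of that structured-output path, assuming FakeToolCallingModel is imported from the same test module and using an illustrative pydantic schema:

import json

from pydantic import BaseModel
from langchain_core.messages import HumanMessage


class Weather(BaseModel):  # illustrative schema, not part of model.py
    city: str
    temp_c: float


model = FakeToolCallingModel(structured_response=Weather(city="Paris", temp_c=21.5))

# Any truthy response_format kwarg selects the native structured path,
# so the reply content is the JSON dump of structured_response.
reply = model.invoke([HumanMessage("weather in Paris?")], response_format=Weather)
assert json.loads(reply.content) == {"city": "Paris", "temp_c": 21.5}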
Where is FakeToolCallingModel defined?
FakeToolCallingModel is defined in libs/langchain_v1/tests/unit_tests/agents/model.py at line 23.
What does FakeToolCallingModel extend?
FakeToolCallingModel extends BaseChatModel. BaseTool is not a base class; it is one of the argument types accepted by bind_tools.