_moderate_tool_messages() — langchain Function Reference

Architecture documentation for the _moderate_tool_messages() function in openai_moderation.py from the langchain codebase.

Dependency Diagram

graph TD
  moderate_tool_messages["_moderate_tool_messages()"]
  OpenAIModerationMiddleware["OpenAIModerationMiddleware"]
  moderate_tool_messages -->|defined in| OpenAIModerationMiddleware
  moderate_inputs["_moderate_inputs()"]
  moderate_inputs -->|calls| moderate_tool_messages
  find_last_index["_find_last_index()"]
  moderate_tool_messages -->|calls| find_last_index
  extract_text["_extract_text()"]
  moderate_tool_messages -->|calls| extract_text
  moderate["_moderate()"]
  moderate_tool_messages -->|calls| moderate
  apply_violation["_apply_violation()"]
  moderate_tool_messages -->|calls| apply_violation
  style moderate_tool_messages fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/partners/openai/langchain_openai/middleware/openai_moderation.py, lines 272–307

    def _moderate_tool_messages(
        self, messages: Sequence[BaseMessage]
    ) -> dict[str, Any] | None:
        last_ai_idx = self._find_last_index(messages, AIMessage)
        if last_ai_idx is None:
            return None

        working = list(messages)
        modified = False

        for idx in range(last_ai_idx + 1, len(working)):
            msg = working[idx]
            if not isinstance(msg, ToolMessage):
                continue

            text = self._extract_text(msg)
            if not text:
                continue

            result = self._moderate(text)
            if not result.flagged:
                continue

            action = self._apply_violation(
                working, index=idx, stage="tool", content=text, result=result
            )
            if action:
                if "jump_to" in action:
                    return action
                working = cast("list[BaseMessage]", action["messages"])
                modified = True

        if modified:
            return {"messages": working}

        return None
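
The listing follows a straightforward contract: find the most recent AIMessage, walk every ToolMessage that comes after it, run the extracted text through moderation, and hand flagged content to _apply_violation(), which either returns a jump_to command (ending processing immediately) or a replacement message list that the loop keeps scanning. The sketch below re-creates that control flow in isolation; the Msg and ModerationResult shapes and the stub helpers are assumptions made for illustration, simplifying the middleware's real message types and moderation client.

    # Minimal, self-contained sketch of the tool-message moderation loop.
    # The Msg/ModerationResult shapes and the stub helpers are illustrative
    # assumptions; only the control flow mirrors the excerpt above.
    from dataclasses import dataclass
    from typing import Any


    @dataclass
    class Msg:
        role: str      # "ai" or "tool"
        content: str


    @dataclass
    class ModerationResult:
        flagged: bool


    def moderate(text: str) -> ModerationResult:
        # Stand-in for the middleware's _moderate() call to a moderation endpoint.
        return ModerationResult(flagged="attack" in text.lower())


    def apply_violation(messages: list[Msg], index: int) -> dict[str, Any] | None:
        # Stand-in for _apply_violation(); here the violating tool output is redacted.
        updated = list(messages)
        updated[index] = Msg("tool", "[tool output removed by moderation]")
        return {"messages": updated}


    def moderate_tool_messages(messages: list[Msg]) -> dict[str, Any] | None:
        # Only tool messages that follow the most recent AI message are checked.
        last_ai_idx = max((i for i, m in enumerate(messages) if m.role == "ai"), default=None)
        if last_ai_idx is None:
            return None

        working = list(messages)
        modified = False

        for idx in range(last_ai_idx + 1, len(working)):
            msg = working[idx]
            if msg.role != "tool" or not msg.content:
                continue

            if not moderate(msg.content).flagged:
                continue

            action = apply_violation(working, idx)
            if action:
                if "jump_to" in action:
                    return action              # short-circuit: the caller handles the jump
                working = action["messages"]   # keep scanning the rewritten history
                modified = True

        return {"messages": working} if modified else None


    if __name__ == "__main__":
        history = [Msg("ai", "calling the search tool"), Msg("tool", "plan an attack on the server")]
        print(moderate_tool_messages(history))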

Frequently Asked Questions

What does _moderate_tool_messages() do?
_moderate_tool_messages() moderates the ToolMessage outputs that follow the most recent AIMessage in a conversation. For each such message it extracts the text via _extract_text(), checks it with _moderate(), and, when content is flagged, applies the configured violation handling via _apply_violation(). It returns an updated {"messages": ...} state when tool output was rewritten, a jump_to command when processing should stop early, or None when nothing was flagged.
Where is _moderate_tool_messages() defined?
_moderate_tool_messages() is defined in libs/partners/openai/langchain_openai/middleware/openai_moderation.py at line 272.
What does _moderate_tool_messages() call?
_moderate_tool_messages() calls four functions: _apply_violation(), _extract_text(), _find_last_index(), and _moderate().
What calls _moderate_tool_messages()?
_moderate_tool_messages() is called by one function: _moderate_inputs().
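
The return contract visible in the listing gives a caller three cases to handle: None when nothing was flagged, a dict carrying jump_to when the run should be cut short, and a dict carrying only messages when tool output was rewritten. The snippet below is a hedged sketch of that caller-side dispatch; the function name and merge behavior are illustrative assumptions, not the actual _moderate_inputs() implementation.

    # Hedged sketch of caller-side handling of the three possible return shapes;
    # the function name and merge behavior are assumptions, not langchain's code.
    from typing import Any


    def consume_tool_moderation(update: dict[str, Any] | None) -> dict[str, Any] | None:
        if update is None:
            return None                  # nothing was flagged; leave state untouched
        if "jump_to" in update:
            return update                # propagate the early-exit command as-is
        return {"messages": update["messages"]}  # adopt the rewritten message list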
