
_amoderate_tool_messages() — langchain Function Reference

Architecture documentation for the _amoderate_tool_messages() function in openai_moderation.py from the langchain codebase.

Function · Python · LangChainCore · Runnables · calls 4 · called by 1

Entity Profile

Dependency Diagram

graph TD
  f8bee86c_3f1e_1e20_7cac_e6c507e856d6["_amoderate_tool_messages()"]
  48713c67_3a9a_ec9e_ec74_46e6955f07bd["OpenAIModerationMiddleware"]
  f8bee86c_3f1e_1e20_7cac_e6c507e856d6 -->|defined in| 48713c67_3a9a_ec9e_ec74_46e6955f07bd
  e4877997_5734_a30d_80bc_a99942eef494["_amoderate_inputs()"]
  e4877997_5734_a30d_80bc_a99942eef494 -->|calls| f8bee86c_3f1e_1e20_7cac_e6c507e856d6
  1837f115_98c9_8c34_1443_f85d68c85154["_find_last_index()"]
  f8bee86c_3f1e_1e20_7cac_e6c507e856d6 -->|calls| 1837f115_98c9_8c34_1443_f85d68c85154
  f339afc9_dc76_818a_622f_e3da922e8e0c["_extract_text()"]
  f8bee86c_3f1e_1e20_7cac_e6c507e856d6 -->|calls| f339afc9_dc76_818a_622f_e3da922e8e0c
  d71322b2_e390_f20d_bda8_c8361be856d9["_amoderate()"]
  f8bee86c_3f1e_1e20_7cac_e6c507e856d6 -->|calls| d71322b2_e390_f20d_bda8_c8361be856d9
  59081f65_8455_8937_22b3_f7febac7b501["_apply_violation()"]
  f8bee86c_3f1e_1e20_7cac_e6c507e856d6 -->|calls| 59081f65_8455_8937_22b3_f7febac7b501
  style f8bee86c_3f1e_1e20_7cac_e6c507e856d6 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/partners/openai/langchain_openai/middleware/openai_moderation.py lines 309–344

    async def _amoderate_tool_messages(
        self, messages: Sequence[BaseMessage]
    ) -> dict[str, Any] | None:
        last_ai_idx = self._find_last_index(messages, AIMessage)
        if last_ai_idx is None:
            return None

        working = list(messages)
        modified = False

        for idx in range(last_ai_idx + 1, len(working)):
            msg = working[idx]
            if not isinstance(msg, ToolMessage):
                continue

            text = self._extract_text(msg)
            if not text:
                continue

            result = await self._amoderate(text)
            if not result.flagged:
                continue

            action = self._apply_violation(
                working, index=idx, stage="tool", content=text, result=result
            )
            if action:
                if "jump_to" in action:
                    return action
                working = cast("list[BaseMessage]", action["messages"])
                modified = True

        if modified:
            return {"messages": working}

        return None
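
The method only considers ToolMessage entries that appear after the most recent AIMessage, i.e. the tool results produced by the model's latest round of tool calls. The standalone sketch below reproduces that selection logic with a stubbed moderation check; the find_last_index and flag_if_contains helpers and the sample messages are illustrative stand-ins, not part of the langchain source.

    from langchain_core.messages import AIMessage, BaseMessage, HumanMessage, ToolMessage

    def find_last_index(messages: list[BaseMessage], message_type: type) -> int | None:
        """Illustrative stand-in for _find_last_index: index of the last message of a type."""
        for idx in range(len(messages) - 1, -1, -1):
            if isinstance(messages[idx], message_type):
                return idx
        return None

    def flag_if_contains(text: str, banned: str = "forbidden") -> bool:
        """Stand-in for the async OpenAI moderation call made by _amoderate."""
        return banned in text.lower()

    messages: list[BaseMessage] = [
        HumanMessage("Look this up for me."),
        AIMessage("Calling the search tool."),
        ToolMessage(content="Safe search result.", tool_call_id="call_1"),
        ToolMessage(content="Forbidden content found here.", tool_call_id="call_2"),
    ]

    last_ai_idx = find_last_index(messages, AIMessage)
    if last_ai_idx is not None:
        # Only tool results newer than the last AI turn are candidates for moderation.
        for idx in range(last_ai_idx + 1, len(messages)):
            msg = messages[idx]
            if isinstance(msg, ToolMessage) and flag_if_contains(str(msg.content)):
                print(f"ToolMessage at index {idx} would be handed to _apply_violation")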

Frequently Asked Questions

What does _amoderate_tool_messages() do?
_amoderate_tool_messages() is an async helper method on OpenAIModerationMiddleware that moderates tool output. It locates the last AIMessage in the conversation, iterates over the ToolMessage entries that follow it, extracts each message's text, runs the text through the moderation check (_amoderate), and applies the configured violation handling (_apply_violation) to anything flagged. It returns the updated message list (or a jump_to action) when changes were made, and None otherwise; a hedged sketch of how a caller might handle that result follows this FAQ.
Where is _amoderate_tool_messages() defined?
_amoderate_tool_messages() is defined in libs/partners/openai/langchain_openai/middleware/openai_moderation.py at line 309.
What does _amoderate_tool_messages() call?
_amoderate_tool_messages() calls 4 functions: _amoderate(), _apply_violation(), _extract_text(), and _find_last_index().
What calls _amoderate_tool_messages()?
_amoderate_tool_messages() is called by 1 function: _amoderate_inputs().
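
The source above shows three possible outcomes: None when nothing after the last AIMessage was flagged, a {"messages": ...} update when flagged tool content was rewritten, and a pass-through of _apply_violation's action when it contains "jump_to". The sketch below shows how a hypothetical caller might branch on that result; the real caller is _amoderate_inputs, whose internals are not shown on this page, so handle_tool_moderation is an assumption used purely for illustration.

    from typing import Any

    async def handle_tool_moderation(middleware, messages) -> dict[str, Any]:
        """Hypothetical caller illustrating the three return outcomes of
        _amoderate_tool_messages(); not taken from the langchain source."""
        result = await middleware._amoderate_tool_messages(messages)

        if result is None:
            # No ToolMessage after the last AIMessage was flagged; nothing to update.
            return {}
        if "jump_to" in result:
            # _apply_violation requested a control-flow jump; propagate it unchanged.
            return result
        # Otherwise the flagged tool content was rewritten in the returned message list.
        return {"messages": result["messages"]}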
