OpenAIModerationMiddleware Class — langchain Architecture
Architecture documentation for the OpenAIModerationMiddleware class in openai_moderation.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    OMM["OpenAIModerationMiddleware"]
    OMM -->|extends| AM["AgentMiddleware"]
    OMM -->|uses| AIM["AIMessage"]
    OMM -->|uses| TM["ToolMessage"]
    OMM -->|uses| HM["HumanMessage"]
    OMM -->|uses| BM["BaseMessage"]
    OMM -->|defined in| FILE["openai_moderation.py"]
    OMM -->|method| M1["__init__()"]
    OMM -->|method| M2["before_model()"]
    OMM -->|method| M3["after_model()"]
    OMM -->|method| M4["abefore_model()"]
    OMM -->|method| M5["aafter_model()"]
    OMM -->|method| M6["_moderate_inputs()"]
    OMM -->|method| M7["_amoderate_inputs()"]
    OMM -->|method| M8["_moderate_output()"]
    OMM -->|method| M9["_amoderate_output()"]
Source Code
libs/partners/openai/langchain_openai/middleware/openai_moderation.py lines 49–478
class OpenAIModerationMiddleware(AgentMiddleware[AgentState[Any], Any]):
    """Moderate agent traffic using OpenAI's moderation endpoint."""

    def __init__(
        self,
        *,
        model: ModerationModel = "omni-moderation-latest",
        check_input: bool = True,
        check_output: bool = True,
        check_tool_results: bool = False,
        exit_behavior: Literal["error", "end", "replace"] = "end",
        violation_message: str | None = None,
        client: OpenAI | None = None,
        async_client: AsyncOpenAI | None = None,
    ) -> None:
        """Create the middleware instance.

        Args:
            model: OpenAI moderation model to use.
            check_input: Whether to check user input messages.
            check_output: Whether to check model output messages.
            check_tool_results: Whether to check tool result messages.
            exit_behavior: How to handle violations
                (`'error'`, `'end'`, or `'replace'`).
            violation_message: Custom template for violation messages.
            client: Optional pre-configured OpenAI client to reuse.
                If not provided, a new client will be created.
            async_client: Optional pre-configured AsyncOpenAI client to reuse.
                If not provided, a new async client will be created.
        """
        super().__init__()
        self.model = model
        self.check_input = check_input
        self.check_output = check_output
        self.check_tool_results = check_tool_results
        self.exit_behavior = exit_behavior
        self.violation_message = violation_message
        self._client = client
        self._async_client = async_client

    @hook_config(can_jump_to=["end"])
    def before_model(
        self, state: AgentState[Any], runtime: Runtime[Any]
    ) -> dict[str, Any] | None:  # type: ignore[override]
        """Moderate user input and tool results before the model is called.

        Args:
            state: Current agent state containing messages.
            runtime: Agent runtime context.

        Returns:
            Updated state with moderated messages, or `None` if no changes.
        """
        if not self.check_input and not self.check_tool_results:
            return None
        messages = list(state.get("messages", []))
        if not messages:
            return None
        return self._moderate_inputs(messages)

    @hook_config(can_jump_to=["end"])
    def after_model(
        self, state: AgentState[Any], runtime: Runtime[Any]
    ) -> dict[str, Any] | None:  # type: ignore[override]
        """Moderate model output after the model is called.

        Args:
            state: Current agent state containing messages.
            runtime: Agent runtime context.

        Returns:
            Updated state with moderated messages, or `None` if no changes.
        """
        if not self.check_output:
            return None
        messages = list(state.get("messages", []))
        if not messages:
            return None
        # … excerpt truncated; the remainder of after_model, the async hook
        # variants, and the _moderate_* helpers continue through line 478.
Frequently Asked Questions
What is the OpenAIModerationMiddleware class?
OpenAIModerationMiddleware is a middleware class in the langchain codebase that moderates agent traffic using OpenAI's moderation endpoint. It is defined in libs/partners/openai/langchain_openai/middleware/openai_moderation.py.
Where is OpenAIModerationMiddleware defined?
OpenAIModerationMiddleware is defined in libs/partners/openai/langchain_openai/middleware/openai_moderation.py at line 49.
What does OpenAIModerationMiddleware extend?
OpenAIModerationMiddleware extends AgentMiddleware[AgentState[Any], Any]. It uses (rather than extends) the AIMessage, ToolMessage, HumanMessage, and BaseMessage types when inspecting and moderating messages.
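The constructor docstring lists three `exit_behavior` modes for handling a flagged message: `'error'`, `'end'`, and `'replace'`. The sketch below illustrates one plausible reading of those three modes; the function name, error type, and return shapes are hypothetical, not the real implementation:

```python
# Illustrative only: handle_violation, ModerationViolationError, and the
# returned dict shapes are invented for this sketch, not taken from langchain.
from typing import Any


class ModerationViolationError(Exception):
    """Hypothetical error raised for exit_behavior='error'."""


def handle_violation(exit_behavior: str, violation_message: str) -> dict[str, Any]:
    if exit_behavior == "error":
        # 'error': abort the agent run by raising.
        raise ModerationViolationError(violation_message)
    if exit_behavior == "end":
        # 'end': jump to the end of the agent loop with a final message.
        return {"jump_to": "end", "messages": [violation_message]}
    if exit_behavior == "replace":
        # 'replace': substitute the flagged content and keep running.
        return {"messages": [violation_message]}
    raise ValueError(f"unknown exit_behavior: {exit_behavior!r}")


print(handle_violation("end", "Content flagged by moderation."))
# → {'jump_to': 'end', 'messages': ['Content flagged by moderation.']}
```

The `'end'` mode lines up with the `@hook_config(can_jump_to=["end"])` decorator on both hooks in the source excerpt, which is what allows the middleware to terminate the agent loop early.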