_apply_violation() — langchain Function Reference

Architecture documentation for the _apply_violation() function in openai_moderation.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  59081f65_8455_8937_22b3_f7febac7b501["_apply_violation()"]
  48713c67_3a9a_ec9e_ec74_46e6955f07bd["OpenAIModerationMiddleware"]
  59081f65_8455_8937_22b3_f7febac7b501 -->|defined in| 48713c67_3a9a_ec9e_ec74_46e6955f07bd
  171f9009_f0c5_1fb7_2224_0686c89f7a33["_moderate_output()"]
  171f9009_f0c5_1fb7_2224_0686c89f7a33 -->|calls| 59081f65_8455_8937_22b3_f7febac7b501
  7ab5affa_5898_4089_2b48_c4caa16d99a2["_amoderate_output()"]
  7ab5affa_5898_4089_2b48_c4caa16d99a2 -->|calls| 59081f65_8455_8937_22b3_f7febac7b501
  96f205a3_76d2_5a83_9398_42ec7a99a7e7["_moderate_tool_messages()"]
  96f205a3_76d2_5a83_9398_42ec7a99a7e7 -->|calls| 59081f65_8455_8937_22b3_f7febac7b501
  f8bee86c_3f1e_1e20_7cac_e6c507e856d6["_amoderate_tool_messages()"]
  f8bee86c_3f1e_1e20_7cac_e6c507e856d6 -->|calls| 59081f65_8455_8937_22b3_f7febac7b501
  5ce3deff_f9f4_37bf_1258_39cb41cad4c4["_moderate_user_message()"]
  5ce3deff_f9f4_37bf_1258_39cb41cad4c4 -->|calls| 59081f65_8455_8937_22b3_f7febac7b501
  3b0af14d_fc17_a4ff_f519_e197161877e5["_amoderate_user_message()"]
  3b0af14d_fc17_a4ff_f519_e197161877e5 -->|calls| 59081f65_8455_8937_22b3_f7febac7b501
  033a3f7e_bdfd_a614_a8dc_e4e3a07e50fd["_format_violation_message()"]
  59081f65_8455_8937_22b3_f7febac7b501 -->|calls| 033a3f7e_bdfd_a614_a8dc_e4e3a07e50fd
  style 59081f65_8455_8937_22b3_f7febac7b501 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/partners/openai/langchain_openai/middleware/openai_moderation.py lines 386–416

    def _apply_violation(
        self,
        messages: Sequence[BaseMessage],
        *,
        index: int | None,
        stage: ViolationStage,
        content: str,
        result: Moderation,
    ) -> dict[str, Any] | None:
        violation_text = self._format_violation_message(content, result)

        if self.exit_behavior == "error":
            raise OpenAIModerationError(
                content=content,
                stage=stage,
                result=result,
                message=violation_text,
            )

        if self.exit_behavior == "end":
            return {"jump_to": "end", "messages": [AIMessage(content=violation_text)]}

        if index is None:
            return None

        new_messages = list(messages)
        original = new_messages[index]
        new_messages[index] = cast(
            BaseMessage, original.model_copy(update={"content": violation_text})
        )
        return {"messages": new_messages}

Frequently Asked Questions

What does _apply_violation() do?
_apply_violation() applies the middleware's configured exit behavior when moderation flags content: it raises OpenAIModerationError ("error"), ends the run with an AIMessage containing the violation text ("end"), or replaces the flagged message's content with the formatted violation text. It is defined in libs/partners/openai/langchain_openai/middleware/openai_moderation.py.
Where is _apply_violation() defined?
_apply_violation() is defined in libs/partners/openai/langchain_openai/middleware/openai_moderation.py at line 386.
What does _apply_violation() call?
_apply_violation() calls 1 function(s): _format_violation_message.
What calls _apply_violation()?
_apply_violation() is called by 6 function(s): _amoderate_output, _amoderate_tool_messages, _amoderate_user_message, _moderate_output, _moderate_tool_messages, _moderate_user_message.
