
_moderate() — langchain Function Reference

Architecture documentation for the _moderate() function in moderation.py from the langchain codebase.

Entity Profile

Dependency Diagram

    graph TD
      8f806073_b5ea_22ed_c6ce_442f9541f9ea["_moderate()"]
      681be7b2_2ef1_3fa3_8051_488d0318bbe1["OpenAIModerationChain"]
      8f806073_b5ea_22ed_c6ce_442f9541f9ea -->|defined in| 681be7b2_2ef1_3fa3_8051_488d0318bbe1
      06d68222_a4cc_adc7_f696_ec1e270c9c3a["_call()"]
      06d68222_a4cc_adc7_f696_ec1e270c9c3a -->|calls| 8f806073_b5ea_22ed_c6ce_442f9541f9ea
      bca6ad32_d33f_9e17_5e1c_056f97df298e["_acall()"]
      bca6ad32_d33f_9e17_5e1c_056f97df298e -->|calls| 8f806073_b5ea_22ed_c6ce_442f9541f9ea
      style 8f806073_b5ea_22ed_c6ce_442f9541f9ea fill:#6366f1,stroke:#818cf8,color:#fff


Source Code

libs/langchain/langchain_classic/chains/moderation.py lines 95–102

    def _moderate(self, text: str, results: Any) -> str:
        condition = results["flagged"] if self.openai_pre_1_0 else results.flagged
        if condition:
            error_str = "Text was found that violates OpenAI's content policy."
            if self.error:
                raise ValueError(error_str)
            return error_str
        return text
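The method's behavior can be exercised in isolation. Below is a minimal, self-contained sketch; the `SimpleModeration` class is illustrative (a stand-in for `OpenAIModerationChain`, assuming the pre-1.0 dict-shaped API response), not the real chain:

```python
from typing import Any


class SimpleModeration:
    """Illustrative stand-in for OpenAIModerationChain's flagging logic."""

    def __init__(self, error: bool = False, openai_pre_1_0: bool = True) -> None:
        self.error = error                  # raise instead of returning the error string
        self.openai_pre_1_0 = openai_pre_1_0

    def _moderate(self, text: str, results: Any) -> str:
        # Pre-1.0 SDKs return a dict; newer SDKs return an object with attributes.
        condition = results["flagged"] if self.openai_pre_1_0 else results.flagged
        if condition:
            error_str = "Text was found that violates OpenAI's content policy."
            if self.error:
                raise ValueError(error_str)
            return error_str
        return text


chain = SimpleModeration()
print(chain._moderate("hello", {"flagged": False}))  # returns the text unchanged
print(chain._moderate("bad text", {"flagged": True}))  # returns the policy message
```

With `error=True`, the flagged branch raises `ValueError` instead of returning the message, which lets callers fail fast rather than inspect the returned string.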


Frequently Asked Questions

What does _moderate() do?
_moderate() inspects a moderation API result for a piece of text. If the result is flagged, it either raises a ValueError (when self.error is True) or returns a fixed policy-violation message; otherwise it returns the input text unchanged.
Where is _moderate() defined?
_moderate() is defined in libs/langchain/langchain_classic/chains/moderation.py at line 95.
What calls _moderate()?
_moderate() is called by two functions: _call() and _acall().
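Both callers follow the same pattern: query the moderation endpoint, then hand the original text and the raw result to _moderate(). A hedged sketch of that call flow, where `fake_moderation_api` is a hypothetical stand-in for the real OpenAI client call:

```python
from typing import Any, Dict


def fake_moderation_api(text: str) -> Dict[str, Any]:
    # Hypothetical stand-in for the moderation endpoint:
    # flags any text containing the word "forbidden".
    return {"flagged": "forbidden" in text}


def moderate(text: str, results: Dict[str, Any], error: bool = False) -> str:
    # Mirrors _moderate() for the pre-1.0 dict-shaped response.
    if results["flagged"]:
        error_str = "Text was found that violates OpenAI's content policy."
        if error:
            raise ValueError(error_str)
        return error_str
    return text


def call(text: str) -> str:
    # Mirrors the _call()/_acall() pattern: fetch results, then delegate.
    results = fake_moderation_api(text)
    return moderate(text, results)


print(call("a perfectly fine sentence"))  # a perfectly fine sentence
```

_acall() differs only in awaiting the endpoint before delegating to the same synchronous _moderate().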
