aafter_model() — langchain Function Reference
Architecture documentation for the aafter_model() function in openai_moderation.py from the langchain codebase.
Dependency Diagram
graph TD
    fa7fdc65_cf99_5db0_1932_68c75c40b726["aafter_model()"]
    48713c67_3a9a_ec9e_ec74_46e6955f07bd["OpenAIModerationMiddleware"]
    fa7fdc65_cf99_5db0_1932_68c75c40b726 -->|defined in| 48713c67_3a9a_ec9e_ec74_46e6955f07bd
    7ab5affa_5898_4089_2b48_c4caa16d99a2["_amoderate_output()"]
    fa7fdc65_cf99_5db0_1932_68c75c40b726 -->|calls| 7ab5affa_5898_4089_2b48_c4caa16d99a2
    style fa7fdc65_cf99_5db0_1932_68c75c40b726 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/partners/openai/langchain_openai/middleware/openai_moderation.py lines 157–176
async def aafter_model(
    self, state: AgentState[Any], runtime: Runtime[Any]
) -> dict[str, Any] | None:  # type: ignore[override]
    """Async version of after_model.

    Args:
        state: Current agent state containing messages.
        runtime: Agent runtime context.

    Returns:
        Updated state with moderated messages, or `None` if no changes.
    """
    if not self.check_output:
        return None

    messages = list(state.get("messages", []))
    if not messages:
        return None

    return await self._amoderate_output(messages)
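The hook follows the agent-middleware contract: return None when nothing should change, or a partial state update (here, moderated messages) otherwise. The sketch below illustrates that contract with a hypothetical stand-in class; the class name, the plain-dict state, and the untyped runtime parameter are placeholders for illustration, not part of langchain's API.

import asyncio
from typing import Any


class OutputCheckHook:
    """Hypothetical example of the same after-model hook contract."""

    def __init__(self, check_output: bool = True) -> None:
        self.check_output = check_output

    async def aafter_model(self, state: dict[str, Any], runtime: Any) -> dict[str, Any] | None:
        # Disabled hook: no state update.
        if not self.check_output:
            return None
        messages = list(state.get("messages", []))
        # Empty conversation: nothing to inspect.
        if not messages:
            return None
        # OpenAIModerationMiddleware would moderate these messages via
        # _amoderate_output(); this sketch simply returns them unchanged.
        return {"messages": messages}


# Exercising the hook directly, e.g. in a test:
print(asyncio.run(OutputCheckHook().aafter_model({"messages": ["hi"]}, runtime=None)))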
Frequently Asked Questions
What does aafter_model() do?
aafter_model() is the asynchronous counterpart of after_model() on OpenAIModerationMiddleware, defined in libs/partners/openai/langchain_openai/middleware/openai_moderation.py in the langchain codebase. After the model produces a response, it checks whether output moderation is enabled (self.check_output) and whether the state contains any messages; if both hold, it delegates to _amoderate_output() and returns the updated state, otherwise it returns None to leave the state unchanged.
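As a rough usage sketch, a moderation middleware like this is typically attached to an agent so the hook runs after each model call. The import paths, the create_agent signature, and the check_output constructor argument shown below are assumptions for illustration; consult the langchain and langchain_openai documentation for the exact API.

from langchain.agents import create_agent  # assumed location of the agent factory
from langchain_openai.middleware import OpenAIModerationMiddleware  # assumed import path

agent = create_agent(
    model="openai:gpt-4o-mini",
    tools=[],
    middleware=[OpenAIModerationMiddleware(check_output=True)],  # check_output gates aafter_model()
)
# During async runs, aafter_model() fires after each model response and can
# replace the outgoing messages with moderated versions.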
Where is aafter_model() defined?
aafter_model() is defined in libs/partners/openai/langchain_openai/middleware/openai_moderation.py at line 157.
What does aafter_model() call?
aafter_model() calls one function: _amoderate_output().