_create_chat_model_run() — langchain Function Reference

Architecture documentation for the _create_chat_model_run() function in core.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  4e1a8b6e_abb8_b0b3_8596_e0f56898ec2f["_create_chat_model_run()"]
  70348e44_de0f_ccb4_c06a_8453289ed93e["_TracerCore"]
  4e1a8b6e_abb8_b0b3_8596_e0f56898ec2f -->|defined in| 70348e44_de0f_ccb4_c06a_8453289ed93e
  style 4e1a8b6e_abb8_b0b3_8596_e0f56898ec2f fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/core/langchain_core/tracers/core.py lines 151–193

    def _create_chat_model_run(
        self,
        serialized: dict[str, Any],
        messages: list[list[BaseMessage]],
        run_id: UUID,
        tags: list[str] | None = None,
        parent_run_id: UUID | None = None,
        metadata: dict[str, Any] | None = None,
        name: str | None = None,
        **kwargs: Any,
    ) -> Run:
        """Create a chat model run."""
        if self._schema_format not in {"streaming_events", "original+chat"}:
            # Please keep this un-implemented for backwards compatibility.
            # When it's unimplemented old tracers that use the "original" format
            # fallback on the on_llm_start method implementation if they
            # find that the on_chat_model_start method is not implemented.
            # This can eventually be cleaned up by writing a "modern" tracer
            # that has all the updated schema changes corresponding to
            # the "streaming_events" format.
            msg = (
                f"Chat model tracing is not supported "
                f"for the {self._schema_format} format."
            )
            raise NotImplementedError(msg)
        start_time = datetime.now(timezone.utc)
        if metadata:
            kwargs.update({"metadata": metadata})
        return Run(
            id=run_id,
            parent_run_id=parent_run_id,
            serialized=serialized,
            inputs={"messages": [[dumpd(msg) for msg in batch] for batch in messages]},
            extra=kwargs,
            events=[{"name": "start", "time": start_time}],
            start_time=start_time,
            # WARNING: This is valid ONLY for streaming_events.
            # run_type="llm" is what's used by virtually all tracers.
            # Changing this to "chat_model" may break triggering on_llm_start
            run_type="chat_model",
            tags=tags,
            name=name,
        )

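The backwards-compatibility comment in the source describes a dispatch pattern: callers invoke `on_chat_model_start`, and when an old "original"-format tracer raises `NotImplementedError`, they fall back to `on_llm_start`. A minimal sketch of that pattern follows; the tracer class and dispatch helper here are hypothetical stand-ins, not langchain code:

```python
class OldTracer:
    """Tracer predating the chat model schema: only on_llm_start is implemented."""

    def __init__(self) -> None:
        self.llm_starts: list[list[str]] = []

    def on_chat_model_start(self, serialized, messages):
        # Mirrors _create_chat_model_run() raising for the "original" format
        raise NotImplementedError("Chat model tracing is not supported.")

    def on_llm_start(self, serialized, prompts):
        self.llm_starts.append(prompts)


def dispatch_chat_model_start(tracer, serialized, messages):
    """Hypothetical dispatcher: prefer the chat handler, fall back to the LLM one."""
    try:
        tracer.on_chat_model_start(serialized, messages)
    except NotImplementedError:
        # Fall back: flatten each message batch into a single prompt string
        prompts = ["\n".join(m["content"] for m in batch) for batch in messages]
        tracer.on_llm_start(serialized, prompts)


tracer = OldTracer()
dispatch_chat_model_start(
    tracer, {"name": "fake-chat-model"}, [[{"role": "human", "content": "hi"}]]
)
print(tracer.llm_starts)  # [['hi']]
```

This is why the source insists the method stay unimplemented for the "original" format: the raised `NotImplementedError` is the signal that routes old tracers onto the `on_llm_start` path.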
Frequently Asked Questions

What does _create_chat_model_run() do?
_create_chat_model_run() builds and returns a Run record for a chat model invocation: it stamps a UTC start time, serializes each batch of input messages with dumpd() into the run's inputs, records a "start" event, and sets run_type="chat_model". If the tracer's schema format is neither "streaming_events" nor "original+chat", it raises NotImplementedError so that older tracers fall back to the on_llm_start path.
Where is _create_chat_model_run() defined?
_create_chat_model_run() is defined in libs/core/langchain_core/tracers/core.py at line 151.
