
_compat.py — langchain Source File

Architecture documentation for _compat.py, a Python file in the langchain codebase. 2 imports, 0 dependents.

File · Python · CoreAbstractions · Serialization · 2 imports · 1 function

Entity Profile

Dependency Diagram

graph LR
  compat_py["_compat.py"]
  typing["typing"]
  compat_py --> typing
  langchain_core_messages["langchain_core.messages"]
  compat_py --> langchain_core_messages
  style compat_py fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

"""Go from v1 content blocks to Ollama SDK format."""

from typing import Any

from langchain_core.messages import content as types


def _convert_from_v1_to_ollama(
    content: list[types.ContentBlock],
    model_provider: str | None,  # noqa: ARG001
) -> list[dict[str, Any]]:
    """Convert v1 content blocks to Ollama format.

    Args:
        content: List of v1 `ContentBlock` objects.
        model_provider: The model provider name that generated the v1 content.

    Returns:
        List of content blocks in Ollama format.
    """
    new_content: list = []
    for block in content:
        if not isinstance(block, dict) or "type" not in block:
            continue

        block_dict = dict(block)  # (For typing)

        # TextContentBlock
        if block_dict["type"] == "text":
            # Note: this drops all other fields/extras
            new_content.append({"type": "text", "text": block_dict["text"]})

        # ReasoningContentBlock
        # Ollama doesn't take reasoning back in
        # In the future, could consider coercing into text as an option?
        # e.g.:
        # if block_dict["type"] == "reasoning":
        #     # Attempt to preserve content in text form
        #     new_content.append({"text": str(block_dict["reasoning"])})

        # ImageContentBlock
        if block_dict["type"] == "image":
            # Already handled in _get_image_from_data_content_block
            new_content.append(block_dict)

        # TODO: AudioContentBlock once models support

        # TODO: FileContentBlock once models support

        # ToolCall -> ???
        # if block_dict["type"] == "tool_call":
        #     function_call = {}
        #     new_content.append(function_call)

        # ToolCallChunk -> ???
        # elif block_dict["type"] == "tool_call_chunk":
        #     function_call = {}
        #     new_content.append(function_call)

        # NonStandardContentBlock
        if block_dict["type"] == "non_standard":
            # Attempt to preserve content in text form
            new_content.append(
                {"type": "text", "text": str(block_dict.get("value", ""))}
            )

    return new_content
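
Usage Sketch

For orientation, here is a minimal sketch of how the converter could be exercised, assuming _convert_from_v1_to_ollama is in scope (it is a private helper, so this is for illustration only). The literal dicts stand in for v1 ContentBlock values, and the image block's fields and the "ollama" provider string are illustrative assumptions rather than anything the function requires; image blocks are simply passed through unchanged.

blocks = [
    {"type": "text", "text": "Hello"},                                 # kept as a text block
    {"type": "image", "base64": "<data>", "mime_type": "image/png"},   # passed through as-is
    {"type": "non_standard", "value": {"answer": 42}},                 # coerced into a text block
    {"type": "reasoning", "reasoning": "step by step"},                # no matching branch, so dropped
]

converted = _convert_from_v1_to_ollama(blocks, model_provider="ollama")
# converted == [
#     {"type": "text", "text": "Hello"},
#     {"type": "image", "base64": "<data>", "mime_type": "image/png"},
#     {"type": "text", "text": "{'answer': 42}"},
# ]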

Dependencies

  • langchain_core.messages
  • typing

Frequently Asked Questions

What does _compat.py do?
_compat.py is a Python source file in the langchain codebase that converts v1 content blocks to the Ollama SDK format. It belongs to the CoreAbstractions domain, Serialization subdomain.
What functions are defined in _compat.py?
_compat.py defines one function: _convert_from_v1_to_ollama.
What does _compat.py depend on?
_compat.py imports two modules: langchain_core.messages and typing.
Where is _compat.py in the architecture?
_compat.py is located at libs/partners/ollama/langchain_ollama/_compat.py (domain: CoreAbstractions, subdomain: Serialization, directory: libs/partners/ollama/langchain_ollama).
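
As a rough illustration (an assumption inferred from the directory layout, not a documented public API), the file's location under libs/partners/ollama/langchain_ollama/ would correspond to the package-internal module path below; the leading underscore marks it as private.

# Hypothetical internal import path inferred from the directory layout above;
# _compat is a private module, not part of the public langchain_ollama API.
from langchain_ollama._compat import _convert_from_v1_to_ollama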
