_convert_from_v1_to_ollama() — langchain Function Reference
Architecture documentation for the _convert_from_v1_to_ollama() function in _compat.py from the langchain codebase.
Dependency Diagram
graph TD
    fb275987_b9c3_e866_29f5_f549abbcb039["_convert_from_v1_to_ollama()"]
    3b91405f_3c4c_9b29_a361_cd5ec82f7708["_compat.py"]
    fb275987_b9c3_e866_29f5_f549abbcb039 -->|defined in| 3b91405f_3c4c_9b29_a361_cd5ec82f7708
    style fb275987_b9c3_e866_29f5_f549abbcb039 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/partners/ollama/langchain_ollama/_compat.py lines 8–67
def _convert_from_v1_to_ollama(
    content: list[types.ContentBlock],
    model_provider: str | None,  # noqa: ARG001
) -> list[dict[str, Any]]:
    """Convert v1 content blocks to Ollama format.

    Args:
        content: List of v1 `ContentBlock` objects.
        model_provider: The model provider name that generated the v1 content.

    Returns:
        List of content blocks in Ollama format.
    """
    new_content: list = []
    for block in content:
        if not isinstance(block, dict) or "type" not in block:
            continue
        block_dict = dict(block)  # (For typing)

        # TextContentBlock
        if block_dict["type"] == "text":
            # Note: this drops all other fields/extras
            new_content.append({"type": "text", "text": block_dict["text"]})

        # ReasoningContentBlock
        # Ollama doesn't take reasoning back in
        # In the future, could consider coercing into text as an option?
        # e.g.:
        # if block_dict["type"] == "reasoning":
        #     # Attempt to preserve content in text form
        #     new_content.append({"text": str(block_dict["reasoning"])})

        # ImageContentBlock
        if block_dict["type"] == "image":
            # Already handled in _get_image_from_data_content_block
            new_content.append(block_dict)

        # TODO: AudioContentBlock once models support
        # TODO: FileContentBlock once models support

        # ToolCall -> ???
        # if block_dict["type"] == "tool_call":
        #     function_call = {}
        #     new_content.append(function_call)
        # ToolCallChunk -> ???
        # elif block_dict["type"] == "tool_call_chunk":
        #     function_call = {}
        #     new_content.append(function_call)

        # NonStandardContentBlock
        if block_dict["type"] == "non_standard":
            # Attempt to preserve content in text form
            new_content.append(
                {"type": "text", "text": str(block_dict.get("value", ""))}
            )
    return new_content
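Below is a minimal usage sketch, not taken from the codebase: it assumes the private helper is importable from langchain_ollama._compat (the module shown above) and writes the v1 blocks as plain dicts, which is sufficient at runtime because the function only inspects dict keys. The reasoning and non_standard block shapes are illustrative.

from langchain_ollama._compat import _convert_from_v1_to_ollama

blocks = [
    {"type": "text", "text": "Describe this image."},
    # No handler exists for reasoning blocks, so this entry is silently dropped
    {"type": "reasoning", "reasoning": "The user wants a caption."},
    # Non-standard payloads are coerced into a text block via str()
    {"type": "non_standard", "value": {"tag": "metadata"}},
]

converted = _convert_from_v1_to_ollama(blocks, model_provider="ollama")
# converted == [
#     {"type": "text", "text": "Describe this image."},
#     {"type": "text", "text": "{'tag': 'metadata'}"},
# ]

Note that model_provider is accepted only for signature compatibility (hence the noqa: ARG001), and any block whose type has no handler is skipped without raising.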
Frequently Asked Questions
What does _convert_from_v1_to_ollama() do?
_convert_from_v1_to_ollama() converts a list of v1 ContentBlock objects into the list-of-dicts content format expected by Ollama. Text blocks are mapped to {"type": "text", "text": ...}, image blocks are passed through unchanged (they are normalized earlier by _get_image_from_data_content_block), non_standard blocks are coerced to text via str(), and block types without a handler, such as reasoning or tool calls, are silently dropped. It lives in libs/partners/ollama/langchain_ollama/_compat.py.
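For example, a hedged sketch that passes a plain dict in place of a TextContentBlock:

from langchain_ollama._compat import _convert_from_v1_to_ollama

_convert_from_v1_to_ollama([{"type": "text", "text": "Hello"}], None)
# -> [{"type": "text", "text": "Hello"}]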
Where is _convert_from_v1_to_ollama() defined?
_convert_from_v1_to_ollama() is defined in libs/partners/ollama/langchain_ollama/_compat.py at line 8.