ChatOllama Class — langchain Architecture
Architecture documentation for the ChatOllama class in chat_models.py from the langchain codebase.
Entity Profile
Dependency Diagram
```mermaid
graph TD
    c2e46b05_152a_6249_ac80_ab1d19c95906["ChatOllama"]
    d009a608_c505_bd50_7200_0de8a69ba4b7["BaseChatModel"]
    c2e46b05_152a_6249_ac80_ab1d19c95906 -->|extends| d009a608_c505_bd50_7200_0de8a69ba4b7
    fcfa55b0_4a86_fa31_a156_3c38c76a0a9b["AIMessage"]
    c2e46b05_152a_6249_ac80_ab1d19c95906 -->|uses| fcfa55b0_4a86_fa31_a156_3c38c76a0a9b
    e0e879bf_e732_8d0f_6ce2_3d40e66f4eb3["HumanMessage"]
    c2e46b05_152a_6249_ac80_ab1d19c95906 -->|uses| e0e879bf_e732_8d0f_6ce2_3d40e66f4eb3
    5c3ab140_4d2a_43d4_63cb_89f4c4b1a8d6["SystemMessage"]
    c2e46b05_152a_6249_ac80_ab1d19c95906 -->|uses| 5c3ab140_4d2a_43d4_63cb_89f4c4b1a8d6
    169a8cd0_0dac_1362_0fd1_9b784bf38c56["ChatMessage"]
    c2e46b05_152a_6249_ac80_ab1d19c95906 -->|uses| 169a8cd0_0dac_1362_0fd1_9b784bf38c56
    4318b819_4fe9_65b0_5369_424ec9518efe["ToolMessage"]
    c2e46b05_152a_6249_ac80_ab1d19c95906 -->|uses| 4318b819_4fe9_65b0_5369_424ec9518efe
    c9284a6d_f83c_0698_fc0a_b5e98d911196["chat_models.py"]
    c2e46b05_152a_6249_ac80_ab1d19c95906 -->|defined in| c9284a6d_f83c_0698_fc0a_b5e98d911196
    29b0f48c_e03a_fb1f_bb40_e2e74df79e3f["_chat_params()"]
    c2e46b05_152a_6249_ac80_ab1d19c95906 -->|method| 29b0f48c_e03a_fb1f_bb40_e2e74df79e3f
    616a4376_59e1_136f_67cb_7ef76433ad57["_set_clients()"]
    c2e46b05_152a_6249_ac80_ab1d19c95906 -->|method| 616a4376_59e1_136f_67cb_7ef76433ad57
    527a9cff_15fb_7949_522d_8a9c58c965d1["_convert_messages_to_ollama_messages()"]
    c2e46b05_152a_6249_ac80_ab1d19c95906 -->|method| 527a9cff_15fb_7949_522d_8a9c58c965d1
    cf7f20bc_e620_c340_7f83_e86a19592884["_acreate_chat_stream()"]
    c2e46b05_152a_6249_ac80_ab1d19c95906 -->|method| cf7f20bc_e620_c340_7f83_e86a19592884
    65b95667_ecaf_0ef9_13ab_dd6324a71622["_create_chat_stream()"]
    c2e46b05_152a_6249_ac80_ab1d19c95906 -->|method| 65b95667_ecaf_0ef9_13ab_dd6324a71622
    522526bb_d9ab_9964_6fcd_3c28ca289eda["_chat_stream_with_aggregation()"]
    c2e46b05_152a_6249_ac80_ab1d19c95906 -->|method| 522526bb_d9ab_9964_6fcd_3c28ca289eda
    bf936af0_04f8_1063_4aca_97d9e85ee6a5["_achat_stream_with_aggregation()"]
    c2e46b05_152a_6249_ac80_ab1d19c95906 -->|method| bf936af0_04f8_1063_4aca_97d9e85ee6a5
```
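The diagram lists `_convert_messages_to_ollama_messages()` among ChatOllama's methods; conceptually, it maps each LangChain message class to the role string the Ollama API expects. Below is a simplified, hypothetical sketch of that mapping — here messages are plain `(type_name, content)` pairs rather than LangChain message objects, and the real method also handles `ChatMessage` custom roles, images, and tool calls.

```python
# Hypothetical, simplified sketch of the role mapping performed by
# _convert_messages_to_ollama_messages(). Messages are modeled as
# (type_name, content) pairs instead of LangChain message objects.
ROLE_BY_TYPE = {
    "SystemMessage": "system",
    "HumanMessage": "user",
    "AIMessage": "assistant",
    "ToolMessage": "tool",
}

def to_ollama_messages(messages):
    """Convert (type_name, content) pairs to Ollama-style role dicts."""
    return [
        {"role": ROLE_BY_TYPE[type_name], "content": content}
        for type_name, content in messages
    ]

print(to_ollama_messages([
    ("SystemMessage", "You are a helpful translator."),
    ("HumanMessage", "I love programming."),
]))
```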
Source Code
libs/partners/ollama/langchain_ollama/chat_models.py lines 260–1606
class ChatOllama(BaseChatModel):
r"""Ollama chat model integration.
???+ note "Setup"
Install `langchain-ollama` and download any models you want to use from Ollama.
```bash
ollama pull gpt-oss:20b
pip install -U langchain-ollama
```
Key init args — completion params:
model: str
Name of Ollama model to use.
reasoning: bool | None
Controls the reasoning/thinking mode for
[supported models](https://ollama.com/search?c=thinking).
- `True`: Enables reasoning mode. The model's reasoning process will be
captured and returned separately in the `additional_kwargs` of the
response message, under `reasoning_content`. The main response
content will not include the reasoning tags.
- `False`: Disables reasoning mode. The model will not perform any reasoning,
and the response will not include any reasoning content.
- `None` (Default): The model will use its default reasoning behavior. Note
however, if the model's default behavior *is* to perform reasoning, think tags
(`<think>` and `</think>`) will be present within the main response content
unless you set `reasoning` to `True`.
temperature: float
Sampling temperature. Ranges from `0.0` to `1.0`.
num_predict: int | None
Max number of tokens to generate.
See full list of supported init args and their descriptions in the params section.
Instantiate:
```python
from langchain_ollama import ChatOllama
model = ChatOllama(
model="gpt-oss:20b",
validate_model_on_init=True,
temperature=0.8,
num_predict=256,
# other params ...
)
```
Invoke:
```python
messages = [
("system", "You are a helpful translator. Translate the user sentence to French."),
("human", "I love programming."),
]
model.invoke(messages)
```
```python
AIMessage(content='J\'adore le programmation. (Note: "programming" can also refer to the act of writing code, so if you meant that, I could translate it as "J\'adore programmer". But since you didn\'t specify, I assumed you were talking about the activity itself, which is what "le programmation" usually refers to.)', response_metadata={'model': 'llama3', 'created_at': '2024-07-04T03:37:50.182604Z', 'message': {'role': 'assistant', 'content': ''}, 'done_reason': 'stop', 'done': True, 'total_duration': 3576619666, 'load_duration': 788524916, 'prompt_eval_count': 32, 'prompt_eval_duration': 128125000, 'eval_count': 71, 'eval_duration': 2656556000}, id='run-ba48f958-6402-41a5-b461-5e250a4ebd36-0')
```
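The `response_metadata` above carries Ollama's timing counters. Assuming, as in the Ollama API, that the `*_duration` fields are reported in nanoseconds, output throughput can be derived from `eval_count` and `eval_duration`:

```python
# Deriving output tokens/sec from the metadata values shown above.
# Assumption: Ollama reports *_duration fields in nanoseconds.
metadata = {"eval_count": 71, "eval_duration": 2656556000}
tokens_per_second = metadata["eval_count"] / (metadata["eval_duration"] / 1e9)
print(round(tokens_per_second, 1))  # -> 26.7
```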
Stream:
```python
for chunk in model.stream("Return the words Hello World!"):
    print(chunk.text, end="")
```
```python
content='Hello' id='run-327ff5ad-45c8-49fe-965c-0a93982e9be1'
content=' World' id='run-327ff5ad-45c8-49fe-965c-0a93982e9be1'
content='!' id='run-327ff5ad-45c8-49fe-965c-0a93982e9be1'
content='' response_metadata={'model': 'llama3', 'created_at': '2024-07-04T03:39:42.274449Z', 'message': {'role': 'assistant', 'content': ''}, 'done_reason': 'stop', 'done': True, 'total_duration': 411875125, 'load_duration': 1898166, 'prompt_eval_count': 14, 'prompt_eval_duration': 297320000, 'eval_count': 4, 'eval_duration': 111099000} id='run-327ff5ad-45c8-49fe-965c-0a93982e9be1'
```
```python
stream = model.stream(messages)
full = next(stream)
for chunk in stream:
    full += chunk
```
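The aggregation loop works because adding two `AIMessageChunk` objects concatenates their content. A toy illustration of the same pattern, using a minimal hypothetical stand-in for `AIMessageChunk` (the real class additionally merges response metadata and tool-call deltas when added):

```python
# Toy illustration of the chunk-aggregation pattern; Chunk is a
# hypothetical stand-in for AIMessageChunk.
from dataclasses import dataclass

@dataclass
class Chunk:
    content: str

    def __add__(self, other):
        # Adding chunks concatenates their content.
        return Chunk(self.content + other.content)

chunks = iter([Chunk("Hello"), Chunk(" World"), Chunk("!")])
full = next(chunks)
for chunk in chunks:
    full += chunk
print(full.content)  # -> Hello World!
```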
Frequently Asked Questions
What is the ChatOllama class?
ChatOllama is a class in the langchain codebase, defined in libs/partners/ollama/langchain_ollama/chat_models.py.
Where is ChatOllama defined?
ChatOllama is defined in libs/partners/ollama/langchain_ollama/chat_models.py at line 260.
What does ChatOllama extend?
ChatOllama extends BaseChatModel. The message classes AIMessage, HumanMessage, SystemMessage, ChatMessage, and ToolMessage are not base classes of ChatOllama; they are dependencies it uses when converting LangChain messages to and from the Ollama API format.