OllamaLLM Class — langchain Architecture
Architecture documentation for the OllamaLLM class in llms.py from the langchain codebase.
Entity Profile
Dependency Diagram
```mermaid
graph TD
    OllamaLLM["OllamaLLM"] -->|extends| BaseLLM["BaseLLM"]
    OllamaLLM -->|defined in| llms_py["llms.py"]
    OllamaLLM -->|method| m1["_generate_params()"]
    OllamaLLM -->|method| m2["_llm_type()"]
    OllamaLLM -->|method| m3["_get_ls_params()"]
    OllamaLLM -->|method| m4["_set_clients()"]
    OllamaLLM -->|method| m5["_acreate_generate_stream()"]
    OllamaLLM -->|method| m6["_create_generate_stream()"]
    OllamaLLM -->|method| m7["_astream_with_aggregation()"]
    OllamaLLM -->|method| m8["_stream_with_aggregation()"]
    OllamaLLM -->|method| m9["_generate()"]
    OllamaLLM -->|method| m10["_agenerate()"]
    OllamaLLM -->|method| m11["_stream()"]
    OllamaLLM -->|method| m12["_astream()"]
```
Source Code
libs/partners/ollama/langchain_ollama/llms.py lines 25–549
```python
class OllamaLLM(BaseLLM):
    """Ollama large language models."""
```
Setup:
Install `langchain-ollama` and install/run the Ollama server locally:
```bash
pip install -U langchain-ollama
# Visit https://ollama.com/download to download and install Ollama
# (Linux users): start the server with `ollama serve`
```
Download a model to use:
```bash
ollama pull llama3.1
```
Key init args — generation params:
- `model` (`str`): Name of the Ollama model to use (e.g. `'llama4'`).
- `temperature` (`float | None`): Sampling temperature. Higher values make output more creative.
- `num_predict` (`int | None`): Maximum number of tokens to predict.
- `top_k` (`int | None`): Limits the next-token selection to the K most probable tokens.
- `top_p` (`float | None`): Nucleus sampling parameter. Higher values lead to more diverse text.
- `mirostat` (`int | None`): Enable Mirostat sampling for controlling perplexity.
- `seed` (`int | None`): Random number seed for generation reproducibility.
Key init args — client params:
- `base_url`: Base URL where the Ollama server is hosted.
- `keep_alive`: How long the model stays loaded in memory.
- `format`: Specify the format of the output.
See full list of supported init args and their descriptions in the params section.
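The generation params above act as defaults that per-call keyword arguments can override. The sketch below illustrates that merging pattern with a hypothetical `merge_options` helper; it mirrors the general idea behind `_generate_params`, not its actual implementation.

```python
# Hypothetical sketch: init-time generation params serve as defaults,
# per-call kwargs override them, and unset (None) values are dropped.
def merge_options(init_options: dict, call_kwargs: dict) -> dict:
    """Merge init-time options with per-call overrides, dropping unset values."""
    merged = {k: v for k, v in init_options.items() if v is not None}
    merged.update({k: v for k, v in call_kwargs.items() if v is not None})
    return merged


init_options = {"temperature": 0.7, "num_predict": 256, "top_k": None}
print(merge_options(init_options, {"temperature": 0.2}))
# {'temperature': 0.2, 'num_predict': 256}
```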
Instantiate:
```python
from langchain_ollama import OllamaLLM

model = OllamaLLM(
    model="llama3.1",
    temperature=0.7,
    num_predict=256,
    # base_url="http://localhost:11434",
    # other params...
)
```
Invoke:
```python
input_text = "The meaning of life is "
response = model.invoke(input_text)
print(response)
```
```txt
"a philosophical question that has been contemplated by humans for
centuries..."
```
Stream:
```python
for chunk in model.stream(input_text):
    print(chunk, end="")
```
```txt
a philosophical question that has been contemplated by humans for
centuries...
```
Async:
```python
response = await model.ainvoke(input_text)
```
Frequently Asked Questions
What is the OllamaLLM class?
OllamaLLM is a class in the langchain codebase, defined in libs/partners/ollama/langchain_ollama/llms.py.
Where is OllamaLLM defined?
OllamaLLM is defined in libs/partners/ollama/langchain_ollama/llms.py at line 25.
What does OllamaLLM extend?
OllamaLLM extends BaseLLM.
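Since OllamaLLM extends BaseLLM, it inherits the shared LLM interface and supplies provider-specific hooks such as `_llm_type` and `_generate`. The stub classes below are an illustrative stand-in for that contract, not the real `langchain_core` base class.

```python
# Illustrative stand-in for the BaseLLM contract: a subclass supplies
# `_llm_type` and `_generate`; the base class provides shared entry points.
from abc import ABC, abstractmethod


class StubBaseLLM(ABC):
    @property
    @abstractmethod
    def _llm_type(self) -> str: ...

    @abstractmethod
    def _generate(self, prompts: list[str]) -> list[str]: ...

    def invoke(self, prompt: str) -> str:
        # Shared behavior lives in the base class, as with BaseLLM.
        return self._generate([prompt])[0]


class StubOllamaLLM(StubBaseLLM):
    def __init__(self, model: str) -> None:
        self.model = model

    @property
    def _llm_type(self) -> str:
        return "ollama-llm"

    def _generate(self, prompts: list[str]) -> list[str]:
        # A real implementation would call the Ollama server here.
        return [f"[{self.model}] echo: {p}" for p in prompts]


llm = StubOllamaLLM(model="llama3.1")
print(llm.invoke("hi"))  # [llama3.1] echo: hi
```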