OllamaEmbeddings Class — langchain Architecture
Architecture documentation for the OllamaEmbeddings class in embeddings.py from the langchain codebase.
Entity Profile
Dependency Diagram
```mermaid
graph TD
    0c74773d_9131_7802_d6df_f19bff9e4247["OllamaEmbeddings"]
    b1e4f760_c634_d3bf_ca9a_db7ab899cc4a["Embeddings"]
    3e58a54c_dcc6_1588_0a1f_39de205758ef["embeddings.py"]
    ed61a297_f8eb_5ecc_600b_c3611d9d8499["_default_params()"]
    dbc1866e_deaa_7ac7_a687_6050b5213b30["_set_clients()"]
    05b784d6_9929_b505_8950_cd1c3c4296a3["embed_documents()"]
    28d29cd2_4f8f_d548_96c7_0edf8c40fb71["embed_query()"]
    6da257f1_8f04_1bab_2705_6a6c38ec8575["aembed_documents()"]
    889b8605_e993_3a76_573b_2207ae285cbe["aembed_query()"]
    0c74773d_9131_7802_d6df_f19bff9e4247 -->|extends| b1e4f760_c634_d3bf_ca9a_db7ab899cc4a
    0c74773d_9131_7802_d6df_f19bff9e4247 -->|defined in| 3e58a54c_dcc6_1588_0a1f_39de205758ef
    0c74773d_9131_7802_d6df_f19bff9e4247 -->|method| ed61a297_f8eb_5ecc_600b_c3611d9d8499
    0c74773d_9131_7802_d6df_f19bff9e4247 -->|method| dbc1866e_deaa_7ac7_a687_6050b5213b30
    0c74773d_9131_7802_d6df_f19bff9e4247 -->|method| 05b784d6_9929_b505_8950_cd1c3c4296a3
    0c74773d_9131_7802_d6df_f19bff9e4247 -->|method| 28d29cd2_4f8f_d548_96c7_0edf8c40fb71
    0c74773d_9131_7802_d6df_f19bff9e4247 -->|method| 6da257f1_8f04_1bab_2705_6a6c38ec8575
    0c74773d_9131_7802_d6df_f19bff9e4247 -->|method| 889b8605_e993_3a76_573b_2207ae285cbe
```
Source Code
libs/partners/ollama/langchain_ollama/embeddings.py lines 19–332
class OllamaEmbeddings(BaseModel, Embeddings):
"""Ollama embedding model integration.
Set up a local Ollama instance:
[Install the Ollama package](https://github.com/ollama/ollama) and set up a
local Ollama instance.
You will need to choose a model to serve.
You can view a list of available models via [the model library](https://ollama.com/library).
To fetch a model from the Ollama model library use `ollama pull <name-of-model>`.
For example, to pull the llama3 model:
```bash
ollama pull llama3
```
This will download the default tagged version of the model.
Typically, the default tag points to the latest model with the smallest parameter size.
* On Mac, the models will be downloaded to `~/.ollama/models`
* On Linux (or WSL), the models will be stored at `/usr/share/ollama/.ollama/models`
You can specify the exact version of the model of interest
as such `ollama pull vicuna:13b-v1.5-16k-q4_0`.
To view pulled models:
```bash
ollama list
```
To start serving:
```bash
ollama serve
```
View the Ollama documentation for more commands.
```bash
ollama help
```
Install the `langchain-ollama` integration package:
```bash
pip install -U langchain-ollama
```
Key init args — completion params:
model: str
Name of Ollama model to use.
base_url: str | None
Base url the model is hosted under.
See full list of supported init args and their descriptions in the params section.
Instantiate:
```python
from langchain_ollama import OllamaEmbeddings
embed = OllamaEmbeddings(model="llama3")
```
Embed single text:
```python
input_text = "The meaning of life is 42"
vector = embed.embed_query(input_text)
print(vector[:3])
```
```python
[-0.024603435769677162, -0.007543657906353474, 0.0039630369283258915]
```
Embed multiple texts:
```python
input_texts = ["Document 1...", "Document 2..."]
vectors = embed.embed_documents(input_texts)
```
Frequently Asked Questions
What is the OllamaEmbeddings class?
OllamaEmbeddings is a class in the langchain codebase, defined in libs/partners/ollama/langchain_ollama/embeddings.py.
Where is OllamaEmbeddings defined?
OllamaEmbeddings is defined in libs/partners/ollama/langchain_ollama/embeddings.py at line 19.
What does OllamaEmbeddings extend?
OllamaEmbeddings extends Embeddings.
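Extending Embeddings means OllamaEmbeddings provides the interface's two sync methods, `embed_documents` and `embed_query`, along with the async counterparts `aembed_documents` and `aembed_query` listed in the dependency diagram. A minimal sketch of a class written against that same method surface, using a deterministic hash-derived vector as a stand-in for a real embedding model (the class name and hashing scheme here are illustrative, not part of langchain):

```python
import asyncio
import hashlib

class FakeHashEmbeddings:
    """Illustrative stand-in exposing the same four methods as the
    Embeddings interface; vectors are hash-derived, not real embeddings."""

    def __init__(self, size: int = 4) -> None:
        self.size = size

    def embed_query(self, text: str) -> list[float]:
        # Derive `size` floats in [0, 1] from the SHA-256 digest of the text.
        digest = hashlib.sha256(text.encode()).digest()
        return [digest[i] / 255.0 for i in range(self.size)]

    def embed_documents(self, texts: list[str]) -> list[list[float]]:
        # One vector per input text.
        return [self.embed_query(t) for t in texts]

    async def aembed_query(self, text: str) -> list[float]:
        # Trivial async wrapper; a real integration would await an HTTP client.
        return self.embed_query(text)

    async def aembed_documents(self, texts: list[str]) -> list[list[float]]:
        return self.embed_documents(texts)

embed = FakeHashEmbeddings(size=3)
vectors = embed.embed_documents(["Document 1...", "Document 2..."])
print(len(vectors), len(vectors[0]))  # 2 documents, 3 dimensions each
```

Code written against this shape works unchanged whether the concrete class is OllamaEmbeddings or any other Embeddings implementation, which is what lets langchain swap embedding backends behind vector stores and retrievers.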