embeddings.py — langchain Source File

Architecture documentation for embeddings.py, a Python file in the langchain codebase. It has 6 imports and 0 dependents.

Entity Profile

Dependency Diagram

graph LR
  751b2f98_7765_dbd5_1dd7_a14a29453d1a["embeddings.py"]
  8e2034b7_ceb8_963f_29fc_2ea6b50ef9b3["typing"]
  751b2f98_7765_dbd5_1dd7_a14a29453d1a --> 8e2034b7_ceb8_963f_29fc_2ea6b50ef9b3
  bc46b61d_cfdf_3f6b_a9dd_ac2a328d84b3["langchain_core.embeddings"]
  751b2f98_7765_dbd5_1dd7_a14a29453d1a --> bc46b61d_cfdf_3f6b_a9dd_ac2a328d84b3
  e36ef4a1_87ee_d91e_3f75_05e353ec925c["ollama"]
  751b2f98_7765_dbd5_1dd7_a14a29453d1a --> e36ef4a1_87ee_d91e_3f75_05e353ec925c
  6e58aaea_f08e_c099_3cc7_f9567bfb1ae7["pydantic"]
  751b2f98_7765_dbd5_1dd7_a14a29453d1a --> 6e58aaea_f08e_c099_3cc7_f9567bfb1ae7
  91721f45_4909_e489_8c1f_084f8bd87145["typing_extensions"]
  751b2f98_7765_dbd5_1dd7_a14a29453d1a --> 91721f45_4909_e489_8c1f_084f8bd87145
  3caf0eca_9e5b_c2a6_c583_186d108c10fd["langchain_ollama._utils"]
  751b2f98_7765_dbd5_1dd7_a14a29453d1a --> 3caf0eca_9e5b_c2a6_c583_186d108c10fd
  style 751b2f98_7765_dbd5_1dd7_a14a29453d1a fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

"""Ollama embeddings models."""

from __future__ import annotations

from typing import Any

from langchain_core.embeddings import Embeddings
from ollama import AsyncClient, Client
from pydantic import BaseModel, ConfigDict, PrivateAttr, model_validator
from typing_extensions import Self

from langchain_ollama._utils import (
    merge_auth_headers,
    parse_url_with_auth,
    validate_model,
)


class OllamaEmbeddings(BaseModel, Embeddings):
    """Ollama embedding model integration.

    Set up a local Ollama instance:
        [Install the Ollama package](https://github.com/ollama/ollama) and set up a
        local Ollama instance.

        You will need to choose a model to serve.

        You can view a list of available models via [the model library](https://ollama.com/library).

        To fetch a model from the Ollama model library use `ollama pull <name-of-model>`.

        For example, to pull the llama3 model:

        ```bash
        ollama pull llama3
        ```

        This will download the default tagged version of the model.
        Typically, the default points to the latest model with the smallest parameter count.

        * On Mac, the models will be downloaded to `~/.ollama/models`
        * On Linux (or WSL), the models will be stored at `/usr/share/ollama/.ollama/models`

        You can specify the exact version of the model of interest
        as such `ollama pull vicuna:13b-v1.5-16k-q4_0`.

        To view pulled models:

        ```bash
        ollama list
        ```

        To start serving:

        ```bash
        ollama serve
        ```

        View the Ollama documentation for more commands.

// ... (273 more lines)
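The excerpt above shows the class declaration and setup docstring; the elided body implements the `Embeddings` interface (`embed_documents` and `embed_query`) by delegating to the `ollama` client. The sketch below illustrates that shape without requiring `langchain_ollama` or a running Ollama server: `StubClient` is a hypothetical stand-in for `ollama.Client`, and `SketchOllamaEmbeddings` is an illustrative class, not the real implementation.

```python
from typing import List


class StubClient:
    """Hypothetical stand-in for ollama.Client, so this sketch runs offline.

    The real client's embed() call sends the inputs to a local Ollama
    server and returns model-produced vectors.
    """

    def embed(self, model: str, input: List[str]) -> dict:
        # Return fixed-size dummy vectors, one per input string.
        return {"embeddings": [[float(len(text)), 0.0, 0.0] for text in input]}


class SketchOllamaEmbeddings:
    """Minimal sketch of the Embeddings interface OllamaEmbeddings fulfills."""

    def __init__(self, model: str) -> None:
        self.model = model
        self._client = StubClient()

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        """Embed a batch of documents in one client call."""
        return self._client.embed(model=self.model, input=texts)["embeddings"]

    def embed_query(self, text: str) -> List[float]:
        """Embed a single query by reusing the batch path."""
        return self.embed_documents([text])[0]


embedder = SketchOllamaEmbeddings(model="llama3")
vectors = embedder.embed_documents(["hello", "world!"])
print(len(vectors), len(vectors[0]))  # 2 vectors of dimension 3
```

Note the design choice visible in the real class signature as well: `embed_query` is just a one-element batch through `embed_documents`, so all requests share a single client code path.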

Dependencies

  • langchain_core.embeddings
  • langchain_ollama._utils
  • ollama
  • pydantic
  • typing
  • typing_extensions

Frequently Asked Questions

What does embeddings.py do?
embeddings.py is a source file in the langchain codebase, written in Python. It belongs to the CoreAbstractions domain, RunnableInterface subdomain.
What does embeddings.py depend on?
embeddings.py imports 6 module(s): langchain_core.embeddings, langchain_ollama._utils, ollama, pydantic, typing, typing_extensions.
Where is embeddings.py in the architecture?
embeddings.py is located at libs/partners/ollama/langchain_ollama/embeddings.py (domain: CoreAbstractions, subdomain: RunnableInterface, directory: libs/partners/ollama/langchain_ollama).
