index() — langchain Function Reference
Architecture documentation for the index() function in api.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    index_fn["index()"]
    api_py["api.py"]
    index_fn -->|defined in| api_py
    warn_sha1["_warn_about_sha1()"]
    index_fn -->|calls| warn_sha1
    get_source_id_assigner["_get_source_id_assigner()"]
    index_fn -->|calls| get_source_id_assigner
    batch_fn["_batch()"]
    index_fn -->|calls| batch_fn
    dedupe["_deduplicate_in_order()"]
    index_fn -->|calls| dedupe
    get_doc_hash["_get_document_with_hash()"]
    index_fn -->|calls| get_doc_hash
    delete_fn["_delete()"]
    index_fn -->|calls| delete_fn
    style index_fn fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/core/langchain_core/indexing/api.py lines 290–597
def index(
docs_source: BaseLoader | Iterable[Document],
record_manager: RecordManager,
vector_store: VectorStore | DocumentIndex,
*,
batch_size: int = 100,
cleanup: Literal["incremental", "full", "scoped_full"] | None = None,
source_id_key: str | Callable[[Document], str] | None = None,
cleanup_batch_size: int = 1_000,
force_update: bool = False,
key_encoder: Literal["sha1", "sha256", "sha512", "blake2b"]
| Callable[[Document], str] = "sha1",
upsert_kwargs: dict[str, Any] | None = None,
) -> IndexingResult:
"""Index data from the loader into the vector store.
Indexing functionality uses a manager to keep track of which documents
are in the vector store.
This allows us to keep track of which documents were updated, which
were deleted, and which should be skipped.
For the time being, documents are indexed using their hashes, and users
are not able to specify the uid of the document.
!!! warning "Behavior changed in `langchain-core` 0.3.25"
Added `scoped_full` cleanup mode.
!!! warning
* In full mode, the loader should return the entire dataset,
not just a subset of it.
Otherwise, the automatic cleanup will remove documents that it is not
supposed to.
* In incremental mode, if documents associated with a particular
source id appear across different batches, the indexing API
will do some redundant work. This will still result in the
correct end state of the index, but will unfortunately not be
100% efficient. For example, if a given document is split into 15
chunks, and we index them using a batch size of 5, we'll have 3 batches
all with the same source id. In general, to avoid doing too much
redundant work, choose as large a batch size as possible.
* The `scoped_full` mode is suitable if determining an appropriate batch size
is challenging or if your data loader cannot return the entire dataset at
once. This mode keeps track of source IDs in memory, which should be fine
for most use cases. If your dataset is large (10M+ docs), you will likely
need to parallelize the indexing process regardless.
Args:
docs_source: Data loader or iterable of documents to index.
record_manager: Timestamped set to keep track of which documents were
updated.
vector_store: `VectorStore` or DocumentIndex to index the documents into.
batch_size: Batch size to use when indexing.
cleanup: How to handle clean up of documents.
- incremental: Cleans up all documents that haven't been updated AND
that are associated with source IDs that were seen during indexing.
Cleanup runs continuously during indexing, helping to minimize the
probability of users seeing duplicated content.
- full: Delete all documents that have not been returned by the loader
during this run of indexing.
Clean up runs after all documents have been indexed.
This means that users may see duplicated content during indexing.
- scoped_full: Similar to `full`, but only deletes documents
that haven't been updated AND that are associated with
source IDs that were seen during indexing.
- None: Do not delete any documents.
source_id_key: Optional key that helps identify the original source
of the document.
cleanup_batch_size: Batch size to use when cleaning up documents.
force_update: Force update documents even if they are present in the
record manager. Useful if you are re-indexing with updated embeddings.
key_encoder: Hashing algorithm to use for hashing the document content and
metadata. Options are "sha1", "sha256", "sha512", and "blake2b".
If not provided, a default encoder using SHA-1 will be used.
!!! version-added "Added in `langchain-core` 0.3.66"
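The listing above shows `index()` delegating hashing and deduplication to helpers such as `_get_document_with_hash` and `_deduplicate_in_order`. A minimal standalone sketch of that idea follows; the function names, tuple-based document shape, and SHA-256 default here are illustrative, not the real helpers' signatures:

```python
import hashlib
import json

def document_hash(page_content: str, metadata: dict, algorithm: str = "sha256") -> str:
    # Hash both the content and the (canonicalized) metadata, in the same
    # spirit as hashing a Document's content and metadata together.
    hasher = hashlib.new(algorithm)
    hasher.update(page_content.encode("utf-8"))
    hasher.update(json.dumps(metadata, sort_keys=True).encode("utf-8"))
    return hasher.hexdigest()

def deduplicate_in_order(docs: list[tuple[str, dict]]) -> list[tuple[str, dict]]:
    # Keep only the first occurrence of each hash, preserving input order.
    seen: set[str] = set()
    unique = []
    for content, metadata in docs:
        h = document_hash(content, metadata)
        if h not in seen:
            seen.add(h)
            unique.append((content, metadata))
    return unique

docs = [
    ("chunk one", {"source": "a.txt"}),
    ("chunk one", {"source": "a.txt"}),  # exact duplicate: dropped
    ("chunk one", {"source": "b.txt"}),  # same text, new metadata: kept
]
print(len(deduplicate_in_order(docs)))  # 2
```

Because the metadata participates in the hash, two documents with identical text but different sources are treated as distinct records.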
Frequently Asked Questions
What does index() do?
index() is a function in the langchain codebase, defined in libs/core/langchain_core/indexing/api.py.
Where is index() defined?
index() is defined in libs/core/langchain_core/indexing/api.py at line 290.
What does index() call?
index() calls 6 function(s): _batch, _deduplicate_in_order, _delete, _get_document_with_hash, _get_source_id_assigner, _warn_about_sha1.
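The cleanup modes described in the docstring can be condensed into a small sketch. The helper below is purely illustrative (not part of langchain); it models which stale record keys each mode would remove, given what a run has seen:

```python
def docs_to_delete(existing, updated_keys, seen_source_ids, cleanup):
    # existing: record key -> source id for documents already in the store.
    # updated_keys: record keys (re)written during this indexing run.
    # seen_source_ids: source ids observed in the loader's output this run.
    stale = {key for key in existing if key not in updated_keys}
    if cleanup == "full":
        # Delete everything the loader did not return this run.
        return stale
    if cleanup in ("incremental", "scoped_full"):
        # Delete stale records only if their source id was seen this run.
        # (incremental deletes continuously; scoped_full deletes at the end.)
        return {key for key in stale if existing[key] in seen_source_ids}
    return set()  # cleanup=None: never delete

existing = {"k1": "a", "k2": "b", "k3": "c"}  # key -> source id
print(sorted(docs_to_delete(existing, {"k1"}, {"a", "b"}, "full")))         # ['k2', 'k3']
print(sorted(docs_to_delete(existing, {"k1"}, {"a", "b"}, "incremental")))  # ['k2']
print(sorted(docs_to_delete(existing, {"k1"}, {"a", "b"}, None)))           # []
```

Note how `full` removes `k3` even though its source `"c"` never appeared in the loader's output, which is why `full` mode requires the loader to return the entire dataset.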