aindex() — langchain Function Reference
Architecture documentation for the aindex() function in api.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    02b67c59_d093_f33d_633c_d77332eb191e["aindex()"]
    203188c0_72d6_6932_bc21_edf25c4c00ef["api.py"]
    02b67c59_d093_f33d_633c_d77332eb191e -->|defined in| 203188c0_72d6_6932_bc21_edf25c4c00ef
    dadaa1e9_e74e_95c1_ae3a_65e79ceaffbe["_warn_about_sha1()"]
    02b67c59_d093_f33d_633c_d77332eb191e -->|calls| dadaa1e9_e74e_95c1_ae3a_65e79ceaffbe
    e21cf809_9a76_c173_16af_56219b41085f["_to_async_iterator()"]
    02b67c59_d093_f33d_633c_d77332eb191e -->|calls| e21cf809_9a76_c173_16af_56219b41085f
    fbf152c4_37d1_876c_6600_b5f729f313a9["_get_source_id_assigner()"]
    02b67c59_d093_f33d_633c_d77332eb191e -->|calls| fbf152c4_37d1_876c_6600_b5f729f313a9
    edbcc417_69c7_d8bc_0a8b_194cd8c3dd73["_abatch()"]
    02b67c59_d093_f33d_633c_d77332eb191e -->|calls| edbcc417_69c7_d8bc_0a8b_194cd8c3dd73
    5aadbe30_58aa_8bc1_2fcd_e8fc84b92311["_deduplicate_in_order()"]
    02b67c59_d093_f33d_633c_d77332eb191e -->|calls| 5aadbe30_58aa_8bc1_2fcd_e8fc84b92311
    adeaf2c1_ef58_0e0c_bf53_4534663c6164["_get_document_with_hash()"]
    02b67c59_d093_f33d_633c_d77332eb191e -->|calls| adeaf2c1_ef58_0e0c_bf53_4534663c6164
    644340dd_43ec_9052_f7ff_84c850a4fef1["_adelete()"]
    02b67c59_d093_f33d_633c_d77332eb191e -->|calls| 644340dd_43ec_9052_f7ff_84c850a4fef1
    style 02b67c59_d093_f33d_633c_d77332eb191e fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/core/langchain_core/indexing/api.py lines 629–948
async def aindex(
    docs_source: BaseLoader | Iterable[Document] | AsyncIterator[Document],
    record_manager: RecordManager,
    vector_store: VectorStore | DocumentIndex,
    *,
    batch_size: int = 100,
    cleanup: Literal["incremental", "full", "scoped_full"] | None = None,
    source_id_key: str | Callable[[Document], str] | None = None,
    cleanup_batch_size: int = 1_000,
    force_update: bool = False,
    key_encoder: Literal["sha1", "sha256", "sha512", "blake2b"]
    | Callable[[Document], str] = "sha1",
    upsert_kwargs: dict[str, Any] | None = None,
) -> IndexingResult:
"""Async index data from the loader into the vector store.
Indexing functionality uses a manager to keep track of which documents
are in the vector store.
This allows us to keep track of which documents were updated, and which
documents were deleted, which documents should be skipped.
For the time being, documents are indexed using their hashes, and users
are not able to specify the uid of the document.
!!! warning "Behavior changed in `langchain-core` 0.3.25"
Added `scoped_full` cleanup mode.
!!! warning
* In full mode, the loader should be returning
the entire dataset, and not just a subset of the dataset.
Otherwise, the auto_cleanup will remove documents that it is not
supposed to.
* In incremental mode, if documents associated with a particular
source id appear across different batches, the indexing API
will do some redundant work. This will still result in the
correct end state of the index, but will unfortunately not be
100% efficient. For example, if a given document is split into 15
chunks, and we index them using a batch size of 5, we'll have 3 batches
all with the same source id. In general, to avoid doing too much
redundant work select as big a batch size as possible.
* The `scoped_full` mode is suitable if determining an appropriate batch size
is challenging or if your data loader cannot return the entire dataset at
once. This mode keeps track of source IDs in memory, which should be fine
for most use cases. If your dataset is large (10M+ docs), you will likely
need to parallelize the indexing process regardless.
Args:
docs_source: Data loader or iterable of documents to index.
record_manager: Timestamped set to keep track of which documents were
updated.
vector_store: `VectorStore` or DocumentIndex to index the documents into.
batch_size: Batch size to use when indexing.
cleanup: How to handle clean up of documents.
- incremental: Cleans up all documents that haven't been updated AND
that are associated with source IDs that were seen during indexing.
Clean up is done continuously during indexing helping to minimize the
probability of users seeing duplicated content.
- full: Delete all documents that have not been returned by the loader
during this run of indexing.
Clean up runs after all documents have been indexed.
This means that users may see duplicated content during indexing.
- scoped_full: Similar to Full, but only deletes all documents
that haven't been updated AND that are associated with
source IDs that were seen during indexing.
- None: Do not delete any documents.
source_id_key: Optional key that helps identify the original source
of the document.
cleanup_batch_size: Batch size to use when cleaning up documents.
force_update: Force update documents even if they are present in the
record manager. Useful if you are re-indexing with updated embeddings.
key_encoder: Hashing algorithm to use for hashing the document content and
metadata. Options include "blake2b", "sha256", and "sha512".
!!! version-added "Added in `langchain-core` 0.3.66"
key_encoder: Hashing algorithm to use for hashing the document.
If not provided, a default encoder using SHA-1 will be used.
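The excerpt above covers the configuration surface of aindex(). A minimal usage sketch follows; it is hypothetical, assumes langchain-core 0.3.66+ (for the key_encoder parameter), and uses InMemoryRecordManager, InMemoryVectorStore, and DeterministicFakeEmbedding only to keep the example self-contained. Any RecordManager and VectorStore implementations can stand in for them.

import asyncio

from langchain_core.documents import Document
from langchain_core.embeddings import DeterministicFakeEmbedding
from langchain_core.indexing import InMemoryRecordManager, aindex
from langchain_core.vectorstores import InMemoryVectorStore


async def main() -> None:
    # The record manager tracks which document hashes have already been indexed.
    record_manager = InMemoryRecordManager(namespace="demo/aindex")
    await record_manager.acreate_schema()

    # An in-memory store with a deterministic fake embedding keeps the
    # sketch self-contained; a real VectorStore would be used in practice.
    vector_store = InMemoryVectorStore(embedding=DeterministicFakeEmbedding(size=32))

    docs = [
        Document(page_content="kitty", metadata={"source": "kitty.txt"}),
        Document(page_content="doggy", metadata={"source": "doggy.txt"}),
    ]

    # First run: both documents are new, so both are added.
    result = await aindex(
        docs,
        record_manager,
        vector_store,
        cleanup="incremental",
        source_id_key="source",
        key_encoder="sha256",  # avoids the SHA-1 warning emitted by the default
    )
    print(result)
    # Expected: {'num_added': 2, 'num_updated': 0, 'num_skipped': 0, 'num_deleted': 0}

    # Second run with identical content: the hashes already exist in the
    # record manager, so both documents are skipped.
    result = await aindex(
        docs,
        record_manager,
        vector_store,
        cleanup="incremental",
        source_id_key="source",
        key_encoder="sha256",
    )
    print(result)
    # Expected: {'num_added': 0, 'num_updated': 0, 'num_skipped': 2, 'num_deleted': 0}


asyncio.run(main())

On the second run nothing is re-embedded or re-written to the vector store, which is the deduplication behavior the record manager provides.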
Frequently Asked Questions
What does aindex() do?
aindex() asynchronously indexes documents from a loader or iterable into a VectorStore or DocumentIndex, using a RecordManager to track which documents were added, updated, skipped, or deleted. It is defined in libs/core/langchain_core/indexing/api.py in the langchain codebase.
Where is aindex() defined?
aindex() is defined in libs/core/langchain_core/indexing/api.py at line 629.
What does aindex() call?
aindex() calls seven helper functions: _abatch, _adelete, _deduplicate_in_order, _get_document_with_hash, _get_source_id_assigner, _to_async_iterator, and _warn_about_sha1.
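The cleanup modes documented in the docstring can be exercised by extending the earlier sketch inside the same async function. This hypothetical continuation switches to "full" cleanup after the source stops producing doggy.txt and rewrites kitty.txt; the counts in the comment are the expected outcome, shown for illustration only.

    # Continuation of the earlier sketch: same record_manager, vector_store,
    # and async context. The source now returns a rewritten kitty.txt and no
    # longer returns doggy.txt.
    changed_docs = [
        Document(page_content="kitty v2", metadata={"source": "kitty.txt"}),
    ]

    result = await aindex(
        changed_docs,
        record_manager,
        vector_store,
        cleanup="full",
        source_id_key="source",
        key_encoder="sha256",
    )
    print(result)
    # "full" cleanup deletes every record that was not refreshed in this run:
    # the old kitty.txt hash and the doggy.txt document are removed, and the
    # rewritten kitty.txt is added.
    # Expected: {'num_added': 1, 'num_updated': 0, 'num_skipped': 0, 'num_deleted': 2}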