__init__() — langchain Function Reference

Architecture documentation for the __init__() method of LangSmithLoader, defined in libs/core/langchain_core/document_loaders/langsmith.py in the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  init["__init__()"]
  loader["LangSmithLoader"]
  init -->|defined in| loader
  style init fill:#6366f1,stroke:#818cf8,color:#fff
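For orientation, a minimal structural sketch of the containment shown above: __init__() is a method of the LangSmithLoader class. Treating BaseLoader as the parent class is an assumption (it is typical for langchain_core document loaders) and is not stated in the diagram; the full signature appears in the Source Code section below.

    # Structural sketch only; not the real class body.
    # BaseLoader as the base class is an assumption.
    from langchain_core.document_loaders.base import BaseLoader


    class LangSmithLoader(BaseLoader):
        def __init__(self, *, dataset_id=None, dataset_name=None, **client_kwargs) -> None:
            ...  # see the Source Code section for the actual implementation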

Source Code

libs/core/langchain_core/document_loaders/langsmith.py lines 40–110

    def __init__(
        self,
        *,
        dataset_id: uuid.UUID | str | None = None,
        dataset_name: str | None = None,
        example_ids: Sequence[uuid.UUID | str] | None = None,
        as_of: datetime.datetime | str | None = None,
        splits: Sequence[str] | None = None,
        inline_s3_urls: bool = True,
        offset: int = 0,
        limit: int | None = None,
        metadata: dict | None = None,
        filter: str | None = None,  # noqa: A002
        content_key: str = "",
        format_content: Callable[..., str] | None = None,
        client: LangSmithClient | None = None,
        **client_kwargs: Any,
    ) -> None:
        """Create a LangSmith loader.

        Args:
            dataset_id: The ID of the dataset to filter by.
            dataset_name: The name of the dataset to filter by.
            content_key: The inputs key to set as `Document` page content.

                `'.'` characters are interpreted as nested keys, e.g.
                `content_key="first.second"` will result in
                `Document(page_content=format_content(example.inputs["first"]["second"]))`
            format_content: Function for converting the content extracted from the example
                inputs into a string.

                Defaults to JSON-encoding the contents.
            example_ids: The IDs of the examples to filter by.
            as_of: The dataset version tag or timestamp to retrieve the examples as of.

                Response examples will only be those that were present at the time of
                the tagged (or timestamped) version.
            splits: A list of dataset splits, which are divisions of your dataset such
                as `train`, `test`, or `validation`.

                Returns examples only from the specified splits.
            inline_s3_urls: Whether to inline S3 URLs.
            offset: The offset to start from.
            limit: The maximum number of examples to return.
            metadata: Metadata to filter by.
            filter: A structured filter string to apply to the examples.
            client: LangSmith Client.

                If not provided will be initialized from below args.
            client_kwargs: Keyword args to pass to LangSmith client init.

                Should only be specified if `client` isn't.

        Raises:
            ValueError: If both `client` and `client_kwargs` are provided.
        """  # noqa: E501
        if client and client_kwargs:
            raise ValueError
        self._client = client or LangSmithClient(**client_kwargs)
        self.content_key = list(content_key.split(".")) if content_key else []
        self.format_content = format_content or _stringify
        self.dataset_id = dataset_id
        self.dataset_name = dataset_name
        self.example_ids = example_ids
        self.as_of = as_of
        self.splits = splits
        self.inline_s3_urls = inline_s3_urls
        self.offset = offset
        self.limit = limit
        self.metadata = metadata
        self.filter = filter

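A brief usage sketch of the constructor documented above. It assumes LangSmithLoader is importable from langchain_core.document_loaders and that the standard BaseLoader iteration API (lazy_load()) applies; the dataset name and content key are placeholders, not real values.

    from langchain_core.document_loaders import LangSmithLoader

    # When no `client` is passed, the loader builds a LangSmithClient itself,
    # which reads credentials from the environment (e.g. LANGSMITH_API_KEY).
    loader = LangSmithLoader(
        dataset_name="my-dataset",      # placeholder; alternatively dataset_id=...
        content_key="question.text",    # nested lookup: example.inputs["question"]["text"]
        limit=50,
    )

    # Each dataset example becomes a Document whose page_content is produced
    # by format_content (JSON-encoding of the extracted inputs by default).
    for doc in loader.lazy_load():
        print(doc.page_content[:80])
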
Frequently Asked Questions

What does __init__() do?
__init__() is the constructor of the LangSmithLoader document loader in the langchain codebase, defined in libs/core/langchain_core/document_loaders/langsmith.py. It configures which LangSmith dataset examples to load (by dataset ID or name, example IDs, splits, version, metadata, and filter), how example inputs become Document page content (content_key and format_content), and which LangSmithClient instance to use.
Where is __init__() defined?
__init__() is defined in libs/core/langchain_core/document_loaders/langsmith.py at line 40.
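
Related to the docstring's Raises note, `client` and `client_kwargs` are mutually exclusive. A hedged sketch of the two accepted calling patterns, assuming LangSmithClient is langsmith.Client and that it accepts an api_key keyword:

    from langsmith import Client as LangSmithClient
    from langchain_core.document_loaders import LangSmithLoader

    # Either pass a pre-built client...
    loader = LangSmithLoader(dataset_name="my-dataset", client=LangSmithClient())

    # ...or pass keyword args that the loader forwards to LangSmithClient(...).
    loader = LangSmithLoader(dataset_name="my-dataset", api_key="...")  # placeholder key

    # Supplying both `client` and extra client kwargs raises ValueError.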
