InMemoryRateLimiter Class — langchain Architecture

Architecture documentation for the InMemoryRateLimiter class in rate_limiters.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  InMemoryRateLimiter["InMemoryRateLimiter"]
  BaseRateLimiter["BaseRateLimiter"]
  InMemoryRateLimiter -->|extends| BaseRateLimiter
  rate_limiters_py["rate_limiters.py"]
  InMemoryRateLimiter -->|defined in| rate_limiters_py
  init_method["__init__()"]
  InMemoryRateLimiter -->|method| init_method
  consume_method["_consume()"]
  InMemoryRateLimiter -->|method| consume_method
  acquire_method["acquire()"]
  InMemoryRateLimiter -->|method| acquire_method
  aacquire_method["aacquire()"]
  InMemoryRateLimiter -->|method| aacquire_method

Source Code

libs/core/langchain_core/rate_limiters.py lines 67–250

class InMemoryRateLimiter(BaseRateLimiter):
    """An in memory rate limiter based on a token bucket algorithm.

    This is an in memory rate limiter, so it cannot rate limit across
    different processes.

    The rate limiter only allows time-based rate limiting and does not
    take into account any information about the input or the output, so it
    cannot be used to rate limit based on the size of the request.

    It is thread safe and can be used in either a sync or async context.

    The in memory rate limiter is based on a token bucket. The bucket is filled
    with tokens at a given rate. Each request consumes a token. If there are
    not enough tokens in the bucket, the request is blocked until there are
    enough tokens.

    These tokens have nothing to do with LLM tokens. They are just
    a way to keep track of how many requests can be made at a given time.

    Current limitations:

    - The rate limiter is not designed to work across different processes. It is
        an in-memory rate limiter, but it is thread safe.
    - The rate limiter only supports time-based rate limiting. It does not take
        into account the size of the request or any other factors.

    Example:
        ```python
        import time

        from langchain_core.rate_limiters import InMemoryRateLimiter

        rate_limiter = InMemoryRateLimiter(
            requests_per_second=0.1,  # <-- Can only make a request once every 10 seconds!!
            check_every_n_seconds=0.1,  # Wake up every 100 ms to check whether allowed to make a request,
            max_bucket_size=10,  # Controls the maximum burst size.
        )

        from langchain_anthropic import ChatAnthropic

        model = ChatAnthropic(
            model_name="claude-sonnet-4-5-20250929", rate_limiter=rate_limiter
        )

        for _ in range(5):
            tic = time.time()
            model.invoke("hello")
            toc = time.time()
            print(toc - tic)
        ```
    """  # noqa: E501

    def __init__(
        self,
        *,
        requests_per_second: float = 1,
        check_every_n_seconds: float = 0.1,
        max_bucket_size: float = 1,
    ) -> None:
        """A rate limiter based on a token bucket.

        These tokens have nothing to do with LLM tokens. They are just
        a way to keep track of how many requests can be made at a given time.

        This rate limiter is designed to work in a threaded environment.

        It works by filling up a bucket with tokens at a given rate. Each
        request consumes a given number of tokens. If there are not enough
        tokens in the bucket, the request is blocked until there are enough
        tokens.

        Args:
            requests_per_second: The number of tokens to add per second to the bucket.
                The tokens represent "credit" that can be used to make requests.
            check_every_n_seconds: Check whether the tokens are available
                every this many seconds. Can be a float to represent
                fractions of a second.
            max_bucket_size: The maximum number of tokens that can be in the bucket.
                Must be at least `1`. Used to prevent bursts of requests.
        """

Extends

BaseRateLimiter, the abstract rate limiter interface defined in the same module. InMemoryRateLimiter provides the concrete acquire() and aacquire() implementations.
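
Because the rate-limiting contract lives on BaseRateLimiter, other limiters can be plugged in by subclassing it. Below is a minimal sketch, assuming the keyword-only `blocking` parameter and bool return type documented for the base class; check the source for the exact abstract signatures.

```python
from langchain_core.rate_limiters import BaseRateLimiter


class NoOpRateLimiter(BaseRateLimiter):
    """Trivial limiter that never throttles (interface illustration only)."""

    def acquire(self, *, blocking: bool = True) -> bool:
        # A real limiter would block here (or return False when blocking=False)
        # until capacity is available; InMemoryRateLimiter does this with a
        # token bucket guarded by a threading.Lock.
        return True

    async def aacquire(self, *, blocking: bool = True) -> bool:
        # Async counterpart used on async code paths.
        return True
```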

Frequently Asked Questions

What is the InMemoryRateLimiter class?
InMemoryRateLimiter is a thread-safe, in-memory rate limiter in the langchain codebase, defined in libs/core/langchain_core/rate_limiters.py. It uses a token bucket to limit how often requests are made within a single process; it does not coordinate across processes and does not account for request size.
Where is InMemoryRateLimiter defined?
InMemoryRateLimiter is defined in libs/core/langchain_core/rate_limiters.py at line 67.
What does InMemoryRateLimiter extend?
InMemoryRateLimiter extends BaseRateLimiter.
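
The docstring example drives the limiter indirectly through a chat model, but it can also be called directly: acquire() blocks until a token is free, so placing it before any outbound call throttles that call. A short sketch using the constructor parameters documented above:

```python
import time

from langchain_core.rate_limiters import InMemoryRateLimiter

limiter = InMemoryRateLimiter(
    requests_per_second=2,       # refill two tokens per second
    check_every_n_seconds=0.05,  # poll for a free token every 50 ms
    max_bucket_size=2,           # allow bursts of at most two requests
)

start = time.monotonic()
for i in range(5):
    limiter.acquire()  # blocks until a token can be consumed
    print(f"request {i} allowed at t={time.monotonic() - start:.2f}s")
```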
