test_in_memory_rate_limiter.py — langchain Source File
Architecture documentation for test_in_memory_rate_limiter.py, a Python test file in the langchain codebase with 4 imports and 0 dependents.
Entity Profile
Dependency Diagram
graph LR
    a2ad4c1d_ffde_beed_c251_817f2cc8edf6["test_in_memory_rate_limiter.py"]
    996b2db9_46dd_901f_f7eb_068bafab4b12["time"]
    a2ad4c1d_ffde_beed_c251_817f2cc8edf6 --> 996b2db9_46dd_901f_f7eb_068bafab4b12
    f69d6389_263d_68a4_7fbf_f14c0602a9ba["pytest"]
    a2ad4c1d_ffde_beed_c251_817f2cc8edf6 --> f69d6389_263d_68a4_7fbf_f14c0602a9ba
    e89eeb6b_e663_1c94_9ba3_68d7d1070454["freezegun"]
    a2ad4c1d_ffde_beed_c251_817f2cc8edf6 --> e89eeb6b_e663_1c94_9ba3_68d7d1070454
    925d0a3b_d05c_e2d3_1510_b51a9462e17a["langchain_core.rate_limiters"]
    a2ad4c1d_ffde_beed_c251_817f2cc8edf6 --> 925d0a3b_d05c_e2d3_1510_b51a9462e17a
    style a2ad4c1d_ffde_beed_c251_817f2cc8edf6 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
"""Test rate limiter."""
import time
import pytest
from freezegun import freeze_time
from langchain_core.rate_limiters import InMemoryRateLimiter
@pytest.fixture
def rate_limiter() -> InMemoryRateLimiter:
"""Return an instance of InMemoryRateLimiter."""
return InMemoryRateLimiter(
requests_per_second=2, check_every_n_seconds=0.1, max_bucket_size=2
)
def test_initial_state(rate_limiter: InMemoryRateLimiter) -> None:
"""Test the initial state of the rate limiter."""
assert rate_limiter.available_tokens == 0.0
def test_sync_wait(rate_limiter: InMemoryRateLimiter) -> None:
with freeze_time("2023-01-01 00:00:00") as frozen_time:
rate_limiter.last = time.time()
assert not rate_limiter.acquire(blocking=False)
frozen_time.tick(0.1) # Increment by 0.1 seconds
assert rate_limiter.available_tokens == 0
assert not rate_limiter.acquire(blocking=False)
frozen_time.tick(0.1) # Increment by 0.1 seconds
assert rate_limiter.available_tokens == 0
assert not rate_limiter.acquire(blocking=False)
frozen_time.tick(1.8)
assert rate_limiter.acquire(blocking=False)
assert rate_limiter.available_tokens == 1.0
assert rate_limiter.acquire(blocking=False)
assert rate_limiter.available_tokens == 0
frozen_time.tick(2.1)
assert rate_limiter.acquire(blocking=False)
assert rate_limiter.available_tokens == 1
frozen_time.tick(0.9)
assert rate_limiter.acquire(blocking=False)
assert rate_limiter.available_tokens == 1
# Check max bucket size
frozen_time.tick(100)
assert rate_limiter.acquire(blocking=False)
assert rate_limiter.available_tokens == 1
async def test_async_wait(rate_limiter: InMemoryRateLimiter) -> None:
with freeze_time("2023-01-01 00:00:00") as frozen_time:
rate_limiter.last = time.time()
assert not await rate_limiter.aacquire(blocking=False)
frozen_time.tick(0.1) # Increment by 0.1 seconds
assert rate_limiter.available_tokens == 0
assert not await rate_limiter.aacquire(blocking=False)
frozen_time.tick(0.1) # Increment by 0.1 seconds
assert rate_limiter.available_tokens == 0
assert not await rate_limiter.aacquire(blocking=False)
frozen_time.tick(1.8)
assert await rate_limiter.aacquire(blocking=False)
assert rate_limiter.available_tokens == 1.0
assert await rate_limiter.aacquire(blocking=False)
assert rate_limiter.available_tokens == 0
frozen_time.tick(2.1)
assert await rate_limiter.aacquire(blocking=False)
assert rate_limiter.available_tokens == 1
frozen_time.tick(0.9)
assert await rate_limiter.aacquire(blocking=False)
assert rate_limiter.available_tokens == 1
def test_sync_wait_max_bucket_size() -> None:
with freeze_time("2023-01-01 00:00:00") as frozen_time:
rate_limiter = InMemoryRateLimiter(
requests_per_second=2, check_every_n_seconds=0.1, max_bucket_size=500
)
rate_limiter.last = time.time()
frozen_time.tick(100) # Increment by 100 seconds
assert rate_limiter.acquire(blocking=False)
# After 100 seconds we manage to refill the bucket with 200 tokens
# After consuming 1 token, we should have 199 tokens left
assert rate_limiter.available_tokens == 199.0
frozen_time.tick(10000)
assert rate_limiter.acquire(blocking=False)
assert rate_limiter.available_tokens == 499.0
# Assert that sync wait can proceed without blocking
# since we have enough tokens
rate_limiter.acquire(blocking=True)
async def test_async_wait_max_bucket_size() -> None:
with freeze_time("2023-01-01 00:00:00") as frozen_time:
rate_limiter = InMemoryRateLimiter(
requests_per_second=2, check_every_n_seconds=0.1, max_bucket_size=500
)
rate_limiter.last = time.time()
frozen_time.tick(100) # Increment by 100 seconds
assert await rate_limiter.aacquire(blocking=False)
# After 100 seconds we manage to refill the bucket with 200 tokens
# After consuming 1 token, we should have 199 tokens left
assert rate_limiter.available_tokens == 199.0
frozen_time.tick(10000)
assert await rate_limiter.aacquire(blocking=False)
assert rate_limiter.available_tokens == 499.0
# Assert that sync wait can proceed without blocking
# since we have enough tokens
await rate_limiter.aacquire(blocking=True)
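The 199.0 and 499.0 assertions in the max_bucket_size tests follow from plain token-bucket arithmetic: the bucket refills at requests_per_second tokens per elapsed second, is capped at max_bucket_size, and each successful acquire consumes one token. A minimal standalone sketch of that arithmetic (illustrative only; refill_then_consume is a hypothetical helper, not part of langchain_core):

# Illustrative token-bucket arithmetic, not the InMemoryRateLimiter implementation.
def refill_then_consume(tokens: float, rps: float, elapsed: float, max_bucket: float) -> float:
    """Refill at `rps` tokens per second, cap at `max_bucket`, then consume one token."""
    tokens = min(max_bucket, tokens + rps * elapsed)
    return tokens - 1


# Mirrors test_sync_wait_max_bucket_size:
tokens = refill_then_consume(0.0, rps=2, elapsed=100, max_bucket=500)
assert tokens == 199.0  # 0 + 2 * 100 = 200, under the cap, minus 1 consumed
tokens = refill_then_consume(tokens, rps=2, elapsed=10000, max_bucket=500)
assert tokens == 499.0  # refill overshoots, so the 500-token cap applies, minus 1 consumed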
Domain
- LangChainCore
Subdomains
- MessageInterface
Functions
- rate_limiter
- test_async_wait
- test_async_wait_max_bucket_size
- test_initial_state
- test_sync_wait
- test_sync_wait_max_bucket_size
Dependencies
- freezegun
- langchain_core.rate_limiters
- pytest
- time
Source
- libs/core/tests/unit_tests/rate_limiters/test_in_memory_rate_limiter.py
Frequently Asked Questions
What does test_in_memory_rate_limiter.py do?
test_in_memory_rate_limiter.py is a Python unit-test file in the langchain codebase. It tests InMemoryRateLimiter from langchain_core.rate_limiters, covering the limiter's initial state, synchronous and asynchronous token acquisition (acquire and aacquire), token-bucket refill over time, and the max_bucket_size cap, using freezegun to control the clock. It belongs to the LangChainCore domain, MessageInterface subdomain.
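For orientation, a minimal sketch of the class under test being used directly, with the same constructor arguments as the fixture above (the loop body is a hypothetical stand-in for whatever work is being rate limited):

from langchain_core.rate_limiters import InMemoryRateLimiter

limiter = InMemoryRateLimiter(
    requests_per_second=2, check_every_n_seconds=0.1, max_bucket_size=2
)

for i in range(5):
    limiter.acquire(blocking=True)  # blocks until a token is available
    print(f"request {i} allowed")   # stand-in for the rate-limited call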
What functions are defined in test_in_memory_rate_limiter.py?
test_in_memory_rate_limiter.py defines 6 function(s): rate_limiter, test_async_wait, test_async_wait_max_bucket_size, test_initial_state, test_sync_wait, test_sync_wait_max_bucket_size.
What does test_in_memory_rate_limiter.py depend on?
test_in_memory_rate_limiter.py imports 4 module(s): freezegun, langchain_core.rate_limiters, pytest, time.
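Of these, freezegun is what lets the tests advance time deterministically instead of sleeping: freeze_time pins time.time() to a fixed instant, and frozen_time.tick() moves it forward. A minimal sketch of that pattern in isolation, assuming a freezegun version that accepts a numeric tick delta (as the tests above do):

import time

from freezegun import freeze_time

with freeze_time("2023-01-01 00:00:00") as frozen_time:
    start = time.time()
    frozen_time.tick(100)              # advance the frozen clock by 100 seconds
    assert time.time() - start == 100  # no real waiting happened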
Where is test_in_memory_rate_limiter.py in the architecture?
test_in_memory_rate_limiter.py is located at libs/core/tests/unit_tests/rate_limiters/test_in_memory_rate_limiter.py (domain: LangChainCore, subdomain: MessageInterface, directory: libs/core/tests/unit_tests/rate_limiters).