invoke_with_cache_read_input() — langchain Function Reference
Architecture documentation for the invoke_with_cache_read_input() function in test_base_standard.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    invoke_with_cache_read_input["invoke_with_cache_read_input()"]
    TestOpenAIStandard["TestOpenAIStandard"]
    invoke_with_cache_read_input -->|defined in| TestOpenAIStandard
    _invoke["_invoke()"]
    invoke_with_cache_read_input -->|calls| _invoke
    style invoke_with_cache_read_input fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/partners/openai/tests/integration_tests/chat_models/test_base_standard.py lines 64–75
def invoke_with_cache_read_input(self, *, stream: bool = False) -> AIMessage:
    with Path.open(REPO_ROOT_DIR / "README.md") as f:
        readme = f.read()
    input_ = f"""What's langchain? Here's the langchain README:
{readme}
"""
    llm = ChatOpenAI(model="gpt-4o-mini", stream_usage=True)
    _invoke(llm, input_, stream)
    # invoke twice so first invocation is cached
    return _invoke(llm, input_, stream)
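The _invoke() helper called above is defined elsewhere in the same test module and is not shown in this listing. A minimal sketch of what such a helper could look like, assuming it simply dispatches on the stream flag and merges streamed chunks into a single message, is:

from typing import cast

from langchain_core.messages import AIMessage
from langchain_openai import ChatOpenAI


def _invoke(llm: ChatOpenAI, input_: str, stream: bool) -> AIMessage:
    # Sketch only; the real helper lives in test_base_standard.py.
    if stream:
        full = None
        # AIMessageChunk addition merges content and, with
        # stream_usage=True, the usage metadata as well.
        for chunk in llm.stream(input_):
            full = chunk if full is None else full + chunk
        return cast(AIMessage, full)
    return cast(AIMessage, llm.invoke(input_))

Invoking the same long prompt twice is the point of the caller: the first call lets the provider store the prompt prefix, and the second call should be served partly from that cache.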
Frequently Asked Questions
What does invoke_with_cache_read_input() do?
invoke_with_cache_read_input() is a test helper on the TestOpenAIStandard class in libs/partners/openai/tests/integration_tests/chat_models/test_base_standard.py. It reads the repository's README.md, embeds it in a prompt (long enough for OpenAI's automatic prompt caching to apply), and sends that prompt to a gpt-4o-mini ChatOpenAI model twice via _invoke(). The first call populates the prompt cache; the second call's AIMessage is returned so its usage metadata can be checked for cache-read input tokens.
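As an illustration, a test built on this helper could assert that the returned message reports cache-read tokens. The check_cache_read() function below is a hypothetical name, not part of the test suite; the sketch assumes LangChain's standard usage_metadata shape, in which input_token_details may carry a cache_read count:

from langchain_core.messages import AIMessage


def check_cache_read(message: AIMessage) -> None:
    # usage_metadata is populated because the model was created with
    # stream_usage=True; input_token_details is optional, so use .get().
    usage = message.usage_metadata
    assert usage is not None
    details = usage.get("input_token_details", {})
    # cache_read counts input tokens served from the provider's prompt
    # cache; it should be nonzero on the second, cached invocation.
    assert details.get("cache_read", 0) > 0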
Where is invoke_with_cache_read_input() defined?
invoke_with_cache_read_input() is defined in libs/partners/openai/tests/integration_tests/chat_models/test_base_standard.py at line 64.
What does invoke_with_cache_read_input() call?
invoke_with_cache_read_input() calls one function, _invoke(), and calls it twice with the same prompt so that the second invocation hits the prompt cache (see the sketch after the source listing above).