invoke_with_cache_read_input() — langchain Function Reference
Architecture documentation for the invoke_with_cache_read_input() function in test_responses_standard.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    invoke_with_cache_read_input["invoke_with_cache_read_input()"]
    TestOpenAIResponses["TestOpenAIResponses"]
    _invoke["_invoke()"]
    invoke_with_cache_read_input -->|defined in| TestOpenAIResponses
    invoke_with_cache_read_input -->|calls| _invoke
    style invoke_with_cache_read_input fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/partners/openai/tests/integration_tests/chat_models/test_responses_standard.py lines 35–46
def invoke_with_cache_read_input(self, *, stream: bool = False) -> AIMessage:
    with Path.open(REPO_ROOT_DIR / "README.md") as f:
        readme = f.read()

    input_ = f"""What's langchain? Here's the langchain README:

    {readme}
    """
    llm = ChatOpenAI(model="gpt-4.1-mini", use_responses_api=True)
    _invoke(llm, input_, stream)
    # invoke twice so first invocation is cached
    return _invoke(llm, input_, stream)
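The _invoke helper referenced above is defined elsewhere in the test module and is not part of this excerpt. A minimal sketch of what such a helper might look like, assuming it simply dispatches between invoke and stream and merges streamed chunks into a single AIMessage:

from typing import cast

from langchain_core.messages import AIMessage
from langchain_openai import ChatOpenAI


def _invoke(llm: ChatOpenAI, input_: str, stream: bool) -> AIMessage:
    # Hypothetical sketch of the helper, not the actual implementation.
    if stream:
        full = None
        for chunk in llm.stream(input_):
            # AIMessageChunk objects support `+`, which merges content
            # and usage metadata across streamed chunks.
            full = chunk if full is None else full + chunk
        return cast(AIMessage, full)
    return cast(AIMessage, llm.invoke(input_))

Calling such a helper twice with an identical, sufficiently long input is what allows the second call to hit OpenAI's prompt cache.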
Frequently Asked Questions
What does invoke_with_cache_read_input() do?
invoke_with_cache_read_input() is a test helper on TestOpenAIResponses in libs/partners/openai/tests/integration_tests/chat_models/test_responses_standard.py. It reads the repository's README.md, embeds it in a long prompt, and invokes a ChatOpenAI model (with the Responses API enabled) twice with the same input, so that the second invocation can read from OpenAI's prompt cache. It returns the AIMessage from the second invocation.
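For illustration, a hedged sketch of how the cache read could be observed on the returned message. usage_metadata with its input_token_details["cache_read"] entry is the standard langchain-core usage-metadata shape; the stand-in prompt below replaces the README-based input used by the test:

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4.1-mini", use_responses_api=True)

# Stand-in for the README-based input; OpenAI only caches sufficiently
# long prompts (roughly 1024+ tokens), hence the padded payload.
long_prompt = "What's langchain? " + "Here's some langchain context. " * 500

llm.invoke(long_prompt)  # first call primes the prompt cache
response = llm.invoke(long_prompt)  # second call can read from it

usage = response.usage_metadata or {}
cached = usage.get("input_token_details", {}).get("cache_read", 0)
print(f"tokens read from cache: {cached}")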
Where is invoke_with_cache_read_input() defined?
invoke_with_cache_read_input() is defined in libs/partners/openai/tests/integration_tests/chat_models/test_responses_standard.py at line 35.
What does invoke_with_cache_read_input() call?
invoke_with_cache_read_input() calls one function, _invoke(), which it invokes twice with the same input so that the second call can read from the prompt cache.