invoke_with_cache_read_input() — langchain Function Reference

Architecture documentation for the invoke_with_cache_read_input() method of TestAnthropicStandard in libs/partners/anthropic/tests/integration_tests/test_standard.py from the langchain codebase.

Entity Profile

Dependency Diagram

graph TD
  cdc99555_09b8_a41a_6a84_2593190a2245["invoke_with_cache_read_input()"]
  c3a3c277_9cf7_2326_165f_554aee9ad2a1["TestAnthropicStandard"]
  cdc99555_09b8_a41a_6a84_2593190a2245 -->|defined in| c3a3c277_9cf7_2326_165f_554aee9ad2a1
  a2665efd_1e6c_805f_4895_0066bb0f0af7["_invoke()"]
  cdc99555_09b8_a41a_6a84_2593190a2245 -->|calls| a2665efd_1e6c_805f_4895_0066bb0f0af7
  style cdc99555_09b8_a41a_6a84_2593190a2245 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/partners/anthropic/tests/integration_tests/test_standard.py lines 105–149

    def invoke_with_cache_read_input(self, *, stream: bool = False) -> AIMessage:
        llm = ChatAnthropic(
            model=MODEL,  # type: ignore[call-arg]
        )
        with Path.open(REPO_ROOT_DIR / "README.md") as f:
            readme = f.read()

        input_ = f"""What's langchain? Here's the langchain README:

        {readme}
        """

        # invoke twice so first invocation is cached
        _invoke(
            llm,
            [
                {
                    "role": "user",
                    "content": [
                        {
                            "type": "text",
                            "text": input_,
                            "cache_control": {"type": "ephemeral"},
                        },
                    ],
                },
            ],
            stream,
        )
        return _invoke(
            llm,
            [
                {
                    "role": "user",
                    "content": [
                        {
                            "type": "text",
                            "text": input_,
                            "cache_control": {"type": "ephemeral"},
                        },
                    ],
                },
            ],
            stream,
        )
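Because the second call is expected to be served from Anthropic's prompt cache, the AIMessage it returns should report cache-read input tokens. The following is purely an illustrative sketch of how a caller might check that, assuming ChatAnthropic populates AIMessage.usage_metadata with an input_token_details breakdown; it is not part of the file above, and the assertion is an assumption about what the test framework verifies.

    # Illustrative check (not part of test_standard.py as shown above):
    # the second response should report cache-read tokens in its usage metadata.
    result = TestAnthropicStandard().invoke_with_cache_read_input(stream=False)

    usage = result.usage_metadata or {}
    details = usage.get("input_token_details", {})
    # langchain-core's UsageMetadata exposes cached prompt tokens under
    # input_token_details["cache_read"] when the provider reports them.
    assert details.get("cache_read", 0) > 0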

Calls

invoke_with_cache_read_input() makes two sequential calls to the test module's _invoke() helper, passing the same cache-marked message both times: the first call populates Anthropic's prompt cache, and the second is expected to read from it.
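_invoke() itself is defined elsewhere in test_standard.py and is not reproduced on this page. Purely as a sketch, assuming the helper does nothing more than dispatch on the stream flag between a plain invoke() and an aggregated stream(), it might look roughly like this; every name below other than those appearing at the call site above is an assumption.

    from langchain_anthropic import ChatAnthropic
    from langchain_core.messages import AIMessage, AIMessageChunk

    def _invoke(llm: ChatAnthropic, input_: list, stream: bool) -> AIMessage:
        # Illustrative sketch only; the real helper lives in test_standard.py.
        if stream:
            # Aggregate streamed chunks into a single message so both code
            # paths return a comparable AIMessage with usage metadata.
            full = None
            for chunk in llm.stream(input_):
                full = chunk if full is None else full + chunk
            assert isinstance(full, AIMessageChunk)
            return full
        result = llm.invoke(input_)
        assert isinstance(result, AIMessage)
        return result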

Frequently Asked Questions

What does invoke_with_cache_read_input() do?
invoke_with_cache_read_input() is a test method on TestAnthropicStandard in the langchain codebase, defined in libs/partners/anthropic/tests/integration_tests/test_standard.py. It reads the repository README, wraps it in a user message whose text block carries "cache_control": {"type": "ephemeral"}, invokes ChatAnthropic twice with that same message via _invoke(), and returns the second AIMessage, which is expected to be served from Anthropic's prompt cache.
Where is invoke_with_cache_read_input() defined?
invoke_with_cache_read_input() is defined as a method of the TestAnthropicStandard class in libs/partners/anthropic/tests/integration_tests/test_standard.py, starting at line 105.
What does invoke_with_cache_read_input() call?
invoke_with_cache_read_input() calls one function, _invoke(), which it invokes twice with the same cache-marked input.
