astream() — langchain Function Reference
Architecture documentation for the astream() function in llms.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    astream["astream()"]
    BaseLLM["BaseLLM"]
    astream -->|defined in| BaseLLM
    astream -->|calls| ainvoke["ainvoke()"]
    astream -->|calls| convert_input["_convert_input()"]
    astream -->|calls| dict_fn["dict()"]
    astream -->|calls| get_ls_params["_get_ls_params()"]
    astream -->|calls| priv_astream["_astream()"]
    style astream fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/core/langchain_core/language_models/llms.py lines 575–643
async def astream(
    self,
    input: LanguageModelInput,
    config: RunnableConfig | None = None,
    *,
    stop: list[str] | None = None,
    **kwargs: Any,
) -> AsyncIterator[str]:
    if (
        type(self)._astream is BaseLLM._astream  # noqa: SLF001
        and type(self)._stream is BaseLLM._stream  # noqa: SLF001
    ):
        yield await self.ainvoke(input, config=config, stop=stop, **kwargs)
        return

    prompt = self._convert_input(input).to_string()
    config = ensure_config(config)
    params = self.dict()
    params["stop"] = stop
    params = {**params, **kwargs}
    options = {"stop": stop}
    inheritable_metadata = {
        **(config.get("metadata") or {}),
        **self._get_ls_params(stop=stop, **kwargs),
    }
    callback_manager = AsyncCallbackManager.configure(
        config.get("callbacks"),
        self.callbacks,
        self.verbose,
        config.get("tags"),
        self.tags,
        inheritable_metadata,
        self.metadata,
    )
    (run_manager,) = await callback_manager.on_llm_start(
        self._serialized,
        [prompt],
        invocation_params=params,
        options=options,
        name=config.get("run_name"),
        run_id=config.pop("run_id", None),
        batch_size=1,
    )
    generation: GenerationChunk | None = None
    try:
        async for chunk in self._astream(
            prompt,
            stop=stop,
            run_manager=run_manager,
            **kwargs,
        ):
            yield chunk.text
            if generation is None:
                generation = chunk
            else:
                generation += chunk
    except BaseException as e:
        await run_manager.on_llm_error(
            e,
            response=LLMResult(generations=[[generation]] if generation else []),
        )
        raise
    if generation is None:
        err = ValueError("No generation chunks were returned")
        await run_manager.on_llm_error(err, response=LLMResult(generations=[]))
        raise err
    await run_manager.on_llm_end(LLMResult(generations=[[generation]]))
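The guard at the top of astream() checks whether the concrete class overrides _astream or _stream; if neither is overridden, there is no native streaming to relay, so the method falls back to a single ainvoke() call and yields the whole result as one chunk. The dispatch pattern can be sketched standalone with stdlib asyncio only (MiniLLM, StreamingMiniLLM, and collect are illustrative names, not part of langchain):

```python
import asyncio
from typing import AsyncIterator


class MiniLLM:
    """Toy stand-in for BaseLLM, illustrating astream()'s two code paths."""

    async def _astream(self, prompt: str) -> AsyncIterator[str]:
        raise NotImplementedError  # overridden by streaming subclasses

    async def ainvoke(self, prompt: str) -> str:
        return f"echo: {prompt}"

    async def astream(self, prompt: str) -> AsyncIterator[str]:
        # Fallback: if the subclass never overrode _astream, yield the
        # full ainvoke() result as a single chunk, as the real method does.
        if type(self)._astream is MiniLLM._astream:
            yield await self.ainvoke(prompt)
            return
        # Streaming path: relay each chunk as it arrives.
        async for chunk in self._astream(prompt):
            yield chunk


class StreamingMiniLLM(MiniLLM):
    async def _astream(self, prompt: str) -> AsyncIterator[str]:
        for token in prompt.split():
            yield token + " "


async def collect(llm: MiniLLM, prompt: str) -> list[str]:
    return [chunk async for chunk in llm.astream(prompt)]


plain = asyncio.run(collect(MiniLLM(), "hello world"))
streamed = asyncio.run(collect(StreamingMiniLLM(), "hello world"))
print(plain)     # ['echo: hello world']
print(streamed)  # ['hello ', 'world ']
```

The `type(self)._astream is MiniLLM._astream` identity check mirrors the real guard: it is true only when the attribute still resolves to the base-class function, i.e. no subclass in the MRO has replaced it.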
Frequently Asked Questions
What does astream() do?
astream() is an asynchronous generator method on BaseLLM, defined in libs/core/langchain_core/language_models/llms.py, that streams an LLM's output as text chunks. When the concrete model implements neither _astream nor _stream, it falls back to a single ainvoke() call and yields the full response at once; otherwise it relays chunks from _astream, accumulates them into one GenerationChunk, and reports the result through the callback manager.
Where is astream() defined?
astream() is defined in libs/core/langchain_core/language_models/llms.py at line 575.
What does astream() call?
astream() calls 5 function(s): _astream, _convert_input, _get_ls_params, ainvoke, dict.
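Of these, _astream does the streaming itself; the `generation += chunk` line in the loop works because GenerationChunk overloads `+` to concatenate chunks. A stdlib-only sketch of that accumulation pattern (the Chunk class below is illustrative, not langchain's actual GenerationChunk):

```python
from dataclasses import dataclass, field


@dataclass
class Chunk:
    """Toy model of a streamed chunk: '+' concatenates text and merges metadata."""
    text: str
    info: dict = field(default_factory=dict)

    def __add__(self, other: "Chunk") -> "Chunk":
        return Chunk(text=self.text + other.text, info={**self.info, **other.info})


# Accumulate streamed chunks the same way astream()'s loop does.
generation = None
for chunk in [Chunk("Hel"), Chunk("lo", {"finish_reason": "stop"})]:
    generation = chunk if generation is None else generation + chunk

print(generation.text)  # Hello
print(generation.info)  # {'finish_reason': 'stop'}
```

Accumulating one merged chunk lets astream() yield text to the caller incrementally while still handing the callback manager a single complete LLMResult at the end.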