_load_llm_chain() — langchain Function Reference
Architecture documentation for the _load_llm_chain() function in loading.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    1503a8e4_6d4d_0771_d0e4_1f69d8350ed1["_load_llm_chain()"]
    61dd5a0b_3bf7_b973_6dac_edfd465b21fb["loading.py"]
    1503a8e4_6d4d_0771_d0e4_1f69d8350ed1 -->|defined in| 61dd5a0b_3bf7_b973_6dac_edfd465b21fb
    50b2ba2d_27b4_d3bb_0ff2_3f113f11320a["load_llm()"]
    1503a8e4_6d4d_0771_d0e4_1f69d8350ed1 -->|calls| 50b2ba2d_27b4_d3bb_0ff2_3f113f11320a
    style 1503a8e4_6d4d_0771_d0e4_1f69d8350ed1 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/langchain/langchain_classic/chains/loading.py lines 72–93
def _load_llm_chain(config: dict, **kwargs: Any) -> LLMChain:
    """Load LLM chain from config dict."""
    if "llm" in config:
        llm_config = config.pop("llm")
        llm = load_llm_from_config(llm_config, **kwargs)
    elif "llm_path" in config:
        llm = load_llm(config.pop("llm_path"), **kwargs)
    else:
        msg = "One of `llm` or `llm_path` must be present."
        raise ValueError(msg)
    if "prompt" in config:
        prompt_config = config.pop("prompt")
        prompt = load_prompt_from_config(prompt_config)
    elif "prompt_path" in config:
        prompt = load_prompt(config.pop("prompt_path"))
    else:
        msg = "One of `prompt` or `prompt_path` must be present."
        raise ValueError(msg)
    _load_output_parser(config)
    return LLMChain(llm=llm, prompt=prompt, **config)
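The loader consumes the llm/prompt keys and forwards any remaining config keys to the LLMChain constructor. Below is a minimal usage sketch with an inline config, assuming the module is importable as langchain_classic.chains.loading and that the standard serialization registry names ("openai" for the LLM, "prompt" for PromptTemplate) apply; all field values are illustrative, not taken from the source above.

from langchain_classic.chains.loading import _load_llm_chain  # import path assumed

# Inline config: "llm" and "prompt" are resolved via load_llm_from_config()
# and load_prompt_from_config(); leftover keys (here "verbose") are passed
# straight through to the LLMChain constructor.
config = {
    "llm": {"_type": "openai", "temperature": 0.0},  # "_type" registry name assumed
    "prompt": {
        "_type": "prompt",
        "input_variables": ["topic"],
        "template": "Write one sentence about {topic}.",
    },
    "verbose": True,
}
chain = _load_llm_chain(config)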
Frequently Asked Questions
What does _load_llm_chain() do?
_load_llm_chain() deserializes an LLMChain from a config dict. It resolves the LLM from either an inline llm config (via load_llm_from_config()) or an llm_path (via load_llm()), resolves the prompt from either an inline prompt config or a prompt_path, applies _load_output_parser() to the config, and constructs an LLMChain from the results plus any remaining config keys. It is defined in libs/langchain/langchain_classic/chains/loading.py.
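The llm_path/prompt_path branch loads the components from previously serialized files instead of inline dicts. A hedged sketch of that branch, assuming the same import path as in the source above; the file names are illustrative.

from langchain_classic.chains.loading import _load_llm_chain  # import path assumed

# "llm_path" is resolved via load_llm(), "prompt_path" via load_prompt();
# both expect files produced by the components' save() methods.
config = {
    "llm_path": "llm.yaml",        # illustrative file name
    "prompt_path": "prompt.yaml",  # illustrative file name
}
chain = _load_llm_chain(config)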
Where is _load_llm_chain() defined?
_load_llm_chain() is defined in libs/langchain/langchain_classic/chains/loading.py at line 72.
What does _load_llm_chain() call?
The dependency graph records one outgoing call: load_llm(). The source above also invokes load_llm_from_config(), load_prompt_from_config(), load_prompt(), _load_output_parser(), and the LLMChain constructor.
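When neither key of a required pair is present, the corresponding else branch raises a ValueError. A minimal sketch of that failure mode, assuming the same import path as above:

from langchain_classic.chains.loading import _load_llm_chain  # import path assumed

# Omitting both "llm" and "llm_path" hits the first else branch.
try:
    _load_llm_chain(
        {"prompt": {"_type": "prompt", "input_variables": [], "template": "Hi"}}
    )
except ValueError as err:
    print(err)  # One of `llm` or `llm_path` must be present.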