from_llms() — langchain Function Reference
Architecture documentation for the from_llms() function in model_laboratory.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    962bae77_5dd8_f194_e1d3_68d9fbeb473e["from_llms()"]
    ddc55513_8bea_9f0f_9a07_33c17a0d376f["ModelLaboratory"]
    962bae77_5dd8_f194_e1d3_68d9fbeb473e -->|defined in| ddc55513_8bea_9f0f_9a07_33c17a0d376f
    faa83a0b_d5b8_28b7_b07c_725dc6ca4594["__init__()"]
    faa83a0b_d5b8_28b7_b07c_725dc6ca4594 -->|calls| 962bae77_5dd8_f194_e1d3_68d9fbeb473e
    style 962bae77_5dd8_f194_e1d3_68d9fbeb473e fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/langchain/langchain_classic/model_laboratory.py lines 62–81
def from_llms(
    cls,
    llms: list[BaseLLM],
    prompt: PromptTemplate | None = None,
) -> ModelLaboratory:
    """Initialize the ModelLaboratory with LLMs and an optional prompt.

    Args:
        llms: A list of LLMs to experiment with.
        prompt: An optional prompt to use with the LLMs.
            If provided, the prompt must contain exactly one input variable.

    Returns:
        An instance of `ModelLaboratory` initialized with LLMs.
    """
    if prompt is None:
        prompt = PromptTemplate(input_variables=["_input"], template="{_input}")
    chains = [LLMChain(llm=llm, prompt=prompt) for llm in llms]
    names = [str(llm) for llm in llms]
    return cls(chains, names=names)
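For orientation, a minimal usage sketch follows. The import paths are inferred from the file path above, and FakeListLLM and the compare() call come from the wider langchain API rather than from this excerpt, so treat them as assumptions.

# Minimal usage sketch; import paths and the FakeListLLM/compare() usage are
# assumptions based on the broader langchain API, not part of the excerpt above.
from langchain_classic.model_laboratory import ModelLaboratory
from langchain_community.llms.fake import FakeListLLM

llms = [
    FakeListLLM(responses=["answer from model A"]),
    FakeListLLM(responses=["answer from model B"]),
]

# No prompt is passed, so from_llms() falls back to the pass-through
# "{_input}" template and wraps each LLM in its own LLMChain.
lab = ModelLaboratory.from_llms(llms)
lab.compare("What color is a flamingo?")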
Frequently Asked Questions
What does from_llms() do?
from_llms() is a class method on ModelLaboratory, defined in libs/langchain/langchain_classic/model_laboratory.py. It wraps each supplied LLM in an LLMChain using the given prompt (or a default pass-through "{_input}" template when no prompt is provided) and returns a ModelLaboratory built from those chains, with str(llm) used as each model's display name. A sketch of the custom-prompt path is shown below.
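In the sketch, llm_a and llm_b are placeholders for any BaseLLM instances; they are not names from the source.

# The prompt must expose exactly one input variable, which from_llms()
# shares across every LLMChain it builds. llm_a and llm_b are placeholders
# for any BaseLLM instances.
from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["question"],
    template="Answer in one sentence: {question}",
)
lab = ModelLaboratory.from_llms([llm_a, llm_b], prompt=prompt)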
Where is from_llms() defined?
from_llms() is defined in libs/langchain/langchain_classic/model_laboratory.py at line 62.
What calls from_llms()?
from_llms() is called by one function: __init__().