model_laboratory.py — langchain Source File
Architecture documentation for model_laboratory.py, a Python file in the langchain codebase. 6 imports, 0 dependents.
Dependency Diagram
graph LR
    2bfe14ae_d295_908c_8ba4_1f61f3c2f197["model_laboratory.py"]
    cfe2bde5_180e_e3b0_df2b_55b3ebaca8e7["collections.abc"]
    2bfe14ae_d295_908c_8ba4_1f61f3c2f197 --> cfe2bde5_180e_e3b0_df2b_55b3ebaca8e7
    89934eed_a823_2184_acf2_039f48eed5f9["langchain_core.language_models.llms"]
    2bfe14ae_d295_908c_8ba4_1f61f3c2f197 --> 89934eed_a823_2184_acf2_039f48eed5f9
    c17bcf07_a2ef_b992_448f_5088d46a1e79["langchain_core.prompts.prompt"]
    2bfe14ae_d295_908c_8ba4_1f61f3c2f197 --> c17bcf07_a2ef_b992_448f_5088d46a1e79
    75b56223_6cf5_b347_8586_de20156157a1["langchain_core.utils.input"]
    2bfe14ae_d295_908c_8ba4_1f61f3c2f197 --> 75b56223_6cf5_b347_8586_de20156157a1
    01158a5b_b299_f45d_92e9_2a7433a1a91a["langchain_classic.chains.base"]
    2bfe14ae_d295_908c_8ba4_1f61f3c2f197 --> 01158a5b_b299_f45d_92e9_2a7433a1a91a
    31974615_0d58_bd26_13f1_776e0a9d1413["langchain_classic.chains.llm"]
    2bfe14ae_d295_908c_8ba4_1f61f3c2f197 --> 31974615_0d58_bd26_13f1_776e0a9d1413
    style 2bfe14ae_d295_908c_8ba4_1f61f3c2f197 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
"""Experiment with different models."""
from __future__ import annotations
from collections.abc import Sequence
from langchain_core.language_models.llms import BaseLLM
from langchain_core.prompts.prompt import PromptTemplate
from langchain_core.utils.input import get_color_mapping, print_text
from langchain_classic.chains.base import Chain
from langchain_classic.chains.llm import LLMChain
class ModelLaboratory:
"""A utility to experiment with and compare the performance of different models."""
def __init__(self, chains: Sequence[Chain], names: list[str] | None = None):
"""Initialize the ModelLaboratory with chains to experiment with.
Args:
chains: A sequence of chains to experiment with.
Each chain must have exactly one input and one output variable.
names: Optional list of names corresponding to each chain.
If provided, its length must match the number of chains.
Raises:
ValueError: If any chain is not an instance of `Chain`.
ValueError: If a chain does not have exactly one input variable.
ValueError: If a chain does not have exactly one output variable.
ValueError: If the length of `names` does not match the number of chains.
"""
for chain in chains:
if not isinstance(chain, Chain):
msg = ( # type: ignore[unreachable]
"ModelLaboratory should now be initialized with Chains. "
"If you want to initialize with LLMs, use the `from_llms` method "
"instead (`ModelLaboratory.from_llms(...)`)"
)
raise ValueError(msg) # noqa: TRY004
if len(chain.input_keys) != 1:
msg = (
"Currently only support chains with one input variable, "
f"got {chain.input_keys}"
)
raise ValueError(msg)
if len(chain.output_keys) != 1:
msg = (
"Currently only support chains with one output variable, "
f"got {chain.output_keys}"
)
if names is not None and len(names) != len(chains):
msg = "Length of chains does not match length of names."
raise ValueError(msg)
self.chains = chains
chain_range = [str(i) for i in range(len(self.chains))]
self.chain_colors = get_color_mapping(chain_range)
self.names = names
@classmethod
def from_llms(
cls,
llms: list[BaseLLM],
prompt: PromptTemplate | None = None,
) -> ModelLaboratory:
"""Initialize the ModelLaboratory with LLMs and an optional prompt.
Args:
llms: A list of LLMs to experiment with.
prompt: An optional prompt to use with the LLMs.
If provided, the prompt must contain exactly one input variable.
Returns:
An instance of `ModelLaboratory` initialized with LLMs.
"""
if prompt is None:
prompt = PromptTemplate(input_variables=["_input"], template="{_input}")
chains = [LLMChain(llm=llm, prompt=prompt) for llm in llms]
names = [str(llm) for llm in llms]
return cls(chains, names=names)
def compare(self, text: str) -> None:
"""Compare model outputs on an input text.
If a prompt was provided with starting the laboratory, then this text will be
fed into the prompt. If no prompt was provided, then the input text is the
entire prompt.
Args:
text: input text to run all models on.
"""
print(f"\033[1mInput:\033[0m\n{text}\n") # noqa: T201
for i, chain in enumerate(self.chains):
name = self.names[i] if self.names is not None else str(chain)
print_text(name, end="\n")
output = chain.run(text)
print_text(output, color=self.chain_colors[str(i)], end="\n\n")
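The `compare` method boils down to a simple pattern: run one input through every registered model and print each output under a label. A self-contained sketch of that pattern, with plain callables standing in for langchain chains (the `compare` helper and the `models` mapping here are illustrative, not part of the file above):

```python
from typing import Callable


def compare(models: dict[str, Callable[[str], str]], text: str) -> dict[str, str]:
    """Run `text` through every model and collect outputs keyed by name."""
    results: dict[str, str] = {}
    for name, model in models.items():
        results[name] = model(text)
        # Print each output under its model's name, mirroring
        # ModelLaboratory.compare's labeled, per-chain output.
        print(f"{name}: {results[name]}")
    return results


# Plain callables stand in for one-input/one-output chains.
models = {
    "upper-model": str.upper,
    "reverse-model": lambda s: s[::-1],
}

outputs = compare(models, "hello")
```

Swapping real `Chain` objects for the callables recovers the actual class: `ModelLaboratory` adds input/output-key validation up front and colorized printing, but the comparison loop itself is the same.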
Dependencies
- collections.abc
- langchain_classic.chains.base
- langchain_classic.chains.llm
- langchain_core.language_models.llms
- langchain_core.prompts.prompt
- langchain_core.utils.input
Frequently Asked Questions
What does model_laboratory.py do?
model_laboratory.py defines the `ModelLaboratory` class, a utility for running the same input through multiple chains or LLMs and comparing their outputs side by side. It is a Python source file in the langchain codebase, in the CoreAbstractions domain, Serialization subdomain.
What does model_laboratory.py depend on?
model_laboratory.py imports 6 module(s): collections.abc, langchain_classic.chains.base, langchain_classic.chains.llm, langchain_core.language_models.llms, langchain_core.prompts.prompt, langchain_core.utils.input.
Where is model_laboratory.py in the architecture?
model_laboratory.py is located at libs/langchain/langchain_classic/model_laboratory.py (domain: CoreAbstractions, subdomain: Serialization, directory: libs/langchain/langchain_classic).