from_model_id() — langchain Function Reference
Architecture documentation for the from_model_id() function in huggingface_pipeline.py from the langchain codebase.
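For orientation, a minimal usage sketch (the model and generation settings here are illustrative choices, not defaults of the API):

from langchain_huggingface import HuggingFacePipeline

# Load a small causal LM locally and wrap it as a LangChain LLM.
# "gpt2" and max_new_tokens are illustrative, not defaults of from_model_id().
llm = HuggingFacePipeline.from_model_id(
    model_id="gpt2",
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 64},
)

print(llm.invoke("LangChain is"))

The returned HuggingFacePipeline supports the standard LangChain LLM interface (invoke, stream, batch), with inference running locally on the loaded model.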
Dependency Diagram
graph TD
    834807da_062f_4f2f_6975_506464144209["from_model_id()"]
    54333c82_6644_5574_2c41_4cc818ce3595["HuggingFacePipeline"]
    834807da_062f_4f2f_6975_506464144209 -->|defined in| 54333c82_6644_5574_2c41_4cc818ce3595
    style 834807da_062f_4f2f_6975_506464144209 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/partners/huggingface/langchain_huggingface/llms/huggingface_pipeline.py, lines 106–301 (the listing below is an excerpt; it cuts off inside the OpenVINO branch)
def from_model_id(
    cls,
    model_id: str,
    task: str,
    backend: str = "default",
    device: int | None = None,
    device_map: str | None = None,
    model_kwargs: dict | None = None,
    pipeline_kwargs: dict | None = None,
    batch_size: int = DEFAULT_BATCH_SIZE,
    **kwargs: Any,
) -> HuggingFacePipeline:
    """Construct the pipeline object from model_id and task."""
    try:
        from transformers import (  # type: ignore[import]
            AutoModelForCausalLM,
            AutoModelForSeq2SeqLM,
            AutoTokenizer,
        )
        from transformers import pipeline as hf_pipeline  # type: ignore[import]
    except ImportError as e:
        msg = (
            "Could not import transformers python package. "
            "Please install it with `pip install transformers`."
        )
        raise ValueError(msg) from e

    _model_kwargs = model_kwargs.copy() if model_kwargs else {}

    # `device` and `device_map` are mutually exclusive ways to place the model.
    if device_map is not None:
        if device is not None:
            msg = (
                "Both `device` and `device_map` are specified. "
                "`device` will override `device_map`. "
                "You will most likely encounter unexpected behavior. "
                "Please remove `device` and keep `device_map`."
            )
            raise ValueError(msg)
        if "device_map" in _model_kwargs:
            msg = "`device_map` is already specified in `model_kwargs`."
            raise ValueError(msg)
        _model_kwargs["device_map"] = device_map

    tokenizer = AutoTokenizer.from_pretrained(model_id, **_model_kwargs)

    # Non-default backends (OpenVINO / IPEX) are loaded via optimum-intel.
    if backend in {"openvino", "ipex"}:
        if task not in VALID_TASKS:
            msg = (
                f"Got invalid task {task}, "
                f"currently only {VALID_TASKS} are supported"
            )
            raise ValueError(msg)

        err_msg = f"Backend: {backend} {IMPORT_ERROR.format(f'optimum[{backend}]')}"
        if not is_optimum_intel_available():
            raise ImportError(err_msg)

        # TODO: upgrade _MIN_OPTIMUM_VERSION to 1.22 after release
        min_optimum_version = (
            "1.22"
            if backend == "ipex" and task != "text-generation"
            else _MIN_OPTIMUM_VERSION
        )
        if is_optimum_intel_version("<", min_optimum_version):
            msg = (
                f"Backend: {backend} requires optimum-intel>="
                f"{min_optimum_version}. You can install it with pip: "
                "`pip install --upgrade --upgrade-strategy eager "
                f"`optimum[{backend}]`."
            )
            raise ImportError(msg)

        if backend == "openvino":
            if not is_openvino_available():
                raise ImportError(err_msg)

            from optimum.intel import (  # type: ignore[import]
                OVModelForCausalLM,
                OVModelForSeq2SeqLM,
            )
            # ... excerpt ends here; the full method continues to line 301.
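Note that the `device`/`device_map` guard runs before the tokenizer or model is fetched, so conflicting placement arguments fail fast. A sketch of that behavior (the model id is illustrative):

from langchain_huggingface import HuggingFacePipeline

# Passing both `device` and `device_map` trips the guard shown above,
# raising ValueError before anything is downloaded.
try:
    HuggingFacePipeline.from_model_id(
        model_id="gpt2",        # illustrative model
        task="text-generation",
        device=0,               # explicit device index
        device_map="auto",      # accelerate-style automatic placement
    )
except ValueError as e:
    print(e)  # "Both `device` and `device_map` are specified. ..."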
Frequently Asked Questions
What does from_model_id() do?
from_model_id() is a classmethod on HuggingFacePipeline that loads a Hugging Face tokenizer and model for the given model_id and task and wraps them in a HuggingFacePipeline LLM. It is defined in libs/partners/huggingface/langchain_huggingface/llms/huggingface_pipeline.py.
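As the source above shows, passing a non-default backend routes model loading through optimum-intel. A minimal sketch of the OpenVINO path, assuming `optimum[openvino]` is installed (the model choice is illustrative):

from langchain_huggingface import HuggingFacePipeline

# backend="openvino" loads the model through optimum-intel's OVModelFor*
# classes instead of plain transformers; it raises ImportError if
# optimum-intel is missing or too old.
ov_llm = HuggingFacePipeline.from_model_id(
    model_id="gpt2",
    task="text-generation",
    backend="openvino",
)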
Where is from_model_id() defined?
from_model_id() is defined in libs/partners/huggingface/langchain_huggingface/llms/huggingface_pipeline.py at line 106.