_resolve_model_id() — langchain Function Reference
Architecture documentation for the _resolve_model_id() function in huggingface.py from the langchain codebase.
Dependency Diagram
graph TD
    resolve_model_id["_resolve_model_id()"]
    ChatHuggingFace["ChatHuggingFace"]
    resolve_model_id -->|defined in| ChatHuggingFace
    init["__init__()"]
    init -->|calls| resolve_model_id
    is_hub["_is_huggingface_hub()"]
    resolve_model_id -->|calls| is_hub
    is_textgen["_is_huggingface_textgen_inference()"]
    resolve_model_id -->|calls| is_textgen
    is_pipeline["_is_huggingface_pipeline()"]
    resolve_model_id -->|calls| is_pipeline
    is_endpoint["_is_huggingface_endpoint()"]
    resolve_model_id -->|calls| is_endpoint
Source Code
libs/partners/huggingface/langchain_huggingface/chat_models/huggingface.py lines 994–1030
def _resolve_model_id(self) -> None:
    """Resolve the model_id from the LLM's inference_server_url."""
    from huggingface_hub import list_inference_endpoints  # type: ignore[import]

    if _is_huggingface_hub(self.llm) or (
        hasattr(self.llm, "repo_id") and self.llm.repo_id
    ):
        self.model_id = self.llm.repo_id
        return
    if _is_huggingface_textgen_inference(self.llm):
        endpoint_url: str | None = self.llm.inference_server_url
    elif _is_huggingface_pipeline(self.llm):
        from transformers import AutoTokenizer  # type: ignore[import]

        self.model_id = self.model_id or self.llm.model_id
        self.tokenizer = (
            AutoTokenizer.from_pretrained(self.model_id)
            if self.tokenizer is None
            else self.tokenizer
        )
        return
    elif _is_huggingface_endpoint(self.llm):
        self.model_id = self.llm.repo_id or self.llm.model
        return
    else:
        endpoint_url = self.llm.endpoint_url
    available_endpoints = list_inference_endpoints("*")
    for endpoint in available_endpoints:
        if endpoint.url == endpoint_url:
            self.model_id = endpoint.repository
    if not self.model_id:
        msg = (
            "Failed to resolve model_id:"
            f"Could not find model id for inference server: {endpoint_url}"
            "Make sure that your Hugging Face token has access to the endpoint."
        )
        raise ValueError(msg)
Frequently Asked Questions
What does _resolve_model_id() do?
_resolve_model_id() is a method of ChatHuggingFace, defined in libs/partners/huggingface/langchain_huggingface/chat_models/huggingface.py. It inspects which LLM wrapper it was given (Hugging Face Hub, text-generation-inference, local pipeline, or inference endpoint), sets self.model_id accordingly, and for server-backed LLMs falls back to matching the endpoint URL against the account's inference endpoints. If no model id can be resolved, it raises ValueError.
Where is _resolve_model_id() defined?
_resolve_model_id() is defined in libs/partners/huggingface/langchain_huggingface/chat_models/huggingface.py at line 994.
What does _resolve_model_id() call?
_resolve_model_id() calls 4 function(s): _is_huggingface_endpoint, _is_huggingface_hub, _is_huggingface_pipeline, _is_huggingface_textgen_inference.
What calls _resolve_model_id()?
_resolve_model_id() is called by 1 function(s): __init__.