_generate() — langchain Function Reference
Architecture documentation for the _generate() method of the ChatHuggingFace class, defined in huggingface.py in the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    0e798408_eb45_2a4b_1ddb_579fdb006f07["_generate()"]
    8cf0d6c0_abf8_3ee2_fd00_8bfc8c02058a["ChatHuggingFace"]
    0e798408_eb45_2a4b_1ddb_579fdb006f07 -->|defined in| 8cf0d6c0_abf8_3ee2_fd00_8bfc8c02058a
    4c7378b7_670a_b9c9_e573_d0052d7e8308["_create_message_dicts()"]
    0e798408_eb45_2a4b_1ddb_579fdb006f07 -->|calls| 4c7378b7_670a_b9c9_e573_d0052d7e8308
    65387b9f_2b3b_0088_ea1b_1b80c0d70ec0["_create_chat_result()"]
    0e798408_eb45_2a4b_1ddb_579fdb006f07 -->|calls| 65387b9f_2b3b_0088_ea1b_1b80c0d70ec0
    ced9b52f_7bf4_4dc3_bc59_a48a9563d9bc["_stream()"]
    0e798408_eb45_2a4b_1ddb_579fdb006f07 -->|calls| ced9b52f_7bf4_4dc3_bc59_a48a9563d9bc
    e77e373b_eb20_e566_3771_32082ca0dcc2["_to_chat_prompt()"]
    0e798408_eb45_2a4b_1ddb_579fdb006f07 -->|calls| e77e373b_eb20_e566_3771_32082ca0dcc2
    de46fdd9_0f79_97fa_7be9_db461cf70b2c["_to_chat_result()"]
    0e798408_eb45_2a4b_1ddb_579fdb006f07 -->|calls| de46fdd9_0f79_97fa_7be9_db461cf70b2c
    e8659305_1c54_8f00_e4ea_e2e11763c89f["_is_huggingface_textgen_inference()"]
    0e798408_eb45_2a4b_1ddb_579fdb006f07 -->|calls| e8659305_1c54_8f00_e4ea_e2e11763c89f
    2b662de8_c3da_0d48_1bc1_b88b1d6c6022["_is_huggingface_endpoint()"]
    0e798408_eb45_2a4b_1ddb_579fdb006f07 -->|calls| 2b662de8_c3da_0d48_1bc1_b88b1d6c6022
    style 0e798408_eb45_2a4b_1ddb_579fdb006f07 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/partners/huggingface/langchain_huggingface/chat_models/huggingface.py lines 723–762
def _generate(
    self,
    messages: list[BaseMessage],
    stop: list[str] | None = None,
    run_manager: CallbackManagerForLLMRun | None = None,
    stream: bool | None = None,  # noqa: FBT001
    **kwargs: Any,
) -> ChatResult:
    should_stream = stream if stream is not None else self.streaming
    if _is_huggingface_textgen_inference(self.llm):
        message_dicts, params = self._create_message_dicts(messages, stop)
        answer = self.llm.client.chat(messages=message_dicts, **kwargs)
        return self._create_chat_result(answer)
    if _is_huggingface_endpoint(self.llm):
        if should_stream:
            stream_iter = self._stream(
                messages, stop=stop, run_manager=run_manager, **kwargs
            )
            return generate_from_stream(stream_iter)
        message_dicts, params = self._create_message_dicts(messages, stop)
        params = {
            "stop": stop,
            **params,
            **({"stream": stream} if stream is not None else {}),
            **kwargs,
        }
        answer = self.llm.client.chat_completion(messages=message_dicts, **params)
        return self._create_chat_result(answer)
    llm_input = self._to_chat_prompt(messages)
    if should_stream:
        stream_iter = self.llm._stream(
            llm_input, stop=stop, run_manager=run_manager, **kwargs
        )
        return generate_from_stream(stream_iter)
    llm_result = self.llm._generate(
        prompts=[llm_input], stop=stop, run_manager=run_manager, **kwargs
    )
    return self._to_chat_result(llm_result)
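A subtlety in the _is_huggingface_endpoint() branch is the order of the dictionary merge that builds params: later ** expansions override earlier keys, so the values returned by _create_message_dicts() override the bare "stop" entry, an explicit stream flag is only added when the caller passed one, and per-call kwargs win over everything. A standalone sketch of the same merge semantics (all values below are hypothetical):

# Later ** expansions override earlier keys, so caller kwargs take
# the highest precedence. All values here are hypothetical.
base = {"stop": None}                          # the literal "stop": stop entry
params = {"stop": ["###"], "max_tokens": 64}   # as returned by _create_message_dicts()
stream_flag = {"stream": False}                # only merged when stream is not None
kwargs = {"max_tokens": 128}                   # per-call overrides

merged = {**base, **params, **stream_flag, **kwargs}
print(merged)  # {'stop': ['###'], 'max_tokens': 128, 'stream': False}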
Frequently Asked Questions
What does _generate() do?
_generate() is the synchronous generation method of ChatHuggingFace, defined in libs/partners/huggingface/langchain_huggingface/chat_models/huggingface.py. It dispatches a chat request to one of three backends: if the wrapped llm is a text-generation-inference client (checked via _is_huggingface_textgen_inference()), it calls llm.client.chat(); if it is a Hugging Face endpoint (checked via _is_huggingface_endpoint()), it calls llm.client.chat_completion() with merged parameters; otherwise it renders the messages into a single prompt with _to_chat_prompt() and delegates to the underlying llm._generate(). When streaming is enabled, the endpoint and fallback paths instead consume _stream() (or llm._stream()) and fold the chunks into a ChatResult via generate_from_stream().
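_generate() is not called directly; it runs under the hood when the chat model's public API is used. A minimal usage sketch, assuming a reachable Hugging Face inference backend and a valid HF token in the environment (the repo_id below is a placeholder):

from langchain_core.messages import HumanMessage
from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint

# Placeholder model; any text-generation repo served by the endpoint works.
llm = HuggingFaceEndpoint(
    repo_id="HuggingFaceH4/zephyr-7b-beta",
    task="text-generation",
)
chat = ChatHuggingFace(llm=llm)

# invoke() routes through BaseChatModel.generate(), which calls _generate().
# Because llm is a HuggingFaceEndpoint, the chat_completion() branch is taken.
result = chat.invoke([HumanMessage(content="Say hello in one word.")])
print(result.content)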
Where is _generate() defined?
_generate() is defined in libs/partners/huggingface/langchain_huggingface/chat_models/huggingface.py at line 723.
What does _generate() call?
_generate() calls seven functions: _create_chat_result, _create_message_dicts, _is_huggingface_endpoint, _is_huggingface_textgen_inference, _stream, _to_chat_prompt, and _to_chat_result.
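When should_stream is true, _generate() hands the iterator produced by _stream() to generate_from_stream(), which concatenates the chunks into a single ChatResult. A self-contained sketch of that aggregation, using hand-built chunks in place of a live stream:

from langchain_core.language_models.chat_models import generate_from_stream
from langchain_core.messages import AIMessageChunk
from langchain_core.outputs import ChatGenerationChunk

# Two hypothetical chunks standing in for what _stream() would yield.
chunks = iter([
    ChatGenerationChunk(message=AIMessageChunk(content="Hel")),
    ChatGenerationChunk(message=AIMessageChunk(content="lo")),
])

result = generate_from_stream(chunks)  # folds the chunks into one ChatResult
print(result.generations[0].message.content)  # -> "Hello"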