_generate() — langchain Function Reference
Architecture documentation for the _generate() method of ChatPerplexity in libs/partners/perplexity/langchain_perplexity/chat_models.py from the langchain codebase.
Dependency Diagram
graph TD
    cf70f32b_032d_e8a9_45fe_13a6419bea5c["_generate()"]
    36b59643_acfc_fb1d_752e_ae7ec32a79a4["ChatPerplexity"]
    cf70f32b_032d_e8a9_45fe_13a6419bea5c -->|defined in| 36b59643_acfc_fb1d_752e_ae7ec32a79a4
    d798c5eb_b3ec_7dcd_afbe_81031dc680c3["_stream()"]
    cf70f32b_032d_e8a9_45fe_13a6419bea5c -->|calls| d798c5eb_b3ec_7dcd_afbe_81031dc680c3
    b9349daf_2369_217d_7661_9f71b6258a13["_create_message_dicts()"]
    cf70f32b_032d_e8a9_45fe_13a6419bea5c -->|calls| b9349daf_2369_217d_7661_9f71b6258a13
    d64af782_4aac_1e87_a23b_e6300fcdc624["_create_usage_metadata()"]
    cf70f32b_032d_e8a9_45fe_13a6419bea5c -->|calls| d64af782_4aac_1e87_a23b_e6300fcdc624
    style cf70f32b_032d_e8a9_45fe_13a6419bea5c fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/partners/perplexity/langchain_perplexity/chat_models.py lines 589–644
def _generate(
    self,
    messages: list[BaseMessage],
    stop: list[str] | None = None,
    run_manager: CallbackManagerForLLMRun | None = None,
    **kwargs: Any,
) -> ChatResult:
    if self.streaming:
        stream_iter = self._stream(
            messages, stop=stop, run_manager=run_manager, **kwargs
        )
        if stream_iter:
            return generate_from_stream(stream_iter)
    message_dicts, params = self._create_message_dicts(messages, stop)
    params = {**params, **kwargs}
    response = self.client.chat.completions.create(messages=message_dicts, **params)
    if hasattr(response, "usage") and response.usage:
        usage_dict = response.usage.model_dump()
        usage_metadata = _create_usage_metadata(usage_dict)
    else:
        usage_metadata = None
        usage_dict = {}
    additional_kwargs = {}
    for attr in ["citations", "images", "related_questions", "search_results"]:
        if hasattr(response, attr) and getattr(response, attr):
            additional_kwargs[attr] = getattr(response, attr)
    if hasattr(response, "videos") and response.videos:
        additional_kwargs["videos"] = [
            v.model_dump() if hasattr(v, "model_dump") else v
            for v in response.videos
        ]
    if hasattr(response, "reasoning_steps") and response.reasoning_steps:
        additional_kwargs["reasoning_steps"] = [
            r.model_dump() if hasattr(r, "model_dump") else r
            for r in response.reasoning_steps
        ]
    response_metadata: dict[str, Any] = {
        "model_name": getattr(response, "model", self.model)
    }
    if num_search_queries := usage_dict.get("num_search_queries"):
        response_metadata["num_search_queries"] = num_search_queries
    if search_context_size := usage_dict.get("search_context_size"):
        response_metadata["search_context_size"] = search_context_size
    message = AIMessage(
        content=response.choices[0].message.content,
        additional_kwargs=additional_kwargs,
        usage_metadata=usage_metadata,
        response_metadata=response_metadata,
    )
    return ChatResult(generations=[ChatGeneration(message=message)])
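The per-attribute harvesting loop above copies optional Perplexity fields into additional_kwargs only when the response actually carries a truthy value for them. A minimal, self-contained sketch of that pattern, using a stand-in response object (the field values here are illustrative, not real API output):

```python
from types import SimpleNamespace

def harvest_extras(response):
    """Collect optional Perplexity-style fields into a dict,
    mirroring the attribute loop in _generate()."""
    additional_kwargs = {}
    for attr in ["citations", "images", "related_questions", "search_results"]:
        # Only keep attributes that exist AND are truthy, exactly as
        # the original loop does.
        if hasattr(response, attr) and getattr(response, attr):
            additional_kwargs[attr] = getattr(response, attr)
    return additional_kwargs

# Stand-in response: 'images' is present but empty (falsy -> skipped),
# 'related_questions' is absent entirely (also skipped).
response = SimpleNamespace(
    citations=["https://example.com"],
    images=[],
    search_results=[{"title": "Example"}],
)
extras = harvest_extras(response)
```

Because the check is hasattr-plus-truthiness rather than a plain attribute read, the same code tolerates both older response shapes that lack a field and newer ones that return it empty.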
Frequently Asked Questions
What does _generate() do?
_generate() is the synchronous generation method of ChatPerplexity, defined in libs/partners/perplexity/langchain_perplexity/chat_models.py. It converts the incoming messages into Perplexity's chat-completions format, calls the API (or delegates to _stream() and generate_from_stream() when streaming is enabled), and packages the reply, together with usage metadata and Perplexity-specific extras such as citations and search results, into a ChatResult.
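The streaming-vs-blocking dispatch that _generate() performs at its top can be sketched in isolation. The class and method names below are illustrative stand-ins, not the library's API:

```python
class MiniChatModel:
    """Toy model illustrating the dispatch: stream when configured to,
    otherwise make one blocking call."""

    def __init__(self, streaming: bool):
        self.streaming = streaming

    def _stream(self, messages):
        # Stand-in for token-by-token streaming from the API.
        for word in ["hello", "world"]:
            yield word

    def _blocking_call(self, messages):
        # Stand-in for client.chat.completions.create(...).
        return "hello world"

    def generate(self, messages):
        if self.streaming:
            # Mirrors generate_from_stream(): drain the iterator
            # and assemble a single result.
            return " ".join(self._stream(messages))
        return self._blocking_call(messages)

streamed = MiniChatModel(streaming=True).generate(["hi"])
blocked = MiniChatModel(streaming=False).generate(["hi"])
```

Either path yields the same final text; the streaming path simply builds it incrementally so callbacks can observe partial output along the way.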
Where is _generate() defined?
_generate() is defined in libs/partners/perplexity/langchain_perplexity/chat_models.py at line 589.
What does _generate() call?
_generate() calls three functions: _create_message_dicts(), _create_usage_metadata(), and _stream().
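Of these, _create_usage_metadata() consumes the usage dict, while _generate() itself applies walrus-operator filtering to pull search-related keys out of that dict into response_metadata. That filtering step can be sketched on its own (the dict contents and model name below are illustrative):

```python
# Illustrative usage dict: one key present, one present but None.
usage_dict = {"num_search_queries": 2, "search_context_size": None}

response_metadata = {"model_name": "example-model"}  # placeholder name
# Keys are copied only when present and truthy, as in _generate().
if num_search_queries := usage_dict.get("num_search_queries"):
    response_metadata["num_search_queries"] = num_search_queries
if search_context_size := usage_dict.get("search_context_size"):
    response_metadata["search_context_size"] = search_context_size
# search_context_size is None (falsy), so it is omitted.
```

Note that a legitimate zero value would also be dropped by this truthiness check, which is acceptable here since a zero count carries no extra information for the caller.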