_generate() — langchain Function Reference
Architecture documentation for the _generate() function in custom_chat_model.py from the langchain codebase.
Entity Profile
Dependency Diagram
graph TD
    8acc69e5_bdb4_2351_c878_2f8a36d8b3c3["_generate()"]
    827d4990_b3c8_3ba7_bcd4_0a554daa3db4["ChatParrotLink"]
    8acc69e5_bdb4_2351_c878_2f8a36d8b3c3 -->|defined in| 827d4990_b3c8_3ba7_bcd4_0a554daa3db4
    style 8acc69e5_bdb4_2351_c878_2f8a36d8b3c3 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
libs/standard-tests/tests/unit_tests/custom_chat_model.py lines 53–102
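The snippet below is a method of the ChatParrotLink class. For context, the surrounding module imports roughly the following names; this is a sketch based on the public langchain-core API, and the exact import list in the test file may differ:

    from typing import Any

    from langchain_core.callbacks import CallbackManagerForLLMRun
    from langchain_core.messages import AIMessage, BaseMessage
    from langchain_core.outputs import ChatGeneration, ChatResult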
def _generate(
    self,
    messages: list[BaseMessage],
    stop: list[str] | None = None,
    run_manager: CallbackManagerForLLMRun | None = None,
    **kwargs: Any,
) -> ChatResult:
    """Override the _generate method to implement the chat model logic.

    This can be a call to an API, a call to a local model, or any other
    implementation that generates a response to the input prompt.

    Args:
        messages: the prompt composed of a list of messages.
        stop: a list of strings on which the model should stop generating.
            If generation stops due to a stop token, the stop token itself
            SHOULD BE INCLUDED as part of the output. This is not enforced
            across models right now, but it's a good practice to follow
            since it makes it much easier to parse the output of the model
            downstream and understand why generation stopped.
        run_manager: A run manager with callbacks for the LLM.
        **kwargs: Additional keyword arguments.
    """
    # Replace this with actual logic to generate a response from a list
    # of messages.
    _ = stop  # Mark as used to avoid unused variable warning
    _ = run_manager  # Mark as used to avoid unused variable warning
    _ = kwargs  # Mark as used to avoid unused variable warning
    last_message = messages[-1]
    tokens = last_message.content[: self.parrot_buffer_length]
    # Token counts here are character counts: this toy model treats each
    # character of message content as one "token".
    ct_input_tokens = sum(len(message.content) for message in messages)
    ct_output_tokens = len(tokens)
    message = AIMessage(
        content=tokens,
        additional_kwargs={},  # Used to add additional payload to the message
        response_metadata={  # Use for response metadata
            "time_in_seconds": 3,
            "model_name": self.model_name,
        },
        usage_metadata={
            "input_tokens": ct_input_tokens,
            "output_tokens": ct_output_tokens,
            "total_tokens": ct_input_tokens + ct_output_tokens,
        },
    )
    generation = ChatGeneration(message=message)
    return ChatResult(generations=[generation])
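A minimal usage sketch, assuming the enclosing ChatParrotLink class declares model_name and parrot_buffer_length as constructor fields (as in langchain's custom chat model guide) and is importable from this module; the import path is hypothetical:

    from langchain_core.messages import HumanMessage

    from custom_chat_model import ChatParrotLink  # hypothetical import path

    model = ChatParrotLink(model_name="parrot-1", parrot_buffer_length=3)
    result = model.invoke([HumanMessage(content="hello world")])

    print(result.content)         # "hel" -- first 3 characters of the last message
    print(result.usage_metadata)  # {'input_tokens': 11, 'output_tokens': 3, 'total_tokens': 14}

Callers never call _generate() directly: BaseChatModel.invoke() converts the input to messages, runs callbacks, calls _generate(), and returns the AIMessage from the resulting ChatGeneration.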
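The docstring recommends that when generation halts on a stop sequence, the stop string itself be kept in the output; the sample above ignores stop entirely. A hedged sketch of honoring that convention (illustrative only, not part of the source file):

    def _truncate_at_stop(text: str, stop: list[str] | None) -> str:
        """Cut text at the first stop-sequence boundary, keeping the stop string."""
        if not stop:
            return text
        cut = len(text)
        for s in stop:
            idx = text.find(s)
            if idx != -1:
                cut = min(cut, idx + len(s))  # include the stop token in the output
        return text[:cut]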
Frequently Asked Questions
What does _generate() do?
_generate() implements the chat model logic for the ChatParrotLink test model in the langchain codebase, defined in libs/standard-tests/tests/unit_tests/custom_chat_model.py. In this fixture it "parrots" the input: it echoes the first parrot_buffer_length characters of the last message back as an AIMessage, attaches character-count usage metadata and fixed response metadata, and wraps the result in a ChatResult.
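Custom chat models plug _generate() into BaseChatModel, which also requires an _llm_type property. A sketch of the enclosing class under that assumption, with field names taken from the snippet above:

    from langchain_core.language_models import BaseChatModel

    class ChatParrotLink(BaseChatModel):
        """Toy chat model that parrots back a prefix of the last message."""

        model_name: str
        parrot_buffer_length: int  # how many characters of input to echo back

        @property
        def _llm_type(self) -> str:
            return "chat-parrot-link"

        # _generate() from the snippet above goes here.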
Where is _generate() defined?
_generate() is defined in libs/standard-tests/tests/unit_tests/custom_chat_model.py at line 53.