_generate() — langchain Function Reference

Architecture documentation for the _generate() function in huggingface_pipeline.py from the langchain codebase.

Dependency Diagram

graph TD
  21c788c9_e3fd_3763_dc05_03fab3c9eba5["_generate()"]
  54333c82_6644_5574_2c41_4cc818ce3595["HuggingFacePipeline"]
  21c788c9_e3fd_3763_dc05_03fab3c9eba5 -->|defined in| 54333c82_6644_5574_2c41_4cc818ce3595
  style 21c788c9_e3fd_3763_dc05_03fab3c9eba5 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

libs/partners/huggingface/langchain_huggingface/llms/huggingface_pipeline.py lines 316–366

    def _generate(
        self,
        prompts: list[str],
        stop: list[str] | None = None,
        run_manager: CallbackManagerForLLMRun | None = None,
        **kwargs: Any,
    ) -> LLMResult:
        # List to hold all results
        text_generations: list[str] = []
        pipeline_kwargs = kwargs.get("pipeline_kwargs", {})
        skip_prompt = kwargs.get("skip_prompt", False)

        for i in range(0, len(prompts), self.batch_size):
            batch_prompts = prompts[i : i + self.batch_size]

            # Process batch of prompts
            responses = self.pipeline(
                batch_prompts,
                **pipeline_kwargs,
            )

            # Process each response in the batch
            for j, response in enumerate(responses):
                if isinstance(response, list):
                    # if model returns multiple generations, pick the top one
                    response = response[0]

                if (
                    self.pipeline.task == "text-generation"
                    or self.pipeline.task == "text2text-generation"
                    or self.pipeline.task == "image-text-to-text"
                ):
                    text = response["generated_text"]
                elif self.pipeline.task == "summarization":
                    text = response["summary_text"]
                elif self.pipeline.task == "translation":
                    text = response["translation_text"]
                else:
                    msg = (
                        f"Got invalid task {self.pipeline.task}, "
                        f"currently only {VALID_TASKS} are supported"
                    )
                    raise ValueError(msg)
                if skip_prompt:
                    text = text[len(batch_prompts[j]) :]
                # Append the processed text to results
                text_generations.append(text)

        return LLMResult(
            generations=[[Generation(text=text)] for text in text_generations]
        )
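The batching and task-dispatch logic above can be exercised without loading a model by swapping in a stub pipeline. The sketch below mirrors _generate()'s control flow; `StubPipeline` and `generate_texts` are hypothetical names for illustration, not part of langchain or transformers:

```python
from dataclasses import dataclass, field

@dataclass
class StubPipeline:
    """Hypothetical stand-in for a transformers pipeline: echoes each prompt."""
    task: str = "text-generation"
    calls: list = field(default_factory=list)

    def __call__(self, batch, **kwargs):
        self.calls.append(list(batch))  # record how prompts were batched
        return [{"generated_text": p + " OUT"} for p in batch]

def generate_texts(pipeline, prompts, batch_size=2, skip_prompt=False):
    """Mirrors _generate(): batch prompts, run the pipeline, extract per task."""
    texts = []
    for i in range(0, len(prompts), batch_size):
        batch = prompts[i : i + batch_size]
        for j, response in enumerate(pipeline(batch)):
            if isinstance(response, list):
                response = response[0]  # keep only the top generation
            if pipeline.task in ("text-generation", "text2text-generation",
                                 "image-text-to-text"):
                text = response["generated_text"]
            elif pipeline.task == "summarization":
                text = response["summary_text"]
            elif pipeline.task == "translation":
                text = response["translation_text"]
            else:
                raise ValueError(f"Got invalid task {pipeline.task}")
            if skip_prompt:
                # Strip the echoed prompt by character length, as _generate() does
                text = text[len(batch[j]):]
            texts.append(text)
    return texts

pipe = StubPipeline()
result = generate_texts(pipe, ["aa", "bb", "cc"], batch_size=2, skip_prompt=True)
print(pipe.calls)   # [['aa', 'bb'], ['cc']] — three prompts split into two batches
print(result)       # [' OUT', ' OUT', ' OUT'] — prompt prefixes stripped
```

Note that `skip_prompt` slices by the prompt's character length, which assumes the pipeline returns the prompt verbatim at the start of its output (the default `return_full_text=True` behavior of text-generation pipelines).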

Frequently Asked Questions

What does _generate() do?
_generate() is the batched generation method of the HuggingFacePipeline LLM class. It splits the incoming prompts into batches of batch_size, runs each batch through the underlying Hugging Face pipeline, extracts the output text according to the pipeline's task (text-generation, text2text-generation, image-text-to-text, summarization, or translation), optionally strips the echoed prompt when skip_prompt is set, and wraps the texts in an LLMResult.
Where is _generate() defined?
_generate() is defined on the HuggingFacePipeline class in libs/partners/huggingface/langchain_huggingface/llms/huggingface_pipeline.py at line 316.
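The LLMResult that _generate() returns nests one inner list per prompt, each holding a single Generation. A plain-dict sketch of that shape (`to_result_shape` is a hypothetical helper, not a langchain API; dicts stand in for langchain_core's Generation/LLMResult classes):

```python
def to_result_shape(texts):
    # Mirrors LLMResult(generations=[[Generation(text=t)] for t in texts]):
    # one inner list per prompt, each holding a single generation record.
    return {"generations": [[{"text": t}] for t in texts]}

shape = to_result_shape(["hello", "world"])
print(shape)  # {'generations': [[{'text': 'hello'}], [{'text': 'world'}]]}
```

The extra nesting exists because the LLMResult schema allows several candidate generations per prompt; _generate() always emits exactly one, since it keeps only the top response per prompt.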
