
LLMStringRunMapper Class — langchain Architecture

Architecture documentation for the LLMStringRunMapper class in string_run_evaluator.py from the langchain codebase.

Entity Profile

Name: LLMStringRunMapper
Kind: Class
Defined in: libs/langchain/langchain_classic/smith/evaluation/string_run_evaluator.py (line 58)
Extends: StringRunMapper
Methods: serialize_chat_messages(), serialize_inputs(), serialize_outputs(), map()

Dependency Diagram

graph TD
  LLMStringRunMapper["LLMStringRunMapper"]
  StringRunMapper["StringRunMapper"]
  LLMStringRunMapper -->|extends| StringRunMapper
  string_run_evaluator_py["string_run_evaluator.py"]
  LLMStringRunMapper -->|defined in| string_run_evaluator_py
  serialize_chat_messages["serialize_chat_messages()"]
  LLMStringRunMapper -->|method| serialize_chat_messages
  serialize_inputs["serialize_inputs()"]
  LLMStringRunMapper -->|method| serialize_inputs
  serialize_outputs["serialize_outputs()"]
  LLMStringRunMapper -->|method| serialize_outputs
  map_method["map()"]
  LLMStringRunMapper -->|method| map_method

Source Code

libs/langchain/langchain_classic/smith/evaluation/string_run_evaluator.py lines 58–153

class LLMStringRunMapper(StringRunMapper):
    """Extract items to evaluate from the run object."""

    def serialize_chat_messages(self, messages: list[dict] | list[list[dict]]) -> str:
        """Extract the input messages from the run."""
        if isinstance(messages, list) and messages:
            if isinstance(messages[0], dict):
                chat_messages = _get_messages_from_run_dict(
                    cast("list[dict]", messages)
                )
            elif isinstance(messages[0], list):
                # Runs from Tracer have messages as a list of lists of dicts
                chat_messages = _get_messages_from_run_dict(messages[0])
            else:
                msg = f"Could not extract messages to evaluate {messages}"  # type: ignore[unreachable]
                raise ValueError(msg)
            return get_buffer_string(chat_messages)
        msg = f"Could not extract messages to evaluate {messages}"
        raise ValueError(msg)

    def serialize_inputs(self, inputs: dict) -> str:
        """Serialize inputs.

        Args:
            inputs: The inputs from the run, expected to contain prompts or messages.

        Returns:
            The serialized input text from the prompts or messages.

        Raises:
            ValueError: If neither prompts nor messages are found in the inputs.
        """
        if "prompts" in inputs:  # Should we even accept this?
            input_ = "\n\n".join(inputs["prompts"])
        elif "prompt" in inputs:
            input_ = inputs["prompt"]
        elif "messages" in inputs:
            input_ = self.serialize_chat_messages(inputs["messages"])
        else:
            msg = "LLM Run must have either messages or prompts as inputs."
            raise ValueError(msg)
        return input_

    def serialize_outputs(self, outputs: dict) -> str:
        """Serialize outputs.

        Args:
            outputs: The outputs from the run, expected to contain generations.

        Returns:
            The serialized output text from the first generation.

        Raises:
            ValueError: If no generations are found in the outputs or if the generations
                are empty.
        """
        if not outputs.get("generations"):
            msg = "Cannot evaluate LLM Run without generations."
            raise ValueError(msg)
        generations: list[dict] | list[list[dict]] = outputs["generations"]
        if not generations:
            msg = "Cannot evaluate LLM run with empty generations."
            raise ValueError(msg)
        first_generation: dict | list[dict] = generations[0]
        if isinstance(first_generation, list):
            # Runs from Tracer have generations as a list of lists of dicts
            # Whereas Runs from the API have a list of dicts
            first_generation = first_generation[0]
        if "message" in first_generation:
            output_ = self.serialize_chat_messages([first_generation["message"]])
        else:
            output_ = first_generation["text"]
        return output_

    def map(self, run: Run) -> dict[str, str]:
        """Maps the Run to a dictionary."""
        if run.run_type != "llm":
            msg = "LLM RunMapper only supports LLM runs."
            raise ValueError(msg)
        if not run.outputs:
            if run.error:
                msg = f"Cannot evaluate errored LLM run {run.id}: {run.error}"
                raise ValueError(msg)
            msg = f"Run {run.id} has no outputs. Cannot evaluate this run."
            raise ValueError(msg)
        try:
            inputs = self.serialize_inputs(run.inputs)
        except Exception as e:
            msg = f"Could not parse LM input from run inputs {run.inputs}"
            raise ValueError(msg) from e
        try:
            output_ = self.serialize_outputs(run.outputs)
        except Exception as e:
            msg = f"Could not parse LM prediction from run outputs {run.outputs}"
            raise ValueError(msg) from e
        return {"input": inputs, "prediction": output_}

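The two serializer methods can be exercised directly with plain dictionaries shaped like LLM run payloads. The snippet below is an illustrative sketch, not part of the source file: the import path is inferred from the file location shown above, and the example dicts assume a completion-style run with "prompts" as inputs and text "generations" as outputs.

from langchain_classic.smith.evaluation.string_run_evaluator import LLMStringRunMapper

mapper = LLMStringRunMapper()

# Completion-style payloads: "prompts" on the input side,
# "generations" with a "text" field on the output side.
run_inputs = {"prompts": ["Summarize the plot of Hamlet in one sentence."]}
run_outputs = {"generations": [{"text": "A Danish prince avenges his murdered father."}]}

print(mapper.serialize_inputs(run_inputs))    # the prompt text, joined with blank lines
print(mapper.serialize_outputs(run_outputs))  # the first generation's text

On a full Run object whose run_type is "llm", map() applies the same two steps and returns the results under the "input" and "prediction" keys.
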
Extends

StringRunMapper, the base run-to-string mapper class defined in the same string_run_evaluator.py module.
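
Because LLMStringRunMapper is one concrete implementation of StringRunMapper, the same base class can be extended for other run types. The sketch below is a hypothetical example, not code from the langchain repository; it assumes StringRunMapper is importable from the module above, that Run comes from langsmith.schemas, and that the map() contract of returning "input" and "prediction" strings mirrors the class documented here.

from langchain_classic.smith.evaluation.string_run_evaluator import StringRunMapper
from langsmith.schemas import Run


class ToolStringRunMapper(StringRunMapper):
    """Hypothetical mapper that stringifies a tool run for evaluation."""

    def map(self, run: Run) -> dict[str, str]:
        # Same contract as LLMStringRunMapper.map(): expose the text to
        # evaluate under the "input" and "prediction" keys.
        if not run.outputs:
            msg = f"Run {run.id} has no outputs to evaluate."
            raise ValueError(msg)
        return {"input": str(run.inputs), "prediction": str(run.outputs)}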

Frequently Asked Questions

What is the LLMStringRunMapper class?
LLMStringRunMapper extracts the input text and the generated prediction from an LLM run so they can be evaluated as strings. It is defined in libs/langchain/langchain_classic/smith/evaluation/string_run_evaluator.py in the langchain codebase.
Where is LLMStringRunMapper defined?
LLMStringRunMapper is defined in libs/langchain/langchain_classic/smith/evaluation/string_run_evaluator.py at line 58.
What does LLMStringRunMapper extend?
LLMStringRunMapper extends StringRunMapper.
