
loading.py — langchain Source File

Architecture documentation for loading.py, a Python file in the langchain codebase. 10 imports, 0 dependents.

File · Python · LangChainCore · ApiManagement · 10 imports · 9 functions

Entity Profile

Dependency Diagram

graph LR
  fc5c0c58_eb79_058b_cc17_205426e210b8["loading.py"]
  9d14ea65_8b2e_6721_a947_acc89905651f["json"]
  fc5c0c58_eb79_058b_cc17_205426e210b8 --> 9d14ea65_8b2e_6721_a947_acc89905651f
  e27da29f_a1f7_49f3_84d5_6be4cb4125c8["logging"]
  fc5c0c58_eb79_058b_cc17_205426e210b8 --> e27da29f_a1f7_49f3_84d5_6be4cb4125c8
  2bf6d401_816d_d011_3b05_a6114f55ff58["collections.abc"]
  fc5c0c58_eb79_058b_cc17_205426e210b8 --> 2bf6d401_816d_d011_3b05_a6114f55ff58
  927570d8_11a6_5c17_0f0d_80baae0c733e["pathlib"]
  fc5c0c58_eb79_058b_cc17_205426e210b8 --> 927570d8_11a6_5c17_0f0d_80baae0c733e
  a869785a_d507_1688_0b32_0ec94043975a["yaml"]
  fc5c0c58_eb79_058b_cc17_205426e210b8 --> a869785a_d507_1688_0b32_0ec94043975a
  b80605d3_1f01_1dd5_b07c_ebd3b378c1eb["langchain_core.output_parsers.string"]
  fc5c0c58_eb79_058b_cc17_205426e210b8 --> b80605d3_1f01_1dd5_b07c_ebd3b378c1eb
  73ea0f91_f33b_7459_5c78_ce39be8c0528["langchain_core.prompts.base"]
  fc5c0c58_eb79_058b_cc17_205426e210b8 --> 73ea0f91_f33b_7459_5c78_ce39be8c0528
  16c7d167_e2e4_cd42_2bc2_d182459cd93c["langchain_core.prompts.chat"]
  fc5c0c58_eb79_058b_cc17_205426e210b8 --> 16c7d167_e2e4_cd42_2bc2_d182459cd93c
  11cbe9f0_2bd4_762f_01c6_dd44fe3312a0["langchain_core.prompts.few_shot"]
  fc5c0c58_eb79_058b_cc17_205426e210b8 --> 11cbe9f0_2bd4_762f_01c6_dd44fe3312a0
  4b3dcc0f_d872_0044_39ec_2d289f87f9e6["langchain_core.prompts.prompt"]
  fc5c0c58_eb79_058b_cc17_205426e210b8 --> 4b3dcc0f_d872_0044_39ec_2d289f87f9e6
  style fc5c0c58_eb79_058b_cc17_205426e210b8 fill:#6366f1,stroke:#818cf8,color:#fff


Source Code

"""Load prompts."""

import json
import logging
from collections.abc import Callable
from pathlib import Path

import yaml

from langchain_core.output_parsers.string import StrOutputParser
from langchain_core.prompts.base import BasePromptTemplate
from langchain_core.prompts.chat import ChatPromptTemplate
from langchain_core.prompts.few_shot import FewShotPromptTemplate
from langchain_core.prompts.prompt import PromptTemplate

URL_BASE = "https://raw.githubusercontent.com/hwchase17/langchain-hub/master/prompts/"
logger = logging.getLogger(__name__)


def load_prompt_from_config(config: dict) -> BasePromptTemplate:
    """Load prompt from config dict.

    Args:
        config: Dict containing the prompt configuration.

    Returns:
        A `BasePromptTemplate` object built by the matching type loader.

    Raises:
        ValueError: If the prompt type is not supported.
    """
    if "_type" not in config:
        logger.warning("No `_type` key found, defaulting to `prompt`.")
    config_type = config.pop("_type", "prompt")

    if config_type not in type_to_loader_dict:
        msg = f"Loading {config_type} prompt not supported"
        raise ValueError(msg)

    prompt_loader = type_to_loader_dict[config_type]
    return prompt_loader(config)


def _load_template(var_name: str, config: dict) -> dict:
    """Load template from the path if applicable."""
    # Check if template_path exists in config.
    if f"{var_name}_path" in config:
        # If it does, make sure template variable doesn't also exist.
        if var_name in config:
            msg = f"Both `{var_name}_path` and `{var_name}` cannot be provided."
            raise ValueError(msg)
        # Pop the template path from the config.
        template_path = Path(config.pop(f"{var_name}_path"))
        # Load the template.
        if template_path.suffix == ".txt":
            template = template_path.read_text(encoding="utf-8")
        else:
            msg = f"Unsupported template file type: {template_path.suffix}"
            raise ValueError(msg)
        # Set the template variable to the extracted variable.
        config[var_name] = template
# ... (138 more lines)
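The `_type` dispatch in `load_prompt_from_config` can be exercised in isolation. In the sketch below the two loaders are hypothetical stand-ins for the real `_load_prompt`/`_load_chat_prompt` implementations (the real ones construct prompt template objects, not strings), so the return values are illustrative only:

```python
import logging

logger = logging.getLogger(__name__)

# Hypothetical stand-in loaders; the real ones build prompt template objects.
def _load_prompt(config: dict) -> str:
    return f"PromptTemplate(template={config['template']!r})"

def _load_chat_prompt(config: dict) -> str:
    return f"ChatPromptTemplate(messages={config['messages']!r})"

type_to_loader_dict = {
    "prompt": _load_prompt,
    "chat": _load_chat_prompt,
}

def load_prompt_from_config(config: dict) -> str:
    """Dispatch on the config's `_type` key, defaulting to `prompt`."""
    if "_type" not in config:
        logger.warning("No `_type` key found, defaulting to `prompt`.")
    config_type = config.pop("_type", "prompt")
    if config_type not in type_to_loader_dict:
        msg = f"Loading {config_type} prompt not supported"
        raise ValueError(msg)
    return type_to_loader_dict[config_type](config)

print(load_prompt_from_config({"_type": "prompt", "template": "Say {word}"}))
```

Because `pop` removes `_type` before dispatch, each loader receives only the keys it needs, and an unrecognized type fails fast with a `ValueError` rather than a `KeyError`.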

Domain

  • LangChainCore

Subdomains

  • ApiManagement

Dependencies

  • collections.abc
  • json
  • langchain_core.output_parsers.string
  • langchain_core.prompts.base
  • langchain_core.prompts.chat
  • langchain_core.prompts.few_shot
  • langchain_core.prompts.prompt
  • logging
  • pathlib
  • yaml
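Several of these dependencies serve the `*_path` indirection in `_load_template`: pathlib resolves the template file and its text replaces the path key in the config. A self-contained sketch of that mechanism (the `greeting.txt` file and its contents are invented for illustration):

```python
import tempfile
from pathlib import Path

def _load_template(var_name: str, config: dict) -> dict:
    """Resolve `<var_name>_path` into an inline `<var_name>` template string."""
    if f"{var_name}_path" in config:
        # Supplying both the path and the inline template is ambiguous.
        if var_name in config:
            msg = f"Both `{var_name}_path` and `{var_name}` cannot be provided."
            raise ValueError(msg)
        template_path = Path(config.pop(f"{var_name}_path"))
        if template_path.suffix != ".txt":
            msg = f"Unsupported template file type: {template_path.suffix}"
            raise ValueError(msg)
        config[var_name] = template_path.read_text(encoding="utf-8")
    return config

with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "greeting.txt"
    path.write_text("Hello, {name}!", encoding="utf-8")
    config = _load_template("template", {"template_path": str(path)})
    print(config["template"])  # Hello, {name}!
```

After the call, downstream code sees a plain `template` key and never needs to know whether the text came inline or from disk.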

Frequently Asked Questions

What does loading.py do?
loading.py is a source file in the langchain codebase, written in Python. It belongs to the LangChainCore domain, ApiManagement subdomain.
What functions are defined in loading.py?
loading.py defines 9 function(s): _load_chat_prompt, _load_examples, _load_few_shot_prompt, _load_output_parser, _load_prompt, _load_prompt_from_file, _load_template, load_prompt, load_prompt_from_config.
What does loading.py depend on?
loading.py imports 10 module(s): collections.abc, json, langchain_core.output_parsers.string, langchain_core.prompts.base, langchain_core.prompts.chat, langchain_core.prompts.few_shot, langchain_core.prompts.prompt, logging, pathlib, and yaml.
Where is loading.py in the architecture?
loading.py is located at libs/core/langchain_core/prompts/loading.py (domain: LangChainCore, subdomain: ApiManagement, directory: libs/core/langchain_core/prompts).
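Before a config dict reaches `load_prompt_from_config`, the file-based entry point parses the on-disk file by suffix. A minimal sketch of that suffix dispatch, showing only the `.json` branch (the real file also accepts `.yaml`/`.yml` via `yaml.safe_load`, omitted here so the example has no third-party dependency):

```python
import json
import tempfile
from pathlib import Path

def _load_prompt_from_file(file: str) -> dict:
    """Parse a prompt config file into a dict, dispatching on its suffix."""
    file_path = Path(file)
    if file_path.suffix == ".json":
        with open(file_path, encoding="utf-8") as f:
            return json.load(f)
    # The real loader handles .yaml/.yml here with yaml.safe_load.
    msg = f"Got unsupported file type {file_path.suffix}"
    raise ValueError(msg)

with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "prompt.json"
    path.write_text(
        json.dumps({"_type": "prompt", "template": "Hi {name}"}),
        encoding="utf-8",
    )
    config = _load_prompt_from_file(str(path))
    print(config["_type"])  # prompt
```

The parsed dict then flows into the `_type` dispatch described earlier, so JSON and YAML prompt files share one loading path after this point.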
