prompt.py — langchain Source File
Architecture documentation for prompt.py, a Python file in the langchain codebase. 1 import, 0 dependents.
Entity Profile
Dependency Diagram
graph LR
    267b6b9a_ad49_f6fa_7efd_90a6f6eda31d["prompt.py"]
    435e49bf_bb2e_2016_ead7_0afb9d57ad71["langchain_core.prompts"]
    267b6b9a_ad49_f6fa_7efd_90a6f6eda31d --> 435e49bf_bb2e_2016_ead7_0afb9d57ad71
    style 267b6b9a_ad49_f6fa_7efd_90a6f6eda31d fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
# Credit to https://github.com/openai/evals/tree/main
from langchain_core.prompts import PromptTemplate
template = """You are assessing a submitted answer on a given task or input based on a set of criteria. Here is the data:
[BEGIN DATA]
***
[Input]: {input}
***
[Submission]: {output}
***
[Criteria]: {criteria}
***
[END DATA]
Does the submission meet the Criteria? First, write out in a step by step manner your reasoning about each criterion to be sure that your conclusion is correct. Avoid simply stating the correct answers at the outset. Then print only the single character "Y" or "N" (without quotes or punctuation) on its own line corresponding to the correct answer of whether the submission meets all criteria. At the end, repeat just the letter again by itself on a new line.""" # noqa: E501
PROMPT = PromptTemplate(
    input_variables=["input", "output", "criteria"], template=template
)
template = """You are assessing a submitted answer on a given task or input based on a set of criteria. Here is the data:
[BEGIN DATA]
***
[Input]: {input}
***
[Submission]: {output}
***
[Criteria]: {criteria}
***
[Reference]: {reference}
***
[END DATA]
Does the submission meet the Criteria? First, write out in a step by step manner your reasoning about each criterion to be sure that your conclusion is correct. Avoid simply stating the correct answers at the outset. Then print only the single character "Y" or "N" (without quotes or punctuation) on its own line corresponding to the correct answer of whether the submission meets all criteria. At the end, repeat just the letter again by itself on a new line.""" # noqa: E501
PROMPT_WITH_REFERENCES = PromptTemplate(
    input_variables=["input", "output", "criteria", "reference"], template=template
)
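The two templates are rendered by supplying their declared input variables. A minimal usage sketch follows; the example values are illustrative and not taken from the file:

# PROMPT and PROMPT_WITH_REFERENCES are the templates defined in the source above.
grading_prompt = PROMPT.format(
    input="What is the capital of France?",
    output="The capital of France is Paris.",
    criteria="correctness: Is the submission factually accurate?",
)

# The reference-aware variant additionally takes a ground-truth answer.
reference_prompt = PROMPT_WITH_REFERENCES.format(
    input="What is the capital of France?",
    output="The capital of France is Paris.",
    criteria="correctness: Is the submission factually accurate?",
    reference="Paris",
)

print(grading_prompt)

Both calls return plain strings with the placeholders filled in, ready to be sent to an LLM for grading.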
Dependencies
- langchain_core.prompts
Frequently Asked Questions
What does prompt.py do?
prompt.py is a Python source file in the langchain codebase. It defines two PromptTemplate instances, PROMPT and PROMPT_WITH_REFERENCES, used to prompt a model to assess a submitted answer against a set of criteria; the second variant additionally accepts a reference answer.
What does prompt.py depend on?
prompt.py imports one module: langchain_core.prompts, from which it uses PromptTemplate.
Where is prompt.py in the architecture?
prompt.py is located at libs/langchain/langchain_classic/evaluation/criteria/prompt.py (directory: libs/langchain/langchain_classic/evaluation/criteria).
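If the templates are needed elsewhere, they can be imported directly. A hedged sketch, assuming the importable module path mirrors the file location above (the path is inferred from the file location, not confirmed by the file itself):

# Inferred module path; verify against the actual package layout.
from langchain_classic.evaluation.criteria.prompt import PROMPT, PROMPT_WITH_REFERENCES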