eval_prompt.py — langchain Source File
Architecture documentation for eval_prompt.py, a Python file in the langchain codebase, with 1 import and 0 dependents.
Entity Profile
Dependency Diagram
graph LR
    addd09d3_20fa_b385_5430_d14dbf16a062["eval_prompt.py"]
    435e49bf_bb2e_2016_ead7_0afb9d57ad71["langchain_core.prompts"]
    addd09d3_20fa_b385_5430_d14dbf16a062 --> 435e49bf_bb2e_2016_ead7_0afb9d57ad71
    style addd09d3_20fa_b385_5430_d14dbf16a062 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
from langchain_core.prompts import PromptTemplate
template = """You are a teacher grading a quiz.
You are given a question, the student's answer, and the true answer, and are asked to score the student answer as either CORRECT or INCORRECT.
Example Format:
QUESTION: question here
STUDENT ANSWER: student's answer here
TRUE ANSWER: true answer here
GRADE: CORRECT or INCORRECT here
Grade the student answers based ONLY on their factual accuracy. Ignore differences in punctuation and phrasing between the student answer and true answer. It is OK if the student answer contains more information than the true answer, as long as it does not contain any conflicting statements. Begin!
QUESTION: {query}
STUDENT ANSWER: {result}
TRUE ANSWER: {answer}
GRADE:""" # noqa: E501
PROMPT = PromptTemplate(
    input_variables=["query", "result", "answer"], template=template
)
context_template = """You are a teacher grading a quiz.
You are given a question, the context the question is about, and the student's answer. You are asked to score the student's answer as either CORRECT or INCORRECT, based on the context.
Example Format:
QUESTION: question here
CONTEXT: context the question is about here
STUDENT ANSWER: student's answer here
GRADE: CORRECT or INCORRECT here
Grade the student answers based ONLY on their factual accuracy. Ignore differences in punctuation and phrasing between the student answer and true answer. It is OK if the student answer contains more information than the true answer, as long as it does not contain any conflicting statements. Begin!
QUESTION: {query}
CONTEXT: {context}
STUDENT ANSWER: {result}
GRADE:""" # noqa: E501
CONTEXT_PROMPT = PromptTemplate(
    input_variables=["query", "context", "result"], template=context_template
)
cot_template = """You are a teacher grading a quiz.
You are given a question, the context the question is about, and the student's answer. You are asked to score the student's answer as either CORRECT or INCORRECT, based on the context.
Write out in a step by step manner your reasoning to be sure that your conclusion is correct. Avoid simply stating the correct answer at the outset.
Example Format:
QUESTION: question here
CONTEXT: context the question is about here
STUDENT ANSWER: student's answer here
EXPLANATION: step by step reasoning here
GRADE: CORRECT or INCORRECT here
Grade the student answers based ONLY on their factual accuracy. Ignore differences in punctuation and phrasing between the student answer and true answer. It is OK if the student answer contains more information than the true answer, as long as it does not contain any conflicting statements. Begin!
QUESTION: {query}
CONTEXT: {context}
STUDENT ANSWER: {result}
EXPLANATION:""" # noqa: E501
COT_PROMPT = PromptTemplate(
    input_variables=["query", "context", "result"], template=cot_template
)
template = """You are comparing a submitted answer to an expert answer on a given SQL coding question. Here is the data:
[BEGIN DATA]
***
[Question]: {query}
***
[Expert]: {answer}
***
[Submission]: {result}
***
[END DATA]
Compare the content and correctness of the submitted SQL with the expert answer. Ignore any differences in whitespace, style, or output column names. The submitted answer may either be correct or incorrect. Determine which case applies. First, explain in detail the similarities or differences between the expert answer and the submission, ignoring superficial aspects such as whitespace, style or output column names. Do not state the final answer in your initial explanation. Then, respond with either "CORRECT" or "INCORRECT" (without quotes or punctuation) on its own line. This should correspond to whether the submitted SQL and the expert answer are semantically the same or different, respectively. Then, repeat your final answer on a new line.""" # noqa: E501
SQL_PROMPT = PromptTemplate(
    input_variables=["query", "answer", "result"], template=template
)
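For a concrete sense of how these templates render, here is a minimal sketch that fills PROMPT directly. The sample query, result, and answer values are hypothetical, and the import path is taken from the file location listed in the FAQ below.

from langchain_classic.evaluation.qa.eval_prompt import PROMPT

# Fill the grading template with one (question, prediction, reference) triple.
# The keys match the declared input_variables: "query", "result", "answer".
filled = PROMPT.format(
    query="What is the capital of France?",
    result="The capital of France is Paris.",
    answer="Paris",
)
print(filled)  # Prints the full grading prompt, ending with "GRADE:"

An LLM given the rendered string is expected to reply with CORRECT or INCORRECT. CONTEXT_PROMPT, COT_PROMPT, and SQL_PROMPT are filled the same way with their respective input variables.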
Dependencies
- langchain_core.prompts
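As an aside on this one dependency: PromptTemplate also offers a from_template constructor that infers the input variables from the {placeholders}, so the explicit input_variables lists above are a stylistic choice rather than a requirement. A minimal sketch:

from langchain_core.prompts import PromptTemplate

# from_template infers input_variables from the placeholders in the string,
# which would make the explicit lists in eval_prompt.py redundant.
prompt = PromptTemplate.from_template("QUESTION: {query}\nGRADE:")
assert prompt.input_variables == ["query"]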
Frequently Asked Questions
What does eval_prompt.py do?
eval_prompt.py is a Python source file in the langchain codebase. It defines four PromptTemplate constants (PROMPT, CONTEXT_PROMPT, COT_PROMPT, and SQL_PROMPT) that instruct an LLM to grade question-answering output as CORRECT or INCORRECT: a basic grader, a context-grounded grader, a chain-of-thought grader that asks for step-by-step reasoning first, and a SQL answer comparator. A hedged usage sketch follows.
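This sketch assumes the long-standing QAEvalChain interface from langchain's QA evaluation module; the exact import path under this repository layout, and the choice of chat model, are assumptions for illustration.

from langchain_classic.evaluation.qa import QAEvalChain  # path assumed from the layout below
from langchain_openai import ChatOpenAI  # any chat model would do here

# QAEvalChain.from_llm historically defaults its prompt to the PROMPT defined above.
chain = QAEvalChain.from_llm(llm=ChatOpenAI(temperature=0))
graded = chain.evaluate(
    examples=[{"query": "What is 2 + 2?", "answer": "4"}],
    predictions=[{"result": "The answer is 4."}],
)
print(graded)  # One grade dict per example, containing CORRECT or INCORRECT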
What does eval_prompt.py depend on?
eval_prompt.py imports one module, langchain_core.prompts, from which it uses PromptTemplate.
Where is eval_prompt.py in the architecture?
eval_prompt.py is located at libs/langchain/langchain_classic/evaluation/qa/eval_prompt.py (directory: libs/langchain/langchain_classic/evaluation/qa).