map_rerank_prompt.py — langchain Source File
Architecture documentation for map_rerank_prompt.py, a Python file in the langchain codebase. 2 imports, 0 dependents.
Entity Profile
Dependency Diagram
graph LR
    3b36f67c_f657_1b92_236b_f88469d662e8["map_rerank_prompt.py"]
    435e49bf_bb2e_2016_ead7_0afb9d57ad71["langchain_core.prompts"]
    3b36f67c_f657_1b92_236b_f88469d662e8 --> 435e49bf_bb2e_2016_ead7_0afb9d57ad71
    c1b1b626_aa3d_1efe_18fa_d17344e01056["langchain_classic.output_parsers.regex"]
    3b36f67c_f657_1b92_236b_f88469d662e8 --> c1b1b626_aa3d_1efe_18fa_d17344e01056
    style 3b36f67c_f657_1b92_236b_f88469d662e8 fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
from langchain_core.prompts import PromptTemplate
from langchain_classic.output_parsers.regex import RegexParser
output_parser = RegexParser(
    regex=r"(.*?)\nScore: (\d*)",
    output_keys=["answer", "score"],
)
prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.
In addition to giving an answer, also return a score of how fully it answered the user's question. This should be in the following format:
Question: [question here]
Helpful Answer: [answer here]
Score: [score between 0 and 100]
How to determine the score:
- Higher is a better answer
- Better responds fully to the asked question, with sufficient level of detail
- If you do not know the answer based on the context, that should be a score of 0
- Don't be overconfident!
Example #1
Context:
---------
Apples are red
---------
Question: what color are apples?
Helpful Answer: red
Score: 100
Example #2
Context:
---------
it was night and the witness forgot his glasses. he was not sure if it was a sports car or an suv
---------
Question: what type was the car?
Helpful Answer: a sports car or an suv
Score: 60
Example #3
Context:
---------
Pears are either red or orange
---------
Question: what color are apples?
Helpful Answer: This document does not answer the question
Score: 0
Begin!
Context:
---------
{context}
---------
Question: {question}
Helpful Answer:""" # noqa: E501
PROMPT = PromptTemplate(
    template=prompt_template,
    input_variables=["context", "question"],
    output_parser=output_parser,
)
Dependencies
- langchain_classic.output_parsers.regex
- langchain_core.prompts
Frequently Asked Questions
What does map_rerank_prompt.py do?
map_rerank_prompt.py is a Python source file in the langchain codebase. It defines output_parser, a RegexParser that extracts an answer and a 0-100 score from a model reply, and PROMPT, a PromptTemplate that asks the model to answer a question from the supplied context and rate how fully its answer addresses the question. Judging by its location under chains/question_answering, these serve as defaults for the map-rerank question-answering chain.
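A minimal sketch of how the module's two exports would be imported and inspected; the import path is an assumption inferred from the file location reported below.

# Hypothetical import path, assuming the module path mirrors
# libs/langchain/langchain_classic/chains/question_answering/map_rerank_prompt.py.
from langchain_classic.chains.question_answering.map_rerank_prompt import (
    PROMPT,
    output_parser,
)

print(PROMPT.input_variables)     # ['context', 'question'] -- the template's two slots
print(output_parser.output_keys)  # ['answer', 'score'] -- fields extracted from a reply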
What does map_rerank_prompt.py depend on?
map_rerank_prompt.py imports 2 modules: langchain_classic.output_parsers.regex and langchain_core.prompts.
Where is map_rerank_prompt.py in the architecture?
map_rerank_prompt.py is located at libs/langchain/langchain_classic/chains/question_answering/map_rerank_prompt.py (directory: libs/langchain/langchain_classic/chains/question_answering).