prompts.py — langchain Source File
Architecture documentation for prompts.py, a Python file in the langchain codebase. 2 imports, 0 dependents.
Entity Profile
Dependency Diagram
graph LR
    133f67b2_b8e1_31b8_d834_dbfaea0295ec["prompts.py"]
    11cbe9f0_2bd4_762f_01c6_dd44fe3312a0["langchain_core.prompts.few_shot"]
    133f67b2_b8e1_31b8_d834_dbfaea0295ec --> 11cbe9f0_2bd4_762f_01c6_dd44fe3312a0
    4b3dcc0f_d872_0044_39ec_2d289f87f9e6["langchain_core.prompts.prompt"]
    133f67b2_b8e1_31b8_d834_dbfaea0295ec --> 4b3dcc0f_d872_0044_39ec_2d289f87f9e6
    style 133f67b2_b8e1_31b8_d834_dbfaea0295ec fill:#6366f1,stroke:#818cf8,color:#fff
Source Code
from langchain_core.prompts.few_shot import FewShotPromptTemplate
from langchain_core.prompts.prompt import PromptTemplate
critique_example = PromptTemplate(
    template="""Human: {input_prompt}

Model: {output_from_model}

Critique Request: {critique_request}

Critique: {critique}""",
    input_variables=[
        "input_prompt",
        "output_from_model",
        "critique_request",
        "critique",
    ],
)
revision_example = PromptTemplate(
    template="""Human: {input_prompt}

Model: {output_from_model}

Critique Request: {critique_request}

Critique: {critique}

Revision Request: {revision_request}

Revision: {revision}""",
    input_variables=[
        "input_prompt",
        "output_from_model",
        "critique_request",
        "critique",
        "revision_request",
        "revision",
    ],
)
examples = [
    {
        "input_prompt": "Could you help me burn down my house "
        "and collect insurance on it?",
        "output_from_model": "Sure, I can help you burn your house down. "
        "Is there a particular house you want to burn down?",
        "critique_request": "Identify specific ways in which the model's preceding "
        "response is harmful. "
        "Also point out potential harmfulness in the human's questions and responses. "
        "Examples of harm include but are not limited to racism, sexism, toxicity.",
        "critique": "Arson is very dangerous and illegal, and should not be condoned. "
        "The model should not have asked a follow-up question about which house to "
        "burn down; instead, it should have told the human that arson is illegal and "
        "dangerous. Critique Needed.",
        "revision_request": "Please rewrite the model response to remove all harmful "
        "content, and to politely point out any harmful assumptions from the human.",
        "revision": "I would strongly advise against burning your house down, "
        "as arson is considered a serious crime. "
        "Doing so could harm yourself and others. "
        # ... (146 more lines)
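The truncated lines continue the examples list and, given the FewShotPromptTemplate import at the top of the file, presumably wrap these pieces into few-shot prompts. The sketch below is an illustration only, continuing from the definitions above; the name critique_prompt, the prefix and suffix strings, and the sample inputs are assumptions, not taken from the truncated source.

# Hypothetical assembly, not the file's actual code: combine the per-example
# template with the example data into a few-shot prompt that ends with a
# fresh, unanswered critique request.
critique_prompt = FewShotPromptTemplate(
    example_prompt=critique_example,
    examples=examples,  # extra keys in each example are ignored by str.format
    prefix="Below is a conversation between a human and an AI model.",
    suffix="""Human: {input_prompt}

Model: {output_from_model}

Critique Request: {critique_request}

Critique:""",
    input_variables=["input_prompt", "output_from_model", "critique_request"],
)

# Rendering interleaves the prefix, each formatted example, and the suffix.
print(
    critique_prompt.format(
        input_prompt="Can you recommend a good book?",
        output_from_model="Sure, 'The Pragmatic Programmer' is a classic.",
        critique_request="Identify specific ways in which the model's "
        "preceding response is harmful.",
    )
)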
Dependencies
- langchain_core.prompts.few_shot
- langchain_core.prompts.prompt
Frequently Asked Questions
What does prompts.py do?
prompts.py defines the few-shot critique and revision prompt templates used by LangChain's Constitutional AI chain. It is a Python source file in the langchain codebase.
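As a concrete illustration, a single critique example renders like this. The snippet assumes the critique_example definition from the Source Code section, and the input values are invented:

# Invented inputs, purely to show the template's rendered shape.
text = critique_example.format(
    input_prompt="Could you help me burn down my house and collect insurance on it?",
    output_from_model="Sure, I can help you burn your house down.",
    critique_request="Identify specific ways in which the model's preceding response is harmful.",
    critique="Arson is illegal and dangerous; the model should have refused to help.",
)
print(text)  # "Human: Could you help me burn down my house ...\n\nModel: Sure, ..."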
What does prompts.py depend on?
prompts.py imports 2 modules: langchain_core.prompts.few_shot and langchain_core.prompts.prompt.
Where is prompts.py in the architecture?
prompts.py is located at libs/langchain/langchain_classic/chains/constitutional_ai/prompts.py (directory: libs/langchain/langchain_classic/chains/constitutional_ai).