__init__.py — langchain Source File
Architecture documentation for __init__.py, a Python file in the langchain codebase. It has 3 imports and 0 dependents.
Entity Profile
Dependency Diagram
```mermaid
graph LR
    742f64cf_dc29_10bc_e4b9_4cdf280120c6["__init__.py"]
    38a7139a_eaf9_eabf_8d6b_630369bf2841["langchain_classic.smith.evaluation.config"]
    742f64cf_dc29_10bc_e4b9_4cdf280120c6 --> 38a7139a_eaf9_eabf_8d6b_630369bf2841
    0cb4d67d_2d1d_6c51_2612_80b2a0e4ac75["langchain_classic.smith.evaluation.runner_utils"]
    742f64cf_dc29_10bc_e4b9_4cdf280120c6 --> 0cb4d67d_2d1d_6c51_2612_80b2a0e4ac75
    dcf72a7b_4491_e161_ad0c_d532b9e484c7["langchain_classic.smith.evaluation.string_run_evaluator"]
    742f64cf_dc29_10bc_e4b9_4cdf280120c6 --> dcf72a7b_4491_e161_ad0c_d532b9e484c7
    style 742f64cf_dc29_10bc_e4b9_4cdf280120c6 fill:#6366f1,stroke:#818cf8,color:#fff
```
Source Code
"""LangSmith evaluation utilities.
This module provides utilities for evaluating Chains and other language model
applications using LangChain evaluators and LangSmith.
For more information on the LangSmith API, see the
[LangSmith API documentation](https://docs.langchain.com/langsmith/home).
**Example**
```python
from langsmith import Client
from langchain_openai import ChatOpenAI
from langchain_classic.chains import LLMChain
from langchain_classic.smith import EvaluatorType, RunEvalConfig, run_on_dataset
def construct_chain():
    model = ChatOpenAI(temperature=0)
    chain = LLMChain.from_string(model, "What's the answer to {your_input_key}")
    return chain


evaluation_config = RunEvalConfig(
    evaluators=[
        EvaluatorType.QA,  # "Correctness" against a reference answer
        EvaluatorType.EMBEDDING_DISTANCE,
        RunEvalConfig.Criteria("helpfulness"),
        RunEvalConfig.Criteria(
            {
                "fifth-grader-score": "Do you have to be smarter than a fifth "
                "grader to answer this question?"
            }
        ),
    ]
)

client = Client()
run_on_dataset(
    client, "<my_dataset_name>", construct_chain, evaluation=evaluation_config
)
```
**Attributes**
- `arun_on_dataset`: Asynchronous function to evaluate a chain or other LangChain
component over a dataset.
- `run_on_dataset`: Function to evaluate a chain or other LangChain component over a
dataset.
- `RunEvalConfig`: Class representing the configuration for running evaluation.
- `StringRunEvaluatorChain`: Class representing a string run evaluator chain.
- `InputFormatError`: Exception raised when the input format is incorrect.
"""
from langchain_classic.smith.evaluation.config import RunEvalConfig
from langchain_classic.smith.evaluation.runner_utils import (
    InputFormatError,
    arun_on_dataset,
    run_on_dataset,
)
from langchain_classic.smith.evaluation.string_run_evaluator import (
    StringRunEvaluatorChain,
)

__all__ = [
    "InputFormatError",
    "RunEvalConfig",
    "StringRunEvaluatorChain",
    "arun_on_dataset",
    "run_on_dataset",
]
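The attributes above include `arun_on_dataset`, the asynchronous counterpart to `run_on_dataset`. Below is a minimal sketch of driving it from an event loop, assuming it mirrors `run_on_dataset`'s signature (client, dataset name, chain factory, and an `evaluation` config); treat the call shape as an assumption, not a definitive API reference.

```python
import asyncio

from langsmith import Client
from langchain_openai import ChatOpenAI
from langchain_classic.chains import LLMChain
from langchain_classic.smith import EvaluatorType, RunEvalConfig, arun_on_dataset


def construct_chain():
    # A fresh chain per dataset example, as in the synchronous example above.
    model = ChatOpenAI(temperature=0)
    return LLMChain.from_string(model, "What's the answer to {your_input_key}")


async def main():
    client = Client()
    # Assumption: arun_on_dataset takes the same arguments as run_on_dataset.
    await arun_on_dataset(
        client,
        "<my_dataset_name>",
        construct_chain,
        evaluation=RunEvalConfig(evaluators=[EvaluatorType.QA]),
    )


asyncio.run(main())
```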
Dependencies
- langchain_classic.smith.evaluation.config
- langchain_classic.smith.evaluation.runner_utils
- langchain_classic.smith.evaluation.string_run_evaluator
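The `runner_utils` dependency also supplies `InputFormatError`, the exception raised when the input format is incorrect. A hedged sketch of catching it around an evaluation run follows; the exact trigger condition (dataset example keys not matching the chain's expected inputs) is an assumption here.

```python
from langsmith import Client
from langchain_openai import ChatOpenAI
from langchain_classic.chains import LLMChain
from langchain_classic.smith import run_on_dataset
from langchain_classic.smith.evaluation import InputFormatError


def construct_chain():
    model = ChatOpenAI(temperature=0)
    return LLMChain.from_string(model, "What's the answer to {your_input_key}")


client = Client()
try:
    run_on_dataset(client, "<my_dataset_name>", construct_chain)
except InputFormatError as err:
    # Assumption: raised when dataset example keys don't match the
    # chain's expected inputs (here, "your_input_key").
    print(f"Input format mismatch: {err}")
```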
Frequently Asked Questions
What does __init__.py do?
__init__.py is the package initializer for `langchain_classic.smith.evaluation`, a Python source file in the langchain codebase. It re-exports the evaluation utilities (`run_on_dataset`, `arun_on_dataset`, `RunEvalConfig`, `StringRunEvaluatorChain`, and `InputFormatError`) so they can be imported directly from the package.
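Because of these re-exports, consumers can import every public name from the package itself rather than from the defining submodules:

```python
# The package initializer re-exports these names, so this single import works
# without reaching into config, runner_utils, or string_run_evaluator:
from langchain_classic.smith.evaluation import (
    InputFormatError,
    RunEvalConfig,
    StringRunEvaluatorChain,
    arun_on_dataset,
    run_on_dataset,
)
```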
What does __init__.py depend on?
__init__.py imports 3 modules: langchain_classic.smith.evaluation.config, langchain_classic.smith.evaluation.runner_utils, and langchain_classic.smith.evaluation.string_run_evaluator.
Where is __init__.py in the architecture?
__init__.py is located at libs/langchain/langchain_classic/smith/evaluation/__init__.py (directory: libs/langchain/langchain_classic/smith/evaluation).