__init__.py — langchain Source File

Architecture documentation for __init__.py, a Python file in the langchain codebase. 14 imports, 0 dependents.

Entity Profile

Dependency Diagram

graph LR
  b7f4c10f_5c04_34e3_15aa_cc6b9dd7a819["__init__.py"]
  37753809_e65b_154a_063e_096388a7be9e["langchain_classic.evaluation.agents"]
  b7f4c10f_5c04_34e3_15aa_cc6b9dd7a819 --> 37753809_e65b_154a_063e_096388a7be9e
  457fc6e7_b89f_a73b_c836_2b84fa13a203["langchain_classic.evaluation.comparison"]
  b7f4c10f_5c04_34e3_15aa_cc6b9dd7a819 --> 457fc6e7_b89f_a73b_c836_2b84fa13a203
  45757b88_f7f4_79da_e358_6f146873b7c2["langchain_classic.evaluation.criteria"]
  b7f4c10f_5c04_34e3_15aa_cc6b9dd7a819 --> 45757b88_f7f4_79da_e358_6f146873b7c2
  11152143_939d_f18c_9466_cbcb3fa40dfe["langchain_classic.evaluation.embedding_distance"]
  b7f4c10f_5c04_34e3_15aa_cc6b9dd7a819 --> 11152143_939d_f18c_9466_cbcb3fa40dfe
  c35669db_cad4_032e_ae27_8fb8e4a9595c["langchain_classic.evaluation.exact_match.base"]
  b7f4c10f_5c04_34e3_15aa_cc6b9dd7a819 --> c35669db_cad4_032e_ae27_8fb8e4a9595c
  aa747152_2c0e_5408_796a_68aba8e883a9["langchain_classic.evaluation.loading"]
  b7f4c10f_5c04_34e3_15aa_cc6b9dd7a819 --> aa747152_2c0e_5408_796a_68aba8e883a9
  c9d7d0d4_53ec_4047_8b32_2367fdac23d9["langchain_classic.evaluation.parsing.base"]
  b7f4c10f_5c04_34e3_15aa_cc6b9dd7a819 --> c9d7d0d4_53ec_4047_8b32_2367fdac23d9
  998b5174_efcb_2ac1_9fe7_c13b1e8e988a["langchain_classic.evaluation.parsing.json_distance"]
  b7f4c10f_5c04_34e3_15aa_cc6b9dd7a819 --> 998b5174_efcb_2ac1_9fe7_c13b1e8e988a
  3ebdaf90_f47a_e0a0_d750_6bfde3220c5c["langchain_classic.evaluation.parsing.json_schema"]
  b7f4c10f_5c04_34e3_15aa_cc6b9dd7a819 --> 3ebdaf90_f47a_e0a0_d750_6bfde3220c5c
  4e4ddbfc_2f52_b277_2146_edba0a4f2fc2["langchain_classic.evaluation.qa"]
  b7f4c10f_5c04_34e3_15aa_cc6b9dd7a819 --> 4e4ddbfc_2f52_b277_2146_edba0a4f2fc2
  304efcf5_6e26_e616_2e76_73435d90073b["langchain_classic.evaluation.regex_match.base"]
  b7f4c10f_5c04_34e3_15aa_cc6b9dd7a819 --> 304efcf5_6e26_e616_2e76_73435d90073b
  37291248_07a6_a05c_821a_71cc0592429f["langchain_classic.evaluation.schema"]
  b7f4c10f_5c04_34e3_15aa_cc6b9dd7a819 --> 37291248_07a6_a05c_821a_71cc0592429f
  f1d88fe4_bd5e_0b0d_48dc_4ca1e7d21a5d["langchain_classic.evaluation.scoring"]
  b7f4c10f_5c04_34e3_15aa_cc6b9dd7a819 --> f1d88fe4_bd5e_0b0d_48dc_4ca1e7d21a5d
  e3cedb52_705d_f731_a4a1_6d1ebd8e94d0["langchain_classic.evaluation.string_distance"]
  b7f4c10f_5c04_34e3_15aa_cc6b9dd7a819 --> e3cedb52_705d_f731_a4a1_6d1ebd8e94d0
  style b7f4c10f_5c04_34e3_15aa_cc6b9dd7a819 fill:#6366f1,stroke:#818cf8,color:#fff

Source Code

"""**Evaluation** chains for grading LLM and Chain outputs.

This module contains off-the-shelf evaluation chains for grading the output of
LangChain primitives such as language models and chains.

**Loading an evaluator**

To load an evaluator, you can use the `load_evaluators <langchain.evaluation.loading.load_evaluators>` or
`load_evaluator <langchain.evaluation.loading.load_evaluator>` functions with the
names of the evaluators to load.

```python
from langchain_classic.evaluation import load_evaluator

evaluator = load_evaluator("qa")
evaluator.evaluate_strings(
    prediction="We sold more than 40,000 units last week",
    input="How many units did we sell last week?",
    reference="We sold 32,378 units",
)
```

The evaluator must be one of `EvaluatorType <langchain.evaluation.schema.EvaluatorType>`.
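As an illustrative sketch only (not langchain's actual implementation), a `load_evaluator`-style function can dispatch on an enum of evaluator names. The enum members, `_REGISTRY` mapping, and `load_evaluator_sketch` helper below are all hypothetical:

```python
from enum import Enum


class EvaluatorTypeSketch(str, Enum):
    """Toy stand-in for the real EvaluatorType enum (illustrative only)."""

    QA = "qa"
    STRING_DISTANCE = "string_distance"


# Hypothetical registry mapping evaluator names to factory callables,
# sketching how a load_evaluator-style dispatcher could resolve a name.
_REGISTRY = {
    EvaluatorTypeSketch.QA: lambda: "qa-evaluator",
    EvaluatorTypeSketch.STRING_DISTANCE: lambda: "string-distance-evaluator",
}


def load_evaluator_sketch(evaluator):
    """Resolve a string name (or enum member) to an evaluator instance."""
    try:
        key = EvaluatorTypeSketch(evaluator)  # accepts "qa" or the enum member
    except ValueError:
        raise ValueError(f"Unknown evaluator: {evaluator!r}") from None
    return _REGISTRY[key]()
```

Because the enum subclasses `str`, both the bare string `"qa"` and `EvaluatorTypeSketch.QA` resolve to the same registry entry.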

**Datasets**

To load one of the LangChain HuggingFace datasets, you can use the `load_dataset <langchain.evaluation.loading.load_dataset>` function with the
name of the dataset to load.

```python
from langchain_classic.evaluation import load_dataset

ds = load_dataset("llm-math")
```

**Some common use cases for evaluation include:**

- Grading the accuracy of a response against ground truth answers: `QAEvalChain <langchain.evaluation.qa.eval_chain.QAEvalChain>`
- Comparing the output of two models: `PairwiseStringEvalChain <langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain>` or `LabeledPairwiseStringEvalChain <langchain.evaluation.comparison.eval_chain.LabeledPairwiseStringEvalChain>` when there is additionally a reference label.
- Judging the efficacy of an agent's tool usage: `TrajectoryEvalChain <langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain>`
- Checking whether an output complies with a set of criteria: `CriteriaEvalChain <langchain.evaluation.criteria.eval_chain.CriteriaEvalChain>` or `LabeledCriteriaEvalChain <langchain.evaluation.criteria.eval_chain.LabeledCriteriaEvalChain>` when there is additionally a reference label.
- Computing semantic difference between a prediction and reference: `EmbeddingDistanceEvalChain <langchain.evaluation.embedding_distance.base.EmbeddingDistanceEvalChain>` or between two predictions: `PairwiseEmbeddingDistanceEvalChain <langchain.evaluation.embedding_distance.base.PairwiseEmbeddingDistanceEvalChain>`
- Measuring the string distance between a prediction and reference: `StringDistanceEvalChain <langchain.evaluation.string_distance.base.StringDistanceEvalChain>`, or between two predictions: `PairwiseStringDistanceEvalChain <langchain.evaluation.string_distance.base.PairwiseStringDistanceEvalChain>`
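For intuition, the string-distance case can be sketched in plain Python with a classic Levenshtein edit distance normalized to [0, 1]. The `levenshtein` and `evaluate_strings` helpers below are illustrative stand-ins, not the actual `StringDistanceEvalChain` implementation:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(
                prev[j] + 1,        # deletion
                cur[j - 1] + 1,     # insertion
                prev[j - 1] + (ca != cb),  # substitution (free if equal)
            ))
        prev = cur
    return prev[-1]


def evaluate_strings(prediction: str, reference: str) -> dict:
    """Return a normalized distance in [0, 1]; 0.0 means an exact match."""
    denom = max(len(prediction), len(reference)) or 1
    return {"score": levenshtein(prediction, reference) / denom}
```

As with the real string-distance evaluators, a lower score indicates a closer match.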

**Low-level API**

These evaluators implement one of the following interfaces:

- `StringEvaluator <langchain.evaluation.schema.StringEvaluator>`: Evaluate a prediction string against a reference label and/or input context.
- `PairwiseStringEvaluator <langchain.evaluation.schema.PairwiseStringEvaluator>`: Evaluate two prediction strings against each other. Useful for scoring preferences, measuring similarity between two chains or LLM agents, or comparing outputs on similar inputs.
- `AgentTrajectoryEvaluator <langchain.evaluation.schema.AgentTrajectoryEvaluator>`: Evaluate the full sequence of actions taken by an agent.

These interfaces enable easier composability and usage within a higher-level evaluation framework.
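The string-evaluator pattern can be sketched with a minimal abstract base class; the class names and simplified method signature below are illustrations of the pattern, not the real schema classes:

```python
from abc import ABC, abstractmethod
from typing import Optional


class StringEvaluatorSketch(ABC):
    """Minimal stand-in for a StringEvaluator-style interface (illustrative)."""

    @abstractmethod
    def _evaluate_strings(
        self,
        *,
        prediction: str,
        reference: Optional[str] = None,
        input: Optional[str] = None,
    ) -> dict:
        """Produce a result dict, conventionally with a 'score' key."""

    def evaluate_strings(
        self,
        *,
        prediction: str,
        reference: Optional[str] = None,
        input: Optional[str] = None,
    ) -> dict:
        # Public entry point delegating to the subclass implementation.
        return self._evaluate_strings(
            prediction=prediction, reference=reference, input=input
        )


class ExactMatchSketch(StringEvaluatorSketch):
    """Grade a prediction by exact equality with the reference."""

    def _evaluate_strings(self, *, prediction, reference=None, input=None):
        return {"score": 1.0 if prediction == reference else 0.0}
```

Because callers depend only on `evaluate_strings`, any evaluator following the interface can be swapped into a larger evaluation pipeline.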

"""  # noqa: E501

from langchain_classic.evaluation.agents import TrajectoryEvalChain
from langchain_classic.evaluation.comparison import (
    LabeledPairwiseStringEvalChain,
    PairwiseStringEvalChain,
# ... (78 more lines)

Dependencies

  • langchain_classic.evaluation.agents
  • langchain_classic.evaluation.comparison
  • langchain_classic.evaluation.criteria
  • langchain_classic.evaluation.embedding_distance
  • langchain_classic.evaluation.exact_match.base
  • langchain_classic.evaluation.loading
  • langchain_classic.evaluation.parsing.base
  • langchain_classic.evaluation.parsing.json_distance
  • langchain_classic.evaluation.parsing.json_schema
  • langchain_classic.evaluation.qa
  • langchain_classic.evaluation.regex_match.base
  • langchain_classic.evaluation.schema
  • langchain_classic.evaluation.scoring
  • langchain_classic.evaluation.string_distance

Frequently Asked Questions

What does __init__.py do?
__init__.py is the package initializer for langchain_classic.evaluation, a Python source file in the langchain codebase that re-exports the module's evaluation chains and interfaces.
What does __init__.py depend on?
__init__.py imports 14 module(s): langchain_classic.evaluation.agents, langchain_classic.evaluation.comparison, langchain_classic.evaluation.criteria, langchain_classic.evaluation.embedding_distance, langchain_classic.evaluation.exact_match.base, langchain_classic.evaluation.loading, langchain_classic.evaluation.parsing.base, langchain_classic.evaluation.parsing.json_distance, and 6 more.
Where is __init__.py in the architecture?
__init__.py is located at libs/langchain/langchain_classic/evaluation/__init__.py (directory: libs/langchain/langchain_classic/evaluation).
