Veritell

Developers

This page is for the Python package veritell-langchain. Use it to install the SDK from PyPI, configure your Veritell key, and evaluate LangChain outputs in Python.


Install from PyPI

pip install veritell-langchain

The published package targets Python 3.10+ and is designed for LangChain-based AI validation workflows.

Set your environment

# macOS/Linux (bash/zsh)
export VERITELL_API_KEY="<your_api_key>"

# Windows PowerShell
$env:VERITELL_API_KEY="<your_api_key>"

The SDK reads VERITELL_API_KEY automatically. You can also set VERITELL_API_BASE_URL and VERITELL_TIMEOUT when needed.
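For local development, the same variables can also be set from Python before the evaluator is constructed. This is plain standard-library code, not a Veritell API; the variable names come from the text above, and the timeout value is illustrative:

```python
import os

# Set the variables the SDK reads (values here are placeholders).
os.environ["VERITELL_API_KEY"] = "<your_api_key>"
os.environ["VERITELL_API_BASE_URL"] = "https://veritell.ai/api"
os.environ["VERITELL_TIMEOUT"] = "30"  # seconds; illustrative value

print(os.environ["VERITELL_API_BASE_URL"])
```

In production, prefer setting these in the shell or your deployment's secret store rather than hard-coding them in source.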

Quick start

from veritell_langchain import VeritellEvaluator

# Uses VERITELL_API_KEY automatically
evaluator = VeritellEvaluator(base_url="https://veritell.ai/api")

for event in evaluator.evaluate_stream(
    prompt="Explain the benefits of renewable energy.",
    primary_model="gpt-4o-mini",
    judges=["gpt-4o-mini", "grok-3-mini-latest"],
):
    print(event.event_type, event.data)

When you do not supply a model_output, evaluate_stream() generates the primary response with your selected primary_model and then evaluates it with one or more judge models.
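A common pattern is to fold the stream into per-event-type buckets so results can be inspected after the loop. The helper below is a sketch, not part of the SDK: it assumes only that each event exposes event_type and data attributes, as in the loop above, and the stand-in Event class exists purely to make the example self-contained:

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import Any

@dataclass
class Event:
    # Stand-in for the SDK's event objects; real events come from evaluate_stream().
    event_type: str
    data: Any

def collect_events(stream):
    """Group streamed events by event_type so results can be inspected after the loop."""
    buckets: dict[str, list[Any]] = defaultdict(list)
    for event in stream:
        buckets[event.event_type].append(event.data)
    return buckets

# Example with stand-in events; in practice, pass evaluator.evaluate_stream(...) instead.
events = [
    Event("token", "Renewable"),
    Event("token", " energy"),
    Event("verdict", {"score": 0.9}),
]
results = collect_events(events)
print(results["verdict"])  # → [{'score': 0.9}]
```

The event type names ("token", "verdict") are illustrative; use whatever event types your stream actually emits.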

Recommended LangChain integration pattern

import os
from veritell_langchain import VeritellEvaluator

api_key = os.getenv("VERITELL_API_KEY")
if not api_key:
    raise RuntimeError("VERITELL_API_KEY is not set")

evaluator = VeritellEvaluator(api_key=api_key, base_url="https://veritell.ai/api")

prompt = "Explain the benefits of renewable energy."
chain_output = "Renewable energy reduces emissions and improves energy security."

for event in evaluator.evaluate_stream(
    prompt=prompt,
    primary_model="gpt-4o-mini",
    judges=["gpt-4o-mini"],
    model_output=chain_output,
):
    print(event.event_type, event.data)

For production LangChain apps, run your chain or agent first, capture the exact model output, and pass that text as model_output. That keeps Veritell evaluating the real output your application generated.
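That run-then-evaluate flow can be wrapped in a small helper. This is a sketch, not part of the SDK: it assumes only that the chain exposes LangChain's invoke() method and that the evaluator exposes evaluate_stream() with the parameters shown above; the stub classes stand in for real objects so the example runs without network access:

```python
def run_and_evaluate(chain, evaluator, prompt, judges):
    """Run a LangChain chain, then evaluate its exact output with Veritell."""
    chain_output = chain.invoke(prompt)  # capture the real output your app generated
    events = list(evaluator.evaluate_stream(
        prompt=prompt,
        primary_model="gpt-4o-mini",  # illustrative model name from the examples above
        judges=judges,
        model_output=chain_output,    # evaluate the captured text, not a regenerated one
    ))
    return chain_output, events

# Stubs so the sketch is self-contained; replace with your real chain and evaluator.
class EchoChain:
    def invoke(self, prompt):
        return f"Answer to: {prompt}"

class FakeEvaluator:
    def evaluate_stream(self, **kwargs):
        yield ("verdict", {"model_output": kwargs["model_output"]})

output, events = run_and_evaluate(
    EchoChain(), FakeEvaluator(), "Explain solar power.", ["gpt-4o-mini"]
)
print(output)  # → Answer to: Explain solar power.
```

Passing the captured text as model_output guarantees the judges score the response your users actually saw, rather than a fresh generation that may differ.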

What this package is for
