Veritell
CLI available now · Cloud integrations in private beta · CI-ready · local-friendly · developer-first

Test AI behavior before deployment

Reliability testing for RAG apps, agents, and LLM systems — like pytest for AI.

Catch unsupported claims, retrieval failures, missing abstentions, and output regressions before they reach production. Use the CLI today, and request beta access only if you want to connect to Veritell Cloud.

Get started with the CLI · Read the docs · See how it works
Run tests locally or in CI · Block deploys with reliability thresholds · Public examples available now
Example test definition
assertions:
  - no_unsupported_claims
  - retrieval_contains_answer
  - should_abstain
  - json_schema_valid

veritell test --fail-below 80
Prevent bad AI behavior before deployment
Start with public examples, run tests locally or in CI, and gate releases when reliability drops.
The problem

AI systems fail silently

Traditional testing validates code. Monitoring tools show what happened after the fact. But neither tells you whether the answer relied on retrieved context, whether the model should have refused, or whether a prompt or model change introduced a regression that will hit real users.

Did the answer rely on retrieved context?
Should the model have refused?
Did a prompt or model change break behavior?
Will this regression reach production?
Get started with the CLI

Use the Veritell CLI today. Request beta access only for Veritell Cloud.

The CLI and public example suites are available now for local and CI workflows. Beta access is only required if you want to connect your tests to the managed Veritell Cloud environment.

1. Open the CLI repo

Start with the Veritell CLI source and installation guidance, then pair it with the public examples repo.

2. Run public examples

Use the examples repo to try grounded-answer, abstention, hallucination, and structured-output workflows.

3. Add Cloud access later

Request beta access only when you want to connect the CLI to the hosted Veritell Cloud experience.

CLI quick start
# Explore the public examples
veritell test ./examples/grounded-rag/grounded_rag.yaml

# Gate a release in CI
veritell test --fail-below 80
Open CLI repo · Browse public examples · Request Cloud beta
The solution

Write tests for AI behavior

Define expected properties with assertions, run them like normal tests, and gate releases on reliability.

1. Define assertions

Describe the behaviors you care about: grounded answers, abstentions, schema validity, retrieval quality, and unsupported claims.

2. Run veritell test

Execute the suite locally, in CI, or against demo fixtures. Review per-test assertion results and suite reliability.

3. Gate deployment

Set a minimum score and fail the build when regressions, unsupported claims, or retrieval misses appear.

Example workflow
veritell test
veritell test --fail-below 80
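The gate in the workflow above can be sketched in a few lines. This is an illustrative model only: the assertion results, the percentage-based scoring, and the `suite_reliability` and `gate` helpers are assumptions for the sketch, not Veritell's actual report format or scoring logic.

```python
# Hypothetical assertion results for one test run; Veritell's real
# report format and scoring may differ.
results = [
    {"assertion": "expected_doc_retrieved", "passed": True},
    {"assertion": "retrieval_contains_answer", "passed": True},
    {"assertion": "no_unsupported_claims", "passed": False},
]

def suite_reliability(results):
    """Score the suite as the percentage of passing assertions (0-100)."""
    passed = sum(1 for r in results if r["passed"])
    return round(100 * passed / len(results))

def gate(results, fail_below=80):
    """Return the exit code a CI gate would use: 0 to pass, 1 to block."""
    score = suite_reliability(results)
    print(f"Suite reliability: {score} (threshold: {fail_below})")
    return 0 if score >= fail_below else 1

exit_code = gate(results, fail_below=80)  # non-zero blocks the deploy step
```

A CI job would pass `exit_code` to `sys.exit()` so the pipeline fails when reliability drops below the threshold.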

What Veritell detects

Unsupported claims

Answers contain information that is not grounded in the provided context.

Retrieval misses

Relevant documents were not returned, or the retrieved context does not support the answer.

Missing abstentions

The model answers when it should refuse or say it lacks enough evidence.

Schema violations

Structured outputs fail JSON or contract expectations.

Behavioral regressions

Prompt, model, or retrieval changes alter expected behavior.
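To make the first category concrete, here is a deliberately naive sketch of an unsupported-claims check: flag answer sentences whose words barely overlap the retrieved context. The `unsupported_spans` helper and its lexical-overlap heuristic are illustrative assumptions, not how Veritell actually detects unsupported claims.

```python
import re

def unsupported_spans(answer: str, context: str, min_overlap: float = 0.8):
    """Toy heuristic: flag sentences with low word overlap vs. context."""
    context_words = set(re.findall(r"[a-z0-9]+", context.lower()))
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        words = set(re.findall(r"[a-z0-9]+", sentence.lower()))
        if not words:
            continue
        overlap = len(words & context_words) / len(words)
        if overlap < min_overlap:
            flagged.append(sentence)
    return flagged

context = "Full-time employees receive 20 PTO days per year."
answer = "Employees receive 20 PTO days. Contractors receive 15 PTO days."
print(unsupported_spans(answer, context))
```

A real grounding check needs semantic entailment rather than word overlap, but the shape is the same: every claim in the answer must be attributable to the retrieved context.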

Example output
Readable in CI logs · machine-readable via JSON
Running 1 AI test

FAILED hallucination_failure

Assertions
- expected_doc_retrieved: PASS
- retrieval_contains_answer: PASS
- no_unsupported_claims: FAIL

Unsupported spans
- "Contractors receive 15 PTO days"
  Rationale: not supported by retrieved policy text.
  Confidence: 0.86

Suite reliability: 62
Deployment blocked
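Since the output is also machine-readable via JSON, a CI step can consume it directly. The report shape below mirrors the log above but is an assumption for the sketch; the JSON actually emitted by Veritell may differ.

```python
import json

# Hypothetical machine-readable report mirroring the log output above.
report_json = """
{
  "tests": [
    {
      "name": "hallucination_failure",
      "assertions": [
        {"name": "expected_doc_retrieved", "passed": true},
        {"name": "retrieval_contains_answer", "passed": true},
        {"name": "no_unsupported_claims", "passed": false}
      ]
    }
  ],
  "suite_reliability": 62
}
"""

report = json.loads(report_json)
failed = [
    a["name"]
    for t in report["tests"]
    for a in t["assertions"]
    if not a["passed"]
]
blocked = report["suite_reliability"] < 80  # same threshold as --fail-below 80
print("Failed assertions:", failed)
print("Deployment blocked:", blocked)
```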

Use cases

RAG chatbots

Ensure answers stay grounded in retrieved documents and supported context.

AI agents

Validate task behavior, guardrails, and refusal logic before rollout.

Customer support assistants

Catch invented answers, policy violations, and unsafe responses before they reach users.

Structured-output systems

Guarantee JSON and schema stability for APIs, automations, and downstream integrations.
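For structured-output systems, the core idea behind an assertion like `json_schema_valid` can be sketched with the standard library alone. The `check_output` helper and the required-field map are illustrative; Veritell's actual schema validation is not shown here.

```python
import json

def check_output(raw: str, required: dict) -> list:
    """Return a list of problems; an empty list means the output passes."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    problems = []
    for field, expected_type in required.items():
        if field not in data:
            problems.append(f"missing field: {field}")
        elif not isinstance(data[field], expected_type):
            problems.append(f"wrong type for {field}")
    return problems

# Contract a downstream integration might expect from the model.
required = {"answer": str, "sources": list, "confidence": float}
good = '{"answer": "20 PTO days", "sources": ["policy.md"], "confidence": 0.92}'
bad = '{"answer": "20 PTO days", "confidence": "high"}'
print(check_output(good, required))
print(check_output(bad, required))
```

Running the check as a test on every prompt or model change turns a silent contract break into a failing build.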

CI/CD integration

Gate deployments on reliability

Run Veritell in your pipeline and block releases when reliability drops below your team’s threshold.

veritell test --fail-below 80
If reliability drops, the CLI exits non-zero and shows the threshold and score directly in the output.

Observability shows what AI did. Veritell tests what AI should do.

Monitoring helps you inspect production behavior after the fact. Veritell helps you define expected behavior before deployment — and enforce it in a repeatable workflow.

Explore examples · View CLI repo · API overview

CLI available now, Cloud access in private beta

The Veritell CLI and public example suites are available now so teams can try the workflow, evaluate fit, and understand how reliability gates work in practice. Beta access is only required for teams that want to connect to Veritell Cloud.

Works locally first

Public examples and offline-friendly workflows make it easy to evaluate the CLI without standing up extra infrastructure.

Cloud when you need it

Request beta access when you want managed Veritell Cloud connectivity, shared workflows, or hosted integrations.

FAQ

Is Veritell only for RAG?
No. It’s useful for RAG systems, agents, support assistants, and structured-output workflows.
Can I use Veritell in CI/CD?
Yes. The CLI is designed to run locally and in pipelines, with reliability thresholds for release gating.
Do I need cloud infrastructure to get value?
No. Public examples and local-friendly workflows are part of the developer-first experience.
Do I need beta access to use the CLI?
No. The CLI and public examples are available now. Beta access is only required for connecting to the managed Veritell Cloud environment.
Is Veritell only for failures?
No. Veritell helps teams validate both the happy path and edge-case behavior — grounded answers, abstentions, and structured output correctness.
Does Veritell replace observability?
No. Observability helps you inspect what happened. Veritell helps you test what should happen before deployment.

Request Veritell Cloud access

The CLI and public examples are available now. Request beta access only if you want to connect to the managed Veritell Cloud environment.

Veritell — Test AI Behavior Before Deployment