How to Run LLM Evals in Production

How to run LLM evals in production with gold sets, graders, trace checks, online signals, and release gates.

7 min read · By Alex Chernysh
LLM · Evals · Reliability · AI Engineering

The useful question is no longer whether a model seems good in a demo. The useful question is whether the system can survive a prompt change, a model swap, and a bad Tuesday without quietly getting worse.

[Figure: the eval loop]
The healthy pattern is simple: define success, measure it, ship carefully, then feed production failures back into the suite.

1. Evals are not a benchmark hobby

A lot of teams still talk about evals as if they were an optional research habit. That may have been tolerable when the system was one prompt and one model.

It is not tolerable once the product has:

  • retrieval
  • tools
  • policies
  • response formats
  • multiple failure cases
  • people who will notice when it breaks

OpenAI's current evaluation guidance is explicit about this: evals are how you make sense of non-deterministic systems in production. Anthropic makes the same point from a different angle: once the system acts over several turns, you need a deliberate way to define success, otherwise every regression turns into anecdote.

That is the real role of evals. They turn "it feels worse" into something a team can argue about productively.

2. Start with one small trusted set

The most useful eval dataset is usually smaller than people expect and more carefully reviewed than they want.

I like to split evaluation data into two tiers.

Trusted set

This is the set you are willing to use for release decisions.

It should be:

  • hand-audited
  • representative of real work
  • small enough to maintain
  • clear enough that graders and humans usually agree

Monitoring set

This set is wider, noisier, and closer to production traffic.

It is useful for:

  • drift detection
  • finding new failure classes
  • catching prompt or routing side effects
  • estimating what changed in the wild

It is not automatically clean enough for hard release gates.

That distinction matters because teams often pour everything into one giant eval bucket and then wonder why half the suite feels unreliable. The dataset is not one thing. It has jobs.
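The two tiers can be made concrete with a minimal in-memory representation. This is a sketch, not a framework: the names `EvalCase` and `Tier` are illustrative, and in practice the dataset would live in versioned storage.

```python
from dataclasses import dataclass
from enum import Enum

class Tier(Enum):
    TRUSTED = "trusted"        # hand-audited, eligible to gate releases
    MONITORING = "monitoring"  # wider and noisier, tracks drift

@dataclass
class EvalCase:
    case_id: str
    prompt: str
    expected: str
    tier: Tier

def release_set(cases: list[EvalCase]) -> list[EvalCase]:
    """Only trusted cases feed hard release gates."""
    return [c for c in cases if c.tier is Tier.TRUSTED]
```

Keeping the tier on the case itself makes it hard to accidentally gate a release on monitoring data.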

3. Grade the thing that actually failed

A single score is tidy. It is also often useless.

The better pattern is task-specific grading.

For a RAG or legal system, for example, I want different signals for:

  • answer correctness
  • support quality
  • citation or provenance integrity
  • abstention behavior
  • format compliance
  • latency
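
Several of these signals can be graded deterministically. A hedged sketch for two of them, assuming an answer format that is JSON with an `answer` field and a `[doc:N]` citation convention (both assumptions, not a standard):

```python
import json
import re

def grade_format(answer: str) -> bool:
    """Format compliance: answer must be valid JSON with an 'answer' field."""
    try:
        return "answer" in json.loads(answer)
    except json.JSONDecodeError:
        return False

def grade_citations(answer_text: str, source_ids: set[str]) -> bool:
    """Citation integrity: every [doc:N] marker must point at a retrieved source."""
    cited = set(re.findall(r"\[doc:(\w+)\]", answer_text))
    return bool(cited) and cited <= source_ids
```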

For an agent, OpenAI's trace grading and Anthropic's recent eval guidance point in the same direction: do not grade only the final answer if the behavior inside the trace matters. Grade the trace when the trace explains success or failure.

That means asking questions like:

  • did the agent choose the right tool?
  • did it call too many tools?
  • did it cross an approval boundary incorrectly?
  • did it retrieve the right evidence and still misuse it?

If the system failed in the middle of the loop, a final-answer score alone will hide the reason.
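Those trace questions translate directly into checks. A minimal sketch, assuming each trace step is a dict like `{"type": "tool_call", "tool": "search"}` with optional `requires_approval` and `approved` fields; the field names are illustrative:

```python
def grade_trace(trace: list[dict], expected_tool: str,
                max_tool_calls: int = 5) -> dict:
    """Grade the trace, not just the final answer."""
    tool_calls = [s for s in trace if s.get("type") == "tool_call"]
    return {
        "right_tool_used": any(s["tool"] == expected_tool for s in tool_calls),
        "tool_budget_ok": len(tool_calls) <= max_tool_calls,
        # every step that requires approval must actually have been approved
        "approvals_ok": all(s.get("approved", False)
                            for s in tool_calls if s.get("requires_approval")),
    }
```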

4. LLM judges are useful. They are not judges in the biblical sense.

Judge models are useful when the task is open-ended and the grading criteria are legible enough to write down.

They are weaker when:

  • the rubric is underspecified
  • the answer format should have been deterministic in the first place
  • the task is sensitive to one factual slip hidden inside otherwise good prose

The practical rule is simple.

Use code-based checks where you can. Use model-based grading where you must. Calibrate the second against the first and against periodic human review.
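Calibration can be as plain as an agreement rate over a shared sample. In this sketch `judge_labels` would come from an LLM judge and `human_labels` from periodic review; here they are just booleans:

```python
def agreement_rate(judge_labels: list[bool], human_labels: list[bool]) -> float:
    """Fraction of cases where the model judge matches the human label."""
    if len(judge_labels) != len(human_labels) or not judge_labels:
        raise ValueError("need equal-length, non-empty label lists")
    matches = sum(j == h for j, h in zip(judge_labels, human_labels))
    return matches / len(judge_labels)
```

If agreement drops below whatever bar the team sets, tighten the rubric before trusting the judge's scores again.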

Anthropic's eval writing makes this point clearly: the value of automated evals compounds, but only if the task definition is concrete. Otherwise you build automation around ambiguity and then call it rigor.

5. Production signals belong in the eval conversation

A lot of teams measure answer quality and then bolt operations on later. That split looks clean on a slide. In a real system it is fake.

A model change that preserves headline answer quality but doubles latency, suppresses abstention, or increases tool churn has still changed the product.

The production eval pack should therefore track more than semantic quality:

  • time to first token
  • end-to-end latency
  • token cost
  • refusal rate
  • escalation rate
  • tool-call count
  • trace length
  • retry rate

You do not need to worship every metric. You do need to know what changed.
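Knowing what changed can be one small function: compare a candidate run against a baseline across whatever operational metrics you record. The metric names here are placeholders:

```python
def metric_deltas(baseline: dict, candidate: dict) -> dict:
    """Relative change per metric; positive means the candidate went up."""
    return {
        name: (candidate[name] - baseline[name]) / baseline[name]
        for name in baseline
        if name in candidate and baseline[name]
    }
```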

6. Release gates should be boring

The best release gate is not philosophically impressive. It is legible.

A practical gate often looks like this:

  1. hard-fail if trusted-set accuracy drops below threshold
  2. hard-fail if format compliance breaks
  3. hard-fail if high-risk refusal cases regress
  4. warn if latency or cost crosses budget
  5. inspect trace deltas when agent behavior shifts materially
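
The gate above can be sketched as one boring function. The thresholds are illustrative, not prescribed:

```python
def release_gate(results: dict) -> dict:
    """Split failures into hard-fail (block the release) and warn (investigate)."""
    hard_fail, warn = [], []
    if results["trusted_accuracy"] < 0.90:
        hard_fail.append("trusted-set accuracy below threshold")
    if results["format_compliance"] < 1.0:
        hard_fail.append("format compliance broken")
    if results["high_risk_refusal_regressions"] > 0:
        hard_fail.append("high-risk refusal cases regressed")
    if results["latency_delta"] > 0.20 or results["cost_delta"] > 0.20:
        warn.append("latency or cost over budget")
    return {"ship": not hard_fail, "hard_fail": hard_fail, "warn": warn}
```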

That is enough to keep a team honest without building a religion around dashboards.

The mistake is to treat every model or prompt change as a fresh act of intuition. Evals are there so you can stop arguing from memory.

7. Production incidents should feed the suite quickly

The fastest way to build a stale eval program is to treat the dataset as a museum.

A healthier loop is:

  • user or monitoring finds a failure
  • the failure becomes an eval case
  • the fixed system must pass it forever after

This is the compounding part. A good eval harness gets more valuable with every embarrassing bug it absorbs.
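The absorb-and-never-regress loop can be sketched in a few lines. Here `suite` is a plain list and `run_model` a callable; in practice the suite would be a versioned dataset:

```python
def absorb_incident(suite: list[dict], incident: dict) -> None:
    """Turn a production failure into a permanent eval case."""
    suite.append({
        "case_id": f"incident-{incident['id']}",
        "prompt": incident["input"],
        "must_not": incident["bad_output"],  # the observed failure output
    })

def regression_check(suite: list[dict], run_model) -> list[str]:
    """Return case ids where the system reproduces a past failure."""
    return [c["case_id"] for c in suite
            if run_model(c["prompt"]) == c["must_not"]]
```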

That sounds harsh. It is actually efficient.

8. The evaluation program needs an owner

A shared dashboard is not ownership.

Someone needs to decide:

  • which evals are trusted enough to gate releases
  • which failures are signal versus noise
  • when a new task belongs in the suite
  • who signs off on rubric changes

Anthropic's recent write-up mentions dedicated eval ownership plus domain experts contributing tasks. That matches what I see in practice. Central infrastructure matters, but the people closest to the product define what success means.

9. A useful first version is small

If I had to stand up evals for a production system tomorrow morning, I would do this first:

  1. collect 30-50 trusted examples
  2. separate them by failure class
  3. define a code-based or model-based grader for each class
  4. track latency and refusal next to quality
  5. block releases on the small set before trying to boil the ocean

The first eval system does not need to feel impressive. It needs to catch the failures you are actually shipping.
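The whole first version fits in a harness this small. A sketch, assuming one grader callable per failure class; the shape of `cases` and `graders` is an assumption, not a framework API:

```python
def run_suite(cases: list[dict], graders: dict, threshold: float = 0.9) -> dict:
    """cases: [{'failure_class': str, ...}]; graders: {failure_class: callable -> bool}.
    Block the release when the aggregate pass rate drops below threshold."""
    passed = sum(graders[c["failure_class"]](c) for c in cases)
    rate = passed / len(cases)
    return {"pass_rate": rate, "ship": rate >= threshold}
```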

That is enough.
