Evidence and confidence
Atlas does not treat all information equally. A fact that came from SAP’s own structured catalog is more reliable than a fact inferred from the catalog by ontology reasoning, which is in turn more reliable than a fact found only as a phrase match in a blog post. Atlas expresses that difference as a numeric confidence and groups the numbers into four named tiers. Every claim the graph holds carries one.
What the tiers mean
Tier 1 is first-party and structured: the Simplification Item Catalog, the released-CDS-view list, a customer’s own system extract. Atlas treats these as authoritative and is willing to emit artifacts based on them without a human pause.
Tier 2 is first-party and textual: api.sap.com entries, SAP help portal canonical pages. Structured enough to trust, not machine-readable enough to treat as tier 1. Atlas emits against tier 2 and records the provenance, so a reviewer can see where the claim came from.
Tier 3 is inferred: a claim that follows from tier-1 or tier-2 facts through the ontology Atlas reasons over. Useful when explicit evidence is missing, but far enough removed from primary sources that Atlas checks whether the downstream gate allows tier-3 inputs before emitting.
Tier 4 is snippet-match or community content. Atlas keeps tier-4 facts in the graph because they are useful for ranking and for generating review candidates, but it will not emit against them. They exist to point a human at where to look, not to become authority.
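The tier ordering above can be pinned down as a small lookup. A minimal sketch, assuming illustrative numeric scores: the only constraints the text actually gives are the ordering and that the 0.60 emit threshold falls between tier 3 and tier 2.

```python
from enum import IntEnum

class Tier(IntEnum):
    """Evidence tiers; a lower number means more authoritative."""
    FIRST_PARTY_STRUCTURED = 1   # Simplification Item Catalog, system extracts
    FIRST_PARTY_TEXTUAL = 2      # api.sap.com, help portal canonical pages
    INFERRED = 3                 # follows from tier-1/2 facts via the ontology
    SNIPPET_MATCH = 4            # community content; never emitted against

# Hypothetical confidence scores, consistent with the text's ordering
# and with 0.60 sitting between tier 3 and tier 2.
TIER_CONFIDENCE = {
    Tier.FIRST_PARTY_STRUCTURED: 0.95,
    Tier.FIRST_PARTY_TEXTUAL: 0.75,
    Tier.INFERRED: 0.55,
    Tier.SNIPPET_MATCH: 0.30,
}
```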
The threshold
Atlas draws one line across the tiers and calls it the emit threshold. Above 0.60, Atlas will generate and ship without asking. Below it, Atlas stops and surfaces the claim for human review. That number deliberately sits above tier 3 and below tier 2, which means inferred facts land just under the bar. That is the point — inference is a useful signal, but it is not strong enough on its own to let Atlas skip the human.
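The gate itself reduces to one comparison. A minimal sketch, assuming the check is strict at the boundary (the text says “above 0.60” ships and “below it” pauses, without specifying behavior exactly at the bar); the function name is illustrative.

```python
EMIT_THRESHOLD = 0.60  # the one line Atlas draws across the tiers

def emit_decision(confidence: float) -> str:
    """One gate for every claim: ship it, or hand it to a human."""
    if confidence > EMIT_THRESHOLD:
        return "emit"                # generate and ship without asking
    return "surface_for_review"      # stop and surface the claim
```

An inferred (tier-3) claim lands just under the bar by design, so it always takes the review path until stronger evidence arrives.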
When an inferred fact gains a supporting first-party source, it is automatically re-scored and crosses the threshold without a replan. When a first-party source is revised and contradicts an emitted claim, the claim’s confidence drops and Atlas flags it on the plan’s evidence node the next time the plan is opened.
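The automatic re-score can be sketched as a policy where a claim takes the confidence of its strongest supporting source. This max-over-sources rule is an assumption, since the text does not give the scoring formula.

```python
def rescore(source_confidences: list[float]) -> float:
    """A claim scores as its strongest piece of evidence (assumed policy)."""
    return max(source_confidences, default=0.0)

EMIT_THRESHOLD = 0.60

# An inferred claim alone sits just under the bar ...
evidence = [0.55]                     # tier-3 inference only
assert rescore(evidence) < EMIT_THRESHOLD

# ... and crosses it automatically, with no replan, once a
# first-party source is ingested in support.
evidence.append(0.75)                 # tier-2 first-party text
assert rescore(evidence) > EMIT_THRESHOLD
```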
Why provenance is first-class
Every triple Atlas writes to the graph is accompanied by a sourcedFrom triple, with a version and a timestamp. The practical consequence is that Atlas can always answer “why did you say that?” with the specific document, the specific section, and the moment in time the claim was ingested. That answer is the evidence trail your release manager will inspect and that your auditor will export.
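The claim-plus-provenance pairing can be sketched as a record. The field names and the example triple here are illustrative assumptions, not Atlas’s actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Provenance:
    """The sourcedFrom side of a claim: enough to answer "why did you
    say that?" with a document, a section, and a moment in time."""
    source_url: str
    section: str
    version: str
    ingested_at: str   # ISO-8601 timestamp of ingestion

# Hypothetical claim triple and its attached provenance.
claim = ("TransactionX", "replacedBy", "AppY")
evidence = Provenance(
    source_url="https://help.sap.com/example",   # illustrative URL
    section="2.1",
    version="2310",
    ingested_at="2024-03-01T12:00:00Z",
)
```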
The four parts always travel together: if you can point at the claim, the source, the tier, and the actor, you have the sentence you need for a review. That is the shape Atlas standardized around, and it is why the evidence drawer in Studio lays out a claim in exactly these four columns; a reviewer reads it top-to-bottom in one pass.
The habit of keeping provenance attached to every claim is also what lets Atlas do the re-scoring described above without having to rebuild the graph. When a source refreshes, Atlas walks the edges that point at it and recomputes the tier of everything downstream — which is exactly the kind of job relational schemas would struggle with, and exactly the kind the graph was chosen for.
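That walk can be sketched with a reverse index from a source to the claims that cite it. The data shapes and the single-hop recompute are assumptions; a real implementation would combine every source a claim still has, not just the refreshed one.

```python
from collections import defaultdict

# Stand-in for the graph's sourcedFrom edges: source id -> claim ids.
cited_by: dict[str, list[str]] = defaultdict(list)
claim_confidence: dict[str, float] = {}

def on_source_refresh(source_id: str, new_confidence: float) -> list[str]:
    """Walk the edges that point at a refreshed source and re-score
    everything downstream, without rebuilding the graph."""
    touched = []
    for claim_id in cited_by[source_id]:
        claim_confidence[claim_id] = new_confidence
        touched.append(claim_id)
    return touched
```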
When to open the evidence node
Any time a gate fires with a tier label you did not expect. Any time Atlas refuses to emit a piece of code and you want to know why. Any time a reviewer asks “where did this come from?” Open the Investigate view’s evidence stream, click into the row, and expand the provenance chain: the full chain is rendered inline, with the source URL, the content hash, and the fetch timestamp anyone can verify against.
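The “anyone can verify” step amounts to refetching the source and recomputing the hash. A minimal sketch, assuming SHA-256; the text does not name the algorithm.

```python
import hashlib

def verify_content_hash(fetched_content: bytes, recorded_hash: str) -> bool:
    """Recompute the hash of the fetched source and compare it to the
    hash recorded in the provenance chain (algorithm assumed: SHA-256)."""
    return hashlib.sha256(fetched_content).hexdigest() == recorded_hash
```

If the hashes differ, the source has changed since ingestion, which is exactly the situation where Atlas re-scores the downstream claims.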