VDM runtime trace

Atlas plans against the Virtual Data Model. To understand why Atlas’s plans are the shape they are, it helps to watch one end-to-end execution — what happens between the moment a user taps a Fiori tile and the moment the screen has data. Seven stages, and only one of them is surprising.

1. User taps a Fiori tile: OData request hits the gateway
2. OData resolves to CDS: @OData binding → consumption view
3. CDS view tree expanded: associations collapsed into the query graph
4. DCL authorizations applied: row-level filters injected as SQL WHERE
5. HANA executes pushdown: one vectorized column-store scan for the full stack
6. Result + metadata returned: @UI / @Semantics annotations travel with the rows
7. Fiori renders the screen: generative UI from the annotations

Runtime trace — every tap is one SQL statement in HANA.

The first half of the flow is standard application-server plumbing. A request lands at the gateway. The OData binding resolves the endpoint to the specific consumption view. The CDS engine expands the view tree — consumption on top, composites below, basics underneath — into a single query graph. The DCL authorization layer injects row-level filters so the user only sees rows they are allowed to see.
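Stage four — the DCL layer — can be sketched as a CDS access control. This is an illustrative fragment, not from the source: the role name and the protected view `ZC_SalesOrder` are invented, though `V_VBAK_VKO` is the standard sales-organization authorization object.

```cds
-- Hypothetical access control for a hypothetical consumption view.
-- At runtime the engine rewrites this aspect into a SQL WHERE clause,
-- so the row-level filter executes inside HANA, never in ABAP.
@EndUserText.label: 'Restrict sales orders by sales organization'
@MappingRole: true
define role ZC_SALESORDER_DCL {
  grant select on ZC_SalesOrder
    where ( SalesOrganization ) =
      aspect pfcg_auth( V_VBAK_VKO, VKORG, ACTVT = '03' );
}
```

The point of the aspect syntax is exactly the stage-four claim: authorization is not a post-filter in the application server but an injected predicate in the same query graph.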

None of that is novel. The query graph is still a plan, not execution. Nothing has hit the database yet.

Stage five is where S/4HANA looks different from every generation before it. The CDS compiler translates the entire expanded tree — consumption view, composites, basics, N associations collapsed — into one SQL statement and sends it to HANA’s column store, which executes the whole thing in a single vectorized scan. Nothing is pulled into ABAP memory until the result set is small enough to stream.
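A minimal sketch of what "collapsed into one statement" means, with invented view names. The association below is never executed as a separate fetch; the path expression in the select list compiles into a join inside the single SQL statement HANA receives.

```cds
-- Hypothetical consumption view; the association to the customer
-- basic view is resolved inside the same query graph.
@Access.control.authorizationCheck: #CHECK
define view entity ZC_SalesOrderItem
  as select from ZI_SalesOrderItem as Item
  association [0..1] to ZI_Customer as _Customer
    on Item.CustomerId = _Customer.CustomerId
{
  key Item.SalesOrder,
  key Item.SalesOrderItem,
      Item.NetAmount,
      _Customer.CustomerName  -- path expression → a JOIN, not a second query
}
```

A tap that reads this view produces roughly one `SELECT … LEFT OUTER JOIN …` against the column store, not N round trips per association.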

The classic ECC pattern was the opposite: the database was a dumb row store and ABAP did the joins, filters, and aggregations in application memory. The costs were predictable — row-store scans, network round trips, ABAP memory pressure. S/4HANA flips it. The database is the engine; ABAP is the orchestrator. The official SAP rule of thumb: "do as much as you can in the database to get the best performance."

This inversion is the whole reason CDS pushdown matters. A CDS view that the compiler cannot fully push down has to stream rows through ABAP and filter or aggregate them there, which reintroduces every cost S/4HANA was designed to eliminate.
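The difference can be made concrete in ABAP. Both fragments below answer the same question; the table and field names are illustrative. The first streams every row into application memory and filters there — the pattern that blocks pushdown; the second pushes the filter and aggregation into the statement, so HANA does the work.

```abap
* Anti-pattern: pull all rows, then filter and aggregate in ABAP memory.
DATA lv_total TYPE p LENGTH 16 DECIMALS 2.
SELECT * FROM zc_salesorderitem INTO TABLE @DATA(lt_all).
LOOP AT lt_all INTO DATA(ls_item) WHERE currency = 'EUR'.
  lv_total = lv_total + ls_item-netamount.
ENDLOOP.

* Pushdown: one statement; HANA filters and aggregates in the column store.
SELECT SUM( netamount ) AS total
  FROM zc_salesorderitem
  WHERE currency = 'EUR'
  INTO @DATA(lv_total2).
```

The first form reintroduces every cost the section names: full scans, round trips, memory pressure. The second is the shape a fully push-down-able plan compiles to.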

Same query, two executions:

WITHOUT PUSHDOWN (G-089 fires)
- ABAP application server: filters, joins, and aggregates in memory after pulling rows row by row
- HANA, treated as a row store: returns full scans, not filtered results; many round trips, low cache hit
- Latency: hundreds of ms · memory pressure: high

WITH PUSHDOWN (Atlas ships this)
- ABAP orchestrator: issues one SQL statement, streams the result
- HANA column store: filter + join + aggregate in one vectorized scan; metadata travels back with the rows
- Latency: tens of ms · memory pressure: negligible

Same query, two executions — pushdown is the reason S/4HANA is fast, and the reason Atlas refuses plans that block it.

Stage six is subtle. The column store returns rows plus the view’s annotations: @Semantics.amount.currencyCode, @UI.lineItem, @ObjectModel.text.association. The metadata travels with the data. That is why Fiori can render a generated view without a line of custom code: the annotations tell the renderer how to format the amounts, where to put the columns, which texts to resolve.

Stage seven is the screen. The Fiori client reads the metadata, arranges the columns, resolves the texts, and paints.

Two rules follow directly.

First, every view Atlas writes has to be fully push-down-able. Gate G-089 fires if a plan would force ABAP to stream rows and filter them there. When it does, Atlas asks you either to adjust the plan so the filter can be expressed in CDS (preferred) or to override with a documented reason (acceptable, but the evidence will carry the performance exposure forward).

Second, Atlas writes the annotations explicitly. Every amount gets its currency annotation. Every key gets its text association. Every cube gets @Analytics.dataCategory: #CUBE. The cost of forgetting is exactly what stage six’s metadata flow implies — a consumer somewhere downstream mis-renders the data silently. The cost of writing the annotations correctly is that Fiori, the analytic engine, Datasphere, and Atlas’s own evidence tooling all read the same view and agree on what it means.
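What "writes the annotations explicitly" looks like in practice — a hedged sketch with invented view and field names, using the annotations the paragraph above lists:

```cds
@Analytics.dataCategory: #CUBE      -- every cube declares its data category
define view entity ZC_RevenueCube
  as select from ZI_BillingItem
  association [0..1] to ZI_Customer as _Customer
    on $projection.CustomerId = _Customer.CustomerId
{
      @ObjectModel.text.association: '_Customer'  -- key gets its text association
  key CustomerId,

      @Semantics.amount.currencyCode: 'Currency'  -- amount names its currency field
      NetAmount,

      @Semantics.currencyCode: true
      Currency,

      _Customer                                   -- exposed so texts can resolve
}
```

Every consumer named in the paragraph — Fiori, the analytic engine, Datasphere, the evidence tooling — reads the same annotated view and draws the same conclusions about what the fields mean.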

When a plan Atlas writes works correctly, the end user sees a Fiori screen a few hundred milliseconds after the tap. That responsiveness is the visible signal that the whole stack pushed down cleanly. When a screen is slow, it is almost always because something in the view tree blocked pushdown — and the diagnostic path Atlas prints for a slow plan walks the same seven stages above, looking for the stage where the inversion broke.