The Ask That Sounds Simple
The customer said: "We can see this figure in the Snow Dashboard, but we want it combined with other data from the Snow report that we see here."
Sounds straightforward. They wanted a single-page compliance overview showing purchased licenses, used licenses, available licenses, and the compliance deficit or surplus — per product, at a glance. Snow License Manager has the data in its reporting database, but the standard reports don't present over-licensing and under-licensing in a way that's immediately actionable for decision-makers. The SAM team was spending time digging through raw data to understand their compliance position, when what they needed was a dashboard that told the story in seconds.
Two reports were needed: one for Windows Server compliance, one for SQL Server compliance. The tool chosen was Power BI, connected directly to Snow's SQL Server database.
The Database Schema Rabbit Hole
The first challenge wasn't Power BI. It was finding the data.
I connected to Snow License Manager's reporting database and started searching for the exact tables and views that held the compliance data the customer wanted to see. The database schema is not straightforward. What looks like closely related information in Snow's UI — say, license consumption next to compliance status — can be stored in completely separate areas of the database with no obvious relationship.
I spent hours tracing data through compliance views: underlicensed applications in one view, overlicensed applications in another, license consumption summaries in a third, compliance status indicators somewhere else entirely. None of them mapped one-to-one to what the customer saw on their Snow dashboard.
The process was gradual: extract data from one view, transform it, filter it, build a view on top of that view, join it to another dataset, get closer to the final shape the customer wanted. Each step forward revealed another data relationship that needed to be resolved. The gap between "I can see this number in Snow" and "I can reproduce this number from the database" was wider than I expected.
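That layered process can be sketched roughly as follows. Everything here is illustrative: the view and column names are placeholders I've invented for the example, not Snow License Manager's actual schema.

```sql
-- Hypothetical sketch: stack a new view on top of Snow's compliance
-- views to get closer to the shape the customer asked for. All object
-- and column names are assumptions for illustration.
CREATE VIEW dbo.vw_ComplianceCombined AS
SELECT COALESCE(u.ApplicationName, o.ApplicationName) AS ApplicationName,
       ISNULL(u.LicensesMissing, 0) AS LicensesMissing, -- from the underlicensed view
       ISNULL(o.LicensesUnused, 0)  AS LicensesUnused,  -- from the overlicensed view
       c.LicensesUsed                                   -- from the consumption summary
FROM dbo.vw_UnderlicensedApplications AS u
FULL OUTER JOIN dbo.vw_OverlicensedApplications AS o
    ON o.ApplicationName = u.ApplicationName
LEFT JOIN dbo.vw_LicenseConsumptionSummary AS c
    ON c.ApplicationName = COALESCE(u.ApplicationName, o.ApplicationName);
```

The FULL OUTER JOIN matters here: a product can appear in the underlicensed view, the overlicensed view, or neither, and you still want one row per product in the combined result.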
Power BI's Fragile Transformation Layer
This is where I need to be honest.
You spend hours mapping the database, building your queries, getting the joins right. You finally have a data pipeline that produces the numbers the customer asked for. You build the visualization in Power BI, configure the filters, set up the layout. You're almost there. And then Power BI throws a datasource error right at the end.
It's my fault. I did something wrong. That's fine; we all learn from our mistakes. But the amount of time I spent debugging and re-learning Power BI's data transformation, redoing the ETL part again and again, was out of all proportion to the task.
Don't get me wrong — Power BI is a remarkable tool. It's excellent for prototyping, quick results, easy graphs, interactive filtering, and many other things. But for me personally, the way data is extracted from sources and transformed inside Power BI was a nightmare on this project. The transformation layer is fragile. A change that looks harmless — adjusting a column type, modifying a filter step, adding a new data source — can cascade through the query pipeline in ways that aren't obvious until the whole thing breaks.
I would honestly prefer to do all the transformation inside the database — build the views, do the joins, handle the calculations in T-SQL where I can test each step independently — and then pass clean, final data to Power BI or any other visualization tool. Let the BI tool do what it's best at: making things visual. Don't make it also be the ETL engine.
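As a sketch of that preference, the final shape the customer asked for (purchased, used, available, deficit/surplus per product) can live in a single database view. The source table and column names below are assumptions I've made for illustration, not the real Snow schema; the point is that every calculation is testable with a plain SELECT before any BI tool sees it.

```sql
-- Illustrative T-SQL only: dbo.LicenseSummary and its columns are
-- assumed names. Each derived column can be verified independently
-- in the database, then served to the BI tool as clean, final data.
CREATE VIEW dbo.vw_ComplianceOverview AS
SELECT ProductName,
       LicensesPurchased,
       LicensesUsed,
       LicensesPurchased - LicensesUsed AS LicensesAvailable,
       CASE WHEN LicensesUsed > LicensesPurchased
            THEN LicensesUsed - LicensesPurchased ELSE 0 END AS ComplianceDeficit,
       CASE WHEN LicensesPurchased >= LicensesUsed
            THEN LicensesPurchased - LicensesUsed ELSE 0 END AS ComplianceSurplus
FROM dbo.LicenseSummary;
```

Power BI (or Superset, or anything else) then only needs `SELECT * FROM dbo.vw_ComplianceOverview`, with no transformation steps of its own to break.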
Add to this that customers often want more customization and a more "modern" look than Power BI's built-in visuals offer, and the project became frustrating. That's not a universal verdict on the tool; it's my honest account of this particular project.
P1: Windows Server Compliance (v8.1)
The Windows Server report came first and established the pattern.
I connected Power BI to Snow's compliance views — underlicensed applications, overlicensed applications, and license consumption summaries. The first version was rough: data was there but the layout didn't tell the story. Stakeholder review sessions shaped each iteration: "Can we see the deficit more clearly?" "Can we group by product edition?" "What about the total available?"
Eight versions. Each one refined the layout toward the goal: purchased licenses, used licenses, available licenses, and the compliance deficit or surplus per Windows Server product — all on a single page. v8.1 was the final deliverable. Stakeholders could look at it and immediately know which products were over-licensed and which were under-licensed.
The pattern was set. Now apply it to SQL Server.
P2: SQL Server Compliance (v18.1)
Same approach, adapted for SQL Server edition specifics. But SQL Server compliance had worse data quality.
Empty values in total assigned and total remaining fields. Not nulls — empty strings that Power BI's transformation layer handled differently from nulls, which behaved differently from zeros. Every empty field meant another transformation step, another potential point where the datasource pipeline could break.
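One way to defuse this class of problem is to normalize the values in the database before Power BI ever sees them, so the BI layer receives one consistent value type instead of three. The table and column names below are assumptions for illustration, not Snow's actual schema.

```sql
-- Hypothetical cleanup query: collapse empty strings to NULL, then
-- to an explicit zero, so "empty", NULL, and 0 can no longer behave
-- differently downstream in the BI tool.
SELECT ProductName,
       ISNULL(TRY_CAST(NULLIF(LTRIM(RTRIM(TotalAssigned)), '') AS int), 0)  AS TotalAssigned,
       ISNULL(TRY_CAST(NULLIF(LTRIM(RTRIM(TotalRemaining)), '') AS int), 0) AS TotalRemaining
FROM dbo.SqlServerLicenseView;
```

`NULLIF(..., '')` turns trimmed empty strings into NULLs, `TRY_CAST` safely handles any non-numeric leftovers, and `ISNULL(..., 0)` makes the "no data" case an explicit zero.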
Eighteen iterative versions. More than double the Windows report. Each stakeholder review session generated new requirements: different groupings, different filters, different visual emphasis on specific SQL Server editions. The data transformation fragility compounded with each change — adjusting one calculation could break another that depended on the same pipeline step.
v18.1 was the final deliverable. It works. It shows SQL Server compliance at a glance, the same way the Windows report does. But the journey from v1 to v18.1 was the hardest part of this entire project, and most of that difficulty lived in Power BI's data transformation layer, not in the underlying SQL or the visualization design.
The Superset Proposal
After delivering both reports, I proposed Apache Superset as an alternative for future BI reporting. Open source, self-hosted, and SQL-native — the transformation logic stays in the database where it belongs. You write your queries, Superset visualizes the results. No intermediate ETL layer that introduces its own fragility.
This wasn't about Power BI being a bad tool. It was about matching the tool to the workflow. For someone who prefers to build the data pipeline in SQL and hand clean results to a visualization layer, Superset's architecture is a better fit. The proposal gave the customer an option for future reporting work — one that I could support more effectively.
Outcomes
Two compliance dashboards delivered: Windows Server (v8.1) and SQL Server (v18.1). Twenty-six combined versions representing months of stakeholder collaboration.
Both reports connect directly to Snow's database and refresh on schedule. Stakeholders can see at a glance which products are over- or under-licensed — purchased, used, available, and the compliance gap — without digging through raw Snow reports. The SAM and IT management teams received the dashboards with datasource refresh documentation so they can maintain them independently.
The reports replaced manual report interpretation with intuitive visual dashboards. Whether Power BI was the ideal tool for building them is a separate question from whether the end result serves the customer — and it does.
What I Learned
Do the transformation in the database. Power BI is excellent at visualization, but its data transformation layer is fragile for complex compliance data. Build your views, joins, and calculations in T-SQL where they're testable, debuggable, and version-controllable. Then pass clean results to the BI tool. Let each tool do what it's best at.
Version count is a signal, not a failure. Eight versions for Windows, eighteen for SQL Server. Each one was a stakeholder conversation that refined the deliverable. The version count reflects the difficulty of translating "I want to see compliance at a glance" into actual data relationships. If someone tells you a compliance dashboard takes one iteration, they've never built one.
Know your frustration threshold. Power BI's ETL layer was personally frustrating on this project. That's honest feedback, not a universal verdict. Recognizing where a tool fights your workflow is valuable — it led me to propose Superset, which better matches how I prefer to build data pipelines. There's no shame in saying "this tool and I don't work well together for this specific task."
Customers always want more. "Just one more filter." "Can we make it look more modern?" "Can we add this other metric?" It's not scope creep — it's how BI projects work. The dashboard is never done after the first version. Budget for iteration, expect it, and make sure the underlying data pipeline can handle changes without breaking. That's the real engineering challenge in BI work.
