Developers Don't Click Buttons
A global software development company was running Snow License Manager to track their software estate. They needed to classify thousands of applications across their catalog — marking each one as allowed or not allowed, tagging security compliance status, adding licensing attributes and reject reasons. Two teams owned the process: Security and Licensing (LMS), each with their own classification criteria.
In Snow's GUI, updating custom fields means clicking into each application record one at a time. For most organizations, that's tedious. For a company full of developers, it was a non-starter. They expected an automation-first approach — no manual clicking, no repetitive GUI work.
The ask seemed simple: "Let us define our classification rules in Excel. You make Snow apply them automatically."
I said yes. What I didn't fully appreciate yet was how deep the rabbit hole would go.
The Architecture: Excel to Python to SQL Server
I designed a pipeline with four components.
Excel Filter Masks. Each team maintains their own spreadsheet template. Every row is a classification rule: application name patterns with wildcards, version, edition, platform, manufacturer, and the target custom field values — allowed (yes/no), reject reason, and comment. Think of it as a batch configuration file that business users can maintain without touching code or databases.
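To make the rule shape concrete, here is a minimal sketch of one mask row as a Python structure. The column names are illustrative, not the actual template headers:

```python
from dataclasses import dataclass

@dataclass
class FilterRule:
    """One row of an Excel filter mask (field names are hypothetical)."""
    name_pattern: str      # application name, may contain the wildcard "*"
    version: str
    edition: str
    platform: str
    manufacturer: str
    allowed: str           # target custom field value: "yes" / "no"
    reject_reason: str
    comment: str

def rule_from_row(row: dict) -> FilterRule:
    """Build a rule from one spreadsheet row, trimming stray whitespace
    that business users inevitably leave in cells."""
    return FilterRule(**{k: str(v).strip() for k, v in row.items()})
```

In production the rows would come from an Excel reader such as openpyxl; the point is that each row is a complete, self-describing classification rule.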
Python Processing Engine. The core script reads each mask row, translates the filter criteria into a dynamic SQL query, matches against Snow's application catalog, and applies bulk custom field updates via Snow's built-in stored procedures. Each run produces timestamped CSV reports showing exactly which applications were updated and what changed — a complete audit trail.
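The engine's control flow can be sketched as a small loop with the data access injected, so the real pyodbc queries and Snow stored procedure calls stay swappable. The helper names here are assumptions, not the production code:

```python
import csv
import datetime

def run_masks(rules, fetch_matches, apply_update, report_stream):
    """One classification run: for each rule, find matching applications
    and apply the custom field update, logging every change to CSV.

    fetch_matches(rule) -> matching application names (dynamic SQL in
    production); apply_update(app, rule) performs the actual write
    (Snow's stored procedure in production)."""
    writer = csv.writer(report_stream)
    writer.writerow(["timestamp", "application", "field", "new_value"])
    for rule in rules:
        for app in fetch_matches(rule):
            apply_update(app, rule)
            writer.writerow([datetime.datetime.now().isoformat(),
                             app, "allowed", rule["allowed"]])
```

Because the report row is written in the same iteration as the update, the CSV audit trail cannot drift out of sync with what was actually changed.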
SQL Server Data Layer. A cached table joins Snow's application, manufacturer, version, operating system, and custom field tables into a single queryable structure. This gets rebuilt on each script run for data freshness. Updates go through Snow's native stored procedure for custom field modifications, ensuring compatibility with Snow's internal data model.
Scheduled Orchestration. A Windows batch job handles execution, archives processed Excel masks with timestamps after each run, and preserves error states for diagnostics. Successfully processed inputs get archived; failed runs get flagged with error suffixes so nothing is silently lost.
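The archiving rule described above, timestamp on success, error suffix on failure, can be sketched in a few lines (the naming convention is illustrative):

```python
import datetime
from pathlib import Path

def archive_mask(mask: Path, archive_dir: Path, succeeded: bool) -> Path:
    """Move a processed Excel mask into the archive with a timestamp.
    Failed runs keep an _ERROR suffix so nothing is silently lost."""
    stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
    suffix = "" if succeeded else "_ERROR"
    target = archive_dir / f"{mask.stem}_{stamp}{suffix}{mask.suffix}"
    archive_dir.mkdir(parents=True, exist_ok=True)
    return mask.rename(target)
```

The timestamped filename doubles as a run identifier, which makes it trivial to pair an archived mask with the CSV reports from the same cycle.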
The Rabbit Hole
What started as a straightforward bulk updater kept growing. Every "simple" requirement revealed another layer of complexity.
Two Teams, One Field
The Security team classifies applications by their security compliance criteria. The Licensing team classifies the same applications by their licensing attributes. Both teams need to update the same custom fields in Snow — but they can't overwrite each other's work.
I implemented a prefixing and merging system. When the Security team marks an application, the value gets prefixed: sec:no;. When the Licensing team later classifies the same application, their value gets appended: sec:no;lms:yes;. A single field now carries both teams' assessments as a composite value.
Simple idea. Complex in practice. The script has to read existing custom field values before each update, detect which team has already classified the record, and merge the new value without duplicating what's already there. Every mask run is now a read-modify-write cycle rather than a simple write — and that changes the entire execution model.
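The merge step at the heart of that read-modify-write cycle can be sketched like this. The sec:/lms: prefixes come from the article; the function itself is a simplified illustration:

```python
def merge_classification(existing: str, team: str, value: str) -> str:
    """Merge one team's classification into a shared composite field.

    Entries look like "sec:no;" or "lms:yes;". The team's own entry is
    replaced if it already exists and appended otherwise, so the other
    team's assessment is never overwritten or duplicated."""
    parts = [p for p in (existing or "").split(";") if p]
    entries = dict(p.split(":", 1) for p in parts)
    entries[team] = value            # replace-or-append for this team only
    return "".join(f"{k}:{v};" for k, v in entries.items())
```

Re-running the same mask is idempotent: merging lms:yes into sec:no;lms:yes; yields the same composite value, which is exactly the property repeated batch runs depend on.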
Wildcard Matching Isn't Simple
The Excel masks use a wildcard character for "match anything" patterns. Intuitive for users — but each field type in the database needs different SQL translation.
Text fields translate the wildcard to a LIKE pattern. Boolean fields need IN (0,1) instead. Fields that can contain NULL values need their own handling. I ended up building field-specific replacement logic — essentially a mapping table in code that knows how to translate each Excel convention into valid T-SQL for that particular column type.
It's the kind of logic that looks trivial in a design document and turns into twenty-plus lines of carefully ordered conditional replacements in production code. Get the ordering wrong, and a wildcard meant for text matching accidentally corrupts a boolean filter.
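A compressed sketch of that replacement logic, with hypothetical column names and a simplified type taxonomy, shows why ordering matters:

```python
def wildcard_to_sql(column: str, value: str, col_type: str) -> str:
    """Translate one Excel filter cell into a T-SQL predicate fragment.

    The wildcard checks must run before the generic text handling, or a
    "*" meant for text matching leaks into a comparison against a bit
    column. Real code should parameterize values rather than inline them."""
    if value == "*":
        if col_type == "bool":
            return f"{column} IN (0, 1)"     # match both states
        if col_type == "nullable_text":
            return "1 = 1"                   # match everything, NULLs included
        return f"{column} LIKE '%'"
    if col_type == "bool":
        return f"{column} = {1 if value.lower() in ('yes', '1') else 0}"
    return f"{column} LIKE '{value.replace('*', '%')}'"
```

The production version handles more column types than this, but the structure is the same: a per-type dispatch in which the special cases are checked before the general one.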
The SQL Kept Evolving
The first version of the data layer stored procedure used subqueries to pull application data with their custom field values. It worked, but was too slow on a catalog with thousands of applications.
Version two switched to JOINs with temporary tables. Faster, but it introduced a subtler problem: applications that had never been classified — no custom field values at all — were excluded by the INNER JOIN. The very records that most needed classification were invisible to the tool.
Version three fixed this with LEFT JOINs and ISNULL() handling, capturing the complete application set including uncategorized records. Three iterations of the same stored procedure to get the data layer right. Each version was a response to production reality that testing hadn't surfaced.
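The v2 bug is easy to reproduce in miniature. This toy Python join, with simplified table shapes, shows how an inner join silently drops applications that have no custom field rows, while the left-join-with-default behavior of v3 keeps them:

```python
apps = {1: "Adobe Reader", 2: "Notepad++"}   # application catalog
custom = {1: "sec:no;"}                      # only app 1 ever classified

# v2 semantics: INNER JOIN keeps only apps with custom field rows
inner = {a: (apps[a], custom[a]) for a in apps if a in custom}

# v3 semantics: LEFT JOIN + ISNULL(value, '') keeps the full catalog
left = {a: (apps[a], custom.get(a, "")) for a in apps}

assert 2 not in inner                    # Notepad++ invisible to the tool
assert left[2] == ("Notepad++", "")      # present, with an empty value
```

The unclassified records are the ones the tool exists to process, so the inner join was excluding precisely the rows that mattered most.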
The "Delete Everything" Feature
After running classification cycles for a while, the teams needed a way to wipe all custom field values and start fresh — a clean slate for reclassification. I implemented a special trigger: a specific comment value in the Excel mask activates bulk deletion mode, clearing all custom fields before applying new values.
Simple to describe, tricky to implement safely. You don't want accidental wipes because someone typed the wrong comment in a cell. The trigger has to be exact-match, clearly documented, and logged conspicuously so the audit trail shows that a deliberate reset occurred — not a mysterious data loss.
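The safety property reduces to one strict check. The sentinel value below is hypothetical, not the actual trigger string:

```python
RESET_TRIGGER = "##RESET-ALL##"   # hypothetical exact-match sentinel

def is_reset_requested(comment: str) -> bool:
    """Exact match only: no strip, no case folding, so a near-miss typo
    in the comment cell can never wipe the custom fields by accident."""
    return comment == RESET_TRIGGER
```

Deliberately skipping the usual whitespace trimming and case normalization here inverts the normal tolerance: for a destructive operation, you want the check to fail on anything less than an exact, intentional match.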
Outcomes
The system replaced hours of manual GUI clicking with minutes of batch processing. Both the Security and Licensing teams maintain their own Excel classification rules independently, on their own schedule, without coordinating with each other or with a developer.
Every update cycle produces timestamped CSV reports — three separate outputs for different audiences — showing exactly which applications were updated and what values changed. The SAM team archives every processed Excel mask alongside these reports, creating a complete compliance trail from input rules to database changes.
The system has been running in production since August 2024. The Excel filter mask format hasn't changed since the first version — it mapped naturally to how the teams already think about application classification, so there was never pressure to redesign the interface.
What I Learned
"Just a bulk updater" is never just a bulk updater. The moment you introduce multi-team classification, field merging, and wildcard matching, complexity compounds. Every "simple" addition interacts with existing logic in ways that aren't obvious until you run it against real data. The gap between "update a field" and "update a field that two teams share, with wildcards, audit trails, and a reset mechanism" is enormous.
Let the SQL evolve. Fighting for the perfect query on day one is wasted effort. Ship version one, discover what it misses in production (performance issues, NULL custom fields, incomplete joins), and iterate. Three versions of a stored procedure isn't failure — it's how production data layers mature. Each version was a direct response to something the previous version couldn't handle.
Excel as an interface is underrated. For SAM teams, a spreadsheet they can edit, review, and version-control in their own workflow beats any custom UI I could have built. The filter mask format is intuitive because it mirrors how the teams already reason about classification: "all applications matching this pattern from this manufacturer should be marked as allowed." No training required.
Know your customer. Developers expect automation. Delivering a manual process to a software development company would have been dead on arrival regardless of how well the underlying logic worked. The Excel-driven approach met them where they were: structured data in, automated processing, structured data out. No GUI, no clicking, no repetition.
