Building a Three-Module Fidelis Connector for Snow License Manager

February 15, 2026 · 7 min read

Three Modules, Three Problems

Fidelis Security isn't one product — it's three. Fidelis Elevate is an EDR platform tracking endpoint agents. Fidelis Network monitors network traffic throughput. Fidelis Deception manages honeypot IP address pools. Each module has its own licensing model, its own data format, and its own ideas about how (or whether) to export data.

The enterprise telecom client needed compliance visibility for all three modules inside Snow License Manager. How many Elevate agents are active vs. licensed? Is Network throughput within the contracted bandwidth? How many Deception IPs are deployed against the licensed pool?

No Snow connector existed for any of them. I needed to build one — or rather, three.

The Data Access Reality

The initial requirements meeting in September 2025 painted an optimistic picture: Elevate had an API for agent data, Network could export traffic statistics, and Deception tracked IPs through its console.

Reality was more nuanced. Over the following weeks, as I dug into each module's actual capabilities, the picture shifted:

  • Elevate: The API existed but only returned a full agent list — no date filtering, no incremental exports. In practice, what the client actually had access to were manual CSV exports from the console.
  • Network: This turned out to be the only module with a working API — a CGI endpoint returning traffic statistics in TSV format with time-bucketed protocol breakdowns.
  • Deception: Manual CSV export only. Low change frequency (the IP pool rarely shifts), but still needed automated import for compliance reporting.

Three modules, three different data source types. I needed an architecture flexible enough to handle all of them.

The Broken API Detour

On October 28, I attempted my first real API connection to the Network module. Access Denied — both service accounts failed authentication.

The cause wasn't my configuration. Fidelis had been updated days earlier, and the update broke API authentication. The vendor filed a ticket, but there was no timeline for resolution.

This could have stalled the entire project. Instead, I pivoted: while waiting for the API fix, I focused on the CSV-based modules. The client uploaded Elevate endpoint exports and Deception IP lists to the server, and I started building the ETL pipeline around the data I actually had.

By the time the Network API was restored in early November, I had two-thirds of the connector running in production. The API integration became the final piece rather than the foundation — a better architecture in hindsight, since it meant the system wasn't dependent on a single data source type.

Building the ETL Pipeline

Python Transformation Layer

I wrote a Python transformation script that converts raw Fidelis exports into SQL Server-ready CSVs:

Elevate endpoints: The raw export contained thousands of agent records with fields like hostname, IP address, OS, agent installation and connection status, and last contact date. The script preserves the field structure while normalizing encoding to UTF-8 for SQL Server's OPENROWSET compatibility.

Deception IPs: Hundreds of IP records with hostname, role, and subnet connectivity status. The script adds an imported_at timestamp for tracking data freshness.

Both transformations maintain 100% record consistency — every input row produces exactly one output row, verified through QC counts after each run.
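To make the shape of that step concrete, here is a minimal sketch of the re-encode-and-stamp transformation. The file names, source encoding, and field handling are assumptions for illustration, not the production script:

```python
#!/usr/bin/env python3
"""Sketch of the export-to-CSV transformation step.

Assumptions (not taken from the real connector): file names, the source
encoding ('cp1251' as a stand-in), and the exact field list are illustrative.
"""
import csv
from datetime import datetime, timezone


def transform(src_path: str, dst_path: str, src_encoding: str = "cp1251",
              add_imported_at: bool = False) -> None:
    """Re-encode a raw Fidelis export to UTF-8 and optionally stamp each row."""
    with open(src_path, newline="", encoding=src_encoding) as src, \
         open(dst_path, "w", newline="", encoding="utf-8") as dst:
        reader = csv.DictReader(src)
        fields = list(reader.fieldnames or [])
        if add_imported_at:
            fields.append("imported_at")
        writer = csv.DictWriter(dst, fieldnames=fields, lineterminator="\n")
        writer.writeheader()

        rows_in = rows_out = 0
        stamp = datetime.now(timezone.utc).isoformat()
        for row in reader:
            rows_in += 1
            if add_imported_at:
                row["imported_at"] = stamp
            writer.writerow(row)
            rows_out += 1

    # QC check: every input row must produce exactly one output row.
    if rows_in != rows_out:
        raise RuntimeError(f"Row count mismatch: {rows_in} in, {rows_out} out")
    print(f"{dst_path}: {rows_out} rows written")


if __name__ == "__main__":
    # Hypothetical file names for the two CSV-based modules.
    transform("elevate_endpoints_raw.csv", "elevate_endpoints_utf8.csv")
    transform("deception_ips_raw.csv", "deception_ips_utf8.csv",
              add_imported_at=True)
```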

SQL Server Import and Reporting

I built six stored procedures covering three custom reports:

Elevate Report — Two procedures: one imports the transformed CSV via OPENROWSET with CODEPAGE 65001 (UTF-8), using TRY_CAST for date parsing and CASE expressions for boolean conversion. The second procedure renders the display view with all agent details.

Deception Report — Same two-procedure pattern: bulk import from CSV into a staging table, then a typed final table with proper DATETIME2, DECIMAL, and INT columns.

Compliance Report — The most complex piece. Two procedures using CTEs to aggregate across all three modules: licensed quantities vs. actual usage, with percentage calculations. The final output shows a single compliance position:

Module       Metric                               Status
Elevate      Licensed vs. active agents           Compliant
Deception    Licensed vs. deployed IPs            Compliant
Network      Contracted vs. actual throughput     Live monitoring
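The production procedures do this aggregation in T-SQL with CTEs, but the arithmetic itself is simple. The sketch below restates the count-based checks (Elevate and Deception) in Python with made-up quantities, purely to show how the licensed-vs.-actual percentages and statuses are derived:

```python
# Illustration only: the production calculation runs as T-SQL CTEs inside
# the compliance stored procedures. All quantities here are made up.

def license_position(licensed: int, actual: int) -> tuple[float, str]:
    """Usage as a percentage of the licensed quantity, plus a status flag."""
    pct = round(actual / licensed * 100, 1) if licensed else 0.0
    status = "Compliant" if actual <= licensed else "Over-deployed"
    return pct, status

checks = {
    "Elevate: licensed vs. active agents": (5000, 4321),
    "Deception: licensed vs. deployed IPs": (1024, 768),
}

for label, (licensed, actual) in checks.items():
    pct, status = license_position(licensed, actual)
    print(f"{label}: {actual}/{licensed} ({pct}%) -> {status}")
```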

This unified view was the whole point — instead of checking three separate consoles, the SAM team sees one dashboard in Snow License Manager.

Network API Integration

When the Network API came back online in early November, I built the final piece: a Python script that calls the Fidelis Network CGI endpoint for traffic statistics.

The API returns data in an unusual format — TSV with time-bucketed rows (0, 300, 600... seconds) containing colon-separated values for each protocol type: TCP, UDP, ICMP, ARP, DNS, and Other. I wrote parsing logic to split these values, compute Total_Kbits_per_second per time bucket, and append the results to a cumulative CSV file.
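The exact response layout isn't reproduced here, so the sketch below makes assumptions: each TSV row starts with the time-bucket offset in seconds, followed by fields like TCP:812.4 for each protocol. With that caveat, the split-and-total logic looks roughly like this:

```python
# Sketch of the parsing step. The assumed row layout (time bucket first,
# then "PROTOCOL:kbits" pairs separated by tabs) is an illustration,
# not the documented Fidelis Network response format.
import csv
import os

PROTOCOLS = ("TCP", "UDP", "ICMP", "ARP", "DNS", "Other")

def parse_traffic_tsv(raw: str) -> list[dict]:
    """Split the colon-separated protocol values and total them per bucket."""
    rows = []
    for line in raw.splitlines():
        if not line.strip():
            continue
        fields = line.split("\t")
        row = {"bucket_seconds": int(fields[0])}          # 0, 300, 600, ...
        for field in fields[1:]:
            name, _, value = field.partition(":")
            if name in PROTOCOLS:
                row[name] = float(value)
        row["Total_Kbits_per_second"] = sum(row.get(p, 0.0) for p in PROTOCOLS)
        rows.append(row)
    return rows

def append_to_history(rows: list[dict], path: str = "network_history.csv") -> None:
    """Append parsed buckets to the cumulative CSV with LF-only line endings."""
    # "network_history.csv" is a hypothetical file name for this sketch.
    write_header = not os.path.exists(path) or os.path.getsize(path) == 0
    fieldnames = ["bucket_seconds", *PROTOCOLS, "Total_Kbits_per_second"]
    with open(path, "a", newline="", encoding="utf-8") as fh:
        writer = csv.DictWriter(fh, fieldnames=fieldnames, lineterminator="\n")
        if write_header:
            writer.writeheader()
        writer.writerows(rows)
```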

A few technical details that mattered:

  • Line endings: The CSV uses LF-only (0x0a) line endings specifically for SQL Server OPENROWSET compatibility. Windows-style CRLF would break the bulk import.
  • Credentials: Base64-obfuscated in the script configuration (not hardcoded in plaintext).
  • Logging: Rotating log handler (1 MB x 5 files) for troubleshooting without filling the disk.
  • Cumulative data: Unlike the Elevate and Deception files that get replaced each cycle, the Network history file is append-only — it builds a traffic timeline over weeks and months.
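The logging and credential handling are easy to show generically. The snippet below is a sketch of that setup, not the actual script's configuration; the names, sizes, and example encoded values are illustrative:

```python
# Generic sketch of the logging and credential handling described above.
# Variable names and the encoded values are illustrative only.
import base64
import logging
from logging.handlers import RotatingFileHandler

# Rotating log: 1 MB per file, 5 backups, so troubleshooting output
# can't grow without bound on the collector host.
handler = RotatingFileHandler("fidelis_network_api.log",
                              maxBytes=1_000_000, backupCount=5)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger = logging.getLogger("fidelis_network")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

# Credentials are stored base64-obfuscated in the script configuration.
# That keeps plaintext out of the file, but it is obfuscation, not encryption.
ENCODED_USER = "c3ZjX2ZpZGVsaXM="          # example value only
ENCODED_PASS = "Y2hhbmdlLW1l"              # example value only

username = base64.b64decode(ENCODED_USER).decode("utf-8")
password = base64.b64decode(ENCODED_PASS).decode("utf-8")

logger.info("Loaded credentials for %s", username)
```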

The compliance calculation pulls MAX(Total_Kbits_per_second) from the Network history, converts Kbps to Gbps, and compares against the contracted throughput limit.
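Outside SQL, that check is just a unit conversion and a comparison; a rough Python equivalent, with a placeholder contracted limit, looks like this:

```python
# The real check runs inside the compliance stored procedure; this restates
# the same arithmetic in Python. The contracted limit is a placeholder.
CONTRACTED_GBPS = 10.0                               # placeholder value

def throughput_status(max_kbits_per_second: float) -> str:
    """Compare peak observed throughput against the contracted limit."""
    peak_gbps = max_kbits_per_second / 1_000_000     # 1 Gbps = 1,000,000 Kbps
    if peak_gbps <= CONTRACTED_GBPS:
        return f"Compliant ({peak_gbps:.2f} of {CONTRACTED_GBPS} Gbps)"
    return f"Over limit by {peak_gbps - CONTRACTED_GBPS:.2f} Gbps"

print(throughput_status(7_400_000))                  # -> Compliant (7.40 of 10.0 Gbps)
```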

Automation: Making It Run Unattended

The full automation chain has two layers:

Hourly: Windows Task Scheduler runs the Python Network API script, fetching the latest traffic statistics and appending them to the cumulative CSV.

Nightly: A batch script copies the three transformed CSVs to the SQL Server's import share, archives each file with a timestamp suffix, and cleans up local copies. Then SQL Server Agent jobs execute the six stored procedures in sequence — import, transform, calculate compliance.

The batch automation includes error handling and full logging. The Network history file is treated differently from the others: it's kept as a persistent, cumulative log rather than deleted after transfer, building a complete traffic history for trend analysis.

Delivery and Documentation

I delivered the complete solution with operational summary documents in both English and Russian for the client's SAM team. The documentation covers:

  • Server roles and architecture overview
  • The hourly/nightly automation schedule
  • SQL Server Agent job chain details
  • SLM report names for the compliance dashboard
  • Maintenance notes for when Fidelis exports change format

What I Learned

Design for mixed data sources from the start. My initial assumption was that most modules would have APIs. In reality, only one did. Building the ETL pipeline around CSV transformation first meant the architecture handled any data source type — API, manual export, or automated file drop.

Vendor updates will break your integrations. The Fidelis platform update that killed API authentication happened without warning. Building the connector so that each module could function independently meant a broken API didn't take down the entire compliance dashboard — it just left one column showing stale data until the fix arrived.

A unified compliance view is worth more than three separate reports. The client's SAM team doesn't care about the distinction between Elevate agents, Network throughput, and Deception IPs from an operational perspective. They care about one question: are we compliant across all Fidelis modules? The single dashboard answers that question in seconds, and that's what made this connector valuable — not the Python scripts or SQL procedures, but the consolidated view they produce.