Choosing the Right Replication Strategy: Evaluating ADF + CDC vs. Snowflake Openflow
If you're delivering embedded analytics through Sisense, your dashboard is only as good as the data behind it. But how often do you think about what's happening upstream, in the pipelines and ingestion architectures that determine whether that data is fresh, reliable, and cost-efficient before it ever reaches a visualization? QBeeQ is known in the Sisense ecosystem as a plugin and integration provider, but that's only part of what we do. This post is a window into the broader, strategic side of our work: architectural decision-making that happens well upstream of the BI layer but directly impacts the analytics experience your customers see.

A Platform Already in Motion

Our relationship with this client didn't start with this POC. We previously helped them design and build a Snowflake-based data platform from the ground up, including data integrations, ELT processes, data models, and governance. That work gave them a solid, modern foundation for their analytics and reporting, including the embedded dashboards they deliver to their customers via Sisense.

This POC was the next strategic step in that journey. With the platform stable and maturing, the question became: are we using the best possible approach for replicating data from their MS SQL Server into Snowflake? More specifically, should we continue developing our custom ingestion approach, or move toward a more ready-to-go solution using Snowflake Openflow?

Our client's business depends on near-real-time data availability.
Their operational data lives in MS SQL Server and feeds directly into Snowflake, where it powers analytics and reporting for their customers across retail, telecom, banking, and energy. These aren't just technical concerns; they directly affect the quality and reliability of the analytics experience delivered to their end customers. The key business drivers behind this evaluation were:

- Faster availability of incremental data, especially for large transactional tables
- Lower operational risk and improved robustness
- Reduced maintenance effort over time
- Cost optimization at scale

Putting Two Approaches to the Test

Rather than making a decision based on vendor documentation or assumptions, we designed a structured POC to test both approaches in parallel, on the same three tables and similar data volumes, across four dimensions: performance, cost, stability, and maintainability.

Option 1: Custom ADF + SQL CDC (Our Existing Solution)

The key strength of this approach is control. We define the schema, manage data types, handle transformations during ingestion, and have full visibility into every step of the pipeline. This is the approach we built as part of the initial platform work:

- CDC (Change Data Capture): a native SQL Server feature that tracks only changed rows (inserts, updates, and deletes), eliminating the need to reload entire tables on every run
- Azure Data Factory (ADF): Microsoft's cloud orchestration service, used here to move delta data from SQL Server to Azure Blob Storage
- Snowflake: which then loads and transforms the data using Snowpipe and scheduled tasks

It's more engineering-intensive to set up, but the result is a robust, predictable, and highly tunable architecture.

Option 2: Snowflake Openflow

Openflow is Snowflake's newer, more plug-and-play approach to data ingestion. It uses Change Tracking (CT) on the SQL Server side, a lighter-weight mechanism than CDC, and ingests data more directly into Snowflake with less custom orchestration required.
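Conceptually, both options reduce to the same operation: reading a feed of change records and merging them into a target table. A minimal Python sketch of that merge logic, with a hypothetical record shape rather than the actual CDC or CT payload format:

```python
# Apply a feed of change records (inserts, updates, deletes) to a target
# table held as {primary_key: row}. CDC and Change Tracking both ship only
# the changed rows plus a per-row operation, not the full table.
def apply_changes(target, changes):
    for change in changes:
        op, key, row = change["op"], change["key"], change.get("row")
        if op in ("insert", "update"):
            target[key] = row       # upsert the new row image
        elif op == "delete":
            target.pop(key, None)   # hard delete; a soft-delete feed would
                                    # instead flag the row as deleted
    return target

table = {1: {"amount": 100}, 2: {"amount": 250}}
feed = [
    {"op": "update", "key": 1, "row": {"amount": 120}},
    {"op": "insert", "key": 3, "row": {"amount": 75}},
    {"op": "delete", "key": 2},
]
apply_changes(table, feed)
# table is now {1: {"amount": 120}, 3: {"amount": 75}}
```

The real pipelines differ in where this merge runs (ADF staging plus Snowflake tasks versus Openflow's managed ingestion) and in how deletes arrive, which is exactly where the two approaches diverge in practice.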
On paper, it's appealing: faster to configure, fewer moving parts, and tighter integration within the Snowflake ecosystem. But as with any "managed" solution, there are trade-offs. And that's exactly what we needed to quantify.

What We Found

Performance

Both approaches handled initial loads equally well. The gap appeared with delta loads: Openflow's Change Tracking completed incremental updates in 1–2 minutes versus 5–12 minutes for ADF + CDC. For near-real-time use cases, that's a genuine advantage worth noting.

Cost

This is where the picture shifts dramatically, and where decision makers should pay close attention:

- Openflow consumed approximately 12.5 Snowflake credits per day, translating to roughly $50/day or ~$1,500/month for this workload alone.
- ADF + CDC came in at an estimated $8–15/month.

That's not a marginal difference; it's a roughly 100x cost gap on a three-table POC. Now consider what that looks like at scale. Most production environments don't replicate three tables; they replicate dozens, sometimes hundreds. If costs scale proportionally, an Openflow-based architecture could run into tens of thousands of dollars per month for a full production workload, compared to what remains a very modest cost with the custom ADF approach.

For a data platform that's meant to be a long-term foundation, the compounding effect of that cost difference is enormous. Over a single year, the gap between these two approaches could easily reach $200,000 or more, money that could instead fund additional data products, analytics capabilities, or engineering capacity. For decision makers evaluating build vs. buy, or assessing the true TCO of a "managed" solution, this kind of analysis is exactly what's needed before committing to a direction.

Stability & Maturity

The Openflow SQL Server connector was still in preview status at the time of the POC. Runtime and canvas updates caused instability during our testing, and the solution requires ongoing infrastructure coordination.
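To make the scale argument concrete, here is a back-of-envelope projection of the cost figures above. The ~$4 per-credit price and the linear per-table scaling are illustrative assumptions, not measured values:

```python
# Project the POC's measured Openflow cost to larger table counts,
# assuming (as hedged above) that costs scale proportionally per table.
CREDITS_PER_DAY = 12.5    # Openflow consumption on the 3-table POC
PRICE_PER_CREDIT = 4.0    # assumed USD per Snowflake credit (~$50/day)

def openflow_monthly(tables, poc_tables=3, days=30):
    per_table_daily = CREDITS_PER_DAY * PRICE_PER_CREDIT / poc_tables
    return per_table_daily * tables * days

poc_cost = openflow_monthly(3)        # ~1500 USD/month, matching the POC
at_scale = openflow_monthly(100)      # ~50000 USD/month if scaling holds
print(round(poc_cost), round(at_scale))
```

Even if the real curve flattens well below linear, a workload of dozens to hundreds of tables puts the annual gap against an $8–15/month ADF pipeline comfortably into six figures, which is where the $200,000 figure above comes from.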
Azure Data Factory, by contrast, is a mature and battle-tested technology with a proven track record in production environments.

Architecture & Maintainability

Openflow introduced meaningful architectural complexity. It:

- requires Snowflake Business Critical edition (for PrivateLink connectivity)
- adds authentication and data sharing overhead
- offers limited control over target data types
- handles soft deletes only, so hard deletes require additional handling

Additional Snowflake-side transformations (PARSE_JSON, CLEAN_JSON_DUPLICATES) were also necessary.

Our custom ADF solution, while more involved to build:

- gives full control over schema and transformations
- handles deletes cleanly
- can be extended through a reusable automation procedure that simplifies onboarding new tables over time

|                         | Openflow                                    | ADF + CDC                      |
|-------------------------|---------------------------------------------|--------------------------------|
| Architecture complexity | ⚠️ Requires Business Critical + PrivateLink | ✅ Simpler, fewer dependencies |
| Stability               | ⚠️ Preview; instability during POC          | ✅ Mature, production-proven   |
| Cost/month              | ⚠️ ~$1,500                                  | ✅ ~$8–15                      |
| Initial load            | ✅ Comparable                               | ✅ Comparable                  |
| Delta load speed        | ✅ ~1–2 min                                 | ⚠️ ~5–12 min                   |
| Onboarding new tables   | ✅ Fast and easy                            | ⚠️ More setup, but automatable |
| Schema & type control   | ⚠️ Limited                                  | ✅ Full control                |
| Delete handling         | ⚠️ Soft deletes only                        | ✅ Full delete support         |

Our Recommendation

The POC gave our client something genuinely valuable: a data-driven basis for a strategic architectural decision, rather than a choice made on assumption or vendor marketing. Our recommendation leaned toward continuing with the custom ADF + CDC approach as the long-term foundation. Not because Openflow lacks merit, but because the cost differential is substantial, the stability risks are real at this stage of the product's maturity, and the architectural overhead introduces complexity that isn't justified by the performance gain alone.

That said, the delta load performance advantage of Openflow is meaningful.
As the connector matures, and if near-real-time requirements intensify, it remains a viable path to revisit. The POC ensures that if and when that moment comes, the decision will be informed, not reactive.

What This Kind of Work Looks Like in Practice

In the Sisense ecosystem, you may know us through our plugins and integrations. But this post reflects something equally important: our ability to act as a full-stack data platform partner. What your users see in Sisense is a function of decisions made far upstream. When those decisions lack rigor, the effects show up as latency, data gaps, and rising costs.

Running a structured POC like this is often underestimated, but it is one of the most valuable things we do for our clients. It's not just about finding the faster or cheaper option. It's about understanding the full picture: performance under realistic conditions, true cost at scale, architectural implications, and long-term maintainability.

Every organization's situation is different. The right ingestion architecture depends on your data volumes, latency requirements, existing tooling, team capabilities, and cost constraints. If you're thinking about your broader data strategy, not just what Sisense can do, but whether everything underneath it is built right, that's a conversation we're well-positioned to have. Thinking about your data platform strategy? Let's talk.

QBeeQ is a data strategy consulting firm made up of former Sisense employees and customers. We are a Sisense Gold Implementation Partner and a Snowflake Select Partner.

Every Metric Happens Somewhere: Why the "Where" Dimension Is the Missing Layer in Modern Analytics
In our recent QBeeQ webinar, Every Metric Happens Somewhere, we explored why geospatial context is no longer a niche feature for specialized industries. It's a core analytical dimension, and one that can dramatically elevate engagement, insight discovery, and decision-making. You can view the recording, or keep scrolling for the highlights and key takeaways from the webinar.

The Problem: Dashboards Without Geography Miss Patterns

Traditional dashboards rely heavily on tables, bar charts, and line graphs. They show totals, trends, and rankings well, but they struggle to reveal spatial relationships. Consider common business questions:

- How many claims do we have?
- What's our revenue by region?
- Which territories are underperforming?

These are valuable questions, but they're incomplete without geographic context. When you introduce the where dimension, new insights emerge:

- Clusters that only appear spatially
- Boundary effects between adjacent territories
- Pockets of unusually high or low performance
- Regional anomalies masked in aggregated totals

From Niche Feature to Core Capability

Maps were once considered specialized, useful mainly for industries such as logistics, real estate, or field operations. That's no longer true. Revenue, risk, compliance, performance, claims, customer distribution: these all happen somewhere. Geography cuts across industries. Approximately 80% of enterprise data already contains a spatial component; most organizations simply aren't leveraging it.

Instead of placing a map as a static, standalone widget, leading teams are making it central to the analytical experience. Research shows interactive dashboards increase engagement and insight discovery. When you layer interactivity (zooming, filtering, panning) on top of geography, users uncover patterns faster and spend more time exploring. But friction often gets in the way.
The Friction: Where Native Mapping Falls Short

Many teams start with basic mapping tools that allow simple point plotting or polygon mapping. That works until analysis becomes more strategic. Common limitations appear quickly:

- Needing both points and polygons on the same map
- Wanting density visualizations or clustering
- Requiring multiple layers
- Enabling advanced drill behavior
- Supporting large-scale or time-based data

When mapping tools can't support these needs, one of three things happens: maps sit unused on dashboards, designers revert to traditional charts, or spatial thinking never becomes core to analysis. The QBeeQ mapping solutions remove that friction.

SuperMap: Practical Spatial Analytics for Everyday Use

SuperMap is built for operational decision-making: understanding what's happening right now, and acting on it. This plugin includes a range of flexible, scalable features, delivered as a zero-code solution, designed for both dashboard builders and end users.

- Multi-Layer Mapping (Points + Polygons): Overlay geographic territories such as counties or states with individual location points, allowing each layer to have its own KPIs, category breakdowns, and sizing logic for richer comparative analysis.
- Heatmaps and Radius-Based Points: Color polygons by one performance metric while simultaneously sizing map points by another, delivering multi-dimensional insight within a single, unified view.
- Clustering with Drill-In Behavior: As users zoom in, clustered points automatically separate to reveal individual locations, with clusters capable of displaying proportional category breakdowns such as hospitals, police stations, and post offices.
- Advanced Tooltips: Enhance map interactivity with tooltips that go beyond a single value by displaying multiple KPIs, raw metrics, and calculated insights to support deeper exploration.
- Jump-to-Dashboard Navigation: Enable users to click on a territory to instantly filter the entire dashboard or navigate directly to a more detailed analytical view for focused investigation.
- Measure Switching: Allow users to toggle between different KPIs on the same map without duplicating visuals, reducing dashboard clutter while increasing analytical flexibility.
- Geographic Hierarchy: Support seamless geographic drill-down across states, counties, zip codes, and other levels, either directly within the map interface or through a structured dropdown selection.

Deck.gl Map: High-Performance, Large-Scale Exploration

While SuperMap focuses on operational clarity, the Deck.gl Map is designed for scale and trends over time. For teams working with event streams, logistics data, or large geospatial datasets, Deck.gl provides performance without sacrificing interactivity. This plugin enables:

- Arc layers
- Hex bin density visualizations
- 2D and 3D polygon layers
- High-volume point plotting
- Hierarchical spatial analysis

The Bigger Picture: Making "Where" a First-Class Dimension

Time has long been treated as a foundational analytical dimension; geography deserves the same status. When you:

- Align metrics to business geography
- Enable exploration instead of passive observation
- Remove friction from advanced spatial analysis
- Combine interactivity with spatial context

you don't just make dashboards prettier. You make them more useful. Every metric happens somewhere. When you show that somewhere clearly, insights accelerate and decisions improve.

At QBeeQ, we believe geography should be a first-class dimension in every analytical experience. Our mapping solutions are designed to remove friction, scale with your data, and make spatial insight accessible to every dashboard builder and decision-maker. Because when every metric happens somewhere, QBeeQ helps you see exactly where it matters most.
QBeeQ is a data consulting firm and a Sisense Gold Implementation Partner.

Product Update | Asset Auditor incorporates user access, permissions, and asset sharing
A more user-centric Asset Auditor

In this release, we're excited to introduce a user-focused expansion of the Asset Auditor, including two new dashboards, Users and Users Validation, along with enriched underlying data revealing how user access and permissions connect to your data assets. You can now easily understand:

- Who has access and permissions to which data assets
- How assets are shared across your organization
- Whether access could be impacting engagement

With this release, you get a clearer, more actionable picture of how people and assets interact, empowering better oversight. We've also made major improvements across the existing dashboards to integrate this new data, elevate insights, and provide more actionable recommendations.

Assets can't deliver value unless users can access and engage with them

In Sisense, dashboards and data models are governed by separate access controls, yet they don't operate in silos. They're shared, cloned, embedded, and repurposed across teams. When someone shares a dashboard, they may not have permission to share the underlying model. The result? Users open dashboards expecting insights, only to find missing charts or blank visuals. They're unsure whether the data is broken, restricted, or simply unavailable, while the sharer assumes everything is fine.

The Asset Auditor gives clear visibility into which users or groups have:

- Access to dashboards but not to the underlying data models
- Access to data models but no corresponding dashboard access
- No access to any dashboards

Yet access alone doesn't guarantee adoption, and adoption issues are often misdiagnosed as access problems. Are dashboards underused because people truly lack access, or because they simply aren't engaging with the content? By surfacing these mismatches, you can prevent confusion, improve collaboration, and ensure every shared dashboard delivers the full experience it's meant to.
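The mismatch detection described above is, at its core, a set comparison between two permission maps. A minimal Python sketch with hypothetical data shapes (this is not the Asset Auditor's actual API):

```python
# For each dashboard, compare the users who can open it with the users who
# can read its underlying data model; the difference is the group that will
# see blank or broken widgets.
def find_blocked_users(dashboard_users, model_users, dashboard_to_model):
    blocked = {}
    for dashboard, model in dashboard_to_model.items():
        missing = dashboard_users.get(dashboard, set()) - model_users.get(model, set())
        if missing:
            blocked[dashboard] = missing
    return blocked

dashboard_users = {"Sales KPIs": {"ana", "ben", "cy"}}
model_users = {"SalesModel": {"ana", "ben"}}
blocked = find_blocked_users(dashboard_users, model_users,
                             {"Sales KPIs": "SalesModel"})
# blocked == {"Sales KPIs": {"cy"}}; "cy" can open the dashboard but
# cannot read the model, so the dashboard renders empty for them.
```

The same set logic, run in the other direction, surfaces users with model access but no dashboard access, which is the second mismatch the release calls out.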
Permission drift happens quietly, introducing operational risk long before it becomes visible

Do users have the correct permissions? Do some users have too many? Do users have permissions to data models or dashboards that they shouldn't? Use the Asset Auditor to see whether users have the right level of access: too little to be effective, or too much for their role. Identify misaligned configurations, such as users who maintain data model access for development or testing but no corresponding dashboard access, a strong indicator that permissions no longer reflect the real workflow. By detecting both over-permissioning and under-permissioning, you can tighten governance without slowing productivity.

Understand the reach of your dashboards across users and groups

The Asset Auditor helps you understand the reach of your dashboards across users and groups, revealing how far each asset spreads and where engagement actually concentrates. Detect and reduce redundancy by finding duplicates or overlapping assets shared across teams. Pair these insights with Sisense Usage Analytics to understand not just who can access assets, but who actively engages with them. By bringing these signals together, teams can zero in on whether the problem is permissions, visibility, or user behavior.

The Asset Auditor provides much more data and insight beyond users and shares. Check it out and get smarter about how you manage your data assets. If you want better visibility into what your assets are doing inside your environment, reach out to us for a live demo or a free trial.

QBeeQ Snowflake Monitor: Understand your Snowflake costs and get more value from every credit
QBeeQ Snowflake Monitor is a new Sisense plugin designed to help you bring clarity and control to your Snowflake spend directly within your Sisense environment. Sisense customers can easily connect to their existing Snowflake Account Usage and Organization Usage schemas; no custom ETL pipelines or workarounds are required.

QBeeQ Asset Auditor: A smarter way to manage your Sisense data assets
Optimize to cut storage and processing costs, refine data models, and boost performance

Query and dashboard performance are closely linked, and both are often hindered by bloated data models. Excessive columns, unused tables, and inefficient relationships force queries to process unnecessary data, slowing down dashboards. This leads to frustration, delayed insights, and lower productivity. Use the Asset Auditor dashboards to:

- See all your data sources and follow the dependencies across data sources, data models, tables, columns, dashboards, and widgets
- Identify table and column utilization across dashboards and widgets for better model design
- Target and remove empty and unused data sources, data models, columns, and dashboards

By reducing or removing unused tables and columns and optimizing queries, organizations can drive down storage and processing costs while increasing performance and user engagement.

Expose (and prevent) hidden dashboard issues affecting your users

A key risk in delivering analytics is unintended downstream effects from data model changes, causing broken widgets, missing calculations, and misleading insights. Without full visibility, teams may disrupt critical business data. Errors often surface only when users load dashboards, despite backend checks, leading to frustration, missed insights, and wasted troubleshooting time. The Asset Auditor helps you identify the source of these errors, from deleted data sources or missing data down to widget-level errors, reducing the time it takes to troubleshoot, find root causes, and push fixes. Use the Asset Auditor at each step to verify that dashboards are error-free when delivered to end users.

Plan and execute changes with more confidence

When shared elements are scattered across dashboards, making changes can feel overwhelming without knowing the full scope.
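The utilization analysis described above is essentially a reverse-dependency lookup: walk every widget's column references, then diff against the model. A minimal Python sketch with hypothetical structures (not the Asset Auditor's internal format):

```python
# Given each widget's referenced columns, compute which model columns are
# never used (candidates for removal) and where each column is consumed
# (the blast radius of a change).
def column_usage(model_columns, widget_refs):
    used = {}
    for widget, columns in widget_refs.items():
        for column in columns:
            used.setdefault(column, set()).add(widget)
    unused = set(model_columns) - set(used)
    return used, unused

model_columns = ["orders.amount", "orders.region", "orders.legacy_flag"]
widget_refs = {
    "Revenue by Region": ["orders.amount", "orders.region"],
    "Top Regions": ["orders.region"],
}
used, unused = column_usage(model_columns, widget_refs)
# unused == {"orders.legacy_flag"}  -> safe-to-remove candidate
# used["orders.region"] lists every widget a change would touch
```

The `used` map answers "where are these widgets and columns used?" before a change, while `unused` feeds the storage and performance cleanup described above.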
The Asset Auditor helps you confidently assess scope by identifying widget distribution across dashboards, answering the questions: Where are these widgets used? Can changes be done manually, or do I need a script?

Making changes to the underlying data models, while preventing errors, has never been easier, because the Asset Auditor shows you exactly which dashboards use which data models, and which widgets use which tables and columns. When teams make modifications without full visibility, they risk disrupting critical business insights. By proactively assessing the impact of changes, organizations can prevent costly errors, reduce time spent troubleshooting, and maintain high-quality analytics.

You can't optimize what you can't see

Organizations pour resources into analytics, but without visibility into how data assets are used, inefficiencies pile up: wasted storage, slow performance, and inflated costs. For those responsible for maintaining Sisense environments, from data architects and model builders to dashboard designers, the challenge isn't just creating reports; it's ensuring the entire infrastructure runs efficiently.

The Asset Auditor changes the game by providing full transparency into how data is structured, utilized, and performing across your Sisense environment. With clear insights into dependencies, usage patterns, and optimization opportunities, teams can refine models, improve query speed, reduce storage costs, and ensure users get accurate, fast insights, all while preventing costly disruptions before they happen.

Geo Analysis or Spatial Insights - Add-Ons that Nail It!
With the importance of geographic analysis coming to light over the last few years, Sisense and QBeeQ offer geospatial add-ons that provide a quick and intuitive way to visualize data over an area of interest. Read the article to learn the full picture of an advanced geospatial analytical story.

Winning with Sisense and Redshift
Sisense and Amazon are partners, which means Sisense can create compelling value for our customers by delivering services that take advantage of AWS technologies to solve important problems. Learn how you can optimize the value of the Sisense-AWS integration.

Enhancing web security: A deep dive into single sign-on (SSO) and web access tokens (WAT)
Single Sign-On (SSO) and Web Access Tokens (WAT) are two pivotal technologies in the domain of web security and user authentication. SSO simplifies the user experience by enabling access to multiple applications with a single set of login credentials, enhancing both convenience and security. WAT, in turn, plays a crucial role in secure access management by granting temporary tokens that verify user identity and permissions across web applications. Both are integral to creating a secure and seamless digital environment, each addressing unique facets of user authentication and access control. In the following sections, we explore the mechanisms, benefits, and implementations of SSO and WAT.

A woman's journey in tech: Empowerment, growth & breaking barriers
Discover Mia's journey as a woman in tech: her experiences, challenges, and triumphs in the industry. From building trust in male-dominated spaces to advocating for women's recognition and career growth, Mia shares insights on empowerment, partnerships, and the evolving future of tech and data analytics.