Analyzing Query Success, Failures, and Performance Using the FinishQuery Table
This article explores how to analyze successful and failed queries using the Sisense FinishQuery usage analytics table. It breaks down query sources, execution phases, performance ratios, throttling impact, and error categories to help identify bottlenecks, failures, and optimization opportunities. The article also includes step-by-step implementation instructions for deploying the required data model and dashboards. This content is intended for Sisense Admin users only.
Background
Sisense provides an out-of-the-box Usage – Query dashboard when the Usage Analytics model is enabled. This dashboard offers insights into queries created and executed by users.
The goal of this article is to introduce a structured flow for Admins to review overall query performance and drill down into successful and failed queries. To support this analysis, the solution is divided into two dashboards: one focused on successful queries and one on failed queries, enabling efficient troubleshooting and performance optimization.
The Dashboards
Successful Query Performance Dashboard
This dashboard analyzes successful queries only and helps Admins evaluate overall query performance by:
- Identifying performance bottlenecks and understanding when they occur
- Analyzing which execution phase (translation, data source execution, throttling) contributes most to total query duration (see the SQL sketch after this list)
- Highlighting whether the source of performance impact is clearly identified or requires further investigation
- Reviewing query performance at the dashboard level, widget level, and individual query level
- Tracking query behavior and performance over time, including date and time-based patterns to identify peak hours, recurring slowdowns, or workload-related trends
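To make the phase breakdown concrete, here is a minimal SQL sketch of the kind of ratio calculation this dashboard performs. The column names (total_duration_ms, translation_duration_ms, datasource_duration_ms, throttling_duration_ms, error_message) are illustrative assumptions, not the actual FinishQuery schema; map them to the fields exposed by your Fact_finish_query table.

```sql
-- Hedged sketch: assumed column names, not the actual FinishQuery schema.
-- Computes each execution phase's average share of total query duration
-- per dashboard, for successful queries only.
SELECT
    dashboard_name,
    AVG(translation_duration_ms * 1.0 / total_duration_ms) AS translation_share,
    AVG(datasource_duration_ms  * 1.0 / total_duration_ms) AS datasource_share,
    AVG(throttling_duration_ms  * 1.0 / total_duration_ms) AS throttling_share,
    AVG(total_duration_ms)                                  AS avg_total_ms
FROM Fact_finish_query
WHERE total_duration_ms > 0        -- avoid division by zero
  AND error_message IS NULL        -- successful queries only
GROUP BY dashboard_name
ORDER BY avg_total_ms DESC
```

A dashboard whose throttling_share dominates points to concurrency limits rather than slow data sources, which is exactly the distinction the ratio view is meant to surface.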
For best results, it is recommended to:
- Review the ReadMe widget included in the dashboard before getting started, to gain a better understanding of the data, definitions, and assumptions used in the analysis
- Analyze the dashboard by asset query source, using single-select filtering for each option to ensure focused and accurate analysis:
  - Dashboard queries
  - Widget queries
  - Compose SDK queries
  - API / JAQL queries
- Filter by duration to review longer-running queries and understand the factors contributing to their performance (a combined filtering sketch follows this list)
- Review performance by data model source, also using single-select filtering to isolate behavior and performance patterns:
  - Live models
  - Cube-based models
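As an illustration of the single-select, duration-focused analysis recommended above, the sketch below isolates long-running widget queries against Live models. The column names (query_guid, query_source, model_type, duration_ms) and the literal filter values are assumptions for illustration; adjust them to your table's actual fields, and tune the 5-second threshold to your own definition of "long-running."

```sql
-- Hedged sketch: assumed column names and filter values.
-- Isolates one query source and one model type at a time (mirroring
-- single-select filtering) and keeps only longer-running queries.
SELECT
    query_guid,
    dashboard_name,
    widget_name,
    duration_ms
FROM Fact_finish_query
WHERE query_source = 'Widget'      -- one asset query source at a time
  AND model_type   = 'Live'        -- Live vs. cube-based, isolated
  AND duration_ms  > 5000          -- example threshold: queries over 5 s
ORDER BY duration_ms DESC
```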
Failed Queries Performance Dashboard
This dashboard analyzes failed queries only and helps Admins investigate and understand query failures by:
- Identifying the root causes of failed queries using exception messages and categorized error types (see the categorization sketch after this list)
- Pinpointing when failures occur, including exact date and time, to detect recurring patterns or peak failure windows
- Analyzing failures by data source and data model, helping determine whether issues are related to Live connections, Cubes, or specific providers
- Reviewing failed queries at the dashboard level, widget level, and individual query (Query GUID) level, enabling precise troubleshooting for specific widgets or assets
- Filtering out canceled queries so they do not skew the failure analysis
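Here is a minimal sketch of the error categorization and canceled-query exclusion described above. The error_message column and the LIKE patterns are assumptions; real exception text varies by connector and Sisense version, so treat these categories as a starting point rather than a fixed taxonomy.

```sql
-- Hedged sketch: assumed column name (error_message) and match patterns.
-- Buckets failures into coarse categories and excludes canceled queries.
SELECT error_category, COUNT(*) AS failed_queries
FROM (
    SELECT
        CASE
            WHEN LOWER(error_message) LIKE '%timeout%'    THEN 'Timeout'
            WHEN LOWER(error_message) LIKE '%permission%' THEN 'Permissions'
            WHEN LOWER(error_message) LIKE '%connect%'    THEN 'Connectivity'
            ELSE 'Other'
        END AS error_category
    FROM Fact_finish_query
    WHERE error_message IS NOT NULL                  -- failed queries only
      AND LOWER(error_message) NOT LIKE '%cancel%'   -- exclude canceled queries
) categorized
GROUP BY error_category
ORDER BY failed_queries DESC
```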
Implementation Steps
- Unzip the provided file locally. The ZIP file contains:
  - A TXT file with the SQL used to create the Fact_finish_query table
  - The Successful Query Performance dashboard file
  - The Failed Queries Overview dashboard file
- Enable Usage Analytics (if not already enabled):
  - Navigate to Admin → Monitoring & Support → Usage Analytics
  - Enable Collect Usage Analytics
- Open the Usage Analytics model:
  - Navigate to the Data tab
  - Open the Usage Analytics data model
- Create a custom table:
  - Add a new custom table named Fact_finish_query
  - Paste the SQL from the provided TXT file into the table definition (an illustrative sketch of what such a definition can look like follows these steps)
  - Click Save
- Build the data model:
  - Run Build (Full or Changes Only)
- Import the dashboards:
  - Navigate to the Analytics tab
  - Import both dashboards from the unzipped files
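For orientation only, below is a purely hypothetical sketch of the general shape a custom table over the FinishQuery source might take. It is not the SQL shipped in the TXT file; the source table name (finish_query) and every column here are assumptions, so always paste the provided definition rather than this sketch.

```sql
-- Hypothetical sketch only: NOT the SQL from the provided TXT file.
-- The source table name and all columns below are assumptions meant to
-- show the general shape of a flattening custom table.
SELECT
    fq.query_guid,
    fq.query_timestamp,
    fq.dashboard_name,
    fq.widget_name,
    fq.query_source,
    fq.duration_ms,
    fq.error_message
FROM finish_query fq
```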
Disclaimer
This is a custom solution provided for reference only. It may require modifications to work in different environments and could be affected by future product updates. Customers are responsible for testing and validating the solution before using it in production.