Ability to Specify Provider in Pulse Alerts
-------------------------------------------

Problem Statement: Inability to Scope System Alerts by Data Provider

Currently, Pulse System Alerts for build failures are all-or-nothing: either enabled for everything or disabled. In complex enterprise environments, we often run hybrid deployments where ElastiCubes are powered by vastly different backend providers (e.g., legacy MSSQL/Oracle vs. modern Snowflake/Redshift/BigQuery). When a legacy database goes down for maintenance, or when non-critical local CSV cubes fail, our administrators are flooded with Pulse notifications. This noise often causes us to miss critical failures in our primary Snowflake cloud data warehouse, which has direct cost and SLA implications.

Proposed Feature: Provider-Based Alert Routing

We need the ability to configure Pulse System Alert rules based on the underlying Provider or Connectivity Type of the ElastiCube. Specifically, in the Pulse > System Alerts > Build Failed configuration, please add condition logic or a filter that allows us to include/exclude specific providers.

Configuration Example:

- Alert Rule 1: Send to CloudOps Team IF Build Fails AND Provider = Snowflake, Redshift.
- Alert Rule 2: Send to DBA Team IF Build Fails AND Provider = MSSQL, Oracle.
- Alert Rule 3: Do NOT send alert IF Provider = CSV, Excel.

Business Impact

- Noise Reduction: Eliminates "alert fatigue" by filtering out expected failures from dev/test environments or legacy systems.
- Targeted Incident Response: Ensures the right team (Cloud Ops vs. Legacy DBAs) receives the alert immediately, reducing Mean Time to Resolution (MTTR).
- Cost Management: Helps prioritize failures that impact billable cloud compute consumption.
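A minimal sketch of the requested condition logic, expressed as data plus a small routing function. Sisense exposes no such rule schema today; every field name below is hypothetical and only makes the include/exclude idea concrete:

```python
# Hypothetical provider-based routing rules for the "Build Failed" alert.
# None of these field names exist in Sisense today.
RULES = [
    {"providers": {"Snowflake", "Redshift"}, "notify": "cloudops-team"},
    {"providers": {"MSSQL", "Oracle"}, "notify": "dba-team"},
    {"providers": {"CSV", "Excel"}, "notify": None},  # suppress the alert entirely
]

def route_build_failure(provider: str):
    """Return the team to notify for a failed build, or None to stay silent."""
    for rule in RULES:
        if provider in rule["providers"]:
            return rule["notify"]
    return "default-admins"  # unmatched providers keep today's behavior

assert route_build_failure("Snowflake") == "cloudops-team"
assert route_build_failure("CSV") is None
```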
Native Support for Salesforce Connected App Authentication
-----------------------------------------------------------

Salesforce has announced that API Access Control must be enforced, and once fully enforced, the username/password + security token method will no longer be permitted for API integrations. https://help.salesforce.com/s/articleView?id=005228838&type=1

While the current Sisense guidance allows us to continue using token-based authentication, we believe this is only a temporary gap in Salesforce’s enforcement. We expect that Salesforce will require all integrations to authenticate exclusively through an allowlisted Connected App using OAuth.

This is not a feature request in the traditional sense. It is a compliance requirement dictated by Salesforce’s security model, and other analytics and reporting tools that integrate with Salesforce already support Connected App–based OAuth authentication natively.

To ensure long-term compatibility and security, Sisense needs to provide:

- Native support for OAuth via Salesforce Connected App, without requiring manual JDBC string assembly or custom development
- A UI-driven configuration aligned with Salesforce’s allowlisting and API Access Control policies
- Clear guidance for customers migrating away from token-based authentication

Without this capability, Sisense will no longer be able to integrate with Salesforce once Salesforce completes enforcement. Please escalate this as a priority compliance feature.
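For reference, a minimal sketch of the OAuth 2.0 JWT bearer flow that Salesforce Connected Apps support and that a native connector would need to implement under the hood. The token endpoint and grant type are standard Salesforce OAuth; the consumer key, username, and key file below are placeholders:

```python
import time

import jwt       # PyJWT, used to sign the assertion
import requests

CONSUMER_KEY = "<connected-app-consumer-key>"    # placeholder
USERNAME = "integration.user@example.com"        # placeholder
with open("server.key") as f:                    # placeholder: Connected App private key
    private_key = f.read()

claims = {
    "iss": CONSUMER_KEY,                   # the Connected App's client id
    "sub": USERNAME,                       # a pre-authorized integration user
    "aud": "https://login.salesforce.com",
    "exp": int(time.time()) + 300,         # short-lived assertion
}
assertion = jwt.encode(claims, private_key, algorithm="RS256")

resp = requests.post(
    "https://login.salesforce.com/services/oauth2/token",
    data={
        "grant_type": "urn:ietf:params:oauth:grant-type:jwt-bearer",
        "assertion": assertion,
    },
)
resp.raise_for_status()
access_token = resp.json()["access_token"]  # no password or security token involved
```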
Reusable/shared connection information for data models
-------------------------------------------------------

It seems strange to me that each time I need to add another table to my model, I have to re-enter all the connection information (or pick a recent connection). If I have 10 tables in my model, they all carry their own individual connection information. If a server ever gets renamed, it'll be a crazy headache for us.

There should be a place where you define the distinct list of actual database connections (one per server or database), give it a name, apply security, etc. Then, when you go to add a table to a data model, you pick from the previously defined list of available connections.
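A minimal sketch of the proposed model: one named connection definition, referenced by every table that uses it, so a server rename is a single edit. All names below are illustrative, not an existing Sisense structure:

```python
# One shared definition per physical database: renaming the server is
# one edit here, not ten edits scattered across a model.
CONNECTIONS = {
    "warehouse-prod": {
        "server": "sql-prod-01.internal",
        "database": "analytics",
        "auth": "service-account",
    },
}

# Each table in the data model then references a connection by name.
MODEL_TABLES = [
    {"table": "orders", "connection": "warehouse-prod"},
    {"table": "customers", "connection": "warehouse-prod"},
]
```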
Different database connections on staging server vs production server
----------------------------------------------------------------------

Hi,

We have two cloud servers in Sisense: one for development (staging) and one for production. The staging server connects to our staging database; the production server connects to our production database. All cubes and dashboards are identical except for database connection strings and names.

Our Git branching strategy follows these steps:

1. Create a feature branch from staging.
2. Make changes and push them to the feature branch.
3. Open a pull request from the feature branch to staging.
4. Test changes in staging.
5. If approved, merge staging into master (production) to deploy changes.

The Issue

Git integration tracks database connection names, meaning both servers must run on either staging data or production data, which is not feasible for us.

Proposed Solution

We suggest implementing an environment variable, set per server, for storing database connections. For example:

- Use {database-server-name} as a placeholder in configurations.
- Set database-server-name = db_server_staging on staging.
- Set database-server-name = db_server_production on production.

This would allow the same codebase to dynamically connect to the appropriate database without manual adjustments. Would love to hear your thoughts on this!
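A minimal sketch of the proposed placeholder resolution, assuming the committed connection string contains only the placeholder and each server resolves it from its own environment at load time. The variable name and JDBC string are illustrative, not an existing Sisense mechanism:

```python
import os

# Both Git branches carry this identical, placeholder-only configuration.
committed = "jdbc:sqlserver://{database-server-name}:1433;databaseName=analytics"

def resolve(conn: str) -> str:
    # DATABASE_SERVER_NAME would be db_server_staging on the staging server
    # and db_server_production on the production server.
    return conn.replace("{database-server-name}", os.environ["DATABASE_SERVER_NAME"])

print(resolve(committed))
```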
Add a native connector and dialect for Dremio
----------------------------------------------

We are using Dremio as a universal semantic layer to connect to several backend databases: PostgreSQL, Oracle, HDFS, Snowflake. Using the Arrow Flight JDBC driver allows the connection to Dremio, but often results in bad queries. I believe a custom dialect will be needed.
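For context, the generic-JDBC connection used today looks roughly like this (the Arrow Flight SQL JDBC URL form; the host is a placeholder and 32010 is Dremio's default Flight port):

```
jdbc:arrow-flight-sql://dremio.example.com:32010/?useEncryption=true
```

A native connector would replace this manual setup, and a Dremio-aware dialect would let Sisense generate SQL that Dremio actually accepts.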
Create New Sample Data Sets with more data and more current date ranges
------------------------------------------------------------------------

Create new sample data sets that have a lot more data, date ranges that are current to today's date, and more of the scenarios we see across customer cases: more complex formula creation, a wide range of different KPIs, and a more complex data model that shows how dimension and fact tables can work together. Include a sample data set with custom code, custom tables, and custom columns, and have one of the tables connect to a DW to show how the connection works as well.
SingleStore (MemSQL) Connector: Use CONCAT instead of Pipe operator (||)
--------------------------------------------------------------------------

The queries Sisense generates sometimes use the pipe operator (||) to concatenate strings. However, SingleStore treats the pipe operator in two different ways, depending on the @@sql_mode engine variable and its PIPES_AS_CONCAT flag:

- flag is ON: the pipe operator is treated as CONCAT and the Sisense-generated query works
- flag is OFF (default): the pipe operator is treated as OR and the Sisense-generated query throws an error

To avoid the ambiguity, I suggest that the SingleStore (MemSQL) Connector use CONCAT instead of the pipe operator by default.
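A short illustration of the ambiguity, assuming standard SingleStore/MySQL sql_mode behavior; the table and column names are made up:

```sql
-- Default sql_mode (PIPES_AS_CONCAT not set): || is parsed as logical OR,
-- so this errors instead of concatenating.
SELECT first_name || ' ' || last_name FROM users;

-- Session-level workaround until the connector changes (note that setting
-- sql_mode replaces any flags already in effect):
SET sql_mode = 'PIPES_AS_CONCAT';

-- Unambiguous form the connector could emit regardless of sql_mode:
SELECT CONCAT(first_name, ' ', last_name) FROM users;
```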
Preview table column in query builder
--------------------------------------

It would be very helpful to be able to see the columns of a table in the Table Query section of the MySQL query builder. Currently it just lists the tables, and you have to open each table, preview it, and scroll sideways to see the available columns. You could present this as a tree view, like SQL Server does. This would help query writers see which columns belong to which tables. I included what SQL Server does for reference.
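Until then, one workaround for query writers is to list a table's columns directly from MySQL's information_schema; the schema and table names below are placeholders:

```sql
SELECT column_name, data_type
FROM information_schema.columns
WHERE table_schema = 'my_db'
  AND table_name   = 'orders'
ORDER BY ordinal_position;
```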