Ability to Specify Provider in Pulse Alerts
-------------------------------------------------------------------

Problem Statement: Inability to Scope System Alerts by Data Provider

Currently, Pulse System Alerts for build failures are binary: either enabled for everything or disabled. In complex enterprise environments, we often run hybrid deployments where ElastiCubes are powered by vastly different backend providers (e.g., legacy MSSQL/Oracle vs. modern Snowflake/Redshift/BigQuery). When a legacy database goes down for maintenance, or when non-critical local CSV cubes fail, our administrators are flooded with Pulse notifications. This noise often causes us to miss critical failures in our primary Snowflake cloud data warehouse, which has direct cost and SLA implications.

Proposed Feature: Provider-Based Alert Routing

We need the ability to configure Pulse System Alert rules based on the underlying Provider or Connectivity Type of the ElastiCube. Specifically, in the Pulse > System Alerts > Build Failed configuration, please add condition logic or a filter that lets us include/exclude specific providers.

Configuration Example (sketched in code below):

- Alert Rule 1: Send to CloudOps Team IF Build Fails AND Provider = Snowflake, Redshift.
- Alert Rule 2: Send to DBA Team IF Build Fails AND Provider = MSSQL, Oracle.
- Alert Rule 3: Do NOT send alert IF Provider = CSV, Excel.

Business Impact:

- Noise Reduction: Eliminates "alert fatigue" by filtering out expected failures from dev/test environments or legacy systems.
- Targeted Incident Response: Ensures the right team (Cloud Ops vs. Legacy DBAs) receives the alert immediately, reducing Mean Time to Resolution (MTTR).
- Cost Management: Helps prioritize failures that impact billable cloud compute consumption.
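
Since this feature does not exist yet, here is a minimal sketch of the routing semantics being requested, written as a standalone Python dispatcher. Everything in it (the rule table, provider names, team addresses, and the notify stub) is hypothetical illustration, not an existing Pulse API:

```python
# Hypothetical sketch of provider-based routing for Pulse build-failure
# alerts. None of these names correspond to a real Sisense API; the point
# is the rule semantics: match on provider, route to a team, or suppress.

from dataclasses import dataclass

@dataclass
class BuildFailure:
    cube: str
    provider: str  # e.g. "Snowflake", "MSSQL", "CSV"

# Rules are evaluated top to bottom; first match wins.
# (A recipient of None means "suppress the alert".)
ROUTING_RULES = [
    ({"Snowflake", "Redshift"}, "cloudops@example.com"),   # Rule 1
    ({"MSSQL", "Oracle"},       "dba-team@example.com"),   # Rule 2
    ({"CSV", "Excel"},          None),                     # Rule 3: no alert
]

def route_alert(event: BuildFailure) -> None:
    for providers, recipient in ROUTING_RULES:
        if event.provider in providers:
            if recipient is None:
                print(f"[suppressed] {event.cube} ({event.provider})")
            else:
                print(f"[notify {recipient}] build failed: {event.cube} ({event.provider})")
            return
    # Default: fall back to current behaviour (alert everyone).
    print(f"[notify default admins] build failed: {event.cube} ({event.provider})")

if __name__ == "__main__":
    route_alert(BuildFailure("SalesCube", "Snowflake"))  # -> CloudOps
    route_alert(BuildFailure("AdHocUpload", "CSV"))      # -> suppressed
```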

Datamodel should list all dependent dashboards, not just those where it is the primary model
--------------------------------------------------------------------------------------------

See the attached pictures, or reproduce it like this:

1. Open a datamodel.
2. Click the Dashboards dropdown.
3. Click Open Dashboard.

This lists dependent dashboards, but it includes only dashboards where this data model is the "primary" data source. That is dangerous: it made me believe that "No dashboards are available for this model", and I nearly deleted the model. The list should show all dependent dashboards. For now I will use the API to check dependencies, as sketched below.
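
A sketch of that API-based dependency check, assuming the standard Sisense REST v1 endpoints for dashboards and widgets. The response field names used here (`datasource.title` on both dashboards and widgets) are assumptions worth verifying against your own /api/v1/dashboards output:

```python
# Sketch: find every dashboard that depends on a given data model, including
# dashboards where the model is only used by individual widgets (not the
# primary datasource). Assumes Sisense REST v1; field names are best-effort
# guesses -- verify against your instance's actual JSON.

import requests

BASE = "https://your-sisense-host"                  # placeholder host
HEADERS = {"Authorization": "Bearer <API_TOKEN>"}   # placeholder token
MODEL_TITLE = "My ElastiCube"                       # the data model to check

def dependent_dashboards(model_title: str) -> list[str]:
    dashboards = requests.get(f"{BASE}/api/v1/dashboards", headers=HEADERS).json()
    hits = []
    for dash in dashboards:
        # Primary datasource match.
        if (dash.get("datasource") or {}).get("title") == model_title:
            hits.append(dash["title"])
            continue
        # Widget-level match (covers non-primary usage).
        widgets = requests.get(
            f"{BASE}/api/v1/dashboards/{dash['oid']}/widgets", headers=HEADERS
        ).json()
        if any((w.get("datasource") or {}).get("title") == model_title for w in widgets):
            hits.append(dash["title"])
    return hits

if __name__ == "__main__":
    for title in dependent_dashboards(MODEL_TITLE):
        print(title)
```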

Let designers see data models
-----------------------------

My designer and data-designer users want to see data models shared with them; currently, Sisense only shows you data models that you have edit permission on. They want to see the relationships, tables, and fields so they can understand how to use the data, fix issues in their dashboards, and confirm bugs in the data models. The visual data model page does that far better than reading through a list of dimensions.

Similar to this post: https://community.sisense.com/discussions/build_analytics/how-to-grant-view-only-access-on-the-elasticubes-to-designers/29092

Usage Analytics: Track Field Usage Across Dashboards
----------------------------------------------------

Idea: could Usage Analytics reports tell us which dashboards use a certain field? Filter the Usage Analytics report by a field, and see all dashboards that use it. It would also be beneficial to track table usage in dashboards.

Use Case: The column region will be renamed to region_usa in our SQL query, and this could potentially break dashboards. To be proactive, I would like to know which dashboards currently use the field 'region' so that I can redirect them to the correct field, 'region_usa', once the Sisense schema and SQL have been updated. A stopgap scan is sketched below.
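
Until this exists in Usage Analytics, one hedged stopgap: export dashboards to .dash files (which are JSON) and scan their widget metadata for the dimension string. The `[Table.field]` notation is how Sisense JAQL refers to dims, but treat the exact nesting as an assumption and spot-check one export first; the table name below is hypothetical:

```python
# Stopgap sketch: scan exported dashboard (.dash) files for references to a
# field. The files are JSON, and widget metadata refers to fields with JAQL
# dim strings such as "[MyTable.region]" -- verify the structure against one
# of your own exports before relying on this.

import json
from pathlib import Path

FIELD_REF = "[MyTable.region]"       # hypothetical table name; adjust to your schema
EXPORT_DIR = Path("./dashboard_exports")

def references_field(node, needle: str) -> bool:
    """Recursively search any JSON structure for a dim string."""
    if isinstance(node, dict):
        return any(references_field(v, needle) for v in node.values())
    if isinstance(node, list):
        return any(references_field(v, needle) for v in node)
    return isinstance(node, str) and needle in node

for dash_file in EXPORT_DIR.glob("*.dash"):
    data = json.loads(dash_file.read_text(encoding="utf-8"))
    if references_field(data, FIELD_REF):
        print(f"{dash_file.name} uses {FIELD_REF}")
```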

Refresh schema for all tables
-----------------------------

Can we get a "refresh schema for all tables" button?

Reason: Our tables are usually "Select * From AnalyticsSchema.ViewName". We control which fields to return by editing the view, not the Sisense table definition. When a field gets added, removed, or changed, we need to refresh the schema. That's fine to do manually while you're working on that datamodel and its views, but we need to refresh all tables when:

- We copy a datamodel to a different server. We need to refresh the schema at least to double-check that the views are as expected on the new server. (If any fields have changed, then I'll need to go fix any widgets using those fields, or, more likely, update the view to include them.)
- A view gets edited, perhaps for a different datamodel, and my datamodel hasn't been updated.
- I edit several views and want to refresh the schema for all those Sisense tables. If I've changed fields that are in use, I'll need to go into each table manually anyway, so it doesn't matter; but I've had a case where I removed unused fields from several views and then had to click refresh schema on every table individually.

A drift-detection sketch that narrows down which tables actually need refreshing is shown below.
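
Not a substitute for the button, but a sketch of how the work can be narrowed today: snapshot each view's column list from INFORMATION_SCHEMA and diff it on the next run, so only tables whose views actually changed need a manual refresh. This assumes a SQL Server source reachable via pyodbc; the connection string is a placeholder, and the schema name comes from the "Select * From AnalyticsSchema.ViewName" pattern above:

```python
# Sketch: detect which views under AnalyticsSchema changed shape since the
# last snapshot, so only those Sisense tables need a manual "refresh schema".
# Assumes a SQL Server source and pyodbc; the DSN is a placeholder.

import json
import pyodbc
from pathlib import Path

CONN_STR = "DSN=AnalyticsDB"            # placeholder connection string
SNAPSHOT = Path("view_columns.json")    # previous run's column lists

def current_columns() -> dict[str, list[str]]:
    sql = """
        SELECT TABLE_NAME, COLUMN_NAME
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_SCHEMA = 'AnalyticsSchema'
        ORDER BY TABLE_NAME, ORDINAL_POSITION
    """
    cols: dict[str, list[str]] = {}
    with pyodbc.connect(CONN_STR) as conn:
        for table, column in conn.execute(sql):
            cols.setdefault(table, []).append(column)
    return cols

now = current_columns()
before = json.loads(SNAPSHOT.read_text()) if SNAPSHOT.exists() else {}

for view, columns in now.items():
    if columns != before.get(view):
        print(f"refresh needed: {view}")

SNAPSHOT.write_text(json.dumps(now, indent=2))
```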

Reusable/shared connection information for data models
-------------------------------------------------------

It seems strange to me that each time I need to add another table to my model, I have to re-enter all the connection information (or pick a recent connection). If I have 10 tables in my model, they each carry their own individual connection information. If a server name ever gets renamed, it'll be a crazy headache for us.

There should be a place where you define the distinct list of actual database connections (one per server or database), give each a name, apply security, and so on. Then, when you add a table to a data model, you pick from the previously defined list of connections. A sketch of the idea follows.
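
Purely to make the proposal concrete, here is a hypothetical sketch of the indirection being asked for: tables reference a named connection, so a server rename happens in exactly one place. None of this reflects an existing Sisense API:

```python
# Hypothetical sketch of the requested feature: a named connection registry,
# with tables referencing connections by name rather than each embedding
# their own copy of the connection details. Not a real Sisense API.

from dataclasses import dataclass, field

@dataclass
class Connection:
    server: str
    database: str
    allowed_groups: set[str] = field(default_factory=set)  # "apply security"

# One definition per server/database...
REGISTRY: dict[str, Connection] = {
    "warehouse-prod": Connection("sql-prod-01", "Analytics", {"BI-Admins"}),
}

# ...and every table just names a registry entry.
@dataclass
class ModelTable:
    name: str
    connection_name: str   # indirection instead of embedded connection info

tables = [ModelTable("Orders", "warehouse-prod"),
          ModelTable("Customers", "warehouse-prod")]

# A server rename is now a single-line change:
REGISTRY["warehouse-prod"].server = "sql-prod-02"

for t in tables:
    conn = REGISTRY[t.connection_name]
    print(f"{t.name} -> {conn.server}/{conn.database}")
```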

Feature request: Let me connect a livemodel to a source with different table names
----------------------------------------------------------------------------------

The problem: I want to change my livemodel's connection to a different database. The new database has the same tables, but they belong to a different schema or have different names. As a result, I get the message "select a different database which includes all tables under one schema", so I cannot change to the new database.

Feature request: When I change the source database in an elasticube, I get an interface where I can map my datamodel's tables to the new names. I'd like the same interface for livemodels. Better yet, let me switch to the invalid connection, and then show the error only when I try to publish the model.

Workarounds:

A) In the old database, I could create the new table names. Then I can point my livemodel to the new names before switching connections.
B) I could edit the .smodel file.
C) Tech support suggested this: use the REST API 2.0 endpoint PATCH /datamodels/{datamodelId}/schema/datasets/{datasetId}. I used it with the following request body, and then I could change the database (a scripted version follows below):

```json
{
  "database": "c00845810new",
  "schemaName": "dbo",
  "type": "live",
  "connection": {
    "oid": "99380074-a2bb-4cd7-8bc3-7a5d20152b82"
  },
  "liveQuerySettings": {
    "timeout": 60000,
    "autoRefresh": false,
    "refreshRate": 30000,
    "resultLimit": 5000
  }
}
```

Related discussion in support case 500Pk00000s1fVCIAY.
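
For repeatability, workaround C can be scripted. A minimal sketch wrapping the PATCH call tech support suggested, assuming the v2 API is mounted at /api/v2 as usual; the host, token, and the datamodel/dataset OIDs are placeholders, and the body is taken verbatim from the case above:

```python
# Sketch: script workaround (C) -- retarget a live dataset to a new database
# via REST API 2.0. Host, token, and the two path OIDs are placeholders; the
# body mirrors the one tech support provided above.

import requests

BASE = "https://your-sisense-host"                  # placeholder
HEADERS = {"Authorization": "Bearer <API_TOKEN>"}   # placeholder

DATAMODEL_ID = "<datamodel-oid>"                    # placeholder
DATASET_ID = "<dataset-oid>"                        # placeholder

body = {
    "database": "c00845810new",
    "schemaName": "dbo",
    "type": "live",
    "connection": {"oid": "99380074-a2bb-4cd7-8bc3-7a5d20152b82"},
    "liveQuerySettings": {
        "timeout": 60000,
        "autoRefresh": False,
        "refreshRate": 30000,
        "resultLimit": 5000,
    },
}

resp = requests.patch(
    f"{BASE}/api/v2/datamodels/{DATAMODEL_ID}/schema/datasets/{DATASET_ID}",
    json=body,
    headers=HEADERS,
)
resp.raise_for_status()
print("dataset retargeted:", resp.status_code)
```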

Different database connections on staging server vs production server
----------------------------------------------------------------------

Hi,

We have two cloud servers in Sisense: one for development (staging) and one for production. The staging server connects to our staging database; the production server connects to our production database. All cubes and dashboards are identical except for database connection strings and names.

Our Git branching strategy follows these steps:

1. Create a feature branch from staging.
2. Make changes and push them to the feature branch.
3. Open a pull request from the feature branch to staging.
4. Test changes in staging.
5. If approved, merge staging into master (production) to deploy changes.

The Issue

Git integration tracks database connection names, meaning both servers must either run on staging data or on production data, which is not feasible for us.

Proposed Solution

We suggest implementing a per-server environment variable for storing database connections. For example:

- Use {database-server-name} as a placeholder in configurations.
- Set database-server-name = db_server_staging on staging.
- Set database-server-name = db_server_production on production.

This would allow the same codebase to dynamically connect to the appropriate database without manual adjustments. A sketch of how such substitution could work is below. Would love to hear your thoughts on this!
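
To make the proposal concrete, here is a sketch of the substitution step a deploy pipeline could run over exported model files before import, using the placeholder syntax from the post. The file layout and variable name are illustrative, not an existing Sisense mechanism:

```python
# Sketch of the proposed placeholder substitution, as a deploy-time step.
# Model files committed to Git contain "{database-server-name}", and each
# server's pipeline substitutes its own value before import. The layout and
# env var are illustrative -- Sisense has no such mechanism today.

import os
from pathlib import Path

# On staging:    export DATABASE_SERVER_NAME=db_server_staging
# On production: export DATABASE_SERVER_NAME=db_server_production
server_name = os.environ["DATABASE_SERVER_NAME"]

SRC = Path("models")        # templates committed to Git, with placeholders
OUT = Path("models_build")  # resolved files to import into this server
OUT.mkdir(exist_ok=True)

for template in SRC.glob("*.smodel"):
    text = template.read_text(encoding="utf-8")
    resolved = text.replace("{database-server-name}", server_name)
    (OUT / template.name).write_text(resolved, encoding="utf-8")
    print(f"resolved {template.name} for {server_name}")
```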

Seconds Granularity: Support DateTimeLevel for seconds & milliseconds (ss:ms) for ElastiCubes
----------------------------------------------------------------------------------------------

Description

We hit limitations when working with DateTime fields in dashboards built on ElastiCubes (not Live models). These limitations affect our ability to accurately display and analyze event/log/trade data.

Issues:

- We are unable to display the full timestamp including seconds; the format currently shows only up to minutes. Since our logs are time-sensitive and measured in seconds, this detail is essential.
- We cannot display the full date and time in a single field. As a workaround, we have to split the data into two separate fields (one for date and one for time).
- Sorting is also a problem. We can only sort by one field at a time, so if we sort by date, the time is not sorted correctly, making it difficult to follow the log order.

We've noticed that Live models handle DateTime fields more effectively and allow displaying timestamps with seconds (such as "Every Second"). However, due to the size and complexity of our data model, switching to a Live model is not an option for us.

Request: We would like improved support for DateTime fields in ElastiCube dashboards, including:

- The ability to show full timestamps with seconds.
- Support for displaying date and time in a single field.
- Better sorting logic when working with split DateTime fields.

The alternative solution https://community.sisense.com/kb/faqs/show-full-date-format-in-the-pivot-or-table-widget/25504 also does not work for us: you can add :ss: to the value format, but it is simply not supported, and it feels like a bug rather than by design. A possible ingestion-side workaround is sketched below.
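
One possible ingestion-side mitigation, if your pipeline allows a pre-processing step before the cube build (a sketch; column names are hypothetical and it only applies where you control the feed): materialize a text column carrying the full timestamp for display, plus a numeric epoch column that sorts correctly:

```python
# Sketch of an ingestion-side mitigation (assumes you can pre-process the
# feed before the ElastiCube build; column names are hypothetical): add a
# display string with full second/millisecond precision plus a numeric epoch
# key, so a table widget shows one readable field and sorts by the numeric one.

import pandas as pd

df = pd.DataFrame({
    "event_ts": pd.to_datetime([
        "2024-05-01 09:15:07.250",
        "2024-05-01 09:15:07.100",
        "2024-05-01 09:14:59.900",
    ]),
    "message": ["fill", "order", "quote"],
})

# Single human-readable field, down to milliseconds (trim microseconds to ms).
df["event_ts_display"] = df["event_ts"].dt.strftime("%Y-%m-%d %H:%M:%S.%f").str[:-3]

# Numeric sort key a widget can sort by (epoch milliseconds).
df["event_ts_epoch_ms"] = df["event_ts"].astype("int64") // 1_000_000

print(df.sort_values("event_ts_epoch_ms").to_string(index=False))
```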