Semantic Layer tables stack up like a deck of cards
Hi All,

About once a month, all of the tables in our semantic layer stack on top of one another. I'm not sure why. It takes me about an hour to put them back in their right places. Has anyone else had this problem, and if so, how did you stop it from recurring?

Cheers.

Seeking Best Practice for Live Detail Reporting in Sisense (Replacing SSRS)
Afternoon Sisense community,

Our team is looking to replicate the functionality of a crucial SSRS report within Sisense. This report is used by a department to obtain a detailed list of jobs for a specific month. The workflow involves:

1. Running the report for a selected month (typically the current or previous month).
2. Reviewing the output for discrepancies.
3. Updating the source system based on the review.
4. Re-running the report immediately to verify the changes (requiring live data).

Current Sisense Implementation & Performance Issue

I've attempted to recreate this report's dataset using a Live Model connected to a Redshift SQL view. The view is complex: it contains approximately 50 columns of detailed data, and it involves JOINs across 15 different tables to consolidate all necessary dimensions and metrics.

The issue: the performance of this Live Model is unacceptable. Users are accustomed to the SSRS report running a stored procedure and returning the filtered data in under 30 seconds; my Sisense Live Model is timing out.

Constraints & Goal

Requirement: the data must be live (no ElastiCube, as users need immediate reflection of system changes after updates).
Target performance: sub-30-second return for monthly filtered data.

Request for Guidance

Given the high number of columns, the multiple joins, and the strict requirement for live data with fast filtering (specifically by month), what would be the recommended best practice for implementing this detailed report in Sisense? Are there specific Sisense configurations or data modeling techniques for live connections that would address this performance bottleneck while meeting the "live" requirement?

Thank you for your insights!

Ability to Specify Provider in Pulse Alerts
Problem Statement: Inability to Scope System Alerts by Data Provider

Currently, Pulse System Alerts for build failures are binary: either enabled for everything or disabled. In complex enterprise environments, we often run hybrid deployments where ElastiCubes are powered by vastly different backend providers (e.g., legacy MSSQL/Oracle vs. modern Snowflake/Redshift/BigQuery). When a legacy database goes down for maintenance, or when non-critical local CSV cubes fail, our administrators are flooded with Pulse notifications. This noise often causes us to miss critical failures in our primary Snowflake cloud data warehouse, which has direct cost and SLA implications.

Proposed Feature: Provider-Based Alert Routing

We need the ability to configure Pulse System Alert rules based on the underlying provider or connectivity type of the ElastiCube. Specifically, in the Pulse > System Alerts > Build Failed configuration, please add condition logic or a filter that allows us to include/exclude specific providers.

Configuration example:

Alert Rule 1: Send to CloudOps Team IF Build Fails AND Provider = Snowflake, Redshift.
Alert Rule 2: Send to DBA Team IF Build Fails AND Provider = MSSQL, Oracle.
Alert Rule 3: Do NOT send alert IF Provider = CSV, Excel.

Business Impact

Noise reduction: eliminates "alert fatigue" by filtering out expected failures from dev/test environments or legacy systems.
Targeted incident response: ensures the right team (Cloud Ops vs. legacy DBAs) receives the alert immediately, reducing Mean Time to Resolution (MTTR).
Cost management: helps prioritize failures that impact billable cloud compute consumption.

Datamodel should list all dependent dashboards not just where this is the primary model
See pictures, or go:

1. Open the datamodel.
2. Click the Dashboards dropdown.
3. Click Open Dashboard.

This lists dependent dashboards, but it includes only dashboards where this data model is the "primary" data source. That is dangerous: it made me think that "No dashboards are available for this model", and I nearly deleted the model. The list should show all dependent dashboards. I will use the API to check dependencies in future.

Let designers see data models
My designer and data-designer users want to see data models shared with them. Currently, Sisense only shows you data models that you have edit permission on. They want to see the relationships, tables, and fields so that they can understand how they should use the data, fix issues in their dashboards, and confirm bugs in the data models. The visual data model page does that better than reading through a list of dimensions.

Similar to this post: https://community.sisense.com/discussions/build_analytics/how-to-grant-view-only-access-on-the-elasticubes-to-designers/29092

Usage Analytics- Track Field Usage Across Dashboards
Idea: usage analytics reports should be able to tell you which dashboards use a certain field. Filter the usage analytics report by a field, and you would see all dashboards using that field. It would also be beneficial to track table usage in dashboards.

Use case: 'region' will be renamed to 'region_usa' in our SQL query, and this could potentially break dashboards. To be proactive, I would like to know which dashboards currently use the field 'region' so that I can redirect them to the correct field, 'region_usa', once the Sisense schema & SQL have been updated.

Refresh schema for all tables
Can we get a "refresh schema for all tables" button?

Reason: our tables are usually "Select * From AnalyticsSchema.ViewName". We control which fields to return by editing the view, not the Sisense table definition. When a field gets added, removed, or changed, we need to refresh the schema. That's fine to do manually as you're working on that datamodel and its views, but we need to refresh all tables when:

1. We copy a datamodel to a different server. We need to refresh the schema at least to double-check that the views are as expected on the new server. (If any fields have changed, then I'll need to go fix any widgets using those fields, or, more likely, update the view to include them.)
2. A view gets edited, perhaps for a different datamodel, and my datamodel hasn't been updated.
3. I edit several views and want to refresh the schema for all those Sisense tables. If I've changed fields that are in use, then I'll need to go into each table manually anyway, so it doesn't matter; but I've had a case where I removed unused fields from several views and then had to click refresh schema on every table individually.

Reusable/shared connection information for data models
It seems strange to me that each time I need to add another table to my model, I have to re-enter all the connection information (or pick a recent connection). If I have 10 tables in my model, they all have their own individual connection information. If a server name ever changes, it'll be a crazy headache for us.

There should be a place where you define the distinct list of actual database connections (one per server or database), give each one a name, apply security, etc. Then, when you go to add a table to a data model, you pick from the previously defined list of available connections.

Feature request: Let me connect a livemodel to a source with different table names
The problem: I want to change my livemodel's connection to a different database. The new database has the same tables, but they belong to a different schema or have different names. As a result, I get the message "select a different database which includes all tables under one schema", so I cannot change to the new database.

Feature request: when I change the source database in an elasticube, I get an interface where I can map my datamodel's tables to the new names. I'd like the same interface for livemodels. Better yet, let me switch to the invalid connection, and then show the error only when I try to publish the model.

Workarounds:

A) In the old database, I could create the new table names. Then I can point my livemodel to the new names before switching the connection.
B) I could edit the .smodel file.
C) Tech support suggested using the REST API 2.0 endpoint PATCH /datamodels/{datamodelId}/schema/datasets/{datasetId}. I used it with the following request body, and then I could change the database:

{
  "database": "c00845810new",
  "schemaName": "dbo",
  "type": "live",
  "connection": {
    "oid": "99380074-a2bb-4cd7-8bc3-7a5d20152b82"
  },
  "liveQuerySettings": {
    "timeout": 60000,
    "autoRefresh": false,
    "refreshRate": 30000,
    "resultLimit": 5000
  }
}

Related discussion in support case 500Pk00000s1fVCIAY.
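For anyone who wants to script workaround C, here is a minimal Python sketch that builds the PATCH request. It assumes the v2 endpoint is served under /api/v2/ and that SISENSE_HOST, DATAMODEL_ID, DATASET_ID, and the bearer token are hypothetical placeholders you replace with your own values; the body mirrors the one tech support suggested above.

```python
import json

# Hypothetical placeholders -- replace with your own values.
SISENSE_HOST = "https://your-sisense-host"
DATAMODEL_ID = "your-datamodel-oid"
DATASET_ID = "your-dataset-oid"

def build_retarget_request(database: str, schema_name: str, connection_oid: str):
    """Build the REST API 2.0 PATCH request that repoints a live
    dataset at a different database/schema/connection."""
    url = (f"{SISENSE_HOST}/api/v2/datamodels/"
           f"{DATAMODEL_ID}/schema/datasets/{DATASET_ID}")
    body = {
        "database": database,
        "schemaName": schema_name,
        "type": "live",
        "connection": {"oid": connection_oid},
        "liveQuerySettings": {
            "timeout": 60000,
            "autoRefresh": False,
            "refreshRate": 30000,
            "resultLimit": 5000,
        },
    }
    return url, body

url, body = build_retarget_request(
    "c00845810new", "dbo", "99380074-a2bb-4cd7-8bc3-7a5d20152b82")

# Inspect the payload before sending; dispatch with, e.g.:
#   requests.patch(url, json=body,
#                  headers={"Authorization": f"Bearer {api_token}"})
print(json.dumps(body, indent=2))
```

Keeping the request construction separate from the HTTP call makes it easy to dry-run the payload against a few datasets before actually patching anything.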