Semantic Layer tables stack up like a deck of cards
Hi All,

About once a month, all of the tables in our semantic layer stack on top of one another. I'm not sure why. It takes me about an hour to put them back in their right places. Has anyone else had this problem, and if so, how did you stop it from recurring?

Cheers.
Seeking Best Practice for Live Detail Reporting in Sisense (Replacing SSRS)

Afternoon Sisense community,

Our team is looking to replicate the functionality of a crucial SSRS report within Sisense. This report is used by a department to obtain a detailed list of jobs for a specific month. The workflow involves:

1. Running the report for a selected month (typically the current or previous month).
2. Reviewing the output for discrepancies.
3. Updating the source system based on the review.
4. Re-running the report immediately to verify the changes (requiring live data).

Current Sisense Implementation & Performance Issue

I've attempted to recreate this report's dataset using a Live Model connected to a Redshift SQL view. The view is complex:

- It contains approximately 50 columns of detailed data.
- It involves JOINs across 15 different tables to consolidate all necessary dimensions and metrics.

The Issue: The performance of this Live Model is unacceptable. Users are accustomed to the SSRS report running a stored procedure and returning the filtered data in under 30 seconds; my Sisense Live Model is timing out.

Constraints & Goal

- Requirement: The data must be live (no ElastiCube, as users need immediate reflection of system changes after updates).
- Target Performance: Sub-30-second return for monthly filtered data.

Request for Guidance

Given the high number of columns, the multiple joins, and the strict requirement for live data with fast filtering (specifically by month), what would be the recommended best practice for implementing this detailed report in Sisense? Are there specific Sisense configurations or data modeling techniques for live connections that would address this performance bottleneck while meeting the "live" requirement?

Thank you for your insights!
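For context, one idea I've been weighing (not yet tested, and I'm not sure it satisfies the strict "live" requirement) is pre-flattening the join on the Redshift side so the Live Model queries a single wide relation instead of re-running the 15-table join on every request. A rough sketch, with made-up object names rather than our real schema:

```sql
-- Sketch only: pre-flatten the join into one wide, month-sorted relation so each
-- Live Model query scans a single table. All names below are hypothetical.
CREATE MATERIALIZED VIEW rpt.job_detail_flat
SORTKEY (job_month)      -- the monthly filter column, so the scan stays cheap
AUTO REFRESH YES         -- Redshift refreshes after base-table changes, if the query
                         -- qualifies; otherwise schedule REFRESH MATERIALIZED VIEW
AS
SELECT
    j.job_id,
    DATE_TRUNC('month', j.job_date) AS job_month,
    j.status,
    c.customer_name,
    d.department_name
    -- ... remaining detail columns and the other joins ...
FROM jobs j
JOIN customers   c ON c.customer_id   = j.customer_id
JOIN departments d ON d.department_id = j.department_id;
```

The catch is that auto refresh is near-real-time rather than strictly live, so I'd still appreciate guidance on whether there's a better Sisense-side approach.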
Refresh schema for all tables

Can we get a "refresh schema for all tables" button?

Reason: Our tables are usually "Select * From AnalyticsSchema.ViewName". We control which fields to return by editing the view, not the Sisense table definition. When a field gets added, removed, or changed, we need to refresh the schema. That's fine to do manually while you're working on that datamodel and its views, but we need to refresh them all when:

- We copy a datamodel to a different server. We need to refresh the schema at least to double-check that the views are as expected on the new server. (If any fields have changed, then I'll need to go fix any widgets using those fields, or, more likely, update the view to include them.)
- A view gets edited, perhaps for a different datamodel, and my datamodel hasn't been updated.
- I edit several views and want to refresh the schema for all of those Sisense tables. If I've changed fields that are in use, I'll need to go into each table manually anyway, so it doesn't matter; but I've had a case where I removed unused fields from several views and then had to click "refresh schema" on every table individually.
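To make that concrete, a typical change on our side looks something like this (illustrative names, Postgres-style syntax; on SQL Server it would be CREATE OR ALTER VIEW):

```sql
-- Illustrative only. The Sisense table definition never changes; it stays
-- "SELECT * FROM AnalyticsSchema.ViewName", so a column added here only shows up
-- in Sisense after that table's schema is refreshed in the data model.
CREATE OR REPLACE VIEW AnalyticsSchema.ViewName AS
SELECT
    OrderId,
    OrderDate,
    CustomerName,
    NetAmount   -- newly exposed field: invisible to the datamodel until refresh
FROM AnalyticsSchema.Orders;
```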
QBeeQ Asset Auditor: A smarter way to manage your Sisense data assets

Optimize to cut storage and processing costs, refine data models, and boost performance

Query and dashboard performance are closely linked, and both are often hindered by bloated data models. Excessive columns, unused tables, and inefficient relationships force queries to process unnecessary data, slowing down dashboards. This leads to frustration, delayed insights, and lower productivity.

Use the Asset Auditor dashboards to:

- See all your data sources and follow the dependencies across data sources, data models, tables, columns, dashboards, and widgets.
- Identify table and column utilization across dashboards and widgets for better model design.
- Target and remove empty and unused data sources, data models, columns, and dashboards.

By reducing or removing unused tables and columns and optimizing queries, organizations can drive down storage and processing costs while increasing performance and user engagement.

Expose (and prevent) hidden dashboard issues affecting your users

A key risk in delivering analytics is unintended downstream effects from data model changes, causing broken widgets, missing calculations, and misleading insights. Without full visibility, teams may disrupt critical business data. Errors often surface only when users load dashboards, despite backend checks, leading to frustration, missed insights, and wasted troubleshooting time.

The Asset Auditor helps you identify the source of these errors, from deleted data sources or missing data down to widget-level errors, reducing the time it takes to troubleshoot, identify root causes, and push fixes. Use the Asset Auditor at each step to verify that dashboards are error-free when delivered to end users.

Plan and execute changes with more confidence

When shared elements are scattered across dashboards, making changes can feel overwhelming without knowing the full scope. The Asset Auditor helps you confidently assess scope by identifying widget distribution across dashboards to answer the questions: Where are these widgets used? Can changes be made manually, or do I need a script?

Making changes to the underlying data models while preventing errors has never been easier, because the Asset Auditor shows you exactly which dashboards are using which data models, and which widgets are using which tables and columns. When teams make modifications without full visibility, they risk disrupting critical business insights. By proactively assessing the impact of changes, organizations can prevent costly errors, reduce time spent troubleshooting, and maintain high-quality analytics.

You can't optimize what you can't see

Organizations pour resources into analytics, but without visibility into how data assets are used, inefficiencies pile up, wasting storage, slowing performance, and inflating costs. For those responsible for maintaining Sisense environments, from data architects and model builders to dashboard designers, the challenge isn't just creating reports; it's ensuring the entire infrastructure runs efficiently.

Asset Auditor changes the game by providing full transparency into how data is structured, utilized, and performing across your Sisense environment. With clear insights into dependencies, usage patterns, and optimization opportunities, teams can refine models, improve query speed, reduce storage costs, and ensure users get accurate, fast insights, all while preventing costly disruptions before they happen.
Reusable/shared connection information for data models

It seems strange to me that each time I need to add another table to my model, I have to re-enter all the connection details (or pick a recent connection). If I have 10 tables in my model, they each have their own individual connection information. If a server ever gets renamed, it'll be a crazy headache for us.

There should be a place where you define the distinct list of actual database connections (one per server or database), give each a name, apply security, and so on. Then, when you go to add a table to a data model, you pick from the previously defined list of available connections.
Feature request: Let me connect a livemodel to a source with different table names

The problem: I want to change my livemodel's connection to a different database. The new database has the same tables, but they belong to a different schema or have different names. As a result, I get the message "select a different database which includes all tables under one schema", so I cannot change to the new database.

Feature request: When I change the source database in an elasticube, I get an interface where I can map my datamodel's tables to the new names. I'd like the same interface for livemodels. Better yet, let me switch to the invalid connection, and then show the error only when I try to publish the model.

Workarounds:

A) In the old database, I could create the new table names. Then I can point my livemodel to the new names before switching the connection.
B) I could edit the .smodel file.
C) Tech support suggested this: use REST API 2.0 PATCH /datamodels/{datamodelId}/schema/datasets/{datasetId}. I used it with the following request body, and then I could change the database.

==
{
  "database": "c00845810new",
  "schemaName": "dbo",
  "type": "live",
  "connection": {
    "oid": "99380074-a2bb-4cd7-8bc3-7a5d20152b82"
  },
  "liveQuerySettings": {
    "timeout": 60000,
    "autoRefresh": false,
    "refreshRate": 30000,
    "resultLimit": 5000
  }
}
==

Related discussion in support case 500Pk00000s1fVCIAY.
Different database connections on staging server vs production server

Hi,

We have two cloud servers in Sisense: one for development (staging) and one for production. The staging server connects to our staging database; the production server connects to our production database. All cubes and dashboards are identical except for database connection strings and names.

Our Git branching strategy follows these steps:

1. Create a feature branch from staging.
2. Make changes and push them to the feature branch.
3. Open a pull request from the feature branch to staging.
4. Test changes in staging.
5. If approved, merge staging into master (production) to deploy the changes.

The Issue

Git integration tracks database connection names, meaning both servers must either run on staging data or on production data, which is not feasible for us.

Proposed Solution

We suggest supporting a per-environment variable for storing database connections. For example:

- Use {database-server-name} as a placeholder in configurations.
- Set database-server-name = db_server_staging on staging.
- Set database-server-name = db_server_production on production.

This would allow the same codebase to dynamically connect to the appropriate database without manual adjustments.

Would love to hear your thoughts on this!
Seconds Granularity: Support DateTimeLevel for seconds & milliseconds (ss:ms) for Elasticubes

Description

We hit limitations when working with DateTime fields in dashboards built on an ElastiCube (not Live models). These limitations affect our ability to accurately display and analyze event/log/trade data.

Issues:

- We are unable to display the full timestamp including seconds; the format currently shows only up to minutes. Since our logs are time-sensitive and measured in seconds, this detail is essential.
- We cannot display the full date and time in a single field. As a workaround, we have to split the data into two separate fields (one for date and one for time).
- Sorting is also a problem. We can only sort by one field at a time, so if we sort by date, the time is not sorted correctly, making it difficult to follow the log order.

We've noticed that Live models handle DateTime fields more effectively and allow displaying timestamps with seconds (such as "Every Second"). However, due to the size and complexity of our data model, switching to a Live model is not an option for us.

Request: We would like improved support for DateTime fields in ElastiCube dashboards, including:

- The ability to show full timestamps with seconds.
- Support for displaying date and time in a single field.
- Better sorting logic when working with split DateTime fields.

The alternative solution https://community.sisense.com/kb/faqs/show-full-date-format-in-the-pivot-or-table-widget/25504 also does not work for us: you can add ":ss:" in the value format, but it simply isn't supported, which feels like a bug rather than by design.
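For reference, the split-field workaround we use today looks roughly like this in the ElastiCube import query (names are illustrative and the syntax is Postgres/Redshift-style; the exact functions depend on the source database):

```sql
-- Current workaround, sketched with illustrative names: split the timestamp at
-- import time because the ElastiCube date level stops at minutes.
SELECT
    log_id,
    event_ts,                                          -- original timestamp (widgets show minutes at most)
    CAST(event_ts AS DATE)          AS event_date,     -- date part, sortable and filterable
    TO_CHAR(event_ts, 'HH24:MI:SS') AS event_time_txt  -- text field that keeps the seconds visible
FROM event_log;
-- The pain point: once date and time live in two fields, the widget can only sort
-- by one of them, so rows within a day fall out of true log order.
```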
Enhance Usage Model to include segmentation of Email Subscription vs Front-End queries

When analyzing dashboard and widget query performance, I would like to be able to focus on only front-end user queries, or just on email subscriptions, or both.

Unfortunately, the source data of the Usage Analytics Model does not contain information that allows us to distinguish whether a dashboard was executed via the Web UI or through a scheduled job. Therefore, we are unable to perform the requested analysis.