Feature request: Let me connect a livemodel to a source with different table names
The problem: I want to point my livemodel at a different database. The new database contains the same tables, but they belong to a different schema or have different names. As a result, I get the message "select a different database which includes all tables under one schema", so I cannot switch to the new database.

Feature request: When I change the source database in an elasticube, I get an interface where I can map my datamodel's tables to the new names. I'd like the same interface for livemodels. Better yet, let me switch to the invalid connection, and show the error only when I try to publish the model.

Workarounds:
A) In the old database, create the new table names. Then I can point my livemodel at the new names before switching the connection.
B) Edit the .smodel file directly.
C) Tech support suggested this: use the REST API 2.0 endpoint PATCH /datamodels/{datamodelId}/schema/datasets/{datasetId}. I used it with the following request body, and was then able to change the database:

  {
    "database": "c00845810new",
    "schemaName": "dbo",
    "type": "live",
    "connection": {
      "oid": "99380074-a2bb-4cd7-8bc3-7a5d20152b82"
    },
    "liveQuerySettings": {
      "timeout": 60000,
      "autoRefresh": false,
      "refreshRate": 30000,
      "resultLimit": 5000
    }
  }

Related discussion in support case 500Pk00000s1fVCIAY.
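The PATCH workaround can be scripted. A minimal sketch using only the Python standard library; the host, token, datamodel/dataset IDs, and the /api/v2 base path are placeholders and assumptions, not confirmed values:

```python
import json
import urllib.request

def build_dataset_patch(base_url, token, datamodel_id, dataset_id, body):
    """Build a PATCH request for the REST API 2.0 dataset endpoint.

    Assumes the v2 API is served under /api/v2 with bearer-token auth.
    """
    url = f"{base_url}/api/v2/datamodels/{datamodel_id}/schema/datasets/{dataset_id}"
    data = json.dumps(body).encode("utf-8")
    req = urllib.request.Request(url, data=data, method="PATCH")
    req.add_header("Authorization", f"Bearer {token}")
    req.add_header("Content-Type", "application/json")
    return req

# Request body from the support-case workaround above.
body = {
    "database": "c00845810new",
    "schemaName": "dbo",
    "type": "live",
    "connection": {"oid": "99380074-a2bb-4cd7-8bc3-7a5d20152b82"},
    "liveQuerySettings": {
        "timeout": 60000,
        "autoRefresh": False,
        "refreshRate": 30000,
        "resultLimit": 5000,
    },
}

req = build_dataset_patch("https://example.sisense.com", "API_TOKEN",
                          "DATAMODEL_ID", "DATASET_ID", body)
# urllib.request.urlopen(req) would actually send it; left out here.
```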
Different database connections on staging server vs production server

Hi,

We have two cloud servers in Sisense: one for development (staging) and one for production. The staging server connects to our staging database; the production server connects to our production database. All cubes and dashboards are identical except for database connection strings and names.

Our Git branching strategy follows these steps:
1. Create a feature branch from staging.
2. Make changes and push them to the feature branch.
3. Open a pull request from the feature branch to staging.
4. Test changes in staging.
5. If approved, merge staging into master (production) to deploy changes.

The issue: Git integration tracks database connection names, meaning both servers must run on either staging data or production data, which is not feasible for us.

Proposed solution: We suggest implementing a decentralized environment variable for storing database connections. For example:
- Use {database-server-name} as a placeholder in configurations.
- Set database-server-name = db_server_staging on staging.
- Set database-server-name = db_server_production on production.

This would allow the same codebase to dynamically connect to the appropriate database without manual adjustments. Would love to hear your thoughts on this!
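The proposed substitution can be sketched as a small template renderer. This is a hypothetical illustration of the requested feature, not an existing Sisense capability; `render_connection` and the variable names are assumptions:

```python
import re

def render_connection(template: str, variables: dict) -> str:
    """Replace {placeholder} tokens with per-environment values.

    A missing variable raises KeyError, so a bad deployment fails
    loudly at deploy time rather than silently at query time.
    """
    def substitute(match):
        name = match.group(1)
        if name not in variables:
            raise KeyError(f"undefined environment variable: {name}")
        return variables[name]
    return re.sub(r"\{([A-Za-z0-9_-]+)\}", substitute, template)

# One set of values per server; the template is shared in Git.
staging = {"database-server-name": "db_server_staging"}
production = {"database-server-name": "db_server_production"}

template = "Server={database-server-name};Database=analytics"
print(render_connection(template, staging))
# Server=db_server_staging;Database=analytics
```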
Seeking Best Practice for Live Detail Reporting in Sisense (Replacing SSRS)

Afternoon Sisense community,

Our team is looking to replicate the functionality of a crucial SSRS report within Sisense. This report is used by a department to obtain a detailed list of jobs for a specific month. The workflow involves:
1. Running the report for a selected month (typically the current or previous month).
2. Reviewing the output for discrepancies.
3. Updating the source system based on the review.
4. Re-running the report immediately to verify the changes (requiring live data).

Current Sisense implementation and performance issue: I've attempted to recreate this report's dataset using a Live Model connected to a Redshift SQL view. The view is complex:
- It contains approximately 50 columns of detailed data.
- It involves JOINs across 15 different tables to consolidate all necessary dimensions and metrics.

The issue: the performance of this Live Model is unacceptable. Users are accustomed to the SSRS report running a stored procedure and returning the filtered data in under 30 seconds. My Sisense Live Model is timing out.

Constraints and goal:
- Requirement: the data must be live (no ElastiCube, as users need immediate reflection of system changes after updates).
- Target performance: sub-30-second return for monthly filtered data.

Request for guidance: Given the high number of columns, multiple joins, and the strict requirement for live data with fast filtering (specifically by month), what would be the recommended best practice for implementing this detailed report in Sisense? Are there specific Sisense configurations or data modeling techniques for live connections that would address this performance bottleneck while meeting the "live" requirement?

Thank you for your insights!
Seconds Granularity: Support DateTimeLevel for seconds & milliseconds (ss:ms) for Elasticubes

Description: We face limitations when working with DateTime fields in dashboards built on ElastiCubes (not Live models). These limitations affect our ability to accurately display and analyze event/log/trade data.

Issues:
- We are unable to display the full timestamp including seconds; the format currently shows only up to minutes. Since our logs are time-sensitive and measured in seconds, this detail is essential.
- We cannot display the full date and time in a single field. As a workaround, we have to split the data into two separate fields (one for date and one for time).
- Sorting is also a problem. We can only sort by one field at a time, so if we sort by date, the time is not sorted correctly, making it difficult to follow the log order.

We've noticed that Live models handle datetime fields more effectively and allow displaying timestamps with seconds (such as "Every Second"). However, due to the size and complexity of our data model, switching to a Live model is not an option for us.

Request: improved support for DateTime fields in ElastiCube dashboards, including:
- The ability to show full timestamps with seconds.
- Support for displaying date and time in a single field.
- Better sorting logic when working with split DateTime fields.

The alternative solution https://community.sisense.com/kb/faqs/show-full-date-format-in-the-pivot-or-table-widget/25504 also does not work for us: you can add :ss to the value format, but it simply isn't supported, which feels like a bug rather than by design.
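Until native support exists, the split-field workaround can at least be made sortable upstream: merging date and time into one ISO-8601 string sorts correctly as plain text. A sketch, assuming the split fields are formatted as shown below:

```python
from datetime import datetime

def combine(date_str: str, time_str: str) -> str:
    """Merge split date and time fields into one ISO-8601 string.

    ISO-8601 strings sort chronologically as text, so a single
    combined field restores log order in a table widget.
    """
    dt = datetime.strptime(f"{date_str} {time_str}", "%Y-%m-%d %H:%M:%S")
    return dt.isoformat(sep=" ")

# Split date/time rows, deliberately out of order.
rows = [
    ("2024-05-01", "09:15:07"),
    ("2024-05-01", "09:15:02"),
    ("2024-04-30", "23:59:59"),
]
combined = sorted(combine(d, t) for d, t in rows)
# combined[0] == "2024-04-30 23:59:59"
```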
Enhance Usage Model to include segmentation of Email Subscription vs Front-End queries

When analyzing dashboard and widget query performance, I would like to be able to focus on front-end user queries only, on email subscriptions only, or on both. Unfortunately, the source data of the Usage Analytics Model does not contain information that lets us distinguish whether a dashboard was executed via the Web UI or through a scheduled job, so we are unable to perform this analysis.
Let designers see data models

My designer and data-designer users want to see data models shared with them. Currently, Sisense only shows users the data models they have edit permission on. They want to see the relationships, tables, and fields in order to understand how they should use the data, fix issues in their dashboards, and confirm bugs in the data models. The visual data model page does that better than reading through a list of dimensions.

Similar to this post: https://community.sisense.com/discussions/build_analytics/how-to-grant-view-only-access-on-the-elasticubes-to-designers/29091
Refresh schema for all tables

Can we get a "refresh schema for all tables" button?

Reason: Our tables are usually "Select * From AnalyticsSchema.ViewName". We control which fields to return by editing the view, not the Sisense table definition. When a field gets added/removed/changed, we need to refresh the schema. That's fine to do manually while you're working on that datamodel and its views, but we need to refresh all tables when:
- We copy a datamodel to a different server. We need to refresh the schema at least to double-check that the views are as expected on the new server. (If any fields have changed, then I'll need to go fix any widgets using those fields or, more likely, update the view to include them.)
- A view gets edited, perhaps for a different datamodel, and my datamodel hasn't been updated.
- I edit several views and want to refresh the schema for all the corresponding Sisense tables. If I've changed fields that are in use, I'll need to go into each table manually anyway, so it doesn't matter; but I've had a case where I removed unused fields from several views and then had to click "refresh schema" on every table individually.
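A "refresh all" pass would essentially diff each Sisense table definition against the columns the database view returns now. A hypothetical sketch of that check; `schema_drift` and the sample tables are illustrations, not a Sisense API:

```python
def schema_drift(cube_tables: dict, live_views: dict) -> dict:
    """Report added/removed columns per table, compared with the live views.

    cube_tables: table name -> columns currently in the datamodel
    live_views:  view name  -> columns the database view returns now
    """
    drift = {}
    for name, cube_cols in cube_tables.items():
        view_cols = set(live_views.get(name, []))
        added = sorted(view_cols - set(cube_cols))
        removed = sorted(set(cube_cols) - view_cols)
        if added or removed:
            drift[name] = {"added": added, "removed": removed}
    return drift

# Hypothetical datamodel vs. the views as they exist on the new server.
cube = {"Orders": ["id", "total", "legacy_flag"], "Customers": ["id", "name"]}
views = {"Orders": ["id", "total", "region"], "Customers": ["id", "name"]}
print(schema_drift(cube, views))
# {'Orders': {'added': ['region'], 'removed': ['legacy_flag']}}
```

Only tables with drift would need a refresh (and a follow-up widget check); unchanged tables could be skipped.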
Unload old cubes when under RAM pressure

Scenario: I have a server with many cubes. Half the cubes are used only once or twice a month. Sisense tech support said an elasticube will "keep its data in memory for fast query access". That means I have to provision the server with enough RAM to hold all cubes, even the rarely used ones.

Goals: I want to provision RAM for the expected level of use; e.g., I expect 60% of the cubes to be used each day. Also, if I have under-provisioned my server, I don't want it to crash (memory safe mode). Instead, I want rarely used cubes to slow down until I provide more RAM.

Solution: There is an existing feature that can help: I can set cubes to unload from RAM if unused for more than one day. I'd like a better way: when Sisense detects high RAM use, it should list cubes by last-queried date and unload the oldest ones. That way, unloading happens only when necessary, not daily, and it "softens" the impact of insufficient RAM by slowing down the least important cubes instead of crashing.
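The proposed policy is essentially least-recently-used eviction by last-queried date. A sketch of the selection logic, with hypothetical cube names, sizes, and budget:

```python
from datetime import datetime, timedelta

def cubes_to_unload(cubes, ram_budget_gb):
    """Pick cubes to unload, oldest last-queried first, until usage fits the budget.

    cubes: list of (name, size_gb, last_queried) tuples for cubes in RAM.
    Returns the names to unload; empty if usage is already under budget.
    """
    in_use = sum(size for _, size, _ in cubes)
    victims = []
    for name, size, _ in sorted(cubes, key=lambda c: c[2]):  # oldest first
        if in_use <= ram_budget_gb:
            break
        victims.append(name)
        in_use -= size
    return victims

now = datetime(2024, 6, 1)
cubes = [
    ("daily_sales",   40, now),
    ("audit_archive", 25, now - timedelta(days=20)),
    ("old_finance",   30, now - timedelta(days=45)),
]
print(cubes_to_unload(cubes, ram_budget_gb=60))
# ['old_finance', 'audit_archive']
```

Run only when a high-RAM threshold trips, this touches nothing on a well-provisioned server and degrades the least-recently-queried cubes first on an under-provisioned one.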
Outer joins (preview) - Release notes

An outer join (left, right, or full) combines data from two tables, including all matching rows and any unmatched rows from one or both tables, filling in NULL for missing data. Analytical platforms use outer joins to achieve:
- Broader analytical capabilities: ensure all relevant data is visible, even if there is no exact match in another table (e.g., view all products, including products with no sales).
- Gap identification: easily spot data integrity issues and missing information or relationships, which is crucial for analysis and reporting.
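The products-with-no-sales example can be illustrated with a small left outer join. This pure-Python sketch mirrors what the SQL join does, with None standing in for NULL; the sample tables are invented:

```python
def left_outer_join(left, right, key):
    """Join two lists of dicts, keeping every left row and filling None for misses."""
    right_cols = {k for row in right for k in row} - {key}
    index = {}
    for row in right:
        index.setdefault(row[key], []).append(row)
    joined = []
    for row in left:
        # An unmatched left row gets all right-side columns set to None (NULL).
        for match in index.get(row[key], [dict.fromkeys(right_cols)]):
            merged = dict(row)
            merged.update({k: v for k, v in match.items() if k != key})
            joined.append(merged)
    return joined

products = [{"sku": "A", "name": "Widget"}, {"sku": "B", "name": "Gadget"}]
sales = [{"sku": "A", "amount": 100}]
print(left_outer_join(products, sales, "sku"))
# [{'sku': 'A', 'name': 'Widget', 'amount': 100},
#  {'sku': 'B', 'name': 'Gadget', 'amount': None}]
```

Product B survives the join with a NULL amount, which is exactly the "all products, including products with no sales" case described above.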