Postgres vs. MongoDB for Storing JSON Data — Which Should You Choose?
As your data keeps growing, you need your data storage in good shape for analytics. Use this article to think through the complex decision of choosing Postgres or MongoDB for your data storage.

Custom Widget Script Styling Not Coming Through on sdk-ui Chart Component
I have a widget where I have edited the script to apply custom formatting (changing the "time" from being displayed in seconds to minutes and seconds). However, when I display this widget in my React project with the <LineChart {...widget.getChartProps()}/> component from sdk-ui, the custom formatting does not show up. Is there something special I need to do here to see this custom script styling in the sdk-ui component? I'm not finding any community posts describing this problem or a solution.

Ensuring Elasticube is in Sync with Data Source (Deletion & Updates in Accumulate Build)
Capturing accurate business transactions is imperative to understanding business operations. If data is constantly changing, such as being deleted or updated, how do you ensure your Elasticube stays in sync with your data source? Depending on the use case, "Replace All" data could be an option, but that might not be the most efficient choice if you have a huge amount of data. "Accumulated By" is another option, but it can result in duplicated records, with deleted records still shown in the Elasticube. Given these limitations, another path to consider is a custom code solution.

Custom Code Solution

Leverage the demo below to learn more. The demo uses an Oracle database with a transaction table. The transaction table has an updated_date column that captures the latest data changes, and an audit table stores deleted transaction IDs with timestamps via a trigger. Please note, all tables in the demo contain dummy records. You can find the DDL for the table and trigger here.

Using custom code to create a new table in the Elasticube allows the Elasticube to stay in sync with your data source table. The new table shows only the latest records and removes any deleted records via the trigger; the audit table captures the deleted records. (Note: adding triggers can impact database performance, since every deleted record requires an additional write operation.) This custom code solution also enables change data capture (CDC), answering business questions such as at what intervals a record was updated and/or deleted, via the audit table.

Let's begin by building an Elasticube. Use updated_date as "Accumulated By" for both tables and build the Elasticube by selecting "By Table". After this, we will add Custom Code containing all the logic required to obtain the latest records and remove any deleted records.

Step 1: Create tables, triggers, and records in the source database.
Sample code is available here.

Step 2: Create the Elasticube by adding both tables, selecting the deleted_on column as "Accumulated By" for the audit table and updated_on for the fact_sales table. Then build using "By Table". (Note: the audit table will be empty since we haven't deleted any records yet.)

Step 3: Create the Custom Code table.
- If not enabled, go to Admin > Feature Management > Custom Code and save.
- Select "Create your own Notebook".
- In the input parameters, add both base tables, making sure you select the correct parameters, and add columns for the output schema. Then click "Open code editor" to open a Jupyter notebook in a new tab where you can add the logic.
- Once the Jupyter notebook is open, clear all the cells and copy-paste the code from the .ipynb file from the repo. Make changes to the notebook based on your use case; for example, if the Elasticube and table names are different, provide them in cell 2. For this example, date_change_threshold is the parameter we are choosing to restrict.
- Save the notebook, return to the previous tab, select the table for the input parameters, and click Done to create a new Custom Code table in the Elasticube.
- First build "Changes Only", which will create the data for the Custom Code table.

Step 4: Insert a new record, update an old record, and/or delete a record from the fact_sales table in your data source. The update trigger will change the updated_on value for id 9, while the delete trigger will create a new record in the audit table.

Step 5: Build the Elasticube by selecting "By Table". Now the Elasticube has a fact_sales table containing the full history of record changes, and a Custom Code table holding the latest records, in sync with the data source table.

Example below of the table capturing all data changes. Example below of the table in sync with the data source.

These steps should result in accurate data in your Elasticube, leading to smarter business decisions.
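The heavy lifting in the notebook is conceptually simple. The repo's notebook does this with pandas; the sketch below expresses the same idea in plain JavaScript with hypothetical row shapes (an id and updated_on column on the fact rows, and an audit list of deleted IDs), and is not the actual notebook code:

```javascript
// Sketch of the sync logic, assuming fact rows shaped like {id, updated_on, ...}
// and audit rows shaped like {id} for deleted records (hypothetical shapes).
function latestInSyncRows(factRows, auditRows) {
  const deletedIds = new Set(auditRows.map(r => r.id));
  const latestById = new Map();
  for (const row of factRows) {
    if (deletedIds.has(row.id)) continue; // drop records the trigger logged as deleted
    const prev = latestById.get(row.id);
    // keep only the most recently updated version of each id
    if (!prev || row.updated_on > prev.updated_on) latestById.set(row.id, row);
  }
  return [...latestById.values()];
}
```

The accumulated fact_sales table keeps every version for CDC analysis, while the Custom Code table is the output of a pass like this one.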
Please note, this is one possible workaround for users with similar use cases. Keep in mind that editing the Jupyter code is another possibility; for example, you can try using a Dask DataFrame instead of pandas and see which performs better with your data. Disclaimer: Please note that this blog post contains one possible custom workaround solution for users with similar use cases. We cannot guarantee that the custom code solution described in this post will work in every scenario or with every Sisense software version. As such, we strongly advise users to test solutions in their own environment prior to deploying them to ensure that the solutions proffered function as desired in their environment. For the avoidance of doubt, the content of this blog post is provided to you "as-is" and without warranty of any kind, express, implied or otherwise, including without limitation any warranty of security and or fitness for a particular purpose. The workaround solution described in this post incorporates custom coding which is outside the Sisense product development environment and is therefore not covered by Sisense warranty and support services.

How to configure Data Groups
Introduction

This article will guide you on how to configure Data Groups parameters based on current performance and business requirements.

What are "Data Groups"?

The "Data Groups" administrative feature allows configuring Quality of Service (QoS) for Sisense instances and limiting/reserving the resources of individual data models. By utilizing data groups you'll make sure available resources are controlled and managed to support your business needs.

Why Should One Configure "Data Groups"?

The main benefits of configuring "Data Groups" are:
- Controlling which cluster nodes are used for building and querying
- Limiting the amount of RAM and CPU cores a data model uses
- Configuring indexing, parallel processing, and caching behavior of a data model
- Mitigating "Out-Of-Memory" issues caused by lack of resources
- Preventing "Safe Mode" exceptions caused by lack of resources

Read this article to learn about the different "Data Groups" parameters.

"Data Group" Use Cases

The following two scenarios are good examples of using "Data Groups":

Scenario: A customer embeds Sisense (OEM use case) and has many tenants that can design dashboards (a.k.a. self-service BI). Each tenant has a dedicated Elasticube data model.
Challenge: A single tenant can create a large number of dashboards, causing the underlying data model to utilize a significant portion of the available CPU/RAM resources. The excessive use of resources by a single tenant may lead to resource depletion and degrade the user experience for that tenant and others.
Solution: Resource governance. Set up resource utilization limits using "Data Groups". Doing so will limit each tenant's usable resources and prevent tenants from affecting each other.

Scenario: A customer has many Elasticube data models created by various departments of the organization. Some of the data models are used to generate high-priority strategic dashboards (used by C-level managers).
Other dashboards are prioritized as well (e.g., operational dashboards vs. dashboards used for testing).
Challenge: C-level dashboards must have the highest priority and should always be available. Other dashboards should still be operational and behave according to their priority. In case of conflict or a temporary lack of resources, a critical Elasticube may run out of memory or trigger a "Safe Mode" event.
Solution: Resource governance. Setting up priority-based Data Groups allocates more resources to business-critical data models while limiting the less critical ones.

Data Groups Resource Governance Strategies

- Limiting a data model's resources: Governance rules can limit the resources used by a single data model or a group of data models by configuring a "Maximal" amount of allocated CPU/RAM. Note that the data model is held to the configured limits even when additional resources are available.
- Pre-allocating a data model's resources: Governance rules can pre-allocate the resources used by a single data model or a group of data models by configuring a "Reserved" amount of allocated CPU/RAM. Note that other data models are then limited to the remaining server resources even if a pre-allocated resource sits idle.
- Prioritizing data models: Governance rules can prioritize certain data models by allocating different amounts of resources to different data models. High-priority data models are allocated more resources than lower-priority ones.

You may also choose not to limit a data model's resources (no Data Group configuration); however, this reintroduces the initial risk of resource depletion.

How To Configure Data Groups?

Configuring "Data Groups" requires high expertise and attention to detail. A dedicated tool is introduced to assist you in this task.
Follow the directions below to acquire the tool and implement Data Groups.

Prerequisite: To base your calculations and decide how to configure data groups, you need data monitoring enabled. To learn more about Sisense monitoring and the data reported, read the following article: https://support.sisense.com/kb/en/article/enable-sending-sisense-monitoring-logs

Step #1 – Download the "Data Groups Configuration Tool"
The data groups tool (Excel document) is attached to this article. Download a local copy to your computer.

Step #2 – Access Logz.IO
To access the Logz.IO platform:
- Log in to the Logz.io website
- Navigate to the "SA - Linux - Data Groups" dashboard
- Set the timeframe filter to a 21-day timeframe

Step #3 – Fill out the "Data Groups Configuration Tool" Document
The "Elasticubes" sheet holds the data for decision making regarding the different groups. Its fields (with where each value comes from) are:

- CubeName: The ElastiCube name.
- Size GB: The ElastiCube's size on disk. Taken from the ElastiCube list on the "Data" tab.
- Estimated size in memory: The estimated maximal ElastiCube size in memory. Auto-calculated ([Size GB] x 2.5).
- Peak query memory consumption (GB): The actual maximal ElastiCube size in memory (should be smaller than the estimated maximal size). Taken from the logz.io dashboard ("Max Query Memory" field).
- Build frequency: The frequency of the ETL Sisense performs. Taken from Sisense's build scheduler.
- Average of concurrent queries: Average concurrent queries sent from dashboards to the ElastiCube. Taken from the logz.io dashboard: search for the "Max concurrent queries per Cube" widget and download the CSV file with the data.
Calculate the average value of the "Max concurrent queries per Cube" column.
- Business Criticality: A business measure that determines the QoS of the ElastiCube. True (High) / False (Low).
- Data security: Whether "Data Security" is applied to this ElastiCube. This column helps determine whether the "ElastiCube Query Recycler" parameter will improve performance or not (explanation here under "ElastiCube Query Recycler").
- Data Group: The Data Group this ElastiCube should be part of. Try to fill in this column by classifying your ElastiCubes according to common characteristics, both in terms of business and in terms of resource consumption.

Step #4 – Define your Data Groups
Using the information you've collected and the explanations in this article, define the data groups you wish to use and fill in the information in the "Required data groups" sheet. This sheet provides the name and configuration for each data group. Use this tab to describe the Data Groups that meet your business needs, and use the "Intent" column to describe each group's purpose. The configuration in this sheet will later be used to configure the Data Groups in the Sisense Admin console:

- Group name: The Data Group's name.
- Intent: The Data Group's description (from a business point of view).
- Instances: The number of query instances (pods) created in the Sisense cluster. This parameter is very useful; however, increasing this value increases the Elasticube's memory footprint. Only consider changing it if your CPU usage reaches 100%, since high CPU consumption is mostly caused by high user concurrency.
- Build on Local Storage: Whether local storage builds are enabled (for multi-node deployments).
Using local storage can decrease the time required to query the cube after the first build on local storage.
- Simultaneous Query Executions: The maximum number of queries processed simultaneously. If your widgets contain heavy formulas, it is worth reducing this number to lower the pressure. Used when experiencing out-of-memory (OOM) issues. Parallelization is a trade-off between memory usage and query performance; 8 is the optimal number of queries.
- % Concurrent Cores per Query: Related to Mitosis and Parallelism technology. To minimize the risk of OOM, set this value to the equivalent of 8 cores: for a 32-core machine set 25, for 64 cores set 14; the value should be an even number. It can be increased up to a maximum of half the total cores, i.e. treat the default as the recommended maximum that can only be adjusted down. Change it when experiencing out-of-memory (OOM) issues.
- Max Cores: The maximum number of cores the query (qry) pod is allowed to use. Use it as a limit for less critical groups.
- Query: Max RAM (MB): The maximum RAM allowed to be used by the qry pod (dashboards) for each ElastiCube in the group, per query instance if Instances was increased. Take it from the "MAX of Peak query memory consumption (GB)" column in the "Summary of data groups" sheet. Please note that reaching 85% of the pod's usage, or of overall server RAM, causes a Safe Mode exception: dashboard users see a Safe Mode error, the qry pod is deleted and restarted, and the next dashboard refresh brings the data back.

Step #5 – Verify
The "Summary of Data Groups" sheet includes a pivot chart that calculates the max memory consumption of each data group. This value correlates with the "Query: Max RAM (MB)" configuration.
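The sizing guidance above is just arithmetic and can be sanity-checked with a small helper. The functions below are an illustration of the workbook's rules (the 8-core equivalent rounded up to an even percentage, the 2.5x in-memory estimate from Step #3, and the 1.20 Safe Mode headroom the article applies to peak query memory), not a Sisense API:

```javascript
// Illustrative helpers for the workbook's sizing rules (assumptions, not Sisense APIs).

// "% Concurrent Cores per Query": the percentage equivalent of 8 cores,
// rounded up to an even number when 8 cores isn't an exact percentage
// (reproduces the article's examples: 32 cores -> 25, 64 cores -> 14).
function concurrentCoresPercent(totalCores) {
  let pct = Math.ceil((8 / totalCores) * 100);
  if (800 % totalCores !== 0 && pct % 2 === 1) pct += 1;
  return pct;
}

// Estimated in-memory size from the "Elasticubes" sheet: disk size x 2.5.
function estimatedMemoryGb(sizeOnDiskGb) {
  return sizeOnDiskGb * 2.5;
}

// "Query: Max RAM (MB)": peak query memory plus 20% headroom to avoid Safe Mode.
function maxQueryRamMb(peakQueryGb) {
  return Math.ceil(peakQueryGb * 1.2 * 1024);
}
```

For example, a 64-core node hosting a 10 GB cube that peaks at 10 GB of query memory would get 14% concurrent cores, a 25 GB in-memory estimate, and a 12288 MB RAM ceiling.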
Take the maximal value of "Peak query memory consumption (GB)" from the "Summary of data groups" tab and multiply it by 1.20 to avoid Safe Mode.

Step #6 – Process the results and configure Sisense Data Groups
Prerequisite: read this article on how to create data groups (5-minute read): https://documentation.sisense.com/docs/creating-data-groups
In the Sisense web application, for each new data group:
- Navigate to Admin > Data Groups and click the "+ Data Group" button.
- Fill out the information from the "Required data groups" tab.

Step #7 – Monitor the Sisense Instance
The final step is to follow the information in the Sisense Monitor and make sure performance is improving. Review the documentation below for more details regarding monitoring: https://docs.sisense.com/main/SisenseLinux/monitoring-sisense-on-linux.htm

Good luck!

Sisense.js Demo with Filter and Dashboard Changing with Native Web Components
This is a Sisense.js demo built with React, which includes functionality such as changing filters and dashboards with native web components. It can be used as a reference or starting point for building and customizing Sisense.js applications with custom styling and functionality.

React Sisense.js Demo

Getting Started
- Make sure CORS is enabled and the location/port you're hosting this application on is whitelisted on the Sisense instance; documentation on changing CORS settings is here.
- Open the terminal (on Mac/Linux search for Terminal; on Windows search for PowerShell, Command Prompt, or Windows Subsystem for Linux if installed) and set the path to the folder containing the project (using the cd command).
- Open the config file in the src folder, set the Sisense server URL to the IP or domain name of your Sisense server, and set the dashboard IDs and, optionally, the filter keys.
- Run the command:

```shell
npm install && npm start
```

If npm and Node are not installed, install them using your package manager or using nvm (Node Version Manager). nvm can be installed by running this command in a terminal:

```shell
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.38.0/install.sh | bash
```

Once the install script is finished, follow any instructions listed in the terminal, restart or open a new terminal, and run these commands to install Node and npm:

```shell
nvm install node
nvm install-latest-npm
```

To start hosting the site, run this command:

```shell
npm install && npm start
```

The default location is the IP of the hosting server on port 3000. If the IP address of the server hosting this program is 10.50.74.149, for example, then enter 10.50.74.149:3000 in the address bar of a browser to view the application. If hosted locally, enter your IP on port 3000, or enter localhost:3000. To use a different port, set the port variable in the terminal using this command:

```shell
export PORT=NEW_PORT_NUMBER
```

and then run npm start. To run in the background, use PM2 or systemd.
Requirements
- npm (and Node): installs all requirements
- Access to a Sisense instance

Description

This is a demonstration of some of the capabilities of Sisense.js and Sisense. It is a React single-page application using TypeScript that can switch between multiple dashboards, loading all widgets in those dashboards into individual elements (created on dashboard switch). Multiple dashboard filters (and multiple members per filter) can be changed using a native React dropdown bar, with the newly filtered data visualized by the widgets in the dashboard. Changes made to filters from within Sisense.js widgets via right-click are reflected in the dropdown filter.

Connecting to Sisense

```typescript
// Loads Sisense.js from the server; saves a copy of the Sisense app object on window
// to indicate it is already loaded and prevent loading twice
const loadSisense = () => {
  // If Sisense is already loaded, stop
  if (window.sisenseAppObject) {
    return;
  }
  // Loads the Sisense app object into the app variable; edits are not saved back to the dashboard
  window.Sisense.connect(config.SisenseUrl, config.saveEdits)
    // Sisense app object
    .then((app: SisenseAppObject) => {
      // Store the app object on window so connect only runs once (alternative: export and import)
      window.sisenseAppObject = app;
      // Calls loadDashboard after Sisense is connected, using the initial widget and dashboard variables
      loadDashboard(config.DashboardConfig[0].DashboardID, config.DashboardConfig[0].DimArray);
    })
    // Error catching, logged to console
    .catch((e: Error) => {
      console.error(e);
    });
}
```

Loading Dashboard and Creating Container Element for each Widget

```typescript
// Loads a dashboard by id. Every widget in the dashboard is rendered into an element
// created in a loop under the parent element with id 'widgets'. On subsequent calls,
// existing widgets are replaced by new ones.
const loadDashboard = (dashboardID: string = '') => {
  // If the dashboard id is empty, return
  if (dashboardID === '') {
    return;
  }
  // Load dashboard as the active dashboard
  window.sisenseAppObject.dashboards.load(dashboardID)
    // After load, the dashboard is in the dash variable
    .then((dash: DashObject) => {
      window.currentDashObject = dash
      // Array of loaded widget ids
      // let widgetArray: Array<String> = prism.activeDashboard.widgets.toArray().map(
      let widgetArray: Array<String> = dash.$$widgets.$$widgets.map(
        function (widget: Widget) {
          // widget id for loading
          return widget.id
        }
      );
      // Set state with the loaded dashboard if widgets loaded
      if (widgetArray.length > 0) {
        this.setState((state, props) => {
          return {
            dashboardID: dash.id,
          };
        });
      }
      // Get the widgets element
      let widgetsElement: HTMLElement | null = document.getElementById(`widgets`);
      // Type checking
      if (widgetsElement === null) {
        return;
      }
      // Erase previous widgets
      widgetsElement.innerHTML = '';
      // Loop through widget ids, loading each into its own container
      // (first into widget1, second into widget2, and so on)
      widgetArray.forEach((widget, index) => {
        // Type checking
        if (widgetsElement === null) {
          return;
        }
        // Element to load the widget into later
        let widgetElement: HTMLElement | null = document.createElement("div");
        // Class includes index for widget rendering later
        widgetElement.classList.add(`widget${index + 1}`, 'widget')
        // Add empty div to the widgets parent div
        widgetsElement.appendChild(widgetElement)
        // Get the filter element by id
        let filterElement: HTMLElement | null = document.getElementById("filters");
        // Type checking
        if (widgetElement === null || filterElement === null) {
          return;
        }
        // Clear the filter element from the previous render
        filterElement.innerHTML = '';
        // Put the widget in its container element
        dash.widgets.get(widget).container = widgetElement;
        // Renders filters in the HTML element with id 'filters'
        dash.renderFilters(filterElement);
        // Reloads and refreshes the dashboard
      });
      dash.refresh();
    })
    // Error catching, logged to console
    .catch((e: Error) => {
      console.error(e);
    });
}
```

Changing a Dashboard Filter

```typescript
// Changes a dashboard filter, given the dim to filter by and the new filter values
const changeDashboardFilter = (dashboard: dashboard, dim: string, newFilterArray: Array<String>) => {
  // Find the matching filter to change
  let filterToChange = dashboard.filters.$$filters.find(item => item.jaql.dim === `[${dim}]`);
  // If the filter is undefined, create a new filter
  if (filterToChange === undefined) {
    // Create the filter options
    let filterOptions = {
      // Save to dashboard
      save: false,
      // Refresh on change
      refresh: true,
      // If the filter already exists, modify it instead of creating a new one
      unionIfSameDimensionAndSameType: true
    };
    // Create the jaql for the filter
    let jaql = {
      'datatype': 'text',
      'dim': dim,
      'filter': {
        // Multiple items can be selected
        'multiSelection': true,
        // New filter members
        'members': newFilterArray,
        'explicit': true
      },
    };
    // Create the filter jaql object
    let applyJaql = {
      jaql: jaql
    };
    // Set the new filter using the update function
    dashboard.$$model.filters.update(applyJaql, filterOptions);
  }
  if (filterToChange && filterToChange.$$model.jaql.filter) {
    let members = filterToChange.$$model.jaql.filter.members;
    // Check if members exist
    if (members !== undefined) {
      // Set members to the newly selected filter values
      filterToChange.$$model.jaql.filter.members = newFilterArray;
      // Save the dashboard
      // dashboard.$$model.$dashboard.updateDashboard(dashboard.currentDashObject.$$model, "filters");
      dashboard.filters.update(filterToChange, { refresh: true, save: false });
      // Refresh the dashboard
      // dashboard.refresh();
    }
  }
}
```

Clearing a Filter

Clearing a filter is done by simply setting the members to an empty array, which disables the filter.
```typescript
changeFilter([]);
```

Monitor For Filter Change to Ensure Dropdown Accurately Displays Filter Change

```typescript
// Watch for element changes that indicate filter changes, and update the dropdown
// if the active filters don't match the filter displayed in the dropdown
const watchForFilterChange = new MutationObserver((mutations) => {
  // If an element changes
  mutations.forEach(mu => {
    // If not a class or attribute change, return
    if (mu.type !== "attributes" && mu.attributeName !== "class") return;
    // Find the filter matching the dropdown
    let filterToChange = (dashboard.filters.$$filters as Array<DashObject>).filter(element => element.jaql.dim === `[${dim}]`);
    // If the displayed filter values and the active filter values don't match,
    // set the displayed filter to match the active filter; the filter value stays as is
    filterToChange.forEach((filter) => {
      if (filter.jaql.filter.members.sort().join(',') !== selectedFilterValues.sort().join(',')) {
        // Change the state of the filter values so the dropdown displays the correct filter
        setSelectedFilterValues(filter.jaql.filter.members);
      }
    });
  });
});
// Array of elements with the 'widget-body' class (created by Sisense.js on widgets) to check for filter changes
const widget_body_array = document.querySelectorAll(".widget-body")
// Watch the 'widget-body' class for filters changed by other means, such as right-click select on a value
widget_body_array.forEach(el => watchForFilterChange.observe(el, { attributes: true }));
```

Component Details
- Filter: One for each filter; gets dropdown values and other props for the Dropdown Filter component; takes the filter dim as a prop; parent of the Dropdown Filter component
- Dropdown Filter: Renders an individual filter dropdown for one filter and handles filter changes
- Clickable Button: Button that calls a function on click; props include text, color, and the function called
- Input Number: Sets widgets per row; has up and down buttons as well as keyboard input; a controlled input that only accepts numbers; has a default value in the config file
- Load Sisense:
Loads Sisense; gets the server URL from the config file; has the load-dashboard function; creates elements to load widgets into on each dashboard load call
- Sidebar: Collapsible sidebar; on click loads a dashboard; its content can be configured via the config file
- App: Parent component; shows a loading indicator before Sisense has loaded; contains all other components

Config Settings
- DashboardConfig: Array of objects describing the dashboards selectable in the sidebar
  - DashboardLabel: Text to show in the expanded sidebar
  - DashboardID: Dashboard ID, taken from the URL of the dashboard in native Sisense
  - Icon: Icon to show in the sidebar for the dashboard
  - DimArray: Values to filter by
- SisenseUrl: URL of the Sisense server
- initialDashboardCube: Title of the initial dashboard to show
- defaultSidebarCollapsed: Sidebar initial state, collapsed or not
- defaultSidebarCollapsedMobile: Sidebar initial state, collapsed or not, on mobile
- collapseSideBarText: Text shown on the element that collapses the sidebar
- hideFilterNativeText: Text to hide the native embedded filter
- useV1: Use the v1 version of the Sisense script
- defaultWidth: Initial state of the selector for widgets per row
- saveEdits: Write changes made to filters, and any other persistent changes, back to Sisense
- loadingIndicatorColor: Color of the loading indicator
- loadingIndicatorType: Type of loading indicator; options from ReactLoading
- sidebarBackgroundColor: Background color of the sidebar
- widgetMargin: Margin of an individual widget

Available Scripts

In the project directory, you can run:

npm start
Runs the app in development mode. Open http://localhost:3000 to view it in the browser. The page will reload if you make edits. You will also see any lint errors in the console.

npm test
Launches the test runner in interactive watch mode. See the section about running tests for more information.

npm run build
Builds the app for production to the build folder. It correctly bundles React in production mode and optimizes the build for the best performance.
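To make the settings above concrete, here is a minimal sketch of what such a config file could look like. Every value shown (server URL, dashboard ID, label, icon name, dims) is a placeholder to be replaced with your own, not a real setting from the repo:

```javascript
// Hypothetical config sketch; all values below are placeholders, not real settings.
const config = {
  SisenseUrl: 'https://your-sisense-server.example.com', // your Sisense server
  saveEdits: false, // don't write filter changes back to Sisense
  DashboardConfig: [
    {
      DashboardLabel: 'Sales Overview',                   // text shown in the expanded sidebar
      DashboardID: 'your-dashboard-id-from-the-url',      // copy from the dashboard URL in native Sisense
      Icon: 'chart-line',                                 // sidebar icon
      DimArray: ['Commerce.Condition', 'Commerce.Gender'] // dims exposed as dropdown filters
    },
  ],
  defaultWidth: 2,               // initial widgets per row
  defaultSidebarCollapsed: false // sidebar starts expanded
};

module.exports = config;
```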
The build is minified and the filenames include the hashes. Your app is ready to be deployed! See the section about deployment for more information.

npm run eject
Note: this is a one-way operation. Once you eject, you can't go back! If you aren't satisfied with the build tool and configuration choices, you can eject at any time. This command will remove the single build dependency from your project. Instead, it will copy all the configuration files and the transitive dependencies (webpack, Babel, ESLint, etc.) right into your project so you have full control over them. All of the commands except eject will still work, but they will point to the copied scripts so you can tweak them. At this point, you're on your own.

You don't have to ever use eject. The curated feature set is suitable for small and middle deployments, and you shouldn't feel obligated to use this feature. However, we understand that this tool wouldn't be useful if you couldn't customize it when you are ready for it.

Download and extract the zip below: