JumpToDashboard - Troubleshooting the most common configuration issues
This article provides possible solutions to the most common configuration issues with the JumpToDashboard plugin. Please review the symptom first (the error or behavior you experience with the JumpToDashboard plugin) and then follow the solution instructions. If this doesn't solve your issue, feel free to contact our Support Team with a detailed description of your issue.

Symptoms: A target dashboard (usually with the "_drill" prefix) disappeared from the left-hand side panel for non-owner users.
Solution: This behavior may be intended and is controlled by the JumpToDashboard parameter "hideDrilledDashboards". To make the dashboard visible to non-owners, check the following:
1. Log in as the dashboard owner and find the dashboard in question in the left-hand side panel. Click the 3-dots menu and make sure it is not hidden.
2. If it is not hidden by the owner intentionally, navigate to the Admin tab > System Management (under Server & Hardware) > File Management > plugins > jumpToDashboard > js > config.js and check whether hideDrilledDashboards is set to true. If so, change it to false and save the changes in the config file.
3. Wait until the JumpToDashboard plugin is rebuilt on the Admin tab > Server & Hardware > Add-ons page, then ask your user to refresh the browser page and check whether the drill dashboard appears in the left-hand side panel.

Symptoms: No "Jump to dashboard" menu appears when clicking the 3-dots menu in widget edit mode.
Solution: There are several possible reasons for this behavior, so check the most common cases below:
- Double-check that the JumpToDashboard plugin is enabled on the Admin tab > Server & Hardware > Add-ons page.
- Make sure that both dashboards (parent and target) are based on the same ElastiCube. By default, the JumpToDashboard plugin has sameCubeRestriction: true in the config.js file, which prevents the 'jump to' menu from appearing when a drill dashboard uses a different data source.
- Check that the prefix you used when creating the drill dashboard is correct. It can be changed in the config.js file; by default it is "_drill".

Symptoms: Clicking a widget that should open a drill dashboard does nothing.
Solution: In such cases, we recommend opening your browser console (for example, F12 in Chrome > Console tab) to see whether any errors indicate the issue. For example, a 403 error in the console indicates that the target dashboard is not shared with the user who is experiencing the issue. To fix it, log in as the owner of the drill dashboard and share it with the relevant user or group.

Symptoms: Clicking a widget to open the drill dashboard returns a 404 error.
Solution: This issue usually happens when the target/drill dashboard has been removed from the system. To fix it, please follow the steps below:
1. Log in to the system as an owner.
2. Find the parent widget and open it in edit mode.
3. Click the 3-dots menu, choose the 'Jump to dashboard' menu, and select any other dashboard that exists in the system.
4. Press Apply and publish the changes to other users.
Note: if you just need to remove a non-existent drill dashboard from this widget without substituting another one, try the following: after you choose a new drill dashboard, unselect it and then save the changes. If the 'Jump to dashboard' menu doesn't appear for this widget, try creating a new temporary dashboard with the "_drill" prefix and repeating the steps.
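Several of the solutions above involve checking or editing the plugin's config.js. If you have shell access to the server, a minimal sketch for inspecting and flipping hideDrilledDashboards is shown below — the file path is an assumption (plugin storage locations vary by deployment), and editing via the File Management UI as described above remains the supported route:

# Locate the plugin config (path is an assumption; adjust to your deployment)
CONFIG=/opt/sisense/storage/plugins/jumpToDashboard/js/config.js

# Inspect the current values of the settings discussed above
grep -nE 'hideDrilledDashboards|sameCubeRestriction|_drill' "$CONFIG"

# Flip hideDrilledDashboards from true to false (keeps a .bak backup of the original)
sed -i.bak 's/hideDrilledDashboards[[:space:]]*:[[:space:]]*true/hideDrilledDashboards: false/' "$CONFIG"

After saving, let the plugin rebuild on the Add-ons page as in step 3 above before re-testing.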
Symptoms: The drill dashboard does not open for some viewers.
Solution: Republish the drill dashboard to make sure the updated version is delivered to all end users.

Additional Resources: JumpToDashboard Plugin - How to Use and Customize

Using Grafana to troubleshoot performance issues
Overview
Out of the box, Sisense running on Linux comes with the embedded monitoring tool Grafana. Sisense has created a few dashboards that are helpful for troubleshooting performance issues. Grafana can be accessed from the Admin page -> System Settings -> Monitoring (in the top right corner) or at https://your_web_site/app/grafana/ The dashboards list is available via the left-hand side menu: the 4-square button -> Manage.

Navigation
Grafana has some base controls that make it straightforward to gather information about the system over a period of time. Within a dashboard you will see headers similar to the following:
1. Filter dropdowns - such as namespace (by default, Sisense components run in the sisense namespace), pods, and node(s).
2. Time period - can be configured to be relative or absolute (according to the browser timezone).
3. Refresh - can manually trigger a refresh or set a refresh rate.
Within a visualization, you can hover over particular lines or click on one or more values. Click and drag over a time period to zoom in for more accurate values. For more information, check out the official Grafana docs - Working with Grafana dashboard UI.

Available Dashboards
We will focus our troubleshooting on 4 dashboards:
- Nodes
- All Pods per namespace
- Kubernetes / Compute Resources / Namespace (Workloads)
- The imported Sisense Cluster Detail dashboard (12928)

Nodes Dashboard
This dashboard is useful for troubleshooting overall node performance during a period of time. It shows the CPU, RAM, disk I/O, and networking usage of the machine over the selected period. Use the 'instance' dropdown to select which node to examine.

All Pods per Namespace Dashboard
The dashboard header allows you to filter the pods/nodes, set the timeframe, and specify the refresh/auto-refresh rate. The dashboard has 3 widgets (CPU, RAM, and Network) and shows the load of a server pod by pod (separately), overlapping each other. This is useful when checking the RAM consumption of a pod. Please note this dashboard does not show overall RAM consumption by default; it shows the RAM of each pod individually.

How to add a Total: in Grafana (hosted on /app/grafana), in the dashboard named "all pods per namespace", the sum of all pods needs to be added to two widgets (CPU Usage, Memory Usage). This is mandatory information when troubleshooting resource pressure situations. Add the following metric to the memory widget:
sum (container_memory_working_set_bytes{job="kubelet", pod_name=~"$pod", container_name!=""})
We can then see more clearly the state of the machine (RAM in this case). Using the Pod filter, you can filter for all the query or build pods by typing "qry" or "bld" in the filter and selecting the needed pods. Hover over a widget item to see the name of a pod. Click and drag to select the needed timeframe.

Kubernetes / Compute Resources / Namespace (Workloads)
This dashboard has many more widgets and shows the total usage of RAM, CPU, etc. It is useful when you need to see total usage to identify a Safe-Mode trigger. For multi-node, see the manual. This dashboard has additional widgets that can be helpful in monitoring your server performance, network usage, etc.

Sisense Cluster Detail Dashboard (#12928)
This dashboard is included by default in many recent versions of Sisense on Linux. It has many pre-configured widgets showing pod, cluster, drive, RAM, etc. usage. Before you try to import this dashboard, check whether it is already in your Grafana dashboard menu.
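You can also check from the command line: Grafana's HTTP search API lists the dashboards already present. A minimal sketch — both the URL (Grafana proxied under /app/grafana) and basic-auth admin credentials are assumptions; depending on how your Sisense deployment proxies Grafana, authentication may work differently:

# List dashboards whose titles match "Cluster" (URL and credentials are assumptions)
curl -s -u admin:YOUR_PASSWORD "https://your_web_site/app/grafana/api/search?query=Cluster"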
If your instance is on an older release, or you would like to import a dashboard shared by other users, follow the steps below:
1. Click the 4-square menu icon, then Manage dashboard.
2. Press Import dashboard in the top right corner.
3. Specify the dashboard number 12928 (or another dashboard number) and press Load.
4. Select "Prometheus" as the data source.
5. Click Import and enjoy.
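As a companion to the total-memory metric shown in the All Pods per Namespace section above, you may also want a total-CPU expression for the CPU Usage widget. A minimal sketch for trying such a query against the Prometheus HTTP API first — the Prometheus address and the exact expression are assumptions (the label names mirror the memory query above; on newer Kubernetes versions the labels are pod/container rather than pod_name/container_name):

# Query total CPU usage (cores) across pods; the endpoint address is an assumption
curl -sG "http://localhost:9090/api/v1/query" \
  --data-urlencode 'query=sum(rate(container_cpu_usage_seconds_total{job="kubelet", pod_name=~".*", container_name!=""}[5m]))'

If the result looks right, add the same sum(rate(...)) expression to the CPU Usage widget in Grafana.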
Show the installation logs during silent installation
If you need to see the logs in the CMD during the Sisense installation, you can run the following commands:

set logpath=%temp%\install.log
start "" /wait "PathTo\SisenseInstall.exe" -q -username=bla -password=bla -l "%logpath%"
type "%logpath%"

Installation Linux error - Timeout when waiting for 127.0.0.1:10250 to stop
If you see the following errors in the installation log:

failed: [node1] (item=10250) => {"changed": false, "elapsed": 2, "item": "10250", "msg": "Timeout when waiting for 127.0.0.1:10250 to stop."}
failed: [node1] (item=10248) => {"changed": false, "elapsed": 2, "item": "10248", "msg": "Timeout when waiting for 127.0.0.1:10248 to stop."}
failed: [node1] (item=10249) => {"changed": false, "elapsed": 2, "item": "10249", "msg": "Timeout when waiting for 127.0.0.1:10249 to stop."}
failed: [node1] (item=10256) => {"changed": false, "elapsed": 2, "item": "10256", "msg": "Timeout when waiting for 127.0.0.1:10256 to stop."}

it means that the ports required by the Kubernetes cluster are closed: TCP 10248 - 10259 (Kubernetes). Note: these ports need to be open even if it is not a multi-node deployment.
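To verify and open that port range from the shell, a minimal sketch for firewalld-based distributions (RHEL/CentOS) is shown below; if you use a different firewall (ufw, cloud security groups, etc.), apply the equivalent rule there:

# Check whether anything is listening on one of the ports (10250 is the kubelet port)
ss -tlnp | grep 10250

# Open the full Kubernetes port range mentioned above (firewalld example)
sudo firewall-cmd --permanent --add-port=10248-10259/tcp
sudo firewall-cmd --reload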
How to fix Safe-Mode on Build, Dashboard
Safe-Mode is triggered when the pod (query/build) OR overall server RAM consumption reaches 85% of usage. If Safe-Mode is triggered on a build, it cancels the build due to OOM. On a dashboard, the application restarts the query pod of the cube (deletes and starts it again) in order to release memory. Safe-Mode also has a grace period that cancels all new queries for 30 seconds. On the next dashboard page refresh after 30 seconds, you will see the results, provided the Data Groups settings are correct.

Related: Using Grafana to troubleshoot performance issues

Log location: single node (for multi-node, logs are located on the first node — the one listed first in the config.yaml used to install Sisense): /var/log/sisense/sisense/
- build: ec-<name of cube>-bld-hash.log, for example ec-sample-ecommerce-bld-a63510c3-3672-0.log
- query: ec-<name of cube>-qry-hash.log, for example ec-sample-ecommerce-qry-a63510c3-3672-0.log

If Safe-Mode was triggered during a BUILD:
Check the build error message and identify whether it is a pod limit or an overall server issue.
- Build pod limits issue (BE#521691): to fix the pod limitation issue, consider increasing the Build Node RAM limits in the Data Groups.
- Build server overall OOM issue (BE#636134): in case of overall server OOM, check what is consuming RAM using Grafana. It can be:
  - Other builds. If so, please consider changing the build schedule accordingly.
  - In most cases, the RAM is consumed by query pods. If so, try stopping the cubes of the query pods to release the RAM.
  - Consider a RAM upgrade.
  - Increase Max RAM in the Data Group settings.
Settings for the ElastiCube Build Safe-Mode are located in the Configuration Manager, under ElastiCube Build Params, where you can enable/disable Safe-Mode and change the % of RAM that should be kept free. It is not recommended to disable Safe-Mode, as it is meant to keep 15% of RAM free so that technicians can log in and fix issues. Note: if Safe-Mode is disabled, the server may become unresponsive, and even a server restart might not fix the issue. Do not disable Safe-Mode without urgent need!

If Safe-Mode was triggered while using a dashboard:
This can also be due to either a pod limitation or overall server OOM. Dashboards use the RAM of the query pods, where Sisense keeps the results of dashboard query executions for reuse by other dashboard users. Sisense collects results up to 85% of the RAM allocated in the Data Group settings; after that it tries to remove old results and replace them with new ones. If a huge query arrives from a dashboard at this point, Safe-Mode is triggered: the query pod is deleted (removing all saved results) and a new pod is started to calculate the results. At that moment the user sees an error on the dashboard; however, once the pod has restarted (usually about 14 seconds, though the more RAM allocated to the query pod, the longer it takes to release the RAM and start a new pod), the user can refresh the page to get the results.

UPD: starting from Sisense L2021.5, Soft-Restart of the ElastiCubes has been implemented (beta; please test it before going live). The Soft-Restart of the qry pod restarts only MonetDB and not the entire pod, which speeds up the restart when hitting Safe-Mode.
To enable Soft-Restart:
- Go to the control panel.
- Click 5 times on the Sisense logo at the top left corner.
- Navigate to the Management section in the left-hand side menu.
- Scroll down to the end of the page.
- Enable Soft-Restart.

To fix dashboard Safe-Mode issues:
- Consider changing the Data Group settings.
- Consider a RAM upgrade.
- Increase the number of Instances in the Data Group, so when one of the instances is restarted due to Safe-Mode, the others will handle the requests.

Settings for the ElastiCube Query Safe-Mode are located in the Configuration Manager, under ElastiCube Query Params:
- Cleanup Threshold Percentage - the % of RAM kept free by Safe-Mode (default 15%).
- Safe-Mode Grace Period - after Safe-Mode is triggered, Sisense cancels all queries for 30 seconds.
- Disable/Enable Safe-Mode - disabling is NOT RECOMMENDED!!!
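When deciding which of the cases above applies, the build and query logs listed earlier are the quickest check. A minimal sketch, assuming the single-node log path from this article and the Sample ECommerce cube name used in its examples:

# Follow the build log of a cube (file names follow ec-<cube>-bld-<hash>.log)
tail -f /var/log/sisense/sisense/ec-sample-ecommerce-bld-*.log

# Search recent query-pod logs for out-of-memory / Safe-Mode messages
grep -iE 'safe|oom|memory' /var/log/sisense/sisense/ec-sample-ecommerce-qry-*.log | tail -20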
How to check network performance in Linux
You might need to check the network performance between the nodes of a multi-node cluster, or between a Linux server and a remote database on Linux.
1. Install the following tool on each node:
For Ubuntu: sudo apt install iperf3
For Red Hat / CentOS / Amazon Linux: sudo yum install iperf
2. Start the server on the first node: iperf3 -s (or 'iperf -s')
3. Note its port from the output, then launch the client on another node / another server: iperf3 -c IP_of_the_server_above -p port_of_the_server
Feel free to check the following combinations: node1 - node2, node1 - node3, node2 - node3 (or Sisense server - database server).
An example output:
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 172.16.0.98, port 49216
[ 5] local 172.16.0.38 port 5201 connected to 172.16.0.98 port 49218
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-1.00 sec 105 MBytes 884 Mbits/sec
[ 5] 1.00-2.00 sec 110 MBytes 922 Mbits/sec
[ 5] 2.00-3.00 sec 110 MBytes 924 Mbits/sec
[ 5] 3.00-4.00 sec 110 MBytes 923 Mbits/sec
[ 5] 4.00-5.00 sec 110 MBytes 926 Mbits/sec
[ 5] 5.00-6.00 sec 110 MBytes 919 Mbits/sec
[ 5] 6.00-7.00 sec 110 MBytes 921 Mbits/sec
[ 5] 7.00-8.00 sec 110 MBytes 922 Mbits/sec
[ 5] 8.00-9.00 sec 110 MBytes 926 Mbits/sec
[ 5] 9.00-10.00 sec 109 MBytes 913 Mbits/sec
[ 5] 10.00-10.05 sec 4.99 MBytes 927 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 5] 0.00-10.05 sec 0.00 Bytes 0.00 bits/sec sender
[ 5] 0.00-10.05 sec 1.07 GBytes 918 Mbits/sec receiver
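To run all the node-pair combinations suggested above from one machine, a small loop can help — a minimal sketch, assuming iperf3 is installed everywhere, a server (iperf3 -s) is already running on each peer, and the IPs below are placeholders for your node addresses:

# Peer IPs are placeholders; replace with your node addresses
PEERS="172.16.0.38 172.16.0.98"

for ip in $PEERS; do
  echo "=== Testing path to $ip ==="
  iperf3 -c "$ip" -p 5201 -t 10   # 10-second test against the default iperf3 port
done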
How much RAM used by a dashboard/widget (Linux)
To control RAM usage and set up proper Data Group limits, you may want to know how much RAM is used by a dashboard/widget. On Linux you can use Grafana; on Windows you can use Performance Monitor.
1. Make sure that no one else is using a dashboard connected to the same cube as the dashboard you are checking.
2. Open three separate tabs in your browser:
- The dashboard you would like to check
- Grafana
- The ElastiCube Data tab
3. Restart the query pod: in the Data tab, stop the cube that is the data source of the dashboard, wait 5-10 seconds, and start the cube again (do not use the "restart" option; to get a correct result you need to stop and then start the cube).
4. In the Grafana "All pods per namespace" dashboard, filter for the newly created query pod of the cube by "ec-cube_Name" or by "qry", enable auto-refresh for convenience, and check the current usage.
5. Refresh the tab with the dashboard. It will send the requests to the query pod to calculate the results.
6. Check the RAM usage of the pod in Grafana.
Using the same technique you can check a widget's usage: open the required widget by pressing the edit (pencil) button instead of the entire dashboard, and you will be able to refresh only the widget page, which sends the query only from that particular widget. Check the video demo: https://www.loom.com/share/227b3acf97dd4c4fa74156642a857258

The main reasons for performance issues are:
- Heavy formulas used inside widgets. In this case, you can move the calculations to the cube level, i.e., create a custom table/column with the formulas so they are calculated at build time, and on the dashboard you use the ready results and aggregations.
- Data Security. With Data Security, the main issue is that query results, which Sisense usually reuses between users, cannot be reused. Data Security adds an additional join that makes Sisense treat the query as unique and calculate the result again.
- Many-to-many relationships. A many-to-many relationship inflates your data due to duplication resulting from the Cartesian product it generates. Sisense recommends installing the JAQLine plugin in order to inspect the connections between the tables.
For more information, see the following articles: Many to Many Relationships, Star Schema, Multi Fact Schema. Also, please check the comprehensive guide we compiled for troubleshooting performance issues.
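If you prefer the command line over Grafana for steps 4-6, kubectl can report the same per-pod numbers — a minimal sketch, assuming kubectl access to the cluster, the default sisense namespace, and that pod metrics are available (the ec-<cube>-qry naming follows the convention described above):

# Find the freshly started query pod of the cube
kubectl get pods -n sisense | grep qry

# Watch its CPU/RAM every 5 seconds while you refresh the dashboard
watch -n 5 "kubectl top pod -n sisense | grep qry"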
Data Groups In-depth
Data groups are a way to limit and control the resources of your instance and avoid Out-Of-Memory (OOM) and Safe-Mode exceptions. First, a description of how Sisense works at a general level:
1. When a user opens a dashboard for the first time, Sisense starts (warms up) the query pod of the cube, which calculates the results and responds to the widget/dashboard (query pod, e.g. "ec-SampleCube-qry").
2. The query pod calculates the results, returns them to the dashboard, AND saves the results in RAM for reuse by other users.
3. When another user opens the same dashboard with the same filters, the same query is sent; the query pod does not recalculate the results but takes the ready results from memory, which speeds up the dashboard load.
4. If a user changes a filter, a new query is sent to the query pod, where it is recalculated, returned to the dashboard, and saved for further use by other users, and so on. (NOTE: if Data Security is applied to the cube, query results from one user will not be reused for another, because Sisense adds additional joins to apply the Data Security, which makes every query unique; hence it is calculated every time.)
Eventually, a dashboard could (in case of many-to-many relationships, heavy formulas, complex aggregations, etc.) occupy the RAM of the entire server, which can trigger Out-Of-Memory issues, prevent other ec-qry (dashboard) pods from starting, and impact the overall performance of the server. To avoid this, you can limit resources using Data Groups.

Now let's review the Data Group settings, followed by tips on best practices. Data Group settings are available from the Admin tab -> Data Groups in the left-hand side menu.

Main Section
Group Name - the name of the group as reflected in the Data Group list.
Build Node(s) - in a multi-node deployment, this is where you specify the node on which the ElastiCube will be built. It is crucial to specify the nodes; otherwise, multi-node capabilities will not be used. In a single-node deployment, the same server is responsible for both build and query, so it will be the one server to rule them all.
Query Nodes - likewise, the nodes used for query purposes (dashboards), mostly relevant for multi-node; in a single-node deployment it should be the same server as for build. In a multi-node deployment, this is where you specify the node on which the ec-ElastiCube-qry pod will run. Needed for redundancy and parallelism.
ElastiCubes - the list of ElastiCubes to which the Data Group limitations are applied. Note that the same settings apply to each cube in the group: limiting Build to 8 GB gives 8 GB to each build pod (ec-Elasticube-bld) in the group, not to the entire group.
Instances - the number of query pods created per ElastiCube when the dashboard is in use. Increasing the number of instances improves (reduces) query processing time, because query execution is shared between multiple instances. After calculation, results are not shared between pods. Increasing the number of instances can also help with OOM issues: when one of the pods is restarted, the others keep serving queries. The RAM and CPU limitations apply to each instance, so limiting query RAM to 5 GB allows 5 GB for each instance.
Note: if the number of instances is set to zero, it enables the IDLE timeout, which stops the ElastiCube (deletes the query pod) if it has not been used for 30 minutes. Remember that it takes time to start the cube after it has been stopped; however, while stopped it does not use any resources. This is useful when a dashboard is not used often and there is no need to keep the query results. UPDATE: starting from L2022.5, the IDLE option moved from "Instances" and now has its own toggle at the bottom of the Data Group settings.

To change the IDLE time:
1. Go to the Configuration Manager, available from the Admin tab -> System Settings, at the top right corner.
2. Scroll down to the bottom of the page.
3. Press "Show Advanced".
4. Expand "Advanced Management Params".
5. Edit "Stop Unused Datasource (minutes)" as desired.
6. Save the changes. Note: the settings are applied on ElastiCube rebuild.

Secondary Section
ElastiCube Query Recycler - when disabled, the query pod does not store the results of query execution and recalculates the results on each request. Useful in the case of Data Security, when queries cannot be reused by other users, or when you are testing the cube and do not want to use much RAM.
Connector Running Mode - creates the connector pod inside the build pod, which increases the build time but makes the build process more stable. Requires more RAM. Should be used for debugging purposes only.
Index Size (Short/Long) - Long should be used if the cube has more than 50M rows or if the text fields contain very long values. In this case, Sisense uses a different indexing (x64).
Simultaneous Query Executions - limits the number of connections that the query pod opens to the ec-qry pod for running/executing queries. If your widgets contain heavy formulas, it is worth reducing this number to lower the pressure. Used when experiencing out-of-memory (OOM) issues (Java, not Safe-Mode). Parallelization is a trade-off between memory usage and query performance; 8 is the optimal number of queries. NOT RECOMMENDED to change without Sisense Support/Architect advice.
Query Timeout (Seconds) - how long a dashboard waits for a response with the query results from the query pod.
% Concurrent Cores per Query - related to the Mitosis and parallelism technology. To minimize the risk of OOM, set this value to the equivalent of 8 cores: e.g., on a 32-core machine set it to 25; on 64 cores, set it to 14; the value should be an even number. It can be increased up to a maximum of half the total cores - i.e., treat the default as the recommended maximum that can be adjusted down only. Change it when experiencing out-of-memory (OOM) issues.

Query Nodes Settings (for ec-qry pods, dashboards)
Reserved Cores - the number of free cores without which the pod will not start. When the pod should start (a dashboard has been opened), Sisense checks whether there is a free core: if there is, Sisense starts the qry pod; if not, it does not start. This setting also reserves the core(s) exclusively for this cube, so no other pod can use them.
Max Cores - the maximum number of cores the qry pod is allowed to use.
Reserved RAM (MB) - the same as Reserved Cores, but for the RAM used by dashboards: the RAM needed to start the qry pod. It also reserves the RAM so that it is used only by this cube's qry pod. If other cubes need RAM during a build or dashboard use, they will not get it from the reserved amount.
Max RAM (MB) - the maximum RAM the qry pod (dashboards) is allowed to use - by each of the query instances, if the count was increased under Instances. Please note that reaching 85% of the pod's usage OR of the overall server RAM causes a Safe-Mode exception, which (in the case of a dashboard qry pod) deletes and restarts the pod. During Safe-Mode, dashboard users see a Safe-Mode error while the qry pod is deleted and started again, so the next dashboard refresh brings the data. I would recommend setting Max RAM to 40 GB; this is the maximum reasonable RAM. If you still see Safe-Mode errors with a 40 GB limit, it is time to check the dashboard for optimization and/or many-to-many relationships. Note that the limit depends heavily on the size of the ElastiCube, the Data Security, and the formulas you are using: a cube of 1 GB on disk can use 40 GB of RAM, while a cube of 100 GB on disk might use 10 GB of RAM. It is normal for an ElastiCube to use 2-3 times its on-disk size. However, if an ElastiCube uses 40 GB, I would recommend looking into optimization. For example, if the RAM is used by heavy calculations in a widget, you can move the calculation to the ElastiCube build level, i.e., create a custom table with the needed calculation, so it is performed while the cube is built, and on the dashboard you use the ready results. Yes, in this case the build will use more RAM, but only once. Please check more best practices and optimizations at the end of the article.

Build Nodes Settings (for ec-bld pods)
Reserved Cores - the number of free cores without which the pod will not start. When the pod should start (a build has started), Sisense checks whether there is a free core: if there is, Sisense starts the bld pod; if not, it does not start.
Max Cores - the maximum number of cores the bld pod is allowed to use.
Reserved RAM (MB) - the amount of free RAM required to start the build. Useful if you know the build will take, say, 30 GB and will fail otherwise; knowing it would fail anyway, there is no point in starting it and consuming RAM. It can also be used to "reserve" the RAM so that nothing else uses it while the build runs, ensuring the build finishes successfully.
Max RAM (MB) - the maximum RAM a cube is allowed to use while it builds.

Additional Information
User label - assigns labels to the nodes in the data group. This is useful for scaling your nodes: for example, if you have implemented auto-scaling, you can use these labels to scale up the nodes. You can find further information in this article.

Storage Settings
Interim Build on Local Storage - enable to build ElastiCubes on local storage instead of shared storage. Should be used to troubleshoot issues on a multi-node deployment, where cubes are usually stored on shared storage so that all nodes have access to them, and for redundancy. On a single node, builds always go to local storage (a single node has no shared storage).
Store ElastiCube Data on S3 - enable to save your ElastiCubes on S3 if you have configured S3 access in your system configuration. Reflected (cannot be changed from here) in Configuration Manager -> Management.
Query cube from local storage - enable to query the cube from local storage when Interim Build on Local Storage is enabled, for multi-node. Using local storage can decrease the time required to query the cube after the first build on local storage.
Set as default - applies the group as the default.
In this case, all new cubes will be created with the settings of the default Data Group.

Best Practices
1. Choose the strategy that is more important in your case: a successful build, or dashboard loads without Safe-Mode exceptions.
a) Successful build - plan the schedule so that the cube builds do not overlap each other, set unlimited (-1) Max RAM for the Build Nodes, and limit the query pods to an amount that leaves enough RAM for a successful build.
b) Dashboard priority - in this case you need to understand the limits a build will need. Run the build unlimited, check in Grafana how much it uses, add 15% (Safe-Mode), and set the build limits accordingly. The build will then never use more than the limit, and a Safe-Mode exception will let you know that the data is growing, which gives you more control over your environment.
2. Set up reasonable limitations for the query pod. Remember: the higher the limit, the longer it takes to release the RAM. A reasonable Max limit in most cases is 40 GB; if a query pod uses more, it is worth checking the dashboards in order to optimize the widgets. Ideally, mid-sized cubes should not use more than 10 GB of RAM. Unfortunately, it is not possible to predict RAM usage from the cube's size on storage, as it depends on the formulas used in the widgets: an aggregation from 1 table versus a pivot with many JOINs will use very different amounts of RAM on the same data. Also, check the Performance Guide.
3. Create different groups for different purposes. For example, create a "test" group and limit its resources so that even a many-to-many created by mistake cannot overuse the RAM. Create a small-scale group for demo cubes/dashboards so that under no circumstances can they use more RAM than you allow (1 GB, for example).
4. Increase the number of instances if RAM is not an issue but the dashboards load slowly.
5. Consider auto-scaling capabilities. For example, suppose the RAM/CPU spikes and issues happen only on weekends when you rebuild all your cubes, and the current amount of RAM/CPU is insufficient only on that day/week/hour. With auto-scaling, Sisense creates an additional node that handles the additional load and is scaled down when no longer needed - for example, if RAM load >= 80%, start a new node; if RAM drops below 80%, scale the node down. This is cheaper than upgrading the entire server when the extra RAM is not needed the rest of the time.
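To verify that the data-group limits were actually applied to a cube's pods, you can inspect the pod spec in Kubernetes — a minimal sketch, assuming kubectl access, the default sisense namespace, and the ec-<cube>-qry pod naming convention used throughout this article (the pod name below is a placeholder):

# List the cube pods, then show the CPU/memory requests and limits of a query pod
kubectl get pods -n sisense | grep ec-
kubectl describe pod <ec-your-cube-qry-pod-name> -n sisense | grep -A 2 -E "Limits|Requests"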
Changing Log Level for specific service
When debugging particular APIs, you may need to increase the verbosity of a particular service. This document describes how to perform this operation:
1. Enable the SI CLI.
2. Run the following command:

si loggers set -service $SERVICE_NAME -level $LOG_LEVEL
# Example: setting the log level for the API Gateway service to debug
si loggers set -service api-gateway -level DEBUG

3. The service will print the following message to its log to indicate that the verbosity has changed:

Updating logger for api-gateway in /var/log/sisense log level: DEBUG
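Remember to lower the verbosity again once you are done, as debug logging can be noisy. A minimal sketch using the same command — INFO as the default level is an assumption, so check what your deployment normally uses:

# Revert the API Gateway service to a less verbose level (INFO assumed as the default)
si loggers set -service api-gateway -level INFO

# Confirm the change was picked up by the service log (log path per the message above)
grep -r "Updating logger for api-gateway" /var/log/sisense | tail -1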