Performance Improvements on Single-Node Linux Servers Under Heavy Usage
Symptoms

Relevant for Linux

In a default single-node Sisense deployment, there is one pod per service. Under heavy usage, which is usually caused by parallel requests, the single instance of each service can become a bottleneck, resulting in slowness or delays when accessing dashboard objects, opening Admin pages, etc.

Diagnosis

A good test is to compare how the system behaves under heavy load versus during non-working hours. If you see a difference in the time it takes to load the dashboard layout or to access Admin pages, the cause may be the load placed on the workers of each pod. This should not be an issue in multi-node environments, where services are already scaled out, but in a single-node environment, if there is enough RAM/CPU to handle the load, Sisense services can be scaled to improve the performance of parallel requests.

Important services include:

api-gateway - web server service. Receives each request first and communicates with the other services.
galaxy - serves dashboard objects, widgets, data security, shares, the navigation panel, and alerts.
identity - provides details of users, groups, and authentication.
configuration - provides system settings and configuration.

Solution

If you have enough RAM/CPU resources on the server, you can scale services by running the commands below. Note that each additional pod replica can consume up to 8.5 GB of RAM, so keep this in mind when scaling. The commands below double the number of replicas for each service. Remember to change the namespace and to set --replicas to the desired count.

kubectl -n <namespace> scale deployment identity --replicas=2
kubectl -n <namespace> scale deployment galaxy --replicas=2
kubectl -n <namespace> scale deployment api-gateway --replicas=2
kubectl -n <namespace> scale deployment configuration --replicas=2

The steps above can also help if you notice frequent api-gateway restarts caused by out-of-memory (OOM) errors. Sisense's R&D team is working on a solution to this problem, but in the interim, scaling the api-gateway can help prevent service disruptions.
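Before and after scaling, it can be useful to confirm that the node has spare capacity and that the new replicas actually come up. The following is a minimal sketch, not an official Sisense procedure; it assumes the Kubernetes metrics server is installed (required for kubectl top) and that <namespace> is your Sisense namespace.

# Check CPU/RAM headroom on the node (requires metrics-server); free -h on the host works as a fallback
kubectl top nodes

# Check the current replica counts for the relevant deployments
kubectl -n <namespace> get deployments api-gateway galaxy identity configuration

# After scaling, confirm that all new pods reach the Running/Ready state
kubectl -n <namespace> get pods | grep -E 'api-gateway|galaxy|identity|configuration'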
Build Stability Improvements on Heavily Loaded Systems

Symptoms

Relevant for Linux

Below are some of the symptoms your system may experience under heavy load with simultaneous builds. Please note the list is not exhaustive:

Build failures on random cubes, or on a few cubes that then rebuild successfully when run manually
CubeIsUnreachable error
Build failure caused by a Build service restart

Diagnosis

The build flow involves three main services:

Build: triggers builds and moves logs between the pod and the UI.
Management: creates the ec-bld pod and handles Kubernetes-level communication throughout the flow.
Ec (<name_of_the_cube>-bld): the actual cube import process, which creates the folder and performs the import.

Memory consumption of the ec-bld pod is controlled by the data group's Max RAM for Build setting (more in-depth article here) and affects only that cube, whereas the Build and Management services serve all builds running on the system. When four or more cubes run in parallel, the Build/Management services can come under heavy load and require additional RAM. Since both services are Java-based, they have a default memory limit mechanism that allows 500 MB of RAM to be used. If a service needs more, it can start throttling, which can exceed defined timeouts or cause a service restart. To confirm that this is the case, check Grafana for the build/management services at the time builds fail. Linked is additional information on how to use Grafana to troubleshoot performance issues. Keep in mind that although services are limited to 500 MB by default, they can consume more, since parts of the services are not Java-based and Java usage can peak from time to time. If you see that a service is under load, allocating more RAM is a good way to improve stability.

Solution

Increase the memory limits for the Build and Management services. Since these are Java services, there are two places to update:

Kubernetes deployment limits
Java service setting on the Sisense Configuration side

To update the memory limits, follow the steps below:

1. SSH to the server.
2. Execute the first command for build or the second for management:
kubectl edit deployment -n <namespace> build
kubectl edit deployment -n <namespace> management
3. Find "resources: limits" and update the memory size to 4000 (press "i" to enter edit mode → edit the value → press ESC to exit edit mode → type ":wq!" to save and exit).
4. The build/management pod will automatically restart after the deployment is modified.
5. Navigate to Admin → System Management → Configuration → Build → Memory limit for build pod, and to Admin → System Management → Configuration → Advanced Management params → Memory limit for management pod. Set the value to be 1000 lower than the value set in step 3 for the deployment.
6. Save the settings.

Tips:

Please ensure that the memory limit value is entered correctly. If there is a problem with the value, the service will not start.
Keep in mind that the deployment value should be 1000 MB larger than the Configuration value.

If you need any additional help, please contact Sisense Support.
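If you prefer not to edit the deployment interactively in vi (steps 2-3), the same limit can be applied non-interactively. This is a sketch of an alternative approach, not the documented procedure; it assumes the limit should apply to every container in the deployment, so check the container names first and add -c <container_name> if you need to target only one.

# List the container names in the build deployment (names vary by Sisense version)
kubectl -n <namespace> get deployment build -o jsonpath='{.spec.template.spec.containers[*].name}'

# Apply a 4000 MB memory limit to the containers in the deployment
kubectl -n <namespace> set resources deployment build --limits=memory=4000Mi

# The pod restarts automatically; confirm the new limit took effect
kubectl -n <namespace> get deployment build -o jsonpath='{.spec.template.spec.containers[*].resources.limits.memory}'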
Kubernetes DNS Linux Issue Caused by Missing br_netfilter Kernel Module

This article was inspired by a customer ticket about an issue that initially looked like a RabbitMQ failure, but turned out to be caused by the br_netfilter kernel module being absent on one of the EKS cluster nodes.
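A quick way to check whether the module is loaded on a node, and to load it if it is missing, is shown below. This is a general Linux sketch rather than a Sisense-specific procedure; the persistence file name is an example and may differ on your distribution.

# Check whether br_netfilter is currently loaded
lsmod | grep br_netfilter

# Load the module immediately
sudo modprobe br_netfilter

# Persist the module across reboots (file name is an example)
echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf

# Verify that bridged traffic is visible to iptables, as Kubernetes networking expects
sysctl net.bridge.bridge-nf-call-iptables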
Sisense Linux Multinode: MongoDB, Zookeeper and Data Recovery

There are situations where a Sisense Linux multi-node instance becomes corrupted and the only way forward is reinstallation. In these cases, users can face challenges, especially when there is no backup from which to recreate the instance. In this article, we review methods that help recover data when the user still has active storage in MongoDB or Zookeeper.
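As a precaution before any reinstallation, the application metadata in MongoDB can be dumped from the running pod. This is a generic, hedged sketch rather than the recovery procedure from the article itself; the pod label selector and file paths are assumptions to adjust for your deployment, and mongodump will additionally need credentials if authentication is enabled.

# Find the MongoDB pod (the label selector is an assumption; adjust as needed)
MONGO_POD=$(kubectl -n <namespace> get pods -l app=mongodb -o jsonpath='{.items[0].metadata.name}')

# Dump all databases to a compressed archive inside the pod
kubectl -n <namespace> exec "$MONGO_POD" -- mongodump --gzip --archive=/tmp/sisense-mongo.archive

# Copy the archive off the pod for safekeeping
kubectl -n <namespace> cp "$MONGO_POD":/tmp/sisense-mongo.archive ./sisense-mongo.archive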