Oleg_S
Community Team Member

Symptoms

Relevant for Linux Single-node Sisense

In a default Sisense single-node deployment, each service deployment runs a single pod. Under heavy usage, which is usually caused by parallel requests, this single replica per service can become a bottleneck, resulting in slowness or delays when accessing dashboard objects, opening admin pages, etc.
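
For reference, one way to confirm this layout (a sketch, assuming <namespace> is your Sisense namespace, commonly sisense) is to list the service deployments:

# In a default single-node deployment, the READY column shows 1/1 for each
# service deployment, i.e. a single replica per service.
kubectl -n <namespace> get deployments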

Diagnosis

A good test is to compare how the system behaves under heavy load versus during non-working hours. If dashboard layouts take longer to load or admin pages are slower to open during peak hours, the cause may be the amount of load placed on the workers of each pod. This is usually not an issue in multi-node environments, where services are already scaled out, but in a single-node environment, if there is enough RAM/CPU to handle the load, Sisense services can be scaled to boost the performance of parallel requests; one way to compare resource usage is shown after the list below. Important services include:

  • api-gateway - the webserver service. It receives each request first and communicates with the other services.
  • galaxy - service that serves dashboard objects, widgets, data security, shares, the navigation panel, and alerts.
  • identity - service that provides details about users, groups, and authentication.
  • configuration - service that provides system settings and configuration.
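
One rough way to make this comparison (a sketch, assuming the Kubernetes metrics server is available in your cluster) is to capture resource usage during peak hours and again during off-peak hours, then compare the output:

# Overall CPU/RAM usage and capacity of the node
kubectl top nodes

# Per-pod CPU/RAM usage for the Sisense services
kubectl -n <namespace> top pods

If the pods of the services listed above are close to their limits during peak hours while the node itself still has free RAM/CPU, scaling those services (see Solution below) is a reasonable next step.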

Solution

If you have enough RAM/CPU resources on the server, you can scale the services by running the commands below. Please note that each additional pod replica can consume up to 8.5 GB of RAM, so keep this in mind when scaling. The commands below double the number of replicas for each service. Remember to change the namespace and to set --replicas to the desired number.
 
kubectl -n <namespace> scale deployment identity --replicas=2
kubectl -n <namespace> scale deployment galaxy --replicas=2
kubectl -n <namespace> scale deployment api-gateway --replicas=2
kubectl -n <namespace> scale deployment configuration --replicas=2
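
After scaling, you can verify that the new replicas are running (a sketch using the same deployment names as the commands above):

# The READY column should show 2/2 for each scaled deployment
kubectl -n <namespace> get deployments api-gateway galaxy identity configuration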
 
The above steps can also help if you notice frequent api-gateway restarts caused by out-of-memory (OOM) errors. Sisense's R&D team is working on a solution to this problem, but in the interim, scaling the api-gateway can help prevent service disruptions.
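
To check whether api-gateway restarts are indeed caused by OOM kills (a sketch; <api-gateway-pod-name> is a placeholder for the actual pod name taken from the first command), you can run:

# The RESTARTS column shows how often each pod has been restarted
kubectl -n <namespace> get pods

# Look for "OOMKilled" as the reason under Last State in the container status
kubectl -n <namespace> describe pod <api-gateway-pod-name>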