Performance Improvements on Single-Node Linux Servers Under Heavy Usage
Symptoms
Relevant for Linux Single-node Sisense
In a default Sisense single-node deployment, each service runs as a single pod. Under heavy usage, which is usually caused by parallel requests, that single replica handling all requests can cause slowness or delays when accessing dashboard objects, opening Admin pages, etc.
Diagnosis
A good test is to compare how the system behaves under heavy load versus during non-working hours. If dashboard layouts load more slowly or Admin pages take longer to open during peak hours, the cause may be the amount of load placed on the workers of each pod. This is usually not an issue in multi-node environments, where services are already scaled out. In a single-node environment, provided there is enough RAM/CPU to handle the load, Sisense can scale services to improve the performance of parallel requests. Important services include:
- api-gateway - the webserver service. Receives each request first and routes it to the other services.
- galaxy - service which serves dashboard and widget objects, data security, shares, the navigation panel, and alerts.
- identity - service which provides details of users, groups, and authentication.
- configuration - service which provides system settings and configuration.
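As a rough sketch of how such services could be inspected and scaled with standard Kubernetes tooling, the commands below assume the default `sisense` namespace and the deployment names listed above; adjust them to match your environment, and note that replica changes made directly with `kubectl` may be reverted by a later Sisense upgrade or configuration sync.

```shell
# Inspect current replica counts for the key services
# (assumes the default "sisense" namespace).
kubectl -n sisense get deployments api-gateway galaxy identity configuration

# Check whether the node has RAM/CPU headroom before scaling
# (requires the metrics-server addon).
kubectl top nodes

# Scale a service to 2 replicas -- example values only;
# monitor resource usage after each change.
kubectl -n sisense scale deployment galaxy --replicas=2
kubectl -n sisense scale deployment api-gateway --replicas=2

# Verify that the new pods reach the Running state.
kubectl -n sisense get pods | grep -E 'galaxy|api-gateway'
```

Scale one service at a time and re-test under load; each additional replica consumes RAM/CPU on the single node, so over-scaling can itself cause degradation.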