Oleg_S
Community Team Member

Symptoms

Relevant for Linux

Below are some common symptoms your system may experience under heavy load with simultaneous builds. Please note the list is not exhaustive:

  • Build failures for random cubes, or for a few cubes that rebuild successfully when triggered manually
  • CubeIsUnreachable error
Oleg_S_2-1661428662618.png

 

  •  Failure due to Build Service restart
Oleg_S_1-1661428175124.png 

Diagnosis

Build flow is connected to three main services:

  • Build: Triggers builds and moves logs between the pod and the UI
  • Management: Creates the ec-bld pod and handles Kubernetes-level communication during the flow
  • Ec (<name_of_the_cube>-bld): The actual cube import process, which creates the cube folder and performs the import
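As a quick sanity check, you can list the pods behind these services with kubectl. This is a sketch; the `sisense` namespace is an assumption, so adjust it to your deployment:

```shell
# List the pods involved in the build flow: the build and management
# services plus any running <cube>-bld pods.
# Assumes the Sisense namespace is "sisense"; adjust as needed.
kubectl get pods -n sisense | grep -E 'build|management|bld'
```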

Memory consumption of the ec-bld pod is controlled by the Data Group's Max RAM for Build setting (more in-depth article here) and affects only that cube, whereas the Build and Management services serve every build running on the system. When four or more cubes build in parallel, the Build and Management services can come under heavy load and require additional RAM. Since both services are Java-based, they have a default memory limit mechanism that allows 500 MB of RAM to be used. If a service needs more, it can be throttled, which can break defined timeouts or cause the service to restart.

To confirm that this is the case, check Grafana for the Build and Management services at the time builds fail. Linked is additional information on how to use Grafana to troubleshoot performance issues. Keep in mind that although the services are limited to 500 MB by default, they can consume more, since parts of each service are not Java-based and Java usage can peak from time to time. If you see that a service is under load, allocating more RAM is a good way to improve stability.
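Alongside Grafana, the configured limit and live usage can also be inspected from the command line. A sketch, again assuming the `sisense` namespace (live usage requires the metrics-server add-on):

```shell
# Show the memory limit currently configured on the build deployment.
kubectl get deployment build -n sisense \
  -o jsonpath='{.spec.template.spec.containers[0].resources.limits.memory}'

# Show live memory usage of the build and management pods.
kubectl top pods -n sisense | grep -E 'build|management'
```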

Solution

Increase the Memory Limits for the Build and Management services. Since these are Java services, there are two places to update:
  • Kubernetes deployment limits
  • Java service on the Sisense Configuration side
 
To update Memory Limits please follow the steps below:
1. SSH to the server
 
2. Run the first command for the Build service or the second for the Management service:
kubectl edit deployment -n <namespace> build 
kubectl edit deployment -n <namespace> management
 
3. Find "resources: limits" and update memory size to 4000 (press “i” on the keyboard to enter edit mode → edit value → ESC to exit edit mode → use “:wq!” to exit and save changes).
 
Oleg_S_3-1661429254586.png

4. The build/management pod restarts automatically after the deployment is modified.

 
5. Navigate to Admin → System management → Configuration → Build → Memory limit for build pod, and to Admin → System management → Configuration → Advanced Management params → Memory limit for management pod. Set each value 1000 lower than the corresponding value set in step #3 for the deployment.
 
 
Oleg_S_6-1661429397007.png

6. Save settings.
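The deployment side of the procedure above can be sketched as a script. This is a non-interactive alternative to `kubectl edit` using `kubectl set resources`; the `sisense` namespace, the 4000 value, and the MiB unit are assumptions taken from the example in step 3, and the Configuration value is derived by subtracting 1000 as described in step 5:

```shell
#!/bin/bash
# Sketch of steps 2-5 for one service; adjust NAMESPACE and SERVICE.
NAMESPACE=sisense          # assumption: your Sisense namespace
SERVICE=build              # or "management"
DEPLOY_LIMIT_MB=4000       # Kubernetes deployment limit (step 3)

# The Configuration value must be 1000 lower than the deployment limit (step 5).
config_limit() {
  echo $(( $1 - 1000 ))
}

if command -v kubectl >/dev/null; then
  # Non-interactive equivalent of editing "resources: limits" in step 3.
  kubectl set resources deployment "$SERVICE" -n "$NAMESPACE" \
    --limits=memory="${DEPLOY_LIMIT_MB}Mi"

  # The pod restarts automatically (step 4); wait for it to come back up.
  kubectl rollout status "deployment/$SERVICE" -n "$NAMESPACE"
fi

echo "Set the $SERVICE Configuration memory limit to $(config_limit "$DEPLOY_LIMIT_MB") MB"
```

The helper keeps the deployment and Configuration values in sync so the 1000 MB headroom rule from the Tips below is never violated by hand-edited numbers.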

Tips:

Please ensure that the Memory Limit value is correctly entered. If there is a problem with the value, the service will not start.

Please keep in mind that the deployment value should be 1000 higher than the Configuration value.

If you need any additional help, please contact Sisense Support.

Last update: 02-01-2024 10:34 AM