Optimizing Containerd Storage in RKE2 (for non-cloud-managed Kubernetes) | [Linux]

Note: Applicable for L2025.2 and newer (RKE2). For <L2025.2 (RKE1) instances, please refer to the guide below: Optimizing Docker Storage | Sisense

Step-by-Step Guide:
Because containerd is used, the ‘docker’ command is no longer relevant; ‘crictl’ is used instead. ‘crictl’ is not available in the system PATH by default, so below you will find the steps for making ‘crictl’ available and executing the basic commands for cleaning up unused objects.
1. Navigate to the rke2 directory:
cd /var/lib/rancher/rke2/
2. Copy the ‘crictl’ binary to a location recognized by your $PATH variable - in this example, ‘/usr/local/bin’:
sudo cp bin/crictl /usr/local/bin/
3. Copy the config file, which is necessary for binary execution, into the /etc directory:
sudo cp agent/etc/crictl.yaml /etc/
After performing the steps above, you will be able to execute the ‘crictl’ command directly. Execute the following two commands to 1) remove all stopped (exited) containers and print the result; 2) remove all unused container images and print the result:
sudo crictl ps -a --state Exited -q | xargs -r sudo crictl rm | wc -l
sudo crictl rmi --prune | wc -l

Conclusion
This article describes the steps for manually cleaning up stopped containers and unused images in RKE2 deployments in order to release storage.

References/Related Content
Optimizing Docker Storage | Sisense

Disclaimer: This post outlines a potential custom workaround for a specific use case or provides instructions regarding a specific task. The solution may not work in all scenarios or Sisense versions, so we strongly recommend testing it in your environment before deployment. If you need further assistance with this, please let us know.

Changing key-based authentication to password-based for SSH to Linux VM [Linux]

When creating a VM for Sisense installation in a cloud environment (such as AWS), it is deployed with the default key-based authentication method, meaning the respective SSH key must be provided when establishing an SSH connection to the instance. In some cases, you may want to change this authentication method to password-based, so that an SSH connection can be established after a password prompt rather than by specifying the SSH key. Since Sisense allows both methods, it is useful to understand how to change the authentication method depending on the actual infrastructure needs.

Backing up Sisense configuration without taking full Sisense backup [Linux]

Step-by-Step Guide:
Usually, a manual backup of the configuration is not required, since it is already collected by Sisense on a daily basis and can be found in the /opt/sisense/storage/configuration directory (in the UI, this is the "configuration" folder in File Management). Each daily backup is stored in JSON format. In rare scenarios, if the "configuration" folder is empty (for example, the application was just installed and an automated backup has not been created yet), it is possible to perform a manual backup of the current configuration DB.
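Before performing a manual backup, you can check whether daily backups already exist in the shared storage. A minimal check, assuming the management pod mounts /opt/sisense/storage (the same pod is used as the copy target in step 3 below):
=====
kubectl -n sisense exec -it $(kubectl -n sisense get pods --no-headers=true -l app=management -o custom-columns=":.metadata.name" | head -n 1) -- ls -lh /opt/sisense/storage/configuration
=====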
Steps for the manual configuration backup:
1. Run the following command:
=====
kubectl exec -it -n sisense $(kubectl -n sisense get pods --no-headers=true -l app=configuration -o custom-columns=":.metadata.name" | tail -n 1) -- node /usr/src/app/node_modules/@sisense/sisense-configuration/bin/sisense-conf export -p -c $(kubectl -n sisense get endpoints | grep -oE '\b[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+:2181\b' | sed -n '1p') -o /tmp/zookeeperBackup.json
=====
2. Copy the backup to your local storage:
=====
kubectl cp sisense/$(kubectl -n sisense get pods --no-headers=true -l app=configuration -o custom-columns=":.metadata.name" | tail -n 1):/tmp/zookeeperBackup.json ~/zookeeperBackup.json
=====
3. Copy the backup from local storage to the “configuration” directory:
=====
kubectl cp ~/zookeeperBackup.json sisense/$(kubectl -n sisense get pods --no-headers=true -l app=management -o custom-columns=":.metadata.name"):/opt/sisense/storage/configuration/
=====
To restore the backup of the configuration (!NOTE - it will overwrite the current configuration): navigate to Admin -> Server & Hardware -> System Management -> Configuration -> 5 clicks on the Sisense logo -> Backups -> Select the backup file to be restored -> Click Restore

Conclusion
This article describes the process of backing up the Sisense configuration and restoring it without interacting with the full Sisense backup.

Disclaimer: This post outlines a potential custom workaround for a specific use case or provides instructions regarding a specific task. The solution may not work in all scenarios or Sisense versions, so we strongly recommend testing it in your environment before deployment. If you need further assistance with this, please let us know.

Troubleshooting Sisense validator [Linux]

Step-by-Step Guide:
If the Validator is failing, we can make some adjustments and review its process from the pod itself to verify what exactly went wrong. Because the Validator pod is launched by its job, it is designed to stop after completion, whether it was successful or not. So, in order to proceed, we need to copy the existing job and launch a test duplicate of it with a ‘sleep’ option added, so that it remains running even after completion. We will also need to remove the auto-generated values filled in during the original job creation. You can run the following command, which will:
Search for the latest Validator job
Remove unneeded values
Re-launch its copy with the “test” name and the “sleep” option
===============
kubectl -n sisense get job $(kubectl -n sisense get jobs -l app.kubernetes.io/name=validator -o json | jq -r '.items | sort_by(.metadata.creationTimestamp) | last | .metadata.name') -o json | \
jq 'del(
  .metadata.creationTimestamp,
  .metadata.generation,
  .metadata.resourceVersion,
  .metadata.uid,
  .spec.selector,
  .status,
  .spec.template.metadata.labels["controller-uid"],
  .spec.template.metadata.labels["batch.kubernetes.io/controller-uid"]
)
| .metadata.name = "validator-test"
| .spec.template.metadata.labels["batch.kubernetes.io/job-name"] = "validator-test"
| .spec.template.metadata.labels["job-name"] = "validator-test"
| .spec.template.spec.containers[0].command[2] |= gsub("# sleep 6h"; "sleep 6h")' | kubectl apply -f -
===============
After running this command, the new job and its pod with the name validator-test-xxx will be created and will remain in the running state for 6 hours.
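Before connecting to the pod, you can confirm that the test pod has actually started. A quick check, relying on the “job-name” label set by the command above:
=====
kubectl -n sisense get pods -l job-name=validator-test
=====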
For further troubleshooting, follow the steps below:
Exec into the pod:
kubectl -n sisense exec -it $(kubectl -n sisense get pod | grep validator-test | awk '{print $1}') -- bash
Execute the run.sh script with ‘bash -x’ (this will show the detailed process of the script execution):
bash -x ./run.sh
Observe the results, which will help to better understand the nature of the issue.
After troubleshooting is done, feel free to delete the test Validator job:
=========
kubectl -n sisense delete job validator-test
=========
In case you need further assistance from Sisense, please feel free to open a Support Ticket.

Conclusion:
This article provides the steps that are useful for troubleshooting Validator issues in Sisense.

Disclaimer: This post outlines a potential custom workaround for a specific use case or provides instructions regarding a specific task. The solution may not work in all scenarios or Sisense versions, so we strongly recommend testing it in your environment before deployment. If you need further assistance with this, please let us know.

Error in ingress-nginx-controller pod after upgrading Sisense to L2024.1: main.go:64] port 80 is already in use. Please check the flag --http-port

!NOTE! - this guide is relevant for non-cloud-managed Sisense deployments, i.e. installed with Sisense Kubernetes (RKE).

After upgrading Sisense to version L2024.1, you might face the following error in the ingress-nginx-controller pod’s logs (the default namespace):
main.go:64] port 80 is already in use. Please check the flag --http-port
Such an issue might be the result of an incompatibility between the ingress-nginx release version that was updated in Sisense L2024.1 and the Kubernetes version, if the latter is lower than v1.26. If Sisense L2024.1 was freshly installed on a non-cloud instance (i.e. with Sisense RKE) as a new deployment (and was not upgraded from a previous version), this issue should not occur, since the L2024.1 package already comes with Kubernetes v1.26 by default. However, if Sisense was upgraded to L2024.1, or it was deployed/installed on a Kubernetes version lower than v1.26, there will be an incompatibility between the ingress-nginx release implemented in L2024.1 (4.10.0) and the Kubernetes version. To resolve the issue, it is necessary to re-run the Sisense upgrade with the following parameters in the configuration yaml file:
update: true
update_k8s_version: true
In case you are still facing the same issue after that, please open a ticket for Sisense Support.

This article provides a brief explanation regarding possible Nginx<->Kubernetes compatibility issues in Sisense L2024.1, as well as the steps to resolve the issue described.

Disclaimer: This post outlines a potential custom workaround for a specific use case or provides instructions regarding a specific task. The solution may not work in all scenarios or Sisense versions, so we strongly recommend testing it in your environment before deployment. If you need further assistance with this, please let us know.

"Failed validating Kubernetes ports at node" error during Sisense installation

Encountering the "Failed validating Kubernetes ports at node" error during Sisense installation on Linux? This issue typically arises due to closed ports or lingering Docker containers from a previous installation. Learn how to diagnose and resolve it effectively.
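As a first diagnostic step, you can check which Kubernetes ports are already bound on the node and whether containers from a previous installation are still present. A minimal sketch, assuming the commonly required control-plane ports (6443 for the API server, 2379-2380 for etcd, 10250 for the kubelet) - the exact ports checked by the installer may differ, and the ‘docker’ command is only relevant if the previous installation was Docker-based:
=====
sudo ss -tlnp | grep -E ':(6443|2379|2380|10250)\b'
sudo docker ps -a
=====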
Resolving installation error for Sisense - sisense-installer/bin/activate: no such file or directory

This article provides a basic understanding of a common error encountered during the installation of Sisense on Linux systems:
Verifying Python packages exist ...
./sisense.sh: line 240: sisense-installer/bin/activate: No such file or directory
Error occurred during generic_pre_install section
Exiting Installation ...

Verifying Sisense Installation Completion - Analyzing Migration Pod Messages

This article provides a basic understanding of how to perform initial verification of the installation/upgrade process. Specifically, it covers the flow related to the "migration" pod and the "provisioner" pod, and provides steps to analyze whether the installation/upgrade process was successful.
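As a starting point, you can check the state and recent log messages of these pods. A hedged example, assuming the pods carry "migration" and "provisioner" in their names and run in the sisense namespace:
=====
kubectl -n sisense get pods | grep -E 'migration|provisioner'
kubectl -n sisense logs $(kubectl -n sisense get pods --no-headers=true -o custom-columns=":.metadata.name" | grep migration | tail -n 1) | tail -n 50
=====
A migration pod that finished successfully usually shows the Completed status; errors in its last log lines typically indicate where the installation or upgrade stopped.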