How to update style sheets in the branding folder for dashboards
In this article, we address the issue of updates made to style sheets within a branding folder not being reflected on dashboards. Users often encounter this problem due to browser caching, which prevents the most recent CSS files from loading. This guide provides solutions to ensure your dashboard reflects the latest version of your style sheets.
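One common remedy for stale CSS is cache busting: serving the style sheet under a versioned URL so the browser treats it as a new file instead of reusing its cached copy. The sketch below is a generic illustration (the file name, HTML snippet, and version value are examples, not Sisense internals):

```shell
# Hypothetical cache-busting sketch: append a version query string to a
# style sheet reference so browsers fetch the updated file.
link='<link rel="stylesheet" href="/branding/custom.css">'
version=20240101   # in practice, a build timestamp such as $(date +%s)
busted=$(echo "$link" | sed "s|custom\.css|custom.css?v=${version}|")
echo "$busted"
```

Because the URL changes whenever the version changes, the browser cannot serve the old cached file; alternatively, a hard refresh (Ctrl+F5) forces the browser to re-download assets for a single session.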
Troubleshooting Pods in Kubernetes Clusters

In Kubernetes, pods can encounter various issues that prevent them from running correctly. Understanding the different pod states and how to troubleshoot them is essential for maintaining a healthy cluster. This guide covers common pod states and provides steps for diagnosing and resolving issues.

Common Pod States:

- Init: The pod is initializing. All init containers must complete before the main containers start.
- 0/1 Running: The pod is running, but not all containers are in a ready state.
- CrashLoopBackOff: The pod repeatedly fails and is restarted.
- Pending: The pod is waiting to be scheduled on a node.
- ImagePullBackOff: The pod cannot pull the container image.

Troubleshooting Steps

1. Pod in Init State

When a pod is stuck in the Init state, one or more init containers haven't completed successfully.

Check the pod description:

```
kubectl -n sisense describe pod <pod_name>
```

In the description, look for the Init Containers section. All init containers should be in a "Completed" state. If one is still running or has failed, it can block the rest of the pod's containers from starting.

Example:

```
Init Containers:
  init-mongodb:
    State: Running
```

This indicates an issue with the init-mongodb container.

Check the logs of the init container if it is still running or has failed:

```
kubectl -n sisense logs <pod_name> -c init-mongodb
```

After identifying issues in the init container, investigate the related services or dependencies, such as the MongoDB pod itself in this example.

2. Pod in 0/1 or 1/2 Running State

This state indicates that the pod is running, but not all containers are in a ready state.

Describe the pod:

```
kubectl -n sisense describe pod <pod_name>
```

Check the State section for each container. Look for reasons why a container is not in a ready state, such as CrashLoopBackOff, ImagePullBackOff, or other errors.
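As a quick triage sketch, pods whose READY column shows fewer ready containers than total (the "0/1" or "1/2" pattern) can be flagged automatically. The `kubectl get pods` output below is a made-up sample so the snippet runs outside a cluster; in practice you would pipe the real command into the same filter:

```shell
# Simplified sketch: flag pods where not all containers are ready,
# based on the READY column of `kubectl get pods` output.
sample='NAME              READY   STATUS             RESTARTS   AGE
api-gateway-0     1/1     Running            0          2d
management-xyz    0/1     CrashLoopBackOff   12         2d'
not_ready=$(echo "$sample" | awk 'NR>1 { split($2, r, "/"); if (r[1] != r[2]) print $1, $2, $3 }')
echo "$not_ready"
```

Against a live cluster, the same filter would follow `kubectl -n sisense get pods` and leave only the pods worth describing in detail.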
Check the logs of a failed or restarted container for more context about the issue.

Current logs:

```
kubectl -n sisense logs <pod_name> -c <container_name>
```

Replace <container_name> with the name of the specific container.

Previous logs:

```
kubectl -n sisense logs <pod_name> -c <container_name> -p
```

This retrieves logs from the previous instance of the container, which is particularly useful if the container has restarted.

3. Pod in CrashLoopBackOff State

A pod enters the CrashLoopBackOff state when it repeatedly fails and is restarted. To diagnose this issue:

Describe the pod:

```
kubectl -n sisense describe pod <pod_name>
```

This provides detailed information, including the events and container statuses.

Example:

```
State:       Waiting
  Reason:    CrashLoopBackOff
Last State:  Terminated
  Reason:    OOMKilled
```

The OOMKilled reason indicates that the container was killed for exceeding its memory limit; increase the memory limit to fix the issue.

Check events and container states: at the bottom of the describe output you'll find the Events section, which includes messages about why the pod failed. For example, FailedScheduling may indicate resource constraints or node issues.

Review logs: logs can provide valuable insights when a pod is in the CrashLoopBackOff state.

Current logs:

```
kubectl -n sisense logs <pod_name>
```

Previous logs:

```
kubectl -n sisense logs <pod_name> -p
```

This retrieves logs from the previous container instance, which is useful if the container was restarted. Specify the container name if the pod has multiple containers:

```
kubectl -n sisense logs <pod_name> -c <container_name> -p
```

4. Pod in Pending State

If a pod is Pending, it hasn't been scheduled on a node yet.
Check pod scheduling events:

```
kubectl -n sisense describe pod <pod_name>
```

Look for events like:

```
Warning  FailedScheduling  85m  default-scheduler  0/3 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules.
```

This message indicates that the scheduler couldn't find a suitable node for the pod due to resource constraints, node affinity rules, or other scheduling policies.

5. Pod in ImagePullBackOff State

This state occurs when a pod cannot pull the container image from the registry.

Check the pod description for image issues:

```
kubectl -n sisense describe pod <pod_name>
```

Look for messages indicating problems pulling the image, such as incorrect image names, tag issues, authentication problems, or network errors. For multinode deployments, note the server named in the message; the image may not exist on all servers.

Verify the image name and tag: ensure that the image name and tag are correct and that the image is available in the specified registry.

Check image pull secrets: if the image is in a private registry, ensure that the registry is accessible.

Manually pull the image: sometimes images are unavailable or cannot be downloaded within the default timeout. To verify the availability of the image and check for errors, pull it manually on the node specified in the error message:

```
docker pull <image_name>:<tag>
```

Replace <image_name> and <tag> with the appropriate image and tag names. This helps determine whether the issue is with the image itself or the registry configuration.

Conclusion

By understanding these common pod states and following the troubleshooting steps, you can diagnose and resolve many issues in Kubernetes. Regularly monitoring pods and logs is essential for maintaining a stable and reliable Kubernetes environment.

Check out this related content: Documentation
Error in ingress-nginx-controller pod after upgrading Sisense to L2024.1: main.go:64] port 80 is already in use. Please check the flag --http-port

!NOTE! This guide is relevant for non-cloud managed Sisense deployments, i.e. those installed with Sisense Kubernetes (RKE).

After upgrading Sisense to version L2024.1 you might see the following error in the ingress-nginx-controller pod's logs (in the default namespace):

```
main.go:64] port 80 is already in use. Please check the flag --http-port
```

This issue can result from an incompatibility between the ingress-nginx release that was updated in Sisense L2024.1 and the Kubernetes version, if the cluster is running a version lower than v1.26. If Sisense L2024.1 was freshly installed on a non-cloud instance (i.e. with Sisense RKE) as a new deployment (and was not upgraded from a previous version), this issue should not occur, since the L2024.1 package already ships with Kubernetes v1.26 by default. However, if Sisense was upgraded to L2024.1, or it was deployed on a Kubernetes version lower than v1.26, the ingress-nginx release implemented in L2024.1 (4.10.0) will be incompatible with the cluster.

To resolve the issue, re-run the Sisense upgrade with the following parameters in the configuration YAML file:

```
update: true
update_k8s_version: true
```

If you are still facing the same issue after that, please open a ticket with Sisense Support. This article provides a brief explanation of possible Nginx<->Kubernetes compatibility issues in Sisense L2024.1, along with the steps to resolve the issue described.

Disclaimer: This post outlines a potential custom workaround for a specific use case or provides instructions regarding a specific task. The solution may not work in all scenarios or Sisense versions, so we strongly recommend testing it in your environment before deployment.
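Before re-running the upgrade, you can check whether the cluster actually falls below the v1.26 floor. This hypothetical sketch hard-codes a sample version string for illustration; in practice you would take it from the server version reported by `kubectl version`:

```shell
# Hypothetical pre-upgrade check: the ingress-nginx chart shipped with
# L2024.1 (4.10.0) expects Kubernetes v1.26+; compare the cluster's
# server version string against that floor.
server_version="v1.25.9"   # sample value; query the real cluster in practice
minor=$(echo "$server_version" | cut -d. -f2)
if [ "$minor" -lt 26 ]; then
  decision="re-run upgrade with update: true and update_k8s_version: true"
else
  decision="kubernetes version is compatible"
fi
echo "$decision"
```

If the minor version is 26 or higher and the error still appears, the cause is likely something other than this incompatibility.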
If you need further assistance with this, please let us know.

Creating and deleting an EKS cluster in an existing VPC
This guide demonstrates how to create an Amazon EKS (Elastic Kubernetes Service) cluster within existing VPCs and subnets, and how to remove it using the same configuration file with a single command. This is not a recommendation but an example of a working solution; adjust it according to your needs.

Error on installation: "cp: cannot create regular file '/usr/local/bin/kubectl': Text file busy"
The error "cp: cannot create regular file '/usr/local/bin/kubectl': Text file busy" typically occurs when multiple upgrade processes run simultaneously, causing a conflict when the installer tries to update the kubectl binary.

```
[2024-10-30 12:33:34] Getting binaries kubectl (v1.30.3) and helm (v3.12.3)
[2024-10-30 12:33:34] Downloading them from the internet
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   138  100   138    0     0   1367      0 --:--:-- --:--:-- --:--:--  1380
100 49.0M  100 49.0M    0     0  3478k      0  0:00:14  0:00:14 --:--:-- 3525k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 15.2M  100 15.2M    0     0  3730k      0  0:00:04  0:00:04 --:--:-- 3731k
linux-amd64/helm
cp: cannot create regular file '/usr/local/bin/kubectl': Text file busy
[2024-10-30 12:33:53] ** Error occurred during command sudo cp installer/05_post_infra/files/kubectl installer/05_post_infra/files/helm /usr/local/bin section **
[2024-10-30 12:33:53] ** Exiting Installation ... **
```

For example, in one case the error occurred because multiple upgrades were running on the same environment simultaneously (the same bastion serving several cloud environments), which left the kubectl binary in use by another upgrade process. Watching an upgrade with a `kubectl ... -w` command can also cause this issue.

The recommended solution is to check whether any process is using kubectl or Helm before proceeding with the upgrade. To prevent this error:

- Ensure that no other upgrade or deployment processes are running in parallel.
- Use commands like `lsof -t $(which kubectl)` and `lsof -t $(which helm)` to check whether these binaries are in use.
- If either command returns a PID, print the process details with `ps -ef | grep <pid number>` and fail the pipeline if necessary.

By following these steps, you can avoid the "Text file busy" error and ensure a smooth upgrade process.
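The checks above can be combined into a small pre-upgrade script. This is a sketch rather than part of the Sisense installer; it skips binaries that are not installed on the machine and reports any PIDs holding them open:

```shell
# Sketch of a pre-upgrade check: report whether kubectl or helm is
# currently held open by another process before replacing the binaries.
status="binaries free; safe to upgrade"
for bin in kubectl helm; do
  path=$(command -v "$bin" 2>/dev/null) || continue   # skip if not installed
  pids=$(lsof -t "$path" 2>/dev/null)
  if [ -n "$pids" ]; then
    status="ERROR: $bin is in use"
    for pid in $pids; do
      ps -p "$pid" -o pid=,cmd=    # show the offending process
    done
  fi
done
echo "$status"
```

A CI pipeline could run this step first and abort (exit nonzero) when the status is not "binaries free; safe to upgrade".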
Related Content:
https://academy.sisense.com/sisense-administration
https://docs.sisense.com/main/SisenseLinux/upgrading-sisense.htm

Installation fails with /bin/activate: No such file or directory
If the installation halts early with errors related to missing Python packages, you can quickly resolve this by ensuring all required dependencies are in place. For a smooth setup, follow the steps outlined in the Minimum System Requirements, which include commands to update your system and install the necessary Python packages. Don't let a small error disrupt your installation; these simple fixes will get you back on track.
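As a quick pre-flight sketch, you can verify that Python's venv module (which provides the bin/activate script when a virtual environment is created) is importable before rerunning the installer. The suggested package name is an example for Debian-based systems; consult the Minimum System Requirements for the exact packages your platform needs:

```shell
# Hypothetical pre-flight check: confirm python3 and its venv module are
# available, since a missing venv module leaves bin/activate uncreated.
if python3 -c 'import venv' >/dev/null 2>&1; then
  result="python3 venv module available"
else
  result="missing: install python3 and python3-venv (example package name) first"
fi
echo "$result"
```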
How to Check SSL Ciphers

If you have enabled SSL on the Sisense side, the Nginx controller will be deployed in the default namespace. To check the currently configured ciphers, run the following command and check the "nginx.ingress.kubernetes.io/ssl-ciphers:" row:

```
kubectl -n sisense describe ingress
```

```
Name:             sisense-ingress
Labels:           app=api-gateway
                  app.kubernetes.io/managed-by=Helm
                  chart=api-gateway-2024.2.077
                  release=sisense
                  sisense-version=2024.2.077
Namespace:        sisense
Address:
Ingress Class:    <none>
Default backend:  <default>
TLS:
  sisense-tls terminates
Rules:
  Host                         Path  Backends
  ----                         ----  --------
  paragoninsgroup.sisense.com  /     api-gateway-external:8456 (10.42.140.227:8456)
Annotations:  kubernetes.io/ingress.class: nginx
              kubernetes.io/tls-acme: true
              meta.helm.sh/release-name: sisense
              meta.helm.sh/release-namespace: sisense
              nginx.ingress.kubernetes.io/configuration-snippet: more_clear_headers Server;
              nginx.ingress.kubernetes.io/proxy-body-size: 0m
              nginx.ingress.kubernetes.io/proxy-read-timeout: 300
              nginx.ingress.kubernetes.io/ssl-ciphers: ECDH+AESGCM:ECDH+CHACHA20:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS:!AESCCM
              nginx.ingress.kubernetes.io/ssl-prefer-server-ciphers: true
```

To expand the full list of the currently used ciphers, pass the string from that row to the following command:

```
openssl ciphers -v 'ECDH+AESGCM:ECDH+CHACHA20:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS:!AESCCM' | column -t
```

Output Example:

```
ECDHE-RSA-AES256-GCM-SHA384    TLSv1.2  Kx=ECDH        Au=RSA    Enc=AESGCM(256)  Mac=AEAD
ECDHE-ECDSA-AES256-GCM-SHA384  TLSv1.2  Kx=ECDH        Au=ECDSA  Enc=AESGCM(256)  Mac=AEAD
ECDH-RSA-AES256-GCM-SHA384     TLSv1.2  Kx=ECDH/RSA    Au=ECDH   Enc=AESGCM(256)  Mac=AEAD
ECDH-ECDSA-AES256-GCM-SHA384   TLSv1.2  Kx=ECDH/ECDSA  Au=ECDH   Enc=AESGCM(256)  Mac=AEAD
ECDHE-RSA-AES128-GCM-SHA256    TLSv1.2  Kx=ECDH        Au=RSA    Enc=AESGCM(128)  Mac=AEAD
ECDHE-ECDSA-AES128-GCM-SHA256  TLSv1.2  Kx=ECDH        Au=ECDSA  Enc=AESGCM(128)  Mac=AEAD
ECDH-RSA-AES128-GCM-SHA256     TLSv1.2  Kx=ECDH/RSA    Au=ECDH   Enc=AESGCM(128)  Mac=AEAD
ECDH-ECDSA-AES128-GCM-SHA256   TLSv1.2  Kx=ECDH/ECDSA  Au=ECDH   Enc=AESGCM(128)  Mac=AEAD
DH-DSS-AES256-GCM-SHA384       TLSv1.2  Kx=DH/DSS      Au=DH     Enc=AESGCM(256)  Mac=AEAD
DH-RSA-AES256-GCM-SHA384       TLSv1.2  Kx=DH/RSA      Au=DH     Enc=AESGCM(256)  Mac=AEAD
DHE-RSA-AES256-GCM-SHA384      TLSv1.2  Kx=DH          Au=RSA    Enc=AESGCM(256)  Mac=AEAD
DH-DSS-AES128-GCM-SHA256       TLSv1.2  Kx=DH/DSS      Au=DH     Enc=AESGCM(128)  Mac=AEAD
DH-RSA-AES128-GCM-SHA256       TLSv1.2  Kx=DH/RSA      Au=DH     Enc=AESGCM(128)  Mac=AEAD
DHE-RSA-AES128-GCM-SHA256      TLSv1.2  Kx=DH          Au=RSA    Enc=AESGCM(128)  Mac=AEAD
ECDHE-RSA-AES256-SHA384        TLSv1.2  Kx=ECDH        Au=RSA    Enc=AES(256)     Mac=SHA384
ECDHE-ECDSA-AES256-SHA384      TLSv1.2  Kx=ECDH        Au=ECDSA  Enc=AES(256)     Mac=SHA384
ECDHE-RSA-AES256-SHA           SSLv3    Kx=ECDH        Au=RSA    Enc=AES(256)     Mac=SHA1
ECDHE-ECDSA-AES256-SHA         SSLv3    Kx=ECDH        Au=ECDSA  Enc=AES(256)     Mac=SHA1
ECDH-RSA-AES256-SHA384         TLSv1.2  Kx=ECDH/RSA    Au=ECDH   Enc=AES(256)     Mac=SHA384
ECDH-ECDSA-AES256-SHA384       TLSv1.2  Kx=ECDH/ECDSA  Au=ECDH   Enc=AES(256)     Mac=SHA384
ECDH-RSA-AES256-SHA            SSLv3    Kx=ECDH/RSA    Au=ECDH   Enc=AES(256)     Mac=SHA1
ECDH-ECDSA-AES256-SHA          SSLv3    Kx=ECDH/ECDSA  Au=ECDH   Enc=AES(256)     Mac=SHA1
DHE-RSA-AES256-SHA256          TLSv1.2  Kx=DH          Au=RSA    Enc=AES(256)     Mac=SHA256
DH-RSA-AES256-SHA256           TLSv1.2  Kx=DH/RSA      Au=DH     Enc=AES(256)     Mac=SHA256
DH-DSS-AES256-SHA256           TLSv1.2  Kx=DH/DSS      Au=DH     Enc=AES(256)     Mac=SHA256
DHE-RSA-AES256-SHA             SSLv3    Kx=DH          Au=RSA    Enc=AES(256)     Mac=SHA1
DH-RSA-AES256-SHA              SSLv3    Kx=DH/RSA      Au=DH     Enc=AES(256)     Mac=SHA1
DH-DSS-AES256-SHA              SSLv3    Kx=DH/DSS      Au=DH     Enc=AES(256)     Mac=SHA1
ECDHE-RSA-AES128-SHA256        TLSv1.2  Kx=ECDH        Au=RSA    Enc=AES(128)     Mac=SHA256
ECDHE-ECDSA-AES128-SHA256      TLSv1.2  Kx=ECDH        Au=ECDSA  Enc=AES(128)     Mac=SHA256
ECDHE-RSA-AES128-SHA           SSLv3    Kx=ECDH        Au=RSA    Enc=AES(128)     Mac=SHA1
ECDHE-ECDSA-AES128-SHA         SSLv3    Kx=ECDH        Au=ECDSA  Enc=AES(128)     Mac=SHA1
ECDH-RSA-AES128-SHA256         TLSv1.2  Kx=ECDH/RSA    Au=ECDH   Enc=AES(128)     Mac=SHA256
ECDH-ECDSA-AES128-SHA256       TLSv1.2  Kx=ECDH/ECDSA  Au=ECDH   Enc=AES(128)     Mac=SHA256
ECDH-RSA-AES128-SHA            SSLv3    Kx=ECDH/RSA    Au=ECDH   Enc=AES(128)     Mac=SHA1
ECDH-ECDSA-AES128-SHA          SSLv3    Kx=ECDH/ECDSA  Au=ECDH   Enc=AES(128)     Mac=SHA1
DHE-RSA-AES128-SHA256          TLSv1.2  Kx=DH          Au=RSA    Enc=AES(128)     Mac=SHA256
DH-RSA-AES128-SHA256           TLSv1.2  Kx=DH/RSA      Au=DH     Enc=AES(128)     Mac=SHA256
DH-DSS-AES128-SHA256           TLSv1.2  Kx=DH/DSS      Au=DH     Enc=AES(128)     Mac=SHA256
DHE-RSA-AES128-SHA             SSLv3    Kx=DH          Au=RSA    Enc=AES(128)     Mac=SHA1
DH-RSA-AES128-SHA              SSLv3    Kx=DH/RSA      Au=DH     Enc=AES(128)     Mac=SHA1
DH-DSS-AES128-SHA              SSLv3    Kx=DH/DSS      Au=DH     Enc=AES(128)     Mac=SHA1
AES256-GCM-SHA384              TLSv1.2  Kx=RSA         Au=RSA    Enc=AESGCM(256)  Mac=AEAD
AES128-GCM-SHA256              TLSv1.2  Kx=RSA         Au=RSA    Enc=AESGCM(128)  Mac=AEAD
AES256-SHA256                  TLSv1.2  Kx=RSA         Au=RSA    Enc=AES(256)     Mac=SHA256
AES256-SHA                     SSLv3    Kx=RSA         Au=RSA    Enc=AES(256)     Mac=SHA1
AES128-SHA256                  TLSv1.2  Kx=RSA         Au=RSA    Enc=AES(128)     Mac=SHA256
AES128-SHA                     SSLv3    Kx=RSA         Au=RSA    Enc=AES(128)     Mac=SHA1
```

Check out this related content: Academy course, Sisense Documentation

Configuring/Adjusting Readiness Probes for Containers
Readiness probes are critical in container orchestration: they ensure that containers are ready to handle traffic before they are included in service load balancing. If a container fails readiness probes because the thresholds are too tight for its startup time, adjusting these parameters can help. This guide explains how to modify readiness probe settings to accommodate containers with longer startup times.
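For illustration, a readiness probe with relaxed thresholds might look like this in a container spec. The endpoint, port, and values below are examples rather than Sisense defaults; tune them to the container's actual startup behavior:

```yaml
# Illustrative readiness probe fragment for a slow-starting container
readinessProbe:
  httpGet:
    path: /healthz          # example health endpoint
    port: 8080              # example container port
  initialDelaySeconds: 60   # wait before the first probe
  periodSeconds: 15         # time between probes
  failureThreshold: 10      # consecutive failures tolerated before NotReady
```

With these settings the container has roughly initialDelaySeconds + periodSeconds * failureThreshold (here 60 + 15 * 10 = 210 seconds) to become ready before it is marked NotReady and excluded from load balancing.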