Linux installation error - Timeout when waiting for 127.0.0.1:10250 to stop

If you see the following errors in the installation log:

failed: [node1] (item=10250) => {"changed": false, "elapsed": 2, "item": "10250", "msg": "Timeout when waiting for 127.0.0.1:10250 to stop."}
failed: [node1] (item=10248) => {"changed": false, "elapsed": 2, "item": "10248", "msg": "Timeout when waiting for 127.0.0.1:10248 to stop."}
failed: [node1] (item=10249) => {"changed": false, "elapsed": 2, "item": "10249", "msg": "Timeout when waiting for 127.0.0.1:10249 to stop."}
failed: [node1] (item=10256) => {"changed": false, "elapsed": 2, "item": "10256", "msg": "Timeout when waiting for 127.0.0.1:10256 to stop."}

it means that the ports required by the Kubernetes cluster are closed: TCP 10248 - 10259 (Kubernetes).

Note: these ports need to be open even if the deployment is not multi-node.
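
To confirm locally whether anything is listening on these ports and to open the range, the commands below are a minimal sketch; they assume firewalld (Red Hat / CentOS / Amazon Linux) or ufw (Ubuntu) manages the firewall, so adjust them to whatever your environment actually uses.

# Check what is currently listening on the Kubernetes ports (10248 - 10259)
sudo ss -tlnp | grep -E ':102(4[89]|5[0-9])\b'

# firewalld: open the range and reload
sudo firewall-cmd --permanent --add-port=10248-10259/tcp
sudo firewall-cmd --reload

# ufw: open the same range
sudo ufw allow 10248:10259/tcp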

Solutions to commonly found issues when setting up a new Sisense ComposeSDK project during beta

The first two solutions involve issues with installing ComposeSDK dependencies during the beta period, while the ComposeSDK dependencies are hosted on GitHub and GitHub is the preferred method of access. When ComposeSDK exits the beta period, the dependencies will be available from other sources that do not require a custom authentication token.

Problem - Errors when installing ComposeSDK dependencies from GitHub
Solution - Make sure a GitHub token is active in your environment configuration, and that your GitHub account is fully active and accessible. If a custom GitHub token is used, ensure it has the "Read Repository" permission. Alternatively, you can use a standard full-access GitHub token. Make certain the entire token is included when it is copied into the terminal.

Problem - SSL errors when downloading ComposeSDK dependencies from GitHub
Solution - You may see this error when downloading over a VPN. You can resolve it with the following npm config change:
npm config set strict-ssl false
In Yarn, the equivalent command is:
yarn config set "strict-ssl" false -g

Problem - CORS errors in the browser console when connecting to a Sisense server with ComposeSDK
Solution - Add the hosting domain on the Admin > Security Settings page, and make sure CORS is enabled. A common issue is a trailing slash at the end of the URL when it is copied directly from the browser; trailing slashes must be removed when setting CORS exemptions. Include the first part of the domain (the subdomain, such as subdomain.domain.com) as well as the port number if there is one. Anything in the URL after the first slash is not part of the domain and is not required.

Problem - Sisense authentication errors when connecting to a Sisense server with ComposeSDK
Solution - Do not include "Bearer" at the beginning of the token parameter; it is not required in ComposeSDK and is added automatically, so an explicit "Bearer" would be repeated twice in the header. Make sure the entire token is copied, and if you are unsure whether the token is valid, test it with a tool such as Postman or curl against any documented Sisense API.
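
For example, a quick way to validate a token outside of ComposeSDK is a direct REST call with curl. This is only a sketch: the host placeholder is yours to fill in, and /api/v1/dashboards is used here as one example of a documented endpoint. Note that when calling the REST API directly the "Bearer" prefix is included; it is only the ComposeSDK token parameter that must not contain it.

# A valid token should return HTTP 200; 401 or 403 points to a token problem
curl -s -o /dev/null -w "%{http_code}\n" \
  -H "Authorization: Bearer <your_api_token>" \
  "https://<your-sisense-host>/api/v1/dashboards"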

Installation fails with /bin/activate: No such file or directory

If the installation halts early with errors about missing Python packages, you can resolve this by making sure all required dependencies are in place. Follow the steps outlined in the Minimum System Requirements, which include the commands to update your system and install the necessary Python packages.
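
The authoritative package list is in the Minimum System Requirements documentation; purely as an illustration, a typical preparation on common distributions looks roughly like this:

# Ubuntu / Debian (example only - follow the Minimum System Requirements for your version)
sudo apt update
sudo apt install -y python3 python3-pip python3-venv

# Red Hat / CentOS / Amazon Linux
sudo yum install -y python3 python3-pip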

Resolving installation error for Sisense - sisense-installer/bin/activate: no such file or directory

This article provides a basic understanding of a common error encountered during the installation of Sisense on Linux systems:

Verifying Python packages exist ...
./sisense.sh: line 240: sisense-installer/bin/activate: No such file or directory
Error occurred during generic_pre_install section
Exiting Installation ...
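
The missing sisense-installer/bin/activate file appears to be created when the installer builds its Python virtual environment, so a quick sanity check is to confirm that the venv module works at all on the machine. This is only a hypothetical check, assuming python3 is the interpreter the installer uses:

# If this fails, install the venv/pip packages for your distribution and rerun sisense.sh
python3 -m venv /tmp/venv-test && echo "venv OK"
rm -rf /tmp/venv-test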

How to check network performance in Linux

You might need to check the network performance between the nodes of a multi-node cluster, or between a Linux server and a remote database on Linux.

1. Install the following tool on each node:
For Ubuntu: sudo apt install iperf3
For Red Hat / CentOS / Amazon Linux: sudo yum install iperf

2. Start the server on the first node:
iperf3 -s (or 'iperf -s')

3. Note its port from the output and then launch the client on another node / another server:
iperf3 -c IP_of_the_server_above -p port_of_the_server

Check the following combinations: node1 - node2, node1 - node3, node2 - node3 (or Sisense server - database server).

An example output:
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 172.16.0.98, port 49216
[  5] local 172.16.0.38 port 5201 connected to 172.16.0.98 port 49218
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec   105 MBytes   884 Mbits/sec
[  5]   1.00-2.00   sec   110 MBytes   922 Mbits/sec
[  5]   2.00-3.00   sec   110 MBytes   924 Mbits/sec
[  5]   3.00-4.00   sec   110 MBytes   923 Mbits/sec
[  5]   4.00-5.00   sec   110 MBytes   926 Mbits/sec
[  5]   5.00-6.00   sec   110 MBytes   919 Mbits/sec
[  5]   6.00-7.00   sec   110 MBytes   921 Mbits/sec
[  5]   7.00-8.00   sec   110 MBytes   922 Mbits/sec
[  5]   8.00-9.00   sec   110 MBytes   926 Mbits/sec
[  5]   9.00-10.00  sec   109 MBytes   913 Mbits/sec
[  5]  10.00-10.05  sec  4.99 MBytes   927 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-10.05  sec  0.00 Bytes    0.00 bits/sec   sender
[  5]   0.00-10.05  sec  1.07 GBytes   918 Mbits/sec   receiver
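
By default the client sends and the server receives. To also measure the opposite direction, iperf3 supports reverse mode, for example:

# Reverse mode: the server sends and the client receives (5201 is the default port)
iperf3 -c IP_of_the_server_above -p 5201 -R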

How to Troubleshoot UI Issues Using DevTools (HAR File & Console Logs)

If a webpage, dashboard, or widget isn't loading properly, encounters errors during exporting, importing, editing, or opening, or if buttons are unresponsive and error messages appear, you can use Developer Tools (DevTools) in your browser to diagnose the issue.

This guide will show you how to:
- Check the Network tab for failed requests.
- Use the Preview and Response tabs to see details of errors.
- Check the Console tab for other issues.
- Save a HAR file to share with support.
The error "cp: cannot create regular file '/usr/local/bin/kubectl': Text file busy" typically occurs when multiple upgrade processes are running simultaneously, causing a conflict when trying to update the kubectl binary. [2024-10-30 12:33:34] Getting binaries kubectl (v1.30.3) and helm (v3.12.3) [2024-10-30 12:33:34] Downloading them from the internet % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 138 100 138 0 0 1367 0 --:--:-- --:--:-- --:--:-- 1380 100 49.0M 100 49.0M 0 0 3478k 0 0:00:14 0:00:14 --:--:-- 3525k % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 15.2M 100 15.2M 0 0 3730k 0 0:00:04 0:00:04 --:--:-- 3731k linux-amd64/helm cp: cannot create regular file '/usr/local/bin/kubectl': Text file busy [2024-10-30 12:33:53] ** Error occurred during command sudo cp installer/05_post_infra/files/kubectl installer/05_post_infra/files/helm /usr/local/bin section ** [2024-10-30 12:33:53] ** Exiting Installation ... ** For example, in one case, the error occurred because multiple upgrades were happening on the same environment simultaneously (the same bastion for several cloud environments), which led to the Kubectl binary being in use by another upgrade process. The recommended solution is to check if any process is using Kubectl or Helm before proceeding with the upgrade. Watching an upgrade with "kubectl ... -w" command can cause this issue also. To prevent this error, it is advisable to: Ensure that no other upgrade or deployment processes are running in parallel. Use commands like lsof -t $(which kubectl) and lsof -t $(which helm) to check if these binaries are in use. If any command returns a PID, print out the process details using ps -ef | grep <pid number> and fail the pipeline if necessary. By following these steps, you can avoid the "Text file busy" error and ensure a smooth upgrade process. Related Content: https://academy.sisense.com/sisense-administration https://docs.sisense.com/main/SisenseLinux/upgrading-sisense.htm1.4KViews1like0CommentsTroubleshooting Pods in Kubernetes Clusters

Troubleshooting Pods in Kubernetes Clusters

In Kubernetes, pods can encounter various issues that prevent them from running correctly. Understanding the different pod states and how to troubleshoot them is essential for maintaining a healthy cluster. This guide covers common pod states and provides steps for diagnosing and resolving issues.

Common Pod States:
Init: The pod is initializing. All init containers must complete before the main containers start.
0/1 Running: The pod is running, but not all containers are in a ready state.
CrashLoopBackOff: The pod repeatedly fails and is restarted.
Pending: The pod is waiting to be scheduled on a node.
ImagePullBackOff: The pod cannot pull the container image.

Troubleshooting Steps

1. Pod in Init State:
When a pod is stuck in the Init state, it indicates that one or more init containers haven't completed successfully.

Check Pod Description:
kubectl -n sisense describe pod <pod_name>
In the description, look for the init containers section. All init containers should ideally be in a "Completed" state. If one is still running or has failed, it can block the rest of the pod's containers from starting.
Example:
Init Containers:
  init-mongodb:
    State: Running
This indicates an issue with the init-mongodb container.

Check Logs of Init Containers:
If the init container is still running or has failed, check its logs:
kubectl -n sisense logs <pod_name> -c init-mongodb
After identifying issues in the init container, investigate the related services or dependencies, such as the MongoDB pod itself in this example.

2. Pod in 0/1, 1/2 Running State:
This state indicates that the pod is running, but not all containers are in a ready state.

Describe the Pod:
kubectl -n sisense describe pod <pod_name>
Check the State section for each container. Look for reasons why a container is not in a ready state, such as CrashLoopBackOff, ImagePullBackOff, or other errors.

Check Logs for a Previously Failed Container:
If a container is in an error state or has restarted, checking the logs can provide more context about the issue.
Current logs:
kubectl -n sisense logs <pod_name> -c <container_name>
Replace <container_name> with the name of the specific container.
Previous logs:
kubectl -n sisense logs <pod_name> -c <container_name> -p
This command retrieves logs from the previous instance of the container, which can be particularly useful if the container has restarted.

3. Pod in CrashLoopBackOff State:
A pod enters the CrashLoopBackOff state when it repeatedly fails and is restarted. To diagnose this issue:

Describe the Pod:
kubectl -n sisense describe pod <pod_name>
This command provides detailed information, including the events and container statuses.
Example:
State: Waiting
  Reason: CrashLoopBackOff
Last State: Terminated
  Reason: OOMKilled
The OOMKilled reason indicates that the container was killed because it exceeded its memory limit. Increase the memory limit to fix the issue.

Check Events and Container States:
At the bottom of the describe output, you'll find the events section, which includes messages about why the pod failed. For example, FailedScheduling may indicate resource constraints or node issues.

Review Logs:
Logs can provide valuable insights when a pod is in the CrashLoopBackOff state.
Check current logs:
kubectl -n sisense logs <pod_name>
This command retrieves the logs from the current running container.

Check previous logs:
kubectl -n sisense logs <pod_name> -p
This command retrieves logs from the previous container instance, which is useful if the container was restarted. Specify the container name if the pod has multiple containers:
kubectl -n sisense logs <pod_name> -c <container_name> -p

4. Pod in Pending State:
If a pod is Pending, it means it hasn't been scheduled on a node yet.

Check Pod Scheduling Events:
kubectl -n sisense describe pod <pod_name>
Look for events like:
Warning  FailedScheduling  85m  default-scheduler  0/3 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules.
This message indicates that the scheduler couldn't find a suitable node for the pod due to resource constraints, node affinity rules, or other scheduling policies.

5. Pod in ImagePullBackOff State:
This state occurs when a pod cannot pull the container image from the registry.

Check Pod Description for Image Issues:
kubectl -n sisense describe pod <pod_name>
Look for messages indicating issues with pulling the image, such as incorrect image names, tag issues, authentication problems, or network errors. For multi-node deployments, note the server named in the message; the image may not exist on all servers.

Verify Image Name and Tag:
Ensure that the image name and tag are correct and that the image is available in the specified registry.

Check Image Pull Secrets:
If the image is in a private registry, ensure that the registry is accessible.

Manually Pull the Image:
Sometimes images are not available or cannot be downloaded within the default timeout. To verify the availability of the image and check for any errors, try pulling the image manually on the node specified in the error message:
docker pull <image_name>:<tag>
Replace <image_name> and <tag> with the appropriate image and tag names. This can help determine whether the issue is with the image itself or with the registry configuration.

Conclusion
By understanding these common pod states and following the troubleshooting steps, you can diagnose and resolve many issues in Kubernetes. Regularly monitoring pods and logs is essential for maintaining a stable and reliable Kubernetes environment. A quick namespace-wide triage sequence is sketched below as a starting point.

Check out this related content: Documentation
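
As mentioned above, a quick namespace-wide triage often shows where to start before drilling into a single pod. This is only a sketch; sisense is the default Sisense namespace, and the top command requires metrics-server to be installed:

# Overall pod states and the nodes they are scheduled on
kubectl -n sisense get pods -o wide

# The most recent events in the namespace, oldest first
kubectl -n sisense get events --sort-by=.metadata.creationTimestamp | tail -n 20

# Current CPU and memory usage per pod (requires metrics-server)
kubectl -n sisense top pods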

Performance Improvements On The Single-Node Linux Servers Under Heavy Usage

Symptoms
Relevant for Linux single-node Sisense.
In a default Sisense single-node deployment, there is one pod per service deployment. Under heavy usage, which is usually caused by parallel requests, the number of service instances processing the requests can result in slowness or a delay when accessing dashboard objects, opening Admin pages, etc.

Diagnosis
A good test is to compare how the system behaves under heavy load and during non-working hours. If you see a difference in the time it takes to load the dashboard layout or to access pages in Admin, this could be caused by the amount of load placed on the workers of each pod. This should not be an issue in multi-node environments, as services are already scaled up, but in a single-node environment, if there is enough RAM/CPU to handle the load, Sisense can scale services to boost the performance of parallel requests.

Important services include:
api-gateway - web server service. Takes a request first and communicates with the other services.
galaxy - service that serves dashboard objects, widgets, data security, shares, the navigation panel, and alerts.
identity - service that provides details of users, groups, and authentication.
configuration - service that provides system settings and configuration.

Solution
If you have enough RAM/CPU resources on the server, you can scale services by running the commands below. Please note that each additional pod replica can consume up to 8.5 GB of RAM, so keep this in mind when scaling. The commands below double the number of service replicas. Remember to change the namespace and to set --replicas to the desired amount.

kubectl -n <namespace> scale deployment identity --replicas=2
kubectl -n <namespace> scale deployment galaxy --replicas=2
kubectl -n <namespace> scale deployment api-gateway --replicas=2
kubectl -n <namespace> scale deployment configuration --replicas=2

The above steps can help if you notice frequent api-gateway restarts caused by out-of-memory (OOM) errors. Sisense's RnD team is working on a solution to the problem, but in the interim, scaling the api-gateway can help prevent service disruptions.
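
Before and after scaling, it can help to confirm that the node has headroom and that the new replicas came up. This is a brief sketch; the top command requires metrics-server to be installed:

# Free capacity on the node(s)
kubectl top nodes

# Current replica counts and readiness of the scaled services
kubectl -n <namespace> get deployment api-gateway galaxy identity configuration

# To scale back down later, rerun the scale commands with --replicas=1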