[Linux] Resolving data model connection errors in Sisense after importing models
This article addresses an issue where data models in Sisense fail to open, producing a "Cannot read properties of undefined (reading 'datasets')" error message. The problem appears when importing models from one environment to another and is accompanied by difficulties cleaning up datasets using APIs.
How to update style sheets in the branding folder for dashboards

In this article, we address the issue of style sheet updates made within a branding folder not being reflected on dashboards. Users often encounter this problem because of browser caching, which prevents the most up-to-date CSS files from loading. This guide provides solutions to ensure your dashboard reflects the latest version of your style sheets.
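As a quick way to tell a caching problem apart from a deployment problem, the hedged sketch below fetches the stylesheet directly from the server so you can compare it with what the browser renders. The URL path to the branding stylesheet is hypothetical; substitute the actual path and file name used by your deployment.

```bash
# Request the stylesheet from the server while asking caches not to serve a stale copy.
# NOTE: the path below is a hypothetical example; use the real URL of your branding CSS file.
curl -s -H "Cache-Control: no-cache" "https://<your-sisense-host>/branding/<your-branding-folder>/style.css" | head -20

# If the server returns the updated CSS but the dashboard still shows old styling,
# the browser cache is the likely culprit: do a hard refresh, or append a version query
# string (e.g. style.css?v=2) where the stylesheet is referenced so browsers fetch it anew.
```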
Performance Improvements On The Single-Node Linux Servers Under Heavy Usage

Symptoms

Relevant for Linux single-node Sisense. In a default Sisense single-node deployment, there is one pod per service deployment. Under heavy usage, which is usually caused by parallel requests, a single replica per service can become a bottleneck, resulting in slowness or delays when accessing dashboard objects, opening admin pages, and so on.

Diagnosis

A good test is to compare how the system behaves under heavy load versus during non-working hours. If you see a difference in the time it takes to load the dashboard layout or to access pages in the Admin section, the cause may be the amount of load placed on the workers of each pod. This should not be an issue in multi-node environments, where services are already scaled up; in a single-node environment, if there is enough RAM/CPU to handle the load, Sisense can scale services to boost the performance of parallel requests.

Important services include:

- api-gateway - the web server service. It receives each request first and communicates with the other services.
- galaxy - the service that serves dashboard objects, widgets, data security, shares, the navigation panel, and alerts.
- identity - the service that provides details of users, groups, and authentication.
- configuration - the service that provides system settings and configuration.

Solution

If you have enough RAM/CPU resources on the server, you can scale services by running the commands below. Please note that each additional pod replica will consume up to 8.5 GB of RAM, so keep this in mind when scaling. The commands below double the number of replicas for these services. Remember to change the namespace and to set --replicas to the desired count.

```
kubectl -n <namespace> scale deployment identity --replicas=2
kubectl -n <namespace> scale deployment galaxy --replicas=2
kubectl -n <namespace> scale deployment api-gateway --replicas=2
kubectl -n <namespace> scale deployment configuration --replicas=2
```

The above steps can also help if you notice frequent api-gateway restarts caused by out-of-memory (OOM) kills. Sisense's R&D team is working on a solution to the problem, but in the interim, scaling the api-gateway can help prevent service disruptions.
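As a hedged complement to the scaling commands above, the sketch below shows how to confirm the node has spare capacity before scaling and then verify the new replicas are healthy. It assumes a metrics-server is installed in the cluster so that kubectl top works; replace <namespace> with your Sisense namespace.

```bash
# Before scaling: confirm the node has spare RAM/CPU
# (each extra replica can consume up to ~8.5 GB of RAM)
kubectl top nodes   # requires metrics-server
free -h             # host-level memory overview on a single-node deployment

# After scaling: verify the deployments report the expected replica counts and watch for restarts
kubectl -n <namespace> get deployment api-gateway galaxy identity configuration
kubectl -n <namespace> get pods | grep -E 'api-gateway|galaxy|identity|configuration'
```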
Resolving "Internal Server Error" When Importing Model with Generic JDBC Connector This article addresses the issue of encountering an "Internal Server Error" when importing a model from a local desktop to a production environment using the Generic JDBC connector in Sisense. This error often arises due to compatibility issues with older JDBC frameworks. Here, we provide a solution to resolve this error by updating the JDBC connector and connections. Step-by-Step Guide: Identify the Error: a. When importing the model, if you receive an error message resembling: Unexpected error value: { timestamp: {}, status: 500, error: "Internal Server Error", path: "/api/v1/internal/live_connectors/auth/GenericJDBC… } b. This error could indicate a problem with the outdated Generic JDBC connector. Understand the Cause: The error is due to the Generic JDBC connector using an older framework that is incompatible with the latest Sisense versions. Update the JDBC Connector: Deploy the JDBC connector using the new JDBC framework. Ensure all connections within the cube in the original environment where the model was exported are updated accordingly. Deploying the New Framework: Follow the step-by-step instructions detailed in the Sisense documentation for deploying a custom connector with the new JDBC framework. Access the documentation here: Deploying a Custom Connector - Sisense Documentation. Re-attempt the Model Import: Once the JDBC connector is updated, try importing the model again into the production environment. Conclusion: In conclusion, updating the Generic JDBC connector to align with the new JDBC framework and ensuring the connections are properly configured should resolve the "Internal Server Error" encountered during the import process. Following the above steps will help ensure a smooth and successful model import. References/Related Content: Deploying a Custom Connector - Sisense Documentation Sisense Community Support Forum Introduction to Data Sources Disclaimer: This post outlines a potential custom workaround for a specific use case or provides instructions regarding a specific task. The solution may not work in all scenarios or Sisense versions, so we strongly recommend testing it in your environment before deployment. If you need further assistance with this please let us know.198Views1like0CommentsTroubleshooting Pods in Kubernetes Clusters
Troubleshooting Pods in Kubernetes Clusters

In Kubernetes, pods can encounter various issues that prevent them from running correctly. Understanding the different pod states and how to troubleshoot them is essential for maintaining a healthy cluster. This guide covers common pod states and provides steps for diagnosing and resolving issues.

Common Pod States:

- Init: The pod is initializing. All init containers must complete before the main containers start.
- 0/1 Running: The pod is running, but not all containers are in a ready state.
- CrashLoopBackOff: The pod repeatedly fails and is restarted.
- Pending: The pod is waiting to be scheduled on a node.
- ImagePullBackOff: The pod cannot pull the container image.

Troubleshooting Steps

1. Pod in Init State

When a pod is stuck in the Init state, it indicates that one or more init containers haven't completed successfully.

Check Pod Description:

```
kubectl -n sisense describe pod <pod_name>
```

In the description, look for the init containers section. All init containers should ideally be in a "Completed" state. If one is still running or has failed, it can block the rest of the pod's containers from starting. Example:

```
Init Containers:
  init-mongodb:
    State: Running
```

This indicates an issue with the init-mongodb container.

Check Logs of Init Containers: If the init container is still running or has failed, check its logs:

```
kubectl -n sisense logs <pod_name> -c init-mongodb
```

After identifying issues in the init container, investigate the related services or dependencies, such as the MongoDB pod itself in this example.

2. Pod in 0/1 or 1/2 Running State

This state indicates that the pod is running, but not all containers are in a ready state.

Describe the Pod:

```
kubectl -n sisense describe pod <pod_name>
```

Check the State section for each container. Look for reasons why a container is not in a ready state, such as CrashLoopBackOff, ImagePullBackOff, or other errors.

Check Logs for a Previously Failed Container: If a container is in an error state or has restarted, checking the logs can provide more context about the issue.

Current logs (replace <container_name> with the name of the specific container):

```
kubectl -n sisense logs <pod_name> -c <container_name>
```

Previous logs (retrieves logs from the previous instance of the container, which is particularly useful if the container has restarted):

```
kubectl -n sisense logs <pod_name> -c <container_name> -p
```

3. Pod in CrashLoopBackOff State

A pod enters the CrashLoopBackOff state when it repeatedly fails and is restarted. To diagnose this issue:

Describe the Pod:

```
kubectl -n sisense describe pod <pod_name>
```

This command provides detailed information, including the events and container statuses. Example:

```
State:       Waiting
  Reason:    CrashLoopBackOff
Last State:  Terminated
  Reason:    OOMKilled
```

The OOMKilled reason indicates that the container was killed for exceeding its memory limit. Increase the memory limit to fix the issue.

Check Events and Container States: At the bottom of the describe output, you'll find the events section, which includes messages about why the pod failed. For example, FailedScheduling may indicate resource constraints or node issues.

Review Logs: Logs can provide valuable insights when a pod is in the CrashLoopBackOff state.

Check current logs (retrieves the logs from the current running container):

```
kubectl -n sisense logs <pod_name>
```
Check previous logs (retrieves logs from the previous container instance, which is useful if the container was restarted):

```
kubectl -n sisense logs <pod_name> -p
```

Specify the container name if the pod has multiple containers:

```
kubectl -n sisense logs <pod_name> -c <container_name> -p
```

4. Pod in Pending State

If a pod is Pending, it means it hasn't been scheduled on a node yet.

Check Pod Scheduling Events:

```
kubectl -n sisense describe pod <pod_name>
```

Look for events like:

```
Warning  FailedScheduling  85m  default-scheduler  0/3 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules.
```

This message indicates that the scheduler couldn't find a suitable node for the pod due to resource constraints, node affinity rules, or other scheduling policies.

5. Pod in ImagePullBackOff State

This state occurs when a pod cannot pull the container image from the registry.

Check Pod Description for Image Issues:

```
kubectl -n sisense describe pod <pod_name>
```

Look for messages indicating issues with pulling the image, such as incorrect image names, tag issues, authentication problems, or network errors. For multi-node deployments, note the server named in the message; the image may not exist on all servers.

Verify Image Name and Tag: Ensure that the image name and tag are correct and that the image is available in the specified registry.

Check Image Pull Secrets: If the image is in a private registry, ensure that the required image pull secret is configured and that the registry is accessible.

Manually Pull the Image: Sometimes, images are not available or cannot be downloaded within the default timeout. To verify the availability of the image and check for any errors, try pulling the image manually on the node specified in the error message (replace <image_name> and <tag> with the appropriate image and tag names):

```
docker pull <image_name>:<tag>
```

This can help determine whether the issue is with the image itself or with the registry configuration.

Conclusion

By understanding these common pod states and following the troubleshooting steps above, you can diagnose and resolve many issues in Kubernetes. Regularly monitoring pods and logs is essential for maintaining a stable and reliable Kubernetes environment.

Check out this related content: Documentation
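Building on the per-state checks above, the hedged sketch below gives a quick overview of unhealthy pods across the Sisense namespace using only standard kubectl; adjust the namespace if yours differs.

```bash
# Pods that are not in a healthy state (filters out Running and Completed)
kubectl -n sisense get pods | grep -vE 'Running|Completed'

# Pods ordered by restart count of their first container (frequent restarters sort last)
kubectl -n sisense get pods --sort-by='.status.containerStatuses[0].restartCount'

# Most recent events in the namespace, which often explain Pending or ImagePullBackOff states
kubectl -n sisense get events --sort-by='.lastTimestamp' | tail -30
```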
Resolving Build Failures Due to Memory Issues [Safe-Mode]

This guide covers which settings should be checked and adjusted to resolve the problem and prevent future occurrences when cubes fail with the 'Safe-Mode' error. Please follow the steps below, and then try to re-build the failed cubes.
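The specific Safe-Mode settings are covered in the full guide above; as a complementary hedged check, since Safe-Mode failures stem from memory pressure during builds, the sketch below shows how to confirm whether the node is short on memory before re-building. It assumes shell access to the Linux host and, for the kubectl top commands, a metrics-server installed in the cluster.

```bash
# Host-level memory overview on the build node
free -h

# Per-node and per-pod memory usage as seen by Kubernetes (requires metrics-server)
kubectl top nodes
kubectl -n sisense top pods --sort-by=memory | head -15   # heaviest memory consumers first
```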
Error in ingress-nginx-controller pod after upgrading Sisense to L2024.1

!NOTE! - this guide is relevant for non-cloud managed Sisense deployments, i.e. installed with Sisense Kubernetes (RKE).

After upgrading Sisense to version L2024.1, you might face the following error in the ingress-nginx-controller pod's logs (in the default namespace):

```
main.go:64] port 80 is already in use. Please check the flag --http-port
```

Such an issue can be the result of an incompatibility between the ingress-nginx release that was updated in Sisense L2024.1 and the cluster's Kubernetes version, if the latter is lower than v1.26. If Sisense L2024.1 was freshly installed on a non-cloud instance (i.e. with Sisense RKE) as a new deployment (and was not upgraded from a previous version), this issue should not occur, since the L2024.1 package already comes with Kubernetes v1.26 by default. However, if Sisense was upgraded to L2024.1, or it was deployed on a Kubernetes version lower than v1.26, there will be an incompatibility between the ingress-nginx release shipped in L2024.1 (4.10.0) and that older Kubernetes version.

To resolve the issue, re-run the Sisense upgrade with the following parameters in the configuration yaml file:

```
update: true
update_k8s_version: true
```

If you are still facing the same issue after that, please open a ticket with Sisense Support.

This article provides a brief explanation of possible Nginx <-> Kubernetes compatibility issues in Sisense L2024.1, along with the steps to resolve the issue described.

Disclaimer: This post outlines a potential custom workaround for a specific use case or provides instructions regarding a specific task. The solution may not work in all scenarios or Sisense versions, so we strongly recommend testing it in your environment before deployment. If you need further assistance with this, please let us know.
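For reference, here is a minimal hedged sketch for confirming whether a cluster is affected before re-running the upgrade. The pod name is a placeholder; replace it with the actual ingress-nginx-controller pod name reported by the get pods command.

```bash
# Check the Kubernetes version of the nodes; the issue applies to clusters older than v1.26
kubectl get nodes

# Locate the ingress-nginx-controller pod in the default namespace and check its logs for the error
kubectl -n default get pods | grep ingress-nginx
kubectl -n default logs <ingress-nginx-controller-pod-name> --tail=100 | grep "port 80 is already in use"
```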