Sisense Kubernetes Cluster Health Check
Check the pods status

1. Check if there are pods that are not in a Running or Completed state:

kubectl get po -A -o wide | egrep -v 'Running|Completed'

-A returns pods from all namespaces (Sisense is usually installed in the sisense one); -o wide returns the extended output. The response should be empty.

2. If there is no output (all pods are in a Running or Completed state), check if all containers of the Running pods are READY:

kubectl get po -A -o wide | egrep 'Running'

Look at the numbers in the READY column, x/y, where x is the number of ready containers and y is the total number of containers. Please note that if x is less than y, then not all containers are ready! Please refer to the sections below for instructions on troubleshooting this issue.

3. If all pods are in a Running or Completed state, and all containers of the Running pods are READY, check the status of the nodes:

kubectl get nodes

All nodes should have the status 'Ready.' In a single-node environment you will see just one node; in a multi-node environment you should see several nodes.

4. If a node is not in a 'Ready' state, get details by 'describing' the node:

kubectl describe node <node-name>

5. You may also check storage health by running:

kubectl -n sisense get pvc

6. If all pods are in a Running or Completed state, all containers of the Running pods are READY, and all nodes are in a Ready state, the basic Kubernetes troubleshooting is complete, and the issue is not in the Kubernetes infrastructure.

What if kubectl is not running?

1. If Linux doesn't recognize the kubectl command, there is an issue with the Kubernetes installation, or the user doesn't have permission to run kubectl.

2. The main Kubernetes node component is the kubelet. Check the status of the kubelet service (does not apply to RKE deployments):

systemctl status kubelet

It should be in an active (running) state.
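The READY check in step 2 can be automated. The sketch below parses `kubectl get po` output with awk and flags any pod whose x/y counts don't match; the here-doc stands in for live cluster output, and the pod names in it are invented for illustration.

```shell
# Flag pods whose READY column shows fewer ready containers than the total (x < y).
check_ready() {
  awk 'NR > 1 {
    split($3, c, "/")                 # READY column, e.g. "0/1"
    if (c[1] + 0 < c[2] + 0)
      print $2 " has only " c[1] " of " c[2] " containers ready"
  }'
}

# Sample input standing in for `kubectl get po -A`; against a real cluster
# you would run: kubectl get po -A | check_ready
not_ready="$(check_ready <<'EOF'
NAMESPACE   NAME                                READY   STATUS    RESTARTS   AGE
sisense     external-plugins-5dcf494b77-gtsfk   0/1     Running   0          5m
sisense     api-gateway-7f6d9c55b-abcde         2/2     Running   0          5m
EOF
)"
echo "$not_ready"   # → external-plugins-5dcf494b77-gtsfk has only 0 of 1 containers ready
```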
3. If the kubelet is not in an active (running) state, try restarting it:

sudo systemctl restart kubelet

4. You may check the kubelet logs by running:

journalctl -u kubelet

Press Shift+G to jump to the end of the log.

5. If the kubelet is missing, there is an issue with the Kubernetes installation.

6. If the kubelet is in an active (running) state, check if there is a .kube directory in the home directory of your current user:

cd && ls -la .kube

If the .kube directory is missing or empty, the current user is not configured to run kubectl, and there is a problem with the Kubernetes configuration.

What if the pods are not running correctly?

1. If you have meaningful output, but some pods are in a state other than Running or Completed, or not all of their containers are READY, you will have to describe the pod to understand why it is unhealthy.

2. For example, suppose you have a pod with 0/1 containers READY. Assuming the pod is in the sisense namespace, copy the name of the pod, in this case 'external-plugins-5dcf494b77-gtsfk', and run:

kubectl -n sisense describe pod external-plugins-5dcf494b77-gtsfk

The two main sections we are interested in evaluating are Conditions and Events. Conditions gives you True/False values telling you whether the pod is:

- Initialized
- Ready
- ContainersReady
- PodScheduled (pod is placed on a node)

In the example above, the pod had been placed on the node (PodScheduled: True) and initialized, but it is not ready because its container is not Ready. Events gives you an excerpt from the kubelet log showing events related to the current pod. In the example above, the Readiness probe for the pod failed, so the problem is in the application itself and not in the Kubernetes infrastructure.

3. You can check the logs of the pod with the command:

kubectl -n sisense logs external-plugins-5dcf494b77-gtsfk

and look for errors giving a clue about the root cause of the problem.
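The .kube check in step 6 can be wrapped in a small helper. This is only a sketch: the function takes the home directory as a parameter so it can be pointed anywhere, and the file path it checks is kubectl's standard default location.

```shell
# Report whether a given home directory is set up for kubectl.
# By default kubectl reads its configuration from $HOME/.kube/config.
check_kubeconfig() {
  local home_dir="$1"
  if [ -s "$home_dir/.kube/config" ]; then
    echo "kubectl is configured for $home_dir"
  else
    echo "no usable .kube/config under $home_dir - kubectl will not work for this user"
  fi
}

check_kubeconfig "$HOME"
```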
4. The container may be in a state other than Running. Use describe pod to check its Conditions and Events as we did in the previous case:

kubectl -n sisense describe po external-plugins-5dcf494b77-gtsfk

5. If Conditions and Events don't give you enough information about the root cause of the problem, look at the State/Last State section. In the example above, the Last State is 'Terminated' and the Reason is OOMKilled, which means Out of Memory, Killed: Kubernetes has killed the container because it exceeded its memory limit. To increase the memory limit, find the problematic pod:

kubectl -n sisense get po

Then find the Kubernetes object managing the pod. In our example:

kubectl -n sisense get all | grep connectors

Find a resource without additional random letters/digits in the name; in our case, it's a deployment. Then edit the resource:

kubectl -n sisense edit deployment connectors

Search for 'resources' and increase the 'limits' for 'memory'.

6. Let's consider another example: a pod in a CrashLoopBackOff state. Let's describe the pod:

kubectl -n sisense describe po sisense-dgraph-alpha-0

It doesn't give us anything obvious, so let's check the logs:

kubectl -n sisense logs sisense-dgraph-alpha-0

(add --previous if you don't get any output). Here the logs show that the root cause of the issue is "no space left on device," which means we should allocate more space to the pod. In this case, you may check the status of persistent volumes and persistent volume claims with:

kubectl get pv
kubectl -n sisense get pvc

You are looking for statuses other than Bound. If you see one, you may 'describe' the resource, as in the case above:

kubectl -n sisense describe pvc data-dgraph-0

What else to check?

1. If the cluster looks healthy, but performance suffers, you may check the resource consumption of the Sisense services. Start by checking the nodes:
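The "statuses other than Bound" scan can be scripted the same way as the pod check. The here-doc below stands in for live `kubectl -n sisense get pvc` output, and the claim names in it are invented for the example.

```shell
# List PVCs whose STATUS column is anything other than Bound.
scan_pvc() {
  awk 'NR > 1 && $2 != "Bound" { print $1 " is " $2 }'
}

# Sample input standing in for `kubectl -n sisense get pvc`:
unbound="$(scan_pvc <<'EOF'
NAME            STATUS    VOLUME   CAPACITY   ACCESS MODES   AGE
data-dgraph-0   Pending                                      3m
storage-0       Bound     pvc-1a   20Gi       RWO            30d
EOF
)"
echo "$unbound"   # → data-dgraph-0 is Pending
```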
kubectl top nodes

Note whether CPU% is close to 100% or MEMORY% is close to 85%.

2. To check the resource consumption of the individual pods, run:

kubectl -n sisense top po

Note the pods with abnormally high memory consumption.

Conclusion

If you are a Kubernetes pro, this article will help you quickly grasp what infrastructure components are involved in a Sisense deployment and what to check next. If you are a Kubernetes newbie, these basic instructions will let you troubleshoot quickly and identify the issue to seek further help. If you need any additional help, please contact Sisense Support.
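The thresholds above (CPU near 100%, memory near 85%) can be checked mechanically. A sketch, with a here-doc standing in for real `kubectl top nodes` output; the node names, numbers, and the 90% CPU cutoff are illustrative choices, not Sisense requirements.

```shell
# Flag nodes whose CPU% or MEMORY% exceed the article's rough thresholds.
hot_nodes() {
  awk 'NR > 1 {
    cpu = $3; mem = $5
    sub(/%/, "", cpu); sub(/%/, "", mem)
    if (cpu + 0 >= 90) print $1 ": CPU at " cpu "%"
    if (mem + 0 >= 85) print $1 ": memory at " mem "%"
  }'
}

# Sample input standing in for `kubectl top nodes`:
alerts="$(hot_nodes <<'EOF'
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
node-1   3800m        95%    12000Mi         60%
node-2   400m         10%    14500Mi         88%
EOF
)"
echo "$alerts"
```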
Elevate Your Data Product's Quality with Streamlined Version Control Leveraging the Sisense Git Integration!

In today's CI/CD ecosystems, efficient asset migration from development to production environments is crucial for delivering high-quality data products. Sisense, a leading embedded analytics technology, offers a powerful Git integration that simplifies and enhances the migration process. In this blog, we will explore leveraging the Sisense Git Version Control to streamline asset migration, ensuring smooth transitions and maintaining data product integrity.

To understand the value of Sisense Git Version Control, it is important to understand what Git is. Git offers users (often developers and/or engineers) a structured and efficient approach to managing files, collaborating with others, and maintaining a clear history of changes. Git enhances team productivity, reduces errors, and gives teams control over their projects. Teams who leverage Git ultimately benefit from better organization, teamwork, and effective management of files and projects.

When building your data products in a technology like Sisense, there is massive value in integrating with your developers' CI/CD workflow for continuity, quality, and time to delivery. Users who leverage the Sisense Git Version Control can collaborate on building data products, manage changes to products over time, and migrate assets across Sisense environments through remote Git repositories. The Sisense Git integration is offered out of the box with Sisense Linux versions 2022.10 and up.

To begin leveraging the Sisense Git integration, click on the Git logo in the top right of your Sisense environment. The Git GUI will open in a separate browser tab, and you will be asked to create a new project.
After creating a new project, your team will be prompted to name the project, name the default branch, and decide whether to connect to a remote Git repository (further instructions are included in the Sisense Git documentation, depending on which Git repository your team leverages). After these steps are complete, you can choose to invite others to collaborate with you on the project. Whether you choose collaborators or decide to lone-wolf a project, you will next be asked if you'd like to "add assets" to the project. Do not worry, lone wolves: if you would like to invite collaborators down the road, you can share the project after the fact. The assets available to modify and track in Sisense Git Version Control are Data Models and Dashboards, or you can simply continue without any if you intend to "Pull" Sisense assets from a remote repository.

Once a team has created and defined a project, they can start working. Users familiar with Git will find continuity in terminology and functionality between the Sisense Git GUI and popular Git repositories. Dashboards and Models are stored as JSON files, allowing users to review, commit, or discard changes. Teams can create branches, check out branches, and revert changes if needed. When a project is ready to progress to the next stage, users can "Push" the assets/branches to the remote repository, where the assets can be reviewed in their JSON format. If a CI/CD pipeline includes QA, Staging, or Production Sisense environments, users can leverage the Git GUI in those environments to "Pull" assets for review or publication.

So let's land this plane! The Sisense Git integration is a tool that provides tremendous value to your developer/engineering team's workflow, while significantly improving your business with better data product quality and delivery. If your team already leverages Git, this tool will be easy to incorporate and drive value.
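For readers new to Git, here is a generic command-line sketch of the branch/commit/push cycle the Sisense Git GUI automates. Everything in it is illustrative: the repository is a throwaway temp directory, and the branch name, file name, and user details are invented (this is plain Git, not a Sisense-specific command set).

```shell
# Create a throwaway repo and walk one change through the Git workflow:
# branch, edit a dashboard's JSON, commit, and (commented out) push.
repo="$(mktemp -d)/demo-project"
git init -q "$repo"
cd "$repo"
git config user.email "dev@example.com"
git config user.name "Demo Dev"

# Work on a feature branch, as you would for a dashboard change:
git checkout -q -b update-sales-dashboard
echo '{"title": "Sales Dashboard"}' > sales-dashboard.json
git add sales-dashboard.json
git commit -q -m "Update sales dashboard"

# In a real project you would now push the branch to the remote repository:
#   git push origin update-sales-dashboard
git log --oneline
```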
For users unfamiliar with Git, we strongly recommend adopting this approach: it involves only a minimal learning curve but offers improved version control, streamlined asset migration, and overall enhanced quality. We hope this information helps!

Relocating /var/lib/docker directory
Docker uses /var/lib/docker to store images, containers, and local named volumes. Deleting this directory can result in data loss and possibly stop the engine from running. However, if you need to free some space on the root volume, it is possible to relocate this directory to another partition or drive. This tutorial describes two ways to move the /var/lib/docker directory in Linux.
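One of the standard approaches is Docker's "data-root" daemon option. The sketch below outlines it with the destructive steps left as comments, so it is safe to run as-is; /mnt/docker-data is an assumed target path, and the generated file is written to a temp location for review rather than straight to /etc/docker/daemon.json.

```shell
# Sketch: relocate Docker's data directory via the daemon "data-root" option.
NEW_ROOT="/mnt/docker-data"
conf_file="$(mktemp)"

# 1. Stop the engine before touching its files:
#      sudo systemctl stop docker
# 2. Copy the existing data, preserving permissions and links:
#      sudo rsync -aP /var/lib/docker/ "$NEW_ROOT/"
# 3. Point the daemon at the new location:
cat > "$conf_file" <<EOF
{
  "data-root": "$NEW_ROOT"
}
EOF

# Review the generated file, install it as /etc/docker/daemon.json,
# then start the engine again: sudo systemctl start docker
cat "$conf_file"
```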
Reverse Proxy with Nginx + SSL configuration

Nginx reverse proxy configuration

Step 1. Nginx reverse proxy server set up

In this example we are using nginx, which can be installed on the same device as Sisense.

1. Install nginx for Ubuntu/Debian-like systems:

sudo apt install nginx

2. For RHEL systems such as CentOS, use:

sudo yum install nginx

3. Start nginx:

sudo systemctl start nginx

Step 2. Nginx server configuration

1. Open the browser and go to the IP address of the server. If it's up, you will see the Nginx welcome page; this means nginx is now running on the default port 80.

2. Edit /etc/nginx/sites-enabled/default and add the following configuration under the root server config. Define the correct Sisense public IP and port in the "server {}" section:

location /analytics {
    rewrite /analytics/(.*) /$1 break;
    proxy_pass http://<sisense-ip>:30845;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $host;
    proxy_connect_timeout 36000;
    proxy_send_timeout 36000;
    proxy_read_timeout 36000;
    send_timeout 36000;
}

3. Before you apply the settings, check that there is no syntax issue by running:

sudo nginx -t

4. Reload nginx with:

sudo /etc/init.d/nginx reload or sudo systemctl reload nginx

With this configuration, Sisense will be accessed at http://<ip-or-domain-of-nginx-server>/analytics. If HTTPS is configured for this nginx server, Sisense will also be accessible at https://<ip-or-domain-of-nginx-server>/analytics. If HTTPS is enabled at the proxy level, please ensure the application_dns_name has the https prefix so that all traffic uses it, for example:

application_dns_name: https://company.sisense.com
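The `rewrite /analytics/(.*) /$1 break;` line is what strips the /analytics prefix before the request is forwarded to Sisense. A quick way to see the mapping is to mimic the same regex with sed; the request paths below are just examples.

```shell
# Mimic nginx's `rewrite /analytics/(.*) /$1 break;` to preview which
# upstream path each incoming URI is proxied to.
strip_prefix() {
  sed 's|^/analytics/\(.*\)|/\1|'
}

mapped="$(printf '%s\n' "/analytics/app/main" "/analytics/api/v1/dashboards" | strip_prefix)"
echo "$mapped"
```

So a browser request for /analytics/app/main reaches Sisense as /app/main.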
Step 3. Sisense configuration

Go to the Admin tab, click on System Management, enter Configuration, and choose Web Server. In the Proxy URL field, enter "/analytics" or "http://<ip-or-domain-of-nginx-server>/analytics", as we configured in Nginx. With "/analytics" you will be able to use multiple domains for this instance. Save it and test with a browser by entering http://<ip-or-domain-of-nginx-server>/analytics.

Now we can configure SSL with our Nginx server. Please validate that Nginx is working properly before moving on.

SSL configuration for Nginx

Step 1. Obtain self-signed SSL certificates

You can use a command like this:

sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/private/nginx-selfsigned.key -out /etc/ssl/certs/nginx-selfsigned.crt

For an explanation of what the above command does, please refer to Setup SSL on Sisense (Linux version) - Link placeholder

Step 2. Configure Nginx to use SSL

1. Create a new file named self-signed.conf:

sudo vi /etc/nginx/snippets/self-signed.conf

In self-signed.conf we want to add variables that hold the locations of the certificate and key files that we generated in Step 1:

ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;

Save and close the file.

2. Now we will create a snippet file to define the SSL settings. Start by creating the file:

sudo vi /etc/nginx/snippets/ssl-params.conf

In this file, include the SSL settings below:

ssl_protocols TLSv1.3;
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/nginx/dhparam.pem;
ssl_ciphers EECDH+AESGCM:EDH+AESGCM;
ssl_ecdh_curve secp384r1;
ssl_session_timeout 10m;
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off;
ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;
# Disable strict transport security for now. You can uncomment the following
# line if you understand the implications.
#add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";

Save and close the file. Note that ssl_dhparam references /etc/nginx/dhparam.pem; if it does not exist yet, you can generate it with sudo openssl dhparam -out /etc/nginx/dhparam.pem 2048.

3. In this step, we need to modify the Nginx configuration to use SSL. Open up your Nginx configuration file, which is usually in a location like /etc/nginx/sites-available/<yourconfig>. Before making changes to this file, it is best to back it up first in case we break anything:

sudo cp /etc/nginx/sites-available/<yourconfig> /etc/nginx/sites-available/<yourconfig>.bak

Now open the current Nginx config file:

sudo vi /etc/nginx/sites-available/<yourconfig>

In the first server {} block, at the beginning, add the lines below. You might already have a location {} block, so leave that there:

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    include snippets/self-signed.conf;
    include snippets/ssl-params.conf;
    server_name your_domain.com www.your_domain.com; # server_name can be anything
    location / {
        try_files $uri $uri/ =404;
    }
}

Lastly, we need to add another server {} block at the very bottom of the file, with the following parameters. This configuration listens on port 80 and performs the redirect to HTTPS:

server {
    listen 80;
    listen [::]:80;
    server_name default.local www.default.local; # use the same name
    return 302 https://$server_name$request_uri;
}

Please note that you must add this server_name to the hosts file of your local desktop or laptop. In this example, I will go to my local hosts file and add:

<ip address of nginx server> default.local

[Optional] Step 3. Adjust the firewall

The steps below assume you have a UFW firewall enabled.
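Before writing anything into /etc/ssl, you can dry-run Step 1's openssl command against a throwaway directory and inspect the result. A sketch; the CN value test.local is a placeholder, and -subj is added only to make the run non-interactive.

```shell
# Generate a throwaway self-signed certificate in a temp directory,
# mirroring Step 1 without touching /etc/ssl, then inspect its subject.
workdir="$(mktemp -d)"
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -subj "/CN=test.local" \
  -keyout "$workdir/nginx-selfsigned.key" \
  -out "$workdir/nginx-selfsigned.crt" 2>/dev/null

# Confirm what was produced:
openssl x509 -in "$workdir/nginx-selfsigned.crt" -noout -subject
```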
Review the available profiles by running:

sudo ufw app list

You can check the current settings by typing sudo ufw status:

Status: active

To                         Action      From
--                         ------      ----
Nginx HTTP                 DENY        Anywhere
Nginx HTTP (v6)            DENY        Anywhere (v6)

We need to allow HTTPS traffic, so update the permissions for the "Nginx Full" profile:

sudo ufw allow 'Nginx Full'

Check the update with sudo ufw status:

Status: active

To                         Action      From
--                         ------      ----
Nginx Full                 ALLOW       Anywhere
Nginx Full (v6)            ALLOW       Anywhere (v6)

This output confirms the changes made to your firewall were successful, so you are ready to enable the changes in Nginx.

Step 4. Enable the changes in Nginx

First, check that there are no syntax errors in the files by running:

sudo nginx -t

The output will most likely look like:

nginx: [warn] "ssl_stapling" ignored, issuer certificate not found for certificate "/etc/ssl/certs/nginx-selfsigned.crt"
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

You can disregard the ssl_stapling warning; this particular setting generates a warning because your self-signed certificate can't use SSL stapling. This is expected, and your server can still encrypt connections correctly. If your output matches the example above, your configuration file has no errors and you can safely restart Nginx to implement the changes:

sudo systemctl restart nginx

Step 5. Test the encryption

Open up a browser and navigate to https://<server_name>, using the name you set up in Step 2.

Additional information

1. It was reported that File Manager and Grafana don't work with a reverse proxy.
To get the URLs for File Manager and Grafana to work, take the following steps:

kubectl -n sisense set env deploy/filebrowser FILEBROWSER_BASEURL='/<baseurl>/app/explore'
kubectl -n sisense set env deploy/filebrowser FB_BASEURL='/<baseurl>/app/explore/'
kubectl -n sisense set env deploy/sisense-grafana GF_SERVER_ROOT_URL=<baseurl>/app/grafana

2. Once the reverse proxy is enabled, Sisense will still use IP addresses as links in its email communications. To set up the correct addresses in Sisense e-mails after the reverse proxy is configured, set the following in the configuration yaml file:

update: true
application_dns_name: ""

and start the installation script to update the parameters. After the update is completed, in the Sisense GUI:

1. Go to Admin -> Server & Hardware -> System management -> Configuration.
2. Set http://YOUR_PROXY_ADDRESS/analytics in the "Proxy URL" field of the "Web Server" menu (or https://YOUR_PROXY_ADDRESS/analytics in case of SSL).
3. Go to Admin -> User Management -> Users.
4. Try creating a new user or use the "Resend invitation" option for an existing one (if available).
5. Check the inbox of that user for "Sisense account activation." The "Activate Account" link should now redirect to the http://YOUR_PROXY_ADDRESS/analytics/app/account/activate/HASH address.