Web site demo request broken
Something seems seriously wrong on sisense.com. We are trying to request a demo, but that functionality will not work. We also can't get the "Contact Us" page to work. We have tried different browsers, and I even attempted it on a personal (non-work) machine. Pretty bad considering we are in the market for a solution such as this and can't even talk to someone there.

Linux Install Errors when /opt and /opt/sisense are both separate filesystems
I have found a bug in the Linux installer when both /opt and /opt/sisense are separate filesystems. The installer gives an error like "Sisense /opt minimum requierment disk size: 50GB", even if the /opt/sisense filesystem is larger than the minimum size. This check happens in the file "installer/02_sisense_validations/remote.sh", in the "validate_opt_disk_configuration()" function. If the /opt filesystem exists, the code does not also check /opt/sisense. This can easily be worked around by having the code check /opt/sisense first and fall back to /opt if the first is not found; the check then works as expected. A minimal sketch of the suggested ordering is below.
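For reference, this is only an illustrative stand-in for validate_opt_disk_configuration(), not the installer's actual remote.sh code — the variable names, the df-based size check, and the 50 GB threshold are assumptions used to show the proposed check order:

    #!/usr/bin/env bash
    # Hypothetical sketch: prefer /opt/sisense when it is its own mount point,
    # otherwise fall back to /opt, then validate that filesystem's size.
    MIN_OPT_SIZE_GB=50

    validate_opt_disk_configuration() {
      local target="/opt"
      # If /opt/sisense is a separate mount point, validate that filesystem instead.
      if mountpoint -q /opt/sisense; then
        target="/opt/sisense"
      fi
      # Size in GB of the filesystem backing $target.
      local size_gb
      size_gb=$(df -BG --output=size "$target" | tail -1 | tr -dc '0-9')
      if [ "${size_gb:-0}" -lt "$MIN_OPT_SIZE_GB" ]; then
        echo "Sisense ${target} minimum requirement disk size: ${MIN_OPT_SIZE_GB}GB" >&2
        return 1
      fi
    }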

Xero JDBC Connection issue
Hi, Our team has an issue with the Xero integration using the JDBC driver. Unfortunately, it seems the connections for other services work, but not the one for Xero. Let me give you as many details as I can.

First, we generate a connection string on our side so that users can connect to their accounts. Here is the structure of the connection string:

jdbc:xero:AuthScheme=OAuth;InitiateOAuth=GETANDREFRESH;OAuthClientId=<client_id>;OAuthClientSecret=<client_secret>;OAuthAccessToken=<access_token>;Scope="openid profile email accounting.transactions accounting.reports.read accounting.reports.tenninetynine.read accounting.journals.read accounting.settings accounting.contacts accounting.attachments accounting.budgets.read payroll.employees payroll.payruns payroll.payslip payroll.timesheets payroll.settings files assets projects offline_access";OAuthRefreshToken=<refresh_token>;OAuthSettingsLocation="%APPDATA%\CData\Xero Data Provider\OAuthSettings_<settings_location>.txt";OAuthExpiresIn=1800;OAuthTokenTimestamp=<timestamp>;

client_id - client ID of our connection app
client_secret - client secret of our connection app
access_token - access token the client received after establishing a connection
refresh_token - refresh token the client received after establishing a connection
settings_location - a special ID we generate for each integration; it allows our clients to set up their own integrations with the services
timestamp - timestamp of the received tokens

This approach works perfectly for other integrations such as HubSpot, but the main issue comes from Xero itself: for some reason they do not give us a constant refresh token. According to their documentation, every time a refresh token is used, a new refresh token is issued. You can find this info here: https://developer.xero.com/documentation/guides/oauth2/token-types/#refresh-token

The CData driver does not seem to handle this part. The first build (while the old refresh token is still active) works, but after initialization, once the access_token expires (30 minutes after the access and refresh tokens were issued), we are no longer able to use the connection: the old refresh token is no longer valid, and the driver apparently does not save the new one (a minimal sketch of the rotation flow is included below). Hope all this info makes sense. Please let me know if you have any questions.
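To make the rotation issue concrete, here is a minimal curl sketch of a manual refresh against Xero's OAuth2 token endpoint (endpoint and grant type taken from Xero's OAuth2 documentation; the placeholders are the same ones used above). Every successful refresh returns a new refresh_token, so whatever performs the refresh — the CData driver via OAuthSettingsLocation, in this setup — has to persist the returned value before the old one becomes invalid:

    # Sketch only: exchange the current refresh token for a new access/refresh pair.
    # <client_id>, <client_secret>, <refresh_token> are placeholders from the post above.
    curl -s -X POST https://identity.xero.com/connect/token \
      -u "<client_id>:<client_secret>" \
      -d "grant_type=refresh_token" \
      -d "refresh_token=<refresh_token>"
    # The JSON response contains an "access_token" and a NEW "refresh_token".
    # The new refresh_token must replace the stored one; reusing the old refresh
    # token after this call fails, which matches the behavior described above.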

GitHub changes are discarded
We are trying to make bulk edits to our dashboards and want to use our existing GitHub integration to achieve this. When a change to a file is merged in GitHub, the commit shows in the commit history for ~4 minutes, then disappears and reverts to the last change that was saved through the web UI. The changes never appear in the dashboard on the web UI, and the commits never show up in the dashboard history. The sync appears to be working (at least in one direction), because the GitHub repository is updated immediately with all changes made through the web UI. I looked at the troubleshooting steps in this doc: https://dtdocs.sisense.com/article/git-integration, and no Git Tags are created during this process (a quick way to check for tags is noted below). What can I do to make changes via the GitHub repository?
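One quick way to confirm whether the integration is creating Git Tags is to list the tags on the connected repository. These are plain git commands; "origin" is only the usual default remote name, so adjust it if the repository uses a different one:

    # List tags that exist on the remote the integration pushes to.
    git ls-remote --tags origin

    # Or, in a local clone: fetch tags and show the most recently created ones.
    git fetch --tags
    git tag --sort=-creatordate | head -n 10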

Keep getting messages in Community about verifying my email despite already doing so
Hello! I've noticed recently that almost every time I first open a window to this website (the Sisense Community forums), I see a blue message at the top of any page I look at that says "You have not yet confirmed your email address!", with a yellow button next to it that says "Confirm your email". (Note: this only happens when I first visit the website after not having visited in a while. After I've dismissed the message, it doesn't reappear until a decent amount of time has passed, like a few hours or the next day.)

Clicking that button takes you to your account settings page, on the "Email" tab. On this page, near the bottom, I see "Email verification status: Verified". I do not see any option to re-verify (i.e., make it send a new verification email). Why do I keep getting this message about needing to verify when it says I'm ALREADY verified? Is there something else I need to do to make the message stop appearing? On that settings screen, there's also an option to set a new email. I tried re-entering my email address there and saving, but I never saw any additional verification emails come through, and it still shows my status as "verified" in any case.

Sorry if this is the wrong place to post this. I tried going to the "Support" section to open a ticket, but the page just displayed a spinning symbol for a few minutes before eventually showing a message that says "Please Contact your Administrator." What should I do about that? I've never needed to open a ticket directly with Sisense Support, but if I need to in the future, how would I do so?

License server unavailable, MID is not alphanumeric
I'm trying to do a new install on a new server and keep getting "license server is unavailable". If I select the "behind a firewall" option, I get a key that is not alphanumeric, so I can't use it on the my.sisense.com portal to get an offline license. The version is Windows 2023.9.

Components/technologies inside Sisense
I'm curious about what Sisense is built upon. I've seen references to MongoDB in the documentation, but I could swear I've also seen "MonetDB" referenced in some pop-up error messages before. Is Monet used for the cube storage, and Mongo for the dashboards and other sys admin data?

Installation Error for EKS
Hello Technical Support, We are trying to do a fresh install of L2024.1 as our pilot before migrating to multi-node. During the installation we encountered an error; please see the logs below.
2024-06-27 18:49:20 [ℹ] eksctl version 0.183.0 2024-06-27 18:49:20 [ℹ] using region us-east-2 2024-06-27 18:49:20 [ℹ] subnets for us-east-2a - public:192.168.0.0/19 private:192.168.96.0/19 2024-06-27 18:49:20 [ℹ] subnets for us-east-2b - public:192.168.32.0/19 private:192.168.128.0/19 2024-06-27 18:49:20 [ℹ] subnets for us-east-2c - public:192.168.64.0/19 private:192.168.160.0/19 2024-06-27 18:49:20 [ℹ] using Kubernetes version 1.28 2024-06-27 18:49:20 [ℹ] creating EKS cluster "pilotblusky-EKS" in "us-east-2" region with 2024-06-27 18:49:20 [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-2 --cluster=pilotblusky-EKS' 2024-06-27 18:49:20 [ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "pilotblusky-EKS" in "us-east-2" 2024-06-27 18:49:20 [ℹ] CloudWatch logging will not be enabled for cluster "pilotblusky-EKS" in "us-east-2" 2024-06-27 18:49:20 [ℹ] you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-east-2 --cluster=pilotblusky-EKS' 2024-06-27 18:49:20 [ℹ] 2 sequential tasks: { create cluster control plane "pilotblusky-EKS", wait for control plane to become ready } 2024-06-27 18:49:20 [ℹ] building cluster stack "eksctl-pilotblusky-EKS-cluster" 2024-06-27 18:49:20 [ℹ] deploying stack "eksctl-pilotblusky-EKS-cluster" 2024-06-27 18:49:50 [ℹ] waiting for CloudFormation stack "eksctl-pilotblusky-EKS-cluster" 2024-06-27 18:50:20 [ℹ] waiting for CloudFormation stack "eksctl-pilotblusky-EKS-cluster" 2024-06-27 18:51:20 [ℹ] waiting for CloudFormation stack "eksctl-pilotblusky-EKS-cluster" 2024-06-27 18:52:20 [ℹ] waiting for CloudFormation stack "eksctl-pilotblusky-EKS-cluster" 2024-06-27 18:53:20 [ℹ] waiting for CloudFormation stack "eksctl-pilotblusky-EKS-cluster" 2024-06-27 18:54:20 [ℹ] waiting for CloudFormation stack "eksctl-pilotblusky-EKS-cluster" 2024-06-27 18:55:20 [ℹ] waiting for CloudFormation stack "eksctl-pilotblusky-EKS-cluster" 2024-06-27 18:56:20 [ℹ] waiting for CloudFormation stack "eksctl-pilotblusky-EKS-cluster" 2024-06-27 18:57:20 [ℹ] waiting for CloudFormation stack "eksctl-pilotblusky-EKS-cluster" 2024-06-27 18:58:20 [ℹ] waiting for CloudFormation stack "eksctl-pilotblusky-EKS-cluster" 2024-06-27 19:00:21 [ℹ] waiting for the control plane to become ready 2024-06-27 19:00:21 [✔] saved kubeconfig as "/home/ec2-user/.kube/config" 2024-06-27 19:00:21 [ℹ] no tasks 2024-06-27 19:00:21 [✔] all EKS cluster resources for "pilotblusky-EKS" have been created 2024-06-27 19:00:21 [✔] created 0 nodegroup(s) in cluster "pilotblusky-EKS" 2024-06-27 19:00:21 [✔] created 0 managed nodegroup(s) in cluster "pilotblusky-EKS" 2024-06-27 19:00:21 [✖] getting Kubernetes version on EKS cluster: error running `kubectl version`: exit status 1 (check 'kubectl version') 2024-06-27 19:00:21 [ℹ] cluster should be functional despite missing (or misconfigured) client binaries 2024-06-27 19:00:21 [✔] EKS cluster "pilotblusky-EKS" in "us-east-2" region is ready 2024-06-27 19:00:22 [ℹ] will use version 1.28 for new nodegroup(s) based on control plane version 2024-06-27 19:00:22 [ℹ] nodegroup "pilotblusky-workers-APP-QRY1" will use "" [AmazonLinux2/1.28] 2024-06-27 19:00:23 [ℹ] using EC2 key pair "pilotblusky-KeyPair" 2024-06-27
19:00:23 [ℹ] 1 nodegroup (pilotblusky-workers-APP-QRY1) was included (based on the include/exclude rules) 2024-06-27 19:00:23 [ℹ] will create a CloudFormation stack for each of 1 managed nodegroups in cluster "pilotblusky-EKS" 2024-06-27 19:00:23 [ℹ] 2 sequential tasks: { fix cluster compatibility, 1 task: { 1 task: { create managed nodegroup "pilotblusky-workers-APP-QRY1" } } } 2024-06-27 19:00:23 [ℹ] checking cluster stack for missing resources 2024-06-27 19:00:23 [ℹ] cluster stack has all required resources 2024-06-27 19:00:23 [ℹ] building managed nodegroup stack "eksctl-pilotblusky-EKS-nodegroup-pilotblusky-workers-APP-QRY1" 2024-06-27 19:00:23 [ℹ] deploying stack "eksctl-pilotblusky-EKS-nodegroup-pilotblusky-workers-APP-QRY1" 2024-06-27 19:00:23 [ℹ] waiting for CloudFormation stack "eksctl-pilotblusky-EKS-nodegroup-pilotblusky-workers-APP-QRY1" 2024-06-27 19:00:53 [ℹ] waiting for CloudFormation stack "eksctl-pilotblusky-EKS-nodegroup-pilotblusky-workers-APP-QRY1" 2024-06-27 19:01:39 [ℹ] waiting for CloudFormation stack "eksctl-pilotblusky-EKS-nodegroup-pilotblusky-workers-APP-QRY1" 2024-06-27 19:03:29 [ℹ] waiting for CloudFormation stack "eksctl-pilotblusky-EKS-nodegroup-pilotblusky-workers-APP-QRY1" 2024-06-27 19:03:29 [ℹ] no tasks 2024-06-27 19:03:29 [✔] created 0 nodegroup(s) in cluster "pilotblusky-EKS" 2024-06-27 19:03:29 [ℹ] nodegroup "pilotblusky-workers-APP-QRY1" has 1 node(s) 2024-06-27 19:03:29 [ℹ] node "ip-192-168-115-122.us-east-2.compute.internal" is ready 2024-06-27 19:03:29 [ℹ] waiting for at least 1 node(s) to become ready in "pilotblusky-workers-APP-QRY1" 2024-06-27 19:03:29 [ℹ] nodegroup "pilotblusky-workers-APP-QRY1" has 1 node(s) 2024-06-27 19:03:29 [ℹ] node "ip-192-168-115-122.us-east-2.compute.internal" is ready 2024-06-27 19:03:29 [✔] created 1 managed nodegroup(s) in cluster "pilotblusky-EKS" 2024-06-27 19:03:29 [ℹ] checking security group configuration for all nodegroups 2024-06-27 19:03:29 [ℹ] all nodegroups have up-to-date cloudformation templates 2024-06-27 19:03:29 [ℹ] will use version 1.28 for new nodegroup(s) based on control plane version 2024-06-27 19:03:30 [ℹ] nodegroup "pilotblusky-workers-APP-QRY2" will use "" [AmazonLinux2/1.28] 2024-06-27 19:03:30 [ℹ] using EC2 key pair "pilotblusky-KeyPair" 2024-06-27 19:03:30 [ℹ] 1 existing nodegroup(s) (pilotblusky-workers-APP-QRY1) will be excluded 2024-06-27 19:03:30 [ℹ] 1 nodegroup (pilotblusky-workers-APP-QRY2) was included (based on the include/exclude rules) 2024-06-27 19:03:30 [ℹ] will create a CloudFormation stack for each of 1 managed nodegroups in cluster "pilotblusky-EKS" 2024-06-27 19:03:30 [ℹ] 2 sequential tasks: { fix cluster compatibility, 1 task: { 1 task: { create managed nodegroup "pilotblusky-workers-APP-QRY2" } } } 2024-06-27 19:03:30 [ℹ] checking cluster stack for missing resources 2024-06-27 19:03:31 [ℹ] cluster stack has all required resources 2024-06-27 19:03:31 [ℹ] building managed nodegroup stack "eksctl-pilotblusky-EKS-nodegroup-pilotblusky-workers-APP-QRY2" 2024-06-27 19:03:31 [ℹ] deploying stack "eksctl-pilotblusky-EKS-nodegroup-pilotblusky-workers-APP-QRY2" 2024-06-27 19:03:31 [ℹ] waiting for CloudFormation stack "eksctl-pilotblusky-EKS-nodegroup-pilotblusky-workers-APP-QRY2" 2024-06-27 19:04:01 [ℹ] waiting for CloudFormation stack "eksctl-pilotblusky-EKS-nodegroup-pilotblusky-workers-APP-QRY2" 2024-06-27 19:04:40 [ℹ] waiting for CloudFormation stack "eksctl-pilotblusky-EKS-nodegroup-pilotblusky-workers-APP-QRY2" 2024-06-27 19:05:56 [ℹ] waiting for CloudFormation stack 
"eksctl-pilotblusky-EKS-nodegroup-pilotblusky-workers-APP-QRY2" 2024-06-27 19:05:56 [ℹ] no tasks 2024-06-27 19:05:56 [✔] created 0 nodegroup(s) in cluster "pilotblusky-EKS" 2024-06-27 19:05:56 [ℹ] nodegroup "pilotblusky-workers-APP-QRY2" has 1 node(s) 2024-06-27 19:05:56 [ℹ] node "ip-192-168-131-152.us-east-2.compute.internal" is ready 2024-06-27 19:05:56 [ℹ] waiting for at least 1 node(s) to become ready in "pilotblusky-workers-APP-QRY2" 2024-06-27 19:05:56 [ℹ] nodegroup "pilotblusky-workers-APP-QRY2" has 1 node(s) 2024-06-27 19:05:56 [ℹ] node "ip-192-168-131-152.us-east-2.compute.internal" is ready 2024-06-27 19:05:56 [✔] created 1 managed nodegroup(s) in cluster "pilotblusky-EKS" 2024-06-27 19:05:56 [ℹ] checking security group configuration for all nodegroups 2024-06-27 19:05:56 [ℹ] all nodegroups have up-to-date cloudformation templates 2024-06-27 19:05:56 [ℹ] will use version 1.28 for new nodegroup(s) based on control plane version 2024-06-27 19:05:57 [ℹ] nodegroup "pilotblusky-workers-BLD" will use "" [AmazonLinux2/1.28] 2024-06-27 19:05:57 [ℹ] using EC2 key pair "pilotblusky-KeyPair" 2024-06-27 19:05:58 [ℹ] 2 existing nodegroup(s) (pilotblusky-workers-APP-QRY1,pilotblusky-workers-APP-QRY2) will be excluded 2024-06-27 19:05:58 [ℹ] 1 nodegroup (pilotblusky-workers-BLD) was included (based on the include/exclude rules) 2024-06-27 19:05:58 [ℹ] will create a CloudFormation stack for each of 1 managed nodegroups in cluster "pilotblusky-EKS" 2024-06-27 19:05:58 [ℹ] 2 sequential tasks: { fix cluster compatibility, 1 task: { 1 task: { create managed nodegroup "pilotblusky-workers-BLD" } } } 2024-06-27 19:05:58 [ℹ] checking cluster stack for missing resources 2024-06-27 19:05:58 [ℹ] cluster stack has all required resources 2024-06-27 19:05:58 [ℹ] building managed nodegroup stack "eksctl-pilotblusky-EKS-nodegroup-pilotblusky-workers-BLD" 2024-06-27 19:05:58 [ℹ] deploying stack "eksctl-pilotblusky-EKS-nodegroup-pilotblusky-workers-BLD" 2024-06-27 19:05:58 [ℹ] waiting for CloudFormation stack "eksctl-pilotblusky-EKS-nodegroup-pilotblusky-workers-BLD" 2024-06-27 19:06:28 [ℹ] waiting for CloudFormation stack "eksctl-pilotblusky-EKS-nodegroup-pilotblusky-workers-BLD" 2024-06-27 19:07:27 [ℹ] waiting for CloudFormation stack "eksctl-pilotblusky-EKS-nodegroup-pilotblusky-workers-BLD" 2024-06-27 19:08:52 [ℹ] waiting for CloudFormation stack "eksctl-pilotblusky-EKS-nodegroup-pilotblusky-workers-BLD" 2024-06-27 19:08:52 [ℹ] no tasks 2024-06-27 19:08:52 [✔] created 0 nodegroup(s) in cluster "pilotblusky-EKS" 2024-06-27 19:08:52 [ℹ] nodegroup "pilotblusky-workers-BLD" has 1 node(s) 2024-06-27 19:08:52 [ℹ] node "ip-192-168-169-160.us-east-2.compute.internal" is ready 2024-06-27 19:08:52 [ℹ] waiting for at least 1 node(s) to become ready in "pilotblusky-workers-BLD" 2024-06-27 19:08:52 [ℹ] nodegroup "pilotblusky-workers-BLD" has 1 node(s) 2024-06-27 19:08:52 [ℹ] node "ip-192-168-169-160.us-east-2.compute.internal" is ready 2024-06-27 19:08:52 [✔] created 1 managed nodegroup(s) in cluster "pilotblusky-EKS" 2024-06-27 19:08:53 [ℹ] checking security group configuration for all nodegroups 2024-06-27 19:08:53 [ℹ] all nodegroups have up-to-date cloudformation templates { "Return": true, "SecurityGroupRules": [ { "SecurityGroupRuleId": "sgr-03778b5b86b1ac42f", "GroupId": "sg-059a00f34c46a56a7", "GroupOwnerId": "822653642785", "IsEgress": false, "IpProtocol": "tcp", "FromPort": 988, "ToPort": 988, "CidrIpv4": "172.31.0.0/16" } ] } { "Return": true, "SecurityGroupRules": [ { "SecurityGroupRuleId": 
"sgr-069b478d8086a4377", "GroupId": "sg-092411d06995e5f5a", "GroupOwnerId": "822653642785", "IsEgress": false, "IpProtocol": "tcp", "FromPort": 988, "ToPort": 988, "CidrIpv4": "172.31.0.0/16" } ] } { "Return": true, "SecurityGroupRules": [ { "SecurityGroupRuleId": "sgr-09bfef4a1602d5a0f", "GroupId": "sg-059a00f34c46a56a7", "GroupOwnerId": "822653642785", "IsEgress": false, "IpProtocol": "tcp", "FromPort": 988, "ToPort": 988, "CidrIpv4": "192.168.0.0/16" } ] } { "Return": true, "SecurityGroupRules": [ { "SecurityGroupRuleId": "sgr-084a2eabc256d9a1f", "GroupId": "sg-092411d06995e5f5a", "GroupOwnerId": "822653642785", "IsEgress": false, "IpProtocol": "tcp", "FromPort": 988, "ToPort": 988, "CidrIpv4": "192.168.0.0/16" } ] } { "FileSystem": { "OwnerId": "822653642785", "CreationTime": "2024-06-27T19:09:02.652000+00:00", "FileSystemId": "fs-035b89918f4baec54", "FileSystemType": "LUSTRE", "Lifecycle": "CREATING", "StorageCapacity": 1200, "StorageType": "SSD", "VpcId": "vpc-0aae3c8142babd264", "SubnetIds": [ "subnet-0e3f98d53dfea0ec9" ], "DNSName": "fs-035b89918f4baec54.fsx.us-east-2.amazonaws.com", "KmsKeyId": "arn:aws:kms:us-east-2:822653642785:key/ee0436eb-8d54-46bf-98d2-02504de8366a", "ResourceARN": "arn:aws:fsx:us-east-2:822653642785:file-system/fs-035b89918f4baec54", "Tags": [ { "Key": "Name", "Value": "Lustre-pilotblusky" } ], "LustreConfiguration": { "WeeklyMaintenanceStartTime": "1:03:00", "DeploymentType": "PERSISTENT_1", "PerUnitStorageThroughput": 200, "MountName": "oxzx5bev", "CopyTagsToBackups": false, "DataCompressionType": "NONE", "LogConfiguration": { "Level": "DISABLED" } }, "FileSystemTypeVersion": "2.10" } } Added new context arn:aws:eks:us-east-2:822653642785:cluster/pilotblusky-EKS to /home/ec2-user/.kube/config % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 3055 100 3055 0 0 15570 0 --:--:-- --:--:-- --:--:-- 15507 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 3075 100 3075 0 0 19281 0 --:--:-- --:--:-- --:--:-- 19339 Creating service account ebs-csi-controller-sa with permissions for ebs-csi driver... 
Creating an IAM OIDC provider for EKS cluster pilotblusky-EKS 2024-06-27 19:09:08 [ℹ] will create IAM Open ID Connect provider for cluster "pilotblusky-EKS" in "us-east-2" 2024-06-27 19:09:08 [✔] created IAM Open ID Connect provider for cluster "pilotblusky-EKS" in "us-east-2" Creating service account ebs-csi-controller-sa serviceaccount/ebs-csi-controller-sa created serviceaccount/ebs-csi-controller-sa labeled serviceaccount/ebs-csi-controller-sa annotated Creating policy pilotblusky-eks-ebs-policy { "Policy": { "PolicyName": "pilotblusky-eks-ebs-policy", "PolicyId": "ANPA37CP2NAQ6GKGH53YD", "Arn": "arn:aws:iam::822653642785:policy/pilotblusky-eks-ebs-policy", "Path": "/", "DefaultVersionId": "v1", "AttachmentCount": 0, "PermissionsBoundaryUsageCount": 0, "IsAttachable": true, "CreateDate": "2024-06-27T19:09:12+00:00", "UpdateDate": "2024-06-27T19:09:12+00:00" } } Creating role pilotblusky-eks-ebs-role { "Role": { "Path": "/", "RoleName": "pilotblusky-eks-ebs-role", "RoleId": "AROA37CP2NAQ4ZOC5SBOI", "Arn": "arn:aws:iam::822653642785:role/pilotblusky-eks-ebs-role", "CreateDate": "2024-06-27T19:09:12+00:00", "AssumeRolePolicyDocument": { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Federated": "arn:aws:iam::822653642785:oidc-provider/oidc.eks.us-east-2.amazonaws.com/id/465A67AC30CD7524216DE00AF93C8C12" }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "oidc.eks.us-east-2.amazonaws.com/id/465A67AC30CD7524216DE00AF93C8C12:aud": "sts.amazonaws.com" } } } ] } } } Attaching policy ARN arn:aws:iam::822653642785:policy/pilotblusky-eks-ebs-policy to role pilotblusky-eks-ebs-role Annotating service account ebs-csi-controller-sa with role arn arn:aws:iam::822653642785:role/pilotblusky-eks-ebs-role serviceaccount/ebs-csi-controller-sa annotated Done. ssh_key path is: ~/pilotblusky-KeyPair.pem kubernetes_cluster_name: pilotblusky-EKS kubernetes_cluster_location: us-east-2 kubernetes_cloud_provider: aws fsx_dns_name is: fs-035b89918f4baec54.fsx.us-east-2.amazonaws.com fsx_mount_name is: oxzx5bev aws awscliv2.zip cloud_config.yaml cluster_config.yaml config.yaml iam_policy.json installer openshift_config.yaml single_config.yaml sisense-installer sisense-installer.log sisense.sh [ec2-user@ip-172-31-1-133 sisense-L2024.1.0.355]$ vi cloud_config.yaml [ec2-user@ip-172-31-1-133 sisense-L2024.1.0.355]$ ./sisense.sh cloud_config.yaml [2024-06-27 19:31:08] Preparing System ... [2024-06-27 19:31:08] Linux user: ec2-user [2024-06-27 19:31:08] Validating Sudo permissions for user ec2-user ... [2024-06-27 19:31:08] User ec2-user has sufficient sudo permissions [2024-06-27 19:31:08] Detecting Host OS ... [2024-06-27 19:31:08] OS: Amazon Linux, Version: 2023 [2024-06-27 19:31:08] Validating OS and its version [2024-06-27 19:31:09] Validating that namespace name is all lower case ... [2024-06-27 19:31:09] Using private key path /home/ec2-user/pilotblusky-KeyPair.pem for Ansible Installation... [2024-06-27 19:31:09] Verifying Python packages exist ... [2024-06-27 19:31:21] Ensuring sisense main directory /opt/sisense exist Validating connection to dl.fedoraproject.org on port 443 ... [OK] Validating connection to docker.io on port 443 ... [OK] Validating connection to pypi.org on port 443 ... [OK] Validating connection to github.com on port 443 ... [OK] Validating connection to auth.cloud.sisense.com on port 443 ... [OK] Validating connection to bitbucket.org on port 443 ... [OK] Validating connection to download.docker.com on port 443 ... 
[OK] Validating connection to github.com on port 443 ... [OK] Validating connection to gcr.io on port 443 ... [OK] Validating connection to kubernetes.io on port 443 ... [OK] Validating connection to l.sisense.com on port 443 ... [OK] Validating connection to ppa.launchpad.net on port 443 ... [OK] Validating connection to quay.io on port 443 ... [OK] Validating connection to registry-1.docker.io on port 443 ... [OK] Validating connection to storage.googleapis.com on port 443 ... [OK] Validating connection to mirror.centos.org on port 80 ... [OK] The following Configuration will be delegated to Sisense Installation, Please confirm: { "k8s_nodes": [ { "node": "ip-192-168-115-122.us-east-2.compute.internal", "roles": "application, query" }, { "node": "ip-192-168-131-152.us-east-2.compute.internal", "roles": "application, query" }, { "node": "ip-192-168-169-160.us-east-2.compute.internal", "roles": "build" } ], "deployment_size": "large", "cluster_visibility": true, "offline_installer": false, "private_docker_registry": false, "update": false, "notify_on_upgrade": true, "enable_widget_deltas": false, "is_kubernetes_cloud": true, "kubernetes_cluster_name": "pilotblusky-EKS", "kubernetes_cluster_location": "us-east-2", "kubernetes_cloud_provider": "aws", "cloud_load_balancer": false, "cloud_load_balancer_internal": false, "cloud_auto_scaler": false, "high_availability": true, "application_dns_name": "https://pilot.bluskyreporting.com", "linux_user": "ec2-user", "ssh_key": "/home/ec2-user/pilotblusky-KeyPair.pem", "run_as_user": 1000, "run_as_group": 1000, "fs_group": 1000, "storage_type": "", "nfs_server": "", "nfs_path": "", "efs_file_system_id": "", "efs_aws_region": "", "fsx_dns_name": "fs-035b89918f4baec54.fsx.us-east-2.amazonaws.com", "fsx_mount_name": "oxzx5bev/", "sisense_disk_size": 70, "mongodb_disk_size": 20, "zookeeper_disk_size": 2, "timezone": "UTC", "namespace_name": "kube-system", "gateway_port": 30845, "is_ssl": false, "ssl_key_path": "", "ssl_cer_path": "", "internal_monitoring": true, "external_monitoring": true, "uninstall_cluster": false, "uninstall_sisense": false, "remove_user_data": false } Do you wish to install Sisense L2024.1.0.355 (y/n)? y [2024-06-27 19:31:41] Getting binaries kubectl (v1.27.10) and helm (v3.12.3) [2024-06-27 19:31:41] Downloading them from the internet % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 138 100 138 0 0 1722 0 --:--:-- --:--:-- --:--:-- 1725 100 47.0M 100 47.0M 0 0 83.3M 0 --:--:-- --:--:-- --:--:-- 112M % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 15.2M 100 15.2M 0 0 56.0M 0 --:--:-- --:--:-- --:--:-- 56.1M linux-amd64/helm [2024-06-27 19:31:43] Helm plugin mapkubeapis already installed. [2024-06-27 19:31:43] Installing bash-completion Last metadata expiration check: 1 day, 2:48:27 ago on Wed Jun 26 16:43:17 2024. Package bash-completion-1:2.11-2.amzn2023.0.2.noarch is already installed. Dependencies resolved. Nothing to do. Complete! [2024-06-27 19:31:44] Adding kubectl and helm auto completion source <(kubectl completion bash) 2>/dev/null source <(helm completion bash) 2>/dev/null [2024-06-27 19:31:44] Generating overriding params file /tmp/sisense/overrided_params.yaml [2024-06-27 19:31:44] INFO: Getting Kubernetes Cloud Provider and location ... 
[2024-06-27 19:31:47] INFO: Configuration Completed [2024-06-27 19:31:48] Single | Deploy Manual StorageClass storageclass.storage.k8s.io/manual created [2024-06-27 19:31:49] Generating overriding params file /tmp/sisense/overrided_params.yaml [2024-06-27 19:31:49] INFO: Getting Kubernetes Cloud Provider and location ... [2024-06-27 19:31:53] INFO: Configuration Completed [2024-06-27 19:31:56] Validating node ip-192-168-115-122.us-east-2.compute.internal pods capacity [2024-06-27 19:31:56] Node ip-192-168-115-122.us-east-2.compute.internal meets with the minimum requirements for pod capacity (minimum: 58, current: 58) [2024-06-27 19:31:56] Validating node ip-192-168-131-152.us-east-2.compute.internal pods capacity [2024-06-27 19:31:56] Node ip-192-168-131-152.us-east-2.compute.internal meets with the minimum requirements for pod capacity (minimum: 58, current: 58) [2024-06-27 19:31:57] Validating node ip-192-168-169-160.us-east-2.compute.internal pods capacity [2024-06-27 19:31:57] Node ip-192-168-169-160.us-east-2.compute.internal meets with the minimum requirements for pod capacity (minimum: 58, current: 58) [2024-06-27 19:31:57] Adding single label to node ip-192-168-115-122.us-east-2.compute.internal node/ip-192-168-115-122.us-east-2.compute.internal labeled [2024-06-27 19:32:02] Getting Sisense extra values [2024-06-27 19:32:02] Generating Sisense values file [2024-06-27 19:32:02] Evaluting template file installer/07_sisense_installation/templates/sisense-values.yaml.j2 into /opt/sisense/config/umbrella-chart/kube-system-values.yaml [2024-06-27 19:32:03] Getting Prometheus extra values [2024-06-27 19:32:03] Generating Kube Prometheus Stack values file [2024-06-27 19:32:03] Evaluting template file installer/07_sisense_installation/templates/kube-prometheus-stack-values.yaml.j2 into /opt/sisense/config/logging-monitoring/kube-prometheus-stack-values.yaml [2024-06-27 19:32:04] Generating Alertmanager PV file [2024-06-27 19:32:04] Evaluting template file installer/07_sisense_installation/templates/alertmanager-pv.yaml.j2 into /opt/sisense/config/logging-monitoring/alertmanager-pv.yaml [2024-06-27 19:32:05] Generating Prometheus PV file [2024-06-27 19:32:05] Evaluting template file installer/07_sisense_installation/templates/prometheus-pv.yaml.j2 into /opt/sisense/config/logging-monitoring/prometheus-pv.yaml [2024-06-27 19:32:06] Deleting cAdvisor (ignoring not found) [2024-06-27 19:32:07] Getting Logging Monitoring extra values [2024-06-27 19:32:07] Generating Logging Monitoring values file [2024-06-27 19:32:07] Evaluting template file installer/07_sisense_installation/templates/logging-monitoring-values.yaml.j2 into /opt/sisense/config/logging-monitoring/logmon-values.yaml [2024-06-27 19:32:08] Getting Cluster Metrics extra values [2024-06-27 19:32:09] Generating Cluster Metrics values file [2024-06-27 19:32:09] Evaluting template file installer/07_sisense_installation/templates/cluster-metrics-values.yaml.j2 into /opt/sisense/config/logging-monitoring/cluster-metrics-values.yaml [2024-06-27 19:32:11] Getting ALB Controller extra values [2024-06-27 19:32:11] Generating ALB Controller values file [2024-06-27 19:32:11] Evaluting template file installer/07_sisense_installation/templates/alb-controller-values.yaml.j2 into /opt/sisense/config/umbrella-chart/alb-controller-values.yaml [2024-06-27 19:32:12] Generating Helmfile file [2024-06-27 19:32:12] Evaluting template file installer/07_sisense_installation/templates/helmfile.yaml.j2 into /opt/sisense/config/umbrella-chart/helmfile.yaml 
[2024-06-27 19:32:12] Deploying Sisense using helmfile with file /opt/sisense/config/umbrella-chart/helmfile.yaml [2024-06-27 19:32:12] Deploying all Helm charts using Helmfile file /opt/sisense/config/umbrella-chart/helmfile.yaml Upgrading release=sisense-prom-operator, chart=/home/ec2-user/sisense-L2024.1.0.355/installer/07_sisense_installation/files/kube-prometheus-stack-L2024.1.0.355.tgz Upgrading release=aws-load-balancer-controller, chart=/home/ec2-user/sisense-L2024.1.0.355/installer/07_sisense_installation/files/aws-load-balancer-controller-1.4.3.tgz Release "aws-load-balancer-controller" does not exist. Installing it now. NAME: aws-load-balancer-controller LAST DEPLOYED: Thu Jun 27 19:32:16 2024 NAMESPACE: kube-system STATUS: deployed REVISION: 1 TEST SUITE: None NOTES: AWS Load Balancer controller installed! Listing releases matching ^aws-load-balancer-controller$ aws-load-balancer-controller kube-system 1 2024-06-27 19:32:16.673862756 +0000 UTC deployed aws-load-balancer-controller-1.4.3 v2.4.2 Release "sisense-prom-operator" does not exist. Installing it now. NAME: sisense-prom-operator LAST DEPLOYED: Thu Jun 27 19:32:17 2024 NAMESPACE: monitoring STATUS: deployed REVISION: 1 NOTES: kube-prometheus-stack has been installed. Check its status by running: kubectl --namespace monitoring get pods -l "release=sisense-prom-operator" Visit https://github.com/prometheus-operator/kube-prometheus for instructions on how to create & configure Alertmanager and Prometheus instances using the Operator. Listing releases matching ^sisense-prom-operator$ sisense-prom-operator monitoring 1 2024-06-27 19:32:17.996804862 +0000 UTC deployed kube-prometheus-stack-2024.1.0355 v0.72.0 hook[postsync] logs | persistentvolume/alertmanager-db created hook[postsync] logs | hook[postsync] logs | persistentvolume/prometheus-db-prometheus-0 created hook[postsync] logs | Upgrading release=kube-system, chart=/home/ec2-user/sisense-L2024.1.0.355/installer/07_sisense_installation/files/sisense-L2024.1.0.355.tgz Upgrading release=cluster-metrics, chart=/home/ec2-user/sisense-L2024.1.0.355/installer/07_sisense_installation/files/cluster-metrics-L2024.1.0.355.tgz Release "cluster-metrics" does not exist. Installing it now. NAME: cluster-metrics LAST DEPLOYED: Thu Jun 27 19:32:48 2024 NAMESPACE: monitoring STATUS: deployed REVISION: 1 TEST SUITE: None Listing releases matching ^cluster-metrics$ cluster-metrics monitoring 1 2024-06-27 19:32:48.028757588 +0000 UTC deployed cluster-metrics-2024.1.0355 1.0 Release "kube-system" does not exist. Installing it now. 
UPDATED RELEASES: NAME CHART VERSION DURATION aws-load-balancer-controller /home/ec2-user/sisense-L2024.1.0.355/installer/07_sisense_installation/files/aws-load-balancer-controller-1.4.3.tgz 1.4.3 9s sisense-prom-operator /home/ec2-user/sisense-L2024.1.0.355/installer/07_sisense_installation/files/kube-prometheus-stack-L2024.1.0.355.tgz 2024.1.0355 34s cluster-metrics /home/ec2-user/sisense-L2024.1.0.355/installer/07_sisense_installation/files/cluster-metrics-L2024.1.0.355.tgz 2024.1.0355 5s FAILED RELEASES: NAME CHART VERSION DURATION kube-system /home/ec2-user/sisense-L2024.1.0.355/installer/07_sisense_installation/files/sisense-L2024.1.0.355.tgz 22s in /opt/sisense/config/umbrella-chart/helmfile.yaml: failed processing release kube-system: command "/usr/local/bin/helm" exited with non-zero status: PATH: /usr/local/bin/helm ARGS: 0: helm (4 bytes) 1: upgrade (7 bytes) 2: --install (9 bytes) 3: kube-system (11 bytes) 4: /home/ec2-user/sisense-L2024.1.0.355/installer/07_sisense_installation/files/sisense-L2024.1.0.355.tgz (102 bytes) 5: --create-namespace (18 bytes) 6: --namespace (11 bytes) 7: kube-system (11 bytes) 8: --values (8 bytes) 9: /tmp/helmfile3511981587/kube-system-kube-system-values-664598dcb5 (65 bytes) 10: --values (8 bytes) 11: /tmp/helmfile2017665039/kube-system-kube-system-values-5d447766b8 (65 bytes) 12: --values (8 bytes) 13: /tmp/helmfile78220084/kube-system-kube-system-values-7dc58bfd (61 bytes) 14: --reset-values (14 bytes) 15: --history-max (13 bytes) 16: 0 (1 bytes) ERROR: exit status 1 EXIT STATUS 1 STDERR: coalesce.go:175: warning: skipped value for zookeeper.configuration: Not a table. coalesce.go:175: warning: skipped value for rabbitmq.configuration: Not a table. coalesce.go:175: warning: skipped value for rabbitmq.plugins: Not a table. coalesce.go:175: warning: skipped value for mongodb.configuration: Not a table. W0627 19:33:07.895137 552291 warnings.go:70] spec.template.spec.containers[0].resources.limits[memory]: fractional byte value "1288490188800m" is invalid, must be an integer W0627 19:33:08.138918 552291 warnings.go:70] annotation "kubernetes.io/ingress.class" is deprecated, please use 'spec.ingressClassName' instead Error: 1 error occurred: * Internal error occurred: failed calling webhook "vingress.elbv2.k8s.aws": failed to call webhook: Post "https://aws-load-balancer-webhook-service.kube-system.svc:443/validate-networking-v1-ingress?timeout=10s": no endpoints available for service "aws-load-balancer-webhook-service" COMBINED OUTPUT: Release "kube-system" does not exist. Installing it now. coalesce.go:175: warning: skipped value for zookeeper.configuration: Not a table. coalesce.go:175: warning: skipped value for rabbitmq.configuration: Not a table. coalesce.go:175: warning: skipped value for rabbitmq.plugins: Not a table. coalesce.go:175: warning: skipped value for mongodb.configuration: Not a table. 
W0627 19:33:07.895137 552291 warnings.go:70] spec.template.spec.containers[0].resources.limits[memory]: fractional byte value "1288490188800m" is invalid, must be an integer W0627 19:33:08.138918 552291 warnings.go:70] annotation "kubernetes.io/ingress.class" is deprecated, please use 'spec.ingressClassName' instead Error: 1 error occurred: * Internal error occurred: failed calling webhook "vingress.elbv2.k8s.aws": failed to call webhook: Post "https://aws-load-balancer-webhook-service.kube-system.svc:443/validate-networking-v1-ingress?timeout=10s": no endpoints available for service "aws-load-balancer-webhook-service" [2024-06-27 19:33:08] ** Error occurred during Deploying all Helm charts using Helmfile file /opt/sisense/config/umbrella-chart/helmfile.yaml section ** [2024-06-27 19:33:08] ** Exiting Installation ... **
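The failing step above is the helm upgrade of the kube-system release calling the aws-load-balancer-controller's validating webhook before the controller had any ready endpoints ("no endpoints available for service aws-load-balancer-webhook-service"). Before re-running the installer, these are the checks we plan to run to confirm the controller is actually up; the label selector and deployment name are assumed from the upstream aws-load-balancer-controller Helm chart defaults, not from Sisense documentation:

    # Are the AWS Load Balancer Controller pods running and ready?
    kubectl -n kube-system get pods -l app.kubernetes.io/name=aws-load-balancer-controller

    # Does the webhook service have endpoints? The install error reports none.
    kubectl -n kube-system get endpoints aws-load-balancer-webhook-service

    # If the pods are not ready, inspect the deployment and its logs for the cause.
    kubectl -n kube-system describe deployment aws-load-balancer-controller
    kubectl -n kube-system logs deployment/aws-load-balancer-controller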