
Installation Error for EKS

jomel2013
7 - Data Storage

Hello Technical Support,

We are trying to do a fresh install of L2024.1 as our pilot test before migrating to multi-node. During the installation we encountered an error. Please see the logs below.

 

2024-06-27 18:49:20 [ℹ]  eksctl version 0.183.0
2024-06-27 18:49:20 [ℹ]  using region us-east-2
2024-06-27 18:49:20 [ℹ]  subnets for us-east-2a - public:192.168.0.0/19 private:192.168.96.0/19
2024-06-27 18:49:20 [ℹ]  subnets for us-east-2b - public:192.168.32.0/19 private:192.168.128.0/19
2024-06-27 18:49:20 [ℹ]  subnets for us-east-2c - public:192.168.64.0/19 private:192.168.160.0/19
2024-06-27 18:49:20 [ℹ]  using Kubernetes version 1.28
2024-06-27 18:49:20 [ℹ]  creating EKS cluster "pilotblusky-EKS" in "us-east-2" region with
2024-06-27 18:49:20 [ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-2 --cluster=pilotblusky-EKS'
2024-06-27 18:49:20 [ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "pilotblusky-EKS" in "us-east-2"
2024-06-27 18:49:20 [ℹ]  CloudWatch logging will not be enabled for cluster "pilotblusky-EKS" in "us-east-2"
2024-06-27 18:49:20 [ℹ]  you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-east-2 --cluster=pilotblusky-EKS'
2024-06-27 18:49:20 [ℹ]
2 sequential tasks: { create cluster control plane "pilotblusky-EKS", wait for control plane to become ready
}
2024-06-27 18:49:20 [ℹ]  building cluster stack "eksctl-pilotblusky-EKS-cluster"
2024-06-27 18:49:20 [ℹ]  deploying stack "eksctl-pilotblusky-EKS-cluster"
2024-06-27 18:49:50 [ℹ]  waiting for CloudFormation stack "eksctl-pilotblusky-EKS-cluster"
2024-06-27 18:50:20 [ℹ]  waiting for CloudFormation stack "eksctl-pilotblusky-EKS-cluster"
2024-06-27 18:51:20 [ℹ]  waiting for CloudFormation stack "eksctl-pilotblusky-EKS-cluster"
2024-06-27 18:52:20 [ℹ]  waiting for CloudFormation stack "eksctl-pilotblusky-EKS-cluster"
2024-06-27 18:53:20 [ℹ]  waiting for CloudFormation stack "eksctl-pilotblusky-EKS-cluster"
2024-06-27 18:54:20 [ℹ]  waiting for CloudFormation stack "eksctl-pilotblusky-EKS-cluster"
2024-06-27 18:55:20 [ℹ]  waiting for CloudFormation stack "eksctl-pilotblusky-EKS-cluster"
2024-06-27 18:56:20 [ℹ]  waiting for CloudFormation stack "eksctl-pilotblusky-EKS-cluster"
2024-06-27 18:57:20 [ℹ]  waiting for CloudFormation stack "eksctl-pilotblusky-EKS-cluster"
2024-06-27 18:58:20 [ℹ]  waiting for CloudFormation stack "eksctl-pilotblusky-EKS-cluster"
2024-06-27 19:00:21 [ℹ]  waiting for the control plane to become ready
2024-06-27 19:00:21 [✔]  saved kubeconfig as "/home/ec2-user/.kube/config"
2024-06-27 19:00:21 [ℹ]  no tasks
2024-06-27 19:00:21 [✔]  all EKS cluster resources for "pilotblusky-EKS" have been created
2024-06-27 19:00:21 [✔]  created 0 nodegroup(s) in cluster "pilotblusky-EKS"
2024-06-27 19:00:21 [✔]  created 0 managed nodegroup(s) in cluster "pilotblusky-EKS"
2024-06-27 19:00:21 [✖]  getting Kubernetes version on EKS cluster: error running `kubectl version`: exit status 1 (check 'kubectl version')
2024-06-27 19:00:21 [ℹ]  cluster should be functional despite missing (or misconfigured) client binaries
2024-06-27 19:00:21 [✔]  EKS cluster "pilotblusky-EKS" in "us-east-2" region is ready
2024-06-27 19:00:22 [ℹ]  will use version 1.28 for new nodegroup(s) based on control plane version
2024-06-27 19:00:22 [ℹ]  nodegroup "pilotblusky-workers-APP-QRY1" will use "" [AmazonLinux2/1.28]
2024-06-27 19:00:23 [ℹ]  using EC2 key pair "pilotblusky-KeyPair"
2024-06-27 19:00:23 [ℹ]  1 nodegroup (pilotblusky-workers-APP-QRY1) was included (based on the include/exclude rules)
2024-06-27 19:00:23 [ℹ]  will create a CloudFormation stack for each of 1 managed nodegroups in cluster "pilotblusky-EKS"
2024-06-27 19:00:23 [ℹ]
2 sequential tasks: { fix cluster compatibility, 1 task: { 1 task: { create managed nodegroup "pilotblusky-workers-APP-QRY1" } }
}
2024-06-27 19:00:23 [ℹ]  checking cluster stack for missing resources
2024-06-27 19:00:23 [ℹ]  cluster stack has all required resources
2024-06-27 19:00:23 [ℹ]  building managed nodegroup stack "eksctl-pilotblusky-EKS-nodegroup-pilotblusky-workers-APP-QRY1"
2024-06-27 19:00:23 [ℹ]  deploying stack "eksctl-pilotblusky-EKS-nodegroup-pilotblusky-workers-APP-QRY1"
2024-06-27 19:00:23 [ℹ]  waiting for CloudFormation stack "eksctl-pilotblusky-EKS-nodegroup-pilotblusky-workers-APP-QRY1"
2024-06-27 19:00:53 [ℹ]  waiting for CloudFormation stack "eksctl-pilotblusky-EKS-nodegroup-pilotblusky-workers-APP-QRY1"
2024-06-27 19:01:39 [ℹ]  waiting for CloudFormation stack "eksctl-pilotblusky-EKS-nodegroup-pilotblusky-workers-APP-QRY1"
2024-06-27 19:03:29 [ℹ]  waiting for CloudFormation stack "eksctl-pilotblusky-EKS-nodegroup-pilotblusky-workers-APP-QRY1"
2024-06-27 19:03:29 [ℹ]  no tasks
2024-06-27 19:03:29 [✔]  created 0 nodegroup(s) in cluster "pilotblusky-EKS"
2024-06-27 19:03:29 [ℹ]  nodegroup "pilotblusky-workers-APP-QRY1" has 1 node(s)
2024-06-27 19:03:29 [ℹ]  node "ip-192-168-115-122.us-east-2.compute.internal" is ready
2024-06-27 19:03:29 [ℹ]  waiting for at least 1 node(s) to become ready in "pilotblusky-workers-APP-QRY1"
2024-06-27 19:03:29 [ℹ]  nodegroup "pilotblusky-workers-APP-QRY1" has 1 node(s)
2024-06-27 19:03:29 [ℹ]  node "ip-192-168-115-122.us-east-2.compute.internal" is ready
2024-06-27 19:03:29 [✔]  created 1 managed nodegroup(s) in cluster "pilotblusky-EKS"
2024-06-27 19:03:29 [ℹ]  checking security group configuration for all nodegroups
2024-06-27 19:03:29 [ℹ]  all nodegroups have up-to-date cloudformation templates
2024-06-27 19:03:29 [ℹ]  will use version 1.28 for new nodegroup(s) based on control plane version
2024-06-27 19:03:30 [ℹ]  nodegroup "pilotblusky-workers-APP-QRY2" will use "" [AmazonLinux2/1.28]
2024-06-27 19:03:30 [ℹ]  using EC2 key pair "pilotblusky-KeyPair"
2024-06-27 19:03:30 [ℹ]  1 existing nodegroup(s) (pilotblusky-workers-APP-QRY1) will be excluded
2024-06-27 19:03:30 [ℹ]  1 nodegroup (pilotblusky-workers-APP-QRY2) was included (based on the include/exclude rules)
2024-06-27 19:03:30 [ℹ]  will create a CloudFormation stack for each of 1 managed nodegroups in cluster "pilotblusky-EKS"
2024-06-27 19:03:30 [ℹ]
2 sequential tasks: { fix cluster compatibility, 1 task: { 1 task: { create managed nodegroup "pilotblusky-workers-APP-QRY2" } }
}
2024-06-27 19:03:30 [ℹ]  checking cluster stack for missing resources
2024-06-27 19:03:31 [ℹ]  cluster stack has all required resources
2024-06-27 19:03:31 [ℹ]  building managed nodegroup stack "eksctl-pilotblusky-EKS-nodegroup-pilotblusky-workers-APP-QRY2"
2024-06-27 19:03:31 [ℹ]  deploying stack "eksctl-pilotblusky-EKS-nodegroup-pilotblusky-workers-APP-QRY2"
2024-06-27 19:03:31 [ℹ]  waiting for CloudFormation stack "eksctl-pilotblusky-EKS-nodegroup-pilotblusky-workers-APP-QRY2"
2024-06-27 19:04:01 [ℹ]  waiting for CloudFormation stack "eksctl-pilotblusky-EKS-nodegroup-pilotblusky-workers-APP-QRY2"
2024-06-27 19:04:40 [ℹ]  waiting for CloudFormation stack "eksctl-pilotblusky-EKS-nodegroup-pilotblusky-workers-APP-QRY2"
2024-06-27 19:05:56 [ℹ]  waiting for CloudFormation stack "eksctl-pilotblusky-EKS-nodegroup-pilotblusky-workers-APP-QRY2"
2024-06-27 19:05:56 [ℹ]  no tasks
2024-06-27 19:05:56 [✔]  created 0 nodegroup(s) in cluster "pilotblusky-EKS"
2024-06-27 19:05:56 [ℹ]  nodegroup "pilotblusky-workers-APP-QRY2" has 1 node(s)
2024-06-27 19:05:56 [ℹ]  node "ip-192-168-131-152.us-east-2.compute.internal" is ready
2024-06-27 19:05:56 [ℹ]  waiting for at least 1 node(s) to become ready in "pilotblusky-workers-APP-QRY2"
2024-06-27 19:05:56 [ℹ]  nodegroup "pilotblusky-workers-APP-QRY2" has 1 node(s)
2024-06-27 19:05:56 [ℹ]  node "ip-192-168-131-152.us-east-2.compute.internal" is ready
2024-06-27 19:05:56 [✔]  created 1 managed nodegroup(s) in cluster "pilotblusky-EKS"
2024-06-27 19:05:56 [ℹ]  checking security group configuration for all nodegroups
2024-06-27 19:05:56 [ℹ]  all nodegroups have up-to-date cloudformation templates
2024-06-27 19:05:56 [ℹ]  will use version 1.28 for new nodegroup(s) based on control plane version
2024-06-27 19:05:57 [ℹ]  nodegroup "pilotblusky-workers-BLD" will use "" [AmazonLinux2/1.28]
2024-06-27 19:05:57 [ℹ]  using EC2 key pair "pilotblusky-KeyPair"
2024-06-27 19:05:58 [ℹ]  2 existing nodegroup(s) (pilotblusky-workers-APP-QRY1,pilotblusky-workers-APP-QRY2) will be excluded
2024-06-27 19:05:58 [ℹ]  1 nodegroup (pilotblusky-workers-BLD) was included (based on the include/exclude rules)
2024-06-27 19:05:58 [ℹ]  will create a CloudFormation stack for each of 1 managed nodegroups in cluster "pilotblusky-EKS"
2024-06-27 19:05:58 [ℹ]
2 sequential tasks: { fix cluster compatibility, 1 task: { 1 task: { create managed nodegroup "pilotblusky-workers-BLD" } }
}
2024-06-27 19:05:58 [ℹ]  checking cluster stack for missing resources
2024-06-27 19:05:58 [ℹ]  cluster stack has all required resources
2024-06-27 19:05:58 [ℹ]  building managed nodegroup stack "eksctl-pilotblusky-EKS-nodegroup-pilotblusky-workers-BLD"
2024-06-27 19:05:58 [ℹ]  deploying stack "eksctl-pilotblusky-EKS-nodegroup-pilotblusky-workers-BLD"
2024-06-27 19:05:58 [ℹ]  waiting for CloudFormation stack "eksctl-pilotblusky-EKS-nodegroup-pilotblusky-workers-BLD"
2024-06-27 19:06:28 [ℹ]  waiting for CloudFormation stack "eksctl-pilotblusky-EKS-nodegroup-pilotblusky-workers-BLD"
2024-06-27 19:07:27 [ℹ]  waiting for CloudFormation stack "eksctl-pilotblusky-EKS-nodegroup-pilotblusky-workers-BLD"
2024-06-27 19:08:52 [ℹ]  waiting for CloudFormation stack "eksctl-pilotblusky-EKS-nodegroup-pilotblusky-workers-BLD"
2024-06-27 19:08:52 [ℹ]  no tasks
2024-06-27 19:08:52 [✔]  created 0 nodegroup(s) in cluster "pilotblusky-EKS"
2024-06-27 19:08:52 [ℹ]  nodegroup "pilotblusky-workers-BLD" has 1 node(s)
2024-06-27 19:08:52 [ℹ]  node "ip-192-168-169-160.us-east-2.compute.internal" is ready
2024-06-27 19:08:52 [ℹ]  waiting for at least 1 node(s) to become ready in "pilotblusky-workers-BLD"
2024-06-27 19:08:52 [ℹ]  nodegroup "pilotblusky-workers-BLD" has 1 node(s)
2024-06-27 19:08:52 [ℹ]  node "ip-192-168-169-160.us-east-2.compute.internal" is ready
2024-06-27 19:08:52 [✔]  created 1 managed nodegroup(s) in cluster "pilotblusky-EKS"
2024-06-27 19:08:53 [ℹ]  checking security group configuration for all nodegroups
2024-06-27 19:08:53 [ℹ]  all nodegroups have up-to-date cloudformation templates
{
    "Return": true,
    "SecurityGroupRules": [
        {
            "SecurityGroupRuleId": "sgr-03778b5b86b1ac42f",
            "GroupId": "sg-059a00f34c46a56a7",
            "GroupOwnerId": "822653642785",
            "IsEgress": false,
            "IpProtocol": "tcp",
            "FromPort": 988,
            "ToPort": 988,
            "CidrIpv4": "172.31.0.0/16"
        }
    ]
}
{
    "Return": true,
    "SecurityGroupRules": [
        {
            "SecurityGroupRuleId": "sgr-069b478d8086a4377",
            "GroupId": "sg-092411d06995e5f5a",
            "GroupOwnerId": "822653642785",
            "IsEgress": false,
            "IpProtocol": "tcp",
            "FromPort": 988,
            "ToPort": 988,
            "CidrIpv4": "172.31.0.0/16"
        }
    ]
}
{
    "Return": true,
    "SecurityGroupRules": [
        {
            "SecurityGroupRuleId": "sgr-09bfef4a1602d5a0f",
            "GroupId": "sg-059a00f34c46a56a7",
            "GroupOwnerId": "822653642785",
            "IsEgress": false,
            "IpProtocol": "tcp",
            "FromPort": 988,
            "ToPort": 988,
            "CidrIpv4": "192.168.0.0/16"
        }
    ]
}
{
    "Return": true,
    "SecurityGroupRules": [
        {
            "SecurityGroupRuleId": "sgr-084a2eabc256d9a1f",
            "GroupId": "sg-092411d06995e5f5a",
            "GroupOwnerId": "822653642785",
            "IsEgress": false,
            "IpProtocol": "tcp",
            "FromPort": 988,
            "ToPort": 988,
            "CidrIpv4": "192.168.0.0/16"
        }
    ]
}
{
    "FileSystem": {
        "OwnerId": "822653642785",
        "CreationTime": "2024-06-27T19:09:02.652000+00:00",
        "FileSystemId": "fs-035b89918f4baec54",
        "FileSystemType": "LUSTRE",
        "Lifecycle": "CREATING",
        "StorageCapacity": 1200,
        "StorageType": "SSD",
        "VpcId": "vpc-0aae3c8142babd264",
        "SubnetIds": [
            "subnet-0e3f98d53dfea0ec9"
        ],
        "DNSName": "fs-035b89918f4baec54.fsx.us-east-2.amazonaws.com",
        "KmsKeyId": "arn:aws:kms:us-east-2:822653642785:key/ee0436eb-8d54-46bf-98d2-02504de8366a",
        "ResourceARN": "arn:aws:fsx:us-east-2:822653642785:file-system/fs-035b89918f4baec54",
        "Tags": [
            {
                "Key": "Name",
                "Value": "Lustre-pilotblusky"
            }
        ],
        "LustreConfiguration": {
            "WeeklyMaintenanceStartTime": "1:03:00",
            "DeploymentType": "PERSISTENT_1",
            "PerUnitStorageThroughput": 200,
            "MountName": "oxzx5bev",
            "CopyTagsToBackups": false,
            "DataCompressionType": "NONE",
            "LogConfiguration": {
                "Level": "DISABLED"
            }
        },
        "FileSystemTypeVersion": "2.10"
    }
}
Added new context arn:aws:eks:us-east-2:822653642785:cluster/pilotblusky-EKS to /home/ec2-user/.kube/config
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  3055  100  3055    0     0  15570      0 --:--:-- --:--:-- --:--:-- 15507
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  3075  100  3075    0     0  19281      0 --:--:-- --:--:-- --:--:-- 19339
Creating service account ebs-csi-controller-sa with permissions for ebs-csi driver...
Creating an IAM OIDC provider for EKS cluster pilotblusky-EKS
2024-06-27 19:09:08 [ℹ]  will create IAM Open ID Connect provider for cluster "pilotblusky-EKS" in "us-east-2"
2024-06-27 19:09:08 [✔]  created IAM Open ID Connect provider for cluster "pilotblusky-EKS" in "us-east-2"
Creating service account ebs-csi-controller-sa
serviceaccount/ebs-csi-controller-sa created
serviceaccount/ebs-csi-controller-sa labeled
serviceaccount/ebs-csi-controller-sa annotated
Creating policy pilotblusky-eks-ebs-policy
{
    "Policy": {
        "PolicyName": "pilotblusky-eks-ebs-policy",
        "PolicyId": "ANPA37CP2NAQ6GKGH53YD",
        "Arn": "arn:aws:iam::822653642785:policy/pilotblusky-eks-ebs-policy",
        "Path": "/",
        "DefaultVersionId": "v1",
        "AttachmentCount": 0,
        "PermissionsBoundaryUsageCount": 0,
        "IsAttachable": true,
        "CreateDate": "2024-06-27T19:09:12+00:00",
        "UpdateDate": "2024-06-27T19:09:12+00:00"
    }
}
Creating role pilotblusky-eks-ebs-role
{
    "Role": {
        "Path": "/",
        "RoleName": "pilotblusky-eks-ebs-role",
        "RoleId": "AROA37CP2NAQ4ZOC5SBOI",
        "Arn": "arn:aws:iam::822653642785:role/pilotblusky-eks-ebs-role",
        "CreateDate": "2024-06-27T19:09:12+00:00",
        "AssumeRolePolicyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Principal": {
                        "Federated": "arn:aws:iam::822653642785:oidc-provider/oidc.eks.us-east-2.amazonaws.com/id/465A67AC30CD7524216DE00AF93C8C12"
                    },
                    "Action": "sts:AssumeRoleWithWebIdentity",
                    "Condition": {
                        "StringEquals": {
                            "oidc.eks.us-east-2.amazonaws.com/id/465A67AC30CD7524216DE00AF93C8C12:aud": "sts.amazonaws.com"
                        }
                    }
                }
            ]
        }
    }
}
Attaching policy ARN arn:aws:iam::822653642785:policy/pilotblusky-eks-ebs-policy to role pilotblusky-eks-ebs-role
Annotating service account ebs-csi-controller-sa with role arn arn:aws:iam::822653642785:role/pilotblusky-eks-ebs-role
serviceaccount/ebs-csi-controller-sa annotated
Done.
ssh_key path is: ~/pilotblusky-KeyPair.pem
kubernetes_cluster_name: pilotblusky-EKS
kubernetes_cluster_location: us-east-2
kubernetes_cloud_provider: aws
fsx_dns_name is: fs-035b89918f4baec54.fsx.us-east-2.amazonaws.com
fsx_mount_name is: oxzx5bev






aws  awscliv2.zip  cloud_config.yaml  cluster_config.yaml  config.yaml  iam_policy.json  installer  openshift_config.yaml  single_config.yaml  sisense-installer  sisense-installer.log  sisense.sh
[ec2-user@ip-172-31-1-133 sisense-L2024.1.0.355]$ vi cloud_config.yaml
[ec2-user@ip-172-31-1-133 sisense-L2024.1.0.355]$ ./sisense.sh cloud_config.yaml
[2024-06-27 19:31:08] Preparing System ...
[2024-06-27 19:31:08] Linux user: ec2-user
[2024-06-27 19:31:08] Validating Sudo permissions for user ec2-user ...
[2024-06-27 19:31:08] User ec2-user has sufficient sudo permissions
[2024-06-27 19:31:08] Detecting Host OS  ...
[2024-06-27 19:31:08] OS: Amazon Linux, Version: 2023
[2024-06-27 19:31:08] Validating OS and its version
[2024-06-27 19:31:09] Validating that namespace name is all lower case ...
[2024-06-27 19:31:09] Using private key path /home/ec2-user/pilotblusky-KeyPair.pem for Ansible Installation...
[2024-06-27 19:31:09] Verifying Python packages exist ...
[2024-06-27 19:31:21] Ensuring sisense main directory /opt/sisense exist



Validating connection to dl.fedoraproject.org on port 443 ... [OK]
Validating connection to docker.io on port 443 ... [OK]
Validating connection to pypi.org on port 443 ... [OK]
Validating connection to github.com on port 443 ... [OK]
Validating connection to auth.cloud.sisense.com on port 443 ... [OK]
Validating connection to bitbucket.org on port 443 ... [OK]
Validating connection to download.docker.com on port 443 ... [OK]
Validating connection to github.com on port 443 ... [OK]
Validating connection to gcr.io on port 443 ... [OK]
Validating connection to kubernetes.io on port 443 ... [OK]
Validating connection to l.sisense.com on port 443 ... [OK]
Validating connection to ppa.launchpad.net on port 443 ... [OK]
Validating connection to quay.io on port 443 ... [OK]
Validating connection to registry-1.docker.io on port 443 ... [OK]
Validating connection to storage.googleapis.com on port 443 ... [OK]
Validating connection to mirror.centos.org on port 80 ... [OK]

The following Configuration will be delegated to Sisense Installation, Please confirm:
{
  "k8s_nodes": [
    {
      "node": "ip-192-168-115-122.us-east-2.compute.internal",
      "roles": "application, query"
    },
    {
      "node": "ip-192-168-131-152.us-east-2.compute.internal",
      "roles": "application, query"
    },
    {
      "node": "ip-192-168-169-160.us-east-2.compute.internal",
      "roles": "build"
    }
  ],
  "deployment_size": "large",
  "cluster_visibility": true,
  "offline_installer": false,
  "private_docker_registry": false,
  "update": false,
  "notify_on_upgrade": true,
  "enable_widget_deltas": false,
  "is_kubernetes_cloud": true,
  "kubernetes_cluster_name": "pilotblusky-EKS",
  "kubernetes_cluster_location": "us-east-2",
  "kubernetes_cloud_provider": "aws",
  "cloud_load_balancer": false,
  "cloud_load_balancer_internal": false,
  "cloud_auto_scaler": false,
  "high_availability": true,
  "application_dns_name": "https://pilot.bluskyreporting.com",
  "linux_user": "ec2-user",
  "ssh_key": "/home/ec2-user/pilotblusky-KeyPair.pem",
  "run_as_user": 1000,
  "run_as_group": 1000,
  "fs_group": 1000,
  "storage_type": "",
  "nfs_server": "",
  "nfs_path": "",
  "efs_file_system_id": "",
  "efs_aws_region": "",
  "fsx_dns_name": "fs-035b89918f4baec54.fsx.us-east-2.amazonaws.com",
  "fsx_mount_name": "oxzx5bev/",
  "sisense_disk_size": 70,
  "mongodb_disk_size": 20,
  "zookeeper_disk_size": 2,
  "timezone": "UTC",
  "namespace_name": "kube-system",
  "gateway_port": 30845,
  "is_ssl": false,
  "ssl_key_path": "",
  "ssl_cer_path": "",
  "internal_monitoring": true,
  "external_monitoring": true,
  "uninstall_cluster": false,
  "uninstall_sisense": false,
  "remove_user_data": false
}
Do you wish to install Sisense L2024.1.0.355 (y/n)? y
[2024-06-27 19:31:41] Getting binaries kubectl (v1.27.10) and helm (v3.12.3)
[2024-06-27 19:31:41] Downloading them from the internet
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   138  100   138    0     0   1722      0 --:--:-- --:--:-- --:--:--  1725
100 47.0M  100 47.0M    0     0  83.3M      0 --:--:-- --:--:-- --:--:--  112M
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 15.2M  100 15.2M    0     0  56.0M      0 --:--:-- --:--:-- --:--:-- 56.1M
linux-amd64/helm
[2024-06-27 19:31:43] Helm plugin mapkubeapis already installed.
[2024-06-27 19:31:43] Installing bash-completion
Last metadata expiration check: 1 day, 2:48:27 ago on Wed Jun 26 16:43:17 2024.
Package bash-completion-1:2.11-2.amzn2023.0.2.noarch is already installed.
Dependencies resolved.
Nothing to do.
Complete!
[2024-06-27 19:31:44] Adding kubectl and helm auto completion
source <(kubectl completion bash) 2>/dev/null
source <(helm completion bash) 2>/dev/null
[2024-06-27 19:31:44] Generating overriding params file /tmp/sisense/overrided_params.yaml
[2024-06-27 19:31:44] INFO: Getting Kubernetes Cloud Provider and location ...
[2024-06-27 19:31:47] INFO: Configuration Completed
[2024-06-27 19:31:48] Single | Deploy Manual StorageClass
storageclass.storage.k8s.io/manual created
[2024-06-27 19:31:49] Generating overriding params file /tmp/sisense/overrided_params.yaml
[2024-06-27 19:31:49] INFO: Getting Kubernetes Cloud Provider and location ...
[2024-06-27 19:31:53] INFO: Configuration Completed
[2024-06-27 19:31:56] Validating node ip-192-168-115-122.us-east-2.compute.internal pods capacity
[2024-06-27 19:31:56] Node ip-192-168-115-122.us-east-2.compute.internal meets with the minimum requirements for pod capacity (minimum: 58, current: 58)
[2024-06-27 19:31:56] Validating node ip-192-168-131-152.us-east-2.compute.internal pods capacity
[2024-06-27 19:31:56] Node ip-192-168-131-152.us-east-2.compute.internal meets with the minimum requirements for pod capacity (minimum: 58, current: 58)
[2024-06-27 19:31:57] Validating node ip-192-168-169-160.us-east-2.compute.internal pods capacity
[2024-06-27 19:31:57] Node ip-192-168-169-160.us-east-2.compute.internal meets with the minimum requirements for pod capacity (minimum: 58, current: 58)
[2024-06-27 19:31:57] Adding single label to node ip-192-168-115-122.us-east-2.compute.internal
node/ip-192-168-115-122.us-east-2.compute.internal labeled
[2024-06-27 19:32:02] Getting Sisense extra values
[2024-06-27 19:32:02] Generating Sisense values file
[2024-06-27 19:32:02] Evaluting template file installer/07_sisense_installation/templates/sisense-values.yaml.j2 into /opt/sisense/config/umbrella-chart/kube-system-values.yaml
[2024-06-27 19:32:03] Getting Prometheus extra values
[2024-06-27 19:32:03] Generating Kube Prometheus Stack values file
[2024-06-27 19:32:03] Evaluting template file installer/07_sisense_installation/templates/kube-prometheus-stack-values.yaml.j2 into /opt/sisense/config/logging-monitoring/kube-prometheus-stack-values.yaml
[2024-06-27 19:32:04] Generating Alertmanager PV file
[2024-06-27 19:32:04] Evaluting template file installer/07_sisense_installation/templates/alertmanager-pv.yaml.j2 into /opt/sisense/config/logging-monitoring/alertmanager-pv.yaml
[2024-06-27 19:32:05] Generating Prometheus PV file
[2024-06-27 19:32:05] Evaluting template file installer/07_sisense_installation/templates/prometheus-pv.yaml.j2 into /opt/sisense/config/logging-monitoring/prometheus-pv.yaml
[2024-06-27 19:32:06] Deleting cAdvisor (ignoring not found)
[2024-06-27 19:32:07] Getting Logging Monitoring extra values
[2024-06-27 19:32:07] Generating Logging Monitoring values file
[2024-06-27 19:32:07] Evaluting template file installer/07_sisense_installation/templates/logging-monitoring-values.yaml.j2 into /opt/sisense/config/logging-monitoring/logmon-values.yaml
[2024-06-27 19:32:08] Getting Cluster Metrics extra values
[2024-06-27 19:32:09] Generating Cluster Metrics values file
[2024-06-27 19:32:09] Evaluting template file installer/07_sisense_installation/templates/cluster-metrics-values.yaml.j2 into /opt/sisense/config/logging-monitoring/cluster-metrics-values.yaml
[2024-06-27 19:32:11] Getting ALB Controller extra values
[2024-06-27 19:32:11] Generating ALB Controller values file
[2024-06-27 19:32:11] Evaluting template file installer/07_sisense_installation/templates/alb-controller-values.yaml.j2 into /opt/sisense/config/umbrella-chart/alb-controller-values.yaml
[2024-06-27 19:32:12] Generating Helmfile file
[2024-06-27 19:32:12] Evaluting template file installer/07_sisense_installation/templates/helmfile.yaml.j2 into /opt/sisense/config/umbrella-chart/helmfile.yaml
[2024-06-27 19:32:12] Deploying Sisense using helmfile with file /opt/sisense/config/umbrella-chart/helmfile.yaml
[2024-06-27 19:32:12] Deploying all Helm charts using Helmfile file /opt/sisense/config/umbrella-chart/helmfile.yaml
Upgrading release=sisense-prom-operator, chart=/home/ec2-user/sisense-L2024.1.0.355/installer/07_sisense_installation/files/kube-prometheus-stack-L2024.1.0.355.tgz
Upgrading release=aws-load-balancer-controller, chart=/home/ec2-user/sisense-L2024.1.0.355/installer/07_sisense_installation/files/aws-load-balancer-controller-1.4.3.tgz
Release "aws-load-balancer-controller" does not exist. Installing it now.
NAME: aws-load-balancer-controller
LAST DEPLOYED: Thu Jun 27 19:32:16 2024
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
AWS Load Balancer controller installed!

Listing releases matching ^aws-load-balancer-controller$
aws-load-balancer-controller  kube-system 1        2024-06-27 19:32:16.673862756 +0000 UTC      deployed        aws-load-balancer-controller-1.4.3      v2.4.2

Release "sisense-prom-operator" does not exist. Installing it now.
NAME: sisense-prom-operator
LAST DEPLOYED: Thu Jun 27 19:32:17 2024
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
NOTES:
kube-prometheus-stack has been installed. Check its status by running:
  kubectl --namespace monitoring get pods -l "release=sisense-prom-operator"

Visit https://github.com/prometheus-operator/kube-prometheus for instructions on how to create & configure Alertmanager and Prometheus instances using the Operator.

Listing releases matching ^sisense-prom-operator$
sisense-prom-operator  monitoring 1         2024-06-27 19:32:17.996804862 +0000 UTC     deployed        kube-prometheus-stack-2024.1.0355       v0.72.0


hook[postsync] logs | persistentvolume/alertmanager-db created
hook[postsync] logs |

hook[postsync] logs | persistentvolume/prometheus-db-prometheus-0 created
hook[postsync] logs |
Upgrading release=kube-system, chart=/home/ec2-user/sisense-L2024.1.0.355/installer/07_sisense_installation/files/sisense-L2024.1.0.355.tgz
Upgrading release=cluster-metrics, chart=/home/ec2-user/sisense-L2024.1.0.355/installer/07_sisense_installation/files/cluster-metrics-L2024.1.0.355.tgz
Release "cluster-metrics" does not exist. Installing it now.
NAME: cluster-metrics
LAST DEPLOYED: Thu Jun 27 19:32:48 2024
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
TEST SUITE: None

Listing releases matching ^cluster-metrics$
cluster-metrics monitoring 1        2024-06-27 19:32:48.028757588 +0000 UTC deployed    cluster-metrics-2024.1.0355     1.0

Release "kube-system" does not exist. Installing it now.


UPDATED RELEASES:
NAME                           CHART                                                                                                                  VERSION       DURATION
aws-load-balancer-controller   /home/ec2-user/sisense-L2024.1.0.355/installer/07_sisense_installation/files/aws-load-balancer-controller-1.4.3.tgz    1.4.3               9s
sisense-prom-operator          /home/ec2-user/sisense-L2024.1.0.355/installer/07_sisense_installation/files/kube-prometheus-stack-L2024.1.0.355.tgz   2024.1.0355        34s
cluster-metrics                /home/ec2-user/sisense-L2024.1.0.355/installer/07_sisense_installation/files/cluster-metrics-L2024.1.0.355.tgz         2024.1.0355         5s


FAILED RELEASES:
NAME          CHART                                                                                                    VERSION   DURATION
kube-system   /home/ec2-user/sisense-L2024.1.0.355/installer/07_sisense_installation/files/sisense-L2024.1.0.355.tgz                  22s

in /opt/sisense/config/umbrella-chart/helmfile.yaml: failed processing release kube-system: command "/usr/local/bin/helm" exited with non-zero status:

PATH:
  /usr/local/bin/helm

ARGS:
  0: helm (4 bytes)
  1: upgrade (7 bytes)
  2: --install (9 bytes)
  3: kube-system (11 bytes)
  4: /home/ec2-user/sisense-L2024.1.0.355/installer/07_sisense_installation/files/sisense-L2024.1.0.355.tgz (102 bytes)
  5: --create-namespace (18 bytes)
  6: --namespace (11 bytes)
  7: kube-system (11 bytes)
  8: --values (8 bytes)
  9: /tmp/helmfile3511981587/kube-system-kube-system-values-664598dcb5 (65 bytes)
  10: --values (8 bytes)
  11: /tmp/helmfile2017665039/kube-system-kube-system-values-5d447766b8 (65 bytes)
  12: --values (8 bytes)
  13: /tmp/helmfile78220084/kube-system-kube-system-values-7dc58bfd (61 bytes)
  14: --reset-values (14 bytes)
  15: --history-max (13 bytes)
  16: 0 (1 bytes)

ERROR:
  exit status 1

EXIT STATUS
  1

STDERR:
  coalesce.go:175: warning: skipped value for zookeeper.configuration: Not a table.
  coalesce.go:175: warning: skipped value for rabbitmq.configuration: Not a table.
  coalesce.go:175: warning: skipped value for rabbitmq.plugins: Not a table.
  coalesce.go:175: warning: skipped value for mongodb.configuration: Not a table.
  W0627 19:33:07.895137  552291 warnings.go:70] spec.template.spec.containers[0].resources.limits[memory]: fractional byte value "1288490188800m" is invalid, must be an integer
  W0627 19:33:08.138918  552291 warnings.go:70] annotation "kubernetes.io/ingress.class" is deprecated, please use 'spec.ingressClassName' instead
  Error: 1 error occurred:
     * Internal error occurred: failed calling webhook "vingress.elbv2.k8s.aws": failed to call webhook: Post "https://aws-load-balancer-webhook-service.kube-system.svc:443/validate-networking-v1-ingress?timeout=10s": no endpoints available for service "aws-load-balancer-webhook-service"

COMBINED OUTPUT:
  Release "kube-system" does not exist. Installing it now.
  coalesce.go:175: warning: skipped value for zookeeper.configuration: Not a table.
  coalesce.go:175: warning: skipped value for rabbitmq.configuration: Not a table.
  coalesce.go:175: warning: skipped value for rabbitmq.plugins: Not a table.
  coalesce.go:175: warning: skipped value for mongodb.configuration: Not a table.
  W0627 19:33:07.895137  552291 warnings.go:70] spec.template.spec.containers[0].resources.limits[memory]: fractional byte value "1288490188800m" is invalid, must be an integer
  W0627 19:33:08.138918  552291 warnings.go:70] annotation "kubernetes.io/ingress.class" is deprecated, please use 'spec.ingressClassName' instead
  Error: 1 error occurred:
     * Internal error occurred: failed calling webhook "vingress.elbv2.k8s.aws": failed to call webhook: Post "https://aws-load-balancer-webhook-service.kube-system.svc:443/validate-networking-v1-ingress?timeout=10s": no endpoints available for service "aws-load-balancer-webhook-service"
[2024-06-27 19:33:08] ** Error occurred during Deploying all Helm charts using Helmfile file /opt/sisense/config/umbrella-chart/helmfile.yaml section **
[2024-06-27 19:33:08] ** Exiting Installation ... **

 

1 ACCEPTED SOLUTION

jomel2013
7 - Data Storage

Hello DRay,

This has already been resolved. The issue was with the script.

Regards


4 REPLIES

DRay
Community Team Member

Hello @jomel2013.

Thank you for reaching out! 

The error appears to be caused by a failed webhook call that is required during the provisioning process:
Error: 1 error occurred: * Internal error occurred: failed calling webhook "vingress.elbv2.k8s.aws": failed to call webhook: Post "https://aws-load-balancer-webhook-service.kube-system.svc:443/validate-networking-v1-ingress?timeout...": no endpoints available for service "aws-load-balancer-webhook-service"
This error indicates that the Kubernetes service responsible for handling this webhook, aws-load-balancer-webhook-service, is not available or not responding. Here are a few steps to troubleshoot and resolve this issue:

1. Check Service Status:
- Use kubectl get svc -n kube-system to check whether the aws-load-balancer-webhook-service exists in the kube-system namespace (a consolidated diagnostic sketch of this and the following checks appears after this list).
- If the service is not listed, something likely went wrong during setup of the AWS Load Balancer Controller; verify the installation steps or reinstall it.


2. Review Controller Logs:
- Check the logs of the AWS Load Balancer Controller to identify any errors or warnings that could indicate why the webhook service is not functional. Use the command:
kubectl logs -n kube-system -l app.kubernetes.io/name=aws-load-balancer-controller

3. Check Webhook Configurations:
- Verify the MutatingWebhookConfiguration and ValidatingWebhookConfiguration using:
kubectl get mutatingwebhookconfigurations,validatingwebhookconfigurations

- Look for entries related to the aws-load-balancer-controller and check if they reference the correct service and namespace.


4. Network Policies and Firewall Rules:
- Ensure that network policies or firewall rules do not block communication to and from the webhook service.

5. Endpoint Check:
- Sometimes, the endpoints might not be set correctly for services. Verify the endpoints using:

kubectl get endpoints -n kube-system aws-load-balancer-webhook-service

- If no endpoints are available, this suggests an issue with the service or related pods not registering themselves correctly.


6. Reinstall AWS Load Balancer Controller:
- If the above steps do not resolve the issue, consider reinstalling the AWS Load Balancer Controller (a hedged reinstall sketch also appears after this list). Ensure that the Kubernetes cluster role and role bindings are correctly configured per the AWS documentation.


7. Kubernetes and EKS Versions:
- Since you are using Kubernetes version 1.28, ensure that all associated components and controllers are compatible with this version, as incompatibilities can leave services not functioning correctly.

After resolving the issue with the webhook service, attempt the Sisense installation again. If you continue to encounter issues, consider reaching out to AWS support for a more detailed analysis of your EKS environment, or engage Sisense support if the issue appears to be related to application configuration or bugs in the deployment scripts.
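To make steps 1, 2, 3, and 5 concrete, here is a minimal shell sketch of those checks. It assumes the controller was installed into the kube-system namespace under the default name aws-load-balancer-controller, which is what your log suggests; adjust the names if your environment differs.

#!/usr/bin/env bash
# Hedged diagnostic sketch for "no endpoints available for service
# aws-load-balancer-webhook-service". Names below are assumptions based on
# the defaults of the AWS Load Balancer Controller chart.
set -euo pipefail

NS=kube-system
SVC=aws-load-balancer-webhook-service

# Step 1: is the webhook service present?
kubectl get svc -n "$NS" "$SVC"

# Step 2: are the controller pods running, and what do their logs say?
kubectl get pods -n "$NS" -l app.kubernetes.io/name=aws-load-balancer-controller
kubectl logs -n "$NS" -l app.kubernetes.io/name=aws-load-balancer-controller --tail=100

# Step 3: do the webhook configurations reference that service and namespace?
kubectl get mutatingwebhookconfigurations,validatingwebhookconfigurations | grep -i elbv2 || true

# Step 5: does the service have endpoints yet? The error in your log means
# this list was empty when the Sisense chart tried to create its Ingress.
kubectl get endpoints -n "$NS" "$SVC"

# Wait for the controller deployment to become ready before retrying
# ./sisense.sh cloud_config.yaml (the deployment name is an assumption).
kubectl rollout status deployment/aws-load-balancer-controller -n "$NS" --timeout=180s

Worth noting: in your log the controller release was installed only a few seconds before the Sisense chart, so the most likely cause is simply that the webhook pods were not yet ready; waiting for the rollout to finish and re-running the installer may be enough.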
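If the controller itself turns out to be unhealthy (step 6), a hedged reinstall sketch follows. The release name, namespace, packaged chart path, and values file are taken from your installation log; the public eks/aws-load-balancer-controller chart in the commented alternative is an assumption, since the Sisense installer normally deploys its own bundled copy of the chart.

# Remove the existing release, then reinstall the controller.
helm uninstall aws-load-balancer-controller -n kube-system

# Re-run the packaged chart the installer used (paths from your log).
helm upgrade --install aws-load-balancer-controller \
  /home/ec2-user/sisense-L2024.1.0.355/installer/07_sisense_installation/files/aws-load-balancer-controller-1.4.3.tgz \
  -n kube-system \
  --values /opt/sisense/config/umbrella-chart/alb-controller-values.yaml

# Alternative (assumption): install from the public eks-charts repository.
# helm repo add eks https://aws.github.io/eks-charts
# helm upgrade --install aws-load-balancer-controller eks/aws-load-balancer-controller \
#   -n kube-system --set clusterName=pilotblusky-EKS

# Then wait for it and retry the Sisense installer.
kubectl rollout status deployment/aws-load-balancer-controller -n kube-system --timeout=180s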

I hope that helps. If you need any more help please let us know.

Have a great day!

David Raynor (DRay)

DRay
Community Team Member

Hi @jomel2013,

Was my previous post helpful? If you still have questions I recommend reaching out to support. They have the resources to dig into these logs, and loop in technical resources as needed.

David Raynor (DRay)

jomel2013
7 - Data Storage

Hello DRay,

This has already been resolved. The issue was with the script.

Regards

DRay
Community Team Member

Thank you for the update. I'm glad to hear you are up and running!

David Raynor (DRay)