AWS installation
This guide describes the full installation process of UltiHash in an AWS environment, including:
provisioning of an EKS cluster in a dedicated VPC
deployment of the essential Kubernetes controllers
installation of UltiHash on the EKS cluster
This guide outlines the recommended UltiHash setup for managing 10 TB of data. The UltiHash cluster is deployed on a single EC2 instance of type r8g.4xlarge with a network load balancer that routes traffic to it. The cluster uses gp3 volumes optimized for performance, ensuring efficient storage management. If you have other storage requirements, you may freely change the volume sizes in the configuration; for production you are free to select any EC2 instance type and EBS volume configuration based on your specific needs. The diagram below depicts the resources the Terraform scripts deploy in an AWS account.
Expected performance:
Write throughput: up to 200 MB/s
Read throughput: up to 1000 MB/s
Expected costs:
Hourly:
EC2 cost: 1.14 USD
EBS cost: 1.58 USD
Monthly:
EC2 cost: 829.98 USD
EBS cost: 1152.38 USD
List of billable AWS services:
mandatory:
EKS, EC2, S3, KMS
optional:
SQS, Eventbridge
Estimated amount of time to complete a deployment: ~45 minutes.
Prerequisites:
good knowledge of the following AWS services: IAM, VPC, EKS, EC2
high-level knowledge of Terraform
access to an AWS account
Warning: do not use the AWS account root user to provision and manage the deployed resources! Instead, create an IAM user with sufficient privileges to manage these AWS services: IAM, VPC, EKS, EC2.
The IAM permissions required to deploy and manage an UltiHash cluster are listed in IAM permissions required to deploy and manage an UH cluster.
Make sure your AWS account has sufficient limits before deploying the UltiHash cluster: Manage AWS service limits
Since the Terraform state for this setup has to be stored on S3, you need to provision a dedicated S3 bucket. Execute the following command, replacing the <bucket-name> and <aws-region> placeholders:
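A sketch of such a command with the AWS CLI, using the placeholders above (the versioning step is a common precaution for Terraform state, added here as an assumption):

```shell
# Create the state bucket (outside us-east-1 a LocationConstraint is required;
# for us-east-1, omit the --create-bucket-configuration flag)
aws s3api create-bucket \
  --bucket <bucket-name> \
  --region <aws-region> \
  --create-bucket-configuration LocationConstraint=<aws-region>

# Optional: enable versioning to protect the Terraform state history
aws s3api put-bucket-versioning \
  --bucket <bucket-name> \
  --versioning-configuration Status=Enabled
```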
The S3 bucket will be created with default encryption of type SSE-S3 (Amazon S3 managed keys) enabled.
Clone the repository by executing the command below:
Its code will be required later to set up UltiHash in the AWS environment.
Initialize and apply the Terraform project
Wait until the installation is completed.
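A typical init/apply sequence looks like the following; the -var-file name is an assumption based on the config.tfvars file mentioned later in this guide:

```shell
terraform init                           # configures the S3 backend and downloads providers
terraform plan -var-file=config.tfvars   # review the planned resources first
terraform apply -var-file=config.tfvars  # confirm with "yes" when prompted
```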
Execute the following kubectl
command to check the available EKS cluster nodes:
The command should output the name of a single provisioned EC2 instance.
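A minimal check, assuming your kubeconfig already points at the new EKS cluster:

```shell
# Expect exactly one node in the output
kubectl get nodes -o wide
```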
Nginx Ingress
- exposes UltiHash outside of the EKS cluster with a Network Load Balancer.
Load Balancer Controller
- provisions a Network Load Balancer for the Nginx Ingress
controller.
Karpenter
- provisions EC2 instances on-demand to host UltiHash workloads.
Perform the following actions to deploy the Terraform project:
Initialize and apply the Terraform project
Wait until the installation is completed. A Network Load Balancer should be provisioned in the same region as the EKS cluster.
Initialize and apply the Terraform project
Wait until the installation is completed.
The UltiHash cluster is installed in the default Kubernetes namespace; use kubectl to see the deployed workloads:
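For example (the resource kinds shown are assumptions; adjust to whatever the Helm chart actually deploys):

```shell
kubectl get pods -n default
kubectl get statefulsets,services -n default
```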
To get access to the deployed UltiHash cluster, configure your AWS CLI/SDK with the UltiHash root credentials:
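A sketch using a dedicated CLI profile (the profile name and credential placeholders are assumptions):

```shell
# Replace the placeholders with the UltiHash root credentials
aws configure set aws_access_key_id <uh-root-access-key> --profile ultihash
aws configure set aws_secret_access_key <uh-root-secret-key> --profile ultihash
```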
To uninstall all previously deployed AWS resources, follow the steps below:
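In general, each Terraform project is destroyed in reverse order of installation; a sketch for one project (the -var-file name is an assumption):

```shell
# Run inside each Terraform project directory, last-installed first
terraform destroy -var-file=config.tfvars
```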
Whenever interacting with AWS cloud, we strongly encourage you to follow the principle of least privilege. This means permissions should be limited to the minimum actions and resources required for each role or service to function.
Why this matters:
Reduces the attack surface and limits the impact of compromised credentials or components.
Prevents unintentional changes or access to unauthorized resources.
Aligns with AWS security best practices and the Well-Architected Framework.
Enables better auditing, control, and compliance with security standards.
The IAM user or role used to provision and manage the UH cluster in an AWS account should have the following IAM permissions. The permissions below are applied to all resources; after a successful deployment they can be narrowed to specific resource ARNs for improved security.
S3 permissions (required to manage Terraform states in S3):
EventBridge permissions (required by Karpenter to manage EC2 interruption events):
SQS permissions (required by Karpenter to manage EC2 interruption events):
KMS permissions (required by EKS cluster to manage Kubernetes secrets):
EKS permissions (required to manage EKS cluster):
IAM permissions (required by EKS cluster and EC2 instances):
EC2 permissions (required to manage EC2 instances):
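As an illustration of narrowing permissions to specific resource ARNs, a minimal policy for the Terraform state bucket might look like the following (the action list follows the standard Terraform S3 backend requirements; the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::<bucket-name>"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::<bucket-name>/*"
    }
  ]
}
```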
When deploying an UltiHash cluster on Amazon EKS, it is important to ensure that your AWS account has sufficient EC2 vCPU-based instance limits in the selected region. Amazon EKS worker nodes are backed by EC2 instances, and if vCPU quotas are too low, the cluster may fail to scale or provision nodes, causing deployment failures.
The relevant quota is: Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances. Default limit: 5 vCPUs per region.
If the EKS cluster attempts to launch EC2 instances exceeding your vCPU quota, node provisioning will fail, and workloads may not start or scale properly. If you need more vCPUs in your region than the quota provides, we recommend increasing the quota proactively before scaling out your UltiHash cluster.
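The current value can also be checked from the CLI; the quota code L-1216C47A is believed to correspond to this quota, but verify it in the Service Quotas console:

```shell
aws service-quotas get-service-quota \
  --service-code ec2 \
  --quota-code L-1216C47A \
  --region <aws-region> \
  --query 'Quota.Value'
```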
The following tools are required:
terraform
kubectl of version 1.30
AWS CLI
personal credentials found on the
Since UltiHash has to be deployed on a Kubernetes cluster, you need to provision an EKS cluster on AWS. For this purpose use . The project deploys a dedicated VPC and provisions there an EKS cluster with a single c5.large machine to host the essential Kubernetes controllers.
Note: by default the EKS cluster is provisioned with a public endpoint that is reachable over the Internet. If the EKS cluster endpoint should be private, change the parameter from true to false.
Once the repository is cloned, perform the following actions to deploy the Terraform project:
Update the bucket name and its region in the with the ones done at .
Update the configuration in . The only required change is the parameter cluster_admins
- specify the list of ARNs of IAM users and/or IAM roles that need access to the provisioned EKS cluster. Other parameters can be left intact.
Make sure access to the EKS cluster has been granted to the required IAM users and roles. To check that, download the kubeconfig for the EKS cluster by executing the command below. Replace the <cluster-name>
(by default ultihash-test
) and the <aws-region>
(by default eu-central-1
) with the corresponding values defined in
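A sketch of the kubeconfig download and a quick access check, using the documented default values:

```shell
aws eks update-kubeconfig --name ultihash-test --region eu-central-1
kubectl auth can-i '*' '*'   # rough check that your IAM identity has cluster access
```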
The next step is the installation of the essential Kubernetes controllers on the provisioned EKS cluster. For this purpose use . The project deploys the following Kubernetes controllers on the EKS cluster:
EBS CSI Driver
- CSI controller that automatically provisions persistent volumes for the UltiHash workloads. The volumes are based on the gp3
storage class and optimized in terms of performance. The default storage class provisions unencrypted EBS volumes. To provision encrypted EBS volumes, create a new storage class like .
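A sketch of such an encrypted storage class (the class name is an assumption; the provisioner and parameters follow the AWS EBS CSI driver conventions):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3-encrypted
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  encrypted: "true"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```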
Update the bucket name and its region in the with the ones done at .
Update the configuration in if required. The helm values for the deployed controllers are found . It is not recommended to change any of these configurations; the only parameter that should be selected in advance is the Network Load Balancer type
(internal
or internet-facing
) in this .
In case it is required to change the instance type for the UltiHash services, update it in the following .
The last step is installation of UltiHash. For this purpose use . Perform the following actions to deploy the Terraform project:
Update the bucket name
and its region
in the with the ones done at .
Update the configuration in with the credentials obtained from your account on . The credentials in the config.tfvars
are mocked. The helm values for UltiHash are found . Adjust the helm values to set your custom storage class if required.
Finally, access the UltiHash cluster using the AWS CLI/SDK with the domain name of the Network Load Balancer provisioned at :
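For example, listing buckets through the load balancer endpoint (the domain name placeholder and profile name are assumptions; use the UltiHash root credentials configured earlier):

```shell
aws s3 ls \
  --endpoint-url https://<nlb-domain-name> \
  --profile ultihash
```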
More information on this topic can be found under
Check your current Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances quota on and, if it is not enough, create a quota increase request by clicking the Request increase at account level button in the top right corner.