Install Self-Hosted on-premises

How to set up UltiHash Self-Hosted on your local infrastructure with Kubernetes

On-premises environments remain vital for many organizations. Through its Kubernetes-native architecture, UltiHash supports easy scaling and load balancing, so systems can handle fluctuating workloads with minimal reconfiguration. This design keeps on-premises deployments flexible and customizable, allowing businesses to maintain control over their infrastructure.

This guide provides a detailed walkthrough for setting up an UltiHash cluster in a Kubernetes environment, whether on-premises or in the cloud. The process is divided into four main steps:

  1. Prerequisites: Gather the necessary credentials, tools, and environment configurations.

  2. Cluster setup: Configure your Kubernetes cluster, including creating namespaces and provisioning secrets.

  3. Helm installation: Deploy UltiHash using Helm, customizing the setup for your specific environment.

  4. Post-installation: Verify the installation.

System hardware requirements
  • Storage: NVMe SSDs are required for optimal disk performance.

  • Network: 10 Gbps interface minimum between nodes.

  • Kubernetes: Version 1.20+ with Nginx Ingress and a CSI Controller installed.

  • Containerization: Docker 19.03+ or Containerd 1.3+.

  • Helm: Version 3.x.

  • On-Premises: Minimum of 1 Kubernetes node with NVMe SSDs.

Resource needs will vary depending on the amount of data being stored and managed. For best performance, especially with larger datasets, it’s essential to provision additional resources accordingly.

Step 1: Prerequisites

Before you begin the installation, ensure you have the following:

  • Skills: Good working knowledge of Kubernetes, kubectl, and Helm.

  • UltiHash Account: Sign up at ultihash.io/signup and verify your email.

  • Credentials: After signing up on the UltiHash website, you will get the following credentials on your dashboard:

    • Registry login and password (referred to as registry_login and registry_password).

    • Customer ID (referred to as customer_id).

    • Access token (referred to as access_token).

    • Monitoring token (referred to as monitoring_token).

  • Kubernetes cluster:

    • Version: Ensure you have a Kubernetes cluster running version 1.20 or higher.

    • Controllers:

      • Ingress controller: Exposes the UltiHash cluster API endpoint outside the Kubernetes cluster.

      • CSI controller: Manages persistent volumes.

    • Note: You can use any Kubernetes version starting from 1.20, and any CSI controller that dynamically provisions and attaches persistent volumes. For optimal performance, use a CSI controller that imposes the least disk performance degradation.

  • Local environment:

    • kubectl: Ensure the Kubernetes command-line tool kubectl is installed and configured to access the cluster.

    • Helm: Install the Kubernetes package manager Helm (version 3.x).

Step 2: Cluster setup

  1. Namespace creation:

    • Create a Kubernetes namespace for the UltiHash installation:
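```
kubectl create namespace <namespace>
```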

    • Replace <namespace> with your desired namespace name.

  2. Secrets provisioning:

    • Registry credentials: Provision a secret in Kubernetes to store the UltiHash registry credentials:
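A sketch of a typical invocation is shown below; the secret name registry-credentials and the <registry_url> placeholder are illustrative, so use the registry host from your dashboard and the secret name your chart version expects.

```
# The secret name "registry-credentials" and <registry_url> are assumptions;
# use the registry host shown on your UltiHash dashboard and the secret name
# expected by the Helm chart.
kubectl create secret docker-registry registry-credentials \
  --namespace <namespace> \
  --docker-server=<registry_url> \
  --docker-username=<registry_login> \
  --docker-password=<registry_password>
```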

    • Replace <namespace> with the namespace name. Replace <registry_login> and <registry_password> with the corresponding values obtained from your dashboard on the UltiHash website.

    • Ultihash credentials and monitoring token: Create a secret in Kubernetes for the license key and monitoring token:
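A minimal sketch, assuming a generic secret with one key per credential; the secret name and key names are illustrative and should match what your chart version expects:

```
# The secret name "ultihash-credentials" and the key names are assumptions.
kubectl create secret generic ultihash-credentials \
  --namespace <namespace> \
  --from-literal=customer_id=<customer_id> \
  --from-literal=access_token=<access_token> \
  --from-literal=monitoring_token=<monitoring_token>
```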

    • Replace <namespace> with the namespace name. Replace <customer_id>, <access_token>, and <monitoring_token> with the corresponding values found on your UltiHash dashboard.

Step 3: Helm installation

  1. Helm chart deployment:

    • Log into the UltiHash registry with your registry_login and registry_password:
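For example, assuming an OCI-compatible registry, with <registry_url> being the host from your dashboard:

```
helm registry login <registry_url> \
  --username <registry_login> \
  --password <registry_password>
```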

    • Deploy the Helm chart with a specific release name and namespace:
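A sketch, assuming the chart is distributed via OCI; <chart_path> is a placeholder for the chart reference provided by UltiHash:

```
# <registry_url>/<chart_path> stands in for the chart reference
# provided by UltiHash.
helm install <release_name> oci://<registry_url>/<chart_path> \
  --namespace <namespace> \
  --values values.yaml
```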

    • Replace <release_name> and <namespace> with your chosen names. values.yaml should be configured as described below.

  2. Component configuration:

    • Customize the values.yaml file with the necessary configurations:

      • Storage class: Specify the storage class name created by your CSI controller.

      • Domain name: Enter a valid domain name for your UltiHash cluster.

      • Service replicas and storage size: Adjust the number of replicas and storage size for services like etcd, entrypoint, storage, and deduplicator based on your requirements.
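A values.yaml skeleton is sketched below. Only entrypoint.ingress.host is referenced elsewhere in this guide; the remaining keys and values are assumptions, so verify them against the chart defaults (helm show values) before deploying.

```
# Illustrative skeleton only; verify key names against the chart defaults.
global:
  storageClass: <storage_class>  # class created by your CSI controller
etcd:
  replicas: 3                    # assumption: adjust to your workload
entrypoint:
  replicas: 2                    # assumption
  ingress:
    host: <domain_name>          # domain name chosen for the UltiHash cluster
storage:
  replicas: 3                    # assumption
  storageSize: 500Gi             # assumption: capacity per replica
deduplicator:
  replicas: 2                    # assumption
```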

Step 4: Post-installation

  1. Verification:

    • After deployment, verify that all services are running correctly by checking the Kubernetes namespace:
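```
kubectl get pods --namespace <namespace>
```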

      Replace <namespace> with the namespace where the UltiHash cluster has been deployed.

    • Ensure that all pods are in the Running or Completed state with no errors.

  2. Get access to the UltiHash cluster:

    • Obtain the UltiHash root user credentials:
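A sketch only, assuming the chart stores the root credentials in a Kubernetes secret; the secret name below is an assumption, so list the secrets in the namespace to find the actual one.

```
# The secret name "<release_name>-root-credentials" is an assumption; run
# "kubectl get secrets --namespace <namespace>" to find the actual name.
# Secret values are base64-encoded; decode them with "base64 -d".
kubectl get secret <release_name>-root-credentials \
  --namespace <namespace> -o jsonpath='{.data}'
```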

    • Replace <release_name> and <namespace> with the Helm release name and namespace name, respectively.

    • Use the AWS CLI or an AWS SDK to interact with the UltiHash cluster:
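For example, assuming the root credentials from the previous step are exported as standard AWS environment variables; the bucket name is illustrative:

```
# Assumes AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are set to the
# UltiHash root credentials. "test-bucket" is an illustrative name.
aws s3 mb s3://test-bucket --endpoint-url <cluster-url>
aws s3 cp ./example.txt s3://test-bucket/ --endpoint-url <cluster-url>
aws s3 ls s3://test-bucket --endpoint-url <cluster-url>
```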

    • Replace <cluster-url> with either https://<domain_name> or http://<domain_name>, depending on whether the entrypoint.ingress object in your Helm values is configured with TLS. Here <domain_name> is the domain name chosen for the UltiHash cluster, as set in entrypoint.ingress.host.


Troubleshooting frequent issues

Helm chart install or upgrade failure

Symptoms:

  • helm install or helm upgrade hangs or returns an error

  • Application pods do not start

  • Helm status is stuck at pending-install or failed

Steps to resolve:

  • Inspect the Helm release status:
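```
helm status <release_name> --namespace <namespace>
```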

  • Check for resource creation errors or pending pods:
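```
kubectl get pods --namespace <namespace>
kubectl get events --namespace <namespace> --sort-by=.lastTimestamp
```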

  • Describe a failing pod to view events and errors:
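```
kubectl describe pod <pod_name> --namespace <namespace>
```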

  • Debug with Helm’s dry run mode:
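```
# Renders the release without applying it; use the same chart reference
# and values file as in Step 3.
helm upgrade --install <release_name> oci://<registry_url>/<chart_path> \
  --namespace <namespace> \
  --values values.yaml \
  --dry-run --debug
```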

  • After the issue has been found and fixed, proceed with the install or upgrade.

Recommendation: Always use --dry-run and --debug to validate changes before applying them in production.

Missing or incorrect values in values.yaml

Symptoms:

  • Helm fails with a rendering error

  • Application fails at runtime due to missing config (e.g., secrets, ports, env vars)

Steps to resolve:

  • Compare your values file with the chart defaults:
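```
# Export the chart defaults and compare them with your values file.
helm show values oci://<registry_url>/<chart_path> > default-values.yaml
diff default-values.yaml values.yaml
```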

  • Test the rendered templates locally:
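```
helm template <release_name> oci://<registry_url>/<chart_path> \
  --values values.yaml
```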

  • Reapply the corrected configuration:
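```
helm upgrade <release_name> oci://<registry_url>/<chart_path> \
  --namespace <namespace> \
  --values values.yaml
```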

Recommendation: Use a version-controlled values file and validate changes in a staging environment before rolling out to production.

Application pods stuck in CrashLoopBackOff or ImagePullBackOff

Purpose: Diagnose runtime pod failures due to misconfiguration or image issues.

Symptoms:

  • Pods keep restarting or cannot pull the container image

Steps to resolve:

  • Inspect the pod state:
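```
kubectl get pods --namespace <namespace>
```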

  • Check the logs of the failing pod:
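```
kubectl logs <pod_name> --namespace <namespace>
# For a pod that keeps restarting, inspect the previous container instance:
kubectl logs <pod_name> --namespace <namespace> --previous
```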

  • Correct the config causing failure, then upgrade:
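```
# Reapply the release after fixing the offending values.
helm upgrade <release_name> oci://<registry_url>/<chart_path> \
  --namespace <namespace> \
  --values values.yaml
```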

Recommendation: Ensure that image repositories are accessible and secrets for private registries are correctly configured in the cluster.
