Use the S3-compatible API

How to interact with your UltiHash cluster using the S3-compatible API

UltiHash offers an S3-compatible API that lets developers interact with storage clusters using the familiar commands and libraries built for Amazon S3. The API was designed this way to maximize integration flexibility across the applications and services already in your stack, removing the need for complex reconfiguration or middleware.

Achieving S3 compatibility

S3 compatibility can be achieved in different ways depending on your environment. For example, Python developers typically use the boto3 library, which is part of the AWS SDK for Python and provides a straightforward interface for interacting with UltiHash as if it were S3. In contrast, data processing tools like PySpark don't use boto3; instead, they rely on connectors such as s3a, which is part of the Hadoop ecosystem and optimized for distributed data processing. This distinction matters: boto3 is well suited to scripting and general-purpose workloads, while s3a is better suited to large-scale data operations in frameworks like Spark.
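
As a quick illustration, here is a minimal boto3 sketch; the endpoint URL and credentials are placeholders for your own deployment values, not real defaults:

import boto3

# Placeholders: substitute your cluster endpoint and access keys.
s3 = boto3.client(
    "s3",
    endpoint_url="https://your-ultihash-endpoint",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Standard S3 calls then run against the UltiHash cluster.
print(s3.list_buckets()["Buckets"])

For Spark-based workloads, the s3a connector is configured on the Spark session instead. The sketch below assumes a hadoop-aws package matching your Spark/Hadoop version is already on the classpath; the exact endpoint format and whether path-style access is required depend on your deployment:

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("ultihash-example")
    # Placeholder endpoint and keys; fs.s3a.* are standard Hadoop settings.
    .config("spark.hadoop.fs.s3a.endpoint", "https://your-ultihash-endpoint")
    .config("spark.hadoop.fs.s3a.access.key", "YOUR_ACCESS_KEY")
    .config("spark.hadoop.fs.s3a.secret.key", "YOUR_SECRET_KEY")
    .config("spark.hadoop.fs.s3a.path.style.access", "true")
    .getOrCreate()
)

df = spark.read.parquet("s3a://your-bucket-name/path/to/data/")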

Generally, you can use any S3-compliant SDK to interact with UltiHash. The AWS SDKs offer extensive support across programming languages including Python, Java, and Node.js; because they are well documented and actively maintained, they are the right choice for most developers. To explore SDKs for other languages and environments, see AWS Developer Tools, which offers comprehensive support for integrating with S3-compatible APIs, including UltiHash.

Essential API operations

  • CreateBucket: Create new buckets in your UltiHash storage.

aws s3api create-bucket --bucket your-bucket-name --endpoint-url https://your-ultihash-endpoint

  • PutObject: Upload files to your UltiHash buckets.

aws s3 cp local-file.txt s3://your-bucket-name/ --endpoint-url https://your-ultihash-endpoint

  • GetObject: Retrieve files from your UltiHash storage.

aws s3 cp s3://your-bucket-name/file.txt local-file.txt --endpoint-url https://your-ultihash-endpoint

You can find premade Python scripts for uploading and downloading data here.

  • ListObjectsV2: List the contents of your buckets.

aws s3api list-objects-v2 --bucket your-bucket-name --endpoint-url https://your-ultihash-endpoint

  • DeleteObject: Remove individual objects you no longer need.

aws s3api delete-object --bucket your-bucket-name --key your-object-key --endpoint-url https://your-ultihash-endpoint

To remove all objects in a bucket in a single operation, use a recursive delete:

aws s3 rm s3://your-bucket-name/ --recursive --endpoint-url https://your-ultihash-endpoint

  • DeleteBucket: Remove buckets you no longer need (a bucket must be empty before it can be deleted).

aws s3api delete-bucket --bucket your-bucket-name --endpoint-url https://your-ultihash-endpoint
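
For comparison with the CLI commands above, here is a sketch of the same essential operations using boto3. Bucket names, keys, file paths, and the endpoint URL are placeholders:

import boto3

s3 = boto3.client("s3", endpoint_url="https://your-ultihash-endpoint")

# CreateBucket
s3.create_bucket(Bucket="your-bucket-name")

# PutObject
s3.upload_file("local-file.txt", "your-bucket-name", "file.txt")

# GetObject
s3.download_file("your-bucket-name", "file.txt", "local-file.txt")

# ListObjectsV2
for obj in s3.list_objects_v2(Bucket="your-bucket-name").get("Contents", []):
    print(obj["Key"], obj["Size"])

# DeleteObject
s3.delete_object(Bucket="your-bucket-name", Key="file.txt")

# DeleteBucket (the bucket must be empty first)
s3.delete_bucket(Bucket="your-bucket-name")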

Full list of API operations

S3 Compatibility Layer

  • AbortMultipartUpload
  • CreateMultipartUpload
  • CompleteMultipartUpload
  • CopyObject
  • CreateBucket
  • DeleteBucket
  • DeleteBucketPolicy
  • DeleteObject
  • DeleteObjects
  • GetBucketPolicy
  • GetBucketVersioning
  • GetObject
  • HeadBucket
  • HeadObject
  • ListBuckets
  • ListMultipartUploads
  • ListObjects
  • ListObjectsV2
  • ListObjectVersions
  • PutBucketPolicy
  • PutBucketVersioning
  • PutObject
  • UploadPart
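
Several operations in this list cover multipart uploads (CreateMultipartUpload, UploadPart, CompleteMultipartUpload, AbortMultipartUpload). The following is a hedged sketch of how they fit together with boto3, using placeholder bucket and file names and following the usual S3 convention that every part except the last is at least 5 MiB:

import boto3

s3 = boto3.client("s3", endpoint_url="https://your-ultihash-endpoint")

# Start the multipart upload, send parts, then complete it.
mpu = s3.create_multipart_upload(Bucket="your-bucket-name", Key="big-file.bin")
parts = []
with open("big-file.bin", "rb") as f:
    part_number = 1
    while True:
        chunk = f.read(8 * 1024 * 1024)  # 8 MiB parts
        if not chunk:
            break
        resp = s3.upload_part(
            Bucket="your-bucket-name",
            Key="big-file.bin",
            PartNumber=part_number,
            UploadId=mpu["UploadId"],
            Body=chunk,
        )
        parts.append({"PartNumber": part_number, "ETag": resp["ETag"]})
        part_number += 1

s3.complete_multipart_upload(
    Bucket="your-bucket-name",
    Key="big-file.bin",
    UploadId=mpu["UploadId"],
    MultipartUpload={"Parts": parts},
)

If an upload fails partway, AbortMultipartUpload (s3.abort_multipart_upload) discards the parts already uploaded.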

IAM Compatibility Layer

  • CreateAccessKey
  • CreateUser
  • DeleteAccessKey
  • DeleteUser
  • DeleteUserPolicy
  • GetUserPolicy
  • ListUserPolicies
  • PutUserPolicy
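
The IAM-compatible operations can also be called through boto3's iam client. The sketch below assumes your deployment exposes the IAM layer at an endpoint you can pass as endpoint_url; the URL, user name, and policy JSON are placeholders, and the actual address and policy format depend on your cluster's configuration (see Manage users + access policies):

import boto3

iam = boto3.client(
    "iam",
    endpoint_url="https://your-ultihash-endpoint",  # placeholder address
    aws_access_key_id="ADMIN_ACCESS_KEY",
    aws_secret_access_key="ADMIN_SECRET_KEY",
)

# Create a user and an access key pair for it.
iam.create_user(UserName="analyst")
key = iam.create_access_key(UserName="analyst")["AccessKey"]
print(key["AccessKeyId"], key["SecretAccessKey"])

# Attach an inline policy limiting the user to a single bucket
# (AWS-style policy JSON is assumed here).
policy = """{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
    "Resource": ["arn:aws:s3:::your-bucket-name", "arn:aws:s3:::your-bucket-name/*"]
  }]
}"""
iam.put_user_policy(UserName="analyst", PolicyName="bucket-access", PolicyDocument=policy)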

For more details on available SDKs and language-specific guides, check out the AWS SDK hub.