S3 Gateway

Uploads

Uploading files to Shelby via the S3 Gateway

The S3 Gateway supports uploading files to Shelby storage using standard S3 operations. This guide covers single-part uploads (PutObject) and multipart uploads for large files.

Prerequisites

Before using the examples on this page, you need a running S3 Gateway with a valid configuration that includes aptosPrivateKey (YAML) or aptosSigner (TypeScript). See Configuration to set up your shelby.config.yaml, and Integrations to configure your S3 client (e.g., rclone, boto3, or the AWS CLI).

The bucket name must match your signer's Aptos address. You can only upload to buckets that correspond to accounts you control.
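
If you are unsure which address to use as the bucket, it can be derived from the signer's private key. A minimal sketch, assuming the aptos-sdk Python package and its Account.load_key helper (neither is part of the S3 Gateway itself):

from aptos_sdk.account import Account

# Hypothetical placeholder key for illustration; use the key from your shelby.config.yaml.
signer = Account.load_key("0x<YOUR_APTOS_PRIVATE_KEY>")

# The bucket you upload to must be exactly this address.
bucket = str(signer.address())
print(bucket)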

Supported Write Operations

Operation                  Description
PutObject                  Upload a single file
CreateMultipartUpload      Initiate a multipart upload
UploadPart                 Upload a part in a multipart upload
CompleteMultipartUpload    Complete a multipart upload
AbortMultipartUpload       Cancel a multipart upload

Expiration

Unlike AWS S3 (which uses bucket-level lifecycle rules), Shelby requires an explicit expiration for each blob at upload time. Set the expiration using S3 metadata:

Method                       Header
S3 Metadata (recommended)    x-amz-meta-expiration-seconds: 86400
Raw Header (fallback)        x-expiration-seconds: 86400

The value is the number of seconds until the blob expires (e.g., 86400 = 24 hours).
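
Since the value is just an integer number of seconds, you can compute it from a duration instead of hard-coding constants. A small illustrative snippet (Python standard library only, nothing Shelby-specific):

from datetime import timedelta

# Expiration is the number of seconds until the blob expires.
one_day = int(timedelta(days=1).total_seconds())       # 86400
one_week = int(timedelta(days=7).total_seconds())      # 604800
thirty_days = int(timedelta(days=30).total_seconds())  # 2592000

# S3 metadata values are strings, so pass e.g. {'expiration-seconds': str(one_week)}.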

PutObject

Upload a single file using the standard S3 PutObject operation.

rclone:

# Upload a file with 24-hour expiration
rclone copyto ./local-file.txt \
  shelby:<YOUR_ACCOUNT_ADDRESS>/path/to/file.txt \
  --header-upload "x-amz-meta-expiration-seconds: 86400" \
  --s3-no-check-bucket

# Upload with 7-day expiration
rclone copyto ./data.json \
  shelby:<YOUR_ACCOUNT_ADDRESS>/data.json \
  --header-upload "x-amz-meta-expiration-seconds: 604800" \
  --s3-no-check-bucket

AWS CLI:

# Upload a file with 24-hour expiration
aws --profile shelby --endpoint-url http://localhost:9000 \
  s3 cp ./local-file.txt \
  s3://<YOUR_ACCOUNT_ADDRESS>/path/to/file.txt \
  --metadata expiration-seconds=86400

Python (boto3):

import boto3

s3 = boto3.client(
    's3',
    endpoint_url='http://localhost:9000',
    aws_access_key_id='AKIAIOSFODNN7EXAMPLE',
    aws_secret_access_key='wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY',
    region_name='shelbyland',
)

# Upload with 24-hour expiration
with open('local-file.txt', 'rb') as f:
    s3.put_object(
        Bucket='<YOUR_ACCOUNT_ADDRESS>',
        Key='path/to/file.txt',
        Body=f,
        Metadata={'expiration-seconds': '86400'}
    )

Multipart Uploads

For large files, use multipart uploads. This splits the file into parts that are uploaded separately, then combined on the server.

rclone and most S3 SDKs handle multipart uploads automatically for large files. You only need to configure the part (chunk) size and the file-size threshold that triggers a multipart upload.

rclone:

# Upload large file with multipart (5MB chunks)
rclone copyto ./large-file.zip \
  shelby:<YOUR_ACCOUNT_ADDRESS>/large-file.zip \
  --header-upload "x-amz-meta-expiration-seconds: 86400" \
  --s3-chunk-size 5M \
  --s3-upload-cutoff 5M \
  --s3-no-check-bucket \
  -v

Option                  Description
--s3-chunk-size         Size of each part (minimum 5MB)
--s3-upload-cutoff      File size threshold to trigger multipart
--s3-no-check-bucket    Skip bucket existence check (required for Shelby)
-v                      Verbose output to see upload progress

Python (boto3):

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client(
    's3',
    endpoint_url='http://localhost:9000',
    aws_access_key_id='AKIAIOSFODNN7EXAMPLE',
    aws_secret_access_key='wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY',
    region_name='shelbyland',
)

# Configure multipart settings
config = TransferConfig(
    multipart_threshold=5 * 1024 * 1024,  # 5MB
    multipart_chunksize=5 * 1024 * 1024,  # 5MB
)

# Upload with multipart
s3.upload_file(
    'large-file.zip',
    '<YOUR_ACCOUNT_ADDRESS>',
    'large-file.zip',
    Config=config,
    ExtraArgs={'Metadata': {'expiration-seconds': '86400'}}
)

TypeScript:

import { S3Client } from "@aws-sdk/client-s3";
import { Upload } from "@aws-sdk/lib-storage";
import { createReadStream } from "fs";

const client = new S3Client({
  endpoint: "http://localhost:9000",
  region: "shelbyland",
  credentials: {
    accessKeyId: "AKIAIOSFODNN7EXAMPLE",
    secretAccessKey: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
  },
  forcePathStyle: true,
});

const upload = new Upload({
  client,
  params: {
    Bucket: "<YOUR_ACCOUNT_ADDRESS>",
    Key: "large-file.zip",
    Body: createReadStream("./large-file.zip"),
    Metadata: { "expiration-seconds": "86400" },
  },
  partSize: 5 * 1024 * 1024, // 5MB
});

upload.on("httpUploadProgress", (progress) => {
  console.log(`Uploaded ${progress.loaded} of ${progress.total} bytes`);
});

await upload.done();

Manual Multipart Upload

For fine-grained control, you can manage the multipart upload process manually:

import boto3

# Same client configuration as in the earlier examples
s3 = boto3.client(
    's3',
    endpoint_url='http://localhost:9000',
    aws_access_key_id='AKIAIOSFODNN7EXAMPLE',
    aws_secret_access_key='wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY',
    region_name='shelbyland',
)

bucket = '<YOUR_ACCOUNT_ADDRESS>'
key = 'large-file.zip'

# 1. Initiate multipart upload (set the expiration metadata here)
response = s3.create_multipart_upload(
    Bucket=bucket,
    Key=key,
    Metadata={'expiration-seconds': '86400'}
)
upload_id = response['UploadId']

try:
    parts = []
    part_number = 1

    # 2. Upload parts
    with open('large-file.zip', 'rb') as f:
        while chunk := f.read(5 * 1024 * 1024):  # 5MB chunks
            response = s3.upload_part(
                Bucket=bucket,
                Key=key,
                PartNumber=part_number,
                UploadId=upload_id,
                Body=chunk
            )
            parts.append({
                'PartNumber': part_number,
                'ETag': response['ETag']
            })
            part_number += 1

    # 3. Complete upload
    s3.complete_multipart_upload(
        Bucket=bucket,
        Key=key,
        UploadId=upload_id,
        MultipartUpload={'Parts': parts}
    )
except Exception as e:
    # Abort on failure
    s3.abort_multipart_upload(Bucket=bucket, Key=key, UploadId=upload_id)
    raise

For multipart uploads, set the expiration-seconds metadata when you initiate the upload (CreateMultipartUpload), which is where rclone and the S3 SDKs place it automatically; the UploadPart and CompleteMultipartUpload requests do not carry object metadata.

Conditional Uploads

Use the If-None-Match: * header to prevent overwriting existing objects:

Python (boto3):

try:
    s3.put_object(
        Bucket='<YOUR_ACCOUNT_ADDRESS>',
        Key='unique-file.txt',
        Body=b'content',
        Metadata={'expiration-seconds': '86400'},
        IfNoneMatch='*'  # Fail if object exists
    )
except s3.exceptions.ClientError as e:
    if e.response['Error']['Code'] == 'PreconditionFailed':
        print("Object already exists!")
    raise

TypeScript:

import { PutObjectCommand } from "@aws-sdk/client-s3";

try {
  await client.send(new PutObjectCommand({
    Bucket: "<YOUR_ACCOUNT_ADDRESS>",
    Key: "unique-file.txt",
    Body: "content",
    Metadata: { "expiration-seconds": "86400" },
    IfNoneMatch: "*", // Fail if object exists
  }));
} catch (error) {
  if (error.name === "PreconditionFailed") {
    console.log("Object already exists!");
  }
  throw error;
}

Error Handling

Common Errors

Error Code            Description                              Solution
InvalidArgument       Missing or invalid expiration            Set expiration-seconds metadata
AccessDenied          Bucket doesn't match signer address      Use your signer's address as bucket
AccessDenied          No aptosSigner configured                Add aptosPrivateKey in YAML or aptosSigner in TypeScript config
PreconditionFailed    Object exists (with If-None-Match: *)    Object already exists
EntityTooLarge        Part exceeds 5GB                         Use smaller part size
EntityTooSmall        Non-final part < 5MB                     Increase part size
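
EntityTooSmall and EntityTooLarge can both be avoided by picking a part size within the limits above. A rough sketch for choosing one; the 10,000-part ceiling is the usual S3 per-upload limit and is assumed here, not stated by this guide:

import math

MIN_PART = 5 * 1024 * 1024          # 5 MB minimum for non-final parts (EntityTooSmall)
MAX_PART = 5 * 1024 * 1024 * 1024   # 5 GB maximum per part (EntityTooLarge)
MAX_PARTS = 10_000                  # assumed standard S3 per-upload part limit

def pick_part_size(file_size: int) -> int:
    """Smallest part size that keeps the upload within MAX_PARTS parts."""
    size = max(MIN_PART, math.ceil(file_size / MAX_PARTS))
    if size > MAX_PART:
        raise ValueError("file too large for a single multipart upload")
    return size

# Example: a ~100 GB file works out to roughly 10 MiB parts.
print(pick_part_size(100 * 1024**3))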

Troubleshooting

"Expiration is required" error:

Ensure you're setting the expiration-seconds metadata:

  • rclone: --header-upload "x-amz-meta-expiration-seconds: 86400"
  • boto3: Metadata={'expiration-seconds': '86400'}
  • AWS CLI: --metadata expiration-seconds=86400

"Cannot write to bucket" error:

The bucket name must match your Aptos signer's address. Check that:

  1. Your credential has an aptosSigner configured
  2. The bucket address matches aptosSigner.accountAddress.toString()

"Write operations require an Aptos signer" error:

Your credential doesn't have an Aptos signer. Add aptosPrivateKey to your credential in shelby.config.yaml, or add aptosSigner in shelby.config.ts. See Configuration for details.