S3 Gateway

Deletions

Deleting files from Shelby via the S3 Gateway

The S3 Gateway supports deleting files from Shelby storage using standard S3 delete operations. This guide covers single object deletion and batch deletion of multiple objects.

Prerequisites

Before using the examples on this page, you need a running S3 Gateway with a valid configuration that includes aptosPrivateKey (YAML) or aptosSigner (TypeScript). See Configuration to set up your shelby.config.yaml and Integrations to configure your S3 client (e.g., rclone, boto3, or the AWS CLI).

The bucket name must match your signer's Aptos address. You can only delete objects from buckets that correspond to accounts you control.

Supported Delete Operations

Operation        Description
DeleteObject     Delete a single object
DeleteObjects    Delete multiple objects in one request

DeleteObject

Delete a single object using the standard S3 DeleteObject operation.

Per the S3 specification, DeleteObject returns a 204 success response even if the object doesn't exist. This is idempotent behavior—deleting the same object twice won't cause an error.

Using rclone:

# Delete a single file
rclone delete shelby:<YOUR_ACCOUNT_ADDRESS>/path/to/file.txt

# Delete with verbose output
rclone delete shelby:<YOUR_ACCOUNT_ADDRESS>/old-data.json -v

Using the AWS CLI:

# Delete a single file
aws --profile shelby --endpoint-url http://localhost:9000 \
  s3 rm s3://<YOUR_ACCOUNT_ADDRESS>/path/to/file.txt

# Delete using the low-level s3api interface
aws --profile shelby --endpoint-url http://localhost:9000 \
  s3api delete-object \
  --bucket <YOUR_ACCOUNT_ADDRESS> \
  --key path/to/file.txt

Using Python (boto3):

import boto3

s3 = boto3.client(
    's3',
    endpoint_url='http://localhost:9000',
    aws_access_key_id='AKIAIOSFODNN7EXAMPLE',
    aws_secret_access_key='wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY',
    region_name='shelbyland',
)

# Delete a single object
s3.delete_object(
    Bucket='<YOUR_ACCOUNT_ADDRESS>',
    Key='path/to/file.txt'
)

Using TypeScript (@aws-sdk/client-s3):

import { S3Client, DeleteObjectCommand } from "@aws-sdk/client-s3";

const client = new S3Client({
  endpoint: "http://localhost:9000",
  region: "shelbyland",
  credentials: {
    accessKeyId: "AKIAIOSFODNN7EXAMPLE",
    secretAccessKey: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
  },
  forcePathStyle: true,
});

await client.send(new DeleteObjectCommand({
  Bucket: "<YOUR_ACCOUNT_ADDRESS>",
  Key: "path/to/file.txt",
}));
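
Because DeleteObject is idempotent, repeating a delete is safe. The following is a minimal boto3 sketch (reusing the s3 client from the Python example above; the key is purely illustrative):

# Both calls succeed; the second is a no-op on an already-deleted key.
for _ in range(2):
    response = s3.delete_object(
        Bucket='<YOUR_ACCOUNT_ADDRESS>',
        Key='path/to/file.txt',
    )
    print(response['ResponseMetadata']['HTTPStatusCode'])  # 204 both times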

DeleteObjects (Batch Delete)

Delete multiple objects in a single request. This is more efficient than deleting objects one at a time.

Batch deletes are atomic—either all objects are deleted or none are. If any object in the batch doesn't exist, the entire operation will fail with a NoSuchKey error for all objects.

Using the AWS CLI:

# Delete multiple objects
aws --profile shelby --endpoint-url http://localhost:9000 \
  s3api delete-objects \
  --bucket <YOUR_ACCOUNT_ADDRESS> \
  --delete '{
    "Objects": [
      {"Key": "file1.txt"},
      {"Key": "file2.txt"},
      {"Key": "data/file3.json"}
    ]
  }'

# Delete with quiet mode (suppress successful delete output)
aws --profile shelby --endpoint-url http://localhost:9000 \
  s3api delete-objects \
  --bucket <YOUR_ACCOUNT_ADDRESS> \
  --delete '{
    "Objects": [{"Key": "file1.txt"}, {"Key": "file2.txt"}],
    "Quiet": true
  }'

Using Python (boto3):

import boto3

s3 = boto3.client(
    's3',
    endpoint_url='http://localhost:9000',
    aws_access_key_id='AKIAIOSFODNN7EXAMPLE',
    aws_secret_access_key='wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY',
    region_name='shelbyland',
)

# Delete multiple objects
response = s3.delete_objects(
    Bucket='<YOUR_ACCOUNT_ADDRESS>',
    Delete={
        'Objects': [
            {'Key': 'file1.txt'},
            {'Key': 'file2.txt'},
            {'Key': 'data/file3.json'},
        ],
        'Quiet': False  # Set True to suppress successful delete output
    }
)

# Check results
if 'Deleted' in response:
    for obj in response['Deleted']:
        print(f"Deleted: {obj['Key']}")

if 'Errors' in response:
    for err in response['Errors']:
        print(f"Error deleting {err['Key']}: {err['Code']} - {err['Message']}")

Using TypeScript (@aws-sdk/client-s3):

import { S3Client, DeleteObjectsCommand } from "@aws-sdk/client-s3";

const client = new S3Client({
  endpoint: "http://localhost:9000",
  region: "shelbyland",
  credentials: {
    accessKeyId: "AKIAIOSFODNN7EXAMPLE",
    secretAccessKey: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
  },
  forcePathStyle: true,
});

const response = await client.send(new DeleteObjectsCommand({
  Bucket: "<YOUR_ACCOUNT_ADDRESS>",
  Delete: {
    Objects: [
      { Key: "file1.txt" },
      { Key: "file2.txt" },
      { Key: "data/file3.json" },
    ],
    Quiet: false,
  },
}));

// Check results
for (const obj of response.Deleted ?? []) {
  console.log(`Deleted: ${obj.Key}`);
}

for (const err of response.Errors ?? []) {
  console.log(`Error deleting ${err.Key}: ${err.Code} - ${err.Message}`);
}

Delete Directory (Prefix)

To delete all objects that share a common prefix, use a recursive delete (rclone, AWS CLI) or list the objects and delete them in a single batch (SDKs).

Using rclone:

# Delete all objects in a "directory"
rclone delete shelby:<YOUR_ACCOUNT_ADDRESS>/data/old-files/

# Delete with dry-run first to see what would be deleted
rclone delete shelby:<YOUR_ACCOUNT_ADDRESS>/data/old-files/ --dry-run

# Purge entire directory (same as delete for S3-like storage)
rclone purge shelby:<YOUR_ACCOUNT_ADDRESS>/temp-data/

Using the AWS CLI:

# Delete all objects with a prefix
aws --profile shelby --endpoint-url http://localhost:9000 \
  s3 rm s3://<YOUR_ACCOUNT_ADDRESS>/data/old-files/ --recursive

# Dry run to see what would be deleted
aws --profile shelby --endpoint-url http://localhost:9000 \
  s3 rm s3://<YOUR_ACCOUNT_ADDRESS>/data/old-files/ --recursive --dryrun

Using Python (boto3):

import boto3

# Same endpoint, credentials, and region as the earlier examples
s3 = boto3.client(
    's3',
    endpoint_url='http://localhost:9000',
    aws_access_key_id='AKIAIOSFODNN7EXAMPLE',
    aws_secret_access_key='wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY',
    region_name='shelbyland',
)

bucket = '<YOUR_ACCOUNT_ADDRESS>'
prefix = 'data/old-files/'

# List objects under the prefix (list_objects_v2 returns at most 1,000 keys per call;
# see the paginated sketch below for larger prefixes)
response = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)
objects = response.get('Contents', [])

if objects:
    # Delete all found objects
    delete_response = s3.delete_objects(
        Bucket=bucket,
        Delete={
            'Objects': [{'Key': obj['Key']} for obj in objects]
        }
    )
    print(f"Deleted {len(delete_response.get('Deleted', []))} objects")
else:
    print("No objects found with prefix")

Error Handling

Common Errors

Error Code      Description                             Solution
AccessDenied    No aptosSigner configured               Add aptosPrivateKey in YAML or aptosSigner in TypeScript config
AccessDenied    Bucket doesn't match signer address     Use your signer's address as bucket
NoSuchKey       Object not found (batch delete only)    Verify object exists before batch delete
MalformedXML    Invalid DeleteObjects request body      Check XML format for batch delete
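
In boto3, failures of single-object calls typically surface as botocore ClientError exceptions carrying one of the codes above, while DeleteObjects reports per-object failures in the response's Errors list (as in the batch example earlier). A minimal sketch of branching on the error code (the key and the handling are illustrative; reuses the boto3 client from the earlier examples):

from botocore.exceptions import ClientError

try:
    s3.delete_object(Bucket='<YOUR_ACCOUNT_ADDRESS>', Key='path/to/file.txt')
except ClientError as e:
    code = e.response['Error']['Code']
    if code == 'AccessDenied':
        # Check the aptosSigner configuration and that the bucket matches
        # the signer's address (see the table above).
        print('Access denied:', e.response['Error'].get('Message', ''))
    else:
        raise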

Troubleshooting

"Delete operations require an Aptos signer" error:

Your credential doesn't have an Aptos signer. Add aptosPrivateKey to your credential in shelby.config.yaml, or add aptosSigner in shelby.config.ts. See Configuration for details.

"Cannot delete from bucket" error:

The bucket name must match your Aptos signer's address. Check that:

  1. Your credential has an aptosSigner configured
  2. The bucket address matches aptosSigner.accountAddress.toString()

Batch delete fails with "NoSuchKey":

Unlike single DeleteObject (which succeeds even if the object doesn't exist), batch DeleteObjects is atomic. If any object in the batch doesn't exist, the entire operation fails. Either:

  • Verify all objects exist before deleting (see the sketch after this list)
  • Use individual DeleteObject calls if you need idempotent behavior
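
A sketch of the first option, using the listing call shown earlier to filter a candidate list down to keys that actually exist before issuing the batch delete (the candidate keys are illustrative; reuses the boto3 client from the earlier examples):

# Keep only the keys that actually exist, then batch-delete those.
candidates = ['file1.txt', 'file2.txt', 'data/file3.json']
existing = []
for key in candidates:
    resp = s3.list_objects_v2(Bucket='<YOUR_ACCOUNT_ADDRESS>', Prefix=key)
    if any(obj['Key'] == key for obj in resp.get('Contents', [])):
        existing.append(key)

if existing:
    s3.delete_objects(
        Bucket='<YOUR_ACCOUNT_ADDRESS>',
        Delete={'Objects': [{'Key': k} for k in existing]},
    )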

Deletion succeeded but object still appears in listing:

Shelby deletions are recorded on the blockchain. There may be a brief delay before the indexer reflects the deletion. Wait a few seconds and try listing again.
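
If you need to confirm that a deletion has been indexed before proceeding, you can poll the listing briefly. A sketch (the attempt count and delay are arbitrary choices; reuses the boto3 client from the earlier examples):

import time

def wait_until_gone(s3, bucket, key, attempts=10, delay=2.0):
    """Poll the listing until the deleted key no longer appears."""
    for _ in range(attempts):
        resp = s3.list_objects_v2(Bucket=bucket, Prefix=key)
        if not any(obj['Key'] == key for obj in resp.get('Contents', [])):
            return True
        time.sleep(delay)
    return False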