RepoFlow Team · Aug 9, 2025
Benchmarking Self-Hosted S3-Compatible Storage
A clear comparison of seven self-hosted S3-compatible storage solutions.
Self-hosted object storage is a strong choice for developers and teams who want full control over how their data is stored and accessed. Whether you are replacing Amazon S3, hosting internal files, building a CI pipeline, or serving package repositories, the storage layer can significantly affect speed and stability.
We tested seven popular object storage solutions that support the S3 protocol. The goal was to compare their performance under identical conditions using real upload and download operations.
Solutions We Tested
Each of the following was deployed using Docker on the same server with no volume mounting and no special tuning:
- MinIO
- Ceph
- SeaweedFS
- Garage
- Zenko (Scality Cloudserver)
- LocalStack
- RustFS
Non-Parallel Download Speed
Average download speed for a single file at different sizes.
Non-Parallel Upload Speed
Average upload speed for a single file at different sizes.
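For context on how these single-file numbers are produced, the measurement boils down to timing one PUT or GET and dividing the object size by the elapsed time. Below is a minimal sketch using boto3; the endpoint URL, credentials, and helper names are placeholders for illustration, not the exact code we ran:

```python
import time

import boto3

# Placeholder endpoint and credentials (9000 is MinIO's default API port);
# point this at whichever backend is running locally.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)

def measure_upload(s3, bucket: str, key: str, data: bytes) -> float:
    """Upload one object and return the observed speed in MB/s."""
    start = time.perf_counter()
    s3.put_object(Bucket=bucket, Key=key, Body=data)
    elapsed = time.perf_counter() - start
    return len(data) / (1024 * 1024) / elapsed

def measure_download(s3, bucket: str, key: str) -> float:
    """Download one object fully and return the observed speed in MB/s."""
    start = time.perf_counter()
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    elapsed = time.perf_counter() - start
    return len(body) / (1024 * 1024) / elapsed
```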
Listing Performance
Measures how long it takes to list all 2,000 test objects in a bucket using different page sizes (100, 500, and 1000 results per request).
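In S3 terms this is a paginated list_objects_v2 call, where the page size caps how many keys each request returns. A rough sketch of that measurement, reusing the boto3 client from the sketch above (the function name is ours, for illustration):

```python
import time

def measure_listing(s3, bucket: str, page_size: int) -> float:
    """List every object in the bucket at the given page size and return the elapsed seconds."""
    paginator = s3.get_paginator("list_objects_v2")
    start = time.perf_counter()
    total = 0
    for page in paginator.paginate(Bucket=bucket, PaginationConfig={"PageSize": page_size}):
        total += len(page.get("Contents", []))
    elapsed = time.perf_counter() - start
    print(f"Listed {total} objects with page size {page_size} in {elapsed:.2f} s")
    return elapsed
```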
Parallel Upload Speed
Measures how long it takes to upload multiple files of the same size in parallel. Upload speed is calculated as:
(number of files × file size) ÷ total time
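One way to implement this is a thread pool issuing concurrent PUTs against a shared boto3 client (clients are thread-safe, so one instance can serve all workers). The sketch below is illustrative only; the worker count, key prefix, and function name are our own assumptions rather than the exact harness we ran:

```python
import os
import time
from concurrent.futures import ThreadPoolExecutor

def measure_parallel_upload(s3, bucket: str, file_size: int, num_files: int, workers: int = 10) -> float:
    """Upload num_files objects of file_size bytes concurrently and return the speed in MB/s."""
    data = os.urandom(file_size)  # one random payload reused for every object
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [
            pool.submit(s3.put_object, Bucket=bucket, Key=f"parallel/{i}", Body=data)
            for i in range(num_files)
        ]
        for future in futures:
            future.result()  # surface any failed upload
    elapsed = time.perf_counter() - start
    # (number of files × file size) ÷ total time, expressed in MB/s
    return (num_files * file_size) / (1024 * 1024) / elapsed
```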
Parallel Upload Speed - 1 MB Files
Parallel Upload Speed - 10 MB Files
Parallel Upload Speed - 100 MB Files
Parallel Download Speed
Measures how long it takes to download multiple files of the same size in parallel. Download speed is calculated as:
(number of files × file size) ÷ total time
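The download side mirrors the upload sketch: fetch the same set of keys concurrently, read each body to the end so the full transfer is timed, and divide the total bytes by the elapsed time. Again, the helper below is illustrative, not the exact test code:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def measure_parallel_download(s3, bucket: str, keys: list[str], workers: int = 10) -> float:
    """Download the given objects concurrently and return the combined speed in MB/s."""
    def fetch(key: str) -> int:
        # Read the body fully so the whole transfer is included in the timing.
        return len(s3.get_object(Bucket=bucket, Key=key)["Body"].read())

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        total_bytes = sum(pool.map(fetch, keys))
    elapsed = time.perf_counter() - start
    # Equivalent to (number of files × file size) ÷ total time when all objects share one size.
    return total_bytes / (1024 * 1024) / elapsed
```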
Parallel Download Speed - 1 MB Files
Parallel Download Speed - 10 MB Files
Parallel Download Speed - 100 MB Files
How the Tests Were Performed
For each solution, we:
- Uploaded and downloaded files of 7 different sizes: 50 KB, 200 KB, 1 MB, 10 MB, 50 MB, 100 MB, and 1 GB
- Repeated each upload and download 20 times to get stable averages
- Measured the average upload and download speed in megabytes per second (MB/s)
- Ran all tests on the same machine using the default Docker container for each storage system, with no external volumes, mounts, or caches
All solutions were tested in a single-node setup for consistency. While some systems (for example, Ceph) are designed to perform better in a clustered environment, we used the same conditions across all solutions to ensure a fair comparison.
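Put together, the per-solution loop looks roughly like the sketch below, which reuses the measure_upload and measure_download helpers from the earlier example. The size table mirrors our test matrix; everything else (bucket name, key layout) is a placeholder:

```python
import os
from statistics import mean

# File sizes from the test matrix, in bytes.
SIZES = {
    "50 KB": 50 * 1024,
    "200 KB": 200 * 1024,
    "1 MB": 1024 * 1024,
    "10 MB": 10 * 1024 * 1024,
    "50 MB": 50 * 1024 * 1024,
    "100 MB": 100 * 1024 * 1024,
    "1 GB": 1024 * 1024 * 1024,
}
REPEATS = 20

def run_benchmark(s3, bucket: str) -> dict[str, dict[str, float]]:
    """Average upload and download speed (MB/s) per file size over REPEATS runs."""
    results = {}
    for label, size in SIZES.items():
        data = os.urandom(size)
        uploads, downloads = [], []
        for i in range(REPEATS):
            key = f"bench/{label}/{i}"
            uploads.append(measure_upload(s3, bucket, key, data))
            downloads.append(measure_download(s3, bucket, key))
        results[label] = {"upload_mb_s": mean(uploads), "download_mb_s": mean(downloads)}
    return results
```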
Final Thoughts
These results represent how each solution behaved in our specific single-node test environment. They should be viewed as a relative comparison of performance ratios, not as absolute values that will apply in every setup.
When selecting the right storage solution, consider the typical file sizes you will store, since some systems handle small files better while others excel with large files. Also think about the core capabilities you require, such as scalability, replication, durability, or a built-in GUI. Finally, remember that performance may differ greatly between single-node and multi-node clusters.
Our tests provide a baseline for understanding how these systems compare under identical conditions, but your real-world performance will depend on your specific hardware, workload, and configuration.