§ · the case for ceph

Storage that doesn't rent itself back to you.

Ceph is the open-source storage stack CERN runs at multi-petabyte scale. It pools the disks across a bunch of plain Linux boxes and gives you S3, block, and a real filesystem out the other end. Drives die, servers reboot, the cluster keeps serving. We've been running it in production for a decade — this is our pitch for why you should too.

01

The disks are yours.

No egress bill. No surprise tiering charge because someone listed a bucket. No price hike at renewal. Pull a drive out and you can hold it in your hand — useful when legal asks where the data lives.

02

S3, block, and a filesystem. Same disks.

You don't need a separate object store, a SAN, and an NFS box. Ceph serves all three from the same cluster of disks. We've replaced six-figure NetApp renewals with a couple of racks and never looked back.

03

Out of room? Add a server.

Rack it, plug it in, claim the disks in Sentinel. Ceph rebalances on its own. No migration window, no forklift, no quote from your account rep.

§ · do the math

What's a petabyte actually cost?

Drag the slider. We size a real Ceph cluster — actual Supermicro SKUs, actual drive counts — and put the 3-year bill next to what AWS, GCP, Azure, and Backblaze charge for the same capacity. Numbers are public list pricing; pick them apart however you like.

Usable capacity 1.0 PB
on-prem

Ceph on Supermicro

3-year TCO · hardware you own + power & cooling

$126K over 3 years

$109,950 on day one · then $16,493 in power, cooling & spares over 3 yr

  • Supermicro SSG-640P-E1CR36L · 4 × server · 36-bay 4U, 2× Xeon, 256 GB ECC, 2× 25 GbE · $48,000
  • 22 TB enterprise SATA · 137 × drive · Seagate Exos / WD Ultrastar class · $47,950
  • Top-of-rack switching · redundant 25/100 GbE leaf pair · scales every 16 nodes · $8,000
  • Rack, PDU, cabling, install · $1,500 / server allocation · $6,000
  • Hardware (you own it) · $109,950
  • Power, cooling, drive spares · 3 yr, budgeted at 5% of capex / year · $16,493
Raw 3.0 PB
Replication 3×
Built capacity 1.0 PB
hyperscaler

Cloud object storage

3-year TCO · public list pricing · storage charges only

$848K AWS S3 Standard, 3 years

$0 on day one · then $282,624/yr, every year, until you stop paying

  • AWS S3 Standard · $0.023 / GB-month · $24K/mo · $847,872
  • Google Cloud Storage Standard · $0.020 / GB-month · $20K/mo · $737,280
  • Azure Blob Hot (LRS) · $0.0184 / GB-month · $19K/mo · $678,298
  • Backblaze B2 · $0.006 / GB-month · $6K/mo · $221,184

Storage only. Egress, request charges, and lifecycle transitions aren't in here — and they're where the surprise bills come from.
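
Want to check the cloud column yourself? The arithmetic is just the list rate times capacity times 36 months. Here's a minimal sketch in Python, assuming 1 PB is billed as 1,024,000 GB, which is the convention that reproduces the totals above:

    # Cloud column: public list $/GB-month × capacity × 36 months.
    # Assumes 1 PB is billed as 1,024,000 GB (this reproduces the totals shown above).
    GB_PER_PB = 1_024_000
    MONTHS = 36
    usable_pb = 1.0

    list_price_per_gb_month = {   # hot tier, US region, single-region redundancy
        "AWS S3 Standard": 0.023,
        "Google Cloud Storage Standard": 0.020,
        "Azure Blob Hot (LRS)": 0.0184,
        "Backblaze B2": 0.006,
    }

    for provider, rate in list_price_per_gb_month.items():
        monthly = rate * usable_pb * GB_PER_PB
        print(f"{provider}: ${monthly:,.0f}/mo -> ${monthly * MONTHS:,.0f} over 3 years")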

$721K 85% cheaper

That's what you don't hand to AWS over 3 years at 1.0 PB. After year 3 the hardware is paid off and you keep using it. The S3 invoice shows up again in January.

How we got these numbers
  • Replication: 3 copies — that's Ceph's safe default. If you're comfortable with erasure coding (4+2 is common) the overhead drops to 1.5× and you cut the hardware bill roughly in half. Slower writes, more CPU, no free lunch.
  • Server: Supermicro SSG-640P-E1CR36L. 36 × 3.5" top-load bays, dual Xeon, 256 GB ECC, dual 25 GbE, mirrored NVMe boot. We've put a few of these in production. ~$12K in a typical OEM build; you can usually do better through a reseller.
  • Drives: 22 TB enterprise SATA — Seagate Exos X22 or WD Ultrastar DC HC570. $350 list. We've seen sub-$300 in volume.
  • Switching: Redundant 25/100 GbE leaf pair. Think Mellanox SN2100 or an Arista 7050. $8K per pair, one pair per ~16 nodes.
  • Minimum cluster: 3 nodes, because monitor quorum needs an odd number and you want a real failure domain. Under ~100 TB the 3-node floor is what you're really paying for, which is why this calculator starts there.
  • Power, cooling, spares: 5% of capex per year. Your colo bill is whatever it is — adjust accordingly.
  • Cloud pricing: US-region list price, hot tier, single-region redundancy. Q1 2026.
  • What's not in here: staff time, egress, PUT/GET/LIST charges, lifecycle transitions, your existing rack space, backups. Ceph itself is Apache 2.0 and free; Sentinel is a separate line. We left both sides honest.
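
If you'd rather poke at the on-prem side in code, here's the sizing arithmetic as a rough Python sketch. It uses the round figures from the list above; counting 1 PB as 1,000 TB is our assumption (it matches the drive count shown), so swap in your own quotes and conventions:

    import math

    # On-prem sketch: size a 3-replica Ceph cluster and total the 3-year bill.
    # Unit prices are the round figures listed above; adjust to your own quotes.
    usable_tb = 1_000            # 1.0 PB usable, counting 1 PB as 1,000 TB
    replication_overhead = 3.0   # 3 copies (Ceph's default); 4+2 erasure coding would be 1.5
    drive_tb = 22                # 22 TB enterprise SATA
    bays_per_server = 36         # 36-bay 4U chassis
    nodes_per_switch_pair = 16   # one leaf pair per ~16 nodes

    drive_price = 350            # list; volume pricing runs lower
    server_price = 12_000        # typical OEM build
    switch_pair_price = 8_000
    rack_install_per_server = 1_500
    opex_rate = 0.05             # power, cooling, spares: 5% of capex per year

    raw_tb = usable_tb * replication_overhead
    drives = math.ceil(raw_tb / drive_tb)                       # 137
    servers = max(3, math.ceil(drives / bays_per_server))       # 4 (3-node floor for mon quorum)
    switch_pairs = math.ceil(servers / nodes_per_switch_pair)   # 1

    capex = (drives * drive_price + servers * server_price
             + switch_pairs * switch_pair_price
             + servers * rack_install_per_server)               # $109,950
    opex_3yr = capex * opex_rate * 3                            # ~$16,493
    tco_3yr = capex + opex_3yr                                  # ~$126K

    aws_3yr = 1_024_000 * 0.023 * 36                            # $847,872, from the cloud sketch
    print(f"capex ${capex:,.0f} · 3-year TCO ${tco_3yr:,.0f}")
    print(f"vs AWS S3 Standard: ${aws_3yr - tco_3yr:,.0f} saved ({1 - tco_3yr / aws_3yr:.0%} cheaper)")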

Convinced? Now you have to run it.

The math works. The hard part is the day-two stuff — quorum changes, OSD lifecycle, rolling upgrades at 2 a.m. Sentinel is what we built so you don't have to live in a terminal to do any of that.