v1.9 — free up to 3 nodes, no card · what shipped this week ↗

Ceph, without
the operator tax.

We ran Ceph at scale for a decade. Sentinel is the tooling we kept wishing existed — one appliance that provisions bare metal, stands up a cluster, and handles day two, from a browser on your own hardware.

reel 01 · the tour

Six things Ceph never let you click.

Auto-register nodes

recording coming soon

Power on a server. iPXE boot, hardware inventory, registration — all automatic. Under 60 seconds to first contact.

§01 · from boot to ready

From bare metal to
running cluster, in under five.

  1. Boot from the network.

    Point your DHCP server at the master node. Power on your storage servers. Each one pulls the bootstrap image over iPXE, inventories its hardware, and registers — no hands.

    nf network create --range 10.10.1.0/24

  2. Review and deploy.

    Inspect auto-discovered specs: cores, memory, drives, NICs, IPMI. Select nodes, choose Ceph version and failure domain, hit deploy. One command, no playbook.

    nf cluster create --version reef --wait

  3. Manage. Scale. Repeat.

    Live IOPS and capacity, pools, OSDs, rolling upgrades, extra clusters — same UI, same API, same CLI. Bring a fourth cluster up at the edge without a new tool.

    nf osd create prod-01 --all-drives

§02 · zero-touch

Power on.
Done.

Sentinel's iPXE server boots each node, inventories every CPU core, gigabyte of RAM, and attached drive, then registers it in the control plane — with no operator involvement.

New nodes appear in the browser dashboard within 60 seconds of power-on. Every action available in the UI is also available via the nf CLI and REST API — use whichever fits your workflow.
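As a sketch of that CLI/API parity, a script could poll the master for registered nodes instead of watching the dashboard. The endpoint path and field names below are assumptions for illustration, not documented API:

```python
import json

def nodes_url(base: str) -> str:
    """Build the (hypothetical) nodes endpoint URL from the master's base URL."""
    return base.rstrip("/") + "/api/v1/nodes"

def ready_nodes(body: str) -> list[str]:
    """Extract the names of fully registered nodes from a JSON response."""
    return [n["name"] for n in json.loads(body)["nodes"]
            if n["status"] == "ready"]

sample = json.dumps({"nodes": [
    {"name": "storage-01", "status": "ready"},
    {"name": "storage-02", "status": "waiting"},
]})
print(nodes_url("https://sentinel.local"))  # https://sentinel.local/api/v1/nodes
print(ready_nodes(sample))                  # ['storage-01']
```

Anything the UI renders, a script can consume the same way.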

  • BIOS and UEFI iPXE supported
  • Versioned, signed OS images served over HTTP/TFTP
  • IPMI / BMC registered automatically on first boot
  • Per-drive S.M.A.R.T. data captured at registration
nf — terminal also available in the browser dashboard ↓
storage-01 Waiting…
storage-02 Waiting…
storage-03 Waiting…

$ nf network create --name stor-net \
    --range 10.10.1.0/24 --gateway 10.10.1.1

§03 · what's in the box

Everything Ceph needs.
Nothing it doesn't.

// switcher

Many clusters, one control plane.

Independent health, nodes, alerts — shared API. Hop between prod, staging, and the edge lab from a dropdown that remembers where you were.

prod-01 847 TiB · HEALTH_OK
staging 64 TiB · HEALTH_OK
edge-dc-03 12 TiB · 1 warn

// resilience

HA, first-class.

Sentinel's control plane runs active-passive with automatic master failover — native, not bolted on. Ceph MONs replicate across 3+ hosts underneath. The data path stays pure Ceph; the control plane never sits in front of your I/O.
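Active-passive reduces to one rule: the highest-priority healthy master serves the API. A toy sketch of that election, with illustrative names rather than Sentinel internals:

```python
def active_master(masters, healthy):
    """Active-passive election: first healthy master in priority order wins."""
    for m in masters:
        if healthy(m):
            return m
    return None  # no healthy master; Ceph client I/O continues regardless

# master-a dies; master-b takes over automatically.
up = {"master-a": False, "master-b": True}
print(active_master(["master-a", "master-b"], lambda m: up[m]))  # master-b
```

Because the control plane never sits in the data path, a failover only changes which master answers the API; reads and writes against the OSDs are untouched.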

// day two

Rolling upgrades.

Drain, upgrade, rejoin — one OSD at a time, while clients keep reading and writing. Automatic rollback on failed health checks.
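The loop behind that description can be sketched as follows; the function names are illustrative, not Sentinel's actual internals:

```python
def rolling_upgrade(osds, apply, rollback, health_ok):
    """Upgrade OSDs one at a time; roll back and stop on a failed health check."""
    done = []
    for osd in osds:
        apply(osd)            # drain, upgrade, rejoin this one OSD
        if not health_ok():   # cluster must return to healthy before the next
            rollback(osd)     # automatic rollback of the failing OSD
            return done, osd  # what finished, and where we stopped
        done.append(osd)
    return done, None

# Simulated run where osd.1 fails its post-upgrade health check.
version = {o: "reef" for o in ["osd.0", "osd.1", "osd.2"]}
done, stopped = rolling_upgrade(
    list(version),
    lambda o: version.__setitem__(o, "squid"),
    lambda o: version.__setitem__(o, "reef"),
    lambda: version.get("osd.1") != "squid",
)
print(done, stopped, version)
```

One OSD is out at a time, so the cluster keeps serving from its replicas throughout.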

client I/O 1.24 GB/s · no drop

// interfaces

RBD. CephFS. RGW. NVMe-oF.
All four.

Block devices, distributed filesystem, S3-compatible object gateway, and NVMe-over-Fabrics targets — every production Ceph protocol, provisioned from one UI.

  • RBD block device · kernel + userspace
  • CephFS POSIX filesystem · active-active MDS
  • RadosGW S3 + Swift · multi-site
  • NVMe-oF TCP + RDMA targets

// observability

Metrics & alerts.

Per-OSD IOPS, throughput, and latency retained for 30 days. Webhook alerts to Slack, PagerDuty, or any HTTP endpoint.
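A webhook receiver only needs a small JSON body. This sketch builds a Slack-compatible payload; Slack's incoming webhooks read the `text` key, and everything else about the message format here is an illustrative assumption:

```python
import json

def alert_payload(cluster, source, metric, value, threshold):
    """Build a Slack-compatible webhook body for a breached threshold."""
    return json.dumps({
        "text": f"[{cluster}] {source}: {metric}={value} (threshold {threshold})",
    })

print(alert_payload("prod-01", "osd.17", "write_latency_ms", 48, 25))
```

The same body POSTs to PagerDuty-style or plain HTTP endpoints; only the receiving side changes.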

// your iron

Works on
what you have.

Any x86_64 with BIOS or UEFI PXE. We regression-test against the configurations below every release.

  • Dell PowerEdge R640 / R650 / R750
  • HPE ProLiant DL360 / DL385 Gen10+
  • Supermicro SSG-6029P · SSG-6049P
  • whitebox X9/X10/X11 + AM4/SP3 boards

// predictive

S.M.A.R.T.
before they die.

Reallocated sectors, pending sectors, end-to-end errors — every HDD and NVMe, continuously. We warn before the placement group goes degraded.

reallocated 0
pending 2
end-to-end err 0
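The check behind a readout like the one above is simple: flag any drive whose failure-predicting counters start moving. The thresholds here are illustrative, not Sentinel's actual policy:

```python
# Illustrative thresholds: any nonzero count on these attributes is worth a look.
WARN_AT = {"reallocated": 1, "pending": 1, "end_to_end_err": 1}

def smart_warnings(counters):
    """Return the attributes whose raw count reached the warning threshold."""
    return sorted(k for k, v in counters.items()
                  if v >= WARN_AT.get(k, float("inf")))

# The readout above: two pending sectors is the early signal.
print(smart_warnings({"reallocated": 0, "pending": 2, "end_to_end_err": 0}))
# ['pending']
```

Catching the drive here, while it still reads, means a planned rebalance instead of a degraded placement group.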

§04 · who built this

Twelve years in anger.
Every failure mode, catalogued.

Our founding team ran Ceph for major cloud providers and infrastructure-intensive shops long before Sentinel existed. We've triaged corrupt placement groups at 2 AM, recovered from cascading OSD flaps, and pushed Luminous-to-Squid upgrades through live production clusters.

Sentinel is the tooling we kept wishing existed — built with the operational depth that only comes from paging yourself.

12+ years running Ceph
30+ teams in beta
PB storage managed

§ · included support

What you also get.

8h · Response SLA
Professional-plan customers get a guaranteed first response within 8 business hours, direct to the engineers who wrote the code.
CSE · Dedicated engineer
Enterprise customers get a named customer success engineer who knows your cluster topology and is on call for escalations.
MIG · Migration assistance
Moving off cephadm or a self-managed cluster? We'll plan the cutover and sit on the bridge while you execute. Zero downtime.
ARCH · Architecture review
Before day one, we'll go through your hardware, network design, and failure-domain strategy. The cheap bugs are the ones we catch now.

§05 · pricing, roughly

Free up to three nodes.
Paid when you need more.

No card to start, no time limit, no "talk to sales" gate. Follow the quickstart, install the master on one box, point PXE at it — first cluster in under an hour. Upgrade paths for multi-cluster, SLA, and air-gapped deploys when you're ready.

Free up to 3 nodes
Professional multi-cluster · 8h SLA
Enterprise dedicated CSE · air-gap

§06 · objections, anticipated

The questions everyone asks.

§07 · ready?

Stop hand-rolling Ceph.

Free up to three nodes. No card. First cluster deploys in an hour, not a quarter.