§ · capabilities

Every knob
worth turning.

Every operational feature that ships in Sentinel today — what you can do, what you can see, and what you can click through a browser. No plumbing, no architecture diagrams — just what you get.

§ 01

Day-zero provisioning

Power on servers. They find the master, register, and show up ready.

  • Zero-touch PXE bootstrap — kernel and initrd chainloaded over HTTP, self-registration on first boot
  • Live hardware inventory — CPU, RAM, disks, and NICs discovered automatically on every node
  • Built-in DHCP with cluster-wide scopes — no separate DHCP server to maintain
  • OS image catalog with per-node kernel parameters
  • Guided setup wizard — admin account, SSH keys, initial cluster config in a handful of clicks
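
The zero-touch path above is the standard PXE chainload pattern. A minimal iPXE script of the kind a node might fetch on first boot; the hostname, paths, and the registration kernel parameter are illustrative stand-ins, not Sentinel's actual values:

```
#!ipxe
# Chainload kernel and initrd over HTTP, then let the node self-register.
# The sentinel_register parameter name below is hypothetical.
kernel http://sentinel.local/boot/vmlinuz sentinel_register=1 console=tty0
initrd http://sentinel.local/boot/initrd.img
boot
```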

§ 02

Ceph daemon lifecycle

Install, upgrade, stop, and remove every Ceph daemon — no cephadm dependency.

  • Full lifecycle for MON, MGR, MDS, OSD, and RGW daemons
  • Monitor quorum orchestration — FSID bootstrap, clean add and remove from a running cluster
  • Automatic keyring generation with scoped capabilities per daemon
  • Every action maps directly to official Ceph (Reef 18.x) CLI paths — no custom shim you can't audit
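
For reference, these are the stock Reef CLI paths such actions correspond to; daemon names and addresses here are illustrative:

```shell
# Scoped keyring for a new OSD, capabilities limited to the OSD profile
ceph auth get-or-create osd.7 \
  mon 'allow profile osd' \
  mgr 'allow profile osd' \
  osd 'allow *'

# Add a monitor to a running quorum, then cleanly remove it
ceph mon add mon-c 10.0.0.13
ceph mon remove mon-c
```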

§ 03

Disks and OSDs

Safe teardown, predictable provisioning, OSDs that show up with their IDs on day one.

  • Safe disk wipes — ceph-volume LVM zap with sgdisk full-surface fallback
  • OSD provisioning with LVM preparation and mid-task activation
  • OSD ID visible immediately at creation — no post-hoc hunting for the number
  • Per-drive surfacing from the live hardware inventory
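
The wipe-and-provision sequence maps onto standard ceph-volume and sgdisk invocations; the device path is illustrative:

```shell
# Primary wipe: tear down LVM state and zap the device
ceph-volume lvm zap /dev/sdb --destroy

# Fallback: full partition-table wipe when LVM teardown isn't enough
sgdisk --zap-all /dev/sdb

# Prepare and activate in one step; the new OSD ID is reported at creation
ceph-volume lvm create --data /dev/sdb
```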

§ 04

Pool management

Every Ceph pool, unified — with guardrails around the things that go wrong in production.

  • Unified inventory across RBD, CephFS, RGW, and NFS pools — live stored bytes, % used, and object counts
  • Replicated and erasure-coded pool creation
  • PG autoscaler or manual pg_num / pgp_num tuning
  • Size, min-size, CRUSH rule, quotas, target-size-ratio — all in the UI
  • Automatic tagging for pools owned by an NFS cluster or CephFS filesystem
  • Guided deletion — refuses to delete gateway-owned pools, surfaces mon_allow_pool_delete remediation when the cluster blocks it
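
In CLI terms, the knobs above correspond to standard pool commands; pool names, PG counts, and the quota value are illustrative:

```shell
# Replicated pool with the PG autoscaler on
ceph osd pool create rbd-fast 128 128 replicated
ceph osd pool set rbd-fast pg_autoscale_mode on

# Erasure-coded pool from a 4+2 profile
ceph osd erasure-code-profile set ec42 k=4 m=2
ceph osd pool create ec-archive 64 64 erasure ec42

# Quota, and the mon flag that gates deletion cluster-wide
ceph osd pool set-quota rbd-fast max_bytes 107374182400   # 100 GiB
ceph config set mon mon_allow_pool_delete true
```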

§ 05

RBD block storage

Images, snapshots, QoS, and cross-site mirroring — without ever leaving the browser.

  • Image CRUD — create, resize with shrink guard, rename, trash-based safe delete
  • Snapshots and clones — create, protect, unprotect, rollback, cross-pool clone, flatten
  • Pool-level QoS — six independent token buckets (total / read / write for both IOPS and BPS) with burst and burst-seconds
  • RBD mirroring for disaster recovery — pool or image mode, bootstrap-token exchange, live peer and image health
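
The same operations expressed as plain rbd commands; image and pool names are illustrative:

```shell
# Image lifecycle: shrinking requires an explicit flag; delete goes to trash first
rbd create rbd-fast/vm-disk-01 --size 100G
rbd resize rbd-fast/vm-disk-01 --size 200G
rbd trash mv rbd-fast/vm-disk-02

# Snapshot, protect, clone, flatten
rbd snap create rbd-fast/vm-disk-01@golden
rbd snap protect rbd-fast/vm-disk-01@golden
rbd clone rbd-fast/vm-disk-01@golden rbd-fast/vm-clone-01
rbd flatten rbd-fast/vm-clone-01

# One of the six pool-level QoS token buckets, with its burst companion
rbd config pool set rbd-fast rbd_qos_write_iops_limit 2000
rbd config pool set rbd-fast rbd_qos_write_iops_burst 4000
```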

§ 06

CephFS

Distributed filesystem with a web file browser attached — no shell needed.

  • Filesystem creation with automatic data and metadata pool provisioning
  • Auto-mount that survives every node restart
  • Web-based file browser — list, mkdir, delete, view ownership and permissions
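
Under the hood this is standard CephFS tooling; the filesystem name and monitor address are illustrative:

```shell
# Create a filesystem; data and metadata pools are provisioned automatically
ceph fs volume create shared

# Kernel mount (what a persistent auto-mount boils down to)
mount -t ceph 10.0.0.11:6789:/ /mnt/shared -o name=admin,fs=shared
```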

§ 07

NFS gateway

Multi-member NFS clusters backed by CephFS, with client access rules and dynamic reloads.

  • NFS cluster lifecycle with shared RADOS recovery pool
  • Per-export configuration — CephFS-backed, pseudo path, access type (RW/RO), squash mode
  • Client access rules — IP, CIDR, or hostname allow-lists with per-client access and squash overrides
  • Dynamic Ganesha reload across every cluster member on config change
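
A per-export configuration of this shape is plain Ganesha syntax; the export ID, paths, and client subnet are illustrative:

```
EXPORT {
    Export_Id = 101;
    Path = "/volumes/shared";
    Pseudo = "/shared";
    Access_Type = RW;
    Squash = No_Root_Squash;
    FSAL {
        Name = CEPH;
        Filesystem = "cephfs";
    }
    CLIENT {
        Clients = 10.0.0.0/24;    # per-client override: this subnet is read-only
        Access_Type = RO;
    }
}
```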

§ 08

S3 / object storage (RGW)

Full S3 stack — users, keys, policies, buckets — with a visual policy editor.

  • RGW gateway lifecycle with per-node deployment and backend tracking
  • S3 user and credential management with full capability matrix
  • Visual S3 policy editor — statement builder with JSON import/export and effective-access evaluation
  • Bucket inventory with grant-level access controls
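
The policies the visual editor builds are ordinary S3 bucket-policy JSON; the user and bucket names are illustrative:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "AWS": ["arn:aws:iam:::user/analytics"] },
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": ["arn:aws:s3:::reports", "arn:aws:s3:::reports/*"]
  }]
}
```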

§ 09

Load balancing

HAProxy + Keepalived, pre-wired to your S3 and NFS backends with instant VRRP failover.

  • Multi-member load-balancer clusters with active/passive VRRP failover
  • Floating VIP with configurable VRID and priority per member
  • HTTP and TCP mode frontends
  • Auto-discovers S3 (RGW) and NFS endpoints — you don't hand-maintain a backend list
  • PROXY protocol support for upstream client-IP preservation
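
The VRRP side of the pair reduces to a keepalived stanza like this; the interface, VRID, priority, and VIP are illustrative:

```
vrrp_instance s3_vip {
    state MASTER              # BACKUP on the passive member
    interface eth0
    virtual_router_id 51      # configurable per cluster
    priority 150              # lower on standbys
    advert_int 1
    virtual_ipaddress {
        10.0.0.100/24
    }
}
```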

§ 10

Networking

Per-node NIC config, cluster DNS, and VIP binding that actually fails over fast.

  • Per-node NIC and IP configuration with automatic reconciliation
  • VIP binding with gratuitous ARP — instant MAC-table convergence on failover
  • Cluster-wide DNS server provisioning
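
Fast convergence hinges on announcing the VIP's new home the moment it moves. In plain iproute2/iputils terms, with interface and address illustrative:

```shell
# Bind the VIP on the new owner, then send gratuitous ARP so switches and
# peers update their MAC tables immediately instead of waiting for ARP timeout
ip addr add 10.0.0.100/24 dev eth0
arping -c 3 -A -I eth0 10.0.0.100
```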

§ 11

Control-plane HA

Sentinel itself runs highly available — and the Ceph data path never depends on it.

  • Active-passive master failover with automatic leader promotion
  • Floating cluster VIP that follows the active leader
  • Standby API for health checks — no proxying
  • Data path is pure Ceph — client I/O keeps flowing through a control-plane failover

§ 12

Config editor & drift control

Every service config is editable in the browser. Drift gets caught and fixed on its own.

  • In-browser editor for ceph.conf, haproxy.cfg, keepalived.conf, and Ganesha exports
  • 30-second reconciliation loop — detects drift, pushes a refresh only when a real change is present
  • Automatic post-reload hooks (systemctl reload and equivalents)
  • One-click drift repair — no "push to all hosts" mental model to carry around
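
The reconcile-only-on-real-change idea can be sketched in a few lines of shell; the renderer and file paths are stand-ins, not Sentinel internals:

```shell
# Stand-in renderer: produces the desired config content
render() { printf 'maxconn 4096\n'; }

render > /tmp/rendered.cfg

# Push and reload only when the on-disk copy actually differs
if ! cmp -s /tmp/rendered.cfg /tmp/current.cfg; then
    cp /tmp/rendered.cfg /tmp/current.cfg
    # A post-reload hook would run here, e.g. `systemctl reload haproxy`
    echo "drift repaired"
fi
```

Running it twice shows the point: the second pass finds no diff and touches nothing.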

§ 13

Monitoring & observability

Live state, live logs, live metrics — streamed to the UI, not polled on a stale loop.

  • Ceph cluster, OSD, MON, and pool metrics tracked in real time
  • Per-daemon process state and logs
  • Hardware state refreshed every 5 seconds
  • Task progress, log lines, and state transitions stream to the UI in real time
  • Queryable task run history with per-task stdout, stderr, timing, and hooks

§ 14

Users & access

Accounts, roles, SSH keys — and a web terminal to every node when you need one.

  • User account management — create, update, rotate password, delete
  • Role and permission controls per user
  • SSH key pair management with automatic deployment to cluster nodes
  • Web-based SSH terminal to any node — zero client install, connects through the active leader automatically
  • License sync

§ +

And more, every week.

This list grows with every release. The changelog has the shipping history. Docs have the how-tos. And if the capability you need isn't on this page yet — tell us. We build toward operator problems, not roadmaps.