## Server-side features
- Basic part: highly available block storage with symmetric clustering and no single point of failure (SPOF)
- Performance ;-D
- Multiple redundancy schemes: replication, XOR n+1, and Reed-Solomon erasure codes (based on the jerasure and ISA-L libraries) with any number of data and parity drives in a group
- Configuration via simple JSON data structures in etcd (parameters, pools and images)
- Automatic data distribution over OSDs, with support for:
  - Mathematical optimization for better uniformity and less data movement
  - Multiple pools
  - Placement tree, OSD selection by tags (device classes) and placement root
  - Configurable failure domains
- Recovery of degraded blocks
- Rebalancing (data movement between OSDs)
- Lazy fsync support
- Per-OSD and per-image I/O and space usage statistics in etcd
- Snapshots and copy-on-write image clones
- Write throttling to smooth random write workloads in SSD+HDD configurations
- RDMA/RoCEv2 support via libibverbs
- Scrubbing (verification of copies)
- Checksums
- Client write-back cache
- Intelligent recovery auto-tuning
- Clustered file system
- Experimental internal etcd replacement: antietcd
- Built-in Prometheus metric exporter
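To illustrate the etcd-based configuration model mentioned above: pools are described by plain JSON values under Vitastor's etcd key prefix. The sketch below is an assumption-laden example, not a definitive reference — the endpoint address and parameter values are hypothetical, and the key path and parameter names follow the upstream documentation as best understood:

```shell
# Hypothetical sketch: define pool 1 as a 3-way replicated pool with
# 256 placement groups, spreading replicas across different hosts.
# Endpoint address and all values here are placeholders.
etcdctl --endpoints=http://10.0.0.1:2379 put /vitastor/config/pools \
  '{"1":{"name":"testpool","scheme":"replicated","pg_size":3,"pg_minsize":2,"pg_count":256,"failure_domain":"host"}}'
```

Monitors watch this key, so changing the JSON is enough to reconfigure the pool; no daemon restart or separate config management tool is involved.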
## Plugins and tools
- Proxmox storage plugin and packages
- OpenNebula storage plugin
- CSI plugin for Kubernetes
- OpenStack support: Cinder driver, Nova and libvirt patches
- Debian and CentOS packages
- Image management CLI (vitastor-cli)
- Disk management CLI (vitastor-disk)
- Generic user-space client library
- Native QEMU driver
- Loadable fio engine for benchmarks
- NBD proxy for kernel mounts
- Simplified NFS proxy for file-based image access emulation (suitable for VMware)
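Several of the tools above compose into a typical workflow: create an image with `vitastor-cli`, benchmark it with the loadable fio engine, then expose it as a kernel block device through the NBD proxy. This is a sketch assuming a running cluster and default etcd settings; exact flag spellings should be checked against the installed version's help output:

```shell
# Create a 10 GiB image named "testimg" in pool "testpool"
# (pool name is a placeholder for this example)
vitastor-cli create --pool testpool -s 10G testimg

# Benchmark the image directly with fio via the loadable engine,
# bypassing any kernel block layer
fio -thread -ioengine=libfio_vitastor.so -name=bench -bs=4M -direct=1 \
    -iodepth=16 -rw=write -image=testimg

# Map the image to a local /dev/nbdX device for kernel-level mounts
vitastor-nbd map --image testimg
```

The fio engine path may differ depending on where the package installs `libfio_vitastor.so`.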
## Roadmap
The following features are planned for the future:
- Control plane optimization
- Other administrative tools
- Web GUI
- iSCSI and NVMe-oF gateways
- Multi-threaded client
- Faster failover
- S3 (object storage)
- Tiered storage (SSD caching)
- NVDIMM support
- Compression (possibly)