Fast and simple distributed software-defined storage

Vitastor

  • Distributed SDS
  • ...but Fast 🚀 — latency 0.1 ms
  • ...and Simple ✌️ — 60k lines of code, not 1 million
  • From Russia with love 🙂

Software-Defined Storage (SDS)

Software that assembles ordinary servers with ordinary drives into a single scalable, fault-tolerant storage cluster with extended features

Why do IaaS/PaaS providers use SDS?

  • Scalability
  • Client data preservation
  • Reduced costs due to hyperconvergence
  • No need for server "micromanagement"
  • Different storage classes (SSD, HDD)

But most SDS come with a lot of overhead

Overhead

  • Q=1 (queue depth 1) — best possible latency
  • 4 KB write to SSD — 0.04 ms
  • 4 KB write to Ceph — from ~1 ms
  • In-house cloud SDS — about the same
  • 2400% overhead! (worked out below)
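
Worked out from the numbers above, taking the ~1 ms Ceph figure as the comparison point:

$$ \frac{1\,\text{ms} - 0.04\,\text{ms}}{0.04\,\text{ms}} \times 100\% = 2400\% $$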

Vitastor

  • ~0.1 ms latency with replication
  • ~0.2 ms latency with erasure coding
  • 3–8 GB/s per client (VM)
  • ~400,000 IOPS per client
  • Just ~1 CPU core per NVMe drive
  • Low (50%) rebalance impact

Vitastor — protocols

  • Block access (VM disks, containers):
    Kubernetes, Proxmox, OpenNebula, OpenStack and others
  • VitastorFS (NFS) — clustered POSIX FS
  • Object storage (S3) — based on Zenko CloudServer

Features

  • Replication, erasure codes (N+K; see the sketch below)
  • Support for all drive types: SSD/NVMe, HDD, hybrid SSD+HDD
  • Flexible data placement
  • RDMA / RoCEv2 support
  • Fast snapshots and clones
  • Fast checksums, data scrubbing
  • Monitoring
  • Kubernetes operator
  • And more...
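
For a rough sense of what N+K means for raw capacity, here is a minimal illustrative sketch (generic arithmetic, not Vitastor code; the 3-replica and 4+2 layouts are just example configurations):

```python
# Illustrative only: raw-space multiplier of replication vs. N+K erasure coding.

def replication_multiplier(replicas: int) -> float:
    # Every object is stored `replicas` times in full.
    return float(replicas)

def ec_multiplier(n: int, k: int) -> float:
    # Data is split into N data chunks plus K parity chunks;
    # the pool survives the loss of up to K chunks per object.
    return (n + k) / n

print(replication_multiplier(3))  # 3.0x raw space for 3 replicas
print(ec_multiplier(4, 2))        # 1.5x raw space for an example 4+2 pool
```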

Architecture

  • Symmetric distributed architecture, no SPOF (single point of failure)
  • Block storage as the base layer
  • Uniform load balancing (sketched below)
  • Transactional writes → protection against data loss
  • Optimised for modern SSDs and HDDs
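
A minimal conceptual sketch of how placement-group (PG) based distribution keeps the load uniform; this is generic illustration code under assumed names and numbers, not Vitastor's actual placement logic:

```python
# Conceptual sketch: objects hash to a placement group (PG), and each PG is
# assigned a set of OSDs, so data and load spread evenly across the cluster.
# The hash function, PG count and OSD sets below are illustrative assumptions.
import zlib

def object_to_pg(inode: int, stripe: int, pg_count: int) -> int:
    key = f"{inode}:{stripe}".encode()
    return zlib.crc32(key) % pg_count + 1

# Hypothetical PG -> OSD mapping for a 3-replica pool spread over 9 OSDs.
pg_osd_sets = {1: [1, 4, 7], 2: [2, 5, 8], 3: [3, 6, 9]}

pg = object_to_pg(inode=12345, stripe=0, pg_count=len(pg_osd_sets))
print(f"object goes to PG {pg}, stored on OSDs {pg_osd_sets[pg]}")
```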

Ease of support

  • Small number of components
  • Human-readable metadata in etcd
  • Minimal external dependencies
  • Compact implementation:
    ~60k lines of code (Ceph ~1M)
  • Support for non-mainstream CPU architectures (E2K)

Licensing

  • Its own copyleft license: VNPL
  • Free to use in an open-source environment
  • Closed-source services require commercial support
  • Technical and architectural support from the author

Block Storage

With support for all major KVM-based systems and containers: OpenNebula, OpenStack, Proxmox VE, Kubernetes

Clustered File System

Ground-up implementation of a scalable POSIX Read-Write-Many file system, mountable over NFS 3.0

Object Storage (S3)

Based on Zenko CloudServer

Latest Posts

Vitastor 3.0.0 released

A single new feature: the new log-structured metadata store implementation, described in the presentation from Moscow Highload’2025 (check it out here).

It’s now the default store for new OSDs. Support for the old store remains in place; you can still choose it for new OSDs with vitastor-disk prepare --meta_format 2.

OSDs from previous versions with the old store format will also continue to operate just like before.

Some documentation: atomic_write_size, meta_format.

2025-12-06 Continue reading →

Vitastor 2.4.4 released

  • Implement the vitastor-kv rescue command, which dumps all readable blocks from the database regardless of the tree structure and allows recovering a corrupted database.
  • Simplify primary OSD selection: use just the PG number instead of a hash. This distributes primary OSDs more evenly and achieves better read speeds (sketched after this list).
  • Fix vitastor-kv-stress hang on parallel list & reopen.
  • Fix CAS writes without immediate_commit. The fix is sufficient for vitastor-kv to operate correctly in non-immediate_commit pools.
  • Fix NFS ACCESS calls for broken inodes (allow root access to them).
  • Fix broken vitastor-kv-stress --runtime parameter.
  • Fix bad vitastor-kv dump output buffering leading to incorrect JSON in dumps.
  • Fix writing garbage from uninitialized memory to the empty space in VitastorKV DB.
  • Refactor some code in preparation for the release of the new log-structured metadata store.
  • Replace the assert(done != expected) error message, probably caused by kernel issues with zero-copy TCP send, with a more concise error description.
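
The primary OSD selection change above can be pictured roughly like this; a simplified sketch of the idea, not the actual implementation (the function names and OSD sets are made up):

```python
# Simplified sketch: choosing the primary by PG number rotates primaries evenly
# across each PG's OSD set, while choosing by a hash of the PG number can
# cluster them on a few OSDs.
import zlib

def primary_by_hash(pg_num: int, osd_set: list[int]) -> int:
    return osd_set[zlib.crc32(str(pg_num).encode()) % len(osd_set)]

def primary_by_pg_number(pg_num: int, osd_set: list[int]) -> int:
    return osd_set[pg_num % len(osd_set)]

osd_set = [1, 2, 3]
print([primary_by_pg_number(pg, osd_set) for pg in range(1, 7)])  # even rotation
print([primary_by_hash(pg, osd_set) for pg in range(1, 7)])       # may be uneven
```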

2025-12-01 Continue reading →

Vitastor 2.4.3 released

  • Daemonize before forking in NFS proxy to fix OSD RDMA support (#107)
  • Fix possible PG_INCOMPLETE on node outage when allow_net_split is false
  • Fix build under Ubuntu 25.10
  • Fix journal_no_same_sector_overwrites mode after optimizing WA (#109)

2025-11-09 Continue reading →

Vitastor 2.4.2 released

  • Fix make-etcd broken after adding IPv6 support in 2.4.1 :)
  • Do not mark “inverse child” as deleted during rename-delete
  • Support QEMU 10.1

2025-11-04 Continue reading →

Vitastor 2.4.1 released

  • Optimize WA (write amplification): reduce it from 4-5 to 3.x by batching journal writes
  • Fix ls-pool showing wrong used % when OSDs are reweighted (#105)
  • Fix vitastor-cli rm sometimes hanging when PGs are down (#99)
  • Fix vitastor-cli ls displaying raw used space for all inodes when -p is specified (#101)
  • Fix “Cannot set property of undefined” in monitor when deleting pool with live inode op stats (#103)
  • Revert vitastor-kv change from 2.2.0 leading to crashes in vitastor-kv-stress (#100)
  • Add pg_size change check for XOR pools
  • Fix root access to VitastorFS files
  • Add IPv6 support to make-etcd
  • Fix IPv6 etcd address support in OSD & client library

2025-10-29 Continue reading →

All Posts