Fast and simple distributed software-defined storage

Vitastor

  • Distributed SDS
  • ...but Fast 🚀 — latency 0.1 ms
  • ...and Simple ✌️ — 60k lines of code, not 1 million
  • From Russia with love 🙂

Software-Defined Storage (SDS)

Software that turns ordinary servers with ordinary drives into a single scalable, fault-tolerant storage cluster with extended features

Why do IaaS/PaaS providers use SDS?

  • Scalability
  • Client data durability
  • Reduced costs due to hyperconvergence
  • No need for server "micromanagement"
  • Different storage classes (SSD, HDD)

But most SDS add a lot of overhead

Overhead

  • Q=1 — best possible latency
  • 4 KB write to an SSD — 0.04 ms
  • 4 KB write to Ceph — from ~1 ms
  • In-house cloud SDS — roughly the same
  • 2400% overhead!
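
The 2400% figure is plain arithmetic on the two latencies above; a quick check:

```shell
# Overhead of a ~1 ms Ceph write relative to a 0.04 ms raw SSD write:
# (1.0 / 0.04 - 1) * 100 = 2400%
awk 'BEGIN { printf "%.0f%%\n", (1.0 / 0.04 - 1) * 100 }'
```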

Vitastor

  • ~ 0.1 ms replicated latency
  • ~ 0.2 ms erasure-coded latency
  • 3–8 GB/s per client (VM)
  • ~400,000 iops per client
  • Just ~1 CPU core per NVMe disk
  • Low (50%) rebalance impact

Vitastor — protocols

  • Block access (VM disks, containers):
    Kubernetes, Proxmox, OpenNebula, OpenStack and others
  • VitastorFS (NFS) — clustered POSIX FS
  • Object storage (S3) — based on Zenko CloudServer

Features

  • Replication, erasure codes (N+K)
  • All disk support: SSD/NVMe, HDD, SSD+HDD
  • Flexible data placement
  • RDMA / RoCEv2 support
  • Fast snapshots and clones
  • Fast checksums, data scrubbing
  • Monitoring
  • Kubernetes operator
  • And more...

Architecture

  • Symmetric distributed architecture, no SPOF (single point of failure)
  • Block base layer
  • Uniform load balancing
  • Transactional writes → data loss protection
  • Optimised for modern SSD/HDD

Ease of support

  • Low number of components
  • Human-readable metadata in etcd
  • Minimal external dependencies
  • Compact implementation:
    ~60k lines of code (Ceph ~1M)
  • Non-standard architecture support (E2K)

Licensing

  • Its own copyleft license: VNPL
  • Free to use in an open-source environment
  • Closed-source services require commercial support
  • Technical and architectural support from the author


Block Storage

With support for all major KVM-based systems and containers: OpenNebula, OpenStack, Proxmox VE, Kubernetes

Clustered File System

A ground-up implementation of a scalable POSIX Read-Write-Many file system, mountable over NFS 3.0

Object Storage (S3)

Based on Zenko CloudServer

Latest Posts

Vitastor 3.0.9 released

  • Fix clients reading from snapshots incorrectly on retries (for example, when OSDs were restarted during a read)
  • Fix a possible “send-after-free” in the OSD which could probably also affect snapshot reads
  • Fix broken garbage collection in the new store, which led to very high memory usage
  • Make the new store do a full garbage collection on OSD startup by default (can be turned off with the new gc_on_start parameter)
  • Make startup logging with the new store more informative; allow logging the amount of garbage with log_level > 1
  • Fix the OSD crashing when handling a read_bitmap operation on a deleted object with the new store
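
If needed, the new default full GC on startup can be turned off via the gc_on_start parameter named above. A minimal sketch, assuming the standard /etc/vitastor/vitastor.conf JSON config file (the file location is an assumption about your deployment):

```json
{
  "gc_on_start": false
}
```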

2026-04-04

Vitastor 3.0.8 released

  • A surrogate, increment-only peer ID is now used instead of the file descriptor number, and the client cleanup sequence has been reworked in another attempt to fix a possible PG hang in ‘peering’ states. The history of this bug is unclear and it still hasn’t been reproduced in tests, but several users have reported it since 3.0.4 (only with RDMA enabled).
  • Fix a minor bug where an OSD could skip reporting misplaced PG state when moving an object in non-immediate_commit mode.
  • Add Ubuntu 26.04 (Resolute Raccoon) support and patches for qemu 10.2 and libvirt 12.0

Note: version 3.0.7 was re-released as 3.0.8 because the original build was incorrect.

2026-03-29

Vitastor 3.0.6 released

  • Fix one more bug in old store checksums O_o: checksum verification wasn’t working correctly on reads shortly after small initial offsetted writes (when a BIG_WRITE was read from the journal). This could lead to objects being incorrectly detected as corrupted.
  • Fix clients possibly not being destroyed after being switched to RDMA. Not 100% verified, but in theory it could rarely lead to PGs hanging in peering/stopping states with RDMA enabled.
  • Use vitastor-cli in the Cinder driver for modifications instead of direct etcd interaction.
  • Fix mon_https_ca parsing.
  • Fix Docker install.sh script, add instructions for Podman.

2026-03-22

Vitastor 3.0.5 released

A single urgent fix for the new store: metadata was written incorrectly when the filled metadata area exceeded 4 GB, due to a simple & stupid 32-bit integer overflow. 🤦 This led to OSDs corrupting their metadata and being unable to start on the next run.

The bug affected only the new store (meta_format=3) and OSDs with a metadata area larger than 4 GB. You can check your OSD metadata area size via data_offset in vitastor-disk read-sb /dev/vitastor/osdXX-data for single-disk OSDs, or via blockdev --getsize64 /dev/vitastor/osdXX-meta for hybrid OSDs.
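
The 4 GB threshold is the 32-bit boundary, 2^32 = 4294967296 bytes. A small sketch with a hypothetical helper (the size value itself would come from the commands above):

```shell
# Hypothetical helper: given a metadata area size in bytes (e.g. from
# "blockdev --getsize64 /dev/vitastor/osdXX-meta"), report whether the OSD
# falls into the range affected by the 32-bit overflow fixed in 3.0.5.
meta_area_affected() {
  # 4 GB here means 2^32 = 4294967296 bytes
  if [ "$1" -gt 4294967296 ]; then
    echo "affected: metadata area exceeds 4 GB"
  else
    echo "not affected"
  fi
}

meta_area_affected 8589934592   # 8 GB metadata area
meta_area_affected 1073741824   # 1 GB metadata area
```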

If you’re affected, it’s highly recommended to:

  • Install updated packages without restarting OSDs
  • Restart OSDs one by one
  • If restarting an OSD triggers a “has_degraded” rebalance, wait for it to finish before restarting the next one
  • If an OSD fails to start with a “double-claimed block” or “metadata is corrupt or too old” error in the log, purge and recreate it, and again wait for the “has_degraded” rebalance to finish before proceeding to the next OSD
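
The one-by-one procedure can be sketched as a loop. Everything here is hypothetical glue: both callbacks are placeholders you would replace with your own commands (for example, a systemctl restart of the OSD's service unit, and a poll of cluster state until the rebalance finishes):

```shell
# Sketch of the rolling restart: restart one OSD, wait for the
# "has_degraded" rebalance to finish, then move on to the next.
rolling_restart() {
  restart_one="$1"     # placeholder: restarts a single OSD given its number
  wait_rebalance="$2"  # placeholder: blocks until rebalance is done
  shift 2
  for osd in "$@"; do
    "$restart_one" "$osd" || return 1   # restart one OSD
    "$wait_rebalance" || return 1       # wait before touching the next one
  done
}
```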

2026-03-07

Vitastor 3.0.4 released

New store bug fixes

  • Fix very slow OSD startup (hours) possible in unlucky cases
  • Fix incorrect garbage entry validation on start in some cases, preventing OSD startup
  • Fix small_write_throttling not functioning and either hanging or crashing OSDs
  • Fix “different fact_versions returned from subops” error possible with EC in some cases
  • Add a recovery option for the new store: skip_corrupted_meta_entries
  • Disable intent writes (even 4k) for hybrid OSDs by default for better write buffering

2026-03-03
