Fast and simple distributed
software-defined storage

Vitastor

  • Distributed SDS
  • ...but Fast 🚀 — latency 0.1 ms
  • ...and Simple ✌️ — 60k lines of code, not 1 million
  • From Russia with love 🙂

Software-Defined Storage (SDS)

Software that assembles ordinary servers with ordinary drives into a single scalable, fault-tolerant storage cluster with advanced features

Why do IaaS/PaaS providers use SDS?

  • Scalability
  • Client data preservation
  • Reduced costs due to hyperconvergence
  • No need for server "micromanagement"
  • Different storage classes (SSD, HDD)

But most SDS come with significant overhead

Overhead

  • Q=1 (queue depth 1) — best possible latency
  • 4 KB write to an SSD — 0.04 ms
  • 4 KB write to Ceph — from ~1 ms
  • In-house cloud SDS — roughly the same
  • 2400 % overhead!
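
The 2400 % figure follows directly from the two Q=1 numbers above: (1 ms − 0.04 ms) / 0.04 ms = 24, i.e. the SDS layer adds roughly 24 times the latency of the bare SSD write, or about 2400 % overhead.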

Vitastor

  • ~0.1 ms latency with replication
  • ~0.2 ms latency with erasure coding
  • 3–8 GB/s per client (VM)
  • ~400,000 IOPS per client
  • Just ~1 CPU core per NVMe drive
  • Low (50 %) rebalance impact

Vitastor — protocols

  • Block access (VM disks, containers):
    Kubernetes, Proxmox, OpenNebula, OpenStack and others
  • VitastorFS (NFS) — clustered POSIX FS
  • Object storage (S3) — based on Zenko CloudServer

Features

  • Replication, erasure codes (N+K) — see the space-overhead example after this list
  • Support for all disk types: SSD/NVMe, HDD, hybrid SSD+HDD
  • Flexible data placement
  • RDMA / RoCEv2 support
  • Fast snapshots and clones
  • Fast checksums, data scrubbing
  • Monitoring
  • Kubernetes operator
  • And more...
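
A quick worked example of the N+K tradeoff referenced above (standard erasure-coding arithmetic, not Vitastor-specific measurements): an N+K pool stores N data fragments plus K parity fragments, so it survives K simultaneous failures at a raw-space cost of (N+K)/N. For 4+2 that is 1.5× the logical data size, compared to 3× for 3-way replication with the same tolerance of two failures.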

Architecture

  • Symmetric distributed design, no single point of failure
  • Block storage as the base layer
  • Uniform load balancing
  • Transactional writes → protection against data loss
  • Optimised for modern SSDs and HDDs

Ease of support

  • Small number of components
  • Human-readable metadata in etcd (see the sketch after this list)
  • Minimal external dependencies
  • Compact implementation:
    ~60k lines of code (Ceph ~1M)
  • Support for non-mainstream CPU architectures (E2K / Elbrus)
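
As an illustration of the human-readable metadata mentioned above, here is a minimal sketch that dumps configuration keys from etcd using the python-etcd3 client. The host, port and the /vitastor key prefix are assumptions made for the example — adjust them to match your cluster:

  import etcd3

  # Assumed connection parameters and key prefix; change them to match
  # your etcd endpoints and Vitastor's etcd key prefix.
  client = etcd3.client(host='127.0.0.1', port=2379)

  # The metadata is human-readable, so the values can simply be decoded
  # and printed for inspection.
  for value, meta in client.get_prefix('/vitastor/config/'):
      print(meta.key.decode(), '=>', value.decode())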

Licensing

  • Own copyleft license: VNPL
  • Free to use in an open-source environment
  • Use in closed-source services requires commercial support
  • Technical and architectural support from the author

Contacts

Block Storage

With support for all major KVM-based systems and containers: OpenNebula, OpenStack, Proxmox VE, Kubernetes

Clustered File System

Ground-up implementation of a scalable POSIX Read-Write-Many file system, mountable over NFS 3.0

Object Storage (S3)

Based on Zenko CloudServer
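
Because the S3 layer is standard Zenko CloudServer, any ordinary S3 client should be able to talk to it. Below is a minimal sketch using Python and boto3; the endpoint URL and credentials are placeholders, not real defaults — substitute the address and keys of your own installation:

  import boto3

  # Placeholder endpoint and credentials for an S3 (Zenko CloudServer)
  # installation; replace them with your own values.
  s3 = boto3.client(
      's3',
      endpoint_url='http://s3.example.local:8000',
      aws_access_key_id='ACCESS_KEY',
      aws_secret_access_key='SECRET_KEY',
  )

  # Ordinary S3 operations: create a bucket, upload an object, read it back.
  s3.create_bucket(Bucket='demo')
  s3.put_object(Bucket='demo', Key='hello.txt', Body=b'hello vitastor')
  print(s3.get_object(Bucket='demo', Key='hello.txt')['Body'].read())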

Latest Posts

Vitastor 2.3.0 released

ublk, Veeam and checksum enablement fix.

2025-08-25

Vitastor 2.2.2 released

  • Fix a bug introduced in 2.2.0: pg_locks weren’t correctly disabled for pools without local_reads, which could lead to inactive pools during various operations
  • Fix an old bug where OSDs could send sub-operations to incorrect peer OSDs when their connections were stopped and re-established quickly; in 2.2.0 it usually led to “sequencing broken” messages in OSD logs
  • Fix the debug use_sync_send_recv mode

2025-06-07

Vitastor 2.2.1 released

  • Fix vitastor-disk purge, which was broken after adding the “OSD is still running” check
  • Fix iothreads hanging after adding zero-copy send support
  • Fix enabling localized reads online (without restarting OSDs) in the default PG lock mode

2025-05-25

Vitastor 2.2.0 released

New features

  • Localized read support for multi-datacenter setups.
  • io_uring-based zero-copy send support - read the instructions carefully for optimal performance!
  • Improve and speed up data distribution, especially for very large hosts (100+ OSDs). Previously, PG optimization speed depended on the number of OSDs; now it only depends on the number of failure domains. Distribution over specific OSDs is now also more even, and becomes strictly more even as you increase the number of PGs.
  • Add a very interesting instruction on changing NFS_MAX_FILE_IO_SIZE
  • Check operation sequencing and stop connections when it breaks - this should help catch some very rare RDMA packet loss problems.
  • vitastor-cli rm-osd now refuses to remove OSDs which are still up and suggests using vitastor-disk purge instead.
  • Allow removal of direntries referring to non-existent inodes in VitastorFS.

2025-05-11
