Latest Posts

Vitastor 2.3.0 released

ublk, Veeam, and a checksum enablement fix.

2025-08-25 Continue reading →

Vitastor 2.2.2 released

  • Fix a bug introduced in 2.2.0: pg_locks weren’t correctly disabled for pools without local_reads, which could lead to inactive pools during various operations
  • Fix an old bug where OSDs could send sub-operations to incorrect peer OSDs when their connections were stopped and reestablished quickly; in 2.2.0 this usually led to “sequencing broken” messages in OSD logs
  • Fix debug use_sync_send_recv mode

2025-06-07 Continue reading →

Vitastor 2.2.1 released

  • Fix vitastor-disk purge, which was broken after the “OSD is still running” check was added
  • Fix iothreads hanging after the addition of zero-copy send support
  • Fix enabling localized reads online (without restarting OSDs) in the default PG lock mode

2025-05-25 Continue reading →

Vitastor 2.2.0 released

New features

  • Localized read support for multi-datacenter setups (see the configuration sketch after this list).
  • io_uring-based zero-copy send support - read the instructions carefully for optimal performance!
  • Improve and speed up data distribution, especially with very large hosts (100+ OSDs). Previously, PG optimization speed depended on the number of OSDs; now it only depends on the number of failure domains. Distribution over specific OSDs is now also more even and becomes strictly more even as you increase the number of PGs.
  • Add a very interesting instruction on how to change NFS_MAX_FILE_IO_SIZE
  • Check operation sequencing and stop connections when it breaks - this should help catch some very rare RDMA packet loss problems.
  • vitastor-cli rm-osd now refuses to remove OSDs which are still up and suggests using vitastor-disk purge instead.
  • Allow removal of direntries referring to non-existent inodes in VitastorFS.
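
Localized reads are controlled per pool via the local_reads parameter mentioned in the changelog. A minimal sketch of enabling them on an existing pool - the pool name, the value "nearest" and the exact CLI syntax are assumptions, so check the pool configuration docs for the supported values:

  # Hypothetical pool name; "nearest" and the option syntax are assumptions.
  vitastor-cli modify-pool testpool --local_reads nearest
  # If modify-pool does not accept the parameter directly, the same key can be
  # set in the pool definition stored in etcd under /vitastor/config/pools.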

2025-05-11 Continue reading →

Vitastor 2.1.0 released

Separate cluster network support, RDMA-CM and allow_net_split setting.
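
A minimal sketch of what a separate cluster network might look like in /etc/vitastor/vitastor.conf - the osd_cluster_network option name and the subnet values here are assumptions, only allow_net_split comes from the announcement above:

  # /etc/vitastor/vitastor.conf (JSON); option names are partly assumptions
  {
    "etcd_address": ["10.0.0.10:2379/v3"],
    "osd_network": "10.0.0.0/24",
    "osd_cluster_network": "10.1.0.0/24",
    "allow_net_split": false
  }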

2025-04-01 Continue reading →

First release of Vitastor S3

The moment has come - the Vitastor S3 implementation based on Zenko CloudServer is finally released.

Key differences from the prototype:

  • Volume defragmentation is implemented;
  • Volume metadata may now be stored in the same MongoDB as object metadata, not just in VitastorKV;
  • Tests for the Vitastor S3 backend added;
  • S3 is now packaged in a convenient Docker build.

2025-03-16 Continue reading →

Vitastor 2.0.0 released

No breaking changes - it’s 2.0.0 just because it includes S3 and because there are already too many 1.x releases :).

2025-03-16 Continue reading →

Vitastor 1.11.0 released

New features

  • Support containerized Vitastor installations: http://vitastor.io/en/docs/installation/docker.html
  • Add new functions to the node.js binding: delete(), get_immediate_commit(), on_ready(), get_min_io_size(), get_max_atomic_write_size()
  • S3 (Zenko CloudServer with Vitastor support) is coming shortly and will be released separately

2025-03-01 Continue reading →

Vitastor 1.10.1 released

New features

  • Add a “deleted” image flag which is set when vitastor-cli rm starts to delete an image but can’t delete it fully due to inactive PGs or stopped OSDs
  • Support JSON output in vitastor-disk prepare and purge
  • Show backfillfull pools in vitastor-cli status
  • Make object listings consistent (used in vitastor-cli rm/rm-data/merge/etc). This means that there is now a guarantee that if a data block is present when you invoke rm, rm will attempt to delete it, even if rm is invoked while a PG is switching state. Previously, in such cases rm could skip some objects and leave them behind as garbage, and merge could probably move data between snapshots incorrectly.
  • Make deletions (rm/rm-data) consistent. This means that rm/rm-data will either complete successfully and delete all requested image data, or complete with an error if some objects could not be deleted or if some data may be left on stopped OSDs. Previously, when some PGs or OSDs were inactive at the moment of deletion, rm-data behaved incorrectly: it didn’t retry deletions that failed due to dropped OSD connections, it could hang waiting for PGs to activate, and it could return a successful exit code while some garbage was possibly still left on some OSDs. Deletions are not yet fully atomic cluster-wide, which means that you still have to repeat the deletion request after you bring the stopped OSDs back, but now you always know for sure whether you have to repeat it (see the sketch after this list).
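
As an illustration of the new deletion semantics, a minimal sketch (the image name is hypothetical):

  # rm now either deletes all image data or fails with an error if some
  # PGs/OSDs are unavailable, marking the image with the “deleted” flag
  # instead of silently leaving garbage behind.
  vitastor-cli rm testimg
  # After the stopped OSDs are brought back, simply repeat the request to
  # finish the deletion.
  vitastor-cli rm testimg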

2025-01-03 Continue reading →

Older Posts →