Posts up to 2024-12-19

Vitastor 1.10.0 released

VitastorFS in CSI, NFS-RDMA, backfillfull and more.

2024-12-19 Continue reading →

Vitastor 1.9.3 released

  • Support custom hybrid OSD creation (vitastor-disk prepare --hybrid --fast-devices /dev/xxx,/dev/yyy; see the example after this list)
  • Auto-change partition paths to /dev/disk/by-partuuid/ in vitastor-disk prepare
  • Allow selecting cached I/O in vitastor-disk commands
  • Fix multiple bugs in vitastor-disk resize & add tests for them
  • Fix vitastor-disk write-meta/write-journal writing to an incorrect device in superblock-based mode
  • Fix vitastor-disk prepare sometimes not seeing new partitions (again)
  • Cleanup PG history and stats of deleted pools
  • Fix “is already mounted” checks in CSI
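
For the hybrid OSD creation item above, here is a minimal sketch of the new invocation. The device paths are placeholders, and passing the slow data devices as positional arguments is an assumption based on the usual vitastor-disk prepare syntax:

    # Journal and metadata go to the listed fast (SSD/NVMe) devices,
    # data goes to the remaining (HDD) devices
    vitastor-disk prepare --hybrid --fast-devices /dev/nvme0n1,/dev/nvme1n1 /dev/sda /dev/sdb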

2024-11-07 Continue reading →

Vitastor 1.9.2 released

New features

  • Support resizing normal vitastor-disk partitions and moving journal/metadata: vitastor-disk resize (see the sketch below)
  • Support simple forms of vitastor-disk {dump,write}-{meta,journal} for OSD partitions
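
A sketch of how the new resize support might be used. The option names here are assumptions based on the description above, not verified syntax, so consult vitastor-disk resize --help for the exact spelling:

    # Hypothetical: move the journal of OSD 3 to another partition and enlarge it
    vitastor-disk resize 3 --move-journal /dev/nvme0n1p3 --journal-size 128M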

2024-10-20 Continue reading →

Vitastor 1.9.1 released

Hotfixes for OpenNebula and upgrade hotfix for 1.7

  • Fix deploy.vitastor, save.vitastor, restore.vitastor scripts not working for nodes other than master oned
  • Fix deploy.vitastor not working for VMs without Vitastor disks
  • Disable clearing old PG configuration when upgrading from 1.7 or older versions (it was breaking old clients)

2024-09-14 Continue reading →

Vitastor 1.9.0 released

  • OpenNebula support! Installation instructions
  • Added vitastor-cli rm --exact|--matching command
  • Added vitastor-cli dd command: copy files between Vitastor images, files and pipes (see the sketch after this list)
  • Add a startup timeout to vitastor-cli to not wait for etcd infinitely
  • Fix non-working OSD_OP_READ_CHAIN_BITMAP O_o
  • Autodetect block_size/bitmap_granularity/immediate_commit when creating pools
  • Do not allow creating multiple pools with the same name from vitastor-cli
  • Fix skip_cache_check option not applied due to type issue (see github issue #70)
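
For the new vitastor-cli dd command, a short sketch. The parameter names follow GNU dd conventions, with iimg=/oimg= for Vitastor images; this spelling is an assumption based on the changelog wording, not verified syntax:

    # Hypothetical: export a Vitastor image to a local file...
    vitastor-cli dd iimg=testimg of=./testimg.raw bs=1M
    # ...and import the file back into another image
    vitastor-cli dd if=./testimg.raw oimg=testimg2 bs=1M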

2024-09-06 Continue reading →

Vitastor 1.8.0 released

Bugfix release; it would have been 1.7.2, but etcd layout changes mandate 1.8.0. :-)

  • Change etcd layout: /config/pgs is now /pg/config, and /pg/stats/* is now /pgstats/* (see the sketch after this list). This is required to fix a rare PG history tracking issue caused by non-atomic delivery of etcd events, which sometimes resulted in incomplete objects in EC pools after mass OSD restarts. Upgrading can be performed freely; downgrading requires an additional action: 1.8.0 to 1.7.1
  • Fix a rare client hang on PG primary OSD switch
  • Fix vitastor-nfs started using mount command sometimes not stopping automatically after unmount
  • Fix vitastor-nfs mounts started using mount command sometimes hanging after daemonizing
  • Fix merge/flatten into a pool with different object size (image migration between pools case)
  • Do not print extra “PG disappeared after reload” verbose log messages for non-existing PGs
  • Fix clustered Antietcd support and persistence filter
  • Do not try to purge the same OSD multiple times if several of its devices are passed to purge
  • Various node.js binding fixes
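
Since the etcd key layout changed in this release, one way to verify the new keys after upgrading is to list them under the Vitastor etcd prefix. The endpoint address and the /vitastor prefix below are assumptions for illustration (the prefix is configurable):

    # List the new PG configuration and statistics keys (etcd v3 API)
    etcdctl --endpoints=http://10.0.0.1:2379 get --prefix /vitastor/pg/config
    etcdctl --endpoints=http://10.0.0.1:2379 get --prefix /vitastor/pgstats/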

2024-08-11 Continue reading →

Vitastor S3 prototype based on Zenko CloudServer

The prototype of the Vitastor S3 backend for Zenko CloudServer is now running 😊

This is, of course, far from being a release; it lacks at least defragmentation. But you can already try to run it and write/read something to it via S3. It's even usable with GeeseFS, as shown below. 😊
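
Assuming the CloudServer endpoint is listening locally on its default port 8000, a bucket already exists, and S3 credentials are configured in the environment or ~/.aws/credentials, mounting that bucket with GeeseFS might look like this (the endpoint, bucket name and mountpoint are placeholders):

    # Mount an S3 bucket served by the Vitastor-backed CloudServer prototype
    geesefs --endpoint http://localhost:8000 testbucket /mnt/testbucket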

So, here are the instructions!

2024-08-11 Continue reading →

Vitastor 1.7.1 released

Some stupid hotfixes for 1.7.0 :)

  • Fix NFS mount
  • Fix modify-osd
  • Fix use_antietcd not taken from /etc

2024-07-16 Continue reading →

Vitastor 1.7.0 released

Omnidirectional release

2024-07-15 Continue reading →

S3 implementation comparison

One question: where to pilfer from, i.e. which S3 implementation to reuse?

I have no desire to write S3 from scratch: the protocol, even though simple, has a lot of small details that require attention and cannot be ignored, because users always want maximum compatibility with Amazon S3.

At the time of writing this post, the following options were found: Minio, SeaweedFS, Ceph (RadosGW), Zenko CloudServer, OpenStack Swift, and Deuxfleurs Garage.

The main interest of the comparison is the S3 frontend, the external part of the server, because the storage layer will be replaced with our own (Vitastor) anyway.

The S3 tests from Ceph were run during the comparison, but only in the simplest configuration, without setting up the additional integrations that some tests may require.

Let’s try to look at the features of each implementation!

2024-05-09 Continue reading →
