FAQ

VFS supports any software RAID configuration in real time, without any of the configuration lock-in.

Conventional file systems don't automatically compress all data and file types. VFS optimizes storage efficiency by compressing data based on file type and size.

VFS uses erasure coding for raw media efficiency, which offers the following advantages:

a) data on a failed disk is automatically rebuilt elsewhere – the failed disk can be replaced when convenient
b) a fast recovery procedure (hours).

Data redundancy with conventional technologies such as RAID 5 and 6 has the following constraints:
a) a failed disk must be replaced immediately and manually
b) you can't lose more than two disks in a single RAID 6 data pool; extra disks are needed for redundancy, leaving fewer disks available for storage
c) a longer recovery procedure (days), which stresses the remaining disks.

VFS, on the other hand:
a) has fractional data overhead, anywhere from 1% to 100%, equating to anything from bit-flip error correction up to a distributed copy across all drives
b) can tolerate up to P drive failures (where P is the configured parity count) and still continue to function without interruption
c) regenerates the missing data when faulty hardware is replaced
d) lets you store more data on the same number of drives, because it uses fractional parity overhead for lossless error correction rather than RAID stripes.
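The fractional-overhead point above can be made concrete with a little arithmetic. The sketch below is purely illustrative: `parity_overhead` and the shard counts are hypothetical examples, not VFS defaults or APIs.

```python
# Hypothetical illustration of erasure-coding overhead vs. full mirroring.
# The shard counts are examples only, not VFS defaults.

def parity_overhead(k_data: int, p_parity: int) -> float:
    """Storage overhead of a code with k data shards and p parity shards.

    The cluster can lose up to p_parity shards and still reconstruct the data.
    """
    return p_parity / k_data

# 20 data shards + 2 parity shards: ~10% overhead, survives 2 simultaneous failures.
print(f"{parity_overhead(20, 2):.0%} overhead, tolerates 2 lost shards")

# Equal parity and data (a distributed full copy): 100% overhead, like RAID 1.
print(f"{parity_overhead(20, 20):.0%} overhead, a distributed full copy")
```

A RAID 6 pool, by contrast, is fixed at exactly two parity disks per stripe regardless of pool size; fractional parity lets the overhead scale with the protection level you actually need.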

Swiss Vault's innovative VFS architecture stores metadata in P2P-distributed nodes – with no master nodes, the system tolerates the failure of any individual node.

No. The technology offers a hands-off data management system. It is automated to identify problems (bit rot or failed disks), fix them, and report back to the system administrator. No manual intervention is needed.

Yes. Unlike conventional systems, which are fixed in size and not scalable, VFS is completely scalable: simply add more disks. More disks mean more I/O performance and greater robustness against hardware failures, achieving the best of both RAID 0 and RAID 1 without any of the configuration lock-in.

Unlike other systems, which are typically separate offerings, the VFS parallel distributed file system is paired with carefully engineered storage hardware to improve energy efficiency by up to 10x over off-the-shelf solutions.

RAID is the classical approach to making storage devices more reliable; however, data in any RAID configuration is still prone to bit-flips, and data recovery is still a manual procedure.

Configuring Ceph requires understanding and manually configuring four different software tools (`cephadm`, `ceph`, `rbd`, `rados`) in order to set up a single Ceph storage cluster. This complexity led developers to create `rook`, a Ceph automation tool that piggybacks on `kubernetes` to simplify deploying Ceph; even then, you still need to learn `kubernetes` to maintain a `ceph` cluster via `rook`.

VFS, on the other hand, only requires a single `yaml` file describing the IP addresses and disks (size and shape) of your storage infrastructure. We provide scripts which deploy and scale up VaultFS to fit your `yaml` infrastructure target. There is no need to learn any command-line tools to spin up a basic SwissVaultFS cluster; however, we also provide the `vman` VaultFS management tool for admins who want to manually configure which directories are assigned higher storage importance.
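As an illustration, such an infrastructure file might look like the sketch below. The field names here are hypothetical, invented for this example; consult the VaultFS documentation for the actual schema.

```yaml
# Hypothetical sketch of a vfs-infrastructure.yaml file.
# Field names are illustrative, not the actual VaultFS schema.
nodes:
  - ip: 192.168.1.10
    disks:
      - device: /dev/sdb
        size: 4TB
  - ip: 192.168.1.11
    disks:
      - device: /dev/sdb
        size: 4TB
      - device: /dev/sdc
        size: 8TB
```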

The installation process requires first setting up a user with the same name on all machines, then running `SwissSSH.sh`, an SSH configuration tool that enables root SSH support and installs root SSH keys onto all nodes in the cluster. Afterwards you can run `SwissDeploy.sh`, which reads `vfs-infrastructure.yaml` (a list of IP addresses and disks to add to the VaultFS cluster), then installs VaultFS onto all machines and partitions all the disks in the cluster in one step.
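In practice, the two steps above might look like the following sketch, assuming the scripts sit in the current directory and `vfs-infrastructure.yaml` has already been written; the exact invocation and any flags may differ in your release.

```shell
# Hypothetical invocation sketch; see the VaultFS install guide for actual usage.

# Step 1: enable root SSH and distribute root keys to every node in the cluster.
sudo ./SwissSSH.sh

# Step 2: read vfs-infrastructure.yaml, install VaultFS on every listed machine,
# and partition every listed disk.
sudo ./SwissDeploy.sh
```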

VFS saves money for its users by allowing them to run unreliable storage hardware until the point of failure without sacrificing data availability; because the data remains available across all disks in the cluster, VFS ensures you get the maximum life out of each drive. It also saves time by letting users skip the data-center refresh cycle: hardware is replaced only as it fails, rather than pre-emptively before failure.

VFS does not require NFS to operate. When VFS starts, it presents itself as a folder in the `/mnt/vaultFS/` directory, abstracting away all the disks in the cluster into one self-healing storage pool located in a single directory. NFS is only required if you wish to share access to your VFS cluster with other machines on the same network, or with the outside world, without installing VFS on the clients.
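For that sharing case, a standard Linux NFS server can export the mount point with an ordinary `/etc/exports` entry; the subnet and options below are an example, not a VFS requirement.

```
# Example /etc/exports entry sharing the VFS mount point over NFS.
/mnt/vaultFS 192.168.1.0/24(rw,sync,no_subtree_check)
```

After editing `/etc/exports`, run `sudo exportfs -ra` to re-export all entries.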
