Storage

We are the biggest storage provider for the EO (Earth Observation) market in Europe


Network storage solutions are an integral part of our public and private clouds and a crucial component of the CloudFerro Cloud portfolio. Our expertise in delivering multi-petabyte storage with various usage and price profiles positions us as one of the leading storage providers in Europe. Our raw capacity exceeds 1 exabyte.

Storage

CloudFerro Cloud users have several storage options that differ in price, I/O performance, access speed and data resilience, and can choose the configuration that best suits their project. Worth noting are the new, powerful configurations with local storage: they offer more than 10x better I/O performance and 10x lower latency compared to the previously available solutions.

From the service point of view:

  • Volume storage (SSD or HDD network storage) - if configured, it can be used to boot a VM.
  • VM-related storage (SSD network storage), which is provisioned only together with a VM and is used as the default system disk. This storage is physically identical to SSD volume storage.
  • Object storage, which combines storage space and metadata with a dedicated protocol (S3) to access the data (see the access sketch after this list).
  • Local NVMe storage for DS servers (each DS server receives two identical, very fast NVMe PCIe drives).
  • Local ephemeral storage for HMD virtual machines, in which a very fast physical NVMe drive is attached to the VM.
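
As a brief illustration of the object storage option, the sketch below uploads and lists a file over the S3 protocol with boto3. The endpoint URL, credentials and bucket name are placeholders, not actual CloudFerro values; only the S3-compatible protocol itself is assumed here.

```python
import boto3

# Hypothetical endpoint and credentials - substitute the values from your own
# project; the calls themselves are standard boto3 S3 client methods.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-endpoint.eu",
    aws_access_key_id="EXAMPLE_ACCESS_KEY",
    aws_secret_access_key="EXAMPLE_SECRET_KEY",
)

s3.create_bucket(Bucket="my-eo-data")                   # create a bucket
s3.upload_file("scene.tif", "my-eo-data", "scene.tif")  # store an object
for obj in s3.list_objects_v2(Bucket="my-eo-data").get("Contents", []):
    print(obj["Key"], obj["Size"])                      # list stored objects
```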

From the physical point of view:

  • Network HDD Ceph storage – a cheap, reliable, very resilient and extremely large storage pool. It is available both as block (volume) and object (S3) storage; the two types have different access points, costs and performance.
  • Network SSD Ceph storage – a fast, reliable and resilient storage pool. It is the default storage medium for VMs: VM-related storage and SSD volume storage are both kept on this type of media.
  • Local compute storage (usually NVMe) – this storage is located on a very fast disk inside the server that hosts your VM. This means that when that server encounters a hardware malfunction, the storage becomes inaccessible or, on very rare occasions, data may be lost. The NVMe drive that hosts data for the HMD configuration is a single, very reliable high-performance drive with MTBF > 2M h and up to 400k IOPS.
  • Dedicated drives – in certain scenarios, particularly with data storage servers, clients are given access to physical SATA disks. This is commonly implemented in storage-focused nodes where the client configures their own storage solution.

Our VM storage is based on network storage. Network drives are more reliable by a few orders of magnitude because they are built from hundreds of storage servers and thousands of disks. The obvious downside of any network storage is the need to transport data over the network: compared with local solutions, it takes more time to get a response from the network storage medium.

In scenarios where data can be read or written in many queues, network storage offers a substantial advantage: hundreds of individual drives can be written to in parallel, which greatly increases IOPS and bandwidth. That is why network storage is ideal for parallel operations.
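
A minimal sketch of such a parallel access pattern is shown below: many independent object uploads issued from a thread pool, so the backend can spread the writes across many physical drives. The endpoint, credentials, bucket and file names are placeholders.

```python
from concurrent.futures import ThreadPoolExecutor
import boto3

# Hypothetical endpoint and credentials; the point is the access pattern, not the values.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-endpoint.eu",
    aws_access_key_id="EXAMPLE_ACCESS_KEY",
    aws_secret_access_key="EXAMPLE_SECRET_KEY",
)

def upload(path: str) -> str:
    # Each upload is an independent request, so the storage cluster can serve
    # many of them concurrently on different drives.
    s3.upload_file(path, "my-eo-data", path)
    return path

files = [f"tile_{i}.tif" for i in range(32)]
with ThreadPoolExecutor(max_workers=16) as pool:
    for done in pool.map(upload, files):
        print("stored", done)
```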

Local storage depends on the performance of the physical media and cannot rely on thousands of drives to boost performance. In the HMD and DS solutions we use very fast local NVMe drives, which makes those configurations ideal for scenarios that need very low latency and very high I/O.

Some examples of expected performance

All the tests were carried out on an 8 vCPU HMD VM. Multi-queue performance is very CPU dependent, as the number of vCPUs limits the maximum number of concurrent storage and network operations; with a larger VM we would obtain better results for the network storage. For big blocks and high queue depth, the vCPU count and the network, rather than the storage medium itself, may be the limiting factor for IOPS and bandwidth.
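
For readers who want to run comparable measurements themselves, the sketch below drives fio through Python with a single-queue 4k test, a multi-queue 4k test and a large-block bandwidth test. It is not the exact benchmark used for the figures below; the device path and the queue/job counts are assumptions, and write tests run directly against a device are destructive.

```python
import subprocess

def run_fio(name: str, iodepth: int, numjobs: int, bs: str, rw: str) -> None:
    # /dev/vdb is a placeholder for the volume under test.
    subprocess.run(
        [
            "fio", f"--name={name}", "--filename=/dev/vdb", "--direct=1",
            "--ioengine=libaio", f"--rw={rw}", f"--bs={bs}",
            f"--iodepth={iodepth}", f"--numjobs={numjobs}",
            "--runtime=60", "--time_based", "--group_reporting",
        ],
        check=True,
    )

run_fio("single-queue-4k", iodepth=1, numjobs=1, bs="4k", rw="randread")  # single queue
run_fio("multi-queue-4k", iodepth=32, numjobs=8, bs="4k", rw="randread")  # many queues
run_fio("bandwidth-4m", iodepth=16, numjobs=4, bs="4M", rw="read")        # max bandwidth
```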

It is important to know that Ceph storage is designed in a way that practically eliminates the risk of data loss: natural disasters or human errors are more probable by a few orders of magnitude than any hardware failure leading to data corruption.

Read

Network HDD Storage single-queue IOPS, 4k blocks – 1120 IOPS
Network HDD Storage multi-queue IOPS, 4k blocks – 44000 IOPS
Network HDD Storage maximum bandwidth, 4M blocks – 2169 MiB/s
Network SSD Storage single-queue IOPS, 4k blocks – 1500 IOPS
Network SSD Storage multi-queue IOPS, 4k blocks – 47000 IOPS
Network SSD Storage maximum bandwidth, 4M blocks – 3269 MiB/s
Local HMD storage single-queue IOPS, 4k blocks – 34500 IOPS
Local HMD storage multi-queue IOPS, 4k blocks – 337000 IOPS
Local HMD storage maximum bandwidth, 4M blocks – 2963 MiB/s

Write

Network HDD Storage single-queue IOPS, 4k blocks – 96 IOPS
Network HDD Storage multi-queue IOPS, 4k blocks – 2948 IOPS
Network HDD Storage maximum bandwidth, 4M blocks – 260 MiB/s
Network SSD Storage single-queue IOPS, 4k blocks – 650 IOPS
Network SSD Storage multi-queue IOPS, 4k blocks – 6006 IOPS
Network SSD Storage maximum bandwidth, 4M blocks – 550 MiB/s
Local HMD storage single-queue IOPS, 4k blocks – 25000 IOPS
Local HMD storage multi-queue IOPS, 4k blocks – 270000 IOPS
Local HMD storage maximum bandwidth, 4M blocks – 1371 MiB/s