CloudFerro in ESA Civil Security programme
CloudFerro is proud to announce that we have become a Partner in the ESA Civil Security programme and a participant in the ESA Civil Security from Space Industry Association.
We are one of the biggest storage providers for the EO market in Europe
Network storage solutions available in CloudFerro Cloud are a crucial component of our portfolio. Our expertise in delivering multi-petabyte storage with various usage and price profiles positions us as one of the leading storage providers in Europe. Our raw capacity exceeds 1 exabyte.
CloudFerro Cloud users have different storage options with different prices, IO performance, access speed and data resilience, and can choose the configuration that best suits their projects. Particularly worth noting are the new, powerful configurations with local storage, which offer more than 10x better IO performance and 10x lower latency compared with the previously available solutions.
Our VM storage is based on network storage. Network drives are more reliable by a few orders of magnitude because they are built from hundreds of storage servers and thousands of discs. The obvious downside of any network storage is the need to transport data over the network; compared with local solutions, it therefore takes more time to get a response from the storage medium.
In scenarios where data can be read or written across many queues, network storage offers a substantial advantage: hundreds of individual drives can serve requests in parallel, which dramatically increases IO and bandwidth performance. That is why network storage is ideal for parallel operations.
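The single-queue vs multi-queue distinction can be illustrated with a small sketch. The snippet below issues the same 4k random reads first one at a time, then with many requests in flight. Note this is only a conceptual illustration: reads on a local file are served largely from the OS page cache, so the absolute numbers say nothing about real storage performance; proper benchmarks of this kind are run with a dedicated tool such as fio using direct I/O.

```python
# Conceptual sketch only: single-queue vs multi-queue 4k random reads.
# Reads here hit the OS page cache, so this does NOT measure real storage;
# it only illustrates the effect of keeping many requests in flight.
import os
import random
import tempfile
import time
from concurrent.futures import ThreadPoolExecutor

BLOCK = 4096             # 4k blocks, as in the benchmarks
FILE_SIZE = 16 * 2**20   # 16 MiB test file
N_OPS = 2000             # reads per run

# Create a temporary test file and open it for reading.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(os.urandom(FILE_SIZE))
tmp.close()
fd = os.open(tmp.name, os.O_RDONLY)
offsets = [random.randrange(0, FILE_SIZE - BLOCK) for _ in range(N_OPS)]

def read_block(offset):
    os.pread(fd, BLOCK, offset)   # thread-safe positional read

# Single queue: one outstanding request at a time.
t0 = time.perf_counter()
for off in offsets:
    read_block(off)
single_iops = N_OPS / (time.perf_counter() - t0)

# Multi queue: many requests in flight at once, analogous to a network
# storage cluster spreading requests over many drives.
t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=16) as pool:
    list(pool.map(read_block, offsets))
multi_iops = N_OPS / (time.perf_counter() - t0)

os.close(fd)
os.unlink(tmp.name)
print(f"single-queue: {single_iops:.0f} IOPS, multi-queue: {multi_iops:.0f} IOPS")
```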
Local storage depends on the performance of its physical media and cannot rely on thousands of drives to boost performance. In the HMD and DS solutions we use very fast local NVMe drives. For this reason those configurations are ideal in scenarios that need very low latency and very high IO.
All the tests were carried out on an 8 vCPU HMD VM. Multi-queue performance is very CPU dependent, as the number of vCPUs corresponds to the maximum number of concurrent storage and network operations; had we used a larger VM, we would have obtained better results for the network storage. For big blocks and high queue depths, the vCPUs and the network may be the limiting factor for IOPS/bandwidth performance, rather than the storage medium itself.
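The block-size effect follows from a simple relationship: bandwidth equals IOPS times block size. A quick calculation with two figures from the results below shows why small blocks stress IOPS while large blocks saturate bandwidth:

```python
# bandwidth = IOPS * block_size; figures taken from the results below.
MiB = 2**20

def bandwidth_mib_s(iops, block_bytes):
    """Bandwidth in MiB/s achieved at a given IOPS and block size."""
    return iops * block_bytes / MiB

def iops_at(bandwidth_mib, block_bytes):
    """IOPS needed to sustain a given bandwidth at a given block size."""
    return bandwidth_mib * MiB / block_bytes

# 44000 IOPS at 4k blocks moves only ~172 MiB/s of data ...
small_block_bw = bandwidth_mib_s(44000, 4096)
# ... while 2169 MiB/s at 4M blocks requires only ~542 IOPS.
large_block_iops = iops_at(2169, 4 * MiB)

print(f"{small_block_bw:.1f} MiB/s, {large_block_iops:.1f} IOPS")
```

This is why a 4M-block test measures the bandwidth ceiling of the network and CPU, while a 4k-block test measures the latency and queuing behaviour of the storage itself.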
It is important to know that CEPH storage is designed in a way that practically eliminates the risk of data loss. Natural disasters or human error are a few orders of magnitude more probable than any hardware failure leading to data corruption.
Network HDD Storage single queue IOPS performance, 4k blocks - 1120 IOPS
Network HDD Storage multi queue IOPS performance, 4k blocks - 44000 IOPS
Network HDD Storage maximum bandwidth on 4M blocks - 2169 MiB/s
Network SSD Storage single queue IOPS performance, 4k blocks - 1500 IOPS
Network SSD Storage multi queue IOPS performance, 4k blocks - 47000 IOPS
Network SSD Storage maximum bandwidth on 4M blocks - 3269 MiB/s
Local HMD Storage single queue IOPS performance, 4k blocks - 34500 IOPS
Local HMD Storage multi queue IOPS performance, 4k blocks - 337000 IOPS
Local HMD Storage maximum bandwidth on 4M blocks - 2963 MiB/s
Network HDD Storage single queue IOPS performance, 4k blocks - 96 IOPS
Network HDD Storage multi queue IOPS performance, 4k blocks - 2948 IOPS
Network HDD Storage maximum bandwidth on 4M blocks - 260 MiB/s
Network SSD Storage single queue IOPS performance, 4k blocks - 650 IOPS
Network SSD Storage multi queue IOPS performance, 4k blocks - 6006 IOPS
Network SSD Storage maximum bandwidth on 4M blocks - 550 MiB/s
Local HMD Storage single queue IOPS performance, 4k blocks - 25000 IOPS
Local HMD Storage multi queue IOPS performance, 4k blocks - 270000 IOPS
Local HMD Storage maximum bandwidth on 4M blocks - 1371 MiB/s