Save on cloud costs with Spots
Users can now save on cloud resource costs by gaining access to CloudFerro cloud instances at significantly reduced prices compared to standard VMs.
CloudFerro Kubernetes as a Service provides a container orchestration solution smoothly integrated with the cloud environment. Kubernetes enables fast deployment of production-grade, highly scalable applications, systems and workloads, reducing both infrastructure and development costs. Data scientists can also benefit from Kubernetes on CloudFerro clouds to run complex workflows, with integrated EO data access.
The cluster autoscaler expands the capability of the Horizontal Pod Autoscaler available in Kubernetes. While the Horizontal Pod Autoscaler adjusts the number of pods, the cluster autoscaler can automatically increase or decrease the number of worker nodes based on the sizing needs of the scheduled workloads. It periodically scans the cluster and adjusts the number of worker nodes in response to workload resource requests. By default, scaling is performed on any node group that has the role "worker" and the maximum node count parameter set.
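As a sketch of how such a node group might be defined, the following uses the standard OpenStack Magnum CLI; the cluster name, node group name and flavor are example values, not fixed names from this offering:

```shell
# Create a worker node group with scaling bounds; the cluster
# autoscaler can then add or remove nodes between min and max.
openstack coe nodegroup create \
  --role worker \
  --node-count 2 \
  --min-nodes 1 \
  --max-nodes 5 \
  --flavor eo1.large \
  my-cluster autoscaled-workers
```

The `--max-nodes` value here plays the role of the "maximum node count parameter" mentioned above; without it, the autoscaler has no upper bound to scale toward.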
The cluster autohealer in CloudFerro clouds is another component that complements the Kubernetes-native capabilities and provides a fully self-healing Kubernetes cluster. Kubernetes as a platform can detect issues with application pods and redeploy them if necessary, but it cannot replace unhealthy nodes when problems arise. This is where the autohealer steps in. The autohealer constantly monitors the state of the Kubernetes cluster by checking the status of its nodes and API. If a problem arises, the autohealer replaces the unhealthy node with a new server.
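In upstream OpenStack Magnum, auto-healing is typically switched on with a cluster template label; a minimal sketch, assuming the label is honored by the chosen cluster template (names here are examples):

```shell
# Create a cluster with the Magnum auto-healing label enabled,
# so unhealthy nodes are detected and replaced automatically.
openstack coe cluster create my-cluster \
  --cluster-template k8s-template \
  --labels auto_healing_enabled=true
```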
CloudFerro Kubernetes as a Service provides container orchestration natively integrated with the cloud infrastructure. Services set up on CloudFerro Kubernetes clusters can be exposed using cloud LoadBalancers and are provisioned with an automatically assigned public floating IP, either directly or via a Kubernetes ingress. Cloud storage can also be provisioned automatically through Persistent Volumes using the available Storage Classes, either HDD or SSD.
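A minimal sketch of both mechanisms in standard Kubernetes manifests; the service name, selector and storage class name are example values (check `kubectl get storageclass` for the names available in your cluster):

```yaml
# Expose a workload via a cloud LoadBalancer; the cloud assigns
# a public floating IP to the Service automatically.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
---
# Provision cloud block storage through a Storage Class.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: ssd
  resources:
    requests:
      storage: 10Gi
```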
Earth Observation and Science domains are first-class citizens for our Kubernetes as a Service offering. We apply the lessons learned from running our own large-scale EO processing workloads to optimize our K8s offering for these types of use cases. Refer to our knowledge base articles for starter guides on setting up popular services such as JupyterHub, Argo Workflows and others.
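As one example, JupyterHub can be installed on such a cluster with its official Helm chart; this is a generic sketch (release and namespace names are examples), not a substitute for the knowledge base guide:

```shell
# Add the official JupyterHub Helm repository and install
# the chart into its own namespace.
helm repo add jupyterhub https://hub.jupyter.org/helm-chart/
helm repo update
helm upgrade --install jhub jupyterhub/jupyterhub \
  --namespace jhub --create-namespace
```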
You can run Kubernetes on the WAW3-1, WAW3-2 and FRA1-2 clouds, which have built-in Kubernetes support (the OpenStack Magnum module).
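With Magnum available, a cluster can be created directly from the OpenStack CLI; a minimal sketch, where the cluster name and counts are example values and the template name must be taken from the list returned by the first command:

```shell
# List the Kubernetes cluster templates available in the cloud.
openstack coe cluster template list

# Create a cluster from one of them.
openstack coe cluster create my-k8s \
  --cluster-template <template-name> \
  --master-count 1 \
  --node-count 3
```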
Containers billing depends on the billing of the underlying infrastructure. The VMs that make up a cluster are billed as described in the section on VMs.