By Paweł Markowski, IT System Architect and Development Director - CloudFerro
Effective processing and analysis of Earth Observation data requires a substantial allocation of computing resources. To achieve optimal efficiency, we can rely on a cloud infrastructure such as the one provided by CloudFerro, which allows users to spawn robust virtual machines or Kubernetes clusters. Combining this infrastructure with tools such as the EO Data Catalogue is key to accelerating our computational operations: efficient, well-considered queries can significantly speed up the whole process.
We will develop an efficient application that is compatible with the Catalogue API (OData) and highly reliable in handling various error scenarios. These may include network-related issues such as broken connections, API rate limits (e.g., HTTP 429 - Too Many Requests), timeouts, etc.
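As a sketch of that error handling, the snippet below retries a request on HTTP 429 and transient network failures with exponential backoff. This is an illustrative stdlib-only version, not the demo code itself; the function name and retry parameters are ours.

```python
import random
import time
import urllib.error
import urllib.request


def fetch_with_retry(url: str, max_attempts: int = 5, base_delay: float = 1.0) -> bytes:
    """Fetch a URL, retrying on HTTP 429 and transient network errors.

    Waits base_delay * 2**attempt seconds (plus a little jitter) between
    attempts, so a rate-limited client backs off instead of hammering the API.
    """
    last_exc: Exception = RuntimeError("no attempts made")
    for attempt in range(max_attempts):
        try:
            with urllib.request.urlopen(url, timeout=40) as resp:
                return resp.read()
        except urllib.error.HTTPError as exc:
            # Client errors other than rate limiting are not worth retrying.
            if exc.code != 429 and exc.code < 500:
                raise
            last_exc = exc
        except (urllib.error.URLError, TimeoutError) as exc:
            last_exc = exc  # DNS failures, refused connections, timeouts, ...
        if attempt < max_attempts - 1:
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))
    raise last_exc
```

The backoff-with-jitter pattern is a common default for rate-limited APIs; the concrete delays and attempt count should be tuned to the Catalogue's actual limits.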
The image below illustrates the schematic representation of the workflow. We have modified the code to execute a simple query: count all SENTINEL-2 online products whose observation date is between 2023-03-01 and 2023-08-31.
It is important to note that the OData API allows users to construct queries with various conditions or even nested structures. Our script assumes a basic query structure like:
https://datahub.creodias.eu/odata/v1/Products?$filter=((ContentDate/Start ge 2023-03-01T00:00:00.000Z and ContentDate/Start lt 2023-08-31T23:59:59.999Z) and (Online eq true) and (((((Collection/Name eq 'SENTINEL-2'))))))&$expand=Attributes&$expand=Assets&$count=True&$orderby=ContentDate/Start asc
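For scripting, it helps to assemble such a query programmatically and let the standard library handle percent-encoding. A minimal sketch, omitting the $expand options from the full query above (the function name is ours):

```python
import urllib.parse

BASE_URL = "https://datahub.creodias.eu/odata/v1/Products"


def build_count_query(collection: str, start_iso: str, end_iso: str) -> str:
    """Build an OData count query for one collection and one date window."""
    flt = (
        f"ContentDate/Start ge {start_iso} and ContentDate/Start lt {end_iso} "
        f"and Online eq true and Collection/Name eq '{collection}'"
    )
    params = {
        "$filter": flt,
        "$count": "True",
        "$orderby": "ContentDate/Start asc",
    }
    # urlencode percent-encodes spaces, quotes and '$', so the URL is safe to send.
    return BASE_URL + "?" + urllib.parse.urlencode(params)
```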
To set your local Temporal server in motion, simply execute the following command in your terminal:

temporal server start-dev

With this, your local Temporal server springs into action, awaiting workflow executions.
Once the local Temporal server is up and running, you can activate the worker by executing:
go run main.go

2023/11/07 13:28:30 INFO No logger configured for temporal client. Created default one.
2023/11/07 13:28:30 Starting worker (ctrl+c to exit)
2023/11/07 13:28:30 INFO Started Worker Namespace default TaskQueue catalogue-count-queue WorkerID 3114@XXX
The worker implements a Workflow that divides the requested date range into manageable five-day timeframes. This keeps each query to the EO Catalogue small, so it stays both swift and effective. With these segmented timeframes, we issue one counting query per segment and sum the partial results, which yields the correct total number of the specified products.
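The segmentation step itself fits in a few lines; here is an illustrative Python version (the actual workflow does this in Go, and the function name is ours):

```python
from datetime import datetime, timedelta


def split_range(start: datetime, end: datetime, days: int = 5):
    """Split the half-open interval [start, end) into chunks of at most `days` days."""
    step = timedelta(days=days)
    segments = []
    cursor = start
    while cursor < end:
        upper = min(cursor + step, end)  # the last chunk may be shorter
        segments.append((cursor, upper))
        cursor = upper
    return segments
```

Each (start, end) pair then becomes one counting query, and the partial counts are summed into the final total.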
Run the workflow worker:

cd sample-workflow/ && go mod tidy && go run main.go
In our workflow definition, we have not only segmented the dates but also incorporated monitoring functionality that lets us query the workflow's current state and progress.
These queries can be invoked through an API or accessed via the Temporal server Web UI, which is available at the following address: http://localhost:8233/namespaces/default/workflows. You can use the Web UI to check the status, as demonstrated in the image below.
Last but not least, our short demo includes the Activity definition. The code lives in /activities/count_products_activity.py; it executes the received query with a static 40-second timeout. Before starting the activity process, remember to install the dependencies from requirements.txt in your Python environment.
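The core of such an activity can be sketched as follows. This is an illustrative stdlib-only version, not the demo file itself, assuming the standard OData response shape in which $count=True reports the total under "@odata.count":

```python
import json
import urllib.request


def count_products(query_url: str, timeout: float = 40.0) -> int:
    """Execute an OData count query and return the total, with a static timeout."""
    with urllib.request.urlopen(query_url, timeout=timeout) as resp:
        payload = json.load(resp)
    # With $count=True the Catalogue reports the total under "@odata.count".
    return int(payload["@odata.count"])
```

Inside a Temporal activity, the timeout would normally also be declared in the activity options, so the server can detect a hung call and retry it.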