dClimate announced on Monday, May 10, via its Twitter account that its REST API for data consumers and the accompanying data-consumer client code base are now available.
“The mandate of dClimate is to build a decentralized ecosystem for climate data,” the company said in an article published on Medium, explaining that the ecosystem consists of three high-level architectural components:
• A data consuming component where users can retrieve clean, standardized, highly available and immutable climate data.
• A data publishing component where users can publish data and charge for it without any real knowledge of the blockchain.
• A governance component where the behavior of the network can be managed.
According to the company, within the first two components it has identified specific milestones related to free (unencrypted) and paywalled (encrypted) data sets.
“As explained in the whitepaper, even access to free data sets is often prohibitive. The authorities commit a number of ‘sins’ in the way they publish their data: lack of documentation, outdated retrieval protocols (FTP), revising old data after the fact without recording the change, indexing data in very impractical ways, weird unit conventions, and simply a lot of service downtime,” the article added.
In the company's view, the above presents serious challenges even for simple climate-data use cases, and completely rules out any blockchain project that relies on climate or meteorological data.
The company noted that Arbol has dedicated substantial resources over the past one to two years to standardizing large (free) climate data sets and publishing them to IPFS for use with its smart contracts. “We have a code base for ETL of big data sets, which is currently centralized, but it will move to the publisher’s infrastructure once we launch the publisher client (in collaboration with Chainlink),” it said.
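The kind of standardize-then-content-address flow described above can be sketched as follows. This is an illustrative sketch only: the raw record format, field names, and the use of a SHA-256 digest as a stand-in for an IPFS content identifier are assumptions, not dClimate's or Arbol's actual pipeline.

```python
import hashlib
import json

def standardize_record(raw):
    """Normalize a raw station reading into a consistent schema.

    Hypothetical raw format: {"stn": station id, "dt": "YYYYMMDD",
    "t_f": temperature in Fahrenheit}. Output uses ISO dates and Celsius.
    """
    return {
        "station": raw["stn"],
        "date": f'{raw["dt"][:4]}-{raw["dt"][4:6]}-{raw["dt"][6:]}',
        "temp_c": round((raw["t_f"] - 32) * 5 / 9, 2),
    }

def content_hash(records):
    """Deterministic content address for a batch of records.

    SHA-256 over canonical JSON stands in for an IPFS CID here:
    identical content always yields the identical address.
    """
    payload = json.dumps(records, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

raw_batch = [
    {"stn": "KNYC", "dt": "20210510", "t_f": 68.0},
    {"stn": "KNYC", "dt": "20210511", "t_f": 59.0},
]
clean = [standardize_record(r) for r in raw_batch]
digest = content_hash(clean)
```

The point of content addressing is the last step: because the address is derived from the bytes themselves, a consumer who re-hashes the retrieved data can detect any silent after-the-fact revision of the kind the whitepaper complains about.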
On the data-consumption side, the company noted that it has a client code base (still under the somewhat inherited name “dWeather client”). “This is a Python library that runs alongside the Go IPFS implementation. You pass it a dataset and a time/location query; it parses the chain of IPFS posts (we sometimes refer to this chain as a ‘linked list’ of posts) and returns the results of your query in a data structure,” it specified.
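The “linked list” traversal the quote describes can be sketched roughly as below. The post structure, field names, and the in-memory dict standing in for IPFS block retrieval are assumptions for illustration, not the dWeather client's actual internals.

```python
# Each IPFS "post" holds a chunk of readings plus the hash of the
# previous post, forming a singly linked list of content addresses.
# A plain dict stands in for IPFS retrieval in this sketch.
FAKE_IPFS = {
    "QmC": {"prev": "QmB", "data": {"2021-05-12": 18.4}},
    "QmB": {"prev": "QmA", "data": {"2021-05-11": 15.0}},
    "QmA": {"prev": None, "data": {"2021-05-10": 20.0}},
}

def query_chain(head_hash, start_date, end_date, fetch=FAKE_IPFS.get):
    """Walk the hash chain from the newest post backwards, collecting
    every reading whose ISO date falls inside [start_date, end_date]."""
    results = {}
    current = head_hash
    while current is not None:
        post = fetch(current)
        for date, value in post["data"].items():
            if start_date <= date <= end_date:
                results[date] = value
        current = post["prev"]
    return results

series = query_chain("QmC", "2021-05-10", "2021-05-11")
```

Because each post is addressed by the hash of its contents (which include the previous post's hash), tampering with any post in the chain changes every address downstream of it, which is the immutability guarantee the article refers to.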
Similarly, the company highlighted that the above has been wrapped in a REST API, so users do not have to install IPFS and synchronize data on their machines just to run a query. “At Arbol, we use the API for our ‘lighter’ applications, such as quotes and preliminary analysis; payment evaluations, for example, need the installed client to obtain the immutability guarantee at the IPFS protocol level,” it highlighted.
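Consuming such a REST API might look like the sketch below. The host name, endpoint path, and parameter names are all hypothetical placeholders, not dClimate's documented API.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical base URL; substitute the real API host.
BASE_URL = "https://api.example-dclimate.net/v1"

def build_query_url(dataset, lat, lon, start, end):
    """Compose the query URL for a time/location lookup on a dataset."""
    params = urllib.parse.urlencode(
        {"lat": lat, "lon": lon, "start": start, "end": end}
    )
    return f"{BASE_URL}/grid/{dataset}?{params}"

def fetch_series(dataset, lat, lon, start, end):
    """Issue the HTTP request and decode the JSON response.

    Convenience only: unlike running the client next to a local IPFS
    node, this trusts the API server instead of verifying content
    hashes at the IPFS protocol level.
    """
    url = build_query_url(dataset, lat, lon, start, end)
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

url = build_query_url("chirps_05", 40.78, -73.97, "2021-01-01", "2021-05-10")
```

This captures the trade-off in the quote: the API suits lighter applications, while hash-verifying workloads still need the local client and IPFS node.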