Over the past two years at Forest Carbon, I’ve witnessed first-hand how rapidly technology has evolved to support ecological restoration. With this progress, however, comes an increasing demand for data-intensive evidence to uphold the integrity of projects seeking certification under the Woodland Carbon Code and Peatland Code.
For example, for peatland certification, what began as a requirement for simple “before and after” site photographs has grown into something far more complex. Increasingly, peatland projects require high-resolution orthomosaics (geometrically corrected aerial images stitched together from drone-captured data) to accurately assess pre- and post-restoration conditions. More recently, the Woodland Carbon Code has begun trialling the use of orthomosaics for Year 5 verifications, a practice I expect will become standard in the near future.
In principle, this is a welcome development. These detailed visuals offer compelling, verifiable proof that restoration is occurring—trees are being planted, peatlands are being restored. This boosts confidence for auditors, buyers, and the wider industry and streamlines the verification process.
At the same time, as part of the sector’s broader push for quality and integrity, there’s growing expectation that digital MRV (Measurement, Reporting and Verification) will enable more frequent verifications—or even near real-time project updates. While this promises greater transparency, it inevitably increases the volume of data that needs to be captured, processed, and stored.
But there's a significant and often overlooked challenge: data volume.
The Data Burden
All this digital evidence must be stored, often in multiple places. A single 250-hectare orthomosaic can easily exceed 50 GB. If that data is copied for each stakeholder—landowners, project developers, auditors, drone operators—and then backed up, one project can consume up to 400 GB. (Google’s AI overview tells me that this is equivalent to storing 105 copies of The Lion King, which I’m not sure is an accurate or useful comparison. Perhaps it is better to say that 400 GB is roughly 50,000 to 80,000 photographs taken on a modern smartphone, or around 100,000 songs on Spotify.) If "before" and "after" images are required (as they are for peatland projects), that doubles to 800 GB. Now imagine this process repeating at 5- and 10-year verification intervals over the next century.
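For anyone who wants to check the numbers, here is the back-of-envelope arithmetic as a short Python sketch. The stakeholder count and backup factor are illustrative assumptions drawn from the list above, not requirements of either Code.

```python
# Rough sketch of the per-project storage figures quoted above.
# Stakeholder count and backup factor are illustrative assumptions.

ORTHOMOSAIC_GB = 50   # single high-resolution orthomosaic (~250 ha site)
STAKEHOLDERS = 4      # e.g. landowner, developer, auditor, drone operator
BACKUP_FACTOR = 2     # each copy backed up once

single_capture_gb = ORTHOMOSAIC_GB * STAKEHOLDERS * BACKUP_FACTOR
before_and_after_gb = single_capture_gb * 2  # peatland "before" and "after"

print(f"One capture, all copies: {single_capture_gb} GB")    # 400 GB
print(f"Before + after imagery:  {before_and_after_gb} GB")  # 800 GB
```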
It’s easy to see how data demand scales quickly and becomes a long-term challenge.
Infrastructure & Environmental Costs
Many stakeholders don’t have the infrastructure to reliably upload or download hundreds of gigabytes of data. Even when they do, the cost of cloud storage, third-party platforms, and high-bandwidth transfers can be significant. For example, cloud storage alone can cost £18 per user per month for 15 TB of space (using Dropbox as a reference). That would hold fewer than twenty 800 GB projects. If we needed 800 GB for every one of Forest Carbon’s roughly 400 projects, that’s around 320 TB of storage and close to £5,000 a year in subscription costs alone.
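As a rough sketch, the cost figure works out like this. The plan size and price follow the Dropbox example above; the per-project size and project count are the figures already quoted, and rounding up to whole plans is my own assumption.

```python
import math

# Rough annual cost sketch using a Dropbox-style plan
# (£18 per user per month for 15 TB).

PER_PROJECT_GB = 800
PROJECTS = 400
PLAN_TB = 15
PLAN_COST_PER_MONTH_GBP = 18

total_tb = PER_PROJECT_GB * PROJECTS / 1000   # ~320 TB
plans_needed = math.ceil(total_tb / PLAN_TB)  # ~22 plans
annual_cost_gbp = plans_needed * PLAN_COST_PER_MONTH_GBP * 12

print(f"Total storage: {total_tb:.0f} TB")
print(f"Plans needed:  {plans_needed}")
print(f"Annual cost:   £{annual_cost_gbp:,}")  # roughly £4,750 per year
```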
Perhaps more pertinently, there’s also the carbon footprint to consider: storing and transmitting data—whether locally or via cloud services—requires energy, and that energy use contributes to emissions. The footprint per terabyte may vary, but it’s real, and it’s rarely accounted for in restoration project impact assessments.
Looking Ahead: Biodiversity & Beyond
Now, consider the increasing focus on biodiversity uplift. Initiatives like Biodiversity Net Gain and emerging voluntary biodiversity markets will demand similarly high standards of data-driven evidence. The complexity of measuring and verifying ecological change will only grow, and so too will the associated data requirements.
So, What Can We Do?
We’re not alone in facing this challenge. Every industry striving for Net Zero amidst rapid digitalisation is grappling with similar dilemmas. And unless the carbon and nature restoration sectors take a radical turn away from evidence-based standards (which seems both unlikely and unwise), this issue must be addressed head-on.
The most immediate and practical solution? Centralised data storage.
A shared storage platform—accessible to all relevant stakeholders—would eliminate redundancy, reduce transfer costs, and simplify data access. It would also make large-scale audits and cross-project analysis easier and more consistent.
But this raises a critical question: who should provide this infrastructure?
Should the Woodland Carbon Code and Peatland Code take responsibility for their own respective centralised systems? Should the government step in to fund and maintain a national repository? Or is there space for a trusted third party to fill this gap?
And, perhaps more crucially: who pays?
It’s likely that the cost of enhanced data infrastructure and storage will eventually be reflected in the final cost per unit. But with that, it’s worth recognising what buyers are gaining: increased transparency, stronger traceability, and greater assurance that the projects they support are backed by robust, verifiable data.
The growing operational and environmental burden of data management is a nuanced, industry-wide challenge within the ecosystem restoration space. Addressing it now could save costs, reduce emissions, and ultimately strengthen the credibility of nature markets in the long term.
As we move forward, the use of technology in this space should serve a dual purpose:
- First, to improve the integrity and transparency of projects by offering clearer, data-based evidence of real-time impact.
- Second, to improve pragmatism and efficiency by reducing the time spent on slow, burdensome processes—freeing up resources to deliver further impact and reduce costs.
So, how do we strike the right balance?
How should these goals stack up when it comes to managing project costs—and, ultimately, what cost per unit will the market be willing to bear?
We’d love to hear your thoughts and any potential solutions you might have on this topic. If you have comments, ideas, or experiences to share, please don’t hesitate to get in touch!