Getting down to numbers, carbon stocking via direct measurement

@DanT @kanedan29 Christophe

We (Our Sci) are quickly approaching a point where we're getting asked for numbers (costs, prices, and net to farmer) from all sides (Nori, farmer groups, OpenTEAM partners, etc.).

I know there's a lot of flexibility and unknowns, but we have enough experience that there's a surprising number of knowns, and at least from a process perspective, I think we know how the stocking process would go (certainly Dan Kane knows!).

So I want to get the conversation started, and wrote this up to kick things off here on the forum.

Goal of carbon stocking

Demand for carbon stocking in agricultural or land management applications comes from three places: tracking carbon against internal standards (companies with internal ag-related standards like General Mills, or the City of Boulder for GHG offsets within the city), informing management decisions (grass-fed beef operations increasing soil C, regenerative ag transitions), and participating in carbon markets (row crop farmers w/ Nori or similar markets).

While all of these markets are real, the most significant increase in value comes from connecting to a carbon market, since that benefits every one of these applications. We estimate that farms >100 acres would find it monetarily worthwhile to accumulate carbon credits, assuming a 1% change in carbon over 10 years and a local sample collection strategy involving multiple fields / farms.

As such, the primary design goal of carbon stocking measurement is to accurately track change over time on a single field.

We believe, based on discussions with Nori, estimates of expected carbon increase on typical row crop fields, and discussions with farmers directly, that carbon stock estimation needs to come in at roughly $2 - 3 per acre per year over a 10 year period (Christophe, did I get this right? Edit plz or respond). This is the target any carbon stocking technology must hit to be successful.

Sampling Strategy Summary

  • Year 1: Collect samples; measure lab carbon (LOI) and reflectance on all samples.
    –> Build a local model for predicting LOI from reflectance.
  • Middle years (years 3, 5, 7…): Collect samples; measure reflectance only.
    –> Generate payouts based on increases in C.
  • Year 10: Collect samples; measure lab carbon (LOI) only.
    –> Final payout based on increase in C.
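To make the Year 1 handoff to the middle years concrete, here's a minimal sketch in R (since that's where our Stratify modeling already lives). The PLS approach and the names (loi_c, refl, middle_year_samples) are just assumptions for illustration, not our actual pipeline:

```r
# Minimal sketch of the Year 1 calibration, assuming a data frame `samples`
# with a lab carbon column `loi_c` and a matrix column `refl` holding the
# reflectance spectra (one row per sample). Names are hypothetical.
library(pls)

# Cross-validated partial least squares calibration: predict LOI carbon from
# reflectance using only this field / farm group's Year 1 samples.
cal <- plsr(loi_c ~ refl, data = samples, ncomp = 10, validation = "CV")
plot(RMSEP(cal), legendpos = "topright")   # pick the number of components

# Middle years (3, 5, 7...): reflectance only, carbon predicted from the model.
mid_year_c <- predict(cal, newdata = middle_year_samples, ncomp = 5)
```

The Year 1 lab data pays for the calibration once; the middle years then ride on reflectance alone, which is where the cost savings come from.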

Modeling + selection

Background

Current modeling using Stratify and subsequent modeling software in R is effective at predicting carbon differences across ecosystems (soil types, texture classes, and locations). However, that isn't the primary goal of carbon stocking - the goal is to capture change over time with predictable (and ideally high) confidence.

Spectroscopy (in the field, or in the lab using a reflectometer) is very effective at identifying change over time, but to do so it requires a locally calibrated model (defining local as the same physical field, same texture class, same soil type, etc.).

The core question is how to minimize cost / maximize accuracy of a local model with the goal of use in carbon markets.

Currently, Stratify is effective at specifying the highest-impact sampling points within a region for building that local model. It takes into account:

  • Soil Texture / Soil type
  • Elevation
  • Aspect (NSEW)
  • Slope
  • NDVI

Within those 'buckets', samples are randomized. This is useful, but it doesn't maximize our ability to identify change over time, because the model's training data is clustered tightly around a limited range of carbon values.

In short, expected future values will fall outside the training set, because they will likely be higher. Also, in general, a more accurate model can be built from a wider range of carbon values drawn from similar soils (i.e., within the buckets).

(IMO) One good way to handle this is to intentionally collect higher and lower carbon values than the standard field values. See below for an example graph showing random sampling (existing Stratify) compared to sampling that intentionally selects high and low carbon values.
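Since the graph doesn't carry the underlying data, here's a rough simulation of the same effect (all numbers invented): a calibration built only on the field's current narrow range has to extrapolate once carbon starts accruing, while a few deliberately high/low samples anchor the high end.

```r
# Toy simulation (all values invented): a calibration built only from the
# field's current narrow carbon range vs. one that adds a few deliberately
# high/low samples (hedgerow, subsoil). One fake "reflectance band" stands in
# for the full spectrum.
set.seed(1)
c_now  <- runif(60, 1.8, 2.2)                        # current field carbon (%)
narrow <- data.frame(c = c_now, s = 5 * c_now + rnorm(60, sd = 0.3))

c_ext  <- c(0.8, 1.0, 3.0, 3.4)                      # intentional extremes
wide   <- rbind(narrow, data.frame(c = c_ext, s = 5 * c_ext + rnorm(4, sd = 0.3)))

m_narrow <- lm(c ~ s, data = narrow)
m_wide   <- lm(c ~ s, data = wide)

# Ten years from now the field sits above the original training range.
future <- data.frame(s = 5 * runif(50, 2.4, 2.8) + rnorm(50, sd = 0.3))
pi_width <- function(m) {
  pi <- predict(m, future, interval = "prediction")
  mean(pi[, "upr"] - pi[, "lwr"])                    # average prediction interval width
}
pi_width(m_narrow)   # wider on average: the model is extrapolating
pi_width(m_wide)     # narrower: the extreme samples anchor the calibration
```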

Recommended changes

Based on discussions with Jeff Herrick, Dan Kane, and internally (Dan T and Greg), there are a few small changes that could improve local models for change-over-time stocking estimates (increase accuracy / reduce sampling):

  1. Include hedgerows and areas near fields to support higher carbon value sampling within Stratify.
  2. Change Stratify to create buckets that initially exclude NDVI (let's call these 'core buckets'), and show that mapping output to the user. Then subset the 'core buckets' using NDVI (high to low) to more intentionally select both high and low carbon regions, assuming NDVI is a good proxy for carbon levels, which I think it is (see the sketch after this list).
  3. Also add subsampling at 10 - 20 cm or 20 - 30 cm depth within each 'core bucket' to pull in more low carbon samples.
  4. If a 'core bucket' is likely to have a minimal carbon range (there are no high/low NDVI areas, etc.), the user should be informed and prompted to identify nearby locations that could provide greater variability.
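Here's a rough sketch of what #2 could look like in code; candidate_points and its columns (texture, elevation_class, aspect_class, slope_class, ndvi) are placeholders, not Stratify's actual schema:

```r
# Toy sketch of recommendation 2: build 'core buckets' without NDVI, then
# deliberately pull candidate points from the high- and low-NDVI tails of each
# bucket. `candidate_points` and its columns are placeholders.
library(dplyr)

targets <- candidate_points %>%
  mutate(core_bucket = interaction(texture, elevation_class,
                                   aspect_class, slope_class, drop = TRUE)) %>%
  group_by(core_bucket) %>%
  mutate(ndvi_tail = ntile(ndvi, 5)) %>%     # NDVI quintiles within each core bucket
  filter(ndvi_tail %in% c(1, 5)) %>%         # keep only the high / low extremes
  group_by(core_bucket, ndvi_tail) %>%
  slice_sample(n = 2) %>%                    # a couple of candidate points per tail
  ungroup()
```

The 'core bucket' maps could still be shown to the user as-is; the NDVI subsetting only changes which candidate points get flagged for sampling.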

In addition, it's important to add value to this process by stacking functions and packaging the offering so farmers can do what makes the most sense for them. Based on initial discussions, it seems that pH (for variable rate lime) and biological activity (for tracking regenerative ag changes) would be the most beneficial to map alongside carbon, and both are relatively low cost on a per-sample basis.

Finally, allowing farmers to identify the depths to track (0 - 10, 10 - 20, 0 - 20, 20 - 30… dunno, but something in there) may also be helpful, as farms that need to occasionally till and farms with perennial crops will likely maximize carbon benefit at different depths. This lets each farm maximize its benefit.

Cost estimation

See the cost estimate below; I think it gives a sense of where costs will land. There is a high-cost and a low-cost version (denser sampling, higher cost per sample, etc.) to give a sense of the variation.

None of the numbers are absolute, but @DanT and @kanedan29, you have enough experience to at least take a guess and ballpark things here.

I created some tabs so you can play around with it. Feel free and see what you think.

What’s not in the sheet is…

  1. this requires someone who's invested in soil sampling, has 4-wheelers and equipment, and has enough farms in a given area to be efficient with their time. We need to work with agronomists and others locally to do this right.
  2. cost to generate maps and provide feedback to farmers / agronomists performing the work
  3. marketing + customer success

I was impressed that we land in the $0.5 - $2 per acre per year cost range for carbon stock estimation with direct sampling. Feels achievable!

Please comment / edit / discuss below. I would love to start moving this conversation forward this winter, so we all feel confident (or not!) that this is an achievable method in carbon markets like Nori.

Hey Greg,
Well, as promised, here are my thoughts for whatever they're worth. I think you have identified an important problem, and your graphic example illustrates it perfectly. Not only is it important to capture that variability, but there are also likely non-linearities in these spectral relationships. I just wonder whether there's another way to go about this. I will attach an article below, but I wonder if using the concepts of data mining and spiking might work here; you have a lot of variability out there at this point in terms of high and low carbon values. Statistical learning might be able to do something with that, but I'm not sure - maybe someone is already working on that. The question of whether spectral readings were taken consistently is probably a valid one, but that's something to consider moving forward.

I'm also interested in the "stacked function" (as you name it) of total nitrogen, and in talking to anyone else out there doing work or having interest in this aspect. Obviously this correlates well with total C and informs management decisions with current economic implications. The C/N ratio has been used in a lot of ways, but understanding it spatially might integrate directly into management zones very quickly. C and N also form (along with their temporal dynamics, sort of implicit in the CO2 burst tests you are already doing) the basis for the USDA soil health index. I think the benefits of helping producers quantify and manage soil health and productivity at a broader and more spatially explicit scale might encompass the most immediate demand for this work.

Finally, I will just mention that I would question the idea of NDVI as a reliable proxy for soil carbon. That may be true in some places, but in Michigan, for example, it varies by season; in years like this one, NDVI would be inversely related to carbon content in most cases due to the wet climatic conditions. In my mind you are on the right track in trying to maximize the variability of within-field sampling (the whole reason traditional grid sampling has pretty much disappeared), and I love the idea of combining that with other data. Great post Greg, thanks!


@dornawcox not sure if someone from Comet Farm is on, but @Craig from Cool Farm Tool is on the forum and may have some interest.

Craig - is there utility in a lower cost prediction of total N for you all?

I would imagine that total N may be a useful additional piece of info in the modeling world, and increasing the utility of the ground-truthing effort for models is always a consideration.

Greg - thanks for putting this together - great to see someone else thinking about the prices and sampling density to make quantification worthwhile!

From the CFT perspective we don’t use total N in our model since it doesn’t simulate the C/N dynamics in the soil (it’s empirical rather than process-based), so I’m not sure if it would be useful at this point. However, I think it may be helpful to have as the CFT evolves to be more process-oriented.

We do have some projects spinning up where we need to be able to quantify changes in SOC over a number of farms and fields, so I will be spending quite a bit of time this fall thinking about the required sampling methodology. A few thoughts related to this need and your summary:

  • The capacity for changing SOC is going to be very different in distinct bioclimatic regions and in different fields. One fear of mine is that we over-promise payments/changes in SOC and therefore sour farmers and investors on the idea of SOC sequestration. For those reasons, I think it would be incredibly helpful to have a reference for the range of potential SOC values in specific climates, soil types, etc. This could provide at least an initial estimate of whether carbon sequestration credits would be achievable on any given piece of land.

  • I don’t know enough about the Stratify methodology, but I think it would probably be helpful to have different blocking variables in different regions. In addition, determining the optimum number of samples within each stratum could be informed by variograms from analogous locations (see paper on Spatial Soil Sampling).

  • In terms of stacking functions, I feel like mobile nutrients (e.g. nitrate) don’t offer a great value add since they’re so fickle. However, phosphorus could be a nice addition, especially where manure is frequently applied and regulations are becoming increasingly stringent.

  • Your thoughts on trying to capture greater variability make complete sense. If you’re sampling a bunch of fields in one area, I wonder if you could capture a greater range of variability in just a few fields and extrapolate rather than intentionally targeting high/low carbon areas in each field that’s seeking credits?

  • Have you considered using the topographic wetness index (TWI) for stratification? It’s easy to derive from DEMs and can sometimes provide a nice proxy for various soil characteristics.
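For reference, TWI is just ln(upslope contributing area / tan(slope)), so it's cheap to compute once you have flow accumulation and slope from a DEM. A quick sketch, not tied to any particular GIS stack:

```r
# Topographic wetness index from upslope contributing area (per unit contour
# length) and slope in radians, produced by whatever DEM workflow you already
# use. The slope floor just avoids division by zero on flat cells.
twi <- function(upslope_area, slope_rad, min_slope = 0.001) {
  log(upslope_area / tan(pmax(slope_rad, min_slope)))
}

twi(upslope_area = c(50, 500, 5000), slope_rad = c(0.10, 0.05, 0.01))
```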

re: reference for expected SOC ranges I completely agree. We were just walking through what an interaction with someone who would want this would look like… and the first thing is to understand the potential of the land, and the likely changes based on the practice changes they would seriously evaluate. That boxes in the potential pretty clearly.

re: optimum number of samples YES! @kanedan29 has been working on that for some time, and we have 3 locations we're testing this year that I hope will help nail that down (we'll have SOC + spectral on it all, so we can ask what's the minimum number of lab SOC values needed to reasonably predict the rest using the spectral data).
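(The analysis I have in mind is roughly a learning curve: refit the spectral calibration on bigger and bigger random subsets of the lab SOC values and see where hold-out error levels off. A sketch with made-up column names, not the actual site data:)

```r
# Rough sketch of the "minimum number of lab SOC samples" question: fit the
# spectral calibration on progressively larger random subsets and watch where
# the hold-out error levels off. Assumes a data frame `site` with an `soc`
# column and a matrix column `spec` of spectra (hypothetical names).
library(pls)

sizes <- seq(20, 120, by = 20)
rmse <- sapply(sizes, function(n) {
  idx  <- sample(nrow(site), n)
  fit  <- plsr(soc ~ spec, data = site[idx, ], ncomp = 8)
  pred <- drop(predict(fit, newdata = site[-idx, ], ncomp = 8))
  sqrt(mean((site$soc[-idx] - pred)^2))
})
plot(sizes, rmse, type = "b",
     xlab = "lab SOC samples in calibration", ylab = "hold-out RMSE")
```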

re: capture a greater range of variability in just a few fields and extrapolate DOUBLE YES! It feels reasonable to me given that we’re already using models to predict changes in carbon with no on-the-ground measurement (this is almost like a hybrid model / measure strategy). Would significantly reduce sampling load.

re: topographic wetness index… nope! I’d like to hear @DanT or @kanedan29 's perspective on this.

Differences in bulk density, especially in field crop systems sampled at different times, often introduce ~10% variability, which is a challenge to any attempt to assess soil C change over time using dry combustion, LOI, or spectral methods (e.g. samples typically vary in bulk density from 1.4 to 1.6 depending on whether a field has been disturbed recently, which alters the layer that is actually being sampled)… could you factor this in? Add bulk density measurements in the field? Also, variability in low-C soil types at the 1 to 30 meter scale can be really high - how is this being addressed? Is this a grid sampling approach being proposed? I'm not quite following. I like the idea of including a fence row in the sampling, as this can indeed provide a higher soil C 'goalpost' for the soil type and environmental conditions.
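To put rough numbers on why the bulk density piece matters (the %C and depth here are illustrative; only the 1.4 vs 1.6 range reflects typical field conditions):

```r
# Why bulk density matters for a fixed-depth C stock (the 1.2 %C and 30 cm
# values are illustrative; the 1.4 vs 1.6 g/cm^3 range is the typical swing
# mentioned above).
stock_fixed_depth <- function(c_pct, bd_g_cm3, depth_cm) {
  (c_pct / 100) * bd_g_cm3 * depth_cm * 100    # Mg C per hectare
}
stock_fixed_depth(1.2, 1.4, 30)   # ~50.4 Mg C/ha
stock_fixed_depth(1.2, 1.6, 30)   # ~57.6 Mg C/ha - same soil, ~14% apparent difference

# An equivalent-soil-mass comparison instead fixes the mass of soil compared,
# adjusting the sampled depth to the measured bulk density.
equiv_depth_cm <- function(ref_bd, ref_depth_cm, new_bd) ref_bd * ref_depth_cm / new_bd
equiv_depth_cm(1.4, 30, 1.6)      # compare only the top ~26 cm of the denser soil
```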


Good point Sieg… yeah, not sure about that, though I'm assuming a model-based approach would suffer from the same issue, and if they've addressed it, it seems like we'd be able to address it in at least the same way (right?).

Also, what I heard on the Nori call from @cjospe is that it's $0.25 per ton for their current process using COMET.

I added cost per ton to my calculations as well - it's in the spreadsheet above (here is the link again).

The per-ton cost range for direct measurement is $0.9 - $5.1, depending on the complexity of the terrain, etc. So definitely higher, but if the current Nori process involves some total carbon measurements at year 1 and year 10, I don't quite know how you do that for a lot less than $0.9 per ton…
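For anyone following along, the bridge between per-acre and per-ton costs is just the expected accrual rate. With placeholder inputs (not the spreadsheet's actual assumptions):

```r
# Per-ton cost is the per-acre-per-year cost divided by the expected tons of
# CO2e sequestered per acre per year. Inputs here are placeholders for
# illustration, not the spreadsheet's actual assumptions.
cost_per_ton <- function(cost_per_acre_yr, tons_co2e_per_acre_yr) {
  cost_per_acre_yr / tons_co2e_per_acre_yr
}
cost_per_ton(cost_per_acre_yr = 1.5, tons_co2e_per_acre_yr = 0.4)   # ~$3.75 per ton
```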

Christophe, do you have some calculations on how you got to $0.25?

Other updates from Nori call!

I was wrong on that $0.25 figure - it's $0.25 per verification event (every 3 years)… so over a 10 year period that's more like $1 for the entire period (?) maybe. Sorry. Next time I may wait to post until the meeting is all the way over :slight_smile: Trey (the initial farmer they worked with) used Granular, so it made things easier; it's probably harder for folks who are less organized.

Also, there's an updated estimate of the error rate on the COMET-Farm confidence interval estimates, which should come out soon (days or weeks). So that's cool! It'll be in an open access journal - @cjospe, post that here when it's out!

Hey Greg and others -

There's clearly a lot of value in publishing a methodology that quantifies the trade-offs associated with field-scale SOC sampling, uncertainty quantification, and carbon market payments (excluding modeling-based approaches) - and that creates a procedure for integrating available information to arrive at optimal sampling numbers and spatial layouts. Building on some of my previous work, I've been gathering resources and starting to put time toward a paper. Are any other folks out there trying to publish something along these lines (no need to duplicate efforts)? If not, who would be interested in collaborating or contributing time/resources to the project? It feels like this is a critical juncture for having a transparent, open-access methodology to answer these questions and build trust in the process.


+1!!! I'd love to figure out how to support something like that. I know that Nori is interested in establishing a working group around soil sampling, and I think @dornawcox is as well; there's also interest from the Mad Ag folks (Phil Tayl). But I'm curious to hear from Dan and Dan and Sieg and others in the space for sure. A lot of these folks are at Trisocieties right now; I hope they can connect on this topic.