Getting down to numbers: carbon stocking via direct measurement

@DanT @kanedan29 Christophe

We (Our Sci) are quickly approaching a point where we’re getting asked for numbers (costs, prices, and net to farmer) from both ends (Nori, farmer groups, OpenTEAM partners, etc.).

I know there's a lot of flexibility and unknowns, but we have enough experience that there's a surprising number of knowns, and at least from a process perspective, I think we know how the stocking process would go (certainly Dan Kane knows!).

So I wrote this up to get the conversation started here on the forum.

Goal of carbon stocking

Demand for carbon stocking in agricultural or land management applications comes from tracking carbon against internal standards (companies with internal ag-related standards like General Mills; the City of Boulder for internal GHG offsets within the city), from management decisions (grass-fed beef operations increasing soil C, regenerative ag transitions), and from carbon markets (row crop farmers with Nori or similar markets).

While all of these markets are real, the most significant increase in value across these applications comes from connecting to a carbon market, as it benefits all of them. We estimate that farms >100 acres would find it monetarily worthwhile to accumulate carbon credits, assuming a 1% change in carbon over 10 years and a local sample collection strategy involving multiple fields / farms.

As such, the primary design feature of carbon stocking measurements is to accurately track change over time on a single field.

We believe, based on discussions with Nori, estimates of expected carbon increase on typical row crop fields, and discussions with farmers directly, that the right target cost for carbon stock estimation is $2 - 3 per year over a 10 year period (Christophe did I get this right? Edit plz or respond). This is the target any carbon stocking technology must hit to be successful.

Sampling Strategy Summary

Year 1: Collect samples; measure lab carbon (LOI) and Reflectance on all samples.
–> Build a local model for predicting LOI from Reflectance.
Middle years (years 3, 5, 7…): Collect samples; measure Reflectance only.
–> Generate payouts based on increases in C.
Year 10: Collect samples; measure lab carbon (LOI) only.
–> Final payout based on increase in C.
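As a rough illustration of the year-1 step, here's a minimal sketch (all reflectance bands and LOI values are made up) of fitting a local model that predicts LOI from reflectance; the middle-year, reflectance-only samples would then be run through this model instead of the lab:

```python
import numpy as np

# Hypothetical year-1 training data (values are made up):
# one row per sample point, two illustrative reflectance bands per sample.
reflectance = np.array([[0.42, 0.31],
                        [0.38, 0.29],
                        [0.51, 0.36],
                        [0.33, 0.25]])
loi_carbon = np.array([2.1, 2.4, 1.6, 2.9])  # lab LOI carbon (%) for the same samples

# Fit an ordinary-least-squares local model: LOI ~ reflectance bands + intercept.
X = np.hstack([reflectance, np.ones((len(reflectance), 1))])
coef, *_ = np.linalg.lstsq(X, loi_carbon, rcond=None)

def predict_loi(bands):
    """Predict LOI carbon (%) from reflectance alone, for middle-year samples."""
    bands = np.atleast_2d(bands)
    return np.hstack([bands, np.ones((len(bands), 1))]) @ coef

# Middle years: collect reflectance only, estimate carbon through the local model.
print(predict_loi([[0.40, 0.30]]))
```

In practice the model would be something more robust than plain OLS (e.g. PLS on full spectra), but the workflow shape is the same: calibrate once against lab LOI, then predict from reflectance.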

Modeling + selection


Current modeling using Stratify and subsequent modeling software in R is effective at predicting carbon differences between ecosystems (soil types, texture class, and location). However, this isn’t the primary goal of carbon stocking - the goal is to capture change over time with predictable (and ideally high) confidence.

Spectroscopy (in field / or in lab using a reflectometer) is very effective at identifying change over time, but to do so requires a locally calibrated model (defining local as same physical field, same texture class, same soil type, etc.).

The core question is how to minimize cost / maximize accuracy of a local model with the goal of use in carbon markets.

Currently, Stratify is effective in specifying the highest impact sampling points within a region, relative to creating that local model. It takes into account:

  • Soil Texture / Soil type
  • Elevation
  • Aspect (NSEW)
  • Slope
  • NDVI

Within those ‘buckets’, samples are randomized. This is useful, but it doesn’t maximize our ability to identify change over time, because the training data clusters tightly around a limited range of carbon values.
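For concreteness, that bucket-then-randomize step looks roughly like this (a toy sketch: the point data is invented, and real Stratify buckets on all five layers above, not just the two used here for brevity):

```python
import random
from collections import defaultdict

# Hypothetical candidate points, pre-classified into coarse strata values.
# Real Stratify would use texture/soil type, elevation, aspect, slope, and NDVI.
points = [
    {"id": 0, "texture": "loam", "ndvi": "high"},
    {"id": 1, "texture": "loam", "ndvi": "low"},
    {"id": 2, "texture": "clay", "ndvi": "high"},
    {"id": 3, "texture": "clay", "ndvi": "low"},
    {"id": 4, "texture": "loam", "ndvi": "high"},
    {"id": 5, "texture": "clay", "ndvi": "low"},
]

# Group candidate points into buckets by the stratification variables...
buckets = defaultdict(list)
for p in points:
    buckets[(p["texture"], p["ndvi"])].append(p)

# ...then draw a sample location at random within each bucket.
random.seed(0)
plan = {key: random.choice(members)["id"] for key, members in buckets.items()}
print(plan)
```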

In short, expected future values will fall outside the training set because they will likely be higher. And in general, a more accurate model is built from a wider range of carbon values within each bucket.

(IMO) One good way to handle this is to intentionally collect samples with higher and lower carbon values than the standard field values. See below for an example graph comparing random sampling (existing Stratify) with sampling that intentionally selects high and low carbon values.

Recommended changes

From discussions with Jeff Herrick and Dan Kane, and internally (Dan T and Greg), there are a few small changes that could improve local models for change-over-time stocking estimates (increase accuracy / reduce sampling):

  1. Include hedgerows and areas near fields to support higher carbon value sampling within Stratify.
  2. Change Stratify to create buckets initially excluding NDVI (let’s call these ‘core buckets’). Show this mapping output to the user so they can see the buckets. Then subset the ‘core buckets’ using NDVI (high to low) to more intentionally select both high and low carbon regions (assuming NDVI is a good proxy for carbon levels, which I think it is).
  3. Also subset additional sampling at 10 - 20cm or 20 - 30cm within each ‘core bucket’ to pull more low carbon samples.
  4. If a ‘core bucket’ is likely to have a minimal carbon range (no high/low NDVI areas, etc.), the user should be informed and prompted to identify nearby locations that could support greater variability.
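A minimal sketch of change 2, with invented point data: build core buckets without NDVI, then pick the NDVI extremes within each bucket to widen the carbon range the local model sees (again assuming NDVI tracks carbon):

```python
from collections import defaultdict

# Hypothetical points: a core-bucket key (texture, slope class) plus a continuous NDVI value.
points = [
    {"id": 0, "core": ("loam", "flat"), "ndvi": 0.82},
    {"id": 1, "core": ("loam", "flat"), "ndvi": 0.35},
    {"id": 2, "core": ("loam", "flat"), "ndvi": 0.60},
    {"id": 3, "core": ("clay", "steep"), "ndvi": 0.71},
    {"id": 4, "core": ("clay", "steep"), "ndvi": 0.40},
]

# Core buckets exclude NDVI entirely.
core_buckets = defaultdict(list)
for p in points:
    core_buckets[p["core"]].append(p)

# Within each core bucket, take the lowest- and highest-NDVI points to
# intentionally span a wide (assumed) carbon range.
picks = {}
for key, members in core_buckets.items():
    ranked = sorted(members, key=lambda p: p["ndvi"])
    picks[key] = (ranked[0]["id"], ranked[-1]["id"])  # (low-NDVI, high-NDVI)
print(picks)
```

Change 4 would then be a check on each bucket's NDVI spread: if `max - min` is below some threshold, flag the bucket to the user.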

In addition, it’s important to add value to this process by stacking functions and packaging the offering in a way that lets farmers do what makes the most sense for them. Based on initial discussions, it seems pH (for variable rate lime) and biological activity (for tracking regenerative ag changes) would have the most benefit to map alongside carbon, and both are relatively low cost on a per-sample basis.

Finally, allowing farmers to identify the depths to track (0 - 10, 10 - 20, 0 - 20, 20 - 30… dunno, but something in there) may also be helpful, since farms that need to occasionally till and farms with perennial crops will likely maximize carbon benefit at different depths. This helps each farm maximize their benefit.

Cost estimation

See the cost estimate below - I think it gives a sense of where costs will land. There are high-cost and low-cost versions (denser sampling, higher cost per sample, etc.) to give a sense of the variation.

None of the numbers are absolute, but @DanT and @kanedan29 you have enough experience to at least guess here and ballpark.

I created some tabs so you can play around with it. Feel free, and let me know what you think.

What’s not in the sheet is…

  1. this requires someone who’s invested in soil sampling, has 4-wheelers and equipment, and has enough farms in a given area to be efficient with their time. We need to work with agronomists and others locally to do this right.
  2. cost to generate maps and provide feedback to farmers / agronomists performing the work
  3. marketing + customer success

I was pleasantly surprised that we land in the $0.5 - $2 per acre per year range for carbon stock estimation with direct sampling. Feels achievable!

Please comment / edit / discuss below. I would love to start moving this conversation forward this winter, so we all feel confident (or not!) that this is an achievable method in carbon markets like Nori.

Hey Greg,
Well, as promised, here are my thoughts, for whatever they're worth. I think you have identified an important problem, and your graphic example illustrates it perfectly. Not only is it important to capture that variability, but there are also likely non-linearities in these spectral relationships. I just wonder whether there is another way to go about this. I will attach an article below, but I wonder if the concepts of data mining and spiking might work here: you have a lot of variability out there at this point in terms of high and low carbon values. Statistical learning might be able to do something with that - maybe someone is already working on it. The question of whether spectral readings were taken consistently is probably a valid one, but something to consider moving forward.

Also interested in the “stacked function” (as you name it) of total nitrogen, and in talking to anyone else out there doing work or having interest in this aspect. Obviously total N correlates well with total C and informs management decisions with current economic implications. The C/N ratio has been used in a lot of ways, but understanding it spatially might integrate directly into management zones very quickly. C and N (along with their temporal dynamics, sort of implicit in the CO2 burst tests you are already doing) also form the basis for the USDA soil health index. I think the benefits of helping producers quantify and manage soil health and productivity at a broader and more spatially explicit scale might encompass the most immediate demand for this work.

Finally, I will just mention that I would question the idea of NDVI as a reliable proxy for soil carbon. That may be true in some places, but in Michigan, for example, it varies by season; in years like this one, NDVI would be inversely related to carbon content in most cases due to the wet climatic conditions. In my mind you are on the right track, trying to maximize the variability of within-field sampling (the whole reason traditional grid sampling has pretty much disappeared), but I love the idea of combining that with other data. Great post Greg, thanks!

1 Like

@dornawcox not sure if someone from Comet Farm is on but @Craig from Cool Farm Tool is on the forum and may have some interest.

Craig - is there utility in a lower cost prediction of total N for you all?

I would imagine that total N may be a useful additional piece of info in the modeling world, and increasing utility of the effort of ground truthing on models is always a consideration.

Greg - thanks for putting this together - great to see someone else thinking about the prices and sampling density to make quantification worthwhile!

From the CFT perspective we don’t use total N in our model since it doesn’t simulate the C/N dynamics in the soil (it’s empirical rather than process-based), so I’m not sure if it would be useful at this point. However, I think it may be helpful to have as the CFT evolves to be more process-oriented.

We do have some projects spinning up where we need to be able to quantify changes in SOC over a number of farms and fields, so I will be spending quite a bit of time this fall thinking about the required sampling methodology. A few thoughts related to this need and your summary:

  • The capacity for changing SOC is going to be very different in distinct bioclimatic regions and in different fields. One fear of mine is that we over-promise payments/changes in SOC and therefore sour farmers and investors on the idea of SOC sequestration. For those reasons, I think it would be incredibly helpful to have a reference for the range of potential SOC values in specific climates, soil types, etc. This could provide at least an initial estimate for whether carbon sequestration credits would be achievable on any given piece of land.

  • I don’t know enough about the Stratify methodology, but I think it would probably be helpful to have different blocking variables in different regions. In addition, determining the optimum number of samples within each stratum could be informed by variograms from analogous locations (see paper on Spatial Soil Sampling).

  • In terms of stacking functions, I feel like mobile nutrients (e.g. nitrate) don’t offer a great value add since they’re so fickle. However, phosphorus could be a nice addition, especially where manure is frequently applied and regulations are becoming increasingly stringent.

  • Your thoughts on trying to capture greater variability make complete sense. If you’re sampling a bunch of fields in one area, I wonder if you could capture a greater range of variability in just a few fields and extrapolate rather than intentionally targeting high/low carbon areas in each field that’s seeking credits?

  • Have you considered using the topographic wetness index (TWI) for stratification? It’s easy to derive from DEMs and can sometimes provide a nice proxy for various soil characteristics.
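For anyone unfamiliar with it, TWI is typically computed as ln(a / tan(β)), where a is the upslope contributing area per unit contour width and β is the local slope. A quick sketch (function name and inputs are illustrative; real workflows derive both terms per-cell from a DEM):

```python
import math

def topographic_wetness_index(contrib_area_m2, contour_width_m, slope_deg):
    """TWI = ln(a / tan(beta)), where a is the specific catchment area
    (upslope contributing area per unit contour width) and beta the local slope."""
    a = contrib_area_m2 / contour_width_m
    beta = math.radians(slope_deg)
    return math.log(a / math.tan(beta))

# A flat, well-fed cell scores wetter (higher TWI) than a steep cell
# with a small upslope catchment.
print(topographic_wetness_index(5000, 10, 2))   # gentle slope, large catchment
print(topographic_wetness_index(200, 10, 15))   # steep slope, small catchment
```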

re: reference for expected SOC ranges - I completely agree. We were just walking through what an interaction would look like with someone who wants this… and the first thing is to understand the potential of the land, and the likely changes from the practice changes they would seriously evaluate. That boxes in the potential pretty clearly.

re: optimum number of samples - YES! @kanedan29 has been working on that for some time, and we have 3 locations we’re testing in this year that I hope will help nail that down (we’ll have SOC + spectral data on it all, so we can ask what the minimum number of SOC measurements is to reasonably predict the rest using spectra).

re: capture a greater range of variability in just a few fields and extrapolate DOUBLE YES! It feels reasonable to me given that we’re already using models to predict changes in carbon with no on-the-ground measurement (this is almost like a hybrid model / measure strategy). Would significantly reduce sampling load.

re: topographic wetness index… nope! I’d like to hear @DanT or @kanedan29 's perspective on this.

Differences in bulk density, especially in field crop systems at different sampling times, often introduce ~10% variability, which is a challenge to any attempt to assess soil C change over time using dry combustion, LOI, or spectral methods (e.g. samples typically vary in bulk density from 1.4 to 1.6 depending on whether a field has been disturbed recently, thus altering the layer that is actually being sampled)… could you factor this in? Add bulk density measurements in the field? Also, variability in low-C soil types at the 1 to 30 meter scale can be really high - how is this being addressed? Is this a grid sampling being proposed? Not quite following. I like the idea of including sampling of a fence row, as this can indeed provide a higher soil C ‘goalpost’ for the soil type and environmental conditions.

1 Like

Good point Sieg… yeah, not sure about that, though I’m assuming a model-based approach would suffer from the same issue, and if they’ve addressed it, it seems like we’d be able to address it in at least the same way (right?).

Also, what I heard on the Nori call from @cjospe is that it’s $0.25 per ton for their current process using COMET.

I added cost per ton to my calculations as well, in the spreadsheet above (here is the link again).

The per-ton cost range for direct measurement is $0.9 - $5.1, depending on the complexity of the terrain, etc. So definitely higher - but if the current Nori process involves some total carbon measurements at year 1 and year 10, I don’t quite know how you do that for a lot less than $0.9 per ton…
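For reference, a per-ton figure is just the per-acre cost divided by an assumed annual sequestration rate. Everything in this sketch is an illustrative assumption (the $1/acre/yr sits inside the $0.5 - $2 range above; the sequestration rate is invented, not a number from the spreadsheet):

```python
# Illustrative back-of-envelope conversion from per-acre to per-ton cost.
cost_per_acre_per_year = 1.00       # $ per acre per year (assumed, within the range above)
tons_per_acre_per_year = 0.5        # assumed annual sequestration rate (made up)

cost_per_ton = cost_per_acre_per_year / tons_per_acre_per_year
print(f"${cost_per_ton:.2f} per ton")  # prints "$2.00 per ton" for these assumptions
```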

Christophe, do you have some calculations on how you got to $0.25?

Other updates from Nori call!

I was wrong on the $0.25 - it’s $0.25 per verification event (every 3 years)… so over a 10-year period that’s more like $1 total (maybe?). Sorry - I may wait until the meeting is all the way over before posting next time :slight_smile: Trey (the initial farmer they worked with) used Granular, so that made things easier; it’s probably harder for folks who are less organized.

Also, they updated the error rate on the COMET-Farm confidence interval estimates, which should come out soon (days or weeks). So that’s cool! It’ll be in an open-access journal - @cjospe, post that here!

Hey Greg and others -

There’s clearly a lot of value in publishing a methodology that quantifies the trade-offs associated with field-scale SOC sampling, uncertainty quantification, and carbon market payments (excluding modeling-based approaches) - and that creates a procedure for integrating available information to arrive at optimal sampling numbers/spatial layouts. Building on some of my previous work I’ve been gathering resources and starting to put time towards a paper. Are any other folks out there trying to publish something along these lines (no need to duplicate efforts)? If not, who would be interested in collaborating or contributing time/resources to the project? It feels like this is a critical juncture to have a transparent, open-access methodology for answering these questions and building trust in the process.

1 Like

+1!!! I’d love to figure out how to support something like that. I know Nori is interested in establishing a working group around soil sampling, and I think @dornawcox is as well, and there’s interest from Mad Ag folks (Phil Tayl). But curious to hear from Dan and Dan and Sieg and others in the space for sure. A lot of these folks are at Tri-Societies right now; I hope they can connect on this topic.

@gbathree It certainly seems like there’s a lot of interest in this. I was at Tri-Societies on Monday and Tuesday and there were numerous mentions of the need for such a methodology/framework, although it seems like most folks are at the “working group” stage. Bill Salas (I think) at Dagan said that the Ecosystem Services Market Consortium is convening some sort of working group early next year as well. TBH I’d ideally have something to use on a shorter time frame, and I made some good progress towards generating the core of the paper over the last week. I don’t have any need to be at the center of it, but I’m also willing to be the one pushing it if need be. At this point, I may keep moving with the writing and simultaneously send out an email to interested parties to gauge resources and build collaboration (unless you or anyone else has recommendations for a better way to proceed). Here’s the list of potential partners (in no particular order):

  1. Quick Carbon
  2. OurSci
  3. Mad Ag
  4. Nori
  5. Regen
  6. Dagan
  7. SHI
  8. SHP
  9. SFL (perhaps that’s obvious)
  10. ESMC
  11. Any others?

The way I see it, this first phase will be focused on defining the methodology and framework. After that, there would ideally be a range of case studies that could demonstrate real-world applications.

Based on this week’s OpenTEAM call, I’m going to set up a call and invite as many of those folks as I know (if I miss anyone, please add them), and at least we can have a starting-point chat now.

To make it easier, I’m just going to share the doodle poll link here.

Anyone can join. Based on the results of the poll, we’ll move it to the OpenTEAM community calendar. If I’m missing folks, reply and @ mention them so they get pinged and see the doodle link.

@aaronc @DanT @Daniella @plawrence @ircwaves @dornawcox @mstenta @cjospe (can someone ping mark easter also, I don’t have his contact info or @mention name) … @ mention below if I missed someone!

1 Like

Thanks Greg! I’ll send the link to Mark Easter.

Not sure if any of the folks doing field trials might want to be involved, but I’ll mention @maria.bowman in case. Also wonder if @kanedan29 might want to participate, esp. with all their work on sampling stratification.

A quick update. You all may be familiar with this (somehow I hadn’t found it), but I figured I’d link to a paper that seems at first glance to do exactly what I had been thinking methods-wise! Amazing how that can sometimes happen!

Farm-scale soil carbon auditing

Happy to share with anyone else - just shoot me a message.

And here’s an associated Julia package!

Awesome thanks Patrick!

Also, we got 4 replies quickly, and the time narrowed in fast.

Dec 5, 12:00 EST, click here to see the event on the OpenTEAM calendar - or here for the entire OpenTEAM calendar.

Please add that to your calendar and join us on the call!


Present: Ryan from Nori, @mstenta, @cjospe, @mark.easter, @dornawcox, @DanT… I’m sorry if I’m forgetting anyone - that’s all I can remember; I didn’t write this down at the beginning of the call!

  • (dan t) (re. conversation on the papers from @plawrence above) the first paper is based on actual remote sensing data with added observations, so it’s not 100% relevant.
  • <-- but this one includes sample costs in the calculation of #s of measurements and sample collection strategy - this is key!!
  • Also, should we be sampling from unique locations each year to reduce fraud, rather than resampling the same locations?
  • (dan k) There’s also an R package as well as the Julia package. This is a really clever procedure, and indigo ag is starting to use this as well. So we should probably just build a microservice which can swap in stratification algorithms, maybe built around R.
  • (christophe) Is there a standard way to quantify uncertainty? I have been pushing this on COMET-Farm, so now that we’re talking about this re. direct measurement, does the sampling uncertainty connect back to what Nori would define as uncertainty…?
  • (dan k) Structural uncertainty - if you’re using something like COMET or a model, it’s uncertainty associated with the parameters. When it comes to spatial stuff, uncertainty is around data coverage: measurement uncertainty (within a measurement) and uncertainty between measurements (outside of what you actually measured).
  • (mark) Structural uncertainty - we have a model and we think we understand it, but the uncertainty comes in around processes we don’t know: the known unknowns and the unknown unknowns. Recently completed rebuilding the uncertainty model for DayCent… tied in a measurement dataset with 65 sites and 800 different measurements (100s of people processing soil samples, different people doing lab analysis, etc.). That isn’t even structural uncertainty - that’s measurement error.
  • (dan k) re. the paper - it’s trying to answer ‘are you measuring enough’
  • (christophe) so the question re. error is: can farmers who are ‘beating the model’ show it via direct measurement, and can that be used to increase…
  • (mark) yes, but the timeframe is long. It takes 3 - 5 years to detect changes in soil carbon and identify a treatment change… but can we wait for that???
  • (mark) also, we need to start drawing down atmospheric CO2 levels… what’s blocking us is financial risk… we KNOW what will sequester carbon in soil, there’s uncertainty about how much… so we just need to do it… <-- key point
  • (dan) there are standard procedures, but ish
  • (greg) can we just accept general consensus from smart people as good enough?
  • (ryan) that’s what we’d like to do… we’re trying to build a general open group to establish that consensus. We need to get the folks together for a consensus-based approach on this. We already have a ‘coalition of the willing’… ideally there’d be OpenTEAM resources to bring to bear - not just to benefit Nori; we could be the platform to help get this done. It certainly would align with OpenTEAM objectives.
  • (christophe) we think in terms of projects, we have them lined up, so in an ideal world we’d enable a learning by doing approach… so yes to consensus but let’s get started!
  • (greg) with the caveat that ‘something is better than nothing - it’s ok if it’s not perfect’ is the mantra as that working group gets started.
  • (dan) … who else could be on the group? Like the ecosystem market consortium, or others.
  • (dorn) there’s another group that FFAR is convening on this as well, to start strategizing to align these things from FFAR’s perspective. Pitch for the Jan 7th meeting as well… what can we align on given the urgency of this year? What tools + methodologies can we align on ASAP?
  • (greg / dorn) - directionally correct, versioned, process for updates / contributions, critical + productive review as the structure for ‘evaluating’ a methodology.
  • (mark) - yes, that sounds great! Another question… is there a good understanding of the least-cost method to accomplish a couple of goals? Low cost --> improve datasets to reduce uncertainty, but also better inform the least-cost way to assess whether I’m making a difference.
  • (dan) - we’re working on that! Hoping to take some datasets and crack that question. Another way to think about this, is what are the procedures for determining error (from tools, to models, to final stock estimating error)… let’s make a defined way to estimate error that we all agree to use. This feels tractable also.
  • (mark) - if it helps, we can present on the soil stocking estimate error we’ve developed. Would this be better pre-holidays, or during the Jan 7th meeting? Action item below for Mark to present their error estimate models at Nori’s webinar.
  • (ryan) - we have an hour for Nori already, so we could touch on Comet Farm there (15 minutes)…
  • (greg) - what can we do to be useful this year
  • (ryan) - Nori is running a pilot for croplands right now… let’s ask who’s interested in taking additional on-the-ground measurement data… can we get them into the pilot so we can work through and learn from a ‘both-and’ approach on modeling + sensor-based results?
  • (dan t) - we’re talking with Mad Ag who also has folks who work with Nori and Comet right now as well
  • (ryan) - yep we’re working with them as well, we have at least 3 to get started.
  • (dan t) - we have a person interested working with Savory as well.
  • (greg) - we need clarity on a package to discuss with farmers covering how to use COMET and direct measurement together.
  • (mark + christophe) - I’m interested in engaging on the grass and grazing areas.
  • (dorn) - has a meeting with Maria + Steve on Jan 7th meeting, so what should we add or cover for the agenda…
    • Get feedback on our draft agenda…
    • (mark) be happy to engage in the future


Next call - Mark to present their error analysis for carbon stocking on Nori’s OpenTEAM call.
Open a Google doc with our best first stab at a direct measurement process for this year. Not sure how not to overcommit myself, but could we get started… Dan T is going to write up a draft… it seems like all the players are in place to do something interesting this year.

@gbathree, @mstenta, @cjospe, @mark.easter, @dornawcox, @DanT

Hi All,

My apologies for missing the meeting due to a time zone mix-up - I really wish I had been on the call; I was excited for it! But thank you Greg for taking and posting the notes.

I’d tend to agree that we can’t wait the 3-5 years for detecting changes in SOC, but it is important to have the measurement mechanisms in place to validate the model predictions once the 3-5 years have transpired.

I’m keen to contribute resources and time towards creating the directionally correct, versioned, process for updates/contributions. SFL also may have some farms on which we could begin the tandem model/direct measurement approach.

I’ll try to connect with a few of you individually to assess next steps (perhaps in the google doc) and if we can help facilitate.