JavaScript Basics of starting an Our Sci measurement script

Hi All! I’m starting on a simple measurement script for a Regen Network project, and I’m playing around to see how it all works.

I have an object set up as an answer key. It seems like I would want to pull answers from the survey in sensor.js and then pass them along to the processor? And to do that I just put a result object into app.result()?

It looks like app.getAnswer is used for reading in data, and either the AndroidApp or DesktopApp version determines whether that data comes from a mock file or from the survey app…but how/where is it determined which one is used? Something in the build process that I don’t have to worry about?

So then the result comes into the processor, I just do whatever I want with it there, and the app.csvExport function loads it into the CSV that ultimately gets ‘sent’ to the Our Sci web app? And that’s it?

Am I able to display output to the device at the time of the survey being finished in the field?

Thank you guys n gals!!! I appreciate any light you can shed for me

Hey Jared!

Exciting to hear you’re checking out Our-Sci! As you probably already found out, we are still lacking proper documentation for scripts, but we are working on it with new examples.

Some of the mechanisms are undergoing change right now, but I’d like to give you a rundown of how sensor.js and processor.js work together.

  1. app.getAnswer()
    You are completely right: if you use this function in the Android application, it will attempt to read an answer from a question of a survey (it returns null if the script is run outside a survey). And as you mentioned, in the dev environment mock data is pulled in (from the file ./mock/answers.json). The build process figures out whether you are running in the dev environment or in the Android application; for this we are using webpack.

  2. app.result()
    Passing an object to app.result(resultObject) makes the data available in processor.js for further “processing”. We are currently reworking the names, as “processor” really isn’t a fitting name for this. Whatever you pass as an argument to app.result() will be stored as a JSON object together with your survey and will be available on the web. It is meant to be used as storage for “raw” data.

  3. app.exportCSV()
    Here is where things get a bit funky: in the processor, the export functions allow you to create an additional “column” in your survey data (visible as a column when you go to the data table of a survey). The idea behind it is that you can store processed values based on the collected resultObject that was produced by app.result().

  4. Displaying output when survey is finished
    If you would like to have some kind of summary at the end of the survey, there is a “summary” feature in place. The outputs will not be saved, though; it is simply a custom “UI” component that is run at the end of the survey. It could be used for recommendations or for showing comparisons with other data sets.
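Putting steps 1–3 together, a minimal sensor.js/processor.js pair might look like the sketch below. The question id `soil_ph`, the mock answers, and the exact shapes of the `app` calls are illustrative assumptions based on the descriptions above, with a tiny stand-in `app` object so the sketch runs in plain Node:

```javascript
// Stand-in for the `app` bridge so this sketch runs anywhere;
// on the device (or via webpack in dev) the real `app` object is provided.
const app = {
  _answers: { soil_ph: '6.5' },          // pretend mock/answers.json content
  _result: null,
  getAnswer(id) { return this._answers[id] ?? null; },        // null outside a survey
  result(obj) { this._result = obj; },                        // stored as raw JSON with the survey
  exportCSV(name, value) { console.log(`${name},${value}`); } // extra data-table column
};

// --- sensor.js side: read survey answers, store a raw result object ---
const ph = parseFloat(app.getAnswer('soil_ph'));
app.result({ ph, collectedAt: new Date().toISOString() });

// --- processor.js side: derive values from the raw result for the CSV ---
const raw = app._result;
app.exportCSV('ph_category', raw.ph < 7 ? 'acidic' : 'alkaline');
```

The split mirrors the description above: sensor.js collects and stores raw data, processor.js derives the values that show up as columns.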

Super excited you are checking out the dev environment. Please don’t hesitate to give us feedback on what is confusing or what we can improve; “DX” is one of the key areas we want to get right.

@neuralsplash Hope that helps, please post any more questions; either Manuel or I can help from there.

Thank you @mdc!! I super appreciate it, that is indeed very helpful. I’m going to be working on it again today, so I’m sure I will come up with more questions! One quickie that I have right now: for associating the summary script, the document referenced above says I go to form properties in ODK Build and input a submission URL. What format is that URL? Do I just reference the summary script from the root directory?


Or something else entirely? Also, what is the method doing? It looks like it has the results object passed in entirely, and then individual properties passed in one at a time.

One other question…I see that the npm script sensor runs the sensor. Does that run the ‘built’ version of it or the ‘raw’ version? (I see now, it runs the raw version.) And is there a similar way to run the processor locally? I’m imagining run-processor-dev? None of the breakpoints I dropped in there seem to trigger; I’m guessing it has something to do with how it’s being rebuilt with webpack.

Which I suppose sort of comes down to: what’s the best way for me to test the processor file as I work? The three ways I am imagining would be to send results to the Our Sci server, to output results via the UI, or to use a debugger. Thanks!

Another update:
I’m playing around with the UI and run-processor-dev, trying to get it to display anything at all. Here’s the current processor file, without a bunch of the defines and includes.

(() => {
  const result = (() => {
    // No `processor` bridge in the dev environment, so fall back to mock data.
    if (typeof processor === 'undefined') {
      return require('../data/result.json');
    }
    return JSON.parse(processor.getResult());
  })();

  ui.info('Fecha', result.info_general.fecha);

  Object.keys(result.error).forEach((a) => {
    ui.error(`Answer '${a}' is ${result.error[a]}`, 'Return to the answer and fix it.');
  });

  ui.info('Fecha', 'yesyesyes');
})();

All I’m seeing in the browser after run-processor-dev is a blank brownish page with the following source.

Also, I’m getting a few warnings about the UI library when I run it.

I’ll answer a few: ui.info() is just creating a nice blue box to display information to the user when they run the measurement script. You can have a title and information, like ui.info('title', 'more info here').
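To make that concrete, here’s roughly what the two helpers mentioned in this thread look like in use. The console stand-in is mine (so the snippet runs in plain Node); on the device the real UI library renders colored boxes instead:

```javascript
// Console stand-in for the measurement-script `ui` helpers (assumed call
// signatures: a title plus a detail string, as in the examples above).
const ui = {
  info: (title, body) => `[info] ${title}: ${body}`,   // blue info box
  error: (title, body) => `[error] ${title}: ${body}`, // error box
};

console.log(ui.info('Fecha', '2019-06-01'));
console.log(ui.error('Missing answer', 'Return to the answer and fix it.'));
```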

In terms of the errors, yeah, I had the same issues. It’s the nature of being in a shifting system (though @wgardiner is helping nail it down pretty quick here).

@wgardiner can you follow up? I could help on this (I know how to solve the problem), but I feel like this is a good time to start using your boilerplate and all that jazz which is a better solution than mine (which will be hacky) and it’s worth you connecting directly with folks from Regen!

@neuralsplash I’ve set up a new way to scaffold the boilerplate for measurement scripts that I think will solve the issues that you’re running into. Here’s the Gitlab repo. It’s still under development so let me know if you have any issues.

you can download the boilerplate with degit

npm i -g degit
degit my-new-measurement-script

The errors in your second pastebin link look like there was an issue loading the measurement script ui module. In older scripts this lived in ./lib/ui.js, but recently we’ve moved it into its own npm package, @oursci/measurements-ui, and the new boilerplate above uses it (here’s an example, which is downloaded as part of the boilerplate scaffold).

Regarding the built versus “raw” version of the sensor script: npm run sensor currently runs sensor.js directly via babel-node, without any build. You’re correct, npm run run-processor-dev runs processor.js in the webpack dev server and is suitable for development use.

I’ll look into why debugger statements aren’t executing.

@neuralsplash I just updated the source map used by @oursci/measurement-script-bundler package for the processor development server in the measurement script boilerplate. Debugger statements now execute as expected :+1:

If you continue to have any issues with getting them to execute, double-check your package-lock.json or yarn.lock and make sure that you’re using @oursci/measurement-script-bundler version 1.0.12.

Performance for HMR is pretty lousy right now so hopefully I can find some optimizations.