I was mulling over today’s Human Centered Design breakout meeting, and it occurred to me that one possible lens for all the issues we discussed is design data; by that I mean the data required to formulate and execute on a given design. Whether we’re talking about user feedback, feature requirements, budget numbers, or even our own mockups, I find it helpful (at least, for me) to think of it all as different forms of data that can inform the decision-making process. And like any kind of data, we can ask certain questions about it:
How do we capture that data and guarantee its completeness?
How do we organize and structure that data?
How do we clean, filter and process that data?
How do we make that data accessible and actionable for the user (i.e., ourselves)?
Or, to get totally meta, how do we design a system for handling this design data in its own right?
I’m not sure if this is helpful to anyone else, and I suppose it’s just a rephrasing of ideas we’ve already trodden, but perhaps we could work toward answering some of those questions within the specific context of OpenTEAM, with special attention to how all of that can be achieved in coordination with the hub farms (keeping in mind the “survey fatigue” that Kita or Greg mentioned).
Anyways, that’s my 2 cents. Looking forward to picking up the conversation tomorrow!
Without knowing anything about the HCD breakout, I quite like the list. I need to write something about an ISOBlue project we ran in the Netherlands, and the list actually helps me structure the content I would like to offer in the report.
Hi. My background is in economics and business. I have been working to develop open collaboration as an alternative to “blind competition” behaviors, and to your list I would add some other questions to frame the whole collaboration:
What is the need we want to solve, and how do we ensure, in advance, that “the innovation” will serve future users?
Who is giving their work to create value and contributing to solving “the need”?
How do we openly measure the value of each contribution?
Who is capturing the value, and how do we reward the contributors?
Many of us talk a lot about openness and transparency, but the simple, basic questions above remain very diffuse.
One proposal for facing these questions is here. But of course it has to be improved and proven. Would you like to work on this set of open collaboration rules? The rules themselves are very simple; the complex issue is unmasking ourselves about our real motivations and desires… I am pretty sure that if we solve these issues, the collaboration will flourish…
Oh awesome! The ontology spreadsheet is great, and I could see it having direct usefulness to design. Names, and the relationships between those names, matter as much in interface design as in data science. I can’t tell you how much time I’ve spent with users on the semantics of the word “crop” (does it refer to the actual stuff planted in my field, or is it more like the Platonic form of that stuff growing in the field?).
This also reminded me of a good discussion we had over on the farmOS forum:
The crux of that discussion was: what patterns do we find useful when structuring information for our data models, and how are they different from the patterns we find useful when structuring the same information in our interface designs? I think it’s important to identify a process for mapping information from our data models to our interface designs, without forcing the interface to take on the same structures and patterns of the underlying data model.
Perhaps an interesting exercise for the HCD group would be to shadow the work being done by the tech team on ontologies, and try to map those concepts to a useful design model. Again, drawing on the farmOS forum discussion, I would ask: what are the relationships that cut across these hierarchical models? For instance, I see a lot of the ontologies already in the spreadsheet are very strictly bracketed into categories such as “crop list” and “fertilizers”, but what if we look at seasonality and consider which crops and fertilizers are used in the month of August, versus April? How does this relationship cut across all the ontologies, and what new insights might it reveal to the user?
Absolutely - that same transition (from a hierarchy organized by one group’s norms to a hierarchy organized by another group’s norms) is what motivated the need for this in the first place. @sudokita showed me existing crop lists, which are all built around genetics, phenomics, and/or breeding.
I’d love to come up with a very large list of ‘organize by month’ type ideas as test cases to ensure that we can serve that up. Even if on day 1 we’re still talking crop ontology in the ‘normal’ format, we should plan so that our API can spit out a crop-by-month ontology (or some similar variant) later in the game.
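As a rough illustration of what such a by-month view could look like, here is a minimal Python sketch. The entry names, categories, and months are all hypothetical placeholders, not the actual spreadsheet’s contents; the point is just that a flat, category-bracketed list can be pivoted into a month-keyed view that cuts across the ontologies.

```python
from collections import defaultdict

# Hypothetical flat ontology entries; the real spreadsheet columns differ.
entries = [
    {"name": "Tomato",  "category": "crop list",   "months": ["April", "August"]},
    {"name": "Garlic",  "category": "crop list",   "months": ["October"]},
    {"name": "Compost", "category": "fertilizers", "months": ["April"]},
]

def by_month(entries):
    """Pivot category-bracketed lists into a month-keyed view
    that cuts across all the ontologies."""
    view = defaultdict(list)
    for entry in entries:
        for month in entry["months"]:
            view[month].append((entry["category"], entry["name"]))
    return dict(view)

# by_month(entries)["April"] then pairs the crops and fertilizers
# used in the same month, regardless of which ontology they came from.
```

An API endpoint could serve exactly this kind of pivoted view on demand, leaving the ‘normal’ ontology format untouched underneath.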
Excellent! And perhaps, getting back to the initial idea of this post, such a list could be the germ for a larger library of design data, as we test the usability of these ontologies against user expectations and document those results.
Now I’m imagining we create a sort of “shortest path” algorithm for how a user can navigate from one node to another in a hierarchical ontology, or trie, without having to traverse back up to the root of the trie: by superimposing a graph-like ontology on top of the trie-like ontology, we allow more lateral, direct movements. Something like this:
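A minimal Python sketch of that idea, with hypothetical node names (the real ontology would be much larger): the hierarchy is stored as child-to-parent links, lateral graph edges are layered on top, and a breadth-first search over the combined graph finds the shortest route.

```python
from collections import deque

# Hierarchical ontology as child -> parent links (hypothetical names).
parent = {
    "Vegetables": "Crops",
    "Fruits": "Crops",
    "Tomato": "Vegetables",
    "Banana": "Fruits",
}

# Lateral, graph-like edges superimposed on the hierarchy,
# e.g. "planted in the same month".
lateral = {
    "Tomato": {"Banana"},
}

def neighbors(node):
    """Tree edges (up toward the root and down to children)
    plus any lateral edges."""
    out = set(lateral.get(node, set()))
    if node in parent:
        out.add(parent[node])
    out.update(child for child, p in parent.items() if p == node)
    return out

def shortest_path(start, goal):
    """Breadth-first search over the combined trie + graph."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

Without the lateral edge, getting from Tomato to Banana means climbing Vegetables → Crops → Fruits; with it, the hop is direct.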
I am not a programmer, but I have been practicing my R. I have been following this project for 5 years now, and I love that it is still under development by real people with real ideas and goals. I have finally submitted my PhD thesis in aquaculture nutrition. I started writing my thesis in RMarkdown and spent 2 years trying to get it working before moving back to MS Word. That being said, I enjoyed R and want to collaborate by getting into this more. I have “time” and I want to see if I can add to this team effort.
I really enjoyed working in R and the power that it has. There have been several big improvements to the entire ecosystem, and there are some really cool ways of generating dynamic reports. I have been playing with creating dashboards using quantity reports from farmOS to get an understanding of my current banana crop: things like box-to-bunch ratios, cumulative harvest per field, anomalies, rainfall, inputs, sales price, and a map with observations. I think of this as my crop-specific dashboard, adjustable with filters, for getting an idea of how a field is performing now compared to previous years and overall.

The way I was going to do this was to routinely create quantity reports that are appended to a single CSV file as it is updated. I will write generic reports that can give summary data for specific questions, and then have the dashboard contain all the data available for each report. I find this a little bit barbaric, and I will have to force a few things to happen. If I could describe a quantity report that I wanted to save, and have it dynamically updated each time new logs are added, that would save a lot of time. Quicken and GnuCash have the option of saving reports that are updated as new information is added. Going further on this thought, I would really like to be able to enter data into spreadsheets that are “watched” by farmOS, with the generated reports saved to a folder that my R script would fetch for the dashboard/reporting.
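The “append new reports to a single CSV” step described above could be sketched roughly as follows. This is a Python illustration rather than the R the poster uses, and the column names and de-duplication key are hypothetical, not the actual farmOS quantity-report schema.

```python
import csv
import os

def append_report(csv_path, new_rows, key="timestamp"):
    """Append quantity-report rows to a running CSV, skipping rows
    whose key (e.g. timestamp) is already present, so routine
    re-exports don't duplicate data. Assumes new_rows is non-empty
    and all rows share the same columns."""
    seen = set()
    exists = os.path.exists(csv_path)
    if exists:
        with open(csv_path, newline="") as f:
            seen = {row[key] for row in csv.DictReader(f)}
    with open(csv_path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(new_rows[0].keys()))
        if not exists:
            writer.writeheader()
        writer.writerows(row for row in new_rows if row[key] not in seen)
```

A dashboard script can then re-read the one growing CSV on each run, which is roughly the “saved report that updates itself” behavior described above.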
This is what has been keeping me up at night as I look for new ways to spend my newly acquired free time. I think I am adding to your points by laying out what I have been trying to do, @jgaehring? I would be happy to discuss this further if you have any comments or plans to clean and visualize the data from farmOS.
First off, huge CONGRATS on submitting the thesis!!! (With some congrats held in reserve in case you still need to defend it.)
I’m not too familiar with R. I take it the dashboard you’re talking about is something generated by the R environment, independent of farmOS itself? Is it anything like Jupyter notebooks for Python?
Basically, the different language communities are all at war about which one is best for doing similar things. I suppose which language best suits your needs depends on where you are coming from. I used R for my stats and data analysis for my thesis, so I am familiar with it, and it is awesome. Following a set of guidelines from the tidyverse, it is possible to script the data cleanup from various formats, manipulate the data, and visualize it dynamically, and by using RMarkdown like a notebook it can all be reported, with the updated sheets automatically processed when running a new report. If that makes sense.
“I take it the dashboard you’re talking about is something generated by the R environment, independent of farmOS itself?”
Yes. Basically it can be unique to the cultivator’s system, where the code library for various users can be updated and farm-specific combinations are created. My thinking is that data is nice to have, but what does it mean to the farmer? I am looking at creating a dashboard for my banana data that will help inform management decisions without having to dig into the data manually every time. My dash will not be useful to every farmer, but if we created a way of “looking at the data” that farmOS so very nicely keeps, there will likely be interesting insights gained by all users. How do we create actionable outcomes from our data? Below are the tools I have played with in the past to create a dashboard: Shiny and flexdashboard.
Can the formatted logs be used to generate quantity reports, either on a routine schedule or whenever they are updated, so that everything stays ‘live’? If these reports were sent as .csv files to the folder where R is running, the dashboards could be updated. Obviously this stuff isn’t all user friendly, but I think the next step for all this very nicely managed data is starting to make sense of it for actionable outcomes.
The library can format the logs as JSON, and from there it’s a hop, skip, and a jump to get them into CSV format. Although this is also making me wonder if it would just be simpler to save the spreadsheet itself as a CSV?
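That JSON-to-CSV hop really is short. Here is a minimal Python sketch; the log field names below are hypothetical stand-ins, not the actual farmOS log schema, and it assumes the logs arrive as a flat JSON array of objects with identical keys.

```python
import csv
import io
import json

# Hypothetical farmOS-style quantity logs; real field names will differ.
logs_json = """[
  {"timestamp": "2021-08-01", "field": "Banana A", "measure": "count", "value": 120},
  {"timestamp": "2021-08-08", "field": "Banana A", "measure": "count", "value": 95}
]"""

def logs_to_csv(logs_json):
    """Convert a flat JSON array of log objects into CSV text,
    using the first object's keys as the header row."""
    logs = json.loads(logs_json)
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=list(logs[0].keys()))
    writer.writeheader()
    writer.writerows(logs)
    return out.getvalue()
```

Writing the result straight to the watched folder would slot into the dashboard workflow described earlier in the thread.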