Folks, I'm seeking early feedback on an idea, please help!
I’ve been thinking about tools that can help trigger reflection or “ethical thinking” during the model-building process. My target “users” are individual data scientists, researchers, and AI/ML/DL practitioners. My goal is to make reflection a habit. I'm seeking feedback on one such (very simple) tool…
Proposal
A set of 5 questions provided as a fastai widget OR a form (html/google). Running the widget triggers the questions; when you answer and submit, your responses become part of your Jupyter notebook (yay literate programming!), available for all to read.
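For the curious, here is a rough sketch of the widget's core logic. This is my own illustration, not the actual POC code: the names and flow are assumptions, and I've kept it as plain Python (a real notebook version would render the questions with ipywidgets `Textarea` fields and a Submit button). The idea is just that answers get turned into Markdown that lives in the notebook.

```python
# Sketch of the reflection widget's core logic (assumed design, not the actual POC).
# A real Jupyter widget would render these with ipywidgets and a Submit button;
# here, collecting answers and formatting them is a plain function.

REFLECTION_QUESTIONS = [
    "State ONE target application for this work.",
    "Is your data/reward EXACTLY what you want to predict/classify/learn?",
    "State ONE application of this work that you think is harmful.",
    "State who or what is NOT represented in your data but is impacted by the results.",
    "These results are only valid FOR <list groups/conditions>.",
]

def render_reflection(answers):
    """Turn one answer per question into a Markdown summary for the notebook."""
    if len(answers) != len(REFLECTION_QUESTIONS):
        raise ValueError("Expected one answer per question")
    lines = ["## Reflections"]
    for i, (q, a) in enumerate(zip(REFLECTION_QUESTIONS, answers), start=1):
        lines.append(f"**{i}. {q}**")
        lines.append(a.strip() or "_(no answer)_")
    return "\n\n".join(lines)
```

Inside a notebook, the returned string could be shown with `IPython.display.Markdown`, so the reflections sit right alongside the code.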
Now, for the feedback I’m seeking, easy stuff first:
1) Do you think this is valuable?
- Yes
- No
2) As a user, will you actually fill it out (once per project)?
- Sure
- Never
- Maybe, if you add/change <please leave a comment!>
Harder part of the feedback:
Do the 5 questions below (the 3 in bold are key, imo) make sense to you? What do you think I’m asking for? Is any key question missing? Any other feedback?
(Please leave responses as replies to this post. Thank you!!)
The 5 Reflections/Questions:
1. State ONE target application for this work.
2. Is your data/reward EXACTLY what you want to predict/classify/learn?
3. State ONE application of this work that you think is harmful.
4. State who or what is NOT represented in your data but is impacted by the results.
5. These results are only valid FOR <list groups/conditions>.
Please Note
I am intentionally not explaining the questions: I want to see how folks understand them, and if that understanding doesn’t match my intended reflection, I will revisit the wording/question.
I am specifically not targeting big teams/corporations; they need something more intense/comprehensive. The “Model Cards for Model Reporting” and “Datasheets for Datasets” papers provide such comprehensive tools. If anyone here works at a company using something like this, please let me know, I’d like to chat more!
I am also not intending to reduce ethics to a checklist or process, though that is a risk. We have to start somewhere, and I think a simple reflection as an individual habit is a good starting point. It’s not enough, but I think it’s valuable; if you disagree, I want to hear more!
There are also plenty of ideas for doing more, or doing it differently: for example, we could reduce it to 3 reflections if that increases adoption without loss of meaning, use an entirely different method to trigger it, or even evolve it to be enforceable, etc. But this is just a POC, so I’m keeping it simple for now.
Proof of Concept
Jupyter Notebook: https://github.com/nalinicommits/reflectionwidget/blob/nbonfastai/nbs/dl1/InitiateReflectionWidget.ipynb
If that doesn’t render, try the gist: https://gist.github.com/nalinicommits/79770dc78e13aa52c22d23c71dd00cfc
If neither renders, try loading the notebook on nbviewer.
Thank you all for your time!