Awesome! I often find that a lot of the experimentation in the GLAM context stays in offline processing to generate static results (some made available in more traditional interfaces like searchable transcribed text or lists of similar records). I’m very interested in moving some of the control of parameters over to users, so that they can make their own visualisations and find what is of interest to them. To me, this feels like the next step up from ‘Generous Interfaces’. Of course, a major challenge is keeping such interfaces intuitive!
oh cool! can you tell me more about this or share a link?
It feels like the machine learning community has made great strides in processing images in recent years, and I hope these will carry over to the medium of video.