I figured I’d ask for your opinion here, since this forum is likely to have people who have been in this situation, perhaps recently. Last year I was looking at the amazing achievements by AI and was frustrated that I wasn’t a part of it. So I pitched changing the direction of my PhD, two-thirds of the way through, to work on automation and AI. Last winter I watched the fast.ai course videos and jumped in. A few months later I had a proof of concept for segmentation in an important application at the company, and we are almost done drafting the scientific article! So first of all, amazing work by the fast.ai team for letting people like me take the leap, and by the community for all the help!
Fast forward some months: I have developed a new method in the lab which WOULD incorporate imaging and deep-learning-based segmentation. It’s a huge success, but I have yet to show that the segmentation will work, and they are of course interested in the development time. As they are likely to hire me to finalize this, I would like your opinion on how to handle the situation.
While I am confident that it will work technologically, as always, the robustness and generalizability of the solution will depend on data volume and quality, and I don’t really have the experience to estimate how much data will be needed. So when they ask me how long it will take to develop the AI, how do I approach it?
The complexity of the images and objects is relatively low: dark, roughly circular objects that vary somewhat in size, shape, and texture. They can also meld together, which is why I want to use deep learning rather than e.g. thresholding, as the method needs to be quite accurate in separating them. I can probably create and label 400 images a day, each containing 20–200 objects.
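One idea I’ve been toying with for the data question: train on increasing subsets of whatever I’ve labeled so far, fit a power-law learning curve to the errors, and extrapolate how many images a target accuracy might need. This is just a rough sketch with made-up placeholder numbers (the subset sizes, error values, and the 5% target are all hypothetical, not real measurements):

```python
import numpy as np

# Hypothetical error measurements (e.g. 1 - Dice) from training on
# increasing subsets of labeled images. These numbers are placeholders.
n_images = np.array([50, 100, 200, 400])
error = np.array([0.30, 0.22, 0.16, 0.12])

# Learning curves often roughly follow a power law: error ~ a * n^(-b).
# Fitting a straight line in log-log space recovers a and b.
slope, intercept = np.polyfit(np.log(n_images), np.log(error), 1)
a, b = np.exp(intercept), -slope

def predicted_error(n):
    """Extrapolated error for a training set of n images."""
    return a * n ** (-b)

# How many images would the fit suggest for a 5% error target?
target = 0.05
n_needed = (a / target) ** (1 / b)
print(f"fit: error ~ {a:.2f} * n^(-{b:.2f})")
print(f"images needed for {target:.0%} error: ~{n_needed:.0f}")
```

The extrapolation is only as good as the power-law assumption, but even a crude curve like this would let me turn “I don’t know” into “roughly N more days of labeling at 400 images/day”, which feels like a more defensible answer to give them.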
Either way, I am wondering how seasoned developers and companies approach the unknowns in development time. Can the time be estimated well by experienced developers, or are more seasoned companies simply more willing to take the chance?
Any input or thoughts you have are appreciated.