Hello everyone, I am new to programming. I have learnt the concepts of machine learning and deep learning, and I am trying to figure out how to implement an idea that could help blind people. The idea is to create spatial perception using different intensities of sound. For example, say the device is like a pair of audio sunglasses, but with cameras in place of the lenses.
We process the camera feed and perform object detection. In each frame we can get the centroid of each detected object; say the chair is on the left of the frame. Now, if we generate speech saying "chair" but deliver it with different intensities in each ear (louder in the left ear and softer in the right, with the gains calculated from the centroid's position), couldn't we convey the perception that the chair is on the left? This is just one example to explain the idea; the core concept stretches further, with the final goal of painting the world with different sounds, not confined to speech.

I am trying to build an open-source platform for the development of this device, but I have just started and have lots of questions, for example about creating and managing open-source discussions, repositories, etc. All of this is new to me, so I am looking for guidance and support.
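To make the mapping from centroid to ear intensities concrete, here is a rough sketch of what I have in mind (Python; it assumes numpy and the sounddevice library for playback, and a short beep stands in for a real speech clip saying "chair"):

```python
import numpy as np
import sounddevice as sd  # assumed playback library; any stereo-capable one works


def pan_gains(centroid_x: float, frame_width: float) -> tuple[float, float]:
    """Map an object's horizontal position in the frame to (left, right) gains.

    Uses a constant-power pan law so the overall loudness stays roughly
    constant as the object moves across the frame.
    """
    p = np.clip(centroid_x / frame_width, 0.0, 1.0)  # 0 = far left, 1 = far right
    return float(np.cos(p * np.pi / 2)), float(np.sin(p * np.pi / 2))


def play_panned(mono: np.ndarray, sample_rate: int,
                centroid_x: float, frame_width: float) -> None:
    """Play a mono clip as stereo, louder in the ear nearer the object."""
    left, right = pan_gains(centroid_x, frame_width)
    stereo = np.column_stack((mono * left, mono * right))
    sd.play(stereo, sample_rate)
    sd.wait()


if __name__ == "__main__":
    # Stand-in for a synthesized speech clip: a 0.3 s, 440 Hz beep.
    sr = 44100
    t = np.linspace(0, 0.3, int(sr * 0.3), endpoint=False)
    beep = 0.3 * np.sin(2 * np.pi * 440 * t)
    # Chair detected at x = 100 in a 640-pixel-wide frame -> mostly left ear.
    play_panned(beep, sr, centroid_x=100, frame_width=640)
```

In the real device the mono clip would come from a text-to-speech engine and the centroid from the object detector, but the intensity calculation would be the same.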