# How to train a model by accounting for boundary constraints?

I have a car shuttling back and forth between a loading point and an unloading point; both points are stationary. I don’t have coordinates for any of the objects shown in the plots, but I can use the green triangle as a reference, and based on the wheel-speed difference I track the path of the shuttle car. The grid blocks measure 27 m x 15 m, with 7 m lanes separating them. I compute the distance traveled by the car from its speed and the elapsed time, and I use the wheel-speed difference to classify the motion as left, right, or straight. However, the computed distances are imperfect, as illustrated, so when mapped onto the grid layout the path jumps around quite a bit.
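For context, the dead-reckoning step I describe can be sketched as a standard differential-drive odometry update. This is only an illustrative sketch: the `wheel_base` value, the speed threshold, and the helper names are my own assumptions, not values from my setup.

```python
import math

def update_pose(x, y, theta, v_left, v_right, dt, wheel_base=1.0):
    """One dead-reckoning step from the two wheel speeds (m/s) over dt seconds.
    wheel_base (distance between wheels, in meters) is a placeholder value."""
    v = (v_left + v_right) / 2.0             # forward speed
    omega = (v_right - v_left) / wheel_base  # turn rate from the wheel-speed difference
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

def classify_motion(v_left, v_right, eps=0.05):
    """Label the motion from the wheel-speed difference; eps is an assumed threshold."""
    diff = v_right - v_left
    if abs(diff) < eps:
        return "straight"
    return "left" if diff > 0 else "right"
```

Noise in `v_left`/`v_right` accumulates through this integration, which is exactly why the mapped path drifts into the blocks.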

As you can see, the computed distances are a little fuzzy, which causes the path to stray into the blocks. I was wondering whether I could treat this as a reinforcement learning problem and impose constraints that keep the robot strictly on the lanes between the blocks. The red box and the green triangle are fixed in space, so the total distance covered in each direction is roughly the same (although the distances I computed are a little off from matching exactly). Please refer to the link below for more details.

Stack Exchange post

Hi,

Not sure if this feedback will help, but I think you can treat this problem much like an agent trying to leave a maze. Divide your space into a grid and mark the “buildings” as constraints; every other move should yield a reward based on the distance from your goal (the unloading place). A correct move should bring you closer to maximizing your reward, while a bad move should yield less reward. I am not sure whether this approach will be useful, since you have problems displaying your grid; however, if you create small cells that cover the whole space, it shouldn’t be a problem.
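The idea above can be sketched as a small tabular Q-learning loop. Everything concrete here is an assumption for illustration only: the 6x6 grid, the block positions, the goal cell, and the reward shaping (a bonus for reducing Manhattan distance to the goal, a penalty for stepping into a block or off the grid) are placeholders, not your actual layout.

```python
import random

ROWS, COLS = 6, 6
BLOCKS = {(1, 1), (1, 2), (3, 3), (3, 4)}     # "buildings" the car must avoid
GOAL = (5, 5)                                  # unloading point (assumed)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

def step(state, action):
    r, c = state[0] + action[0], state[1] + action[1]
    if not (0 <= r < ROWS and 0 <= c < COLS) or (r, c) in BLOCKS:
        return state, -1.0, False              # penalize hitting a wall or block
    if (r, c) == GOAL:
        return (r, c), 10.0, True              # large reward at the goal
    # small shaping reward for moving closer to the goal, minus a step cost
    old = abs(state[0] - GOAL[0]) + abs(state[1] - GOAL[1])
    new = abs(r - GOAL[0]) + abs(c - GOAL[1])
    return (r, c), 0.1 * (old - new) - 0.05, False

def train(episodes=2000, alpha=0.5, gamma=0.9, epsilon=0.1):
    q = {}  # maps (state, action index) -> value
    for _ in range(episodes):
        state, done = (0, 0), False
        for _ in range(200):
            if done:
                break
            if random.random() < epsilon:      # epsilon-greedy exploration
                a = random.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: q.get((state, i), 0.0))
            nxt, reward, done = step(state, ACTIONS[a])
            best_next = max(q.get((nxt, i), 0.0) for i in range(len(ACTIONS)))
            q[(state, a)] = q.get((state, a), 0.0) + alpha * (
                reward + gamma * best_next - q.get((state, a), 0.0))
            state = nxt
    return q
```

After training, following the greedy action in each cell traces a path that stays out of the blocked cells, which is the constraint-keeping behavior asked about.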

I am still learning the basics of RL myself, so I’m not sure whether this link will help, but I hope it does:
https://www.samyzaf.com/ML/rl/qmaze.html

I would recommend checking this course for the RL basics: https://www.edx.org/course/reinforcement-learning-explained-2

You can find much more information about RL here: https://github.com/aikorea/awesome-rl

Thank you, Valentas. Although I was still looking for an idea to pursue, the maze problem might hold something applicable to mine. I shall look into it. Much appreciated for your time.