I downloaded 10K images from Google Images using a variety of search terms that tend to return images of normally aligned text:
["newspaper articles", "magazine articles", "newspapers", "pdf text", "text pages", "text chapters",
"text descriptions", "typed documents", "word documents", "scientific papers", "two column papers",
"text wikipedia articles", "text heavy websites"]
I then rotated each image by a random angle and used the rotated image as X and the angle as Y.
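That data-generation step might look something like this (a minimal sketch using Pillow; the function name and the ±180° range are my own assumptions, not taken from the actual pipeline):

```python
import random
from PIL import Image

def make_rotated_sample(img, max_angle=180.0):
    """Turn one aligned page image into a training pair (X, Y).

    X is the rotated image, Y is the rotation angle in degrees.
    """
    angle = random.uniform(-max_angle, max_angle)
    # expand=True grows the canvas so no text is clipped after rotation;
    # fillcolor="white" matches a typical page background.
    rotated = img.rotate(angle, expand=True, fillcolor="white")
    return rotated, angle
```

In practice you would loop this over the 10K downloaded images (possibly several random angles per image) to build the training set.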
Yes, I have this working using image processing, but I wanted to see if I could do it with deep learning for fun! Also, an algorithm-based approach may miss things: for example, if there are handwritten notes or a corner is blacked out, the simple algorithm fails. I figure deep learning should handle those edge cases better.
Thanks for the link. I'll read it in detail and see if it can be applied.