Wanted to share a little experiment I ran as homework after Lesson 1: could today's best-in-class image recognition models be used to predict “winning” and “losing” price chart configurations instead of cats and dogs?
To try to answer the question, I built an entire mini-pipeline in Python (excluding the trading engine part):
- it fetches FX price data (I'm using Oanda.com)
- then it plots the data in specific “viewports” (60 minutes of price shown as a line, at 1-minute increments)
- each plot is saved as a .png file named for its class (buy, sell, or hold) and placed in the matching class folder inside either the “train” or “valid” directory
- it also follows Rachel's excellent advice on setting up train and validation sets properly (only the most recent price data goes into the valid set)
- finally, the data is run through the Lesson 1 image recognition model
All the code can be found on my GitHub, presented as Jupyter Notebooks with explanations for each step and code section.
At this stage accuracy is no better than random (~50%), which is to be expected given the specifics of my dataset (granular FX data with little denoising done). I knew this from the start, but it was a useful exercise nonetheless.
I'd appreciate any feedback, and I hope some of my code can serve as a starting point for anyone else interested in the topic.