Grassland Weed Detector

Hi, I’m looking to train a weed detector to detect specific weeds I would like to spray in my grassland fields. I’m a hay farmer, and the alternatives are to hand-spray the weeds over hundreds of acres or to spray large areas with a tractor-mounted sprayer. Both options are time consuming, and the tractor uses significantly more chemical. I have a video of what I have managed so far using TensorFlow.

I retrained this TensorFlow model starting from ssd_mobilenet_v1_coco_11_06_2017. My training data was pictures I have taken myself of the actual weeds in the actual field I am planning to test the machine on.

I used a tutorial by sentdex on YouTube, and I was surprised I got it working at all (CPU training on an iMac).

My problem is that the model does not train well; I am struggling to get the total loss below 2. I plan to watch the fast.ai tutorials and try to improve my detection.

Things I am thinking about:

  1. Choosing a different model to retrain that might transfer better to grass-field weed detection.
  2. Training for much longer; I am at about 5000k steps.
  3. My bounding boxes take in too much of the grass in the background and not enough of just the weed. Could I use segmentation to capture only the weeds?

If anyone wants to join the project, I am calling it opensprayer.com, and the end goal is to build an autonomous grassland sprayer for a total materials build cost of £2000. Hopefully we will soon see small boards, like the Arduino, that can run neural nets cheaply and fast. Other than the cost of the compute, I think this may be possible. I would like to see some practical AI projects that don’t cost the world but still do useful work and act on the physical world.

Ideally I would not be training the model myself, but instead building the hardware and collecting the data. In the meantime I am keen to learn how to train the best model I can.

Thanks Gavin

18 Likes

Great idea @Gavztheouch! I would love to help, as much as my beginner-level skills allow…

Do you already have a github for the code?

You could find something useful in the Kernels or Discussion tabs of this Kaggle competition:

@jamesrequa won this competition and he shared his code here:
https://www.kaggle.com/jamesrequa/keras-k-fold-inception-v3-1st-place-lb-0-99770

IMO, higher resolution was very important to get optimal performance in this competition and probably for your problem.

2 Likes

@aragalie it would be great to have your skills on board. I have made a GitHub at www.GitHub.com/opensprayer. I have no files uploaded so far; I will upload the image dataset if that is possible (200 MB)? This is the first time I have used GitHub.

@alexandrecc this looks great; I was trying to find something similar to this. Taking high-quality images for the high-res input will be one of the main challenges as the vehicle moves over bumpy ground.

1 Like

@Gavztheouch no worries, same here :slight_smile: Google it and I’m sure you’ll figure it out quickly. You could also use Dropbox and put the link to it in the README file…

On the topic of high-res photos: I assume you’ve already considered buying a drone with a high-end camera and using that to survey the plot in high-res? Is it a cost issue or are there other challenges you see in using this method to get the photos?

@aragalie That’s a method I have seen being used to great effect. In fact, even a basic DJI Phantom 4 can produce great images from height; there is a YouTube video of someone doing that for grain-crop monitoring. I don’t see why it could not be used, as you said, to gather data. The only problems I can see would be battery life (maybe not an issue in the near future) and matching the coordinates of the weed locations to the GPS coordinates of the land drone.

By mounting the compute and cameras on the land drone you simplify the location problem, but you got me thinking: in theory, if you had a pre-planned spray map made from drone images, you could really ramp up the speed of the sprayer drone without worrying about bumps in the field.

Maybe you could tether an air drone to a land drone with a power cable to combine the best of both worlds. :slight_smile:

I have made a Kaggle dataset. If anyone wants to try to make a detector but finds my pictures are no good, please let me know what would work better and I can try to get some more.

Thanks

1 Like

I have changed tactics a little with this project: instead of trying to detect the Broad-leaved Docks with an object detector, I am planning on breaking the images down into 8 by 6 chunks and then running a simpler image classifier on each 256 by 256 pixel square. The system will only know which of the 48 squares has Broad-leaved Docks present, but this should be more than accurate enough.
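
The chopping itself should be simple enough. Here is a sketch of what I mean with PIL, assuming the source photos are 2048 by 1536 pixels so the 8 by 6 grid divides evenly (file names are placeholders):

```python
from pathlib import Path
from PIL import Image

TILE = 256          # square edge in pixels
COLS, ROWS = 8, 6   # so the source photo is assumed to be 2048 x 1536

def chop_into_tiles(image_path, out_dir):
    """Crop one sprayer photo into COLS x ROWS tiles of TILE x TILE pixels."""
    img = Image.open(image_path)
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for row in range(ROWS):
        for col in range(COLS):
            left, top = col * TILE, row * TILE
            tile = img.crop((left, top, left + TILE, top + TILE))
            tile.save(out / f"{Path(image_path).stem}_r{row}_c{col}.png")

chop_into_tiles("field_photo.jpg", "tiles")
```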

I now have 17,000 images that I can start hand-labelling.

How should I label my images? Is it OK to have only two folders: one for pictures of Broad-leaved Docks, whether on their own or mixed with grass, and another for images of grass that may include other types of weed but no docks? So basically, dock or no dock in the picture.

One issue I see: do I include in the dock folder a square that only just includes the edge of a dock leaf? Would a borderline case like this confuse the network, as there is very little dock and lots of grass?

2 Likes

Gavin - this is an awesome project, super inspiring, and I totally resonate with “I would like to see some practical AI projects that don’t cost the world but still do useful work and act on the physical world” - reminds me of the TensorFlow-for-cucumber-farming project.

I think for a binary classifier having them in two folders would work out just fine.

I’d be down to help you out a little on this if that would be useful :slight_smile:

Hi Jon, yes I was inspired by the TensorFlow cucumber project and the Lego sorting project, both actually doing something quite unimaginable before deep learning. There are also a lot of highly funded startups working on a similar problem, but I think there is space for an open-source version; there is not much reason to think that these machines should be expensive. And I like the idea of being able to build, repair and upgrade the machine yourself instead of being tied into a large company’s upgrade path. I think the open-source/DIY 3D printer community has shown that open design can work.

I would welcome any help at all :grin: be it advice or actually building models/Arduino code to run the robot and interface with the laptop running the neural network.

I have bought some parts to make a simple drive system with a wheelchair motor and a motorbike wheel. I plan to build a solid platform onto which I can upload and test anyone’s models for dock recognition, and also receive and test other people’s Arduino code to move and spray the weeds.

I will update my kaggle account with the new labeled data.

If we need a much larger amount of labelled data, I could collate other people’s photos, taken in their particular locations with their own cameras etc., to maybe help the model be more general.

Updated my Kaggle dataset with my new smaller image chunks. Only the first 2,000 or so images; it is about 3/4 not docks and 1/4 docks, so the dataset is not balanced.

This should hopefully make it easier to try to build a model.

It is the open sprayer 1.zip

Thanks
Gavin

4 Likes

Just managed to train a dock detector using Colab and the first fastai notebook for cats vs. dogs. The notebooks are fantastic; I love the feature that takes samples from the dataset and also shows the most/least confident predictions.

Managed 87% accuracy using my latest Kaggle dataset. Room to improve.
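
For anyone who wants to reproduce it, the run was roughly this; a sketch assuming fastai v1’s API, and the docks/not_docks folder names are placeholders:

```python
from fastai.vision import *

# Folder layout assumed: data/docks/{docks,not_docks}/ holding the 256x256 tiles
path = Path('data/docks')
data = ImageDataBunch.from_folder(path, train='.', valid_pct=0.2,
                                  ds_tfms=get_transforms(), size=256, bs=32)
learn = cnn_learner(data, models.resnet34, metrics=accuracy)
learn.fit_one_cycle(4)

# The features I mentioned: sample images plus the most/least confident tiles
data.show_batch(rows=3)
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_top_losses(9)
```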

Hey Gavin,

So I am the maintainer of this project: https://github.com/matthew-sochor/transfer which aims to make transfer learning for image classification super simple and easily sharable, with high-quality results. I downloaded your Kaggle dataset for the docknet (the zoomed-in images, I believe). With some simple data augmentation and the ResNet-50 architecture, I was able to get 91.4% accuracy on your validation set.

If you want to try my model, it’s available here: http://www.mattso.ch/static/models/docknet-v1_best_weights_kfold_0.tar.gz

You can install transfer with: pip install transfer. It does require TensorFlow but will not install it by default.

If you download that model, you can import it with: transfer --import <path_to_model>

Then predict with: transfer --predict, or start up a REST API to predict with: transfer --prediction-rest-api

Transfer is built to handle multiple projects, so you can also re-train with other settings (hyperparameter tuning, adding k-fold, a different architecture, different augmentations). I mostly took the defaults for ResNet-50 with some seemingly logical augmentations. I’m sure you could improve upon my accuracy without too much difficulty.
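
To give a feel for what “seemingly logical augmentations” means for downward-facing field photos (weeds have no natural orientation, so flips and rotations are label-preserving), here is the equivalent idea in plain Keras; this is illustrative, not transfer’s actual config:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Flips and rotations are safe for top-down plant photos; mild zoom adds variety
datagen = ImageDataGenerator(
    rotation_range=180,
    horizontal_flip=True,
    vertical_flip=True,
    zoom_range=0.1,
    rescale=1.0 / 255,
)
train_gen = datagen.flow_from_directory(
    "data/train", target_size=(256, 256),
    class_mode="binary", batch_size=32)
```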

Also, this was all trained on my 5-year-old Mac laptop in a few hours. The primary requirement is disk space, as it speeds up computation by storing lots of intermediate files, so make sure you have a few tens of GB free before running. It all depends on how much augmentation you add, as that can drastically increase your dataset.

Let me know if this was helpful and good luck! This is a really cool project!

4 Likes

Hi Matthew, great project; I have been waiting for someone to automate the transfer-learning process for image recognition. Hopefully this will open up deep learning to a lot more people with practical applications they want to try.

Regarding my dataset, I feel a bit guilty, as my validation set is so small that I think your accuracy may have suffered because of it. I will try to add more images this weekend. I actually think ~90% is more than accurate enough to get started with building out the rest of the system. I know it can be improved with time, but this is a nice use case where mistakes are not critical and most errors will be edge cases. The main patches of docks are almost always classified correctly, which is great.

Another program/project that would help spread deep learning might be an app with a UI like Tinder, where you can swipe to classify images into relevant folders and then upload them to the cloud. Each image could be classified by multiple users to help with errors, and it would take the burden off the dataset’s creator.

Also, would it be possible to use your trained model to help classify more photos? Say, for example, it could automatically move images with high confidence to the correct folder, either dock or grass, and anything it is not so certain about (i.e. between 0.2 and 0.8) it could move to a folder for human classification.
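
Something like this sketch is what I have in mind; predict() here is a stand-in for whatever call actually returns the dock probability:

```python
import shutil
from pathlib import Path

LOW, HIGH = 0.2, 0.8  # anything in between still needs a human look

def triage(unlabelled_dir, predict):
    """Sort tiles into class folders by confidence; queue the rest for review."""
    for folder in ("docks", "grass", "needs_review"):
        Path(folder).mkdir(exist_ok=True)
    for img in Path(unlabelled_dir).glob("*.png"):
        p = predict(img)  # assumed to return P(dock) as a float in [0, 1]
        if p >= HIGH:
            shutil.move(str(img), "docks")
        elif p <= LOW:
            shutil.move(str(img), "grass")
        else:
            shutil.move(str(img), "needs_review")
```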

Thanks for building the model. If I install your package and TensorFlow, will I be able to make classifications inside Python? At the moment I am working on building a nested loop in Python that crops the image from the sprayer into an 8 by 6 grid. It will then classify each square and update an array with a value of dock or not dock. The array will then be used to control the spray heads. An encoder on the motor will give a position to Python, so the first 8 squares will relate to encoder positions 0-100, the next 8 to 100-200, and so on as the machine moves over the photo in the real world. So for positions 0-100 the spray heads will be set in relation to the first 8 array values.
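
Roughly the mapping I mean, as a sketch; the 100 counts per row and the grid shape match the description above, but everything else is a placeholder:

```python
COUNTS_PER_ROW = 100   # encoder counts the buggy travels per grid row (assumed)
ROWS, HEADS = 6, 8     # grid is 6 rows deep (direction of travel) by 8 heads wide

def heads_for_position(encoder_count, dock_grid):
    """Return on/off states for the 8 spray heads at the current encoder count.

    dock_grid[row][col] is True where the classifier found a dock.
    """
    row = encoder_count // COUNTS_PER_ROW
    if row >= ROWS:
        return [False] * HEADS  # past this photo; wait for the next one
    return list(dock_grid[row])

# e.g. encoder count 150 falls in the 100-200 window, so row 1 of the grid:
# states = heads_for_position(150, dock_grid)
```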

I have purchased a wheelchair motor, a rear motorbike wheel, 8 spray heads and 12 V solenoid valves, ready to start building as soon as I can find a day or two to put it all together. The first machine will be simple and just move forward in a straight line, with no GPS or steering.

1 Like

Thanks! That was my motivation and I’m happy you are finding it useful.

The package is designed to be used from the command line. If you want predictions from within a Python program, the easiest route would be the REST API mode: start the prediction server, then use a package like requests to send it images to predict.
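
Something like the following; note that the port, route and payload shape here are assumptions, so check the README and what the server reports on startup for the actual ones:

```python
import requests

# Assumed endpoint — use whatever `transfer --prediction-rest-api` reports
URL = "http://localhost:5000/predict"

with open("tile_r0_c3.png", "rb") as f:
    resp = requests.post(URL, files={"image": f})

print(resp.json())  # e.g. class probabilities for docks vs grass
```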

I’m happy to take suggestions on other modes that would be useful, and let me know if you run into any issues making predictions.

Hi @Gavztheouch, is this project still open and running? I am keen to contribute via GitHub, possibly basing a learning algorithm on fastai.

Regarding the validation set, I wouldn’t currently think it too much of a problem with transfer learning and data augmentation.

Just a thought: if onboard computing (via RPis or some such) is not powerful enough to process the images in real time, you could just assign an accurate GPS coordinate to each set of images, process them overnight, labelling each image as spray or not, and then send out the buggy on a simple spraying mission. This would of course double the number of sweeps of the area you have to do…
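
As a tiny sketch of the hand-off between the overnight run and the buggy (the CSV format is just an assumption):

```python
import csv

def write_spray_mission(records, path="mission.csv"):
    """records: (lat, lon, has_dock) tuples from the overnight classification run.

    Writes one row per location that needs spraying the next day.
    """
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["lat", "lon"])
        for lat, lon, has_dock in records:
            if has_dock:
                writer.writerow([lat, lon])
```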

Hey, yeah, I’m still pursuing a build of the first buggy. Today the last of the parts arrived to build a sprayer boom with 8 solenoid-controlled spray heads.

If I can build a physical buggy to use as a test platform for the software, that might be the best use of my time.

The first mission is to build a buggy with no steering or GPS, with an encoder on the drive wheel to measure distance. It should be able to take a photo, chop the photo into, say, an 8 by 8 grid, and classify each block in the grid. It can then move the buggy over the grid, using the encoder for position on the X axis and the individual spray heads for the Y axis, and spray the blocks in the grid with docks in them. The nice thing is that a whole field is normally sprayed when needed, so spraying an area with no docks is OK. In the future it may use no spray and just mechanical removal?

What would you need to build the best classifier? More pics, I guess. I have lots more but still need to sort them out.

Hi @Gavztheouch, good to hear.

As for the first prototype of the buggy, given the drive-wheel distance measurement, you can just record the start location (either via a marker, or GPS from some hand-held device) and then ‘dead reckon’ the position of every photo. You could possibly include that in the XML file. From there, the second spray sweep knows where to spray.
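
A rough sketch of that dead-reckoning step; the wheel circumference and encoder resolution are made-up values, and it uses a flat-earth approximation that is fine over a field:

```python
import math

WHEEL_CIRCUMFERENCE_M = 1.9  # assumed rear motorbike wheel
COUNTS_PER_REV = 360         # assumed encoder resolution

def photo_position(start_lat, start_lon, heading_deg, encoder_count):
    """Dead-reckon the lat/lon of a photo from the wheel encoder."""
    dist = encoder_count / COUNTS_PER_REV * WHEEL_CIRCUMFERENCE_M
    east = dist * math.sin(math.radians(heading_deg))   # metres
    north = dist * math.cos(math.radians(heading_deg))  # metres
    lat = start_lat + north / 111_320                   # ~metres per degree latitude
    lon = start_lon + east / (111_320 * math.cos(math.radians(start_lat)))
    return lat, lon
```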

Also, if the latest RPi can’t run the model in real time, you can get ‘HATs’ for the Pis and have an RPi cluster on board. That might do the trick.

At the moment, what you have given is fine, I think, to build a reasonable classifier. I might also attempt to estimate boxes for the weeds, as Jeremy did for the Kaggle fish competition. This would possibly aid spraying (knowing the centre of the box…).

Thoughts?

@Gavztheouch maybe a bit of a crazy idea, but have you thought about rigging a drone to spray the chemicals?
This way the whole task could be automated:

  1. Drone is scheduled or learns a schedule from manual activation
  2. Drone evaluates field at a higher level (so as to cover more distance effectively) and then identifies areas for spraying on a map.
  3. Drone returns to each identified location and sprays the weed killer.
  4. Drone returns to docking station to charge in preparation for next run.
  5. Drone uploads coverage data, job metadata, and errors to a server.
  6. Repeat

My initial thought is that it might be a bit of a stretch for a commercial farm (it would require a team of drones) but could certainly be useful for smaller gardens, so I wanted to share.

@buzz_aldi: I think this was the original plan (see further up the thread), but we are not sure if a cheap onboard computer (Arduino/RPi) could run the spraying in real time. Hence the chatter about dead reckoning etc.

Some pics of the printed drive pulleys. I’m quite surprised how strong and accurate 3D-printed parts from hobby machines are. This is my first printer, a Prusa MK3. Four years ago these kinds of machines were a bit of a joke.

1 Like