Grassland Weed Detector

Hi Gavin,

I am Nathan and I am trying to hook up a Jetson Nano (initially a JetBot) to a Pixhawk. I have successfully trained a PyTorch model on AlexNet for thistle detection.

I was wondering what image size you use (I use 224×224).

We want to use a Grove HAT kit for easy control of relays and, later on, some add-ons.
Do you have any experience with this kind of kit?

Also, and more on the farming side, what type of ground vehicle do you recommend for dealing with crops and grassy environments? I have a feeling that airborne solutions are a no-go because we want to carry a large amount of spraying fluid.

Cheers,

Nathan

3 Likes

Cool, thistles are also on my radar. Actually, I'm looking to increase the purity of my grass field so I have only one type of grass, much like a golf course or a fancy lawn. This would mean removing all weeds like docks, thistles, and buttercups, and also different varieties of grass, which could be quite the challenge. I'm aware that such a monoculture would be undesirable for the environment, which runs counter to the goals of this project, but if I could have 30% of my field as just one type of grass, that would give me what I need for my product. I could then devote, say, 10% or more of my remaining fields to rewilding or biodiversity projects. If we can make farming more efficient, which I believe we can with AI, we can afford to return more land back to the wild.

The next four months are going to be my busy time on the farm, so I won't be able to contribute much to the project until then. I may also be building a house, and we had our first baby six months ago.

I would propose the best way to make progress would be to break the project down into subsections. I feel more comfortable with a lathe and welder than I do coding, so I could help develop the hardware?

2 Likes

Nathan, I was using a similar photo size to your 224. It was the same size as the input layer of the neural net that was used for transfer learning.

To control the relays I just used an Arduino connected to the Jetson Nano with a USB cable. The Nano sends a list of numbers to the Arduino to switch the relays on and off after each classification. For example, if you had 6 spray heads it would send 0110017; the 0s and 1s represent the state of each relay and the 7 tells the Arduino that's the end of the message.
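
A minimal sketch of that sender on the Jetson side, assuming pyserial, a port of /dev/ttyUSB0 and 9600 baud (those details are guesses; only the message format comes from the description above):

    import serial  # pyserial

    def send_relay_states(port, states):
        """states is a list like [0, 1, 1, 0, 0, 1] - one entry per spray head."""
        # 0/1 per relay, with a trailing 7 marking the end of the message
        message = "".join(str(s) for s in states) + "7"
        port.write(message.encode("ascii"))

    if __name__ == "__main__":
        with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as port:
            # e.g. after a classification decides heads 2, 3 and 6 should fire
            send_relay_states(port, [0, 1, 1, 0, 0, 1])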

The vehicle can be anything. I've seen people use children's electric quad bikes; they look quite good and are cheap to buy. A ride-on lawnmower? Then the sky's the limit with real tractors and sprayer setups. The commercial teams working on this use air-based drones, small electric land drones, large diesel-hydraulic drones, and then retrofits to conventional tractors or standalone sprayers.

1 Like

Hi Gavin,

First of all, congratulations on your firstborn! I hope he is well.
Second, thank you so much for your answers; you reply completely and quickly, wow!

Thanks for the lawn mower idea. I was fixated on a Russian twin-corkscrew design (tank-like differential drive). And a quad bike is a good, cheap solution.

Also, for the camera I used the Raspberry Pi V2 CSI camera. Would you mind telling me what USB camera you used (the Logitech webcam is usually quite popular)?

I loved your usage of the Nano. However, I think I'll stick with the Grove HAT for now since we will have one sprayer (and one nozzle) for now, maybe two.

Cheers from down down under

Hi Gavin,
Did you follow the fastai cats and dogs example to do this?

Hi Gavin,

how are you getting on with this?

Was chatting to my dad about picking dock weeds before they go to seed (those missed by knapsack spot spraying) and remembered your project.

I noticed your data was used in the Amazon ML blog -> https://aws.amazon.com/blogs/machine-learning/building-a-lawn-monitor-and-weed-detection-solution-with-aws-machine-learning-and-iot-services/

Did you see the following paper? They get decent results from ResNet.
https://uwe-repository.worktribe.com/OutputFile/3043314

They mention they got images from a UKRI grant funded project https://gtr.ukri.org/projects?ref=132339 . Maybe they’d share.

I have been unhealthily busy with work, so no progress with the bale collection. On the other hand, I have spent a lot of time with XGBoost and have more ideas generally on ML in farming. Yes, I posted on that forum. I have slightly run out of steam but hope to get going again around August. I might look at weed spot spraying too, but would probably go for drone reconnaissance and processing offline as a first attempt. I can then give the GPX points to my dad to go out with a knapsack sprayer to test it :smiley: No new hardware required.
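
As a rough illustration of that hand-off, a minimal sketch that writes detections out as GPX waypoints using only the standard library; the coordinates and filename are placeholders:

    import xml.etree.ElementTree as ET

    # Write weed detections as GPX 1.1 waypoints so they can be loaded onto a
    # phone or handheld GPS. The detections below are made-up examples.
    detections = [(57.1234, -3.4567, "dock 1"), (57.1236, -3.4570, "dock 2")]

    gpx = ET.Element("gpx", version="1.1", creator="weed-detector",
                     xmlns="http://www.topografix.com/GPX/1/1")
    for lat, lon, name in detections:
        wpt = ET.SubElement(gpx, "wpt", lat=f"{lat:.6f}", lon=f"{lon:.6f}")
        ET.SubElement(wpt, "name").text = name

    ET.ElementTree(gpx).write("weeds.gpx", encoding="utf-8", xml_declaration=True)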

Oh, thistles: I can contribute some photos! The paper above mentions good transfer-learning results. Thistles are potentially more distinctive than dock leaves under convolution, so it could be worth trying even if you only have a relatively small sample.

Cheers,
Peter

1 Like

Hi Peter, thanks for the heads-up on the AWS tutorial; that's really cool to see.

I have also been unhealthily busy, currently making hay in Scotland. The plan is to get back to it in autumn/winter, but I have been slightly put off by the fact that there are a lot of start-ups trying to do this now, so it does not feel as exciting to work on. Just like you, I have also been thinking about other ideas for using ML in farming. I am also trying to find a way of using ML and robotics to sequester carbon.

This year I used an electric bike and a backpack sprayer to spot spray my docks. It was actually easier than I thought. The only areas where I thought the robot would excel were the particularly bad ones with lots of very small plants. The docks tended to be clumped together but still dispersed over the local spot, so maybe a hybrid approach: a drone to identify the bad areas, then send the land drone in to tackle them.

We grow Timothy hay, which we try to keep as mainly just Timothy grass. I did begin to wonder whether you could go to the extreme of picking out different types of grass, to ultimately remove both broad-leaved weeds and unwanted grasses. This would probably become a project more about how you would apply herbicide that accurately. It would be like an inkjet printer at field scale. A difficult problem to think about, given the 3D nature of growing plants and the bumpy terrain.

Hi Gavin,

The carbon sequestration sounds interesting. I have been wondering about something related to cover crop selection (for zero-till arable), which in a way could be linked to sequestration. However, it's quite hazy! I think variable-rate direct drilling with two varieties may become more common, and a front-mounted video camera could capture useful data for that.

I understand the demotivating effect of the explosion of (apparently) well-funded startups, but hope you can find time to continue :slight_smile: It's surprising that more people haven't jumped on your open project, really.

electric bike and back pack sprayer

Nice idea!

@bgoo022 @Nathanael - a few extra thistle photos (on field margins and a few in grass) if needed: https://www.kaggle.com/ptd006/thistles . No bounding boxes yet (or preprocessing, only mogrify -resize 50% -strip ).

Cheers,
Peter

Saw this cool project on YouTube. Nice little setup. He beat me to it with the dot-matrix-style printer using water jets and forward motion.

Will definitely get back to the project in the winter time. The plan is to focus on the hardware and have some sort of documentation so it's easier to contribute. My favourite open-source projects have been 3D printer designs. Although they are more commercial now, I love the design approach of the Prusa 3D printers.

2 Likes

That’s cool!

More required reading on Deep weeding (albeit focused on Australian weeds) https://www.nature.com/articles/s41598-018-38343-3
Data and code https://github.com/AlexOlsen/DeepWeeds

ResNet again.

1 Like

Hi there Gavin and ptd006.

The electric bike sprayer seems like a very good idea. Lots of good ideas around it. It's even better when things move forward.

Also a big, big thank you for the thistle photos!

It took all my Sunday afternoon, but I finally got a basic program working with an NVIDIA Jetson Nano, a PS3 controller, and an old-school APM 2.8 connected together. I will update you with pictures, the AI integration, and the thistle encounter logging.
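
As a rough sketch of the controller side (not Nathanael's actual program), pygame can read the PS3 stick axes on the Nano; the axis numbers and mapping below are assumptions and may differ per controller/driver:

    import pygame

    # Read the sticks of a connected PS3 controller with pygame.
    pygame.init()
    pygame.joystick.init()
    stick = pygame.joystick.Joystick(0)
    stick.init()

    while True:
        pygame.event.pump()                 # refresh joystick state
        throttle = -stick.get_axis(1)       # left stick up/down; forward is usually negative
        steering = stick.get_axis(2)        # right stick left/right (assumed axis index)
        # these -1.0 .. 1.0 values would then be mapped to RC overrides
        print(f"throttle={throttle:+.2f} steering={steering:+.2f}")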

1 Like

@Nathanael - no probs, just added another bunch. Still mostly on field margins and no preprocessing/chopping/annotating.

NVIDIA Jetson nano, a PS3 controller and an old school APM2.8 connected together. I will update you with pictures,

Can’t wait!

@ptd006

Kaggle is pretty cool for organising and sorting out datasets. I wish they had an area-selection tool or bounding boxes. This is OK for now because I use image classification, so I can just put each class in a different folder.

PS: The APM has its own battery and the USB +5 V is not connected, because at full power on the Nano, and even with the charger, the CSI camera is very unstable. So, different power sources… (a 1S2C LiPo in that case). I went overkill with an M8T GNSS + compass, haha, because I can; gotta get those thistles pinpointed to a microbial level.

Also, a pro tip so you don't lose patience: the controller has 2 modes, PS3 (the default) and Xbox. And I'll let you guess which one is easier (hint: not the default one)…






2 Likes

Hi Gavin,

Contributing a few of the finest dock leaves from down here. I tried to mimic the style in your dataset. Not sure of your approach, but for reference my full flow was (a rough sketch of steps 4 and 5 follows the list):

  1. Photos taken on phone and synced to PC via Google Photos.
  2. Split into the thistles and docks folders (manual). Not strictly necessary, more for organising the files.
  3. Bounding boxes added with labelImg (manual).
  4. All dock bounding boxes (BBs) dumped to separate .jpg files with pascalvoc-to-image.
  5. 5 random 256×256 boxes taken per dock BB.
  6. I should have weeded out BBs smaller than 256×256 px (I used min(x+256, w), min(y+256, h) without much thought).
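
A minimal sketch that combines steps 4 and 5 by reading the labelImg XML directly, assuming Pillow and the standard Pascal VOC layout; folder names and output naming are made up:

    import glob, os, random
    import xml.etree.ElementTree as ET
    from PIL import Image

    SIZE = 256
    os.makedirs("crops", exist_ok=True)

    for xml_path in glob.glob("annotations/*.xml"):
        root = ET.parse(xml_path).getroot()
        img = Image.open(os.path.join("images", root.findtext("filename")))
        w, h = img.size
        for i, obj in enumerate(root.iter("object")):
            if obj.findtext("name") != "dock":
                continue
            box = obj.find("bndbox")
            xmin, ymin = int(box.findtext("xmin")), int(box.findtext("ymin"))
            xmax, ymax = int(box.findtext("xmax")), int(box.findtext("ymax"))
            # 5 random 256x256 crops whose top-left corner falls inside the box,
            # pulled toward the photo edge where possible (cf. the step 6 caveat)
            for j in range(5):
                x = random.randint(xmin, max(xmin, min(xmax, w - SIZE)))
                y = random.randint(ymin, max(ymin, min(ymax, h - SIZE)))
                crop = img.crop((x, y, x + SIZE, y + SIZE))
                crop.save(f"crops/{os.path.basename(xml_path)[:-4]}_{i}_{j}.jpg")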

Let me know any comments. Pretty small in comparison to your full data but might be an interesting supplemental validation set.

Cheers,
Peter

2 Likes

Thanks @Nathanael

Agreed on Kaggle. It would make the labour-intensive part easier to crowd-source/organise as a volunteer mechanical turk.

I also got an M8T, which I'm using with RTKLIB (the rtkexplorer version). So far I'm only using the cheap aerial it came with. Also, the nearest open RTK base station is around 50 km away. Basically, I haven't had great results so far, but I haven't focused much attention on improving it.

Thanks Peter I will add them in next opportunity I get.

I'm trying to remember my workflow; I think it's something like this.

  1. Used a digital camera to take pics at high resolution. Transferred with an SD card, old school. :slight_smile:
  2. Made a Python program to batch split all photos into 256×256 image sections (see the sketch below).
  3. Used a picture-sorting program to move images into the correct folders, i.e. press Z for dock or X for not dock.
  4. Batch renamed photos in each folder so they are numerically ascending.
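
For step 2, a minimal sketch of that kind of batch tiler (not Gavin's actual program), assuming Pillow and made-up folder names:

    import glob, os
    from PIL import Image

    TILE = 256
    os.makedirs("tiles", exist_ok=True)

    for path in glob.glob("photos/*.jpg"):
        img = Image.open(path)
        w, h = img.size
        stem = os.path.splitext(os.path.basename(path))[0]
        # walk the photo in 256-pixel steps, dropping the ragged right/bottom edge
        for y in range(0, h - TILE + 1, TILE):
            for x in range(0, w - TILE + 1, TILE):
                tile = img.crop((x, y, x + TILE, y + TILE))
                tile.save(f"tiles/{stem}_{x}_{y}.jpg")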

Ultimately, if we had a working drone, it could harvest its own pics. The neural net could pre-process the images to try and find docks, and then the user could confirm the image. I ended up with lots more not-dock pics, so filtering out candidates with docks could be handy for rebalancing the dataset.
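
A rough sketch of that pre-filter idea, assuming a saved Keras classifier with a sigmoid "dock" output; the model path, folders and threshold are placeholders:

    import glob, os, shutil
    import numpy as np
    import tensorflow as tf

    # Run a trained classifier over candidate tiles and copy likely docks into
    # a folder for a human to confirm.
    model = tf.keras.models.load_model("dock_classifier.h5")
    os.makedirs("review", exist_ok=True)

    for path in glob.glob("tiles/*.jpg"):
        img = tf.keras.utils.load_img(path, target_size=(256, 256))
        x = tf.keras.utils.img_to_array(img)[np.newaxis] / 255.0
        if model.predict(x, verbose=0)[0, 0] > 0.5:   # assumed sigmoid "dock" score
            shutil.copy(path, "review/")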

I was also thinking about a simple Android app like the dating app Tinder. You could swipe left and right to classify images as dock or not dock, or thistle or not thistle.

1 Like

Hi Nathanael, so can an APM control a land drone? Does it need a steering system like a car, with the addition of a steering sensor, or can it be a simpler setup with a motor on each wheel, like tank steering?

1 Like

Hi Gavin,

so can an APM control a land drone?

If you use PYDK2 (DroneKit-Python 2), one of the examples is a guided mission for a quad-rotor with automatic take-off and landing.

Does it need a steering system like a car with the addition of a steering sensor or can it be a more simple setup with a motor on each wheel like tank steering

All the robots I have made with APM (boat or car-like) have differential steering (tank-like: RCOUT1 is the left motor and RCOUT3 is the right motor). The JetBot from NVIDIA is similar. If you use a remote, or use PYDK2 RC emulation, RCIN1 is throttle/speed and RCIN3 is roll/steering (left-right).
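
A minimal sketch of the RC-emulation side with DroneKit-Python, following the channel mapping described above; the connection string, baud rate and pulse values are assumptions:

    from dronekit import connect

    # Connect to the APM over USB serial (connection string and baud are guesses).
    vehicle = connect("/dev/ttyACM0", baud=115200, wait_ready=True)

    def drive(throttle, steering):
        """throttle/steering in -1.0 .. 1.0, mapped onto 1000-2000 us RC pulses."""
        vehicle.channels.overrides = {
            "1": int(1500 + 500 * throttle),   # RCIN1 = throttle, per the post above
            "3": int(1500 + 500 * steering),   # RCIN3 = steering
        }

    drive(0.3, 0.0)                    # creep forward in a straight line
    drive(0.0, 0.0)                    # stop / neutral
    vehicle.channels.overrides = {}    # release the overrides
    vehicle.close()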

I was also thinking about a simple Android app like the dating app Tinder. You could swipe left and right to classify images as dock or not dock, or thistle or not thistle.

I admit I love the idea of a Thistle Tinder app! I wish I could code for Android or iOS. Might be an opportunity to learn.

I will create another dataset with labels for object detection, instead of image classification, using ptd006's method. Thanks for sharing your way of doing it! Very much appreciated.

For now, I am battling with Kaggle, and when it is done, I will make the notebooks public.

Bounding boxes added with labelImg (manual).

Thanks for the tool!

Also the nearest open RTK base station is around 50km away

For RTK, I created a DGNSS CORS when I was doing my Master's. It uses a Raspberry Pi, an M8T and an XTend 900 MHz radio (UART over the air), and can get a float correction in 16 minutes. I use the German software GNSS Surfer to check it, and it has an English translation.

Otherwise, in NZ we have a s***load of them, but I only got an account because I was studying GNSS. I guess I will keep it forever…

Also, a link to the video I extracted the images from. I do apologise for the vertical format; what was I thinking?

1 Like

Thanks.

I updated the thistles dataset with 256×256 random clips. I haven't fully tagged the images yet, but may share the VOC XML files with all the bounding boxes for YOLO training soon.
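
For anyone wanting to train YOLO from those files, a minimal sketch of the usual VOC-to-YOLO label conversion; the class list and folder names are placeholders:

    import glob
    import xml.etree.ElementTree as ET

    CLASSES = ["thistle", "dock"]   # placeholder class list

    for xml_path in glob.glob("annotations/*.xml"):
        root = ET.parse(xml_path).getroot()
        w = int(root.find("size/width").text)
        h = int(root.find("size/height").text)
        lines = []
        for obj in root.iter("object"):
            cls = CLASSES.index(obj.findtext("name"))
            b = obj.find("bndbox")
            xmin, ymin = float(b.findtext("xmin")), float(b.findtext("ymin"))
            xmax, ymax = float(b.findtext("xmax")), float(b.findtext("ymax"))
            # YOLO wants class, centre x/y and width/height, all normalised to 0-1
            lines.append(f"{cls} {(xmin + xmax) / 2 / w:.6f} {(ymin + ymax) / 2 / h:.6f} "
                         f"{(xmax - xmin) / w:.6f} {(ymax - ymin) / h:.6f}")
        with open(xml_path.replace(".xml", ".txt"), "w") as f:
            f.write("\n".join(lines))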

Looking forward to seeing your approach in the kaggle notebook.

So far I am just experimenting on my laptop with transfer learning from a pretrained ResNet50.

After a single epoch of training on Gavin's dataset, it's getting 89% on my test sample. I'm actually happy with that, considering it's a completely independent sample with varying light conditions etc.

I am doing basic augmentation.

layers.experimental.preprocessing.RandomFlip(), layers.experimental.preprocessing.RandomRotation(0.1), layers.experimental.preprocessing.RandomContrast(0.025)
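
For context, a minimal sketch of that kind of setup in TF/Keras, using the augmentation layers quoted above; the folder layout, classifier head and hyperparameters are illustrative assumptions, not Peter's actual notebook:

    import tensorflow as tf
    from tensorflow.keras import layers

    # Transfer learning from a frozen ImageNet ResNet50 with basic augmentation.
    train_ds = tf.keras.preprocessing.image_dataset_from_directory(
        "data/train", image_size=(256, 256), batch_size=32)

    augment = tf.keras.Sequential([
        layers.experimental.preprocessing.RandomFlip(),
        layers.experimental.preprocessing.RandomRotation(0.1),
        layers.experimental.preprocessing.RandomContrast(0.025),
    ])

    base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                          input_shape=(256, 256, 3), pooling="avg")
    base.trainable = False   # only train the new head at first

    inputs = tf.keras.Input(shape=(256, 256, 3))
    x = augment(inputs)
    x = tf.keras.applications.resnet50.preprocess_input(x)
    x = base(x, training=False)
    outputs = layers.Dense(3, activation="softmax")(x)   # dock / thistle / neither

    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(train_ds, epochs=1)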

Edit:
Getting good accuracy scores (on 3 classes: dock, thistle, neither) just training on my laptop (2 GB GeForce MX250 GPU). However, I have observed that almost all the errors are amongst my photos (i.e. many of my test docks are recognised as thistles):

  • My docks generally look more mature (more have gone to seed, which may be recognised as a thistle feature). Also, most are on field margins (both docks and thistles).
  • To some extent the net appears to be learning the difference (brightness, colour tones, blur) between my photos (my docks aren't in train, due to low count) and Gavin's. To combat this I added RandomContrast(0.025) to the data augmentation. This has improved the situation a little, and it also improved the general validation accuracy.
  • Also added RandomCrop (that's basically how I created my samples, after all :slight_smile: ).
  • Basically, I still need to improve the sample or do more augmentation.

Example augmentations:

Edit: I added 50% of my dock photos and only used 50% of the not-docks (to balance the data better) in training, and the issue isn't noticeable any more. Took another batch of photos this evening too, to add.

Instead of a drone I might get an old kids' electric ride-on tractor (similar to what Gavin mentioned above), with a remote-control kit. It would be easier to attach an old laptop to.

Another possibly interesting project: PlantNet https://identify.plantnet.org/ . It seems the data is not generally available, though, and I only found one dataset released in 2017 here -> https://www.imageclef.org/lifeclef/2017/plant

1 Like

Woke early and got shots with my cheap drone (SG907, 4K version, although the shots may as well be 1K given the noise!) from around shoulder height. The accuracy is embarrassingly low. Identification of "obviously grass" is OK, but telling dock from slightly blurry thistle is poor indeed. A better drone camera would be needed, and probably blur augmentation in training. Training should also include actual drone shots.
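
A rough sketch of offline blur augmentation with Pillow that might help mimic the soft drone footage; folder names and the radius range are guesses, not tuned values:

    import glob, os, random
    from PIL import Image, ImageFilter

    # Save a Gaussian-blurred copy of each training image alongside the originals.
    os.makedirs("train_blurred", exist_ok=True)

    for path in glob.glob("train/**/*.jpg", recursive=True):
        img = Image.open(path)
        radius = random.uniform(1.0, 3.0)              # assumed blur strength range
        blurred = img.filter(ImageFilter.GaussianBlur(radius))
        blurred.save(os.path.join("train_blurred", os.path.basename(path)))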