Machine Learning Plant Health Classification

3 devlogs
11h
•  Ship certified
Created by akb

The ML Plant Health Classifier is a simple web application that uses a Teachable Machine model to check the health condition of plants. It can classify whether a plant is healthy, affected by rust, or showing mildew by using live webcam input or sample images. The project is built as a single HTML file so it is easy to share and run without extra setup.

It supports live webcam predictions and lets users test the model by clicking on sample images. There is basic error handling to show messages if the model fails to load or the webcam is blocked. The interface is minimal and responsive, focusing on quick testing of the model.

The application is built with TensorFlow.js, Teachable Machine, HTML, CSS, and JavaScript. It requires only a modern browser with a working webcam and an internet connection.

To use the app, start the webcam to load the model and see live predictions, or use the gallery of sample images to check how the model responds. Future updates may include a better-trained model, an option to upload images, real-time monitoring, and design improvements.
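The webcam flow above follows the usual Teachable Machine image-model pattern: load the model, start the webcam, then predict on each frame. Here is a minimal sketch of that flow; the model URL, canvas size, and the `status` element id are assumptions, not the project's actual values.

```javascript
// Minimal sketch of the load-and-predict flow (browser-side).
// MODEL_URL is a placeholder for wherever the Teachable Machine export lives.
const MODEL_URL = "./my_model/";

let model, webcam;

async function startWebcam() {
  // tmImage is provided by the Teachable Machine image library <script> tag
  model = await tmImage.load(MODEL_URL + "model.json", MODEL_URL + "metadata.json");
  webcam = new tmImage.Webcam(224, 224, true); // width, height, mirrored
  await webcam.setup(); // prompts the user for camera access
  await webcam.play();
  window.requestAnimationFrame(loop);
}

async function loop() {
  webcam.update(); // grab the latest frame
  const predictions = await model.predict(webcam.canvas);
  document.getElementById("status").textContent = topLabel(predictions);
  window.requestAnimationFrame(loop);
}

// Pure helper: format the highest-probability class from a prediction array.
function topLabel(predictions) {
  const top = predictions.reduce((a, b) => (b.probability > a.probability ? b : a));
  return `${top.className}: ${(top.probability * 100).toFixed(1)}%`;
}
```

Clicking a sample image can reuse the same `model.predict(...)` call on the image element instead of the webcam canvas.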

Timeline

Ship 1

1 payout of 54.0 shells

akb

20 days ago

Covers 3 devlogs and 11h
2h 59m 20 days ago

Today I worked on wrapping up the project by integrating everything and making final improvements. The single-file structure from the last update made it easy to test changes quickly. I refined the Teachable Machine model loading process and added better handling for edge cases. Now if the model fails to load or if webcam access is blocked by the browser, the app shows a simple error message instead of silently failing.
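The error handling described above could look something like this: map common failure modes to user-facing messages instead of failing silently. The error names follow the standard `getUserMedia` DOMException names; the message text and `status` element id are illustrative, not the project's actual code.

```javascript
// Sketch: turn a caught error into a short user-facing message.
function friendlyError(err) {
  switch (err && err.name) {
    case "NotAllowedError": // user (or browser policy) blocked the camera
      return "Webcam access was blocked. Please allow camera permissions.";
    case "NotFoundError": // no camera attached to the device
      return "No webcam was found on this device.";
    default: // e.g. the model files failed to fetch
      return "Something went wrong: " + (err && err.message ? err.message : "unknown error");
  }
}

// Usage sketch inside the Start Webcam button handler:
// try {
//   await startWebcam();
// } catch (err) {
//   document.getElementById("status").textContent = friendlyError(err);
// }
```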

I also cleaned up some event listener issues that were causing duplicate predictions when switching between webcam and sample images. This was because I didn’t remove old listeners properly, so I added checks to ensure only one prediction loop runs at a time.
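The "only one prediction loop at a time" check mentioned above can be sketched with a shared flag, assuming the loop re-checks the flag each iteration (function names here are hypothetical):

```javascript
// Sketch: guard against a second prediction loop starting when the
// user switches between the webcam and the sample images.
let loopRunning = false;

function tryStartLoop(step) {
  if (loopRunning) return false; // a loop is already active, don't start another
  loopRunning = true;
  step(); // kick off the first iteration
  return true;
}

function stopLoop() {
  loopRunning = false; // the running loop checks this flag and exits
}
```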

The UI has been slightly styled for clarity, with better spacing around buttons and a clearer status box. I also tweaked the prediction display so it updates more smoothly and doesn’t flicker when switching sources. The sample images have been tested thoroughly and now respond instantly, making it easy to test the model without the webcam.
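One common way to get the flicker-free updates described above is to skip the DOM write when the displayed text hasn't changed. This is a sketch of that idea, not the project's exact code; the write callback stands in for setting an element's `textContent`.

```javascript
// Sketch: only write to the DOM when the prediction text actually
// changes, so rapid identical predictions don't cause repaints.
let lastShown = "";

function updateDisplay(text, setText) {
  if (text === lastShown) return false; // nothing changed, skip the DOM write
  lastShown = text;
  setText(text); // e.g. s => { document.getElementById("label").textContent = s; }
  return true;
}
```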

On the model side, I tried a newer version trained with some extra plant health images, which improved the accuracy a bit.
Overall, I feel the app is much more polished now. It's a single self-contained file that can be shared and run without setup, which was my main goal from the start. I might add a few cosmetic improvements later, but functionally it's complete and stable.

Today I worked on merging the whole codebase into a single HTML file instead of having multiple files. I just wanted to make it easier to handle and test quickly, so now both the UI and the backend logic live in index.html. This feels cleaner for a small project like this.
First I took the code from part 1, where I had built the interface with buttons, the webcam container, and sample images. Then I added the backend code from part 2 directly inside a script tag. Now when I click Start Webcam, it loads the Teachable Machine model, sets up the webcam, and starts showing predictions. The sample images can also be clicked to test predictions without turning on the webcam.
I had some small confusion with attaching event listeners because I forgot to wait for the DOMContentLoaded event, so the buttons weren't working at first. I fixed it by wrapping the init code. I also typed webcam.stop instead of webcam.stop() once and was stuck for five minutes wondering why the stop button wouldn't work, haha.
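The DOMContentLoaded fix can be sketched as a small wrapper that also handles the case where the script runs after the event has already fired. The `doc` parameter is my addition to make it testable outside a browser; the real page would just pass `document`.

```javascript
// Sketch: run init once the DOM is ready, whether the script runs
// before or after the DOMContentLoaded event fires.
function onReady(doc, init) {
  if (doc.readyState === "loading") {
    doc.addEventListener("DOMContentLoaded", init, { once: true });
  } else {
    init(); // DOM is already parsed; run immediately
  }
}

// In the page: onReady(document, () => { /* attach button listeners */ });
```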
For now the app is working, but I feel there are some improvements to make, like error messages when the model fails to load or when the webcam is blocked by the browser. Overall the project is coming together nicely. The single-file version is easier to share and run without worrying about extra JS files. I still need to do a bit of styling, but functionally it's all good.

Today I worked on creating the basic interface for the Plant Health Classifier project. This step was only about building the layout and structure of the page, not adding any machine learning or backend logic.

What I did:
I set up the main HTML structure with a heading, a button container with Start Webcam and Stop Webcam buttons, and sections for the webcam view and the classification labels. I also added a status box that shows "Idle" for now. Below that, I made a gallery of sample images so that users can click on them later to test the classifier.
I connected a simple style.css file to keep the layout clean and easy to read. The focus was to make sure everything is well placed for the next steps.

Challenges:
There was nothing very challenging in this part since it was just frontend work. I just tried to keep the structure simple and ready for when I add the backend later.

Next steps:
I will now move on to adding the backend. This means loading the Teachable Machine model using TensorFlow.js, wiring up the Start Webcam button, and making the sample images clickable so they can be used for predictions.

This first step is done and the interface looks minimal but is ready for the logic to be added.
