July 20, 2025
I have now reconfigured the code so that it can easily run on a Raspberry Pi using a camera I added. There are still some errors, like storing the videos in a folder, but I will tackle those in the next devlog.
Made just the basic landing page of the dashboard tab for the website. Also got the data table sorted out on the backend side of things for this.
I created this script to run a more advanced YOLOv8 setup that can track objects over time, count how many pass through a specific area, and even log all the data to a CSV.
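For the record, here's a rough sketch of what that setup looks like, assuming the ultralytics package; the counting line, the crossing check, and the CSV columns here are simplified placeholders, not the exact ones from my script:

```python
# Sketch: YOLOv8 tracking + zone counting + CSV logging (placeholder logic).
import csv
from ultralytics import YOLO

model = YOLO("yolov8n.pt")           # small pretrained model
counted_ids = set()                  # track IDs already counted
LINE_Y = 300                         # hypothetical counting line (pixels)

with open("detections.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["frame", "track_id", "class", "x", "y"])
    # stream=True yields one result per frame; persist=True keeps IDs stable
    for frame_idx, result in enumerate(model.track(source=0, stream=True, persist=True)):
        if result.boxes.id is None:
            continue
        for box, tid in zip(result.boxes, result.boxes.id.int().tolist()):
            x, y, w, h = box.xywh[0].tolist()
            writer.writerow([frame_idx, tid, model.names[int(box.cls)], x, y])
            if y > LINE_Y and tid not in counted_ids:  # crossed the line
                counted_ids.add(tid)
```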
So far I have just created a basic script to run a YOLOv8 model that detects objects from my webcam. I can watch the results live, save them with timestamps, or do both, like my own DIY object-spotting setup.
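The basic version is only a few lines, assuming the ultralytics package:

```python
# Sketch: basic live webcam detection with a pretrained YOLOv8 model.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# show=True opens a live window; save=True writes annotated results to disk
model.predict(source=0, show=True, save=True)
```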
I finally just finished! I configured Raspberry Pi OS to also show my app on the desktop as well as in the sidebar.
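For anyone curious, a launcher on Raspberry Pi OS is just a .desktop file; here's a sketch where the name, paths, and icon are placeholders for my actual setup:

```ini
[Desktop Entry]
Type=Application
Name=SmartBin AI
Comment=Launch the SmartBin AI app
Exec=/usr/bin/python3 /home/pi/smartbin/app.py
Icon=/home/pi/smartbin/icon.png
Terminal=false
Categories=Utility;
```

Dropping a copy in ~/.local/share/applications adds it to the menu, and one in ~/Desktop puts the icon on the desktop.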
I completed combining the code as well as the UI for the COMPLETE app. This means that the user can simply use it as if it were a web app they were launching! I made this using tkinter and Python, and I'm really happy with it. I also managed to make the colors match my color scheme and everything.
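A stripped-down sketch of the tkinter idea (the colors and widgets here are placeholders, not the real app):

```python
# Sketch: a themed tkinter window, colors standing in for my actual scheme.
import tkinter as tk

BG, FG = "#1e2a38", "#e8f0f2"   # hypothetical color scheme

root = tk.Tk()
root.title("SmartBin AI")
root.configure(bg=BG)

tk.Label(root, text="SmartBin AI", bg=BG, fg=FG,
         font=("Arial", 18, "bold")).pack(pady=10)
tk.Button(root, text="Start Capture", bg=FG, fg=BG).pack(pady=5)

root.mainloop()
```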
Finally completed the script that constantly takes an image every 0.25 seconds and saves it if it satisfies the Region of Interest (ROI) function for plate detection. Now I am going to focus on developing the Graphical User Interface (GUI) for the project...
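A rough sketch of the capture loop, assuming OpenCV; has_plate() here is just a stand-in for my actual ROI plate check:

```python
# Sketch: capture a frame every 0.25s, keep it only if the ROI check passes.
import os
import time
import cv2

def has_plate(roi) -> bool:
    # Placeholder: the real ROI function decides whether a plate is present.
    return roi.mean() > 100  # hypothetical brightness heuristic

os.makedirs("captures", exist_ok=True)
cap = cv2.VideoCapture(0)
saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = frame[100:400, 150:500]        # hypothetical region of interest
    if has_plate(roi):
        cv2.imwrite(f"captures/plate_{saved}.jpg", frame)
        saved += 1
    time.sleep(0.25)                     # one image every 0.25 seconds
```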
Completed the part where I connect my Python code with the GPT-4o model, specifically for image classification. Moreover, I created a detailed prompt.txt (more than 200 lines) to make sure the model outputs data in the exact format I need.
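A minimal sketch of the GPT-4o call, assuming the openai package; the image path is a placeholder, but prompt.txt is the real prompt file:

```python
# Sketch: send one captured image plus the prompt to GPT-4o.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
prompt = open("prompt.txt").read()

with open("captures/plate_0.jpg", "rb") as f:   # placeholder image path
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```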
Next, I need to make the code that will actually take continuous photos from the Raspberry Pi and use ROI (Region of Interest) to sort the helpful pictures from the unhelpful ones.
Took quite long to complete the next part, but I got it done!!! I managed to change the UI from the terminal/command prompt to a Graphical User Interface using the tkinter Python library.
The general objective of this was to make the code more user-friendly, especially for someone who isn't technical.
My next plan is to develop the code that takes continuous images and sorts them depending on which ones have a plate and which don't.
I completed the part of the code that takes images from a folder and converts them into base64. Since the OpenAI API accepts images as text, each image has to be encoded in base64, a text representation of the picture that the model can take in. I have also managed to pass the images through to the OpenAI model; however, I am receiving some errors that I will troubleshoot later on.
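The folder-to-base64 step itself is short; here's roughly what it looks like (the folder name is a placeholder):

```python
# Sketch: encode every .jpg in a folder as a base64 string.
import base64
from pathlib import Path

def encode_folder(folder: str) -> dict[str, str]:
    """Return {filename: base64 string} for every .jpg in the folder."""
    encoded = {}
    for path in Path(folder).glob("*.jpg"):
        encoded[path.name] = base64.b64encode(path.read_bytes()).decode()
    return encoded

images = encode_folder("captures")   # placeholder folder name
```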
I am making SmartBin AI, a project that can analyse both the type and percentage of food waste occurring at a school campus/canteen/kitchen. This project will use a Raspberry Pi, as well as its Camera Module 2, to capture images. After taking photos, it will run a Region of Interest (ROI) analysis on all the images to find out which photos have plates. It will then pass that list of images through the OpenAI GPT-4o model to find out what kind of food it is, as well as the percentage of waste.
I completed the data acquisition and model training phase of my 1D CNN model! Now I can make multiple models, conduct a validation test, and find out which model has the highest accuracy.
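A sketch of the comparison idea, assuming Keras; build_model(), the shapes, and the random stand-in data are placeholders just so the sketch runs:

```python
# Sketch: train several candidate models, keep the one with the best
# validation accuracy. Shapes and data are fake placeholders.
import numpy as np
from tensorflow import keras

def build_model(n_filters: int) -> keras.Model:
    # Hypothetical 1D CNN variant; only the filter count changes per candidate.
    model = keras.Sequential([
        keras.layers.Conv1D(n_filters, 5, activation="relu",
                            input_shape=(250, 8)),
        keras.layers.GlobalMaxPooling1D(),
        keras.layers.Dense(3, activation="softmax"),  # e.g. 3 movement classes
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Random stand-in data so the sketch is runnable; not real EEG.
X_train, y_train = np.random.randn(100, 250, 8), np.random.randint(0, 3, 100)
X_val, y_val = np.random.randn(20, 250, 8), np.random.randint(0, 3, 20)

scores = {}
for n in (16, 32, 64):                       # candidate filter counts
    model = build_model(n)
    model.fit(X_train, y_train, epochs=5, verbose=0)
    _, scores[n] = model.evaluate(X_val, y_val, verbose=0)
print(max(scores, key=scores.get), scores)   # best candidate by val accuracy
```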
I managed to code the whole file that receives data from the brain-computer interface, specifically the Ultracortex Mark IV. This means that I can now receive filtered EEG data!
My next steps are to combine this code with my previous machine learning model code and see the accuracy of the model.
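For context, here's roughly what the acquisition file does, assuming a recent version of the brainflow package; the serial port and the Cyton board are assumptions about my setup (the Ultracortex is the headset, the board inside does the streaming):

```python
# Sketch: stream a few seconds of EEG from an OpenBCI board and band-pass it.
import time
from brainflow.board_shim import BoardShim, BrainFlowInputParams, BoardIds
from brainflow.data_filter import DataFilter, FilterTypes

params = BrainFlowInputParams()
params.serial_port = "/dev/ttyUSB0"          # placeholder port
board_id = BoardIds.CYTON_BOARD.value        # assumed board

board = BoardShim(board_id, params)
board.prepare_session()
board.start_stream()
time.sleep(5)                                # collect ~5 seconds of data
data = board.get_board_data()                # channels x samples array
board.stop_stream()
board.release_session()

fs = BoardShim.get_sampling_rate(board_id)
for ch in BoardShim.get_eeg_channels(board_id):
    # 1-50 Hz band-pass to keep the usable EEG band (in place)
    DataFilter.perform_bandpass(data[ch], fs, 1.0, 50.0, 4,
                                FilterTypes.BUTTERWORTH.value, 0.0)
```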
Managed to complete my training function, which will let me classify the user controls. I have almost completed building my first one-dimensional convolutional neural network. The next steps are to build multiple models so I can run a validation test to fine-tune and optimize a model during the training process.
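A sketch of the kind of 1D CNN and training function I mean, assuming Keras; the window length, channel count, and class count are placeholders:

```python
# Sketch: a small 1D CNN over EEG windows plus a simple training function.
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn(window_len: int = 250, n_channels: int = 8, n_classes: int = 3):
    model = keras.Sequential([
        layers.Input(shape=(window_len, n_channels)),
        layers.Conv1D(32, kernel_size=7, activation="relu"),
        layers.MaxPooling1D(2),
        layers.Conv1D(64, kernel_size=5, activation="relu"),
        layers.GlobalAveragePooling1D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),  # one unit per control
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def train(model, X, y):
    # Hold out 20% of the windows for validation during training.
    return model.fit(X, y, epochs=20, batch_size=32, validation_split=0.2)
```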
I can share some photos later on of the arm moving.
Made lots of progress in collecting the data, now that I finished the function. Also started on window extraction.
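Window extraction just means slicing the continuous recording into fixed-length, overlapping chunks for the model; a quick sketch with placeholder sizes:

```python
# Sketch: turn a continuous (samples, channels) recording into overlapping
# fixed-length windows. Window length and step are placeholders.
import numpy as np

def extract_windows(signal: np.ndarray, window_len: int = 250,
                    step: int = 125) -> np.ndarray:
    """signal: (samples, channels) -> (n_windows, window_len, channels)."""
    windows = [signal[start:start + window_len]
               for start in range(0, len(signal) - window_len + 1, step)]
    return np.stack(windows)

recording = np.random.randn(2000, 8)     # fake 2000-sample, 8-channel signal
X = extract_windows(recording)           # shape: (15, 250, 8)
```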
Started a basic function to collect the EEG data from the OpenBCI headset.
I found out which libraries I will need for the 1D CNN model.
I am making a Brain-Computer Interface (BCI) controlled exoskeleton arm targeted at those with paralysis, specifically monoplegia. My main focus right now is to make a 1D CNN model to understand which direction the user wants to move the exoskeleton arm. This part of the project is to program solely the opening and closing arm mechanism.