June 17, 2025
So the reason it took so much time is that I tried two separate methods, and only the second one worked.
First, I tried to load the model locally and have everyone download it, which was a bad idea. The second dumb idea I had while making this first version was thinking that making a request at all was bad; what was actually bad was requiring an API key.
So for my second version I started out trying to make a request to Hugging Face's API, but I was having trouble requesting without a key. So I took a break on Slack and remembered ai.hackclub.com, so I am using that.
Next devlog is with my cli.
VRO: I was going to use InferenceClient.chat_completion, but it occurred to me that I need to run this locally, so I am going to redo everything related to the transformer.
So I made the heuristic, and it did not go to plan. Basically, I based my heuristic on detecting keywords and then appending the number of insertions and deletions. This was supposed to be a 15-minute thing, but it took way too long.
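A keyword-plus-diff-stats heuristic like the one described can be sketched roughly as follows. The keyword table and function name here are hypothetical (the devlog doesn't show the actual code); only the output format mirrors the `fix(gitcommit_cli): 129 insertions, 3 deletions` style mentioned in this devlog.

```python
import re

# Hypothetical keyword-to-commit-type table; the real heuristic's
# keywords are not shown in the devlog.
KEYWORDS = {
    "fix": "fix",
    "bug": "fix",
    "add": "feat",
    "feature": "feat",
    "doc": "docs",
    "test": "test",
}

def heuristic_message(diff_text: str, scope: str,
                      insertions: int, deletions: int) -> str:
    """Guess a conventional-commit-style message from a staged diff."""
    commit_type = "chore"  # fallback when no keyword matches
    for word, ctype in KEYWORDS.items():
        if re.search(rf"\b{word}", diff_text, re.IGNORECASE):
            commit_type = ctype
            break
    return f"{commit_type}({scope}): {insertions} insertions, {deletions} deletions"
```

You can see why this disappoints in practice: the message carries almost no information beyond what `git diff --shortstat` already prints.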
I used the tool on my staged changes, and it returned fix(gitcommit_cli): 129 insertions, 3 deletions. THEN. THEN. I came to my senses and realized no one will use this for 5 words. So now I am going to delete it and use Hugging Face Transformers to create the git commit message.
My vision when creating this project is for users to simply type in a command, and for my CLI to take care of the rest. This would make it so much easier for people to commit. This would also help reviewers when reviewing YSWS projects because the more commits, the easier it is to review.
Anyways, I created the scaffolding, branding, and a function to get the staged changes.
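A staged-changes function presumably shells out to git; a minimal sketch (the function names are my own, not necessarily the project's) might look like:

```python
import subprocess

def get_staged_diff(repo: str = ".") -> str:
    """Return the full diff of currently staged changes (empty string if none)."""
    result = subprocess.run(
        ["git", "diff", "--cached"],
        cwd=repo, capture_output=True, text=True, check=True,
    )
    return result.stdout

def get_staged_shortstat(repo: str = ".") -> str:
    """Return git's one-line summary, e.g. '1 file changed, 1 insertion(+)'."""
    result = subprocess.run(
        ["git", "diff", "--cached", "--shortstat"],
        cwd=repo, capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()
```

`--cached` restricts the diff to what is staged in the index, which is exactly what the commit message should describe.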
gitcommit-cli is a command-line tool that automatically crafts git commit messages by analyzing your staged diffs with simple heuristics or, if you supply an OpenAI key, by querying ChatGPT. It lets you preview the generated message and then optionally run git commit in one smooth command. By standardizing message style and reducing friction, gitcommit-cli keeps your workflow focused and your history clean.
line thing fixed
The Dashboard now loads real user data from the database, displaying stress scores and reminders over selectable time ranges. The UI features improved spacing, colors, and a dynamic chart that updates with user selections. A debug button was also added for developers to inspect model data directly in the interface.
I made it look better, and the SQL pipe works.
BRO, I got banned, then got unbanned, but my projects were deleted. I used to be on the front page... I got accepted into Shipwrecked tho :) Time to lock in on this project.
need to figure out and remember how my code works 😭
Sorry to say, this is the last devlog in a while bc I am doing Shipwrecked now.
I made the GUI look better and added an event_type field. Next up is to hook up the data visualization to the database.
The last 3 hours or so, on and off, have been so frustrating. I tried to make sure the data pipeline from main.py to db.db is very smooth and has little to no bugs & latency. Finally I got it to work perfectly and exported it into a .exe file.
I am still making my Python wrapper around the SQL database I am using. This is so when it comes to linking the frames to the model to the database, it is simple. So far my wrapper has 733 lines of code!
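As an illustration of the wrapper idea (the table name, columns, and methods below are invented for the example; the real 733-line wrapper surely differs), a tiny slice of such a wrapper over Python's built-in sqlite3 could look like:

```python
import sqlite3

class SessionStore:
    """Hypothetical slice of a wrapper around the app's SQLite database."""

    def __init__(self, path: str = ":memory:"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS sessions ("
            "  id INTEGER PRIMARY KEY AUTOINCREMENT,"
            "  started_at TEXT NOT NULL,"
            "  stress_score REAL"
            ")"
        )

    def add_session(self, started_at: str, stress_score: float) -> int:
        cur = self.conn.execute(
            "INSERT INTO sessions (started_at, stress_score) VALUES (?, ?)",
            (started_at, stress_score),
        )
        self.conn.commit()
        return cur.lastrowid

    def recent(self, limit: int = 10):
        cur = self.conn.execute(
            "SELECT id, started_at, stress_score FROM sessions "
            "ORDER BY id DESC LIMIT ?",
            (limit,),
        )
        return cur.fetchall()
```

The point of a wrapper like this is that the GUI frames and the model never touch raw SQL; they only call methods, which keeps the linking step simple.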
So basically I renamed my repo & my local device's directory to get these 10 hours (they were on the same project, tho). I started to create a wrapper for my databases and a way to store the sessions. Additionally, I debugged my .spec file so all data was included in the exe. Overall, I just made sure I had a working system so that whenever I wanted, I could run a script and get a .exe file or a .app file.
NOOO My time expiredddd. It's ok. So basically I got my app into an .exe file.
LOOONG STORY: I GOT BANNED. NO, I'M NOT BANNNNNED :)) :)
Trained and created the code for my V2 with one key difference: I used an EfficientNetV2-S model.
Using EfficientNetV2-S now!
Switched from a Haar cascade to a DNN
I created the welcome screen and a basic GUI with essential functions for the application.
Desktop app is up and running!
Started the most basic app to test if it works
Poetry Setup ✅
Sorry, I did not devlog for a minute.
The picture you see below was my proposed website. Midway, I realised I should make a desktop app focused on privacy. I will be using PySide.
I tried to fix the UI. I need ideas badly, and my emit is not working.
Proposed design of the UI
Sample UI
Progress over 4 versions.
The best accuracy after early stopping for Version 4 was 68.10%
Proposed weights for the emotions the model can classify, ranked by how much stress they indicate.
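To make the weighting idea concrete: with per-emotion weights, a single stress score is just a weighted sum of the model's class probabilities. The weight values below are placeholders, not the devlog's actual proposed numbers; the class names are FER-2013's seven emotions.

```python
# Placeholder weights -- the devlog's actual proposed values are not given.
STRESS_WEIGHTS = {
    "angry": 0.9, "fear": 1.0, "sad": 0.7, "disgust": 0.6,
    "surprise": 0.3, "neutral": 0.1, "happy": 0.0,
}

def stress_score(probs: dict) -> float:
    """Collapse per-emotion probabilities into one stress score in [0, 1]."""
    return sum(STRESS_WEIGHTS[emotion] * p for emotion, p in probs.items())
```

So a frame the model calls mostly "fear" scores near 1.0, while a mostly "happy" frame scores near 0.0.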
My Git was not working for a while when I was trying to get these changes committed.
Emitting the model output to the HTML page is not working. If you guys know any solution, it would be appreciated!
Hackatime was down, but I am running V4 for 75 epochs after a disappointing run.
Here is how V4 looks. It has a nice rate of improvement.
Starting V4
Switched to V2 (better accuracy)
idk if I want the function of my real-time emotion analyzer to be judging stress.
One bad thing about my application is that I am using a Haar cascade instead of a CNN to identify faces.
Our V1 is the best, so I am tweaking it and then running it for 70 epochs to improve its accuracy.
no WAY IT WORKS
3 versions trained later...
Here is how our progress looks!
Added this type of logging for every single one of my versions not running.
V3 doing so much better now!
Changed the hook, title, etc.
V3 not doing so well. It slowed down after a promising start.
Progress Update: I am getting worried that I am spending too much time training this model; after all, it has been 25 hours on just one of the many steps.
Sorry, wrong size for the banner.
NEW BANNNERR
V3 is doing almost as well as V2. I think after one last version, we will start capturing the webcam stream.
V3 was so bad I deleted it and started over. I don't know how I did so badly.
New Logo!
V2 with 50 epochs got 68.65%. I am rerunning this script, but for 100 epochs!
Our V3 also could not beat 60%. This dataset is particularly hard because of the different ethnicities in the images. I added mixup and RandAugment, among other things, to bump up the accuracy.
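For reference, mixup blends pairs of training examples and their labels. Here is a minimal NumPy sketch for a single pair (the real training loop would do this at the batch level, and the framework version differs, but the math is the same):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha: float = 0.2, rng=None):
    """Blend two examples and their one-hot labels with a Beta-sampled weight."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)  # mixing coefficient in (0, 1)
    x = lam * x1 + (1 - lam) * x2
    y = lam * y1 + (1 - lam) * y2
    return x, y, lam
```

A small alpha keeps lam close to 0 or 1 most of the time, so mixed images stay recognizable while still smoothing the decision boundary.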
We got to 87 percent on our first & second versions. This seems to be a barrier for us. I am not complaining, though!
Our first version is barely at 60%, and the second is currently training, sitting at 50%.
The FER-2013 dataset is bad....
This was widely regarded as a great move by everyone.