June 17, 2025
just something i had the itch of building. i have this jailbroken iphone se lying around, and i wanted it to look like those matrix-style esp32-powered displays, so i decided to make a website + shader for my se to look like one! i started working on the shader, which is borked due to my video's resolution.
added storage, cpu percent and the most ram-hungry process, going to move onto the face reactions now!
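for reference, roughly how these numbers can be pulled with psutil (assuming that's the library in play; the app's actual code differs):

import psutil

disk = psutil.disk_usage("/")            # storage used on the root partition
cpu = psutil.cpu_percent(interval=1)     # cpu usage sampled over one second

# find the process using the most RAM (skip ones we can't read)
top = max(
    psutil.process_iter(["name", "memory_info"]),
    key=lambda p: p.info["memory_info"].rss if p.info["memory_info"] else 0,
)
print(f"disk: {disk.percent}%  cpu: {cpu}%  hungriest: {top.info['name']}")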
got testflight accepted, added the readme and updated the website links. also had an issue where text was wrapping on ios devices, seems to be fixed now.
back from travelling. also, got access to the hc ios dev team! so now the app can be published on testflight for the demo. tried setting up external testing, and idk why but the phone number wasn't working; turns out a + was needed. i also made the demo site with a video demo for people who don't want to install the app!
finally moved on from the basic stats and started working on something else. getting the charts to work wasn't really hard, charts in swiftui are pretty easy to work with, and the default design was already very close to the hackatime dash. the editor chart was a pain in the ass though, cuz i had to get the data from heartbeats, and those are limited, so it shows recently used editors. i also started hitting rate limits, so i added caching (which ai helped with); now everything refreshes after 5 minutes! also removed the refresh button. off topic, but i jailbroke my ipad and made it look like ios 26
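the caching idea boils down to a timestamp check; a minimal sketch in python (the app itself is swift, so this is just the shape of it):

import time

_cache = {}         # endpoint -> (fetched_at, data)
TTL = 5 * 60        # refresh everything after 5 minutes

def cached_fetch(endpoint, fetch):
    now = time.time()
    hit = _cache.get(endpoint)
    if hit and now - hit[0] < TTL:
        return hit[1]              # still fresh, skip the network
    data = fetch(endpoint)         # stale or missing, hit the api again
    _cache[endpoint] = (now, data)
    return data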
done with the base of the app! changed the text at the top so each word is colored differently, and added more of a pop by changing the border color of the cards. i want to add stuff like charts next. also, i was going through the endpoints and the trust factor one seems really cool, even though i don't think there's any mention of it in the main dash! i might be ready to ship!
Nice website, maybe add some more documentation or stuff like that (getting rickrolled at work is crazy)
added demo pages: since my pr is still open and the links don't lead anywhere, they now lead to a demo site made in simple html/css.
this project is just one devlog, was pretty happy to make this in one sitting. the site itself is pretty simple, i wasn't really sure what to make so i tried my best. added a bg image similar to where the scene takes place, and added images of rick and heidi, plus the lore. adding the connections to the scene was pretty confusing until i found the qna canvas. also i hope i don't get cors errors.
scene 44! beware of rickrolls!
too tired. first, i started the day by replicating the hackatime dash as much as i could: added the heading plus the today-time text. then i tried getting the stats endpoint to work, but i kept hitting unauthorized errors, so i decided to ask the user for their slack username and use /api/v1/user/stats (where user is the username) to get the stats that way. also cleaned up the heartbeat code and tried replicating the grid layout! couldn't get top project though, since the stats don't output that.
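the call that finally worked, sketched in python (the path is my reading of the description above, and the base url is assumed):

import requests

username = "your-slack-username"   # hypothetical placeholder
resp = requests.get(
    f"https://hackatime.hackclub.com/api/v1/users/{username}/stats",
    timeout=10,
)
resp.raise_for_status()            # the bare stats endpoint kept returning 401s
print(resp.json())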
best day so far. i love making UIs in swiftui. i added phantom sans and the hc colors to xcode's palette, which i've worked with for the first time! will focus on keeping my 2h 30m daily goal + making the stats ui + different views.
deleted both devlogs, cuz i might or might not have leaked my api key.
got a wip dash! added a keychain func, models (which i need to update), the api layer, and a profile view. still need to work out the UI and update the content view to act as the api key page. also tried to add stats, which is documented, but i kept getting an unauthorized error; tried to get it working and spammed chatgpt, still haven't worked it out, but i decided to get today's top languages instead! need to add more stuff like the real dash: project filtering, total stats, etc.
the devlog before this had me set up the basic xcode project, with it just sending a request to the api and getting raw data back
an ios client for the hackatime dashboard!
added uptime and ram to the bottom bar. also, textual is a lot like swiftui, with hstacks and vstacks. i still don't like the face, might change it.
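a minimal textual sketch of that hstack/vstack feel (widget contents here are made up):

from textual.app import App, ComposeResult
from textual.containers import Horizontal, Vertical
from textual.widgets import Static

class PetApp(App):
    def compose(self) -> ComposeResult:
        # Vertical/Horizontal behave a lot like swiftui's VStack/HStack
        with Vertical():
            yield Static("( o.o )   <- the face i still don't like")
            with Horizontal():
                yield Static("uptime: 1h 23m")
                yield Static("ram: 42%")

if __name__ == "__main__":
    PetApp().run()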
it took this much time just to render ascii art on the screen. learnt some textual basics, and tried image rendering using textual-image, but that didn't seem to work, cuz some terminals don't really support it.
a tamagotchi terminal app that reacts to system usage!
end of project! i want to move onto something new lol. also, there were 20-something commits, and i switched from mint to arch and was fed up with head ref issues, so i did a force rebase and push, which deleted all the git commits on gh too. so please, reviewers, don't flag me. now there are 4 commits.
finally added the rust python function. also, after switching from mint to arch, i had to force push and delete my git history :( so the repo only says 2 commits. also the flask app is throwing an ssl error, hope it's just provisioning!
tried to add rust support, and this was by far the hardest thing to add, since i've never really programmed in rust and don't know the syntax that well, so i had to skim through tons of ai rust code, github samples of rust code, and the rust book. i haven't read the whole thing, but i now have a basic idea of how it works. also asked the hc slack and got some tips there too! got a model ready and it's getting 95% accuracy!
adding the rust dataset, also using nvim as my editor now! got the human dataset ready, just need to get the ai code.
added ts support to the flask app. probably going to add rust or something next lol
adding ts support. gathered the dataset yesterday, and today i added the feature extraction. also needed to understand a bit of typescript to know what features to extract. the extracted features: num_l, num_b, comment_r, avg_l_len, indent_var, num_funcs, arrow_r, avg_ind_len, num_interfaces, num_types, num_enums, num_classes, num_imports, num_exports, type_annotations, generics, access_mods. getting 98% accuracy!
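a rough idea of what extracting a few of those features looks like (regexes here are illustrative, not the project's actual ones):

import re

def extract_ts_features(code: str) -> dict:
    lines = code.splitlines()
    arrows = len(re.findall(r"=>", code))
    funcs = len(re.findall(r"\bfunction\b", code)) + arrows
    return {
        "num_l": len(lines),
        "num_b": sum(1 for l in lines if not l.strip()),
        "num_interfaces": len(re.findall(r"\binterface\s+\w+", code)),
        "num_types": len(re.findall(r"\btype\s+\w+\s*=", code)),
        "num_enums": len(re.findall(r"\benum\s+\w+", code)),
        "arrow_r": arrows / funcs if funcs else 0.0,   # arrow-to-function ratio
    }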
ready to ship! caddy is finally done provisioning. also needed to install scikit-learn on nest. pretty solid mvp imo; plan on adding more languages, like ts support!
was ready to ship the app, so i tried deploying. vercel didn't work, tried render, but that failed too, then decided to use nest and got it set up, but caddy is still provisioning the ssl certificate.
made a very simple flask front end. right now only js code works, but you can still choose any other file type; need to fix that.
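a hedged sketch of how the file-type problem could be handled (route name and model hook are hypothetical, not the project's code):

from flask import Flask, request

app = Flask(__name__)
ALLOWED = (".js",)   # only js works right now, so reject everything else

def run_model(code: str) -> dict:
    return {"ai_probability": 0.5}   # stand-in for the real prediction call

@app.post("/predict")
def predict():
    f = request.files["file"]
    if not f.filename.endswith(ALLOWED):
        return "unsupported file type", 400
    return run_model(f.read().decode("utf-8", "replace"))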
started working on the flask app, created files to keep the terminal prediction and the flask app separate. might make a python package, or not, idk yet.
added JS support! got 408 files each of human (from gh) and ai (gemini, gpt-4.1, claude) code, then i just copy-pasted the feature extraction and model training, getting 96% accuracy! changed the feature extraction to get stuff like the arrow-to-func ratio now!
updated install instructions and added onnxruntime. also, the reviewer tried it on windows, while the readme explicitly states that it won't work on windows.
added a percentage instead of a blunt AI-or-human verdict. also added a few gpt-5 samples and it seems to be pretty accurate. i plan on adding emoji extraction: if the code uses emojis, which gpt code does tend to, it's more likely to be AI.
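with scikit-learn, the percentage is basically predict_proba instead of predict; a sketch (filename, feature row and label names are assumptions):

import joblib

model = joblib.load("slop_model.joblib")      # hypothetical model file
features = [[120, 10, 0.08, 41.2, 1.4, 6]]    # one extracted feature row
probs = model.predict_proba(features)[0]
ai_idx = list(model.classes_).index("ai")     # assuming labels "ai"/"human"
print(f"{probs[ai_idx]:.0%} likely AI")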
removed html support, since it was really finicky; removed the dataset and associated apps too.
html support is added. scraped 400 human samples from github; getting human samples isn't a problem, it's seamless. ai samples are the problem: got 200 from gemini, and the rest from chatgpt 4.1.
needed to use AI to get a better result, since if prompted you can easily fool the detector. even after using AI (and tons of fixing ai code), it's still pretty dodgy and not that reliable. i guess it's because of the lack of proper data, and i currently can't solve that without paying for an AI service, so HTML detection will be marked as a rough estimate, and it can be pretty dodgy.
also, i fixed the scraper by using github's search api and then decoding its base64 contents!
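the scraper fix in a nutshell (token and query are placeholders; the code search api needs auth and at least one search term):

import base64
import requests

headers = {"Authorization": "token YOUR_GH_TOKEN"}   # hypothetical token
resp = requests.get(
    "https://api.github.com/search/code",
    params={"q": "div language:html", "per_page": 5},
    headers=headers,
    timeout=10,
)
for item in resp.json()["items"]:
    # each hit's "url" is a contents-api link; the file comes back base64-encoded
    blob = requests.get(item["url"], headers=headers, timeout=10).json()
    code = base64.b64decode(blob["content"]).decode("utf-8", "replace")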
increased the dataset to 400 files each for AI and human: scraped github for human, and used gemini 1.5 flash, claude and ai.hackclub to generate the ai code. used tfidf to get a better result, ended up getting 97% accuracy!
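the tfidf setup is roughly this (the classifier choice and toy data are mine; the devlog doesn't say which model it feeds):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# toy stand-ins for the scraped dataset: raw source strings + labels
codes = ["def f():\n    pass\n"] * 4 + ["x = 1\n"] * 4
labels = ["human"] * 4 + ["ai"] * 4

clf = make_pipeline(
    TfidfVectorizer(token_pattern=r"\S+"),   # whitespace-split code tokens
    LogisticRegression(max_iter=1000),
)
clf.fit(codes, labels)
print(clf.predict(["eval(input())\n"]))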
also made a simple prediction script: it uses the same extraction code from features.py, renames the dictionary keys to suit the model's requirements, and then loads the trained model to get the prediction. currently it only works for python code, but i plan on adding html, css and js.
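the shape of that script, hedged (function name, key mapping and filenames are assumptions; only features.py comes from the devlog):

import joblib
from features import extract_features   # module name per the devlog

# hypothetical mapping from extraction keys to the model's column names
RENAME = {"comment ratio": "comment_r", "line length": "avg_l_len"}

def predict(path: str) -> str:
    with open(path) as f:
        feats = extract_features(f.read())
    feats = {RENAME.get(k, k): v for k, v in feats.items()}
    model = joblib.load("model.joblib")
    # feature_names_in_ exists if the model was fit on a dataframe
    row = [feats[name] for name in model.feature_names_in_]
    return model.predict([row])[0]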
first devlog! today i got 20 samples each: ai samples from ai.hackclub using a simple ai code-gen python script i created, and manually gathered human samples. also built feature extraction and got it to output a .csv in the format: filename,label,lines,blanks,comment ratio,line length,indent variations,functions
also, i need to gather larger datasets, perhaps by web scraping gh and pypi.
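that csv format, written out with the stdlib (the values here are made up):

import csv

FIELDS = ["filename", "label", "lines", "blanks", "comment ratio",
          "line length", "indent variations", "functions"]

with open("dataset.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerow({"filename": "sample.py", "label": "ai", "lines": 120,
                     "blanks": 10, "comment ratio": 0.08, "line length": 41.2,
                     "indent variations": 1.4, "functions": 6})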
using AI to learn scikit-learn and more about ML training with python. note: it might not be accurate, since the datasets are pretty small (1k each) due to system constraints and ai rate limits! slop_detector1020 is an ai code detector made in python
ready to ship! made the final UI look good; the reader can still improve, and localstorage is still borked. also updated the final readme and hosted the site on vercel. can't believe i started this as a swiftcrossui app and now it's a web app!
Removed the sidebar and tried to make the app as minimal as possible: now you can load your book and just read it. it's one book at a time, but with a slightly cleaner UI. tried to persist the file and the page, but it's still a WIP. need to clean up the reader and add keyboard shortcuts.
Added functionality for the user to move through pages; still need to make the UI actually look good. also added fflate for faster extraction!
Patch! ai.hackclub changed to a reasoning model, which broke the ai backend and made it respond with random tags; made a new function to strip through the response and filter them out. updated the chatter website to be a demo-links page.
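the filter is basically a regex over the response; a sketch (the tag name is an assumption, the devlog just says "random tags"):

import re

def strip_reasoning(text: str) -> str:
    # drop <think>...</think>-style blocks the reasoning model emits
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()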
first devlog! started off making a cross-platform app using swiftcrossui, but that didn't work out, since swiftcrossui is still in its early stages and i couldn't make the app really work. then i decided to make a svelte app instead; got a sidebar ready and got the cover image to load for .cbz files using jszip!
ToDo-
- Add a proper reader and fix the sidebar to import more than one book.
AI was used in trying to get persistence and localstorage to work, but it's still borked in its current state. livre (book in french) is a comic book reader made using svelte!
packaged the game for macos; couldn't do the same for windows and linux, but got pygbag working and packaged the game for the web, although sfx don't work in the web version! updated the readme too, and cleaned up the repo.
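pygbag wants the main loop to be async so the browser gets control back every frame; the usual restructuring looks roughly like this:

import asyncio
import pygame

async def main():
    pygame.init()
    screen = pygame.display.set_mode((640, 360))
    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
        screen.fill("black")
        pygame.display.flip()
        await asyncio.sleep(0)   # yield to the browser each frame

asyncio.run(main())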
ready to ship!
Added a progress bar and effects on the yellow mosquito, also added sfx and instructions. made the bg a random pastel color; tried bg images, but they sucked, so a random color it is. also updated the HUD. the game is ready to package!
Today, i changed the lighting to a simple bar at the top, and removed the shadows too; couldn't get a good look out of them.
i also added effects for when the blue mosquito is eaten, using sine waves (i used AI to understand what a sine wave really does lol); see the sketch after this entry. also added a screen shake at the end + made slowdown and speedup cancel each other out! improved the HUD too!
Next - add effects for the other type, and also add progress bars
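the sine-wave effect in a tiny pygame loop (numbers made up; it just bobs a circle up and down smoothly):

import math
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 360))
clock = pygame.time.Clock()
t = 0.0
while True:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            raise SystemExit
    t += clock.tick(60) / 1000
    y = 180 + math.sin(t * 4) * 20   # smooth oscillation, 20px either way
    screen.fill("black")
    pygame.draw.circle(screen, "skyblue", (320, int(y)), 12)
    pygame.display.flip()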
Added mosquito sprites, animations, and lighting for the other mosquitoes. also tried making my own function for spritesheets, but ended up slicing the sprites and using them separately.
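for reference, the kind of spritesheet slicer i was going for (a generic sketch, not the project's code):

import pygame

def slice_sheet(path: str, w: int, h: int) -> list:
    # call after pygame.display.set_mode(), since convert_alpha needs a display
    sheet = pygame.image.load(path).convert_alpha()
    frames = []
    for y in range(0, sheet.get_height(), h):
        for x in range(0, sheet.get_width(), w):
            frames.append(sheet.subsurface(pygame.Rect(x, y, w, h)))
    return frames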
got the mosquito-eating functionality down; the rects were already set up, so collision wasn't a problem. also added different mosquito types:
blue - half speed and half tongue speed
yellow - 1.5x speed and tongue speed
also, there wasn't a way to avoid a mosquito's effect, so i added bullets (the player starts with 8). also got waves to work! each wave spawns 5 - 15 mosquitoes, the game is finally endless, and i added text too.
tbh, got a LOT of work done today!
next -
add sprites, glow, art for bullets, clean up code and put the frog and tongue in their own classes
10 Hour Mark!
Today's Work - i had an issue where the frog and tongue were tearing during movement; tried changing a LOT of things, but ended up making the movement time-based (see the sketch after this entry). also set up a Mosquito class (since the code was getting messy), plus multiple mosquitoes spawning and staying inside the borders.
What's next:
add collisions and mosquitoes eating.
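the time-based movement fix, as a generic sketch (not the game's actual code): speed becomes pixels per second, scaled by each frame's delta time.

import pygame

pygame.init()
screen = pygame.display.set_mode((640, 360))
clock = pygame.time.Clock()
x, speed = 0.0, 200.0              # pixels per SECOND, not per frame
while True:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            raise SystemExit
    dt = clock.tick(60) / 1000     # seconds since the last frame
    x = (x + speed * dt) % 640     # same speed at any frame rate, no tearing
    screen.fill("black")
    pygame.draw.rect(screen, "green", (int(x), 170, 32, 20))
    pygame.display.flip()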
got the tongue + animation working, set up sprites and animation, set up image scaling, and aligned the rects of the pre-scaled and post-scaled images (see the sketch below).
next - add mosquitoes
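the rect-alignment trick, sketched (asset path and numbers are hypothetical): give the scaled image the same center the unscaled rect had.

import pygame

pygame.init()
screen = pygame.display.set_mode((640, 360))
raw = pygame.image.load("frog.png").convert_alpha()   # hypothetical asset
big = pygame.transform.scale(raw, (raw.get_width() * 3, raw.get_height() * 3))
# same center before and after scaling, so the sprite doesn't jump around
rect = big.get_rect(center=raw.get_rect(center=(320, 180)).center)
screen.blit(big, rect)
pygame.display.flip()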
messed around a lot with godot trying to get a wave manager working, since i wanted the game to be endless. after TONS of debugging, i jumped ship to pygame, since i do have experience with it! in pygame, got the frog set up + tongue mechanisms.
TODO - add sprites and animations.
packaged the app to pypi, and removed windows support (cuz it wasn't working in the first place). also updated the readme and removed the US models, since the wheel was > 100mb.
first devlog! added frog movements + tongue mechanisms. also removed the mock assets and made my own using aseprite, and programmed in gdscript for the first time.
what's next:
mosquitoes, glows, waves, background + so much more lol
a frog eating mosquito frenzy, made purely in pygame
realized that you can't really install it on other OS's. updated install instructions + requirements, and got it working in a linux environment.
vibe coded website, and added finishing touches to the whole project, ready to ship!
total project time = 15h 51m
summer event time = 2h 32m
removed the settings tab, added customization via a config file, and updated the readme with customization instructions + templates.
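roughly what reading that config can look like (the format and key names are assumptions, the devlog doesn't specify):

import tomllib   # stdlib on python 3.11+

with open("config.toml", "rb") as f:   # hypothetical filename
    cfg = tomllib.load(f)

input_device = cfg.get("input_device", "default")
show_hints = cfg.get("hints", True)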
finally added a settings menu: when you press esc, a simple menu shows up with options for your input, output, hints and more (they don't have any functionality yet). i first tried the pygame-menu lib, but that looked really ass, then i saw a guide on making your own menu, so i did that. added an accent color and keyboard-only controls (i love vim).
chatter is a voice assistant that uses the hackclub AI and local models (for voice and speech recognition) to be of use to you!
This was widely regarded as a great move by everyone.