June 17, 2025
Managed to implement local AI inference using llama-cpp-python after *checks clock* 5 HOURS??? Getting the LLM itself running was a fairly trivial task, but doing the processing on the GPU turned out to be way harder than I thought. CUDA seems to break as soon as I look at it; I had to install 3 different versions, and it still doesn't work...
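For future me, a minimal sketch of the kind of setup I was going for, assuming llama-cpp-python's `Llama` API with `n_gpu_layers` for GPU offload; the model path and prompt are placeholders, not the ones actually used:

```python
# Minimal sketch of local inference with llama-cpp-python.
# Assumes a GGUF model file on disk (path below is a placeholder) and a
# CUDA-enabled build of llama-cpp-python for the GPU offload to work.
try:
    from llama_cpp import Llama
    HAVE_LLAMA = True
except ImportError:  # library not installed (or build is broken)
    HAVE_LLAMA = False

def load_model(model_path, n_gpu_layers=-1):
    """Load a GGUF model; n_gpu_layers=-1 offloads all layers to the GPU."""
    if not HAVE_LLAMA:
        raise RuntimeError("install llama-cpp-python first")
    return Llama(model_path=model_path, n_gpu_layers=n_gpu_layers)

# usage (hypothetical model file):
# llm = load_model("models/some-model.Q4_K_M.gguf")
# print(llm("Q: What is a cloud chamber? A:", max_tokens=64)["choices"][0]["text"])
```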
Added an executable demo and made the official release.
GPU shaders are kicking my butt rn, but at least it will run better... hopefully.
Improved the looks on phone and worked on a performance boost, but it's still too slow.
Created my project in Python; now I need to port it over to Processing. Encountering some performance issues, so I need to improve efficiency for the phone port.
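Not the actual project code, but a tiny Python sketch of the kind of efficiency fix that usually helps at this stage: replacing a per-particle Python loop with one vectorized NumPy update. The particle model, counts, and timestep here are made up for illustration:

```python
import numpy as np

def step_loop(positions, velocities, dt):
    """Naive per-particle update (slow in pure Python)."""
    out = positions.copy()
    for i in range(len(positions)):
        out[i] = positions[i] + velocities[i] * dt
    return out

def step_vectorized(positions, velocities, dt):
    """Same update as one NumPy operation over all particles."""
    return positions + velocities * dt

# usage: 1000 hypothetical particles, one ~60 fps frame
pos = np.zeros((1000, 2))
vel = np.random.uniform(-1.0, 1.0, (1000, 2))
assert np.allclose(step_loop(pos, vel, 0.016), step_vectorized(pos, vel, 0.016))
```

The same idea carries over to Processing: batch the per-particle work instead of doing it object by object each frame.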
This is a simple Android wallpaper showing a simulation of a cloud chamber, inspired by the following video by Sebastian Lague: https://www.youtube.com/watch?v=X-iSQQgOd1A. Credit for the project icon picture: By Nuledo s.r.o. - https://www.nuledo.com/en/, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=62073772
This was widely regarded as a great move by everyone.