June 17, 2025
I take back what I said last week. Now we are doing Swirl!
I'm stopping with TUI dev for now. I don't have much expertise in TUIs, soo... Anyways, I added web search support to mdllama run! See image for more.
Still tinkering with oterm and stuff. Finally CLOSE to working...
First and final devlog. Finished the website and submitted!
The website I made for Boba and Swirl at a workshop recently.
Still working on the TUI. textual is so stupid!!
Tried making a TUI for mdllama, but it did not really work. Now I'm just (basically) making a fork of oterm and adding OpenAI functionality to it....
Published a new release on GitHub!
Fixed some real-time Markdown rendering issues. Now using Rich's Live.
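For anyone curious, the general pattern looks something like this minimal sketch (not mdllama's actual code, just the idea of re-rendering the accumulated text inside rich.live.Live):

```python
# Minimal sketch of streaming Markdown rendering with Rich's Live
# (illustrative only -- not mdllama's real implementation).
from rich.console import Console
from rich.live import Live
from rich.markdown import Markdown

def render_stream(chunks):
    """Re-render the accumulated text as Markdown after every chunk."""
    console = Console()
    buffer = ""
    with Live(Markdown(buffer), console=console, refresh_per_second=8) as live:
        for chunk in chunks:
            buffer += chunk
            live.update(Markdown(buffer))

if __name__ == "__main__":
    # Fake token stream just to show the effect.
    render_stream(["# Hello", "\n\nThis ", "is **streamed** ", "Markdown."])
```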
Fixed some bugs, and changed CTRL-C to interrupt the model output and CTRL-D to quit the program.
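The Ctrl-C / Ctrl-D split basically comes down to catching KeyboardInterrupt around the streaming call and EOFError around input(). A rough sketch, with a stand-in for the real streaming function:

```python
# Sketch of the Ctrl-C / Ctrl-D behaviour described above (illustrative only).
import time

def stream_model_output(prompt):
    # Stand-in for the real streaming call; prints dots slowly so Ctrl-C
    # has something to interrupt.
    for _ in range(20):
        print(".", end="", flush=True)
        time.sleep(0.2)
    print()

def chat_loop():
    while True:
        try:
            prompt = input(">>> ")       # Ctrl-D raises EOFError here
        except EOFError:
            print("\nbye")               # Ctrl-D quits the program
            return
        try:
            stream_model_output(prompt)  # Ctrl-C during output...
        except KeyboardInterrupt:
            print("\n[interrupted]")     # ...only cancels this response

if __name__ == "__main__":
    chat_loop()
```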
Some small bug fixes to the systemd stuff. Also, I have corrupted my microSD card AGAIN.
Updated the man page.
Forgot to post devlogs for previous changes. But I have changed a lot of things and fixed some bugs that were introduced when I bumped to v3.0.0. Check my repo for details :))
Fixed the progress bar and other features that got removed in v3.0.0.
Updated the man page.
Added a man page for mdllama! Tested on Debian 13 and Fedora 42!!
Added a check-release command. See screenshot for details.
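A check-release command can be as simple as comparing the installed version against GitHub's latest-release API. The repo path and version string below are placeholders, not necessarily what mdllama actually uses:

```python
# Rough idea of a `check-release` command: compare the installed version
# against the latest GitHub release tag. Repo path is a placeholder.
import json
import urllib.request

INSTALLED_VERSION = "3.0.0"          # would come from the package metadata
REPO = "example-user/mdllama"        # placeholder owner/repo

def latest_release_tag(repo: str) -> str:
    url = f"https://api.github.com/repos/{repo}/releases/latest"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)["tag_name"]

if __name__ == "__main__":
    latest = latest_release_tag(REPO).lstrip("v")
    if latest != INSTALLED_VERSION:
        print(f"Update available: {INSTALLED_VERSION} -> {latest}")
    else:
        print("mdllama is up to date.")
```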
TEST 2 this is stupid
Trying to make multi-version support work. Also started on a testing/beta version of mdllama. (This is supposed to be 6h 21m 12s btw.)
Test devlog. The time is broken, I think.
I think my time count is broken on SoM. This is a test.
Today, I made sure that mdllama works with macOS. It originally had some Permission denied errors when writing the config file. I also modularised the main mdllama.py, with help from GitHub Copilot, because I messed something up. However, it did not do its job well, forcing me to manually correct some Actions files.
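The usual fix for that kind of Permission denied error is to write the config under the user's home directory and create the parent folder first. A rough sketch, with an assumed config location:

```python
# Sketch of the kind of fix that avoids "Permission denied" when writing a
# config file: write under the user's home directory and create the parent
# directory first. Paths and keys here are illustrative, not mdllama's own.
import json
from pathlib import Path

CONFIG_DIR = Path.home() / ".config" / "mdllama"   # assumed location
CONFIG_FILE = CONFIG_DIR / "config.json"

def save_config(settings: dict) -> None:
    CONFIG_DIR.mkdir(parents=True, exist_ok=True)  # no error if it already exists
    CONFIG_FILE.write_text(json.dumps(settings, indent=2))

save_config({"endpoint": "http://localhost:11434"})
```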
Some small bug fixes.
Fixed a few critical security issues. Big shoutout to @Devarsh.
I created a live demo here at https://mdllama-demo.qincai.xyz. The demo version is powered by ai.hackclub.com. I also fixed a few bugs and stuff. Check my repo for more.
Fixed the Fedora RPMs. Now they are working! It turned out to be a conflict with pip, since my package had the same name. Now, on both Debian and Fedora, I have renamed the package to python3-mdllama.
Made a few updates. Now it can also work with OpenAI-compatible endpoints, including https://ai.hackclub.com. Unfortunately, during the process I broke the mdllama RPM, so in the meantime users have to use pip or pipx. Somehow the DEB package is still working. Interesting.....
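Talking to an OpenAI-compatible endpoint is basically one POST to a chat-completions route. A minimal sketch; the exact path and model name for ai.hackclub.com are assumptions here, so adjust to whatever the service documents:

```python
# Minimal sketch of calling an OpenAI-compatible endpoint. The path and model
# name are assumptions, not confirmed details of ai.hackclub.com.
import json
import urllib.request

BASE_URL = "https://ai.hackclub.com"           # OpenAI-compatible server
MODEL = "example-model"                        # placeholder model name

def chat(prompt: str) -> str:
    payload = json.dumps({
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",        # assumed OpenAI-style path
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

print(chat("Say hi in one sentence."))
```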
I packaged this project and tinkered with GitHub Actions. Now you can install it using apt, dnf, rpm, pip, pipx, OR by running the installation script.
After some testing and stuff, I have come to the conclusion that my Markdown rendering method is not very efficient. Sometimes, in the middle of a long code output (surrounded by code blocks), the stream just pauses until the output is complete. See image attached; it's completely frozen.
Since I have one microSD card lying around, I quickly flashed a new image and set up the card. Why? Because I need to test that my program is easily usable and reproducible (is that a word??)
Somehow I couldn't get Ethernet Gadget mode to work... See image attached. Nothing showed up on the host.
Installed Ansible on my pi02w, cos, well, why not?
Trying to put this inside Docker, cos why not? Better isolation and stuff.
More testing and README updates! I added LOADS of screenshots to my README.
Created installer and uninstaller scripts for this project! (Already tested on Debian 13/Trixie and Fedora 42, to be tested on Ubuntu)
Made a working version of the Ollama CLI; it's not very efficient though (ATM), using quite a lot of CPU power.
Developed some tests in the terminal!! Including some tinkering with systemd...
I feel like the BOM for this project is wayyyyyyyyy too expensive. I'm gonna abandon the hardware part of the project, and instead focus on an Ollama CLI, since I already have a prototype from a while ago.
This session I basically worked on the operating system side of things. I tried to fix the shutdown mechanism and ended up using another OLED SSD1306 display I happened to have around. Now it's working :))
In this session, I added shutdown support for the project, so I don't have to just unplug the power cable, which corrupted my git tree last time. I also modified the systemd service so that it keeps going even if git exits with an error code (when there is no internet, for example).
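For reference, a unit along these lines keeps a git failure from taking down the service; the paths and names below are made up for illustration, not the project's real unit file:

```ini
# Illustrative systemd unit sketch (paths and names are invented).
# The "|| true" stops the unit from failing when git can't reach the network.
[Unit]
Description=Cube timer (example)
After=network.target

[Service]
Type=simple
ExecStartPre=/bin/sh -c 'git -C /home/pi/picube pull || true'
ExecStart=/usr/bin/python3 /home/pi/picube/main.py
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```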
I made some test scripts to make sure everything is working. Turns out it's pretty good right now, except for the low framerate while timing, which I will fix soon. systemd is working, which means the program runs on boot!
Started experimenting with my custom Ollama client. It needs to remove all the formatting and stuff, and be as simple as possible; however, streaming must be supported.
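A bare-bones streaming Ollama client really only needs to read the NDJSON lines that /api/generate returns. Something like this sketch (the model name is just an example):

```python
# Bare-bones streaming client for Ollama's /api/generate endpoint: no
# formatting, just print tokens as they arrive. Model name is an example.
import json
import urllib.request

def stream_generate(prompt: str, model: str = "llama3") -> None:
    payload = json.dumps({"model": model, "prompt": prompt, "stream": True}).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        for line in resp:                      # Ollama streams one JSON object per line
            if not line.strip():
                continue
            chunk = json.loads(line)
            print(chunk.get("response", ""), end="", flush=True)
            if chunk.get("done"):
                print()
                break

stream_generate("Why is the sky blue?")
```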
I think I just fixed the issue where my solve results disappear upon a reboot. Needs to be tested (not now though)
I did more research on single-board computers, and I ended up with an Orange Pi 5 Pro (8GB version with no eMMC). It is like 2x faster than the Raspberry Pi 5, somehow being more efficient at the same time. It is even the same price as the Raspberry, at US$80 with 8GB RAM. Sad that the shipping is like $13....
Finally made (most) things work. TYSM GitHub Copilot, it did most of the things by fixing the code and finding libraries. The display update is quite laggy tho.. Also I corrupted my Git tree somehow (on the pi02w-cube), which means I probably should add a power button to my project, but that's for another day ig.
Finally got a test script to run!!! For some reason, the st7789 library was not working as it should, so I switched to luma.lcd.
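A quick test with luma.lcd can look something like this; the pin numbers and the 240x240 panel size are guesses for illustration, so match them to your own wiring:

```python
# Quick display test using luma.lcd's st7789 driver (illustrative).
# Pin numbers and panel size are guesses; match your wiring.
from luma.core.interface.serial import spi
from luma.core.render import canvas
from luma.lcd.device import st7789

serial = spi(port=0, device=0, gpio_DC=25, gpio_RST=27)   # example pins
device = st7789(serial, width=240, height=240)

with canvas(device) as draw:
    draw.rectangle(device.bounding_box, outline="white")
    draw.text((30, 110), "Hello from luma.lcd!", fill="white")
```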
Finally finished downloading the image. Now I am using rpi-imager to burn the image to the microSD card for the Pi.
I have also modified the firstrun.sh script to set up Ethernet gadget mode since I do not have a monitor.
For this session, I updated the README to add more descriptive details for my project, and started JOURNAL.md, required by Highway!
Just updated all the Markdown files to comply with some linting rules.
In this session, I worked on tidying up the repo and developed some tests. I have also soldered the pins of my Pi02W, so I could try my code out on the smarter Pi later today :))). But that means I will have to update the code.....
So now I am downloading the image for the Pi02W!! This will take a long time.
I just finalised the whole repository and submitted it to #highway!!!
I started the planning phase of my project. I decided to use the Orange Pi 5 Pro, with 8GB of RAM. Please see attachment for details :))
YAY finally got it to work. I just fixed a few bugs that I hadn't fixed yesterday. I recorded a video on my project in action, so check it out!! :))
A CLI tool that lets you chat with Ollama and OpenAI models right from your terminal, with built-in Markdown rendering. TRY TYPING "MARKDOWN" INTO THE DEMO CHAT WINDOW!!
Today I fixed many bugs introduced in yesterday's features. Writing that sentence makes me think of "it's a feature, not a bug". Anyways, as I said, I fixed some bugs like text clipping and some other logic errors. I also worked on the README and JOURNAL.md, just because I felt like it. You can see the journal here
Fixed the timer control function, or at least attempted to. Copilot could not help me with that somehow, so I was on my own. Also a QOL improvement -- turning the prompt red when the sensor has been held long enough.
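The hold-long-enough check is essentially a timed poll of the sensor. A hypothetical MicroPython-style sketch (the pin number and threshold are made up, not the project's real values):

```python
# Hypothetical hold-to-arm sketch: report "armed" (where the real code would
# recolour the prompt) once the sensor has been held past a threshold.
import time
from machine import Pin

sensor = Pin(16, Pin.IN, Pin.PULL_DOWN)   # example touch-sensor pin
HOLD_MS = 550                             # example arming threshold

def wait_for_arm():
    """Block until the sensor has been held for HOLD_MS."""
    while True:
        if sensor.value():                                 # finger down
            start = time.ticks_ms()
            while sensor.value():
                held = time.ticks_diff(time.ticks_ms(), start)
                if held >= HOLD_MS:
                    print("ARMED")        # real code turns the prompt red here
                    return
        time.sleep_ms(10)
```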
Finally got home, so I guess it's time to test the new code on my Pico!!
Tried to fix many bugs, including the backlight control, which last worked a long time ago.
The most stupid part was getting the version number to display; I had to manually position the text. Good thing that GitHub Copilot finally gave me a working function for it, after feeding it the manual version.
Backlight control now works perfectly. SAVE SOME ELECTRICITY :))
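One way to do backlight control on the Pico is to PWM the backlight pin and dim it when idle. This is just an illustration; the GPIO pin number is a guess:

```python
# Illustrative backlight dimming on the Pico: PWM the backlight pin and
# lower the duty cycle when idle. Pin number is a guess.
from machine import Pin, PWM

backlight = PWM(Pin(13))       # example backlight pin
backlight.freq(1000)

def set_backlight(percent: int) -> None:
    """Set backlight brightness, 0-100%."""
    backlight.duty_u16(int(65535 * percent / 100))

set_backlight(100)   # full brightness while timing
set_backlight(10)    # dim when idle to save some electricity
```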
Updated the README and uploaded the software to work on the Pi Pico!
A Raspberry Pi (Zero2) - powered Rubik's Cube Timer. This is a continuation of my Highway project, PiCubePico (which was based on the Raspberry Pi Pico).
This was widely regarded as a great move by everyone.