A CLI tool that lets you chat with Ollama and OpenAI models right from your terminal, with built-in Markdown rendering. TRY TYPING "MARKDOWN" INTO THE DEMO CHAT WINDOW!!
I'm stopping with TUI dev for now. I don't have much expertise in TUIs, soo... Anyways, I added web search support to mdllama run! See image for more.
Tried making a TUI for mdllama, but it did not really work. Now I'm just (basically) making a fork of oterm and adding OpenAI functionality to it....
Forgot to post devlogs for previous changes. But I have changed a lot of things and fixed some bugs that were introduced when I bumped to v3.0.0. Check my repo for details :))
Trying to make multi-version support work. Also started on a testing/beta version of mdllama. (This is supposed to be 6h 21m 12s btw.)
Today, I made sure that mdllama works with macOS. It originally had some "Permission denied" errors when writing the config file. I also modularised the main mdllama.py, with help from GitHub Copilot, because I messed something up. However, it did not do its job well, forcing me to manually correct some Actions files.
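For reference, here's a hypothetical sketch of the kind of fix involved. The ~/.config/mdllama path and file name are assumptions for illustration, not mdllama's actual layout: the idea is to resolve a per-user config directory and create it before writing, rather than assuming the target already exists and is writable.

```python
# Hypothetical sketch; the ~/.config/mdllama path is an assumption,
# not mdllama's actual config location.
import json
from pathlib import Path

def save_config(config: dict) -> Path:
    config_dir = Path.home() / ".config" / "mdllama"
    # Writing under the user's own home directory, and creating the
    # directory first, avoids both missing-directory errors and writes
    # into locations the current user has no permission to touch.
    config_dir.mkdir(parents=True, exist_ok=True)
    config_file = config_dir / "config.json"
    config_file.write_text(json.dumps(config, indent=2))
    return config_file
```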
I created a live demo here at https://mdllama-demo.qincai.xyz. The demo version is powered by ai.hackclub.com. I also fixed a few bugs and stuff. Check my repo for more.
Fixed the Fedora RPMs. Now they are working! Turned out to be a conflict with pip, since my package had the same name. On both Debian and Fedora, I renamed the package to python3-mdllama.
Made a few updates. Now it can also work with OpenAI-compatible endpoints, including https://ai.hackclub.com. Unfortunately, during the process I broke the mdllama RPM. So in the meantime, users have to use pip or pipx. Somehow the DEB package is still working. Interesting.....
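For the curious, here's a minimal sketch of what streaming from an OpenAI-compatible endpoint looks like. The chat/completions route, the SSE framing, and the model name are assumptions about the standard API shape, not mdllama's actual code, and the exact base path for ai.hackclub.com may differ.

```python
# Minimal streaming sketch for an OpenAI-compatible endpoint.
# The route, model, and auth handling are illustrative assumptions.
import json
import requests

def stream_chat(base_url, prompt, model="gpt-4o-mini", api_key=None):
    headers = {"Content-Type": "application/json"}
    if api_key:  # some endpoints may not require a key
        headers["Authorization"] = f"Bearer {api_key}"
    resp = requests.post(
        f"{base_url}/chat/completions",
        headers=headers,
        json={
            "model": model,
            "stream": True,
            "messages": [{"role": "user", "content": prompt}],
        },
        stream=True,
    )
    resp.raise_for_status()
    for line in resp.iter_lines():
        # OpenAI-style streams send server-sent events: "data: {json}"
        if not line or not line.startswith(b"data: "):
            continue
        payload = line[len(b"data: "):]
        if payload == b"[DONE]":
            break
        delta = json.loads(payload)["choices"][0].get("delta", {})
        yield delta.get("content") or ""
```

Usage would be something like `for token in stream_chat("https://ai.hackclub.com", "hi"): print(token, end="")`.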
I packaged this project and tinkered with GitHub Actions. Now you can install this using apt, dnf, rpm, pip, pipx, OR by running the installation script.
After some testing and stuff, I have come to the conclusion that my Markdown rendering method is not very efficient. Sometimes, in the middle of a long code output (surrounded by code blocks), the stream just pauses until the output is complete. See the image attached; it's completely frozen.
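My best guess at the bottleneck, sketched below: if the renderer re-parses and re-renders the entire Markdown buffer on every streamed chunk, the work grows with output length, and an unterminated code fence can't render cleanly until the closing fence arrives. The use of rich here is an assumption for illustration; the devlog doesn't pin down what mdllama actually renders with.

```python
# Sketch of a naive streaming Markdown renderer (rich is an assumed choice).
from rich.live import Live
from rich.markdown import Markdown

def stream_markdown(chunks):
    buffer = ""
    with Live(refresh_per_second=8) as live:
        for chunk in chunks:  # chunks arrive from the model stream
            buffer += chunk
            # Re-parsing the whole buffer on every chunk means each
            # refresh does more work than the last, so long outputs
            # make updates progressively slower; this is consistent
            # with the mid-code-block freeze in the screenshot.
            live.update(Markdown(buffer))
```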
Created installer and uninstaller scripts for this project! (Already tested on Debian 13/Trixie and Fedora 42, to be tested on Ubuntu)
Made a working version of the Ollama CLI. It's not very efficient though (ATM), using quite a lot of CPU power.
I feel like the BOM for this project is wayyyyyyyyy too expensive. I'm gonna abandon the hardware part of the project, and instead focus on an Ollama CLI, since I already have a prototype from a while ago.
Started experimenting with my custom Ollama client. It needs to remove all the formatting and stuff, and be as simple as possible; however, streaming must be supported.
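Here's a minimal sketch of the kind of client I mean, using Ollama's documented streaming HTTP API; the model name and default host are just example values.

```python
# Minimal streaming Ollama client sketch; model and host are examples.
import json
import requests

def ollama_stream(prompt, model="llama3", host="http://localhost:11434"):
    resp = requests.post(
        f"{host}/api/generate",
        json={"model": model, "prompt": prompt, "stream": True},
        stream=True,
    )
    resp.raise_for_status()
    # Ollama streams one JSON object per line until "done" is true.
    for line in resp.iter_lines():
        if not line:
            continue
        chunk = json.loads(line)
        print(chunk.get("response", ""), end="", flush=True)
        if chunk.get("done"):
            break
```

No formatting, just raw tokens printed as they arrive, which is exactly the "as simple as possible but streaming" goal.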
I did more research on single-board computers, and I ended up with an Orange Pi 5 Pro (8GB version with no eMMC). It is like 2x faster than the Raspberry Pi 5, somehow being more efficient at the same time. It is even the same price as the Raspberry Pi, at US$80 with 8GB of RAM. Sad that the shipping is like $13....
For this session, I updated the README to add more descriptive details for my project, and started JOURNAL.md, required by Highway!
I started the planning phase of my project. I decided to use the Orange Pi 5 Pro, with 8GB of RAM. Please see attachment for details :))