mdllama - Mark Up Your Ollama Experience

Used AI

33 devlogs
80h 2m
Ship certified
Created by QinCai

A CLI tool that lets you chat with Ollama and OpenAI models right from your terminal, with built-in Markdown rendering. TRY TYPING "MARKDOWN" INTO THE DEMO CHAT WINDOW!!

Timeline

QinCai
4h 36m 3 days ago

I'm stopping TUI dev for now. I don't have much expertise in TUI, soo... Anyways, I added web search support to mdllama run! See image for more.

Update attachment

Still tinkering with oterm and stuff. Finally CLOSE to working...

Update attachment

Still working on the TUI. Textual is so stupid!!

Update attachment

Tried making a TUI for mdllama, but it did not really work. Now I'm (basically) making a fork of oterm and just adding OpenAI functionality to it....

Update attachment

Ship 3

1 payout of 106.0 shells

QinCai

17 days ago

Covers 5 devlogs and 12h 50m

Published a new release on GitHub!

Update attachment

Fixed some real-time Markdown rendering issues. Now using rich.live.

Update attachment
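The idea behind rich.live is to repaint the rendered Markdown in place as tokens arrive, instead of printing raw text. A minimal sketch of that approach (this is an illustration, not mdllama's actual code; the function name and chunk source are made up):

```python
from rich.console import Console
from rich.live import Live
from rich.markdown import Markdown

def stream_markdown(chunks):
    """Render a token stream as Markdown, refreshing in place with Live."""
    console = Console()
    buffer = ""
    with Live(console=console, refresh_per_second=8) as live:
        for chunk in chunks:
            buffer += chunk
            # Re-render the accumulated text; Live repaints the region in place
            live.update(Markdown(buffer))
    return buffer

# Example: simulate a model emitting tokens
text = stream_markdown(["# Hel", "lo\n", "Some *mark", "down* here."])
```

Re-rendering the whole buffer on each update is simple but O(n) per token; capping `refresh_per_second` keeps the repaint cost bounded.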

Fixed some bugs and changed CTRL-C to interrupt the model output and CTRL-D to quit the program.

Update attachment

Updated the man page

Update attachment

Forgot to post devlogs for previous changes. But I have changed a lot of things and fixed some bugs that were introduced when I bumped to v3.0.0. Check my repo for details :))

Update attachment

Ship 2

1 payout of 394.0 shells

QinCai

21 days ago

Covers 16 devlogs and 40h 43m

Fixed the progress bar and other features that got removed in v3.0.0.

Update attachment

Updated the man page

Update attachment

Added man page for mdllama! Tested on Debian 13 and Fedora 42!!

Update attachment

Added a check-release command. See screenshot for details.

Update attachment
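A check-release command typically asks GitHub for the latest release tag and compares it to the installed version. A minimal sketch under those assumptions (the GitHub endpoint is real; the function names and comparison logic here are illustrative, not mdllama's actual implementation):

```python
import json
from urllib.request import urlopen

def latest_release_tag(repo):
    """Fetch the latest release tag from GitHub's public API,
    e.g. latest_release_tag("owner/repo")."""
    url = f"https://api.github.com/repos/{repo}/releases/latest"
    with urlopen(url) as resp:
        return json.load(resp)["tag_name"]

def is_newer(latest, installed):
    """Compare dotted version strings like 'v3.0.1' numerically,
    so 'v3.10.0' correctly beats 'v3.9.0'."""
    parse = lambda v: tuple(int(p) for p in v.lstrip("v").split("."))
    return parse(latest) > parse(installed)
```

Comparing parsed tuples instead of raw strings avoids the classic "3.10" < "3.9" lexicographic bug.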

TEST 2 this is stupid

Update attachment

Trying to make multi-version support work. Also started on a testing/beta version of mdllama. (This is supposed to be 6h 21m 12s, btw.)

Update attachment

Test devlog. time is broken i think

Update attachment

I think my time count is broken on SoM. This is a test.

Update attachment

Today, I made sure that mdllama works on macOS. It originally had some Permission denied errors when writing the config file. I also modularised the main mdllama.py, with help from GitHub Copilot, because I messed something up. However, it did not do its job well, forcing me to manually correct some Actions files.

Update attachment
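Permission denied errors on config writes often come down to writing into a directory that does not exist yet, or that the user cannot create at the top level. A sketch of the defensive pattern, assuming a per-user config location (the paths and function names are hypothetical, not mdllama's actual layout):

```python
import json
import os
import platform
from pathlib import Path

def config_path():
    """Pick a per-user config location: ~/Library on macOS, XDG on Linux."""
    if platform.system() == "Darwin":
        base = Path.home() / "Library" / "Application Support"
    else:
        base = Path(os.environ.get("XDG_CONFIG_HOME",
                                   Path.home() / ".config"))
    return base / "mdllama" / "config.json"

def save_config(data, path=None):
    """Create parent directories first; writing into a missing directory
    is a common source of 'Permission denied' / FileNotFoundError."""
    path = Path(path or config_path())
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(data))
    return path
```

Keeping everything under the user's home directory sidesteps the need for elevated permissions entirely.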

Some small bug fixes.

Update attachment

Fixed a few critical security issues. Big shoutout to @Devarsh.

Update attachment

I created a live demo here at https://mdllama-demo.qincai.xyz. The demo version is powered by ai.hackclub.com. I also fixed a few bugs and stuff. Check my repo for more.

Update attachment

Fixed the Fedora RPMs. Now they are working! It turned out to be a conflict with pip, since my package had the same name. On both Debian and Fedora, I renamed the package to python3-mdllama.

Update attachment

Made a few updates. Now it also works with OpenAI-compatible endpoints, including https://ai.hackclub.com. Unfortunately, during the process I broke the mdllama RPM, so in the meantime users have to use pip or pipx. Somehow the DEB package is still working. Interesting.....

Update attachment
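Supporting OpenAI-compatible endpoints mostly means the base URL and API key become configurable, while the /chat/completions request shape stays the same. A hedged sketch of building such a request with the standard library (the function name is made up, and whether a provider needs a /v1 prefix in the base URL varies):

```python
import json
from urllib.request import Request, urlopen

def chat_request(base_url, api_key, model, messages):
    """Build a POST request for any OpenAI-compatible /chat/completions
    endpoint; only the base URL and key differ between providers."""
    return Request(
        base_url.rstrip("/") + "/chat/completions",
        data=json.dumps({"model": model, "messages": messages}).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )

# Usage (not executed here; requires network access and a valid key):
# req = chat_request("https://api.openai.com/v1", key, "gpt-4o-mini",
#                    [{"role": "user", "content": "hi"}])
# with urlopen(req) as resp:
#     reply = json.load(resp)
```

Because the wire format is shared, the same client code can point at Ollama's OpenAI-compatible endpoint, ai.hackclub.com, or api.openai.com just by swapping `base_url`.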

I packaged this project and tinkered with GitHub Actions. Now you can install it using apt, dnf, rpm, pip, pipx, OR by running the installation script.

Update attachment

After some testing and stuff, I have come to the conclusion that my Markdown rendering method is not very efficient. Sometimes, in the middle of a long code output (surrounded by code blocks), the stream just pauses until the output is complete. See image attached; it's completely frozen.

Update attachment
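That freeze is what you get if the renderer holds back everything between an opening ``` fence and its close, so a long code block arrives as one burst. A toy reconstruction of the failure mode (illustrative only, not mdllama's actual code):

```python
def naive_stream(chunks, write):
    """Buffer everything from an opening ``` until the closing fence
    arrives, so long code blocks appear to freeze mid-stream."""
    buffer, in_code = "", False
    for chunk in chunks:
        buffer += chunk
        while "```" in buffer:
            before, _, buffer = buffer.partition("```")
            if in_code:
                write("```" + before + "```")  # whole block flushed at once
            elif before:
                write(before)
            in_code = not in_code
        if not in_code and buffer:
            write(buffer)                      # plain text streams freely
            buffer = ""
    if buffer:
        write(buffer)
```

Plain text flows chunk by chunk, but the code block only appears after its closing fence — exactly the mid-stream pause described above. Re-rendering partial blocks (as rich.live later allowed) avoids the stall.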

Ship 1

1 payout of 121.0 shells

QinCai

about 1 month ago

Covers 8 devlogs and 12h 26m

More testing and README updates! I added LOADS of screenshots to my README.

Update attachment

Created installer and uninstaller scripts for this project! (Already tested on Debian 13/Trixie and Fedora 42, to be tested on Ubuntu)

Update attachment

Made a working version of the Ollama CLI; it's not very efficient though (ATM), using quite a lot of CPU power.

Update attachment

I feel like the BOM for this project is wayyyyyyyyy too expensive. I'm gonna abandon the hardware part of the project, and instead focus on an Ollama CLI, since I already have a prototype from a while ago.

Update attachment

Started experimenting with my custom Ollama client. It needs to remove all the formatting and stuff, and be as simple as possible; however, streaming must be supported.

Update attachment
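Streaming from Ollama means reading its /api/generate endpoint, which replies with one JSON object per line, each carrying a "response" fragment and a final "done" flag. A minimal client sketch under those assumptions (the endpoint and fields are Ollama's documented API; the function names here are made up):

```python
import json
from urllib.request import Request, urlopen

def parse_stream(lines):
    """Turn Ollama's JSON-lines stream into plain text fragments."""
    for line in lines:
        part = json.loads(line)
        yield part.get("response", "")
        if part.get("done"):
            break

def stream_ollama(prompt, model="llama3", host="http://localhost:11434"):
    """Yield tokens from a running Ollama server (requires Ollama locally)."""
    req = Request(
        f"{host}/api/generate",
        data=json.dumps({"model": model, "prompt": prompt}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        yield from parse_stream(resp)
```

Splitting the line parser out keeps the streaming logic testable without a live server, and printing each yielded fragment as it arrives gives the no-formatting, streaming-first behaviour described above.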

I did more research on single-board computers, and I ended up with an Orange Pi 5 Pro (8GB version with no eMMC). It is like 2x faster than the Raspberry Pi 5, somehow while being more efficient at the same time. It is even the same price as the Raspberry Pi, at US$80 with 8GB RAM. Sad that the shipping is like $13....

Update attachment

For this session, I updated the README to add more descriptive details for my project, and started JOURNAL.md, required by Highway!

Update attachment

I started the planning phase of my project. I decided to use the Orange Pi 5 Pro, with 8GB of RAM. Please see attachment for details :))

Update attachment