June 16, 2025
what an amazing experience!!!! loved the scene
added all the answers to not-readme.md
Fixed the guest user bug, added a bunch of tests in the tests/ directory, and brought the README up to date
Probably the final devlog for now! Made some changes to the README for clarity and up-to-date info!
Short session, I know, but I completely removed the hidden website from the index, so the server has to send it!
I think I almost finished this :)).
Sector 65 of the SoM Grand Survey Exhibition. STUCK ON THE FIRST PAGE? TRY TYPING help ANYWHERE ON THE PAGE ;)
Short session, I know, but I fixed the auth to disable password and keyboard-interactive authentication.
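Just to sketch the idea (I'm assuming the server is built on asyncssh, which this log doesn't confirm): password and keyboard-interactive auth can be switched off by making the corresponding `*_auth_supported()` hooks return False.

```python
# Assumption: the server is built on asyncssh. If so, returning False from
# these hooks stops the server from ever offering password or
# keyboard-interactive auth to clients.
import asyncssh

class PokerSSHServer(asyncssh.SSHServer):
    def password_auth_supported(self):
        return False   # no password auth

    def kbdint_auth_supported(self):
        return False   # no keyboard-interactive auth

    def public_key_auth_supported(self):
        return True    # keep public-key auth on
```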
Added guest account notice
Now when a user connects as guest, they are automatically assigned to a guest account (guest1, guest2, guest3, etc). I also added resets for the wallet and history every time the guest disconnects.
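A rough sketch of that guest-slot idea; the names here (GuestPool, wallet.reset(), history.clear()) are made up for illustration, not the project's real API:

```python
# Illustrative only: GuestPool, wallet.reset() and history.clear() are
# hypothetical names, not the actual codebase.
class GuestPool:
    def __init__(self, size=3):
        self.free = [f"guest{i}" for i in range(1, size + 1)]  # guest1..guestN
        self.in_use = set()

    def acquire(self):
        """Hand out the next free guest slot, or None if all are busy."""
        if not self.free:
            return None
        name = self.free.pop(0)
        self.in_use.add(name)
        return name

    def release(self, name, wallet, history):
        """Called on disconnect: wipe per-guest state and free the slot."""
        wallet.reset()    # back to the starting balance
        history.clear()   # forget the guest's hand history
        self.in_use.discard(name)
        self.free.append(name)
```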
released v1.3.0
Added guest user support, without any auth (so no keyboard-interactive, SSH keys, and whatnot).
PLAN: allow multiple guest accounts: guest1, guest2, guest3, etc. Reset guest accounts if the guest is inactive.
Initial IG. Sadly I was too stupid to make this entire project myself, so I got a lot of help from Copilot Agent mode. Screenshot below is the auth page.
Free and open-source alternative to Banqer (https://banqer.co)
Not much, but modularised the 2 fat files, game.py and ssh_server.py. ig it's more maintainable now
Probably the last devlog for now. I fixed the OpenAI endpoint's streaming issue, which turned out to be server latency stuff. Also synced the README to PyPI.
I tried to integrate a Ride the Bus game into the poker SSH server, but it was too complicated so I gave up. However, all the known bugs in the original game are now fixed, including the issue where you get folded if you resize your terminal window. See screenshot below :))
Made the AI smarter :)) now they are more likely to call than just fold
I am trying very, very hard to actually make this game production-level, so I'm releasing v1.0.0.
Hack Club Nest is wayyy too unreliable, so I migrated the whole server infrastructure to AWS. I also changed all references from port 23456 to port 22.
Added a HELP button to display the SSH copy helper.
There is no proper tutorial on how to use it and the link to your 'demo-video' leads to an SSL error because the subdomain isn't even registered. Please fix that, deploy your demo video somewhere else, or just make your website work, it's up to you. Please fix and resubmit
https://github.com/poker-ssh/Poker-over-SSH/tree/main?tab=readme-ov-file#play-the-public-demo-fast
Added that part of the README ^ and deployed it to AWS. Also added a popup to the website.
YAY FINALLY GOT PUB/PRIV key auth to work!! see screenshot attached if you want to see more
Trust me, I really tried implementing the auth feature myself, but I couldn't, so I had to give the work to Copilot Agent :sob:. See https://github.com/poker-ssh/Poker-over-SSH/pull/40
I tried to make a feature we wanted to implement a long time ago: private/public key authorisation. This decreases the chance of someone impersonating others using ssh user@host. It's not really working now, though :((
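For context, per-user public-key validation can look roughly like this, assuming asyncssh (which I haven't confirmed from this log; the authorized_keys/<username> layout is also just illustrative, see PR #40 for what actually landed):

```python
# Assumption: asyncssh again; authorized_keys/<username> is an illustrative
# on-disk layout for per-user keys, not necessarily the real one.
import asyncssh

class KeyAuthServer(asyncssh.SSHServer):
    def public_key_auth_supported(self):
        return True

    def validate_public_key(self, username, key):
        """Accept only if the offered key is registered for this user."""
        try:
            allowed = asyncssh.read_public_key_list(f"authorized_keys/{username}")
        except OSError:
            return False  # no keys on file for this username
        offered = key.export_public_key()
        return any(k.export_public_key() == offered for k in allowed)
```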
Released version 0.16.0 (https://github.com/poker-ssh/Poker-over-SSH/releases/tag/0.16.0) and fixed the database bug (I think and hope). Deployed it, so we are in a testing phase.
Made changes to the MOTD section to add legal information (a disclaimer) and other stuff:
╭───────────────────────────────────────────────────────────────────╮
│ Poker over SSH IS PROVIDED 'AS-IS', with ABSOLUTELY NO WARRANTY, │
│ to the extent permitted by applicable law. │
╰───────────────────────────────────────────────────────────────────╯
Copyleft (LGPL-2.1), Poker over SSH and contributors
By continuing to interact with this game server, you agree to the terms of the LGPL-2.1 license (https://www.gnu.org/licenses/old-licenses/lgpl-2.1.html) or see the project's LICENSE file.
GitHub Repository: https://github.com/poker-ssh/Poker-over-SSH
Please file bug reports: https://github.com/poker-ssh/Poker-over-SSH/issues
I think I fixed some bugs with the SSH server. Also added a health check, integrated into the server. We worked more on the website, and added a status page (raw HTML, CSS, JS) @ https://poker.qincai.xyz/status.html, and UptimeKuma @ https://poker-status.qincai.xyz/status/pos-nest. Attached is the UptimeKuma monitor.
Added commands tgc and togglecards to toggle card visibility. Also added Docker support, so people can run the server themselves.
Fixed some OpenAI proxy issues. Somehow the streaming is not working, but that's work for another day, ig.
Made AI smarter (using my OpenAI proxy), fixed a few bugs, added a few features, and started working on the website!
More testing & fine adjustments. Also continued working on my OpenAI proxy for demo purposes.
Added many commands such as wallet and server. Also fixed some logic bugs, and added support for a persistent wallet.
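A hypothetical sketch of how a persistent wallet could be backed by sqlite3; the table and column names are invented for illustration, not taken from the real code:

```python
# Hypothetical persistence layer: table/column names invented for illustration.
import sqlite3

def open_wallet_db(path="wallets.db"):
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS wallets (username TEXT PRIMARY KEY, balance INTEGER)"
    )
    return conn

def get_balance(conn, username, default=1000):
    row = conn.execute(
        "SELECT balance FROM wallets WHERE username = ?", (username,)
    ).fetchone()
    return row[0] if row else default  # new players start with the default stack

def set_balance(conn, username, balance):
    conn.execute(
        "INSERT INTO wallets (username, balance) VALUES (?, ?) "
        "ON CONFLICT(username) DO UPDATE SET balance = excluded.balance",
        (username, balance),
    )
    conn.commit()
```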
Deployed the stable server to Hack Club Nest. Fixed CTRL-C and CTRL-D, window resizes, and a bunch of other bugs.
Run ssh <username>@play.pos.qincai.xyz -p 23456 to connect!
Guess what, i guess its working now!
Play multiplayer Poker through SSH, with AI and wallet support. MAKE SURE YOU HAVE an SSH keypair BEFORE connecting to the game server (see README or website for details). This is a team project built with @DuckyBoi_XD. SEE ALSO https://summer.hackclub.com/projects/11911
Took inspiration from OpenWebUI, and now it uses a proper (ahem....) web search function. Oh, and I also started working on the LLM proxy, which I plan to include in my demo (just so ppl vote for this project).
Fixed github.com/qincai-rui/packages for the mdllama packages repos.
Added more functionality to the web search. Fixed a few old bugs.
I take what I said last week back. Now we are doing swirl!
I'm stopping with TUI dev for now. I don't have much expertise in TUIs, soo... Anyway, I added websearch support to mdllama run! See image for more.
Still tinkering with oterm and stuff. Finally CLOSE to working...
First and final devlog. Finished the website and submitted!
The website I made for Boba and Swirl at a workshop recently.
Still working on the TUI. textual is so stupid!!
Tried making a TUI for mdllama, but it did not really work. Now I'm just (basically) making a fork of oterm and adding OpenAI functionality to it....
Published a new release on GitHub!
Fixed some Markdown real-time rendering issues. Now using rich.live.
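For reference, a minimal example of the rich.live approach: re-render the accumulated text as Markdown inside a Live display as chunks stream in (the chunk source below is just a stand-in):

```python
# Re-render the accumulated text as Markdown each time a chunk arrives.
from rich.live import Live
from rich.markdown import Markdown

def render_stream(chunks):
    buffer = ""
    with Live(refresh_per_second=8) as live:
        for chunk in chunks:
            buffer += chunk
            live.update(Markdown(buffer))

# stand-in for a real token stream
render_stream(["# Hello\n\n", "streaming ", "*markdown*..."])
```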
Fixed some bugs and changed CTRL-C to interrupt the model output and CTRL-D to quit the program.
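That Ctrl-C / Ctrl-D behaviour basically boils down to catching KeyboardInterrupt and EOFError; a minimal sketch (stream_response is a placeholder, not mdllama's actual function):

```python
# stream_response is a placeholder for whatever yields model output tokens.
def repl(stream_response):
    while True:
        try:
            prompt = input(">>> ")
        except EOFError:               # Ctrl-D: quit the program
            print("\nbye!")
            break
        try:
            for token in stream_response(prompt):
                print(token, end="", flush=True)
            print()
        except KeyboardInterrupt:      # Ctrl-C: stop this response, keep the REPL
            print("\n[interrupted]")
```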
Some small bug fixes for systemd stuff. Also, I have corrupted my microSD card AGAIN.
Updated the man page
Forgot to post devlogs for previous changes. But I have changed a lot of things and fixed some bugs that were introduced when I bumped to v3.0.0. Check my repo for details :))
Fixed the progress bar and other features that got removed in v3.0.0.
Updated the man page
Added a man page for mdllama! Tested on Debian 13 and Fedora 42!!
Added a check-release command. See screenshot for details.
TEST 2 this is stupid
Trying to make multi-version support work. Also started on a testing/beta version of mdllama. (This is supposed to be 6h 21m 12s, btw.)
Test devlog. Time is broken, I think.
I think my time count is broken on SoM. This is a test.
Today, I made sure that mdllama works with macOS. It originally had some Permission denied errors when writing the config file. I also modularised the main mdllama.py, with help from GitHub Copilot, because I messed something up. However, it did not do its job well, forcing me to manually correct some Actions files.
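The usual fix for that kind of Permission denied error is to write the config under the user's home directory and create the folder first; a sketch, assuming a ~/.config/mdllama path (which may not be what mdllama actually uses):

```python
# Assumption: ~/.config/mdllama/config.json; the real path may differ.
import json
import os

CONFIG_DIR = os.path.expanduser("~/.config/mdllama")
CONFIG_FILE = os.path.join(CONFIG_DIR, "config.json")

def save_config(config):
    os.makedirs(CONFIG_DIR, exist_ok=True)  # create the directory if it's missing
    with open(CONFIG_FILE, "w") as f:
        json.dump(config, f, indent=2)
```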
Some small bug fixes.
Fixed a few critical security issues. Big shoutout to @Devarsh.
I created a live demo here at https://mdllama-demo.qincai.xyz. The demo version is powered by ai.hackclub.com. I also fixed a few bugs and stuff. Check my repo for more.
Fixed the Fedora RPMs. Now they are working! It turned out to be a conflict with pip, since my package had the same name. Now on both Debian and Fedora, I have renamed the package to python3-mdllama.
Made a few updates. Now it can also work with OpenAI-compatible endpoints, including https://ai.hackclub.com. Unfortunately, during the process I broke the mdllama RPM, so in the meantime, users have to use pip or pipx. Somehow the DEB package is still working. Interesting.....
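Talking to an OpenAI-compatible endpoint like https://ai.hackclub.com usually just means pointing the client at a different base URL; a hedged sketch using the openai package (the base URL, path, and model name here are assumptions, and mdllama's real client code may look different):

```python
# Assumptions: the openai package, the base URL, and the model name below are
# all placeholders; mdllama's real client code may look different.
from openai import OpenAI

client = OpenAI(base_url="https://ai.hackclub.com", api_key="not-needed")

stream = client.chat.completions.create(
    model="some-model",  # placeholder
    messages=[{"role": "user", "content": "hello!"}],
    stream=True,
)
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="", flush=True)
```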
I packaged this project and tinkered with GitHub Actions. Now you can install this using apt, dnf, rpm, pip, pipx, OR by running the installation script.
After some testing and stuff, I have come to the conclusion that my Markdown rendering method is not very efficient. Sometimes, in the middle of a long code output (surrounded by code blocks), the stream just pauses until the output is complete. See image attached; it's completely frozen.
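One guess at why it freezes: a partial buffer that ends inside an unterminated code fence may not render until the closing fence arrives. A possible workaround sketch (a guess at the cause, not a confirmed fix):

```python
# Guess at a workaround: if the buffer ends inside an open code fence,
# temporarily close it so the partial code block keeps rendering while streaming.
FENCE = "`" * 3  # a literal triple-backtick

def renderable_markdown(buffer):
    if buffer.count(FENCE) % 2 == 1:   # odd fence count => one fence is still open
        return buffer + "\n" + FENCE   # close it just for display
    return buffer
```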
Since I have one microSD card lying around, I quickly flashed a new image and set up the card. Why? Because I need to test that my program is easily usable and reproducible (is that a word??)
Somehow I couldn't get Ethernet Gadget mode to work... See image attached. Nothing showed up on the host.
Installed Ansible on my pi02w, cos, well, why not?
Trying to put this inside Docker, cos why not? Better isolation and stuff.
More testing and README updates! I added LOADS of screenshots to my README.
Created installer and uninstaller scripts for this project! (Already tested on Debian 13/Trixie and Fedora 42, to be tested on Ubuntu)
Made a working version of the Ollama CLI. It's not very efficient though (ATM), using quite a lot of CPU power.
Developed some tests in the terminal!! including tinkering with systemd..
I feel like the BOM for this project is wayyyyyyyyy too expensive. I'm gonna abandon the hardware part of the project, and instead focus on an Ollama CLI, since I already have a prototype from a while ago.
This session I basically worked on the operating system side of things. I tried to fix the shutdown mechanism and ended up using another OLED SSD1306 display I happened to have around. Now it's working :))
In this session, I added shutdown support for the project, so I don't have to just unplug the power cable, which corrupted my git tree last time. I also modified the systemd service so that it keeps running even if git exits with an error code (when there is no internet, for example).
I made some test scripts to make sure they are working. Turns out it's pretty good right now, except for the low framerate while timing, which I will fix soon. systemd is working, which means the program runs on boot!
Started experimenting with my custom Ollama client. It needs to remove all the formatting and stuff, and be as simple as possible; however, streaming must be supported.
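A minimal streaming Ollama client can be surprisingly small; here's a sketch against the local HTTP API (POST /api/generate streams newline-delimited JSON with a "response" chunk per line; the model name and host are placeholders):

```python
# POST /api/generate streams newline-delimited JSON; each line carries a
# "response" chunk. Model name and host are placeholders.
import json
import requests

def stream_ollama(prompt, model="llama3", host="http://localhost:11434"):
    resp = requests.post(
        f"{host}/api/generate",
        json={"model": model, "prompt": prompt, "stream": True},
        stream=True,
    )
    for line in resp.iter_lines():
        if line:
            yield json.loads(line).get("response", "")

for token in stream_ollama("Why is the sky blue?"):
    print(token, end="", flush=True)
print()
```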
I think I just fixed the issue where my solve results disappear upon a reboot. Needs to be tested (not now though)
I did more research on single-board computers, and I ended up with an Orange Pi 5 Pro (8GB version with no eMMC). It is like 2x faster than the Raspberry Pi 5, while somehow being more efficient at the same time. It is even the same price as the Raspberry Pi, at US$80, with 8GB RAM. Sad that the shipping is like $13....
Finally made (most) things work. TYSM GitHub Copilot, it did most of the things by fixing the code and finding libraries. The display update is quite laggy tho.. Also I corrupted my Git tree somehow (on the pi02w-cube), which means I probably should add a power button to my project, but that's for another day ig.
Finally got a test script to run!!! For some reason, the st7789 library was not working as it should, so I switched to luma.lcd.
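For reference, the luma.lcd setup looks roughly like this; the SPI port, GPIO pins, and panel size are placeholders and depend on the actual wiring:

```python
# Placeholder pins and panel size; adjust to the actual wiring.
from luma.core.interface.serial import spi
from luma.core.render import canvas
from luma.lcd.device import st7789

serial = spi(port=0, device=0, gpio_DC=25, gpio_RST=27)
device = st7789(serial, width=240, height=240)

with canvas(device) as draw:
    draw.text((10, 10), "hello from luma.lcd", fill="white")
```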
Finally finished downloading the image. Now I am using rpi-imager to burn the image to the microSD card for the Pi.
I have also modified the firstrun.sh script to set up Ethernet gadget mode since I do not have a monitor.
For this session, I updated the README to add more descriptive details for my project, and started JOURNAL.md, required by Highway!
In this session, I worked on tidying up the repo and developed some tests. I have also soldered the pins of my Pi02W, so I could try my code out on the smarter Pi later today :))). But that means I will have to update the code.....
So now I am downloading the image for the Pi02W!! This will take a long time.
I started the planning phase of my project. I decided to use the Orange Pi 5 Pro, with 8GB of RAM. Please see attachment for details :))
YAY finally got it to work. I just fixed a few bugs that I hadn't fixed yesterday. I recorded a video on my project in action, so check it out!! :))
A CLI tool that lets you chat with Ollama and OpenAI models right from your terminal, with built-in Markdown rendering and web search. PLEASE READ THE README's demo sections BEFORE VOTING!! TYSM!!
Today I fixed many bugs introduced in yesterday's features. Writing that sentence makes me think of "it's a feature, not a bug". Anyways, as I said, I fixed some bugs like text clipping and some other logic errors. I also worked on the README and JOURNAL.md, just because I felt like it. You can see the journal here.
Fixed the timer control function, or attempted to. Copilot could not help me with that somehow, so I was on my own. Also a QOL improvement -- turning the prompt red when the sensor has been held long enough.
Finally got home, so I guess it's time to test the new code on my Pico!!
Tried to fix many bugs, including the backlight control that worked a long time ago.
The most stupid part was getting the version number to display; I had to manually position the text. Good thing GitHub Copilot finally gave me a working function for it, after feeding it the manual version.
Backlight control now works perfectly. SAVE SOME ELECTRICITY :))
Updated the README and uploaded the software to work on the Pi Pico!
A Raspberry Pi (Zero 2)-powered Rubik's Cube timer. This is a continuation of my Highway project, PiCubePico (which was based on the Raspberry Pi Pico). THE DEMO VIDEO does LOAD. JUST HANG ON FOR A MOMENT. (It's slow.)
This was widely regarded as a great move by everyone.