June 18, 2025
some final fixes:
- allowed customizing rolldown's io settings
- started bundling to temporary files in dev instead of base64 urls, and allowed customizing their location (eg if you made a library external and need to run functions from the same directory)
- allowed sending back a custom Response, even a streaming one
- improved the dev and (example) prod servers
- worked around a bug where necessary types weren't being bundled
- allowed using (fully typed!) environment variables
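a rough sketch of what returning a custom streaming Response can look like (the handler name and signature here are made up, not this project's real api):

```typescript
// hypothetical handler returning a streaming Response (fetch-style API)
function handler(): Response {
  const encoder = new TextEncoder();
  const stream = new ReadableStream<Uint8Array>({
    start(controller) {
      // chunks could be written over time; here they're enqueued up front
      controller.enqueue(encoder.encode("hello, "));
      controller.enqueue(encoder.encode("world"));
      controller.close();
    },
  });
  return new Response(stream, { headers: { "content-type": "text/plain" } });
}
```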
more fixes:
- i made sure that themes are reapplied even after refresh (eg if you send a chat message)
- i updated web to actually work with most pages by adding a rewriter
- i fixed some linting issues
- i switched to r2 instead of storing the assets within the val since r2 allows storing >80kb files
- i tweaked some copy
i've lost 99% of my motivation by now so i hope this version ships
i've decided to ship Tangent as a computer. this means dropping any TODO apps and adding mock data or implementations when needed. i'm not sure if i'll continue with the computer framework for future projects, but i'm glad i made it; it was satisfying to make, i learned a lot, it's decent code for llms to learn from, and i filed a few bug reports along the way.
you can now filter by architecture if you'd like to do that
settings pane! i switched the old switches/selects to button groups because they look better. they internally use radio buttons which, surprisingly, also work for booleans. i also added a ram slider: this uses a logarithmic scale to make it comfortable to go from the minimum of 1gb to the maximum of 2,256gb (although we track gpu setups with as much as 20,480gb total ram).
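the logarithmic mapping a slider like that might use (the 1gb/2,256gb bounds are from above; the function names are illustrative):

```typescript
const MIN_GB = 1;
const MAX_GB = 2256;

// t is the slider position in [0, 1]; returns GB on a log scale,
// so each step multiplies rather than adds
function sliderToGb(t: number): number {
  return MIN_GB * Math.pow(MAX_GB / MIN_GB, t);
}

// inverse: GB back to a slider position in [0, 1]
function gbToSlider(gb: number): number {
  return Math.log(gb / MIN_GB) / Math.log(MAX_GB / MIN_GB);
}
```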
i added some more gpus and improved data processing. menial but important work. bigger changes soon.
on the lium side: i added manual support for some new GPUs (lium is crazy cheap, like an A4000 for $0.08/hour!)
and for vast, i added more from the infinite selection of gpus, like the 5000Ada (a powerful Ada Lovelace-based card) and the B200 (surprisingly, blackwells are on vast), and improved the ram check to use a unified function: entry.gpu_ram >= gib * 1024 * 0.9
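wrapped up as a function, the unified check looks roughly like this (the interface name is made up; gpu_ram is in MiB per the expression above, and the 0.9 tolerates providers reporting slightly under nominal capacity):

```typescript
interface Offer {
  gpu_ram: number; // MiB, as reported by the provider
}

// an offer passes if its reported GPU RAM is at least 90% of the requested size
function hasEnoughRam(entry: Offer, gib: number): boolean {
  return entry.gpu_ram >= gib * 1024 * 0.9;
}
```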
and now that the core work is done, i worked a little on the documentation: updated the package.json to include some relevant fields and added example usage to the readme. the example for the tryReads uses a TransformStream, which i think is neat.
when you make something in a day, the last parts usually end up a bit unfinished. this happened with tinyentities, where stream decoding took 2.5x the time of entities. but note the past tense: i fixed that by not using regex (as much) and now it's neck and neck!
i've added something that would be useful if you need to stream in entities - it basically lets you keep trying to read something that looks like an entity until you figure out whether it is (and you can emit it as its encoding) or it isn't (and you can emit it as text)
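the core decision can be sketched like this (not the library's real api - the names and the tiny stand-in map are made up): given the text buffered so far after an "&", report whether it could still become an entity, definitely is one, or definitely isn't.

```typescript
const KNOWN = new Set(["amp;", "lt;", "gt;", "copy;"]); // tiny stand-in map

type TryResult = "incomplete" | "entity" | "text";

function tryReadEntity(buffered: string): TryResult {
  if (KNOWN.has(buffered)) return "entity"; // emit as its encoding
  // if any known entity starts with the buffer, we can't decide yet
  for (const name of KNOWN) {
    if (name.startsWith(buffered)) return "incomplete";
  }
  return "text"; // no entity can match; emit the buffer as plain text
}
```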
i updated the benchmarks to separate init and runtime and from there i just got optimizing. i was able to speed up data unpacking, switch how the map used for encoding purposes is stored for speed, convert xml encoding into one call for speed, switch to regexes more optimized for my purposes (more importantly faster), and fix multiple bugs multiple times... streaming parser next i guess
i set up some benchmarks (well it was a group effort with ai). now we can group functions into:
so room for growth
and now decoding. decoding can be more complicated once i add streaming, but i've gotten a REALLY simple implementation that works going. (also optimized/restructured map.ts a little to add support for decoding and better tree shaking)
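in the spirit of that really simple implementation, a non-streaming decoder can be a single regex replace (the map here is a tiny stand-in, not the generated one, and real entity names are case-sensitive, which this sketch ignores):

```typescript
const NAMED: Record<string, string> = { amp: "&", lt: "<", gt: ">", quot: '"' };

function decodeEntities(input: string): string {
  return input.replace(/&(?:#x([0-9a-f]+)|#(\d+)|([a-z]+));/gi, (m, hex, dec, name) => {
    if (hex) return String.fromCodePoint(parseInt(hex, 16)); // &#x41; -> A
    if (dec) return String.fromCodePoint(parseInt(dec, 10)); // &#66;  -> B
    return NAMED[name.toLowerCase()] ?? m; // unknown names pass through untouched
  });
}
```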
i set up the encoding functions. details:
- i read through code in entities and dom-serializer to figure out the services i need to provide at the end of the day
- i implemented the lighter escapeHTML, escapeHTMLAttribute, escapeXML, and escapeXMLAttribute, which escape just enough to not have problems
- then i implemented the more complex encodeHTML and encodeXML, which encode almost everything, with the former even encoding punctuation and multi-character entities when possible.
- i also signalled to bundlers that the mapping can be dropped if unused by wrapping the process of loading it in a pure IIFE (the (() => { code })() things)
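two of the details above, sketched (the bodies and data are illustrative stand-ins, not the real implementations):

```typescript
// escape-just-enough for text content: only the characters that can
// actually break out of the context
const TEXT_ESCAPES: Record<string, string> = { "&": "&amp;", "<": "&lt;", ">": "&gt;" };

function escapeHTML(s: string): string {
  return s.replace(/[&<>]/g, (c) => TEXT_ESCAPES[c]);
}

// bundlers treat /* @__PURE__ */ calls as droppable when the result is
// unused, so the map-loading cost disappears if nothing imports it
const entityMap = /* @__PURE__ */ (() => {
  const map = new Map<string, string>();
  map.set("amp", "&"); // real code would unpack the packed data here
  return map;
})(); // would be exported in the real module
```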
i offset the bundle size increase a little: since any entity that exists without a semicolon also exists with a semicolon, i can include only the semicolonless version and imply the semicoloned one. 7742 bytes -> 7520 bytes
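the lookup side of that optimization, sketched with made-up stand-in data (in the real map, most names exist only with a semicolon, while a legacy few exist both ways):

```typescript
const WITH_SEMI_ONLY = new Set(["alpha", "star"]); // only "&alpha;" is valid
const SEMICOLONLESS = new Set(["amp", "lt"]);      // "&amp" and "&amp;" both valid

function isKnownEntity(name: string): boolean {
  if (name.endsWith(";")) {
    // the semicoloned form is implied for every semicolonless entry
    const bare = name.slice(0, -1);
    return WITH_SEMI_ONLY.has(bare) || SEMICOLONLESS.has(bare);
  }
  return SEMICOLONLESS.has(name);
}
```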
i'm now actually generating the map. the map is a bit bigger now so it can include special cases like &amp (note how there's no ;, so my script packs that as !).
i got the initial version of the mapper going. the idea is to convert the 145.8kB entities.json into a highly optimized map by making some optimizations (for example, we assume there's always an increment of one between codepoints) and restructuring (like separating each first-character level with a newline and each second-character level with the > character). it's only 7.1kB gzipped (half the size of a naive mapper) and should work for both encoding and decoding.
i set up the project. i'm not used to making libraries - i usually make web apps - but i'm using tsdown for compilation/transpilation, which i hope will make things easy.
You know HTML entities - like how &gt; is the greater-than sign, and how &copy; is ©? Most JS libraries to encode/decode them are very bloated. I'm making a more lightweight one.
i added support for more gpus from vast and salad. (i probably shouldn't be hardcoding this, but true standardization is better than fake standardization...)
i added lium (a bittensor/crypto-based gpu provider), and since the list was getting a little long, i also implemented a grid view. isn't it glorious?
i realized i wanted to add sf compute, but they only provide gpus in clusters, so i implemented support for clusters from prime intellect, vast ai, deepinfra, salad, and sf compute. (prime intellect's clusters are typically not the cheapest)
so about that data... i've added some for salad. i also updated the ui a little (added filtering, for more useful comparisons against prime intellect).
the ui was in fact next. back to data.
worked on data extraction, the ui's next i guess?
and now vast ai.
i'm now fetching prime intellect's reference prices.
foundations down.
There are many sites where you can rent GPUs. Prime Intellect tries to aggregate them, but it doesn't always have the best price - after all, it doesn't have all the GPU providers of the world - so this app tracks when that happens.
i started with auth, and i'm returning to auth! students at my current school district can now log in to open School (Tangent's version of the home page) which currently just has a grades panel based on some old code. soon: send your auth to the backend to verify yourself and gain a jwt
i've known from the start that i wanted to base tangent around storage. i've made a first step towards it: a storage object synchronized across the apps. the whole object comes down, but only incremental changes go up for efficiency's sake. i also had to use a syncing variable to prevent infinite loops
i refined windowing: tweaked the colors used, made the hot corner more reliable (and added using alt as another option), made windows maximal (overlay the essentials instead of adding a chonky bar), and implemented hover tooltips for windows. it's almost time to start building the actual apps
i think i burned out or something. i didn't really feel the motivation to work on it this weekend, or much energy at all. i thought reflecting on the decisions that led me here might help - and it did, actually: i decided that i need to be a bit more local-first and that my current implementation was flawed. this should give me more space to build tomorrow without worrying about the specifics or the foundation.
trying something new, documenting architectural decisions, for this project. it was originally just to get credit for time spent thinking, but having to explain things to yourself is clarifying my thoughts in the moment and will help me in the future. that's also one of the things i hope tangent will help people do.
This was widely regarded as a great move by everyone.