toad

30 devlogs
62h 10m
Ship certified
Created by ingobeans

A CLI web browser written entirely in Rust, by hand!

Timeline

FINALLY ive made webpages load async! toad has actually had dynamically loading assets like images and stylesheets for a REALLY long time, but webpages still paused the rest of the program while loading. this is now fixed!!!!! it was surprisingly easy to implement, seeing as i already had the infrastructure from the dynamic asset loading. i didnt really implement anything new, just a new asset type for the asset loading system to handle. when you open a page now, instead of loading and parsing the website and then opening that in a new tab, it will instead open a blank tab and start a thread that loads and parses the website, adding it to the fetch queue along with the ID of the blank tab. then, when the thread finishes, it finds that placeholder tab by its ID and replaces its contents with the loaded page.
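roughly, the flow looks something like this (just a sketch with made-up types, and an mpsc channel standing in for the actual fetch queue):

```rust
use std::sync::mpsc;
use std::thread;

// stand-in types, not toad's real ones
struct Webpage { url: String }

struct Tab {
    id: u64,
    page: Option<Webpage>, // None while the placeholder is still loading
}

fn load_and_parse(url: &str) -> Webpage {
    // network fetch + HTML/CSS parsing would happen here
    Webpage { url: url.to_string() }
}

// open a blank placeholder tab and load the page on a background thread
fn open_page_async(
    url: String,
    next_id: &mut u64,
    tabs: &mut Vec<Tab>,
    tx: &mpsc::Sender<(u64, Webpage)>,
) {
    let id = *next_id;
    *next_id += 1;
    tabs.push(Tab { id, page: None });

    let tx = tx.clone();
    thread::spawn(move || {
        let page = load_and_parse(&url);
        let _ = tx.send((id, page)); // tagged with the placeholder's ID
    });
}

// in the update loop: swap finished pages into their placeholder tabs
fn poll_finished(rx: &mpsc::Receiver<(u64, Webpage)>, tabs: &mut Vec<Tab>) {
    for (id, page) in rx.try_iter() {
        if let Some(tab) = tabs.iter_mut().find(|t| t.id == id) {
            tab.page = Some(page);
        }
    }
}
```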
This honestly makes me so happy to have finished, i always thought it would be really difficult to implement, but it really wasnt, and it makes the browser experience 10x better imo

Update attachment

i finally made list items begin with a little dot (or a number index, depending on the list type). the funny thing is the code for this has been in the project since devlog #15, i just hadnt made it render, i had only added the draw context property. so basically the elements were sending the info that they should be drawn with a specific prefix, but it wasnt actually handled.
it still took some time adding the rendering, since i needed to figure out exactly what should get the prefix. first i just made every child node of a list get the prefix, but then list items consisting of multiple elements, like the TOAD homepage has (for the green words in the list items), would get multiple dots. in the end i just made it so <li> child elements get an extra fake child rendered, being the dot.

Update attachment
Earned sticker

added BACK / FORWARD BUTTONS to the top bar!! yipeee! this also included making links open in the same tab. see, previously, links would always open in new tabs, but now they only open in new tabs if control is held when clicked. i also had to restructure the way the main program accesses its tabs: previously, the tabs were directly stored as webpages, but now i have a distinct Tab class that holds its own history of webpages.
Each tab doesnt just store its history, but also its future: see, when you go back in history, you still want to be able to return forwards. the way i did this is just to have two history buffers, one for history, and one for future. When you go back in history, the latest item in the history buffer gets moved to the future buffer, and vice-versa. The page thats actually displayed is just the last item in the history buffer. havent added keyboard shortcuts to go back and forward in history though, only the buttons. this is funny since the feedback i got from this project's first ship was requesting keyboard shortcuts for exactly that. will add after i finish writing this :3
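in code, the two-buffer idea is roughly this (a minimal sketch, with Webpage as a stand-in type):

```rust
struct Webpage { url: String }

struct Tab {
    history: Vec<Webpage>, // last item = the page currently displayed
    future: Vec<Webpage>,  // pages you can go forward to again
}

impl Tab {
    fn current(&self) -> Option<&Webpage> {
        self.history.last()
    }
    fn open(&mut self, page: Webpage) {
        // navigating somewhere new throws away the forward history
        self.future.clear();
        self.history.push(page);
    }
    fn back(&mut self) {
        // keep at least one page in history so something stays on screen
        if self.history.len() > 1 {
            if let Some(page) = self.history.pop() {
                self.future.push(page);
            }
        }
    }
    fn forward(&mut self) {
        if let Some(page) = self.future.pop() {
            self.history.push(page);
        }
    }
}
```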
Thanks for reading! i love toad

Ive redesigned and reworked large parts of the actual address bar! I felt like the address bar hadnt been receiving enough love, seeing as it literally hadnt been updated since i first added it in devlog #4.... when this project had like 6 hours. it also hadnt even been updated to use the new graphics system (with buffers, implemented in devlog #15). I started by migrating the topbar rendering to the buffer system: instead of printing directly to the terminal, it writes to the screen buffer, which gives me more optimization capabilities.
Then i just had a bit of fun redesigning it. The forward and back buttons dont work yet, as tabs dont store history info, but ill work on this next devlog. Everything else is functional. I also had to make the tough decision of making the topbar 3 rows high, rather than just 2, for visual clarity. this does make TOAD a bit less compact, but i think its worth it (it looks really good imo).
Anyways, will work on polishing tabs next!

Update attachment

I noticed that on my laptop, white backgrounds would show as a dark blue. I figured out that this is due to the terminal replacing the standard white color with its own themed color. Basically, when dealing with terminal colors, you can either use a handful of standard colors, like white, blue, or black, or you can directly use any RGB color. If you use a standard color, most terminals will use their own shade of the color, according to their theme. However, if you use an RGB value, it will generally be respected. So I just replaced all standard colors with RGB values, which did solve the issue!
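With crossterm (the terminal library i mentioned in the very first devlog), the difference is roughly this (just an illustration, not toad's actual drawing code):

```rust
use crossterm::style::{Color, Stylize};

fn main() {
    // standard color: the terminal substitutes its own themed shade,
    // which is what turned "white" into dark blue on my laptop
    println!("{}", "themed white".with(Color::White));

    // literal RGB value: generally rendered as-is, regardless of the theme
    println!("{}", "real white".with(Color::Rgb { r: 255, g: 255, b: 255 }));
}
```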

Update attachment

FIRST DEVLOG SINCE SHIP! a little recap on what this project is: TOAD is a web browser written in Rust, that you use in your terminal. It's completely written by hand, even manually implementing things like HTML and CSS parsing!
This devlog i've added a custom implementation for receiving line input. Basically, when i wanted to query the user for a line of input (like when you click the address bar), i used some standard-library function to do so. This tells the terminal to read exactly one line of text and then submit that. Problem though is that different terminals handle this differently, and it also doesnt give me much customization ability. I got a github issue from a very cool person who noticed that on their terminal, moving the mouse while in this line input would cause the terminal to get spammed with weird characters. This was easy enough to fix, but it really just made me realize that I want a system for getting input thats completely universal, by making it myself! It didnt take that long to do (ive written manual input handlers for two other projects before lol), and not only did it fix that bug, but i can also do cool stuff like positioning the input area on an input box, i.e., when you press an input box, like the search bar on duckduckgo, the text you type will actually show up inside it. Seems simple but it wasnt like that before! Before it would just show at the top of the screen.
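the core of a hand-rolled input loop with crossterm's raw events looks roughly like this (just a sketch, not the actual handler, and without the cursor positioning part):

```rust
use crossterm::event::{read, Event, KeyCode};

// read a line of input character by character from raw terminal events
fn read_line() -> std::io::Result<String> {
    let mut buf = String::new();
    loop {
        match read()? {
            Event::Key(key) => match key.code {
                KeyCode::Enter => break,
                KeyCode::Backspace => { buf.pop(); }
                KeyCode::Char(c) => buf.push(c),
                _ => {}
            },
            // mouse and resize events are simply ignored instead of
            // leaking garbage characters into the input
            _ => {}
        }
    }
    Ok(buf)
}
```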
Anyways!!!!! next up, i really want to improve the looks of the browser itself. theres a ton of amazing TUI (terminal user interface) apps that look great that i want to take inspiration from, and now with the freedom of a custom input handler, i can make it a lot cooler!!!

Update attachment

Ship 1

1 payout of 1133.0 shells

ingobeans

12 days ago

Covers 24 devlogs and 55h 1m

I fixed the tab bar truncation: basically, when you open many tabs on a small screen, i need to limit the amount of space each tab is allowed to draw on, except for the selected tab, which should always be drawn at full width. This wasnt too hard to implement, just a bit tedious, as i have to do it twice, once for drawing, and once for detecting when the mouse presses a tab. Ive also had to take a break from the project for a week, since i partook in the brackeys game jam :>
Other than that, i also started using a rust input library for typing in input boxes and the address bar, as it gives me more control, and lets me e.g. prefill the address bar with https://.
Honestly this project is mostly finished, but since i wont be able to ship until my other project is done being certified and voted on, ig ill keep polishing this project until then.

Update attachment

Ive been working on improving the rendering of specific webpages, in this case DuckDuckGo and Wikipedia, which are the two websites ive used a lot for testing. Getting both of them to work properly was sort of the goal of the project, or rather getting a search engine and Wikipedia working. I noticed a lot of malformed text on some wikipedia pages, which i traced to these HTML tables. Adding a quick fix for them wasnt really hard, since they, like most HTML elements, fall either in the category of div-like or span-like. A div is a container that is drawn on a new line, while a span is a container that is drawn in-line. In my code, most elements just inherit from one of these two, with the only change being the name.

I also implemented a fix for a DuckDuckGo visual bug: previously the Submit button would be rendered inside the first search result, which looked terrible. This was because it was a child of a container with a fixed height that was too short to fit the submit button. My fix was simply making no element have a height thats shorter than its content's (i.e. its children's) height. This was kind of a hard decision, since it goes against how real browsers handle it. I figured though, that since this prevents elements from overlapping, it would be a net positive for TOAD, as it would make sites more readable, even if it meant straying away from the standards.

I also further added mouse support, namely making the tab bar and address bar interactable with the mouse. I also made closing all tabs close the program, when previously it would simply display the last rendered page indefinitely, until you closed the program or opened a new tab, which was kind of odd. This is also more like a real browser.
Honestly I kind of want to ship this soon, especially since the brackeys game jam starts in like 10 hours, which I definitely want to partake in, and also devlog in SoM, so finishing this project up before that would be nice, but I'm not sure its polished enough. We'll see :)

Update attachment

Add a toad logo to homescreen (very important!!!!)
This has been the most eventful devlog so far, I drew a toad and put it on the homepage. I hope users of TOAD will feel loved by the toad, as they should.

Update attachment

[VIDEO SHOWCASE]
I added MOUSE SUPPORT!!!!! (pretty standard for a browser lol)
Getting mouse events working was surprisingly easy; the library I use for communicating with the terminal made it easy to enable mouse event capture. The difficult part was mapping out where all the interactable elements were. I solved this by literally having the same buffer used for drawing also track, for every pixel, whether it is part of an interactable element. So whenever we draw something to the screen buffer, the modified pixels are also set to the index of whatever interactable element the thing we drew was part of (if any). Then, on mouse move, its as simple as querying the screen buffer to check what interactable index is at the mouse cursor. Works like a charm!
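the idea in code, very roughly (illustrative types, not the real buffer):

```rust
struct Cell {
    ch: char,
    // color/styling fields omitted
}

struct ScreenBuffer {
    width: usize,
    cells: Vec<Cell>,
    // same layout as `cells`: index of the interactable element drawn at each cell
    interactable: Vec<Option<usize>>,
}

impl ScreenBuffer {
    fn set(&mut self, x: usize, y: usize, ch: char, interactable_id: Option<usize>) {
        let i = y * self.width + x;
        self.cells[i].ch = ch;
        self.interactable[i] = interactable_id;
    }

    // on a mouse event, just look up which element sits under the cursor
    fn hit_test(&self, x: usize, y: usize) -> Option<usize> {
        self.interactable[y * self.width + x]
    }
}
```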
You can see that the UI on duckduckgo is slightly bugged at the top. Maybe I'll fix it later, but from my debugging it seems difficult, as I'd have to implement a lot of new styling, like positioning and background images. I'm scared that adding positioning, if it isnt perfect, would actually break websites that work now, since elements might get overlaid in an unreadable manner. And adding background images would require the draw context to be heap allocated, which would mean I'd have to rewrite A LOT of code, since I could no longer cheaply copy it.
But we'll see, maybe I can do something :)

JustZvan 23 days ago
mouse is for noob add vim mode

I noticed some weird error logs of images not being able to load, which appeared to have really strange URL sources. Well, turns out HTML lets you pass raw base64 data directly as an image source.
I just made a little hook in my asset loading function that checks if the target data type is an image; if so, it checks whether the URL scheme is data, and then handles the base64 parsing and sidesteps the default behaviour of sending a network request.
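conceptually the hook is something like this (a sketch assuming the base64 crate; the real function does more):

```rust
use base64::Engine;

// returns decoded image bytes for data: URLs, or None to fall through
// to the normal network request path
fn load_inline_image(url: &str) -> Option<Vec<u8>> {
    // e.g. data:image/png;base64,iVBORw0KGgo...
    if let Some(rest) = url.strip_prefix("data:") {
        let (_mime, payload) = rest.split_once(";base64,")?;
        return base64::engine::general_purpose::STANDARD.decode(payload).ok();
    }
    None
}
```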

Update attachment

DUCKDUCKGO IS SORT OF WORKING !! Everything still looks really buggy, but ive implemented HTML forms now! On a website, sending some query data or a form to a server can either be done with Javascript, or with a <form>. I'm not adding JS support, but I figured that implementing forms wouldnt be that hard, and it would make search engines usable! (and maybe some password logins).
It took a really long time, there was a lot to figure out, especially how to store the state of all inputs on the current webpage and tie them to a parent form, but I just pushed through and now it (very barely) works! You can see in the GIF that I can actually use duckduckgo.com, which is a really cool feeling. A lot of styling is messed up though, so I'll have to get around to fixing that. I also had to add <meta> tag redirects, which is a way to redirect a user to another website without JS. Duckduckgo by default has a very Javascript-heavy homepage, something that my browser cant render, but it will redirect all users with Javascript disabled to a JS-free version of the website, which uses forms. This redirection is done with one of those meta tags.
I also tried a couple different search engines like google, bing and startpage, but none of them were as compatible as duckduckgo.
Anyways, like I said, I better fix the styling issues of duckduckgo, as it has a lot of random empty space, and weirdly formatted text.

Update attachment

I noticed this very URL was really slow in my browser, and realized it was probably due to all the images. While the program cached the images themselves, they still had to be resized (scaled down) each frame. This was pretty slow on a page like this, where my profile picture is drawn many times. Each time its drawn, its the same image, with the same target width and height, so i decided to add another cache, specifically for resized images. When an image wants to be drawn with a specific width and height, it first checks the resized_images_cache to see if an image from that source URL, with that width and height, has already been cached; if so, that cached image is pulled. Otherwise we do the scaling operation and cache the result. Like I said, the cache has to keep track of not only the image's source URL, but also its width and height, since the same image can be included multiple times on the same website with different sizes. This caching system fixed the lag on this page, yey.

Interestingly enough, this new code forced me, for the first time since I started learning Rust, to use a Cow. A Cow, at least in the way I used it, is basically a type in Rust that, for any given type, lets call it T, either holds a reference to a T or an owned value of T. This is useful in my case because, if we want to draw an image from the cache, its going to be through a reference, but if it has to be resized now, we will have direct ownership of the object. With a Cow, i can have a single variable that could be either of these states, and I can pass it as a regular reference without changing any other existing code.
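the Cow part looks roughly like this in code (made-up Image type, and the cache insert for the owned case happens elsewhere):

```rust
use std::borrow::Cow;
use std::collections::HashMap;

// stand-in for whatever image type is actually used
#[derive(Clone)]
struct Image { width: u32, height: u32 /* pixel data omitted */ }

fn resize(_src: &Image, width: u32, height: u32) -> Image {
    Image { width, height }
}

// keyed by (source URL, target width, target height), since the same image
// can appear at several sizes on one page
type ResizedCache = HashMap<(String, u32, u32), Image>;

fn resized<'a>(
    cache: &'a ResizedCache,
    url: &str,
    original: &Image,
    width: u32,
    height: u32,
) -> Cow<'a, Image> {
    let key = (url.to_string(), width, height);
    match cache.get(&key) {
        // already resized once: just borrow the cached copy
        Some(img) => Cow::Borrowed(img),
        // first time at this size: resize now and own the result
        // (the caller can insert it into the cache afterwards)
        None => Cow::Owned(resize(original, width, height)),
    }
}
```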

Update attachment

[SEE VIDEO]
I finalllllyyyyyy basically fixed the flickering that was sooo pervasive in basically all prior versions. Just optimizing the drawing simply wasnt enough, I had to really restructure all the screen rendering. I decided to look at how other TUI apps draw the screen to avoid flickering. I use a Rust TUI editor called Helix, which is FOSS, so I inspected its code. It has a screen buffer which consists of a 2D array of Cells. Each cell just has a defined character, color, and styling (such as italics). Then on each render, we compare the new screen buffer to the last, so as to only draw the changes. I implemented a very similar system, including some edge cases which their code was designed to handle (its fine to copy code like this, its what FOSS is for, and besides, their implementation was a fork of another rendering library!). Anyways, it looks great now! Another huge benefit of this system, almost greater than the diffing, is that its always drawn top to bottom. This makes it so even in cases where it would have flashed, like previous versions did, it just looks like the lines at the bottom of the screen are loading in. This new rendering system is such a huuuge improvement!!!!! Im really happy with it! Like just look at that video comparison, wowww!!!
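the cell/diff idea boils down to something like this (field names are mine, not Helix's or toad's):

```rust
#[derive(Clone, PartialEq)]
struct Cell {
    ch: char,
    fg: (u8, u8, u8),
    bg: (u8, u8, u8),
    bold: bool,
    italic: bool,
}

struct Buffer {
    width: usize,
    cells: Vec<Cell>,
}

impl Buffer {
    // collect only the cells that changed since the previous frame,
    // in top-to-bottom, left-to-right order
    fn diff<'a>(&'a self, prev: &Buffer) -> Vec<(usize, usize, &'a Cell)> {
        let mut changes = Vec::new();
        for (i, (new, old)) in self.cells.iter().zip(&prev.cells).enumerate() {
            if new != old {
                changes.push((i % self.width, i / self.width, new));
            }
        }
        changes
    }
}
```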

When I fixed the text rendering last time, one change I made actually broke something else. Basically, I was fixing the way text's newlines got rendered, because previously they would simply originate from the element's X, but they actually shouldve originated from the parent's X, otherwise text would frequently be drawn offscreen. I fixed this, but realized later that this logic was only implemented for the drawing part of the program, not the layout part, so other elements still expected newlines to originate from the element's X, causing a mismatch and multiple texts being drawn on top of each other; see the Bugged header in the attachment, compared to the fixed version. I also noticed a bug, especially on wikipedia, where words in a paragraph are often replaced with links, that sometimes the spaces after them would be missing. Normally, I disrespect whitespace on most elements, i.e. remove repeated, trailing and leading whitespace. But Ive now added an exception for when the previous element was also drawn on the same row, to the left, to allow specifically leading whitespace.
Next up, I'll continue just testing out different pages to find bugs and fixes. :)

Update attachment

I FINALLY fixed one of the most glaring (text rendering) issues of the program, which was these 0-width paragraphs that, since they were 0 width, required line breaks after each character, causing them to extend vertically downwards. This made most text on Wikipedia unreadable. I said last devlog that I wanted to just focus in on making Wikipedia render well, just adjusting everything and adding all the features that I need to, and because of this I've also added CSS descendant selectors (or whatever they are called), e.g. when you apply a style to all <p> elements inside <div>s, with div p { RULES }. I also had to fix a bug in my implementation of my dynamic asset caching system, as I noticed wikipedia's stylesheets werent loading: basically, in the part that cached the assets, the URL had its special characters decoded, but in the part where the assets were requested, the URLs didnt have their special characters decoded. This took SOOO long to figure out, because I was getting no errors, I could see the assets being cached successfully, and when I manually tried applying the CSS, it worked too, but now Ive finally fixed that. I also fixed some other text rendering bugs, and now wikipedia looks nearly presentable!! I know about one big (but probably easy to fix) text rendering bug, actually visible in the attachment (if youre keen-eyed), but other than that, things are getting kind of good lmao. Its really cool going to some websites I was using earlier in development, and seeing them actually render properly, images, CSS, and all.
For next steps in development, other than that one bug i spoke of, I think I just have to start trying to render a bunch of different pages and fix all the million bugs that will arise lol

Update attachment

Another small devlog, but I decided to actually create a nice little homepage that isnt just to test the rendering engine lol
It's cool having the project far enough along the line that designing the home page is just writing a regular HTML file and having it look exactly like I'd expect it to! The page has both internal and inline CSS, uses divs and lists, and has nested spans for the colored words, and im pretty impressed with how well it works on this simple example, despite the fact that this browser is terrible!!! I decided to just put some keybinds on it that tell the user how to actually use the program, which I figured would be helpful haha. I also noticed that this entire time, ever since I added image rendering, I've had a PNG of a hämis from the videogame Noita included in the program's source code. I had used that as a test image and simply forgot to remove it. But now ive removed it, and im sorry to all noita players who will take offense at that. Anyways, my summer break ends today (:sob:), so I'll try to keep up with devlogs but IDK how that will go, but I really want Wikipedia to look good, and it would be cool if this very page would render properly, so I kind of want to just really dig in to those websites and just add features and adjust settings until they look right. Just so I could have something to show for all this.

Update attachment

Very minor devlog here, I just added a scrollbar and fixed a really specific bug with the dynamic asset loading system thats been on my mind since i first implemented it. The scrollbar was really easy to add, since the elements' draw code already returns some draw data, which includes content height. This was previously only used in recursion, to position elements properly, but I could just use it at the root level, to get the height of the entire webpage. I then added a check such that if the content height is larger than the screen, a scrollbar is displayed. The scrollbar is just a single █ character that has its Y relative to how much youve scrolled. I think the scrollbar does a lot to make pages feel more responsive, and easier to navigate, even though its a tiny change. The asset loading bug I mentioned is something that I knew about when I first added the system, but put off for later. Basically, whenever an asset (like an image) finishes loading, I refresh the current page. This is however wrong, since it should refresh the page the asset belongs to. I implemented an identification system for webpages, so that each page gets a unique int, that I can use to specify, and search for, a specific page. Gonna work on text displaying now, ive noticed a bug where text renders in a container that appears to be 1 column wide, making the text extend vertically downwards.. hmm, will have to fix that.

Update attachment

[SEE VIDEO FOR SHOWCASE]
rendering images was surprisingly not that hard; loading images, on the other hand, took a lot longer. Basically, the infrastructure required to draw images to the screen was pretty much already in place. Its roughly the same as drawing a rect, but instead of going row by row drawing solid colored pixels, I draw pixels from an image. I also had already written a rust terminal image drawing library, so that helped lol.

BUT, what was really the challenge of this devlog was the system of dynamically loading assets. Basically, when you visit a webpage, your browser shouldnt wait for ALL the images and external stylesheets to load before continuing; rather, it displays what is already loaded, sends requests for the other assets, and when they finish, the page updates. This system is a bit more complex, as you need to understand a bit of async programming and data structures. I implemented this by having a buffer of items to be fetched get filled while I parse a page: whenever the parser hits an element that links to an external asset, that gets added to the queue. Then, we take that entire queue, and for each item, create an async thread, basically a function running in parallel, thats in charge of fetching that specific item. Then i have a global table of these threads and their respective URLs, and in the update loop, I check if any has finished; if so, I add it to an asset cache hashmap and update the page. Any future requests for that item will go to the asset cache directly. Pretty cool! I also had to make the update loop run at least once per second, rather than only running on a key event, to make assets load whenever possible.

I added support not just for image loading, but also stylesheets: see, many pages have the CSS file, i.e. the styling code, in a separate file, and link to that. Now TOAD has support for it!!! yey! Its cool because this lets me really see that the asset loading is working, because I initially see the page flash without its styling, then a second later it refreshes and is styled, and on reloads, its styled right away. Right, for images, I also had to downscale them so they fit in the terminal, which makes them all render as pixel art pretty much.

Furthermore, ive added caching of a website's draw calls! This became a necessity after I added external stylesheets, as wikipedia's CSS was so complex that it made the page slow down really badly whenever a redraw was required. Instead, I only actually redraw when something has changed, and cache all the draw calls generated. Then, for scrolling, I can just reuse the same draw calls and only change which ones are shown (based on scroll).
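the whole queue / thread-table / cache pipeline is roughly this shape (a condensed sketch with made-up types, using std threads and a channel):

```rust
use std::collections::HashMap;
use std::sync::mpsc;
use std::thread;

// stand-in asset representation
enum Asset {
    Image(Vec<u8>),
    Stylesheet(String),
}

struct AssetLoader {
    cache: HashMap<String, Asset>, // finished assets, keyed by URL
    pending: Vec<String>,          // URLs that already have a thread in flight
    tx: mpsc::Sender<(String, Asset)>,
    rx: mpsc::Receiver<(String, Asset)>,
}

impl AssetLoader {
    // called while parsing: queue anything not already cached or in flight
    fn request(&mut self, url: &str) {
        if self.cache.contains_key(url) || self.pending.iter().any(|u| u == url) {
            return;
        }
        self.pending.push(url.to_string());
        let tx = self.tx.clone();
        let url = url.to_string();
        thread::spawn(move || {
            let asset = fetch(&url); // network request + decoding
            let _ = tx.send((url, asset));
        });
    }

    // called in the update loop: move finished assets into the cache,
    // returning whether the owning page should be refreshed
    fn poll(&mut self) -> bool {
        let mut changed = false;
        for (url, asset) in self.rx.try_iter() {
            self.pending.retain(|u| u != &url);
            self.cache.insert(url, asset);
            changed = true;
        }
        changed
    }
}

fn fetch(_url: &str) -> Asset {
    // real code would issue an HTTP request and decode the body
    Asset::Stylesheet(String::new())
}
```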
I want to focus on making pages read better now, and knowing which elements are e.g. part of the sidebar, and which should actually be displayed as content. Take wikipedia for instance: to reach the actual article, you have to scroll through like 4 pages of junk data. It would be cool to have it render properly.

Devlog 11! After all this time, this browser is actually finally able to browse the internet by ✨clicking links✨
WOO! It didnt actually take very long at all to implement, ive just been lazy up to this point (actually it relies on a lot of infrastructure ive set up previously). One thing I didn't think about is how child items of links are affected: basically, initially I had every child of a link register itself to my array of InteractableElements, but I realized that a link can have many children, which should all count as a single link, rather than 10 unique ones. To circumvent this I just added a counter to the global state, which I use to make all the children of a link share a single unique identifier. Then I can group interactable elements by identifier. I also added support for more HTML character encodings, which I mentioned last devlog. Since parsing them is quite complex, I had to make a conscious choice on how to parse them. At first I considered writing my own parser that iterates through the text as a buffer of characters, but I also considered using a regex library. I really didnt know which option would be faster, so I did both and benchmarked them. So while this devlog is logged as roughly 1 hour, I spent several more in a temporary project writing both these parsers and benchmarking them. It turns out, my parser was roughly 3x faster on small inputs, while on large files, the regex one was like 80x faster. I decided to go with the regex one, since most html files are large enough to make it worth it. Other than that, I fixed one of the biggest issues of my HTML parser, so it now parses a lot more sites, though I noticed that ive still got some glaring alignment issues with text that I need to fix. I also want to add IMAGES!!! I actually have made a terminal image renderer library before, so I'm just gonna use that lol. Also I would definitely need to add a max size for images to be rendered, which would be like 9x9 pixels or something lol, so not very practically useful. Though perhaps I could downscale images? We'll see!

Update attachment

In the process of optimizing the renderer in Devlog #8, I also really screwed up the rendering, which this devlog was mostly about correcting. Inline text, for one, was COMPLETELY broken, either rendering wayyy offscreen, or overwriting the previous line. Thankfully I could basically check git history to see what I had changed, to find where the problems had arisen. One thing that took a really long time to realize was that I had forgotten to make some text elements have the width of their text, so they instead defaulted to 0... yikes. I also added some more CSS features, like class/id selectors with element requirements. Basically, in CSS, you can apply a rule to a specific class with .classname { rules }, but you can also make that rule only apply to elements of a specific type with that class, with element.classname { rules }. I also added support for some HTML special character encodings, like &amp;, which is just an encoding for &. Ive not yet implemented all types of HTML character encodings, like special unicode characters via &#nnnn;, where nnnn is a decimal unicode character code.
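the named-entity part is basically just a mapping; the only gotcha is that &amp; has to be decoded last, otherwise text like &amp;lt; would wrongly end up as <. a tiny sketch (not necessarily how toad does it, and only covering a few entities):

```rust
// decode a handful of named HTML entities
fn decode_entities(text: &str) -> String {
    text.replace("&lt;", "<")
        .replace("&gt;", ">")
        .replace("&quot;", "\"")
        // &amp; must come last, otherwise "&amp;lt;" would get decoded twice
        .replace("&amp;", "&")
}
```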
Anyways, I really want to make links work next, so that webpages can actually be navigated.

Update attachment

Scrolling was at least easy to add! I just added a scroll_y property for each webpage, and I compare all draw calls with scroll_y to determine if they should be shown or hidden. I also offset all draw calls by the scroll amount, and I had to add extra logic for draw calls that draw something multiple lines in height, since they often are on the border of being scrolled to. However, as you can see in the video, scrolling still has its issues, namely flashing. The flashing you see is something ive written about in two previous devlogs i think, and is caused by premature flushes to stdout: basically, too much data is sent to the terminal output in a single frame, so rather than draw it all at once, in a single frame, it does it over several frames. It looks really bad. The only way to really get rid of it is to optimize your graphics code. I'm considering storing the terminal state, like all characters printed to the screen and their color, in a matrix, and whenever we draw, we compare it to the last frame's buffer and only update the modified regions. Its going to be really tedious to implement though. Maybe I'll look if theres any TUI library that seems really good, but honestly I want to keep this project lightweight and homemade lol :-)
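the culling itself is just an offset plus an overlap check, roughly (illustrative; the real draw calls carry a lot more data):

```rust
struct DrawCall { y: i32, height: i32 /* text, colors, ... omitted */ }

// shift every call by the scroll amount and keep only the ones that
// still overlap the screen, including tall ones that start above the
// top edge but hang down into view
fn visible_calls(calls: &[DrawCall], scroll_y: i32, screen_height: i32) -> Vec<(i32, &DrawCall)> {
    calls
        .iter()
        .filter_map(|call| {
            let y = call.y - scroll_y;
            if y + call.height > 0 && y < screen_height {
                Some((y, call))
            } else {
                None
            }
        })
        .collect()
}
```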

I HAVE SPENT COUNTLESS HOURS FIGHTING TO MAKE MY PROGRAM'S DRAW FUNCTION O(n). If you dont know, Big O notation is basically a measure of how efficient an algorithm is, comparing how many inputs it receives to how many iterations are required to generate a result. Well, yesterday, I noticed my program took several minutes to draw a site from wikipedia, and understood that it probably was due to all the recursion I was doing. Basically, elements can have their width be set to fit-content, which will make their size be just enough to fit all their children. The way I was calculating this was, whenever I encountered a fit-content, to recursively run a dry draw call on its children, basically emulating a real draw call, but only to measure how much space is needed. This however quickly snowballed into millions of layers of recursion, as, if a child also had fit-content, it would have to run its own dry draw call. Point is, it was a mess.

Problem though, its a really hard problem to solve, because children of an element can have their size be relative to their parent's. This creates this weird situation where parents have their size relative to their children, and some of their children have their size relative to the parent. No matter which direction you iterate from, theres always gonna be unknown sizes until the draw call is entirely complete. The way I solved this, to have it all run in a single draw call, was to implement a global table of unknown-sized elements: basically, whenever we dont know the size of an element, an entry gets added to the table saying that this value isnt yet known, but will become known later. If it has any children that reference its size, i store that as a reference to an entry in the unknown-sized-elements table. At the end of the draw call, we can replace all unknown references with proper values.

This took me soooooo long to get working, and while SoM claims this devlog has 6 hours of unlogged time, Ive spent SO MUCH time just visualizing this system on a whiteboard, and staring blankly at my screen trying to solve some problem. But, it was worth it in my opinion: now, we can reduce the number of iterations required to draw that specific wikipedia page from over SIX MILLION to roughly sixteen thousand. Each element is only iterated on a single time, finally. Also, it is by NO means done, optimized or working in any way, but rather now I have the infrastructure to actually make something that wont be held back by having to do tens of thousands of times more work than it needs to. PS: I still havent implemented scrolling or interactive links.
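stripped down to the core idea, the unknown-size table works something like this (names made up, the real thing is tangled into the rest of the layout code):

```rust
// a size is either already known, or a reference into the "unknowns" table
#[derive(Clone, Copy)]
enum Size {
    Known(u16),
    Pending(usize), // index into the unknowns table
}

#[derive(Default)]
struct Unknowns {
    resolved: Vec<Option<u16>>,
}

impl Unknowns {
    // called when we hit a fit-content element whose size we cant know yet
    fn reserve(&mut self) -> Size {
        self.resolved.push(None);
        Size::Pending(self.resolved.len() - 1)
    }
    // called once the element's children have all been laid out
    fn fill(&mut self, entry: usize, value: u16) {
        self.resolved[entry] = Some(value);
    }
    // after the single draw pass, every pending size can be turned into a real one
    fn resolve(&self, size: Size) -> u16 {
        match size {
            Size::Known(v) => v,
            Size::Pending(i) => self.resolved[i].expect("filled by end of draw"),
        }
    }
}
```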

Update attachment

One problem the previous versions of Toad had was that there wasnt really a way to draw text without a set background-color. In a terminal, when you print something, that is going to have a background color and a foreground color. There is no way to tell it to use a transparent background color and use whatever is underneath it. So if you have some text that doesnt have a background color, covering say a colored div, the text would have to overwrite the color where it was overlaid. Now I've restructured the drawing, such that all draw instructions instead send DrawCalls, that I can later handle with the context of all the rest of the draw calls, which lets me do what you see in the screenshot. The first text is partially covering the div that contains it. In previous versions, the text would either have overwritten the green of the div, making it white, or itself been green entirely, sticking out weirdly. This new version, however, shows the intended result.
I want to add padding and positioning next. Oh right, and also scrolling, that should be easy now that I have a draw queue! I can just filter out everything above and below a certain Y threshold!!!

Update attachment

Devlog 6! So apparently CSS properties have a secret setting, where they either innately inherit to their children, or not. An example is the color property, which children inherit, while the background-color property doesnt. This both means that for every property I add, I need to also figure out whether it inherits or not, but also that I had to restructure a lot of code. See, before I was just using an Option value for each property, which is an enum that either has a value or doesnt. I used this to mean that the property has either been set, or that it is unset and should inherit from its parent. But now, for these special properties which dont inherit, I instead need to use a new data type I made, which I call a NonInheritedField. This is instead an enum which is either Unset, Specified, or Inherited. If you're wondering why it has an Inherited variant, its because in CSS you can specifically mark a property as inherit, causing it to behave like an inherited property. This is just so my code knows the difference between "this field is empty, and therefore must inherit from its parent" and "this field is empty, and will therefore use the type's default value". Anyways, you can now see in the screenshot that its doing a pretty good job replicating the test page. The red box size difference is because I havent found a good conversion rate between CSS pixel units and terminal character units. The test was made specifically for this devlog, as it tests this inheritance: all the texts are children of the green div, yet as you can see, they dont have a green background themselves. The blue text's background_color value is Specified, while the others are Unset.
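the type itself is basically just an enum, something like this (a sketch; the helper method is made up):

```rust
enum NonInheritedField<T> {
    // never set: fall back to the property's default value
    Unset,
    // explicitly set to a value in CSS
    Specified(T),
    // explicitly marked `inherit`: behave like an inherited property
    Inherited,
}

impl<T: Clone + Default> NonInheritedField<T> {
    fn resolve(&self, parent_value: &T) -> T {
        match self {
            NonInheritedField::Specified(v) => v.clone(),
            NonInheritedField::Inherited => parent_value.clone(),
            NonInheritedField::Unset => T::default(),
        }
    }
}
```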

Update attachment

I ran into some issues while getting proper network requests working, but all in all it went pretty smoothly. I had an issue that took me forever to find the cause of, where the parser would basically crash if the page ended with a newline, which my test HTML files didnt have, so I thought it was something network related. I also added CSS width and height properties, which, paired with background-color, makes webpages actually have a background now, as you can see in the picture! Now, what I really want to add is scrolling; right now it just always prints the entire webpage. If the content is larger than your terminal? It just doesnt show. This also makes it unperformant, as all the data makes the stdout flush prematurely like I talked about in a previous devlog, which makes the webpage flash as it renders (basically, if you print too much data, it will force a flush, even if you want to wait with flushing until everything is complete).

Update attachment

Ive added CSS stylesheets! I didnt have to write a lot of new logic, as the parser for inline CSS is still used for stylesheets, just that a wrapper divides the stylesheet up by selectors. Selector types I have support for are element type, class and ID. I actually havent added support for multiple selectors on the same ruleset yet, like when you write h1, p {}, which applies to both <h1>s and <p>s. It shouldnt be that hard though, I think ill just be lazy and have it clone an entry in the global style hashmap for every selector, such that it really just evaluates to one stylesheet per selector. On the image you can also see I've added tabs! Though so far the only tabs you can access are the two hardcoded ones I added for testing lol, so after adding multiple selectors I'll probably add an address bar to go to a page by URL.

Update attachment

Parsing basic CSS is at least a lot easier than parsing basic HTML, lol. I've added inline CSS (like when you do <p style="color: green">), with the only properties being color, background-color, display and text-align. The reason why (basic) CSS is easy to parse is that each rule ends with a semicolon, and each rule is just a key and a value. The harder part is parsing the values: like, did you know you can define CSS colors with hex, like #ffffff, with RGB like rgb(255,255,255), and with a color name, like white? That means I have to add support for all those formats, but that wasnt all too hard (though I only have the first 16 web colors). I want to add non-inline CSS next, which affects the entire webpage. I think it should be kind of simple though, as I already have a global draw context that I could add a field to for CSS, probably a hashmap where the key is which element it affects.
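parsing the three formats boils down to something like this (a sketch, with only a few of the named colors shown):

```rust
// parse a CSS color value into an (r, g, b) triple
fn parse_color(value: &str) -> Option<(u8, u8, u8)> {
    let value = value.trim();
    // #rrggbb hex form
    if let Some(hex) = value.strip_prefix('#') {
        if hex.len() == 6 {
            let r = u8::from_str_radix(&hex[0..2], 16).ok()?;
            let g = u8::from_str_radix(&hex[2..4], 16).ok()?;
            let b = u8::from_str_radix(&hex[4..6], 16).ok()?;
            return Some((r, g, b));
        }
        return None;
    }
    // rgb(r, g, b) form
    if let Some(inner) = value.strip_prefix("rgb(").and_then(|v| v.strip_suffix(')')) {
        let mut parts = inner.split(',').map(|p| p.trim().parse::<u8>());
        let r = parts.next()?.ok()?;
        let g = parts.next()?.ok()?;
        let b = parts.next()?.ok()?;
        return Some((r, g, b));
    }
    // named colors (only a few of the basic web colors shown here)
    match value {
        "white" => Some((255, 255, 255)),
        "black" => Some((0, 0, 0)),
        "red" => Some((255, 0, 0)),
        "green" => Some((0, 128, 0)),
        "blue" => Some((0, 0, 255)),
        _ => None,
    }
}
```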

Update attachment

The biggest challenge with my webpage renderer right now has been keeping track of the draw state. Basically, a header in my program will make its text bold and red. This is done by modifying the draw state that is passed to its children, and then I apply that draw state whenever I print something. The problem though is that, for performance reasons, I have to only apply the changes in the draw state, and this is because when you color and restyle the terminal, you're actually sending invisible magic bytes of data called ANSI escape codes. If you send too much, like if you always apply all properties of the draw state, like setting italics, bold and foreground color even if they havent changed, you'll be sending wayyy too many bytes to the stdout. In most terminals, there is a limit to how much data can be received before a flush is called (a flush is when the terminal actually draws its data to the screen). This will cause premature draws, basically splitting every draw frame into several smaller draws, which causes the terminal to flash briefly, which can be really distracting. Ive made some TUI (terminal user interface) apps before and have first-hand experience with this :(
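the "only apply what changed" idea looks roughly like this (a sketch with raw ANSI codes; the real draw state has more fields):

```rust
#[derive(Clone, PartialEq)]
struct DrawState {
    fg: (u8, u8, u8),
    bold: bool,
}

// emit escape codes only for the fields that differ from the previous state
fn apply_changes(out: &mut String, prev: &DrawState, next: &DrawState) {
    if next.fg != prev.fg {
        let (r, g, b) = next.fg;
        out.push_str(&format!("\x1b[38;2;{r};{g};{b}m")); // truecolor foreground
    }
    if next.bold != prev.bold {
        out.push_str(if next.bold { "\x1b[1m" } else { "\x1b[22m" });
    }
}
```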
The draw state also keeps track of whether whitespace should be respected or not. In HTML, if you make a basic <p> tag with some text, it will remove repeated spaces and remove all newlines. Some elements, like <pre>, change this, which is why theyre used to display things like code blocks.
Anyways, I think i'll work on input next, like interactable links and buttons perhaps :)

Update attachment

FIRST DEVLOG! Toad is going to be a web browser that runs in your terminal. Making an entire web browser is really hard though, which is why there are only like 3 browser engines used today (chrome-likes, firefox-likes and safari-likes), so don't expect this to be able to view 100% of websites perfectly, it will be more like 0.2% of websites okay. Thats because HTML and CSS are really old and therefore have a lot of niche old remnants and quirks. I also dont think I'll implement Javascript, which on its own will make a lot of websites not work. BUT, this will at least be a fun learning project, and as long as I get Wikipedia or google searches working, I'll be happy. So far, I've created a basic HTML parsing system. Basically, a system that takes HTML website code and deserializes it to a native data structure in my program that it can easily understand. On the image attached, to the right, is the output after my program reads an HTML file and then prints out its representation of the data. It still looks like regular HTML because I made it format it like that, but it shows that it can accurately interpret simple HTML!
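the data structure is essentially a tree of nodes, something like this (a guess at the shape, not the exact types):

```rust
// roughly what parsed HTML deserializes into: a tree of nodes
enum Node {
    Element {
        tag: String,                        // e.g. "p", "div", "h1"
        attributes: Vec<(String, String)>,  // e.g. ("style", "color: green")
        children: Vec<Node>,
    },
    Text(String),
}
```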
Next up I think I'll work on displaying webpages, like drawing to the terminal window. I'll be using crossterm I think, which is a library for terminal interaction for Rust that ive used before, so I think it will go fine!

Update attachment