
3DRenderer

Used AI · 9 devlogs · 19h 13m · Ship certified
Created by Laeth English

CPU-based 3D renderer in C. All the math and algorithms are written from scratch to implement a complete render pipeline without any help from a GPU. See the second-to-last devlog for a fuller description.

NOTICE TO REVIEWERS! Please follow the instructions in the release notes for macOS: you must allow the app in Privacy & Security settings and THEN run it AGAIN from the terminal, passing the object model as a command-line parameter!

Timeline

See above devlog for a nice description!

Fixed distribution. Added a release for Linux (you're welcome, reviewers! Please, see my project!) and made it... actually run on Linux. There was a wacky bug where an escape key event (SDLK_ESCAPE) was erroneously sent every frame. You should now be able to try it out if you're on Linux.


Ship 1


Laeth English · 20 days ago · Covers 8 devlogs and 18h 41m

FINAL VERSION!!!

Lots of time between this devlog and the ones before; I went on vacation for a while and didn't have an opportunity to work. Came back and discovered that I barely had enough steam to wrap up the project. Don't worry, I'll work on some more projects later :)

So what is 3DRenderer? Simple: it's a small, basic 3D rendering pipeline written in C to run on the CPU. 3D rendering pipelines are usually done almost entirely in hardware on the GPU, with OpenGL, Direct3D, Vulkan, etc. abstracting most of the process away from the developer. In an effort to learn more about 3D rendering, I made a simple implementation of all those processes purely in software. No hardware GPU assistance! It's a very simple program that only draws bright green lines and renders single objects.

If you're one of the ~26% of the world that runs macOS, you can check out the GitHub repo to demo it for yourself. Use WASD to move the camera and IJKL to rotate it. Enjoy one of the two 3D models included in the program, or bring your own Wavefront .obj file.

It turns out that 3D rendering is WAY more complicated than I thought; it's quite a formulaic process, but wrapping your head around it is very challenging, and I doubt it's possible to write a rendering pipeline without understanding it.

I did use GitHub Copilot to generate some code snippets and functions that I would have preferred not to write by hand.

'nother low effort devlog. Working on transitioning to a vertex and face type to store normals, colors, etc.
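For a sense of what that kind of vertex/face pairing tends to look like, here's a rough sketch; the names and fields are my own illustration, not the project's actual definitions:

```c
/* Sketch only: illustrative types, not the project's actual definitions. */
typedef struct { float x, y, z; } vec3;

typedef struct {
    vec3 position;
    vec3 normal;          /* per-vertex normal, for lighting later on */
    unsigned int color;   /* packed RGBA, a placeholder until colors land */
} vertex;

typedef struct {
    int v[3];             /* indices into the vertex array forming one triangle */
} face;
```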


Low-effort devlog so I don't lose my minutes. I refactored the culling logic and implemented z-buffering; working towards implementing colors! You'll hear more about this in the near future, methinks.
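For context, z-buffering is the standard trick of keeping a per-pixel depth value and only writing a pixel if it's closer than whatever is already stored there. A minimal sketch of that test (my own illustration; WIDTH, HEIGHT, and the buffers stand in for the real pixel-buffer setup, this is not the project's code):

```c
#define WIDTH  800               /* assumed resolution, for illustration */
#define HEIGHT 600

static unsigned int framebuffer[WIDTH * HEIGHT];
static float        zbuffer[WIDTH * HEIGHT];   /* reset to a huge value each frame */

/* Write a pixel only if it's closer to the camera than what's already there. */
void plot(int x, int y, float depth, unsigned int color) {
    int i = y * WIDTH + x;
    if (depth < zbuffer[i]) {    /* smaller depth = closer to the camera */
        zbuffer[i]     = depth;
        framebuffer[i] = color;
    }
}
```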


-> Quaternion Rotation! <-

This is a polish improvement to something I had working before. I previously implemented camera controls, including rotation, using traditional rotation matrices. The problem is that, over hundreds of frames of manipulation, the camera matrix accumulates floating-point error: the rotation and scale numbers that take up the first 3x3 of the matrix drift away from describing a pure rotation, and it becomes impossible to properly invert them in order to render the vertices.

The result is that, after rotating a bunch, the image gets really distorted and flattened in a weird way. Well, the solution is to encode rotation with quaternions, which don't suffer from this kind of drift (and are trivial to renormalize when they do pick up error). A quaternion is a 4D number written as a + bi + cj + dk, where a, b, c, and d are real numbers and i, j, and k are the imaginary basis units. It's a bit wishy-washy what all of that actually means, but the point is that a unit quaternion can encode any rotation in 3D space.

Well, I wrote a super simple quaternion library, and now the camera's position is stored as a plain vector3 and its rotation as a quaternion. It all still gets turned into transformation matrices in the end, but the stored rotation is no longer prone to the drift described above. Very cool!
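To give a sense of what "a super simple quaternion library" involves, here's a minimal sketch; the names and layout are my own illustration, not the actual code. A unit quaternion is built from an axis and an angle, composed with the Hamilton product, and renormalized each frame so floating-point drift never distorts the rotation:

```c
#include <math.h>

typedef struct { float w, x, y, z; } quat;

/* Rotation of `angle` radians about a unit axis (ax, ay, az). */
quat quat_from_axis_angle(float ax, float ay, float az, float angle) {
    float s = sinf(angle * 0.5f);
    return (quat){ cosf(angle * 0.5f), ax * s, ay * s, az * s };
}

/* Hamilton product: the combined rotation of applying b, then a. */
quat quat_mul(quat a, quat b) {
    return (quat){
        a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
        a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
        a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
        a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w
    };
}

/* Renormalizing is a single divide, so drift never accumulates. */
quat quat_normalize(quat q) {
    float n = sqrtf(q.w*q.w + q.x*q.x + q.y*q.y + q.z*q.z);
    return (quat){ q.w/n, q.x/n, q.y/n, q.z/n };
}
```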

(P.S. movement uses QWEASD and rotation uses UIOJKL; press SPACE to reset rotation)

-> Camera Controls <-

This is a smaller one and there isn't much to talk about. I basically just implemented a way to control the camera using WASD for movement and the arrow keys to look around. It's decently hacky and piggybacks on the transformation matrix system, so unintended behaviour is likely. Just... don't try anything TOO fancy.

-> Vertex Culling! <-

The program no longer segfaults when there are vertices outside of the camera's view! Incredible! To do this, I implemented vertex culling, which checks for vertices outside of the bounds of the clip space cube.

If you saw my last devlog, you'll know that clip space is an intermediate coordinate space in the graphics pipeline where (after the divide by w) every vertex's coordinates land in the range -1 to 1. Any vertex with a coordinate outside that range is outside the camera's view. Pretty neat!

During the culling step, the program iterates through all the triangles in the scene, looking for vertices that are outside of the visual range.

1) If all three of a triangle's vertices are outside, it is simply discarded.

2) If two of the vertices are out of range, the program finds where the triangle's sides intersect the plane that forms the border of the clip space cube (a sketch of this intersection step follows the list). It then replaces the original triangle with a new one built from the single vertex that's inside and the two points where the sides cross the border.

3) Finally, if only one vertex is out of range, the program needs to create a trapezoid. That means two new triangles. It creates one triangle using the two points where the triangle's sides intersect the border and one of the original points that's in bounds, and a second triangle using the two points that are in bounds and one of the intersection points.
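The intersection computation in cases 2 and 3 is just a linear interpolation along each edge that crosses the boundary. A minimal sketch against a single plane (my own illustration, not the project's code):

```c
typedef struct { float x, y, z; } vec3;

/* Signed distance from one boundary plane of the clip-space cube (here z = -1);
 * negative means the point is outside. */
static float dist_near(vec3 p) { return p.z + 1.0f; }

/* Point where the segment a->b crosses the plane, found by linear interpolation. */
vec3 clip_intersect(vec3 a, vec3 b) {
    float da = dist_near(a), db = dist_near(b);
    float t = da / (da - db);                /* t = 0 at a, t = 1 at b */
    return (vec3){ a.x + t * (b.x - a.x),
                   a.y + t * (b.y - a.y),
                   a.z + t * (b.z - a.z) };
}
```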

It's a bit hard to wrap your head around at first, but it's quite an elegant algorithm, I think. Next up, perhaps, I'll implement a way for the user to control the camera with the keyboard!

-> 3D WIREFRAMES! <-

First: my last post was super wrong about what the graphics pipeline is like. It's way more complicated than what I thought it was. The real pipeline looks a little something like this:

1) Interpret 3D models into a local data structure for vertices and faces (which vertices connect to which others to make triangles)

2) Transform the models into world space. This means adding a fourth coordinate (called the homogeneous component and labeled w) on top of x, y, and z, and using matrix math to transform the vertices into absolute positions in the world.

3) Transform the models into camera space. This makes each vertex's coordinates relative to the camera by multiplying by the inverse of the matrix that describes the camera's position in the world. Basically: where is each vertex relative to the camera?

4) Transform the models into clip space. This stage deals with perspective. Basically, it uses the camera's near and far viewing planes to build a frustum that describes what the camera can see, then normalizes it into a rectangular prism. In a frustum, the cross sections get larger the further you are from the camera, so reshaping the frustum into a rectangular prism squeezes far-away points closer together. That's the conceptual idea, anyway. This step significantly modifies the homogeneous component (w) from earlier.

5) Normalize into NDC (Normalized Device Coordinates). This simply divides each vertex's x, y, and z coordinates by w to put all the coordinates in a range between -1 and 1. Vertices not in this range should be removed, but mine just segfaults instead.

6) Project to screen space coordinates. This extrapolates the x and y coordinates on the viewing screen for each vertex from the NDC space numbers using math that I don't entirely understand.

7) Create triangles from the vertices and the list of face mappings.

8) Draw the triangles to the screen using the system I devised earlier.

Whew, that was a lot. My code has reached the point of doing all this stuff! Pretty neat, right? I spent quite a lot of time researching this whole process. Most of the time, all of this processing happens inside the GPU, but this code implements the whole pipeline entirely on the CPU (a small sketch of steps 4-6 follows). Next, I'll extend it by culling out-of-frame vertices, filling in triangles, and maybe adding support for color!
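Here's roughly what steps 4-6 boil down to in code. This is my own sketch under simple assumptions (a symmetric frustum described by a vertical field of view, aspect ratio, and near/far planes), not the project's actual implementation:

```c
#include <math.h>

typedef struct { float x, y, z, w; } vec4;

/* Step 4: camera space -> clip space. The homogeneous component w ends up
 * holding the vertex's depth, which is what makes the perspective divide work. */
vec4 to_clip(vec4 p, float fov, float aspect, float znear, float zfar) {
    float f = 1.0f / tanf(fov * 0.5f);
    vec4 c;
    c.x = (f / aspect) * p.x;
    c.y = f * p.y;
    c.z = (p.z * (zfar + znear) + 2.0f * zfar * znear) / (znear - zfar);
    c.w = -p.z;   /* the camera looks down -z in this convention */
    return c;
}

/* Steps 5-6: divide by w to get NDC in [-1, 1], then map to pixel coordinates. */
void to_screen(vec4 c, int width, int height, float *sx, float *sy) {
    float ndc_x = c.x / c.w;
    float ndc_y = c.y / c.w;
    *sx = (ndc_x + 1.0f) * 0.5f * (float)width;
    *sy = (1.0f - ndc_y) * 0.5f * (float)height;   /* screen y grows downward */
}
```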

Render Pipeline and Drawing Triangles

Once I got through the boring task of scaffolding everything and getting SDL working to output my pixel buffer, it came time to start drawing triangles.

The basic render pipeline will look like this:
1. Interpret 3D model
2. Rasterize into triangles
3. Print triangles onto pixel buffer
4. Copy pixel buffer to display window

Step 4 is easy, and this log is about step 3. The core of it is the simple screenspace_draw_line() function, which takes a pair of points and writes a line between them into the pixel buffer.

The function first computes the line's dy and dx, then checks whether the slope's magnitude is at most 1 (|dx| >= |dy|) or greater than 1 (|dy| > |dx|). In the first case, it iterates over every X-coordinate between the two points, computes the line's exact Y-coordinate from the slope and the first point's position, and rounds the result to the nearest whole pixel. If the slope's magnitude is greater than 1, it does the same thing with the axes swapped, iterating over Y-coordinates and computing X-coordinates instead.
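In code, that approach looks roughly like this. It's a sketch of the description above, not the actual implementation: the real screenspace_draw_line() may have a different signature, and put_pixel() is a hypothetical helper standing in for the pixel-buffer write.

```c
#include <math.h>
#include <stdlib.h>

void put_pixel(int x, int y);   /* hypothetical: writes one pixel into the buffer */

void screenspace_draw_line(int x0, int y0, int x1, int y1) {
    int dx = x1 - x0, dy = y1 - y0;

    if (abs(dx) >= abs(dy)) {                       /* shallow line: step along x */
        if (x1 < x0) { int t = x0; x0 = x1; x1 = t; t = y0; y0 = y1; y1 = t; }
        float slope = (dx == 0) ? 0.0f : (float)dy / (float)dx;   /* float, not integer, division */
        for (int x = x0; x <= x1; x++)
            put_pixel(x, (int)lroundf(y0 + slope * (x - x0)));
    } else {                                        /* steep line: step along y to avoid gaps */
        if (y1 < y0) { int t = x0; x0 = x1; x1 = t; t = y0; y0 = y1; y1 = t; }
        float slope = (float)dx / (float)dy;
        for (int y = y0; y <= y1; y++)
            put_pixel((int)lroundf(x0 + slope * (y - y0)), y);
    }
}
```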

I had some issues along the way which aren't terribly interesting: lines had drastically wrong slopes because I didn't realize I was doing integer division (whoops...). I was also having an issue with steep lines appearing extremely jagged and sparse, which is what prompted me to add the Y-iteration method for lines with dy > dx.

Next up, RASTERIZATION!
