Making Connections


80 Days Until I Can Walk

Mark the date! Today is the day I started writing code for the Fourteen Screws engine!

I set out with a pretty simple goal. I wanted to write a function in Rust which returned a static image. Then I wanted a simple web page that would display that image. Really what I was looking for was an understanding of how rendering to a section of a web page would work in my game engine.

My Usual Approach

If I were writing this project with SDL2 and C, I would likely do something like this to render to the screen:

// Note that somewhere out there, renderer is pointing to an
// SDL_Window struct which is what is being rendered to when
// SDL_RenderClear, SDL_RenderCopy, and SDL_RenderPresent are
// called.

void render(SDL_Renderer *renderer, SDL_Texture *texture) {
    int pitch;
    void *texbuf;
    Uint32 *pixels;

    // get the pixel buffer for the texture
    SDL_LockTexture(texture, NULL, &texbuf, &pitch);

    // wipe the texture so all pixels are black
    pixels = texbuf;
    memset(
        pixels,
        0,
        sizeof(Uint32) * texture_width * texture_height
    );
    
    // =======================================
    // render scene into pixels buffer
    // ray cast magic will happen here
    // =======================================

    SDL_UnlockTexture(texture);

    // draw the texture to the screen
    SDL_RenderClear(renderer);
    SDL_RenderCopy(renderer, texture, NULL, NULL);
    SDL_RenderPresent(renderer);
}

At some point as the application starts up, three structures are created:

  • SDL_Window: the window displayed on screen, and the resolution at which our application is running
  • SDL_Renderer: used to render to the SDL_Window instance
  • SDL_Texture: image data which we can draw to the SDL_Window using the SDL_Renderer

In the above example, we start the render function by locking the SDL_Texture and extracting a raw pointer to its pixel content. Just like the developers at id Software, we now have a pointer to a buffer into which we can render our scene. We set the colour of a pixel by writing a Uint32 value at the appropriate offset in that buffer. When we are finished rendering, we unlock the texture and draw it to the SDL_Window using the SDL_Renderer.

Note that SDL_RenderCopy can upscale a small SDL_Texture to match the resolution of a larger SDL_Window. So, for example, I might create an SDL_Texture with a logical size of 320x200 pixels (fairly typical values seen when reading about ray casters). But my window (running in fullscreen, perhaps) might have a resolution of 1920x1080. SDL_RenderCopy can upscale the contents of the texture to fit the full area of the window. However, the image will be distorted if the aspect ratio of the window is different to the logical aspect ratio of the texture. This is something we must bear in mind.

Because I have taken this approach before, I wanted to find a similar solution in Fourteen Screws. The answer would appear to be the HTML5 <canvas> element.

Creating a “Window”

The <canvas> element exposes a simple API for rendering on a web page.

Initially I was somewhat confused by the difference between the canvas’ width and height attributes and its CSS width and height properties. I did not realise that they were two separate things. The CSS properties control how large the canvas appears on the page, while the width and height attributes on the <canvas> element itself set its drawing resolution. So on the current deployment of the engine, the canvas is styled to be 1240 pixels wide with an aspect ratio of 8:5 (the same as a window of size 320x200), while its width and height attributes are set to 320 and 200, giving it a drawing resolution of 320x200.
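As a rough sketch of that distinction (setting the values from JavaScript here is purely for illustration; on the real page they live in the HTML and the stylesheet):

// The width/height attributes set the drawing resolution: the ImageData
// we pull out of the 2D context will be 320x200 pixels.
const canvas = document.getElementById("canvas");
canvas.width = 320;
canvas.height = 200;

// The CSS width/height only control how large the canvas appears on the
// page; the browser scales the 320x200 bitmap up to fill this area.
canvas.style.width = "1240px";
canvas.style.height = "775px"; // 1240x775 keeps the 8:5 aspect ratio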

In JavaScript, I can grab a buffer similar to the raw pixel buffer in the C/SDL2 example by getting the canvas’ context and extracting its image data:

import * as wasm from "fourteen-screws";

const canvas = document.getElementById("canvas");

if (canvas) {
	const context = canvas.getContext("2d");

	if (context) {
		const image = context.getImageData(0, 0, 320, 200);
		wasm.render(image.data);
		context.putImageData(image, 0, 0);
	}
}

In the example above, wasm.render(image.data) calls a function defined in Rust, exported to JavaScript through wasm-bindgen, and implemented as follows:

use wasm_bindgen::prelude::*;

static PIXELS: &'static [u8] = &[ /* raw image data */ ];

#[wasm_bindgen]
pub fn render(buf: &mut [u8]) {
	buf.clone_from_slice(PIXELS);
}

So JavaScript retrieves a reference to a pixel buffer from the canvas, which it then passes by reference to Rust. Rust renders a scene into the buffer (in this case, just copying a static image over), and JavaScript then displays the result in the canvas.

This gives me something quite similar to what I would have had in C. I have a buffer into which I can write pixel data (although I don’t particularly like how that pixel data is represented by the JavaScript ImageData object). It also allows me to upscale my image to higher resolutions without having to alter the basic assumptions and optimisations that I employ in the ray casting algorithm.
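For reference, the data property of an ImageData object is a flat Uint8ClampedArray of RGBA bytes, four per pixel, laid out row by row. Writing a single pixel from the JavaScript side would look roughly like this (putPixel is just an illustrative helper, not part of the engine):

// Hypothetical helper: write one RGBA pixel into an ImageData buffer.
function putPixel(data, width, x, y, r, g, b) {
	const offset = (y * width + x) * 4;
	data[offset] = r;       // red
	data[offset + 1] = g;   // green
	data[offset + 2] = b;   // blue
	data[offset + 3] = 255; // alpha: fully opaque
}

// e.g. a red pixel at (10, 20) in the 320x200 buffer
putPixel(image.data, 320, 10, 20, 255, 0, 0);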

I definitely hit some snags trying to figure out ownership and mutability while writing the Rust side of this. Even though it’s short, I found that I learned a lot.

At this stage, I could have taken things much further, but I’m actually happy to stop here. Tomorrow I want to have a look at implementing a first version of the ray caster. I’ll be using a slightly different data structure for modelling walls than most other ray casters do, but I won’t talk about that now. I think my approach will give my engine some interesting features, and might add a little bit of new information to what is a very thoroughly discussed rendering algorithm.

Thoughts On Project Structure

In order to run my application, I actually have two projects in one: a Rust crate, which will ultimately contain all the logic for rendering a scene, and a Node project for loading the WASM file and rendering the web page. If possible, I would like to remove Node as a dependency. I believe I should be able to craft a JavaScript file which can be dropped into any web page and will load the WASM before starting the rendering logic. This isn’t a big deal right now, so I’m not going to spend much time on it, but eventually I would like to see Node disappear from the project.
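As a very rough sketch, and assuming I build the crate with wasm-pack’s --target web output (the module path and exports here are guesses based on my current setup), that drop-in file might look something like this:

// Hypothetical drop-in loader, assuming a wasm-pack --target web build.
// The generated module exports a default init() that fetches and
// instantiates the .wasm file, alongside the #[wasm_bindgen] exports.
import init, { render } from "./pkg/fourteen_screws.js";

async function start() {
	await init(); // load and instantiate the WASM module

	const canvas = document.getElementById("canvas");
	const context = canvas.getContext("2d");
	const image = context.getImageData(0, 0, 320, 200);

	render(image.data);
	context.putImageData(image, 0, 0);
}

start();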

Deploying to the Server

Right now the website is running behind nginx. The main blog is written with Jekyll and served, as you would expect, from a folder on the server. I have chosen to set up a separate subdomain for the running instance of the engine: a Node server running behind pm2 and reverse proxied via nginx. I’m reasonably happy with this, but if I can eliminate Node from the project, then I would like to see the engine running within Jekyll itself, perhaps rendering into a canvas in the middle of a regular post.

Writing Progress

I’ve also started working on my Rust/WebAssembly overview article. It is going to be quite difficult to constrain the topic. There are so many different things to look at and understand. But I’m still going to aim to have this article written by the end of this week.

Conclusion

That’s all for today folks! If you’ve made it this far, then thanks for reading! Tomorrow I’ll be working on the beginnings of the ray cast algorithm. Hopefully you’ll start to see some dynamic behaviour in the engine very soon.