A Floor Beneath Your Feet
66 Days Until I Can Walk
A lot got done today. And now that I’m in the process of writing up my blog post, I’m starting to realise just how much. It’s been a good day in many, many ways. Obviously the big success is that floor texturing now works.
However, in addition to that, I also got some virtual joysticks working for the mobile version of the site. I need to do some proper checking for how far a stick is pushed so that I can adjust turn and movement speed accordingly, but at least now people on mobile can look around the scene a little bit. There’s a major refactoring of the codebase coming soon, and this will likely be one of the issues addressed.
I have also fixed my user input code to handle different keyboard layouts. This was based on some feedback I received on the project's Reddit thread. You can read exactly what I did further down this post.
Floor Texturing
In order to implement floor texturing, I looked again at F. Permadi’s tutorials. Casting rays for floors is, in some ways, a good deal simpler than casting for walls. Permadi himself mentions in the comments of his code that his implementation is very easy to optimise. I’ll talk about some of the ways in which I have done this later. For now, let’s talk about how floor texturing works.
Taking a single column of the screen as an example, after we have finished rendering a wall, we then want to render the floor starting from the bottom of the wall and continuing to the bottom of the projection plane. In order to do this, we cast a ray out from the eye of the player, through the pixel on the projection plane that we are currently trying to texture with a floor tile, and then out into the world. I will give a detailed explanation of how this works in a future post, but a good description is given by Permadi on the page linked above.
We follow the ray until it hits a floor tile. We then compute the point of intersection between the ray and the tile and map that to a texture. We select a single pixel from that texture at the appropriate point and render it to the screen.
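To make that concrete, here is a sketch of the per-pixel calculation following Permadi's description. All names and constants here are my own assumptions, not taken from the engine's actual code:

```javascript
// Assumed constants (illustrative values, not the engine's).
const TILE_SIZE = 64;        // world units per floor tile
const PLAYER_HEIGHT = 32;    // eye height above the floor
const DIST_TO_PROJ = 277;    // (PROJECTION_PLANE_WIDTH / 2) / tan(FOV / 2)
const CENTRE_ROW = 100;      // PROJECTION_PLANE_HEIGHT / 2

// Returns the texel coordinates for the floor pixel at screen row `row`
// (below the wall), for a ray cast at `rayAngle`, where `beta` is the
// angle between the ray and the view direction (the fisheye angle).
function floorTexel(playerX, playerY, rayAngle, beta, row) {
  // Similar triangles: the eye's height over the pixel's drop below the
  // screen centre gives the straight-ahead distance to the floor point.
  const straightDist = (PLAYER_HEIGHT * DIST_TO_PROJ) / (row - CENTRE_ROW);
  // Undo the fisheye distortion for rays off the view axis.
  const dist = straightDist / Math.cos(beta);
  // Project out into the world to find the point on the floor...
  const worldX = playerX + Math.cos(rayAngle) * dist;
  const worldY = playerY + Math.sin(rayAngle) * dist;
  // ...and map it into the tile's texture.
  return {
    texX: ((worldX % TILE_SIZE) + TILE_SIZE) % TILE_SIZE,
    texY: ((worldY % TILE_SIZE) + TILE_SIZE) % TILE_SIZE,
  };
}
```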
So unlike rendering the walls, where we fire a single ray out for a column, get the points of intersection with each grid line, and then draw the appropriate texture scaled to a height based on the distance from the player, with floor textures we must fire a ray for each individual pixel that we want to render to. This is because, of course, floor tiles can be skewed depending on the direction from which we look at them.
There are two simple optimisations that can be applied to Permadi’s implementation. It’s likely you can already guess what I am going to propose:
- Convert all floating point arithmetic to fixed point arithmetic
- Use lookup tables to store values that can be precomputed
Converting everything to FPA is trivially simple, of course, and has already been implemented.
Generating lookup tables is also quite straightforward. The ray that is fired will be of constant length for each column/row combination of the projection plane. This means that we can compute the length of the ray, and its skew (exactly the same problem that causes the fisheye effect), at compile time and save ourselves some runtime cycles. I hope to add this tomorrow. Because we are only projecting from the bottom half of the projection plane, the size of the lookup table should be PROJECTION_PLANE_WIDTH * PROJECTION_PLANE_HEIGHT / 2.
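As a sketch of that table (filled once at startup here, since JavaScript has no compile-time evaluation; the names and dimensions are assumptions, not the engine's):

```javascript
// Assumed dimensions and constants (illustrative only).
const PROJECTION_PLANE_WIDTH = 320;
const PROJECTION_PLANE_HEIGHT = 200;
const PLAYER_HEIGHT = 32;
const DIST_TO_PROJ = 277;

// One entry per pixel in the bottom half of the projection plane:
// the full ray length, with the fisheye correction already baked in.
const rayLength = new Float64Array(
  (PROJECTION_PLANE_WIDTH * PROJECTION_PLANE_HEIGHT) / 2
);

for (let r = 0; r < PROJECTION_PLANE_HEIGHT / 2; r++) {
  for (let c = 0; c < PROJECTION_PLANE_WIDTH; c++) {
    // beta is the angle between this column's ray and the view direction.
    const beta = Math.atan((c - PROJECTION_PLANE_WIDTH / 2) / DIST_TO_PROJ);
    // Row 0 is the first row below the centre of the screen.
    rayLength[r * PROJECTION_PLANE_WIDTH + c] =
      (PLAYER_HEIGHT * DIST_TO_PROJ) / (r + 1) / Math.cos(beta);
  }
}
```

At render time, the per-pixel distance is then just a table lookup rather than a division and a cosine.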
WASD Key-Bindings In JavaScript (And Beyond)
Yesterday I posted a little project update on the /r/rust subreddit, just to try and keep people engaged with what I'm doing here. One piece of feedback I received was in relation to how I am handling user input in JavaScript. In my js file, I have a callback registered for key-up and key-down events. The callback updates a global dictionary of key states, so that an entry for a key is true when down and false when up. This is how I can let the player strafe and turn simultaneously, for example.
The implementation looked something like the following:
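Roughly like this, that is (a sketch; the exact names are my own):

```javascript
// Global dictionary of key states: true while a key is held down.
const keyStates = {};

function onKeyDown(event) {
  keyStates[event.key] = true;
}

function onKeyUp(event) {
  keyStates[event.key] = false;
}

// In the browser, these are registered as:
// document.addEventListener("keydown", onKeyDown);
// document.addEventListener("keyup", onKeyUp);
//
// The render loop can then test several keys at once, e.g.
// if (keyStates["w"] && keyStates["a"]) { /* forward and turn left */ }
```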
The problem here is that event.key is tied to the value of the key being pressed, not its physical location. What this means is that, while WASD bindings work fine on my QWERTY keyboard, people with Dvorak or French AZERTY keyboards do not benefit from the same “gamepad” layout of these keys. The solution, as given by stefnotch, is to use event.code, which is tied to the physical location of a key rather than its value.
Comment by u/stefnotch from discussion Learning Rust Until I Can Walk Again (Update) in /r/rust
This advice, of course, does not only apply to JavaScript. For example, if implementing a game with SDL2, then WASD key bindings should be implemented using SDL_Scancode and not SDL_Keycode.
Updating my JavaScript code is trivially simple.
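The fix is just swapping the property the handlers read (a sketch, same shape as before; note that event.code reports physical positions like "KeyW" rather than characters):

```javascript
// Key states are now keyed by physical position, so WASD stays a
// "gamepad" cluster regardless of the keyboard layout.
const keyStates = {};

function onKeyDown(event) {
  keyStates[event.code] = true;
}

function onKeyUp(event) {
  keyStates[event.code] = false;
}

// The render loop checks physical positions instead of characters:
// if (keyStates["KeyW"]) { /* move forward, even on AZERTY/Dvorak */ }
```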
Adding Virtual Joysticks
Up to this point, users who have logged into the site to check out the engine on a mobile device have not had any way to move about. They are faced with what is essentially a static render of a view through a window.
I wanted to give mobile users some way to interact with the site, so this morning I added two virtual joysticks to the page. These joysticks will only render on mobile devices. In order to do this I made use of the JoyStick2 JavaScript library by Roberto D’Amico. This gave me a trivially simple way to insert and style two joysticks on the engine page.
The callbacks for the joysticks return the displacement of the stick, and its cardinal direction (N, NE, E, SE, S, SW, W, NW). For now I have just used the cardinal direction to determine what direction the user is trying to move in. So the user experience on mobile is still not great. I’ll need to add some control where the speed of the player varies with the displacement of the stick. But for now, at least mobile users have a bit more interactivity than before.
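Mapping the cardinal direction onto movement flags might look something like this (a sketch; the function and flag names are mine, not the engine's):

```javascript
// Turn a cardinal direction string ("N", "NE", "E", ...) into movement
// flags. Diagonals like "NE" set two flags at once, mirroring how the
// key-state dictionary allows simultaneous strafe and turn.
function directionToMovement(cardinal) {
  return {
    forward: cardinal.includes("N"),
    backward: cardinal.includes("S"),
    left: cardinal.includes("W"),
    right: cardinal.includes("E"),
  };
}
```

Scaling the movement speed by the stick's displacement would then be a matter of multiplying the player's step size by the reported displacement, once that work is done.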
Figuring out whether or not I was on mobile was trickier than I expected. There is a CSS-based solution where you check the maximum resolution of the screen in order to determine whether or not you are on a mobile device. However, this did not work for me, and in the end I opted for the JavaScript solution shown below:
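In simplified form, the approach is this (the actual StackOverflow answer uses a far longer regular expression; this shortened pattern is mine, for illustration only):

```javascript
// Check the browser's user-agent string against a list of known mobile
// browser markers. The real answer's pattern covers many more devices.
function isMobile(userAgent) {
  return /Android|iPhone|iPad|iPod|Opera Mini|IEMobile|BlackBerry/i.test(
    userAgent
  );
}

// In the browser:
// if (isMobile(navigator.userAgent)) { /* render the joysticks */ }
```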
I found this in a StackOverflow post by Michael Zaporozhets. It basically uses a massive regular expression to check what browser is being used to access the website. If it is determined to be a mobile version of a browser, then the mobile version of the site is rendered.
Conclusion
So that just about wraps things up for today. Tomorrow I will get texturing working for the ceiling too. Then I will spend the rest of the week performing a massive refactor of my code in an effort to make everything just a little more Rust-like, and a lot less messy.