3 months of Rust game engine development

Oct. 2, 2024

I've been putting together a game engine for a side project I've been working on. Here are a few clips of the current version.

For the core of the game engine, I'm inspired by:

  • Love2d, for ease of scripting.
  • BYOND, for ease of integrating multiplayer.
  • Source, as a target for game feel and rendering fidelity.

I personally really liked working on these platforms-- they all support hot reload in some way or another, and it's very easy to hack together multiplayer experiments. I really value the developer experience these engines provide.

The major systems are in and I'm working on gameplay now. So far, there are working implementations of UI, physics, rendering, lighting, netcode, input, scripting, animation, and an asset system.

It's pretty hacky right now, but I'm slowly refactoring it until it's good. I'm taking a very broad strokes approach to building this.

Design

For starters, the game loop and initialization are owned by a VM running TypeScript. Architecturally, it's pretty similar to React Native right now.

```ts
/* client_entrypoint.ts */
import { GameLoop } from 'platform/game_loop';
import { Window as OsWindow } from 'platform/window';

export default class App implements GameLoop {
  private window!: OsWindow;

  init() {
    this.window = new OsWindow('my game window');
  }

  frame(delta: number) { /* ... */ }
}
```

I want people who don't know Rust to be able to use the engine, so the engine's entrypoint is in TypeScript, but the internals are written in Rust.

Some really nice macros for binding help keep the glue code minimal.

```rust
/* platform/window.rs */
use crate::js::prelude::*;
use crate::window_module::*;

#[js::class()]
struct Window {
    id: WindowId,
}

#[js::impl()]
impl Window {
    pub fn constructor(title: Option<String>) -> Window {
        let window_module = use_module!(WINDOW_MODULE);
        let new_window_id = window_module.create_window();

        window_module.with_window(new_window_id, |window| {
            let title = title.clone().unwrap_or_default();
            window.set_title(&title);
        });

        Window { id: new_window_id }
    }
}
```

The pitch is that most users will write TypeScript, and the platform/rendering code can live in Rust, where it's fast, parallel, and memory safe. Some data structures, like the ECS, will be shared between the two.

Eventually, WebAssembly modules might be a good fit, but for now the WASM ecosystem is a little young and I don't want to get blocked in any capacity, so I'm sticking with JS-- it's super easy to embed.

UI

The UI system is pretty foundational in this engine, in the sense that the only way to render a viewport into the world is to present it to the compositor in the UI system. That said, the engine can still run headless (for multiplayer servers, for example), which is also a blessing, because the UI system is very ugly right now.

[clip: the UI system]

Right now, this lives in a super experimental state. Users can effectively write JSX for layout, but also use a painter to draw each element.

Here's a really trimmed-down example from the windowing shell the UI system uses to present draggable windows:

```tsx
/* window.tsx */
export class Window extends Element {
  title = "Untitled Window";
  hideChrome = false;

  draw(painter: Painter) {
    if (!this.hideChrome) {
      // draw window chrome

      // draw header
      painter.rect({
        x: this.x,
        y: this.y,
        width: this.width,
        height: HEADER_HEIGHT,
        background: {
          ninepatch: window_header_9patch,
          scale: 4.0,
        },
      });

      // draw title
      painter.label(this.title, {
        x: this.x + 8.0,
        y: this.y + 4.0,
        size: 32.0,
        color: Color.rgb(0.0, 0.0, 0.0),
      });

      // create a drag handle over the top of the header
      element('drag_handle', {
        x: this.x,
        y: this.y,
        width: this.width,
        height: HEADER_HEIGHT,
      }, (elm) => {
        handler(EventKind.TryStartDrag, (ev) => true);
        handler(EventKind.DragDelta, (ev) => {
          this.x += ev.deltaX;
          this.y += ev.deltaY;
        });
      });
    }

    // draw body
    painter.rect({
      x: this.x,
      y: this.y + HEADER_HEIGHT,
      width: this.width,
      height: this.height - HEADER_HEIGHT,
      background: {
        ninepatch: window_body_9patch,
        scale: 4.0,
      },
    });

    // draw window contents
    painter.pushClipRect(
      this.x + FRAME_SIZE + CONTENT_PADDING,
      this.y + HEADER_HEIGHT + CONTENT_PADDING,
      this.width - FRAME_SIZE * 2,
      this.height - HEADER_HEIGHT - FRAME_SIZE - CONTENT_PADDING * 2,
    );
    for (const child of this.children) {
      painter.drawChild(child);
    }
    painter.popClipRect();
  }
}
```

There's still more work needed here, but in my opinion the combination of JSX with flexbox for layout, plus the immediate-mode painter API, works really well for the kinds of needs game interfaces have (rich animations, etc.).
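As a rough illustration of how the two halves can mix (a sketch only-- the `view` element, the flexbox style props, and the HealthBar class are hypothetical, not the engine's real API), layout can be declared in JSX while a leaf element paints itself immediately:

```tsx
/* hypothetical sketch -- `view`, the flexbox style props, and HealthBar
   are illustrative names, not the engine's actual API */
function Hud(props: { health: number; maxHealth: number }) {
  return (
    <view style={{ flexDirection: 'row', padding: 8 }}>
      {/* positioned by flexbox like any other element... */}
      <HealthBar value={props.health} max={props.maxHealth} />
    </view>
  );
}

class HealthBar extends Element {
  value = 0;
  max = 1;

  // ...but free to repaint itself immediately, every frame
  draw(painter: Painter) {
    // assumes a plain-color background is supported alongside ninepatches
    painter.rect({
      x: this.x,
      y: this.y,
      width: this.width * (this.value / this.max),
      height: this.height,
      background: { color: Color.rgb(0.8, 0.1, 0.1) },
    });
  }
}
```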

The element(/* ... */) API in the Window example creates an "anonymous" element procedurally, which I've found useful for lots of things, including event handlers-- see how easy it is to make the window draggable by creating an invisible element over the top of the window bar.

```tsx
// create a drag handle over the top of the window
element('drag_handle', {
  x: this.x,
  y: this.y,
  width: this.width,
  height: HEADER_HEIGHT,
}, (elm) => {
  handler(EventKind.TryStartDrag, (ev) => true);
  handler(EventKind.DragDelta, (ev) => {
    // DragDelta is only delivered when the element is being dragged
    this.x += ev.deltaX;
    this.y += ev.deltaY;
  });
});
```

There are some pretty big footguns when it comes to reactivity in a setup like this, but it's experimental.

I think it's very likely the UI system will change a lot, but for now, it's working great and exactly what I need. I've always wanted to try this mixed-immediate-and-declarative style of UI.

Renderer

The renderer is super basic. It uses wgpu and renders maps exported from Hammer, including baked lighting. The asset system is also very happy to import directly from .blend, so a really nice mapping workflow is emerging by using Hammer and Blender in tandem with hot reload.

[clip: the renderer]

Portability and performance are huge priorities for me, so the fidelity of the renderer will not improve much-- but the Source engine tooling that generates the maps will be incrementally replaced by my own stuff. This renderer is designed to support WebGL as a minspec, so a lot of the more interesting features in wgpu are ignored for now.

Assets

An asset build system (inspired by Bazel) produces new artifacts for the game to use. These artifacts contain information that is known to the scripted layer at **build time**-- which can be verified when the scripts are hot-reloaded.
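To make that concrete, here's a sketch of the kind of build-time metadata an artifact could carry (these field names are hypothetical, not the engine's actual format):

```ts
/* hypothetical sketch -- field names are illustrative, not the real artifact format */
interface AssetArtifact {
  id: string;                                     // stable id, e.g. a content hash
  kind: 'ninepatch' | 'texture' | 'mesh' | 'map';
  sourcePath: string;                             // e.g. 'ui/window_background.9patch.png'
  // kind-specific data baked at build time, like ninepatch slice insets or
  // texture dimensions, available to scripts when they hot-reload
  metadata: Record<string, unknown>;
}
```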

Assets can be imported in TypeScript, like this:

```ts
/* example_draw_ninepatch.ts */
import { Immediate2d } from 'render/immediate_2d';
import window_background from './window_background.9patch.png';

export function drawWindowBackground(painter: Immediate2d) {
  painter.rect({
    x: 100,
    y: 100,
    width: 100,
    height: 100,
    background: {
      ninepatch: window_background,
    },
  });
}
```

The build-time asset information lets APIs like Immediate2d.rect verify that the value passed to them is actually a ninepatch before the game is built.
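One way that could surface in the type system (a sketch, assuming generated typings and a hypothetical NinepatchAsset type for .9patch.png imports):

```ts
/* hypothetical sketch -- NinepatchAsset and these generated typings are illustrative */
interface NinepatchAsset {
  readonly kind: 'ninepatch';
  readonly id: string;
}

// generated typings give every *.9patch.png import the NinepatchAsset type
declare module '*.9patch.png' {
  const asset: NinepatchAsset;
  export default asset;
}

// Immediate2d.rect can then demand a ninepatch specifically, so passing
// some other asset kind fails when the game is built, not at runtime
interface RectBackground {
  ninepatch: NinepatchAsset;
  scale?: number;
}
```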

This is all in anticipation of doing asset preparation/optimization per-target platform, tree shaking, and other optimizations-- webpack for games.

The importers themselves are implemented in Rust. For example, the image loader implementation basically looks like this-- offloading most of the work to the image crate.

```rust
/* asset_loaders/image.rs */
use image::GenericImageView;

#[derive(Debug)]
pub struct Image {
    pub width: u32,
    pub height: u32,
    pub pixels: Vec<u8>,
}

impl Asset for Image {
    fn extensions() -> &'static [&'static str] {
        &["png", "jpg"]
    }

    fn load(asset_id: &AssetId, uri: &VfsUri, data: &Vec<u8>) -> AssetLoadResponse {
        log::info!("importing image at {:?} with id {:?}", uri, asset_id);

        let image_asset = {
            let img = image::load_from_memory(data)
                .expect("failed to load image");

            Image {
                width: img.width(),
                height: img.height(),
                pixels: img.pixels().flat_map(|(_, _, px)| px.0).collect::<Vec<_>>(),
            }
        };

        AssetLoadResponse {
            asset: image_asset,
        }
    }
}
```

Right now, this system is simple. In the future, I'd like to support more complex targets with shared dependencies, similar to Bazel. I thought about adopting Bazel wholesale as part of the toolchain, but I really want the build environment to be highly portable and hosted inside the engine itself.

Already, assets can be streamed in while the engine is running, whenever the JS binary is hot reloaded-- which is awesome!

Scripting

Scripting in TypeScript is a huge priority for my engine. I think that Rust is an awesome language for engine development, but I understand that not everybody wants to write Rust, and not every problem domain is best covered by Rust.

Practically speaking, opting into a scripting language is the easiest way to make hot reloading happen. I chose TypeScript running on QuickJS for this.

A module system that relies on message passing ties together both environments and also supports asynchronous calls between JS and Rust, which is super convenient.
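As a rough sketch of what that looks like from the TypeScript side (the `useModule` import, the 'asset_server' module, and the message names here are hypothetical):

```ts
/* hypothetical sketch -- `useModule`, 'asset_server', and the message names are illustrative */
import { useModule } from 'platform/modules';

export async function loadArena() {
  const assetServer = useModule('asset_server');

  // a call is just a message to the Rust side; the reply resolves the promise
  const map = await assetServer.call('load_map', { uri: 'maps/arena_01' });

  // one-way messages don't wait for a reply
  assetServer.post('prefetch', { uri: 'maps/arena_02' });

  return map;
}
```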

There is a pretty big category of issue called "re-entrancy"-- where JS calling Rust, which in turn calls JS that calls Rust, can violate the aliasing rules that the borrow checker upholds. In practice, it's not clear how big of an issue this will be, but a more principled design of the scripting and native layers will eventually eliminate it.
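A concrete shape of the problem, sketched with hypothetical window APIs:

```ts
/* hypothetical sketch of the re-entrancy hazard -- `windowModule` and its methods are illustrative */
declare const windowModule: {
  forEachWindow(cb: (window: { title: string }) => void): void;
  createWindow(title: string): void;
};

// JS calls into Rust, which iterates its window list while holding a borrow...
windowModule.forEachWindow((window) => {
  // ...and this callback re-enters Rust, which may need a mutable borrow
  // of the very same window list it is still iterating
  windowModule.createWindow('popup');
});
```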

ECS

The ECS I'm building isn't following data-oriented design patterns like a traditional ECS. For now, entity handles are just smart pointers, which point to a map of type-erased Components.
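Concretely, component access today looks something like this (a sketch; `world`, `spawn`, `insert`, and `get` are hypothetical names):

```ts
/* hypothetical sketch -- `world`, `spawn`, `insert`, and `get` are illustrative names */
const ent = world.spawn();
ent.insert(new Transform());
ent.insert(new Velocity());

// the handle is a smart pointer into a map of type-erased components,
// so lookups are keyed by the component's constructor
const velocity = ent.get(Velocity);
velocity.linear = new Vec3(0.0, 1.0, 0.0);
```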

Eventually, I want to switch to the DoD-style ECS that everyone else uses, for performance reasons. For now, I just want to keep it simple and flexible while I experiment with fringe ECS features like relations, without worrying about performance or the rigid data structures of a traditional ECS.

In TypeScript, entities can be wrapped with handles that are meant to be used with OOP-style polymorphism, for example:

```ts
/* player.ts */
@Kind
class Player {
  @Component transform: Transform;
  @Component velocity: Velocity;
  @Component name: Name;
}

const player = Player.wrap(playerEnt);
player.velocity.linear = new Vec3(0.0, 1.0, 0.0);
```

These handles are called Kinds. They make it really easy/ergonomic to represent certain behaviors. These are basically refcounted handles to entities that let you specialize behavior if an entity has a certain set of components.

I'm not sure if Kinds are useful yet. In some sense, they're obviously more magic than they're worth, but it has already been pretty fun and productive to prototype stuff using traditional OOP-style polymorphism, so who knows.

Networking

Networking is mixed server/client and peer-to-peer via WebRTC (on native too). In cases where peers can't communicate directly, messages route through the backend as a sort of poor-man's TURN server.

Even though it's peer-to-peer, all actions in the game simulation are validated by a special peer called the authority. The authority is expected to run in a trusted environment, like on a server you own. The peer-to-peer communication is only used to get user input to your client as soon as possible, which is valuable for fighting games (like the one I'm building).
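Roughly, per input, the flow looks something like this (a sketch; these types and message shapes are hypothetical):

```ts
/* hypothetical sketch -- these types and the message shapes are illustrative */
interface PlayerInput { buttons: number; stick: [number, number]; }
interface Peer { send(msg: unknown): void; }
declare const session: { frame: number; peers: Peer[]; authority: Peer };

function onLocalInput(input: PlayerInput) {
  // ship the raw input to the other peers immediately, so they can see
  // (and predict against) it with the lowest possible latency
  for (const peer of session.peers) {
    peer.send({ kind: 'input', frame: session.frame, input });
  }

  // the authority receives the same input, and only the state it validates
  // and broadcasts is treated as canonical
  session.authority.send({ kind: 'input', frame: session.frame, input });
}
```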

The backend can be horizontally scaled by starting up more authorities and segmenting the world by region.

The whole networking system is very foundational to the engine, and there are a lot of moving parts that I haven't talked about at all, like interest tracking, etc. I plan on writing more about this in the future, but for now I am very "prototype phase" and nothing is really set in stone.

That's all

I'm building a game on top of this engine, so I'm hacking all this together pretty quickly. I wanted to dump some of the "emergent design" that I'm hacking towards over time, though, and share some screenshots.

Eventually, I plan on refactoring the renderer and other bits, and will likely post separately about those efforts.

👋🙂