
Building IronCanvas: Learning Geometry and WebAssembly Through a Browser-Based Graphics Experiment

Published May 16, 2026 · Rust · WebAssembly · Graphics Programming

I recently started building IronCanvas, a small Rust and WebAssembly project focused on understanding how geometric transformations work inside the browser. The live development version is available at vertex.caydenlunt.com.

The project is intentionally simple right now. It is not a full rendering engine or a GPU-accelerated graphics pipeline yet. The goal at this stage is much more foundational: understand how vertices, transformations, browser rendering, and WebAssembly memory interaction actually work before adding more advanced graphics concepts on top.

Why I started this project

One thing I have been increasingly interested in is moving closer to the lower-level side of software engineering. A lot of modern development abstracts away what is happening underneath, which is useful for productivity but can also make it harder to understand how software actually works internally.

Graphics programming forces you to confront those details directly.

Even simple geometry operations require thinking about:

  • coordinate systems
  • trigonometry
  • memory layout
  • data structures
  • browser rendering behavior
  • mathematical transformations

IronCanvas started as a way to explore those ideas in a controlled and understandable environment.

What IronCanvas currently does

Right now, IronCanvas defines a rectangle as a collection of vertices stored in Rust:

Rust vertex structure
// #[repr(C)] gives the struct a predictable, C-compatible memory layout
// so the vertex data can be read straight out of WebAssembly memory.
#[repr(C)]
#[derive(Copy, Clone, Debug)]
pub struct Vertex {
    pub position: [f32; 3],
}

Each vertex stores:

  • X position
  • Y position
  • Z position

The browser reads those vertices directly from WebAssembly memory and renders the shape onto an HTML canvas.
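
To make that concrete, here is a minimal sketch of how the Rust side might expose that buffer. The function names and the fixed four-vertex rectangle are illustrative assumptions, not IronCanvas's actual exports; the Vertex struct is the one shown above.

// Sketch only: assumed names, not the project's real API.
// Uses the #[repr(C)] Vertex struct shown earlier.

// A hypothetical rectangle stored in Rust-owned WebAssembly memory.
static RECTANGLE: [Vertex; 4] = [
    Vertex { position: [-0.5, -0.5, 0.0] },
    Vertex { position: [ 0.5, -0.5, 0.0] },
    Vertex { position: [ 0.5,  0.5, 0.0] },
    Vertex { position: [-0.5,  0.5, 0.0] },
];

// Pointer to the first f32 of the vertex buffer, valid inside
// WebAssembly linear memory.
#[no_mangle]
pub extern "C" fn vertex_buffer_ptr() -> *const f32 {
    RECTANGLE.as_ptr() as *const f32
}

// Number of f32 values in the buffer (4 vertices x 3 components).
#[no_mangle]
pub extern "C" fn vertex_buffer_len() -> usize {
    RECTANGLE.len() * 3
}

From JavaScript, a view such as new Float32Array(wasmExports.memory.buffer, ptr, len) would then give the rendering code a zero-copy window onto those floats, assuming the module exports its linear memory under the usual memory name.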

The project also exposes a small Rust API that allows JavaScript to:

  • access vertex buffers
  • reset geometry
  • rotate the rectangle
  • redraw the updated coordinates in real time

When the rectangle rotates, JavaScript is not calculating the transformation itself. Instead, it calls into Rust:

JavaScript to WebAssembly call
wasmExports.rotate_current_rectangle_z(degrees);

Rust performs the mathematical transformation, updates the underlying vertex data, and the browser redraws the result.

Learning transformations from first principles

The most interesting part of the project so far has been implementing rotation manually instead of relying on a graphics library.

The rectangle rotates around its center point using trigonometric functions:

2D rotation formula
x' = x * cos(theta) - y * sin(theta)
y' = x * sin(theta) + y * cos(theta)

A 45-degree rotation starts by converting the angle to radians:

Rust angle conversion
let angle = 45.0_f32.to_radians();
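
Putting the pieces together, the rotate_current_rectangle_z call shown earlier might bottom out in something like the sketch below. The storage, names, and four-vertex layout here are assumptions for illustration, not IronCanvas's actual implementation.

use std::cell::RefCell;

// Sketch only: names and storage are assumptions.
// Vertex is the #[repr(C)] struct shown earlier.

thread_local! {
    // WebAssembly (without threads) runs single-threaded, so a
    // thread-local RefCell is a simple way to hold mutable state.
    static CURRENT_RECTANGLE: RefCell<[Vertex; 4]> = RefCell::new([
        Vertex { position: [-0.5, -0.5, 0.0] },
        Vertex { position: [ 0.5, -0.5, 0.0] },
        Vertex { position: [ 0.5,  0.5, 0.0] },
        Vertex { position: [-0.5,  0.5, 0.0] },
    ]);
}

// Rotate the vertices about the Z axis, around their shared center.
fn rotate_z_around_center(vertices: &mut [Vertex], degrees: f32) {
    let theta = degrees.to_radians();
    let (sin, cos) = theta.sin_cos();

    // Average the X and Y coordinates to find the center of rotation.
    let n = vertices.len() as f32;
    let cx = vertices.iter().map(|v| v.position[0]).sum::<f32>() / n;
    let cy = vertices.iter().map(|v| v.position[1]).sum::<f32>() / n;

    for v in vertices.iter_mut() {
        // Translate to the origin, apply the 2D rotation formula,
        // then translate back. Z is untouched by a Z-axis rotation.
        let x = v.position[0] - cx;
        let y = v.position[1] - cy;
        v.position[0] = x * cos - y * sin + cx;
        v.position[1] = x * sin + y * cos + cy;
    }
}

// The entry point JavaScript would call as
// wasmExports.rotate_current_rectangle_z(degrees).
#[no_mangle]
pub extern "C" fn rotate_current_rectangle_z(degrees: f32) {
    CURRENT_RECTANGLE.with(|rect| {
        rotate_z_around_center(&mut rect.borrow_mut()[..], degrees);
    });
}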

That small amount of math changes how you think about rendering. Shapes stop feeling like static objects and start feeling like collections of points being transformed mathematically through space.

Even though the project is visually simple right now, it already introduces concepts used in real rendering systems:

  • vertex transformations
  • coordinate spaces
  • geometry manipulation
  • transformation logic
  • runtime rendering updates

Why Rust and WebAssembly

I chose Rust and WebAssembly because I wanted the geometry and transformation logic to live outside the browser runtime itself.

Rust owns the data structures and calculations. JavaScript acts more like a visualization and interaction layer.

That separation creates a cleaner architecture:

Application boundary
Browser UI
    ->
JavaScript Canvas Rendering
    ->
Rust WebAssembly Geometry Logic

The project is not using WebGPU yet, and rendering still happens through the 2D canvas API. The transformation math currently runs CPU-side in Rust/WASM rather than being GPU-accelerated.

That is intentional for this stage of the project. Before introducing shaders, GPU buffers, or rendering pipelines, I wanted to understand the lower-level mathematical and memory concepts first.

More than just drawing a rectangle

One thing I wanted to avoid was treating this as a throwaway experiment. Even though the rendering itself is small, the project still includes:

  • Rust unit tests (a sketch of one follows this list)
  • integration tests
  • GitHub Actions CI
  • automated validation
  • GitHub Pages deployment
  • browser-side testing
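
As one example of what that looks like in practice, a unit test over the rotation sketch from earlier might check that a 90-degree turn lands a known point exactly where the formula says it should, within floating-point tolerance. Again, this is illustrative rather than a test from the project itself.

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn rotating_90_degrees_moves_a_corner_to_the_expected_spot() {
        // Two points whose center is the origin.
        let mut verts = [
            Vertex { position: [1.0, 0.0, 0.0] },
            Vertex { position: [-1.0, 0.0, 0.0] },
        ];

        rotate_z_around_center(&mut verts, 90.0);

        // (1, 0) rotated 90 degrees about the origin should land at (0, 1).
        assert!((verts[0].position[0] - 0.0).abs() < 1e-5);
        assert!((verts[0].position[1] - 1.0).abs() < 1e-5);
    }
}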

That matters to me because software engineering is not just about making something work once. It is about building systems that are maintainable, testable, and repeatable.

Where the project goes next

IronCanvas is still very early, but the direction is becoming clearer.

The next steps are likely:

  • matrix-based transformations
  • triangle rendering
  • projection math
  • separating model space from screen space
  • introducing WebGPU
  • eventually moving toward true GPU-backed rendering

For now, though, the project is serving its original purpose well: helping me better understand how mathematical geometry becomes something visible inside the browser through Rust, WebAssembly, and direct transformation logic.