About bert133.dev
This page explains what bert133.dev actually is, and why it's built the way it is. If you just want to start coding, head to Using Coder. If you're curious about the boxes behind the curtain — read on.
What is it?
bert133.dev is a single small computer that the team owns and runs. It lives wherever the team is — the shop during build season, the pit during competition. It hosts everything the programming team needs, all behind one login:
- Zitadel — accounts and single sign-on (the "Sign in with BERT" button).
- Forgejo — where the team's code lives.
- Coder — the cloud-style workspaces that give you VS Code in a browser.
- Grafana — dashboards at https://dash.bert133.dev for cluster health now, and season-long robot metrics later.
- A Kubernetes cluster (k3s) under the hood, managed by Flux, that keeps all of the above running and updates them automatically when we push changes to the configuration repo.
The machine itself runs Ubuntu Desktop, not a stripped-down server install. That means in addition to the web services above, the box can be plugged into a monitor and used as a real workstation — for robot simulation, a pit dashboard, driver-station debugging, anything that wants a graphical desktop near the field.
Why is it built this way?
Three constraints shaped almost every design decision.
1. It has to work offline, in the pit
Competition venues are loud RF environments and the public internet is often unreachable from the pit. The server is built so that everything important works with the WAN cable unplugged:
- A local DNS and DHCP server (`dnsmasq`) runs on the host. It is authoritative for `bert133.dev` on the LAN, so `forge.bert133.dev` and `code.bert133.dev` resolve even with no internet.
- TLS certificates are issued via Let's Encrypt's DNS-01 challenge during brief windows when the WAN is up. Once issued, they're valid for 90 days — easily covering a multi-week competition trip.
- Container images are pulled and cached ahead of time. Forgejo, Coder, and the workspace base images all live on local disk by the time we leave the shop.
- The cluster's CoreDNS is configured to forward `bert133.dev` queries back to the host's local resolver, so even cluster-internal traffic doesn't depend on outside name servers.
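To make the host-side piece concrete, a `dnsmasq` configuration for this kind of offline-authoritative setup could look roughly like the fragment below. The interface name and DHCP range are illustrative assumptions, not values from the team's actual config:

```
# /etc/dnsmasq.conf — illustrative sketch only
interface=enp3s0                         # LAN-facing NIC (assumed name)
local=/bert133.dev/                      # answer bert133.dev locally, never forward upstream
address=/bert133.dev/10.1.33.10          # resolve every *.bert133.dev name to the server
dhcp-range=10.1.33.100,10.1.33.200,12h   # assumed pool, clear of reserved FRC addresses
```

The `local=` line is what makes the zone work with the WAN unplugged: queries for the domain are answered from local data instead of being forwarded to unreachable upstream servers.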
Once bootstrapped, the system can run for weeks without a working internet connection.
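The cluster-side half, the CoreDNS forwarding, might look like this Corefile fragment (the forward target is assumed to be the host's LAN address, however the fragment is actually wired into the cluster's CoreDNS config):

```
bert133.dev:53 {
    forward . 10.1.33.10
}
```

Any pod resolving a `bert133.dev` name then gets the same answers as a laptop on the pit LAN, with no dependency on outside resolvers.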
2. A student on a school iPad must be able to program the robot
Most students on the team have a school-issued iPad and may not own a laptop. We can't ask them to install Java, WPILib, or any other heavyweight toolchain — they don't have the device permissions, and the iPad couldn't run it anyway. So:
- Coder runs in the cluster and serves browser-based VS Code (`code-server`) over HTTPS.
- A pre-built `frc` workspace template uses the WPILib `roborio-cross-ubuntu` image, which already has Java, Gradle, the cross-compiler, and the simulation runtime installed.
- Coder's external-auth feature handles Forgejo Git credentials automatically — students authorize once, and `git clone`/`git push` just work inside the workspace, no SSH keys to manage.
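Coder templates are defined in Terraform. The fragment below is a hypothetical sketch of what a template like this looks like, not the team's actual `frc` template; the image tag, resource names, and wiring are all assumptions:

```terraform
terraform {
  required_providers {
    coder  = { source = "coder/coder" }
    docker = { source = "kreuzwerker/docker" }
  }
}

data "coder_workspace" "me" {}

# The agent runs inside the workspace container and serves
# code-server to the student's browser.
resource "coder_agent" "main" {
  os   = "linux"
  arch = "amd64"
}

# Workspace container built on the WPILib cross-compilation image.
resource "docker_container" "workspace" {
  count = data.coder_workspace.me.start_count
  image = "wpilib/roborio-cross-ubuntu:2024-22.04"   # assumed tag
  name  = "coder-${data.coder_workspace.me.name}"
  env   = ["CODER_AGENT_TOKEN=${coder_agent.main.token}"]
}
```

The point of the template is that the heavy toolchain lives in the image: every workspace starts from the same pre-baked environment, so "set up your dev machine" becomes "tap the workspace button".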
The result: a student opens Safari on their iPad, taps Sign in with BERT, opens their workspace, and is editing real robot code within a minute. The heavy lifting happens on the server; the iPad is just a screen and keyboard.
3. The network has to comply with FRC field rules
FRC requires the field network to follow the form 10.TE.AM.x/24 — for Team 133, that's 10.1.33.0/24. The server is the LAN gateway and DHCP authority for that subnet:
- The team number lives in a single Ansible variable; the subnet, server IP (`10.1.33.10`), and DHCP pool are all derived from it. Forking teams change one number.
- The LAN interface is pinned with NetworkManager so it keeps `10.1.33.10` even if the shop wifi tries to hand out a different address.
- Reserved FRC addresses — `.1` for the radio, `.2` for the roboRIO, `.4` for the Driver Station — are kept out of the DHCP pool by default.
- `nftables` handles NAT and forwarding so devices on the team subnet can reach the internet when it's available, but unsolicited inbound traffic from outside is dropped.
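In nftables terms, that NAT-plus-firewall posture boils down to something like the following sketch, where `wan0` and `lan0` stand in for the real interface names:

```
# Illustrative ruleset, not the deployed one
table inet nat {
  chain postrouting {
    type nat hook postrouting priority srcnat; policy accept;
    oifname "wan0" masquerade    # NAT team-subnet traffic out the WAN side
  }
}
table inet filter {
  chain forward {
    type filter hook forward priority filter; policy drop;
    ct state established,related accept   # replies to outbound connections
    iifname "lan0" accept                 # LAN devices may initiate anywhere
  }
}
```

The `policy drop` on the forward chain is what gives the "unsolicited inbound traffic is dropped" behavior: only LAN-originated flows and their return traffic are allowed through.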
This means the same server that hosts our code and workspaces can be the network backbone in the pit — no separate router required.
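The one-variable derivation works because the FRC scheme encodes the team number directly in the subnet. A minimal sketch of how the values could be derived in Ansible group vars (the variable names here are assumptions, not the repo's actual names):

```yaml
team_number: 133
# 10.TE.AM.0/24 — split the team number into its "TE" and "AM" halves
lan_prefix: "10.{{ team_number // 100 }}.{{ team_number % 100 }}"
lan_subnet: "{{ lan_prefix }}.0/24"      # 10.1.33.0/24 for team 133
server_ip: "{{ lan_prefix }}.10"
dhcp_pool_start: "{{ lan_prefix }}.100"
dhcp_pool_end: "{{ lan_prefix }}.200"
```

A forking team edits `team_number` and every address in the deployment follows.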
Observability
A kube-prometheus-stack deployment (Prometheus, Alertmanager, node-exporter, kube-state-metrics) runs in the monitoring namespace and feeds Grafana at https://dash.bert133.dev. Sign in with the same BERT account; your role in Zitadel decides whether you land as a Viewer, Editor, or Admin. Prometheus keeps 180 days of history on a 30 GiB volume.
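The retention and sizing described above map onto the chart's values in a straightforward way; a hypothetical Helm values fragment for kube-prometheus-stack, shown as a sketch rather than the repo's actual file:

```yaml
prometheus:
  prometheusSpec:
    retention: 180d            # the stated history window
    storageSpec:
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 30Gi    # the stated volume size
```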
Today, the dashboards cover cluster health only — node CPU and memory, pod counts, namespace state. The longer-term plan is to ingest the robot's WPILib DataLog files after each match and graph performance trends across the whole competition season. That's the niche Grafana fills for us: AdvantageScope is the right tool for picking apart a single match in detail; Grafana is the right tool for asking "is our shooter getting more accurate over the last six events?"
Robot metrics ingestion isn't built yet — that's a future project.
Where is the source?
Everything is open. The infrastructure repo lives at https://forge.bert133.dev/bert/server; this site's source lives at https://forge.bert133.dev/bert/site. If your team wants to fork any of it, you're welcome to.
Where does this site come from?
The site you're reading is itself a small piece of the same system: a static site built with Seite, packaged into a container by Forgejo Actions, and rolled out to the cluster by Flux. When a mentor edits a page and pushes to main, the new version is live in about three minutes — same loop the rest of the infrastructure uses.
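Forgejo Actions uses GitHub-Actions-compatible workflow syntax, so the loop above can be sketched as a workflow skeleton. The file path, runner label, and step layout are assumptions, and the actual Seite build and image-push commands are deliberately omitted here:

```yaml
# .forgejo/workflows/site.yml — illustrative skeleton only
on:
  push:
    branches: [main]
jobs:
  build-and-push:
    runs-on: docker            # assumed runner label
    steps:
      - uses: actions/checkout@v4
      - name: Build site and container image
        run: |
          # build the static site with Seite, then build and push the
          # container image; exact commands live in the site repo
          echo "build and push"
```

Once the new image tag lands in the registry, Flux reconciles it into the cluster — the workflow never touches the cluster directly.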