Connecting Multiple Clusters with OPNsense and WireGuard

In one of the tech stacks I manage, we operate three separate clusters running our servers and infrastructure services. Two are cloud-based, and one is an on-premises setup. Despite differences in virtualization, operating systems, and tooling, all of these networks are managed using OPNsense.

The goal is simple: from my laptop, these clusters should feel as if they are part of my local home network. No exposed management ports, no public SSH access, and no mental overhead when moving between environments.

This post explains how OPNsense, paired with WireGuard, makes that possible.

Why OPNsense

If you’re not familiar with OPNsense, it’s an open source operating system focused on routing, firewalling, and network security. It’s BSD-based, but installation and management are no more complex than for a typical Linux distribution, thanks to a very approachable web interface.

What makes OPNsense special is not that it does one thing well, but that it does almost everything you would expect from enterprise-grade network equipment, without being locked to proprietary hardware.

Some of the features we rely on heavily include:

  • Stateful firewalling, NAT, and DHCP
  • Multiple WAN and LAN interfaces
  • WAN failover and load balancing
  • VLANs and subnet segmentation
  • Traffic shaping and QoS
  • High availability support
  • IDS and IPS via Suricata
  • VPN support, including WireGuard
  • DNS services with filtering and blacklisting
  • Centralized monitoring and reporting
  • User and access management
  • A full API for automation (a quick example follows this list)
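
As a taste of the API, the official documentation's own example queries the firmware status endpoint using a key/secret pair generated in the web interface. A minimal sketch; the hostname is a placeholder:

    # Key and secret come from System > Access > Users in the GUI.
    # -k skips certificate verification for a self-signed cert.
    curl -k -u "API_KEY:API_SECRET" \
        https://opnsense.example.internal/api/core/firmware/status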

All of this comes from free, open source software. With a reasonably powerful box, around the 500€ mark, you can achieve feature parity with network appliances that cost an order of magnitude more. Performance, of course, still depends on the hardware you run it on.

Why This Matters in Practice

My background in networking comes largely from production environments. Live streaming, competitive gaming events, and broadcast setups are unforgiving. If latency spikes, packets drop, or routing fails, everything breaks immediately.

Over the years, this meant dealing with:

  • Redundant WAN links
  • Traffic shaping to prevent streams from affecting gameplay
  • Strict network isolation between production, admin, and public LANs
  • VPN access for remote operators
  • Constant reconfiguration under time pressure

Before OPNsense, this usually meant juggling routers, managed switches, additional services running on separate machines, and far too many configuration interfaces. Once we consolidated routing, firewalling, VPNs, and policies into OPNsense, things became dramatically simpler.

With proper routing and traffic policies in place, streaming traffic stopped interfering with game latency. Downloads no longer impacted live broadcasts. Even in events with a single WAN connection, clean subnet separation and bandwidth limits kept the network stable. Once configured, everything became almost plug-and-play.

Extending That Model to Clusters

That same model translates extremely well to server infrastructure.

Instead of exposing SSH or management services to the public internet, all clusters are fully private. SSH is only available on local interfaces. External access happens exclusively through WireGuard tunnels terminated on OPNsense.

Each cluster lives in its own routed subnet. There is no Layer 2 bridging between sites. Everything is pure Layer 3 routing, which keeps the design simple, predictable, and debuggable.
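To make that concrete, the address plan looks something like this. The ranges here are hypothetical, but the principle is real: every site gets its own non-overlapping RFC 1918 block, so routing stays unambiguous.

    Home LAN              10.10.0.0/24
    Cloud cluster A       10.20.0.0/24
    Cloud cluster B       10.30.0.0/24
    On-premises cluster   10.40.0.0/24
    WireGuard tunnels     10.99.0.0/24   (point-to-point addresses)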

From my laptop, I can reach any server as if it were on my home LAN. When I’m away, I can route traffic through any of the clusters as if they were my personal VPN endpoints.
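On the laptop side this is a plain wg-quick configuration. A sketch using the hypothetical subnets above and placeholder keys; the AllowedIPs line is what decides between "reach the clusters" and "route everything through a cluster":

    [Interface]
    # Tunnel address assigned to the laptop (hypothetical)
    Address = 10.99.0.10/32
    PrivateKey = <laptop-private-key>

    [Peer]
    # OPNsense at cloud cluster A (placeholder endpoint and key)
    PublicKey = <cluster-a-public-key>
    Endpoint = vpn-a.example.com:51820
    # Split tunnel: only cluster traffic uses the tunnel.
    # Use 0.0.0.0/0 instead to send all traffic via this site.
    AllowedIPs = 10.20.0.0/24, 10.99.0.0/24
    # Keeps NAT mappings alive while roaming
    PersistentKeepalive = 25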

This also means that clusters can communicate with each other securely over private addressing, without ever exposing services to the public internet.

WireGuard as the Glue

WireGuard is not the star of the show here, but it is the glue that makes this setup practical.

Each OPNsense instance acts as a WireGuard peer. Routing rules define which subnets are reachable through which tunnel. From the operating system’s perspective, this is just another network interface.
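On OPNsense the peers are defined in the web interface (VPN > WireGuard), but the result is equivalent to an ordinary peer block. A sketch of how cluster A learns about cluster B, again with hypothetical addresses and placeholder keys:

    # Peer entry on the OPNsense box at cluster A.
    [Peer]
    PublicKey = <cluster-b-public-key>
    Endpoint = vpn-b.example.com:51820
    # Cluster B's LAN plus its tunnel address. AllowedIPs doubles
    # as the routing table for the tunnel: packets for these
    # prefixes are encrypted to this peer.
    AllowedIPs = 10.30.0.0/24, 10.99.0.2/32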

One important benefit is security through silence. WireGuard runs over UDP and simply does not respond to unauthenticated packets. Unlike SSH, there is no banner to grab and nothing to fingerprint: without valid keys, the handshake never completes and the service is effectively invisible.

Placing SSH behind WireGuard means:

  • No exposed SSH ports
  • No brute force attempts
  • No scanning noise
  • No need for complex bastion hosts
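
On the servers themselves, this boils down to a couple of sshd_config lines. A minimal sketch using the hypothetical addressing from earlier; the point is simply that sshd never binds to a publicly reachable address:

    # /etc/ssh/sshd_config
    # Listen only on the private LAN address (hypothetical).
    # WireGuard terminates on OPNsense, which routes tunnel
    # traffic to this address over the local subnet.
    ListenAddress 10.20.0.15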

From an operational standpoint, this significantly reduces attack surface while also simplifying access.

The Learning Curve That Wasn’t

I expected a steep learning curve. In reality, if you already understand basic networking concepts like routing, subnets, and firewall rules, OPNsense is surprisingly easy to pick up.

And if you don’t, the documentation is solid, the community is active, and there is no shortage of real-world examples. At this point, even an LLM like GPT can guide you through most configurations with reasonable accuracy, provided you understand what you are asking for.

Conclusion

This setup works well for me because our three cluster networks were designed as a whole, with non-overlapping addresses deliberately assigned across multiple subnets and VLANs. It fits the way I work and the environments I manage.

Using OPNsense with WireGuard allows me to:

  • Treat remote clusters as local networks
  • Eliminate exposed management services
  • Centralize routing, security, and access control
  • Keep infrastructure simple, private, and predictable
  • Get rid of jump boxes that were previously needed

It’s not magic, and it’s not new technology. It’s simply a clean application of solid networking principles using the right tools.

If you manage multiple environments and find yourself juggling VPNs, bastions, and firewall rules, this approach is well worth exploring.