Proxmox multi-NIC configuration

Running Proxmox with multiple network connections

I have a Proxmox cluster of 3 nodes with 2x 10GbE interfaces per machine: one for my internal backend network (cluster communication and storage replication) and one for my normal LAN traffic.

But I want both sides to be accessible from several other subnets and VLANs, and normally you can only have a single gateway. How do you configure this?

My personal take on this!

This is my personal take on this configuration; you might not agree or might need a different setup, and that’s up to you! In my case I mainly want to prevent traffic from running through my router when that’s not really required, since the machine has a network card in 2 subnets.

Normally you’d say “but then you don’t need a gateway”. Yes, that’s true, but I also want several other networks and VLANs to be able to reach the machine through either of the 2 interfaces, and then it needs to know how to talk back. This config accomplishes that!

This article is as much personal documentation as a public explanation! 😉

My setup

As I said, each of my nodes has 2x 10GbE network interfaces, enp4s0 and enp6s0.

enp4s0 is my normal network, 10.10.128.0/24
enp6s0 is my backend network, 10.10.10.0/24

Each node uses the same final IP octet (11, 12, 13) in both subnets.

The need for 2 routing tables

Each interface needs to keep its own routing table. To accomplish this we edit /etc/iproute2/rt_tables and add 2 new routing tables:

#
# reserved values
#
255     local
254     main
253     default
0       unspec
#
# local
#
#1      inr.ruhep
1 vmbr1
2 vmbr0

The order of these matters; in my case I want the machine to use vmbr1 first and then vmbr0, so I added the 2 new routing tables in that order. You can use any name you wish; I found it easiest to give them the same name as the bridge I use them with.
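If you have several nodes, the two entries can also be appended idempotently instead of editing the file by hand. A small sketch — for safety it works on a temporary copy here; point RT_TABLES at /etc/iproute2/rt_tables on a real node to apply it:

```shell
# Work on a copy for safety; set RT_TABLES=/etc/iproute2/rt_tables
# on a real node to apply the change there.
RT_TABLES="$(mktemp)"
cp /etc/iproute2/rt_tables "$RT_TABLES" 2>/dev/null || true

# Append each table mapping only if that exact line is not already present.
grep -qx '1 vmbr1' "$RT_TABLES" || echo '1 vmbr1' >> "$RT_TABLES"
grep -qx '2 vmbr0' "$RT_TABLES" || echo '2 vmbr0' >> "$RT_TABLES"
```

Running it twice adds nothing the second time, so it is safe to include in a provisioning script.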

My /etc/network/interfaces file

Easiest is to just share my configuration file for node1 and then give some clarification. Make sure to add this configuration to all applicable nodes, changing the values to their IPs.

auto lo
iface lo inet loopback

iface enp4s0 inet manual

iface enp6s0 inet manual

iface enp7s0 inet manual


auto vmbr0
iface vmbr0 inet static
        address 10.10.10.11/24
        bridge-ports enp6s0
        bridge-stp off
        bridge-fd 0
        post-up ip route add 10.10.10.0/24 dev vmbr0 src 10.10.10.11 table vmbr0
        post-up ip route add default via 10.10.10.254 dev vmbr0 table vmbr0
        post-up ip rule add from 10.10.10.11/32 table vmbr0
        post-up ip rule add to 10.10.10.11/32 table vmbr0
#Backend network


auto vmbr1
iface vmbr1 inet static
        address 10.10.128.11/24
        bridge-ports enp4s0
        bridge-stp off
        bridge-fd 0
        post-up ip route add default via 10.10.128.254 dev vmbr1
        post-up ip route add 10.10.128.0/24 dev vmbr1 src 10.10.128.11 table vmbr1
        post-up ip route add default via 10.10.128.254 dev vmbr1 table vmbr1
        post-up ip rule add from 10.10.128.11/32 table vmbr1
        post-up ip rule add to 10.10.128.11/32 table vmbr1
#LAN network

What’s happening?

First we enable the interfaces and then add 2 Linux bridges, vmbr0 and vmbr1.

Since we want each to have its own routing, we add some “post-up” commands that install the routes (in my case “default”, as in 0.0.0.0/0, via the gateway on that network) into the per-interface tables, along with “ip rule” entries so that traffic from or to each address looks up its own table.

Since the OS itself also needs to know which interface to use by default (even though both interfaces have their own default gateway), we need to add a default gateway without a routing table argument too, so it lands in the main table. In the above config that’s done in the vmbr1 bridge using the first route add.

Results

The end result can be checked using

ip route

default via 10.10.128.254 dev vmbr1
10.10.10.0/24 dev vmbr0 proto kernel scope link src 10.10.10.11
10.10.128.0/24 dev vmbr1 proto kernel scope link src 10.10.128.11

but we can also check the interface/bridge-specific routing tables

ip route list table vmbr0

default via 10.10.10.254 dev vmbr0
10.10.10.0/24 dev vmbr0 scope link src 10.10.10.11
ip route list table vmbr1

default via 10.10.128.254 dev vmbr1
10.10.128.0/24 dev vmbr1 scope link src 10.10.128.11

and also check the routing rules

ip rule show

0:      from all lookup local
32762:  from all to 10.10.128.11 lookup vmbr1
32763:  from 10.10.128.11 lookup vmbr1
32764:  from all to 10.10.10.11 lookup vmbr0
32765:  from 10.10.10.11 lookup vmbr0
32766:  from all lookup main
32767:  from all lookup default
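To see which path the kernel will actually pick for a given flow, you can also ask it directly with ip route get; given an explicit source address the lookup honours the “from” rules above. A sketch — the addresses in the comments are from my nodes, the last command is a generic form that works on any host:

```shell
# On one of the nodes, with an explicit source address each reply
# should leave via its own gateway:
#   ip route get 1.1.1.1 from 10.10.10.11    # expect: via 10.10.10.254 dev vmbr0
#   ip route get 1.1.1.1 from 10.10.128.11   # expect: via 10.10.128.254 dev vmbr1

# Generic form, runnable anywhere, showing the resolved route for a destination:
ip route get 127.0.0.1
```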

Bonus, fix DNS

If you want the machine to know that it is available in 2 subnets and which DNS names it has there, update /etc/hosts accordingly.

I have it configured as follows:

127.0.0.1 localhost.localdomain localhost
10.10.10.11 node1.quinmanage.lan node1
10.10.128.11 node1.quindorian.lan node1

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
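To verify the entries are picked up, getent hosts queries the same resolver path the OS uses. A sketch — the node names in the comments are from my setup, the localhost query is a generic stand-in that works on any machine:

```shell
# On one of the nodes (names from my /etc/hosts above):
#   getent hosts node1.quinmanage.lan    # should print 10.10.10.11
#   getent hosts node1.quindorian.lan    # should print 10.10.128.11

# Generic check that works anywhere:
getent hosts localhost
```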

The end result

The end result is an OS that by default uses vmbr1 to reach anything that isn’t directly available without routing on the vmbr0 or vmbr1 interface.

That’s how it works by default of course, but with the above changes clients outside of either subnet can also access the machine and get a route back, through the same interface the traffic came in on! You can of course also set this up with dedicated routing rules for each subnet, but in a complex network that’s a lot more work.

Again, this is specific to my needs and normally this shouldn’t be required, especially if you don’t have any routed subnets or VLANs!

Credits

This article and this forum post were helpful in piecing the above together!
