This setup is based on Proxmox, with an OPNsense VM hosted on the Proxmox host itself. The OPNsense VM protects Proxmox, provides a firewall, a private LAN with DHCP/DNS for the VMs, and offers an IPsec connection into that LAN to access all VMs/Proxmox hosts which are not NATed. The server is a typical Hetzner server, so there is only one NIC but multiple IPs and/or subnets on it.
Due to the cluster blocker with the PCI passthrough setup, this is my alternative:
- Proxmox server with one NIC (eth0)
- 3 public IPs; IP2/IP3 are routed by MAC in the datacenter (to eth0)
- KVM bridged setup (eth0 has no IP, vmbr0 is bridged to eth0 and carries IP1)
- A private LAN on vmbr30, 10.1.7.0/24
- A Shorewall on the Proxmox server
To better outline the setup, I created this drawing (not sure it's perfect, tell me what to improve):
Textual description:
Network interfaces on Proxmox
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet manual
pre-up sleep 2
auto vmbr0
# docs at
iface vmbr0 inet static
address External-IP1(148.x.y.a)
netmask 255.255.255.192
# Our gateway is reachable via Point-to-Point tunneling
# put the Hetzner gateway IP address here twice
gateway DATACENTER-GW1
pointopoint DATACENTER-GW1
# Virtual bridge settings
# this one is bridging physical eth0 interface
bridge_ports eth0
bridge_stp off
bridge_fd 0
pre-up sleep 2
bridge_maxwait 0
metric 1
# Add routing for up to 4 dedicated IP's we get from Hetzner
# You need to
# opnsense
up route add -host External-IP2(148.x.y.b)/32 dev vmbr0
# rancher
up route add -host External-IP3(148.x.y.c)/32 dev vmbr0
# Assure local routing of private IPv4 IP's from our
# Proxmox host via our firewall's WAN port
up ip route add 10.1.7.0/24 via External-IP2(148.x.y.b) dev vmbr0
auto vmbr30
iface vmbr30 inet static
address 10.1.7.2
netmask 255.255.255.0
bridge_ports none
bridge_stp off
bridge_fd 0
pre-up sleep 2
metric 1
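To sanity-check this interfaces file, a couple of standard iproute2/bridge-utils commands (just a sketch of what to look at, nothing setup-specific) show the bridge membership and the routes the host actually installed:
# Which ports are enslaved to which bridge (vmbr0 should show eth0, vmbr30 none)
brctl show
# The same without bridge-utils installed
bridge link show
# Addresses on all interfaces in brief form
ip -br addr
# Host routes installed for the extra Hetzner IPs
ip route show dev vmbr0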
Shorewall on Proxmox
interfaces
wan eth0 detect dhcp,tcpflags,nosmurfs
wan vmbr0 detect bridge
lan vmbr30 detect bridge
policies:
lan lan ACCEPT - -
fw all ACCEPT - -
all all REJECT INFO -
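Not shown above: Shorewall also needs the zones used here declared. A minimal /etc/shorewall/zones sketch matching the wan/lan/fw names from the interfaces file would be:
# /etc/shorewall/zones
fw   firewall
wan  ipv4
lan  ipv4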
OPNsense
- WAN is ExternalIP2, attached to vmbr0 with MAC-XX
- LAN is 10.1.7.1, attached to vmbr30
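For reference, in the Proxmox VM configuration this corresponds to two virtual NICs on the two bridges; a sketch of the relevant lines (VM ID, NIC model and the LAN MAC are placeholders) could look like:
# /etc/pve/qemu-server/<VMID>.conf (excerpt)
# WAN: attached to vmbr0, OPNsense holds External-IP2 here
net0: virtio=MAC-XX,bridge=vmbr0
# LAN: attached to vmbr30, OPNsense is 10.1.7.1 here
net1: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr30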
What is working:
- The basic setup works fine: I can access OPNsense on IP2, Proxmox on IP1 and the rancher VM on IP3 - that is the part that does not need any routing.
- I can connect with an IPsec mobile client to OPNsense, which offers access to the LAN (10.1.7.0/24) from a virtual IP range 172.16.0.0/24.
- I can access 10.1.7.1 (OPNsense) while connected with OpenVPN.
- I can access 10.1.7.11 / 10.1.7.151 from OPNsense (10.1.7.1) (shell).
- I can access 10.1.7.11 / 10.1.7.1 from another VM (10.1.7.151) (shell).
What's not working:
a) Connecting to 10.1.7.11, 10.1.7.151 or 10.1.7.2 from the IPsec client.
b) [SOLVED in UPDATE 1] Connecting to 10.1.7.2 from 10.1.7.1 (OPNsense).
c) It seems like I have asymmetric routing: while I can access e.g. 10.1.7.1:8443, I see a lot of log entries (see the tcpdump sketch after this list).
d) IPsec LAN sharing should only need a rule in the IPSEC chain like "from * to LAN ACCEPT" - but that did not work for me, I had to add "from * to * ACCEPT".
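To narrow down c), capturing on both bridges usually shows quickly whether replies leave on a different path than the requests arrived on. A sketch with stock tcpdump (substitute the actual virtual IP your IPsec client got from 172.16.0.0/24):
# On the Proxmox host: traffic from/to the IPsec client on the LAN bridge
tcpdump -ni vmbr30 net 172.16.0.0/24
# ... and on the WAN bridge, to see whether replies escape the wrong way
tcpdump -ni vmbr0 net 172.16.0.0/24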
Questions:
I) Of course I want to fix a), b), c) and d), probably starting with understanding c) and d).
II) Would it help, in this setup, to add a second NIC?
III) Could it be an issue that I activated net.ipv4.ip_forward on the Proxmox host (shouldn't it rather be routed?)
When I have got this straightened out, I would love to publish a comprehensive guide on how to run OPNsense as an appliance with a private network on Proxmox, exposing some services to the outside world using HAProxy + Let's Encrypt, and accessing the private LAN using IPsec.
UPDATE 1:
Removing
up ip route add 10.1.7.0/24 via IP2 dev vmbr0
from vmbr0 on Proxmox fixed the issue that Proxmox could neither access 10.1.7.0/24 nor be accessed from the LAN network.
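For clarity, this is the only change to the vmbr0 stanza; the rest stays as posted above:
# Assure local routing of private IPv4 IP's from our
# Proxmox host via our firewall's WAN port
# REMOVED (UPDATE 1): this route broke Proxmox <-> 10.1.7.0/24 connectivity
# up ip route add 10.1.7.0/24 via External-IP2(148.x.y.b) dev vmbr0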
UPDATE 2:
I created an updated/changed setup where PCI passthrough is used. The goals are the same - it reduces the complexity - see here.
Some direly needed rough basics first:
Furthermore, you speak of vmbr0/1/30, but only vmbr0 and vmbr30 are shown in your config. Shorewall does not matter for your VM connectivity (iptables is layer 3; ebtables would be layer 2, for contrast; your frames should just fly past Shorewall, never reaching the HV's IP stack but going to the VMs directly. Shorewall is just a frontend that uses iptables in the background).
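Whether bridged frames are pushed through iptables (and thus Shorewall) at all is controlled by the bridge-netfilter sysctls; a quick check (the keys only exist while the br_netfilter module is loaded):
# 0 = bridged frames bypass iptables/Shorewall, 1 = they are run through it
sysctl net.bridge.bridge-nf-call-iptables
# Is the module providing these knobs loaded at all?
lsmod | grep br_netfilter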
With that out of the way:
Usually you don't need any routing on the Proxmox BRIDGES. A bridge is a switch, as far as you are concerned. vmbr0 is a virtual external bridge which you linked with eth0 (thus creating an in-kernel link between a physical NIC and your virtual interface, so packets can flow at all). The bridge could also run without any IP attached to it. But to keep the HV accessible, an external IP is usually attached to it. Otherwise you'd have to set up your firewall gateway plus a VPN tunnel, give vmbr30 an internal IP, and then you could access the internal IP of the HV from the internet after establishing a tunnel connection - but that's just for illustration purposes for now.
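As an illustration of "the bridge could also run without an IP": a minimal stanza for that case (just to show the bridge acts purely as a switch, not a suggestion to change your working config) would be:
auto vmbr0
iface vmbr0 inet manual
bridge_ports eth0
bridge_stp off
bridge_fd 0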
Your IPsec connectivity issue sounds an awful lot like a misconfigured VPN, but mobile IPsec is also often a pain to work with due to protocol implementation differences. OpenVPN works a LOT better, but you should know your basics about PKI and certificates to implement it. Plus, if OPNsense is as counter-intuitive as pfSense when it comes to OpenVPN, you are possibly in for a week of stabbing in the dark. For pfSense there is an installable OpenVPN config export package which makes life quite a bit easier; I don't know whether it is available for OPNsense, too.
It does not look so much like what you call asymmetric routing, but rather like a firewall issue (concerning the first picture). For your tunnel firewall (interface IPSEC or interface OpenVPN on OPNsense, depending on which tunnel you use), just leave it at IPv4 any:any to any:any; by the definition of the tunnel itself you only get into the LAN net anyway, and OPNsense will automatically send the packets out of the LAN interface only (second picture).
net.ipv4.ip_forward = 1
enables routing in the kernel on the Linux OS's interfaces where you activated it. You can then do NATting via iptables, which in theory makes it possible to get into your LAN via your external HV IP on vmbr0 - but that's not something you should achieve by accident, so you might be able to disable forwarding again without losing connectivity, at least to the HV. I am unsure about your extra routes for the other external IPs, but these should be configurable the same way from within OPNsense directly (create the point-to-point links there; the frames will transparently flow through vmbr0 and eth0 to the Hetzner gateway), which would be cleaner. Also, you should not make the rancher VM directly accessible externally, thus bypassing your firewall; I doubt this is what you want to achieve. Rather, put the external IP onto the OPNsense (as a virtual IP of type "IP alias"), set up 1:1 NAT from IP3 to the internal IP of the rancher VM, and do the firewalling via OPNsense.
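If you want to try switching forwarding off again, a sketch (the sysctl.d filename is just an example; test from a second session so you don't lock yourself out):
# Runtime only, reverts on reboot
sysctl -w net.ipv4.ip_forward=0
# Make it persistent once you are sure nothing breaks
echo "net.ipv4.ip_forward = 0" > /etc/sysctl.d/99-no-forward.conf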
Some ASCII art of how things possibly should look, from what I can discern from your information so far. For the sake of brevity only interfaces are used, no distinction is made between physical and virtual servers, and no point-to-point links are shown.
If you want to firewall the HV via OPNsense, too, these would be the steps to do so while maintaining connectivity: