```
...
Aug 31 22:29:31 pve-sg pve-firewall[1727]: status update error: iptables_restore_cmdlist: Try `iptables-restore -h' or 'iptables-restore --help' for more information.
Aug 31 22:29:41 pve-sg pve-firewall[1727]: status update error: iptables_restore_cmdlist: Try `iptables-restore -h' or 'iptables-restore --help' for more information.
Aug 31 22:29:51 pve-sg pve-firewall[1727]: status update error: iptables_restore_cmdlist: Try `iptables-restore -h' or 'iptables-restore --help' for more information.
... (the same error repeats every 10 seconds)
```
If you work with ipset directly, or run `iptables -vnL`, the commands themselves fail as well:

```
ipset v7.10: Cannot open session to kernel.
command 'ipset save' failed: exit code 1
...
iptables v1.8.9 (legacy): can't initialize iptables table...: Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.
```
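The errors above mean userspace can no longer talk to the netfilter side of the kernel, which typically points at missing/unloadable kernel modules (for example after a kernel upgrade without a reboot). A hypothetical diagnostic, not from the original post, that checks whether the relevant modules are loaded and which iptables binary is in use:

```shell
#!/bin/sh
# Hypothetical diagnostic: check whether the iptables kernel modules are
# loaded and which iptables binary is in the PATH.
STATUS=""
for mod in ip_tables iptable_filter x_tables; do
    if grep -q "^${mod} " /proc/modules 2>/dev/null; then
        STATUS="${STATUS}${mod}=loaded "
    else
        STATUS="${STATUS}${mod}=missing "    # try: modprobe ${mod}
    fi
done
echo "${STATUS}"
# On Debian-based hosts, iptables may also be the nft shim instead of legacy:
iptables --version 2>/dev/null || echo "iptables not found in PATH"
```

If the modules are missing and `modprobe` cannot find them, the on-disk modules no longer match the running kernel and a reboot into the new kernel is usually the fix.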
```
conn net-net1
    # script arguments: tunnel id, then the GCP-side and local IPs of the point-to-point link
    leftupdown="/var/lib/strongswan/ipsec-vti.sh 0 169.254.232.77/32 169.254.232.78/32"
    left=10.0.8.4          # in case of NAT, set to the internal IP, e.g. 10.164.0.6
    leftid=10.0.8.4
    leftsubnet=0.0.0.0/0
    leftauth=psk
    right={public IP of the GCP VPN gateway}
    rightid=%any
    rightsubnet=0.0.0.0/0
    rightauth=psk
    type=tunnel
    # auto=add   - means strongSwan won't try to initiate it
    # auto=start - means strongSwan will try to establish the connection as well
    # Note that Google Cloud will also try to initiate the connection
    auto=start
    # dpdaction=restart - means strongSwan will try to reconnect if Dead Peer
    # Detection spots a problem. Change to 'clear' if needed.
    dpdaction=restart
    mark=%unique
    # mark=1001
    # reqid=1001
```
```
conn net-net2
    # same as above, with the second tunnel's id and link IPs
    leftupdown="/var/lib/strongswan/ipsec-vti.sh 1 169.254.155.53/32 169.254.155.54/32"
    left=10.0.8.4          # in case of NAT, set to the internal IP, e.g. 10.164.0.6
    leftid=10.0.8.4
    leftsubnet=0.0.0.0/0
    leftauth=psk
    right={public IP of the GCP VPN gateway}
    rightid=%any
    rightsubnet=0.0.0.0/0
    rightauth=psk
    type=tunnel
    # auto=add   - means strongSwan won't try to initiate it
    # auto=start - means strongSwan will try to establish the connection as well
    # Note that Google Cloud will also try to initiate the connection
    auto=start
    # dpdaction=restart - means strongSwan will try to reconnect if Dead Peer
    # Detection spots a problem. Change to 'clear' if needed.
    dpdaction=restart
    mark=%unique
    # mark=1002
    # reqid=1002
```
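Both conn blocks authenticate with `leftauth=psk`/`rightauth=psk`, so the shared secret has to be present in `/etc/ipsec.secrets` as well. A minimal sketch (the key itself is whatever you configured on the GCP Cloud VPN tunnel):

```
# /etc/ipsec.secrets
# local-id  remote-id : PSK "secret"
10.0.8.4 %any : PSK "replace-with-your-pre-shared-key"
```

After editing, `ipsec rereadsecrets` (or restarting strongSwan) reloads the file, and `ipsec statusall` shows whether `net-net1`/`net-net2` reach the ESTABLISHED state.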
```
LOCAL_IF="${PLUTO_INTERFACE}"
VTI_IF="vti${VTI_TUNNEL_ID}"
# GCP's MTU is 1460, so it's hardcoded
GCP_MTU="1460"
# IPsec overhead is 73 bytes, so we need to compute the new MTU
VTI_MTU=$((GCP_MTU-73))
```
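The MTU arithmetic can be sanity-checked in isolation; this tiny sketch recomputes the value the same way the script does:

```shell
# Recompute the VTI MTU the same way ipsec-vti.sh does.
GCP_MTU=1460        # MTU used by GCP's VPN gateway
ESP_OVERHEAD=73     # encapsulation overhead assumed by the script
VTI_MTU=$((GCP_MTU - ESP_OVERHEAD))
echo "${VTI_MTU}"   # prints 1387
```

With a tunnel up, you can also confirm the value end to end with a do-not-fragment ping, e.g. `ping -M do -s 1359 169.254.232.77` (1387 minus 20 bytes of IPv4 header and 8 bytes of ICMP header).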
```
case "${PLUTO_VERB}" in
    up-client)
        ${IP} link add ${VTI_IF} type vti local ${PLUTO_ME} remote ${PLUTO_PEER} okey ${PLUTO_MARK_OUT_ARR[0]} ikey ${PLUTO_MARK_IN_ARR[0]}
        ${IP} addr add ${VTI_LOCAL} remote ${VTI_REMOTE} dev "${VTI_IF}"
        ${IP} link set ${VTI_IF} up mtu ${VTI_MTU}
        # If you would like to use VTI for policy-based routing, you should
        # take care of the routes yourself, e.g.
        #if [[ "${PLUTO_PEER_CLIENT}" != "0.0.0.0/0" ]]; then
        #    ${IP} r add "${PLUTO_PEER_CLIENT}" dev "${VTI_IF}"
        #fi
        ;;
    down-client)
        ${IP} tunnel del "${VTI_IF}"
        ;;
esac
```
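For reference, this is roughly what the up-client branch expands to for tunnel 0, with hypothetical concrete values filled in (the GCP gateway IP 203.0.113.10 and the mark 1001 are placeholders, not from the original). `IP` is set to `echo ip` so the sketch only prints the commands; creating the interface for real requires root and `IP="ip"`:

```shell
#!/bin/sh
# Dry-run: print the commands the up-client branch would run for tunnel 0.
IP="echo ip"                 # change to IP="ip" to actually execute (needs root)
VTI_IF="vti0"
PLUTO_ME="10.0.8.4"          # local address (left=)
PLUTO_PEER="203.0.113.10"    # placeholder for the GCP gateway public IP
MARK="1001"                  # okey/ikey fwmark; mark=%unique assigns one per SA
VTI_LOCAL="169.254.232.78/32"
VTI_REMOTE="169.254.232.77/32"
VTI_MTU="1387"               # 1460 - 73

${IP} link add ${VTI_IF} type vti local ${PLUTO_ME} remote ${PLUTO_PEER} okey ${MARK} ikey ${MARK}
${IP} addr add ${VTI_LOCAL} remote ${VTI_REMOTE} dev "${VTI_IF}"
${IP} link set ${VTI_IF} up mtu ${VTI_MTU}
```

The okey/ikey values tie the vti interface to the IPsec SA's fwmark, which is what lets multiple tunnels to the same peer coexist.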
```
...
10.0.1.0/24 via 169.254.232.77 dev vti0 proto bird metric 32
...
169.254.232.77 dev vti0 proto kernel scope link src 169.254.232.78
169.254.232.77 dev vti0 proto bird scope link metric 32
...
```
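Once the tunnels are up and bird has exchanged routes with GCP, the state above can be re-checked with a couple of guarded commands (a sketch; the exact output depends on your bird configuration, and both tools may be absent on other machines, so every step degrades gracefully):

```shell
#!/bin/sh
# Sanity checks for BGP-over-VTI routing state; all commands are guarded
# so the script also runs on a host without the tunnel or without bird.
if command -v ip >/dev/null 2>&1; then
    ip route show proto bird 2>/dev/null || true   # routes learned over BGP
fi
if command -v birdc >/dev/null 2>&1; then
    birdc show protocols 2>/dev/null || true       # BGP session state
fi
CHECKED="yes"
echo "route checks finished"
```

`ip route get 10.0.1.1` is another quick probe: it should resolve via `vti0` once the BGP route is installed.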