
init container - iptables-nft-restore failed #12658

Open
dbones opened this issue Jan 23, 2025 · 5 comments
Labels
kind/bug A bug triage/pending This issue will be looked at on the next triage meeting

Comments

@dbones

dbones commented Jan 23, 2025

Kuma Version

2.9.3

Describe the bug

deploying the demo application (kumahq/kuma-demo)

As Kubernetes sets up the pods, the init container on both (redis and demo-app) fails with a non-zero exit code:

COMMIT
# [iptables] [1/5] /usr/sbin/iptables-nft-restore --noflush /tmp/iptables-rules.4137472968.txt
# [iptables] [1/5] restoring failed: exit status 4: iptables-nft-restore v1.8.7 (nf_tables): , line 9: RULE_APPEND failed (Operation not supported): rule in chain PREROUTING, line 9: RULE_APPEND failed (Operation not supported): rule in chain OUTPUT, line 9: RULE_APPEND failed (Operation not supported): rule in chain OUTPUT, line 9: RULE_APPEND failed (Operation not supported): rule in chain OUTPUT
# [iptables] [1/5] will try again in 2s
# [iptables] [2/5] /usr/sbin/iptables-nft-restore --noflush /tmp/iptables-rules.4137472968.txt
# [iptables] [2/5] will try again in 2s
# [iptables] [2/5] restoring failed: exit status 4: iptables-nft-restore v1.8.7 (nf_tables): , line 9: RULE_APPEND failed (Operation not supported): rule in chain PREROUTING, line 9: RULE_APPEND failed (Operation not supported): rule in chain OUTPUT, line 9: RULE_APPEND failed (Operation not supported): rule in chain OUTPUT, line 9: RULE_APPEND failed (Operation not supported): rule in chain OUTPUT
# [iptables] [3/5] /usr/sbin/iptables-nft-restore --noflush /tmp/iptables-rules.4137472968.txt
# [iptables] [3/5] restoring failed: exit status 4: iptables-nft-restore v1.8.7 (nf_tables): , line 9: RULE_APPEND failed (Operation not supported): rule in chain PREROUTING, line 9: RULE_APPEND failed (Operation not supported): rule in chain OUTPUT, line 9: RULE_APPEND failed (Operation not supported): rule in chain OUTPUT, line 9: RULE_APPEND failed (Operation not supported): rule in chain OUTPUT
# [iptables] [3/5] will try again in 2s
# [iptables] [4/5] /usr/sbin/iptables-nft-restore --noflush /tmp/iptables-rules.4137472968.txt
# [iptables] [4/5] restoring failed: exit status 4: iptables-nft-restore v1.8.7 (nf_tables): , line 9: RULE_APPEND failed (Operation not supported): rule in chain PREROUTING, line 9: RULE_APPEND failed (Operation not supported): rule in chain OUTPUT, line 9: RULE_APPEND failed (Operation not supported): rule in chain OUTPUT, line 9: RULE_APPEND failed (Operation not supported): rule in chain OUTPUT
# [iptables] [4/5] will try again in 2s
# [iptables] [5/5] /usr/sbin/iptables-nft-restore --noflush /tmp/iptables-rules.4137472968.txt
# [iptables] [5/5] restoring failed: exit status 4: iptables-nft-restore v1.8.7 (nf_tables): , line 9: RULE_APPEND failed (Operation not supported): rule in chain PREROUTING, line 9: RULE_APPEND failed (Operation not supported): rule in chain OUTPUT, line 9: RULE_APPEND failed (Operation not supported): rule in chain OUTPUT, line 9: RULE_APPEND failed (Operation not supported): rule in chain OUTPUT
Error: failed to setup transparent proxy: unable to restore iptables rules: /usr/sbin/iptables-nft-restore failed

To Reproduce

kuma 2.9.3 has been installed onto the K3s Cluster (Arm CPU) using GitOps

helm chart settings:

  defaultNamespace: kuma-system
  helm:
    chart: kuma
    repo: https://kumahq.github.io/charts
    version: 2.9.3
    values:
      controlPlane:
        mode: zone
      egress:
        enabled: false

deploy application

kubectl apply -f https://bit.ly/3Kh2Try

Expected behavior

The application should be deployed and added to the mesh with a working sidecar.

Additional context (optional)

Cluster (screenshot omitted)

Overview (screenshot omitted)

Pod (screenshot omitted)

@dbones dbones added kind/bug A bug triage/pending This issue will be looked at on the next triage meeting labels Jan 23, 2025
@bartsmykla
Contributor

@dbones It seems like the kernel on your cluster nodes doesn't fully support iptables (maybe some modules are missing?). If you're using non-standard hardware (like an Orange PI), make sure the OS installed on these machines includes full iptables support.
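The modules-missing hypothesis can be checked directly on a node. The sketch below is illustrative (not Kuma's own check); the module names `xt_REDIRECT` and `xt_TCPMSS` are the ones mentioned later in this thread:

```shell
# Hedged sketch: check whether the xtables modules the transparent proxy
# relies on (names taken from later comments in this thread) are loaded.
check_module() {
  if grep -q "^$1 " /proc/modules 2>/dev/null; then
    echo "$1: loaded"
  else
    echo "$1: missing - try: sudo modprobe $1"
  fi
}
check_module xt_REDIRECT
check_module xt_TCPMSS
```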

@lukidzi
Contributor

lukidzi commented Jan 27, 2025

Closing. If @bartsmykla's suggestion doesn't help, feel free to reopen.

@lukidzi lukidzi closed this as not planned Won't fix, can't repro, duplicate, stale Jan 27, 2025
@dbones
Author

dbones commented Jan 31, 2025

Hi, this is not my best area of expertise (and from my extremely limited understanding, I think I have the correct modules now, but I'm not really sure... any help would be greatly appreciated, as I would love to test this software out for a number of other larger projects).

My setup on the Orange Pi:

  • Orange Pi 1.1.4 Jammy with Linux 5.10.110-rockchip-rk3588
  • Docker version 23.0.1, build a5ee5b1
  • no AppArmor or kernel lockdown

I have tried the following (but it did not work; I still have the same error):

Missing modules

Identified that I did not have the xt_REDIRECT and xt_TCPMSS modules loaded (and was able to load them).

Tried iptables (with nf_tables, and then legacy)

iptables --version
iptables v1.8.7 (nf_tables)
update-alternatives --display iptables
iptables - auto mode
  link best version is /usr/sbin/iptables-nft
  link currently points to /usr/sbin/iptables-nft
  link iptables is /usr/sbin/iptables
  slave iptables-restore is /usr/sbin/iptables-restore
  slave iptables-save is /usr/sbin/iptables-save
/usr/sbin/iptables-legacy - priority 10
  slave iptables-restore: /usr/sbin/iptables-legacy-restore
  slave iptables-save: /usr/sbin/iptables-legacy-save
/usr/sbin/iptables-nft - priority 20
  slave iptables-restore: /usr/sbin/iptables-nft-restore
  slave iptables-save: /usr/sbin/iptables-nft-save

Now it's set to:

iptables v1.8.7 (legacy)
iptables - manual mode
  link best version is /usr/sbin/iptables-nft
  link currently points to /usr/sbin/iptables-legacy
  link iptables is /usr/sbin/iptables
  slave iptables-restore is /usr/sbin/iptables-restore
  slave iptables-save is /usr/sbin/iptables-save
/usr/sbin/iptables-legacy - priority 10
  slave iptables-restore: /usr/sbin/iptables-legacy-restore
  slave iptables-save: /usr/sbin/iptables-legacy-save
/usr/sbin/iptables-nft - priority 20
  slave iptables-restore: /usr/sbin/iptables-nft-restore
  slave iptables-save: /usr/sbin/iptables-nft-save
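For reference, a non-root way to confirm which backend the binaries actually resolve to (the paths assume a Debian/Ubuntu layout, as in the `update-alternatives` output above):

```shell
# Hedged sketch: print which backend each iptables binary resolves to,
# mirroring the update-alternatives output above (Debian/Ubuntu paths).
show_backend() {
  path="/usr/sbin/$1"
  if [ -e "${path}" ]; then
    printf '%s -> %s\n' "$1" "$(readlink -f "${path}")"
  else
    printf '%s -> not found at %s\n' "$1" "${path}"
  fi
}
show_backend iptables
show_backend iptables-restore
# Switching backends (as done above) on Debian/Ubuntu:
#   sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
```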

Checked the CNI output (I think, and it does not look like there are any issues).

dmesg | tail -50

[989453.952414] cni0: port 4(veth276ea313) entered blocking state
[989453.952420] cni0: port 4(veth276ea313) entered disabled state
[989453.952527] device veth276ea313 entered promiscuous mode
[989453.960589] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[989453.960638] IPv6: ADDRCONF(NETDEV_CHANGE): veth276ea313: link becomes ready
[989453.960684] cni0: port 4(veth276ea313) entered blocking state
[989453.960688] cni0: port 4(veth276ea313) entered forwarding state
[989508.010817] cni0: port 4(veth276ea313) entered disabled state
[989508.012280] device veth276ea313 left promiscuous mode
[989508.012295] cni0: port 4(veth276ea313) entered disabled state
[989510.801006] cni0: port 4(veth44bb4b9c) entered blocking state
[989510.801011] cni0: port 4(veth44bb4b9c) entered disabled state
[989510.801187] device veth44bb4b9c entered promiscuous mode
[989510.809445] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[989510.809495] IPv6: ADDRCONF(NETDEV_CHANGE): veth44bb4b9c: link becomes ready
[989510.809543] cni0: port 4(veth44bb4b9c) entered blocking state
[989510.809547] cni0: port 4(veth44bb4b9c) entered forwarding state
[989518.068742] cni0: port 4(veth44bb4b9c) entered disabled state
[989518.070250] device veth44bb4b9c left promiscuous mode
[989518.070264] cni0: port 4(veth44bb4b9c) entered disabled state
[989519.919984] cni0: port 4(veth1e0f46a9) entered blocking state
[989519.919994] cni0: port 4(veth1e0f46a9) entered disabled state
[989519.920159] device veth1e0f46a9 entered promiscuous mode
[989519.926979] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[989519.927071] IPv6: ADDRCONF(NETDEV_CHANGE): veth1e0f46a9: link becomes ready
[989519.927150] cni0: port 4(veth1e0f46a9) entered blocking state
[989519.927156] cni0: port 4(veth1e0f46a9) entered forwarding state
[989573.326345] cni0: port 5(vethd5bedaca) entered blocking state
[989573.326356] cni0: port 5(vethd5bedaca) entered disabled state
[989573.326539] device vethd5bedaca entered promiscuous mode
[989573.333108] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[989573.333186] IPv6: ADDRCONF(NETDEV_CHANGE): vethd5bedaca: link becomes ready
[989573.333248] cni0: port 5(vethd5bedaca) entered blocking state
[989573.333254] cni0: port 5(vethd5bedaca) entered forwarding state
[989717.976849] cni0: port 5(vethd5bedaca) entered disabled state
[989717.978402] device vethd5bedaca left promiscuous mode
[989717.978417] cni0: port 5(vethd5bedaca) entered disabled state
[989718.176334] cni0: port 5(vethae5ca6aa) entered blocking state
[989718.176344] cni0: port 5(vethae5ca6aa) entered disabled state
[989718.176517] device vethae5ca6aa entered promiscuous mode
[989718.176589] cni0: port 5(vethae5ca6aa) entered blocking state
[989718.176634] cni0: port 5(vethae5ca6aa) entered forwarding state
[989718.185682] IPv6: ADDRCONF(NETDEV_CHANGE): vethae5ca6aa: link becomes ready
[989799.687631] cni0: port 6(vethf4e6e992) entered blocking state
[989799.687636] cni0: port 6(vethf4e6e992) entered disabled state
[989799.687765] device vethf4e6e992 entered promiscuous mode
[989799.694442] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[989799.694502] IPv6: ADDRCONF(NETDEV_CHANGE): vethf4e6e992: link becomes ready
[989799.694553] cni0: port 6(vethf4e6e992) entered blocking state
[989799.694558] cni0: port 6(vethf4e6e992) entered forwarding state

Checked with AI (because I have no idea what else to do)

It also suggested trying:

sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
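To isolate whether the REDIRECT target itself is the problem (it is the target the failing Kuma rules use), a minimal ruleset can be fed to `iptables-nft-restore` directly. The file path and ports below are illustrative, not taken from Kuma's generated rules:

```shell
# Hedged sketch: write a minimal nat ruleset using the same REDIRECT target
# that fails in the init container logs (ports/path are illustrative).
cat > /tmp/tproxy-test.txt <<'EOF'
*nat
-A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080
COMMIT
EOF
echo "wrote /tmp/tproxy-test.txt"
# On the node, as root, this should reproduce the RULE_APPEND error if the
# kernel lacks REDIRECT support under nf_tables:
#   iptables-nft-restore --noflush /tmp/tproxy-test.txt
```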

@dbones
Author

dbones commented Jan 31, 2025

Forgot to mention, here are the modules in this area I have loaded:

uname -a
lsmod | grep nft
lsmod | grep xt_
cat /proc/modules | grep nft
Linux mars-1 5.10.110-rockchip-rk3588 #1.1.4 SMP Wed Mar 8 14:50:47 CST 2023 aarch64 aarch64 aarch64 GNU/Linux
nft_limit              16384  10
nft_chain_nat          16384  8
nft_counter            16384  361
nft_compat             20480  604
nf_tables             167936  891 nft_compat,nft_counter,nft_chain_nat,nft_limit
nf_nat                 57344  6 ip6table_nat,xt_nat,nft_chain_nat,iptable_nat,xt_MASQUERADE,xt_REDIRECT
nfnetlink              16384  12 nft_compat,nfnetlink_acct,nf_conntrack_netlink,nf_tables,ip_set,nfnetlink_log
xt_TCPMSS              16384  0
xt_REDIRECT            16384  5
xt_CT                  16384  0
xt_owner               16384  0
xt_physdev             16384  34
xt_NFLOG               16384  19
xt_limit               16384  9
xt_set                 20480  8
xt_multiport           16384  5
ip_set                 40960  2 ip_set_hash_ip,xt_set
xt_statistic           16384  6
xt_nat                 16384  48
xt_MASQUERADE          16384  11
xt_mark                16384  106
xt_nfacct              16384  8
xt_addrtype            16384  29
xt_comment             16384  618
xt_conntrack           16384  72
nf_nat                 57344  6 ip6table_nat,xt_nat,nft_chain_nat,iptable_nat,xt_MASQUERADE,xt_REDIRECT
nf_conntrack          135168  7 xt_conntrack,nf_nat,xt_nat,nf_conntrack_netlink,xt_CT,xt_MASQUERADE,xt_REDIRECT
nfnetlink_acct         16384  9 xt_nfacct
nft_limit 16384 10 - Live 0x0000000000000000
nft_chain_nat 16384 8 - Live 0x0000000000000000
nft_counter 16384 361 - Live 0x0000000000000000
nft_compat 20480 604 - Live 0x0000000000000000
nf_tables 167936 891 nft_limit,nft_chain_nat,nft_counter,nft_compat, Live 0x0000000000000000
nf_nat 57344 6 ip6table_nat,xt_REDIRECT,xt_nat,xt_MASQUERADE,nft_chain_nat,iptable_nat, Live 0x0000000000000000
nfnetlink 16384 12 nfnetlink_log,ip_set,nf_conntrack_netlink,nft_compat,nf_tables,nfnetlink_acct, Live 0x0000000000000000

Contributor

Removing closed state labels due to the issue being reopened.
