
SrcAddr setting is not wired into the API #132

Closed
MatthiasValvekens opened this issue Mar 31, 2024 · 6 comments · Fixed by #137

Comments

@MatthiasValvekens

Hi, there seems to be an internal SrcAddr setting to populate update-source for neighbors that is not exposed in the K8s API. Is that intentional?

I think my use case would benefit from it. I'm trying to expose some routes to IPv6 load balancers from multiple nodes in my cluster, and to do that I'm peering with an IPv6 router. Now, the interface that is connected to that router has many IPv6 addresses (on all nodes involved), so I need to make sure that the one selected by the kernel matches the one that is used to establish the BGP session from the peer end. This leads me to the following problem:

  • In MetalLB in "regular" FRR mode, I can't seem to define two BGPPeer resources with the same neighbor address (even with mutually exclusive node selectors) because the API forbids that, so I can align the source address for at most one node.
  • With frrk8s.enabled=true, I thought I could get away with a single BGPPeer for all nodes and override the source address on a per-node basis by introducing extra FRRConfiguration resources scoped to specific nodes. But since this setting is apparently not exposed in the API, the sourceAddress feature doesn't work at all in this setup (see the sketch after this list).
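
For concreteness, here is roughly what that second setup would look like. This is a minimal sketch with placeholder addresses, ASNs, and node names; the sourceaddress field on the neighbor is hypothetical, since exposing it is exactly what this issue is asking for:

```yaml
# Per-node override attempt. The sourceaddress field is hypothetical:
# it is not in the frr-k8s API today, which is what this issue is about.
apiVersion: frrk8s.metallb.io/v1beta1
kind: FRRConfiguration
metadata:
  name: srcaddr-node-a
  namespace: frr-k8s-system
spec:
  nodeSelector:
    matchLabels:
      kubernetes.io/hostname: node-a   # scope this override to one node
  bgp:
    routers:
    - asn: 64512                       # placeholder local ASN
      neighbors:
      - address: 2001:db8::1           # placeholder IPv6 router address
        asn: 64513                     # placeholder peer ASN
        sourceaddress: 2001:db8::a     # hypothetical: not exposed today
```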

Is that something that simply hasn't been implemented yet, or is there something fundamental I'm not understanding here?

Since I control all the devices involved, I can probably work around this problem by giving extra IPs to the interface on the peer router's end, allowing me to create one BGPPeer per node in my cluster, but that feels like a hack. Thoughts?
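
In "regular" FRR mode, that workaround would look roughly like the sketch below, with placeholder values: one BGPPeer per node, each pointing at a different extra address on the same router interface, so the "same neighbor address" restriction no longer applies:

```yaml
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: peer-node-a
  namespace: metallb-system
spec:
  myASN: 64512                 # placeholder local ASN
  peerASN: 64513               # placeholder router ASN
  peerAddress: 2001:db8::1     # first address on the router interface
  sourceAddress: 2001:db8::a   # address node-a sources the session from
  nodeSelectors:
  - matchLabels:
      kubernetes.io/hostname: node-a
---
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: peer-node-b
  namespace: metallb-system
spec:
  myASN: 64512
  peerASN: 64513
  peerAddress: 2001:db8::2     # second, extra address on the same router
  sourceAddress: 2001:db8::b
  nodeSelectors:
  - matchLabels:
      kubernetes.io/hostname: node-b
```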

Thank you!

@oribon (Member) commented Apr 1, 2024

Is that intentional?

Probably not; we pretty much copied the internal structs and made the adjustments, so there's a chance we missed implementing this 😅 I don't remember any discussion about deliberately not implementing it.
Even if we do implement this, we'll need to think about whether the merging is right, that is, giving priority to a srcAddr specified by an additional config over an empty one (from MetalLB).
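
To make the merging question concrete, the situation would be something like the sketch below (again assuming a hypothetical sourceaddress neighbor field): MetalLB generates a configuration with no source address, and a user-supplied configuration scoped to one node sets one for the same neighbor:

```yaml
# Generated by MetalLB: no source address on the neighbor.
apiVersion: frrk8s.metallb.io/v1beta1
kind: FRRConfiguration
metadata:
  name: metallb-generated
spec:
  bgp:
    routers:
    - asn: 64512
      neighbors:
      - address: 2001:db8::1
        asn: 64513
---
# Added by the user for one node; sourceaddress is hypothetical.
apiVersion: frrk8s.metallb.io/v1beta1
kind: FRRConfiguration
metadata:
  name: user-override
spec:
  nodeSelector:
    matchLabels:
      kubernetes.io/hostname: node-a
  bgp:
    routers:
    - asn: 64512
      neighbors:
      - address: 2001:db8::1
        asn: 64513
        sourceaddress: 2001:db8::a   # hypothetical field
# Open question: should the merged neighbor use 2001:db8::a
# (non-empty wins over empty), or should this count as a conflict?
```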

cc'ing @fedepaol, maybe he remembers something about this (he should be back in a few days).

Thanks a lot for raising this!

@MatthiasValvekens (Author)

Thanks!

that is giving the priority to a specified srcAddr (by an additional config) over an empty one (by MetalLB).

Just as a point of clarification: MetalLB also allows specifying sourceAddress at the BGPPeer level (and that works in FRR mode), but AFAICT this setting is also not applied with frr-k8s (which would make sense, since MetalLB just generates input for the frr-k8s API, and that API doesn't expose it).

@oribon (Member) commented Apr 1, 2024

Yep, we're on the same page 😄 nothing is implemented regarding frr-k8s and source addresses.

@fedepaol (Member) commented Apr 4, 2024

I must be honest, I am quite surprised we implemented it in the MetalLB - FRR implementation.
I don't think a per-peer src address makes sense at all in a multi-node environment: it requires some flexing with node selectors to provide exactly one BGPPeer per node for the same router. It would make much more sense to have a per-node / per-instance configuration, for example. This is along the lines of the per-neighbor ASN.

That said, we can cover the src address as it is today with FRR and think about making it better.

@MatthiasValvekens (Author)

I don't think a per-peer src address makes sense at all in a multi-node environment: it requires some flexing with node selectors to provide exactly one BGPPeer per node for the same router. It would make much more sense to have a per-node / per-instance configuration, for example.

I agree, and that is exactly what I was trying to achieve by specifying a per-node FRRConfiguration with selectors.

I'm not sure what an alternative design would look like, though... I guess you could introduce a separate resource type to configure source addresses per node (or even per network interface?). Or, failing that, perhaps annotate the speaker pods with the source addresses used for each interface? (That information could then be used to set up the peer, if sufficiently stable.) Just thinking out loud.
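
Thinking out loud in YAML as well, such a separate resource type might look something like the sketch below; everything in it (group, version, kind, and fields) is invented purely for illustration:

```yaml
# Entirely hypothetical resource, sketching per-node source-address
# configuration; nothing like this exists in frr-k8s or MetalLB today.
apiVersion: frrk8s.metallb.io/v1alpha1   # invented version
kind: BGPSourceAddress                   # invented kind
metadata:
  name: node-a-sources
spec:
  nodeSelector:
    matchLabels:
      kubernetes.io/hostname: node-a
  sources:
  - neighbor: 2001:db8::1   # which peer this applies to
    address: 2001:db8::a    # becomes update-source for that peer
```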

@fedepaol (Member) commented Apr 4, 2024

I don't think a per-peer src address makes sense at all in a multi-node environment: it requires some flexing with node selectors to provide exactly one BGPPeer per node for the same router. It would make much more sense to have a per-node / per-instance configuration, for example.

I agree, and that is exactly what I was trying to achieve by specifying a per-node FRRConfiguration with selectors.

I'm not sure what an alternative design would look like, though... I guess you could introduce a separate resource type to configure source addresses per node (or even per network interface?). Or, failing that, perhaps annotate the speaker pods with the source addresses used for each interface? (That information could then be used to set up the peer, if sufficiently stable.) Just thinking out loud.

I have the very same concerns :-)
We should have a per-node config, or even something more complex, if we want different src IPs with a simple configuration (node selectors may work for ASNs, where we can admit the same or a different ASN for the same set of nodes).

That said, as I wrote above, I think it's sane to wire up the API as it is today; it's out in MetalLB and people are probably using it (even though that's a bit of a surprise to me).
